It’s alive!
First rough video of my current Voxel Renderer implementation. It’s nothing too exciting, just showing that the proof of concept actually works in real time. The flyby stays on the edge of the voxel field, since inside it’s just … trippy. This is a 512x512x512 field with 8 bytes per datapoint, so as mentioned before, we’re rendering about 134 million datapoints, or 1 GB worth of voxels.
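To make the byte math concrete: 512³ datapoints at 8 bytes each works out to exactly 1 GiB. The struct below is just a hypothetical 8-byte layout to illustrate the budget, not necessarily what the renderer actually stores per datapoint.

```cpp
#include <cstdint>
#include <cstdio>

// A hypothetical 8-byte datapoint: color + opacity, a quantized normal,
// and a material id. Illustration only, not the renderer's actual layout.
struct Voxel {
    std::uint8_t r, g, b, a;
    std::uint8_t nx, ny, nz;
    std::uint8_t material;
};
static_assert(sizeof(Voxel) == 8, "datapoint should fit in 8 bytes");

int main() {
    const unsigned long long n = 512ull * 512 * 512;  // 134,217,728 datapoints
    std::printf("%llu voxels = %llu MiB\n", n, n * sizeof(Voxel) / (1024 * 1024));
    // prints: 134217728 voxels = 1024 MiB, i.e. exactly 1 GiB
}
```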
And for those who just dropped in: remember, I’m raycasting, not just rendering cubes with OpenGL. I’m now 3 months into my doctorate – hope I’m on the right track and moving at the right tempo.
I made this capture in a hurry, so don’t mind the yellow circle around the mouse cursor. Also, rendering and capturing the output at the same time puts a heavy load on the CPU cores, which results in choppy video. I could have rendered all the frames to image files and then merged them into a movie, but that somehow felt like cheating on the real-time claim.
In the next days/weeks, I’ll be focusing on:
- Rewriting the (Amanatides & Woo) DDA algorithm to be iterative (getNextVoxel() instead of getAllVoxelsOnRay()) and incorporating interpolation – see the traversal sketch after this list
- Adding opacity effects – a compositing sketch follows below
- Finding a good way to sample regular triangle-mesh models and import them, so I can render some bunnies and dragons – see the voxelization sketch below
- Although performance is not a direct goal, implementing a basic space subdivision scheme and formatting the data for it might be interesting – one possible node layout is sketched below
- A lot of you have mentioned GPU acceleration in the comments. It would make things faster, of course, and it’s fairly easy: http://www.daimi.au.dk/~trier/?page_id=98 – but I want to experiment with the core rendering stuff first, for instance whether I can attach material info to my datapoints.
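Here’s a minimal sketch of what the iterative traversal could look like. The struct and member names are my own guesses around the getNextVoxel() idea; it assumes unit-sized voxels and a ray origin already inside the grid.

```cpp
#include <cmath>

// Sketch of the iterative Amanatides & Woo traversal: instead of materializing
// every voxel a ray crosses (getAllVoxelsOnRay), the state lives in a small
// struct and getNextVoxel() advances exactly one cell per call.
struct VoxelWalker {
    int ix, iy, iz;                    // current voxel coordinates
    int stepX, stepY, stepZ;           // -1 or +1 per axis
    double tMaxX, tMaxY, tMaxZ;        // ray parameter t at the next boundary
    double tDeltaX, tDeltaY, tDeltaZ;  // t needed to cross one whole voxel

    VoxelWalker(double ox, double oy, double oz,
                double dx, double dy, double dz) {
        setup(ox, dx, ix, stepX, tMaxX, tDeltaX);
        setup(oy, dy, iy, stepY, tMaxY, tDeltaY);
        setup(oz, dz, iz, stepZ, tMaxZ, tDeltaZ);
    }

    // Step along the axis whose boundary is nearest in t, then push that
    // axis' tMax one voxel further. Current voxel is (ix, iy, iz).
    void getNextVoxel() {
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { ix += stepX; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { iy += stepY; tMaxY += tDeltaY; }
        else                                { iz += stepZ; tMaxZ += tDeltaZ; }
    }

private:
    static void setup(double o, double d, int& i, int& step,
                      double& tMax, double& tDelta) {
        i = (int)std::floor(o);
        step = (d >= 0.0) ? 1 : -1;
        double boundary = (d >= 0.0) ? i + 1.0 : (double)i;  // next wall ahead
        tMax = (boundary - o) / d;  // d == 0 gives +inf: that axis never steps
        tDelta = std::fabs(1.0 / d);
    }
};
```

Calling getNextVoxel() in a loop until (ix, iy, iz) leaves the grid bounds visits the voxels in strict front-to-back order, which is exactly what opacity compositing needs.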
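The opacity effects mostly come down to front-to-back alpha compositing along the ray. A sketch, assuming samples arrive in the order the walker above produces them (types and names are placeholders):

```cpp
struct RGBA { float r, g, b, a; };

// Front-to-back "over" compositing: each sample contributes proportionally to
// the transparency left in front of it, and the ray can terminate early once
// it is effectively opaque. Assumes non-premultiplied sample colors.
RGBA compositeRay(const RGBA* samples, int count) {
    RGBA out = {0.0f, 0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count && out.a < 0.99f; ++i) {  // early ray termination
        float w = (1.0f - out.a) * samples[i].a;  // weight of this sample
        out.r += w * samples[i].r;
        out.g += w * samples[i].g;
        out.b += w * samples[i].b;
        out.a += w;
    }
    return out;
}
```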
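For getting meshes in, the crudest thing that could possibly work is to scatter sample points over each triangle (fine enough relative to the voxel size) and mark the voxels they land in. A sketch with illustrative names, assuming a flat occupancy array of gridDim³ bytes; a robust importer would use proper triangle/box overlap tests instead:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static float edgeLen(const Vec3& u, const Vec3& v) {
    float dx = u.x - v.x, dy = u.y - v.y, dz = u.z - v.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Mark every voxel touched by a regular barycentric sampling of a triangle.
// Crude surface voxelization: good enough for bunnies, not watertight.
void voxelizeTriangle(const Vec3& a, const Vec3& b, const Vec3& c,
                      float voxelSize, int gridDim,
                      std::vector<std::uint8_t>& occupancy) {
    float longest = std::max({edgeLen(a, b), edgeLen(b, c), edgeLen(c, a)});
    // Subdivide so neighboring samples are closer than half a voxel.
    int n = std::max(1, (int)std::ceil(longest / (0.5f * voxelSize)));
    for (int i = 0; i <= n; ++i) {
        for (int j = 0; j <= n - i; ++j) {
            float u = (float)i / n, v = (float)j / n, w = 1.0f - u - v;
            Vec3 p = { u * a.x + v * b.x + w * c.x,
                       u * a.y + v * b.y + w * c.y,
                       u * a.z + v * b.z + w * c.z };
            int gx = (int)std::floor(p.x / voxelSize);
            int gy = (int)std::floor(p.y / voxelSize);
            int gz = (int)std::floor(p.z / voxelSize);
            if (gx < 0 || gy < 0 || gz < 0 ||
                gx >= gridDim || gy >= gridDim || gz >= gridDim) continue;
            occupancy[((std::size_t)gz * gridDim + gy) * gridDim + gx] = 1;
        }
    }
}
```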
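And on space subdivision: I haven’t committed to a scheme, but to give an idea of the kind of data formatting involved, here’s a bare-bones sparse octree node of the sort commonly used to skip empty regions during traversal (purely illustrative):

```cpp
#include <cstdint>

// Bare-bones sparse octree node: children of non-empty octants are packed
// contiguously in a flat node array, so empty space costs nothing to skip.
struct OctreeNode {
    std::uint32_t childOffset;  // index of the first child in the node array
    std::uint8_t  childMask;    // bit i set => octant i has a non-empty child
    std::uint8_t  isLeaf;       // leaves index into the raw voxel data instead
    std::uint16_t padding;      // keep the node at a compact 8 bytes
};
static_assert(sizeof(OctreeNode) == 8, "node should stay compact");
```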