Efficient rendering using Sparse Voxel Octrees

I’ve been reading a lot lately about using voxels in CG. I’ve always been fascinated by ways of moving away from the traditional polygon-based pipeline, and things are really starting to look good for voxels.

Sampling all geometric and texture data on a regular grid theoretically allows for unique content: no more repeating textures, pieces of scenery, … Artists could basically create a scene and add little customizations to every object without sacrificing rendering budget, as long as you have a good way of streaming the right data to the GPU. Interesting work has been done by Cyril Crassin on GigaVoxels, a paper that demonstrates a working implementation of the voxels-in-a-tree concept.
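
To make "voxels in a tree" a bit more concrete, here is a minimal sketch (my own, not the actual GigaVoxels node format) of how a sparse voxel octree node is often laid out: a child mask records which of the eight octants contain anything, so empty space is never stored, and a brick handle (purely hypothetical here) would point at the chunk of voxel data that gets streamed to the GPU on demand.

    // Minimal sparse voxel octree node sketch (not the actual GigaVoxels layout).
    // Children of a node are packed contiguously in a node pool; childMask says
    // which of the eight octants actually exist, so empty space costs nothing.
    #include <bit>      // std::popcount (C++20)
    #include <cstdint>

    struct SvoNode {
        uint32_t firstChild; // index of this node's first existing child in the pool
        uint8_t  childMask;  // bit i set -> octant i has a child
        uint8_t  leafMask;   // bit i set -> that child is a leaf voxel
        uint16_t brickId;    // hypothetical handle to a streamed brick of voxel data
    };

    // Pool index of the child in octant `octant` (0..7), assuming existing
    // children are stored densely in octant order.
    inline uint32_t childIndex(const SvoNode& n, int octant) {
        uint32_t lowerChildren = n.childMask & ((1u << octant) - 1u);
        return n.firstChild + std::popcount(lowerChildren);
    }

The nice thing about such a packed layout is that finding a child only takes a popcount on the mask, exactly the kind of bit-level operation Carmack alludes to in the quote further down.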

I think it’s a nice way to introduce some form of ray casting (and all the benefits it brings) into a modern real-time graphics pipeline, instead of the traditional approach of splatting polygons onto a canvas and using several shaders to fake effects that you get automatically when shooting rays. It would also simplify a lot of algorithms if you could treat geometry and (for example) texture information as living on the same grid at the same resolution.
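
As a rough illustration of what "shooting rays" means here, the sketch below walks a ray through a plain voxel grid with a 3D DDA (in the spirit of Amanatides & Woo). The actual SVO papers traverse the octree hierarchically so that large empty regions are skipped in a single step, but the inner loop has the same flavour. The isFilled callback and the grid layout are placeholders of my own.

    #include <cmath>
    #include <functional>

    struct Vec3 { float x, y, z; };

    // Walks the ray origin + t*dir through a gridSize^3 voxel grid and returns
    // the first filled voxel it meets. `isFilled` stands in for whatever lookup
    // the real voxel structure provides.
    bool castRay(Vec3 origin, Vec3 dir, int gridSize,
                 const std::function<bool(int, int, int)>& isFilled,
                 int& hitX, int& hitY, int& hitZ)
    {
        int x = (int)std::floor(origin.x);
        int y = (int)std::floor(origin.y);
        int z = (int)std::floor(origin.z);
        const int stepX = dir.x > 0 ? 1 : -1;
        const int stepY = dir.y > 0 ? 1 : -1;
        const int stepZ = dir.z > 0 ? 1 : -1;

        // t at which the ray crosses the first voxel boundary on each axis,
        // and how much t grows per voxel stepped on that axis.
        auto firstT = [](float o, float d, int s) {
            if (d == 0.0f) return INFINITY;
            float boundary = (s > 0) ? std::floor(o) + 1.0f : std::floor(o);
            return (boundary - o) / d;
        };
        float tMaxX = firstT(origin.x, dir.x, stepX);
        float tMaxY = firstT(origin.y, dir.y, stepY);
        float tMaxZ = firstT(origin.z, dir.z, stepZ);
        const float tDeltaX = dir.x != 0.0f ? std::fabs(1.0f / dir.x) : INFINITY;
        const float tDeltaY = dir.y != 0.0f ? std::fabs(1.0f / dir.y) : INFINITY;
        const float tDeltaZ = dir.z != 0.0f ? std::fabs(1.0f / dir.z) : INFINITY;

        while (x >= 0 && x < gridSize && y >= 0 && y < gridSize && z >= 0 && z < gridSize) {
            if (isFilled(x, y, z)) { hitX = x; hitY = y; hitZ = z; return true; }
            // Advance along the axis whose next voxel boundary is closest in t.
            if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; tMaxX += tDeltaX; }
            else if (tMaxY < tMaxZ)             { y += stepY; tMaxY += tDeltaY; }
            else                                { z += stepZ; tMaxZ += tDeltaZ; }
        }
        return false; // left the grid without hitting anything
    }

With an octree, the same walk descends into finer levels near surfaces and pops back up over empty regions, which is where the "sparse" part pays off.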

In 2011, NVIDIA Research also published a paper on Sparse Voxel Octrees (link), following a demo at I3D. Some of the code was published here. Interesting read!

Also, a quote from Carmack in a recent interview on id Tech 6:

 It’s interesting in that the algorithms would be something that, it’s almost unfortunate in the aspect that these algorithms would take great advantage of simpler bit-level operations in many cases and they would wind up being implemented on this 32-bit floating point operation-based hardware. Hardware designed specifically for sparse voxel ray casting would be much smaller and simpler and faster than a general purpose solution.