Voxel Renderer: Spheres and Boundaries

Over the last few days, I've been testing just how much my current renderer implementation can take. The implementation was started from scratch and is based on raycasting (i.e. interpolating discrete values along a line) through a huge data grid.
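
For the curious, the core idea boils down to something like the sketch below. This is not my actual code; the DataGrid type, the single scalar per data point and the fixed-step marching are purely illustrative.

    #include <cstddef>
    #include <vector>

    // Illustrative only: a dense grid of scalar data points, addressed by
    // integer coordinates (the real data points carry more information).
    struct DataGrid {
        std::size_t dim;                // grid is dim x dim x dim
        std::vector<double> values;     // dim * dim * dim samples
        double at(std::size_t x, std::size_t y, std::size_t z) const {
            return values[(z * dim + y) * dim + x];
        }
    };

    // March a ray (origin o, direction d) through the grid with a fixed step and
    // accumulate the values it passes over. A real raycaster interpolates between
    // neighbouring data points instead of grabbing the nearest one.
    double integrateAlongRay(const DataGrid& g,
                             double ox, double oy, double oz,
                             double dx, double dy, double dz,
                             double step, double maxT) {
        double sum = 0.0;
        for (double t = 0.0; t < maxT; t += step) {
            double px = ox + t * dx, py = oy + t * dy, pz = oz + t * dz;
            if (px < 0 || py < 0 || pz < 0 ||
                px >= g.dim || py >= g.dim || pz >= g.dim)
                continue;               // sample falls outside the grid
            sum += step * g.at(static_cast<std::size_t>(px),
                               static_cast<std::size_t>(py),
                               static_cast<std::size_t>(pz));
        }
        return sum;
    }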

At the moment there is no optimization for quick free-space skipping, nor any of the fancy hierarchies (kd-trees, neighbour pointers, …) one needs for, say, GPU-assisted raycasting on Sparse Voxel Octrees, as done in the fantastic GigaVoxels paper by Crassin, or in the excellent SVO implementation from Laine at NVIDIA, who was kind enough to post the source code in a Google Code project as well.

The goal at the moment is not to get the highest performance. Without any disrespect: I'm no industry R&D researcher, so the focus lies elsewhere. This is a PhD research project, so the emphasis is on getting better insight into the matter, while avoiding very implementation-specific (but incredibly clever) compression schemes, streaming implementations, and so on.

I'll be focusing on how different datasets can contain more information than the usual RGB, opacity and normal info. How can we do Level of Detail optimally? With hierarchies you implicitly get some Level of Detail, but you have to preprocess these ancestor clusters, so I'll have to take a look at those algorithms as well.
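
To make that last point a bit more concrete: the most naive form of such preprocessing is a mipmap-style reduction, where every parent cluster simply averages its 2x2x2 children. A hypothetical sketch, reusing the illustrative DataGrid from above; the actual hierarchy papers do considerably more than this.

    // Hypothetical: build one coarser level of detail by averaging 2x2x2 blocks
    // of data points. A full hierarchy would repeat this until a single root
    // cluster remains.
    DataGrid downsample(const DataGrid& fine) {
        DataGrid coarse;
        coarse.dim = fine.dim / 2;
        coarse.values.resize(coarse.dim * coarse.dim * coarse.dim);
        for (std::size_t z = 0; z < coarse.dim; ++z)
            for (std::size_t y = 0; y < coarse.dim; ++y)
                for (std::size_t x = 0; x < coarse.dim; ++x) {
                    double sum = 0.0;
                    for (std::size_t k = 0; k < 2; ++k)
                        for (std::size_t j = 0; j < 2; ++j)
                            for (std::size_t i = 0; i < 2; ++i)
                                sum += fine.at(2 * x + i, 2 * y + j, 2 * z + k);
                    coarse.values[(z * coarse.dim + y) * coarse.dim + x] = sum / 8.0;
                }
        return coarse;
    }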

Of course, somewhere in the next months I'm going to hit a barrier and start doing some GPU-assisted work. Raycasting on a GPU is a well-documented application, and I'm looking forward to diving into it. The fact that we're raycasting into a possibly very sparse, humongous data grid complicates matters somewhat: most research papers are written with vertex-based geometry in mind, where the GPU does the heavy lifting for lighting calculations, as in voxel cone tracing, but not for the actual rendering and compositing itself.

Something important first: whether you want to call the data points voxels or sampling volumes is up to you. I personally use the following convention:

  • A data point is a single unit of information from which the whole data grid is constructed. The only other information it carries is its position in the grid.
  • A voxel is a volume unit defined by 8 data points (for cubical and rectangular data grids). Sometimes these volumes are labeled with their smallest coordinate, where their local axis is defined (see the small sketch below).
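
To make that second convention concrete, this is how a voxel's 8 defining data points could be gathered from its smallest-corner coordinate. Again just a sketch on the illustrative DataGrid from above.

    // A voxel addressed by its smallest-corner data point (x, y, z) is defined by
    // the 8 data points on the corners of the unit cube starting there.
    void voxelCorners(const DataGrid& g,
                      std::size_t x, std::size_t y, std::size_t z,
                      double corner[8]) {
        int n = 0;
        for (std::size_t k = 0; k < 2; ++k)
            for (std::size_t j = 0; j < 2; ++j)
                for (std::size_t i = 0; i < 2; ++i)
                    corner[n++] = g.at(x + i, y + j, z + k);
    }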

Getting the terminology right is half the work, and you've got to be aware of the subtle differences when reading research papers.

  • If I render all or a portion of the data grid's cells as cubes or boxes, I'm rendering what a lot of people call voxels.
  • If I interpolate every value, however, I am not rendering the volume spanned by the voxel's data points. I'm rendering interpolated data points, integrated over a ray starting at my eye (see the trilinear sketch after this list).
  • This is where the common-knowledge understanding of voxels breaks down: just thinking out of the box (pun intended), I might not use only the samples from the 8 neighbours; maybe I'll collect all neighbours within a certain radius. Maybe I'll have good reason to do some form of importance sampling. Who knows?
  • Data points will always be data points; voxels are just a certain way to group them. What do you call a 2×2 section of a regular 2D image? I'd say a pixel block. A voxel is the 3D analogue of that.
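
For the interpolated case, the standard choice is trilinear interpolation of the 8 surrounding data points. A sketch, with boundary handling omitted and again using the illustrative DataGrid:

    // Trilinear interpolation of the 8 data points around an arbitrary position
    // inside the grid. Assumes 0 <= p < dim - 1 on every axis.
    double sampleTrilinear(const DataGrid& g, double px, double py, double pz) {
        std::size_t x = static_cast<std::size_t>(px);
        std::size_t y = static_cast<std::size_t>(py);
        std::size_t z = static_cast<std::size_t>(pz);
        double fx = px - x, fy = py - y, fz = pz - z;

        auto lerp = [](double a, double b, double t) { return a + t * (b - a); };

        double c00 = lerp(g.at(x, y,     z    ), g.at(x + 1, y,     z    ), fx);
        double c10 = lerp(g.at(x, y + 1, z    ), g.at(x + 1, y + 1, z    ), fx);
        double c01 = lerp(g.at(x, y,     z + 1), g.at(x + 1, y,     z + 1), fx);
        double c11 = lerp(g.at(x, y + 1, z + 1), g.at(x + 1, y + 1, z + 1), fx);
        return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
    }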

So I've been checking just how far I can push my current implementation when it comes to voxel field size. My current voxel data is 16 bytes per data point (using double precision). So a 256 x 256 x 256 field contains around 16.8 million data points and takes up roughly 250 MB in memory. A 512 x 512 x 512 grid contains around 134 million data points and takes up about 2 GB in system memory.
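
For reference, the arithmetic behind those numbers (and the 1024^3 case further down) is nothing more than this:

    #include <cstdio>
    #include <initializer_list>

    // Back-of-the-envelope memory footprint at 16 bytes per data point.
    int main() {
        for (unsigned long long n : {256ULL, 512ULL, 1024ULL}) {
            unsigned long long points = n * n * n;
            double gib = points * 16.0 / (1024.0 * 1024.0 * 1024.0);
            std::printf("%5llu^3 : %12llu data points, %6.2f GiB\n", n, points, gib);
        }
        return 0;
    }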

That 2 GB is still fine for a 32-bit system, but I made a small error where two of these grids were present in memory at the same time. Luckily, switching to 64-bit (and thus being able to cross the 4 GB barrier imposed by 32-bit address spaces) was as easy as editing a few compile parameters. Let's render that!

These are spheres rendered from the 512^3 voxel field, with the rays stopping at the first non-empty voxel found. For most of the spheres, the density of non-empty voxels falls off exponentially when moving away from the center, and the color follows the same falloff; by playing with the parameters you can get pretty detailed renders. These renders were made at 2000×2000 pixels and finish in about 2.4 seconds (using multiple threads). So that's no interactive framerate, but that's to be expected with this kind of massive dataset, at this resolution, and without hierarchical volumes.
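
The "stop at the first non-empty voxel" part is simply an early-out in the traversal loop. Schematically, on the naive fixed-step march from the first sketch (the real code uses a DDA line search, which I still want to make iterative, see the to-do list below):

    // Early-out variant: return the first value above the "empty" threshold that
    // the ray encounters, instead of integrating over the whole grid.
    bool firstHit(const DataGrid& g,
                  double ox, double oy, double oz,
                  double dx, double dy, double dz,
                  double step, double maxT, double emptyThreshold,
                  double& hitValue) {
        for (double t = 0.0; t < maxT; t += step) {
            double px = ox + t * dx, py = oy + t * dy, pz = oz + t * dz;
            if (px < 0 || py < 0 || pz < 0 ||
                px >= g.dim || py >= g.dim || pz >= g.dim)
                continue;
            double v = g.at(static_cast<std::size_t>(px),
                            static_cast<std::size_t>(py),
                            static_cast<std::size_t>(pz));
            if (v > emptyThreshold) {
                hitValue = v;
                return true;
            }
        }
        return false;           // the ray never hit a non-empty voxel
    }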

A 1024 x 1024 x 1024 data grid requires 16 GB of system memory just to hold the data. Not possible without a lot of thrashing to the HDD, and an excellent way to heat up a room and crash your system. With some run-length encoding in place to skip empty space, this size might shrink significantly, depending on the field's properties.
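
To give an idea of what that could look like (a hypothetical sketch, nothing like this is in the renderer yet): collapse consecutive identical samples along one axis into (count, value) runs.

    #include <cstdint>
    #include <vector>

    // Run-length encode a single row of samples: long stretches of empty space
    // collapse into a single run. This only pays off if the data really
    // contains such stretches.
    struct Run {
        std::uint32_t count;
        double value;
    };

    std::vector<Run> encodeRow(const std::vector<double>& row) {
        std::vector<Run> runs;
        for (double v : row) {
            if (!runs.empty() && runs.back().value == v)
                ++runs.back().count;
            else
                runs.push_back({1u, v});
        }
        return runs;
    }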

Next week I’ll be focusing on the following:

  • Make the DDA line search iterative: once I'm working with opacity, it's often useless to trace blocks through the whole grid, get 512-ish blocks in return, and then only use 3 of them :) (see the traversal sketch after this list)
  • Work on importing some standard graphics models in a volume data format.
  • Implement a basic GUI with a window to render to and some keyboard controls to move the camera between renders. I'm sick and tired of editing start parameters.
  • Get some kind of interactive rendering going on a small voxel field (128^3, perhaps) to give a demonstration soon.
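
For the first point, this is roughly the kind of iterative grid traversal I have in mind, in the spirit of the classic Amanatides & Woo voxel DDA. A sketch only: entry-point clipping and a lot of robustness details are omitted.

    #include <cmath>
    #include <cstddef>
    #include <limits>

    // Step cell by cell along a ray through a dim^3 grid and hand every visited
    // cell to a callback. The callback returns false to stop the traversal early,
    // which is exactly what an opacity-driven ray wants to do. Assumes the ray
    // origin already lies inside the grid.
    template <typename Visit>
    void traverseGrid(std::size_t dim,
                      double ox, double oy, double oz,
                      double dx, double dy, double dz,
                      Visit visit) {
        long x = static_cast<long>(ox), y = static_cast<long>(oy), z = static_cast<long>(oz);
        const int stepX = dx >= 0 ? 1 : -1;
        const int stepY = dy >= 0 ? 1 : -1;
        const int stepZ = dz >= 0 ? 1 : -1;

        const double inf = std::numeric_limits<double>::infinity();
        // Ray parameter t at which the next cell boundary on one axis is crossed.
        auto firstBoundary = [inf](double o, double d, long c, int s) {
            if (d == 0.0) return inf;
            double next = (s > 0) ? static_cast<double>(c + 1) : static_cast<double>(c);
            return (next - o) / d;
        };
        double tMaxX = firstBoundary(ox, dx, x, stepX);
        double tMaxY = firstBoundary(oy, dy, y, stepY);
        double tMaxZ = firstBoundary(oz, dz, z, stepZ);
        const double tDeltaX = dx != 0.0 ? std::fabs(1.0 / dx) : inf;
        const double tDeltaY = dy != 0.0 ? std::fabs(1.0 / dy) : inf;
        const double tDeltaZ = dz != 0.0 ? std::fabs(1.0 / dz) : inf;

        while (x >= 0 && y >= 0 && z >= 0 &&
               x < static_cast<long>(dim) &&
               y < static_cast<long>(dim) &&
               z < static_cast<long>(dim)) {
            if (!visit(x, y, z))
                return;                          // caller has seen enough
            if (tMaxX < tMaxY && tMaxX < tMaxZ) { x += stepX; tMaxX += tDeltaX; }
            else if (tMaxY < tMaxZ)             { y += stepY; tMaxY += tDeltaY; }
            else                                { z += stepZ; tMaxZ += tDeltaZ; }
        }
    }

Hooked into the renderer, the visit callback would accumulate opacity and return false as soon as the ray is saturated, instead of collecting all the blocks first.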

And of course, there's the obligatory graphics blooper: this is how my torus primitive looks now … nice pattern, though.
