Image Space Oddity

I’ve been trying to work out a good GPU-accelerated version of my suggestive contours algorithm over the last few weeks, and after working through some technical difficulties, I managed to compute and draw regular contour lines this afternoon:

Hard to see? I know, it looks craptastic.

But after an afternoon of reading tutorials, bulletin boards and newsgroups, I know why. And I think it’s going to help me write a better thesis, so this delightful Saturday wasn’t completely wasted. Conclusions behind the cut.

For drawing suggestive contours (this applies to regular contours too), there are two main techniques, categorized by the space in which they operate:

OBJECT SPACE

This is what I’ve been using for my CPU implementation. All computations are done in the space of the mesh itself, using per-vertex information, with the Trimesh2 library computing the additional vertex data. It all boils down to connecting zero points: of n dot v (the dot product of the normal vector and the view vector) for regular contours, and of the radial curvature kr (where its directional derivative dwKr is positive) for suggestive contours. Once I’ve found these zero points, I can connect them within a face and draw them using regular OpenGL calls:

glBegin(GL_LINES); // start line drawing
// ... for every face, find the zero points along its edges
point p1 = w01 * themesh->vertices[v0] + w10 * themesh->vertices[v1];
point p2 = w02 * themesh->vertices[v0] + w20 * themesh->vertices[v2];
glVertex3fv(p1);
glVertex3fv(p2);
// ...
glEnd();
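
The weights w01, w10, w02 and w20 in that snippet are just linear interpolation factors for where n dot v crosses zero along an edge of the face. As a minimal sketch (assuming a hypothetical per-vertex array ndotv[] that already holds the dot products, which isn’t shown above), the crossing on the edge between v0 and v1 could be computed like this:

// Sketch: locate the n dot v zero crossing on the edge (v0, v1).
// ndotv[] is a hypothetical per-vertex array of n dot v values;
// the edge only contains a crossing if the signs at v0 and v1 differ.
float d0 = ndotv[v0];
float d1 = ndotv[v1];
float w10 = d0 / (d0 - d1); // interpolation parameter of the crossing
float w01 = 1.0f - w10;     // complementary weight for v0
point p1 = w01 * themesh->vertices[v0] + w10 * themesh->vertices[v1];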

The advantages of this method are:

  • Line width, even individual line width, can be controlled using glLineWidth calls.
  • Temporal coherence techniques like fading can be applied.
  • The code can be structured pretty well, and the implementation I’ve worked on during the first semester of my thesis year is pretty complete.

The main disadvantage of this method is that all the expensive computations (matrix multiplications, dot products) are done on the CPU, while there is hardware available solely for that purpose: the GPU. There is a hardware implementation for regular contours, but it’s fairly complicated. Another option is to draw a slightly bigger, inverted-normal copy of the mesh behind the object, but that’s nothing more than a dirty hack.
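
For completeness, that dirty hack usually boils down to something like the sketch below: cull the front faces of a slightly enlarged copy of the mesh so only its (effectively inverted) back side shows up as a dark shell behind the object, then draw the real mesh on top. The drawMesh() call and the scale factor are placeholders, not part of my implementation:

// Silhouette hack: draw an enlarged, front-culled copy of the mesh in
// black, then the normal mesh on top. Scaling around the origin is only
// a rough approximation of "slightly bigger".
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);            // keep only the back faces of the shell
glColor3f(0.0f, 0.0f, 0.0f);
glPushMatrix();
glScalef(1.02f, 1.02f, 1.02f);   // arbitrary enlargement factor
drawMesh();                      // placeholder: issues the mesh geometry
glPopMatrix();
glCullFace(GL_BACK);             // back to normal culling
glColor3f(0.8f, 0.8f, 0.8f);
drawMesh();                      // draw the actual object on top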

Some performance gains for the object-space algorithm can be achieved by cutting down on the number of faces you have to loop over:

  • For regular contours: early ndotv discarding of faces (see the sketch after this list)
  • For suggestive contours: backface culling
  • For suggestive contours: culling of faces with positive Gaussian curvature (see this paper by DeCarlo et al.)
  • Random selection of faces (see the previous paper and this paper by Markosian et al.)
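
As an example of the first item: a face can only be crossed by a regular contour if n dot v changes sign somewhere on it, so any face whose three vertices agree in sign can be skipped right away. A rough sketch, again assuming a hypothetical precomputed per-vertex ndotv[] array:

// Early discard inside the per-face loop: skip faces where ndotv has
// the same sign at all three vertices, since no contour can cross them.
int v0 = themesh->faces[i][0];
int v1 = themesh->faces[i][1];
int v2 = themesh->faces[i][2];
float d0 = ndotv[v0], d1 = ndotv[v1], d2 = ndotv[v2];
if ((d0 > 0.0f && d1 > 0.0f && d2 > 0.0f) ||
    (d0 < 0.0f && d1 < 0.0f && d2 < 0.0f))
    continue; // all vertices on the same side: no zero crossing here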

For my thesis, additional work might be possible on implementing the random selection of faces, since it allows for fast rendering at reasonable quality, which is very well suited for real-time applications.

IMAGE SPACE

These algorithms work on a rendered frame: the idea is to draw some scalar field derived from the object (normals, depth) and then run signal analysis on that frame: thresholding, edge detection, … The results are then used to determine the pixel colors in the final image.

So porting my approach from object space to image space was bound to run into trouble: there’s no GL_LINES here; I can only do calculations at the level of individual pixels. I wouldn’t even know where to start on thickening lines :)

Advantage: This approach is much better suited to fragment/vertex shaders, where programmability sits at the vertex and pixel level and only local information is available.

Disadvantages:

  • Complexity++
  • It’s hard to retain control over line thickness/strokes

In the examples I’ve come across, I’ve seen two techniques:

  • render normal map with diffuse shading + simple threshold filter
  • render depth map + edge detection filter (roughly sketched below)
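
To get a feel for what the second technique amounts to, here is a rough CPU-side sketch of it: render the scene, read the depth buffer back with glReadPixels, and mark pixels where the depth changes sharply. The window size and threshold are made-up values, and a real implementation would of course do this in a fragment shader instead of on the CPU:

// Depth-map edge detection, CPU version for illustration only.
// Assumes the scene has already been rendered into the current context.
const int W = 512, H = 512;                   // assumed framebuffer size
std::vector<float> depth(W * H);              // std::vector from <vector>
std::vector<unsigned char> edges(W * H, 255); // 255 = white background
glReadPixels(0, 0, W, H, GL_DEPTH_COMPONENT, GL_FLOAT, &depth[0]);
const float threshold = 0.01f;                // arbitrary edge sensitivity
for (int y = 1; y < H - 1; y++) {
    for (int x = 1; x < W - 1; x++) {
        // central differences of the depth value in x and y
        float dx = depth[y * W + (x + 1)] - depth[y * W + (x - 1)];
        float dy = depth[(y + 1) * W + x] - depth[(y - 1) * W + x];
        if (dx * dx + dy * dy > threshold * threshold)
            edges[y * W + x] = 0;             // sharp depth change: mark as edge
    }
}
// edges[] is now a black-on-white line image that can be drawn with
// glDrawPixels or uploaded as a texture.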

I’ve got to start reading up on how to implement these techniques, since I don’t have a lot of experience with OpenGL/GLSL and texture combinations.
