The algorithm is an object-space algorithm, which has a few implications: all operations work directly on vertex data, and no intermediate rendering or image-space filtering is done. Rendering on the GPU makes it hard to control line width, which is the main disadvantage compared to a CPU implementation. The main advantage is – of course – a significant speedup in rendering.
The current version computes contour information for all vertices. I think I can speed things up a bit more by adding some tests in the vertex shader that signal the fragment shader when a fragment is unlikely to contain contours. Whether the additional overhead of these tests is worth it remains to be seen. Pretty images behind the cut.
All of the following images render fast enough for a real-time environment. Tests were performed on an AMD Phenom II QuadCore with a Radeon HD4980, though any card supporting OpenGL 1.4 and the ARB extensions will do.
Line width for regular contours is okay, but suggestive contours tend to “smudge” out over large faces. I think subdividing the model into smaller triangles might solve that.
Bringing out the details doesn’t work as well as it did in the CPU implementation. This is because the thresholds that limit line width are a bit too aggressive at times … never mind the characters on top of the skull.
Apart from some artifacts where the suggestive contours merge into the regular contours, this looks pretty much the same as the CPU implementation. The simple geometry allows for good threshold tweaking.
The same problems that can be seen in the triceratops render: regular contours are sharp and crisp, while suggestive contours are smudged.
Testing the shader on uncleaned, noisy data (like this head scan) still results in an okay image. For the record: the nose is not included in the original scan, so the cut-off appearance is as it should be. For reference, I’ve rendered the same image with the CPU implementation, with regular shading enabled and contours overdrawn:
As you can see, the GPU shader actually brings out more detail around the eyes. Again, this is due to the threshold settings.
This render is as good as the CPU render: note that this object has soft, curvy edges in general, and all faces are approximately the same size. That suits the GPU algorithm, so there is not much “smudging” to be seen here.
The Stanford Dragon. As you can see, finding good thresholds for the entire render is hard.
The Armadillo – the pièce de résistance, if you will – a good threshold setting for this model was found immediately, resulting in a good line drawing.