I’ve now implemented the pick-some-random-faces algorithm (the algorithm formerly known as the Markosian algorithm, from this paper) for suggestive contours as well. Framerate results are a bit better, with the big gains, as expected, in models with more faces. For example, of the 40,000 faces of the skull, testing only 400 for contours and 400 for suggestive contours gives the following (you can see my nifty FPS counter there too).
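As a rough sketch of the idea (the `Face` structure, function names, and contour test below are my own simplifications, not the real code): test a small random subset of faces against the view direction, and whenever one passes, walk across its neighbours to pick up the rest of the contour chain without further sampling.

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <set>
#include <vector>

// Hypothetical face record: a normal plus adjacency information.
struct Face {
    float nx, ny, nz;            // face normal
    std::vector<int> neighbours; // indices of adjacent faces
};

// A face counts as "on the contour" when its normal is nearly
// perpendicular to the view direction.
bool onContour(const Face& f, float vx, float vy, float vz, float eps = 0.05f) {
    return std::fabs(f.nx * vx + f.ny * vy + f.nz * vz) < eps;
}

// Test k random faces; for each hit, flood out over adjacent contour
// faces so a single lucky sample recovers the whole chain.
std::set<int> findContourFaces(const std::vector<Face>& faces,
                               float vx, float vy, float vz,
                               std::size_t k, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> pick(0, static_cast<int>(faces.size()) - 1);
    std::set<int> found;
    for (std::size_t i = 0; i < k; ++i) {
        int f = pick(rng);
        if (!onContour(faces[f], vx, vy, vz)) continue;
        // Path walking: expand over neighbours that also pass the test.
        std::vector<int> stack{f};
        while (!stack.empty()) {
            int cur = stack.back();
            stack.pop_back();
            if (!found.insert(cur).second) continue;
            for (int nb : faces[cur].neighbours)
                if (onContour(faces[nb], vx, vy, vz)) stack.push_back(nb);
        }
    }
    return found;
}
```

Because chains are walked once found, even a modest sample count tends to recover long silhouette loops, which is why testing only 400 of 40,000 faces is viable at all.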
Some questions to ask myself, with color coding (patent pending) to indicate how much development time each would take to try. I’d like to move on to implementing another alternative (an image-space algorithm on the GPU), so I don’t want to lose time:
- (moderate) Why pick x random faces for contours and another x for suggestive contours? Couldn’t we use the same pool for both (with different seeds, of course)?
  - If we used the same pool for both, we’d lose the Gaussian face optimization I’m doing now.
  - Since the results are not mind-blowingly better (+5 fps on this skull model), I suspect my real performance bottleneck lies somewhere else.
- (moderate) I’m being rather uneconomical with large arrays and vector lists. Could this be hurting performance?
- (hard) I’m not using any Vertex Buffer Objects to push data to the GPU, mainly because I don’t know how, and it seems rather complex to combine with the contour calculations.
- (?) Using GL_LINE_SMOOTH and GL_BLEND adds extra overhead. Is there another way to make the curves look smooth?
- (easy) For the x random faces that need to be tested, vertex computation is done on the fly; some time could be gained by pre-computing those faces’ vertex points in a parallel loop before rendering. Additional faces found during path walking would still need on-the-fly computation, though.
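For that last point, here’s a minimal sketch of what the pre-computation pass could look like, with `std::async` standing in for whatever parallel loop the renderer ends up using (the `VertexData` struct and function names are placeholders of mine, not the real code):

```cpp
#include <algorithm>
#include <cstddef>
#include <future>
#include <vector>

// Placeholder for the per-face data computed on the fly today
// (in the real renderer: projected positions, curvature terms, etc.).
struct VertexData {
    float value;
};

// Stand-in for the expensive per-face vertex computation.
VertexData computeVertexData(int faceIndex) {
    return VertexData{static_cast<float>(faceIndex) * 2.0f};
}

// Pre-compute the data for all sampled faces before rendering,
// splitting the index range across a few worker tasks.
std::vector<VertexData> precompute(const std::vector<int>& sampledFaces,
                                   std::size_t workers = 4) {
    std::vector<VertexData> out(sampledFaces.size());
    std::vector<std::future<void>> tasks;
    std::size_t chunk = (sampledFaces.size() + workers - 1) / workers;
    for (std::size_t w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = std::min(begin + chunk, sampledFaces.size());
        if (begin >= end) break;
        tasks.push_back(std::async(std::launch::async, [&, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                out[i] = computeVertexData(sampledFaces[i]);
        }));
    }
    for (auto& t : tasks) t.get();  // join all workers before rendering starts
    return out;
}
```

Each worker writes to a disjoint slice of `out`, so no locking is needed; the faces discovered later during path walking would still bypass this pass and compute on the fly.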