Thesis Code

Last page revision: 03/08/2010

All my code is distributed freely under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license. All code is provided as-is; the most important lines and calls are well-commented. If you have any problems getting it to compile, getting it to work, or understanding it, don't hesitate to leave a comment on this page or contact me.


Requirements
All the implementations for my thesis were written in C++, using OpenGL for rendering. The code was compiled successfully for x86 systems using GCC 4.2.x. The required libraries are:

  • Trimesh2: A library for easy input and processing of 3d meshes in .ply or .obj format.
  • FreeGlut: Easy windowing toolkit.
  • GLUI: Additional UI controls.
  • GLEW: OpenGL extension wrangler.

These dependencies can be installed on recent Ubuntu versions by running sudo apt-get install freeglut3-dev glew-utils mesa-dev.


Object Space CPU algorithms
This demo contains code for three algorithms, which all operate in Object Space using only CPU functionality; shader support on the GPU is not required:

  • Appel’s algorithm for drawing regular contours.
  • NdotV algorithm for drawing regular contours (the sign test it is built on is sketched below).
  • Zero crossings of the radial curvature for suggestive contours.

You can download a ZIP file containing the code here, and a PDF with instructions here. This archive also contains some GPL models to test with.
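
The core idea behind the regular-contour algorithms is a simple sign test: a contour crosses a mesh edge when n·v (the normal dotted with the view vector) changes sign from one endpoint to the other. The snippet below is a minimal sketch of that per-vertex test, not the thesis code itself; the Vec3 type, helper functions and mesh layout are assumptions made purely for illustration.

// Minimal sketch of the n·v sign test behind the contour algorithms above.
// NOT the thesis code: Vec3, the helpers and the mesh layout are illustrative assumptions.
#include <array>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return Vec3{a.x-b.x, a.y-b.y, a.z-b.z}; }

// An edge (i,j) is crossed by a contour when n·v changes sign between its endpoints,
// i.e. one endpoint faces the camera and the other faces away.
std::vector<std::pair<int,int>> find_contour_edges(const std::vector<Vec3>& vertices,
                                                   const std::vector<Vec3>& normals,
                                                   const std::vector<std::array<int,3>>& faces,
                                                   const Vec3& cam_pos)
{
	std::vector<std::pair<int,int>> contour_edges;
	for (const std::array<int,3>& f : faces) {
		for (int e = 0; e < 3; ++e) {
			int i = f[e];
			int j = f[(e + 1) % 3];
			float ndotv_i = dot(normals[i], sub(cam_pos, vertices[i]));
			float ndotv_j = dot(normals[j], sub(cam_pos, vertices[j]));
			if (ndotv_i * ndotv_j < 0.0f)        // sign change: a contour crosses this edge
				contour_edges.push_back({i, j}); // interior edges are found twice, once per face
		}
	}
	return contour_edges;
}

A renderer would typically interpolate the exact zero crossing along each marked edge to draw smooth contour lines.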


Object Space GPU algorithms

The GPU algorithm I developed works in Object Space: a vertex shader computes the radial curvature and its directional derivative per vertex, and a fragment shader then uses these values to draw the contour lines. It is a very fast way to draw suggestive contours. You need a GPU which supports shaders and vertex buffer objects and has at least 5 texture units (any modern card will probably do).
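
A quick way to check these requirements at startup is to query GLEW right after creating the OpenGL context. The check below is only an illustrative sketch; the function name and error handling are mine, not the viewer's.

// Illustrative capability check using GLEW (not part of the thesis viewer).
#include <GL/glew.h>
#include <cstdio>

bool check_gpu_requirements()
{
	if (glewInit() != GLEW_OK) {
		std::fprintf(stderr, "GLEW initialisation failed\n");
		return false;
	}
	// GLSL vertex/fragment shaders and vertex buffer objects are core since OpenGL 2.0
	if (!GLEW_VERSION_2_0) {
		std::fprintf(stderr, "OpenGL 2.0 (shaders + VBOs) not supported\n");
		return false;
	}
	// the vertex shader below reads gl_MultiTexCoord1 through 5,
	// so at least 6 texture coordinate sets must be available
	GLint texcoord_sets = 0;
	glGetIntegerv(GL_MAX_TEXTURE_COORDS, &texcoord_sets);
	if (texcoord_sets < 6) {
		std::fprintf(stderr, "Only %d texture coordinate sets available\n", texcoord_sets);
		return false;
	}
	return true;
}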

The ZIP file contains a modified Trimesh2 mesh viewer, which uses the following two shaders to draw the lines.

// Object Space Suggestive Contours vertex shader
// Implementation by Jeroen Baert - www.forceflow.be - [email protected]
//
// Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license
// which can be found at: http://creativecommons.org/licenses/by-nc-sa/3.0/
//
// TL;DR? You must attribute me, cannot use this for commercial purposes, 
// and if you share this, it should be under the same or a similar license!
//

// IN: camera position (per render)
uniform vec3 cam_pos;

// OUT: variables for fragment shader

varying float ndotv;
varying float t_kr;
varying float t_dwkr;

void main()
{
	// per-vertex principal directions, principal curvatures and the curvature derivative,
	// passed in as texture coordinate sets 1 to 5
	vec3 pdir1 = gl_MultiTexCoord1.stp;
	vec3 pdir2 = gl_MultiTexCoord2.stp;
	float curv1 = gl_MultiTexCoord3.s;
	float curv2 = gl_MultiTexCoord4.s;
	vec4 dcurv = gl_MultiTexCoord5.stpq;

	// compute vector to cam
	vec3 view = cam_pos - vec3(gl_Vertex.x,gl_Vertex.y,gl_Vertex.z);
	// compute ndotv (and normalize view)
	ndotv = (1.0f / length(view)) * dot(gl_Normal,view);

	// optimization: if ndotv is negative, this vertex faces away from the camera, so don't bother computing the rest:
	// the data will not be used in computing pixel color
	if(!(ndotv < 0.0f))
	{
		// compute kr
		vec3 w = normalize(view - gl_Normal * dot(view, gl_Normal));
  		float u = dot(w, pdir1);
  		float v = dot(w, pdir2);
  		float u2 = u*u;
  		float v2 = v*v;
  		t_kr = (curv1*u2) + (curv2*v2);

  		// and dwkr
  		float uv = u*v;
  		float dwII = (u2*u*dcurv.x) + (3.0*u*uv*dcurv.y) + (3.0*uv*v*dcurv.z) + (v*v2*dcurv.w);

  		// extra term due to second derivative
  		t_dwkr = dwII + 2.0 * curv1 * curv2 * ndotv/sqrt((1.0 - pow(ndotv, 2.0)));
  	}
	// position transformation
	gl_Position = ftransform();
}

// Object Space Suggestive Contours fragment shader
// Implementation by Jeroen Baert - www.forceflow.be - [email protected]
//
// Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license
// which can be found at: http://creativecommons.org/licenses/by-nc-sa/3.0/
//
// TL;DR? You must attribute me, cannot use this for commercial purposes, 
// and if you share this, it should be under the same or a similar license!
//

// IN: computed values from vertex shader
	varying float ndotv;
	varying float t_kr;
	varying float t_dwkr;

// IN: uniform values
	uniform float fz;
	uniform float c_limit;
	uniform float sc_limit;
	uniform float dwkr_limit;

void main()
{
	// base color
	vec4 color = vec4(1.0f, 1.0f, 1.0f, 1.0f);

	// use feature size
	float kr = fz*abs(t_kr); // absolute value to use it in limits
	float dwkr = fz*fz*t_dwkr; // two times fz because derivative
	float dwkr2 = (dwkr-dwkr*pow(ndotv, 2.0));

	// compute limits
	float contour_limit = c_limit*(pow(ndotv, 2.0)/kr);
	float sc_limit = sc_limit*(kr/dwkr2);

	// contours
	if(contour_limit<1.0)
	{color.xyz = min(color.xyz, vec3(contour_limit, contour_limit, contour_limit));}

	// suggestive contours
	else if((sc_limit < 1.0) && dwkr2 > dwkr_limit)
	{color.xyz = min(color.xyz, vec3(sc_limit, sc_limit, sc_limit));}

	gl_FragColor = color;
}
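
For completeness, this is roughly how the host side could feed these shaders: the per-vertex principal directions, curvatures and curvature derivatives (as provided by Trimesh2's curvature computation) go into texture coordinate sets 1 to 5, matching the gl_MultiTexCoord reads in the vertex shader, and the camera position and thresholds go into uniforms. The sketch below uses plain client-side arrays and placeholder threshold values; the actual viewer in the ZIP uses vertex buffer objects and its own variable names.

// Host-side sketch for the Object Space GPU shaders (names, layout and values are illustrative).
#include <GL/glew.h>

// 'prog' is an already compiled and linked GLSL program containing the two shaders above.
// pdir1/pdir2 hold 3 floats per vertex, curv1/curv2 hold 1 float, dcurv holds 4 floats.
void setup_suggestive_contour_state(GLuint prog,
                                    const float* pdir1, const float* pdir2,
                                    const float* curv1, const float* curv2,
                                    const float* dcurv,
                                    const float cam_pos[3], float feature_size)
{
	glUseProgram(prog);

	// camera position for the vertex shader
	glUniform3f(glGetUniformLocation(prog, "cam_pos"), cam_pos[0], cam_pos[1], cam_pos[2]);
	// feature size and thresholds for the fragment shader (placeholder values)
	glUniform1f(glGetUniformLocation(prog, "fz"), feature_size);
	glUniform1f(glGetUniformLocation(prog, "c_limit"), 1.0f);
	glUniform1f(glGetUniformLocation(prog, "sc_limit"), 1.0f);
	glUniform1f(glGetUniformLocation(prog, "dwkr_limit"), 0.05f);

	// expose the per-vertex curvature data as texture coordinate sets 1..5,
	// matching gl_MultiTexCoord1..5 in the vertex shader
	struct Attrib { GLenum unit; GLint size; const float* data; };
	const Attrib attribs[] = {
		{ GL_TEXTURE1, 3, pdir1 }, { GL_TEXTURE2, 3, pdir2 },
		{ GL_TEXTURE3, 1, curv1 }, { GL_TEXTURE4, 1, curv2 },
		{ GL_TEXTURE5, 4, dcurv },
	};
	for (const Attrib& a : attribs) {
		glClientActiveTexture(a.unit);
		glEnableClientState(GL_TEXTURE_COORD_ARRAY);
		glTexCoordPointer(a.size, GL_FLOAT, 0, a.data);
	}
	// ...the mesh itself is then drawn with glDrawArrays/glDrawElements as usual
}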

Image Space: Contour lines with Sobel filter

In Image Space, one can use a Sobel filter on a rendered depth map to detect contour lines. The supplied ZIP file contains a mesh viewer that applies this technique using the following fragment shader (you can find the vertex shader – which is quite trivial – in the download).

// GLSL Sobel Edge Detection
// Implementation by Jeroen Baert - www.forceflow.be - [email protected]
//
// Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license
// which can be found at: http://creativecommons.org/licenses/by-nc-sa/3.0/
//
// TL;DR? You must attribute me, cannot use this for commercial purposes, 
// and if you share this, it should be under the same or a similar license!
//

// IN: rendered depth map to run the edge detection on
uniform sampler2D edge;
// IN: gradient magnitude above which a pixel is drawn as an edge
uniform float limit;
// IN: width of the rendered image, in pixels
uniform int width;

float intensity(in vec4 color)
{
	return sqrt((color.x*color.x)+(color.y*color.y)+(color.z*color.z));
}

vec3 sobel(float step, vec2 center)
{
	// get samples around pixel
    float tleft = intensity(texture2D(edge,center + vec2(-step,step)));
    float left = intensity(texture2D(edge,center + vec2(-step,0)));
    float bleft = intensity(texture2D(edge,center + vec2(-step,-step)));
    float top = intensity(texture2D(edge,center + vec2(0,step)));
    float bottom = intensity(texture2D(edge,center + vec2(0,-step)));
    float tright = intensity(texture2D(edge,center + vec2(step,step)));
    float right = intensity(texture2D(edge,center + vec2(step,0)));
    float bright = intensity(texture2D(edge,center + vec2(step,-step)));

	// Sobel masks (to estimate gradient)
	//        1 0 -1     -1 -2 -1
	//    X = 2 0 -2  Y = 0  0  0
	//        1 0 -1      1  2  1

    float x = tleft + 2.0*left + bleft - tright - 2.0*right - bright;
    float y = -tleft - 2.0*top - tright + bleft + 2.0 * bottom + bright;
    float color = sqrt((x*x) + (y*y));
    if (color > limit){return vec3(0.0,0.0,0.0);}
    return vec3(1.0,1.0,1.0);
 }

void main(void)
{
	gl_FragColor = vec4(1,1,1,0);
	float step = 1.0/512.0;
	vec2 center = gl_TexCoord[0].st;
	gl_FragColor.xyz = sobel(step,center);
}
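
The shader expects the rendered depth map to be available in a texture bound to the edge sampler. A minimal host-side hookup could look like the sketch below; it assumes the depth map has already been rendered into edge_tex, and the function and variable names are illustrative rather than the viewer's actual code.

// Illustrative setup for the Sobel pass (names and values are assumptions).
#include <GL/glew.h>

void sobel_pass(GLuint prog, GLuint edge_tex, float limit, int width)
{
	glUseProgram(prog);
	// bind the depth-map rendering to texture unit 0 and point the sampler at it
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, edge_tex);
	glUniform1i(glGetUniformLocation(prog, "edge"), 0);
	glUniform1f(glGetUniformLocation(prog, "limit"), limit);
	glUniform1i(glGetUniformLocation(prog, "width"), width);
	// ...then draw a full-screen quad with texture coordinates running from 0 to 1
}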

Image Space: Radial Filter

As described in the thesis text, by using a radial valley-detection filter on a diffuse rendering of the object, one can draw suggestive contour lines in image space. This technique was implemented in GLSL and is demonstrated in the viewer supplied in this ZIP file. The fragment shader used is the following:

// Implementation by Jeroen Baert - www.forceflow.be - [email protected]
//
// Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license
// which can be found at: http://creativecommons.org/licenses/by-nc-sa/3.0/
//
// TL;DR? You must attribute me, cannot use this for commercial purposes, 
// and if you share this, it should be under the same or a similar license!
//
// IN: diffuse rendering of the object
uniform sampler2D color;
// IN: radius for valley detection
uniform int radius;
// IN: width of the rendered image, in pixels
uniform int renderwidth;

float intensity(in vec4 color)
{
	return sqrt((color.x*color.x)+(color.y*color.y)+(color.z*color.z));
}

vec3 radial_edge_detection(in float step, in vec2 center)
{
	// let's learn more about our center pixel
	float center_intensity = intensity(texture2D(color, center));
	// counters we need
	int darker_count = 0;
	float max_intensity = center_intensity;
	// let's look at our neighbouring points
	for(int i = -radius; i <= radius; i++)
	{
		for(int j = -radius; j <= radius; j++)
		{
			// intensity of the current neighbour pixel
			float current_intensity = intensity(texture2D(color, center + vec2(float(i)*step, float(j)*step)));
			// count neighbours that are darker than the center pixel
			if(current_intensity < center_intensity)
			{
				darker_count++;
			}
			// keep track of the brightest neighbour
			if(current_intensity > max_intensity)
			{
				max_intensity = current_intensity;
			}
		}
	}
	// do we have a valley pixel?
	if((max_intensity - center_intensity) > 0.01*float(radius))
	{
		if(float(darker_count)/float(radius*radius) < (1.0-(1.0/float(radius))))
		{
			return vec3(0.0,0.0,0.0); // yep, it's a valley pixel.
		}
	}
	return vec3(1.0,1.0,1.0); // no, it's not.

}

void main(void)
{
	float step = 1.0/float(renderwidth);
	vec2 center = gl_TexCoord[0].st;
	gl_FragColor.xyz = radial_edge_detection(step, center);
	gl_FragColor.a = 0.0;
}
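
Hooking this shader up follows the same pattern as the Sobel pass above: bind the diffuse rendering of the object to the color sampler and set the radius and renderwidth uniforms. Again, the sketch below is only illustrative; names and values are assumptions.

// Illustrative setup for the radial filter pass (names and values are assumptions).
#include <GL/glew.h>

void radial_filter_pass(GLuint prog, GLuint diffuse_tex, int radius, int renderwidth)
{
	glUseProgram(prog);
	// bind the diffuse rendering to texture unit 0 and point the sampler at it
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, diffuse_tex);
	glUniform1i(glGetUniformLocation(prog, "color"), 0);
	glUniform1i(glGetUniformLocation(prog, "radius"), radius);
	glUniform1i(glGetUniformLocation(prog, "renderwidth"), renderwidth);
	// ...then draw a full-screen quad, exactly as in the Sobel pass
}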