This is one of the latest results of our Capita Selecta project (Selected Heads. Hah, the things I do with language …), in which we try to reconstruct human faces using Structure From Motion techniques.
See the previous posts on this subject for the technical details, but in short: we use only a camera to capture images from several (random) viewpoints around the object, and then we run them through an SfM pipeline to match keypoints and build a point cloud. As you can see, the detail is getting pretty good, and the depth estimation is far less noisy than it used to be.
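For the curious: once keypoints are matched between two views and the camera poses are known, each 3D point in the cloud is recovered by triangulation. Here is a minimal NumPy sketch of the classic DLT (direct linear transform) two-view triangulation; the camera matrices and values are made-up textbook assumptions, not our actual pipeline code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from two projection matrices and one matched
    keypoint per image, via the DLT: stack the projection constraints
    into A and take the null space (last right-singular vector)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise

# Toy setup: one intrinsic matrix, second camera shifted along x
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0, 1.0])       # a known 3D point
x1h, x2h = P1 @ X_true, P2 @ X_true            # project into both views
x1, x2 = x1h[:2] / x1h[2], x2h[:2] / x2h[2]
X_est = triangulate(P1, P2, x1, x2)            # should recover X_true
```

The real pipeline does this for tens of thousands of matched keypoints at once, with the poses themselves estimated (and refined by bundle adjustment) rather than given.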
The model contains 112 457 coloured vertices, so even without applying any form of texturing (just interpolating the colours between the vertices), we can get a pretty good image. The images were captured with a high-resolution photo camera tethered over USB. The point cloud viewer is a modified version of the OpenGL viewer I made for my thesis project.
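A side note on that colour interpolation: with per-vertex colours, the renderer simply blends the colours across each triangle (Gouraud-style) using barycentric weights. A toy illustration in Python, with a hypothetical triangle and colours; the actual viewer just lets OpenGL do this in hardware:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Express 2D point p in the barycentric coordinates of triangle abc."""
    M = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(M, p - a)
    return np.array([1.0 - u - v, u, v])

def interp_color(p, tri, colors):
    # Gouraud-style blend: weight each vertex colour by its barycentric coord
    w = barycentric(p, *tri)
    return w @ colors

tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
colors = np.array([[255.0, 0.0, 0.0],   # red vertex
                   [0.0, 255.0, 0.0],   # green vertex
                   [0.0, 0.0, 255.0]])  # blue vertex
centroid = (tri[0] + tri[1] + tri[2]) / 3.0
c = interp_color(centroid, tri, colors)  # an even blend of all three colours
```

With 112 457 vertices on a face-sized surface, the triangles are small enough that this blending looks almost like a texture.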
I also scanned my thesis promotor, but due to pretty bad lighting conditions, the meshing went horribly wrong. I think it’s best if we don’t publish those results ;) The project is on hold for a while now, so we can both finish our theses before getting caught up in this (pretty exciting) work. We’re planning to upgrade the pipeline with newer Bundler / PMVS versions and incorporate CMVS, a way to do the keypoint matching in parallel, which will speed up the computation immensely.