Eurographics Symposium on Rendering: Day 1

Note: You can find my overview of Day 2 and Day 3 on this blog as well.

I have the privilege of attending the annual Eurographics Symposium on Rendering (EGSR) in Paris, France this year. The first warm day is behind us, and it’s time to reflect on some of the things I saw today.

The first keynote, by Alexei Efros (Carnegie Mellon), was a great opener: what an amazing backlog of interesting papers this man has been involved in! Big Data and the Pursuit of Visual Realism touched on a variety of topics which all boiled down to using the millions of images available on the internet to ‘dirty up’ the too-perfect renders traditional CG produces: inserting real-life objects into other photographs, recognizing parts of images, scene completion (perhaps his best-known paper), placing observed objects in totally different lighting conditions (Webcam Clipart), recovering occlusion boundaries from a single image, … Another interesting (and topical) project was What Makes Paris Look like Paris?, which develops a technique to measure and categorize visual patterns that are typical of a certain region.

What makes Paris look like Paris?

The end of the presentation touched on a very important point: when using this seemingly endless stream of user-generated content, there’s always the danger of forgetting about bias. Humans usually photograph interesting things (like the front of Notre Dame) from a safe position (the sidewalk, not the middle of the street), and so on. This introduces a noticeable bias in large datasets, which should be taken into account.

Mr. Efros is a captivating speaker, and his hour was up before anyone noticed. He also briefly touched on Structure from Motion in conjunction with Noah Snavely’s Photo Tourism work, the basis of Microsoft’s Photosynth tool: I’ve dabbled with both techniques in my master year at university, and they’re yet another example of how Big Data can augment and extend existing CG output.

The rest of the afternoon was dedicated to Optics, a part of the CG spectrum I wasn’t too familiar with beforehand.

The first paper, Polynomial Optics: A Toolkit for Efficient Ray-Tracing of Lens Systems (link), tackled the problem of tracing large numbers of rays through a real-world camera lens system. Even with very basic camera knowledge, you can imagine that only a very small portion of the rays eventually makes it through the aperture, so how do we trace rays efficiently, and keep doing so when we assemble different kinds of lens systems? Building on aberration theory for wavefront distortions, the paper proposes a ray-space formulation of these nonlinear effects. The authors plan on releasing the accompanying C++ toolkit before the symposium is over, so I guess they won’t be seeing much of Paris in the next few days :)
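To get a feel for what a ray-space formulation of a lens system looks like, here is a minimal sketch of the first-order special case (my own illustration, not the authors’ toolkit; all names are made up): in paraxial optics every element acts on a ray, described by its height and angle, as a linear “ABCD” matrix, and a whole lens system is just the product of its elements’ matrices. The paper generalizes this from linear maps to truncated polynomials, so that nonlinear aberrations survive the composition.

```cpp
// Sketch of paraxial (first-order) ray transfer through a lens system.
// This is the linear special case of what the paper extends to polynomials.
#include <array>
#include <cstdio>

// A 2x2 "ABCD" matrix acting on a paraxial ray (height x, angle u).
using Mat2 = std::array<std::array<double, 2>, 2>;

Mat2 multiply(const Mat2& a, const Mat2& b) {
    Mat2 r{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            r[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j];
    return r;
}

// Free propagation over distance d: x' = x + d*u, u' = u.
Mat2 propagate(double d) { return {{{1.0, d}, {0.0, 1.0}}}; }

// Thin lens with focal length f: x' = x, u' = u - x/f.
Mat2 thinLens(double f) { return {{{1.0, 0.0}, {-1.0 / f, 1.0}}}; }

int main() {
    // A toy system: 10 units of air, a thin lens (f = 5), 10 more units.
    // Composition order: the element applied last is the leftmost factor.
    Mat2 system =
        multiply(propagate(10.0), multiply(thinLens(5.0), propagate(10.0)));

    double x = 1.0, u = 0.0;  // incoming ray: height 1, parallel to the axis
    double x2 = system[0][0] * x + system[0][1] * u;
    double u2 = system[1][0] * x + system[1][1] * u;
    std::printf("out: height %.3f, angle %.3f\n", x2, u2);  // -1.000, -0.200
    return 0;
}
```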

Per-Vertex Defocus Blur

The second paper, Per-Vertex Defocus Blur for Stochastic Rasterization (link), was sponsored by Intel. The goal was to put more control over the blur present in the final image (motion blur and depth-of-field effects) in the hands of the artists by describing it as a per-vertex property. They make the “circle of confusion”, which models how an out-of-focus point spreads into a disc on the image, a variable vertex parameter. This allows them to limit the foreground blur, extend the in-focus range, simulate tilt-shift photography, and specify per-object defocus blur. And double whammy: it’s relatively easy to implement in a given stochastic rasterizer. I think it’s nice to see how a simple mathematical ‘trick’ can give final artists (who are no CG experts) more control over the final render.
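For the curious, here is roughly what that could look like in code: a minimal sketch under the standard thin-lens model, with artist knobs I made up to mirror the effects listed above. This is not the authors’ implementation; the physically based signed circle of confusion is computed per vertex and then reshaped by the controls.

```cpp
#include <algorithm>
#include <cmath>

// Thin-lens camera parameters. Names are illustrative.
struct Camera {
    double apertureDiameter;  // A
    double focalLength;       // f
    double focusDistance;     // depth that projects perfectly sharp
};

// Per-vertex artist overrides; hypothetical knobs, not the paper's API.
struct BlurControls {
    double maxForegroundCoc;  // clamp on near-field blur
    double inFocusHalfRange;  // extra depth band forced to stay sharp
    double objectScale;       // per-object multiplier on defocus blur
};

// Signed circle-of-confusion diameter for a vertex at view-space depth z
// (standard thin-lens formula): negative in front of the focus plane,
// positive behind it.
double signedCoc(const Camera& cam, double z) {
    double m = cam.focalLength / (cam.focusDistance - cam.focalLength);
    return cam.apertureDiameter * m * (z - cam.focusDistance) / z;
}

// The per-vertex CoC the rasterizer would actually consume: the physical
// value, reshaped by the artist controls.
double perVertexCoc(const Camera& cam, const BlurControls& c, double z) {
    // Extend the in-focus range: depths near the focus plane stay sharp.
    if (std::abs(z - cam.focusDistance) < c.inFocusHalfRange)
        return 0.0;
    double coc = signedCoc(cam, z) * c.objectScale;
    // Limit foreground (negative) blur without touching the background.
    return std::max(coc, -c.maxForegroundCoc);
}
```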

During the closing wine-and-cheese tasting (ahh, the French, they do love their vin and fromage), I got the chance to talk to Szymon Rusinkiewicz, co-author of the Suggestive Contours paper that formed the basis for my master’s thesis, and Samuli Laine, author of the most recent work on high-performance Sparse Voxel Octrees. Exciting to meet the people behind the papers you’ve been reading for a good chunk of your academic career. (And, I must say, quite a test of your social skills when you’re a PhD rookie, alone at your first conference, without a supervisor to introduce you to people. I’ve become a silent ninja at quick googling and subtly checking the ID tag the person is wearing!)

Speaking of googling, it’s a bit of a downside that there’s no wireless connection at (or even close to) the conference venue. I can understand how having no net access during the talks motivates people to listen to the speaker, but having no access during the (long) coffee breaks is really annoying. I wanted to do some live-blogging during the conference, or show Szymon my thesis work on Suggestive Contours/Suggestive Highlights, but there’s no way I’m going to pull all that data through a roaming (!) 2G (!!) connection on my smartphone, which would cost me a small fortune and possibly a kidney. A missed opportunity. Here’s hoping they fix that tomorrow, though I’m not optimistic.

Taking the long way back to the hotel, past Notre Dame

Goodnight Paris!
