I’ve read some remaining papers about Non-Photorealistic Rendering.
The first one was Image Analogies by A. Hertzmann et al. It describes a framework for processing images, where the processing algorithm is derived from a given pair (or set of pairs) of images A and A’, where A is the original image and A’ is its transformed counterpart.
This can be used to ‘learn’ traditional image filters like blur and sharpen, but also for more spectacular stuff like texture synthesis and texture-by-numbers, which I found quite impressive. The maths in that specific paper went a bit over my head, so I focused on the applications.
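The core idea is easy to sketch: given the pair A : A’ and a new image B, every pixel of B is matched against A by comparing neighbourhoods, and the corresponding pixel of A’ is copied into the output B’. Below is a toy brute-force version of that idea in Python/numpy — the actual paper adds multiscale search, feature vectors and a coherence term, none of which appear here:

```python
import numpy as np

def analogy_transfer(A, Ap, B, radius=1):
    """Toy image-analogies sketch: for each pixel of B, find the
    pixel of A whose neighbourhood best matches B's neighbourhood,
    and copy A' at that position into B'. Brute force, single scale."""
    r = radius
    Apad = np.pad(A, r, mode='edge')
    Bpad = np.pad(B, r, mode='edge')
    h, w = A.shape
    # flatten every (2r+1)^2 neighbourhood of A into one row per pixel
    feats = np.stack([Apad[i:i + 2 * r + 1, j:j + 2 * r + 1].ravel()
                      for i in range(h) for j in range(w)])
    Bp = np.zeros_like(B, dtype=Ap.dtype)
    for i in range(B.shape[0]):
        for j in range(B.shape[1]):
            nb = Bpad[i:i + 2 * r + 1, j:j + 2 * r + 1].ravel()
            best = np.argmin(((feats - nb) ** 2).sum(axis=1))
            Bp[i, j] = Ap.flat[best]
    return Bp
```

If B happens to equal A, the neighbourhood match is exact everywhere and the function simply reproduces A’ — which is a handy sanity check, and also shows why the method can ‘learn’ a filter: the A → A’ mapping is applied wherever B locally resembles A.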
Next on the list was Real-Time Video Abstraction by H. Winnemöller, S. Olsen and B. Gooch, which presented an algorithm for abstracting video in real time. By modifying the contrast of visually important features, it produces cartoon-like imagery with good temporal coherence – a property that was missing from previous algorithms attempting the same sort of abstraction.
The interesting part of this paper (at least for me) was how they based their framework and tests on several theories of how the human visual system works, and how we perceive things in general. They assume the visual system operates on different features of a scene. Changes in these features are of perceptual importance and therefore visually interesting, so polarizing these changes is a useful method for image abstraction. The three main steps in the algorithm are nonlinear diffusion, edge detection and re-synthesis. Once again, the deeper maths require more study, but I get the general idea.
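To get a feel for those three steps, here is a rough single-frame sketch in Python/numpy. It is not the paper’s actual pipeline (which uses an iterated bilateral filter, soft luminance quantization and smoothed DoG edges); I substitute a crude contrast-gated blur for the diffusion step, and all thresholds and sigmas are hypothetical:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def abstract_frame(img, iters=3, sigma_s=2.0, sigma_r=0.1,
                   edge_sigma=1.0, k=1.6, levels=8):
    """Rough sketch of an abstraction pipeline on a grayscale
    image in [0, 1]: smooth while roughly preserving edges,
    quantize luminance into flat bins, overlay DoG edges."""
    out = img.astype(float)
    # 1. crude edge-preserving smoothing: blend in a Gaussian blur
    #    only where local contrast is low (stand-in for diffusion)
    for _ in range(iters):
        blurred = gaussian_filter(out, sigma_s)
        weight = np.exp(-((out - blurred) ** 2) / (2 * sigma_r ** 2))
        out = weight * blurred + (1 - weight) * out
    # 2. difference-of-Gaussians edge detection
    dog = gaussian_filter(out, edge_sigma) - gaussian_filter(out, edge_sigma * k)
    edge = np.abs(dog) > 0.01  # hypothetical threshold
    # 3. re-synthesis: flatten luminance into cartoon-like bins,
    #    then draw the detected edges in black
    quant = np.round(out * (levels - 1)) / (levels - 1)
    quant[edge] = 0.0
    return quant
```

Run on a frame, this gives the characteristic flat colour regions with dark outlines; the real algorithm’s per-frame operations are what make its temporal coherence possible, since no optical flow or frame-to-frame tracking is needed.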
They tested photographs of celebrities against their abstracted counterparts in various recognition and memory tasks, and concluded that the abstracted images were easier to remember and retrieve.
The last paper was Manga Colorization by Y. Qu, T. Wong and P. Heng. Manga comic books differ from traditional western comics in their detailing and in how this detail is constructed: most manga are black and white, with no grayscale to support shading. Instead, various patterns (dots, stripes and so on) and the relative spacing of their elements are used to depict different kinds of shading and ‘colours’. Colouring these traditional manga pages with tools like “bucket fill” or “flood fill” is not possible, because of the many gaps between the pattern elements. Also, sometimes the pattern can switch while the brightness the artist intended stays the same, or vice versa.
The paper offers a solution: the user smudges some rough strokes of coloured paint over the parts they want coloured, then selects whether to fill by brightness or by pattern. The maths are once again bonkers (projecting strokes onto a 3D cone, extracting brightness, filtering them). This paper didn’t pique my interest as much, because it tackles a very specific problem and I don’t see many other uses for it.
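The gap problem is the crux, and it can be illustrated without any of the paper’s machinery. A hypothetical stand-in for its pattern-aware fill: instead of matching raw pixel values (which the white gaps of a screentone would immediately break), grow the region wherever the local mean brightness stays close to the seed’s, so the fill jumps across gaps inside one pattern but stops where the pattern density changes. The function and its parameters below are my own invention, not the paper’s:

```python
import numpy as np
from collections import deque
from scipy.ndimage import uniform_filter

def grow_from_stroke(gray, seed, tol=0.1, win=5):
    """Grow a fill region from a stroke seed on a grayscale page.
    Pixels join the region if their windowed mean brightness is
    within tol of the seed's, letting the fill cross the white
    gaps inside a screentone pattern."""
    local = uniform_filter(gray.astype(float), size=win)
    target = local[seed]
    mask = np.zeros(gray.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    h, w = gray.shape
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < h and 0 <= nj < w and not mask[ni, nj]
                    and abs(local[ni, nj] - target) <= tol):
                mask[ni, nj] = True
                q.append((ni, nj))
    return mask
```

On a page where a checkerboard screentone sits next to a plain white area, a naive flood fill seeded on a white gap pixel would leak straight into the white area; this version stays inside the screentone, because the local mean there (~0.5) differs sharply from the white region’s (~1.0).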