SIGGRAPH paper summary: Digital Facial Engraving by Victor Ostromoukhov, 1999

This paper proposes another interesting technique for post-processing photographs or other images: a system for creating the appearance of traditional engraving, like you see on money or in the Wall Street Journal.

The user places a series of 2D Bézier patches to guide the direction of the lines, and creates masks to separate features. A ‘universal copperplate’ (essentially a greyscale heightmap of the furrow shape) is warped to follow the shape of each patch. A set of merging rules allows multiple line directions and features to overlap and interact. Finally, the source image’s brightness value drives a threshold operation on the warped and merged plate. The threshold is somewhat more complex than a basic clamp: it takes into account characteristics of the output medium and human vision to improve the result and keep tones in balance.
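The thresholding step is simple enough to prototype. Here is a minimal sketch, assuming `source` and `copperplate` are same-size greyscale arrays in [0, 1]; the `gamma` curve is my hypothetical stand-in for the paper’s more sophisticated tone-correction model:

```python
import numpy as np

def engrave(source, copperplate, gamma=1.0):
    """Ink an image by thresholding a warped 'universal copperplate'.

    source      -- 2D array of source-image brightness in [0, 1]
    copperplate -- 2D array (same shape): greyscale heightmap of the
                   furrow pattern, already warped along the patches
    gamma       -- hypothetical stand-in for the paper's tone correction,
                   which models the output medium and human vision
    """
    tone = source ** gamma
    # Ink wherever the furrow height rises above the local tone: dark
    # regions ink most of the furrow width, light regions only its crest.
    return np.where(copperplate > tone, 0.0, 1.0)

# Toy usage: horizontal sawtooth furrows engraving a left-to-right ramp.
h = w = 256
plate = (np.arange(h)[:, None] % 8) / 8.0 * np.ones((1, w))
ramp = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
out = engrave(ramp, plate)
```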

One innovation this allows is color engraving: creating CMYK separations with engraved lines rather than halftones or stochastic stippling. Also, engravings need not be limited to the traditional lines and mezzotint; other shapes and patterns could easily be used.
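A sketch of how the CMYK idea might look in code, with each separation thresholded against its own rotated copy of the plate. The classic halftone screen angles used as defaults here are my assumption, not the paper’s:

```python
import numpy as np
from scipy.ndimage import rotate

def color_engrave(cmyk, base_plate, angles=(15, 75, 0, 45)):
    """Engrave each CMYK separation with its own line direction.

    cmyk       -- dict of 2D ink-density arrays in [0, 1],
                  keyed 'c', 'm', 'y', 'k'
    base_plate -- unrotated furrow heightmap in [0, 1]
    angles     -- one screen angle per channel (classic halftone
                  angles, used purely as an illustrative default)
    Returns a dict of binary separations (1.0 = ink).
    """
    seps = {}
    for (name, density), angle in zip(cmyk.items(), angles):
        plate = rotate(base_plate, angle, reshape=False, mode='wrap')
        # Ink where the furrow height sits below the ink density, so
        # denser channels get wider engraved lines.
        seps[name] = np.where(plate < density, 1.0, 0.0)
    return seps
```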

As presented, this technique is limited to still images. What would it take to make this operate as a shader or post-process for animation? Could the UV information provided by the patches be driven by surface UVs? Or by some kind of surface normal information? Line size would need to be driven by output resolution, of course. I’ve seen CG art that (so far as I could tell) mapped an engraving pattern to the geometry and then clamped the result based on light direction/intensity. Used carefully, it can look good, but the lines are locked to the geometry, like zebra stripes, while the lines in a true engraving seem to me to sit somewhere between the object and the picture. Swimming through lines locked to the picture frame would not be acceptable either, of course, yet when the object recedes, the line size needs to stay consistent in screen space. A better solution, it seems, would be to render surface direction and light intensity, and use those to draw and clamp the lines after the fact.
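To make that last idea concrete, here is a guess at what such a post-process might look like: a stripe pattern evaluated per pixel from a rendered direction buffer, thresholded by rendered light intensity, with a fixed screen-space frequency so line width does not change as the object recedes. The buffer names and the simple sine-stripe pattern are my assumptions, not anything from the paper:

```python
import numpy as np

def screen_space_engrave(direction, intensity, frequency=40.0):
    """Ink lines as a post-process over rendered G-buffers.

    direction -- 2D array of per-pixel stroke angles in radians
                 (e.g. a projected surface-parameter direction)
    intensity -- 2D array of rendered light intensity in [0, 1]
    frequency -- number of lines across the frame; fixed in screen
                 space, so stroke size stays constant with depth
    """
    h, w = direction.shape
    y, x = np.mgrid[0:h, 0:w]
    # Distance across the stroke flow; a real implementation would
    # integrate the direction field for phase-coherent lines.
    phase = (x * np.cos(direction) + y * np.sin(direction)) / w
    plate = 0.5 + 0.5 * np.sin(2.0 * np.pi * frequency * phase)
    return np.where(plate > intensity, 0.0, 1.0)  # ink the dark side
```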

SIGGRAPH paper summary: Art-Based Rendering of Fur, Grass, and Trees by Michael Kowalski, et al.

A (sort of) particle-based system for an illustrative rendering style that balances the screen-space density of strokes against world-space coherence, providing some interframe coherence. It starts with a conventionally rendered image as reference and places graftals* according to a “desire” map. As each stroke is placed, it subtracts a blurred version of itself from the desire map, and so on until the desire map is exhausted, thus preventing overly dense stroke placement (a minimal sketch of this loop follows the footnote below). The paper provides techniques for scaling strokes in screen space as the underlying geometry scales in camera space, for deciding how much of each stroke to draw based on factors such as surface facing angle, and for fading strokes out as they are no longer needed, to reduce popping. Each stroke aligns itself to the camera based on user rules such as “always point down” for fur or “always orient clockwise” for truffula tufts. The technique does not fully solve interframe coherence, but it gives a starting point.

*Alvy Ray Smith’s concept of “graftals” (a portmanteau of “fractal” and “graphics”): image information that is generated algorithmically/implicitly, and only when requested.
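The placement loop sketched here is my reading of the paper’s description, with a peak-normalised Gaussian blob standing in for the blurred stroke footprint (the actual paper subtracts a blurred copy of the stroke itself):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def place_graftals(desire, sigma=4.0, strength=0.75, max_strokes=5000):
    """Greedy graftal placement driven by a 'desire' map.

    desire -- 2D array; high values mark pixels wanting strokes
    sigma  -- radius of the (stand-in) blurred stroke footprint
    Returns a list of (row, col) stroke seed positions.
    """
    desire = desire.astype(float).copy()
    seeds = []
    while len(seeds) < max_strokes:
        r, c = np.unravel_index(np.argmax(desire), desire.shape)
        if desire[r, c] <= 0.0:
            break                         # desire map is exhausted
        seeds.append((r, c))
        # Subtract a blurred footprint so nearby desire drops,
        # preventing overly dense stroke placement.
        blob = np.zeros_like(desire)
        blob[r, c] = 1.0
        blob = gaussian_filter(blob, sigma)
        desire -= strength * blob / blob.max()
    return seeds
```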

Overall this is an idea with potential. The paper points to a lot of interesting prior work, too. Another Brown University project!


SIGGRAPH paper summary: Rendering Parametric Surfaces in Pen and Ink by Georges Winkenbach & David H. Salesin, University of Washington

I can still remember the first SIGGRAPH I attended, in 1996. Through the generosity of MTV Networks, I had a full conference pass, which meant conferences and papers (which I barely attended) and the full set of publications, which I devoured eagerly afterwards. This paper really blew my mind, and I still think it’s a technique with a lot of potential that hasn’t been well exploited. Way cooler than

Winkenbach’s paper presents what looks like a robust technique for inking parametric (e.g. NURBS) surfaces. Ink line direction is controlled by the surface parameterization, and tone by modulating stroke thickness. Strokes can be sharp or wiggly, and short strokes become stippling. Tone can be driven by lighting (they present a technique for shadows), by a texture map, or by something else entirely (e.g. reflection mapping).
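A minimal sketch of the hatching idea, with a toy callable standing in for a real NURBS evaluator; the function names and the per-sample width modulation are my assumptions about one reasonable shape for such an API:

```python
import numpy as np

def isoparametric_strokes(surface, tone, n_lines=60, samples=200,
                          max_width=3.0):
    """Hatch a parametric surface along one parameter direction.

    surface -- callable (u, v) -> (x, y), a projected surface point
               (toy stand-in for evaluating and projecting a NURBS)
    tone    -- callable (u, v) -> darkness in [0, 1], from lighting,
               a texture map, or anything else
    Returns a list of polylines as (x, y, width) triples; stroke
    thickness carries the tone, as in the paper.
    """
    strokes = []
    for v in np.linspace(0.0, 1.0, n_lines):   # one stroke per iso-v line
        stroke = [(*surface(u, v), max_width * tone(u, v))
                  for u in np.linspace(0.0, 1.0, samples)]
        strokes.append(stroke)
    return strokes

# Toy usage: a wavy sheet shaded darker toward one edge.
sheet = lambda u, v: (u, v + 0.1 * np.sin(4.0 * np.pi * u))
shade = lambda u, v: v
lines = isoparametric_strokes(sheet, shade)
```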

The technique could probably be adapted to use UV mapping coordinates. The authors tantalizingly suggest generating strokes “along directions that are more intrinsic to the geometry of the surface—for example, along directions of principal curvature.” Temporal coherence, as far as I can tell, is not addressed. What does this technique look like over a series of frames, when the surface is translating and deforming?
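The principal-curvature suggestion is standard differential geometry, so it can be sketched directly: build the first and second fundamental forms from the surface’s partial derivatives, then take the eigenvectors of the shape operator. This is the textbook construction, not code from the paper:

```python
import numpy as np

def principal_directions(r_u, r_v, r_uu, r_uv, r_vv):
    """Principal curvatures and directions at one surface point.

    r_u, r_v          -- first partial derivatives (3-vectors)
    r_uu, r_uv, r_vv  -- second partial derivatives (3-vectors)
    Returns (k1, k2, d1, d2): the principal curvatures and the
    corresponding directions expressed in (u, v) parameter space.
    """
    n = np.cross(r_u, r_v)
    n = n / np.linalg.norm(n)                    # unit surface normal
    E, F, G = r_u @ r_u, r_u @ r_v, r_v @ r_v    # first fundamental form
    L, M, N = r_uu @ n, r_uv @ n, r_vv @ n       # second fundamental form
    # Shape operator S = I^-1 II; its eigenvectors are the principal
    # directions, its eigenvalues the principal curvatures.
    S = np.linalg.inv(np.array([[E, F], [F, G]])) @ np.array([[L, M], [M, N]])
    k, d = np.linalg.eig(S)
    i = np.argsort(k.real)[::-1]
    return k.real[i[0]], k.real[i[1]], d.real[:, i[0]], d.real[:, i[1]]
```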

This approach is not available in any commercial package that I am aware of, but I wish it were.
