SIGGRAPH paper summary: Digital Facial Engraving by Victor Ostromoukhov, 1999

This paper proposes another interesting technique for the post-processing of photographs and other images: a system for creating the appearance of traditional engraving, like you see on money or in the Wall Street Journal.

The user places a series of 2D Bezier patches to guide the direction of the lines, and creates masks to separate features. A ‘universal copperplate’ (essentially a greyscale heightmap of the furrow shape) is warped to follow the shape of each patch. A set of merging rules allows multiple line directions and features to overlap and interact. Finally, the source image’s brightness is used to drive a threshold operation on the warped and merged plate. The transfer function is somewhat more complex than a basic clamp: it takes into account characteristics of the output medium and of human vision to improve the result and keep tones in balance.
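To make the thresholding step concrete, here is a minimal sketch of the final operation, assuming the warped, merged plate and the source tone are already available as float arrays in [0, 1]. The paper’s actual transfer function also corrects for the output device and perceived tone, which this sketch omits.

```python
import numpy as np

def engrave(plate: np.ndarray, tone: np.ndarray) -> np.ndarray:
    """Threshold the warped copperplate against source tone.

    Both inputs are assumed to be float arrays in [0, 1] of the same
    shape (0 = black). A pixel is inked wherever the source tone falls
    below the plate height, so darker tones widen the engraved lines.
    """
    # 0.0 = black ink, 1.0 = white paper
    return np.where(tone < plate, 0.0, 1.0)
```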

One innovation this allows is color engraving: creating CMYK separations with engraved lines rather than halftones or stochastic stippling. Also, engravings need not be limited to the traditional lines and mezzotint; other shapes and patterns could easily be used.
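As a hedged sketch of what per-channel engraving might look like, the same threshold can be applied to each ink separation against its own plate (for example, four plates warped to different line directions, much as halftone screens are rotated to different angles). The array layout and names here are assumptions, not the paper’s interface.

```python
import numpy as np

def engrave_cmyk(cmyk: np.ndarray, plates: np.ndarray) -> np.ndarray:
    """Threshold each ink separation against its own plate.

    `cmyk` is an H x W x 4 array of ink coverage in [0, 1] (1 = full
    ink); `plates` is a 4 x H x W stack of warped copperplates, e.g.
    one line direction per ink. Plate construction (warping, merging)
    is assumed done elsewhere.
    """
    out = np.empty_like(cmyk)
    for c in range(4):
        # Lay ink where this channel's coverage exceeds the plate height.
        out[..., c] = np.where(cmyk[..., c] > plates[c], 1.0, 0.0)
    return out
```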

As presented, this technique is limited to still images. What would it take to make this operate as a shader or post-process for animation? Could the UV information provided by the patches be driven by surface UVs, or by some kind of surface-normal information? Line size would need to be driven by output resolution, of course. I’ve seen CG art that (so far as I could tell) mapped an engraving pattern to the geometry and then clamped the result based on light direction and intensity. Used carefully, it can look good, but the lines are locked to the geometry, like zebra stripes, while the lines in a true engraving seem to me to sit somewhere between the object and the picture. Lines locked to the picture frame would swim unacceptably as the camera moves, of course, but when the object recedes, the line size needs to stay consistent in screen space. A better solution might be to render surface direction and light intensity, and use those to draw and clamp the lines after the fact, as sketched below.
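A rough sketch of that last idea, under loose assumptions: render a per-pixel line direction (say, a surface tangent projected to screen space, stored as an angle) and a lit intensity into G-buffers, then build a screen-space stripe pattern that follows the direction field and threshold it against intensity. This naive construction ignores the seams a varying direction field introduces (exactly the problem the paper’s patch warping solves), so treat it as a starting point, not a solution.

```python
import numpy as np

def engrave_gbuffers(direction: np.ndarray, intensity: np.ndarray,
                     freq: float = 0.15) -> np.ndarray:
    """Hypothetical post-process: engraved stripes from rendered G-buffers.

    `direction` holds a per-pixel line angle in radians (e.g. a surface
    tangent projected to screen space); `intensity` is lit brightness
    in [0, 1]. A screen-space sawtooth runs along the direction field
    at a fixed frequency in cycles per pixel, so line size stays
    constant on screen regardless of object distance.
    """
    h, w = intensity.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Coordinate measured across the stripes (perpendicular to the lines).
    across = -xs * np.sin(direction) + ys * np.cos(direction)
    plate = (across * freq) % 1.0  # sawtooth heightmap in [0, 1)
    # Ink where the lit intensity falls below the local plate height.
    return np.where(intensity < plate, 0.0, 1.0)
```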