

SIGGRAPH paper summary: Digital Facial Engraving by Victor Ostromoukhov, 1999

Another interesting technique for post-processing photographs and other images: this paper proposes a system for recreating the appearance of traditional engraving, like you see on money or in the Wall Street Journal.

The user places a series of 2D Bézier patches to guide the direction of the lines, and creates masks to separate features. A ‘universal copperplate’ (essentially a greyscale heightmap of the furrow shape) is warped to follow the shape of each patch. A set of merging rules allows multiple line directions and features to overlap and interact. Finally, the source image’s brightness drives a threshold operation on the warped and merged plate. The thresholding is somewhat more complex than a basic clamp: it takes into account characteristics of the output medium and human vision to improve the result and keep tones in balance.
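Out of curiosity, here’s a minimal NumPy sketch of just that final thresholding step, assuming the warped and merged plate is already available as a greyscale array; the gamma term is my crude stand-in for the paper’s tone-correction model, not its actual formula:

```python
import numpy as np

def engrave(source, plate, gamma=1.0):
    """Threshold a warped 'universal copperplate' heightmap against
    source brightness to produce black-and-white engraving lines.

    source: 2D float array in [0, 1], image brightness (1 = white)
    plate:  2D float array in [0, 1], warped furrow heightmap
    gamma:  crude placeholder for the paper's tone correction,
            which models the output medium and human vision
    """
    tone = source ** gamma
    # Ink goes down wherever plate height exceeds the adjusted brightness:
    # dark regions yield thick lines, light regions thin ones.
    return np.where(plate > tone, 0.0, 1.0)
```

The plate itself could be as simple as a tiling sawtooth ramp running across the furrow direction; the patch warping and merging rules are where the paper’s real work happens.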

One innovation this allows is color engraving: creating CMYK separations with engraved lines rather than halftones or stochastic stippling. Also, engravings need not be limited to the traditional lines and mezzotint; other shapes and patterns could easily be used.
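Color separation on top of that looks like mostly bookkeeping: run the same threshold once per CMYK channel, each channel with its own plate (presumably warped to a different line direction, like halftone screen angles). A hypothetical wrapper around the `engrave` sketch above:

```python
def engrave_cmyk(cmyk, plates, gamma=1.0):
    """Hypothetical color engraving: one engraved separation per
    CMYK channel, each with its own (e.g. rotated) plate.

    cmyk:   list of four 2D arrays in [0, 1], ink coverage per channel
    plates: list of four warped plate heightmaps, one per channel
    """
    # Convert ink coverage to brightness (1 = no ink) before thresholding.
    return [engrave(1.0 - ink, plate, gamma)
            for ink, plate in zip(cmyk, plates)]
```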

As presented, this technique is limited to still images. What would it take to make this operate as a shader or post-process for animation? Could the UV information provided by the patches be driven by surface UVs? Or by some kind of surface normal information? Line size would need to be driven by output resolution, of course. I’ve seen CG art that (so far as I could tell) mapped an engraving pattern to the geometry and then clamped the result based on light direction/intensity. Used carefully, it can look good, but the lines are locked to the geometry, like zebra stripes, while the lines in a true engraving seem to me to sit somewhere between the object and the picture. Swimming through lines locked to the picture frame would not be acceptable, of course, but when the object recedes, the line size needs to stay consistent in screen space. A better solution might be to render out surface direction and light intensity, and use those to draw and clamp the lines after the fact, as sketched below.
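To make that speculation concrete, here’s a rough sketch of the deferred idea, assuming the renderer can write out a screen-projected surface-direction pass and a light-intensity pass (both hypothetical render outputs); the per-pixel phase is a crude approximation, since a real version would integrate the direction field rather than sample it pointwise:

```python
import numpy as np

def screen_space_engraving(direction, intensity, frequency=40.0):
    """Hypothetical post-process: draw engraving lines in screen space,
    oriented by a surface-direction pass and clamped by light intensity.

    direction: HxWx2 unit vectors, surface direction projected to screen
    intensity: HxW array in [0, 1], rendered light intensity
    frequency: line count across the frame, so line width stays
               constant in screen space as the object recedes
    """
    h, w = intensity.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Phase advances perpendicular to the direction field, so lines of
    # constant phase run parallel to the surface direction.
    phase = xs * direction[..., 1] - ys * direction[..., 0]
    plate = 0.5 + 0.5 * np.sin(phase * 2.0 * np.pi * frequency / w)
    return np.where(plate > intensity, 0.0, 1.0)
```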

Complex, algorithmically generated spot and stripe patterns that conform to and react to topology!

SIGGRAPH paper summary: Generating Textures on Arbitrary Surfaces Using Reaction-Diffusion by Greg Turk, 1991

Oh man, this one is so cool! Has it ever been implemented in commercial software? The paper presents a technique for synthesizing animal spot and stripe patterns on polygon surfaces, plus a second technique for mapping those patterns without a UV grid.

Reaction-diffusion is a phenomenon in which multiple chemicals diffuse across a surface at different rates while reacting with one another, eventually reaching a stable state. Alan Turing hypothesized that this mechanism might account for the differentiation and organization of parts in embryonic development, and worked out equations that describe the process. Biologically accurate or not, it can be used to generate complex, distinctive, and convincingly organic spot patterns that conform to any kind of polygon topology. Given a set of rules for a style, tweaking the initial seed values creates varied but consistent patterns, and it scales nicely. Input values could also be driven by painted textures, to specify desired results like the wider stripes on the zebra’s rump.
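To get a feel for the mechanism, here’s a small NumPy sketch of a Turing-style two-chemical system on a flat grid (Turk runs it on an irregular mesh instead); the equations follow the classic s(16 − ab) / s(ab − b − β) form, but the specific parameter values here are ballpark assumptions to tweak rather than the paper’s:

```python
import numpy as np

def turing_spots(n=128, steps=4000, da=0.25, db=0.0625, s=0.03):
    """Turing-style reaction-diffusion on an n x n grid with wraparound.
    Parameter values are assumptions, chosen to land in spot territory."""
    rng = np.random.default_rng(0)
    a = np.full((n, n), 4.0)
    b = np.full((n, n), 4.0)
    # Small random variation in beta is what seeds the pattern.
    beta = 12.0 + rng.uniform(-0.1, 0.1, (n, n))

    def lap(u):
        # 4-neighbor Laplacian with periodic boundaries
        return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

    for _ in range(steps):
        ab = a * b
        a, b = (a + s * (16.0 - ab) + da * lap(a),
                b + s * (ab - b - beta) + db * lap(b))
        b = np.clip(b, 0.0, None)   # keep concentrations non-negative
    return b   # concentration of b carries the spot pattern
```

On a mesh, the main change is that the grid Laplacian becomes a sum over each cell’s irregular neighbors, which is exactly where the texture-space construction below comes in.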

This technique could be used with traditional UV coordinates, but Turk describes a novel (to me, anyway) technique for creating a texture space of arbitrary topology that lets the R-D algorithm better suit the geometry. In short, it creates a reference mesh with regularly-sized triangles, places points randomly on that surface, then uses a relaxation method based on repulsion to spread the points evenly across the surface. It then calculates Voronoi regions for these points, and uses the lengths of the region boundaries to control diffusion in the R-D calculation.
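Here’s a toy 2D version of the relaxation step, just to show the repulsion idea; the paper performs this on the mesh surface itself, keeping points within triangles as they slide around, and then uses the Voronoi boundary lengths between neighboring cells to weight diffusion in the R-D simulation:

```python
import numpy as np

def relax_points(points, radius=0.1, step=0.05, iters=50):
    """Spread points evenly by mutual repulsion; a flat stand-in for
    Turk's on-surface relaxation. points: Nx2 array in the unit square."""
    for _ in range(iters):
        delta = points[:, None, :] - points[None, :, :]  # pairwise offsets
        dist = np.linalg.norm(delta, axis=-1)
        np.fill_diagonal(dist, 1e9)          # ignore self-interaction
        dist = np.maximum(dist, 1e-9)        # guard coincident points
        # Neighbors inside the repulsion radius push linearly harder the
        # closer they are; everything farther away contributes nothing.
        push = np.clip(radius - dist, 0.0, None)
        force = (delta / dist[..., None]) * push[..., None]
        points = np.clip(points + step * force.sum(axis=1), 0.0, 1.0)
    return points

# e.g.: pts = relax_points(np.random.default_rng(1).random((400, 2)))
```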

In film production you’d normally paint these maps, but if you had to generate a large group of similar animals (or maybe just organic fill patterns), this technique would be the ticket. How hard would this be to implement? Is the texture coordinate method useful, or has it been surpassed by superior techniques? How practical is it compared to a skilled (or unskilled) painter? The paper states that it takes “several hours” to generate the textures for a 64,000-point model on a DEC3100. Soooo… maybe this is something that can run on an iPad?
