
SIGGRAPH paper summary: Digital Facial Engraving by Victor Ostromoukhov 1999

Another interesting technique for post-processing photographs and other images: this paper presents a system for recreating the look of traditional engraving, the kind you see on currency or in the Wall Street Journal.

The user places a series of 2D Bézier patches to guide the direction of the lines, and creates masks to separate features. A ‘universal copperplate’ (essentially a greyscale heightmap of the furrow shape) is warped to follow the shape of each patch. A set of merging rules lets multiple line directions and features overlap and interact. Finally, the source image’s brightness drives a threshold operation on the warped, merged plate. This step is somewhat more complex than a basic clamp: it accounts for characteristics of the output medium and of human vision to improve the result and keep tones in balance.
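The thresholding step at the heart of this can be sketched in a few lines of NumPy. This is a minimal stand-in, assuming a straight-line plate rather than Ostromoukhov’s warped universal copperplate; the `engrave` helper and its parameters are my own invention:

```python
import numpy as np

def engrave(brightness, spacing=8, angle=0.0):
    """Threshold a sinusoidal 'universal copperplate' against image brightness.

    brightness: 2D float array in [0, 1] (the source image tone).
    Returns a binary ink image: 1 where ink is laid down, 0 for paper.
    """
    h, w = brightness.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Rotate coordinates so the furrows follow the requested direction.
    u = xs * np.cos(angle) + ys * np.sin(angle)
    # Triangle-wave heightmap in [0, 1]: the cross-section of a furrow.
    plate = np.abs(((u / spacing) % 1.0) * 2.0 - 1.0)
    # Dark source pixels push the threshold up, widening the inked furrow.
    return (plate > brightness).astype(np.uint8)

# A left-to-right brightness ramp turns into lines of varying thickness.
gradient = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
ink = engrave(gradient, spacing=8)
```

Dark regions of the source yield wide inked furrows, light regions thin ones, which is the basic tone-by-line-weight trade the paper then refines with its medium- and vision-aware threshold.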

One innovation this allows is color engraving: creating CMYK separations with engraved lines rather than halftones or stochastic stippling. Also, engravings need not be limited to the traditional lines and mezzotint; other shapes and patterns could easily be used.

As presented, this technique is limited to still images. What would it take to make it operate as a shader or post-process for animation? Could the UV information provided by the patches be driven by surface UVs? Or by some kind of surface-normal information? Line size would need to be driven by output resolution, of course. I’ve seen CG art that (so far as I could tell) mapped an engraving pattern to the geometry and then clamped the result based on light direction/intensity. Used carefully, it can look good, but the lines are locked to the geometry, like zebra stripes, while the lines in a true engraving seem to me to sit somewhere between the object and the picture. Swimming through lines locked to the picture frame would not be acceptable either, of course, but when the object recedes, line size needs to stay consistent in screen space. A better solution might be to render surface direction and light intensity, and use those to draw and clamp the lines after the fact.

Complex, algorithmically generated spot and stripe patterns that conform and react to topology!

SIGGRAPH paper summary: Generating Textures on Arbitrary Surfaces Using Reaction-Diffusion by Greg Turk 1991

Oh man, this one is so cool! Has it ever been implemented in commercial software? Presents a technique for synthesizing animal spot and stripe patterns on polygon surfaces, plus a second technique for mapping these spots without a UV grid.

Reaction-diffusion is a phenomenon where multiple chemicals diffuse at different rates across a surface, eventually reaching a stable state. Alan Turing hypothesized that this mechanism might account for differentiation and organization of parts in embryonic development, and worked out equations that describe the process. Biologically accurate or not, it can be used to generate complex, distinctive, and convincingly organic spot patterns that conform to any kind of polygon topology. Given a set of rules for a style, tweaking initial seed values creates varied but consistent patterns, and it scales nicely. Input values could also be driven by painted textures, to specify desired results like the wider stripes on the zebra’s rump.
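For the flat-grid case (setting aside Turk’s mesh machinery for now), Turing’s two-chemical system is only a few lines of NumPy. The parameter values below are my own assumptions, merely in the ballpark of published ones; real pattern quality takes tuning:

```python
import numpy as np

def laplacian(f):
    """5-point Laplacian with wrap-around (toroidal) boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def turing_spots(n=64, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    a = np.full((n, n), 4.0)          # both chemicals start at steady state
    b = np.full((n, n), 4.0)          # (a*b = 16 and a*b - b - beta = 0)
    beta = 12.0 + rng.uniform(-0.1, 0.1, (n, n))  # random substrate breaks symmetry
    s, da, db, dt = 0.03125, 0.25, 0.0625, 0.5    # assumed reaction/diffusion rates
    for _ in range(steps):
        a += dt * (s * (16.0 - a * b) + da * laplacian(a))
        b += dt * (s * (a * b - b - beta) + db * laplacian(b))
        b = np.maximum(b, 0.0)        # concentration can't go negative
    return b                          # threshold b to get the spot pattern

pattern = turing_spots()
```

The two chemicals diffuse at different rates while reacting, and the slight randomness in the substrate `beta` is what seeds the distinctive, organic variation the paper exploits.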

This technique could be used with traditional UV coordinates, but Turk describes a novel (to me, anyway) technique for creating a texture space of arbitrary topology that lets the R-D algorithm better suit the geometry. In short, it creates a reference mesh with regularly sized triangles, places points randomly on that surface, then uses a relaxation method, based on repulsion, to spread the points evenly across the surface. It then calculates Voronoi regions for these points, and uses the lengths of those region boundaries to control diffusion in the R-D calculation.
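The relaxation step is the easiest part to picture in code. Here is a sketch on a unit sphere standing in for Turk’s triangle mesh, with the Voronoi step omitted; the function name and step sizes are assumptions of mine:

```python
import numpy as np

def relax_on_sphere(n=200, iters=50, step=0.05, seed=1):
    """Spread points evenly over the unit sphere by mutual repulsion,
    a stand-in for Turk's relaxation over an arbitrary triangle mesh."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(iters):
        d = p[:, None, :] - p[None, :, :]              # pairwise offsets
        dist = np.linalg.norm(d, axis=2) + np.eye(n)   # avoid self-division
        f = (d / dist[:, :, None] ** 3).sum(axis=1)    # 1/r^2 repulsion
        p += step * f / n
        p /= np.linalg.norm(p, axis=1, keepdims=True)  # project back to surface
    return p

points = relax_on_sphere()
```

On a real mesh the projection step is the hard part (points must slide across triangle boundaries), but the repel-then-reproject loop is the same idea.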

In film production, normally you’d paint these maps, but if you had to generate a large group of similar animals (or maybe just organic fill patterns), this technique would be the ticket. How hard would this be to implement? Is the texture coordinate method useful or has it been surpassed by superior techniques? How practical is it compared to a skilled (or unskilled) painter? The paper states that it takes “several hours” to generate the textures for a 64,000 point model on a DEC3100. Soooo… maybe this is something that can run on an iPad?

[Image: a painterly chameleon, one of the more interesting styles created in this paper]

SIGGRAPH paper summary: Painterly Rendering with Curved Brush Strokes of Multiple Sizes by Aaron Hertzmann 1998

A novel approach to algorithmically painting photographs, video, or other source images. It builds up the painted frame from large strokes to smaller ones, drawing strokes along the normals of image gradients where there is sufficient contrast, the idea being to place more small strokes in areas of high detail and fewer, larger strokes in low-detail areas. Some of the example images are pretty neat, but the algorithm does not address temporal coherence, so it’s not suitable for animation. Perhaps using optical flow to move longer-lived brush strokes would take it further.
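The coarse-to-fine layering can be caricatured with flat square “strokes” in place of Hertzmann’s curved spline strokes. This toy `paint` function and its threshold are invented for illustration, not taken from the paper:

```python
import numpy as np

def paint(src, brush_sizes=(8, 4, 2), threshold=0.05):
    """Layered painting, caricatured: each pass repaints a flat square
    'stroke' only where the canvas still differs from the source by more
    than the threshold, so fine brushes touch only high-detail areas."""
    canvas = np.full_like(src, float(src.mean()))    # flat base coat
    h, w = src.shape
    for r in brush_sizes:                            # largest brush first
        for y in range(0, h - r + 1, r):
            for x in range(0, w - r + 1, r):
                patch = src[y:y + r, x:x + r]
                err = np.abs(canvas[y:y + r, x:x + r] - patch).mean()
                if err > threshold:
                    canvas[y:y + r, x:x + r] = patch.mean()  # one stroke
    return canvas

src = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))    # left-to-right ramp
out = paint(src)
```

The real algorithm traces long curved strokes along gradient normals instead of stamping squares, but the error-driven refinement from big brushes to small is the core of the layering idea.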

SIGGRAPH paper summary: Art-Based Rendering of Fur, Grass, and Trees by Michael Kowalski et al. 1999

A (sort of) particle-based system for an illustrative rendering style that balances screen-space stroke density against world-space coherence, providing some interframe coherence. It starts with a conventionally rendered image as reference and places graftals* according to a “desire” map. As each stroke is placed, a blurred version of it is subtracted from the desire map, and so on until the desire map is exhausted, which prevents overly dense stroke placement. The paper provides techniques for scaling strokes in screen space as they scale in camera space (does that make any sense?), for deciding how much of each stroke to draw based on factors such as surface facing angle, and for fading strokes out as they are no longer needed, to reduce popping. Each stroke orients itself to the camera according to user rules such as “always point down” for fur or “always orient clockwise” for truffula tufts. The technique does not fully solve interframe coherence, but it gives a starting point.

*Alvy Ray Smith’s concept of “graftals” (a portmanteau of fractals and graphics): image information that is generated algorithmically/implicitly, and only when requested.
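The desire-map mechanism is simple enough to sketch. This greedy loop is my own minimal reading of it, with a flat footprint standing in for the blurred stroke image:

```python
import numpy as np

def place_strokes(desire, stroke_footprint, max_strokes=500):
    """Greedy graftal placement: repeatedly put a stroke at the point of
    highest remaining desire, then subtract the stroke's footprint so
    nearby locations become less attractive."""
    desire = desire.copy()
    fh, fw = stroke_footprint.shape
    placed = []
    for _ in range(max_strokes):
        y, x = np.unravel_index(np.argmax(desire), desire.shape)
        if desire[y, x] <= 0:
            break                           # desire map is exhausted
        placed.append((y, x))
        y0, x0 = y - fh // 2, x - fw // 2   # stamp footprint around the peak
        ys = slice(max(y0, 0), min(y0 + fh, desire.shape[0]))
        xs = slice(max(x0, 0), min(x0 + fw, desire.shape[1]))
        desire[ys, xs] -= stroke_footprint[ys.start - y0:ys.stop - y0,
                                           xs.start - x0:xs.stop - x0]
    return placed

# Uniform desire with a soft square footprint gives evenly spread strokes.
desire = np.ones((32, 32))
footprint = np.full((8, 8), 0.5)
strokes = place_strokes(desire, footprint)
```

In the paper the desire map itself comes from the reference render (e.g. darker or grassier areas want more strokes), so density follows tone automatically.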

Overall this is an idea with potential. The paper points to a lot of interesting prior work, too. Another Brown University project!

SIGGRAPH paper summary: Computer-Generated Watercolor by Cassidy J. Curtis, Sean E. Anderson, Joshua E. Seims, Kurt W. Fleischer, David H. Salesin 1997

Presents a series of techniques for modeling the behavior of watercolor paint on paper in rather thorough detail. It simulates edge darkening, backruns, glazing, dry-brush, and several other characteristics of watercolor. The technique can be used for painting à la Photoshop (although the paper notes that interactive painting is beyond the hardware of the day, a 133 MHz SGI R4600!), for creating watercolor from a photograph with manually specified mattes, or from a 3D render. Temporal coherence is not solved (the technique is prone to ‘shower-door’ artifacts), but the single-frame results are quite nice. It’s a considerably more advanced model than the one used by Painter or Photoshop.
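Edge darkening, for instance, comes from pigment migrating to the boundary of a wet area as it dries. A crude mask-based caricature of that effect (nothing like the paper’s shallow-water fluid simulation, and entirely my own construction):

```python
import numpy as np

def edge_darken(mask, base=0.5, rim=0.3):
    """Deposit extra pigment on the boundary cells of a wet region:
    cells inside the mask that have at least one dry 4-neighbour."""
    m = mask.astype(bool)
    interior = (m & np.roll(m, 1, 0) & np.roll(m, -1, 0) &
                np.roll(m, 1, 1) & np.roll(m, -1, 1))
    boundary = m & ~interior
    return base * m + rim * boundary

wash = np.zeros((16, 16), dtype=bool)
wash[4:12, 4:12] = True                  # a square wet wash
pigment = edge_darken(wash)
```

The real model moves water and pigment with a fluid simulation over a textured paper height field, which is why its rims look irregular and organic rather than uniform like this one.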

SIGGRAPH paper summary: Rendering Parametric Surfaces in Pen and Ink by Georges Winkenbach & David H. Salesin, University of Washington

I can still remember the first SIGGRAPH I attended, in 1996. By the generosity of MTV Networks, I had a full conference pass, which meant conferences and papers (which I barely attended) and the full set of publications, which I devoured eagerly afterwards. This paper really blew my mind, and I still think it’s a technique with a lot of potential that hasn’t been well exploited.

Winkenbach’s paper presents what looks like a robust technique for inking parametric (i.e. NURBS) surfaces. Ink line direction is controlled by the surface parameterization, and tone by modulating stroke thickness. Strokes can be sharp or wiggly; short strokes become stippling. Tone can be controlled by lighting (they present a technique for shadows), by a texture map, or by something else entirely (e.g. reflection mapping).
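The tone-by-thickness idea reduces to something very small. Here straight horizontal strokes stand in for the paper’s parameter-aligned curves; the `hatch` helper and its spacing are invented for illustration:

```python
import numpy as np

def hatch(tone, spacing=6):
    """Hatching where stroke thickness encodes tone: each horizontal
    stroke thickens (in pixels) as the local tone darkens."""
    h, w = tone.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for cy in range(spacing // 2, h, spacing):       # one stroke per band
        half = np.rint(tone[cy] * spacing / 2).astype(int)  # darker -> thicker
        for x in range(w):
            if half[x] > 0:
                out[max(cy - half[x], 0):min(cy + half[x], h), x] = 1
    return out

# tone in [0, 1], 1 = darkest; a ramp yields strokes that swell rightward
tone = np.tile(np.linspace(0.0, 1.0, 60), (60, 1))
ink = hatch(tone)
```

In the paper the stroke paths follow the u or v parameter lines of the surface and can waver or break into stipples, but the tonal control is this same thickness modulation.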

The technique could probably be adapted to utilize UV mapping coordinates. The authors tantalizingly suggest generating strokes “along directions that are more intrinsic to the geometry of the surface—for example, along directions of principal curvature.” Not addressed, as far as I can tell, is temporal coherence. What does this technique look like over a series of frames, when the surface is translating and deforming?

This approach is not available in any commercial package that I am aware of, but I wish it were.
