Archive for November, 2010


SIGGRAPH paper summary: Art-Based Rendering of Fur, Grass, and Trees by Michael Kowalski, et al.

A (sort of) particle-based system for an illustrative rendering style, balancing screen-space stroke density against world-space placement to provide some interframe coherence. It starts with a conventionally rendered image as a reference and places graftals* according to a “desire” map. As each stroke is placed, a blurred version of it is subtracted from the desire map, and so on until the desire map is exhausted, which prevents overly dense stroke placement. The paper provides techniques for scaling strokes in screen space as they scale in camera space (does that make any sense?), for deciding how much of each stroke to draw based on factors such as surface facing angle, and for scaling back strokes that are no longer needed, to reduce popping. Each stroke orients itself to the camera based on user rules such as “always point down” for fur or “always orient clockwise” for truffula tufts. The technique does not fully solve interframe coherence, but it gives a starting point.

*Alvy Ray Smith’s concept of “graftals” (a portmanteau of fractals and graphics): image information that is generated algorithmically/implicitly and only when requested.

Overall this is an idea with potential. The paper points to a lot of interesting prior work, too. Another Brown University project!
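To make the desire-map loop described above a bit more concrete, here is a minimal sketch of how such a placement scheme might look. It is my own toy reconstruction, not the paper’s code: the function name, the Gaussian bump standing in for the “blurred stroke,” the stopping threshold, and the greedy pick-the-peak strategy are all assumptions made for illustration.

```python
# Toy "desire map" stroke placement: greedily place strokes where desire is
# highest, subtracting a blurred footprint of each stroke so later strokes
# avoid that area. Not Kowalski et al.'s implementation.
import numpy as np

def place_graftals(desire, stroke_radius=4.0, threshold=0.1, max_strokes=5000):
    """Return a list of (x, y) stroke positions chosen from the desire map."""
    h, w = desire.shape
    ys, xs = np.mgrid[0:h, 0:w]
    placements = []
    for _ in range(max_strokes):
        idx = np.argmax(desire)
        y, x = np.unravel_index(idx, desire.shape)
        if desire[y, x] < threshold:
            break                              # desire map is used up
        placements.append((x, y))
        # "Blurred image of the stroke": here just a Gaussian bump centered
        # on the placement, subtracted to suppress nearby future strokes.
        bump = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * stroke_radius ** 2))
        desire = np.maximum(desire - bump, 0.0)
    return placements

# Usage: the desire map would come from a conventionally rendered reference
# image (denser where fur/grass coverage should be heavier); random noise
# here is just a stand-in so the sketch runs on its own.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.random((64, 64))
    strokes = place_graftals(reference)
    print(f"placed {len(strokes)} strokes")
```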

Computer-Generated Watercolor by Cassidy J. Curtis, Sean E. Anderson, Joshua E. Seims, Kurt W. Fleischer, David H. Salesin

Presents a series of techniques for modeling the behavior of watercolor paint on paper in rather thorough detail. Simulates edge darkening, backruns, glazing, dry-brush, and several other characteristics of watercolor. The technique can be used for painting à la Photoshop (although the paper notes that interactive painting is impossible on the hardware of the day, a 133 MHz SGI R4600 chip!), for creating a watercolor from a photograph with manually specified mattes, or for processing a 3D render. Temporal coherence is not solved (the technique is prone to “shower-door” artifacts), but the single-frame results are quite nice. It’s a considerably more advanced model than the one used by Painter or Photoshop.
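For flavor, here is a deliberately crude toy in the general spirit of the paper’s layered fluid model: water diffuses and evaporates, suspended pigment rides along, and pigment is deposited wherever a cell dries out. None of the constants or update rules come from the paper; this only illustrates the deposit-on-drying idea that underlies effects like edge darkening, and every parameter is an assumption.

```python
# A toy wash: explicit diffusion of water and pigment on a grid, uniform
# evaporation, and pigment deposition in cells that have no water left.
# NOT the authors' algorithm -- an illustrative sketch only.
import numpy as np

def simulate_wash(water, pigment, steps=200, diffusion=0.2, evaporation=0.005):
    """water, pigment: 2D float arrays of water height and suspended pigment."""
    deposited = np.zeros_like(pigment)
    for _ in range(steps):
        # Water spreads to its four neighbours (simple explicit diffusion).
        flow = np.zeros_like(water)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            flow += np.roll(water, shift, axis=axis) - water
        new_water = water + diffusion * flow

        # Pigment is diffused the same way, a crude stand-in for being
        # carried along by the moving water.
        pflow = np.zeros_like(pigment)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            pflow += np.roll(pigment, shift, axis=axis) - pigment
        pigment = pigment + diffusion * pflow

        # Evaporate; any cell with no water left deposits its suspended
        # pigment onto the paper.
        water = np.maximum(new_water - evaporation, 0.0)
        dried = water <= 0.0
        deposited[dried] += pigment[dried]
        pigment[dried] = 0.0
    return deposited + pigment    # visible pigment per cell at the end

if __name__ == "__main__":
    w = np.zeros((64, 64)); p = np.zeros((64, 64))
    w[24:40, 24:40] = 1.0          # a square blob of water...
    p[24:40, 24:40] = 0.5          # ...carrying pigment
    image = simulate_wash(w, p)
    print(image.max(), image.mean())
```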


SIGGRAPH paper summary: Rendering Parametric Surfaces in Pen and Ink by Georges Winkenbach & David H. Salesin, University of Washington

I can still remember the first SIGGRAPH I attended in 1996. By the generosity of MTV Networks, I had a full conference pass which meant conferences and papers (which I barely attended) and the full set of publications, which I devoured eagerly afterwards. This paper really blew my mind, and I still think it’s a technique with a lot of potential that hasn’t been well exploited. Way cooler than

Winkenbach’s paper presents what looks like a robust technique for inking parametric (e.g. NURBS) surfaces. Ink line direction is controlled by the surface parameterization, and tone by modulating stroke thickness. Strokes can be sharp or wiggly, and short strokes become stippling. Tone can be driven by lighting (they present a technique for shadows), by a texture map, or by something else entirely (e.g. reflection mapping).
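To make that mechanism concrete, here is a toy sketch (mine, not the authors’) that hatches a simple parametric patch along its constant-v isoparameter lines and thickens strokes where a plain Lambertian tone is darker, writing the result as SVG. Outlines, hidden-line removal, stroke textures, and the paper’s tone-matching machinery are all omitted, and the patch, light direction, and width mapping are arbitrary assumptions.

```python
# Hatch a parametric patch along isoparameter lines; stroke thickness
# follows tone (darker surface => thicker ink). Illustrative sketch only.
import math

def patch(u, v):
    """A wavy parametric patch over the unit square."""
    z = 0.25 * math.sin(2 * math.pi * u) * math.cos(2 * math.pi * v)
    return u, v, z

def normal(u, v):
    """Analytic unit normal of the patch above."""
    zu = 0.25 * 2 * math.pi * math.cos(2 * math.pi * u) * math.cos(2 * math.pi * v)
    zv = -0.25 * 2 * math.pi * math.sin(2 * math.pi * u) * math.sin(2 * math.pi * v)
    n = (-zu, -zv, 1.0)
    m = math.sqrt(sum(c * c for c in n))
    return tuple(c / m for c in n)

def hatch_svg(n_lines=30, samples=80, light=(0.4, 0.4, 0.8), size=400):
    m = math.sqrt(sum(c * c for c in light))
    L = tuple(c / m for c in light)
    segs = []
    for i in range(1, n_lines):            # one hatching stroke per v = const
        v = i / n_lines
        prev = None
        for j in range(samples + 1):
            u = j / samples
            x, y, z = patch(u, v)
            # Oblique projection so the surface undulation shows on screen.
            p = ((0.05 + 0.9 * x) * size,
                 (0.1 + 0.8 * (1.0 - y)) * size - 0.4 * size * z)
            if prev is not None:
                n = normal((u + uprev) / 2.0, v)
                tone = max(0.0, sum(a * b for a, b in zip(n, L)))
                width = 0.3 + 2.2 * (1.0 - tone)     # darker => thicker ink
                segs.append(f'<line x1="{prev[0]:.1f}" y1="{prev[1]:.1f}" '
                            f'x2="{p[0]:.1f}" y2="{p[1]:.1f}" stroke="black" '
                            f'stroke-width="{width:.2f}"/>')
            prev, uprev = p, u
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">'
            + "".join(segs) + "</svg>")

if __name__ == "__main__":
    with open("hatch.svg", "w") as f:
        f.write(hatch_svg())
```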

The technique could probably be adapted to utilize UV mapping coordinates. The authors tantalizingly suggest generating strokes “along directions that are more intrinsic to the geometry of the surface—for example, along directions of principal curvature.” Temporal coherence, as far as I can tell, is not addressed: what does this technique look like over a series of frames, when the surface is translating and deforming?

This approach is not available in any commercial package that I am aware of, but I wish it were.

The Lamest Blog Post

is one where the author says, “Ooops, haven’t updated in a while!”

I haven’t figured out how to make the blog a regular habit yet, so it’s going to be spotty for a while. (That’s okay, since no one’s reading it yet.) What I need is the habit of doing a little bit of work on the blog each day. I’ve been pretty good about making exercise a thrice-weekly habit, even going when I don’t particularly want to, and I feel great as a result. If you look carefully and take precise measurements, you might even notice the difference. Anyway, that’s one good habit I’ve built; time to move on to other ones.

The process of work itself is getting better; I am working more consciously and trying to choose the most efficient approach: brute force when it’s needed and cheats when they work. Previz is still a challenge for me because there are so many unanswered questions in each sequence, and I have an instinctive fear of spending time on approaches that might not work. (I really like finishing projects, because the path forward is usually so clear.) I’m working on letting go of the fear of failure and the fear of wasted time. As long as I’m working on solving the problems of the sequence, it’s not wasted time.

Last item: I joined ACM/SIGGRAPH so I could have access to the phenomenal collection of research papers they publish. I intend to:

  • read all the papers about NPR and associated technologies so I have a broad understanding of the state of the art
  • write capsule summaries on the blog of relevant/useful papers for my own reference (and build Google hits)
  • in the future, build tools based on the best underutilized ideas

Look for SIGGY paper summaries, coming soon.
