
Painter effect


This is a non-photorealistic rendering algorithm in which the gradient of the luminance of the input image is averaged iteratively to generate a vector field with many vortices and flows. This field determines the motion of a particle system. Each particle represents a brush that paints the canvas using the color of the underlying pixel of the image. The particle retains the same color for a short interval of time, which creates the impression of “flowing paint”. Depending on how the parameters are set, this effect can be made more or less pronounced. Other parameters, such as the motion noise and the brush size, can also be adjusted while the program runs.
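As a rough sketch of the field-construction step (not the exact shader from the program; the names and texture layout are illustrative), one averaging iteration can be written as a fragment shader that replaces each gradient vector with the mean of its four neighbors. Run repeatedly, ping-ponging between two textures, this smooths the raw luminance gradient into the large-scale vortices and flows that drive the brushes:

```glsl
// One averaging pass over the gradient field (illustrative, GLSL 1.x style).
// 'gradient' holds (dL/dx, dL/dy) of the image luminance in the R and G
// channels; 'texel' is 1.0 / texture resolution in each dimension.
uniform sampler2D gradient;
uniform vec2 texel;

void main() {
  vec2 uv = gl_TexCoord[0].st;
  vec2 g = texture2D(gradient, uv + vec2( texel.x, 0.0)).rg
         + texture2D(gradient, uv + vec2(-texel.x, 0.0)).rg
         + texture2D(gradient, uv + vec2(0.0,  texel.y)).rg
         + texture2D(gradient, uv + vec2(0.0, -texel.y)).rg;
  gl_FragColor = vec4(0.25 * g, 0.0, 1.0);
}
```

The number of averaging iterations is one of the parameters that controls how pronounced the vortices become.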


Non-photorealistic rendering

As modern rendering techniques further improve the realism of computer graphics, non-photorealistic rendering (NPR) explores more creative ways to represent digital images. One particular area within NPR is painterly rendering, where the visual results are inspired by those obtained with traditional painting media. The first implementation of a painterly rendering algorithm goes back to an article by Paul Haeberli from the early 90s. A lot of research has been conducted in this area since then, ranging from particle systems that generate entire animations and “energy functions” that quantify the “quality” of a computer-rendered painting, to genetic algorithms applied to the search for “globally optimal” paintings and renderings with multiple projections in the same scene.

The basic idea of the non-photorealistic algorithm I describe here was inspired by this article by Chung-Ming Wang and Jiunn-Shyan Lee, where the iterative averaging of the gradient of the image luminance leads to a vector field containing vortices centered around the points of high luminosity in the image. A particle system of small “brush particles” then paints the scene by following the directions in the vector field. Applying this algorithm in real time to static images or even video generates an effect of flowing paint, with motion patterns reminiscent of some of Vincent van Gogh’s paintings.

The next video shows the painter algorithm applied to two images drawn alternately:

GPU implementation using GLSL shaders

To achieve real-time performance, particularly when using live video as the input, the motion of the particles and the calculation of the gradient are done on the GPU using GLSL shaders. In the initial tests, two images alternated, morphing one into the other. When the image swap happened, the gradient of the old image was linearly interpolated at each pixel toward the corresponding vector of the new image. The same approach was used later to process frames coming from a video feed.
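The per-pixel interpolation during an image swap amounts to a small fragment shader along these lines (again an illustrative sketch, with assumed uniform names; GLSL's built-in mix() performs the linear interpolation):

```glsl
// Morph between the old and new gradient fields (illustrative).
// 't' ramps from 0.0 to 1.0 over the duration of the transition.
uniform sampler2D oldGradient;
uniform sampler2D newGradient;
uniform float t;

void main() {
  vec2 uv = gl_TexCoord[0].st;
  vec2 g0 = texture2D(oldGradient, uv).rg;
  vec2 g1 = texture2D(newGradient, uv).rg;
  gl_FragColor = vec4(mix(g0, g1, t), 0.0, 1.0);
}
```

For video input, the same shader can blend the field of the previous frame toward that of the incoming frame, which keeps the particle motion from jumping abruptly.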

The first implementation of the painter algorithm can handle around 10,000 particles at a smooth framerate. It does not make use of techniques such as Vertex Buffer Objects (VBOs) or texture displacement mapping to speed up the rendering.

One bottleneck of the first implementation is the texture read-back from GPU memory: in order to set the brush locations, the positions of the particles (which are computed on the GPU) need to be read into CPU memory. This step can be avoided by using texture displacement mapping (vertex displacement tutorial) in the vertex shader: the position texture is accessed in the vertex stage of the pipeline, and its values are used to alter the vertex positions. With this optimization, the painter was able to handle up to 40,000 particles at smooth framerates.
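A minimal vertex-displacement shader for this step might look as follows (a sketch, not the actual shader from the program; the texture layout is an assumption, and vertex texture fetch requires a Shader Model 3.0 GPU or later):

```glsl
// Vertex-displacement sketch (illustrative): each vertex of the particle
// stream carries a texture coordinate that indexes the position texture,
// so particle positions never leave GPU memory.
uniform sampler2D positions; // RG = particle (x, y) written by the update pass

void main() {
  vec2 p = texture2D(positions, gl_MultiTexCoord0.st).rg;
  gl_Position = gl_ModelViewProjectionMatrix * vec4(p, 0.0, 1.0);
  gl_FrontColor = gl_Color;
}
```

The key point is that the position texture is both the output of the particle-update pass and the input of the rendering pass, removing the CPU round-trip entirely.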

Up to this point, the calculations on each of the particles are triggered by creating a stream of vertices that are pushed through the GPU pipeline, with each vertex corresponding to one particle. Since the stream is generated on the CPU side from a uniform grid of vertices, the painter can be further optimized by packing these vertices into a VBO (VBO tutorial) that resides in GPU memory. In this way, there is no need for an expensive transfer of vertex coordinates from CPU to GPU at every iteration of the algorithm. With VBOs, the number of particles grows to ~100,000. Note that these performance measurements correspond to the hardware available when this algorithm was first implemented (around 2006, on an NVIDIA GeForce 6400). On more recent GPUs, acceptable framerates can be achieved with 200,000 or more particles.

The gallery below shows a couple of landscape images used as the source for the painter algorithm, together with some renderings obtained with different parameter combinations:

landscape1 landscape1p1 landscape1p2 landscape2 landscape2p1 landscape2p2

I also implemented a version of the painter algorithm that operates on live video obtained from a webcam. A small installation based on this system was set up in the Bermant Gallery at UCLA, on the occasion of the first exhibition of new graduate students in the Design|Media Arts department (Fall 2007). The picture below shows some attendees exploring the possibilities of the painter program in the gallery space, although somewhat limited by the size of the screen used at the time:

Current availability of the painter algorithm

The latest version of the painter algorithm is implemented as a Processing sketch using the GLGraphics library, and it is included as a built-in example of the library. Versions that operate on still images and on live video can be found in the GLGraphics zip package.

