Research
RealBrush: Painting with Examples of Physical Media
Jingwan Lu, Connelly Barnes, Stephen DiVerdi, Adam Finkelstein
ACM Transactions on Graphics (SIGGRAPH), July 2013
Conventional digital painting systems rely on procedural rules and physical simulation to render paint strokes. We present an interactive, data-driven painting system that uses scanned images of real natural media to synthesize both new strokes and complex stroke interactions, obviating the need for physical simulation. First, users capture images of real media, including examples of isolated strokes, pairs of overlapping strokes, and smudged strokes. Online, the user inputs an arbitrary new stroke path, and our system synthesizes the 2D texture appearance of the stroke, together with optional smearing or smudging behaviors when strokes overlap. We demonstrate high-fidelity paintings that closely resemble the style of the captured media, and also quantitatively evaluate the fidelity of our rendering methods via user studies.
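To make the data-driven idea concrete, here is a minimal Python sketch of sweeping a single scanned exemplar stroke along a new path by arc length; the canvas size, dense-vertex assumption, and nearest-pixel sampling are illustrative simplifications, not the paper's actual synthesis pipeline (which matches multiple exemplars and also handles smearing and smudging between strokes).

```python
import numpy as np

def arc_length_param(path):
    """Normalized cumulative arc length for a polyline of shape (N, 2)."""
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    return s / max(s[-1], 1e-9)

def sweep_exemplar(exemplar, path, width=16, size=512):
    """Re-warp an exemplar scanned along a straight horizontal spine onto
    `path` (assumed densely sampled); returns a painted canvas."""
    h, w = exemplar.shape[:2]
    t = arc_length_param(path)
    canvas = np.zeros((size, size) + exemplar.shape[2:], exemplar.dtype)
    for i in range(len(path) - 1):
        p, q = path[i], path[i + 1]
        tangent = (q - p) / max(np.linalg.norm(q - p), 1e-9)
        normal = np.array([-tangent[1], tangent[0]])
        u = int(t[i] * (w - 1))                   # exemplar column by arc length
        for k in range(-width // 2, width // 2):
            v = int((k / width + 0.5) * (h - 1))  # exemplar row by lateral offset
            x, y = (p + k * normal).astype(int)
            if 0 <= x < size and 0 <= y < size:
                canvas[y, x] = exemplar[v, u]
    return canvas
```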
HelpingHand: Example-based Stroke Stylization
Jingwan Lu, Fisher Yu, Adam Finkelstein, Stephen DiVerdi
ACM Transactions on Graphics (SIGGRAPH), August 2012
Paper : [PDF], Video : [MP4], SIGGRAPH Talk : [MP4], Executable : [ZIP] (Mac & Windows, 4.6 MB), Dataset : [ZIP], Readme
Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework can optionally also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.
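For intuition, here is a minimal Python sketch of the matching step, assuming a simple turning-angle descriptor and brute-force nearest-neighbor search; the paper itself uses richer features and enforces smoothness across consecutive samples.

```python
import numpy as np

def turning_angles(pts, i, k=4):
    """Local shape descriptor: turning angles in a window around sample i."""
    lo, hi = max(i - k, 1), min(i + k, len(pts) - 1)
    d = np.diff(pts[lo - 1:hi + 1], axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0])
    feat = np.diff(ang)                        # rotation-invariant
    return np.pad(feat, (0, 2 * k - len(feat)))

def synthesize_pose(query_xy, library):
    """library: list of (xy array (N,2), pose array (N,4)) artist strokes.
    Returns the missing 4D channels (pressure, 2D tilt, rotation) per sample."""
    out = []
    for i in range(len(query_xy)):
        f = turning_angles(query_xy, i)
        best, best_d = None, np.inf
        for xy, pose in library:
            for j in range(len(xy)):
                d = np.linalg.norm(turning_angles(xy, j) - f)
                if d < best_d:
                    best_d, best = d, pose[j]
        out.append(best)
    return np.array(out)                       # concatenate with query_xy for 6D
```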
Active Strokes: Coherent Line Stylization for Animated 3D Models
Pierre Bénard, Jingwan Lu, Forrester Cole, Adam Finkelstein, Joëlle Thollot
Proceedings of the 10th International Symposium on Non-photorealistic Animation and Rendering (NPAR), June 2012.
Paper : [PDF], Video : [MOV]
This paper presents a method for creating coherently animated line drawings that include strong abstraction and stylization effects. These effects are achieved with active strokes: 2D contours that approximate and track the lines of an animated 3D scene. Active strokes perform two functions: they connect and smooth unorganized line samples, and they carry coherent parameterization to support stylized rendering. Line samples are approximated and tracked using active contours ("snakes") that automatically update their arrangement and topology to match the animation. Parameterization is maintained by brush paths that follow the snakes but are independent, permitting substantial shape abstraction without compromising fidelity in tracking. This approach renders complex models in a wide range of styles at interactive rates, making it suitable for applications like games and interactive illustrations.
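As a rough illustration of the tracking step, the sketch below relaxes a snake toward detected line samples while a Laplacian term keeps it smooth; the weights and brute-force nearest-sample search are assumptions, and the paper additionally updates snake arrangement and topology each frame.

```python
import numpy as np

def snake_step(contour, samples, alpha=0.3, beta=0.25):
    """One relaxation step. contour: (N,2) polyline; samples: (M,2) line points."""
    # Attraction: move each vertex toward its nearest detected line sample.
    d = np.linalg.norm(contour[:, None, :] - samples[None, :, :], axis=2)
    target = samples[np.argmin(d, axis=1)]
    attract = target - contour
    # Smoothing: discrete Laplacian of the polyline (endpoints held fixed).
    lap = np.zeros_like(contour)
    lap[1:-1] = contour[:-2] - 2 * contour[1:-1] + contour[2:]
    return contour + alpha * attract + beta * lap
```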
Perceptual Models of Viewpoint Preference
Adrian Secord, Jingwan Lu, Adam Finkelstein, Manish Singh, Andrew Nealen
ACM Transactions on Graphics (TOG), Volume 30 Issue 5, October 2011
Paper : [PDF], Video : [MP4], Executable : [ZIP]
The question of what are good views of a 3D object has been addressed by numerous researchers in perception, computer vision, and computer graphics. This has led to a large variety of measures for the goodness of views as well as some special-case viewpoint selection algorithms. In this article, we leverage the results of a large user study to optimize the parameters of a general model for viewpoint goodness, such that the fitted model can predict people's preferred views for a broad range of objects. Our model is represented as a combination of attributes known to be important for view selection, such as projected model area and silhouette length. Moreover, this framework can easily incorporate new attributes in the future, based on the data from our existing study. We demonstrate our combined goodness measure in a number of applications, such as automatically selecting a good set of representative views, optimizing camera orbits to pass through good views and avoid bad views, and trackball controls that gently guide the viewer towards better views.
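The fitted model reduces to scoring each candidate view by a weighted combination of its attributes; the sketch below shows that scoring with placeholder attributes and weights (the paper's attribute set and fitted coefficients differ).

```python
def view_goodness(attrs, weights):
    """Weighted sum of normalized per-view attributes."""
    return sum(weights[k] * attrs[k] for k in weights)

# Hypothetical attributes and weights, for illustration only.
weights = {"projected_area": 0.4, "silhouette_length": 0.3,
           "surface_visibility": 0.3}
candidates = [
    {"projected_area": 0.8, "silhouette_length": 0.5, "surface_visibility": 0.7},
    {"projected_area": 0.6, "silhouette_length": 0.9, "surface_visibility": 0.4},
]
best_view = max(candidates, key=lambda a: view_goodness(a, weights))
```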
Interactive Painterly Stylization of Images, Videos and 3D Animations
Jingwan Lu, Pedro V. Sander, Adam Finkelstein
Proceedings of Symposium on Interactive 3D Graphics (I3D), February 2010
Paper : [PDF], Video : [WMV], Project Page
We introduce a real-time system that converts images, video, or 3D animation sequences to artistic renderings in various painterly styles. The algorithm, which is entirely executed on the GPU, can efficiently process 512^2 resolution frames containing 60,000 individual strokes at over 30 fps. In order to exploit the parallel nature of GPUs, our algorithm determines the placement of strokes entirely from local pixel neighborhood information. The strokes are rendered as point sprites with textures. Temporal coherence is achieved by treating the brush strokes as particles and moving them based on optical flow. Our system renders high quality results while allowing the user interactive control over many stylistic parameters such as stroke size, texture, and density.
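The temporal-coherence step can be illustrated in a few lines: strokes are treated as particles advected by per-frame optical flow, with off-screen strokes respawned. The flow lookup and respawn rule here are illustrative stand-ins for the GPU point-sprite implementation.

```python
import numpy as np

def advect_strokes(strokes, flow):
    """strokes: (N,2) float (x, y) pixel positions; flow: (H,W,2) optical flow."""
    h, w = flow.shape[:2]
    ij = np.clip(strokes.astype(int), 0, [w - 1, h - 1])   # nearest flow cell
    strokes = strokes + flow[ij[:, 1], ij[:, 0]]
    # Respawn strokes that drift off-screen (stroke density is otherwise
    # maintained by the local-neighborhood placement pass).
    off = (strokes[:, 0] < 0) | (strokes[:, 0] >= w) | \
          (strokes[:, 1] < 0) | (strokes[:, 1] >= h)
    strokes[off] = np.random.rand(off.sum(), 2) * [w, h]
    return strokes
```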
A GPU-Based Approach for Real-Time Haptic Rendering of 3D Fluids
Meng Yang, Jingwan Lu, Alla Safonova, Katherine J. Kuchenbecker
SIGGRAPH Asia 2008 sketch
IEEE International Workshop on Haptic Audio-Visual Environments and Games, November 2009
Sketch : [PDF], Paper : [PDF]
Real-time haptic rendering of three-dimensional fluid flow will improve the interactivity and realism of video games and surgical simulators, but it remains a challenging undertaking due to its high computational cost. Humans are intimately familiar with the look and feel of real fluids, so successful interactive simulations need to obey the mathematical relationships of fluid dynamics with high spatial resolution and fast temporal response. In this work we propose an innovative GPU-based approach that enables real-time haptic rendering of high-resolution 3D Navier-Stokes fluids. We show that moving the vast majority of the computation to the GPU allows for the simulation of touchable fluids at resolutions and frame rates significantly higher than those of other recent real-time methods, without the need for pre-computation. Based on our proposed approach, we build a haptic and graphic rendering system that allows users to interact with 3D virtual smoke in real time through the Novint Falcon, a commercially available haptic device.
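A hedged sketch of the haptic coupling: at each servo tick, sample the fluid velocity at the probe's grid cell and convert the relative flow into a drag-style force for the device. The drag model and constants are assumptions for illustration, not the paper's force computation.

```python
import numpy as np

def haptic_force(velocity_field, probe_pos, probe_vel, drag=0.8):
    """velocity_field: (D,H,W,3) grid of fluid velocities; probe_pos: (3,)
    position in grid units; probe_vel: (3,) probe velocity. Returns (3,) force."""
    d, h, w = velocity_field.shape[:3]
    i = np.clip(probe_pos.astype(int), 0, [w - 1, h - 1, d - 1])
    fluid_v = velocity_field[i[2], i[1], i[0]]   # nearest-cell velocity sample
    # Drag force proportional to the fluid's velocity relative to the probe.
    return drag * (fluid_v - probe_vel)
```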