
www.cgw.com January 2010

Flight Simulator: CG students use digital techniques for ornithology research

$6.00 USA $8.25 Canada




January 2010 • Volume 33 • Number 1

Innovations in visual computing for the global DCC community

Features

COVER STORY
8 Taking Flight
Computer graphics students at Cornell take up bird-watching, helping ornithologists further their research with the use of cutting-edge CG technologies.
By Barbara Robertson

16 The Tradition Lives On
Walt Disney Animation presents The Princess and the Frog, the studio's first hand-drawn animation since 2004, only this "hand-drawn" animation received a little bit of help from computer graphics.
By Barbara Robertson

20 Playing the Open-Eyed Dream
Movie director James Cameron and game developer Ubisoft team up to create an interactive title based on Avatar, as teams from both the film and the game collaborate on an unprecedented level.
By Martin McEachern

28 One Step at a Time
The animators on the most recent stop-motion production show they are, well, smart as a fox, as they incorporate touches of CGI into the movie when necessary, giving The Fantastic Mr. Fox its handcrafted look while getting the most benefit in production.
By Barbara Robertson

31 Real Illusion
The late legendary crooner Frank Sinatra makes a special appearance in holographic form, thanks to a great deal of complex rotoscoping and compositing work by SquareZero.
By Karen Moltenbrey

Departments

4 Editor's Note
Animation: An Extraordinary Medium. When we think of the word "animation," CGI comes to mind. While 2009 brought us a number of fantastic films created in the medium, it also presented us with others crafted by the more traditional methods of 2D and stop motion, renewing our appreciation of animation in all forms.

5 Spotlight
Products: Dell's Precision M6500. Pixar Animation Studios' RenderMan 15.0. Nvidia's Quadro FX 380 LP, Quadro FX 3800M, and Quadro FX 2800M.

6 Viewpoint
Procedural Randomness.

34 Education
Digital content creators should continue sharpening their skills so they are ready to take the next step in their careers when opportunities become available.

40 Back Products
Recent software and hardware releases.

SEE IT IN
• James Cameron on Avatar.
• Q&A with Lost and Mad Men editor Christopher Nelson.
• High-quality audio for lower-budget indie films.

Enhanced Content

Get more from CGW. We have enhanced our magazine content with related stories and videos online at www.cgw.com. Just click on the cover of the issue (found on the left side of the Web page), and you will find links to these online extras.

ON THE COVER

This image depicts motion paths from flight tunnel data along with the skeletal model of an ivory-billed woodpecker, which is believed to be extinct. The information garnered from these and other CG techniques is being used to extend ornithology research. See pg. 8.





Editor's Note

Animation: An Extraordinary Medium in All Its Forms

This past fall, Disney/Pixar released a special stereoscopic version of the groundbreaking computer-generated animated films Toy Story and Toy Story 2, taking us back in time to when CGI was in its infancy (see "Stereo Twice Over," October 2009). We were re-introduced to Buzz, Woody, and the toy gang—the first characters to star in an all-CG feature. Despite being modeled and animated in 3D, the characters made their theater debut in 2D. Nevertheless, the event, just 14 short years ago, was a pinnacle in animation and moviemaking.

Even when the sequel continued to break boundaries in animation four years later, this new medium never stopped intriguing moviegoers. They were drawn to CGI just like moths to the flame (though with a much happier ending). And what was initially unique turf for Pixar soon became the domain of other studios creating full-length CG movies. It may have taken others, including PDI/DreamWorks, a bit longer to establish themselves in this arena, but soon DreamWorks' Shrek zipped to second place in the top-10 animated movies of all time, becoming the first to break into the Disney/Pixar stronghold. (Blue Sky/Fox would also do so with Ice Age.) It hasn't taken long for us to equate animated films with CGI. In the minds of many, the two are synonymous.

In 2009, we were entertained by a number of colorful, slick, funny, and endearing computer-generated movies. Yet, this time, a new genre began to flex its box-office muscle: stereoscopic 3D, meant to enhance, not replace, CGI. In addition to enjoying the antics of Buzz and Woody in stereo (in anticipation of the 3D release of Toy Story 3), we were captivated by monsters and aliens, carried away by an older gentleman and an eager scout, warmed by the antics of a prehistoric lemur, saber-toothed tiger, neurotic squirrel, and woolly mammoth, and satisfied with the meal served up on a cloudy day—all in CGI, and all in three dimensions. While audiences could opt to watch these instant hits in 2D, the 3D versions brought them to life as never before. No gags, just better storytelling. Without question, CGI—or better yet, CGI with stereo—is here to stay.

Some, though, mourned the loss of traditional animation, which had waned as CGI's star rose. So, what a treat it was when a handful of traditionally animated films not only arrived, but thrived, at the box office. Stop-motion Coraline (which, incidentally, is in stereo, with a sprinkling of CG effects) and the newly released Fantastic Mr. Fox illustrate that the painstaking work of moving beautifully crafted puppets and props frame by frame is still deeply appreciated. In December, Walt Disney Animation also gave us a tasty treat, releasing the hand-drawn animated film The Princess and the Frog, marking a return to this classic style made famous by Disney for decades. Ironically, the film is executive-produced by none other than John Lasseter, director of the Pixar CGI hits Cars, Toy Story, and others. But then, if you dig deep, there is less irony to this than first thought. Lasseter had not lost sight of the fact that "traditional handcrafted animation had not lost its value as either art or entertainment."

As long as the essential ingredients are present in an animated film—compelling characters, rich scenery, and amazing storytelling—it will appeal to audiences, no matter the genre. There have been many successful CG films, and some that were hardly memorable. The same holds true for traditional 2D animated movies and those using stop motion.
We were just lucky enough in the past 12 months to have been treated to the best that animation of all kinds has to offer. Which animated feature film from 2009 is your favorite? Blog about it at cgw.com.

The Magazine for Digital Content Professionals

EDITORIAL

Karen Moltenbrey, Chief Editor

karen@cgw.com • (603) 432-7568 • 36 East Nashua Road, Windham, NH 03087

Contributing Editors

Courtney Howard, Jenny Donelan, Audrey Doyle, George Maestri, Kathleen Maher, Martin McEachern, Barbara Robertson

WILLIAM R. RITTWAGE

Publisher, President and CEO, COP Communications

SALES

Lisa Black

Associate Publisher National Sales • Education • Recruitment lisab@cgw.com • (903) 295-3699 fax: (214) 260-1127

Kelly Ryan

Classifieds and Reprints • Marketing kryan@copcomm.com (818) 291-1155

Editorial Office / LA Sales Office:

620 West Elk Avenue, Glendale, CA 91204 (800) 280-6446

PRODUCTION

KEITH KNOPF

Production Director Knopf Bay Productions keith@copcomm.com • (818) 291-1158

MICHAEL VIGGIANO Art Director

mviggiano@copcomm.com

Chris Salcido

Account Representative

csalcido@copprints.com • (818) 291-1144

Computer Graphics World Magazine is published by Computer Graphics World, a COP Communications company. Computer Graphics World does not verify any claims or other information appearing in any of the advertisements contained in the publication, and cannot take any responsibility for any losses or other damages incurred by readers in reliance on such content. Computer Graphics World cannot be held responsible for the safekeeping or return of unsolicited articles, manuscripts, photographs, illustrations or other materials. Address all subscription correspondence to: Computer Graphics World, 620 West Elk Ave, Glendale, CA 91204. Subscriptions are available free to qualified individuals within the United States. Non-qualified subscription rates: USA—$72 for 1 year, $98 for 2 years; Canadian subscriptions—$98 for 1 year and $136 for 2 years; all other countries—$150 for 1 year and $208 for 2 years. Digital subscriptions are available for $27 per year. Subscribers can also contact customer service by calling (800) 280-6446, opt 2 (publishing), opt 1 (subscriptions), or by sending an email to csr@cgw.com. Changes of address can be made online at http://www.omeda.com/cgw/ by clicking on customer service assistance.

Postmaster: Send Address Changes to

Computer Graphics World, P.O. Box 3551, Northbrook, IL 60065-3551 Please send customer service inquiries to 620 W. Elk Ave., Glendale, CA 91204

CHIEF EDITOR karen@CGW.com



Dell Unveils Powerful, New Mobile Workstation

Dell continued to push the boundaries of mobile performance by announcing what it bills as the world's most powerful mobile workstation, the Precision M6500. Shown publicly at Autodesk University recently, the Dell Precision M6500 delivers performance while managing massive amounts of data, making it desirable for creative professionals, designers, animators, engineers, research scientists, and defense customers needing authentication and data encryption. The M6500 enables memory scalability of up to 16GB with four DIMM slots. It is also the world's first mobile workstation to support DDR3 1600MHz memory, giving it a performance boost capable of handling virtually any mixture of engineering, database, and software-development workloads. The machine also has an optional Intel Core i7-920XM Quad Core Extreme Edition processor linked with fast 1066MHz, 1333MHz, and 1600MHz memory.

It is the first mobile workstation to offer the new Nvidia Quadro FX 3800M graphics solution, featuring 128 Cuda parallel computing cores; optimization for OpenGL 3.2, Shader Model 4.0, DirectX 10.1, DirectCompute, and OpenCL professional applications; a 256-bit memory interface; 1GB GDDR3 graphics memory; ultra-fast 64GB/sec graphics bandwidth; and Nvidia's PowerMizer 9.0 power management solution. Other graphics options include the Nvidia Quadro FX 2800M and the ATI FirePro M7740.

The workstation ships with an optional RGB LED, edge-to-edge, 17-inch screen with 100 percent user-selectable color gamut support. It also offers the option of three internal storage drives. Available now, the Dell Precision M6500 mobile workstation carries a starting price of $2749.

PRODUCT: MOBILE WORKSTATION

Pixar Releases RenderMan Pro Server 15.0 Software

Pixar Animation Studios unveiled Version 15.0 of its RenderMan Pro Server software. The release introduces many innovations, including unlimited threading per machine, volume primitives, important additions to the RenderMan Shading Language, support for Disney's forthcoming open-source Ptex per-face painted textures, imager shaders, an API for subdivision surfaces, and more. RenderMan Pro Server 15.0 also delivers performance increases for production scene rendering in the areas of raytracing, ambient occlusion, improved thread scalability, and optimized memory management. The introduction of unlimited threading enables artists to maximize the full power of their rendering hardware, allowing each license of RenderMan to utilize any number of threads on the latest multi-core platforms. RenderMan Pro Server 15.0 is the first product in the line to introduce unlimited threading, with others following shortly. RenderMan Pro Server 15 is compatible with Mac OS X, Linux 32-bit and 64-bit, Windows XP 32-bit, Windows Vista 32-bit and 64-bit, and Windows Vista 64-bit HPC Server. RenderMan Pro Server (Photorealistic RenderMan and Alfserver) is priced at $3500; upgrade pricing from RenderMan Pro Server 14.0 to 15.0 is also available.

PRODUCT: RENDERING

Nvidia Rolls Out New Quadro Graphics Solutions

Nvidia launched new Quadro professional graphics solutions for desktop and mobile workstations, designed to boost the productivity and enhance the creativity of 3D design professionals running Autodesk applications. Optimized and certified for AutoCAD, 3ds Max, and other Autodesk software, the new offerings include the Quadro FX 380 LP low-profile, entry-level graphics solution. The most affordable and flexible Quadro professional graphics solution, with a price of $169, it is ideal for design professionals moving from 2D to 3D. The Quadro FX 3800M and Quadro FX 2800M mobile workstation solutions, meanwhile, feature up to 128 Cuda cores for massively parallel computational graphics and deliver 30-bit color accuracy for display of more than one billion colors. With 1GB of dedicated graphics memory, they enable users to efficiently manipulate larger datasets in graphics-intensive 3D applications, such as Autodesk Revit Architecture, Inventor, and Moldflow. The mobile offerings enable real-time raytracing, interactive volume rendering, and the fastest video encoding available in a mobile workstation. The FX 3800M and FX 2800M sell for $742 and $270, respectively, on the Dell Precision M6500.

PRODUCT: GRAPHICS CARDS


By Gergely Vass


Procedural Randomness


A former Maya TD and instructor, Gergely Vass eventually moved to the Image Science Team of Autodesk Media and Entertainment. Currently he is developing advanced postproduction tools for Colorfront in Hungary, one of Europe’s leading DI and post facilities. Vass can be reached at gergely@colorfront.com.

The most basic pitfall of creating shaders or materials for 3D renderings is the use of too clean, too perfect textures. In reality, every single object, surface, or material is worn, dirty, or naturally imperfect to some extent. By photographing or scanning real textures, we get these patterns for free, which helps to "sell" the final 3D image. While procedural textures, generated completely by mathematical algorithms, have many advantages over acquired image textures, reproducing the natural randomness may be especially challenging. As we will see, there is much more to natural-looking noise textures than simply perturbing the RGB values with random numbers.

Randomness itself is a tricky thing in the realm of computer science, where everything is based on a series of deterministic commands. There is no way to let a typical processor flip a coin or roll dice; thus, it is not possible to produce "true" random numbers. Instead, computational algorithms are used to generate long sequences of apparently random results, called pseudo-random numbers. The heart of such algorithms is the iteration step: given an input number, we compute another, apparently independent one. By repeating this step, we can produce extremely long pseudo-random sequences that are perfect for many applications: computer games, music playlists, dynamic simulations, or even computer animation. For applications where true randomness is critical—lottery or cryptography, for instance—higher-quality random numbers are required. Before the 1950s, large tables of random numbers were published for use by mathematicians and scientists, but now it is possible to buy hardware-based random number generators (RNGs) or to rely on services, such as www.random.org, where the numbers are produced based on the physical measurement of atmospheric noise.

To emulate natural random processes, we can generate a series of pseudo-random numbers and, thus, form a digital random signal. However, not all noise signals are the same. Audio noise—a one-dimensional signal—may refer to a high-frequency "hiss" sound or a lower-frequency "humming" noise, as well. Based on their spectral statistical characteristics, one-dimensional noise signals are often classified using the "color" terminology: white noise, pink noise, brown noise, purple noise, and so forth. These typical (and sometimes only theoretical) random signals have an important role and are commonly used in acoustics, electrical engineering, and physics. White noise—for example, a signal that has the same power over all the frequencies—is essential in calibrating amplification systems (and is likely to drive you crazy if you happen to arrive too early to a summer music festival). By feeding the sound system with white noise, the audio engineer is able to determine which frequencies need to be equalized by measuring the power distribution of the output sound.
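To make the iteration step concrete, here is a minimal sketch of one classic scheme, the linear congruential generator; the constants are the widely published Numerical Recipes values, not the generator any particular renderer actually uses.

```python
# Minimal linear congruential generator (LCG): each call maps the current
# state to the next, so the seed completely determines the sequence.
class LCG:
    def __init__(self, seed: int):
        self.state = seed & 0xFFFFFFFF

    def next_float(self) -> float:
        # "Numerical Recipes" constants; the modulus is 2**32.
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state / 0xFFFFFFFF  # scale to [0, 1]

rng = LCG(seed=42)
print([round(rng.next_float(), 3) for _ in range(5)])
# Constructing LCG(seed=42) again replays exactly the same sequence.
```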

Planetside Software's Terragen 2 uses procedural noise textures to create virtual landscapes. Image by Saurav Subedi.

In computer graphics, one-dimensional random signals are primarily used to perturb animation channels. An "inactive" human character standing still in a computer game will never appear realistic unless some low-frequency random motion is applied to its muscles. The constant motion due to our body's physiological actions, including pumping blood or breathing, as well as the constant oscillation of muscle tissue, is typically very slow; thus, we need a random signal without any high-frequency components. A simple series of random numbers is of no use for such applications: We need algorithms generating random signals of controllable frequency range.

The same concept is true for noise textures—the secret ingredients to make realistic particle or crowd simulations, shaders, or lights. These two- or three-dimensional noise signals are not simply "soups" of random pixels, but an apparently random series of dark/bright spots. By constraining the sizes of the variations into a well-defined range, we get a chance to match any natural random pattern and avoid aliasing artifacts. (We certainly do not want random variations smaller than a pixel in the final image.) The visual properties—such as contrast and density—of noise textures need to be precisely controllable so we can use them as a building block to create various natural-looking shading networks.



As all natural random patterns, dirt, sand, grass, clouds, or grains have some distinct structure to them, we need to be able to produce the matching procedural texture of similar-sized random features. The first solution to this problem was proposed by Ken Perlin as early as 1983, just about the time when the movie Tron was released. Later, in 1997, Perlin received a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences for his contributions to procedural noise textures. Variations of so-called Perlin Noise are still commonly used; they are easy to control for artists and are able to produce realistic aesthetics. (For further information on Perlin Noise, visit www.noisemachine.com/talk1.)

When computing pseudo-random signals, we always start with an initial seed number. This single value—and the production steps, of course—completely determines the whole sequence. Is this a problem for us? Not at all! One benefit is the minimal memory requirement to store such random textures, regardless of resolution. The other critical aspect is repeatability. Imagine that we create a complex particle simulation with a turbulent dynamic field driven by a 3D noise texture. If the signal were truly random, we would have to save the complete simulation to disk, as the final sequence would look different each time we ran it. However, by initializing the noise generation with the same seed number, it is guaranteed that the virtual leaves blown by our turbulent wind will move the same each time we run the sim. But what if the client picks up on a single particle moving in a strange way? We can simply enter a different seed number and have a brand-new simulation with a turbulence of similar "statistical" properties.

While photographed dirt textures are images that need to be mapped onto the surface of 3D models, the basis of the Perlin (and almost all other) noise is a pseudo-random signal that fills the 3D space. Using 3D, or volumetric, procedural textures, we can completely avoid texture mapping, as each point of the surface is mapped to a texture coordinate without introducing any mapping artifacts. It is like inserting the object into a 3D cloud of noise and slicing out the final texture with the surfaces: There will not be any seams or areas of different resolution.

Another key property of Perlin Noise is that the sizes of the random variations are roughly the same. While a single layer of the texture may appear a bit too uniform, combining several layers, or octaves, of various characteristic sizes results in a very rich, realistic look. In doing this, we maintain complete control over the frequencies in the deterministic random signal.

Perlin Noise textures of one, two, and four octaves are used to displace the surface of the spheres.
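As a rough illustration of the octave idea (a simplified value noise, not Perlin's actual gradient-noise construction), the sketch below sums several seeded noise layers, doubling the frequency and halving the amplitude at each octave; the hash constants are arbitrary choices.

```python
import math

def _lattice(i: int, seed: int) -> float:
    # Integer hash mapping a lattice point to a repeatable value in [-1, 1].
    n = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return (n & 0xFFFF) / 32767.5 - 1.0

def value_noise(x: float, seed: int = 0) -> float:
    # Smooth interpolation between lattice values keeps all variation
    # near one characteristic size, avoiding a sub-pixel "soup."
    i = math.floor(x)
    t = x - i
    t = t * t * (3.0 - 2.0 * t)  # smoothstep fade curve
    return _lattice(i, seed) * (1.0 - t) + _lattice(i + 1, seed) * t

def fbm(x: float, octaves: int = 4, seed: int = 0) -> float:
    # Layer octaves of different characteristic sizes for a richer look;
    # the same seed always reproduces the same texture.
    total, amp, freq = 0.0, 1.0, 1.0
    for o in range(octaves):
        total += amp * value_noise(x * freq, seed + o)
        amp *= 0.5
        freq *= 2.0
    return total
```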



Science•Engineering

Taking Flight

Cornell computer graphics students inspire research in ornithology and aerospace engineering as they study how birds fly.

By Barbara Robertson

On April 25, 2004, at 3:42 pm central daylight time, in the Bayou de View area of Arkansas' Cache River National Wildlife Refuge, while riding in a canoe, researcher David Luneau videotaped a bird many believed to be extinct: an ivory-billed woodpecker. Or, did he?

Luneau, an associate professor at the University of Arkansas, along with scientists at the Cornell Lab of Ornithology, made their case in an article published in Science magazine, to much excitement. But, eventually, some experts and academics disagreed, saying that the four seconds of videotape showed a blurry pileated woodpecker, not the rare, possibly extinct, ivorybill.

That's when professor and computer graphics legend Donald Greenberg and several colleagues, notably graduate student Jeff Wang (now a character TD at PDI/DreamWorks), enter the picture, as does the Cornell Ornithology Lab. Because the two woodpeckers in question have black and white wings with opposite patterns, Greenberg and his group suggested that maybe they could animate an ivory-billed and a pileated woodpecker, simulate the camera in the videotape, and create sequences of images for each woodpecker that could be pattern-matched and compared to the video.

The ivory-billed woodpecker above is digital, created at Cornell University.

This simple suggestion attracted the attention of Kim Bostwick, a research scientist in the Cornell Ornithology Lab, and set in motion a series of projects that continue today. Wang built an accurate 3D representation of the ivorybill that has become a research vehicle for David Kaplan, another graduate student in computer graphics. Brendan Holt, a third CG grad student, collaborated with Bostwick to develop a method for motion-capturing wild birds in free flight that she is now adapting for a field study of manakins. “These are astonishing little birds found in Central and South America that sing with their wings,” she explains. Throughout the process, Bostwick, who uses high-speed video and detailed anatomy to study the functional morphology of birds, worked with the CG students, developing field methods and evaluating the model and the animation. “[The computer graphics students] traversed a very great distance from beginning to end,” Bostwick says. “In the first one or two weeks we started working together, they had us look at an animation they had put together. To most people, it would have looked good. But we saw birds with woodpecker feathers flying like vultures. From there, they developed a sophisticated model that incorporated all sorts of things about anatomy and how birds fly. And, they were asking questions most people don’t ask ornithologists, like how feathers work and what their motion is like, things we don’t know. We have a lot to learn about how birds fly.”

The ivorybill (below, right), with a white trailing edge on its underwing, differs from the pileated woodpecker (below, left), which has a black trailing edge.

Building the Bird

The first step was to build the model. Soon, the simple idea of matching the patterns on the wings evolved into creating a model that precisely matched an ivorybill. "Once we started getting into the literature from the ornithologists, it became more and more important to get it scientifically correct," Wang says.

Bostwick, who is the curator of the bird collection at the Cornell University Museum of Vertebrates (CUMV), provided the CG students with access to museum specimens normally available only to biologists. And, she made it possible to scan a one-of-a-kind specimen of an ivorybill, which had been stored in a jar at the Smithsonian for 60 years, with high-resolution computed tomography (CT) at the University of Texas at Austin's Digital Morphology (DigiMorph) lab. The lab scanned the bird in a natural pose, with the wings tucked, and with the wings slightly open—the specimen, which is the only fully intact one in existence, was too fragile to risk fully opening the wings.

From the volume data that resulted, Wang reconstructed the bird using Template Graphics' Amira visualization software for medical imaging. The scans had produced approximately 2000 slices, each with a resolution of 1024x1024. Rather than rendering a volume, Amira's thresholding algorithms generated an outline of the skin surface and contours for the skeleton. The process was far from automatic, however. For example, because the feathers and skin had the same density, the CT scan didn't distinguish between them, so Wang needed to separate feathers from skin. After editing, Wang had an accurate surface representation of the ivory-billed woodpecker's complete skin and skeleton in two poses.

Wang then moved the data into Autodesk's Maya and used the reconstruction as a reference model to create a lighter-weight model for animation, with joints in correct locations and a subdivision surface of the skin. He also referred to the stuffed ivory-billed woodpeckers from CUMV for external measurements. To best approximate the reconstructed model, Wang "snapped" the majority of the vertices in the base mesh for the skin to points in the reconstructed model, which was dense enough to make this possible. That gave Wang the skin and a rig.

Next, he needed to model the bird's feathers. "Because the people scanning the bird were afraid to unfurl the wings, we didn't get great information on the width of the feathers from the CT scan, but we got the length," he says. "However, we were concerned about the pattern the wing creates, not the microgeometry, so I wrote a tool to model the plane of the feathers quickly."
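Wang's pipeline used Amira, but the threshold-to-surface step he describes can be sketched with open tools. In the hedged example below, scikit-image's marching cubes extracts a triangle mesh at a chosen density level; the file name, array shape, and threshold are hypothetical stand-ins.

```python
import numpy as np
from skimage import measure  # pip install scikit-image

# Hypothetical CT stack: one 2D slice per scan, stacked into a 3D volume.
volume = np.load("ct_slices.npy")   # e.g., shape (2000, 1024, 1024)
level = 0.18                        # density threshold separating air from skin

# verts: (V, 3) points on the iso-surface; faces: (F, 3) triangle indices.
verts, faces, normals, values = measure.marching_cubes(volume, level)
print(f"extracted {len(verts)} vertices, {len(faces)} triangles")
```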

Animating the Bird

The reconstructed skeleton from the CT scans had provided precise geometrical information about the bones in the woodpecker's wings, but because the model was one solid object, Wang estimated the center point for animation joints. He would fly the bird using traditional keyframe animation; simulating the laws of aerodynamics was beyond the scope of his thesis. However, Wang leaned on ornithological research for the rotation angles, specifically on work by Ken Dial, who plotted rotation angles for the joints of a European starling. To animate the joints in his CG wing, Holt used the angles provided by Dial as rotations for forward kinematic controls. In addition, by incorporating information from a Science paper by Farish Jenkins that described the furcula, or the wishbone, as a spring, he added secondary motion.

"At that point, we had an ivory-billed woodpecker that flew like a starling," Wang says, "the same cycle over and over."

In addition, the team shot a high-speed video of a flying pileated woodpecker, the common but very large woodpecker that most resembles an ivorybill. By rotoscoping this bird in flight, the animators could match the rotations of the wing bones during each wing beat. The high-speed video of the pileated woodpecker and still photos also showed that in the transition between a downstroke and upstroke, the feathers oriented themselves in a nearly parabolic, continuous surface, and bent in response to aerodynamic loads during a wing beat—all of which affected the amount of black and white visible to the camera. The Cornell team mimicked these effects using mathematical orientation constraints.

The digital wing, with red rods representing individual feathers, flaps using IK and data captured from a red-winged blackbird in free flight. Overlaying the flapping wing on a video of the bird showed how closely the digital wing matched the bird's motion.

Photo by Arthur A. Allen. ©Cornell Lab of Ornithology.

The next steps were tracking Luneau's camera in the video to discover the path it took, and then matching it to a digital camera. Wang did the match using 2d3's Boujou. Once he knew the camera's path, he could calculate the path of the bird. "The ornithologists at Cornell who searched for the bird in Arkansas knew about how far away the bird was when it first took off from the tree," Wang says. "So we asked them to go back to the same place and measure how high they thought the bird was off the ground. They went back to Arkansas and measured the tree." Knowing the height of the tree and the distance from the camera, Wang could calculate the position of the bird in 3D space when it took off.


"For the rest of the flight path, we drew a ray from the camera, starting with the first position of the bird," Wang explains. Known data provided flight speed for the birds, and from that, they could calculate how far the bird flew within a frame. "Because we knew the 3D location, how fast the bird flies, and how much time passes between each frame, we could calculate a distance within one frame," Wang says. "That distance became the radius of a sphere. We knew the bird was somewhere on that sphere, but not exactly where. So to figure that out, we lined up the camera, an image plane with the video, and the sphere in 3D space. We then drew a line starting at the camera through the center of the bird in the video. The intersection of that ray and the sphere was the location of the bird. We did that for the entire video, which was only 100 frames or so, to get a three-dimensional flight path for the bird."
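The per-frame construction Wang describes reduces to a ray-sphere intersection: the bird's new position is where the camera ray through the bird's pixel pierces a sphere centered on the previous position, with radius equal to the distance flown in one frame. A sketch with hypothetical numbers:

```python
import math

def ray_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                         # the ray misses the sphere
    t = (-b + math.sqrt(disc)) / (2.0 * a)  # take the forward intersection
    return tuple(origin[i] + t * direction[i] for i in range(3))

radius = 9.0 / 30.0                # e.g., 9 m/s at 30 fps (hypothetical)
prev_pos = (0.0, 4.5, -11.64)      # last known 3D position of the bird
cam = (0.0, 1.5, 0.0)              # tracked camera position
ray = (0.01, 0.251, -0.968)        # ray through the bird's pixel this frame
print(ray_sphere(cam, ray, prev_pos, radius))
```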

Then, Wang animated the ivory-billed woodpecker model on that flight path, scaled the bird down to the size of the smaller pileated woodpecker, changed the colors on the pileated woodpecker's wings, and repeated the flight. "That gave us three sets of images," Wang says, "the original video, our animation of the ivory-billed woodpecker, and the animation of the pileated woodpecker."

Did either animation match the video? "This is where it gets tricky," Wang says. "We made so many assumptions, if we were to take what we did to the scientific press, it wouldn't hold water."

Greenberg agrees. "It's inconclusive," he says. "I find that with the data we have, it's impossible to make a conclusive statement. I could take a subset of frames and say, 'Unquestionably, it's an ivorybill.' And then take another subset of frames and it's not clear. I wish I could come up with a different answer. My buddies at the Cornell Ornithology Lab were praying that I could come up with a different answer. But the bird in the video was too far away, and it was a bird escaping. At its closest point it was 400 pixels. If you put your thumb on a normal-size TV screen, that was the image of the bird."

More important, except perhaps to the ornithologists who were hoping for proof, the project has opened new lines of research at Cornell. "None of us want to get into the debate about whether it's ivory-billed or pileated," Wang says. "That was the launching point, but it isn't the be-all, end-all of our project. It is the various disciplines coming together."

Motion-Capturing a Wild Bird

If you were to create an animation of a bird, of a woodpecker or any bird, for that matter, how would you know whether that animation was valid? Researchers have studied birds in wind tunnels, but the unnatural airflow in the confined space might have affected the birds' flight. Holt wanted to capture data from wild birds flying naturally and use that data to drive a wing. Wang, Kaplan, and Bostwick helped make that possible.

"We couldn't just capture wild songbirds," Holt says. "You have to have a licensed ornithologist work with you." Bostwick strung specially designed fine nets that the birds couldn't see between trees, put birdseed on the ground, and captured, over time, a variety of birds.

A CT scan of an ivory-billed woodpecker specimen from the Smithsonian helped the Cornell grad students create an accurate skeleton.



"We tried robins, chickadees, swallows, grackles, woodpeckers, and red-winged blackbirds," Holt says. "The little birds are easy to catch, but they fly fast. The big birds flap their wings more slowly, so we could get better data, but they're really smart. The grackles eventually figured out where the mist net was, and we never caught one again."

Bostwick released the birds in a flight tunnel, a wooden structure a little more than 13 meters long with ports along the length. A viewing pyramid juts out from one side, and another port sits on the top for looking down. When released, the wild birds see the light at the end of the tunnel and fly through to escape. In general, the usable flight data lasted less than one second. "We thought woodpeckers would be a good choice because of their patterned wings, but they flew into the tunnel, landed on the wooden ceiling, and stayed there," Holt says. Instead, they settled on red-winged blackbirds.

To capture the bird's motion, Holt applied markers hole-punched from 3M retroreflective tape that they could easily remove after the test. "Spheres would have been visible from more directions, but they would have interfered with the flight," he says. Bostwick's knowledge of bird anatomy was invaluable as Holt determined where to apply the markers. Because his goal was to capture bone orientation and feather deformation, he attached markers at the shoulder, elbow, and wrist joints, at the tip of the hand, and on the feathers. "Bird wings have basically the same bone structure as mammal arms," he says. "They have a humerus, radius, and ulna for the forearm, and a simplified hand with kind of a thumb bone and two digits that are a little bit flexible in some birds."

Although acquiring the best estimate of joint center rotation might have meant adding markers midway on the bones, they decided to minimize the number of markers for this pilot study. "We weren't sure how well this would work," Holt says. "And it's difficult and confusing to digitize lots of markers, so we kept it simple. Perhaps later, people will decide to continue the study using more markers."

Because the cameras would see both sides of the feathers during flight, the researchers applied markers to the tops and bottoms of a few feather tips, as well as midway along the length. "We figured that the most interesting deformation occurs near the distal part of the wing, near the hand," Holt explains. "The first nine feathers from the wing tip come off the hand bone. The feathers bend, rotate around the axis of the rachis, which is the quill, and splay like a fan opening and closing. We couldn't capture feather twisting with our limited set of markers, but we could get bending and a little bit of the splay."


With help from Cornell ornithologist Kim Bostwick, CG grad student Brendan Holt attached retroreflective motion-capture markers to the joints and feathers of a red-winged blackbird.

Holt and Bostwick positioned two Fastec Imaging TroubleShooter cameras inside the tunnel—one behind the bird and off to the side, and one mostly to the side and low to the ground. Kaplan built a custom light mount for the cameras to provide the bright light necessary for the high frame rate: The digital video cameras shoot 1280x1024-resolution images at 500 frames per second. So that they could locate points in 3D space later, the researchers placed a cube of known size in the region to calibrate the cameras.

To track the markers on the birds in flight captured by the video cameras and triangulate them, Holt used a MathWorks MatLab script written by Tyson Hedrick. "That gave us dots moving in space," Holt says. "We fit splines to their trajectories." And that gave Holt paths in space and time.
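Hedrick's MatLab script is the real tool here; as a generic illustration of the triangulation step, each calibrated camera turns a marker's pixel into a 3D ray, and the marker is reconstructed near where the two rays cross. A common recipe is the midpoint of the shortest segment between the rays (the ray data below is hypothetical):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    # Closest points on rays p1 = o1 + s*d1 and p2 = o2 + t*d2, found by
    # minimizing |p1 - p2|; the midpoint is the reconstructed marker.
    w = [o1[i] - o2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # approaches zero for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [o1[i] + s * d1[i] for i in range(3)]
    p2 = [o2[i] + t * d2[i] for i in range(3)]
    return tuple((p1[i] + p2[i]) / 2.0 for i in range(3))

# Two camera origins and pixel rays for one marker in one frame.
print(triangulate((0, 0, 0), (0.1, 0.2, 1.0), (1.5, 0, 0), (-0.3, 0.2, 1.0)))
```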

"It didn't let us see the wing, though," he says. "Initially, we treated each marker as a vertex of a mesh and connected the vertices to make a face. The resulting wing mesh gave a crude impression of a flapping motion, but it did not portray the underlying anatomy. So we built a more informative visualization using rods to represent feathers and bones."

To do that, Holt created a simple polygonal model in Maya and rigged it with inverse kinematics (IK) to drive the animation. Then, he applied the data from the motion capture to the joints. "The model has something like 18 feathers, but we had markers on only a few feathers," he says. "So we created a NURBS curve from the outermost feather tip to the shoulder using the other feather tips as control points. Then we evenly subdivided that curve by 18, the number of feathers." As the wing bones fold and open during flight, driven by the motion-captured data, the curve changes shape and size. The wing flaps.

"That's how we fit the feathers to the data," Holt says. "The feathers fan in and out and make a smooth surface. So, we have a model of a wing that we're driving with real motion data. It isn't an accurate simulation. The results are somewhat interpreted in the way we fit the feathers and bones to the data with IK. We had only two cameras, so we had missing data. And, it's hard to validate our work because there is very little research showing joint rotations of bones. But, we showed that it's possible to get free flight data from a bird without relying on a wind tunnel."
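The feather-fitting step Holt describes (fit a curve through the captured tip markers, then place 18 evenly spaced tips along it) amounts to arc-length resampling. The sketch below uses a polyline rather than a true NURBS curve, and the marker positions are hypothetical:

```python
import math

def resample(points, n):
    # Resample a 3D polyline into n points at even arc-length spacing.
    seg = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    total = sum(seg)
    out, i, acc = [], 0, 0.0
    for k in range(n):
        target = total * k / (n - 1)
        while i < len(seg) - 1 and acc + seg[i] < target:
            acc += seg[i]
            i += 1
        t = 0.0 if seg[i] == 0 else (target - acc) / seg[i]
        p, q = points[i], points[i + 1]
        out.append(tuple(p[j] + t * (q[j] - p[j]) for j in range(3)))
    return out

# Hypothetical tip markers from shoulder to outermost feather.
tips = [(0.0, 0.0, 0.0), (2.1, 0.4, 0.3), (3.8, 1.5, 0.2), (4.6, 3.0, 0.0)]
feather_tips = resample(tips, 18)   # one tip per feather, evenly spaced
```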

Flapping Wings

Kaplan, who had studied mechanical engineering as an undergraduate and moved into computer graphics as a graduate student, took the work done by Wang and Holt to the next scientific level. First, using a CUMV specimen, he scanned individual feathers of a red-winged blackbird, the same type of bird that Holt had motion-captured in the tunnel, with a 3D laser scanner. Then, working in Maya, he changed Wang's 3D model to match.



The CG feathers in the digital red-winged blackbird (at left), which have hundreds of polygons and are topologically accurate, help researchers look at flapping flight. Motion captured from a blackbird in free flight (above) can validate the digital animation.

"Jeff had made a very, very accurate 3D model," Kaplan says. "I changed the length and parameters to make it the size and proportions of a blackbird, removed the ivorybill feathers, and attached blackbird feathers." The feathers have hundreds of polygons and are topologically accurate. "Jeff's model was so morphologically accurate that I wanted to stay true to that attention to detail and keep as high a degree of accuracy on the feathers as I could," Kaplan says.

Kaplan had "printed" Wang's model of the woodpecker wings using a rapid-prototyping machine, and put two wings in a wind tunnel. "I measured the force data at a couple different air speeds and angles of attack. I also used a strobe light that illuminated helium bubbles to visualize the flow," he says. The wings were in a quasi-steady state, however. So, to look at flapping wings, he decided to use the digital blackbird feathers and computer simulations.

"I'm doing fairly rudimentary aerodynamics using Brendan's [Holt] wing beat of a red-winged blackbird and applying blade element analysis to each feather as the bird flaps," Kaplan says. "No one has done this before with an emphasis on geometry of this detail. Zoran Popovic wrote a paper a few years ago ['Realistic Modeling of Bird Flight Animations,' with Jia-Chi Wu, for SIGGRAPH 2003], but he modeled his feathers as two triangles with a hinge. And, he didn't validate the animation against motion-captured data like that Brendan [Holt] came up with."

What Kaplan hopes to achieve is a predictive model for flapping flight. "I don't know if we'll ever get there," he says. "We've studied nonflapping flight quite thoroughly, but as soon as something starts flapping, it's hard to predict the forces on that flapping body. There's an important interaction between the feathers and the air. Air pushes on the feathers, they bend and push back. It's like a spring-mass relationship."


"Brendan had markers along the length of the feathers so we can see the way they bend," Kaplan continues. "But, a feather twists around its axis to some degree, and that affects the aerodynamics enormously. A small feather twist can create a different angle of attack. Even though Brendan's data was accurate and great to work with, he didn't capture a number of degrees of freedom. It might not be possible. So, I'm trying to tweak the feathers by hand to see how much they change the lift and drag. I need to look at air speed and angle of attack for each polygon."

Someday, Kaplan's research might help aerospace engineers design small aircraft with flapping wings. "This is where it's all headed," he says. "We want to learn how to take advantage of the loopholes in the laws of aerodynamics that birds take advantage of by design."
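Blade element analysis of the kind Kaplan mentions divides the wing into small elements and sums aerodynamic forces over them. The sketch below uses textbook thin-airfoil approximations; the coefficients, areas, and airspeeds are made up for illustration and are not Kaplan's model.

```python
import math

RHO = 1.225  # air density at sea level, kg/m^3

def element_forces(area, v_air, alpha):
    # Thin-airfoil lift slope (2*pi per radian) and a crude drag polar,
    # reasonable only at small angles of attack.
    cl = 2.0 * math.pi * alpha
    cd = 0.02 + cl * cl / (math.pi * 6.0)
    q_area = 0.5 * RHO * v_air * v_air * area  # dynamic pressure * area
    return q_area * cl, q_area * cd            # (lift, drag) in newtons

# (area m^2, local airspeed m/s, angle of attack rad) per element, hypothetical.
elements = [(0.0004, 6.0, 0.10), (0.0004, 7.5, 0.12), (0.0004, 9.0, 0.15)]
lift = sum(element_forces(a, v, al)[0] for a, v, al in elements)
drag = sum(element_forces(a, v, al)[1] for a, v, al in elements)
print(f"lift {lift:.4f} N, drag {drag:.4f} N")
```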

Into the Wild

Bostwick had proposed a field study using motion capture long before meeting the CG team, so the collaboration was timely and fruitful; it helped her develop a protocol for the grant she received for the manakin study. Recently she led a team that took the system designed with the CG students into the mountains of Ecuador to capture the motion of manakins in free flight.

"I would have had no idea where to begin [designing the motion-capture system] without Don [Greenberg] and his students," Bostwick says. "The lenses, the sensors, the need to calibrate, the different perspectives, and then the whole world of taking data and putting points on a screen through time . . . that whole process of motion capture. I had no idea how to do it. These are incredible tools. And, most biologists don't know about them."

Bostwick and her crew packed 350 pounds of equipment—two TroubleShooter cameras, which can run on batteries, audio equipment, syncing devices, calibration cubes, and more—to study birds that make sounds with their wings.

"The very instant they produce a sound is important," Bostwick says. "So we have a customized device to sync the audio and the high-speed video to one millisecond."

The students will capture the manakins and, as did Holt with the red-winged blackbird, apply markers, then release the birds from a perch onto which they've attached the two cameras. To calibrate the cameras, the computer graphics students helped invent a special device. The Tinkertoy calibration "cube" looks something like a model of an atom—plastic balls with stems attached to form a 3D lattice that is firm, transportable, and easy to disassemble. "We put the bird on the perch, and after it flies away, we place the calibration cube on the perch and record it with our cameras using the same focus and zoom," Bostwick explains. "By pointing a laser from the cameras to the display perch (often many feet above the ground), we can measure the distance. Then, we can calculate backward to find points on the wings to create a volume."

With Bostwick on the field trip is one of Greenberg's students. "It's been a really fruitful collaboration," Bostwick says. "In my world, when people study bird anatomy and how it functions, they're pretty much limited to pigeons trained to fly in wind tunnels under specific circumstances. I wanted to become independent from the lab. I wanted to find creative methods to get information from wild birds doing their thing."

Thanks to Don Greenberg, a computer graphics pioneer who has long advocated collaboration between computer graphics and many departments, and the hard-working students he inspires, Bostwick's dream has become a reality.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.





Animation

Computer graphics tools subtly help artists create Walt Disney Animation’s first hand-drawn feature film since 2004

Although background painters created the lush environments using Adobe’s Photoshop, they did so one brush stroke at a time, much as they might have done using oil paints and watercolors.


Images ©2009 Disney Enterprises, Inc.



When Walt Disney Animation Studios quit making 2D animated features in favor of films made with 3D computer graphics, it signaled, for many people, the death of that traditional medium. Ironically, directors Ron Clements and John Musker—who were the first directors at Disney to use 3D computer graphics in a film (for the clockworks climax in The Great Mouse Detective), and the first to use CAPS, a computer-aided production system developed by Pixar and Disney for 2D films (for the next-to-last shot in The Little Mermaid)—have become the first directors to bring traditional animation back to Disney. The directing duo's latest film, The Princess and the Frog, is the first traditionally animated feature created at Disney in five years. It's entirely hand-drawn. Entirely hand-drawn, that is, with a little help from computer graphics: Toon Boom Animation's Harmony replaced CAPS, which is in semi-retirement, as the production system; Autodesk's Maya helped set designers build reference models; Side Effects Software's Houdini created some particle effects; Adobe's Photoshop provided tools for background painters; and that company's After Effects helped enliven those paintings. But all, very subtly.

Borrowing a technique from CG films, the directors moved beyond filming storyboards with dialog: They created animatics with Toon Boom's Harmony and Photoshop to evaluate staging and lighting.

Fairy-tale Frogs

The idea for the film had been rolling around Disney and Pixar for some time before John Lasseter, chief creative officer at Pixar and Disney Animation, had asked Clements and Musker to put their spin on the story. "We took elements from the Disney and the Pixar versions," Clements says, "and pitched it to John [Lasseter] and Ed Catmull [president of Walt Disney and Pixar Animation Studios] as an American fairy tale/musical set in New Orleans's French Quarter in the 1920s jazz age, and as a hand-drawn animation, with Randy Newman doing the music. There's a kind of romance and warmth and magic to hand-drawn animation."

Musker believes that this film, in particular, is appropriate for 2D animation. "People have struggled with human characters in CG," he says. "But human characters are one of the strengths of hand-drawn animation. And, drawings and paintings helped us accomplish the lyrical, romantic, warm, organic nature of the bayou."

In the story, a young African-American woman, Tiana [Anika Noni Rose], works hard to save money to open her own restaurant. One day, a frog appears on her windowsill. He's a prince from a faraway country who, while visiting New Orleans, tangled with a bad voodoo priest who turned him into a frog. Believing that Tiana is a princess, he persuades her to kiss him and break the spell. But, she isn't a princess, and the spell backfires. She turns into a frog, and the two become lost in the bayou.

"They're a mismatched couple," Musker says, "like Claudette Colbert and Clark Gable in It Happened One Night. Only he's more the Claudette Colbert character: rich, with not much sense of reality. She's the blue-collar person who has worked all her life."

Using plug-ins and scripts, Disney's R&D team incorporated color-blending and other techniques into Harmony from the studio's semi-retired, in-house computer-aided production system (CAPS).

2D Redux

Once they received the green light, the directors began looking for animators who could draw 2D performances. "Because hand-drawn animation was gone, it was almost like building the studio again," Clements says. "Some of the 2D artists had become 3D stars, but many had left. Yet, just about everybody who did draw wanted to come back. We put together an all-star team of animators."

In addition to current and former Disney animators, the production crew, which topped 300 at its peak, included recent graduates from the California Institute of the Arts. "They had studied hand-drawn animation without knowing if they'd have a place to apply their learning, and they blossomed into real talent," Musker says.

Clements adds, "With this type of animation, you have to work with a mentor to learn how to do it and get proficient. It's a craft and an art that requires a lot of dedication. But, there's an intuitive connection about drawing, from the brain to the hand to paper, that people miss with computer animation. With just the flip of a pencil, you can change an expression. That casual interaction is much tougher with 3D."

With Lasseter's encouragement, though, the directors borrowed a process that Pixar uses in creating its 3D animated features: layout animatics. Before, with traditional animation, they would film the storyboards and add the dialog track to see the film before they began animating. This time, they added staging and lighting. "We took the storyboards to the next step," Musker says. "We added camera moves and compositing. We wanted to know if the composition was strong enough to carry the idea quickly, so we composed all our shots in black and white to see the values. Being able to evaluate that in real time, with real lights and darks, was a valuable step."

Kim Keech, technical supervisor, explains that the layout artists created the animatics using Harmony and Photoshop.


In-house tools then linked individual scenes created in those programs to entire sequences. Effects artists also worked directly with Harmony; however, layout, character animation including all the in-betweens, and cleanup all originated on paper. "We didn't have any automatic in-betweens," says Marlon West, visual effects supervisor. "We have been doing early development on automatic in-betweening, but we did this film just like we would have done before." One change: For this film, all the animators had a desktop scanner to scan in their own drawings and composite an animation test at any stage they wanted.

What's Old Is New Again
For painting and compositing the approved drawings, the studio enhanced Harmony with plug-ins and by using the program's scripting capability. "Harmony comes with a set of plug-ins and compositing nodes, but we have the capability of developing our own, and the interface allowed that, so we developed a dozen or more plug-ins in-house," Keech says. In addition, the studio asked Toon Boom to incorporate some new tools. "We asked them


to implement the color picker and some functions we had in CAPS for color styling," Keech points out. The technical team at Disney also created plug-ins to mimic the look they had gotten from CAPS. "Because we have people in production who would say, 'I wish the software could do things like CAPS did,' we used the CAPS technology in our plug-ins to get a similar look, so the film would look more like a Disney movie," Keech says, adding, "I was a CAPS developer, so it was nice to see 2D come back."

Using the plug-ins Disney developed, colors in Harmony blend from one region to another on the characters' cheeks, for example, as they did in the CAPS system, with a soft, rather than a hard, line. The effects teams also asked for plug-ins. In their case, they wanted to reproduce CAPS' "turbulence." "It's a noise that moves slow or fast that we got used to for rain or mist," West says. And compositors requested plug-ins that imitated Shake functions.
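Neither Disney nor Toon Boom has published the turbulence plug-in's internals, but the behavior West describes, a coherent noise field that drifts slowly or quickly over time, is conventionally built by summing octaves of smooth value noise and sliding the sample point each frame. The Python sketch below illustrates that general technique only; it is not the studio's code, and the lattice size, octave count, and weights are arbitrary choices.

```python
import math, random

random.seed(7)
_lattice = [random.random() for _ in range(256)]  # random values on a grid

def _smooth(t):
    return t * t * (3.0 - 2.0 * t)  # smoothstep interpolation

def value_noise(x, y):
    """Bilinearly interpolated value noise on an integer lattice."""
    xi, yi = int(math.floor(x)), int(math.floor(y))
    xf, yf = x - xi, y - yi
    def lat(i, j):
        return _lattice[(i + j * 57) & 255]
    u, v = _smooth(xf), _smooth(yf)
    top = lat(xi, yi) + u * (lat(xi + 1, yi) - lat(xi, yi))
    bot = lat(xi, yi + 1) + u * (lat(xi + 1, yi + 1) - lat(xi, yi + 1))
    return top + v * (bot - top)

def turbulence(x, y, t, octaves=4, speed=0.5):
    """Fractal sum of noise octaves; 't' drifts the lookup so the
    pattern crawls over time, as for animated mist or rain."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq + t * speed, y * freq)
        norm += amp
        amp *= 0.5
        freq *= 2.0
    return total / norm
```

Evaluated per pixel and used to attenuate the opacity of a rain or mist layer, the speed term controls how fast the field crawls, the "slow or fast" quality West mentions.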

A Touch of 3D
In addition to Harmony, the effects team, in particular, created some 3D elements. "We wanted this film to look handcrafted," West says. "But there are some fireflies and some vehicle wheels that are 3D, and some 3D doors open and shut in Maya. But, it's a very, very understated use of 3D."

Maya also worked in the background. Set designers built the non-organic parts of the film—the buildings, vehicles, and other structures—in 3D to give the painters perspective reference for paintings. "In the old days, we would have built models and photographed them," West says. Instead, the painters printed the 3D models and then drew over them. "The background paintings were 99 percent handcrafted," West says. "They were done in Photoshop, but they were drawn or inked or painted one stroke at a time. The painters applied every brush stroke as they would with a regular painting."

To give painters perspective reference, set designers built non-organic objects in Autodesk's Maya and printed those 3D models so the painters could draw over them.

In addition, the effects team sometimes used the puppet tool in After Effects to move trees and leaves, and help bring the background paintings alive. "The challenge was to not have a hybrid movie," West says. "I thought Atlantis and Tarzan were really cool, and I don't see a problem with integrating digital elements into a hand-drawn film. But the architects of this film wanted an old-school 2D film like Bambi or Lady and the Tramp."

The directors and the crew believe the return to the rich look of the 2D films of the 1950s will be a novelty for children of the 21st century. "A lot of television animation has moved into stylized graphics," Musker says. "We felt there was something about the fullness of characters in hand-drawn films that children haven't seen on a big screen with this caliber of dimensional drawing and atmospheric landscapes." Adds Clements: "We're kind of recapturing and reinventing at the same time."

The same was true of the studio itself. "There was a real desire to make a lush, beautiful, entertaining, hand-drawn film even though [traditional] animation had been pronounced dead," West says. "It wasn't dead to any of us. So, it was nice to have another time at bat. When the opportunity came to make this film, I had to participate. It was a wonderful experience—the return of co-workers and good friends. I wouldn't have missed it for anything."

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.




Gaming

James Cameron partners with Ubisoft to expand the world of Pandora in James Cameron’s Avatar: The Game

Working in parallel and using the same digital models, animations, and textures, Ubisoft and Weta forged a relationship that Avatar film director James Cameron called a “perfect consonance” between the film and game crews, in which game artists borrowed models and mocap data for the blue Avatars (above), while Weta borrowed ideas for Pandoran bioluminescence.


"It's all about the storytelling, and this, right here," James Cameron has stated, pointing two fingers directly at his eternally impassioned, turquoise eyes. The legendary director is stressing the importance of adhering strictly to a character's point of view during filmmaking, forcing the audience to look through their eyes. More importantly, he's stressing the theme of Avatar, which is to look beyond the prism of our own life experience and, in his words, "see and understand the world through others' eyes."

The world of which he speaks is, of course, Pandora, the lush, bioluminescent planet inhabited by the indigenous, 10-foot-tall Na'vi, and despoiled against their will by human mining for a priceless mineral called unobtainium—a superconductor for energy. While the air is unbreathable on Pandora, the mining corporation, called the Resource Development Administration (RDA), creates a human-Na'vi hybrid—the Vishnu-blue Avatar. An RDA soldier, lying in a sarcophagus-like vessel, can project his or her consciousness into the Avatar, controlling the Avatar's body remotely while interacting with the native Na'vi. From these interactions springs a moral conflict, centered on their opposing perspectives, which the team at Ubisoft's Montreal studio seemed to grasp instinctively from the outset and managed to encapsulate in its initial pitch to the Titanic director. The director was so impressed that the studio immediately won the rights to develop James Cameron's Avatar: The Game.

"They came up with the idea of allowing the player to choose either a Na'vi perspective or a human perspective," Cameron has said during one of several industry conferences. "In other words, the good guys and the bad guys are entirely a matter of the player's choice. I think that's really cool. And there are a number of thresholds throughout the game where, if you feel like you've made the initial choice incorrectly, you can switch sides further down throughout all the levels. It's really pretty remarkable, and it tracks beautifully with the moral message of the movie."

Cameron has likened the Pandora journey to Dorothy's journey to Oz, and has described the experience of watching Avatar as "dreaming with your eyes open." If the film follows the yellow-brick road, the game, at Cameron's behest, allows the viewer to stray from that narrative brick road into the farthest, most exotic reaches of that open-eyed dream—into 16 diverse environments, each filled with luminescent forests, gorges, gullies, beautiful, floating mountain ranges, and stunning alien flora and fauna.


By Martin McEachern




Cameron wanted the under-canopy atmosphere in the game to match that of his film, so Ubisoft added subtle coloration to simulate reflected light on characters, specular lighting in the shadow of the foliage, and ambient colors within the shadows.



Perfect Consonance
To accomplish the daunting task of fully realizing Cameron's blue-bathed, luridly painted alien biosphere in the interactive medium, Ubisoft began its collaboration with the director and the Weta crew in New Zealand, which was tasked with a great deal of the film's digital imagery, over two and a half years ago. "This is where the whole game and movie interface usually breaks down," Cameron has said, "because movies are often on a one-year track screaming to the theater, and a year after, someone pushes the button on the game, and it's just not enough time to develop a game properly." Cameron has noted that he set out with Ubisoft to create the ideal model for how a game and a movie should be co-developed, with neither being the redheaded stepchild of the other. Under Cameron's model, production on the game began concurrently with that of the film. In addition, he has stated, "I proposed that the game should not be a slave to the movie, but should follow its own story line; it should be developed fully in parallel, so that they exist in the same world—using the same creatures and environments—and yet have its own story."

Ubisoft rose to that challenge, creating new characters, vehicles, and weapons, embellishing settings, even enhancing the way the Na'vi interact with creatures and plants, and obtain their powers and poisons. "As a result," Cameron has said, "the world of the Avatar game is considerably richer and more extensive than what you'll see in the film, and at the same time, it doesn't contain any spoilers that will ruin the movie experience for you, which allows us to put it on the street before the film comes out. This is really the perfect consonance between the two mediums."

The bulkhead in Cameron's model for achieving this "perfect consonance" was aggressive asset sharing and two-way collaboration, in which both the film and game teams could develop and share designs and digital assets for each other's projects. "We received just about every type of reference we could from Lightstorm [Cameron's production company], from concept designs, characters, creature meshes, and mocap data, to early renders for animation reference," says Ubisoft artistic director Pascal Blanche. Meanwhile, Ubisoft's artists—working behind double-locked doors and under high-security cameras and strict confidentiality clauses in their veritable bunker in downtown Montreal—designed everything from vehicles to costumes, even sounds, that Cameron eventually incorporated into his film, marking a watershed moment in film-game convergence. "We even received entire animatic sequences of scenes depicted in the film," says Blanche. "Nevertheless, it's important to understand that for creating the game, they still served somewhat as references. The models needed to be able to work within our Dunia game engine, and within the tighter constraints that we're given (we don't yet have Weta-strength über computers, obviously). Even if the game industry gets close to the movie industry's pipeline, we still have technical obstacles to overcome that place certain demands on our models and textures, so we often had to rebuild Weta's models based on the various film assets that we received."

A Bioluminescent World
(Top) Avatar: The Game features 16 rich, diverse environments teeming with exotic flowers and beasts. (Bottom) When a plant is injured in a firefight, the Dunia engine decreases the intensity of its glow shader to simulate its waning life.

Spike Tears. Cliff Slouchers. Stinger Ivy. These are but a few of the alien plant species that inhabit the tropical wonderland of Pandora, all of which Cameron based on the strange bioluminescent marine life he discovered while exploring the kelp forests and coral reefs of the deep sea. Modeled, rigged with bones and IK, and animated in Autodesk's 3ds Max, these plants are living, sentient beings that can cooperate with the Na'vi or antagonize the humans. To re-create the plants for the game, Weta provided Ubisoft with QuickTime turntables of the plant meshes used in the film, so the artists could analyze them from every point of view. "They also gave us high-res textures from the film, which we used while painting the in-game shaders in Pixologic's Zbrush," says Blanche.

At night, the plants luminesce, setting the forest alight like some neon Garden of Eden. To achieve this effect, Ubisoft developed special shaders in Zbrush that were then modified in the studio's Dunia game engine. The engine, which had been developed for Ubisoft's Far Cry 2, was heavily modified for Avatar to accommodate such effects and to expand its vegetation technology. "Because the Dunia game engine is time-based, we had the opportunity to increase specific layers of glow values and textures in the shaders," says Blanche, "giving the whole bioluminescent effect an extraordinary natural, organic feel, depending on the time of day."




Cameron, a firm believer that the devil is in the details, also insisted that the intensity of the glow should wane when a plant is injured, or wax as it recovers its strength. "For example, when you cut a leaf with a blade or explode one of the many varieties of plants, their bioluminescence will slowly fade as the plant dies," says Blanche. "We dug pretty deep, pushing Dunia to its limits, to get these effects just right."

Indeed, so extraordinary were some of Ubisoft's bioluminescent effects that Cameron often asked his Weta team to incorporate them into his film. "I'd often see something really cool that Ubisoft had done, and I'd say to my crew, I want that in the movie—that particular treatment for the bioluminescence, for example," Cameron has said.

In addition to the nocturnal bioluminescence, the player can also use a flamethrower at night to set the forest ablaze, further complicating the lighting challenges. "A lot of the fire effects had already been implemented and tested in the Dunia engine back when we made Far Cry 2. However, we made some significant changes to improve on the under-canopy atmosphere during these fires," says Blanche, referring specifically to technological advancements that simulate the dancing, spectral shadows and bounce lighting from the fire and luminescent plants.

"In addition, Pandora has many expansive and dense rain forests, and we wanted to make sure the light rendering of this under-canopy atmosphere matched Cameron's artistic direction," notes Blanche. "If you look closely while you're playing the game—or in certain screenshots—you'll notice that we've added subtle coloration that simulates reflected light on characters, specular lighting in the shadow of the foliage, and ambient colors within the shadows, too. Most of those rendering effects are built into the shaders or into the lighting system itself, using gradients and intensity curves in the Dunia engine."

The RDA battles the Hammerhead, modeled and rigged in 3ds Max based on assets provided by Weta.
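The article names the behavior but not Dunia's shader interface: glow keyed to both time of day and a plant's remaining health reduces to driving one shader parameter from two curves. A minimal sketch, with invented names (health, time_of_day) standing in for whatever the engine actually exposes:

```python
import math

def glow_intensity(health, time_of_day, base_glow=1.0):
    """Scale a plant's glow by health (0..1) and time of day (0..24h).

    Bioluminescence peaks at midnight and fades smoothly toward noon;
    an injured plant's glow wanes in proportion to its remaining health.
    """
    # Night weight: 1.0 at midnight, 0.0 at noon, smooth in between.
    night = 0.5 * (1.0 + math.cos(2.0 * math.pi * time_of_day / 24.0))
    # Ease the health falloff so a grazed plant doesn't instantly dim.
    vitality = health * health * (3.0 - 2.0 * health)  # smoothstep
    return base_glow * night * vitality

# A badly wounded plant at 2 a.m. still glows faintly; at noon, nothing glows.
print(glow_intensity(health=0.25, time_of_day=2.0))   # ~0.15
print(glow_intensity(health=0.25, time_of_day=12.0))  # 0.0
```

Each frame, the engine would presumably blend the plant's current glow toward this target rather than snapping to it, so the fade reads as the slow dying Blanche describes.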

Moving in Stereo
"Because Avatar was being made in stereoscopic 3D, I urged Ubisoft to offer the game in 3D," director James Cameron has stated. "At first they were dubious, but then they came back six months later and showed us a demo in stereo that blew us away. And then, it was on." Indeed it was, for Avatar is the first major next-gen game to be developed for stereoscopic 3D. Thick smoke and mist swirl and drift beyond the screen, as the Banshees and twin-propellered helicopters reach out toward you.

Ubisoft designed the 3D version of the game specifically for 3D-enabled televisions utilizing stereoscopic technology from Sensio, another growing Montreal company, but it will also work on TVs supporting DLP or RealD technologies. These pricey $4000 sets are capable of running at the 120Hz refresh rate required to project two images at once.

To create the stereoscopic effect, the game engine uses two cameras, one for each eye, placed in parallel with a controllable interocular distance—the space between the left- and right-eye cameras. The images from the two cameras are then rendered in real time. Along with the two parallel cameras, the game developers also used the vertex shader's equivalent of "off-centered projections" to help control the convergence distance—that is, the distance at which there is no on-screen separation. Controlling separation and convergence in the vertex shaders enables the artists to correct some special effects and make them work properly in 3D. "The convergence is positioned farther than our character to have him slightly out of the screen and to enhance the 3D effect," says artistic director Pascal Blanche. "As for the left and right images, they are rendered in off-screen buffers and combined for each 3D TV's specific format." –Martin McEachern
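Ubisoft hasn't published its camera code; the sketch below only illustrates the general approach the sidebar describes: two parallel cameras separated by an interocular distance, with convergence set not by toeing the cameras in but by shifting each eye's projection frustum off center. The math is the standard asymmetric-frustum construction; the parameter names are ours.

```python
import numpy as np

def stereo_projections(fov_y_deg, aspect, near, far,
                       interocular=0.065, convergence=10.0):
    """Build (left, right) 4x4 off-center projection matrices.

    The cameras stay parallel (no toe-in); each eye's frustum is
    sheared sideways so the two images align exactly at 'convergence'
    distance, the plane with zero on-screen separation."""
    top = near * np.tan(np.radians(fov_y_deg) / 2.0)
    half_w = top * aspect
    # Horizontal frustum shift for a +/- interocular/2 camera offset,
    # projected onto the near plane.
    shift = (interocular / 2.0) * near / convergence
    mats = []
    for eye in (-1.0, +1.0):                       # -1 = left, +1 = right
        l, r = -half_w - eye * shift, half_w - eye * shift
        mats.append(np.array([
            [2*near/(r-l), 0.0,      (r+l)/(r-l),            0.0],
            [0.0,          near/top, 0.0,                    0.0],
            [0.0,          0.0,      -(far+near)/(far-near), -2*far*near/(far-near)],
            [0.0,          0.0,      -1.0,                   0.0]]))
    # Pair each with a view matrix translated by -/+ interocular/2 in x.
    return mats
```

Setting convergence just beyond the player character, as Blanche describes, is what places the character slightly in front of the screen plane.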

24

January 2010

Pandoran Wildlife
While meeting Cameron's exacting attention to detail in the plant life was difficult, equally challenging was his demand for the most minute details in the Na'vi, humans, Avatars, and wildlife. The notoriously fastidious director wanted the Na'vi to have a specific, graceful way of gliding through the forest, for example; their ears to glow a specific shade of red when backlit; their skin to have a particular, tactile, bioluminescent texture; and they had to shoot their bows with a kind of two-fingered, inverted draw beyond the head. Here again, Ubisoft was provided with mocap data from the performance-capture volume at Giant Studios in Los Angeles, and digital assets from Weta. They had both photos and turntable scans of the actors from the set, including Sigourney Weaver as Dr. Grace Augustine, Giovanni Ribisi as arrogant RDA soldier Parker Selfridge, Michelle Rodriguez as Trudy Chacon, who functions as the player's chopper pilot, and Zoe Saldana as the Na'vi princess—most of whom reprise their roles for the game.

Referencing the turntables and photos, the game artists hand-modeled each character using 3ds Max. "The RDA troopers and scientists were easier to animate because of our familiarity with human movement. The Na'vi, however, provided us with a completely new challenge," says animation project lead Xavier Rang. "At first, we weren't sure how to animate them. Poring over reference animations, we studied the Na'vi proportions, skeletal structure, and musculature. Their walk and run cycles were some of the hardest to achieve. Luckily, Lightstorm gave us direct access to the animation director from the film, Richard Baneham, who came to Montreal and provided us with a great amount of constructive feedback." Working in 3ds Max, animators keyframed


the in-game animations for all the characters, creatures, and plants. For several of the scripted cinematics involving humans and Na'vi, however, Ubisoft motion-captured actors at its Montreal studio using Vicon mocap cameras, processing the data in Autodesk's MotionBuilder and exporting it into the game engine via 3ds Max. "The mocapped performances gave the scripted events a bit more of an edge over the in-game animations, which was crucial to differentiating them," says Rang.

But animating the humans and Na'vi was a cakewalk compared to the non-bipedal wildlife that stalk the forest floor. Indeed, the forest is a veritable menagerie of ferocious beasts, like the Banshee, the flying blue dragon the player sits astride as it soars through the sky; the slinking, oil-black Viper Wolves; and the six-legged, black panther-like Thanator. But none proved more nightmarish to rig and animate than the Leonopteryx, an orange dragon and the largest animal on Pandora. "For the Leonopteryx, we spent nearly a month just recreating Weta's animation rig using a custom rig in 3ds Max, so it resembled what we saw in the video references we received from Lightstorm," says Rang. "We had to learn a lot."

In terms of modeling, texturing, and rigging the Leonopteryx, Rang says the team put a lot of love into properly capturing this amazing flying beast. There are more than 100 bones in the rig, one skeleton for the data export, and another for controlling the rig. "The main question we constantly asked ourselves was, 'What makes the Leo stand out from all other creatures, and how can we make it truly recognizable?'" he relays. To do just that, the artists looked over the length of the wings, the shape of the head and jaw, and the dynamics of liftoff and landing to make sure they got all the details right. "It was one of the most complex challenges we had in creating the game," Rang adds.

Alien Botany
The almost endless diversity in Pandora's glowing, coral-reef-inspired plant life made procedural methods of creating the foliage difficult to employ. According to lead texture artist Pierre Theriault, those trees and plants—which followed a standard form of stem-with-leaf or trunk-and-branches with leaves—were created using an in-engine plant-creation tool. However, the more complex or alien-shaped plant life had to be hand-modeled by Theriault and his colleagues using a variety of techniques, usually involving Zbrush sculpting of a high-res base mesh or polygon modeling using photo-sourced textures. Therefore, the forests are usually a mix of procedurally and hand-placed objects. But, given the highly organic nature of the alien plants in Pandora, Theriault found himself taking the Zbrush route more often than not.

During gameplay, "foliage zones" are defined in-engine, which are then procedurally populated with the appropriate plant-life assets. "Level artists then did extra detail work, adding focal pieces by hand and adjusting settings and zones for the desired look," explains Theriault.

The plant shaders were similar to the general-use shaders, encompassing the standard map channels, but sometimes with additional special parameters requested by the artists for translucency effects and glow maps that were controlled by the time of day. Due to memory constraints, sometimes a plant had to forego this special shader and use the general shader when a certain effect was needed, like a cube map or texture blending. In general, anything leafy used the translucent leaf shader, and anything with mass to it used the general-use shader.

The shaders went through many experimental changes, during which the crew added and removed features, trying to zero in on the most efficient version to meet as many of the project's special demands as possible. The key to success, says Theriault, was close collaboration with the shader programmer, who managed to accommodate all the group's requests with great results.
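The internals of Dunia's population tool aren't described here, but the general shape of such a scatter pass is simple: fill a zone with randomly placed, randomly scaled instances drawn from a weighted palette of asset types, then let level artists override by hand. A hedged sketch; FoliageZone, the palette entries, and the weighting scheme are illustrative, not Ubisoft's API.

```python
import random
from dataclasses import dataclass, field

@dataclass
class FoliageZone:
    """Axis-aligned region to be carpeted with plant instances."""
    x0: float; z0: float; x1: float; z1: float
    density: float = 0.5          # instances per square meter
    # (asset name, selection weight); weights skew the species mix
    palette: list = field(default_factory=lambda: [
        ("fern_small", 6), ("stinger_ivy", 3), ("spike_tear", 1)])

def populate(zone, seed=0):
    """Scatter weighted plant assets through a zone; artists then add
    focal pieces by hand and tweak settings, per the article."""
    rng = random.Random(seed)
    area = (zone.x1 - zone.x0) * (zone.z1 - zone.z0)
    names = [n for n, _ in zone.palette]
    weights = [w for _, w in zone.palette]
    instances = []
    for _ in range(int(area * zone.density)):
        instances.append({
            "asset": rng.choices(names, weights)[0],
            "pos": (rng.uniform(zone.x0, zone.x1), 0.0,
                    rng.uniform(zone.z0, zone.z1)),
            "yaw": rng.uniform(0.0, 360.0),      # random facing
            "scale": rng.uniform(0.8, 1.3),      # break up repetition
        })
    return instances

print(len(populate(FoliageZone(0, 0, 20, 20))))  # 200 instances
```

The same mechanism carries over to the ground-cover mix described next: moss patches, salvaged grass meshes, filler plants, and rock debris are simply more entries in the palette.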

Carpeting the Forest Floor
From the lush, densely foliated landscape of Verdant Pinnacle, to the darker, burned-out palette of Stalker Valley, one of the primary challenges of the game was finding a way to completely carpet the forest floor with grass that looked natural and believable from every angle, high and low, and, moreover, hid each level's tiling ground texture. "That was the first big task given to me on the project: Hide the ground!" exclaims Theriault.

It took a lot of trial and error to find the right approach. The first attempts used dense grass meshes to hide the ground plane. "Unfortunately, it looked great horizontally, but wasn't believable when looking down from above," Theriault recalls. After other passes, he ended up taking inspiration from the moss and fur technique used in [Team Ico's] Shadow of the Colossus, thereby creating mossy patches in 3ds Max that could be used in Ubisoft's procedural population tool. This quickly carpeted most of the ground with uneven, lumpy moss that effectively obscured the level's tiling ground texture. The earlier grass meshes were salvaged and put into the mix as well, along with filler plants and rock debris.

"In the end, a little of everything ended up in the final ground cover, giving a very natural mishmash effect that was interesting when looking down or forward upon it," Theriault says. "It's easy to overlook this effort with all the impressive sights of Pandora, but until we got this part right, the environments looked extremely sparse and sterile."

Creating assets for the numerous Na'vi camps nestled throughout the forest was another challenge. Many types of props and structures were needed for gameplay that naturally did not exist in the movie's reference materials or story line, according to Theriault. After taking time to study all the references the team had available, the common traits in the Na'vi style of construction became clearer. With that style in mind, Theriault was able to design several original Na'vi structures consistent with what would be seen in the movie. "Cameron and his team were very cool in encouraging and welcoming the creation of original assets on our end, which added more richness to the Avatar universe," he says.

Retrofitted Armory
Cameron purposely made Pandora hot and humid, while engulfing it in such a powerful magnetic field that the RDA couldn't use sophisticated energy weapons. This was an artistic conceit that enabled him to make all the vehicles and weaponry look retrofitted, like they came right out of the late 20th century. Of course, this conceit also guided Ubisoft's weapon and vehicle creation process, from the military helicopters, to the Samson armored recovery vehicle the player pilots, to the AMP (Armored Military Platform), which resembles Ripley's loader from the movie Aliens.

No amount of detail was spared in the game, including that used for the operations center.

"All of the six-wheeled vehicles are controlled by in-game physics, but the AMP Suit was really challenging. It's far more agile than people would expect," says Blanche. Rigged in 3ds Max like a regular humanoid character, it's meant to replicate the movement of its pilot through gesture recognition, so it has an endless combination of articulations and extensions that are especially hard to keep track of. When the AMP Suit is jumping, running, or attacking, it was difficult to show that it was a heavy piece of machinery, yet one that still had the requisite agility, he points out. To this end, the team adjusted the weighting on the bones to give the suit's mechanized movements a palpable sense of inertia.
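How Dunia weights those bones isn't spelled out, but the perceptual goal, mechanized movement that lags and settles like heavy machinery, is commonly approximated by having each joint chase its animation target through a critically damped spring. A sketch under that assumption; the helper name and constants are ours, not Ubisoft's.

```python
import math

def damped_track(current, velocity, target, halflife, dt):
    """Critically damped spring step: 'current' chases 'target' smoothly
    and without overshoot; 'halflife' sets the response time.
    Returns (new_position, new_velocity)."""
    omega = 2.0 / max(halflife, 1e-6)
    decay = math.exp(-omega * dt)
    x = current - target                 # offset from the target
    j = velocity + omega * x
    return target + (x + j * dt) * decay, (velocity - omega * j * dt) * decay

# Heavier parts lag more: give the torso a longer halflife than the
# forearms, and the whole rig reads as massive yet agile.
pos, vel = 0.0, 0.0
for _ in range(6):
    pos, vel = damped_track(pos, vel, target=1.0, halflife=0.25, dt=0.1)
    print(round(pos, 3))   # approaches 1.0 smoothly, never overshoots
```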


The Game is Convergence
Avatar: The Game certainly represents the most intensive collaboration yet between a game developer and a film director, and, moreover, a trifecta in heavyweight film collaborations for Ubisoft, after working first with Peter Jackson on King Kong and then with both Jackson and Steven Spielberg on a game adaptation of the upcoming Tintin. Now with 56 games under its belt, including system-sellers Assassin's Creed, Prince of Persia, and Splinter Cell, Ubisoft has become an industry behemoth that's managed to harness Quebec's massive pool of creative talent, the province's incredibly generous tax credit system, and its software development infrastructure—which includes Autodesk's Softimage—to rack up annual worldwide sales totaling more than $1.7 billion. In addition, Ubisoft has drawn a considerable amount of talent from Quebec's thriving animation schools, hiring 20 percent of the 1500 students who have graduated from Montreal's National Animation and Design Centre.

The goal now for Ubisoft, and in particular, its four Canadian studios (two in Quebec, one in Vancouver, and the recently announced Toronto studio) is convergence. "The game is all about convergence now," says Yannis Mallat, CEO of the Montreal and Toronto studios, "and allowing directors to explore the universe they create in their movies in a whole new way through interactivity." Mallat's vision is of a future where film and game developers can merge their pipelines and share tools, while people with different skill sets can work on the same products across a broad spectrum of entertainment media, from movies and video games to online experiences. "We need people with the technical know-how to make that happen," he adds.

Cameron also agrees that the number one challenge facing the industry today is convergence. "I believe that when entertainment succeeds in today's market, it does so by converging movies, video games, books, toys, graphic novels, and online experiences, to create this kind of greater universe in which you can visit and enjoy the characters and settings in multiple ways," he says. "The story of the movie doesn't have to be retold in those other outlets, but there should be this sense of enlarging the world and enhancing the viewers' knowledge of the history, culture, and all these things within the fantasy world."

Indeed, so committed was Cameron to convergence that not only did he borrow many of Ubisoft's designs—for Pandora's bioluminescence, for example—but he also had his own design teams develop vehicles for the game that aren't in the movie. Avatar: The Game is a seminal moment in the history of film-game convergence, one that Ubisoft is determined to nurture and grow, as it sets its sights now on Spielberg's Tintin.

Martin McEachern is an award-winning writer and contributing editor for Computer Graphics World. He can be reached at martinmceachern@hotmail.com.





Animation

Of the five animated features nominated for Golden Globe awards this season, two—Coraline and The Fantastic Mr. Fox—used stop motion, one of the oldest animation techniques. Even so, for Mr. Fox, as with most animated films these days, computer graphics played a role. CG artists working on the film, though, found few similarities to hand-drawn or CG films. "Stop motion is quite strange," says Tim Ledbury, visual effects supervisor for Twentieth Century Fox's The Fantastic Mr. Fox. "It's more like making a live-action film than a CG film. As far as I was concerned, the foxes could have been actors. But, it was like live action in slow motion." Animators creating a stop-motion film work one or two frames at a time, moving tiny models into various positions and then filming them.




Animators working on 30 stop-motion stages kept VFX supervisor Tim Ledbury hopping, to be certain his crew—who would touch 75 percent of the film, extending sets, adding skies, removing rigs, compositing characters into backgrounds, and so forth—wouldn't run into problems. "Rather than waiting for a plate, I could sit at my desk, watch a shot, grab a frame, and do a quick TIFF composite to check it," Ledbury says. "If, say, a character's head was getting too close to a problematic area, I could run down and stop the animators, and they might move the head. It was quite nice to have that control."

Although the animators worked two frames at a time, the overall production happened at a faster pace. "On a live-action film, we might have three units shooting at one time," Ledbury says. "On this film, they had 30 units running on different stages at the same time. It was much busier than I envisioned. I thought it would be leisurely, but it was quite intense all the time—sets going up, sets going down, shots coming in at different times, meetings about sets coming up next."

Because Ledbury came onto the film early as a concept designer, he continued working on designs all through the project, in addition to supervising the visual effects work. "Stylistically, Wes [Anderson, the director] wanted as much as possible in camera," Ledbury says. "Our shot count was high. We touched 75 percent of the film. But of the 617 VFX shots, only 400 have typical visual effects work. The others are fairly simple rig removals."
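Ledbury's "quick TIFF composite" is, at its core, a single premultiplied over operation: grab the latest frame from the stage, lay the character pass over the set plate, and eyeball the problem areas. A minimal sketch; the file names in the usage comment are illustrative, and this is a stand-in for whatever compositing tool his crew actually used.

```python
import numpy as np

def over(fg_rgba, bg_rgb):
    """Porter-Duff 'over': composite a premultiplied RGBA foreground
    onto an opaque background; all arrays are floats in [0, 1]."""
    alpha = fg_rgba[..., 3:4]
    return fg_rgba[..., :3] + bg_rgb * (1.0 - alpha)

# Hypothetical check of a frame grabbed mid-shoot:
# comp = over(load_tiff("fox_head_pass.tif"), load_tiff("set_plate.tif"))
```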

A Sense of Scale An in-house crew of 28 worked on 500 shots. Stranger, now NVizible, did 30, and Lip Sync Post handled 80. In addition to removing animation rigs, the visual effects crews extended sets, duplicated sets, lit and rendered scenes, painted skies, and composited characters filmed against greenscreen into CG and miniature backgrounds built in various scales. “We had normal scale, animal scale, and human scale,” Ledbury says, ticking off the various-sized characters the compositors needed to

deal with. “For the animals, we had full size, half size, micro and mini-micro. For the humans, we had full size and half size. The full-size human and half-size animals worked together, and the half-size humans and micro animals worked together. And then we had a full-scale animal set, a micro animal set, and a full-scale human set. All those mixtures created issues.” Ledbury takes a breath and continues: “Plus, although every shot is a main pass, we did multiple passes for safety—different lighting stages, sets with puppets and without puppets. The amount of shots and the volume of data coming at us all the time was the hardest thing about the film. Working with the CG stuff, doing the set extensions, was the haven. That was the fun.” Before production began, Ledbury had previs’d about 100 of the prickliest shots working in Autodesk’s Maya and Apple’s Shake, which were the main production tools along with Mental Images’ Mental Ray for rendering and Andersson Technologies’ SynthEyes for matchmoving. “We did previs for technical reasons, not for story points,” he says. For example, they used previs to determine how many sets of what size they needed to build. Perhaps the most expensive and complex set—and shot—the team worked on was one in which Mr. Fox comes up through the floor of a giant chicken shed. The shed was a miniature, so they could photograph it and use the photos as textures for set extensions. “But, we didn’t have the full shed,” Ledbury says. “We had to duplicate it and make it four times longer, add in CG pipes and feeders, and build the roof.” The animators worked with Mr. Fox and a group of about 60 chicken puppets in front of a greenscreen, but the director wanted more, so the visual effects crew added another 300 or so CG chickens to match the stop-motion puppets. “It took everyone in the CG department because we had to turn the shot around in a week and a half,” Ledbury says. “But we enjoyed the challenge. This, the attic, and the supermarket were the biggest shots.”

■ ■ ■ ■

For the supermarket shot, the production team had filmed characters dancing on four shelves. The VFX crew built CG versions of those shelves to extend the set and mapped photographs of the miniature shelves onto the digital shelves. “The model-making department made such highly detailed sets that we could use photos of the models for textures,” Ledbury says. Because the models are so tiny, though—the supermarket set was six inches tall, the attic was 10 inches by 12 inches wide—they had depthof-field issues. “We photographed multiple angles, and photographed the models in stages, starting with the foreground,” Ledbury says. To complete the supermarket, they also extended the floor, built a ceiling, and added CG lights that the camera would have passed through on a real set. “The CG lights came from a photo of a full-size set that we mapped onto geometry,” Ledbury explains. For other parts of the supermarket, they used CG versions of various sets with photographed textures projected onto the digital objects. Matchmoving the camera from the shots filmed on the stop-motion stages was straightforward. “We had good measurements, and we didn’t have any motion blur,” Ledbury says. As they might have for a live-action film on location, the group used tracking markers, but for this film, they didn’t need to worry about removing the markers from a main plate later. Instead, because there was so little movement, they did matchmove passes. “Once the animator finished, we could put tracking markers all over the set and use those to track from,” he adds.
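The article doesn't say how the staged photographs of those tiny sets were merged; a common way to beat the shallow depth of field described above is focus stacking, keeping, per pixel, the photo with the strongest local detail. A sketch of that general technique, not necessarily the crew's method:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(slices):
    """Merge photos shot at different focus distances by keeping, per
    pixel, the slice with the most local contrast (a standard way to
    get sharp texture photography from miniature sets)."""
    stack = np.stack(slices)                          # (n, h, w, 3)
    gray = stack.mean(axis=-1)
    # Local sharpness: smoothed absolute Laplacian per slice.
    sharp = np.stack([uniform_filter(np.abs(laplace(g)), 9) for g in gray])
    best = sharp.argmax(axis=0)                       # (h, w) slice index
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
```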

Lighting Passes
They also did lighting passes. For example, if a light would turn on or flicker during a shot, the visual effects team would do multiple passes of the same frame. "I could go down to the set once the animators had finished a shot and talk to the DP, and then take a couple days shooting different lighting conditions and angles," Ledbury says. "If I didn't get what I wanted, I could get the set back out and






reshoot. We'd have different lights coming on in the same frame so we could mix them together in compositing. I suppose I was treating this like trying to get render passes."

For example, to deal with green spill from the greenscreens, Ledbury would have the crew shoot two passes: one with and another without the greenscreen. "We could use the greenscreen shot for the matte, and the other, without any spill issues, for compositing," he explains. "The compositors also had to be careful with the saturated colors because we knew the color timers would push the grade. It was quite an orange-red film; when I was in the art department, our intention was that there would be no green or blue in the film. But, that was good for greenscreen shots."

The VFX crew built CG versions of four shelves to extend the set for shots in the market, adding the floor, a ceiling, and CG lights, including those the camera would have hit on a real set.

One of the problems unique to stop-motion animation for the visual effects crew was removing animation access holes in the set. "We'd put in a foreground and shoot multiple passes of the set again. But, the sets were wood, and if they had been shooting animation for two weeks on one set, the wood would expand and move." This was a particular problem if it was rainy or damp, which was not unusual in the UK, where they made the film.
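Ledbury's "render passes" analogy above has a direct compositing translation: with each lighting condition photographed separately, every light's isolated contribution is the difference from the base frame, and each can be dialed in with its own animatable weight. A minimal sketch of that idea; the weighting scheme is our illustration, not the production's actual Shake setup.

```python
import numpy as np

def mix_light_passes(base, light_passes, weights):
    """Rebalance a shot from separately photographed lighting passes.

    'base' is the frame with only the ambient lighting; each entry in
    'light_passes' is the same frame with one extra light switched on.
    That light's isolated contribution is (pass - base), which can then
    be weighted per frame, e.g., to animate a flicker in the comp."""
    out = base.astype(np.float64).copy()
    for img, w in zip(light_passes, weights):
        out += w * (img.astype(np.float64) - base)
    return np.clip(out, 0.0, 1.0)

# weights = [1.0, 0.3 + 0.1 * np.sin(t)]   # e.g., a lamp flickering over time
```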

"If we sped through the shots, we could see the set breathing," Ledbury says. "In the morning, it dipped down, and then came up during the day. So, we had to deal with that in compositing. We'd cut out patches of shots and do 2D tracking to stabilize them so they wouldn't bounce around."

Initially, Ledbury thought one of the biggest concerns for the compositors would be the furry characters because hair is never easy to lift from a greenscreen background, but one advantage of having no motion blur was that it made this easier. The other was the director. "Wes [Anderson] wasn't as picky as I thought he would be," Ledbury says. "He was concerned about sky color, as you'd expect, but he wasn't worried about matte edges. His main concern was with art direction. He left it to us to get the shots done." And Anderson's art direction always moved toward preserving the handcrafted look of the film. Even so, as Mr. Fox proves, the most handcrafted films still rely on CG artists.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.


Holography


When singer/actor Frank Sinatra died in May 1998, fans the world over mourned his passing. Born to Italian immigrants, this kid from the blue-collar, working-class city of Hoboken, New Jersey, eventually would turn into a superstar whose nicknames included The Chairman of the Board, The Voice, and Ol' Blue Eyes. A member of the so-called Rat Pack, Sinatra became a quintessential legend of radio and Hollywood.

SquareZero composited the body of an impersonator with the rotoscoped head of singer Frank Sinatra from film footage to create a hologram of the star.

It’s no surprise that countless people—young and old alike—were big fans of Sinatra’s music and movies. Among them is UK native Simon Cowell, music executive, television producer and personality, and entrepreneur, who is best known as the highly critical judge on the reality TV show American Idol. So when organizers began to make plans for a 50th birthday bash for Cowell at a facility in England, they were determined to have Sinatra make an appearance. Of course, it would be impossible to have the famed crooner appear in the flesh, but having him appear digitally, in a hologram, well … that had possibilities. “The party organizers wanted the hologram to look 100 percent real, as if Sinatra himself had stepped onto the stage for a private performance,” says Vicky Godfrey, director of SquareZero, a London design, animation, and production facility that brought the singer back to digital life. The most challenging and time-consuming part of the project was locating source material for the hologram. Banana Split, which commissioned the project, wanted the holographic singer to belt out “Happy Birthday,” although no footage could be found of the real Sinatra warbling that tune. In fact, the selection of possible songs was extremely narrow due to the limitations of the project. Foremost, the footage had to be continuous and filmed with a single locked-off camera. “The holograms do not work well if you have edited source material; you just can’t do anything with it,” says Godfrey. “The hologram has to look as if a real person is standing in front of you, not [clips of] an edited show. It has to be as real as real can be.”




Continuous single-camera footage of the entire singer was needed for the project. When that could not be located, digital techniques were required to achieve a head-to-toe image. All told, 24,000 frames of Sinatra's head were roto'd from film footage within Apple's Shake, and comp'd onto newly acquired footage of a stand-in using The Foundry's Nuke and Shake.

Moreover, the footage had to be full-body, to make it appear as if Frank himself was making the guest appearance. Otherwise, it would appear as a floating head or partial body, which was obviously undesirable. Godfrey spent nearly three weeks just tracking down footage that would work for the hologram after digital magic was applied. "We couldn't find any footage of Frank singing 'Happy Birthday.' I suggested that the client think about having Marilyn Monroe, but Frank Sinatra is Simon Cowell's idol, and it had to be Frank Sinatra," she says. However, the available public archive footage of Sinatra singing was filmed in close-up (head-and-shoulders shots) for films. The best possible footage was eventually found—albeit from Sinatra's own private collection that had never been in the public domain. "That is why it was shot with a single camera and not edited. It was for his own use," Godfrey says. Godfrey spent weeks negotiating with Sinatra's estate, and eventually received permission to use the footage for this project; there would be no encore. And, the estate gave SquareZero two choices: footage that used a single-camera feed of the crooner singing Pennies from Heaven or Learning the Blues. SquareZero chose the former, though it still had limitations: Sinatra walks toward the camera and then back, showing him, at most, from his knees upward. His lower body is missing.



The Project Gets Legs
To make the footage work, the crew at SquareZero had to give Sinatra legs and a lower torso, onto which they planned to fuse the rotoscoped upper body from the original footage. The needed body parts would come from a body double. When SquareZero revealed its plan to the client, Banana Split decided to ditch the project, thinking it required too much work. Then, just two weeks before the party, the client changed its mind, asking SquareZero to resume the project. At this point, Godfrey and her group closely examined the footage again only to realize that the original plan of compositing the stand-in's footage from the knees down would not work. "After reviewing the footage from Pennies from Heaven, we realized that we had to cut Frank out at the neck level, not the knees," says Godfrey. "If you are trying to connect [the images] at the knees, you have to be in perfect sync. It's really difficult. But with the head, as long as you have the same movement with the body double, you can get away with a little more."

So, suddenly after getting the green light on a Thursday and the contract on Friday, and revamping its strategy, SquareZero found itself fighting the clock. The group located a body double, a Frank Sinatra tribute performer who was the same height as the star. That was important, explains Godfrey, because the head had to have the same proportions as the comp'd body. On the following Monday, the team filmed the body double at the studio. To ensure that the double's movement was perfectly in sync with Sinatra's, a low-resolution version of the film footage was projected onto a screen, producing a mirror image for the performer to follow while he practiced, concentrating on the body position and motion.

Yet, getting the necessary high-resolution footage, from which Sinatra's head would be rotoscoped, proved difficult again, this time on another level. "There were gigs, and gigs, and gigs of it, and it all had to be sent via FTP from Los Angeles," says Godfrey, noting that the file took at least 24 hours to upload. Then came the download. "We would start downloading the file, and it would crash. And then we would have to start again. And again. This process started on Friday, and we didn't get the high-res footage until the following Thursday. It took nearly a week!"

Creating the Man, the Myth
While the crew struggled with the files, a DOP filmed the body double, who was dressed in a suit from the late '50s/early '60s, to coincide with the period when the archive footage of the younger Frank was made. Key to the shoot was the lighting: It had to be just right so that the performer's dark suit would still show up against the black backdrop that was used in the hologram. The lighting also had to match the lighting in the original footage. However, Godfrey notes, that lighting was not conducive to what was required in the hologram rig—an issue that the director of photography eventually resolved.

"At this point, we had only eight days left before the party, and still had to do viewings with the clients," says Godfrey. Head of animation Olly Tyler and compositor Gabriel Sitjas worked nonstop over the weekend to rotoscope 24,000 frames of Sinatra from the original footage. Not only did they have to separate out his head, but the roto—which was done using Apple's Shake—also included his eyes, neck, mouth, hat, and hat brim within the cut. "We cut out the stage set from the house [where the original footage was filmed], so we just had Frank standing there," describes Godfrey.

Within the original footage, Sinatra moves around somewhat, mainly in a front-to-back pattern. This action resulted in the size of his head becoming larger and smaller in camera. Lead compositor Jonson Jewell had to counteract this movement, or size differential, so that the singer's head remained at the same size and in the same position by stabilizing it and keeping it locked on that one plane. "With the holographic projection system, when a person walks back and forth, they get stretched in an odd way, and you end up with an element of distortion at the show that we wouldn't have had any control over," says Godfrey. "We just didn't want to risk that, so we kept him relatively still."

Jewell then worked around the clock mainly in Nuke (The Foundry) and a bit in Shake to convincingly attach the cutout head footage to the newly filmed but headless body double. This task was especially tedious: The slightest mismatch of the head and body would show up badly around the neck area. So Jewell tracked the body double's neck area and applied that to the head using Shake and Nuke, making minor readjustments every half second throughout the song. Animated shapes were used to patch up areas around the neck and collar, some of them used to create artificial shadows and lighting, which in turn gave the head the correct-looking volume and depth. At one point in the sequence, where the body movement and the head just would not match up, the team created a short morph sequence using a different section of Sinatra's head footage, which fit the body better.

Once the head was locked onto the body, the team used Shake to colorize Sinatra's face (the archive footage was black and white). They also utilized the Furnace tools that ship with Nuke to de-grain the footage and restore film damage inherent in the footage. The finished image was then added to a black background. Although Godfrey thought that keeping Sinatra in black and white during his hologram appearance helped situate him more in the era from which he came, the client opted for "living" color. "They wanted him to look as real as possible, as if he just came back from the dead," she explains.

During Sinatra's recent stage appearance, his image starts out in a picture frame with the original footage, and then the footage "steps" out of the six-meter frame, all the background disappears, and the holographic image, which is in color, performs the song. Two projectors provided the necessary brightness on stage. For the playout system, a Musion Eyeliner 3D holographic projection system, which required an HD static full-body camera shot of Sinatra on a solid black background, was employed. The system uses a technology similar to Pepper's ghost, an illusionary technique that makes a 2D image appear solid, in 3D, though to do so, the object, or in this case, the character, has to have some slight movement.
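The article names the tools (Shake, Nuke) but not the math. Mechanically, locking the head to one plane and gluing it to the tracked neck reduces to a per-frame scale-and-translate correction derived from 2D tracks, something like the bookkeeping sketched below. The track format and helper name are our illustration, not SquareZero's setup.

```python
def stabilize_and_attach(head_tracks, neck_tracks, ref_size):
    """Per frame: (1) cancel the head's own drift and apparent size
    change, then (2) pin it to the body double's tracked neck point.
    Tracks are (x, y, size) tuples, one per frame."""
    corrections = []
    for (hx, hy, hsize), (nx, ny, _) in zip(head_tracks, neck_tracks):
        scale = ref_size / hsize          # normalize apparent head size
        # After scaling about the head's center, translate onto the neck.
        corrections.append({"scale": scale, "tx": nx - hx, "ty": ny - hy})
    return corrections
```

Jewell's "minor readjustments every half second" would correspond to keyframing such corrections roughly every dozen frames and letting the compositor interpolate between them.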

Blast from the Past
While SquareZero and countless other facilities have composited heads onto body doubles in the past for commercials and other projects, Godfrey says there is a big leap from doing those sorts of projects to one like this. "You have more flexibility and can hide things behind an edit. Here, we couldn't do that," she says. "The hologram was life-size and appeared right in front of the audience's eyes, so we couldn't do the typical cheats that are done for TV and film. And, the shot had to be one take, continuous." SquareZero had been asked to bring other entertainers back to life, including Miles Davis and Freddie Mercury, but the necessary element, Godfrey says, is in locating footage of an entire song captured with a single camera. "What gets archived is just the edit from the multiple cameras, not the footage from each camera," she explains. "We were lucky to have found [the Sinatra] footage." Perhaps soon we will see more celebrities who are no longer alive performing digitally. For Sinatra, though, this was a very special one-time appearance.


Karen Moltenbrey is the chief editor of Computer Graphics World.




Education•Recruitment

The job market may prove challenging in 2010, but staying abreast of the latest tools and techniques is key

By Ken McGorry

As competition heats up and tools grow both more sophisticated and less costly, executives in the business of instructing new and veteran pros stress: "Education drives the job market!" Thus, as we enter 2010, education, in its multitude of forms, becomes an important topic, especially for those considering a new job. Here, industry experts offer their perspective on this topic, addressing associated SWOTs: strengths, weaknesses, opportunities, and perceived threats.

Mike Flanagan
President, Video Symphony
Burbank, California
www.videosymphony.com

Postproduction "career college" for budding video editors, audio engineers, and motion graphics artists. Emphasis on authorized Avid, Final Cut, Pro Tools, and After Effects job-oriented training in a traditional classroom setting.

Strengths: "The times and technologies are ever changing. There's always more to learn. Knowledge and the ability to learn quickly are what differentiate excellent workers from the 'also-rans.' Education drives the job market. There are many incompetent and marginal workers in post. Until they all leave the post marketplace, well-educated, competent workers will not have opportunities."

Weaknesses: "Many post companies don't 'get it' in that they need top-quality, educated staff to operate effectively. I think many companies are unnecessarily scared that if they train their staff better, the staff will demand more pay or leave. These companies are unwilling or unable to pay for their staff to improve. Maybe it's because these same companies themselves need to be educated about how to run their post businesses successfully. Other than the Hollywood Post Alliance, post industry resources for best practices are lacking."


Students at Video Symphony who become digital content creators can help shape content for online instruction.

Opportunities: "Making money from the 'chunking' of content—parsing out content in discrete slivers. Current TV calls them 'pods.' A good example is ringtones. These chunks of songs often 'ring up' more sales dollars than do the full songs. Chunking is more about communicating and informing than it is about entertaining. Communicating is growing far faster than we could have imagined (think cell phones, the Internet, Facebook and other social networks, IM-ing, texting, and Tweeting). There's an absolutely huge amount of show and news content waiting to be chunked and sold as tasty informative bites rather than as full entertainment meals.

"As just one example, the market for online learning is growing and will be huge. Education as a sector of the US economy dwarfs entertainment by several multiples. Digital content creators can play a huge role in shaping content for online instruction."

Threats: "The content market, and postproduction specifically, continues to fragment and decentralize. Far more post jobs exist now than in the past. What's threatened, though, are the very high-paying jobs because audience sizes (and, hence, revenues and budgets) for shows are declining. This trend is unalterable and is a threat primarily to post industry vets with high wage expectations and/or calcified learning curves."

Outlook for 2010: "Many folks in postproduction, or those who want to be, are stressed about jobs. Getting them. Keeping them. The job malaise will continue in 2010 due to a number of things: the weak economy, leading to a weak advertising market; FUD (fear, uncertainty, doubt) currently plaguing the entertainment industry; and continued fragmenting and decentralization of the post marketplace. Video Symphony will amp up its job-centric focus by publishing a how-to book, Hollywood Jobs, [from Flanagan] due out this month, and creating a sophisticated database-driven software system, code-named Career Aspirin, that will aid our graduates and others in finding good jobs."

Lynda Weinman
Co-founder/Executive chair of the board, lynda.com
Carpinteria, California
www.lynda.com

Offering online software training through the company's Online Training Library and DVDs to individuals, businesses, and academia.

Strengths: "Online education and training has distinct strengths: from convenient learning when and where a person needs and wants to learn, to saving companies expensive on-site training expenses. When used for students, it can be significantly less costly than text books in an academic setting."

Weaknesses: "There are definite advantages to face-to-face learning: having immediate interaction with an instructor, the ability to ask questions and have them answered by an expert teacher, and receiving immediate feedback on work. Online training works well as a supplement to that kind of training, where applicable. Since the dot-com crash and 9-11, both individuals and companies have smaller training and travel budgets, and educational conferences have folded due to high expenses and lack of attendees. As the economy has changed and technology has improved, the growth of online training has helped to pick up where these alternative methods of teaching have fallen short."

Opportunities: "The market for online-distributed content is burgeoning due to new formats and growing worldwide audiences. From broadband Internet access through gaming/media centers in the home, to iPhone and other mobile media in everyone's pocket, content is accessible from nearly every corner of the globe. New markets include foreign audiences as content becomes more accessible and the demand for software training grows."

Threats: "Many companies that offer content for sale online face the thievery and illegal distribution of their content through torrents by those who believe that all content should be free. While that's a never-ending and difficult battle, a key to the continued success for companies like lynda.com is the fact that the experience of being a lynda.com member can't be duplicated and isn't solely based on the training content alone."

Outlook for 2010: "Along with all other content and media, online education resources will continue to grow by increasing content and their customer base. There will be more courses, new topics, new kinds of courses, and more efficient types of content presentation. Improved technology will allow further collaboration with classroom teaching and better peer-to-peer and instructor-to-student video interaction. While the economy promises to improve, consumers will continue to be frugal with their budgets, and will continue to seek sources from which to improve their skills and keep competitive to ensure financial security."

Lynda.com is utilizing the latest technologies to improve online training instruction.

MEWshop students Katie Ainslie (standing) and Michelle Kim discuss film editing techniques, the focus of the educational facility.


Josh Apter
Owner/Founder
Manhattan Edit Workshop (MEWshop)
New York City
www.mewshop.com

Offering a full range of certified classes in the art and technique of film editing. Customized classes are designed to provide top-tier training to both professionals and aspiring editors.

MEWshop students Katie Ainslie (standing) and Michelle Kim discuss film editing techniques, the focus of the educational facility.

Strengths: "Our greatest strength as an educational facility is the ability to keep pace with rapid developments in content creation and to offer cutting-edge, intensive training in its evolving disciplines. Whether it's new workshops in DSLR filmmaking or niche classes in popular software plug-in sets, we can respond to the changing needs of our students while still offering the staples of certified training in Apple, Avid, and Adobe products. In recognition of our rapidly morphing technology environment, MEWshop was designed from day one to evolve our teaching methodologies to mesh with how students want to be, and need to be, trained.

"Where someone with a working knowledge of Avid or Final Cut Pro may not see the need to take a refresher course, especially in this economic environment, we feel that if we can offer something unique, be it an aesthetics-of-editing course or a targeted, low-cost class like the Filmmaker's Guide series (which focuses the technical aspects of After Effects on the specific needs of the filmmaker/film editor), we can appeal to a variety of student needs."

Weaknesses: "There will always be a need for the basic brick-and-mortar, instructor-led training that we provide at MEWshop, as the potential for a diminished experience through online and DVD-based training always exists. The endless variables when teaching online—bandwidth problems, software requirements, and media installation—make the ideal of a consistent training experience for each student nearly impossible. That said, we are expanding our offerings to online training and are testing different solutions to mitigate as many of these variables as possible. But if a student can come into a classroom and devote the time to learn with a certified instructor, we can guarantee that the software is configured, that the media is online, and, most importantly, that the experience fits the student's skill level. Our classes are small enough and the situations controlled enough that, beyond the certified curriculum, we can accommodate a variation in skill level by offering individual attention as well as alternate and additional exercises.

"DVD training and classrooms-in-a-book are completely different animals. For self-starters, they're great, but for students with attention-span issues (and I'm in that category), they're a potential disaster."

Opportunities: "MEWshop, backed by tech experts and working filmmakers, understands that the delivery model is changing and will continue to do so. Mobile content, 3D, and immersive environments are part of the creative deliverables, and we've adapted our training to address them. We offer a six-week intensive course that covers Avid, Final Cut Pro, and After Effects. There's a film-theory component to class almost every day, and students can create a reel with the guidance of a prominent film editor through our artist-in-residence program (Suzy Elmiger came to work with our December group). These students are clearly willing to make a serious commitment to the craft, and we've had amazing success with our graduates finding work; we now find employers coming back to us when they need new hires.

"The Manhattan Edit WorkForce program, where editors' reels are created and distributed to a number of top postproduction facilities, is a connection service that allows students to meet with industry leaders."

Threats: "With the barrier to entry so low today, everyone is a filmmaker (and an expert). This does not change the fact that the cream still rises, and my money is on those with brilliant ideas backed up by the foundation of a developed skill set and strict discipline. Education, whether instructor-led or self-paced, will always be the base from which the success of tomorrow's creators will be built."


Outlook for 2010: "Ten years ago, Mini DV and FCP put three chips into the hands of any filmmaker and an editing system on the desk (and soon every lap) of any editor willing to invest a few thousand dollars in their craft. In 10 short years, we see HD cameras for a fraction of the price, and a culture committed to expressing itself through visual media. That expression requires the organization, distillation, and artisanship of editing. As long as people shoot, there will be the need to cut, and providing training that keeps pace with this evolution has always been the goal of MEWshop. I look forward to everything that 2010 and the next 10 years will bring to the industry and culture of visual storytelling."

Chris Maynard
Owner/Operator
CMIVFX
Princeton, New Jersey
www.cmivfx.com

CMIVFX has witnessed increased growth in its online visual effects training programs.

Offering HD Training on Demand (HD-TOD), whereby customers can log in from anywhere in the world to access training videos. The programs CMIVFX covers include those pertaining to offerings from Autodesk, Avid, Apple, Adobe, Eyeon, Maxon, The Foundry (Nuke), Side Effects Software, and Pixologic (ZBrush).

Strengths: "Education is always going to be there. Its stability draws those individuals who need the consistency of job security. The visual effects and computer graphics industries are still on an incline, and education needs, whether free or commercial, still have to be satisfied at all levels of complexity. The key to success is making a better product than everyone else; in this industry, it can be quite difficult to stand above all the rest. Our clientele are astute individuals with a keen eye, which helps keep the quality of our materials extremely high. Ninety-three percent of our customers return to purchase from us again."

Weaknesses: "One weakness in the VFX educational vertical market involves organizations trying to make a quick dollar. Unqualified trainers flooding the market with random material can dilute the ability to turn a profit, and they often terminate their efforts, which can hurt others. Training is also subject to theft and the cross-pollination of training companies: one company may take learning materials from another and prosper from them, thus propagating a never-ending chain of technical dilution.

"Another weakness is user opposition to new instructional products for new markets. We may literally get cursed for trying to help users in their area of expertise. But by the time our second training video is complete for any vertical market, the thank-you letters start coming in from the very same people who led the opposition."

Opportunities: "One opportunity is teaching jobs. We offer jobs to anyone who shows an interest in training and has the technical merit to deliver the highest quality to our customers. Some of the greatest artists in the world can make great pictures, but they cannot teach to save their lives. This opens up a new vertical market for highly technical people with great communication skills. We can prosper from talent around the entire globe and generate more revenue."

Threats: "Laziness. This is hard work, and our biggest threat is laziness. If you cannot work 80 hours a week, then you might want to try something different. Traditional threats exist as well, such as intellectual property theft, piracy, global economic issues, political views abroad, and, of course, the increase of competition. The only defense is a good offense."

Outlook for 2010: "Our formerly scattered global user base is now robust and still growing. Our projections for the next year have doubled from two years ago. The expansion into new vertical markets and the increase in labor time have added to our projections. Our inventiveness keeps competitors at bay: we create innovative, high-quality learning materials. It may be hard work, but it ensures that any mistakes we make are only our fault, and this makes fixing mistakes much easier. While this doesn't always translate to sales directly and continuously, it does assure us a solid position in the market."

Ken McGorry is a consulting editor to Post magazine, CGW's sister publication. He can be reached at mcgorry@optonline.net.




For additional product news and information, visit CGW.com

HARDWARE

Displays: Energy-Efficient Desktops
NEC Display Solutions of America has announced three new desktop monitors in its AccuSync Series. The two widescreen displays and one standard-aspect display are well suited to small- to medium-size businesses. The energy-efficient, 19-inch AS191 and AS191WM and 22-inch AS221WM monitors boast up to 48 percent less energy consumption than their predecessors; EPEAT Silver ratings; and Energy Star 5.0 and TCO 5.0 compliance. All three feature ECO Mode technology with two energy-saving modes, tilt functionality for enhanced comfort, a contrast ratio of up to 1000:1, and a 5msec response time. The AS221WM is now shipping at a price of $249. The AS191 and AS191WM are shipping as well, and cost $199 and $189, respectively.


NEC Display Solutions of America; www.necdisplay.com

Display Tech: Mechdyne Mosaic
Mechdyne unveiled Mosaic display modules, scalable technology that enables single- and multi-screen AV applications, large-scale stereoscopic 3D video walls, and surround-screen immersive environments. The modules employ patent-pending technology to produce detailed stereoscopic 3D and true HD 2D imagery. Individual Mosaic rear-projected modules—each measuring 70 inches wide and 25 inches deep—achieve 1920x1080-pixel resolution and a no-frame design. Multiple modules can be stacked and tiled into virtually any configuration and overall resolution, with minimal impact on image quality. Mosaic display modules are targeted at research programs, medical imaging, command control, collaboration rooms, and presentation and entertainment environments. The Mosaic displays detailed 2D data, stereo 3D models, and virtual environments.

Mechdyne; www.mechdyne.com

3D Printer: Automated and Monochrome
Z Corporation has unveiled the first automated, monochrome 3D printer, the ZPrinter 350. The new 3D printer converts 3D data into physical models and offers automatic material loading, snap-in binder cartridges, and self-monitoring operation. The printer achieves 300x450 dpi resolution, up to 0.8 inch/hour vertical build speed, and up to an 8x10x8-inch build size. Integrated recycling of unused build material, office-safe build materials, aggressive dust control, and zero liquid waste further round out the ZPrinter 350, now available for $25,900.

Z Corporation; www.zcorp.com

Processor: Power-Efficient GPU
S3 Graphics has unveiled its 5400E GPU for feature-rich graphics processing, video decoding, and 3D rendering. The new graphics processing unit (GPU) is designed to combine graphics functionality, performance, power efficiency, longevity, and stability for high-quality embedded graphics applications. The 5400E takes advantage of OpenCL, a cross-platform standard for harnessing the power of a GPU's internal shaders to accelerate parallel computations in graphics, video, scientific, medical, and other high-performance computing (HPC) applications. The graphics processor's programmable, unified shader-architecture core and fixed-function rendering units deliver hardware acceleration for Microsoft DirectX 10.1, Shader Model 4.1, OpenGL 3.1, and OpenCL 1.0. Users can harness the Chrome 5400E's programmable shader cores to speed 3D simulations, 3D rendering applications, and other visual processing functions. OpenVG 1.1 support also boosts vector-based 2D graphics and video applications.

S3 Graphics; www.s3graphics.com
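For readers new to OpenCL, the sketch below shows the general shape of a host program that hands a data-parallel job, here a trivial vector addition, to a GPU such as the 5400E. This is a generic OpenCL 1.0 illustration rather than S3 sample code, and error checking is omitted for brevity.

/* vadd_cl.c -- minimal OpenCL 1.0 host program (generic illustration,
   not S3 sample code). Compile with: cc vadd_cl.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Pick the first platform and its first GPU device. */
    cl_platform_id plat; clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Compile the kernel at run time -- this is where the driver maps
       the C-like kernel onto the GPU's unified shader cores. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    /* Copy inputs to device memory; allocate the output buffer. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    /* Launch N work-items, one per array element, then read back. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %g (expect 30)\n", c[10]);
    return 0;
}

The same pattern (build a kernel, bind buffers, enqueue an N-wide launch) scales from this toy to the video and HPC workloads the 5400E is aimed at.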

SOFTWARE

Plug-in: Camera Mapper
Digieffects has released Camera Mapper, a plug-in for Adobe After Effects that enables artists to simulate a 3D scene from 2D stills or footage. Developed in close collaboration with author Mark Christiansen, Camera Mapper lets users isolate one or several objects in their footage, project those objects onto a separate layer, and pull that layer out of the background, creating the visual illusion of the object floating in front of the original footage. Camera mapping is a key feature of compositing applications such as The Foundry's Nuke, and while the process can be done natively in After Effects, it requires complicated workarounds. With the new tool, artists can realistically create the illusion that a still image is fully dimensional moving footage; they can subtly change the perspective of a shot or animate it over time. Camera Mapper, priced at $79, runs on both Windows and Mac and is compatible with Adobe After Effects 7, CS3, and CS4. A bundled offering with Red Giant Software's PlaneSpace is also available at a price tag of $229.

Digieffects; www.digieffects.com
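To make the underlying technique concrete, here is a toy pinhole-projection sketch; it is our illustration of the general camera-mapping idea, not Digieffects' implementation. Projecting a point on stand-in 3D geometry through the original camera tells the compositor which pixel of the source still belongs on that point; re-rendering the textured geometry from a moved camera is what produces the 2.5D parallax. The focal length and coordinates below are invented for the example.

/* camap_toy.c -- pinhole projection, the core of camera mapping
   (illustrative names and numbers; compile with: cc camap_toy.c) */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* Project a camera-space point onto the image plane of a pinhole
   camera with the given focal length (same units as the scene). */
static void project(Vec3 p, float focal, float *u, float *v)
{
    *u = focal * p.x / p.z;   /* perspective divide */
    *v = focal * p.y / p.z;
}

int main(void)
{
    float focal = 50.0f;                    /* e.g., a 50mm lens */
    Vec3 corner = { 10.0f, 5.0f, 100.0f };  /* point on a card 100 units away */
    float u, v;

    project(corner, focal, &u, &v);

    /* (u, v) indexes into the source still. Camera mapping stores this
       lookup per vertex; when the virtual camera later moves, the photo
       stays pinned to the geometry, yielding the 2.5D parallax effect. */
    printf("corner maps to image-plane coords (%.1f, %.1f)\n", u, v);
    return 0;
}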








