www.cgw.com December 2011/January 2012
Making Headlines Weta evolves its VFX techniques for the all-CG Tintin
$6.00 USA $8.25 Canada
Image courtesy of Chris O’Riley
December 2011/January 2012 • Vol. 34 • Number 8
Innovations in visual computing for DCC professionals
Features

COVER STORY
10 The comic-book series “Tintin” may be an old classic, but recently it has accomplished a number of firsts: the first animated feature directed by Steven Spielberg, the first animated feature produced by Peter Jackson, and the first animated feature created by Weta Digital.
By Barbara Robertson
20 Martin Scorsese’s Hugo features a wide range of visual effects, but the biggest accomplishment is the way stereoscopic 3D is used to tell a moving period story.
By Barbara Robertson
Dancing the Ice Away
26 Animators from down under get their groove on, providing fancy footwork for the computer-generated penguins appearing in Happy Feet Two.
Departments

2 Editor’s Note: Stereo 3D: A Visible Difference
Stereo 3D films are growing in popularity, and even though theaters have spent a great deal of money preparing for this new medium, at times there just weren’t enough 3D screens to accommodate the newest releases—a testament to the quality of the work being generated.
By Barbara Robertson
Road to Oscar
32 The year is nearly over, but the holiday box office is just beginning to heat up with tent-pole films. Find out what our industry has to say about this year’s visual effects and animated films.
By Karen Moltenbrey
42 Batman: Arkham City, one of the year’s top games, incorporates German Expressionistic cinematography within an interactive, expansive environment where CG villains run amok and CG heroes try to restore law and order.
By Martin McEachern
4 Products
The Foundry’s Katana 1.0. Dell’s mobile workstations. Dassault Systèmes’ cloud services. Boxx’s 3DBoxx 3970 Xtreme. Panasonic’s TH-65VX300U plasma display. AMD’s FirePro V4900. Imagineer/Boris FX’s Motion Tracking for Editors. The Foundry’s Ocula 3.0.
News
Third-quarter graphics shipments are up. Embedded graphics processors killing off IGPs.
SEE IT IN
December’s Post Magazine takes a look at the Strengths, Weaknesses, Opportunities, and Threats relating to Audio, the Business of Post, New Media, Directors & Filmmaking, Stereo 3D, and Training.
48 Recent hardware and software releases.
Look for CGW’s VES Awards Supplement in January. Also, visit CGW.com for Web-exclusive features.
ON THE COVER Weta Digital created Tintin’s CG characters, and then performed them using data captured from actors wearing head rigs that are part of a facial-capture system developed at the studio. See pg. 10.
Stereo 3D: A Visible Difference
The Magazine for Digital Content Professionals
EDITORIAL
KAREN MOLTENBREY Chief Editor
Just a few years ago, we were marveling at the quality, and the quantity, of the 3D films released in theaters. Yet in the rush to embrace 3D (and, of course, collect more at the box office), some studios simply jumped on the bandwagon with a half-hearted attempt at stereo. Audiences were quick to forgive, being just as anxious to see a 3D movie as the studios were to offer one. Not so today.

In 2010, there were fewer than 20 films released in stereo 3D. Not among them: Harry Potter and the Deathly Hallows, Part 1. Could Warner Bros. have made a lot more money with a 3D film? Absolutely. The brand was strong enough and the fans numerous enough to warrant it. But rather than put out a mediocre product, the studio decided to wait. A half year later, the wait was over. Part 2, which marked an end to the Potter saga, gave us a fitting farewell that embraced the true magic of 3D.

In 2011, we saw nearly 30 movies utilize the medium, some coming out on the heels of another. I had to see Pirates in 2D because less than a week after its debut in stereo, theaters began pushing it to their 2D screens to make room for the 3D version of Kung Fu Panda 2. Green Lantern and Cars 2 had the same problem, as did Harry Potter and Captain America. Nevertheless, the number of 3D-equipped theaters continues to grow.

In the coming year, however, audiences will have to pony up more dollars for 3D glasses. I hadn’t realized that the cost of the plastic eyewear (definitely an improvement over the old-style paper ones) had been subsidized by the studios. Now Sony is saying that it will no longer do so starting in May, about the time when its new Spider-Man film will be released. Other studios have not commented on their plans, but I expect more to follow Sony’s lead. So, in addition to the added admission price for a 3D film, moviegoers may have to dig a little deeper for the cost of the eyewear.
In Europe, viewers pay roughly $1 for a pair of reusable RealD glasses; in Asia, some plunk down a refundable deposit. Of course, there is a third option: designer 3D glasses. A number of designers are jumping at this trend, Oakley among them. And this past summer, Marchon3D began installing vending machines at cinemas that dispense designer 3D eyewear ranging in cost from approximately $20 to $70. At first, this sounded ridiculous, given that the glasses are worn in dark theaters—who would even notice them? But even though the glasses are geared for RealD movies, they can also be used with passive laptops, gaming consoles, and HDTVs.

With all the scheduled 3D releases in 2012, there is little question that moviegoers will bite the bullet and rent or purchase the necessary eyewear to see films like Men in Black III, Brave, The Amazing Spider-Man, The Hobbit, and more. Kicking off the new year is a new look for a classic: Beauty and the Beast 3D. Just a few months ago, Disney released the hugely popular Lion King (1994) in stereo. Initially planned for a two-week run, the film did so well at the box office—garnering approximately $94 million (in addition to the $826 million generated by the original)—that Disney extended its run. The studio also decided to reissue some other classics in 3D, including Finding Nemo (September 2012), Monsters, Inc. (January 2013), and The Little Mermaid (September 2013).

“Great stories and great characters are timeless, and at Disney, we’re fortunate to have a treasure trove of both,” said Alan Bergman, president of The Walt Disney Studios. “We’re thrilled to give audiences of all ages the chance to experience these beloved tales in an exciting new way with 3D.” Just don’t forget your glasses! ■
firstname.lastname@example.org • (603) 432-7568
Courtney Howard, Jenny Donelan, Kathleen Maher, George Maestri, Martin McEachern, Barbara Robertson
WILLIAM R. RITTWAGE
Publisher, President and CEO, COP Communications
Vice President of Marketing email@example.com (818) 291-1112
ADVERTISING SALES
MARI KOHN
Director of Sales—National firstname.lastname@example.org (818) 291-1153 cell: (818) 472-1491
Director of Sales—West Coast email@example.com (847) 367-4073
Sales Manager—East Coast & International firstname.lastname@example.org (631) 274-9530
Marketing Coordinator email@example.com (818) 291-1155
Editorial Office / LA Sales Office:
620 West Elk Avenue, Glendale, CA 91204 (800) 280-6446
CREATIVE SERVICES AND PRODUCTION
MICHAEL VIGGIANO Art Director
CUSTOMER SERVICE firstname.lastname@example.org 1-800-280-6446, Opt 3
ONLINE AND NEW MEDIA Stan Belchev email@example.com
Computer Graphics World Magazine is published by Computer Graphics World, a COP Communications company. Computer Graphics World does not verify any claims or other information appearing in any of the advertisements contained in the publication, and cannot take any responsibility for any losses or other damages incurred by readers in reliance on such content. Computer Graphics World cannot be held responsible for the safekeeping or return of unsolicited articles, manuscripts, photographs, illustrations or other materials. Address all subscription correspondence to: Computer Graphics World, 620 West Elk Ave, Glendale, CA 91204. Subscriptions are available free to qualified individuals within the United States. Non-qualified subscription rates: USA—$72 for 1 year, $98 for 2 years; Canadian subscriptions—$98 for 1 year and $136 for 2 years; all other countries—$150 for 1 year and $208 for 2 years. Digital subscriptions are available for $27 per year. Subscribers can also contact customer service by calling (800) 280-6446, opt 2 (publishing), opt 1 (subscriptions) or sending an email to firstname.lastname@example.org.
CHIEF EDITOR karen@CGW.com
Postmaster: Send Address Changes to
Computer Graphics World, 620 W. Elk Ave., Glendale, CA 91204 Please send customer service inquiries to 620 W. Elk Ave., Glendale, CA 91204
The Foundry Introduces Katana 1.0
The Foundry recently released Katana 1.0, a look development and lighting tool that replaces the conventional CG pipeline with a flexible, recipe-based asset workflow. In tandem with this release, Industrial Light & Magic (ILM), a Lucasfilm company, has purchased a site license. The software is already in use on upcoming productions; ILM made the investment to boost the production pipeline across its ILM and Lucasfilm companies. As a Katana site license holder, ILM will deploy the software in both its San Francisco and Singapore studios.

Katana is specifically designed to address the needs of a highly scalable, asset-based workflow: it allows assets to be updated once shots are already in progress; lighting setups, such as edits and overrides, to be shared between shots and sequences; multiple renderers to be used, with dependencies specified between render passes; and shot-specific modifications of assets to become part of the lighting “recipe” for shots, avoiding large numbers of shot-specific asset variants.

Furthermore, Katana is built from the ground up with the needs of modern productions in mind. Extensive APIs mean it integrates with current pipelines, shader libraries, and workflow tools, while its collaborative nature allows it to scale to meet the needs of the most demanding productions. The main attraction of The Foundry’s Katana stems from the flexibility of the product: it can produce incredibly complicated shots while allowing artists to retain control. Katana is backed by The Foundry, a provider of high-end visual effects tools, and has been production-proven on more than 20 shows at Sony Pictures Imageworks since 2004.
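The “recipe” idea described above can be illustrated with a small sketch: rather than baking out shot data, each shot stores an ordered list of operations that is re-evaluated against whatever assets currently exist, so asset updates flow into in-progress shots and shot-level overrides layer on top of shared sequence setups. All names below are hypothetical; this is not Katana’s actual API, just a minimal illustration of the workflow the article describes.

```python
# Illustrative sketch of a recipe-based lighting workflow.
# Hypothetical names throughout -- not Katana's actual API.

def evaluate_recipe(asset_db, recipe):
    """Re-run a recipe against the current contents of asset_db,
    so updated assets automatically flow into in-progress shots."""
    scene = {}
    for op, args in recipe:
        if op == "load":
            # Pull the *current* version of the asset at evaluation time.
            scene[args["name"]] = dict(asset_db[args["name"]])
        elif op == "override":
            # A sequence- or shot-level edit layered on top of the asset.
            scene[args["name"]].update(args["attrs"])
    return scene

# A sequence-wide lighting setup, shared by every shot in the sequence...
sequence_setup = [("load", {"name": "ship"}),
                  ("override", {"name": "ship", "attrs": {"key_light": 1.0}})]
# ...plus a shot-specific tweak that becomes part of this shot's recipe.
shot_overrides = [("override", {"name": "ship", "attrs": {"key_light": 1.4}})]

asset_db = {"ship": {"model": "ship_v12", "key_light": 0.0}}
scene = evaluate_recipe(asset_db, sequence_setup + shot_overrides)
# Publishing a new asset version and re-evaluating picks up the change
# without touching the recipe itself.
```

The point of the design is that the shot never stores a modified copy of the asset, only the list of edits, which is why a single asset update can propagate to every shot already in progress.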
Dell Takes a Terabyte from Mobile Workstation Storage
Dell’s Precision M6600 and M4600 mobile workstations, which launched in May, are now available with 512GB (SATA 3) Mobility solid-state drives. The M6600 also offers the Nvidia Quadro 5010M mobile professional GPU with 4GB of dedicated GDDR5 memory. The Dell Precision M6600 and M4600 are the first mobile workstations to offer 512GB SATA 3 Mobility SSDs, giving users read and write speeds of 500MB/sec and 300MB/sec, respectively. With the M6600 offering two full storage slots with up to two 512GB SSDs and one mini-card slot with up to 128GB, workstation users can have more than a terabyte of solid-state storage in a mobile workstation. The 512GB SSD and Nvidia 5010M are available with pricing starting at $1120 and $1640, respectively.
Dassault Looks to the Cloud
Dassault Systèmes (DS) recently announced a cloud-based partnership with Amazon.com’s Web Services arm that will enable clients to use its 3D design and manufacturing software remotely over the cloud. PLM and 3D software are traditionally memory-intensive, but by partnering with Amazon Web Services, DS can offer clients a preconfigured environment for remotely running 3D and PLM software without having to buy expensive hardware. DS is leveraging multiple AWS services to power its Version 6 software platform, providing high-performance, highly available resources via the Amazon Elastic Compute Cloud (Amazon EC2) for discrete compute environments. This expands the geographic reach of DS customers, regardless of their physical location. Customers can now easily access design content, while DS can store volumes of design data without having to support an extensive array of legacy platforms.

In other news, DS has made its new online Version 6 platform—offered as a subscription model—available over the cloud. Also, DS announced a strategic investment in Outscale, a startup providing next-generation SaaS for dynamic allocation of public cloud resources. Lastly, the firm has updated its Version 6 software to V6R2012, delivering an open, collaborative platform by broadening the value of digital assets into new solutions such as immersive retail store experiences and global production-system planning.
Q3 Graphics Shipments Up
According to Jon Peddie Research (JPR), the industry’s research and consulting firm for graphics and multimedia, estimated graphics chip shipments and suppliers’ market share for Q3 2011 are up 16.7% over last quarter and 18.4% over last year. Intel led the quarter with 36.5% growth, with Nvidia at 30% growth. Shipments during the third quarter of 2011 did (finally) behave according to past years with regard to seasonality, and were higher on a year-to-year comparison for the quarter. 2011 is still an unusual year for the PC and graphics suppliers, however, as businesses take their own path to recovery. The third quarter of the year is usually the growth quarter, and was this year, which is a positive sign looking forward. The growth in Q3 comes as a welcome change—but is it inventory building for the holiday season?

This quarter, Intel celebrated its seventh quarter of embedded processor graphics CPU (EPG, a multi-chip design that combines a graphics processor and CPU in the same package) shipments, and had very strong double-digit growth in desktops and notebooks. AMD lost overall market share, while Intel gained more compared to last quarter, and Nvidia declined due to its exit from the integrated segments. Year to year this quarter, Intel’s market share increased (9.5%), AMD broke even, and Nvidia slipped (-23%) in the overall market, partially due to the company withdrawing from the integrated segments. However, Nvidia gained 10.9% in the desktop discrete area.

At least one and, often, two GPUs are present in every PC shipped. A GPU can take the form of a discrete chip, a GPU integrated in the chipset, or a GPU embedded in the CPU. The average has grown from 115% (in 2001) to almost 160% GPUs per PC. Discrete graphics processing unit (GPU) chips and other chips with graphics are a leading indicator for the PC market.

Market shares shifted for the big three and put pressure on the smaller three, most of which showed a decrease in market share, as indicated in the chart on this page. [Chart: Growth from Q2 to Q3, 2001–2011. Growth from Q1 to Q2 was 2.27%; growth from Q2 to Q3 was 13.98%. The quarter’s 16.7% increase in total shipments is above the 10-year average of 13.9%.] Intel continues to be the overall market share leader, elevated by Core i5 EPG CPUs, Sandy Bridge, and Pineview Atom sales for netbooks. AMD lost market share quarter to quarter, and Nvidia lost share as well. Nvidia is exiting the integrated graphics segments and shifting focus to discrete GPUs, where the company showed a significant market share gain (30% quarter to quarter); Nvidia credits strong connect with new Intel Sandy Bridge notebooks. Ironically, Nvidia also enjoyed some serendipitous IGP sales in Q3 due to older AMD CPU sales in Asia. AMD’s overall graphics market share dropped 0.3% from last quarter, even though the company’s HPU-class Fusion APU processors are selling very well.

The quarter’s change in total shipments from last quarter increased 16.7%, above the 10-year average of 13.9%. AMD’s HPU quarter-to-quarter growth has been extraordinary, at an average of 58.4% for desktop and notebook, and Intel’s EPG growth was significant, at an average of 23.6%. This is a clear showing of the industry’s affirmation of the value of CPUs with embedded graphics, and is in line with JPR’s forecasts. The major, and logical, impact is on older IGPs, and some on low-end, low-cost add-in boards (AIBs). Almost 92 million PCs shipped worldwide this quarter, an increase of 8.8% compared to last quarter (based on an average of reports from Dataquest, IDC, and HSI).
The Foundry Unveils Ocula 3.0
The Foundry has rolled out Ocula 3.0, a significant upgrade to its stereo plug-in tool set for the Nuke compositing system. Ocula, used in production on groundbreaking live-action stereo projects including Avatar and Tron: Legacy, provides artists with a set of Nuke tools that assist with the integration of elements and help correct common stereo 3D defects. Ocula 3.0, the biggest upgrade of the product to date, brings new tools to help fix mis-focused camera pairs and retime in stereo, as well as a range of workflow tweaks and improvements to speed up day-to-day Ocula work. The new version is priced starting at $5400.
Boxx Goes Xtreme
Boxx Technologies has released the 3DBoxx 3970 Xtreme, pitched by the company as “the world’s fastest workstation for Autodesk Revit, SolidWorks, and other frequency-bound software applications.”

The 3970 XT—priced at just over $2900—features a performance-enhanced, second-generation (overclocked) Intel Core i7 processor, along with Intel Smart Response Technology, which enables quick access to media files and accelerated performance overall. Both technologies, currently unavailable in mass-produced workstations, enable the system to automatically learn which files users access frequently and copy them from the hard-disk drive to the solid-state drives (SSDs). The next time these files are requested, the system loads them from the SSDs rather than the slower hard drive, for faster booting, faster application loading, and accelerated performance.

Panasonic Launches 3D-Ready Pro Plasma

Panasonic revealed the TH-65VX300U, the newest addition to its family of HD professional plasma displays. The 65-inch display’s color reproduction approaches digital cinema standards, while its ultra-high-speed drive technology achieves clear, extremely detailed 3D video and enhances 2D content as well. The advanced drive provides gradation twice as smooth as that of conventional models, resulting in richer gradation expression in dark areas of the screen. The TH-65VX300U is also equipped with multiple customizable functions for postproduction, including a wide color gamut selectable from five setting types, an option to customize RGB placement, and added adjustment menus. Furthermore, independent RGB on/off functionality checks secondary colors or monochrome images, helping with individual color calibration. The display, priced at $6250, includes a waveform monitor to confirm the incoming signal.
AMD Fires Up FirePro V4900
AMD has launched the AMD FirePro V4900, which delivers unequalled performance for DCC and CAD professionals at an entry-level price point. By leveraging AMD’s most advanced graphics technology, including AMD Eyefinity2 technology, the AMD FirePro V4900 improves application performance. In fact, the AMD FirePro V4900 more than doubled the performance of competitive offerings in many CAD and DCC application tests. The AMD FirePro V4900 is designed to exceed the needs of graphics professionals. The GPU’s 1GB of 128-bit GDDR5 RAM drives memory bandwidth to 64GB/sec, allowing rapid data access, while Microsoft DirectX 11, OpenGL 4.2, and OpenCL support empowers users to render and manipulate models using the broadest range of tools and applications. Enhanced AMD Eyefinity and DisplayPort 1.2 technology enable six-screen multidisplay setups. The V4900 is available in select Dell and Fujitsu systems and HP workstations. As of November 1, it is being sold for $189 at select online resellers.
Imagineer, Boris FX Release Motion Tracking Bundle
Imagineer Systems has teamed up with Boris FX to launch the Motion Tracking for Editors bundle, a motion-tracking tool set designed to work with Adobe After Effects, Premiere Pro, Apple FCP 7, Motion, and Sony Vegas Pro. Available immediately for $299, the bundle includes Imagineer Systems’ newest release of Mocha AE v2.6.1 and the Boris Continuum Motion Tracker Unit from Boris FX. Mocha AE is a stand-alone planar tracking and roto tool. The Boris Continuum Motion Tracker Unit delivers matchmove, corner pin, witness-protection face blurring, and wire-remover capabilities. As a result of this collaboration, the new Motion Tracking for Editors bundle enables users to export tracking data from Mocha AE directly to Boris Continuum Complete, giving editors access to more visual effects capabilities within their host system.
Embedded Graphics Processors Killing off IGPs
According to Jon Peddie Research (JPR), in 2011, with the full-scale production of scalar x86 CPUs with powerful multi-core, SIMD graphics processing elements, a true inflection point occurred in the PC and related industries. As a result, the ubiquitous and stalwart integrated graphics processor (IGP) is fading out of existence. For several reasons, many people believed (and some hoped) that the CPU and the GPU would never be integrated: GPUs are characterized by a high level of complexity, with power and cooling demands, and dramatically different memory-management needs; GPU design cycles are faster than those of the CPU; the GPU has grown in complexity compared to the CPU, exceeding its transistor count and matching or exceeding its die size; and the x86 has steadily increased in complexity and power consumption, and become multi-core. With four times the number of transistors possible in the same space as the previous manufacturing node, Moore’s Law seems unstoppable. With the move to 32nm, and
now 28nm, integrating such complex and alien functionality is not only feasible, but a reality.

Jon Peddie, president of JPR, notes a new trend impacting discrete GPUs due to the combination of devices being offered with integrated graphics (IGPs, EPGs, and HPUs). “The integrated processors will impact GPU sales and change traditional sales patterns. The trend may even put the category in decline—at least so some believe,” he says, “but it’s not that simple. Nothing in the PC industry is.”

The EPG/HPU will revolutionize the PC and associated industries. The amount of computation available in systems with EPG/HPUs, given their size, weight, power consumption, and attractive prices, will upset the market dynamics like nothing since the introduction of the PC. Further details are available in “The Market Dynamics Created by the Embedded Graphics Processors Study” from JPR.
Images ©2011 Paramount Pictures.
The artists at Weta Digital turn their fastidious talents to their next film and create a remarkable animated feature to the surprise of everyone except themselves
By Barbara Robertson

If the young reporter Tintin, star of the comic-book series by the Belgian artist Hergé and, most recently, of an animated feature film, were to write about the making of that film for his newspaper, Le Petit Vingtième, he’d surely headline it: “First animated feature directed by Steven Spielberg! First animated feature produced by Peter Jackson!” And then we’d see the headline especially interesting to those in computer graphics: “First animated feature created at Weta Digital!”

Or, is it? It wouldn’t be much of a stretch to call large portions of Avatar—the most successful film of all time, also largely created at Weta Digital—an animated feature. After all, in much of that film, the Na’vi are animated characters in a virtual environment. And, as they did for Avatar, Weta Digital animators performed Tintin’s characters using data captured from actors wearing head rigs as part of a facial-capture system developed at the studio. Award-winning directors famous for live-action, action-adventure movies directed the actors for both films on a performance-capture stage set up by Giant Studios and “filmed” them with a virtual camera while watching a real-time, on-set composite.

“[Tintin] was really an evolution of what we’ve done for visual effects,” says Joe Letteri, senior visual effects supervisor at Weta Digital, who received Oscars for the work on Avatar, King Kong, and the two Lord of the Rings films he supervised. And therein lies one of those clues that Tintin and his dog Snowy so famously uncover: a clue to the reason critics are praising it as the most successful performance-capture film to date.

Letteri brushes off the distinction. “We rolled straight into what we had done for Avatar,” Letteri says. “We developed a new subsurface technique for the skin to have it look a little better, we developed some new facial software to add a layer of muscle simulation beyond what we could track and solve from the facial capture, and we developed
a new hair system that we also used on Rise of the Planet of the Apes. But, from a performance-capture point of view, we are still recording an actor’s performance. It was no different from mapping data to the Na’vi or an ape. We were making comic-book-inspired characters, not ones that looked like humans, but there’s always a level of animation and interpretation. We had big sequences in King Kong that were entirely computer-generated, most of the scenes in Avatar were entirely in a CG virtual world, and Tintin is in a virtual world all the way. For us, there’s no difference.”

Tintin was a success two months before opening in the US. The film’s approval rating on Rotten Tomatoes hovered around 86 percent as it topped the international box office during the first two weeks following its release in Europe, and by the end of the third week, Tintin had captured $159.1 million at the box office, even though it had yet to open in the US or many other regions.

Presented by Paramount Pictures and Columbia Pictures, the film is a rollicking action-adventure that sends Tintin and his dog Snowy dashing through Europe and Africa, on ships, trains, and planes, and even into the past; a comparison to Spielberg’s Indiana Jones films is apt. It stars Jamie Bell as Tintin; Andy Serkis as the whiskey-soaked Captain Haddock; Daniel Craig as Ivan Ivanovitch Sakharine, a pirate and a descendant of Red Rackham (whom he also plays); Toby Jones as the pickpocket Silk; Simon Pegg and Nick Frost as the bumbling detectives Thompson and Thomson; and Snowy, a little white terrier who is Tintin’s constant companion. All the characters are CG; Snowy is the only star performed entirely with keyframe animation. And yet, everything about Tintin, except for the fact that it is an animated film, has a live-action sensibility.
The characters have a cartoon patina, and their performances are a bit broader than a human’s, but the artists started with real performances and then referenced reality to add skin, clothes, and hair. For environments, the crew didn’t have live-action plates, so they
referenced the comic books for design and the real world for textures and dynamics. The film may trace its origin to comic books from the early 1940s, but this is not your father’s animated film. The attention to detail is amazing. “We ran this show exactly like every other show,” says Simon Clutterbuck, digital creature supervisor, “as if we were doing 100 shots in a visual effects movie. The focus on every texture, every motion, every simulation was intense. We never said, ‘Oh, that’s done,’ and locked an asset. We looked at everything every day. We had a process where things ran in parallel; we even built while lighting. If something in a shot needed to change, we changed it. All the way through production, shots constantly evolved and got better and better.”
A Model Production

A team of modelers that ranged between 40 and 60 people built the 4000 digital assets needed for the film, creating face shapes and deformations for the animators and adding fur and hair to the characters. Modelers at Weta Digital work within an Autodesk Maya pipeline. Many modelers also sculpt using Autodesk’s Mudbox, originally developed at the studio, and a few add Pixologic’s ZBrush to the mix. Modelers moved back and forth between hard-surface models and characters, although one team specialized in fur and hair, and another in creating face shapes and deformations. Also, the modelers gave the characters especially detailed hands. “We had amazing reference—an MRI and a life cast of a guy’s hands that we used to build new, high-fidelity hand models,” Clutterbuck says.

The main character, Tintin, had the most difficult face to model. “He’s a balloon with two dark eyes and an oval mouth,” says Wayne Stables, visual effects supervisor. “That worked for Hergé. But we had to develop a three-dimensional character.” The artists started with the 2D character, picking frames that, when combined like a flip book, created a three-dimensional look. Next, they translated that look to a rigged CG model, asked an actor to mimic Tintin’s expressions from the comic books, and applied those expressions to the 3D model. “Then, we began exploring changes,” Stables says. “We changed the model’s nose and gave Tintin cheekbones and a jaw. By the time we had a Tintin we liked, we had tried 1600 variations.”

Animators at Weta Digital started with performance-capture data for all the characters except Snowy, the little white terrier. Steven Spielberg directed actors who performed the characters on a motion-capture stage using a system at Giant Studios similar to the one James Cameron had used for Avatar.

In addition to the main characters, the modelers built hundreds of crowd characters. “We created new characters all the way to the end,” says Marco Revelant, models supervisor. “I remember adding a female character in the last month. We generate models from the same elements, even using the same topology for the main characters and the generic characters. The distinction between a main character and a crowd character is in the complexity of the facial system, not in the model itself. For the generic characters, we have an automatic way to generate a basic facial system.”

To rig the bodies, character technical directors worked with a generic model, which they call “genman,” a fully simulated muscle model. Two creature TDs rigged all the characters, one working on Snowy, the other on the human characters. “We hadn’t done a dog, so that was a full-time job,” Clutterbuck says. “But we had done a lot of development on bipeds for Avatar and had a good genman model. We used it on Apes, but we took it to an extreme for Tintin and built everyone from the same guy, all procedurally. We started with a surface model and used a process we call ‘warping,’ to fit
the whole rig from a base model to the new model, and it’s good to go. If we weren’t happy with something, we’d fix it on the template and push it out to all the characters.” For Avatar’s nearly naked Na’vi, the crew had developed Tissue, a simulation system, to build muscles, skin, and fat. “It’s a linear-elastic finite-element system,” Clutterbuck says, “a standalone thing with a front-end bolted onto Maya so artists can interact with it. We plug animation into the system and it adds the simulation on top; it’s our tool set for deformation work.” For Tintin, though, the crew pushed the system further to add dynamics to facial deformations driven by the captured data and keyframed animation. The developers plan to submit a technical paper to SIGGRAPH 2012 on the technique. “We wanted wobbly cheeks, chin folds, skin colliding with itself around the facial area,” Clutterbuck says. “To get that, we needed both dynamics and facial deformations. So we took what’s effectively a series of blendshapes rigged in the facial puppet and mapped them into the simulation system to add simulated elements to the face.” To control the constraint-based simulation, artists painted attribute maps on the facial puppet. Clutterbuck gives an example: “We have a [jowly] character named Barnaby, and we have the performance for his chin and lips, but we wanted those areas to interact with his wobbly chin. So, instead of trying to do two separate solutions and blend them, this system unified everything. We painted little patches around his lips, and the attribute map set up everything once. After that, the simulation was procedural. The solver can also wobble, wrinkle, and buckle all at the same time. The animators didn’t see any of this; they concentrated on the performance.”
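The general idea of pulling a dynamic solve toward rigged blendshape targets through painted per-vertex weights can be caricatured in a few lines. This is a deliberately tiny, hypothetical sketch (1D “vertices,” invented constants and function names); it is in no way Weta’s Tissue interface:

```python
# Toy sketch: each vertex is softly constrained toward its blendshape
# target, with the painted attribute map (0..1 per vertex) setting how
# strongly the simulation follows the rig versus moving freely.

def simulate_face(rest, blend_delta, attr_map, steps=200, dt=0.016,
                  stiffness=50.0, damping=4.0):
    pos = list(rest)
    vel = [0.0] * len(rest)
    for _ in range(steps):
        for i in range(len(pos)):
            target = rest[i] + blend_delta[i]      # pose from the facial puppet
            w = attr_map[i]                        # painted constraint weight
            force = stiffness * w * (target - pos[i]) - damping * vel[i]
            vel[i] += force * dt                   # semi-implicit Euler step
            pos[i] += vel[i] * dt
    return pos

# A fully painted vertex (w=1.0) settles onto the rigged pose, while an
# unpainted one (w=0.0) is left entirely to the free dynamics.
final = simulate_face(rest=[0.0, 0.0], blend_delta=[1.0, 1.0], attr_map=[1.0, 0.0])
```

The appeal of this setup, as described in the article, is that one painted map configures everything once and the solve stays procedural afterward.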
Perfecting Performances
The animators received data for the characters’ faces, bodies, eyes, thumb, index, and pinky fingers, captured performances that provided what animation supervisor Jamie Beard calls a “starting block.” Beard worked on Tintin for five years, supervising the previsualization and then leading the team of between 50 and 60 animators. “We offered the director an animated and a live-action world,” Beard says. “On set he could be a live-action filmmaker, blocking out the actors and directing them. Once captured, if the scene was perfect, we’d work on the performances only a limited amount to change them slightly if Steven [Spielberg] wanted to tweak them; directors being who they are
always have more ideas. But, we’d always go back to make sure we hadn’t detoured too far from the original essence. If we had given Tintin a bigger smile, we made sure he still had Jamie Bell’s performance.” Finding the balance between Tintin’s stylistic photorealism and reality was the challenge. Unlike for Avatar, in which the animators wanted the audience to see Sigourney Weaver in her avatar’s face, Tintin’s animators needed to apply the facial system to cartoony characters. “We had to cross a threshold,” Beard says. “We have the actors’ performances, but the look comes from Hergé. We wanted those performances, but we had to fit those performances on characters that didn’t look like the
actors. That’s when the artistry of the animators came in. We used the same fidelity of data captured from the small cameras that we used on Avatar only in a completely different way, taking Steven’s direction to fit the expressions and make an animated film. But, you can still see the performances they captured on the characters. We spent a lot of time learning how to move the muscle system for our cartoony humans.” In addition to the main characters, the animators also manipulated data captured for crowds. “Once they finished principal photography [performance capture] for the main actors and the shot was cut together, they would capture actors for the crowds,” Beard explains. “For the pirate battle, which needed
120 people, I had six actors. We’d do multiple passes with those six people to fill up the scene.” Similarly, to fill marketplaces in England and Morocco with crowds, the crew captured six people at a time. Because the entire world is digital, the animators also worked on other elements—cans of paint rolling on the floor, coins, ships in the ocean, and so forth, animating by hand all the props and vehicles that couldn’t be animated procedurally. “Procedural animation doesn’t lend itself well to comedy,” Beard says, providing an example. “We had a scene with sleeping sailors on bunks, and they all had to be flopping in their bunks, snoring. The bunks drifted around, and the chains moved independently. The animators who were assigned to that scene had their eyes roll back in their heads. It all had to look natural and slightly comedic. It was a real task.” Beard divided the team by shots, choosing those that reflected particular animators’ skill sets. “Some people were skilled at animating big, heavy scenes, so they would do everything in those shots,” Beard says. “And, I had some fantastic animators who had a really good handle on Snowy. One strong animator, Aaron Gilman, knew Snowy very well. Aaron has lots of energy, and he’s inquisitive, and the more I talk about him, the more I realize that he is Snowy. He fit the role perfectly.”

Back Story
Steven Spielberg discovered Hergé’s comic books and became a fan after a reviewer in France compared the first Indiana Jones to “Tintin.” In fact, when Spielberg and executive producer Kathleen Kennedy first approached Weta Digital about making Tintin, they planned to make a live-action film. “The idea was to have us create Snowy,” says Joe Letteri, senior visual effects supervisor at Weta Digital. “So we shot a test with someone from Weta Workshop dressed in a Tintin costume and started on a realistic digital version of Snowy. But, in the meantime, I talked to Peter Jackson and came up with the idea of having him on camera auditioning for Captain Haddock, with Snowy stealing the scene from Peter.” The scene was a tip of the hat to Hergé, who often had Snowy steal scenes from Tintin in his comics. Letteri first showed Spielberg the test that the director thought they were working on, and then the test with Jackson. “Steven said to Peter, ‘OK, we’re working together,’ ” Letteri says. Thus, the two directors/producers began exploring ways to make Tintin’s world together, and as they talked, Jackson began suggesting they make it digital. Spielberg was cautious. So, Letteri and Jackson arranged a test. “By then, we had finished King Kong, and we were getting ready for Avatar, so we asked Jim [Cameron] if we could bring Steven [Spielberg] over to have a look,” Letteri says. “Jim gave Steven and Peter the stage for two days during Thanksgiving break, and that got the ball rolling.” And it rolled all the way into an animated feature created with computer graphics, a film, given the antics of Snowy and the wild action scenes, that could never have been made with live-action photography. Spielberg and Jackson shot the film on a performance-capture stage at Giant Studios in Los Angeles using Giant’s motion-capture technology and the head-rig hardware and software that Weta Digital had developed to capture facial performances for Avatar.
“Steven was on stage directing, and Peter checked in remotely because he was still working on Lovely Bones,” Letteri says. “They would confer and work out from day to day what to do next. Peter stayed involved for as long as he could be, but he had to go off and prep Hobbit, and he was involved with that as we finished up. He’d still review things and give notes, but our daily calls were with Steven.” Letteri continues: “I think Steven enjoyed the process. It was freeing to go in and work like he was used to working with the actors and camera, to explore scenes quickly, and then he kicked back to us the things that take a long time. He didn’t have to travel or wait for sets to be built.” –Barbara Robertson
Caption: Modelers studied Hergé’s reference materials, found photographs of the objects he referenced, and then created period-appropriate CG vehicles in the same style as those in Hergé’s comic books.

Scene Stealer
Snowy, the only hand-animated character in the film, appears in most of the scenes with Tintin, sometimes even driving the story. Hergé based the dog on a wire fox terrier, and like that breed, Snowy is intelligent, active, and mischievous. As in the comic books, he’s a scene stealer. On the performance-capture stage, a puppeteer moved a toy version of Snowy for blocking and giving the actors proper eye lines. In addition, Beard put cutouts of printed images of Snowy on cardboard stands near Spielberg’s monitor to remind him that Snowy would play a big role. As in the film, at Weta Digital, Snowy often drove the story. “There’s a fine line between a photoreal dog and the caricatured animal in the comic books,” says Clutterbuck. “Finding that balance took a reasonable amount of time. We’d build him, animate, and render him, show him to Peter and Steven, and then fine-tune his proportions until we had a real animal that was also Hergé’s Snowy. It was a full-time job.” Inside, Snowy has cutting-edge technology. His canine anatomy required a new simulation model because, unlike humans and apes, dogs don’t have collar bones; the shoulder bone—that is, the scapula—is disconnected. A fascia, which is a connective tissue, surrounds groups of muscles, blood vessels, and nerves, and holds them in place. “We had to build a fascia system that was like a tissue layer that enveloped the muscles,” Clutterbuck says. “Now you can see the form of Snowy’s shoulder down to the elbow changing under the surface of the skin. Richard Dorling [lead software engineer for creatures] developed key muscle models that he attached to the skin to get the surface doing the right thing.” Before giving Snowy’s performance to the animators, the crew tried motion-capturing a dog. “We did only one motion-capture session and then realized it had to be animation,” Beard says. “You’d think motion capture would free you up, but the dog on the live-action set would be led by a trainer and would look up at the trainer. To get the real terrier attitude of Snowy, we had to animate him all the way. He became one of those characters the animators could really put themselves into. We kept thinking of things Snowy could do to keep people entertained.” In fact, Snowy’s antics are one justification for making an animated feature rather than a live-action film. For reference, the animators visited local dog clubs, brought dogs into the studios, watched videos, and, of course, read Hergé’s comic books because although Hergé based Snowy on a real dog, he was a comic-book character. “Snowy has human characteristics in the comics, particularly in his eyes and brows,” Beard points out. “This isn’t a world with a one-to-one relationship with reality.
We would start animating with him and find his nose had to be smaller or bigger, and then we would go back and animate him again. And it was hard to light a character with white fur.
His eyes would become two black dots, and we couldn’t see what he was thinking. So, we had to keep going back into it and reworking until we could read his expressions, making sure his fur wasn’t changing his performance.”
Hair Today
Revelant, who was in charge of the hair and fur team from the modeling side, has been working with fur at Weta Digital since King Kong. After Avatar, he and code department supervisor Alasdair Coull worked on a prototype system called Barbershop that Coull then took to completion. “The system we had was a problem because it had a long learning curve, and only a few people could use it properly,” Revelant says. “Barbershop really helped with Snowy.” Hergé’s Snowy has a simple design; he’s white, with no shading. “He’s like a cloud that a kid would draw,” Revelant says. “He’s defined by his outline. We found photos of the dog Hergé used as reference, but the problem was that Snowy doesn’t look like that dog. So, we had to figure out two things: What was under the fur, and how was the fur going to work. We’d take the model, apply the fur, look at it, change the model, and transfer the fur to it, back and forth.” With the previous system, the artists would have had to place guide hairs that multiplied into thousands at render time, and after rendering, the artists could not move any one of the resulting strands of hair. With Barbershop, each of Snowy’s million strands of hair could be a curve with which the artists could groom the terrier’s rough coat. Similarly, digital barbers used the system to perfect Tintin’s iconic coif. “The concept is that what you see in Maya is what gets rendered,” Revelant says. “You can see the full density in Maya, although artists can reduce the level of density as they refine the look. And, we use an OpenGL shading
scheme that gave us a good representation of the lighting while we groomed; it uses the same algorithm we use on our [Pixar RenderMan] side. We don’t interpolate hair; there is no creation of hair after we finish grooming.” When hair and fur groomers “brush” the hair, they move control points but at any time can convert the hair to curves and manipulate the curves, as with any Maya primitive. “You can basically use the brush to give parameters to the hair,” Revelant says. “You can comb it the length you want, straighten the curve, or curl it. You’re not painting on a map; all the information stays in the hair.” One advantage of the system is that it is independent from the underlying mesh, which means that changes to the UVs in the topology do not necessarily affect the fur. It also means that the artists could transfer the hair groom for one character to another and generate variations without much hassle. “We can even merge one groom with another and create a third one,” Revelant says. “We used that a lot for the crowd characters.”
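The “what you see is what you get” curve-based grooming idea can be sketched very loosely. This toy (1D control points, invented class and method names, nothing resembling Barbershop’s real interface) shows the two properties the article highlights: brushes edit the strand curves themselves, and grooms transfer between characters independently of the underlying mesh:

```python
# Toy model: every strand is its own editable curve; the renderer would
# consume exactly these strands -- no extra hairs interpolated from guides.

class Strand:
    def __init__(self, points):
        self.points = list(points)        # control points along the hair

    def comb_to_length(self, length):
        """Scale the strand from its root so the tip sits at `length`."""
        root, tip = self.points[0], self.points[-1]
        s = length / abs(tip - root)
        self.points = [root + (p - root) * s for p in self.points]

def transfer_groom(strands, offset):
    """Grooms are independent of the mesh, so the same strand set can be
    re-rooted on another character (here, just shifted) unchanged."""
    return [Strand([p + offset for p in s.points]) for s in strands]

groom = [Strand([0.0, 0.5, 1.0])]
groom[0].comb_to_length(2.0)              # the brush edits the curve itself
copied = transfer_groom(groom, offset=10.0)
```

Real strands are of course 3D curves with many more attributes; the point is only that all the information lives in the hair, not in a painted map.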
Hair Lights
In addition to having tricky grooms, Tintin and Snowy also had the most difficult hair to render. Tintin’s light-red hair could easily look too blond or too dark. And Snowy’s hair is white. They were the first two light-haired main characters Weta Digital had encountered, and their hair demanded new shading models. Jedrzej Wojtowicz supervised a team of 16 people in the shading department who, with the help of R&D, dealt with the issue. “The problem was the scattering of light,” Wojtowicz says. “Previously, most of the hair we created was dark, so we could have simpler models than we needed for Tintin. Imagine a hair fiber as a metal tube. If I shine a light on it, it reflects that light; the light bounces back in a straightforward fashion. That’s analogous to black hair. Light-colored hair is closer to a candle, a cylinder that’s partially reflective but allows light to travel through it. As some of the light travels through, it picks up some of the coloration and bounces out with a different color. The rest of the light travels through in a straight line and absorbs some color. So the problem was how to model the interaction between hundreds of these highly light-scattering hairs. What does the light that picked up color from the first hair do when it bounces into another hair?” And that’s only part of the problem. As the light propagates through a volume of hair, the color it absorbs varies depending on how rough or shiny the hair is. Rougher hair scatters
light in more directions than smooth hair; the energy spreads and imparts a different quality and amount of light to neighboring hairs. “This happens in real life,” Wojtowicz says. “Our goal was to imitate it as best we could using materials we can generate by studying photography and by doing spectral measurements. Nature sits as a precedent; that’s why we attack things in a physically based way. If we have to make assumptions after the fact, we will.” To solve the problem, the shader developers moved from a model based on light interacting with a single hair fiber to a dual-scattering model. And then, they found ways to create shadows within the volume. “We had worked on scattering the light between the hairs, but what if the character’s hand blocked half the hair?” Wojtowicz questions. “How do the scattering and absorptive techniques work with our shadowing techniques? Each hair had to ask, ‘How exposed to light am I? How deep in the volume?’ ” To move the hair based on the characters’ actions or on elements such as wind in the environment, the character team used Maya nCloth for dynamic simulations, along with various other methods. “We had different models for different things,” Clutterbuck says. “Hair in the wind took one simulation approach. Snowy took another. And, Barbershop has a deformation interface built into the grooming tool, so we can deform the hair any way we want. For a shot when Tintin walks past a mirror and combs his hair with his hand, we built an animation puppet that we plugged into the animation system to deform the hair. We used a bit of everything.”
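A toy calculation shows why dark hair tolerates a single-fiber model while light hair does not. All numbers below are invented for illustration; the published dual-scattering technique is far more involved, but the core intuition is that forward-scattered light filters through many fibers, picking up the hair color at each crossing:

```python
# Fraction of light (per RGB channel) surviving `depth` fiber crossings,
# where each fiber forward-scatters `scatter_fraction` of the incoming
# light tinted by the fiber's own color.

def transmittance_through(depth, fiber_color, scatter_fraction=0.8):
    return [(scatter_fraction * c) ** depth for c in fiber_color]

blond = [0.95, 0.85, 0.60]   # made-up albedos for light-red/blond hair
black = [0.10, 0.08, 0.06]   # made-up albedos for dark hair

# After four fibers, light hair still passes plenty of (warm-shifted)
# light, while dark hair passes essentially none -- so multiple
# scattering dominates the look of light hair only.
deep_blond = transmittance_through(4, blond)
deep_black = transmittance_through(4, black)
```

This is the crux of Wojtowicz’s “metal tube versus candle” comparison: for black hair the multiply-scattered term is negligible, so a single-fiber reflection model suffices.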
Caption: Weta Digital’s hair groomers controlled coifs, coats, and beards with a new “what you see is what you get” system called Barbershop. Tintin and Snowy’s light hair caused researchers and character effects TDs to devise new shading models to more accurately scatter light through the volumes.

Skin Tight
Weta Digital artists applied the same degree of attention to detail to the characters’ digital skin and other textures in the Tintin environment; this process, however, derived from the physical world, not the digital. Gino Acevedo, creative art director and textures supervisor, devised the technique for Avatar and enhanced it for Tintin: He makes life casts to capture fine details, and then scans the results into Adobe’s Photoshop to make displacement maps. For Avatar, Acevedo used a material made from seaweed. For Tintin, he switched to a silicone-based material that he says captures 30 percent more detail than the material he had used before. “I made a huge library of skin patterns—faces, elbows, knees, backs, fronts, butts, feet,” he says. “And the great thing about the process is that it works for rocks and trees. We used it a lot for the tree bark. I’d take my little bucket of silicone and slather it on the sides of trees, then peel it off. It works incredibly well—so much better than scanning.” To capture textures for Tintin’s face, Acevedo started by painting a thin layer of the silicone material on someone with what he calls “interesting skin,” leaving the volunteer’s nose open. The material sets quickly, and once set, he applied plaster bandages over it to create a model of the face. Then he removed the plaster cast, which doesn’t stick to the silicone, carefully peeled off the silicone, and placed the thin layer of silicone in the plaster cast, which acted as a cradle. Next, Acevedo brushed a two-part mixture of urethane into the negative face cast and sloshed it around until it set. “I usually do a couple of layers to build up the thickness and create a shell,” he says. “Then I reinforce it even more with a rigid polyurethane foam that I pour into the back. It takes up the space and sets up in a few minutes.” When Acevedo removed the plaster bandages and peeled the silicone skin from the urethane, he had a perfect cast of the person’s face, “every nook and cranny,” he says. But, he wasn’t done yet. Next, Acevedo mixed a transparent silicone material, the same type of material used for animatronic puppets, until it was as thick as honey, and poured it over the face cast. “I prop it up and use an air hose to blow [the silicone] around to get an even consistency,” says Acevedo. “When I come back in the morning, it’s cured. When I pull it off, from the top of the forehead down, I get a skin the thickness of a latex glove. It’s a copy of the face. If you hold it up to the light, you can see all the skin detail.” The next task was to digitize the silicone skin. “We cut darts into it to lay it on a flatbed scanner,” Acevedo says. “It looked like a texture map.” Even so, it wasn’t completely flat, so they modified the scanner. “We cut pieces of Plexiglas to build a wall around the top of the glass and filled the void with baby oil,” Acevedo explains. “We put another piece of glass on top and got perfect scans: 8k-resolution maps with incredible detail.” Then, in Photoshop, artists removed any dust, scratches, and air bubbles, and amped up the contrast to create the displacement maps. “For the most part, though, the scans were 85 percent ready to go,” Acevedo says. “We saved them online in a library for the artists. When we started a character, say Captain Haddock, we would look at all the scans of people with crow’s-feet and pick one. Then, in Mari, our 3D paint program, we would move the texture around
and paint the displacement onto the model.” Tintin, who has younger, smoother skin, created special problems. “Tintin aged all of us, but I think what we ended up with looks good,” Acevedo says. He explains: “People with perfect skin are very difficult. He’s a redheaded kid, so we thought maybe he should have freckles, but he looked too much like Howdy Doody. So, we started studying young people’s skin to find some details we could use. Tintin now has little scars, like maybe he had a little chicken pox, and very subtle freckles you don’t notice when you first see him, but if they weren’t there, you’d know.” They also experimented with his skin color. “We had different masks for his cheek area to give him a rosy blush from time to time,” Acevedo says. To develop shaders, the team started with those used on Avatar. “Even though Jake was
blue and Tintin close to pink, we knew the specular qualities of the skin, the technical setup and structure, and how to exploit RenderMan in the best way,” Wojtowicz says. “We could transfer all that. All the characters then veered from that, but at their core, we started from a unified base in terms of the technical structure.” A new subsurface scattering model helped give the fleshy characters in Tintin a more realistic look, and even helped Snowy. “We had used a dipole model through Avatar,” Wojtowicz says. “That gave us shallow scattering. The new model allowed us to scatter light at a deeper level for different extremes. We could get good-looking candles and have dark-skinned characters, as we do in Africa. It also gave us the ability to give Snowy’s ears that nice pink glow; if we backlit characters, the light would scatter in a more aesthetically pleasing way.” The research into the new subsurface scattering model resulted in a SIGGRAPH 2011 technical paper titled “A Quantized-Diffusion Model for Rendering Translucent Materials” by Eugene d’Eon and Geoffrey Irving.

Caption: A multi-step method that begins with life casts resulted in libraries of displacement maps that artists could draw from to produce skin textures for characters ranging from craggy Captain Haddock to youthful Tintin. The artists captured tree bark and other textures from the real world, as well.
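One core idea in the quantized-diffusion work cited in the article is expressing the radial diffusion profile of subsurface light as a weighted sum of Gaussians, where broader Gaussians carry light deeper. A minimal sketch (the lobe weights and variances below are made up, not values from the paper):

```python
import math

# R(r): how much light re-emerges at distance r from the entry point,
# approximated as a weighted sum of normalized 2D Gaussians.

def gaussian_2d(v, r):
    return math.exp(-r * r / (2.0 * v)) / (2.0 * math.pi * v)

def diffusion_profile(r, lobes):
    return sum(w * gaussian_2d(v, r) for w, v in lobes)

# A single narrow lobe behaves like shallow, dipole-style scattering;
# adding a broad lobe lets light travel much farther before exiting
# (the "deeper level" the article describes, as in a backlit ear).
shallow = [(1.0, 0.002)]
deep = [(0.6, 0.002), (0.4, 0.05)]

near_s, far_s = diffusion_profile(0.01, shallow), diffusion_profile(0.3, shallow)
near_d, far_d = diffusion_profile(0.01, deep), diffusion_profile(0.3, deep)
```

Close to the entry point both profiles are bright; far from it, only the profile with a broad lobe still returns appreciable light.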
Costume Department
All the characters except Snowy, of course, wear period costumes, and 15 people worked on those digital costumes, creating patterns for all the garments, dressing the characters in multiple layers of clothes, and simulating the movement. “When you look at Tintin, you forget that the guy is wearing a three-piece suit,” Clutterbuck says. “It’s just there, and you expect it to do the right thing. But, it represents years of work.” The studio used nCloth in Maya for the simulation, augmented with proprietary software. “We’ve never done clothing to this scale,”
Clutterbuck says. “The Na’vi wore loincloths. We started thinking that if we can’t see a shirt under a jacket, we wouldn’t need to simulate it. But you don’t get the right look. So, all the clothes are real; they all have dynamics. We solved the shirt, under the jumper, under the jacket, altogether.” For cloth textures, Acevedo scanned materials directly. “We had a wardrobe department that made the costumes and put them on models so the creatures department could take videos of the clothes and see how the different types of material moved. We did scans of those materials and used them for the textures.”
Hergé’s World
The artists took as much care with the environments as they did with the characters, carefully creating a world that respected the world Hergé had drawn. This was possible in part because, in addition to the comic books, Hergé’s [Georges Prosper Remi’s] estate gave Weta Digital access to the artist’s original references. “Hergé had a realistic style, but quirky,” Letteri says. “The way he worked was similar to the way we work as visual effects artists. He’d gather all this reference and create, say, a tank that would be a mix of a couple of tanks he liked. We saw his old photos, so we would try to find the objects he photographed. We looked for additional photos as well. We’d figure out the way he drew the object, and then fill it out in three dimensions. It was a really good project.” Tintin’s apartment, for example, which the artists modeled and textured to match artwork from the comic book, has a phone based on the phone Hergé used as a reference. The cars, the street where Tintin lives, the market are all part of the same European style that Hergé used. “We based everything on reality,” says
Stables. “If we don’t have reference for something, it doesn’t exist.” At the VIEW conference in Turin two days before the film opened in Italy, Stables demonstrated the crew’s determination to match Hergé’s world by overlaying a 3D building from the film on a page from the comic. The two matched perfectly. “The assets in this film represent a huge effort from the research and modeling side,” Revelant says. “We have a way to dress the sets procedurally, but generally we hand modeled everything. We went through all the panel art to find the buildings Hergé drew, and looked for references for buildings with the same style and shape. When you go to that level, procedural is not an option. You want to do it right.” Stables supervised much of the work in Tintin’s apartment, inside a ship, and an exciting chase sequence through a marketplace, but the film also puts Tintin on pirate ships and in the middle of a pirate battle. Another supervisor, Keith Miller, handled the neighborhood outside Tintin’s apartment, several shots of a seaplane taking off and flying through a storm, and 85 shots in the pirate battle. All told, five visual effects supervisors split the work on the film. “Water was the most challenging,” Miller says, “particularly for the pirate battle. We tried to keep it as photoreal as possible. The previs stylization was non-physical, so we tried to maintain that character yet preserve the natural aspects of water.” To do that, the team updated its Fast Fourier Transform (FFT) library with new algorithms to simulate the waves and created Smoothed-Particle Hydrodynamics (SPH) simulations for the cresting foam. 
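In the same spirit as the FFT wave synthesis mentioned above, though reduced to one dimension and computed with a plain inverse DFT, here is a sketch. The spectrum shape and constants are invented (a simple 1/k² falloff standing in for an oceanographic spectrum), and a real implementation would be 2D and use an actual FFT:

```python
import cmath, math, random

def ocean_heights(n=64, seed=1):
    """Build a random-phase spectrum with 1/k^2 amplitude falloff,
    then inverse-transform it into a spatial height field."""
    rng = random.Random(seed)
    spectrum = [0j] * n
    for k in range(1, n // 2):
        amp = 1.0 / (k * k)                        # energy falls with frequency
        phase = rng.uniform(0.0, 2.0 * math.pi)
        spectrum[k] = amp * cmath.exp(1j * phase)
        spectrum[n - k] = spectrum[k].conjugate()  # Hermitian => real heights
    # Plain inverse DFT; an FFT does the same job in O(n log n).
    heights = []
    for x in range(n):
        h = sum(spectrum[k] * cmath.exp(2j * math.pi * k * x / n)
                for k in range(n))
        heights.append(h.real / n)
    return heights

waves = ocean_heights()
```

The attraction of the spectral approach is that the wave statistics live entirely in the spectrum, so the look can be art-directed (as the previs stylization was) while the synthesis stays cheap and tileable.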
“We used [Exotic Matter’s] Naiad for hero simulations and interactions when we’re disturbing surfaces with sinking objects,” Miller says, “as well as our own Synapse software.” Concept art from Michael Pangrazio, an artist whose first matte paintings were for Star Wars: Episode V – The Empire Strikes Back in 1980, and who worked as an art director at Weta Digital on several live-action films starting with King Kong, helped everyone visualize the world they wanted to create. “When you look at his work, it seems plausible,” Stables says, “like a day I could photograph.” Even concept art needed to look real. “I felt like I was making a live-action movie,” Stables says, “like I was making an Indiana Jones film, even though we were animating. The way we approached the show—from effects, to simulation, to lighting, to the camera—was to base everything in a plausible, realistic way, with the idea we could take liberties. Steven [Spielberg] is a live-action director.
His world has been in live action and film, and live action is a world we understand. The fact that we’re using animated characters and we aren’t filming backgrounds didn’t make any difference. We’re composing and lighting as though we were on a live-action film. The biggest issue for me, though, was the interiors. We had to push our indirect illumination.” Using RenderMan, the lighters sent rays inside a point cloud, which was a simplified color version of a scene. “Then for final beauty renders, the surface shaders did a lookup into the point cloud to do the indirect illumination,” Stables says. “For shadows, we used our PantaRay to generate big point clouds. When the shader executes the final beauty pass, the specular looks up into the point cloud, as well. It’s not a mirror type of reflection. We weren’t doing caustics; we weren’t bouncing specular around. But we were getting a glossy reflection.” The test case was a sequence that takes place within a ship’s corridors. “We couldn’t get away with just diffuse light,” Stables says. “We had to account for specular light. We couldn’t do the kind of cheating and magic lights we might have done in CG. We didn’t want to, and also, Steven Spielberg is extremely particular about lighting.” The indirect specular and indirect diffuse lighting were especially important for lighting the characters. “Because specular is angle-dependent, it’s really the main component that allows you to read the shape of an object,” Wojtowicz says. “So a lot of our look development centered around dialing in the specular qualities to their best, especially with Tintin. In the comics, his face approximates a sphere, and to be faithful to a degree to that, he’s geometrically simple.” The more haggard characters, like Haddock and Sakharine—older, more mischievous, with interesting geometry in their faces—are easier to light. Tintin’s simple, youthful face gave the lighters nothing to hang shadows on, no angles. 
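The bake-then-lookup scheme Stables describes might be caricatured like this. All names, the falloff, and the 1D positions are purely illustrative; this resembles neither RenderMan’s nor PantaRay’s actual interfaces:

```python
# Toy version of point-cloud indirect illumination: bake a simplified,
# directly lit version of the scene into colored points, then let the
# beauty-pass shader gather nearby points for bounce light.

def bake_point_cloud(surfaces):
    """Each baked point: (position, direct-lit color)."""
    return [(pos, color) for pos, color in surfaces]

def indirect_lookup(cloud, shade_pos, radius=2.0):
    """Distance-weighted gather of baked color near the shading point,
    a crude stand-in for a real point-based gather query."""
    total, weight = [0.0, 0.0, 0.0], 0.0
    for pos, color in cloud:
        d = abs(pos - shade_pos)
        if d < radius:
            w = 1.0 - d / radius
            total = [t + w * c for t, c in zip(total, color)]
            weight += w
    return [t / weight for t in total] if weight else [0.0, 0.0, 0.0]

# A point midway between a red wall and a white wall picks up a
# reddish-tinted bounce from the baked cloud.
cloud = bake_point_cloud([(0.0, (1.0, 0.1, 0.1)), (2.0, (1.0, 1.0, 1.0))])
bounce = indirect_lookup(cloud, shade_pos=1.0)
```

The design win is the same one the article notes: the expensive scene is collapsed into a cheap proxy once, and every beauty-pass shading point performs only lookups, for diffuse bounce and, with directional filtering, glossy specular as well.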
“We had to squeeze details from a wide array of techniques, and one of those was having an intricate specular response,” Wojtowicz says. “If we were to put Tintin in his apartment with its walls of brightly colored wallpaper, and put a couple of hot light sources at either end, the entire room would light up and wash him out with all the diffuse light contribution from all the angles in the room. So, if we don’t have a specular reflection, we lose his shape. We even used indirect specular in exterior scenes when we needed to increase the visual complexity of an object moving through the scene, or the camera moving through the scene. We were more selective because it’s a bit more expensive in terms of
render time, but we did use it.” Because Tintin chases through several countries during the film, the lighters faced situations ranging from the desert in the middle of the day to overcast oceans, and all the lighting needed to interact in a consistent manner with the new hair shading models and the new subsurface scattering models for the skin. All this attention to detail—the new muscle system for the characters’ faces and Snowy’s shoulders, capturing skin textures, new hair and fur systems, new shaders for hair and skin, the 1600 variations of Tintin that it took to produce a character that looked right, the research into reference materials and research into scientific methods, and more—combined to make a film that critics such as Variety’s Leslie Felperin praise: “The motion-capture performances have been achieved with such exactitude they look effortless, to the point where the characters, with their exaggerated features,
almost resemble flesh-and-blood thesps wearing prosthetic makeup.”

The challenge for the water-simulation team was in creating photoreal water in a comic-book style. An updated Fast Fourier Transform library for the waves, Smoothed-Particle Hydrodynamics for cresting foam, Exotic Matter’s Naiad for hero interactions, and Weta’s own Synapse fluid-simulation software helped.

Asked how he was able to keep the characters in Tintin out of the notorious uncanny valley, Letteri’s answer is, “We didn’t try. We weren’t thinking about it. To tell you the truth, the question only came up when other people started asking about the movie. For us, these are just characters we like to watch. They either work or they don’t, and if they don’t work, you can call it whatever you want. When you’re working on a film, you’re focusing on the specifics. Is that eyelid doing the right thing? Is that lip doing the right thing?” But certainly the studio’s experience with live-action films, with the rigors of matching the real world and often substituting virtual for real, had an effect. “In live-action films, when you have a visual element that isn’t real, it’s becoming easier to create the reality and what’s around it digitally,” Letteri says. “The whole shot becomes digital, and most people don’t know the difference—and that’s the interesting part. It doesn’t matter. So, it’s hard to define the lines these days. In a way, that’s what Jim [Cameron] was trying to do with Avatar. There should be no barrier moving between these different worlds.” “But,” Letteri continues, “live-action visual effects ground you. You have a photographic plate. You judge everything by the pixels next to it. You know when it doesn’t work. And I think that was the hardest thing about [making an animated film]. If you’re going to try to make it look real, you need a touchstone for reality. In a world that’s completely digital, it becomes easy to convince yourself that something looks good because it looks better than the last time you saw it. But if you put it next to something real, it doesn’t [look as good]. So we couldn’t let ourselves be convinced. Because we come from visual effects, we strive for accuracy, to make everything believable. We photographed lots of reference. We constantly judged against something real. When we needed to know what Tintin’s hair looked like wet, we persuaded someone with red hair to cut it like Tintin’s and soak his head in a barrel of water.” There you have it. If you want to stay out of the uncanny valley, soak a redhead in a barrel of water. And then hire the best artists and researchers you can find, ones who work meticulously for years to make the world on the movie screen seem real.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.
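The water team’s FFT-based waves (mentioned in the caption) rest on one idea: describe the ocean as a spectrum of complex wave amplitudes, then inverse-transform that spectrum into spatial heights. A one-dimensional toy sketch, not Weta’s library; the spectrum values are invented, and production versions work in 2D with a true FFT and a physically based spectrum.

```python
import cmath
import math

# Toy 1-D version of the FFT-ocean idea (illustrative only). Waves are a
# small spectrum of complex amplitudes per wavenumber; an inverse DFT
# turns that spectrum back into water heights across the patch.

N = 8                                   # samples across the patch

def height_field(spectrum):
    """Inverse DFT: complex wave amplitudes -> water heights."""
    heights = []
    for n in range(N):
        h = sum(spectrum[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N))
        heights.append(h.real / N)
    return heights

# a single wave component at wavenumber k=1 (invented amplitude)
spectrum = [0j] * N
spectrum[1] = 4 + 0j

heights = height_field(spectrum)        # one cosine across the patch
```

Animating the ocean amounts to rotating each complex amplitude over time before transforming, which is why the FFT formulation is so attractive: the whole surface updates in one transform per frame.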
December 2011/January 2012
Stereo 3D•Visual Effects
VISUAL EFFECTS ARTISTS PUSH DEEP INTO CINEMA HISTORY TO HELP MARTIN SCORSESE CREATE HUGO
MAGIC MAN
BY BARBARA ROBERTSON
Heralded as the most artistic use of stereo 3D since Avatar, and perhaps even including Avatar, Martin Scorsese’s love letter to filmmaking takes place in 1930s Paris, as seen through the eyes of a boy and realized as if filmed on an early 20th century movie set. Hugo, based on the award-winning children’s book by Brian Selznick, stars Asa Butterfield as Hugo Cabret, the orphaned son of a clockmaker who now lives in a secret part of a Paris train station. Hugo’s father left him a broken automaton, and Hugo believes that if he can repair the machine, a small mechanical man, he might bring back something of his father. To operate the automaton, though, he needs a key, and as if by magic, Hugo meets Isabelle (Chloë Grace Moretz), a girl with the key. But, the real key to the story’s secrets and to the filmmaker’s vision is through Isabelle’s godfather, a toymaker named Georges (Ben Kingsley). The toymaker, we will realize, is Georges Méliès, a pioneering filmmaker who instilled the movies he made between 1896 and 1914 with cinematic versions of the illusions he had created in his magic theater shows. He invented special effects. But, driven out of business by larger studios, Méliès became a toy salesman at the Montparnasse train station. In the beginning of the film, we see a vision of Paris enhanced, as is much of the film, with visual effects used to mimic and augment traditional special effects. In an aerial shot of the city from above the Arc de Triomphe, time-lapse photography of traffic on the 12 streets that radiate out from the center circle gives the sequence a mechanical quality. As the camera pans past the Eiffel Tower, we see the hint of a clock mechanism. “We wanted to plant something in your head so the later dialog will make sense,” says Rob Legato, second unit director and visual effects supervisor.
The later dialog is a bit of philosophy Hugo shares with Isabelle: that machines are never built with extra parts; that all machines have only the parts they need to run and no more. He posits that if the world is a machine, he must be a part, which means there is a reason why he exists.
ILM artists created the feeling of Paris as a mechanism by using their proprietary Zeno pipeline, which includes Autodesk’s Maya, Adobe’s Photoshop, and other software, and drew on Luxology’s Modo to create streaks of traffic on Parisian streets.
Images ©2011 GK Films, LLC. Photos: Jaap Buitendijk.
Artists at Pixomondo created most of the shots in the film, which include digital environments to extend sets, such as the train station (at top), period tweaks to create 1930s Paris (at bottom), and dozens of visual effects that pay homage to Georges Méliès’ illusions.

“We wanted to create a subconscious visual of that philosophy,” Legato says, “of Paris as part of a mechanism, so the audience has that in mind when he says his dialog. It’s a touchy kind of thing. Delicate. But John Knoll and Industrial Light & Magic did a fantastic job.” Ben Grossmann led the visual effects teams, working from Pixomondo, which handled the majority of the shots. Nvizage developed the previs, Yannix helped with matchmoving, ILM created the opening sequence, Lola “youthenized,” Matte World Digital produced matte paintings, and Uncharted Territory built a scene on the banks of the river Seine in Paris. Paramount Pictures and GK Films produced the movie, which Paramount is distributing. All told, the feature contains 850 VFX shots. “We did every trick in the book,” Grossmann says. “The film is a homage to Georges Méliès, so we did the visual effects checklist. In stereo.”
At its core, Hugo is a story of parts fitting together, of art and craft. And so, too, the making of the movie—beginning with the use of stereo 3D. Legato, credited with creating the virtual production for Avatar, has worked with Scorsese on Shutter Island, The Aviator, and other films. He won an Oscar for Titanic’s visual effects and received a nomination for Apollo 13. And, he helped Scorsese design Hugo.
Stereo Design
“We planned [stereo 3D] from the beginning,” Legato says. “And everyone was on board. [Production designer] Dante Ferretti designed the sets with depth, [cinematographer] Bob Richardson lit the scenes with depth, Marty [Scorsese] directed the scenes and blocked stereo out as another tool to tell the story. We were all blown away. You can’t add 3D later. It’s like any other piece of art. It
has to be planned from the beginning.” One rainy day in New York City, Scorsese, Legato, and others screened 3D movies from the ’50s in a private theater, movies that had been made in 3D but never shown in stereo because the craze had ended. They also watched Avatar, Dial M for Murder, and 2D movies from the ’40s and ’50s, especially those directed by Carol Reed, such as The Third Man and others that featured children. “The fun part about working with a director like Marty is that he adores the history of moviemaking, and this film is about the history of moviemaking,” Legato says. “There’s a sense of reality in the films back then that changes the story. It’s hard to describe. You just feel it.” With the help of Pixomondo and the other visual effects studios, Scorsese embraced that sense of reality and deepened it with stereo. “It’s hard to separate one from the other now,” Legato says. “We had stereo in the forefront and the back of our minds in every scene, every edit, the way we lit the scenes…it all became part of the mix. Everything was designed, viewed, and staged for the dramatic value of 3D; the depth became part of the storytelling.” Legato provides an example: “We have a little boy in a 1932 Paris train station, in overwhelming surroundings. So we use stereo in those shots to emphasize the size and structure and largeness of the building against the smallness of the boy. When you block out the scene, as soon as you see it in depth, it alters the way you consider it. Maybe a wide shot will sell the shot, maybe there is something interesting that you want to look at for a long time. It’s a cumulative thing.” As was true in the early days of filmmaking, Scorsese shot most of the movie on sound stages. He used production facilities in the UK, and much of the visual effects work involved extending those sets and building virtual environments for previs and then later for the final shots.
“We rebuilt Georges Méliès’ original studio on a back lot at Shepperton Studios [in Surrey, England],” Legato says, “constructing it to the exact plans, and then photographed it for real. It was a great moment. One of the thrills of moviemaking is to create history and walk around in it. But, we didn’t build much of the train station, and Marty didn’t want to walk onto the stage guessing what the shot would be.” Thus, previs helped Scorsese and Legato design shots prior to set construction, and see digital environments in shots with sets that the visual effects crew would extend or create. “Nvizage did previs on set and prior to production,” Grossmann says. “We had mechanical representations of common camera equipment so Rob [Legato] could direct a shot with lots of visual effects in previs. We could show Marty [Scorsese] what we were thinking, and he could pre-approve an edit of the sequence. And, if Marty wanted to design a shot before he built a set, he could operate the camera virtually.” To shoot the film, the crew used a Fusion 3D system from the Cameron-Pace Group. Before production, artists from Nvizage reproduced the sets to scale digitally and loaded them into Autodesk’s MotionBuilder for real-time playback. Then, during filming, motion-control encoders mounted to the camera equipment fed the movements into MotionBuilder. “It was similar to the virtual camera system used for Avatar,” Grossmann says. “We had encoders on camera cranes, dollies, pan-and-tilt heads, anything that moved. Wherever Bob Richardson moved the camera, our real-time CG matched it and replaced the greenscreen. So, Marty could see the train station, the trains, whatever, anywhere the camera pointed, with a real-time composite of the actors. If the cameras pointed toward 500 extras walking around in front of a greenscreen, he would see the city of Paris and bridges behind the extras.” This process of providing directors real-time composites of actors in virtual backgrounds has become a familiar part of filmmaking these days. However, Hugo’s director was Martin Scorsese, who is anything but typical. “This wasn’t a documentary,” Grossmann says. “It was a movie someone would make on a movie set. We might be panning to follow Hugo, and Marty would say, ‘I want the Eiffel Tower over here, and maybe over here we’ll see the Arc de Triomphe.’ If he was in the moment, he might walk to the visual effects tent and previs what a shot would look like if the stage didn’t have a roof.
He’d tell the actors where he wanted them to be, and then tell us what he wanted in the set, and we’d design the shot while he was shooting.”
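The encoder pipeline Grossmann describes—counters on every crane, dolly, and pan-and-tilt head, calibrated into angles and distances that drive a matching CG camera—reduces to a small sketch. The calibration constants and channel names below are invented for illustration; the production fed this data into Autodesk’s MotionBuilder, not anything like this code.

```python
import math

# Toy sketch of the encoder -> real-time CG camera idea (illustrative
# only). Encoders on the physical rig report raw counts; calibration maps
# counts to degrees or meters, and a CG camera mirrors the result so the
# greenscreen can be replaced in the real-time preview.

ENCODER_CALIBRATION = {          # counts per unit, hypothetical values
    "pan":   4096 / 360.0,       # counts per degree
    "tilt":  4096 / 360.0,
    "dolly": 1000.0,             # counts per meter of track travel
}

def decode(channel, counts):
    """Convert a raw encoder count into degrees or meters."""
    return counts / ENCODER_CALIBRATION[channel]

class VirtualCamera:
    """CG camera that mirrors the live camera for the on-set composite."""
    def __init__(self):
        self.pan = self.tilt = self.dolly = 0.0

    def update(self, raw):
        for channel, counts in raw.items():
            setattr(self, channel, decode(channel, counts))

    def forward_vector(self):
        """Direction the camera points (y-up; pan about y, tilt about x)."""
        p, t = math.radians(self.pan), math.radians(self.tilt)
        return (math.sin(p) * math.cos(t), math.sin(t), math.cos(p) * math.cos(t))

cam = VirtualCamera()
cam.update({"pan": 1024, "tilt": 0, "dolly": 500})   # one frame of encoder data
```

With per-frame updates like this, wherever the physical camera points, the CG environment rendered behind the actors stays locked to the same view.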
Previs from Nvizage and on-set composites helped director Martin Scorsese and senior visual effects supervisor Rob Legato think about how to film shots with stereo 3D, even narrative sequences such as this with Hugo (Asa Butterfield) and Isabelle (Chloë Grace Moretz).

Matching the Vision
All the information gathered on set—the witness camera footage, the data from the motion controllers on the camera equipment, the shots from the stereo camera that Scorsese directed—went to Pixomondo to help augment the matchmoving and camera tracking. “For every camera position, our visual effects wranglers would feed the data into the system and create an [Autodesk] Maya file that showed where all the cameras were and where they moved,” Grossmann says. “We’d know plus or minus one degree where the camera was. But, it took several months to develop the matchmove pipeline to get all the cameras tracked.” Pixomondo’s Beijing studio did much of the matchmoving using a customized version of Andersson Technologies’ SynthEyes. Yannix also did matchmoving for the project. “On some productions, the directors don’t embrace the medium—they direct as though they’re shooting a traditional film,” Grossmann says. “But Marty was passionate about shooting in stereo. He’d ask, ‘What is the most amazing shot we can do in stereo?’ We’d have these famous actors, Christopher Lee, Ben Kingsley, and Marty would say, ‘Ben, you did good. I just need to do another shot with a slight stereo adjustment.’ ” All those tiny adjustments made it difficult for the matchmovers later. “In stereo, the interocular difference between two cameras is so precise you can see a difference of a quarter of an inch,” Grossmann says. “The entire city of Paris can look like a tabletop set if you’re not careful. It took months to work out all the relationships—the cameras are two inches apart, and the left camera panned this degree at this frame and that degree at that frame. There’s no forgiveness in stereo.” Meanwhile, at various Pixomondo offices, artists began building sets and set extensions and creating the effects, with Grossmann parceling out the work by sequences and specialty. “Some offices specialize in animation,” he says. “Another might be good at effects and destruction; another might complete the lighting, rendering, and compositing. Much of my job was deciding what went where, but most of the assets started in London.” There, modelers worked from blueprints received from the art department, and then distributed assets to other offices. Most of the Pixomondo offices have Maya-based pipelines with Chaos Group’s V-Ray for rendering, but in some, the artists used Autodesk’s 3ds Max, as well. In addition to the train station—the concourse, lobby, tunnel, clock tower, and so forth, inside and out—the artists created the trains and several sections of Paris. “The big problem for our asset team was that in visual effects, we’re obsessed with making things realistic,” Grossmann says. “So, the asset team built the train station as planned, and then they’d hear, ‘Why don’t we knock down that wall and get more trains in here?’ Or, ‘In this shot, we’ll remove the roof.’ It drove them bonkers.” The answer was to break everything into components that the artists could turn off and on, and move around. For textures and reference, the artists had footage shot on location in Paris, and firsthand information from two visual art directors who were on location. “They’d see the materials and textures, and sit with Dante Ferretti and do concept work,” Grossmann says. “They became immersed in the visual guidelines.” With the movie set in the 1930s, one challenge was to create materials and textures that made the digital assets look as if they were new in 1930, or built before and weathered appropriately to that age. Concept art and production paintings that the art directors created helped the visual effects artists create the look Scorsese wanted. “They helped keep the creative consistency,” Grossmann says. “Continuity was out the window in major ways—the Eiffel Tower moved where it needed to be, some routes made no sense at all, the train station would look different in some shots—but there was consistency in that everything looked good. That was the continuity.”

Lighting artists at Pixomondo learned that the continuity in this film was in its consistent beauty. Rather than trying to match the lighting on partial sets, they discovered how to mimic the cinematographer’s intent.

The second challenge for the artists was in understanding how to achieve the look of movies from the early 20th century. “The hardest part and the most exciting part for the artists around the world was the exploration,” Grossmann says. “In most movies, you’re doing something like swinging Spider-Man across a bridge. For the artists on this movie, it was never as simple as, ‘Here’s your desk and your shots.’ It was, ‘Here’s your desk and here are 16 hours of highlight reels, some books, and a thousand images of old sets, old trains, old train stations.’ No one cranked out work for weeks, sometimes months, until they got into the mood and tone and look. And then, so much of this movie is a homage. An artist might present a shot and point to something that was distracting, and we’d say, ‘Yes, but it’s distracting on purpose because it references this old film, this old clip.’ ” The lighting artists had similar challenges. As always, they would light the scenes to be photographically real, but their reality needed to be a film shot on a back lot in 1930. “This wasn’t an available light film,” Legato says. “It was a lit movie, and the lighting is part of the storytelling. It’s not real life. So, we had more than one sun. We put lamps behind windows, arc lights behind alleys. It took a while for the artists to get it because it isn’t what we’re trained to do. We usually try to fool the eye that something is hyper-real.
We were still making it photographically real, but the photograph had a tone to it. So we’d show people examples, tear
sheets, clips from old movies.” Making it even more interesting for the lighters was that the angles might change from one shot to another, as if the sun moved 180 degrees. “It works because the shots are beautiful,” Grossmann says. “It all feels the same, but if you mapped it out, you’d see that it’s all over the place.” Knowing this, the visual effects crew didn’t bother shooting chrome balls on set to gather HDRI and match the lighting. “We realized that if Bob Richardson lit something, he’d light what’s there,” Grossmann says. “If there were five people in the room, he’d light those five people. So, if we added a glass roof, a train, and a luggage cart, it wouldn’t do any good to have HDRI because if those elements had been on the set, he would have lit it differently. We had to match his intent.” Similarly, the artists needed to match Richardson’s intent in all the CG shots that had been impossible for Scorsese to shoot traditionally.
Magic Hour In addition to set extensions and virtual backgrounds, much of the visual effects work centered on Méliès’ illusions. “As the film starts to explore who Georges Méliès is, we see shots that are magical in nature,” Grossmann says. “I could talk for hours about all the little magic tricks. Hundreds and hundreds of shots. By the time we were done, I realized we had done every trick in the book. They’re not like cool magic-wand gags. They all have grounding in old film tricks and in some part of the story. We did all the classic cinema tricks from modern times to today, and pushed beyond anything done before. Miniatures. Digital characters. Stop motion. Time-lapse photography. Persistence-of-vision animation. Matte paintings. Motion-captured characters. Iris wipes. Morphs. CG augmentation. Even the
choreography of a cross-dissolve became a new art form and became visual effects. It was a homage to the kind of work Georges Méliès did, but in a modern-day fantasy film. And we created all those tricks for stereo. I’ve got all my passport pages full now.” In one scene, the children open a secret box that causes an explosion of CG papers to fly out. The images on the papers represent the collected work of Georges Méliès. They swirl around the room in a way that creates an optical illusion, the perception of animation. It’s a persistence-of-vision trick, like a flip book, but a 21st century visual effects version. In another scene, Hugo fixes a mechanical mouse, a mouse that, when Méliès winds it up and places it on the table, spins around, wiggles its tail, and looks up and down. The crew used stop-motion animation for the mouse, shot it in stereo, and then augmented it with visual effects. For a montage that shows the degeneration of Méliès’ studio from happy success into post-World War I bankruptcy, the artists mimicked time-lapse photography using computer graphics to create the images. “The movie is full of these things,” Grossmann says. “We’d take sections of sets and performances, string them together, and choreograph them as if they were one shot. In some shots, we’d have two minutes of visual effects strung back to back. We didn’t have to do a CG tsunami. But, we had CG crowds, fire, snow, wind, water, steam, smoke—a crazy amount of effects. It was humbling to do this while referencing and studying someone who invented the genre. Méliès’ work was pretty miraculous. When we studied his work, sometimes it took days to figure out how in the hell Georges Méliès did this.” In this film, the innovation was in making the visual effects created to bring Méliès’ illusions to life seem real, and to do so artfully.
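The persistence-of-vision paper gag rests on the same arithmetic as a flip book or zoetrope: show successive drawings past the eye faster than roughly a dozen per second and they fuse into apparent motion. A toy model, with illustrative numbers rather than anything from the production:

```python
# Toy model of a flip-book/zoetrope effect (illustrative only). Each paper
# carries one drawing; if successive papers cross the eye-line quickly
# enough, the drawings read as continuous motion.

PERSISTENCE_THRESHOLD_FPS = 12.0    # rough lower bound for perceived motion

def visible_drawings(num_papers, papers_per_second, duration):
    """Sequence of drawings the eye sees as papers cross the eye-line."""
    crossings = int(round(duration * papers_per_second))
    return [step % num_papers for step in range(crossings)]

def reads_as_motion(papers_per_second):
    return papers_per_second >= PERSISTENCE_THRESHOLD_FPS

frames = visible_drawings(num_papers=24, papers_per_second=24.0, duration=1.0)
```

The choreography problem the artists solved was exactly this: arranging the swirl so that, from the camera’s point of view, the papers cross the eye-line in drawing order and at a motion-fusing rate.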
“We used visual effects and stereo 3D not as separate items, but as a tasteful, integral part of the storytelling, as important as music and lighting and acting,” Legato says. “Our innovation is in appreciating the art of filmmaking by using the tools that used to blow us away with how clever and technical they are, with, now, how beautiful they are.”

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.
When Warner Bros. released the first Happy Feet movie, people wondered what they were thinking down under. An animated feature in which many of the character performances started with motion-capture data? Blasphemy. But, Happy Feet’s joyous story caught the imagination of audiences worldwide, and the film went on to win the Oscar for Best Animated Feature Film in 2007. Following that win, director George Miller founded his own studio, Dr. D, in Sydney, Australia, and began preparing for a sequel. In 2009, he hired Rob Coleman to build an animation team and direct the animation for Happy Feet Two, which picks up where the first film left off. Mumble, the Emperor Penguin who could dance but not sing, is now married to Gloria; they have a son, Erik. Erik can’t dance, but when he meets the “Mighty Sven,” a puffin that Erik mistakes for a penguin, Erik becomes determined to fly. Returning penguin voice actors include Elijah Wood as Mumble and Robin Williams as Ramon and Lovelace. Prior to joining the Happy Feet Two crew, Rob Coleman was an animation director and supervisor at Industrial Light & Magic, where he received two Oscar nominations for best visual effects (for Star Wars: Episode II – Attack of the Clones and Episode I – The Phantom Menace) and two BAFTA nominations (for Episode I and Men in Black). We spoke to Coleman soon after work on Happy Feet Two wrapped.
How did you begin this project?
I sat with George [Miller] and looked at what he liked and didn’t like in the first film, and I spent the first year, from April 2009 to April 2010, building an animation team.

How many animators did you have on your team?
I had 75 animators at peak from 14 countries, with 32 from Australia. I was worried when I first came down here because I knew CG animation wasn’t huge. There are companies doing CG, but there aren’t a lot of character animators. But, just before I started hiring, Animal Logic was finishing Guardians and didn’t have another big show yet, so I was able to pick up a lot of senior and mid-level animators and a couple of leads who probably otherwise would have gone to Canada or the UK. Then, I committed to hiring only Australian junior animators.

How did you organize the team?
I had a number of leads, which is similar to the way I worked at ILM, and divided the work into sequences. At peak, we had nine teams, but most of the time we had six or seven. Each lead had around seven animators. Everyone did penguins, but two of the teams became really good at animating krill, so I cast more krill sequences to them. And, we
didn’t teach every team how to animate elephant seals, which were all keyframed.

The krill?
Will the Krill and Bill the Krill, voiced by Brad Pitt and Matt Damon. They’re the reason I wanted to make this movie. There’s a parallel story about the tiny little krill, and their story is so good and so funny. And, to animate things as little as krill sounded amazing. They look like little brine shrimp. They are almost at the bottom of the food chain; they’re insignificant. But they have a big impact on the biosystem of the world. Fish feed on them, whales feed on them. We have thousands and thousands of krill.

What is their story?
Will the Krill decides he doesn’t want to be in the krill swarm, so he and his best friend, Bill the Krill, break away from the swarm and end up as two little individuals in the ocean. They have contact with our hero penguins. Although neither species knows about the other, we see them both.

Did you use motion capture for the penguins, as on the first film?
We used some motion capture, predominately for the dancing and dramatic sense when the penguins walk around. [Director] George Miller comes from a live-action background, and he’s comfortable directing actors on stage, so he could do take after take quickly. He could get a performance he wanted in an hour or a day that would have taken us a month. But he also enjoys the animation process because he can plus the performances and add facial animation. Motion capture allowed the two worlds to come together, and because the characters are humanoid and walking around, I’m fine with it. When the characters come to animation, they have their weight built in already. So, we get the combination of movement from talented performers directed by the director and performances stylized by our talented animators. Also, when George wanted thousands of characters on screen dancing intricate choreography, keyframing would have been impractical.

Did you motion-capture any of the other characters?
We also used motion capture for Sven, the puffin, when he’s on the ground, but we keyframed him when he’s flying. But, not the krill, of course. There was an attempt. They did a bunch of experiments. The krill have 10 legs and arms, so they had a conga line with five dancers trying to do the legs. They captured Savion Glover [dancer, choreographer] tapping for the krill, as well, which was extremely beneficial for me and the animators. The mo editors [motion editors] could take what he did, apply that to a low-resolution krill model, and we could study the feet and get movements that would have been difficult if we were keyframing 20 legs and arms. We also had early experiments with a puppeteer moving a krill body on the mocap floor, but we couldn’t get the right scale of motion when we put the krill into the water.

Were there any other unusual motion-capture experiments?
There were always experiments. For the elephant seal, we had four people performing together, but trying to wrangle that was too much effort. I could get a talented animator to do something really beautiful in not too much time.

What was the motion-capture process?
We used a Giant Studios system. We had a bunch of talented people here who had worked on the original, then worked on Avatar, and came back to do Happy Feet Two. George [Miller] would cut the audio first, and they would broadcast that onto the floor so everyone could hear it. The performers pantomimed to the dialog. They might not hit the accent of a word exactly, so it was up to the mo editors to make it feel like the voices were coming out of the bodies. Each day we would recalibrate the dancers. We’d measure their legs and arms precisely so we could translate them to the character maps for each species of penguin. We could capture up to 10 at a time, and could see the penguins walking around in real time as the dancers performed. Their feet weren’t locked to an ultra-resolution set, but we could see where they were.

Did you need to change the data much to have the dancers move like penguins?
The dancers all went to ‘penguin school,’ and learned how to dance like Emperor or Adelie Penguins, but it took a fair amount of labor to get [the characters] to move and act like penguins.

Images courtesy Warner Bros. Pictures.
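The translation Coleman describes—daily limb measurements mapped onto the character rig for each penguin species—amounts, at its simplest, to carrying joint rotations across while rescaling translations by measured proportions. A deliberately loose sketch with invented calibration numbers, not Giant Studios’ or Dr. D’s actual retargeting:

```python
# Very loose sketch of re-mapping human mocap onto a penguin skeleton
# (illustrative only). Joint rotations carry over directly; translations
# are rescaled by the ratio of limb lengths from the daily calibration.

HUMAN_LEG = 0.9          # meters, hypothetical calibration measurements
PENGUIN_LEG = 0.12

def retarget(frame):
    """frame: {'hip_rot': degrees, 'hip_trans': meters, ...} for one frame."""
    scale = PENGUIN_LEG / HUMAN_LEG
    out = {}
    for channel, value in frame.items():
        if channel.endswith("_trans"):
            out[channel] = value * scale       # shrink travel to penguin size
        else:
            out[channel] = value               # rotations carry over as-is
    return out

penguin_frame = retarget({"hip_rot": 12.0, "hip_trans": 0.45})
```

In practice the motion editors then hand-adjusted the result in MotionBuilder and Nuance so the characters read as penguins rather than as small humans, which is the labor Coleman is pointing at.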
Penguins are like little flour sacks, like little fluffy pillows. When you have a human walking like a penguin, it’s one thing if they pantomime it for you by keeping their legs together and waddling. It’s another to have it look real. Our motion editors had to manipulate the data to make it work the way George Miller wanted. Once George had directed the motion capture, he would make selects.

At top: A team of 75 animators from 14 countries worked at Dr. D studios in Sydney, Australia, to perfect and amplify performances captured from dancers for the penguins, and to add facial expressions. At bottom: A separate team of animators used motion cycles and a rules-based system to animate crowds of penguins and schools of fish.

The motion editors worked in [Autodesk’s] MotionBuilder and [Giant Studios’] Nuance. They would pick the matching human performance and re-map the data onto the penguin bodies, and then put the penguins on the ultra-resolution set, the undulating ice field, and spread their toes. I would review the work in progress and make critiques. When I was happy, they converted the files into [Autodesk] Maya files and sent those rigs to the keyframe animation team.

Did you develop any particular tools or rigs for the keyframe animators?
We had a similar skeleton for each species of penguin, and we had offset rigs. The offset rig was a parental rig on top of a child rig. The child rig received the keyframe data from MotionBuilder. With the parental rig, the animators could add rotation and translation to the big volumes—the hips, head, shoulders, and chest. They were all IK. Often, once George Miller saw [the motion-captured animation], he wanted to go broader than what he saw on the floor. So the animators might put a translation on the chest, or change the eye direction by swinging the head, and so forth. The animators could supersede the data and movement with our offset rig.

When did they do keyframe animation?
We keyframed the animation when the
characters swam or when they did dangerous actions. We also keyframed the whales, leopard seals, and fish. The fish are basically food in this film. And, there’s no facial capture. Every penguin ended up being a hybrid. George was very happy with what Animal Logic had done. But now that he had some experience with animation, he wanted to spend more time on the movement of the eyes, the dilation of the eyes—the ‘eye dart-ness,’ as he calls it. He was very particular about beak sync, lip sync, and phonemes—about the movement of the tongue and lips—and that was cool with me. And, we spent a lot of time on the non-verbal, reaction shots. It was challenging to get the penguins to look good from multiple angles and still connect with the audience and characters on the screen. You have to see their emotion. They have humanoid faces, but their eyes aren’t binocular. They’re set 30 degrees back on an angle.

Did you use the models from the first film?
We based the characters on where they left off on the first film, but we were using Maya, and Animal Logic had used Softimage XSI. So, most of the characters were redone and rebuilt; we upgraded the models. And, we started over and redid all the rigs. That wasn’t a big factor for me. If we hired someone who
knew XSI, we could teach them the new keystrokes in a week. What we cared about were their animation and acting skills.

Did the animators have video of the actors as reference, as well as the motion data?
During the voice recordings in Los Angeles and Sydney, I had a team of videographers shooting the main actors. Even though they’re performing to microphones, once they get into the characters, they start performing with their faces. There are nice things you can do if you’re there to watch or capture them on video; you can use their expressions to drive the animation later. Elijah [Wood] did some things with his eyes that became part of Mumble’s performance. Brad [Pitt] might do something with the tilt of his head, the furl of his brow, that is inspiration for the animators down the road. We weren’t motion-capturing. We were videoing. But, we would see patterns. I cut together what I called ‘spirit reels’ from the videos and had QuickTimes for the animators to reference.

Did any of the actors record the dialog together, or did they work separately?
Brad and Matt came in for three days, so we had them in the same room, acting to each other. And we had many of the other voice actors in the same space at the same time interacting; upward of eight performances in a big space all recorded at the same time. You get better performances. You get talk-over, which you also have in live action, so why not have it in animation? George [Miller] had the actors do the initial performances until they were happy together. He recorded that. Then, they could do the lines themselves as they had done in the ensemble piece. That way he had the clean lines, and if someone stepped on a word, he could replace it.

How did you animate the schools of fish?
We’d animate the main characters, the hero characters, and provide swim cycles for the fish—and the penguins—to the crowd team. We had about 25 artists plus a director and supervisor on the crowd team. They’d attach
our keyframe animations to a system that had run-time rules, and the fish would scatter like in nature when they came near the penguins.

Director George Miller recorded Brad Pitt and Matt Damon acting out the dialog together for three days to give Will the Krill and Bill the Krill their voices. Although the team experimented with motion capture for the tiny creatures, animators created all the performances with keyframe animation.

When the characters are swimming, did you animate to the movement of the water, or did the simulation team move the water based on the keyframe animation?
We handled water in two ways. Basically, if the shot was about the character, the water team would match the animation. We would talk about the water with George, and he’d tell us whether he wanted the water to be calm or move a lot. We’d do some keyframe animation, and he might tell us to tumble the characters more or soften their movements as if they were in a swell. Then the effects artists would put the water around them. If the shot was about the effect, like ice tumbling into the water, we would match their simulation. We used [Exotic Matter’s] Naiad for all the splashes and for the interaction of the characters with the water. The effects team then stitched the Naiad splash elements into a high-resolution surface simulated in [Side Effects Software’s] Houdini. They were able to create a realism on the surface of the water that I think is breathtaking. They also did volumetric light shards coming down through the water. There’s a beautiful shot with our two hero krill clutching the bottom of a piece of ice, with smaller pieces of ice tumbling in the turbulence of the water. We look up through the water and see the caustics. For the krill, because they are about the size of a thumbnail, they put silt and dust in the water to help with the scale. It’s amazing when you see it in stereo.

Did the icy environments change much?
We had two main environments, Adelie land and Emperor land, and they change during the movie. We start with compacted snow on ice, and then we have fluffier, powdery snow. When the animators went into the scenes, they had a packed-ice layer or a packed-snow layer for the penguins’ feet. Then another team added footprints in the snow, so we’d see little foot trails. The rendering of the snow, with sparkling highlights, is so amazing. It makes it feel like you’re there. Snow is a big part of the story; we had about 50 artists working on the effects team creating character effects, water, volumetrics, and destruction. There are beautiful shots of compacted ice and snow breaking apart.

Do the little krill survive?
They do. Through a series of events, a massive rogue iceberg crashes into the entrance of Emperor Penguin land and traps the penguins, which is something that actually happened in the real world. In our movie, all the communities come together to overcome the troubles of the world, and even the krill have an impact. It was amazing.
Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.
Did you have favorite characters?
Well, the krill are certainly high on my list. The elephant seals were a pleasure to animate. The main one, Beach Master, was a fantastic character to get into, and he has a sidekick named Wayne. I liked them a lot. And, animating to Robin Williams is always great. He was Ramon the Adelie penguin, and the Rockhopper. Ramon is very over the top theatrically, and he gets a love interest in this film, so that added a whole other layer to his performance. We see Mumble worried about his son, and Sven wrapped up in being an inspirational speaker. We have big catastrophes and massive dance numbers. The first film hit a very high mark. We tried to step above it.
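The crowd system described in the interview — authored swim cycles attached to a system with run-time rules, so fish scatter as in nature when penguins come near — can be sketched roughly as follows. This is an illustrative toy, not Dr. D’s actual crowd software; the function name, 2D simplification, and threshold values are all assumptions:

```python
import math

FLEE_RADIUS = 2.0  # assumed trigger distance, in scene units
FLEE_SPEED = 3.0   # assumed burst speed while scattering

def update_fish(pos, vel, penguins, dt):
    """Advance one fish by one time step.

    Run-time rule: keep the authored swim-cycle velocity unless a penguin
    gets inside FLEE_RADIUS; then steer directly away from the intruder.
    """
    for px, py in penguins:
        dx, dy = pos[0] - px, pos[1] - py
        dist = math.hypot(dx, dy)
        if 0.0 < dist < FLEE_RADIUS:
            # Override the cycle with a flee vector pointing away.
            vel = (dx / dist * FLEE_SPEED, dy / dist * FLEE_SPEED)
            break
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt), vel
```

Per the article, the hero animation and swim cycles were keyframed by artists; rules like this one only govern the background schools, which is why a 25-person crowd team could populate entire shots without hand-animating every fish.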
Your workflow is unique — make sure the power behind it is too. With today’s creative professionals facing more competition than ever before, it’s time to give yourself the edge you need. Building on over 20 years of experience, let Safe Harbor Computers help you configure your custom graphics workstation, designed to meet both your needs and your budget. Configured with your choice of hardware and software, a TSUNAMI from Safe Harbor Computers is the professional’s first choice for 3D graphics. Maximize productivity with 64-bit 12-core processing power and up to 48 GB of memory. With an optional NVIDIA® Quadro® display card by PNY, amp up your entire post production workflow and make your machine perfect for rendering 3D models, motion graphics, animation and more.
Visit this unique link for a special workstation offer to CGW readers www.7-t.co.uk
GPU-Xpander Desktop Pro 2
Vue 10 xStream
Cinema 4D R13
Save time and money by adding expansion slots, power and cooling to your desktop. Gain killer 3D graphics or computing cores for high-performance projects using an open PCIe slot to expand I/O capability. Available in a wide array of desktop and rack mount configurations for Mac & PC.
Artist-friendly 3D software that combines modeling, sculpting, painting, animation and rendering in a fused workflow. Ideal for artists and designers working in advertising, package design, game development, film and broadcast, architectural and design visualization, and education. Mac/PC.
Offers CG professionals the premiere solution for creating exceptionally rich and realistic Digital Nature environments, fully immersed within 3ds Max, Maya, Softimage, Lightwave and Cinema 4D. Ideal for VFX Studios, Animators, Architects, Matte Painters, CG students and more!
Now with all new character tools, integrated stereoscopic capabilities, streamlined multiartist collaboration and physical rendering. Available in four flavors tailored to your needs: Prime, Visualize, Broadcast, and Studio. Academic pricing and many upgrade options available.
Adobe® Photoshop® CS5 Extended
NVIDIA® Quadro® 4000 by PNY
Animation: Master v16
Intuos4 - Wireless
Create entire three-dimensional worlds and two-dimensional projects with this fully featured, intuitive, fun to learn and easy to use 3D animation software package. Affordably model, bone, texture, animate, light and render finished cinemaquality animation. The only limit is your imagination!
Popular and versatile pen tablet with the comfort and control that artists, photographers and designers demand. Reduce cord clutter and enjoy the freedom to move about your work area. Features 2,048 levels of sensitivity for natural feel and accuracy. Bundle with Photoshop CS5 and save!
The ultimate solution for advanced digital imaging, delivering all the editing/compositing capabilities of Photoshop CS5 plus breakthrough tools that let you create and edit 3D and motion-based content. Select, adjust, paint and recompose images with precision and freedom. Bundles and upgrades available!
If you’re an artist, designer, or video professional, accelerate your entire workflow with the Quadro 4000 by PNY graphics solution. Delivering excellent graphics performance across a broad range of video, design and animation applications, Quadro 4000 by PNY allows you to do more, faster. Mac/PC availability.
www.sharbor.com SOLUTIONS FOR GRAPHICS PROFESSIONALS Terms: POs accepted from schools and government agencies. • All checks require 7–10 days to clear. • Defective products replaced promptly. RMA number required for all merchandise returns. Returns accepted within 20 days, in original packaging, postage prepaid, undamaged. Opened software not returnable. Shipping charges not refundable. Returns subject to an 18% restocking fee. • Not responsible for typos. Prices subject to change. © 2011 Safe Harbor Computers. All rights reserved.
Safe Harbor Computers 530 W. Oklahoma Ave. Ste. 500 Milwaukee, WI 53207
800-544-6599 Information & Orders 414-615-4560 414-615-4567 Fax Mon–Fri 8:30am–5pm CST
The year started off rather slowly at the box office—possibly more an economic statement than a reflection of the movie releases. Despite a first-quarter lineup with virtually something for every taste—The Rite, Green Hornet, I Am Number Four, Battle: Los Angeles, Rango, Mars Needs Moms, and Sucker Punch, to name a few—audiences just didn’t open their wallets as expected. However, the dour box-office numbers changed quickly for the better during the summer holidays, as moviegoers, likely feeling less financial pinch, flocked to theaters to see a number of highly anticipated films. Records were broken on Memorial Day. Crowds were entertained. Hollywood smiled and breathed a sigh of relief. As of press time, there were still a handful of tent-pole films yet to be released, including The Adventures of Tintin and Hugo. And judging from the hype surrounding these movies, as well as a few other holiday releases, there’s little doubt that 2011 will close on a very happy note—both financially and creatively. How studios and digital artists were able to achieve such a high level of work and continue to push the visual effects and animation bars ever higher in these economic times is a double feat for which they should be applauded. Studios spend a long time working on a film that’s in theaters only briefly; then, at the end of the year, viewers narrow down the favorites that, for one reason or another, grabbed their attention. This is what awards season is all about—what people liked both then and now. People love superhero movies. And this year, there were plenty of choices in this regard: Captain America, The Green Lantern, The Green Hornet, Thor, X-Men. Some of these heroes were larger than life, captivating audiences with their digital powers; others dazzled with amazing CG sets and backgrounds. No matter how you look at it, visual effects played a major role in the films.
Perhaps the most popular superhero film this year did not contain live-action stars, but a unique set of computer-generated characters who kicked their way into the hearts of theatergoers: Po and the Furious Five in Kung Fu Panda 2. The year also gave us some rather unexpected treats at the theater: a range of entertaining characters and story lines—and, of course, jaw-dropping visual effects. While many are still trying to comprehend the story from The Tree of Life, there is little confusion about its beautiful imagery, especially during the formation of the universe and expansion of the galaxies, followed by explosive volcanoes and prehistoric beasts. Johnny Depp, reprising his role as Captain Jack Sparrow, left us scratching our heads at times. But that’s Jack. And while he had a somewhat new crew onboard with this latest Pirates of the Caribbean flick, we were treated to some nice VFX gems in the film, among them the digital mermaids. And if Depp’s live-action alter ego was not enough to entertain us, we also had his CG character Rango kicking up dust in a very uncommon all-CG spaghetti western—live-action director Gore Verbinski’s first animated feature foray and the first animated feature to move through ILM’s VFX pipeline. The dirt and dust of the desert created an unusual look for the movie—nearly as unique as the computer-generated characters. ILM was also kicking up more dust (sandy grit and star dust) with the effects in Cowboys & Aliens, a sci-fi western directed by Jon Favreau. A strange clash of worlds, both ripe for awesome visual effects. A sci-fi fan favorite for decades, Planet of the Apes burst into theaters as a series reboot, using new methods of motion capture to give the movie’s simian cast their realistic performances, especially Caesar, the chimpanzee star performed by Andy Serkis. A relatively new sci-fi favorite, Transformers rocketed to the top of the box office with even more complicated Autobots and Decepticons to fill the screen. On the animated side, as in Rango, we met entirely new casts of CG characters starring in Rio, a colorful production from Blue Sky; Hop, an Easter-themed movie delivered by Rhythm & Hues; Gnomeo and Juliet, a unique twist on a classic; and Mars Needs Moms, an out-of-this-world film from ImageMovers Digital before the innovative performance-capture technology company closed its doors. 2011 also brought back older classics, albeit in cutting-edge computer graphics form (Smurfs), as well as updated characters for grand re-entrances (Kung Fu Panda 2, Puss in Boots, Happy Feet Two, Cars 2). As we close out the year, anticipation is high for the hair-raising effects of Breaking Dawn and the digitally boosted action in Mission: Impossible – Ghost Protocol.
Yet generating the biggest buzz seems to be Peter Jackson/Steven Spielberg’s Tintin, a CGI stereo presentation of a classic Belgian comic-book character. Released early overseas, Tintin quickly established itself on the Oscar watch list. Another late-year release, Hugo is mesmerizing audiences with its dazzling digital work. But let us not forget the year’s top box-office champ as of press time: the last film in the Harry Potter series, with its ambitious visual effects that spanned a decade and culminated in digital mastery. We know what the box office says, and we have heard what the press and audiences have said about this year’s films. Now, let’s hear what the experts in our industry think.
Captain America: The First Avenger Release date: July 22 (US) Production companies: Marvel Enterprises, Marvel Entertainment, Marvel Studios
In an unexpected role reversal, digital effects were used to depict actor Chris Evans as the weakling Steve Rogers, as opposed to the muscled superhero Captain America. To many, this was an unexpected use of CGI. “Lola VFX really stole the show on this one,” notes Matthew Ward, director of photography at Rainmaker Entertainment. “I remember everyone in the industry buzzing with the question, How did they make the newly buff Chris Evans so skinny? The head and body seaming was flawless and helped introduce the character with the complete opposite of the physique we’ve all known Captain America to have.” This movie really surprised Aharon Bourland, CG supervisor at Tippett. “I had a good time watching it. One of the more interesting effects was probably the subtlest. The way they made [Chris Evans] all scrawny and small during the first half of the movie was nice. I’m still not quite sure how they did the Red Skull’s face. I couldn’t tell if it was makeup or digital augmentation; it was probably both. But it was cool that I couldn’t tell right off the bat how they did it.”
The Girl with the Dragon Tattoo Release date: December 21 Production companies: Film Rites, MGM, Scott Rudin Productions, Yellow Bird Films
The book series spoke to millions. Can the film do the same? “If David Fincher’s record
is any clue as to what we can expect to see in this film, it’ll be another marvel at visual effects so well hidden we’ll never even know they were there,” predicts Rainmaker’s Ward. “Come awards season, we’ll start to see reels showing how effects were done, and we’ll want to go stand in line to watch the film again to see what we think we should have had the eye to pick out in the first place.”
Green Lantern Release date: June 17 Production companies: Warner Bros. Pictures, De Line Pictures, DC Entertainment
Bruce Woloshyn, visual effects supervisor at Method Studios (Jack and Jill, The Twilight Saga: Breaking Dawn—Part 1), relays that this year, he and his 12-year-old son, David, resolved to go and see more movies together. Of course, they “had” to see Green Lantern. “I have been a semi-serious comic-book collector for more than 20 years, and both my son and I were really looking forward to seeing Oa come to life on the big screen. We both agreed, as we discussed the film over ice cream after the screening, that the animation and appearance of the actual Green Lantern costumes were outstanding (or, to use 12-year-old vernacular, ‘cool’),” he says. “Even with knowing that Sony Imageworks had created CGI uniforms for the corps, we both agreed that it was so well executed that after the initial, ‘Wow, check out the suit,’ we never gave it a second thought…and that’s a good thing.” “Doing a full-body replacement for the Lantern suit seemed like a pretty ambitious plan. It could have easily gotten kind of strange looking, but it came together and helped set the character apart from superheroes in other movies,” says Bourland.
Harry Potter and the Deathly Hallows: Part 2 Release date: July 15 Production companies: Heyday Films, Moving Picture Company, Warner Bros. Pictures, Warner Bros.
For a decade, fans have witnessed the digital magic required to take Harry Potter from the pages of a book to the big screen. Over the years, the magic has grown more intense, as have the effects. This summer, the franchise culminated in a range of digital work, from the expected to the unexpected. “I’ve often thought the Harry Potter films, above others, are much more enjoyable big, in the theater, than at home. There’s something about being in a dark theater with these characters, and the effects push the story in every shot,” Ward points out. “I’m sad to see the franchise wrapping up, as the films have each been worth watching and remain enjoyable.” Tippett’s Bourland says he was super-excited about this movie. “The Potter films have constantly gotten better and better, and the final one did not disappoint. The dragon in the Gringotts vault was really cool. The Dragon Slayer dragon has always held a special place in my heart, and you could see more than a little bit of it in this dragon’s design,” he says. Moreover, the magic effects were also really pretty, as usual, adds Bourland. “My favorite was when the Death Eaters were destroying the shield that the good wizards built around Hogwarts. I always wanted to work on a Potter movie, so it was a little bittersweet to realize my last chance had passed.” As Steve Garrad, VFX executive producer at Image-Engine, notes, in another year when it seems the visual effects industry is determined to tell everyone how bad everything is, the interesting thing for him is how good the quality and consistency of the work being produced globally is. To this end, his two personal choices of films featured visual effects from companies based in London and Wellington, New Zealand. “Only one of the summer blockbuster films was a slight letdown in my opinion—and again, that is all that is, my opinion,” he says. “There will be many reasons, mostly not due to any vendor’s faults, that the thousands of man-days spent on that project would not end up being entirely present up on the silver screen.” That said, Garrad’s personal favorite for this year’s Oscar is Harry Potter. “Not only was it an excellent film, but the VFX had the necessary scale and size to end the series; they were consistent and flawless throughout,” he says. “They have been throughout the series; it is time this crew were recognized, people!”

In Ho Beak

take classes Online Or in san franciscO acting* advertising Animation & Visual Effects architecture* art education fashion fine art Game Design Graphic Design illustration industrial Design interior architecture & Design landscape architecture* Motion Pictures & television Multimedia communications Music Production & sound Design for Visual Media Photography Web Design & new Media

enroll now earn your aa, ba, bfa, ma, mfa or m-arch accredited degree engage in continuing art education courses explore pre-college scholarship programs

www.AcAdEmyArt.Edu 800.544.2787 (u.S. Only) or 415.274.2200 79 neW MOntGOMery st, san franciscO, ca 94105

Accredited member WASC, NASAD, CIDA (BFA-IAD), NAAB (M-ARCH) *Acting, Architecture (BFA) and Landscape Architecture degree programs not currently available online. Visit www.academyart.edu to learn about total costs, median student loan debt, potential occupations and other information.
Hugo Release date: November 23 Production companies: GK Films, Infinitum Nihil
On the verge of being released as this issue went to press, a number of folks declined to comment on the film, having not seen it. Nevertheless, the imagery in the trailers is dazzling, supporting a heart-warming story. VES President Jeff Okun, a visual effects supervisor, is among those who have not seen the film. “But what I have seen looks astonishing—the real-ness of the robot, the world that cannot be real, yet is,” he says. “It may be the ultimate demonstration of what is good with VFX because they were used properly by an artist, like Martin Scorsese.” Scott Farrar (ASC), visual effects supervisor at ILM, notes that he likes to see the films Martin Scorsese makes because the director tries different types of stories and they always have wonderful characters. “For me, Hugo looks interesting because of its steam-punk design sensibility. That style seems fun and is particularly well suited to stereo 3D and storybook-style visual effects shots,” he says. “I’m looking forward to seeing what Rob Legato, the VFX supervisor, and Martin came up with.”

Immortals Release date: November 11 Production companies: Relativity Media, Atmosphere Entertainment MM, Hollywood Gang Productions, Virgin Produced

According to Ward, epic films require epic effects, and there seems to be no shortage in Immortals. “We’ve seen films like this made, and, at times, the effects were so featured they took away from the story rather than supported it, entertaining [us] nonetheless,” he says. Ward notes that the trailers look to be big in scope, along with a 3D conversion. “No doubt it’ll be an entertaining film and certainly a spectacle to enjoy in the effects realm.” In a film with so many effects, an insider points to the Titan fight scene as “amazing.”

Mission: Impossible — Ghost Protocol
Release date: December 21 Production companies: Paramount Pictures, Bad Robot, FilmWorks, Skydance Productions, Stillking Films
“Tom Cruise, Brad Bird, and Mission Impossible sequel? I’m in,” says Rainmaker’s Ward. “I think for all of us VFX and animation artists, most of us are fans of Brad’s work on The Iron Giant and The Incredibles. Needless to say,
we’re all excited to see what Brad brought to this production, and we’re all certainly expecting amazing things.” As Ward notes, the film’s trailer shows action in its modern definition: explosions, high-wire acts, gunfights, and hand-to-hand combat. “The Mission Impossible franchise has always delivered new, clever action sequences, usually only achievable with the help of visual effects artists,” he adds. “I’m very curious to see what this latest chapter has in store for audiences.” Image-Engine’s Garrad notes that out of the yet-to-be-released films, the only one that stands a chance, in his humble opinion, of upsetting the applecart is Mission: Impossible— Ghost Protocol. “The trailer looked like great fun with big visual effects, but as it’s not out yet, we’ll have to wait and see,” he says.
Pirates of the Caribbean: On Stranger Tides Release date: May 20 Production companies: Walt Disney Pictures, Jerry Bruckheimer Films, Moving Picture Company
In a film with Johnny Depp, you can expect a level of quirkiness, and this Pirates film brought that to the screen for another adventure on the high seas. “Again, great work,” says Okun. “ILM is killing on these, pushing the envelope on natural phenomena, chaos, and look—the water, the clouds, smoke, interaction were all fantastic.”

CGW SILVER EDGE WINNER

© Daihei Shibata

NEW CHARACTER TOOLS

“CINEMA 4D R13 builds on MAXON’s 25-year legacy of enabling digital content creators to produce engaging content quickly and easily for a variety of industries.” - Computer Graphics World Magazine, Oct/Nov 2011

© Marjin Raeven - www.raeven.be

• New Character Tools • Impressive Stereoscopic Workflow • Physical Rendering Engine • Improved After Effects Integration SEE FOR YOURSELF

Free 42-day trial version @ www.maxon.net
Real Steel Release date: October 7 Production companies: Touchstone Pictures, DreamWorks SKG, 21 Laps Entertainment, Angry Films, ImageMovers, Reliance Entertainment
With this film, image-based capture was on full display, yet it was the performance of the robots, in a very un-robotic style, which resonated with the industry. The robots also looked as good as they moved. “The best part of the work in this film was probably the rendering quality and lighting work done on the robots,” says Bourland. “They seamlessly fit into the plates, which is not an easy task when you are dealing with characters made of everything from translucent plastic with lights inside to rusty metal. The character design was fun. I really liked the design of Noisy Boy.”
Rise of the Planet of the Apes Release date: August 5 Production companies: Twentieth Century Fox Film Corporation, Chernin Entertainment, Dune Entertainment
According to Digital Domain visual effects supervisor Stephen Rosenbaum, who is currently supervising Jack the Giant Killer for director Bryan Singer, there are two distinctly different types of FX movies being made these days: those that indulge us with spectacular, gratuitous visuals, which are short on substance but fun to watch, and those which offer a relatively new brand of sentient digital creatures. “This year, we witnessed the perfect exploitation of FX technology used to create the latter type of movie—Rise of the Planet of the Apes,” says Rosenbaum, who received an Oscar for his work on Forrest Gump and Avatar. “Unlike
some movies from the last year that used motion capture to produce pedestrian characters, Apes used the technology to help deliver a performance. It demonstrated the real potential for actors to embody a digital creature and inject into it a soul.” In fact, Rosenbaum is one of many who applauds the performance of actor Andy Serkis as he brings yet another digital character to life. “Once again, we were treated to Andy Serkis slapping on digital makeup and performing the role of a principal character. While the primates looked fantastic—the eyes, fur, movements—I can assure you, all would be lost if not for Andy’s masterful understanding of how to personify a character within an untraditional medium,” he adds. “As with any role, it’s all about understanding how the character thinks, moves, and responds to its surroundings. Without those fundamentals, a good-looking chimp with no personality will very quickly become boring to watch.” Rainmaker’s Ward concurs. “If there’s anything we all remember from the original Planet of the Apes, it’s the rubber-mask prosthetic work of the 1960s makeup artists. Arguably, the effect worked back then, but today’s audiences demand a much more believable illusion, and better still than the much more slick rubber masks of Tim Burton’s 2001 retelling,” he says. “A movie like this can dodge a little close to the uncanny valley, but Weta has nailed the apes even better than they did in King Kong. Another perfect use of performance capture tools, this film allowed us to believe that the apes were real with what seemed to be a hint of the actors playing them. In one way or another, you could define this
film using truly digital makeup—instead of placing a rubber mask on Roddy McDowall, try replacing Andy Serkis completely with a digital ape that is driven by every single twitch comprising Andy's performance. Forget those clunky rubber masks and enjoy every detail on these apes, no matter how close the camera gets ... and it got pretty close in this film—it's always a challenge for any VFX shot, yet a triumph for Weta."

"The work overall was very good. For me, the standout was the orangutan; that thing was amazing," says Tippett's Bourland. "The details in the sculpt and fur groom were outstanding. It also stole the scene with some classic lines—the 'dumb apes' line was priceless. A lot of people kept talking about how good the eyes on the apes looked, but I don't think that was their best feature. I feel their performances and overall presence on screen were more impressive."
One film that especially resonated with Shawn Walsh, visual effects executive producer at Image Engine, was Apes. "Due to our participation in Apes as a primary previs vendor, we were privy to some of the stunning visual effects work that was evolving at Weta Digital," he says. "Kurt Williams showed me some early shots that were being produced during the long shoot, and I was floored by how sophisticated and nuanced the performance-capture work was turning out to be. The eyes especially were working as a true window to the soul, and I thought, 'Man, this is going to be exceptional work!'"

Image Engine's Garrad names Rise of the Planet of the Apes as his second favorite movie this year. "Again, a good film, which always helps. The VFX were of the highest quality, and the animation was fantastic," he says. While he acknowledges that the performance was indeed based on Andy Serkis, he points out that it was assisted by lots of very talented animators. Garrad notes that he ranked this film second to Harry Potter because, in his opinion, the consistency of the work was not as good.

Daniel Jeannette, animation director (Where the Wild Things Are, Happy Feet), says
he was amazed by what he saw in Apes. "The level of subtlety and complexity delivered in the performance of Caesar from the combination of both Andy Serkis' performance capture and the team of animators at Weta is truly groundbreaking. I feel it's a very strong favorite for visual effects awards."

In addition, Rosenbaum, an AMPAS Visual Effects Branch member, is among a growing contingent that feels it is time an actor's performance in a digital role was fully recognized. "It is time for the Actors Branch to finally acknowledge that the believability of Caesar came from an actor's performance. How he looked will surely be recognized by my Branch," says Rosenbaum.
Thor
Release date: May 6
Production companies: Paramount Pictures, Marvel Entertainment, Marvel Studios

"I grew up mainly reading British comics, like 'Judge Dredd,' and was never really exposed to 'Thor,' " admits Ben Shepherd, VFX supervisor at Cinesite. "Not knowing what to expect, I was pleasantly surprised by Thor. There were massive set pieces and environments, particularly the impressive Asgard environment. The battle with the ice warriors was well rendered, and there was some very accomplished CG in there."

Transformers: Dark of the Moon
Release date: June 28
Production companies: Paramount Pictures, Hasbro, Di Bonaventura Pictures

Shepherd is not alone in selecting this summer's Transformers as the best of the franchise so far. "For me, this was the best of the three films they've released. In the first two movies, I found the action too fast and confusing, but in the latest installment, the combat has been slowed down (possibly to help the stereo), which worked much better," he says. "I wouldn't place myself in the Transformers fan bracket, but I thought the film was awesome. The quantity and quality of the destruction effects were amazing."

Rainmaker's Ward challenges folks to find a camera in this film that isn't moving or being wildly operated. "As anyone in VFX knows, a moving camera means a matchmove, and a moving Michael Bay camera often means a matchmove from hell." Ward describes himself as a fan of Michael Bay's camera work and was excited to hear he was getting back together with DP Amir Mokri after enjoying the crazy sequences they conjured up on Bad Boys II. "The reunion paid off, as TF3 didn't disappoint—the ride was constant with every robot-filled frame. Who can forget the detail in Shockwave's Driller as it tore through the Chicago skyline? Another standing ovation for ILM's seamless work in this film and a huge pat on the back for all the stereo work in the film. This was the best use of stereo 3D this year."

Destruction, says Tippett's Bourland, is what Transformers is about. "Watching Shockwave's giant mechanical death worm chew its way through a building that our heroes are running around in was probably the 'best building being destroyed' sequence ever," he says. "I also really appreciated the fact that Michael Bay actually got some guys to squirrel-suit-jump into downtown Chicago."

The Adventures of Tintin
Release date: December 21
Production companies: Columbia Pictures, Paramount Pictures, Amblin Entertainment, WingNut Films, The Kennedy/Marshall Company, Hemisphere Media Capital, Nickelodeon Movies

While the film had not been released in the US as of press time, there was no shortage of comments pertaining to this highly anticipated film. "Friends of mine, either in the industry or not, who have seen this film, film geeks, and even the harshest of couch-surfing critics are all raving about how amazing this film is," says Ward from Rainmaker. "It's a winning combination in every way: Spielberg, Jackson, Weta, Georges Rémi's great writing of the 'Tintin' comics." Ward also believes that it is with this film that performance capture as a medium may finally find its foothold. "America will have to wait a little longer for this one, but I'll continue to drool over the trailers until I can buy my ticket," he says.

Okun describes this film as "technically groundbreaking and amazing work!" But more importantly, he says, it raises the question of whether [the work] is VFX or something else. "Is it something new? Something forecasting our futures in terms of what can be done? It will be a game-changer for the future, as the crossover between acting and VFX will seamlessly merge and no one will ever again be able to tell the technique," Okun adds. "It will be hidden from common understanding—depending on how it is applied in the future."

Cars 2
Release date: June 26
Production companies: Walt Disney Pictures, Pixar Animation Studios

"I loved the feel of this film," says Okun about Pixar's latest offering. "While it clearly uses newer techniques to arrive at some of the imagery, it also felt warm and comfortable, so the VFX were invisible to the story, as they should be."

Method Studios' Woloshyn also enjoyed the film. "There is nothing quite like seeing a Pixar film through the eyes of a child. Seeing Cars 2 with my younger son was indeed a special treat, especially in IMAX 3D. And, despite what some 'grown-up' reviewers had to say about the film, Pixar's target audience (my Joseph) demonstrated for me what is truly magic about great animation, layout, and editing (the things grown-ups think about)," he says. "To my son Joseph, Lightning McQueen, Mater, and the rest of the cast are as 'real' as any live-action characters. And to be immersed in the IMAX 3D presentation of the film was about as magical an experience for him as meeting them at Disneyland."

The Twilight Saga: Breaking Dawn — Part 1
Release date: November 18
Production companies: Summit Entertainment, Imprint Entertainment, TSBD Canada Productions, TSBD Louisiana, TSBD Productions, Total Entertainment, Zohar International

A lot can be said for some of the effects in this film, but the consensus seems to be that the CG wolves were done extremely well.

Puss in Boots
Release date: October 28
Production company: DreamWorks Animation

To prepare Puss in Boots for his leading role, the DreamWorks team gave him more fur that responds better to his movements. A lot of work also went into the film characters' facial expressions. Another big challenge was the environments, particularly the cloud world, with its volumetric clouds. The beanstalk shooting skyward in stereo 3D was also impressive.

Rango
Release date: March 4
Production companies: Blind Wink Productions, GK Films, Nickelodeon Movies

As Bourland points out, ILM really broke out of the animated feature mold with this one. "The world they created was rich and dirty, not all clean and polished like a Pixar or PDI film. The amount of detail they put into even background characters was impressive," he says. "The volumetric effects and style of lighting they chose also gave the film a much more cinematic feel than any other animated feature to date."

"Leave it to ILM and Gore Verbinski to raise the bar on what you 'can and can't do' in a family animated film," notes Ward.

Jeannette was another who was impressed by the visuals in Rango, citing the saloon scene as his favorite moment in the film. "The visuals and lighting were breathtaking," he says.

Mars Needs Moms
Release date: March 11
Production companies: Walt Disney Pictures, ImageMovers Digital

"I can speak firsthand, having witnessed some of the industry's best talent working on this film," says Ward, who had been layout supervisor at ImageMovers before migrating to Rainmaker. "Though audiences met the movie with a less-than-warm appraisal, I think ImageMovers Digital did an amazing job on the final product. The look fell somewhere between the likes of A Christmas Carol and Monster House, but still held its own unique style, offering a stylized character study with realistic shaders. The incredible designs of Doug Chiang and his top-notch art department scream in every shot, as you can literally compare the design work to the final frames."

Rio
Release date: April 8
Production companies: Blue Sky Studios, Twentieth Century Fox Animation

There's no question, Rio is colorful. "Talk about saturation of colors!" notes Ward. He also believes the movie contains some of the best camera work he has seen in an animated film lately—well operated and conducted. "Having to track the action of birds isn't easy, nor is animating them to move so realistically and with so much character."

Kung Fu Panda 2
Release date: May 26
Production company: DreamWorks Animation

As Ward points out, the art direction in this film—the color, the lighting—burns in your mind for days after you've watched it. DreamWorks Animation's use of various styles of animation helps keep the look interesting and engaging, he adds. "When I see a 3D character having a flashback in 2D...well, it makes sense, doesn't it? The feathers on the peacock (Lord Shen) were a show alone. Wet fur, wet feathers; it was like I could reach out and touch these characters, without 3D glasses!"

Happy Feet Two
Release date: November 18
Production companies: Kennedy Miller Mitchell, Dr D Studios, Village Roadshow Pictures

In 2007, Happy Feet took the Oscar for Best Animated Feature, besting Pixar's Cars. In 2012, we have part two of a showdown. "Miller versus Lasseter again in this category, featuring sequels of the same films. Will it turn out the same as last time? It'll have to be the stronger story that wins," observes Ward. "Both these films feature brilliant animation and look incredible."
For the sequel Batman: Arkham City, Rocksteady Studios extended the action from the confines of Arkham Asylum to the sprawling mean streets of Gotham, a much larger environment.
Gaming

When Christopher Nolan's The Dark Knight exploded onto the cultural landscape in 2008, and Rocksteady Studios' Arkham Asylum arrived on its heels in 2009, the groundbreaking film and equally groundbreaking video game (still lauded by Guinness World Records as the "best superhero game ever") were a dynamic duo that set collective imaginations on fire, transcended genre, broke sales records, and established new, almost unreachably high standards for comic-book art in their respective mediums.

Now, that dynamic duo is poised to return with a one-two punch that culminates with Nolan's The Dark Knight Rises in 2012 and begins with Rocksteady's Batman: Arkham City, the eagerly awaited follow-up to Arkham Asylum, which sold a staggering two million copies in its first three weeks alone. Written again by Paul Dini, directed by Sefton Hill, and art-directed by David Hego, the sequel eclipses the scope and scale of its predecessor in almost every way, lifting Batman out of the claustrophobic confines of Arkham Asylum and releasing him onto the mean streets of Gotham, an environment that's more than five times bigger.

Scaling up the playing field meant scaling up the cast of villains and thugs—a population explosion that has the city overrun with almost every super-villain from the Batman mythos. As a result, Rocksteady had to adapt Batman's gameplay to the massive Gothic sprawl. The team gave him a Power Dive to glide between buildings and the ability to chain attacks to contend with the relentless gang assaults, sometimes 30 or more assailants strong—a far cry from the one-on-one combat of Arkham Asylum (see "Dark Matter," October 2009).

The sequel is set one year after the original. Batman has foiled the Joker's plans to poison Gotham's water supply with the zombie-making Titan chemical, but Quincy Sharp, former warden of Arkham Asylum, has taken credit for the collar.
Parlaying his notoriety into a successful bid for mayor, Sharp makes his first act the purchase of a large section of slum-infested North Gotham to house the burgeoning inmate population, creating a makeshift prison-city policed by a private military contractor called Tyger. To oversee the so-called Arkham City, Sharp hires the psychotic psychiatrist Hugo Strange, who not only has a hidden agenda for the city, but also knows Batman's true identity, "leaving him vulnerable and exposed in a way he's never been before," says Hill.

Surveying the open city from atop his gargoyle perch, watching from a distance as it factionalizes under each villain vying for rule, Batman is eventually forced into the city when Two-Face kidnaps Catwoman, his former love, and devises a plot to publicly execute her. Through it all, Batman tangles with Catwoman and allies with Robin to stop Gotham from descending into total chaos.
'Batman in Gotham' Feel

Whether Batman's motivation is love or heroism, Rocksteady's primary motivation for relocating the Caped Crusader to Arkham City was to deliver, according to Hill, that "Batman in Gotham feeling."

The game features a wide range of villains who roam the expansive playing field, including Two-Face.

"That sensation of gliding through the streets of Gotham City as the Dark Knight was one of the key objectives we set for ourselves," says art director David Hego. "Moving the action out of the asylum and onto the streets was a huge creative and technical undertaking; Batman's navigation abilities needed to step up, providing an entirely new set of gameplay opportunities for the player. From an artistic perspective, the priority was to create a world suffused with a lot of realistic elements so it would feel believable but still uniquely Gotham."

This uniquely Gotham feel borrows from and expands on the architectural styles and atmosphere established in Arkham Asylum. Like before, crumbling Gothic and Victorian buildings abound, where old-fashioned turrets, spires, and gargoyles clash with glaring splashes of neon signage. For Arkham City, however, art directors added flourishes from other architectural and art movements of the 20th century.

"Of course, the architecture is reminiscent of Arkham Asylum for the simple fact that we wanted the world we created to remain consistent and logical," says Hego. "However, we expanded on the world architecturally, buttressing it with new conceptual pillars. Gothic and Victorian-style structures are still present as the foundation and DNA of Gotham City and its dark feel. On top of these two strong styles, we've added Art Nouveau elements in the architecture and design. It's fascinating to explore real-world history and borrow elements to re-create piece by piece for our world."

In addition to borrowing from real-world history, the team extracted visual threads from the early history of cinema and wove them into the game's dark visual tapestry. "Another inspiration for the atmosphere of Arkham City came from German expressionism (think the 1920s' The Cabinet of Dr. Caligari). We took cues not so much from structures and perspective, but more in the way we lit the world, with crude light and shadows, which is appropriate for Gotham City," adds Hego.

Each bad guy has his own lair, an environment designed by artists to reflect the villain's personality.

The multiplicity of villains greatly informed the set design, too. Over the course of the game, Batman squares off against a who's who of villains, including Mark Hamill's Joker, a Cockney Penguin, Two-Face, and Mr. Freeze. Each villain has staked out his or her own enclave in Arkham City, where the architecture, graffiti, lighting, and art direction personify the unique psychology of the character. It's a city diversified and variegated by villainy. Hence, as players make their way from one enclave to another—say, from the courthouse of former DA Harvey Dent (aka Two-Face) to the Penguin's Iceberg Lounge—they had to feel as if they were making a physical transition to another "emotional space" through the art.

"A great example of this is the Solomon Wayne Courthouse, where Two-Face is holed up," says Hego. "Not only is the location specifically relevant to him as the ex-district attorney of Gotham, but he's also remodeled the building to reflect his own duality. The right side of the courthouse—both inside and out—is defaced, like his own mutilated right half, smashed up and burnt, symbolizing his lust for chaos and carnage, while the other side is perfectly rendered in accordance with his belief in order and justice. So, in this way, moving from one district to another is signposted by subtle changes in the features and landmarks of the street."

By the same token, the Joker has established his territory in the Sionis steel mill nestled in the industrial part of the North Gotham docks. Here, the Joker's gang has redesigned the area into a massive, morbid funfair. "The mix of funfair elements with the industrial setting creates an explosive environment, rich in color, just like the Joker's personality. It was a great experience trying to imagine how each villain's faction would mark its territory," adds Hego.

This ghettoized city stretches out before Batman in great vistas when he enters his Power Dive, arcing over streets and the skyline. "The city is such a rich, dense place, filled with these little iconic elements and details," Hego points out, "that we had to be clever with what we display on screen. To that end, we employed a complex LOD system to hide superficial details at a distance, while keeping texture density and geometry impressively high at street level or while grappling between buildings."

Unreal to the Max

Using Epic Games' Unreal Engine 3 and Autodesk's 3ds Max, artists forged all of Arkham City's texture maps, geometry, and lighting. The German Expressionist cinematography—crude, angular, brooding—came mainly from the way the moon lights the world. "It's not just about the lighting by itself, but about how the light interacts with the materials, the normal map, and the specular levels of the snow, the water, the buildings; and the way the water towers and chimneys cut through the moonlight with dynamic light shafts," says Hego. "That's the key to capturing the striking Gothic atmosphere."

Another crucial light source in the game, of course, is the Bat-Signal, not just because of its connotation within the Batman universe, but for its narrative function, too, pointing the player to the next objective as it refracts and reflects off smoke and clouds. "The Signal can be placed arbitrarily anywhere on the map by the player (which means it could end up too distant and dim), so we had to find a way to make its integration coherent without feeling fake. Using the stock lighting of the Unreal engine, somewhat re-engineered by our coders, we decided to use an arrow-style representation of the cone of light through the smoke, instead of a solid light cone. With the arrow-style lighting, the Signal achieves its functional purpose and is visually impactful without looking out of place."
The city's massive cast, composed not merely of homogeneous non-player characters (NPCs), but of highly unique super-villains and their equally unique minions, put Rocksteady's character modelers to the test. After sculpting a rough base mesh in 3ds Max, modelers refined the geometry in Pixologic's ZBrush to produce a high-resolution version of each character. From this, they created the in-game model and extracted the normal map. "The poly counts of the in-game models aren't low, ranging around 15k per character, but the normal map is still vital, to keep all the details of the high-res version," says lead artist Pablos Hoyos.

While most of the intricate details—wrinkles, scars, caking face paint, and so forth—were baked into the normal maps, artists used ZBrush and Adobe's Photoshop to paint diffuse maps, specular and specular power maps, as well as transmission maps, to simulate subsurface scattering of light through skin, flesh, and veins. "We always try to add as much detail as we can, especially in the faces. We have skin imperfections, like moles and skin marks, different types of pores, stubble, skin lines, wrinkles, skin tones, and so on. All these details are present in each map of the shader system and, when layered together, produce an astonishing sense of realism," says Hoyos.

Indeed, unlike the square-jawed, nondescript, neckless grunts who inhabit most games, Arkham City's faces reflect the subtleties of strong, nuanced personalities. "Some of the faces presented quite unique challenges, such as Two-Face's burnt flesh, Solomon Grundy's 'zombified' look, and Penguin's old skin, which he cakes in makeup because of his vanity," says Hoyos. Separating the villains with a colorful individuality was a challenge, he contends: Catwoman was all about playing with her proportions until the team got the right mix of beauty and sex appeal, while Penguin was all about making his face look pure evil, with the broken glass monocle an unusual, character-defining touch.
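The layered-map approach Hoyos describes can be pictured with a toy per-texel combine. This sketch is illustrative only: it uses a Blinn-Phong-style model with a crude wrap term for transmission, not the game's actual shader, and all names are assumptions.

```python
# Illustrative sketch (not Rocksteady's shader): combining the painted
# map layers -- diffuse, specular, specular power, transmission -- into
# a final shade for one texel.

def shade_texel(diffuse, specular, spec_power, transmission,
                n_dot_l, n_dot_h):
    """Blinn-Phong-style combine of the painted map layers.

    All inputs are scalars in [0, 1] except spec_power (a shininess
    exponent); a real shader would do this per color channel on the GPU.
    """
    diffuse_term = diffuse * max(n_dot_l, 0.0)
    specular_term = specular * (max(n_dot_h, 0.0) ** spec_power)
    # Transmission approximates light scattered through skin and veins,
    # strongest where the surface faces away from the light.
    sss_term = transmission * max(-n_dot_l, 0.0) * 0.5
    return min(diffuse_term + specular_term + sss_term, 1.0)
```

A lit, front-facing texel is dominated by the diffuse map, while a backlit texel with a high transmission value still picks up some glow, which is the visual cue for thin skin and ears.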
Mr. Freeze's armor is a complex assemblage of many individual pieces, so modelers had to carefully plan out their work with the rigging and cinematics teams to ensure the pieces behaved correctly (without intersections) and looked as good as possible.
Artists baked most of the facial details into the normal maps, and then used ZBrush and Photoshop to paint diffuse, specular, and specular power maps to simulate subsurface scattering through the skin and veins.

Mocap Method Acting

To drive the characters' animations, riggers built an IK skeleton using a basic 3ds Max biped rig augmented with additional facial bones. "Because we also have to deal with a lot of motion capture, we built a version of this IK setup that runs in MotionBuilder—the primary work space for motion-capture-based animation," says lead animator Zafer Coban.

Unlike Arkham Asylum, in which animators relied heavily on normal-map blending for delicate deformations in subtle facial expressions, this time Rocksteady wanted to push the range of the performances by relying more on mocap. "The primary difference between the two games relates to Batman's face and face setup," says Coban. "Primarily, we wanted to enhance his performances and those of all the characters by developing a facial motion-capture pipeline whereby face actors would repeat the lines of the original voice actors [like Mark Hamill or Kevin Conroy, who plays Batman], providing facial acting in the process." This necessitated a more highly articulated facial rig, granting a much larger and more accurate range of possible motion.

In more than 200 motion-capture sessions, Rocksteady shot a total of 14 hours of facial data and 17 hours of body motion for the in-game animation (excluding the hours shot for cinematic sequences). In fact, every single character's animation set includes some motion capture. The motion-capture setup at Rocksteady is an optical, marker-based Vicon system comprising 32 cameras. The team uses Vicon's Blade software to capture and process the initial data. "We have a solution that ensures good, sturdy data is baked onto the skeleton within Blade before moving things on to the next stage in MotionBuilder," says animation programmer Tim Rennie. Here, animators can take the original, actor-scale performance and drive the final in-game character setup. Tweaks and embellishments to the performance happen in MotionBuilder, but much of the final animation is keyframed in 3ds Max, which is the final destination before export to the engine.

"We produced about 45 minutes of final facial capture for most characters, during which time our facial actors match their voices to the original actors' performances," says Rennie. This involved a complex process in which the facial actor would repeat 10-second clips until the team had a perfect, fully synced performance. These clips would be captured with their correct time code so they could be reassembled in MotionBuilder. There, animators would use video reference from the capture to combine everything—marker data, hand-tweakable controls, and automatic correction scripts—to drive the final performance. The resulting facial animation could then be merged separately onto the body capture within the Unreal Engine without the animators having to worry about breaking the sync to the final audio.

Along with facial capture, the facial animation system also employs OC3 Entertainment's FaceFX, depending on the importance and complexity of the scene. Using blendshape targets in a fully articulated FaceFX rig, animators could keyframe expressive eye animations or subtle facial tweaks to punctuate a particular body movement, polishing hundreds of lines of dialog very quickly. "In addition, we also constructed a hassle-free pipeline to quickly embed FaceFX animations onto body animations for all in-game movements," says Rennie. A small mercy in a game with 100-plus unique faces to animate, excluding weight variants, like the fat or thin.
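The timecode-based reassembly step Rennie describes, where 10-second takes captured out of order are laid back onto one timeline, can be sketched in miniature. The data layout and function name here are assumptions for illustration, not Rocksteady's pipeline code.

```python
# Illustrative sketch of reassembling approved facial takes by their
# original timecode, plus a sanity check for coverage gaps between
# consecutive 10-second takes.

def assemble_timeline(takes, take_length=10.0, frame=1.0 / 30.0):
    """Given (timecode_start_seconds, clip_id) pairs captured in any
    order, return clip ids in performance order plus any pairs of
    adjacent takes separated by more than one frame."""
    ordered = sorted(takes)  # timecode puts takes back in story order
    timeline, gaps = [], []
    for i, (start, clip) in enumerate(ordered):
        timeline.append(clip)
        if i > 0:
            prev_start, prev_clip = ordered[i - 1]
            # Flag a hole if this take starts later than one frame
            # after the previous take should have ended.
            if start - (prev_start + take_length) > frame:
                gaps.append((prev_clip, clip))
    return timeline, gaps
```

The same idea scales to a real session: sort by timecode, splice, and report any stretch of the performance that no approved take covers.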
Feline Finesse

While Batman remains the story's main hero, for about 10 percent of the game the player can slip into the sleek, skin-tight leather of Catwoman. Armed with a whip and bolo, Catwoman exploits the chaos in Arkham City to go on a kleptomaniacal rampage, thieving jewels and valuables like there's no tomorrow. In her first mission, to steal an orchid for Poison Ivy, she glides lithely across rooftops and alights upon some unsuspecting Tyger security guards standing over a manhole cover—her access point to a maze of sewers leading to a vault. Inside, she performs her signature "ceiling climb," dropping down on guards to pickpocket their keys. When the alarm blares, she unloads with a flurry of fluid roundhouse leg kicks that would daze Batman with their blinding speed and grace.

In designing Catwoman's gameplay, Rocksteady's first objective was to make sure players did not feel like they were guiding a curvier version of Batman. This entailed a wholesale reworking of the rigging and weighting of the standard IK chain, enabling greater speed and flexibility in her animations. "Catwoman's rig is, of course, unique to her. She's a lot slimmer, shorter, and has a bunch of bespoke controls for her whip," says Coban. "As soon as we decided to include Catwoman as a playable character, we wanted the gamer to have a totally new experience of the Arkham City world, not a re-skin of any sort. With combat, we've taken our influences from acrobats, gymnasts, and ballerinas to bring a unique flavor to her fighting style. We've concentrated more on legwork, and left the hard-hitting, brutal punches to Batman himself."

For example, Coban says, when Batman hits, the impact registers with sheer, blunt-force trauma, whereas Catwoman's attacks, while less impactful, are faster, more agile, athletic. "We played with those elements, and it really shows during combat. Players will particularly appreciate this unique legwork in her Stealth Predator gameplay, where she'll flip up onto a thug's shoulders, wrap her legs around his arms and head, and choke him out with those long, hardened legs."

Everything from Catwoman's gadgets to her navigational skills is custom-made for the feline fatale. "She doesn't have the Grapnel
Gun, so we've given her the Whip Swing and the Claw Climb; altogether, it looks and feels very different playing her," says Coban.

Catwoman is also a prime example of Rocksteady not only pushing facial and body mocap to enhance performances, but also run-time physics to enhance the dynamic motion of hair, coats, straps, and, specifically, Catwoman's whip, which snaps and coils with astonishing realism. "We always had the ability to add additional movement to a character's animation using run-time physics simulation, but we really pushed it hard on Arkham City."

The animators increased the number of Batman's combat moves to make him appear more powerful.
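The kind of run-time physics that makes a whip snap and coil is commonly built on a chain of particles with distance constraints. The sketch below is a generic 2D Verlet rope, an assumption about the general technique rather than Rocksteady's engine code; all names are hypothetical.

```python
# Illustrative Verlet-rope sketch: particles integrated each frame,
# then relaxed back to fixed segment lengths so the whip never
# stretches. The first particle is pinned to the animated hand.

def step_whip(points, prev_points, handle, segment=0.2,
              gravity=-9.8, dt=1.0 / 60.0, iterations=4):
    """Advance a whip of (x, y) particles by one frame."""
    # Verlet integration: next = 2*cur - prev + accel*dt^2
    new = [(2 * x - px, 2 * y - py + gravity * dt * dt)
           for (x, y), (px, py) in zip(points, prev_points)]
    for _ in range(iterations):
        new[0] = handle  # the handle follows the character animation
        # Relax each segment toward its rest length.
        for i in range(len(new) - 1):
            (x0, y0), (x1, y1) = new[i], new[i + 1]
            dx, dy = x1 - x0, y1 - y0
            dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
            push = 0.5 * (dist - segment) / dist
            new[i] = (x0 + dx * push, y0 + dy * push)
            new[i + 1] = (x1 - dx * push, y1 - dy * push)
        new[0] = handle  # re-pin after each relaxation pass
    return new, points  # current positions become next frame's "prev"
```

Driving only the pinned handle from the animation while the free end is left to the integrator is what produces the characteristic trailing snap.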
Cape Animation

Newly enhanced run-time physics simulation also underlies much of Batman's improved cape animation, which billows in the wind, unfurls during the character's Power Dive, and pleats and settles as he slows into combat mode. The in-game cape combines a mixture of elements: real-time cloth simulation driving a skeletal mesh rig; hand-keyed skeletal animation; and off-line cloth simulation, authored in 3ds Max cloth and baked onto skeletal animation. To produce the ultra-realistic clothing animations in the pre-rendered cinematics, artists baked this off-line cloth sim onto vertex animation.

From the outset of production, the team wanted extremely fine control of the cloth while also ensuring that it reacted in a natural and dynamic way to the environment, the weather, and Batman's movements. "The biggest change during the development of Arkham City was the redeveloping of the cape rig midway through production. We had to make it easier for animators to pose the cape more intuitively for any particular action," says Coban. This pose would then form the driving shape for the final physics simulation at run time.

During most of the gameplay, the in-game cape is pure real-time simulation, but when a particular iconic move or a stylized result is needed, the keyframed animation kicks in. In some situations, an animation is used for the overall shape, while the sim adds physics detail at the edges. For example, the wind rippling through the cape is achieved by level artists placing wind volumes in the world nearby. "We spent a lot of time trying to retain a high level of animator control while still running the cape under live physics simulation," says Coban.

Balancing animator control with live physics simulation was also crucial to animating Robin's staff. "It's a complicated piece of kit that can bend, flex, extend, and turn into a shield, all while he's swapping it from hand to hand," he adds.
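The hybrid Coban describes, where animation drives the overall shape while the sim adds detail toward the edges, amounts to a per-vertex blend. This minimal sketch assumes a simple linear weight; the weighting scheme and names are illustrative, not the game's actual rig.

```python
# Hedged sketch of blending a keyframed cape pose with a physics sim:
# the animator's pose wins near the collar, the simulation wins
# toward the free-hanging hem.

def blend_cape_vertex(keyed_pos, sim_pos, edge_weight):
    """Linear blend of keyframed and simulated (x, y, z) positions.

    edge_weight is 0.0 at the collar (fully animator-driven) and 1.0
    at the hem (fully simulation-driven).
    """
    return tuple(k + edge_weight * (s - k)
                 for k, s in zip(keyed_pos, sim_pos))
```

Run over every vertex with a weight map painted down the cape, this gives exactly the split described: iconic silhouettes stay on-model while the edges keep live physics detail.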
Combat Choreography
To handle the crush of assailants and the sprawling, open setting, Batman's range of
movement for maneuvering through the environments and for hand-to-hand combat has undergone an aggressive expansion. In fact, his animation set has doubled. According to Hego, the expansion of the game world drove a redesign of every aspect of Batman’s navigation and combat, as well as a huge overhaul in the way that the team conveyed story and narrative elements to the player. “For example, the enhanced Power Dive—through which Batman gets around the city and the player experiences the freedom and exhilaration of flying through alleys and over the skyline— was a completely new development challenge for us, resulting in the full-momentum gliding system,” he says. Furthermore, doubling the number of combat moves was essential to convey a sense of variety in Batman’s combat skills, so critical to the feeling of power and dominance offered by the FreeFlow Combat system. Waylaid by Oswald Cobblepot’s goons and the Frankenstein-like Solomon Grundy in the Iceberg Lounge, Batman chains his punches and kicks to clear the room, following up a roundhouse kick to one thug with a swift leg sweep to another in a seamless series of multiple, simultaneous counters, all the while reacting to thrown objects and without the slightest hitch in the blending system. “In Arkham Asylum, thugs would generally attack one at a time, but in Arkham City, we’ve blown that out of the water, letting thugs rain punches and kicks in simultaneous assaults that really make the player feel pressure—as they would in a real fight,” says Hego. “Consequently, Batman can now perform double
and triple counters, whereby he deflects and dodges all these blows and sends counterattacks [on multiple characters] in one swift move."

The daunting task of programming AI for these complex ensemble fights and endless counterattacking fell to AI programmer Tim Hanagan and his team of coders. "There were so many challenges involved in increasing the crowd combat from 10 or 12 to about 30. First, we had to optimize the performance of all the various systems so that these large-scale fights could run at a consistent 30 frames per second," he says. "The second was managing the positioning of so many enemies, to prevent the fights from feeling too cramped." From a visual standpoint, the group had to allow the player to see clearly and assess each situation. Most of these challenges were addressed through the studio's custom character collision system. Implemented within the Unreal engine, this character collision system replaces the existing stock Unreal system with one that's much faster, more efficient, and streamlined. Hanagan explains: "It uses the navigation mesh data to allow faster collision queries against an approximation of the real level geometry. This was a major contributing factor in allowing us to support so many active characters at once."

The team additionally implemented a real-time path-smoothing system to improve the look and realism of the paths the AI take when traversing levels. In addition, they developed a character scripting system within the Unreal Kismet scripting editor to allow animators to implement complicated scripted events without any code support, all while still allowing for a high level of player interaction. While the character collision system has the capacity to support more than 30 combatants in some areas, Hanagan cautions that pushing the crowd beyond that number only made the gameplay confusing.

Batman is also armed with a new "context-sensitive mechanic," which allows him to integrate his immediate environment into his fighting—improvising with a nearby railing, brick wall, pillar, or street lamp—to subdue assailants. These context-sensitive moves require code that can rapidly sample the local area to identify whether the surrounding geometry can be used within the current combat move, notes Hanagan. The problem with all systems like this, however, is balancing the accuracy of the checks against the need for fast run-time performance. "You always want to minimize the number of line checks performed, but at the same time, you don't want to end up performing a wall animation on a 10cm-wide lamppost, or end up slamming a thug's head into what should be a railing but, in reality, is the space between two railing-high benches," he says.

In building this robust combat system, Rennie contends that the most important thing wasn't any particular piece of technology, but the animators, gameplay coders, and tech artists all sharing the same studio space and collaborating closely. Beyond this close collaboration, Rocksteady's artists relied on the studio's own animation blending system, which harnesses all the standard tools and techniques: cross-blending, time warping, additive animation, motion extraction, and mirroring. A particular focus was placed on automatically aligning animations. For example, if two characters are interacting with each other, the system will automatically blend them into the correct position based on their relative positions in the animation. If a character wants to interact with an item in the world, then a positional marker embedded in the animation tells the system how the character should be aligned.

Of course, much of this complex interaction with Gotham's urban jungle (as Batman or Catwoman) involves scaling walls and ledges, climbing through sewers and ventilation ducts, trying to gain a precarious foothold or handhold on a cornice, gargoyle, crack, or crevice. For aligning hands to walls and ledges, Rocksteady used some of the Unreal Engine's built-in arm IK. However, the team tackled most of the challenge by building the environments to standard grid sizes and then animating to those sizes. So for a wall climb, there are separate animations for 128-, 256-, and 384-unit-height walls, and if the animations don't quite match up to the real wall, then the artists would use a blend to shift the entire character. "For aligning the character to the floor, we dynamically calculate a virtual floor plane that approximates the actual floor geometry underneath the character. A standard two-bone leg IK then skews the animation to fit the plane. In some situations, where leg IK is insufficient (like Catwoman crawling on the ceiling), we rotated the entire character to fit the virtual plane," says Rennie.

Developers used Epic Games' Unreal Engine 3 and Autodesk's 3ds Max for the textures, geometry, and lighting.

Cinematic Touch
The numerous cut-scenes spliced throughout the game unspool through the Unreal game engine using an advanced lighting-rig setup. The rig—which uses hundreds, if not thousands, of lights—simulates global illumination and bounce lighting to add realism, and can be tailored precisely for the atmosphere and design of the shots. "Our engine is so powerful," says cinematics director Paul Boulden, "we were able to create shaders that simulate subsurface scattering and ambient occlusion. It was important for us to not merely create a realistic visual, but to stylize it according to Rocksteady's trademark vision." Boulden meticulously directed the cut-scene performances, assembling them from extensive facial-capture sessions. "We wanted to bring a new level of realism to the characters. One of our main goals was to bring the characters to life by making them more believable," he says. "We captured actors with a physical marker setup on their faces. We were able to capture subtle gestures and nuances that would have been otherwise impossible to get. Rocksteady is driven by the conviction that bringing characters to life will yield a stronger connection to the audience, thus allowing us to tell a more convincing and immersive story." —Martin McEachern

Dark Knight Rising
Indeed, Rocksteady’s commitment to story telling, high production values, and acting— both animated and mocapped—has delivered the studio to the forefront of the industry’s leaders, and Arkham City to the forefront of contenders for Game of the Year. At this year’s ComicCon, Hamill, Conroy, and Dini held court on a panel that was one of the conven tion’s biggest draws, no small feat considering the presence of Peter Jackson and Steven Spiel berg pushing Tintin. And it’s all by design, too, for the seeds of Arkham City were laid, like Chekov’s gun, in secret plans hidden in a backroom of warden Sharp’s office two years ago in Arkham Asylum. Who knows what little bread crumbs have been dropped for future sequels in Gotham’s mean streets. Only Rocksteady knows. What’s certain is that the developer’s achievements left fans waiting with bated breath for this sequel, and if Rocksteady holds fast to its resolve of pushing the bar higher and higher, The Guinness Book of Records may find itself passing the mantle again…and again. n Martin McEachern is an award-winning writer and contributing editor for Computer Graphics World. He can be reached at email@example.com. December 2011/January 2012
For additional product news and information, visit CGW.com
SOFTWARE: ANIMATION
Strike a Pose
Smith Micro Software has released Poser 9 and Poser Pro 2012, marking the first time the company has issued simultaneous releases of the animation tool. Both applications now offer vertex weight-map rigging support and subsurface scattering capabilities, along with a simple-to-use interface that has evolved from past releases yet maintains a familiarity that existing users will appreciate. The releases also include more than 3GB of ready-to-use content, including figures (humans, skeletons, etc.) and architectural elements. Full scenes also have been put together, including office and crime-lab settings, saving users the time of building their own environments. Poser 9 is a 32-bit application that's priced at $249 and offers full-level rendering control. Poser Pro 2012 is a 64-bit application and includes updated PoserFusion plug-ins and Collada support for professionals who want to export animations into programs such as Maya, Softimage, Cinema 4D, and LightWave. The 64-bit application also includes the FireFly Render Engine and a vertex weight-map editing tool suite.
Smith Micro Software; www.smithmicro.com/poser
Sim Solution
AI.implant, Presagis' multi-platform artificial intelligence (AI) authoring and runtime software solution, has been upgraded to Version 5.7. AI.implant is designed for simulation and analysis projects requiring realistic and dynamic urban environments, including unmanned aerial vehicle (UAV) and helicopter training, air traffic control applications, and driving simulation. The updated release improves attributes associated with traffic and human interactions, and enables users to build complex and realistic scenarios faster. As a COTS middleware product, AI.implant integrates seamlessly into existing pipelines and simulation engines. The new release improves the realism of road traffic and pedestrian interactions. Vehicles can now pass other vehicles using slower or oncoming lanes. Traffic lights can be customized to suit the simulation, or run in automatic mode; the TrafficSolver manages the advancement through the traffic-light cycle as the simulation runs. And a vehicle that is not in the correct lane when approaching an intersection will reset its path.
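A minimal sketch of the kind of automatic traffic-light cycling a solver like this manages, illustrative only; this is not Presagis' AI.implant API, and the durations are made up:

```python
# Illustrative state machine for an automatically cycling traffic light:
# each light advances green -> yellow -> red on a fixed, customizable
# schedule that repeats forever. Names and timings are hypothetical.

CYCLE = [("green", 20.0), ("yellow", 3.0), ("red", 23.0)]  # (state, seconds)

def light_state(t):
    """Return the light's state at simulation time t (seconds)."""
    period = sum(duration for _, duration in CYCLE)
    t %= period  # the cycle repeats indefinitely
    for state, duration in CYCLE:
        if t < duration:
            return state
        t -= duration
    return CYCLE[-1][0]  # defensive fallback for float edge cases

state = light_state(21.0)  # 1 second into the yellow phase
```

A customized light would simply carry its own `CYCLE` table, which matches the article's point that lights can be tuned per simulation or left to run automatically.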
LIGHTING
KeyShot Tool Kit
Lightmap has launched the HDR Light Studio Live plug-in for Luxion's KeyShot renderer, which brings a professional real-time HDRI lighting tool kit directly into the KeyShot real-time visualization software. The plug-in allows users to improve the quality of their renders with custom lighting designs for each shot, all via a simple-to-use interface. Existing HDR environments can be augmented to improve the
quality of their renders, with more control over lighting and reflections. Lighting adjustments take place in real time in the KeyShot viewport. HDR Light Studio Live for KeyShot is included with the HDR Light Studio 2.0 Pro edition.
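The core idea of augmenting an HDR environment with additional light sources can be sketched as follows: compositing a Gaussian spot into a tiny latitude-longitude image. This is an illustration of the general technique, not Lightmap's actual API or algorithm:

```python
# Illustrative sketch of HDR environment augmentation: an extra light source
# (here a Gaussian spot) is composited into a small latitude-longitude map.
# Function and parameter names are hypothetical.
import math

def add_spot(env, cx, cy, radius, intensity):
    """Return a copy of `env` (rows of float pixel values) with a spot added.

    The spot is a Gaussian of peak `intensity` centered at (cx, cy); HDR
    values are unbounded, so the result can exceed 1.0.
    """
    out = []
    for y, row in enumerate(env):
        new_row = []
        for x, value in enumerate(row):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            new_row.append(value + intensity * math.exp(-d2 / (2 * radius ** 2)))
        out.append(new_row)
    return out

env = [[0.1] * 8 for _ in range(4)]  # a tiny, uniform HDR environment map
lit = add_spot(env, cx=4, cy=2, radius=1.5, intensity=5.0)
```

Because the edit is additive, the original captured environment still contributes its lighting and reflections; the new spot simply brightens one direction, which is the essence of adjusting reflections per shot.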
HARDWARE: GPU
Follow the Link
AMD unveiled the FirePro SDI-Link, which helps bring real-time, GPU-accelerated performance to pipelines requiring Serial Digital Interface (SDI) input and output. The FirePro SDI-Link is receiving support from manufacturers such as AJA, Bluefish444, Blackmagic Design, Deltacast, DVS, and Matrox, as it allows for the design of fully featured SDI- and GPU-based solutions with ultra-low latency between select AMD pro graphics cards and third-party SDI input/output products. AMD also showed the FirePro V7900 SDI, a new graphics card that is the first to support AMD FirePro SDI-Link. The V7900 SDI will be certified as compatible with all five manufacturers providing PCIe cards offering advanced SDI video signal I/O capabilities. The V7900 SDI is designed specifically for broadcast graphics pipelines. The unit is the first to leverage AMD's DirectGMA technology to help ensure system-level, low-latency synchronized data transfer between the AMD FirePro professional graphics and third-party devices over the PCIe bus. AMD began delivering the FirePro V7900 SDI in October for $2,499.
December 2011/January 2012, Volume 34, Number 8: COMPUTER GRAPHICS WORLD (USPS 665-250) (ISSN-0271-4159) is published bimonthly (6 issues annually) by COP Communications, Inc. Corporate offices: 620 West Elk Avenue, Glendale, CA 91204, Tel: 818-291-1100; FAX: 818-291-1190; Web Address: www.cgw.com. Periodicals Postage Paid at Glendale, CA, 91205 & additional mailing offices. COMPUTER GRAPHICS WORLD is distributed worldwide. Annual subscription prices are $72, USA; $98, Canada & Mexico; $150 International airfreight. To order subscriptions, call 847-559-7310. © 2011/2012 CGW by COP Communications, Inc. All rights reserved. No material may be reprinted without permission. Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by Computer Graphics World, ISSN-0271-4159, provided that the appropriate fee is paid directly to Copyright Clearance Center Inc., 222 Rosewood Drive, Danvers, MA 01923 USA 508-750-8400. Prior to photocopying items for educational classroom use, please contact Copyright Clearance Center Inc., 222 Rosewood Drive, Danvers, MA 01923 USA 508-750-8400. For further information check Copyright Clearance Center Inc. online at: www.copyright.com. The COMPUTER GRAPHICS WORLD fee code for users of the Transactional Reporting Services is 0271-4159/96 $1.00 + .35. POSTMASTER: Send change of address form to Computer Graphics World, P.O. Box 3296, Northbrook, IL 60065-3296.
All together now AJA and Avid
AJA KONA, Io XT and Io Express. Broadcast-quality capture, monitoring and output for Avid® Media Composer® 6.0, Symphony® 6.0, and NewsCutter® 10.0. Desktop or laptop, PC or Mac, AJA products are designed to keep video professionals ahead of the game, delivering unrivaled quality and connectivity. Now, users of Avid software can benefit from the same workflow-enhancing features that Apple Final Cut Pro, Adobe CS5, and Autodesk Smoke editors have come to rely on. From Io XT, our portable Thunderbolt solution, to KONA 3G with its multi-format 4:4:4 capture/playout and full 3D stereoscopic capability, AJA KONA and Io products have got your workflow covered. All models feature 10-bit uncompressed video I/O, SD and HD compatibility and AJA’s renowned hardware-based format conversion. Compatible with PC or Mac, and with a choice of feature sets, AJA hardware provides any working editor with a powerful combination of professional performance and true flexibility and the freedom to work with the software of your choice.
Find out about using AJA products with Avid at www.aja.com
Because it matters.