VFX Voice - Fall 2019 Issue


TECH & TOOLS

ILM AND DISNEY TEAM UP FOR CANNY FACIAL CAPTURES IN AVENGERS: ENDGAME
By BARBARA ROBERTSON

Actors performing while balancing head helmets fitted with cameras pointed at their faces may be an odd sight, but it’s a common one on film productions when those actors play characters that become CG. The motion-capture gear allows the “CG characters” to interact with live-action actors on set, directors to direct them, the director of photography to light them, and camera operators to frame them. It has helped make the integration of CG characters into live-action films seamless, and it has pushed forward the path toward creating believable digital humans. But it’s still awkward. Moving one step closer to the point when actors don’t have to wear silly pajamas and head cameras, ILM and Disney Research have worked together to make a markerless, high-fidelity performance-capture system production-worthy. First prototyped in 2015 by Disney Research, the system, called Anyma, has evolved, but had not been used on a film until ILM implemented it to create “Smart Hulk” in Avengers: Endgame. “I think it’s a revolution,” says Thabo Beeler, Principal Research Scientist at Disney Research. “You can capture facial performances and preserve all the skin sliding with witness cameras. It gives the actor freedom to move around.”

“It’s the next evolution in getting closer to mixing digital and actor performances,” says Russell Earl, Visual Effects Supervisor at ILM on Avengers: Endgame. “The fidelity is pretty amazing.” You can see the result on CG Hulk’s face as the character looks and performs with the sensitivity and humor of actor Mark Ruffalo. According to the studio, ILM created 80% of the Smart Hulk shots, using the Anyma technology for the first time (Framestore did the hangar sequence for the time-travel testing). Doing so meant developing and modifying in-house tools during postproduction to accommodate Anyma’s high-fidelity data. “It was a great leap of faith,” Earl says, “but when we first saw the data, it was ‘Oh my gosh, this is great.’”

All images copyright © 2019 Marvel Studios.
TOP: ILM joined with Disney Research to use the new Anyma facial-capture system for Hulk in Avengers: Endgame.
BOTTOM: Anyma is a markerless tracker that’s able to capture pore-level facial detail.


ANATOMICALLY-INSPIRED MODEL

To set the stage for Anyma, the team first did a facial scan of Ruffalo using Disney Research’s Medusa system to measure and describe his face. Unlike other performance-capture systems, Anyma needs only about 20 shapes – not FACS-based phonemes and expressions that try to activate muscles but, instead, a few extreme positions – for example, everything lifted, or everything compressed, or both eyebrows at once. The system used those shapes from Medusa to automatically build a digital puppet that could be driven by Anyma. That is, the scanned data was integrated and fit to an underlying skull. The system fit the skull, jaw and eyes to Ruffalo’s Medusa scan, based on forensic measurements of a typical male of his age and BMI [Body Mass Index], to create a digital puppet.

“It doesn’t have to be anatomically correct,” Beeler says. “It just provides constraints for later on. The specialty of this puppet is that it has a notion of the underlying skull and the skin thickness. The skin thickness measurements indicate where the skull could be. Same for the jaw and other bony structures. Looking at just the skin is limiting – it doesn’t do well in performance capture.”

That insight – thinking of the face as an anatomical structure rather than a shell – is one part of Anyma’s success. “The secret sauce is the anatomically-inspired model,” Beeler says. “Not anatomical muscle simulation. Anyma has a notion of the underlying bone structure and the tissue between the skin and bone. It’s data driven.”
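To make that idea concrete, here is a minimal Python sketch of one way such skin-thickness constraints could be set up: offset each scanned skin vertex inward along its normal by a per-region tissue-depth prior to get a rough skull estimate that a solver can treat as a constraint. The function name, region labels and depth values are illustrative assumptions, not ILM’s or Disney Research’s actual implementation.

import numpy as np

# Illustrative tissue-depth priors per face region, in centimeters
# (hypothetical values standing in for forensic tissue-depth tables).
THICKNESS_PRIOR_CM = {"forehead": 0.45, "cheek": 1.2, "jawline": 0.9, "chin": 0.7}

def estimate_skull_points(skin_vertices, skin_normals, region_labels):
    """Offset skin vertices inward to get a rough skull-surface estimate.

    skin_vertices: (N, 3) array of scanned skin positions.
    skin_normals:  (N, 3) array of outward unit normals.
    region_labels: length-N list of region names keyed into the prior table.
    """
    depths = np.array([THICKNESS_PRIOR_CM[r] for r in region_labels])
    # Move each vertex against its outward normal by the local tissue depth.
    return skin_vertices - skin_normals * depths[:, None]

In this framing the estimated skull points act only as soft constraints that tell a tracker where the skin cannot go; as Beeler notes, the puppet does not have to be anatomically correct to be useful.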

“I think it’s a revolution. You can capture facial performances and preserve all the skin sliding with witness cameras. It gives the actor freedom to move around.” —Thabo Beeler, Principal Research Scientist, Disney Research

TOP: Animators were able to maintain Mark Ruffalo’s performance with more accuracy than on previous films.
BOTTOM: ILM was responsible for 80% of Hulk shots in Avengers: Endgame.

DEFORMATION AND TRANSFORMATION

Part two of the secret sauce is that the researchers separated deformation and transformation – that is, they separated motion from deformation. They did this by dividing the face into small patches of approximately one-by-one centimeter. “If you look at a small patch on a face, it doesn’t do all that much,” Beeler says. “It stretches a bit [and] forms wrinkles and patterns. As a whole, the face can do much more; a small patch is not that complex. So Anyma chops up the face into a thousand small patches. Then, for each of those patches, it builds a deformation space. And it learns how that patch can be formed from the input shapes.” The problem with that approach is that the resulting digital face could do anything – rip apart, blow up. To avoid that, the second
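As a rough illustration of that per-patch idea, the Python sketch below divides a face mesh into vertex patches and learns a small deformation space for each one from a handful of example shapes, here using plain PCA. The names and the choice of PCA are assumptions made for illustration, not the published Anyma method.

import numpy as np

def build_patch_subspaces(example_shapes, patches, n_modes=4):
    """example_shapes: (S, N, 3) array, S example scans of an N-vertex mesh.
    patches: list of index arrays, each selecting one ~1 cm patch of vertices.
    Returns per-patch (mean, basis) pairs describing how that patch can deform.
    """
    subspaces = []
    for idx in patches:
        # Flatten each example of this patch into a row vector: (S, len(idx)*3).
        data = example_shapes[:, idx, :].reshape(len(example_shapes), -1)
        mean = data.mean(axis=0)
        # Principal components of the centered examples span the patch's
        # plausible deformations; a new patch pose is mean + coeffs @ basis.
        _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
        subspaces.append((mean, vt[:n_modes]))
    return subspaces

A new pose of a patch is then expressed as the patch mean plus a few basis coefficients, which is what keeps each local region from deforming in implausible ways.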


