

AVATAR: FIRE AND ASH


Welcome to the Winter 2026 issue of VFX Voice!


Happy New Year to VES members and VFX Voice readers worldwide! We look forward to bringing you an exciting year of insightful, in-depth features focused on the art and craft of visual effects for entertainment and beyond.
In our Winter cover story on Avatar: Fire and Ash, writer-director James Cameron describes the dramatic evolution of Avatar’s lead characters and the possible future of the franchise with AI. In addition to Fire and Ash, we also spotlight the artistry behind likely contenders for Best Visual Effects at the upcoming 98th Academy Awards, as leading VFX supervisors recount the highlights that defined their work.
Elsewhere in this issue, director Guillermo del Toro explains the role VFX plays in his gritty Frankenstein, Shakespeare’s Hamlet is reimagined through the lens of Mamoru Hosoda’s animated Scarlet, and we profile ILM Digital Artist Paige Warner and Epic Games Co-Founder/CEO Tim Sweeney, the visionary driving Unreal Engine and the ongoing VP revolution.
To anchor our State of the Industry 2026 coverage, we turn to VFX professionals for a diversity of views on the outlook going forward. We also examine how smaller studios adapt to shifting markets, and we continue to chronicle AI’s impact on every corner of the global industry. Then, in Tech & Tools, we unpack the ultimate VFX supervisor’s location survival kit.
Finally, as a special bonus to VES members around the globe who share a common passion for photography, we are delighted to present “A World’s-Eye View,” a unique showcase of original photography by VES members, as selected by a VES panel. Enjoy!
Cheers!
Kim Davidson, Chair, VES Board of Directors
Nancy Ward, VES Executive Director
P.S. You can continue to catch exclusive stories between issues only available at VFXVoice.com. You can also get VFX Voice and VES updates by following us on X at @VFXSociety and on Instagram @vfx_society.

FEATURES
8 THE VFX OSCAR: PREVIEW
Meet the contenders for this year’s top prize for the best in effects.
36 VFX TRENDS: SMALLER STUDIOS
How smaller companies diversify to navigate constant change.
46 PHOTOGRAPHERS SHOWCASE: WORLD’S-EYE VIEW
A curated gallery of original photographs by VES members.
56 FILM: FRANKENSTEIN
Birthing the Creature for Guillermo del Toro’s artistic reimagination.
62 PROFILE: TIM SWEENEY
Epic Games Founder/CEO on the convergence of film and games.
68 COVER: AVATAR: FIRE AND ASH
James Cameron sharpens focus on characters, creativity, detail.
76 PROFILE: PAIGE WARNER
ILM Digital Artist bridges the gap between technology and artistry.
82 INDUSTRY: OUTLOOK ‘26
The VFX industry surges forward while searching for consistency.
88 VFX TRENDS: MAN, MACHINE & ART
Training AI to enhance the artist experience and creative control.
94 FILM: SCARLET
Oscar-nominated anime director Mamoru Hosoda adapts Hamlet.
100 TECH & TOOLS: VFX SURVIVAL KIT
iPad, iPhone apps fill many roles, yet old-school essentials endure.
DEPARTMENTS
2 EXECUTIVE NOTE
106 VES NEWS
109 THE VES HANDBOOK
110 VES SECTION SPOTLIGHT – GERMANY
112 FINAL FRAME – AVATAR’S VFX LEGACY
ON THE COVER: Oona Chaplin as the intense leader of the Ash People in Avatar: Fire and Ash. The goal for the Ash People was to create a striking visual difference, both in their culture and landscape. (Image courtesy of 20th Century Studios)


VFXVOICE
Visit us online at vfxvoice.com
PUBLISHER
Jim McCullaugh publisher@vfxvoice.com
EDITOR
Ed Ochs editor@vfxvoice.com
CREATIVE
Alpanian Design Group alan@alpanian.com
ADVERTISING
Arlene Hansen arlene.hansen@vfxvoice.com
SUPERVISOR
Ross Auerbach
CONTRIBUTING WRITERS
Trevor Hogg
Chris McGowan
Barbara Robertson
ADVISORY COMMITTEE
David Bloom
Andrew Bly
Rob Bredow
Mike Chambers, VES
Lisa Cooke, VES
Neil Corbould, VES
Irena Cronin
Kim Davidson
Paul Debevec, VES
Debbie Denise
Karen Dufilho
Paul Franklin
Barbara Ford Grant
David Johnson, VES
Jim Morris, VES
Dennis Muren, ASC, VES
Sam Nicholson, ASC
Lori H. Schwartz
Eric Roth
Tom Atkin, Founder
Allen Battino, VES Logo Design
VISUAL EFFECTS SOCIETY
Nancy Ward, Executive Director
VES BOARD OF DIRECTORS
OFFICERS
Kim Davidson, Chair
Susan O’Neal, 1st Vice Chair
David Tanaka, VES, 2nd Vice Chair
Rita Cahill, Secretary
Jeffrey A. Okun, VES, Treasurer
DIRECTORS
Neishaw Ali, Fatima Anes, Laura Barbera
Alan Boucek, Kathryn Brillhart, Mike Chambers, VES
Emma Clifton Perry, Rose Duignan
Dave Gouge, Kay Hoddy, Thomas Knop, VES
Brooke Lyndon-Stanford, Quentin Martin
Julie McDonald, Karen Murphy
Janet Muswell Hamilton, VES, Maggie Oh
Robin Prybil, Lopsie Schwartz
David Valentin, Sean Varney, Bill Villarreal
Sam Winkler, Philipp Wolf, Susan Zwerman, VES
ALTERNATES
Fred Chapman, Dayne Cowan, Aladino Debert, John Decker, William Mesa, Ariele Podreider Lenzi
Visual Effects Society
5000 Van Nuys Blvd. Suite 310
Sherman Oaks, CA 91403
Phone: (818) 981-7861 vesglobal.org
VES STAFF
Elvia Gonzalez, Associate Director
Jim Sullivan, Director of Operations
Ben Schneider, Director of Membership Services
Charles Mesa, Media & Content Manager
Eric Bass, MarCom Manager
Ross Auerbach, Program Manager
Colleen Kelly, Office Manager
Mark Mulkerron, Administrative Assistant
Shannon Cassidy, Global Manager
Adrienne Morse, Operations Coordinator
P.J. Schumacher, Controller


2026 OSCAR CONTENDERS BLEND THE STORYTELLING POWERS OF PRACTICAL AND DIGITAL EFFECTS
By BARBARA ROBERTSON
This year’s Best Visual Effects Oscar contenders display a full range of visual effects from invisible effects supporting dramatic stunts and action to full-blown fantasies with a few superheroes and anti-superheroes in between. Visual effects artists created all four elements for these films – dirty, earthy effects; creatures, characters and vehicles flying through the air; fire and its aftermath; and amazing water effects. VFX also supported an immersive, unflinching reality for a documentary-style film; immersive, fanciful, beautiful other worlds with talking animals, realistic animals and unusual characters; a love story or two tucked into a spectacle; a thriller; and inside a fable, a warning for our times.
Director James Gunn wanted to create a new version of Superman, not a sequel or prequel, one that was closer to the spirit of the comic books. (Image courtesy of Warner Bros. Pictures)
OPPOSITE TOP TO BOTTOM: For Avatar: Fire and Ash, more emphasis was placed on artistry than technical breakthroughs. (Image courtesy of 20th Century Studios)
Everything in Tron: Ares has a light line component to it. Most characters wear round discs rimmed with light on their backs. Light trims all the suits and discs. (Image courtesy of Disney Enterprises)
The visual effects in Warfare are entirely invisible, which contributed to the film’s authenticity. Cinesite served as the sole vendor on Warfare and was responsible for approximately 200 visual effects shots. (Image courtesy of DNA Films, A24 and Cinesite)
Many of the films have an extra challenge: to go one better than before. Wicked: For Good is a sequel to last year’s Best Visual Effects Oscar nominee, and Mission: Impossible – The Final Reckoning follows a nominee from the year before. There are also sequels to other previous award winners: Superman won a Special Achievement Award for visual effects in 1978; The Lost World: Jurassic Park (1998) and Superman Returns (2006) received Oscar nominations. Three previous Oscar winners launched sequels last year: Avatar, Avatar: The Way of Water and Jurassic Park, as did the Oscar-nominated Tron.
Only two highlighted films are not sequels: Warfare and Thunderbolts*, although Thunderbolts* is part of a continuing Marvel Universe story. Here, leading VFX supervisors delve into stand-out scenes that elevated eight possible 2026 Oscar contenders. Strong contenders for Oscar recognition not profiled here include Mickey 17, The Fantastic Four: First Steps and The Running Man, among others.

WICKED: FOR GOOD
Industrial Light & Magic’s Pablo Helman, Production Visual Effects Supervisor for Wicked: For Good, earned an Oscar nomination last year for Wicked, which was filled with complex sets, characters, action sequences and animals enhanced with visual effects. According to Helman, that was just the beginning.
“Before, we had an intimate story full of choices,” Helman says. “Now, it opens up to show the consequences of those choices within the universe of Oz. We shot as much in-camera as possible and still have around 2,000 visual effects shots. But the scope is a lot bigger. This film has everything – special effects, practical sets and visual effects all working together. We see Oz with banners all over the territory. We have butterflies, a huge monkey feast, a grown-up lion, train shots, new Emerald City angles, the Tin Man and Scarecrow transformations, crowds, girls flying on a broom among the stars, rig removal, environment replacement, a mirrored environment, a tornado, hundreds of animals…” Animals with feathers and fur, talking animals, animals that interact with actors, hundreds of animals in set pieces and an animal stampede.
As in the first film, the Wizard is out to get the animals. “We listen to the animals’ thoughts when they’re imprisoned; to what happened to them,” Helman says. “We developed new techniques for them and adopted a different sensibility when it came to animation. Action films typically let the cuts control the pacing and viewer focus. But for this film, thanks to Jon M. Chu [director] and Myron Kerstein [Editor], we were able to take the time we needed to get characters to articulate thoughts.”
Animators used keyframe and motion capture tools. Volume capture for shots within big practical sets helped the VFX team handle crowds of extras. ILM, Framestore, Outpost, Rising Sun, Bot, Foy, Opsis and an in-house VFX team provided the visual effects.
One of the most challenging sequences is when Glinda (Ariana Grande) sings in a room with mirrors. “We had to figure out how to move the cameras in and out of the set, which is basically a two-bedroom apartment, during the three-minute sequence,” Helman explains. “We had to show both sides, Glinda and her reflection. We had to move the walls out, move the camera to turn around, and simultaneously put the set back in. Multiple passes were shot without motion control. Sometimes, we used mirrors. Sometimes, they were reference that we replaced with other views. It was very easy to get confused, and we didn’t have much leeway. It was all tied to a song, so we had to cut on cues. It took a lot of smart visual effects.”
The crew used Unreal Engine to interact and iterate with production design, to look at first passes, to conceptualize, and previs and postvis the intricate shots in the film. For example, in another sequence, Elphaba (Cynthia Erivo) flees on her broom through a forest to escape the Wizard’s monkeys. The crew used a combination of plates shot in England and CG. But it’s more than a chase scene. “She escapes without hurting anyone,” Helman says. “That’s part of the character arc for the monkeys. In this movie, they switch sides.”
Wicked: For Good has a message about illusion that was important to Helman. Glinda’s bubble is not real. The Wizard is not real. “And he lies a lot,” Helman adds. “One thing I like about this film is the incredible message about deception. And hope.
TOP: In the Grid in Tron: Ares, digital Ares can walk on walls. When Ares moves into the real world, he obeys real-world physics. (Image courtesy of Disney Enterprises)
BOTTOM: Avatar: Fire and Ash introduces a new clan, the Ash people, and a new antagonist, Varang, leader of the Ash, providing an opportunity for the studio to further refine its animation techniques. (Image courtesy of 20th Century Studios)



BOTTOM: Contributing to the integration of the vault fight was the burning paper found throughout the environment in Thunderbolts*. The live-action blazes from Backdraft served as reference. (Image courtesy of Digital Domain and Marvel Studios)
Elphaba tries to persuade the animals not to leave. But they take a stand; they decide to exit Oz. That’s married to the theme of deception and what we can do about it. It’s such a metaphor for the time we’re living in. It’s very difficult to go through this movie and not get emotional about it.”
WARFARE
Warfare, an unflinching story about an ambush during the 2006 Battle of Ramadi in Iraq, has been described as the most realistic war movie ever made. Former U.S. Navy SEAL Ray Mendoza’s first-hand experience during the Iraq war grounded directors Mendoza and Alex Garland’s depiction. The film embeds audiences in Mendoza’s fighting unit as the harrowing tale unfolds in real time. While critics have praised the writers, directors and cast, few mention the visual effects. “The effects weren’t talked about at all,” says Cinesite’s Production Visual Effects Supervisor Simon Stanley-Clamp. “They are entirely invisible.” In addition to Cinesite’s skilled VFX artists, Stanley-Clamp attributes that invisibility and the accolades to authenticity. “Ray [Mendoza] was there,” he notes. “He knew exactly what something should look like.”
Cinematographer David J. Thompson shot the film using hand-held cameras, cameras on tripods and, Stanley-Clamp says, a small amount of camera car work. “We did have a crane coming down into the street for the ‘show of force,’ but that was the only time we used a kit like that,” he says. The action takes place largely within one house in a neighborhood, limited to what the trapped SEALs could see outside – or what the camera could show. In addition to a set for the house, the production crew built eight partial houses to extend the set pieces.
TOP: Warfare is an unflinchingly realistic story about an ambush during the 2006 Battle of Ramadi in Iraq. The visual effects crew added the muzzle flashes, tracer fire hits and smoke, with authenticity foremost. (Image courtesy of DNA Films, A24 and Cinesite)



These buildings worked for specific angles with minimal digital set extension. Freestanding bluescreens and two huge fixed bluescreens were also moved into place. Later, visual effects artists rotoscoped, extended the backgrounds, added air conditioning units, water towers, detritus and other elements to create the look of a real neighborhood.
“For city builds, we scattered geometry based on LiDAR of the sets,” Stanley-Clamp says. Although the big IED special effects explosion needed no exaggeration, the crew spent two days shooting other practical explosions and muzzle flashes for reference. During filming, the actors wore weighted rucksacks and used accurately weighted practical guns. “We shot 10,000 rounds of blanks in a day during the height of the battle sequence, and we had a lot of on-set smoke,” Stanley-Clamp says. In post, the visual effects crew added the muzzle flashes, tracer fire hits and more smoke, again with authenticity foremost. “Ray [Mendoza] knows how tracer fire works, that it doesn’t go off with every bang,” Stanley-Clamp says. “The muzzle flashes were in keeping with the hardware being used. In the final piece, we almost doubled the density of smoke. There’s a point where there’s so much smoke that two guys could be standing next to each other and not know it.”
Cinesite artists created simulated smoke based on reference from the practical explosions, added captured elements and blended it all into the live-action smoke.
Cinesite artists created jets flying over for the ‘show of force’ sequence, following Mendoza’s specific direction. To show overhead footage of combatants moving in the neighborhood, the crew used a drone flying 200 feet high and shot extras walking on a bluescreen laid out to match the street layout. “Initially, we were going to animate digital extras,” Stanley-Clamp says.
TOP: Thunderbolts*’ VFX team worked with production and special effects to do as many effects practically as possible, even when a CG solution seemed easier. Motion blur was a key component of creating the ‘ghost effect.’
(Image courtesy of Digital Domain and Marvel Studios)
BOTTOM: Wicked: For Good has special effects, practical sets and visual effects all working together.
(Photo: Giles Keyte. Courtesy of Universal Pictures)



TOP: Filmmakers shot as much in-camera as possible, yet there were still around 2,000 visual effects shots in Wicked: For Good (Image courtesy of Universal Pictures)
BOTTOM: In Mission: Impossible – Final Reckoning, Tom Cruise’s character, Ethan Hunt, navigates moving torpedoes while inside a partially flooded, rotating submarine, a giant gimbaled submersible built by special effects. (Image courtesy of Paramount Pictures)
“But Alex [Garland] said no. People would know they were CG. And he was right. Even with 20-pixel-high people, your brain can tell.”
Warfare could not have been made without visual effects. “The set would never have been big enough,” Stanley-Clamp says. “The explosions would probably have been okay, but they would have had to re-shoot many things. The bullets wouldn’t have landed in the right places. They wouldn’t have been allowed to fly the jets over in a ‘show of force.’ The visual effects supported the storyline throughout.” He adds, “This isn’t a 3,000-shot show; the discipline was different. We had such a tight budget to work with and did not spend more than we were allowed. But what is there is so nicely crafted. I’ve never been on such a strong, collaborative team.”
TRON: ARES
At the end of the 2010 film Tron: Legacy, Sam Flynn escapes the Grid with Quorra, an isomorphic algorithm, a unique form of AI, and this digital character experiences her first sunrise in the real world. Neither Sam nor Quorra appears in Tron: Ares, but the Grid has made the leap into the real world in the form of a highly sophisticated program named Ares, who has been sent on a dangerous mission. In other words, the game has entered our world – a world very much like Vancouver, where the film was shot.
“From day one, from the top down, everyone agreed to anchor the film in the real world,” says Production Visual Effects Supervisor David Seager, who works in ILM’s Vancouver studio. For example: “Our big Light Cycle chase in Vancouver in the middle of the night was with real stunt bikes.” The practical Light Cycle, though, was not powered; it was towed through the streets, with a stunt performer on the back, and later supplemented with CG. “The sequence is a mix and match with CG, but these are real streets, real cars,” Seager says. “We put Ares into that. If we cut to an all-CG shot, it’s hard to see the difference.”



MIDDLE AND BOTTOM: The all-digital creatures in Jurassic World: Rebirth demanded even more attention to detail to ensure realism. ILM wouldn’t produce a piece of dinosaur animation without finding a specific live-action reference of real animals in that situation. (Image courtesy of Universal Pictures)
In addition to filming in the real world, real-world sets helped anchor the Grid’s digital world; there are multiple grids where digital characters and algorithms compete in games and disc wars. “In the Grid worlds, first we tried to match Legacy, then tried to plus it up,” Seager says. “Our Production Designer, Darren Gilford, was the Production Designer on Legacy. We carried that look into this film. Everything has a light line component to it.” As before, most characters wear round discs rimmed with light on their backs. Ares wears a triangular disc. Light trims all the suits and discs. “All the actors and stunt players had suits with LEDs built into them,” Seager says. “The suits were practical, made at Weta Workshop. Pit crews could race in to fix wiring, but in a small percentage of cases, when it would take too long, we’d fix it in CG.”
Light rims elements throughout the Grid: blue in ENCOM, the company that believes in users; red in the authoritarian Dillinger Grid. Ares, the Master Control Program of the Dillinger Grid, is lit with red.
“Tron: Legacy has a shiny, wet look, and it felt like the middle of the night,” Seager says. “We have that in the Dillinger Grid. To make the Grids seem more violent, we literally put a grid that looks like a cage in the sky – blue in ENCOM, red in Dillinger.” In the digital world, as characters get hit, they break into CG voxel cubes that look like body parts. In the real world, Ares crumbles into digital dust. In the Grid, digital Ares can walk on walls; when Ares moves into the real world, he obeys real-world physics.
SUPERMAN
TOP: Mission: Impossible – Final Reckoning has 4,000 VFX shots, and as in previous MI films, actor Tom Cruise anchored everything, pushing the action, driving innovation. (Image courtesy of Paramount Pictures)




TOP TO BOTTOM: Director Gareth Edwards’s preference for shooting long, continuous action pieces wasn’t conducive to animatronics, so for the first time in Jurassic franchise history, the dinosaurs in Jurassic World: Rebirth were entirely CG. (Image courtesy of ILM and Universal Pictures)
Superman’s all-CG dog, Krypto, is based on director James Gunn’s dog, Ozu. Framestore artists created Krypto from Ozu and gave him a cape. (Image courtesy of Warner Bros. Pictures)
The crew on Wicked: For Good used Unreal Engine to interact and iterate with production design, to look at first passes, conceptualize, and previs and postvis the intricate shots. (Image courtesy of Universal Pictures)
For Stephane Ceretti, Overall Visual Effects Supervisor, there was as much work in the latest Superman as any visual effects supervisor might wish for. There’s Superman himself, lifting up entire buildings with one hand to rescue a passer-by, flying to the snowy, cold Fortress of Solitude to visit his holographic parents, battling the Kaiju, saving a baby from a flood in a Pocket Universe, having a moment with Lois while the Justice Gang battles a jellyfish outside the window. And that’s far from all. There’s much destruction. Buildings collapse. A giant chasm rifts through a city. There’s a river of matter. A portal to another world. A prison. Lois in a bubble. And a dog, a not very well-behaved dog, that likes to jump on people. Superman’s dog, Krypto, is always CG. “[Director James Gunn] gave us videos and pictures of his dog Ozu, a fast-moving dog with one ear that sticks up much of the time,” Ceretti says. “Framestore artists created our Krypto, gave him a cape, and replaced Ozu in one of James’ videos. When we showed it to him, he said, ‘That’s my dog. It’s not my dog. But it’s my dog.’”
In terms of other visual effects, there were three main environments. Framestore artists mastered the Fortress of Solitude sequences filmed in Svalbard, Norway, including the giant ice shards that forcefully emerge from the snow, and the robots and holographic parents inside. Legacy supplied the robots on set, some of which were later replaced in CG, particularly for the fight scenes. The holograms were another matter. “They needed to be visible from many different angles in the same shot,” Ceretti says. “Twelve takes with 12 camera moves. It would be too complicated to use five different motion control rigs shooting at the same time. I didn’t want to go full CG because we would have close-ups. So, I started looking around.” He found IR (Infinite-Realities), a U.K.-based company that does spatial capture, rendering real-time images as 3D Gaussian splats (fuzzy, semi-transparent blobs rather than sharp-edged geometry) to provide view-dependent rendering with reflections that move naturally with the view. “They filmed our actors with hundreds of cameras then gave us the data,” Ceretti says.



TOP: The entrance of the Fortress of Solitude consists of 6,000 crystals refracting light, making it a challenging asset to render for Superman. (Image courtesy of Warner Bros. Pictures)
BOTTOM: Filmmakers almost doubled the density of smoke in the final cut of Warfare. Cinesite artists created simulated smoke based on reference from the practical explosions, added captured elements, and blended it all into the live-action smoke. (Image courtesy of DNA Films, A24 and Cinesite)
“We could see the actors from every angle in beautiful detail with reflections, do close-ups of the faces and wide shots. It was exactly what we needed.”
For the Metropolis environments, ILM artists handled sequences set early in the show, the final battles in the sky at the end, and all the digital destruction as buildings fall like dominoes into a rift. “The rift and the simulation of the buildings collapsing and reforming at the end was really tricky,” Ceretti says. The production crew filmed those sequences in Cleveland. But, to set the Metropolis stage, the ILM artists built a 3D city based on photos of New York City, integrated with pieces of Cleveland, using Production Designer Beth Mickle’s style guide. “We tried to avoid bluescreens as much as we could,” Ceretti says. “We wanted to base everything on real photography as much as possible. So, we created backdrops in pre-production that we could use as translights for views outside the Luthor tower.”
For a tender moment between Superman and Lois, shot on a volume stage, ILM projected views of Metropolis outside the apartment window on an LED screen, including the fight between the Justice Gang and a jellyfish. “We used that as a light source for the scene,” Ceretti says. “They could feel what was going on outside, and we could frame it with the camera. It was the perfect way to do it.”
Weta FX managed sequences with the Justice Gang, taking charge of the travel to, from and inside the weird Pocket Universe. Production Designer Beth Mickle helped Ceretti envision the otherworldly River Pi in the Pocket Universe. “When I read the script, I saw that James [Gunn] wrote next to the River of Pi, ‘I don’t quite know what that looks like,’” Ceretti says.




TOP TO BOTTOM: A second new Na’vi clan, the Windtraders (Tlalim), is introduced in Avatar: Fire and Ash along with their flying machines, Medusoids, unique Pandoran creatures, and giant floating jellyfish-like airships that support woven structures propelled by harnessed stingray-like creatures called Windrays. (Image courtesy of 20th Century Studios)
On a practical level, the ‘return to basics’ VFX techniques used in making Thunderbolts* meant ditching bluescreens for set extensions to achieve a more natural tone – without using LED volumes. (Image courtesy of Digital Domain and Marvel Studios)
One of the most challenging sequences for visual effects in Wicked: For Good is when Glinda (Ariana Grande) sings in a room with mirrors, and both sides of the mirror – Glinda and her reflection – had to be shown. And it was all tied to a song, so the sequence had to be cut on cues. (Image courtesy of Universal Pictures)
“We suggested basing it on numbers, and Beth came up with the idea that differently sized cubes, each size representing a unit, would create the river flow.” On set, the special effects team filled a tank with small buoyant, translucent, deodorant plastic balls that Superman actor David Corenswet could try to “swim” through. Later, Weta FX artists rotoscoped the actor out, replaced the balls with digital cubes and added the interaction. They kept Corenswet’s face but sometimes replaced his body with a digital double to successfully jostle the digital cubes.
“We constantly moved between CG characters and actors in this film,” Ceretti says. “To suggest speed, we’d add wind to the hair and sometimes have to replace hair, and make the cape flap at the right speed to feel the wind. But I really tried to have the actors be physically there. As always, it’s a combination of real stuff, CG, digital doubles, and going from one to the other and back and forth all the time.”
THUNDERBOLTS*
“This film is the opposite of everything I’ve been asked to do in the last 10 or 15 years,” says Jake Morrison, Overall Visual Effects Supervisor on Thunderbolts*, whose 35 film credits include other Marvel films. “Most of the visual effects I do are about spectacle, world-building, planets and aliens,” he says. “Jake Schreier, the director of Thunderbolts*, had the exact opposite brief. He wanted it to feel as organic as possible, grounded, real.”
Thunderbolts*’ unconventional and at times dysfunctional team of antihero characters resonated with critics who hailed the Marvel Studios’ film as a “return to basics.” “Basics” reflects the approach taken for many visual effects techniques. For example, on a practical level, it meant ditching bluescreens for set extensions to achieve a more natural tone – and doing without LED volumes. “Any time we couldn’t afford to do a set extension, I asked the art department to build a plug in the right color for what will be in the movie,” Morrison says. “It took a minute for the team to understand, but it had an incredible ripple effect. You’d look at a monitor live on set and see a finished shot even though there would be set extensions later. It felt like a movie. The amazing thing was that we didn’t have to change skin tones and reflectance. I would say the set extension work was more real because it was additive, not subtractive.”
In that same vein, rather than filming in an LED volume, Morrison used a translight on set for a view out the penthouse in Stark Tower, which in this film has 270 degrees of glass windows wrapping it, a shiny marble floor and a shiny ceiling. “A team from ILM shot immensely high-detail stills of the MetLife building in New York to create an impressive panorama that we printed on a vinyl backing 180 feet long,” Morrison says. “With a huge light array behind the vinyl, the photometrics were correct. One of the guys walked on set and wobbled with vertigo.” Visual effects artists replaced the translight imagery in post, adding areas hidden by the actors standing in front of the vinyl when necessary. “But every reflection in the room was exactly right,” Morrison says. “It’s the same idea as an LED volume, but the color space differs.



TOP: For the biplane sequence at the end of Mission: Impossible – Final Reckoning, where Tom Cruise hangs onto the wing of a biplane while it performs aerobatics, a pilot in a green suit was flying the plane, with Cruise holding on. The VFX teams later removed the pilot and safety rigging. (Image courtesy of Paramount Pictures)
BOTTOM: Fewer than 10% of the shots in the final Essex boat sequence in Jurassic World: Rebirth feature any real ocean water. With few exceptions, entire sequences were filmed on dry land in Malta. (Image courtesy of ILM and Universal Pictures)
There’s nothing like the flexibility you get with LED volumes, but there’s something that feels genuinely organic when you backlight photography at the right exposure.”
Throughout the shoot, Morrison worked with production and special effects to have them do as many effects practically as possible, even when a CG solution might have seemed easier. Offering CG as a safety net in case the practical effects didn’t work helped convince them. “I was able to give Jake [Schreier] shots of the lab sequence, and he couldn’t tell which were plates and which were CG,” Morrison says. “You can only do that if you commit production to build it for real. Then, you can scan that and have it as reference, and you can do stuff with it. We can shake it. Every glass vial rattling and falling, the ceiling cracking, all looks real because we pushed for it to be real. It all becomes subjective if you don’t have that reference on a per-frame basis.”
For a driving sequence, he convinced the crew to take a “’50s” approach rather than work in air-conditioned bliss in an LED volume. Instead, they filmed in the Utah heat. “Everyone knows there’s a crutch,” Morrison says. “It’s easier to change the LED walls. But something old school brings a level of reality.” For motorbike work in that sequence, rather than painting the stunt biker’s large safety helmet blue or green to replace it with CG later, Morrison had the helmet painted the color of the actor’s skin and glued a wig on it. “People on the show thought I was kinda nuts,” Morrison says. “It looked so funny, we didn’t have a dry eye in the house. But we shrank down the oversized helmet and hair by 25% and had an in-camera shot. Just because you could go all CG doesn’t mean you should. You should use VFX for things that need it, not for things that could be practical.”
Even when characters in the film transform into a void, Morrison looked for practical possibilities. “Jake and Kevin [Producer Kevin Feige] would challenge me with ‘How would you do this character if you didn’t have CG?’” Morrison says. “We came up with three types of voiding, each of which could have been done with traditional techniques.” For one, they used what Morrison calls a simple effect that is difficult to pull off: They shot actor Lewis Pullman, rotoscoped his silhouette, zeroed it back until it looked like a hole in the negative, then had each frame show just enough detail to be scary. For the second, they projected shadows out from a character as if it were a light source, using aerial plates shot in New York. For the third, they zapped characters out of the frame and splatted their shadows onto the ground in the direction from which they’ve been zapped. “The effect we created is very old school visually, but we used all modern techniques. It’s really good to be put in a box sometimes,” he adds. “It’s an exercise in restraint. Sometimes, the playground is a little bit too big. Being able to unleash that creativity is fun. If people like the work we did, it’s because we tried so hard for you not to notice it.”
MISSION: IMPOSSIBLE – THE FINAL RECKONING
Visual Effects Supervisor Alex Wuttke received a Best Visual Effects Oscar nomination and numerous other awards in 2024 for Mission: Impossible – Dead Reckoning Part One. The effects in this year’s follow-up film, Mission: Impossible – The Final Reckoning, could provide the same accolades. Production Visual Effects Supervisor Wuttke, who is based in ILM’s London studio, says, “We were successful last time. So, we took what worked and extended it. We wanted to take it to the next level.” This film has 4,000 VFX shots, and as in previous MI films, actor Tom Cruise anchored everything. “He pushed himself,” Wuttke says. “We’re his enablers. His desire to up the ante drives the rest of us to be innovative.”
Two big sequences illustrate the tight interplay between special effects and visual effects: the submarine and biplane sequences. For both, the crews looked for ways to help Cruise perform the scenes realistically. For example, Cruise’s character, Ethan Hunt, must navigate around moving torpedoes while inside a partially flooded, rotating submarine. Special effects built a giant gimbaled submersible that Cruise performed inside in a deep dive tank. The Third Floor’s previs team mapped out the action. The actions and the range of motion dictated the rig that was built. All the water inside the submarine is real. “There were so many moving parts,” Wuttke says. “We had to drop torpedoes around Tom’s performance. Sometimes that was not possible within the chaos of Tom being inside a washing machine. We knew we would need repeat passes.” When it wasn’t safe practically, CG torpedoes hit with the right impact, and water simulations sold those shots. Even when the action was real, the water, filtered for health and safety, needed digital enhancements. CG particulate, bubbles and so forth added reality, scale and scope.
“One of my favorite sequences is the biplane sequence at the end that we shot in Africa,” Wuttke says. In this sequence, Cruise hangs onto the wing of a biplane while it performs aerobatics. “It was so complicated working out how we would execute those sequences,” Wuttke says, “but it was hugely rewarding. The visual effects work is seamless, and Tom was astonishing. We would have a pilot in a green suit flying the plane, and Tom would be holding on.” The VFX teams removed the pilot and the safety rigging. They also replaced backgrounds for pick-up shots – gimbal work filmed in the U.K. – with the African environment.
Several VFX studios worked on the film, including Bolt, Rodeo FX, Lola, MPC and ILM. “Our chief partner was ILM. They were amazing,” Wuttke says. “Jeff Sutherland, Visual Effects Supervisor, was incredible. Their Arctic work with the blizzard was phenomenal. We were worried about the sequence when Hunt confronts the entity, but we were very pleased with the CG work created jointly by ILM and MPC. Hats off to Kirstin Hall, who picked up MPC’s work and finished it at another house without missing a beat.”
Framestore artists mastered the Fortress of Solitude sequences for Superman filmed in Svalbard, Norway, including the giant ice shards that forcefully emerge from the snow, and the robots and holographic parents inside. (Image courtesy of Warner Bros. Pictures)
To boost realism and add fidelity to the creature simulations for Jurassic World: Rebirth, a new procedural deformer applied high-resolution dynamic skin-wrinkling on muscle and fatty tissue simulations, removing previous limitations of 3D meshes. (Image courtesy of ILM and Universal Pictures)
“We’ve talked about the big sequences, but we had close to 4,000 shots,” Wuttke adds. “A lot of work might not be noticed. We amassed a huge amount of material.”
JURASSIC WORLD: REBIRTH
ILM’s David Vickery, Production Visual Effects Supervisor on Jurassic World: Rebirth, received two VES Award nominations in 2023 for Jurassic World: Dominion. Rebirth was more challenging. “This film presented us with so many new technical and creative challenges that it’s hard to compare from a complexity standpoint to any film I’ve worked on before,” he says.
TOP TO BOTTOM: Lo’ak, narrator of Avatar: Fire and Ash, surfing his “spirit brother,” the whale-like Tulkun Payakun. The third film dives deep into tulkun culture. (Image courtesy of 20th Century Studios)




TOP TO BOTTOM: The Light Cycle chase in Tron: Ares took place in Vancouver in the middle of the night with stunt bikes towed through the streets with a stunt performer on the back, and later supplemented with CG. (Image courtesy of Disney Enterprises)
Payakan’s role is elevated from his initial appearance in Avatar: The Way of Water to a major “soldier” in the Na’vi and human alliance against the RDA. (Image courtesy of 20th Century Studios)
Cinesite artists created jets flying over for the ‘show of force’ sequence in Warfare. The fighter jet in the sequences was entirely CG. (Image courtesy of DNA Films, A24 and Cinesite)
From pocket-sized dinosaurs to T. rex, Mosasaur, Spinosaur, Mutadon, Titanosaur, Quetzalcoatlus and more. Water in rapids, water in open ocean, water on dinosaurs, boats in water, dinosaurs in water, people in water, white water, waves, splashes. Capuchin monkeys. Inflatable raft. Digital limbs. Jungles. Titanosaur plains. Filming in wildly remote locations. All within a short production schedule – 14 months from the first meetings with director Gareth Edwards to final delivery.
ILM’s work on the dinosaurs was an example of its ability to deliver the director’s vision. Edwards’s shooting preference – long, continuous action pieces – isn’t conducive to animatronics. “For the first time, we didn’t match the performance of practical animatronics,” Vickery says. “Gareth wanted our dinosaurs to be entirely digital and behave like real animals, not just turn up on screen, hit a mark and roar. We all agreed never to produce a piece of dinosaur animation without finding specific live-action reference of real animals in that situation. Our animators at ILM stuck to this plan and it worked beautifully.” Then, given concept approval for the dinosaurs, artists used new tools to manipulate the creatures’ soft tissue – skin folds, neck fat, cartilage and so forth. To boost realism and add fidelity to the creature simulations, a new procedural deformer applied high-resolution dynamic skin-wrinkling to muscle and fatty tissue simulations, removing the previous limitations of 3D meshes. The output, a per-frame point cloud with normal-based displacement information, could be read back into look development files.
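For readers curious how compression-driven wrinkling can work in principle, here is a minimal, hypothetical Python sketch, far simpler than ILM’s proprietary deformer: it estimates where a simulated mesh has compressed relative to its rest shape and pushes those vertices out along their normals, modulated by a stand-in wrinkle pattern. The function name, parameters and noise term are illustrative assumptions, not the production tool.

import numpy as np

def wrinkle_displacement(rest_pts, def_pts, edges, normals, scale=0.02):
    # rest_pts, def_pts: (N, 3) vertex positions; edges: (E, 2) index pairs;
    # normals: (N, 3) unit vertex normals of the deformed mesh.
    rest_len = np.linalg.norm(rest_pts[edges[:, 0]] - rest_pts[edges[:, 1]], axis=1)
    def_len = np.linalg.norm(def_pts[edges[:, 0]] - def_pts[edges[:, 1]], axis=1)
    # Positive strain where an edge has shortened, i.e. the skin is compressed.
    strain = np.clip((rest_len - def_len) / np.maximum(rest_len, 1e-8), 0.0, None)
    vert_strain = np.zeros(len(rest_pts))
    counts = np.zeros(len(rest_pts))
    for col in (0, 1):  # average edge strain onto the vertices each edge touches
        np.add.at(vert_strain, edges[:, col], strain)
        np.add.at(counts, edges[:, col], 1)
    vert_strain /= np.maximum(counts, 1)
    # A procedural pattern stands in for an artist-authored wrinkle map.
    pattern = 0.5 + 0.5 * np.sin(def_pts @ np.array([37.0, 23.0, 51.0]))
    return def_pts + normals * (scale * vert_strain * pattern)[:, None]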
“We worked on any part of the creature that wouldn’t have been preserved in the fossil records,” Vickery says. “Things that could give our dinosaurs unique personalities, yet wouldn’t contradict current scientific understanding. The new tools gave us an incredible level of detail in our creature simulations and brought a physical presence and level of realism to the dinosaurs that the franchise hadn’t seen yet.” That same level of realism and the opportunity to create new tools extended to the CG water. “There was a huge focus on realism for all the water in Rebirth,” Vickery says. “It was our main concern from the start and one of our biggest technical and creative challenges.” CG Supervisor Miguel Perez Senent, working with Lead Digital Artist Stian Halvorsen, developed a broad suite of tools, including a new end-to-end water effects pipeline based on a wrapper around Houdini’s FLIP solver with custom additions for efficiency and realism. Low-res sims defined the overall behavior of a body of water and guided higher resolution FLIP sims to enhance details where collisions and interactions took place. A mesh-based emission from the resulting water surface provided fidelity in secondary splashes from areas churned up in the main simulation.
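As a rough illustration of the guiding idea, here is a toy, entirely hypothetical Python sketch, nowhere near a production FLIP setup: high-resolution particles keep their own motion but are blended each step toward velocities sampled from a coarse guide field, so the detailed pass follows the approved low-resolution behavior. The grid size, blend weight and field values are invented for the example.

import numpy as np

def sample_guide(guide_vel, cell, pos):
    # Nearest-cell lookup into a coarse 2D velocity field. guide_vel: (H, W, 2).
    ij = (pos / cell).astype(int)
    x = np.clip(ij[:, 0], 0, guide_vel.shape[1] - 1)
    y = np.clip(ij[:, 1], 0, guide_vel.shape[0] - 1)
    return guide_vel[y, x]

def step(pos, vel, guide_vel, cell, dt=1.0 / 24.0, blend=0.3):
    vel = vel + np.array([0.0, -9.8]) * dt      # cheap local "solver" step
    guide = sample_guide(guide_vel, cell, pos)  # what the coarse sim wants
    vel = (1.0 - blend) * vel + blend * guide   # pull bulk motion toward the guide
    return pos + vel * dt, vel

# Usage: a 16x16 guide field pushing everything to the right.
guide = np.zeros((16, 16, 2))
guide[..., 0] = 1.0
pos = np.random.rand(1000, 2) * 16.0
vel = np.zeros((1000, 2))
pos, vel = step(pos, vel, guide, cell=1.0)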
“The biggest innovation was a new whitewater solver that handled the complex behavior of splashes,” Vickery says. “The details in these simulations are second to none. You can see whitewater runoff on the dinosaurs fall back down and trigger tertiary splashes when hitting the water surface.” Fewer than 10% of the shots in the final Essex boat sequence feature any real ocean water, and with few exceptions, entire sequences were filmed on dry land in Malta.




Animals articulate their thoughts in Wicked: For Good, and the audience hears their experiences, raising the level of empathy. (Image courtesy of Universal Pictures)
Sparks were treated as 3D assets, which allowed them to be better integrated into shots as interactive light for Thunderbolts* (Image courtesy of Digital Domain and Marvel Studios)
ILM artists extended the partial build of the boat, created the ocean surface, simulated the boat engines, the whitewater, the wake, crashing bow waves, sea spray and mist, and integrated the Mosasaur and Spinosaur dinosaurs within all that.
“I pushed production to shoot on a real boat at sea, but noise from the boat’s diesel engines made recording sound impossible,” Vickery says. In the end, with consistent light on the boat and cast a priority, they used a water tank alongside the ocean in Malta for most of the shots. SFX Supervisor Neil Corbould, VES, covered the real Essex with IMUs (inertial measurement units) and recorded how it performed in varying ocean conditions. Then, that data drove a partial replica of the Essex he built and put on a gimbal. “We never filled the Essex tank with water because that made the shooting prohibitively slow,” Vickery says. Similarly, the T. rex rapids were largely CG. “It’s a flawlessly executed, fully CG environment based on rocky rivers we found around the world,” Vickery says. “We replaced everything shot in-camera, except a small section of rocks that the cast grab onto at the end. The inflatable raft is CG. There was complex ground and foliage interaction with the T. rex, shots above and below the water surface, and beautifully choreographed T. rex animation.”
Another invention, a custom VCam system, allowed Edwards and Vickery to shoot real-time previs using Gaussian splats of their remote Thailand locations. Vickery and VFX Producer Carlos Ciudad shot 360-degree footage using an Insta360 camera during tech scouts. Proof Inc. created Gaussian splats from the footage, and Production Designer James Clyne built virtual replicas of the sets inside the Gaussian splats. Then, Proof used Unreal Engine to build a virtual camera with the 35mm anamorphic lens package for the splat scenes and motion capture of stand-ins that allowed Vickery and Edwards to block out and direct action sequences.
ILM handled most of the work on Rebirth’s 1,515 VFX shots with an assist from Important Looking Pirates and Midas. Proof did previs and postvis.
AVATAR: FIRE AND ASH
Much has been written about the groundbreaking, award-winning visual effects and production techniques used to help craft James Cameron’s Avatar (2009) and Avatar: The Way of Water (2022). Indeed, both films earned Best Visual Effects Oscars for the supervisors working on the films, giving Senior Visual Effects Supervisor Joe Letteri, VES, of Weta FX his fourth and fifth Oscars. Expect to hear about new groundbreaking tools and techniques for the latest film, Avatar: Fire and Ash, but this time the emphasis is more on artistry than on technical breakthroughs.
“This film is a lot more about the creativity,” Letteri says, “about getting to work with the tools we’ve built. We’re not trying to guess how to do something with three months to go. We’ve front-loaded most of the R&D.” He adds, “A lot of times you build the tools, make the movie, then you’re done. You don’t get to go back and say, ‘Gee, I wish I knew then what I know now.’ This film gave us the chance to do that because the films are back-to-back. We look at the second and third films as one continuous run.”
BOTTOM: ILM handled most of the work on Jurassic World: Rebirth’s 1,515 VFX shots with assists from Important Looking Pirates and Midas. Proof Inc. did previs and postvis.




TOP TO BOTTOM: Unfolding in real time, Warfare was shot over a period of 28 days at Bovingdon Airfield in Hertfordshire, U.K. (Image courtesy of DNA Films, A24 and Cinesite)
Since Avatar: The Way of Water, animators have been able to manipulate faces with a neural-network-based system, giving characters’ faces an extra level of realism, unified by underlying motion capture. (Image courtesy of 20th Century Studios)
As complicated as it was to work out the execution of the biplane sequences in Mission: Impossible – Final Reckoning, the visual effects work was seamless, driven by Cruise’s desire to push the stunts to the limit. (Image courtesy of Paramount Pictures)
For example, much of the live-action motion capture for characters in Fire and Ash was accomplished at the same time as capture for the second Avatar, using techniques pioneered on the first film. “We didn’t shoot anything new since the last film,” Letteri says. However, Fire and Ash introduces a new clan, the Ash people, and a new antagonist, Varang, leader of the Ash people. They, particularly she, provided an opportunity for the studio to further refine its animation techniques. “We have a great new set of characters,” Letteri says. “They allowed all kinds of new emotional paths. Varang [Oona Chaplin] is a bit of a show-off; I really enjoyed scenes with her.”
On prior films, animators used a FACS system with blend-shaped models. Since Way of Water, they have been able to manipulate faces with a new neural-network-based system. “We wanted the face to respond in a plausible way, given different inputs,” Letteri says. “Before, animators would move one part of the face, then move another, and kind of ratchet all the pieces together to make it work. The capture underneath helps unify it, but they have to fine-tune that last level of realism. With the neural network, pulling on one side of the face will affect the other side of the face because the network spans throughout the face. They get a complementary action. The left eyelid might move if they move the right corner of the mouth. It gives the animators an extra level of complexity. It took getting used to because it’s different, but it’s how a face behaves. Once the animators became comfortable with the system, it gave them more time to be creative. The system was easier to work with.”
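To make that coupling concrete, here is a deliberately tiny, hypothetical Python sketch, not Weta FX’s system: a dense two-layer network maps a handful of facial controls to offsets for every vertex, so nudging one control produces small correlated motion across the whole face, unlike independent per-region blendshapes. The sizes and random weights are placeholders; a real system would learn its weights from capture data.

import numpy as np

rng = np.random.default_rng(0)
n_controls, n_verts = 16, 5000      # e.g. 16 facial controls, 5,000 face vertices

W1 = rng.normal(0.0, 0.1, (64, n_controls))   # stand-in learned weights
W2 = rng.normal(0.0, 0.01, (n_verts * 3, 64))

def face_offsets(controls):
    # controls: (n_controls,) -> per-vertex offsets (n_verts, 3)
    hidden = np.tanh(W1 @ controls)
    return (W2 @ hidden).reshape(n_verts, 3)

neutral = np.zeros(n_controls)
pose = neutral.copy()
pose[3] = 1.0                        # nudge a single control (say, one mouth corner)
delta = face_offsets(pose) - face_offsets(neutral)
# Because every output is coupled to every control, vertices far from the mouth
# also move slightly - unlike an isolated per-region blendshape.
print("vertices that moved:", int((np.abs(delta).sum(axis=1) > 1e-9).sum()))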
As for the fire of Fire and Ash, Letteri says, “We’ve done fire before, but not as much as we had on this film. So that was a big focus. Simulations are hard to do, but we do them the old-fashioned way – simulating everything together so the forces are coupled, but breaking layers for rendering to incorporate pieces of geometry. As for the rest, the focus was, ‘We know how to do this. How can we make it better?’”
In addition to the Ash People (Mangkwan), a second new Na’vi clan, the Windtraders (Tlalim), is introduced, along with their flying machines. Windtraders fly using Medusoids, unique Pandoran creatures: giant floating jellyfish-like airships that support woven structures, propelled by harnessed stingray-like creatures called Windrays. “The Medusoids were in Jim [Cameron]’s original treatment, and I was bummed when he cut them from the first film,” Letteri says. “But here they are. They’re pretty spectacular. Everything is connected to everything else. We captured parts of it, so we had to anchor that to our craft floating through the air. It was great to work out all the details and set it flying. Jim shoots in native 3D, so when you look at it in stereo, you’ll really be up in the air with these giant, essentially gas-filled creatures. It’s very three-dimensional.”
The crew also began refining their proprietary renderer, Manuka. “We used a neural network for denoising on the second film and realized that was okay, but we lost too much information,” Letteri says. “It really belongs inside the renderer, not after the render. So, we started moving those pieces into Manuka, and it’s starting to pay off. There will be more improvements to come.”
Letteri observes, “This film is all about the characters and the people behind them. One of the secrets of this business is that with all the computers and automation, there’s still a fair amount of handwork that goes into these films. A lot of work by artists is the key to it, which is great because what else would we be doing?”


WHAT IT TAKES FOR SMALLER VFX STUDIOS TO SURVIVE – AND THRIVE
By TREVOR HOGG
Juggernauts like DNEG and Framestore are not impervious to the volatility of the visual effects marketplace, as demonstrated by the dissolution of Technicolor, while mainstays like ILM, Scanline VFX, Pixomondo and Sony Pictures Imageworks have become the property of streamers and Hollywood studios. Despite the perpetual financial uncertainty, one constant remains: digital augmentation has become a fundamental part of filmmaking, and each new generation of cinematic talent keeps pushing the boundaries of technology to achieve its growing creative ambitions. So, how do smaller visual effects companies find prosperity in an environment where budgets and schedules are getting tighter and more demanding? The answer is not singular but as multifaceted as crafting a believable photoreal CG image.
TOP: Being involved in different industries causes a cross-pollination of ideas and techniques for the Territory Group, which contributed effects to Atlas (2024). (Image courtesy of Territory Group and Netflix)
OPPOSITE TOP TO BOTTOM: Tippett Studio has become part of Phantom Media Group, which includes Phantom Digital Effects, Milk VFX and Lola Post. (Image courtesy of Tippett Studio, Disney+ and Lucasfilm Ltd.)
One of the great joys for Tippett Studio is being tasked with creating monsters, as it did for Star Wars: Skeleton Crew. (Image courtesy of Tippett Studio, Disney+ and Lucasfilm Ltd.)
Tippett Studio, which worked on Star Wars: Skeleton Crew, believes visual effects are an ever-evolving blend of technology and artistry. (Image courtesy of Tippett Studio, Disney+ and Lucasfilm Ltd.)
BUF has been around for 40 years, with its main facility in Paris and a second one established 10 years ago in Montreal; its specialty has remained the same: conceptualizing specific designs for film and television productions. “They always come to us saying, ‘We want something we’ve never seen before,’” states Olivier Cauwet, VFX Supervisor at BUF. “When we were working on American and Luc Besson movies, we had 300 employees; it was too much and changed the spirit of BUF. We wanted to have fewer artists, so now we’re around 140, which allows us to preserve our philosophy and the spirit of how we work. We’re not growing too much because with the [current] market, it’s dangerous to go too fast because you can lose everything.” Project demands have greatly increased. Cauwet says, “My first movie was as an artist on Fight Club, and we did 11 shots in six months. Now we’re doing more than 500 shots in three months. There’s a huge gap, and the technology accounts for a lot of that. We are working on proprietary software, so we’re not using Nuke or Maya.

We have an R&D department which releases software every three months for us. The tools are much more efficient now, especially for tracking and rotoscoping. This allows us more time to be creative with shots. When I manage a team, I only have generalists with me, and they’re working on all the shots, doing matchmoving, rotoscoping, modeling, texturing, lighting, compositing and rendering. Generalists are more personally involved in the process, which means they understand the sequence and story. They’re not only doing a task, but also their shots.”
Commercials are the core business for London-based Freefolk, which subsequently established a film and episodic department. “There are certain companies that specialize in one area or another, but we decided to have feet in both camps,” remarks Paul Wright, COO at Freefolk. “Commercials work in a far looser fashion, not necessarily requiring the dedicated tech pipeline that film or episodic requires. We developed a pipeline to go with film and episodic that has effectively been adopted company wide.” Diversification is critical. Wright notes, “You have to cast your net wider today than you might have needed to a long time ago. At the same time, you have to manage the delivery of all those and maintain the quality level that you would like to be able to deliver.” The visual effects industry has changed. “I remember when the industry was hardware-led and you couldn’t get your hands on the kit,” notes Meg Guidon, Executive Producer, Film & Episodic at Freefolk. “Things have broadened out tremendously, as well as what younger artists can offer. You need to be open to the talent that comes through your door, and we’ve got this expanding network of people we enjoy working with. If we see that they have a talent for motion graphics, our response isn’t, ‘We don’t really do motion graphics.’ We take it on and do it now.”




“When I manage a team, I only have generalists with me, and they’re working on all the shots, doing matchmoving, rotoscoping, modeling, texturing, lighting, compositing and rendering. Generalists are more personally involved in the process, which means they understand the sequence and story. They’re not only doing a task, but their shots.”
—Olivier Cauwet, VFX Supervisor, BUF
Headquartered in Culver City, California, Wylie Co. specializes in previs, postvis and visual effects for film and episodic. “Visual effects are so nuanced and complicated, but you’re still serving the subjective thing,” states Jake Maymudes, CEO & COO at Wylie Co. “You try to make it to order and hand it to the client. Then they can say, ‘It’s too blue or dark or light, change the animation.’ A lot of times, the company has to incur the costs of this very subjective thing they’re selling. It’s complicated and hard. In 2024, we lost money for the first time in 10 years, but this year we’re making money.” The actors’ strike had a substantial impact. Maymudes remarks, “It was a real hit to the industry. It’s still finding its feet. It was a big reset in the content. As far as visual effects is concerned, it could have been the combination of after the strike and Marvel at the same time realizing they made too [many films] because their slate has come way down.” AI and machine learning have become significant disrupters. “If you really dig in, the outcome could be, you film an entire feature film on an iPhone or something really small and run it through an AI post process to give it that epic Hollywood look and feel.” There is no desire to focus on one aspect of visual effects. Maymudes says, “I’ve never thought that specializing was a good idea, which is why we do everything, even though we’re small. Even with the emergence of AI, my personal opinion is, specializing isn’t a good business model, because ultimately, you’re going to have competition that is going to do it just as good as you, and it’s going to be negligent. If you’re a working group of artists who are capable of doing great work in a variety of different ways, that’s what you should sell. Do it all. We use machine learning and AI in all sorts of ways.
TOP AND BOTTOM: Over the past 40 years, BUF has specialized in specific designs for film and television productions such as 3 Body Problem. (Images courtesy of BUF and Netflix)

next week and does that in a click of a button, essentially making your whole process obsolete.”
Straddling the world of visual effects and industrial design is the Territory Group, which originated in London. “We had that moment where everyone wasn’t looking for a new design studio, visual effects shop or digital agency,” recalls David Sheldon-Hicks, Founder of Territory Group. “We picked off niches in individual areas because we needed to find a way to thread between that, so it was the new things that were coming through. There were lots of freelancers working on on-set graphics. However, at the time, there were few small shops dedicated to on-set graphics that were doing it at scale with a robust structure behind them. Then, was anyone taking on-set graphics and applying it into post? There were post houses doing it, but I don’t know if they came from a design background.” Different industries cross-pollinate one another. Sheldon-Hicks says, “User interfaces and holograms in film feel connected to what’s happening in the automotive industry. Because we’ve been working in computer games, actually longer than films, games love to look at the film world and understand how they do what they do and then apply it in their own domain. Equally, the technology knowledge that we got from working in game engine for the last 15 years, not only has it been interesting in terms of virtual production, but also in terms of all the in-car experiences now built out in Unity or Unreal Engine; this is because they give you that next level of three-dimensional data and imagery that is real-time, that can be codified and powered through LiDAR scans or volumetric sensors. There’s this opportunity to use technologies from other fields and bring them across to other industries. We really enjoy that.”

TOP: PFX contributed effects to Locked (2025). PFX believes in building a solid studio in each country, which has been achieved mostly through the acquisition of local companies. (Image courtesy of PFX and Paramount Pictures)
BOTTOM: Spanning Central Europe is PFX, a full-service post-production studio that started off as a boutique studio in Prague and currently works on major studio projects such as Napoleon. (Image courtesy of PFX and Apple TV)




Global creative studio Tendril, based in Toronto, uses design, animation and technology to produce innovative stories and branding. “Nothing that I’ve done in the past 20 years would I ever have anticipated or stepped toward intentionally,” notes Chris Bahry, Co-Founder and CCO of Tendril. “Either you run toward the work or the work runs toward you. On the side of our building, it says ‘Pure Signal,’ and we had a muralist do it typographically as a mural. It’s part of this phrase, ‘Put out a pure signal so the right people can find you.’ That’s our mantra: If you’re putting out the right signal in the form of the work you’re putting out, it’s going to attract the people you want to work with and probably the kinds of work you’re seeking to do.” The work falls on the boutique end of the spectrum. “Sometimes we have projects that are bigger and require a larger team, but they are tiny compared to the ones you have in a large visual effects facility. I’ve always thought of Tendril being akin to ILM’s Rebel [Mac] Unit led by John Knoll. They wanted an agile process based around artists who were generalists and could take a shot from a concept, then design, develop, light and render it, and take it across the finish line. Whereas, in a scalable pipeline, normally it gets into a specialist workflow where it’s like an assembly line. Sometimes, certain problems are hard to solve in a linear, scaled-up specialist pipeline like that because it’s built to do particular things. The way Tendril is built is to be able to adapt and solve unique problems at a much faster cadence. Our project cycles tend to be about eight weeks, or we’ll sometimes only have two weeks to figure out something.”
Spanning Central Europe is PFX, a full-service post-production studio that started as a boutique studio in Prague. “In the beginning, we were an enthusiastic group of guys who wanted to be part of the industry and work on a nice movie,” recalls Lukas Keclik, Producer and Partner at PFX. “Little by little, through hard work, investing all of our money and using every contact, we experienced an evolution.” Subsidies are a driving force as to where the work gets sent, leading to additional PFX facilities being established in Slovakia, Poland, Germany, Austria and Italy.
TOP TWO: BUF, which led effects for Eiffel (2021), does not believe in growing too much or too fast, as it undermines the spirit and philosophy of the company based in Paris and Montreal. (Images courtesy of BUF)
BOTTOM TWO: Visual effects can be deeply nuanced and complicated but still subjective in nature, which Wylie Co. has to keep in mind when dealing with clients like director David Fincher for The Killer (2023). (Images courtesy of Wylie Co. and Netflix)





TOP TO BOTTOM: Dune: Prophecy benefited from the knowledge gained in graphic design by the Territory Group. (Image courtesy of Territory Group and HBO)
Holograms have become a specialty of the Territory Group, as seen in Dune: Part Two. (Image courtesy of Territory Group and Warner Bros. Pictures)
Tendril believes in using design, animation and technology to produce innovative branding and stories such as American Gods (2017). (Image courtesy of Tendril and STARZ)
Tendril believes that putting out good work will attract interesting collaborators and intriguing projects. (Image courtesy of Tendril)
Keclik adds, “A couple of years back, we realized that the company wasn’t small anymore, but not big enough. We started to feel this pressure of subsidies becoming more important. We had 150 people, so it wasn’t a boutique studio anymore, while the subsidies in the Czech Republic were not competitive enough. However, we didn’t want to be a big corporate studio. Based on our experiences and where we felt the industry was heading, the decision was made to be in multiple locations, ideally connected with one pipeline. We didn’t want a business model of having a headquarters and establishing satellite studios. We want to build a solid studio with a local presence, a supervisor and a production team in each branch. The first country was Slovakia, which offered a subsidy of 33%, and we built a studio from scratch. It took a year to establish a team and integrate them, which took too much time away from contact with our clients and team. Then we decided it would probably be better to go the acquisition route.”
Originating in Gothenburg and establishing an operation in Los Angeles, Haymaker VFX is an independent visual effects company that does films, episodic and commercials. “Everything in the industry is relationship-based and those relationships drive the type of work,” observes Leslie Sorrentino, Executive Producer at Haymaker VFX. “You’re seeing agencies pulling editorial and coloring in-house, but not heavy CG. Like animation, you’ve got a bandwidth of clients wanting more of this heavy lifting. Hence, the new Superman. We don’t want to see everything with the kitchen sink in it, but there are set extensions and period pieces. Also, we are the house of record for Polestar in Europe; that relationship gives me a lot of service stuff I can do, like putting cars in environments. On general visual effects marketing and what we’re doing, size does matter, and we have to grow exponentially. It is also compounded by the fact that you’ve seen the contraction of DNEG and closing of Technicolor. We started out doing Season 2 of Warrior Nun, working in conjunction with MCS Canada; that drove us more into the streaming services, which is where the business is at the moment.”
The streamers are going through a readjustment period in regard to demand for content. Sorrentino remarks, “The streaming services oversaturated the market in order to position themselves and, because of that, the content level became ridiculous. There is a cost we paid for that. I firmly believe that there will be winners and losers of the streaming environment. But there will still be fresh content with streamers developing things like Fallout, which was a gaming thing to begin with, or these other series that you’re seeing, like This Is Us. Those are going to continue to grow and be a force in our environment to provide work, but nowhere near the level where it expanded initially. It was much too overblown. Hence the contraction now. That will still be a prominent source of revenue and work. Sports advertising will continue to grow, and experiential verticals are going to be an important part of visual effects growth integrated with AI.”
Embracing AI and machine learning is Toronto-based MARZ, which developed Vanity AI to process large volumes of high-end 2D aging, de-aging, cosmetic and prosthetic fixes, as well as LipDub AI, a realistic AI lip sync video generator.





TOP TO BOTTOM: PFX worked on Winning Time. Being a midsize visual effects company is problematic: PFX was neither small nor big enough to survive, so the decision was made to expand. (Image courtesy of PFX and HBO)
MARZ has worked on shows such as The Studio and has embraced technology as a means to level the playing field. (Image courtesy of MARZ and Apple TV)
Straddling the worlds of visual effects and industrial design is the Territory Group, which provides user interfaces for everything from cars to movies such as Ant-Man and the Wasp: Quantumania (2023). (Image courtesy of Territory Group and Marvel Studios)
The work for Tendril falls on the boutique end of the spectrum. (Image courtesy of Tendril)
“It’s a niche-to-win strategy,” states Jonathan Bronfman, CEO at MARZ. “It’s still difficult, but going out there right now as any run-of-the-mill visual effects provider, you’re using the same tech and pool of artists; there’s no differentiation. It’s not a compelling pitch to a given project to say, ‘We’re MARZ and do great visual effects. We will give you a good price.’ Everyone is saying that. What I’m finding is that the productions are finding more comfort going with the bigger shops. A lot of these shops are owned by studios: Pixomondo and Sony, Scanline VFX and Netflix, ILM and Disney. Those studios are incentivized to keep their own companies fed before feeding other companies. Historically, we were going after big shows like Marvel. The shows would typically go to ILM or Wētā FX, and ask, ‘How much can you take on?’ MARZ was a great alternative to deliver not the final sequence of an Avengers movie but a fight sequence within it. Now those big shops are taking all the work with no spillover.” Technology is the means to level the playing field. Bronfman states, “We’ve placed ourselves in the arena of AI and machine learning. Not only are we developing proprietary machine learning tools, but we’re also wrapping around some open-source projects. Some people resent that, because they think it’s taking away jobs. It’s not taking away jobs but opening up new jobs and opportunities that otherwise wouldn’t exist. But even more important is the differentiation. We have to, otherwise what’s our competitive edge?”
Originally centered around stop motion animation, Tippett Studio has gained a reputation for creating digital creatures. “Visual effects is an ever-evolving blend of technology and artistry,” notes Christina Wise, Vice President Business Development at Tippett Studio. “At Tippett, we stay at the forefront of innovation. We combine legacy techniques like stop-motion animation with cutting-edge tools, including AI, to push creative boundaries while meeting production demands. Expanding our clientele starts with the strength of our work and the legacy behind it. While relationships are key in this industry, we often hear from first-time clients drawn in by their admiration for Phil Tippett’s work and the reputation we’ve built over decades. People might not know the Tippett name right away, but chances are, they’ve grown up watching our work.”
Tippett Studio has become part of Phantom Media Group, which also includes PhantomFX, Spectre Post, Milk VFX and Lola Post. “As part of PMG, we think of ourselves as a new chain of interdependent studios, backed by the appeal of tax incentives and a global presence. Our outreach strategy combines inbound interest with traditional efforts: some clients find us, others we actively pursue. We’re especially seeing momentum from legacy and fan-driven projects, as well as a new wave of younger creators who are passionate about stop motion and anime aesthetics.” Diversification is paramount. Wise notes, “Tippett’s expansive footprint in China includes high-profile fly rides and immersive park content, which continue to perform strongly. Most recently, Jurassic World: The Exhibition opened in Bangkok, showcasing once again how audiences are eager for in-person, experiential storytelling. This type of project not only helps offset industry downswings but also aligns with the growing demand for physical, emotionally engaging experiences beyond the screen.”

WORLD’S-EYE VIEW
VES Members are artists with an eye for composition, beauty and light and shadow, both within the field of visual effects and as fine artists. The World’s-Eye View celebrates the creative spirit that unites our community by sharing original photographs created by our members and showcasing their unique views and personal approaches to the art and craft of photography. A panel of VES members reviewed and selected these 20 photos to share with the world. One image at a time, we invite you to see the world through the eyes of our global community. The full gallery of photographs can be found at the VES Flickr page: http://bit.ly/47Atvl0

Silhouettes
Artist: Nitesh Nagda
VES Section: Bay Area
“In the middle of one of the world’s oldest rainforests in Khao Sok, where the silhouettes tell the history.”

Dream
Artist: Antelmo Villarreal
VES Section: Oregon
“I love surrealism and the unpredictability of underwater photography. In contrast to this industry where everything is scheduled and has a goal, sometimes I do explorational photography with only the intent to create.”

Solitary
Artist: Ajit Menon
VES Section: New York
“This is a lovely solitary tree against a massive dune at sunset in Sossusvlei, Namibia.”

Broken Egg
Artist: Stephan Fleet
VES Section: Los Angeles
“This was a challenge with the study of taking a photo of an egg. This one broke when boiled. I glue-mounted it and used smaller lights and crystals to create an abstract space-like environment. My goal was to take an egg as far away from being just an egg as possible. Because none of us are normal or just eggs.”
Zabriskie Point DVNP
Artist: Zsolt Krajcsik
VES Section: Los Angeles
“Wintertime in Death Valley National Park.”

The Wait
Artist: Bastian Hopfgarten
VES Section: Germany
“During filming of All That Is Left of You, in the old town of Rhodes, actors waiting for the call to action.”

The Gravity of Dreams
Artist: Stephane Pivron
VES Section: France
“At eight, I found a camera on a beach, igniting my passion for photography, which led to my career as a VFX supervisor. This passion shapes my VFX process, from planning to finalization, aiming for photographic authenticity. Unlike my controlled VFX work, my photography involves minimal editing, capturing raw, unfiltered moments. My images invite viewers to pause and see the ordinary anew.”

The Tree
Artist: Anthony Barcelo
VES Section: Oregon
“The famous multi-hued Japanese Maple taken at the Portland Japanese Garden mid-morning.”
Silent Figures Under Pont des Arts
Artist: Stephane Pivron
VES Section: France
“Stolen moment from a Joseph Kolinski perfume shoot, captured at night under the Pont des Arts in Paris, shot with a Sony A7R and a Sony 50mm F1.2GM.”

Claustral Canyon
Artist: Marty Blumen
VES Section: Australia
“A lone canyoneer stands poised on a mossy boulder, dwarfed by towering walls draped in ferns and mist. In the stillness of the canyon, the scale of the world reveals itself: wild, ancient and humbling.”

Ethereal Drift
Artist: Timothy Zhao
VES Section: Vancouver
“As a VFX artist, I am constantly drawn to the subtle power of light, rhythm and natural form. I use B+W to strip away distraction, inviting the viewer to pause, breathe, and witness the quiet beauty that often drifts unnoticed beneath the surface.”


Red-Eye Tree Frog
Artist: Zsolt Krajcsik
VES Section: Los Angeles
“A night photo in the Costa Rica rainforest.”
Untitled
Artist: Alberto Loseno Ros
VES Section: Los Angeles


The Conversation
Artist: David Stern
VES Section: Los Angeles
“These two caught my eye between takes/setups a couple hours out of Budapest in a museum village. Hungarian is an interesting language and impossible to understand for an American, but fun to listen to.”

ELREM
Artist: Chris Hönninger
VES Section: Germany
“Playing with mirrors.”


Cloudscape
Artist: Jason Key
VES Section: Atlanta
“I’m fascinated with the ephemeral, transient nature and light interaction with these megastructures of water vapor.”

Aurora Borealis Over Lake MacDonald
Artist: Jason Patnode
VES Section: At-Large
“The northern lights broke out over Lake MacDonald in Glacier National Park in one of the most stunning displays I have seen. This panoramic pic is comprised of five separate images.”

Los Angeles
Artist: Chris Schnitzer
VES Section: Los Angeles
“A dramatic and rarely seen view of the City of Angels.”

City of Colors
Artist: Sam Javanrouh
VES Section: Toronto
“I wanted to capture the vibrant colors of Toronto in the fall. This is a panoramic blend of nine photos taken in downtown Toronto, using a drone.”

Beach Home
Artist: Evan Pontoriero
VES Section: Bay Area
“Climbing down the Jenner escarpment to find a beachcomber’s home.”
VES would like to thank our jury members: Chair: Jeffrey A. Okun, VES, Jurors: Adam Brewer, Rita Cahill, Toni Pace Carstensen, VES, Nicolas Casanova, Dao Champagne, Ian Dawson, Jason Dowdeswell, Tim Enstice, Urs Franzen, Brian Gaffney, Shiva Gupta, Dennis Hoffman, Robert House, Isaac Kerlow, David Legault, Suzanne Lezotte, Lance Lones, Josselin Mahot, Ray McMillan, VES, Chad Nixon, Christy Page, Jason Quintana, Erasmo Romero III, Daniel Rosen, Afonso Salcedo, Timothy Sassoon, Xristos Sfetsios, Russ Sueyoshi, Ahmed Turki, Sukhpal Singh Vasdev, Gautami Vegiraju, Robert van de Vries, Kate Xagoraris, Timothy Zhao.

GIVING FRANKENSTEIN THE SPARK OF LIFE
By TREVOR HOGG
Images courtesy of Netflix.
Frankenstein has long been a passion project for Guillermo del Toro, which is not surprising as he has mastered the art of creating a beautiful, grotesque Gothic aesthetic and has a deep fascination with monsters. Thus, he was the perfect choice to adapt Mary Shelley’s story, in which a scientist obsessed with defying death attempts to turn a creature assembled from various human body parts into a living being.
“[Monster stories] are excellent parables about the human condition,” states Guillermo del Toro, Producer, Writer and Director. “They tackle things that can be discussed without absolutes, so they have a symbolic power. You don’t have to talk about a middle-class father who is hard on his son wanting to be a rock ‘n’ roll star. You can actually talk about a father creating his son, and the son basically being crucified for the sins of the father. It is metaphorical and real at the same time; that’s what is fantastic.”
The creature determines the cinematic environment. “It’s like visually creating a terrarium. In terms of tone, you need to create a movie that allows the creature to feel real, not like visual effects, makeup, a combination of both or an actor wearing silicone prosthetics. The design of Frankenstein is more operatic, theatrical, and pushed so that the creature can exist there. You design the creature around the body and bone structure of the performer because it’s a character, not a monster.”
The iconic creation scene offers something different from previous film and television adaptations. “What you have almost never seen before is the actual anatomical putting together of the Creature,” del Toro remarks. “That is pretty goddamn new because most people just skip through it. You just see the Creature receiving the electricity. I wanted to shoot it as a joyous moment.
TOP: Victor Frankenstein (Oscar Isaac) visits the aftermath of a battle with a shopping list of body parts.
OPPOSITE TOP: Guillermo del Toro discusses a scene with Oscar Isaac behind an anatomically detailed and realistic rendition of a human body. (Photo: Ken Woroner)

We have this waltz that is celebratory as part of the score by Alexandre Desplat [Composer]. Victor Frankenstein is happy being alone, sawing, patching and designing. It’s somebody who is hard at work doing what they love to do. Then, what I tried to do with the creation is to make it very hard. Normally, what you get is the electricity, the Creature comes to life, and that’s it. But I wanted everything to go wrong. Victor has to climb a tower and put on the lightning rod. He has to come down. One of the parts bends. Everything starts to explode. Harlander dies. Machines are breaking up and releasing steam. The battery may not charge, and the Creature seemingly doesn’t come to life.” Despite the fantastical premise, there is a heavy reliance on earthly restrictions. “I believe that visual effects should only take over when physical reality doesn’t allow tangible things to exist. That’s why we combine every trick in the book. I was an effects technician for many years. I know the language and the tools, so I understand where to draw the line between one boundary and another. We use physical miniatures of large-scale [sets], we use makeup effects and huge set construction, then when the need arises, we seamlessly do a relay race with visual effects.”
For the Arctic sequences where Victor Frankenstein takes refuge on board an expedition ship trapped in the ice and is hunted by his creation, a real ship was constructed, and lanterns were set aflame in a parking lot in Toronto. “Once we designed the ship, it had to be redesigned with the gimbal in mind,” explains Production Designer Tamara Deverell. “We had a Class A engineer who does a lot of film work with us to approve the whole thing because it’s a big engineering feat to move the whole ship. You’re seeing the ship from the port side. On the starboard side, it was completely open
with access to the gimbal. It was quite incredible. We built a giant riser for the icefield and brought it right up to the ship’s edge. Not only did we have the gimbal, but we also had to allow space for it. Then we had to build a pool for when the Creature sinks into the water; that was another level of engineering.” A 3D model of the ship was created to fill in the physical gaps. “We only built it up to about two-thirds of the highest mast height. Visual effects used our model for all their work afterwards because we had designed the entire ship right up to the crow’s nest of the highest mast; they also used it to make sure that Guillermo del Toro and Dan Laustsen [Cinematographer] were framing for the full height of the ship’s masts. Dennis Berardi [Visual Effects Supervisor] had done some ship stuff already, which he shared with me and was an inspiration. Dennis had added all of the ropes, masts, rigging, and this and that. Now we had a lot more than we physically put on our ship. That was helpful as we were breaking it down early on to see what Dennis was capable of, what was in his head, and what we had to do.”
Victor Frankenstein scours a frigid, wintry battlefield looking for human body parts to assemble into the Creature. “That was shot in an old parking lot somewhere in Toronto,” Laustsen recalls. “We were lucky with the weather. We didn’t have any lights on that, just negative fill behind the camera. That’s a seamless shot with four setups and a big wide shot craning in. Dennis did a lot of work on that. He made the snow. Then we color corrected to the cold side to make it even colder.” There were times when bluescreen had to be deployed, in particular for the scene where Victor climbs to the top of the water tower to place the lightning rod. “The challenge was to make the light soft enough to have this feeling of magic hour, and we have the lightning strikes coming in and a lot of rain. The key for the bluescreen was to keep it as dark as you can; however, you still need to be able to key it to see the blue, but you don’t want to see blue everywhere. Dennis is cool about that. And because the camera is moving so much, we could not have negative fill there because we shot 280 degrees. The only thing we couldn’t solve was where a crane was coming in.” An innovative lighting trick involved a specific atmospheric effect and prop. “Our batteries have to go from green to red, but I didn’t want to only put a light into the battery. We also had steam put into the battery, so the light was not clean. When it’s becoming red, we put in more steam. For the batteries, we did a lot of tests because we wanted to be sure that the color was right,” Laustsen says.
Editorial turnovers happened quickly. “Herne Hill Media/Mr. X was on the film from the very beginning and had built all of these environments in Unreal Engine,” explains Evan Schiff, Editor. “If we were adding the Swiss Alps or something like that in the background of a shot, within a few days after shooting and editing it together in the scene, Dennis gave me back temp comps. Normally, we do a lot of temp visual effects in Avid Media Composer, and when we start to get real versions back from the vendor, we replace those.” Having practical elements in the frame made cutting scenes easier. “One example would be in the wolf attack. There are two shots stitched together, where the Creature throws one of the wolves up against the fireplace, another comes into frame, and the camera pans.”
TOP: It was important for del Toro to amplify the drama of the creation scene by having everything go wrong for Victor Frankenstein.
BOTTOM: 90% of the movie was shot using a Leitz Thalia 24mm lens.

“Dennis and I worked together on, ‘What is the timing of that stitch?’ Initially, the camera was panning ahead of the wolf that came in, which felt like the camera was leading the action in a way that didn’t make any sense. Dennis and I got together and said, ‘We need to delay this a little bit. Let the second wolf come in, run through the frame and be the reason the camera pans to the left back to the Creature, rather than leading the action.’ Sometimes, it’s the little details that make a big difference in how you perceive the scene. Getting together with Dennis talking through how to make the most impactful shots was useful,” Schiff states.
Herne Hill Media/Mr. X was responsible for 950 of the 1,200 visual effects shots, while ILM created 150, and Ticket VFX produced 100. “The old water tower only ever existed as a partial build in the Markham Fairgrounds just north of Toronto,” Berardi remarks. “We built the first 15 feet of it, dressed snow around it, and digitally extended the top of the tower. We had the Magic Camera Company in the U.K. working with us for the explosion and the collapse moment, as well as a wonderful miniature effects artist, José Granell. We previsualized the whole sequence, modeled the tower, got signoff from Guillermo, put it all in Unreal Engine, and got Guillermo to approve it all. Then, José and the Magic Camera Company built the 1/20th scale version of the tower. Guillermo designed exactly how the tower was supposed to collapse. José achieved that silhouette by carefully putting the detonating cord in and weakening parts of the tower miniature so it would fall exactly in the right way.”

TOP: A massive amount of work and detail went into constructing Frankenstein’s lab, with the circular window being the centerpiece.
BOTTOM: The old hydro water treatment tower never existed and came to life through the combined efforts of Tamara Deverell and Dennis Berardi.


BOTTOM: Adding to the believability was an actual constructed ship that was digitally extended.
“We shot it outside, which was great. It became a good reference for us. It mostly ended up being digital, although there are a few shots where the miniature plays with digital fire enhancement and debris.”
An outstanding scene is when Victor does a presentation to the Royal Society of Edinburgh with the torso of a corpse. “Mike Hill, the incredibly talented makeup effects/special effects creature designer on the movie, deserves a lot of credit,” Berardi notes. “Mike and Guillermo designed this mostly dissected half corpse, where you literally see the heart and lungs. That was basically an articulated puppet. We had a rigged table in front of the puppeteers, who were all in bluescreen suits. As the corpse does this ‘Aaaah’ and animates to life, that was physically animated by a puppeteer. We removed them from all the shots and brought back the rest of the lecture hall they were occluding. We did a digital table to replace the partial table that was built. We enhanced the half corpse where Guillermo wanted to sweeten or touch it up. We worked on the crown where the brain was exposed to make it look wetter. We did work on the cross-sections and some of the silicone folds to clean those up. When Victor throws the ball, the puppeteers really catch it. There was a rig that came off the hands and attached back to the elbow and shoulder. It was real hands, digital forearm to elbow, then back into real from the bicep up to the shoulder. We replaced the section of the arm that didn’t line up with the real puppeteer’s hand to catch it.”
TOP: Skies were carefully crafted to get the desired tone for a shot.

Simpler effects appear as well. “The captain’s quarters on the ship was shot on a stage,” Berardi remarks. “If you focus and look out the window, it should feel like an Arctic icescape. I didn’t want to spill blue into that beautiful set, so we did white screen on that. But then you look at something like the butterfly, which is a beautiful moment between Victor and Elizabeth Lavenza. We did a deep dive into butterfly anatomy, how they move and catch little wind currents. I’m hoping that stands the test of time.” Rodents are drawn towards the Creature. “He’s at one with nature. The mice pitter-patter all over him, and he even holds them at one point. It’s telling the story that he’s a gentle soul. We shot the scene as if the mice were there. Then we did an editorial pass with Evan Schiff and turned it over to ILM for blocking and animation passes. There were many invisible visual effects, like the skies, and some of the work with the Creature’s eyes. For all the exteriors, we added horses, carriages and people in the background.” The director found himself in a dangerous situation. “One of the mandates we had was we’re going to shoot every shot with the real wolves and use it as reference for the animation,” del Toro states. “We combined both hand puppets for the biting heads and mechanical puppets for the launching of the wolves. We did a complicated rig model for the wolves and a good grooming system for their pelts. Wolves in the winter have several layers of pelt, one that is denser and closer to the body, and one that is fluffier and dirtier outside, so you have to do a good
grooming system. Shooting them with the real wolves in the real set was harrowing because no matter ‘how trained,’ wolves are feral animals. I’m a fat guy, so I didn’t want to be perceived as a snack!”
Themes of Frankenstein are relatable. “It’s a movie that is extremely biographical for me,” del Toro reveals. “Elizabeth is me. Victor is me. Harlander is me – or the studio, depending on the scene!”

TOP: A signature prop is the Y table, which resembles a crucifix.
BOTTOM: Mike Hill was responsible for creating the prosthetic makeup effects for the Creature. (Photo: Ken Woroner)

DECODING THE EPIC SUCCESS OF TIM SWEENEY
By TREVOR HOGG
A narrative trademark of the Amblin Entertainment movies from the 1980s was a group of adolescents riding around town on bikes toward their next adventure. Such was the childhood of Tim Sweeney in Potomac, Maryland, who would go on to establish Epic Games, leading him to receive an Honorary VES Award and gain a reputation for environmental philanthropy by buying large swathes of North Carolina forest for preservation.
“My generation had massive individual freedom,” recalls Tim Sweeney, Founder and CEO of Epic Games. “The mothers of the neighborhood would give us breakfast and say, ‘Be back by dark.’ We would go and roam. You might find us 10 miles away on our bikes exploring some woods. There were a lot of adventures and things going on all of the time. I was in a nerd family. My father was a cartographer and made maps for the government but also did printing. My older brother, Steve, was 16 years older, a serious electronics enthusiast and a professional in the hardware business; he always had massive amounts of old and scrapped computing hardware lying around. We would go to junk auctions and buy computers from the 1950s, ’60s and ’70s, and set up a basement full of awesome computer pieces from ages ago. We tried to build goofy things out of them. We constructed go-karts and drove them around the neighborhood. My friends and I were always creating.”
Artistry goes beyond the paintbrush and pencil. “Generally, when programmers are writing new systems and games from scratch, it is a creative thing that doesn’t require a skill in drawing,” Sweeney notes. “I remember my first year of programming with nearly perfect recollection. Everything that I got hung up on and learned a solution to is burnt into my brain and constantly influences my thinking: what are the problems with software that make it unintuitive, and how do you improve on that? When you learn to program and learn about ‘goto’ and ‘gosub,’ you quickly realize that the computer is capable of doing anything, if you could only figure out the instructions to tell it to do that thing. I looked at a lot of early games on the Apple II and what they were doing. They were easy to see: I can do that. I just need to put the pixels in the right place and optimize the code enough to make it run at [full] performance.” Video games were not the only focus. “I was also programming communication software. Figuring out how to get computers talking with each other and how to scale them to interesting systems to implement games or other types of interactive systems. That experience of using software and figuring out how it’s doing what it’s doing was the key to all of Epic’s early growth and breakthrough.”
OPPOSITE TOP TO BOTTOM: It was not until Epic Games created its first 3D game and customers began calling about licensing their game engine that the company actually realized they had created one with Unreal Engine.
Niagara is the visual effects system in Unreal Engine that produces and previews particle effects in real-time.
Behind Sweeney is a collection of books ranging from the illustrated encyclopedia of the supernatural, Man, Myth & Magic, to The Most Beautiful Villages of Ireland.
While the arrival of Wolfenstein 3D was amazing, Doom was revolutionary. “I gave up on programming for nine months because it was absolutely not clear looking at the screen what Doom was actually doing to make that experience possible,” Sweeney recalls. “But then Michael Abrash wrote some articles about 3D graphics and texture mapping. I had all of these notes. One of the early engine programmers showed me some of the tricks for how to do 3D, and I went, ‘Maybe I can do that.’ That’s how Unreal Engine started. Before Unreal Engine, Epic was a company with a whole lot of small developers working independently, contracting with us to build cool games around the world. There was Cliff Bleszinski working on Jazz Jackrabbit with Arjan Brussee, and James Schmalz working on Epic Pinball and other games.”
Images courtesy of Tim Sweeney and Epic Games.
TOP: Tim Sweeney, Founder & CEO, Epic Games

“Nobody in the world has succeeded yet in building a single UI that scales from the most complex usage cases to the simplest one. This is a point where the industry is far from mature, but I can envision that this problem is going to be solved over the next five years.”
—Tim Sweeney, Founder & CEO, Epic Games
“We realized after Doom had come out that we needed to move into 3D and figure out how to build a game in that space. We took the best people from these small teams and put them together into a big team and started building the content, levels and tools for Unreal Engine. I got tasked with writing the editor for this 3D project because I had some spare time.”
“Every industry that needs a 3D engine has a choice of several,” Sweeney notes. “There’s Unreal Engine, Unity 3D, Coda and some proprietary solutions. Our goal is to serve every industry that has a need for high-quality computer graphics. If you need computer graphics and don’t care about quality, there are cheaper ways to do it than Unreal Engine. We’re trying to serve everybody who has high-end needs and be the best solution for them all. This doesn’t mean building vast business empires around vertical industries. For example, almost every studio doing virtual production at scale is doing it with Unreal Engine with great success. We have small teams supporting virtual production because what virtual production mostly needs is an awesome engine. Then they need some features to interface with their funky camera hardware and other on-set devices, but mostly they need an awesome engine. You have to look at the question of ‘Is this engine going to focus on this industry?’ as if you’re asking a question about Linux. Is Linux going to serve the medical industry? Of course, because it’s the best operating system.”





Machine learning and artificial intelligence will make computer programming even better. “Of course, there’s always going to be curmudgeons who don’t like the new way and prefer the old way,” Sweeney remarks. “In the early days of computing, the way to tell a computer what to do was machine language code and hexadecimal; that was painful, but people could do it. Then the way became to write programs in a high-level language like C++, Java or C#. It was a whole lot nicer, but still a lot of manual work and you figuring out everything for yourself. The way to make a computer do the thing you want now is for an increasing number of people in an increasing number of fields to tell it what you want in English and have it produce something like that and refine it over time. It’s a better way to work. It puts more power at people’s fingertips for the problems that AI can solve. That’s not for every problem in the world right now, far from it. But it will increase over time, so eventually it’s probably everything, or at least close to it.”
“There are several arcs to the evolution of software,” Sweeney observes. “One is the evolution of engines from the big pixels on the screen to being able to achieve absolute photorealism or stylized perfection that you might see in a Pixar movie. We’re close to the end of that, with the exception of human animation, motion and precise face details. Nanite and Lumen are near to real-world, and you can imagine that we’re a few years and steps away from having absolute photorealism for everything. The other arc is content developer productivity. How can you build something that meets your vision? That’s much more important because if you can build something for 100 times less cost, then the process of developing games and movies becomes vastly simpler. AI, more than anything else, is going to create an absolute revolution in that area, and the human effort to be able to create a thing is going to go down probably by two orders of magnitude. This is the prompt, and the image is finished, or prompt to game.”
TOP TO BOTTOM: It was around 1985 when Sweeney connected to a multi-user dungeon that was a text adventure game. Fast forward to current times where Fortnite has elevated the online gameplay into a metaverse type of experience.
From left to right: Jay Wilbur, Vice President, Business Affairs at Epic Games; Taka Kawasaki, Managing Director, Epic Games Japan; Tim Sweeney, Founder & CEO of Epic Games, and Junya Shimoda, Support Manager, Epic Games Japan. Sweeney has some fun with Cliff Bleszinski, former Design Director at Epic Games.

“It’s going to happen probably within this decade and revolutionize that. Another facet is the power of the system to create large-scale social multiplayer simulations, and that’s an area where it doesn’t look like AI is going to be able to help us. People play Fortnite or Roblox, and you have many millions of concurrent users, but they’re all playing in these little 50- or 100-player sessions because that’s all we can fit onto a server. An entirely new arc of technology development must happen there. It’s a much larger-scale simulation that needs to work while respecting all of the other properties, like photorealism, high performance and ease of content creation and programming.”
Remaining elusive is the perfect user interface. “Nobody in the world has succeeded yet in building a single UI that scales from the most complex usage cases to the simplest one,” Sweeney states. “This is a point where the industry is far from mature, but I can envision that this problem is going to be solved over the next five years. I can imagine a word processor with all of the power of Microsoft Word but with the ease of Google Docs. What you don’t want are 1,000 different sliders on the screen. What you really want is a single piece of software in which customized, more advanced pieces can be pulled up on demand and are not getting in your way and staring you in the face when you don’t need them. What you really want is the ability of AI to prompt anything to do the complex parts for you and do a good job of it, as a way of getting it at least to a natural starting point that you can go and tweak. Just imagine how much easier it would be to learn Unreal Engine if it had five buttons on the screen and a prompt. You could start prompting it. If you don’t like what you get, then you can take what’s given to you, click on and zoom in on objects, and use prompting and actual hand editing to refine things from there. You would find the average person being able to get up to speed 10 times faster and being able to do 10 times more.”

TOP: The goal of Epic Games is to empower all developers to compete with awesome games, as demonstrated with the tech demo for The Witcher IV.
BOTTOM: An early advertisement for what would become known as Unreal Engine.




Tech monopolies are stifling technological innovation. “We’re in an especially dysfunctional point in human history,” Sweeney believes. “Governments have allowed unchecked tech monopolies to shut down the normal competitive process of entire industries by enabling whoever controls an operating system, even if they won that operating system or hardware market purely on merit, to then impose a non-merit-based monopoly on app distribution and payments. To use that monopoly to extract exorbitant rents from all commercial activity on the system, like 30% app store fees, and also impose self-preferencing regimes that make it hard for other companies to compete with that company’s services. Even if you’re Spotify competing with Apple Music, it’s really hard. Apple has the worst AI in the industry right now, and you would think that it would be no problem. OpenAI should be able to create their own phone AI to replace Siri on the iPhone, and now you have a vastly better AI. Or Grok should be able to do it, or Perplexity or Anthropic. But ‘there can be no other AI than our AI.’ The dystopian world of the moment will end, partly because it’s intolerable for legal and regulatory reasons, and partly because it doesn’t work. The world would be a lot more vibrant and profitable for Apple and Google if they were to let all app and game developers compete on their merits and get out of the way, and instead compete on providing services that developers could choose to use of their own free will by having the best default payment method, browser, search engine and AI assistant.”
Epic Games is hitting its stride. “I don’t have decades more of driving the industry forward,” Sweeney reflects. “I’m 55, so at this age you have to say, ‘God willing!’ Because it’s not assured. We’re starting to get into the primetime of the entire real-time 3D creative industries, everything from film and television production and visual effects to gaming. The most exciting and interesting times are ahead. During this year we’re seeing AI disrupt everything, and people are worried about that. But remember the last time we had a disruption of this scale was when we went from doing everything with on-set models and pyrotechnic explosions to actually moving to computer graphics. It’s always stressful to navigate these changing times.”
Sweeney has a conversation with Dean Takahashi, Editorial Director at GamesBeat.
The 2015 SIGGRAPH Award for Best Real-Time Graphics and Interactivity was given to the animated short “A Boy and His Kite,” created by using Unreal Engine.
The predicted convergence of the film and video game industries has finally occurred. “You’re seeing film production using the same exact tools and assets as video game production. Content is migrating from film sets to video games and vice versa. The process of creating the tools is getting better and more interconnected. We’re heading toward a world over the remainder of the decade where any creative work or intellectual property that is suitable exists both in gamers’ repertoire of what they play and the linear world of watching a pre-made experience. It’s an opportunity for everybody that is much greater as a result of these two industries crossing over like this, and it’s awesome to be at the center of it.”
TOP TO BOTTOM: Sweeney and Mark Rein, Vice President, Business Development at Epic Games, during a parody video shoot.
Scott Stoddard, Chair Entertainment, Epic Games, and Sweeney share a moment.


THE BURNING AMBITION OF AVATAR: FIRE AND ASH
By TREVOR HOGG
Images courtesy of 20th Century Studios.
TOP: The evolution in Fire and Ash is dramatic in terms of where the characters go and what confronts them.
OPPOSITE TOP: James Cameron has a conversation with Stephen Lang, who reprises his role as the adversarial Miles Quaritch.
Fulfilling his promise to explore the various biomes and cultures of Pandora with each of the sequels, James Cameron takes to the air and walks the volcanic wastelands in Avatar: Fire and Ash, where Jake Sully and Neytiri are reeling from the death of their son Neteyam, and the lunar conflict expands beyond the human-operated Resources Development Administration to include intertribal warfare.
“The evolution in Fire and Ash is dramatic in regard to where the characters go and what confronts them,” notes James Cameron, Producer, Writer and Director. “We don’t have to go out and slay a giant every time we make an Avatar movie. We’re trying to future-proof ourselves. In the meantime, generative AI has come along with all of its threat and promise, so that means everything that we developed and thought we would amortize across the four films we are probably going to reconsider if we do move forward with more sequels. There will probably need to be a mini-evolution or mini-revolution at the very least between where we’re ending Fire and Ash and where we start on Avatar 4 and 5, maybe a year from now.”
“There is an individual algorithm that gets created for each character, and there is some machine learning that goes into the process as well,” Cameron remarks. “We try to get the animators to be as hands-off as possible when it comes to facial. The animators have to get involved when it comes to muscle tension in the neck, which is a surprising amount of performance. I always say that the face ends at the clavicles because there is so much intensity that can be shown with the tendons in the neck. We get coarse data on the hands, so the hand work is done by keyframe animators to get exactly the finger pressure, bend back, contact, tendon tension and hand rotation, and then you’ve got procedural muscle simulations. Often you go in and do shot-specific sculpting on top of the procedural stuff to get the right muscle tension, tendons firing, the IT band and quads; we look at all of this. There is still a lot of human-in-the-loop work that brings the body fully to life. The data is fine enough coming out of capture that we can see individual breaths, but sometimes that informs neck tendons and body tension. It’s this interesting dance between the capture and the final character that emerges. The characters are running around with not a lot of clothing on, but, of course, we have to have cloth simulations, and that has become a more developed artform than what it was.”

“Generative AI has come along with all of its threat and promise, so that means everything that we developed and thought we would amortize across the four films we are probably going to reconsider if we do move forward with more sequels. There will probably need to be a mini-evolution or mini-revolution at the very least between where we’re ending Fire and Ash and where we start on Avatar 4 and 5, maybe a year from now.”
—James Cameron, Writer, Director & Producer
Starting off as concept art directors on Avatar and taking over the role of production designers on the sequels are Ben Procter and Dylan Cole. “We each have a specialty,” Procter explains. “Dylan handles Pandora, everything to do with the look of the moon, flora, fauna and tribal cultural designs. The RDA [Resources Development Administration] and all of the human technology is my realm. There’s a production-related pragmatic difference in that. Because I’m doing human stuff, I have a lot more of the
live-action sets, which is not to say Dylan doesn’t have sets. He’s got tons of them. And it doesn’t mean that I don’t have a lot of visual effects-related tasks to oversee as well.” The conceptualization and execution process does not change in the digital realm. “Whether it’s pixels or plaster, it is still production design,” Cole states. “I care about what’s on the screen, not how it got there, because it’s all design and world-building. We build giant proxy sets because, as we learned way back on the first Avatar, jungles aren’t flat, so you can’t run around a flat soundstage. We’ll have undulating terrain and logs to jump over and duck under; that’s our simplest stuff. For Fire and Ash, we meet a brand-new culture called the Wind Traders, and they essentially have these giant gondola-floating-pirate-ship-type things. We built that as a one-to-one proxy set with multi-deck rigging, and it filled a stage.”
Building a world bigger than one movie has always been the goal. Procter remarks, “Knowing I needed to expand the palette of the RDA, I leaned into a concept I call high-performance cruelty. There’s got to be something cool and aspirational about the RDA designs because they’re not just meant to be intimidating or oppressive. I definitely introduced more color, which you find in a lot of heavy-duty maritime industrial vessels, from oil rigs to giant ships.”



Some of the character work for Quaritch was provided by ILM.
Landscape and culture are entwined. “For the Ash People, we wanted a striking visual difference, both in their culture and landscape, and to suggest why they’re the way they are,” Cole states. “Their home was taken out by a volcano, so they have rejected Eywa and all basic Na’vi principles. The Ash People embrace death and conquering, but at the same time they’re still Na’vi, highly skilled, intelligent craftspeople. This makes them much more terrifying, especially Varang. They are a harsh people, so it’s a harsh contrast, black and white. The Wind Traders have a different way of life. They’re nomadic traders and benefit from visiting all of the other cultures. We see music and dancing, not that the other cultures didn’t do that, but it’s always a good time when the Wind Traders come to town.”
There is a major misconception that the Avatar movies are animated films with the characters being voiced by the cast. “The truth is that every character is played by an actor performing their scenes on the motion capture stage,” explains Stephen E. Rivkin, Editor, who received an Oscar nomination for Avatar. “There is even a separate face camera for each actor to record every nuance of their facial performance in order to deliver a true and accurate representation of the actor’s performance in the final CG shot. Even the large crowd scenes have real actors playing the background characters. But we still have to build them in a mosaic because they’re not all captured at the same time. We may capture the principal characters in one pass and then do crowd passes where we fill in secondary characters and sections of crowds captured to a playback of the principal actors. If you imagine it like overdubbing in recording audio, we would have to overdub and add layer upon layer of more characters in a large scene with many CG characters. This is not an animated film. Everything you see is based on an actor’s performance.”
TOP TO BOTTOM: The ash wasteland is an extreme contrast to the usual lush jungle environments of Pandora.
There is a misconception that the Avatar movies are animated films when, in fact, the performance of characters such as Tonowari and Ronal are driven by what Cliff Curtis and Kate Winslet do on the motion capture stage.

Sometimes, facial performances were replaced. “There may be an instance where Jim wants to change a particular character’s line for story purposes, or have something said in one scene as opposed to another,” Rivkin remarks. “We can bring the actor in for what we call FPR [Facial Performance Replacement]. They put the head rig on, and we have them say a different line that would be incorporated into their performance from before.” Avatar: Fire and Ash was more complicated than Avatar: The Way of Water because of the increased number of shots that required the integration of the live-action Spider character with CG cast members. “We did our usual performance capture process capturing Spider as a CG character first, then put together a performance edit using the best takes from everyone, and prepared the files for Jim to shoot the virtual camera scenes in the usual way. The scene was then edited in this template form. This would provide a blueprint for each shot in the edited sequence where Spider needed to be replaced with his live-action element. When shooting the live action, Jim could see the other CG characters [minus the CG Spider] played back in the camera to direct the shoot. We used computer-programmed, repeatable camera moves to match the CG camera moves, but this time with live-action Spider in the shot; that way we could do multiple takes. Ultimately, when finished, there is a seamless blend of live-action and CG characters,” Rivkin says.
Going from Avatar to Avatar: The Way of Water was a major technological leap. “We rebuilt the entire pipeline,” remarks Ryan Champney, Virtual Production Supervisor, who previously was the Simulcam Technical Director for Giant Studios on Avatar. “In the course of making The Way of Water and Fire and Ash, it was about scale. Day one, it was like, ‘Brand new technology. It’s working great. Yeah!’ But after that, it’s, ‘Now add 10 more characters into the scene live. I want to have fire onstage.’”


TOP TO BOTTOM: Concept art of the Ash village, which has been established inside the remains of a hometree.
The factory ship takes killing and processing Tulkun to a mass industrial level.
The creatures need to appear perfectly natural as animals while designed for Na’vi to jump and ride.



“Anyone who understands mocap knows that fire is a huge emission of IR radiation, and everything we’re capturing is infrared. We tackled that problem. Then it was the underwater problem. Every bubble looked like a marker. It was this constant evolution of, ‘How do we tackle these problems?’ There’s no rule book or reference to look for how to do those things. Even on the performance capture stage, Jim wanted to performance capture someone taking a shower, but he wanted the real believability of them under the waterfall. We created a waterfall, but that water reflects all the infrared light and distorts the markers, so we had to create filtering and all sorts of other tools for those pieces.”
Fire is a significant atmospheric that also happens to be a lighting source. “You have to be judicious about how you use it,” observes Richard Baneham, Executive Producer and Visual Effects Supervisor, who won Oscars for Avatar and Avatar: The Way of Water. “For us, it has always been a matter of sculpting with light. If you watch Jim’s movies, traditionally, he lights in layers. When you introduce a chroma, like fire, and that bioluminescence, which is sitting in that cyan range, you get to sculpt characters and images, not unlike a Maxfield Parrish painting.” Careful attention was paid to the life and death cycle of vegetation to ensure the believability of environments. “We have a reservoir of Pandoran fauna and flora. But it goes a little further. We have to understand growth patterning and the relationship of plant to plant, which animals feed on which plants, and what kind of destruction would be there. You’re replicating the idea of how real-world environments come to be.”
Physics had to be properly represented. “You’ll often see in big creature movies where a dragon will go up and down based on a flight,” Baneham remarks. “The truth is that’s not sustainable. Flight works because you ascend, plateau, ascend, plateau or you
TOP TO BOTTOM: The fire dance was inspired by the Baining people who live on New Britain, which is part of Papua New Guinea.
Avatar: Fire and Ash enabled Wētā FX to spend more time on the creative side rather than having to solve technical issues.
The Way of Water and Fire and Ash were shot as a single production.

would never be able to fly up. Then you would glide down and preserve energy. Being able to employ that with all our Ikran [Na’vi for Mountain Banshees] is a way to establish a reality. But we had some interesting challenges on this one, which was finding the appropriate vocabulary for the Medusoids and Windrays, which are essentially a ship and a tugboat. It’s a great symbiotic relationship that the Wind Traders managed to set up. When you start to look at how complex nature is and how hard it is to understand and replicate that; that’s the stuff where we get to play and be like a kid again.”
Wētā FX, which won Oscars for both Avatar and Avatar: The Way of Water, returns once again with ILM contributing some character work on Jake, Quaritch and Neytiri. “What Fire and Ash has allowed us to do is to spend much more time on the creative,” remarks Joe Letteri, VES, Senior Visual Effects Supervisor at Wētā FX. “And a lot less of, ‘Why is this thing not working? How do we fix it? You know, what’s the problem here? Is it fundamental?’ This has been a lot more about, ‘Are we getting through creatively what we need?’ That’s been great to see. Everyone’s feeling that way, and if I could speak for Jim, he’s feeling that way, too. We’ve spent a long time developing this whole process; the idea of performance capture, and evolving that from the stage to underwater, and his method of building a template to work from, that the cut is based on and that our work is based on. This idea of unifying the whole flow, even including the live-action shoot. It’s all interwoven, and it’s all come together to the point right now where we’re able to look at what it needs to be creatively, and spend our time thinking about that.”
“Because we’ve had three movies now, we understand how the Na’vi act, the performances and subtleties that we were learning along the way, like the ears and tails, all the extra things that bring


TOP TO BOTTOM: Given that proper water interaction was figured out for The Way of Water, the team at Wētā FX was able to refine the final output a lot more for Fire and Ash.
SeaDragons escort a factory ship as they hunt for Tulkun.
All of the creatures on Pandora have some basis in reality, with the Tulkun inspired by the whale.



these characters to life,” states Eric Saindon, Visual Effects Supervisor at Wētā FX. “With all of this extra time we’ve had with these characters, we’ve refined that, and the characters come to life so much more.” Tools have been refined from The Way of Water. “One of the things we had on this was not for animation, but for presentation and carrying facial puppets in the scene,” remarks Dan Barrett, Senior Animation Supervisor at Wētā FX. “We had real-time representation of the work that the artists had done. The puppets themselves were working. They were the usual animation puppet that wouldn’t run in real-time, but anything that had been done, we could bank that in our scenes and had real-time for that. It made life a lot easier for animators seeing everyone together in a scene performing.”
Varang is distinguished by her physical, not facial, performance. “It might have been Richard Taylor [Founder, Wētā Workshop] who said, ‘The Na’vi that you are used to carry themselves with this upright grace,’” Barrett states. “You look at the way Oona Chaplin embodied Varang and the way that she moves. She’s pelvis forward and sinewy. It’s an incredible thing for an animation team to work with, but also the physicality of her body is quite something.” There is a marriage between animation and live-action. “We capture the actors doing these things, but a lot of times because of the proportional differences of the bodies, there are subtle changes,”
Letteri notes. “When Oona moves as Varang and you’re putting that on a body that’s got longer legs and torso, it still needs to convey that same sense of movement, such as the head movements relative to the body. All those things have to be looked at and dialed a little bit so you’re not looking at the mathematical translation from one to the other. In the end, you’re looking at the emotional translation.”
“When you think about it, Fire and Ash is 3,500 shots, so it’s all about having scalable processes,” Cameron observes. “It’s about maintaining the highest possible quality at mass production rate; that was a challenge on Avatar, The Way of Water and now on Fire and Ash. How do we keep the quality absolutely top of the line on every single one of 3,500 shots? You need robust workflows and technology to do that.” The production is always on the cutting edge of technology. “Sometimes, the initial tool rollout will almost be like a laboratory experiment,” Champney reveals. “In cases of things like depth comp and our current generation of Simulcam, it literally rolled out four days after we started live action. We weren’t sure it was going to work, and it was literally those last few days where everything came together. Over the course of the years, as we’re filming, those processes become more like a regular tool pipeline. We’re not thinking about holding it together, and don’t need all the engineers down there waiting for it to explode!” There have been a multitude of challenges. “The Way of Water had the water which was a unique challenge,” Baneham notes. “In Fire and Ash, it was representing the full swath and breadth of the world. Our job is to represent the multidimensional world and invite people to come on a journey.”
TOP TO BOTTOM: Conceptualizing the gondola used by the Wind Traders with rock spires placed in the background.
For the Ash People, the goal was to have a striking visual difference, both in their culture and landscape, and to suggest why they’re the way they are.
A high-angle view of a Tulkun celebration taking place in a lagoon.


PAIGE WARNER ENJOYS BUILDING TOOLS THAT HELP FELLOW ARTISTS EXCEL
By TREVOR HOGG
Paige Warner is not afraid to break things in order to press ahead. Her forward thinking paid off in 2017 when Warner, along with colleagues Kiran Bhat, Michael Koperwas and Brian Cantwell, received a Scientific and Technical Award from the Academy of Motion Picture Arts and Sciences for designing and developing a facial performance-capture solving system for ILM. The digital artist’s path to that achievement was paved by an interest in both technology and artistry. “I bridge the gap, and that’s where I excel the most,” notes Warner, Digital Artist at ILM. “I have a sense of the artistic side but still have a heavy technical side as I like knowing how everything works, and I enjoy building the tools that we use. For me, one of the great things about the combination is that I’m using the tools that I’m also helping to develop, so the feedback is immediate as far as what works and doesn’t work.”
Initially, her ambitions were analog in nature. “I love working with my hands and making stuff, so as a youngster I hoped to become a modelmaker,” Warner recalls. “But as I was going through school, computers became a big thing. My dad got us a Macintosh Plus around 1989, and I started creating art on it, and that took off. Even more so in high school, doing graphics and whatnot. I helped to lay out the school newspaper, as far as graphic design; how it should look and formatting text. All of that stuff led into a job at a newspaper after graduation.”
The conclusion of the original Star Wars trilogy made a lasting impression. “The key one for me was Return of the Jedi and seeing the speeder bike chase in theaters when that came out,” Warner states. “I was blown away by that and also knowing, through the grapevine or adults talking, that it was captured in Northern California in the redwoods. My family would throw a birthday party for me, and we would go to Samuel P. Taylor State Park, which is a beautiful park in West Marin; it’s all redwoods. I remembered seeing the same thing in Jedi, so that gave me a spark. Then, going forward, seeing more visual effects, and starting to see movie magic and the different shows that revealed the technology behind the scenes. The whole process fascinated me.”
Computer expertise came from personal experimentation. “I’m mostly self-taught,” Warner remarks. “I was able to get some software when I was getting close to graduating high school, and built a PC. I learned how to do that. I stayed up late every night, fascinated with the process of modeling and having a 3D environment within the computer that you could work with. That was exciting, and I wanted to continue pursuing that. I went to community college for a few years, and took classes related to computer graphics and things like that. But I learned more from doing it versus what was in school. And I got a lucky break getting my foot in the door at ILM.”
OPPOSITE TOP: A joy for Warner was going from being a fan of Return of the Jedi to actually working on the Star Wars franchise. (Image courtesy of Lucasfilm Ltd.)
OPPOSITE BOTTOM: Warner has had a long-standing relationship with Hulk dating back to the self-titled film directed by Ang Lee through The Avengers movies. (Image courtesy of Marvel Studios)
Ray Harryhausen inspired a project that served as a demo reel. “I did a vignette of a skeleton walking through the forest and having a sword fight with a friend of mine,” Warner explains. “I worked on that for well over a year. It was three shots. One big shot was the actual action scene, as it were, but that was enough to submit to try to get an internship at ILM. It ultimately landed on the desk of Keith Johnson, who was the head of matchmove at the time, which became the layout department. He saw that it wasn’t just animation but full visual effects. Keith gave me a call back and I had an
TOP: Paige Warner, Digital Artist, ILM

“A pleasant surprise is when someone takes something you’ve built and makes something new with it. The other part I was inspired by early on was starting to write tools and seeing how much enthusiasm I got from artists that I was making something for them. That’s been a key for my career and always having it be the end goal.”
—Paige Warner, Digital Artist, ILM
interview with him, but they already had lined up an intern for the season. He suggested an entry level position as a technical assistant the way people typically applied. However, at the time, ILM filtered out anyone who didn’t have a degree. I didn’t get a job there but kept in touch with Keith. After the internship ended, the intern wanted to go back to school and finish their animation degree. ILM had turned that internship into a job, so I was hired to be a department coordinator, helping to organize all the on-set data coming in as Polaroids so it was available for the artists.”
Having an interest in learning how things work goes far back to when Warner was a child helping her stepdad repair a vacuum. “I remember walking out and looking at it,” Warner says. “My stepdad was trying to figure out how he was going to make it work. I was like, ‘Why don’t you stick a little piece of wood in there?’ And he said, ‘That’s not a bad idea.’ And it worked! Ever since then, it’s always been something I’ve enjoyed, and that’s an important




TOP TO BOTTOM: Working on several projects at the same time, such as Ready Player One, has enabled ILM to conduct a variety of experiments as a global studio and incorporate the best ideas into the next project. (Image courtesy of Warner Bros. Pictures)
What’s important isn’t the computer programming code or how it works, but the experience of the artist using it and how well it works when creating the desired shots for shows like Star Wars: Episode III – Revenge of the Sith. (Image courtesy of Lucasfilm Ltd.)
Warcraft was the first movie where Warner focused on the workflow for face capture. (Image courtesy of Legendary Pictures)
aspect of our process, our work. I share with John Knoll [Executive Creative Director & Senior VFX Supervisor, ILM] the idea of being able to take things apart and being somewhat fearless or confident enough that you can get it back together and keep things working. It is valuable in terms of understanding how these things work, tuning them and making all of our processes as efficient or productive as possible.”
Failure is not something to fear. “I’m comfortable failing, and I fail a lot,” Warner remarks. “A lot of the failures, as I would put them, are the trial and error of trying to figure out how something works until you finally get to the point where, ‘There it is.’ That’s the combination I needed to achieve to understand what was going on, and now I can move forward. It’s the end result of all of that hard work that people see, and you are impressed with being able to figure that out. But I tried a lot before I got there. That journey of getting to that point of making those small mistakes, or things that don’t work before you get to a point that does work, is valuable in the long term, because it helps form your opinions. When new ideas are brought up, or different approaches, there is an extra knowledge. You can be, ‘I actually tried that, and there’s this problem that you might not have thought of. We should approach it this way because of that prior failed experience.’”
On a day-to-day basis, most of the time is spent on face capture. “I got roped into face capture because of prior work, as far as the folks I’ve been working with, helping build out a new pipeline for that process,” Warner states. “There were different points in ILM’s history where some face cap stuff would come and go. Warcraft was the first show where I focused on the workflow for face capture. What I find interesting with face capture is all of the different disciplines that are required. We’re using machine vision cameras to capture the face more often than not. But then you have to

“The crazy part now is how big ILM is and how many projects we have going on at the same time. That has its own challenges, but it allows different experiments to take place, as far as how we approach that work and learn from that as a global studio, and to incorporate the best ideas we learn into the next project.”
—Paige Warner, Digital Artist, ILM
understand the involvement of creature development as far as the rigging of the face. There’s sculpting and the corrections that might need to be done to mainly fix the facial solve. There’s the process of taking a solve on an actor and re-targeting it to the creature or the character they’re portraying, and how we transfer that data; that is still an evolving process. It’s a tricky problem, and there’s no one solution that works in all cases.”
Tools have been personally invented. “I got into a side project where I developed my own iPhone app that resembled the way we capture on set where you’re using long exposure,” Warner explains. “It would accumulate the video stream and average out the images so you get a nice motion blur in a time lapse-type capture rather than being a quick clip with an intervalometer.” A key goal is to eliminate the need for artists to go between different applications to complete their work. “If you had to take one thing out of one package, export it and then import it to another package, it’s hard to manage and keep track of all that. [With Zeno, ILM’s proprietary 3D software package], we’re able to use one piece of software as the hub of that process and tap into different processes and different tools. As far as the underlying solver, we’ve done a lot of great work and are used to benefiting from Disney Research and a lot of the technology they’ve developed. It’s the same process of making sure we’re using their technology, but in the most efficient way

BOTTOM: Rango was a special project in that ILM brought its expertise and a new visual way of storytelling that had never before been realized in an animated feature. (Image courtesy of Lucasfilm Ltd.)
TOP: Large-scale productions like Space Jam: A New Legacy allow time for software development. (Image courtesy of Warner Bros. Pictures)



Much like the Rebel Alliance taking on the Imperial forces, Warner believes that failure is not something to be feared, as it is through trial and error that solutions are discovered. (Image courtesy of Lucasfilm Ltd.)
Warner, who spends most of her time on face capture, helped bring Peter Cushing back to life digitally for Rogue One: A Star Wars Story. (Image courtesy of Lucasfilm Ltd.)
possible that is geared towards production and large productions. Plus, being able to manage a large team of artists doing this work all at the same time, keeping track of that and providing a cohesive process for the artists to do the work. That way, they’re focusing on the important parts needing their attention and are not busy getting data from one package to another.”
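Warner’s description of her long-exposure iPhone app boils down to a simple idea: accumulate frames from a video stream and average them so each output image carries motion blur, like a long exposure in a time-lapse. The sketch below is a minimal illustration of that frame-averaging approach, not Warner’s actual app or any ILM tool; the OpenCV-based implementation, function name and file names are assumptions for the example.

```python
# Minimal sketch of the frame-averaging idea described above (hypothetical, not ILM code).
import cv2
import numpy as np

def long_exposure_frames(video_path, frames_per_exposure=30):
    """Yield one averaged frame for every `frames_per_exposure` input frames."""
    cap = cv2.VideoCapture(video_path)
    accumulator, count = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = frame.astype(np.float32)
        accumulator = frame if accumulator is None else accumulator + frame
        count += 1
        if count == frames_per_exposure:
            # Averaging simulates the light accumulation of a long exposure,
            # giving smooth motion blur instead of a short intervalometer clip.
            yield (accumulator / count).astype(np.uint8)
            accumulator, count = None, 0
    cap.release()

# Hypothetical usage: write the averaged frames out as a time-lapse sequence.
for i, frame in enumerate(long_exposure_frames("capture.mp4")):
    cv2.imwrite(f"timelapse_{i:04d}.png", frame)
```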
Big projects allow time for software development. “Big shows can afford to have more time spent up front trying to make a process, set that up and be ready for full production,” Warner notes. “Rango was special in that ILM brought its expertise to that project and brought a new visual way of storytelling that hadn’t been realized in animated features up to that point. Because the level of detail and photorealism that was applied with the juxtaposition of these wacky characters was cool. It was neat to be able to tell that ILM made it. It had the visual cues that we would apply to any project that we are set up to work on.”
The final shot of Hulk stands out. “It was a neat experience because I worked on the matchmove as the camera starts pulling away. It’s the ‘Power of 10’ shot that had to go through a miniature shoot, so I had to work with the stage and help guide the modelmakers, as well as the DP doing the motion control move, on how that was supposed to work and be put together in a CG sense. At the end, I got to work on the composite and assist in putting the whole thing together; that was a great opportunity to see a shot through the whole process.”
There has been a backlash towards visual effects, leading to them being downplayed. “That’s something where you’ve seen discussions between different artists, as far as the marketing team will say, ‘We used practical effects,’” Warner observes. “But you can tease out that just about every shot has been touched in some digital way. Our work can be undermined in that way. It’s never bothered me too much given that my take on it from a young age was the point of visual effects was the suspension of disbelief and it not being there. You’re there to enjoy the story. In more modern times, we have a lot of projects where visual effects are a big part of the spectacle; that’s great too.”
Everything in the visual effects industry is project based. “We’re working on a project and when that one finishes, we deliver it and move on to the next one. The crazy part now is how big ILM is and how many projects we have going on at the same time. That has its own challenges, but it allows different experiments to take place, as far as how we approach that work and learn from that as a global studio, and to incorporate the best ideas we learn into the next project.”
“A pleasant surprise is when someone takes something you’ve built and makes something new with it,” Warner states. “The other part I was inspired by early on was starting to write tools and seeing how much enthusiasm I got from artists that I was making something for them. That’s been a key for my career and always having it be the end goal. The code is not important in how it works. What’s important is the experience of the artist using it and how well it works. Removing clunkiness or slowness or all the kind of things that can creep into something, that’s always an ongoing challenge. We’ve got this new feature, but it also throws something else off in the code and now operates slower. We always have to be mindful of those things as we move forward.”
TOP TO BOTTOM: There have been many modern-day projects at ILM like Avengers: Infinity War where visual effects are a big part of the spectacle and storytelling. (Image courtesy of Marvel Studios)


ENTERING 2026, VFX/ANIMATION INDUSTRY BALANCES UNCERTAINTY AND OPPORTUNITY
By CHRIS McGOWAN
Over the last few years, the VFX/animation industry has contended with a pandemic, Hollywood strikes, the spectacular collapse of Technicolor and a roller-coaster business track. Yet, at the same time, the industry has surged forward technologically with AI, virtual production, real-time rendering, the cloud and more. Visual effects have been created for feature films, television series, live events, commercials, gaming, theme parks, augmented reality and virtual reality. “The VFX industry will continue to broaden its scope beyond traditional sectors, and VFX expertise will become increasingly integrated with other industries such as fashion, automotive and healthcare,” comments Ian Unterreiner, Executive Vice President of PhantomFX.
Unterreiner states, “This cross-industry collaboration will drive innovation and open new avenues for visual storytelling, experiential marketing and product visualization, making VFX an essential creative partner in a wider range of fields. Additionally, the perception of VFX studios will continue to evolve – from being seen as vendors to being recognized as true creative partners across both VFX and production. VFX artists and engineers are masterful storytellers whose contributions are integral to crafting on-screen experiences, whether from a technical or production standpoint.”
OPPOSITE TOP: Spin VFX contributed visual effects to the film Dora and the Search for Sol Dorado. (Image courtesy of Spin VFX, Nickelodeon Movies and Paramount+)
The up/down VFX/animation business grew 9.3% in the second half of 2024 and contracted 7.6% in the first half of 2025 for an overall 1.0% gain during those 12 months, according to the Visual Effects & Animation World Atlas 2025. India, France, Australia and New Zealand were the major Hollywood markets to show net growth over that period, and the top-five companies by head count were DNEG, ILM, Framestore, Wētā FX and Walt Disney Animation Studios, according to the Atlas.
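As a quick check on how those half-year figures combine, the two moves compound multiplicatively rather than add:

(1 + 0.093) × (1 − 0.076) ≈ 1.010

which is consistent with the roughly 1.0% net gain the Atlas reports for the 12-month period.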
TOP: Raynault VFX combined traditional VFX with real-time world-building for LED stage production in the Emmy-winning Percy Jackson and the Olympians series. (Image courtesy of Raynault VFX and Disney Television Studios)

In a time of great opportunity and uncertainty, here are some industry voices discussing the above and more.
FOUNDRY
Jody Madden worked at Digital Domain, Lucasfilm and ILM before becoming CEO of Foundry, based in London. “When I started, our industry worked primarily from VFX hubs in London and the West Coast of California, but now the talent is truly distributed around the globe. The advancements in hardware and software technology over the same period have made it possible for talent worldwide – from individuals to 10,000-strong VFX companies – to create groundbreaking visuals for our screens,” Madden says.
“The core philosophy behind our products has always been to enable artists to work on the most complex projects more efficiently,” Madden continues. “Looking ahead, we understand the pace of adopting new products and technologies is not one-size-fits-all. We’ll continue to support and deliver value to our customers in their existing pipelines, while building new products and workflows that enable the ways they’ll want to work tomorrow. We have to do both. For 2026, this includes a reimagined 3D system for Nuke, expanding our machine learning toolsets, further developments in our virtual production product, Nuke Stage, plus updates across Mari, Katana and Flix.”
Madden notes, “These new technologies provide an opportunity to work more efficiently and collaboratively, while providing more creative flexibility to artists when the tools that employ these enabling technologies are effectively implemented to solve real-world production problems.”
MATHEMATIC FILM
Olivia McLean, Executive Producer of Mathematic Film, remarks, “We’re continually chasing new efficiencies in this industry. Regarding cloud workflows, Mathematic is connected to the Sohonet Media Network, allowing us to connect directly to vendor platforms seamlessly. We also have a global file system called Hammerspace that can sync all the files directly to our Los Angeles office. As a company with its chief operation in France but an international client base, particularly for the U.S. market, we need to build an infrastructure that unifies our workflow and encourages virtual collaboration between our artists.”
Speaking of AI, McLean observes, “Beyond the acknowledgment of the general rise of AI’s impact across various sectors of the industry, more specifically, AI agents are starting to creep up, not only for personal assistance but, upon training, for further management and operational necessities. Optimistically, I believe the value of human connection and in-person mentorship will outweigh the pursuit of displacing production staff with a virtual agent. Still, it’s definitely a developing tech to be wary of.”
MILK VFX
Neil Roche, Deputy CCO and VFX Supervisor at Milk VFX, says, “It has been a challenging period, with the ongoing effects of the strikes and a noticeable strain on many established VFX houses; however, there’s a growing sense of momentum returning. The use of real-time game engines for VFX is very successful at Milk and Lola Post. Using Unreal for final-pixel VFX allows for more iterations and faster render times, which means our clients get increased productivity. Cloud workflows, particularly for



TOP TO BOTTOM: According to Spin VFX, the most significant technological breakthroughs impacting the VFX industry today are near real-time rendering and AI-assisted image generation. (Image courtesy of Spin VFX/Nickelodeon Movies/Paramount+)
Rodeo FX contributed visual effects to Netflix’s Sandman, a fantasy drama based on the Neil Gaiman comic book. (Image courtesy of Rodeo FX and Netflix)
Spin VFX contributed visual effects to sci-fi thriller Beacon 23, Season 2. (Image courtesy of Spin VFX and MGM+)
rendering, are great for scaling up production without the need to over-stretch the tech side of the company.” LED stages will continue to grow in importance. “Absolutely, they can be very cost-effective, even for simple things such as driving process sequences. The cost base for LED volumes seems to be decreasing, and the quality of in-camera content is also increasing.”
Roche notes, “Obviously, the big thing [from last year] has been advancements in AI and machine learning. The impact will be when the DCC software starts to use AI and ML in its tooling. Nuke already has a few ML nodes, and I’m sure this will increase. Increases in tax incentives, not only in the U.K. but also in Europe, should also positively impact the industry.”
PHANTOMFX
In 2025, Phantom Digital Effects acquired Milk VFX and Lola Post. According to Bejoy Arputharaj, Founder and CEO of PhantomFX, the addition “further solidifies the Group’s position as a global creative partner of choice.” He adds, “The VFX industry is undergoing a major shift as AI, machine learning and real-time technologies reshape how content is created and delivered.” For example, Tippett Studio [owned by Phantom] “is leveraging AI to produce Phil Tippett’s Sentinel, the highly anticipated follow-up to the cult phenomenon Mad God. By blending analog craftsmanship, cutting-edge CG and pioneering AI techniques, Sentinel pushes the boundaries of what’s possible in independent animation and genre storytelling.”
BASILIC FLY STUDIO
Balakrishnan R, Founder, Managing Director and CEO of Basilic Fly Studio, states, “There’s not just one tech breakthrough having the biggest impact right now or recently on the VFX industry. AI, real-time rendering and cloud workflows [are] all reshaping how we work. AI is doing a lot of the heavy lifting – roto, clean-up and even some lookdev work – but it’s not about AI replacing artists. It’s about using it smartly so our teams can focus on the real creative stuff.”
He continues, “Real-time engines like Unreal are a game-changer, too. You’re seeing scenes live on set. What used to take days is now instant. It’s speeding up decisions and bringing everyone into the creative process earlier. And the cloud? Total enabler. We’ve got teams in different countries working like they’re in the same room. That kind of flexibility didn’t exist five years ago.” He believes “2026 will be a year of acceleration for the VFX industry. After a slower cycle in 2024–2025, we already see signs of production pipelines filling up again, driven by both streaming and theatrical projects. AI and real-time technologies will move from experimental to mainstream in VFX workflows, making production cycles faster and more collaborative. Studios with global reach, diverse talent pools and tech-enabled pipelines will have a clear advantage.”
CLEAR ANGLE STUDIOS
In 2025, Clear Angle Studios, a 3D capture and processing company, announced a strategic partnership with Robot Air, a
specialist scanning company based in Cape Town, South Africa. “Productions shooting in South Africa will now have direct access to a full suite of globally recognized 3D capture services,” explains Christopher Friend, Director and Co-Founder of Clear Angle Studios. “These tools span the full scanning suite from character and costume scanning to LiDAR, props and full-environment capture. It’s the kind of solution that will remove logistical roadblocks that have been problematic for the international film landscape while empowering local productions to deliver world-class results.”
Friend adds, “As production locations diversify, having reliable, high-fidelity scanning capabilities on-site becomes a real strategic advantage for global teams. This move illustrates how the industry normalizes access to specialist tech in growing creative hubs like South Africa.”
Among tech breakthroughs that are having a big impact right now in 3D capture and processing and scanning, “Gaussian splatting is the current hot ticket,” Friend says. “It’s an amazing tool for capturing dynamic, high-detail environments and subjects that are quickly translatable into an immersive, interactive experience that can be rendered in real time.”
LUX AETERNA
Rob Hifle, Co-Founder, CEO and Creative Director of Lux Aeterna, notes, “In these rocky times of late, we’ve had to think smarter at Lux Aeterna to navigate some of the stalling and non-commissioning of projects due to global economic uncertainty over the last two years.” Paul Silcox, VFX Director for Lux Aeterna, explains, “One of the driving forces enabling our flexibility is our investment into our Universal Scene Description (USD) pipeline; it means that we have been able to work quickly, efficiently and optimize management of assets and workflows.” He adds, “We’ve been forging into the virtual production and immersive sectors, creating more cinematic VFX work on large-scale global immersive screen projects and film stages. Working on these large-scale event spaces has been both liberating and creatively exciting.”
Hifle adds, “We’ve also seen huge cost-effective solutions with machine learning processes on some of Lux Aeterna’s more laborious VFX tasks. It’s been instrumental in clean-up and rotoscoping, reducing costs and speeding up turnaround times. Improvements to ML denoising algorithms have also sped up render times significantly, which has a direct influence on our efficiencies with regard to schedules, costs and our carbon footprint. We are seeing more real-time rendering tools, such as Unreal Engine, in our pipeline. This is only going to grow in 2026. It’s helping manage and transform workflows and shortening production cycles. We are seeing more projects utilizing LED volumes with our immersive and film stage work.”
BEEBLE AI
Beeble AI, based in Seoul, South Korea, specializes in AI-driven relighting technology. “At Beeble, we’re pushing the boundaries of what’s possible in visual effects by turning standard 2D footage into fully relightable, physically-based render [PBR] assets,”


TOP: Nuke Stage is a Foundry virtual production solution that enables real-time playback of photorealistic environments onto LED walls. (Image courtesy of Foundry)
BOTTOM: Clear Angle Studios offers full-body and facial capture. (Image courtesy of Clear Angle Studios)


TOP: One of the driving forces enabling Lux Aeterna’s flexibility is its investment in its Universal Scene Description (USD) pipeline, which helps it work quickly and efficiently and optimize management of assets and workflows. (Image courtesy of Lux Aeterna and Netflix)
BOTTOM: Clear Angle offers a full suite of 3D capture services. These tools span the full scanning suite from character and costume scanning to LiDAR, props and full-environment capture. (Image courtesy of Clear Angle Studios)
comments Hoon Kim, Co-Founder and CEO of Beeble AI. “This enables creators to relight actors in post-production as if they were shot on an LED wall, drastically cutting down costs and time while preserving full artistic control.” Kim observes, “One of the most impactful shifts in 2025 [was] the rise of hybrid workflows – blending traditional CGI techniques with AI-generated assets and enhancements. Rather than replacing artists or pipelines outright, AI is augmenting them: accelerating routine tasks, enabling creative iteration earlier in the process and opening new doors for storytelling.”
Kim foresees that in 2026, “VFX production will become more collaborative and web-native, powered by cloud rendering, versioning and shared asset platforms. The next generation of visual storytelling will be defined not just by realism, but by flexibility, speed and creative freedom. AI is helping us move beyond the constraints of physical production environments.”
RAYNAULT VFX
“Economic fluctuations, political uncertainty and the expectation that AI will reduce VFX costs could push studio heads to adopt tighter budgets,” says Mathieu Raynault, Founder and CEO of Raynault VFX. “Despite all these unknowns, including AI, I remain optimistic about the future of VFX and our company. I believe that by staying lean and flexible, we can continue to thrive, turning each of our artists into a creative powerhouse and multiplying our overall capacity exponentially.”
Raynault comments, “Generative AI tools and machine learning are clearly going to reshape VFX workflows. Right now, the flood of new tools is overwhelming, and they evolve incredibly fast. Our biggest challenge is figuring out which ones are ready, secure and legally safe to integrate. Over time, these tools will drastically reduce the time spent on tasks across our pipeline, letting our artists work more efficiently and focus on higher-value creative work.”
SPIN VFX
Neishaw Ali, CEO and Executive Producer of Spin VFX, observes, “The biggest technological breakthroughs impacting the VFX industry today are near real-time rendering and AI-assisted image generation. Tools like Unreal Engine for real-time rendering and AI platforms such as Midjourney, Adobe Firefly, Luma AI and others are transforming how we work. The ability to see changes instantly and generate compelling visual concepts in minutes means filmmakers, artists and production teams can align creatively at the earliest stages.”
Ali notes, “LED stages will grow exponentially in later years as technology matures with better panels, lower latency and more accurate color, coupled with a standardized workflow that is easier to drop into production without massive R&D and overhead. The biggest challenge we faced in 2025 wasn’t the technology itself, but the mindset shift. We need to commit to learning the tools and methods that will define the next decade, such as real-time rendering, USD-driven interoperability, AI-assisted automation and virtual production.”
“2026 is poised to be a year of evolution rather than explosion. The future looks bright, but it’s calling for leaner, smarter and more tech-savvy VFX.”
—Sébastien Moreau, Founder & President, Rodeo FX
“Simultaneously, we’ll see pipeline consolidation with cross-trained teams replacing siloed departments. Changing the mindset of our existing staff to help them let go of old habits and accept that the fundamentals have shifted will be as critical as learning the tools themselves. At the same time, we must push schools and universities to adapt their curriculum now, so graduates enter the workforce with the real-time, USD-based, AI-assisted, cross-disciplinary skills that will become the basic standard that the industry demands.”
RODEO FX
Martin Walker, Chief Technology Officer at Rodeo FX, states, “By leveraging the power of modern GPUs, real-time rendering engines, like Unreal Engine, can render complex, photorealistic scenes instantly. This means an artist can adjust lighting, swap materials or move the camera to explore new compositions without delay. This immediacy transforms the creative process from a linear, one-way street into a dynamic, interactive playground where artists can test dozens of ideas in the time it used to take to render one.”
Artificial intelligence is automating many of the most laborious tasks on the animation front, acting as the ultimate animator’s assistant. Walker says, “Instead of manually keyframing every subtle movement, animators can leverage AI to significantly speed up the process. AI algorithms can automatically clean up noisy motion capture data, saving countless hours of manual adjustments. Furthermore, new tools can generate realistic character animations from simple text prompts or video input, providing a high-quality starting point for animators to refine.”
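As a simple illustration of the kind of mocap cleanup step Walker describes, the sketch below smooths noisy capture channels with a classical Savitzky-Golay filter. This is a stand-in for the learned models he refers to, not Rodeo FX’s actual tooling, and the function and data names are hypothetical.

```python
# Hypothetical cleanup pass over noisy motion capture channels. A classical
# Savitzky-Golay smoother stands in for the AI-based cleanup described above.
import numpy as np
from scipy.signal import savgol_filter

def clean_mocap_channels(frames, window=15, polyorder=3):
    """frames: (num_frames, num_channels) array of noisy joint rotations/positions."""
    # Smooth each channel over time while preserving the broad motion curves,
    # reducing the jitter an animator would otherwise fix by hand.
    return savgol_filter(frames, window_length=window, polyorder=polyorder, axis=0)

# Example: 240 frames of three synthetic, noisy channels.
t = np.linspace(0, 4 * np.pi, 240)
clean = np.stack([np.sin(t), np.cos(t), 0.5 * np.sin(2 * t)], axis=1)
noisy = clean + np.random.normal(scale=0.05, size=clean.shape)
smoothed = clean_mocap_channels(noisy)
```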
Walker continues, “When combined, real-time rendering and AI animation create an incredibly powerful workflow. This synergistic loop of AI generation and real-time feedback allows creators to iterate on entire scenes – animation, lighting and effects – holistically and rapidly.”
Sébastien Moreau, President and Founder of Rodeo FX, notes, “The demand for high-quality content isn’t slowing down. Streaming platforms, global franchises and immersive media continue to drive appetite for VFX across film, television and advertising. However, economic headwinds mean many projects will come with tighter budgets. The silver lining? Technology is catching up. Real-time rendering, virtual production and AI-powered tools are helping studios speed up workflows and reduce costs. These innovations are reshaping how VFX is made, unlocking creative freedom while keeping bottom lines in check.”
Moreau concludes, “All things considered, 2026 is poised to be a year of evolution rather than explosion. The future looks bright, but it’s calling for leaner, smarter and more tech-savvy VFX.”



TOP TO BOTTOM: Rodeo FX contributed visual effects to Dune: Prophecy. Real-time rendering and AI-powered animation are fundamentally changing how creative projects are developed at Rodeo. (Image courtesy of Rodeo FX and Warner Bros.)
Beeble AI turns standard 2D footage into fully relightable, physically-based render (PBR) assets. (Image courtesy of Beeble AI).
Raynault VFX contributed visual effects to Marvel Studios’ Thunderbolts*. (Image courtesy of Raynault VFX and Disney Television Studios)

THE EVOLVING COEXISTENCE OF MAN, MACHINE AND THE ART OF VFX
By TREVOR HOGG
A fascinating aspect of the visual effects industry is that the final product is dependent upon technicians and artists working together; however, both of these groups speak and think in different ways. Yet, they have the mutual goal of creating imagery that allows filmmakers to achieve their creative ambitions and fully immerse audience members in the storytelling. The traditional visual effects paradigm is being fundamentally altered as AI and machine learning streamline how imagery gets created and produced. There is no shortage of opinions on whether this new reality will be empowering or the death knell for creativity.
TOP: Currently, AI has not moved beyond impacting the workflow for conceptualization, but it will continue to evolve like the creatures in Walking with Dinosaurs, with an assist from Lola Post. (Image courtesy of Lola Post and BBC)
OPPOSITE TOP TO BOTTOM: Even when generative AI was utilized for The Eternaut, the process was highly guided by artists. (Image courtesy of Netflix)
AI is a useful companion for testing ideas, much like the relationship between John Cena and Idris Elba in Heads of State, with effects from Milk VFX. (Image courtesy of Milk VFX and Prime Video)
The opening title sequence of Secret Invasion utilized generative AI. (Image courtesy of Marvel Studios)
“In many ways, creators and AI are evolving together,” states Paul Salvini, Global Chief Technology Officer at DNEG. “AI is becoming more capable of creating more controllable outputs, and creators are becoming more familiar with using AI to augment their existing tools and workflows. It will take a while for visual effects workflows to be reshaped, as the AI toolset for visual effects is still in its infancy. It’s exciting to see new ways of interacting, such as natural language interfaces, emerging within the field. As our chosen tools become more intelligent, we can expect to see more human interactions being about expressing creative intent and less about the mechanics of execution.” The one thing that distinguishes high-end from mainstream content creation is the need for complete creative control. “The lack of controllability of AI is frustrating for many professional users. It’s difficult to tweak subparts of a generated image that is close but not yet there. As we develop a new generation of AI tools for artists, the artist experience and the enhancement of creative control are paramount,” Salvini says.
Every technology has its pros and cons. “AI has been a controversial technology because of concerns over how models are trained [the source of data and the rights of the artists] and what

AI might mean for various jobs in the future. We need to be extremely responsible and ethical in how we develop and deploy AI technologies,” Salvini remarks. The rapid advancement of AI technologies has been impressive. “The high level of investment in AI has resulted in a major leap in capabilities available to end users. Like all new technologies, I’ve learned that AI takes time to mature and be integrated into existing tools and workflows.” The biggest challenge has been the hype. “The high-end of visual effects requires a level of creative control that isn’t available with most of today’s AI tools. There is a general frustration that the consumer tools for AI are advancing far more rapidly than those for the professional market,” Salvini says. Misconceptions have become a daily occurrence. “Depending on the conversation, its impact is often dramatically underestimated or overestimated. There is a sense that AI will transform workflows overnight when that simply isn’t the case.” Everything centers around bringing great stories to life. Salvini states, “AI brings a set of capabilities that will expand our toolset for making that happen. As the technology in our space evolves [and not just through AI], our focus needs to remain on making it easier to focus on the creative value we add.”
AI models in their best form serve as a force multiplier and/or mentor. “For senior personnel, it allows them to either iterate certain ideas faster and then curate the results, choosing which show the best potential for further development,” notes Steve MacPherson, CTO at Milk VFX. “In other cases, it can open up increased understanding across different disciplines, which increases the breadth of activity for that individual. Finally, it can serve as a mentor, allowing the mentee to zero in on specific areas of questioning in exploring any given topic.” Currently, AI has not moved beyond impacting the workflow for conceptualization. “Under the hood on the development side, there are times when
“[AI] is wrong frequently enough to remind me to keep thinking and to make sure I use it to challenge me as much as I remind myself to keep challenging it.”
—Steve MacPherson, CTO, Milk VFX




it can assist with code-based analysis or initial idea prototyping. I say this cautiously for a couple of reasons. First, any code that goes into production should be peer reviewed. Second, a large part of the development job is ensuring the code base is stable and maintained, and I am wary of introducing machine-generated technical debt.” AI is a useful companion. “I test ideas against it. I use it to challenge assumptions. I use it to increase breadth of understanding or as a refresher on topics I’m already familiar with. I’ve learned a new-found sense of confidence in tackling complex topics. Not that I can understand them quickly, but that I can chisel away in a method that works for how I learn. It’s brought a real and genuine sense of fun and exploration to the work I do,” MacPherson says.
“The pros are the extended range AI can offer the individual,” MacPherson notes. “The ability to gather basic information quickly. To learn at your own speed in a manner that suits your style of learning. The cons, and I am always on guard against this for myself and others, are to let the tool do the work for you. By this, I mean the critical thinking, the following of intuition, the understanding of the individual pieces and how they fit into the whole. For me, the emphasis on building a team is on building human character and identifying people whose principles around effort and knowledge are aligned with the company’s goals.”
Some of the results have been unexpected. MacPherson says, “I have dumb questions. We all do. Things we should know but for whatever reason don’t. These LLMs are a path to asking those
TOP: Just as the characters of Mark Wahlberg and Rosa Salazar in Play Dirty evolved together, the same thing applies to creators and AI, according to DNEG. (Photo: Jasin Boland. Courtesy of Prime Video)
BOTTOM: Machine learning was critical in enabling ILM to recreate Harrison Ford as he was in the early 1980s for Indiana Jones and the Dial of Destiny. (Image courtesy of Lucasfilm Ltd.)

dumb questions without fear of judgment. It gets them out of the way and plugs gaps in knowledge quickly and efficiently. It’s wrong frequently enough to remind me to keep thinking and to make sure I use it to challenge me as much as I remind myself to keep challenging it.” The hype has to be ignored, as what is important is staying focused on real-world solutions. “It’s both oversold and undersold. GenAI is currently a novelty act that might progress out of the basement and into daylight. Maybe. I’m banking on the need for humans. They are language models, and the limits of their ability are the limits of our language. As language models, the insights they can give us into how we think and express ourselves are incredibly insightful and rewarding for me as a career technologist.” Joe Strummer of the Clash once declared, ‘The future is unwritten.’ MacPherson adds, “It’s perfectly human that the most obvious and succinct summation of our times comes from a ‘70s punk whose band rebelled against the complacency and excesses of the music industry!”
Generative AI is not going away, leading Cinesite to establish an internal technological exploration unit called TechX. “We’re a non-production unit, so we’re not using the technologies directly on productions at the moment,” explains Andrew McNamara, Generative AI Lead at Cinesite. “It allows us to inform ourselves, our team, different departments and senior management about the potential of how these things can be used. One part is to increase efficiency in certain areas, and the second is to allow us to amplify creatively what we’re doing and what it means in the

TOP: Unlike the assassins of Mortal Kombat 2, the lack of controllability of AI is a source of frustration for many professional users.
(Image courtesy of Warner Bros. Pictures)
BOTTOM: Milk VFX, which worked on Heads of State, believes, as with any tool, it is important not to let AI do the critical thinking for a shot, as the narrative context needs to be taken into account.
(Image courtesy of Milk VFX and Prime Video)




longer term in the wider content creation ecosystem. There is a lot of fear around whether you can do a whole shot with a generative AI workflow. We’re trying to figure out where that sweet spot is. You have one end stating, ‘We’re going to replace everything.’ Then there will be tangible use cases in visual effects or an animated production where a lot of the technologies can be utilized to support mundane, tedious, time-consuming and expensive things that aren’t creatively driven but can give us an advantage.” There was a time when a picture was worth a thousand words, but now you need a thousand words to generate an image. McNamara says, “It’s about having a human in the loop. Having AI as a collaborator, not something to replace you, is important. For example, you want to leverage the ideas and skills of concept artists to generate stuff. But you have a client who wants to see 20 versions of the same thing tomorrow. That’s a lot of pressure on a concept artist to turn around. You could argue that AI gives concept artists more options to support that accelerated workflow.”
Hybrid workflows are where everything is headed. “We will probably see new tools and technologies to support artists and creatives that probably haven’t been invented yet that take different mentalities of how we interact with the AI,” McNamara believes, “but also using traditional things like hand-drawn and concept art. It will end up somewhere in the middle in the future.” AI has already taken on the role of Google Assistant. “If you start to look at some of the large language models that are getting integrated into some of the digital content creation tools, it’s there to support you. You can use ChatGPT, feed it some artwork you’ve done or a photo, and ask, ‘Can you show me if this had a green horn on the right-hand side?’ You’re supplying it with some starting points and getting it to give you some options. The more you delve into these kinds of tools, even Midjourney has some quite interesting tooling at the moment to allow you to have these interactive dialogues. You can reject what it gives you. It’s not the end-point but part of a process. You still have the element of choice to say if you want it to do certain things. It’s not fully automated. Maybe we’ll eventually see, in five-or-10 years’ time, haptic stuff where you’re physically sculpting a whole range of things in collaboration with voice and prompts. It potentially allows the artist to choose which of those mentalities best suits how they want to work. It won’t just be prompts in the future. It might have a 3D component where you use some of the basic 3D tools to specify your compositions.” The visual effects industry is the business of manufacturing beautiful imagery. McNamara notes, “You could argue that it’s a homogeneous way of doing the work. Things like specialism within that are quite interesting. Do we think that specialists will be less important if you could have a visual effects or creative generalist who can do more and be augmented with AI? I don’t have the answer for that question.”
An example of TechX model training. (Image courtesy of Cinesite)
TechX utilizes AI for 2D crowd replication. (Image courtesy of Cinesite)
Terms are being misused. “I even hate using the phrase AI because I don’t think that there’s a lot of intelligence in these tools,” reveals Kevin Baillie, Vice President, Head of Studio at Eyeline. “They’re like rocket fuel for people who are less
TOP TO BOTTOM: The Eternaut used generative AI to create a collapsing building in Buenos Aires. (Image courtesy of Netflix)
AI enhanced the Hungarian dialogue of Adrien Brody and Felicity Jones in The Brutalist. (Image courtesy of A24)
technical and more creative. However, we still need technicians to help ensure that these things can work at scale and fit into normal production processes.” The role of the technician will change. “Some of them will all of a sudden find themselves in a position of being a creative sitting next to one of their favorite directors on the planet, like I did when I started working with Robert Zemeckis,” Baillie says. There are misconceptions. “We have a narrative out there from some of the bigger tech companies that are creating these tools, like ‘prompt in and movie out,’ and that’s far from the truth. There are some instances, such as early concept ideation, where it can add details that otherwise would take forever to do. You might have an image of a creature from a piece of traditional concept design or something modeled in ZBrush and use a tool like Veo3, Runway or Luma AI to make it move. You can discover things from that process, which you wouldn’t have thought of, these happy accidents. By and large, they are just tools, especially when it comes to doing final pixel work. We spend more time fighting the tools to get them to be more specific in their output and have output fidelity. In cases like the Argentinian series The Eternaut, which had a few fully generative shots, they started with a low-resolution version of an apartment building in a city block in Argentina to make this building collapse. The inputs were very much curated in the same way you would if you were doing previs or postvis. A lot of the things in the middle were using various models to restyle that very low-resolution early image. That was then used to feed a video diffusion model to make it move right. The process was highly iterative and highly guided by artists exactly as the visual effects process would be now.”
Workflows are going to change fundamentally. “When we’re looking at these workflows, it is important to embrace the powerful things about them,” Baillie notes. “In a traditional visual effects workflow, there’s an artistry that is separated by technical steps. Whereas, in the generative image and video creation process, a lot of those steps are combined. You’re going to get the best result out of an image generation if you can give the diffusion model everything it needs to know to make that image in one go. There are limits to that, but as we think about how to best leverage these tools, the steps will be fundamentally different from those in a traditional visual effects workflow. Could we have an instance come with UI living within Nuke, that you do an input into, and then there’s a whole generative, specific workflow within a node, and it pipes out the output? That would probably be a comfortable thing for visual effects artists to wrap their heads around. There is space for that. However, it is important not to let the traditional visual effects steps and mechanisms get in the way of potential opportunities the tools offer.” Baillie addresses a sense of déjà vu. “I will remind anybody paying attention that it feels like the 1990s to me, when there were many unsolved problems, such as digital humans, water and fire. No one tool did it all. You had to take all these disparate tools and work with smart people to figure out how to combine them all to get beyond each one’s limitations to then arrive at a result that the filmmaker wants, all in service of telling a great story.”


TOP: TechX experiments with style transfer. (Image courtesy of Cinesite)
BOTTOM: TechX explores object replacement. (Image courtesy of Cinesite)

CHALLENGING THE RULES OF PERCEPTION WITH SCARLET
By TREVOR HOGG

OPPOSITE TOP: When they appear on screen together, the primary goal was to ensure that neither Scarlet nor Hijiri was overly dominant, while using visual contrast to make each character appear interesting.
A prevailing theme during the 50th Toronto International Film Festival had to do with William Shakespeare’s story about a son seeking vengeance against the uncle who killed his father to usurp the throne of Denmark. There was Hamlet, Hamnet (which deals with the origins of the play) and Scarlet, in which a poisoned Danish princess finds herself in the afterlife determined to assassinate the conniving relative responsible for the deaths of the king and herself.
“When I think about Hamlet, the theme of revenge and the difficulty for someone to forgive, I see that a lot in today’s social climate,” explains Mamoru Hosoda, Director and Writer, while attending the North American premiere of Scarlet at TIFF. Studio Chizu, Nippon Television Network Corporation, Kadokawa Corporation, Sony Pictures Entertainment and Toho Co., Ltd. joined forces with Hosoda to make his eighth feature film. “I wanted to interpret Hamlet from 400 years ago and see what that would look like now. Would Hamlet still have to die or would he have more options?”
A great deal of thought went into deciding on the animation style. “2D is a primary form of art in animation in Japan versus 3D, which is more prevalent in Europe and North America,” Hosoda notes. “I thought maybe it would be better to mix these forms of art within Scarlet. I took inspiration from different kinds of styles, like Spider-Verse in the United States or Arcane in Europe. For Scarlet, I wanted to use more of that CG look to make the 2D come to life.” Establishing the hybrid animation template was Belle, which transposed Beauty and the Beast into the metaverse. “I thought I would use a similar approach where I would split the Otherworld into CG style and the modern world into 2D. We often think that CG is futuristic or smart, but how I utilize it in Scarlet is to portray the raw emotion of a human being, which adds a different spin
Images courtesy of Studio Chizu.
TOP: Scarlet finds herself transported to a hellish world inspired in part by Dante’s Inferno.
BOTTOM: Mamoru Hosoda, Director and Writer

on the use of CG.” The Otherworld transcends time and space, enabling a medieval princess to interact with a modern-day nurse, and challenges the rules of perception in regard to what is actually being seen. “When Belle is riding the whale, we may have assumed that it was inside of the virtual world, but maybe it could have been a reality. What is actually real differs for each audience member. I like to use this kind of spin on imagination in Scarlet. What we see on land might be perceived as water, or what we perceive as water could be the sky. I always like adding the different layers of perception and putting that into the artform and animation.”
Designing Scarlet was surprisingly straightforward. “This was largely because the director unexpectedly suggested using the design from his previous film, Belle, almost entirely as-is,” states Jin Kim, Character Designer. “The main reason was probably that Belle and Scarlet share significant character traits.” Extensive research was conducted. “For this project, I researched medieval clothing, armor, swords and hairstyle. I also referenced many live-action films based on Hamlet. For Hijiri, I needed to study various styles of nurse or paramedic uniforms and the traditional Japanese bow he would use. To design his well-built physique and attractive face, the first image that came to mind was Shohei Ohtani, the world-famous baseball player for the Los Angeles Dodgers. Even in the final design, you might sense a slight resemblance to him, except for the hairstyle. To further contrast the characteristics of the two characters, Scarlet was given very messy hair and a complex silhouette, while Hijiri was given a short haircut and a simple body silhouette,” Kim notes.
Conceptualizing the supporting characters varied in difficulty. “It was enjoyable work to differentiate the various types of male characters, but for some characters, I couldn’t see the right approach and it took time,” remarks Tadahiro Uesugi, Character
Designer and Production Designer. “Amlet resembles his younger brother Claudius, yet his personality is the complete opposite. My initial conception of Gertrude was influenced by past films and such, which differed from the director’s vision. The Mysterious Old Woman allowed for many interpretations, so it took time to align our visions.” The theme of revenge was kept in mind. “Since there are many antagonists, I aimed to make them appear more hateful and formidable as enemies.” The biggest design challenge was not character related. Uesugi states, “It is the gate to the unseen place. As the entrance to heaven, I had to consider what design would be convincing. If the design were just odd, it would make the entire story seem ridiculous. It must maintain a level of realism, yet be compelling, and not be a place beyond imagination.”
Motion was treated differently this time around. “While the director often used 8fps before, this time 12 frames per second were more common to match the smoothness of the CG,” observes Takaaki Yamashita, Animation Director. “Belle had about equal CG and hand-drawn animation, but it used eight frames per second more often. Since Belle’s U world and Suzu’s reality existed in different dimensions, and Suzu and Belle were distinct characters, matching frames wasn’t necessary. This time, however, while the world changes, Scarlet remains the same character. I think the director wanted to maintain continuity in the smoothness of the movement.” The basic direction was laid out in the storyboards. “More often than not, it was the content of the shot itself rather than the character that influenced the posing.” Crowds were complex to execute. “Throughout the entire film, not just specific scenes, there were an overwhelming number of crowd shots that were by far the most complex. I truly admire how the key animators meticulously drew each and every one by hand,” Yamashita says.



As with the metaverse realm of U in Belle, the Otherworld was treated as a 3D environment.
In certain cuts, simulated crowds were blended with hand-animated characters to enhance realism and immersion.
Stuntvis and motion capture assisted with the dance and battle scenes. “For the action scenes, Kensuke Sonomura, the Stunt Coordinator, supervised the creation of live-action video storyboards,” states Yasushi Kawamura, CG Director. “For the dance scenes, Tomohiko Tsujimoto oversaw the choreography, and motion capture was conducted under the director’s supervision. Afterwards, the animators adjusted the movements to resemble traditional 2D animation.” The best of 2D and 3D animation were applied to get the desired form of emotional expression. “Subtle emotional cues, like the delicate movement of the eyes and eyebrows or the slight shifts in the mouth and shoulders to convey breathing or pain, are techniques rarely used in traditional hand-drawn animation, but they are areas where CG excels. At the same time, we also incorporated traditional animation techniques such as held poses, intentional omissions, and animating by 12 or 8 frames per second. These methods rely on the viewer’s imagination to fill in the gaps, a hallmark of classic anime expression. By fusing these two techniques, CG’s subtle precision and the brain’s ability to fill in expressive gaps, we were able to achieve a distinctive style of facial animation in this film that balances delicate nuance with suggestive abstraction.”
Scarlet went through the most revisions because of her wardrobe and continuity reasons. “Her outfit changes multiple times throughout the film, and she accumulates dirt and injuries during battles, so we had to create a wide range of model variations,” remarks Ryô Horibe, CG Director. “Her companion on the journey, Hijiri, also posed unique challenges; his simple shaved hairstyle made it particularly challenging to balance the proportions of his face and head, requiring numerous adjustments and refinements.” There were no major problems when modeling the dragon that dominates the sky. “In order to match the silhouettes depicted in the storyboard, a single model was not sufficient, and we had to
TOP TO BOTTOM: Mamoru Hosoda wanted to interpret Hamlet from 400 years ago and see what that would look like now.


adjust its shape for each shot. This dragon is an enormous creature, stretching several kilometers in length, so we had to pay close attention to many aspects, such as its movement speed, the size and motion of the surrounding clouds and the subtle atmospheric gradients, during compositing to convey depth and scale. Cloud placement and layout, in particular, received extensive feedback from the director, and we made numerous revisions accordingly.”
CG was relied on to create the Otherworld except for a few parts.
“Since the environment features many natural landscapes such as rocky terrains, we were able to effectively utilize Megascans
BOTTOM: There was a discussion as to how much blood and dirt should be placed on the face of Scarlet.
TOP: Looming over the Otherworld is the castle where Claudius resides.



Tears and sweat running down the skin were keyframe animated because they were directly tied to the emotional expression of the characters.
The vast desert and barren landscapes of the Otherworld are evocative of Lawrence of Arabia.
assets,” remarks Yôhei Shimozawa, CG Director. “For deserts and uniquely shaped rocks, procedural modeling with Houdini proved especially useful, allowing us to efficiently produce numerous cuts within the same location. When it came to animation, particularly in action scenes, we defined the stage in 3D and roughly determined the camera positions for each cut in advance. This approach helped streamline the animation workflow and made the process smoother.” Rough previs animation was created for the wormhole sequence. “Since the sequence relied heavily on visual effects and had to sustain viewer interest over a long duration, we had to be creative, especially because the director prohibited any swaying motion to avoid causing motion sickness. We designed the visuals so that flames would gradually transform into various elements, eventually becoming abstract beams of light, aiming to convey a sense of super-human speed. We also varied the spatial scale depending on the elements used. The production was based in Houdini, with additional use of After Effects plug-ins such as Trapcode Tao and Mir. Multiple artists were responsible for different components, so we paid close attention to the assembly process.”
Breaking from tradition was the way the imagery was composited. “Previous Studio Chizu works featured a ‘shadowless’ look with relatively light compositing, allowing the cel and background materials to appear largely unaltered,” explains Akiko Saito, Compositing Director. “I believed this project would follow that approach, but the director wanted a different expression. While keeping the fundamental animation techniques unchanged, I was asked to create a different visual style. The challenge was determining what techniques to use for this new look.” A certain key term described the visual aesthetic for 16th century Denmark. “During screen tests where both the hand-drawn and CG parts were previewed simultaneously, we held discussions and then finalized
TOP TO BOTTOM: While everyone wears similar medieval-style costumes, Voltemand and Cornelius were given attire that reflected the image of experienced soldiers.

decisions after further talks with the director. The director initially suggested the keyword ‘like watching a medieval documentary film.’ Using that as a hint, I aimed for a simple yet densely packed visual style. While maintaining our usual aesthetic, I consciously pursued a slightly grainy look even within the cel-shaded look.”
The opening scene was treated as archival footage. Saito remarks, “The grainy, rough texture of the image conveys Scarlet’s fixation on vengeance. In contrast, the final scene, where she has come to terms with her feelings and appears refreshed, minimizes grain processing to express this difference. The final scene also felt like a return to the ‘usual Studio Chizu style.’”
An incredible amount of care went into crafting every single shot in order to meet Hosoda’s vision. “If I had to pick a few highlights, I’d point to the early horizontal shot in the red-hued scene, where the princess’s expression of sorrow and despair is especially striking,” Kawamura states. “The action sequences featuring the princess and the soldiers, as well as the horse scene in the middle of the film, are also visually dynamic and worth noting. The vast desert and barren landscapes of the Land of the Dead, evocative of Lawrence of Arabia, offer a powerful road-movie aesthetic.” Kawamura continues, “In the latter half of the film, the subtle emotional shifts between the princess and Hijiri are beautifully portrayed through a series of intimate bust shots and a memorable dance sequence. Honestly, the list could go on and on, but the crowd scenes, the volcanic eruption, the stairway in the sky, the lightning and the dragon are all spectacular and packed with visual impact. In the end, I’ve pretty much mentioned most of the sequences!”
Of all the characters, the protagonist is the most fascinating. “If I had to choose, I would want to meet Scarlet,” Kim notes. “Of course, it’s an adaptation of Hamlet’s story, but she is essentially
Hamlet. Who wouldn’t want to meet a character like Hamlet? I want to meet her and comfort her for the anguish and suffering she endured. I pay my respects to director Hosoda. He conceived and executed the idea of adapting Shakespeare’s classic Hamlet into an animated film. Many directors likely never even considered such a thing.”



TOP: The character of the Mysterious Old Woman allowed for many interpretations, so it took time to align the visions of Tadahiro Uesugi and Mamoru Hosoda.
MIDDLE: Hijiri and Scarlet are not only from different eras, but also have polar-opposite attitudes toward life.
BOTTOM TWO: The first image that came to mind for Jin Kim when designing Hijiri was world-famous Japanese baseball player Shohei Ohtani. Scarlet was given very messy hair and a complex silhouette.

INSIDE THE ON-SET VFX SUPERVISOR’S SURVIVAL KIT – WITH SOME ADVICE
By TREVOR HOGG
With companies now specializing in data capture, and with the technology becoming more sophisticated and mobile, a visual effects supervisor no longer needs to pack an extensive on-set survival kit. The apps found on iPads and iPhones can do practically anything in a pinch, from scanning to quick HDRIs, and there is the added security of data wranglers to assist with data acquisition and management. There are still essential tools to keep on hand, but their form and application have changed significantly.
“It’s amazing that we still rely on gray and mirror balls and Macbeth charts, but we do,” notes Christopher Townsend, Visual Effects Supervisor for Shang-Chi and the Legend of the Ten Rings. “Along with HDRIs, they still provide the best ground truth of how a known object looks in the lighting environment. Along with rolls of gaffer tape and mini-LEDs for tracking markers, and various measuring devices, my iPhone is my most useful tool: shooting stills and video, tracking the sun path, calculating, messaging, note-taking, sketching, mapping, digital slating, capturing splats, editing, the list goes on, and it’s an internet accessor and communication tool, too.”
OPPOSITE BOTTOM: The iPhone does everything from LiDAR scanning, time-of-day GPS, light meter and face scanning to operating a camera in Unreal Engine. (Image courtesy of Apple)
One cannot downplay the significance of the iPhone. “The iPhone does everything,” remarks Robert Legato, Visual Effects Supervisor for The Lion King. “LiDAR scan, time-of-day GPS, light meter, face scan, and operates a camera in Unreal Engine. The iPhone has become indispensable on set to the point where I rarely need to carry anything else. While I don’t typically take written notes, I can easily dictate thoughts, questions, measurements or comments, and use tools like ChatGPT to structure
TOP: For Christopher Townsend, gray and mirror balls and Macbeth charts still provide the best ground truth of how a known object looks in the lighting environment when on set shooting Shang-Chi and the Legend of the Ten Rings. (Image courtesy of Marvel Studios)
OPPOSITE TOP: Greg Butler on set with Ranjeet Singh Nanrah (left) and Richard Morgan (center) during filming of Season 2 of The Lord of the Rings: The Rings of Power. (Photo: Richard Bain)

them into organized notes. The built-in LiDAR scanner, although not something I’d use in active production, is incredibly useful for quickly assessing spatial relationships and measurements on set. For lighting references, the iPhone is especially valuable, not just for capturing videos or photos of setups, but because each image automatically includes GPS data and time-stamp metadata. This allows me to later analyze the lighting conditions using sun-tracking apps to determine exactly where the sun was positioned at any given moment. I also rely heavily on production-specific apps like Panavision’s lens simulator to visualize field of view, and Helios, which accurately predicts sun paths throughout the day. Together, these tools mean I’m equipped with nearly everything I need in one device. There’s no need to carry extra gear. Since I’m not typically involved in data wrangling, the




iPhone covers my needs entirely. I’ll often take a photo of the slate as well, which allows me to correlate my notes and visual references with the exact shot. And now, with the integration of AI tools for creating quick storyboard frames or visualizing concepts on the fly, the iPhone has essentially consolidated the core toolkit of a visual effects supervisor into a single pocket-sized device.”
Analog tools still have a purpose, such as laser pointers and small flashlights. “Whenever I am telling people where to move something or what needs to be set up physically, I always pull out my laser pointer so I can point to exactly where I want something,” remarks Eric Carney, On-Set Visual Effects Supervisor for Game of Thrones Seasons 4 to 8. “I have two, a red for indoors and a green for outdoors. I’m constantly working on dark stages, and if you’ve ever gone outside and then back into the dark stage, it takes a minute or so for your eyes to adjust; that’s where a small flashlight is helpful to make sure you don’t trip over something.” A small laser distance-measuring tool, a disto, is helpful. “Even though visual effects wranglers are constantly measuring things with a disto, for notes I find it incredibly useful to be able to quickly measure something on my own. For instance, if I want to set something from the greenscreen, I can take my disto and quickly measure six meters.” There is a personal iPad for notes and a production one with the QTAKE app to see the imagery from the cameras. “I find it annoying to have to carry around two iPads, but they often won’t let you use your own personal iPad and their iPad is locked down, so I can’t install my own apps on it.”
Particular standout apps include Google Docs, Google Sheets, Google Drive, Artemis, pCam, Trig Solver, Sun Surveyor and Cyclops. “Google Docs and Google Sheets are great for writing notes and creating spreadsheets with info,” Carney states. “I have scene-by-scene and sometimes shot-by-shot breakdowns of the visual effects work with pictures and text and methodology that I can quickly reference. Artemis is a digital viewfinder app that is great for looking to see what something will be like when using a specific lens on the cinema camera. pCam converts lens FOV from one camera to a lens on another camera, and lots of other things. Trig Solver is a simple trigonometry app that is useful when you need to do a little trigonometry, which comes up more often than you’d think. When looking at sun paths, Sun Surveyor is incredibly useful to make sure when you need to shoot elements that you know when the sun will be in the right place to match the lighting. Cyclops is an AR viewfinder app [developed by The Third Floor] that is similar to Artemis, but allows you to add CG creatures and other assets; it’s great for visualizing what a creature or set extension will look like in situ. I also use it to make sure a bluescreen will be big enough to hold the actor.”
Different questions have to be answered on set. “Are we shooting what we need?” reflects Greg Butler, Senior Visual Effects Supervisor at DNEG. “Is precious shoot time being spent on the right things? Do we need another take of a stunt or special effects explosion, or can we work with it in post? Do we need any special reference passes? Those decisions are the realm of the director, DP and 1st AD, but they are usually looking for an informed opinion from the person [me] who will see it through in post.” Butler offers
TOP TO BOTTOM: QTAKE is used to log, capture, playback, edit and process video. (Image courtesy of QTAKE)
Cyclops AR was created by The Third Floor to enable the visualization of CG assets such as characters, creatures, vehicles and CG set extensions by compositing them into the iPad’s camera view in real-time. (Image courtesy of The Third Floor)
Sun Surveyor can visualize sunlight and shade throughout the year for any site. (Image courtesy of Sun Surveyor)
some advice for a visual effects person on set. “Do not run on a set or near one. Show up at least 30 minutes before shooting call time. Many important discussions and decisions for the day happen then. Make sure the 1st AD knows who you are as soon as you arrive on a new set. Don’t shout. You have a radio or are near someone who does. If you see an issue on the first take, you may be able to fix it for the next one; that’s when your relationship-building with other departments, especially grip and camera, can pay off. Never touch or borrow another department’s equipment without permission.”
“Be very careful what you say and who you say it to,” Butler continues. “Chain of command is important. One comment at the wrong time, or to the wrong person, can spin out of control quickly. Avoid bright clothing as it can interfere with actor eye lines. Be careful what you say on set as there are ears everywhere. Be situationally aware and socially adept. Keep your eyes and ears open. Be prepared. Read the call sheet ahead of time, and know the expected visual effects work, as well as what could turn into VFX work. Watch your step at all times. There are tripping hazards all over the place. Always make way for people carrying something heavy or awkward. Know which way the camera is pointing before walking onto a set. If you really need something, ask for it. It takes a lot of effort and money to get a shooting unit together. You’re there to help make the most of it. Be confident but not arrogant. Be curious, helpful and yourself.”
Visual effects supervisors, shifting between representing the clients and vendors, are influencing the problem-solving and the approaches taken on set. “Whether I’m in the trenches on set or months deep in pre-production planning, I believe the best visual effects come from collaboration, clarity and readiness,” notes Roni Rodrigues, Visual Effects Supervisor for News of the World. “That’s where the survival kit comes in and why I never show up without it. Let’s start with what’s always glued to my hand, which is my iPad. But it’s not the hardware; it’s the app that matters. Notability is the unsung hero of my day. I drop in call sheets, scripts, storyboards and notes from the director. One tap and I can drag an email attachment straight into its folder. No more, ‘Where’s that PDF buried in yesterday’s inbox avalanche?’ It’s all in one place, searchable and scribble-friendly. Organized chaos, but the good kind.”
Having a spare T-shirt is a good idea. “This may sound funny, but trust me, after 12 hours on a dusty location, sprinting between monitors, drones and greenscreens, a fresh T-shirt is worth its weight in gold,” Rodrigues states. “Especially if there’s a surprise production meeting or a client dropping by.” Pairing a mirrorless camera with a belt-mounted camera clip is helpful. “As a 2D artist at heart, I can’t resist snapping reference photos. Having a belt clip for my camera keeps it handy without turning me into a backpack mule. From textures and light cues to last-minute continuity shots, having that camera ready is like having a second pair of eyes.” A 360-degree camera is a reliable sidekick during tech recces. “I snap 360-degree shots of a location in seconds, while still fully engaged in a visual effects chat with the DP and director. Later, I can virtually ‘walk through’ the space from any




Helios Pro allows cinematographers and photographers to visualize exactly what shots will look like at any location, at any time of the year or day. (Image courtesy of Artemis)
Artemis Pro comes with a built-in series of looks that emulate LUTs, as well as allowing for customization. (Image courtesy of Artemis)
David Eubank developed the pCAM app to get on-set calculations done relatively quickly. (Image courtesy of pCAM)
TOP TO BOTTOM: Artemis Pro can create custom frame lines for any shape, size or aspect ratio. (Image courtesy of Artemis)
When working on shows like The Nevers, it is important for Johnny Han to have items such as Zeiss mini eyeglass spray because one cannot be a visual effects supervisor without being able to see clearly. (Image courtesy of HBO)
Cyclops AR, created by The Third Floor, allows filmmakers to visualize what CG assets, like characters or set extensions, would look like in position.
(Image courtesy of The Third Floor)
angle. It’s a game-changer for planning clean-up, extensions and even HDRI setups.”
Velcro tracker markers have become part of the on-set routine. “A couple of years ago, I started using tracker markers with Velcro, and I’ll never go back,” Rodrigues recalls. “Picture this: We’re mid-shot and someone realizes that nice clean greenscreen CU has zero markers. No problem. I channel my inner ninja, chuck a few Velcro dots into place and boom! We’re back in business.” Some traditional tools never get old. “Gray and chrome balls are the holy relics of visual effects that have saved many a CG comp from turning into a lighting nightmare. A laser measure with peak viewfinder is for those moments when the only way to explain depth is to show it. It is good to have an HDRI kit with Nodal Ninja because you will be asked to shoot a clean 32-bit HDRI at golden hour while the special effects team ignites something in the background. A texture camera with multiple lenses is great for grabbing that flaky rust, blood-splattered floor or mossy rock that will become the environment in post. GoPros are discreet, reliable, perfect for secondary references or even plate backups. And finally, gaffer tape in every color, which fixes everything from a broken kit to taping tracker markers to actors’ shoes [don’t ask!].”
For those wondering what can be kept inside of a backpack instead of a Pelican case, here are some items: “A tiny moleskin notepad for notes, notes, notes; Sharpies of various colors because big, bold colors get people’s attention when sketching something for them; and a Xrite pocket color chart for those off-the-cuff iPhone photos or videos I know I will use as a comp element later,” remarks Johnny Han, Visual Effects Supervisor for The Penguin. “Zeiss mini eyeglass spray is important because you can’t be a visual effects supervisor without being able to see clearly. Metric and imperial small tape measures end debates about if someone’s disto-meter is working right. USB-A to USB-C adapters as well as a NVME M2 4TB drive, as you always need one super-fast drive, just in case. It’s also good to have a USB thumb drive and one drive you don’t mind losing when lending it. For those overtime shoot days in the middle of the forest, a power bank is useful. To stay protected and healthy, include an Arc’teryx waterproof light down jacket, sunblock, Chapstick, hand sanitizer, bug spray, itch cream, antihistamine, Tums, Tylenol and mints. For those long onsite Zooms when your AirPods give out, wired earbuds come in handy.” Peculiar items are also important. Han adds, “In one production office I worked in, they’d always run out of the good stuff, so I’d bring my own Nespresso coffee pods. When your bag is a common black North Face, it can go missing a lot, so bring an Apple AirTag. Never underestimate the importance of spare socks because on London shows, socks can get wet very easily. Cash is not obsolete, especially when going into remote locations that don’t take American Express! And finally, rolls of 1-inch gaffer tape, black and/or green, as I promise it will get used at some point.”
A survival kit is about surviving the unpredictable world of location shooting. “Fortunately, over the past two decades, on-set visual effects teams, particularly data wranglers, have
TOP TO BOTTOM: Sun Surveyor predicts sun and moon positions with a 3D compass, interactive map, AR and a detailed ephemeris. (Image courtesy of Sun Surveyor)
become so well organized and professional that some of the kit responsibility/stress has been taken off the supervisors, and the wranglers typically handle the bulk of image and reference capture gear,” observes Adrian de Wet, Visual Effects Supervisor for The Hunger Games: The Ballad of Songbirds & Snakes. “This includes the chrome and gray reference spheres, DSLRs for HDRI capture, and Macbeth charts for color reference. The challenge comes when multiple units are shooting simultaneously. At that point, I either need to deploy extra gear from my own kit or, on more than one occasion, call in a favor from a friendly local vendor [Thank you, DNEG!].” Witness cameras are an important resource. De Wet notes, “Whether it’s GoPros or Blackmagic cameras, I’m meticulous about ensuring our multiple witness cameras are perfectly in sync. To make that happen, I carry Tentacle Sync devices to maintain consistent LTC across every camera. Not everyone brings them, and recently they’ve been hard to get, so I keep them in my kit as an insurance policy. I travel with a DSLR on a tripod and a nodal head, paired with a small but high-quality prime set – 24mm, 50mm and 200mm – plus a versatile 24-105mm zoom. For HDRIs, especially with second units, I keep an 8mm fisheye lens handy. I also carry my Leica Q2 for quick all-day, point-and-shoot reference capture. And of course: batteries. Backup batteries. And backups for the backups”
Essential for a multitude of reasons are a disto laser measure and measuring tape. “I also bring as many memory cards as I can fit in my bag, along with external drives for immediate onsite backups,” de Wet states. “My laptop travels everywhere with me, as does a color-calibrated iPad in a waterproof case, preloaded with techvis, previs, storyboards and scripts. The iPad is invaluable for monitoring a live feed via QTAKE. I also carry multiple chargers and a powerful Anker brick that can simultaneously top up a laptop and a phone, plus more hard drives for daily incremental backups.” Cables ensure connectivity. “My cable bag is a tangle of every conceivable connector, some so obsolete I’m not even sure what they connect to anymore [FireWire?]. A mobile hotspot router can be a lifesaver. Sometimes the visual effects department needs its own private Wi-Fi network to send large files to vendors without depending on patchy on-location Internet.”
Nothing replaces having a notebook and pen. “There’s something about jotting down a director’s ideas by hand,” de Wet reflects. “It’s faster, less distracting and, to me, has better optics than tapping away on an iPhone when someone’s talking to you.” A label maker is non-negotiable. “Otherwise, half your cables will walk away by wrap. I also stock up on camera tape in every color imaginable for impromptu tracking markers.” Weather can be brutal. “I keep a waterproof shell jacket, waterproof trousers and boots at the ready. For cold night exteriors in winter in Toronto, an arctic-rated down jacket is essential. I also pack a folding chair, thermal flask, hats, gloves, face masks, sunglasses, cleaning cloths, lens spray, a small glasses repair kit and my own walkie-talkie earpiece with microphone. Ultimately, my kit is about redundancy, organization and preparation. On a film set, if something can fail, get lost or run out of power, it eventually will. My job is to make sure that when it happens, we’re ready.”


TOP: Trig Solver is an app that does simple trigonometry, which comes up more often than one would think. (Image courtesy of Creature Code Mobile)
BOTTOM: BOGS Classic High Men’s Insulated Boots come in handy when dealing with cold and wet weather. (Image courtesy of Bogs)
Visual Effects Society Recognizes 2025 Honorees
By ROSS AUERBACH
VES celebrated a distinguished group of VFX artists and industry leaders this past November at the 2025 VES Honors Celebration. The festive evening, hosted at Sony Pictures Imageworks, brought VES members together to acknowledge the incredible contributions made by each of this year’s Honorees.
The Visual Effects Society proudly welcomed two new Honorary VES Members: Jon Favreau, the award-winning filmmaker behind Iron Man (2008), The Lion King (2019) and television series including The Mandalorian (2019-2023), and Tim Sweeney, Founder and CEO of Epic Games, and creator of Unreal Engine.
“There is nothing more fun than programming tools that enable creatives to express their vision,” Sweeney said when accepting his VES Honorary Membership.
Favreau connected with ceremony attendees. “Cinema comes from the tradition of marrying storytelling with technology, and the people in this room are torchbearers for this tradition,” Favreau said.
This year, the VES Founders Award was presented to Bob Coleman, VES, for his sustained contributions to the art, science and business of VFX, and his distinguished service to the Society. Dennis Hoffman received VES Lifetime Membership for his work with the Society, the industry, and for furthering the values and interests of VFX artists around the world.
“We belong to an exceptional global industry, offering enormous opportunities to the best and brightest in our ranks around the world. Never underestimate the talent and experience that got you here and how it can be applied in unimagined endeavors,” advised VES Founders Award recipient Bob Coleman, VES.
The 2025 VES Fellows – Colin Campbell, VES, Darin Grant, VES and Gayle Munro, VES – were recognized for their outstanding contributions to VFX and dedicated service over 10 of the last 20 years.
The 2025 class of VES Hall of Fame inductees are Emmy Award winner and prolific visual effects artist Glenn Campbell (1956-2024); trailblazing silent film director and actress Mabel Normand (1893-1930); and co-creator of Godzilla and Ultraman, the “Father of Tokusatsu,” Eiji Tsuburaya (1901-1970).
For their exceptional dedication to the Society and long-term service as leaders of their Section Board of Managers, VES honored Rachel Copp (New Zealand), Eric Greenlief (Washington), Anthony Tan (Montreal), Agon Ushaku (Germany) and Philipp Wolf (Montreal).
Celebration attendees were eyewitnesses to an epic battle fought between Tsuburaya Productions’ hero Ultraman and the villainous Alien Baltan.



Dennis Hoffman, recipient of VES Lifetime Membership. VES Founders Award Honoree Bob Coleman, VES.
2025’s VES Fellows: Darin Grant, VES; Gayle Munro, VES; and Colin Campbell, VES.

TOP TO BOTTOM: VES Honorary Members Jon Favreau and Tim Sweeney with VES Board Chair Kim Davidson and Executive Director Nancy Ward.





TOP LEFT: Under attack! Tsuburaya’s Ultraman defeated Alien Baltan and banished him to the depths of outer space.
TOP RIGHT: Tammy Klein and Tsuburaya Productions President Masayuki Nagatake memorialized Hall of Fame inductees Glenn Campbell, Mabel Normand and Eiji Tsuburaya.
MIDDLE LEFT: Section Honorees Agon Ushaku, Rachel Copp and Eric Greenlief (L-R) were on hand to accept their honors.
MIDDLE RIGHT AND BOTTOM LEFT: Attendees connected at the VES Honors Celebration.
VES Announces Publication of the VES Handbook of Visual Effects, 4th Edition
On December 18th, the Visual Effects Society published the fourth edition of the VES Handbook of Visual Effects, edited by Jeffrey A. Okun, VES, Susan Zwerman, VES and Susan Thurmond O’Neal. Widely hailed as the definitive VFX resource for industry professionals, this guidebook has evolved along with the latest technology. This new edition stays on the cutting edge with contributions from 95 visual effects
experts and explores topics including: Script and shot breakdowns; Using AI in VFX and compositing; Immersive Experiences Utilizing AR/VR Technologies, Hemispheres and Domes; Use of Real-time Engines for Virtual Production, Previs and Animation, and so much more.
This guide to the latest in VFX technology is more than useful on set; it’s essential. Order your copy today: http://bit.ly/3JnG2yT.

VES Members Take In The VIEW
Last October, the VIEW conference in Torino, Italy, brought visual effects practitioners together to share their visions of the future of the industry. VES hosted the panel, “Bridging the Gap in VFX – Collaboration & Innovation,” with contributions from creative and technological players Kevin Baillie, Johnny Han, Michael Ralla and Christine Resch, moderated by Christina Caspers-Römer. The engaging conversation focused on how AI-assisted tools, open technology standards and cloud-based collaboration are changing the way we work and think.


VES Germany hosted a mid-week VES Aperitivo event welcoming members from around the world to Torino. Guests raised a glass of Aperol spritz and toasted to a great evening sharing insights and building a global community.


World VFX Day
The Visual Effects Society was proud to again partner with World VFX Day to shine a light on the art and craft of visual effects. VES presented two engaging 30-minute panel conversations. The first celebrated the innovative films of Georges Méliès, with a close-up look at Le Voyage dans la Lune (1902), linking his techniques to current VFX practices (hosted by archivist and VFX artist Kate Xagoraris and Board of Directors 2nd Vice Chair David Tanaka, VES). The second, “Bridging the Gap in VFX – Collaboration & Innovation,” highlighted excerpts from the VES-hosted VIEW Conference panel. The 3rd annual day-long event was held live from London on December 8th, the birthday of Georges Méliès, and streamed globally. Packed with interactive panels, roundtable conversations and industry collaborations, VES members were active participants and contributors to the worldwide program.


Group shot of all VIEW Conference speakers. Attendees packed the theater for Janet Lewin’s talk, “Beyond the Frame: 50 Years of ILM and the Next Era of Visual Storytelling.”
VES Germany welcomed VIEW attendees at VES Aperitivo. (Photo by Boyan Georgiev)


David Tanaka, VES, and Kate Xagoraris discuss Méliès’ cinematic innovations. Hand-colored frame from Georges Méliès’ Le Voyage dans la Lune, considered the first science fiction film.
VES presented highlights from the VIEW Conference panel, “Bridging the Gap in VFX – Collaboration & Innovation.”
LEFT TO RIGHT: VES hosted the panel “Bridging the Gap in VFX – Collaboration & Innovation.”
LEFT TO RIGHT: World VFX Day 2025 streamed worldwide live from London on December 8th
Overview of Generative Artificial Intelligence (GenAI)
By LUCIEN HARRIOT
Edited for this publication by
Jeffrey A. Okun, VES
Abstracted from The VES Handbook of Visual Effects, 4th Edition
Edited by Jeffrey A. Okun, VES, Susan Zwerman, VES and Susan Thurmond O’Neal

Understanding Generative AI and Learning Models
Generative AI refers to algorithms that can produce novel content – images, videos, and even entire scenes – based on what they have learned from vast datasets. Unlike traditional tools that rely on hand-crafted rules, GenAI learns patterns from existing material and then extrapolates to form new, unseen results. Early Artificial Intelligence (AI) started in the ’60s. The past few years have seen increased access to tremendous computational power that has enabled a transformative jump to Large Language Models (LLMs) which, in turn, enabled Generative AI (GenAI) to write text like humans. GenAI technology can be trained on anything digital and, at the time of this writing, has the ability to recognize and/or generate sound, images, and video based on large datasets/models trained on billions of data files.
Machine learning is the act of training a model. At its core, AI is simply statistics and probability used to predict. An example of Predictive AI is when one’s phone tries to finish one’s sentences or suggest the next item in one’s social media feed. This technology can now be applied to any data from images to motion capture. Models need vast, diverse datasets to understand a wide range of visual styles, lighting conditions and objects. The broader the dataset, the better the model’s ability to generate high-quality, contextually relevant outputs.
Data Processing and Feature Extraction
Converting visual features into numerical representations means breaking down images and videos into mathematical forms. Before the model can learn, it examines pixels, edges, textures and color distributions to identify patterns and correlations. These insights enable it to reconstruct or generate new visuals that align with the original features.
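To make “visual features as numerical representations” concrete, here is a minimal Python sketch (illustrative only, not drawn from the Handbook) that encodes a single image into a fixed-length embedding vector with a pretrained, openly available CLIP model; the model name and file path are placeholders.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pretrained vision-language model (one common open choice).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder file name; any reference still will do.
image = Image.open("reference_frame.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    # The image becomes a 512-number vector that downstream models can
    # compare, cluster or condition generation on.
    embedding = model.get_image_features(**inputs)

print(embedding.shape)  # torch.Size([1, 512])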

Figure 5.2 Image generation from text prompts: “Fantastical creature,” “Futuristic cityscape.”
Creation and Manipulation Capabilities
Users can provide text descriptions or reference images, and the AI will generate new content that reflects these inputs, whether it is a fantastical creature, a futuristic cityscape, or a specific lighting setup. Beyond visuals, certain AI models understand text prompts and use this information to guide image and video creation. This interplay between language and imagery allows for more intuitive creative direction, letting artists describe what they envision and have the AI bring those ideas to life.
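As a rough illustration of prompt-driven generation (not a workflow prescribed by the Handbook), the sketch below uses the open-source diffusers library with a publicly available checkpoint; the model name, prompt and output path are placeholders.

import torch
from diffusers import StableDiffusionPipeline

# Any text-to-image diffusion checkpoint will work; this one is openly available.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

# The text prompt carries the creative direction; the model fills in the pixels.
image = pipe(
    prompt="fantastical creature perched above a futuristic cityscape, cinematic lighting",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("concept_v001.png")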
Custom Model Training for Specific Elements
Fine-tuning AI models with LoRA (Low-Rank Adaptation) methods allows artists to integrate small, specialized models into larger ones, achieving unique character designs or distinct texture palettes. By training these models on curated datasets, teams can ensure consistent characters, objects and environments across multiple shots and projects.
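In practice, loading such an adaptation can take only a few lines of code. The sketch below assumes a diffusers-style pipeline; the base checkpoint, LoRA file and prompt are hypothetical, standing in for a model fine-tuned on a curated dataset of one recurring character.

import torch
from diffusers import StableDiffusionXLPipeline

# General-purpose base model.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical LoRA trained on a curated dataset of one hero creature,
# so the character reads consistently from shot to shot.
pipe.load_lora_weights("./loras/hero_creature.safetensors")
pipe.fuse_lora(lora_scale=0.8)  # how strongly the adaptation biases the base model

image = pipe(
    prompt="hero_creature standing in drifting volcanic ash, key light from camera left"
).images[0]
image.save("hero_creature_lookdev_v001.png")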
Inpainting, Outpainting and Object Removal
Inpainting techniques focus on regenerating specific parts of an image. For instance, if a building’s facade requires correction, AI can seamlessly fill in missing details or repair damaged areas. This method is also ideal for removing unwanted elements, such as props, stray equipment or crew members, making cleanup tasks quick and efficient.
Outpainting, conversely, extends an image beyond its original boundaries. When reframing or enlarging a composition, AI intuitively adds new details that blend naturally with the existing lighting, texture and perspective, creating a cohesive and expanded visual.
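A minimal inpainting sketch, again assuming the open-source diffusers library (the checkpoint and file names are placeholders): only the white area of the mask is regenerated, which is how a stray piece of equipment or a crew member can be painted out. Outpainting works the same way, with the original plate placed on a larger canvas and the new border region masked for generation.

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

plate = Image.open("plate_0110.png").convert("RGB")      # original frame
mask = Image.open("plate_0110_mask.png").convert("RGB")  # white = regenerate, black = keep

# The prompt describes what should replace the masked region.
result = pipe(
    prompt="empty cobblestone street, overcast daylight",
    image=plate,
    mask_image=mask,
).images[0]

result.save("plate_0110_cleanup.png")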
Purchase your copy here: http://bit.ly/3JnG2yT
VES Germany: A Vibrant Community Engaged Locally and Globally
By ROSS AUERBACH



During a tour of LEDCave in Berlin, VES Germany took part in a hands-on demonstration of CHAOS’ “Project Arena” virtual production solution.
As the Visual Effects Society continues to increase in membership and international reach, each of its 16 Sections plays a major role in building relationships from the ground up and supporting the community with an active calendar of events. With more than 225 active members, VES Germany has thrived since it was founded in 2017. Just since 2022, Section membership has climbed 15% with in-person and virtual events drawing crowds and creating a large and welcoming community. Germany’s dedicated Board Chairs have faced the challenge of a membership spread across the country, and have brought their passion to holding events throughout the country’s VFX hubs – Berlin, Munich and Stuttgart – allowing the Section to continue to grow and engage.
Despite the diversity of locations, “our Section fosters dynamic engagement and allows us to reach a broader creative community. Coordinating simultaneous meetups or drives across these hubs isn’t always feasible, but we successfully manage to do so at least once a year – demonstrating both commitment and cohesion. Beyond that, we’re proud to be the first Section in our region to proactively initiate cross-border collaboration, engaging with fellow VES members throughout Europe and setting a precedent for international connectivity,” said VES Germany Chair Agon Ushaku. One recent event highlighted these burgeoning relationships: A visit to Platige Image Studio in Warsaw, Poland created meaningful connections between their VFX artists and VES Germany.
Stuttgart, the site of FMX (Film & Media Exchange), gives VES Germany the perfect opportunity to host the global VES community each year. The 2025 luncheon at Brauhaus Schönbuch delighted attendees and allowed VES members from all over the world to get to know each other in person. The Section also co-hosted the “FMX Women’s Breakfast Meetup” to start the day of events with camaraderie among women and allies in the industry.
Summer parties in Berlin and Munich brought VES Germany members and their families together to create long-lasting memories. VES members went to theaters to share the experience at screenings of new release feature films and episodics. These local events cement the Section as a part of the visual effects community across Germany and Europe.
VES Germany Vice-Chair Christine Resch remarked that Section members “span an impressive variety of projects from local productions to major international collaborations, making it a perfect hub for sharing knowledge and elevating each other’s work. The members bring skills from all over the world and use them to raise the bar for German productions in clever, thoughtful ways.” Section members are active in all areas of VFX work, including international and local film and episodics, gaming and commercial production, and have conducted research into cutting-edge VFX – virtual production, Gaussian splatting and the use of Generative AI in VFX shots. VES Germany hosted a
All photos courtesy of the VES Germany Section
TOP TO BOTTOM: VES Germany Board Members assemble at FMX2025 with VES Chair Kim Davidson.
Cross-border travel is nothing new for VES Germany. Members were welcomed to Platige Studios in Warsaw, Poland.
hands-on event at virtual production studio LEDCave in Berlin, where practitioners from CHAOS demonstrated its “Project Arena” virtual production solution, bringing virtual assets straight from the artists to the LED wall.
Spearheaded by VES Germany and now in its 6th year, the international education series “VES Megabrain Masterclass” continues to engage VFX artists with educational webinars by experts taking live Q&A on a wide variety of subjects. Held last April, “Masterclass Volume 11” featured Dr. Diana Arellano, Senior Lecturer for Technical Directing at Animationsinstitut of the Filmakademie Baden-Württemberg; Christopher Herrick, CG Supervisor, VFX Generalist and Character Artist; and Sebastian Schütt, Lead Compositor at Image Engine. “Megabrain Masterclass” will record its 12th session focusing on “Women in VFX” in April 2026 with presentations by Amanda Heppner, WeFX; Valentina Rosselli, ILM; and Kader Bagli, RiseFX. The series has branched out to include “Megabrain Insight,” live events focused on the future of the industry, and “Megabrain Students,” virtual panels that include international state-of-the-art student projects. All Masterclass sessions are recorded and available to the entire global membership on the VES website.
Over the past year, members of VES Germany traveled across Europe and represented the Society, speaking and networking at global events: IBC Amsterdam, SIGGRAPH and VIEW. Vice-Chair Jonas Kluger noted, “Our members are active across the globe, attending conferences, sharing knowledge and representing German VFX talent internationally. Positioned at the center of Europe, we’re surrounded by countries with thriving VFX industries, which makes collaboration and connection a natural part of who we are. We’re proud to be a hub where creativity and community come together.”
What makes VES Germany so special is that “it is a tight-knit network where you learn from some of the most diverse and talented people in the field, all while staying connected to the global VES Sections and the bigger conversation about where the industry is heading. These close connections make collaboration natural and meaningful, turning the community into more than just a professional network,” said Christine Resch. 2026 is packed with activity for VES Germany: Live nomination events for the 24th Annual VES Awards will be held in both Berlin and Munich (a first for the Section!); travel is already booked for international conferences; the location for the FMX2026 mixer is being scouted; and “Megabrain Students Volume 2” will record live in March. In June, fostering cross-section collaboration, VES Germany and VES France will co-host a major event for the 2026 Annecy International Animation Film Festival. Many more exciting events are planned throughout the year.
The Visual Effects Society would like to congratulate Chair Agon Ushaku, a 2025 VES Section Honoree, for the years of hard work he has put into leading the VES Germany Section.





Camaraderie was on the menu for a festive VES Germany Summer Party in Berlin.
VES Germany hosts live in-person nomination events in Berlin for the 23rd Annual VES Awards.
A capacity crowd in attendance at Trixter in Munich for a talk on Nukepedia.
VES Germany hosted a lively mixer at FMX2025 in Stuttgart, bringing together the global VES community.
TOP TO BOTTOM: A meeting of the minds. VES France and VES Germany Section Chairs join forces to plan an event for Annecy Festival 2026.
Myth, Nature, Code: The Avatar VFX Philosophy

Directed by James Cameron, the entire Avatar series is fundamentally influenced by a cinematic drive for absolute immersion and technical innovation, building upon the grand scale of widescreen historical epics (Lawrence of Arabia, Dances with Wolves, etc.) and the high-concept visual storytelling of classic science fiction such as Star Wars and Cameron’s own The Abyss and Terminator 2, which pioneered digital performance and motion capture. The core VFX influence is the refinement of performance capture (mocap), evolving techniques pioneered for characters like Gollum to achieve unprecedented fidelity in real-time facial and body movements, and making the Na’vi characters the primary focus rather than a secondary element. Technologically, Cameron has pushed the industry with stereoscopic 3D capture (using his custom Fusion Camera System) and real-time virtual camera directing, allowing him to shoot within the digital world as if on a physical set, and the use of High Frame
Rate (HFR) in The Way of Water to sharpen visual clarity, especially in fast-moving and underwater sequences. This ongoing technical ambition continues with Fire and Ash (see article, page 68), which introduces the fire-worshipping Ash Clan, posing new challenges in volumetric rendering and fire simulation and incorporating new, highly detailed environments that further ground the fantastical world of Pandora in photorealistic detail. In many ways, the Avatar series fundamentally redefines what audiences accept as “real” in cinema, proving that photoreal digital characters could carry entire films emotionally, not just technically. Cameron’s insistence on performance capture as acting – not animation – legitimized virtual production as a serious filmmaking tool and pushed the industry to invest billions in LED volumes, real-time rendering and neural rendering technologies. The films don’t just raise the VFX bar; they move it into a different arena entirely.
Image from Avatar: Fire and Ash courtesy of 20th Century Studios.

