

LIFELIKE HOPPERS


Welcome to the Spring 2026 issue of VFX Voice!


This edition spotlights the recent 24th Annual VES Awards Show. Congratulations to all our nominees, winners and honorees – and thank you to the worldwide VFX community whose talent, passion and support continue to inspire the VES and VFX Voice! Our cover story leaps into Hoppers, Pixar’s new animated adventure that began life in 2D before evolving into a richly layered 3D experience. We also explore the expanded effects in Season 2 of Netflix’s live-action, sea-voyaging One Piece, from its stretch-powered hero to a fully CG character, a pair of towering giants and a massive sperm whale. In gaming, we introduce you to the evocative environmental art – and the compelling creative philosophy behind it – that elevates Ghost of Yōtei.
Elsewhere in this issue, we examine the intersection of science and art in creature animation, track the realities of time zone-crossing VFX pipelines on international co-productions, and convene renowned supervisors to discuss how three of the greatest VFX films of all time pushed the art and craft to new heights. Our Spring special focus turns to VFX in India, and in honor of ILM’s 50th anniversary, we salute the Dykstraflex – the ingenious homemade motion control camera system that made the VFX in Star Wars possible. We also profile Framestore Lead Modeler Crystal Bretz and Intel Principal Engineer Attila T. Áfra.
Be sure to visit the VES Awards Show Photo Gallery, capturing the 2026 award winners, the celebration, and presentation of the Lifetime Achievement Award to Producer Jerry Bruckheimer and the VES Visionary Award to Wētā Workshop Co-Founder and Chief Creative Officer Sir Richard Taylor.
Dive in and meet the innovators and risk-takers who push the boundaries of what’s possible to advance the field of visual effects.
Cheers!
Kim Davidson, Chair, VES Board of Directors
Nancy Ward, VES Executive Director
P.S. You can continue to catch exclusive stories between issues, available only at VFXVoice.com. You can also get VFX Voice and VES updates by following us on X at @VFXSociety and on Instagram at VFX_Society.

FEATURES
8 VFX TRENDS: CREATURE ANIMATION
Science and art: What it takes to believe in a creature.
14 COVER: HOPPERS
Pushing the idea that animals know more than they show.
18 INDUSTRY: INTERNATIONAL CO-PRODUCTIONS
The pros and cons of taking a more global view of VFX.
26 PROFILE: CRYSTAL BRETZ
Framestore Montreal’s Lead Modeler accepts the challenge.
30 TELEVISION/STREAMING: ONE PIECE
An entirely CG character joins the Straw Hats for Season 2.
36 SPECIAL FOCUS: VFX IN INDIA
Capacity for growth continues India’s rise as a global power.
42 VFX VAULT: DIGITAL INNOVATORS
How Terminator 2, Inception and Life of Pi changed VFX.
48 THE 24TH ANNUAL VES AWARDS
Celebrating excellence, innovation and artistry.
56 VES AWARD WINNERS
Photo gallery of the winners in Film, Animation and TV.
62 VFX HISTORY: THE DYKSTRAFLEX
The motion control camera system home-grown for Star Wars.
70 TECH & TOOLS: ADVANCING HOLOGRAMS
Technology on the bridge to future visual media innovation.
74 PROFILE: ATTILA T. ÁFRA
Academy-honored Intel Engineer is busy thinking about AI.
80 VIDEO GAMES: GHOST OF YŌTEI
The power of nature telling the story in color, light and weather.
86 VR/AR/MR TRENDS: STAR WARS: BEYOND VICTORY
Lucasfilm game introduces a new twist via MR integration.
DEPARTMENTS
2 EXECUTIVE NOTE
90 THE VES HANDBOOK
92 VES SECTION SPOTLIGHT – TORONTO
94 VES NEWS
96 FINAL FRAME – VFX VISIONARIES
ON THE COVER: Mabel Beaver is influenced by her human tendencies in Hoppers (Image courtesy of Disney/Pixar)


SPRING 2026
VFXVOICE
Visit us online at vfxvoice.com
PUBLISHER
Jim McCullaugh publisher@vfxvoice.com
EDITOR
Ed Ochs editor@vfxvoice.com
CREATIVE
Alpanian Design Group alan@alpanian.com
ADVERTISING
Arlene Hansen arlene.hansen@vfxvoice.com
SUPERVISOR
Ross Auerbach
EDITORIAL ASSISTANT
Desiree Bowie
CONTRIBUTING WRITERS
Trevor Hogg
Katie Kasperson
Chris McGowan
Barbara Robertson
ADVISORY COMMITTEE
David Bloom
Andrew Bly
Rob Bredow
Mike Chambers, VES
Lisa Cooke, VES
Neil Corbould, VES
Irena Cronin
Kim Davidson
Paul Debevec, VES
Debbie Denise
Karen Dufilho
Paul Franklin
Barbara Ford Grant
David Johnson, VES
Jim Morris, VES
Dennis Muren, ASC, VES
Sam Nicholson, ASC
Lori H. Schwartz
Eric Roth
Tom Atkin, Founder
Allen Battino, VES Logo Design
VISUAL EFFECTS SOCIETY
Nancy Ward, Executive Director
VES BOARD OF DIRECTORS
OFFICERS
Kim Davidson, Chair
David Tanaka, VES, 1st Vice Chair
Brooke Lyndon-Stanford, 2nd Vice Chair
Frederick Lissau, Secretary
Jeffrey A. Okun, VES, Treasurer
DIRECTORS
Fatima Anes, Laura Barbera, Alan Boucek
Kathryn Brillhart, Colin Campbell, VES, Mary Carr
Christina Caspers-Römer, Emma Clifton Perry
Rachel Copp, Dayne Cowan, Laurence Cymet
Dave Gouge, Kay Hoddy, Dennis Hoffman
Thomas Knop, VES, Karen Murphy, Maggie Oh
Joel Pennington, Robin Prybil, Christine Resch
Lopsie Schwartz, Agon Ushaku, Sean Varney
Sam Winkler, Philipp Wolf
ALTERNATES
Klaudija Cermak, Fred Chapman, Aladino Debert
Jess Loren, William Mesa
Visual Effects Society
5000 Van Nuys Blvd. Suite 310
Sherman Oaks, CA 91403 Phone: (818) 981-7861 vesglobal.org
VES STAFF
Elvia Gonzalez, Associate Director
Jim Sullivan, Director of Operations
Ben Schneider, Director of Membership Services
Wayne Watkins, Donor Relations Manager
Charles Mesa, Media & Content Manager
Eric Bass, MarCom Manager
Ross Auerbach, Program Manager
Colleen Kelly, Office Manager
Shannon Cassidy, Global Manager
Adrienne Morse, Operations Coordinator
P.J. Schumacher, Controller


THE SCIENCE OF CREATURE ANIMATION: ACCURACY VS. ARTISTRY
By KATIE KASPERSON
Talking animals, aliens, fantastical beasts – when we see a creature onscreen, what does it take for us to believe in it? In VFX and animation, this crucial question is a guiding light that keeps creators walking the line between biomechanical accuracy and artistic license. Striking that delicate balance can mean the difference between a story falling flat and one that hits its intended emotional beats.
INVENTING AN ALIEN
Developing a creature from scratch begins with addressing every detail. “We start with the story, the tone and what the director wants,” says DNEG’s Robyn Luckham, Animation Director on Mickey 17. Tasked with animating the Creepers, alien creatures that inhabit the film’s icy planet of Niflheim, Luckham spoke with director Bong Joon Ho and the film’s lead designer to nail down the Creepers’ every characteristic. “It starts with where they live and how the creatures are formed, based on their environment. Then, we get into character – what characters do they play in the story, and what’s the tone of the story? Is it a serious film or more playful? Are they more playful?”
TOP: With a background in zoology, Framestore Animation Director Pablo Grillo brings an in-depth understanding of animal anatomy, psychology and behavior to his VFX work. (Image courtesy of Framestore and StudioCanal Films Ltd.)
OPPOSITE TOP: Animation Director Pablo Grillo believes what made the Hippogriff in Harry Potter and the Prisoner of Azkaban so compelling was that it was naturalistic in its performance and paid homage to the real world. (Image courtesy of Framestore and Warner Bros. Entertainment)
The Creepers have their own character arc. According to Luckham, at first they’re ‘repulsive,’ but eventually they ‘win you over.’ To inject them with a ‘cute factor,’ Luckham drew inspiration from bear cubs and the Cat Bus in My Neighbor Totoro, as well as walruses, millipedes, horses and dogs. “What creatures are out there in our world that represent something similar,” he asks, “and how can I make a Frankenstein’s monster?”
Luckham also considered their habitat: how they eat, breathe and move, and how they protect themselves. “You have to make sure there’s an alignment between the creature’s design and the environment. There are a lot of assumptions you make because you’re on Earth, and when you’re creating a fantastical creature, you have to reassess. You’re taking a leap when doing an alien,” he continues. “You can make it as fantastic as you want, but you have to justify it because everyone will know if you haven’t. Sometimes, it has to look real, and sometimes it has to be more cinematic.”
After addressing the unknowns, Luckham prepares for the first animation test. The whole process takes over a year. “The Creepers are an evolution – they started as a [2D] picture, but in the film, they’re standing up, they’re rolling, they’re doing all sorts of things. It takes time to get that right.”
While animators and VFX supervisors often turn to scientists – zoologists, paleontologists, biologists and the like – for guidance, Luckham was afforded no such luxury on Mickey 17. “I would love to talk to scientists on every project, but it depends on the show and the budget.” On a sci-fi film like Mickey 17, the creatures in question are entirely fictional. “It’s a fantasy creature, so having a scientist there – they don’t really have a frame of reference,” Luckham remarks. “I could make the judgments myself.”
Instead of consulting with external academics, the crew employed its own team “of phenomenally trained Creeper supervisors” who were “experts in anatomy and muscle activation,” according to Luckham. These experts ultimately advised on movement – the jiggle of fat or the rippling of skin. “You have to build layer by layer,” he argues.
After rounds and rounds of animation tests and tweaks, the Creepers finally came into their own. Each creature has “a thick outer layer to protect it from all the elements,” Luckham states, “and it has a small mouth with hard ends that it uses to crunch rocks.”
It’s one thing for a creature to exist; it’s another for it to interact. Crucial to the story is the Creepers’ ability to communicate, not only with one another but also with the titular Mickey. “There’s that one little line of dialogue, which is, ‘How are you, Mickey?’ I had to work backwards,” Luckham admits. “All our speech comes from breath: breathe in, speak, breathe out. That goes through our larynx. That’s our logic.”
Occasionally, Luckham would ‘bend the rules’ at Bong Joon Ho’s instruction. “You have to make compromises,” he says, but achieving a sense of reality was his primary goal. “I want to get it as real as possible because every creature we see, we put our emotions into, and we can feel that emotion back. It’s our interpretation. It’s quite unique and quite a privilege to create a brand-new creature,” he adds. “We’re creating a language, creating a physiology from scratch. It was a real challenge, but that’s the work I enjoy.”
ACCESSING THE SOUL
Like many of today’s movies, Mickey 17 is a page-to-screen adaptation of the sci-fi novel Mickey7. While the written word knows no bounds in what imagery it can inspire, films are limited by their reliance on visual perception. Russell Dodgson, VFX Supervisor at Framestore, faced this fact while working on His Dark Materials, HBO’s live-action interpretation of Philip Pullman’s fantasy trilogy. “We had to create a character called a Mulefa, and the way it’s written in the books is an absolute design disaster,” he states. “It’s got a diamond-shaped body, four legs and no spine, and it can roll around on seed pods. [Pullman has] written something fantastical, but he hasn’t necessarily worried about how it translates to the screen. That’s not his concern. He tells you that it’s elegant, but when you see it, it doesn’t look as elegant as described.”
Like Luckham, Dodgson began with the big questions. “What’s pleasing to look at? Which rules do you stick by? Do you give it a more human-like ability to express, or do you honor the true mechanics?” he asks, noting the importance of intention when it comes to creature design.


TOP: To inject the Creepers with the ‘cute factor’ for Mickey 17, Animation Director Robyn Luckham pulled inspiration from bear cubs and the Cat Bus in My Neighbor Totoro as well as from walruses, millipedes, horses and dogs. (Image courtesy of DNEG and Warner Bros. Pictures)
BOTTOM: Animation Director Robyn Luckham had to make sure there was an alignment between the creature’s design and the environment in Mickey 17. The Creepers are fantasy creatures, so science played no role. (Image courtesy of DNEG and Warner Bros. Pictures)
Dodgson contacted a zoologist to advise on the non-human characters in His Dark Materials. “We went through all the analogous creatures that exist, that we could draw from, to build this complete fantasy creature. There’s a very fine line between what rules you can break and what you can’t, relative to an audience’s perception or what they understand about something. You slowly build this picture that works through trial and error.”
For every human in His Dark Materials, there is an animal companion called a daemon – a living manifestation of the person’s soul. “The whole point is that they have a human consciousness,” explains Dodgson, who called upon a team of puppeteers during production. “As long as you have puppeteers who are sensitive to performance, who understand the rhythm of the real creature and the body language that is going to work with an actor, then what you’ve got on set is something that is physically the right shape, moving in a similar rhythm and tempo [as the animation], but has this human intent.”
Since the daemons act more civilized than wild beasts, “we had to strip out animal behavior and replace it with human focus and intent,” Dodgson notes. “We had to strike a delicate balance. There’s so much bidirectional language between animals and us. That’s why they’re such a beautiful vessel for connecting with an audience.”

BIRDS, BEARS AND FANTASTIC BEASTS
Pablo Grillo, a seasoned Animation Director at Framestore, has animation credits across the Paddington, Fantastic Beasts and Harry Potter franchises. Grillo believes that all animals, humans included, share the same essence. “We’re all vertebrates and, going even beyond that, are all made of the same recipe,” he says. “Once you understand that, you can find the throughlines. A lot of differences are arbitrary.”
With a background in zoology, Grillo brings an in-depth understanding of animal anatomy, psychology and behavior to his VFX work. “Visual effects animation demands that the material looks completely authentic when it’s put in a real space and against live action. We have to do a huge level of study and be very analytical. That’s what binds the artistic and scientific.” Whatever the project, Grillo stresses the significance of realism and universality as it relates to creature animation. “Finding something familiar to lean on is essential,” he shares. “I always tell other animators to look at animal ‘fail’ videos. I think there’s a lot to be learned there. You experience feelings of optimism, failure and shame. What you’re trying to do is access a truth.” Harry Potter and the Prisoner of Azkaban, for instance, introduces a character called a Hippogriff, a legendary creature that’s both bird and mammal. “I was lucky enough to work on the Hippogriff,” Grillo recalls. “What made it so tangible was that it was utterly realistic in its performance – it looked real. It was naturalistic. It wasn’t trying to do anything outlandish. I always thought that was a successful creature because of its homage to the real world. It felt absolutely compelling.”

TOP: Once Animation Director Pablo Grillo established the basics of the Niffler for Fantastic Beasts and Where to Find Them, he endowed it with a personality, beginning with the aggression of a honey badger mixed with other animal traits. Nifflers are known for their insatiable attraction to shiny objects, precious metals and jewelry, which they store in a belly pouch. (Image courtesy of Warner Bros. Pictures)
BOTTOM: Visual Effects Supervisor Russell Dodgson contacted a zoologist to advise on the non-human characters in His Dark Materials. (Image courtesy of Framestore and HBO)




TOP TWO: Among the visually-striking creatures in Fantastic Beasts: The Crimes of Grindelwald is the Zouwu. The team referenced lizards for body movement in contrast to the cat-like facial performance. They also took reference for her head and tail movements from Chinese dragon parade performances. (Image courtesy of Warner Bros. Pictures)
BOTTOM TWO: Puppeteers who worked on His Dark Materials understood the rhythm of the creature and the body language that works with an actor, moving in a similar rhythm and tempo as the animation, but with human intent. (Image courtesy of Framestore and HBO)
Years later, Grillo joined the Paddington movies, animating the beloved titular bear himself. “Paddington is a young bear, but more importantly, he’s a person that you can believe and invest in. There was a choice in terms of defining his anatomy, to have him keep up with the other actors and walk normally, and for that not to be a distraction,” Grillo says. In this case, he broke away from biomechanical accuracy and instead embraced a narrative-driven design. “We had to devise an anatomy where the legs poke out from under the belly; we built a strange hybrid pelvis. It was a creative decision.”
Grillo returned to the Harry Potter cinematic universe for Fantastic Beasts and Where to Find Them, a film that introduces audiences to extraordinary fictional beings. “One of the most enjoyable creations for me was the Niffler because it’s a creature that, on the page, was almost impossible in that it was very small but had to be very fast and voracious,” Grillo describes. “It had to be a humorous creature – a comedy actor as much as an animal – but it had to look real. It had to look sweet. I built a lot of mood boards for the director and offered up a variety of creatures,” he continues. “We homed in on the monotremes, platypuses and echidnas. There’s a primitivism to them that’s already entertaining by itself. If you juxtapose that with a mental state that is incredibly headstrong, you could have something really interesting.”
After playing around with early animations, Grillo eventually settled on a design that was ‘just right.’ He says, “It’s about finding a balance where you don’t take the design too far, but it looks otherworldly and alien. It’s an interesting thing when you’re creating something that doesn’t exist. You go, ‘Well, that looks like an animal I’ve never seen before, but I believe it.’” Once he established the basics of the Niffler, Grillo added the ‘trimmings’ and endowed it with personality. He began with the aggression of a honey badger and mixed and matched other animal traits from there. In the end, he argues, “There are no real rules. You either buy it or you don’t.”
ON COMMON GROUND
Grillo believes digital technology has allowed animators to go much further, approaching the design process more “holistically.” He explains, “You build anatomy; you look at color pictures, and look at the acting and the movement. You don’t commit to anything until you’ve explored these different angles of what a creature can be. By doing so, you also discover what they’re capable of, and through that you get more story. The plasticity of the process allows you to keep moving gently towards that finished product.”
Creature animation is both science and art, both objective and subjective. “A lot of the art is being able to look at things that happen in the natural world and see them as a universal truth,” Dodgson remarks. “It’s through finding a tangible solution that you create a sense of realism that the audience can then believe in,” Grillo states. “That way, they can invest in more important things in the story. It’s an incredible privilege to get invested in these creatures. They open your eyes to the beauty and intricacy of nature. What we do is beautiful, and it’s full of passionate people. I think we’re all scientists at heart.”


VIEWING THE NATURAL WORLD THROUGH HOPPERS
By TREVOR HOGG
After a grizzly, a panda and a polar bear attempt to infiltrate human society in his series We Bare Bears, director Daniel Chong reverses the premise for his Pixar directorial debut Hoppers, in which scientists are able to ‘hop’ human consciousness into lifelike robotic animals that can talk to, and understand, every creature big or small.
“There are these creatures who live with us and share this planet, but we can’t talk to them,” Chong states. “You can look at the Internet and see that everyone is trying to make sense of them. It’s amusing to even try to give voice to them.” The biggest inspiration was not James Cameron or David Attenborough, but videos where awkward robots with cameras in their eyes try to fit in with animals. “There are even these funny ideas or things that we were seeing in real life with panda enclosures where, in order not to habituate the pandas to humans, people will dress up in panda costumes and go feed them. There are a bunch of stories like that, and it makes you think, ‘Humans are so interested in animals and are trying to infiltrate in these wonky and clearly hilarious ways. How can we take that and push that idea?’”
Images courtesy of Disney/Pixar.
TOP: The comedy of the movie comes from defying the expectations of audience members.
OPPOSITE TOP: Lighting was important in ensuring the animals stayed on model and always looked their best.
Transitioning from 2D to 3D animation required some adjustments. “The original boards for the movie were so charming and worked so well as 2D cartoons,” remarks Nicole Paradis Grindle, Producer. “I remember saying to Daniel that I thought the biggest challenge we would have was translating this into 3D because they worked so well in 2D. It was a big leap to translate and also hold onto the humor. There were jokes that didn’t play in 3D in the same way they played in 2D, so we had to come up with a different play on certain gags.” The story constantly shifts between the human and animal perspectives. “Another big thing was dot eyes versus cartoon eyes. That was not something that was a slam dunk. We had to work for a long time to get it so that the same model could have those dot eyes that made it seem more like an animal and then have the cartoon eyes. The whole language around the human perspective without the earpiece versus the perspective in that animal world; that was a long journey,” Grindle says.
Adopting the disguise of a robotic beaver is Mabel, who attempts to solve the mysteries of the animal world. “For human Mabel and Mabel Beaver, we had a few things outside of her general personality and performance that we made sure track through both forms,” explains Alon Winterstein, Animation Supervisor. “When Mabel is a kid, her hair spikes up or goes fuzzy when she gets angry, and we incorporated that into the beaver. What made Mabel unique was her human instinct inside of the beaver. Where a beaver’s instinct would be to walk on fours, Mabel would feel more comfortable walking on twos. There is a scene when George is trying to encourage her to come with them to the Super Lodge and swim. Mabel is not sure if she is going to do it. Mabel did what she needed to and now is going back home. But Mabel gets convinced. However, her choice to jump into the water is different than any other beaver. A beaver would jump head on and swim in an organic way. Mabel ‘cannonballs’ into the water. It’s a much more human instinct.”
Maintaining the appeal of the characters was not easy. “We definitely had innovations on the characters themselves, with more built into the rigging and groom to allow the animators to achieve what they need out of the performance – the graphic shapes, appeal and silhouette,” Winterstein notes. “There was a lot of effort there to create models that could support all of that range, but also unique solutions for things such as those short arms and legs that sound simple in concept. But with a character that is so round, where everything is so close together, it tends to collapse on itself. Managing that was challenging, in addition to the tail. Once they go between two legs to four legs, for that transition, the proportions actually need to change to accommodate a silhouette that is more appealing on fours. You need to find ways to cheat that in the transition but also make sure that the model looks good both ways and doesn’t start looking unappealing.”
New tools were created. “We developed a new system for generating feathers on characters and for rigging the wings of birds,” states Laura Beth Albright, Visual Effects Supervisor. “When birds fold their wings, the way that the feathers have to follow the wing, fold up and unfold, is actually complex. There are layers of different feathers; some of them articulate like fingers and some follow the others. Instancing the feathers on the body was another project. We hadn’t updated the way we make feathers in quite some time. Since we had some birds that were going to be prominent, close to camera, right up with our main characters, we wanted the feathers to be compelling and nice.” A prominent plant had to be accommodated. “We worked early on with the tools department to create a new internal tool for modeling and instancing procedural trees because we were going to have a ton of them,” Albright notes.
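Pixar has not published the details of its feather system, but the layering Albright describes – some feathers articulating like fingers while the rest follow their neighbors – can be illustrated with a toy sketch. Everything below (the function names, joint counts and the follow weight) is an illustrative assumption, not Pixar’s implementation:

```python
# Toy sketch of layered feather posing (NOT Pixar's system; names,
# counts and weights here are illustrative assumptions).
import numpy as np

def blend_angle(a, b, t):
    # Blend two orientations (radians) the short way around the circle.
    d = (b - a + np.pi) % (2 * np.pi) - np.pi
    return a + t * d

def pose_feathers(joint_angles, n_feathers, follow=0.6):
    """Orient n_feathers along a wing driven by finger-like joints.

    Primaries sample the joint chain directly; a second pass then relaxes
    each feather toward its neighbor, so 'covert' feathers appear to
    follow rather than articulate, which keeps a folding wing readable.
    """
    joints = np.asarray(joint_angles, dtype=float)
    u = np.linspace(0.0, len(joints) - 1.0, n_feathers)
    i = np.clip(u.astype(int), 0, len(joints) - 2)
    angles = blend_angle(joints[i], joints[i + 1], u - i)
    for k in range(1, n_feathers):  # neighbor-following pass
        angles[k] = blend_angle(angles[k], angles[k - 1], follow)
    return angles

# Fold the wing: joints rotate progressively toward the body.
print(np.degrees(pose_feathers([0.0, 0.4, 1.1, 1.6], n_feathers=12)))
```

The second pass is what sells the fold: rotating the ‘finger’ joints drags the following feathers along smoothly instead of letting each one articulate independently.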


The need to manage the visual complexity of nature led to the Brushstroke Project. “It’s a system of projecting points in space onto the environment, applying a texture card on each one of those points, bringing that in and rendering it, pulling the color from the existing beauty pass render and then applying the texture cards with those colors in the final composite of the image in whatever way the compositor chooses to dial it in,” Albright remarks. “What that did was to provide a way to visually stylize the extremely complex environment. Natural environments have so many little pieces, it can be really busy and visually hard to look at. You have your characters in front with all of these little things going on. We needed some way to quiet all of that visual noise and frame the characters. It adds to the tactile style that we wanted for the movie as well.” Effects simulations like water, fire and smoke did not vary that much between human and animal perspectives. Albright adds, “It was more in the design and shaping. You can also see in our human world that things are more orderly and organized. Even the vegetation is more structurally controlled. Then, in the natural world, things are wilder.”
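The Brushstroke Project itself is proprietary, but the sample-then-stamp loop Albright outlines can be approximated in a short 2D sketch: scatter points, pull each point’s color from the beauty render, and composite a soft texture card of that color back over the frame. The card shape, stroke count and opacity below are assumptions for illustration; the real system projects 3D points through the shot camera and leaves the final mix to the compositor:

```python
# 2D toy version of the sample-then-stamp idea (NOT Pixar's Brushstroke
# tool; stroke count, radius and opacity are illustrative assumptions).
import numpy as np

def brushstroke_pass(beauty, n_strokes=2000, radius=6, opacity=0.7, seed=1):
    """beauty: float RGB image (H, W, 3) in [0, 1]. Returns a stylized copy."""
    h, w, _ = beauty.shape
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, h, n_strokes)
    xs = rng.integers(0, w, n_strokes)

    # Soft circular stamp standing in for an artist-painted texture card.
    r = np.arange(-radius, radius + 1)
    yy, xx = np.meshgrid(r, r, indexing="ij")
    stamp = np.clip(1.0 - np.hypot(yy, xx) / radius, 0.0, 1.0) * opacity

    out = beauty.copy()
    for y, x in zip(ys, xs):
        color = beauty[y, x]  # pull the stroke color from the beauty pass
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        a = stamp[y0 - y + radius:y1 - y + radius,
                  x0 - x + radius:x1 - x + radius][..., None]
        # 'Over' composite the colored card onto the running image.
        out[y0:y1, x0:x1] = out[y0:y1, x0:x1] * (1 - a) + color * a
    return out
```

Because every stamp inherits its color from the render underneath, the pass quiets high-frequency detail while preserving the lighting of the frame, which is the “quiet the visual noise” effect Albright describes.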
Comedy is a fragile concept. “I feel like we’ve learned a lot about how you can step on the comedy by making something overdramatic or not putting clarity in the right place,” states Ian Megibben, DP, Lighting. “Much like the timing is important, the clarity of what you see is important. Seeing it at the right time. A lot of this is worked out already between the blocking they do in layout and the work they do with the edit, but when you have an eye fixed on a cut, making sure that we’re clearly highlighting that in a subtle way.
TOP: For Daniel Chong, animation is a medium that lends itself to the suspension of disbelief, allowing for a more fantastical world without the need to explain everything.
BOTTOM: Daniel Chong has a specific aesthetic for his humor and storytelling where the imagery had to feel real without feeling necessarily realistic.

That was probably the biggest lesson. I’m probably proudest of putting a lens flare in the movie that got the largest laugh that I’ve ever seen a lens flare get in the right way. It was rewarding for me to contribute to a movie that is hilarious in a way that elevates the humor. I put so much thought into lens aberrations and effects in a way that feels organic. It doesn’t draw the eye. There is one time where I was like, ‘We can pour a lot of sauce on this shot!’”
Given the nature of the humor and storytelling, the visual aesthetic had to feel real without having to be realistic. “There is such an odd character design, and odd in the best way, and a cartoony feel to things in the natural world,” remarks Jeremy Lasky, DP, Layout. “The human world gave us permission to come up with our own style for it without having to feel like we were doing Finding Dory. We could get away from making it feel like we were just observing these characters doing what they are doing. Ian and I wanted to establish this tone of pushing the visuals just enough in the beginning that we came up with this stylized feel that could be then really pushed to extremes when we needed to because the story gets really weird. It gave this baseline that the audience was comfortable with, so they weren’t saying, ‘What do you mean you have a shark being pulled by a bunch of seagulls that lift it out of the water? I don’t understand how that works physically.’ It doesn’t matter because you’re so hooked into the story and the characters that you roll with it.”


TOP TO BOTTOM: A key landmark is a rock where Mabel would spend time with her grandmother.
The robotic beaver was inspired by people recording videos of robots awkwardly interacting with animals.
Director Daniel Chong, Production Designer Bryn Imagire and Story Supervisor John Kim during a Hoppers art review.

THE GLOBAL REACH OF VISUAL EFFECTS TODAY
By TREVOR HOGG
TOP: Predator: Badlands. Tax incentives have spurred global growth. (Image courtesy of 20th Century Studios)
OPPOSITE TOP: Foundation. Circumstances are constantly changing and must be accounted for in advance. (Image courtesy of Apple TV)
Financing movies and television programs remains expensive, prompting studios and production companies to take a more global view, with both positive and negative effects on the visual effects industry. No longer does one have to be based in California to find work, but reliance on tax incentives has also led to a more nomadic lifestyle for digital artists. “Despite the entertainment industry being consumed by everybody, it’s a very niche business in itself, and yet, the visual effects industry and post-production exist as a niche within a niche,” notes Will Cohen, VFX Producer, Executive Producer and VFX Consultant, who co-founded Milk VFX in 2013. “It’s a creative business, but also technology-driven, so it’s a lovely fusion of art, science, technology and craft to help create images. The movie business as we know it really starts in Hollywood with the studio system that grew out of it and with the global success of people going to the cinema as a premium form of entertainment. Within this studio system, the special effects departments build on Georges Méliès’s work. Then, in San Francisco, George Lucas and ILM revolutionized the special effects industry with Star Wars, John Dykstra, motion control, and a whole new wave of physical and optical special effects. At some point between the mid-1980s and mid-1990s, digital visual effects emerged. All of this is coming out of Los Angeles, and the first rival to the West Coast of America is London; that is born not out of any particular tax incentive, but from the creative industry, particularly music videos and commercials, and what was possible digitally. Clairol commercials with morphing people for hair products and Ridley Scott doing advertisements based on 1984 for Apple were huge events.”
The opportunity to expand the scope of storytelling was attractive. “The big expansion and taking on of Los Angeles on the West Coast came out of the artistic desire to do it,” Cohen states. “There were loads of brilliant artists in Paris at the time doing some great work. BUF was doing excellent work with David Fincher. This is because they want to participate in their artistry digitally within the premium entertainment format. Harry Potter put a billion dollars’ worth of visual effects work through the U.K. over a decade, creating regular work for companies to keep crews together and develop their capabilities. By 2008, Framestore was established. Mill Film is its own story. Cinesite is still there. Moving Picture Company has made inroads through Digital Film Company. DNEG is beginning. It was an exciting time.” The influx of visual effects companies, along with the increasing quality of digital augmentation, has led to a shift in attitude. Cohen observes, “Let’s say from Jurassic Park in the mid-1990s to 2010 is an era where people would come to see us from Hollywood, and they would say, ‘I’ve got this movie and a particular effects sequence. Are you actually able to do it?’ That was the question before it was, ‘How much?’ Between 2010 and 2020 came an era of people being in the game and the globalization of the industry. Money becomes this huge factor, and people stop asking, ‘Can you do it?’ They assume, if they’re talking to you, that you can do it.”
Tax incentives have spurred global growth. “In the early days, tax incentives helped build and grow some of these new territories and companies,” reflects Todd Isroelit, Senior Vice President, Visual Effects Production at 20th Century Studios. “For example, when I was first doing visual effects work in Australia, the tax incentives weren’t structured in a way that made sense to grow the visual effects business. There was a high minimum spend needed to trigger the PDV rebate with companies. You couldn’t see yourself spending that kind of money because they didn’t have the level of experience or resource capacity. Just through conversations with Ausfilm and the government and making the case for bringing these thresholds down, such as removing the requirement to film there, allowed these companies to start growing and expanding their reach globally. That’s what happened with Rising Sun Pictures, Animal Logic and some of the smaller companies. It was a ‘build a better rebate and they will come’ approach. This also started to bring in established visual effects companies from outside of Australia, and that in turn creates a stronger talent pool.”
One factor remains an important part of any tax incentives. Isroelit says, “The bottom line is it’s got to be the right creative team at a vendor. Sometimes a vendor will put up a team in Canada and Australia. We have to vet and decide which is the better leadership team for this scope of work. Once we have established that, we can figure out how to split the work across the same vendor to maximize both incentives and resources. Perhaps it’s a Framestore in Melbourne that’s the creative lead, but then they split some of the work with Framestore Montreal to help with resources and pricing, including the value of the native currency. Ideally, our team is just going to be dealing with the supervisor and creative leads in that one main award location.”
Because work is performed across different time zones, the visual effects industry has become a 24/7 enterprise. “Initially, it used to be considered a handicap, but now people have embraced it,” Isroelit remarks. “If you can structure it in the right way for the production team’s and the studios’ schedules, you can actually get more productivity. Production site teams may need to schedule their days and tasks accordingly, particularly when they require direct engagement from the filmmakers. At least from California you can manage the Australian and Asian vendors later in your day. Conversely, you can wake up and start with Europe, and then hit the East Coast. You can look at the clock across the globe and set up your schedules in a beneficial way.”


The number of vendors influences the size of the visual effects team. “Definitely on bigger shows, like Predator: Badlands, you have multiple coordinators. One coordinator is in charge of vendors A and B, while another coordinator might be responsible for vendors C and D. In that context, you’re not overwhelming one coordination effort. Basically, you’re creating multiple pipelines within your own production. It becomes a delicate balance in managing the visual effects supervisor’s time across five or six vendors in different time zones. I’ve had those conversations with my production teams about making sure we don’t burn out. We have to find time for them to sleep, eat, and do their notes. To protect some quality of life for the long run of the schedule, it’s trying to figure out the right plan, then how to manage the vendors with little subsets of teams within the team.”
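Isroelit’s ‘look at the clock across the globe’ planning is easy to prototype with nothing but the standard library. The sketch below, with hypothetical vendors and an assumed 7am-to-9pm supervisor day, lists which Los Angeles hours land inside each vendor’s local working day – the viable slots for reviews that need the filmmakers in the room:

```python
# Standard-library sketch of cross-time-zone review planning.
# The vendor list and working hours are hypothetical assumptions.
from datetime import datetime, timedelta, time
from zoneinfo import ZoneInfo

VENDORS = {
    "Vendor A": "Australia/Melbourne",
    "Vendor B": "Asia/Kolkata",
    "Vendor C": "Europe/London",
    "Vendor D": "America/Montreal",
}
LA = ZoneInfo("America/Los_Angeles")

def review_slots(day, sup_start=time(7), sup_end=time(21)):
    """For each vendor, list which supervisor hours (LA time) fall inside
    the vendor's 9:00-18:00 local working day: the live-review windows."""
    slots = {}
    for name, tz in VENDORS.items():
        hours = []
        t = day.replace(hour=sup_start.hour, tzinfo=LA)
        while t.time() <= sup_end:
            local = t.astimezone(ZoneInfo(tz))
            if time(9) <= local.time() <= time(18):
                hours.append(f"{t:%H:%M} LA = {local:%H:%M} local")
            t += timedelta(hours=1)
        slots[name] = hours
    return slots

for vendor, hours in review_slots(datetime(2026, 3, 2)).items():
    print(vendor, "->", hours[:3])
```

Run for a March date, the output reproduces the pattern he describes: Europe and the East Coast overlap the LA morning, while Australia and India only open up late in the supervisor’s day.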
Circumstances are constantly changing and must be accounted for in advance. “When I first break down a script and turn in my budget, I’ll do a blended 25% tax incentive so that I can work anywhere in the world,” explains Kathy Chasen-Hay, Head of VFX, Paramount Features & TV at Paramount Skydance. “Because then, if you are a larger film or TV show, you can say, ‘I’m going to spread the work around.’ It’s usually safer not to keep it all in one territory. We might do a little bit in Canada, Australia and the U.K. I realize that Ireland is offering this great tax incentive, but I want to go first and foremost where the talent is, and I want to go to a company that has done right by me, delivered good-looking shots on time, and treats its artists fairly. There are so many ways to pick a company or a team to do your work. A lot of it is based on relationships. You may have a team that doesn’t have the best rebate. An example might be Digital Domain, which, among visual effects companies, still maintains a strong presence in Los Angeles. So, we might get a 22% tax incentive from them because they’re doing work in Vancouver and also work in Los Angeles and Montreal.” Even within the same visual effects company, studios must track how work is distributed across facilities to avoid tax incentive conflicts. Chasen-Hay remarks, “I remember when we first started doing the rebates about 15 years ago, there was a lot of confusion. But now, when you’re in the contract phase, you would say 90% of the work will be done in Toronto, and 10% will be non-rebate because all the vendors need flexibility to ship out roto, paint and matchmove to India or a different part of the world.” Chasen-Hay adds, “Our job is 50% accountant and 50% being up on all of the territories and what the incentive programs are.”
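The blended-rate budgeting Chasen-Hay describes is simple weighted arithmetic, and a tiny sketch makes the mechanics concrete. The splits and dollar figure below are hypothetical; the 90/10 Toronto example mirrors her contract-phase scenario:

```python
# Back-of-envelope blended-incentive math; all figures are hypothetical.
def blended_rebate(splits):
    """splits: (share_of_award, territory_rebate) pairs summing to 1.0.
    Returns the effective rebate rate across the whole award."""
    assert abs(sum(share for share, _ in splits) - 1.0) < 1e-9
    return sum(share * rate for share, rate in splits)

# 90% of the award in Toronto at an assumed ~25% rate, 10% shipped out
# (roto/paint/matchmove) earning no rebate.
rate = blended_rebate([(0.90, 0.25), (0.10, 0.00)])
gross = 10_000_000  # hypothetical VFX budget
print(f"effective rebate {rate:.1%}, net cost ${gross * (1 - rate):,.0f}")
# -> effective rebate 22.5%, net cost $7,750,000
```

The same function explains her Digital Domain example: splitting one vendor’s award across Vancouver, Montreal and non-incentivized Los Angeles work pulls the effective rate down to something like 22%.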
TOP: A Knight of the Seven Kingdoms. Visual effects companies that specialize in the creative needs of a production are sought out. (Photo: Steffan Hill. Image courtesy of HBO)
BOTTOM: Murderbot. An effort is made to have visual companies work on their own sequences to ensure continuity. (Image courtesy of Apple TV)


TOP: The Last of Us. To meet the budget, the cost of using vendors doing specialized work in non-incentivized countries can be offset by using other vendors in incentivized countries or regions. (Image courtesy of HBO)
BOTTOM: The Gorge. Having multiple visual effects companies working on the same shot is not preferable. (Image courtesy of Skydance and Apple TV)


BOTTOM: The Last of Us S2. The end result suffers when filmmakers do not give visual effects companies the flexibility to place the work where the best creative talent is available. (Image courtesy of HBO)
Having multiple visual effects companies working on the same shot is not preferable. “We don’t like to do that because you’ve got to share assets,” Chasen-Hay states. “Some of the vendors, like DNEG, might share an asset between their territories because theoretically they are sharing their software live. What we try to do is have vendors work on their own sequences because then, if there’s anything that looks different, it makes more sense as there’s continuity and you’re telling a story within that sequence. But often, as the visual effects sequence grows, we have less time to deliver; that’s when we bring in vendors six or seven to incorporate into the pipeline, and that’s when you have to share the assets. I really noticed it in Australia and Vancouver, where you might have two companies talking to each other as if they’re working for the same vendor. Artists are constantly switching visual effects facilities, and many people are friends with one another, so it’s a collaborative field. If your neighbor’s shot doesn’t look as good as your shot, who knows whose shot it was, and you could get ridiculed for someone else’s work. It’s in everybody’s best interest for the work to look good. You try not to share an asset for the same shot.” Machine learning and AI will have global ramifications. Chasen-Hay notes, “Right now, you send all of your non-creative work, like roto, paint or matchmove, to cheaper countries to do that type of work. But if you’re a visual effects facility using some software that is going to do that for you, you don’t necessarily have to send that work out. It will affect people in other countries and the lower-level jobs.”
TOP: The creatures in It: Welcome to Derry tended to be standalone, so various types of visual effects were able to be assigned to different vendors. (Image courtesy of HBO)
Sharing work with multiple facilities and visual effects companies worldwide is highly challenging. “Filmmaking is such a collaborative art form, requiring teams across different disciplines to be in creative sync,” observes David Conley, Executive Producer at DNEG. “Nothing will ever come close to surpassing the experience of being in a screening room with a team of artists working on a sequence. With increased pressure to deliver on global tax incentives, companies must build teams spanning time zones through video conferencing, data networking, and an international company infrastructure that supports the management of a diversified creative portfolio across multiple sites. Global visual effects production stress tests a company’s ability to seamlessly recreate the experience of being together in a screening room.” Dividing and conquering is a critical management issue for any global visual effects company. “Dividing the work among the facilities is primarily dictated by the rebates that inform the net target the filmmakers are looking to achieve. As a company, you work diligently to ensure you can support achieving the filmmakers’ financial parameters by developing a talent base at both the creative and production management levels that consistently achieves the high standards you set across all sites. This isn’t always possible due to capacity constraints in a particular territory. The consequence is that the creative output suffers when filmmakers can’t give a company the flexibility to place the work where the best creative talent is available.”
Each project has different creative needs. “Some projects have specialties like fire or water or effects simulations or creatures and types of creatures; we will look for vendors who specialize in that,” states Janet Muswell Hamilton, Senior Vice President of VFX at HBO. “We will know what is needed to hit our budget, but if we have a facility, for example, ILP in Sweden, there is no incentive there, but we used them a lot. We offset the fact that we’re not getting the incentive for a big chunk of work by using other incentivized countries or regions for work that isn’t as specialized as they’re doing.” Generally, there is a routine regarding how visual effects are distributed over the course of a season. “Normally, the first and last episodes are huge, and normally, episode seven or five or both are big. What we have to look at first is assets. Are these assets across the entire season, or are they different assets that we can assign to various vendors? A good example is It: Welcome to Derry. Because the creatures in the episodes were standalone, for the most part, we were able to assign those various types of visual effects to different vendors. We did that because the schedule is tight, so we have to ensure that the visual effects company handling episode eight isn’t backlogged by episode seven. Even if they can offer a big crew, you still need to balance that work because just getting it through the facility is difficult.”



TOP TO BOTTOM: Predator: Badlands. Because work is done across different time zones, the visual effects industry has become a 24/7 enterprise. (Image courtesy of 20th Century Studios)
Springsteen: Deliver Me From Nowhere. Managing the visual effects supervisor’s time across five or six vendors in different time zones becomes a delicate balance. (Image courtesy of 20th Century Studios)
The Last of Us. When the schedule is tight, it’s more difficult to get back-to-back episodes of work through a single facility. (Image courtesy of HBO)



Keeping track of the facilities being used within the same visual effects company is important. “A good example would be House of the Dragon,” Hamilton notes. “Pixomondo has been our dragon facility since way back on Game of Thrones, and we used three of their facilities in three different regions. We review which work is assigned to which facility and assess the incentives in those regions, as well as the extent of outsourcing. If they’re outsourcing 10%, that’s 10% for which you will not receive an incentive. All of that has to be worked out. It becomes a complicated grid and a lot of discussion.” Templates have been implemented to make looking after the different shows more manageable and feasible. Hamilton says, “We use Flow [formerly ShotGrid] to track where we are with things. We have a whole system that scrapes the data up into an Airtable, so I don’t have to go and look at everybody’s Flow. I’ve got an overview that’s constantly being updated. What I’m mainly looking at is whether a show is getting into trouble. When I have 15 or 20 shows to look at, I can’t look at them differently. We try to make it as simple as possible. Everyone is satisfied with our financial tracking. If a show asks for a change and it’s a good idea, we will accommodate it, then roll it out through all our other shows. We track all the places where we’re applying for incentives because there are rules governing that. Every incentive has a cost associated with it, so we have to make sure that is accounted for in the budget.”
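HBO’s internal system isn’t public, but the kind of Flow (formerly ShotGrid) scrape Hamilton describes can be sketched with the real shotgun_api3 client: pull shot statuses per show, roll them up to one overview row each, and push the rows to a dashboard such as Airtable. The site URL, credentials, the ‘fin’ status and the ‘getting into trouble’ rule are all assumptions:

```python
# Sketch of a Flow (ShotGrid) overview scrape using the shotgun_api3
# client. Site URL, credentials, status values and the trouble
# threshold are assumptions; the Airtable push is left abstract.
import shotgun_api3

sg = shotgun_api3.Shotgun(
    "https://studio.shotgunstudio.com",  # hypothetical site
    script_name="overview_scraper",
    api_key="REPLACE_ME",
)

def show_overview(project_id):
    """One dashboard row per show: totals, completion, trouble flag."""
    shots = sg.find(
        "Shot",
        [["project", "is", {"type": "Project", "id": project_id}]],
        ["sg_status_list"],
    )
    done = sum(1 for s in shots if s["sg_status_list"] == "fin")
    total = len(shots)
    return {
        "project_id": project_id,
        "shots": total,
        "complete_pct": round(100.0 * done / total, 1) if total else 0.0,
        "flag": total > 0 and done / total < 0.25,  # assumed trouble rule
    }

rows = [show_overview(pid) for pid in (101, 102, 103)]  # hypothetical ids
# 'rows' is now ready to push to Airtable (or any dashboard) on a schedule.
```

The point of the roll-up is exactly what Hamilton describes: one constantly refreshed overview per show, so nobody has to open 15 or 20 individual Flow sites to spot the one that is drifting into trouble.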
Each country has a unique culture that, in part, influences how visual effects companies operate there. “Understanding cultural differences is foundational to the success of any company,” observes DNEG’s David Conley. “Language and culture shape communication, leadership, teamwork and decision-making in multicultural, multigenerational environments within a global company. That said, watching Ted Lasso offers up some really great management tips on how to handle multicultural teams. Or Aliens, depending on how you view your clients!” Talent is not in short supply. “There is incredible talent worldwide that can be accessed through remote work or by encouraging relocation to different countries. Any perception of labor shortages can be attributed to the oversaturation of work being pushed into heavily subsidized regions and, unfortunately, to the loss of experienced artists moving into other industries. We really need to figure out how to protect our global talent base from the effects of downward financial pressures that are undermining the stability of our industry.”
Cloud computing, real-time rendering and AI/machine learning are leading the next wave of technological innovation. “All these technological developments are fantastic for creating a global multi-site facility that can access artists worldwide,” Conley remarks. “These developments will be foundational in building a suite of tools to improve compute speed, designed to deliver images to a higher standard, and unite artists in sites around the world. But we need to be careful not to let discussions about technology minimize the artists’ contributions, which are more needed than ever.”
Murderbot. Dividing and conquering is a critical management issue for any global visual effects company. (Image courtesy of Apple TV)
It: Welcome to Derry. CG assets needed to be assessed as to whether they would be used across the series. (Image courtesy of HBO)
The visual effects industry continues to evolve. Conley concludes, “What we’re seeing is that the economics of content creation are being challenged by the various distribution models that audiences around the world use to consume their content. As an industry, we need to adapt to those evolving models. But not at the expense of undermining our position in the filmmaking process. It’s essential that we continue to deliver visually groundbreaking work.”
TOP TO BOTTOM: Fountain of Youth. Even within the same visual effects company, studios must keep track of how the work is distributed among the various facilities to avoid tax incentive conflicts. (Image courtesy of Skydance and Apple TV)


PUSHING CREATIVE AND STORYTELLING BOUNDARIES WITH CRYSTAL BRETZ
By TREVOR HOGG
While attending the Victoria School of the Arts, Crystal Bretz played guitar, sang in a choir, performed ballet and contemporary dance, and experimented with 3D animation, graphic arts and photography. “I still play guitar and sing but don’t dance much anymore,” reflects Bretz, who is based in Montreal and is a Lead Modeler at Framestore. “In general, anything art related definitely helps. Just keeping rhythm in dance and guitar can assist you in understanding the rhythm and flow of shots and sequences.” Life began in Richmond, B.C., then the family moved to Edmonton, Alberta. “Where I grew up was a music-oriented place because there were not a whole lot of other things to do. To keep busy, you did artistic things, which were dancing, music, arts and things like that. I went to an arts high school in Edmonton, which wasn’t a common place to go. I was always telling my parents I’m not interested in sports, so we ended up finding the Victoria School of the Arts. Instead of doing gym classes, you can take alternative classes like ballet and guitar. I also took my first animation class, so I got exposed to a lot of different forms of arts early on, which was nice.”
While the arts were not a big part of the Bretz household, with her mother being a mail carrier for Canada Post and her father selling parts for big oil rig machinery, the offspring have been driven towards more artistic endeavors. “Part of what I do has inspired my family to become more artistic,” Bretz reflects. “Now, my brother is a graphic designer; he decided to go back to school to study that recently. My dad recently got into bird carving, and it is cool to see this artistic side coming out of him now.” Next stop was the Vancouver Film School, which saw the post-secondary student specialize in 3D modeling, surfacing and lighting and earn a diploma with honors in 3D animation and visual effects. “I wanted to go into their game design course, but they told me there might be more career opportunities in visual effects because if you learn one, you could maybe go both ways. I went into that program with the goal to be an animator, but then I found a lot of love and interest in modeling. Starting from nothing and building something out of it was really satisfying for me. I ended up going down that route, and at some point in their curriculum you can split off from the other fields and specialize. I ended up specializing in character modeling but didn’t realize how difficult it would be to get into that professionally.”
Images courtesy of Crystal Bretz, except where noted.
TOP: Crystal Bretz, Lead Modeler, Framestore
OPPOSITE TOP: Translating the characteristics of James Gunn’s dog Ozu onto Krypto was difficult because the two canines were of different proportions and sizes. (Image courtesy of Warner Bros. Pictures)
OPPOSITE BOTTOM: Bretz proudly wears Superman’s iconic crest beside the theatrical poster.
Upon graduating, there were two job offers. “I couldn’t figure out which one I wanted to take because I wasn’t entirely sure at the beginning of my career what would be the right path,” Bretz recalls. “I had an offer from MPC to join their lighting academy for young artists entering the industry. There was another offer on the table as a generalist at a small TV visual effects company. I ended up taking that job because it was recommended to me, as I might get the opportunity to work on a lot of different types of things as a generalist, including characters. At the beginning of my career, I started working at Artifex Studios, and they taught me so much. I did simulations and rigging, everything 3D related. That was a good kickoff to my career because it made me a lot more knowledgeable and fast about every part of the pipeline, rather than being stuck in one category.”

“There’s so much involved with creating a character. I pick the hardest thing there is to do. If it’s not pushing the boundaries to the next level, I don’t want it. The most challenging things don’t frustrate me; they get me excited because I get to solve something.”
—Crystal Bretz, Lead Modeler, Framestore
Subsequent jobs include Senior Organic Modeler at Method Studios, Senior Modeler at DNEG and Digital Domain, and being granted an Unreal Virtual Production Fellowship. “I was so surprised that I got into the Unreal Fellowship because I know they only select a certain number of people, and I learned a lot. It was three or four weeks of going through the full Unreal Engine and learning how to do everything. I don’t technically use the skills I have developed now, but I know that if I was to go into a virtual production or even a game career in the future, I could definitely jump right in. Also, nowadays, if an Unreal Engine file comes in, I’m the person that they send it to!”
“There’s a lot to think about when you’re translating stuff from 2D to 3D, especially when you don’t have that concept model beforehand that’s already figured those things out for you in a rough manner,” Bretz observes. “For example, with Krypto in Superman, we only had a 2D concept to work from, and had to match James Gunn’s dog, Ozu, but his dog was very small, and the dog they wanted was a lot bigger. We needed to figure out how to make big-dog proportions work on a small dog, but also how do we get the exact same look of Ozu in this character? It was a lot of problem-solving, for sure. We would go back and forth on changing structure and design and eventually landed in this nice place where things just worked. I don’t know if there’s ever a cut-and-dry answer of how to translate something from 2D into 3D.”
Motion is kept in mind when modeling.





Bretz attends an event held by the Montreal Section of the Visual Effects Society.
Bretz has some fun with IF memorabilia while at the movie theater.
“A lot of the time, you have to build things exactly how they would need to move practically,” Bretz states. “You’re always thinking about the anatomy inside the bodies and where the bones are connecting. It gets really intricate. Some creatures or characters are fantastical, so they don’t exist. You have to think about things a bit more like, ‘Is this a jumping type of creature? What kind of bone structure would they have? How would they move?’ It impacts how you build something, which is the fun part about this job.” Realistic and stylized characters share some common principles. “No matter how stylized it is, someone will notice that it doesn’t feel right if it doesn’t actually function like a person would expect or something you’ve actually seen before. If you don’t know basic anatomy and don’t understand how things should work, then you don’t know how to stylize and tweak it to make it a little bit different but still feel anatomically right. It’s a really fine balance.”
Mentors have been prevalent throughout Bretz’s career. “Some of them I’ve never even met, but I’ve seen their art and been really inspired,” Bretz notes. “Then there are some people I’ve met who have taught me a lot. Justin Holt is an amazing texture artist, and he ended up being one of my supervisors and taught me a lot about texturing and the requirements for that. Eugene Fokin is a really good modeler who I got the opportunity to work with, and he taught me a lot as well. Overall, mentorship is important in this industry because I find when you go to college, you’re only learning a quarter of the information you need. The rest of the information you get in the industry and from those people who have the experience. This industry is constantly growing, so it’s hard to keep up sometimes without a little bit of help.” Bretz is a Character Art Mentor at the Vertex School and has conducted Gnomon workshops on “Creating a Stylized Female Character” and “Creating a Male Groom.”
TOP TO BOTTOM: There was only 2D concept art to work from for Krypto, which then had to be translated into 3D. (Image courtesy of Warner Bros. Pictures)
The Framestore modeling team attending a 2025 Christmas party. Left to right: Stephanie Qin, Marjorie Veillette, Kanisha Lopez, Vik Sorensangbam, Klaudio Ladavac, Natasza Nalewajek, Christopher Helin, Pascal Clement, Wilfried Vougny, Crystal Bretz, Pierre-Edouard Merien, Lyrian Corneil, Maxime Moreira, Patrick Comtois, Duy Tran, Simon Martineau and Johannes Boivin.

a Stylized Female Character” and “Creating a Male Groom.” “Being a mentor has been one of the best things I’ve done. The surprising part is that you always know a little bit more than the person you’re mentoring and always have something useful to share, and you always know something different and come with a different perspective. The other part is helping people grow; it’s so rewarding when they get their first job – for them and for me.”
The attitude towards character modeling has remained consistent even as techniques and technology constantly change. “I’m definitely not doing the same things I did back when I started, and I, as well as the programs and workflows, have grown a lot since then,” Bretz reflects. “I am looking forward to the sort of machine-learning tools that aid processes we didn’t like doing before, like UVs. But anything beyond that, I’m happy keeping the art as organic and human as possible because there are still flaws. There is an automated feeling behind some of the machine-learning things that come up, especially with concept art. Some tools are being developed with AI to aid everyday processes that take enormous computing power, and I support the uses there, for sure. The other thing that could be personally useful as well is creating scripts on the fly for quick problem-solving, while leaving the complex scripting to the pros.” For Bretz, this is an enjoyable part of the job. “I like creating something from nothing. I love the challenge of it because there are muscles, bones and facial structures as well as face shapes. There’s so much involved with creating a character. I pick the hardest thing there is to do. If it’s not pushing the boundaries to the next level, I don’t want it. The most challenging things don’t frustrate me; they get me excited because I get to solve something. I’m always learning more things about this field. It’s a constant learning curve, but that’s why I love it so much.”


TOP AND MIDDLE: Bretz modeled Unicorn for John Krasinski’s IF. The film was her first project as model lead at Framestore and has a special place in her heart. (Image courtesy of Paramount Pictures)
BOTTOM: Ale Barbosa, Crystal Bretz, Bell de Deus and Stephanie Hayot at Cafe VFX in Montreal giving a Q&A panel with ArtStation.

NAVIGATING THE HIGH SEAS FOR ONE PIECE: INTO THE GRAND LINE
By TREVOR HOGG
Setting sail on the treacherous ocean current that flows around the entire Blue Planet in an effort to find a legendary treasure and be crowned the Pirate King, One Piece: Into the Grand Line marks the return of Monkey D. Luffy and his Straw Hat crew for eight episodes that travel to Loguetown, Reverse Mountain, Whisky Peak, Little Garden and Drum Island, while introducing an entirely CG character, a massive sperm whale, a pair of giants, and antagonists with the ability to manipulate wax and smoke.
“This show wouldn’t be possible without visual effects, and we have an incredible visual effects team,” states Joe Tracz, Co-Showrunner. “There were definitely superpowers to figure out for the first season. Luffy is made of rubber, so him stretching was the big challenge. But each season gets bigger and bigger. In Season 2, we knew that one of our big challenges was that we were introducing a new crew member to the Straw Hats, and he is a little reindeer boy known as Tony Tony Chopper, who is an entirely visual effects character.” The methodology was borrowed from Guardians of the Galaxy. “We have an incredible local actress named N’kone Mametja. She was on set every day in a body suit, giving our actors someone they could do a real performance with. You’d shoot with her, shoot the clean plate for visual effects, and then in post-production, Chopper’s face and voice were provided by another actress, Mikaela Hoover, who was also in the Guardians movies.”
TOP: Laboon is an entirely CG creature, which was tough to frame properly in shots.
OPPOSITE TOP: There were too many shots featuring the two giants to create them in CG, so the characters of Dory and Brogy were achieved practically.
Images courtesy of Netflix.
Things get even weirder in the second season. “In the writers’ room, we constantly have questions of, ‘Can we do dinosaurs and giants?’” Tracz recalls. “The answer was, ‘It’s One Piece. You have to do dinosaurs and giants. If you don’t do dinosaurs and giants, Little Garden, the island where they discover those things, just isn’t the same Little Garden.’ There are expectations that fans of the manga and the anime have that we want to make sure we’re delivering on because that’s the promise you make when you’re adapting something as specific and beloved as One Piece. If you shy away from those things in favor of realism, you lose what makes this world so special.”
Given the scope of the world-building, the task was divided between two production designers, with each island treated as a unique environment. “[The approach towards visual effects was] definitely different for various parts of the show,” remarks Tom Hannam, Production Designer. “Whisky Peak was one of the islands where everything was in-camera, and I didn’t have to consider the visual effects side of it so much. But Little Garden is an island populated by two giants that were 60 to 70 feet tall, and they got bigger in post, as these things always do. That involved a lot of the Straw Hats, the human-sized crew, dealing with these two enormous giants. It took a lot of collaboration with visual effects right from the beginning. I worked it out with Victor Scalise [Visual Effects Supervisor] and Scott Ramsey [Visual Effects Producer] literally shot-by-shot, but at the same time there was still the desire to do as much in-camera as possible.”
Everything was like putting together a jigsaw puzzle consisting of practical and digital pieces. “For each setup we would have continuous meetings,” states Max Gottlieb, Production Designer. “You would have full set extension and bits you could and couldn’t see through. For instance, with Laboon we built the lighthouse up to the edge, and we had to make a mark where there was this huge drop and cliff, and then there was the sea and Laboon. Beforehand, we have to proportion the whole thing into a series of drawings that are exactly to scale and then design where the eye of Laboon would be. Luffy is standing on what would be the clifftop, and we have to pinpoint where his eye would be and where he’s looking at different parts of the action. There is the financial aspect, as well, of how many wide and close-up shots you can have. In the end, the whole thing becomes this jigsaw puzzle broken down shot-by-shot.”
“One of the most important parts of a cinematographer’s job is to try and facilitate the best outcome for all the imagery, whether live-action or visual effects,” remarks Michael Swan, Cinematographer, who was responsible for the final three episodes. “I try and collaborate with the visual effects team as much as possible to make their life as easy as it can be. This is a visual effects-heavy show, and nothing is undemanding.” Previs and storyboarding are important. “All the big visual effects sequences are prevised and often storyboarded by the director. In addition, the fight scenes are carefully rehearsed and choreographed well ahead of time.” There are few practical locations, with the emphasis on elaborate sets and shooting against screens. “The look was established in the first season, and although we used less extreme wide-angle lenses for close-ups in Season 2, the look remained the same. Netflix has full control over the final color grade,” Swan notes.
Ensuring that there is enough time to produce the required visual effects takes a quick and efficient editorial turnover. “While we are still offline, we have this absolutely fantastic team of visual effects editors and also our own previs artists, so we can do internal turnovers constantly and get temp visual effects moving,” remarks Tessa Verfuss, Editor. “Even by the time we’re presenting to Netflix, we have something there. It’s different when we’re talking about a full CG character, but for things like Luffy’s punches, we’re getting previs before we even send cuts to Netflix. That just becomes an ongoing process that we keep moving. Personally, I have the advantage of a time zone difference because I’m in Cape Town, so most of that team is only starting when I finish my day, and we’re not literally trying to jump into each other’s bins and timelines at the same time.”
Imagination is required when assembling sequences. “When you do a show like this, so much is on bluescreen or greenscreen,” states Eric Litman, Editor. “You have a character that is not present because it’s CGI. And from an editor’s perspective, it requires, at least from me, so much imagination. As an editor, you are imagining the timing and pacing. Is this enough time to do x, y and z or deliver a line? The boat is doing this right now, but we don’t have the effect of what’s causing that. We’re constantly figuring out cause and effect and how that translates into visual effects. It also requires a tremendous amount of conversation and homework with the people you are working with, the director and the visual effects department, in understanding the storyboards. Sometimes you have to go to the manga to understand the concept of these shots.”

TOP: After sailing over Reverse Mountain, where a gravity-defying river runs upwards, the Going Merry encounters a massive whale known as Laboon.
BOTTOM: A major challenge was getting the characters’ eyelines level with Laboon, given the huge scale difference between them.
“What is good about One Piece is that the show accelerates its insanity from a storytelling perspective. Everything gets bigger, and I’m sure Season 3 will be no different,” observes Tim Kinzy, Editor. “It was different with the full CG character for the first month with this flashback scene, which had nothing to do with the Straw Hats. It was basically Mark Harelik and N’kone Mametja doing the Chopper/Dr. Hiriluk scenes. I felt like I was working on a spin-off. It was refreshing because this is nothing like what I saw in Season 1 or have even dealt with before. It was a unique opportunity, a different kind of pacing, especially with a fully CG Chopper that is expensive per shot. A lot of shots get pared down or taken out to save money and for economy of storytelling. That was quite a challenge. You can’t just cut away to Chopper whenever you’re stuck on an edit because it’s going to cost a lot of money, so sometimes the pacing had to play out. It’s still One Piece, but it feels like something unique.”
Pushing the gimbal for the Going Merry was the sequence where the pirate vessel goes up Reverse Mountain. “We had the Going Merry at 20-odd degrees going up Reverse Mountain,” states Mickey Kirsten, Special Effects Supervisor. “That was incredibly tricky. Everything needed to be elevated, like our water cannons. We were shooting water onto the ship and people were sliding around. Resetting was quite challenging. People literally had to use ropes to pull themselves up and reset everything that was falling down during the take.” Mr. 3 generates a specific substance with his fingers. “The wax was insane. It was such a journey trying to find a product that we could have on the artist, and get it to do what we wanted it to do. We literally went from food stuffs like marzipan to baking products. We finally settled on a medical plastic that you can mold, and it breaks really nicely,” Kirsten says.

TOP: Tony Tony Chopper is the first entirely CG principal cast member.
BOTTOM: Monkey D. Luffy’s ability to stretch his limbs like rubber gets an upgrade.
TOP TO BOTTOM: Miss All Sunday makes a dramatic entrance with swirling cherry blossoms.
Principal photography for One Piece centers around Cape Town.
Mr. 13 and Miss Friday, an otter and a vulture, respectively, are members of the criminal organization Baroque Works.
The hard part of Cactus Island was coming up with techniques to give texture to something that is really grass and fields but looks like a cactus from a distance.
Creating the interior of the whale, Laboon, was a labor of love. “There was a lot of stomach acid inside of the whale,” Kirsten remarks. “Skeletons would fall into these little pools of water and would smoke and bubble away. We had to keep the walls of Laboon moist all the time. If you think about it, it’s 40-odd meters by 25 meters, and you square that. It’s a lot of rain rigs working there. We had kilometers of plumbed air underneath Laboon for the bubbles, steam and smoke, and also plumbed water underneath it, causing little eruptions. It was probably three, four months of prep on that one.”
While Season 1 had 2,300 visual effects shots, Season 2 has 3,800, created by Framestore, Rising Sun Pictures, ILP, Folks VFX, Barnstorm VFX, Ingenuity VFX, Mr. Wolf and Refuge VFX. “We broke down the approach coming up to Reverse Mountain as one section so you wouldn’t have to match into water,” states Scott Ramsey. “We gave the basic idea that the water needs to be sucking the boat toward Reverse Mountain, and as it does that, it gets closer. When the Going Merry gets sucked into it, then we turn it over to Rising Sun Pictures. As it keeps going up, we knew that ILP specializes in the kind of work needed at the top. Once we get down to the bottom, that’s Laboon. We broke it down into those areas, hitting the vendors’ strengths.”
“The biggest problem we had was fighting the scale of the water [for Reverse Mountain] because a lot of the fluid simulators are meant to build oceans,” notes Victor Scalise, Visual Effects Supervisor. “You’re using water simulations that are defying physics and also need to be moving upward, colliding with the landscape, as well as being like white-water rafting.” The giants were achieved practically. “We ended up getting two actors, and we worked with Tom Hannam to build big and small sets. Christoph Schrewe [director] came in, and we blocked everything out. We had little characters that were the size they were, so on the giant-size set we could have Straw Hat stand-ins that were the real size. We did a lot of math and planning so that it all snapped together beautifully in post.” Digital effects were unavoidable for Smoker. Scalise states, “The punches are true to the manga, and with the tendrils, we wanted to give them more shape, so a tornado-ish swirl was added. One of the tricky parts was, how do you make the tendrils, when they grab Luffy, actually feel like they’re not emitting smoke from Luffy? We came up with the idea of these spinning cuffs that gave more depth. We also had a lot of fun developing the Gum Gum Gatling effects as Luffy punches Smoker. We went through a lot of cool simulations to answer the questions: How do things go through Smoker? How does it react? How does it interact? Smoker was a lot of fun, and as viewers watch it, I hope they’ll enjoy what we came up with. The sequence is intense.”


THE INDIAN VFX INDUSTRY – A MAJOR GLOBAL FORCE
By TREVOR HOGG
TOP: An indication that the Indian domestic film industry has become more accepting of visual effects is the blockbuster RRR. (Image courtesy of DVV Entertainment)
OPPOSITE TOP: Foundation. BOT VFX has invested seriously in compositing as well as asset and CG development. (Image courtesy of Apple TV)
Creating waves back in 2014 when Prime Focus World merged with DNEG, and again in 2025 when Phantom Digital Effects consolidated Milk VFX, Tippett Studio, PhantomFX, Lola Post and Spectre Post into the Phantom Media Group, India remains a significant player in the visual effects industry, and the South Asian country gets its own chapter in the Visual Effects & Animation World Atlas 2025. Joseph Bell states in the comprehensive research report that Mumbai is the largest visual effects hub worldwide by headcount, and that six of the 10 fastest-growing visual effects and animation hubs in the world are in India. The vast majority of visual effects and animation revenue comes from international clients, with domestic clients accounting for only 10% to 15%, and 2,000 visual effects and animation workers lost their jobs in the collapse of Technicolor. Unlike other markets where tax incentives have been the driving force in attracting international productions, it has been low labor costs that have turned India into a global outsourcing powerhouse; however, that might start to change as the Indian government becomes more actively involved.
India has over a 100-year history with films and filmmaking. “Given India’s long history with films, visual effects were not an unknown element here, though they weren’t really used to their full potential,” states Akhauri P. Sinha, Managing Director, India for Framestore. “I, along with many other people from the industry in India, have always maintained across various forums that visual effects is something filmmakers should start thinking about at the beginning, instead of as an afterthought or a corrective tool, as it usually was in the early days in India. As filmmakers started understanding the ability of visual effects to help realize their cinematic vision, and their films did well commercially, it made others aware of the possibilities. The other part of the puzzle that also fell into place was the availability of talent, as over the years, working on global projects had grown the talent pool as well as honed the skills of the artists here.” Machine learning and AI are tools that aid artists and enhance output. “It’s also reasonable to assume that as technology evolves, certain repetitive tasks could be automated. Digital infrastructure like cloud computing and storage has been a game-changer for smaller Indian companies, as it allows them to scale without expensive capital outlays.”
“The Indian government has made significant progress in supporting the visual effects industry,” remarks Sudhir Reddy, President, Global VFX Business for Digital Domain. “The Ministry of Information and Broadcasting now offers incentives for international film and visual effects projects, allowing production service companies to claim up to 40% of qualifying expenses incurred in India, with a maximum payout of approximately $3.6 million USD, substantially higher than previous limits. The government has also established the National AVGC [Animation, Visual Effects, Gaming and Comics] task force and proposed policy frameworks to develop the sector further. However, comprehensive nationwide implementation is still underway. At the state level, support remains uneven, with only some states [such as Telangana and Karnataka] implementing dedicated visual effects policies. Skill development programs are in place, but there is still a need for more focused approaches to advanced specialization and training. Overall, while necessary steps have been taken, large-scale and consistent national support is still evolving.”
Since 2005, the Indian visual effects industry has experienced rapid growth. “Initially centered on basic corrective work such as clean-up and motion graphics, the sector has transformed dramatically due to exposure to international markets and the entry of global visual effects leaders like Rhythm & Hues, DreamWorks, Technicolor, DNEG, Digital Domain, Framestore and ILM,” Reddy explains. “These companies have integrated Indian talent into the global production pipeline, elevating technical standards and expertise. Domestically, Indian filmmakers are now making visual effects a core part of their creative process, leading to more ambitious storytelling and higher production values in film, television and streaming content. Internationally, India has become a vital hub for major Hollywood productions, delivering complex visual effects at scale. Strategic partnerships, government incentives, and ongoing investments in talent and technology continue to strengthen India’s reputation as a major force in the global visual effects industry.”
When Digital Domain established a facility in Hyderabad in 2017, Reddy was appointed the Head of Digital Studio. “Digital Domain expanded to India to support global growth, improve margins and leverage the country’s growing talent and technology pool,” Reddy notes. “The supportive ecosystem and government encouragement for the AVGC sector were also key factors. The primary advantages have been access to skilled professionals and cost efficiencies. Challenges include heavy reliance on North American projects, limited high-revenue domestic work and time zone differences. Indian teams have, however, demonstrated strong adaptability and flexibility.” Real-time rendering, cloud computing and AI/ML are reshaping the visual effects landscape. “Artists are actively upskilling to maintain global competitiveness. Indian tech firms are also investing in cloud render farms and data centers, with support from government initiatives. The rise of AI tools is expected to significantly affect outsourcing studios in India, particularly those focused on labor-intensive tasks like roto, paint and camera tracking,” Reddy says.
Kiran Prasad has had a varied career in India, with experience in food and beverage, retail, insurance and visual effects. “For the quality of ILM to be done, you need good-quality people, and we have roughly 400 people now [at the Mumbai facility],” remarks Prasad, Executive in Charge of ILM Mumbai. “The talent exists, and it’s just a matter of time to grow to the level of what the Western world has seen. You have to realize that India doesn’t have a 50-year history of visual effects. We started about 15 to 20 years ago. That cycle is picking up.” Education needs to be improved. “That is one area where we could do much better in terms of the quality of education that we’re giving in visual effects. When I say quality, I don’t mean how you use the application, but how to think differently. How do you apply what you have learned in a different situation? It’s about thinking big and using technology and different forms of storytelling. There are very few schools honoring that knowledge. Recently, the Maharashtra government established a college of creative technology. There are a lot of baby steps happening together, and this is part of reviving or making sure that India becomes self-sufficient in every industry.” The domestic industry has become more accepting of digital augmentation. “Four or five years back, visual effects would have formed only 5% of the budget of any Indian movie. Today, it is up to 30% or 40%. There is awareness of the fact that stories can be made more engaging with visual effects.”
“The benefit of having DNEG and ILM [in India] is that the talent pool spreads to these smaller outlets and helps them become big, more relevant and efficient,” Prasad notes. “There are a lot of things that we’ve learned from the West, but we also have numerous things that we do indigenously over here, which work well for us. We are given the brief, pick up the work, and execute it. Culture doesn’t matter as long as we’re able to deliver on time. The benefit is that we all speak English well, so there is little risk of miscommunication or misunderstanding. We work exactly the same way as any other ILM studio does. We are given work; we independently manage it, have a visual effects supervisor who handles the show, submit the work to the hub side, and, in some cases, present it to the client-side supervisor if our time zones align. I’m not seeing much of a cultural issue from that perspective. We may have a different way of working, but we do not see much concern about getting the job done. We start at 8:30 a.m. and close at 5:30 p.m. My team is gone by 6 p.m. With both London and Sydney, we have an overlap of four hours, which is good enough. For San Francisco and Vancouver, we use all the available tools to communicate what we want. At the end of our day, we will have sent out recordings or notes to people in San Francisco or Vancouver: ‘We have worked on all of this. These are the things we want from you. Let’s talk about this tomorrow.’ They look at it, and at the end of their day, they send back their notes: ‘This is approved. Move on to this. This is what we want you to work on. And here are the answers to all of the questions that you had.’ It works well because at the beginning of the day, when we come back, we have all of the answers that we need.”

TOP TO BOTTOM: Twisters. The time zone difference means that any notes will be available at the beginning of the next day. (Image courtesy of Universal Pictures)
The Lost Bus. The benefit of companies like ILM establishing facilities in India is in the expansion of the domestic talent pool. (Image courtesy of Apple TV)
How to Train Your Dragon. Framestore runs a Launchpad Apprenticeship program that offers practical training for entry-level visual effects roles. (Image courtesy of DreamWorks Animation and Universal Pictures)
Going from providing ride support for “Harry Potter and the Forbidden Journey” at the Universal Orlando Resort in 2008 with a team of six, to being involved with Deadpool & Wolverine, Wicked and Pachinko Season 2 with a crew of 745 in 2024, is BOT VFX, which has facilities in Hyderabad, Pune, Coimbatore, Chennai and Atlanta. “When I started in this industry in late 2003 and 2004, no one was using the word outsourcing to India,” recalls Hitesh Shah, CEO and Founder of BOT VFX. “No one was taking any of the work. I started a company called FrameFlow. We worked with Sony Pictures Imageworks, and most of the services were roto and paint, then gradually matchmove. Back then, it was hard to do anything more than revolutions-per-minute work out of India. It wasn’t a heavy creative collaboration because you could describe or annotate what you wanted to get done, and the next day or within days, you could have it; that lent itself to the development of that market quite well. When stereo conversion got big, it picked up even more, and more roto and paint work were needed. Over the last decade, there has been a gradual rise in skill levels in India. Many companies have been chipping away at this. Even when Rhythm & Hues was around in India, they had started doing compositing training. When Technicolor and MPC were still around, they began pushing out certain types of compositing, asset development, and even more complex work on Mufasa: The Lion King. Many companies, including ours, ILM and Framestore, have begun investing in a footprint in India that extends beyond roto and paint, and is now seriously into comp and some parts of asset and CG development. At the same time, a lot of Indian facilities serving the Indian domestic market are doing full-on visual effects work.”

TOP TO BOTTOM: Wicked: For Good. BOT VFX has developed partnerships with major visual effects companies such as ILM, Digital Domain and Wētā FX. (Image courtesy of Universal Pictures)
Prehistoric Planet: Ice Age. Privately-run training companies, film schools and universities have been partnering with visual effects companies for curriculum development, guest lectures, workshops and internships. (Image courtesy of Apple TV)
Lilo & Stitch. The common language of business is English, which minimizes any miscommunication. (Image courtesy of Disney and ILM)
TOP TO BOTTOM: The Conjuring: Last Rites. Six out of 10 of the fastest-growing visual effects and animation hubs in the world are in India. (Image courtesy of Warner Bros. Pictures)
FutureWorks provided visual effects shots for Kesari Chapter 2 and established the FutureWorks Academy to further develop local talent. (Image courtesy of FutureWorks)
RRR. India has become a vital hub for major Hollywood productions, delivering complex visual effects at scale. (Image courtesy of DVV Entertainment)
RRR. In 2017, Digital Domain established its 10th facility, in Hyderabad. (Image courtesy of DVV Entertainment)
“Significant skill development happens through Western projects because Indian domestic films still don’t use a heavy number of visual effects, and the budgets are very constrained,” Shah states. “You don’t learn as much as when you work on a Mufasa or The Jungle Book. Those operations trained many talented artists in mature pipelines. Your artists are only as good as the number of shots they’ve worked through in that skill set.” It is not all about tax incentives. “The labor cost advantage is a sufficient driver of value, where we can produce a budget that is still attractive to studios and production houses that have other constraints, like, ‘I’m already shooting here, so I have to keep 80% of my budget there.’ Those are things that we can’t help, but pound for pound, if you give me a sequence to bid and work on, India still comes out ahead.” Partnerships have been established with the major visual effects companies. Shah says, “We have a dual relationship. We are a vendor as well, and so is Digital Domain, not as much these days with DNEG and ILM, and heavily with Wētā FX and Rodeo FX. All of these players source their roto, paint and matchmove work to us and others. We are a subcontractor to them. They’re happy that we have that scale. They’re not putting roto paint management resources in India but trying to work on higher-end talent.”
Diversification is a risky business because it cannot be achieved overnight. “What we want to do is geographically expand with the same core competency to the main markets that are now beginning to embrace visual effects the way that Hollywood has for a long time,” Shah states.
FutureWorks Academy, founded by a visual effects, post and rental company based in Mumbai, contributes to local talent development. “FutureWorks has taken an active role in talent development through the FutureWorks Academy, which bridges academic learning and real-world production,” states Gaurav Gupta, CEO of FutureWorks. “The Academy leverages internal talent to mentor and provide structured, hands-on training that prepares students for studio environments. By integrating education with live-project experience, we’re helping to shape a new generation of skilled artists who are ready to contribute from day one.” Roto is not the only focus. “India delivers the full spectrum of visual effects, from concept development and asset creation to complex compositing, effects simulations, digital environments and final finishing. Studios here contribute to global features, streaming series and high-end commercials as well as local productions. The range extends from large-scale, visible effects to subtle, invisible work that supports storytelling. What stands out most today is the combination of artistry, technical precision and the ability to collaborate seamlessly with creative teams around the world.”
Diversification is important. “Having multiple specialized divisions enables us to bring depth and flexibility to every project,” Gupta observes. “At FutureWorks, our visual effects, post-production and camera rental teams each operate with dedicated expertise while collaborating closely across disciplines. With a team of more than 500 professionals, that specialization ensures we can deliver creative and technical excellence at every stage of production. Establishing strong internal capabilities and continuously developing proprietary workflows is equally important.”
One has to keep an open mind about potential opportunities. “To sustain innovation, we must constantly explore adjacent sectors like animation, virtual production and immersive media. This exploratory mindset is why we at FutureWorks developed proprietary software tools that enable seamless cross-facility collaboration, and we are preparing to bring these powerful solutions to the broader market.”
“RenderMan XPU, Houdini, USD and Katana are shaping the industry in India and globally by automating routine tasks to give artists more room for creativity,” Gupta remarks. “Real-time rendering accelerates visualization, and cloud computing offers massive scalability. While machine learning and AI streamline workflows, their adoption demands careful governance. At FutureWorks, our dedicated R&D team, led by CTO Krishna Prasad, focuses on applying these innovations responsibly, ensuring they enhance artistry while adhering to the highest ethical and technical standards.” Success requires balancing artistry, technology and finance. Gupta concludes, “To thrive, the industry must be relentless in the pursuit of excellence while remaining operationally lean. This involves continuing to foster innovation, strategically investing in new technologies and deepening cross-industry collaboration. Robust partnerships between studios, educators and technology leaders will sustain a strong talent pipeline, and ongoing R&D will keep teams ahead of emerging trends. By balancing efficiency with technical mastery and creativity, India is positioned to maintain its leading role in global visual storytelling.”




The Acolyte. The visual effects industry in India began 15 to 20 years ago. (Image courtesy of Disney+ and Lucasfilm Ltd.)
F1. In order for India to stay competitive, it needs to keep sharpening skills and moving up the value chain. (Image courtesy of Apple TV)
Prehistoric Planet: Ice Age. Education in visual effects in India needs to be improved, not just in how to apply it, but in how to think about technology and different forms of storytelling. (Image courtesy of Apple TV)
TOP TO BOTTOM: Thunderbolts*. Increasingly, the visual effects work being done in India spans the full visual effects pipeline, with Framestore Mumbai functioning as a full-service visual effects studio. (Image courtesy of Marvel Studios)

RAISING THE BAR: REVISITING TERMINATOR 2: JUDGMENT DAY, INCEPTION AND LIFE OF PI
By TREVOR HOGG
For many people, the watershed cinematic moment for CGI was the Gallimimus stampede in Jurassic Park; however, the real breakthrough occurred two years earlier when James Cameron decided to build upon the alien water tentacle that mimicked the face of Mary Elizabeth Mastrantonio in The Abyss to produce the T-1000, a shape-shifting liquid metal assassin from the future that serves as the antagonist in Terminator 2: Judgment Day. The digital innovation did not stop there. Christopher Nolan envisioned a dreamscape in Inception where Paris folds in on itself, and Ang Lee cast a Bengal tiger as a principal cast member in Life of Pi. All three of these accomplishments remain the gold standard despite the constant evolution of technology, fueled by filmmakers’ storytelling ambitions and audiences’ expectations.
TOP: Every individual shape of the T-1000 in Terminator 2: Judgment Day was modeled by hand. (Image courtesy of Industrial Light & Magic)
OPPOSITE TOP: Paul Franklin suggested that Paris be turned into a series of linked drawbridge sections with joints at every intersection that lift and fold the city into a box shape. (Image courtesy of Warner Bros. Pictures)
OPPOSITE BOTTOM: Mark A.Z. Dippé and Doug Chiang at a Terminator production meeting with Dennis Muren, VES. (Image courtesy of Industrial Light & Magic)
Douglas Smythe, a CG Supervisor at ILM, served as a computer graphics shot supervisor on Terminator 2: Judgment Day. “Technically, the T-1000 wasn’t metal,” Smythe notes. “It was a poly alloy, which is the name somebody made up so that it didn’t have to match any particular metal or chrome. We needed a little bit of diffuse for shaping; otherwise you couldn’t read anything. But as far as the underlying technology at the time, you’ve got the diffuse and specular components, and you colorize and balance them in different ways. If you do it this way, it looks like plastic; if you make it transparent, it looks like glass; if it ripples, it looks like water; or if you remove the diffuse, it looks like chrome.” Specific teams rather than pipelines were formed. “We would get a modeler, an animation, a lighting and a technical type of person, and they would figure out how to do a shot. We didn’t have a shot pipeline. We had a morph team and a 3D character team, which is when the T-1000 is in a humanoid form but all metal. There were also the amorphous-blob and Terminator-death teams.”
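Smythe’s plastic-glass-water-chrome point maps neatly onto the weighting of a basic local illumination model. As a purely illustrative aside (a minimal Python sketch of a generic Phong-style shader, not ILM’s actual code; the function and parameter names here are invented), shifting the balance from the diffuse term toward the specular term is enough to move a surface’s read from plastic toward chrome:

    # Minimal Phong-style shading sketch (hypothetical, illustrative only).
    # n_dot_l and n_dot_h are precomputed lighting dot products for one sample.
    def shade(n_dot_l, n_dot_h, base_color, kd, ks, shininess):
        diffuse = max(n_dot_l, 0.0)                # the "bit of diffuse for shaping"
        specular = max(n_dot_h, 0.0) ** shininess  # tight, mirror-like highlight
        return tuple(kd * diffuse * c + ks * specular for c in base_color)

    silver = (0.9, 0.9, 0.95)
    plastic_read = shade(0.7, 0.8, silver, kd=0.8, ks=0.2, shininess=30)    # strong diffuse
    chrome_read = shade(0.7, 0.8, silver, kd=0.05, ks=0.95, shininess=200)  # diffuse removed
    print(plastic_read, chrome_read)

Zeroing out the diffuse weight while boosting the specular weight and highlight exponent is this sketch’s version of Smythe’s “remove the diffuse, it looks like chrome.”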

Computer graphics technology had to be invented to produce the desired shots. “All the people who worked on Star Wars and Terminator 2 had a similar challenge: ‘We have to make a shot that looks like this storyboard and this idea, but we don’t have the tools right now in front of us that let us do that. Let’s figure out how to take the pieces we have and make the shot happen,’” Smythe observes. “We had an early version of Alias for modeling and animation, and back in those days, we were using bicubic patches for things. It wasn’t NURBS and certainly not subdivision meshes yet. How to stitch them together so you get a continuous surface, as opposed to having gaps, seams and holes, and solving the problem of having three or five or six patches coming together at a corner, which is necessary for human shapes – we hired a mathematician to figure that problem out.”
Every individual shape of the T-1000 was modeled by hand. “When the face splits out into the whole salad-bowl shape and folds back in together,” Smythe explains, “that was a combination of hand modeling and me with graph paper, figuring out the coordinates so the shape moves from here to here.” A classic moment is when the morphing T-1000 is held back by a gun that gets caught between the jail bars. “We had our cyber scan of Robert Patrick and the mesh, and pushed it through the bars with absolutely nothing happening to let us get the timing. ‘Okay, at this frame, it hits the side of his nose, or it hits the side of his cheek, so at that point it has to bend in.’” The dynamics of creating a CG character have essentially remained the same. “You start with some concept art and ideas of what it’s supposed to look like, then you refine until it gets to the level that you need. Hopefully, you start running with it in shots as soon as possible, rather than perfecting it. You realize that there’s this situation that you forgot to take into account or didn’t know about because you hadn’t gotten turnovers for that sequence yet.”

“[Paris folding onto itself and settling in place] was one of those sequences that could quite easily have ended up looking a bit fake, so the biggest challenge and worry was to do it justice and make it look as real as the rest of the film. We were all quite proud of the sequence as a whole. It became a signature part of the movie and really captured people’s imagination.”
—Andrew Lockley, Visual Effects Supervisor, Inception
Before becoming the Production Visual Effects Supervisor and Creative Director at beloFX, Paul Franklin co-founded DNEG. He was a member of Christopher Nolan’s inner circle, which led him to orchestrate Paris folding in on itself in Inception. “I spent a lot of time thinking about the characters in the script,” Franklin states. “In particular, Ariadne and how her background as a student of architecture might influence the way she looks at the world and, in turn, how she might work within the dreamscape that she creates in the Paris scene. The tone of the scene is deliberately playful; that seemed important to me. On the one hand she’s playing with the rules of the dream world, but because of her knowledge of architecture she approaches it with the logic and rigor of an engineer. Another key idea for me was one Chris and I had both experienced in 2004 in Chicago during the filming of Batman Begins. We had been able to get the city to raise the drawbridges across the Chicago River. If you stood in the street near one of the bridges as it rose, it looked like the world was folding on a mechanical hinge, taking the road surface, sidewalks and street lamps with it. I proposed that we turn Paris into a series of linked drawbridge sections with joints at every intersection that lift and fold the city into a box shape. The uniform footprint of the Parisian city blocks lent itself admirably to this idea. That became the basis of our approach.”
TOP: Believe it or not, Alvin and the Chipmunks had a heavy influence on the facial rig used for Richard Parker in Life of Pi. (Image courtesy of 20th Century Studios)
BOTTOM: The diffuse and specular components of the T-1000 were colorized and balanced in different ways for Terminator 2: Judgment Day. (Image courtesy of Industrial Light & Magic)

Presently a lighting TD at DNEG Animation, Alison Wortman was DNEG’s CG Supervisor on Inception and did the layout for the shot, which Nolan approved on first viewing. “In the previs, we began by looking at the shoot location in Google Street View, which allowed me to scout the street layout and understand the views that would be available to the shooting crew,” Wortman states. “This also helped me start to plan how the street might fold up, for example, identifying where the intersections could make crease points. On the shoot, we took extensive reference photography of the location to help the team at DNEG build up a photorealistic version of the previs I had blocked out. This included shooting bracketed HDR photography of each facade of every building around the main shooting intersection and, if I remember rightly, a decent amount [an additional block’s worth] in each direction spreading out from the main intersection. We took the photos at ground level from a tripod. We shot high-resolution images using a panoramic head to create tiles that could then be stitched together into a large, high-res image from which the team could extract texture information. We repeated each tripod location but took the photos from a scissor lift to capture more orthographic shots of the facades. All of this photography was sent back to the team in London, and they used it to build textures and inform the details of the model builds going on back at DNEG.”
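For readers curious what the bracketed-exposure half of that workflow looks like in code, the merge step is well supported in off-the-shelf libraries. Here is a minimal, hypothetical sketch using OpenCV’s Debevec calibration and merge (the file names and exposure times are invented; this illustrates the general technique Wortman describes, not DNEG’s pipeline):

    import cv2
    import numpy as np

    # Four hypothetical brackets of one facade, darkest to brightest.
    exposure_times = np.array([1/400, 1/100, 1/25, 1/6], dtype=np.float32)
    images = [cv2.imread(f"facade_bracket_{i}.jpg") for i in range(4)]

    # Recover the camera response curve, then merge brackets into a radiance map.
    response = cv2.createCalibrateDebevec().process(images, exposure_times)
    hdr = cv2.createMergeDebevec().process(images, exposure_times, response)

    # The Radiance .hdr format preserves the float dynamic range for texture work.
    cv2.imwrite("facade_radiance.hdr", hdr)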
Andrew Lockley is a Visual Effects Supervisor at DNEG, the role he held when working on Inception. “For the shot where the city settles into place, we went around with the lighting a lot,” Lockley reveals. “We were struggling to get the buildings to look completely real, and we weren’t sure if it was because the buildings looked weird upside down or if we were missing something in the lighting that was making them look flat and painted. We just couldn’t get it looking right. In the end, we decided to break the rules and invented a light source that cast sunlight onto the buildings as they came down into position. We placed it at quite an awkward angle, rather than making it too pretty, so it felt accidental. It didn’t make sense logically, but it gave them great scale and shape, and it just worked. It’s the version that ended up in the movie.” Maintaining a sense of realism was hard. “It was one of those sequences that could quite easily have ended up looking a bit fake, so the biggest challenge and worry was to do it justice and make it look as real as the rest of the film,” Lockley notes. “We were all quite proud of the sequence as a whole. It became a signature part of the movie and really captured people’s imagination. It’s amazing to be involved in something that has so many iconic moments.”

TOP: Ultimately, what raised the bar on Life of Pi was having access to real tigers. (Image courtesy of 20th Century Studios)
BOTTOM: Specific teams rather than pipelines were formed for the various shots in Terminator 2: Judgment Day. (Image courtesy of Industrial Light & Magic)
Hired to be the Production Visual Effects Supervisor on Life of Pi, Bill Westenhofer has remained on the client side. “There were two technical advancements that accounted for 15% to 20% of the improvement [of the creature pipeline],” Westenhofer remarks. “One is something called ray tracing. When lighting the hairs themselves in the past, it was so computationally expensive that you had to do a number of lights and big shadows. Now, computers are fast enough that we can render light bouncing off the boat, hitting the tiger, and bouncing back, and get that whole thing working. There is another little thing called subsurface scattering, especially with clumps of white fur: light comes in one side, and the hair is not completely opaque, so it bounces around, and you get some glow coming out the other side. Those kinds of things add to the realism. Ultimately, what raised the bar on this one was having access to real tigers.”
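The fur translucency Westenhofer describes comes down to a simple intuition: light passing through a not-quite-opaque fiber is attenuated exponentially rather than blocked outright, so thin white hairs leak a visible glow out their far side. A toy Python sketch of that Beer-Lambert-style falloff (all values invented for illustration; this is not the Rhythm & Hues shader):

    import math

    def transmitted(intensity, absorption, path_length):
        # Beer-Lambert attenuation: light decays exponentially with the distance
        # traveled through a partially absorbing medium such as a hair fiber.
        return intensity * math.exp(-absorption * path_length)

    backlight = 1.0
    for microns in (20, 40, 80):  # hypothetical fiber thicknesses
        glow = transmitted(backlight, absorption=0.015, path_length=microns)
        print(f"{microns} um fiber -> {glow:.2f} of the backlight exits the far side")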
Every trick in the book was utilized to get the proper interaction between Richard Parker and Pi Patel (Suraj Sharma). “With Suraj, you could tell him the tiger is going to be here, and even if we didn’t supply him with anything, you could see in his eyes that he was imagining it,” Westenhofer states. “With that said, we would do things. Quite often, my Animation Director, Erik-Jan de Boer, would put on a blue suit, hop into the boat and be the tiger when he’s fighting over the pole and the fish. It’s Erik who is catching the end of the stick and grabbing and shaking it to give some resistance. When Suraj pulls the tiger into his lap, we molded the tiger’s head and shoulders out from our model, using sandbags inside to give it the right weight and pull. We replaced that with a digital version, which had hairs.”
Freelance Visual Effects Supervisor Erik-Jan de Boer was an animation supervisor at Rhythm & Hues. “Our tiger had to be completely believable as a predatory animal for Ang Lee to be able to sell the relationship between Richard Parker and Pi, from fear to respect, dependency, and ultimately compassion,” de Boer remarks. “We would need to be able to see our own emotions reflected in his eyes. Early in our schedule, we rotoscoped reference footage for side-by-side studies to assist the character build, rigging and lighting development. These are not purely technical exercises; the animator needs to be very skilled and understand how the animal moves, its weight, its timing, and how to design the motion effectively through the rigging controls. Rotoscoping gave us very realistic motion, but the performance was never appropriate for the movie, as it is next to impossible to direct these animals. This is also the main reason why any form of motion capture was out of the question. We created all our performances from rest using keyframe animation in Rhythm & Hues’ proprietary animation package, Voodoo. Keyframing meant that the believability of each performance depended on the talent and perseverance of each individual artist.”

TOP: Technically, the T-1000 in Terminator 2: Judgment Day wasn’t metal but a poly alloy, which meant that it didn’t have to match any particular metal or chrome. (Image courtesy of Industrial Light & Magic)
BOTTOM: Detailed storyboards illustrating the T-1000 going through the prison bars in Terminator 2. (Image courtesy of Industrial Light & Magic)
Paws can be quite expressive. “Not just the obvious protracted or retracted state of the claws, but also the change in shape between relaxed, load-bearing and placements on uneven surfaces,” de Boer observes. “We did additional research and development on our extremities and greatly improved the modeling, mechanics and control structures of the paws. When Richard Parker confronts Pi for the first time and Pi throws him the rat, there is a moment where it had to become clear that the cat is uncomfortable on the tarp. We are close-up on both paws with a right claw stuck in the cloth. Richard Parker pulls away and the claw releases, muscles flex, tendons groove the skin, and the darker interior fur flashes in and out with the claws. The paw digits articulate, and the shape changes are clearly legible. There is texture and detail in the motion. This fidelity really allowed us to bring a level of realism and a sense of real weight and connection to the contacts that we had not achieved before.”
Freelance Visual Effects Supervisor Jason Bayever was a digital effects supervisor at Rhythm & Hues. “Believe it or not, Alvin and the Chipmunks had a heavy influence on Richard Parker,” Bayever reveals. “Alvin and the Chipmunks essentially had the same face rig as Richard Parker. This enabled our modeling department and everybody leading up to this to have a consistent list of blend shapes. It meant that our animators didn’t have to learn a new face shape and a new process for each character. If they wanted a certain shape, that shape was always there. We could also build motion libraries for those shapes. Theoretically, we could have taken Alvin lip-syncing a song and applied it directly to Richard Parker. It was an unbelievable system for rigging.”
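The shared rig Bayever describes rests on the standard blend-shape idea: a facial pose is the neutral mesh plus a weighted sum of per-target vertex offsets, so keeping one consistent target list lets shapes and motion libraries carry across characters. A minimal, hypothetical NumPy sketch of that arithmetic (the stand-in mesh and shape names are invented; this is not the Voodoo rig itself):

    import numpy as np

    def blend(neutral, targets, weights):
        # Pose = neutral vertices + sum of weighted per-shape deltas.
        pose = neutral.copy()
        for name, w in weights.items():
            pose += w * (targets[name] - neutral)
        return pose

    neutral = np.zeros((4, 3))  # tiny stand-in mesh: 4 vertices, xyz each
    targets = {
        "jaw_open": np.array([[0.0, -1.0, 0.0]] * 4),
        "snarl_L": np.array([[0.2, 0.5, 0.0]] * 4),
    }
    print(blend(neutral, targets, {"jaw_open": 0.6, "snarl_L": 0.3}))

Because animators author the weights rather than the deltas, a “jaw_open” curve built for one character can in principle be replayed on any other character that defines the same target.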
Other layers had to be built for Richard Parker. “One thing that’s very cool about wild animals is their disconnected scapula,” Bayever observes. “Our scapula is connected, so you can only move it a little bit. When lions, tigers or any of those big cats walk, their scapula moves up and down, which enables them to be powerful and run quickly. They are almost 500 pounds of pure muscle inside a sack of skin, so they can move around within it, unlike us. There’s a lot of what we call ‘skin slide.’ They built a sophisticated system of multiple layers of simulation. I believe there were three simulations: under skin, muscles and over skin on top of that. That was just the rigging side of it. We did a lot of side-by-side comparisons of King [a real tiger used as reference], and then our simulation. Erik-Jan de Boer’s team would animate the tiger doing the exact same thing that King was doing in the real footage. We would keep simulating and building processes until the two of them matched.”
A prequel franchise has since elevated the CG creature pipeline further. “There is no question that the Planet of the Apes movies have taken furry creatures and character elements to a whole new level,” Bayever adds.




TOP TO BOTTOM: Crucial in elevating the creature pipeline for Life of Pi were technical advancements in ray tracing and subsurface scattering. (Image courtesy of 20th Century Studios)




VISUAL EFFECTS ARTISTRY IN THE SPOTLIGHT
Captions list all members of each Award-winning team, even if some were not present or out of frame. For more Show photos and the complete list of nominees and winners of the 24th Annual VES Awards, visit www.vesglobal.org
All photos by Moloshok Photography.
On February 25th, the Visual Effects Society held the 24th Annual VES Awards, the prestigious celebration that recognizes outstanding visual effects artistry and innovation in film, animation, television, commercials, real-time projects and special venues.
Industry guests gathered at The Beverly Hilton hotel to celebrate the accomplishments of talented VFX artists across 25 categories. Avatar: Fire and Ash received the top photoreal feature award and six others. KPop Demon Hunters was named best animated film, winning three awards in total. Prehistoric Planet: Ice Age - The Big Freeze won best VFX in a photoreal episode.
Comedy duo The Sklar Brothers returned to host the VES Awards for the second year. F1: The Movie director Joseph Kosinski presented the VES Lifetime Achievement Award to iconic film and television producer Jerry Bruckheimer. Sir Richard Taylor, Co-Founder of the Wētā companies and Co-Owner of Wētā Workshop, received the VES Visionary Award, presented by Adam Savage.
“The VES is honored to recognize brilliant artistry and technological innovation across a wide range of disciplines,” said VES Board Chair Kim Davidson. “The craft of visual effects is constantly evolving to push the limits of our imaginations, and tonight’s inspiring winners and nominees represent best-in-class work from around the world. Congratulations to all!”
1. More than 1,200 visual effects artists, filmmakers and luminaries gathered in Los Angeles on February 25, 2026, as the industry celebrated the artistry, innovation and magic that brought the year’s most unforgettable images to life.
2. Visual Effects Society Executive Director Nancy Ward welcomes attendees to the evening’s celebration, setting the stage for a night honoring the industry’s visionary creators and groundbreaking achievements in visual effects.
3. VES Board Chair Kim Davidson, President and CEO of SideFX, took the stage to present the first awards of the evening.
4. The Sklar Brothers (Randy and Jason Sklar) returned to bring their razor-sharp wit and irresistible twin-powered chemistry back to the 24th Annual VES Awards for the second consecutive year.





5. The team behind Avatar: Fire and Ash, Richard Baneham, Peter Litvack, Eric Saindon, Nicky Muir and Steve Ingram, accepted the Award for Outstanding Visual Effects in a Photoreal Feature.
6. The VFX trailblazers behind Sinners triumphed with the Award for Outstanding Supporting Visual Effects in a Photoreal Feature. VFX Artists included Michael Ralla, James Alexander, Nick Marshall, Espen Nordahl and Donnie Dean.
7. Prehistoric Planet: Ice Age - The Big Freeze won the Award for Outstanding Visual Effects in a Photoreal Episode. Team members comprised Russell Dodgson, Tracey Gibbons, François Dumoulin and Gavin McKenzie.

8. KPop Demon Hunters came up golden with the Award for Outstanding Animation in an Animated Feature. Team members included Joshua Beveridge, Jacky Priddle, Benjamin Hendricks and Clara Chan.
9. The VFX crew of The Residence - The Fall of the House of Usher, including Seth Hill, Tesa Kubicek, John Nelson and Gabriel Vargas, were recognized with the Award for Outstanding Supporting Visual Effects in a Photoreal Episode.
10. The innovators behind Andor - Welcome to the Rebellion - The Senate District secured the Award for Outstanding Environment in an Episodic, Commercial, Game Cinematic or Real-Time Project. The team included John O’Connell, Falk Boje, Hasan Ilhan and Kevin George.






11. Zootopia 2 - Marsh Market earned the Award for Outstanding Environment in an Animated Feature. The team included Limei Z. Hshieh, Alexander Nicholas Whang, Joshua Fry and Ryan DeYoung.
12. Sir Richard Taylor, Co-founder of the Wētā companies, received the VES Visionary Award. Recognized for his transformative impact on the film industry, his creative vision and technical innovation have redefined what’s possible in visual effects and practical design.
13. Haley Joel Osment took the stage to present awards that honor the artists whose work makes audiences see things they never thought possible.
14. The creators of BMW Heart of Joy - Meet Okto the Octopus snagged the Award for Outstanding Visual Effects in a Commercial. The VFX team included Tom Raynor, Helen Tang, Jack Harris and Alex Kulikov.
15. The artists behind Avatar: Fire and Ash - Bridgehead Industrial City captured the Award for Outstanding Environment in a Photoreal Feature. Team members included Gianluca Pizzaia, Steve Bevins, Dziga Kaiser and Zsolt Máté.
16. Ghost of Yōtei received the Award for Outstanding Visual Effects in a Real-Time Project. VFX team members included Jason Connell, Matt Vainio, Joanna Wang and Jasmin Patry.



17. The Wizard of Oz at Sphere spirited away the Award for Outstanding Visual Effects in a Special Venue Project. The winners included Ben Grossmann, Tamara Watts Kent, Dr. Irfan Essa, Matt Dougan and Glenn Derry.
18. Awards presenter Jazz Raycole, star of The Lincoln Lawyer, brought her undeniable charisma to the 24th Annual VES Awards stage.



19. The VFX artists behind Avatar: Fire and Ash, Steve Deane, A.J. Briones, Zachary Brake and Andrew Moffett, with team member Maggie Boudreaux, secured the Award for Outstanding CG Cinematography.
20. The visionaries behind Avatar: Fire and Ash - Kora Fire Toolset, Alexey Dmitrievich Stomakhin, John Edholm, Murali Ramachari and Aleksandr Isakov, accepted the Emerging Technology Award.
21. Attendees toasted to a wonderful evening at the pre-show Cocktail Hour.
22. Autodesk’s Jocelyn Moffatt (far left) presented the creators of Azimuth with the Award for Outstanding Visual Effects in a Student Project. This VFX alliance included Thomas Teisseire, Cassandre Cinier, Martin Bluy and Mathis Giraudeau. Autodesk is proud to support the future innovators of VFX.






23. Avatar: Fire and Ash - The Windtraders’ Gondola received the VES Award for Outstanding Model in a Photoreal or Animated Project. The VFX ensemble included Michael Smale, Sam Sharplin, Joe W. Churchill and Jacqi Dillon.
24. The VFX group behind Andor - Who Are You? took home the Award for Outstanding Special (Practical) Effects in a Photoreal Project. Team members included Luke Murphy, Dean Ford, Jody Eltham and Darrell Guyon.
25. The VFX trailblazers behind The Last of Us: Through the Valley - A Storm of Ice, Fire and Flesh received the Award for Outstanding Compositing & Lighting in an Episode. This VFX force included Tobias Wiesner, Mark Julien, Owen Longstaff and Brendan Naylor.
26. From the gripping world of Paradise, actress Enuka Okuma brought elegance, presence and star power to the 24th Annual VES Awards stage.
27. Alex Kulikov, Jack Harris, Adam Chabane and Nicola Borsari received the Award for Outstanding Compositing & Lighting in a Commercial for BMW Heart of Joy - Meet Okto the Octopus.
28. The VFX magicians behind F1: The Movie - Modern Race and POV Footage accepted the Award for Outstanding Compositing & Lighting in a Feature. The team included Hugo Gauvreau, Chris Davies, Raushan Raj and Amaury Rospars.



29. VES Board Chair Kim Davidson shares a moment backstage with members of YouTube filmmaking team Corridor Crew: Niko Pueringer, Sam Gorski and Wren Weichman.
30. Jerry Bruckheimer received the Visual Effects Society’s Lifetime Achievement Award, honoring a career that has pushed the boundaries of cinematic spectacle. His films have consistently been at the forefront of visual innovation, helping define the modern blockbuster and inspiring generations of VFX artists.



31. Attendees snap a quick selfie during a break in the action. Colleagues from around the world reunited to celebrate the art and craft of visual effects.
32. The team behind Prehistoric Planet: Ice Age - The Big Freeze, Edward Ferrysienanda, Kevin Christensen, Guy Schuleman and Kevin Tarpinian, accepted the Award for Outstanding Effects Simulations in an Episode, Commercial, Game Cinematic or Real-Time Project.
33. KPop Demon Hunters took home the Award for Outstanding Effects Simulations in an Animated Feature. The VFX unit included Filippo Maccari, Nikolaos Finizio, Daniel La Chapelle and Srdjan Milosevic.
34. The groundbreaking team behind Avatar: Fire and Ash: Simulating Pandora, Nicholas James Illingworth, Sarah C. Farmer, James Robinson and Ryan Bowden, accepted the Award for Outstanding Effects Simulations in a Photoreal Feature.






35. Grammy Award-winning singer, songwriter and Sinners collaborator Raphael Saadiq graced the 24th Annual VES Awards stage as a presenter with the same soulful intensity he brings to everything he touches.
36. A highlight of the event, the VFX team behind Avatar: Fire and Ash - Varang: Leader of the Ash Clan, Stephen Clee, Stuart Adcock, Keven Norris and Joseph Kim, received the Award for Outstanding Character in a Photoreal Feature, and brought to the stage Oona Chaplin, whose embodiment of Varang brought the character fiercely to life.
37. Philip Harris-Genois, Pierric Danjou, Chloé Ostiguy and Jonathan Bourdua captured the Award for Outstanding Character in an Episodic, Commercial, Game Cinematic or Real-Time Project for IT: Welcome to Derry - The Thing in the Dark - The Pickle Monster.
38. Jim Morris, VES, President of Pixar Animation Studios and VES Founding Board Chair, brought the full weight of an extraordinary career to the 24th Annual VES Awards stage as a presenter.
39. The visionaries behind KPop Demon Hunters - Rumi claimed the Award for Outstanding Character in an Animated Feature. The VFX unit included Sophia (Seung Hee) Lee, Andrea Matamoros, Marc Souliere and Joshua Beveridge.
40. Omar Benson Miller took the stage as a presenter, fresh from his powerful turn in Sinners.





41. VES Award winners cheered as they were called to the stage to accept their statuettes.
42. Special effects whiz Adam Savage backstage with VES Awards founder and consulting executive producer Jeffrey A. Okun, VES.
43. Lil Rel Howery presented the final awards of the evening. The effortlessly charming Howery brought genuine joy to the evening, making the whole room feel like a party.
44. Jerry Bruckheimer, VES Lifetime Achievement Award recipient, with VES Board Chair Kim Davidson (left), director Joseph Kosinski and VES Executive Director Nancy Ward.
45. The VES Awards Committee celebrated the culmination of a year’s preparation to honor all the deserving award winners.
AVATAR: FIRE AND ASH





Avatar: Fire and Ash earned seven awards: Outstanding Visual Effects in a Photoreal Feature; Outstanding Character in a Photoreal Feature (Varang, played by Oona Chaplin); Outstanding Environment in a Photoreal Feature (Bridgehead Industrial City); Outstanding CG Cinematography; Outstanding Model in a Photoreal or Animated Project (The Windtraders’ Gondola); Outstanding Effects Simulations in a Photoreal Feature (Simulating Pandora); and the Emerging Technology Award (for the Kora Fire Toolset). (Photos courtesy of 20th Century Studios)
KPOP DEMON HUNTERS




KPop Demon Hunters won three awards: Outstanding Animation in an Animated Feature, Outstanding Character in an Animated Feature (Rumi) and Outstanding Effects Simulations in an Animated Feature. (Photos courtesy of Netflix)
PREHISTORIC PLANET: ICE AGE




Prehistoric Planet: Ice Age won for Outstanding Visual Effects in a Photoreal Episode (“The Big Freeze”) and Outstanding Effects Simulations in an Episode, Commercial, Game Cinematic or Real-Time Project (“The Big Freeze”). (Photos courtesy of Apple TV)

PIVOTAL TECHNOLOGY: HOW THE DYKSTRAFLEX TRANSFORMED THE VFX INDUSTRY – AND MOVIES
By BARBARA ROBERTSON
Magic can happen when the right group of people come together at the right time to work on the right project. That was true for the young visual effects crew working in a Van Nuys, California warehouse for the fledgling studio Industrial Light & Magic. The effects they created for the 1977 breakout film, Star Wars, made history. The film won six Academy Awards in 1978, including Best Visual Effects, made hundreds of millions of dollars and set the stage for the visual effects-driven blockbuster films to come.
As we commemorate ILM’s 50th anniversary, it’s fitting to celebrate an invention that made the visual effects in Star Wars possible. The dramatic opening scene and iconic space battles could not have happened without the Dykstraflex, the handmade, home-brewed motion control camera system the crew built for the film. Nothing quite like the Dykstraflex had existed before. ILM’s crew named the system after John Dykstra, the “Special Photographic Effects Supervisor” – aka Visual Effects Supervisor – and built it completely from scratch.
“It was a huge effort,” Dykstra says. “We had to make the assumption that all the interactive pieces, the motion control system, the speed, the scale of the models, would integrate and support one another based on the ability of the guys at ILM to collaborate, and their willingness to put the project in the fore
Images courtesy of Industrial Light & Magic.
TOP: Gene Kozicki, pictured here, and Brooke Breton worked in partnership with the Science and Technology Council at the Academy of Motion Picture Arts and Sciences to mount a working model of the Dykstraflex system for the popular exhibition at the Academy Museum in 2024.

rather than individual promotion.” In 1978, Dykstra, Electronics Designer Al Miller and Electronic Systems Designer Jerry Jeffress received the Academy’s Scientific and Technical Award in recognition of their achievement. The plaque read: “To John C. Dykstra for the development of a facility uniquely oriented toward visual effects photography, and to Alvah J. Miller and Jerry Jeffress for the engineering of the Electronic Motion Control System used in concert for multiple exposure visual effects motion picture photography.”
“I remember when the first Star Wars came out and I couldn’t figure out how the shots were done,” says John Knoll, Executive Creative Director at ILM, who was then a student and avid Cinefantastique reader. “There’s the famous shot of the TIE fighter diving down into the Death Star. It curves, drops, levels out and zooms down into the trench. I knew it wasn’t stop-motion because there’s motion blur, but it carried depth of field the whole way. I didn’t know how they did it.” They did it, of course, using the Dykstraflex, or D-Flex as ILMers often refer to the system. “The D-Flex was a real revolution,” Knoll says. “Before, there were often stylistic boundaries – live action would end and suddenly there would be a visual effect. The D-Flex was the first to break that boundary. The camera wasn’t restricted to shots with only simple

TOP: Visual effects camera operator Peter Daulton at work on Back to the Future Part III (1990).
BOTTOM: A standing-room-only crowd watches as a reconstituted Dykstraflex motion control camera system films a model of the Millennium Falcon at the Academy Museum in 2024.


moves and ones that had to be locked off. What we imagined could be executed. A whole world became doable.” Knoll joined ILM in 1986 and was a motion control camera operator using the D-Flex, its upgrades and revisions for three years before transitioning to the digital world.
THE SYSTEM
The 1,500-pound Dykstraflex system was built with stepper motors, a VistaVision camera on a boom, and typically a 40-foot track. Camera operators used joysticks or potentiometers to program camera moves that could be recorded and repeated precisely. Simultaneous control of different axes, such as roll, pan, tilt, swing, boom, traverse and track, meant spaceships could bank, dive, fly and curve. One pass might have an X-wing starfighter curve toward the viewer; another might send a second ship curving away. Additional passes could add lighting and the star field. Each pass would be viewed in black and white on a Moviola before being filmed in color and optically composited to create the final shots. Although the system was revolutionary, John Dykstra and his merry band of artists and engineers didn’t invent the Dykstraflex out of a void. Visual effects crews and others had used motion control systems for years. “In the early days, visual effects for the most part relied on stop-motion animation,” Dykstra says. “With these incremental still frames, there
TOP: Visual Effects Supervisor Dennis Muren, VES, at the controls of a motion-control camera as he shoots a Star Destroyer miniature for Star Wars: The Empire Strikes Back (1980).
BOTTOM: Peter Daulton working with a Dykstraflex motion control camera system at Industrial Light & Magic, 1982.

is no motion blur. People understand if something has a jittery motion without motion blur that it’s not real.”
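Stripped to its essence, the record-and-repeat principle described above can be sketched in a few lines of C. This is a hypothetical illustration only – the axis list and the I/O hooks are stand-ins, not ILM’s actual control code, which was custom-built hardware and software:

    /* Record a camera move once, then replay it identically for each pass.
       The hardware hooks below are hypothetical stand-ins. */
    #include <stdio.h>

    #define NUM_AXES   7    /* roll, pan, tilt, swing, boom, traverse, track */
    #define NUM_FRAMES 240  /* a 10-second move at 24 fps */

    static double move[NUM_FRAMES][NUM_AXES];  /* the programmed camera move */

    double read_joystick(int axis)           { return 0.0; }  /* operator input */
    void   drive_stepper(int axis, double p) { printf("axis %d -> %.2f\n", axis, p); }

    void record_pass(void) {            /* operator flies the move once */
        for (int f = 0; f < NUM_FRAMES; f++)
            for (int a = 0; a < NUM_AXES; a++)
                move[f][a] = read_joystick(a);
    }

    void replay_pass(void) {            /* repeat it exactly for a new element */
        for (int f = 0; f < NUM_FRAMES; f++)
            for (int a = 0; a < NUM_AXES; a++)
                drive_stepper(a, move[f][a]);
    }

    int main(void) {
        record_pass();   /* pass 1: program the move */
        replay_pass();   /* pass 2: first ship element */
        replay_pass();   /* pass 3: lighting or star-field element, same move */
        return 0;
    }

The point is the synchronization: because every axis is stored per frame, each optical element is photographed with exactly the same move.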
For Stanley Kubrick’s 2001: A Space Odyssey, Douglas Trumbull, VES, and his team devised a motion control system that moved a camera down a track past a model, panning in a linear way, which kept detail and depth of field. Multiple synchronized passes recorded and composited together produced beautiful final shots with high-quality, stately motion. But even using an upgrade for Silent Running, a camera operator couldn’t accelerate or decelerate the moving camera. Dykstra worked on Silent Running – his title was “Special Photographic Effects” – and that helped him land a job on a scientific project funded by the National Science Foundation at the University of California, Berkeley’s Environmental Simulation Laboratory. Researchers there wanted to evaluate whether simulated environments could convey how people might experience a proposed plan more accurately than static models and drawings could. The goal was to determine what made images believable.
For the simulation, they built a detailed model of a suburb and populated it with landscaping, streetlights, cars, houses, even gas stations. Each inch represented 30 feet of streets and terrain. A moving gantry above the model had a 16mm camera hanging down and an optical system that provided an eye-level view. A PDP-11 computer controlled the rig, allowing a route to be precisely

TOP: A motion-control camera is positioned over the miniature Death Star surface for Star Wars: Return of the Jedi (1983).
BOTTOM: Visual effects camera operator Don Dow, left, readies a shot for Star Wars: The Empire Strikes Back (1980) with Visual Effects Supervisor Dennis Muren, VES.


repeated. Of the 800 people who viewed the simulation, many refused to believe that models were used. And it was most effective when it simulated something never before seen by the viewers.
Dykstra’s next project would be Star Wars. George Lucas provided the opportunity, vision and need. Dykstra and the young crew he hired provided the MacGyver energy and willingness. “I think George wanted the audience to be a participant, to become engaged at a personal level, to have the engagement that 2001 was able to generate,” Dykstra says. “The whole premise of the movie screen being a portal into which you are drawn had a lot to do with George’s vision of having visuals that supported his reality. But one of the difficult things to do is to find out what the cues are.”
“People have subliminal survival systems,” Dykstra explains. “They’re able to evaluate speed, how much energy something traveling carries; whether that rock coming toward you has the same sense of mass and trajectory as in real life. Does it bring you out of your seat and into the movie?” That, he says, was the premise behind the dogfight battles that were intrinsic in the Star Wars movies; behind capturing the energy of handheld cameras in the middle of the action just like World War II movies, with ships flying by. “That was something I was totally involved in,” Dykstra says. “Creating that sense of momentum and personal involvement. That is what the Dykstraflex was involved in, to get all the subtle cues you have in real life.”
THE KEY
“The key was to figure out a way to integrate the motion of the camera in multiple axes and to have the camera in the same position more than one time,” Dykstra notes. “What we did was control
TOP: Visual Effects Supervisor John Knoll readies a motion-control camera movement with the Razor Crest miniature during production of The Mandalorian Season 1 (2019).
BOTTOM: The Dykstraflex motion-control camera system, complete with boom arm and dolly track, during a setup with a TIE fighter miniature for Star Wars: A New Hope (1977).

the camera moves with a computer, which gave us the ability to synchronize several axes with the motion of the camera and repeat them. We could generate multiple elements with the same camera move.” Knoll states, “To put all that into a package that was cameraman-friendly, with a conventional-like camera crane that could shoot any move, was transformative.” First cameraman Richard Edlund, VES, and his camera operators could film one pass and the system would precisely replicate the motion for a second pass. A boom arm let the underslung camera move close to a model. The VistaVision camera gave them large-format film. But there was something more.
“I had precision accuracy of the camera in all axes of motion and could vary the speed,” Dykstra says. “To move with acceleration with the motion control system was something no other system could do. Then, to enhance that sense of speed change, we could move close to the miniatures.” That flexibility proved to be essential for Star Wars’ spaceship dogfights. In the television documentary Light and Magic, George Lucas says, “Movies are kinetic. It’s about movement. Forget the actors, forget the story. It’s all about movement.” No other system provided camera moves with acceleration and deceleration.
“John [Dykstra] must have recognized that we needed to be able to slow things down and speed them up, and he was absolutely right,” says Dennis Muren, VES, who was a second cameraman at ILM back in 1977 on A New Hope and went on to win eight Academy Awards for best visual effects at ILM before semi-retiring. “If you’re going to have a curve, one motion is going to need to slow down, and then you have to go back and put in the other motion of speeding up to get the curve.”
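Muren’s point about speed changes can be illustrated with a modern toy example (mine, not period code): easing a dolly axis with a smoothstep profile so the camera decelerates into a curve and accelerates out of it, rather than traveling at constant speed:

    /* Ease-in/ease-out along a 40-foot track: velocity is zero at both ends
       and peaks mid-move, unlike the constant-speed rigs before the D-Flex. */
    #include <stdio.h>

    static double ease(double t) { return t * t * (3.0 - 2.0 * t); }  /* smoothstep */

    int main(void) {
        const double track = 40.0;   /* feet of dolly track */
        const int frames = 48;       /* two seconds at 24 fps */
        for (int f = 0; f <= frames; f++) {
            double t = (double)f / frames;           /* normalized time 0..1 */
            printf("frame %2d  position %6.2f ft\n", f, track * ease(t));
        }
        return 0;
    }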

TOP: A final composite of two X-wing fighters from Star Wars: A New Hope (1977). (Image courtesy of ILM and Lucasfilm Ltd.)
BOTTOM: The ILM crew prepares a motion-control shot of the Millennium Falcon in Star Wars: A New Hope (1977).


After New Hope, Muren moved north with ILM from Los Angeles to Marin County and was a cameraman on Close Encounters. “That system was all linear, nothing like the same situation at all; it was easy,” Muren says. “New Hope had been so hard I was just thinking about getting the shots done. It took a long time to figure out mechanically how to use the D-Flex. Each motor controlled one axis, so trying to do two things at once, like pan and tilt, was like working an Etch A Sketch. And you’re doing the layers all one at a time, so you can’t really see them together until you play it back. But, during that time I worked on Close Encounters, I thought about what I had learned on Star Wars. I thought about all the dynamics you can have in a three-dimensional space.” Muren envisioned these new shot designs knowing the Dykstraflex would give him the control he needed.
BEYOND A NEW HOPE
“If you can formulate three dimensions and fluidity in your mind, you can design shots that you’ve never seen before that are completely different from storyboards,” Muren says. “You may find a frame that’s like the storyboard, but the dynamics give it power. We used to take storyboards more literally,” he adds. “But you know, even previs today is done with cheating. If you have something go off in the distance, they’ll just scale it down. They think it’s the same thing, but it’s not. It’s not going to have the reality that the rest of the film has with actors walking around and living in that world.”
Muren tried out his new shot design ideas on Battlestar Galactica in 1978 using the Dykstraflex. He describes a storyboard with tiny Cylon ships off in the distance strafing the Galactica ship, shooting at it, getting farther and farther away. “I thought what if you start the Cylon close to the camera with the Galactica way down there,” he says. “You start going with the Cylon as it goes faster and farther, and it ends up reversed with the Galactica huge and the Cylon small. I realized recently that it’s a Vertigo shot. It was one long shot, but sadly they cut it into three shots for the film. That’s what the Dykstraflex gave me. The possibility. I could previsualize something in my head spatially with multiple objects moving at various speeds and put them in a way that it creates something dynamic at the level needed for a major motion picture – not just moving around, but with pacing – and then get it on film.”
After Galactica, Muren became Effects Director of Photography for The Empire Strikes Back and received an Oscar for Best Visual Effects, as did VFX Supervisor Richard Edlund, who had moved to Northern California with ILM. They both earned Oscars again for Return of the Jedi.
“I would never have conceived a tool that complicated,” Muren says of the Dykstraflex. “And yet once you figured it out, you could do anything. I took shot design further because I had a tool that could do it. And, I applied what I learned to later films, even to Jurassic Park, even though there’s no motion control in Jurassic Park. I’m very glad John built the Dykstraflex and that George let him build this crazy thing. It gave us a level of wonder that would not otherwise have been there. It opened up something for me.” It
affected others, as well. Take animator Peter Daulton, for example. “I learned how to be a camera assistant and motion control camera operator on the D-Flex,” says Daulton, who worked at ILM from 1983 until 2019. “Those skills came in handy when I made the switch to CG in 1993 as an animator. The Dykstraflex made my dream career possible.” Daulton is now writing, producing, directing and filming short documentary films. The last thing he remembers shooting on the Dykstraflex was a miniature gothic mansion for Death Becomes Her. “Soon after that, I switched to computer graphics. As a motion control operator, I had learned how to smooth curves, to modify our curves. That was a step toward CG.”
“The D-Flex was pivotal in spawning a significant era in visual effects,” says Jim Morris, Pixar’s General Manager, who was at ILM from 1987 to 2005 as a producer, Executive in Charge of Production, then General Manager. “It was a piece of mechanical equipment able to do a job because it was connected to a computer, which made the repeatability possible. It was a transitional link between the photochemical and mechanical technologies used for years and the digital worlds now.”
Motion control camerawork fell out of favor as miniatures became more expensive than CG models and computer graphics became more flexible – a digital model can appear in any number of concurrent shots, whereas a physical model can only be filmed for one shot at a time. But recently, John Knoll returned to the world of motion control camerawork to create some shots for the television series The Mandalorian. “[Producer, writer, director] Jon Favreau became interested in using miniatures,” Knoll says. “We showed him a couple of [CG] shots of the Razor Crest flying through space, and he said that something didn’t feel right. Shouldn’t it be more reflective? He thought we should build a model and shoot reference.”
Knoll took it on as a challenge and built a motion control system in his garage, including all the electronics, that he used to film 16 shots of a model built by modelmakers John Goodson and Dan Patrascu. “Even though we used the miniature for only 16 shots, they made all the other shots look better,” he says. “We can get a look with a miniature physical object and lighting that’s surprisingly hard to do with computer graphics. We retooled the look of the CG model.”
It’s easy to think of the Dykstraflex as simply the system that gave George Lucas his WWII dogfights in space, although that’s accomplishment enough. But the people using it achieved something deep and long-lasting that changed visual effects beyond Star Wars. It opened ideas for shot design that undoubtedly helped Dennis Muren win eight Academy Awards for Best Visual Effects. It helped foster new careers for artists like Peter Daulton. It was a transitional link. And a modern-day version of the Dykstraflex is in use today at ILM. It has given visual effects artists the possibility of images never seen before. And the revolution continues to this day.
“At their core, that’s what all these tools should be about,” Morris says. “They should extend our artistic reach, to be creative and make better stuff.” And then magic happens.
Dykstraflex at the Academy
The D-Flex moved to Northern California with ILM in 1978, and camera operators continued using it, and updated versions of it, until around 1992, when it was set aside. People lucky enough to be in L.A. in 2024 had an opportunity to see the legendary camera system in action during an exhibition at the Academy Museum of Motion Pictures.
“The Dykstraflex is part of the Academy Collection and a revolutionary camera system that fundamentally changed how special effects were filmed in the original [Star Wars] trilogy and transformed cinematic visual effects worldwide,” says Museum Director and President Amy Homma.
Working in partnership with the Science and Technology Council at the Academy of Motion Picture Arts and Sciences, Academy Foundation Vice President and VFX Branch Governor Brooke Breton, VES, along with Visual Effects Producer and Historian Gene Kozicki, VES, helped spearhead the effort.
It took about six months to plan and one week to cobble together and install the Dykstraflex and a model of the Millennium Falcon. Scheduled to run from May 4 to July 8, the exhibition was so popular the museum extended it to July 28. Live demonstrations occurred throughout.
“The turnout exceeded our expectations,” Homma says. “Especially during weekend demonstrations in May. It was standing room only for each of our sessions, and we reached our maximum capacity in our gallery.” Special guests included Richard Edlund and John Dykstra, both of whom received Oscars for Best Visual Effects for A New Hope, as did Jon Erland, VES, and Alvah Miller, the electronics and electronic systems designers.

OPPOSITE TOP TO BOTTOM: Visual Effects Supervisor John Knoll, left, and modelmaker John Goodson pose with the motion-control camera system and Razor Crest miniature created for The Mandalorian Season 1 (2019).
Camera operator Selwyn Eddy operates the Vista Cruiser during production of Enemy Mine (1985).
TOP: The miniature version of Moff Gideon’s light cruiser, seen here in a final composite from Season 2 of The Mandalorian (2019-23). (Image courtesy of ILM and Lucasfilm Ltd.)

COMING TO TERMS WITH HOLOGRAMS
By TREVOR HOGG
TOP: Glasses are an integral part of the technology as they separate the images for the left and right eyes to create the impression of depth. (Image courtesy of Axiom Holographics)
OPPOSITE TOP: Axiom’s holographic creatures add to the learning and enjoyment of children as the experience promotes interaction. (Image courtesy of Axiom Holographics)
OPPOSITE BOTTOM: Looking Glass Factory began by releasing a desktop holographic development kit and moved on to create a personal holographic display called Glass Portrait. (Image courtesy of Looking Glass Factory)
Although holography dates back to Gabriel Lippmann’s use of lightwave interference to capture color in photography in 1886, there is still considerable confusion about what constitutes a hologram, which has led to it becoming a catch-all term. “A lot of what gets marketed as ‘holographic’ is just clever 2D display work, projections, screens and visual tricks dressed up to look three-dimensional,” explains Jeff Barnes, VES, Founder of Maginera, which provides consulting for visual effects and immersive media experiences from Santa Barbara, California. “A true hologram, in the optical sense, recreates a ‘real image.’ It delivers an abundance of viewpoint-dependent information so the scene changes as you move your head or shift position. This includes proper motion parallax, near objects sliding faster across your view than distant ones, correct occlusion of background objects by foreground objects, and physically correct reflections and refractions. When those cues align, your brain perceives a real three-dimensional scene, as in the real world. Achieving that level of realism requires a display that serves high-resolution scene data that updates with both angle and location as the real world does. There’s also the eyeline rule: a genuine holographic object can only appear in the sightline between your eyes and its light source. If something seems visible outside that geometry, you’re looking at a cinematic illusion, not a hologram.”
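Barnes’ motion-parallax cue can be put in textbook terms (standard optics, offered here as an illustration rather than drawn from his remarks): a sideways head movement of d shifts an object at distance r through an angle of roughly

    \delta\theta \approx \frac{d}{r}

so an object one meter away slides across your view ten times faster than one ten meters away. A display only reads as a true hologram if it reproduces that differential continuously as the viewer moves.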

Based in Brooklyn, New York, Looking Glass Factory began by releasing a desktop holographic development kit and later introduced a personal holographic display called Glass Portrait, as well as enabling hologram sharing on the Internet via the Looking Glass Blocks platform. Most of the questions revolved around solving the problem of presence. “How do you make a person or a character feel more like they’re in the room than they would feel on a 2D display?” asks Shawn Frayne, CEO and Co-Founder of Looking Glass Factory. “Whether it was the shark in Back to the Future II that gobbled up Marty McFly or the holodeck in Star Trek, each of them has a different flavor and perspective on how a human-computer interface fits into the future. The holodeck, for example, is a room you have to go into, but in that room, anything can happen. You can create anything. In the case of Marty McFly being gobbled up by the holographic shark, the view of the future is that holograms are not confined to one place but are pervasive, like 2D displays are today. Of course, physics comes into it at a certain point when you want to make something real.”
Axiom Holographics, based in Murarrie, Australia, has developed hologram zoos and holographic devices ranging from rooms, tables and walls to tunnels, as well as real-time projections for stage performances. “We’ve done the hologram zoo entertainment
chains, and that’s the fastest growing entertainment chain in the world – 45 centers in 16 months,” states Bruce Dell, CEO of Axiom Holographics. “They’re popping up all over the world, in Mongolia and Nepal, as well as all the proper places in Europe and America. It’s like a normal zoo, except you go there and all of the animals are giant projections. You can see them from the front and from the other side. They come off the walls and fly around with people. Things that people haven’t seen before. Children are laughing and screaming because dinosaurs come to try and eat them, and they’re




Proto Hologram has struck deals with celebrities such as William Shatner to appear as AI avatars. (Image courtesy of Proto Hologram)
Anyone with a smartphone can utilize the Proto technology. (Image courtesy of Proto Hologram)
life-size. At the moment, the belief is that it could be the future of entertainment.”
Headquartered in St. John’s, Newfoundland and Labrador, Avalon Holographics produces a holographic table display called NOVAC that can turn digital twins, AI or 3D environments into holograms. “I don’t necessarily care whether the photons are being generated from the screen out or whether they’re being reflected off the screen from a projector somewhere else,” notes Russ Baker, Vice President and Co-Founder of Avalon Holographics. “What I care about is the direction the photons go as they come off the screen. The density of the individual light sources is probably the single most important thing.” Everything comes down to time and money. “There’s nothing at a physics level stopping you from getting to a point where you could eventually build these incredibly dense arrays of light sources at scale, which means you can get the cost and power consumption down and do them using manufacturing technologies that allow you to compress everything. It’s a long tech cycle, but it is possible. Our technology is still in the first stage.”
Located in Adelaide, Australia, Voxon employs VLED technology to produce real-time interactive volumetric holograms with millions of points of light floating in 3D space. “The closest we have to the concept of the holodeck is putting on a VR headset and being able to step inside,” observes Will Tamblyn, Co-Chief Executive Officer of Voxon. “VR has done a good job of creating that single-person experience. You’re standing inside, looking out at the world, whereas our technology does not require headsets. You walk up, and it’s there; you’re outside looking into the world. Holographic display technologies have used either the glasses or a two-dimensional screen to simulate a three-dimensional volumetric scene through a window. But our technology physically creates that scene in three-dimensional space. It’s basically 3D printing with light. It offers a different interaction to anything screen-based because you can walk 360 degrees around it, and it physically occupies that volume, and you can have what we call a ‘campfire experience.’”
Based in Van Nuys, California, Proto Hologram enables the creation, management, delivery and playback of interactive human-AI experiences through the life-sized holoportation device Proto Luma and the tabletop-sized Proto M2. “We’re being used quite a bit at universities,” states David Nussbaum, Founder and CEO of Proto. “Professors and teachers from all over the world are beaming into universities in dozens of countries. We’re at the University of Central Florida, Duke University and MIT, and they’re being used in seminars and training, medical schools, and even for remote diagnosis to train future healthcare providers.” Connected to education is healthcare. “Because of our HIPAA compliance, oncologists are beaming from the large hospitals into small rural clinics miles away to have encrypted and private one-on-one consultations with their cancer patients,” Nussbaum says. Merchandise, such as shoes, can be spun around and zoomed in and out, courtesy of the touchscreen. “We’re in about 100 malls right now, and it’s
TOP AND BOTTOM: Avalon displays are driven by real-time 3D tools and ray tracing. (Image courtesy of Avalon Holographics)
being used a lot for advertisements, clothing and brands.” Deals have been struck with celebrities such as William Shatner and Howie Mandel [an investor in Proto] to appear as AI avatars, and a studio promotional campaign featured cast members of Now You See Me 3 taking selfies with fans. “You walk into the movie theater, tap the screen, and whoever is standing there takes a selfie with you. What’s so cool about it is that they’re the ones who are taking their phones out and taking the selfie.”
Holographic Studios, the world’s oldest gallery of holography, is situated in New York City. Its founder views the idea of directors being able to walk through volumetric environments and adjust effects in real-time without headsets as more fiction than science. “A hologram is an optical recording of an image stored in a field of interfering light waves,” states Jason Sapan (aka Dr. Laser), Founder of Holographic Studios. “The basic theory has remained the same, but the technology and techniques have evolved with the introduction of digital.” As for what is involved in producing a holographic set, Sapan responds, “A typical hologram utilizes laser light to illuminate an object and then creates interference by having that reflected light meet up with pure light from the same laser. The phase displacement is captured on a photographic medium and stores the full three-dimensional map of the original object being recorded.” There will come a time when holograms can appear outdoors, much like an AR application. “It’s a two-step process. If you use data recorded outside in a manner that can capture parallax, such as recording film or video on a rail, this can then be used to recreate that information as a hologram back in the lab.” Real-time game engines and machine learning/AI are emerging technologies for creating and executing holograms. “It is still currently in the early stages,” Sapan says.
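Sapan’s description corresponds to the standard holographic recording relation, given here in textbook form as an illustration (not from the article). If the object wave has amplitude E_o and phase \varphi_o, and the reference wave E_r and \varphi_r, the medium records the intensity

    I = \left| E_o e^{i\varphi_o} + E_r e^{i\varphi_r} \right|^2
      = E_o^2 + E_r^2 + 2 E_o E_r \cos(\varphi_o - \varphi_r)

The cosine term is the “phase displacement” he mentions: it encodes the three-dimensional shape of the object, and re-illuminating the developed recording with the reference beam reconstructs the original object wave.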
Directors being able to walk through volumetric environments and adjust effects in real-time without headsets is something else entirely. “You’re referring to the fake CGI that we see in sci-fi movies,” Sapan says. “Holograms do not work like that. If you stand inside a holographic image, you wouldn’t see it.” The notion of a holodeck is based on misconceptions. “Think of a hologram as a sculptural mold. It has recorded shape within that mold.” The current state of holographic technology has strengths and weaknesses. “Currently, motion has limitations. We can film a motion image hologram, but the vertical motion must be slow or the image smears. Holography excels in its detail since our ‘pixel’ is a light wave; therefore, you can actually magnify the image to see fully dimensional views.” Projection technology has limitations. “While projection technology has improved dramatically, it will never be capable of the fine resolution intrinsic in holograms.” Sapan does not have a favorite hologram. “Since there are many different types of holograms, I do not have a single favorite image, but I love seeing artists explore the medium in nontraditional ways.” Holograms are part of the ebb and flow of technological innovation. “I have always felt that holography is a bridge to future visual media in the same way as black-and-white photography was to color. Eventually, a newer and totally different process will be able to breach the limits of this technology.”



Triceratops is one of the dinosaurs that can be found in the holographic zoos developed by Axiom Holographics.
(Image courtesy of Axiom Holographics)
NOVAC is a three-foot-square holographic table, and there is already military demand for it to do mission planning and battle management within a 3D space.
(Image courtesy of Avalon Holographics)
TOP TO BOTTOM: The evolution of artificial intelligence has made Proto more of a spatial computing platform than a holographic display technology. (Image courtesy of Proto Hologram)

ATTILA T. ÁFRA HAS TURNED CODING INTO AN ART FORM
By TREVOR HOGG
What makes the entertainment industry unique is that artists and technicians have to work together in order to turn a creative idea into something tangible. Making a significant contribution on the technological side of the equation is Attila T. Áfra, who has gone from writing a series of theses on ray tracing while earning a Ph.D. in Computer Science at Babeș-Bolyai University to receiving two Academy Awards for Technical Achievement: one in 2021 for early research and technical direction of the Intel Embree Ray Tracing Library, and another in 2025 for Intel Open Image Denoise, which applies U-Nets to denoising.
His fascination with computer programming was fostered at a young age. “My father was already interested in electronics and introduced me to programming when I was in second or third grade,” recalls Áfra, Principal Engineer at Intel, Open Image Denoise. “I got this Romanian clone of the ZX Spectrum and a programming book from him. We sat down and did a few things together, but after a while he realized that I could do the learning on my own. The technical affinity in the family was always strong, and it was a natural thing to get into programming and computer science because it was a hot topic at that time. All of these new opportunities opened up just a few years after the revolution in Romania. Before that, you could only access mainframe computers at big companies; personal computers were completely unknown. At that time, it was possible for anyone to buy a personal computer and do something with it. My parents thought this could be a promising direction to go. It was an experiment to see whether I would be interested or not, and the experiment was successful!”
Reading PC Magazine as a grade five student led to an epiphany. “PC Magazine came with a CD, and on that were application and game demos,” Áfra states. “The demo scene was something that caught my attention a lot, because these little programs, ranging from a couple of hundred bytes or a few kilobytes up to megabytes, were all about showing off the skills of these mostly anonymous programmers. It was mind-blowing that it was possible to render some interesting effects in just a couple of hundred bytes, written in assembly code. Regardless of the complexity of the code, these were visually more impressive than anything else I’d seen before. There were tutorials in the magazine about how to write some of these little effects; how to render a Mandelbrot fractal in 256 bytes, for example. I started to move more into the graphics direction in programming.”
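For flavor, here is what such an exercise looks like today: a compact ASCII Mandelbrot renderer in C. This is purely illustrative – far larger than 256 bytes and not taken from the tutorials Áfra read:

    /* Escape-time Mandelbrot rendered as ASCII shades. */
    #include <stdio.h>

    int main(void) {
        for (int row = 0; row < 24; row++) {
            for (int col = 0; col < 78; col++) {
                double cr = -2.0 + 3.0 * col / 78;   /* map column to real axis */
                double ci = -1.2 + 2.4 * row / 24;   /* map row to imaginary axis */
                double zr = 0, zi = 0;
                int i = 0;
                while (i < 32 && zr * zr + zi * zi < 4.0) {
                    double t = zr * zr - zi * zi + cr;  /* z = z^2 + c */
                    zi = 2 * zr * zi + ci;
                    zr = t;
                    i++;
                }
                putchar(" .:-=+*#%@"[i * 9 / 32]);      /* escape time to shade */
            }
            putchar('\n');
        }
        return 0;
    }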
Sparking an interest in ray tracing was the release of Avatar. “Avatar introduced ray tracing in a crude way,” Áfra notes. “This was a joint effort between Wētā FX and NVIDIA, and they were actually rendering some gigantic datasets, which were much bigger than what anyone else had done. Ray tracing was something appealing, because before that, most of these large-scene rendering algorithms were based on rasterization. But ray tracing has some inherent algorithmic advantages compared to rasterization. Avatar was a huge milestone at that time, and that inspired me to go deeper into ray tracing. But even before my Ph.D., I did some ray tracing experiments, mostly ray casting, and I had a little project where I was visualizing planet-scale atmospheric scattering in real-time. I wrote that for a local competition at the university, and
Images courtesy of Attila Áfra and Intel, except where noted.
TOP: Attila Áfra, Principal Engineer, Open Image Denoise, Intel

“There is still a lot of work and research to do with AI methods. The tools haven’t yet been developed to make the gap between real-time and offline smaller and smaller. This is something I’m thinking about nowadays, and it will keep me busy for at least a couple more years from now.”
—Attila T. Áfra, Principal Engineer, Intel, Open Image Denoise
technically, that was my first ray tracing or ray casting project.”
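At its core, the ray casting Áfra describes reduces to intersection tests like the textbook ray-sphere case below (my own sketch of the general technique, not his competition code):

    /* Intersect a ray org + t*dir with a sphere by solving a quadratic in t. */
    #include <math.h>
    #include <stdio.h>

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* Returns the nearest positive hit distance, or -1 if the ray misses. */
    double ray_sphere(Vec3 org, Vec3 dir, Vec3 center, double radius) {
        Vec3 oc = { org.x - center.x, org.y - center.y, org.z - center.z };
        double a = dot(dir, dir);
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4 * a * c;
        if (disc < 0) return -1.0;               /* no real roots: a miss */
        double t = (-b - sqrt(disc)) / (2 * a);  /* nearer of the two roots */
        return t > 0 ? t : -1.0;
    }

    int main(void) {
        Vec3 org = {0, 0, 0}, dir = {0, 0, 1}, center = {0, 0, 5};
        printf("hit at t = %f\n", ray_sphere(org, dir, center, 1.0));  /* 4.0 */
        return 0;
    }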
Time was spent as a teaching assistant at Babeș-Bolyai University in Romania. “I haven’t taught that much at universities, mostly during my Ph.D.,” Áfra remarks. “After a while, I felt that it wasn’t satisfying enough for me. It got repetitive because we have a curriculum, and every year you basically need to teach the same thing. However, I learned things during teaching, such as how to explain difficult concepts to people who are not really familiar with that domain; something which can be used outside of teaching.”
Áfra also attended the Budapest University of Technology and Economics where he met his most influential mentor. “Professor László Szirmay-Kalos is a legend in computer graphics in Hungary. He was one of the first researchers to introduce computer graphics teaching in universities. He wrote some famous books as well, which helped many Hungarians get started and interested in the domain. Under his lead, I started my Ph.D., mostly regarding ray tracing, and that was the primary focus, which continued later on as well.”
During his Ph.D., Áfra attended multiple conferences, leading him to be recruited by Intel Sweden Tech Lead Tomas Akenine-Möller as a rendering intern; subsequently, he was hired full-time as a ray tracing software engineer. “There were so many hacks with rasterization,” Áfra observes. “You couldn’t even do simple reflections; those also needed to be hacked. Movies looked so much better than games, and there was obviously a gap between them.

TOP: A final frame from Smurfs, denoised with Intel® Open Image Denoise, from Paramount Animation. (Image courtesy of Paramount Pictures, Smurfs™ & PEYO)
BOTTOM: A still from Animal Farm, denoised with Intel® Open Image Denoise. (Image courtesy of Angel Studios, Aniventure & Imaginarium Productions)



Áfra at home with his first PC, 1999.
Áfra giving a talk on Intel® Open Image Denoise at the High-Performance Graphics conference in Copenhagen, Denmark, June 2025.
One motivation for me was, ‘How far could we push real-time graphics so we could get closer to what the movies can achieve?’ Ray tracing was something that was too far and not achievable at that time, not even for movies and especially not for games. But after all the reading and demos that I saw, I liked the elegance of it. I also did it for the sake of doing it because it was interesting to me. The other thing is that I wanted to do something that could help people create nice images.”
Computer software development is driven more by the needs of the client than by anticipating what will be needed. “In the case of Intel Open Image Denoise, it wasn’t the first denoiser on the market, but it was the first open-source AI denoiser,” Áfra explains. “The fundamental requirements were clear, like, what it should do roughly, what kind of inputs it should expect, what kind of output it should produce, and roughly how it should integrate into a workflow. As soon as we have some kind of basic functionality up and running, then we basically need feedback from clients to say, ‘This is good, but that should be better or faster. The quality here isn’t that good. I need this feature.’ As you move along with the process, you get more people and customers to talk to, and in many cases, everyone wants the same thing. There are some features that are universally required. In other cases, it wasn’t something we thought would be necessary, but customers said, ‘This would be helpful for our workflow.’”
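To make the integration concrete, here is a minimal denoising pass written against Open Image Denoise’s documented C API, shown with 1.x-style names (for example, oidnSetFilter1b, which version 2.x renames; consult the current documentation). The color and output buffers are assumed to come from your renderer:

    /* One denoising pass over a width*height float RGB image. */
    #include <OpenImageDenoise/oidn.h>
    #include <stdbool.h>
    #include <stdio.h>

    void denoise(float* color, float* output, int width, int height) {
        OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_DEFAULT);
        oidnCommitDevice(device);

        OIDNFilter filter = oidnNewFilter(device, "RT");  /* ray tracing filter */
        oidnSetSharedFilterImage(filter, "color",  color,
                                 OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
        oidnSetSharedFilterImage(filter, "output", output,
                                 OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
        oidnSetFilter1b(filter, "hdr", true);             /* HDR input */
        oidnCommitFilter(filter);
        oidnExecuteFilter(filter);

        const char* msg;
        if (oidnGetDeviceError(device, &msg) != OIDN_ERROR_NONE)
            fprintf(stderr, "OIDN error: %s\n", msg);

        oidnReleaseFilter(filter);
        oidnReleaseDevice(device);
    }

Auxiliary albedo and normal images can be attached the same way to improve quality, which reflects the feedback-driven feature growth Áfra describes.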
Creating software for a hardware company is not an entirely different dynamic. “The end result is much the same regardless of what kind of company is paying for the project,” Áfra states. “But internally, it’s different, because justifying creating purely software products in a hardware company isn’t trivial. There are some obvious benefits and connections to the hardware. One of the goals of Embree and Open Image Denoise was to make sure that these

kinds of rendering tasks would work efficiently on Intel hardware. But these aren’t exclusive to Intel hardware. Initially, both targeted only CPUs, but implicitly, that meant that not only Intel CPUs would be supported. Later on, for Open Image Denoise, GPU support was added for many different vendors; that was something that we had to do because of the demands from the customers.”
Hardware and software influence each other. “The advantage of the ability to work on such a project in a hardware company is that we know up front what kind of hardware we’ll be releasing,” Áfra observes. “This means we can make sure that by the time that particular hardware gets released, the absolutely optimized version of the software can already be on the market on day one. The other thing is that there’s also an opportunity to influence the hardware. Since the hardware and software are being developed at the same company, we can work together to ensure that a future version of the hardware could take better advantage of the kind of workload that we are working on.”
The ambition to create nice images led to two Academy Awards. “Embree was recognized because of three reasons,” Áfra states. “Its main goal was to squeeze out every last bit of performance from whatever CPU or hardware you have. The second reason was there weren’t that many libraries, especially for visual effects, that were open source. The third one was the ease of use.” Denoising was the natural next step. “Open Image Denoise got popular because there was no other AI-based denoiser on the market that was open source. Modification, regardless of the nature of it, was important. Performance was another important aspect. Open Image Denoise was not only suitable for offline rendering but also for interactive rendering, which is a critical part of the artistic process. Finally, being able to run on CPUs meant Open Image Denoise could


Áfra defending his Ph.D. thesis “High-Performance Ray Tracing on Modern Parallel Processors” on November 22, 2013, at Babeș-Bolyai University in Cluj-Napoca, Romania.
Two Academy Technical Achievement Awards received for Intel Embree and Intel Open Image Denoise.




BOTTOM TWO: Noise-free frame denoised with Intel® Open Image Denoise and noisy frame from the 2019 “open movie” Spring, rendered with Blender Cycles using Intel® Embree to accelerate ray intersections. The noisy image was intentionally rendered with very few samples per pixel to better illustrate the effect of denoising. (Images courtesy of Blender Studio)
operate anywhere; that flexibility was one of the other reasons it got the award.”
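That ease of use is easiest to see in code. The sketch below builds a one-triangle scene and traces a single ray using the Embree 3 C API as documented in the library’s tutorials (Embree 4 adjusts the rtcIntersect1 signature):

    /* Build a BVH over one triangle and intersect one ray with it. */
    #include <embree3/rtcore.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        RTCDevice device = rtcNewDevice(NULL);
        RTCScene  scene  = rtcNewScene(device);

        /* One triangle in the z = 1 plane. */
        RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
        float* verts = (float*)rtcSetNewGeometryBuffer(geom,
            RTC_BUFFER_TYPE_VERTEX, 0, RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
        unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom,
            RTC_BUFFER_TYPE_INDEX, 0, RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
        float v[9] = { -1,-1,1,  1,-1,1,  0,1,1 };
        for (int i = 0; i < 9; i++) verts[i] = v[i];
        idx[0] = 0; idx[1] = 1; idx[2] = 2;
        rtcCommitGeometry(geom);
        rtcAttachGeometry(scene, geom);
        rtcReleaseGeometry(geom);
        rtcCommitScene(scene);   /* builds the acceleration structure */

        /* A ray from the origin straight down +z. */
        struct RTCIntersectContext ctx;
        rtcInitIntersectContext(&ctx);
        struct RTCRayHit rh = {0};
        rh.ray.dir_z  = 1.0f;
        rh.ray.tfar   = INFINITY;
        rh.ray.mask   = 0xFFFFFFFF;
        rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;
        rtcIntersect1(scene, &ctx, &rh);

        if (rh.hit.geomID != RTC_INVALID_GEOMETRY_ID)
            printf("hit at t = %f\n", rh.ray.tfar);  /* tfar shrinks to the hit */

        rtcReleaseScene(scene);
        rtcReleaseDevice(device);
        return 0;
    }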
Some of the software has been used in surprising ways. “There were a couple of cases where I was like, ‘Wow. This is amazing,’” Áfra remarks. “For example, in one particular case, even though the project was designed for rendering beautiful images for visual effects or CAD, we learned just by accident that it is being used for thermal modeling for buildings to determine which walls are getting more sunlight and how they should design the interior so that the interiors wouldn’t be too warm.” Computer programming requires abstract thinking. “Many students have a hard time understanding some of the more abstract concepts in programming like, ‘What is a variable? What is a CPU register? How do you fix something in there?’ Some of this thinking comes from math.”
Often, limitations open up other possibilities that otherwise would not be pursued. “Jurassic Park is a classic example of this,” Áfra notes. “Computer graphics was in its infancy at that time, but still, after many years, Jurassic Park is considered amazing. If you looked back at how it was created compared to modern visual effects, it’s absolutely crazy. A lot of that happened because of the limitations they had at the time. Sometimes, if you have too many options or something is too easy to do, especially with generative AI nowadays, you can get lazier.” AI has not altered the workflow. “Even if I’m doing AI for a project I’m working on right now, I still have to do ray tracing for the training dataset. AI is a little bit different because it’s more abstract. I don’t have as much control over a neural network as I do over the traditional code that I write.” Advances in technology inspire further innovation. “When we achieve the next frontier, we will realize that there are further problems to solve and we can push it even further, but what we have right now isn’t sufficient for that. Generative AI is the next step on this ladder.”
Eventually, real-time and offline rendering will merge. “It makes sense from a practical perspective,” Áfra states. “Artists can almost see the final result while they’re actually working on the model or scene. That’s a huge difference from just seeing the wireframe, starting a render and seeing the result a few days later.” In the short term, there is still a lot of work to do on Open Image Denoise. “One key feature that is missing is temporal denoising; that makes a big difference in terms of the quality and reduces render time significantly. Once that is finished there will still be a ton of work to do because real-time ray tracing is still resource-constrained, and we still cannot do what we can do with visual effects. There is still a lot of work and research to do with AI methods. The tools haven’t yet been developed to make the gap between real-time and offline smaller and smaller. This is something I’m thinking about nowadays, and it will keep me busy for at least a couple more years from now. I have way more work than I can handle lined up in front of me, for sure! Beyond five years, it’s impossible to say. This is such a dynamically changing landscape that I’m sure I will know what’s the next step once I get there.”
TOP: Embree interactive demo at the Intel booth at SIGGRAPH 2016, Anaheim, California.

WORLD-BUILDING THE EDGE OF JAPAN FOR GHOST OF YOTEI
By TREVOR HOGG
A mainstay of the creative team at Sucker Punch Productions since working on Sly 2: Band of Thieves (2004) as a texture artist, and subsequently the environments lead on Ghost of Tsushima (2020), Joanna Wang got to explore the Edo period as Art Director on Ghost of Yōtei (2025), which takes place 329 years after the samurai sensation. Lessons learned from Ghost of Tsushima were applied to the revenge tale, in which the mercenary Atsu attempts to find and kill the Yōtei Six, who are responsible for the death of her family.
Images courtesy of Sucker Punch Productions and Sony Interactive Entertainment.
TOP: The challenge for the Yōtei Grasslands was making it inviting for new players while carrying an emotional weight for Atsu.
OPPOSITE TOP: Screenshot from Ghost of Yōtei
“From Ghost of Tsushima, we learned how powerful it is when nature tells the story,” notes Joanna Wang, Art Director at Sucker Punch Productions. “How color, light and weather can guide a player’s heart without a single word; that emotional honesty became our compass. With Ghost of Yōtei, we wanted to carry that soul forward. The same poetry, the same beauty in motion, but expressed through a colder, wilder, untamed land. Ezo has a different spirit. The wind feels stronger, the sky stretches wider and the weather is unpredictable. It’s raw land beyond the edge of Japan. That contrast is what makes Yōtei feel connected, yet new. Tsushima gave us the heartbeat; Yōtei gave us a new voice to speak with.”
World-building always begins with the same idea. “It’s not

only about how the world looks, but what we want the player to feel,” Wang states. “Art, gameplay, culture and mood all have to work together. The environment should guide the player, support the story, respect the land, and still carry our emotional, artistic voice. This time, with Ghost of Yōtei, the approach evolved because the map itself is different. We divided the island into six distinct regions, each with its own look, emotion, and even gameplay rhythm. We designed each region with a sense of build-up, a shift in tone and atmosphere as players cross into new areas. The core methodology stayed the same, but the scale and structure of Yōtei pushed us to think more deeply about regional identity and how the world grows with the player’s journey.”
Guiding the visual language was a particular phrase coined by co-directors Nate Fox and Jason Connell. “I remember very early on, the phrase they used was ‘The edge of Japan,’” Wang remarks. “It instantly captures the feeling of wild, mysterious, untamed land with so many stories waiting to be told.” Dealing with the scale of Hokkaido was the most immediate challenge. “Those open fields that stretch forever, forests that swallow the light, and snow mountains that rise like giants on the horizon. It stretched both vertically and horizontally. It’s beautiful, but also incredibly intimidating. That scale pushed us in every direction. Gameplay had to
adapt because encounters work differently when players can see danger from a mile away. Rendering had to adapt, since showing that much land, sky and shifting weather without breaking the game became a major technical hurdle. Art had to adapt too. We had to make those massive spaces feel intentional and emotional, not empty. It wasn’t easy, but that’s what made it exciting. Those challenges shaped Yōtei into something bold and new.”
Locations were scouted in Japan, such as national parks, remote wild areas and Matsumae Castle. “We even spent time with an Ainu family, foraging, cooking, learning how they honor nature and seeing how they design fabric,” Wang recalls. “We visited their museum to better understand their history. The team brought back tons of photos and videos for reference, but honestly, the most important thing we brought back is memory. What we saw, what we heard, what we felt, even what we tasted in Hokkaido. The way wind moves through bamboo grass, the texture of old tree bark, the sound of birds echoing across open fields. Those sensory moments stayed with us and became the emotional foundation of our world. Also, same as we did for Ghost of Tsushima, we worked closely with Japanese cultural advisors every step of the way. They guided us, corrected us and helped ensure our interpretation stayed respectful and authentic. All of that: those experiences,


those feelings and that collaboration shaped the heart of the world you see in the game.”
Narrative, mood, emotion and gameplay feed into one another. “The narrative, mood and emotion set the tone for how the world should feel – lonely, hopeful, dangerous, peaceful – and that feeling becomes the backbone of our world-building,” Wang explains. “Gameplay then shapes the rhythm, where tension builds, where players can breathe, where the world needs to open up or tighten in. But it also works the other way. For example, memory swap changes the environment and can quietly nudge the narrative in a new direction just by one push of a button. For us, none of these pieces stand alone. Mood shapes the world, gameplay shapes the flow, and the world answers back with new ideas. That push and pull is what makes the world feel alive, and why everything feels connected when you step into Yōtei.”
An important aspect of gaming is creating environments that players want to explore. “For me, the real trick is curiosity through contrast,” Wang remarks. “Giving the world small moments that tug at you. A single red tree burning on a distant hill, a strange rock silhouette that feels like an animal, or a weathered gate standing alone at the edge of a forest; little things that make you wonder, ‘What’s over there?’ And then the world gently guides you. The wind kicks in and hints you toward your next destination. A patch of white flowers that give your horse a boost. A bird suddenly appears, inviting you to follow and discover something hidden. When the world feels alive like
TOP AND BOTTOM: Ishikari Plain went through several iterations to get the right balance on theme, shape, color and content density.

that, full of tiny invitations and quiet mysteries, players explore, not because they’re told to but because their heart pulls them forward.”
Six major regions are explored. “The Yōtei Grasslands is where Atsu’s story begins, so it needed warmth, memorable places and the constant presence of Mount Yōtei,” Wang states. “We added memory swaps throughout the area to make you feel this is the place she lived 16 years ago – familiar, comforting, but touched by loss. The challenge was to make it inviting for new players while carrying that emotional weight.” Tokachi Range is one of the most open, vast areas of the game. “Endless grass fields stretching toward distant snow peaks. The challenge was scale and readability. We balanced the size and shape of landmarks and POI [points of interest]. The pagoda stands behind a foggy bamboo forest, a wind-shaped tree sits in the middle of the hill, and a sharp vertical mountain cliff leads you to one of the most beautiful shrines. We try to keep it open and natural without letting it feel empty or directionless.”
Teshio Ridge, in the far north, is harsh, mysterious and breathtakingly beautiful. “Enemies can disappear into the white and ambush you from the snow, and the terrain hides countless secrets,” Wang remarks. “We built layered puzzles through caves, frozen passages and hidden pockets of landscape, making exploration feel risky and rewarding in this unforgiving world.” The Nayoro Wilds are home to the Ainu, an indigenous ethnic group residing in northern Japan. “We worked closely with Ainu cultural advisors, understanding traditions and making sure the environment felt respectful and authentic without losing our artistic voice.”

TOP AND BOTTOM: The Oshima Coast is where some of the biggest battles unfold, giving the region a sense of history, tension and movement.

World-building is not only about how the environment looks, but what feeling you want to evoke from the players.

Atmospherics are part of the world’s personality, shaping how players experience every moment they spend in Ghost of Yōtei.
Ishikari Plain was the first valley to be worked on. “We had lots of iteration,” Wang reveals. “It took a good amount of effort finding the right balance on theme, shape, color and content density. In the end, we began with Hokkaido’s vibrant autumn colors, then shaped the region around the Oni and fire theme. Orange-red trees add to the mood while a steep hilltop castle dominates the entire space. Burned forests and ruined towns add strong environmental storytelling.” The Oshima Coast is the most established area due to the presence of the Matsumae base. “Its layout needed to feel structured and lived in. Inspired by the real Matsumae Castle, it features towns, farmland and fortified spaces tied closely to the narrative. It’s also where some of the biggest battles unfold, giving the region a sense of history, tension and movement.”
Each region has its own landmark and signature moment. “Mount Yōtei is the big one,” Wang notes. “It sits right in the heart of the map, a massive, isolated volcano that you can see from other regions. Its scale plays completely differently from the rest of the world – towering, quiet and always watching you. It became a visual landmark of the entire game. Another key landmark is Matsumae Castle. With its white walls, dark rooflines and the cherry blossoms framing the structure, it brings a strong cultural presence and a sense of history to the world. There are many more, but those two are perfect examples of how each region gets its own identity, something iconic, memorable and emotionally tied to that part of Yōtei.”
Shrines are an integral part of the spiritual life in Japan. “With a map as big and rich as Yōtei, the shrines were our chance to show players the world from a different perspective,” Wang explains. “We treated each shrine like a little stage, a pocket of the world with its own personality. Maybe it’s perched on a mountaintop with the whole valley unfolding beneath you, or tucked by a misty lake where everything feels soft and quiet, or on a coastal rock where the wind hits hard and waves crash beneath your feet. We wanted players to pause, take a breath and just feel each place. Also, every shrine needed its own journey, a traversal challenge that fit its vibe. Some build upward, some stretch sideways, some tease you with a view before making you work for it. The fun part was letting the landscape lead us, finding those special spots where the world already felt magical, then designing a shrine that brings that magic to life.”
In order to emotionally engage players, the visuals had to feel like a living painting. “To get there, we leaned heavily into shape, color and motion,” Wang states. “Clear silhouettes that read instantly. Bold color palettes that set the mood the moment you enter a space. And natural motion – wind sweeping through grass, snow drifting off branches, mist rolling across the hills and birds singing through the sky. All of these layers stack together to bring the world to life.” Visuals are not meant to be distracting. Wang adds, “At the same time, we’re constantly simplifying, stripping away anything that doesn’t matter, from big-picture layout down to the small texture level. That way, the important things can stand out – your path, your goal and the emotional beat of the moment. When it all comes together, the world stops feeling like a level and starts feeling like a living painting.”
Seasonal changes in nature guided the color palette. “Each region has its own moment in the year,” Wang explains. “Spring in the south of Oshima Coast with soft pink cherry blossoms. Summer’s lush greens sweeping across the Yōtei Grasslands. Autumn turns the Tokachi Range and Ishikari Plain into waves of gold and deep red. And finally, the far north of Teshio Ridge wrapped in pure white snow. Second, color became a way to elevate the narrative tone. The Oni lean into intense reds and warm, destructive hues while the Kitsune carry cooler tones, pale blues, quiet whites, even the soft shimmer of blue fox fire. The palette shifts not just with the land, but with the emotion of the story. On top of that, we added other touches that give the world a distinctly Japanese feel, like red maple leaves drifting across a snowy hill. Moments like that make the world feel both real and poetic.”

TOP TO BOTTOM: Teshio Ridge has caves, frozen passages and hidden pockets of landscape, which makes exploration risky and rewarding.

Three Pieces of Concept Art

Delving even deeper into the world-building of Ghost of Yōtei, here are the stories behind three pieces of concept art.
Ishikari Plain Town. Explains Wang, “This was one of our early concepts for the Ishikari Plain town, and we were trying to establish the tone of the region, harsh weather and deep isolation. We leaned into a slight western vibe, wind sweeping dust through empty streets, yellow grass blown to one side by constant gusts. The buildings feel worn down, with wooden beams shaped by years of storms and strong winds. The whole town feels rough and lonely, and there’s always a quiet hint of the unknown and a touch of danger in the air.”
Oni’s Breath Inn. “This is another concept from the Ishikari Plain. Once we locked down the valley theme, Jason Connell pushed us to make this area feel more mysterious and tied deeply into that mood. That’s how the idea of Oni’s Breath Inn began. It sits in a foggy forest. The ground is covered in vibrant red leaves that almost glow through the mist. Strange masks hang from the trees, and lanterns guide the player toward the inn. The entire space is built around this mask motif; everywhere you look, there’s something watching, something hidden. Even the characters here wear masks, adding to that uneasy, otherworldly feeling. And in the end, the in-game version came out pretty close to this concept.”
Matsumae Castle Fight & Ghost Stance. “This one is from Oshima Coast, during Atsu’s fight through Matsumae Castle. The mission here is extremely intense; she’s cutting through waves of enemies, and we wanted the environment to reflect that chaos. The air is thick with smoke and fire, visibility drops low, and you often see enemies only as silhouettes moving through the haze. We needed to carve out just enough clarity for gameplay while still selling that overwhelming sense of danger and destruction. And on top of that, we had to make sure it contrasted with the Ghost Stance, which turns the entire screen red. The base atmosphere had to be smoky and muted, letting that moment hit even harder.” (Images courtesy of Sucker Punch Productions and Sony Interactive Entertainment)
Lessons were also learned while making Ghost of Yōtei.
“I learned that you can plan everything, map out every detail and imagine exactly how the world should come together,” Wang remarks. “But at some point, the game starts speaking for itself. Things line up in ways you never predicted, and suddenly you’re heading down a path you didn’t even know was there. That’s why it’s so important to lock down the core pillars but stay loose for some other areas. You go with your instincts, make the best choice you can in the moment, and you let the world shift as it grows. Every pass brings its own surprise, and those surprises often shape the world into something better than the original plan. In the end, world-building is so much more than just doing the work. It’s paying close attention, staying flexible, nurturing ideas and letting the game naturally grow into something stronger.”

STAR WARS: BEYOND VICTORY – THE FORCE MEETS MIXED REALITY
By CHRIS McGOWAN
With the VR experiences Vader Immortal: A Star Wars VR Series and Star Wars: Tales from the Galaxy’s Edge, Lucasfilm’s ILMxLAB raised the bar for immersive, interactive home entertainment, delivering high production values, compelling voice acting and cinema-quality visuals. ILM and Lucasfilm’s Star Wars: Beyond Victory – A Mixed Reality Playset, a single-player immersive game for Meta Quest 3 and 3S VR headsets, introduces a new twist through mixed reality integration.
Director Jose Perez III states, “At ILM, we are constantly pushing the future of storytelling, and mixed reality was right there as the next step. It wasn’t just about wishing the technology was ready; it was ready now. With the platform becoming mass-market – meaning a lot of people were able to see and experience it – it felt like absolutely the right time to jump in, test the waters and see what we could achieve with this new medium.”
“The three modes of the experience are Adventure, Arcade and Playset,” Perez explains. “This design provides variety and easy entry points, allowing users to jump directly into podracing or play with virtual toys.” More importantly, it enables the telling of “a holistic story.” There are rich visuals throughout, especially during the full VR segments, and a feeling of immersion in the Star Wars universe. The primary narrative journey is in the Adventure mode, which offers a story-driven experience that unfolds in both MR (for third-person exploration) and VR (for interacting with characters and solving puzzles and challenges from a first-person perspective). You assume the role of Volo Bolus, an eager up-and-coming podracer who gets mixed up with legacy Star Wars character Sebulba, a famed podracer turned shifty mentor.
The Arcade mode in MR “transforms the player’s physical space into a holotable-style racing arena where high-speed podraces are controlled from a top-down view,” according to ILM. “Each track features multiple paths to the finish line” with “playing experiences linked to the story.” Playset continues the story at home through interactive toys in MR. Digital characters and vehicles populate the player’s own physical environment, and players can stage their own Star Wars stories. The three modes thematically connect the movie, video game and toy experience.

OPPOSITE TOP: A podracer approaching the track in mixed reality. ILM’s long experience enabled telling a more nuanced story, which sets Beyond Victory apart from other MR experiences.

TOP: Early podracer concept art by Stephen Zavala and Evan Whitefield.
Perez recalls, “One of the original ideas was to create a VR playset, and we had created that on the film side. However, once we’d done this in VR, which is amazing, we realized how cool it would be in mixed reality. The idea of taking a playset and being able to put it in your room, around your other actual toys, was really exciting to us. While the original idea was a VR playset, the evolution quickly moved toward MR. From that technical seed, we developed the story, focusing on a personal theme: choosing family over fame and fortune, understanding what is truly important in life, and avoiding the trap of running away or hiding behind one’s ego. The core concept came from wanting to make cool toys and playset experiences. We then filtered elements from my life, [writer] Ross [Beeley]’s life, and others’ personal experiences, the universal emotional moments, and reflected them into this cool Star Wars world.”
Over time, the project evolved significantly. Perez observes, “Initially, our ambitions were much larger. We envisioned multiple playsets and content beyond racing, including elements with spaceships and other vehicle types. However, we ultimately decided to focus intensely on the podracing mode [Arcade] due to the high excitement surrounding it. From there, the concept evolved significantly, moving from an initial idea that felt like a slot-car experience to something more akin to a Spy Hunter-style 1980s video game mixed with a miniature theatrical presentation.”
“In Beyond Victory, the virtual cameras were handled by our animation department with input from the art team to capture the right composition to serve the story’s needs,” says Ronman Ng, CG Supervisor. “Specifically, with the view portal holotable, the collaboration needed to be close because achieving a good composition in 3D space required some extra tinkering since it has to work in all angles. This is different from a traditional feature film, where the camera work is mostly handled by a director of photography or a VFX supervisor [for virtual cameras] who dictates the framing. Throughout development, the animation department was in constant communication with the art team when it came to animating characters/vehicles/props, capturing a piece of concept work as intended and ensuring all character motion felt authentic.”
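Ng’s point that the holotable composition “has to work in all angles” is, at bottom, framing geometry: wherever the viewer stands, the subject must stay inside the virtual camera’s frustum. Below is a minimal sketch of one generic approach – fitting the subject’s bounding sphere to the camera’s vertical field of view – written in Python; the function, parameters and values are illustrative assumptions, not ILM’s actual tooling.

import math

def frame_target(target_center, target_radius, yaw_deg, pitch_deg,
                 vertical_fov_deg=60.0, padding=1.1):
    """Return a camera position and look-at point that keep a sphere
    of target_radius fully in frame from any orbit angle."""
    # A sphere of radius r fits the vertical FOV at distance r / sin(fov/2).
    half_fov = math.radians(vertical_fov_deg) / 2.0
    distance = (target_radius * padding) / math.sin(half_fov)
    # Orbit offset from yaw/pitch; the look-at point stays pinned to the
    # target, so the composition holds no matter where the viewer walks.
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    offset = (distance * math.cos(pitch) * math.sin(yaw),
              distance * math.sin(pitch),
              distance * math.cos(pitch) * math.cos(yaw))
    position = tuple(c + o for c, o in zip(target_center, offset))
    return position, target_center

# Example: frame a half-meter subject on the holotable from a 135-degree orbit.
camera_pos, look_at = frame_target((0.0, 1.0, 0.0), 0.5, yaw_deg=135, pitch_deg=20)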
Perez wore many hats during the shoot, including design supervision, story development (collaborating with Ross Beeley and the Lucasfilm franchise group), and working with Production Designer Stephen Zavala on art approvals. In addition, he served as a Performance Director and Voice-Over (VO) Director (directing the VO sessions, with assistance from Kevin Bolen on the audio team), and handled “prototyping/grayboxing.” Alyssa Finley served as Executive Producer and Steve McManus as Lead Experience Designer. The voice cast included Lewis MacLeod (Sebulba), Fin Argus (Volo Bolus), Greg Proops (Fode and Trizz), Emilie Talbot (Sornah) and Bobby Moynihan (Grakkus the Hutt).
Pulling off a project like Star Wars: Beyond Victory was a great cross-disciplinary team effort. According to Perez, “You need amazing programmers, designers, artists, actors, lighters and franchise considerations to ensure it fits into the grand mosaic of Star Wars. You have many different perspectives and points of view. To achieve this, you need to communicate and collaborate across these disciplines, as it’s not a single, controllable output. It’s different from watching a movie, where you control every single frame; at any point, the person wearing the headset can look wherever they want. Crafting a story for that environment and ensuring that all parts feel good, no matter where the user is looking or what they are doing, takes a massive collaborative effort from many different talented eyes.”
Working at ILM has been gratifying for Perez. “The ILM culture is built on the creative spirit infused by pioneers like George Lucas and Dennis Muren, VES, and it continues to permeate our walls,” he notes. “We maintain genuine kindness and respect for each other, caring about the person, not just the project. Crucially, this kindness is balanced by brutal honesty. If something isn’t working, we dig into the issues. We understand that our world-class artists thrive on freedom and a bit of structure. While that freedom occasionally leads us off into the weeds, we prioritize bringing the vision into alignment with the system. Ultimately, the quality of the final piece of art on screen matters more than any ego or disagreement. It comes down to providing enough agency. You have to respect that people will make their own decisions, and they expect a certain degree of agency in these environments. The key is to give them the ability to express themselves as they move through your story. This ability for the player to express themselves creatively while being delivered an engaging and forward-pushing story or specific type of gameplay experience is what makes these interactive immersive projects successful.”

BOTTOM: The three modes of the experience are Adventure, Arcade and Playset, which provide variety and easy entry points, allowing users to jump directly into podracing or play with virtual toys.

OPPOSITE TOP: A scene depicting Sebulba, Luuda and Trizz in Adventure mode. A highlight of the MR experience is a scalable holotable that floats in front of the player and transforms the space.

OPPOSITE BOTTOM: A cutscene playing out in mixed reality featuring Volo, Deland and a pit droid. The filmmakers had to stay true to the Star Wars canon, with Star Wars: Episode I – The Phantom Menace serving as a primary resource.
One notable feature of the MR experience is a scalable holotable that floats in front of the player and can transform the player’s space into an arena. Perez explains, “The way we created this was exciting and brought new ideas to how people could experience MR. We experimented with ideas that might be considered controversial, such as our approach to cutting the camera while watching these miniatures. This was done to ensure we conveyed the story exactly as we intended and maintained a consistent perspective on the narrative. Furthermore, being here at ILM, our visual quality and strong storytelling chops allow us to tell a more nuanced story, which really sets this apart from many other MR experiences.”
The title’s Arcade mode was influenced by 1980s retro games, according to Perez. “I’m a huge retro game fan, so I spent a lot of time in the arcades and loved my Nintendo. The title really embraced that ‘arcadiness’ quality for our Arcade mode. We drew inspiration from games such as Spy Hunter, 1942, Galaga, Hang-On and even the early Mario Kart games. The main takeaway was the idea of quick jump-in, jump-out fun. You can jump in quickly, play for a few exciting minutes and jump back out. The more you play, the more skilled you become, allowing you to achieve a high score that you want to beat, and beat your friends’ high scores. It’s that immediate, satisfying gameplay loop that you can easily fit into your day before running off to play the Playset or whatever else you’re doing.”

TOP: Even in the middle of the Malastare desert, the garage was conceived as a bustling place with visitors coming and going from every corner of the planet. Concept art by Stephen Zavala.
Visual consistency across the modes was key. “To help create a more consistent visual style among these three modes, we started with authoring the assets in a similar way,” Perez describes. “That way, the model silhouettes and the textures used the same hi-res reference model and texture sets. The lighting environment in MR [for example, Arcade and Playset modes] was more generic but equally balanced in key-to-fill ratios for light intensity. With this methodology, we could maintain an equal visual ‘weight’ and contrast to the characters and environments in our experience.”
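Perez’s “key-to-fill ratio” is a standard lighting measure: contrast reads the same across scenes when the ratio of key-light to fill-light intensity is held constant, even if absolute brightness differs. A minimal sketch of that bookkeeping follows, assuming a hypothetical 4:1 target and made-up intensities rather than numbers from Beyond Victory.

# Keep lighting rigs at the same key-to-fill ratio so characters carry
# equal visual "weight" across modes. All values here are illustrative.
TARGET_KEY_TO_FILL = 4.0  # a 4:1 ratio reads as moderate contrast

def balance_fill(key_intensity, target_ratio=TARGET_KEY_TO_FILL):
    """Return the fill intensity that preserves the target ratio."""
    return key_intensity / target_ratio

# A brighter key in one mode still matches another mode's contrast,
# because both rigs hold the same ratio.
arcade_fill = balance_fill(key_intensity=8.0)   # -> 2.0
playset_fill = balance_fill(key_intensity=5.0)  # -> 1.25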
Throughout, the filmmakers had to stay true to the Star Wars canon. “This is a fun and difficult challenge,” Perez says. “We achieved balance by telling a new micro-story through the lens of Star Wars. For innovation, we pushed on mixed reality with new camera angles and user guidance techniques. For canon integrity, we ensured characters, including Sebulba [important to the mythos], fit authentically within the global timeline of the Galactic battle. The story focuses on the consequences for ordinary people trying to make a life in that world.”
Star Wars: Episode I – The Phantom Menace was a primary resource. “I watch all the Star Wars movies all the time for comfort, but Episode I was by far the biggest inspiration. This was especially true for the animation team, who were going back and animating characters for the first time in over 20 years, and of course, for the podracing aspect itself, including how the pods move. Beyond the surface-level elements, the rest of the Star Wars mythos was essential, specifically that idea of found family, companionship and choosing what’s right over what is wrong.”
Stephen Zavala, Production Designer on Beyond Victory, says, “We truly hope players take a moment to really dig in and experience the story. We were making this for the fans, and the idea that Star Wars is for everybody is central to that. So, if you get a chance to play, open up the Playset and put a Stormtrooper on your cat!”

Hemispheres and Domes
By JOHN GROD
Edited for this publication by Ross Auerbach
Abstracted from The VES Handbook of Visual Effects, 4th Edition
Edited by Jeffrey A. Okun, VES, Susan Zwerman, VES and Susan Thurmond O’Neal

Immersive technology has faced many headwinds during its growth into the mainstream. Some of the most glaring issues revolve around ergonomics, display resolution and the lack of a shared experience. Domes and hemispheres provide a solution to the first two, whereas head-mounted displays (HMDs) continue to fall short.
Of all the domes currently available for use, “The Holodome,” developed by Vulcan Inc., is one of the premier experiences for immersive content. Development began in 2016 and was limited to an equatorial projection resolution of 5K, using two projectors to display virtual content. Three years later, it boasted almost double the resolution at 9.2K with four projectors, 15 speakers, full haptic feedback and 15.1 Ambisonic 360 audio.
Unlike applications or content delivered through an HMD, a dome allows multiple participants to consume an immersive experience together. Moving around within a 360° immersive environment is known as ‘locomotion’ and brings up one of the most difficult obstacles facing VR/AR experiences: motion sickness. In The Holodome, the user is grounded in the real world, and can see their own body, companions and projectors; therefore, they have something to lock onto while the virtual world is moving.
Some guidelines for comfortable locomotion are: limit rotating, tipping and rolling the simulation underfoot; when simulating vehicle movement, keep the vehicle’s heading and position locked within the dome; project fixed platforms under people’s feet when simulating vehicles; limit non-vehicle movement to mostly straight lines; and use the vibroacoustic haptic floor to evoke the sensation of movement. A sketch of these limits in practice follows.
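To make the guidelines concrete, here is a minimal sketch of how they might be enforced in a simulation’s per-frame update; the rate limits, names and structure are illustrative assumptions, not Holodome specifications.

from dataclasses import dataclass

MAX_YAW_RATE = 15.0   # deg/sec: allow only gentle turns
MAX_PITCH_RATE = 0.0  # deg/sec: never tip the simulation underfoot
MAX_ROLL_RATE = 0.0   # deg/sec: never roll it either

@dataclass
class VehicleMotion:
    yaw_rate: float    # all rates in deg/sec
    pitch_rate: float
    roll_rate: float

def apply_comfort_limits(motion: VehicleMotion) -> VehicleMotion:
    """Clamp rotational motion of the simulated world. The vehicle's
    heading and position stay locked at the dome's center; only the
    world moves, and a fixed platform is projected underfoot."""
    clamp = lambda value, limit: max(-limit, min(limit, value))
    return VehicleMotion(
        yaw_rate=clamp(motion.yaw_rate, MAX_YAW_RATE),
        pitch_rate=clamp(motion.pitch_rate, MAX_PITCH_RATE),
        roll_rate=clamp(motion.roll_rate, MAX_ROLL_RATE),
    )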
The Holodome also differs from the typical igloo- and cave-style industry domes, which are low-cost hemispheres designed to scale. It is closer to a ‘cave’ – a fully enclosed immersive environment – but what sets it apart is that it is a shared immersive reality platform: up to six people can take part in a fully interactive, real-time experience together.
The future of domes holds huge potential not only for gaming and cinematic content, but also for ancillary categories such as trade shows and festivals. Without the prerequisite of an HMD, viewers – whether at a music festival, an art installation or an architectural conference – can more freely experience the content on offer. As the cost-quality curve continues to improve, domes will be one of the premier and most effective distribution platforms for enterprise and communal entertainment.
Purchase your copy here: http://bit.ly/3JnG2yT

For VES Toronto, Diversity is Just the Start
By ROSS AUERBACH




Dr. Paul Debevec, VES (center) visits Sheridan College’s Screen Industries Research and Training (SIRT) Centre in Toronto with VES Toronto members.
Kevin Tureski and Thomas Burns present on the origins of Maya at the “Pixel Pioneers: Alias” event.
VFX Supervisor Stephan Fleet and VFX artists discuss their work on Sony/Amazon’s The Boys.
Now in its 13th year, VES Toronto has established itself as a center of the community in an active and diverse city. Membership grew by 60% to 215+ members in the past three years alone. Section Chair, and Head of Technology for WeFX, Laurence Cymet credits much of this to “Toronto’s long history in VFX and building a tight community where people have worked at many of the different studios and software companies. That small community creates a strong connection between people in the VFX, software and animation industries, and they often want to take part and contribute.”
One consistent theme among the Board of Managers (BOM) is the strength the community finds in the wide diversity of its members. VES Toronto Treasurer Ray McMillan, VES, says, “Toronto is a unique VES member community because of its diverse VFX talent pool and developing technologies. It’s home to over 20 major gaming software development companies, legacy VFX companies like SPIN VFX, as well as boutique VFX companies. It’s a great place to work.”
Ryan Stasyshyn, Head of Studio and VFX Supervisor for RodeoFX and VES Toronto member, believes the Section is a mirror of the city itself. “Diversity in people, thoughts and ideas – it’s pretty unique,” he says. “Regardless of the studio you work at, walk into or virtually join, you’ll be greeted hearing people with different accents, skillsets and career paths. Our diversity is our greatest strength, and I think it makes the overall community special for industry veterans and newcomers alike.”
A buzzing hive of activity, VES Toronto hosted in-person and virtual events, toured studios, engaged with new and veteran animators and visual effects artists, and continued to cement its place as a welcoming community of like-minded artists and technologists focused on bringing out the best in each other.
Amanda Heppner, VES Toronto Vice Chair, believes shared vision is what drives the VES Toronto community. “When we come together as VES members, I see us show up as one collective – supportive, collaborative and deeply invested in each other’s success,” Heppner states. “What grounds our Section is a shared understanding that, at the end of the day, we are all building this industry together.”
Dr. Paul Debevec, VES, Global VFX leader and Chief Research Officer for Eyeline Studios, visited Toronto to present new technologies in HDRI, virtual cinematography, volumetric capture and more. The sold-out event was just one part of the trip, which included working with students at University of Toronto’s Dynamic Graphics Project and Sheridan College’s Screen Industries Research and Training Centre, to build up the next generation in visual effects.
Launched in 2024, the annual event, “Pixel Pioneers,” organized by BOM member Kate Xagoraris, highlights the historical contributions of Toronto visual effects artists. The first in the series, held at the University of Toronto, looked at early computer graphics. The 2025 event focused on Alias/Wavefront and the inception of Autodesk Maya. Stay tuned for the 2026 events – details coming soon!

TOP TO BOTTOM: The VES Toronto Board of Managers at the 2025 “Big Summer Party.”
Party-lovers enjoyed the annual “Big Summer Party,” which brought together more than 200 members of the Toronto VFX community to celebrate and welcome the newest members, often new to the city as well as to their jobs. The “Behind the VFX: Spooky Edition Halloween Party” animated the creative side of VES members with a costume party and a panel on horror VFX work from genre experts and artists.
Other well-attended events included the “Behind the VFX” panel on Amazon Prime’s The Boys, showcasing how the visual and practical effects worked together to create the look of the show. And, in May, Toronto BOM members introduced VFX as a career option to young filmmakers at Pinewood Studios’ first Future Festival. Virtual events complemented the many in-person events with a focus on the past, present and future of visual effects, and strategies for building a career in the industry.
Members of the Toronto Section have contributed in myriad ways to the development and implementation of new technologies – from SideFX’s Houdini and Autodesk software, including Maya and Project Reframe, to Foundry’s Nuke, Physical.AI and much more. Advancements in the science and craft of visual effects are taking place every day in Toronto.
2026 began with Toronto hosting Nomination Events for the 24th Annual VES Awards with two in-person judging panels and one virtual. This year, the Section plans to host workshops focused on business skills, core art skills and color science. Partnerships with TIFF, Ontario Creates and local organizations highlight the local contributions of visual effects artists.
Of the Section’s forward-looking agenda, Cymet says, “Another goal we have is to provide better representation for everyone in our community. Partnering with WIFT+ Toronto, we hosted an in-person panel and networking event discussing leadership for Women in M&E called ‘Lead without Permission.’ There was a great turnout and a lot of participation in discussion, and we learned a lot about where we could go next in this space.”
VES Toronto Secretary Jo Hughes is passionate about the community and how the Section can continue to grow. She says, “Toronto is home to a fabulous variety of locally-grown and global talent. Our diverse community celebrates collaboration, and VES Toronto does everything we can to contribute to events which support that!”




CLOCKWISE FROM TOP: Josh Miles Joudrie and Laurence Cymet bring out the scares at the VES Toronto “Behind The VFX: Spooky Edition Halloween Party.” (Photo: Sam Javanrouh)
VES Toronto Vice Chair Amanda Heppner connects with young people interested in filmmaking careers at Pinewood Studios’ Future Festival.
Colleen Jenkinson, VES Toronto BOM, moderates the panel “Lead Without Permission” in partnership with WIFT+ Toronto.
BOTTOM TWO: In-person Nomination Panels for the 24th VES Awards were held at SPIN VFX and SohoVFX.
VES Elects 2026 Board of Directors
By ROSS AUERBACH





The Visual Effects Society proudly announced the 2026 VES Board Executive Committee, selected by the global Board of Directors.
Kim Davidson, Chair, shared, “I’m honored to continue leading the VES Board of Directors alongside both new and returning colleagues on the Executive Committee. As a global organization, the VES continues to grow and thrive thanks to the dedication of our Board members and our many Section leaders and volunteers worldwide, and it’s inspiring to see their talent and camaraderie in action. I look forward to continuing our critical work to champion the craft of visual effects.”
“On behalf of the VES and our membership, I’m thrilled by the caliber of expertise this new leadership team brings to our global organization,” shared Nancy Ward, VES Executive Director. “Their passion and commitment will enable us to grow and enhance the VES and support the VFX industry.”
The 2026 Officers of the Board of Directors are:
• Chair: Kim Davidson (Toronto, Canada)
• 1st Vice Chair: David H. Tanaka, VES (San Francisco Bay Area, U.S.)
• 2nd Vice Chair: Brooke Lyndon-Stanford (London, England)
• Secretary: Frederick Lissau (Los Angeles, U.S.)
• Treasurer: Jeffrey A. Okun, VES (Los Angeles, U.S.)
Kim Davidson is the President and CEO of SideFX®, world-leading innovator of the advanced 3D animation and special effects software Houdini®, a company he co-founded in 1987. He has three Scientific and Technical Awards from the Academy of Motion Picture Arts and Sciences. Davidson was the first Chair of the VES Toronto Section and has served on the Toronto Section board for eight years, the global VES Board of Directors for nine years and several VES Committees. Davidson has been a VES mentor and played an active role in the Society’s recent strategic planning process on its Impact & Visibility and Globalization sub-committees.
David H. Tanaka, VES, VES Lifetime Member, is an Editor, Producer and Creative Director with experience ranging from visual effects and animation, to documentary and live-action feature filmmaking at Industrial Light & Magic and Pixar Animation Studios. Along with his wife, Dorianne, David co-founded Cinemabilities®, a remote and in-person filmmaking course for students with and without disabilities, ages 10 to 22+, now in its sixth year. He has been on the global Board of Directors for over 10 years, has just completed his third term as 2nd Vice Chair on the VES Executive Committee and has served three terms as VES Bay Area Section Chair. He currently Co-Chairs the VES Archive Committee, and is a contributor to the VES Handbook of Visual Effects publications.
Brooke Lyndon-Stanford is the founder and CEO of Atomic Arts, a visual effects house created in 1994 with bases in Europe, North America and Asia. Brooke is also the founder and owner of Omeira, a production company created to make thought-provoking films and high-end scripted television, including Paul Schrader’s First Reformed. Brooke has been on the global VES Board of Directors for over 12 years, and has previously served on the Executive Committee as Treasurer (the first-ever member of the VES EC not from North America), as London Section Chair and as Co-Chair of the Outreach Committee. He has worked tirelessly to help a more global leadership flourish within the VES.
Frederick Lissau, the Director of Business Development for Black Point, a visual effects house based in Armenia, recently celebrated his 30th year in the visual effects, animation and games industry. His career has led him to hold positions in seven cities, four countries and three continents. This is Lissau’s first year as a member of the VES Board of Directors and Executive Committee. He has been a VES member since 2016 and previously served on the Board of Managers for three VES Sections: Los Angeles, New York and Montreal. He is also an active member of the VES Membership Committee.
Jeffrey A. Okun, VES, has garnered international recognition and accolades for his groundbreaking contributions to iconic VFX productions for more than 30 years. As VES Board Chair for seven years, he championed the global VFX community, and was instrumental in forming an international software anti-piracy alliance alongside the U.S. Government. He has served as 1st Vice Chair and Treasurer for the VES and founded the VES Awards program and show, establishing a lasting legacy within the industry. Co-Editor of the award-winning VES Handbook of Visual Effects and the comprehensive VES Handbook of Virtual Production, Okun has helped shape the standards and best practices of the field.
FROM TOP:
Kim Davidson
David H. Tanaka, VES
Brooke Lyndon-Stanford
Frederick Lissau
Jeffrey A. Okun, VES
VES Welcomes New Sections – Japan and Spain
The Visual Effects Society is pleased to welcome two new regional Sections: Japan and Spain, authorized by the VES Board of Directors in January. This brings the total number of VES Sections to 18, reflecting the Society’s growing membership – now over 5,500 members worldwide.
“Our Sections around the world are the beating heart of the Visual Effects Society. With a diverse international membership, Section leaders play a critical role in addressing the needs specific to their regions and being global ambassadors for the craft of visual effects,” shared Nancy Ward, VES Executive Director. “We’re thrilled to formally establish our newest Sections in Japan and Spain, which each have distinct, vibrant visual effects communities.”
Japan
“Japan has a rich visual effects history dating back to Eiji Tsuburaya and his groundbreaking work on Godzilla through today, with Takashi Yamazaki leading the first-ever Japanese crew to win the Academy Award for Best Visual Effects for Godzilla Minus One in 2024,” said Jeffrey Dillinger, Co-CEO of Megalis, who played a key role in forming the Section. “Having a VES Section in Japan is an exciting development that will enable us to bring more awareness of the Japanese industry to our peers internationally, and to expand our membership locally thanks to all the community benefits of being a hub.”
Organizers behind the VES Japan Section have community at the forefront while also leveraging the momentum of the country’s visual effects industry. One of their first initiatives is appealing to the Japan Academy Awards to add a visual effects category. The Section is also planning ways to highlight its members’ work internationally through connections with other VES Sections.
Spain
“There has been a strong demand from Spanish VES members for a local Section that reflects the maturity and scale of the industry here. With more international productions coming to Spain driven by tax incentives, studio infrastructure and highly skilled crews, our community has grown significantly. Many of us have worked abroad and experienced first-hand the value of local VES Sections in building strong professional networks. Establishing a Spain Section felt like a natural step to support our region’s growth and give Spain a stronger voice within the global VES network,” shared Astrid Busser Casas, Founder & VFX Supervisor of Ghost Light VFX, who helped to establish the VES Spain Section.
Spain’s rapidly growing, increasingly international VFX community includes artists working across film, episodic, animation, advertising, virtual production and emerging AI-driven workflows. The VES Spain Section aims to unify this local talent pool, creating a visible, connected community that positions Spain as a strong and reliable VFX partner within the global production ecosystem. Another priority for the Section is to help bridge the gap between international productions coming to Spain and the local VFX talent pool, encouraging long-term industry growth alongside the country’s strong production incentives and studio network.
“We’re thrilled to formally establish our newest Sections in Japan and Spain, which each have distinct, vibrant visual effects communities.”
—Nancy Ward, VES Executive Director


TOP: One of the first initiatives of VES Japan is to appeal to the Japan Academy Awards to add a visual effects category. The new Section is also planning ways to highlight its members’ work internationally through connections with other VES Sections.
BOTTOM: In addition to creating a visible, connected community, a priority for VES Spain is to help bridge the gap between international productions coming to Spain and the local VFX talent pool, encouraging long-term industry growth alongside the country’s strong production incentives and studio network.
When VFX Visionaries Challenged the Impossible

This issue reunites a remarkable trifecta of VFX revolutionaries: Andrew Lockley, who bent reality for Inception; Douglas Smythe, who brought liquid metal to terrifying life in Terminator 2; and Erik-Jan de Boer, who made us believe a boy could survive the Pacific with a Bengal tiger in Life of Pi. Each one didn’t just raise the bar – they vaulted over it. (See article, page 42). What counts as a watershed VFX moment? That’s the kind of debate that could rage until the heat death of the universe. But here’s a lightning round of the undeniables:
1895: The first visual effects shots in motion picture history – Alfred Clark’s stop-action substitutions in Joan of Arc and The Execution of Mary, Queen of Scots, both filmed on the same day.
1933: Willis O’Brien’s stop-motion wizardry gave King Kong a soul, making audiences buy into a lovelorn ape scaling the Empire State Building.
1963: Ray Harryhausen’s skeleton army in Jason and the Argonauts took months to complete – a painstaking 13 frames per day – but that sword fight became the stuff of legend.
1993: Steven Spielberg and ILM unleashed Jurassic Park, creating the first truly believable CG creatures and launching three decades of digital dominance.
1999: The Matrix gave us Bullet Time – an arc of 120 still cameras freezing Neo mid-dodge – and spawned cinema’s most shamelessly copied effect.
2002: Andy Serkis and Wētā Digital made Gollum matter in The Lord of the Rings: The Two Towers, proving digital characters could break your heart, not just your suspension of disbelief.
2009: Jim Cameron’s Avatar let directors see their virtual worlds in real-time, fundamentally rewiring how CG films get made.
2013: Gravity flipped the script entirely – roughly 80% CGI, choreographed in virtual space first, then actors filmed to match. Filmmaking in reverse.
2022: Cameron again, with Avatar: The Way of Water, solving underwater mocap in a 900,000-gallon tank because apparently regular innovation wasn’t challenging enough.
The list goes on, and everyone’s got their own picks. But one thing’s certain: VFX teams and artists don’t do fear. They do impossible.
Image from The Matrix courtesy of Warner Bros. Pictures

