FEATURE: GAME AUDIO
QUANTUM MECHANICS
John Broomhall talks to Remedy Entertainment’s audio director Richard Lapington about the sound of a long-awaited new Xbox One title that combines gaming with live action TV.
Quantum Break is a big deal involving a serious audio production, all funded by a massive investment from Microsoft. This futuristic interactive/linear hybrid entertainment offering for Xbox One marries a videogame with a live action TV show featuring a parallel story in which the hero, Jack, has to survive ‘Stutters’ – time breaking down and making different game-world objects jump backwards and forwards. He can also make everything slow down around him and use his powers to stop enemies and certain objects in their tracks – all interesting propositions for sound design, which unsurprisingly played a key narrative role. Yet according to Richard Lapington, audio director at ace game developer Remedy Entertainment, by far the most challenging aspect of the project’s development was the branching dialogue system via which players can navigate the story in different ways as they make their own choices
of verbal responses to questions and observations thrown up by the various game characters they encounter. “From a purely audio perspective, the branching dialogue wasn’t all that problematic. We marked up which dialogue belonged to which branch in the screenplay, then recorded and named the files accordingly,” he says. “We created a bespoke internal tool called Dialogue Writer, which proved invaluable for organising dialogue lines, wav files, facial animation files, and integration into the game. It helped speed up production and implementation, and keep track of both. Actually, the branching dialogue was more of a headache for the level designers and quality assurance testers – and of course for the writers, who had to ensure all narrative threads made sense in both the game and the TV show. “For me, the bigger challenge was dealing with the time travel elements and the cross-pollination with the show – because as you progress through
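The workflow Lapington describes – marking up which dialogue belongs to which branch in the screenplay, then naming the recorded files accordingly – can be sketched as a simple data model. The following is a minimal, hypothetical illustration in the spirit of a tool like Dialogue Writer; all class names, fields, and the naming scheme are assumptions, not details of Remedy’s actual internal tool.

```python
# Hypothetical sketch of branch-tagged dialogue tracking. Field names and
# the wav naming convention are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class DialogueLine:
    scene: str       # scene identifier from the screenplay
    branch: str      # which narrative branch the line belongs to
    character: str   # speaking character
    index: int       # line number within the scene
    text: str        # the spoken line

    def wav_filename(self) -> str:
        # A deterministic name ties each recorded take back to its
        # screenplay entry, so production status can be tracked per file.
        return f"{self.scene}_{self.branch}_{self.character}_{self.index:03d}.wav"

lines = [
    DialogueLine("sc01", "branchA", "jack", 1, "What just happened?"),
    DialogueLine("sc01", "branchB", "jack", 1, "Time is breaking down."),
]

# Group filenames by branch so each recording session can cover one thread.
by_branch: dict[str, list[str]] = {}
for line in lines:
    by_branch.setdefault(line.branch, []).append(line.wav_filename())

print(by_branch["branchA"])  # ['sc01_branchA_jack_001.wav']
```

The point of a scheme like this is that the filename alone identifies scene, branch, character, and line, which is what lets level design, QA, and audio all reference the same asset unambiguously.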
the game, you re-visit scenes – but from alternative perspectives. It’s entirely possible to hear the same scene’s dialogue from two or even three perspectives. For instance, there could be a main scene in the show that you can ‘accidentally’ overhear in the game. Another example would be a scene encountered early in the game, which you later witness from a completely different spatial perspective. This made things quite complex – say, recycling dialogue originally recorded on-set for the show for use in the game, and needing to make it all work together sonically. “Add to that ADR from the show, speech captured on the mocap (motion capture) stage for the game, some recorded in our own in-house studio, and some more traditionally recorded at a VO studio for the game. Of course, we tried to match mics and signal chains across as much of it as possible – but perhaps unsurprisingly it all sounded slightly different. There was plenty of cleaning-up and match EQ’ing…
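The match EQ’ing Lapington mentions – shaping one dialogue source to sound like another – typically works by comparing average spectra and deriving a correction curve. Here is a minimal sketch of that idea, assuming NumPy; the function and parameter names are illustrative, and a production tool would apply far more smoothing and psychoacoustic care than this.

```python
# Minimal sketch of deriving a match-EQ correction curve: compare the
# average magnitude spectrum of a target recording against a reference,
# and express the difference in dB per frequency bin. Illustrative only.
import numpy as np

def match_eq_curve(target: np.ndarray, reference: np.ndarray,
                   n_fft: int = 2048) -> np.ndarray:
    def avg_spectrum(x: np.ndarray) -> np.ndarray:
        # Windowed, half-overlapping frames averaged into one spectrum.
        frames = [x[i:i + n_fft] * np.hanning(n_fft)
                  for i in range(0, len(x) - n_fft, n_fft // 2)]
        mags = np.abs(np.fft.rfft(frames, axis=1))
        return mags.mean(axis=0) + 1e-9  # epsilon avoids divide-by-zero

    # Gain (in dB) to apply to the target so its long-term spectral
    # balance approaches that of the reference.
    return 20 * np.log10(avg_spectrum(reference) / avg_spectrum(target))
```

Applying the curve would then be a filtering step (e.g. per-bin gains in an overlap-add FFT filter); the sketch stops at the analysis stage, which is the part that makes dialogue from different mics and rooms sit together.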
“Logistically, working with such high-profile actors and actresses could be challenging. They’re all incredibly professional and talented, but making a new game with new IP including time travel and branching dialogue inevitably required iteration. Sometimes getting studio time with our super-busy cast when it suited us wasn’t so easy, and that meant we always had to nail it – no second chances – for either voice or simultaneous facial capture, the majority of which we did in-house using a specially created comprehensive pipeline for recording and editing VO. “We had a very specific software and hardware setup for this. It had to be robust and reliable. We also had a lot of ADR to do in-house with the facial movement capture system, which always involved a super-quick turnaround. We could not have lived without our Sound Devices 744T (which we used for every single I/O in conjunction with our facial recording system). Add to that EdiCue and
20-21 AMI Apr 2016 Game Audio_Final.indd 1