
Audio Media International | TECHNOLOGY AND TRENDS FOR THE PRO-AUDIO PROFESSIONAL | www.audiomediainternational.com

April 2017

FIGHTING TALK The Creative Assembly audio team on creating an epic sense of scale for Microsoft’s ‘Halo Wars 2’ p28




The pros and cons of on-set noise reduction tech p22

Capturing, mixing and rendering sound for virtual reality p30

Funky Junk’s founder on the firm’s first 25 years p42

ULTRA-COMPACT MODULAR LINE SOURCE Packing a 138 dB wallop, Kiva II breaks the SPL record for an ultra-compact 14 kg/31 lb line source. Kiva II features L-Acoustics’ patented DOSC technology enhanced with an L-Fins waveguide for precise, smooth horizontal directivity. WST® gives Kiva II long throw and even SPL, from the front row to the back, making it the perfect choice for venues and special events that require power and clarity with minimal visual obtrusion. Add to that a 16 ohm impedance for maximised amplifier density and a new sturdy IP45-rated cabinet, and you get power, efficiency and ruggedness in the most elegant package.


Experts in this issue


STAFF WRITER Colby Ramsey · ADVERTISING MANAGER Ryan O’Donnell · ACCOUNT MANAGER Rian Zoll-Khan · HEAD OF DESIGN Jat Garcha · DESIGNER Tom Carpenter · PRODUCTION ASSISTANT Warren Kelly · CONTENT DIRECTOR James McKeown · PRINT SUBSCRIPTIONS To subscribe to AMI please go to Should you have any questions please email

Audio Media International is published 10 times a year by NewBay Media Europe Ltd, The Emerson Building, 4th Floor, 4-8 Emerson Street, London SE1 9DU Editorial tel: +44 (0)20 7354 6002 Sales tel: +44 (0)20 7354 6000

NewBay Media Europe Ltd is a member of the Periodical Publishers Association

© Copyright NewBay Media Europe Ltd 2017 All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means electronic or mechanical, including photocopying, recording or any information storage or retrieval system without the express prior written consent of the publisher. The contents of Audio Media International are subject to reproduction in information storage and retrieval systems.

Printed by Pensord Press Ltd, NP12 2YA Print ISSN: 2057-5165

Niv Adiri is an award-winning sound designer and re-recording mixer best known for his work on Gravity and Fantastic Beasts. He has most recently been working on T2 Trainspotting.

Graeme Harrison is executive vice president of Biamp Systems, manufacturer of AV install solutions such as the Tesira media system for digital audio networking.

Dr. Henney Oh is co-founder and CEO of G’Audio Lab, a spatial audio company dedicated to developing immersive, interactive 3D audio production software for creative professionals.

Mark Thompson is the founder of Funky Junk, one of Europe’s leading suppliers of new and used recording equipment.

On the same day that this issue is due to hit desks, the classic Abbey Road Studios EMI TG12345 MK IV console used to record Pink Floyd’s The Dark Side of the Moon is going up for auction in New York. Predicted to sell for ‘a significant six-figure sum’, the board has drawn interest from both the industry and the wider public, showing that no matter how progressive we become with our choice of equipment, many of us remain fascinated by the technology of that era. It’s also something of a coincidence that the passing of the iconic desk into new hands very nearly coincides with the upcoming opening of a new Pink Floyd exhibition at the V&A in London. After our coverage of Sennheiser’s involvement in both the ‘David Bowie Is’ and ‘You Say You Want a Revolution?’ projects over the past couple of years, it probably doesn’t come as much of a surprise that the company has been tasked with providing the ‘Sound Experience’ once again, but this time the firm is taking things one step further.


As our story on page 14 explains, the immersive mix for the main showpiece was again made using AMBEO technology, but required the efforts of two renowned producers and Pink Floyd recording engineer Andy Jackson, as well as the careful construction of a three-tier, 25-speaker Neumann system, which we got the chance to sample at Abbey Road Studio 2 last month before the same setup debuts at the V&A in May. That report isn’t our only look at an immersive subject this month, by the way. We know it’s a technology that is still very much in its early stages – and will be for some time – but we felt it was time for another look at what’s been going on in the VR space of late. Over on page 26, Erica Basnicki gets an update from the new Audio 360 team at Facebook about the company’s move into the immersive space, and we also have an expert in spatial audio tell us how to capture, mix and render audio for VR later in the issue. And going back to technology from a bygone era for a minute, this month we’ve also caught up with a man who has become one of the world’s leaders in the sale of high-quality vintage equipment: see our Back Page Q&A for an interview with Funky Junk’s Mark Thompson, who is celebrating his firm’s 25th anniversary – a milestone worth taking note of.

Adam Savage Editor Audio Media International







Steinberg previews Nuendo 8


Sound Devices adds automatic mixing for 633 recorder


Ferrofish launches A32 Dante AD/DA converter


LD Systems announces Stinger G3 series


NEWS IN DEPTH Sennheiser takes AMBEO to next level for new Pink Floyd exhibition



POST-PRODUCTION Jerry Ibbotson weighs up the arguments for and against on-set noise reduction technology, and the impact it is having on post crews.


IMMERSIVE AUDIO Erica Basnicki explores Facebook’s Audio 360 move, which has seen the introduction of a new set of spatialised sound tools


GAME AUDIO We find out how the audio team behind ‘Halo Wars 2’ came up with an explosive sound mix to match its stunning visuals.


HOW TO G’Audio Lab co-founder and CEO Dr. Henney Oh talks us through capturing, mixing and rendering sound for virtual reality


END USER FOCUS: Headphones


SHOW PREVIEW: Prolight + Sound




OPINION Biamp Systems’ Graeme Harrison on why AVB/TSN is not going anywhere


TECH TALK Sound designer and re-recording mixer Niv Adiri describes how he used Nugen’s new Halo Upmix tool during the making of ‘T2 Trainspotting’



INTERVIEW Funky Junk founder Mark Thompson chats to Adam Savage about his firm’s first 25 years in business as it celebrates a landmark anniversary

REVIEWS
34 Steinberg Cubase Pro 9
36 Focusrite Red 8Pre
38 MOTU 624
40 Polyverse Manipulator


WAVES ADDS NEW DUGAN BUNDLES Waves Audio’s new Dugan Automixer + Dugan Speech bundle is designed to save live or broadcast engineers the need to manually ride faders while trying to keep up with several people talking at once through multiple channels of speech mics. The bundle includes two plugins for auto-mixing multiple mics in real time: the new Dugan Speech for integrated use inside the Waves eMotion LV1 live mixing console, and the Dugan Automixer for other live consoles. With the Dugan Speech plugin, the Dugan Automixer interface is incorporated into a designated layer mode within the eMotion LV1 mixing console, which users can easily access from any LV1 channel. With non-LV1 consoles, the Dugan Automixer plugin can be used via the Waves MultiRack plugin host. The plugins are designed to control the levels of multiple microphones automatically and in real time, dramatically reducing noise, feedback and comb filtering from adjacent microphones. They ensure that system gain remains consistent, even when several speakers are talking at the same time, and make ‘perfectly matched’ crossfades, without any signal compression and without a noise gate that would cause undesirable artifacts. Developed with pro-audio inventor Dan Dugan, both plugins are powered by his patented voice-activated technology. Owners of the Dugan Automixer plugin with Waves Update Plan coverage can upgrade to the bundle at no added cost.
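The gain-sharing idea behind Dugan-style automixing – total system gain held constant by giving each mic a share proportional to its input level – can be sketched in a few lines. This is a simplified illustration of the principle only, not Waves’ or Dan Dugan’s actual patented implementation; the function name and the smoothing-free maths are our own assumptions.

```python
def dugan_gains(levels, floor=1e-9):
    """Gain-sharing automix: each microphone's gain is its share of the
    total input level, so the summed system gain stays constant as
    people start and stop talking -- no noise gate required."""
    total = sum(levels) + floor          # floor avoids divide-by-zero
    return [lvl / total for lvl in levels]

# One active talker gets (almost) full gain; with two talkers at equal
# level the gains split evenly, keeping the overall level consistent.
print(dugan_gains([1.0, 0.0, 0.0]))
print(dugan_gains([0.5, 0.5, 0.0]))
```

Because the gains always sum to unity, opening a second mic automatically attenuates the first, which is why adjacent-mic comb filtering and feedback build-up are reduced.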

SOFTUBE CONSOLE 1 UAD UPDATE Softube has announced that its Console 1 software update with support for selected UAD powered plug-ins from Universal Audio is now online, and is initially available for free to owners of first-generation Console 1 systems. The company launched Console 1 Mk II – an upgraded version of its hardware/software mixing system with some minor layout changes, such as more visible LED markers – earlier this year at the 2017 NAMM Show.



Owners of first-generation Console 1 units can continue to quickly switch between tracks and control EQ, compressor, gate and more, mixing with the sound of the included Console 1 SSL SL 4000 E emulation developed in collaboration with Solid State Logic. With the software update, users can customise channels to fit their needs with over 60 Console 1 system-ready plug-ins, adding brands such as Chandler Limited, Fairchild and Helios to the range of products that can be used with Console 1 systems. Console 1 Mk II is scheduled to start shipping this spring, though the software update with UAD powered plug-in compatibility can now be downloaded for free from Softube’s website.

PRESONUS UNVEILS QMIX-UC QMix-UC, a new version of PreSonus’ free monitor-mix control app for StudioLive mixers, allows up to 14 musicians to simultaneously control their monitor (aux) mixes wirelessly from their mobile devices. The new version of the app, which now runs on Android devices in addition to iPhone and iPod touch, supports the new StudioLive Series III mixers, as well as Studio 192-series interfaces. For StudioLive AI-series mixer customers, QMix-UC replaces QMix-AI automatically when the app is updated. Building on the features and functionality of its predecessors, QMix-UC adds the ability to create four channel groups, making it easier for advanced users to control large mixes. QMix-UC retains the popular Wheel of Me, which lets users select multiple “Me” channels and turn them all up in their monitor at the same time, while controlling the relative balance between themselves and the rest of the band. Aux mix send levels and pan positions (for linked auxes), in addition to the new groups, are available in Landscape view. StudioLive mixers also let users set permissions that determine which features can be controlled from each wireless device on the network, including those running QMix-UC. Studio 192-series interfaces require a computer running UC Surface to use QMix-UC. PreSonus QMix-UC is available free from the Apple App Store, Amazon Appstore and Google Play Store.


The latest iteration of Steinberg’s flagship DAW, although not available until the summer, enhances Nuendo 7’s Game Audio Connect toolset with Game Audio Connect 2, which provides an interactive music workflow by handing entire music compositions over to Wwise, including audio and MIDI tracks along with cycle and cue markers. Nuendo 8 features Direct Offline Processing with its Live! Rendering capability, allowing users to easily apply frequently used processes in an offline plug-in chain and render offline processes in real time. Another highlight is Auto ReNamer, which automatically assigns new names to all events – another common game audio workflow. The Sound Randomizer plug-in creates different variations of a sound instantaneously, adjusting its pitch, timbre, impact and timing. The newly introduced Sampler Track allows users to drag and drop audio from the MediaBay into the track and play and manipulate the sample. Also included are the virtual-analogue Retrologue 2 synthesizer, HALion Sonic SE 3, over 80 effects processors – including the new eight-band fully parametric Frequency EQ – and an assortment taken from the 2017 Hybrid Library by Pro Sound Effects, plus a newly developed video engine that replaces the former QuickTime-based engine.




PRODUCT NEWS: BROADCAST SOUND DEVICES UPGRADES 633 RECORDER Sound Devices has announced that its 633 mixer/recorder now offers two high-performing automatic mixing options: Dugan automixing and Sound Devices MixAssist. The two automixing algorithms are available at no additional cost to 633 owners with the release of Firmware Version 4.50. With this update, the 688 mixer/recorder with its SL-6 powering and wireless system accessory now supports SuperSlot integration with Sennheiser EK 6042 wireless, two-channel slot-in receivers. Sound Devices added Dugan’s automatic mic mixing to its 688 mixer/recorder in 2016, making it the first field mixer to incorporate the technology. The inclusion of the Dugan Speech System and MixAssist makes the 633 ideally suited to more portable production applications, providing automixing for up to six audio channels. In challenging field applications, which often require the use of wireless lavalier mics, auto mixers improve intelligibility, reduce noise and reverberation, and maintain consistent overall gain as microphones are turned on and off.


NTP Technology used this year’s BVE show to launch its new DAD DX32R Digital Audio Bridge. The DX32R provides conversion between Dante/AES67 and not only MADI but also AES. It includes a coax MADI interface and eight AES3 interfaces, and can be expanded with a further two MADI interfaces – either optical or coax. A built-in router allows the user to pick and choose any channels from any interface to be routed to Dante/AES67 and vice versa. Apart from providing routing of audio channels, the DX32R with an optional Pro Mon 2 license can also perform summing/mixing of channels. Designed for mission-critical applications, the DX32R includes as standard a redundant Dante interface and redundant power supplies, plus two optional MADI interfaces, which can also operate in redundant mode. In addition, NTP, which co-exhibited with HHB at BVE, hosted an AES67 interoperability demo on the supplier’s stand, demonstrating how a Dante-based audio-over-IP system can talk to AES67 devices.

Made in Denmark


GET CLOSER TO PURE PERFORMANCE When both body and voice need to be free, a DPA Bodyworn Mic should be your go-to solution. Headset or miniature, visible or concealed, the d:fine™ & d:screet™ series of bodyworn microphones give you astoundingly accurate sound in an incredibly small design.




NEXO’S NEW NEMO 2.0 SOFTWARE The new version of Nexo’s NeMo system management software, which is available for the first time on the macOS platform, introduces advances in remote control and monitoring of Nexo NXAMP controller-amplifiers. With NeMo, users can monitor the electronic parameters of amplifiers over a wireless network, in real time or retrospectively, and store the data as a log. It also makes it easier to exploit advanced sensing functions in the NXAMP, which help protect loudspeaker cabinets. An offline mode is now available (in the macOS version) with which the user can create offline devices, edit their settings and later match them to online devices, using a device identification feature. Using custom control panels, users can create interfaces that offer an ‘extraordinary’ level of control, described by Nexo sales manager Gareth Collyer as “a complete game-changer for the use of NXAMPs”. NeMo version 2.0 now embraces Nexo’s new DTD Controller, allowing for the remote control of one or several DTDs simultaneously, including preset, patch, EQ, compressor, gain and delay editing, as well as level monitoring. First launched on the iOS platform in 2013, NeMo now sees every feature replicated for macOS, so that users have the choice of going wired or wireless. Version 2.0 also adds zones, alert emailing, automatic updating of preset libraries, an EQ library, and improved performance and stability.

LD SYSTEMS REVEALS STINGER G3 The Adam Hall Group has unveiled the new Stinger G3 Series from LD Systems – the third generation of its multi-functional full-range speakers and subwoofers. The new range comprises active and passive speaker cabinets in 8in, 10in, 12in and 15in formats, as well as 15in and 18in subwoofers, available in active or passive versions. The dispersion characteristic of the full-range speakers has been further optimised in the high-frequency range. The Boundary Element Method (BEM) was used during the development and optimisation of the horns, and the distance between the tweeter drivers and woofers has been minimised to avoid partial drop-outs caused by comb-filter effects. All active speakers are equipped with Class D amplifiers, while DynX DSP technology ensures ‘crystal clear sound’, according to the company. With their four presets (Full Range, Satellite, Monitor, Flat), the speakers can be optimised for the desired application at the touch of a button. In addition to three different high-cut filters, the active Stinger G3 subwoofer’s DSP presets also feature a cardioid preset. By combining three Stinger G3 subs of the same design, a cardioid dispersion characteristic is achieved with this circuitry, offering a number of advantages over conventional subwoofer setups.
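Cardioid subwoofer presets like this generally rely on the classic gradient (delay-and-invert) arrangement: one unit faces backwards, is delayed by the inter-cabinet spacing divided by the speed of sound, and has its polarity flipped, so output cancels behind the array and reinforces in front. The two-source sketch below illustrates only the principle – it is not LD Systems’ actual DSP, and the spacing and frequency are illustrative assumptions.

```python
import cmath
import math

C = 343.0  # speed of sound in air, m/s

def gradient_pair(freq_hz, spacing_m, angle_deg):
    """|pressure| of two subs on one axis: the rear unit is delayed by
    spacing/c and polarity-inverted, so its wavefront cancels the front
    unit's output behind the array and reinforces it in front."""
    k = 2 * math.pi * freq_hz / C                 # wavenumber
    cos_t = math.cos(math.radians(angle_deg))
    front = 1.0                                    # reference source
    # rear source: acoustic path difference + electronic delay + inversion
    rear = -cmath.exp(-1j * k * spacing_m * (cos_t + 1.0))
    return abs(front + rear)

# At 60 Hz with ~1.43 m spacing (about a quarter wavelength):
print(gradient_pair(60, 1.43, 0))    # strong output to the front
print(gradient_pair(60, 1.43, 180))  # deep null to the rear
```

The rear null is broadband in principle (the delay matches the travel time at every frequency), which is why gradient arrays are popular for keeping low end off the stage.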

A series of high-intensity loudspeakers from Funktion-One, engineered for unforgettable audience experiences

See us at Prolight+Sound 2017. Outdoor demos in the Agora area every two hours – please check our stand for demo times. Hall 3.1, Stand E81, 4-7 April 2017



Evolution Series



Synthax Audio has announced the release and shipping of the Ferrofish A32 Dante AD/DA converter, which has been designed to streamline system cabling in both live and installed sound applications. The A32 Dante unit serves as a high-end AD/DA audio and format converter that supports a range of audio formats. By using uncompressed, multi-channel digital media networking technology with near-zero latency and synchronisation, the company’s latest offering is designed to eliminate unsightly cables employed in large-scale productions or AV installations. The unit supports 64 channels of MADI I/O, 32 channels of ADAT optical I/O, and 32 channels of analogue I/O. Additionally, one ADAT optical connector can alternatively be used as S/PDIF.

With this wealth of I/O choices, the Ferrofish A32 Dante can operate as an audio format converter as well as an AD/DA converter. Audio can be freely routed (in groups of eight) between all interfaces. The unit employs 24-bit, 192 kHz converters with analogue gain switches. The gain of each channel can be separately adjusted in 0.5 dB steps, and the standard levels (+4 dBu, +13 dBu and +20 dBu) are switched in the analogue domain, ensuring the full analogue performance of the converter is preserved. The four on-board TFT screens provide complete control of all levels and adjustments, including routing, while remote control is possible via Dante, MIDI and USB. The A32 Dante is now shipping in the United Kingdom for £2,980.

ANTELOPE SHIPS ORION32 HD Antelope Audio has announced that its Orion32 HD interface, unveiled at this year’s NAMM Show back in January, is now shipping. The third member of the Orion32 family features full Pro Tools HD and Native compatibility, along with 64 channels of audio at 192kHz via HDX or USB3 with real-time monitoring. Powered by 64-bit Acoustically Focused Clocking (AFC) jitter-management technology, Orion32 HD supports multiple monitor mixes and is compatible with ‘any DAW on the market’. Both HD and USB3 ports are available for simultaneous use, enabling more than one DAW platform to be used and accessed at once. The integration of Antelope’s Field Programmable Gate Array (FPGA)-based DSP processing engine means the unit is able to handle tracking and mixing through custom-modelled vintage effects in real time. Additionally, the unit – in its slim black housing – does not require an internal fan and takes up a single rack space. The Orion32 HD boasts HDX, USB3, MADI, ADAT and S/PDIF connectivity and 32-in/32-out analogue I/O via DB25. This is in addition to routing and mixing software for Mac and Windows, which now features both Antelope’s colour-coded routing matrix and an alternate matrix-style view to simplify routing. It also features movable and resizable panels to help speed up the workflow of multi-screen setups. The Orion32 HD is shipping now and is priced at $3,495.

Meet your next pro audio partner Visit us at Prolight + Sound Frankfurt for access to our exclusive evening event with new equipment unveiled, insight sessions and live performances. Discover why venues worldwide are choosing Pioneer Pro Audio systems and how you can become one of our business partners.



WWW.PPAEVENT.COM Visit us at Prolight + Sound Frankfurt, 4-7 April 2017, Hall 3.1, Stand A91

#madeintheuk | Pioneerproaudio *See full terms and conditions at


SENNHEISER TAKES AMBEO TO NEXT LEVEL FOR PINK FLOYD PROJECT We took a trip to Abbey Road Studios, where we were given a demo of the technology used to create a new immersive mix of Comfortably Numb from Live 8, which will be a major feature of the upcoming V&A exhibition.

Sennheiser is soon set to employ its AMBEO 3D immersive audio technology at another V&A museum exhibition – this time for ‘Pink Floyd: Their Mortal Remains’, which gets underway in May. An audio-visual journey through the band’s 50-year history, the exhibition’s finale will include an immersive showcase of their performance of the hit Comfortably Numb from Live 8, featuring Sennheiser’s AMBEO 3D (18.3) audio technology and a 360-degree visual display. Audio Media International was recently invited to Abbey Road’s Studio 2 – where the band recorded some of their most famous albums – to meet the team behind the new AMBEO mix and hear a demonstration of the 25-speaker setup ahead of the public opening. Sennheiser is the official audio partner of the exhibition, and its systems will be used for all of the audio elements, including the delivery of high-quality arrangements drawn from Pink Floyd’s historic audio documents. The band has now been using Sennheiser and Neumann equipment for 50 years, starting with the Sennheiser MD 409.

Leading the demonstration, in a space created to match the purpose-built, acoustically treated exhibition area that will be in place at the V&A, producers Simon Franglen and Simon Rhodes collaborated closely with Pink Floyd recording engineer Andy Jackson to create the new version of the classic track. Franglen’s credits include production on tracks such as My Heart Will Go On and films such as Avatar and the remake of The Magnificent Seven. “There are four screens around the room, and the point of all of this is to put you actually inside the music,” said Franglen. “One of the things that Simon Rhodes and I found when doing these immersive mixes was that suddenly you don’t have that restriction of having just one wall. You’ve got all these extra dimensions and that allows you to hear a lot more inside the mix. “You’ll find as you listen that instruments and vocals breathe like they couldn’t before, but you also hear the song as you’ve always remembered it and it is still Pink Floyd. We’re not trying to change a Crown Jewel; we’re just trying to give you a different perspective. Pink Floyd were pioneers in surround sound; this is a natural extension of that, and now we have the technology to take it further. “The keyboards are over here and David Gilmour or Roger Waters is over there, but the drums and the bass we’ve put in the middle, the idea being that it anchors everything,” Franglen explained. “The audience is everywhere and around the edges, and the idea is to give everything room to breathe rather than it being just ‘over there.’”

The loudspeaker system in Studio 2, which will be recreated at the V&A, comprised an arrangement of Neumann KH 420s on top and KH 870s on the lower tier, which were “able to handle anything that we’ve thrown at them,” according to Franglen. “We used these speakers on big cinema mixes and they sound amazing, like the best-sounding PA we’ve ever heard,” said Franglen of the rig. “With the opportunity that Sennheiser has given us to basically put it everywhere, you get the perfect hi-fi – a really wonderful system. To put that into this space is very special, and to mix with it is very fun.”

Pink Floyd’s journey through the 1970s saw them embracing studio technology and using all the resources at their disposal at Abbey Road, on albums such as Meddle, The Dark Side Of The Moon and Wish You Were Here. Several instruments used on these albums are displayed in the exhibition, including David Gilmour’s famous Stratocaster, ‘The Black Strat’, which has been used on many Pink Floyd tours since making its debut at the 1970 Bath Festival Of Blues And Progressive Music. Jackson, described by Franglen as ‘the guardian of the Pink Floyd sound’, remarked: “I had this preconception of what you could do, that you’ve got the perimeter of a circle and trying to get inside that circle doesn’t really work. When Simon said let’s put the drums in the middle of the room I thought that you couldn’t do that, but of course yes you can! Not only are you adding the vertical, but you are making use of the entire volume.” ‘The Pink Floyd Exhibition: Their Mortal Remains’ is a rare glimpse into one of the world’s most iconic rock bands, and will open its doors on 13 May 2017 for 20 weeks.

Now available


Compromise is not an option when everyone is counting on you.

Better performance, quicker setup, immediate payoff: Digital 6000 was developed to exceed the expectations of audio professionals and business managers alike. Our new professional wireless series delivers reliable performance in even the most challenging RF conditions. Intermodulation is completely eliminated by Digital 6000, enabling more channels to operate in less space. Discover more:

Digital 6000 utilizes groundbreaking technology from our flagship Digital 9000. Dependability is guaranteed by our renowned Long Range transmission mode and proprietary audio codec. Digital integration is seamless with AES3 and optional Dante output. Monitoring and control of the two-channel receiver is at your fingertips, with an elegant, intuitive user interface.



This year’s Prolight + Sound welcomes a new ‘Silent Stage’ area as well as a reformed training event programme, with a number of significant new product launches also expected to take place.


What? Prolight + Sound 2017 Where? Messe Frankfurt, Germany When? 4-7 April

Trade visitors from all sectors of the events business – including venue operators, planning companies, retailers, sound experts, stage designers, studio engineers, event-service providers and exhibition companies – will once again come together in Frankfurt to gather information about new technical developments, products and services at this year’s Prolight + Sound show. PL+S 2017 will run in parallel with its sister MI-focused event Musikmesse over three days, following feedback from exhibitors, associations and the wider industry gathered after the 2016 event, when the two shows ran alongside each other over just two days, having previously taken place on identical dates. The information and training events will be regrouped under the name Prolight + Sound Conference, subdivided into three main sections: Event Technology, Media Systems and VDT Academy. Through lectures and presentations, speakers in the media technology field will pass on knowledge gained from practical experience and present product solutions and services from the fields of AV media technology and systems integration. In the Event Technology section, speakers will discuss security in the event sector, legal issues, regulations and training options. The VDT Academy is an information event organised by the Association of German Sound Engineers (Verband Deutscher Tonmeister – VDT). The new ‘Silent Stage’ area in Hall 4.1 will give event technology professionals and musicians the chance to learn how this innovative stage setup can help improve not only the mix but also the performance of musical acts. The advantages of a silent stage will be explained and opportunities for implementing the concept illustrated by live demonstrations featuring a presenter and band, which will play several times a day throughout the fair. An overview of some of the best systems on the market can also be seen and heard in action at the Concert Sound Arena, while a range of mobile PA systems will be demonstrated under realistic conditions at the Live Sound Arena.

ON THE SHOWFLOOR On the Adam Hall stand, LD Systems will demonstrate the MAUI 5 GO, said to be the world’s first mobile battery-powered column PA. It offers six hours of battery life from a single charge, ultra-portable construction, a PA and monitor system in one, an integrated mixer with Bluetooth, 800 watts of peak power and high sound pressure with a 120 dB peak. Funktion-One will expand its Evolution Touring Series with the addition of supplementary mid-high and mid-bass loudspeaker enclosures. Evolution 7TH is the mid-high section of the Evo 7T, featuring a 10in mid-range and a 1.4in compression driver for high frequencies. It is also a lot smaller than the Evo 7T, making it flexible and adaptable to a number of configurations. Evolution 7TL-215 features two of the Evo 7T’s horn-loaded 15in drivers, providing mid-bass reinforcement for flown and ground-stacked setups. These new additions also mean that configurations are now eminently scalable. The company will also add to its Bass Reflex range of speakers with the BR132, which utilises Powersoft’s M-Force 10kW linear transducer and is around 40% smaller than the F132, which debuted at last year’s show. The L-Acoustics booth will provide details of the P1 – a new networked digital audio processor – during twice-daily presentations, while the company will also hold presentations of its Syva segment source, which is expected to ship to first clients in June. Syva is ideally suited to providing sound reinforcement for corporate events, fashion and trade shows, amphitheatres and performing arts centres. Visitors will also have the chance to attend one of the daily presentations of the workflow and control tools for the company’s L-ISA immersive sound solutions. Following a launch at the NAMM Show back in January, PMC is showing the latest additions to its Main Monitor range at this year’s show. The company’s MB3 and BB6 active Advanced Transmission Line (ATL) loudspeakers are specifically designed for critical music creation and production. Available in single or twin-cabinet (XBD) versions, the ultra-high-resolution monitors have digital and analogue inputs and are designed for freestanding or soffit-mounted use in recording, mixing, mastering and outside broadcast applications. Additionally, Martin Audio says it will ‘unveil multiple products across different categories that will both disrupt the marketplace and delight customers’ on booth B71 in Hall 3.1.


WHERE CONTENT REALLY IS KING A brief overview of the early exhibitor news ahead of this year’s Las Vegas event.


What? NAB 2017 Where? Las Vegas Convention Center When? 22-27 April

Photo: Robb Cohen Photography and Video

Returning to Las Vegas from 22-27 April 2017, the NAB Show is the world’s largest electronic media show covering the creation, management and delivery of content across all platforms, attracting over 100,000 attendees from 166 countries and more than 1,700 exhibitors. Although we don’t have the space to run through what all of these exhibitors will be doing this year, we have managed to compile a few predicted highlights from the showfloor. Calrec’s main focus will be on its new Brio and RP1 lines. The company considers Brio to be the most powerful and compact digital broadcast audio console in its class, with a comprehensive broadcast feature set that supports a wider breadth of broadcasters. Brio is only 892mm wide, and its dual-layer, 36-fader surface provides more faders in a given footprint than any other audio broadcast console.


The new RP1 remote production engine is a 2U core that contains integrated, FPGA-based DSP, which enables a console surface at another facility to control all mixing functionality. The RP1 core manages all of the processing for IFB routing and remote monitor mixes, and it does so locally with no latency. Digital audio specialist Jünger Audio will be present at two locations at NAB 2017 – on its own booth in the North Hall (N4831) where the theme will be Smart Audio Solutions, and on its US distributor Independent Audio’s booth in Central Hall (C3036), where the focus will be on audio processor hardware. Jünger will show its full range of loudness control and audio processing solutions for the broadcast and pro-audio industries, including its full range of D*AP products, all of which incorporate a collection of adaptive processing algorithms.

Lawo’s Michael Dosch and Stephan Türkay have been selected to present a Radio White Paper at the 2017 NAB Broadcast Engineering and Information Technology Conference. The paper, titled ‘Virtualizing Radio Studios: Broadcasting + IT = AoIP 2.0’, will be co-presented by Dosch and Türkay on Monday 23 April. Their presentation will explain how virtualising the tools needed to produce modern radio is the next frontier in broadcast technology, giving real-world examples of radio stations that have already successfully virtualised their studios. Nugen Audio will unveil Halo Downmix, an all-new product in the company’s award-winning line that also includes a number of Upmix variants. Halo Downmix is a creative solution for precise downmixing of feature-film and other 5.1 mixes to stereo. It gives the engineer hands-on control over the relative levels, timing, and

direct/ambient sound balance within the downmix process. By allowing appropriate adjustment, Halo Downmix delivers a result far superior to a typical in-the-box, coefficients-based process, according to Nugen. Wisycom will be introducing its brand new MAT244 Programmable RF Combiner at this year’s NAB Show (Booth C865). This new solution is a four-channel version of the company’s popular MAT288, expanding the company’s customer base to reach users who do not require the higher number of controlled areas, while also providing a more cost-efficient option. Keep checking the Audio Media International website throughout the month to stay up to date with all the news from both NAB and Prolight + Sound as and when we get it.
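For context, the “in-the-box, coefficients-based process” Nugen contrasts Halo Downmix with is typically a static matrix along the lines of ITU-R BS.775, where each stereo output is a fixed weighted sum of the 5.1 channels. The sketch below shows that baseline (the common -3 dB centre and surround weights) – it is not Nugen’s algorithm, merely the approach Halo Downmix aims to improve on:

```python
def itu_downmix(l, r, c, lfe, ls, rs,
                centre_gain=0.7071, surround_gain=0.7071):
    """Static ITU-R BS.775-style 5.1 -> stereo downmix.

    Each output is a fixed weighted sum of the inputs; there is no
    control over timing or direct/ambient balance, which is exactly
    the limitation a dedicated downmix tool addresses. The LFE
    channel is conventionally discarded.
    """
    lo = l + centre_gain * c + surround_gain * ls
    ro = r + centre_gain * c + surround_gain * rs
    return lo, ro

# A centre-only signal lands equally, at roughly -3 dB, in both outputs:
lo, ro = itu_downmix(0.0, 0.0, 1.0, 0.5, 0.0, 0.0)
```

Because the coefficients are fixed, correlated surround content folds straight on top of the front channels – the source of the “inferior result” static downmixes are often criticised for.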

April 2017



WHY AVB IS NOT THE BETAMAX OF THE INDUSTRY The continuing rise of Audinate’s Dante protocol doesn’t mean Audio Video Bridging is on its way out, according to Graeme Harrison, executive vice president at Biamp Systems.


It’s easy to believe, reading the press and talking to people at trade shows, that the industry is in the midst of a protocol war between Dante and AVB/TSN, with one side winning and the other crashing and burning – something similar to what happened with VHS and Betamax in the 1980s. The truth, however, is very different and possibly more interesting. Dante is a protocol for our industry developed and sold by Audinate. It has many licensees and is unquestionably the most widely adopted audio protocol in the professional AV industry today. AVB/TSN, on the other hand, is a widely recognised set of IEEE standards for deterministic Ethernet, with Avnu, the interoperability alliance, setting interoperability standards and providing third-party certification. This allows different manufacturers’ equipment to be used together seamlessly, in the same way as the Wi-Fi Alliance does for the IEEE 802.11 wireless Ethernet standards. AVB/TSN has many technical advantages, allowing any deterministic data to be transmitted (audio, video, and other data) over pipelines of any size. It is unique in being truly deterministic, allowing a guaranteed transmission time even in converged networks in which this deterministic data runs alongside standard computer data (think of a corporate network of the future). There are several other technical advantages – but wasn’t Betamax technically superior to VHS, and that didn’t seem to help it, I hear you think!




I’m not going to focus on the technical side, but instead think about two other aspects. The first of these concerns the commercial side of this question. Those of you of a certain age might remember IBM Token Ring, a proprietary standard for networking computers developed by the then industry-leading company. This gained some early momentum – first-mover advantage – but in the end Ethernet, a set of IEEE standards, won. This is just one example of something that has happened repeatedly in technology: a proprietary standard having first-mover advantage and an open standard winning out in the long run. Some people in our industry, while admitting this, say that our industry is not ready for an open standard – that networking is not widely adopted and that we (our industry, that is) are not mature enough for open standards. It is certainly true that pro AV has never developed any open standards (although it uses some, like Ethernet, Wi-Fi, and USB), but this has been a growing problem. The reason we are all looking for a new protocol is that the last proprietary standard the industry adopted, CobraNet, atrophied after its developer, Peak Audio, was acquired by Cirrus Logic. Many companies were burned by this and some, like Biamp and QSC, determined they did not want to go through it again. QSC responded by developing its own protocol – a perfectly reasonable proprietary response that gives the company total control over its future.

Open minded Biamp Systems is philosophically predisposed towards ‘playing well with others’ and open standards, so we adopted AVB/TSN. We are active on the various Avnu committees to help advance the standard in a way that benefits our industry. Both are valid answers to the problem of control and longevity, but I would argue that adopting the latest proprietary standard du jour is simply putting one’s head in the sand and delaying the inevitable. It’s an easy option because a third-party company does all

of the development work and supports its protocol, but licensees have next to no control over the protocol’s future. An additional benefit of using a protocol based on IEEE standards is that this standards body is well known and trusted by the IT industry – and IT departments are increasingly our end customers, with AV moving from a facilities operation to residing under IT in most large corporations, universities, hospitals and government institutions. Whilst on the subject of commercial realities: if one believes, as Biamp does, that the future involves the network being pushed ever closer to the endpoints, those endpoints can only really be cost effective on a granular level (for example, single-channel audio or video wallplates) if there is no licence fee to be paid to a protocol developer. This is another reason for the long-term success of open standards. Finally, I’d like to zoom out a little and look at the technology industry as a whole, and especially focus on the Internet of Things (IoT) – a very trendy phrase to throw into after-dinner conversation (along with AI, VR, AR, ‘the cloud’ and UC, if you are looking for other such topics!). If we think about IoT, the German phrase for it is M2M – machine to machine – as it involves machines talking to other machines without human supervision. This is what our industry has been doing for decades now – think about control systems, DSP

audio platforms, video ecosystems, and more. If we think about smart buildings and smart cities and AV’s place in them, this involves interaction with many other types of system (HVAC, security, BMS, and much more). What is not commonly known in the industry is that Avnu currently serves four markets: the pro-audio and pro-AV market, but also the automotive, consumer electronics, and industrial (currently industrial process control and financial transactions) markets. At Avnu meetings all of us talk together about common problems and solutions. It is a time when AV has a seat at the ‘big table.’ I will leave you with the thought that AVB/TSN is not going away because it is fast becoming the deterministic protocol of the Internet of Things. Companies like Intel, Cisco, GE, and National Instruments are investing huge amounts of time and money developing and implementing it. IoT and AVB/TSN will certainly develop in the future – it’s not going away – and the only question is: Is our industry brave enough to sit at the big table, or are we going to let IoT develop without us? I know what I believe the answer to be, how about you? Graeme Harrison is executive vice president of Biamp Systems, manufacturer of AV install solutions such as the Tesira media system for digital audio networking.

THE SMALLEST AND MOST POWERFUL TRI-ELEMENT MICROPHONE AVAILABLE Setting a new standard in tri-element microphone performance and design, the M3 is the only multi-element mic available with studio quality sound, adjustable cable length, rotational positioning, and a UL-rated plenum box solution above the ceiling tile. The three phase-coherent hypercardioid capsules of the M3 have a tailored frequency response that optimizes speech intelligibility and rejects extraneous noise, making it the ideal mic for video conferencing, distance learning, courtroom activities, and surgical procedures. Available in charcoal or white, the TAA-compliant M3 is a stunning addition to the Audix family of conference miking solutions.

M3 ©2017 Audix Corporation. All Rights Reserved. Audix and the Audix Logo are trademarks of Audix Corporation.



Sound designer and rerecording mixer Niv Adiri on how he utilised Nugen Audio’s Halo Upmix tool during the making of T2 Trainspotting.

Sound design and mixing is something that has always come naturally to Niv Adiri. “Growing up in Israel, I knew at an early age that I wanted to work with sound. It’s a passion that’s stayed with me ever since,” he says. From his first sound job as a DJ at age 13, Adiri has gone on to become an award-winning sound designer and re-recording mixer. He shared an Academy Award and a BAFTA Award with the sound team for the 2013 film Gravity, and earlier this year was nominated for a BAFTA for his work on Fantastic Beasts and Where to Find Them. Adiri is currently working as a sound designer and re-recording mixer at Sound 24, a facility housed within Pinewood Studios. Most recently he worked as a sound designer and re-recording mixer on T2 Trainspotting, currently screening in cinemas throughout the UK and soon to be released in the US. Throughout his career, Adiri has seen technology evolve to become a true enabler of the craft of sound for films. He cites recent technology advances, such as the ability to mix using digital audio workstation systems like Avid Pro Tools. “Instead of technology dictating how we do our jobs, we’ve reached a point where we can harness technology to our benefit – we can still be creative but we can work more efficiently,” he says. “These tools give us a single protocol for developing sound on a project in one session and on a single machine, which helps tremendously and accelerates our turnaround on changes that come in at the last minute.” Adiri cites an example from his latest project: “In mixing the music and sound effects for T2 Trainspotting, we were able to develop the sound from




beginning to end in one session. It meant we didn’t have to start from scratch when edits came in – we could pick up where the producers and directors left off the last time,” he explains. One of Adiri’s requirements for T2 Trainspotting was the creation of a surround 5.1 version of the music – a task that was accelerated using Nugen Audio’s Halo Upmix plug-in. With Halo Upmix, he was able to work from stereo source tracks to deliver a 5.1 mix that was true to the original sound of the music but also sounded ideal in a surround environment. “The Trainspotting music was naturally very percussive and rhythmic, which can present problems when creating a 5.1 mix. The surround needs to be tied as closely as possible to the source so it won’t sound diffused in larger rooms. That means it needs to be very tight without the slightest little delays – the last thing you want is short reverb, diffusion, or ambience reverb,” he explains. “I had some experience with

Halo before and knew it could provide the tightest output compared to other tools I tested.” Adiri adds that Halo Upmix supported him creatively, as well. “In the upmix, the basic shape and treatment of the music was there, and Halo helped me improve it even further. I was able to find just the right combination of reverbs and slap delays. But most importantly, I was able to create a 5.1 mix that is really enveloping, with music that surrounds the listener and fills up the room in a close way without sounding too distant. That’s the type of effect that Halo enables.”

Taking it further On an upcoming project, Adiri is looking forward to trying out the 9.1 option for Halo Upmix, which includes overhead positioning and generates a 7.1.2 (Dolby Atmos) bed track-compatible upmix. “In this next project I’ll be using Halo to go from a stereo mix to 7.1 for the music and sound effects, and I’m looking forward to experimenting with the overheads.”

With the 9.1 Halo option, Adiri knows he’ll be able to work closely with the Nugen team. “From my first few experiments with Halo, it’s been really refreshing to work directly with the Nugen Audio founders. Paul Tapper has been extremely generous with his time, and he’s made himself available to explain the logic behind the algorithms and give insight into how the tool works. With technology solutions, you can’t take for granted that the person who answers the phone will be able to tell you what’s going on inside the tool. But Nugen Audio is the exception. It’s tremendously helpful to be able to have a chat with the guy that actually wrote the plug-in’s algorithms. “Halo Upmix is a brilliant addition to my toolset. It’s become my go-to plug-in for creating surround upmixes, especially for music that’s more electronic, rhythmic, and percussive. It’s the best way to go to surround when you have only stereo sources for music.”

BRIO: DESIGNED AND MANUFACTURED IN THE UK. 100% BROADCAST, 100% CALREC. Calrec Audio is relied on every single day by the world’s most successful broadcasters. Calrec’s reputation for build quality, reliability and audio performance has made it an industry standard across the world.

Find out what makes the ultimate broadcast desk in the Calrec Periodic Table of Broadcast


Location sound mixer Dave Sansom

REDUCED SERVICE Jerry Ibbotson hears the arguments for and against new on-set noise reduction technology, and finds varying opinions about the impact it is having on post crews.

///////////////////////////////////////////////////////////////////////////////////////// In the last 16 years of writing for Audio Media, I don’t think I’ve talked about generators quite as much as I have for this article. They’re things that provide power and that’s it. But once you get into a conversation about location noise (as opposed to sound) with both production and post-production audio mixers, you find they crop up quite a lot. It’s all about extraneous noise, you see, and gennies do tend to make quite a hum. But what’s to be done about this – and other unwanted sound – out



on location? It seems sets are getting noisier while some actors are getting quieter but with more and more technology-based solutions appearing on the market, is it something that no longer has to be a concern at all? Dave Sansom is a former dubbing mixer turned location sound mixer with shows like Broadchurch, Happy Valley and Life On Mars to his credit. He uses the Cedar DNS 2 portable dialogue noise suppressor when working on productions like the recent Cold Feet reboot.

“That’s a good example,” he says. “There are established locations from the very first series, 18 years ago. But with the exterior of one of the houses what you don’t see is there is now a little industrial unit opposite with air conditioning units on the rooftops, going all the time. No one is going to turn those off when we’re filming. The Cedar can dramatically reduce the noise for the edit and production rushes but it’s all there to be unpicked if they decide they’d rather have it later.”

But not everyone is a fan of allowing noise reduction technology to move out of the dubbing studio and on to location. “It can always, always be done better later with some care and human control,” says production sound mixer Stevie Haywood, whose most recent project was the Love Actually reunion for Comic Relief. “I see on-set noise reduction as being similar to using compression or reverb. You would never do that to location sound. The brief is clear: capture it as cleanly and realistically as


//////////////////////////////////////////// possible to give the maximum amount of control to post-production.” Stevie’s beef with some of the new hardware available to production sound teams is that he believes it can fundamentally change the dynamic of working on location. This is where the generators come in. They are often a source of noise on-set but are also essential to powering the production. Part of a sound mixer’s job is to negotiate with other crew members to have noisy equipment moved further away from the action.

Haywood says: “One of my big worries about the implementation of on-set noise reduction is that anyone with a set of headphones will hear what can be done immediately and may think, ‘Why do we have to move the generator, why should we lose 10-15 minutes of production time to move it in the first place if the sound can be so easily removed? Why do we need to spend any time to help sound sort out a problem?’ It’s hard enough anyway to get help from the various departments as it is but anything that starts to remove that

noise on the front end will undermine our argument.” This view is echoed by Paul Laynes, head of sound at Yellowmoon. He’s been dubbing mixer on Line Of Duty and The Fall for the BBC. “It could be seen as a fix for not bothering to find a quiet location, or even for replacing the sound recordist on lower-end documentaries where they have the cameraman record the sound. We’re trying to say you need someone there to think sound – the cameraman is thinking pictures.” He also thinks that processing audio is best done in the studio: “With on-set noise reduction it could be useful but it could be quite disastrous. If sound recordists were to start using products like these they would have to still deliver us an untreated version as well in case they went too far. [On location] they only have one or two shots at it, whereas in the dub we can keep playing it back until we get it right.”

Keep it clean But Laynes would still prefer clean sound to begin with: “If I’m spending most of my time fixing dialogue I’m not bringing much to it [the mix] beyond fixing problems. The creative end gets killed because we’re spending our very limited time in the mix these days cleaning up the problem on set.” And Haywood agrees, adding: “The post-production budget is a finite sum

of money and if you spend time just cleaning up and removing noise that could have been dealt with on location without any real hassle at all then you’re losing time for doing more creative stuff and more interesting mixing.” Surely then, this is the very argument for using on-set noise reduction in the first place – to fix it on the day and not in post? Haywood argues: “The problem with doing it on location is that it’s not discerning and it’s always on. It’s always doing what it’s doing and it’s always removing what it thinks needs removing. And as smart as the algorithm may be, it doesn’t really know what you want to get rid of. It’s not genuinely intelligent.” Key bits of kit in this area include microphones that aim to reduce background noise through Digital Signal Processing (DSP). This is different to a more analogue approach, which might use a separate mic capsule for “listening” for unwanted sound. Haywood recalls when he first came into contact with the technology. “I bought into it and had one for a while in 2012. I had a scene to shoot by the sea, with a couple in love at the sea edge in their swimming costumes. We couldn’t radio-mic them; it was all boomed, with the digital mic as the main boom and then off-screen boomed with a Sennheiser MKH 60. You could barely hear anything they were saying but the production


had asked me if I could use iZotope RX (noise reduction software) to do something with it to let them hear what the scene could be like. So I pumped it all into RX and spent some time trying to fix it, and I found that the processed audio recorded from the microphone was completely useless. There was nothing more I could do with it. What was printed into the audio was it, and it hadn’t done a very good job. Whereas I took the audio from the MKH 60 and was able to do some quite serious processing, and I did get something useable out of it in the end. “I realised at that point that the fact that it was printed into the audio was a problem. So I resigned myself to recording everything processed and unprocessed, but then I thought – what’s the point in that? I just don’t think it’s something we should be doing quickly on location. In RX I spent the best part of an hour on just a few minutes of audio.”

Cutting costs One of the problems facing audio teams, both in production and post, is that locations are getting noisier and noisier, and this is often down to restricted budgets. Paul Laynes has experience of this. “Definitely delivering us the best sound possible is important, and for me that means a quiet set and a well-controlled one. Generators are far away and there’s an attempt by all other departments to make sure that happens. That used to be how it was done on fast-turnaround things like BBC dramas, but not so much now. Line of Duty (the BBC’s police corruption drama about to have its fourth season), for example, was entirely shot on location.



Even the offices you see are real offices where people are working on the floors above and below the shoot. So that produces a lot of noise on set – with things like air conditioning that can’t be switched off.” Again, I suggest this sounds like a powerful argument in favour of starting the noise reduction process early and not leaving it to guys like Paul to fix later. He agrees, up to a point. “We’ve better and more creative things to be getting on with than being bogged down in fixing noisy dialogue. But that is the way it’s going and that’s what we have to do. In the dubbing suite I use a lot of noise reduction tools, mainly Cedar and iZotope RX. You’re looking to get rid of the noise without hurting the dialogue and leaving artefacts. You can hear it on TV drama now because we are forced to use it, because sets are getting noisier.” And for him, there’s another bugbear – mumbling actors. It’s something that gets social media and the tabloids in a tizz (think of the BBC’s recent SS-GB drama as one example) and the dubbing mixers are in the firing line. “There’s this other problem, ‘Mumblegate’, where actors are getting quieter than they ever were before. There’s a trend in drama where actors go inside themselves and deliver quiet performances. It’s up to us to deal with – with the technology (but) if you have noise on top of that actor, you have real problems trying to get rid of that. As a dialogue editor and dubbing mixer what we want to see is louder actors and quieter sets!”
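The tools mentioned above are proprietary, but the basic idea behind many dialogue noise reducers descends from textbook spectral subtraction: estimate the noise's average spectrum from a noise-only stretch, subtract it frame by frame, and keep a small spectral floor to limit artefacts. This is only a toy sketch of that classic technique – nothing like Cedar's or RX's actual algorithms:

```python
import numpy as np

def spectral_subtract(noisy, noise_profile, frame=512, hop=256, floor=0.05):
    """Toy spectral subtraction.

    noise_profile: noise-only audio used to estimate the average
    noise magnitude spectrum, which is then subtracted from each
    windowed frame of the noisy signal.
    """
    win = np.hanning(frame)
    # Estimate the mean noise magnitude per frequency bin.
    noise_frames = [np.abs(np.fft.rfft(noise_profile[i:i + frame] * win))
                    for i in range(0, len(noise_profile) - frame, hop)]
    noise_mag = np.mean(noise_frames, axis=0)

    out = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame, hop):
        spec = np.fft.rfft(noisy[i:i + frame] * win)
        mag = np.abs(spec)
        # Subtract the noise estimate, keeping a small fraction of the
        # original magnitude to limit "musical noise" artefacts.
        clean_mag = np.maximum(mag - noise_mag, floor * mag)
        # Resynthesise with the original phase and overlap-add.
        out[i:i + frame] += np.fft.irfft(clean_mag * np.exp(1j * np.angle(spec)))
    return out
```

The "always on, not discerning" criticism above is visible here: the subtraction is applied to every frame regardless of whether dialogue is present, and whatever it removes is gone from the printed audio.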

What should the priority be? Stevie Haywood thinks the craft skills of audio professionals are more important

Production sound mixer Stevie Haywood

than algorithms when it comes to delivering good audio. “Mitigating noise on location by whatever cunning tricks we’ve got – not by processing – is part of what we do. Putting tape on shoes so you don’t get footsteps, that sort of thing, is the craft. If you find a way of removing it using software on location, it would be ok if it had no side effects. “But we need to protect people’s respect for what we do – like stopping people from talking on set. There’s a danger they might not bother if they thought the technology would just strip it out. “We should be getting it right on location – getting people to be quieter from the start.” But Sansom is happy to use equipment like the DNS 2 on a regular basis, with a caveat: it’s really just for the picture edit and for checking of rushes. “My DNS2s sit across a two channel mix I give to the editor. All I do is clean up the edit and rushes and I use it sparingly. I don’t do anything destructive

with it; it’s just for the edit. Every mix I record is isolated so any post house can start with those non-Cedared tracks. I always do that and even offer the option of a second machine with a non-Cedar mix as a master copy. “At the end of the day you still go into a location and seek out what’s making the noise in the first place. This isn’t an alternative, but it’s a really useful help when you can’t get things as quiet as you’d like to.” He highlights one example where using on-location noise reduction made a colleague’s life easier. “Normally if you get a message from an editor it’s bad news – something is missing or doesn’t work. But on Last Tango in Halifax, at the end of the first week the editor rang to say these were the cleanest rushes he’d ever had, and that was with the Cedar.” Technology aside, what every audio professional does agree on is that making a location as quiet as possible is the first big step to clean sound – at least until someone invents a truly silent generator.



FREE Exhibits-Plus pass VIP code: AES142NBE



2017 EXHIBITION: MAY 20 – 22 PROGRAM: MAY 20 – 23

The Latest Hardware and Software Cutting-Edge Technical Presentations Audio Networking Pavilion and Sessions Pro Sound Expo Stage - Open To All Demo Rooms and Tours

If It’s About AUDIO, It’s At AES!

For exhibition and sponsorship opportunities contact Graham Kirk:



Some may have dismissed VR as a flash in the pan, but now that Facebook has introduced a set of spatialised audio tools, attitudes towards it are changing. Erica Basnicki explores the significance of the tech giant’s latest move. In late 2016 a small but noticeable handful of tech writers denounced virtual reality technology as nothing more than a passing fad. It is often convenient (and rewarding) to take a contrarian view on anything popular, and VR was arguably the year’s Goliath to take out. The bulk of the arguments centred around three (valid) points: it’s expensive, the headsets are bulky when worn, and for many a virtual experience creates a very real feeling of nausea. These criticisms were largely targeting a virtual reality gaming experience and perhaps there is still a way to go on that front. Set gaming aside, however, and our figurative Davids will have to look for a bigger and better stone to throw, as media giant Facebook (parent company of VR manufacturer Oculus) is poised to offer immersive content to its global network of users – and creators – with no headsets or investment in gear required. In 2015 Facebook introduced 360 video, but as anyone who has encountered the medium knows, the experience isn’t complete unless accompanied by fully spatialised audio. Enter Edinburgh-based Two Big Ears, acquired by Facebook in May 2016, to provide the audio technology behind the recently launched Facebook 360 Spatial Workstation. For the first time, users can now experience high-quality spatial




audio across a wide range of devices and platforms. Yes, even using a pair of earbuds on your mobile phone. This is especially important for 360 content creators, and Facebook has not only provided the tools free of charge, but has prioritised the creation experience. “One of the key things that we’ve been striving for is ensuring consistent quality across the whole ecosystem that we’ve developed within Facebook; viewing, publishing and designing 360 videos that could be seen and heard on Facebook newsfeeds, and also on a VR headset,” says Varun Nair, co-founder of Two Big Ears and a member of the Audio 360 team at Facebook. “We’ve been speaking over the past period to sound designers and producers who work in the space, (and) realised that there’s no single one-fit solution to what this medium is about. It’s early days and everyone is still exploring and understanding the medium. When you’re doing a mix, you want it to be exactly the way you designed it when it’s playing back off the device – it’s sort of a number one rule that’s been around professionally for a long time. It was important for us to ensure that that is the case across the ecosystems in which you work.”

How does it work? The Audio 360 audio engine uses a hybrid

high-order ambisonic system that can output eight channels of spatialised audio plus two channels of “head-locked” audio – in other words, audio (narration, perhaps) that does not need to be spatialised. “Creatively it offers a lot of options which lots of sound designers need and use, and that’s something we’ve understood over time being in close proximity to them,” says Nair. Hans Fugal, who has been at Facebook for about seven years, has a computer music and audio background. “When the Two Big Ears crew came in I helped to integrate the work that they’ve done into the Facebook product itself. We have the ability to play back the same thing you hear in your mixing session; using our same rendering SDK we’re able to play that back with the same code on iOS, Android, Web and Gear VR. This is available not only for VR systems like Gear VR, but also the phone just by itself. If you have headphones, you can have a window into the 360 video world and hear the exact same thing that you could hear there. The rendering quality is the same.” Delivering 360 audio content across multiple platforms came with its fair share of technical challenges. In order to deliver an encoder sooner rather than later, the team needed to stick to Facebook’s native video format: MP4 with H.264 video and AAC audio. AAC

audio in MP4 supports eight channels, but not ten, and AAC encoders understand eight-channel audio to be in the 7.1 surround format – applying an aggressive low-pass filter and other techniques to compress the LFE channel, which is incompatible with faithful rendering of spatial audio. “The file formats, the standards, these are all things that are under constant evolution at this time and nothing has emerged as a leading technology yet. We’re hopeful that soon we’re all able to settle on something, but in the meantime we’ve had to make things work with what we’ve got,” Fugal explains. Facebook will also continue to work with 360 content creators and will “do what’s best for the community and the ecosystem as a whole”, says Fugal (and in fact, has a very active group on – where else – Facebook, where he, Nair and the entire Audio 360 team offer support and take note of issues as they are posted) as technology changes and new standards – if any – emerge. How the entire 360 ecosystem will further develop will depend in very large part on how creators are using it, says Fugal: “It’s such a new medium and everyone is trying to figure out the best way to deal with it, so a lot of assumptions need to be tried and tested.”
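The split between spatialised and head-locked channels can be illustrated with first-order ambisonics – a simplification, since the Workstation itself uses a higher-order hybrid format. On playback, the spatial channels are counter-rotated by the listener's head yaw so that sources stay fixed in the world, while head-locked channels skip that step and decode the same way regardless of where the listener looks:

```python
import math

def encode_foa(sample, azimuth):
    """Encode a mono sample into horizontal first-order ambisonics
    (W, X, Y components; normalisation conventions vary)."""
    return (sample,
            sample * math.cos(azimuth),
            sample * math.sin(azimuth))

def rotate_yaw(w, x, y, yaw):
    """Counter-rotate the sound field by the listener's head yaw.
    Spatialised channels pass through this on every head movement;
    head-locked channels (narration, music beds) bypass it."""
    return (w,
            x * math.cos(yaw) + y * math.sin(yaw),
            -x * math.sin(yaw) + y * math.cos(yaw))

# A source encoded at 90 degrees (to the listener's left)...
w, x, y = encode_foa(1.0, math.pi / 2)
# ...ends up dead ahead once the listener turns 90 degrees left.
w, x, y = rotate_yaw(w, x, y, math.pi / 2)
```

The same rotation-then-decode step, run with identical code on each platform, is one way to get the "exact same thing" on phone, web and headset that Fugal describes.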

Colby Ramsey finds out how the audio team behind Microsoft’s latest RTS hit ‘Halo Wars 2’ came up with an explosive audio mix to match its stunning visuals.

As advancements in technology are made and audio quality in entertainment media continues to improve across the board, a greater awareness of sound amongst creative communities is becoming more commonplace, with people quickly realising the power of immersive video game audio in particular. For Halo Wars 2 – a new real-time strategy (RTS) video game for PC and Xbox One developed by Creative Assembly and 343 Industries, and the latest instalment in Microsoft’s sci-fi Halo franchise – the audio team at Creative Assembly, best known for their work on games like Total War, simply wanted to make “the best sounding RTS we could.”



April 2017

In an RTS game, the player controls a disembodied camera that can move quickly around a map or playing area. The player is required to manage various aspects of the battle, whether that is resource management, base building, conducting a skirmish, or assaulting an enemy base. “We needed to convey a sense of scale with these big battles and environments, supporting action-heavy gameplay with really exciting audio,” says sound designer and Halo Wars 2 audio lead Sam Cooper. “We tried to learn as much as possible about how to do that as the game was developed.” With a lot of the gameplay taking place off screen, it was extremely important that the game’s sound really communicates the action in many different ways, from many locations

around the map. “That was a big focus for us, in addition to creating a sense of immersion and excitement with the sound design,” Cooper adds. With many of the sounds in the game – especially weapons and explosions that are heard from quite a long distance across a map – two versions of the same asset were created: a close perspective and a distant perspective, the latter of which was altered to achieve this distant effect while also retaining the characteristics of the original sound. “This was in addition to using our middleware to filter and attenuate the sound,” explains Cooper. “The objective being that you hear sounds in the game from a long distance away but they are still recognisable as particular weapons from the Halo universe.”
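The two-perspective approach Cooper describes can be approximated with a distance-driven crossfade between the close and distant variants of an asset, on top of the middleware's own attenuation. A toy sketch – the near/far distances and the 1/r rolloff here are illustrative assumptions, not values from the game:

```python
import math

def perspective_gains(distance_m, near_m=15.0, far_m=120.0):
    """Crossfade gains for the close/distant variants of one weapon asset.

    near_m/far_m are hypothetical tuning values. Below near_m only the close
    asset plays; beyond far_m only the distant one; in between we use an
    equal-power crossfade, and overall level rolls off with inverse distance.
    """
    t = min(max((distance_m - near_m) / (far_m - near_m), 0.0), 1.0)
    close_gain = math.cos(t * math.pi / 2)     # equal-power fade out
    distant_gain = math.sin(t * math.pi / 2)   # equal-power fade in
    attenuation = min(1.0, near_m / max(distance_m, 1e-6))  # 1/r rolloff past near_m
    return close_gain * attenuation, distant_gain * attenuation
```

At the camera the close asset plays at full level; far across the map only the distant, pre-processed variant remains, quietly – which matches the goal of sounds staying recognisable at range.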

A change of scenery
The mix for Halo Wars 2 is driven by a number of systems that shape what the player hears depending on gameplay context, “which was one of the biggest challenges for us to get completely right,” according to Cooper. In the game, the player could be focusing on a quiet section of the map in one moment, and then zip over to a much more intense and explosive encounter. “In the quieter moments you’re hearing more subtle, ambient environmental audio, maybe some chit-chat between marines, some Foley etc., whereas during a battle, it’s complete carnage,” Cooper remarks. “In this context, our mix systems need to consider the size of the battle, which weapons and how many of them are firing, that sort of thing,” adds Cooper. “We use all of that information to drive


volume and EQ adjustments on the fly, in order to keep control of the soundscape and ensure that the player hears the most important things at each moment.” Every sound in Halo Wars 2 fits within a priority band, and each has volume and EQ inserts sidechained to various other high-priority output busses, resulting in a mix that is constantly altering depending on the context. “There are a lot of sounds competing for space in the mix sometimes, so we had to deal with it in an adaptive way,” Cooper explains. “A lot of iteration went into finding the sweet spot for the player, while also keeping the mix de-cluttered.” Some experimentation also took place with regards to the game’s new faction, The Banished – a post-apocalyptic race whose sound design

needed to come across as sci-fi but not too shiny and polished. This meant that Foley for The Banished was recorded with leathery materials, loose metal plates and chains, in contrast to the familiar UNSC human forces, who wear more high-tech materials and therefore required Foley with more plastic and synthetic fibres. “We actually did a fair amount of custom recording for this project,” notes Cooper. “A lot of rocky, desert environmental sounds were recorded in an old lime quarry, and we did a lot of the jets, helicopters, tanks and firearms recordings ourselves.”
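The priority-band ducking described earlier – every sound sidechained against higher-priority busses – can be caricatured in a few lines. This is a simplified stand-in, not the game's actual mix logic, and the 4 dB-per-level figure is an invented tuning value:

```python
def duck_gains(active_sounds, duck_db_per_level=4.0):
    """Per-sound attenuation from priority bands.

    active_sounds: list of (name, priority) pairs, 0 = highest priority.
    Each sound is ducked by duck_db_per_level dB for every priority level
    between it and the most important sound currently playing.
    """
    if not active_sounds:
        return {}
    top = min(p for _, p in active_sounds)  # highest-priority band in play
    return {name: 10 ** (-duck_db_per_level * (p - top) / 20.0)
            for name, p in active_sounds}

# A leader power fires over gunfire and ambience: lower bands duck down
gains = duck_gains([("leader_power", 0), ("gunfire", 1), ("ambience", 3)])
```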

Loud and Proud
Cooper was also part of the team behind Alien: Isolation two years ago, a survival horror game that is a lot more

atmospheric and slower paced than his latest project. “The sound design aesthetic there is centred around building tension and fear, whereas in Halo Wars 2, there is some tension, but it’s more about the big spectacular battles and the exciting audio that accompanies them,” Cooper explains. “In this respect, we were able to be a bit more bombastic with stuff, with certain sound effects like special leader powers often blasted out in full over-the-top surround sound. “Alien: Isolation was more narrative-driven, so in terms of the mix, we could script mix alterations really carefully around key story moments, whereas in Halo Wars 2 the gameplay is emergent and changes really rapidly,” he adds. “So it was a lot more about building up the tech to support these adaptive systems.” The audio team at Creative Assembly make use of an RME Fireface 800 interface along with Dynaudio BM14s and BM6a MkIIIs, while their recording options include Sennheiser’s MKH 416 and 8040 shotgun microphones along with 744 and 702 audio recorders from Sound Devices. Additionally, they used Nuendo 7, Waves Platinum and a range of FabFilter plug-ins to satisfy some of Halo Wars 2’s more experimental audio elements.

Rallying the Squad
Cooper goes on to describe the “great collaborative relationship” that Creative Assembly maintained with the other teams and individuals involved, including 343 Industries and Paul Lipson (vice president of creative services at Formosa

Interactive), who was “key in working with us on the audio vision.” “He’s an encyclopedia of Halo franchise audio knowledge,” Cooper recalls. “He was able to tell us how specific sounds were designed in previous Halo games, and also where our boundaries were in terms of divergence from certain core, signature sounds from the Halo universe.” Relying on dialogue to feed a lot of important information to the player meant that the dialogue system in Halo Wars 2 was huge, and took a lot of iterating to get into shape, as Cooper explains: “The system is very context-sensitive, so it was tricky finding the balance while prioritising sounds for the player to hear and mixing that on the fly.” Another thing that took a lot of iterating to get right for Halo Wars 2 was the reverb system. In terms of environmental sound, Cooper and the team spent a long time brainstorming how to represent reflections in the environment: “We ended up going for a three-tiered system that generates early reflections from the reverb zones around the larger structures in the environment,” he reveals. “That’s all knitted together with a global combination reverb that helps it all gel, and makes the action feel immersive and grounded within the environment.” Creative Assembly also collaborated closely with the external music teams, including a company called Finishing Move, who specialise in electronic sound design composition, and who came together with composer Gordy Haab – who creates big cinematic orchestral pieces – to create “an awesome hybrid soundtrack that is very Halo, but also quite unique to this game in particular.” With video games now reaching more mature audiences, many of whom have grown up with them from a young age, players are naturally becoming more aware and demanding of certain game elements, including sound design.
Cooper recognises better than most the importance of creating big cinematic mixes for video games, which is why so many would agree that the audio for Halo Wars 2 delivers on its promise to be as immersive and explosive as ever.




PRODUCING IMMERSIVE AUDIO FOR VR
Dr. Henney Oh, co-founder and CEO of spatial audio specialist G’Audio Lab, talks us through the processes of capturing, mixing and rendering sound for virtual reality and 360-degree video applications.

Dr. Henney Oh

The premise of VR and 360-degree video is to simulate an alternate reality. For this to be truly immersive, it needs cogent sound to match the visuals. Humans rely heavily on sound cues to inform us of our environment, which is why immersive graphics need equally immersive 3D audio that replicates the natural listening experience. The challenge becomes how to draw the viewer’s attention to a specific point when there is continuous imagery in every direction, and sound cues can help with that. The key to creating realistic audio for this is to synchronise sounds according to the user’s head orientation and view in real time. This helps replicate the actual human hearing mechanism, which makes the listening experience more realistic.




Producing truly immersive sound requires several steps. First, you must capture the audio signals, then mix the signals and finally render the sound for the listener.

Capture
To replicate the natural listening experience, the use of two audio signals – Ambisonics and object – is essential. Ambisonics is a technique that employs a spherical microphone to capture a sound field in all directions, including above and below the listener. This requires placing a soundfield microphone (also known as an Ambisonics or 360 microphone) near the position where you intend the listener to be. Keep in mind that these microphones will record a full sphere of sound at the position of the microphone, so be strategic with where

you place them. It’s also important that your mic is not visible in the scene, so we encourage placing the microphone directly below the 360 camera. In addition to capturing audio from a soundfield microphone, content creators also need to acquire sounds from each individual object as a mono source. This enables you to attach higher fidelity sounds to objects as they move through the scene for added control and flexibility. With this object-based audio technique, you can control the sound attributed to each object in the scene and adjust those sounds depending on the user’s view. Capturing mono sound can also be tricky, because the traditional use of a boom microphone to capture it does not work in VR. In synchronised 360 sound recording, there is no space to place the boom microphone, so it is

helpful to place a lavalier microphone directly on the individual (hidden underneath apparel).
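Once captured as a mono source, an object can later be positioned in the same spherical field the soundfield microphone records. A minimal sketch of first-order ambisonic (B-format) panning follows – assuming the AmbiX convention (ACN channel order, SN3D normalisation), which your particular tools may or may not use:

```python
import math

def foa_encode(sample, azimuth_deg, elevation_deg):
    """Pan one mono sample into first-order ambisonics.

    Assumes AmbiX: ACN channel order (W, Y, Z, X), SN3D normalisation.
    Azimuth is counter-clockwise from straight ahead; elevation is up
    from the horizon.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample                                  # omnidirectional component
    y = sample * math.sin(az) * math.cos(el)    # left-right
    z = sample * math.sin(el)                   # up-down
    x = sample * math.cos(az) * math.cos(el)    # front-back
    return w, y, z, x
```

A source panned hard left, for example, lands entirely in W and Y; one straight ahead lands in W and X.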

Mixing
Previously, a sound mix was typically shaped by its target loudspeaker layout, but today’s object-based audio techniques allow individual objects on screen, like a dinosaur, to be free from the reproduction layout, the user’s listening point and even the sonic space. This is possible because you can send all of the object tracks to the player side. As with traditional mixing, you might need to add extra Foley, ADR and background music tracks to complete the sonic scene. Combining object, Ambisonics and channel signals (like traditional 2.0, if needed) and balancing them plays an important role in mixing and mastering 3D audio.

If you captured the object and the Ambisonics together, be aware that the Ambisonics signal already contains the objects. You may need an additional process to remove or balance those object tracks to ensure they aren’t counted twice. Traditionally, you only needed to synchronise your sound with your image in the time domain, which is referred to as lip-synchronisation. But with cinematic VR and 360 video, you also need to work on spatial synchronisation between the sound and image. For example, when producing traditional cinematic audio, you only need to look at an actor’s mouth and play the sound according to its movements. With VR and 360 video content, you not only need to consider the actor’s mouth movements but also carefully place the sound according to the position of the actor on the 360 screen, which requires a new and more dedicated sound mastering tool. Specifically, it’s

now important to use a tool that lets you edit as you watch, so that while watching the visuals, you can match the sounds accordingly in both space and time. There are many special processes needed on top of the conventional mixing workflow, requiring a dedicated authoring tool to work properly and conveniently.
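The head-orientation synchronisation that playback ultimately requires can be illustrated with the simplest case: counter-rotating a first-order ambisonic frame by the listener's head yaw, so sources stay anchored to the scene as the head turns. This is a sketch only – real renderers also handle pitch and roll, and typically higher ambisonic orders:

```python
import math

def rotate_foa_yaw(w, y, z, x, head_yaw_deg):
    """Counter-rotate a first-order ambisonic frame (AmbiX order W, Y, Z, X)
    by the listener's head yaw, before binaural decoding.

    A positive head_yaw_deg means the head turned left, so the whole sound
    field is rotated right by the same amount. W and Z are unaffected by yaw.
    """
    a = math.radians(-head_yaw_deg)          # counter-rotation angle
    y2 = y * math.cos(a) + x * math.sin(a)   # standard 2D rotation of the
    x2 = -y * math.sin(a) + x * math.cos(a)  # horizontal (X, Y) components
    return w, y2, z, x2
```

Turning the head 90 degrees to the left moves a source that was straight ahead around to the listener's right, as it should in the real world.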

Rendering
Historically, content creators relied on DAWs for everything from mixing to mastering into a target layout, so the output of a DAW was a pre-rendered sound bed. With VR, however, sound rendering must take place on the listener’s end – in this case the actual VR hardware, most frequently a head-mounted display (HMD). All of the possible scenarios have to be processed on the HMD, which can require a huge amount of additional processing power. As such, while still maintaining high quality, minimising latency as well as the amount

of computation power needed when rendering is key. A benefit of the renderer being on the listener’s end is the possibility of unprecedented levels of personalisation. Keep in mind that with a conventional pre-rendered bed, you can’t vary the rendering for each user. However, personalisation is still a long way out, as measuring an individual’s personalised HRTF is still an expensive and time-consuming process. In addition to the special capturing and mixing techniques, we believe high-quality VR rendering is the most crucial enabler for completing the VR audio experience. One way to improve this experience is to use the same binaural rendering engine in the DAW and on the player side. This requires a type of end-to-end solution like the one we’ve developed at G’Audio Lab. Our Sol player for cinematic VR and 360 video allows for real-time rendering by reflecting the HMD user’s head orientation and interactive motions with real-time

calculations of relative sound source positions. Sol leverages G’Audio’s binaural rendering technology, which was adopted into the Moving Picture Experts Group’s next-generation international standard, MPEG-H 3D Audio, because it requires minimal processing power while delivering the best audio quality possible. VR content can thus be played as intended without being degraded by hardware limitations. When compared to the solutions available for creating VR video, tools for producing truly immersive sound still have some catching up to do. However, there’s been an overarching shift in the industry to focus on audio, and I’m confident we’ll see huge strides made in the months to come. Dr. Henney Oh is co-founder and CEO of G’Audio Lab, a spatial audio company dedicated to developing immersive and extensive interactive 3D audio production software solutions for creative professionals.






With a plethora of listening options available on the market, we assemble another diverse group of sound specialists and ask them what they’re wearing at the moment and why.

For almost every item in their signal chain, audio professionals tend to like as broad a selection as possible, and there are good reasons for that of course. A studio engineer who regularly caters for a variety of different musical styles, for example, will in most cases have an array of effects hardware and/or plug-in bundles in his/her arsenal, in the same way that a sound recordist might like to have a choice of microphones to hand depending on the situation or location they find themselves in.

But there is one type of equipment that we often choose to stick with no matter what we’re faced with, and that’s our headphones. For many of us, we take our cans everywhere, and they become our closest companions during those marathon mixing sessions, so you could say that picking the right pair is up there with the most important decisions you can make. It can be a tricky process, and there are more factors to consider than you might at first think – going for a closed or open-back model being just one of them.

With hundreds of models on the market, each best suited for their particular jobs, well-informed end users from music producers to broadcast engineers are more often than not savvy about their listening options, whether they utilise a specific design or safeguard features for comfort and protected hearing during long studio sessions, a microphone for transmitting high-quality audio in broadcast situations, or simply a high power output for accurate reproduction of audio in field recording projects. As more and more users fly their faithful pieces of gear with them to far-reaching recording locations, durability is also a big consideration, with the most robust models finding popularity due to their hard-wearing nature. Whatever the requirement, there will always be an option on the market that will suit an individual’s needs. So in our second End User Focus, we’re asking three very different pros to tell us what they’ve gone for and why. Did it come down solely to sound quality, or was comfort and durability an important selling point? Perhaps a combination of all of these things? We put these questions to another diverse trio of sound creatives, and this is what they told us…

Sennheiser HMD 26

Simon Bishop
Sound recordist Simon Bishop took an interest in the Sennheiser HMD 26 as soon as it became available, after previously coming to the conclusion that his next purchase would be a headset-style pair. “The built-in boom microphone has been a game changer for me,” he reveals. “The recent trend in TV drama is to shoot with two or more cameras. It is also quite popular to shoot with fewer rehearsals, so I find myself wanting to communicate with my boomswinger(s) and assistant(s) more frequently than ever before. It is really useful to be able to whisper in their ears occasionally in the middle of a shot. The HMD 26s sit well on my ears, as opposed to around them. The ‘seal’ is really good, so they keep out sounds from the rest of the world, leaving me to concentrate on what I am trying to record. Even when working in a noisy or busy control room, I can be in my own little audio world once the ‘phones are on my head.” Bishop says there are two important things that he looks for in his headphones. The first is comfort, and his comments above show that he feels the HMD 26 ticks that particular box. The second is reliability: “I spent many years ‘coping’ with a particular model of headphones and they sounded great, but the cables were forever going intermittent or one-eared,” he recalls. “My headphones have to sound honest, true, accurate, natural and neutral, whilst not being fatiguing to listen to.”

Key Features
• Lightweight with extra soft cushions for ‘excellent wearing comfort’
• Accurate and linear sound reproduction for radio and TV applications
• Microphone provides audio transmission in broadcast quality
• ActiveGard (on/off switch) for protected hearing
• Swivelling ear cap for single-sided listening


Audio-Technica ATH-M70x

Chiara Luzzana
Chiara Luzzana is a composer and sound designer whose most recent project, Sound of the City, has seen her tasked with taking the sounds of everyday life that often go unnoticed and transforming them into music. “For this reason I have an absolute need for closed headphones, with a clear and transparent response, as I need to feel the true sound without colouration,” she explains.

“For me it’s essential that the headphones are comfortable; I have tried every type of headphone, and every time, after a couple of hours, I began to feel a nagging pain in the ear. These [Audio-Technica’s ATH-M70x] are the only headphones that instead wrap around my ears so softly. I might even wear them to sleep!” As well as clarity and comfort, Luzzana says she was looking for headphones that she could use for everything from recording to mixing and mastering, but also live performances and DJing. “I travel a lot for my project, and the ATH-M70x’s have become my travelling companion in each situation. They allow me to discover the hidden sides of the cities, because the sound is so clear that I can recognise any shade of frequency and the real ‘voice’ of the places. With these headphones I have heard sounds that I never found with other models.”

Key Features
• Closed-back design
• 45mm driver diameter
• Frequency response: 5Hz-40kHz
• Maximum input power: 2,000mW at 1kHz
• Sensitivity: 97dB


Steve Pageot
Award-winning producer/engineer Steve Pageot says he looks for many attributes when it comes to headphones: durability, weight, clarity, transparency, precision and “most importantly the reproduction of the recordings.” It was for these reasons that he chose the K240 MkII headphones from AKG. “I don’t see myself wearing other headphones to record in the studio,” he states. “I have three pairs in my home studio; once you put them on you can spend an entire session without ear fatigue, and sonically they are engineered for the safety of your ears.”

Pageot, who won a Grammy for his work on the Aretha Franklin track ‘Wonderful’, has used the K240 MkIIs on countless projects, including music for TV, film scoring and, most notably, the song ‘Helpless’ from ‘The Hamilton Mixtape’. “This was an important project for me to secure, so all the features I mentioned came in handy while crafting the music in the studio,” notes Pageot. The album debuted at #1 on the Billboard 200 Albums Chart in the US and, according to Atlantic Records’ Riggs Morales, is on its way to Gold (500,000 units) status.

Key Features
• Over-ear design for comfort during long sessions
• Semi-open technology for solid bass and airy highs
• Patented Varimotion 30mm XXL transducer
• Self-adjusting headband
• 104dB SPL/V sensitivity





Stephen Bennett investigates how the software for producers, composers and mixing engineers has moved on since its “black-and-white glory” days.

A couple of months ago, a colleague called me over to his house as he had something he’d inherited that he thought I might be interested in seeing. It turned out to be an immaculate Atari 1040ST computer, complete with 1MB RAM, a 720KB floppy drive and a high-resolution SM-124 monochrome monitor. He also had a box of contemporary software so, after a few minutes of dongle connecting, whirring disks and an overwhelming wave of nostalgia, the Arrange page of Steinberg’s Cubase version 1 was onscreen in all its black-and-white glory. In today’s mature Digital Audio Workstation (DAW) market, where the software capabilities and features of different companies’ products leapfrog each other version by version, it’s hard




to recall how revolutionary this first version of Cubase was, with its real-time MIDI recording and editing and that now-ubiquitous Arrange page. We are two years from the 30th anniversary of the launch of Steinberg’s sophomore DAW and, I assume, the company is planning something special in celebration. In the meantime, Steinberg has released the subject of this review, Cubase version 9. If you wonder why we’re only on such a low number after so many years, don’t worry – the numbering system (across Atari, Mac and Windows versions) has been all over the place for decades, in a way that Bill Gates would approve of. If you are a Cubase novice, you may want to look at the Version 8 review in the February 2015 issue of Audio Media International for an overview of the program before you go any further.

The specific version under scrutiny here is the Pro version – there are also Cubase Artist and Cubase Elements, which sport fewer features at a lower cost, so you don’t end up having to pay extra for stuff you’ll never use. Cubase 9 runs on both OS X and Windows-based platforms and uses a serial number and USB eLicenser dongle-based authorisation system, which is easy and quick to use, and makes the software easily portable between machines. Steinberg now supplies a ‘universal’ installer that unlocks the correct version of the software purchased, which should make for a seamless and concurrent upgrade path for all versions of Cubase in future. For some, the headline difference between this and earlier versions of Cubase is that the program no longer supports 32-bit plug-ins.

Key Features
• Award-winning 32-bit floating-point Steinberg audio engine with up to 192kHz, 5.1 surround, flexible routing and full automatic delay compensation
• Unlimited audio, instrument and MIDI tracks and up to 256 physical inputs and outputs
• Complete suite of more than 90 high-end audio and MIDI VST effect processors
• Compositional tools like Chord Track, Chord Pads and advanced Chord Assistant
• Comprehensive set of over eight instruments with 3,000-plus sounds, including HALion Sonic SE 2 and Groove Agent SE 4
• RRP: €579

How important this is for you depends on whether your favourite software is available in 64-bit format – most major

players now are – but that’s no help if the company that developed your essential VST compressor went out of business two Cubase versions ago. Version 9 now scans all plug-ins to see if they conform fully to VST 2 or 3 standards via the sci-fi-sounding Plug-in Sentinel – although the ones that fail the test can be re-enabled at the user’s own risk. Both these features, Steinberg says, should improve the stability of the program.

Let’s look at the layout
The main window of Cubase (now called the Project window) is where the changes in version 9 are most apparent. The Transport Zone is now located at the bottom of the screen and there have been changes to the main toolbar at the top as well. Steinberg want you to think of various areas of the program as Zones, so the racks and inspector are now the Right and Left Zones. This change is not purely semantic in nature, as we shall see, but it makes for an initially confusing experience for the seasoned Cubase user – though I’m sure we’ll get used to it in time. All toolbars are toggle-able in visibility as before, and the various Zones are resizable. The reason for the Zone nomenclature is the appearance of a new feature – the Lower Zone – which displays, appropriately enough, on the lower part of the screen. The resizable Lower Zone features a series of tabs – all available via key commands – that display various sections of Cubase’s interface. The upshot of this is that most of Cubase’s day-to-day editing functions can now be accessed from a single screen. The MixConsole tab can display a choice of faders, inserts and sends, while the Editor tab can show any of the Cubase editors apart from the List editor. What data is displayed here depends on what is selected in the Project window, and you can still get Cubase to open separate editor windows if you prefer. It’s obvious that the introduction of the Lower Zone is aimed at those working on laptops, which is probably now the majority of users. The other two Lower Zone editing tabs cover Chord Pads, Cubase’s creative MIDI arrangement tool, and the Sampler track – a new feature. Dragging audio to a Sampler track creates a MIDI-controllable sample that is mapped to

MIDI note numbers – shades of Logic’s much-missed Touch Tracks here. You can perform basic editing of the audio on the Sampler track and apply the audio warp feature, which affects sample speed, pitch and tempo synchronisation. There are also filters and envelope generators – in fact, everything you might find in a sophisticated soft-sampler. If that’s not enough you can, with a click, transfer the sample to HALion, Steinberg’s standalone VST sampler, if you have it installed. It’s quick and intuitive to use and makes incorporating sampler-type


effects into your music so simple that I may start using them myself. It’s going to be a godsend for sound effects people.
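The note-to-pitch mapping behind a sampler track like this is simple: playing a sample n semitones above its root means resampling it at a ratio of 2^(n/12). A sketch of that mapping (the root-note default is an illustrative assumption, not Cubase's):

```python
def playback_rate(midi_note, root_note=60):
    """Resampling ratio for triggering one sample across MIDI notes - the
    classic way a sampler repitches audio. root_note is the note at which
    the sample plays unaltered (a hypothetical default of middle C).
    """
    return 2.0 ** ((midi_note - root_note) / 12.0)

rate_octave_up = playback_rate(72)    # one octave above the root: 2x speed
rate_octave_down = playback_rate(48)  # one octave below: half speed
```

Warp-style features differ in that they decouple speed from pitch, but the plain resampling ratio above is the baseline behaviour.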

Quick fixes
Most DAWs have sophisticated undo/redo features, but not always on the mixer itself, which can be a real pain if you move a fader accidentally or just want to try out some small mix changes. Cubase’s Mixer now has its own undo/history feature – similar to, but separate from, the main Cubase one – and most of

the parameters that you’d want to undo when mixing are covered, including volume, pan and plug-in modifications. Cubase 9 allows you to have multiple Marker tracks, which you may find very useful if you’re swapping between time- and beat-based editing, or just like to have different markers set up for different tasks. The ‘Link Project and Lower Zone Editor Cursors’ command links the zoom factors and horizontal scroll position of the Lower Zone and the Project window, while VST Connect SE 4 is Steinberg’s collaboration application that allows Cubase users to work with remote partners. Aside from these workflow differences, Cubase 9 has a couple of new plug-ins and some parameter and visual tweaks to old favourites, including Autopan, which gains extra sync modes, shapes and panorama settings. Frequency is a new EQ that can work in stereo, dual mono or Mid/Side, and features eight parametric bands that can be individually set to Linear Phase mode. It’s precise in effect and sounds excellent, with a useful on-screen keyboard that displays band frequencies as musical notes. Maximizer does what it says on the tin – if you really want to crush the bejeezus out of your mixes, this will do it just as well as most of the third-party competition, but it’s also useful for levelling out individual tracks without doing too much damage. There have also been updates to the Score editor, including a rhythmic editor mode for creating condensed lead sheets, and some other minor tweaks and bug fixes.
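Displaying EQ band frequencies as musical notes, as Frequency's on-screen keyboard does, is a straightforward equal-temperament conversion (A4 = MIDI note 69 = 440 Hz). A sketch of the mapping, not Steinberg's code:

```python
import math

def note_to_freq(midi_note, a4_hz=440.0):
    """Equal-tempered frequency of a MIDI note, with A4 = note 69."""
    return a4_hz * 2.0 ** ((midi_note - 69) / 12.0)

def freq_to_note(freq_hz, a4_hz=440.0):
    """Nearest MIDI note for a frequency - e.g. to label an EQ band."""
    return round(69 + 12 * math.log2(freq_hz / a4_hz))

# An EQ band parked at ~261.6 Hz would display as middle C (MIDI note 60)
band_note = freq_to_note(261.63)
```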

Conclusion
Cubase has come a long way since the heady days of version 1. Often at the forefront of new DAW technology, with innovations such as native audio processing and Virtual Studio Technology (VST), the latest version is, in many ways, playing catch-up with competitors – specifically Logic Pro, which has featured plug-in compatibility checking and single-window editing for some time. But that’s not to disparage what Steinberg has achieved here. Version 9 contains some major workflow changes and improvements while keeping long-term users’ preferences on board and, unlike Apple’s flagship DAW, you’re not limited to a single, increasingly expensive computer brand and operating system. The black-and-white-Cubase-using me is continuously amazed by the ultra-powerful colour-Cubase-using me, and version 9 does nothing to dispel that feeling. The upgrade is a no-brainer for current Cubase users and a sensible entry choice for newcomers who need a fully featured multi-platform solution.

The Reviewer Stephen Bennett has been involved in music production for over 30 years. Based in Norwich he splits his time between writing books and articles on music technology, recording and touring, and lecturing at the University of East Anglia.





Simon Allen discovers why the manufacturer is so confident of the quality of its new flagship Thunderbolt, Pro Tools HD and Dante-compatible interface.

There is no hiding from the fact that this is probably the best interface Focusrite has ever made. There’s only one question that we need to answer here: is this the ultimate audio interface to date? Admittedly, there are features of this product that won’t suit everyone. However, if money were no object and sound quality was the biggest consideration, has the Red 8Pre really set a new standard?


The Red Sound
Due to the natural progression of audio interface development, manufacturers have been focused on designing the "cleanest" and "flattest" sounding preamps and converters they possibly can. There is of course still merit in these achievements, particularly for those who are using additional hardware to 'colour' their sound to taste. The original job of an audio interface was simply that – to interface your front end with your Digital Audio Workstation. However – and this is a big however for me – for some reason this concept has hung around like a bad smell. This apparent 'correct' method of working should perhaps now be challenged.

Thanks to the advancements in technology and the dramatic price drops of pro-audio equipment, anyone can set up a recording studio of sorts at home. While this has opened up creativity in new ways, it has in fact limited some elements of creativity at the point of capture. We all seem to focus on keeping the signal on 'tape' as clean as possible, then expect our plug-ins to add all the magic. What happened to tracking a great mic through a 1073 with a TLA-100A inserted, which then hardly requires any further processing in the mix?


“What I admire so much about the approach Focusrite has taken with these products is the combination of crystal clear and flat converters with low-noise preamps.” Simon Allen

I’m not suggesting we need valvedriven interfaces, but I do wonder if the chase for purity has gone too far. The trouble is, those that are using an interface, often as their only hardware, are missing out on the exciting experience a large-format studio

offers. For me, this is why the Red interface range by Focusrite is such a key product line in today’s market. These interfaces, within this new era of preamp and converter technology, began with the now well respected RedNet series, followed by the Clarett interfaces and of course the Red 4Pre. The Red 8Pre follows in the footsteps of the Red 4Pre, which presented even more impressive EIN and total gain values. What I admire so much about the approach Focusrite has taken with these latest Red products is the combination of crystal clear and flat converters with low-noise preamps, while keeping some colouration in the pres. They aren’t hugely coloured in their standard mode, making them suitable for those sensitive scenarios, such as classical music recordings, but they are still pleasing to listen to somehow. Then in their ‘AIR’ mode, which adds exactly what the term suggests via some additional circuitry (not DSP trickery), they become really exciting to work with. I really enjoy using this latest generation of Focusrite preamp, of which the Red 8Pre is currently the pinnacle.

Connectivity
The Red 8Pre's party piece is of course its audio I/O capabilities. Not only does this single-rack-mount unit offer an impressive 64 channels of inputs and outputs, but this total comprises a very flexible array of connections. Digitally there are dual ADAT connections for a total of 16 channels over lightpipe (at 48kHz), as well as S/PDIF and Dante. Note, there aren't any AES inputs or outputs, which might be a consideration for some applications; Focusrite presumably perceives the Dante connections to trump any need for AES. On the analogue side, there are monitor outputs, two independent stereo headphone outputs, 16 line inputs, 16 line outputs and of course the eight Red preamps. The first two channels also offer direct instrument inputs, making this unit suitable for both working from a studio rack and mobile recording use.

Key Features
„ Up to 64 x 64 channel count
„ Thunderbolt and Pro Tools HD compatible, with Dante audio connectivity
„ Eight mic pres with unique 'Air' effect
„ Round-trip latency as low as 1.67ms
„ 1RU rackmount unit
RRP: £3,199.99

However much I enjoy the sound this unit delivers and respect the

design approach, there are a couple of issues I have with the Red 8Pre. The first is with the connectivity for the eight preamps, which is achieved via a D-Sub. This means you need a break-out cable to XLR, even if you're on location recording a single microphone. With their Clarett range, Focusrite offer an 8Pre and an 8PreX, with the latter being a 2U unit. Why they didn't apply the same principle to the Red 4Pre and Red 8Pre respectively is, I think, a shame. If you seriously need the channel count this thing offers, then I doubt you'd be too bothered by the increase in size if it were easier to use.

Interfacing
The last major talking point for the Red 8Pre is its variety of interfacing options. There are dual Thunderbolt ports, which will work with any Mac that has a Thunderbolt or Thunderbolt 2 port. While this restricts the interface to Macs only, the benefit of round-trip latency as low as 1.67ms speaks for itself. I'm really pleased to see dual Thunderbolt ports to allow daisy-chaining, and that a 2m Thunderbolt cable is actually included with the unit – a rarity for Thunderbolt devices in general.

Alternatively, you can connect the Red 8Pre to a Pro Tools HD system via two mini-DigiLink cables. This provides Pro Tools HD users with an exciting alternative to Avid's own hardware, particularly now that Avid has separated the software from the hardware with a separate DigiLink I/O License. The only consideration is that the Focusrite Control app for changing settings on the unit only works over Thunderbolt. If you don't have a Thunderbolt port on your Pro Tools HD system, all settings would need to be made via a different machine or from the front panel of the device.

Finally, as presented for the first time with the Red 4Pre, the Red 8Pre offers dual Dante connections. This is a really exciting move for an interface to

offer, adding more possibilities even for PC users. However, there is another issue here. Just as with the Pro Tools HD connectivity, you can’t control the unit over Dante. I’m hoping Focusrite will add the Red 4Pre and Red 8Pre to their RedNet control app. [Ed: Focusrite says that this is indeed part of their plans]. This will be really exciting, even enabling these amazing quality preamps to be used in the live sound world as you would with a RedNet unit.

Conclusion
There are several reasons why this unit is so strong on paper and why it carries the price tag it does. However, the sound you can achieve with the preamps is so wonderful that this should be as significant as the Red 8Pre's chart-topping connectivity and feature set. There will be a few inconveniences for some users thanks to the control only being available over Thunderbolt, but there are a number of hurdles the Red 8Pre has already overcome. I like that Focusrite has stayed firm on delivering Thunderbolt connectivity, due to the proven latency benefits. The on-board screens and user interface are very good, hopefully with future control solutions to be announced. This is an all-singing, all-dancing interface at the very top of the market. The only aspect that Focusrite doesn't cover is built-in DSP processing, which is a shame, but the flexibility and sonics on offer here are easily worth the sizeable investment.

The Reviewer
Simon Allen is a freelance, internationally recognised engineer/producer and pro-audio professional with nearly two decades of experience. Working mostly in music, his reputation as a mix engineer continues to reach new heights.




MOTU 624


Alistair McGhee gets his head around the new Thunderbolt/USB3 AVB interface, set up alongside an accompanying M64 MADI unit.

In these times of austerity, value for money (VFM for short) has become almost all-important in our purchasing decisions. So how might we judge the latest offerings from MOTU against the VFM yardstick? I suggest there are two main factors in our choice of audio interfaces: the basic I/O count, and what you might call the extras – all the other goodies.

Two of MOTU's latest designs, the 624 (Thunderbolt/USB3) and M64 (MADI), come in the standard MOTU half-rack package. The 624 has pots for the front-panel guitar inputs and also for the two mic inputs, which additionally have separate hardware controls for pad and phantom power. The M64 has no audio controls beyond the headphone pot, but there is full control via the web interface, and both devices can recall factory and user presets from the front panel.

The new MOTU 624 offers eight analogue inputs: two mic, two Hi-Z for guitars and the like, and four balanced/unbalanced line inputs. You also get eight digital inputs on ADAT, four at high sample rates. Outputs offered are six balanced/unbalanced analogue line outs, plus a stereo headphone feed and eight outputs on ADAT. That's a tidy amount of I/O, but I think it is the other half of the proposition that really strengthens the 624's hand, and the big players here are DSP and AVB, now often seen as AVB/TSN (Time-Sensitive Networking).

The plus for an AVB-equipped device like the 624 is that down one Ethernet cable you get lots of channels – 512 in a MOTU network (at 1x sample rate) and 64 channels of low-latency audio in and out of a 624. The MOTU AVB range now offers everything from a fully rigged 16-in/8-out stage box, via all manner of analogue and digital options, through to the M64, which offers 128 channels of MADI input.




Up until the arrival of AVB/TSN, the classic way to expand your channel count has been ADAT or MADI – but look at the AVB advantages. It offers huge channel expansion possibilities, runs on standard Ethernet cables up to 100m and is fully network capable. Build yourself an AVB network and you can aggregate multiple devices, routing audio in and out of the network at will. To build a proper network you will need a switch – probably MOTU's AVB switch, as you can't use an ordinary Ethernet switch. And once your other MOTU AVB units are connected, you have full control over them from a web-based app, which can run anywhere on the network the interface is connected to, or over the USB/Thunderbolt connection.

The other big boost is the mixing power. Each AVB interface carries enough DSP to mix and process 48 channels of audio, and given you have up to 64 extra inputs, this mixer is real-world relevant. It includes some sweet EQ, plus gates and compressors on every channel. There's even a basic reverb for your monitor mix.

In Use
I began by plugging some channels of MADI mic preamp into the M64; I then piped that over AVB into the 624, which in turn was connected by USB (or Thunderbolt) to my MacBook Air or a Windows 10 machine. If I had enough sources, I could now take my pick from up to 128 MADI channels from the M64 – 64 over coax and 64 over optical – plus the 16 inputs of the 624.

MOTU specs the 624, with USB3 and Thunderbolt on board, as delivering 128 channels at 44.1/48kHz and up to 64 channels all the way up to 192kHz. The M64 is a digital-only device, but MOTU has worked hard on the 624's analogue audio. It features the Sabre32 DAC, the same as used in its bigger brother, the 1248, and offers the same increased signal-to-noise on the analogue outputs. I tried the 624 mic amps against some big-ticket alternatives and found the MOTU preamps easily holding their own.

Put AVB and MADI together and you get some serious reach. The 624 is connected to the computer by a 2m Thunderbolt lead, but the M64 can be up to 100m away down the Cat5, and the MADI kit another 100m away – add a switch and we have another 100m of cable. This is massive flexibility and enables deployment way beyond the standard duties of sound cards.

Control of your MOTU is down to a web-based app that discovers your device or devices, integrates their connected AVB I/O and presents it to you in a browser window for routing and mixing. The downside? Well, the massive flexibility means you need to be on your toes in terms of setup. Remember, with AVB on board you have potentially 78 inputs and outputs on your 624, and that's a big matrix. And then there's the 48-channel mixer to think about. Fortunately, MOTU has included lots of ways of simplifying what you see, and the signal-present metering in the routing app is a big help.

Key Features
„ 16 x 16 Thunderbolt/USB3 audio interface
„ Renowned ESS Sabre32 Ultra DAC technology
„ Mobile half-rack enclosure
„ Round-trip latency as low as 1.6ms at 96kHz over Thunderbolt, and 1.9ms over USB
„ Console-style DSP mixing with 48 channels and 12 busses
RRP: £799 (624) & £599 (M64)

With 128 MADI inputs routable to 128 MADI outputs plus the 64 channels of AVB, the M64 is a great way to get MADI into your computer, although I do wish it had gone USB3, similar to the 624. Having optical and coax inputs on board, plus an optical output and mirrored coax outputs, means that the M64 is a powerful MADI problem solver. And out of the box the AVB devices come with factory presets to help you start solving a variety of problems. The 624 opens the door to the AVB world and gets you a seat at MOTU's top audio quality table at a more affordable price than has been possible up to now.
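Round-trip figures like the 1.6ms quoted above can be sanity-checked with simple arithmetic: latency is just the samples buffered on the way in and out, divided by the sample rate. A rough sketch of that calculation (the 32-sample buffer and per-direction converter figures below are illustrative assumptions, not MOTU's published internals):

```python
# Rough round-trip latency estimate for an audio interface:
# buffered samples in each direction divided by the sample rate.
# Buffer and converter figures here are illustrative assumptions,
# not MOTU's specifications.

def round_trip_ms(buffer_samples, sample_rate_hz, converter_samples=0):
    """Total in-plus-out latency in milliseconds."""
    total_samples = 2 * (buffer_samples + converter_samples)
    return 1000.0 * total_samples / sample_rate_hz

if __name__ == "__main__":
    # At 96kHz, a hypothetical 32-sample I/O buffer plus ~45 samples
    # of AD/DA conversion per direction lands in the quoted ballpark:
    print(round(round_trip_ms(32, 96000, converter_samples=45), 2))
```

The same arithmetic shows why higher sample rates halve latency for a fixed buffer size, which is why the 96kHz figure is the one manufacturers tend to quote.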

The Reviewer
Alistair McGhee began audio life in hi-fi before joining the BBC as an audio engineer. After 10 years in radio and TV, he moved to production. When BBC Choice started, he pioneered personal digital production in television. Most recently, Alistair was assistant editor, BBC Radio Wales, and has been helping the UN with broadcast operations in Juba.




Time to see what Alan Branch makes of this new vocal-transforming processor.

Polyverse Music, in collaboration with electronica production/artist duo Infected Mushroom, has taken the magic of granular synthesis and developed it into every audio manipulation angle possible with its new Manipulator plug-in. Used either live or in the studio, Manipulator is a powerful melodic and texture-warping tool, ready to bend, twist and separate instruments or vocals.

The Manipulator GUI has a dark neon theme, well laid out and surely made with live performance in mind, as the colours and sections are easy to see at a glance. Three large main encoder-style algorithms marked Pitch, Harmonics and Alternator break down into subset controllers. Pitch is a granular pitch shifter with a separate Formant control and a +/- two octave sweep marked into semitones. Incoming monophonic audio can easily be pitched in real time, whilst the Formant is used to change the timbre. There is a tiny micro slider between the Pitch and Formant dials that would be easy to miss, but this is a valuable Smooth Grains control to help fill gaps that sound too separated at low pitches. It's important to remember that Manipulator has a pitch detection range – Hi, Mid and Low – much like Antares Auto-Tune; this helps it recognise what pitch the incoming audio is, so Low for bassy voices and Hi for sopranos. It could be any audio as long as it's monophonic.

The central Harmonics algorithm is offset with Ratio and FM dial controls, which enable you to shift around the harmonic order while adding frequency modulation. Polyverse calls this a "pitch tracking frequency shifter", with +/- four harmonic steps to change the level of the harmonics from one to another, set by the Ratio control, so the harmonic shifts can be changed into whole, half or variable steps, giving a lot of timbre choices. Added to this is the FM controller, which provides an amount of frequency modulation to the frequency shifter. Using this section is where some of the creative sound sculpting can reach into the extremes – robotic, growling voices and so on.

The Alternator changes the pitch intervals between grain cycles, and this is supplemented by an Octave control. Here we get another dimension to the sound by splitting the grain cycles tracking the pitch into alternative pitch intervals, so one goes up and the other down, and modulating with the Octave control will change the grain cycle length by up to eight octaves. The incoming pitch of the audio designates the grain cycle length, so lower notes have longer cycles. It's hard to describe this effect, but it's most useful when modulating in real time, so the grains are either building up or slowing down, like an extreme tremolo effect.

To help support these algorithms are three effects: Smear, Stereo and Detune. These are neatly featured underneath the main controller dials, and look like mini swipeable harp controllers. Smear is a fabulous glitch tool, sampling and looping incoming audio into the Manipulator; Stereo is a width tool for adjusting the soundscape output while retaining phase coherence for mono compatibility; and with the Detune function, five slow granular pitch shifters and delays help create a stereo synth/chorus effect.

Some of the real power of Manipulator comes in its MIDI functionality; incoming MIDI is set via four modes: Off, Mono, Poly and Gated – Mono being a single voice, Poly four voices, and Gated meaning Manipulator is only triggered when notes are played. There is also a simple Glide portamento control, as well as a MIDI-only Glide mode to trigger smoothing only when notes are playing. Underneath the main controller dials are four square modulator slots – Meta Knob, MIDI, Sequencer, Follower and an ADSR – which can be loaded with various modulators. These modulators can be slotted and mapped to multiple controls in Manipulator's main parameters. Each one is shown as five colour-coded dots around the main controls, separated into +/-25% segments. These modulators enable full synth-style control of Manipulator – Meta Knobs can be assigned to several parameters at once, for example simply raising pitch via a modwheel or controlling other modulators. The possibilities are huge with the modulators, all of which can be powered on or off.

Key Features
„ Create harmonies with up to four polyphonic voices
„ Ten different effects with 'endless' combinations
„ Extensive modulation capabilities
„ Real-time processing for live performance
„ Supports VST, AU and AAX plug-in formats
RRP: $149

In Use
The nature of Manipulator lends itself to experimentation, and I did struggle at first to find material it might fit; however, once I brought up a track that was still in the writing process, Manipulator came into its own, as creating new sounds and effects can be inspirational and inventive. It's the type of plug-in that can be used to make tiny pitch adjustments, create triggered pitched backing vocals and vocoder-style effects, right through to crazy transformations of any melodic audio, all in real time. I can see an EDM producer loving this, and a live artist could have incredible fun, but I can also see sound designers and creative music makers searching for something unique to add to their music-making toolset, as Manipulator is quite different to anything that's gone before it.
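The granular pitch shifting at the heart of Manipulator's Pitch control – chopping audio into short grains and reading each one back at a different rate – can be sketched in a few lines. This is a bare-bones illustration of the general technique (nearest-neighbour resampling, no crossfading, windowing or formant handling), and emphatically not Polyverse's algorithm:

```python
# Bare-bones granular pitch shift: split the signal into fixed-size
# grains, read through each grain faster or slower by the pitch
# ratio, and lay the grains back down at their original spacing so
# overall duration is preserved. General-technique sketch only.

def pitch_shift(samples, semitones, grain=256):
    ratio = 2.0 ** (semitones / 12.0)   # +12 semitones = 2x frequency
    out = []
    for start in range(0, len(samples), grain):
        g = samples[start:start + grain]
        # Nearest-neighbour resample: read through the grain 'ratio'
        # times faster (or slower), keeping the output grain length.
        out.extend(g[min(int(i * ratio), len(g) - 1)] for i in range(len(g)))
    return out

if __name__ == "__main__":
    import math
    sr = 8000
    tone = [math.sin(2 * math.pi * 110 * n / sr) for n in range(sr)]
    up = pitch_shift(tone, 12)         # one octave up
    print(len(up) == len(tone))        # duration preserved -> True
```

The audible gaps and repeats this naive version produces at extreme settings are exactly what controls like Smooth Grains exist to paper over.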

The Reviewer
Alan Branch is a freelance engineer/producer. His list of credits includes Jamiroquai, Beverley Knight, M People, Simply Red, Depeche Mode, Shed Seven, Sinéad O'Connor and Björk.



With his company now celebrating its 25th anniversary, Mark Thompson, founder of Funky Junk, one of Europe's leading suppliers of new and used recording equipment, chats to Adam Savage about how he's seen both his firm and the industry progress over the past quarter of a century.

Where did the idea for Funky Junk come from?
In the very early '90s I got involved in finding and supplying equipment for producers and engineers in the industry and finding my own equipment, hence Funky Junk started by accident rather than by design. The company grew out of creative projects, instead of somebody saying 'hey, I can make money out of selling black boxes with knobs on them.' When I first got involved in the industry, what amazed me was that if you wanted to buy second-hand equipment you went to a broker who basically had a list of photographs and phone numbers – they were like estate agents and would stand between the seller and the buyer and take commission. It just seemed illogical, because I came from a studio background and I thought that the equipment should be there, it should be serviced by an engineer, you should get a warranty with it, I wanted to try it and see it, and a lot of it was expensive. You wouldn't buy a second-hand car without taking it for a drive.

So what did you do differently?
At the outset, with whatever resources I had, I bought the equipment into stock and the first person I employed was an engineer to service it. Coincidentally, at exactly the same time I met a guy called Mike Nehra, who also came from a music and production background in Detroit and had come up with exactly the same concept, and we became very close friends. He has a company called Vintage King, and we changed the way that second-hand equipment was sold simultaneously in the States and the UK. The business just exploded – it was a service that the industry wanted.

When did you first start to make significant changes to your equipment offering?
In the mid-'90s we started to get more and more involved with multi-track machines because of course back then it was the pre-digital age. In 1996 I sold 54 multi-track recorders; I was driving all over Europe to buy the machines, servicing them, doing them up and shipping them to the States.

When did you notice the next technological shift?
We began to see the initial growth of a dramatic advance in digital recording in the early 2000s.

What did you think of digital at first? Was it something that you took an immediate interest in?
The digital revolution was not merely a technological revolution; it was a creative revolution, and I didn't feel too easy about that. I felt the technology was moving so quickly that I was loath to make a heavy investment in something that might be redundant in a year or 18 months' time. We avoided digital until I felt that it had stabilised, grown up and reached a point where it made sense. Probably if we'd been more into film and post-production we'd have got involved earlier.

You must have found yourselves having to frequently adapt to new industry developments over the past 25 years?
Funky Junk has probably reinvented itself half a dozen times as changes have happened. It's been my job to not just monitor but anticipate changes that happen. We started to sell new equipment in the mid-'90s. Up until then, if you wanted a Lexicon you went to the Lexicon distributor; if you wanted an Amek you went to Amek. We changed the blueprint because our philosophy was to sell the client what was best for them, rather than just selling them what we had.

When did you decide it was time to expand internationally?
In 1997-1998 we opened up in Europe – Italy, France and Sweden; Spain came later – which I think was pretty unique for a business of this size. I was doing a lot of business in those countries, and although people said there wasn't a demand for pro audio in Italy, I couldn't see why not – musically Italy is a very advanced country and we were getting plenty of enquiries from there. Funky Junk Italy has now become extremely successful, doing an awful lot of broadcast and post-production as well as music.

Are those branches all run in a similar way to the UK operation, then, or do they all have their own individual way of doing things?
One thing I've always felt very strongly is that they're all different markets and I have to trust the people who run those companies to understand their own client base. The companies are not just mirror images of Funky Junk in London; they have their own identity and approach.

Finally, how important is it to have a passion for vintage gear like you do, rather than just seeing it as a business opportunity?
Nobody at Funky Junk comes from a sales background; everyone comes from a musical background – even the techs got involved with the industry because of their love for music. The ethos of the company has always been that the stuff we sell is a means to an end, not an end in itself – tools for making music.

Keep an eye on the Audio Media International website in the coming weeks for an extended online version of our interview with Thompson.

AMI April 2017 Digital Edition  