
‘After Hours’ With The Weeknd—In Stereo and Atmos


Chart-Topping Stereo Mix Gets Full Atmos Treatment

By Steve Harvey

In 2019, UMG released The Beatles’ Abbey Road and Sgt. Pepper’s Lonely Hearts Club Band, two of the most popular albums from one of the most popular bands, mixed in Dolby Atmos by Giles Martin. The labels have also released Atmos versions of back-catalog albums by Kraftwerk, INXS and R.E.M. But for Dolby Atmos Music to hit critical mass, it’s surely going to require contemporary, chart-topping artists to release current albums in the immersive format.

That’s beginning to happen. Late last year, Alicia Keys previewed “Show Me Love,” the first single from her upcoming album, Alicia, mixed by George Massenburg in Atmos (Manny Marroquin mixed the original stereo version). Now comes the news that After Hours, the new album by The Weeknd, is being released in a Dolby Atmos version.

The stereo version of After Hours, released March 20, 2020, went straight to Number 1 on the Billboard 200. It’s the fourth album from The Weeknd, the professional name of Canadian artist Abel Tesfaye, and the fourth to reach Number 1. The day before release, a record number of subscribers—1.02 million—pre-added After Hours to their Apple Music libraries. All 14 tracks on the album have since charted on the Billboard Hot 100, 10 in the top 40, with one, “Blinding Lights,” reaching the top slot.

Thirteen-time Grammy-winning engineer John Hanes mixed the Dolby Atmos version of After Hours at MixStar Studios in Virginia Beach, Va. Hanes co-owns the private studio with Serban Ghenea, a 17-time Grammy-winner himself, who mixed five of the stereo album’s songs at MixStar. The pair first met while working for Teddy Riley at the artist and producer’s Future Records facility in Virginia Beach in the early 1990s. “We partnered up again in 2001 and set up our own studio,” says Hanes, a 30-year veteran of the business.

Mix engineers John Hanes, seated, and Serban Ghenea in their MixStar Studios, with Genelec-based Dolby Atmos monitoring system.

Ghenea and Hanes have an enviable discography. Think of a record from the Hot 100 over the past 20 years and they likely had a hand in its success. Highlights just from recent years include releases by Bruno Mars and Mark Ronson, Maroon 5, Halsey, Taylor Swift, Benny Blanco, Katy Perry, Kelly Clarkson, Ariana Grande, Jonas Brothers, The Weeknd and, well, you get the picture.

SETTING UP THE STUDIO Hanes reports that he and Ghenea first experienced Dolby Atmos Music when a team from Amazon visited MixStar to demonstrate an early version of the company’s Echo Studio smart speaker. Recognizing the immersive format’s potential, the pair decided to start playing with the technology, adding Genelec 8320 speakers to the existing monitor setup in Hanes’ room to ultimately create a 7.1.4 setup.

“We didn’t need full-size monitors all the way around, we just needed something of an appropriate size to give us that surround,” says Hanes. “Because we’ve done the mix already, we know the EQ and everything else is already there on the speakers that we’re used to.” Plus, he says, the compact Genelecs didn’t require extra bracing in the walls or ceiling. “I started with a 5.1.2 system. I put up Ariana Grande’s ‘7 Rings.’ It’s a nice, sparse song with elements you can really move and place. That was my first experiment.”

Opposite: The Weeknd appearing as January 22's musical guest on "Jimmy Kimmel Live!" Photo by Randy Holmes via Getty Images

As momentum builds behind Dolby Atmos, Hanes believes that the technology will proliferate beyond the current handful of professional music studios equipped for the immersive format. “We’re on the edge of this becoming just like the home recording boom,” he says. “[But] there’s new technology you’re going to have to learn. It’s fairly complicated.”

To begin with, because the Dolby Atmos Renderer accepts up to 128 inputs—10 for beds and 118 for objects—the system relies on a Dante transport. “You’re going to have to learn to set up a Dante system,” Hanes explains. “I’m using a Focusrite RedNet I/O box, and there’s the Dante Controller program on the computer. You have to set up the routing from box to box in the computer. I have the Grace m908 monitor controller. It’s super-flexible, but there’s a lot of programming that goes into setting it up, assigning channels to speakers, and to monitor setups. It’s maddening when you’re sending something to one speaker and it’s coming out another, and you don’t know where in the chain you’ve gone wrong.”
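To give a sense of the bookkeeping involved, here is a small, purely hypothetical sketch in Python of the kind of output-to-speaker map a 7.1.4 rig forces you to keep straight across Dante Controller and the monitor controller; the channel numbers and labels are invented for illustration, not taken from MixStar’s configuration.

```python
# Hypothetical 7.1.4 monitor map: renderer/Dante output channel -> speaker.
# The real routing lives in Dante Controller and the monitor controller setup;
# this only illustrates why one mispatched channel is easy to chase across a
# 12-speaker rig.
MONITOR_MAP = {
    1: "L", 2: "R", 3: "C", 4: "LFE",
    5: "Lss", 6: "Rss", 7: "Lrs", 8: "Rrs",      # side and rear surrounds
    9: "Ltf", 10: "Rtf", 11: "Ltr", 12: "Rtr",   # the four height speakers
}

def check_monitor_map(mapping, expected_outputs=12):
    """Flag the two classic patching mistakes: a missing speaker or a double patch."""
    assert len(mapping) == expected_outputs, "wrong number of outputs assigned"
    assert len(set(mapping.values())) == len(mapping), "a speaker is patched twice"

check_monitor_map(MONITOR_MAP)
```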

STEREO MIX, ATMOS MIX After Hours was something of a baptism by fire for Hanes, who was given relatively little time to create the Atmos mixes. “We were pretty under the gun. They had their tie-in release date with Amazon, so I had 10 days to mix 14 songs. The record label people said, ‘We trust you, just get it done.’”

The goal of any Atmos mix is to re-create the sound of the stereo mixes, Hanes stresses. “I know that the mixers worked so hard on creating exactly the sounds that they wanted with the artist and the producers. I had the stereo masters, so I could make it as close as I could, then start experimenting with moving things around.”

A typical stereo mix at MixStar might take a day to a day and a half in total, including review by the client. But an Atmos mix doesn’t necessarily need that much time, says Hanes. “It’s faster than a stereo mix because a lot of the decisions—volumes, edits—have been made. I’m trying to maintain all of that. I’m respecting every decision they’ve already made and trying to just not fuck it up.”

“We did some of the production two or three times on some of these songs,” says engineer Ghenea of the stereo mixes. “I’ll tweak things until the cows come home, but as soon as it starts going backwards, I’ll throw my hands up and say, ‘We had it a few versions ago.’ Except in this case it got better every time.”

Tesfaye is very specific about what he wants, Ghenea adds. “He hears the thing in his head, and he wants to get whatever it is out. Sometimes it takes a few tries; there’s a lot of trial and error. You end up with a lot of parts. Being ‘80s-inspired, there are a lot of synths and textures; there’s a lot of stuff going on.”

Mixing anything in stereo is rewarding, says Ghenea, as that’s the version the majority of listeners will hear. And it’s possible to create a three-dimensional soundstage in stereo, of course. “You’re dealing with two speakers, but you try to get the feel of an immersive experience as much as possible. My biggest concern is that you end up losing the glue, the cohesiveness [when you then mix for Atmos].”

The album’s various other producers and mixers delivered their songs as a collection of stereo stems, some of which needed work to match the masters before Hanes could start the Atmos mix.

“With Serban’s sessions, the lack of mastering doesn’t change the overall sound of the mix for Atmos,” Hanes says. “But there were some mixes where the stems didn’t fully re-create the sound of the mastered stereo mix. That’s probably due to a combination of the stereo mix master bus hitting differently with less tracks feeding it, as well as choices done in mastering the tracks.”

ESTABLISHING BEDS AND OBJECTS Working from dense digital sessions is not like remixing from a 24- or 32-track tape. “This modern music, a sort of wall of sound, sometimes doesn’t translate as well when you pick it apart and move things,” Hanes says. “Are you collapsing the imaging, are you collapsing the vibe of the song? I’m A/B’ing back and forth between the stereo mix and my Atmos mix constantly to make sure I’m in that same vein that they worked so hard to get to.”

Hanes says his overall approach was to place about half the elements of a given song into the Atmos bed, sending the remaining elements as objects around the room. “When you’re doing a stereo mix, it’s basically a two-channel bed. I’m just expanding on that.”

He’s mindful that the approach may affect translation elsewhere. “If I put this sound in a bed, it’s going to come out in some rooms along the full left or right side of the wall. I know that when I’m placing sounds, so I’m thinking of how it might translate,” he says.

The number of stems per song varied. “Until I Bleed Out,” for instance, was delivered as 25 stereo stems, the fewest of any song. “Fewer than that for a song with so much happening in it, it makes you get more creative,” says Hanes. “The music is so complicated on some of these songs. You have so much information there. It’s good to have extensive stems and everything broken out separately.”

Where there were multiple elements in a single stereo stem that he wanted to separate, Hanes had to find a technical workaround. On some stems, he says, “You end up either breaking it up and sending different things different ways, or using a send to throw something to the back while the regular track is going to wherever it’s going. You drop it out of the main audio and send it to an object and automate it.”

There are still relatively few plug-ins specific to Atmos mixing. “I used all the same stuff we’ve been using for stereo,” says Hanes. But Gaffel, from Swedish developer Klevgränd Produktion, was one problem-solver for manipulating stems. “It’s a band-splitting plug-in that you can put on multiple copies of one track. Each track then has part of the band, and they’re all synchronized. I use that to split stuff up and move the highs to one place, the lows to another. You can take a stereo pad and spread it a bit from front to back.”
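For readers curious what that band-splitting amounts to, here is a minimal Python sketch of a two-way split, assuming NumPy and SciPy; it is not Klevgränd’s implementation, just an illustration of dividing one stem into low and high bands that can then be routed to different beds or objects.

```python
# Illustrative two-way band split: the lows can stay put while the highs are
# sent to a separate object. A crossover plug-in such as Gaffel also keeps the
# copies phase-aligned so the bands recombine cleanly.
import numpy as np
from scipy.signal import butter, sosfilt

def two_way_split(stem, sample_rate, crossover_hz=200.0, order=4):
    """Return (lows, highs) of a mono stem around a crossover frequency."""
    lo = butter(order, crossover_hz, btype="lowpass", fs=sample_rate, output="sos")
    hi = butter(order, crossover_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(lo, stem), sosfilt(hi, stem)

# Example: split a synthetic 48 kHz test stem
sr = 48000
t = np.arange(sr) / sr
stem = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)
lows, highs = two_way_split(stem, sr)
```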

Since he came up in the world of mixing consoles and tape machines, he says, “My philosophy is still based on that. If I need to duplicate a track, change the EQ and use those parts to move it around, I still do some of that.”

As for reverbs, even the dedicated surround plug-ins aren’t especially useful, he says. “The Waves surround suite is 5.1, so you can’t stick it on a 7.1 track. You have to create your own stuff and understand what it means to spread something out and how to do it, using delays or whatever means necessary to manipulate the audio so that it doesn’t collapse back into mono or stereo. They’re standard techniques, just in a different dimension now.”
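As a rough illustration of the delay trick Hanes mentions, the short Python sketch below makes a delayed, slightly quieter copy of a sound for a rear or height object, assuming NumPy; the 18 ms delay and -4 dB trim are arbitrary example values, not settings from these sessions.

```python
import numpy as np

def spread_copy(mono, sample_rate, delay_ms=18.0, gain_db=-4.0):
    """Delayed, attenuated copy of a signal, suitable for a rear/height object.

    The short delay decorrelates the copy from the original so the pair does
    not simply collapse back into the same mono/stereo image on playback.
    """
    delay = int(sample_rate * delay_ms / 1000.0)
    gain = 10.0 ** (gain_db / 20.0)
    delayed = np.concatenate([np.zeros(delay), mono]) * gain
    return delayed[: len(mono)]  # trim back to the original length
```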

Hanes requested dry stems with the effects printed separately, but that wasn’t always possible, he says. “You have to be technically creative to solve the problems of being unable to separate a sound from the reverb. I’ll listen to the reverb and try and figure out what type it is, recreate just the reverb and throw a little bit of that main sound into it, and throw that somewhere. I might throw another short reverb on it or a short delay that spreads it a little bit.”

COMING OUT OF THE BOX No stereo master means no stereo bus compression, a challenge Hanes solves by putting a compressor on every object and bed. “I’m creating a master fader for every object and copying the settings of every master to every object,” he explains. “Then I’m keying every object to the main master, so that if the main bed compressor hits then it’s going to pull everything equally down.”
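A bare-bones numerical sketch of that keyed setup might look like the Python below, assuming NumPy; the static threshold and ratio, and the lack of attack/release smoothing, are simplifications for illustration rather than the settings used on the record.

```python
import numpy as np

def keyed_gain_db(key, threshold_db=-12.0, ratio=3.0):
    """Per-sample gain reduction (in dB, <= 0) derived from the key signal."""
    level_db = 20.0 * np.log10(np.maximum(np.abs(key), 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)
    return -over_db * (1.0 - 1.0 / ratio)

def duck_objects(objects, key):
    """Apply one reduction curve to every object so everything pulls down together."""
    gain = 10.0 ** (keyed_gain_db(key) / 20.0)   # same length as each object
    return [obj * gain for obj in objects]
```

The point of keying everything to the same master is visible in the code: one gain curve, applied identically, so the balance between objects never shifts when the bed compressor works.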

Hanes mastered every song, but they were not mastered as an album. “I don’t know if anyone wants to sit and listen to an hour of any artist’s Atmos mixes. People just pick and choose, and I’m guessing that Atmos streaming will exacerbate that,” he says. “But it’s something that will need to be figured out in the future to make track-to-track Atmos mixes sound like an album.”

“Everybody is responsible for adhering to the spec to make sure everything is the way it’s supposed to be, that levels are not too crazy, and so on,” says Ghenea. “If people are working within the spec and making sure everything is right, then it’s going to be fine. But as soon as this goes mainstream and everybody is doing it, it could get crazy, with people doing things that aren’t supposed to happen.”

Ultimately, says Hanes, delivering a Dolby Atmos Music project feels a little strange. “I do the mix, send it out into the world and it sort of disappears. There are not many people that can hear it. I don’t even have a setup to listen to an Atmos stream from Amazon. We’re doing what we think philosophically sounds and should be right and hoping that it works.”

“That’s the thing I’ve been concerned about,” says Ghenea. “We’re working in the dark a little bit. A&R people can’t even hear it. Because artists aren’t able to listen and comment and approve, the most important thing for me is still that the intent of the music is what it was on the album, which is what everyone worked so hard to get to.”

Hanes and Ghenea are looking forward to more Atmos mix projects; indeed, they’re working with a major best-selling artist currently. MixStar also has a huge catalog extending back two decades. If there’s a budget available for an Atmos mix, Hanes says, “some of those projects would be really fun to re-create.”

“It’s turning into something fun,” agrees Ghenea. “There’s no reason why you should still only be working in two speakers.”

Mastering for Immersive Audio

What Is It? And Why Do We Need It?

By Michael Romanowski

I have been thinking about this question for a while now. I’ve had countless conversations over the past couple of years with engineers and producers whom I highly respect, and invariably, eventually, the question comes up: “What do you do about mastering?” This question goes both from me to the engineers, and from them back to me.

I had a conversation at AES in October 2019 with a longtime friend and fantastic engineer who was excited to let me know that he was mixing in Dolby Atmos. Excellent! I asked him about his room, his process and approach to mixing, and how or where he had his projects mastered. He replied that he just made the proper ADM file and sent it to the label. He said that he wasn’t sure how it could be mastered or why it might need to be. I was a little taken aback.

His concern shifted, though, when he mentioned the thing that worried him most: he had mixed only five of the 10 songs for the release, and he didn’t know how the other songs would be completed, or whether they would go together as a body of work for the artist. Exactly.

Who would be putting together the tracks for authoring the Blu-Ray, or making sure the formats and metadata were correct for digital distribution? Who would make sure that the songs and files were properly prepared to be archived and best suited for future formats? Who would listen objectively to make sure that the music presents itself as best as possible across multiple formats and playback systems? That “other set of ears”? And on it went.

This conversation has been repeated in various forms at trade shows, conventions and in talks with producers, engineers and artists alike who want to be working in the formats, but are not entirely clear on some of the details like deliverables and distribution.

I have been mastering audio in surround formats—and now, immersive audio—for 20 years, on releases for Marvin Gaye, Sting, YES, Sheryl Crow, Colorado Symphony, Soundgarden, Rob Thomas, America and many more. Most recently, I’ve worked on Dolby Atmos releases by Kenny Wayne Shepherd and Alicia Keys. I have also been working with Dolby, Sony, Fraunhofer and other organizations to help develop the tools and procedures for mastering immersive audio. As an industry, developing an efficient and inclusive way to create immersive audio requires ongoing conversations between mixing and mastering engineers, and the companies creating the software.

I am a big fan of immersive audio. I built my first surround (5.1) room in 2000 and had been working in that format until 2018, when I converted the main room at Coast Mastering to a 7.1.4 system. The last year and a half have been spent lifting my house, digging down underground so that we could get 14 feet of height, and building a 9.1.6 mastering facility.

The monitoring environment is crucial to any mastering facility, and especially so once you start adding more speaker channels. Proper construction, speaker placement and tuning is paramount to a good immersive monitoring environment. I have been fortunate to have Bob Hodas tuning every mastering room I have worked in during the past 26 years.

Put simply, mastering is the point where art and science come together. Or, the subjective and the technical. The technical side is knowing how the music is going to be distributed, and delivering the best masters appropriate for each format and stream. The artistic side starts with translatability and presentation. Music has its best chance to be enjoyed as an artistic expression if it translates, meaning that it sounds as good as it can on as many different systems as possible.

The author’s facility, Coast Mastering in Berkeley, Calif., now set up for mastering in all immersive formats, with a Focal-based 9.1.6 monitoring system.

So, what is Immersive Mastering? Exactly the same thing. It is receiving mixes, listening very carefully to them, then deciding what, if any, adjustments need to be made with regards to balance, levels, tonality, etc.—for all of the songs. Once the artistic decisions have been made and approved, the next step is to create the correct masters for distribution. The main difference between stereo and immersive mastering is in the complexities of the channels and the delivery.

THE CHALLENGES The first big challenge is receiving the mixes. What gets sent to the mastering engineer? Individual tracks, ADM files, stems? A combination? Sometimes the answer is based on what software the mastering engineer is using. Right now, there are very few tools available for immersive mastering. As an example, if I want to use an EQ for a song, I would like to be able to apply it to any, all or some of the channels. I would also like to see a compressor/limiter that would allow me to link or trigger for any single or grouped audio, or be able to apply the same amount of dynamic reduction across all channels that are linked, based on the trigger channel. For now, we need to work around the limitations of our tools.
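As a thought experiment in the kind of control being asked for, here is a minimal Python sketch that applies one EQ move to an arbitrary subset of channels in a 12-channel (7.1.4) file, assuming NumPy and SciPy; the simple high-frequency lift and the channel indices are illustrative assumptions, not the behavior of any shipping mastering tool.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def eq_selected_channels(audio, sample_rate, channels, corner_hz=8000.0, gain_db=1.5):
    """Apply the same gentle high-frequency lift to chosen channels only.

    `audio` is a (frames, channels) array; the boost is built as
    original + scaled high-passed signal, which is enough to illustrate
    "one EQ, applied to any, all or some of the channels."
    """
    sos = butter(2, corner_hz, btype="highpass", fs=sample_rate, output="sos")
    lift = 10.0 ** (gain_db / 20.0) - 1.0
    out = audio.copy()
    for ch in channels:
        out[:, ch] = audio[:, ch] + lift * sosfilt(sos, audio[:, ch])
    return out

# e.g., brighten only the four height channels of a hypothetical layout:
# mastered = eq_selected_channels(mix_7_1_4, 48000, channels=[8, 9, 10, 11])
```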

DAWs present another challenge. It is unrealistic to try and master out of the box. I wish I could, as I greatly prefer it for stereo work. So I choose to use Steinberg Nuendo, as it can easily handle the different types of objects, beds and output bus structure. I can use it to import different ADM files, which include the positional metadata for the object tracks in place. Or, if I get sent 12 channels of audio, for example in a 7.1.4 project, I can pan those to the correct speaker positions and proceed from there. I wish there were more options for immersive mastering.

Delivery of immersive audio to the manufacturers or streaming services is another challenge. Each platform has different requirements, and each format has its own software for creating those deliverables: Dolby has the Atmos Mastering Renderer, Sony has Architect, Fraunhofer its 3DA conversion tool. Et cetera.

Finally, I want to touch on down-mixing. For me, down-mixing does not work in practice. Each channel-type of delivery is best when it is created specifically. Each version—2.0, 5.1, 7.1, 5.1.4, 7.1.4—is best when each has its own mix. There are timing and phase issues that I feel must be dealt with by the mix engineer to achieve the best results. You can’t put ten pounds of flour in a five pound bag.
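For context on what a generic fold-down actually does, here is the kind of static coefficient matrix a conventional ITU-style 5.1-to-stereo downmix applies, sketched in Python; real platforms differ in their exact gains and in how they treat objects, so the numbers are a common convention, not a spec for any one service.

```python
import numpy as np

# Assumed channel order: L, R, C, LFE, Ls, Rs (the LFE is commonly discarded).
ITU_STYLE_5_1_TO_2_0 = np.array([
    # L    R    C      LFE  Ls     Rs
    [1.0, 0.0, 0.707, 0.0, 0.707, 0.0],   # left output
    [0.0, 1.0, 0.707, 0.0, 0.0,   0.707], # right output
])

def fold_down(audio_5_1):
    """Static fold-down of a (frames, 6) 5.1 signal to (frames, 2) stereo."""
    return audio_5_1 @ ITU_STYLE_5_1_TO_2_0.T
```

Every channel is summed at a fixed fraction, which is exactly the ten-pounds-of-flour problem described above: nothing in the matrix knows which elements should give way.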

Here’s a story to help me explain: When Giles Martin remixed Sgt. Pepper’s in Dolby Atmos, the two questions that kept coming up, from fans and from professional engineers, were: “Do you like it?” And “Is it better?”

I decided to do an experiment and hold an all-day AES listening session with about 15 other engineers to compare every version of the record we could find: original mono LP, stereo LP, remastered, re-released, hi-res, surround, U.K. vs. U.S., etc. When the day was over, my favorite was still the original mono LP. Here’s why:

The art and craft that was needed to have all of the individual sounds and instruments be heard in their own sonic place out of one speaker was tremendous. If you spread those out into an immersive format simply as is (which is NOT what Giles Martin did, by the way), each piece of audio would be a weak representation of itself because the context of space is different. Simply spreading doesn’t work.

The exact opposite is true when down-mixing. Folding all of the textures down to a smaller listening view is trying to cram more sound into a smaller space. It becomes a dense mess with no clarity or sense of localization. Algorithms don’t know the intent of the artist or the engineer, so they can’t know what to sculpt to make the space.

This applies to mastering, as well. When I am mastering a project, I am listening for the authentic nature of the presentation and the tone and space each element takes up. Not what it is, but how it is. The art form of Immersive Mastering is in making sure that the tonal and level transitions occur smoothly across the sound sources. Filling in the sonic gaps and pulling back the peaks, if needed.

IDEALLY, SEPARATE MIXES The best case is that we receive separate mixes for each format. The complexities of the immersive formats, at every stage of production, are far greater than in stereo. Mastering is an essential part of the process of making records, and a mastering engineer with the skill and the ear to attend to the last detailed steps before release is crucial to the success of the format.

I believe very strongly in the human element and what it brings to the technical capabilities, getting the listener closer to the intention of the art form. What moves me about all music, mono to immersive, is its ability to pull me in. The way it can tell you a story, or take you on a journey, is powerful. What I love about mastering for immersive audio, and stereo mastering, for that matter, is trying to help make sure the artist sounds more like themselves, wherever or however their music is being heard. The consumer then becomes more of an active listener than a passive listener. What the artist is saying musically becomes more important than how they are saying it. That’s when we get pulled in. That’s when we stop what we are doing and just… Pause. Listen. Nice.
