This article originally appeared in the October 2018 issue of Professional Sound magazine.
By Andrew King
The MMVAs have long been one of the most anticipated live events on the Canadian entertainment calendar. Suffice it to say you’d be hard-pressed to find a music fan born in the last 30 years who isn’t at least somewhat familiar with the event – the packed parking lot at 299 Queen St. West, the swirling searchlights, the multiple stages tricked out with elaborate and eye-catching set pieces…
The 2018 edition, hosted by actress, rapper, and comedian Awkwafina, featured an impressive list of Canadian and international performers, including Shawn Mendes, Halsey, 5 Seconds of Summer, Alessia Cara, and more.
Every year, it’s a technical team from Much’s parent company, Bellmedia, that works behind the scenes to deliver the energy and excitement from the ground to millions of viewers across the country and beyond. This year was no different, and as far as those viewers were concerned, the show looked and sounded as slick as expected.
Unbeknownst to most, though, the audio team handling the broadcast mix for 2018 was trying something new. Armed with an array of state-of-the-art technologies, including a trio of new hardware pieces from Dolby Laboratories, they were creating a live Dolby Atmos mix simultaneously with the surround mix that’s become the standard in recent years. It was essentially a proof-of-concept experiment aiming to assess the potential of broadcasting such a mix in the not-so-distant future as the sound and picture capabilities of consumer technologies continue to advance at a rapid rate.
Here, Bellmedia personnel discuss what they were doing, how they did it, and whether or not they got the results they were anticipating.
[Photo courtesy of CTV]
PROOF OF CONCEPT
Michael Nunan, senior manager of audio broadcast operations at Bellmedia, explains that the broadcast systems topology for the 2018 MMVAs was, functionally speaking, consistent with the past few editions.
“Since 2015, the show has largely been fixed in terms of our technical footprint,” he says. “The 2015 iteration earned us [an Outstanding] Technical Achievement award at the Canadian Screen Awards, and the notion of building a show like this with a large router at the centre, and then basically designing the system like a giant star, has remained mostly unchanged since then.”
The event was first broadcast in 4K in 2016 – a time when that resolution was still in its relative infancy for wide distribution – and has been for the two years since. But while the picture side now supports 4K and UHD resolution, the corresponding audio for that content has been limited to 5.1 surround for the past 15-plus years – and Nunan asserts there’s really no good reason for that.
“Everyone is basically trading on the commercial Blu-ray spec, so dynamic UHD with Dolby Digital Plus, or enhanced Dolby Digital. So even though you could have theoretically gone to 7.1 or at least a better-sounding 5.1 with higher bitrates, no one is yet demanding increased audio support for picture.”
He acknowledges that this is partly because broadcasters, recognizing an incoming consumer demand, rushed into 4K to gain experience with the format and figure out how it could be done while awaiting the arrival of ATSC 3.0-compliant technologies, potentially with Dolby AC-4 audio.
“We know the day is coming where we’ll be able to support the picture better with more elaborate audio,” he says, “either with a wider and taller format, or something that offers a level of personalization for the listener.”
At Bellmedia, he and his team have been experimenting in that regard over the past two years, trying out different things and, as he puts it, “filing them under, ‘Be ready to answer the call.’”
He elaborates: “We know the day is going to come that distributors can support these next-generation formats, and eventually, one of our client partners is going to ask us to do this, and we don’t want to wait for that call to figure it out.”
That’s how the 2018 iHeartRadio MMVAs broadcast turned into a proof-of-concept initiative for the team – a means of discovering if, when distributors are able to take the signal, they can properly provide it, and also to show clients and collaborators a real-world, visceral experience of what these capabilities could mean for a finished product.
Since early 2018, Bellmedia has enjoyed a close collaboration with Dolby Laboratories engineers regarding a number of different projects and initiatives. “And what became clear to us was that, while we’ve been playing with high-order surround and surround-plus-height formats for almost two years, with Dolby’s help, it would be possible to do this in real-time, in a live setting,” he shares. “That sort of changes everything, so we wanted to know if we could do it, see how it works, and understand what the potential penalties might be.”
And so the team from Bellmedia – anchored by Nunan, Production Audio Supervisor Howard Baggley, Post Sound Supervisor David Midgley, and Systems Engineer Sean Corcoran – along with their various technical collaborators for the broadcast, set out on their mission.
[RF wireless trailer (white at corner) & Dome Productions journey & B-unit trucks at 299 Queen St. W. ahead of the 2018 iHeartRadio MMVAs]
The core consideration throughout the entire experiment was that, no matter what, they still needed to put a conventional show to air; thus, the Atmos mix would be a fully parallel finish. Beyond proving that such a mix could be achieved at all, the experiment would prove it could be achieved without risk to the main show.
“Luckily, since the architecture of that show is based around a massive router and over MADI, with a small amount of Ravenna thrown in, we knew it’d be really easy to just graft another mix stage onto the show apparatus and be totally air-gapped from the rest of the show,” Nunan explains.
The MMVAs have a history of using a similar “simultaneous mix” model. In the early days of HD, when broadcasters were in the preliminary stages of experimentation with 5.1, the main show mix was done in two-channel stereo, at which point those stems would be sent to another mix room for a 5.1 accompaniment to the HD product. As it happens, the engineer responsible for that surround mix years ago was Baggley, who was behind the live Atmos mix this time around.
While Baggley is one of the most experienced broadcast engineers in the country, the height element was new to him for a live application. Conversely, Midgley has logged plenty of hours mixing in Atmos in post studios, but typically isn’t as active with live productions.
“We worked together to kind of leverage the best of those worlds,” Midgley explains. “Howard has mixed a ton of stuff in 5.1, but this was a bit new to him, so I was there to assist and just give him an idea of ‘what happens when…’”
That mix was largely handled from a Martin Pilchner-designed audio control room, ACR3, at 299 Queen St. West that would’ve otherwise been unused for the show. The necessary technical components for the system Corcoran designed were basically brought in as a flypack, built around an Avid S3 control surface and Merging Technologies Pyramix and Avid Pro Tools digital workstations.
As an engineer who handles the IP set-ups, control rooms, and interconnectivity across his home campus and other remote productions, Corcoran has his hands full leading up to and during the show in a typical year. Fortunately, this one didn’t add any overly stressful challenges despite the ambitious experiment.
“In general, it doesn’t have a huge impact,” he reinforces. “Because the Dolby gear sits at the ingress and egress of the system as a whole, I had a similar system to what I’d do for our remote production kit and inside the building. And because Dolby sits in an AES stream and all of our devices were 24-bit transparent, everything just went through, so as far as integrating Atmos, it wasn’t a very heavy lift.”
[Pictured: Sennheiser MK4 on the roof of 299 Queen St. West for ambient mix]
The three units he references – all new to the Bellmedia team – are Dolby’s DP580 professional reference decoder, DP590 authoring tool, and DP591 broadcast processor. To help with the learning curve, a pair of engineers from Dolby came to Toronto to support the set-up and execution.
“Out of the box and with the manual and the [Dolby engineers] there, it was a fairly painless process for set-up and programming,” offers Midgley. “It was a matter of learning how the boxes integrated and what each one did, then figuring out what the GUI was telling you as far as information and metadata, and then setting those parameters. Then, once we had content, it ran really well.”
The control room would get MADI signals from the show router, convert them to Ravenna, and bring them into the two Pyramix environments – one a recorder, one a mix engine – and then into the interfaces on a substantial Ethernet switch. Next, the traffic was sent to the necessary engines and recorders as audio over IP. Components like the Pro Tools backup recording, the Lawo unit doing video embedding, etc. were interfaced via MADI using Merging’s Horus units, which convert from Ravenna to other standards.
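The path Corcoran describes can be sketched as a simple hop list. The device and format names below follow the article, but the structure itself is purely illustrative – a minimal sketch, not a wiring diagram of the actual ACR3 system:

```python
# Hypothetical model of the ACR3 signal chain described above.
# Format changes (MADI <-> Ravenna) are where the Horus converters sit.
SIGNAL_CHAIN = [
    ("show router",        "MADI"),
    ("Horus converter",    "Ravenna"),  # MADI -> Ravenna at the ingress
    ("Pyramix recorder",   "Ravenna"),
    ("Pyramix mix engine", "Ravenna"),
    ("Ethernet switch",    "Ravenna"),  # audio-over-IP distribution
    ("Pro Tools backup",   "MADI"),     # back out via Horus for MADI I/O
]

def conversions(chain):
    """Count format changes along the chain; each one needs a converter."""
    return sum(1 for (_, a), (_, b) in zip(chain, chain[1:]) if a != b)
```

With everything 24-bit transparent, the audio itself passes through those hops untouched; only the transport format changes at the two conversion points.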
Consistent with a recent investment at Bellmedia HQ, the monitoring solution for the control room consisted entirely of Neumann active studio monitors in a 7.1.4 configuration. They achieved this by mounting some truss in the mix suite to properly position the overhead boxes they needed, making for a “makeshift” Atmos mix environment.
The only non-digital components in the signal path for the 2018 show were the ambiance mics around the parking lot and on the roof of the building. These are integral to delivering an accurate experience of what it’s like being in that packed parking lot for this show, and the goal for 2018 was to see how much of that could transition to audio over IP.
To achieve this, the team invited Anthony Kuzub from Toronto’s Ward-Beck Systems to join the crew along with some of his company’s preMO mic preamps.
“All of the ambient mics were delivered to the ambient mix stage over IP, and at the same moment, they installed IP onramps in a few places on Richmond Street – in one of the music trucks, in the main Dome Productions truck, the RF truck [from Ottawa’s RF Wireless] – so the Ravenna network was really an almost fully parallel piece of infrastructure, basically floating on top of all of the other wiring in the show to, again, see how easy it might be in the future to duplicate that architecture as audio over IP in a complicated show like this with a robust architecture.”
With broadcasters using the consumer Blu-ray spec for content delivery, the best current means of delivering Atmos into the home with UHD picture is via the extra headroom in Dolby Digital Plus JOC (joint object coding). This method is somewhat limiting, with 5.1.4 being the best it can manage (5.1 surround with four height channels). In contrast, the cinema variant is typically 7.1.4. That may not seem like a significant difference, but as Midgley stresses, it is.
“One of the main differences in live Atmos versus posted Atmos is that extra two channels, which are fairly significant when you’re dealing with concert-like material,” he explains. “In 5.1.4, there are only 10 objects available for rendering in addition to a 5.1 ‘bed layer’; with 7.1.4, there are 118. But while that might feel limiting, it really wasn’t. We weren’t flying things around, and once we had an idea of where we wanted to start in terms of objects, it was a treat to be able to fine-tune those on the day for a 5.1.4 system.”
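The layout shorthand Midgley uses is easy to check arithmetically: “mains.LFE.heights” sums to the total number of speaker feeds. The sketch below also shows how the 118-object figure he quotes falls out if one assumes a post renderer with 128 total inputs and a 10-channel 7.1.2 bed – an assumption for illustration, not something stated in the article:

```python
def speaker_feeds(layout: str) -> int:
    """Total speaker feeds in a 'mains.LFE.heights' layout string."""
    mains, lfe, heights = (int(n) for n in layout.split("."))
    return mains + lfe + heights

# 5.1.4 -> 10 feeds; 7.1.4 -> 12 feeds.
# Assumed renderer budget: 128 inputs minus a 10-channel 7.1.2 bed
# leaves the 118 objects quoted above.
RENDERER_INPUTS = 128
OBJECTS = RENDERER_INPUTS - speaker_feeds("7.1.2")
```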
For the real-time mix, that meant they could simply take the 5.1 mix stems from the various mix stages that were already producing 5.1 content for the conventional HD+5.1 broadcast and add the four height sources; however, after the event, they typically create a Blu-ray archive. This year, they’ll be able to do that with a more robust 7.1.4 variant.
The initial 5.1 mixes for the musical performances were split between Bellmedia engineer Anthony Montano, operating out of the Broadcast Audio Services (BAS) truck, and the LiveWire Remote Recorders truck, led by owner Doug McClement.
Midgley offers a bit of insight into mixing the performances in Atmos: “Having things object-based on the live performances gave us the ability to find positions that worked for both aesthetics with picture and for keeping as much detail in the Atmos arena as possible, and then doing it in post just gave us that much bigger of an envelope.”
The biggest concern in this case, he says, is the picture. No matter what, if there’s something on the screen that the viewer should be hearing, then that audio needs to be married with a focal point on the screen. “And we can’t predict if someone’s watching on a phone or on an 80-in. screen,” Midgley notes, “so object placement and how we use the tools is delicate and it has to be married to the picture. When we don’t have anything referenced by picture – say the announcer mic – then we can do what we want.”
As mentioned, a major component of both the live and post Atmos mixes is the way the crowd and ambiance mics are incorporated. Two technicians, Dave Tedesco and Chris Sampson, were mixing the crowd and camera mics in the basement at 299 Queen West.
“Since that’s really the heart of what it’s like to stand in that parking lot and watch the show,” Nunan begins, “we guessed it’d be about how we presented that ambient signature in the plus-height environment that would be the real difference-maker.”
[Pictured: Howard Baggley in temporary Atmos Mix Room, ACR3, at 299 Queen St. West]
Again, delivering a clean main broadcast was the top priority, and so the team had to consider what they could comfortably ask of the various parties involved in the process that wouldn’t compromise their abilities to do their jobs.
Nunan says it came down to having Tedesco and Sampson simply consider height when placing their complement of nearly 30 microphones across the site. In the end, some were placed at ground level, some in the truss above the stages, and some on the roof of the building. Then it was just a matter of having them think of things in groups – high, mid-level, and crowd-level. Summing those groups together, they’d have their usual 5.1 sum to contribute to the main telecast, but by simultaneously sending those source groups to the Atmos stage, they could have their cake and eat it, too.
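The grouping scheme Nunan describes amounts to simple bus routing: sum the height groups for the conventional 5.1 telecast, but also pass each group through separately for placement in the Atmos height layer. A minimal sketch – the mic names, counts, and groups below are invented for illustration, not the actual 2018 patch list of nearly 30 microphones:

```python
from collections import defaultdict

# Illustrative mic plan only; names and groups are hypothetical.
MICS = [
    ("roof_L", "high"), ("roof_R", "high"),
    ("truss_SL", "mid"), ("truss_SR", "mid"),
    ("pit_L", "crowd"), ("pit_R", "crowd"),
]

def group_buses(mics):
    """Collect mics into per-height group buses (high / mid / crowd)."""
    buses = defaultdict(list)
    for name, group in mics:
        buses[group].append(name)
    return dict(buses)

def feeds(mics):
    """The 5.1 telecast gets the summed groups; the Atmos stage gets
    each group bus separately for placement in the height layer."""
    buses = group_buses(mics)
    return {
        "telecast_5_1": [m for bus in buses.values() for m in bus],
        "atmos_groups": buses,
    }
```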
“Something that we dealt with in post and might bake into the live setting next time was to delay certain arrays or microphones relative to others so they all had the same impulse time,” Midgley says of the unique challenge of having to time-align all of the crowd and ambiance microphones throughout the site. “Then, on the day of the show, we could delay the stage microphones to the edge of the stage so they would be closely aligned with the audience mics, which tightens up the whole mix in a complete Atmos arena.”
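The time alignment Midgley describes is ordinary distance-over-speed-of-sound arithmetic: a mic whose source is farther away hears it later, so the nearer mic gets a matching delay. A minimal sketch, with illustrative distances:

```python
SPEED_OF_SOUND_M_S = 343.0  # roughly, in air at ~20 degrees C

def align_delay_ms(extra_path_m: float) -> float:
    """Delay (ms) to add to the nearer mic so its signal lines up with
    a mic whose source is extra_path_m farther away."""
    return extra_path_m / SPEED_OF_SOUND_M_S * 1000.0
```

For example, a stage mic delayed to match an audience mic roughly 34 m farther from the source needs about 100 ms of delay; without it, the two arrivals smear rather than reinforce each other in the summed mix.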
Generally speaking, Nunan says these types of initiatives would be far more difficult without a crack team of collaborators and support from his colleagues and leaders.
“Anthony [Montano] and I have always attempted to push the envelope with this show, and luckily, everyone else on the team is very much onside with that,” he enthuses. “We have fantastic support from our engineering departments here, and especially from our technical producer, David Azoulay. He’s our biggest champion in terms of letting us conduct these sorts of experiments where it might be harder to convince other people that aren’t as friendly towards audio, or a bit more cynical about it.”
Of course, that level of confidence from higher-ups is born of the stellar reputation shared by the team. “None of us are new to television, so the notion of a clean show at all costs is in our collective DNA,” Nunan asserts. “Not only did we do this in a next-gen format, but we did it with a new toolset riding on an audio-over-IP environment, so we made it very hard on ourselves, but that meant everyone involved got some very specific learnings out of it.”
Corcoran, for example, gained experience deploying a large amount of Ravenna gear in a unique configuration, and getting equipment from various manufacturers to play along. Baggley logged mixing time in an entirely new format, and even Midgley, who has plenty of experience with Atmos in a post environment, benefited from a different type of exposure.
“This was a lot of fun,” says Baggley, recapping the experience. “We proved that Atmos works really well for live music, but I’m excited to see how this new format will allow us to better support some of the other genres we work in. Maybe the most interesting aspect of the technology is that it allows us to mix in this new high-end format, while still guaranteeing the experience that every user has, all the way down to someone listening on an iPhone.”
With archives of the final product – both the live 5.1.4 mix and the enhanced 7.1.4 post variant – they’ll now be able to share the results with colleagues and current and potential clients.
“Ultimately, we delivered a very successful show that was clean to air in all respects – no muss, no fuss – but at the same time, had executed our Atmos mix in real-time and recorded all of the results,” Nunan says in summary. “We look forward to the entire ecosystem being able to support this. When that day comes, we know we’ll be ready.”
[Front row (L-R): Dolby’s Gary Epstein; Bell Media’s Roy Janke, David Midgley, Michael Nunan & Sean Corcoran. Stairs (L-R): BAS/Lawo’s Doug Smith & Bell Media’s Anthony Montano. Rear (L-R): Bell Media’s Howard Baggley; freelance engineer Kent Ford; Dolby’s Douglas Ribordy & Bell Media’s Chris Berry.]
Andrew King is the Editor-in-Chief of Professional Sound.