The 2024 NAB Show

Many have opined that there’s no longer a need for large trade shows. Clearly the folks running the Las Vegas Convention Center don’t buy into that view. They are in the middle of a multi-year project to expand and renovate their facilities. That was the backdrop for this month’s National Association of Broadcasters annual exhibition. Aside from meetings and presentations to official NAB members, this show is best known as one of two major international showcases for new technology in the areas of radio, film, television, streaming, production, post-production, and distribution. The other is the International Broadcasting Convention (IBC) in Amsterdam, to be held later this year.

The tally this year clocked in at over 61,000 attendees, more than half of whom were first-timers and over a fourth of whom came from outside the US. Some might suggest that you can get all the info you want about gear by watching YouTube. But if you’ve ever been to CES, NAMM, IBC, NAB, InfoComm, or similar shows, then you know that there’s nothing like the in-person experience. That most certainly was part of the draw for these first-time attendees.

A show like NAB gives you a way to kick the tires and compare. Not only do you experience the look and feel of actual products, but the manufacturers are often presenting them in actual use. Want to see how the various Canon lenses perform? Simply walk down the line in their booth’s camera set-up and see for yourself. Want to compare the features of manufacturer A’s product against those of manufacturer B? Just walk from one booth to the other and check it out. That’s the value NAB presents. Add to this the opportunity to meet and chat in person with some of your online friends and mentors, and that’s just icing on the cake.

I’ve written a show overview for postPerspective if you want to read about some of the new products and features that caught my attention. If you didn’t get a chance to go, this gallery will give you a sense of NAB 2024.

©2024 Oliver Peters

Life in Six Strings

Whether you’re a guitar nerd or just into rock ‘n’ roll history, learning what makes our music heroes tick is always entertaining. Music journalist and TV presenter Kylie Olsson started a YouTube channel during the pandemic lockdown, teaching herself how to play guitar and reaching out to famous guitarists that she knew. This became the concept for a TV series called “Life in Six Strings with Kylie Olsson” that airs on AXS TV. The show is in the style of “Comedians in Cars Getting Coffee.” Olsson explores the passions behind these guitarists and picks up a few guitar pointers along the way.

I spoke with James Tonkin and Leigh Brooks about the post workflow for these episodes. Tonkin is founder of Hangman in London, which handled the post on the eight-part series. He was also the director of photography for the first two episodes and has handled the online edit and color grading for all of the episodes. Leigh Brooks (Firebelly Films) was the offline (i.e. creative) editor on the series, starting with episode three. Together they have pioneered an offline-to-online post workflow based entirely around Blackmagic Design DaVinci Resolve.

James, how did you get started on this project?

James Tonkin: Kylie approached us about shooting a pilot for the series. We filmed that in Nashville with Joe Bonamassa and it formed the creative style for the show. We didn’t want to just fixate on the technical side of the guitar and the tone of these players, but also their geographical base – explore the city a little bit. We had to shoot it very documentary style, but wrap it up into a 20-25 minute episode. No pre-lighting, just a tiny team following her around, interacting with these people.

Then we did a second one with Nuno Bettencourt and that solidified the look of the show during those two initial episodes. She eventually got distribution through AXS TV in the States for the eight-part series. I shot the first two episodes and the rest were shot by a US-based crew, which followed the production workflow that we had set up. Not only the look and retaining the documentary format, but also maintaining the highest production value we could give it in the time and budget that we’re working with.

We chose to shoot anamorphic with a cinematic aspect ratio, because it’s slightly different from the usual off-the-cuff reality TV look. Also, whenever possible, record in a raw codec, because we (Hangman) were doing all of the post on it, with me specifically being the colorist. I always advocate for a raw workflow, especially on something in a documentary style. People are walking from daylight into somebody’s house and then down to a basement, basically following them around. And Kylie wants to keep interacting with whomever she’s interviewing without needing to wait for cameras to stop and re-balance. She wants to keep it flowing. So when it comes to posting that, you’ve got a much more robust digital negative to work with [if it was shot as camera raw].

What was the workflow for the shows and were there any challenges?

Leigh Brooks: The series was shot mainly with Red and Canon cameras as 6K anamorphic files. Usually the drive came to me and I would transcode the rushes or create proxy files and then send the drive to James. The program is quite straightforward and narrative-based, without much scope for doing crazy things with it. It’s about the nuts and bolts of guitars and the players that use them. But that being said, each episode definitely had its own little flavor and style. Once we locked the show, James took the sequence, got hold of the rushes, and then got to work on the grade and the sound.

What Kylie’s pulled off on her own is no small feat. She’s a great producer, knows her stuff, and really does the research. She’s so passionate about the music and the people that she’s interviewing and that really comes across. The Steve Vai episode was awesome. He’s very holistic. These people dictate the narrative and tell you where the edit is going to go. Mick Mars was also really good fun. That was the trickiest show to do, because the A and B-side camera set-up wasn’t quite working for us. We had to really get clever in the edit.

DaVinci Resolve gets a lot of press when it’s used for finishing and color grading. But on this project it was used for the offline cut as well. TV series post is usually dominated by Avid Media Composer. Why do you feel that Resolve is a good choice for offline editing?

Tonkin: I’ve been a longtime advocate of working inside of Resolve, not just from a grading perspective, but editorial. As soon as the Edit page started to offer me the feature set that we needed, it became a no-brainer that we should do all of our offline in Resolve, whenever possible. On a show like this, I’ve got about six hours of online time and I want to spend the majority being as creative as I can. So, focusing on color correction, looking at anything I need to stabilize, resize, any tracking, any kind of corrective work – rather than spending two or three hours conforming from one timeline into another.

The offline on this series was done in DaVinci Resolve, with the exception of the first episode, which was cut in Final Cut Pro X. I’m trying to leave editors open to the choice of the application they like to use. My gentlemen’s agreement with Matt [Cronin], who cut the first pilot, was that he could cut it in whatever he liked, as long as he gave me back a .drp (DaVinci Resolve project) file. He loves Final Cut Pro X, because that’s what he’s quickest at. But he also knows the pain that conforms can be. So he handled that on his side and just gave me back a .drp file. So it was quick and easy.

From episode three onwards, I was delighted to learn that Leigh also uses Resolve as his primary workflow. Everything just transfers and translates really quickly. Knowing that we had six more episodes to work through together, I suggested things that would help us a lot, both on the picture side for me and for audio, which was also being done here in our studio. We’re generating the 5.1 mix.

Brooks: I come from an Avid background. I was an engineer initially before ever starting to edit. When I started editing, I moved from Avid to Final Cut Pro 7 and then back to Avid, after which I made the push to go to Resolve. It’s a joy to edit on and does so many things really well. It’s become my absolute workhorse. Avid is fine in a multi-user operation, but now that edge doesn’t really matter, because Resolve handles it so well with its cloud management. I even bought the two editor keyboards. The jog wheel is fantastic! The scrolling on that is amazing.

You mentioned cloud. Was any of that a factor in the post on “Life in Six Strings?”

Tonkin: Initially, when Leigh was reversioning the first two episodes for AXS TV, we were using his Blackmagic Cloud account. But for the rest of the episodes we were just exchanging files. Rushes either came to me or would go straight to Leigh. He makes his offline cut and then the files come to me for finishing, so it was a linear progression.

However, I worked on a pilot for another project where every version was effectively a finished online version. And so we used [the Blackmagic] Cloud for that all the way through. The editor worked offline with proxies in Resolve. We worked from the same cloud project and every time he had finished, I would log in and switch the files from proxy to camera originals with a single click. That was literally all we had to do in terms of an offline-to-online workflow.

Brooks: I’m working on delivering a feature length documentary for [the band] Nickelback that’s coming out in cinemas later in March. I directed it, cut it in Avid, and then finished in Resolve. My grader is in Portsmouth and I can sit here and watch that grade being done live, thanks to the cloud management. It definitely still has a few snags, but they’re on it. I can phone up Blackmagic and get a voice – an actual person to talk to that really wants to fix my problem.

Were there any particular tools in Resolve that benefitted these shows?

Tonkin: The Blackmagic team are really good at introducing new tools to Resolve all the time. I’ve used trackers for years and that’s one of my favorite things about Resolve. Their AI-based subtitling is invaluable. These are around 20-minute episodes and 99% of it is talking. Without that tool, we would have to do a lot of extra work.

Resolve is also good for the more complex things. For example, a driving sequence in the bright California sun that wasn’t shot as camera raw. The only way I could get around the blown-out sky was with corrections applied specifically to the sky portion of the shot. Obviously, you want to track the subject so that the sky correction is not going over his head. All of those types of tools are just built in. When I’ve effectively got six hours to work on an episode and I might have about 20 shots like that, then having these tools built in is invaluable from a finishing perspective.

You’ve both worked with a variety of other nonlinear editing applications. How do you see the industry changing?

Tonkin: Being in post for a couple of decades now and using Final Cut Studio, Final Cut Pro X, and a bit of Premiere Pro throughout the years, I find that the transition from offline to online starts to blur more and more these days. Clients watching their first pass want to get a good sense of what it should look like with a lot of finishing elements in place already. So, you’re effectively doing these finishing things right at the beginning.

It’s really advantageous when you’re doing both in Resolve. When you offline in a different NLE, not all of that data is transferred or correctly converted between applications. By both of us working in Resolve, even simple things you wouldn’t think of, like timeline markers, come through. Maybe he’s had some clips that need extra work. He can leave a marker for me and that will translate through. You can fudge your way through one episode using different systems, but if you’re going to do at least six or eight of them – and we’re hopefully looking at a season two this year – then you want to really establish your workflow upfront just to make things more straightforward.

Brooks: Editing has changed so much over the years. When I became an engineer, it was linear and nonlinear, right? I was working on “The World Is Not Enough” – the James Bond film around 1998. One side of the room was conventional – Steenbecks, bins, numbering machines. The other side was Avid. We were viewing 2K rushes on film, because that’s what you could see on the screen. On Avid it was AVR 77. It’s really interesting to see it come full circle. Now with Resolve, you’re seeing what you need to see rather than something that’s subpar.

I’d say there are a lot of editors who are ‘Resolve curious.’ If you’re in Premiere Pro you’re not moving, because you’re too tied into the way Adobe’s apps work. If you know Premiere, you know After Effects and are not going to move to Resolve and relearn Fusion. I think more people would move from Avid to Resolve, because simple things in Resolve are very complicated in Avid – the effects tab, the 3D warp, and so on.

Editors often have quite strange egos. I find the incessant arguing between platforms is just insane. It’s this playground kind of argument about bloody software! [laugh] After all, these tools are all there to tell stories. There are a lot of enthusiastic people on YouTube that make really good videos about Resolve, as well. It’s nice to be in that ecosystem. I’d implore any Avid editor to look at Resolve. If you’re frustrated and you want to try something else, then this might open your eyes to a new way of working. And the Title Tool works! [smile]

This article was originally posted at postPerspective.

©2024 Oliver Peters

The 2024 DIY Final Cut Studio

A decade ago I wrote about the wider collection of apps that you might need as part of a broader Final Cut Pro-centric ecosystem. But it’s time for an updated look. In the waning years of Final Cut Pro “legacy” the Apple pro applications were bundled as Final Cut Studio. With the introduction of Final Cut Pro X, you had a lower cost app, but had to augment it with the missing pieces that were appropriate for more involved workflows.

Some are fans of Final Cut Pro as their main NLE. Others use it because it’s an alternative to subscription plans. Adobe Creative Cloud is the most comprehensive of these, so let’s look at what it would take to replace that level of functionality in a modern Final Cut Pro bundle.

Core Applications

To start, combine all of the Apple pro applications, including Final Cut Pro, Motion, Compressor, and Logic Pro. This covers you for editing, motion graphics, compression/encoding, and audio recording and mixing. Granted, the last one is optional for most video editors, but this lines up as an alternative to Adobe Audition – and a much better one at that. Since Compressor will not encode certain formats, I would also suggest adding Shutter Encoder (donation requested) to your video toolkit.

Photo / graphics

Apple has conceded graphic design applications to Adobe and others. But if you want to avoid subscriptions, then Pixelmator and Affinity (now owned by Canva) are the two best options for graphics and design. Affinity Photo, Designer, and Publisher are equivalents to Adobe Photoshop, Illustrator, and InDesign. Pixelmator Pro isn’t as broad; however, it supports both raster and vector graphics.

Unfortunately, Apple dropped its popular Aperture application in favor of Photos, a lightweight mashup of Aperture and iPhoto. While it’s more full-featured than you might think at first glance, a good upgrade is Photomator from the folks at Pixelmator. There are certainly other options, but these two companies fit well into the Apple ecosystem.

Interoperability / augmentation 

Speech-to-text is an exciting and valuable new area for editors and a must-have for many editorial workflows. Final Cut Pro lags behind Premiere Pro and Resolve in this area, so – third-party apps to the rescue. A good option is the free Jojo Transcribe. Use this to generate text, which can then be brought into Final Cut Pro for captioning.

If you need to interchange editorial files with other shops that use different applications, then you will need to translate the editorial files from and into list formats not supported by Final Cut Pro. This includes interchange between FCP and Premiere Pro, as well as sending to Pro Tools. While you might only need some of these, the following interchange apps cover the bases: XtoCC, SendToX, EDL-X, Worx4 X, X2Pro Audio Convert, and Xsend Motion. The latter sends an FCP timeline to Motion as a Motion timeline and is available via FxFactory. The others can be bought from the Mac App Store.

Plugins

You can go crazy with plugins, so be judicious with your selections. FCP includes a lot of useful video effects and transitions from the get-go. If you own Motion, you can export Motion’s stock effects, or your own unique creations, as Motion template effects and transitions. These will then augment the standard FCP load. For outside options, I like to stick with the same company for consistency, like CoreMelt, Boris FX, MotionVFX, or FxFactory. The free FxFactory installation comes with a few effects, or bump that up to Pro for more. It also operates as a plugin manager where you can purchase any additional FxFactory effects that fit your needs or those of a new project.

Aside from these selections, I would also recommend the free Boris FX BCC+ Looks filter. Boris FX offers many great tools, but BCC+ Looks is a good starting point. If you need more color correction horsepower, then my choice would be Color Finale 2. It’s designed specifically for Final Cut Pro and operates as a high-end grading tool. Another useful add-on for film stock looks and effects is FilmConvert Nitrate.

As far as audio plugins are concerned, there are many great options for Logic Pro if you need more than the comprehensive selection it comes with. However, be careful using these within Final Cut Pro. They will show up, but many either don’t work correctly or don’t work at all. Plus, FCP does not use a track-based timeline, so it’s not really conducive to advanced mixing with a multitude of tracks. On the other hand, FxFactory offers three audio clean-up/restoration effects from Accentize, which are better than FCP’s stock enhancement and work well within this video application.

To wrap it up, you could dive in deep and get everything on this list. If you do, then you’ll have a toolkit that’s comparable to Adobe Creative Cloud in most respects. Many of these are available through the Mac App Store. Some are even free. It will cost more up front than simply purchasing Final Cut Pro alone. However, over the course of one or two years, it will prove cheaper than the cumulative subscription fees of other companies, like Adobe. This is even more true if you own multiple machines on the same Apple ID. Last but not least, don’t forget DaVinci Resolve. While it can certainly replace much of this list on its own, the free version is also a good addition to the kit to augment FCP. Even if you don’t edit with it, Resolve is useful for batch transcoding and, of course, color correction. It’s available from the Blackmagic Design website or the Mac App Store.

Have fun building your 2024 DIY Final Cut Studio!

©2024 Oliver Peters

Hybrid Top-Down Mixing

There are two overarching concepts that determine how modern music is mixed. The first is the room workflow, which is divided between ITB and OTB mixing. ITB (in the box) mixing means that you are working totally within the confines of a DAW (digital audio workstation) application, like Logic Pro, Cubase, or Pro Tools. OTB (out of the box) mixing means that you are working in a studio with a physical console and a myriad of outboard audio processing gear. This is how music was always mixed before the invention of the DAW. Modern studios and mix engineers use both techniques. Often a hybrid approach is employed, such as mixing in a DAW, but routing some of the signals out to external hardware and back into the DAW.

The second concept is whether you are mixing bottom-up or top-down, which is the subject of this post. In a typical bottom-up mix nearly all of the processing is applied to the individual tracks or instruments. Those channels are mixed together through the stereo output or 2-bus. Only basic compression and limiting is applied to the 2-bus signal to “glue” the mix together and tame signal peaks.

In the opposite approach – top-down mixing – a lot of processing is applied to the 2-bus to shape and control the signal (EQ, expanders, exciters, compression, limiting, etc), but very little is applied to each individual channel/track/instrument. The idea here is that you are “driving” the signal into the effects chain on the 2-bus, where the total mix is being shaped.
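The “driving” idea can be sketched in a few lines of Python. This is purely an illustration under simplifying assumptions: a static threshold and ratio with no attack/release envelope or makeup gain, which is not how any real DAW compressor is implemented.

```python
def db_to_linear(db):
    """Convert a decibel value to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def simple_compressor(samples, threshold_db=-12.0, ratio=4.0):
    """Toy static compressor: any sample above the threshold is scaled
    down by the ratio (no attack/release envelope, no makeup gain)."""
    threshold = db_to_linear(threshold_db)
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            # The portion above the threshold passes at 1/ratio strength
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

# "Driving" a hotter mix into the 2-bus chain produces more gain reduction.
mix = [0.1, 0.3, 0.6, 0.9]   # summed 2-bus signal on a linear scale
print(simple_compressor(mix))
```

The harder the summed mix pushes past the threshold, the more the 2-bus chain shapes it, which is the essence of the top-down approach.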

Analog console evolution

Bottom-up versus top-down workflows stem from the evolution of analog console design. Mixing consoles are built around a series of input channel strips, whose signals are combined into a mono, stereo, or multichannel output. The earliest consoles only had channel strips with preamp and volume controls. As these evolved, most console channel strips also gained high/low-pass filters and equalization. Any other processing (noise gates, de-essers, exciters, reverbs, compression, limiting, etc) had to be handled by external hardware. Naturally, such equipment was and still is expensive, so most studios could only afford a limited amount of this gear. The result was often to only apply this processing to the output of the mix.

Solid State Logic changed this with the introduction of its consoles, notably the SSL 4000 series. Their key innovation was that the circuits for each channel strip integrated a full dynamics control section (compression, gating, expansion) in addition to the preamp, equalization, and filtering. This design made it possible for a recording engineer/mixer using an SSL console to employ a wider degree of control for each individual instrument without the need for external gear. Mixers also liked the tonal decisions made by SSL’s design engineers, so SSL consoles became popular in studios around the world. Many of the most beloved rock records of the late 20th century were recorded and/or mixed using SSL consoles.

Applying analog console design to digital software

Fast forward to the present and you’ll see that the way people design studios and mixes – whether ITB or OTB – is based on concepts from the analog days. Modern DAW software mimics the layout of tracks and channel strips. Depending on the application, some have channel strips with built-in processing effects, some rely only on effects inserts (built-in and third-party plug-ins), but many use a combination of the two.

For example, Fairlight (Blackmagic Design DaVinci Resolve) offers a selection of built-in channel strip effects. Click on a section and adjust the controls. In Logic Pro, if you click the EQ panel at the top of any channel strip, you activate the default digital parameter EQ on that channel. Click on the gain reduction panel and the default Platinum compressor is applied. Of course, you can swap these out for other built-in or third-party tools.

The downside of this approach is that each separate effect has its own GUI, so opening just an EQ and a compressor for multiple channels quickly covers your screen. Many users prefer various third-party channel strip emulations, such as those that mimic SSL or Neve hardware. The advantage to these is that all of the different processing tools for a channel strip open up within a single, cohesive GUI.

There are plenty of SSL clones, but the Waves CLA Mixhub takes this up a notch by turning the channel strip plug-in into a virtual console. Apply a Mixhub plug-in to a series of channels (up to 64 total) and assign each to one of eight “buckets.” Then in Mixhub’s bucket view you can see up to eight channels of EQ or dynamics side-by-side within a single window. It’s like having a virtual SSL console on your computer screen.

Some analog console manufacturers offered a set of integrated submix buses. This architecture has been carried over into DAWs. You can combine and route a group of similar instruments over to one or more buses. The individual channel levels are set relative to the rest of the channels in that group and then the bus level is set relative to the other buses as part of the mix.

Typically, buses are either VCA or summing. In the simplest of terms, a VCA bus is a glorified remote control where a single bus fader applies relative volume changes to the channels within the group. In most cases the VCA bus isn’t actually working with a combined signal (although it looks like that in the GUI). This affects the gain of the signal and whether or not effects (plug-ins) can be applied to that bus. A summing bus actually combines the individual signals into one and then allows for absolute volume changes to that group, along with the addition of processing. This design requires more attention to gain-staging, but makes the hybrid top-down mixing solution possible.
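The VCA-versus-summing distinction can be sketched in Python. The function names and the toy bus limiter here are hypothetical, purely for illustration; the point is that only the summing bus produces one combined signal for a plug-in to act on.

```python
def vca_bus(channels, fader_db):
    """A VCA bus is a remote control: the same relative gain is applied
    to each member channel. The signals are never combined on the bus,
    so there is no single signal for a bus plug-in to process."""
    g = 10.0 ** (fader_db / 20.0)
    return [[s * g for s in ch] for ch in channels]

def summing_bus(channels, fader_db, bus_fx=None):
    """A summing bus combines the member channels into one signal first;
    processing (bus_fx) and the fader then act on that combined signal."""
    summed = [sum(samples) for samples in zip(*channels)]
    if bus_fx:
        summed = bus_fx(summed)
    g = 10.0 ** (fader_db / 20.0)
    return [s * g for s in summed]

kick  = [0.5, 0.0]
snare = [0.0, 0.4]
# VCA: two separate (scaled) channels. Summing: one combined signal,
# here run through a crude hard limiter before the bus fader.
print(vca_bus([kick, snare], -6.0))
print(summing_bus([kick, snare], -6.0, bus_fx=lambda s: [min(x, 0.7) for x in s]))
```

Because the summed signal is what gets processed, the levels you feed into a summing bus matter, which is why this design demands more attention to gain-staging.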

Pros and cons to analog emulation

There is no right or wrong way to mix. There are plenty of award-winning mixers who fall into any of these camps. You have to develop a methodology that works best for your needs and style. I mix music on a casual basis, mainly using Logic Pro. I own a number of channel strip plug-ins that were inspired by or emulate noted analog consoles – SSL, Neve, Focusrite, etc. I’ve created mixes using each of these, as well as just with stock Logic Pro plug-ins.

Regardless of how many demos you’ve seen and heard or how many YouTube influencers have touted a product, you might not hear much of a difference between these different tools. The truth is that all compressors, all EQs, all reverbs, and so on do pretty much the same thing. Some have different coloration to the sound. Some are “character free” – i.e. clean. But once you get everything into a mix and are no longer fixating on isolated channels, you’ll realize that the differences are pretty slight. This is especially true when listening to your mix on headphones or small/medium near-field desktop speakers as opposed to A-level recording studio monitors.

If you’ve applied a plug-in like an analog-style channel strip onto each channel, then this becomes the virtual equivalent of an analog console and represents a starting point for a bottom-up mix. Likewise, you can insert a stack of processing plug-ins onto the 2-bus and push the mix into these as a classic example of top-down mixing.

The hybrid mixing method

Here’s the approach I’ve settled on. The first step is to bring everything in with flat faders. Not everyone agrees, but I do worry about gain build-up in the process. Logic Pro runs in 32-bit float, but I’m not sure if that’s true for every third-party plug-in that I use. I have heard distortion with some when the input level was too hot. So, I’ll typically drop the gain of each track/channel by -4dB to -6dB. Next, I’ll group common instruments into a set of Summing Track Stacks, which is Logic Pro’s way of automatically creating a summed group bus.
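As a rough sketch of why a static trim helps, assuming simple linear summing (real program material rarely peaks on every track at the same instant, so this is a worst-case illustration):

```python
def db_to_gain(db):
    """Convert a decibel trim value to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def trim(samples, db):
    """Apply a static gain trim (negative dB lowers the level)."""
    g = db_to_gain(db)
    return [s * g for s in samples]

# Worst case: eight tracks all peaking near -2 dBFS at the same instant
# would sum far past full scale (1.0). Plug-ins that don't process in
# 32-bit float can distort when driven that hot, so a -6 dB trim on
# every track cuts the summed level roughly in half.
peak = db_to_gain(-2.0)                  # about 0.794 on a linear scale
worst_case_sum = 8 * peak
trimmed_sum = 8 * trim([peak], -6.0)[0]
print(worst_case_sum, trimmed_sum)
```

A -6 dB trim works out to a gain factor of about 0.5, which is why a modest per-track pull-down buys a useful amount of headroom at the buses.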

At a minimum on a rock or pop track, I’ll have buses for drums, bass, keys, guitars, and vocals. These in turn are routed to a new bus, which becomes my Submix bus. It feeds the Stereo Output (2-bus). I will then apply my channel strip emulations (or comparable plug-ins) to each instrument group (Track Stack). Usually some light compression and limiting is applied to the 2-bus.

The bulk of my processing to shape and color the sound of similar instruments is happening at the buses, which is a classic top-down mixing method. However, it’s a hybrid, because I’m doing that to instrument groups instead of the full mix. After all, if I’m going to apply the same basic effects to each instrument, it’s more straightforward to apply it once to the group. In addition, I can still control the level and processing of one bus relative to another, rather than have to tweak individual channels. For instance, once I get the channels of the drum mix right, then I only need to deal with the whole drum kit as a single unit.

Since I’m applying processing at the bus level, I can then “push” the mix of those instruments within the group into that processing. Using this method means that there are fewer effects that I need to apply to individual channels. Sure, I might EQ an individual vocal track or apply a guitar amp emulation to a guitar DI track. But that’s less involved and often better sounding than monkeying with each individual track.

Let’s boil this down to some simple steps: 1) Import tracks, create buses, and get a nice overall balance of volume and panning. 2) Apply processing to each group/bus and adjust. 3) Tweak individual tracks to enhance the balance, clarity, and depth within that group and the mix. 4) Apply light 2-bus processing to “glue” the mix.

Which emulation or plug-in chain should be used on the group buses? I’m primarily looking for EQ and filtering to shape/color the sound and compression to solidify the group and tame extreme level changes. Picking a single chain is something I’m still experimenting with. I recently mixed five songs – each with a different set of plug-ins applied to the groups. This wasn’t intended to be a “scientific” comparison. Rather, I’m looking for the best workflow for me. My choices included stock Logic plug-ins, Waves CLA MixHub, Waves Scheps Omni Channel 2, Sonimus SatsonCS, and Kiive Audio Filkchannel Strip | MK2. A different approach for each mix.

I’ve worked with a range of options for 2-bus processing. Usually that means a little extra EQ followed by compression/limiting. I like the stock Logic compressor (picking one of its seven variations), TDR Nova, FabFilter Pro-MB, Sonible smart:comp 2, and/or others. I’ll bounce out that mix and do a separate mastering pass, which I previously explained. Of course, depending on the style of the music, I will use additional and/or alternative plug-in options. But this is a quick description of how I apply the hybrid top-down mix concept.

Not all third-party plug-ins work equally well in all audio and video applications. Most of mine are fine in Logic Pro and Audition, as well as in Premiere Pro and Resolve/Fairlight. However, Final Cut Pro has definite issues with many of these third-party plug-ins. For example, it crashes when I use the Brainworx emulations. Therefore, when selecting plug-ins, test a trial version first if it’s available.

For a more in-depth look at these workflow ideas, here’s a good tutorial by Nashville mixer Joe Carrell: Part 1 and Part 2. Another good listen is this recent interview with Grammy-winning mixer, Andrew Scheps, who discusses his approach to mixing, as well as the collaboration with Waves on the recently-updated Scheps Omni Channel 2 plug-in.

©2024 Oliver Peters

DIY Music Mastering with Logic Pro

I’ve owned Apple’s Logic Pro for nearly a decade. Since I’m a video editor and not a professional audio mixer, it’s been a tool to deal with certain issues, test audio plug-ins for reviews, and to capture and clean up vinyl recordings. I’ve always had a love for music and if I hadn’t become a video editor, my path might have gone the route of audio engineering.

Two years ago I decided to dive deeper into what Logic Pro was really designed for – mixing music. Over this time, I’ve mixed multiple songs for fun from available, downloadable multitracks. It’s both a hobby and a tool that helps me to better understand how to improve my mixes on video projects. With all the free or low-cost DAW applications on the market, I would highly recommend this to any video editor. Don’t forget that if you have DaVinci Resolve, you have the built-in Fairlight DAW. If you subscribe to Adobe Creative Cloud, then Adobe Audition fits the bill, as well.

Over these past two years my mixing journey has evolved. At the start, I’d build up tracks with plug-ins to shape the sound. Then I’d apply a series of processing plug-ins on the 2-bus (stereo output bus). Some mixers call that approach “top down” mixing. This means they apply a series of plug-ins on the 2-bus and mix with those enabled, thus “pushing” the track mixes into them. The opposite approach is to build up the tracks first and then, as a final stage, apply minimal 2-bus processing. Both approaches work and there are successful engineers who use each of these as their workflow. For much of this time, my workflow was a hybrid of both.

Mastering as you mix versus mixing, then mastering

As my mixing changed, I decided to turn this into a two-stage process: mixing and mastering. In most commercial music mixes, the final sound you hear is the result of the enhancements performed by a mastering engineer to the studio mix done by the recording/mixing engineer. Originally, mastering was a simpler process of level adjustment, compression, and limiting to prepare a recording for successful cutting, pressing, and distribution on vinyl. Modern mastering engineers are less concerned with vinyl and tend to focus on levels for CDs and streaming services.

Good mastering can sonically enhance the mix and add depth. Some mastering engineers also provide the service to master from stems (instrument groups), giving the engineer the ability to rebalance the recording in the mastering suite. As the mastering process has changed, so have the tools. The types of consoles, processing hardware, plug-ins, and even monitors will be different in most mastering suites as compared with mixing studios.

I typically work in 48kHz projects. I suppose if I did this for a living, it would be 96kHz. In my two-step approach, I complete the mix adding only light compression and limiting on the 2-bus. This is bounced out (exported) as the “unmastered” track (48kHz/32-bit float) and then imported into a new, fresh Logic Pro project. Here I add a series of mastering-style plug-ins, shaping the sound to add width, impact, and loudness. Regardless of the tools used, the advantage of this two-stage process is that you think differently. When you are trying to do everything within the same project, then you are always mixing. You’ll tweak a plug-in on the 2-bus and then decide to go back and tweak again at the track level. Thus, you can find yourself chasing your tail.

When you apply this separation (often with a literal day or two in between), then it allows you to focus on a better mix while you are mixing, without the crutch of smashing the mix through 2-bus plug-ins. This is essentially how mixers work in studios with real consoles (as opposed to “in the box” DAW mixing). Then, when you shift to the mastering stage, you can focus on enhancing the mix, not necessarily changing it. The mastered version is then bounced out as the final file (48kHz/24-bit, dithered).
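To make that final dithered bounce concrete, here’s a minimal numpy sketch of TPDF dither applied before 24-bit quantization. This is a conceptual illustration only, not Apple’s actual dithering algorithm; the function name and the test tone are my own for demonstration.

```python
import numpy as np

def dither_to_24bit(x: np.ndarray) -> np.ndarray:
    """Quantize a float mix (-1.0..1.0) to 24-bit integers with TPDF dither.

    A simplified sketch: Logic Pro handles dither internally on bounce;
    this just shows why low-level noise is added before reducing bit depth
    (it decorrelates the quantization error from the music).
    """
    full_scale = 2 ** 23 - 1              # 24-bit signed maximum
    lsb = 1.0 / full_scale                # one quantization step
    # Triangular (TPDF) dither: sum of two uniform noises, +/- 1 LSB peak
    rng = np.random.default_rng(0)
    dither = (rng.uniform(-0.5, 0.5, x.shape) +
              rng.uniform(-0.5, 0.5, x.shape)) * lsb
    quantized = np.round((x + dither) * full_scale)
    return np.clip(quantized, -(full_scale + 1), full_scale).astype(np.int32)

# One second of a quiet 1 kHz tone at 48 kHz, reduced from float to 24-bit
t = np.arange(48000) / 48000.0
mix = 0.25 * np.sin(2 * np.pi * 1000 * t)
out = dither_to_24bit(mix)
```

Without the dither line, quiet fade-outs quantized to fixed steps can sound grainy; with it, the error becomes benign noise.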

Enter the Logic Pro Mastering Assistant

Logic Pro 10.8 introduced an intelligent Mastering Assistant plug-in. This is always applied (initially disabled) as the last effect on the stereo output bus. Once you click on the name, it is automatically enabled and immediately analyzes the mix, then applies suggested processing (more in a moment). Once this step is completed, you can move the Mastering Assistant effect to a different slot in the effects stack. For example, I typically apply a metering plug-in after Mastering Assistant.

Of course, Apple is by no means the first to offer this. There are other plug-in developers and online services that offer similar “instant mastering” products. You might look at something like this and consider it a gimmick; but, Mastering Assistant in Logic Pro is actually quite good. On the other hand, like all art, this is subjective. Whether or not you get equal results to a leading mastering engineer is hard to say. Probably not, but it might just be good enough for your needs.

The Mastering Assistant signal chain is a combination of intelligent EQ, exciter, compressor/limiter, and stereo imager (width). There are four “character” styles: clean, valve, punch, and transparent. Only clean is available on Intel Macs. Depending on your track, the recommended EQ curve after analysis is a series of boosts and cuts across the frequency range of the mix.

From what I’ve seen so far, many of the mixes tend to result in what mixers call the “smiley face” curve. This means that the low and high ends are boosted, while much of the midrange is attenuated. Some analysis results in an EQ curve with the “Pultec trick” – a big boost at the bottom with an immediate cut right next to it at the low end. This tends to focus the low end. You cannot change the specific boosts and cuts within the analyzed EQ curve; however, there are custom low-end and high-end shelf controls, plus a midrange adjustment. The slider at the left lets you manually accentuate or tame the boosts and cuts of the generated curve.
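The “Pultec trick” shape is easy to visualize numerically. The sketch below models each EQ band as a Gaussian bell on a log-frequency axis – a rough conceptual model, not a real filter design – with assumed center frequencies (60 Hz boost, 150 Hz cut) chosen only for illustration.

```python
import numpy as np

def bell(freqs, center_hz, gain_db, width_oct=1.0):
    """A Gaussian 'bell' EQ band on a log-frequency axis, in dB.

    Conceptual only: real EQs use biquad filters, but the dB shape of a
    bell band is close to a Gaussian over octaves from the center.
    """
    octaves = np.log2(freqs / center_hz)
    return gain_db * np.exp(-(octaves / width_oct) ** 2)

freqs = np.array([30.0, 60.0, 120.0, 500.0, 1000.0])
# Big low boost with a cut right next to it: the net curve lifts the
# bottom while pulling down the mud region just above it.
curve = bell(freqs, 60.0, +5.0) + bell(freqs, 150.0, -3.0, width_oct=0.7)
```

Summing the two bands shows the point of the trick: the net curve is strongly positive at 60 Hz but dips negative around 120-150 Hz, which is what “focuses” the low end.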

The dynamics are set for the common streaming service target: -14 LUFS and -1 dB true peak. Adjust the loudness dial to increase or decrease levels, which can be metered when you hit the start and reset buttons. Most of the time, the default value is fine. There’s also an exciter circuit, which is disabled by default. An exciter adds saturation (harmonic distortion) to the top end and can make some mixes sound brighter and more open. Care is needed when you use this, because it can add distortion. For example, in my mastering template, I apply Logic’s Vintage Console EQ plus the FabFilter Saturn 2 saturation plug-in ahead of the Mastering Assistant. These already add saturation and character, so applying the exciter function usually pushes the mix over the edge with hot peaks sounding raspy and unpleasant.
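For readers who want to sanity-check a bounce against those targets, here’s a rough numpy sketch. Note the hedges in the comments: real LUFS measurement uses K-weighting and gating per ITU-R BS.1770, and true peak requires oversampling; this approximation uses plain RMS and sample peaks, which is close enough to see where a file sits.

```python
import numpy as np

def db(x: float) -> float:
    """Convert linear amplitude to decibels (floored to avoid log of zero)."""
    return 20.0 * np.log10(max(x, 1e-12))

def check_targets(mix: np.ndarray):
    """Rough check against streaming targets (-14 LUFS, -1 dBTP).

    A sketch only: true LUFS needs K-weighting and gating (ITU-R BS.1770)
    and true peak needs oversampling. Sample peak and RMS give a first
    approximation.
    """
    peak_db = db(float(np.max(np.abs(mix))))
    rms_db = db(float(np.sqrt(np.mean(mix ** 2))))
    return peak_db, rms_db

# A 440 Hz sine at half scale: peak ~ -6 dB, RMS 3 dB below that
t = np.arange(48000) / 48000.0
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
peak_db, rms_db = check_targets(tone)
```

If your unmastered bounce reads well below -14 on a real LUFS meter, that headroom is exactly what the mastering stage’s loudness dial makes up.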

At the bottom right of the interface is a width control to spread or tighten the stereo image. Changing width can induce phase problems, so there’s a phase correlation meter next to it. I believe this is a full-spectrum control (I’ve found no specific documentation), which isn’t always what you want. For example, widening a bass guitar or a honky-tonk piano often causes part of the mix to be out-of-phase. In the modern world, mono listening is less of a factor, but I’d rather err on the side of caution. For example, iZotope’s imaging effect is multi-band. This allows me to keep the low frequencies mono, while spreading the mid and upper ranges of the mix.
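The relationship between width and the correlation meter can be demonstrated with basic mid/side math. This is a generic full-spectrum imager sketch (the kind the paragraph above cautions about), not Apple’s or iZotope’s implementation; a multi-band version would additionally low-pass-filter the side signal to keep the lows mono.

```python
import numpy as np

def correlation(left: np.ndarray, right: np.ndarray) -> float:
    """Phase correlation: +1 = mono-compatible, 0 = decorrelated, -1 = out of phase."""
    denom = np.sqrt(np.mean(left ** 2) * np.mean(right ** 2))
    return float(np.mean(left * right) / denom) if denom > 0 else 0.0

def widen(left: np.ndarray, right: np.ndarray, width: float = 1.5):
    """Full-spectrum mid/side width control (a simple single-band imager)."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0 * width   # scaling the side signal sets width
    return mid + side, mid - side

# Two mostly-shared channels with a little independent content per side
rng = np.random.default_rng(1)
shared = 0.1 * rng.standard_normal(48000)
l = shared + 0.05 * rng.standard_normal(48000)
r = shared + 0.05 * rng.standard_normal(48000)
wl, wr = widen(l, r, width=2.0)
```

Running this shows the trade-off directly: pushing `width` up drops the correlation reading toward zero, which is why the meter sits next to the control.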

Finally, the plug-in includes loudness compensation for comparison. This way you aren’t fooled by the “louder sounds better” phenomenon when comparing active versus bypassed states. Be sure to turn this off when exporting the mix and/or checking against a downstream meter.
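What loudness compensation does under the hood amounts to gain-matching the two states before you compare them. A minimal sketch, assuming simple RMS matching (a real implementation would match perceived loudness in LUFS):

```python
import numpy as np

def match_loudness(processed: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Return 'processed' gain-matched to the RMS level of 'reference'.

    A sketch of loudness compensation for A/B comparison; it removes the
    'louder sounds better' bias by equalizing levels, not the processing.
    """
    gain = np.sqrt(np.mean(reference ** 2) / np.mean(processed ** 2))
    return processed * gain

rng = np.random.default_rng(2)
reference = 0.1 * rng.standard_normal(48000)   # the bypassed mix
processed = 2.5 * reference                    # "mastered" version, ~8 dB hotter
matched = match_loudness(processed, reference)
```

After matching, any preference you hear between the two states is due to the processing itself rather than the level jump.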

Mastering the mix

In my mastering project, I’ll bring the track in at -2 dB, route the channel to a submix bus, and then to the 2-bus (stereo output). I usually apply the Console EQ and Saturn 2 to the submix. Then the Mastering Assistant and Multimeter are applied to the 2-bus. The Console EQ is only there to provide some drive control, but it does give me the option for broad 3-band tweaks and low-end filtering. Saturn 2 is usually set to the “subtle tube” character with some occasional tone control tweaks.

A small percentage of the mixes I’ve done via Mastering Assistant don’t sound quite as good as I would have liked. And so, I also keep iZotope Ozone and the Waves J37 tape plug-ins handy in the template. These are disabled unless I need them. Overall, the results I get from Logic Pro’s Mastering Assistant sound very similar to the results from Ozone’s built-in Mastering Assistant. However, Ozone includes many more modular effects. Each module offers far greater control than the simpler Logic version. But, sometimes Ozone will sound too aggressive.

I recently tested a track using Mastering Assistant, Ozone, as well as the new Waves online mastering service. What I found was that Waves sounded a lot like the original, only louder. You have a few options, but you can’t adjust these. There wasn’t much EQ change, so the Waves master didn’t sound as open as the others.

iZotope’s Ozone and Logic’s Mastering Assistant were close to each other, but I’d give the latter the edge. I think this is partly due to the fact that its EQ changes feature many “micro-EQ” adjustments that sculpt the sound. These small changes go far beyond what you can do in most digital parametric EQ plug-ins. Consequently, for my mixes and sensibilities, what Logic Pro offers (together with a few additions) covers 80-90% of my mixes.

This new Mastering Assistant is a good addition to Logic Pro and likely offers the right level of adjustability for most Logic Pro users. I wish that Apple had built in a bit more control, plus additional presets for other target loudness ranges. For example, CD mastering or TV mixes use different targets than a streaming service like Spotify or iTunes. Hopefully some future update will add a few of these presets. Of course, if you do own Logic Pro, then there’s nothing stopping you from exporting a TV mix from your NLE for a show, promo, or commercial and running it through the Mastering Assistant. I’ve tested this on several files and the results were an improvement over the NLE mix. In short, Mastering Assistant is a useful addition to an already powerful DAW.

©2024 Oliver Peters