Final Cut Pro, Logic Pro, iPad… Oh my!

This week Apple dropped a big one on its users. By the end of May, there will be new, mobile versions of Final Cut Pro and Logic Pro for the iPad. These apps feature interfaces optimized for touch and the Pencil. Both will also include some advanced features not yet found in the desktop versions. Presumably updates of those will also come in short order to maintain compatibility.

The other two Apple professional video applications – Motion and Compressor – have been left out of the loop. At least for now. System requirements for Final Cut Pro are somewhat different than for Logic Pro. FCP will require a newer iPad Pro or iPad Air. Logic Pro will run on any iPad with an A12 Bionic chip or later. Both require iPadOS 16.4 or later.

The timing of this announcement is curious, since it precedes WWDC 2023. The speculation is that it was designed to beat Google to the punch one day before the Google I/O 2023 event, where details of the Pixel Tablet would be revealed. It’s an iPad competitor, although not equal to the iPad Pro models. However, by making this early iPad-related announcement, Apple gains the attention of the tech press and might pull some attention away from Google. Hmm…

What does the competition look like?

I reviewed LumaTouch’s LumaFusion when it was launched five years ago. At a casual glance, I would say that Apple picked up design inspiration from LumaTouch. Of course, it could be argued that LumaTouch mimicked FCP in the first place. There’s also Adobe Rush and Blackmagic Design’s port of DaVinci Resolve to the iPad. Rush uses the same UI for mobile and desktop versions, but it’s completely different than Premiere Pro. Unlike the others, Resolve’s iPad application is largely the same as Resolve on the desktop.

Blackmagic Design’s expansive NAB booth had only one demo pod showing Resolve running on the iPad. It worked well, but didn’t feel optimized. I would imagine it’s still being refined. According to Blackmagic’s PR staff, they do have an idea of sales numbers, but not demographics. So they don’t actually know what sort of users have picked up this solution, unless they hear directly from them.

Toss in iMovie and GarageBand and you now have at least five NLEs and two DAWs running on the iPad. (Search the App Store and you’ll find more, but most are ones you’ve never heard of and are not marketed for high-end use.) I have no doubt that Final Cut Pro and Logic Pro will be the most fluid performers among these various options. The obvious question is: what type of user are these really designed for?

Although I routinely use both Final Cut and Logic, I will focus the rest of the discussion around Final Cut Pro. However, if your focus is Logic Pro, then check out this video by Nathan James Larsen for some thoughts and considerations.

Who is today’s professional?

Split the professional user market into two camps. In one camp, you have traditional editors who cut commercials, corporate videos, television shows, and films. In the other, you have content creators who produce much of the media seen on social platforms. Social media editors range from individual influencers to mini-marketing firms. They produce unboxing and review videos for a fee (or ad revenue) and populate multiple YouTube (and other) channels under various channel names. The needs of these two types of pros – traditional and social – can diverge wildly, but there is also some overlap. For instance, many of the larger social media content creators have made investments into traditional high-end gear – lighting, cameras, post infrastructure, etc.

There are many traditional pros who have adopted Final Cut Pro and Logic Pro as their apps of choice – more internationally than in North America. Apple has designed these products to appeal to the single user who wants to build out the software with third-party tools to be what they need it to be. Unlike Blackmagic Design, Adobe, or Avid, Apple has decided not to toss in everything and the kitchen sink. This has been very frustrating for traditional professional editors who might adopt FCP if Apple were more responsive to their requests. But these requests come from a niche part of the market. If all of the requests were addressed, would it move the sales needle for Final Cut Pro? I seriously doubt it.

Meanwhile, Apple has been looking at a different type of pro who is working the social media side of the market. After all, the needs of social media content creators are different than those of a feature film editor. Many up-and-coming video enthusiasts have their sights set on being the next big YouTuber rather than the next Walter Murch. The application that works well for this new type of pro and aspiring pro is more broadly applicable to a wider set of potential buyers. Furthermore, this demographic skews younger – meaning that they are less likely to rely solely on traditional desktop computers and laptops (“What’s a computer?”). They are also more open to not owning gear and software – i.e. open to subscription models.

Current US ownership of tablets is around 66%. Gen Z (those born in the late 1990s or early 2000s) is around one-fifth of the US population. From most studies, these younger users tend to use multiple devices, prefer mobile devices, and are less patient with technology issues, like slow or less-than-fluid operations.

Designing for the needs of the “new” professional

It’s into this market that Apple is now releasing Final Cut Pro and Logic Pro for the iPad. The iPad itself has always been a product that’s hard to pin down. Is it a creation device or a consumption device or both? For me, the iPad Pro models still don’t make me want to use them over a standard laptop or desktop unit. Maybe I’m too old, but I also don’t produce content on location. I’m also not a fan of the less accurate touch environment. I prefer the resources available from a traditional computing device. However, there are many users of all ages who travel and produce media along the way. For them even a small laptop is more than they’d like to carry. This is another group for whom the iPad Pro is ideal.

Whatever the ideal market may be, Apple still has the need to convince buyers that an iPad Pro is a comparable creative tool to any of its other computers. From this angle, shouldn’t a product labelled Pro also have software labelled Pro if it comes from the same company? Whether it really is or not.

To meet that objective, Apple has seemingly diverted the significant engineering and design resources of the ProApps team away from a big series of updates for the desktop applications in order to create Final Cut Pro and Logic Pro for the iPad. I have no inside knowledge, but it’s quite possible that this has helped drive some of the recent leadership departures within the ProApps team. Or maybe that’s just coincidence. Whatever the case, it will be difficult to maintain parity between the two versions of these applications.

Features

I have not tested either of these two applications. However, this iJustine “first look” video is a good place to start if you are curious. Based on the early videos that I’ve seen, I believe the claim that these versions will only differ slightly from their desktop siblings is hyperbole at best. These are 1.0 versions of reimagined applications.

One feature that has received positive response on forums is the Scene Removal Mask for FCP on the iPad. The demo shows a person on a solid white background. This is an easy replacement. The feature is much like Keyper by Sheffield Softworks (available through FxFactory). I have tested that and the use cases are minimal. Once you use a real-world background rather than a white cyc, it’s hard to perfectly refine the edge of a moving person. Keyper – and presumably the Scene Removal Mask as well – is fine when you want to place text behind a person or add some visual effect to the background, but not the person. In such situations, the edge refinement issue is less obvious. Maybe Apple’s in-house approach yields better results, but I’m skeptical.

On the other hand, there are two features that look intriguing to me. The first is Live Drawing, which takes advantage of the Pencil. Draw lines on the screen to highlight something and FCP will then “animate” those lines by stacking a series of connected title clips. The other is an elegant form of music retiming. Adobe Premiere Pro and Audition have had a similar feature for years. The FCP version “automagically” lengthens or shortens a music track to fit your video as you slide the end of the clip. It seems to work more elastically than Adobe’s feature. It’s not clear yet what the actual control variables are nor what audio artifacts occur when you do this. 

Things to consider before you leap

There are several things to consider, such as getting media in and out of an iPad and whether or not you can work with an external drive. According to iJustine, media is ingested and stored within the FCP library file on the iPad itself. If you intend to do serious work, you’ll probably want a 2TB model. Right now a fully loaded 2TB iPad Pro with Pencil, Smart Keyboard Folio, and AppleCare (2 years) is about $2,900, plus taxes and cellular service plan.

Then there’s your Apple iCloud plan. This is optional and not required for FCP or Logic to work on the iPad. However, how many iPhone and iPad users actually bother to manage their cloud back-up settings? Mobile devices like the iPhone and iPad are designed to back up to iCloud. So, if you do intend to back up a device with a lot of media on a 2TB internal drive, then you will likely also have to upgrade to a larger Apple iCloud+ plan. Apple has historically been stingy with storage, limiting free space to 5GB. While iCloud+ plans aren’t onerous, a 2TB plan in the US is $9.99/month. iCloud is also more of a consumer solution and doesn’t compare well to Dropbox or Google Drive in professional scenarios.

I imagine that Apple sees the ideal workflow this way. Start by shooting with an iPhone, the iPad itself, or a DSLR. Media is then brought into the iPad via AirDrop or an SD card. You do a preliminary edit in FCP on the iPad. Or even finalize it there. If you need to refine it further, transfer the FCP library file to your Mac or MacBook Pro computer and complete the project.

In this first iteration, you cannot move a project/library from the desktop version of FCP back to the iPad. There are also no third-party Motion templates or plug-ins available for the iPad, although that’s listed as “coming soon” by Apple. Watch Larsen’s video for Logic Pro. He also makes a good point about third-party audio plug-in development for the iPad.

The subscription pricing model

This brings me to the biggest change. Both Final Cut Pro and Logic Pro for the iPad are only available through subscriptions ($4.99/month or $49 annually for each) with a one-month free trial. It’s the first time Apple has used subscription pricing for any of its key applications. This indirectly brings into question how pricing will be handled going forward for any of the desktop applications and even macOS, iOS, or iPadOS. WWDC is coming up, so hopefully many of these questions will be answered.

I’m not sure the mechanisms and regulatory guidelines for the two Apple App Stores are the same. Subscription pricing for mobile apps was introduced a while back. If you are a Filmic Pro user, the most recent version is only offered through subscription. If you purchased Filmic Pro before this change, you can still use the legacy version on your device(s) without change. But if you want to update, then you shift to a subscription for the new version (free download with in-app purchases).

Final Cut Pro and Logic Pro users have become quite comfortable with the idea of buying the app once and getting free updates – including content, in the case of Logic Pro users. Potentially Apple could shift to paid updates or in-app purchases instead of subscriptions. However, as Apple learned with the legacy version of Final Cut, paid updates introduce complex tax implications in how a corporation realizes revenue. Ultimately such a move could impede the timely development of new features.

My understanding of the current App Store rules is that a developer (presumably including Apple itself) can only charge for an update if the changes are so significant that the application can be released as a new product. Pixelmator did this in the change from Pixelmator to Pixelmator Pro. In May, Pixelmator released Photomator as a new desktop application. It is a hybrid between Apple Photos and Pixelmator Pro designed specifically for photo editing, a la Adobe’s Lightroom. This is priced in a similar manner to the updated Filmic Pro. Download the application from the App Store for free, but then pay a subscription or a one-time charge via the in-app purchase mechanism.

Hypothetically, if a new whizz-bang version of Final Cut Pro were released – let’s say Final Cut Pro XI – then maybe Apple could legitimately charge you another $300 (or whatever). But this would be a completely separate application and not an update to your existing installed software. At the moment, what I’m discussing is nothing more than pure speculation and we’ll need to wait to see what happens.

Final thoughts

Will a shift to subscription – at least for mobile apps – stick for Apple and is it even a good move? As with any professional subscriptions, if you are making money from your work – as opposed to the hobbyist or student user – then application subscriptions are a cost of doing business. That’s typically how most Adobe customers have reconciled this. However, if you are only a casual user, why spend the money, especially when there are other options? Part of the reason Apple creates software is to showcase the potential of their hardware and generate hardware sales. For a mobile app, you may well spend $10 – $50 (once) to become an occasional user. By going subscription, I contend that Apple risks losing this tier.

As I said earlier, I’m viewing this through the lens of an older person. Apple is seeking a demographic that seems to be willing to rack up a large amount of cumulative monthly expenses, from Netflix and Apple TV+ to HelloFresh and software. This isn’t all bad and some of it can even be a trade-off in cost versus time savings. Also, this market segment’s view of owning perpetual software licenses is different than mine. Or as iJustine pointed out, cut out a couple of Starbucks trips a month and you’ve paid for the subscription.

Apple is also seeing that Microsoft, Adobe, and Avid have successfully made the switch to subscription pricing plans, so maybe they can do it, too. I would caution that while a shift to subscription will probably fly for a brand new product, shifting the existing products to subscription is another matter – they may want to consult with Waves first. Just sayin’. In any case, I hope they leave the price structure of their desktop software alone.

I’ll close my thoughts with the issue of innovation. Will any of these changes – mobile versions, subscriptions, etc – spur new and significant features in their professional desktop applications? I don’t know. All I can say is that for traditional professional editors, Apple is running a distant third behind Blackmagic Design and Adobe. Even Avid is picking up innovation awards for items that Final Cut Pro editors would love to see in their favorite tool. However, as I’ve been pointing out, Apple’s target user is different and Final Cut Pro and Logic Pro are positioned accordingly.

I do think this move is an interesting development that might expand the user base for the desktop applications as well. But, it’s not for users like me – traditional professional editors. I need access to more than just Final Cut Pro. I need my mobile editing applications to match my desktop applications. For me, that’s a powerful laptop and not an iPad. Ultimately the market will decide the path forward for both Apple and its customers.

UPDATE – May 24, 2023 – Yesterday Apple officially released Final Cut Pro and Logic Pro for the iPad. Alongside these, Apple also released companion/compatibility upgrades for the desktop versions of Final Cut Pro, Motion, Compressor, Logic Pro, iMovie, and an accompanying pro video formats pack.

If you want to understand the “who” and “why” of the iPad versions of these apps, then a good place to start is with popular indie musician and YouTuber, Mary Spender. Never mind the sales pitch aspect of this – I think it does a better job than Apple’s own marketing videos or any of the techie reviews. But if you do want a deeper dive, then check out these for FCP (desktop), FCP (iPad), and Logic Pro (iPad).

On a side note – if you are running these apps with macOS Ventura, then things have changed with Audio Units and some third-party audio plug-ins will no longer work in Logic Pro and/or Final Cut Pro. Unfortunately it’s different ones in each application. The reason is that AU validation will fail and, therefore, the offenders will be disabled. Of course, they work just fine in the Adobe apps or Resolve. Most of the time you can fix the issue by updating the plug-in to a newer version. On my system, this affected certain plug-ins from iZotope, Acon Digital, and Sonible. Updates will also be required for certain video plug-ins.

©2023 Oliver Peters

Impressions of NAB 2023

2023 marks the 100th year of the NAB Convention, which started out as a radio gathering in New York City. This year you could add ribbons to your badges indicating the number of years that you’d attended – 5, 10, etc. My first NAB was 1979 in Dallas, so I proudly displayed the 25+ ribbon. Although I haven’t attended every show in the intervening years, I have attended many – well over 25.

Some have been ready to sound the death knell for large, in-person conventions, thanks to the pandemic and proliferation of online teleconferencing services like Zoom. 2019 was the last pre-covid year with an attendance of 91,500 – down from previous highs of over 100,000. 2022 was the first post-covid NAB and attendance was around 52,400. That was respectable given the climate a year ago. This year’s attendance was over 65,000, so certainly an upward trend. If anything, this represents a pent-up desire to kick the tires in person and hook back up with industry friends from all over the world. My gut feeling is that international attendance is still down, so I would expect future years’ attendance to grow higher.

Breaking down the halls

Like last year, the convention spread over the Central, North, and new West halls. The South hall with its two floors of exhibition space has been closed for renovation. The West hall is a three-story complex with a single, large exhibition floor. It’s an entire convention center in its own right. West hall is connected to the North hall by the sidewalk, an enclosed upstairs walkway, as well as the LVCC Loop (the connecting tunnel that ferries people between buildings in Teslas). From what I hear, next year will be back to the North, Central, and South halls.

As with most NAB conventions, these halls were loosely organized by themes. Location and studio production gear could mostly be found in Central. Post was mainly in the North hall, but next year I would expect it to be back in the South hall. The West hall included a mixture of vendors that fit under connectivity topics, such as streaming, captioning, etc. It also included some of the radio services.

Although the booths covered nearly all of the floor space, it felt to me like many of the big companies were holding back. By that I mean products with large infrastructure needs (big shared storage systems, large video switchers, huge mixing desks, etc.) were absent. Mounting a large booth at the Las Vegas Convention Center – whether that’s for CES or NAB – is quite costly, with many unexpected charges.

Nevertheless, there were still plenty of elaborate camera sets and huge booths, like that of Blackmagic Design. If this was your first year at NAB, the sum of the whole was likely to be overwhelming. However, I’m sure many vendors were still taking a cautious approach. For example, there was no off-site Avid Connect event. There were no large-scale press conferences the day before opening.

The industry consolidates

There has been a lot of industry consolidation over the past decade or two. This has been accelerated thanks to the pandemic. Many venerable names are now part of larger holding companies. For example, Audiotonix owns many large audio brands, including Solid State Logic, DiGiCo, and Sound Devices, among others. And they added Harrison to their portfolio, just in time for NAB. The Sennheiser Group owns both Sennheiser and Neumann. Grass Valley, Snell, and Quantel products have all been consolidated by Black Dragon Capital under the Grass Valley brand. Such consolidation was evident through shared booth space. In many cases, the brands retained their individual identities. Unfortunately for Snell and Quantel, those brands have now been completely subsumed by Grass Valley.

A lot of this is a function of the industry tightening up. While there’s a lot more media production these days, there are also many inexpensive solutions to create that media. Therefore, many companies are venturing outside of their traditional lanes. For example, Sennheiser still manufactures great microphone products, but they’ve also developed the AMBEO immersive audio product line. At NAB they demonstrated the AMBEO 2-Channel Spatial Audio renderer. This lets a mixer take surround mixes and/or stems and turn them into 2-channel spatial mixes that are stereo-compatible. The control software allows you to determine the stereo width and amount of surround and LFE signal put into the binaural mix. In the same booth, Neumann was demoing their new KH 120-II near-field studio monitors.

General themes

Overall, I didn’t see any single trend that would point to an overarching theme for the show. AI/ML/Neural Networks were part of many companies’ marketing strategy. Yet, I found nothing that jumped out like the current public fascination with ChatGPT. You have to wonder how much of this is more evolutionary than revolutionary and that the terms themselves are little more than hype.

Stereoscopic production is still around, although I only found one company with product (Stereotec). Virtual sets were plentiful, including a large display by Vu Studios and even a mobile expando trailer by Magicbox for virtual set production on location. Insta360 was there, but tucked away in the back of Central hall.

Of course, everyone has a big push for “the cloud” in some way, shape, or form. However, if there is any single new trend that seems to be getting manufacturers’ attention, it’s passing video over IP. The usual companies who have dealt in SDI-based video hardware, like AJA, Blackmagic Design, and Matrox, were all showing IP equivalents. Essentially, where you used to send uncompressed video signals over SDI, you will now use the SMPTE ST 2110 IP protocol to send them through 10GbE (and faster) networks.

The world of post production

Let me shift to post – specifically Adobe, Avid, and Blackmagic Design. Unlike Blackmagic, neither Avid nor Adobe featured their usual main stage presentations. I didn’t see Apple’s Final Cut Pro anywhere on the floor and only one sighting in the press room. Avid’s booth was a shadow of itself, with only a few smaller demo pods. Their main focus was showing the tighter integration between Media Composer and Pro Tools (finally!). There were no Pro Tools control surfaces to play with. However, in their defense, NAMM 2023 (the large audio and music products exhibition) was held just the week before. Most likely this was a big problem for any audio vendor that exhibits at both shows. NAMM shifts back to January in 2024, which is its historical slot on the calendar.

Uploading media to the cloud for editing has been the mantra at Frame.io, which is now under the Adobe wing. They’ve enhanced those features with direct support by Fujifilm (video) and Capture One (photography). In addition, Frame.io has improved features specific to the still photography market. New to the camera-to-cloud game is also Atomos, which demoed its own cloud-based editor developed by asset management company Axle ai.

Adobe demoed the new, text-based editing features for Premiere Pro. It’s currently in beta, but will soon be in full release. In my estimation, this is the best text-based method of any of the NLEs. Avid’s script-based editing is optimized for scripted content, but doesn’t automatically generate text. Its strength is in scripted films and TV shows, where the page layout mimics a script supervisor’s lined script.

Adobe’s approach seems better for documentary projects. Text is generated through speech-to-text software within Premiere Pro, which is now processed on your computer instead of in the cloud. When you highlight text in the transcription panel, Premiere Pro automatically marks the in and out points on that source clip. Then, using insert and overwrite commands while the transcription panel is still selected, you can automatically edit that portion of the source clip to the timeline. Once you shift your focus to the timeline, the transcription panel displays the edited text that corresponds to the clips on the timeline. Rearrange the text and Premiere Pro automatically rearranges the clips on the timeline. Or rearrange the clips and the text follows.

Meanwhile over at Blackmagic Design’s massive booth, the new DaVinci Resolve 18.5 features were on full display. 18.5 is also in beta. While there are a ton of new features, it also includes automatic speech-to-text generation. This felt to me like a work-in-progress. So far, only English is supported. It creates text for the source and you can edit from the text panel to the timeline. However, unlike Premiere Pro, there is no interaction between the text and clips in the timeline.

I was surprised to see that Blackmagic Design was not promoting Resolve on the iPad. There was only one demo station and no dedicated demo artist. I played with it a bit and it felt to me like it’s not truly optimized for iPadOS yet. It does work well with the Speed Editor keyboard. That’s useful for any user, since the Cut page is probably where anyone would do the bulk of the work in this version of Resolve. When I used the Apple Pencil, the interface lacked any feedback as icons were clicked. So I was never quite sure if an action had happened or not when I used the Pencil. I’m not sure many will do a complete edit with Resolve on the iPad; however, it could evolve into a productive tool for preliminary editing in the field.

Here’s an interesting side note. Nearly all of the Blackmagic Design demo pods for DaVinci Resolve were running on Apple’s 24″ candy-colored iMacs. Occasionally performance was a bit sluggish from what I could tell, especially when the operator demoed the new Relight feature to me. Nevertheless, they seemed to work well throughout the show.

In other Blackmagic news, all of the Cloud Store products are now shipping. The Cintel film scanner gets an 8mm gate. There are now IP versions of the video cards and converters. There’s an OLPF version of the URSA Mini Pro 12K and you can shoot vertical video with the Pocket Cinema Camera that’s properly tagged as vertical.

Of course, not everyone wants their raw media in the cloud and Blackmagic Design wasn’t showing the only storage products. Most of the usual storage vendors were present, including Facilis, OpenDrives, Synology, OWC, and QNAP. The technology trends include a shift away from spinning drives towards solid state storage, as well as faster networking protocols. Quite a few vendors (like Sonnet) were showing 25GbE (and faster) connections. This offers a speed improvement over the 1GbE and 10GbE ports and switches that are currently used.

Finally, one of the joys of NAB is to check out the smaller booths, where you’ll often find truly innovative new products. These small start-ups often grow into important companies in our industry. Hedge is just such a company. Tucked into a corner of the North hall, Hedge was demonstrating its growing portfolio of essential workflow products. Another start-up, Colourlab AI shared some booth space there, as well, to show off Freelab, their new integration with Premiere Pro and DaVinci Resolve.

That’s a quick rundown of my thoughts about this year’s NAB Show. For other thoughts and specific product reviews, be sure to also check out NAB coverage at Pro Video Coalition, RedShark News, and postPerspective. There’s also plenty of YouTube coverage.


©2023 Oliver Peters

The Oscar. Now what?

Everything Everywhere All at Once dominated the Academy Awards night, including winning the Best Film Editing award for Paul Rogers. The team used Adobe Premiere Pro as their NLE of choice. By extension this becomes the first editing Oscar win for Premiere. Of course, it’s the team and editor that won the award, not the software that they used. Top editors could cut with any application and get the same result.

The Academy Awards started as a small celebratory dinner for insiders to recognize each other’s achievements in film. Over the decades this has become a major cultural event. Winning or even being nominated is a huge feather in the cap for any film. This can be heavily leveraged by the marketing teams of not only the film distributors and talent agents, but also the various products used in the process – be that cameras or software.

Avid’s dominance

When it comes to editing, Avid has been the 800-pound gorilla in the modern digital era. Ever since Walter Murch won for editing The English Patient using Media Composer, the specific NLE on an Oscar-winning film has become a hot topic among editors. This was never the case when the only options were Moviola, KEM, or Steenbeck.

Even this year nine out of the ten nominees for the Oscar for Best Picture and four out of the five nominees for Best Film Editing used Media Composer. Yet, Avid’s dominance in the winner’s circle has seen some occasional cracks from competitors, like Apple’s Final Cut Pro (legacy version) and Lightworks. Nevertheless, Media Composer is still a safe bet. And let’s not forget sound, where Pro Tools has even less competition from other DAWs among film and TV sound editors and mixers. All of the nominees for the Oscar for Best Sound at this year’s Academy Awards used Pro Tools.

There are, of course, many awards competitions around the world, including the ACE Eddie Awards, BAFTA, Golden Globes, and others, including various film festivals. Many of these don’t give out specific craft awards for editors or editing; however, a lot of these winning films have been edited with other tools. For example, many award-worthy indie films, especially documentaries, have been edited with Premiere Pro. Even Final Cut Pro (the current “X” version) has had wins in such categories. This includes wins for the short films, The Silent Child and Skin at the 2018 and 2019 Academy Awards.

Stacking up the NLE competitors

The truth of the matter is that today, there are seven viable applications that might be used to cut a professional feature film or documentary: Media Composer, Final Cut Pro, Premiere Pro, DaVinci Resolve, Lightworks, Edius X, and Vegas Pro. You could probably also factor in others, such as Final Cut Pro 7 (now zombie-ware) and Media 100 (yes, still alive), not to mention consumer-oriented NLEs like iMovie or Movie Maker. Realistically, most experienced film editors are likely to only use one of the first five on the list.

Of those five, Blackmagic Design’s DaVinci Resolve is the app that most editors have their eyes on. Aside from its widespread use in color correction, Resolve is also a perfectly capable editing application. Although it has yet to pull off an Oscar win for editing, Resolve has been widely used in many aspects of the production and post workflow of top films. Owing to its nature as a “Swiss Army Knife” application, Resolve fits into various on-set, editing, and visual effects niches. It’s only a matter of time before Resolve gets an Oscar win for editing. But other Blackmagic Design products also shouldn’t be overlooked. In the 2023 Academy Awards, more than 20 films across the technical, documentary, short film, international feature film, and animated categories used some Blackmagic Design product.

Marketing

When an application is used on an award-winning film, I’d bet that the manufacturer’s marketing department is doing high-fives. But does this really move the sales needle? Maybe. It’s all aspirational marketing. They want you to feel that if you use the same software as an Oscar-winning film editor used, then you, too, could be in that league. Talent is always the key factor, but we can all dream. Right? That’s what marketing plays upon, but it also impacts the development of the application itself.

Both Avid and Adobe have been fine-tuning their tools with professional users in mind for years. They’ve added features based on the needs of a small, but influential (or at least vocal) market sector. This results in applications that tick most of the professional boxes, but which are also harder to learn and eventually master.

That’s a route Apple also chose to pursue with Final Cut Pro 1 through 7. Despite a heralded introduction with Cold Mountain in 2003, it took until 2010 before Angus Wall and Kirk Baxter nailed down an Oscar with The Social Network. They then reprised that in 2011 with a win for The Girl with the Dragon Tattoo. Even as late as 2020, the discontinued FCP 7 was represented by Parasite, winning Best Picture and nominated for Best Film Editing.

Apple and Final Cut Pro’s trajectory unexpectedly changed course with the introduction of Final Cut Pro X. This shift coincided with the growth of social media and a new market of many non-traditional video editors. Final Cut Pro in its current iteration is the ideal application for this market and has experienced a huge growth in users. But, it still gets labelled as being not ready for professional users, even though a ton of professional content is posted using the app. Apple took the platform approach – opting to leave out many advanced features and letting third party developers fill in the gaps where needed. This is the core of much of the criticism.

How advanced/complex does a professional NLE really need to be?

In the case of FCP, it’s certainly capable of Hollywood-level films along with a range of high-end, international dramas. Witness the many examples I’ve written about, like Focus, Whiskey Tango Foxtrot, Voice from the Stone, The Banker, Jezebel, and Blood Red Sky. However, a wide range of professional editors would like to see more.

The internal corporate discussion goes like this. Marketing asks, “What do we have to do to get broader adoption among professional film editors?” Engineering answers, “It will take X dollars and X amount of time.” Top management asks, “What’s the return if we do that?” And that’s usually where the cycle stops, until the next year or awards season.

The truth is that the traditional high-end post market is extremely small for a company like Apple. The company is already selling hardware, which is their bread and butter. Will a more advanced version of FCP sell more hardware? Probably not. Avid, Adobe, and Blackmagic Design are already doing that for them. On the other hand, what is more influential for sales in today’s market – Oscar-winning professional editors or a bevy of YouTube influencers touting your product?

I’m not privy to sales numbers, so I have no idea whether or not going after the very small professional post market makes financial sense for either Blackmagic Design or Adobe. In the case of Avid, their dominance pays off through their ecosystem. Avid-based facilities are also likely to have Avid storage and Pro Tools audio facilities. Hardware most likely covers the development costs. Plus, both Avid and Adobe have shifted to subscription models (Adobe fully, Avid as an option). This seems to be good for both companies.

Blackmagic Design is also a hardware developer and manufacturer. Selling cameras and a wide range of other products enables them to offer DaVinci Resolve for as little as free. You’d be hard-pressed to find a production company that wasn’t using one or more Blackmagic products. Only time will tell which company has taken the approach that a) ensures their long term survival, and b) benefits professional film editors in the best way. In the case of Apple, it’s pretty clear that adding new features to Final Cut Pro would generate revenue in an amount that many competitors would envy. Yet, it would be small by Apple’s measurement.

In the end, awards are good for a developer’s marketing buzz, but don’t forget the real team that won the award itself. It’s wonderful for Paul Rogers and Adobe that Everything Everywhere All at Once was tapped for the Oscar for Best Film Editing. It’s an interesting milestone, but when it comes to software, it’s little more than bragging rights. Great to have, but remember, it’s Rogers that earned it, regardless of the tools he used.

©2023 Oliver Peters

What is a Finishing Editor?

To answer that, let’s step back to film. Up until the 1970s, dramatic television shows, feature films, and documentaries were shot and post-produced on film. The film lab would print positive copies (work print) of the raw negative footage. Then a team of film editors and assistants would handle the creative edit of the story by physically cutting and recutting this work print until the edit was approved. This process was often messy with many film splices, grease pencil marks on the work print to indicate dissolves, and so on.

Once a cut was “locked” (approved by the director and the execs) the edited work print and accompanying notes and logs were turned over to the negative cutter. It was this person’s job to match the edits on the work print by physically cutting and splicing the original camera negative, which up until then was intact. The negative cutter would also insert any optical effects created by an optical house, including titles, transitions, and visual effects.

Measure twice, cut once

Any mistakes made during negative cutting were and are irreparable, so it is important that a negative cutter be detail-oriented, precise, and work cleanly. You don’t want excess glue at the splices and you don’t want to pick up any extra dirt and dust on the negative if it can be avoided. If a mistaken cut is made and you have to repair that splice, then at least one frame is lost from that first splice.

A single frame – 1/24th of a second – is the difference in a fight scene between a punch just about to enter the frame and the arm passing all the way through the frame. So you don’t want a negative cutter who is prone to making mistakes. Paul Hirsch, ACE, points out in his book A long time ago in a cutting room far, far away… that there’s an unintentional jump cut in the Death Star explosion scene in the first Star Wars film, thanks to a negative cutting error.

In the last phase of the film post workflow, the cut negative goes to the lab’s color timer (the precursor to today’s colorist), who sets the “timing” information (color, brightness, and densities) used by the film printer. The printer generates an interpositive version of the complete film from the assembled negative. From this interpositive, the lab will generally create an internegative from which release prints are created.

From the lab to the linear edit bay

This short synopsis of the film post-production process points to where we started. By the mid-1970s, video post-production technology came onto the scene for anything destined for television broadcast. Material was still shot on film and in some cases creatively edited on film, as well. But the finishing aspect shifted to video. For example, telecine systems were used to transfer and color correct film negative to videotape. The lab’s color timing function was shifted to this stage (before the edit) and was now handled by the telecine operator, who later became known as a colorist.

If work print was generated and edited by a film editor, then it was the video editor’s job to match those edits from the videotapes of the transferred film. Matching was a manual process. A number of enterprising film editors worked out methods to properly compute the offsets, but no computerized edit list was involved. Sometimes a video offline edit session was first performed with low-res copies of the film transfer. Other times producers simply worked from handwritten timecode notes for selected takes. This video editing – often called online editing and operated by an online editor – was the equivalent to the negative cutting stage described earlier. Simpler projects, such as TV commercials, might be edited directly in an online edit session without any prior film or offline edit.
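As a toy illustration of the kind of arithmetic those editors worked out by hand (a hypothetical sketch, not any editor’s documented method), matching comes down to converting timecodes to frame counts, taking the difference, and converting back. The timecodes and the non-drop 30 fps rate below are made-up examples.

```python
# Hypothetical offset arithmetic for matching film edits to video transfers.
# Assumes non-drop-frame 30 fps timecode for simplicity.

FPS = 30

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert HH:MM:SS:FF timecode to an absolute frame count."""
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames: int, fps: int = FPS) -> str:
    """Convert a frame count back to HH:MM:SS:FF timecode."""
    f = frames % fps
    s = (frames // fps) % 60
    m = (frames // (fps * 60)) % 60
    h = frames // (fps * 3600)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

# Offset between where a transferred take starts on tape
# and where the editor wants to cut in (illustrative values):
take_start = tc_to_frames("01:00:10:00")
cut_in = tc_to_frames("01:00:12:15")
offset = cut_in - take_start
print(offset, frames_to_tc(offset))  # 75 frames = 00:00:02:15
```

Real film-to-NTSC transfers also had to account for 3:2 pulldown between 24 fps film and 30 fps video, which is exactly why this bookkeeping was tricky enough to be a specialty.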

Into the digital era

Over time, any creative editing previously done on film for television projects shifted to videotape edit systems and later to digital nonlinear edit systems (NLEs), such as Avid and Lightworks. These editors were referred to as offline editors and post now followed a bifurcated process known as offline and online editing. This was analogous to film’s work print and negative cutting stages. Likewise, telecine technology evolved to not only perform color correction during the film transfer process, but also afterwards, working from the assembled master videotape as a source. This process, known as tape-to-tape color correction, gave the telecine operator – now colorist – the tools to perform better shot matching, as well as to create special looks in post. With this step the process had gone full circle, making the video colorist the true equivalent of the lab’s color timer.

As technology marched on, videotape and linear online edit bays gave way to all-digital, NLE-based facilities. Nevertheless, the separation of roles and processes continued. Around 2000, Avid came in with its Symphony model – originally a separate product and not just a software option. Avid Symphony systems offered a full set of color-correction tools and the ability to work in uncompressed resolutions.

It became quite common for a facility to have multiple offline edit bays using Avid Media Composer units staffed by creative, offline editors working with low-res media. These would be networked to an Avid shared storage solution. In addition, these facilities would also have one or more Avid Symphony units staffed by online editors.

A project would be edited on Media Composer until the cut was locked. Then assistants would ingest high-res media from files or videotape, and an online editor would “conform” the edit with this high-res media to match the approved timeline. The online editor would also handle Symphony color correction, insert visual effects, titles, etc. Finally, all tape or file deliverables would be exported out of the Avid Symphony. This system configuration and workflow is still in effect at many facilities around the world today, especially those that specialize in unscripted (“reality”) TV series.

The rise of the desktop systems

Naturally, there are more software options today. Over time, Avid’s dominance has been challenged by Apple Final Cut Pro (FCP 1-7 and FCPX), Adobe Premiere Pro, and more recently Blackmagic Design DaVinci Resolve. Systems are no longer limited by resolution constraints. General purpose computers can handle the work with little or no bespoke hardware requirements.

Fewer projects are even shot on film anymore. An old school, film lab post workflow is largely impossible to mount any longer. And so, video and digital workflows that were once only used for television shows and commercials are now used in nearly all aspects of post, including feature films. There are still some legacy terms in use, such as DI (digital intermediate), which for feature film is essentially an online edit and color correction session.

Given that modern software – even running on a laptop – is capable of performing nearly every creative and technical post-production task, why do we still have separate dedicated processes and different individuals assigned to each? The technical part of the answer is that some tasks do need extra tools. Proper color correction requires precision monitoring and becomes more efficient with specialized control panels. You may well be able to cut with a laptop, but if your source media is made up of 8K RED files, a proxy (offline-to-online) workflow makes more sense.

The human side of the equation is more complex

Post-production tasks often involve a left/right-side brain divide. Not every great editor is good when it comes to the completion phase. In spite of being very creative, many produce sloppy edits, messy timelines, and project organization that leaves a lot to be desired. For example, all footage and sequences may be bunched together in one large project without bins. Timelines might have clips spread vertically in no particular order, with some disabled clips based on changes made in each round of revisions. As I’ve said before: You will be judged by your timelines!

The bottom line is that the kind of personality that makes a good creative editor is different than one that makes a good online editor. The latter is often called a finishing editor today within larger facilities. While not a perfect analogy, there’s a direct evolutionary path from film negative cutter to linear online editor to today’s finishing editor.

If you compare this to the music world, songs are often handled by a mixing engineer followed by a mastering engineer. The mix engineer creates the best studio mix possible and the mastering engineer makes sure that mix adheres to a range of guidelines. The mastering engineer – working with a completely different set of audio tools – often adds their own polish to the piece, so there is creativity employed at this stage, as well. The mastering engineer is the music world’s equivalent to a finishing editor in the video world.

Remember that on larger projects, like a feature film, the film editor is contracted for a period of time to deliver a finished cut of the film. They are not permanent staff. Once that job is done, the project is handed off to the finishing team to accurately generate the final product working with the high-res media. Other than reviewing the work, there’s no value to having a highly paid film editor also handle basic assembly of the master. This is also true in many high-end commercial editorial companies. It’s more productive to have the creative editors working with the next client, while the staff finishing team finalizes the master files.

The right kit for the job

It also comes down to tools. Avid Symphony is still very much in play, especially with reality television shows. But there’s also no reason finishing and final delivery can’t be done using Apple Final Cut Pro or Adobe Premiere Pro. Often more specialized edit tools are assigned to these finishing duties, including systems such as Autodesk Smoke/Flame, Quantel Rio, and SGO Mistika. The reason, aside from quality, is that these tools also include comprehensive color and visual effects functions.

Finishing work today includes more than simply conforming a creative edit from a decision list. The finishing editor may be called upon to create minor visual effects and titles along with finessing those that came out of the edit. Increasingly Blackmagic Design DaVinci Resolve is becoming a strong contender for finishing – especially if Resolve was used for color correction. It’s a powerful all-in-one post-production application, capable of handling all of the effects and delivery chores. If you finish out of Resolve, that cuts out half of the roundtrip process.

Attention to detail is the hallmark of a good finishing editor. Having good color and VFX skills is a big plus. It is, however, a career path in its own right and not necessarily a stepping stone to becoming a top-level feature film editor or even an A-list colorist. While that might be a turn-off to some, it will also appeal to many others and provide a great place to let your skills shine.

©2023 Oliver Peters

NLE Tips – Audio Track Mixing in Final Cut Pro

In the past I’ve explained how audio is routed through the Final Cut Pro architecture. I’ve also discussed track-based audio mixing, predominantly based on the workflow in Premiere Pro. Today I’d like to extend that workflow into the realm of Final Cut Pro.

Everyone knows that FCP is not track-based. The timeline consists of a string of audio/video clips called the primary storyline, which empowers its magnetic feature. Additional audio and video clips can be attached to the clips on the primary storyline as connected clips – video above, audio below. At this level the software is indeed trackless. (Click on any image to see an enlarged view.)

Understanding audio roles and lanes

Several years ago, Apple added the “roles” feature. Audio and video clips can be assigned default and/or custom role designations, which can be used for visual organization and other functions. For example, do you want to export a “textless” ProRes file from your timeline? Then simply disable the Titles video role in the export dialogue.

Apple engineers have done more with audio roles, which can be further grouped into audio “lanes” through the timeline index window. If you’ve assigned the correct audio roles to each clip, then all dialogue clips are grouped into the dialogue lane, all music clips in the music lane, and so on. If you exported an FCPXML file for an outside mixer, then audio roles help to organize the track layout in other audio software.
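Because FCPXML is plain XML, that role metadata is easy to inspect outside of Final Cut Pro. Here’s a minimal sketch that tallies which audio roles appear in an export; the sample XML is an illustrative fragment, not a complete FCPXML document, and the `audioRole` attribute name is my assumption based on recent FCPXML versions.

```python
# Sketch: count audio role assignments in an FCPXML fragment.
import xml.etree.ElementTree as ET
from collections import Counter

# Illustrative fragment only - a real FCPXML export has much more structure.
sample = """
<fcpxml version="1.10">
  <spine>
    <asset-clip name="Host stand-up 01" audioRole="dialogue"/>
    <asset-clip name="VO take 03" audioRole="vo"/>
    <asset-clip name="Score cue A" audioRole="music"/>
    <asset-clip name="Host stand-up 02" audioRole="dialogue"/>
  </spine>
</fcpxml>
"""

root = ET.fromstring(sample)
roles = Counter(clip.get("audioRole", "unassigned")
                for clip in root.iter("asset-clip"))
print(dict(roles))  # {'dialogue': 2, 'vo': 1, 'music': 1}
```

A script like this is a quick sanity check that every clip got a role before handing the project to an outside mixer.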

At this point the clips are still individual. However, once you combine all clips in the sequence into a single compound clip, then the audio for all clips within an audio lane is summed together. This is similar to a group or submix bus in a DAW. The lanes are in turn summed together and sent to the mix output. In essence, each audio lane within the compound clip is similar to a summing track stack in Logic Pro. You can adjust volume and apply effects to the entire lane, on top of anything done to individual clips contained inside of that lane.
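A rough way to picture that summing structure (purely illustrative numbers, not FCP internals): clips within a lane are mixed first, lane-level gain and effects are applied to that sum, and then the lanes sum to the output.

```python
# Toy model of compound-clip lane summing, like submix buses.
# Values are arbitrary sample amplitudes for illustration.
dialogue_clips = [0.2, 0.1]  # two dialogue clips sounding at once
music_clips = [0.05]

# Lane-level gain is applied to the summed lane, not each clip.
dialogue_lane = sum(dialogue_clips) * 1.0   # dialogue lane at unity
music_lane = sum(music_clips) * 0.5         # duck the entire music lane

mix_out = dialogue_lane + music_lane        # lanes sum to the mix output
print(mix_out)  # 0.325
```

The point of the model: a volume change or effect on the lane touches everything in that lane at once, which is what makes the compound-clip approach behave like a bus-based mix.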

Mixing in FCP on real-world projects

I’m working on an Alaska travelogue series – on-camera host on location, voice-overs, voice-over pick-ups, and music. The host stand-ups were recorded in two environments – close to the shoreline and in a quiet wooded area.

The location sound mixer recorded both a lavalier mic and a boom mic on separate channels. My personal preference is the boom, but sometimes the waves on the beach created too much background noise. In those cases, it’s the lav mic, but then I have to contend with the duller sound of the mic under the clothing, along with some rustle.

The next challenge is getting the voice-overs to sound close to the on-camera audio. These were recorded on location, but in a quiet room. The final challenge is to match the sonic quality of the voice-over pick-ups (done by the host at his home) to the original voice-overs.

Step One

The first step in this process is to assign the proper audio roles before clips are edited into the FCP sequence. Roles are quite versatile. If you had multiple speakers, each one could be assigned a separate role. In this project, my audio roles are Dialogue, VO, VO2, and Music. Once clips are imported and roles assigned, I can edit as I normally would in Final Cut. I personally add very few audio effects at this point to the individual clips, because I will do that later. In addition, certain effects, like noise reduction simply don’t work very well with short clips (more on that in a minute). So I only add what I need to “sell” the cut.

Step Two

Once the cut is approved and locked, I can move on to a final mix. To start, I’ll remove any audio effects that I’ve added to individual clips. Then, I meticulously go through and even out any level imbalances. Final Cut Pro features multiple gain stages. You have the clip volume control, but if you expand the audio, you see the individual channels, which each have volume controls, as well. Each of these can be raised by up to 12dB. So if you’ve applied 12dB to the clip and it’s still too quiet, expand the audio and bump up the channel volume. Or work this process in reverse. My objective is to end up with a clip volume that’s a bit hot in the peaks and then use the range tool to highlight the larger peaks and duck them down a bit.
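For what it’s worth, stacked gain stages combine by simple addition in decibels, which is why the clip and channel controls together can yield more than 12dB. This is a generic audio fact, not an FCP-specific detail, and the numbers below are illustrative.

```python
# Stacked gain stages: dB values add; linear gains multiply.
def db_to_linear(db: float) -> float:
    """Convert a dB gain to a linear amplitude multiplier."""
    return 10 ** (db / 20)

clip_gain_db = 12.0     # clip volume maxed out
channel_gain_db = 6.0   # extra boost on the expanded channel

total_db = clip_gain_db + channel_gain_db           # 18.0 dB overall
total_linear = db_to_linear(clip_gain_db) * db_to_linear(channel_gain_db)
print(total_db, round(total_linear, 3))  # 18.0 dB is roughly 7.943x amplitude
```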

Expand audio and make sure you have overlaps with fade handles between all clips. This is somewhat time-consuming. It’s far simpler in Premiere Pro to add audio dissolves (crossfades) across all audio edits in the timeline in a single step. But it’s a necessary step, including the addition of room tone/ambience to fill any gaps in the speech.

Finally, check the music. Make sure the edits work musically. Overall, the music volume can be a bit loud at this stage, but you want to make sure the balance is right for the entire sequence. So pay attention to the proper and graceful ducking of music around spoken audio.

Step Three

After you’ve made everything as uniform as possible, compound the sequence. Open the timeline index and enable “show audio lanes,” which expands the audio of the compound. You’ll now see a “track” or summing bus for each audio role – Dialogue, VO, VO2, and Music. When you select an audio lane, you can adjust its volume and apply audio effects to only that lane. That lane’s audio parameters are shown in the inspector pane.

Selecting the topmost level of the clip displays the output (i.e. mix) bus parameters. Additional effects can be added here. It’s fine to apply and adjust such “master” effects, but I recommend that you do not make any changes to the volume. That’s because the volume control comes after any effects, which would include a meter plug-in, such as the built-in multimeter plug-in. Leave the volume slider alone if you want to see accurate volume levels.

Aside from mixing in tracks/busses, audio roles add another value at the time of export. My deliverables include a ProRes file without titles, as well as audio that’s split into separate tracks. In Final Cut Pro’s export settings, I can select Multitrack QuickTime and then arrange the combination and order of roles. For this project, it’s a ProRes file with four stereo tracks corresponding to the four roles that I’m working with.

Note that when you export a multitrack file, each lane output also has any master output effects added to it. For example, if your mix uses a compressor and a limiter on the main output of the compound clip, then each lane/bus/track of the multitrack will now also have the added effect of that compression and limiting. If you don’t want this, then make sure to disable these effects prior to exporting a Multitrack QuickTime file.

Which effects should you use?

I’ve now discussed how the process works, but what combination of effects should you be using? Obviously that’s a question of style and personal taste. The type of effects for me will be similar to my description in the Premiere Pro article. I tend to stick with native Final Cut Pro effects, so that I don’t have to worry about what’s installed if I move to another Mac or a different editor has to step in. Also, Final Cut Pro is often a poor host for some third party audio plug-ins. I don’t know the reason, but have been told it’s up to those developers to optimize their tools for FCP. In most cases these same plug-ins work well in Logic Pro, not to mention other non-Apple applications. Go figure!

I’m happy with most of the built-in Apple audio plug-ins, with the exception of noise reduction and other audio repair tasks. The Accusonus tools are my go-to, but they are sadly no longer available. After that it’s the RX package from iZotope. If you have a really challenging piece of audio, then use the standalone RX package on that clip and re-import. If you don’t own either of these, then the newly added voice isolation feature in Resolve is pretty sweet (and better than what’s in FCP). Another impressive contender is Adobe’s Podcast beta. The AI-powered voice enhancement feature is available for free use through their web portal. I’ve used it for some really poor Zoom interview audio and it did an outstanding job of cleaning up all manner of audio defects.

Where this explanation is most pertinent is on location-based dialogue recordings. These are the ones that often benefit from noise removal/repair. These tools require consistency and some lead-in to the first audio, so they are best applied to full tracks and not individual clips. That’s why I make sure I have overlaps and fill in gaps and do all of this processing on the lanes of the compound and not on individual clips. If you have different dialogue sections – some noisy and some clean – then it’s best to organize these into separate audio roles, so that they are sorted out correctly once you compound the clip.

My typical processing chain

My FCP effects layout is similar to the description in the Premiere Pro post. Dialogue and VO tracks get some noise reduction, EQ, and compression. Voice-overs are particularly susceptible to plosives (popping “p” consonants) and sibilance, so plosive and de-essing filters are useful. For music, I usually spread the stereo image more and dip the EQ in the midrange. Plus some compression. All of this is designed to allow the dialogue to sit better in the mix. 

The last level of processing is what you do to the top level of the compound clip itself. That’s a bit like mastering in audio production. Applying effects to the compound clips is analogous to applying effects to a mix or output bus in the DAW world. On this particular chain, it’s EQ, exciter, compressor, adaptive limiter, and the multimeter. The effects stack is processed before the volume slider. Since I’m judging peak and loudness levels with the multimeter plug-in, I don’t want to make any volume slider changes on the compound clip, because those would be applied after the reading on the multimeter.

You’ll notice from my screen grabs that different compressor models have been used. These are all from the same Logic Pro compressor in FCP. This single plug-in features various presets designed to emulate tried-and-true analog compressors favored by top recording engineers/mixers.

Final thoughts 

As with my other Final Cut Pro audio articles and posts, I can already hear some screaming that this is just a workaround for the fact that Final Cut Pro has no “true” audio mixing panel. While that may be true, it’s also irrelevant. Until such time as Apple’s ProApps engineers redesign the audio section or add a “roles-based mixer” to the tool set, this is the software you have. If you want to mix in Final Cut Pro and deliver a properly mixed master file without using specialized audio software, then it’s best to understand how to achieve the required results.

If you step into the compound clip to make any editorial changes to the sequence or to individual clips, then you will not hear the results of the top-level mixing and effects. The proper mix is only heard when you step back out. This is a shortcoming compared with this same process in Premiere Pro. Therefore, when you are editing in Final Cut Pro, it’s best to leave all of the final mixing until the end. In Premiere Pro, I tend to mix as I go.

Hopefully this post gives you some insight into the “guts” of the software. If you can’t send the audio to a mix engineer and don’t want to bounce over to Logic Pro, Pro Tools, or Resolve (Fairlight) yourself, then there’s no reason Final Cut Pro can’t be made to work for you.

©2023 Oliver Peters