A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they’d like to see. Over the years, as I worked as an editor and tech writer, we kept in touch through his travels from Avid to Media 100 and on to Apple. It was always good to get together and decompress at the end of a long NAB week.

Having spent his career using, designing, and shepherding a wide range of post-production products, Steve probably knows more about the diverse field of editing systems than most managers at editing system manufacturers. Naturally, many readers will know him as Apple’s Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I’m a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That’s important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we worked through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists the features and how they all fit together, plus a little bit about how you would market that. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off from quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts in earnest. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and have a new OS come out two weeks later and break everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might fix minor but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and the third-party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals who stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to be dear to you. Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and advanced tools like DaVinci Resolve are affordable to any user. Yet, there’s still that very high-end market for systems like FilmLight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time and the full range of color grading tools in FCP X and the FX Plug plug-ins can play a very strong role in improving any production.

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros of the day. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. Getting the same number of video streams you were used to when digitizing from tape took more processing power than you could get from any PC at the time. In some cases you couldn’t even play the camera-original video files at all, so you needed to transcode before you could start editing. And the available transcoding codecs either weren’t that high in quality or had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode into file sizes as small as some of the other camera codecs; but given the choice between file size, quality, and performance, quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, file size is less of an issue – so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB] We knew that it would be disruptive, of course. It was a whole new interface and approach. It integrated a bunch of cutting edge technology that people weren’t familiar with. A complete rewrite of the codebase was a huge step forward, as you can see in the speed and fluidity that is so crucial during the creative process. Metadata driven workflows, background processing, magnetic timeline – in many ways people are still trying to catch up eight years later. And now FCP X is the best selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone, or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build and it was an expense by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can’t remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line – every step is very compartmentalized. That doesn’t stand to benefit from new efficiencies and technological advances, because benefiting requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you’ve always done it, then you won’t be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes – your competition may already have. How can you tell when your phone doesn’t ring? And that’s why I would say there are Cold Mountain moments all the time, when something gets made in a way that didn’t exist a few years ago. But it happens across this new, much wider range of markets and doesn’t get as much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding in Europe faster than in the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking the editing system. I see over and over that editors are forced to use systems they don’t like on larger projects, yet love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government-funded TV (as in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve, and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep the editor from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do these local-language productions for the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production. 

Subscription is a different business model of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds, on top of the $1 billion they already raised, to fund production and develop for local languages. I’ve been watching the series Criminal on Netflix. It’s a crime drama based on police interrogations, with separate versions done in four different countries: English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that’s why I watched them all!). I’ve never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances in creating new workflows with FCP X and shared storage. They are running 1.5 petabytes of storage on six Synology servers, with 30 shows being edited in FCP X right now. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys – and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but getting a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X – you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) technology to sync multiple iOS recording devices with rock-solid timecode, and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being “semi-retired” is that if something’s interesting to me, I just dig into it. It’s exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket – and that opens the world up to new voices. If you were to graph it, the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero sum game — I win — you lose. Now there are so many potential needs for video we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCPco.

©2019 Oliver Peters

Is it time to reconsider Final Cut Pro X?

While Final Cut Pro X may have ultimately landed in the market sector that Apple envisioned, the industry widely acknowledged that the original launch could have been better managed. Many staunch Final Cut Pro (“legacy”) users were irrevocably alienated. That’s a shame, because FCPX wasn’t a bad design when released – merely incomplete. In the eight years that have followed, the user base has grown to more than 2.5 million (April 2018) and the application sports the widest third-party support of any editing software.

I have certainly gone back and forth in my own use of FCPX, depending on whether it was the right tool for a given job. I cut a feature film with it back in the pre-10.1 days when it was a bifurcated application with separate Event and Project files. Since then, I have also used it on plenty of spots and corporate videos. Although my daily workflow is largely Premiere Pro-based now, I regularly use Final Cut Pro X when appropriate, as well as Blackmagic Design DaVinci Resolve and Avid Media Composer. Modern editors need to be NLE-multilingual.

I realize that winning Oscars and cutting large-scale productions isn’t what the majority of editors do. Nevertheless, these types of productions give any product street cred. You are probably aware of Focus and Whiskey Tango Foxtrot, but there are certainly other features that have used FCPX. Hollywood studio films are still dominated by Avid Media Composer; however, short films cut using FCPX have won the short film Oscar category two years in a row. While largely invisible to many US viewers, major international productions, on par with Game of Thrones, have been edited using Final Cut Pro X.

If you were one of those FCP7 users who jumped ship to another tool, then maybe it’s time to revisit Final Cut Pro X. There are many reasons I say that. In the past eight years, Apple has added wide codec support, LUTs, HDR capabilities, vastly improved color correction tools, and an easy method of working with captioning. Final Cut is clearly the better tool in many situations and here’s a quick overview why I feel that way.

What productions are best with FCPX?

Final Cut Pro X is capable of handling all types of editing, but it’s better suited to some than others. The biggest differentiator is turnaround time. If you have to get done quickly – from ingest to delivery – then FCPX is hard to beat. It handles media better than any other NLE, without the need for the beefiest hardware. Want to cut 4K ProRes HQ on a two-year-old MacBook Pro? Then FCPX shines. That makes it a natural for broadcast news, promos, and sports. It’s also perfect for non-broadcast event coverage. Frankly, I’m surprised that US broadcasters haven’t gravitated to it like various other broadcasters around the world – especially for cutting news stories. The workflow, interface, and low hardware requirements make it well-suited to the task.

Station promo production might be questionable for some, but stop and think about the use of Motion Templates and how that technology can be applied to broadcast design. Final Cut features the unique ability to use templates that any user can create and publish as an effect out of Apple Motion. Therefore, custom effects, animation, and graphics can easily be created specifically for a station’s bespoke look.

For example, a broadcast group or network that owns multiple stations in different cities could have one creative team develop a custom station graphics package for each outlet, simply by using Motion. Those templates could be deployed to each promo department and installed into the individual FCPX edit systems. This would allow each editor to modify or customize time and event information based on the published parameters without mistakenly deviating from the prescribed graphic look. That’s a broadcast creative director’s dream.

A simple hardware footprint

Obviously Final Cut requires Apple computers, but there’s easy connectivity to media from external Thunderbolt, USB, and ethernet-based storage. Some facilities certainly need elaborate shared storage systems for collaborative workflows, but others don’t. If you are a creative editorial boutique, all of a given project’s proxy editing files can be stored on a single SSD drive, allowing the editor to easily move from room to room, or home to work, simply by carrying the SSD with them. They can even be cutting on a laptop and then bring that in to work, connect to an external display for better monitoring, and keep rocking. With the advent of external GPU systems (eGPU), you can easily augment the horsepower of middle-level Macs when the need arises. 

No external I/O hardware is required for monitoring. While I recommend a simple audio I/O interface and external speakers as a minimum, there are plenty of fixed-location systems where the editors only use headphones. AJA or Blackmagic interfaces to play video out to an external display are optional. Simply connect a high-quality display to the Mac via HDMI or Thunderbolt and FCPX will feed real video to it full screen. Premiere Pro can also do this, but Media Composer and Resolve do not.

Third-party ecosystem

Some of Final Cut’s deficits have developed into huge assets. It enjoys one of the best ecosystems of third-party tools to enhance the application. These range from translation tools from vendors like Intelligent Assistance and Marquis Broadcast, to a myriad of plug-ins, such as those from FxFactory and Coremelt. Final Cut already comes with a very solid set of built-in effects filters – probably the most useful variety among the various NLE options. Even better, if you also purchase Motion, you can easily create more effects by building your own as Motion Templates. This has resulted in a ton of small developers who create and sell their own variations using this core technology.

You certainly don’t have to purchase any additional effects to be productive with FCPX, but if you do, then one of the better options is FxFactory by Noise Industries. FxFactory is both a set of effects and a delivery platform for other developers. You can use the FxFactory interface to purchase, install, and manage plug-ins and even applications from a diverse catalogue of tools. Pick and choose what you need and grow the repertoire as you see fit. One of the first options to start with is idustrial revolution’s newly revamped XEffects Toolkit. This includes numerous effects and title templates to augment your daily work. Some of these employ built-in tracking technology that allows you to pin items to objects within a shot.

Apple’s latest feature addition is workflow extensions. Adobe introduced this technology first in its products, but Apple has built upon it through macOS integration with apps like Photos and now in Final Cut Pro X. In short, an extension allows direct FCPX integration with another application. Various extensions can be downloaded from the Mac App Store and installed into FCPX. An extension then adds a panel inside Final Cut, which allows you to interact with that application from within the FCPX interface. Companies initially offering extensions include frame.io, Shutterstock, and Simon Says, among others.

Subscription

A sore point for many Adobe customers was the shift to the subscription business model. While the monthly rates are reasonable if you are an ongoing business, the shift has caused some to stick with software as old as CS6 (yikes!). As more companies adopt subscriptions, you have to start wondering when enough is enough. I don’t think we are there yet and Creative Cloud is still a solid value. But if you are an individual who doesn’t make a living with these tools, then it’s a concern. Adobe recently raised eyebrows by appearing to double the monthly cost of its Photography plan. As it turns out, this was an additional pricing plan with more storage, not a replacement – but that only became evident after the website page was quickly fixed. Predictably, this gives competitors like ON1 an avenue for counter-marketing.

Concerned with subscriptions? Then the Apple professional applications are an alternative. Final Cut Pro X, Compressor, Motion, and Logic Pro X – coupled with photo and graphics tools from Affinity and/or Pixelmator – provide a viable competing package to Adobe Creative Cloud. Heck, augment that with Fusion and/or DaVinci Resolve – even the free versions – and the collection becomes a formidable toolkit.

The interface

Naturally, the elephant in the room is the FCPX interface. It’s what simultaneously excited and turned off so many FCP7 users. In the end, how you edit with Final Cut Pro X does not have to be all that different from your editing style with other NLEs. Certainly there are differences, but once you get used to the basics, there’s more that’s similar than different.

Isn’t imitation the highest form of flattery? You only have to look at Adobe Premiere Rush or the new Cut Page in Resolve 16 to realize that just maybe, others are starting to see the value in Apple’s approach. On top of that, there are features touted in Resolve 16, like facial (actually shape) recognition or adjustment layers, that were there even in FCPX 10.0. Whether this all is blatant copying or simply a tip-of-the-hat doesn’t matter. Each company has come to the conclusion that some workflows and some newer editors need a faster and more direct user interface that is easily scalable to small and large screens and to single and dual-display systems.

I realize that many out there will read this post and scream “Apple apologist.” Whatever. If you’ve shifted to PC, then very little of what I’ve said applies to you. I make my daily living with Apple hardware. While I recognize that you can often get superior performance from a PC, I don’t find the need to make a change yet. This means that Final Cut Pro X remains a great option for my workflows. It’s a tool I can use for nearly any job and one that is oftentimes better than most. If you rejected it eight years ago, maybe it’s time to take a second look.

©2019 Oliver Peters

CoreMelt PaintX

When Apple launched Final Cut Pro X, it was with a decidedly simplified set of video effects. This was enhanced by the easy ability for users to create their own custom effects, using Apple Motion as a development platform. The result has been an entirely new ecosystem of low-cost, high-quality video effects. As attractive as that is, truly advanced visual effects still require knowledgeable plug-in developers who are able to work within the FCPX and macOS architecture in order to produce more powerful tools. For example, other built-in visual effects tools, such as Avid Media Composer’s Intraframe Paint or the Fusion page in DaVinci Resolve, simply aren’t within the scope of FCPX, nor what users can create on their own through Motion templates.

To fill that need, developers like CoreMelt have been designing a range of advanced visual effects tools for the Final Cut Pro market, including effects for tracking, color correction, stabilization, and more. Their newest release is PaintX, which adds a set of Photoshop-style tools to Final Cut Pro X. As with many of CoreMelt’s other offerings, PaintX includes planar tracking, thanks to the licensing of Mocha tracking technology.

To start, drop the PaintX effect onto a clip and then launch the custom interface. PaintX needs a more elaborate control layout than the standard FCPX effects interface was designed for. Once inside the PaintX interface window, you have a choice of ten brush functions: paint color, change color, blur, smear, sharpen, warp, clone, add noise, heal, and erase. These functions cover a range of needs, from simple wire removal to beauty enhancements and even pseudo horror makeup effects. You have control over brush size, softness, aspect ratio, angle, and opacity. The various brushes also have specific controls for their related functions, such as the blur range for the blur brush. Effects are applied in layers and actions. Each stroke is an action and both remain editable. If you aren’t the most precise artist, then the erase brush comes in handy. Did you color a bit too far outside of the lines? Simply use the erase brush on that layer and trim back your excess.

Multiple brush effects can be applied to the same or different areas within the image, simply by adding a new layer for each effect. Once you’ve applied the first paint stroke, an additional brush control panel opens – allowing you to edit the brush parameters, after the fact. So, if your brush size was too large or not soft enough, simply alter those settings without the need to redo the effect. Each effect can be individually tracked in either direction. The Mocha tracker offers additional features, such as transform (scale/position) versus perspective tracking, along with the ability to copy and paste tracking data between brush layers.

As a Final Cut Pro X effect, PaintX works within the standard video pipeline. If you applied color correction upstream of your PaintX filter, then that grade is visible within the PaintX interface. But if the color correction is applied downstream of the PaintX effect, you won’t see it when you open the PaintX interface. However, that correction will still be uniformly applied to the clip, including the areas altered within the PaintX effect. If you’ve “punched into” a 4K clip on an HD timeline, when you open PaintX, you’ll still see the full 4K frame. Finally, you have additional FCPX control over the opacity and mix of the applied PaintX filter.

I found PaintX to be well-behaved even on a modest Mac, like my 3-year-old laptop. However, if you don’t have a beefy Mac, keep the effect simple. The more brush effects that you apply and track in a single clip, the slower the real-time response will become, especially on under-powered machines. These effects are GPU-intensive and paint strokes are really a particle system; therefore, simple, single-layer effects are the easiest on the machine. But, if you intend to do more complex effects like blurs and sharpens in multiple layers, then you will really want one of the more powerful Macs. Playback response is generally better, once you’ve saved the effect and exit back to Final Cut. I did run into one minor issue with the clone brush on a single isolated clip, while using a 2013 Mac Pro. CoreMelt told me there have been a few early bugs with certain GPUs and is looking into the anomaly I discovered. That model in particular has been notorious for GPU issues with video effects. (Update: CoreMelt sent me a new build, which has corrected this problem.)

Originally written for RedShark News

©2018 Oliver Peters

Hawaiki AutoGrade

The color correction tools in Final Cut Pro X are nice and Adobe’s Lumetri controls make grading intuitive. But sometimes you just want to click a few buttons and be happy with the results. That’s where AutoGrade from Hawaiki comes in. AutoGrade is a full-featured color correction plug-in that runs within Final Cut Pro X, Motion, Premiere Pro, and After Effects. It is available from FxFactory and installs through the FxFactory plug-in manager.

As the name implies, AutoGrade is an automatic color correction tool designed to simplify and speed-up color correction. When you install AutoGrade, you get two plug-ins: AutoGrade and AutoGrade One. The latter is a simple, one-button version, based on global white balance. Simply use the color-picker (eye dropper) and sample an area that should be white. Select enable and the overall color balance is corrected. You can then tweak further, by boosting the correction, adjusting the RGB balance sliders, and/or fine-tuning luma level and saturation. Nearly all parameters are keyframeable, and looks can be saved as presets.
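Hawaiki doesn’t publish AutoGrade’s math, but this kind of one-click balance resembles the classic “white patch” correction, which is easy to sketch. The short Python example below is purely illustrative – the function name is hypothetical – showing how sampling an area that should be white yields per-channel gains:

```python
import numpy as np

def white_balance(img, patch):
    """Illustrative white-patch balance (not Hawaiki's actual code).
    img: float RGB array (h, w, 3) in 0..1.
    patch: pixels sampled from an area that should be neutral white."""
    mean = patch.reshape(-1, 3).mean(axis=0)  # average R, G, B of the sample
    gains = mean.mean() / mean                # lift weak channels, tame strong ones
    return np.clip(img * gains, 0.0, 1.0)     # neutralized image
```

Because the gains depend entirely on the sampled pixels, choosing a different patch produces a different correction – which is exactly why re-sampling changes the result, as described below.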

AutoGrade One is just a starter, though, for simple fixes. The real fun is with the full version of AutoGrade, which is a more comprehensive color correction tool. Its interface is divided into three main sections: Auto Balance, Quick Fix, and Fine-Tune. Instead of a single global balance tool, the Auto Balance section permits global correction, as well as any combination of white, black, and/or skin correction. Simply turn on one or more desired parameters, sample the appropriate color(s), and enable Auto Balance. This tool will also raise or lower luma levels for the selected tonal range.

Sometimes you might have to repeat the process if you don’t like the first results. For example, when you sample the skin on someone’s face, sampling rosy cheeks will yield different results than if you sample the yellowish highlights on a forehead. To try again, just uncheck Auto Balance, sample a different area, and then enable Auto Balance again. In addition to an amount slider for each correction range, you can also adjust the RGB balance for each. Skin tones may be balanced towards warm or neutral, and the entire image can be legalized, which clamps video levels to 0-100.

Quick Fix is a set of supplied presets that work independently of the color balance controls. These include some standards, like cooling down or warming up the image, the orange and teal look, adding an s-curve, and so on. They are applied at 100% and to my eye felt a bit harsh at this default. To tone down the effect, simply adjust the amount slider downwards to get less intensity from the effect.

Fine-Tune rounds it out when you need to take a deeper dive. This section is built as a full-blown, 3-way color corrector. Each range includes a luma and three color offset controls. Instead of wheels, these controls are sliders, but the results are the same as with wheels. In addition, you can adjust exposure, saturation, vibrance, temperature/tint, and even two different contrast controls. One innovation is a log expander, designed to make it easy to correct log-encoded camera footage, in the absence of a specific log-to-Rec709 camera LUT.

Naturally, any plug-in could always offer more, so I have a minor wish list. I would love to see five additional features: film grain, vignette, sharpening, blurring/soft focus, and a highlights-only expander. There are certainly other individual filters that cover these needs, but having it all within a single plug-in would make sense. This would round out AutoGrade as a complete, creative grading module, servicing user needs beyond just color correction looks.

AutoGrade is a deceptively powerful color corrector, hidden under a simple interface. User-created looks can be saved as presets, so you can quickly apply complex settings to similar shots and set-ups. There are already many color correction tools on the market, including Hawaiki’s own Hawaiki Color. But the price is very attractive, making AutoGrade a superb tool to have in your kit. It’s a fast way to grade that works for users who are brand new to color correction, as well as those with experience.


©2018 Oliver Peters

More about ProRes RAW

A few weeks ago I wrote a two-part post – HDR and RAW Demystified. In the second part, I covered Apple’s new ProRes RAW codec. I still see a lot of misinformation on the web about what exactly this is, so I felt it was worth an additional post. Think of this post as an addendum to Part 2. My apologies up front if there is some overlap between this and the previous post.

_____________________________

Camera raw codecs have been around since before RED Digital Camera brought out its REDCODE RAW codec. At NAB, Apple decided to step into the game. RED’s innovation was recording the raw signal as a compressed movie file, which made on-board recording and simplified post-production possible. Apple has now upped the game with a codec that is optimized for multi-stream playback within Final Cut Pro X, thus taking advantage of how FCPX leverages Apple hardware. At present, ProRes RAW is incompatible with all other applications, with the exception of Motion, which will read and play the files, but with incorrect default – albeit correctable – video levels.

ProRes RAW is only an acquisition codec and, for now, can only be recorded externally using an Atomos Shogun Inferno or Sumo 19 monitor/recorder, or in-camera with DJI’s Inspire 2 or Zenmuse X7. Like all things Apple, the complexity is hidden under the surface. You don’t get the type of specific raw controls made available for image tweaking, as you do with RED. But ProRes RAW will cover the needs of most camera raw users, making this the raw codec “for the rest of us”. At least that’s what Apple is banking on.

Capturing in ProRes RAW

The current implementation requires a camera that exports a camera raw signal over SDI, which in turn is connected to the Atomos, where the conversion to ProRes RAW occurs. Although no one is very specific about the exact process, I would presume that Atomos’ firmware is taking in the camera’s form of raw signal and rewrapping or transforming the data into ProRes RAW. This means that the Atomos firmware would require a conversion table for each camera, which would explain why only a few Sony, Panasonic, and Canon models qualify right now. Others, like ARRI Alexa or RED cameras, cannot yet be recorded as ProRes RAW. The ProRes RAW codec supports 12-bit color depth, but the depth you actually get depends on the camera. If the SDI output to the Atomos recorder is only 10-bit, then that’s the bit depth recorded.

Until more users buy or update these specific Atomos products – or more manufacturers become licensed to record ProRes RAW in-camera – any real-world comparisons and conclusions come from a handful of ProRes RAW source files floating around the internet. That, along with the Apple and Atomos documentation, provides a pretty solid picture of the quality and performance of this codec group.

Understanding camera raw

All current raw methods depend on single-sensor cameras that capture a Bayer-pattern image. The sensor uses a monochrome mosaic of photosites, which are filtered to register the data for light in the red, green, or blue wavelengths. Nearly all of these sensors have twice as many green receptors as red or blue. At this point, the sensor is capturing linear light at the maximum dynamic range that the camera and sensor are capable of. It’s just an electrical signal being turned into data, without compression (within the sensor). The signal can be recorded as a camera raw file, with or without compression. Alternatively, it can be converted directly into a full-color video signal and then recorded – again, with or without compression.

If the RGGB photosite data (camera raw) is converted into RGB pixels, then the sensor color information is said to be “baked” into the file. However, if the data is stored in its raw form and only converted to RGB in post, the sensor data is preserved intact until much later in the post process. Basically, the choice boils down to whether that conversion is best performed within the camera’s electronics or later via post-production software.
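To make that concrete, here is a minimal Python sketch – my own toy illustration, not any camera’s actual pipeline – of the single plane of values an RGGB sensor records before any conversion to RGB:

```python
import numpy as np

def mosaic_rggb(rgb):
    """Simulate an RGGB Bayer sensor: each photosite registers only one
    color, with twice as many green sites as red or blue."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=np.float32)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R on even rows / even cols
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G on even rows / odd cols
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G on odd rows / even cols
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B on odd rows / odd cols
    return raw  # one plane of samples - a third the data of full RGB
```

A raw recorder stores something like this single plane plus metadata; a conventional camera demosaics it into three full RGB channels on the spot, at which point the original photosite samples are gone.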

The effect of compression may also be less destructive (fewer visible artifacts) with a raw image, because data, rather than video, is being compressed. However, converting the file to RGB does not mean that the wider dynamic range is lost. That’s because most camera manufacturers have adopted logarithmic encoding schemes, which allow a wide color space and a high dynamic range (big exposure latitude) to be carried through into post. HDR standards are still in development and have been in testing for several years, completely independent of whether or not the source files are raw.
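The idea behind those log schemes is easy to sketch. The generic curve below is my own invention (real cameras use their published transfer functions, such as Sony S-Log3 or ARRI Log C); it squeezes roughly 14 stops of scene-linear values into a 0–1 signal so that shadows and mid-tones keep a fair share of the code values:

```python
import numpy as np

# Map scene-linear values, from 1/1000th of mid-gray (0.18) up to 16x
# mid-gray - roughly 14 stops - into a 0..1 signal. Illustrative only.
LO, HI = np.log2(0.18 / 1000), np.log2(0.18 * 16)

def log_encode(linear):
    signal = (np.log2(np.maximum(linear, 1e-7)) - LO) / (HI - LO)
    return np.clip(signal, 0.0, 1.0)

def log_decode(signal):
    return 2.0 ** (signal * (HI - LO) + LO)  # back to scene-linear
```

Quantize that encoded signal to 10 bits and every stop of exposure receives a similar number of code values, which is why a log master grades so gracefully in post even though it isn’t raw.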

ProRes RAW compression

ProRes RAW and ProRes RAW HQ are both compressed codecs with roughly the same data footprint as ProRes and ProRes HQ. Both raw and standard versions use a variable bitrate form of compression, but in different ways. Apple explains it this way in their white paper: 

“As is the case with existing ProRes codecs, the data rates of ProRes RAW are proportional to frame rate and resolution. ProRes RAW data rates also vary according to image content, but to a greater degree than ProRes data rates. 

With most video codecs, including the existing ProRes family, a technique known as rate control is used to dynamically adjust compression to meet a target data rate. This means that, in practice, the amount of compression – hence quality – varies from frame to frame depending on the image content. In contrast, ProRes RAW is designed to maintain constant quality and pristine image fidelity for all frames. As a result, images with greater detail or sensor noise are encoded at higher data rates and produce larger file sizes.”

ProRes RAW and HDR do not depend on each other

One of my gripes, when watching some of the ProRes RAW demos on the web and related comments on forums, is that ProRes RAW is being conflated with HDR. This is simply inaccurate. Raw applies to both SDR and HDR workflows. HDR workflows do not depend on raw source material. One of the online demos I saw recently immediately started with an HDR FCPX Library. The demo ProRes RAW clips were imported and looked blown out. This made for a dramatic example of recovering highlight information. But, it was wrong!

If you start with an SDR FCPX Library and import these same files, the default image looks great. The hitch here is that these ProRes RAW files were shot with a Sony camera and a default LUT is applied in post. That’s part of the file’s metadata. To my knowledge, all current, common camera LUTs are based on conversion to the Rec709 color space, not HDR or wide gamut. If you set the inspector’s LUT tab to “none” in either SDR or HDR, you get a relatively flat, log image that’s easily graded in whatever direction you want.

What about raw-specific settings?

Are there any advantages to camera raw in the first place? Most people will point to the ability to change ISO values and color temperature. But these aren’t actually something inherently “baked” into the raw file. Instead, this is metadata, dialed in by the DP on the camera, which optimizes the images for the sensor. ISO is a sensitivity concept based on the older ASA film standard for exposing film. In modern digital cameras, it is actually an exposure index (EI), which is how some refer to it. (RedShark’s Phil Rhodes goes into depth in this linked article.)

The bottom line is that EI is a cross-reference to that camera sensor’s “sweet spot”. 800 on one camera might be ideal, while 320 is best on another. Changing ISO/EI has the same effect as changing gain in audio. Raising or lowering ISO/EI values means that you can either see better into the darker areas (with a trade-off of added noise) – or you see better highlight detail, but with denser dark areas. By changing the ISO/EI value in post, you are simply changing that reference point.
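Put another way, re-rating the exposure index in post amounts to a simple linear gain applied before the tone curve. A minimal sketch, with a hypothetical function name of my own (real raw processors fold this into their proprietary pipelines):

```python
import math

def re_rate(linear_value, ei_old, ei_new):
    """Changing EI is a gain change: each doubling is one stop brighter."""
    stops = math.log2(ei_new / ei_old)    # e.g. 800 -> 1600 is +1 stop
    return linear_value * (2.0 ** stops)  # identical to * (ei_new / ei_old)
```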

In the case of ProRes RAW and FCPX, there are no specific raw controls for any of this. So it’s anyone’s guess whether changing the master level wheel or the color temp/tint sliders within the color wheels panel is doing anything different for a ProRes RAW file than doing the same adjustment for any other RGB-encoded video file. My guess is that it’s not.

In the case of RED camera files, you have to install a camera raw plug-in module in order to work with the REDCODE raw codec inside of Final Cut Pro X. There is a lot of control over the image prior to tweaking with FCPX’s controls. However, the amount of image control for the raw file is significantly greater for a REDCODE file in Premiere Pro than inside of FCPX. Again, my suspicion is that most of these controls take effect after the conversion to RGB, regardless of whether the slider lives in a specific camera raw module or in the app’s own color correction controls. For instance, changing color temperature within the camera raw module has no correlation to the color temperature control within the app’s color correction tools. It is my belief that few of these actually adjust file data at the raw level, regardless of whether this is REDCODE or ProRes RAW. The conversion from raw to RGB is proprietary with every manufacturer.

What is missing in the ProRes RAW implementation is any control over the color science used to process the image, along with de-Bayering options. Over the years, RED has reworked and improved its color science, which theoretically means that a file recorded a few years ago can look better today (using newer color science math) than it originally did. You can select among several color science models when you work with the REDCODE format.

You can also opt to lower the de-Bayering resolution to 1/2, 1/4, 1/8, etc. for a RED file. When working in a 1080p timeline, this speeds up playback performance with minimal impact on the visible resolution displayed in the viewer. For full-quality conversion, software de-Bayering also yields different results than hardware acceleration, as with the RED Rocket-X card. While this level of control is nice to have, I suspect that’s the sort of professional complication that Apple seeks to avoid.
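The 1/2-resolution case is easy to picture: each 2x2 RGGB block collapses into one RGB pixel, so a 4K raw frame de-Bayers to roughly 2K – still plenty for a 1080p timeline. A toy version (my own illustration, not RED’s actual algorithm), assuming even frame dimensions:

```python
import numpy as np

def debayer_half(raw):
    """Illustrative 1/2-res de-Bayer: one RGB pixel per 2x2 RGGB block,
    averaging the two green photosites."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2]
    return np.dstack([r, g, b])  # half the width and height of the source
```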

The main benefit of ProRes RAW may be a somewhat better-quality image carried into post at a lower file size. To get the comparable RGB image quality you’d need to go up to uncompressed, ProRes 4444, or ProRes 4444 XQ – all of which become very taxing in post. Yet, for many standard productions, I doubt you’ll see that great of a difference. Nevertheless, more quality with a lower footprint will definitely be welcomed.

People will want to know whether this is a game-changer or not. On that count, probably not. At least not until there are a number of in-camera options. If you don’t edit – and finish – with FCPX, then it’s a non-starter. If you shoot with a camera that records in a high-quality log format, like an ARRI Alexa, then you won’t see much difference in quality or workflow. If you shoot with any RED camera, you have less control over your image. On the other hand, it’s a definite improvement over all raw workflows that capture in image sequences. And it breathes some life into an older camera, like the Sony FS700. So, on balance, ProRes RAW is an advancement, but just not one that will affect as large a part of the industry as the rest of the ProRes family has.


©2018 Oliver Peters

Luca Visual FX builds Mystery & Suspense

For most editors, creating custom music scores tends to fall into the “above my pay grade” category. If you are a whizz with GarageBand or Logic Pro X, then you might dip into Apple’s loop resources. But most commercials and corporate videos are easily serviced by the myriad of stock music sites, like Premium Beat and Music Bed. Some music customization is also possible with tracks from companies like SmartSound.

Yet, none of the go-to music library sites offer curated, genre-based, packages of tracks and elements that make it easy to build up a functional score for longer dramatic productions. Such projects are usually the work of composers or a specific music supervisor, sound designer, or music editor doing a lot of searching and piecing together from a wide range of resources.

Enter Luca Visual FX – a developer best known for visual effects plug-ins, such as Light Kit 2.0. It turns out that Luca Bonomo is also a composer. The first offering is the Mystery & Suspense Music and Sound Library, which is a collection of 500 clips, comprising music themes, atmospheres, drones, loops, and sound effects. This is a complete toolkit designed to make it easy to combine elements, in order to create a custom score for dramatic productions in the mystery or suspense genre.

These tracks are available for purchase as a single library through the LucaVFX website. They download as uncompressed, stereo AIF files at 24-bit/96kHz resolution, which means they are top quality and compatible with any Mac or PC NLE or DAW application. Best yet is the price: $79. The package is licensed for a single user and may be used in any audio or video production, including commercial work.

Thanks to LucaVFX, I was able to download and test the library on a recent short film. The story is a suspense drama in the style of a Twilight Zone episode, so a non-specific, ethereal score fits perfectly. Drones, dissonance, and other suspenseful sounds are completely in line with that, which is where this collection shines.

Although I could have used any application to build this score, I opted for Apple’s Final Cut Pro X. Because of its unique keyword structure, it made sense to first set up a separate FCPX library containing only the Mystery & Suspense package. During import, I let FCPX create keyword collections based on the Finder folders, which keeps the Mystery & Suspense FCPX library organized the same way the clips are grouped on disk. Doing so facilitates fast and easy sorting and previewing of any of the 500 clips in the music library. Then I created a separate FCPX library for the production itself. With both FCPX libraries open, I could quickly preview clips from the music library and place them into the film’s edit sequence in the other FCPX library.
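
The resulting organization looks something like this (the folder names are my illustration of the idea, based on the clip categories above, not necessarily the package’s exact directory layout):

    Mystery & Suspense (FCPX library)
        Atmospheres      (keyword collection)
        Drones
        Loops
        Sound Effects
        Themes

Each keyword collection mirrors one Finder folder, so browsing a category is a single click.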

Final Cut uses Connected Clips instead of tracks. This means you can quickly build up and align overlapping atmospheres, transitions, loops, and themes into a densely layered music score in a very freeform manner. I was able to build a convincing score for a half-hour piece in less than an afternoon. Granted, it isn’t mixed yet, but at least I now have the musical elements I want, where I want them. I feel this style of working is definitely faster in Final Cut Pro X – and more conducive to creative experimentation – but it would certainly work just as well in other applications.

The Mystery & Suspense Library is definitely a winner, although I do have a few minor quibbles. First, the music and effects are in keeping with the genre, but don’t go beyond it. When scoring this kind of production, you also need some “normal” or “lighter” moods for certain scenes and transitions. I felt that was missing, and I would still have to step outside this package to complete the score. Second, many of the clips have a synthesized, electronic tone, thanks to the instruments used to create the music. That’s not out of character for the genre, but I still would have liked more of these to feature natural instruments. In fairness to LucaVFX, if the Mystery & Suspense Library is successful, the company will create more libraries in other genres, including lighter fare.

In conclusion, this is a high-quality library, perfectly in keeping with its intended genre. It is fast and flexible to use, making it possible for even the most musically challenged editor to develop a convincing, custom score without breaking the bank.

©2018 Oliver Peters

FCPX Color Wheels Take 2

Prior to version 10.4, the color correction tools within Final Cut Pro X were very basic. You could get a lot of work done with the color board, but it just didn’t offer tools competitive with other NLEs – not to mention color plug-ins or a dedicated grading app like DaVinci Resolve. With the release of 10.4, Apple upped its game by adding color wheels and a very nice curves implementation. However, for those of us who have been doing color correction for some time, it quickly became apparent that something wasn’t quite right in the math or color science behind the new FCPX color wheels. I described those anomalies in this January post.

To summarize that post, the color wheels tool appears to be designed around the lift/gamma/gain (LGG) correction model. Standard LGG behavior is evident with a black-to-white gradient image, which appears on a waveform display as a diagonal line from 0 to 100. If you adjust the highlight control (gain), the line stays pinned at the bottom while the higher end pivots up or down as you shift the slider. Likewise, the shadow control (lift) leaves the line pinned at the top while the bottom half pivots. The midrange control (gamma) bends the middle section of the line inward or outward, with no effect on the two ends, which stay pinned at 0 and 100, respectively. Hue offsets should behave the same way: even when you shift a puck to an extreme edge – like moving the midrange puck completely to yellow – you should still see some remaining black and white at the two ends of the gradient.
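
For those who like to see the math, one common textbook formulation of LGG – my illustration, not necessarily Apple’s exact implementation – is:

    out = ( gain × ( in + lift × (1 − in) ) ) ^ (1 / gamma)

With the input normalized to 0–1: gain leaves in = 0 pinned at black, lift leaves in = 1 pinned at white, and gamma bends the midrange while 0 and 1 map to themselves – exactly the pivoting behavior described above.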

That’s how LGG is supposed to work. In FCPX version 10.4, however, each color wheel control also altered the levels of everything else. When you adjusted the midrange, it also elevated the shadow and highlight ranges. In the hue offset example, shifting the midrange control to full-on yellow tinted the entire image yellow, leaving no hint of black or white. As a result, the color wheels tool was unpredictable and difficult to use for anything beyond very minor adjustments. You ended up chasing your tail: once one correction was made, you had to go back and re-adjust another wheel to compensate for the unwanted changes caused by the first.

With the release of FCPX 10.4.1 this April, Apple engineers changed the way the color wheels tool behaves. Corrections now correspond to what everyone accepts as standard LGG functionality. In other words, each control affects mostly its own portion of the tonal range without also shifting all the other levels: shadows (lift) adjusts the bottom, highlights (gain) adjusts the top end, and midrange (gamma) lightens or darkens the middle portion of the image. Likewise, hue offsets no longer contaminate the entire image.
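
As a quick sanity check of what “standard LGG functionality” means, here is a minimal Python sketch – my own illustration of the textbook model above, not Apple’s code – that applies LGG to a black-to-white ramp and confirms the endpoints stay pinned:

    import numpy as np

    def lgg(x, lift=0.0, gamma=1.0, gain=1.0):
        # One common lift/gamma/gain formulation (illustrative only):
        # lift pivots around white, gain pivots around black, and
        # gamma bends the midrange while 0 and 1 stay fixed.
        y = gain * (x + lift * (1.0 - x))
        return np.clip(y, 0.0, 1.0) ** (1.0 / gamma)

    ramp = np.linspace(0.0, 1.0, 11)      # the black-to-white gradient
    print(lgg(ramp, gamma=1.5)[[0, -1]])  # -> [0. 1.] both ends stay pinned
    print(lgg(ramp, gain=1.2)[0])         # -> 0.0 black stays pinned
    print(lgg(ramp, lift=0.1)[-1])        # -> 1.0 white stays pinned

Under the flawed 10.4 model, the equivalent of those last three checks would drift away from 0 and 1, which is precisely the cross-contamination described above.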

One important thing to note: existing FCPX Libraries created or promoted under 10.4 will be promoted again when opened in 10.4.1. So that your color wheel corrections don’t change to something unexpected during promotion, Projects in these Libraries continue to follow the previous FCPX 10.4 color model. This means the look of clips where color wheels were used – and their color wheel values – hasn’t changed. More importantly, the wheels inside those Libraries will also behave the “old” way, should you make any further corrections. The new color wheels behavior only applies to new Libraries created under 10.4.1.

These images clarify how the 10.4.1 adjustments now work.

©2018 Oliver Peters