A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they’d like to see. Over the years, as an editor and tech writer, I’ve kept in touch with him through his travels from Avid to Media 100 and on to Apple. It was always good to get together and decompress at the end of a long NAB week.

With a career spent using, designing, and shepherding a wide range of post-production products, Steve probably knows a wider range of editing systems than most managers at editing system manufacturers. Naturally many readers will know him as Apple’s Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I’m a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That’s important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we work through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists features and how they all fit together well, plus a little bit about how you would market that. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off by quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts seriously. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and two weeks later a new OS comes out and breaks everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might be to fix minor, but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and third party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals that stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But, it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to be dear to you. Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and the advanced tools like DaVinci Resolve are affordable to any user. Yet, there’s still that very high-end market for systems like Filmlight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time and the full range of color grading tools in FCP X and the FX Plug plug-ins can play a very strong role in improving any production.

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros at the time. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. It took more processing power than you could get from any PC at that time to get the same number of video streams compared with digitizing from tape. In some cases you couldn’t even play the camera original video files at all, so you needed to transcode before you could start editing. All of the available transcoding codecs weren’t that high in quality or they had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode into as small file sizes as some of the other camera codecs; but given the choice between file size, quality, and performance, then quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, it’s less of an issue and so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB] We knew that it would be disruptive, of course. It was a whole new interface and approach. It integrated a bunch of cutting edge technology that people weren’t familiar with. A complete rewrite of the codebase was a huge step forward, as you can see in the speed and fluidity that is so crucial during the creative process. Metadata driven workflows, background processing, magnetic timeline — in many ways people are still trying to catch up eight years later. And now FCP X is the best selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build and it was an expense by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can’t remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line — every step is very compartmentalized. That approach doesn’t stand to benefit from new efficiencies and technological advances, because taking advantage of them requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you’ve always done it, then you won’t be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes — your competition may already have. How can you tell when your phone doesn’t ring? And that’s why I would say there are Cold Mountain moments all the time when something gets made in a way that didn’t exist a few years ago. But, it happens across this new, much wider range of markets and doesn’t get so much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding in Europe faster than the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking their editing system. I see over and over that editors on larger projects are forced to use systems they don’t like, and then they love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government funded TV (like in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep them from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do these local-language productions for the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production. 

Subscription is a different business model of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds on top of the $1 billion they already raised to fund production and develop for the local languages. I’ve been watching the series Criminal on Netflix. It’s a crime drama based on police interrogations, with separate versions done in four different countries: English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that’s why I watched them all!). I’ve never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances with creating new workflows with FCP X and shared storage. They are using 1.5 petabytes of storage and six Synology servers, with 30 shows being edited in FCP X right now. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but to get a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X — you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) for multiple iOS recording devices to sync with rock-solid timecode and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being “semi-retired” is if something’s interesting to me, I just dig into it. It’s exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket, and that opens the world up to new voices. If you were to graph it – the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero sum game — I win — you lose. Now there are so many potential needs for video we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCPco.

©2019 Oliver Peters

More about ProRes RAW

A few weeks ago I wrote a two-part post – HDR and RAW Demystified. In the second part, I covered Apple’s new ProRes RAW codec. I still see a lot of misinformation on the web about what exactly this is, so I felt it was worth an additional post. Think of this post as an addendum to Part 2. My apologies up front if there is some overlap between this and the previous post.

_____________________________

Camera raw codecs have been around since before RED Digital Camera brought out their REDCODE RAW codec. At NAB, Apple decided to step into the game. RED brought the innovation of recording the raw signal as a compressed movie file, making on-board recording and simplified post-production possible. Apple has now upped the game with a codec that is optimized for multi-stream playback within Final Cut Pro X, thus taking advantage of how FCPX leverages Apple hardware. At present, ProRes RAW is incompatible with other applications, with the exception of Motion, which will read and play the files, but with incorrect default – albeit correctable – video levels.

ProRes RAW is only an acquisition codec and, for now, can only be recorded externally using an Atomos Inferno or Sumo 19 monitor/recorder, or in-camera with DJI’s Inspire 2 or Zenmuse X7. Like all things Apple, the complexity is hidden under the surface. You don’t get the type of specific raw controls made available for image tweaking, as you do with RED. But, ProRes RAW will cover the needs of most camera raw users, making this the raw codec “for the rest of us”. At least that’s what Apple is banking on.

Capturing in ProRes RAW

The current implementation requires a camera that exports a camera raw signal over SDI, which in turn is connected to the Atomos, where the conversion to ProRes RAW occurs. Although no one is very specific about the exact process, I would presume that Atomos’ firmware is taking in the camera’s form of raw signal and rewrapping or transforming the data into ProRes RAW. This means that the Atomos firmware would require a conversion table for each camera, which would explain why only a few Sony, Panasonic, and Canon models qualify right now. Others, like ARRI Alexa or RED cameras, cannot yet be recorded as ProRes RAW. The ProRes RAW codec supports up to 12-bit color depth, but what you actually get depends on the camera. If the SDI output to the Atomos recorder is only 10-bit, then that’s the bit-depth recorded.

Until more users buy or update these specific Atomos products – or more manufacturers become licensed to record ProRes RAW onboard the camera – any real-world comparisons and conclusions come from a handful of ProRes RAW source files floating around the internet. That, along with the Apple and Atomos documentation, provides a pretty solid picture of the quality and performance of this codec group.

Understanding camera raw

All current raw methods depend on single-sensor cameras that capture a Bayer-pattern image. The sensor uses a monochrome mosaic of photosites, which are filtered to register the data for light in the red, green, or blue wavelengths. Nearly all of these sensors have twice as many green receptors as red or blue. At this point, the sensor is capturing linear light at the maximum dynamic range that the camera and its sensor are capable of. It’s just an electrical signal being turned into data, but without compression (within the sensor). The signal can be recorded as a camera raw file, with or without compression. Alternatively, it can also be converted directly into a full-color video signal and then recorded – again, with or without compression.

If the RGGB photosite data (camera raw) is converted into RGB pixels, then sensor color information is said to be “baked” into the file. However, if the raw photosite data is stored as-is and only converted to RGB in post, the sensor data is preserved intact until much later in the post process. Basically, the choice boils down to whether that conversion is best performed within the camera’s electronics or later via post-production software.
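
To make that concrete, here is a minimal sketch in Python – my own illustration using NumPy and a crude nearest-neighbor approach, not any camera maker’s actual math, since real converters use far more sophisticated, proprietary interpolation – of how RGGB photosite data becomes “baked” RGB pixels:

```python
import numpy as np

def split_rggb(mosaic):
    """Split a Bayer mosaic (H x W, RGGB layout assumed) into photosite planes."""
    r  = mosaic[0::2, 0::2]   # red photosites
    g1 = mosaic[0::2, 1::2]   # green photosites on red rows
    g2 = mosaic[1::2, 0::2]   # green photosites on blue rows
    b  = mosaic[1::2, 1::2]   # blue photosites
    return r, g1, g2, b       # note: twice as many green samples as red or blue

def naive_demosaic(mosaic):
    """Nearest-neighbor demosaic: each 2x2 RGGB quad becomes one RGB pixel.
    Once this runs, the sensor data is 'baked' into RGB."""
    r, g1, g2, b = split_rggb(mosaic)
    g = (g1.astype(np.float64) + g2) / 2.0   # average the two green samples
    return np.stack([r, g, b], axis=-1)

sensor = np.random.randint(0, 4096, (8, 8))  # fake 12-bit capture from an 8x8 sensor
rgb = naive_demosaic(sensor)                 # shape (4, 4, 3)
print(rgb.shape)
```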

The effect of compression may also be less destructive (fewer visible artifacts) with a raw image, because data, rather than video, is being compressed. However, converting the file to RGB does not mean that a wider dynamic range is being lost. That’s because most camera manufacturers have adopted logarithmic encoding schemes, which allow a wide color space and a high dynamic range (big exposure latitude) to be carried through into post. HDR standards are still in development and have been in testing for several years, completely independent of whether or not the source files are raw.
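
The log idea itself is simple to sketch. Below is a toy, invertible log curve in Python – my own simplification, not Log-C, S-Log, or any vendor’s actual formula – that spreads an assumed 14 stops of linear light evenly across the 0–1 code range and recovers the linear values on the way back:

```python
import math

STOPS = 14.0          # assumed sensor dynamic range (illustrative)
BLACK = 2.0 ** -8     # assumed linear value mapped to code 0

def log_encode(linear):
    """Toy log curve: spread ~14 stops of linear light evenly across 0..1."""
    return max(0.0, (math.log2(max(linear, BLACK)) - math.log2(BLACK)) / STOPS)

def log_decode(code):
    """Inverse curve: recover the linear-light value from the log code value."""
    return 2.0 ** (code * STOPS + math.log2(BLACK))

mid_gray = 0.18
code = log_encode(mid_gray)
print(code, log_decode(code))   # round-trips back to ~0.18
```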

ProRes RAW compression

ProRes RAW and ProRes RAW HQ are both compressed codecs with roughly the same data footprint as ProRes and ProRes HQ. Both raw and standard versions use a variable bitrate form of compression, but in different ways. Apple explains it this way in their white paper: 

“As is the case with existing ProRes codecs, the data rates of ProRes RAW are proportional to frame rate and resolution. ProRes RAW data rates also vary according to image content, but to a greater degree than ProRes data rates. 

With most video codecs, including the existing ProRes family, a technique known as rate control is used to dynamically adjust compression to meet a target data rate. This means that, in practice, the amount of compression – hence quality – varies from frame to frame depending on the image content. In contrast, ProRes RAW is designed to maintain constant quality and pristine image fidelity for all frames. As a result, images with greater detail or sensor noise are encoded at higher data rates and produce larger file sizes.”
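
In rough terms, the distinction the white paper draws looks something like this – a conceptual Python sketch with a stand-in zlib “compressor,” not the actual codec logic:

```python
import zlib
import numpy as np

def compress(frame, q):
    """Stand-in compressor: quantize by q, then deflate. Purely illustrative."""
    return zlib.compress((frame / q).astype(np.uint8).tobytes())

def encode_rate_controlled(frames, target_bytes):
    """Conventional rate control (existing ProRes family): raise the quantizer
    until each frame fits the target size -- so quality varies with content."""
    out = []
    for frame in frames:
        q = 1.0
        data = compress(frame, q)
        while len(data) > target_bytes:  # frame too big? compress harder
            q *= 1.25                    # detailed frames lose more quality
            data = compress(frame, q)
        out.append(data)
    return out

def encode_constant_quality(frames, fixed_q):
    """ProRes RAW approach per the white paper: hold quality constant --
    detailed or noisy frames simply produce more bits and larger files."""
    return [compress(frame, fixed_q) for frame in frames]

frames = [np.random.randint(0, 255, (64, 64)) for _ in range(3)]  # noisy test frames
print([len(d) for d in encode_constant_quality(frames, 1.0)])   # bigger files
print([len(d) for d in encode_rate_controlled(frames, 1500)])   # capped files
```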

ProRes RAW and HDR do not depend on each other

One of my gripes, when watching some of the ProRes RAW demos on the web and related comments on forums, is that ProRes RAW is being conflated with HDR. This is simply inaccurate. Raw applies to both SDR and HDR workflows. HDR workflows do not depend on raw source material. One of the online demos I saw recently immediately started with an HDR FCPX Library. The demo ProRes RAW clips were imported and looked blown out. This made for a dramatic example of recovering highlight information. But, it was wrong!

If you start with an SDR FCPX Library and import these same files, the default image looks great. The hitch here is that these ProRes RAW files were shot with a Sony camera and a default LUT is applied in post. That’s part of the file’s metadata. To my knowledge, all current, common camera LUTs are based on conversion to the Rec709 color space, not HDR or wide gamut. If you set the inspector’s LUT tab to “none” in either SDR or HDR, you get a relatively flat, log image that’s easily graded in whatever direction you want.

What about raw-specific settings?

Are there any advantages to camera raw in the first place? Most people will point to the ability to change ISO values and color temperature. But these aren’t actually something inherently “baked” into the raw file. Instead, this is metadata, dialed in by the DP on the camera, which optimizes the images for the sensor. ISO is a sensitivity concept based on the older ASA film standard for exposing film. In modern digital cameras, it is actually an exposure index (EI), which is how some refer to it. (RedShark’s Phil Rhodes goes into depth in this linked article.)

The bottom line is that EI is a cross-reference to that camera sensor’s “sweet spot”. 800 on one camera might be ideal, while 320 is best on another. Changing ISO/EI has the same effect as changing gain in audio. Raising or lowering ISO/EI values means that you can either see better into the darker areas (with a trade-off of added noise) – or you see better highlight detail, but with denser dark areas. By changing the ISO/EI value in post, you are simply changing that reference point.
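
As a back-of-the-envelope illustration (my own simplification, not anything from Apple’s or RED’s documentation), re-rating raw footage in post behaves just like a gain change on linear-light values:

```python
def rerate(linear_value, ei_old, ei_new):
    """Changing the ISO/EI reference point is effectively a linear gain change:
    doubling EI pushes everything up one stop, revealing shadow detail but
    amplifying noise; halving EI does the reverse."""
    return linear_value * (ei_new / ei_old)

mid_gray = 0.18                     # linear-light mid-gray
print(rerate(mid_gray, 800, 1600))  # one stop up   -> 0.36
print(rerate(mid_gray, 800, 400))   # one stop down -> 0.09
```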

In the case of ProRes RAW and FCPX, there are no specific raw controls for any of this. So it’s anyone’s guess whether changing the master level wheel or the color temp/tint sliders within the color wheels panel is doing anything different for a ProRes RAW file than doing the same adjustment for any other RGB-encoded video file. My guess is that it’s not.

In the case of RED camera files, you have to install a camera raw plug-in module in order to work with the REDCODE raw codec inside of Final Cut Pro X. There is a lot of control of the image, prior to tweaking with FCPX’s controls. However, the amount of image control for the raw file is significantly greater for a REDCODE file in Premiere Pro than inside of FCPX. Again, my suspicion is that most of these controls take effect after the conversion to RGB, regardless of whether or not the slider lives in a specific camera raw module or in the app’s own color correction controls. For instance, changing color temperature within the camera raw module has no correlation to the color temperature control within the app’s color correction tools. It is my belief that few of these actually adjust file data at the raw level, regardless of whether this is REDCODE or ProRes RAW. The conversion from raw to RGB is proprietary with every manufacturer.

What is missing in the ProRes RAW implementation is any control over the color science used to process the image, along with de-Bayering options. Over the years, RED has reworked/improved its color science, which theoretically means that a file recorded a few years ago can look better today (using newer color science math) than it originally did. You can select among several color science models, when you work with the REDCODE format. 

You can also opt to lower the de-Bayering resolution to 1/2, 1/4, 1/8, etc. for a RED file. When working in a 1080p timeline, this speeds up playback performance with minimal impact on the visible resolution displayed in the viewer. For full-quality conversion, software de-Bayering also yields different results than hardware acceleration, as with the RED Rocket-X card. While this level of control is nice to have, I suspect that’s the sort of professional complication that Apple seeks to avoid.
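
Reusing the toy demosaic idea from earlier, a reduced-resolution decode is easy to picture – again my own sketch, not RED’s actual math. The decoder simply reads fewer Bayer quads, so there is less data to interpolate and playback gets faster:

```python
import numpy as np

def fractional_demosaic(mosaic, step=2):
    """Decode at reduced resolution by skipping Bayer quads.
    step=1 -> 1/2 res (one RGB pixel per quad), step=2 -> 1/4, step=4 -> 1/8."""
    r = mosaic[0::2 * step, 0::2 * step]   # take one quad every `step` quads
    g = mosaic[0::2 * step, 1::2 * step]
    b = mosaic[1::2 * step, 1::2 * step]
    return np.stack([r, g, b], axis=-1)    # fewer pixels -> faster playback

sensor = np.random.randint(0, 4096, (16, 16))
print(fractional_demosaic(sensor, step=2).shape)   # (4, 4, 3) from a 16x16 mosaic
```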

The main benefit of ProRes RAW may be a somewhat better-quality image carried into post at a smaller file size. To get comparable RGB image quality you’d need to go up to uncompressed, ProRes 4444, or ProRes 4444 XQ – all of which become very taxing in post. Yet, for many standard productions, I doubt you’ll see that great of a difference. Nevertheless, more quality with a lower footprint will definitely be welcomed.

People will want to know whether this is a game-changer or not. On that count, probably not. At least not until there are a number of in-camera options. If you don’t edit – and finish – with FCPX, then it’s a non-starter. If you shoot with a camera that records in a high-quality log format, like an ARRI Alexa, then you won’t see much difference in quality or workflow. If you shoot with any RED camera, you have less control over your image. On the other hand, it’s a definite improvement over all raw workflows that capture in image sequences. And it breathes some life into an older camera, like the Sony FS700. So, on balance, ProRes RAW is an advancement, but just not one that will affect as large a part of the industry as the rest of the ProRes family has.


©2018 Oliver Peters

Comparing Color, Resolve, SpeedGrade and Symphony


It’s time to talk about color correctors. In this post, I’ll compare Color, Resolve, SpeedGrade and Symphony. These are the popular desktop color correction systems in use today. Certainly there are other options, like Filmlight’s Baselight Editions plug-in, as well as other NLEs with their own powerful color correction tools, including Autodesk Smoke and Quantel Rio. Some of these fall outside of the budget range of small shops or don’t really provide a correction workflow. For the sake of simplicity, in this post I’ll stick with the four I see the most.


Avid Technology Media Composer + Symphony

Although it started as a separate NLE product with dedicated hardware, today’s Symphony is really an add-on option to Media Composer. The main feature that differentiates Symphony from Media Composer in file-based workflows is an enhanced color correction toolset. Symphony used to be the “gold standard” for color correction within an NLE, combining controls “borrowed” from many other applications and systems, like Photoshop, hardware proc amps and hardware versions of the DaVinci correctors. It was the first to use the color wheel control model for balance/hue offsets. A subset of the Symphony tools has been migrated into Media Composer. Basic correction features in Symphony include channel mixing, hue offsets (color balance), levels, curves and more.

Many perceive Symphony correction as a single level or layer of correction, but that’s not exactly true. Color correction occurs on two levels – segment and program track. Most of your correction is on individual clips and Symphony offers a relational grading system. This means you can apply grades based on single clips or all instances of a master clip, tape ID, camera, etc. All clips used from a common source can be automatically graded once the first instance of that clip is graded on the timeline. The program track grade allows the colorist to apply an additional layer of grading to a clip, a section of the timeline or the entire timeline. So, when the client asks for everything to be darker, a global adjustment can be made using the program track.

Symphony also offers secondary grading based on isolating colors via an HSL key and adjusting that range. Although Symphony doesn’t offer nodes or correction layers like other software, you can use Avid’s video track timeline hierarchy to add additional correction to blank tracks above those tracks containing the video clips. In this way you are using the tracks as de facto adjustment layers. The biggest weakness is the lack of built-in masking tools to create what is commonly referred to as “power windows” (a term originated by DaVinci). The workaround is to use Avid’s built-in Intraframe/Animatte effects tools to create masks. Then you can apply additional spot correction within the mask area. It takes a bit more work than other tools, but it’s definitely possible. Finally, many plug-in packages, like GenArts Sapphire, Boris Continuum Complete and Magic Bullet Looks include vignette filters that will work with Symphony.

The bottom line is that Symphony started it all, though by today’s standards it is a bit long in the tooth. Nevertheless, the relational grading model – and the fact that you are working within the NLE and can freely move between color correction and editing/trimming – makes Symphony a fast unit to operate, especially in time-sensitive, long-form productions, like TV shows.


Adobe SpeedGrade CC

If you are a current Creative Cloud subscriber, then you have access to the most recent version of Adobe Premiere Pro CC and SpeedGrade CC. With the updates introduced late last year, Adobe added Direct Link interaction between Premiere Pro and SpeedGrade. When you use Direct Link to send your Premiere Pro timeline to SpeedGrade, the actual Premiere Pro sequence becomes the SpeedGrade sequence. This means codec decoding, transitions and Premiere Pro effects are handled by Premiere Pro’s effects engine, even though you are working inside SpeedGrade. As such, a project created via Direct Link supports features and codecs that would not be possible within a standalone SpeedGrade project.

Another unique aspect is that native and third-party transitions and effects used in Premiere Pro are visible (though not adjustable) when you are working inside SpeedGrade. This is an important distinction, because other correction workflows that rely on roundtrips don’t include NLE-based filters. You can’t see how the correction will be affected by a filter used in the NLE timeline. Naturally, in the case of SpeedGrade, this only works if you are working on a machine with the same third-party filters installed. When you return to Premiere Pro from SpeedGrade, the color corrections on clips are collapsed into a Lumetri filter effect that is applied to the clip or adjustment layer within the Premiere Pro sequence. Essentially this Lumetri effect is similar to a LUT that encapsulates all of the grading layers applied in SpeedGrade into a single effect in Premiere Pro. This is possible because the two applications share the same color science. The result is a render-free workflow with the easy ability to go back-and-forth between Premiere Pro and SpeedGrade for changes and adjustments. Unlike a standard LUT, Lumetri filters can carry masks and keyframes, and they are 100% precise.

As a color corrector, SpeedGrade is designed with a layer-based interface, much like Photoshop. Layers can be primary (fullscreen), secondary (keys and masks) or filters. A healthy selection of effects filters and LUTs are included. The correction model splits the signal into what amounts to a 12-way color wheel arrangement. There are lift/gamma/gain controls for the overall image, as well as for each of the shadow, middle and highlight ranges. Controls can be configured as wheels or sliders, with additional sliders for contrast, pivot, temperature (red vs. blue bias), magenta (red/blue vs. green bias) and saturation. There are no curves controls.

Overall, I like the looks I get with SpeedGrade, but I find it lacking in some ways. There are definite plusses and minuses. I miss the curves. It currently does not work with Blackmagic Design hardware. Matrox, Bluefish and AJA are OK. It’s got a tracker, but I find both tracking and masking to be mediocre. The biggest workflow shortcoming is the lack of a temporary memory register feature. You can save a whole grade, which saves the entire stack of grading layers applied to a clip as a Lumetri filter. You can apply grades from earlier timeline clips quite simply and SpeedGrade lets you open multiple playheads for comparison/correction between multiple shots on the timeline. You can access the nine grades ahead and the nine grades beyond the current playhead position. You can also copy the grade from the clip below mouse position to the clip under the playhead by pressing the C key. What you cannot do is store a random set of grades or just a single layer in a temporary buffer and then apply it from that buffer somewhere else in the timeline. Adding these two items would greatly speed up the SpeedGrade workflow.


Blackmagic Design DaVinci Resolve

The DaVinci name is legendary among color correction products, but that reputation was earned with its hardware products, like the DaVinci 2K. Resolve was the software-based product built around a Linux cluster. When Blackmagic bought the assets and technology of DaVinci, all of the legacy hardware products were dropped, in favor of concentrating on Resolve as the software that had the most life for the future. There are now four versions, including Resolve Lite (free), Resolve (paid – software only), Resolve with a Blackmagic control surface and Resolve for Linux. The first three work on Mac and PC. You may download the free Lite version from the Blackmagic website or Apple’s Mac App Store. The Lite version has nearly all of the power of the paid software, but with a few limitations: noise reduction, stereoscopic tools and output at resolutions above UltraHD all require a paid version.

I’m writing this based on Resolve 10, which has rudimentary editing features. It is designed as a standalone color corrector that can be used for some editing. Blackmagic Design doubled down on the editing side with Resolve 11 (shown at NAB 2014). When that’s finally released this summer, you’ll have a powerful NLE built into the application. The demos at NAB were certainly impressive. If that turns out to be the case, Resolve 11 would function as an Avid Symphony or Quantel Rio type of system. That means you could freely move between creative editing and color correction, simply by changing tabs in the interface. For now, Resolve 10 is mainly a color corrector, with some very good roundtrip and conforming support for other NLEs. Specifically there is very good support for Avid and FCP X workflows.

As a color corrector, Resolve offers the widest set of correction tools of any of these systems. In the work I’ve done, Resolve allows for more extreme grading and is more precise when trying to correct problem shots. I’ve done corrections with it that would have been impossible with any other tool. The correction controls include curves, wheels, primary sliders, channel mixers and more. Corrections are node-based and can be applied to clips or an entire track. Nodes can be applied in a serial or parallel fashion, with special splitter/combiner and layer mixing nodes. The latter includes Photoshop-style blend modes. Unlike SpeedGrade, you can store the value of a single node in a buffer (using the keyboard copy function) and then paste the value of just that node somewhere else. This makes it pretty fast when working up and down a timeline. Finally, the tracker is amazing.
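
Conceptually, a node tree is just function composition, which is easy to sketch – my own analogy in Python, not Resolve’s internals. Serial nodes chain corrections one after another, while parallel nodes feed the same input to several corrections and a combiner mixes the results:

```python
def lift(x, amt):
    return x + amt          # toy correction: raise the blacks

def gain(x, amt):
    return x * amt          # toy correction: scale the highlights

def serial(x, nodes):
    """Serial nodes: each node's output feeds the next node's input."""
    for node in nodes:
        x = node(x)
    return x

def parallel(x, nodes):
    """Parallel nodes: every node sees the same input; a combiner mixes results."""
    results = [node(x) for node in nodes]
    return sum(results) / len(results)

pixel = 0.4   # a normalized pixel value
print(serial(pixel, [lambda v: lift(v, 0.05), lambda v: gain(v, 1.2)]))    # 0.54
print(parallel(pixel, [lambda v: gain(v, 0.8), lambda v: gain(v, 1.4)]))   # 0.44
```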

A few things bother me about Resolve, in spite of its powerful toolset. The interface almost presents too many tools and it becomes very easy to lose track of what was done and where. There is no large viewer or fullscreen mode that doesn’t hide the node tree. This forces a lot of toggling between workspace configurations. If you have two displays, you cannot use the second display for anything other than the scopes and audio mixer. (This will change with Resolve 11.) Finally, you can only use Blackmagic Design hardware to view the video output on a grading monitor.


Apple Color

Some of you are saying, “Why talk about that? It was killed off a few years ago! Who uses that anymore?” Yes, I know. What people so quickly forget is that when the software was FinalTouch (before Apple’s purchase), it was very expensive and considered very innovative. Apple bought it, added some features and cleaned up some of the workflow. As part of Final Cut Studio, it set the standard for round-tripping with an NLE. Unfortunately for many Mac users, it retained its less glossy, “Unixy” interface and thus, didn’t really catch on for many editors. However, it still works just fine on the newest machines and OS versions and remains a fast, high-quality color corrector.

Nearly all of the long-form jobs I’ve done – including feature films and TV shows up to even a few months ago – have been done with Color. There are two reasons that I prefer it. The first is that most of these jobs were cut using FCP 7, so it’s still the most integrated software for these projects. More importantly, there are several key features that make it faster than SpeedGrade and Resolve for projects that fall within a standard range of grading. In other words, the in-camera look was good and there were no huge problem areas, plus the desired grade didn’t swing into extreme looks.

Color is designed with 10 levels of grading per clip – primary in, eight secondaries and primary out. Since secondaries can be fullscreen or a portion of the image qualified by an HSL key or mask, each secondary layer can actually have two corrections – inside and outside of the mask. In addition to these, there’s a ColorFX layer for node-based filter effects, which can also include color adjustments. In reality, the maximum number of corrections to a single clip could be up to 19. The primary corrections can include value changes for RGB lift/gamma/gain and saturation levels, as well as printer lights. On top of this are lift/gamma/gain color wheels and luma controls. Lastly there are curves. The secondaries include custom mask shapes and hue/sat/luma curves. There’s a tracker, too, but it’s not that great.

Where Color still shines for me is in workflow. Each layer is represented by a labelled bar on the timeline under the clip. This makes it easy to apply only a single secondary adjustment to other clips on the timeline simply by sliding the corresponding secondary bar from one timeline clip to one or more of the others. For example, I used Secondary 3 to qualify a person’s face and brighten it. I could then simply drag the bar for S3 that appears under the first clip on the timeline over to every other clip with the same person and similar set-up. All without selecting each of these clips prior to applying the adjustment.

Color works with all cards that work with Final Cut Pro, so there’s no AJA versus Blackmagic issue as mentioned above. Dual monitors work well. You can have scopes and the viewer (or a fullscreen viewer) on one display and the full control interface on the other. Realistically, Color works best with up to 2K video and one of the standard Apple codecs (uncompressed or ProRes work best). A lot of the footage I’ve graded with it was ProResHQ or ProRes 4444 that came native from an ARRI Alexa or transcoded from a C300, RED or a Canon 5D/7D. But I’ve also done a film that was all native EX rewrapped as .mov from a Sony camera and Color had no issues. Log-profile footage grades very nicely in Color, so Alexa ProRes 4444 encoded as Log-C forms a real sweet spot for Apple Color.

©2014 Oliver Peters

NAB 2014 Thoughts

Whodathunkit? More NLEs, new cameras from new vendors and even a new film scanner! I’ve been back from NAB for a little over a week and needed to get caught up on work while decompressing. The following are some thoughts in broad strokes.

Avid Connect. My trip started early with the Avid Connect customer event. This was a corporate gathering with over 1,000 paid attendees. Avid execs and managers outlined the corporate vision of Avid Everywhere in presentations that were head-and-shoulders better than any executive presentations Avid has given in years. For many who attended, it was to see if there was still life in Avid. I think the general response was receptive and positive. Avid Everywhere is basically a realignment of existing and future products around a platform concept. That has more impact if you own Avid storage or asset management software. Less so, if you only own a seat of Media Composer or ProTools. No new software features were announced, but new pricing models were introduced, with options to purchase or rent individual seats of the software – or to rent floating licenses in larger quantities.

4K. As predicted, 4K was all over the show. However, when you talked to vendors and users, there was little clear direction about actual mastering in 4K. It is starting to be a requirement in some circles, like delivering to Netflix, for example; but for most users 4K stops at acquisition. There is interest for archival reasons, as well as for reframing shots when the master is HD or 2K.

Cameras. New cameras from Blackmagic Design. Not much of a surprise there. One is the bigger, ENG-style URSA, which is Blackmagic’s solution to all of the add-ons people use with smaller HDSLR-sized cameras. The biggest feature is a 10” flip-out LCD monitor. AJA was the real surprise with its own 4K Cion camera. Think KiPro Quad with a camera built around it. Several DPs I spoke with weren’t that thrilled about either camera, because of size or balance. A camera that did get everyone jazzed was Sony’s A7s, one of their new Alpha series HDSLRs. It’s 4K-capable when recorded via HDMI to an external device. The images were outstanding. Of course, 4K wasn’t everywhere. Notably not at ARRI. The news there is the Amira, a sibling to the Alexa. Both share the same sensor design, with the Amira designed as a documentary camera. I’m sure it will be a hit, in spite of being a 2K camera.

Mac Pro. The new Mac Pro was all over the show in numerous booths. Various companies showed housings and add-ons to mount the Mac Pro for various applications. Lots of Thunderbolt products on display to address expandability for this unit, as well as Apple laptops and eventually PCs that will use Thunderbolt technology. The folks at FCPworks showed a nice DIT table/cart designed to hold a Mac Pro, keyboard, monitoring and other on-set essentials.

FCP X. Speaking of FCP X, the best place to check it out was at the off-site demo suite that FCPworks was running during the show. The suite demonstrated a number of FCP X-based workflows using third-party utilities, shared storage from Quantum and more. FCP X was in various booths on the NAB show floor, but to me it seemed limited to partner companies, like AJA. I thought the occurrences of FCP X in other booths were overshadowed by Premiere Pro CC sightings. No new FCP X feature announcements or even hints were made by Apple in any private meetings.

NLEs. The state of nonlinear editing is in more flux than ever. FCP X seems to be picking up a little steam, as is Premiere Pro. Yet, still no clear market leader across all sectors. Autodesk announced Smoke 2015, which will be the last version you can buy. Following Adobe’s lead, this year they shift to a rental model for their products. Smoke 2015 diverges more from the Flame UI model with more timeline-based effects than Smoke 2013. Lightworks for the Mac was demoed at the EditShare booth, which will make it another new option for Mac editors. Nothing new yet out of Avid, except some rebranding – Media Composer is now Media Composer | Software and Sphere is now Media Composer | Cloud. Expect new features to be rolled in by the end of this year. The biggest new player is Blackmagic Design, who has expanded the DaVinci Resolve software into a full-fledged NLE. With a cosmetic resemblance to FCP X, it caused many to dub it “the NLE that Final Cut Pro 8 should have been”. Whether that’s on the mark or just irrational exuberance has yet to be determined. Suffice it to say that Blackmagic is serious about making it a powerful editor, which for now is targeted at finishing.

Death of i/o cards. I’ve seen little mention of this, but it seems to me that dedicated PCIe video capture cards are a thing of the past. KONA and Decklink cards are really just there to support legacy products. They have less relevance in the file-based world. Most of the focus these days is on monitoring, which can be easily (and more cheaply) handled by HDMI or small Thunderbolt devices. Look at AJA and Matrox, for example: most of their PCIe card business now targets the OEM market. AJA supplies Quantel with their 4K i/o cards. The emphasis for direct customers is on smaller output-only products, mini-converters or self-contained format converters.

Film. If you were making a custom, 35mm film scanner – get out of the business, because you are now competing against Blackmagic Design! Their new film scanner is based on technology acquired through the purchase of Cintel a few months ago. Now Blackmagic has introduced a sleek 35mm scanner capable of up to 30fps with UltraHD images. It’s $30K and connects to a Mac Pro via Thunderbolt 2. Simple operation and easy software (plus Resolve) will likely rekindle the interest at a number of facilities for the film transfer business. That will be especially true at sites with a large archive of film.

Social. Naturally NAB wouldn’t be the fun it is without the opportunity to meet up with friends from all over the world. That’s part of what I get out of it. For others it’s the extra training through classes at Post Production World. The SuperMeet is a must for many editors. The Avid Connect gala featured entertainment by the legendary Nile Rodgers and his band Chic. Nearly two hours of non-stop funk/dance/disco. Quite enjoyable regardless of your musical taste. So, another year in Vegas – and not quite the ho-hum event that many had thought it would be!


©2014 Oliver Peters


Comparing Final Cut Pro X, Media Composer and Premiere Pro CC


The editing world includes a number of software options, such as Autodesk Smoke, Grass Valley EDIUS, Lightworks, Media 100, Sony Vegas and Quantel. But the lion’s share of editing is done on three platforms: Apple Final Cut Pro, Avid Media Composer and Adobe Premiere Pro. For the last two years many users have been holding onto legacy systems, wondering when the dust would settle and which editing tool would become dominant again. By the end of 2013, these three companies had released significant updates that give users a good idea of each one’s future direction – and have many zeroing in on a selection.


Differing business models

Adobe, Apple and Avid have three distinctly different approaches. Adobe and Avid offer cross-platform solutions, while Final Cut Pro X only works on Apple hardware. Adobe offers most of its content creation software only through a Creative Cloud subscription. Individual users have access to all creative applications for $49.99 a month (not including promotional deals), but when they quit subscribing, the applications cease to function after a grace period. Users may install the software on as many computers as they like (Mac or PC), but only two can be activated at any time.

Apple’s software sells through the Mac App Store. Final Cut Pro X is $299.99, with another $49.99 each for Motion and Compressor. Individual users may install and use these applications on any Mac computers they own, but enterprise users are supposed to purchase volume licenses to cover one installation per computer. With the release of FCP X 10.1, it appears that Apple is offering updates at no charge – once you buy Final Cut, you never pay for updates. Whether that continues as official Apple policy is unknown. FCP X uses its own flavor of XML (FCPXML) for timeline interchange with other applications, so if you need to send material via EDL, OMF or AAF – or even interchange with previous versions of Final Cut Pro – you will need to augment FCP X with a variety of third-party utilities.
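For those who haven’t looked under the hood, here is a heavily abridged sketch of the general shape of an FCPXML file. The element names, attributes and version number are illustrative only – the schema changes between FCP X releases – but it shows the rational-number time values and resource references that interchange utilities have to translate.

    <?xml version="1.0" encoding="UTF-8"?>
    <fcpxml version="1.3">
        <resources>
            <format id="r1" name="FFVideoFormat1080p2398"
                    frameDuration="1001/24000s" width="1920" height="1080"/>
            <asset id="r2" name="A001_C001" src="file:///Volumes/Media/A001_C001.mov"
                   duration="600600/24000s" hasVideo="1" hasAudio="1"/>
        </resources>
        <project name="Demo Project">
            <sequence format="r1">
                <spine>
                    <clip name="A001_C001" offset="0s" duration="120120/24000s">
                        <video ref="r2" duration="120120/24000s"/>
                    </clip>
                </spine>
            </sequence>
        </project>
    </fcpxml>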

Avid Media Composer remains the only one of the three that follows a traditional software ownership model. You purchase, download and install the software and activate the license. You may install it on numerous Macs and PCs, but only one at a time can be activated. The software bundle runs $999 and includes Media Composer, several Avid utilities, Sorenson Squeeze, Avid FX from Boris FX and Avid DVD by Sonic. You can expand your system with three extra software options: Symphony (advanced color correction), ScriptSync (automated audio-to-script alignment) and PhraseFind (a dialogue search tool). The Symphony option also includes the Boris Continuum Complete filters.

Thanks to Avid’s installation and activation process, Media Composer is the most transportable of the three. Simply carry Mac and Windows installers on a USB key along with your activation codes. It’s as simple as installing the software and activating the license, as long as any other installations have been deactivated first. While technically the FCP X application could be moved between machines, the new machine must be authorized as part of a valid Apple ID account – something often frowned upon in corporate environments. Similarly, you can activate a new machine on a Creative Cloud account (as long as you’ve signed out on the other machines), but the software must be downloaded again to that local machine. No USB key installers here.


Dealing with formats

All three applications are good at handling a variety of source media codecs, frame rates and sizes. In some cases, like RED camera files, plug-ins need to be installed and kept current. Both Apple and Avid will directly handle some camera formats without conversion, but each uses a preferred codec – ProRes for Final Cut Pro X and DNxHD for Media Composer. If you want the most fluid editing experience, then transcode to an optimized codec within the application.
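For reference, here is a minimal sketch of the same idea performed outside the NLE with the free ffmpeg tool – each application’s internal transcoder does the equivalent behind the scenes. The file names are placeholders, and the DNxHD line assumes a 1920 x 1080, 23.976fps source, since that codec only accepts specific size/rate/bitrate combinations.

    # A sketch of creating optimized media outside the NLE with ffmpeg.
    import subprocess

    # ProRes 422 HQ, the sort of file FCP X creates as "optimized media"
    subprocess.run(["ffmpeg", "-i", "A001_C001.mov",
                    "-c:v", "prores_ks", "-profile:v", "3",
                    "-c:a", "pcm_s16le", "A001_C001_prhq.mov"], check=True)

    # DNxHD 175 for a 1080p/23.976 Avid project
    subprocess.run(["ffmpeg", "-i", "A001_C001.mov",
                    "-c:v", "dnxhd", "-b:v", "175M", "-pix_fmt", "yuv422p",
                    "-c:a", "pcm_s16le", "A001_C001_dnx.mov"], check=True)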

Adobe hasn’t developed its own mezzanine codec. In fact, Premiere Pro CC has no built-in transcoding tools, leaving that instead to Adobe Prelude or Adobe Media Encoder. By design, the editor imports files in their native format without transcoding or rewrapping and works with those directly in the sequence. A mix of various formats, frame rates, codecs and sizes doesn’t always play as smoothly on a single timeline as optimized media, like DNxHD or ProRes, would; but my experience is that, of these three, Premiere Pro CC handles such a mix best.

Most of us work with HD (or even SD) deliverables, but higher resolutions (2K, UHD, 4K) are around the corner. All three NLEs handle bigger-than-HD formats as source media without much difficulty. I’ve tested the latest RED EPIC Dragon 6K camera files in all three applications and they handle the format well. Both Adobe and Apple can output bigger sequence sizes, too, such as 2K and 4K. For now, Avid Media Composer is still limited to HD (1920 x 1080 maximum) sequences and output sizes. Here are some key features of the most recent updates.


Adobe Premiere Pro CC (version 7.2.1)

The current build of Premiere Pro CC was released towards the end of 2013. Adobe has been enhancing editing features with each new update, but two big selling points of this version are Adobe Anywhere integration and Direct Link between Premiere Pro CC and SpeedGrade CC. Anywhere requires shared server infrastructure for collaborative workflows, so it isn’t applicable to the majority of users, who don’t have an Anywhere installation in place. Nevertheless, this version adds the client-side integration, so those who do can connect, sign in and work.

Of more interest is Direct Link, which sends the complete Premiere Pro CC timeline into SpeedGrade CC for color correction. Since you are working directly with the Premiere Pro timeline, SpeedGrade functions with a subset of its usual controls; operations like conforming media to an EDL are inactive. Direct Link also facilitates the use of various compressed codecs that SpeedGrade wouldn’t normally handle by itself, since decoding is taken care of by Premiere Pro’s media engine. When you’ve completed color correction, the saved timeline is sent back to Premiere Pro, where each clip carries an applied Lumetri filter containing the grading information from SpeedGrade. The roundtrip is achieved without any intermediate rendering.

This solution is a good first effort, but I find that the response of SpeedGrade’s controls via Direct Link is noticeably slower than working directly in a SpeedGrade project – presumably a result of Premiere Pro working in the background. Clips in Premiere Pro with applied Lumetri effects also require more resources to play well, and rendering definitely helps. The color roundtrip results were good in my tests, with the exception of any clips that used a filter layer with a LUT; these displayed with bizarre colors back in Premiere Pro.

You can’t talk about Premiere Pro without addressing Creative Cloud. I still view this as a “work in progress”. For instance, you are supposed to be able to sync files between your local drive and the cloud, much like Dropbox. Even though everything is current on my Mac Pro, that tab in the Creative Cloud application still says “coming soon”. Others report that it’s working for them.


Apple Final Cut Pro X (version 10.1)

This update is the tipping point for many FCP 7 users. Enough updates have been released over the past two-plus years to address many of the concerns professional editors have expressed. 10.1 requires an operating system update to Mavericks (OS X 10.9 or later) and has three marquee items – a revised media structure, optimization for 4K and overall better performance. It is clear that Apple is not about to change the inherent design of FCP X; that means no tracks and no changes to the magnetic timeline. As with any update, there are plenty of small tweaks, including enhanced retiming, audio fades on individual channels, improved split edits and a new InertiaCam stabilization algorithm.

The most obvious change is the move from separate Events and Projects folders to unified Libraries, similar to Aperture. Think of a Library as the equivalent of a Final Cut Pro 7 or Premiere Pro CC project file, containing all data for the clips and sequences associated with a production. An FCP X Library as viewed in the Finder is a bundled file, which can be opened using the Finder’s “show package contents” command. This reveals internal folders and files for Events, Projects and aliases linked to external media files. Imported files that are optionally copied into a Library are also contained there, as are rendered and transcoded files. Libraries no longer need to live at the root of a hard drive and can be created for individual productions. Editors may open and close any or all of the Libraries needed for an edit session.
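As an illustration, “show package contents” on a Library reveals a structure roughly like the one below. The layout is Apple’s to change and the names here are from my own systems, so treat this as a sketch rather than a specification.

    My Production.fcpbundle/
        Settings.plist
        Event A/
            CurrentVersion.fcpevent        (the event database)
            Original Media/                (copied files, or aliases to external media)
            Transcoded Media/              (optimized and proxy files)
            Render Files/
        Event B/
            ...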

FCP X’s performance was optimized for Mavericks, the new Mac Pro and dual-GPU processing. By design, this means improved 4K throughput, including native 4K support for ProRes, Sony XAVC and REDCODE camera raw media files. The performance boost has also filtered down to older machines – 10.1 brought better performance with 1080p ProRes and even 5K RED files to my 2009 Mac Pro. Clearly Apple wants FCP X to be a showcase for the power of the new Mac Pro, but you’ll get benefits from this update even if you aren’t ready to leap to new hardware.

Along with Final Cut Pro X 10.1, Apple also released updates to Motion and Compressor. The Motion update was necessary to integrate the new FxPlug3 architecture, which enables developers to add custom interface controls. Compressor received the biggest changes, with a complete overhaul of the interface in line with the look of FCP X.


Avid Media Composer (version 7.0.3)

The biggest feature of Media Composer 7.0.3 is optimization for new operating systems. It is qualified for Windows 8.1 and Mac OS X 10.8.5, 10.9 and 10.9.1. There are a number of interface changes, including separate audio and video effects palette tabs and a new look for the background-processing indicator icons. 24fps sound timecode is now supported, responsiveness with the Avid Artist Color controller has been improved, and the ability to export a simplified AAF file has been added.

Transcode choices gain a set of H.264 proxy codecs. These had been used in other Avid news and broadcast tools, but are now extended to Media Composer. RED support was updated to handle the RED Dragon format. With the earlier introduction of 7.0, Avid added background transcoding services and FrameFlex – Avid’s solution for bigger-than-HD files. FrameFlex enables resizing and pan/scan/zoom control within a file’s native resolution. Media Composer also accepts mixed frame rates within a single timeline by applying Motion Adapters to any clip that doesn’t match the frame rate of the project. 7.0.3 improves control over the frame-blending method, giving the editor a better choice between temporal and spatial smoothness.

There is no clear winner among these three. If you are on Windows, the choice is between Adobe and Avid. If you need 4K output today, Apple or Adobe are your best options. All three handle a wide range of popular camera formats well – especially RED. If you like tracks – go Avid or Adobe. If you want the best application for the new Mac Pro, that is clearly Apple Final Cut Pro X. These are all great tools, capable of any level of post-production – be it commercial, corporate, web, broadcast entertainment or feature films. If you’ve been on the fence for two years, now is the time to decide, because there are no bad tools – only preferences.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

The NLE that wouldn’t die II


With echoes of Monty Python in the background, two years on, Final Cut Pro 7 and Final Cut Studio are still widely in use. As I noted in my post from last November, I still see facilities with firmly entrenched and mature FCP “legacy” workflows that haven’t moved to another NLE yet. Some were ready to move to Adobe until they learned subscription was the only choice going forward. Others maintain a fanboy’s faith in Apple that the next version will somehow fix all the things they dislike about Final Cut Pro X. Others simply haven’t found the alternative solutions compelling enough to shift.

I’ve been cutting all manner of projects in FCP X since the beginning and am currently using it on a feature film. I augment it in lots of ways with plug-ins and utilities, so I’m about as deep into FCP X workflows as anyone out there. Yet, there are very few projects in which I don’t touch some aspect of Final Cut Studio to help get the job done. Some fueled by need, some by personal preference. Here are some ways that Studio can still work for you as a suite of applications to fill in the gaps.

DVD creation

There are no more version updates to Apple’s (or Adobe’s) DVD creation tools. FCP X and Compressor can author simple “one-off” discs using their export/share/batch functions. However, if you need a more advanced, authored DVD with branched menus and assets, DVD Studio Pro (as well as Adobe Encore CS6) is still a very viable tool, assuming you already own Final Cut Studio. For me, the need to do this has been reduced, but hasn’t completely gone away.

Batch export

Final Cut Pro X has no batch export function for source clips – something I find immensely helpful. For example, many editorial houses specify that their production company client supply edit-friendly “dailies” – especially when final color correction and finishing will be done by another facility or artist/editor/colorist. This is a throwback to film workflows and is most often the case with RED and ALEXA productions. Certainly a lot of the same processes can be done with DaVinci Resolve, but it’s simply faster and easier in FCP 7.

In the case of ALEXA, a lot of editors prefer to do their offline edit with LUT-corrected Rec 709 images, instead of the flat Log-C ProRes 4444 files that come straight from the camera. With FCP 7, simply import the camera files, add a LUT filter like the one from Nick Shaw (Antler Post), enable timecode burn-in if you like and run a batch export in the codec of your choice. When I do this, I usually end up with a set of Rec 709, ProRes LT files with burn-in that I can use to edit with. Since the file name, reel ID and timecode are identical to the camera masters, I can easily edit with the “dailies” and then relink to the camera masters for color correction and finishing. This works well in Adobe Premiere Pro CC, Apple FCP 7 and even FCP X.
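If FCP 7 isn’t available, a rough command-line equivalent is possible with ffmpeg’s lut3d and drawtext filters. This is only a sketch: the LUT file name and paths are placeholders, the ffmpeg build must include freetype for drawtext, and a fixed start timecode is used here for brevity, whereas FCP 7 carries each clip’s own source timecode automatically.

    # A dailies sketch: apply a viewing LUT, burn in timecode, write ProRes LT.
    import pathlib, subprocess

    for clip in sorted(pathlib.Path("/Volumes/Media/A001").glob("*.mov")):
        subprocess.run([
            "ffmpeg", "-i", str(clip),
            "-vf", ("lut3d=logc_to_rec709.cube,"
                    "drawtext=timecode='01\\:00\\:00\\:00':rate=24000/1001:"
                    "fontsize=36:fontcolor=white:x=(w-tw)/2:y=h-th-40"),
            "-c:v", "prores_ks", "-profile:v", "1",   # profile 1 = ProRes LT
            "-c:a", "copy",
            str(pathlib.Path("/Volumes/Media/Dailies") / clip.name),
        ], check=True)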

Timecode and reel IDs

When I work with files from the various HDSLRs, I prefer to convert them to ProRes (or DNxHD) and add timecode and reel ID info. In my eyes, this turns the files into professional video media that’s much more easily dealt with throughout the rest of the post pipeline. I have a specific routine for doing this, but when some of these steps fail due to a file error, I find that FCP 7 is a good back-up utility. From inside FCP 7, you can easily add reel IDs and also modify or add timecode. This metadata is embedded into the actual media file and is readable by other applications.
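Outside of FCP 7, part of this can be approximated from the command line. The sketch below uses ffmpeg’s -timecode option, which writes a QuickTime timecode track into the output; embedding a reel ID this way is less straightforward, which is exactly where FCP 7 still earns its keep. File names and values are placeholders.

    # Transcode an HDSLR file to ProRes and embed a starting timecode.
    import subprocess

    subprocess.run(["ffmpeg", "-i", "MVI_0001.MOV",
                    "-c:v", "prores_ks", "-profile:v", "2",   # ProRes 422
                    "-c:a", "pcm_s16le",
                    "-timecode", "14:30:00:00",               # writes a tmcd track
                    "MVI_0001_pr422.mov"], check=True)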

Log and Transfer

Yes, I know that you can import and optimize (transcode) camera files in FCP X. I just don’t like the way it does it. The FCP 7 Log and Transfer module allows the editor to set several naming preferences upon ingest, including custom names and reel IDs. That metadata is then embedded directly into the QuickTime movie created by the Log and Transfer module. FCP X doesn’t embed name and ID changes into the media file, but rather into its own database, so this information cannot be retrieved by simply reading the media file in another application. As a result, when I work with media from a C300, for example, my first step is still Log and Transfer in FCP 7, before I start editing in FCP X.
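A quick way to check whether names, reels and timecode actually live in a media file – as opposed to an NLE’s private database – is to dump its embedded tags. Any metadata reader will do; this sketch uses ffprobe with a placeholder file name.

    # Show the metadata embedded in the file itself.
    import subprocess

    subprocess.run(["ffprobe", "-hide_banner",
                    "-show_entries", "format_tags:stream_tags",
                    "C300_clip.mov"], check=True)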

Conform and reverse telecine

A lot of cameras offer the ability to shoot at higher frame rates with the intent of playing the footage back at a slower frame rate for a slow-motion effect – “overcranking” in film terms. Advanced cameras like the ALEXA, RED One, EPIC and Canon C300 write a timebase reference into the file that tells the NLE that a file recorded at 60fps is to be played at 23.98fps. This is not true of HDSLRs like a Canon 5D, 7D or a GoPro; you have to tell the NLE what to do. FCP X only does this through its Retime effect, which means you are telling the file to be played as slomo, thus requiring a render.

I prefer to use Cinema Tools to “conform” the file. This alters the file header information of the QuickTime file, so that any application will play it at the conformed, rather than recorded frame rate. The process is nearly instant and when imported into FCP X, the application simply plays it at the slower speed – no rendering required. Just like with an ALEXA or RED.
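Where Cinema Tools isn’t an option, ffmpeg can do a similar header-level conform by rescaling the input timestamps during a stream copy – no re-encode, just new timing. A sketch for 59.94fps material conformed to 23.976fps (the factor is 60000/24000 = 2.5; file names are placeholders):

    # Conform 59.94p to 23.976p without re-encoding.
    import subprocess

    subprocess.run(["ffmpeg", "-itsscale", "2.5", "-i", "GOPR0001.MP4",
                    "-c", "copy",
                    "-an",   # drop audio; 2.5x-slowed sound is rarely wanted
                    "GOPR0001_2398.mp4"], check=True)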

Another function of Cinema Tools is reverse telecine. If a camera file was recorded with built-in “pulldown” – sometimes called 24-over-60 – additional redundant video fields are added to the file. You want to remove these if you are editing in a native 24p project. Cinema Tools will let you do this and in the process render a new, 24p-native file.
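The comparable ffmpeg recipe chains its fieldmatch and decimate filters and, like Cinema Tools, renders a new file. A sketch, with placeholder names:

    # Reverse telecine: rebuild progressive frames, deinterlace any leftover
    # combing, then drop the duplicate frame to get back to 23.976p.
    import subprocess

    subprocess.run(["ffmpeg", "-i", "telecined.mov",
                    "-vf", "fieldmatch,yadif=deint=interlaced,decimate",
                    "-c:v", "prores_ks", "-profile:v", "2",
                    "-c:a", "copy", "new24p.mov"], check=True)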

Color correction

I really like the built-in and third-party color correction tools for Final Cut Pro X. I also like Blackmagic Design’s DaVinci Resolve, but there are times when Apple Color is still the best tool for the job. I prefer its user interface to Resolve’s, especially when working with dual displays, and if you use an AJA capture/monitoring product, Resolve is a non-starter. For me, Color is the best choice when I get a color correction project from outside, where the editor cut in FCP 7. I’ve also done some jobs in X and then gone to Color via Xto7 and then FCP 7. It may sound a little convoluted, but it’s pretty painless and the results speak for themselves.

Audio mixing

I do minimal mixing in X. It’s fine for simple mixes, but for me a track-based application is the only way to go. I do have X2Pro Audio Convert, but many of the out-of-house Pro Tools mixers I work with prefer to receive OMFs rather than AAFs. This means going to FCP 7 first and then generating an OMF from within FCP 7. This has the added advantage that I can proof the timeline for errors first – something you can’t do if you are generating an AAF without any way to open and inspect it. FCP X timelines have a tendency to include many muted clips, which stay out of your way inside X but travel with the export. By going to FCP 7 first, you have a chance to clean up the timeline before the mixer gets it.

Any complex projects that I mix myself are done in Adobe Audition or Soundtrack Pro. I can get to Audition via the XML route – or I can go to Soundtrack Pro through XML and FCP 7 with its “send to” function. Either application works for me and most of my third-party plug-ins show up in each. Plus they both have a healthy set of their own built-in filters. When I’m done, simply export the mix (and/or stems) and import the track back into FCP X to marry it to the picture.

Project trimming

Final Cut Pro X has no media management function. You can copy/move/aggregate all of the media from a single Project (timeline) into a new Event, but these files are the source clips at full length. There is no ability to create a new project with trimmed or consolidated media – that is, source files shortened to include only the portion that was cut into the sequence, plus user-defined “handles” (an extra few frames or seconds at the beginning and end of each clip). Trimmed, media-managed projects are often required when sending your edited sequence to an outside color correction facility. It’s also a great way to archive the “unflattened” final sequence of your production, while still leaving some wiggle room for future adjustments: the sequence remains editable and you keep the ability to slip, slide or change cuts by a few frames.
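The arithmetic behind trimming is simple. Here’s a toy sketch of the range calculation with two-second handles, just to make the storage savings concrete:

    # Trim-with-handles: keep only the frames a sequence actually used,
    # padded on both sides.
    FPS = 24
    HANDLE = 2 * FPS  # two-second handles = 48 frames

    def trimmed_range(used_in, used_out, clip_length, handle=HANDLE):
        """Frame range of a source clip to keep after media managing."""
        return max(0, used_in - handle), min(clip_length, used_out + handle)

    # A 10-minute take (14,400 frames) of which only 8 seconds were cut in:
    print(trimmed_range(used_in=4500, used_out=4692, clip_length=14400))
    # -> (4452, 4740): 288 frames copied instead of 14,400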

I ran into this problem the other day, when I needed to take a production home for further work. It was a series of commercials cut in FCP X, from which I had recut four spots as director’s cuts. The edit was locked, but I wanted to finish the mix and grade at home. No problem, I thought. Simply duplicate the project with “used media”, create the new Event and “organize” (which copies media into the new Event folder). I could live with the fact that the media was full length, but there was one rub. Since I had originally edited the series of commercials using Compound Clips for selected takes, the duping process brought over all of these Compounds – even though none was actually used in the edit of the four director’s cuts. This would have meant copying nearly two-thirds of the total source media. And I could not remove the Compounds from the copied Event without also removing them from the original, which I didn’t want to do.

The solution was to send the sequence of four spots to FCP 7 and then media manage that timeline into a trimmed project. The difference: 12GB of trimmed source clips instead of hundreds of gigabytes. At home, I then sent the audio to Soundtrack Pro for a mix and the picture back to FCP X for color correction. Connect the mix back to the primary storyline in FCP X and call it done!

I realize that some of this may sound a bit complex to some readers, but professional workflows are all about having a good toolkit and knowing how to use it. FCP X is a great tool for productions that can work within its walls, but if you still own Final Cut Studio, there are a lot more options at your disposal. Why not continue to use them?

©2013 Oliver Peters

Apple expands Final Cut Pro X

On the same day Apple launched the iPad mini, the fourth generation iPad, a refresh of the iMac line and the addition of a 13” MacBook Pro with Retina display, Apple also quietly released the 10.0.6 version of Final Cut Pro X. By the end of the day, the App Store lit up and the various online forums were buzzing. The Pro Apps engineers made good on the bullet points that were pre-announced at NAB – dual viewers, multichannel audio editing, MXF plug-in support and RED camera support. Plus, there were a number of feature and interface changes to round it out – many of which appear to be in direct response to user feedback.

The four bullet points

Dual viewers. The Unified Viewer was a huge shock when FCP X was first released. As you move between a source clip in the Event Browser and the project’s edited timeline, the Viewer display toggles between these two images. You now have the option to change this behavior by opening a second Event Viewer window. Source clips show in the Event Viewer, while the main Viewer only displays the project timeline image. You cannot skim or scrub with the mouse directly from within this window. In a two-monitor configuration, you have to skim the thumbnail or filmstrip of the event clip on one display, but watch the viewer on the other screen. It’s a bit disconcerting for muscle memory, and some editors who initially clamored for it have found it less useful than they’d hoped. There is also no way to gang source clips and timelines together. Having the second viewer does add some cool new features, like the ability to have scopes with each viewer, displayed in a horizontal or vertical arrangement. The good news is that you can choose between single and dual viewers, depending on your task.

Multi-channel audio editing. To prevent audio from slipping out of sync due to user error – and to reduce timeline clutter – FCP X keeps clips as combined a/v sources. Until this release, if you shot an interview and used two audio channels for individual microphones, you could not separately edit them or mix their levels unless you broke the audio out as separate clips – and then you risked accidentally slipping them out of sync. With this update, audio channels still stay attached to their source clips, but you can expand the clip in the timeline or Inspector to reveal multiple audio channels. This enables renaming, editing, volume and pan control for each individual audio channel. Unfortunately, there’s still no global audio mixer panel, as many had hoped for.

RED camera support. The RED user community has been very vocal about wanting native edit support for their REDCODE camera raw, compressed media format. Until now, Adobe offered one of the few native editing solutions. With 10.0.6, Apple has more than met that challenge. There’s native file support at up to 5K sizes, plus you can transcode to an optimized ProRes4444 or ProRes Proxy format for a more fluid editing experience. With FCP X’s unique architecture, transcoding happens in the background, so you can start with the native files, which in turn are automatically replaced by the optimized or proxy files when ready. Edit with proxies for a lightweight load on your system (like laptop editing) and then switch to the optimized or native files for the final output. Or simply stay with the native files throughout, if that’s your preference.

The RED Rocket card is supported for accelerated playback, transcoding and rendering with full resolution debayering. Software-based renders, exports and generating optimized media will also be at full resolution, but much slower. In order to enable RED support, you’ll need to install the latest RED plug-in. The RED Rocket card also requires a firmware update. Both may be downloaded for free from RED’s website.

The best part is that you now have direct access to the RED camera raw color settings from within FCP X. Click “Modify RED RAW Settings” in the Inspector window and a floating heads up display (HUD) pops up with adjustment sliders. Select one clip or a group of clips in the event browser and change the settings for a single clip or for all by adjusting one HUD panel. Native .r3d files in a 4K project played well on my Mac Pro, thanks to multicore playback. Performance seemed comparable to what I see with Premiere Pro on the same computer. Given Apple’s optimized/proxy media workflow and the ease of adjusting raw settings, I feel that now FCP X offers the best option for cutting a RED-originated production.

MXF plug-in support. Final Cut Pro X now adds native support for MXF camera files, like Panasonic P2, Sony XDCAM and other MXF formats. Previous FCP X versions rewrapped these files into QuickTime movie containers upon import. As with FCP “legacy” versions, the 10.0.6 update lets you use plug-ins offered by Hamburg Pro Media and Calibrated Software for direct access. This enables native use of MXF files and facilitates end-to-end MXF workflows – such as the UK’s DPP digital delivery standard, once Hamburg Pro Media ships its AS-11 import and export product.

A few surprises

There are a lot of other changes throughout the application. The engineers added more metadata (like a whole slew of ARRI ALEXA and RED camera metadata), changed a number of interface functions, updated the XML format and added 42 new effects, transitions, titles and generators, including a drop shadow filter and a one-step freeze frame.

Several of these changes are big for users. We regain the ability to copy and paste clip attributes: you may paste specific effects, individual filters, transforms and audio parameters to one or multiple clips on the timeline. There’s a new range selection function. Many editors had asked for “persistent in and out points” – basically, that a source clip holds the last in/out marks made by the user. Instead, Apple opted to allow multiple marked ranges, in a fashion similar to range-based Favorites, which may take some getting used to. For instance, if you mark two ranges within a single event clip and then decide to reject the clip (with the event browser set to “Hide Rejected”), you are left with three clips instead of one. Those three clips represent the leftover, unmarked sections of the one original clip. To prevent this, you first have to mark the whole clip (the X key) and then reject it (the delete key).

Connected clips have been a learning experience for many. The benefit is that you can move a group of linked clips simply by moving the one main clip on the primary storyline. Sometimes you don’t want this – such as when you want to move a sound bite clip without moving the attached B-roll cutaway shots. Holding down the grave/tilde key as you move, slip or slide a primary storyline clip keeps any connected clips in their original place and prevents their movement.

Previously, the process for importing media files was different from the import module for camera media. These have been combined into a single-window interface. Media can be previewed in a filmstrip view from this window, regardless of whether it comes from a camera card or a file on your hard drive. If the file comes from a camera card or a mounted volume (such as a disk image made of a camera card), then you additionally have the ability to select ranges within the file for import. Once imports have started, the window may be closed, allowing you to continue editing while the import happens in the background. Commonly used locations, like a shared folder, may be dragged to a Favorites area of the window.

Lastly, the Share menu has been moved and streamlined. This is where you export media – master files, as well as batch processes like DVD creation or Vimeo uploads. You may use the existing presets or set up your own, but now there’s also a Bundle function: a folder of presets designed as a job batch. For example, if you always need to create three versions for your client – a master file, an iPhone review copy and a YouTube upload – set up a bundle with these presets and you are ready to go. There are other enhancements to Compound Clips, Markers and Multicam, as well as faster rendering performance, that I won’t go into. Suffice it to say that this update has a lot in it, so it’s well worth diving in to explore.

Things to know before you update

Final Cut Pro X 10.0.6 requires OS X 10.6.8, 10.7.5 or 10.8.2. I was already on 10.7.4, so the bump to 10.7.5 was easy through Apple’s software update. If you opt to go with 10.8.2, it’s an App Store purchase if you’re coming from an earlier OS, or an App Store update if you are on an earlier version of Mountain Lion (10.8 or 10.8.1). Running this OS X update also enables updates to Safari and Aperture (if applicable). Once you are on either of these OS versions, the App Store will let you update FCP X, Motion and Compressor from earlier installations. These are free updates if you already own the applications and, like all App Store purchases, are valid on up to five personal computers with a single Apple ID.

I’m running a three-year-old Mac Pro and a five-year-old MacBook Pro, and FCP X works fine on either. Obviously performance is better on the tower, but as most folks have noted, the newest MacBook Pro and iMac models are best overall, thanks to their i5 and i7 processors. On my Mac Pro, I tested two GPU cards – my own ATI 5870 and a Quadro 4000 on loan from NVIDIA for reviews. FCP X runs best with the ATI card, thanks to OpenCL support. I built a six-layer 1080p timeline with color correction and five 2D picture-in-picture transform effects. The timeline played in real time (high quality) without dropping frames using the ATI 5870, but choked when I tried the Quadro 4000. It turns out that card is not on Apple’s compatibility list (the older FX4800 is), even though it’s the only NVIDIA card sold in Apple’s online store. That’s a shame, because the Quadro 4000 is the better card for DaVinci Resolve or the Adobe CS6 applications. In fact, Resolve 9 is unusable under Lion with an ATI card (supposedly fixed with Mountain Lion), as it puts glitches into the highlights of the picture. For FCP X, the Quadro is fine, but the ATI is better.

Final Cut Pro X 10.0.6 seems to be a relatively benign update in how it interrelates with other hardware and software. Most of the AJA and Blackmagic Design products work well with it; the exception at launch is any of the Matrox MXO2 units. Expect driver updates from all of these companies. I’ve tested the update with a DeckLink HD Extreme 3D card in a Mac Pro and an AJA T-Tap on a Thunderbolt-enabled iMac and MacBook Pro, and each worked well. This update also bumps the XML version up to 1.2 and exposes a lot more metadata. If your workflows use one of the XML utilities, like Xto7 and 7toX, or rely on a roundtrip to DaVinci Resolve, then make sure you have updated those applications. Resolve 9.0.3 supports the new XML format and FCP X 10.0.6.

Be aware that this update has changed a lot of under-the-hood items, most notably project audio channel configurations. When you first launch FCP X after the update, existing projects and events will be converted. Usually this is fine, but it’s not without occasional anomalies, some of which affect performance. For example, the audio changes in one of my project timelines caused a lag between hitting the space bar and playback actually starting – a brand new project was fine. I have another project where levels and panning change through copy-and-pasting. Very frustrating!

In addition, a number of fresh bugs have cropped up. Some users, myself included, have experienced render problems. In my case, several projects randomly render or export with a number of corrupt frames; when I repeat the render, the corruption often lands in a different location each time. To be safe, wait for a lull in your workload before updating. To be fair, users on the newest iMacs running 10.8.2 seem to be happiest and report the fewest issues.

Final Cut Pro X 10.0.6 is generally a solid upgrade that may be the turning point for many professionals. I’ve been editing most of my broadcast and corporate projects in FCP X for months. For the most part this has been a successful endeavor – these newest issues notwithstanding. Yes, it’s different, but it’s also growing and evolving. Apple is addressing issues and concerns, so make sure you use their software feedback site. Changes in this version are a direct answer to the needs of professional editors. No software is perfect – and this update is not without its flaws – but it checks off many items that may have been objections before. At least now, folks who’ve been sitting on the fence can judge Apple’s commitment by the progress made in FCP X to date.

Originally written for Digital Video magazine / Creative Planet Networks

©2012 Oliver Peters