Color Finale 2.1 Update

 

Color grading roundtrips are messy and prone to errors. Most editors want high-quality solutions that keep them within their favorite editing application. Color Trix launched the revamped Color Finale 2 this past December with the goal of building Final Cut Pro X into a competitive, professional grading environment. In keeping with that goal, Color Trix just released Color Finale 2.1 – the first major update since the December launch. Color Finale 2.1 is a free upgrade for Color Finale 2 owners and adds several new features, including inside/outside mask grading, an image mask, a new smoothness function, and the ability to copy and paste masks between layers.

Grading with inside/outside masks

Color Finale 2 launched with trackable spline masks that could be added to any group or layer. In version 2.0, however, grading could occur either inside or outside of a mask, but not both. Version 2.1 allows a mask to be applied to a group, which then becomes the parent mask, and grading is done within that mask. If you also want to grade the area outside of that mask, simply apply a new group inside the first group and add a new mask that is an invert of the parent mask. Now you can add new layers to grade the area outside of the same mask.

In the example image, I first applied a mask around the model at the beach and color corrected her. Then I applied a new group with an inverted mask to adjust for the sky. In that group I could add additional masking, such as an edge mask to create a gradient. The parent mask around the model ensures that the sky gradient is applied behind her rather than over her in the foreground. Once you get used to this grouping strategy with inside and outside masks, you can achieve some very complex results.
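
To picture what the inverted parent mask does, think of the final frame as a blend of two separately graded versions, weighted by the mask. The following is only a conceptual sketch in Python/NumPy, meant for reasoning about the result rather than showing how Color Finale implements it:

```python
import numpy as np

def inside_outside_grade(frame, mask, grade_inside, grade_outside):
    """Blend two separately graded versions of a frame using a mask.

    frame: float RGB image, shape (H, W, 3), values 0-1
    mask:  float matte, shape (H, W), 1.0 inside the shape, 0.0 outside
    grade_inside / grade_outside: functions returning a graded copy of the frame
    """
    inside = grade_inside(frame)
    outside = grade_outside(frame)      # equivalent to grading through the inverted mask
    m = mask[..., None]                 # broadcast the matte across the RGB channels
    return inside * m + outside * (1.0 - m)

# Example: warm up the subject inside the mask, cool down the sky outside of it.
warm = lambda img: np.clip(img * np.array([1.06, 1.00, 0.94]), 0.0, 1.0)
cool = lambda img: np.clip(img * np.array([0.94, 1.00, 1.06]), 0.0, 1.0)
# graded = inside_outside_grade(frame, subject_mask, warm, cool)
```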

Image masks

The second major addition is image masks. This is a monochrome version of the image in which the dark-to-light contrast range acts as a qualifier or matte source to restrict the correction being applied to the image. The mask controls include black and white level sliders, blurring, and the ability to invert the mask. Light areas in the mask are where the color correction will be applied. This enables a number of grading tricks that are also popular in photography, including split-toning and localized contrast control.
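
Conceptually, an image mask is a luma matte derived from the clip itself. The sketch below approximates the controls described above (black and white levels, blur, invert) with NumPy and SciPy; it is an illustration of the idea, not Color Finale's actual processing:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def image_mask(frame, black=0.1, white=0.9, blur_sigma=2.0, invert=False):
    """Build a monochrome matte from the image's own luminance.

    frame: float RGB image (H, W, 3), values 0-1. Returns a matte (H, W)
    in which the light areas are where the correction will be applied.
    """
    luma = frame @ np.array([0.2126, 0.7152, 0.0722])                     # Rec 709 luma weights
    matte = np.clip((luma - black) / max(white - black, 1e-6), 0.0, 1.0)  # black/white level sliders
    matte = gaussian_filter(matte, sigma=blur_sigma)                      # soften the matte
    return 1.0 - matte if invert else matte

# Applying a correction "through" the matte:
# result = frame * (1 - matte[..., None]) + graded_frame * matte[..., None]
```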

Simply put, split-toning divides the image according to darks and lights (based on the image mask) and enables you to apply a different correction to each. This can be as extreme as a duotone look or something a bit more normal, yet still stylized.

In the duotone example, I first removed saturation from the original clip to create a black-and-white image. Then an image mask of the boxer divided the tonal range so that I could apply red and blue tinting for the duotone look.

In the second example, the image mask enabled me to create glowing highlights on the model’s face, while pushing the mids and shadows back for a stylistic appearance.

Another use for an image mask is localized contrast control. This technique allows me to isolate regions of the image and grade them separately. For example, if I want to correct only the shadow areas of the image, I can apply an image mask, invert it (so that dark areas are light in the mask), and then apply grading within just the dark areas of the image – as determined by the mask.

Smoothness

Color Finale 2 included a sharpness slider. New in version 2.1 is the ability to go in the opposite direction to soften the image, simply by moving the slider left into negative values. This slider controls the high frequency detail of the overall image – positive values increase that detail, while negative values decrease it.
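
In signal terms, a two-way sharpness/smoothness slider can be thought of as scaling the high-frequency detail around a blurred base image. The generic unsharp-mask-style sketch below illustrates that behavior; it is not Color Finale's actual algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_slider(frame, amount, sigma=1.5):
    """amount > 0 sharpens, amount < 0 softens, 0 leaves the image unchanged.

    frame: float image (H, W, 3), values 0-1. The high-frequency band is the
    difference between the original and a Gaussian-blurred copy of it.
    """
    base = gaussian_filter(frame, sigma=(sigma, sigma, 0))   # blur spatially, per channel
    detail = frame - base
    return np.clip(base + (1.0 + amount) * detail, 0.0, 1.0)

# detail_slider(frame,  0.5)  -> more high-frequency detail (sharper)
# detail_slider(frame, -0.5)  -> less high-frequency detail (softer)
```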

Since this is an overall effect, it can’t be masked within the layers panel. If you want to apply it just to a person’s face, like other “beauty” filters, you can do so with Final Cut Pro X’s built-in effects masks. This way a similar result can be reached while staying within the Color Finale workflow.

One last addition to version 2.1 is that Final Cut Pro X’s hotkeys now stay active while the Color Finale layers panel is open. Color Trix has stated that they plan more upgrades and options over the next nine months, so look for more ahead. Color Finale 2.1 is already a powerful grading tool for nearly any level of user. Nevertheless, more features will certainly be music to the ears of advanced users who prefer to stay within Final Cut Pro X to finish and deliver their projects. Stay tuned.

Originally written for FCP.co.

©2020 Oliver Peters

Chasing the Elusive Film Look

Ever since we started shooting dramatic content on video, directors have pushed to achieve the cinematic qualities of film. Sometimes that’s through lens selection, lighting, or frame rate, but more often it falls on the shoulders of the editor or colorist to make that video look like film. Yet, many things contribute to how we perceive the “look of film.” It’s not a single effect, but rather the combination of careful set design, costuming, lighting, lenses, camera color science, and color correction in post.

As editors, we have control over the last ingredient, which brings me to LUTs and plug-ins. A number of these claim to offer looks based on certain film emulsions. I’m not talking about stylized color presets, but the subtle characteristics of film’s color and texture. But what does that really mean? A projected theatrical film is the product of a chain of four different stocks – original camera negative, interpositive print, internegative, and the release print. Conversely, a digital project shot on film and then scanned to a file only involves one film stock. So it doesn’t mean much to say you are copying the look of a film emulsion without understanding the desired effect.

My favorite film plug-in is Koji Advance, which is distributed through the FxFactory platform. Koji was developed by Crumplepop together with noted film timer Dale Grahn. A film timer is the film lab’s equivalent to a digital colorist. Grahn selected several color and black-and-white film stocks as the basis for the Koji film looks and film grain emulation. Then Crumplepop’s developers expanded those options with neutral, saturated, and low contrast versions of each film stock and included camera-based conversions from log or Rec 709 color spaces. This is all wrapped into a versatile color correction plug-in with controls for temperature/tint, lift/gamma/gain/density (low, mid, high, master), saturation, and color correction sliders.

This post isn’t a review of the Koji Advance plug-in, but rather how to use such a filter effectively within an NLE like Final Cut Pro X (or Premiere Pro and After Effects, as well). In fact, these tips can also be used with other similar film look plug-ins. Koji can be used as your primary color correction tool, applying and adjusting it on each clip. But I really see it as icing on the cake and so will take a different approach.

1. Base grade/shot matching. The first thing you want to do in any color correction session is to match your shots within the sequence. It’s best to establish a base grade before you dive into certain stylized looks. Set the correct brightness and contrast and then adjust for proper balance and color tone. For these examples, I’ve edited a timeline consisting of a series of random FilmSupply stock footage clips. These clips cover a mix of cameras and color spaces. Before I do anything, I have to grade these to look consistent.

Since these are not all from the same set-up, there will naturally be some variances. A magic hour shot can never be corrected to be identical to a sunny exterior or an office shot. Variations are OK, as long as general levels are good and the tone feels right. Final Cut Pro X features a solid color correction tool set that is aided by the comparison view. That makes it easy to match a shot to the clip before and after it in the timeline.

2. Adding the film look. Once you have an evenly graded sequence of shots, add an adjustment layer. I will typically apply the Koji filter, an instance of Hue/Sat Curves, and a broadcast-safe limiter into that layer.

Within the Koji filter, select generic Rec 709 as the camera format and then the desired film stock. Each selection will have different effects on the color, brightness, and contrast of the clips. Pick the one closest to your intended effect. If you also want film grain, then select a stock choice for grain and adjust the saturation, contrast, and mix percentage for that grain. It’s best to view grain playing back at close to your target screen size with Final Cut set to Better Quality. Making grain judgements in a small viewer or in Better Performance mode can be deceiving. Grain should be subtle, unless you are going for a grunge look.

The addition of any of these film emulsion effects will impact the look of your base grade; therefore, you may need to tweak the color settings with the Koji controls. Remember, you are going for an overall look. In many cases, your primary grade might look nice and punchy – perfect for TV commercials. But that style may feel too saturated for a convincing film look in a drama. That’s where the Hue/Sat Curves tool comes in. Select LUMA vs SAT and bring down the low end to taste. You want to end up with pure blacks (at the darkest point) and a slight decrease in shadow-area saturation.
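
The LUMA vs SAT move, pulling saturation down as pixels get darker, is easy to picture as a luma-dependent saturation scale. Here is a simplified sketch of that idea, not the actual math used by Final Cut Pro X's curve tool:

```python
import numpy as np

def shadow_desaturate(frame, knee=0.25):
    """Reduce saturation in the shadows while leaving mids and highlights alone.

    frame: float RGB image (H, W, 3), values 0-1. Pixels darker than `knee`
    have their chroma scaled down toward zero, reaching pure black at 0.
    """
    luma = frame @ np.array([0.2126, 0.7152, 0.0722])
    sat_scale = np.clip(luma / knee, 0.0, 1.0)[..., None]   # the curve pulled down at the low end
    chroma = frame - luma[..., None]                         # color difference from the pixel's own luma
    return np.clip(luma[..., None] + chroma * sat_scale, 0.0, 1.0)
```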

3. Readjust shots for your final grade. The application of a film effect is not transparent and the Koji filter will tend to affect the look of some clips more than others. This means that you’ll need to go back and make slight adjustments to some of the clips in your sequence. Tweak the clip color correction settings applied in the first step so that you optimize each clip’s final appearance through the Koji plug-in.

4. Other options. Remember that Koji or similar plug-ins offer different options – so don’t be afraid to experiment. Want film noir? Try a black-and-white film stock, but remember to also turn down the grain saturation.

You aren’t going for a stylized color correction treatment with these tips. What you are trying to achieve is a look that is more akin to that of a film print. The point of adding a film filter on top is to create a blend across all of your clips – a type of visual “glue.” Since filters like this and the adjustment layer as a whole have opacity settings, it’s easy to go full bore with the look or simply add a hint to taste. Subtlety is the key.

Originally written for FCP.co.

©2020 Oliver Peters

Color Finale 2.0

HDR, camera raw, and log profiles are an ever-increasing part of video acquisition, so post-production color correction has become an essential part of every project. Final Cut Pro X initially offered only basic color correction tools, which were quickly augmented by third party developers. One of the earliest was Color Finale – the brainchild of colorist/trainer Denver Riddle and ex-DI supervisor and color correction software designer Dmitry Lavrov. In the last year Lavrov created both Cinema Grade, now owned and run by Riddle, and Color Finale 2.0, owned and run by Lavrov himself under his own company, Color Trix Ltd. By focusing exclusively on the development of Color Finale 2.0, Lavrov can bring to market more advanced feature ideas, upgrades, and options with the intent of making Final Cut a professional grading solution.

For many, Blackmagic Design’s DaVinci Resolve and Filmlight’s Baselight systems set the standard for color correction and grading. So you might ask, why bother? Because if you edit with Final Cut Pro X, grading in one of those tools requires a roundtrip between Final Cut and a dedicated grading suite or application. Roundtrips pose a few issues, including turnaround time, additional media rendering, and frequent translation errors with the edit and effects data between the edit and the grading application. The ideal situation is to never leave the editing application, but that requires more than just a few, simple color correction filters.

Over the course of eight years of Final Cut Pro X’s existence, the internal color tools have been improved and even more third-party color correction plug-ins have been developed. However, effective and fast color correction isn’t only about look presets, LUTs, and filters. It’s about having a tool that is properly designed for a grading workflow. If you want to do advanced correction in FCPX with the least amount of clicking back-and-forth, then there are really only two options: Coremelt’s Chromatic and Color Finale.

This brings us to the end of 2019 and the release of Color Finale 2.0, which has been redesigned from the ground up as a new and improved version of the original. The update has been optimized for Metal and the newest color management, such as ACES. It comes in two versions – standard and Pro. Color Finale 2 Pro supports more features, such as Tangent panel control, ACES color space, group grading, mask tracking, and film grain emulation. Color Finale has been designed from the beginning as only a Final Cut Pro X plug-in. This focus means better optimization and a better user experience.

Primary color correction

Color Finale 2 is intended to give Final Cut users similar grading control to that of Resolve, Avid Symphony, or Adobe Premiere Pro’s Lumetri panel. It packs a lot of punch and honestly, there’s a lot more than I can easily cover with any depth here. The user interface is designed around two components: the FCPX Inspector controls and the floating Layers panel. The Inspector pane is a lot more than simply the place from which to launch the Layers panel. In fact, it’s a separate primary grading panel, not unlike the functions of the Basic tab within Adobe’s Lumetri panel.

The Inspector pane is where you control color management, along with exposure, contrast, pivot, temperature, tint, saturation, and sharpness. According to Lavrov, “Our Exposure tool is calibrated to real camera F-stop numbers. We’ve actually taken numerous images with the cameras and test charts shot at the different exposure settings and matched those to our slider control. Basically setting the Exposure slider to 1 means you’ve increased it by one stop up.”
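
That stop-based calibration maps directly onto how exposure behaves in linear light: each stop doubles or halves the amount of light. The tiny sketch below illustrates the relationship only; it is not Color Finale's internal code:

```python
def apply_exposure_stops(linear_value, stops):
    """One stop up doubles scene-linear light; one stop down halves it."""
    return linear_value * (2.0 ** stops)

# apply_exposure_stops(0.18, 1)   -> 0.36  (middle gray pushed one stop up)
# apply_exposure_stops(0.18, -2)  -> 0.045 (two stops down)
```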

There are also copy and paste buttons to transfer Color Finale settings between clips, false color indicators, and shot-matching based on standard color charts. Finally, there’s a Film Emulation tab, which is really a set of film grain controls. At the bottom is a mix slider to control the opacity value of the applied correction.

Layers

The real power of Color Finale 2 happens when you launch the Layers panel. This panel can be resized and positioned anywhere over the FCPX interface. It includes four tools: lift/gamma/gain color wheels/sliders (aka “telecine” controls), luma+RGB curves, six-vector secondary color, and hue/sat curves. This is rounded out by a looks preset browser. Each of these tools can be masked and the masks can be tracked within the image. Mask tracking is good, though not quite as fast as Resolve’s tracker (almost nothing is).

I suspect most users will spend the bulk of their time with color wheels, which can be toggled from wheels to sliders, depending on your preference. Of course, if you invested in a Tangent panel, then the physical trackballs control the color wheels. Another nice aspect of the lift/gamma/gain color tool is saturation management. You can adjust saturation for each of the three ranges. There is also a master saturation control with separate controls for shadow and highlight range restrictions. This means that you can increase overall saturation, but adjust the shadow or highlights range value so that more or less of the dark or light areas of the image are affected.

As you add tools, each stacks as a new layer within the panel. The resulting color correction is the sum of all of the layers. You can stack as many layers as you like and the same tool can be used more than once. Layers can be turned on and off to see how that correction affects the image. They can also be reordered and grouped into a folder. In fact, when you load a preset look, this is actually a group of tools set to generate that look. Finally, each layer has a blend control to set the opacity percentage and alter the blend mode – normal, add, multiply, etc – for different results.

Advanced features

Let me expand on a few of the advanced grading features, such as color management. You have control over four methods: 1) assume video (the default) – intended for regular Rec 709 video or log footage where FCPX has already applied a LUT (ARRI Alexa, for example); 2) assume log – pick this if you don’t know the camera type and Color Finale will apply a generic Rec 709 LUT correction; 3) use ACES; and 4) use input LUT – import a technical or custom LUT file that you wish to apply to a clip.

ACES is an advanced color management workflow designed for certain delivery specs, such as for Netflix originals. The intent of the ACES color space is to be an intermediate color space that can be compatible with different display systems, so that your grade will look the same on any of these displays. Ideally you want to select ACES if you are working within a complete ACES color pipeline; however, you can still apply it to shots for general grading even if you don’t have to provide an ACES-compliant master. To use it, you must select both the input LUT (typically a camera-specific technical LUT) and the target display color space, such as Rec 709 100 nits (for non-HDR TVs and monitors).
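
In practice, that setup chains two transforms: the camera-specific input LUT into the common working space, and an output transform for the target display. The pseudo-pipeline below is purely illustrative and uses simple 1D curves as stand-ins; a real ACES pipeline relies on published IDTs and an RRT/ODT, not hand-rolled LUTs:

```python
import numpy as np

def apply_1d_lut(image, lut):
    """Apply a 1D LUT (a shaper curve) to a 0-1 image via linear interpolation."""
    xs = np.linspace(0.0, 1.0, len(lut))
    return np.interp(image, xs, lut)

def aces_style_pipeline(camera_image, input_lut, output_lut):
    """Camera log -> common working space -> display-referred output."""
    working = apply_1d_lut(camera_image, input_lut)   # normalize the camera into the working space
    # ...grading happens here, in the shared working space...
    return apply_1d_lut(working, output_lut)          # map to the target display (e.g. Rec 709, 100 nits)
```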

In order to facilitate a proper ACES workflow, Color Trix added the ability to import and export CDLs (color decision lists). Currently this is more for testing purposes and is designed for compatibility between Final Cut and ACES-compliant grading systems, like Baselight. A CDL is essentially like an EDL (edit decision list), but with basic color correction information. This will translate to the lift/gamma/gain/saturation settings in Color Finale 2 Pro, but nothing more complex, such as curves, selective color, or masks.
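
The reason a CDL can only carry that much is that the ASC CDL format itself is just ten numbers per shot: slope, offset, and power for each RGB channel plus a single saturation value. The published transfer function is simple enough to sketch directly:

```python
import numpy as np

def apply_cdl(rgb, slope, offset, power, saturation):
    """ASC CDL: per-channel slope/offset/power, followed by one saturation value.

    rgb: float image (..., 3), values 0-1. slope/offset/power: 3-element sequences.
    """
    rgb = np.clip(np.asarray(rgb) * slope + offset, 0.0, None) ** power
    luma = (rgb @ np.array([0.2126, 0.7152, 0.0722]))[..., None]   # Rec 709 weights
    return np.clip(luma + saturation * (rgb - luma), 0.0, 1.0)

# Values as they might appear in a .cdl/.ccc file (hypothetical example):
# apply_cdl(frame, slope=(1.1, 1.0, 0.95), offset=(0.01, 0.0, -0.01),
#           power=(1.0, 1.0, 1.05), saturation=0.9)
```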

Performance and workflow

Overall, I really liked how the various tools worked. Response was fast and I was able to get good grading results with a build-up of several layers. In addition, I prefer the ergonomics of a horizontal layout for color wheels versus the cluster of controls used by Apple’s built-in tool. I had tested the betas of both Color Finale 1.0 and now 2.0 and I remember that it originally took a while to dial in the RGB curves for the 1.0 release. In general, curves can be quite destructive, so if you don’t get the math right, you’ll see banding from even a small change to a curve. That was fixed before 1.0 was ever released and the quality in 2.0 looks very good.

Color Finale 2.0 beta had an issue with color wheels. For some users (myself included) the image didn’t update in real-time as you moved the color wheel pucks with a mouse. This was fixed right after release with an update. So if you are experiencing that issue, make sure you have re-installed the update.

The difference between grading and simple clip-based color correction is workflow. That’s where a good colorist using a dedicated grading application will shine. Unfortunately the “apply color correction from one (two, three) clip(s) back” command in Final Cut Pro X can only be used with its own built-in correction. So if you intend to use Color Finale 2 for a full timeline of clips, then you have to develop a workflow to quickly apply the Color Finale or Color Finale Pro effect, without constantly dragging it from the effects browser to each individual clip.

One solution is to apply the effect to the first clip, copy that clip, select all the rest, and then apply “paste effects” or “paste attributes” to the rest of the clips in the timeline. As you move from clip to clip, the Color Finale effect is open in the Inspector so you can tweak settings and edit layers as needed. I have found that by using this method the layers panel often doesn’t stay open persistently. The second method is to designate the Color Finale or Pro effect as the default video effect and map “apply default effect” to a key. Using this second method, the panel stayed open in my testing when moving through successive clips on the timeline. Documentation and tutorials are a bit light at the moment, so hopefully Color Trix will begin posting more tips-and-tricks information to their support page or YouTube channel.

One can only run a valid test of any plug-in by using it on a real project. As an example of what you can do with Color Finale 2, I’ve graded Philip Bloom’s 2013 “Hiding Place” short featuring actress Kate Loustau. This was shot on the London Eye in “stealth” mode using the Blackmagic Pocket Cinema Camera. Bloom made the ungraded cut available for non-commercial use. I’ve used it a number of times to test color correction applications. Click the link to see the video, which includes two different grading looks, achieved through Color Finale 2 Pro.

Color Finale 2.0 is a huge improvement over the original, but it’s not a one-click solution. It’s designed as an advanced, yet easy to use color correction tool. I find the toolset and visual results similar to the old Apple Color. The graded images appear very natural, which is a good fit for my aesthetic. DaVinci Resolve is better for extreme “surgical” grading, but Color Finale 2.0 certainly covers at least 90% of most color correction needs and styles. If you want to stay entirely within the Final Cut Pro X environment and skip the roundtrips, then Color Finale 2 Pro should be part of your arsenal. It’s this sort of extensibility that FCPX users like about the approach Apple has taken. Having powerful tools, like Color Finale 2.0, from independent developers, like Color Trix, definitely validates the concept.

Check out the Color Finale website for the various purchase and upgrade plans, including add-ons, like the Ascend presets packages.

This article was originally written for FCP.co.

©2020 Oliver Peters

A Conversation with Steve Bayes

As an early adopter of Avid systems at a highly visible facility, I first got to know Steve Bayes through his on-site visits. He was the one taking notes about how a customer used the product and what workflow improvements they’d like to see. Over the years, as I worked as an editor and tech writer, we’ve kept in touch through his travels from Avid to Media 100 and on to Apple. It was always good to get together and decompress at the end of a long NAB week.

With a career of using as well as helping to design and shepherd a wide range of post-production products, Steve probably knows more about a diverse field of editing systems than most other company managers at editing systems manufacturers. Naturally many readers will know him as Apple’s Senior Product Manager for Final Cut Pro X, a position he held until last year. But most users have little understanding of what a product manager actually does or how the products they love and use every day get from the drawing board into their hands. So I decided to sit down with Steve over Skype and pull back the curtain just a little on this very complex process.

______________________________________________________

[OP]  Let’s start this off with a deep dive into how a software product gets to the user. What part does a product manager play in developing new features and where does engineering fit into that process?

[SB]  I’m a little unconventional. I like to work closely with the engineers during their design and development, because I have a strong technical and industry background. More traditional product managers are product marketing managers who take a more hands-off, marketing-oriented approach. That’s important, but I never worked like that.

My rule of thumb is that I will tell the engineers what the problem is, but I won’t tell them how to solve it. In many cases the engineers will come back and say, “You’ve told us that customers need to do this ‘thing.’ What do they really want to achieve? Are you telling us that they need to achieve it exactly like this?” And so you talk that out a bit. Maybe this is exactly what the customers really want to do, because that’s what they’ve always done or the way everyone else does it. Maybe the best way to do it is based on three other things in emerging technology that I don’t know about.

In some cases the engineers come back and say, “Because of these other three things you don’t know about, we have some new ideas about how to do that. What do you think?” If their solution doesn’t work, then you have to be very clear about why and be consistent throughout the discussion, while still staying open to new ways of doing things. If there is a legitimate opportunity to innovate, then that is always worth exploring.

Traveling around the world talking to post-production people for almost 30 years allowed me to act as the central hub for that information and an advocate for the user. I look at it as working closely in partnership with engineering to represent the customer and to represent the company in the bigger picture. For instance, what is interesting for Apple? Maybe those awesome cameras that happen to be attached to a phone. Apple has this great hardware and wonderful tactile devices. How would you solve these issues and incorporate all that? Apple has an advantage with all these products that are already out in the world and they can think about cool ways to combine those with professional editing.

In all the companies I’ve worked for, we work through a list of prioritized customer requests, bug fixes, and things that we saw on the horizon within the timeframe of the release date or shortly thereafter. You never want to be surprised by something coming down the road, so we were always looking farther out than most people. All of this is put together in a product requirements document (PRD), which lays out everything you’d like to achieve for the next release. It lists features and how they all fit together well, plus a little bit about how you would market that. The PRD creates the starting point for development and will be updated based on engineering feedback.

You can’t do anything without getting sign-off by quality assurance (QA). For example, you might want to support all 10,000 of the formats coming out, but QA says, “Excuse me? I don’t think so!” [laughs] So it has to be achievable in that sense – the art of the possible. Some of that has to do with their resources and schedule. Once the engineers “put their pencils down,” then QA starts seriously. Can you hit your dates? You also have to think about the QA of third parties, Apple hardware, or potentially a new operating system (OS). You never, ever want to release a new version of Final Cut and two weeks later a new OS comes out and breaks everything. I find it useful to think about the three points of the development triangle as: the number of features, the time that you have, and the level of stability. You can’t say, “I’m going to make a really unstable release, but it’s going to have more features than you’ve ever seen!” [laughs] That’s probably a bad decision.

Then I start working with the software in alpha. How does it really work? Are there any required changes? For the demo, I go off and shoot something cool that is designed specifically to show the features. In many ways you are shooting things with typical problems that are then solved by whatever is in the new software. And there’s got to be a little something in there for the power users, as well as the new users.

As you get closer to the release, you have to make decisions about whether things are stable enough. If some feature is not going to be ready, then you could delay it to a future release — never ideal, but better than a terrible user experience. Then you have to re-evaluate the messaging. I think FCP X has been remarkably stable for all the releases of the last eight years.

You also have to bring in the third parties, like developers, trainers, or authors, who provide feedback so we can make sure we haven’t broken anything for them. If there was a particularly important feature that required third parties to help out, I would reach out to them individually and give them a little more attention, making sure that their product worked as it should. Then I would potentially use it in my own presentation. I worked closely with SpeedScriber transcription software when Apple introduced subtitling and I talked every day with Atomos while they were shooting the demo in Australia on ProRes RAW. 

[OP]  What’s the typical time frame for a new feature or release – from the germ of an idea until it gets to the user?

[SB]  Industry-wide, companies tend to have a big release and then a series of smaller releases afterwards that come relatively quickly. Smaller releases might be to fix minor, but annoying bugs that weren’t bad enough to stop the larger release. You never ship with “priority one” (P1) bugs, so if there are some P2s or P3s, then you want to get to them in a follow-up. Or maybe there was a new device, codec, camera, or piece of hardware that you couldn’t test in time, because it wasn’t ready. Of course, the OS is changing while you are developing your application, as well. One of my metaphors is that “you are building the plane while you are flying it.” [laughs]

I can’t talk about the future or Apple specifically, but historically, you can see a big release might take most of a year. By the time it’s agreed upon, designed, developed, “pencils down – let’s test it” – the actual development time is not as long as you might think. Remember, you have to back-time for quality assurance. But, there are deeper functions that you can’t develop in that relatively short period of time. Features that go beyond a single release are being worked on in the background and might be out in two or three releases. You don’t want to restrict very important features just to hit a release date, but instead, work on them a bit longer.

Final Cut is an excellent application to demonstrate the capabilities of Apple hardware, ease of use, and third party ecosystem. So you want to tie all these things together as much as you can. And every now and then you get to time things so they hit a big trade show! [laughs]

[OP]  Obviously this is the work of a larger team. Are the romanticized tales of a couple of engineers coming out of the back room with a fully-cooked product more myth than reality?

[SB]  Software development is definitely a team effort. There are certain individuals that stand out, because they are good at what they do and have areas of specialty. They’ll come back and always give you more than you asked for and surprise you with amazing results. But, it’s much more of a coordinated effort – the customer feedback, the design, a team of managers who sign off on all that, and then initial development.

If it doesn’t work the way it’s supposed to, you may call in extra engineers to deal with the issues or to help solve those problems. Maybe you had a feature that turned out more complicated than first thought. It’s load balancing – taking your resources and moving them to where they do the most good for the product. Plus, you are still getting excellent feedback from the QA team. “Hey, this didn’t work the way we expected it to work. Why does it work like that?” It’s very much an effort with those three parts: design, engineering, and QA. There are project managers, as well, who coordinate those teams and manage the physical release of the software. Are people hitting their dates for turning things in? They are the people banging on your door saying, “Where’s the ‘thing with the stuff?'” [laughs]

There are shining stars in each of these areas or groups. They have a world of experience, but can also channel the customer – especially during the testing phase. And once you go to beta, you get feedback from customers. At that point, though, you are late in the process, so it’s meant to fix bugs, not add features. It’s good to get that feature feedback, but it won’t be in the release at that point.

[OP]  Throughout your time at various companies, color correction seems to be dear to you. Avid Symphony, Apple Color when it was in the package, not to mention the color tools in Final Cut Pro X. Now nearly every NLE can do color grading and the advanced tools like DaVinci Resolve are affordable to any user. Yet, there’s still that very high-end market for systems like Filmlight’s Baselight. Where do you see the process of color correction and grading headed?

[SB]  Color has always meant the difference for me between an OK project and a stellar project. Good color grading can turn your straw into gold. I think it’s an incredibly valuable talent to have. It’s an aesthetic sense first, but it’s also the ability to look at an image and say, “I know what will fix that image and it will look great.” It’s a specialized skill that shouldn’t be underrated. But, you just don’t need complex gear anymore to make your project better through color grading.

Will you make it look as good as a feature film or a high-end Netflix series? Now you’re talking about personnel decisions as much as technology. Colorists have the aesthetic and the ability to problem-solve, but are also very fast and consistent. They work well with customers in that realm. There’s always going to be a need for people like that, but the question is what chunk of the market requires that level of skill once the tools get easier to use?

I just think there’s a part of the market that’s growing quickly – potentially much more quickly – that could use the skills of a colorist, but won’t go through a separate grading step. Now you have look-up tables, presets, and plug-ins. And the color grading tools in Final Cut Pro X are pretty powerful for getting awesome results even if you’re not a colorist. The business model is that the more you can do in the app, the easier it is to “sell the cut.” The client has to see it in as close to the finished form as possible. Sometimes a bad color mismatch can make a cut feel rough and color correction can help smooth that out and get the cut signed off. As you get better using the color grading tools in FCP X, you can improve your aesthetic and learn how to be consistent across hundreds of shots. You can even add a Tangent Wave controller if you want to go faster. We find ourselves doing more in less time and the full range of color grading tools in FCP X and the FxPlug plug-ins can play a very strong role in improving any production.

[OP]  During your time at Apple, the ProRes codec was also developed. Since Apple was supplying post-production hardware and software and no professional production cameras, what was the point in developing your own codec?

[SB]  At the time there were all of these camera codecs coming out, which were going to be a very bad user experience for editing – even on the fastest Mac Pros at the time. The camera manufacturers were using compression algorithms that were high quality, but highly compressed, because camera cards weren’t that fast or that big. That compression was difficult to decode and play back. It took more processing power than you could get from any PC at that time to get the same number of video streams compared with digitizing from tape. In some cases you couldn’t even play the camera original video files at all, so you needed to transcode before you could start editing. All of the available transcoding codecs weren’t that high in quality or they had similar playback problems.

Apple wanted to make a better user experience, so ProRes was originally designed as an intermediate codec. It worked so well that the camera manufacturers wanted to put it into their cameras, which was fine with Apple, as long as you met the quality standards. Everyone has to submit samples and work with the Apple engineers to get it to the standard that Apple expects. ProRes doesn’t encode into as small file sizes as some of the other camera codecs; but given the choice between file size, quality, and performance, then quality and performance were more important. As camera cards and hard drives get bigger, faster, and cheaper, it’s less of an issue and so it was the right decision.

[OP]  The launch of Final Cut Pro X turned out to be controversial. Was the ProApps team prepared for the industry backlash that happened?

[SB] We knew that it would be disruptive, of course. It was a whole new interface and approach. It integrated a bunch of cutting edge technology that people weren’t familiar with. A complete rewrite of  the codebase was a huge step forward as you can see in the speed and fluidity that is so crucial during the creative process. Metadata driven workflows, background processing, magnetic timeline — in many ways people are still trying to catch up eight years later. And now FCP X is the best selling version of Final Cut Pro ever.

[OP]  When Walter Murch used Final Cut Pro to edit the film, Cold Mountain, it gained a lot of attention. Is there going to be another “Cold Mountain moment” for anyone or is that even important anymore?

[SB]  Post Cold Mountain? [chuckle] You have to be careful — the production you are trying to emulate might have nothing to do with your needs on an everyday basis. It may be aspirational, but by adopting Hollywood techniques, you aren’t doing yourself any favors. Those are designed with budgets, timeframes, and a huge crew that you don’t have. Adopt a workflow that is designed for the kind of work you actually do.

When we came up in the industry, you couldn’t make a good-looking video without going to a post house. Then NLEs came along and you could do a bunch of work in your attic, or on a boat, or in a hotel room. That creative, rough-cut market fractured, but you still had to go to an online edit house. That was a limited world that took capital to build and it was an expense by the hour. Imagine how many videos didn’t get made, because a good post house cost hundreds of dollars an hour.

Now the video market has fractured into all these different outlets – streaming platforms, social media, corporate messaging, fast-turnaround events, and mobile apps. And these guys have a ton of powerful equipment, like drones, gimbals, and Atomos ProRes RAW recorders – and it looks great! But, they’re not going to a post house. They’re going to pick up whatever works for them and at the end of the day impress their clients or customers. Each one is figuring out new ways to take advantage of this new technology.

One of the things Sam Mestman teaches in his mobile filmmaking class is that you can make really high-quality stuff for a fraction of the cost and time, as long as you are going to be flexible enough to work in a non-traditional way. That is the driving force that’s going to create more videos for all of these different outlets. When I started out, the only way you could distribute directly to the consumer was by mailing someone a VHS tape. That’s just long gone, so why are we using the same editing techniques and workflows?

I can’t remember the last time I watched something on broadcast TV. The traditional ways of doing things are a sort of assembly line — every step is very compartmentalized. This doesn’t stand to benefit from new efficiencies and technological advances, because it requires merging traditional roles, eliminating steps, and challenging the way things are charged for. The rules are a little less strict when you are working for these new distribution platforms. You still have to meet the deliverable requirements, of course. But if you do it the way you’ve always done it, then you won’t be able to bring it in on time or on budget in this emerging world. If you want to stay competitive, then you are forced to make these changes — your competition maybe already has. How can you tell when your phone doesn’t ring? And that’s why I would say there are Cold Mountain moments all the time when something gets made in a way that didn’t exist a few years ago. But, it happens across this new, much wider range of markets and doesn’t get so much attention.

[OP]  Final Cut Pro X seems to have gained more professional users internationally than in the US. In your writings, you’ve mentioned that efficiency is the way local producers can compete for viewers and maintain quality within budget. Would you expand upon that?

[SB]  There are a range of reasons why FCP X and new metadata-driven workflows are expanding in Europe faster than the US. One reason is that European crews tend to be smaller and there are fewer steps between the creatives and decision-making execs. The editor has more say in picking their editing system. I see over and over that editors are forced to use systems they don’t like in larger projects and they love to use FCP X on their own projects. When the facilities listen to and trust the editors, then they see the benefits pretty quickly. If you have government funded TV (like in many countries in Europe), then they are always under public pressure to justify the costs. Although they are inherently conservative, they are incentivized to always be looking for new ways to improve and that involves risks. With smaller crews, Europeans can be more flexible as to what being “an editor” really means and don’t have such strict rules that keep them from creating motion graphics – or the photographer from doing the rough cut. This means there is less pressure to operate like an assembly line and the entire production can benefit from efficiencies.

I think there’s a huge amount of money sloshing around in Europe and they have to figure out how to do these local-language productions for the high quality that will compete with the existing broadcasters, major features, and the American and British big-budget shows. So how are you going to do that? If you follow the rules, you lose. You have to look at different methods of production. 

Subscription is a different business model of continuing revenue. How many productions will the subscription model pay for? Netflix is taking out $2 billion in bonds on top of the $1 billion they already raised to fund production and develop for the local languages. I’ve been watching the series Criminal on Netflix. It’s a crime drama based on police interrogations, with separate versions done in four different countries, in English, French, German, and Spanish. Each one has its own cultural biases in getting to a confession (and that’s why I watched them all!). I’ve never seen anything like it before.

The guys at Metronome in Denmark used this moment as an opportunity to take some big chances with creating new workflows with FCP X and shared storage. They are using 1.5 petabytes of storage, six Synology servers, and 30 shows being edited right now in FCP X. They use the LumaForge Jellyfish for on-location post-production. If someone says it can’t be done, you need to talk to these guys and I’m happy to make the introduction.

I’m working with another company in France that shot a series on the firefighters of Marseilles. They shot most of it with iPhones, but they also used other cameras with longer lenses to get farther away from the fires. They’re looking at a series of these types of productions with a unique mobile look. If you put a bunch of iPhones on gimbals, you’ve got a high-quality, multi-cam shoot, with angles and performances that you could never get any other way. Or a bunch of DSLRs with Atomos devices and the Atomos sync modules for perfect timecode sync. And then how quickly can you turn out a full series? Producers need to generate a huge amount of material in a wide range of languages for a wide range of markets and they need to keep the quality up. They have to use new post-production talent and methods and, to me, that’s exciting.

[OP]  Looking forward, where do you see production and post technology headed?

[SB]  The tools that we’ve developed over the last 30 years have made such a huge difference in our industry that there’s a part of me that wants to go back and be a film student again. [laughs] The ability for people to turn out compelling material that expresses a point of view, that helps raise money for a worthy cause, that helps to explain a difficult subject, that raises consciousness, that creates an emotional engagement – those things are so much easier these days. It’s encouraging to me to see it being used like this.

The quality of the iPhone 11 is stunning. With awesome applications, like Mavis and FiLMiC Pro, these are great filmmaking tools. I’ve been playing around with the DJI Osmo Pocket, too, which I like a lot, because it’s a 4K sensor on a gimbal. So it’s not like putting an iPhone on a gimbal – it’s all-in-one. Although you can connect an iPhone to it for the bigger screen. 

Camera technology is going in the direction of more pixels and bigger sensors, more RAW and HDR, but I’d really like to see the next big change come in audio. It’s the one place where small productions still have problems. They don’t hire the full-time sound guy or they think they can shoot just with the mic attached to the hot shoe of the camera. That may be OK when using only a DSLR, but the minute you want to take that into a higher-end production, you’re going to need to think about it more.

Again, it’s a personnel issue. I can point a camera at a subject and get a pretty good recording, but to get a good sound recording – that’s much harder for me at this point. In that area, Apogee has done a great job with MetaRecorder for iOS. It’s not just generating iXML to automatically name the audio channels when you import into FCP X — you can actually label the FCP X roles in the app. It uses Timecode Systems (now Atomos) for multiple iOS recording devices to sync with rock-solid timecode and you can control those multiple recorders from a single iOS device. I would like to see more people adopt multiple microphones synced together wirelessly and controlled by an iPad.

One of the things I love about being “semi-retired” is if something’s interesting to me, I just dig into it. It’s exciting that you can edit from an iPad Pro, you can back up to a Gnarbox, you can shoot high-quality video with your iPhone or a DJI Osmo Pocket, and that opens the world up to new voices. If you were to graph it – the cost of videos is going down and to the right, the number of videos being created is going up and to the right, and at some point they cross over. That promises a huge increase in the potential work for those who can benefit from these new tools. We are close to that point.

It used to be that if your client went to another post house, you lost that client. It was a zero sum game — I win — you lose. Now there are so many potential needs for video we would never have imagined. Those clients are coming out of the woodwork and saying, “Now I can do a video. I’ll do some of it myself, but at some point I’ll hand it off to you, because you are the expert.” Or they feel they can afford your talent, because the rest of the production is so much more efficient. That’s a growing demand that you might not see until your market hits that crossover point.

This article also appears at FCP.co.

©2019 Oliver Peters

Did you pick the right camera? Part 2

HDR (high dynamic range) imagery and higher display resolutions start with the camera. Unfortunately that’s also where the misinformation starts. That’s because the terminology is based on displays and not on camera sensors and lenses.

Resolution

4K is pretty common, 8K products are here, and 16K may be around the corner. Resolution is commonly expressed as the horizontal dimension, but in fact, actual visual resolution is intended to be measured vertically. A resolution chart uses converging lines. The point at which you can no longer discern between the lines is the limit of the measurable resolution. That isn’t necessarily a pixel count.

The second point to mention is that camera sensors are built with photosites that only loosely equate to pixels. The hitch is that there is no 1:1 correlation between a sensor’s photosites and display pixels on a screen. This is made even more complicated by the design of a Bayer-pattern sensor that is used in most professional video cameras. In addition, not all 4K cameras look good when you analyze the image at 100%. For example, nearly all early and/or cheap drone and ‘action’ cameras appear substandard when you actually look at the image closely. The reasons include cheap plastic lenses and high compression levels.
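
The photosite/pixel distinction is easiest to see with a toy Bayer layout. In a typical RGGB pattern, each 2x2 block of photosites records one red, two green, and one blue sample, so a photosite count overstates the amount of independent color detail; full RGB pixels only exist after demosaicing (interpolation). A small illustration:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB Bayer sensor: keep a single color sample per photosite.

    rgb: float image (H, W, 3) with even H and W. Returns (H, W) raw samples.
    """
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red photosites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green photosites
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green photosites
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue photosites
    return raw

# A "4K" RGGB sensor (3840 x 2160 photosites) captures only 3840 * 2160 / 4
# red samples (and the same number of blue); the rest is interpolated.
```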

The bottom line is that when a company like Netflix won’t accept an ARRI Alexa as a valid 4K camera for its original content guidelines – in spite of the number of blockbuster feature films captured using Alexas – you have to take it with a grain of salt. Ironically, if you shoot with an Alexa in its 4:3 mode (2880 x 2160) using anamorphic lenses (2:1 aspect squeeze), the expanded image results in a 5760 x 2160 (6K) frame. Trust me, this image looks great on a 4K display with plenty of room to crop left and right. Or, a great ‘scope image. Yes, there are anamorphic lens artifacts, but that’s part of the charm as to why creatives love to shoot that way in the first place.
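
The anamorphic math quoted above is easy to verify: a 2:1 lens squeeze is undone in post by doubling the horizontal dimension. A quick check of the numbers:

```python
def desqueeze(width, height, squeeze=2.0):
    """Expand an anamorphic capture horizontally by the lens squeeze factor."""
    return int(width * squeeze), height

w, h = desqueeze(2880, 2160, squeeze=2.0)
print(w, h, round(w / h, 2))   # 5760 2160 2.67 -> wider than a UHD (3840 x 2160) frame
```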

Resolution is largely a non-issue for most camera owners these days. There are tons of 4K options and the only decision you need to make when shooting and editing is whether to record at 3840 or 4096 wide when working in a 4K mode.

Log, raw, and color correction

HDR is the ‘next big thing’ after resolution. Nearly every modern professional camera can shoot footage that can easily be graded into HDR imagery. That’s by recording the image as either camera raw or with a log color profile. This lets a colorist stretch the highlight information up to the peak luminance levels that HDR displays are capable of. Remember that HDR video is completely different from HDR photography, which can often be translated into very hyper-real photos. Of course, HDR will continue to be a moving target until one of the various competing standards gains sufficient traction in the consumer market.

It’s important to keep in mind that neither raw nor log is a panacea for all image issues. Both are ways to record the linear dynamic range that the camera ‘sees’ into a video colorspace. Log does this by applying a logarithmic curve to the video, which can then be selectively expanded again in post. Raw preserves the sensor data in the recording and pushes the transformation of that data to RGB video outside of the camera. Using either method, it is still possible to capture unrecoverable highlights in your recorded image. Or in some cases the highlights aren’t digitally clipped, but rather there’s just no information in them other than bright whiteness. There is no substitute for proper lighting, exposure control, and shaping the image aesthetically through creative lighting design. In fact, if you carefully control the image, such as in a studio interview or a dramatic studio production, there’s no real reason to shoot log instead of Rec 709. Both are valid options.
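
The logarithmic curve idea is straightforward to sketch: compress a wide scene-linear range into a 0-1 signal so highlight information survives the recording, then expand it again in post. The curve below uses made-up constants purely for illustration; real log curves (Log C, S-Log3, V-Log and so on) each follow their own published formulas:

```python
import numpy as np

def generic_log_encode(linear, k=64.0, white=8.0):
    """Map scene-linear values in [0, white] to a 0-1 code value with a log curve."""
    return np.log2(linear * k + 1.0) / np.log2(white * k + 1.0)

def generic_log_decode(code, k=64.0, white=8.0):
    """Invert the curve above to recover scene-linear values for grading."""
    return (2.0 ** (code * np.log2(white * k + 1.0)) - 1.0) / k

print(generic_log_encode(0.18))                        # middle gray lands around code value 0.4
print(generic_log_decode(generic_log_encode(0.18)))    # ~0.18, the round trip recovers the linear value
```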

I’ve graded camera raw (RED, Phantom, DJI) and log footage (Alexa, Canon, Panasonic, Sony) and it is my opinion that there isn’t that much magic to camera raw. Yes, you can have good ISO/temp/tint latitude, but really not a lot more than with a log profile. In one, the sensor de-Bayering is done in post and in the other, it’s done in-camera. But if a shot was recorded underexposed, the raw image is still going to get noisy as you lift the ISO and/or exposure settings. There’s no free lunch and I still stick to the mantra that you should ‘expose to the right’ during production. It’s easier to make a shot darker and get a nice image than going in the other direction.

Since NAB 2018, more camera raw options have hit the market with Apple’s ProRes RAW and Blackmagic RAW. While camera raw may not provide any new, magic capabilities, it does allow the camera manufacturer to record a less-compressed file at a lower data rate.  However, neither of these new codecs will have much impact on post workflows until there’s a critical mass of production users, since these are camera recording codecs and not mezzanine or mastering codecs. At the moment, only Final Cut Pro X properly handles ProRes RAW, yet there are no actual camera raw controls for it as you would find with RED camera raw settings. So in that case, there’s actually little benefit to raw over log, except for file size.

One popular raw codec has been Cinema DNG, which is recorded as an image sequence rather than a single movie file. Blackmagic Design cameras had used that until replaced by Blackmagic RAW.  Some drone cameras also use it. While I personally hate the workflow of dealing with image sequence files, there is one interesting aspect of cDNG. Because the format was originally developed by Adobe, processing is handled nicely by the Adobe Camera Raw module, which is designed for camera raw photographs. I’ve found that if you bring a cDNG sequence into After Effects (which uses the ACR module) as opposed to Resolve, you can actually dig more highlight detail out of the images in After Effects than in Resolve. Or at least with far less effort. Unfortunately, you are stuck making that setting decision on the first frame, as you import the sequence into After Effects.

The bottom line is that there is no way to make an educated decision about cameras without actually testing the images, the profile options, and the codecs with real-world footage. These have to be viewed on high quality displays at their native resolutions. Only then will you get an accurate reading of what that camera is capable of. The good news is that there are many excellent options on the market at various price points, so it’s hard to go wrong with any of the major brand name cameras.

Click here for Part 1.

Click here for Part 3.

©2019 Oliver Peters