What is a Finishing Editor?

To answer that, let’s step back to film. Up until the 1970s dramatic television shows, feature films, and documentaries were shot and post-produced on film. The film lab would print positive copies (work print) of the raw negative footage. Then a team of film editors and assistants would handle the creative edit of the story by physically cutting and recutting this work print until the edit was approved. This process was often messy with many film splices, grease pencil marks on the work print to indicate dissolves, and so on.

Once a cut was “locked” (approved by the director and the execs) the edited work print and accompanying notes and logs were turned over to the negative cutter. It was this person’s job to match the edits on the work print by physically cutting and splicing the original camera negative, which up until then was intact. The negative cutter would also insert any optical effects created by an optical house, including titles, transitions, and visual effects.

Measure twice, cut once

Any mistakes made during negative cutting were and are irreparable, so it is important that a negative cutter be detail-oriented, precise, and work cleanly. You don’t want excess glue at the splices and you don’t want to pick up any extra dirt and dust on the negative if it can be avoided. If a mistaken cut is made and the splice has to be repaired, then at least one frame is lost at that splice.

A single frame – 1/24th of a second – is the difference in a fight scene between a punch just about to enter the frame and the arm passing all the way through the frame. So you don’t want a negative cutter who is prone to making mistakes. Paul Hirsch, ACE, points out in his book A Long Time Ago in a Cutting Room Far, Far Away… that there’s an unintentional jump cut in the Death Star explosion scene in the first Star Wars film, thanks to a negative cutting error.
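That one-frame margin is easy to quantify. As a quick illustration (not a production timecode library – no drop-frame handling, film’s 24 fps assumed throughout), here is the frame-to-timecode arithmetic:

```python
# Convert a frame count to HH:MM:SS:FF timecode at film's 24 fps.
FPS = 24

def frames_to_timecode(frame_count: int, fps: int = FPS) -> str:
    frames = frame_count % fps
    total_seconds = frame_count // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# One lost frame at a splice is a single timecode frame:
print(frames_to_timecode(1))        # 00:00:00:01
print(frames_to_timecode(24 * 60))  # 00:01:00:00 - one minute of film
```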

In the last phase of the film post workflow, the cut negative goes to the lab’s color timer (the precursor to today’s colorist), who sets the “timing” information (color, brightness, and densities) used by the film printer. The printer generates an interpositive version of the complete film from the assembled negative. From this interpositive, the lab will generally create an internegative from which release prints are created.

From the lab to the linear edit bay

This short synopsis of the film post-production process points to where we started. By the mid-1970s, video post-production technology came onto the scene for anything destined for television broadcast. Material was still shot on film and in some cases creatively edited on film, as well. But the finishing aspect shifted to video. For example, telecine systems were used to transfer and color correct film negative to videotape. The lab’s color timing function was shifted to this stage (before the edit) and was now handled by the telecine operator, who later became known as a colorist.

If work print was generated and edited by a film editor, then it was the video editor’s job to match those edits from the videotapes of the transferred film. Matching was a manual process. A number of enterprising film editors worked out methods to properly compute the offsets, but no computerized edit list was involved. Sometimes a video offline edit session was first performed with low-res copies of the film transfer. Other times producers simply worked from handwritten timecode notes for selected takes. This video editing – often called online editing and operated by an online editor – was the equivalent of the negative cutting stage described earlier. Simpler projects, such as TV commercials, might be edited directly in an online edit session without any prior film or offline edit.

Into the digital era

Over time, any creative editing previously done on film for television projects shifted to videotape edit systems and later to digital nonlinear edit systems (NLEs), such as Avid and Lightworks. These editors were referred to as offline editors and post now followed a bifurcated process known as offline and online editing. This was analogous to film’s work print and negative cutting stages. Likewise, telecine technology evolved to not only perform color correction during the film transfer process, but also afterwards working from the assembled master videotape as a source. This process, known as tape-to-tape color correction, gave the telecine operator – now colorist – the tools to perform better shot matching, as well as to create special looks in post. With this step the process had gone full circle, making the video colorist the true equivalent of the lab’s color timer.

As technology marched on, videotape and linear online edit bays gave way to all-digital, NLE-based facilities. Nevertheless, the separation of roles and processes continued. Around 2000, Avid came in with its Symphony model – originally a separate product and not just a software option. Avid Symphony systems offered a full set of color-correction tools and the ability to work in uncompressed resolutions.

It became quite common for a facility to have multiple offline edit bays using Avid Media Composer units staffed by creative, offline editors working with low-res media. These would be networked to an Avid shared storage solution. In addition, these facilities would also have one or more Avid Symphony units staffed by online editors.

A project would be edited on Media Composer until the cut was locked. Then assistants would ingest high-res media from files or videotape, and an online editor would “conform” the edit with this high-res media to match the approved timeline. The online editor would also handle Symphony color correction, insert visual effects, titles, etc. Finally, all tape or file deliverables would be exported out of the Avid Symphony. This system configuration and workflow is still in effect at many facilities around the world today, especially those that specialize in unscripted (“reality”) TV series.

The rise of the desktop systems

Naturally, there are more software options today. Over time, Avid’s dominance has been challenged by Apple Final Cut Pro (FCP 1-7 and FCPX), Adobe Premiere Pro, and more recently Blackmagic Design DaVinci Resolve. Systems are no longer limited by resolution constraints. General purpose computers can handle the work with little or no bespoke hardware requirements.

Fewer projects are even shot on film anymore. An old-school film lab post workflow is largely impossible to mount any longer. And so, video and digital workflows that were once only used for television shows and commercials are now used in nearly all aspects of post, including feature films. There are still some legacy terms in use, such as DI (digital intermediate), which for feature film is essentially an online edit and color correction session.

Given that modern software – even running on a laptop – is capable of performing nearly every creative and technical post-production task, why do we still have separate dedicated processes and different individuals assigned to each? The technical part of the answer is that some tasks do need extra tools. Proper color correction requires precision monitoring and becomes more efficient with specialized control panels. You may well be able to cut with a laptop, but if your source media is made up of 8K RED files, a proxy (offline-to-online) workflow makes more sense.

The human side of the equation is more complex

Post-production tasks often involve a left-brain/right-brain divide. Not every great editor is good when it comes to the completion phase. In spite of being very creative, many have sloppy edits, messy timelines, and project organization that leaves a lot to be desired. For example, all footage and sequences bunched together in one large project without bins. Timelines might have clips spread vertically in no particular order, with some disabled clips left over from changes made in each revision pass. As I’ve said before: You will be judged by your timelines!

The bottom line is that the kind of personality that makes a good creative editor is different from the one that makes a good online editor. The latter is often called a finishing editor today within larger facilities. While not a perfect analogy, there’s a direct evolutionary path from film negative cutter to linear online editor to today’s finishing editor.

If you compare this to the music world, songs are often handled by a mixing engineer followed by a mastering engineer. The mix engineer creates the best studio mix possible and the mastering engineer makes sure that mix adheres to a range of guidelines. The mastering engineer – working with a completely different set of audio tools – often adds their own polish to the piece, so there is creativity employed at this stage, as well. The mastering engineer is the music world’s equivalent to a finishing editor in the video world.

Remember that on larger projects, like a feature film, the film editor is contracted for a period of time to deliver a finished cut of the film. They are not permanent staff. Once that job is done, the project is handed off to the finishing team to accurately generate the final product working with the high-res media. Other than reviewing the work, there’s no value in having a highly paid film editor also handle basic assembly of the master. This is also true in many high-end commercial editorial companies. It’s more productive to have the creative editors working with the next client, while the staff finishing team finalizes the master files.

The right kit for the job

It also comes down to tools. Avid Symphony is still very much in play, especially with reality television shows. But there’s also no reason finishing and final delivery can’t be done using Apple Final Cut Pro or Adobe Premiere Pro. Often more specialized edit tools are assigned to these finishing duties, including systems such as Autodesk Smoke/Flame, Quantel Rio, and SGO Mistika. The reason, aside from quality, is that these tools also include comprehensive color and visual effects functions.

Finishing work today includes more than simply conforming a creative edit from a decision list. The finishing editor may be called upon to create minor visual effects and titles along with finessing those that came out of the edit. Increasingly Blackmagic Design DaVinci Resolve is becoming a strong contender for finishing – especially if Resolve was used for color correction. It’s a powerful all-in-one post-production application, capable of handling all of the effects and delivery chores. If you finish out of Resolve, that cuts out half of the roundtrip process.

Attention to detail is the hallmark of a good finishing editor. Having good color and VFX skills is a big plus. It is, however, a career path in its own right and not necessarily a stepping stone to becoming a top-level feature film editor or even an A-list colorist. While that might be a turn-off to some, it will also appeal to many others and provide a great place to let your skills shine.

©2023 Oliver Peters

Final Cut Pro + DaVinci Resolve

The concept of offline and online editing goes back to the origins of film editing. Work print was cut by the film editor during the creative stage of the process and then original negative was conformed by the lab and married to the final mix for the release prints (with a few steps in between). The terms offline and online were lifted from early computer lingo and applied to edit systems when the post process shifted from film to video. Thus offline equates to the creative editorial stage, while conforming and finishing services are defined as online.

Digital nonlinear edit systems evolved to become capable of handling all of these stages of creative editorial and finishing at the highest quality level. However, both phases require different mindsets and skills, as well as more advanced hardware for finishing. And so, the offline/online split continues to this day.

If you are an editor cutting local market spots, YouTube videos, corporate marketing pieces, etc., then you are probably used to performing all of these tasks on your own. However, most major commercials, TV shows, and films definitely split them up. In feature films and high-end TV shows, the film editors are separate from the sound editing/mixing team and everything goes through the funnel of a post facility that handles the finishing services. The latter is often referred to as the DI (digital intermediate) process in feature film productions.

You may be cutting on Media Composer, Premiere Pro, or Final Cut Pro, but the final assembly, insertion of effects, and color correction will likely be done with a totally different system and/or application. The world of finishing offers many options, like SGO Mistika, Quantel Rio, and Filmlight Baselight. But the tools that pop up most often are Autodesk Flame, DaVinci Resolve, and Avid Symphony (the latter for unscripted shows). And of course, Pro Tools seemingly “owns” the audio post market.

Since offline/online still exists, how can you use modern tools to your advantage?

If Apple’s Final Cut Pro is your main axe, then you might be reading this and thinking that you can easily do this all within FCP. Likewise, if you’ve shifted to Resolve, you’re probably wondering, why not just do it all in Resolve? Both concepts are true in theory; however, I contend that most good editors aren’t the best finishers and vice versa. In addition, it’s my opinion that Final Cut is optimized for editing, whereas Resolve is optimized for finishing. That doesn’t make them mutually exclusive. In fact, the opposite is true. They work great in tandem and I would suggest that it’s good to know and use both.

Scenario 1: If you edit with FCP, but use outside services for color and sound, then you’ll need to exchange lists and media. Typically this means AAF for sound and FCPXML for Resolve color (or possibly XML or AAF if it’s a different system). If those systems don’t accept FCPXML lists, then normally you’d need to invest in tools from Intelligent Assistance and/or Marquis Broadcast. However, you can also use Resolve to convert the FCPXML list into other formats.

If they are using Resolve for color and you have your own copy of Resolve or Resolve Studio, then simply import the FCPXML from Final Cut. You can now perform a “preflight check” on your sequence to make sure everything translated correctly from Final Cut. Take this opportunity to correct any issues before it goes to the colorist. Resolve includes media management to copy and collect all original media used in your timeline. You have the option to trim files if these are long clips. Ideally, the DP recorded short takes without a lot of resets, which makes it easy to copy the full-length clip. Since you are not rendering/exporting color-corrected media, you aren’t affected by the UHD export limit of the free Resolve version.

After media management, export the Resolve timeline file. Both media and timeline file can go directly to the colorist without any interpretation required at the other end. Finally, Resolve also enables AAF exports for audio, if you need to send the audio files to a mixer using Pro Tools.
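Because FCPXML is plain XML, a simple script can run part of that “preflight check” before anything goes to the colorist. This sketch uses a hand-made sample string with a simplified structure; real FCPXML exported from Final Cut Pro contains many more elements (resources, formats, effects) that a thorough check would also inspect:

```python
# Minimal sketch: list the clips in an FCPXML document before handoff.
import xml.etree.ElementTree as ET

SAMPLE = """<fcpxml version="1.10">
  <library><event name="Demo"><project name="Cut 4">
    <sequence><spine>
      <asset-clip name="A001_C003" offset="0s" duration="48/24s"/>
      <asset-clip name="A001_C007" offset="48/24s" duration="24/24s"/>
    </spine></sequence>
  </project></event></library>
</fcpxml>"""

def list_clips(fcpxml_text: str):
    root = ET.fromstring(fcpxml_text)
    # iter() walks the whole tree for asset-clip nodes, wherever nested
    return [(c.get("name"), c.get("duration"))
            for c in root.iter("asset-clip")]

for name, duration in list_clips(SAMPLE):
    print(name, duration)
```

In practice you would read the exported .fcpxml file from disk and compare the clip list against your media-managed files.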

Scenario 2: What if you are doing everything on your own and not sending the project to a colorist or mixer for finishing? Well, if you have the skillset and understand the delivery criteria, then Resolve is absolutely your friend for finishing the project. For one thing, owning Resolve means you could skip purchasing Apple Motion, Compressor, and/or Logic Pro, if you want to. These are all good tools to have and a real deal from a cost standpoint; however, Resolve or Resolve Studio definitely covers most of what you would do with these applications.

Start the same way by sending your FCPXML into Resolve. Correct any editorial issues, flatten/collapse compound and multicam clips, etc. Insert effects and titles or build them in the Fusion page. Color correct. When it comes to sound, the Fairlight page is a full-fledged DAW. Assuming you have the mixing chops, then Fairlight is a solid stand-in for Logic Pro, Pro Tools, or other DAWs. Finally, export the various formats via the Deliver page.

Aside from the obvious color and mixing superiority of Resolve over Final Cut Pro, remember that you can media-manage, as well as render out trimmed clips – something that FCP won’t do without third-party applications. It’s also possible to develop proxy workflows that work between these two applications.

While both Final Cut Pro and DaVinci Resolve are capable of standing alone to cover the creative and finishing stages of editing, the combination of the two offers the best of all worlds – a fast editing tool and a world-class finishing application.

©2023 Oliver Peters

Colourlab Ai

An artificial intelligence grading option for editors and colorists

There are many low-cost software options for color correction and grading, but getting a stunning look is still down to the skill of a colorist. Why can’t modern artificial intelligence tools improve the color grading process? Colorist and color scientist Dado Valentic developed Colourlab Ai as just that solution. It’s a macOS product that’s a combination of a standalone application and companion plug-ins for Resolve, Premiere Pro, Final Cut Pro, and Pomfort Live Grade.

Colourlab Ai comprises two main functions – grading and show look creation. Most Premiere Pro and Final Cut Pro editors will be interested in either the basic Colourlab Ai Creator or the richer features of Colourlab Ai Pro. The Creator version offers all of the color matching and grading tools, plus links to Final Cut Pro and Premiere Pro. The Pro version adds advanced show look design, DaVinci Resolve and Pomfort Live Grade integration, SDI output, and Tangent panel support. These integrations differ slightly, due to the architecture of each host application.

Advanced color science and image processing

Colourlab Ai uses color management similar to Resolve or Baselight. The incoming clip is processed with an IDT (input device transform), color adjustments are applied within a working color space, and then it’s processed with an ODT (output device transform) – all in real-time. This enables support for a variety of cameras with different color science models (such as ARRI Log-C) and it allows for output based on different display color spaces, such as Rec 709, P3, or sRGB.
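That IDT → working space → ODT chain can be sketched in code. This example uses ARRI’s published Log-C (v3, EI 800) decode curve as the IDT and a deliberately simplified 2.4-gamma display encode as the ODT; it is illustrative only and not Colourlab Ai’s actual color science:

```python
# IDT: decode ARRI Log-C (v3, EI 800) code values to scene-linear,
# using ARRI's published constants for this curve.
def logc3_to_linear(t: float) -> float:
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d = 0.247190, 0.385537
    e, f = 5.367655, 0.092809
    if t > e * cut + f:
        return (10 ** ((t - d) / c) - b) / a
    return (t - f) / e

def grade(x: float, gain: float = 1.0) -> float:
    # "Color adjustments within a working color space" - here, just gain.
    return x * gain

def linear_to_display(x: float, gamma: float = 2.4) -> float:
    # Simplified ODT: clamp scene-linear to [0,1] and gamma-encode.
    return max(0.0, min(1.0, x)) ** (1.0 / gamma)

code_value = 0.391          # Log-C code value for 18% grey
lin = logc3_to_linear(code_value)
print(round(lin, 3))        # ~0.18
print(round(linear_to_display(grade(lin)), 3))
```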

If you prefer to work directly with the Colourlab Ai application by itself – no problem. Import raw footage, color correct the clips, and then export rendered movie files with a baked-in look. Or you can use the familiar roundtrip approach as you would with DaVinci Resolve. However, the difference in the Colourlab Ai roundtrip is that only color information moves back to the editing application without the need to render any new media.

The Colourlab Ai plug-in for Final Cut Pro or Premiere Pro reads the color information created by the Colourlab Ai application from an XML file used to transfer that data. A source effect is automatically applied to each clip with those color parameters. The settings are still editable inside Final Cut Pro (not Premiere Pro). If you want to modify any color parameter, simply uncheck the “Use Smart Match” button and adjust the sliders in the inspector. In fact, the Colourlab Ai plug-in for FCP is a full-featured grading effect and you could use it that way. Of course, that’s doing it the hard way!

The ability to hand off source clips to Final Cut Pro with color metadata attached is unique to Colourlab Ai. This is especially a game changer for DITs who deliver footage with a one-light grade to editors working in FCP. The fact that no media need be rendered also significantly speeds up the process.

A professional grading workflow with Final Cut Pro and Colourlab Ai

Thanks to Apple’s color science and media architecture, Final Cut Pro can be used as a professional color grading platform with the right third-party tools. CoreMelt (Chromatic) and Color Trix (Color Finale) are two examples of developers who have had success offering advanced tools, using floating panels within the Final Cut Pro interface. Colourlab Ai takes a different approach by offloading the grade to its own application, which has been designed specifically for this task.

My workflow test involved two passes – once for dailies (such as a one-light grade performed by a DIT on-set) and then again for the final grade of the locked cut. I could have simply sent the locked cut once to Colourlab Ai, but my intention was to test a workflow more common for feature films. Shot matching between different set-ups and camera types is the most time-consuming part of color grading. Colourlab Ai is intended to make that process more efficient by employing artificial intelligence.

Step one of the workflow is to assemble a stringout of all of your raw footage into a new FCP project (sequence). Then drag that project from FCP to the Colourlab Ai icon on the dock (Colourlab Ai has already been opened). The Colourlab Ai app will automatically determine some of the camera sources (like ARRI files) and apply the correct IDT. For any unknown camera, manually test the settings for different cameras or simply stick with a default Rec 709 IDT.

The Pro interface features three tabs – Grade, Timeline Intelligence, and Look Design. The top half of the Grade tab displays the viewer and reference images used for matching. Color wheels, printer light controls, scopes, and versions are in the bottom half. Scope choices include waveform, RGB parade, or vectorscope, but also EL Zones. Developed by Ed Lachman, ASC, the EL Zone System is a false color display with 15 colors to represent a 15-stop exposure range. The mid-point equates to the 18% grey standard.
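The EL Zone idea is easy to express in code: quantize scene-linear exposure into one-stop zones centered on 18% grey. The 15-zone span and grey midpoint come from the description above; the mapping of zones to Lachman’s actual false colors is omitted here:

```python
import math

MID_GREY = 0.18
NUM_ZONES = 15          # 15 stops; zone 7 (0-indexed) = mid grey

def el_zone(linear_value: float) -> int:
    # Stops above/below 18% grey, rounded to the nearest whole stop.
    stops = math.log2(max(linear_value, 1e-9) / MID_GREY)
    zone = round(stops) + NUM_ZONES // 2   # shift so mid grey lands at 7
    return max(0, min(NUM_ZONES - 1, zone))

print(el_zone(0.18))        # 7 - mid grey sits at the center zone
print(el_zone(0.18 * 2))    # 8 - one stop over
print(el_zone(0.18 / 4))    # 5 - two stops under
```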

AI-based shot matching forms the core

Colourlab Ai focuses on smart shot matching, either through its Auto-Color feature or by matching to a reference image. The application includes a variety of reference images, but you can also import your own, such as from Shotdeck. The big advance Colourlab Ai offers over other matching solutions is Color Tune. A small panel of thumbnails can be opened for any clip. Adjust correction parameters – brightness, contrast, density, etc – simply by stepping through incremental value changes. Click on a thumbnail to preview it in the viewer.

The truly unique aspect is that Color Tune lets you choose from eleven matching options. Maybe instead of a Smart model, you’d prefer to match based only on Balance or RGB or a Perceptual model. Step through the thumbnails and pick the look that’s right for the shot. Therefore, matching isn’t an opaque process. It can be optimized in a style more akin to adjusting photos than traditional video color correction.

Timeline Intelligence allows you to rearrange the sequence to group similar set-ups together. Once you do this, use matching to set a pleasing look for one shot. Select that shot as a “fingerprint.” Then select the rest of the shots in a group and match those to the fingerprinted reference shot. This automatically applies that grade to the rest. But, it’s not like adding a simple LUT to a clip or copy-and-pasting settings. Each shot is separately analyzed and matched based on the differences within each shot.
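Colourlab Ai’s fingerprint analysis is proprietary, but the classic statistical version of shot matching transfers the per-channel mean and standard deviation of a reference shot onto a target shot (Reinhard-style color transfer). A sketch of that simpler idea, one channel at a time:

```python
import statistics

def match_channel(target, reference):
    # Re-center and re-scale the target to the reference statistics.
    t_mean, t_std = statistics.mean(target), statistics.pstdev(target)
    r_mean, r_std = statistics.mean(reference), statistics.pstdev(reference)
    scale = r_std / t_std if t_std else 1.0
    return [(v - t_mean) * scale + r_mean for v in target]

reference = [0.20, 0.40, 0.60]   # graded "fingerprint" shot (one channel)
target    = [0.10, 0.20, 0.30]   # ungraded shot to match
matched = match_channel(target, reference)
print([round(v, 2) for v in matched])   # [0.2, 0.4, 0.6]
```

Note that this is a whole-frame statistical match; the article’s point is that Colourlab Ai goes further by analyzing the differences within each shot.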

When you’re done going through all of the shots, right-click any clip and “push” the scene (the timeline) back to Final Cut Pro. This action uses FCPXML data to send the dailies clips back to Final Cut, now with the added Colourlab Ai effect containing the color parameters on each source clip.

Remember that Final Cut Pro automatically adds a LUT to certain camera clips, such as ARRI Alexa files recorded in Log-C. When your clips come back in from Colourlab Ai, FCP may add a LUT on top of some camera files. You don’t want this, because Colourlab Ai has already made this adjustment with its IDT. If that happens, simply change the inspector LUT setting for that source file to “none.”

Lock the edit and create your final look

At this point you can edit with native camera clips that have a primary grade applied to them. No proxy media rendered by a DIT, hence a much faster turnaround and no extra media to take up drive space. Once you’ve locked the edit, it’s time for step two – the show look design for the final edit.

Drag the edited FCP project (new sequence with the graded clips) to the Colourlab Ai icon on the dock to send the edited sequence back to Colourlab Ai. All of the clips retain the color settings created earlier in the dailies grading session. However, this primary grade is just color metadata and can be altered. After any additional color tweaks, it’s time to move to Show Looks. Click through the show look examples and apply the one that fits best.

If you have multiple shots with the same look, apply a show look to the first one, copy it, and then apply that look to the rest of the selected clips. In most cases, you’ll have a different show look for various scenes within a film, but it’s also possible that a single show look would work through the entire film. So, experiment!

To modify a look or create your own, step into the Look Design tab (Pro version). Here you’ll find the Filmlab and Primary panels. Filmlab uses film stock emulation models and film’s subtractive color (CMY instead of RGB) for adjustments. Their film emulation is among the most convincing I’ve seen. You can select from a wide range of branded negative and print film stocks and then make contrast, saturation, and CMY color adjustments. The Primary panel gives you even more control over RGBCMY for the lift, gamma, and gain regions. Custom adjustments may be saved to create your own show looks. Once you’ve set a show look for all of your shots, push the sequence back to Final Cut Pro. Voila – a fully graded show and no superfluous media created in the process.

Some observations

Colourlab Ai is a revolutionary tool based on a film-style approach to grading. Artificial intelligence models speed up the process, but you are always in control. Thanks to the ease of operation, you can get great results without Resolve’s complex node structure. You can always augment a shot with FCP’s own color tools for a power window or a vignette.

The application currently lacks a traditional undo/redo stack. Therefore, use the version history to experiment with settings and looks. Each time you generate a new match, such as with Auto-Color or using a reference image, a new version is automatically stored. If you want to iterate when a new match isn’t involved – for example, when making color wheel adjustments – manually add a version at any waypoint. The version history displays a thumbnail for each version. Step through them to pick the one that suits you best.

If you are new to color correction, then Colourlab Ai might look daunting at first glance. Nevertheless, it’s deceptively easy to use. There are numerous tutorials available on the website, as well as directly accessible from the launch window. A 7-day free trial can be downloaded for you to dip your toes in the water. The artificial intelligence at the heart of Colourlab Ai will enable any editor to deliver professional grades.

©2022 Oliver Peters

Analogue Wayback, Ep. 10

Color correction all stems from a slab of beef.

Starting out as an online editor at a production and post facility included working on a regional grocery chain account. The production company had a well-oiled “assembly line” process worked out with the agency in order to crank out 40-80 weekly TV commercials, plus several hundred station dubs. Start on Tuesday shooting product in the studio and recording/mixing tracks. Begin editing at the end of the day and work overnight, in time for agency review Wednesday morning. Make changes Wednesday afternoon and then copy station dubs overnight. Repeat the process on Thursday for the second round of the week.

The studio product photography involved tabletop recording of packaged product, as well as cooked spreads, such as a holiday turkey, a cooked steak, or an ice cream sundae. There was a chef on contract, so everything was real and edible – no fake stylist food there! Everything was set up on black or white sweep tables or large rolling, flat tables that could be dressed in whatever fashion was needed.

The camera was an RCA TK-45 with a short zoom lens and was mounted on a TV studio camera pedestal. This was prior to the invention of truly portable, self-contained video cameras. For location production, the two-piece TKP-45 was also used. It was tethered to our remote production RV.

This was a collaborative production, where our DP/camera operator handled lighting and the agency producers handled props and styling. The videotape operator handled the recording, camera set-up, and would insert retail price graphics (from art cards and a copy stand camera) during the recording of each take. Agency producers would review, pick takes, and note the timecode on the script. This allowed editors to assemble the spots unsupervised overnight.

Since studio recording was not a daily affair, there was no dedicated VTR operator at first. This duty was shared between the editors and the chief engineer. When I started as an editor, I would also spend one or two days supporting the studio operation. A big task for the VTR operator was camera set-up, aka camera shading. This is the TV studio equivalent to what a DIT might do today. The camera control electronics were located in my room with the videotape recorder, copy stand camera, and small video switcher.

Television cameras feature several video controls. The iris and pedestal knobs (or joysticks) control black level (pedestal) and brightness/exposure (iris). The TK-45 also included a gain switch, which increased sensitivity (0, +3dB, or +6dB), and a knob called black stretch. The latter would stretch the shadow area much like a software shadows slider or a gamma control today. Finally, there were RGB color balance controls for black and white. In normal operation, you would point the camera at a “chip chart” (grayscale chart) and balance RGB so that the image was truly black and white as measured on a waveform scope. The VTR operator/camera shader would set up the camera to the chart and then only adjust pedestal and iris throughout the day. 
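The chip-chart balance described above is simple arithmetic: pick per-channel gains so a grey patch reads equal in R, G, and B (green is usually left as the reference channel). An illustrative sketch:

```python
def balance_gains(r: float, g: float, b: float):
    # Gains that make a grey patch neutral relative to the green channel.
    return (g / r, 1.0, g / b)

# Camera reads a grey chip slightly warm:
r, g, b = 0.52, 0.50, 0.46
gr, gg, gb = balance_gains(r, g, b)
print(round(r * gr, 3), round(g * gg, 3), round(b * gb, 3))  # 0.5 0.5 0.5
```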

Unfortunately not all food – especially raw ham, beef, or a rare steak – looks great under studio lighting or in the world of NTSC color. Thankfully, RCA had also developed a camera module called the Chromaproc (chroma processor). This was a small module on the camera control unit that allowed you to adjust RGBCMY – the six vectors of the color spectrum. The exact details are hard to find now, but if I remember correctly, there were switches to enable each of the six vector controls. Below that were six accompanying hue control pots, which required a tweaker (small screwdriver) to adjust. When a producer became picky about the exact appearance of a rare steak and whether or not it looked appetizing, then you could flick on the Chromaproc and slightly shift the hues with the tweaker to get a better result. Thus you were “painting” the image.
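A rough software analog of the Chromaproc’s six-vector control would nudge the hue of pixels that fall near one chosen vector (here red) while leaving the rest untouched. The 30-degree window and smooth falloff below are assumptions for illustration, not RCA’s actual circuit behavior:

```python
import colorsys

def shift_vector_hue(rgb, center_deg=0.0, width_deg=30.0, shift_deg=8.0):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    hue_deg = h * 360.0
    # Distance from the target vector, wrapped around the hue circle.
    dist = min(abs(hue_deg - center_deg), 360.0 - abs(hue_deg - center_deg))
    if dist < width_deg:
        weight = 1.0 - dist / width_deg      # linear falloff to the edges
        hue_deg = (hue_deg + shift_deg * weight) % 360.0
    return colorsys.hsv_to_rgb(hue_deg / 360.0, s, v)

steak_red = (0.80, 0.25, 0.20)      # a slightly orange "rare steak" red
print(tuple(round(c, 3) for c in shift_vector_hue(steak_red)))
green = (0.2, 0.7, 0.3)             # far from the red vector: unchanged
print(tuple(round(c, 3) for c in shift_vector_hue(green)))
```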

RCA used this chroma processing technology in their cameras and telecine controls. The company eventually developed a separate product that predated any of the early color correctors, like the Wiz (the original device from which DaVinci was spawned). In addition to RGB color balance of the lift/gamma/gain ranges, you could further tweak the saturation and hue of these six vectors, which we now refer to as secondary color correction. The missing ingredients were memory, recall, and list management, which were added by the subsequent developers in their own products. This latter augmentation led to high-profile patent lawsuits, which have now largely been forgotten.

And so when I talk about color correction to folks, I’ll often tell them that everything I know about it was learned by shading product shots for grocery commercials!

©2022 Oliver Peters

CineMatch for FCP

Last year FilmConvert, developers of the Nitrate film emulation plug-in, released CineMatch. It’s a camera-matching plug-in designed for multiple platforms – including operating systems and different editing/grading applications. The initial 2020 release worked with DaVinci Resolve and Premiere Pro. Recently FilmConvert added Final Cut Pro support. You can purchase the plug-in for individual hosts or as a bundle for multiple hosts. If you bought the bundled version last year, then that license key is also applicable to the new Final Cut Pro plug-in. So, nothing extra to purchase for bundle owners.

CineMatch is designed to work with log and raw formats and a wide range of camera packs is included within the installer. To date, 70 combinations of brands and models are supported, including iPhones. FilmConvert has created these profiles based on the color science of the sensor used in each of the specific cameras.

CineMatch for FCP works the same way as the Resolve and Premiere Pro versions. First, select the source profile for the camera used. Next, apply the desired target camera profile. Finally, make additional color adjustments as needed.

If you shoot with one predominant A camera that is augmented by B and C cameras of different makes/models, then you can apply CineMatch to the B and C camera clips in order to better match them to the A camera’s look.

You can also use it to shift the look of a camera to that of a different camera. Let’s say that you want a Canon C300 to look more like an ARRI Alexa or even an iPhone. Simply use CineMatch to do that. In my example images, I’ve adjusted Blackmagic and Alexa clips so that they both emulate the color science of a Sony Venice camera.

When working in Final Cut Pro, remember that it will automatically apply Rec 709 LUTs to some log formats, like ARRI Alexa Log-C. When you plan to use CineMatch, be sure to also set the Camera LUT pulldown selector in the inspector pane to “none.” Otherwise, you will be stacking two LUT conversions, resulting in a very ugly look.

Once camera settings have been established, you can further adjust exposure, color balance, lift/gamma/gain color wheels, saturation, and the luma curve. There is also an HSL curves panel to further refine hue, saturation, and luma for individual color ranges. This is helpful when trying to match two cameras or shots to each other with greater accuracy. FCP’s comparison viewer is a great aid in making these tweaks.
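Lift/gamma/gain-style controls like these can be sketched with the ASC CDL transfer function, the industry’s standard interchange form for primary corrections: out = (in × slope + offset) ^ power, per channel. CineMatch’s internal math is not published; this is only the generic model:

```python
def asc_cdl(value: float, slope: float, offset: float, power: float) -> float:
    # ASC CDL per-channel transfer: scale, offset, then gamma-like power.
    v = value * slope + offset
    return max(0.0, v) ** power      # clamp negatives before the power

# Identity settings leave the image untouched:
print(asc_cdl(0.5, slope=1.0, offset=0.0, power=1.0))   # 0.5
# Lifting shadows and reducing contrast:
print(round(asc_cdl(0.5, slope=0.9, offset=0.05, power=0.8), 3))
```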

As a side note, it’s also possible to use CineMatch in conjunction with FilmConvert Nitrate (if you have it) to not only adjust color science, but then to subsequently emulate different film stocks and grain characteristics.

CineMatch is a useful tool when you work with different camera types and want to achieve a cohesive look. It’s easy and quick to use with little performance impact. CineMatch now also supports M1 Macs.

©2021 Oliver Peters