Blackmagic Cloud Store is here.

Last year Blackmagic Design announced a new line of network storage products along with Blackmagic Cloud. The storage products include Blackmagic Cloud Store (a high-performance RAID-5 array), Blackmagic Cloud Store Mini 8TB (a RAID-0 storage solution), and Blackmagic Cloud Pod (an appliance to turn any USB-C drive into network storage). I reviewed the Blackmagic Cloud Store Mini in August. Now its big brother, Blackmagic Cloud Store, is finally shipping.

Good things in small packages

Blackmagic Cloud Store comes in three sizes: 20TB ($7,595 USD), 80TB ($22,995 USD), and 320TB ($88,995 USD). Each uses the same canister-style enclosure as the company’s eGPU. It features dual power supplies and fast, quiet M.2 SSD cards, which are installed around a central core. You can leave it on your desk and hardly hear the fans running. Cloud Store runs Blackmagic OS and applies wear leveling, so no single M.2 card sees excessive data writes. Every sixth M.2 card is used for RAID-5 parity/data protection. The quoted capacity is a net figure, meaning you actually have the full 20TB, 80TB, or 320TB of usable storage.
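
As a back-of-the-envelope illustration of what that parity scheme implies (a sketch based only on the “every sixth card” description above, not on published internal specs), the raw flash has to exceed the net figure by a sixth:

```python
# Rough parity math implied by the description above: in a six-wide
# RAID-5-style group, one card's worth of space holds parity, so five
# of every six cards carry user data. The group width comes from the
# review; everything else is simple arithmetic.

def raw_flash_required(usable_tb: float, group_width: int = 6) -> float:
    data_cards = group_width - 1
    return usable_tb * group_width / data_cards

for usable in (20, 80, 320):
    print(f"{usable} TB usable -> ~{raw_flash_required(usable):.0f} TB raw")
# 20 -> 24, 80 -> 96, 320 -> 384
```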

In the unlikely case of hardware issues, such as an M.2 SSD card going down, you would need to contact Blackmagic Design support. Cloud Store is not designed for end-user repair. However, an authorized service engineer could repair it easily, even though it’s not a rack-mount design. Various internal assemblies can be unbolted from the core chassis and replaced.

For editors and colorists working with shared media, there’s a built-in 10G switch with four high-speed 10Gb/s ethernet ports. Connect an external network switch to one of these if you need high-speed access to the array from more than four computers. Next, there are two USB-C ports and two standard 1G ethernet ports, which can be used to connect additional users at slower speeds.

The intended use for the USB ports is to connect external drives for ingest and back-up*. An ethernet cable from your internet modem or switch to the 1G ports is needed for Dropbox and Google Drive syncing (more on that in a moment). There is also an HDMI port for a monitor used to display real-time data, such as storage activity, drive health, and connected users. Functions like port aggregation of the 10G switch and the USB-C media I/O have not yet been enabled.

In theory, all eight data ports could be used to connect users, if you forgo syncing and the media I/O function. Although the M.2 SSD array is fast, the network connections will determine the true speed. For example, the 1G and USB-C ports yield write/read speeds of around 200-300 MB/s, whereas the 10G ports perform in the 800-1,100 MB/s range.
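
Those numbers track with the raw line rates. A quick conversion shows each interface’s theoretical ceiling (ignoring protocol overhead):

```python
# Line rate in Gb/s divided by 8 gives a theoretical ceiling in MB/s
# (decimal megabytes, as storage vendors quote them).

def ceiling_mb_per_s(gbits_per_s: float) -> float:
    return gbits_per_s * 1000 / 8

for name, rate in [("1GbE", 1.0), ("10GbE", 10.0)]:
    print(f"{name}: up to ~{ceiling_mb_per_s(rate):.0f} MB/s before overhead")
# 1GbE tops out near 125 MB/s, so the upper end of the 200-300 MB/s range
# reflects the USB-C ports; the 800-1,100 MB/s measured on the 10G ports
# sits close to that interface's ~1,250 MB/s ceiling.
```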

Setting up a shared network

Download the Blackmagic Cloud Store set-up application to your computer. Its installer delivers the set-up app, a user guide PDF, and the standalone Proxy Generator application. Review the network set-up section of the user guide. You can connect a device without using the application, but you’ll need it to set up media sharing over the internet. Bring your own standard 3-prong AC power cord for the unit, too.

I created a small workgroup by connecting the Blackmagic Cloud Store to my 2020 27″ iMac and my 2021 14″ M1 Max MacBook Pro. The iMac has a 10G port and was directly connected. The MacBook Pro was connected using a Sonnet Solo 10G Thunderbolt-to-10G ethernet adapter. If you use a bus-powered adapter like the Sonnet with a laptop, make sure you keep the laptop on AC power. Otherwise, the storage volume will tend to unmount. As with most NAS systems, each time you start the computer, you’ll need to manually mount the storage volume again within the OS.
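
If the daily remounting gets old, a login script can handle it. Here’s a minimal sketch for macOS; the host and share names are placeholders, so substitute whatever address the Cloud Store set-up app reports:

```python
# Ask Finder to (re)mount the SMB share at login. Saved as a login item
# or LaunchAgent, this spares you the manual remount after each restart.
# The URL below is a hypothetical example, not the device's actual name.
import subprocess

SHARE_URL = "smb://cloud-store.local/CloudStore"  # placeholder address

subprocess.run(["open", SHARE_URL], check=True)
```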

The Cloud Store device is largely plug-and-play using standard network protocols built into the computer’s operating system. The iMac connected right away, but I had to change the IP address for the MacBook Pro within its preferences. Other than that minor hiccup, setting up Blackmagic Cloud Store was the easiest installation that I’ve ever done with any NAS system.

If you are using Blackmagic Cloud Store on-premises in a workgroup, then you are set to go. Blackmagic Design intended this to be an easy system to administer. Therefore, you cannot subdivide it into different virtual volumes or assign different levels of user permissions. The Blackmagic Cloud Store drive is mounted as a single drive volume on your desktop and shared media is accessible on all systems. This product is not solely built for DaVinci Resolve users. Apple Final Cut Pro or Adobe Premiere Pro library/project files also work fine when stored on the Cloud Store volume.

Syncing remote media

Blackmagic Design has factored in remote workflows, which is where Dropbox and Google Drive come in. Connect an ethernet cable for an internet feed to one of the 1G ports on the Blackmagic Cloud Store. There’s a tab in the Cloud Store set-up application for Dropbox or Google Drive. Now assign the Cloud Store volume as the location for your Dropbox or Google Drive folder. You can then opt to share only proxy media or full-res files and proxies. Proxy media files can be generated by DaVinci Resolve itself or using the Proxy Generator application.

Editors with whom you collaborate remotely will have access to the media thanks to Dropbox or Google Drive syncing. The remote editors don’t need Blackmagic Cloud Store units for this to work and can certainly work with other storage solutions. There are a variety of possible workflows, depending on whether it’s an editor sharing files with a colorist or an editor working with assistants on a feature film.

Dropbox and Google Drive syncing allows for an incremental workflow. For example, many productions are filmed over several days. As new media is added to the primary Blackmagic Cloud Store volume, syncing can happen automatically for all remote collaborators. Remember that the Dropbox and Google Drive options run through your own accounts with those services, not through Blackmagic Design, so you may incur charges based on your plan with these companies.

I personally have reservations about leaving storage directly connected to the internet. As many NAS owners who had systems exposed to the internet can attest, getting hacked and having your media held for ransom is a very real risk. So take precautions – you’ve been warned.

Blackmagic Cloud and DaVinci Resolve

Blackmagic Design has specifically tailored the workflow for DaVinci Resolve, which works with a database (library) containing multiple projects. There are three types of databases: local (stored on your computer), server (stored on a separate networked computer), or cloud, i.e. the Blackmagic Cloud server. Anyone can sign up at the Blackmagic Design website to get their own free Cloud account. If you decide to add a library to Blackmagic Cloud, then the charge is $5 per month, per library. Of course, a single library can contain multiple projects.

In a typical Blackmagic Cloud scenario, the main editor adds a Resolve library to Cloud and creates the active Resolve project there. When it’s time to share the project with other editors/VFX artists/colorists, turn on multi-user collaboration within Resolve. The library owner sends an invitation to the email address tied to the remote user’s account. The second editor has already received the media via a shipped drive or synced over the internet. That editor logs into their Blackmagic Cloud account to gain access to the library and that project. Open the project, relink the media, and it’s off to the races.

The first person to open a sequence has write access to that sequence. Everyone else has read-only access to the open sequence, but write access to any others that they open. If a change is made to a writeable sequence and saved, the library on Blackmagic Cloud is updated. This is relatively fast, but not as instant as if the database were local. Anyone viewing the sequence in a read-only mode is prompted to refresh the sequence. Both Resolve Studio and Resolve (the free App Store version) worked fine.
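
Conceptually, the locking behaves like a per-sequence write token. Here’s a toy model of my own (not Blackmagic’s code) just to make the rules concrete:

```python
# First opener of a sequence receives write access; everyone after that
# gets read-only until the writer closes it. Purely illustrative.

class Sequence:
    def __init__(self, name):
        self.name = name
        self.writer = None

    def open(self, user):
        if self.writer is None:
            self.writer = user      # first opener: read-write
            return "read-write"
        return "read-only"          # later openers: refresh to see saves

    def close(self, user):
        if self.writer == user:
            self.writer = None      # write access is now up for grabs

reel = Sequence("Reel 1")
print(reel.open("editor"))    # read-write
print(reel.open("colorist"))  # read-only
```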

Who this is for

There are three potential use cases for Blackmagic Cloud Store. You could simply use it as a local drive attached to one computer. This wouldn’t be the best solution, because Thunderbolt arrays are faster and cheaper. The second use case is the small workgroup under one roof. For example, this could be a small post house or a team of editors cutting a film. Simply connect four computers to the Blackmagic Cloud Store unit and now everyone can share media and project files.

The final use case embraces a remote workflow. One or more users are connected to a Blackmagic Cloud Store at one location. They can then share media and Resolve projects using the built-in syncing and Blackmagic Cloud. For example, you might be a great editor, but not the best colorist. Using this workflow, you could share your project remotely with an experienced colorist and work together through a sequence interactively. Or it might be a feature film, with several editors, each working remotely, editing different reels of the same film.

There’s plenty of competition in the market for shared, networked storage solutions. Most require a certain level of IT knowledge to set up and administer. Blackmagic Cloud Store is a deceptively simple, yet powerful, storage device that can fit many operational models. It’s a high-performance drive array that can sit quietly on your desktop without the need for rack space or extra cooling. Couple it with a Blackmagic Cloud account and you have one of the simplest ways to collaborate across town or across the country.

Update: *This feature was enabled with the Blackmagic Cloud Store 1.1.3 software update. This update also includes several bug fixes, as well as performance and stability improvements.

An earlier version of this review was written for and appears at Pro Video Coalition.

©2023 Oliver Peters

Impressions of NAB 2023

2023 marks the 100th year of the NAB Convention, which started out as a radio gathering in New York City. This year you could add ribbons to your badge indicating the number of years that you’d attended – 5, 10, etc. My first NAB was 1979 in Dallas, so I proudly displayed the 25+ ribbon. Although I haven’t attended every one in the intervening years, I have attended many of them – well over 25 in all.

Some have been ready to sound the death knell for large, in-person conventions, thanks to the pandemic and proliferation of online teleconferencing services like Zoom. 2019 was the last pre-covid year with an attendance of 91,500 – down from previous highs of over 100,000. 2022 was the first post-covid NAB and attendance was around 52,400. That was respectable given the climate a year ago. This year’s attendance was over 65,000, so certainly an upward trend. If anything, this represents a pent-up desire to kick the tires in person and hook back up with industry friends from all over the world. My gut feeling is that international attendance is still down, so I would expect future years’ attendance to grow higher.

Breaking down the halls

Like last year, the convention spread over the Central, North, and new West halls. The South hall with its two floors of exhibition space has been closed for renovation. The West hall is a three-story complex with a single, large exhibition floor. It’s an entire convention center in its own right. West hall is connected to the North hall by sidewalk, by an enclosed upstairs walkway, and by the LVCC Loop (the connecting tunnel that ferries people between buildings in Teslas). From what I hear, next year will be back to the North, Central, and South halls.

As with most NAB conventions, these halls were loosely organized by themes. Location and studio production gear could mostly be found in Central. Post was mainly in the North hall, but next year I would expect it to be back in the South hall. The West hall included a mixture of vendors that fit under connectivity topics, such as streaming, captioning, etc. It also included some of the radio services.

Although the booths covered nearly all of the floor space, it felt to me like many of the big companies were holding back. By that I mean that products with large infrastructure needs (big shared storage systems, large video switchers, huge mixing desks, etc) were absent. Mounting a large booth at the Las Vegas Convention Center – whether that’s for CES or NAB – is quite costly, with many unexpected charges.

Nevertheless, there were still plenty of elaborate camera sets and huge booths, like that of Blackmagic Design. If this was your first year at NAB, the sum of the whole was likely to be overwhelming. However, I’m sure many vendors were still taking a cautious approach. For example, there was no off-site Avid Connect event. There were no large-scale press conferences the day before opening.

The industry consolidates

There has been a lot of industry consolidation over the past decade or two, accelerated by the pandemic. Many venerable names are now part of larger holding companies. For example, Audiotonix owns many large audio brands, including Solid State Logic, DiGiCo, and Sound Devices, among others. And they added Harrison to their portfolio, just in time for NAB. The Sennheiser Group owns both Sennheiser and Neumann. Grass Valley, Snell, and Quantel products have all been consolidated by Black Dragon Capital under the Grass Valley brand. Such consolidation was evident through shared booth space. In many cases, the brands retained their individual identities. Unfortunately for Snell and Quantel, those brands have now been completely subsumed by Grass Valley.

A lot of this is a function of the industry tightening up. While there’s a lot more media production these days, there are also many inexpensive solutions to create that media. Therefore, many companies are venturing outside of their traditional lanes. For example, Sennheiser still manufactures great microphone products, but they’ve also developed the AMBEO immersive audio product line. At NAB they demonstrated the AMBEO 2-Channel Spatial Audio renderer. This lets a mixer take surround mixes and/or stems and turn them into 2-channel spatial mixes that are stereo-compatible. The control software allows you to determine the stereo width and the amount of surround and LFE signal put into the binaural mix. In the same booth, Neumann was demoing their new KH 120-II near-field studio monitors.

General themes

Overall, I didn’t see any single trend that would point to an overarching theme for the show. AI/ML/Neural Networks were part of many companies’ marketing strategy. Yet, I found nothing that jumped out like the current public fascination with ChatGPT. You have to wonder how much of this is more evolutionary than revolutionary, and whether the terms themselves are little more than hype.

Stereoscopic production is still around, although I only found one company with a product (Stereotec). Virtual sets were plentiful, including a large display by Vu Studios and even a mobile expando trailer by Magicbox for virtual set production on location. Insta360 was there, but tucked away in the back of Central hall.

Of course, everyone has a big push for “the cloud” in some way, shape, or form. However, if there is any single new trend that seems to be getting manufacturers’ attention, it’s passing video over IP. The usual companies that have dealt in SDI-based video hardware, like AJA, Blackmagic Design, and Matrox, were all showing IP equivalents. Essentially, where you used to send uncompressed video signals over the SDI protocol, you will now use the SMPTE ST 2110 IP protocol to send them through 10GigE (and faster) networks.
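
The bandwidth arithmetic explains the need for fast networks. A single uncompressed HD signal already exceeds what a 1GbE link can carry:

```python
# Approximate payload of uncompressed 1080p59.94 10-bit 4:2:2 video,
# ignoring ST 2110 packet overhead and ancillary data.

width, height, fps = 1920, 1080, 59.94
bits_per_pixel = 10 * 2          # 4:2:2 averages two 10-bit samples/pixel
gbps = width * height * fps * bits_per_pixel / 1e9
print(f"~{gbps:.2f} Gb/s")       # ~2.49 Gb/s for one HD stream
```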

The world of post production

Let me shift to post – specifically Adobe, Avid, and Blackmagic Design. Unlike Blackmagic, neither Avid nor Adobe featured their usual main stage presentations. I didn’t see Apple’s Final Cut Pro anywhere on the floor and spotted it only once in the press room. Avid’s booth was a shadow of its former self, with only a few smaller demo pods. Their main focus was showing the tighter integration between Media Composer and Pro Tools (finally!). There were no Pro Tools control surfaces to play with. However, in their defense, NAMM 2023 (the large audio and music products exhibition) was held just the week before. Most likely this was a big problem for any audio vendor that exhibits at both shows. NAMM shifts back to January in 2024, which is its historical slot on the calendar.

Uploading media to the cloud for editing has been the mantra at Frame.io, which is now under the Adobe wing. They’ve enhanced those features with direct support from Fujifilm (video) and Capture One (photography). In addition, Frame has improved features specific to the still photography market. Also new to the camera-to-cloud game is Atomos, which demoed its own cloud-based editor developed by asset management developer Axle ai.

Adobe demoed the new, text-based editing features for Premiere Pro. It’s currently in beta, but will soon be in full release. In my estimation, this is the best text-based method of any of the NLEs. Avid’s script-based editing is optimized for scripted content, but doesn’t automatically generate text. Its strength is in scripted films and TV shows, where the page layout mimics a script supervisor’s lined script.

Adobe’s approach seems better for documentary projects. Text is generated through speech-to-text software within Premiere Pro, which now runs on your computer instead of in the cloud. When you highlight text in the transcription panel, it automatically marks the in and out points on that source clip. Then, insert and overwrite commands used while the transcription panel is still selected automatically edit that portion of the source clip into the timeline. Once you shift your focus to the timeline, the transcription panel displays the edited text that corresponds to the clips on the timeline. Rearrange the text and Premiere Pro automatically rearranges the clips on the timeline. Or rearrange the clips and the text follows.
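
The underlying mechanism is easy to picture: word-level timestamps from the transcription let a text selection resolve to in/out points on the source clip. A simplified sketch with invented data (Premiere’s internal model is certainly richer):

```python
# Each transcribed word carries start/end times on the source clip, so
# highlighting a span of text implies an in point and an out point.

words = [  # (word, start_sec, end_sec) from speech-to-text
    ("We", 10.0, 10.2), ("started", 10.2, 10.6), ("filming", 10.6, 11.1),
    ("in", 11.1, 11.2), ("March", 11.2, 11.7),
]

def selection_to_in_out(words, first, last):
    """Map a highlighted word range to source in/out points."""
    return words[first][1], words[last][2]

src_in, src_out = selection_to_in_out(words, 1, 4)  # "started ... March"
print(f"in {src_in}s, out {src_out}s")              # in 10.2s, out 11.7s
```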

Meanwhile over at Blackmagic Design’s massive booth, the new DaVinci Resolve 18.5 features were on full display. 18.5 is also in beta. While there are a ton of new features, it also includes automatic speech-to-text generation. This felt to me like a work-in-progress. So far, only English is supported. It creates text for the source and you can edit from the text panel to the timeline. However, unlike Premiere Pro, there is no interaction between the text and clips in the timeline.

I was surprised to see that Blackmagic Design was not promoting Resolve on the iPad. There was only one demo station and no dedicated demo artist. I played with it a bit and it felt to me like it’s not truly optimized for iPadOS yet. It does work well with the Speed Editor keyboard. That’s useful for any user, since the Cut page is probably where anyone would do the bulk of the work in this version of Resolve. When I used the Apple Pencil, the interface lacked any feedback as icons were clicked. So I was never quite sure if an action had happened or not when I used the Pencil. I’m not sure many will do a complete edit with Resolve on the iPad; however, it could evolve into a productive tool for preliminary editing in the field.

Here’s an interesting side note. Nearly all of the Blackmagic Design demo pods for DaVinci Resolve were running on Apple’s 24″ candy-colored iMacs. Occasionally performance was a bit sluggish from what I could tell, especially when the operator demoed the new Relight feature to me. Nevertheless, they seemed to work well throughout the show.

In other Blackmagic news, all of the Cloud Store products are now shipping. The Cintel film scanner gets an 8mm gate. There are now IP versions of the video cards and converters. There’s an OLPF version of the URSA Mini Pro 12K and you can shoot vertical video with the Pocket Cinema Camera that’s properly tagged as vertical.

Of course, not everyone wants their raw media in the cloud and Blackmagic Design wasn’t showing the only storage products. Most of the usual storage vendors were present, including Facilis, OpenDrives, Synology, OWC, and QNAP. The technology trends include a shift away from spinning drives towards solid state storage, as well as faster networking protocols. Quite a few vendors (like Sonnet) were showing 25GbE (and faster) connections. This offers a speed improvement over the 1GbE and 10GbE ports and switches that are currently used.

Finally, one of the joys of NAB is to check out the smaller booths, where you’ll often find truly innovative new products. These small start-ups often grow into important companies in our industry. Hedge is just such a company. Tucked into a corner of the North hall, Hedge was demonstrating its growing portfolio of essential workflow products. Another start-up, Colourlab AI shared some booth space there, as well, to show off Freelab, their new integration with Premiere Pro and DaVinci Resolve.

That’s a quick rundown of my thoughts about this year’s NAB Show. For other thoughts and specific product reviews, be sure to also check out NAB coverage at Pro Video Coalition, RedShark News, and postPerspective. There’s also plenty of YouTube coverage.


©2023 Oliver Peters

What is a Finishing Editor?

To answer that, let’s step back to film. Up until the 1970s, dramatic television shows, feature films, and documentaries were shot and post-produced on film. The film lab would print positive copies (work print) of the raw negative footage. Then a team of film editors and assistants would handle the creative edit of the story by physically cutting and recutting this work print until the edit was approved. This process was often messy, with many film splices, grease pencil marks on the work print to indicate dissolves, and so on.

Once a cut was “locked” (approved by the director and the execs) the edited work print and accompanying notes and logs were turned over to the negative cutter. It was this person’s job to match the edits on the work print by physically cutting and splicing the original camera negative, which up until then was intact. The negative cutter would also insert any optical effects created by an optical house, including titles, transitions, and visual effects.

Measure twice, cut once

Any mistakes made during negative cutting were and are irreparable, so a negative cutter must be detail-oriented, precise, and clean in their work. You don’t want excess glue at the splices and you don’t want to pick up any extra dirt and dust on the negative if it can be avoided. If a mistaken cut is made and you have to repair that splice, then at least one frame is lost at that first splice.

A single frame – 1/24th of a second – is the difference in a fight scene between a punch just about to enter the frame and the arm passing all the way through the frame. So you don’t want a negative cutter who is prone to making mistakes. Paul Hirsch, ACE, points out in his book A long time ago in a cutting room far, far away… that there’s an unintentional jump cut in the Death Star explosion scene in the first Star Wars film, thanks to a negative cutting error.

In the last phase of the film post workflow, the cut negative goes to the lab’s color timer (the precursor to today’s colorist), who sets the “timing” information (color, brightness, and densities) used by the film printer. The printer generates an interpositive version of the complete film from the assembled negative. From this interpositive, the lab will generally create an internegative from which release prints are created.

From the lab to the linear edit bay

This short synopsis of the film post-production process points to where we started. By the mid-1970s, video post-production technology came onto the scene for anything destined for television broadcast. Material was still shot on film and in some cases creatively edited on film, as well. But the finishing aspect shifted to video. For example, telecine systems were used to transfer and color correct film negative to videotape. The lab’s color timing function was shifted to this stage (before the edit) and was now handled by the telecine operator, who later became known as a colorist.

If work print was generated and edited by a film editor, then it was the video editor’s job to match those edits from the videotapes of the transferred film. Matching was a manual process. A number of enterprising film editors worked out methods to properly compute the offsets, but no computerized edit list was involved. Sometimes a video offline edit session was first performed with low-res copies of the film transfer. Other times producers simply worked from handwritten timecode notes for selected takes. This video editing – often called online editing and operated by an online editor – was the equivalent to the negative cutting stage described earlier. Simpler projects, such as TV commercials, might be edited directly in an online edit session without any prior film or offline edit.
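
To appreciate the arithmetic those editors worked out by hand, here’s the basic conversion: 35mm 4-perf film runs 16 frames per foot, so a feet+frames position translates directly into a 24fps timecode offset. A small sketch:

```python
# 35mm 4-perf film runs 16 frames per foot. At 24 fps, a feet+frames
# position converts directly into a timecode offset. (Transfers to
# 29.97 video added 3:2 pulldown, which made the real math messier.)

FRAMES_PER_FOOT = 16
FPS = 24

def feet_frames_to_timecode(feet: int, frames: int) -> str:
    total = feet * FRAMES_PER_FOOT + frames
    h, rem = divmod(total, FPS * 3600)
    m, rem = divmod(rem, FPS * 60)
    s, f = divmod(rem, FPS)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

print(feet_frames_to_timecode(90, 0))  # 90 ft = 1,440 frames = 00:01:00:00
```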

Into the digital era

Over time, any creative editing previously done on film for television projects shifted to videotape edit systems and later to digital nonlinear edit systems (NLEs), such as Avid and Lightworks. These editors were referred to as offline editors and post now followed a bifurcated process known as offline and online editing. This was analogous to film’s work print and negative cutting stages. Likewise, telecine technology evolved to not only perform color correction during the film transfer process, but also afterwards, working from the assembled master videotape as a source. This process, known as tape-to-tape color correction, gave the telecine operator – now colorist – the tools to perform better shot matching, as well as to create special looks in post. With this step the process had gone full circle, making the video colorist the true equivalent of the lab’s color timer.

As technology marched on, videotape and linear online edit bays gave way to all-digital, NLE-based facilities. Nevertheless, the separation of roles and processes continued. Around 2000, Avid came in with its Symphony model – originally a separate product and not just a software option. Avid Symphony systems offered a full set of color-correction tools and the ability to work in uncompressed resolutions.

It became quite common for a facility to have multiple offline edit bays using Avid Media Composer units staffed by creative, offline editors working with low-res media. These would be networked to an Avid shared storage solution. In addition, these facilities would also have one or more Avid Symphony units staffed by online editors.

A project would be edited on Media Composer until the cut was locked. Then assistants would ingest high-res media from files or videotape, and an online editor would “conform” the edit with this high-res media to match the approved timeline. The online editor would also handle Symphony color correction, insert visual effects, titles, etc. Finally, all tape or file deliverables would be exported out of the Avid Symphony. This system configuration and workflow is still in effect at many facilities around the world today, especially those that specialize in unscripted (“reality”) TV series.

The rise of the desktop systems

Naturally, there are more software options today. Over time, Avid’s dominance has been challenged by Apple Final Cut Pro (FCP 1-7 and FCPX), Adobe Premiere Pro, and more recently Blackmagic Design DaVinci Resolve. Systems are no longer limited by resolution constraints. General purpose computers can handle the work with little or no bespoke hardware requirements.

Fewer projects are even shot on film anymore. An old-school film lab post workflow is largely impossible to mount any longer. And so, video and digital workflows that were once only used for television shows and commercials are now used in nearly all aspects of post, including feature films. There are still some legacy terms in use, such as DI (digital intermediate), which for feature film is essentially an online edit and color correction session.

Given that modern software – even running on a laptop – is capable of performing nearly every creative and technical post-production task, why do we still have separate dedicated processes and different individuals assigned to each? The technical part of the answer is that some tasks do need extra tools. Proper color correction requires precision monitoring and becomes more efficient with specialized control panels. You may well be able to cut with a laptop, but if your source media is made up of 8K RED files, a proxy (offline-to-online) workflow makes more sense.
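
The storage math makes the case on its own. With ballpark bitrates (illustrative assumptions, not measurements):

```python
# Compare camera-original versus proxy storage for a given shoot length.
# Bitrates below are rough assumptions for 8K camera raw and a light
# editing proxy; plug in your own codec's figures.

def terabytes(bitrate_mbps: float, hours: float) -> float:
    return bitrate_mbps / 8 * 3600 * hours / 1e6  # MB/s -> TB over time

camera_mbps = 1600   # assumed 8K raw data rate
proxy_mbps = 150     # assumed ProRes Proxy-class data rate

for hours in (10, 50):
    print(f"{hours} h: camera ~{terabytes(camera_mbps, hours):.1f} TB, "
          f"proxy ~{terabytes(proxy_mbps, hours):.2f} TB")
# 10 h: camera ~7.2 TB, proxy ~0.68 TB
```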

The human side of the equation is more complex

Post-production tasks often involve a left/right-side brain divide. Not every great editor is good when it comes to the completion phase. In spite of being very creative, many have sloppy edits, messy timelines, and project organization that leaves a lot to be desired. For example, all footage and sequences may be bunched together in one large project without bins. Timelines might have clips spread vertically in no particular order, with some disabled clips left over from changes made in each revision pass. As I’ve said before: You will be judged by your timelines!

The bottom line is that the kind of personality that makes a good creative editor is different than one that makes a good online editor. The latter is often called a finishing editor today within larger facilities. While not a perfect analogy, there’s a direct evolutionary path from film negative cutter to linear online editor to today’s finishing editor.

If you compare this to the music world, songs are often handled by a mixing engineer followed by a mastering engineer. The mix engineer creates the best studio mix possible and the mastering engineer makes sure that mix adheres to a range of guidelines. The mastering engineer – working with a completely different set of audio tools – often adds their own polish to the piece, so there is creativity employed at this stage, as well. The mastering engineer is the music world’s equivalent to a finishing editor in the video world.

Remember that on larger projects, like a feature film, the film editor is contracted for a period of time to deliver a finished cut of the film. They are not permanent staff. Once that job is done, the project is handed off to the finishing team to accurately generate the final product working with the high-res media. Other than reviewing the work, there’s no value in having a highly paid film editor also handle basic assembly of the master. This is also true at many high-end commercial editorial companies. It’s more productive to have the creative editors working with the next client, while the staff finishing team finalizes the master files.

The right kit for the job

It also comes down to tools. Avid Symphony is still very much in play, especially with reality television shows. But there’s also no reason finishing and final delivery can’t be done using Apple Final Cut Pro or Adobe Premiere Pro. Often more specialized edit tools are assigned to these finishing duties, including systems such as Autodesk Smoke/Flame, Quantel Rio, and SGO Mistika. The reason, aside from quality, is that these tools also include comprehensive color and visual effects functions.

Finishing work today includes more than simply conforming a creative edit from a decision list. The finishing editor may be called upon to create minor visual effects and titles, along with finessing those that came out of the edit. Increasingly, Blackmagic Design DaVinci Resolve is becoming a strong contender for finishing – especially if Resolve was used for color correction. It’s a powerful all-in-one post-production application, capable of handling all of the effects and delivery chores. If you finish out of Resolve, that cuts out half of the roundtrip process.

Attention to detail is the hallmark of a good finishing editor. Having good color and VFX skills is a big plus. It is, however, a career path in its own right and not necessarily a stepping stone to becoming a top-level feature film editor or even an A-list colorist. While that might be a turn-off to some, it will also appeal to many others and provide a great place to let your skills shine.

©2023 Oliver Peters

Final Cut Pro + DaVinci Resolve

The concept of offline and online editing goes back to the origins of film editing. Work print was cut by the film editor during the creative stage of the process and then original negative was conformed by the lab and married to the final mix for the release prints (with a few steps in between). The terms offline and online were lifted from early computer lingo and applied to edit systems when the post process shifted from film to video. Thus offline equates to the creative editorial stage, while conforming and finishing services are defined as online.

Digital nonlinear edit systems evolved to become capable of handling all of these stages of creative editorial and finishing at the highest quality level. However, both phases require different mindsets and skills, as well as more advanced hardware for finishing. And so, the offline/online split continues to this day.

If you are an editor cutting local market spots, YouTube videos, corporate marketing pieces, etc, then you are probably used to performing all of these tasks on your own. However, most major commercials, TV shows, and films definitely split them up. In feature films and high-end TV shows, the film editors are separate from the sound editing/mixing team and everything goes through the funnel of a post facility that handles the finishing services. The latter is often referred to as the DI (digital intermediate) process in feature film productions.

You may be cutting on Media Composer, Premiere Pro, or Final Cut Pro, but the final assembly, insertion of effects, and color correction will likely be done with a totally different system and/or application. The world of finishing offers many options, like SGO Mistika, Quantel Rio, and Filmlight Baselight. But the tools that pop up most often are Autodesk Flame, DaVinci Resolve, and Avid Symphony (the latter for unscripted shows). And of course, Pro Tools seemingly “owns” the audio post market.

Since offline/online still exists, how can you use modern tools to your advantage?

If Apple’s Final Cut Pro is your main axe, then you might be reading this and think that you can easily do this all within FCP. Likewise, if you’ve shifted to Resolve, you’re probably wondering, why not just do it all in Resolve? Both concepts are true in theory; however, I contend that most good editors aren’t the best finishers and vice versa. In addition, it’s my opinion that Final Cut is optimized for editing, whereas Resolve is optimized for finishing. That doesn’t make them mutually exclusive. In fact, the opposite is true. They work great in tandem and I would suggest that it’s good to know and use both.

Scenario 1: If you edit with FCP, but use outside services for color and sound, then you’ll need to exchange lists and media. Typically this means AAF for sound and FCPXML for Resolve color (or possibly XML or AAF if it’s a different system). If those systems don’t accept FCPXML lists, then normally you’d need to invest in tools from Intelligent Assistance and/or Marquis Broadcast. However, you can also use Resolve to convert the FCPXML list into other formats.

If they are using Resolve for color and you have your own copy of Resolve or Resolve Studio, then simply import the FCPXML from Final Cut. You can now perform a “preflight check” on your sequence to make sure everything translated correctly from Final Cut. Take this opportunity to correct any issues before it goes to the colorist. Resolve includes media management to copy and collect all original media used in your timeline. You have the option to trim files if these are long clips. Ideally, the DP recorded short takes without a lot of resets, which makes it easy to copy the full-length clip. Since you are not rendering/exporting color-corrected media, you aren’t affected by the UHD export limit of the free Resolve version.
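
Since FCPXML is plain XML, parts of that preflight can even be scripted. Here’s a hedged sketch using only Python’s standard library (element names follow recent FCPXML versions, but the schema does change between releases, and the filename is a hypothetical example) that lists every asset a timeline references, so missing media surfaces before handoff:

```python
# List the assets referenced by an FCPXML export, as a quick preflight
# before sending media and a timeline to the colorist.
import xml.etree.ElementTree as ET

tree = ET.parse("locked_cut.fcpxml")
for asset in tree.getroot().iter("asset"):
    media_rep = asset.find("media-rep")  # newer FCPXML keeps the path here
    src = media_rep.get("src") if media_rep is not None else asset.get("src")
    print(asset.get("name"), "->", src)
```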

After media management, export the Resolve timeline file. Both media and timeline file can go directly to the colorist without any interpretation required at the other end. Finally, Resolve also enables AAF exports for audio, if you need to send the audio files to a mixer using Pro Tools.

Scenario 2: What if you are doing everything on your own and not sending the project to a colorist or mixer for finishing? Well, if you have the skillset and understand the delivery criteria, then Resolve is absolutely your friend for finishing the project. For one thing, owning Resolve means you could skip purchasing Apple Motion, Compressor, and/or Logic Pro, if you want to. These are all good tools to have and a real deal from a cost standpoint; however, Resolve or Resolve Studio definitely covers most of what you would do with these applications.

Start the same way by sending your FCPXML into Resolve. Correct any editorial issues, flatten/collapse compound and multicam clips, etc. Insert effects and titles or build them in the Fusion page. Color correct. When it comes to sound, the Fairlight page is a full-fledged DAW. Assuming you have the mixing chops, then Fairlight is a solid stand-in for Logic Pro, Pro Tools, or other DAWs. Finally, export the various formats via the Deliver page.

Aside from the obvious color and mixing superiority of Resolve over Final Cut Pro, remember that you can media-manage, as well as render out trimmed clips – something that FCP won’t do without third-party applications. It’s also possible to develop proxy workflows that work between these two applications.

While both Final Cut Pro and DaVinci Resolve are capable of standing alone to cover the creative and finishing stages of editing, the combination of the two offers the best of all worlds – a fast editing tool and a world-class finishing application.

©2023 Oliver Peters

Colourlab Ai

An artificial intelligence grading option for editors and colorists

There are many low-cost software options for color correction and grading, but getting a stunning look is still down to the skill of a colorist. Why can’t modern artificial intelligence tools improve the color grading process? Colorist and color scientist Dado Valentic developed Colourlab Ai as just that solution. It’s a macOS product that’s a combination of a standalone application and companion plug-ins for Resolve, Premiere Pro, Final Cut Pro, and Pomfort Live Grade.

Colourlab Ai comprises two main functions – grading and show look creation. Most Premiere Pro and Final Cut Pro editors will be interested in either the basic Colourlab Ai Creator or the richer features of Colourlab Ai Pro. The Creator version offers all of the color matching and grading tools, plus links to Final Cut Pro and Premiere Pro. The Pro version adds advanced show look design, DaVinci Resolve and Pomfort Live Grade integration, SDI output, and Tangent panel support. These integrations differ slightly, due to the architecture of each host application.

Advanced color science and image processing

Colourlab Ai uses color management similar to Resolve or Baselight. The incoming clip is processed with an IDT (input device transform), color adjustments are applied within a working color space, and then it’s processed with an ODT (output device transform) – all in real-time. This enables support for a variety of cameras with different color science models (such as ARRI Log-C) and it allows for output based on different display color spaces, such as Rec 709, P3, or sRGB.
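
To make that pipeline concrete, here’s a toy rendition of my own (generic curves, not Colourlab’s actual color science): decode a log signal to scene-linear, grade in linear, then encode for the display.

```python
# IDT -> working space -> ODT, in miniature. The log curve is a generic
# 14-stop stand-in, not any specific camera's formula.

def idt_log_to_linear(code):
    """Decode: mid grey (0.18) sits at a log code of ~0.57 in this toy."""
    return 0.18 * 2 ** (code * 14 - 8)

def grade(linear, exposure_stops=0.0):
    """Adjustments happen in scene-linear, e.g. a printer-light shift."""
    return linear * 2 ** exposure_stops

def odt_to_display(linear, gamma=2.4):
    """Encode for a Rec 709-ish display."""
    return min(max(linear, 0.0), 1.0) ** (1 / gamma)

lin = idt_log_to_linear(0.571)                     # ~0.18 scene-linear
print(round(odt_to_display(grade(lin, +1.0)), 3))  # one stop brighter
```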

If you prefer to work directly with the Colourlab Ai application by itself – no problem. Import raw footage, color correct the clips, and then export rendered movie files with a baked-in look. Or you can use the familiar roundtrip approach as you would with DaVinci Resolve. However, the difference in the Colourlab Ai roundtrip is that only color information moves back to the editing application, without the need to render any new media.

The Colourlab Ai plug-in for Final Cut Pro or Premiere Pro reads the color information created by the Colourlab Ai application from an XML file used to transfer that data. A source effect is automatically applied to each clip with those color parameters. The settings are still editable inside Final Cut Pro (not Premiere Pro). If you want to modify any color parameter, simply uncheck the “Use Smart Match” button and adjust the sliders in the inspector. In fact, the Colourlab Ai plug-in for FCP is a full-featured grading effect and you could use it that way. Of course, that’s doing it the hard way!

The ability to hand off source clips to Final Cut Pro with color metadata attached is unique to Colourlab Ai. This is especially a game changer for DITs who deliver footage with a one-light grade to editors working in FCP. The fact that no media need be rendered also significantly speeds up the process.

A professional grading workflow with Final Cut Pro and Colourlab Ai

Thanks to Apple’s color science and media architecture, Final Cut Pro can be used as a professional color grading platform with the right third-party tools. CoreMelt (Chromatic) and Color Trix (Color Finale) are two examples of developers who have had success offering advanced tools, using floating panels within the Final Cut Pro interface. Colourlab Ai takes a different approach by offloading the grade to its own application, which has been designed specifically for this task.

My workflow test involved two passes – once for dailies (such as a one-light grade performed by a DIT on-set) and then again for the final grade of the locked cut. I could have simply sent the locked cut once to Colourlab Ai, but my intention was to test a workflow more common for feature films. Shot matching between different set-ups and camera types is the most time-consuming part of color grading. Colourlab Ai is intended to make that process more efficient by employing artificial intelligence.

Step one of the workflow is to assemble a stringout of all of your raw footage into a new FCP project (sequence). Then drag that project from FCP to the Colourlab Ai icon on the dock (with Colourlab Ai already open). The Colourlab Ai app will automatically determine some of the camera sources (like ARRI files) and apply the correct IDT. For any unknown camera, manually test the settings for different cameras or simply stick with a default Rec 709 IDT.

The Pro interface features three tabs – Grade, Timeline Intelligence, and Look Design. The top half of the Grade tab displays the viewer and reference images used for matching. Color wheels, printer light controls, scopes, and versions are in the bottom half. Scope choices include waveform, RGB parade, or vectorscope, but also EL Zones. Developed by Ed Lachman, ASC, the EL Zone System is a false color display with 15 colors to represent a 15-stop exposure range. The mid-point equates to the 18% grey standard.
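
The stop math behind EL Zones is straightforward: each zone is one stop wide, centered on 18% grey, so a scene-linear value maps to log2(value/0.18) stops. The color assignments are Lachman’s; this sketch just shows the arithmetic:

```python
import math

def stops_from_mid_grey(linear_value: float) -> float:
    """Exposure in stops relative to the 18% grey mid-point."""
    return math.log2(linear_value / 0.18)

for v in (0.045, 0.18, 0.72):
    print(f"{v:.3f} -> {stops_from_mid_grey(v):+.1f} stops")
# 0.045 -> -2.0 stops, 0.18 -> +0.0 stops, 0.72 -> +2.0 stops
```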

AI-based shot matching forms the core

Colourlab Ai focuses on smart shot matching, either through its Auto-Color feature or by matching to a reference image. The application includes a variety of reference images, but you can also import your own, such as from Shotdeck. The big advance Colourlab Ai offers over other matching solutions is Color Tune. A small panel of thumbnails can be opened for any clip. Adjust correction parameters – brightness, contrast, density, etc – simply by stepping through incremental value changes. Click on a thumbnail to preview it in the viewer.

The truly unique aspect is that Color Tune lets you choose from eleven matching options. Maybe instead of a Smart model, you’d prefer to match based only on Balance or RGB or a Perceptual model. Step through the thumbnails and pick the look that’s right for the shot. Therefore, matching isn’t an opaque process. It can be optimized in a style more akin to adjusting photos than traditional video color correction.

Timeline Intelligence allows you to rearrange the sequence to group similar set-ups together. Once you do this, use matching to set a pleasing look for one shot. Select that shot as a “fingerprint.” Then select the rest of the shots in a group and match those to the fingerprinted reference shot. This automatically applies that grade to the rest. But, it’s not like adding a simple LUT to a clip or copy-and-pasting settings. Each shot is separately analyzed and matched based on the differences within each shot.
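
As a naive illustration of per-shot analysis (my sketch of a classic statistics-transfer approach, not Colourlab’s actual model), even matching a shot’s per-channel mean and spread to a reference yields a different correction for every clip:

```python
import numpy as np

def match_to_reference(shot: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """shot/ref are float RGB images shaped (H, W, 3) in the 0-1 range."""
    out = shot.copy()
    for c in range(3):
        s_mu, s_sd = shot[..., c].mean(), shot[..., c].std() + 1e-6
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # shift/scale this shot's channel statistics onto the reference's
        out[..., c] = (shot[..., c] - s_mu) / s_sd * r_sd + r_mu
    return np.clip(out, 0.0, 1.0)
```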

When you’re done going through all of the shots, right-click any clip and “push” the scene (the timeline) back to Final Cut Pro. This action uses FCPXML data to send the dailies clips back to Final Cut, now with the added Colourlab Ai effect containing the color parameters on each source clip.

Remember that Final Cut Pro automatically adds a LUT to certain camera clips, such as ARRI Alexa files recorded in Log-C. When your clips come back in from Colourlab Ai, FCP may add a LUT on top of some camera files. You don’t want this, because Colourlab Ai has already made this adjustment with its IDT. If that happens, simply change the inspector LUT setting for that source file to “none.”

Lock the edit and create your final look

At this point you can edit with native camera clips that have a primary grade applied to them. No proxy media rendered by a DIT, hence a much faster turnaround and no extra media to take up drive space. Once you’ve locked the edit, it’s time for step two – the show look design for the final edit.

Drag the edited FCP project (new sequence with the graded clips) to the Colourlab Ai icon on the dock to send the edited sequence back to Colourlab Ai. All of the clips retain the color settings created earlier in the dailies grading session. However, this primary grade is just color metadata and can be altered. After any additional color tweaks, it’s time to move to Show Looks. Click through the show look examples and apply the one that fits best.

If you have multiple shots with the same look, apply a show look to the first one, copy it, and then apply that look to the rest of the selected clips. In most cases, you’ll have a different show look for various scenes within a film, but it’s also possible that a single show look would work through the entire film. So, experiment!

To modify a look or create your own, step into the Look Design tab (Pro version). Here you’ll find the Filmlab and Primary panels. Filmlab uses film stock emulation models and film’s subtractive color (CMY instead of RGB) for adjustments. Their film emulation is among the most convincing I’ve seen. You can select from a wide range of branded negative and print film stocks and then make contrast, saturation, and CMY color adjustments. The Primary panel gives you even more control over RGBCMY for the lift, gamma, and gain regions. Custom adjustments may be saved to create your own show looks. Once you’ve set a show look for all of your shots, push the sequence back to Final Cut Pro. Voila – a fully graded show and no superfluous media created in the process.

Some observations

Colourlab Ai is a revolutionary tool based on a film-style approach to grading. Artificial intelligence models speed up the process, but you are always in control. Thanks to the ease of operation, you can get great results without Resolve’s complex node structure. You can always augment a shot with FCP’s own color tools for a power window or a vignette.

The application currently lacks a traditional undo/redo stack. Therefore, use the version history to experiment with settings and looks. Each time you generate a new match, such as with Auto-Color or using a reference image, a new version is automatically stored. If you want to iterate when a new match isn’t involved – for example, when making color wheel adjustments – then manually add a version at any waypoint. The version history displays a thumbnail for each version. Step through them to pick the one that suits you best.

If you are new to color correction, then Colourlab Ai might look daunting at first glance. Nevertheless, it’s deceptively easy to use. There are numerous tutorials available on the website, as well as directly accessible from the launch window. A 7-day free trial can be downloaded for you to dip your toes in the water. The artificial intelligence at the heart of Colourlab Ai will enable any editor to deliver professional grades.

©2022 Oliver Peters