Colourlab Ai

An artificial intelligence grading option for editors and colorists

There are many low-cost software options for color correction and grading, but getting a stunning look still comes down to the skill of a colorist. Why can’t modern artificial intelligence tools improve the color grading process? Colorist and color scientist Dado Valentic developed Colourlab Ai as just that solution. It’s a macOS product that combines a standalone application with companion plug-ins for Resolve, Premiere Pro, Final Cut Pro, and Pomfort Live Grade.

Colourlab Ai comprises two main functions – grading and show look creation. Most Premiere Pro and Final Cut Pro editors will be interested in either the basic Colourlab Ai Creator or the richer features of Colourlab Ai Pro. The Creator version offers all of the color matching and grading tools, plus links to Final Cut Pro and Premiere Pro. The Pro version adds advanced show look design, DaVinci Resolve and Pomfort Live Grade integration, SDI output, and Tangent panel support. These integrations differ slightly due to the architecture of each host application.

Advanced color science and image processing

Colourlab Ai uses color management similar to Resolve or Baselight. The incoming clip is processed with an IDT (input device transform), color adjustments are applied within a working color space, and then it’s processed with an ODT (output device transform) – all in real-time. This enables support for a variety of cameras with different color science models (such as ARRI Log-C) and it allows for output based on different display color spaces, such as Rec 709, P3, or sRGB.
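
To make that pipeline concrete, here is a minimal Python sketch of the IDT-to-ODT flow. It is purely conceptual: Colourlab Ai’s actual color science is proprietary, and the IDT below is a hypothetical log curve (ARRI’s real Log-C math is more involved), though the ODT uses the genuine Rec 709 OETF formula.

```python
import numpy as np

def idt(log_pixels):
    """Input device transform: camera log -> scene-linear working space.
    A hypothetical log curve, not ARRI's actual Log-C math."""
    return np.exp2((log_pixels - 0.4) * 8.0) * 0.18

def grade(linear_pixels, exposure_stops=0.5):
    """Color adjustment applied in the scene-linear working space."""
    return linear_pixels * (2.0 ** exposure_stops)

def odt(linear_pixels):
    """Output device transform: linear -> display encoding (Rec 709 OETF)."""
    v = np.clip(linear_pixels, 0.0, 1.0)
    return np.where(v < 0.018, 4.5 * v, 1.099 * v ** 0.45 - 0.099)

frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in log frame
display_frame = odt(grade(idt(frame)))
```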

If you prefer to work directly with the Colourlab Ai application by itself – no problem. Import raw footage, color correct the clips, and then export rendered movie files with a baked-in look. Or you can use the familiar roundtrip approach as you would with DaVinci Resolve. However, the difference in the Colourlab Ai roundtrip is that only color information moves back to the editing application, without the need to render any new media.

The Colourlab Ai plug-in for Final Cut Pro or Premiere Pro reads the color information created by the Colourlab Ai application from an XML file used to transfer that data. A source effect is automatically applied to each clip with those color parameters. The settings remain editable inside Final Cut Pro (though not in Premiere Pro). If you want to modify any color parameter, simply uncheck the “Use Smart Match” button and adjust the sliders in the inspector. In fact, the Colourlab Ai plug-in for FCP is a full-featured grading effect and you could use it that way. Of course, that’s doing it the hard way!
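
As a rough illustration of how such metadata travels, this short Python sketch walks an FCPXML file and prints any video filter parameters attached to clips. The filter-video/param structure follows FCPXML’s general conventions, but the filename and the exact parameter names Colourlab Ai writes are assumptions – they aren’t documented here.

```python
import xml.etree.ElementTree as ET

# Walk an FCPXML file and list per-clip video effect parameters.
# "graded_sequence.fcpxml" is a hypothetical filename.
tree = ET.parse("graded_sequence.fcpxml")

for clip in tree.iter():
    if clip.tag not in ("asset-clip", "clip"):
        continue
    for fx in clip.findall("filter-video"):
        params = {p.get("name"): p.get("value") for p in fx.findall("param")}
        print(clip.get("name"), "->", fx.get("name"), params)
```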

The ability to hand off source clips to Final Cut Pro with color metadata attached is unique to Colourlab Ai. It’s a game changer, especially for DITs who deliver footage with a one-light grade to editors working in FCP. The fact that no media need be rendered also significantly speeds up the process.

A professional grading workflow with Final Cut Pro and Colourlab Ai

Thanks to Apple’s color science and media architecture, Final Cut Pro can be used as a professional color grading platform with the right third-party tools. CoreMelt (Chromatic) and Color Trix (Color Finale) are two examples of developers who have had success offering advanced tools, using floating panels within the Final Cut Pro interface. Colourlab Ai takes a different approach by offloading the grade to its own application, which has been designed specifically for this task.

My workflow test involved two passes – once for dailies (such as a one-light grade performed by a DIT on-set) and then again for the final grade of the locked cut. I could have simply sent the locked cut once to Colourlab Ai, but my intention was to test a workflow more common for feature films. Shot matching between different set-ups and camera types is the most time-consuming part of color grading. Colourlab Ai is intended to make that process more efficient by employing artificial intelligence.

Step one of the workflow is to assemble a stringout of all of your raw footage into a new FCP project (sequence). Then drag that project from FCP to the Colourlab Ai icon on the dock (with Colourlab Ai already open). The Colourlab Ai app will automatically identify some of the camera sources (like ARRI files) and apply the correct IDT. For any unknown camera, manually test different camera settings or simply stick with a default Rec 709 IDT.

The Pro interface features three tabs – Grade, Timeline Intelligence, and Look Design. The top half of the Grade tab displays the viewer and reference images used for matching. Color wheels, printer light controls, scopes, and versions are in the bottom half. Scope choices include waveform, RGB parade, or vectorscope, but also EL Zones. Developed by Ed Lachman, ASC, the EL Zone System is a false color display with 15 colors to represent a 15-stop exposure range. The mid-point equates to the 18% grey standard.
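
The arithmetic behind a stop-based false color display is straightforward: convert a scene-linear value to stops relative to 18% grey, then bucket it into one of 15 one-stop zones. Below is a minimal sketch of that idea; the EL Zone System’s actual colors and zone boundaries are defined by its own specification.

```python
import math

MID_GREY = 0.18  # 18% grey, the system's mid-point

def zone_index(linear_value):
    """Map a scene-linear value to one of 15 one-stop zones (zone 7 = mid grey)."""
    stops = math.log2(max(linear_value, 1e-6) / MID_GREY)
    zone = int(round(stops)) + 7
    return min(max(zone, 0), 14)  # clamp to the 15-zone range

for v in (0.045, 0.09, 0.18, 0.36, 0.72):
    print(f"linear {v:.3f} -> zone {zone_index(v)}")
```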

AI-based shot matching forms the core

Colourlab Ai focuses on smart shot matching, either through its Auto-Color feature or by matching to a reference image. The application includes a variety of reference images, but you can also import your own, such as from Shotdeck. The big advance Colourlab Ai offers over other matching solutions is Color Tune. A small panel of thumbnails can be opened for any clip. Adjust correction parameters – brightness, contrast, density, etc. – simply by stepping through incremental value changes. Click on a thumbnail to preview it in the viewer.

The truly unique aspect is that Color Tune lets you choose from eleven matching options. Maybe instead of a Smart model, you’d prefer to match based only on Balance or RGB or a Perceptual model. Step through the thumbnails and pick the look that’s right for the shot. Therefore, matching isn’t an opaque process. It can be optimized in a style more akin to adjusting photos than traditional video color correction.

Timeline Intelligence allows you to rearrange the sequence to group similar set-ups together. Once you do this, use matching to set a pleasing look for one shot. Select that shot as a “fingerprint.” Then select the rest of the shots in a group and match those to the fingerprinted reference shot. This automatically applies that grade to the rest. But it’s not like adding a simple LUT to a clip or copying and pasting settings. Each shot is separately analyzed and matched based on the differences within each shot, as sketched below.
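
Colourlab’s matching models are proprietary, but a classic, much simpler form of statistical shot matching – shifting and scaling each channel so its mean and spread line up with the reference – conveys the general idea of analyzing each shot rather than copying settings.

```python
import numpy as np

def match_to_reference(target, reference):
    """Reinhard-style channel-statistics transfer: a generic illustration,
    not Colourlab Ai's actual matching model."""
    out = np.empty_like(target)
    for c in range(3):
        t, r = target[..., c], reference[..., c]
        out[..., c] = (t - t.mean()) / (t.std() + 1e-6) * r.std() + r.mean()
    return np.clip(out, 0.0, 1.0)

fingerprint = np.random.rand(540, 960, 3).astype(np.float32)  # reference shot
shot = np.random.rand(540, 960, 3).astype(np.float32)         # shot to match
matched = match_to_reference(shot, fingerprint)
```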

When you’re done going through all of the shots, right-click any clip and “push” the scene (the timeline) back to Final Cut Pro. This action uses FCPXML data to send the dailies clips back to Final Cut, now with the added Colourlab Ai effect containing the color parameters on each source clip.

Remember that Final Cut Pro automatically adds a LUT to certain camera clips, such as ARRI Alexa files recorded in Log-C. When your clips come back in from Colourlab Ai, FCP may add a LUT on top of some camera files. You don’t want this, because Colourlab Ai has already made this adjustment with its IDT. If that happens, simply change the inspector LUT setting for that source file to “none.”

Lock the edit and create your final look

At this point you can edit with native camera clips that have a primary grade applied to them. No proxy media rendered by a DIT, hence a much faster turnaround and no extra media to take up drive space. Once you’ve locked the edit, it’s time for step two – the show look design for the final edit.

Drag the edited FCP project (the new sequence with the graded clips) to the Colourlab Ai icon on the dock to send it back to Colourlab Ai. All of the clips retain the color settings created earlier in the dailies grading session. However, this primary grade is just color metadata and can be altered. After any additional color tweaks, it’s time to move to Show Looks. Click through the show look examples and apply the one that fits best.

If you have multiple shots with the same look, apply a show look to the first one, copy it, and then apply that look to the rest of the selected clips. In most cases, you’ll have a different show look for various scenes within a film, but it’s also possible that a single show look would work through the entire film. So, experiment!

To modify a look or create your own, step into the Look Design tab (Pro version). Here you’ll find the Filmlab and Primary panels. Filmlab uses film stock emulation models and film’s subtractive color (CMY instead of RGB) for adjustments. This film emulation is among the most convincing I’ve seen. You can select from a wide range of branded negative and print film stocks and then make contrast, saturation, and CMY color adjustments. The Primary panel gives you even more control over RGBCMY for the lift, gamma, and gain regions. Custom adjustments may be saved to create your own show looks. Once you’ve set a show look for all of your shots, push the sequence back to Final Cut Pro. Voila – a fully graded show and no superfluous media created in the process.
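
The subtractive half of that workflow can be illustrated in a few lines: convert RGB to CMY (its complement), add or subtract dye density, and convert back. This is just the basic arithmetic of the concept, not Filmlab’s emulation models.

```python
import numpy as np

def add_density(rgb, cyan=0.0, magenta=0.0, yellow=0.0):
    """Work in CMY (1 - RGB) so that adding 'density' darkens the image the
    way film dye layers do. A sketch of the idea, not Filmlab's models."""
    cmy = 1.0 - np.clip(rgb, 0.0, 1.0)
    cmy += np.array([cyan, magenta, yellow])
    return 1.0 - np.clip(cmy, 0.0, 1.0)

frame = np.random.rand(1080, 1920, 3).astype(np.float32)
warmer = add_density(frame, cyan=-0.05, yellow=0.05)  # pull cyan, add yellow
```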

Some observations

Colourlab Ai is a revolutionary tool based on a film-style approach to grading. Artificial intelligence models speed up the process, but you are always in control. Thanks to the ease of operation, you can get great results without Resolve’s complex node structure. You can always augment a shot with FCP’s own color tools for a power window or a vignette.

The application currently lacks a traditional undo/redo stack. Therefore, use the version history to experiment with settings and looks. Each time you generate a new match, such as with Auto-Color or using a reference image, a new version is automatically stored. If you want to iterate without generating a new match – for example, when making color wheel adjustments – then manually add a version at any waypoint. The version history displays a thumbnail for each version. Step through them to pick the one that suits you best.

If you are new to color correction, then Colourlab Ai might look daunting at first glance. Nevertheless, it’s surprisingly easy to use. There are numerous tutorials available on the website, as well as directly accessible from the launch window. A 7-day free trial can be downloaded for you to dip your toes in the water. The artificial intelligence at the heart of Colourlab Ai will enable any editor to deliver professional grades.

©2022 Oliver Peters

Analogue Wayback, Ep. 20

D2 – recursive editing

Video production and post transitioned from analog to digital starting in the late 1980s. Sony introduced the component digital D1 videotape recorder, but it was too expensive for most post facilities. D1 decks were also harder to integrate into existing composite analog facilities. In 1988 Ampex and Sony introduced the D2 format – an uncompressed, composite digital VTR with built-in A/D and D/A conversion.

D2 had a successful commercial run of about 10 years. Along the way it competed for market share with Panasonic’s D3 (composite) and D5 (component) digital formats. D2 was eventually supplanted by Sony’s own mildly compressed Digital Betacam format. That format coincided with the widespread availability of serial digital routing, switching, and so on, successfully moving the industry into a digital production and post environment.

During D2’s heyday, these decks provided the ideal replacement for older 1″ VTRs, because they could be connected to existing analog routers, switchers, and patch bays. True digital editing and transfer was possible if you connected the decks using composite digital hardware and cabling (with large parallel connections, akin to old printer cables). Because of this bulk, there weren’t too many composite digital edit suites. Instead, digital I/O was reserved for direct VTR-to-VTR copies – i.e. a true digital clone. Some post houses touted their “digital” edit suites, but in reality their D2 VTRs were connected to the existing analog infrastructure, such as the popular Grass Valley Group 200 and 300 video switchers.

One unique feature of the D2 VTRs was “read before write”, also called “preread”. This was later adopted in the Digital Betacam decks, too. Preread enabled the deck to play a signal and immediately record that same signal back onto the same tape. If you passed the signal through a video switcher, you could add more elements, such as titles. There was no visual latency in using preread. While you did incur some image degradation by going through D/A and A/D conversions along the way, the generation loss was minor compared with 1″ technology. If you stayed within a reasonable number of generations, then there was no visible signal loss of any consequence.

Up until D2, performing a simple transition like a dissolve required three VTRs – the A and B playback sources, plus the recorder. If the two clips were on the same source tape, then one of the two clips had to be copied (i.e. dubbed) onto a second tape to enable the transition. If a lot of these transitions were likely, an editor might take the time before any session to copy the camera tape, creating a “B-roll dub” before ever starting. One hourlong camera tape would take an hour to copy – longer, if the camera originals were longer.

With D2 and preread, the B-roll dub process could be circumvented, thus shaving unproductive time off of the session. Plus, only two VTRs were required to make the same edit – a player and a recorder. The editor would record the A clip long in order to have a “handle” for the length of the dissolve. Then switch on preread and preview the edit. If the preview looked good, then record the dissolve to the incoming B clip, which was playing from the same camera tape. This was all recorded onto the same master videotape.

Beyond this basic edit solution, D2’s preread ushered in what I would call recursive editing techniques. It has a lot of similarities with the sound-on-sound audio recording pioneered by the legendary Les Paul. For example, television show deliverables often require the master plus a “textless” master (no credits or titles). With D2, the editor could assemble the clean, textless master of the show. Next make a digital clone of that tape. Then go back on one of the two and use the preread function to add titles over the existing video. Another example would be simple graphic composites, like floating video boxes over a background image or a simple quad split. Simply build up all layers with preread, one at a time, in successive edit passes recorded onto the same tape.

The downside was that if you made a mistake, you had to start over again. There was no undo. However, by this time linear edit controllers were pretty sophisticated and often featured complex integrations with video switchers and digital effects devices. This was especially true in an online bay made up of all Sony hardware. If you did make a mistake, you could simply start over using the edit controller’s auto-assembly function to automatically re-edit the events up to the point of the mistake. Not as good as modern software’s undo feature, but usually quite painless.

D2 held an important place in video post – not only as the mainstream beginning of digital editing, but also for the creative options it inspired in editors.

©2022 Oliver Peters

Analogue Wayback, Ep. 19

Garage bands before the boy bands

As an editor, I’ve enjoyed the many music-oriented video productions I’ve worked on. In fact, one of my first feature films was a concert film highlighting many top Reggae artists. Along the way, I’ve cut numerous jazz concerts for PBS, along with various videos for folks like Jimmy Buffett and the Bob Marley Foundation.

We often think about the projects that “got away” or never happened. For me, one of those was a documentary about the “garage band” acts of central Florida during the 1960s. These were popular local and regional acts with an eye towards stardom, but who never became household names, like Elvis or The Beatles. Central Florida was a hotbed for such acts back then, in the same way that San Francisco, Memphis, or Seattle have been during key moments in rock ‘n roll history.

For much of the early rock ‘n roll era, music was a vertically integrated business. Artist management, booking, recording studios, and marketing/promotion/distribution were all handled by the same company. The money was made in booking performances more so than record sales.

Records, especially 45 RPM “singles,” were produced in order to promote the band. Singles were sent for free to radio stations in hopes that they would be placed into regular rotation by the station. That airplay would familiarize listeners/fans with the bands and their music. While purchasing the records was a goal, the bigger aim was name recognition, so that when a band was booked for a local event (dance, concert, youth club appearance, tour date) the local fans would buy tickets and show up to the event. Naturally some artists broke out in a big way, which meant even more money in record sales, as well as touring.

Record labels, recording studios, and talent booking services – whether the same company or separate entities – enjoyed a very symbiotic relationship. Much of this is chronicled in a mini-doc I cut for the Memphis Rock ‘n Soul Museum. It highlighted studios like Sun, Stax, and Hi and their role in the birth of rock ‘n roll and soul music.

In the central Florida scene, one such company was Bee Jay, started by musician/entrepreneur Eric Schabacker. Bee Jay originally encompassed a booking service and eventually a highly regarded recording studio responsible for many local acts. Many artists passed through those studio doors, but one of the biggest acts to record there was probably Molly Hatchet. I got to know Schabacker when the post facility I was with acquired the Bee Jay Studios facility.

Years later, Schabacker approached me with an interesting project – a documentary about the local garage bands of the 60s. In addition to a series of interviews with living band members, post for the documentary would also involve the restoration of several proto-music videos. Bee Jay had videotaped promotional videos for 13 of the bands back in the day. While Schabacker handled the recording of the interviews, I tackled the music videos.

The original videos were recorded using a rudimentary black-and-white production system and captured onto half-inch open reel videotape. Unfortunately, the video tubes in the cameras back then didn’t always handle bright outdoor light well and the video switcher did not feature clean vertical interval switching. The result was a series of recordings in which video levels fluctuated and camera cuts often glitched. There were sections in the recordings where the tape machine lost servo lock during recording. The audio was not recorded live. Instead, the bands lip-synced to playback of their song recordings, which was captured in sync with the video. These old videos were transferred to DV25 QuickTime files, which formed my starting point.

Step one was to get clean audio. The bands’ tunes had been recorded and mixed at Bee Jay Studios at the time into a 13-song LP that was used for promotion to book those bands. However, at this point over three decades later, the master recordings were no longer available. But Schabacker did have pristine vinyl LPs from those sessions. These were turned over to local audio legend and renowned mastering engineer, Bob Katz. In turn, he took those versions and created remastered files for my use.

Now that I had good sound, my task was to take the video – warts and all – and rebuild it in sync with the song tracks, clean up the video, get rid of any damage and glitches, and in general end up with a usable final video for each song. Final Cut Pro (legacy) was the tool of choice at that time. Much of the “restoration” involved the slight slowing or speeding up of shots to resync the files – shot by shot. I also had to repeat and slo-mo some shots for fit-and-fill, since frames would be lost as glitchy camera cuts and other disturbances were removed. In the end, I rebuilt all 13 into a presentable form.

While that was a labor of love, the downside was that the documentary never came to be. All of these bands had recorded great-sounding covers (such as Solitary Man), but no originals. Unfortunately, it would have been a nightmare and quite costly to clear the music rights for these clips if used in the documentary. A shame, but that’s life in the filmmaking world.

None of these bands made it big, but in subsequent years, bands of another era like *NSYNC and the Backstreet Boys did. And they ushered in a new boy band phenomenon, which carries on to this day in the form of K-pop, among other styles.

©2022 Oliver Peters

Analogue Wayback, Ep. 18

Connections Redux

In 1993 I worked on a corporate image short film for AT&T entitled Connections: AT&T’s Vision of the Future. I wrote about this in a 2010 blog post, but I thought it was a good topic to revisit in the context of this Analogue Wayback series. Next year will be 30 years since its release, which makes it a good time to compare these futurists’ ideas with what was actually developed. (The full film can be viewed here on YouTube.)

The inspiration for the film came from AT&T exec Henry Bassman. It was designed as a vision piece to be used in various public and investor relations endeavors. The concepts shown in the film were based on the ideas of a number of theorists working with AT&T’s labs and grounded in actual technology that was being studied and developed there. The film’s concept was to extrapolate those ideas 20 years into the future and show actual productization that might come to be. Henry Bassman and director Robert Wiemer wove these ideas into the fictional narrative of this 15-minute short film. Bassman discussed Connections: AT&T’s Vision of the Future in this 2007 interview with the Paleo-Future blog.

The production was filmed in the Universal Studios Florida soundstages on 35mm and posted at Century III. We transferred the film to Sony D2 composite digital tape using our Rank Cintel Mark III/DaVinci-equipped telecine suite. The offline edit was handled with an Ediflex system and the online conform/finishing edit done in our online edit bays (CMX 3600 edit system, Grass Valley 300 switcher with Kaleidoscope DVE, and D2 mastering). My role was the online edit, along with a number of standard visual effects, like screen inserts and basic composites. The more advanced 2D and 3D visual effects were handled by our designers.

While the film might certainly seem quaint to modern eyes, the general concepts and the quality of the visual effects were in keeping with other productions of that era, such as Star Trek: The Next Generation – of course, without the fantasy, sci-fi component. Remember that the internet was still young, no iPhone existed, and most of today’s commonplace technology simply didn’t exist outside of the lab. Naturally, as with any of these past looks into the future, the way that theoretical concepts morph into real technology is never exactly the same as depicted, nor as seamless in operation. But these were pretty darn close.

I covered much of the technology and those concepts in my 2010 post, but it’s worth taking a new look at the ideas shown:

Simultaneous Facetime or Zoom-like conversations

Real-time captioning with live foreign language translation

Seat back airline entertainment systems with communications capabilities

16×9 displays

Foldable tablets

Tablet cameras with augmented reality

A form of the metaverse with avatars and Oculus-style interfaces

Noise-cancelling communication area

Large flat-panel TV displays with computer interfaces

Computer intelligent assistants

Online shopping with augmented reality

Online, computer-assisted learning in classrooms

Super-thin computers

Automotive communications/media electronics

One can certainly point out flaws when viewed through the modern lens. Plus, since this is an AT&T piece, it focuses on some of their ideas, like active phone booths and the video phone. Not to mention some obvious misses, like not really seeing the advent of the modern smart phone in a clear way. Nevertheless, it’s interesting to see how close so much of this is. It makes you wonder how we will look back on today 20 years from now.

©2022 Oliver Peters

Will DaVinci Resolve 18 get you to switch?

DaVinci Resolve has long been admired by users of other editing applications because of the pace of Blackmagic Design’s development team. Many have considered a switch to Resolve. Since its announcement earlier this year, DaVinci Resolve fans and pros alike have been eagerly waiting for Resolve 18 to get out of public beta. It was recently released and I’ve been using it ever since for a range of color correction jobs.

DaVinci Resolve 18 is available in two versions: Resolve (free) or Resolve Studio (paid). These are free updates for existing customers. They can be downloaded/bought either from the Blackmagic Design website (Windows, Mac, Linux) or through the Apple Mac App Store (macOS only – Intel and M1). The free version of Resolve is missing only a few of the advanced features available in Resolve Studio. Due to App Store policies and sandboxing, there are also some differences between the Blackmagic and App Store installations. The Blackmagic website installations may be activated on up to two computers at the same time using a software activation code. The App Store versions will run on any Mac tied to your Apple ID.


A little DaVinci Resolve history

If you are new to DaVinci Resolve, then here’s a quick recap. The application is an amalgam of the intellectual property and assets acquired by Blackmagic Design over several years from three different companies: DaVinci Systems, eyeon (Fusion), and Fairlight Instruments. Blackmagic Design built upon the core of DaVinci Resolve to develop an all-in-one, post production solution. The intent is to encompass an end-to-end workflow that integrates the specialized tasks of editing, color grading, visual effects, and post production sound all within a single application.

The interface character and toolset tied to each of these tasks are preserved using a page-style, modal user interface. In effect, you have separate tools, tied to a common media engine, which operate under the umbrella of a single application. Some pages are fluidly interoperable (like Edit and Color) and others aren’t. For example, color nodes applied to clips in the Color page do not appear as nodes within the Fusion page. Color adjustments made to clips in a Fusion composition need to be done with Fusion’s separate color tools.

Blackmagic has expanded Resolve’s editing features – so much so that it’s a viable competitor to Avid Media Composer, Apple Final Cut Pro, and/or Adobe Premiere Pro. Resolve sports two editing modes: the Cut page (a Final Cut Pro-style interface for fast assembly editing) and the Edit page (a traditional track-based interface). The best way to work in Resolve is to adhere to its sequential, “left to right” workflow – just like the pages/modes are oriented. Start by ingesting in the Media page and then work your way through the tasks/pages until it’s time to export using the Deliver page.

Blackmagic Design offers a range of optional hardware panels for Resolve, including bespoke editing keyboards, color correction panels, and modular control surface configurations for Fairlight (sound mixing). Of course, there’s also Blackmagic’s UltraStudio, Intensity Pro, and DeckLink I/O hardware.

A new collaboration model through Blackmagic Cloud

The biggest news is that DaVinci Resolve 18 was redesigned for multi-user collaboration. Resolve projects are usually stored in a database on your local computer or a local drive, rather than as separate binary project files. Sharing projects in a multi-user environment requires a separate database server, which isn’t designed for remote editing. To simplify this and address remote work, Blackmagic Design established and hosts the new Blackmagic Cloud service.

As I touched on in my Cloud Store Mini review, anyone may sign up for a free Blackmagic Cloud account. When ready, the user creates a Library (database) on Blackmagic Cloud from within the Resolve UI. That user is the “owner” of the Library, which can contain multiple projects. The owner pays $5/month for each Library hosted on Blackmagic Cloud.

The Library owner can share a project with any other registered Blackmagic Cloud user. This collaboration model is similar to working in Media Composer and is based on bin locking. The first user to open a bin has read/write permission to that bin and any timelines contained in it. Other users opening the same timeline operate with read-only permission. Changes made by the user with write permission can then be updated by the read-only users on their systems.
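
The bin-locking rule is easy to picture in code. The sketch below is illustrative only – not Blackmagic’s implementation – but it captures the behavior: the first user to open a bin holds write access until they release it, and everyone else reads.

```python
# Illustration of the bin-locking rule, not Blackmagic's implementation:
# the first user to open a bin gets read/write; later users get read-only
# until the lock holder releases the bin.

class Bin:
    def __init__(self, name: str):
        self.name = name
        self.lock_holder = None  # user currently holding write access

    def open(self, user: str) -> str:
        if self.lock_holder in (None, user):
            self.lock_holder = user
            return "read/write"
        return "read-only"

    def close(self, user: str) -> None:
        if self.lock_holder == user:
            self.lock_holder = None  # next opener can take the write lock

dailies = Bin("Dailies")
print(dailies.open("editor_a"))  # read/write
print(dailies.open("editor_b"))  # read-only until editor_a closes the bin
```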

Blackmagic Design only hosts the Library/project files and not any media, which stays local for each collaborator. The media syncing workflow is addressed through features of the Cloud Store storage products (see my review). Collaboration via Blackmagic Cloud and the storage products are independent of each other – you can use either without needing the other. However, since Blackmagic Cloud is hosted “in the cloud,” you do need an internet connection.

There is some latency between the time a change is made by one user and when it’s updated on the other users’ machines. In my tests, the collaborator needs to relink to the local media each time a shared project is accessed again. You can also move a project from Cloud back to your local computer as needed.

What else is new in DaVinci Resolve 18?

Aside from the new collaboration tools, DaVinci Resolve 18 also features a range of enhancements. Resolve 17 already introduced quite a few new features, which have been expanded upon in Resolve 18. The first of these is a new, simplified proxy workflow using the “prefer proxies” model. Native media handling has always been a strength of Resolve, especially with ProRes or Blackmagic RAW (BRAW) files. (Sorry, no support for Apple ProRes RAW.) But file sizes, codecs, and your hardware limitations can impede efficient editing. Therefore, working with proxy files may be the better option on some projects. When you are ready to deliver, then switch back to the camera originals for the final output.

The website installer for DaVinci Resolve Studio 18 includes the new Blackmagic Proxy Generator application. This automatically creates H.264, H.265, or ProRes proxy files using a watch folder. However, you can also create proxies internally from Resolve without using this app, or externally using Apple Compressor or Adobe Media Encoder. The trick is that proxy files must have matching names, lengths, timecode values, and audio channel configurations.

Proxy files should be rendered into a subfolder called “Proxy” located within each folder of original camera files. (Resolve and/or the Proxy Generator application do this automatically.) Then Resolve’s intelligent media management automatically detects and attaches the proxies to the original file. This makes linking easy and allows you to automatically toggle between the proxy and the original files.
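
If you’d rather script proxy generation externally, the convention is simple enough to automate. Here’s a sketch using ffmpeg (assuming it’s installed): render half-resolution ProRes Proxy files into a “Proxy” subfolder next to the originals, keeping filenames and audio layout intact so Resolve can auto-attach them. The source path is hypothetical.

```python
import subprocess
from pathlib import Path

SOURCE_DIR = Path("/Volumes/Media/A001")  # hypothetical camera folder

for clip in SOURCE_DIR.glob("*.mov"):
    proxy_dir = clip.parent / "Proxy"      # the subfolder Resolve looks for
    proxy_dir.mkdir(exist_ok=True)
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "0",  # ProRes 422 Proxy profile
        "-vf", "scale=iw/2:ih/2",                # half resolution
        "-c:a", "copy",                          # preserve audio channels
        str(proxy_dir / clip.name),              # same filename as original
    ], check=True)
```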

Regarding other enhancements, the Color page didn’t see any huge new features, since tools like the Color Warper and HDR wheels were added in Resolve 17. However, there were some new items, including object replacement and enhanced tracking. But I didn’t find the results to be as good as Adobe’s Content-Aware Fill techniques.

Two additions worth mentioning are the Automatic Depth Map and Resolve FX Beauty effect. The beauty effect is a subtle skin smoothing tool. It’s nice, but quite frankly, too subtle. My preference in this type of tool would be Digital Anarchy’s Beauty Box or Boris FX’s Beauty Studio. However, Resolve does include other similar tools, like Face Refinement, where you have more control.

Automatic Depth Map is more of a marquee feature. This is a pretty sophisticated process – analyzing depth separation in a moving image without the benefit of any lens metadata. It shows up as a Resolve FX in the Edit, Fusion, and Color pages. Don’t use it in the Edit page, because you can’t do anything with it there. In the Color page, rather than apply it to a node, drag the effect into the node tree, where it creates its own node.

After a brief clip analysis, the tool generates a mask, which you can use as a qualifier to isolate the foreground and background. Bear in mind this is for mild grading differences. Even though you might think of this for blurring a background, don’t do it! The mask is relatively broad. If you try to tighten the mask and use it to blur a background, you’ll get a result that looks like a Zoom call background. Instead, use it to subtly lighten or darken the foreground versus the background within a shot. Remember, the shot is moving, which can often lead to some chatter on the edges of the mask. So you’ll have to play with it to get the best result. Playback performance at Better Quality was poor on a 2017 iMac Pro. Use Faster while working and then switch to Better when you are ready to export or render.

Fusion

Complex visual effects and compositing are best done in the Fusion page. Fusion is both a component of Resolve, as well as a separate application offered by Blackmagic Design. It uses a node-based interface, but these nodes are separate and unrelated to the nodes in the Color page. Fusion features advanced tracking, particle effects, and a true 3D workspace that can work with 3D models. If you have any stereoscopic projects, then Fusion is the tool to use. The news for Fusion and the standalone Fusion Studio 18 application in this update focuses on GPU acceleration.

Before the acquisition by Blackmagic Design, eyeon offered several integrations of Fusion with NLEs like DPS Velocity and Avid Media Composer. The approach within Resolve is very similar to those – send a clip to Fusion for the effect, work with it inside the Fusion UI, and then it’s updated on the timeline as a Fusion clip. This is not unlike the Dynamic Link connection between Premiere Pro and After Effects, except that it all happens inside the same piece of software.

If you are used to working with a layer-style graphics application, like After Effects, Motion, or maybe HitFilm, then Fusion is going to feel foreign. It is a high-end visual effects tool, but might feel cumbersome to some when doing standard motion graphics. Yet for visual effects, the node-based approach is actually superior. There are plenty of good Fusion tutorials for any user ready to learn more about its visual effects power.

There are a few things to be aware of with Fusion. The image in the Fusion viewer and the output through UltraStudio to a monitor will be dark, as compared with that same image on the Edit page. Apparently this has been an ongoing user complaint and I have yet to find a color management setting that definitively solves this issue. There is also no way to “decompose” or “break apart” a Fusion composition on the timeline. You can reset the clip to a Fusion default status, but you cannot revert the timeline clip back to that camera file without it being part of a Fusion composition. Therefore, the best workaround is to copy the clip to a higher track before sending it to Fusion. That way you have both the Fusion composition and the original clip on the timeline.

In addition to visual effects, Fusion templates are also used for animated titles. These can be dropped onto a track in the Edit page and then modified in the inspector or the Fusion page. These Fusion titles function a lot like Apple’s Motion templates being used in Final Cut Pro.

Fairlight

Fairlight Instruments started with a popular digital audio workstation (Fairlight CMI) at the dawn of digital audio. After Blackmagic’s acquisition, the software portion of Fairlight was reimagined as a software module for audio post built into DaVinci Resolve. The Fairlight hardware and control surfaces were modularized. You can definitely run Fairlight in Resolve without any extra hardware. However, you can improve real-time performance on mixes with heavy track counts by adding the Fairlight Audio Core accelerator card. You can also configure one or more Blackmagic control surfaces into a large mixing console.

Taken as a whole, this makes the Fairlight ecosystem a very scalable product line in its own right that can appeal to audio post engineers and other audio production professionals. In other words, you can use the Fairlight portion of Resolve without ever using any of the video-centric pages. In that way, Resolve with Fairlight competes with Adobe Audition, Avid Pro Tools, Apple Logic Pro, and others. In fact, Fairlight is still the only professional DAW that’s actually integrated into an NLE.

Fairlight is designed as a DAW for broadcast and film with meter calibration based on broadcast standards. It comes with a free library of sound effects that can be downloaded from Blackmagic Design. The Fairlight page also includes an ADR workflow. DaVinci Resolve 18 expanded the Fairlight toolset. There’s new compatibility for FlexBus audio busing/routing with legacy projects. A lot of work has been put into Dolby Atmos support, including a binaural renderer, and an audio Space view of objects in relation to the room in 3D space.

On the other hand, if you are into music creation, then Fairlight lacks software instruments and music-specific plug-ins, like amp emulation. The MIDI support is focused on sound design. A musician would likely gravitate towards Logic Pro, Cubase, Luna, or Ableton Live. Nevertheless, Fairlight is still quite capable as a DAW for music mixes. Each track/fader integrates a channel strip for effects, plus built-in EQ and compression. Resolve comes with its own complement of Fairlight FX plug-ins, plus it supports third-party AU/VST plug-ins.

I decided to test that concept using some of the mixes from the myLEWITT music sessions. I stacked LEWITT’s multitrack recordings onto a blank Fairlight timeline, which automatically created new mono or stereo tracks, based on the file. I was able to add new busses (stem or submaster channels) for each instrument group and then route those busses to the output. It was easy to add effects and control levels by clip, by track, or by submaster.

Fairlight might not be my first choice if I were a music mixer, but I could easily produce a good mix with it. The result is a transparent, modern sound. If you prefer vintage, analog-style coloration, then you’ll need to add third-party plug-ins for that. Whether or not Fairlight fits the bill for music will depend on your taste as a mixer.

Conclusion

Once again, Blackmagic Design has added more power in the DaVinci Resolve 18 release. Going back to the start of this post – is this the version that will finally cause a paradigm shift away from the leading editing applications? In my opinion, that’s doubtful. As good as it is, the core editing model is probably not compelling enough to coax the majority of loyal users away from their favorite software. However, that doesn’t mean those same users won’t tap into some of Resolve’s tools for a variety of tasks.

There will undoubtedly be people who shift away from Premiere Pro or Final Cut Pro and over to DaVinci Resolve. Maybe it’s for Resolve’s many features. Maybe they’re done with subscriptions. Maybe they no longer feel that Apple is serious. Whatever the reason, Resolve is a highly capable editing application. In fact, during the first quarter of this year I graded and finished a feature film that had been cut entirely in Resolve 17.

Software choices can be highly personal and intertwined with workflow, muscle memory, and other factors. Making a change often takes a big push. I suspect that many Resolve editors are new to editing, often because they got a copy when they bought one of the Blackmagic Design cameras. Resolve just happens to be the best application for editing BRAW files and that combo can attract new users.

DaVinci Resolve 18 is a versatile, yet very complex application. Even experienced users don’t tap into the bulk of what it offers. My advice to any new user is to start with a simple project. Begin in the Cut or Edit page, get comfortable, and ignore everything else. Then learn more over time as you expand the projects you work on and begin to master more of the toolkit. If you really want to dive into DaVinci Resolve, then check out the many free and paid tutorials from Blackmagic Design, Mixing Light, and Ripple Training. Resolve is one application where any user, regardless of experience, will benefit from training, even if it’s only a refresher.

I’ve embedded a lot of links throughout this post, so I hope you’ll take the time to check them out. They cover some of the enhancements that were introduced in earlier versions, the history of DaVinci Resolve, and links to the new features of DaVinci Resolve 18. Enjoy!

©2022 Oliver Peters