Project organization

Leading into the new year, it’s time to take a fresh look at a perennial subject. Whether you work as a solo editor or as part of a team, having a plan for organizing your projects – along with a workflow for moving media through your system – will make it easy to find and restore material when it’s needed at a future date. For my day-to-day workflow, I rely on five standard applications: Post Haste, Hedge, Better Rename, DiskCatalogMaker, and Kyno. I work on Macs, but there are Windows versions or alternatives for each.

Proper project organization. Regardless of your NLE, it’s a good idea to create a project “silo” for each job on your hard drive, RAID, or networked storage (NAS). That’s a main folder for the job, with subfolders for the edit project files, footage, audio, graphics, documents, exports, etc. I use Post Haste to create a new set of project folders for each new project.

Post Haste uses default or custom templates that can include Adobe project files. This provides a common starting point for each new project based on a template that I’ve created. Using this template, Post Haste generates a new project folder with common subfolders. A template Premiere Pro project file with my custom bin structure is contained within the Post Haste template. When each new set of folders is created, this Premiere file is also copied.

In order to track productions, each job is assigned a number, which becomes part of the name structure assigned within Post Haste. The same name is applied to the Premiere Pro project file. Typically, the master folder (and Premiere project) for a new job created through Post Haste will be labelled according to this schema: 9999_CLIENT_PROJECT_DATE.
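
Post Haste handles this through its template interface, but the underlying idea is simple enough to script. Here’s a minimal Python sketch of the same concept – the subfolder names are just examples, not Post Haste’s actual output:

```python
from pathlib import Path

# Example subfolder set; tailor this to match your own template.
SUBFOLDERS = ["PROJECT FILES", "FOOTAGE", "AUDIO", "GRAPHICS", "DOCUMENTS", "EXPORTS"]

def create_job_silo(root: str, job: str, client: str, project: str, date: str) -> Path:
    """Create a 9999_CLIENT_PROJECT_DATE master folder with standard subfolders."""
    master = Path(root) / f"{job}_{client}_{project}_{date}"
    for sub in SUBFOLDERS:
        (master / sub).mkdir(parents=True, exist_ok=True)
    return master

# Example: create_job_silo("/Volumes/RAID", "1042", "ACME", "SIZZLE", "010421")
```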

Dealing with source footage, aka rushes or dailies. The first thing you have to deal with on a new project is the source media. Most of the location shoots for my projects come back to me with around 1TB of media for a day’s worth of filming. That’s often from two or three cameras, recorded in a variety of codecs at 4K/UHD resolution and 23.98fps. Someone on location (DIT, producer, DP, other) has copied the camera cards to working SSDs, which will be reused on later productions. Hedge is used to copy the cards, in order to provide checksum copy verification.
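
Hedge performs verified copies for you, but the principle is worth understanding: read back what was written and compare checksums. A bare-bones Python illustration (SHA-256 here purely for familiarity; dedicated offload tools favor faster media-oriented checksums):

```python
import hashlib
import shutil
from pathlib import Path

def checksum(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in 1MB chunks so large camera files never load fully into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verified_copy(src: Path, dst: Path) -> None:
    """Copy a file, then re-read both sides to confirm the copy is bit-identical."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    if checksum(src) != checksum(dst):
        raise IOError(f"Checksum mismatch: {src} -> {dst}")
```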

I receive those SSDs and not the camera cards. The first step is to copy that media “as is” into the source footage subfolder for that project on the editing RAID or NAS. Once my copy is complete, those same SSDs are separately copied “as is” via Hedge to one or more Western Digital or Seagate portable drives. Theoretically, this is for a deep archive, which hopefully will never be needed. Once we have at least two copies of the media, these working SSDs can be reformatted for the next production. The back-up drives should be stored in a safe location on-premises or better yet, offsite.

Since video cameras don’t use a standard folder structure on the cards, the next step is to reorganize the copied media in the footage folder according to date, camera, and roll. This means ripping media files out of their various camera subfolders. Within the footage folder, my subfolder hierarchy becomes shoot date (MMDDYY), then camera (A-CAM, B-CAM, etc), and then camera roll (A001, A002, etc). Media is located within the roll subfolder. Double-system audio recordings go into a SOUND folder for that date and follow this same hierarchy for sound rolls. When this reorganization is complete, I delete the leftover camera subfolders, such as Private, DCIM, etc.
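
No single script fits every camera’s card layout, but as an illustration, here’s a hedged Python sketch that sorts files into that date/camera/roll hierarchy, assuming each card copy already sits inside a folder named for its roll (A001, B002, and so on):

```python
import re
import shutil
from pathlib import Path

ROLL = re.compile(r"^([A-Z])(\d{3})$")  # matches roll folder names like A001, B002

def organize_footage(footage: Path, shoot_date: str) -> None:
    """Move files into FOOTAGE/MMDDYY/X-CAM/X### and leave unknowns untouched."""
    for f in list(footage.rglob("*")):
        if not f.is_file():
            continue
        roll = next((p.name for p in f.parents if ROLL.match(p.name)), None)
        if roll is None:
            continue  # can't classify this file – sort it by hand
        dest = footage / shoot_date / f"{roll[0]}-CAM" / roll / f.name
        if dest == f:
            continue  # already organized
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(dest))
```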

It may be necessary to rename or append prefixes to file names in order to end up with completely unique file names within this project. That’s where Better Rename comes in. This is a Finder-level batch renaming tool. If a camera generates default names on a card, such as IMG_001, IMG_002 and so on, then renaming becomes essential. I try to preserve the original name in order to be able to trace the file back to back-up drives if I absolutely have to. Therefore, it’s best to append a prefix. I base this on project, date, camera, and roll. As an example, if IMG_001 was shot as part of the Bahamas project on December 20th, recorded by E-camera on roll seven, then the appended file would be named BAH1220E07_IMG_001.
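
Better Rename does this interactively, but the same append-a-prefix logic is trivial to script. A small Python sketch matching the example above:

```python
from pathlib import Path

def append_prefix(folder: Path, project: str, date: str, cam: str, roll: int) -> None:
    """Prefix every file with project+date+camera+roll, keeping the original name.
    e.g. IMG_001.MOV becomes BAH1220E07_IMG_001.MOV"""
    prefix = f"{project}{date}{cam}{roll:02d}_"
    for f in sorted(folder.iterdir()):
        if f.is_file() and not f.name.startswith(prefix):
            f.rename(f.with_name(prefix + f.name))

# append_prefix(Path("FOOTAGE/122020/E-CAM/E007"), "BAH", "1220", "E", 7)
```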

Some camera codecs, like those used by drones and GoPros, are a beast for many NLEs to deal with. Proxy media is one way around this, or you can transcode only the offending files. If you choose to transcode, then Compressor, Adobe Media Encoder, or Resolve are the go-to applications. Transcode at the native frame size and resolution into an optimized codec, like ProRes. Maintain log color spaces, because these optimized files become the new “camera” files in your edit. I will add separate folders for ORIG (camera original media) and PRORES (my transcoded, optimized files) within each camera roll folder. Only the ProRes media is imported into the NLE for editing.
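
If you’d rather batch this on the command line than in one of those applications, ffmpeg can do the same job. A minimal sketch, assuming ffmpeg is installed (profile 3 selects ProRes 422 HQ in ffmpeg’s prores_ks encoder; no scaling or color transform is applied, so log footage stays log):

```python
import subprocess
from pathlib import Path

def transcode_to_prores(src: Path, prores_dir: Path) -> Path:
    """Transcode one clip to ProRes 422 HQ at its native size and frame rate."""
    prores_dir.mkdir(parents=True, exist_ok=True)
    out = prores_dir / (src.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-c:v", "prores_ks", "-profile:v", "3",  # 3 = ProRes 422 HQ
        "-c:a", "pcm_s16le",                     # uncompressed PCM audio
        str(out),
    ], check=True)
    return out
```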

Back-up! Do not proceed to GO! Now that you’ve spent all of this effort reorganizing, renaming, and transcoding media, you first want to back up the files before starting to edit. I like to back up media to raw, removable, enterprise-grade HGST or Seagate hard drives. Over the years, I’ve accumulated a variety of drive sizes, ranging from 2TB up to the current 8TB. Larger capacities are available, but 8TB is a cost-effective and manageable size. When placed into a Thunderbolt or USB drive dock, these function like any other local hard drive.

When you’ve completed dealing with the media from the shoot, simply copy the whole job folder to a drive. You can store multiple projects on the same drive, depending on the drive’s capacity. This is an easy overnight process with most jobs, so it won’t impact your edit time. The point is to back up the newly organized version of your raw media. Once completed, you will have three copies of the source footage – the “as is” copy, the version on your RAID or NAS, and this back-up on the raw drive. After the project has been completed and delivered, load up the back-up drive and copy everything else from this job to that drive. This provides a “clone” of the complete job on both your RAID/NAS and the back-up drive.

In order to keep these back-up drives straight, you’ll need a catalog. At home, I’ve accumulated 12 drives thus far; at work, over 200. I’ve found the easiest way to deal with this is an application called DiskCatalogMaker. It scans a drive and stores the file information in a catalog document. Each drive entry mimics what you see in the Finder, including folders, files, sizes, dates, and so on. The catalog document is searchable, which is why job numbers become important. It’s a good idea to periodically mount and spin up these drives to maintain reliability – once a year at a minimum.
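
DiskCatalogMaker is the polished way to do this, but the concept is just a filesystem walk written to a searchable document. A rough Python stand-in that catalogs one drive to CSV:

```python
import csv
import os
from datetime import datetime
from pathlib import Path

def catalog_drive(mount: Path, out_csv: Path) -> None:
    """Record every file on a mounted drive: relative path, size, modified date."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "bytes", "modified"])
        for root, _, files in os.walk(mount):
            for name in files:
                p = Path(root) / name
                st = p.stat()
                writer.writerow([
                    str(p.relative_to(mount)),
                    st.st_size,
                    datetime.fromtimestamp(st.st_mtime).isoformat(timespec="seconds"),
                ])

# catalog_drive(Path("/Volumes/BACKUP_012"), Path("catalogs/BACKUP_012.csv"))
```

Because job numbers lead every folder name, a plain-text search of these catalogs quickly tells you which drive holds which job.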

If you have sufficient capacity on your RAID or NAS, then you don’t want to immediately delete jobs and media when the work is done. In our case, once a job has been fully backed up, the job folder is moved into a BACKED UP folder on the NAS. This way we know when a job has been backed up, yet it is still easily retrieved should the client come back with revisions. Plus, you still have three total copies of the source media.

Other back-ups. I’ve talked a lot about backing up camera media, but what about other files? Generally, files like graphics are supplied, so copies already exist elsewhere. Plus, they will get backed up on the raw drive when the job is done.

I also use Dropbox for interim back-ups of project files. Since a Premiere Pro project file is light and doesn’t carry media, it’s easy to back up in the cloud. At work, at the end of each day, each editor copies in-progress Premiere files to a company Dropbox folder. The idea is that in the event of some catastrophe, you could get your project back from Dropbox and then use the backed up camera drives to rebuild an edit. In addition, we also export and copy Resolve projects to Dropbox, as well as the DiskCatalogMaker catalog documents.

Whenever possible, audio stems and textless masters are exported for each completed job. These are stored with the final masters. Often it’s easier to make revisions using these elements than to dive back into a complex job after it’s been deeply archived. Our NAS contains a separate top-level folder for all finished masters, in addition to the master subfolder within each project. When a production is done, the master file is copied into this other folder, resulting in two sets of the master files on the NAS. And by “master” I generally mean a final ProRes file along with a high-quality MP4 file. The MP4 is most often what the client will use as their “master,” since so much of our work these days is for the web. Therefore, both NAS locations hold a ProRes and an MP4. That’s in addition to the masters stored on the raw, back-up drive.

Final, Final revised, no really, this one is Final. Let’s address file naming conventions. Every editor knows the “danger” of calling something Final. Clients love to make changes until they no longer can. I work on projects that have running changes as adjustments are made for use in new presentations. Calling any of these “Final” never works. Broadcast commercials are usually assigned definitive ISCI codes, but that’s rarely the case with non-broadcast projects. The process that works for us is simply to use version numbers and dates. This makes sense and is what software developers use.

We use this convention: CLIENT_PROJECTNAME_VERSION_DATE_MODIFIER. As an example, if you are editing a McDonald’s Big Mac :60 commercial, then a final version might be labelled “MCD_Big Mac 60_v13_122620.” A slight change on that same day would become “MCD_Big Mac 60_v14_122620.” We use the “modifier” to designate variations from the norm. Our default master files are formatted as 1080p at 23.98 with stereo audio. So a variation exported as 4K/UHD or 720p or with a 5.1 surround mix would have the added suffix of “_4K” or “_720p” or “_51MIX.”
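
Since the convention is rigid, it’s easy to express as a tiny helper – a sketch using the examples above:

```python
from datetime import date

def master_name(client: str, project: str, version: int, when: date, modifier: str = "") -> str:
    """Build CLIENT_PROJECTNAME_VERSION_DATE with an optional modifier suffix."""
    name = f"{client}_{project}_v{version}_{when:%m%d%y}"
    return f"{name}_{modifier}" if modifier else name

# master_name("MCD", "Big Mac 60", 14, date(2020, 12, 26))        -> "MCD_Big Mac 60_v14_122620"
# master_name("MCD", "Big Mac 60", 14, date(2020, 12, 26), "4K")  -> "MCD_Big Mac 60_v14_122620_4K"
```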

Some projects go through many updates and it’s often hard to know when a client (working remotely) considers a version truly done. They are supposed to tell you that, but they often just don’t. You sort of know, because the changes stop coming and a presentation deadline has been met. Whenever that happens, we export a ProRes master file plus high-quality MP4 files. The client may come back a week later with some revisions. Then, new ProRes and MP4 files are generated. Since version numbers are maintained, the ProRes master files will also have different version numbers and dates and, therefore, you can differentiate one from the other. Both variations may be valid and in use by the client.

Asset management. The last piece of software that comes in handy for us is Kyno. This is a lightweight asset management tool that we use to scan and find media on our NAS. Our method of organization makes it relatively easy to find things just by working in the Finder. However, if you are looking for that one piece of footage and need to be able to identify it visually, then that’s where Kyno is helpful. It’s like Adobe Bridge on steroids. One can organize and sort using the usual database tools, but it also has a very cool “drill down” feature. If you want to browse media within a folder without stepping through a series of subfolders, simply enable “drill down” and you can directly browse all media that’s contained therein. Kyno also offers robust transcode and “send to” features designed with NLEs in mind. Need to prep media for an edit or create proxies? Kyno works as a simple alternative to the other options.

Hopefully this recap has provided some new workflow pointers for 2021. Good luck!

©2021 Oliver Peters

Simple Color Workflow in FCPX

On the heels of the previous post, I’d like to cover five simple steps to follow when performing basic color correction, aka “grading,” in Final Cut Pro X. Not every clip or project will need all of these steps, but apply the ones that fit.

Step 1. LUTs (color look-up tables)

There are technical and creative LUTs. Here we are talking only about technical camera LUTs that are useful when your footage was recorded in a log color space. These LUTs convert the clip from log to a display color space (REC 709 or other) and turn the clip’s appearance from flat to colorful. Each manufacturer offers specific LUTs for the profile of their camera models.

Some technical LUTs are already included with the default FCPX installation and can be accessed through the settings menu in the inspector. Others must be downloaded from the manufacturer or other sources and stored elsewhere on your system. If you don’t see an appropriate option in the inspector, then apply the Custom LUT effect and navigate to a matching LUT stored on your system.

Step 2. Balance Color

Next, apply the Balance Color effect for each clip. This will slightly expand the contrast of the clip and create an averaged color balance. This is useful for many, but not all clips. For instance, a clip shot during “golden hour” will have a warm, yellow-ish glow. You don’t want that to be balanced neutral. You have no control over the settings of the Balance Color process, other than to pick between Automatic and White Balance. Test and see when and where this works to your advantage.

Note that this works best for standard footage without a LUT or when the LUT was applied through the inspector menu. If the LUT was applied as a Custom LUT effect, then Balance Color will be applied ahead of the Custom LUT and may yield undesirable results.

Step 3. Color correction – color board, curves, or color wheels

This is where you do most of the correction to alter the appearance of the clip. Any or all of FCPX’s color correction tools are fine and the tool choice often depends on your own preference. For most clips it’s mainly a matter of brightening, expanding contrast, increasing or decreasing saturation, and shifting the hue offsets of lows (shadow area), midrange, and highlights. What you do here is entirely subjective, unless you are aiming for shot-matching, like two cameras in an interview. For most projects, subtlety is the key.

Step 4. Luma vs Sat

It’s easy to get carried away in Step 3. This is your chance to rein it back in. Apply the Hue/Sat Curves tool and select the Luma vs Sat curve. I described this process in the previous post. The objective is to roll off the saturation of the shadows and highlights, so that you retain pure blacks and whites at the extreme ends of the luminance range.
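
If it helps to visualize what that curve is doing, here’s a conceptual Python model – not FCPX’s math, just the shape of the idea: saturation passes through unchanged in the midtones and ramps to zero at the luma extremes.

```python
import numpy as np

def luma_vs_sat(luma: np.ndarray, sat: np.ndarray, knee: float = 0.15) -> np.ndarray:
    """Scale saturation by a curve that is 1.0 through the midtones and
    rolls off to 0 near luma 0.0 (shadows) and luma 1.0 (highlights)."""
    shadows = np.clip(luma / knee, 0.0, 1.0)             # ramp up out of the blacks
    highlights = np.clip((1.0 - luma) / knee, 0.0, 1.0)  # ramp down into the whites
    return sat * shadows * highlights
```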

Step 5. Broadcast Safe

If you deliver for broadcast TV or a streaming channel, your video must be legal. Different outlets have differing standards – some looser or stricter than others. To be safe, limit your luminance and chrominance levels by applying a Broadcast Safe effect. This is best applied to an adjustment layer added as a connected clip at the topmost level above the entire sequence. Final Cut Pro X does not come with an adjustment layer Motion template title, but there are plenty available for download.

Apply the Broadcast Safe effect to that adjustment layer clip. Make sure it’s set to the color space that matches your project (sequence) setting (typically Rec 709 for HD and 4K SDR videos). At its default, video will be clipped at 0 and 100 on the scopes. Move the amount slider to the right for more clipping when you need to meet more restrictive specs.
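
Conceptually, the default Broadcast Safe behavior is just a hard clamp on levels – something like this sketch, in IRE-style units, with the defaults standing in for the effect’s 0–100 range:

```python
import numpy as np

def broadcast_safe(levels_ire: np.ndarray, low: float = 0.0, high: float = 100.0) -> np.ndarray:
    """Clip levels to a legal range; tighten low/high to meet stricter specs."""
    return np.clip(levels_ire, low, high)
```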

These five steps are not the end-all/be-all of color correction/grading. They are merely a beginning guide to achieve quick and attractive grading using Final Cut Pro X. Test them out on your footage and see how to use them with your own workflows.

©2020 Oliver Peters

Drive – Postlab’s Virtual Storage Volume

Postlab is the only service designed for multi-editor, remote collaboration with Final Cut Pro X. It works whether you have a team collaborating on-premises within a facility or spread out at various locations around the globe. Since the initial launch, Hedge has also extended Postlab’s collaboration to Premiere Pro.

When using Postlab, projects containing Final Cut Pro X libraries or Premiere Pro project files are hosted on Hedge’s servers. But the media lives on local drives or shared storage and not “in the cloud.” When editors work remotely, media needs to be transferred to them by way of “sneakernet,” Hightail, WeTransfer, or other methods.

Hedge has now solved that media issue with the introduction of Drive, a virtual storage volume for media, documents, and other files. Postlab users can utilize the original workflow and continue with local media – or they can expand remote capabilities with the addition of Drive storage. Since it functions much like Dropbox, Drive can also be used by team members who aren’t actively engaged in editing. As a media volume, files on Drive are also accessible to Avid Media Composer and DaVinci Resolve editors.

Drive promises significantly better performance than a general business cloud service, because it has been fine-tuned for media. The ability to use Drive is included with each Postlab plan, but storage costs are a flat monthly rate based on the amount of storage you need. Unlike other cloud services, there are no hidden egress charges for downloads. If you only want to use Drive as a single user, then Hedge’s Postlab Solo or Pro plan would be the place to start.

How Drive works

Once Drive storage has been added to an account, each team member simply needs to connect to Drive from the Postlab interface. This mounts a Drive volume on the desktop just like any local hard drive. In addition, a cache file is stored at a designated location. Hedge recommends using a fast SSD or RAID for this cache file. NAS or SAN network volumes cannot be used.

After the initial setup, the operation is similar to Dropbox’s Smart Sync function. When an editor adds media to the local Drive volume, that media is uploaded to Hedge’s cloud storage. It will then sync to all other editors’ Drive volumes. Initially those copies of the media are only virtual. The first time a file is played by a remote team member, it is streamed from the cloud server. As it streams, it is also added to the local Drive cache. Every file that has been fully played is then stored locally within the cache for faster access in the future.
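
Conceptually (this is not Hedge’s code), Drive behaves like a read-through cache:

```python
from pathlib import Path
from typing import Callable

class ReadThroughCache:
    """Toy model of Drive's virtual files: the first read streams from the
    cloud and fills the local cache; every later read is served locally."""
    def __init__(self, cache_dir: Path, fetch: Callable[[str], bytes]):
        self.cache_dir = cache_dir  # Hedge recommends a fast SSD or RAID here
        self.fetch = fetch          # stands in for streaming from Hedge's servers

    def read(self, name: str) -> bytes:
        local = self.cache_dir / name
        if not local.exists():                     # still virtual: stream and cache
            local.parent.mkdir(parents=True, exist_ok=True)
            local.write_bytes(self.fetch(name))
        return local.read_bytes()                  # cached: instantaneous access
```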

Hedge feels that latency is as important as, or more important than, outright connection speed for a fluid editing experience. They recommend wired, rather than wi-fi, internet connections. However, I tested the system using wi-fi with office speeds of around 575Mbps down / 38Mbps up. This is a business connection and was fast enough to stream 720p MP4 and 1080p ProRes Proxy files with minimal hiccups on the initial streamed playback. Naturally, after it was locally cached, access was instantaneous.

From the editor’s point of view, virtual files still appear in the FCPX event browser as if local and the timeline is populated with clips. Files can also be imported or dragged in from Drive as if they are local. As you play the individual clips or the timeline from within FCPX or Premiere, the files become locally cached. All in all, the editing experience is very fluid.

In actual practice

The process works best with lightweight, low-res files and not large camera originals. That is possible, too, of course, but not very efficient. Drive and the Hedge servers support most common media files, but not a format like REDCODE raw. As before, each editor will need to have the same effects, LUTs, Motion templates, and fonts installed for proper collaboration.

I did run into a few issues, which may be related to the recent 10.4.9 Final Cut update. For example, the built-in proxy workflow is not very stable. I did get it to work. Original files were on a NAS volume (not Drive) and the generated proxies (H.264 or ProRes Proxy) were stored on the Drive volume of the main system. The remote editing system would only get the proxies, synced through Drive. In theory that should work, but it was hit or miss. When it worked, some LUTs, like the standard ARRI Log-C LUTs, were not applied on the remote system in proxy mode. Also the “used” range indicator lines for the event browser clips were present on the original system, but not the remote system. Other than these few quirks, everything was largely seamless.

My suggested workflow would be to generate editing proxies outside of the NLE and copy those to Drive. H.264 or ProRes Proxy files with audio configurations matching the original camera files work well. Treat these low-res files as original media and import them into Final Cut Pro X or Premiere Pro for editing. Once the edit is locked, go to the main system and transfer the final sequence to a local FCPX Library or Premiere Pro project for finishing. Relink that sequence to the original camera files for grading and delivery. Alternatively, you could export an FCPXML or XML file for a Resolve roundtrip.
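
For the proxy generation step, an ffmpeg pass per clip does the trick. A hedged sketch (profile 0 is ProRes Proxy in ffmpeg’s prores_ks encoder; the scale filter halves the frame size, while the audio is written as PCM with the original channel count preserved):

```python
import subprocess
from pathlib import Path

def make_proxy(src: Path, proxy_dir: Path) -> Path:
    """Create a half-resolution ProRes Proxy that will relink cleanly later."""
    proxy_dir.mkdir(parents=True, exist_ok=True)
    out = proxy_dir / (src.stem + ".mov")
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-vf", "scale=iw/2:ih/2",                # half-size picture for light proxies
        "-c:v", "prores_ks", "-profile:v", "0",  # 0 = ProRes Proxy
        "-c:a", "pcm_s16le",
        str(out),
    ], check=True)
    return out
```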

One very important point to know is that the entire Postlab workflow is designed around team members staying logged into the account. This maintains the local caches. It’s OK to quit the Postlab application, plus eject and reconnect the Drive volume. However, if you log out, those local caches for editing files and Drive media will be flushed. The next time you log back in, connection to Drive will need to be re-established, Drive information must be synced again, and clips within FCPX or Premiere Pro will have to be relinked. So stay logged in for the best experience.

Additional features

Thanks to the Postlab interface, Drive offers features not available for regular hard drives. For example, any folder within Drive can be bookmarked in Postlab. Simply click on a Bookmark to directly open that folder. The Drop Off feature lets you generate a URL with an expiration date for any Bookmarked folder. Send that link to any non-team member, such as an outside contributor or client, and they will be able to upload additional media or other files to Drive. Once uploaded to Hedge’s servers, those files show up in Drive within the folder and will be synced to all team members.

Hedge offers even more features, including Mail Drop, designed for projects with too much media to efficiently upload. Ship Hedge a drive to copy dailies straight into their servers. Pick Up is another feature still in development. When updated, you will be able to select files on Drive, generate a Pick Up link, and send that to your client for download.

Editing with Drive and Postlab makes remote collaboration nearly like working on-site. The Hedge team is dedicated to expanding these capabilities with more services and broader NLE support. Given the state of post this year, these products are at the right time and place.

Check out this Soho Editors masterclass in collaboration using Postlab and Drive.

Originally written for FCP.co.

©2020 Oliver Peters

Working with ACES in DaVinci Resolve

In the film days, a cinematographer had a good handle on what the final printed image would look like. The film stocks, development methods, and printing processes were regimented with specific guidelines and limited variations. In color television production, up through the early adoption of HD, video cameras likewise adhered to the standards of Rec. 601 (SD) and Rec. 709 (HD). The advent of the video colorist allowed for more creative looks derived in post. Nevertheless, video directors of photography could also rely on knowing that the image they were creating would translate faithfully throughout post-production.

As video moved deeper into “cinematic” images, raw recording and log encoding became the norm. Many cinematographers felt their control of the image slipping away, thanks to the preponderance of color science approaches and LUTs (color look-up tables) generated from a variety of sources and applied in post. As a result, the Academy Color Encoding System (ACES) was developed as a global standard for managing color workflows. It’s an open color standard and method of best practices created by filmmakers and color scientists under the auspices of the Science and Technology Council of the Academy of Motion Picture Arts and Sciences (AMPAS, aka “The Academy”). To dive into the nuances of ACES – complete with user guides – check out the information at ACEScentral.com.

The basics of how ACES works

Traditionally, Rec. 709 is the color space and gamma encoding standard that dictates your input, timeline, and exports for most television projects. Raw and log recordings are converted into Rec. 709 through color correction or LUTs. The color gamut is then limited to the Rec. 709 color space. Therefore, if you later try to convert a Rec. 709 ProResHQ 4:2:2 master file into full RGB, Rec. 2020, HDR, etc., then you are starting from an already-restricted range of color data. The bottom line is that this color space has been defined by the display technology – the television set.

ACES is its own color space designed to be independent of the display hardware. It features an ultra-wide color gamut that encompasses everything the human eye can see. It is larger than Rec. 709, Rec. 2020, P3, sRGB, and others. When you work in an ACES pipeline, ACES is an intermediate color space not intended for direct viewing. In other words, ACES is not dictated by current display technology. Files being brought into ACES and being exported for delivery from ACES pass through input and output device transforms. These are mathematical color space conversions.

For example, say you film with an ARRI Alexa, record LogC, and grade in a Rec. 709 pipeline. A LogC-to-Rec709 LUT will be applied to the clip to convert it to the Rec. 709 color space of the project. The ACES process is similar. When working in an ACES pipeline, instead of applying a LUT, I would apply an Input Device Transform (IDT) specific to the Alexa. This is equivalent to a camera profile for each camera manufacturer’s specific color science.

ACES requires one extra step, which is to define the target device on which this image will be displayed. If your output is intended to be viewed on television screens with a standard dynamic range, then an Output Device Transform (ODT) for Rec. 709 would be applied as the project’s color output setting. In short, the camera file is converted by the IDT into the ACES working color space, but is viewed on your calibrated display based on the ODT used. Under the hood, ACES preserves all of the color data available from the original image. In addition to IDTs and ODTs, ACES also provides for Look Modification Transforms (LMT). These are custom “look” files akin to various creative LUTs built for traditional Rec. 709 workflows.
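
Put another way, the pipeline is just function composition. A conceptual Python sketch (the function names are placeholders, not a real API):

```python
def aces_pipeline(camera_pixel, idt, grade, odt):
    """Order of operations in an ACES grade: the IDT and ODT are fixed
    mathematical transforms; only the grade in the middle is creative."""
    aces = idt(camera_pixel)   # e.g. ARRI LogC -> ACES working space
    graded = grade(aces)       # corrections (and any LMT) applied in ACES
    return odt(graded)         # ACES -> Rec. 709, Rec. 2020, P3, etc.
```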

ACES holds a lot of promise, but it is still a work-in-progress. If your daily post assignments don’t include major network or studio deliverables, then you might wonder what benefit ACES has for you. In that case, yes, continuing to stick with a Rec. 709 color pipeline will likely be fine for a while. But companies like Netflix are behind the ACES initiative and other media outlets are bound to follow. You may well find yourself grading a project that requires ACES deliverables at some point in the future.

There is no downside in adopting an ACES pipeline now for all of your Resolve Rec. 709 projects. Working in ACES does not mean you can instantly go from a grade using a Rec. 709 ODT to one with a Rec. 2020 ODT without an extra trim pass. However, ACES claims to make that trim pass easier than other methods.

The DaVinci Resolve ACES color pipeline

Resolve has earned a position of stature within the industry. With its low price point, it also offers the most complete ACES implementation available to any editor and/or colorist. Compared with Media Composer, Premiere Pro, or Final Cut Pro X, I would only trust Resolve for an accurate ACES workflow at this point in time. However, you can start your edit in Resolve as Rec. 709 – or roundtrip from another editor into Resolve – and then switch the settings to ACES for the grade and delivery. Or you can start with ACES color management from the beginning. If you start a Resolve project using a Rec. 709 workflow for editing and then switch to ACES for the grade, be sure to remove any LUTs applied to clips and reset grading nodes. Those adjustments will all change once you shift the settings into ACES color management.

To start with an ACES workflow, select the Color Management tab in the Master Settings (lower right gear icon). Change Color Science to ACEScct and ACES version 1.1. (The difference between ACEScc and ACEScct is that the latter has a slight roll-off at the bottom, thus allowing a bit more shadow detail.) Set the rest as follows: ACES Input Device Transform to No Input Transform. ACES Output Device Transform to Rec. 709 (when working with a calibrated grading display). Process Node LUTs in ACEScc AP1 Timeline Space. Finally, if this is for broadcast, enable Broadcast Safe and set the level restrictions based on the specs that you’ve been supplied by the media outlet.

With these settings, the next step is to select the IDT for each camera type in the Media page. Sort the list to change all cameras of a certain model at once. Some media clips will automatically apply an IDT based on metadata embedded into the clip by the camera. I found this to be the case with the raw formats I tested, such as RED and BRAW. While an IDT may appear to be doing the same thing as a technical LUT, the math is inherently different. As a result, you’ll get a slightly different starting look with Rec. 709 and a LUT, versus ACES and an IDT.

Nearly all LUTs are built for the Rec. 709 color space and should not be used in an ACES workflow. Yes, you can apply color space transforms within your node structure, but the results are highly unpredictable and should be avoided. Technical camera LUTs in Resolve were engineered by Blackmagic Design based on a camera manufacturer’s specs. They are not actually supplied as a plug-in by the manufacturer to Blackmagic. The same is true for Apple, Avid, and Adobe, which means that in all cases a bit of secret sauce may have been employed. Apple’s S-Log conversion may not match Avid’s for instance. ACES IDTs and ODTs within Resolve are also developed by Blackmagic, but based on ACES open standards. In theory, the results of an IDT in Resolve should match that same IDT used by another software developer.

Working with ACES on the Color page

After you’ve set up color management and the transforms for your media clips, you’ll have no further interaction with ACES during editing. Likewise, when you move to the Color page, your grading workflow will change very little. Of course, if you are accustomed to applying LUTs in a Rec. 709 workflow, that step will no longer be necessary. You might find a reason to change the IDT for a clip, but typically it should be whatever is the correct camera profile for the associated clip. Under the hood, the timeline is actually working in a log color space (ACEScc AP1); therefore, I would suggest grading with Log rather than Primary color wheels. The results will be more predictable. Otherwise, grade any way you like to get the look that you are after.

Currently Resolve offers few custom look presets specific to the ACES workflow. There are three LMTs found under the LUTs option / CLF (common LUT format) tab (right-click any node): LMT Day for Night, LMT Kodak 2383 Print Emulation, and LMT Neon Suppression. I’m not a fan of the first two looks. Quite frankly, I feel Resolve’s film stock emulations are awful and certainly nowhere near as pleasing as those available through Koji Advance or FilmConvert Nitrate. But the third is essential. The ACES color space has one current issue: extremely saturated colors at a high brightness level, like neon lights, can induce image artifacts. The Neon Suppression LMT can be applied to tone down extreme colors in some clips. For example, a shot with a highly saturated red item will benefit from this LMT, so that the red looks normal.

If you have used LUTs and filters for certain creative looks, like film stock emulation or the orange-and-teal look, then use PowerGrades instead. Unlike LUTs, which are intended for Rec. 709 and are typically a “black box,” a PowerGrade is simply a string of nodes. Every time you grab a still in the Color page, you have stored that series of correction nodes as a PowerGrade. A few enterprising colorists have developed their own packs of custom Resolve PowerGrades available for free or sale on the internet.

The advantages are twofold. First, a PowerGrade can be applied to your clip without any transform or conversion to make it work. Second, because these are a series of nodes, you can tweak or disable nodes to your liking. As a practical matter, because PowerGrades were developed with a base image, you should insert a node in front of the added PowerGrade nodes. This lets you balance your own image before it hits the PowerGrade nodes, giving them a consistent starting point.

Deliverables

The project’s ODT is still set to Rec. 709, so nothing changes in the Resolve Deliver page. If you need to export a ProResHQ master, simply set the export parameters as you normally would. As an extra step of caution, set the Data Levels (Advanced Settings) to Video and Color Space and Gamma Tags to Rec. 709, Gamma 2.4. The result should be a proper video file with correct broadcast levels. So far so good.

One of the main reasons for an ACES workflow is future-proofing, which is why you’ve been working in this extended color space. No common video file format preserves this data. Furthermore, formats like DNxHR and ProRes are governed by companies and aren’t guaranteed to be future-proof.

An ACES archival master file needs to be exported in the Open EXR file format, which is an image sequence of EXR files. This will be a separate deliverable from your broadcast master file. First, change the ACES Output Device Transform (Color Management setting) to No Output Device and disable Broadcast Safe limiting. At this point all of your video clips will look terrible, because you are seeing the image in the ACES log color space. That’s fine. On the Deliver page, change the format to EXR, RGB float (no compression), Data Levels to Auto, and Color Space and Gamma Tags to Same As Project. Then export.

In order to test the transparency of this process, I reset my settings to an ODT of Rec. 709 and imported the EXR image sequence – my ACES master file. After import, the clip was set to No Input Transform. I placed it back-to-back on the timeline against the original. The two clips were a perfect match: EXR without added grading and the original with correction nodes. The one downside of such an Open EXR ACES master is a huge size increase. My 4K ProRes 4444 test clip ballooned from an original size of 3.19GB to 43.21GB in the EXR format.

Conclusion

Working with ACES inside of DaVinci Resolve involves some different terminology, but the workflow isn’t too different once you get the hang of it. In some cases, camera matching and grading is easier than before, especially when multiple camera formats are involved. ACES is still evolving, but as an open standard supported globally by many companies and noted cinematographers, the direction can only be positive. Any serious colorist working with Resolve should spend a bit of time learning and getting comfortable with ACES. When the time comes that you are called upon to deliver an ACES project, the workflow will be second nature.

Originally written for Pro Video Coalition.

©2020 Oliver Peters

Dialogue Mixing Tips

Video is a visual medium, but the audio side of a project is as important as – and often more important than – the picture side. When story context is based on dialogue, the story will make no sense if you can’t hear or understand that spoken information. In theatrical mixes, it’s common for a three-person team of re-recording mixers to operate the console for the final mix. Their responsibilities are divided into dialogue, sound effects, and music. The dialogue mixer is usually the team lead, precisely because intelligible dialogue is paramount to a successful motion picture mix. For this reason, dialogue is also mixed primarily in mono, coming from the center speaker in a 5.1 surround set-up.

A lot of my work includes documentary-style entertainment and corporate projects, which frequently lean on recorded interviews to tell the story. In many cases, sending the mix outside isn’t in the budget, which means the mix falls to me. You can mix in a DAW or in your NLE. Many video editors are intimidated by or unfamiliar with Pro Tools or Logic Pro X – or even the Fairlight page in DaVinci Resolve. Rest assured that every modern NLE is capable of turning out an excellent stereo mix for the purposes of TV, web, or mobile viewing. Given the right monitoring and acoustic environment, you can also turn out solid LCR or 5.1 surround mixes, adequate for TV viewing.

I have covered audio and mix tips in the past, especially when dealing with Premiere. The following are a few more pointers.

Original location recording

You typically have no control over the original sound recording. On many projects, the production team will have recorded double-system sound controlled by a separate location mixer (recordist). They generally use two microphones on the subject – a lav and an overhead shotgun/boom mic.

The lav will often be tucked under clothing to filter out ambient noise from the surrounding environment and to hide it from the camera. This will sound closer, but may also be a bit muffled. There may also be occasional rustle from clothing rubbing against the mic as the speaker moves around. For these reasons, I will generally select the shotgun as the microphone track to use. The speaker’s voice will sound better and the recording will tend to “breathe.” The downside is that you’ll also pick up more ambient noise, such as HVAC fans running in the background. Under the best of circumstances these will be present during quiet moments, but not too noticeable when the speaker is actually talking.

Processing

The first stage of any dialogue processing chain or workflow is noise reduction and gain correction. At the start of the project you have the opportunity to clean up any raw voice tracks. This is ideal, because it saves you from having to do that step later. In the double-system sound example, you have the ability to work with the isolated .wav file before syncing it within a multicam group or as a synchronized clip.

Most NLEs feature some audio noise reduction tools and you can certainly augment these with third party filters and standalone apps, like those from iZotope. However, this is generally a process I will handle in Adobe Audition, which can process single tracks, as well as multitrack sessions. Audition starts with a short noise print (select a short quiet section in the track) used as a reference for the sounds to be suppressed. Apply the processing and adjust settings if the dialogue starts sounding like the speaker is underwater. Leaving some background noise is preferable to over-processing the track.
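
Audition’s workflow is GUI-driven, but the underlying technique – learn the noise spectrum from a quiet section, then subtract it from the whole track – can be sketched in Python. This is bare-bones spectral subtraction, not Audition’s actual algorithm:

```python
import numpy as np
from scipy.signal import istft, stft

def denoise(audio: np.ndarray, sr: int, noise_print: np.ndarray) -> np.ndarray:
    """Subtract the averaged spectrum of a 'noise print' (a short, quiet slice
    of the same recording) from the track's magnitude spectrum."""
    _, _, noise_spec = stft(noise_print, fs=sr, nperseg=1024)
    noise_mag = np.abs(noise_spec).mean(axis=1, keepdims=True)
    _, _, spec = stft(audio, fs=sr, nperseg=1024)
    mag, phase = np.abs(spec), np.angle(spec)
    # The floor keeps 10% of the original magnitude: over-processing is what
    # creates the "underwater" artifacts, so deliberately leave some noise behind.
    cleaned = np.maximum(mag - 2.0 * noise_mag, 0.1 * mag)
    _, out = istft(cleaned * np.exp(1j * phase), fs=sr, nperseg=1024)
    return out
```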

Once the noise reduction is where you like it, apply gain correction. Audition features an automatic loudness match feature or you can manually adjust levels. The key is to get the overall track as loud as you can without clipping the loudest sections and without creating a compressed sound. You may wish to experiment with the order of these processes. For example, you may get better results adjusting gain first and then applying the noise reduction afterwards.
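
Audition’s loudness matching works on an integrated (LUFS) measurement; a simpler peak-based stand-in shows the principle of raising gain without clipping:

```python
import numpy as np

def normalize_to_peak(audio: np.ndarray, target_dbfs: float = -3.0) -> np.ndarray:
    """Apply one overall gain so the loudest sample lands at target_dbfs."""
    peak = float(np.max(np.abs(audio)))
    if peak == 0.0:
        return audio  # silence: nothing to normalize
    return audio * (10 ** (target_dbfs / 20) / peak)
```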

After both of these steps have been completed, bounce out (export) the track to create a new, processed copy of the original. Bring that into your NLE and combine it with the picture. From here on, anytime you cut to that clip, you will be using the synced, processed audio.

If you can’t go through such a pre-processing step in Audition or another DAW, then the noise reduction and correction must be handled within your NLE. Each of the top NLEs includes built-in noise reduction tools, but there are plenty of plug-in offerings from Waves, iZotope, Accusonus, and Crumplepop to name a few. In my opinion, such processing should be applied on the track (or audio role in FCPX) and not on the clip itself. However, raising or lowering the gain/volume of clips should be performed on the clip or in the clip mixer (Premiere Pro) first.

Track/audio role organization

Proper organization is key to an efficient mix. When a speaker is recorded multiple times or at different locations, then the quality or tone of those recordings will vary. Each situation may need to be adjusted differently in the final mix. You may also have several speakers interviewed at the same time in the same location. In that case, the same adjustments should work for all. Or maybe you only need to separate male from female speakers, based on voice characteristics.

In a track-based NLE like Media Composer, Resolve, Premiere Pro, or others, simply place each speaker onto a separate track so that effects processing can be specific for that speaker for the length of the program. In some cases, you will be able to group all of the speaker clips onto one or a few tracks. The point is to arrange VO, sync dialogue, sound effects, and music together as groups of tracks. Don’t intermingle voice, effects, or music clips onto the same tracks.

Once you have organized your clips in this manner, then you are ready for the final mix. Unfortunately this organization requires some extra steps in Final Cut Pro X, because it has no tracks. Audio clips in FCPX must be assigned specific audio roles, based on audio types, speaker names, or any other criteria. Such assignments should be applied immediately upon importing a clip. With proper audio role designations, the process can work quite smoothly. Without it, you are in a world of hurt.

Since FCPX has no traditional track mixer, the closest equivalent is to apply effects to audio lanes based on the assigned audio roles. For example, all clips designated as dialogue will have their audio grouped together into the dialogue lane. Your sequence (or just the audio) must first be compounded before you are able to apply effects to entire audio lanes. This effectively applies these same effects to all clips of a given audio role assignment. So think of audio lanes as the FCPX equivalent to audio tracks in Premiere, Media Composer, or Resolve.

The vocal chain

The objective is to get your dialogue tracks to sound consistent and stand out in the mix. To do this, I typically use a standard set of filter effects. Noise reduction processing is applied either through preprocessing (described above) or as the first plug-in filter applied to the track. After that, I will typically apply a de-esser and a plosive remover. The first reduces the sibilance of the spoken letter “s” and the latter reduces mic pops from the spoken letter “p.” As with all plug-ins, don’t get heavy-handed with the effect, because you want to maintain a natural sound.

You will want the audio – especially interviews – to have a consistent level throughout. This can be done manually by adjusting clip gain, either clip by clip, or by rubber banding volume levels within clips. You can also apply a track effect, like an automatic volume filter (Waves, Accusonus, Crumplepop, other). In some cases a compressor can do the trick. I like the various built-in plug-ins offered within Premiere and FCPX, but there are a ton of third-party options. I may also apply two compression effects – one to lightly level the volume changes, and the second to compress/limit the loudest peaks. Again, the key is to apply light adjustments, because I will also compress/limit the master output in addition to these track effects.
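
As a rough idea of what the second, peak-taming compressor does, here’s a static gain curve in Python – real compressors add attack and release timing, which this deliberately omits:

```python
import numpy as np

def compress_peaks(audio: np.ndarray, threshold_dbfs: float = -6.0, ratio: float = 4.0) -> np.ndarray:
    """Reduce anything above the threshold by the given ratio (4:1 here)."""
    t = 10 ** (threshold_dbfs / 20)
    mag = np.abs(audio)
    over = mag > t
    out = audio.copy()
    out[over] = np.sign(audio[over]) * (t + (mag[over] - t) / ratio)
    return out
```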

The last step is equalization. A parametric EQ is usually the best choice. The objective is to assure vocal clarity by accentuating certain frequencies. This will vary based on the sound quality of each speaker’s voice. This is why you often separate speakers onto their own tracks according to location, voice characteristics, and so on. In actual practice, only two to three tracks are usually needed for dialogue. For example, interviews may be consistent, but the voice-over recordings require a different touch.
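
A single band of parametric EQ is a standard “peaking” biquad filter. A Python sketch using the well-known RBJ cookbook coefficients – a gentle presence lift for a dull voice might be +2 dB around 3 kHz:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(audio: np.ndarray, sr: int, f0: float, gain_db: float, q: float = 1.0) -> np.ndarray:
    """Boost or cut gain_db around f0 Hz (RBJ audio EQ cookbook peaking filter)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], audio)

# boosted = peaking_eq(voice, 48000, f0=3000.0, gain_db=2.0, q=1.0)
```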

Don’t get locked into the specific order of these effects. What I have presented in this post isn’t necessarily gospel for the hierarchical order in which to use them. For example, EQ and level adjusting filters might sound best when placed at different positions in this stack. A certain order might be better for one show, whereas a different order may be best the next time. Experiment and listen to get the best results!

©2020 Oliver Peters