Understanding SpeedGrade

How you handle color correction depends on your temperament and level of expertise. Some editors want to stay within the NLE, so that editorial adjustments are easily made after grading. Others prefer the roundtrip to a powerful external application. When Adobe added the Direct Link conduit between Premiere Pro CC and SpeedGrade CC, they gave Premiere Pro editors the best of both worlds.

Displays

SpeedGrade is a standalone grading application that was initially designed around an SDI feed from the GPU to a second monitor for your external video. After the Adobe acquisition, Mercury Transmit was eventually added, so you can run SpeedGrade with one display, two computer displays, or a computer display plus a broadcast monitor. With a single display, the video viewer is integrated into the interface. At home, I use two computer displays, so by enabling a dual display layout, I get the SpeedGrade interface on one screen and the full-screen video viewer on the other. To make this work, you have to correctly offset the pixel dimensions and position of the secondary display; otherwise the image stays hidden behind the interface.

Using Mercury Transmit, the viewer image is sent to an external monitor, but you’ll need an appropriate capture/monitoring card or device. AJA products seem to work fine. Some Blackmagic devices work and others don’t. When this works, you will lose the viewer from the interface, so it’s best to have the external display close – as in next to your interface monitor.

Timeline

When you use Direct Link, you are actually sending the Premiere Pro timeline to SpeedGrade. This means that edits and timeline video layers are determined by Premiere Pro and those editing functions are disabled in SpeedGrade. It IS the Premiere Pro timeline. This means certain formats that might not be natively supported by a standalone SpeedGrade project will be supported via the Direct Link path – as long as Premiere Pro natively supports them.

There is a symbiotic relationship between Premiere Pro and SpeedGrade. For example, I worked on a music video that was edited natively using RED camera media. The editor had done a lot of reframing from the native 4K media in the 1080 timeline. All of this geometry was correctly interpreted by SpeedGrade. When I compared the same sequence in Resolve (using an XML roundtrip), the geometry was all wrong. SpeedGrade doesn’t give you access to the camera raw settings for the .r3d media, but Premiere Pro does. So in this case, I adjusted the camera raw values by using the source settings control in Premiere Pro, which then carried those adjustments over to SpeedGrade.

Since the Premiere Pro timeline is the SpeedGrade timeline when you use Direct Link, you can add elements into the sequence from Premiere, in order to make them available in SpeedGrade. Let’s say you want to add a common edge vignette across all the clips of your sequence. Simply add an adjustment layer to a top track while in Premiere. This appears in your SpeedGrade timeline, enabling you to add a mask and correction within the adjustment layer clip. In addition, any video effects filters that you’ve applied in Premiere will show up in SpeedGrade. You don’t have access to the controls, but you will see the results interactively as you make color correction adjustments.

All SpeedGrade color correction values are applied to the clip as a single Lumetri effect when you send the timeline back to Premiere Pro. All grading layers are collapsed into a single composite effect per clip, which appears in the clip’s effect stack (in Premiere Pro) along with all other filters. In this way you can easily trim edit points without regard to the color correction. Traditional roundtrips render new media with baked-in color correction values, so you can only trim within the boundaries of the handles that were added to each file upon rendering. Not so with Direct Link, since color correction is like any other effect applied to the original media. Any editorial changes you’ve made in Premiere Pro are reflected in SpeedGrade should you go back for tweaks, as long as you continue to use Direct Link.

12-way and more

Most editors are familiar with 3-way color correctors that have level and balance controls for shadows, midrange and highlights. Many refer to SpeedGrade’s color correction model as a 12-way color corrector. The grading interface features a 3-way (lift/gamma/gain) control for four ranges of correction: overall, shadows, midrange, and highlights. Each tab also adds control of contrast, pivot, color temperature, magenta (tint), and saturation. Since shadow, midrange, and highlight ranges overlap, you also have sliders that adjust the overlap thresholds between shadow and midrange and between the midrange and highlight areas.
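
To make the math behind those controls concrete, here is a minimal lift/gamma/gain sketch in Python. The formulas and the range-weighting function are a common textbook formulation, not SpeedGrade’s actual internals, and every parameter value is illustrative.

    # Minimal lift/gamma/gain sketch (one channel, values normalized 0.0-1.0).
    # The formulation and range weighting are illustrative, not SpeedGrade's code.

    def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
        """Classic 3-way correction: lift raises blacks, gain scales whites,
        gamma bends the midtones."""
        y = x * gain + lift * (1.0 - x)   # one common formulation
        y = max(0.0, min(1.0, y))
        return y ** (1.0 / gamma)

    def shadow_weight(x, overlap=0.5):
        """Rough weighting curve so a 'shadows' adjustment tapers off toward the
        highlights; an overlap slider would shift this threshold."""
        return 1.0 - x / overlap if x < overlap else 0.0

    # Example: an overall correction, then a shadows-only lift blended in by the weight.
    pixel = 0.25
    overall = lift_gamma_gain(pixel, lift=0.02, gamma=1.1, gain=0.95)
    shadows = lift_gamma_gain(overall, lift=0.05)
    result = overall + (shadows - overall) * shadow_weight(overall)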

Color correction is layer based – similar to Photoshop or After Effects. SpeedGrade features primary (“P”), secondary (“S”) and filter layers (the “+” symbol). When you add layers, they are stacked from bottom to top and each layer includes an opacity control. As such, layers work much the same as rooms in Apple Color or nodes in DaVinci Resolve. You can create a multi-layered adjustment by using a series of stacked primary layers. Shape masks, like that for a vignette, should be applied to a primary layer. The mask may be normal or inverted, so that the correction is applied either to the inside or the outside of the mask. Secondaries should be reserved for HSL keys – for instance, keying the skin tones of a face to adjust them separately from the rest of the image. The filter layer (“+”) is where you’ll find a number of useful tools, including Photoshop-style creative effect filters, LUTs, and curves.
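
For readers curious about what an HSL key actually does, here is a minimal sketch of the general technique – not SpeedGrade code, and using HSV rather than true HSL for simplicity: build a soft matte from pixels whose hue and saturation fall in a range, then restrict a correction to that matte. All the numbers are illustrative defaults.

    import colorsys

    def hsl_key_matte(rgb, hue_center=0.07, hue_range=0.06, sat_min=0.15):
        """Return a 0-1 matte value for one RGB pixel (values 0-1).
        Hue numbers are illustrative defaults for warm skin tones."""
        h, s, v = colorsys.rgb_to_hsv(*rgb)
        hue_dist = min(abs(h - hue_center), 1.0 - abs(h - hue_center))  # hue wraps around
        if s < sat_min or hue_dist > hue_range:
            return 0.0
        # Soft falloff toward the edge of the hue range instead of a hard key
        return 1.0 - (hue_dist / hue_range)

    def apply_keyed_correction(rgb, correction, matte):
        """Blend a corrected pixel with the original, weighted by the matte."""
        return tuple(o + (c - o) * matte for o, c in zip(rgb, correction(rgb)))

    # Example: warm up keyed skin tones slightly, leave everything else alone.
    warm = lambda rgb: (min(1.0, rgb[0] * 1.05), rgb[1], rgb[2] * 0.97)
    pixel = (0.72, 0.55, 0.45)
    graded = apply_keyed_correction(pixel, warm, hsl_key_matte(pixel))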

Working with grades

Color correction can be applied to a clip as either a master clip correction or just a clip correction (or both). When you grade using the default clip tab, that color correction is only applied to that single clip. If you grade in the master clip tab, then any color correction that you apply to that clip will also be applied to every other instance of that same media file elsewhere on the timeline. Theoretically, in a multicam edit – made up of four cameras with a single media file per camera – you could grade the entire timeline by simply color correcting the first clip for each of the four cameras as a master clip correction. All other clips would automatically inherit the same settings. Of course, it almost never works out quite that perfectly, which is why you can grade a clip using both the master clip and the regular clip tabs. Use the master for a general setting and still use the regular clip tab to tweak each shot as needed.

Grades can be saved and recalled as Lumetri Looks, but typically these aren’t as useful in actual grading as standard copy-and-paste functions – a recent addition to SpeedGrade CC. Simply highlight one or more layers of a graded clip and press copy (cmd+c on a Mac). Then paste (cmd+v on a Mac) them onto the target clip. They will be pasted in a stack on top of the default, blank primary correction that’s there on every clip. You can choose to use, ignore, or delete this extra primary layer.

SpeedGrade features a cool trick to facilitate shot matching. The timeline playhead can be broken out into multiple playheads, which enables you to compare two or more shots in real time in the viewer. This quick comparison lets you adjust each one to get a closer match in context with the surrounding shots.

A grading workflow

Everyone has their own approach to grading and these days there’s a lot of focus on camera and creative LUTs. My suggestions for prepping a Premiere Pro CC sequence for SpeedGrade CC go something like this.

Once you are largely done with the editing, collapse all multicam clips and flatten the timeline as much as possible down to the bottom video layer. Add one or two video tracks with adjustment layers, depending on what you want to do in the grade. These should sit above the highest video track that contains clips. All graphics – like lower thirds – should be on tracks above the adjustment layer tracks. This assumes that you don’t want to include them in the color correction. Now duplicate the sequence and delete the tracks with the graphics from the dupe. Send the dupe to SpeedGrade CC via Direct Link.

In SpeedGrade, ignore the first primary layer and add a filter layer (“+”) above it. Select a camera patch LUT – for example, an ARRI Log-C-to-Rec-709 LUT for Log-C gamma-encoded Alexa footage. Repeat this for every clip from the same camera type. If you intend to use a creative LUT, like one of the SpeedLooks from LookLabs, you’ll need one of their camera patches. This shifts the camera video into a unified gamma profile optimized for their creative LUTs. If all of the footage used in the timeline came from the same camera and used the same gamma profile, then in the case of SpeedLooks, you could apply the creative LUT to one of the adjustment layer clips. This will apply that LUT to everything in the sequence.

Once you’ve applied input and output LUTs, you can grade each clip as you’d like, using primary and secondary layers. Use filter layers for curves. Any order and any number of layers per clip is fine. Using this methodology, all grading happens between the camera patch LUT and the creative LUT added to the adjustment layer track. Finally, if you want a soft edge vignette on all clips, apply an edge mask to the default primary layer of the topmost adjustment layer clip. Adjust the size, shape, and softness of the mask. Darken the outside of the mask area. Done.

(Note that not every camera uses logarithmic gamma encoding, nor do you want to use LUTs on every project. These are the “icing on the cake”, NOT the “meat and potatoes” of grading. If your sequence is a standard correction without any stylized creative looks, then ignore the LUT procedures I described above.)

Now simply send your timeline back to Premiere Pro (the “Pr” button). Back in Premiere Pro CC, duplicate that sequence. Copy-and-paste the graphics tracks from the original sequence to the available blank tracks of the copy. When done, you’ll have three sequences: 1) non-color corrected with graphics, 2) color corrected without graphics, and 3) final with color correction and graphics. The beauty of the Direct Link path between Premiere Pro CC and SpeedGrade CC is that you can easily go back and forth for changes without ever being locked in at any point in the process.

©2015 Oliver Peters

Color LUTs for the Film Aesthetic

Newer cameras offer the ability to record in log gamma profiles. Log profiles from Sony, ARRI and Canon have become popular, and with them, so has a new class of color correction filters used by editors. Color look-up tables, known as LUTs, have long been used to convert one color space to another, but now they are increasingly used for creative purposes, including film stock emulation. A number of companies offer inexpensive plug-ins to import and apply common 3D color LUTs within most NLEs, grading software and compositors.

While many of these developers include their own film look LUTs, it is also easy to create your own LUTs that are compatible with these plug-ins. A commonly used LUT format is .cube, which can easily be generated by a knowledgeable editor using DaVinci Resolve, AMIRA Color Tool or FilmConvert, to name a few.

Most LUTs are created with a particular color space in mind, which means you actually need two LUTs. The first, known as a camera profile patch, adjusts for a specific model of camera and that manufacturer’s log values. The second LUT provides the desired “look”. Depending on the company, you may have a single filter that combines the look with the camera patch or you may have to apply two separate filters. LUTs are a starting point, so you will still have to color correct (grade) the shots to get the right look. Often in a chain of filters, you will want the LUT as the last effect and do all of your grading ahead of that filter. This way you aren’t trying to recover highlights or shadow detail that might have been compressed by the LUT values.
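
To make that two-LUT chain concrete, here is a minimal Python sketch that reads a simple .cube file and applies a camera patch followed by a look LUT. It uses nearest-neighbor lookup for brevity (real tools use trilinear or tetrahedral interpolation), assumes the file contains little more than a LUT_3D_SIZE line plus data, and the file names are hypothetical.

    def load_cube(path):
        """Very small .cube reader: expects LUT_3D_SIZE and 'R G B' data lines.
        TITLE/DOMAIN keywords are skipped for brevity."""
        size, table = None, []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or line.startswith("TITLE"):
                    continue
                if line.startswith("LUT_3D_SIZE"):
                    size = int(line.split()[1])
                elif line[0].isdigit() or line[0] == "-":
                    table.append(tuple(float(v) for v in line.split()[:3]))
        return size, table

    def apply_lut(rgb, size, table):
        """Nearest-neighbor 3D lookup; .cube data is ordered with red varying fastest."""
        r, g, b = (min(size - 1, max(0, round(c * (size - 1)))) for c in rgb)
        return table[b * size * size + g * size + r]

    # Chain: camera patch first (log -> Rec 709), creative look second.
    patch_size, patch = load_cube("alexa_logc_to_rec709.cube")   # hypothetical file
    look_size, look = load_cube("film_look.cube")                 # hypothetical file
    pixel = (0.42, 0.40, 0.38)
    pixel = apply_lut(pixel, patch_size, patch)
    pixel = apply_lut(pixel, look_size, look)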

Color Grading Central’s LUT Utility

The LUT Utility has become a “go to” tool for Final Cut Pro X editors who want to use LUTs. It installs with eleven basic LUTs, which include a number of camera log to Rec 709 patches, as well as several film looks for Fuji, Kodak, 2 Strip and 3 Strip emulation. LUT Utility installs as a Motion template and also appears as a System Preferences pane. You may use this pane to install additional LUTs; however, you can also install them by placing the LUT file into the correct Motion template folder. Since it uses LUTs in the .cube format, any application that generates a 3D LUT in that format can be used to create a new LUT that is compatible with LUT Utility. When you apply a LUT Utility filter to a clip or an adjustment layer inside FCP X, the inspector pane for the filter gives you access to all recognized LUTs through a pulldown menu. The only control is a slider to adjust the amount of the LUT that is applied.

Color Grading Central also has a partnership with visionColor to bundle the Osiris film emulation filters separately or together with LUT Utility. The Osiris set includes nine film emulations that cover a number of stocks and stylized looks. They are provided in the .cube format. Although both the Color Grading Central and Osiris filters are offered for FCP X, it’s worth noting that the LUTs themselves are compatible with Avid Media Composer, Adobe Creative Cloud, Autodesk Smoke and DaVinci Resolve. One thing to be careful of in FCP X is that Apple also includes its own log processing for various cameras, such as the ARRI ALEXA, and often applies this automatically. When you apply a LUT to log footage in FCP X, make sure you are not double-processing the image. Either use a LUT designed for Rec 709 imagery, or set the FCP X log processing for the clip to “none” when using a LUT designed for log color space.

In addition to Osiris, visionColor and Color Grading Central developed the ImpulZ LUT series. ImpulZ comes in a Basic, Pro and Ultimate set of LUTs, based on the camera profiles you typically need to work with. It covers a mixture of stock brands and types, including negative print and still film stocks. The Ultimate collection includes about 1950 different LUT files. In addition to camera profiles, these LUTs also cover four gamma profiles, including film contrast, film print, VisionSpace and Cineon Log (Ultimate only). The VisionSpace profile is their own flatter curve that is more conducive to further grading.

Koji Color

Another LUT package just released for Final Cut Pro X – but also compatible with other applications – is Koji Color. This is a partnership between noted color timer Dale Grahn (Predator, Saving Private Ryan, Dracula) and plug-in developer Crumplepop. This partnership previously resulted in the Dale Grahn Color iPad app. Unlike many other film emulation packages that attempt to apply a very stylized look, Koji Color is designed to provide an accurate emulation of a number of popular print stocks.

As implied by the name (Koji appears to be a mash-up of “Kodak” and “Fuji”), three key stocks from each brand are covered, including Kodak 2383, 2393, 2302 (B&W) and Fuji 3514, 3521, 3523. Each print stock has specific contrast and color characteristics, which these LUTs seek to duplicate. In FCP X, you select and apply the correct version of the filter based on camera type. Then from the inspector, select the film stock. There are extra sliders to tweak saturation and exposure (overall, highlights, midtones, shadows). This helps you dial in the look more precisely.

Koji Color comes in three product packages. The basic Koji DSLR is a set of filters that you would apply to standard HD cameras recording in video mode rather than log. The output format is Rec 709 video. If you shoot with a lot of log profile cameras, then you’ll want Koji Log, which also includes Koji DSLR. This package includes the same LUTs, but with filters that have built-in camera patches for each of the various camera models that shoot log. Again, the output format is Rec 709.

The most expensive bundle is Koji Studio, which also includes the other two packages. The main difference is that Studio also supports output in the DCI-P3 color space. This is intended for digital intermediate color correction, which goes beyond the needs of most video editors.

SpeedLooks

LookLabs is the development side of Canadian post facility Jump Studios. As an outgrowth of their work for clients and shows, the team developed a set of film looks, which they branded under the name SpeedLooks. If you use Adobe Creative Cloud, then you know that SpeedLooks comes bundled for use in Adobe Premiere Pro CC and SpeedGrade CC. Like the other film emulation LUTs, SpeedLooks are based on certain film stocks, but LookLabs decided to make their offerings more stylized. There are specific bundles covering different approaches to color, including Clean, Blue, Gold and others.

SpeedLooks come in both .cube and Adobe’s .look formats, so you are not limited to only using these with Adobe products. LookLabs took a slightly different approach to cameras, by designing their looks based on the starting point of their own virtual log space. This way they can adjust for the differences in the log spaces of the various cameras. A camera patch converts the camera’s log or Rec 709 profile into LookLabs’ own log profile. When using SpeedLooks, you should first apply a camera profile patch as one filter and then the desired look filter as another.

If you use Premiere Pro CC, all you need to do is apply the Lumetri color correction effect. A standard OS dialogue opens to let you navigate to the right LUT file. Need to change LUTs? Simply click the set-up icon in the effect control window and select a different file. If you use SpeedGrade CC, then you apply the camera patch at the lowest level and the film look LUT at the highest level, with primary and secondary grading layers in between. LookLabs also offers a version of SpeedLooks for editors. This lower-cost package supplies the same film look LUTs, but is intended for cameras that are already in the Rec 709 color space. All of these filters can be used in a number of applications, as well as in FCP X through Color Grading Central’s LUT Utility.

Like all of these companies, LookLabs has taken time to research how to design LUTs that match the response of film. One area they take pride in is the faithful reproduction of skin tones. The point is to be able to push color broadly without producing unnatural skin tones. Another area is highlight and shadow detail. LUTs are created by applying curve values to the image that compress the highlight and shadow portions of the signal. By altering the RGB values of the curve points, you get color changes in addition to contrast changes. With SpeedLooks, as highlights are pushed to the top and shadows to the bottom, a level of detail is still retained. Images aren’t harshly clipped or crushed, which leaves you with a more filmic response that rolls off in either direction.
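
Here is a minimal sketch of that roll-off idea: a soft shoulder keeps overexposed values distinct where a hard clip would merge them. The curve and its constants are purely illustrative, not LookLabs’ actual math.

    import math

    def hard_clip(x):
        """Naive contrast push: anything outside 0-1 is simply discarded."""
        return max(0.0, min(1.0, x))

    def filmic_rolloff(x, shoulder=4.0):
        """Soft shoulder: values compress asymptotically toward 1.0 instead of
        clipping, so some highlight separation is kept. Illustrative only."""
        return 1.0 - math.exp(-shoulder * max(0.0, x))

    # Two overexposed values that a hard clip merges, but a roll-off keeps apart.
    for v in (1.1, 1.4):
        print(v, hard_clip(v), round(filmic_rolloff(v), 3))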

FilmConvert

FilmConvert (an arm of Rubber Monkey Software) is now in the 2.0 version of its popular film emulation application and plug-ins. You may purchase it as a standalone tool or as filters for popular NLEs. Unlike the others, which use a common LUT format like .cube, FilmConvert does all of its processing internally. The plug-ins not only provide film emulation, but are also full-fledged, 3-way color correction filters. In fact, you could use a FilmConvert filter as the sole grading tool for all of your work. FilmConvert filters are available for Adobe, Avid, Final Cut and in the OFX format for Resolve, Vegas and Scratch. The Resolve version doesn’t include the 3-way color correction function.

You may keep your setting generic or select specific camera models. If you don’t have that camera profile installed, the filter will prompt you to download the appropriate camera file from their website. Once installed, you are good to go. FilmConvert offers a wide range of stock types for emulation. These cover a broad range of brands and also include still photo stocks, such as Polaroid. The FilmConvert emulations are based on color negative film (or slide transparencies) and not release print stocks, like those of Koji Color.

In addition to grading and emulating certain stocks, FilmConvert lets you apply grain in a variety of sizes. From the control pane, dial in the amount of the film color, curve and grain, which is separate from the adjustments made with the 3-way color correction tool. New in this updated version is that you can generate a 3D LUT from your custom look. In doing so, you can create a .cube file ready for application elsewhere. This file will carry the color information, though not the grain. The standalone version is a more complete grading environment, complete with XML roundtrips and accelerated rendering.
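
Conceptually, baking a custom look into a 3D LUT just means sampling your grading function over a grid of input colors and writing the results out. A minimal sketch of writing a .cube file follows; the grading function here is a stand-in for whatever look you have built, not FilmConvert’s processing.

    def write_cube(path, grade, size=33):
        """Bake an arbitrary RGB->RGB function into a 3D .cube LUT.
        'grade' stands in for whatever custom look you've built."""
        with open(path, "w") as f:
            f.write("TITLE \"custom look\"\n")
            f.write(f"LUT_3D_SIZE {size}\n")
            # .cube ordering: red varies fastest, then green, then blue.
            for b in range(size):
                for g in range(size):
                    for r in range(size):
                        rgb = (r / (size - 1), g / (size - 1), b / (size - 1))
                        out = grade(rgb)
                        f.write("{:.6f} {:.6f} {:.6f}\n".format(*out))

    # Example stand-in grade: slight warm cast with a gentle contrast S-curve.
    def my_look(rgb):
        contrast = lambda x: x * x * (3 - 2 * x)        # smoothstep as a cheap S-curve
        r, g, b = (contrast(c) for c in rgb)
        return (min(1.0, r * 1.03), g, b * 0.97)

    write_cube("my_look.cube", my_look)

Note that color information transfers this way, but grain (a spatial, random effect) cannot be represented in a LUT, which is why FilmConvert’s exported .cube carries the color but not the grain.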

Originally written for Digital Video magazine / CreativePlanetNetwork

©2014, 2015 Oliver Peters

DaVinci Resolve 11

With the release of DaVinci Resolve 11, Blackmagic Design has firmly moved into the ranks of nonlinear editing. In addition to a redesigned logo and splash screen, Resolve 11 sports more editorial tools than ever before. Now for the first time it is worthy of consideration as your NLE of choice. I have covered previous releases of Resolve, so I’ll only briefly touch on color correction in this article.

As before, DaVinci Resolve 11 comes in four versions: Resolve Lite (free), Resolve Software ($995), Resolve (with the control surface for $29,995) and the Linux configuration. Both free and paid versions support a variety of third-party control surfaces, with the most popular being the Avid Artist Color and the Tangent Devices Element panels. Resolve Lite supports output up to UltraHD (3840 x 2160). It includes most of the features of the paid software, except collaboration, stereo 3D and noise reduction. Although you can operate Resolve without any third-party i/o hardware, if you want external monitoring or output to tape, you’ll need to purchase one of Blackmagic Design’s PCIe capture cards or Thunderbolt i/o devices.

Color Match

The interface is divided into four modules: media, edit, color and deliver. All color correction occurs in the color module. Here you’ll find a wealth of grading tools, including camera raw settings, color wheels, primary sliders and more. Color Match is a new correction tool. If you included a color chart when you shot your footage, Resolve can use the image of that chart to set an automatic correction for the color balance of the scene.

Color Match features three template settings for charts, including X-Rite ColorChecker, Datacolor SpyderCheckr and DSC Labs OneShot. If you used one of these charts and it’s in your footage, then select the appropriate set of color swatches in the Color Match menu. Next, select the Color Chart grid from the viewer tools, which opens an overlay for that chart. Corner-pin the overlay so that the grid lines up over the color swatches in the image and hit the Match button. Resolve will instantly adjust its curves to correct the color balance of the shot, so that the chart in the image matches the template for that chart in Resolve. Now you can copy this grade and apply it to the rest of the shots within that same set-up.

Although this isn’t a one-shot fix, it’s intended to give you a good starting point for your grade. While this feature demos really well and is certainly a whizz-bang attention-getter, it has the most value for novice users or for DITs who need to get a quick grade for dailies while on location.

Editing

The biggest spark of interest I’ve seen for Resolve 11 is due to the editing tools. As an NLE, it’s somewhat of a mash-up between Final Cut Pro 7 and Final Cut Pro X. It copies a lot of X’s design aesthetic and even some features, like clip skimming in the media bins; yet, it is clearly track-based. For editors who like a lot of FCP X, but are put off by Apple’s trackless, magnetic timeline, Resolve 11 becomes a very tantalizing, cross-platform alternative.

Resolve 11’s edit module most closely aligns with Final Cut Pro 7, although there is no multi-cam feature yet. The keyboard commands mirror the FCP 7 set, as do menu options and much of the working style. One big improvement is a very advanced trim mode, which offers good asymmetrical trimming. If you start a project from the beginning in Resolve 11, you can easily import media, organize clips into bins/folders, add logging information and, in general, do all of the nuts-and-bolts things you do in every editing application.

The interface uses a modal design and supports dual and single monitor configurations. Although there are numerous panels and windows that can be opened as needed, the general layout is fixed. Certain functions are restricted to the edit module and others to the color module. For example, transition effects, titles and generators would be added and adjusted in the edit module, while working with a standard timeline. Color correction and other image effects are reserved for the color module, which uses a node-based hierarchy. Resizing and repositioning can be done in either.

Resolve includes an inspector pane on the right side of the timeline viewer that is much like that of FCP X. Here you’ll find composite, transform, cropping, retiming and scaling controls. If you select a transition, then its adjustment controls appear in the inspector panel. Resolve supports the OpenFX video plug-in architecture. Third-party transitions will show up in the edit mode’s OpenFX library, while the filters only show up when you are in the color mode. As in FCP X, inspector controls are limited to sliders, color pickers and numerical entry, with no allowance for custom third-party plug-in interfaces.

My biggest beef is performance. Resolve 11 is optimized to pass the highest quality images through its pipeline, which seems to impede real-time playback, even with ungraded footage. In other NLEs, hitting play or the space bar brings you to full-speed, real-time playback in a fraction of a second. In Resolve it takes a few seconds, which is clearly evident in its dropped-frame indicator. Even with proper real-time playback, video motion does not look as smooth and fluid in the viewer as I would expect. There are a number of factors that affect this, including drive performance (high-performance storage is good), GPU performance (one or more high-end cards are desirable) and age of the machine (a new top-of-the-line system is ideal). Resolve is also not as gracious with a wide range of native media types as some of the other NLEs.

Color grades will affect performance. What if you start grading in the color mode and bounce back into the edit mode? This has the same impact on the computer as applying several filters in a traditional NLE. Add a stack of effects on most NLEs and playback performance through those clips is often terrible until you render. To mitigate this issue, Resolve includes smart caching, which is a similar sort of background render as that of FCP X. The software renders clips with a grade or an effect applied anytime the machine is idle.

Audio in Resolve 11 is still in the very early stages. There is no audio plug-in architecture. Hopefully Blackmagic will add AU and/or VST support down the road. Having multiple audio tracks also hurts system performance. Complex audio in the timeline quickly choked my system. Having even a few tracks caused the audio to drop out during playback. Resolve employs a similar track design to Adobe Premiere Pro. This means adaptive tracks, where a single timeline track can contain one mono channel, two stereo channels or multiple surround channels. This is an interesting design, but it seems to impact round-tripping to and from other applications. For example, I’ve exported multi-channel timelines via XML. When I brought those timelines into FCP 7 or Premiere Pro, the tracks only showed up as mono tracks with one channel of audio.

Roundtrips

Where Resolve 11 really shines is in its roundtrip capabilities. It can take media and edit list formats from a range of systems, then let you process the media and finally output a new set of media files and corresponding lists. EDL, AAF, XML and FCPXML formats are supported, making Resolve 11 one of the better cross-application conversion tools. For instance, you can edit in FCP X, conform and grade in Resolve 11 and then output that in a compatible format to finish in the same or different NLE, such as Media Composer, Smoke, Premiere Pro, etc. Of course, with Resolve 11, you could simply finish in Resolve and output final deliverables from right within the application. That’s clearly the design goal Blackmagic had in mind.

Personally, I still prefer to use the roundtrip method, but there are a few wrinkles in this process. I have already mentioned audio issues. Another is resizing, such as FCP X’s “spatial conform” and Premiere Pro’s “scale to frame size”. These are automatic timeline functions to fit oversized images into smaller timeline frames, such as putting 4K media into a 1080 timeline. This feature automatically down-scales the source image so that either the horizontal or vertical dimension matches the frame. Unfortunately, some of this information gets lost in the translation between applications.

I recently ran into this on two jobs with 4K RED media and Resolve 11. The first was a project cut in FCP X. The roundtrip went fine, but when the newly rendered 1080 media was back in FCP X, the application still thought it needed to enable spatial conform, which had been used in the offline edit. Disabling spatial conform caused FCP X to blow up the 1080 media to 200%. The simple fix was just to leave spatial conform on and let FCP X render this media on export. There were no visible issues that I could detect.

The second was a music video project that the director had cut on Premiere Pro CC2014. There was extensive reframing and repositioning throughout. Importing this timeline into Resolve 11 was a complete disaster and would have meant rebuilding all of this work to reframe images. Ultimately I opted to use SpeedGrade CC2014 on this particular job, since it correctly translated the Premiere Pro timeline via Adobe’s Direct Link feature.

As a general rule, I would recommend that if you know you are going outside of the application, do not use any of these automatic resizing tools in the offline NLE. Instead, manually set the scale and position values, because Resolve does an excellent job of interpreting these parameters when set during the offline edit.

OpenFX

Blackmagic added the OpenFX architecture with Resolve 10, but now that Resolve 11 is out, new developers are joining the party. On my test system I installed both the FilmConvert 2.0 plug-in and the Boris Continuum Complete 9 package. The filters are accessed in the color module and are applied to nodes, just like other grading functions. Although other host versions of the FilmConvert filter include color wheels within the filter’s control panel, they are excluded in the OpenFX version. You do get the camera and film emulsion presets. This is my favorite film emulation and grain plug-in and it makes a suitable complement to Resolve.

Boris FX’s BCC 9 for Resolve includes most of the same filters as for other hosts, including the new FX Browser. You can launch it from inside the Resolve interface, but when I tried to use it, the browser crashed the application. I’m running the public beta of 11.1, so that could be part of it. Otherwise, the filters themselves worked fine. So, if you need to add a glow, cartoon effect or spray paint noise to a shot, you can do so from inside Resolve with BCC 9.

OpenFX filters installed for other applications also show up in Resolve. I discovered this during my review of the HP Z1G2 workstation. Sony Vegas Pro 13 was installed, which also uses OpenFX. The NewBlueFX filters that were installed for Vegas also showed up in Resolve 11 on that machine.

A key point to remember is to apply OpenFX filters in a separate node. If you need to change the filter, simply delete the node and create a new one for a different filter. That way you won’t lose any of the correction applied to the clip.

Collaboration

Resolve 11 enables collaboration among multiple users on the same project. This requires a paid version of Resolve 11 for each collaborator, a network and a shared DaVinci Resolve database. To test this feature, I enlisted the help of colorist and trainer Patrick Inhofer (Tao of Color, Mixing Light). Patrick set up a simple ethernet network between a Mac Pro and a MacBook Pro, each running a paid version of Resolve 11. You have to set up a shared project and open both Resolve seats in the collaboration mode. Once both systems are open with the same project, then it is possible to work interactively.

This is not like two or more Avid Media Composers running in a Unity-style sharing configuration. Rather, this approach is intended for an editor and a colorist to work on one timeline at the same time. One person is the “owner” of the project, while anyone else is a “collaborator”. In this model, the “owner” has control of the editing timeline and the “collaborator” is the colorist working in the color module. You could also have a third collaborator logging metadata for clips.

In the collaboration mode, a bell-shaped alert icon is added to the lower left corner of the interface. Whenever the colorist adds or changes a correction on one or more clips and publishes those changes, the editor receives an alert to update the clips. When the update is made, the colorist’s changes become visible on the clips in the editor’s timeline. If the editor makes editorial changes to the timeline, such as trimming, adding or deleting clips, then he or she must save the project. Once saved, the colorist can reload the project to see these updates.

As long as you follow these procedures, things work well; however, in our tests, when we went the other direction, updates didn’t happen correctly. For example, color changes made by the editor or timeline edits made by the colorist did not show up as expected on the other person’s system. Collaboration worked well once we both got the hang of it, but the feature does feel like a 1.0 version. Updating changes worked, but you can also reject a change by choosing “revert”. This is supposed to take the clip back to the previous grade. Instead, it dropped the grade entirely and went back to an un-corrected version of the clip with all nodes removed.

DaVinci Resolve 11 is a powerful new version of this best-in-class color grading application. Although you might not edit a project from start-to-finish in Resolve, you certainly could. For now, Blackmagic Design is positioning Resolve as an NLE designed for finishing. Edit your creative cut in Media Composer, Final Cut Pro or Premiere Pro – mix in Logic Pro X, Pro Tools or Audition – and then bring them all together in Resolve 11. As we all know, clients like to tweak the cut until the very end. Now the grading environment can enjoy more interactivity than ever before.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

Color Grading Strategies

A common mistake made by editors new to color correction is to try to nail a “look” all in a single application of a filter or color correction layer. Subjective grading is an art. Just like a photographer who dodges and burns areas of a photo in the lab or in Photoshop to “relight” a scene, so it is with the art of digital color correction. This requires several steps, so a single solution will never give you the best result. I follow this concept, regardless of the NLE or grading application I’m using at the time. Whether stacked filters in Premiere Pro, several color corrections in FCP X, rooms in Color, nodes in Resolve or layers in SpeedGrade – the process is the same. The standard grade for me is often a “stack” of four or more grading levels, layers or nodes to achieve the desired results.

The first step for me is always to balance the image and to make that balance consistent from shot to shot. Achieving this varies with the type of media and application. For example, RED camera raw footage is compatible with most updated software, allowing you to have control over the raw decoding settings. In FCP X or Premiere Pro, you get there through separate controls to modify the raw source metadata settings. In Resolve, I would usually make this the first node. Typically I will adjust ISO, temperature and tint here and then set the gamma to REDlogFilm for easy grading downstream. In a tool like FCP X, you are changing the settings for the media file itself, so any change to the RED settings for a clip will alter those settings for all instances of that clip throughout all of your projects. In other words, you are not changing the raw settings for only the timeline clips. Depending on the application, this type of change is made in the first step of color correction or it is made before you enter color correction.

I’ll continue this discussion based on FCP X for the sake of simplicity, but just remember that the concepts apply generally to all grading tools. In FCP X, all effects are applied to clips before the color board stage. If you are using a LUT filter or some other type of grading plug-in like Nattress Curves, Hawaiki Color or AutoGrade, remember that this is applied first and then that result is effected by the color board controls, which are downstream in the signal flow. If you want to apply an effect after the color board correction, then you must add an adjustment layer title generator above your clip and apply that effect within the adjustment layer.

In the example of RED footage, I set the gamma to REDlogFilm for a flatter profile to preserve dynamic range. In FCP X color board correction 1, I’ll make the necessary adjustments to saturation and contrast to restore this to a neutral, but pleasing image. I will do this for all clips in the timeline, being careful to make the shots consistent. I am not applying a “look” at this level.

The next step, color board correction 2, is for establishing the “look”. Here’s where I add a subjective grade on top of color board correction 1. This could be new from scratch or from a preset. FCP X supplies a number of default color presets that you access from the pull-down menu. Others are available to be installed, including a free set of presets that I created for FCP X. If you have a client that likes to experiment with different looks, you might add several color board correction layers here. For instance, if I’m previewing a “cool look” versus a “warm look”, I might do one in color correction 2 and another in color correction 3. Each correction level can be toggled on and off, so it’s easy to preview the warm versus cool looks for the client.

Assuming that color board correction 2 is for the subjective look, then usually in my hierarchy, correction 3 tends to be reserved for a mask to key faces. Sometimes I’ll do this as a key mask and other times as a shape mask. FCP X is pretty good here, but if you really need finesse, then Resolve would be the tool of choice. The objective is to isolate faces – usually in a close shot of your principal talent – and bring skin tones out against the background. The mask needs to be very soft so as not to draw attention to itself. Like most tools, FCP X allows you to make changes inside and outside of the mask. If I isolate a face, then I could brighten the face slightly (inside mask), as well as slightly darken everything else (outside mask).

Depending on the shot, I might have additional correction levels above this, but all placed before the next step. For instance, if I want to darken specific bright areas, like the sun reflecting off of a car hood, I will add separate layers with key or shape masks for each of these adjustments. This goes back to the photographic dodging and burning analogy.

I like adding vignettes to subtly darken the outer edge of the frame. This goes on correction level 4 in our simplest set-up. The bottom line is that it should be the top correction level. The shape mask should be feathered to be subtle, and then you darken the outside of the mask by lowering brightness and possibly pulling saturation down a touch. You have to adjust this by feel and one vignette style will not work for all shots. In fact, some shots don’t look right with a vignette, so you have to use this to taste on a shot by shot basis. At this stage it may be necessary to go back to color correction level 2 and adjust the settings in order to get the optimal look, after you’ve done facial correction and vignetting in the higher levels.
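
Under the hood, a soft vignette is nothing more than a feathered radial mask used to weight a darkening adjustment. A minimal sketch, with numbers chosen purely for illustration:

    import math

    def vignette_weight(x, y, width, height, radius=0.75, softness=0.35):
        """Return 0.0 at the frame center rising to 1.0 toward the corners,
        with a feathered transition. All values are illustrative."""
        nx = (x - width / 2) / (width / 2)
        ny = (y - height / 2) / (height / 2)
        d = math.sqrt(nx * nx + ny * ny)          # 0 at center, about 1.4 at corners
        t = (d - radius) / softness               # position within the feather zone
        return max(0.0, min(1.0, t))

    def darken(value, x, y, width, height, amount=0.25):
        """Darken a pixel outside the mask; inside the mask it is untouched."""
        return value * (1.0 - amount * vignette_weight(x, y, width, height))

    # Example: a mid-gray pixel near the corner of a 1920x1080 frame gets pulled down.
    print(darken(0.5, 1880, 1040, 1920, 1080))

Pushing the radius and softness values around is the code equivalent of dragging the shape and feather handles by feel, which is exactly why no single vignette preset works for every shot.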

If I want any global changes applied after the color correction, then I need to do this using an adjustment layer. One example is a film emulation filter like LUT Utility or FilmConvert. Technically, if the effect should look like film negative, it should be a filter that’s applied before the color board. If the look should be like it’s part of a release print (positive film stock), then it should go after. For the most part, I stick to after (using an adjustment layer), because it’s easier to control, as well as remove, if the client decides against it. Remember that most film emulation LUTs are based on print stock and therefore should go on the higher layer by definition. Of course, other global changes, like additional color correction filters, grain, or a combination of the two, can be added. These should all be done as adjustment layers or track-based effects, for consistent application across your entire timeline.

©2014 Oliver Peters

The FCP X – RED – Resolve Dance

I recently worked on a short 10-minute teaser video for a potential longer film project. It was shot with a RED One camera, so it was a great test for the RED workflow and roundtrips using Apple Final Cut Pro 10.1.2/10.1.3 and DaVinci Resolve 11.

Starting the edit

As with any production, the first step is to properly back up and verify the data from the camera and sound cards. These files should go to redundant drives that are parked on the shelf for safekeeping. After this has been done, you can copy the media to the editorial drives. In this case, I was using a LaCie RAID-5 array. Each day’s media was placed in a folder and divided into subfolders for RED, audio and other cameras, like a few 5D shots.

Since I was using FCP X and its RED and proxy workflows, I opted not to use REDCINE-X Pro as part of this process. In fact, the Mac Pro didn’t have a RED Rocket accelerator card installed either, as I’ve seen conflicts with FCP X and RED transcodes when the RED Rocket card was installed. After the files were copied to the editorial drives, they were imported into an FCP X event, with media left in its original location. In the import settings, the option to transcode proxy media was enabled, which continues in the background while you start to work with the RED files directly. The camera files are 4K 16×9 .r3d files, so FCP X transcodes these to half-sized ProRes Proxy media.

Audio was recorded as double-system sound using a Sound Devices recorder. The audio files were 2-channel broadcast WAV files using slates for syncing. There was no in-camera audio and no common timecode. I was working with a couple of assistant editors, so I had them sync each clip manually. Instead of using FCP X’s synchronized clips, I had them alter each master clip using the “open in timeline” command. This lets you edit the audio directly to the video as a connected clip within the master clip. Once done, your master clip contains synced audio and video. It functions just like a master clip with in-camera audio – almost (more on that later).

All synced clips were relabeled with a camera, scene and take designation, as well as adding this info to the camera, scene and take columns. Lastly, script notes were added to the notes column based on the script supervisor’s reports.

Transcodes

Since the post schedule wasn’t super-tight, I was able to let the transcodes finish overnight, as needed. Once this is done, you can switch FCP X to working with proxies and all the media will be there. The toggle between proxy and optimized/original media is seamless and FCP X takes care of properly changing all sizing information. For example, the project is 4K media in a 1080p timeline. FCP X’s spatial conform downscales the 4K media, but then when you toggle to proxy, it has to make the corresponding adjustments to media that is now half-sized. Likewise, any blow-ups or reframing that you do also have to match in both modes.

The built-in proxy/optimized-original workflow provides you with offline/online editing phases right within the same system. Proxies for fast and efficient editing. Original or high-resolution transcodes for finishing. To keep the process fast and initially true to color decisions made on set, no adjustments were made to the RED files. FCP X does let you alter the camera raw color metadata from inside the application, but there’s no real reason to do this for offline editing files. That can be deferred until it’s time to do color correction. So during the edit, you see what the DoP shot as you view the RED files or the transcoded proxies.

We did hit one bad camera load. This might have been due to either a bad RED drive or possibly excessive humidity at that location. No matter what the reason, the result was a set of corrupt RED clips. We didn’t initially realize this in FCP X, and so hit clips that caused frequent crashes. Once I narrowed it down to the load from that one location, I decided to delete these clips. For that group of shots, I used REDCINE-X Pro to transcode the files. I adjusted the color for a flatter, neutral profile (for later color correction) and transcoded full-resolution debayered 1080p ProRes 4444 files. We considered these as the new camera masters for those clips. Even there, REDCINE-X Pro crashed on a few of the clips, but I still had enough to make a scene out of it.

Editing

The first editing step is culling down the footage in FCP X. I do a first pass rejecting all bogus shots, like short clips of the floor, a bad slate, etc. Set the event browser to “hide rejected”. Next I review the footage based on script notes, looking at the “circle takes” first, plus picking a few alternates if I have a different opinion. I will mark these as Favorites. As I do this, I’ll select the whole take and not just a portion, since I want to see the whole take.

Once I start editing, I switch the event browser to “show favorites”. In the list view, I’ll sort the event by the scene column, which now gives me a quick roadmap of all possible good clips in the order of the script. During editing, I cut mainly using the primary storyline to build up the piece. This includes all overlapping audio, composites, titles and so on. Cutting proceeds until the picture is locked. Once I’m ready to move on to color correction, I export a project XML in the FCPXML format.

Resolve

I used the first release version (not beta) of DaVinci Resolve 11 Lite to do this grade. My intention was to roundtrip it back to FCP X and not to use Resolve as a finishing tool, since I had a number of keys and composites that were easier to do in FCP X than in Resolve. Furthermore, when I brought the project into Resolve, the picture was right, but all of the audio was bogus – wrong takes, wrong syncing, etc. I traced this down to my initial “open in timeline” syncing, which I’ll explain in a bit. Anyway, my focus in Resolve was only grading, so audio wasn’t important for what I was doing. I simply disabled it.

Importing the FCPXML file into a fresh Resolve 11 project couldn’t have been easier. It instantly linked the RED, 5D and transcoded ProRes 4444 files and established an accurate timeline for my picture cut. All resizing was accurately translated. This means that in my FCP X timeline, when I blew up a shot to 120% (which is a blow-up of the 1080p image that was downscaled from the 4K source), Resolve knew to take the corresponding crop from the full 4K image to equal this framing of the shot without losing resolution.
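
The arithmetic behind that translation is worth spelling out. A quick sketch follows; the frame sizes are approximate (RED One 4K 16×9 is roughly 4096×2304) and depend on the actual record format.

    # Worked example of the scaling translation (sizes approximate; exact
    # dimensions depend on the RED record format).
    src_w, src_h = 4096, 2304      # 4K source
    tl_w, tl_h = 1920, 1080        # timeline
    blow_up = 1.20                 # 120% scale applied in the 1080 timeline

    # In FCP X the 4K frame is first conformed ("fit") to 1080, then scaled 120%,
    # so the visible window is 1/1.2 of the frame in either axis.
    crop_w = src_w / blow_up       # ~3413 source pixels wide
    crop_h = src_h / blow_up       # 1920 source pixels tall

    # Resolve maps that window back to the full-resolution source: it crops
    # ~3413x1920 from the 4K frame and scales it down to 1920x1080, so the
    # 120% "blow-up" still involves no upscaling of the original pixels.
    print(round(crop_w), round(crop_h), crop_w >= tl_w)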

The one video gotcha I hit was with the FCP X timeline layout. FCP X is one of the only NLEs that lets you place video BELOW what any other software would consider to be the V1 track – that’s the primary storyline. Some of my green screen composite shots were of a simulated newscast inserted on a TV set hanging on a wall in the primary scene. I decided to place the 5 or 6 layers that made up this composite underneath the primary storyline. All fine inside FCP X; however, Resolve has to interpret the lowest video element as V1, thus shifting everything else up accordingly. As a result, the bulk of the video was on V6 or V7 and audio was equally shifted in the other direction. This results in a lot of vertical timeline scrolling, since Resolve’s smallest track height is still larger than most.

Resolve, of course, is a killer grading tool that handles RED media well. My grading approach is to balance out the RED shots in the first node. Resolve lets you adjust the camera raw metadata settings for each individual clip, if you need to. Then in node 2, I’ll do most of my primary grading. After that, I’ll add nodes for selective color adjustments, masks, vignettes and so on. Resolve’s playback settings can be adjusted to throttle back the debayer resolution on playback for closer-to-real-time performance with RED media. This is especially important when you aren’t running the fastest drives or GPU cards, or using a RED Rocket card.

To output the result, I switched over to Resolve’s Deliver tab and selected the FCP X easy set-up. Select handle length, browse for a target folder and run. Resolve is a very fast renderer, even with GPU-based RED debayering, so output wasn’t long for the 130 clips that made up this short. The resulting media was 1080p ProResHQ with an additional 3 seconds per clip on either side of the timeline cut – all with baked-in color correction. The target folder also contains a new FCPXML that corresponds to the Resolve timeline with proper links to the new media files.

Roundtrip back into FCP X

Back in FCP X, I make sure I’ve turned off the import preference to transcode proxy media and that my toggle is set back to original/optimized media. Find the new FCPXML file from Resolve and import it. This will create a new event containing a new FCP X project (edited sequence), but with media linked to the Resolve render files. Audio is still an issue, for now.

There is one interesting picture glitch, which I believe is a bug in the FCPXML metadata. In the offline edit, using RED or proxy media, spatial conform is enabled and set to “fit”. That scales the 4K file to a 1080p timeline. In the sequence back from Resolve, I noticed the timeline still had yellow render bars. When I switched the spatial conform setting on a clip to “none”, the render bar over it went away, but the clip blew up much larger, as if it was trying to show a native 4K image at 1:1 – except that this was now 1080 media and NOT 4K. Apparently this resizing metadata is incorrectly held in the FCPXML file and there doesn’t appear to be any way to correct it. The workaround is to simply let it render, which didn’t seem to hurt the image quality as far as I could tell.

Audio

Now to an explanation of the audio issue. FCP X master clips are NOT like the master clips in other NLEs, including FCP 7. X’s master clips are simply containers for audio and video essence and, in that way, are not unlike compound clips. Therefore, you can edit, add and/or alter – even destructively – any material inside a master clip when you use the “open in timeline” function. You have to be careful. That appears to be the root of the XML translation issue with the audio. Of course, it all works fine WITHIN the closed FCP X environment!

Here’s the workaround. Start in FCP X. In the offline edited sequence (locked rough cut) and the sequence from Resolve, detach all audio. Delete audio from the Resolve sequence. Copy and paste the audio from the rough cut to the Resolve sequence. If you’ve done this correctly it will all be properly synced. Next, you have to get around the container issue in order to access the correct WAV files. This is done simply by highlighting the connected audio clip(s) and using the “break apart clip items” command. That’s the same command used to break apart compound clips into their component source clips. Now you’ll have the original WAV file audio and not the master clip from the camera.

At this stage I still encountered export issues. If your audio mixing engineer wants an OMF for an older Pro Tools system, then you have to go through FCP 7 (via an Xto7 translation) to create the OMF file. I’ve done this tons of times before, but for whatever reason on this project, the result was not useable. An alternative approach is to use Resolve to convert the FCPXML into XML, which can then be imported into FCP 7. This worked for an accurate translation, except that the Resolve export altered all stereo and multi-channel audio tracks into a single mono track. Therefore, a Resolve translation was also a fail. At this point in time, I have to say that a proper OMF export from FCP X-edited material is no longer an option or at least unreliable at best.

This leaves you with two options. If your mixing engineer uses Apple Logic Pro X, then that appears to correctly import and convert the native FCPXML file. If your mixer uses Pro Tools (a more likely scenario) then newer versions will read AAF files. That’s the approach I took. To create an AAF, you have to export an FCPXML from the project file. Then using the X2Pro Audio Convert application, generate an AAF file with embedded and trimmed audio content. This goes to the mixer who in turn can ingest the file into Pro Tools.

Once the mix has been completed, the exported AIF or WAV file of the mix is imported into FCP X. Strip off all audio from the final version of the FCP X project and connect the clip of the final mix to the beginning of the timeline. Now you are done and ready to export deliverables.

For more on RED and FCP X workflows, check out this series of posts by Sam Mestman at MovieMaker.

Part 1   Part 2   Part 3

©2014 Oliver Peters

24p HD Restoration

There’s a lot of good film content that only lives on 4×3 SD 29.97 interlaced videotape masters. Certainly in many cases you can go back and retransfer the film to give it new life, but for many small filmmakers, the associated costs put that out of reach. In general, I’m referring to projects with $0 budgets. Is there a way to get an acceptable HD product from an old Digibeta master without breaking the bank? A recent project of mine would say, yes.

How we got here

I had a rather storied history with this film. It was originally shot on 35mm negative, framed for 1.85:1, with the intent to end up with a cut negative and release prints for theatrical distribution. It was being posted around 2001 at a facility where I worked and I was involved with some of the post production, although not the original edit. At the time, synced dailies were transferred to Beta-SP with burn-in data on the top and bottom of the frame for offline editing purposes. As was common practice back then, the 24fps film negative was transferred to the interlaced video standard of 29.97fps with added 2:3 pulldown – a process that duplicates additional fields from the film frames, such that 24 film frames evenly add up to 60 video fields in the NTSC world. This is loaded into an Avid, where – depending on the system – the redundant fields are removed, or the list that goes to the negative cutter compensates for the adjustments back to a frame-accurate 24fps film cut.
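
The cadence itself is easy to see in a short sketch: each group of four film frames (A, B, C, D) becomes five interlaced video frames, with the B and D frames each contributing an extra field.

    # 2:3 pulldown sketch: four film frames (A B C D) become ten video fields,
    # i.e. five 29.97i frames, so 24 film frames -> 30 video frames (60 fields).
    film_frames = ["A", "B", "C", "D"]
    pulldown = [2, 3, 2, 3]                     # fields contributed per film frame

    fields = []
    for frame, count in zip(film_frames, pulldown):
        fields.extend([frame] * count)

    # Pair fields into interlaced video frames (field 1 / field 2).
    video_frames = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
    print(video_frames)
    # [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]
    # Frames 3 and 4 are "split-field" frames - two different film frames interleaved.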

For the purpose of festival screenings, the project file was loaded into our Avid Symphony, where I conformed the film at uncompressed SD resolution from the Beta-SP dailies and handled color correction. I applied a mask to hide the burn-in and ended up with a letter-boxed sequence, which was then output to Digibeta for previews and sales pitches to potential distributors. The negative went off to the negative cutter, but for a variety of reasons, that cut was never fully completed. In the two years before a distribution deal was secured, additional minor video changes were made throughout the film, ending with a revised cut that no longer matched the negative cut.

Ultimately, the distribution deal that was struck was only for international video release and nothing theatrical, which meant that rather than finishing or revising the negative cut, the most cost-effective process was to deliver a clean video master. Except that all video source material had burn-in, and the distributor required a full-height 4×3 master. Therefore, letter-boxing was out. To meet the delivery requirements, the filmmaker had to go back to the original negative and retransfer it in a 4×3 SD format, mastered to Digital Betacam. Since the negative was only partially cut and additional shots had been added or changed, I supervised the color-corrected transfer of all required 35mm film footage. Then I rebuilt the new edit timeline, largely by eye-matching the new, clean footage to the old sequence. Once done and synced with the mix, a Digibeta master was created and off it went for distribution.

What goes around comes around

After a few years in distribution, the filmmaker retrieved his master and the rights to the film, with the hope of breathing a little life into it through self-distribution – DVDs, Blu-rays, Internet, etc. With the masters back in hand, it was now a question of how best to create a new product. One thought was simply to letter-box the film (restoring the director's intended aspect ratio) and call it a day. Of course, that still wouldn't be in HD, which is where I stepped back in to create a restored master that would work for HD distribution.

Obviously, if there had been any budget to retransfer the film negative to HD and repeat the same conforming operation that I'd done a few years earlier – except now in HD – that would have been preferable. If you have some budget, that path will give you better results, so shop around. Unfortunately, while desktop tools for editing and color correction have become dirt-cheap in the intervening years, film-to-tape transfer and film scanning services have not – they retain a high price tag. So if I was to create a new HD master, the existing 4×3 NTSC interlaced Digibeta master had to be the starting point.

In my experience, if you are going to blow up SD to HD frame sizes, it's best to start with a progressive rather than an interlaced source. That's even more true when working with software, rather than hardware up-converters like a Teranex. Step one was to reconstruct a correct 23.98p SD master from the 29.97i source. To do this, I captured the Digibeta master as a ProResHQ file.

Avid Media Composer to the rescue

When you talk about software tools that are commonly available to most producers, there are a number of applications that can correctly apply a "reverse telecine" process. There are, of course, hardware solutions from Snell and Teranex (Blackmagic Design) that do an excellent job, but I'm focusing on a DIY solution in this post. That involves deconstructing the 2:3 pulldown (also called "3:2 pulldown") cadence of whole and split-field frames back into only whole frames, without any interlaced tearing (split-field frames). After Effects and Cinema Tools offer this feature, but they really only work well when the entire source clip has a consistent, unbroken cadence. This film had been completed in NTSC 29.97 TV-land, so the cadence would frequently change at cuts. In addition, some digital noise reduction had been applied to the final master after the Avid output to tape, which further altered the cadence at some cuts. Therefore, to reconstruct the proper cadence, settings had to change every few cuts and, in some scenes, at every shot change. This meant slicing the master file at every required point and applying a different setting to each clip. The only software I know of that does this effectively is Avid Media Composer.

Start in Media Composer by creating a 29.97 NTSC 4×3 project for the original source and import the film file there. Next, create a second 23.98 NTSC 4×3 project. Open the bin from the 29.97 project into the 23.98 project and edit the 29.97 film clip to a new 23.98 sequence. Media Composer will apply a default motion adapter to the clip (which is the entire film) in order to reconcile the 29.97 interlaced frame rate with the 23.98 progressive timeline.

Now comes the hard part. Open the Motion Effect Editor window and "promote" the effect to gain access to the advanced controls. Set the Type to "Both Fields", Source to "Film with 2:3 Pulldown" and Output to "Progressive". Although you can hit "Detect" and let Media Composer try to decide the right cadence, it will likely guess incorrectly on a complex file like this. Instead, under the 2:3 Pulldown tab, toggle through the cadence options until you see only whole frames when you step through the shot frame-by-frame. Move forward to the next shot(s) until the cadence changes and split-field frames appear again. Split the video track (place an "add edit") at that cut and step through the cadence choices again to find the right combination. Rinse and repeat for the whole film.
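
If you're unsure what to look for, a split-field frame is one where the two woven fields come from different film frames, which shows up as "combing" on anything in motion. As a purely illustrative aside (this is not how Media Composer's Detect function works, and the threshold value below is an assumption), here is roughly how such a frame could be flagged programmatically:

```python
import numpy as np

def comb_score(frame):
    """Crude measure of interlace "combing" in a grayscale frame (2D array).

    Each row is compared against the average of its vertical neighbors.
    When the two woven fields come from different film frames, adjacent
    rows disagree on moving edges and this score jumps noticeably.
    """
    f = frame.astype(np.float32)
    neighbor_avg = (f[:-2] + f[2:]) / 2.0
    return float(np.mean(np.abs(f[1:-1] - neighbor_avg)))

def looks_split(frame, threshold=8.0):
    # the threshold is illustrative; calibrate it against known whole frames
    return comb_score(frame) > threshold
```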

Due to the nature of the process, you might have a cut that itself occurs within a split-field frame. That’s usually because this was a cut in the negative and was transferred as a split-field video frame. In that situation, you will have to remove the entire frame across both audio and video. These tiny 1-frame adjustments throughout the film will slightly shorten the duration, but usually it’s not a big deal. However, the audio edit may or may not be noticeable. If it can’t simply be fixed by a short 2-frame dissolve, then usually it’s possible to shift the audio edit a little into a pause between words, where it will sound fine.

Once the entire film is done, export a new self-contained master file. Depending on codecs and options, this might require a mixdown within Avid, especially if AMA linking was used. That was the case for this project, because I started out in ProResHQ. After export, you’ll have a clean, reconstructed 23.98p 4×3 NTSC-sized (720×486) master file. Now for the blow-up to HD.

DaVinci Resolve

There are many applications and filters that can blow up SD footage to HD, but the results often end up soft. I've found DaVinci Resolve to offer some of the cleanest resizing, along with very fast rendering for the final output. Resolve offers three scaling algorithms, with "Sharper" providing the crispest blow-up. The second issue is the aspect ratio: restoring a wider frame by going from 4×3 to 16×9 means blowing up more than normal – enough to fill the image width and crop the top and bottom of the frame. Since Resolve has the editing tools to split clips at cuts, you have the option to change the vertical position of each shot using the tilt control, and you can do this creatively on a shot-by-shot basis if you want to. This way you can optimize each shot to best fit the 16×9 frame, rather than arbitrarily lopping off a preset amount from the top and bottom.
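
To put rough numbers on that (these are back-of-the-envelope figures, not values taken from the actual Resolve project), a quick Python sketch shows how much gets cropped when a 4×3 NTSC frame is blown up to fill the width of a 1280×720 frame:

```python
# Back-of-the-envelope math for the 4x3-to-16x9 blow-up (values illustrative).
# NTSC SD uses non-square pixels, so treat the 720x486 frame as a 4:3 picture.

SD_H = 486                      # source frame height
HD_W, HD_H = 1280, 720          # 720p target (square pixels)

sd_picture_width = SD_H * 4 / 3     # 648 "square" pixels wide as a 4:3 image
scale = HD_W / sd_picture_width     # ~1.98x blow-up to fill the HD width
scaled_height = SD_H * scale        # 960 px tall at that scale

crop_total = scaled_height - HD_H   # 240 px to trim from top and bottom combined
print(round(scale, 2), round(scaled_height), round(crop_total))  # 1.98 960 240

# A centered crop would lose ~120 px from each edge; the tilt control lets
# you bias that crop up or down per shot instead of splitting it evenly.
```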

You actually have two options. The first is to blow up the film to a large 4×3 frame out of Resolve and then do the slicing and vertical reframing in yet another application, like FCP 7. That's what I did originally with this project, because back then the available version of Resolve did not offer what I felt were solid editing tools. Today, I would use the second option, which is to do all of the reframing strictly within Resolve 11.

As always, there are some uncontrollable issues in this process. The original transfer of the film to Digibeta was done on a Rank Cintel Mark III, a telecine unit that used a CRT (literally an oscilloscope tube) as its light source. The images from these tubes get softer as they age, so the tubes require periodic scheduled replacement. During the course of the film's transfer, the lab replaced the tube, which resulted in a noticeable difference in crispness between shots done before and after the replacement. In the SD world, this didn't appear to be a huge deal. Once I started blowing up that footage, however, it really made a difference. The crisper footage (after the tube replacement) held up to more of a blow-up than the earlier footage. In the end, I opted to take the film only to 720p (1280×720) rather than a full 1080p (1920×1080), because I didn't feel that the majority of the film held up well enough at 1080 – not just because of the softness, but also because of the level of film grain. Not ideal, but the best that can be expected under the circumstances. At 720p, it's still quite good on Blu-ray, standard DVD or HD over the web.

To finish the process, I dust-busted the film to fix places with obvious negative dirt (white specks in the frame) caused by the initial handling of the film negative. I used FCP X and CoreMelt's SliceX to hide and cover the negative dirt, but other options include built-in functions within Avid Media Composer. While 35mm film still holds a certain intangible visual charm – even in such a "manipulated" state – the process certainly makes you appreciate modern digital cameras like the ARRI ALEXA!

As an aside, I've done two other complete films this way, but in those cases I was fortunate to work from 1080i masters, so no blow-up was required. One was a film transferred in its entirety from a low-contrast print, broken into reels. The second was assembled digitally and output to intermediate HDCAM-SR 23.98 masters for each reel, which were then assembled into a 1080i composite master. Aside from being in HD to start with, cadence changes only occurred at the edits between reels, so it only took five or six cadence corrections to fix the entire film.

©2014 Oliver Peters

New NLE Color Features

As someone who does color correction as often within an NLE as in a dedicated grading application, I'm glad to see that Apple and Adobe are not treating their color tools as an afterthought. (No snide Apple Color comments, please.) Both the Final Cut Pro 10.1.2 and Creative Cloud 2014 updates include new tools specifically designed to improve color correction.

Apple Final Cut Pro 10.1.2

This FCP X update includes a new, built-in LUT (look-up table) feature designed to correct log-encoded camera files into Rec 709 color space. This type of LUT is camera-specific, and FCP X now comes with preset LUTs for ARRI, Sony, Canon and Blackmagic Design cameras. This correction is applied as part of the media file's color profile and, as such, takes effect before any filters or color correction are applied.

These LUTs can be enabled for master clips in the event, or after a clip has been edited to a sequence (FCP X project). The log processing can be applied to a single clip or a batch of clips in the event browser. Simply highlight one or more clips, open the inspector and choose the "settings" selection. In that pane, access the "log processing" pulldown menu and choose one of the camera options. This applies that camera LUT to all selected clips and stays with a clip when it's edited into the sequence. The LUT can later be enabled or disabled for individual clips in the sequence as needed. Note that this LUT information does not pass through as part of an FCPXML roundtrip, such as sending a sequence to Resolve for color grading.

Although camera LUTs are specific to the color science used for each camera model's type of log encoding, this doesn't mean you can't use a different LUT. Naturally, some will be too extreme to be useful. Others, however, are close, and using a different LUT can give you a pleasing creative result, somewhat like cross-processing in a film lab.
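
Conceptually, a camera LUT like these is just a 3D lookup table that maps log code values to display (Rec 709) values. FCP X handles all of this internally, but as a rough illustration of what such a transform does, here is a minimal Python sketch that reads a .cube-style LUT and applies it with simple nearest-neighbor lookup. Real implementations use trilinear or tetrahedral interpolation, and the file name shown is hypothetical.

```python
import numpy as np

def load_cube_lut(path):
    """Minimal reader for a .cube-format 3D LUT (no error handling)."""
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(("#", "TITLE", "DOMAIN")):
                continue
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line[0].isdigit() or line[0] in "+-.":
                rows.append([float(v) for v in line.split()])
    # .cube data lists RGB triplets with the red axis varying fastest
    return np.array(rows, dtype=np.float32).reshape(size, size, size, 3), size

def apply_lut_nearest(image, lut, size):
    """Apply a 3D LUT to a float RGB image in [0, 1] via nearest-neighbor lookup."""
    idx = np.clip(np.rint(image * (size - 1)).astype(int), 0, size - 1)
    # with red varying fastest, the cube is indexed [blue, green, red]
    return lut[idx[..., 2], idx[..., 1], idx[..., 0]]

# Hypothetical usage: log_frame is an HxWx3 float array of log-encoded values.
# lut, size = load_cube_lut("camera_log_to_rec709.cube")   # hypothetical file
# rec709_frame = apply_lut_nearest(log_frame, lut, size)
```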

Adobe CC 2014 – Premiere Pro CC and SpeedGrade CC

In this CC 2014 release, Adobe added master clip effects that travel back and forth between Premiere Pro CC and SpeedGrade CC via Direct Link. Master clip effects are relational, meaning that the color correction is applied to the master clip; therefore, every instance of that clip edited into the sequence automatically gets the same correction. When you send the Premiere Pro CC sequence to SpeedGrade CC, you'll see that the 2014 version now has two correction tabs: master clip and clip. If you want to apply a master clip effect, choose that tab and do your grade. If other sections of the same clip appear on the timeline, they are automatically graded as well.

Of course, with a lot of run-and-gun footage, iris levels and lighting change during a shot, so one setting might not work for the entire clip. In that case, you can add a second level of grading by tweaking the shot in the clip tab. Effectively, you now have two levels of grading. Depending on the show, you can grade in the master clip tab, the clip tab or both. When the sequence goes back to Premiere Pro CC, SpeedGrade CC corrections are applied as Lumetri effects added to each sequence clip. Any master clip effects also "ripple back" to the master clip in the bin. This way, if you cut a new section from an already-graded master clip into that or any other sequence, color correction has already been applied to it.
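
The word "relational" can feel abstract, so here is a tiny conceptual sketch of the behavior in Python. This is only a mental model of master clip versus clip grades, not Adobe's implementation, and the clip name is made up:

```python
# Conceptual model of two-level, relational grading (not Adobe's code).
from dataclasses import dataclass, field

@dataclass
class MasterClip:
    name: str
    master_grade: list = field(default_factory=list)    # shared by every instance

@dataclass
class TimelineClip:
    source: MasterClip
    clip_grade: list = field(default_factory=list)       # unique to this instance

    def effective_grade(self):
        # the master grade is applied first, then the per-instance tweak on top
        return self.source.master_grade + self.clip_grade

interview = MasterClip("A004_C012")                       # hypothetical source clip
cut_1 = TimelineClip(interview)
cut_2 = TimelineClip(interview)

interview.master_grade.append("base balance + look")      # grades every instance
cut_1.clip_grade.append("cartoon look")                   # changes only the first use

print(cut_1.effective_grade())   # ['base balance + look', 'cartoon look']
print(cut_2.effective_grade())   # ['base balance + look']
```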

In the example I created for the image above, the shot was graded as a master clip effect. Then I added more primary correction and a filter effect using the clip mode for the first instance of the clip in the sequence. This was used to create a cartoon look for that segment of the timeline. Compare the two versions of these shots – one with only a master clip effect (shots match) and the other with a separate clip effect added on top (shots are different).

Since master clip effects apply globally to source clips within a project, editors should be careful about changing or copy-and-pasting them, as they may inadvertently alter another sequence within the same project.

©2014 Oliver Peters