Telestream Switch


For many editors, Apple QuickTime Player Pro (not QuickTime Player X) has been their go-to media player and encoding application. Since this is a discontinued piece of software and Apple is actively deprecating QuickTime with each new version of Mac OS X, it stands to reason that at some point QuickTime Player Pro will cease to function. Telestream – maker of the highly-regarded Episode encoder – plans to be ready with Switch.

Switch runs on Mac and Windows and has steadily gained features since its launch. (It is currently at version 1.6.) Switch is a multi-function media player that comes in three versions: Switch Player – a free, multi-format media player with file inspection capabilities; Switch Plus – to play, inspect, and fix media file issues; and Switch Pro – a comprehensive file encoder. All Switch versions play a wide range of media file formats and allow you to inspect file properties, but only the paid Plus and Pro versions include encoding.

Building on its experience developing Episode and its tight relationship with Apple, Telestream hopes to make Switch the all-purpose encoder of choice for most editors. The intent is for editors to use Switch where they would normally have used QuickTime Player Pro in the past. Unlike open source media players, Switch can play many professional media formats (like MXF), display embedded captions and subtitles, and properly encode to advanced file formats (like Apple ProRes). Since Switch Plus and Pro are designed for single-file processing, instead of batch encoding like Episode, their prices are also lower than that of Episode.

While the playback capabilities of Switch cover many formats, the encoding/export options are more limited. Switch Plus, which was added with version 1.6, can export MPEG-2, MPEG-4, and QuickTime (.mov) files. There’s also a pass-through mode in cases where files simply need to be rewrapped. For example, you might choose to convert Canon C300 clips from MXF into QuickTime movies, but maintain the native Canon XF codec. This might make it easier for a producer to review the media files before an upcoming edit session. Switch Plus also adds playback support for HEVC and MPEG-2 on Windows, AC3 audio, and pro audio meters that display true-peak and momentary loudness values.

Switch Pro includes all of the Plus features, as well as playback of Avid DNxHD, DNxHR, and JPEG 2000 files. It can encode in QuickTime (.mov), MPEG-4, and MPEG-2 (transport and program stream) containers. You can also export still frames and iTunes Store package formats. Codec encoding support includes H.264, MPEG-2, and ProRes. (ProRes export on Windows is ProRes HQ 4:2:2 for iTunes only.) While that’s more limited than Episode, Telestream plans to add more capabilities to Switch over time.

Switch Pro is more than an encoder: it also includes SDI output via AJA I/O devices (for preview on an external calibrated display), loudness monitoring, and caption playback. Even the free Player will pass audio out to speakers through AJA cards and USB-connected Core Audio devices. Unfortunately, this does not appear to work when you have a Blackmagic Design card installed. Telestream has acknowledged this as a bug that it plans to fix in the 2.0 release later this year.

The goal for the Switch product line is to be a powerful and affordable visual QC tool that you can also use to make corrections to metadata, formats, audio, and so on, and then encode to a new file. Along with the usual inspection of file properties, Switch includes a set of audio meters that display volume and loudness readings. Although it does not offer audio and video adjustment or correction controls, you can re-arrange audio channels and speaker assignments. Telestream Switch is a very useful encoder, but if you just need a versatile media player and inspection tool, you can easily start with the free Player version.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Final Cut Pro X Organizing Tips


Every nonlinear editing application is a database; however, Apple took this concept to a higher level when it launched Final Cut Pro X. One of its true strengths is making the organization of a mountain of media easier than ever. To get the best experience, an editor should approach the session holistically and not simply rely on FCP X to do all the heavy lifting.

At the start of every new production, I set up and work within a specific folder structure. You can use an application like Post Haste to create a folder layout, pick up some templates online, like those from FDPTraining, or simply create your own template. No matter how you get there, the main folder for that production should include subfolders for camera files, audio, graphics, other media, production documents, and projects. This last folder would include your FCP X library, as well as others, like any After Effects or Motion projects. The objective is to end up with everything that you accrue for this production in a single master folder that you can archive upon completion.
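If you prefer to roll your own template, the folder layout described above can be scripted in a few lines. This is a minimal sketch; the subfolder names are my own illustration of the structure, not a standard, so adapt them to your template:

```python
from pathlib import Path

# Subfolders for a new production, mirroring the layout described above.
# The names here are illustrative -- change them to match your own template.
SUBFOLDERS = [
    "Camera Files",
    "Audio",
    "Graphics",
    "Other Media",
    "Production Documents",
    "Projects",  # FCP X Library, After Effects / Motion projects, etc.
]

def create_production_folder(root: str, production: str) -> Path:
    """Create a master folder for a production with the standard subfolders."""
    master = Path(root) / production
    for name in SUBFOLDERS:
        (master / name).mkdir(parents=True, exist_ok=True)
    return master
```

Calling `create_production_folder("/Volumes/Media", "Client_Spot_01")` gives you the single master folder that everything accrued for the job lives under, ready to archive on completion.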

FCP X Libraries

It helps to understand the Final Cut Pro X Library structure. The Library is what would otherwise be called a project file, but in FCP X terminology an edited sequence is referred to as a Project, while the main session/production file is the Library. Unlike previous versions and other NLEs, the Library is not a closed, binary data file. It is a package file that you can open and peruse by right-clicking the Library icon and using the “show package contents” command. In there you will find various binary files (labeled CurrentVersion.fcpevent) along with a number of media folders. This structure is similar to the way Avid Media Composer project folders are organized on a hard drive. Since FCP X allows you to store imported, proxy, transcoded, and rendered media within the Library package, the media folders can be filled with the actual media used for the production. When you pick this option your Library will grow quite large, but it is completely under the application’s control, which makes the media management quite robust.

Another option is to leave media files in their place. When this is selected, the Library package’s media folders will contain aliases or shortcut links to the actual media files. These media files are located in one or more folders on your hard drive. In this case, your Library file will stay small and is easier to transfer between systems, since the actual audio and video files are externally located. I suggest spreading things out. For example, I’ll create my Library on one drive, the location of the autosaved back-up files on another, and my media on a third. This has the advantage of no single point of failure. If the Library files are located on a drive that is backed up via Time Machine or some other system-wide “cloud” back-up utility, you have even more redundancy and protection.

Following this practice, I typically do not place the Library file in the projects folder for the production, unless this is a RAID-5 (or better) drive array. If I don’t save it there during actual editing, then it is imperative to copy the Library into the project folder for archiving. The rub is that the package contains aliases, which certain software – particularly LTO back-up software – does not like. My recommendation is to create a compressed archive (.zip) file for every project file (FCP X Library, AE project, Premiere Pro project, etc.) prior to the final archiving of that production. This will prevent conflicts caused by these aliases.
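That zip-before-archive step is easy to automate. Here is a minimal sketch using only Python’s standard library (the function name is my own):

```python
import shutil
from pathlib import Path

def zip_for_archive(project_path: str) -> str:
    """Compress a project file or package (FCP X Library, AE or Premiere Pro
    project) into a sibling .zip prior to final archiving, so aliases inside
    package folders can't cause conflicts with LTO back-up software."""
    src = Path(project_path).resolve()
    # A bundle such as an .fcpbundle is really a folder, so make_archive
    # walks it as a directory tree; a flat project file is stored as-is.
    return shutil.make_archive(str(src), "zip",
                               root_dir=src.parent, base_dir=src.name)
```

Run it over each project file in the production folder before handing the job off to the archive system; the original stays untouched next to the new .zip.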

If you have set up a method of organization that saves Libraries into different folders for each production, it is still possible to have a single folder that shows you all the Libraries on your drives. To do this, create a Smart Folder in the Finder and set the criteria to filter for FCP X Libraries. Any Library will automatically be filtered into this folder as a shortcut. Clicking on any of these files will launch FCP X and open that Library.
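Under the hood, a Smart Folder is just a saved Spotlight query, and an FCP X Library is a package folder with the .fcpbundle extension. The same inventory can be sketched in Python for drives that aren’t Spotlight-indexed (a rough illustration, not a replacement for the Smart Folder):

```python
import os

def find_fcpx_libraries(search_root: str):
    """Walk a folder tree and return every FCP X Library found.
    Libraries are package folders named with the .fcpbundle extension."""
    found = []
    for dirpath, dirnames, _ in os.walk(search_root):
        for name in list(dirnames):
            if name.endswith(".fcpbundle"):
                found.append(os.path.join(dirpath, name))
                dirnames.remove(name)  # don't descend into the package itself
    return found
```

Pointing it at a media drive returns the full path of every Library, however deeply the per-production folders are nested.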

Getting started

The first level of organization is getting everything into the appropriate folders on the hard drive. Camera files are usually organized by shoot date/location, camera, and card/reel/roll. Mac OS lets you label files with color-coded Finder tags, which enables another level of organization for the editor. As an example, you might have three different on-camera speakers in a production. You could label the clips for each with a colored tag. Another example might be to tag all “circle takes” picked by the director in the field.

The next step is to create a new FCP X Library. This is the equivalent of the FCP 7 project file. Typically you would use a single Library for an entire production; however, FCP X permits you to work with multiple open Libraries, just as you could have multiple projects open in FCP 7. In order to set up all external folder locations within FCP X, highlight the Library name and then in the Inspector panel choose “modify settings” for the storage locations listed in the Library Properties panel. Here you can designate whether media goes directly into the internal folders of the Library package or to other target folders that you assign. This step is similar to setting the Capture Scratch locations in FCP 7.

How to organize clips within FCP X

Final Cut Pro X organizes master source clips on three levels – Events, Keyword Collections, and Smart Collections. These are equivalent to Bins in other NLEs, but don’t work in quite the same fashion. When clips are imported, they will go into a specific Event, which is closest in function to a Bin. It’s best to keep the number of Events low, since Keyword Collections work within an Event and not across multiple Events. I normally create individual Events for edited sequences, camera footage, audio, graphics, and a few more categories. Clips within an Event can be grouped in the browser display in different ways, such as by import date. This can be useful when you want to quickly find the last few files imported in a production that spans many days. Most of the time I set grouping and sorting to “none”.

To organize clips within an Event, use Keywords. Setting a Keyword for a clip – or a range within a clip – is comparable to creating subclips in FCP 7. When you add a Keyword, that clip or range will automatically be sorted into a Keyword Collection with a matching name. Keywords can be assigned to keyboard hot keys, which creates a very quick way to go through every clip and assign it into a variety of Keyword Collections. Clips can be assigned to more than one Collection. Again, this is equivalent to creating subclips and placing them into separate Bins.

On one set of commercials featuring company employees, I created Keyword Collections for each person, department, shoot date, store location, employees, managers, and general b-roll footage. This made it easy to derive spots that featured a diverse range of speakers. It also made it easy to locate a specific clip that the director or client might ask for, based on “I think Mary said that” or “It was shot at the Kansas City store”. Keyword Collections can be placed into folders. Collections for people by name went into one folder, Collections by location into another, and so on.

The beauty of Final Cut Pro X is that it works in tandem with any organization you’ve done in the Finder. If you spent the time to move clips into specific folders or you assigned color-coded Finder tags, then this information can be used when importing clips into FCP X. The import dialogue gives you the option to “leave files in place” and to use Finder folders and tags to automatically create corresponding Keyword Collections. Camera files that were organized into camera/date/card folders will automatically be placed into Keyword Collections that are organized in the same fashion. If you assigned clips with Mary, John, and Joe to have red, blue, and green tags for each person, then you’ll end up with those clips automatically placed into Keyword Collections named red, blue, and green. Once imported, simply rename the red, blue, and green Collections to Mary, John, and Joe.


The third level of clip organization is Smart Collections. Use these to automatically filter clips based on the criteria that you set. With the release of FCP X version 10.2, Smart Collections have been moved from the Event level (10.1.4 or earlier) to the Library level – meaning that filtering can occur across multiple Events within the Library. By default, new Libraries are created with several preset Smart Collections that can be used, deleted, or modified. Here’s an example of how to use these. When you sync double-system sound clips or multiple cameras, new grouped clips are created – Synchronized Clips and Multicam Clips. These will appear in the Event along with all the other source files, which can be unwieldy. To focus strictly on these new grouped clips, create a Smart Collection with the criteria set by type to include these two categories. Then, as new grouped clips are created, they will automatically be filtered into this Smart Collection, thus reducing clutter for the editor.

Playing nice with others

Final Cut Pro X was designed around a new paradigm, so it tends to live in its own world. Most professional editors have the need for a higher level of interoperability with other applications and with outside vendors. To aid in these functions, you’ll need to turn to third party applications from a handful of vendors that have focused on workflow productivity utilities for FCP X. These include Intelligent Assistance/Assisted Editing, XMiL, Spherico, Marquis Broadcast, and Thomas Szabo. Their utilities make it possible to go between FCP X and the outside world, through list formats like AAF, EDL, and XML.

Final Cut’s only form of decision list exchange is FCPXML, which is distinctly different from other XML formats. Apple Logic Pro X, Blackmagic Design DaVinci Resolve, and Autodesk Smoke can read it. Everything else requires a translated file, and that’s where these independent developers come in. Once you use an application like XtoCC (formerly Xto7) from Intelligent Assistance to convert FCPXML to XML for an edited sequence, other possibilities are opened up. The translated XML file can now be brought into Adobe Premiere Pro or FCP 7. Or you can use other tools designed for FCP 7. For instance, I needed to generate a print-out of markers with comments and thumbnail images from a film, in order to hand off notes to the visual effects company. By bringing a converted XML file into Digital Heaven’s Final Print – originally designed with only the older Final Cut in mind – this became a doable task.

Thomas Szabo has concentrated on some of the media functions that are still lacking in FCP X. Need to get to After Effects or Nuke? The ClipExporter and ClipExporter2 applications fit the bill. His newest tool is PrimariesExporter. This utility uses FCPXML to enable batch exports of clips from a timeline, a series of single-frame exports based on markers, or a list of clip metadata. Intelligent Assistance offers the widest set of tools for FCP X, including Producer’s Best Friend, which generates a range of reports needed on most major jobs and delivers them in spreadsheet format.

Understanding the thought processes behind FCP X and learning to use its powerful relational database will get you through complex projects in record time. Better yet, it gives you the confidence to know that no editorial stone was left unturned. For more information on advanced workflows and organization with Final Cut Pro X, check out FCPworks, MacBreak Studio (hosted by Pixel Corps), Larry Jordan, and Ripple Training.

For those who want to know more about the nuts and bolts of the post production workflow for feature films, check out Mike Matzdorff’s “Final Cut Pro X: Pro Workflow”, an iBook that’s a step-by-step advanced guide based on the lessons learned on Focus.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Building FCP X Effects – Update


A few weeks ago I built and posted a small FCP X color correction effect using the Motion template process. While I have no intention of digging deeper into plug-in design, it’s an interesting experiment into understanding how you can use the power of Motion and Final Cut Pro X to develop custom effects, transitions, and generators. In this process, I’ve done a bit of tweaking, created a few more effects, and gotten a better understanding of how it all works. If you download the updated effects, there are a total of three filters (Motion templates) – a color corrector, a levels filter, and a DVE.

Color science

In going through this exercise, a few things have been brought to my attention. First of all, filters are not totally transparent. If you apply my color correction filter, you’ll see slight changes in the videoscopes even when each tab is at its default. This doesn’t really matter since you are applying a correction anyway; but if it annoys you, then simply uncheck the item you aren’t using, like brightness or contrast.

Secondly, the exact same filter in FCP X may or may not use the same color science as the Motion version, even though they are called the same thing. Specifically this is the case with the Hue/Saturation filter. My template uses the one from Motion, of course. The FCP X Hue/Sat filter uses a color model in which saturation is held constant and luma (a composite of RGB) varies. The Motion version holds luma constant and allows saturation to vary.

The quickest way to test this is with a solid red generator. Apply the FCP X Hue/Sat filter and rotate the hue control. Set the scopes to display an RGB parade, vectorscope, and the waveform set to luma. As you rotate the hue around the dial, you’ll notice that the color dot stays neatly in the boxes of the vectorscope and moves in a straight, diagonal line from vector to vector. The RGB parade will show a perfect combination of red, blue, and green values to achieve the correct RMBCGY coordinates. However, the waveform luma levels will move up and down with large changes.

Now compare this to the hue control in the Hue/Sat filter included in my template. This is from Motion. As you rotate the hue control around the dial, the saturation value moves in what seems to be an erratic fashion around the vectorscope; but, the luma display changes very little. If you apply this same test to real footage, instead of a generated background color, you’ll get perceptually better results with Motion’s Hue/Sat filter than with the FCP X version. In most cases, either approach is acceptable, since for the purposes of color correction, you will likely only move the dial a few degrees left or right from the default of zero. Hue changes in color grading should be very subtle.
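The difference between the two color models can be sketched numerically. The code below is an illustration of the principle only – an HSL-style hue rotation versus a chroma rotation around the Rec. 709 luma axis – and makes no claim about Apple’s actual implementations:

```python
import colorsys
import math

REC709 = (0.2126, 0.7152, 0.0722)

def luma(rgb):
    """Rec. 709 luma of an RGB triple (0.0-1.0 range)."""
    return sum(c * w for c, w in zip(rgb, REC709))

def rotate_hue_hsl(rgb, degrees):
    """Hue rotation in an HSL-style model: saturation is held constant,
    but Rec. 709 luma drifts as the hue moves around the dial."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb((h + degrees / 360.0) % 1.0, l, s)

def rotate_hue_ycbcr(rgb, degrees):
    """Hue rotation as a chroma-vector rotation around the luma axis:
    luma is held constant while saturation varies. (No gamut clipping.)"""
    y = luma(rgb)
    cb = (rgb[2] - y) / 1.8556
    cr = (rgb[0] - y) / 1.5748
    a = math.radians(degrees)
    cb2 = cb * math.cos(a) - cr * math.sin(a)
    cr2 = cb * math.sin(a) + cr * math.cos(a)
    r = y + 1.5748 * cr2
    b = y + 1.8556 * cb2
    g = (y - REC709[0] * r - REC709[2] * b) / REC709[1]
    return (r, g, b)
```

Rotating pure red by 120 degrees in the HSL model lands on pure green, and luma jumps from about 0.21 to 0.72 – the level shift you see on the waveform. The same rotation in the chroma model leaves luma pinned at 0.21.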


Expanding filter features

After I built this first Motion template, I decided to poke around some more inside Motion to see if it offered other filters that had value for color correction. As a matter of fact, it does. Motion includes a very nice Levels filter, with sliders for RGB as a group, as well as individual settings for red, green, and blue. Each group is broken down into sliders for black in/out, white in/out, and gamma. Then there’s an overall mix value. That’s a total of 21 sliders, not counting opacity, which I didn’t publish in my template. Therefore, you have fairly broad control over grading using only the Levels filter.
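The math behind those sliders is straightforward. Here is one channel’s worth as a sketch, following the classic levels formula rather than Motion’s exact implementation:

```python
def levels(x, black_in=0.0, white_in=1.0,
           black_out=0.0, white_out=1.0, gamma=1.0):
    """Classic levels adjustment for one channel value in the 0.0-1.0 range:
    remap the input range, apply a gamma (mid-tone) curve, remap the output."""
    # Normalize into the input range and clamp
    t = (x - black_in) / (white_in - black_in)
    t = min(max(t, 0.0), 1.0)
    # Gamma above 1.0 lifts mid-tones; below 1.0 darkens them
    t = t ** (1.0 / gamma)
    # Scale into the output range
    return black_out + t * (white_out - black_out)
```

Apply it per channel for the red, green, and blue groups, and to all three channels for the master RGB group; a mix slider would then blend the result back against the original. For instance, a gamma of 2.0 lifts mid-gray (0.5) to roughly 0.71 while leaving pure black and white untouched.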

I thought about building it into the earlier Oliver Color filter I had created, but ran into some obvious design issues. When you build these effects, it’s important to think through the order of clicking publish on the parameters that you want to appear inside of FCP X. This sequence will determine where these values appear in the stack of controls in the FCP X inspector. In other words, even though I placed this Levels filter ahead of Color Balance within Motion, the fact that I clicked publish after these other values had already been published meant that these new controls would be placed at the bottom of my stack once displayed in FCP X. The way to correct this is to first unpublish everything and then select publish for each parameter in the order that you want it to appear.

A huge interface design concern is just how cluttered you do or don’t want your effect controls to be inside of FCP X. This was a key design issue when FCP X was created. You’ll notice that Apple’s built-in FCP X effects have a minimalist approach to the number of sliders available for each filter. Adding Levels into my Color filter template meant adding 21 more sliders to an interface that already combined a number of parameters for each of the other components. Going through this exercise makes it clear why Apple took the design approach they did and why other developers have resorted to various workarounds, such as floating controls, HUDs, and other solutions. The decision for me was simply to create a separate Oliver Levels filter that could be used separately, as needed.


More value from color presets 

An interesting discovery I made was how Color Board presets can be used in FCP X 10.2. When you choose a preset from the Color Board’s pulldown menu, you can access these settings as you always have. The downside is that you can’t preview a setting like you can other effects in the effects palette. You have to apply a preset from the Color Board to see what it will look like with your image.

FCP X 10.2 adds the ability to save filter presets. Since color correction using the Color Board has now been turned into a standard filter, you can save color presets as an effects preset. This means that if you have a number of Color Board presets (the built-in FCP X settings, mine, or any custom ones you’ve created), you can simply apply the color preset and then save that color correction filter setting as a new effects preset. When you do this you get a choice of what category to save it into. You can create your own, such as My Color Presets. Now these presets will show up in that category inside the effects palette. When you skim over the preset icon, your image will be previewed with that color correction value applied.

Although these presets appear in the same palette as other Motion templates, the effects presets themselves are stored in a different place. They are located in the OS X user library under Application Support/ProApps/Effects Presets. For example, I created 40 Color Board presets that can all be turned into Effects Presets visible within the Effects palette. I’m not going to post them that way, but if you feel ambitious, I would invite you to download the Color Board presets and make your own effects presets out of them.

All of this is a great way to experiment and see how you can use the resources Apple has provided to personalize a system tailored to your own post needs.

Click here to download the Motion template effects.

Click here to download updated and additional Motion template effects (FCP X 10.2.1 or later).

Click here to download the Color Board presets.

For some additional resources for free plug-ins, check out Ripple Training, Alex4D and FxFactory.

©2015 Oliver Peters

The FCP X – RED – Resolve Dance II


Last October I wrote about the roundtrip workflow surrounding Final Cut Pro X and Resolve, particularly as it relates to working with RED camera files. This month I’ve been color grading a small, indie feature film shot with RED One cameras at 4K resolution. The timeline is 1080p. During the course of grading the film in DaVinci Resolve 11, I’ve encountered a number of issues in the roundtrip process. Here are some workflow steps that I’ve found to be successful.

Step 1 – For the edit, transcode the RED files into 1080p Apple ProRes Proxy QuickTime movies, baking in camera color metadata and adding burn-in data for clip name and timecode. Use either REDCINE-X Pro or DaVinci Resolve for the transcode.

Step 2 – Import the proxies and double-system audio (if used) into FCP X and sync within the application or use Sync-N-Link X. Ideally all cameras should record reference audio and timecode should match between the cameras and the sound recorder. Slates should also be used as a fall-back measure.

Step 3 – Edit in FCP X until you lock the cut. Prepare a duplicate sequence (Project) for grading. In that sequence, strip off (detach and remove) all audio. As an option, you can create a mix-down track for reference and attach it as a connected clip. Flatten the timeline down to the Primary Storyline wherever possible, so that Resolve sees only a single track of video. Compound clips should be broken apart, and effects filters and titles removed. Audition clips should be finalized, but multicam clips are OK. Export an FCPXML (version 1.4 “previous”) list. You should also export a self-contained reference version of the sequence, which can be used to check the conform in Resolve.

Step 4 – Launch Resolve and make sure that the master project settings match those of your FCP X sequence. If it’s supposed to be 1920×1080 at 23.976 (23.98) fps, then make sure that’s set correctly. Resolve defaults to a frame rate of 24.0fps and that won’t work. Locate all of your camera original source media (RED camera files in this example) and add them to your media bin in the Media page. Import the FCPXML (1.4), but disable the setting to automatically load the media files in the import dialogue box. The FCPXML file will load and will relink to the RED files without issue if everything has gone correctly. The timeline may have a few clip conflicts, so look for the little indicator on the clip corner in the Edit window timeline. If there’s a clip conflict, you’ll be presented with several choices. Pick the correct one and that will correct the conflict.

Step 5 – At this point, you should verify that the files have conformed correctly by comparing against the self-contained reference file. Compound clips can still be altered in Resolve by using the Decompose function in the timeline. This will break apart the nested compound clips onto separate video tracks. In general, reframing done in the edit will translate, as will image rotation; however, flips and flops won’t. To flip and flop an image in FCP X requires a negative X or Y scale value (unless you used a filter), which Resolve cannot achieve. When you run across these in Resolve, reset the scale value to normal in the Edit page inspector for that clip. Then in the Color page use the horizontal or vertical flip functions that are part of the resizing controls. Once this is all straight, you can grade.

Step 6 option A – When grading is done, shift to the Deliver page. If your project is largely cuts-and-dissolves and you don’t anticipate further trimming or slipping of edit points in your NLE, then I would recommend exporting the timeline as a self-contained master file. You should do a complete quality check of the exported media file to make sure there were no hiccups in the render. This file can then be brought back into any NLE and combined with the final mixed track to create the actual show master. In this case, there is no roundtrip procedure needed to get back into the NLE.

Step 6 option B – If you anticipate additional editing of the graded files – or you used transitions or other effects that are unique to your NLE – then you’ll need to use the roundtrip “return” solution. In the Deliver page, select the Final Cut Pro easy set-up roundtrip. This will render each clip as an individual file at the source or timeline resolution with a user-selected handle length added to the head and tail of each clip. Resolve will also write a corresponding FCPXML file (version 1.4). This file will retain the original transitions. For example, if you used FCP X’s light noise transition, it will show up as a dissolve in Resolve’s timeline. When you go back to FCP X, it will retain the proper transition information in the list, so you’ll get back the light noise transition effect.

Resolve generates this list with the assumption that the media files were rendered at source resolution and not timeline resolution. Therefore, even if your clips are now 1920×1080, the FCPXML represents these as 4K. When you import this new FCPXML back into FCP X, a spatial conform will be applied to “fit” the files into the 1920×1080 raster space of the timeline. Change this to “none” and the 1080 media files will be blown up to 4K. You can choose to simply live with this, leave it at “fit”, and render the files again on FCP X’s output – or follow the next step for a workaround.
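If you’d rather experiment with fixing the list itself, the spatial conform setting lives in the FCPXML as an adjust-conform element on each clip, per Apple’s FCPXML reference. The sketch below forces every clip to “none”. Treat it strictly as an experiment: FCP X is picky about the order of elements inside a clip, which simple appending may violate, so keep a copy of the original list and verify the result re-imports:

```python
import xml.etree.ElementTree as ET

CLIP_TAGS = ("clip", "asset-clip", "video", "mc-clip", "sync-clip")

def force_conform_none(fcpxml_in: str, fcpxml_out: str) -> int:
    """Set spatial conform to 'none' on every clip in an FCPXML file.
    Clips without an explicit adjust-conform element get one appended.
    Returns the number of clips touched."""
    tree = ET.parse(fcpxml_in)
    changed = 0
    for clip in tree.iter():
        if clip.tag in CLIP_TAGS:
            conform = clip.find("adjust-conform")
            if conform is None:
                conform = ET.SubElement(clip, "adjust-conform")
            conform.set("type", "none")
            changed += 1
    tree.write(fcpxml_out, xml_declaration=True, encoding="UTF-8")
    return changed
```

The clip tag names above are the common FCPXML 1.4 story elements; adjust the tuple if your list uses others.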

Step 7 – Create a new Resolve project, making sure the frame rate and timeline format are correct, such as 1920×1080 at 23.976fps. Load the new media files that were exported from Resolve into the media pool. Now import the FCPXML that Resolve has generated (uncheck the selection to automatically import media files and uncheck sizing information). The media will now be conformed to the timeline. From the Edit page, export another FCPXML 1.4 for that timeline (no additional rendering is required). This FCPXML will be updated to match the media file info for the new files – namely size, track configuration, and frame rate.

At this stage, you will encounter a second serious flaw in the FCP X/Resolve/FCP X roundtrip process. Resolve 11 does not write a proper FCPXML file and leaves out certain critical asset information. You will encounter this if you move the media and lists between different machines, but not if all of the work is being done on a single workstation. The result will be a timeline that loads into FCP X with black clips (not the red “missing” icon). When you attempt to reconnect the media, FCP X will fail to relink and will issue an “incompatible files” error message. To fix the problem, either the colorist must have FCP X installed on the Resolve system or the editor must have Resolve 11 installed on the FCP X system. This last step is the one remaining workaround.
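You can at least diagnose the problem before shipping media to another machine. In an FCPXML 1.4 list, every media file is declared as an asset in the resources block with a src file URL. The sketch below flags assets whose source path is missing or malformed; which attributes Resolve 11 actually omits may vary, so treat the check as a starting point:

```python
import xml.etree.ElementTree as ET

def check_fcpxml_assets(fcpxml_path: str):
    """Return (asset id, name) pairs whose 'src' file URL is missing or
    empty -- a symptom of incomplete asset info in an exported list."""
    tree = ET.parse(fcpxml_path)
    bad = []
    for asset in tree.iter("asset"):
        src = asset.get("src", "")
        if not src.startswith("file://"):
            bad.append((asset.get("id"), asset.get("name")))
    return bad
```

An empty result doesn’t guarantee the list will relink on another workstation, but a non-empty one tells you up front that the roundtrip will fail there.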

Step 8 option A – If FCP X is installed on the Resolve machine, import the FCPXML into FCP X and reconnect the media generated by Resolve. Then re-export a new FCPXML from FCP X. This new list and media can be moved to any other system. You can move the FCP X Library successfully, as well.

Step 8 option B – If Resolve is installed on the FCP X machine, then follow Step 7. The new FCPXML that you create there will load into FCP X, since you are on the same system.

That’s the state of things right now. Maybe some of these flaws will be fixed with Resolve 12, but I don’t know at this point. The FCPXML list format involves a bit of voodoo at times and this is one of those cases. The good news is that Resolve is very solid when it comes to relinking, which will save you. Good luck!

©2015 Oliver Peters

Understanding SpeedGrade

How you handle color correction depends on your temperament and level of expertise. Some editors want to stay within the NLE, so that editorial adjustments are easily made after grading. Others prefer the roundtrip to a powerful external application. When Adobe added the Direct Link conduit between Premiere Pro CC and SpeedGrade CC, they gave Premiere Pro editors the best of both worlds.


SpeedGrade is a standalone grading application that was initially designed around an SDI feed from the GPU to a second monitor for your external video. After the Adobe acquisition, Mercury Transmit was eventually added, so you can run SpeedGrade with one display, two computer displays, or a computer display plus a broadcast monitor. With a single display, the video viewer is integrated into the interface. At home, I use two computer displays, so by enabling a dual display layout, I get the SpeedGrade interface on one screen and the full-screen video viewer on the other. To do this you have to correctly offset the pixel dimensions and position for the secondary display in order to see it. Otherwise the image is hidden behind the interface.

Using Mercury Transmit, the viewer image is sent to an external monitor, but you’ll need an appropriate capture/monitoring card or device. AJA products seem to work fine. Some Blackmagic devices work and others don’t. When Mercury Transmit is active, you lose the viewer from the interface, so it’s best to have the external display close – as in next to your interface monitor.


When you use Direct Link, you are actually sending the Premiere Pro timeline to SpeedGrade. This means that edits and timeline video layers are determined by Premiere Pro and those editing functions are disabled in SpeedGrade. It IS the Premiere Pro timeline. As a result, certain formats that might not be supported in a standalone SpeedGrade project will work via the Direct Link path – as long as Premiere Pro natively supports them.

There is a symbiotic relationship between Premiere Pro and SpeedGrade. For example, I worked on a music video that was edited natively using RED camera media. The editor had done a lot of reframing from the native 4K media in the 1080 timeline. All of this geometry was correctly interpreted by SpeedGrade. When I compared the same sequence in Resolve (using an XML roundtrip), the geometry was all wrong. SpeedGrade doesn’t give you access to the camera raw settings for the .r3d media, but Premiere Pro does. So in this case, I adjusted the camera raw values by using the source settings control in Premiere Pro, which then carried those adjustments over to SpeedGrade.

Since the Premiere Pro timeline is the SpeedGrade timeline when you use Direct Link, you can add elements into the sequence from Premiere, in order to make them available in SpeedGrade. Let’s say you want to add a common edge vignette across all the clips of your sequence. Simply add an adjustment layer to a top track while in Premiere. This appears in your SpeedGrade timeline, enabling you to add a mask and correction within the adjustment layer clip. In addition, any video effects filters that you’ve applied in Premiere will show up in SpeedGrade. You don’t have access to the controls, but you will see the results interactively as you make color correction adjustments.

All SpeedGrade color correction values are applied to the clip as a single Lumetri effect when you send the timeline back to Premiere Pro. All grading layers are collapsed into a single composite effect per clip, which appears in the clip’s effect stack (in Premiere Pro) along with all other filters. In this way you can easily trim edit points without regard to the color correction. Traditional roundtrips render new media with baked-in color correction values, so you can only trim within the boundaries of the handles added to each file upon rendering. Not so with Direct Link, since color correction is like any other effect applied to the original media. Any editorial changes you’ve made in Premiere Pro are reflected in SpeedGrade should you go back for tweaks, as long as you continue to use Direct Link.

12-way and more

Most editors are familiar with 3-way color correctors that have level and balance controls for shadows, midrange and highlights. Many refer to SpeedGrade’s color correction model as a 12-way color corrector. The grading interface features a 3-way (lift/gamma/gain) control for four ranges of correction: overall, shadows, midrange, and highlights. Each tab also adds control of contrast, pivot, color temperature, magenta (tint), and saturation. Since shadow, midrange, and highlight ranges overlap, you also have sliders that adjust the overlap thresholds between shadow and midrange and between the midrange and highlight areas.
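
To make the lift/gamma/gain idea concrete, here is a simplified sketch in Python of the standard grading math for one channel. This is generic textbook math, not SpeedGrade’s actual internal code, and the function name is mine.

```python
# Illustrative lift/gamma/gain math on a normalized (0.0-1.0) channel value.
# This is generic grading math, not SpeedGrade's internal implementation.

def lift_gamma_gain(value, lift=0.0, gamma=1.0, gain=1.0):
    """Apply a basic 3-way-style adjustment to one channel value."""
    v = value * gain + lift          # gain scales, lift offsets the signal
    v = min(max(v, 0.0), 1.0)        # clamp before the power function
    return v ** (1.0 / gamma)        # gamma bends the midtones

# A "12-way" model conceptually runs a 3-way like this over four ranges
# (overall, shadows, midrange, highlights), each weighted by an overlap
# mask so the ranges blend into one another.
```

The overlap-threshold sliders mentioned above would control how those per-range weighting masks feather into each other.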

Color correction is layer based – similar to Photoshop or After Effects. SpeedGrade features primary (“P”), secondary (“S”) and filter layers (the “+” symbol). When you add layers, they are stacked from bottom to top and each layer includes an opacity control. As such, layers work much the same as rooms in Apple Color or nodes in DaVinci Resolve. You can create a multi-layered adjustment by using a series of stacked primary layers. Shape masks, like that for a vignette, should be applied to a primary layer. The mask may be normal or inverted so that the correction is applied either to the inside or the outside of the mask. Secondaries should be reserved for HSL keys – for instance, keying the skin tones of a face to adjust them separately from the rest of the image. The filter layer (“+”) is where you’ll find a number of useful tools, including Photoshop-style creative effect filters, LUTs, and curves.

Working with grades

Color correction can be applied to a clip as either a master clip correction or just a clip correction (or both). When you grade using the default clip tab, that color correction is only applied to that single clip. If you grade in the master clip tab, then any color correction that you apply to that clip will also be applied to every other instance of that same media file elsewhere on the timeline. Theoretically, in a multicam edit – made up of four cameras with a single media file per camera – you could grade the entire timeline by simply color correcting the first clip for each of the four cameras as a master clip correction. All other clips would automatically inherit the same settings. Of course, that almost never works out quite so perfectly, so you can grade a clip using both the master clip and the regular clip tabs. Use the master for a general setting and still use the regular clip tab to tweak each shot as needed.

Grades can be saved and recalled as Lumetri Looks, but typically these aren’t as useful in actual grading as standard copy-and-paste functions – a recent addition to SpeedGrade CC. Simply highlight one or more layers of a graded clip and press copy (cmd+c on a Mac). Then paste (cmd+v on a Mac) those to the target clip. These will be pasted in a stack on top of the default, blank primary correction that’s there on every clip. You can choose to use, ignore, or delete this extra primary layer.

SpeedGrade features a cool trick to facilitate shot matching. The timeline playhead can be broken out into multiple playheads, which will enable you to compare two or more shots in real time on the viewer. This quick comparison lets you make adjustments to each to get a closer match in context with the surrounding shots.

A grading workflow

Everyone has their own approach to grading and these days there’s a lot of focus on camera and creative LUTs. My suggestions for prepping a Premiere Pro CC sequence for SpeedGrade CC go something like this.

Once you are largely done with the editing, collapse all multicam clips and flatten the timeline as much as possible down to the bottom video layer. Add one or two video tracks with adjustment layers, depending on what you want to do in the grade. These should be above the last video layer. All graphics – like lower thirds – should be on tracks above the adjustment layer tracks. This is assuming that you don’t want to include these in the color correction. Now duplicate the sequence and delete the tracks with the graphics from the dupe. Send the dupe to SpeedGrade CC via Direct Link.

In SpeedGrade, ignore the first primary layer and add a filter layer (“+”) above it. Select a camera patch LUT. For example, an ARRI Log-C-to-Rec-709 LUT for Log-C gamma-encoded Alexa footage. Repeat this for every clip from the same camera type. If you intend to use a creative LUT, like one of the SpeedLooks from LookLabs, you’ll need one of their camera patches. This shifts the camera video into a unified gamma profile optimized for their creative LUTs. If all of the footage used in the timeline came from the same camera and used the same gamma profile, then in the case of SpeedLooks, you could apply the creative LUT to one of the adjustment layer clips. This will apply that LUT to everything in the sequence.
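
Conceptually, a LUT is just a table that remaps code values. Real camera-patch LUTs (like Log-C to Rec 709) are typically 3D .cube files, but this simplified 1D sketch in Python – my own illustration, not any shipping LUT – shows the basic lookup-and-interpolate operation that a LUT applies:

```python
# Conceptual sketch of how a LUT remaps code values. Real camera-patch
# LUTs are usually 3D .cube files; this 1D version just illustrates the
# lookup-and-interpolate idea on a normalized 0.0-1.0 value.

def apply_1d_lut(value, lut):
    """Map a 0.0-1.0 value through a 1D LUT with linear interpolation."""
    pos = value * (len(lut) - 1)         # position in LUT index space
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1 - frac) + lut[hi] * frac

# A tiny made-up contrast-boosting LUT with 5 entries:
lut = [0.0, 0.15, 0.5, 0.85, 1.0]
```

A camera patch LUT works the same way in principle, just with a table shaped to pull log-encoded values into display gamma.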

Once you’ve applied input and output LUTs, you can grade each clip as you’d like, using primary and secondary layers. Use filter layers for curves. Any order and any number of layers per clip is fine. With this methodology, all grading happens between the camera patch LUT and the creative LUT added to the adjustment layer track. Finally, if you want a soft edge vignette on all clips, apply an edge mask to the default primary layer of the topmost adjustment layer clip. Adjust the size, shape, and softness of the mask. Darken the outside of the mask area. Done.

(Note that not every camera uses logarithmic gamma encoding, nor do you want to use LUTs on every project. These are the “icing on the cake”, NOT the “meat and potatoes” of grading. If your sequence is a standard correction without any stylized creative looks, then ignore the LUT procedures I described above.)

Now simply send your timeline back to Premiere Pro (the “Pr” button). Back in Premiere Pro CC, duplicate that sequence. Copy-and-paste the graphics tracks from the original sequence to the available blank tracks of the copy. When done, you’ll have three sequences: 1) non-color corrected with graphics, 2) color corrected without graphics, and 3) final with color correction and graphics. The beauty of the Direct Link path between Premiere Pro CC and SpeedGrade CC is that you can easily go back and forth for changes without ever being locked in at any point in the process.

©2015 Oliver Peters

Preparing Digital Camera Files


The modern direction in file-based post production workflows is to keep your camera files native throughout the entire pipeline. While this might work within a closed loop, like a self-contained Avid, Adobe or Apple workflow, it breaks down when you have to move your project across multiple applications. It’s common for an editor to send files to a Pro Tools studio for the final mix and to a colorist running Resolve, Baselight, etc. for the final grade. In doing so, you have to ensure that editorial decisions aren’t incorrectly translated in the process, because the NLE might handle a native camera format differently than the mixer’s or colorist’s tool. To keep the process solid, I’ve developed some disciplines in how I like to handle media. The applications I mention are for Mac OS, but most of these companies offer Windows versions, too. If not, you can easily find equivalents.

Copying media

The first step is to get the media from the camera cards to a reliable hard drive. It’s preferable to have at least two copies (from the location) and to make the copies using software that verifies the back-up. This is a process often done on location by the lowly “data wrangler” under less than ideal conditions. A number of applications, such as Imagine Products’ ShotPut Pro and Adobe Prelude, let you do this task, but my current favorite is Red Giant’s Offload. It uses a dirt-simple interface permitting one source and two target locations. It has the sole purpose of safely transferring media with no other frills.

Processing media on location

With the practice of shooting footage with a flat-looking log gamma profile, many productions like to also see the final, adjusted look on location. This often involves some on-site color grading to create either a temporary look or even the final look. Usually this task falls to a DIT (digital imaging technician). Several applications are available, including DaVinci Resolve, Pomfort Silverstack and Redcine-X Pro. Some new applications, specifically designed for field use, include Red Giant’s BulletProof and Catalyst Browse/Prepare from Sony Creative Software. Catalyst Browse is free and designed for all Sony cameras, whereas Catalyst Prepare is a paid application that covers Sony cameras, but also other brands, including Canon and GoPro. Depending on the application, these tools may be used to add color correction, organize the media, transcode file formats, and even prepare simple rough assemblies of selected footage.

All of these tools add a lot of power, but frankly, I’d prefer that the production company leave these tasks up to the editorial team and allow more time in post. In my testing, most of the aforementioned apps work as advertised; however, BulletProof continues to have issues with the proper handling of timecode.

Transcoding media

I’m not a big believer in always using native media for the edit, unless you are in a fast turnaround situation. To get the maximum advantage for interchanging files between applications, it is ideal to end up in one of several common media formats, if that isn’t how the original footage was recorded. You also want every file to have unique and consistent metadata, including file names, reel IDs and timecode. The easiest common media format is QuickTime using the .mov wrapper and encoded using either Apple ProRes, Panasonic AVC-Intra, Sony XDCAM, or Avid DNxHD codecs. These are generally readable in most applications running on Mac or PC. My preference is to first convert all files into QuickTime using one of these codecs, if they originated as something else. That’s because the file is relatively malleable at that point and doesn’t require a rigid external folder structure.

Applications like BulletProof and Catalyst can transcode camera files into another format. Of course, there are dedicated batch encoders like Sorenson Squeeze, Apple Compressor, Adobe Media Encoder and Telestream Episode. My personal choice for a tool to transcode camera media is either MPEG Streamclip (free) or Divergent Media’s EditReady. Both feature easy-to-use batch processing interfaces, but EditReady adds the ability to apply LUTs, change file names and export to multiple targets. It also reads formats that MPEG Streamclip doesn’t, such as C300 files (Canon XF codec wrapped as .mxf). If you want to generate a clean master copy preserving the log gamma profile, as well as a second lower resolution editorial file with a LUT applied, then EditReady is the right application.
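
For a scriptable equivalent of the rewrap-without-transcoding idea, ffmpeg (assuming it’s installed on your system) can copy existing streams into a new wrapper without re-encoding. A minimal sketch, with a made-up clip name:

```python
# Sketch of a command-line rewrap using ffmpeg, assuming it is installed.
# "-c copy" copies the existing video/audio streams unchanged into a new
# .mov wrapper - the same pass-through idea as a GUI rewrap tool.

def rewrap_command(src, dst):
    """Build an ffmpeg command that rewraps a file without transcoding."""
    return ["ffmpeg", "-i", src, "-c", "copy", dst]

cmd = rewrap_command("CLIP0001.mxf", "CLIP0001.mov")
# e.g. run it with: subprocess.run(cmd, check=True)
```

Because no encoding happens, a rewrap like this runs at disk speed and leaves the picture untouched, which is exactly why it’s attractive for producer review copies.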

Altering your media

I will go to extra lengths to make sure that files have proper names, timecode and source/tape/reel ID metadata. Most professional video cameras will correctly embed that information. Others, like the Canon 5D Mark III, might encode a non-standard timecode format, allow duplicated file names, and not add reel IDs.

Once the media has been transcoded, I will use two applications to adjust the file metadata. For timecode, I rely on VideoToolShed’s QtChange. This application lets you alter QuickTime files in a number of ways, but I primarily use it to strip off unnecessary audio tracks and bad timecode. Then I use it to embed proper reel IDs and timecode. Because it does this by altering header information, processing a lot of files happens quickly. The second tool in this mix is Better Rename, which is a batch renaming utility. I use it frequently for adding, deleting or changing all or part of the file name for a batch of files. For instance, I might append a production job number to the front of a set of Canon 5D files. The point in doing all of this is so that you can easily locate the exact same point within any file using any application, even several years apart.
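
The renaming step itself is trivial to script. Here’s a minimal Python sketch of the “append a job number” example; the job number and file names are hypothetical:

```python
# Minimal batch-rename sketch: prefix a job number onto a set of camera
# files, similar in spirit to what a GUI tool like Better Rename does.
# Job number and filenames are made up for illustration.
import os

def prefixed_names(filenames, prefix):
    """Return (old, new) name pairs, skipping files already prefixed."""
    return [(name, prefix + name) for name in filenames
            if not name.startswith(prefix)]

clips = ["MVI_0001.mov", "MVI_0002.mov"]
for old, new in prefixed_names(clips, "JOB1234_"):
    # os.rename(old, new)  # uncomment to actually rename on disk
    print(old, "->", new)
```

Keeping the rename logic in a pure function (and the `os.rename` commented out) makes it easy to dry-run the result before touching any media.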

Speed is a special condition. Most NLEs handle files with mixed frame rates within the same project and sequences, but often such timelines do not correctly translate from one piece of software to the next. Edit lists are interchanged using EDL, XML, FCPXML and AAF formats and each company has its own variation of these formats. Some formats, like FCPXML, require third party utilities to translate the list, adding another variable. Round-tripping, such as going from NLE “A” (for offline) to Color Correction System “B” (for grading) and then to NLE “C” (for finishing), often involves several translations. Apart from effects, speed differences in native camera files can be a huge problem.

A common mixed frame rate situation in the edit is combining 23.98fps and 29.97fps footage. If both of these were intended to run in real-time, then it’s usually OK. However, if the footage was recorded with the intent to overcrank for slomo (59.94 or 29.97 native for a timebase of 23.98) then you start to run into issues. As long as the camera properly flags the file, so that every application plays it at the proper timebase (slowed), then things are fine. This isn’t true of DSLRs, where you might shoot 720p/59.94 for use as slomo in a 1080p/29.97 or 23.98 sequence. With these files, my recommendation is to alter the speed of the file first, before using it inside the NLE. One way to do this is to use Apple Cinema Tools (part of the defunct Final Cut Studio package, but can still be found). You can batch-conform a set of 59.94fps files to play natively at 23.98fps in very short order. This should be done BEFORE adding any timecode with QtChange. Remember that any audio will have its sample rate shifted, which I’ve found to be a problem with FCP X. Therefore, when you do this, also strip off the audio tracks using QtChange. They play slow anyway and so are useless in most cases where you want overcranked, slow motion files.
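
The conform math is worth seeing on paper. In a Cinema Tools-style conform, the frames are untouched and only the playback clock changes, so the duration stretches by 2.5x and any embedded 48kHz audio effectively behaves like 19.2kHz audio. A sketch of that arithmetic (function and names are my own illustration):

```python
# The arithmetic behind a frame-rate conform: same frames, slower clock.
# Duration stretches and embedded audio shifts in speed and pitch, which
# is why stripping the audio tracks first is the safer move.

def conform(frame_count, shot_fps, conform_fps, audio_rate=48000):
    """Return (new duration in seconds, effective audio sample rate)."""
    duration = frame_count / conform_fps            # frames at the new clock
    effective_rate = audio_rate * conform_fps / shot_fps
    return duration, effective_rate

# Roughly 10 seconds shot at 59.94, conformed to 23.976:
dur, rate = conform(frame_count=600, shot_fps=59.94, conform_fps=23.976)
# duration stretches to about 25 seconds; 48kHz audio behaves like 19.2kHz
```

This is the same 2.5:1 ratio regardless of clip length, since 59.94/23.976 = 2.5 exactly.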

Audio in your NLE

The last point to understand is that not all NLEs deal with audio tracks in the same fashion. Often camera files are recorded with multiple mono audio sources, such as a boom and a lav mic on channels 1 and 2. These may be interpreted either as stereo or as dual mono, depending on the NLE. Premiere Pro CC in particular sees these as stereo when imported. If you edit them to the timeline as a single stereo track, you will not be able to correct this in the sequence afterwards by panning. Therefore, it’s important to remember to first set up your camera files with a dual mono channel assignment before making the first edit. This same issue crops up when round-tripping files through Resolve. It may not properly handle audio, depending on how it interprets these files, so be careful.
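
At the sample level, “stereo vs. dual mono” is just a question of how interleaved channels are treated. This illustrative sketch (my own, not any NLE’s code) splits interleaved 16-bit stereo samples into two independent mono channels, the way a dual mono assignment treats boom and lav:

```python
# Illustration of dual mono: camera audio is often interleaved L/R 16-bit
# samples. Splitting the interleave yields two independent mono channels
# (e.g. boom on ch1, lav on ch2) that can be leveled separately.
import struct

def split_stereo(interleaved):
    """Split interleaved 16-bit stereo bytes into two mono byte strings."""
    count = len(interleaved) // 2                     # total 16-bit samples
    samples = struct.unpack("<%dh" % count, interleaved)
    ch1 = struct.pack("<%dh" % (count // 2), *samples[0::2])
    ch2 = struct.pack("<%dh" % (count // 2), *samples[1::2])
    return ch1, ch2
```

A stereo interpretation instead locks the two channels into one L/R pair, which is exactly why panning can’t fix the mix later.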

These steps add a bit more time at the front end of any given edit, but are guaranteed to give you a better editing experience on complex projects. The results will be easier interchange between applications and more reliable relinking. Finally, when you revisit a project a year or more down the road, everything should pop back up, right where you left it.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

Adobe Anywhere and Divine Access


Editors like the integration of Adobe’s software, especially Dynamic Link and Direct Link between creative applications. This sort of approach is applied to collaborative workflows with Adobe Anywhere, which permits multiple stakeholders, including editors, producers and directors, to access common media and productions from multiple, remote locations. One company that has invested in the Adobe Anywhere environment is G-Men Media of Venice, California, who installed it as their post production hub. By using Adobe Anywhere, Jeff Way (COO) and Clay Glendenning (CEO) sought to improve the efficiency of the filmmaking process for their productions. No science project – they have now tested the concept in the real world on several indie feature films.

Their latest film, Divine Access, produced by The Traveling Picture Show Company in association with G-Men Media, is a religious satire centering on reluctant prophet Jack Harriman. Forces both natural and supernatural lead Harriman down a road to redemption culminating in a final showdown with his longtime foe, Reverend Guy Roy Davis. Steven Chester Prince (Boyhood, The Ringer, A Scanner Darkly) moves behind the camera as the film’s director. The entire film was shot in Austin, Texas during May of 2014, but the processing of dailies and all post production was handled back at the Venice facility. Way explains, “During principal photography we were able to utilize our Anywhere system to turn around dailies and rough cuts within hours after shooting. This reduced our turnaround time for review and approval, thus reducing budget line items. Using Anywhere enabled us to identify cuts and mark them as viable the same day, reducing the need for expensive pickup shoots later down the line.”

The production workflow

Director of Photography Julie Kirkwood (Hello I Must Be Going, Collaborator, Trek Nation) picked the ARRI ALEXA for this film and scenes were recorded as ProRes 4444 in 2K. An on-set data wrangler would back up the media to local hard drives and then a runner would take the media to a downtown upload site. The production company found an Austin location with 1GB upload speeds. This enabled them to upload 200GB of data in about 45 minutes. Most days only 50-80GB were uploaded at one time, since uploads happened several times throughout each day.
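
Those transfer numbers check out once real-world overhead is factored in: on an ideal 1 Gbps link, 200GB would take roughly 27 minutes, so the quoted 45 minutes implies around 60% effective throughput. A quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope transfer-time estimate. The 60% efficiency figure
# is my own assumption to account for protocol and storage overhead.

def upload_minutes(gigabytes, link_gbps, efficiency=1.0):
    """Estimate transfer time in minutes for a given link speed."""
    gigabits = gigabytes * 8
    return gigabits / (link_gbps * efficiency) / 60

ideal = upload_minutes(200, 1.0)            # about 26.7 minutes
realistic = upload_minutes(200, 1.0, 0.6)   # about 44.4 minutes
```

The same arithmetic explains why the smaller 50-80GB batches could turn around in well under half an hour each.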

Way says, “We implemented a technical pipeline for the film that allowed us to remain flexible.  Adobe’s open API platform made this possible. During production we used an Amazon S3 instance in conjunction with Aspera to get the footage securely to our system and also act as a cloud back-up.” By uploading to Amazon and then downloading the media into their Anywhere system in Venice, G-Men now had secure, full-resolution media in redundant locations. Camera LUTs were also sent with the camera files, which could be added to the media for editorial purposes in Venice. Amazon will also provide a long-term archive of the 8TB of raw media for additional protection and redundancy. This Anywhere/Amazon/Aspera pipeline was supervised by software developer Matt Smith.

Back in Venice, the download and ingest into the Anywhere server and storage was an automated process that Smith programmed. Glendenning explains, “It would automatically populate a bin named for that day with the incoming assets. Wells [Phinny, G-Men editorial assistant] would be able to grab from subfolders named ‘video’ and ‘audio’ to quickly organize clips into scene subfolders within the Anywhere production that he would create from that day’s callsheet. Wells did most of this work remotely from his home office a few miles away from the G-Men headquarters.” Footage was synced and logged for on-set review of dailies and on-set cuts the next day. Phinny effectively functioned as a remote DIT in a unique way.

Remote access in Austin to the Adobe Anywhere production for review was made possible through an iPad application. Way explains, “We had close contact with Wells via text message, phone and e-mail. The iPad access to Anywhere used a secure VPN connection over the Internet. We found that a 4G wireless data connection was sufficient to play the clips and cuts. On scenes where the director had concerns that there might not be enough coverage, the process enabled us to quickly see something. No time was lost to transcoding media or to exporting a viewable copy, which would be typical of the more traditional way of working.”

Creative editorial mixing Adobe Anywhere and Avid Media Composer

Once principal photography was completed, editing moved into the G-Men mothership. Instead of editing with Premiere Pro, however, Avid Media Composer was used. According to Way, “Our goal was to utilize the Anywhere system throughout as much of the production as possible. Although it would have been nice to use Premiere Pro for the creative edit, we believed going with an editor that shared our director’s creative vision was the best for the film. Kindra Marra [Scenic Route, Sassy Pants, Hick] preferred to cut in Media Composer. This gave us the opportunity to test how the system could adapt already existing Adobe productions.” G-Men has handled post on other productions where the editor worked remotely with an Anywhere production. In this case, since Marra lived close-by in Santa Monica, it was simpler just to set up the cutting room at their Venice facility. At the start of this phase, assistant editor Justin (J.T.) Billings joined the team.

Avid now offers subscription pricing, so G-Men set up the Divine Access cutting room with a Mac Pro, “renting” the Media Composer 8 software for a few months. The Anywhere servers are integrated with a Facilis Technology TerraBlock shared storage network, which is compatible with most editing applications, including both Premiere Pro and Media Composer. The Mac Pro tower was wired into the TerraBlock SAN and was able to see the same ALEXA ProRes media as Anywhere. According to Billings, “Once all the media was on the TerraBlock drives, Marra was able to access these in the Media Composer project using Avid’s AMA-linking. This worked well and meant that no media had to be duplicated. The film was cut solely with AMA-linked media. External drives were also connected to the workstations for nightly back-ups as another layer of protection.”

Adobe Anywhere at the finish line

Once the cut was locked, an AAF composition for the edited sequence was sent from Media Composer to DaVinci Resolve 11, which was installed on an HP workstation at G-Men. This unit was also connected to the TerraBlock storage, so media instantly linked when the AAF file was imported. Freelance colorist Mark Todd Osborne graded the film on Resolve 11 and then exported a new AAF file corresponding to the rendered media, which now also existed on the SAN drives. This AAF composition was then re-imported into Media Composer.

Billings continues, “All of the original audio elements existed in the Media Composer project and there was no reason to bring them into Premiere Pro. By importing Resolve’s AAF back into Media Composer, we could then double-check the final timeline with audio and color corrected picture. From here, the audio and OMF files were exported for Pro Tools [sound editorial and the mix is being done out-of-house]. Reference video of the film for the mix could now use the graded images. A new AAF file for the graded timeline was also exported from Media Composer, which then went back into Premiere Pro and the Anywhere production. Once we get the mixed tracks back, these will be added to the Premiere Pro timeline. Final visual effects shots can also be loaded into Anywhere and then inserted into the Premiere Pro sequence. From here on, all further versions of Divine Access will be exported from Premiere Pro and Anywhere.”

Glendenning points out that, “To make sure the process went smoothly, we did have a veteran post production supervisor – Hank Braxtan – double check our workflow. He and I have done a lot of work together over the years and he has more than a decade of experience overseeing an Avid house. We made sure he was available whenever there were Avid-related technical questions from the editors.”

Way says, “Previously, on post production of [the indie film] Savageland, we were able to utilize Anywhere for full post production through to delivery. Divine Access has allowed us to take advantage of our system on both sides of the creative edit including principal photography and post finishing through to delivery. This gives us capabilities through entire productions. We have a strong mix of Apple and PC hardware and now we’ve proven that our Anywhere implementation is adaptable to a variety of different hardware and software configurations. Now it becomes a non-issue whether it’s Adobe, Avid or Resolve. It’s whatever the creative needs dictate; plus, we are happy to be able to use the fastest machines.”

Glendenning concludes, “Tight budget projects have tight deadlines and some producers have missed their deadlines because of post. We installed Adobe Anywhere and set up the ecosystem surrounding it because we feel this is a better way that can save time and money. I believe the strategy employed for Divine Access has been a great improvement over the usual methods. Using Adobe Anywhere really let us hit it out of the park.”

Originally written for DV magazine / CreativePlanetNetwork.

©2015 Oliver Peters