New NLE Color Features


As someone who does color correction as often within an NLE as in a dedicated grading application, it’s nice to see that Apple and Adobe are not treating their color tools as an afterthought. (No snide Apple Color comments, please.) Both the Final Cut Pro 10.1.2 and Creative Cloud 2014 updates include new tools specifically designed to improve color correction. (Click the images below for an expanded view with additional explanation.)

Apple Final Cut Pro 10.1.2


This FCP X update includes a new, built-in LUT (look-up table) feature designed to correct log-encoded camera files into Rec 709 color space. This type of LUT is camera-specific and FCP X now comes with preset LUTs for ARRI, Sony, Canon and Blackmagic Design cameras. This correction is applied as part of the media file’s color profile and, as such, takes effect before any filters or color correction is applied.

These LUTs can be enabled for master clips in the event, or after a clip has been edited to a sequence (FCP X project). The log processing can be applied to a single clip or a batch of clips in the event browser. Simply highlight one or more clips, open the inspector and choose the “settings” selection. In that pane, access the “log processing” pulldown menu and choose one of the camera options. This applies that camera LUT to all selected clips and stays with a clip when it’s edited to the sequence. The LUT can later be enabled or disabled on individual sequence clips as needed. This LUT information does not pass through as part of an FCPXML roundtrip, such as sending a sequence to Resolve for color grading.

Although camera LUTs are specific to the color science used for each camera model’s type of log encoding, this doesn’t mean you can’t use a different LUT. Naturally some will be too extreme and not desirable. Some, however, are close and using a different LUT might give you a desirable creative result, somewhat like cross-processing in a film lab.
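
Out of curiosity about what such a conversion amounts to numerically, here’s a minimal Python sketch. It uses a made-up 1D tone curve as a stand-in; real camera LUTs are 3D cubes built from each vendor’s color science, so none of the numbers below come from an actual shipping LUT.

```python
# Illustrative only: a simple 1D "log to display" curve applied per channel.
# The sample points are invented for this sketch, not taken from any camera LUT.
import numpy as np

lut_in = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])       # log-encoded input levels
lut_out = np.array([0.0, 0.05, 0.18, 0.45, 0.78, 1.0])  # display-referred output levels

def apply_lut(image: np.ndarray) -> np.ndarray:
    """Map a float image in [0, 1] through the curve, channel by channel."""
    return np.interp(image, lut_in, lut_out)

log_frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in for a camera frame
rec709_frame = apply_lut(log_frame)
print(rec709_frame.min(), rec709_frame.max())
```

A real conversion also has to account for the camera’s specific transfer function and gamut, which is why the built-in presets are organized per manufacturer.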

Adobe CC 2014 – Premiere Pro CC and SpeedGrade CC


In this CC 2014 release, Adobe added master clip effects that travel back and forth between Premiere Pro CC and SpeedGrade CC via Direct Link. Master clip effects are relational, meaning that the color correction is applied to the master clip and, therefore, every instance of this clip that is edited to the sequence will have the same correction applied to it automatically. When you send the Premiere Pro CC sequence to SpeedGrade CC, you’ll see that the 2014 version now has two correction tabs: master clip and clip. If you want to apply a master clip effect, choose that tab and do your grade. If other sections of the same clip appear on the timeline, they will already have been graded automatically.

Of course, with a lot of run-and-gun footage, iris levels and lighting change during a take, so one setting might not work for the entire clip. In that case, you can add a second level of grading by tweaking the shot in the clip tab. Effectively you now have two levels of grading. Depending on the show, you can grade in the master clip tab, the clip tab or both. When the sequence goes back to Premiere Pro CC, SpeedGrade CC corrections are applied as Lumetri effects added to each sequence clip. Any master clip effects also “ripple back” to the master clip in the bin. This way, if you cut a new section from an already-graded master clip to that or any other sequence, color correction has already been applied to it.

In the example I created for the image above, the shot was graded as a master clip effect. Then, using the clip tab, I added more primary correction and a filter effect to the first instance of the clip in the sequence. This was used to create a cartoon look for that segment on the timeline. Compare the two versions of these shots – one with only a master clip effect (shots match) and the other with a separate clip effect added to the first (shots are different).

Since master clip effects apply globally to source clips within a project, editors should be careful about changing them or copying and pasting them, as you may inadvertently alter another sequence within the same project.

©2014 Oliver Peters

Adobe Anywhere


Adobe Anywhere for video is Adobe’s first foray into collaborative editing. Anywhere functions a lot like other shared storage environments, except that editors and producers are not bound to working within the facility and its hard-wired network. The key difference between Adobe Anywhere and other NLE/SAN combinations is that all media is stored at the central location and the system’s servers handle the actual editing and compositing functions of the editing software. This means that no media is stored on the editor’s local computer and lightweight client stations can be used, since the required horsepower exists at the central location. Anywhere works within a facility using the existing LAN or externally over the internet when client systems connect remotely over VPN. Currently Adobe Anywhere is integrated directly into Adobe Premiere Pro CC and Prelude CC (Windows and OS X). Early access to After Effects integration is part of Adobe Anywhere 1.6, with improved integration available in the next release.

The Adobe Anywhere cluster

Adobe Anywhere software is installed on a set of Windows servers, which are general purpose server computers that you would buy from a vendor like Dell or HP. The software creates two types of nodes: a single Adobe Anywhere Collaboration Hub node and three or more Adobe Mercury Streaming Engine nodes. Each node is installed on a separate server, so a minimum configuration requires four computers. This is separate from the shared storage. If you use a SAN, such as a Facilis Technology or an EditShare system, the SAN will be mounted at the OS level by the computing cluster of Anywhere servers. Local and remote editors can upload source media to the SAN for shared access via Anywhere.

The Collaboration Hub computer stores all of the Anywhere project metadata, manages user access and coordinates the other nodes in the system. The Mercury Streaming Engine computers provide real-time, dynamic viewing streams of Premiere Pro and Prelude sequences with GPU-accelerated effects. Media stays in its native file format on the storage servers. There are no proxy files created by the system. In order to handle real-time effects, each of the Streaming Engine servers must be equipped with a high-end NVIDIA graphics card.

As a rule of thumb, this minimum cluster size supports 10-15 active users, according to Adobe. However, the actual number depends on media type, resolution, number of simultaneous source clips needed per editor, as well as activities that may be automated like import and export. Adobe prices the Anywhere software based on the number of named users. This is a subscription model of $1,000/year/user. That’s in addition to installed seats of Creative Cloud and the cost of the hardware to make the system work, which is supplied by other vendors and not Adobe. Since this is not sold as a turnkey installation by Adobe, certain approved vendors, like TekServe and Keycode Media, have been qualified as Adobe Anywhere system integrators.

How it works

While connected to Adobe Anywhere and working with an Anywhere project, the Premiere Pro or Prelude application on the local computer is really just functioning as the software front end that drives the application running back at the server. The results of the edit decisions are streamed back to the local machine in real time as a single stream of video. The live stream of media from the Mercury Streaming Engine is handled in a similar fashion to the playback resolution throttle that’s already part of Premiere Pro. As native media is played, the computer adjusts the stream’s playback compression based on bandwidth. Whenever playback is paused, the parked frame is updated to full resolution – thus enabling an editor to tweak an effect or composite and always see the full resolution image while making the adjustments.

To understand this better, let’s use the example of a quad split. If this were done locally, the drives would be playing back four streams of video and the software and GPU of that local computer would composite the quad split and present a single stream of video to the viewer display. In the case of Adobe Anywhere, the playback of these four streams and the compositing of the quad split would take place on the Mercury Streaming Engine computer. In turn, it would stream this live composite as a single feed of video back to the remotely connected computer. Since all the “heavy lifting” is done at “home base,” the system requirements for the client machine can be less beefy. In theory, you could be working on a MacBook Air while editing RED Epic 5K footage.
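
As a rough illustration of the throttle idea – and emphatically not Adobe’s actual implementation – the decision logic amounts to something like the following, with purely hypothetical bandwidth tiers:

```python
# Illustrative sketch of a bandwidth-based playback throttle.
# The tiers and thresholds are invented for this example.
QUALITY_LADDER = [
    (50.0, "full"),      # plenty of bandwidth: full-resolution stream
    (20.0, "half"),
    (8.0,  "quarter"),
    (0.0,  "eighth"),
]

def pick_stream_quality(measured_mbps: float, playing: bool) -> str:
    """Choose a stream quality; a parked frame is always refreshed at full resolution."""
    if not playing:
        return "full"
    for min_mbps, label in QUALITY_LADDER:
        if measured_mbps >= min_mbps:
            return label
    return QUALITY_LADDER[-1][1]

print(pick_stream_quality(12.0, playing=True))   # -> "quarter"
print(pick_stream_quality(12.0, playing=False))  # -> "full"
```

The point is simply that the client never needs the horsepower or local storage to play all of the source streams; it only decodes the one adaptive stream coming back from the Streaming Engine.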

Productions

Another difference with Adobe Anywhere is that instead of having Premiere Pro or Prelude project files, users create shared productions, designed for multi-user and multi-application access. This way a collaborating team is set up like a workgroup with assigned permission levels. Media is common and central to avoid media duplication. Any media that is added on-site is uploaded to the production in its native resolution and becomes part of the shared assets of the production. The Collaboration Hub computer manages the database for all productions.

When a user remotely logs into an Adobe Anywhere Production, media to which he has been granted access is available for browsing using Premiere Pro’s standard Media Browser panel. When an editor starts working, Anywhere automatically makes a virtual “clone” of his or her production items and opens them in a private session. Because multiple people can be working in the same production at the same time, Adobe Anywhere provides protection against conflicts or overwrites. In order to share your private changes, you must first get any updates from the shared production. This pulls all shared changes into your private view. If another person has changed the same asset you are working on, you are provided with information about the conflict and given the opportunity to keep the other person’s changes, your changes or both. Once you make your choices, you can then transfer your changes back to the shared production. Anywhere also maintains a version history, so if unwanted changes are made, you can revert back to an earlier or alternate version.

Adobe Anywhere in the wild

Although large installations like CNN are great for publicity headlines, Adobe Anywhere is proving to be useful at smaller facilities, too. G-Men Media is a production company based in Venice, California. They are focused primarily on feature film and commercial broadcast work. According to G-Men COO, Jeff Way, “G-Men was originally founded with the goal of utilizing the latest digital technologies available to reduce costs, accelerate workflow and minimize turnaround time for our clients. Adobe Anywhere allowed us to provide our clients a more efficient workflow on post productions without having to grow infrastructure on a per project basis.”

“A significant factor of Adobe Anywhere, which increased the growth of our client base, was the system’s ability to organize production teams based on talent instead of location. If we can minimize or eliminate time required for coordinating actual production work (i.e. shipping hard drives, scheduling meetings with editors, awaiting review/approval), we can save clients money that they can then invest into more creative aspects of the project – or simply undercut their budget. Furthermore, we have the ability to scale up or down without added expenses in infrastructure. All that’s required on our end is simply granting the Creative Cloud seat access to the system assets for their production.”

The G-Men installation was handled by Keycode Media, based on the recommended Adobe configuration described at the beginning of this article. This includes four SuperMicro 1U rack-mounted SuperServers. Three of these operate as the Adobe Anywhere Mercury Streaming Engines and the fourth acts as the Adobe Anywhere Collaboration Hub. Each of the Mercury Streaming Engines has its own individual NVIDIA Tesla K10 GPU card. The servers are connected to a Facilis TerraBlock shared storage array via a 10 Gigabit Ethernet switch. Their internet feed is via a fiber optic connection, typically operating at 500Mbps down / 150Mbps up. G-Men has used the system on every project since it went live in August of 2013. Noteworthy was its use for post on Savageland – the first feature film to run through an Adobe Anywhere system.

Way continued, “Savageland ended up being a unique situation and the ultimate test of the system’s capabilities. Savageland was filmed over three years with various forms of media from iPhone and GoPro footage to R3D raw and Canon 5D. It was really a matter of what the directors/producers could get their hands on from day-to-day. After ingesting the assets into our system, we were able to see a fluid transition straight into editing without having to transcode media assets. One of the selling factors of gaining Savageland as a client was the flexibility and feasibility of allowing all of the directors and editors (who lived large distances from each other in Los Angeles) to work at their convenience. The workflow for them changed from setting aside their weekends and nights for review meetings at a single location to a readily available review via their MacBooks and iPads.”

“For most of our clients, the system has allowed them to bring on the editorial talent they want without having to worry about the location of the editor. At the same time, the editors enjoyed the flexibility of working from wherever they wanted – many times out of their own homes. The benefit for editors and directors is the capability to remotely collaborate and provide feedback immediately. We’ve had a few productions where there are more than one editor working on the same assets – both creating different versions of the same edit. At the same time we had a director viewing the changes immediately after they were shared, with notes on each version. Then they had the ability to immediately make a decision on one or the other or provide creative feedback, so the editors could immediately apply the changes in real time.”

G-Men is in production on Divine Access, a feature film being shot in Austin, Texas. Way explained, “We’re currently in Austin beginning principal photography. Knowing the cloud-based editing workflows available to us, we wanted to expand the benefits we are gaining in post to the entirety of a feature film production from first location scout to principal photography and all the way through to delivery. We’re using our infrastructure to ingest and begin edits as we shoot, which is really new and exciting to all of the producers working on the film.  With the upload speeds we have available to us, we are able to provide review/approvals to our director the same day.”

Originally written for Digital Video magazine/CreativePlanetNetwork.

©2014 Oliver Peters

SpeedGrade Looks


In a previous post, I discussed how to use Final Cut Pro X Color Board presets. For that post, I created a set of presets and made them available as a free download. That remains one of the most viewed blog posts I’ve written and literally thousands of readers have downloaded the presets. In this post, I’m doing much the same with Adobe SpeedGrade CC Looks.

Adobe SpeedGrade CC uses the Lumetri deep color engine and presets may be shared between Premiere Pro CC and SpeedGrade CC via the Direct Link protocol. Grades, LUTs and presets applied in SpeedGrade are combined into a single Lumetri filter effect that gets applied to the clip in Premiere Pro. When SpeedGrade CC is installed, it includes a number of preset Look examples developed by Adobe and Looks Labs. These include stylized grades, film emulations and camera log conversions, among others. When you work in SpeedGrade, it is possible to save user-created Looks, as well. These are a combination of any set of layers and grades that you have applied to a single clip. These may include color correction adjustments, but also LUTs and special visual effects filters. User files are saved as .look files with a corresponding .jpg thumbnail of the shot that the grade was originally applied to. These .look files are saved by SpeedGrade in a number of possible folder locations, so you have to be careful about which folder is open and selected when you save a file.

I have created a variety of custom Looks covering color treatments, effects, film styles and more. These Looks were built around an image I’ve used for many of my color correction blog posts, because it has a nice spectrum of colors. For example, it’s hard to set up a characteristic “orange & teal” look when the image has no blues, greens or skin tones. To start, download the file from the link below and unzip the archive file. Inside, you’ll find a folder called “op_sgrades”. Let me point out that my testing and instructions are based on a Mac. I have not tested this on a Windows PC, so I am not sure where the proper default installation folder lives.

On a Mac, the supplied Looks styles (Lumetri and SpeedLooks) are inside the closed application bundle. To install this new folder, you need to open the SpeedGrade CC package contents (right-click the application icon and choose “show package contents”). This will expose the application’s Contents folder. From there, navigate to the MacOS subfolder and then the Look Examples subfolder. Drag the “op_sgrades” folder into the Look Examples folder. When you next open SpeedGrade CC, you will be able to access this new set of Looks in the Looks Management pane. On a PC, right-click the application program icon and select “open file location”. This will expose a set of files, including the Look Examples folder.
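
If you’d rather script that copy step on the Mac, here’s a minimal sketch. The application bundle name and location are assumptions based on a default /Applications install of the CC 2014 release – only the Contents/MacOS/Look Examples portion comes from the steps above – so confirm the bundle name on your system before running it.

```python
# Copy the downloaded "op_sgrades" folder into SpeedGrade's Look Examples folder.
# Paths are assumptions for a default install; adjust them to match your system.
import shutil
from pathlib import Path

src = Path.home() / "Downloads" / "op_sgrades"   # the unzipped Looks folder

# Hypothetical default bundle location -- verify before use.
bundle = Path("/Applications/Adobe SpeedGrade CC 2014/Adobe SpeedGrade CC 2014.app")
dst = bundle / "Contents" / "MacOS" / "Look Examples" / "op_sgrades"

shutil.copytree(src, dst)   # requires write permission to the application bundle
print(f"Installed Looks to {dst}")
```

Whether you drag the folder manually or copy it this way, the result is the same: the new set of Looks shows up in the Looks Management pane the next time SpeedGrade opens.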


Another caveat to this procedure: what happens with the next Adobe update to SpeedGrade CC? I’m not sure what happens to any folders inside the application contents package during an update. It may be that you would have to install this custom folder into the Look Examples folder again after a SpeedGrade CC version update. We’ll see when the next SpeedGrade CC update happens.

Since each of these presets was built on the same log-encoded (flat) image, you will need to adjust the grade according to the image you apply it to. In all of these, the first Primary layer (bottom of the stack) will be the same and is used to neutralize the image. The sliders I adjusted include input saturation, pivot, contrast, temperature and magenta. Only the global settings were adjusted in this layer. You can tweak it, hide/disable it or replace it with a LUT adjustment instead. I have stayed away from camera LUTs, as a way of neutralizing the image, because these will drastically affect the other corrections in the stack – often in unpredictable ways.

If you look back at my FCP X Color Board Presets article, you may notice that those looks were more extreme. In this set, I stayed more subtle, but the presets will be more complex, since SpeedGrade CC permits built-in effects. Some of these may be slow to display and update. This is especially true of any that include blurs.

Click here to download a .zip archive file of the Looks presets.

Click on any image below for a slideshow of the various Looks.

©2014 Oliver Peters

Comparing Color, Resolve, SpeedGrade and Symphony


It’s time to talk about color correctors. In this post, I’ll compare Color, Resolve, SpeedGrade and Symphony. These are the popular desktop color correction systems in use today. Certainly there are other options, like Filmlight’s Baselight Editions plug-in, as well as other NLEs with their own powerful color correction tools, including Autodesk Smoke and Quantel Rio. Some of these fall outside of the budget range of small shops or don’t really provide a correction workflow. For the sake of simplicity, in this post I’ll stick with the four I see the most.


Avid Technology Media Composer + Symphony

Although it started as a separate NLE product with dedicated hardware, today’s Symphony is really an add-on option to Media Composer. The main feature that differentiates Symphony from Media Composer in file-based workflows is an enhanced color correction toolset. Symphony used to be the “gold standard” for color correction within an NLE, combining controls “borrowed” from many other applications and systems, like Photoshop, hardware proc amps and hardware versions of the DaVinci correctors. It was the first to use the color wheel control model for balance/hue offsets. A subset of the Symphony tools has been migrated into Media Composer. Basic correction features in Symphony include channel mixing, hue offsets (color balance), levels, curves and more.

Many perceive Symphony correction as a single level or layer of correction, but that’s not exactly true. Color correction occurs on two levels – segment and program track. Most of your correction is on individual clips and Symphony offers a relational grading system. This means you can apply grades based on single clips or all instances of a master clip, tape ID, camera, etc. All clips used from a common source can be automatically graded once the first instance of that clip is graded on the timeline. The program track grade allows the colorist to apply an additional layer of grading to a clip, a section of the timeline or the entire timeline. So, when the client asks for everything to be darker, a global adjustment can be made using the program track.

Symphony also offers secondary grading based on isolating colors via an HSL key and adjusting that range. Although Symphony doesn’t offer nodes or correction layers like other software, you can use Avid’s video track timeline hierarchy to add additional correction to blank tracks above those tracks containing the video clips. In this way you are using the tracks as de facto adjustment layers. The biggest weakness is the lack of built-in masking tools to create what is commonly referred to as “power windows” (a term originated by DaVinci). The workaround is to use Avid’s built-in Intraframe/Animatte effects tools to create masks. Then you can apply additional spot correction within the mask area. It takes a bit more work than other tools, but it’s definitely possible. Finally, many plug-in packages, like GenArts Sapphire, Boris Continuum Complete and Magic Bullet Looks include vignette filters that will work with Symphony.

The bottom line is that Symphony started it all, though by today’s standards it is “long in the tooth”. Nevertheless, the relational grading model – and the fact that you are working within the NLE and can freely move between color correction and editing/trimming – makes Symphony a fast unit to operate, especially in time-sensitive, long-form productions, like TV shows.


Adobe SpeedGrade CC

If you are a current Creative Cloud subscriber, then you have access to the most recent versions of Adobe Premiere Pro CC and SpeedGrade CC. With the updates introduced late last year, Adobe added Direct Link interaction between Premiere Pro and SpeedGrade. When you use Direct Link to send your Premiere Pro timeline to SpeedGrade, the actual Premiere Pro sequence becomes the SpeedGrade sequence. This means codec decoding, transitions and Premiere Pro effects are handled by Premiere Pro’s effects engine, even though you are working inside SpeedGrade. As such, a project created via Direct Link supports features and codecs that would not be possible within a standalone SpeedGrade project.

Another unique aspect is that native and third-party transitions and effects used in Premiere Pro are visible (though not adjustable) when you are working inside SpeedGrade. This is an important distinction, because other correction workflows that rely on roundtrips don’t include NLE-based filters. You can’t see how the correction will be affected by a filter used in the NLE timeline. Naturally, in the case of SpeedGrade, this only works if you are working on a machine with the same third-party filters installed. When you return to Premiere Pro from SpeedGrade, the color corrections on clips are collapsed into a Lumetri filter effect that is applied to the clip or adjustment layer within the Premiere Pro sequence. Essentially this Lumetri effect is similar to a LUT that encapsulates all of the grading layers applied in SpeedGrade into a single effect in Premiere Pro. This is possible because the two applications share the same color science. The result is a render-free workflow with the easy ability to go back and forth between Premiere Pro and SpeedGrade for changes and adjustments. Unlike a standard LUT, Lumetri filters can carry masks and keyframes and are 100% precise.

As a color corrector, SpeedGrade is designed with a layer-based interface, much like Photoshop. Layers can be primary (fullscreen), secondary (keys and masks) or filters. A healthy selection of effects filters and LUTs are included. The correction model splits the signal into what amounts to a 12-way color wheel arrangement. There are lift/gamma/gain controls for the overall image, as well as for each of the shadow, middle and highlight ranges. Controls can be configured as wheels or sliders, with additional sliders for contrast, pivot, temperature (red vs. blue bias), magenta (red/blue vs. green bias) and saturation. There are no curves controls.
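
For reference, the lift/gamma/gain model these controls are built on follows a well-known form. The sketch below uses one common convention (a linear gain and lift, then a power function) and is purely illustrative – it is not Adobe’s internal math.

```python
# One common lift/gamma/gain formulation for a float image in [0, 1].
# Conventions vary between tools; this is a textbook version, not SpeedGrade's code.
import numpy as np

def lift_gamma_gain(x: np.ndarray, lift=0.0, gamma=1.0, gain=1.0) -> np.ndarray:
    """Lift raises the blacks, gain scales toward white, gamma bends the midtones."""
    out = x * gain + lift               # linear scale and offset
    out = np.clip(out, 0.0, 1.0)        # keep the result in legal range
    return out ** (1.0 / gamma)         # in this convention, gamma > 1 lightens mids

ramp = np.linspace(0.0, 1.0, 5)
print(lift_gamma_gain(ramp, lift=0.05, gamma=1.2, gain=0.95))
```

The “12-way” description comes from that same trio of controls being offered for the overall image and separately for the shadow, midtone and highlight ranges.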

Overall, I like the looks I get with SpeedGrade, but I find it lacking in some ways. There are definite plusses and minuses. I miss the curves. It currently does not work with Blackmagic Design hardware. Matrox, Bluefish and AJA are OK. It’s got a tracker, but I find both tracking and masking to be mediocre. The biggest workflow shortcoming is the lack of a temporary memory register feature. You can save a whole grade, which saves the entire stack of grading layers applied to a clip as a Lumetri filter. You can apply grades from earlier timeline clips quite simply and SpeedGrade lets you open multiple playheads for comparison/correction between multiple shots on the timeline. You can access the nine grades before and the nine grades after the current playhead position. You can also copy the grade from the clip below the mouse position to the clip under the playhead by pressing the C key. What you cannot do is store a random set of grades or just a single layer in a temporary buffer and then apply it from that buffer somewhere else in the timeline. Adding these two items would greatly speed up the SpeedGrade workflow.


Blackmagic Design DaVinci Resolve

The DaVinci name is legendary among color correction products, but that reputation was earned with its hardware products, like the DaVinci 2K. Resolve was the software-based product built around a Linux cluster. When Blackmagic bought the assets and technology of DaVinci, all of the legacy hardware products were dropped, in favor of concentrating on Resolve as the software that had the most life for the future. There are now four versions, including Resolve Lite (free), Resolve (paid – software only), Resolve with a Blackmagic control surface and Resolve for Linux. The first three work on Mac and PC. You may download the free Lite version from the Blackmagic website or Apple’s Mac App Store. The Lite version has nearly all of the power of the paid software, but with these limitations: noise reduction, stereoscopic tools and the ability to output at a resolution above UltraHD require a paid version.

I’m writing this based on Resolve 10, which has rudimentary editing features. It is designed as a standalone color corrector that can be used for some editing. Blackmagic Design doubled down on the editing side with Resolve 11 (shown at NAB 2014). When that’s finally released this summer, you’ll have a powerful NLE built into the application. The demos at NAB were certainly impressive. If that holds up in the shipping product, Resolve 11 would function as an Avid Symphony or Quantel Rio type of system. That means you could freely move between creative editing and color correction, simply by changing tabs in the interface. For now, Resolve 10 is mainly a color corrector, with some very good roundtrip and conforming support for other NLEs. Specifically, there is very good support for Avid and FCP X workflows.

As a color corrector, Resolve offers the widest set of correction tools of any of these systems. In the work I’ve done, Resolve allows for more extreme grading and is more precise when trying to correct problem shots. I’ve done corrections with it that would have been impossible with any other tool. The correction controls include curves, wheels, primary sliders, channel mixers and more. Corrections are node-based and can be applied to clips or an entire track. Nodes can be applied in a serial or parallel fashion, with special splitter/combiner and layer mixing nodes. The latter includes Photoshop-style blend modes. Unlike SpeedGrade, you can store the value of a single node in a buffer (using the keyboard copy function) and then paste the value of just that node somewhere else. This makes it pretty fast when working up and down a timeline. Finally, the tracker is amazing.

A few things bother me about Resolve, in spite of its powerful toolset. The interface almost presents too many tools and it becomes very easy to lose track of what was done and where. There is no large viewer or fullscreen mode that doesn’t hide the node tree. This forces a lot of toggling between workspace configurations. If you have two displays, you cannot use the second display for anything other than the scopes and audio mixer. (This will change with Resolve 11.) Finally, you can only use Blackmagic Design hardware to view the video output on a grading monitor.


Apple Color

Some of you are saying, “Why talk about that? It was killed off a few years ago! Who uses that anymore?” Yes, I know. What people so quickly forget is that when the software was FinalTouch (before Apple’s purchase), it was very expensive and considered to be very innovative. Apple bought it, added some features and cleaned up some of the workflow. As part of Final Cut Studio, it set the standard for round-tripping with an NLE. Unfortunately for many Mac users, it retained its less glossy, “Unixy” interface and thus didn’t really catch on with many editors. However, it still works just fine on the newest machines and OS versions and remains a fast, high-quality color corrector.

Nearly all of the long-form jobs I’ve done – including feature films and TV shows up to even a few months ago – have been graded with Color. There are two reasons that I prefer it. The first is that most of these jobs were cut using FCP 7, so it’s still the most integrated software for these projects. More importantly, there are several key features that make it faster than SpeedGrade and Resolve for projects that fall within a standard range of grading. In other words, the in-camera look was good and there were no huge problem areas, plus the desired grade didn’t swing into extreme looks.

Color is designed with 10 levels of grading per clip – primary in, eight secondaries and primary out. Since secondaries can be fullscreen or a portion of the image qualified by an HSL key or mask, each secondary layer can actually have two corrections – inside and outside of the mask. In addition to these, there’s a ColorFX layer for node-based filter effects, which can also include color adjustments. In reality, the maximum number of corrections to a single clip could be up to 19. The primary corrections can include value changes for RGB lift/gamma/gain and saturation levels, as well as printer lights. On top of this are lift/gamma/gain color wheels and luma controls. Lastly there are curves. The secondaries include custom mask shapes and hue/sat/luma curves. There’s a tracker, too, but it’s not that great.

Where Color still shines for me is in workflow. Each layer is represented by a labelled bar on the timeline under the clip. This makes it easy to apply only a single secondary adjustment to other clips on the timeline simply by sliding the corresponding secondary bar from one timeline clip to one or more of the others. For example, I used Secondary 3 to qualify a person’s face and brighten it. I could then simply drag the bar for S3 that appears under the first clip on the timeline over to every other clip with the same person and similar set-up. All without selecting each of these clips prior to applying the adjustment.

Color works with all cards that work with Final Cut Pro, so there’s no AJA versus Blackmagic issue as mentioned above. Dual monitors work well. You can have scopes and the viewer (or a fullscreen viewer) on one display and the full control interface on the other. Realistically, Color works best with up to 2K video and one of the standard Apple codecs (uncompressed or ProRes work best). A lot of the footage I’ve graded with it was ProResHQ or ProRes 4444 that came native from an ARRI Alexa or transcoded from a C300, RED or a Canon 5D/7D. But I’ve also done a film that was all native EX rewrapped as .mov from a Sony camera and Color had no issues. Log-profile footage grades very nicely in Color, so Alexa ProRes 4444 encoded as Log-C forms a real sweet spot for Apple Color.

©2014 Oliver Peters

Using FCP X with Adobe CC


While the “battle” rages on between the proponents of using either Apple Final Cut Pro X or Adobe Premiere Pro CC as the main edit axe, there is less disagreement about the other Adobe applications. Certainly many users like Motion, Aperture and Logic, but it’s pretty clear that most editors favor Adobe solutions over others. I have encountered very few power users of Motion, as compared with After Effects wizards – and few graphic designers who can get by without touching Illustrator or Photoshop. This post isn’t intended to change anyone’s opinion, but rather to offer a few pointers on how to productively use some of the Adobe Creative Cloud (or CS6) applications to complement your FCP X workflows. (Click images below for an expanded view.)

Photoshop


For many editors, Adobe Photoshop is the title tool of choice. FCP X has some nice text tools, but Photoshop is significantly better – especially for logo creation. When you import a layered Photoshop file into FCP X, it comes in as a special layered graphics file. Layers can be adjusted, animated or disabled when you “open in timeline”. Photoshop layer effects, like a drop shadow, glow or emboss, do not show up correctly inside FCP X. If you drop the imported Photoshop file onto the timeline, it becomes a self-contained title clip. Although you cannot “open in editor” to modify the file, there is a workaround.

To re-edit the Photoshop file in Adobe Photoshop, select the clip in FCP X and “reveal in Finder”. From the Finder window, open the file in Photoshop. Now you can make any changes you like. Once saved, the changes are updated in FCP X. There is one caveat that I’ve noticed. All changes that you make have to be made within the existing layers. New, additional layers do not update back inside FCP X. However, if you create layer effects and then merge that layer to bake in the effects, the update is successful in FCP X and the effects become visible.

This process is very imperfect because of FCP X’s interpretation of the Photoshop files. For example, layer alignment that matches in Photoshop may be misaligned in FCP X. All layers must have some content. You cannot create blank layers and later add content into them. When you do this, the updates will not be recognized in FCP X.

Audition


Sound mixing is still a weak link in Final Cut Pro X. All mixing is clip-based without a proper mixing pane, like most other NLEs have. There are methods (X2Pro Audio Convert) to send the timeline audio to Pro Tools, but many editors don’t use Pro Tools. Likewise sending an FCPXML to Logic X works better than before, but why buy an extra application if you already own Adobe Audition? I tested a few options, like using X2Pro to get an AAF into Premiere Pro and then into Audition, but none of this worked. What does work is using XML.

First, duplicate the sequence and work from the copy for safety. Review your edited sequence in FCP X and detach/delete any unused audio elements, such as muted audio associated with connected clips that are used as video-only B-roll. Next, break apart any compound clips. I recommend detaching the desired audio, but that’s optional. Now export an FCPXML for that sequence. Open the FCPXML in the Xto7 application and save the audio tracks as a new XML file.
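
If you want to sanity-check the sequence before converting it – for instance, to confirm that detached or muted audio is really gone – a few lines of Python can list the clips inside the exported FCPXML. The element and attribute names below follow Apple’s published FCPXML format as I understand it, and the file name is a hypothetical placeholder, so treat this as a rough sketch rather than a guaranteed parser.

```python
# Quick look inside an exported FCPXML: list the asset clips in the sequence.
# "my_sequence.fcpxml" is a placeholder path; element names may vary by FCPXML version.
import xml.etree.ElementTree as ET

tree = ET.parse("my_sequence.fcpxml")
root = tree.getroot()

for clip in root.iter("asset-clip"):
    name = clip.get("name", "untitled")
    offset = clip.get("offset", "?")
    duration = clip.get("duration", "?")
    print(f"{name}: offset={offset}, duration={duration}")
```

Nothing here changes the file – it’s just a quick way to verify what will be handed to Xto7 and, from there, to Audition.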

Launch Audition and import the new XML file. This will populate your multitrack mixing window with the sequence and clips. At this stage, all clips that were inside FCP X Libraries will be offline. Select these clips and use the “link media” command. The good news is that the dialog window will allow you to see inside the Library file and let you navigate to the correct file. Unfortunately, the correct name match will not be bolded. Since these files are typically date/time-stamped, make sure to read the names carefully when you select the first clip. The rest will relink automatically. Note that level changes and fades that were made in FCP X do not come across into Audition.

Now you can mix the session. When done, export a stereo (or other) mixed master file. Import that into FCP X and attach as a connected clip to the head of your sequence. Make sure to delete, disable (make “invisible”) or mute all previous audio.

After Effects


For many editors, Adobe After Effects is the finishing tool of choice – not just for graphics and effects, but also color correction and other embellishments. Thanks to the free ClipExporter application, it’s easy to go from FCP X to After Effects.

Similar to the Audition step, I recommend detaching/deleting all audio. Some folks like to have audio inside After Effects, but most of the time it’s in the way for me. Break apart all compound clips. You might as well remove any FCP X titles and effects filters/transitions, since these don’t translate into After Effects. Lastly, I recommend selecting all connected clips and using the “overwrite to storyline” command. This will place everything onto the primary storyline and result in a straightforward cascade of layers once inside After Effects.

Export an FCPXML file for the sequence. Open ClipExporter and select the AE conversion tab. Import the FCPXML file. An important feature is that ClipExporter supports FCP X’s retiming function, but only for AE exports. Now run the conversion and save the resultant After Effects script file.

Launch Adobe After Effects and from the File/Script pulldown menu, select the saved script file created by ClipExporter. The script will run and load the clips and your sequence as a new composition. Each individual shot is stashed into its own mini-composition and these are then placed into a stack of layers for the timeline of the main AE composition. Should you need to trim/slip the media for a shot, all available media can be accessed and adjusted within the shot’s individual mini-comp. If a shot has been retimed in FCP X, those adjustments also appear in the mini-comp and not in the main composition.

Build your effects and render a flattened file with everything baked in. Import that file into FCP X and add it as a connected clip to the top of your sequence. Disable all other video clips.

©2014 Oliver Peters

NAB 2014 Thoughts

Whodathunkit? More NLEs, new cameras from new vendors and even a new film scanner! I’ve been back from NAB for a little over a week and needed to get caught up on work while decompressing. The following are some thoughts in broad strokes.

Avid Connect. My trip started early with the Avid Connect customer event. This was a corporate gathering with over 1,000 paid attendees. Avid execs and managers outlined the corporate vision of Avid Everywhere in presentations that were head-and-shoulders better than any executive presentations Avid has given in years. For many who attended, it was to see if there was still life in Avid. I think the general response was receptive and positive. Avid Everywhere is basically a realignment of existing and future products around a platform concept. That has more impact if you own Avid storage or asset management software. Less so, if you only own a seat of Media Composer or Pro Tools. No new software features were announced, but new pricing models were introduced, with options to purchase or rent individual seats of the software – or to rent floating licenses in larger quantities.

4K. As predicted, 4K was all over the show. However, when you talked to vendors and users, there was little clear direction about actual mastering in 4K. It is starting to be a requirement in some circles, like delivering to Netflix, for example; but for most users 4K stops at acquisition. There is interest for archival reasons, as well as for reframing shots when the master is HD or 2K.

Cameras. New cameras from Blackmagic Design. Not much of a surprise there. One is the bigger, ENG-style URSA, which is Blackmagic’s solution to all of the add-ons people use with smaller HDSLR-sized cameras. The biggest feature is a 10” flip-out LCD monitor. AJA was the real surprise with its own 4K Cion camera. Think KiPro Quad with a camera built around it. Several DPs I spoke with weren’t that thrilled about either camera, because of size or balance. A camera that did get everyone jazzed was Sony’s A7s, one of their new Alpha series HDSLRs. It’s 4K-capable when recorded via HDMI to an external device. The images were outstanding. Of course, 4K wasn’t everywhere. Notably not at ARRI. The news there is the Amira, a sibling to the Alexa. Both share the same sensor design, with the Amira designed as a documentary camera. I’m sure it will be a hit, in spite of being a 2K camera.

Mac Pro. The new Mac Pro was all over the show in numerous booths. Various companies showed housings and add-ons to mount the Mac Pro for various applications. Lots of Thunderbolt products on display to address expandability for this unit, as well as Apple laptops and eventually PCs that will use Thunderbolt technology. The folks at FCPworks showed a nice DIT table/cart designed to hold a Mac Pro, keyboard, monitoring and other on-set essentials.

FCP X. Speaking of FCP X, the best place to check it out was at the off-site demo suite that FCPworks was running during the show. The suite demonstrated a number of FCP X-based workflows using third-party utilities, shared storage from Quantum and more. FCP X was in various booths on the NAB show floor, but to me it seemed limited to partner companies, like AJA. I thought the occurrences of FCP X in other booths were overshadowed by Premiere Pro CC sightings. No new FCP X feature announcements or even hints were made by Apple in any private meetings.

NLEs. The state of nonlinear editing is in more flux than ever. FCP X seems to be picking up a little steam, as is Premiere Pro. Yet, still no clear market leader across all sectors. Autodesk announced Smoke 2015, which will be the last version you can buy. Following Adobe’s lead, this year they shift to a rental model for their products. Smoke 2015 diverges more from the Flame UI model with more timeline-based effects than Smoke 2013. Lightworks for the Mac was demoed at the EditShare booth, which will make it another new option for Mac editors. Nothing new yet out of Avid, except some rebranding – Media Composer is now Media Composer | Software and Sphere is now Media Composer | Cloud. Expect new features to be rolled in by the end of this year. The biggest new player is Blackmagic Design, which has expanded the DaVinci Resolve software into a full-fledged NLE. Its cosmetic resemblance to FCP X caused many to dub it “the NLE that Final Cut Pro 8 should have been”. Whether that’s on the mark or just irrational exuberance has yet to be determined. Suffice it to say that Blackmagic is serious about making it a powerful editor, which for now is targeted at finishing.

Death of i/o cards. I’ve seen little mention of this, but it seems to me that dedicated PCIe video capture cards are a thing of the past. KONA and Decklink cards are really just there to support legacy products. They have less relevance in the file-based world. Most of the focus these days is on monitoring, which can be easily (and more cheaply) handled by HDMI or small Thunderbolt devices. If you looked at AJA and Matrox, for example, most of the target for PCIe cards is now to supply the OEM market. AJA supplies Quantel with their 4K i/o cards. The emphasis for direct customers is on smaller output-only products, mini-converters or self-contained format converters.

Film. If you were making a custom 35mm film scanner – get out of the business, because you are now competing against Blackmagic Design! Their new film scanner is based on technology acquired through the purchase of Cintel a few months ago. Now Blackmagic has introduced a sleek 35mm scanner capable of up to 30fps with UltraHD images. It’s $30K and connects to a Mac Pro via Thunderbolt 2. Simple operation and easy software (plus Resolve) will likely rekindle the interest at a number of facilities for the film transfer business. That will be especially true at sites with a large archive of film.

Social. Naturally NAB wouldn’t be the fun it is without the opportunity to meet up with friends from all over the world. That’s part of what I get out of it. For others it’s the extra training through classes at Post Production World. The SuperMeet is a must for many editors. The Avid Connect gala featured entertainment by the legendary Nile Rodgers and his band Chic. Nearly two hours of non-stop funk/dance/disco. Quite enjoyable regardless of your musical taste. So, another year in Vegas – and not quite the ho-hum event that many had thought it would be!

Click here for more analysis at Digital Video’s website.

©2014 Oliver Peters

 

NLE Tips – Week 2


Adobe Premiere Pro – Stacked Sequences

If you are used to editing in Adobe Premiere Pro or Apple Final Cut Pro “legacy”, then you are familiar with the concept of tabbed sequences. That is, you can have several open sequences, which each appear as a tab in the timeline window. This lets the editor work between them, using copy and paste functions or comparing one version of an edit to another. (Click images for an expanded view.)

Adobe’s interface design is based on dockable windows. In Premiere Pro, this means you can arrange the window layout in various custom workspace configurations that are conducive to your personal style or task needs. Sequences can be torn off into separate window elements. They may then be docked as a tab or embedded into any of four sides of the window as a separate pane within that window. Therefore, you can easily dock two sequences on top of each other within the same timeline window. When you do this, the focus of the sequence viewer and the effects control panel will follow whichever clip is selected by the editor in either sequence.

Let’s say that you like to work from a “selected takes” sequence to a second sequence that is a “cutdown” of these selects. Stack one sequence above the other and then simply drag a clip from sequence 1 to sequence 2. Or highlight a clip in sequence 1, copy it and paste it to sequence 2. This also makes it easy to re-arrange the order of clips from one sequence to the other, when building stories based on soundbite and voice-over elements.

In another example, you might have two versions of an edit, such as a long-form cut for the web and a :30 cut for commercial TV. Each will have the same effects applied to shots that are common to both versions. Stack the sequences and open the effects controls. As you click on a clip, the effects that have been applied are revealed in the control panel. Or you can apply new effects to that clip by adding them to this open window.

Once you’ve applied and adjusted effects in the long-form cut, select the effects in that window and copy them. Then click on the same shot in the second sequence. The effect control window has been “refocused” on the other clip and is therefore empty. Paste the matching effect(s) to the empty effects control panel. Now the shot in the short-form cut will match the appearance of that same shot from the long-form cut. All done by simply moving back and forth between the two stacked sequences in the timeline window.

©2014 Oliver Peters