Color Grading Strategies

A common mistake made by editors new to color correction is to try to nail a “look” in a single application of a filter or color correction layer. Subjective grading is an art. Just as a photographer dodges and burns areas of a photo in the lab or in Photoshop to “relight” a scene, so it is with the art of digital color correction. The process requires several steps, so a single solution will never give you the best result. I follow this concept regardless of the NLE or grading application I’m using at the time. Whether it’s stacked filters in Premiere Pro, several color corrections in FCP X, rooms in Color, nodes in Resolve or layers in SpeedGrade – the process is the same. The standard grade for me is often a “stack” of four or more grading levels, layers or nodes to achieve the desired results.

The first step for me is always to balance the image and to make that balance consistent from shot to shot. How you get there varies with the type of media and application. For example, RED camera raw footage is compatible with most current software, allowing you to control the raw decoding settings. In FCP X or Premiere Pro, you get there through separate controls that modify the raw source metadata settings. In Resolve, I would usually make this the first node. Typically I will adjust ISO, temperature and tint here and then set the gamma to REDlogFilm for easy grading downstream. In a tool like FCP X, you are changing the settings for the media file itself, so any change to the RED settings for a clip will alter those settings for all instances of that clip throughout all of your projects. In other words, you are not changing the raw settings for only the timeline clips. Depending on the application, this type of change is made in the first step of color correction or before you enter color correction at all.

I’ll continue this discussion based on FCP X for the sake of simplicity, but remember that the concepts apply generally to all grading tools. In FCP X, all effects are applied to clips before the color board stage. If you are using a LUT filter or some other type of grading plug-in, like Nattress Curves, Hawaiki Color or AutoGrade, remember that it is applied first and that result is then processed by the color board controls, which are downstream in the signal flow. If you want to apply an effect after the color board correction, then you must add an adjustment layer (a blank title effect) above your clip and apply that effect within the adjustment layer.

In the example of RED footage, I set the gamma to REDlogFilm for a flatter profile that preserves dynamic range. In FCP X, color board correction 1 is where I make the necessary adjustments to saturation and contrast to restore this to a neutral, but pleasing, image. I do this for all clips in the timeline, being careful to keep the shots consistent. I am not applying a “look” at this level.

The next step, color board correction 2, is for establishing the “look”. Here’s where I add a subjective grade on top of color board correction 1. This could be built from scratch or from a preset. FCP X supplies a number of default color presets that you access from the pull-down menu. Others are available to be installed, including a free set of presets that I created for FCP X. If you have a client who likes to experiment with different looks, you might add several color board correction levels here. For instance, if I’m previewing a “cool look” versus a “warm look”, I might do one in color correction 2 and another in color correction 3. Each correction level can be toggled on and off, so it’s easy to preview the warm versus cool looks for the client.

Assuming that color board correction 2 is for the subjective look, correction 3 in my hierarchy tends to be reserved for a mask to isolate faces. Sometimes I’ll do this as a key mask and other times as a shape mask. FCP X is pretty good here, but if you really need finesse, then Resolve would be the tool of choice. The objective is to isolate faces – usually in a close shot of your principal talent – and bring skin tones out against the background. The mask needs to be very soft so as not to draw attention to itself. Like most tools, FCP X allows you to make changes inside and outside of the mask. If I isolate a face, then I could brighten the face slightly (inside the mask), as well as slightly darken everything else (outside the mask).
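
Conceptually, the inside/outside adjustment behaves like a single feathered mask blending two gains. Here’s a toy sketch in Python – purely illustrative, not how FCP X implements its masks:

```python
import numpy as np

# Hypothetical frame and a soft, oval "face" mask: 1.0 at the center with a
# feathered falloff, so the correction never shows a hard edge.
h, w = 1080, 1920
frame = np.random.rand(h, w, 3).astype(np.float32)     # stand-in image data

yy, xx = np.mgrid[0:h, 0:w]
face = np.exp(-(((xx - 960) / 200.0) ** 2 + ((yy - 400) / 250.0) ** 2))
mask = face[..., None].astype(np.float32)

inside_gain, outside_gain = 1.08, 0.95                  # brighten the face, darken the rest
graded = frame * (mask * inside_gain + (1.0 - mask) * outside_gain)
```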

Depending on the shot, I might have additional correction levels above this, but all placed before the next step. For instance, if I want to darken specific bright areas, like the sun reflecting off of a car hood, I will add separate layers with key or shape masks for each of these adjustments. This goes back to the photographic dodging and burning analogy.

I like adding vignettes to subtly darken the outer edge of the frame. This goes on correction level 4 in our simplest set-up. The bottom line is that it should be the top correction level. The shape mask should be feathered to stay subtle, and then you darken the outside of the mask by lowering brightness and, possibly, pulling saturation down a little. You have to adjust this by feel, and one vignette style will not work for all shots. In fact, some shots don’t look right with a vignette, so you have to use this to taste on a shot-by-shot basis. At this stage it may be necessary to go back to color correction level 2 and adjust its settings in order to get the optimal look, after you’ve done facial correction and vignetting in the higher levels.

If I want any global changes applied after the color correction, then I need to do this using an adjustment layer. One example is a film emulation filter like LUT Utility or FilmConvert. Technically, if the effect should look like film negative, it should be a filter that’s applied before the color board. If the look should be like it’s part of a release print (positive film stock), then it should go after. For the most part, I stick to after (using an adjustment layer), because it’s easier to control, as well as to remove, if the client decides against it. Remember that most film emulation LUTs are based on print stock and therefore should go on the higher layer by definition. Of course, other global changes, like additional color correction filters, grain or a combination of the two, can be added. These should all be done as adjustment layers or track-based effects for consistent application across your entire timeline.

©2014 Oliver Peters

More 4K

I’ve talked about 4K before (here, here and here), but I’ve recently done some more 4K jobs that have me thinking again. 4K means different things to different people. In terms of dimensions, there’s the issue of cinema 4K (4096 pixels wide) versus the UltraHD/QuadHD/4K 16:9 (whatever you want to call it) version of 4K (3840 pixels wide). That really doesn’t make a lot of difference, because these are close enough to be treated as the same. There’s so much hype around 4K, though, that you really have to wonder if it’s “the Emperor’s new clothes”.

First of all, 4K used as a marketing term is not a resolution, it’s a frame dimension. As such, 4K is not four times the resolution of HD – quadrupling the pixel count is a measurement of area, not resolution. True resolution is usually measured in the vertical direction, based on the ability to resolve fine detail (regardless of the number of pixels), and therefore 4K is only twice the resolution of HD at best. 4K is also not sharpness, which is a human perception affected by many things, such as lens quality, contrast, motion and grading. It’s worth watching Mark Schubin’s excellent webinar on the topic to get a clearer understanding of this. There’s also a very good discussion among top DoPs here about 4K, lighting, high dynamic range and more.
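
The arithmetic behind that point is easy to check for the UltraHD flavor of 4K; a quick sketch:

```python
# Pixel count versus linear resolution for UltraHD compared with HD.
hd_w, hd_h = 1920, 1080
uhd_w, uhd_h = 3840, 2160

pixel_ratio = (uhd_w * uhd_h) / (hd_w * hd_h)   # 4.0 -> four times the pixel *area*
linear_ratio = uhd_h / hd_h                     # 2.0 -> only twice the vertical resolution

print(pixel_ratio, linear_ratio)
```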

A lot of arguments have been made that 4K cameras built around a single CMOS sensor with a Bayer-style color filter array don’t even deliver the resolution they claim. The reason is that in many designs 50% of the pixels are green versus 25% each for red and blue. Green carries the luminance information, which determines detail, so you do not have a 1:1 pixel relationship between green and the stated frame resolution of the sensor. That’s in part why RED developed 5K and 6K sensors and it’s why Sony uses an 8K sensor (F65) to deliver a 4K image.
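
To see where the 50/25/25 split comes from, here is a small sketch of a generic RGGB Bayer mosaic – an illustration only, not any specific camera’s sensor layout:

```python
import numpy as np

# Each 2x2 Bayer tile holds one red, two green and one blue photosite.
width, height = 3840, 2160
tile = np.array([["R", "G"],
                 ["G", "B"]])
mosaic = np.tile(tile, (height // 2, width // 2))

total = mosaic.size
print("green:", np.count_nonzero(mosaic == "G") / total)   # 0.5
print("red:  ", np.count_nonzero(mosaic == "R") / total)   # 0.25
print("blue: ", np.count_nonzero(mosaic == "B") / total)   # 0.25
```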

The perceived image quality is also not all about total pixels. The pixels of a sensor, called photosites, are its light-receiving elements. There’s a loose correlation between pixel size and light sensitivity. For any given sensor of a certain physical dimension, you can design it with a lot of small pixels or with fewer, but larger, pixels. This roughly correlates to a sensor that’s high resolution but with a smaller dynamic range (many small pixels), or one with lower resolution but a higher dynamic range (fewer, larger pixels). Although the equation isn’t nearly this simple, since a lot of color science and “secret sauce” goes into optimizing a sensor’s design, you can certainly see this play out in the marketing battles between the RED and ARRI camps. In the case of the ALEXA, ARRI adds some on-the-sensor filtering, which results in a softer image that gives it a characteristic filmic quality.

Why use 4K?

With 4K there are two possible avenues. The first is to shoot 4K for the purpose of reframing and repositioning within HD and 2K timelines. Reframing isn’t a new production idea. When everyone shot on film, some telecine devices, like the Rank Cintel Mark III, sported zoom boards that permitted an optical blow-up of the 35mm negative. You could zoom in for a close-up in transfer that didn’t cost you resolution. Many videographers shoot 1080 for a 720 finish, as this allows a nice margin for reframing in post. The second is to deliver a final 4K product. Obviously, if your intent is the latter, then you can’t count on the techniques of the former in post.

When you shoot 4K for HD post, workflow is an issue. Do you shoot everything in 4K or just the items you know you’ll want to deal with? How will this cut with HD and 2K content? That’s where it gets dicey, because some NLEs have good 4K workflows and others don’t. But it’s here that I contend you are getting less than meets the eye, so to speak. I have run into plenty of editors who have dropped a 4K clip into an HD timeline and then blown it up, thinking that they are really cropping into the native 4K frame and maintaining resolution. Depending on the NLE and the settings used, often they are simply blowing up an HD shot. The NLE scaled the 4K to HD first and then expanded the downscaled HD image. It didn’t crop into the actual 4K native resolution. So you have to be careful. And guess what – if the blow-up isn’t that extreme, it may not look much different from the crop.

One thing to remember is that a 4K image scaled to fit into an HD timeline gains the benefits of oversampling. The result in HD will be very sharp and, in fact, will generally look better perceptually than the exact same image natively shot at an HD size. When you crop into the native image, you lose some of that oversampling effect. Cropping to a 1:1 pixel relationship gives you the same effective framing as a 200% blow-up – though, of course, not the same result. When you compare the oversampled “wide shot” (4K scaled to HD) to the “close-up” (native 4K crop), the close-up will often look softer. You’ll see defects of the image, like chromatic aberration in the lens, missed critical focus and sensor noise. If you instead shoot a wide and then an actual close-up, that result will usually look better.
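
As a back-of-the-envelope check on that framing math (assuming UltraHD dimensions):

```python
# Framing math for a UHD (3840 x 2160) source cut into a 1080p timeline.
source_w = 3840
timeline_w = 1920

fit_scale = timeline_w / source_w          # 0.5 -> the full frame is downsampled 2:1 (oversampled HD)
crop_equivalent = source_w / timeline_w    # 2.0 -> a 1:1 crop frames like a 200% blow-up of the fitted shot

print(f"fit at {fit_scale:.0%}, 1:1 crop frames like a {crop_equivalent:.0%} blow-up")
```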

On the other hand, if you blow up the 4K-to-HD downconvert or a native HD shot, you’ll typically see a result that looks pretty good. That’s because there’s often a lot more information there than monitors or the eye can detect. In my experience, you can commonly get away with a blow-up in the range of 120% of the original image size and, in some cases, as much as 150%.

To scale or not to scale

Let me point out that I’m not saying a native 4K shot doesn’t look good. It does, but often the associated workflow hassles aren’t worth it. For example, let’s take a typical 1080p 50” Panasonic plasma that’s often used as a client monitor in edit suites. You or your client may be sitting 7 to 10 feet away from it, which is closer than most people sit in a living room with a screen of that size. If I show a client the native image (4K at 1:1 in an HD timeline) compared with a separate HD image at the same framing, it’s unlikely that they’ll see a difference. Another test is to take two otherwise identical images – one native HD and the other 4K. Scale up the HD and crop down the 4K to match. In theory, the 4K should look better and sharper. In fact, sitting back on the client sofa, most won’t see a difference. It’s only when they step to about 5 feet in front of the monitor that a difference is obvious, and then only when looking at fine detail within the shot.

Not all scaling is equal. I’ve talked a lot about comparisons with scaled HD, but the results really depend on the scaler you use. For a quick shot, sure, use what your NLE has built in. For more critical operations, you might want to scale images separately. DaVinci Resolve has excellent built-in scaling and lets you pick from smooth, sharp and bilinear algorithms. If you want a plug-in, then the best I’ve found is the new Red Giant Instant 4K filter. It’s a variation of their Instant HD plug-in and works in After Effects and Premiere Pro. There are a lot of quality tweaks and, naturally, the better the result, the longer the render will be. Nevertheless, it offers outstanding results and in one test that I ran, it actually provided a better look within portions of the image than the native 4K shot.
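
If you want to experiment with scaler quality outside of your NLE, here is a minimal sketch using the Pillow library in Python – the file names are placeholders, and this is in no way the Instant 4K algorithm:

```python
from PIL import Image

# Hypothetical HD source frame being upscaled to UHD with two different filters.
src = Image.open("hd_frame_1920x1080.png")
target = (3840, 2160)

quick = src.resize(target, Image.BILINEAR)    # fast, but softer
better = src.resize(target, Image.LANCZOS)    # slower, noticeably crisper edges

quick.save("upscale_bilinear.png")
better.save("upscale_lanczos.png")
```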

In that case, it was a C500 shot of a woman on a park bench with a name badge. I had three identical versions of the shot (not counting the raw files) – the converted 4K ProRes 4444 file, a converted 1080 ProRes 4444 “proxy” file for editing and the in-camera 1080 Canon XF file. I blew up the two 1080 shots using Instant 4K and cropped the 4K shot so all were of equal framing. When I compared the native 4K shot to the expanded 1080 ProRes 4444 shot, the woman’s hair was sharper in the 1080 blow-up, but the letters on the name badge were better on the original. The 1080 Canon XF blow-up was softer in both areas. I think this shows that some of the controls in the plug-in may give you results superior to the original (crisper hair), but a blow-up suffers when you start with a weaker codec, like Canon’s XF (50 Mbps 4:2:2). That codec is fine for native HD, but ProRes 4444 has twice the chroma resolution and less compression, which makes a difference when scaling an image larger. Remember, all of this pertains to viewing the image in HD.

4K deliverables

So what about working in native 4K for a 4K deliverable? That certainly has validity for high-resolution projects (films, concerts, large corporate presentations), but I’m less of a believer for television and web viewing. I’d rather have “better” pixels and not simply “more” pixels. Most of the content you watch at theaters using digital projection is 2K playback. Sometimes the master for that DCP was HD, 2K or 4K. If you are in a theater equipped with a Sony 4K projector, most of the time it’s simply the projector upscaling the content to 4K as part of the projection. Even though you may see a Sony 4K logo at the head of the trailers, you aren’t watching 4K content – definitely not if it’s a stereo 3D film. Yet much of this looks pretty good, doesn’t it?

Everything I said about blowing up HD by 120% or more still applies to 4K. Need to blow up a shot a bit in a 4K timeline? Go ahead, it will look fine. I think ARRI has proven this as well, taking films shot with the ALEXA all the way up to IMAX. In fact, ARRI just announced that the AMIRA will get in-camera, on-the-fly upscaling of its image with the ability to record 4K (3840 x 2160 at up to 60fps) on the CFast 2.0 cards. They can do this because the sensor starts with more pixels than HD or 2K. The AMIRA will expose all of the available photosites (about 3.4K across) in what they call the “open gate” method. This image is lightly cropped to 3.2K and then scaled by a factor of 1.2 (3200 x 1.2 = 3840), which results in UltraHD 4K recording on the same hardware. It’s a pretty neat trick and, judging by ARRI’s image quality, I’ll bet it will look very good. Doubling down on this technique, the ALEXA XT models will also be able to record ProRes media at this 3.2K size. In the case of the ALEXA, the designers have opted to leave the upscaling to post, rather than doing it in-camera.

To conclude, if you are working in 4K today, then by all means continue to do so. It’s a great medium with a lot of creative benefits. If you aren’t working in 4K, then don’t sweat it. You won’t be left behind for a while, and there are plenty of techniques to get you to the same end goal as much of the 4K production that’s going on.

©2014 Oliver Peters

Red Giant Universe

Red Giant Software, developers of such popular effects and editing tools as Trapcode and Magic Bullet, recently announced Red Giant Universe. Red Giant has adopted a hybrid free/subscription model. Once you sign up for a Red Giant account and log into Universe, you have access to all of the free filters and transitions that are part of this package. Initially this includes 31 free plug-ins (22 effects, 9 transitions) and 19 premium plug-ins (12 effects, 7 transitions). Universe users have a 30-day trial period before the premium effects become watermarked. Premium membership pricing will be $10/month, $99/year or $399/lifetime. Lifetime members will receive routine updates without any further cost.

A new approach to a fresh and growing library of effects

The general mood among content creators has been against subscription models; however, when I polled thoughts about the Universe model on one of the Creative COW forums, the comments were very positive. From Red Giant’s early press on Universe, I had gotten the impression that it would be an environment in which users could create their own custom effects. In fact, this isn’t the case at all. The Universe concept is built on Supernova, an internal development tool that Red Giant’s designers use to create new effects and transitions. Supernova draws from a library of building-block filters that can be combined to create new plug-in effects. It is somewhat similar to Apple’s Quartz Composer development tool; however, it is not part of the package that members can access.

Red Giant plans to build a community around Universe members, who will have some input into the types of new plug-ins created. These plug-ins will only be generated by Red Giant designers and partner developers. Currently they are working with Crumplepop, with whom they created Retrograde – one of the premium plug-ins. The point of being a paid premium member is to continue receiving routine updates that add to the repertoire of Universe effects that you own. In addition, some of the existing Red Giant products will be ported to Universe in the future as new premium effects.

This model is similar to what GenArts had done with Sapphire Edge, which was based on an upfront purchase, plus a subscription for updated effects “collections” (essentially new preset versions of an Edge plug-in). These were created by approved designers and added to the library each month. (Note: Sapphire Edge – or at least the FX Central subscription – appears to have been discontinued this year.) Unlike the Sapphire Edge “collections”, the Universe updates are not limited to presets, but will include brand new plug-ins. Red Giant tells me they already have several dozen in the development pipeline.

Red Giant Universe supports both Mac and Windows and runs in recent versions of Adobe After Effects, Premiere Pro, Apple Final Cut Pro X and Motion. At least for now, Universe doesn’t support Avid, Sony Vegas, DaVinci Resolve, EDIUS or Nuke hosts. Members will be able to install the software on two computers, and a single installation of Universe places these effects into all applicable hosts, so only one purchase is necessary.

Free and premium effects with GPU acceleration

In this initial release, the free effects cover many standards, including blurs, glows, distortion effects, generators and transitions. The premium effects include several that have been ported over from other Red Giant products, such as Knoll Light Factory EZ, Holomatrix, Retrograde and ToonIt. In case you are concerned about duplication if you’ve already purchased some of these effects, Red Giant answers this in their FAQ: “We’ve retooled the tools. Premium tools are faster, sleeker versions of the Red Giant products that you already know and love. ToonIt is 10x faster. Knoll Light Factory is 5x faster. We’ve streamlined [them] with fewer controls so you can work faster. All of the tools work seamlessly with [all of the] host apps, unlike some tools in the Effects Suite.”

The big selling point is that these are high-quality, GPU-accelerated effects, which use 32-bit float processing for trillions of colors. Red Giant is using OpenGL rather than OpenCL or NVIDIA’s CUDA technology, because it is easier to provide support across various graphics cards and operating systems. The recommendation is to have one of the newer, faster NVIDIA or AMD cards or mobile GPUs. The minimum GPU is an Intel HD 3000 integrated graphics chip. According to Red Giant, “Everything is rendered on the GPU, which makes Universe up to 10 times faster than CPU-based graphics. Many tools use advanced render technology that’s typically used in game development and simulation.”

In actual use

After Universe is installed, updates are managed through the Red Giant Link utility. This keeps track of all Red Giant products that you have installed (along with Universe) and lets you update as needed. The effects themselves are nice and the quality is high, but these are largely standard effects so far. There’s nothing major yet that isn’t already represented by a similar effect among the built-in filters and transitions that come as part of FCP X, Motion or After Effects. Obviously, there are subjective differences in one company’s “bad TV” or “cartoon” look versus that of another, so whether or not you need any additional plug-ins becomes a personal decision.

As far as GPU acceleration is concerned, I do find the effects to be responsive when I adjust them and preview the video. This is especially true in a host like Final Cut Pro X, which is really tuned for the GPU. For example, adding and adjusting a Knoll lens flare from the Universe package performs better on my 2009 Mac Pro (8-core with an NVIDIA Quadro 4000) than do the other third-party flare filters I have available on this unit.

The field is pretty crowded when you stack up Universe against such established competitors as GenArts Sapphire, Boris Continuum Complete, Noise Industries FxFactory Pro and others. As yet, Universe does not offer any tools that fill in workflow gaps, like tracking, masking or even keyers. I’m not sure the monthly subscription makes sense for many customers. It would seem that free will be attractive to many, while an annual or lifetime subscription will be the way most users purchase Universe. The lifetime price lines up well against the others when compared as a filter-package purchase.

Red Giant Universe is an ideal package of effects for editors. While Apple has developed a system with Motion where any user can create new FCP X effects based on templates, the reality is that few working editors have the time or interest to do that. They want effects that can be quickly applied with a minimum amount of tweaking and that perform well on a timeline. That’s what impresses clients and wins editors over to your product. With that target in mind, Red Giant will definitely do well with Universe if it holds to its promise. Ultimately the success of Universe will hang on how prolific the developers are and how quickly new effects come through the subscription pipeline.

Originally written for Digital Video magazine/Creative Planet Network

©2014 Oliver Peters

Amira Color Tool and your NLE

I was recently alerted to the new Amira Color Tool by Michael Phillips’ 24p blog. This is a lightweight ARRI software application designed to create custom in-camera looks for the Amira camera. You do this by creating custom color look-up tables (LUTs). The Amira Color Tool is available as a free download from the ARRI website (free registration required). Although the application is designed for the camera, you can also export looks in a variety of LUT file formats, which, in turn, may be installed and applied to footage in a number of different editing and color correction applications. I tested this in both Apple Final Cut Pro X and Avid Media Composer | Software (v8) with good results.

The Amira Color Tool is designed to correct log-C encoded footage into Rec 709, either as a straight conversion or with a custom look. ARRI offers some very good instructions, white papers, sample looks and tutorials that cover the operation of this software. The signal flow is from the log-C image, to the Rec 709 correction, and then to the CDL-based color correction. To my eye, the math appears to be floating point, because a Rec 709 conversion that throws a shot into clipping can be pulled back out of clipping in the look tab using the CDL color correction tools. Therefore it is possible to use this tool for shots other than ARRI Amira or Alexa log-C footage, as long as the footage is sufficiently flat.
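
Here is a toy illustration of why floating-point math matters for that kind of pull-back – just an analogy, not ARRI’s actual processing:

```python
import numpy as np

# In float, a conversion that overshoots 1.0 keeps its detail and can be pulled
# back by a later correction; a clamped integer pipeline would have clipped it.
pixel = np.float32(0.9)

boosted = pixel * np.float32(1.4)        # conversion overshoots legal range -> 1.26
pulled_back = boosted * np.float32(0.7)  # later correction recovers it -> ~0.88

clipped = min(boosted, np.float32(1.0))  # what a clamped pipeline would keep -> 1.0
print(pulled_back, clipped * np.float32(0.7))   # ~0.88 versus 0.7
```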

The CDL correction tools are based on slope, offset and power. In that model, slope is equivalent to gain, offset to lift and power to gamma. In addition to color wheels, there’s a second video look parameters tab with hue intensities for the six main vectors (red, yellow, green, cyan, blue and magenta). The Amira Color Tool is Mac-only and opened both QuickTime and DPX files among the clips I tested. It worked successfully with clips shot on an Alexa (log-C), Blackmagic Cinema Camera (BMD Film profile), Sony F3 (S-Log) and Canon 1D C (4K Canon Log). Remember that the software is designed to correct flat, log-C images, so you probably don’t want to use this with images that were already encoded with vibrant Rec 709 colors.
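
For reference, the math behind those three controls comes from the ASC CDL; a minimal per-channel sketch (the CDL’s fourth parameter, saturation, is omitted here):

```python
import numpy as np

def apply_cdl(rgb, slope, offset, power):
    """out = clamp(in * slope + offset) ** power, per channel."""
    rgb = np.asarray(rgb, dtype=np.float32)
    graded = rgb * slope + offset          # slope ~ gain, offset ~ lift
    graded = np.clip(graded, 0.0, None)    # keep values non-negative before the power step
    return graded ** power                 # power ~ gamma

# Example: a touch more gain, a small lift and a slight gamma adjustment.
print(apply_cdl([0.18, 0.18, 0.18], slope=1.1, offset=0.02, power=0.95))
```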

FCP X

To use the Amira Color Tool, import your clip from the application’s file browser, set the look and export a 3D LUT in the appropriate format. I used the DaVinci Resolve setting, which creates a 3D LUT as a .cube format file. To get this into FCP X, you need to buy and install a LUT filter, like Color Grading Central’s LUT Utility. To install a new LUT there, open the LUT Utility pane in System Preferences, click the “+” symbol and navigate to where the file was saved. In FCP X, apply LUT Utility to the clip as a filter. From the filter’s pulldown selection in the inspector, choose the new LUT that you’ve created and installed. One caveat is to be careful with ARRI files. Any files recorded with newer ARRI firmware are flagged for log-C, and FCP X automatically corrects these to Rec 709. Since you don’t want to double up on LUTs, make sure “log processing” is unchecked for those clips in the info tab of the inspector pane.

Media Composer

To use the custom LUTs in Media Composer, select “source settings” for the clip. Go to the color management tab and install the LUT. Now it will be available in the pull-down menu for color conversions. This color management change can be applied to a single clip or to a batch of clips within a bin.

In both cases, the source clips in FCP X and/or Media Composer will play in real-time with the custom look already applied.

©2014 Oliver Peters

CoreMelt TrackX

Tracking isn’t something every editor does on a regular basis, but when you need it, very few NLEs have built-in tracking tools. This is definitely true of Apple Final Cut Pro X. CoreMelt makes some nice effects plug-ins, but in addition, they’ve produced a number of workflow tools that enhance the capabilities of Final Cut Pro X. These include Lock & Load X (stabilization) and SliceX (masking). The newest tool in the group is TrackX and, like SliceX, it uses Mocha tracking technology licensed from Imagineer Systems. In keeping with the simplified controls common to FCP X effects, the tracking controls in TrackX are very easy to apply and use.

TrackX installs as three generators within FCP X – Simple Tracker, Track Layer and Track Text. All use the same planar-based Mocha tracker. The easiest to use – and where I get the best results – is the Simple Tracker. This lets you attach text or objects to a tracked item, so they travel with its movement.

The example used in their tutorial is of a downhill skier. As he races downhill, a timer read-out travels next to him. This works and displays well, because the attached object doesn’t have to adhere perfectly to the tracked item. It uses a two-step process. First, create the item you want to attach and place it into a compound clip. Because it’s a compound clip, it can be a complex graphic and not just text. The second step is to track the object you want to follow. Apply the TrackX generator and trim it to length, use the rectangle tool to select an area to be tracked, drop the compound clip into the filter control pane’s image well and then track forward or backward. If there are hiccups within the track, you can manually delete or insert keyframes. Like other trackers, you can select the mode of analysis to be used, such as whether to follow position, scale or perspective.

The second TrackX generator is Track Layer. This worked well enough, but not nearly as well as the more advanced versions of Mocha that come with After Effects or are sold separately. This tool is designed to replace objects, such as inserting a screen image into a TV, window, iPad or iPhone. To use it, first highlight the area that will be replaced using the polygon drawing tool. Next, add the image to be used as the new surface. Then track. There are controls to adjust the scale and offset of the new surface image within its area.

In actual practice, I found it hard to get a track that wasn’t sloppy. It seems to track best when the camera is panning on an object without zooming or having any handheld rotation around the object. Since Mocha tracking is based on identifying flat planes, any three-dimensional motion around an object that results in a perspective change becomes hard to track. This is tough no matter what, but in my experience the standard Mocha trackers do a somewhat better job than TrackX did. A nice feature is a built-in masking tool, so that if your replacement surface is supposed to travel behind an object, like a telephone pole, you can mask the occluded area for realistic results.

Lastly, there’s Track Text. This generator has a built-in text editor and is intended to track objects in perspective. The example used in their demos is text that’s attached to building rooftops in an aerial shot. The text is adjusted in perspective to sit on the same plane as the roofs.

Overall, I liked the tools, but for serious compositing and effects, I would never turn to FCP X anyway. I would do that sort of work in After Effects. (TrackX does not install into Motion.) Nevertheless, for basic tracking, TrackX really fills a nice hole in FCP X’s power and is a tool that every FCP X editor will want at their fingertips.

For new features announced at NAB and coming soon, check out this video and post from FCP.co.

©2014 Oliver Peters

LUTs and FCP X

LUTs, or color look-up tables, are a method of converting images from one color space or gamma profile into another. LUTs are usually a mathematically correct transform of one set of color and level values into another. For most editors and colorists, LUTs are commonly associated with the log profiles that are increasingly used with various digital cameras, like an ARRI ALEXA, RED One, RED Epic or Blackmagic Design Cinema Camera.

The concept gets confusing, because there are various types of LUTs and they can be inserted at different stages of the pipeline. There are display LUTs, used to convert the viewing color space, such as from Rec. 709 (video) into P3 (digital cinema projection). These can be installed into hardware conversion boxes, monitors and software grading applications. There are camera LUTs, which are used to convert gamma profiles, such as from log-C to Rec. 709. And finally, there are creative LUTs used for aesthetic purposes, like film stock emulation.
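
Conceptually, a 3D LUT is just a lattice of output colors indexed by input RGB. A toy sketch of the idea – real LUTs, such as 33-point .cube files, interpolate between lattice points rather than snapping to the nearest one, and the random lattice here is only a stand-in for a real transform:

```python
import numpy as np

SIZE = 17                                                     # lattice points per axis
lut = np.random.rand(SIZE, SIZE, SIZE, 3).astype(np.float32)  # placeholder transform

def apply_lut(rgb, lut):
    # Map a 0-1 RGB value to the nearest lattice point and return its output color.
    idx = np.clip(np.rint(np.asarray(rgb) * (SIZE - 1)).astype(int), 0, SIZE - 1)
    return lut[idx[0], idx[1], idx[2]]

print(apply_lut([0.25, 0.5, 0.75], lut))
```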

One of the really sweet parts of Apple Final Cut Pro X is that it offers a vastly improved color pipeline that ties in closely to the underpinnings of the OS, such as ColorSync. This offers developers opportunities over FCP “legacy” and, quite frankly, over many other competitors. Built into the code is the ability to recognize certain camera metadata, if the camera manufacturer chooses to take advantage of Apple’s SDK. ARRI, Sony and RED are among those that have done so. For example, when you import ARRI ALEXA footage that was recorded with a log-C gamma profile, a metadata flag in the file toggles on log processing automatically within FCP X. Instead of seeing the flat log-C image, you see one that has already been converted on the fly into Rec. 709 color space.

This built-in log processing comes with some caveats, though. The capability is only enabled with files recorded on ALEXA cameras with more recent firmware. It cannot be manually applied to older log-C footage, nor to any other log-encoded video file. It can only be toggled on or off, without any adjustments. Finally, because this is done via under-the-hood ColorSync profile changes, it happens prior to the point where any filters or color correction can be applied within FCP X itself.

A different approach has been developed by colorist Denver Riddle, known for his Color Grading Central website, products and tutorials. His new product, LUT Utility, is designed to provide FCP X editors with a better way of using LUTs for both corrective and creative color transforms. The plug-in installs into both Final Cut Pro X and Motion 5 and comes with a number of built-in LUTs for various cameras, such as the ALEXA, Blackmagic and even the Cinestyle profiles used with the Canon HDSLRs. Simply drop the filter onto a clip and select the LUT from the pulldown menu in the FCP X inspector pane. As a filter, you can freely apply any LUT selection, regardless of camera – plus, you can adjust the strength of the LUT via a slider. It can work within a series of filters applied to the same clip and can be placed upstream or downstream of any other filters, as well as within an adjustment layer (blank title effect). You can also stack multiple instances of the LUT with different settings on the same clip for creative effect.

The best part of LUT Utility is that you aren’t limited to the built-in LUTs. When you install the plug-in, a LUT Utility pane is added to System Preferences. In that pane, you can add additional LUTs sold by Color Grading Central or ones that you have created yourself. (External LUT files can be directly accessed within the filter when working in Motion 5.) One such package is the set of Osiris Digital Film Emulation LUTs developed jointly by Riddle and visionCOLOR. These are nine film LUTs designed to mimic the looks of various film stocks. Each has two settings, designed for either log or Rec. 709 video. For example, you can take an ALEXA log-C file and apply two instances of LUT Utility. Set the first filter to use the log-C-to-Rec. 709 LUT. Then in the second filter, pick one of the film LUTs, but use its Rec. 709 version. Or, you could apply a single instance of the LUT Utility filter and pick the same film LUT, but select its log version instead. Both work, but will give you slightly different looks. Using the filter’s amount slider, it’s easy to fine-tune the intensity of the effect.

LUT Utility is applied as a filter, which means you can still add other color correction filters before or after it. Applying a filter like Hawaiki Color prior to a log conversion LUT means you are adjusting color values of the log image before it is converted into Rec. 709. If you add such a filter after the LUT, then you are grading the already-converted image. Each position will give you different results, but most of this is handled gracefully, thanks to FCP X’s floating-point processing. Finally, you can also apply the LUT as a filter and then do additional corrections downstream of the filter using the built-in Color Board tools.
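
A toy example of why that position matters – a placeholder curve stands in for a real log-to-Rec. 709 LUT:

```python
def fake_log_to_rec709(x):
    # Placeholder nonlinear curve, not a real LUT.
    return x ** 0.6

value = 0.4
grade_then_convert = fake_log_to_rec709(value * 1.2)   # filter placed before the LUT
convert_then_grade = fake_log_to_rec709(value) * 1.2   # filter placed after the LUT

print(grade_then_convert, convert_then_grade)           # the two orders don't match
```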

I found these LUTs easy to install and use. They appear to be pretty lightweight in how they affect FCP X playback performance. I’m running a 2009 Mac Pro with a new Mavericks installation. I can apply one or more instances of the LUT Utility filter and my unrendered ProRes media plays in real-time. With the widespread use of log and log-style gamma profiles, this is one of the handiest filter sets to have if you are a heavy FCP X user. Not only are most of the common cameras covered, but the Osiris LUTs add a nice creative edge that you won’t find at this price point in competitive products. If you use FCP X for color correction and finishing, then it’s really an essential tool.

©2014 Oliver Peters

Typemonkey

One of the ways to extend Adobe After Effects is through scripting. Scripts are automated macros that quickly perform tasks you could do yourself, without manually executing tedious, repetitive commands. Developers can create advanced scripts to automate complex creative treatments. These are installed like plug-ins, but show up as a module under the Window pulldown menu. One such script is Typemonkey – a kinetic text generator.

Kinetic Text

We’ve all seen this current design trend in TV spots and marketing videos. The copy is presented via animated words, which move into position on screen. The view shifts from one word to the next in sync with the announcer, at the reading pace of the viewer. A kinetic text layout is relatively straightforward and can easily be created by an editor using After Effects or Motion.

The starting point for kinetic text is a large layout of stacked words. These are arranged horizontally and vertically in a bigger-than-raster field. It’s like taking a variety of building blocks and stacking them into a structure. This word design can be created as a layered Photoshop document or as a series of layers in After Effects or Motion – one word per layer. To add energy and pace, you would next offset the timing of each layer and add an entry animation to the word on that layer, so that it flies, fades, rotates or types into visibility.

Once this layout is created, the entire stack of layers is viewed with a 3D camera, which in turn is animated to create the moves from one word to the next as they appear inside the raster of your composition. This brings them full screen for a moment as the reader follows the context of the text. While this process is very easy once you understand it, the time it takes to build can be quite long. In addition, a paragraph of words will result in a lengthy series of After Effects layers in your timeline pane.

Automating the process

Where Typemonkey enters the picture is to streamline the process and reduce or even eliminate the manual steps. Once installed, you open the Typemonkey interface module from the Window menu. Set the starting font from After Effects’ normal text control pane, paste or type your text into the Typemonkey window and press the “Do it!” button. At this point Typemonkey operates as a macro to automatically build the layers, the moves and the 3D camera animation. The final result is a timeline that shows the 3D camera layer, with all of the word layers shyed (hidden via the shy switch). Moves from word to word are evenly spaced across the length of the composition or selected work area, with markers at each change. This builds a very nice kinetic text composition in a matter of seconds.
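
The even spacing is simple arithmetic; a small sketch with hypothetical numbers (not Typemonkey’s actual script):

```python
# Word-to-word moves spread evenly across a selected work area, one marker per change.
words = ["This", "is", "kinetic", "text", "in", "motion"]
work_area = 12.0                                  # seconds

interval = work_area / len(words)                 # time each word holds on screen
markers = [round(i * interval, 2) for i in range(1, len(words))]
print(markers)                                    # when the camera moves to the next word
```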

Naturally, most editors and designers will want to customize the defaults, so that every composition isn’t identical. This can be achieved through both the Typemonkey pane and AE’s standard layer effects. Sliding the markers in the composition timeline will change the animation pacing of the 3D camera’s move from word to word. This lets you hold longer on some words and move more quickly through others.

The controls within the Typemonkey pane let you adjust some of the move styles and interpolations. You can also set up a series of colors, so that each word changes color as it cycles through the five palette choices. Through adjustments at both locations, designers can get quite a large range of variations from this single tool. The actual effects are performed using After Effects expressions, rather than keyframes, so you cannot easily make individual changes to the internal moves. However, you can certainly add your own keyframed transform effects on top of what Typemonkey creates.

Typemonkey is a low-cost tool that will pay for itself in the time saved on a single job. Obviously its use is specific to kinetic text treatments, but used sparingly and with taste, it’s a look that will bring your motion graphics up a notch.

©2013 Oliver Peters