Red Giant Trapcode Suite 13

After Effects artists who are called upon to design shots involving sci-fi effects, particles, user interface overlays, sparks, light rays, and sparkles have come to rely on Trapcode as their go-to plug-in set. The newest version, available from Red Giant, is Trapcode Suite 13. The package includes 11 different plug-ins, which cover a range of particle and volumetric lighting effects.

If you install the suite, all 11 Trapcode effects will show up in After Effects CC. These include Particular, Form, Tao, Mir, Shine, Lux, 3D Stroke, Echospace, Starglow, Sound Keys and Horizon. Of these, 3D Stroke, Shine and Starglow will also be available within Premiere Pro CC. Together these effects form a comprehensive toolkit for After Effects designers who really do have to create magic from scratch.

Trapcode Particular is typically the effect that most folks associate with the Trapcode name. In this new version, you can use its built-in Effects Builder to select from presets and design custom effects. Although other Trapcode plug-ins include presets for certain styles, only Particular includes this separate Effects Builder to browse, preview and apply effects. Particular now includes new organic 3D effects, such as smoke, fire and water.

Trapcode Form lets you design particle grids, spheres and objects that evolve over time. Trapcode Tao lets you build 3D geometries with fractal math for shapes, facets, etc. Tao is a simplified 3D object design tool that supports metallic textures and can incorporate the image maps from lower After Effects layers as surface textures. You can create animated objects, shapes and ribbons, all of which are GPU-accelerated. Trapcode Mir is designed to create 3D surfaces, terrains and wireframes. These can be used for tunnel effects and land topographies. Both Tao and Mir can display these designs as wireframes, shaded polygons or rendered surfaces.

Trapcode Sound Keys links to an imported audio file. It analyzes the file and creates animation keyframes, which can drive a colorized volume bar display synced to that sound. These keyframes can also be used to drive other effects, such as scaling to the beat. Trapcode 3D Stroke enables 3D lines, paths and overlays. Trapcode Lux turns After Effects lights into visible sources with volumetric properties. Trapcode Horizon creates infinite backgrounds in After Effects. Trapcode Echospace enables repeated effects like trails and 3D offsets.

Last but not least, there’s Trapcode Shine and Trapcode Starglow. Both are lighting effects. Shine generates 3D light rays that you can use with text or to mimic real-world lighting, like shafts of light through the forest. Shine can be linked to After Effects 3D lights for volumetric-aware effects. Starglow is more stylized with glints and glimmers, similar to adding a star filter to your lens.

Working with the suite

The suite as a whole is intended for serious After Effects artists who have to create shots, not merely enhance them. As such, it’s not like suites from other plug-in developers that offer a whole toolkit of image manipulation effects, color correction, titling and more. If that’s what you want, then the Trapcode Suite isn’t for you. However, each of these plug-ins is available separately, so if you only want Trapcode Particular or Shine, for example, then it’s best to buy just the one effect that you really need.

Each of these effects is quite deep. I have never seen any other plug-in with as many modifier controls as those from Trapcode. Compared with some competing plug-ins, these tools are relatively light on presets; nevertheless, Particular has over 180 presets, while Shine, 3D Stroke and Starglow have 30, 40 and 49 presets respectively. Some, like Shine, Starglow and Sound Keys, are pretty easy to figure out. Others, like Mir or Tao, really do require that you spend some time with tutorials. The investment in time is certainly worth it, if these are the types of effects that you need to create on a regular basis.

Although I use After Effects, I’m a novice at building such particle effects and find myself more comfortable with tools like Shine. Building a flying title with rays that emanate from the text was a piece of cake with Shine. Trapcode has worked hard to take advantage of GPU and CPU power. Mir and Tao are GPU-accelerated and others, like Particular, were optimized for better CPU performance in this release. Adding and adjusting these effects was pretty quick on a 2009 Mac Pro 8-core tower with a Sapphire 7950 card. No slouch, but certainly pretty average by today’s standards. I’m sure these effects would really scream on a top-of-the-line HP with a smokin’ NVIDIA card.

Trapcode Particular was also fun, because of the Effects Builder. Essentially it’s a presets browser, with different effects options. When you select an option, it becomes part of your effects chain in the Builder window. This lets you design a custom effect, starting with the emitter type and then adding modifiers within the chain, such as turbulence, gravity and so on. Each segment of this chain has parameters that can be tweaked. Once done, you apply the effect that you’ve built to the clip on the timeline and close the Builder window. Then make timing and other adjustments in the standard After Effects Effect Controls panel.

Other plug-in developers offer effects similar to Trapcode Shine. One unique attribute of Shine is its ability to add fractal noise. So, in addition to light rays, you can add the appearance of haze or smoke to the effect. Depending on how you set the controls, it can also look like a water reflection shimmering onto objects in the image or other similar styles. All of this can be internally masked from within the plug-in. Applying the mask means that if you want the light rays to emanate only from a window in the corner of the set, you can adjust the mask accordingly. Light rays would then appear to come only from the window and not from other bright objects within the rest of the shot. Another unique aspect of Shine is that its light rays are 3D camera-aware, based on After Effects light and camera positions.

Overall, the Trapcode Suite tools are a wonderful addition to any visual effects artist’s collection of plug-ins. The quality is outstanding, the visual appearance quite organic, and performance with a moderately powerful GPU is fast. Editors will likely want to limit themselves to Shine and Starglow to make the best investment for how they use plug-ins. But if you are a power After Effects user who also cuts in Premiere Pro CC, then the suite has you covered either way.

Originally written for Digital Video magazine / Creative Planet Network.

©2016 Oliver Peters

Swiss Army Man

When it comes to quirky movies, Swiss Army Man stands alone. Hank (Paul Dano) is a castaway on a deserted island at his wit’s end. In an act of final desperation, he’s about to hang himself, when he discovers Manny (Daniel Radcliffe), a corpse that’s just washed up on shore. At this point the film diverges from the typical castaway/survival story into an absurdist comedy. Manny can talk and has “magical powers” that Hank uses to find his way back to civilization.

Swiss Army Man was conceived and directed by the writing and directing duo of Dan Kwan and Daniel Scheinert, who work under the moniker Daniels. This is their feature-length film debut and was produced with Sundance in mind. The production company brought on Matthew Hannam to edit the film. Hannam (The OA, Enemy, James White) is a Canadian film and TV editor with numerous features and TV series under his belt. I recently spoke with Hannam about the post process on Swiss Army Man.

Hannam discussed the nature of the film. “It’s a very handmade film. We didn’t have a lot of time to edit and had to make quick decisions. I think that really helped us. This was the dozenth or so feature for me, so in a way I was the veteran. It was fun to work with these guys and experience their creative process. Swiss Army Man is a very cinematically-aware film, full of references to other famous films. You’re making a survival movie, but it’s very aware that other survival movies exist. This is also a very self-reflexive film and, in fact, the model is more like a romantic comedy than anything else. So I was a bit disappointed to see a number of the reviews focus solely on the gags in the film, particularly around Manny, the corpse. There’s more to it than that. It’s about a guy who wonders what it might be like had things been different. It’s a very special little film, because the story puts us inside of Hank’s head.”

Unlike the norm for most features, Hannam joined the team after the shooting had been completed. He says, “I came on board during the last few days of filming. They shot for something like 25 days. This was all single-camera work with Larkin Seiple (Cop Car, Bleed For This) as director of photography. They shot ARRI ALEXA XT with Cooke anamorphic lenses. It was shot ARRIRAW, but for the edit we had a special LUT applied to the dailies, so the footage was already beautiful. I got a drive in August and the film premiered at Sundance. That’s a very short post schedule, but our goal was always Sundance.”

Shifting to Adobe tools

Like many of this year’s Sundance films, Adobe Premiere Pro was the editing tool of choice. Hannam continues, “I’m primarily an Avid [Media Composer] editor and the Dans [Kwan and Sheinert] had been using [Apple] Final Cut Pro in the past for the shorts that they’ve edited themselves. They opted to go with Premiere on this film, as they thought it would be easiest to go back and forth with After Effects. We set up a ‘poor man’s’ shared storage with multiple systems that each had duplicate media on local drives. Then we’d use Dropbox to pass around project files and shared elements, like sound effects and temp VFX. While the operation wasn’t flawless – we did experience a few crashes – it got the job done.”

Swiss Army Man features quite a few visual effects shots and Hannam credits the co-directors’ music video background with making this a relatively easy task. He says, “The Dans are used to short turnarounds in their music video projects, so they knew how to integrate visual effects into the production in a way that made it easier for post. That’s also the beauty of working with Premiere Pro. There’s a seamless integration with After Effects. What’s amazing about Premiere is the quality of the built-in effects. You get effects that are actually useful in telling the story. I used the warp stabilizer and timewarp a lot. In some cases those effects made it possible to use shots in a way that was never possible before. The production company partnered with Method for visual effects and Company 3 [Co3] for color grading. However, about half of the effects were done in-house using After Effects. On a few shots, we actually ended up using After Effects’ stabilization after final assembly, because it was that much better than what was possible during the online assembly of the film.”

Another unique aspect of Swiss Army Man is its musical score. Hannam explains, “Due to the tight schedule, music scoring proceeded in parallel with the editing. The initial temp music pulled was quirky, but didn’t really match the nature of the story. Once we got the tone right with the temp tracks, scenes were passed on to the composers – Andy Hull and Robert McDowell – who Daniels met while making a video for their band Manchester Orchestra. The concept for the score was that it was all coming from inside of Hank’s head. Andy sang all the music as if Hank was humming his own score. They created new tracks for us and by the end we had almost no temp music in the edit. Once the edit was finalized, they worked with Paul [Dano] and Daniel [Radcliffe] to sing and record the parts themselves. Fortunately both are great singers, so the final a cappella score is actually the lead actors themselves.”

Structuring the edit

Matthew Hannam and I discussed his approach to editing scenes, especially with this foray into Premiere Pro. He responds, “When I’m on Media Composer, I’m a fan of ScriptSync. It’s a great way to know what coverage you have. There’s nothing like that in Premiere, although I did use the integrated Story app. This enables you to load the script into a tab for quick access. Usually my initial approach is to sit down and watch all the footage for the particular scene while I plan how I’m going to assemble it. The best way to know the footage is to work with it. You have to watch how the shoot progresses in the dailies. Listen to what the director says at the end of a take – or if he interrupts in the middle – and that will give you a good idea of the intention. Then I just start building the scene – often first from the middle. I’m looking for what is the central point of that scene and it often helps to build from the middle out.”

Although Hannam doesn’t use any tricks to organize his footage or create selects, he does use “KEM rolls”. This term stems from the KEM flatbed film editing table. In modern parlance, it means that the editor has strung out all the footage for a scene into a single timeline, making it easy to scrub through all the available footage quickly. He continues, “I’ll build a dailies reel and tuck it away in the bottom of the bin. It’s a great way to quickly see what footage you have available. When it’s time to revise a scene, it’s good to go back to the raw footage and see what options you have. It is a quick way to jog your memory about what was shot.”

A hybrid post workflow

Another integral member of the post team was assistant editor Kyle Gilbertson. He had worked with the co-directors previously and was the architect of the hybrid post workflow followed on this film. Gilbertson pulled all of the shots for VFX that were being handled in-house. Many of the more complicated montages were handled as effects sequences and the edit was rebuilt in DaVinci Resolve before re-assembly in After Effects. Hannam explains, “We had two stages of grading with [colorist] Sofie Borup at Co3. The first was to set looks and get an idea what the material was going to look like once finished. Then, once everything was complete, we combined all of the material for final grading and digital intermediate mastering. There was a real moment of truth when the 100 or so shots that Daniels did themselves were integrated into the final cut. Luckily it all came together fairly seamlessly.”

“Having finished the movie, I look back at it and I’m full of warm feelings. We kind of just dove into it as a big team. The two Dans, Kyle and I were in that room kind of just operating as a single unit. We shifted roles and kept everything very open. I believe the end product reflects that. It’s a film that took inspiration from everywhere and everyone. We were not setting out to be weird or gross. The idea was to break down an audience and make something that everyone could enjoy and be won over by. In the end, it feels like we really took a step forward with what was possible at home. We used the tools we had available to us and we made them work. It makes me excited that Adobe’s Creative Cloud software tools were enough to get a movie into 700 cinemas and win those boys the Sundance Directing prize. We’re at a point in post where you don’t need a lot of hardware. If you can figure out how to do it, you can probably make it yourself. That was our philosophy from start to finish on the movie.”

Originally written for Digital Video magazine / Creative Planet Network.

©2016 Oliver Peters

Blackmagic Design Teranex Processors

In recent years, Blackmagic Design has thrived on a business model of acquiring the assets of older industry icons, modernizing their products, and then re-introducing these cornerstone brands to an entirely new customer base. Top-of-the-line products that were formerly out of reach to most users are now attainable, thanks to significant price reductions as part of the Blackmagic Design product family.

Teranex is just such a case. It’s a company with which I am well acquainted, since we are both Orlando-based. I remember their first NAB off-site whisper suite. I’ve used their conversion and restoration products on a number of projects. At one point they were moving into the consumer TV space under the then-ownership of Silicon Optix and later IDT. In that period, I produced the popular HQV Benchmark DVD and Blu-ray for them as an image test vehicle for consumers. As with many companies in the pro video space, they’ve had a past filled with ups and downs, so it’s great to see Blackmagic breathe new life into the technology.

Teranex Processors

Blackmagic Design offers three rack-mounted Teranex products. These are separate from the Teranex Mini line, which does not offer the full range of Teranex processing, but consists of more targeted units for specific conversion applications. The rack-mounted standards converters include the Teranex Express, Teranex 2D Processor and Teranex 3D Processor. All three offer more or less the same processing options, with the exception that the Express can work with 4K Ultra HD (3840×2160). The 2D and 3D Processors only go as high as 2K (2048×1080). Outside of that difference, they all handle up/down/cross-conversions between SD (NTSC and PAL), HD (720 and 1080), 2K and UHD (Express only). This includes frame sizes, as well as the whole range of progressive and interlaced frame rates. There’s also aspect ratio correction (anamorphic, 16:9, 14:9, zoom, letterbox/pillarbox) and colorspace conversion. Add to this de-interlacing and 3:2 pulldown cadence correction. The key point, and why these units are must-haves for large post operations, is that they do it all and the processing is in real time.

The Teranex 2D and 3D Processors can also function as i/o devices when connected via Thunderbolt. By purchasing one of these two models, you can skip the need for an additional Blackmagic Design capture device, assuming you have a Thunderbolt-equipped Mac Pro, iMac or MacBook Pro. With the purchase, you also get Blackmagic Design Ultrascope waveform monitor software that runs on your computer. When you run one of these units with Apple Final Cut Pro X, Adobe Premiere Pro CC or their own Media Express application, the response is the same as with a standalone i/o device. This is an optional use, however, as these two units can operate perfectly well in a standalone installation, such as part of a machine room environment. They do tend to have loud fans, so either way, you might want to keep them in a rack.

The biggest difference between the 2D and 3D Processor is that the 3D unit can also deal with stereoscopic video. In addition to the normal processing functions, the 3D model has adjustments for stereo images. There are also physical differences, so even if you don’t work with stereo images, you might still opt for the Teranex 3D Processor. For instance, while both units can handle analog or SDI video connections, the 3D Processor only allows for two channels of analog audio i/o to be plugged into the device. The 2D Processor uses a separate DB-25 break-out cable for all analog audio connectors. As with all Blackmagic rack products, no power cord is included. You need to provide your own three-prong electric cord. The 3D Processor features dual-redundant power supplies, which also means it requires two separate power cords. Not a big deal given the extra safety factor in mission-critical situations, but an extra consideration nonetheless. (Note: the 3D processor still works with only a single power cord plugged in.)

The Teranex Express is more streamlined, with only digital SDI connectors. It is designed for straightforward, real-time processing and cannot be employed as an i/o device. If you don’t need analog connectors, stereoscopic capabilities or Thunderbolt i/o, then the Express model is the right one for you. Plus, it’s currently the only one of the three that works with 4K Ultra HD content. The Teranex units also pass captioning, Dolby data and timecode.

In actual use

I tested both the Teranex Express and the 3D Processor for this review. I happen to have some challenging video to test. I’m working on a documentary made up of a lot of standard definition interviews shot with a Panasonic DVX-100, plus a lot of WWII archival clips. My goal is to get these up to HD for the eventual final product. As a standards converter and image processor both units work the same (excluding stereoscopic video). SDI in and SDI out with a conversion in-between.

The front panel is very straightforward, with buttons for the input standard and the desired output standard. The left side features settings for format size, frame rate, scan and aspect. The middle includes a multi-use LCD display, which is used to show menus, test patterns and video. To the right of the display are buttons for video levels and sharpening, since these models also include a built-in proc amp. On the far right, you can see audio channel status, system status and presets. Last, but not least, there’s a “lock panel” button if you don’t want anyone to inadvertently change a setting in the middle of a job, as these controls are always active. When you pass any SDI signal through one of these units, the input is auto-detected and the button layout easily guides the operator through the logical steps to set a desired target format for conversion.

As with all Blackmagic Design products, installation of the software needed for i/o was quick and easy. When I connected the Teranex 3D Processor to my MacBook Pro via Thunderbolt, all of the apps saw the device and for all intents and purposes it worked just the same as if I’d had a Blackmagic Design UltraStudio device connected. However, here the conversion side of the Teranex device is at odds with how it works as an i/o device. For example, the output settings typically followed the sequence settings of the NLE that was driving it. If I had an NTSC D1 timeline in Final Cut Pro X, the Teranex 3D Processor could not be set to up-convert this signal on output. It only output a matching SD signal. Up-conversion only happened if I placed the SD content into a 1080 timeline, which unfortunately means the software is doing the conversion and not the Teranex processor. As best as I could tell, you could not set the processor to override the signal on either input or output when connected via Thunderbolt.

Processing power

One of the hallmarks of Teranex processing is cadence correction. 24fps content that is recorded as a 30fps signal is said to have “3:2 pulldown”. It was originally developed to facilitate transferring film material to videotape. Pulldown is a method of repeating whole film frames across a pattern of interlaced video fields so that four film frames fit into five video frames (ten fields). This pattern is called 24PN (“normal” pulldown) and the cadence of film frames to video fields is 2:3:2:3. Digital camera manufacturers adopted this technique to mimic the look of film when recording in a 24fps mode. To complicate matters, Panasonic introduced a different cadence called 24PA, or “advanced” pulldown. The cadence is 2:3:3:2 and was targeted at Final Cut Pro users. FCP featured a built-in routine for the software to drop the extra frame in the middle and restore the clips to a true 24fps during a FireWire capture. Another form of cadence is 2:2:2:4, which is common in DVD players when playing back a true 24fps DVD.
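
To make the cadence arithmetic concrete, here is a minimal Python sketch (purely an illustration, not part of any Teranex or NLE workflow) that spreads four 24fps film frames across ten interlaced fields using the normal 2:3:2:3 pattern:

```python
# Spread four film frames (A, B, C, D) across interlaced video fields
# using the "normal" 3:2 pulldown cadence (2:3:2:3).
def pulldown_fields(film_frames, cadence=(2, 3, 2, 3)):
    fields = []
    for frame, count in zip(film_frames, cadence):
        fields.extend([frame] * count)  # each film frame is held for 2 or 3 fields
    return fields

fields = pulldown_fields(["A", "B", "C", "D"])
print(fields)           # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(len(fields) / 2)  # 5.0 -> four film frames fit into five interlaced video frames
```

Swapping the cadence tuple for (2, 3, 3, 2) or (2, 2, 2, 4) models the advanced pulldown and DVD-player cadences mentioned above.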

Teranex processing is designed to detect and correct the more common 24PN, i.e. 3:2 pulldown (2:3:2:3), but not the other two cadences (2:3:3:2, 2:2:2:4). Teranex is supposed to be able to fix “broken” 3:2 pulldown cadences in mixed timelines, meaning that the pattern changes at every cut. However, when I checked this on my test project, I didn’t get perfect results. That’s most likely due to the fact that I was dealing with DV (not proper D1) content, which had gone through a lot of hands before it came to me. The best results come from treating every source clip individually. When I tested that, the results were more in line with what I expected to see.

Teranex technology was developed for real-time processing at a time when linear, videotape post ruled. Today, there are plenty of high-quality, non-real-time, software processing options, which yield results that are very close to what Teranex can deliver. In the case of my test project, I actually found that dealing with interlace was best handled by Blackmagic’s own DaVinci Resolve. I don’t necessarily need to get back to 24fps, but only to get the cleanest possible 30fps image. So my first target was to convert the 29.97i clips into a good 29.97p sequence. This was possible through Resolve’s built-in de-interlacing. Progressive frames always up-convert with fewer artifacts than interlaced clips. Once I had a good 29.97p file, then I could test the Teranex conversion capabilities.

I tested conversions with several NLEs, Resolve, After Effects and the Teranex hardware. While each of the options gave me usable HD copies, the best overall was using the Teranex unit – passing through it in real-time via SDI in and out. Teranex not only gave me cleaner results, as evidenced by fine edges (fewer “jaggies”), but I could also dial in noise reduction and sharpening to taste.

All processing is subject to GIGO (garbage in, garbage out). You can never make awful DV look stunning in HD, much less 4K. It’s simply not possible. However, Blackmagic Design’s Teranex products give you powerful tools to make it look the best that it can. Software processing can get you close, but if fast turnaround is important, then there’s no replacement for real-time processing power. That’s where these Teranex processors continue to shine.

Originally written for Digital Video magazine / Creative Planet Network.

©2016 Oliver Peters

Adobe’s Summer 2016 Refresh

Adobe is on track for the yearly refresh of its Creative Cloud applications. They have been on a roll with their professional video solutions – especially Premiere Pro CC – and this update is no exception. Since this is not a new, across-the-board Creative Cloud version update, the applications keep the CC 2015 moniker, except with a point increase. For example, Premiere Pro CC becomes version 2015.3, not CC 2016. Let me dive into what’s new in Premiere Pro, Audition, Adobe Media Encoder and After Effects.

Premiere Pro CC 2015.3

Adobe has captured the attention of the professional editing community with Premiere Pro and has held it with each new update. CC 2015.3 adds numerous new features in direct response to the needs of editors, including secondary color correction, a proxy workflow, a 360VR viewer and more.

New Lumetri features

The Lumetri color panel brought over the main color correction tools from SpeedGrade CC, configured into a Lightroom-style panel. For editors, Lumetri provides nearly everything they need for standard color correction, so there’s rarely any need to step outside of Premiere Pro. Three key features were added to Lumetri in this update.

First is a new white balance eyedropper. Lumetri has had temperature and tint sliders, but the eyedropper makes white balance correction a one-click affair. However, the new marquee feature is the addition of SpeedGrade’s HSL Secondary color correction. Use an eyedropper to select the starting color that you want to affect. Then use the “add” or “remove color” eyedroppers to adjust the selection. To further refine the isolated color, which is essentially a key, use the HSL, denoise and blur sliders. The selected color range can be viewed against black, white or gray to check the accuracy of the adjustment. You can then change the color using either the single or three-wheel color control. Finally, the secondary control also includes its own sliders for temperature, tint, contrast, sharpening and saturation.

In the rest of the Lumetri panel, Adobe changed the LUT (color look-up table) options. You can pick a LUT from the input and/or creative tab. The new arrangement is more straightforward than when first introduced. Now only camera gamma correction LUTs (like ARRI Log-C to Rec 709) appear in the input tab and color style LUTs show up in the creative tab. Adobe LUTs plus SpeedLooks LUTs from LookLabs are included as creative choices. Previously you had to use a SpeedLooks camera LUT in tandem with one of the SpeedLooks creative LUTs to get the right correction. With this update, the SpeedLooks creative LUTs are all designed to be added to Rec 709 gamma, which makes these choices far more functional than before. You can now properly use one of these LUTs by itself without first needing to add a camera LUT.

New proxy workflow

Apple Final Cut Pro X users have enjoyed a proxy workflow since its launch, whereas Adobe always touted Premiere Pro’s native media prowess. Nevertheless, as media files get larger and more taxing on computing systems, proxy files enable a more fluid editing experience. A new ingest tool has been added to the Media Browser. So now from within Premiere Pro, you can copy media, transcode to high-res file formats and create low-res proxies. You can also select clips in a bin and right-click to create proxies, attach proxies and/or relink full-resolution files. There is a new toggle button that you can add to the toolbar, which lets you seamlessly flip between proxy and full-resolution media files. According to Adobe, even if you have proxy selected, any export always draws from the full-resolution media for the best quality.

Be careful with the proxy settings. For example, one of the default sizes is 1024×540, which would be the quarter-frame match for 2K media. But, if you use that for HD clips in a 1920×1080 timeline, then your proxies will be incorrectly pillar-boxed. If you create 720p proxies for 1080p clips, you’ll need to use “scale to frame size” in order to get the right size on the timeline. It’s a powerful new workflow, but take a bit of time to figure out the best option for your needs.
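
The arithmetic behind that caution is easy to check. Here is a small, hypothetical Python sketch (the sizes are just the examples discussed above) comparing proxy and timeline aspect ratios:

```python
# Compare a proxy preset's aspect ratio against the timeline/source aspect ratio.
def aspect(width, height):
    return width / height

hd_timeline = aspect(1920, 1080)      # ~1.778 (16:9)
proxy_quarter_2k = aspect(1024, 540)  # quarter-frame of 2K, ~1.896
proxy_720p = aspect(1280, 720)        # ~1.778 (16:9)

# A mismatch means the image gets padded (pillarboxed) inside the proxy frame.
print(abs(hd_timeline - proxy_quarter_2k) < 0.01)  # False -> wrong shape for HD clips
print(abs(hd_timeline - proxy_720p) < 0.01)        # True  -> same shape, just smaller
```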

Adobe Media Encoder also gains the Media Browser tool, as well as a new ingest function, which has been brought over from Adobe Prelude. Now you can use Media Encoder to copy camera files and/or transcode them to primary and secondary locations. If you need to copy camera cards, transcode a full-res master file and also transcode a low-res proxy file, then this complete workflow can be handled through Media Encoder.

New 360VR viewer

Premiere Pro CC now sports a new VR-capable viewer mode. Start with monoscopic or stereoscopic, stitched 360-degree video clips and edit them as you normally would. The viewer allows you to pan around inside the clip or view the timeline from a specific point of view. You can see what someone viewing with goggles sees when looking in a given direction. Note that this is not a pan-and-scan plug-in. You cannot drop one of these 360-degree clips into an otherwise 2D 16×9 (“flat”) timeline and use Premiere Pro’s VR function to keyframe a digital move within that clip.

There are other new Premiere Pro CC features that I haven’t yet tested thoroughly. These include new support for Apple Metal (an API that combines the functionality of OpenGL and OpenCL) and for grading control surfaces. Open Caption support has been improved – adding more languages and their native alphabets, including Arabic and Hebrew.

Adobe Audition CC 2015.2

Want better audio mixing control than what’s available inside of Premiere Pro CC? Then Audition CC is the best tool for the job. Premiere Pro timelines translate perfectly and in the last update a powerful retime feature was added. Audition “automagically” edits the duration of a music cue for you in order to fit a prescribed length.

The Essential Sound panel is new in this update. The layout of this panel is the audio equivalent to the Lumetri color panel and also owes its design origins to Lightroom. Select a clip and choose from the Dialogue, Music, SFX or Ambience group. Each group presents you with a different, task-appropriate set of effects presets. For example, when you pick Dialogue, the panel will display tabbed controls for loudness, repairing sound and improving clarity, plus a creative tab. Click on a section of the vertical stack within this panel to reveal the contents and controls for that section.

In the past, the workflow would have been a roundtrip from Premiere Pro to Audition and back. Now you can go directly to Adobe Media Encoder from Audition, which changes the workflow into these steps: cut in Premiere Pro CC, mix in Audition CC, and master/export directly through Adobe Media Encoder. Thus roundtrips are eliminated, because picture is carried through the Audition phase. This export path supports multichannel mix files, especially for mastering containers like MXF. Audition plus Media Encoder now enable you to export a multichannel file that includes a stereo mix plus stereo submix “stems” for dialogue, SFX and music.

After Effects CC 2015.3 and more

After Effects CC has been undergoing an overhaul through successive versions, including this one. Some users complained that the most recent version was a bit of a step backwards, but this is all in an effort to improve performance, as well as to modernize and streamline the product. From my vantage point as an editor who uses After Effects as a utility as much as for occasional motion graphics and visual effects, I really like what Adobe has been doing. Changes in this update include enhanced performance, GPU-accelerated Gaussian blur and Lumetri color correction, better playback of cached frames, and a new a/v preview engine. In the test projects that I ran through it, including the demo projects sent by Adobe, performance was fast and rather impressive. That’s on a 2009 Mac Pro tower.

If you are an animator, then Maxon Cinema 4D is likely a tool that you use in conjunction with After Effects. Animated text and shape layers can now be saved directly into the Cinema 4D file format from After Effects. When you customize your text and shapes in Cinema 4D, the changes are automatically updated in After Effects for a roundtrip 3D motion graphics workflow.

Thanks to the live episode of The Simpsons, in which Homer was animated in real time using Character Animator, this tool is gaining visibility. Character Animator moves to its fourth preview version, even though the application is still technically in prerelease. Some of the enhancements include improved puppet tagging. You can record multiple takes of a character’s movement and then enable your puppet to respond to motion and trigger animation accordingly.

To wrap up, remember that Adobe is promoting Creative Cloud as more than simply a collection of applications. The subscription includes access to over 50 million royalty-free photos, illustrations, vector graphics and video (including 4K clips). According to Adobe, licensed Adobe Stock assets in your library are now badged for easy identification. Videos in your library are displayed with duration and format information and have links to video previews. You can access your Libraries whenever you need them, both when you are connected to the internet and working offline. I personally have yet to use Adobe Stock, but it’s definitely a resource that you should remember is there if you need it.

Click here for Dave Helmly’s excellent overview of the new features in Premiere Pro CC.

Originally written for Digital Video magazine and Creative Planet Network.

©2016 Oliver Peters

Apple iPad Pro

Mark me down as a happy Apple iPad user. It’s my go-to computer away from home, unless I need to bring my laptop for on-site editing. I’ve even written some of my magazine stories, like NAB reports, on it. First the original iPad and now a new Air 2. While I don’t consider myself a post-PC computer user, I could imagine that if I didn’t need to run tools like Resolve, FCPX, and Premiere Pro, an iPad Pro could function as my only computer.

For this review, Apple loaned me the 12.9″ 128GB WiFi+Cellular iPad Pro, complete with all the bells-and-whistles, including the Apple Pencil, Lightning-to-SD Card Camera Reader, Case, Smart Cover, and Smart Keyboard. The Pro’s A9X processor is beefy for a tablet. Other reviewers have noted its performance rivals Apple’s smallest MacBook with the Intel Core M CPU. Since the iPad Air 2 processor is only one step down, you won’t see that much difference between it and the iPad Pro on most iOS applications. However, the A9X delivers twice the CPU and graphics performance of the Air 2’s A8X, so there is a difference in driving the larger 12.9” Pro screen, as well as with multitasking and animation-heavy applications.

Many specs are the same between these two models, with the exception that the iPad Pro includes a total of four speakers and adds a Smart Connector to be used with the Smart Keyboard. In addition, the Pro’s touch screen has been re-engineered to scan at 240 times/second (twice as fast as scanning for your finger) in support of the Apple Pencil. On March 21st Apple launched a second iPad Pro model using the same 9.7” form factor as the iPad Air 2. Other than screen size, the two Pro models sport nearly identical specs, including A9X processor, four speakers, and Smart Connector. Now there’s also a Smart Keyboard specifically designed for each model. Since I tested the larger version, the rest of this review is in the context of using the 12.9” model.

The big hallmark in iOS 9 is multitasking, which lets you leave two applications open and on-screen, side-by-side at one time. You can go between them and slide the divider bar to change app size or move them completely on or off of the screen. This feature is superb on the iPad Pro, aided by the bigger screen real estate. It’s not quite as functional on the other iPads. However, many applications and web pages don’t feel quite optimized for the larger screen of the iPad Pro. It often feels like pages are slightly blown up or that there’s a lot of wasted space.

Accessories

The iPad Pro starts to stand out once you accessorize it. You can get an Apple case, Smart Cover and/or Smart Keyboard. The covers magnetically attach to the iPad, so be careful. If you hold or lift the heavier iPad Pro by the cover, it can detach, resulting in the Pro potentially dropping to the floor. Both the Smart Cover and the Smart Keyboard can fold into a stand to prop up the iPad Pro on a desk. When you fold the Smart Keyboard back into a cover, it’s a very slim lid that fits over the screen. The feel of the keyboard is OK, but I prefer the action of the small, standalone Apple Bluetooth keyboard, which I use with my own iPad. Other reviewers have also expressed a preference for the Logitech keyboard available for the Pro. These new keyboards are enabled by the Smart Connector with its two-way power and data transfer, so no battery is required for the keyboard.

The new Apple Pencil is getting the most press. Unlike other pointing devices, the Pencil requires charging and can only be paired with the iPad Pro. The Pencil is clearly a blast to use with Pixelmator or FiftyThree’s Paper. It’s nicely weighted and feels as close to drawing with a real pen or pencil as you can get with an electronic stylus. It responds with pressure-sensitivity and you can even shade with the side of the tip. For drawing in apps like this, or Photoshop Express, Autodesk Graphic, Art Studio, etc., the Pencil is clearly superior to low-cost third-party styli or your finger. FiftyThree also offers its own drawing styli that are optimized for use with the Paper application.

As a pointing device, the Apple Pencil isn’t quite as good, since it was designed for fine detail. According to Apple, their design criterion was pixel-level precision. The Pencil does require charging, which you can do by plugging it into the iPad’s Lightning port, or by using the regular Lightning cable and charger via a small adapter ring. When the Pencil gets low on juice a warning pops up on the iPad Pro’s screen. Plug it into the Lightning port for a quick boost. Apple claims that fifteen seconds will give you thirty minutes of use and my experience bore this out.

The final accessory to mention is the Lightning-to-SD Card Camera Reader. The lightning port supports USB 3.0 speeds on the iPad Pro to make transfers fast. Plug the reader into the lightning port and pop your SD card into the reader. The Photos application will open to the contents of the card and you can import a selection of clips. Unfortunately, there is no generic way to transfer files into the iPad using SD cards. I’ve been able to cheat it a little by putting some renamed H.264 files into the DCIM folder structure from a Canon 5D camera. This made everything look like valid camera media. Then I could move files into Photos, which is Apple’s management tool for both camera stills and videos on the iPad. However, it doesn’t work for all files, such as graphics or audio tracks that you might use for a voice-over.

Using the iPad Pro as a professional video tool

Is the iPad Pro better for the video professional when compared with other tablets and iPads? Obviously the bigger screen is nice if you are editing in iMovie, but can one go beyond that?

I worked with a number of applications, such as FiLMiC Pro. This application adds real camera controls to the built-in camera. These include ISO, white balance, focus, frame rates, and stabilization controls. It was used in the production of the Sundance hit, Tangerine, and is a must-have tool if you intend to do serious captures with any iOS device. The footage looks good and H.264 compression (starting at 32Mbps) artifacts are not very visible. Unfortunately, there’s no shutter angle control to induce motion blur, which would smooth out the footage.

To make real production viable, you would need camera rigging and accessories. The weight of the 12.9″ iPad Pro makes it tough to shoot steady hand-held footage. Outside in bright daylight, the screen is too dim even at its brightest setting. Having some sort of display hood is a must. In fact, the same criticism is true if you are using it to draw outside. Nevertheless, if you mounted an iPad or iPad Pro in some sort of fixed manner, it would be very useful for recording interviews and similar, controllable productions. iOgrapher produces some of these items, but the larger iPad Pro model isn’t supported yet.

For editors, the built-in option is iMovie. It is possible to edit external material if you bring it in via the card reader, Dropbox, iCloud Drive, or by syncing with your regular computer. (Apple’s suggested transfer path is via AirDrop.) Once you’ve edited your piece, you can move the project file from iOS iMovie to iMovie on your computer using iCloud Drive and then import that project into Final Cut Pro X. In my tests, the media was embedded into the project and none of the original timecode or file names were maintained. Frame rates were also changed from 29.97fps to 30.0fps. Clearly if you intend to use this path, it’s best for video originated on the iPad itself.

If you want a professional nonlinear editing tool for the iPad, nothing even comes close to TouchEdit, an app developed by feature film editor Dan Lebental (Ant-Man, Iron Man, Cowboys & Aliens) and his team. This app includes many of the tools an editor would expect, such as trimming, titles and audio mixing, plus it tracks all of the important clip metadata. There is a viable workflow to get clips into – and an edit list and/or movie out of – the iPad. Lebental started with a skeuomorphic interface design that borrows from the look of a flatbed editor. The newest version of the software includes the option for a flattened interface skin, plus a portrait and landscape layout, each of which enables somewhat different capabilities. TouchEdit is attractive as an offline editing tool that definitely benefits from the larger size and improved performance of the iPad Pro.

Final thoughts

I used the 12.9” iPad Pro for three months. It’s a wonderful tool, but also a mixed bag. The more ample screen real estate makes it easier to use than the 9.7” iPad models. However, the smaller device is tweaked so that many pages are displayed a bit differently. Thus the size advantage of the larger Pro model is less pronounced. Like all iPads, the Pro uses the same iOS operating system. This holds back the potential of the Pro, which begs for some sort of hybrid “iOS Pro” operating system that would make the iPad Pro work more like a laptop. Naturally, Apple’s position is that iPads are “touch-first” devices and iOS a “touch-first” operating system. The weakest spot is the lack of true file i/o and a visible file structure. You have to go through Dropbox, iCloud, Photos, AirDrop, e-mail, or be connected to iTunes on your home machine.

The cost of the iPad Pro would seem to force a decision between buying the 12” MacBook and the 12.9″ iPad Pro. Both are of similar size, weight, and performance. In John Gruber’s Daring Fireball review he opined that in the case of the iPad Pro, “professional” should really be thought of in the context of “deluxe”. According to him, the iPad Pro relates to the regular iPad line in the same way a MacBook Pro relates to the other MacBooks. In other words, if an iPad serves your needs and you can afford the top-end version, then the Pro is for you. Its target market is thus self-defining. The iPad Pro is a terrific step up in all the things that make tablets the computing choice for many. Depending on your needs, it’s a great portable computer. For the few that are moving into the post-PC world, it could even be their only computer.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2016 Oliver Peters

Voice from the Stone

As someone who’s worked on a number of independent films, I find it exciting when an ambitious feature film project with tremendous potential comes from outside the mainstream Hollywood studio environment. One of these is Voice from the Stone, which features Emilia Clarke and Marton Csokas. Clarke has been a fan favorite in her roles as Daenerys Targaryen in Game of Thrones and the younger Sarah Connor in Terminator Genisys. Csokas has appeared in numerous films and TV series, including Sons of Liberty and Into the Badlands.

In Voice from the Stone, Clarke plays a nurse in 1950s Tuscany who is helping a young boy, Jakob (played by Edward Ding), recover from the death of his mother. He hasn’t spoken since the mother, a renowned pianist, died. According to Eric Howell, the film’s director, “Voice from the Stone was a script that screamed to be read under a blanket with a flashlight. It plays as a Hitchcock fairy tale set in 1950s Tuscany with mysterious characters and a ghostly antagonist.” While not a horror film or thriller, it is about the emotional relationship between Clarke and the boy, but with a supernatural level to it.

Voice from the Stone is Howell’s feature directorial debut. He has worked on numerous films as a director, assistant director, stuntman, stunt coordinator, and in special effects. Dean Zanuck (Road to Perdition, Get Low, The Zero Theorem) produced the film through his Zanuck Independent company. From there, the production takes an interesting turn towards the American heartland, as primary post-production was handled by Splice in Minneapolis. This is a market known for its high-end commercial work, but Splice has landed a solid position as the primary online facility for numerous film and TV series, such as History Channel’s America Unearthed and ABC-TV’s In An Instant.

Tuscany, Minneapolis, and more

Clayton Condit, who co-owns and co-manages Splice with his wife Barb, edited Voice from the Stone. We chatted about how this connection came about. He says, “I had edited two short films with Eric. One of these, Anna’s Playground, made the short list for the 2011 Oscars in the short films category. Eric met with Dean about getting involved with this film and while we were waiting for the financing to be secured, we finished another short, called Strangers. Eric sent the script to Emilia and she loved it. After that everything sort of fell into place. It’s a beautiful script that, along with Eric’s style of directing, fueled amazing performances from the entire cast.”

The actual production covered about 35 days in the Tuscany region of Italy. The exteriors were filmed at one castle, while the interiors were filmed at another. This was a two-camera shoot, using ARRI Alexas recording to ARRIRAW. Anamorphic lenses were used to record in ARRI’s 3.5K 4:3 format, but the final product is desqueezed for a 2.39:1 “scope” 2K master. The DIT on set created editorial and viewing dailies in the ProRes LT file format, complete with synced production audio and timecode burn-in. The assistant editor back at Splice was also loading and organizing the same dailies, so that everything was available there, as well.

Condit explains the timeline of the project, “The production was filmed on location in Italy during November and December of 2014. I was there for the first half of it, cutting on my MacBook Pro on set and in my hotel room. Once I travelled back to Minneapolis, I continued to build a first cut. The director arrived back in the states by the end of January to see early rough assemblies, but it was around mid-February when I really started working a full cut with Eric on the film. By April of 2015 we had a cut ready to present to the producers. Then it took a few more weeks working with them to refine the cut. Splice is a full service post facility, so we kicked off visual effects in May and color starting mid-June. The composer, Michael Wandmacher, created an absolutely gorgeous score that we were able to record during the first week of July at Air Studios in London. We partnered with Skywalker Sound for audio post-production and mix, which took us through the middle of August.”

As with any film, getting to the final result takes time and experimentation. He continues, “We screened for various small groups, listened to feedback, debated and tweaked. The film has a lot of beautiful subtleties to it. We did not want to cheapen it with cliché tricks that would diminish the relationships between characters. It really is first a love story between a mother and her child. The director and producers and I worked very closely together taking scenes out, working on pacing, putting scenes back in, and really making sure we had an effective story.”

Splice handled visual effects ranging from sky replacements to entire green screen composited sequences. Condit explains, “Our team uses a variety of tools including Nuke, Houdini, Maya, and Cinema 4D. Since this film takes place in the 1950s, there were a lot of modern elements that needed to be removed, like TV antennas and distant power lines, for example. There’s a rock quarry scene with a pool of water. When it came time to shoot there, the water was really murky, so that had to be replaced. In addition, Splice also handled a number of straight effects shots. In a couple scenes the boy is on the edge of the roof of the castle, which was a green screen composite, of course. We also shot a day in a pool for underwater shots.”

Pioneering the cut with Final Cut Pro X

Clayton Condit is a definite convert to Apple’s Final Cut Pro X and Voice from the Stone was no exception. Condit says, “Splice originated as an Avid-based shop and then moved over to Final Cut Pro as our market shifted. We also do a lot of online finishing, so we have to be compatible with whatever the offline editor cuts in. As FCP 7 fades away we are seeing more jobs being done in [Adobe] Premiere Pro and we also are finishing with [Blackmagic Design] DaVinci Resolve. Today we are sort of an ‘all of the above’ shop; but for my offline projects I really think FCP X is the best tool. Eric also appreciated his experience with FCP X as the technology never got in the way. As storytellers, we are creatively free to try things very quickly [with Final Cut Pro X].”

“Of course, like every FCP X editor, I have my list of features that I’d like to see; but as a creative editorial tool, hands down it’s the real deal. I really love audio roles, for example. This made it very easy to manage my temp mixes and to hand over scenes to the composer so that he could control what audio he worked with. It also streamlined turnovers. My assistant, Cody Brown, used X2Pro Audio Convert to prepare AAFs for Skywalker. Sound work in your offline is so critical when trying to ‘sell’ your edit and to make sure a scene is really working. FCP X makes that pretty easy and fun. We have an extensive sound library here at Splice. Along with early music cues from Wandmacher, I was able to do fairly decent temp mixes in surround for early screenings inside Final Cut.”

On location, Condit kept his media on a small G-RAID Thunderbolt drive for portability; but back in Minneapolis, Splice has a 600TB Xsan shared storage system for collaboration among departments. Condit’s FCP X library and cache files were kept on small dual-SSD Thunderbolt drives for performance and with mirrored media he could easily transition between working at home or at Splice.

Condit explains his FCP X workflow, “We broke the film into separate libraries for each of the five reels. Each scene was its own event. Shots were renamed by scene and take numbers using different keyword assignments to help sort and search. The film was shot with two cameras, which Cody grouped as multicam clips in FCP X. He used Sync-N-Link X to bring in the production sound metadata. This enabled me to easily identify channel names. I tend to edit in timelines rather than a traditional source and record approach. I start with ‘stringouts’ of all the footage by scene and will use various techniques to sort and track best takes. A couple of the items I’d love to see return to FCP X are tabs for open timelines and dupe detection.”

Final Cut Pro X also has other features to help truly refine the edit. Condit says, “I used FCP X’s retiming function extensively for pace and emotion of shots. With the optical flow technology, it delivers great results. For example, in the opening shot you see two hands – the boy and his mother – playing piano. The on-set piano rehearsal was recorded and used for playback for all takes. Unfortunately it was half the speed of the final cue used in the film. I had to retime that performance to match the final cue, which required putting a keyframe in for every finger push. Optical flow looks so good in FCP X that many of the final online retimes were actually done in FCP X.”

Singer Amy Lee of the band Evanescence recorded the closing title song for the film during the sound sessions at Skywalker. Condit says, “Amy completely ‘got’ the film and articulated it back in this beautiful song. She and Wandmacher collaborated to create something pretty special to close the film with. Our team is fortunate enough now to be creating a music video for the song that was shot at the same castle.”

Zanuck Independent is currently arranging a domestic distribution schedule for Voice from the Stone, so look for it in theaters later this year.

If you want more details, click here for Steve Hullfish’s excellent Art of the Cut interview with Clayton Condit.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2016 Oliver Peters

Film Editor Techniques

Editing is a craft that each editor approaches with similarities and differences in style and technique. If you follow my editor interviews or those at Steve Hullfish’s Art of the Cut series, then you know that most of the top editors are more than willing to share how they do things. This post will go through a “baker’s dozen” set of tips and techniques that hopefully will help your next, large project go just a bit more smoothly.

Transcoding media. While editing with native media straight from the camera is all the rage in the NLE world, it’s the worst way to work on long-term projects. Camera formats vary in how files are named, what the playback load is on the computer, and so on. It’s best to create a common master format for all the media in your project. If you have really large files, like 4K camera media, you might also transcode editing proxies. Cut with these and then flip to the master quality files when it comes time to finish.
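
As one way to picture that step, here is a hedged sketch (assuming ffmpeg is installed, ProRes Proxy is an acceptable editing codec for your pipeline, and the folder names and .mov pattern are placeholders) of batch-creating editing proxies from camera originals:

```python
import subprocess
from pathlib import Path

# Batch-create ProRes Proxy editing files from camera originals.
# Assumes ffmpeg is installed; paths and the file pattern are placeholders.
source_dir = Path("camera_originals")
proxy_dir = Path("editing_proxies")
proxy_dir.mkdir(exist_ok=True)

for clip in sorted(source_dir.glob("*.mov")):
    out = proxy_dir / (clip.stem + "_proxy.mov")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes Proxy
        "-c:a", "pcm_s16le",                     # keep audio as uncompressed PCM
        str(out),
    ], check=True)
```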

Transcode audio. In addition to working with common media formats, it’s a good practice to get all of your audio into a proper format. Most NLEs can deal with a mix of audio formats, bit depths and sample rates, but that doesn’t mean you should. It’s quite common to get VO and temp music as MP3 files with 44.1kHz sampling. Even though your NLE may work with this just fine, it can cause problems with sync and during audio post later. Before you start working with audio in your project, transcode it to .wav or .aif format with 48kHz sampling and 16-bit or 24-bit depth. Higher sampling rates and bit-depths are OK if your NLE can handle them, but they should be multiples of these values.
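
This conform pass can also be scripted. Here is a minimal sketch, again assuming ffmpeg is available, that converts whatever audio lands in an inbox folder to 48kHz, 24-bit .wav files; the folder names are placeholders.

from pathlib import Path
import subprocess

INBOX = Path("audio_inbox")         # placeholder folder of VO, temp music, etc.
CONFORMED = Path("audio_conformed")
CONFORMED.mkdir(exist_ok=True)

for src in sorted(INBOX.iterdir()):
    if src.suffix.lower() not in {".mp3", ".m4a", ".wav", ".aif", ".aiff"}:
        continue
    dst = CONFORMED / (src.stem + "_48k.wav")
    # Resample to 48kHz and write 24-bit PCM
    subprocess.run(["ffmpeg", "-i", str(src),
                    "-ar", "48000", "-c:a", "pcm_s24le", str(dst)], check=True)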

Break up your project files by reel. Most films are broken down into 20-minute “reels”. Typically a feature will have five or six reels that make up the entire film. This is an old-school approach that goes back to the film days, yet it’s still a good way to work in the modern digital era. How this is done differs by NLE brand.

With Media Composer, the root data file is the bin. Therefore, each film reel would be a separate timeline, quite possibly placed into a separate bin. This facilitates collaboration among editors and assistants using different systems, but still accessing the same project file. Final Cut Pro X and Premiere Pro CC don’t work this way. You cannot share the exact same FCPX library or Premiere Pro project file between two editors at one time.

In Final Cut Pro X, the library file is the basic data file/container, so each reel would be in its own library with a separate master library that contains only the final edited sequence for each of the reels. Since FCPX editors can open multiple libraries, it’s possible to work across reels this way or to have different editors open and work on different libraries independently of each other.

With Premiere you can only have a single project file open at one time. When a film is broken into one reel per project, it becomes easy for editors and assistants to work collaboratively. Then a master project can be created to import the final version of each reel’s timeline to create the combined film timeline. Use Premiere Pro’s Media Browser to access sequences within other project files and import them into a new project.

Show/hide, sifting and sorting. Each NLE has its own way of displaying or hiding clips and subclips. Learning how to use these controls will help you speed up the organization of the media. Final Cut Pro X has a sophisticated method of assigning “favorites” and “rejects” to clips and ranges within clips. You can also assign keywords. By selecting what to see and to hide, it’s easy to cull a mass of footage into the few, best options. Likewise with Media Composer and Premiere Pro, you can show and hide clips and also sort by custom column criteria. Media Composer includes a custom sift feature, which is a filtering solution within the bin. It is easy to sift a bin by specific data in certain columns. Doing so hides everything else and reveals only the matching set of media on a per-bin basis.

Stringouts. A stringout is a sequence of selected footage. Many editors use stringouts as the starting point and then whittle down the scene from there. For example, Kirk Baxter likes his assistants to create a stringout for a dialogue scene that is broken down by line and camera. For each line of dialogue, you would see every take and camera angle covering that line of dialogue from wide to tight. Then the next line of dialogue and so on. The result is a very long sequence for the scene, but he can quickly assess the performance and best angle for each portion of the scene. Then he goes through and picks his favorites by pushing the video clip up one track for quick identification. The assistant then cleans up the stringout by creating a second version containing only these selected clips. Now the real cutting can begin.

Julian Clarke has his assistants create a similar stringout for action scenes. All takes and angles are organized back-to-back matching the choreography of the action. So – every angle/take for each crash or blast or punch within the scene. From these he has a clear idea of coverage and how to proceed cutting the scene, which otherwise might have an overwhelming amount of footage at first glance.

I use stringouts a lot for interview-driven documentaries: one sequence per person with everything. The second and third stringouts are successive cutdowns from that initial all-inclusive version. At that stage I also start combining portions of these sequences by topic for a new round of stringouts. These get duplicated and then culled, trimmed and rearranged as I refine the story.

Pancakes and using sequences as sources. When you use stringouts, it’s common to have one sequence become the source for another sequence. There are ways to handle this depending on your NLE. Many will nest the source sequence as a single clip on the new timeline. I contend that nesting should be avoided. Media Composer only allows one sequence in the “record” window to be active at any one time (no tabbed timeline). However, you can also drag a sequence to the source window and its tracks and clips can be viewed by toggling the timeline display between source and record. At least this way you can mark ins and outs for sections. Both Final Cut Pro “legacy” and Premiere Pro enable several sequences to be loaded into the timeline window where they are accessible through tabs. Final Cut Pro X dropped this feature, replacing it with a timeline history button to step forward or backward through several loaded sequences. In all three apps, copy-and-paste is typically the best way to bring clips from one sequence into another.

One innovative approach is the so-called “pancake” timeline, popularized by editor/blogger Vashi Nedomansky. Premiere Pro permits you to stack two or more timelines into separate panels. The selected sequence becomes active in the viewer at any given time. By dragging between timeline panels, it is possible to edit from one sequence to another. This is a very quick and efficient way to edit from a longer stringout of selects to a shorter one with culled choices.

Scene wall. Walter Murch has become synonymous with the scene wall, but in fact, many editors use this technique. In a scene wall, index cards, one for each scene, are placed in story order on a wall or bulletin board. This provides a quick schematic of the story at any given time during the edit. As you remove or rearrange scenes, it’s easy to see what impact that will have. Simply move the cards first and review the wall before you ever commit to doing the actual edit. In addition, with the eliminated cards (representing scenes) moved off to the side, you never lose sight of what material has been cut out of the film. This is helpful to know, in case you want to go back and revisit those scenes.

Skinning, i.e. self-contained files. Another technique Murch likes to use is what he calls adding a skin to the topmost track. The concept is simple. When you have a lot of mixed media and temp effects, system performance can be poor until rendered. Instead of rendering, the timeline is exported as a self-contained file. In turn, that is re-imported into the project and placed onto the topmost track, hiding everything below it. Now playback is smooth, because the system only has to play this self-contained file. It’s like a “skin” covering the “viscera” of the timeline clips below it.

As changes are made to add, remove, trim or replace shots and scenes, an edit is made in this self-contained clip and the ends are trimmed back to expose the area in which changes are being made. Only the part where “edit surgery” happens isn’t covered by the “skin”, i.e. the self-contained file. Next a new export is done and the process is repeated. Because each round of revisions leaves its own self-contained file on a successive track, it’s possible to follow the history of the changes made to the story. Effectively this functions as a type of visual change list.

Visual organization of the bin. Most NLEs feature list and frame views of a bin’s contents. FCPX also features a filmstrip view in the event (bin), as well as a full strip for the selected clip at the top of the screen when in the list view. Unfortunately, the standard approach is for these views to be arranged by sorting criteria or computer defaults, rather than manually. Typically you get a neatly tiled display, but, of course, the decision-making process can be messy.

Premiere Pro at least lets you manually rearrange the order of the tiles, but none of the NLEs is as freeform as Media Composer. The bin’s frame view can be a completely messy affair, which editors use to their advantage. A common practice is to move all of the selected takes up to the top row of the bin and then have everything else pulled lower in the bin display, often with some empty space in between.

Multi-camera. It is common practice, even on smaller films, to shoot with two or more cameras for every scene. Assuming these are used for two angles of the same subject, like a tight and a wide shot on the person speaking, then it’s best to group these as multi-camera clips. This gives you the best way to pick among several options. Every NLE has good multi-camera workflow routines. However, there are times when you might not want to do that, such as in this blog post of mine.

Multi-channel source audio. Generally sound on a film shoot is recorded externally with several microphones being tracked separately. A multi-channel .wav file is recorded with eight or more tracks of material. The location sound mixer will often mix a composite track of the microphones for reference onto channel one and/or two of the file. When bringing this into the edit, how you handle it will vary with each NLE.

Both Media Composer and Premiere Pro will enable you to merge audio and picture into synchronized clips and select which channels to include in the combined file. Since it’s cumbersome to drag along eight or more source channels for every edit in these track-based timelines, most editors will opt to only merge the clips using channel one (the mixed track) of the multi-channel .wav file. There will be times when you need to go to one of the isolated mics, in which case a match-frame will get you back to the source .wav, from which you can pull the clean channel containing the isolated microphone. If your project goes to a post-production mixer using Pro Tools, then the mixer normally imports and replaces all of the source audio with the multi-channel .wav files. This is common practice when the audio work done by the picture editor is only intended to be used as a temp mix.
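
If you ever need to hand off just that reference mix outside of the NLE, a few lines of scripting can pull channel one from the poly .wav. This sketch assumes the third-party Python soundfile library and assumes, as described above, that channel one carries the location mixer’s composite track; the file names are hypothetical.

import soundfile as sf

src = "SCENE12_TAKE03.wav"            # hypothetical 8-track poly .wav from the recorder
data, samplerate = sf.read(src)       # data shape: (frames, channels)

mix = data[:, 0]                      # channel one = the mixed reference track
sf.write("SCENE12_TAKE03_mix.wav", mix, samplerate, subtype="PCM_24")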

With Final Cut Pro X, source clips always show up as combined a/v clips, with multi-channel audio hidden within this “container”. This is just as true with synchronized clips. To see all of the channels, expand the clip or select it and view the details in the inspector. This way the complexity doesn’t clog the timeline and you can still selectively turn on or off any given mic channel, as well as edit within each audio channel. No need to sync only one track or to match-frame back to the audio source for more involved audio clean-up.

Multi-channel mixing. Most films are completed as 5.1 surround mixes – left, center, right, left rear surround, right rear surround, and low-frequency effects (the subwoofer). Films are mixed so that the primary dialogue is mono and largely in the center channel. Music and effects are spread to the left and right channels with a little bit also in the surrounds. Only loud, low frequencies activate the subwoofer channel. Usually this means explosions or a loud music score with a lot of bottom. In order to better approximate the final mix, many editors advocate setting up their mixing rooms for 5.1 surround or at least an LCR speaker arrangement. If you’ve done that, then you need to mix the timeline accordingly. Typically this would mean mono dialogue into the center channel and effects and music to the left and right speakers. Each of these NLEs supports sequence presets for 5.1, which would accommodate this edit configuration, assuming that your hardware is set up accordingly.
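
To make those channel assignments concrete, here is a small sketch that lays a mono dialogue stem into the center channel and a stereo music stem into the left and right channels of a six-channel buffer. It assumes numpy and soundfile, that both stems already share a 48kHz sample rate, and SMPTE channel order (L, R, C, LFE, Ls, Rs); check your delivery spec before relying on any particular order.

import numpy as np
import soundfile as sf

# Both files are assumed to already be 48kHz (see the audio transcoding tip above)
dialogue, sr = sf.read("dialogue_mono.wav")   # shape: (frames,)
music, _ = sf.read("music_stereo.wav")        # shape: (frames, 2)

frames = max(len(dialogue), len(music))
mix = np.zeros((frames, 6))                   # L, R, C, LFE, Ls, Rs (assumed order)

mix[:len(music), 0] = music[:, 0]             # left  <- music left
mix[:len(music), 1] = music[:, 1]             # right <- music right
mix[:len(dialogue), 2] = dialogue             # center <- mono dialogue
# channels 3 (LFE), 4 (Ls) and 5 (Rs) are left silent in this sketch

sf.write("temp_51_mix.wav", mix, sr, subtype="PCM_24")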

Audio – organizing temp sound. It’s key that you organize the sounds used in the edit so that the layout is logical for other editors with whom you may be collaborating. It should also make sense to the post-production mixer who might do the final mix. If you are using a track-based NLE, establish a consistent track layout on the timeline. For example, tracks 1-8 for dialogue, tracks 9-16 for sound effects, and tracks 17-24 for music.

If you are using Final Cut Pro X, then it’s important to spend time with the roles feature. If you correctly assign roles to all of your source audio, it doesn’t matter what your timeline looks like. Once properly assigned, the selection of roles on output – including when using X2Pro to send to Pro Tools – determines where these elements show up on an exported file or inside of a Pro Tools track sheet. The most basic roles assignment would be dialogue, effects and music. With multi-channel location recordings, you could even assign a role or subrole for each channel, mic or actor. Spending a little of this time on the front end will greatly improve efficiency at the back end.

For more ideas, click on the “tips and tricks” category or start at 12 Tips for Better Film Editing and follow the bread crumbs forward.

©2016 Oliver Peters