BorisFX BCC 10


Boris Continuum Complete (BCC) by BorisFX is the epitome of a “Swiss Army knife” among plug-ins. If editors can have only one toolkit to cover a diverse range of picture enhancements, most will pick this package over others. In the past year, BorisFX has upgraded this toolkit with new effects, expanded support to more NLE hosts, and, after the acquisition of Imagineer Systems, integrated mocha’s Academy Award-winning planar tracking technology. This set of plug-ins is now up to version BCC10. BorisFX has not only added new effects to BCC10, but also expanded its licensing options to include multi-host and subscription options.

Since many users now work with several NLEs, multi-host licensing makes a lot of sense. One purchase with a single serial number covers the installation for each of the various applications. There are two multi-host license versions: one for Avid/Adobe/Apple/OFX and the second that doesn’t include Avid. OFX licensing covers the installation for Blackmagic Design DaVinci Resolve, as well as Sony Vegas Pro for PC users.

What’s new in BCC10

Boris Continuum Complete version 10 includes over 230 effects within 16 different categories, like 3D Objects, Art Looks, Particles, Perspective and more. Each effect comes with numerous presets for a total of over 2,500 presets in all. There are plenty of new tools in BCC10, but the biggest news is that each effect filter integrates mocha planar tracking. BorisFX has always included Pixel Chooser as a way of masking objects. Now each filter also lets you launch the mocha interface right from inside the plug-in’s effect control panel. For example, if you are applying skin smoothing to only your talent’s forehead using the new BCC Beauty Studio, simply launch mocha, create a mask for the forehead and track the talent’s movement within the shot. The mask and track are saved within the plug-in, so you can instantly see the results.

A second big change is the addition and integration of the FX Browser. Each plug-in effect lets you launch the FX Browser interface to display how each of the various presets for that effect would look when applied to the selected clip. You can preview the whole clip, not just a thumbnail. FX Browser is also a standalone effect that can be applied to the clip. When you use it that way, all presets for all filters can be previewed. While FX Browser has been implemented in past versions in some of the hosts, this is the first time that it’s become an integrated part of the BCC package across all NLEs.

BCC10 includes two new “studio” tools, as well as a number of new individual effects. BCC Beauty Studio is a set of tools in a single filter targeted at image retouching, especially the skin texture of talent. Photographers retouch “glamor” shots to reduce or remove blemishes, so Photoshop-style retouching is almost expected these days. This is the digital video equivalent. As with most skin smoothing filters, BCC Beauty Studio uses skin keying algorithms to isolate skin colors. It then blurs skin texture, but also lets the editor adjust contrast, color correction, and even add a subtle glow to image highlights. Of course, as I mentioned above, mocha masking and tracking is integrated for the ultimate control in where and how the effect is applied.

The second new, complex filter is BCC Title Studio. This is an integrated 3D titling tool that can be used via templates within the effects browser or by launching the separate Title Studio interface. Editors familiar with BorisFX products will recognize this titling interface as essentially Boris RED right inside of their NLE. Not only can you create titles, but also more advanced motion graphics. You can even import objects, EPS and image files for 3D effects, including the addition of materials and shading. As with other BorisFX titling tools, you can animate text on and off the screen.

In addition to these two large plug-ins, BCC10 also gained nine new filters and transitions. These include BCC Remover (fills in missing pixels or removes objects using cloning) and BCC Drop-out Fixer (restores damaged footage). For the folks who have to deal with a lot of 4×3 content and vertical cell phone footage, there’s BCC Reframer. Unlike the usual approach where the same image is stretched and blurred behind the vertical shot, this filter includes options to stylize the foreground and background.

The trend these days is to embrace image “defects” as a creative effect, so two of the new filters are BCC Light Leaks and BCC Video Glitch. Each adds organic, distressed effects, like in-camera light contamination and corrupted digital video artifacts. To go along with this, there are also four new transitions, including a BCC Light Leaks Dissolve, Cross Glitch, Cross Zoom and Cross Melt. Of these, the light leaks, glitch and zoom transitions are about what you’d expect from their names; the melt transition, however, seems rather unique. In addition to the underlying dissolve between two images, there are a variety of effects options that can be applied as part of this transition. Many of these are glass, plastic, prism or streak effects, which add an interesting twist to this style of transition.

In use

The new BCC10 package works within the established hosts much like it always has, so no surprises there. The Boris Continuum Complete package used to come bundled with Avid Media Composer, but unfortunately that’s no longer the case. Avid editors who want the full BCC set have to purchase it. As with most plug-ins, After Effects is generally the best host when adjustment and manipulation of effects are required.

A new NLE to consider is DaVinci Resolve. Many are testing the waters to see if Resolve could become their NLE of choice. Blackmagic Design introduced Resolve 12.5 with even more focus on its editing toolset, including new, built-in effect filters and transitions. In my testing, BCC10 works reasonably well with Resolve 12.5 once you get used to where the effects are. Resolve uses a modal design with editing and color correction split into separate modes or pages. BCC10 transition effects only show up in the OFX library of the edit page. For filter effects, which are applied to the whole clip, you have to go to the color page. During the color correction process you may add any filter effect, but it has to be applied to a node. If you apply more than one filter, you have to add a new node for each filter. With the initial release of BCC10, mocha did not work within Resolve. If you tried to launch it, a message came up that this functionality would be added at a later time. In May, BorisFX released BCC10.2, which included mocha for both Resolve 12.5 and Vegas Pro. To use the BCC10 effects with Resolve 12.5 you need the paid Studio version and not the free version of Resolve.

BorisFX BCC10 is definitely a solid update, with new features, mocha integration and better GPU-based performance. It runs best in After Effects CC, Premiere Pro CC and Avid Media Composer. The built-in effects tools are pretty good in After Effects, Final Cut Pro X and Resolve 12.5 – meaning you might get by without needing what BCC10 has to offer. On the other hand, they are unfortunately very mediocre in Premiere Pro or Media Composer. If one of those is your editing axe and you want to improve the capabilities of your editing application, then BCC10 becomes an essential purchase. Regardless of which tool you use, BCC10 will give you more options to stretch your creativity.

On a related note, at IBC 2016 in Amsterdam, BorisFX announced the acquisition of GenArts. This means that the Sapphire effects are now housed under the BorisFX umbrella, which could make for some interesting bundling options in the future. As with their integration of mocha tracking into the BCC effects, future versions of BCC and/or Sapphire might also see a sharing of compatible technologies across these two effects families. Stay tuned.

Originally written for Digital Video magazine / Creative Planet Network

©2016 Oliver Peters

The wait is over – FCP X 10.3

Amidst the hoopla on Oct. 27th, when Apple introduced the new MacBook Pro with Touch Bar, the ProApps team also released updates to Final Cut Pro X, Motion and Compressor. This was great news for fans, since Final Cut got a prime showcase slot in the event’s main stage presentation. Despite the point numbering, the bump from 10.2 to 10.3 is a full version change, just like in macOS, where 10.11 (El Capitan) to 10.12 (Sierra) is also a new version. This makes FCP X 10.3 the fourth iteration in the FCP X line and the eleventh under the Final Cut Pro brand. I’m a bit surprised that Apple didn’t drop the “X” from the name, though, seeing as it’s done that with macOS itself. And speaking of operating systems, this release requires 10.11.4 (El Capitan) or higher (Sierra).

If you already purchased the application in the past, then this update will be a free upgrade for you. There are numerous enhancements, but three features stand out among the changes: the new interface, the expanded use of roles for mixing, and support for a wider color gamut.

A new look for the user interface

The new user interface is darker and flatter, although for my taste it’s a bit too dark, with no brightness slider to customize the appearance. The dimensional style is gone, putting Final Cut Pro X in line with the aesthetics of iMovie and other Apple applications. Final Cut Pro X was already out of step with design trends at the time it was first released. Reskinning the application with this new appearance brings it in line with the rest of the design industry.

The engineers have added workspaces and rearranged where certain controls are, though generally, panels are in the same places as before. Workspaces can be customized, but not nearly to the level of Adobe’s Premiere Pro CC. The most welcome of these changes is that the inspector pane can be toggled to full height when needed. In reality, the inspector height isn’t changed. It’s the width of the timeline that changes and toggles between covering and revealing the full inspector panel.

There are other minor changes throughout 10.3, which make it a much better application. For example, if you like to work with a source/record, 2-up viewer display, then 10.3 now allows you to play a source clip from inside the event viewer.

Magnetic Timeline 2 and the expansion of roles

Apple did a lot of work to rejigger the way the timeline works and to expand the functionality of roles. It’s even being marketed as Magnetic Timeline 2. Up until now, the use of roles in Final Cut has been optional. With 10.3, it’s become the primary way to mix and organize connected clips within the timeline. Apple has resisted adding a true mixing panel, instead substituting the concept of audio lanes.

Let’s say that you assign the roles of dialogue, music or effects to your timeline audio clips. The timeline index panel lets you organize these clips into groups according to their assigned roles, which Apple calls audio lanes. If you click “show audio lanes”, the various connected clips rearrange vertical position in the timeline window to be grouped into their corresponding lanes, based on roles. Now you have three lanes of grouped clips: dialogue, effects, music. You can change timeline focus to individual roles – such as only dialogue – which will minimize the size of all the other roles (clips) in the window. These groups or lanes can also be soloed, so you just hear dialogue without the rest, for example.

There is no submix bus to globally control or filter groups of clips, like you have in Premiere Pro or most digital audio applications. The solution in FCP X 10.3 is to select all clips of the same role and create a compound clip. (Other NLEs refer to this as “nesting”.) By doing so, all of the dialogue, effects and music clips appear on the timeline as only three compound clips – one for each role. You can then apply audio filters or adjust the overall level of that role by applying them to the compound clip.

Unfortunately, if you have to go back and make adjustments to an individual clip, you’ll have to open up the compound clip in its own timeline. When you do that, you lose the context of the other clips. For example, tweaking a sound effect clip inside its compound clip means that you would only hear the other surrounding effect clips, without dialogue and music, and without seeing the video. In addition, you won’t hear the result of filters or volume changes made at the top level of that compound clip. Nevertheless, it’s not as complex as it sounds and this is a viable solution, given the design approach Apple engineers have taken.

It does surprise me that they ended up with this solution, because it’s a very modal way of operating. This would seem to be anathema to the intent of much of the rest of FCP X’s design. One has to wonder whether or not they’ve become boxed in by their own architecture. Naturally, others will counter that this process is simplified due to the lack of track patching and submix matrices.

Wide color

The industry at large is embracing color standards that enable displays to reproduce more of the color spectrum that the human eye can see. An under-the-hood change with FCP X is the embrace of wide gamut color. I think that calling it “wide color” dumbs down the actual standards, but I guess Apple wants to keep things in plain language. In any case, the interface is pretty clear on the actual specs.

Libraries can be set up for “standard color” (Rec. 601 for SD and Rec. 709 for HD) or “wide color” (Rec. 2020). The Projects (sequences) that you create within a Library can be either, as long as the Library was initially set up for wide gamut. You can also change the setting for a Project after the fact. Newer cameras that record in raw or log color space, like RED or ARRI models, are perfectly compatible with wide color (Rec. 2020) delivery, thanks to post-production color grading techniques. That is where this change comes into play.

For the most part you won’t see much difference in normal work, unless you really crank up the saturation. If you do this in the wide color gamut mode, you can get pretty extreme and the scopes will display an acceptable signal. However, if you then switch the Project setting to standard color, the high chroma areas will change to a somewhat duller appearance in the viewer and the scopes will show signal clipping. Most current television display systems don’t display wide gamut color, yet, so it’s not something most users need to worry about today. This is Apple’s way of future-proofing Final Cut and to pass the cleanest possible signal through the system.

A few more things

Numerous other useful tools were added in this version. For example, Flow – a morphing dissolve – for use in bridging jump cuts. Unlike Avid’s or Adobe’s variations, this transition works in real-time without analysis or rendering. This is because it morphs between two still frames. Each company’s approach has a slightly different appearance, but Flow definitely looks like an effect that will get a lot of use – especially with interview-driven productions. Other timeline enhancements include the ability to easily add and toggle audio fades. There’s simplified top and tail trimming. Now you can remove attributes and you can roll (trim) between adjacent, connected clips. Finally – a biggie for shared storage users – FCP X can now work with NAS systems that use the SMB protocol.

Having worked with it for over a week at the time I post this, I’ve found the application to be quite stable, even on a production with over 2,000 4K clips. I wouldn’t recommend upgrading if you are in the middle of a production, though. The upgraded Libraries I tested did exhibit some flakiness that wasn’t there in freshly created Libraries. There’s also a technique to keep both 10.2 and 10.3 active on the same computer. Definitely trash your preferences before diving in.

So far, the plug-ins and Motion templates still work, but you’ll definitely need to check whether these vendors have issued updates designed for this release. This also goes for the third-party apps, like those from Intelligent Assistance, because 10.3 adds a new version of FCPXML. Both Intelligent Assistance and Blackmagic Design issued updates (for Resolve and Desktop Video) by the next day.

There are a few user interface bugs, but no show-stoppers. For instance, the application doesn’t appear to hold its last state upon close, especially when more than one Library is open. When you open it again the next time, the wrong Library may be selected or the wrong Project loaded in the timeline. It occasionally loses focus on the pane selected. This is an old bug that was there in previous versions. You are working in the timeline and all of a sudden nothing happens, because the application “forgot” which pane it’s supposed to have focus on. Pressing command-1 seems to fix this. Lastly, the audio meters window doesn’t work properly. If you resize it to be slimmer, the next time you launch FCP X, the meters panel is large again, even if you updated the workspace with this smaller width. And sometimes the meters don’t display audio until you close and reopen the audio meters window.

In this round of testing, I’ve had to move around Libraries with external media to different storage volumes. This requires media relinking. While it was ultimately successful, the time needed to relink was considerably longer than doing this same task in other NLEs.

My test units are all connected to Blackmagic Design i/o hardware, which seems to retard performance a bit. With a/v output turned off within the FCP X interface, clips play right away without stuttering when I hit the spacebar. With the a/v output on, I randomly get stuttering on clips when they start to play. It’s only a minor nuisance, so I just turn it off until I need to see the image on an external monitor. I’ve been told that AJA hardware performs better with FCP X, but I haven’t had a chance to test this myself. In any case, I don’t see this issue when running the same media through Premiere Pro on the exact same computer, storage and i/o hardware.

Final Cut Pro X 10.3 will definitely please most of its fans. There’s a lot of substance and improvement to be appreciated. It also feels like it’s performing better, but I haven’t had enough time with a real project yet to fully test that. Of course, the users who probe a bit deeper will point to plenty of items that are still missing (and available in products like Premiere Pro), such as better media relinking, more versatile replace edit functions and batch exporting.

For editors who’ve only given it a cursory look in the past or were swayed by the negative social media and press over the past five years, this would be the version to re-evaluate. Every new or improved item is targeted at the professional editor. Maybe it’s changed enough to dive in. On the other hand, if you’re an editor who’s given FCP X a fair and educated assessment and just not found it to your liking or suitable for your needs, then I doubt 10.3 will tempt you. Regardless, this gives fans some reassurance about Apple’s commitment to professional users of their software – at least for another five years.

If you have the time, there are plenty of great tips here at the virtual Final Cut User Group.

The new Final Cut Pro X 10.3 user manual can be found here.

Click here for additional links highlighting features in this update.

Originally written for Digital Video magazine / Creative Planet Network

©2016 Oliver Peters

Tools for Dealing with Media


Although most editing application manufacturers like to tout how you can just go from camera to edit with native media, most editors know that’s a pretty frustrating way to work. The norm these days is for the production team to use a whole potpourri of professional and prosumer cameras, so it’s really up to the editor to straighten this out before the edit begins. Granted, a DIT could do all of this, but in my experience, the person being called a DIT is generally just someone who copies/backs up the camera cards onto hard drives to bring back from the shoot. As an editor you are most likely to receive a drive with organized copies of the camera media cards, but still with the media in its native form.

Native media is fine when you are talking about ARRI ALEXA, Canon C300 or even RED files. It is not fine when coming from a Canon 5D, DJI, iPhone, Sony A7S, etc. The reason is that these systems record long-GOP media without valid timecode. Most do not generate unique file names. In some cases, there is no proper timebase within the files, so time itself is “rubbery” – meaning a frame’s true duration varies slightly from one frame to the next.

If you remove the A7S .mp4 files from within the clutter of media card folders and take these files straight into an NLE, you will get varying results. There is a signal interpreted as timecode by some tools, but not by others. Final Cut Pro X starts all of these clips at 00:00:00:00, while Premiere Pro and Resolve read something that is interpreted as timecode, which ascends sequentially on successive clips. Finally, these cameras have no way to deal with off-speed recordings, such as recording at a higher frame rate with the intent to play it back in slow motion. You can do that with a high-end camera, but not with these prosumer products. So I’ve come to rely on several software products heavily in these types of productions.

Step 1: Hedge for Mac

The first step in any editing is to get the media from the field drives onto the edit system drives. Hopefully your company’s SOP is to archive this media from the field in addition to any that comes out of the edit. However, you don’t want to edit directly from these drives. When you do a Finder copy from one drive to the next there is no checksum verification. In other words, the software doesn’t actually check to make sure the copy is exact without errors. This is the biggest plus for an application like Hedge – copy AND verification.

Hedge comes in a free and a paid version. The free version is useful, but copy and verify is slower than the paid version. The premium (paid) version uses a software component that they call Fast Lane to speed up the verification process so that it takes roughly the same amount of time as a Finder copy, which has no verification. To give you an idea, I copied a 62GB folder from a USB2.0 thumb drive to an external media drive connected to my Mac via eSATA (through an internal card). The process took under 30 minutes for a copy through Hedge (paid version) – about the same as it took for a Finder copy. Using the free version takes about twice as long, so there’s a real advantage to buying the premium version of the application. In addition, the premium version works with NAS and RAID systems.

The interface is super simple. Sources and targets are drag-and-drop. You can specify folders within the drives, so it’s not just a root-level, drive-to-drive copy. Multiple targets and even multiple sources can be specified within the same batch. This is great for creating a master as well as several back-up copies. Finally, Hedge generates a transfer log for written evidence of the copies and verification performed.
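Under the hood, a verified copy is simply a copy followed by re-reading both files and comparing checksums. Hedge’s implementation is its own, but the idea can be sketched in a few lines of Python (the function name, hash algorithm and chunk size here are my assumptions, not anything from Hedge):

```python
import hashlib
import shutil
from pathlib import Path

def verified_copy(src: Path, dst: Path, chunk: int = 1024 * 1024) -> str:
    """Copy src to dst, then re-read both files and compare checksums."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src, dst)

    def digest(path: Path) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large camera files don't exhaust memory
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    src_hash, dst_hash = digest(src), digest(dst)
    if src_hash != dst_hash:
        raise IOError(f"Verification failed for {dst}")
    return src_hash  # log this as written evidence of the transfer
```

Logging the returned hash per clip gives you the same kind of paper trail that Hedge’s transfer log provides.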

Step 2: EditReady

Now that you have your media copies, it’s time to process the prosumer camera media into something more edit-friendly. Since the camera-original files are being archived, I don’t generally save both the original and converted files on my edit system. For all intents and purposes, the new, processed files become my camera media. I’ve used tools like MPEG Streamclip in the past. That still works well, but EditReady from Divergent Media is better. It reads many media formats that other players don’t and it does a great job writing ProRes media. It will do other formats, too, but ProRes is usually the best format for projects that I work with.

One nice benefit of EditReady is that it offers additional processing functions. For example, if you want to bake in a LUT to the transcoded files, there’s a function for that. If you shot at 29.97, but want the files to play at 23.976 inside your NLE, EditReady enables you to retime the files accordingly. Since Divergent Media also makes ScopeBox, you can get a bundle with both EditReady and ScopeBox. Through a software conduit called ScopeLink, clips from the EditReady player show up in the ScopeBox viewer and its scopes, so you can make technical evaluations right within the EditReady environment.

EditReady uses a drag-and-drop interface that allows you to set up a batch for processing. If you have more than one target location or process chain, simply open up additional windows for each batch that you’d like to set up. Once these are fired off, all processes will run simultaneously. The best part is that these conversions are fast, resulting in reliable transcoded media in an edit-friendly format.

Step 3: Better Rename

The last step for me is usually to rename the file names. I won’t do this with formats like ALEXA ProRes or RED, but it’s essential for 5D, DJI and other similar cameras. That’s because these cameras normally don’t generate unique file names. After all, you don’t want a bunch of clips that are named C0001 with a starting timecode of 00:00:00:00 – do you?

While there are a number of batch renaming applications and even Automator scripts that you can create, my preferred application is Better Rename, which is available in the Mac App Store. It has a host of functions to change names, add numbered sequences and append a text prefix or suffix to a name. The latter option is usually the best choice. Typically I’ll drag my camera files from each group into the interface and append a prefix that adds a camera card identifier and a date to the clip name. So C0001 becomes A01_102916_C0001. A clip from the second card would change from C0001 to A02_102916_C0001. It’s doubtful that the A camera would shoot more than 99 cards in a day, but if so, you can adjust your naming scheme accordingly.
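For those who prefer a script to an app, the same prefix scheme can be sketched in Python. The function and its parameters are my own invention for illustration, not part of Better Rename:

```python
from pathlib import Path

def prefix_clips(folder: Path, camera: str, card: int, date: str,
                 dry_run: bool = True) -> list:
    """Prepend a camera/card/date prefix, so C0001.MP4 from the A camera's
    first card shot on 10/29/16 becomes A01_102916_C0001.MP4."""
    renames = []
    for clip in sorted(folder.glob("*.MP4")):
        new_name = f"{camera}{card:02d}_{date}_{clip.name}"
        renames.append((clip.name, new_name))
        if not dry_run:
            clip.rename(clip.with_name(new_name))
    return renames
```

Running it with dry_run=True first lets you review the proposed names before anything on disk is touched.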

There you go. Three simple steps to bulletproof how you work with media.

©2016 Oliver Peters

Audio Splits and Stems in Premiere Pro


When TV shows and feature films are being mixed, the final deliverables usually include audio stems as separate audio files or married to a multi-channel video master file or tape. Stems are the isolated submix channels for dialogue, sound effects and music. These elements are typically called DME (dialogue, music, effects) stems or splits and a multi-channel master file that includes these is usually called a split-track submaster. These isolated tracks are normally at mix level, meaning that you can combine them and the sum should equal the same level and mix as the final composite mixed track.

The benefit of having such stems is that you can easily replace elements, like re-recording dialogue in a different language, without having to dive back into the original audio project. The simplest form is to have 3 stereo stem tracks (6 mono tracks) for left and right dialogue, sound effects and music. Obviously, if you have a 5.1 surround mix, you’ll end up with a lot more tracks. There are also other variations for sports or comedy shows. For example, sports shows often isolate the voice-over announcer material from on-camera dialogue. Comedy shows may isolate the laugh track as a stem. In these cases, rather than 3 stereo DME stems, you might have 4 or more. In other cases, the music and effects stems are combined to end up with a single stereo M&E track (music and effects minus dialogue).

Although this is common practice for entertainment programming, it should also be common practice if you work in short films, corporate videos or commercials. Creating such split-track submasters at the time you finish your project can often save your bacon at some point down the line. I ran into this during the past week. A large corporate client needed to replace the music tracks on 11 training videos. These videos were originally edited in 2010 using Final Cut Pro 7 and mixed in Pro Tools. Although it may have been possible to resurrect the old project files, doing so would have been problematic. However, in 2010, I had exported split-track submasters with the final picture and isolated stereo tracks for dialogue, sound effects and music. These have become the new source for our edit – now 6 years later. Since I am editing these in Premiere Pro CC, it is important to also create new split-track submasters, with the revised music tracks, should we ever need to do this again in the future.

Setting up a new Premiere Pro sequence 

I’m usually editing in either Final Cut Pro X or Premiere Pro CC these days. It’s easy to generate a multi-channel master file with isolated DME stems in FCP X, by using the Roles function. However, to do this, you need to make sure you properly assign the correct Roles from the get-go. Assuming that you’ve done this for dialogue, sound effects and music Roles on the source clips, then the stems become self-sorting upon export – based on how you route a Role to its corresponding export channel. When it comes to audio editing and mixing, I find Premiere Pro CC’s approach more to my liking. This process is relatively easy in Premiere, too; however, you have to set up a proper sequence designed for this type of audio work. That’s better than trying to sort it out at the end of the line.

The first thing you’ll need to do is create a custom preset. By default, sequence presets are configured with a certain number of tracks routed to a stereo master output. This creates a 2-channel file on export. Start by changing the track configuration to multi-channel and set the number of output channels. My requirement is to end up with an 8-channel file that includes a stereo mix, plus stereo stems for isolated dialogue, sound effects and music. Next, add the number of tracks you need and assign them as “standard” for the regular tracks or “stereo submix” for the submix tracks.

This is a simple example with 3 regular tracks and 3 submix tracks, because this was a simple project. A more complete project would have more regular tracks, depending on how much overlapping dialogue, sound effects or music you are working with on the timeline. For instance, some editors like to set up “zones” for types of audio. You might decide to have 24 timeline tracks, with 1-8 used for dialogue, 9-16 for sound effects and 17-24 for music. In this case, you would still only need 3 submix tracks for the aggregate of the dialogue, sound effects and music.

Rename the submix tracks in the timeline. I’ve renamed Submix 1-3 as DIA, SFX and MUS for easy recognition. With Premiere Pro, you can mix audio in several different places, such as the clip mixer or the audio track mixer. Go to the audio track mixer and assign the channel output and routing. (Channel output can also be assigned in the sequence preset panel.) For each of the regular tracks, I’ve set the pulldown for routing to the corresponding submix track. Audio 1 to DIA, Audio 2 to SFX and Audio 3 to MUS. The 3 submix tracks are all routed to the Master output.

The last step is to properly assign channel routing. With this sequence preset, master channels 1 and 2 will contain the full mix. First, when you export a 2-channel file as a master file or a review copy, by default only the first 2 output channels are used. So these will always get the mix without you having to change anything. Second, most of us tend to edit with stereo monitoring systems. Again, output channels 1 and 2 are the default, which means you’ll always be monitoring the full mix, unless you make changes or solo a track. Output channels 3-8 correspond to the stereo stems. Therefore, to enable this to happen automatically, you must assign the channel output in the following configuration: DIA (Submix 1) to 1-2 and 3-4, SFX (Submix 2) to 1-2 and 5-6, and MUS (Submix 3) to 1-2 and 7-8. The result is that everything goes to both the full mix, as well as the isolated stereo channel for each audio component – dialogue, sound effects and music.
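To make that routing concrete, here is a toy Python model of the channel assignment (the names, structure and sample values are mine for illustration, not anything from Premiere’s interface). Each stereo submix feeds output channels 1-2 plus its own stem pair, which is why the stems always sum back to the full mix:

```python
# Toy model of the 8-channel routing: each stereo submix feeds the
# full mix on output channels 1-2 AND its own isolated stem pair.
ROUTING = {
    "DIA": (3, 4),  # dialogue stem on output channels 3-4
    "SFX": (5, 6),  # sound effects stem on 5-6
    "MUS": (7, 8),  # music stem on 7-8
}

def route(submixes: dict) -> list:
    """submixes maps a submix name to one (left, right) sample pair;
    returns the 8 output channel values for that instant."""
    out = [0.0] * 8
    for name, (left, right) in submixes.items():
        out[0] += left   # channels 1-2 accumulate the full mix
        out[1] += right
        stem_l, stem_r = ROUTING[name]
        out[stem_l - 1] = left   # channels 3-8 carry the isolated stems
        out[stem_r - 1] = right
    return out
```

Feeding it one sample pair per submix shows channels 1-2 carrying the composite while 3-8 hold the untouched stems, which is exactly the behavior the export step below relies on.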

Editing in the custom timeline

Once you’ve set up the timeline, the rest is easy. Edit any dialogue clips to track 1, sound effects to track 2 and music to track 3. In a more complex example, like the 24-track timeline I referred to earlier, you’d work in the “zones” that you had organized. If 1-8 are routed to the dialogue submix track, then you would edit dialogue clips only to tracks 1-8. Same for the corresponding sound effects and music tracks. Clip levels can still be adjusted as you normally would. But, by having submix tracks, you can adjust the level of all dialogue by moving the single DIA submix fader in the audio track mixer. This can also be automated. If you want a common filter added to an entire stem – say, a compressor across all sound effects – simply assign it from the pulldown within that submix channel strip.

Exporting the file

The last step is exporting your split-track submaster file. If this isn’t correct, the rest was all for naught. The best formats to use are either a QuickTime ProRes file or one of the MXF OP1a choices. In the audio tab of the export settings panel, change the channel selection pulldown from Stereo to 8 channels. Now each of your timeline output channels will be exported as a separate mono track in the file. These correspond to your 4 stereo mix groups – the full mix plus stems. Now in one single, neat file, you have the final image and mix, along with the isolated stems that can facilitate easy changes down the road. Depending on the nature of the project, you might also want to export versions with and without titles for an extra level of future-proofing.
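
For reference, the eight mono channels in the exported file pair off into the four stereo groups as follows. This layout is an illustrative summary based on the routing described earlier, not anything read out of the export settings.

```python
# Illustrative layout of the 8-channel export: four stereo groups,
# written as eight mono tracks in the file.
channel_labels = {
    1: "Full mix L", 2: "Full mix R",
    3: "Dialogue L", 4: "Dialogue R",
    5: "SFX L",      6: "SFX R",
    7: "Music L",    8: "Music R",
}

# Pair the mono channels back into stereo groups: (1,2), (3,4), (5,6), (7,8).
stereo_groups = [(ch, ch + 1) for ch in range(1, 9, 2)]
print(stereo_groups)  # [(1, 2), (3, 4), (5, 6), (7, 8)]
```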

Reusing the file

If you decide to use this exported submaster file at a later date as a source clip for a new edit, simply import it into Premiere Pro like any other form of media. However, because its channel structure will be read as 8 mono channels, you will need to modify the file using the Modify > Audio Channels contextual menu (right-click the clip). Change the clip channel format from Mono to Stereo, which turns your 8 mono channels back into the left and right sides of 4 stereo channels. You may then ignore the remaining “unassigned” clip channels. Do not change any of the check boxes.

Hopefully, by following this guide, you’ll find that creating timelines with stem tracks becomes second nature. It can sure help you years later, as I found out yet again this past week!

©2016 Oliver Peters

Swiss Army Man

When it comes to quirky movies, Swiss Army Man stands alone. Hank (Paul Dano) is a castaway on a deserted island at his wit’s end. In an act of final desperation, he’s about to hang himself, when he discovers Manny (Daniel Radcliffe), a corpse that’s just washed up on shore. At this point the film diverges from the typical castaway/survival story into an absurdist comedy. Manny can talk and has “magical powers” that Hank uses to find his way back to civilization.

Swiss Army Man was conceived and directed by Dan Kwan and Daniel Sheinert, the writing and directing duo who work under the moniker Daniels. This is their feature-length film debut and was produced with Sundance in mind. The production company brought on Matthew Hannam to edit the film. Hannam (The OA, Enemy, James White) is a Canadian film and TV editor with numerous features and TV series under his belt. I recently spoke with Hannam about the post process on Swiss Army Man.

Hannam discussed the nature of the film. “It’s a very handmade film. We didn’t have a lot of time to edit and had to make quick decisions. I think that really helped us. This was the dozenth or so feature for me, so in a way I was the veteran. It was fun to work with these guys and experience their creative process. Swiss Army Man is a very cinematically-aware film, full of references to other famous films. You’re making a survival movie, but it’s very aware that other survival movies exist. This is also a very self-reflexive film and, in fact, the model is more like a romantic comedy than anything else. So I was a bit disappointed to see a number of the reviews focus solely on the gags in the film, particularly around Manny, the corpse. There’s more to it than that. It’s about a guy who wonders what it might be like had things been different. It’s a very special little film, because the story puts us inside of Hank’s head.”

Unlike the norm for most features, Hannam joined the team after the shooting had been completed. He says, “I came on board during the last few days of filming. They shot for something like 25 days. This was all single-camera work with Larkin Seiple (Cop Car, Bleed For This) as director of photography. They shot ARRI ALEXA XT with Cooke anamorphic lenses. It was shot ARRIRAW, but for the edit we had a special LUT applied to the dailies, so the footage was already beautiful. I got a drive in August and the film premiered at Sundance. That’s a very short post schedule, but our goal was always Sundance.”

Shifting to Adobe tools

Like many of this year’s Sundance films, Adobe Premiere Pro was the editing tool of choice. Hannam continues, “I’m primarily an Avid [Media Composer] editor and the Dans [Kwan and Sheinert] had been using [Apple] Final Cut Pro in the past for the shorts that they’ve edited themselves. They opted to go with Premiere on this film, as they thought it would be easiest to go back and forth with After Effects. We set up a ‘poor man’s’ shared storage with multiple systems that each had duplicate media on local drives. Then we’d use Dropbox to pass around project files and shared elements, like sound effects and temp VFX. While the operation wasn’t flawless – we did experience a few crashes – it got the job done.”

Swiss Army Man features quite a few visual effects shots and Hannam credits the co-directors’ music video background with making this a relatively easy task. He says, “The Dans are used to short turnarounds in their music video projects, so they knew how to integrate visual effects into the production in a way that made it easier for post. That’s also the beauty of working with Premiere Pro. There’s a seamless integration with After Effects. What’s amazing about Premiere is the quality of the built-in effects. You get effects that are actually useful in telling the story. I used the warp stabilizer and timewarp a lot. In some cases those effects made it possible to use shots in a way that was never possible before. The production company partnered with Method for visual effects and Company 3 [Co3] for color grading. However, about half of the effects were done in-house using After Effects. On a few shots, we actually ended up using After Effects’ stabilization after final assembly, because it was that much better than what was possible during the online assembly of the film.”

Another unique aspect of Swiss Army Man is its musical score. Hannam explains, “Due to the tight schedule, music scoring proceeded in parallel with the editing. The initial temp music pulled was quirky, but didn’t really match the nature of the story. Once we got the tone right with the temp tracks, scenes were passed on to the composers – Andy Hull and Robert McDowell – who Daniels met while making a video for their band Manchester Orchestra. The concept for the score was that it was all coming from inside of Hank’s head. Andy sang all the music as if Hank was humming his own score. They created new tracks for us and by the end we had almost no temp music in the edit. Once the edit was finalized, they worked with Paul [Dano] and Daniel [Radcliffe] to sing and record the parts themselves. Fortunately both are great singers, so the final a cappella score is actually the lead actors themselves.”

Structuring the edit

Matthew Hannam and I discussed his approach to editing scenes, especially with this foray into Premiere Pro. He responds, “When I’m on Media Composer, I’m a fan of ScriptSync. It’s a great way to know what coverage you have. There’s nothing like that in Premiere, although I did use the integrated Story app. This enables you to load the script into a tab for quick access. Usually my initial approach is to sit down and watch all the footage for the particular scene while I plan how I’m going to assemble it. The best way to know the footage is to work with it. You have to watch how the shoot progresses in the dailies. Listen to what the director says at the end of a take – or if he interrupts in the middle – and that will give you a good idea of the intention. Then I just start building the scene – often first from the middle. I’m looking for what is the central point of that scene and it often helps to build from the middle out.”

Although Hannam doesn’t use any tricks to organize his footage or create selects, he does use “KEM rolls”. This term stems from the KEM flatbed film editing table. In modern parlance, it means that the editor has strung out all the footage for a scene into a single timeline, making it easy to scrub through all the available footage quickly. He continues, “I’ll build a dailies reel and tuck it away in the bottom of the bin. It’s a great way to quickly see what footage you have available. When it’s time to revise a scene, it’s good to go back to the raw footage and see what options you have. It is a quick way to jog your memory about what was shot.”

A hybrid post workflow

Another integral member of the post team was assistant editor Kyle Gilbertson. He had worked with the co-directors previously and was the architect of the hybrid post workflow followed on this film. Gilbertson pulled all of the shots for VFX that were being handled in-house. Many of the more complicated montages were handled as effects sequences and the edit was rebuilt in DaVinci Resolve before re-assembly in After Effects. Hannam explains, “We had two stages of grading with [colorist] Sofie Borup at Co3. The first was to set looks and get an idea of what the material was going to look like once finished. Then, once everything was complete, we combined all of the material for final grading and digital intermediate mastering. There was a real moment of truth when the 100 or so shots that Daniels did themselves were integrated into the final cut. Luckily it all came together fairly seamlessly.”

“Having finished the movie, I look back at it and I’m full of warm feelings. We kind of just dove into it as a big team. The two Dans, Kyle and I were in that room kind of just operating as a single unit. We shifted roles and kept everything very open. I believe the end product reflects that. It’s a film that took inspiration from everywhere and everyone. We were not setting out to be weird or gross. The idea was to break down an audience and make something that everyone could enjoy and be won over by. In the end, it feels like we really took a step forward with what was possible at home. We used the tools we had available to us and we made them work. It makes me excited that Adobe’s Creative Cloud software tools were enough to get a movie into 700 cinemas and win those boys the Sundance Directing prize. We’re at a point in post where you don’t need a lot of hardware. If you can figure out how to do it, you can probably make it yourself. That was our philosophy from start to finish on the movie.”

Originally written for Digital Video magazine / Creative Planet Network.

©2016 Oliver Peters

Adobe’s Summer 2016 Refresh

df2516_adobe_sm

Adobe is on track for the yearly refresh of its Creative Cloud applications. They have been on a roll with their professional video solutions – especially Premiere Pro CC – and this update is no exception. Since this is not a new, across-the-board Creative Cloud version update, the applications keep the CC 2015 moniker, except with a point increase. For example, Premiere Pro CC becomes version 2015.3, not CC 2016. Let me dive into what’s new in Premiere Pro, Audition, Adobe Media Encoder and After Effects.

Premiere Pro CC 2015.3

Adobe has captured the attention of the professional editing community with Premiere Pro and has held it with each new update. CC 2015.3 adds numerous new features in direct response to the needs of editors, including secondary color correction, a proxy workflow, a 360VR viewer and more.

New Lumetri features

The Lumetri color panel brought over the dominant color correction tools from SpeedGrade CC, reconfigured into a Lightroom-style panel. For editors, Lumetri provides nearly everything they need for standard color correction, so there’s rarely any need to step outside of Premiere Pro. Three key features were added to Lumetri in this update.

First is a new white balance eyedropper. Lumetri has had temperature and tint sliders, but the eyedropper makes white balance correction a one-click affair. However, the new marquee feature is the addition of SpeedGrade’s HSL Secondary color correction. Use an eyedropper to select the starting color that you want to affect. Then use the “add” or “remove color” eyedroppers to adjust the selection. To further refine the isolated color, which is essentially a key, use the HSL, denoise and blur sliders. The selected color range can be viewed against black, white or gray to check the accuracy of the adjustment. You can then change the color using either the single or three-wheel color control. Finally, the secondary control also includes its own sliders for temperature, tint, contrast, sharpening and saturation.
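
Conceptually, an HSL secondary works like a per-pixel key: pixels whose hue, saturation and lightness fall inside the selected ranges receive the correction, and everything else passes through untouched. The sketch below illustrates that idea in plain Python – the function names, key ranges and the desaturation correction are all hypothetical, and none of this is Lumetri code.

```python
import colorsys

def hsl_key(pixel_rgb, hue_range, sat_range, light_range):
    """Return True if an RGB pixel (0-1 floats) falls inside the key."""
    h, l, s = colorsys.rgb_to_hls(*pixel_rgb)  # stdlib returns HLS order
    return (hue_range[0] <= h <= hue_range[1]
            and sat_range[0] <= s <= sat_range[1]
            and light_range[0] <= l <= light_range[1])

def apply_secondary(pixel_rgb, correction, key_args):
    """Apply a correction only where the qualifier matches."""
    return correction(pixel_rgb) if hsl_key(pixel_rgb, *key_args) else pixel_rgb

# Hypothetical example: desaturate only strongly saturated reds.
key = ((0.0, 0.05), (0.5, 1.0), (0.2, 0.8))  # (hue, sat, lightness) ranges

def desaturate(rgb):
    gray = sum(rgb) / 3
    return tuple(0.5 * c + 0.5 * gray for c in rgb)

red = (0.9, 0.2, 0.2)
blue = (0.2, 0.2, 0.9)
print(apply_secondary(red, desaturate, key))   # pulled toward gray
print(apply_secondary(blue, desaturate, key))  # unchanged: (0.2, 0.2, 0.9)
```

The denoise and blur sliders in Lumetri refine the edges of this key across neighboring pixels, which a single-pixel sketch like this leaves out.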

In the rest of the Lumetri panel, Adobe changed the LUT (color look-up table) options. You can pick a LUT from either the input tab, the creative tab or both. The new arrangement is more straightforward than when first introduced. Now only camera gamma correction LUTs (like ARRI Log-C to Rec 709) appear in the input tab and color style LUTs show up in the creative tab. Adobe LUTs plus SpeedLooks LUTs from LookLabs are included as creative choices. Previously you had to use a SpeedLooks camera LUT in tandem with one of the SpeedLooks creative LUTs to get the right correction. With this update, the SpeedLooks creative LUTs are all designed to be added to Rec 709 gamma, which makes these choices far more functional than before. You can now properly use one of these LUTs by itself without first needing to add a camera LUT.

New Proxy workflow

Apple Final Cut Pro X users have enjoyed a proxy workflow since its launch, whereas Adobe has always touted Premiere Pro’s native media prowess. Nevertheless, as media files get larger and more taxing on computing systems, proxy files enable a more fluid editing experience. A new ingest tool has been added to the Media Browser. So now from within Premiere Pro, you can copy media, transcode to high-res file formats and create low-res proxies. You can also select clips in a bin and right-click to create proxies, attach proxies and/or relink full-resolution files. There is a new toggle button that you can add to the toolbar, which lets you seamlessly flip between proxy and full-resolution media files. According to Adobe, even if you have proxy selected, any export always draws from the full-resolution media for the best quality.

Be careful with the proxy settings. For example, one of the default sizes is 1024×540, which would be the quarter-frame match for 2K media. But, if you use that for HD clips in a 1920×1080 timeline, then your proxies will be incorrectly pillar-boxed. If you create 720p proxies for 1080p clips, you’ll need to use “scale to frame size” in order to get the right size on the timeline. It’s a powerful new workflow, but take a bit of time to figure out the best option for your needs.
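
The size pitfall comes down to simple arithmetic: a proxy whose aspect ratio differs from the timeline frame will not fill it cleanly. A quick sketch of that reasoning (not anything Premiere Pro computes for you):

```python
# Sketch of the proxy-size pitfall: compare proxy and timeline aspect ratios.

def aspect(width, height):
    return width / height

timeline_ar = aspect(1920, 1080)  # 16:9 HD timeline, ~1.778

# 1024x540 is the quarter-frame of 2K (2048x1080), but it is wider than 16:9.
print(round(aspect(1024, 540), 3))  # 1.896 -> mismatch, so boxing occurs

# 960x540 is an exact quarter of 1920x1080, so it matches the HD timeline.
print(aspect(960, 540) == timeline_ar)  # True
```

The same check explains the 720p case: 1280×720 has the right aspect ratio for a 1080p timeline, but the smaller frame size still requires “scale to frame size” to fill it.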

Adobe Media Encoder also gains the Media Browser tool, as well as a new ingest function, which has been brought over from Adobe Prelude. Now you can use Media Encoder to copy camera files and/or transcode them to primary and secondary locations. If you need to copy camera cards, transcode a full-res master file and also transcode a low-res proxy file, then this complete workflow can be handled through Media Encoder.

New 360VR viewer

Premiere Pro CC now sports a new VR-capable viewer mode. Start with monoscopic or stereoscopic, stitched 360-degree video clips and edit them as you normally would. The viewer allows you to pan around inside the clip or view the timeline from a specific point of view. You can see what someone viewing with goggles sees when looking in a given direction. Note that this is not a pan-and-scan plug-in. You cannot drop one of these 360-degree clips into an otherwise 2D 16×9 (“flat”) timeline and use Premiere Pro’s VR function to keyframe a digital move within that clip.

There are other new Premiere Pro CC features that I haven’t yet tested thoroughly. These include new support for Apple Metal (an API that combines the functionality of OpenGL and OpenCL) and for grading control surfaces. Open Caption support has been improved – adding more languages and their native alphabets, including Arabic and Hebrew.

Adobe Audition CC 2015.2

Want better audio mixing control than what’s available inside of Premiere Pro CC? Then Audition CC is the best tool for the job. Premiere Pro timelines translate perfectly and in the last update a powerful retime feature was added. Audition “automagically” edits the duration of a music cue for you in order to fit a prescribed length.

The Essential Sound panel is new in this update. The layout of this panel is the audio equivalent of the Lumetri color panel and also owes its design origins to Lightroom. Select a clip and choose from the Dialogue, Music, SFX or Ambience group. Each group presents you with a different, task-appropriate set of effects presets. For example, when you pick Dialogue, the panel displays tabbed controls for loudness, sound repair, clarity and creative effects. Click on a section of the vertical stack within this panel to reveal the contents and controls for that section.

In the past, the workflow would have been a roundtrip from Premiere Pro to Audition and back. Now you can go directly to Adobe Media Encoder from Audition, which changes the workflow into these steps: cut in Premiere Pro CC, mix in Audition CC, and master/export directly through Adobe Media Encoder. Thus roundtrips are eliminated, because picture is carried through the Audition phase. This export path supports multichannel mix files, especially for mastering containers like MXF. Audition plus Media Encoder now enable you to export a multichannel file that includes a stereo mix plus stereo submix “stems” for dialogue, SFX and music.

After Effects CC 2015.3 and more

After Effects CC has been undergoing an overhaul through successive versions, including this one. Some users complained that the most recent version was a bit of a step backwards, but this is all in an effort to improve performance, as well as to modernize and streamline the product. From my vantage point as an editor who uses After Effects as much for utility as for occasional motion graphics and visual effects, I really like what Adobe has been doing. Changes in this update include enhanced performance, GPU-accelerated Gaussian blur and Lumetri color correction, better playback of cached frames, and a new a/v preview engine. In the test projects that I ran through it, including the demo projects sent by Adobe, performance was fast and rather impressive. That’s on a 2009 Mac Pro tower.

If you are an animator, then Maxon Cinema 4D is likely a tool that you use in conjunction with After Effects. Animated text and shape layers can now be saved directly into the Cinema 4D file format from After Effects. When you customize your text and shapes in Cinema 4D, the changes are automatically updated in After Effects for a roundtrip 3D motion graphics workflow.

Thanks to the live The Simpsons event, in which Homer was animated live using Character Animator, this tool is gaining visibility. Character Animator moves to version 4, even though the application is still technically in prerelease. Some of the enhancements include improved puppet tagging. You can record multiple takes of a character’s movement and then enable your puppet to respond to motion and trigger animation accordingly.

To wrap up, remember that Adobe is promoting Creative Cloud as more than simply a collection of applications. The subscription includes access to over 50 million royalty-free photos, illustrations, vector graphics and video (including 4K clips). According to Adobe, licensed Adobe Stock assets in your library are now badged for easy identification. Videos in your library are displayed with duration and format information and have links to video previews. You can access your Libraries whenever you need them, both when you are connected to the internet and working offline. I personally have yet to use Adobe Stock, but it’s definitely a resource that you should remember is there if you need it.

Click here for Dave Helmly’s excellent overview of the new features in Premiere Pro CC.

Originally written for Digital Video magazine and Creative Planet Network.

©2016 Oliver Peters

Voice from the Stone

As someone who’s worked on a number of independent films, I find it exciting when an ambitious feature film project with tremendous potential comes from parts other than the mainstream Hollywood studio environment. One of these is Voice from the Stone, which features Emilia Clarke and Marton Csokas. Clarke has been a fan favorite in her roles as Daenerys Targaryen in Game of Thrones and the younger Sarah Connor in Terminator Genisys. Csokas has appeared in numerous films and TV series, including Sons of Liberty and Into the Badlands.

In Voice from the Stone, Clarke plays a nurse in 1950s Tuscany who is helping a young boy, Jakob (played by Edward Ding), recover from the death of his mother. He hasn’t spoken since his mother, a renowned pianist, died. According to Eric Howell, the film’s director, “Voice from the Stone was a script that screamed to be read under a blanket with a flashlight. It plays as a Hitchcock fairy tale set in 1950s Tuscany with mysterious characters and a ghostly antagonist.” While not a horror film or thriller, it is about the emotional relationship between Clarke’s character and the boy, but with a supernatural layer to it.

Voice from the Stone is Howell’s feature directorial debut. He has worked on numerous films as a director, assistant director, stuntman, stunt coordinator, and in special effects. Dean Zanuck (Road to Perdition, Get Low, The Zero Theorem) produced the film through his Zanuck Independent company. From there, the production takes an interesting turn towards the American heartland, as primary post-production was handled by Splice in Minneapolis. This is a market known for its high-end commercial work, but Splice has landed a solid position as the primary online facility for numerous film and TV series, such as History Channel’s America Unearthed and ABC-TV’s In An Instant.

Tuscany, Minneapolis, and more

Clayton Condit, who co-owns and co-manages Splice with his wife Barb, edited Voice from the Stone. We chatted about how this connection came about. He says, “I had edited two short films with Eric. One of these, Anna’s Playground, made the short list for the 2011 Oscars in the short films category. Eric met with Dean about getting involved with this film and while we were waiting for the financing to be secured, we finished another short, called Strangers. Eric sent the script to Emilia and she loved it. After that everything sort of fell into place. It’s a beautiful script that, along with Eric’s style of directing, fueled amazing performances from the entire cast.”

The actual production covered about 35 days in the Tuscany region of Italy. The exteriors were filmed at one castle, while the interiors were filmed at another. This was a two-camera shoot, using ARRI Alexas recording to ARRIRAW. Anamorphic lenses were used to record in ARRI’s 3.5K 4:3 format, but the final product is desqueezed for a 2.39:1 “scope” final 2K master. The DIT on set created editorial and viewing dailies in the ProRes LT file format, complete with synced production audio and timecode burn-in. The assistant editor back at Splice was also loading and organizing the same dailies, so that everything was available there, as well.

Condit explains the timeline of the project, “The production was filmed on location in Italy during November and December of 2014. I was there for the first half of it, cutting on my MacBook Pro on set and in my hotel room. Once I travelled back to Minneapolis, I continued to build a first cut. The director arrived back in the states by the end of January to see early rough assemblies, but it was around mid-February when I really started working on a full cut with Eric on the film. By April of 2015 we had a cut ready to present to the producers. Then it took a few more weeks working with them to refine the cut. Splice is a full service post facility, so we kicked off visual effects in May and color starting mid-June. The composer, Michael Wandmacher, created an absolutely gorgeous score that we were able to record during the first week of July at Air Studios in London. We partnered with Skywalker Sound for audio post-production and mix, which took us through the middle of August.”

As with any film, getting to the final result takes time and experimentation. He continues, “We screened for various small groups, listened to feedback, debated and tweaked. The film has a lot of beautiful subtleties to it. We did not want to cheapen it with cliché tricks that would diminish the relationships between characters. It really is first a love story between a mother and her child. The director and producers and I worked very closely together taking scenes out, working on pacing, putting scenes back in, and really making sure we had an effective story.”

Splice handled visual effects ranging from sky replacements to entire green screen composited sequences. Condit explains, “Our team uses a variety of tools including Nuke, Houdini, Maya, and Cinema 4D. Since this film takes place in the 1950s, there were a lot of modern elements that needed to be removed, like TV antennas and distant power lines, for example. There’s a rock quarry scene with a pool of water. When it came time to shoot there, the water was really murky, so that had to be replaced. In addition, Splice also handled a number of straight effects shots. In a couple scenes the boy is on the edge of the roof of the castle, which was a green screen composite, of course. We also shot a day in a pool for underwater shots.”

Pioneering the cut with Final Cut Pro X

Clayton Condit is a definite convert to Apple’s Final Cut Pro X and Voice from the Stone was no exception. Condit says, “Splice originated as an Avid-based shop and then moved over to Final Cut Pro as our market shifted. We also do a lot of online finishing, so we have to be compatible with whatever the offline editor cuts in. As FCP 7 fades away we are seeing more jobs being done in [Adobe] Premiere Pro and we also are finishing with [Blackmagic Design] DaVinci Resolve. Today we are sort of an ‘all of the above’ shop; but for my offline projects I really think FCP X is the best tool. Eric also appreciated his experience with FCP X as the technology never got in the way. As storytellers, we are creatively free to try things very quickly [with Final Cut Pro X].”

“Of course, like every FCP X editor, I have my list of features that I’d like to see; but as a creative editorial tool, hands down it’s the real deal. I really love audio roles, for example. This made it very easy to manage my temp mixes and to hand over scenes to the composer so that he could control what audio he worked with. It also streamlined turnovers. My assistant, Cody Brown, used X2Pro Audio Convert to prepare AAFs for Skywalker. Sound work in your offline is so critical when trying to ‘sell’ your edit and to make sure a scene is really working. FCP X makes that pretty easy and fun. We have an extensive sound library here at Splice. Along with early music cues from Wandmacher, I was able to do fairly decent temp mixes in surround for early screenings inside Final Cut.”

On location, Condit kept his media on a small G-RAID Thunderbolt drive for portability; but back in Minneapolis, Splice has a 600TB Xsan shared storage system for collaboration among departments. Condit’s FCP X library and cache files were kept on small dual-SSD Thunderbolt drives for performance and with mirrored media he could easily transition between working at home or at Splice.

Condit explains his FCP X workflow, “We broke the film into separate libraries for each of the five reels. Each scene was its own event. Shots were renamed by scene and take numbers using different keyword assignments to help sort and search. The film was shot with two cameras, which Cody grouped as multicam clips in FCP X. He used Sync-N-Link X to bring in the production sound metadata. This enabled me to easily identify channel names. I tend to edit in timelines rather than a traditional source and record approach. I start with ‘stringouts’ of all the footage by scene and will use various techniques to sort and track best takes. A couple of the items I’d love to see return to FCP X are tabs for open timelines and dupe detection.”

Final Cut Pro X also has other features to help truly refine the edit. Condit says, “I used FCP X’s retiming function extensively for pace and emotion of shots. With the optical flow technology, it delivers great results. For example, in the opening shot you see two hands – the boy and his mother – playing piano. The on-set piano rehearsal was recorded and used for playback for all takes. Unfortunately it was half the speed of the final cue used in the film. I had to retime that performance to match the final cue, which required putting a keyframe in for every finger push. Optical flow looks so good in FCP X that many of the final online retimes were actually done in FCP X.”

Singer Amy Lee of the band Evanescence recorded the closing title song for the film during the sound sessions at Skywalker. Condit says, “Amy completely ‘got’ the film and articulated it back in this beautiful song. She and Wandmacher collaborated to create something pretty special to close the film with. Our team is fortunate enough now to be creating a music video for the song that was shot at the same castle.”

Zanuck Independent is currently arranging a domestic distribution schedule for Voice from the Stone, so look for it in theaters later this year.

If you want more details, click here for Steve Hullfish’s excellent Art of the Cut interview with Clayton Condit.

Originally written for Digital Video magazine / Creative Planet Network.

©2016 Oliver Peters