The FCP X – RED – Resolve Dance


I recently worked on a short 10-minute teaser video for a potential longer film project. It was shot with a RED One camera, so it was a great test for the RED workflow and roundtrips using Apple Final Cut Pro 10.1.2/10.1.3 and DaVinci Resolve 11.

Starting the edit

As with any production, the first step is to properly back up and verify the data from the camera and sound cards. These files should go to redundant drives that are parked on the shelf for safekeeping. Once that's done, you can copy the media to the editorial drives. In this case, I was using a LaCie RAID-5 array. Each day’s media was placed in a folder and divided into subfolders for RED, audio and other cameras, like a few 5D shots.
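
For what it's worth, the "verify" step can be as simple as comparing checksums between the card and the backup copy. Here's a minimal sketch of that idea in Python, with purely hypothetical volume paths; dedicated offload tools do the same thing with more safeguards.

```python
# Minimal copy-verification sketch: hash every file on the camera card and on the
# backup, then confirm the two trees match. Paths are hypothetical examples.
import hashlib
from pathlib import Path

def file_hash(path, chunk=1024 * 1024):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def hash_tree(root):
    root = Path(root)
    return {p.relative_to(root): file_hash(p) for p in root.rglob("*") if p.is_file()}

card = hash_tree("/Volumes/RED_MAG_001")         # camera card (hypothetical)
backup = hash_tree("/Volumes/SHELF_RAID/Day01")  # parked backup drive (hypothetical)

mismatches = [str(f) for f in card if backup.get(f) != card[f]]
print("verified OK" if not mismatches else "re-copy: " + ", ".join(mismatches))
```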

Since I was using FCP X and its RED and proxy workflows, I opted not to use REDCINE-X Pro as part of this process. In fact, the Mac Pro didn’t have a RED Rocket accelerator card installed either, as I’ve seen conflicts with FCP X and RED transcodes when a RED Rocket card is installed. After the files were copied to the editorial drives, they were imported into an FCP X event, with media left in its original location. In the import settings, the option to transcode proxy media was enabled, which continues in the background while you start to work with the RED files directly. The camera files are 4K 16×9 .r3d files, so FCP X transcodes these to half-sized ProRes Proxy media.

Audio was recorded as double-system sound using a Sound Devices recorder. The audio files were 2-channel broadcast WAV files using slates for syncing. There was no in-camera audio and no common timecode. I was working with a couple of assistant editors, so I had them sync each clip manually. Instead of using FCP X’s synchronized clips, I had them alter each master clip using the “open in timeline” command. This lets you edit the audio directly to the video as a connected clip within the master clip. Once done, your master clip contains synced audio and video. It functions just like a master clip with in-camera audio – almost (more on that later).

All synced clips were relabeled with a camera, scene and take designation, as well as adding this info to the camera, scene and take columns. Lastly, script notes were added to the notes column based on the script supervisor’s reports.

Transcodes

Since the post schedule wasn’t super-tight, I was able to let the transcodes finish overnight, as needed. Once this is done, you can switch FCP X to working with proxies and all the media will be there. The toggle between proxy and optimized/original media is seamless and FCP X takes care of properly changing all sizing information. For example, this project used 4K media in a 1080p timeline. FCP X’s spatial conform downscales the 4K media, but when you toggle to proxy, it has to make the corresponding adjustments to media that is now half-sized. Likewise, any blow-ups or reframing that you do also have to match in both modes.
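
To make that concrete, here's the kind of arithmetic the application has to do behind the scenes (my own illustration, not Apple's code): a "fit" conform in a 1080p timeline is a different percentage for the 4K original than for the half-sized proxy, so any additional reframing has to be compensated the same way.

```python
# Spatial-conform arithmetic for a 4K clip in a 1080p timeline (illustrative only).
TIMELINE_W = 1920
ORIGINAL_W = 3840           # UHD width used for round numbers; RED 4K is slightly wider
PROXY_W = ORIGINAL_W / 2    # FCP X proxies are half-sized

def conform_scale(source_w, extra_zoom=1.0):
    """Percentage of the source needed to 'fit' the timeline, times any user reframe."""
    return (TIMELINE_W / source_w) * extra_zoom

for label, width in (("original", ORIGINAL_W), ("proxy", PROXY_W)):
    print(label, f"{conform_scale(width):.0%} fit,",
          f"{conform_scale(width, extra_zoom=1.2):.0%} with a 120% reframe")
# original 50% fit, 60% with a 120% reframe
# proxy 100% fit, 120% with a 120% reframe
```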

The built-in proxy/optimized-original workflow provides you with offline/online editing phases right within the same system: proxies for fast and efficient editing, originals or high-resolution transcodes for finishing. To keep the process fast and initially true to color decisions made on set, no adjustments were made to the RED files. FCP X does let you alter the camera raw color metadata from inside the application, but there’s no real reason to do this for offline editing files. That can be deferred until it’s time to do color correction. So during the edit, you see what the DoP shot as you view the RED files or the transcoded proxies.

We did hit one bad camera load. This might have been due to either a bad RED drive or possibly excessive humidity at that location. No matter what the reason, the result was a set of corrupt RED clips. We didn’t initially realize this in FCP X, and so we hit clips that caused frequent crashes. Once I narrowed it down to the load from that one location, I decided to delete these clips. For that group of shots, I used REDCINE-X Pro to transcode the files. I adjusted the color for a flatter, neutral profile (for later color correction) and transcoded full-resolution debayered 1080p ProRes 4444 files. We considered these as the new camera masters for those clips. Even there, REDCINE-X Pro crashed on a few of the clips, but I still had enough to make a scene out of it.

Editing

The first editing step is culling down the footage in FCP X. I do a first pass rejecting all bogus shots, like short clips of the floor, a bad slate, etc. Set the event browser to “hide rejected”. Next I review the footage based on script notes, looking at the “circle takes” first, plus picking a few alternates if I have a different opinion. I will mark these as Favorites. As I do this, I’ll select the whole take and not just a portion, since I want to see the whole take.

Once I start editing, I switch the event browser to “show favorites”. In the list view, I’ll sort the event by the scene column, which now gives me a quick roadmap of all possible good clips in the order of the script. During editing, I cut mainly using the primary storyline to build up the piece. This includes all overlapping audio, composites, titles and so on. Cutting proceeds until the picture is locked. Once I’m ready to move on to color correction, I export a project XML in the FCPXML format.

Resolve

I used the first release version (not beta) of DaVinci Resolve 11 Lite to do this grade. My intention was to roundtrip it back to FCP X and not to use Resolve as a finishing tool, since I had a number of keys and composites that were easier to do in FCP X than in Resolve. Furthermore, when I brought the project into Resolve, the picture was right, but all of the audio was bogus – wrong takes, wrong syncing, etc. I traced this down to my initial “open in timeline” syncing, which I’ll explain in a bit. Anyway, my focus in Resolve was only grading, so audio wasn’t important for what I was doing. I simply disabled it.

Importing the FCPXML file into a fresh Resolve 11 project couldn’t have been easier. It instantly linked the RED, 5D and transcoded ProRes 4444 files and established an accurate timeline for my picture cut. All resizing was accurately translated. This means that in my FCP X timeline, when I blew up a shot to 120% (which is a blow-up of the 1080p image that was downscaled from the 4K source), Resolve knew to take the corresponding crop from the full 4K image to equal this framing of the shot without losing resolution.
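
As a back-of-the-envelope illustration of what that translation amounts to (my sketch, not Resolve's actual math): a 120% zoom of the downscaled 1080p image corresponds to pulling a smaller window out of the full 4K frame, so no resolution is thrown away.

```python
# A 120% blow-up in a 1080p timeline, expressed as a crop of the 4K source (illustrative).
timeline_w, timeline_h = 1920, 1080
source_w, source_h = 3840, 2160    # UHD-width original assumed for round numbers
zoom = 1.2                         # 120% reframe applied in the edit

# The window of the 4K frame that fills the timeline at this zoom:
crop_w, crop_h = source_w / zoom, source_h / zoom
print(f"crop {crop_w:.0f} x {crop_h:.0f} from the 4K frame")              # 3200 x 1800
print("still oversampled vs. the 1080p timeline?", crop_w >= timeline_w)  # True
```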

The one video gotcha I hit was with the FCP X timeline layout. FCP X is one of the only NLEs that lets you place video BELOW what any other software would consider to be the V1 track – that’s the primary storyline. Some of my green screen composite shots were of a simulated newscast inserted on a TV set hanging on a wall in the primary scene. I decided to place the 5 or 6 layers that made up this composite underneath the primary storyline. All fine inside FCP X; however, Resolve has to interpret the lowest video element as V1, thus shifting everything else up accordingly. As a result, the bulk of the video was on V6 or V7 and audio was equally shifted in the other direction. This results in a lot of vertical timeline scrolling, since Resolve’s smallest track height is still larger than most.

Resolve, of course, is a killer grading tool that handles RED media well. My grading approach is to balance out the RED shots in the first node. Resolve lets you adjust the camera raw metadata settings for each individual clip, if you need to. Then in node 2, I’ll do most of my primary grading. After that, I’ll add nodes for selective color adjustments, masks, vignettes and so on. Resolve’s playback settings can be adjusted to throttle back the debayer resolution on playback for closer-to-real-time performance with RED media. This is especially important when you aren’t running the fastest drives or GPU cards, or using a RED Rocket card.

To output the result, I switched over to Resolve’s Deliver tab and selected the FCP X easy set-up. Select handle length, browse for a target folder and run. Resolve is a very fast renderer, even with GPU-based RED debayering, so output wasn’t long for the 130 clips that made up this short. The resulting media was 1080p ProResHQ with an additional 3 seconds per clip on either side of the timeline cut – all with baked-in color correction. The target folder also contains a new FCPXML that corresponds to the Resolve timeline with proper links to the new media files.

Roundtrip back into FCP X

Back in FCP X, I make sure I’ve turned off the import preference to transcode proxy media and that my toggle is set back to original/optimized media. Find the new FCPXML file from Resolve and import it. This will create a new event containing a new FCP X project (edited sequence), but with media linked to the Resolve render files. Audio is still an issue, for now.

There is one interesting picture glitch, which I believe is a bug in the FCPXML metadata. In the offline edit, using RED or proxy media, spatial conform is enabled and set to “fit”. That scales the 4K file to a 1080p timeline. In the sequence back from Resolve, I noticed the timeline still had yellow render bars. When I switched the spatial conform setting on a clip to “none”, the render bar over it went away, but the clip blew up much larger, as if it was trying to show a native 4K image at 1:1 – except that this was now 1080 media and NOT 4K. Apparently this resizing metadata is incorrectly held in the FCPXML file and there doesn’t appear to be any way to correct it. The workaround is to simply let it render, which didn’t seem to hurt the image quality as far as I could tell.

Audio

Now to an explanation of the audio issue. FCP X master clips are NOT like the master clips in other NLEs, including FCP 7. X’s master clips are simply containers for audio and video essence and, in that way, are not unlike compound clips. Therefore, you can edit, add and/or alter – even destructively – any material inside a master clip when you use the “open in timeline” function, so you have to be careful. That appears to be the root of the XML translation issue with the audio. Of course, it all works fine WITHIN the closed FCP X environment!

Here’s the workaround. Start in FCP X. In the offline edited sequence (locked rough cut) and the sequence from Resolve, detach all audio. Delete audio from the Resolve sequence. Copy and paste the audio from the rough cut to the Resolve sequence. If you’ve done this correctly it will all be properly synced. Next, you have to get around the container issue in order to access the correct WAV files. This is done simply by highlighting the connected audio clip(s) and using the “break apart clip items” command. That’s the same command used to break apart compound clips into their component source clips. Now you’ll have the original WAV file audio and not the master clip from the camera.

At this stage I still encountered export issues. If your audio mixing engineer wants an OMF for an older Pro Tools unit, then you have to go through FCP 7 (via an Xto7 translation) to create the OMF file. I’ve done this tons of times before, but for whatever reason on this project, the result was not usable. An alternative approach is to use Resolve to convert the FCPXML into XML, which can then be imported into FCP 7. This worked for an accurate translation, except that the Resolve export altered all stereo and multi-channel audio tracks into a single mono track. Therefore, a Resolve translation was also a fail. At this point in time, I have to say that a proper OMF export from FCP X-edited material is no longer an option, or at best unreliable.

This leaves you with two options. If your mixing engineer uses Apple Logic Pro X, then that appears to correctly import and convert the native FCPXML file. If your mixer uses Pro Tools (a more likely scenario) then newer versions will read AAF files. That’s the approach I took. To create an AAF, you have to export an FCPXML from the project file. Then using the X2Pro Audio Convert application, generate an AAF file with embedded and trimmed audio content. This goes to the mixer who in turn can ingest the file into Pro Tools.

Once the mix has been completed, the exported AIF or WAV file of the mix is imported into FCP X. Strip off all audio from the final version of the FCP X project and connect the clip of the final mix to the beginning of the timeline. Now you are done and ready to export deliverables.

For more on RED and FCP X workflows, check out this series of posts by Sam Mestman at MovieMaker.

Part 1   Part 2   Part 3

©2014 Oliver Peters

The State of FCP X Plug-ins


The launch of Apple’s Final Cut Pro X spawned a large ecosystem of plug-ins and utilities. This was due, in part, to the easy method of creating Motion templates, along with the need to augment interoperability with other applications, i.e. fill in the gaps in FCP X’s capabilities. Lately you have to wonder about the general status of the FCP X plug-in market. GenArts decided to discontinue its Sapphire Edge product and BorisFX only recently launched its BCC 9 version for FCP X*. GenArts may have simply stopped the Edge product because the business model wasn’t working. On the other hand, new offerings, such as Red Giant Universe, build up the available tools.

NLEs suffered a “race to the bottom” with pricing, which left us with numerous low cost and even free options. Historically, plug-in packages have been tiered to the price of the host application. If you paid tens of thousands or more for a product like Flame, then paying a couple of grand for a set of third-party filters wasn’t unreasonable. The same filters for After Effects cost less, because the host was less. Naturally the market drives this, too, since there are far more After Effects users buying plug-ins than there are Flame users. Unfortunately, that NLE “race to the bottom” leaves plug-in developers in an uncomfortable position, because many users are loath to pay more for a set of plug-ins than for the host application itself. These shouldn’t be related, but they are.

It’s also not an exclusive FCP X problem, per se. Autodesk’s introduction of Smoke on the Mac didn’t attract Sparks plug-in developers to adapt their Flame/Smoke plug-ins to the Mac platform. That’s because these new users simply weren’t going to pay the kind of prices that Flame users had and still do for Sparks filters. Since Autodesk had no “takers”, they ended up adding more of the filter building blocks into Smoke itself and in the Smoke 2015 product have dropped the Sparks API altogether.

Aside from the issue of cost and what fuels development, plug-ins can be a tricky issue for the user. Often plug-ins are the biggest cause of application instability and poor playback performance. All too often, application crashes can be boiled down to a misbehaving filter. Most of the time, third-party effects do not provide as smooth a performance as native filters. This is especially true of FCP X, where the built-in filters are far less of a drag on the system than the others – most likely due to the “secret sauce” Apple applies to its built-in effects and transitions.

In the case of Final Cut Pro X, there actually is no plug-in structure. It uses Motion’s FxPlug3 architecture as Motion templates. This means that a video filter, transition, generator or title has to be developed for Motion and that, in turn, is published as a Motion template, which appears inside FCP X as an effect. Even if you didn’t buy Motion, FCP X is running its effects engine “under the hood” and that’s why third-party filters work. While this makes it easy for users to create their own plug-ins, using the building blocks provided natively in Motion, it also adds a burden for more advanced developers. BorisFX, for example, must go the extra mile to make its FCP X filters and transitions look and feel the same as they do in After Effects, FCP 7 and other hosts.

On top of having a tool that simplifies the creation of cheap and free filters, FCP X also includes a nice set of native filters and transitions. This set is far more wide-ranging than what’s currently bundled with Adobe Premiere Pro CC or Avid Media Composer. Of course, not all are great – the keyer, for instance, is mediocre at best – and there are missing items – no DVE, masking or tracking. That’s why there is still room for third-party developers to create tools that meet the needs of the more demanding customers. But, for the vast majority of FCP X users, there’s little or no need to purchase a comprehensive package of 200 effects, when a lot of these looks can be achieved within the FCP X or Motion toolkit already.

The market does seem good for many specific filters that fill certain needs. I would imagine (although I don’t know actual numbers), that users are more likely to buy a tool like CoreMelt’s SliceX or Red Giant’s Magic Bullet Looks, because they address a specific deficiency. Likewise, I think users are more likely to buy a tool that works across several platforms. For instance, FxFactory Pro (and many of the FxFactory partner plug-ins) work for FCP X, FCP 7, Premiere Pro and After Effects on the same machine, without having to buy a different version for each host.

Another development need is for tools that augment interoperability and workflows. People tend to lump these into the plug-in discussion, although they really aren’t plug-ins. Tools like Xto7, 7toX, EDL-X, Shot Notes X and others are there to fill in the gaps of FCP X-centric workflows. If you have the need to send your sound from FCP X to a Pro Tools mixer, there’s no way to do it in X, using any method that’s commonly accepted within the industry. (I’m sorry, but exporting “baked in” roles is a workaround, not a solution.) The answer is X2Pro Audio Convert, which will generate an AAF with embedded audio from the FCPXML export of your timeline.

The trend I do see in the FCP X world is the creation of more and more Motion templates that are, in fact, design templates and not plug-ins. This is probably what Apple really had in mind in the first place. Companies like MotionVFX, Ripple Training, SugarFX and others are creating design templates, which are mini-Motion projects right inside your FCP X timeline. It’s a lot like buying a template After Effects project and then using that via Dynamic Link inside Premiere. Such templates are cheap and fun and save you a lot of building time, but they do suffer from the “flavor of the month” syndrome. A design might be good for one single production, but then you’ll never use it again. However, these templates are cheap enough that you can charge them off to the client without concern.

In kicking this idea around with friends who are developers, some felt there was a place for an Apple-curated market of plug-ins for FCP X – like the App Store, but simply for plug-ins, filters and utilities. That’s a little of how FxFactory works, but not all developers are represented there, of course. I do see the need for a product that acts like a template manager. This would let you enable or disable plug-ins and design templates as needed. If you’ve purchased a ton of third-party items, these quickly clutter up the FCP X palettes. Currently you have to remove items manually, by dragging the effects, titles and generators out of your Movies > Motion Templates folders. If you have FxFactory filters, then you can use the FxFactory application to manage those.
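
As a stopgap, that manual dragging can be scripted. Below is a rough sketch of the idea: it simply moves a vendor's folder between the Motion Templates location and a sibling "disabled" folder. The paths and folder names are assumptions based on a default install (some systems use a "Motion Templates.localized" folder), and FCP X needs a relaunch to notice the change.

```python
# Park or restore a folder of third-party effects/titles/generators by moving it
# out of (or back into) the Motion Templates folder. Hypothetical helper script,
# not an Apple-provided mechanism; relaunch FCP X after running it.
import shutil
from pathlib import Path

TEMPLATES = Path.home() / "Movies" / "Motion Templates"             # may be "Motion Templates.localized"
DISABLED = Path.home() / "Movies" / "Motion Templates (Disabled)"   # our own parking spot

def set_enabled(kind, vendor_folder, enabled=True):
    """kind: 'Effects', 'Titles', 'Generators' or 'Transitions'."""
    src_root, dst_root = (DISABLED, TEMPLATES) if enabled else (TEMPLATES, DISABLED)
    src = src_root / kind / vendor_folder
    dst = dst_root / kind / vendor_folder
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dst))

# Example (hypothetical vendor folder): park a pack until the next job that needs it.
# set_enabled("Titles", "SomeVendorPack", enabled=False)
```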

Overall, I see the FCP X ecosystem as healthy and growing, although with a shift of users away from comprehensive effect packages – largely due to cost. Tools and effects that users do purchase seem to be more task-specific and, in those cases, money is less of a factor. If you are a film editor, you have different needs than a corporate producer and so are more likely to buy some custom tools that you cannot otherwise live without. These tend to be made by small, responsive developers that don’t need to survive on the revenue that the large bundled effects packages used to bring. It’s too early to predict whether or not that’s a good thing for the market.

* Note: I originally composed this post a few weeks ago prior to the release of BCC9 and published this post with the error that BCC9 wasn’t out yet. In fact it is, hence the correction above.

©2014 Oliver Peters

More 4K


I’ve talked about 4K before (here, here and here), but I’ve recently done some more 4K jobs that have me thinking again. 4K means different things to different people and in terms of dimensions, there’s the issue of cinema 4K (4096 pixels wide) versus the UltraHD/QuadHD/4K 16:9 (whatever you want to call it) version of 4K (3840 pixels wide). That really doesn’t make a lot of difference, because these are close enough to be the same. There’s so much hype around it, though, that you really have to wonder if it’s “the Emperor’s new clothes”.

First of all, 4K used as a marketing term is not a resolution, it’s a frame dimension. As such, 4K is not four times the resolution of HD. That’s a measurement of area and not resolution. True resolution is usually measured in the vertical direction based on the ability to resolve fine detail (regardless of the number of pixels) and, therefore, 4K is only twice the resolution of HD at best. 4K is also not sharpness, which is a human perception affected by many things, such as lens quality, contrast, motion and grading. It’s worth watching Mark Schubin’s excellent webinar on the topic to get a clearer understanding of this. There’s also a very good discussion among top DoPs here about 4K, lighting, high dynamic range and more.
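
Spelled out with UHD numbers:

```python
# "4K" vs. HD: four times the pixel count (area), but only twice the linear measure.
hd = (1920, 1080)
uhd = (3840, 2160)
print("pixel-count ratio:", (uhd[0] * uhd[1]) / (hd[0] * hd[1]))          # 4.0
print("linear ratio:", uhd[0] / hd[0], "wide,", uhd[1] / hd[1], "tall")   # 2.0 wide, 2.0 tall
```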

A lot of arguments have been made that 4K cameras using a color-pattern filter method (Bayer-style), single CMOS sensor don’t even deliver the resolution they claim. The reason is that in many designs 50% of the pixels are green versus 25% each for red and blue. Green is used for luminance, which determines detail, so you do not have a 1:1 pixel relationship between green and the stated frame resolution of the sensor. That’s in part why RED developed 5K and 6K sensors and it’s why Sony uses an 8K sensor (F65) to deliver a 4K image.
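
Put numbers on a UHD-sized Bayer sensor and the shortfall is obvious (my arithmetic, using the 50/25/25 split described above):

```python
# Photosite split on a Bayer-pattern UHD-sized sensor: only half of the sites are
# green, so luminance detail is sampled well short of one green site per output pixel.
total = 3840 * 2160
green = total // 2          # RGGB pattern: 2 of every 4 photosites are green
red = blue = total // 4
print(f"{total/1e6:.1f} MP total, {green/1e6:.1f} MP green, {red/1e6:.1f} MP each red/blue")
# 8.3 MP total, 4.1 MP green, 2.1 MP each red/blue
```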

The perceived image quality is also not all about total pixels. The pixels of the sensor, called photosites, are the light-receiving elements of the sensor. There’s a loose correlation between pixel size and light sensitivity. For any given sensor of a certain physical dimension, you can design it with a lot of small pixels or with fewer, but larger, pixels. This roughly correlates to a sensor that’s of high resolution, but a smaller dynamic range (many small pixels) or one with lower resolution, but a higher dynamic range (large, but fewer pixels). Although the equation isn’t nearly this simple, since a lot of color science and “secret sauce” goes into optimizing a sensor’s design, you can certainly see this play out in the marketing battles between the RED and ARRI camps. In the case of the ALEXA, ARRI adds some on-the-sensor filtering, which results in a softer image that gives it a characteristic filmic quality.

Why do you use 4K?

With 4K there are two possible avenues. The first is to shoot 4K for the purpose of reframing and repositioning within HD and 2K timelines. Reframing isn’t a new production idea. When everyone shot on film, some telecine devices, like the Rank Cintel Mark III, sported zoom boards that permitted an optical blow-up of the 35mm negative. You could zoom in for a close-up in transfer that didn’t cost you resolution. Many videographers shoot 1080 for a 720 finish, as this allows a nice margin for reframing in post. The second is to deliver a final 4K product. Obviously, if your intent is the latter, then you can’t count on the techniques of the former in post.

When you shoot 4K for HD post, then workflow is an issue. Do you shoot everything in 4K or just the items you know you’ll want to deal with? How will this cut with HD and 2K content? That’s where it gets dicey, because some NLEs have good 4K workflows and others don’t. But it’s here that I contend you are getting less than meets the eye, so to speak. I have run into plenty of editors who have dropped a 4K clip into an HD timeline and then blown it up, thinking that they are really cropping into the native 4K frame and maintaining resolution. Depending on the NLE and the settings used, often they are simply blowing up an HD shot. The NLE scaled the 4K to HD first and then expanded the downscaled HD image. It didn’t crop into the actual 4K native resolution. So you have to be careful. And guess what – if the blow-up isn’t that extreme, it may not look much different than the crop.

One thing to remember is that a 4K image that is scaled to fit into an HD timeline gains the benefits of oversampling. The result in HD will be very sharp and, in fact, will generally look better perceptually than the exact same image natively shot in an HD size. When you now crop into the native image, you are losing some of that oversampling effect. A 1:1 pixel relationship is the same effective image size as a 200% blow-up. Of course, it’s not the same result. When you compare the oversampled “wide shot” (4K scaled to HD) to the “close-up” (native 4K crop), the close-up will often look softer. You’ll see defects of the image, like chromatic aberration in the lens, missed critical focus and sensor noise. Instead, if you shoot a wide and then an actual close-up, that result will usually look better.
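
In numbers (my illustration, using UHD dimensions):

```python
# Cropping a 1920x1080 window out of a UHD frame shows it at 1:1 pixels, which is
# framed exactly like a 200% blow-up of the downscaled HD "wide shot" -- the crop
# keeps real detail, but none of the oversampling benefit is left.
source_w, timeline_w = 3840, 1920

fit_scale = timeline_w / source_w            # 0.5 -> the oversampled wide shot
crop_scale = 1.0                             # 1:1 pixels -> the punched-in close-up
print(f"equivalent blow-up of the HD image: {crop_scale / fit_scale:.0%}")   # 200%
```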

On the other hand, if you blow up the 4K-to-HD or a native HD shot, you’ll typically see a result that looks pretty good. That’s because there’s often a lot more information there than monitors or the eye can detect. In my experience, you can commonly get away with a blow-up in the range of 120% of the original image size and in some cases, as much as 150%.

To scale or not to scale

Let me point out that I’m not saying a native 4K shot doesn’t look good. It does, but often the associated workflow hassles aren’t worth it. For example, let’s take a typical 1080p 50” Panasonic plasma that’s often used as a client monitor in edit suites. You or your client may be sitting 7 to 10 feet away from it, which is closer than most people sit in a living room with that size of a screen. If I show a client the native image (4K at 1:1 in an HD timeline) compared with a separate HD image at the same framing, it’s unlikely that they’ll see a difference. Another test is to take two exact images – one native HD and the other 4K. Scale up the HD and crop down the 4K to match. In theory, the 4K should look better and sharper. In fact, sitting back on the client sofa, most won’t see a difference. It’s only when they step to about 5 feet in front of the monitor that a difference is obvious and then only when looking at fine detail within the shot.

Not all scaling is equal. I’ve talked a lot about the comparison of HD scaling, but that really depends on the scaling that you use. For a quick shot, sure, use what your NLE has built in. For more critical operations, you might want to scale images separately. DaVinci Resolve has excellent built-in scaling and lets you pick from smooth, sharp and bilinear algorithms. If you want a plug-in, then the best I’ve found is the new Red Giant Instant 4K filter. It’s a variation of their Instant HD plug-in and works in After Effects and Premiere Pro. There are a lot of quality tweaks and naturally, the better it does, the longer the render will be. Nevertheless, it offers outstanding results and in one test that I ran, it actually provided a better look within portions of the image than the native 4K shot.

In that case, it was a C500 shot of a woman on a park bench with a name badge. I had three identical versions of the shot (not counting the raw files) – the converted 4K ProRes4444 file, a converted 1080 ProRes4444 “proxy” file for editing and the in-camera 1080 Canon XF file. I blew up the two 1080 shots using Instant 4K and cropped the 4K shot so all were of equal framing. When I compared the native 4K shot to the expanded 1080 ProRes4444 shot, the woman’s hair was sharper in the 1080 blow-up, but the letters on the name badge were better on the original. The 1080 Canon XF blow-up was softer in both areas. I think this shows that some of the controls in the plug-in may give you superior results to the original (crisper hair); but a blow-up suffers when you are using a worse codec, like Canon’s XF (50 Mbps 4:2:2). It’s fine for native HD, but the ProRes4444 codec has twice the chroma resolution and less compression, which makes a difference when scaling an image larger. Remember, all of this pertains to viewing the image in HD.

4K deliverables

So what about working in native 4K for a 4K deliverable? That certainly has validity for high-resolution projects (films, concerts, large corporate presentations), but I’m less of a believer for television and web viewing. I’d rather have “better” pixels and not simply “more” pixels. Most of the content you watch at theaters using digital projection is 2K playback. Sometimes the master for that DCP was HD, 2K or 4K. If you are in a Sony 4K projector-equipped theater, most of the time, it’s simply the projector upscaling the content to 4K as part of the projection. Even though you may see a Sony 4K logo at the head of the trailers, you aren’t watching 4K content – definitely not, if it’s a stereo3D film. Yet, much of this looks pretty good, doesn’t it?

Everything I talked about, regarding blowing up HD by up to 120% or more, still applies to 4K. Need to blow up a shot a bit in a 4K timeline? Go ahead, it will look fine. I think ARRI has proven this as well, taking films shot with the ALEXA all the way up to IMAX. In fact, ARRI just announced that the AMIRA will get in-camera, on-the-fly upscaling of its image with the ability to record 4K (3840 x 2160 at up to 60fps) on the CFast 2.0 cards. They can do this, because the sensor starts with more pixels than HD or 2K. The AMIRA will expose all of the available photosites (about 3.4K sensor pixels) in what they call the “open gate” method. This image is lightly cropped to 3.2K and then scaled by a 1.2 factor, which results in UltraHD 4K recording on the same hardware. Pretty neat trick and judging by ARRI’s image quality, I’ll bet it will look very good. Doubling down on this technique, the ALEXA XT models will also be able to record ProRes media at this 3.2K size. In the case of the ALEXA, the designers have opted to leave the upscaling to post, rather than to do it in-camera.
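
The numbers work out neatly (my arithmetic based on the figures above; the exact crop dimensions are ARRI's to confirm):

```python
# AMIRA in-camera UHD recipe as described above: ~3.4K open gate, cropped to 3.2K,
# then upscaled by a factor of 1.2. Approximate, illustrative dimensions only.
cropped_w, cropped_h = 3200, 1800
factor = 1.2
print(cropped_w * factor, "x", cropped_h * factor)   # 3840.0 x 2160.0 -> UltraHD
```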

To conclude, if you are working in 4K today, then by all means continue to do so. It’s a great medium with a lot of creative benefits. If you aren’t working in 4K, then don’t sweat it. You won’t be left behind for a while and there are plenty of techniques to get you to the same end goal as much of the 4K production that’s going on.

©2014 Oliver Peters

HP Z1 G2 Workstation

Hewlett-Packard is known for developing workstations that set a reliability and performance standard, characterized by the Z-series of workstation towers. HP has sought to extend what they call the “Z experience” to other designs, like mobile and all-in-one computers. The latest of these is the HP Z1 G2 Workstation – the second generation model of the Z1 series.

Most readers will associate the all-in-one concept with an Apple iMac. Like the iMac, the Z1 G2 is a self-contained unit housing all electronics and the display in one chassis. Whereas the top-end iMacs are targeted at advanced consumers and pros with less demanding computing needs, the HP Z1 G2 is strictly for the serious user who requires advanced horsepower. The iMac is a sealed unit, which cannot be upgraded by the user (except for RAM), and is largely configured with laptop-grade parts. In contrast, the HP Z1 G2 is a Rolls-Royce. The build is very solid and it exudes a sense of performance. The user has the option to configure their Z1 G2 from a wide range of components. The display lifts like a car hood for easy access to the “engine”, making user upgrades nearly as easy as on a tower.

Configuration options

The HP Z1 G2 offers processor choices that include Intel Core i3, Core i5 and three Xeon models. There are a variety of storage and graphics card choices and it supports up to 32GB of RAM. You may also choose between a Touch and non-Touch display. The Touch screen adds a glass overlay and offers finger or stylus interaction with the screen. Non-Touch screens have a matte finish, while Touch screens are glossy. You have a choice of operating systems, including Windows 7, Windows 8 and Linux distributions.

I was able to specify the built-to-order configuration of the Z1 G2 for my review. This included a Xeon E3 (3.6GHz) quad-core, 16GB of RAM, optical drive and the NVIDIA K4100M graphics card. For storage, I selected one 256GB mSATA boot drive (“flash” storage), plus two 512GB SSDs that were set-up in a RAID-0 configuration. I also ordered the Touch option with 64-bit Windows 8.1 Pro. Z1 G2 models start at $1,999; however, as configured, this system would retail at over $6,100, including a 20% eCoupon promo discount.

An important, new feature is support for Thunderbolt 2 with an optional module. HP is one of the first PC manufacturers to support Thunderbolt. I didn’t order that, but reps from AJA, Avid and Blackmagic Design all confirmed to me that their Thunderbolt units should work fine with this workstation, as long as you install their Windows device drivers. One of these would be required for any external broadcast or grading monitor.

In addition to the custom options, the Z1 G2 includes wireless support, four USB 2.0 ports, two USB 3.0 ports, Gigabit Ethernet, a DisplayPort connector for a secondary computer monitor, S/PDIF, analog audio connectors, a webcam and a media card reader.

Arrival and set-up

The HP Z1 G2 ships as a single, 57 pound package, complete with a wireless mouse and keyboard. The display/electronics chassis is attached to an adjustable arm that connects to the base. This allows the system to be tilted at any angle, as well as completely flat for shipping and access to the electronics. It locks into place when it’s flat (as in shipping), so you have to push down lightly on the display in order to unlock the latch button.

The display features a 27” (diagonal) screen, but the chassis is actually 31” corner-to-corner. Because the stand has to support the unit and counter-balance the weight at various angles, it sticks out about 12” behind the back of the chassis. Some connectors (including the power cord) are at the bottom, center of the back of the chassis. Others are along the sides. The adjustable arm allows any angle from vertical to horizontal, so it would be feasible to operate in a standing or high-chair position looking down at the monitor – a bit like a drafting table. I liked the fact that the arm lets you drop the display completely down to the desk surface, which put the bottom of the screen lower than my stationary 20” Apple Cinemas.

First impressions

I picked the Touch option in order to test the concept, but quite frankly I decided it wasn’t for me. In order to control items by touch, you have to be a bit closer than the full length of your arm. As a glasses-wearer, this distance is uncomfortable for me, as I prefer to be a little farther away from a screen of this size. Although the touch precision is good, it’s not as precise as you’d get with a mouse or pen and tablet – even if using an iPad stylus. Only menu and navigation operations, but no drawing tools, worked in Photoshop – an application that seems natural for Touch. While I found the Touch option not to be that interesting to me, I did like the screen that comes with it. It’s glossy, which gives you nice density to your images, but not so reflective as to be annoying in a room with ambient lighting.

The second curiosity item for me was Windows 8.1. The Microsoft “metro” look has been maligned and many pros opt for Windows 7 instead. I actually found the operating system to function well and the “flat” design philosophy much like what Apple is doing with Mac OS X and iOS. The tiled Start screen that highlights this release can easily be avoided when you set-up your preferences. If you prefer to pin application shortcuts to the Windows task bar or on the Desktop, that’s easily done. Once you are in an application like Premiere Pro or Media Composer, the OS differences tend to disappear anyway.

Since I had configured this unit with an mSATA boot/applications drive and RAID-0 SSDs for media, the launch and operation of any application was very fast. Naturally the difference from a cold start on the Z1 G2, as compared to my 2009 Mac Pro with standard 7200RPM drives, was night and day. With most actual operations, the differences in application responsiveness were less dramatic.

One area that I think needs improvement is screen calibration. The display is not a DreamColor display, but color accuracy seems quite good and it’s very crisp at 2560 x 1440 pixels. Unfortunately, both the HP and NVIDIA calibration applications were weak, using consumer level nomenclature for settings. For instance, I found no way to accurately set a 6500-degree color temperature or a 2.2 gamma level, based on how the sliders were labelled. Some of the NVIDIA software controls didn’t appear to work at all.

Performance stress testing

I loaded up the Z1 G2 with a potpourri of media and applications, including Adobe CC 2014 (Photoshop, Premiere Pro, After Effects, SpeedGrade), Avid Media Composer 8, DaVinci Resolve 11 Lite (beta) and Sony Vegas Pro 13. Media included Sony XAVC 4K, Avid DNxHD175X, Apple ProRes 4444, REDCODE raw from an EPIC Dragon camera and more. This allowed me to make some direct comparisons with the same applications and media available on my 2009 eight-core Mac Pro. Its configuration included dual Xeon quad-core processors (2.26GHz), 28GB RAM, an ATI 5870 GPU card and a RAID-0 stripe of two internal 7200RPM spinning hard drives. No I/O devices were installed on either computer. While these two systems aren’t exactly “apples-to-apples”, it does provide a logical benchmark for the type of machine a new Z1 G2 customer might be upgrading from.

In typical, side-by-side testing with edited, single-layer timelines, most applications on both machines performed in a similar fashion, even with 4K media. It’s when I started layering sequences and comparing performance and render times that the differences became obvious.

My first test compared Premiere Pro CC 2014 with a 7-layer, 4K timeline. The V1 track was a full-screen, base layer of Sony XAVC. On top of that I layered six tracks of picture-in-picture (PIP) clips consisting of RED Dragon raw footage at various resolutions up to 5K. Some clips were recorded with in-camera slomo. I applied color correction, scaling/positioning and a drop shadow. The 24p timeline was one minute long and was exported as a 4K .mp4 file. The HP handled this task at just under 11 minutes, compared with almost two hours for the Mac Pro.

My second Premiere Pro test was a little more “real world” – a 48-second sequence of ARRI Alexa 1080p ProRes 4444 log-C clips. These were round-tripped through SpeedGrade to add a Rec 709 LUT, a primary grade and two vignettes to blur and darken the outer edge of the clips. This sequence was exported as a 720/24p .mp4 file. The Z1 G2 tackled this in about 14 minutes compared with 37 minutes for the Mac Pro.

Premiere Pro CC 2014 uses GPU acceleration and the superior performance of the NVIDIA K4100M card in the HP versus the ATI 5870 in the Mac Pro is likely the reason for this drastic difference. The render times were closer in After Effects, which makes less use of the GPU for effects processing. My 6-layer After Effects stress test was an 8-second composition consisting of six layers of 1080p ProRes clips from the Blackmagic Cinema Camera. I applied various Cycore and color correction effects and then moved them in 3D space with motion blur enabled. These were rendered out using the QuickTime Animation codec. Times for the Z1 G2 and Mac Pro were 6.5 minutes versus 8.5 minutes respectively.

My last test for the HP Z1 G2 involved Avid Media Composer. My 10-layer test sequence included nine PIP video tracks (using the 3D warp effect) over a full-screen background layer on V1. All media was Avid DNxHD175X (1080p, 10-bit, 23.976fps). No frames were dropped in the medium display quality, but in full quality frames started to drop at V6. When I added a drop shadow to the PIP clips, frames were dropped starting at V4 for full quality and V9 for medium quality.

Conclusion

The HP Z1 G2 is an outstanding workstation. Like any alternative form factor, you have to weigh the options of legacy support for older storage systems and PCIe cards. Thunderbolt addresses many of those concerns as an increasing number of adapters and expansion units hits the market. Those interested in shifting from Mac to Windows – and looking for the best in what the PC side has to offer – won’t go wrong with HP products. The company also maintains close ties to Avid and other software vendors, to make sure the engineering of their workstations matches the future needs of the software.

Whether an all-in-one is right for you comes down to individual needs and preferences. I was very happy with the overall ease of installation, operation and performance of the Z1 G2. By adding MacDrive, QuickTime and ProRes software and codecs, I could easily move files between the Z1 and my Mac. The screen is gorgeous, it’s very quiet and the heat output feels less than from my Mac tower. In these various tests, I never heard any fans kick into high. Whether you are upgrading from an older PC or switching platforms, the HP Z1 G2 is definitely worth considering.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

24p HD Restoration


There’s a lot of good film content that only lives on 4×3 SD 29.97 interlaced videotape masters. Certainly in many cases you can go back and retransfer the film to give it new life, but for many small filmmakers, the associated costs put that out of reach. In general, I’m referring to projects with $0 budgets. Is there a way to get an acceptable HD product from an old Digibeta master without breaking the bank? A recent project of mine would say, yes.

How we got here

I had a rather storied history with this film. It was originally shot on 35mm negative, framed for 1.85:1, with the intent to end up with a cut negative and release prints for theatrical distribution. It was being posted around 2001 at a facility where I worked, and I was involved with some of the post production, although not the original edit. At the time, synced dailies were transferred to Beta-SP with burn-in data on the top and bottom of the frame for offline editing purposes. As was common practice back then, the 24fps film negative was transferred to the interlaced video standard of 29.97fps with added 2:3 pulldown – a process that duplicates additional fields from the film frames, such that 24 film frames evenly add up to 60 video fields in the NTSC world. This was loaded into an Avid, where – depending on the system – the redundant fields are removed, or the list that goes to the negative cutter compensates for the adjustments back to a frame-accurate 24fps film cut.
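
For anyone who hasn't dealt with pulldown in a while, the cadence is easy to sketch out. This is the standard 2:3 pattern in a nutshell (a toy illustration, not a telecine implementation):

```python
# 2:3 pulldown: four film frames (A, B, C, D) become ten video fields (five
# interlaced video frames), so 24 film frames fill out 60 fields / 30 frames.
def pulldown(film_frames):
    cadence = [2, 3, 2, 3]              # fields contributed by each film frame
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * cadence[i % 4])
    return fields

print(pulldown("ABCD"))                 # ['A','A','B','B','B','C','C','D','D','D']
print(len(pulldown("ABCD" * 6)))        # 24 film frames -> 60 fields
```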

For the purpose of festival screenings, the project file was loaded into our Avid Symphony and I conformed the film at uncompressed SD resolution from the Beta-SP dailies and handled color correction. I applied a mask to hide the burn-in and ended up with a letter-boxed sequence, which was then output to Digibeta for previews and sales pitches to potential distributors. The negative went off to the negative cutter, but for a variety of reasons, that cut was never fully completed. In the two years before a distribution deal was secured, additional minor video changes were made throughout the film to end up with a revised cut, which no longer matched the negative cut.

Ultimately the distribution deal that was struck was only for international video release and nothing theatrical, which meant that rather than finishing/revising the negative cut, the most cost-effective process was to deliver a clean video master. Except that all video source material had burn-in and the distributor required a full-height 4×3 master. Therefore, letter-boxing was out. To meet the delivery requirements, the filmmaker would have to go back to the original negative and retransfer it in a 4×3 SD format and master that to Digital Betacam. Since the negative was only partially cut and additional shots were added or changed, I went through a process of supervising the color-corrected transfer of all required 35mm film footage. Then I rebuilt the new edit timeline largely by eye-matching the new, clean footage to the old sequence. Once done and synced with the mix, a Digibeta master was created and off it went for distribution.

What goes around comes around

After a few years in distribution, the filmmaker retrieved his master and rights to the film, with the hope of breathing a little life into it through self-distribution – DVDs, Blu-rays, Internet, etc. With the masters back in-hand, it was now a question of how best to create a new product. One thought was simply to letter-box the film (to be in the director’s desired aspect) and call it a day. Of course, that still wouldn’t be in HD, which is where I stepped back in to create a restored master that would work for HD distribution.

Obviously, if there was any budget to retransfer the film negative to HD and repeat the same conforming operation that I’d done a few years ago – except now in HD – that would have been preferable. Naturally, if you have some budget, that path will give you better results, so shop around. Unfortunately, while desktop tools for editors and color correction have become dirt-cheap in the intervening years, film-to-tape transfer and film scanning services have not – and these retain a high price tag. So if I was to create a new HD master, it had to be from the existing 4×3 NTSC interlaced Digibeta master as the starting point.

In my experience, I know that if you are going to blow-up SD to HD frame sizes, it’s best to start with a progressive and not interlaced source. That’s even more true when working with software, rather than hardware up-convertors, like Teranex. Step one was to reconstruct a correct 23.98p SD master from the 29.97i source. To do this, I captured the Digibeta master as a ProResHQ file.

Avid Media Composer to the rescue


When you talk about software tools that are commonly available to most producers, there are a number of applications that can correctly apply a “reverse telecine” process. There are, of course, hardware solutions from Snell and Teranex (Blackmagic Design) that do an excellent job, but I’m focusing on a DIY solution in this post. That involves deconstructing the 2:3 pulldown (also called “3:2 pulldown”) cadence of whole and split-field frames back into only whole frames, without any interlaced tearing (split-field frames). After Effects and Cinema Tools offer this feature, but they really only work well when the entire source clip is of a consistent and unbroken cadence. This film had been completed in NTSC 29.97 TV-land, so the cadence would frequently change at cuts. In addition, there had been some digital noise reduction applied to the final master after the Avid output to tape, which further altered the cadence at some cuts. Therefore, to reconstruct the proper cadence, changes had to be made at every few cuts and, in some scenes, at every shot change. This meant slicing the master file at every required point and applying a different setting to each clip. The only software I know of that can do this effectively is Avid Media Composer.

Start in Media Composer by creating a 29.97 NTSC 4×3 project for the original source. Import the film file there. Next, create a second 23.98 NTSC 4×3 project. Open the bin from the 29.97 project into the 23.98 project and edit the 29.97 film clip to a new 23.98 sequence. Media Composer will apply a default motion adapter to the clip (which is the entire film) in order to reconcile the 29.97 interlaced frame rate into a 23.98 progressive timeline.

Now comes the hard part. Open the Motion Effect Editor window and “promote” the effect to gain access to the advanced controls. Set the Type to “Both Fields”, Source to “Film with 2:3 Pulldown” and Output to “Progressive”. Although you can hit “Detect” and let Media Composer try to decide the right cadence, it will likely guess incorrectly on a complex file like this. Instead, under the 2:3 Pulldown tab, toggle through the cadence options until you only see whole frames when you step through the shot frame-by-frame. Move forward to the next shot(s) until you see the cadence change and you see split-field frames again. Split the video track (place an “add edit”) at that cut and step through the cadence choices again to find the right combination. Rinse and repeat for the whole film.
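
Under the hood, what that promoted motion adapter does is conceptually simple, even if finding the right cadence phase at every cut is the tedious part. Here's a rough sketch of the idea (mine, not Avid's code), assuming one clean A A / B B / B C / C D / D D pattern:

```python
# Reverse telecine in principle: given interlaced video frames as (top, bottom)
# field pairs in a clean 2:3 cadence, keep the three whole frames and rebuild the
# film frame that was split across two video frames. Illustrative only -- the real
# job is detecting where the cadence phase changes (at cuts), as described above.
def reverse_pulldown(video_frames):
    film = []
    for g in range(0, len(video_frames) - 4, 5):
        v1, v2, v3, v4, v5 = video_frames[g:g + 5]
        film.append(v1)                # whole film frame A
        film.append(v2)                # whole film frame B
        film.append((v4[0], v3[1]))    # film frame C, rebuilt from the two split frames
        film.append(v5)                # whole film frame D
    return film

video = [("At", "Ab"), ("Bt", "Bb"), ("Bt", "Cb"), ("Ct", "Db"), ("Dt", "Db")]
print(reverse_pulldown(video))
# [('At', 'Ab'), ('Bt', 'Bb'), ('Ct', 'Cb'), ('Dt', 'Db')]
```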

Due to the nature of the process, you might have a cut that itself occurs within a split-field frame. That’s usually because this was a cut in the negative and was transferred as a split-field video frame. In that situation, you will have to remove the entire frame across both audio and video. These tiny 1-frame adjustments throughout the film will slightly shorten the duration, but usually it’s not a big deal. However, the audio edit may or may not be noticeable. If it can’t simply be fixed by a short 2-frame dissolve, then usually it’s possible to shift the audio edit a little into a pause between words, where it will sound fine.

Once the entire film is done, export a new self-contained master file. Depending on codecs and options, this might require a mixdown within Avid, especially if AMA linking was used. That was the case for this project, because I started out in ProResHQ. After export, you’ll have a clean, reconstructed 23.98p 4×3 NTSC-sized (720×486) master file. Now for the blow-up to HD.

DaVinci Resolve

There are many applications and filters that can blow up SD footage to HD, but often the results end up soft. I’ve found DaVinci Resolve to offer some of the cleanest resizing, along with very fast rendering for the final output. Resolve offers three scaling algorithms, with “Sharper” providing the crispest blow-up. The second issue is that since I wanted to restore the wider aspect, which is inherent in going from 4×3 to 16×9, this meant blowing up more than normal – enough to fit the image width and crop the top and bottom of the frame. Since Resolve has the editing tools to split clips at cuts, you have the option to change the vertical position of a frame using the tilt control. Plus, you can do this creatively on a shot-by-shot basis if you want to. This way you can optimize the shot to best fit into the 16×9 frame, rather than arbitrarily lopping off a preset amount from the top and bottom.
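
The reframing math is straightforward. Using round numbers for a 720×486 NTSC master filling a 1280×720 frame (my illustration; Resolve deals with the non-square NTSC pixels internally):

```python
# Filling a 16x9 1280x720 frame with a 4x3 SD picture: scale to the full width,
# then crop the excess height, with the tilt control deciding where the crop sits.
target_w, target_h = 1280, 720

scaled_h = target_w * 3 / 4            # a 4:3 picture at 1280 wide is 960 lines tall
excess = scaled_h - target_h           # 240 lines to lose, split per shot via tilt
print(f"scaled height {scaled_h:.0f}, crop {excess:.0f} lines top/bottom combined")
```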

You actually have two options. The first is to blow up the film to a large 4×3 frame out of Resolve and then do the slicing and vertical reframing in yet another application, like FCP 7. That’s what I did originally with this project, because back then, the available version of Resolve did not offer what I felt were solid editing tools. Today, I would use the second option, which would be to do all of the reframing strictly within Resolve 11.

As always, there are some uncontrollable issues in this process. The original transfer of the film to Digibeta was done on a Rank Cintel Mark III, which is a telecine unit that used a CRT (literally an oscilloscope tube) as a light source. The images from these tubes get softer as they age and, therefore, they require periodic scheduled replacement. During the course of the transfer of the film, the lab replaced the tube, which resulted in a noticeable difference in crispness between shots done before and after the replacement. In the SD world, this didn’t appear to be a huge deal. Once I started blowing up that footage, however, it really made a difference. The crisper footage (after the tube replacement) held up to more of a blow-up than the earlier footage. In the end, I opted to only take the film to 720p (1280×720) rather than a full 1080p (1920×1080), just because I didn’t feel that the majority of the film held up well enough at 1080. Not just for the softness, but also in the level of film grain. Not ideal, but the best that can be expected under the circumstances. At 720p, it’s still quite good on Blu-ray, standard DVD or for HD over the web.

To finish the process, I dust-busted the film to fix places with obvious negative dirt (white specks in the frame) caused by the initial handling of the film negative. I used FCP X and CoreMelt’s SliceX to hide and cover negative dirt, but other options to do this include built-in functions within Avid Media Composer. While 35mm film still holds a certain intangible visual charm – even in such a “manipulated” state – the process certainly makes you appreciate modern digital cameras like the ARRI ALEXA!

As an aside, I’ve done two other complete films this way, but in those cases, I was fortunate to work from 1080i masters, so no blow-up was required. One was a film transferred in its entirety from a low-contrast print, broken into reels. The second was assembled digitally and output to intermediate HDCAM-SR 23.98 masters for each reel. These were then assembled to a 1080i composite master. Aside from being in HD to start with, cadence changes only occurred at the edits between reels. This meant that it only required 5 or 6 cadence corrections to fix the entire film.

©2014 Oliver Peters

Sony Vegas Pro 13

If you are looking for an easy-to-use editing application that’s optimized for a Windows workstation, one option is the Vegas Pro family from Sony Creative Software. There are several configurations, including Vegas Pro 13 Edit, Vegas Pro 13 and Vegas Pro 13 Suite. The big difference among these is the selection of Sony and third-party tools that come with each bundle. The Edit version is mainly the NLE software. The standard Vegas Pro 13 package includes a Dolby Digital Professional encoder, DVD Architect Pro 6, the NewBlueFX Video Essentials VI plug-in collection and Nectar Elements from iZotope. All three products include CALM Act-compliant loudness metering and the HitFilm video plug-in collection from FXHOME. The Suite bundle adds Sound Forge Pro 11 (a file-based audio editor), HitFilm 2 Ultimate (a separate compositing application), Vegas Pro Production Assistant and 25 royalty-free music tracks.

Vegas Pro is a 64-bit application that requires a 64-bit version of Windows 7, 8 or 8.1. In my testing, I installed it on a Xeon-powered HP Z1 G2 configured with Windows 8.1, an NVIDIA K4100m GPU and 16GB of RAM. I didn’t have any video I/O device connected, so I wasn’t able to test that, but Vegas Pro will support AJA hardware and various external control surfaces. If you’ve ever used a version of Vegas Pro in the past, then Vegas Pro 13 will feel comfortable. For those who’ve never used it, the layout might be a bit of a surprise compared with other NLE software. Vegas is definitely a niche product in the market, in spite of its power, but fans of the software are as loyal to it, as those on the Mac side who love Final Cut Pro X.

Vegas Pro 13 supports a wide range of I-frame and long-GOP video codecs, including many professional and consumer media formats. For those moving into 4K, Vegas Pro 13 supports XAVC (used by the F55) and XAVC-S, a format used in Sony’s 4K prosumer cameras. Other common professional formats supported include Panasonic P2 (AVC-Intra), Sony XDCAM, HDCAM-SR, ProRes (requires ProRes for Windows and QuickTime installed) and REDCODE raw. 4K timeline support goes up to a frame size of 4096 x 4096 pixels. As an application with deep roots in audio, the list naturally includes most audio formats, as well.

What’s new

Fans of Vegas Pro will find a lot in version 13 to justify an upgrade. One item is Vegas Pro Connect, an iPad companion application designed for review and approval. It features online and offline modes for reviewing and adding comments to a Vegas Pro project. There’s also a new “proxy-first” workflow. For example, videographers shooting XDCAM can use the Sony Wireless Adapter to send camera proxies to the cloud. While the XDCAM discs are being shipped back to the facility, the editors can download the proxies and start the edit. When the high-resolution media arrives, they can then automatically relink the project to it. Vegas Pro 13 also adds a project archive function to back up projects and associated media.
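
Conceptually, the relink step is just a matter of matching each proxy to its full-resolution counterpart, typically by clip name. Here is a minimal, hypothetical Python sketch of that idea; the folder paths, extensions and function names are my own illustration and are unrelated to how Vegas Pro actually implements relinking.

```python
import os

def build_index(folder, exts):
    """Map clip base names to full paths for a given set of extensions."""
    index = {}
    for root, _, files in os.walk(folder):
        for name in files:
            base, ext = os.path.splitext(name)
            if ext.lower() in exts:
                index[base.lower()] = os.path.join(root, name)
    return index

def relink(proxy_folder, highres_folder):
    """Return a mapping of proxy file -> matching high-res file, by base name."""
    proxies = build_index(proxy_folder, {".mp4", ".mxf"})
    masters = build_index(highres_folder, {".mxf", ".mov"})
    return {path: masters[base] for base, path in proxies.items() if base in masters}

# Example (hypothetical paths):
# for proxy, master in relink("D:/proxies", "E:/xdcam_masters").items():
#     print(proxy, "->", master)
```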

The plug-ins have been expanded in this release by bundling in new effects from NewBlueFX, FXHOME and iZotope. The video effects include color modification, keying, bleach bypass, light flares, TV damage and a number of other popular looks. These additions augment Vegas Pro’s extensive selection of Sony audio and video effects. Vegas supports the VST audio plug-in and OpenFX (OFX) video plug-in formats. This means compatible plug-ins installed for other applications on your system can be detected and used. For example, the FXHOME HitFilm plug-ins also showed up in the Resolve 11 Lite (beta) I had installed on this computer, because both applications share the OFX architecture.
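
That cross-application behavior comes from OFX hosts scanning a common, system-wide plug-in folder. The short Python sketch below simply lists whatever OFX bundles are installed in the standard shared location; it assumes the default paths defined by the OpenFX spec and is only meant as an illustration of why two hosts see the same plug-ins.

```python
import os
import platform

# Standard shared OFX plug-in locations per the OpenFX spec. Any OFX host
# (Vegas Pro, Resolve, etc.) scans these same folders, which is why the same
# third-party plug-ins can appear in more than one application.
OFX_DIRS = {
    "Windows": r"C:\Program Files\Common Files\OFX\Plugins",
    "Darwin": "/Library/OFX/Plugins",
    "Linux": "/usr/OFX/Plugins",
}

def list_ofx_bundles():
    plugin_dir = OFX_DIRS.get(platform.system())
    if not plugin_dir or not os.path.isdir(plugin_dir):
        return []
    # Each plug-in ships as a <name>.ofx.bundle directory.
    return sorted(d for d in os.listdir(plugin_dir) if d.endswith(".ofx.bundle"))

if __name__ == "__main__":
    for bundle in list_ofx_bundles():
        print(bundle)
```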

Given its audio heritage, Vegas Pro 13 includes a comprehensive audio mixer. New with this release is the inclusion of iZotope Nectar Elements, a single audio plug-in designed for one-click voice processing. Another welcome addition is a loudness meter window to measure levels and mixes in order to be compliant with the CALM Act and EBU R-128.
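
Both of those specifications are built on ITU-R BS.1770 loudness measurement. For a rough sense of what such a meter computes, here is a small sketch using the third-party pyloudnorm and soundfile Python libraries; this is my own illustration and not how Vegas Pro implements its meter.

```python
import soundfile as sf       # pip install soundfile
import pyloudnorm as pyln    # pip install pyloudnorm

def integrated_loudness(path):
    """Measure ITU-R BS.1770 integrated loudness of an audio file, in LUFS."""
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)  # K-weighted BS.1770 meter
    return meter.integrated_loudness(data)

# EBU R-128 targets -23 LUFS; ATSC A/85 (the CALM Act reference) targets -24 LKFS.
# loudness = integrated_loudness("mix.wav")
# print(f"Integrated loudness: {loudness:.1f} LUFS")
```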

Putting Vegas Pro 13 through its paces

One big selling point of version 13 is GPU acceleration based on OpenCL in NVIDIA, AMD and Intel graphics cards. This becomes especially important when dealing with 4K formats. The performance advances are most noticeable once you start layering video tracks. Certainly working with 4K XAVC, RED EPIC Dragon and 1080p ProRes 4444 media was easy. Scrubbing and real-time playback never caused any issues. The Vegas Pro preview window lets you manually or automatically adjust visual preview quality to maintain maximum real-time playback. If you are a RED user, then you’ll appreciate access to the R3D decode properties. The Z1 G2 felt very responsive working with native RED camera media.
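
Because the acceleration is OpenCL-based rather than tied to a single vendor’s API, any GPU the system exposes through OpenCL can contribute. A quick way to see what an OpenCL-accelerated host has to work with is to enumerate the platforms and devices, as in this sketch using the pyopencl library; it is my own illustration and unrelated to Vegas Pro’s internals.

```python
import pyopencl as cl  # pip install pyopencl

# List every OpenCL platform and device visible to the system. NVIDIA, AMD
# and Intel GPUs (and CPUs with OpenCL runtimes) all show up here, which is
# the pool of hardware an OpenCL host application can draw on.
for plat in cl.get_platforms():
    print(plat.name)
    for device in plat.get_devices():
        kind = cl.device_type.to_string(device.type)
        print(f"  {device.name} ({kind}), {device.global_mem_size // 2**20} MB")
```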

Many editors take a while to get comfortable with Vegas Pro’s interface. Vegas started life as multitrack audio software (a DAW) and the layout and track design stem from that. Each video and audio track is designed like a mixing board channel strip. You have a read/touch/latch automation control, a plug-in chain and a level slider. With audio, you also get panning and a meter. With video, you get a spatial control, a parent/child track hierarchy control (for track grouping) and a compositing mode. Many of the functions can be manipulated in real time, while the timeline is playing. This may seem obvious when writing audio levels in an automated mixing pass; it’s far less common for video. For example, you can do the same for video opacity, writing a real-time pass of opacity level changes on-the-fly simply by adjusting the video level fader as the timeline plays.

Once you get deeper into Vegas, you’ll find quite a few surprises. For example, it supports stereoscopic workflows. The Title Generator effects include numerous animated text templates. Together with DVD Architect, you have a solid Blu-ray Disc authoring system. Unfortunately, there were also a few things I wanted to test that simply didn’t seem to work. Vegas Pro 13 is supposed to be able to import and export a range of project files, including XML, AAF, FCPXML, Premiere projects, etc. I attempted to import XML, FCPXML and Premiere Pro project files, but came up empty each time. I was never able to export an FCPXML file. I was able to export FCP 7 XML and Premiere project files, but the Premiere file crashed Premiere Pro CC 2014 on both my Mac and this test PC. The FCP 7 XML did work in Premiere Pro, though. I tried to bring an XML into Final Cut Pro X using the 7toX translation utility, but FCP X was unable to relink to the media files. So, while this should be a great feature, it seems to be a work-in-progress at this point.
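
When an interchange file does come out of an NLE, a quick sanity check of its contents can save a lot of relinking grief. FCP 7 XML (the xmeml format) is plain XML, so a few lines of Python are enough to list the clips and media paths an export references. The element names below reflect my understanding of the format and should be treated as an assumption rather than a definitive reference.

```python
import xml.etree.ElementTree as ET

def list_clips(xml_path):
    """List clip names and media paths from an FCP 7 (xmeml) XML export."""
    tree = ET.parse(xml_path)
    clips = []
    # Assumed xmeml element names: <clipitem>, <name>, <file><pathurl>.
    for clipitem in tree.iter("clipitem"):
        name = clipitem.findtext("name")
        pathurl = clipitem.findtext("file/pathurl")
        clips.append((name, pathurl))
    return clips

# Example: a quick check of an export before attempting to relink it elsewhere.
# for name, path in list_clips("sequence.xml"):
#     print(name, "->", path)
```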

It was hard for me to warm up to the interface itself. While it’s very fast to operate, Vegas Pro is still designed like an audio application, and so it is very different from most traditional NLEs. For example, double-clicking a clip edits it straight to the timeline by default. To first send it to the source viewer in order to select in and out points, you have to use the “Open in Trimmer” command. Fortunately, there is a preference setting to flip this behavior. Vegas Pro projects contain only a single timeline, which is also referred to as the project (just as in FCP X). You cannot have multiple timelines within a single project; however, you can have more than one instance of Vegas Pro open at the same time. In that case, you can switch between them using the Windows task bar to select which application window to bring to the front. It is also possible to edit a .veg (Vegas Pro project) file to the timeline. This gives you the same result as in other NLE software, where you can edit a nested timeline into another timeline.

Speaking of the interface, the application badly needs a redesign. It looks like it’s still from the Windows 98 era. Some people appreciate starkness – and I know this probably helps the application’s speed – but, if you’re going to stare at a screen all day long, it should look a bit more elegant. Even Sony’s Sound Forge Pro for the Mac, which shares a similarly stark design, is cleaner and feels more modern. The interface is also very bright; in fact, disabling the Vegas theme in preferences makes it brighter still, painfully so. It would be great if Vegas Pro had a UI brightness slider, like Adobe has offered for years.

Conclusion

Sony’s Vegas Pro 13 is a useful application with a lot of power for users at all levels. At only a few hundred dollars, it’s a strong application suite to have in your Windows toolkit, even if you prefer other NLEs. The prime reason is the wide codec support and easy 4K editing. If that’s how you use it, then the interface issues I mentioned won’t be a big deal.

On the other hand, if you’re an experienced Vegas Pro user and happy with it as is, then version 13 is a worthy upgrade, especially on a high-end machine. It’s fast, efficient and gets the job done. If Sony fixes the import/export problems I encountered, Vegas Pro could become an indispensable tool.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

The Zero Theorem

Few filmmakers are as gifted as Terry Gilliam when it comes to setting a story inside a dystopian future. The Monty Python alum, who brought us Brazil and Twelve Monkeys, among others, is back with his newest, The Zero Theorem. It’s the story of Qohen Leth – played by Christoph Waltz (Django Unchained, Water for Elephants, Inglourious Basterds) – an eccentric computer programmer who has been tasked by his corporate employer with solving the Zero Theorem. This is a calculation that, if solved, might prove that the meaning of life is nothingness.

The story is set in a futuristic London, but carries many of Gilliam’s hallmarks, like a retro approach to the design of technology. Qohen works out of his home, which is much like a rundown church. Part of the story takes Qohen into worlds of virtual reality, where he frequently interacts with Bainsley (Melanie Thierry), a webcam stripper he met at a party, but who may have been sent by his employer, Mancom, to distract him. The Zero Theorem is very reminiscent of Brazil and, in concept, also of The Prisoner, a 1960s-era television series. Gilliam explores themes of isolation versus loneliness, the pointlessness of mathematical modeling as a route to meaning, and privacy.

I recently had a Skype chat with Mick Audsley, who edited the film last year. Audsley is London-based, but is currently nearing completion of a director’s cut of the feature film Everest in Iceland. This was his third Gilliam film, having previously edited Twelve Monkeys and The Imaginarium of Doctor Parnassus. Audsley explained, “I knew Terry before Twelve Monkeys and have always had a lot of admiration for him. This is my third film with Terry, as well as a short, and he’s an extraordinarily interesting director to work with. He still thinks in a graphic way, since he is both literally and figuratively an artist. He can do all of our jobs better than we can, but really values the input from other collaborators. It’s a bit like playing in a band, where everyone feeds off of the input of the other band members.”

The long path to production

The film’s screenwriter, Pat Rushin, teaches creative writing at the University of Central Florida in Orlando. He originally submitted the script for The Zero Theorem to the television series Project Greenlight, where it made the top 250. The script ended up with the Zanuck Company. It was offered to Gilliam in 2008, but initially other projects got in the way. It was revived in June 2012 with Gilliam at the helm. The script was very ambitious for a limited budget of under $10 million, so production took place in Romania over a 37-day period. In spite of the cost challenges, it was shot on 35mm film and includes 250 visual effects.

Audsley continued, “Nicola [Pecorini, director of photography] shot a number of tests with film, RED and ARRI ALEXA cameras. The decision was made to use film. It allowed him the latitude to place lights outside of the chapel set – Qohen’s home – and have light coming in through the windows to light up the interior. Kodak’s lab in Bucharest handled the processing and transfer and then sent Avid MXF files to London, where I was editing. Terry and the crew were able to view dailies in Romania and then we discussed these over the phone. Viewing dailies is a rarity these days with digitally-shot films and something I really miss. Seeing the dailies with the full company provides clarity, but I’m afraid it’s dying out as part of the filmmaking process.”

While editing in parallel to the production, Audsley didn’t upload any in-progress cuts for Gilliam to review. He said, “It’s hard for the director to concentrate on the edit, while he’s still in production. As long as the coverage is there, it’s fine. Certainly Terry and Nicola have a supreme understanding of film grammar, so that’s not a problem. Terry knows to get those extra little shots that will make the edit better. So, I was editing largely on my own and had a first cut within about ten days of the time that the production wrapped. When Terry arrived in London, we first went over the film in twenty-minute reels. That took us about two to three weeks. Then we went through the whole film as one piece to get a sense for how it worked as a film.”

Making a cinematic story

As with most films, the “final draft” of the script occurs in the cutting room. Audsley continued, “The film as a written screenplay was very fluid, but when we viewed it as a completed film, it felt too linear and needed to be more cinematic – more out of order. We thought that it might be best to move the sentences around in a more interesting way. We did that quite easily and quickly. Thus, we took the strength of the writing and realized it in cinematic language. That’s one of the big benefits of the modern digital editing tools. The real film is about the relationship between Bainsley and Qohen and less about the world they inhabit. The challenge as filmmakers in the cutting room is to find that truth.”

Working with visual effects presents its own editorial challenge. “As an editor, you have to evaluate the weight and importance of the plate – the base element for a visual effect – before committing to the effect. From the point-of-view of cost, you can’t keep undoing shots that have teams of artists working on them. You have to ensure that the timing is exactly right before turning over the elements for visual effects development. The biggest single visual challenge is making Terry’s world, which is visually very rich. In the first reel, we see a futuristic London, with moving billboards. These shots were very complex and required a lot of temp effects that I layered up in the timeline. It’s one of the more complex sequences I’ve built in the Avid, with both visual and audio elements interacting. You have to decide how much you can digest, and that’s an open conversation with the director and effects artists.”

The post schedule lasted about twenty weeks, ending with a mix in June 2013. Part of that time was tied up in waiting for the completion of visual effects. Since there was no budget for official audience screenings, the editorial team was not tasked with creating temp mixes and preview versions before finishing the film. Audsley said, “The first cut was not overly long. Terry is good in his planning. One big change that we made during the edit was to the film’s ending. As written, Qohen ends up in the real world for a nice, tidy ending. We opted to end the film earlier, with a more ambiguous ending that we felt would be better. In the final cut, the film ends while he’s still in a virtual reality world. It provides a more cerebral, rather than practical, ending for the viewer.”

Cutting style 

Audsley characterizes his cutting style as “old school”. He explained, “I come from a Moviola background, so I like to leave my cut as bare as possible, with few temp sound effects or music cues. I’ll only add what’s needed to help you understand the story. Since we weren’t obliged on this film to do temp mixes for screenings, I was able to keep the cut sparse. This lets you really focus on the cut and know if the film is working or not. If it does, then sound effects and music will only make it better. Often a rough cut will have temp music and people have trouble figuring out why a film isn’t working. The music may mask an issue or, in fact, it might simply be that the wrong temp music was used. On The Zero Theorem, George Fenton, our composer, gave us representative pieces late in the process that he’d written for scenes.” Andre Jacquemin was the sound designer who worked in parallel to Audsley’s cut, and the two developed an interactive process. Audsley explained, “Sometimes sound would need to breathe more, so I’d open a scene up a bit. We had a nice back-and-forth in how we worked.”

Audsley edited the film using Avid Media Composer version 5 connected to an Avid Unity shared storage system. This linked him to another Avid workstation run by his first assistant editor, Pani Ahmadi-Moore. He’s since upgraded to version 7 software and Avid ISIS shared storage. Audsley said, “I work the Avid pretty much like I worked when I used the Moviola and cut on film. Footage is grouped into bins for each scene. As I edit, I cut the film into reels and then use version numbers as I duplicate sequences to make changes. I keep a daily handwritten log about what’s done each day. The trick is to be fastidious and organized. Pani handles the preparation and asset management so that I can concentrate on the edit.”

Audsley continued, “Terry’s films are very much a family type of business. It’s a family of people who know each other. Terry is supremely in control of his films, but he’s also secure in sharing with his filmmaking family. We are open to discuss all aspects of the film. The cutting room has to be a safe place for a director, but it’s the hub of all the post activity, so everyone has to feel free about voicing their opinions.”

Much of what the editor does proceeds in isolation. The Zero Theorem provided a certain ironic resonance for Audsley, who commented, “At the start, we see a guy sitting naked in front of a computer. His life is harnessed in manipulating something on screen, and that is something I can relate to as a film editor! I think it’s very much a document of our time, about the notion that in this world of communication, there’s a strong aspect of isolation. All the communication in the world does not necessarily connect you spiritually.” The Zero Theorem is scheduled to open for limited US distribution in September.

Originally written for DV magazine / CreativePlanetNetwork.

©2014 Oliver Peters