The NLE that wouldn’t die II


With echoes of Monty Python in the background, two years on, Final Cut Pro 7 and Final Cut Studio are still widely in use. As I noted in my post from last November, I still see facilities with firmly entrenched and mature FCP “legacy” workflows that haven’t moved to another NLE yet. Some were ready to move to Adobe until they learned subscription was the only choice going forward. Others maintain a fanboy’s faith in Apple that the next version will somehow fix all the things they dislike about Final Cut Pro X. Others simply haven’t found the alternative solutions compelling enough to shift.

I’ve been cutting all manner of projects in FCP X since the beginning and am currently using it on a feature film. I augment it in lots of ways with plug-ins and utilities, so I’m about as deep into FCP X workflows as anyone out there. Yet, there are very few projects in which I don’t touch some aspect of Final Cut Studio to help get the job done. Some fueled by need, some by personal preference. Here are some ways that Studio can still work for you as a suite of applications to fill in the gaps.

DVD creation

There are no more version updates to Apple’s (or Adobe’s) DVD creation tools. FCP X and Compressor can author simple “one-off” discs using their export/share/batch functions. However, if you need a more advanced, authored DVD with branched menus and assets, DVD Studio Pro (as well as Adobe Encore CS6) is still a very viable tool, assuming you already own Final Cut Studio. For me, the need to do this has been reduced, but not completely gone.

Batch export

Final Cut Pro X has no batch export function for source clips. This is something I find immensely helpful. For example, many editorial houses specify that their production company client supply edit-friendly “dailies” – especially when final color correction and finishing will be done by another facility or artist/editor/colorist. This is a throwback to film workflows and is most often the case with RED and ALEXA productions. Certainly a lot of the same processes can be done with DaVinci Resolve, but it’s simply faster and easier with FCP 7.

In the case of ALEXA, a lot of editors prefer to do their offline edit with LUT-corrected, Rec 709 images, instead of the flat, Log-C ProRes 4444 files that come straight from the camera. With FCP 7, simply import the camera files, add a LUT filter like the one from Nick Shaw (Antler Post), enable TC burn-in if you like and run a batch export in the codec of your choice. When I do this, I usually end up with a set of Rec 709 color, ProRes LT files with burn-in that I can use to edit with. Since the file name, reel ID and timecode are identical to the camera masters, I can easily edit with the “dailies” and then relink to the camera masters for color correction and finishing. This works well in Adobe Premiere Pro CC, Apple FCP 7 and even FCP X.
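For readers who no longer have FCP 7 handy, a similar dailies pass can be scripted. The Python sketch below drives ffmpeg and is purely illustrative – the LUT file name, folder layout and burn-in settings are my assumptions, not part of the workflow above – but it shows the shape of the batch: apply a 3D LUT, burn in timecode and transcode to ProRes LT.

```python
from pathlib import Path

def dailies_cmd(src: Path, out_dir: Path, lut: str, start_tc: str = "01:00:00:00"):
    """Build one ffmpeg command: 3D LUT + timecode burn-in + ProRes LT.
    The LUT file and starting timecode are placeholders."""
    # ffmpeg's drawtext filter needs the colons in the timecode escaped
    tc = start_tc.replace(":", r"\:")
    vf = (f"lut3d={lut},"
          f"drawtext=timecode='{tc}':rate=24000/1001:"
          f"fontsize=36:fontcolor=white:x=(w-tw)/2:y=h-2*lh")
    out = out_dir / (src.stem + "_dailies.mov")
    return ["ffmpeg", "-i", str(src), "-vf", vf,
            "-c:v", "prores_ks", "-profile:v", "1",  # profile 1 = ProRes LT
            "-c:a", "copy", str(out)]

# build a command per camera master; hand each one to subprocess.run()
cmds = [dailies_cmd(f, Path("dailies"), "alexa_logc_to_rec709.cube")
        for f in sorted(Path("camera_masters").glob("*.mov"))]
```

Because the source file name carries through unchanged (only a suffix is added), relinking back to the camera masters later remains a simple match on name, reel and timecode.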

Timecode and reel IDs

When I work with files from the various HDSLRs, I prefer to convert them to ProRes (or DNxHD) and add timecode and reel ID info. In my eyes, this makes the file professional video media that’s much more easily dealt with throughout the rest of the post pipeline. I have a specific routine for doing this, but when some of these steps fail, due to some file error, I find that FCP 7 is a good back-up utility. From inside FCP 7, you can easily add reel IDs and also modify or add timecode. This metadata is embedded into the actual media file and readable by other applications.
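The embedding itself is a QuickTime metadata operation, but the arithmetic behind a timecode track is worth seeing. A minimal sketch, assuming non-drop-frame counting at 24 fps (which is how 23.98 material is counted):

```python
FPS = 24  # 23.98 material is counted non-drop at 24 frames per second

def frames_to_tc(frames: int) -> str:
    """Absolute frame count -> HH:MM:SS:FF, non-drop-frame."""
    ff = frames % FPS
    ss = frames // FPS
    return f"{ss // 3600:02d}:{ss % 3600 // 60:02d}:{ss % 60:02d}:{ff:02d}"

def tc_to_frames(tc: str) -> int:
    """HH:MM:SS:FF -> absolute frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return (hh * 3600 + mm * 60 + ss) * FPS + ff
```

So a clip whose first frame is stamped 01:00:00:00 starts at absolute frame 86,400; every other frame's timecode follows from simple addition, which is why identical start values let different applications relink frame-accurately.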

Log and Transfer

Yes, I know that you can import and optimize (transcode) camera files in FCP X. I just don’t like the way it does it. The FCP 7 Log and Transfer module allows the editor to set several naming preferences upon ingest. This includes custom names and reel IDs. That metadata is then embedded directly into the QuickTime movie created by the Log and Transfer module. FCP X doesn’t embed name and ID changes into the media file, but rather into its own database. Subsequently this information is not transportable by simply reading the media file within another application. As a result, when I work with media from a C300, for example, my first step is still Log and Transfer in FCP 7, before I start editing in FCP X.

Conform and reverse telecine

A lot of cameras offer the ability to shoot at higher frame rates with the intent of playing this at a slower frame rate for a slow motion effect – “overcranking” in film terms. Advanced cameras like the ALEXA, RED One, EPIC and Canon C300 write a timebase reference into the file that tells the NLE that a file recorded at 60fps is to be played at 23.98fps. This is not true of HDSLRs, like a Canon 5D, 7D or a GoPro. You have to tell the NLE what to do. FCP X only does this through its Retime effect, which means you are telling the file to be played as slomo, thus requiring a render.

I prefer to use Cinema Tools to “conform” the file. This alters the file header information of the QuickTime file, so that any application will play it at the conformed, rather than recorded frame rate. The process is nearly instant and when imported into FCP X, the application simply plays it at the slower speed – no rendering required. Just like with an ALEXA or RED.
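The conform itself just rewrites the file’s timebase; the resulting playback math is simple. A sketch, with the frame rates chosen as examples:

```python
def conform(duration_frames: int, recorded_fps: float, conformed_fps: float):
    """Same frames, slower clock: return the new running time (seconds)
    and the playback speed relative to real time."""
    conformed_secs = duration_frames / conformed_fps
    speed = conformed_fps / recorded_fps
    return conformed_secs, speed

# a 10-second burst shot at 60 fps, conformed to 23.976 (24000/1001)
secs, speed = conform(600, 60, 24000 / 1001)
print(f"plays for {secs:.2f} s at {speed:.1%} of real time")
```

Every recorded frame is still played exactly once – nothing is interpolated or re-rendered – which is why the conformed file behaves just like natively overcranked ALEXA or RED footage in the NLE.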

Another function of Cinema Tools is reverse telecine. If a camera file was recorded with built-in “pulldown” – sometimes called 24-over-60 – additional redundant video fields are added to the file. You want to remove these if you are editing in a native 24p project. Cinema Tools will let you do this and in the process render a new, 24p-native file.
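The 2:3 cadence behind that pulldown can be modeled in a few lines. This is a simplified sketch – real pulldown interleaves fields from adjacent frames (the mixed B-C and C-D video frames), which is why reverse telecine has to rebuild whole frames rather than just drop duplicates:

```python
def pulldown_fields(film_frames):
    """Expand 24p frames into 60i fields with a 2:3 cadence: frame A
    contributes 2 fields, B contributes 3, C 2, D 3, and so on.
    Four film frames become ten fields (five interlaced video frames)."""
    cadence = (2, 3)  # alternate 2 fields, 3 fields per film frame
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * cadence[i % 2])
    return fields

fields = pulldown_fields(["A", "B", "C", "D"])
print(fields)  # 10 fields; reverse telecine discards the redundant ones
```

The ratio is fixed at 4:5, so a 24p-native file recovered by reverse telecine has exactly four-fifths as many frames as the pulled-down original.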

Color correction

I really like the built-in and third-party color correction tools for Final Cut Pro X. I also like Blackmagic Design’s DaVinci Resolve, but there are times when Apple Color is still the best tool for the job. I prefer its user interface to Resolve’s, especially when working with dual displays; and if you use an AJA capture/monitoring product, Resolve is a non-starter. For me, Color is the best choice when I get a color correction project from outside where the editor used FCP 7 to cut. I’ve also done some jobs in X and then gone to Color via Xto7 and then FCP 7. It may sound a little convoluted, but is pretty painless and the results speak for themselves.

Audio mixing

I do minimal mixing in X. It’s fine for simple mixes, but for me, a track-based application is the only way to go. I do have X2Pro Audio Convert, but many of the out-of-house ProTools mixers I work with prefer to receive OMFs rather than AAFs. This means going to FCP 7 first and then generating an OMF from within FCP 7. This has the added advantage that I can proof the timeline for errors first. That’s something you can’t do if you are generating an AAF without any way to open and inspect it. FCP X has a tendency to include many clips that are muted and usually out of your way inside X. By going to FCP 7 first, you have a chance to clean up the timeline before the mixer gets it.

Any complex projects that I mix myself are done in Adobe Audition or Soundtrack Pro. I can get to Audition via the XML route – or I can go to Soundtrack Pro through XML and FCP 7 with its “send to” function. Either application works for me and most of my third-party plug-ins show up in each. Plus they both have a healthy set of their own built-in filters. When I’m done, simply export the mix (and/or stems) and import the track back into FCP X to marry it to the picture.

Project trimming

Final Cut Pro X has no media management function. You can copy/move/aggregate all of the media from a single Project (timeline) into a new Event, but these files are the source clips at full length. There is no ability to create a new project with trimmed or consolidated media. That’s when source files from a timeline are shortened to only include the portion that was cut into the sequence, plus user-defined “handles” (an extra few frames or seconds at the beginning and end of the clip). Trimmed, media-managed projects are often required when sending your edited sequence to an outside color correction facility. It’s also a great way to archive the “unflattened” final sequence of your production, while still leaving some wiggle room for future trimming adjustments. The sequence is editable and you still have the ability to slip, slide or change cuts by a few frames.
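The handle arithmetic a media manager performs for each clip is straightforward. A sketch, with the frame numbers and handle length chosen purely for illustration:

```python
def trim_range(src_in, src_out, src_start, src_end, handles=24):
    """Return the media range to keep for one timeline clip: the used
    portion plus `handles` frames on each side, clamped so the range
    never extends past the physical source file. All values are
    absolute frame numbers in source timecode."""
    keep_in = max(src_start, src_in - handles)
    keep_out = min(src_end, src_out + handles)
    return keep_in, keep_out

# a clip using frames 1000-1100 of a source spanning 0-5000,
# media-managed with one-second (24-frame) handles
print(trim_range(1000, 1100, 0, 5000))  # -> (976, 1124)
```

Those handles are exactly the “wiggle room” mentioned above: the trimmed file contains a little media beyond each cut point, so slips and slides of a few frames remain possible after the full-length masters are gone.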

I ran into this problem the other day, where I needed to take a production home for further work. It was a series of commercials cut in FCP X, from which I had recut four spots as director’s cuts. The edit was locked, but I wanted to finish the mix and grade at home. No problem, I thought. Simply duplicate the project with “used media”, create the new Event and “organize” (copies media into the new Event folder). I could live with the fact that the media was full length, but there was one rub. Since I had originally edited the series of commercials using Compound Clips for selected takes, the duping process brought over all of these Compounds – even though none was actually used in the edit of the four director’s cuts. This would have resulted in copying nearly two-thirds of the total source media. I could not remove the Compounds from the copied Event, without also removing them from the original, which I didn’t want to do.

The solution was to send the sequence of four spots to FCP 7 and then media manage that timeline into a trimmed project. The difference was 12GB of trimmed source clips instead of HUNDREDS of GB. At home, I then sent the audio to Soundtrack Pro for a mix and the picture back to FCP X for color correction. Connect the mix back to the primary storyline in FCP X and call it done!

I realize that some of this may sound a bit complex to some readers, but professional workflows are all about having a good toolkit and knowing how to use it. FCP X is a great tool for productions that can work within its walls, but if you still own Final Cut Studio, there are a lot more options at your disposal. Why not continue to use them?

©2013 Oliver Peters

NAB 2012 – Adobe CS6, Smoke 2013, Thunderbolt and more

Get some coffee, sit back and take your time reading this post. I apologize for its length in advance, but there’s a lot of new hardware and software to talk about. I’m going to cover my impressions of NAB along with some “first looks” at Adobe Creative Suite 6, Smoke 2013 and Thunderbolt i/o devices. There’s even some FCP X news!

_________________________________________________

Impressions of NAB 2012

I thought this year was going to be quiet and laid back. Boy, was I wrong! Once again Blackmagic Design stole the spotlight with democratized products. This year the buzz had to be the Blackmagic Cinema Camera. It delivers on the objective of the original RED Scarlet idea. It’s a $3K camera with 2.5K resolution and 13 stops of dynamic range. I’ll leave the camera discussions to the camera guys, but suffice it to say that this camera was thought up with post in mind. That is – no new, proprietary codec. It uses ProRes, DNxHD or Cinema DNG (the Adobe raw format). It also includes a copy of Resolve and UltraScope with the purchase.

Along with that news was Blackmagic’s re-introduction of the Teranex processors. Prior to that company’s acquisition by Blackmagic Design, the top-of-the-line Teranex image processor loaded with options was around $90K. Now that Grant Petty’s wizards have had a go at it, the newest versions in a nicely re-designed form factor are $2K for 2D and $4K for 3D. Sweet. And if you think free (or close to it) stifles R&D, take a look at the new, cleaned-up DaVinci Resolve 9.0 interface. Great to see that the development continues.

You’ll note that there was a lot of buzz about 4K cameras, but did you notice you need to record that image to something? Enter AJA – not with a camera – but, with the KiPro Mini Quad. That’s right – a 4K version of the Mini already designed with Canon’s C500 4K camera in mind. It records 4K ProRes 4444 files. AJA is also building its Thunderbolt portfolio with T-Tap, a monitoring-only Thunderbolt-to-SDI/HDMI output adapter under $250. More on Thunderbolt devices later in this post.

The NLE news was dominated by Adobe’s reveal of Creative Suite 6 (with Premiere Pro CS6) and Autodesk’s re-designed Smoke 2013. Avid’s news was mainly broadcast and storage-related, since Media Composer version 6 had been launched months before. Although that was old news to the post crowd, it was the first showing for the software at NAB. Nevertheless, to guarantee some buzz, Avid announced a short-term Symphony cross-grade deal that lasts into June. FCP (excluding X), Media Composer and Xpress Pro owners can move into Symphony for $999. If you are an Avid fan, this is a great deal and is probably the best bang-for-the-buck NLE available if you take advantage of the cross-grade.

An interesting sidebar is that both FilmLight and EyeOn are developing plug-in products for Avid software. FilmLight builds the Baselight color correction system, which was shown and recently released in plug-in form for FCP 7. Now they are expanding that to other hosts, including Nuke and Media Composer under the product name of Baselight Editions. EyeOn’s Fusion software is probably the best and fastest, feature film-grade compositor available on Windows. EyeOn is using Connection (a software bridge) to send Media Composer/Symphony or DS timeline clips to Fusion, which permits both applications to stay open. In theory, if you bought Symphony and added Baselight and Fusion, the combination becomes one of the most powerful NLEs on the market. All at under $5K with the current cross-grade!

Autodesk has been quite busy redesigning its Smoke NLE for the Mac platform. Smoke 2013 features a complete Mac-centric overhaul to turn it into an all-in-one “super editor” that still feels comfortable for editors coming from an FCP or Media Composer background. See my “first look” section below.

Quantel, who often gets lost in these desktop NLE discussions, showed the software-only version of Pablo running on a tweaked PC. It uses four high-end NVIDIA cards for performance and there’s also a new, smaller Neo Nano control surface. Although pricing is lower, at $50K for the software alone, it’s still the premium brand.

There’s been plenty of talk about “editing in the cloud”, but in my opinion, there were three companies at the show with viable cloud solutions for post: Avid, Quantel and Aframe. In 2010 Avid presented a main stage technology preview that this year has started to come to fruition as Interplay Sphere. The user in the field is connected to his or her home base storage and servers over various public networks. The edit software is a version of the NewsCutter/Media Composer interface that can mix local full-res media with proxy media linked to full-res media at the remote site. When the edit is done, the sequence list is “published” to the server and local, full-res media uploaded back to the home base (trimmed clips only). The piece is conformed and rendered by the server at home. Seems like the branding line should be “Replace your microwave truck with a Starbucks!”

The company with a year of real experience “in the cloud” at the enterprise level is Quantel with Qtube. It’s a similar concept to Avid’s, but has the advantage of tying in multiple locations remotely. Media at the home base can also be searched and retrieved in formats that work for other NLEs, including Media Composer and Final Cut.

An exciting newcomer is Aframe. They are a British company founded by the former owner of Unit, one of Europe’s largest professional post facilities built around FCP and Xsan. Aframe is geared toward the needs of shows and production companies more so than broadcast infrastructures. The concept uses a “private cloud” (i.e. not Amazon servers) with an interface and user controls that feel a lot like a mash-up between Vimeo and Xprove. Full-res media can be uploaded in several ways, including via regional service centers located around the US. There’s full metadata support and the option to use Aframe’s contracted logging vendor if you don’t want to create metadata yourself. Editors cut with proxy media and then the full-res files are conformed via EDLs and downloaded when ready. Pricing plans are an attractive per-seat, monthly structure that start with a free, single seat account.

Apple doesn’t officially do trade shows anymore, but they were at NAB, flying under the radar. In a series of small, private meetings with professional customers and media, Apple was making their case for Final Cut Pro X. Rome wasn’t built in a day and the same can be said for re-building a dominant editing application from the ground up. Rather than simply put in the same features as the competition, Apple opted to take a fresh look, which has created much “Sturm und Drang” in the industry. Nevertheless, Apple was interested in pointing out the adoption by professional users and the fact that it has held an above-50% market share with new NLE seats sold to professional users during 2011. You can parse those numbers any way you like, but they point to two facts: a) people aren’t changing systems as quickly as many vocal forum posts imply, and b) many users are buying FCP X and seeing if and how it might work in some or all of their operation.

FCP X has already enjoyed several quick updates in less than a year, thanks to the App Store mechanism. There’s a robust third-party developer community building around X. In fact, walking around the NAB floor, I saw at least a dozen booths that displayed FCP X in some fashion to demonstrate their own product or use it as an example of interoperability between their product and X. Off the top of my head, I saw or heard about FCP X at Autodesk, Quantel, AJA, Blackmagic Design, Matrox, MOTU, Tools On Air, Dashwood and SONY – not to mention others, like resellers and storage vendors. SONY has announced the new XDCAM plug-ins for X and compatibility of its XDCAM Browser software. Dashwood Cinema Solutions was showing the only stereo3D package that’s ready for Final Cut Pro X. And of course, we can’t live without EDLs, so developer XMiL Workflow Tools (who wasn’t exhibiting at NAB) has also announced EDL-X, an FCP XML-to-EDL translator, expected to be in the App Store by May.

On the Apple front, the biggest news was another peek behind the curtain at some of the features to be included in the next FCP X update, coming later this year. These include multichannel audio editing tools, dual viewers, MXF plug-in support and RED camera support. There are no details beyond these bullet points, but you can expect a lot of other minor enhancements as part of this update.

“Dual viewers” may be thought of as “source/record” monitors – added by Apple, thanks to user feedback. Apple was careful to point out to me that they intended to do a bit more than just that with the concept. “RED support” also wasn’t defined, but my guess would be that it’s based on the current Import From Camera routine. I would imagine something like FCP 7’s native support of RED media through Log and Transfer, except better options for bringing in camera raw color metadata. Of course, that’s purely speculation on my part.

Now, sit back and we’ll run through some “first looks”.

_________________________________________________

Adobe Creative Suite 6 – A First Look

Adobe charged into 2012 with a tailwind of two solid years of growth on the Mac platform and heavy customer anticipation for what it plans to offer in Creative Suite 6. The CS5 and CS5.5 releases were each strong in their own right and introduced such technologies as the Mercury Playback Engine for better real-time performance, but in 2011 Adobe clearly ramped up its focus on video professionals. They acquired the IRIDAS SpeedGrade technology and brought the developers of Automatic Duck on board. There have been a few sneak peeks on the web including a popular video posted by Conan O’Brien’s Team Coco editors, but the wait for CS6 ended with this year’s NAB.

Production Premium

Adobe’s video content creation tools may be purchased individually, through a Creative Cloud subscription or as part of the Master Collection and Production Premium bundles. Most editors will be interested in CS6 Production Premium, which includes Prelude, Premiere Pro, After Effects, Photoshop Extended, SpeedGrade, Audition, Encore, Adobe Media Encoder, Illustrator, Bridge and Flash Professional. Each of these applications has received an impressive list of new features and it would be impossible to touch on every one here, so look for a more in-depth review at a future date. I’ll quickly cover some of the highlights.

Prelude

As part of CS6, Adobe is introducing Prelude, a brand new product designed for footage acquisition, ingest/transcode, organization, review and metadata tagging. It’s intended to be used by production assistants or producers as an application to prepare the footage for an editor. Both Prelude and Premiere Pro now feature “hover scrubbing” – the ability to scan through footage quickly by moving the mouse over the clip thumbnail, which can be expanded to the size of a mini-viewer. Clips can be marked, metadata added and rough cuts assembled, which in turn are sent to Premiere Pro. There is a dynamic reading of metadata between Prelude and Premiere Pro. Clip metadata changes made in one application are updated in the other, since the information is embedded into the clip itself. Although Prelude is included with the software collection for single users, it can be separately purchased in volume by enterprise customers, such as broadcasters and news organizations.

Premiere Pro

A lot of effort was put into the redesign of Premiere Pro. The user interface has been streamlined and commands and icons were adjusted to be more consistent with both Apple Final Cut Pro (“legacy” versions) and Avid Media Composer. Adobe took input from users who have come from both backgrounds and wanted to alter the UI in a way that was reasonably familiar. The new CS6 keyboard shortcuts borrow from each, but there are also full FCP and full MC preset options. Workspaces have been redesigned, but an editor can still call up CS5.5 workspace layouts with existing projects to ease the transition. A dockable timecode window has been added and Adobe has integrated a dynamic trimming function similar to that of Media Composer.

The changes are definitely more than cosmetic, though, as Adobe has set out to design a UI that never forces you to stop. This means you can now do live updates to effects and even open other applications without the timeline playback ever stopping. They added Mercury Playback acceleration support for some OpenCL cards and there’s a new Mercury Transmit feature for better third-party hardware i/o support across all of the video applications. Many new tools have been added, including a new multi-camera editor with an unlimited number of camera angles. Some more features have been brought over from After Effects, including adjustment layers and the Warp Stabilizer that was introduced with CS5.5. This year they’ve broken out the rolling shutter repair function as a separate tool. Use it for quick HDSLR camera correction without the need to engage the full Warp Stabilizer.

SpeedGrade

By adding a highly-regarded and established color grading tool, Adobe has strengthened the position of Production Premium as the primary application suite for video professionals. The current level of integration is a starting point, given the short development time that was possible since last September. Expect this to expand in future versions.

SpeedGrade works as both a standalone grading application, as well as a companion to the other applications. There’s a new “Send to SpeedGrade” timeline export operation in Premiere Pro. When you go into SpeedGrade this way, an intermediate set of uncompressed DPX files is first rendered as the source media to be used by SpeedGrade. Both applications support a wide range of native formats, but they aren’t all the same, so this approach offers the fewest issues for now, when working with mixed formats in a Premiere sequence. In addition, SpeedGrade can also import EDLs and relink media, which offers a second path from Premiere Pro into SpeedGrade. Finished, rendered media returns to Premiere as a single, flattened file with baked-in corrections.

As a color correction tool, SpeedGrade presents an easy workflow – enabling you to stack layers of grading onto a single clip, as well as across the entire timeline. There are dozens of included LUTs and looks presets, which may be used for creative grading or to correct various camera profiles. An added bonus is that both After Effects and Photoshop now support SpeedGrade Look files.

Audition

With CS5.5, Adobe traded out Soundbooth for a cross-platform version of Audition, Adobe’s full-featured DAW software. In CS6, that integration has been greatly improved. Audition now sports an interface more consistent with After Effects and Premiere, newly added Mackie and Avid Eucon control surface protocol support and mixing automation. The biggest feature demoed in the sneak peeks has been the new Automatic Speech Alignment tool. You can take overdubbed ADR lines and automatically align them for near-perfect sync to replace the on-camera dialogue. All of this is thanks to the technology behind Audition’s new real-time, high-quality audio stretching engine.

Audition also gains a number of functions specific to audio professionals. Audio CD mastering has been added back into the program and there’s a new pitch control spectral display. This can be used to alter the pitch of a singer, as well as a new way to create custom sound design. Buying Production Premium gives you access to 20GB of downloadable audio media (sound effects and music scores) formerly available only via the online link to Adobe’s Resource Central.

After Effects

Needless to say, After Effects is the Swiss Army knife of video post. From motion graphics to visual effects to simple format conversion, there’s very little that After Effects isn’t called upon to do. Naturally there’s plenty new in CS6. The buzz feature is a new 3D camera tracker, which uses a point cloud to tightly track an object that exhibits size, position, rotation and perspective changes. These are often very hard for traditional 2D point trackers to follow. For example, the hood of a car moving towards the camera at an angle.

Now for the first time in After Effects, you can build extruded 3D text and vector shapes using built-in tools. This includes surface material options and a full 3D ray tracer. In general, performance has been greatly improved through a better hand-off between RAM cache and disk cache. As with Premiere Pro, rolling shutter repair is now also available as a separate tool in After Effects.

Photoshop

Photoshop has probably had the most online sneak peeks of any of the new Adobe apps. It has been available as a public beta since mid-March. Photoshop, too, sports a new interface, but that’s probably the least noteworthy of the new features. These include impressive new content-aware fill functions, 3D LUT support (including SpeedGrade Look files) and better auto-correction. There’s better use of GPU horsepower, which means common tasks like Liquify are accelerated.

Photoshop has offered the ability to work with video as a single file for several versions. With CS6 it gains expanded video editing capabilities, enabled by a new layer structure akin to that used in After Effects. Although Premiere Pro or After Effects users probably won’t do much with it, Adobe is quite cognizant that many of its photography customers are increasingly asked to deal with video – thanks, of course, to the HD-video-enabled DSLRs, like the Canon EOS series. By integrating video editing and layering tools into Photoshop, it allows these customers to deliver a basic video project while working inside an application environment where they are the most comfortable. Video editors gain the benefit of having it there if they want to use it. Some may, in fact, develop their own innovative techniques once they investigate what it can do for them.

Adobe Creative Suite 6 offers a wealth of new features, expanded technologies and a set of brand new tools. It’s one of Adobe’s largest releases ever and promises to attract new interest from video professionals.

Click here for updated price and availability information.

Click here for videos that explain CS6 features.

Plus, a nice set of tutorial videos here.

_________________________________________________

Autodesk Smoke 2013 – A First Look

Thanks to the common Unix underpinnings of Linux and Mac OS X, Autodesk Media & Entertainment was able to bring its advanced Smoke editor to the Mac platform in December of 2009 as an unbundled software product. The $15K price tag was a huge drop from that of their standard, turnkey Linux Smoke workstations, but still hefty for the casual user. Nevertheless, thanks to an aggressive trial and academic policy, Autodesk was very successful in getting plenty of potential new users to download and test the product. In the time since the launch on the Mac, Autodesk has had a chance to learn what Mac-oriented editors want and adjust to the feedback from these early adopters.

Taking that user input to heart, Autodesk introduced the new Smoke 2013 at NAB. This is an improved version that is much more “Mac-like”. Best of all it’s now available for $3,495 plus an optional annual subscription fee for support and software updates. Although this is an even bigger price reduction, it places Smoke in line with Autodesk’s animation product family (Maya, Softimage, etc.) and in keeping with what most Mac users feel is reasonable for a premium post production tool. Smoke 2013 will ship in fall, but the new price took effect at NAB. Any new and existing customers on subscription will receive the update as part of their support. Tutorials and trial versions of Smoke 2013 are expected to be available over the summer.

More Mac-like

Autodesk was successful in attracting a lot of trial downloads, but realized that the biggest hurdle was the steep learning curve even expert Final Cut and Media Composer editors encountered. Previous Mac versions of Smoke featured a user interface and commands inherited from the Linux versions of Smoke and Flame, which were completely different from any Mac editing application. Just getting media into the system baffled many. With Smoke 2013, Autodesk has specifically targeted editors who come from an Apple Final Cut Pro and/or Avid Media Composer background. The interface uses a standard, track-based editing workflow to maintain the NLE environment that editors are comfortable with. There’s a familiar Mac OS X menu bar at the top and the application has adopted most of the common OS commands. In short, it’s been redesigned – but not “re-imagined” – to act like a Mac application is supposed to.

Smoke now features a tab structure to quickly switch between modes, like media access, editing, etc. The biggest new tool is the Media Hub. This is an intelligent media browser that lets you easily access any compatible media on any of your hard drives. It recognizes native media formats, as opposed to simply browsing all files in the Finder. Media support includes RED, ARRIRAW, ProRes, DNxHD, H.264, XDCAM, image sequences, LUTs and more. Media Hub is the place to locate and import files, including the ability to drag-and-drop media directly into your Smoke library, as well as from the Finder into Smoke. Settings for formats like RED (debayer, color, etc.) are maintained even when you drag from the Finder. Since Smoke is designed as a finishing tool, you can also import AAF, XML (FCP 7, FCP X, Premiere Pro) and EDL lists generated by offline editors.

ConnectFX

Beyond familiar commands and the Media Hub, the editing interface has been redesigned to be more visually appealing and for the easier application of effects. ConnectFX is a method to quickly apply and modify effects right in the timeline. Tabbed buttons let you change between modes, such as resizing, time warps, Sparks filter effects and color correction. When you choose to edit effects parameters, the interface opens a ribbon above the timeline where you can alter numerical settings or enter a more advanced effects editing interface. If you need more sophistication, then move to nodes using ConnectFX. Smoke is the only editor with a node-based compositor that works in 3D space. You get many of the tools that have been the hallmark of the premium Autodesk system products, such as effects process nodes, the Colour Warper, relighting, 3D tracking and more.

Smoke 2013 is positioned as an integrated editing and effects tool. According to Autodesk’s research, editors who use a mixture of several different tools to get the job done – from editing to effects to grading – often use up to seven different software applications. Smoke is intended as a “super editor” that places all of these tools and tasks into a single, comprehensive application with a cohesive interface. The design is intended to maximize the workflow as an editor moves from editing into finishing.

Lighter system requirements

Apple is changing the technology landscape with more powerful personal workstations, like the iMac, which doesn’t fit the traditional tower design. Thunderbolt adds advanced, high-bandwidth connectivity for i/o and storage in a single cable connection.

To take advantage of these changes, Smoke 2013 has been designed to run on this new breed of system. For example, it will work on a newer MacBook Pro or iMac, connected to fast Thunderbolt storage, like a Promise Pegasus RAID array. A key change has been in the render format used by Smoke. Up until now, intermediate renders have been to uncompressed RGB 4:4:4 DPX image sequence files. While this maintains maximum quality, it quickly eats storage space and is taxing on less powerful machines. Rendering to an uncompressed RGB format is generally overkill if your camera originals started as some highly-compressed format like XDCAM or H.264. Now Smoke 2013 offers the option to render to compressed formats, such as one of the Apple ProRes codecs.

Another welcome change is the ability to use some of the newer Thunderbolt i/o devices. Smoke on a Mac Pro tower has been able to work with AJA KONA 3G cards, but with Smoke 2013, AJA’s new Io XT has been added to the mix. The Io XT is an external unit with most of the features and power of the KONA card. It connects in the Thunderbolt chain with storage and/or a secondary display and is the only current Thunderbolt i/o device with a loop-through connection. Thus it isn’t limited to being at the end of the chain.

While at NAB, I took a few minutes to see how comfortable this new version felt. I’ve been testing Smoke 2012 at home and, quite frankly, had some of the same issues other FCP and Media Composer editors have had. It’s a very deep program that requires a lot of relearning before you feel comfortable. When I sat down in front of Smoke 2013 in the NAB pod, I was able to quickly work through some effects without any assistance, based primarily on what seemed logical to me in a “standard” NLE approach. I’m not going to kid you, though. Advanced effects still require a learning curve, but editors do plenty of in-timeline effects that never require extensive compositing. Comparing this type of work in Smoke 2013 versus 2012, I’d say the learning requirements have been cut by 60% to 75% with this new version. That’s how much the redesign improves things for beginners.

You can start from scratch editing a project strictly on Smoke 2013, but in case you are wondering, this really shouldn’t be viewed as a complete replacement for FCP 7. Instead, it’s the advanced product used to add the polish. As such, it becomes an ideal companion for a fast application used for creative cutting, like Final Cut Pro, Premiere Pro or Media Composer.

Apple’s launch of Final Cut Pro X was a disruptive event that challenged conventional thinking. Autodesk Media & Entertainment’s launch of Smoke 2013 might not cause the same sort of uproar, but it brings a world-class finishing application to the Mac at a price that is attractive to many individual users and small boutiques.

Click here for videos and tutorials about Smoke.

Click here for Autodesk’s NAB videos.

 _________________________________________________

Thunderbolt I/O Devices – A First Look

Over the years media pros have seen data protocols come and go. Some, like Fibre Channel, are still current fixtures, while others, such as SCSI, have bitten the dust. The most exciting new technology is Thunderbolt, which is a merger of PCI Express and DisplayPort technologies co-developed by Intel and Apple. Started under the code name of Light Peak, the current implementation of Thunderbolt is a bi-directional protocol that passes power, video display signals and data transfer at up to 10Gbps of throughput in both directions. According to Apple, that’s up to twelve times faster than FireWire 800. It’s also faster than Fibre Channel, which tends to be the protocol of choice in larger facilities. Peripherals can access ten watts of power through Thunderbolt, too. Like SCSI and FireWire, Thunderbolt devices can be daisy-chained with special cables. Up to six devices can be connected in series, but certain devices have to be at the end of the chain. This is typically true when a PCIe-to-Thunderbolt adapter is used.
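Apple’s “twelve times faster” figure is easy to sanity-check with simple arithmetic. Here’s a quick sketch using nominal per-direction link rates; the 8Gbps Fibre Channel figure is my assumption for the comparison, and none of these numbers reflect real-world throughput:

```python
# Nominal per-direction link rates in gigabits per second.
# These are marketing numbers, not measured throughput.
interfaces_gbps = {
    "FireWire 800": 0.8,
    "Fibre Channel (8G assumed)": 8.0,
    "Thunderbolt": 10.0,
}

fw800 = interfaces_gbps["FireWire 800"]
for name, rate in interfaces_gbps.items():
    print(f"{name}: {rate} Gbps ({rate / fw800:.1f}x FireWire 800)")
```

That works out to 12.5x FireWire 800, which squares with Apple’s “up to twelve times” claim.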

A single signal path can connect the computer to external storage, displays and capture devices, which provides editors with a powerful data protocol in a very small footprint. Thunderbolt technology is currently available in Apple iMac, MacBook Air, MacBook Pro and Mini computers and is starting to become available on some Windows systems. It is not currently available as a built-in technology on Mac Pros, but you can bet that if there’s a replacement tower, Thunderbolt will be a key part of the engineering design.

By its nature, Thunderbolt dictates that peripheral devices are external units. All of the processing horsepower of a PCIe card, such as a KONA or Decklink, is built into the circuitry of an external device, which is connected via the Thunderbolt cable to the host computer. I tested three Thunderbolt capture/output devices for this review: AJA Io XT, Blackmagic Design UltraStudio 3D and Matrox MXO2 LE MAX. AJA added the monitoring-only T-Tap at NAB to join the Io XT in AJA’s Thunderbolt line-up. Blackmagic Design has developed four Thunderbolt units at different price tiers. For smaller installations or mobile environments, the UltraStudio Express, Intensity Shuttle Thunderbolt or Intensity Extreme are viable solutions.

Matrox has taken a different approach by using an adapter. Any of its four MXO2 products – the standard MXO2, Mini, LE or Rack – can be used with either Thunderbolt or non-Thunderbolt workstations. Simply purchase the unit with a Thunderbolt adapter, PCIe card and/or ExpressCard/34 laptop card. The MXO2 product is the same and only the connection method differs, for maximum flexibility. The fourth company making Thunderbolt capture devices is MOTU. Their HDX-SDI was not available in time for this review, but I did have a chance to play with one briefly on the NAB show floor.

Differentiating features

All three of the tested units include up/down/cross-conversion between SD and HD formats and perform in the same fashion as their non-Thunderbolt siblings. Each has pros and cons that will appeal to various users with differing needs. For instance, the AJA Io XT is the only device with a Thunderbolt pass-through connector. The other units have to be placed at the end of a Thunderbolt path. They all support SDI and HDMI capture and output, as well as RS-422 VTR control. Both the AJA and Blackmagic units support dual-link SDI for RGB 4:4:4 image capture and output. The Matrox and AJA units use a power supply connected via a four-pin XLR, which makes it possible to operate them in the field on battery power.

The need to work with legacy analog formats or monitoring could determine your choice. This capability represents the biggest practical difference among the three. Both the MXO2 LE and UltraStudio 3D support analog capture and output, while there’s only analog output from the Io XT. The MXO2 LE uses standard BNC and XLR analog connectors (two audio channels on the LE, but more with the MXO2 or Rack), but the other two require a cable harness with a myriad of small connectors. That harness is included with the Blackmagic unit, but with AJA, you need to purchase an optional DB-25 Tascam-style cable snake for up to eight channels of balanced analog audio.

One unique benefit of the Matrox products is the optional MAX chip for accelerated H.264 processing. In my case, I tested the MXO2 LE MAX, which includes the embedded chip. When this unit is connected to a Mac computer, Apple Compressor, Adobe Media Encoder, Avid Media Composer, Telestream Episode and QuickTime perform hardware-accelerated encodes of H.264 files using the Matrox presets.

Fitting into your layout

I ran the Io XT, UltraStudio 3D and MXO2 LE through their paces connected to a friend’s new, top-of-the-line Apple iMac. All three deliver uncompressed SD or HD video over the Thunderbolt cable to the workstation. Processing to convert this signal to an encoded ProRes or DNxHD format will depend on the CPU. In short, recording a codec like ProRes4444 will require a fast machine and drives. I haven’t specifically tested it, but I suspect this task would challenge a Mac Mini using only internal drives!
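The data-rate arithmetic explains why. A rough sketch for 1080p29.97 capture; the ProRes figures are Apple’s approximate published target bitrates, and all of these numbers should be treated as ballpark:

```python
# Rough data-rate arithmetic for 1080p29.97 capture.
width, height, fps = 1920, 1080, 29.97

# Uncompressed 10-bit 4:2:2 video is about 20 bits per pixel.
uncompressed_mbps = width * height * 20 * fps / 1e6
print(f"Uncompressed 10-bit 4:2:2: ~{uncompressed_mbps:.0f} Mbps "
      f"(~{uncompressed_mbps / 8:.0f} MB/s)")

# Approximate published target bitrates for the ProRes family.
prores_mbps = {"ProRes LT": 102, "ProRes 422": 147,
               "ProRes HQ": 220, "ProRes 4444": 330}
for codec, rate in prores_mbps.items():
    print(f"{codec}: ~{rate} Mbps (~{rate / 8:.0f} MB/s)")
```

The incoming uncompressed signal runs roughly 1.24 Gbps (around 155 MB/s) before any encoding happens, which is why the CPU and the drives both matter.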

The test-bed iMac workstation was configured with a Promise Pegasus 6-drive RAID array. The iMac includes two Thunderbolt ports and the Pegasus array offers a pass-through, so I was able to test these units both directly connected to the iMac, as well as daisy-chained onto the Promise array. This system would still allow the connection of more Thunderbolt storage and/or a secondary computer monitor, such as Apple’s 27″ Thunderbolt Display. Most peripheral manufacturers do not automatically supply cables, so plan on purchasing extra Thunderbolt cables ($49 for a six-foot cable from Apple).

These units work with most of the current crop of Mac OS X-based NLEs; however, you may need to choose a specific driver or software set to match the NLE you plan to operate. For instance, AJA requires a separate driver to be installed for Premiere Pro or Media Composer, provided for maximum functionality with those applications. The same is true for Matrox and Media Composer. I ran tests with Final Cut Pro 7, X and Premiere Pro CS 5.5, but not Media Composer 6, although these units do work fine with that application. Only the Blackmagic Design products, like the UltraStudio 3D, will work with DaVinci Resolve. In addition to drivers, the software installation includes application presets and utility applications. Each build includes a capture/output application, which lets you ingest and lay off files through the device, independent of any editing application.

Broadcast monitoring and FCP X

The biggest wild card right now is performance with Final Cut Pro X. Broadcast monitoring was a beta feature added in the 10.0.3 update. With the release of 10.0.4 and compatible drivers, most performance issues have stabilized and this is no longer considered beta. Separate FCP X-specific drivers may need to be installed depending on the device.

If you intend to work mainly with Final Cut Pro “legacy” or Premiere Pro, then all of these units work well. On the other hand, if you’ve taken the plunge for FCP X, I would recommend the Io XT. I never got the MXO2 LE MAX to work with FCP X (10.0.3) during the testing period and initially the UltraStudio 3D wouldn’t work either, until the later version 9.2 drivers that Blackmagic posted in mid-March. Subsequent re-testing with 10.0.4, along with checking these units at NAB, indicates that both the Blackmagic and Matrox units work well enough. There are still some issues when you play at fast-forward speeds, where the viewer and external monitor don’t stay in sync with each other. I also checked the MOTU HDX-SDI device with FCP X in their NAB booth. Performance seemed similar to that of the Matrox and Blackmagic Design units.

The Io XT was very fluid and tracked FCP X quite well as I skimmed through footage. FCP X does not permit control over playback settings, so you have to set that in the control panel application (AJA) or system preference pane (Blackmagic Design and Matrox) and relaunch FCP X after any change. The broadcast monitoring feature in FCP X does not add any new VTR control or ingest capability and it’s unlikely that it ever will. To ingest videotape footage for FCP X using Io XT or UltraStudio, you will have to use the separate installed capture utility (VTR Xchange or Media Express, respectively) and then import those files from the hard drive into FCP X. Going the other direction requires that you export a self-contained movie file and use the same utility to record that file onto tape. The Matrox FCP X drivers and software currently do not include this feature.

Finally, the image to the Panasonic professional monitor I was using in this bay matched the FCP X viewer image on the iMac screen using either the Io XT or UltraStudio 3D. That attests to Apple’s accuracy claims for its ColorSync technology.

Performance with the mainstream NLEs

Ironically, the best overall performance came from the end-of-life Final Cut Pro 7. In fact, all three units were incredibly responsive on this iMac/Promise combo. For example, when you use a Mac Pro with any FireWire or PCIe-connected card or device, energetic scrubbing or playing files at fast-forward speeds will quickly push the screen display and the external output out of sync with each other. When I performed the same functions on the iMac, the on-screen and external output stayed in sync with each of these three units. No amount of violent scrubbing caused it to lose sync. The faster data throughput of Thunderbolt technology enabled a more pleasant editing experience.

I ran these tests using both a direct run from the iMac’s second Thunderbolt port, as well as a loop from the back of the Promise array. Neither connection method seemed to make much difference in performance with ProRes and AVCHD footage. I believe that you get the most data throughput when you are not daisy-chaining devices; however, I doubt you’ll see much difference under standard editing operation.

The best experience with Premiere Pro was using the Matrox MXO2 LE MAX, although the experience with the AJA and Blackmagic Design devices was fine, too. This stands to reason, as Matrox has historically had a strong track record developing for Adobe systems with custom cards, such as the Axio board set. Matrox also installs a high-quality MPEG-2 I-frame codec for use as an intermediate preview codec. This is an alternative to the QuickTime codecs installed on the system.

Portions of this entry originally written for Digital Video Magazine.

©2012 Oliver Peters

Why 4K

Ever since the launch of RED Digital Cinema, 4K imagery has become an industry buzzword. The concept stems from 35mm film post, where a digital scan of a film frame at 4K is considered full resolution and a 2K scan half resolution. In the proper use of the term, 4K only refers to frame dimensions, although it is frequently and incorrectly used as an expression of visual resolution or perceived sharpness. There is no single 4K size, since it varies with how it is used and the related aspect ratio. For example, full aperture film 4K is 4096 x 3112 pixels, while academy aperture 4K is 3656 x 2664. The RED One and EPIC use several different frame sizes. Most displays use the Quad HD standard of 3840 x 2160 (a multiple of 1920 x 1080), while the Digital Cinema Projection standard is 4096 x 2160 for 4K and 2048 x 1080 for 2K. The DCP standard is a “container” specification, which means the 2.40:1 or 1.85:1 film aspects are fit within these dimensions and the difference is padded with black pixels.
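To illustrate the container idea, here’s a small sketch that fits the two common film aspects into the 4K DCP container and computes the resulting black padding. (The official DCI active areas are 4096 x 1716 for scope and 3996 x 2160 for flat; this simple fit lands within a few lines of those figures.)

```python
# The 4K DCP "container" is 4096 x 2160; film aspects are fit inside
# it and the leftover area is padded with black pixels.
def fit_in_container(aspect, cw=4096, ch=2160):
    """Return (active_w, active_h, pad_x, pad_y) for an image of the
    given aspect ratio fit inside the container."""
    w, h = cw, round(cw / aspect)
    if h > ch:                      # image too tall: pillarbox instead
        h, w = ch, round(ch * aspect)
    return w, h, (cw - w) // 2, (ch - h) // 2

for name, aspect in [("2.40:1 (scope)", 2.40), ("1.85:1 (flat)", 1.85)]:
    w, h, px, py = fit_in_container(aspect)
    print(f"{name}: active {w} x {h}, "
          f"padding {px} px left/right, {py} px top/bottom")
```

Scope images get letterboxed (black above and below), while flat images get a thin pillarbox on each side.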

Thanks to the latest interest in stereo 3D films, 4K-capable projection systems have been installed in many theaters. The same system that can display two full bandwidth 2K signals can also be used to project a single 4K image. Even YouTube offers some 4K content, so larger-than-HD production, post and distribution has quickly gone from the lab to reality. For now though, most distribution is still predominantly 1920 x 1080 HD or a slightly larger 2K film size.

Large sensors

The 4K discussion starts at sensor size. Camera manufacturers have adopted larger sensors to emulate the look of film for characteristics such as resolution, optics and dynamic range. Although different sensors may be of a similar physical dimension, they don’t all use the same number of pixels. A RED EPIC and a Canon 7D use similarly sized sensors, but the resulting pixels are quite different. Three measurements come into play: the actual dimensions, the maximum area of light-receiving pixels (photosites) and the actual output size of recorded frames. One manufacturer might use fewer, but larger photosites, while another might use more pixels of a smaller size that are more densely packed. There is a very loose correlation between actual pixel size, resolution and sensitivity. Larger pixels yield more stops and smaller pixels give you more resolution, but that’s not an absolute. RED has shown with EPIC that it is possible to have both.

The biggest visual attraction of large-sensor cameras appears to be the optical characteristics they offer – namely a shallower depth of field (DoF). Depth of field is a function of aperture, focal length and subject distance. Larger sensors don’t inherently create shallow depth of field and out-of-focus backgrounds. Because larger sensors require a different selection of lenses for equivalent focal lengths compared with standard 2/3-inch video cameras, a shallower depth of field is easier to achieve and thus makes these cameras the preferred creative tool. Even if you work with a camera today that doesn’t provide a 4K output, you are still gaining the benefits of this engineering. If your target format is HD, you will get similar results – as they relate to these optical characteristics – regardless of whether you use a RED, an ARRI ALEXA or an HDSLR.

Camera choices

Quite a few large-sensor cameras have entered the market in the past few years. Typically these use a so-called Super 35MM-sized sensor. This means it’s of a dimension comparable to a frame of 3-perf 35MM motion picture film. Some examples are the RED One, RED EPIC, ARRI ALEXA, Sony F65, Sony F35, Sony F3 and Canon 7D among others. That list has just grown to include the brand new Canon EOS C300 and the RED SCARLET-X. Plus, there are other variations, such as the Canon EOS 5D Mark II and EOS 1D X (even bigger sensors) and the Panasonic AF100 (Micro Four Thirds format). Most of these deliver an output of 1920 x 1080, regardless of the sensor. RED, of course, sports up to 5K frame sizes and the ALEXA can also generate a 2880 x 1620 output, when ARRIRAW is used.

This year was the first time that the industry at large has started to take 4K seriously, with new 4K cameras and post solutions. Sony introduced the F65, which incorporates a 20-megapixel 8K sensor. Like other CMOS sensors, the F65 uses a Bayer light filtering pattern, but unlike the other cameras, Sony has deployed more green photosites – one for each pixel in the 4K image. Today, this 8K sensor can yield 4K, 2K and HD images. The F65 will be Sony’s successor to the F35 and become a sought-after tool for TV series and feature film work, challenging RED and ARRI.

November 3rd became a day for competing press events when Canon and RED Digital Cinema both launched their newest offerings. Canon introduced the Cinema EOS line of cameras designed for professional, cinematic work. The first products seem to be straight out of the lineage that stems from Canon’s original XL1 or maybe even the Scoopic 16MM film camera. The launch was complete with a short Blade Runner-esque demo film produced by Stargate Studios, along with a new film called Möbius, shot by Vincent Laforet (the photographer who launched the 5D revolution with his short film Reverie).

The Canon EOS C300 and EOS C300 PL use an 8.3MP CMOS Super 35MM-sized sensor (3840 x 2160 pixels). For now, these only record at 1920 x 1080 (or 1280 x 720 overcranked) using the Canon XF codec. So, while the sensor is a 4K sensor, the resulting images are standard HD. The difference between this and the way Canon’s HDSLRs record is a more advanced downsampling technology, which delivers the full pixel information from the sensor to the recorded frame without line-skipping and excessive aliasing.

RED launched SCARLET-X to a fan base that has been chomping at the bit for years waiting for some version of this product. It’s far from the original concept of SCARLET as a high-end “soccer mom” camera (fixed lens, 2/3” sensor, 3K resolution with a $3,000 price tag). In fact, SCARLET-X is, for all intents and purposes, an “EPIC Lite”. It has a higher price than the original SCARLET concept, but also vastly superior specs and capabilities. Unlike the Canon release, it delivers 4K recorded motion images (plus 5K stills) and features some of the developing EPIC features, like HDRx (high dynamic range imagery).

If you think that 4K is only a high-end game, take a look at JVC. This year JVC has toured a number of prototype 4K cameras based on a proprietary new LSI chip technology that can record a single 3840 x 2160 image or two 1920 x 1080 streams for the left and right eye views of a stereo 3D recording. The GY-HMZ1U is a derivative of this technology and uses dual 3.32MP CMOS sensors for stereo 3D and 2D recordings.

Post at 4K

Naturally the “heavy iron” systems from Quantel and Autodesk have been capable of post at 4K sizes for some time; however, 4K is now within the grasp of most desktop editors. Grass Valley EDIUS, Adobe Premiere Pro and Apple Final Cut Pro X all support editing with 4K media and 4K timelines. Premiere Pro even includes native camera raw support for RED’s .r3d format at up to EPIC’s 5K frames. Avid just released its 6.0 version (Media Composer 6, Symphony 6 and NewsCutter 10), which includes native support for RED One and EPIC raw media. For now, edited sequences are still limited to 1920 x 1080 as a maximum size. For as little as $299 for FCP X and RED’s free REDCINE-X (or REDCINE-X PRO) media management and transcoding tool, you, too, can be editing with relative ease on DCP-compliant 4K timelines.

Software is easy, but what about hardware? Both AJA and Blackmagic Design have announced 4K solutions using the KONA 3G or Decklink 4K cards. Each uses four HD-SDI connections to feed four quadrants of a 4K display or projector at up to 4096 x 2160 sizes. At NAB, AJA previewed for the press its upcoming 5K technology, code-named “Riker”. This is a multi-format I/O system in development for SD up to 5K sizes, complete with a high-quality, built-in hardware scaler. According to AJA, it will be capable of handling high-frame-rate 2K stereo 3D images at up to 60Hz per eye and 4K stereo 3D at up to 24/30Hz per eye.

Even if you don’t own such a display, 27″ and 30″ computer monitors, such as an Apple Cinema Display, feature native display resolutions of up to 2560 x 1600 pixels. Sony and Christie both manufacture a number of 4K projection and display solutions. In keeping with its plans to round out a complete 4K ecosystem, RED continues in the development of REDRAY PRO, a 4K player designed specifically for RED media.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

Easy Canon 5D post – Round III

The interest in HDSLR production and post shows no sign of waning. Although some of this information will seem redundant with earlier articles (here and here), I decided it was a good time to set down a working recipe of how I like to deal with these files. To some extent this is a “refresh” of the Round II article, given the things I’ve learned since then. The Canon cameras are the dominant choice, but that’s for today. Nikon is coming on strong with its D7000 and Panasonic has made a serious entry into the large-format-sensor video camera market with its Micro Four Thirds AG-AF100. In six months, the post workflows might once again change.

To date, I have edited about 40 spots and short-form videos that were all shot using the Canon EOS 5D Mark II. Many of the early post issues, like the need to convert frame rates, are now behind us. This means fewer variables to consider. Here is a step-by-step strategy for working with HDSLR footage, specifically from Canon 5D/7D/1D HDLSR cameras.

Conversion

Before doing anything with the camera files, it is IMPERATIVE that you clone the camera cards. This is your “negative” and you ALWAYS want to preserve it in its original and UNALTERED form. One application to consider for this purpose is Videotoolshed’s Offloader.

Once that’s out of the way, the first thing I do with files from a Canon 5D or 7D is convert them to the Apple ProRes codec. Yes, various NLEs can natively work with the camera’s H.264 movie files, but I still find this native performance to be sluggish. I prefer to organize these files outside of the NLE and get them into a codec that’s easy to deal with using just about any editing or compositing application. Generally, I will use ProResLT; however, if there is a real quality concern, because the project may go through heavier post, then use standard ProRes or ProResHQ. Avid editors may choose to use an Avid DNxHD codec instead.

I have tried the various encoders, like Compressor or Grinder, but in the end have come back to MPEG Streamclip. I haven’t tried 5DtoRGB yet, because it is supposed to be a very slow conversion and most TV projects don’t warrant the added quality it may offer. I have also had unreliable results using the FCP Log and Transfer EOS plug-in. So, in my experience, MPEG Streamclip has not only been the fastest encoder, but will easily gobble a large batch without crashing and delivers equal quality to most other methods. 32GB CF cards will hold about 90-96 minutes of Canon video, so a shoot that generates 4-8 cards in a day means quite a lot of file conversion and you need to allow for that.
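Incidentally, that 32GB-per-90-odd-minutes figure is a quick way to back into the camera’s average recording bitrate. Rough arithmetic only, assuming decimal gigabytes:

```python
# Implied average bitrate from card capacity and run time.
card_bits = 32 * 8e9          # 32 GB (decimal) expressed in bits

for minutes in (90, 96):
    mbps = card_bits / (minutes * 60) / 1e6
    print(f"{minutes} min on 32 GB -> ~{mbps:.0f} Mbps average")
```

That lands in the mid-40 Mbps range, which gives you a feel for the H.264 data rate coming off these cameras and for how much material a day’s shoot will generate.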

MPEG Streamclip allows you to initiate four processes in the batch at one time, which means that on a 4, 8 or 12-core Mac Pro, your conversion will be approximately real-time. The same conversion runs about 1.5x real-time (slower) using the EOS plug-in. The real strength of MPEG Streamclip is that it doesn’t require FCP, so data conversion can start on location on an available laptop, if you are really in that sort of rush.

Timecode and reel numbers

The Canon camera movie files contain little or no metadata, such as a timecode track. There is a THM file (thumbnail file) that contains a date/time stamp. The EOS plug-in, as well as some applications, use this to derive timecode that more-or-less corresponds to TOD (time-of-day) code. In theory, this means that consecutive clips should not have any timecode overlap, but unfortunately I have not found that to be universally true. In my workflow, I generally never use these THM files. My converted ProRes files end up in separate folders that simply contain the movie files and nothing else.

It is important to settle on a naming strategy for the cards. This designator will become the reel ID number, which will make it easy to trace back to the origin of the footage months later. You may use any scheme you like, but I recommend a simple abbreviation for location/day/camera/card. For example, if you shoot for several days in San Francisco with two cameras, then Day 1, Camera 1, Card 1 would be SF01A001 (cameras are designated as A, B, C, etc.); Day 1, Cam 2, Card 1 would be SF01B001; Day 2, Cam 1, Card 3 would be SF02A003 and so on. These card ID numbers are consistent with standard EDL conventions for numbering videotape reels. Create a folder for each card’s contents using this scheme and make sure the converted ProRes files end up in the corresponding folders.
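The scheme is mechanical enough to script. Here’s a hypothetical helper (the function name and layout are mine, not from any particular tool) that builds these reel IDs:

```python
def reel_id(location: str, day: int, camera: int, card: int) -> str:
    """Build a reel ID like SF01A001 from location/day/camera/card."""
    cam_letter = chr(ord("A") + camera - 1)   # camera 1 -> A, 2 -> B, ...
    return f"{location}{day:02d}{cam_letter}{card:03d}"

print(reel_id("SF", 1, 1, 1))   # Day 1, Cam 1, Card 1 -> SF01A001
print(reel_id("SF", 1, 2, 1))   # Day 1, Cam 2, Card 1 -> SF01B001
print(reel_id("SF", 2, 1, 3))   # Day 2, Cam 1, Card 3 -> SF02A003
```

A script like this can also create the matching card folders, so the folder names and reel IDs never drift apart.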

I use QtChange to add timecode to the movie files. I will do this one folder at a time, using the folder name as the reel number. QtChange will embed the folder name (like SF01A001) into the file as the reel number when it writes the timecode track. I’m not a big fan of TOD code and, as I mentioned, the THM files have posed some problems. Instead, I’ll assign new timecode values in QtChange – typically a new hour digit to start each card. Card 1 starts at 1:00:00:00. Card 2 starts at 2:00:00:00 and so on. If Card 1 rolled over into the next hour digit, I might increment the next card’s starting value. So Card 2 might start at 2:30:00:00 or 3:00:00:00, just depending on the overall project. The objective is to avoid overlapping timecodes.
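The “fresh hour digit per card” idea can be sketched the same way. This hypothetical helper advances the start hour past however many hours each card actually spans, so clip timecodes never overlap across cards:

```python
def card_start_timecodes(card_minutes):
    """Given each card's recorded length in minutes, return a start
    timecode per card, skipping past any hour a card rolls into."""
    starts, hour = [], 1
    for minutes in card_minutes:
        starts.append(f"{hour:02d}:00:00:00")
        hour += max(1, -(-minutes // 60))   # ceiling division
    return starts

# Card 1 runs 75 min (rolls past its hour); cards 2 and 3 run 50 min.
print(card_start_timecodes([75, 50, 50]))
# -> ['01:00:00:00', '03:00:00:00', '04:00:00:00']
```

In practice you’d type these start values into QtChange per folder; the point is simply that each card gets its own non-overlapping hour range.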

Renaming files

I never change the names of the original H.264 camera files. Since I might need to get back to these files from the converted ProRes media at some point in the future, I will need to be able to match names, like MVI_9877.mov or MVI_1276.mov. This means that I won’t remove the movie file name from the ProRes files either, but it is quite helpful to append additional info to the file name. I use R-Name (a file renaming batch utility) to do this. For example, I might have a set of files that constitute daytime B-roll exterior shots in Boston. With R-Name, I’ll add “-Bos-Ext” after the file name and before the .mov extension.

In the case of interview clips, I’ll manually append a name, like “-JSmith-1” after the movie name. By using this strategy, I am able to maintain the camera’s naming convention for an easy reference back to the original files, while still having a file that’s easy to recognize simply by its name.

Double-system sound

The best approach for capturing high-quality audio on an HDSLR shoot is to bring in a sound mixer and employ film-style, double-system sound techniques. Professional audio recorders, like a Zaxcom DEVA, record broadcast WAVE files, which will sync up just fine and hold sync through the length of the recording. Since the 5D/7D/1D cameras now record properly at 23.98, 29.97 or 25fps, no audio pulldown or speed adjustment should be required for sync.

If you don’t have the budget for this level of audio production, then a Zoom H4n (not the H4) or a Tascam DR-100 are viable options. Record the files at 48kHz sampling in a 16-bit or 24-bit WAVE format. NO MP3s. NO 44.1kHz.

The Zaxcom will have embedded timecode, but the consumer recorders won’t. This doesn’t really matter, because you should ALWAYS use a slate with a clapstick to provide a sync reference. If you use a recorder like a Zaxcom, then you should also use a slate with an LED timecode display. This makes it easy to find the right sound file. In the case of the Zoom, you should write the audio track number on the slate, so that it’s easy to locate the correct audio file in the absence of timecode.

You can sync up the audio manually in your NLE by lining up the clap on the track with the picture – or you can use an application like Singular Software’s PluralEyes. I recommend tethering the output of the audio recorder to the camera whenever possible. This gives you a guide track, which is required by PluralEyes. Ideally, this should have properly matched impedances so it’s useable as a back-up. It may be impractical to tether the camera, in which case, make sure to record reference audio with a camera mic. This may pose more problems for PluralEyes, but it’s better than nothing.

Singular Software has recently introduced DualEyes as a standalone application for syncing double-system dailies.

Your edit system

As you can see, most of this work has been done before ever bringing the files into an NLE application. To date, all of my Canon projects have been cut in Final Cut and I continue to find it to be well-suited for these projects – thanks, in part, to this “pre-edit” file management. Once you’ve converted the files to ProRes or ProResLT, though, they can easily be brought into Premiere Pro CS5 or Media Composer 5. The added benefit is that the ProRes media will be considerably more responsive in all cases than the native H.264 camera files.

Although I would love to recommend editing directly via AMA in Media Composer 5, I’m not quite sure Avid is ready for that. In my own experience, Canon 5D/7D/1D files brought in using AMA as either H.264 or ProRes are displayed at the proper video levels. Unfortunately others have had a different experience, where their files come in with RGB values that exhibit level excursions into the superwhite and superblack regions. The issue I’ve personally encountered is that when I apply non-native Avid AVX effects, like Boris Continuum Complete, Illusion FX or Sapphire, the rendered files exhibit crushed shadow detail and a shifted gamma value. For some reason, the native Avid effects, like the original color effect, don’t cause the same problem. However, it hasn’t been consistent – that is, levels aren’t always crushed.

Recommendations for Avid Media Composer editors

If you are an Avid editor using Media Composer 5, then I have the following recommendations for when you are working with H.264 or ProRes files. If you import the file via AMA and the levels are correct (black = 16, peak white = 235), then transcode the selected cut to DNxHD media before adding any effects, and you should be fine. On the other hand, if AMA yields incorrect levels (black = 0, peak white = 255), then avoid AMA. Import “the old-fashioned way” and set the import option for the incoming file as having RGB levels. Avid has been made aware of these problems, so this behavior may be fixed in some future patch.
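The two level interpretations above differ by a fixed scale and offset. This is standard 8-bit video-level arithmetic (Rec. 709 legal range), not anything Avid-specific – a quick sketch of the mapping:

```python
def full_to_video(v):
    """Map an 8-bit full-swing RGB value (0-255) to video range,
    where black sits at 16 and peak white at 235 (Rec. 709 legal
    range). Standard level math; not Avid's actual import code."""
    return round(16 + v * 219 / 255)

print(full_to_video(0))    # black  -> 16
print(full_to_video(255))  # peak white -> 235
```

When an importer applies (or skips) this mapping inconsistently, you get exactly the superblack/superwhite excursions described above.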

There is a very good alternative for Avid Media Composer editors using MPEG Streamclip for conversion. Instead of converting the files to one of the ProRes codecs, convert them to Avid DNxHD (using 709 levels), which is also available under the QuickTime options. I have found that these files link well to AMA and, at least on my system, display correct video levels. If you opt to import these the “old” way (non-AMA), the files will come in as a “fast import”. In this process, the QuickTime files are copied and rewrapped as MXF media, without any additional transcoding time.

“Off-speed” files, like “overcranked” 60fps clips from a Canon 7D can be converted to a different frame rate (like 23.98, 25 or 29.97) using the “conform” function of Apple Cinema Tools. This would be done prior to transcoding with MPEG Streamclip.
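The conform trick doesn’t resample any frames – the clip is simply relabeled with a new frame rate, so each frame plays longer. The slow-motion factor and new running time follow from plain arithmetic (this sketch just illustrates the math; it’s not tied to any Cinema Tools API):

```python
def conform(num_frames, shot_fps, conform_fps):
    """Compute the slow-motion factor and playback duration (seconds)
    when a clip is conformed by relabeling its frame rate. No frames
    are added or dropped -- each one simply plays longer."""
    factor = shot_fps / conform_fps
    duration = num_frames / conform_fps
    return factor, duration

# A 10-second, 60fps clip (600 frames) conformed to 23.976fps:
factor, seconds = conform(600, 60, 23.976)
print(round(factor, 2))   # ~2.5x slow motion
print(round(seconds, 1))  # ~25.0 seconds of playback
```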

Avid doesn’t use the embedded reel number from a QuickTime file in its reel number column. If this is important for your workflow, then you may have to manually modify files after they have been imported into Media Composer or generate an ALE file (QtChange or MetaCheater) prior to import. That’s why a simple mnemonic, like SF01A001, is helpful.
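One possible reading of a mnemonic like SF01A001 is show + card + camera + clip counter; the exact scheme is up to you. A batch-naming sketch along those lines (the pattern and file names here are hypothetical):

```python
import os

def reel_names(files, show="SF", card=1, camera="A"):
    """Build reel-style names like SF01A001 for a card's worth of
    clips. The show/card/camera/counter pattern is just one way to
    construct such a mnemonic -- adapt it to your own scheme."""
    names = []
    for i, f in enumerate(sorted(files), start=1):
        ext = os.path.splitext(f)[1]
        names.append(f"{show}{card:02d}{camera}{i:03d}{ext}")
    return names

print(reel_names(["MVI_0012.MOV", "MVI_0013.MOV"]))
# ['SF01A001.MOV', 'SF01A002.MOV']
```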

Although this workflow may seem a bit convoluted to some, I love the freedom of being able to control my media in this way. I’m not locked into fixed metadata formats like P2. This freedom makes it easier to move files through different applications without being wedded to a single NLE.

Here are some more options for Canon HDSLR post from another article written for Videography magazine.

©2010 Oliver Peters

Grind those EOS files!

I have a love/hate relationship with Apple Compressor and am always on the lookout for better encoding tools. Part of our new file-based world is the regular need to process/convert/transcode native acquisition formats. This is especially true of the latest crop of HDSLRs, like the Canon EOS 5D Mark II and its various siblings. A new tool in this process is Magic Bullet Grinder from Red Giant Software. Here’s a nice description by developer Stu Maschwitz as well as another review by fellow editor and blogger, Scott Simmons.

I’ve already pointed out some workflows for getting the Canon H.264 files into an editable format in a previous post. Although Avid Media Composer 5, Adobe Premiere Pro CS5 and Apple Final Cut Pro natively support editing with the camera files – and although there’s already a Canon EOS Log and Transfer plug-in for FCP – I still prefer to convert and organize these files outside of my host NLE. Even with the newest tools, native editing is clunky on a large project and the FCP plug-in precludes any external organization, since the files have to stay in the camera’s folder structure with their .thm files.

Magic Bullet Grinder offers a simple, one-step batch conversion utility that combines several functions that otherwise require separate applications in other workflows. Grinder can batch-convert a set of HDSLR files, add timecode and simultaneously create proxy editing files with burn-in. In addition, it will upscale 720p files to 1080p. Lastly, it can conform frame rates to 23.976fps. This is helpful if you want to shoot 720p/60 with the intent of overcranking (displayed as slow motion at 24fps).

The main format files can be converted to the original format (with added timecode), ProRes, ProRes 4444 or one of two quality levels of PhotoJPEG. Proxies are either ProRes Proxy or PhotoJPEG, with the option of several frame size settings. In addition, proxy files can have a burn-in with various details, such as frame numbers, timecode, file name + timecode or file name + frame numbers. Proxy generation is optional, but it’s ideal for offline/online editing workflows or if you simply need to generate low-bandwidth files for client review.

Grinder’s performance scales with the number of processor cores. It sends one file to each core, so in theory, eight files would be processed simultaneously on an 8-core machine. Speed and completion time will vary, of course, with the number, length and type of files and whether or not you are generating proxies. I ran a head-to-head test (main format only, no proxy files) on my 8-core MacPro with MPEG Streamclip and Compressor, using 16 H.264 Canon 5D files (about 1.55GB of media or 5 minutes of footage). Grinder took 12 minutes, Compressor 11 minutes and MPEG Streamclip 6 minutes. Of course, neither Compressor nor MPEG Streamclip would be able to handle all of the other functions – at least not within the same, simplified process. The conversion quality of Magic Bullet Grinder was quite good, but like MPEG Streamclip, it appears that ProRes files are generated with the QuickTime “automatic gamma correction” set to “none”. As such, the Compressor-converted files appeared somewhat lighter than those from either Grinder or MPEG Streamclip.
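The one-file-per-core behavior is the classic worker-pool pattern. This sketch mimics it with a placeholder “convert” step standing in for the real transcode (nothing here is Grinder’s actual code):

```python
from multiprocessing import Pool, cpu_count

def convert(path):
    """Placeholder for a real transcode step -- returns the name
    the converted file would get."""
    return path.rsplit(".", 1)[0] + ".mov"

if __name__ == "__main__":
    clips = [f"clip{n:02d}.mp4" for n in range(1, 17)]
    # One task per file, up to one worker per core, like Grinder:
    with Pool(processes=min(cpu_count(), len(clips))) as pool:
        results = pool.map(convert, clips)
    print(results[0])  # clip01.mov
```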

This is a really good effort for a 1.0 product, but in playing with it, I’ve discovered it has a lot of uses outside of HDSLR footage. That’s tantalizing and brings to mind some potential suggestions as well as issues with the way that the product currently works. First of all, I was able to convert other files, such as existing ProRes media. In this case, I would be interested in using it to ONLY generate proxy files with a burn-in. The trouble now is that I have to generate both a new main file (which isn’t needed) as well as the proxy. It would be nice to have a “proxy-only” mode.

The second issue is that timecode is always newly generated from the user entry field. Grinder doesn’t read and/or use an existing QuickTime timecode track, so you can’t use it to generate a proxy with a burn-in that matches existing timecode. In fact, if your source file has a valid timecode track, Grinder generates a second timecode track on the converted main file, which confuses both FCP and QuickTime Player 7. Grinder also doesn’t generate a reel number, which is vital data used by many NLEs in their media management.

I would love to see other format options. For instance, I like ProResLT as a good format for these Canon files. It’s clean and consumes less space, but isn’t a choice with Grinder. Lastly, there are the conform options. When Grinder conforms 30p and 60p files to 24p (23.976), it’s merely doing the same as Apple Cinema Tools by rewriting the QuickTime playback rate metadata. The file isn’t converted, but simply told to play more slowly. As such, it would be great to have more options, such as 30fps to 29.97fps for the pre-firmware-update Canon 5D files. Or conform to 25fps for PAL countries.

I’ve seen people comment that it’s a shame it won’t convert GoPro camera files. In fact, it does! Files with the .mp4 extension are seen as an unsupported format. Simply change the file extension from .mp4 to .mov and drop it into Grinder. Voila! Ready to convert.
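The extension swap is trivial to batch. A sketch (the folder contents are made up; only the file name changes – no re-encode takes place):

```python
import os

def mp4_to_mov(folder):
    """Rename every .mp4 in a folder to .mov so that tools which
    filter on extension will accept the files. Only the name is
    touched; the media inside the file is untouched."""
    renamed = []
    for name in os.listdir(folder):
        if name.lower().endswith(".mp4"):
            new = name[:-4] + ".mov"
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, new))
            renamed.append(new)
    return sorted(renamed)
```

For example, `mp4_to_mov("/Volumes/CARD/DCIM/100GOPRO")` would rename a card full of GoPro clips in one pass (the path shown is hypothetical).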

At $49, Magic Bullet Grinder is a great little utility that can come in handy in many different ways. At 1.0, I hope it grows to add some of the ideas I’ve suggested, but even with the current features, it makes life easier in so many different ways.

©2010 Oliver Peters