Random Impressions – NAB 2010

I always enjoy the show – partly for the new toys – but also to hook up once again, face-to-face, with many friends in the business. I’m back home now and have had a day to decompress and make a few observations about the NAB convention.

First off, this was an extremely strong show for post: tons of new versions of many of your favorite NLEs, color grading tools and other items. Second, attendance was good – a bit more than last year, so still a “down” year compared with the peaks of a few years ago. Yet I felt the floor density was higher than the 2009 vs. 2010 numbers indicated. Thursday was still well-attended and not the ghost town I would have expected. So, on the purely subjective metric of how crowded the floor felt, I’d have to say the daily averages were much better than in 2009.

If you want more specific product knowledge about what was on the floor, check out the various NAB reports at Videography, DV, TV Technology, Studio Daily, Post and Pro Video Coalition. I would encourage you to check out DV’s “(Almost) Live From the NAB Show Blog” – Part 1 and Part 2. The following thoughts fall under opinion and observation, so I’m bound to skip a lot of the details that you might really want to know.

Apple

It never ceases to amaze me when I see blog posts and forum comments that seem to expect Apple to pop up out of nowhere at the show with some amazing new version of Final Cut Studio. Have these folks been under a rock? Apple swore off trade shows several years ago and there’s no indication this policy has changed. They were never on the 2009 or 2010 exhibitors’ lists, and you can’t plan a 1500-3000 seat “user event” at an area ballroom without word getting out. So, I have no idea why people persist in this fantasy game.

The short-term scenario is that there is unlikely to be a feature-laden new version of FCP/FCS any time soon. Maybe an incremental update like the “new” Final Cut Studio from last year, but I wouldn’t expect that until a few months down the road at the earliest – or maybe not until 2011. Even if that doesn’t happen, or the release strikes many as lackluster instead of awesome, it won’t change the breakdown of NLEs to any great degree. If you work with FCP today, you are getting the job done and are probably relatively happy with the product. I don’t foresee any change in the product that would greatly alter that situation.

The more important news – as it pertains to NAB – is that Apple is doing a good job of attracting a number of new partners to its core technologies. Autodesk’s Smoke for Mac OS X is a good example, but it is just one member of the over-300-strong developer community that constitutes the Final Cut ecosystem. A number of companies, such as ARRI, have licensed the ProRes codec, which is a pretty good endorsement of image quality, as well as workflow.

Avid

Certain versions tend to become milestones for a company’s software, and I believe Media Composer 5 will be one of those. Avid renumbered versions with the release of Adrenaline several years ago, so this version 5 is really more like version 17. Numbers notwithstanding, other Media Composer milestones were the old versions 5.x and 7.x, and I believe this newest release (targeted for June) will have just as much impact for Avid editors.

Media Composer 5 goes a long way towards keeping Avid editors in the fold and may even get some Avid-to-FCP “switchers” to come back. It adds limited third-party i/o hardware support, wider codec support (including RED and QuickTime through AMA), Pro Tools-style audio features and more FCP-like timeline editing functions. I highly doubt that it will get any FCP diehards to convert, but it might pique the interest of those selecting their first high-end NLE. Down the road, I’ll have a proper review when it’s ready for actual use.

In addition to Media Composer 5, Avid also previewed its “editing in the cloud” concept. This is largely based on work already done by Maximum Throughput, which had been acquired by Avid. The demo looked pretty fluid, but I think it’s probably a number of years off. That’s OK as this was merely a technology preview; however, it does have relevance to large enterprises. The same concepts developed for editing over the internet clearly apply to editing on an internal companywide LAN or WAN system.

The direction that Avid seems to be taking here – along with its expansion of Interplay into a family of asset management products – sets them up to make the Professional Services department into an IBM-style corporate consultation service and profit center. In other words, if you are a large company or TV network and want to implement the “cloud” editing concept along with the necessary asset management tools, it’s going to take a knowledgeable organization to do that for you. Avid naturally has such expertise and is poised to leverage its internal assets into billable services. The small editing boutique may not have any interest in that concept, but if it makes Avid a stronger company overall, then I’m all for it.

Adobe Creative Suite 5

CS5 is just about here. It’s 64-bit and uses the Mercury Playback Engine. But will Premiere Pro really pick up steam as an NLE of choice? As with Media Composer, expect a real review in the coming months. I’ve used Premiere Pro in the past on paying gigs and didn’t have the sort of issues I see people complain about. These were smaller projects, though, so I didn’t hit some of the problems that have plagued Premiere Pro, which mainly relate to scalability. Although it’s not touted in the CS5 press info, it does appear to me that Adobe has done a lot of tweaking under the hood, related to the move to 64-bit, so I really expect Premiere Pro CS5 to be a far better product than previous versions.

Whether that proves true is going to depend on your particular system. For example, much has been written about the Mercury Playback Engine, an optimization for the CUDA technology of specific high-end NVIDIA graphics cards. If you don’t have one of these cards installed, Premiere Pro shifts into software emulation. In some cases the difference will be big; in others it won’t. There’s lots of native codec and format support, but not all camera codecs are equal: some are CPU-intensive, some GPU-intensive and others require fast disk arrays. If your system is optimized for DVCPRO HD, for example (older CPU, but fast arrays), you won’t see outstanding results with AVC-Intra, which is processor-intensive and requires the newest, fastest CPUs.

There’s plenty in the other apps to sell editors on the CS5 Production Premium bundle, even if they never touch Premiere Pro. On the other hand, Premiere Pro CS5 is still pretty powerful, so editors without a vested interest in Avid, Apple or something else, will probably be quite happy with it.

One format to rule them all

With apologies to J. R. R. Tolkien, the hopes of a single media format seem to have been totally shattered at this NAB. When MXF and AAF were originally bounced around, the hope was for a common media and metadata format that could be used from camera to NLE to server without the need for translation, transcoding or any other sort of conversion.

I think that idea is toast, thanks to the camera manufacturers, who – along with impatient users – have pushed NLE developers to natively support just about every new camera format and codec imaginable. Since the software can handle it, we see NLEs evolving into more of a browser-style model. This is the basis for how Premiere Pro and Final Cut Pro are structured, and it is now becoming a model that others are embracing. Avid has AMA (a plug-in API for camera manufacturers), but you also see “soft import” in the Autodesk systems and “soft mount” in Quantel. All are variations on the same theme. In fact, Apple is the “odd man out” in this scenario, forcing everything into QuickTime before FCP can work with it.

The three advanced formats that seem to have the broadest support today are Avid DNxHD, Apple ProRes and Panasonic AVC-Intra. To a lesser extent you can add AVCHD, Sony XDCAM (various flavors) and DV/DVCPRO/DV50/DVCPRO HD.

Stereo 3D

Just when we thought we had this HD thing figured out, the electronics manufacturers are pushing us into stereo 3D. There was plenty of 3D on the floor, but bear in mind that there are very few in the production community pushing to do this. It’s driven almost entirely by display manufacturers and studios looking to cash in on 3D theater distribution. I think we are headed for a 3D bubble that will eventually drop back into a niche, albeit a large niche for some.

Whether 3D is big or not doesn’t matter. It’s here now and something many of us will have to deal with, so you might as well start figuring things out. The industry is at the starting point and a lot is in flux. First off – the terminology. Walking around the floor there were references to Stereo 3D, S3D, Stereoscopic and so on. Or what about marketing slogans like Panasonic’s “from camera to couch”? Or Sony’s “make.believe”? Hmm… Did the marketing people really think that one through? New crew positions will evolve. Are you a “stereographer”? Or should you be called a “stereoscopist”?

I watched a lot of stereo 3D demos and generally didn’t like most of them. Too much 3D looks like a visual effect and not the way my eyes see reality. It also affects the creative direction. For instance, the clip of a Kenny Chesney 3D concert film, edited in a typical fast-paced, rock-n-roll style of cutting, was harder to adjust to than the nice slow camera moves of the Masters golf coverage.

I also observed that most 3D shots have an extremely deep depth-of-field – even more so in 3D than if you looked at the same shot in 2D. Shallow depth-of-field, like the gorgeous shots from the HDSLRs that everyone loves, doesn’t seem to work in 3D. I tended to pay attention to objects in the background instead of the foreground, which I presume is the opposite of what the director wanted. Many of the 3D shots felt like multi-planed pieces of animation. I have heard this referred to as “density zones” and it seems to be an anomaly of 3D shots. A lot of these shots simply had the effect of a moving version of a vintage View-Master.

Obviously a lot of companies will try to produce 3D content from archival 2D masters. To answer that need, JVC showed a real-time 2D-to-3D converter, which was able to take standard programs and adjust shots on-the-fly using a set of sophisticated algorithms. This creates some interesting artifacts. First off, the information has to be interpolated so that alternating fields become left-eye and right-eye views, and the result shows visible scanlines on an HD display. That seems to be a common problem with current 3D displays.

Second, there are errors in the 3D. Some of the computation is based on colors, which means that occasionally some objects are incorrectly placed due to their color. That part of an object (like a shirt or certain colors in a flag) will appear at a different point in Z-space compared to the rest of the object to which it is attached. My guess is that casual viewers will almost never see these things and therefore such products will be quite successful.

My whole take on this is that we simply don’t see real life the way that stereo 3D films force us to see it. Many folks will disagree with me, including a number of scientists, but I feel that people largely view life in 2D. Your eyes converge on an object and focus (both physically and mentally) on that object. Other things are on the periphery, so you are aware of them, but not focused on them. When you want to look at something else, you change your attention and your focus, much like a pan or tilt with a rack focus. By the same token, we don’t see the sort of extreme shallow depth-of-field created by some lenses, yet that somehow feels more natural. These issues may improve as stereo 3D matures, but for me, the most natural images were those that were closest to 2D. If that’s the case, then you have to ask, “What’s the point?”

Disruptive technology

Blackmagic Design definitely generated the buzz this year. They bought the ailing DaVinci Systems company last year and promptly told everyone in the media that they had no intention of selling cheaper versions of these flagship systems. We now know that wasn’t true. It turns out that Blackmagic has once again been true to form – as everyone had initially thought – and brought a brand new Mac version of DaVinci Resolve to NAB at a very low price.

Upon acquiring DaVinci, Blackmagic decided to “end-of-life” all hardware products (like the DaVinci 2K), end all support contracts and focus on rebuilding the company around its flagship software products – Resolve (grading) and Revival (film restoration). They redesigned the signature DaVinci control surfaces to better fit into Blackmagic Design’s manufacturing pipeline. You can now purchase Resolve in three configurations: software-only Mac ($1K), software (Mac) with panels ($30K) or a Linux version with panels ($50K). Add to this the computer, high-end graphics cards and drives.

The software-only version will work with a panel like the Tangent Wave, so it will allow a user to create a color grading room with the “name brand” product at a ridiculously low price. This has plenty of folks on various forums pretty steamed. I suspect there will be three types of DaVinci customers.

Customer A is the existing facility that upgrades from an older DaVinci to Resolve 7.0. These people will build a high-end room using a cluster of Linux towers. That’s not cheap, but it will still cost far less than in the past.

Customer B will be the facility that wants to set up a less powerful “assist” station. It may also be the entrepreneurial colorist who decides to set up his own home system – either to branch out on his own – or to be able to work from home to avoid the commute.

Customer C – the one that scares most folks – is the shop that sets up a bare bones grading room around Resolve, just so they can say that they have a DaVinci room. There are obvious performance differences between Resolve on a Mac and a full-featured, real-time 2K-capable-and-more DaVinci suite, so the fear is that some folks will represent one as being the other.

No matter what, that’s the same argument that was made when FCP came out and again when Color arrived. Grant Petty (Blackmagic Design’s founder) has always been about empowering people by lowering the cost of entry. This is just another step in that journey. I think the real question will be whether owners who have set up Apple Color rooms will convert them to DaVinci. Color is good, but DaVinci has the brand recognition and there are plenty of experienced DaVinci colorists around. At an extra $1K for software, this might be an easy transition. Likewise for Avid shops. Media Composer’s and Symphony’s color correction tools are pretty long in the tooth and those owners are looking for options. DaVinci may make a lot more sense for these shops than investing in the Final Cut Studio approach. Hard to tell at this point.

Digital cameras

RED had its RED Day event. I was registered, but blew it off. Too much other stuff to see and quite frankly, I have little or no interest in being teased by cameras that are yet to come (late or if ever). In my world, HDSLRs have far greater impact than RED One or Epic. Judging by the number of Canons and Nikons I saw being used on the floor for video coverage and podcasts, I’d have to say the rest of the world shares that experience.

The real news is that RED is no longer the only game in town if you want a digital cinematography camera. Sure, there’s Sony and Panasonic, but more importantly there’s ARRI with the Alexa and Aaton with the Penelope-∆ (Delta). Both companies have a strong film pedigree, and these new cameras, coming this year and in 2011, will offer options that will interest DPs. The Penelope is the odder of the two, in that it’s a hybrid film/digital camera using two interchangeable magazines – one for film and another that’s a digital back. It uses an optical viewfinder, so the sensor is attached to the digital magazine in precisely the same location as the film loop in the film magazine. This leaves the sensor exposed when you swap magazines, but the folks at Aaton don’t see this as an issue, aside from occasional, simple cleaning. In reality, you probably won’t be swapping back and forth between film and digital on the same production.

In my opinion, where RED has gone wrong is in placing resolution over workflow. No matter how smooth, native or fast the current RED post workflow is, they will have a hard time shaking the common “slam” that their workflow is slow, hard or expensive. ARRI and Aaton offer somewhat lower resolution than RED, but they record both camera RAW and direct-to-edit formats. The Alexa records ARRIRAW as well as ProRes, while Aaton uses DNxHD (for now) as its compressed file format. This means the camera generates a file that is ready to edit in Avid or FCP straight from the shoot. If you are working in TV, that may be all you need. If you are doing a feature film, it becomes an offline editing format. The camera RAW file is preserved as a “digital negative”, which would be used for color grading and finishing. ARRIRAW is already supported by a number of systems, including Avid (with MetaFuze) and Assimilate Scratch.

Pure magic

Last year I was “wowed” by Singular Software’s PluralEyes. This year it was GET from AV3 Software. GET is a phonetic search tool based on the same Nexidia technology that is licensed to Avid for Media Composer’s ScriptSync feature. Think of GET as Spotlight for speech. GET operates as a standalone application used in conjunction with Final Cut Pro, rather than as just a plug-in.

The process is simple. First, index the media files that are to be reviewed. This only needs to happen once, and the company claims that files can be indexed 200 times faster than real time. (Like ScriptSync, indexing is extremely fast.) Once files are indexed, enter a search term into the GET search field and all the possible matches are located. Adjusting the accuracy up or down will increase or decrease the number of matching clips.

You can also do searches using multiple parameters, such as a search term plus a date or a reel number. Since the algorithms are phonetic, correct spelling is less important, as long as it sounds the same. GET includes its own player, and clips imported into FCP will have markers at the matching points within the master clip. The shipping version of the product (due in a few months) will also subclip the matching segments.

Other snapshots

There are a few other interesting things to mention.

CatDV from Square Box Systems has come along nicely. Many of my FCP friends have looked at this and characterize it as “what Final Cut Server should have been.” Check it out.

I ran into Boris Yamnitsky (Boris FX founder) at the show and he was more than happy to show me some of the upcoming release. Boris FX wasn’t officially exhibiting this year, but it is starting to roll out BCC 7, beginning with the After Effects version (ready for CS5). It will include a number of key new features, like particles. What really caught my eye, though, was a color correction filter that combines functionality from both Colorista and Color. It’s a single-layer color correction filter with three color wheels, but the twist is that you can apply masks with both inside and outside grades – all within the same instance of the filter.

Lastly, Lightworks is back. Well, it never actually left – it just changed hands a few times, landing with EditShare after it acquired Geevs Broadcast last year. Rather than slug it out with the “A”-list NLE vendors, EditShare has opted to release it as open source and see what the development community can do with the product. It already has a small, loyal following among film editors and a few unmatched touches for collaborative editing. For instance, two editors can work on exactly the same sequence (not copies). One editor at a time has “record” control; as one makes changes, the other sees the updates on his own timeline!

See, I told you it was a fun year.

©2010 Oliver Peters

Canon 5D Avid FCP roundtrip

No, this isn’t the 5D workflow article that you’ve been waiting for. That’s still coming in another couple of weeks. In the meantime, I’ve started on another Canon 5D commercial. This time I’m cutting the project in Avid Media Composer instead of Final Cut Pro. There are a number of reasons, including some recent stability issues I’ve had with FCP. In addition, the creative treatment calls for some nice speed ramp effects. Avid’s FluidMotion is simply a much better slomo technology than anything in Final Cut. So this time, Media Composer is the right tool for the job.

In order to make sure that video levels match what I’m used to with FCP, I’ve been doing some testing of how to roundtrip files back to Final Cut. Ultimately these are web spots, so I want to make sure what I do in Media Composer matches what I do in Final Cut. When I finish editing the spot, there may be a reason to continue in FCP – such as to use Color for grading. That’s another reason to be very sure the images match, regardless of the NLE used.

That’s the dilemma. Avid has always treated video as Rec. 601/709, which means that black and white equal 16 and 235 on a scale of 0-255. This allows headroom and footroom for superwhites and “blacker than black” shadow areas. FCP doesn’t really honor this scale and seems to use internally adjusted levels of 0-235 (my guess), which makes it tricky whenever you convert clips in and out of QuickTime. Not every QuickTime conversion is equal, and you may get level, gamma, saturation and hue shifts depending on where and how the conversion is done and which codec is used.
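To make the scaling concrete, here’s a minimal sketch of the luma math involved, assuming a simple linear remap (real converters also handle chroma, rounding and 10-bit ranges):

```python
def full_to_video(level):
    """Map a full-range 8-bit level (0-255) into video range (16-235)."""
    return 16 + level * (235 - 16) / 255.0

def video_to_full(level):
    """Inverse: rescale video-range levels back out to full range."""
    return (level - 16) * 255.0 / (235 - 16)

# A full-range mid-gray of 128 lands at about 125.9 in video range;
# repeated, mismatched round trips are where level shifts creep in.
```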

One visible sign of this difference is how each UI displays images. An image in a Media Composer window will tend to look “flatter” (i.e., with less contrast) on the computer display than the exact same image in a Final Cut window. That really doesn’t matter for most video. If you compare Avid’s output through one of its DX units with FCP’s output through a Kona card, both will look the same on a broadcast monitor and scopes. In the case of these 5D spots, though, the web is the target. I have to make sure the process is as transparent as possible, since there is no I/O hardware between the NLE and the final product.

When you import a QuickTime file into Avid Media Composer you must decide whether the file’s video levels are mapped as RGB (a full 0-255 range) or 601/709 (a scaled 16-235 range). Computer files, like a Photoshop graphic, are almost always RGB. The movie files generated by the Canon EOS 5D Mark II conform to a full RGB range, so set the color level mapping to RGB when importing these files into Media Composer. This tells Media Composer that the range of levels is 0-255 and must be rescaled to 16-235 upon import, when an Avid media file is created. I had both the original H.264 and converted ProRes versions of these files available. Both matched each other, so the resulting levels inside Avid Media Composer were the same whether I picked the H.264 or ProRes file. During the import stage, these were transcoded to the DNxHD145 codec for editing within a 1080p/29.97 project.

At this point you’d edit the same as with any other project. When done you would export a finished file for web conversion. This was the critical stage in my testing, because I wanted to be sure that I could export a file that matched any FCP version. Obviously, if you are going to color grade the footage, it’s less of an issue, since the image is going to look different than the original anyway. My main concern was to assure that the roundtrip would be as transparent as possible. In theory, the easiest approach would be to simply export a QuickTime file with a target codec (like ProRes) and be done with it. It turns out that this isn’t actually as transparent as you’d expect, presumably because of how Avid is interacting with QuickTime to write a non-Avid QuickTime codec.

The better solution takes a couple of steps, but the results are worth it. First of all, you must export from Media Composer with RGB mapping. The 16-235 levels are thus rescaled back out to 0-255 in order to match your computer display. To get the closest overall level match, you should use the Avid 1:1 codec, not one of the Apple uncompressed or ProRes codecs. You aren’t done yet. The Avid codec does display within FCP, but when I attempted to render it on an FCP timeline, the result was just digital hash. The workaround is to do a second conversion in QuickTime 7. Open the Avid 1:1 exported file in QuickTime Pro 7 and export that file again using the Apple ProRes codec.

When I brought the “round-tripped” ProRes file into FCP and split-screened it with the same clip in H.264 (from the camera) or ProRes (first generation conversion of the camera file), there was very little difference between the two clips – either visually or on the waveform. With this knowledge in hand, I’m now ready and comfortable in cutting the spot in Media Composer and won’t feel like I will make any compromise in image quality.

Here’s a recap of the steps:

  1. Import the 5D files into Avid Media Composer
  2. Use RGB mapping
  3. Cut normally
  4. Export an Avid 1:1 QuickTime movie
  5. Use RGB mapping
  6. Open file in QuickTime 7
  7. Export as Apple ProRes
  8. Import into Apple Final Cut Pro and continue working

© 2010 Oliver Peters

Easy Canon 5D post – Round II

RED’s Scarlet appears to be just around the corner and both Sony and Panasonic seem to be responding to the challenge of the upstart photo manufacturers. No matter what acronym you use – DSMC, HD-DSLR, HDSLR – these hybrid HD video / still photo cameras have grabbed everyone’s attention. 2010 may indeed be the year that hybrid digital SLR cameras hit their stride.

The Canon EOS 5D Mark II showed the possibilities in late 2008 when Vincent Laforet released Reverie, but like all of these new camera products, the big question was how to best handle the post. The 5D (so far) only shoots video at a true 30fps, lacking both the filmic 24fps rate and the video-friendly frame rates (29.97, 25 or 23.976). That oversight was corrected in Canon’s EOS 7D and EOS 1D Mark IV models and may soon be corrected by a firmware update to the 5D. Even so, the 5D has remained a preferred option because of its low-light capabilities and full-frame sensor. Photographers, videographers and filmmakers love the shallow depth-of-field, so a 24p-capable 5D is certainly on many wish lists.


Until the 5D gets a 24fps upgrade [EDIT: coming in March], folks in post will have to contend with the 30fps footage generated by the camera. Last year I wrote an article on how to post a 5D project, which covers a lot of the basics. I’ve since done more 5D projects and formed a number of opinions and workflow tips. I’ve picked up many of these from reading Philip Bloom and Bruce Sharpe (PluralEyes inventor), and at the end of this post I’ll include a number of useful links.

My first observation from the several 5D projects I’ve posted is that you get the best results from these new cameras when you treat them like film. Use classical production methods – slow pans, steady hand-held work, tripods, dollies – and record audio as double-system sound. Second, allow time for processing files and syncing sound before you expect to start editing. 35mm film shoots typically require a day or more between the production day and post for lab processing and film transfer. The equivalent is true for HDSLRs. Whether it’s RED or an HDSLR, you have to become the film lab and transfer house. Once you wrap your head around that concept, the workflow steps make a lot more sense.


I recently cut another Canon 5D Mark II job with Director/DP Toby Phillips. This was an internet commercial for the wine growers of the Yarra Valley region of Australia. Yarra Valley is to Australia what Napa Valley is to California. Coincidentally, it’s also the region ravaged by the horrific fires of 2009. In order to keep the production light, Toby’s crew was bare bones and nearly all images were shot under available light – including sodium vapor lighting in warehouse areas. The creative concept was intended to be tongue-in-cheek: real workers discuss why their job is the most important role in winemaking. The playful interplay between worker comments and winery/vineyard footage rounds out this :60 commercial.

Production tips

Toby rigged his camera with a modified plate, rails and matte box from his existing film equipment, including Arri and Manfrotto parts modified by Element Technica. The 5D records passable sound on its own, but it isn’t ideal when quality matters. To get around this, a Zoom H4n handheld recorder was used for double-system sound. The Zoom has XLR inputs for external mics, in addition to its built-in XY-pattern stereo mics. A Sennheiser shotgun was plugged into the Zoom, which in turn recorded uncompressed 16-bit/48kHz WAV files. The headphone output of the Zoom was connected to the 5D, so that the camera files always contained reference audio.

There are a number of important tips to note here. First, there’s an impedance mismatch in this connection and the 5D uses an AGC circuit to attenuate audio, so the camera-file audio will be clipped. To avoid this, turn the headphone output level down to a very low volume. Second, because the camera audio is clipped, the 5D’s audio is NOT acceptable if you forget to press record on the Zoom. Following the traditional approach, a slate with clapstick was used for every sound take. The Zoom records numbered, sequential files, so the crew also wrote the audio file number on the slate for each take. These two steps make it easy to identify the correct audio take and to sync audio and video later in post.

Post workflow / pre-processing

This production configuration isn’t too different than shooting with other tapeless video cameras, but post requires a unique workflow. Key steps include video format conversion, speed adjustment and syncing the sound.

Video conversion – The Canon EOS 5D Mark II records 40Mbps H.264 QuickTime movies in a 1920x1080p/30fps format. H.264 is not conducive to smooth editing in its native form. 5D files can be up to 4GB in length (about 12 minutes), and there is no clip-spanning provision as there is with P2 or XDCAM. Where and when you convert the native H.264 camera files depends on your NLE. With Avid Media Composer, files are converted into Avid’s MXF format upon import. The import will be slow, since it’s also transcoding, but this is a one-step process. Unfortunately it ties up your NLE, so maybe in the future Avid’s MetaFuze or AMA will come to the rescue.

I cut with Apple Final Cut Pro, which does permit direct editing with the H.264 files, but you don’t really want to do that. I typically convert 5D files into Apple ProRes, using a batch setting in Compressor. You can use other codecs, of course, like DVCPRO HD, ProRes HQ, ProRes LT, etc. Philip Bloom likes to convert his files to the EX format using MPEG Streamclip. The reason for EX, according to him, is that the data rate is similar to the 5D files, so storage requirements don’t expand significantly.

The wine commercial had 127 camera files (2 hours 11 minutes of raw footage), which were converted to ProRes in about 4 hours on an 8-core Mac Pro. Storage needs increased from 40GB (H.264) to 142GB (ProRes). The nice part of this step (at least for FCP users) is that the conversion can be left to churn unattended as a batch. One word of caution, though: Compressor has a tendency to choke and crash when you throw tons of files at it, like 100+ camera files, so I usually do these conversions in groups of 20 or so files at a time.
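The grouping itself is trivial to script if you’d rather not babysit the encoder. A minimal sketch, where submit_batch() is a hypothetical stand-in for however you hand a group of files to your encoder, and the path is invented for the example:

```python
import pathlib

def submit_batch(files):
    # Hypothetical stand-in: call your encoder of choice here.
    print("encoding %d files" % len(files))

def chunks(items, size=20):
    """Yield successive groups of `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

movs = sorted(pathlib.Path("/Volumes/MEDIA/5D_Card1").glob("*.mov"))
for group in chunks(movs):
    submit_batch(group)
```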

Video speed adjustment - The 5D files are a true 30fps and not the fractional video rate of 29.97fps. Avid will convert these files to the correct rate on import, if audio and video tracks have been separated. According to Michael Phillips of Avid (one of their workflow gurus), “If the MOV file is video-only, then I use the ‘ignoreQtrate true’ console command and get a frame-for-frame import, resulting in a .1% slow down.” This is analogous to what happens when film is transferred to video. In my testing, it was important to first strip off the audio track of the MOV in order for this to work. You can do this using QuickTime Player Pro 7.

Final Cut permits native 30fps editing, but then your files won’t play through standard video gear, like a KONA card. I suppose for an internet spot this wouldn’t matter, however we had other uses, so a speed adjustment would have to happen at some point. I could either convert to 29.97 first and be done with it – or I could cut at 30fps and convert the finished spot. I normally opt to convert the ProRes files to 29.97fps first. To do this I use the Cinema Tools “conform” feature. That’s a nearly instantaneous process, which only alters the file’s metadata. It tells media players to run the file at the fractional frame rate of 29.97fps instead of 30fps.
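The math behind the conform is trivial, which is why the process is nearly instantaneous – only the header’s playback rate changes. A quick sketch of the arithmetic:

```python
# Cinema Tools' "conform" rewrites only the stated playback rate:
# the same frames now play at 30000/1001 fps instead of an even 30.
old_fps = 30.0
new_fps = 30000.0 / 1001.0      # 29.97002997...
slowdown = new_fps / old_fps    # 0.999000999... (the ".1%" pulldown)
clip = 120.0                    # a 2-minute clip shot at 30 fps
print(clip / slowdown)          # ~120.12 s once conformed to 29.97
```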

Audio speed adjustment - Changing the frame rate from 30 to 29.97 means the picture has been slowed by .1%, so the audio must undergo the same pulldown. If you use a location sound recorder capable of a 48.048kHz sample rate, then Avid Media Composer will automatically adjust the rate back down to 48kHz upon import and achieve the pulldown. In addition, there are various utilities that can “restamp” the metadata for the sample rate. A good choice is Sound Devices’ free Wave Agent. The Zoom recorder created 48kHz files, but these could be restamped as 47.952kHz by such a utility. In the case of Media Composer, the software sees this on import and slows the file by .1% to achieve the desired 48kHz sample rate. Thus the audio is back in sync.
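To illustrate what a restamping utility is doing under the hood, here is a minimal Python 3 sketch that copies a 48kHz WAV while writing a 47.952kHz header, leaving the samples untouched. (It shows the concept only – real tools like Wave Agent also preserve Broadcast Wave metadata, which this does not – and the file names are invented; run such things on copies only.)

```python
import wave

def restamp_pulldown(src_path, dst_path):
    """Copy a WAV, changing only its stated sample rate from 48000 to
    47952 Hz, so players run the same samples 0.1% slower."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(params.nframes)
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params._replace(framerate=47952))
        dst.writeframes(frames)

restamp_pulldown("ZOOM0001.WAV", "ZOOM0001_pd.WAV")  # hypothetical names
```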

Final Cut Pro works differently than Media Composer, so your results may vary. FCP simply tries to maintain the same duration and thus forces a render in the timeline to convert the sample rate to 48kHz without altering the speed. Instead, I recommend rendering new versions of the audio files with the speed change applied before importing them into FCP. When I initially tried the restamp approach, I got sync drift. After posting this entry, I tried it again with Wave Agent and the results were dead-on in sync. The only issue is that you then have to render the audio in FCP to get the correct sample rate. I’m not a big fan of how FCP renders audio files, so I prefer to correct them prior to import. I have also had inconsistent results with how FCP handles sync with external audio files.

Because of these various concerns, I used Telestream Episode Pro and created an audio-only preset that included a speed change with a .999 value. I used this preset to batch-convert the twenty 16-bit/48kHz WAV files from the Zoom recorder (1 hour 9 minutes of raw dialogue) into “pulled down” AIF files. This took about two minutes. Whichever approach you take, I urge you to do this only with copies of your files. Some of these utilities use destructive processes, so you don’t want to change your originals.

(Note: For a better understanding of how BWF (broadcast wave files), QuickTime and Final Cut Pro interact, check out this product (BWF2XML) and description by Spherico.)

Syncing the dailies – After these conversion steps, the files are ready to import into FCP. Audio and video files are now in optimized formats that will match FCP’s native media settings. Next, you’ll have to sync the audio and video takes. If the crew used a clapstick, it’s easy to sync in either Avid or Final Cut using the standard group or multiclip routines.

For this wine spot, I used Singular Software’s PluralEyes to automatically sync all sound takes. PluralEyes was one of the highlights of NAB 2009 and is about as close to magic as any software can get. It analyzes audio waveforms to compare and align the reference camera audio against the separate audio files. This is why it’s critical to record even poor-quality reference audio to the camera in order to give PluralEyes something to analyze. Unfortunately for the Avid editor, PluralEyes only works with Final Cut and Sony Vegas Pro. It’s not a plug-in, but works on a timeline labeled “pluraleyes” in an open and saved FCP project.

Here are the steps:

a) Create a blank FCP timeline named “pluraleyes”.

b) Drag & drop all camera clips with dialogue (audio & video) onto the timeline (random order is OK).

c) Drag & drop all separate audio files onto the same timeline onto unused audio tracks (random order is OK).

d) Disable any redundant audio track (speeds up analysis).

e) Save the project, launch PluralEyes, start analysis/sync processing.

After a few minutes of processing, PluralEyes will automatically create a series of new FCP sequences – one for each sync take. The audio will be aligned so that the double-system sound files are now perfectly in sync with the camera audio.
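For those curious how this kind of magic works, the core idea is classic audio cross-correlation. Here’s a bare-bones sketch of the concept – my own illustration, not Singular’s actual algorithm – assuming NumPy/SciPy and two mono tracks at the same sample rate:

```python
import numpy as np
from scipy.signal import correlate

def audio_offset(camera_ref, zoom_track, rate=48000):
    """Estimate (in seconds) how far zoom_track must slide to line up
    with camera_ref, using the peak of their cross-correlation."""
    corr = correlate(zoom_track, camera_ref, mode="full", method="fft")
    lag = int(np.argmax(corr)) - (len(camera_ref) - 1)
    return lag / rate
```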

Post workflow / edit / mix / grade

Now that you have sync takes, you can pretty much edit any way you like. I picked the following tip up from Bloom’s blog. To make editing easier on the wine spot, I took these new sequences and renamed them according to the person speaking and the take number. I exported the sequences as QuickTime reference movies (not self-contained) to a location on my media drives. I then re-imported these reference movies, in effect turning them into master clips with merged 5D video and Zoom audio. These became my source for all sync takes. Any b-roll shots came from the regular ProRes files.

The rest of the edit went normally. I’ve got my Mac Pro set up with two internal 1TB drives configured as a software RAID-0 for media files (2TB). No issues with cutting ProRes this way. I bounced the audio to Soundtrack Pro for the final mix – no real reason, other than to take advantage of some of the plug-ins to add a touch of “sparkle” to the dialogue.

I used Apple Color for the grade. If you follow my blog, you know that I could have tackled this easily with various plug-ins and stayed inside FCP, however, I do like the Color interface and toolset. This spot was ideally suited to go through a grading pass using Color. As it turned out, this step might have been a bit premature due to client revisions. In hindsight, using plug-ins might have been preferable. I thought the cut was locked, so proceeded with the correction in Color.

The first version of the spot was a faster paced cut (57 shots in :60), so the client requested a second version with a little more breathing room and a few alternate dialogue takes. This necessitated going back into the footage. Those familiar with Color know that it generates new media files when it renders color correction. This is required to “bake in” the color corrections. If you assign handles of a few seconds to each shot, you have some room to trim shots when you are back in Final Cut. This doesn’t help you with other footage.

I decided to step back to the sequence before “sending to” Color and cut a second, more-relaxed version (46 shots in :60). Although this meant starting a new Color project, I was aided by Color’s ability to store grades. I could save the settings for each of the shots in version one and apply them to the similar or same shot in version two, within the new Color project. Adjust keyframes, tweak a few settings, render and bingo! – the grade is done. With :02 handles on each shot, version one (57 shots) rendered in about 40 minutes and version two (46 shots) took about 30 minutes, both as 1920×1080 ProRes (29.97fps) media. Of course, like many commercials, this wasn’t the end and a few more changes were made! The final version ended up being a combination of these two cuts.

(As an aside, Stu Maschwitz has done a nice post about Color Correcting Canon 7D Footage on his ProLost blog.)

Post-processing / 24fps conversion

This could have been the end of post for the wine spot, but there’s one more step. A big reason people like these HDSLRs is that they provide a very cost-effective way of getting that elusive “film look”. One part of that look is the 24fps frame rate. Yes, some film is shot at 30fps for spots and TV shows, so technically the 5D’s 30p footage is just fine. But clients really do want that 24fps look.

You can convert these 5D files quite cleanly to 24fps. This is a process I picked up from Bloom and discussed in my previous Canon post.  Here are the steps:

a) Note the exact duration of the 29.97fps timeline.

b) Export a self-contained QuickTime movie of the finished 29.97 sequence.

c) Bring that exported file into Compressor and set up a ProRes-to-ProRes conversion. Use a frame rate of 24fps (it actually is 23.98, but Compressor labels it as 24).

d) Turn Frame Controls on, set Rate Conversion to Best and change Duration from 100% of source to the exact duration of the original 29.97 timeline.

Now let Compressor crunch for a while. My :60 spot took about 36 minutes to convert from 29.97 to 23.98. For good measure, I also take the finished file into Cinema Tools and conform it to 23.98, just in case it’s 24 and not 23.98. Then I import the file back into FCP, create a new 23.98 timeline and edit the converted clip into it. If everything was done correctly, this media should match without any rendering needed. Then I copy and paste the audio from the 29.97 timeline to the 23.98 timeline, which should be in sync.
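For clarity, here’s the frame arithmetic behind step d). The running time stays fixed while the frame count drops, so Compressor has to synthesize new in-between frames – a quick sketch using exact NTSC fractions:

```python
from fractions import Fraction

SRC = Fraction(30000, 1001)    # 29.97 fps
DST = Fraction(24000, 1001)    # 23.976 fps
seconds = 60                   # a :60 spot

print(round(seconds * SRC))    # 1798 source frames
print(round(seconds * DST))    # 1439 frames after conversion
# Same duration, ~20% fewer frames: Frame Controls interpolates the new
# temporal positions, which is where the blended-frame artifacts on
# fast motion (mentioned below) come from.
```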

A couple of additional pointers. Since I don’t want the conversion process confused by titles and dissolves, I remove all graphics and turn dissolves into cuts (with handles) in the 29.97 sequence prior to export. I actually exported the wine spot timeline as 1:04 instead of :60. Back in the 23.98 timeline, I fixed these trims and added back the fades, dissolves and graphics to complete the sequence.

The second issue is speed changes. I sped up two shots, which actually passed through Color and this 24p conversion just fine – except for one problem. My 29.97 timeline was actually an interlaced timeline. This doesn’t matter for the camera files, as they are inherently progressive. However, any timeline effects, like speed changes, titles and transitions are processed with interlaced motion. This affected the two sped-up shots in the 24p conversion, resulting in interlace artifacts. The simple fix was to replace these with the normal-speed media and redo the speed change in the 23.98 timeline. No big deal, but something to be mindful of in the future.

Finally, although this conversion is very good, it isn’t perfect. Cuts do stay as clean cuts, and slow action converts cleanly, looking as if it were shot at 24fps. Fast motion, however, does introduce some artifacts, which mainly show up as blended frames in areas of fast activity or fast camera movement. It’s no big deal really, as it tends to add to the filmic look of the material – a bit like motion blur.

Remember that this is an OPTIONAL and SUBJECTIVE step. I personally think that 30p is a “sweet spot” for LCD and plasma screens. This is especially true for the web and computer displays. In the end, my client decided they liked the 30p image better, because it was crisper.

The finished spot is posted in HD on Vimeo, along with an “Alternate Cut” at 30fps (no 24p conversion).

Additional tools

Since the media files that HDSLR cameras generate are an outgrowth of consumer-level file creation, there is very little metadata in them that an NLE would care about – no reel numbers, SMPTE timecode, edge numbers, etc. That’s good and bad. Good, in that the folder and file structure is quite simple and very malleable. Bad, in that you can have duplicate file names and there’s no ability to span clips. Think of it like a roll of 35mm negative, which holds about 11 minutes and only gains metadata when it’s transferred to video.

Since files are sequentially numbered on the memory card, once you start recording to the next card, it’s likely to have repeating file names. This is true both in the camera and on a recorder like the Zoom, simply because there is no reel (i.e. card) ID name or number. The good news is that you can easily change this without corrupting metadata – as you would with RED or P2 – but it means you have to manually impose some sort of structure yourself.

R-Name - One utility that can help is R-Name. Unfortunately it may be out of development, but I still use version 3, which works with Snow Leopard. You might be able to find a download still lurking in the depths of the internet – or, if not, a similar utility or an Automator routine. R-Name lets you rename files (as the name implies), but you can also append prefix or suffix character strings to a file name. For example, a set of media files from a 5D may be named MVI_1073.mov through MVI_1200.mov and you’d like to add a prefix for Card 1. Simply create an R-Name batch that adds a prefix such as “C001_” to all these files. Run the batch and voila – your files are now named C001_MVI_1073.mov through C001_MVI_1200.mov. Follow this process for each card and it becomes a nice, fast way of organizing your media.
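If R-Name has truly vanished, the same batch prefix takes only a few lines of scripting. A minimal sketch (the script name and paths are made up for the example; run it on copies of your card contents):

```python
import os, sys

def add_card_prefix(folder, prefix):
    """Prepend a card ID like 'C001_' to every .mov file in folder."""
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(".mov") and not name.startswith(prefix):
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, prefix + name))

# Usage: python add_prefix.py /Volumes/BACKUP/Card1 C001_
if __name__ == "__main__":
    add_card_prefix(sys.argv[1], sys.argv[2])
```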

QtChange – If reel numbers and timecode are important for you to have, then check out VideoToolshed’s QtChange. This is a comprehensive QuickTime utility, which lets you alter several file parameters. Most importantly, you can add or change reel number and timecode values. Although this isn’t essential for you to cut in FCP, certain functions, like dupe detection, won’t work without an assigned reel number. There are several ways to alter this info in QtChange, but one of the ways it can work is to automatically use the date stamp of the file for the reel number and the time stamp as a starting timecode number. Files can be changed in a batch, but be careful as these are destructive changes. Developer Bouke Vahl has been making ongoing changes to the product and recently added Avid Log Exchange functionality.
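The date-stamp approach is easy to picture: derive a reel ID from the date and a start timecode from the time of day. A hypothetical sketch of that mapping (QtChange’s actual internals are Bouke Vahl’s, not mine):

```python
import datetime, os

def reel_and_tc(path):
    """Derive a (reel, start_timecode) pair from a file's modification
    time, mirroring the date-stamp/time-stamp idea described above."""
    t = datetime.datetime.fromtimestamp(os.path.getmtime(path))
    reel = t.strftime("%y%m%d")                        # e.g. '100221'
    tc = "%02d:%02d:%02d:00" % (t.hour, t.minute, t.second)
    return reel, tc
```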

MetaCheater – One deficiency of Avid Media Composer has been the inability to directly read all of the metadata from a QuickTime file. For instance, older versions of Media Composer and Symphony would not read QuickTime timecode. This has been corrected in the most recent versions; these apps now import the timecode, but still no reel number. In addition, the Canon cameras don’t generate timecode or reel numbers so you must add them if you need such information. You could use QtChange to add reel IDs and timecode, which Media Composer would import, but then there’s still the reel ID problem. MetaCheater is a simple way around this. This program extracts QuickTime metadata and creates an Avid Log Exchange file (ALE) with proper reel numbers and timecode values. Import the ALE file into Media Composer and then batch import the corresponding QuickTime movies. In this process, Media Composer uses the timecodes and reel numbers from the ALE instead of default values, with the result that your Avid bins properly reflect the reel and timecode information added to the 5D files. It would be just as if this media had been captured from a videotape source.
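Part of the reason such utilities can stay small is that ALE is simply a tab-delimited text format. Here’s a minimal sketch from memory – real ALE files carry many more heading fields and columns, and the clip values below are invented:

```python
def write_ale(path, clips, fps="29.97"):
    """Write a bare-bones Avid Log Exchange file. clips is a list of
    (name, tape, start_tc, end_tc) tuples."""
    with open(path, "w") as f:
        f.write("Heading\nFIELD_DELIM\tTABS\nVIDEO_FORMAT\t1080\n")
        f.write("FPS\t{}\n\n".format(fps))
        f.write("Column\nName\tTape\tStart\tEnd\n\n")
        f.write("Data\n")
        for clip in clips:
            f.write("\t".join(clip) + "\n")

write_ale("card1.ale",
          [("C001_MVI_1073", "C001", "14:30:21:00", "14:31:05:00")])
```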

Here are a few comparisons of the color grading applied to these shots – original image, graded image and a split-screen view for each.

Addendum (Feb 2010)

After I initially wrote this article in January, I pulled it down for some tweaks, and then got busy for a few weeks until I could repost it. In that time, I was able to do some more testing with Avid Media Composer 4.0.5 on another Canon 5D spot. I am adding my observations here, since many of my readers are Avid cutters and want to know the best way to handle these files in Media Composer.

Unlike FCP, there’s no simple drag-and-drop method in Avid. If you elect to convert the files using an external encoding application, you still have to bring the files in through Avid’s import routines. This adds a step and effectively doubles the total time it takes to convert and import as compared with FCP. Another frustrating issue is that when you move from the native camera files into Avid, you have to move out of the QuickTime color and gamma architecture and into an MXF structure using Avid codecs.

In the Avid world, video files are treated using the rec. 601/709 colorspace (16-235 on an 8-bit scale) and computer files are assumed to be in RGB space (0-255). When you import or export files to and from Media Composer, you always need to check the proper setting – RGB or 601/709. Unfortunately (or fortunately depending on your POV), this is largely hidden from view in the QuickTime world. Furthermore, Canon really hasn’t provided documentation that I’m aware of regarding the colorspace that these cameras work in and how closely color scaling conforms to either RGB or rec. 709. The long and short of it is that when you move in and out of QuickTime, you are often fighting level and gamma changes to varying degrees.

I tried a number of different import and encoding methods with Media Composer. All of them work, but with various trade-offs. The easiest method is the one outlined earlier in this article – simply import the H.264 camera files into Media Composer. When you do that, select RGB color space. The import will take approximately 3x to 4x the running time on a fast machine, depending on the target codec you choose, because the media is being transcoded during the import stage. I had the fastest encoding times using the Sony XDCAM-EX codec, which is now natively supported by Media Composer.

A second option is to use Apple Compressor (or another QuickTime encoder) to convert the camera files into QuickTime movies using an Avid DNxHD codec. This is the same approach as converting to Apple ProRes 422. Unfortunately, Avid still imposes a longer import time to get these files from QuickTime MOVs into the MXF media format. Although Compressor offers a choice between RGB and 709 when you select DNxHD, it doesn’t seem to make any difference in the appearance of the files. The files are converted to 709 color space and so should be imported into Avid with the import setting on 709. I hope that this import step will be eliminated at some point in the future, when and if Avid decides to support QuickTime files through its AMA feature.

The fastest current method was to use Episode Pro again. MXF is now supported in this encoder, so I was able to convert the H.264 files into MXF-wrapped XDCAM-EX files that were ready for Avid. The beauty of this is that the work can be done on an external machine in a batch, and the import back into Media Composer is very fast – no transcoding is needed, as this just becomes a file copy. The EX codec looked clean and wasn’t too taxing on my Mac Pro. You also have the option of using the XDCAM-HD and XDCAM HD 422 (50Mbps) codecs in the MXF file format. The only issue was that one of the media files appeared to be corrupt after encoding and had to be re-encoded. This might be an anomaly, but we ARE dealing with two long-GOP codecs in this process! Another benefit of this route is that no user interaction is required to determine color space settings.

Now to the level issues. In all of this back and forth – once I exported back out to QuickTime (ProRes 422 codec, using RGB setting on export) – no conversion identically matched the original camera files. When I compared versions, direct import of the files (H.264 into Avid) yielded slightly darker results. External conversion to DNxHD and then importing, yielded a slight gamma shift. Conversion/import via the MXF route appeared a bit lighter than the original. None of these were major differences, though. If you are going to color grade the final product anyway, it doesn’t really matter. I finally settled on a 2-step conversion workflow (described in my February 21 post) that yielded good results going from the 5D files into Media Composer and then to FCP.

As far as editing, syncing and grading, that is the same as with any other acquisition media. I used the same preparatory steps as outlined earlier (Cinema Tools conform to 29.97 and a .999 speed adjustment of the audio) – then converted and imported the video files. Inside Media Composer (1080p/29.97 project), everything synced and edited just as I expected.

Also in early February, Canon announced its EOS Movie Plugin-E1 for Final Cut Pro. It’s supposed to be released in March and, if I understand the description correctly, it allows you to import camera clips via FCP’s Log and Transfer module. During the import stage, files are transcoded to ProRes. Unfortunately there is no explanation of how frame rates are handled, so I presume the files are imported and remain at their original frame rate.

My conclusion after all of this is that both FCP and Media Composer are just fine for working with HDSLR projects. FCP seems a bit faster at the front, but in the end, you’re just traveling two different roads to get to the same destination.

I leave you with one last tidbit to ponder. Apple has just introduced Aperture 3, which includes HD video clip support in slideshows. I wonder how apps like Aperture, Lightroom and Photoshop (which already supports some video functions) will impact these HDSLR workflows in the future?

(UPDATE: If you got here through links from other blogs, make sure you read the updated Round III post as well.)

Useful Links

5DMk2 blog – 1001 Noisy Cameras

Assisted Editing

Philip Bloom

Canon Explorers of Light

Canon Filmmakers

Cinema5D

DSLR HD

DVinfo

DVXuser

Element Technica

FreshDV

Tyler Ginter

Vincent Laforet

ProLost

Red Rock Micro

Bruce Sharpe

Spherico

Peter Wiggins

Planet5D

Video Toolshed

Zacuto

©2010 Oliver Peters

Tips for Small Camera and Hybrid DSLR Production


It started in earnest last year and shows no sign of abating. Videographers are clearly in the midst of two revolutions: tapeless recording and the use of the hybrid still/video camera (HDSLR). The tapeless future started with P2 and XDCAM, but these storage devices have been joined by other options, including Compact Flash, SD and SDHC memory cards. The acceptance of small cameras in professional operations first took off with DV cameras from Sony and Panasonic, especially the AG-DVX100. These solutions have evolved into cameras like the Sony HVR-Z7U and PMW-EX3 and Panasonic’s AG-HPX170 and AVCCAM product line. Modern compressed codecs have made it possible to record high-quality 1080 and 720 HD footage using smaller form factors than ever before.

This evolution has sparked the revolution of the HDSLR cameras, like the Canon EOS 5D Mark II, the new Canon EOS 7D and 1D Mark IV and the Nikon D90, D300s and D3s, to name a few. Although veteran videographers might have initially scoffed at such cameras, it’s important to note that Canon developed the 5D at the urging of Reuters and the Associated Press, so its photographers could deliver both stills and motion video with the least hassle. Numerous small films, starting with photographer Vincent Laforet’s Reverie, have more than proven that HDSLRs are up to the task of challenging their video cousins. From the standpoint of a news or sports department, we have entered an era where every reporter can become a video journalist, simply by having a small camera at the ready. That’s not unlike the days when reporters carried a Canon Scoopic 16mm, in case something newsworthy happened.

These cameras come with challenges, so here is some advice that will make your experience more successful:

1. Ergonomics / stability – Both small video camcorders and HDSLRs are designed for handheld, not shoulder-mounted, operation. This isn’t a great design for stability while recording motion. In order to get the best image out of these cameras, invest in an appropriate tripod and fluid head. For more advanced operations, check out the various camera mounting accessories from companies like Zacuto and Red Rock Micro.

2. Rolling shutter – This phenomenon affects all CMOS cameras to varying degrees. Horizontal movement skews the image, a distortion caused by the time differential between information read out at the top and the bottom of the sensor. The HDSLRs have been criticized for these defects, but others, like the EX or the RED One, have displayed the same artifacts to a lesser degree. The problem can be minimized by using a tripod and slow (or no) camera movement.

3. Focus – Among the reasons shooters like HDSLRs are the large image sensor (compared with video cameras) and still/film lenses, which together provide a shallow depth-of-field. This is a mixed blessing when you are covering a one-time event. Still photo zoom lenses aren’t mechanically designed to be zoomed and focused during the shot like film or video zoom lenses, which makes it harder to nail the shot on-the-fly. Since the depth-of-field is shallow, focus is also less forgiving. Lastly, focus is often judged on an LCD viewer instead of a high-quality viewfinder. Many shooters using both small video cameras and HDSLRs have added an externally-mounted LCD monitor as a better device for judging shots.

4. Audio – The issue of audio depends on whether we are talking about a Canon 5D or a Panasonic 170. Professional and even prosumer camcorders have been designed to have mics connected; to date, HDSLRs have not. If you are shooting extensive sync-sound projects with a hybrid camera, then you will want to consider using double-system sound with a separate recorder and a human mixer. At the very least, you’ll want to add an XLR mic adapter/mixer, like the BeachTek DXA-5D.

5. Movie files – Each of these cameras records its own specific format, codec and file wrapper. Production and post personnel have become comfortable with P2 and XDCAM, but the NLE manufacturers are still catching up to the best way of integrating consumer AVCHD content or files from these HDSLRs. Regardless of the camera system you plan to use, make sure that the file format is compatible with (or easily transcoded to) your NLE of choice.

6. Capacity – Most of these cameras use a recording medium that is formatted as FAT32. This limits a single file to 4GB, which in the case of the Canon 5D means the longest recording cannot exceed about 12 minutes of HD (1920x1080p at 30fps) – see the quick calculation after this list. Unlike P2, there is no spanning provision to extend the length of a single recording. Make sure to plan your shot list to stay within the file limit and bring enough media. In the case of P2, many productions bring along a “data wrangler” and a laptop. This person offloads the P2 cards to drives and then reformats (erases) the cards, so that the crew can continue recording throughout the day with a limited number of P2 cards.

7. Back-up – Always back up your camera media onto at least two devices in the original file format. I’ve known producers who merely transferred the files to the edit system’s local array and then trashed the camera media, believing the files were safe. Unfortunately, I’ve seen Avids quarantine files, making them inaccessible. On rare occasion, I’ve also seen Final Cut Pro media files simply disappear. The moral of the story is to treat your original camera media like film negative. Make two verified back-ups and store them in a safe place, in case you ever need them again.
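
If you’re curious where the 12-minute figure in tip 6 comes from, here is a minimal sketch of the arithmetic in Python. The ~40Mbps data rate is an assumption about the camera’s average H.264 bitrate, which actually varies with scene content, so treat the result as a ballpark.

# Back-of-the-envelope estimate of the FAT32 clip-length ceiling.
# The 40 Mb/s figure is an assumed average data rate, not a Canon spec.

FAT32_LIMIT_BYTES = 4 * 1024**3        # 4GB maximum file size
BITRATE_MBPS = 40                      # approximate H.264 video data rate

max_seconds = FAT32_LIMIT_BYTES * 8 / (BITRATE_MBPS * 1_000_000)
print(f"Max clip length: ~{max_seconds / 60:.1f} minutes")
# Prints roughly 14 minutes at a steady 40 Mb/s; bitrate spikes and
# file overhead bring the real-world limit down to about 12 minutes.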

The new generation of small video camcorders and Hybrid DSLRs offers the tantalizing combination of lower operating cost and stunning imagery. That’s only possible with some care and planning. These tools aren’t right for every application, but the choices will continue to grow in the coming years. Those who embrace the trend will find new and exciting production options.

© 2009 Oliver Peters

Written for NewBay Media and TV Technology magazine

Canon EOS 5D Mark II in the real world


A case study on dealing with Canon 5D Mk2 footage on actual productions.

You could say that it started with Panasonic and Nikon, but it wasn’t until professional photographer Vincent Laforet posted his ground-breaking short film Reverie that the idea of shooting video with a DSLR (digital single lens reflex) camera caught everyone’s imagination. The concept of shooting high definition video with a relatively simple digital still camera was enough for Red Digital Cinema Camera Company to announce the dawn of the DSMC (digital still and motion camera) and to retool the concepts for its much-anticipated Scarlet.

The Scarlet has yet to be released, but nevertheless, people have been busy shooting various projects with the Canon EOS 5D Mark II like the one used by Laforet. Check out these projects by directors of photography Philip Bloom and Art Adams. To meet the demand, companies like Red Rock Micro and Zacuto have been busy manufacturing a number of accessories designed specifically for the Canon 5D in order to make it a friendlier rig for the operator shooting moving video.

[Image: Frame from Reverie]

Why use a still camera for video?

The HOW and WHY are pretty simple. Digital camera technology has advanced to the point that full-frame-rate video is possible using the miniaturized circuitry of a digital still photography camera. Nearly all DSLRs provide real-time video feedback to the LCD display on the back of the camera. Canon was able to use this concept to record the “live view” signal as a file to its memory card. The 5Dmk2 uses a large “full frame 35mm” 21.1MP sensor, which is bigger than the RED One’s sensor or a 35mm motion picture film frame. Raw or JPEG stills captured with the camera are 5616×3744 pixels in a 3:2 aspect ratio. The video view used for the live display is a downsampled image from the same sensor, which is recorded as a 1920×1080 high-def file. This is a compressed file (H.264 codec) at a data rate of about 40Mbps. 16:9 is wider than 3:2, so the moving image is cropped on the top and bottom compared with a comparable still photo.
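
To put some numbers on that crop, here is a quick sketch (Python, purely for illustration) that assumes the full sensor width is used for the video frame:

# How much of the 3:2 still frame is lost when cropping to 16:9?
STILL_W, STILL_H = 5616, 3744          # 5D Mk II still dimensions (3:2)

crop_h = STILL_W * 9 / 16              # height of a 16:9 window at full width
cropped_rows = STILL_H - crop_h

print(f"16:9 window: {STILL_W} x {crop_h:.0f}")
print(f"Rows lost (top + bottom): {cropped_rows:.0f} px ({cropped_rows / STILL_H:.1%})")
# Prints 5616 x 3159 and 585 px (15.6%), which is then downsampled to 1920x1080.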

The true beauty of the camera is its versatility. A photographer can shoot both still images and motion video with the same camera and at the same settings. When JPEG images are recorded, the same colorimetry, exposure and balance are applied to both. Alternatively, one could opt for camera raw stills, in which case the photos can still be adjusted with great latitude after the fact, since this data is not “baked in” as it is with the video. Stills from the camera use the full resolution of this large sensor, so photographs from the Canon 5D are much better than any stills extracted from an HD camera, including the RED One.

[Image: Frame from Reverie]

Videographers have long used various film lens adapters to gain the lens selection and shallow depth-of-field advantages enjoyed by film DPs. The Canon 5D gives them the advantage of a wide range of glass that many may already own. The camera has a relatively small footprint compared to the typical video or film camera – even with added accessories – so it becomes a very interesting option in run-and-gun situations, like documentaries. Last but not least, the camera body (no lenses) costs under $3K. So, compared with a Sony EX3 or a RED One, the 5Dmk2 starts to look even more attractive to low-budget filmmakers.

What you lose in the deal

As always, there are some trade-offs and the Canon EOS 5D Mark II is no exception. The first issue is recording time. The Canon 5D uses CF (CompactFlash) memory cards. These are formatted as FAT32 and have a 4GB file limit. Due to this limit, the maximum clip length for a single file recorded by the 5Dmk2 is about 12 minutes. Unlike P2 or EX, there is no provision for file spanning. The second issue is that the camera records at a true 30fps – not a video-friendly 29.97 and not the highly desirable film rate of 23.98 or 24fps.

Audio is considered passable, but for serious projects, double-system, film-style sound is recommended. This workflow would be the same as if you were shooting on film. Traditional slates and/or software like PluralEyes (Singular Software) or FCPauxTC Reader (VideoToolshed) make syncing picture and sound in post a lot easier.

[Image: Example of the rolling shutter effect used for interesting results]

One major limitation cited by many is the rolling shutter that causes the so-called “jello” effect. The Canon 5D uses a single CMOS sensor and nearly all CMOS cameras have the same problem to some degree; that includes the RED One. This image artifact arises because the sensor is not globally exposed at a single point in time, like a frame of 35mm film. Instead, portions of the sensor are exposed sequentially, which means that fast motion of the subject or the camera translates into an image that appears to wobble or skew. In the worst case, the object in the frame takes on a certain rubbery quality, hence the name “jello” effect. It can also show up with strobes and flashes. For example, I’ve seen it on strobe-light and gunshot footage from a Sony EX3, where the rolling shutter caused half of the frame to be exposed and the other half to be dark.

Skew or wobble becomes most obvious when there are distinct vertical lines within the frame, such as a lamp post or the edge of some furniture. Fast panning motion of the camera or subject can cause it, but it’s also quite visible in just the normal shakiness of handheld shots. Look at many of the short films on the web and you’ll notice the camera is almost always stationary, tripod-mounted or moving very slowly. Lens stabilization circuitry can exacerbate the appearance of these artifacts in some instances, yet reduce their severity in others.
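
As a rough illustration of why camera speed matters so much: the visible lean is approximately the pan speed multiplied by the sensor’s top-to-bottom readout time. The numbers below are hypothetical – Canon doesn’t publish a readout figure – but the proportions are the point.

# Illustrative rolling-shutter skew estimate (hypothetical numbers).
READOUT_TIME_S = 1 / 30                # assumed top-to-bottom readout (~33 ms)
PAN_SPEED_PX_S = 600                   # image motion across the frame

skew_px = PAN_SPEED_PX_S * READOUT_TIME_S
print(f"Lean on a vertical edge: ~{skew_px:.0f} px")
# Prints ~20 px. Halve the pan speed and you halve the skew, which is
# why slow, tripod-mounted moves keep vertical lines looking vertical.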

[Image: Note the skew on the passing subway cars]

High-end CMOS cameras are engineered so that the effect is less noticeable, except in extreme circumstances. On the other hand, the Canon 5D competitor – the Nikon D90 – gained a bit of a reputation specifically for this artifact. To combat this issue, The Foundry recently announced RollingShutter, an After Effects and Nuke plug-in designed to tackle these image distortion problems.

Don’t let this all scare you away, though. Even a camera that is more subject to the phenomenon will turn out great images when the subject is organic in nature and care is taken with the camera movement. Check out some of the blog posts, like those from Stu Maschwitz, about these issues.

[Image: Frame from My Room video]

But, how do you post it?

As with my RED blog post, I’ve given you a rather long-winded intro, so let’s take a look at a real-life project I recently posted that was shot using the Canon EOS 5D Mark II. Toby Phillips is a renowned international director, director of photography and Steadicam operator with tons of credits on commercials, music videos and feature films. I’ve worked with him on numerous spots where his medium of choice is 35mm film. Toby is also an avid photographer and Canon owner (including a 5D Mark II). We recently had a chance to use his 5Dmk2 for a good cause – a pro bono fundraiser for My Room, an Australian charity that assists the Children’s Cancer Centre at the Royal Children’s Hospital in Melbourne. Toby needed to shoot his scenes with minimal fuss in the ward. This became an ideal situation in which to test the capabilities of the Canon and to see how the concept translated into a finished piece in the real world.

[Image: Frame from My Room video]

Toby has a definite shooting style. It typically involves keeping the camera in motion and pulling focus to just hit a point that’s optimally sharp at the sweet spot of the camera move. That made this project a good test bed for the Canon 5D in production. Lighting was good and the images had a warm and appealing quality. The footage generally turned out well, but Toby did express to me that shooting in this style – handheld, without any of the Red Rock or Zacuto accessories or a focus puller – was tough to do. Remember that still camera lenses are not mechanically engineered like motion picture lenses. Focus and zoom ranges are meant to be set and left, not smoothly adjusted during the exposure time.


Posting footage from the 5Dmk2 is relatively easy, but you have to take the right steps, depending on what you want to end up with. The movie files recorded by the camera are QuickTime files using the H.264 codec, so any QuickTime-compatible Mac or PC application can deal with them. They run at a true 30fps, so you can choose to work natively at 30fps (FCP) or first convert them to 29.97fps (for FCP or Avid). That speed change is minor, so there are no significant sync or pitch issues with the onboard audio. If you opt to edit in Media Composer, simply import the camera movies into a 29.97 project using the RGB import settings and the result will be standard Avid media files. The camera shoots progressive scan, so footage converted to 29.97 looks like footage shot with any video camera in a 30p mode.
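
If you’re wondering just how minor that 30-to-29.97 change really is, a couple of lines of Python make it concrete:

import math

ratio = 30 / 29.97                     # slowdown factor from the conform
cents = 1200 * math.log2(29.97 / 30)   # resulting audio pitch shift

print(f"Speed change: {(ratio - 1) * 100:.2f}% slower")
print(f"Pitch shift: {cents:.2f} cents")
# Prints 0.10% and about -1.73 cents. A full semitone is 100 cents,
# so this is far below anything an audience would notice.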

Canon 5D and Final Cut Pro

I edited the My Room project in Final Cut. Although I could have cut these natively (H264 at 30fps), I decided to first convert the files out of H264 for a smoother edit. I received the raw footage on a FireWire drive containing the clips copied from the CF cards. This included 150 motion clips for a total of about one hour of footage (18GB). The finished video would use a mixture of motion footage and moves on stills, so I also received another 152 stills from the 5Dmk2 plus 242 stills from a Canon G10 still camera.

Step one was file conversion to ProRes at 1920×1080. Apple Compressor on a MacBook Pro took under five hours for this step. Going to ProRes increased the storage needs from 18GB to 68GB.
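
That growth is in line with what you’d predict from the data rates. Here is a hedged sanity check, assuming ProRes 422’s nominal target of roughly 147Mbps at 1080/30 (the exact rate varies with frame rate and content):

HOURS = 1.0                            # total footage
H264_MBPS = 40                         # camera's approximate data rate
PRORES_MBPS = 147                      # assumed ProRes 422 target at 1080/30

def gigabytes(mbps, hours):
    """Storage in GB for a given data rate and duration."""
    return mbps * 1_000_000 * hours * 3600 / 8 / 1024**3

print(f"H.264 source: ~{gigabytes(H264_MBPS, HOURS):.0f} GB")
print(f"ProRes 422:   ~{gigabytes(PRORES_MBPS, HOURS):.0f} GB")
# Prints roughly 17 GB and 62 GB - close to the 18 GB and 68 GB observed.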

Step two was frame rate conversion. The target audience is in Australia, so we decided to alter the speed to 25fps. This gives all shots a slight slow-motion quality, as if the footage had been shot overcranked. The 5Dmk2 by itself isn’t capable of variable frame rates or off-speed shooting, so any speed changes have to be handled in post. Although a frame rate change is possible in the Compressor setting (step one), I opted to do it in Cinema Tools using the conform function. When you conform a file in Cinema Tools, you are altering only the file’s metadata, which tells a QuickTime-compatible application to play the file at a specific speed, such as 25fps instead of 30fps. I could also have used this to conform the rate to 29.97 or 23.98. Because only the metadata was changed, the time needed to conform a batch of 150 clips was nearly instantaneous.

Step three – pitch. Changing the frame rate through conform slows the clips, but it also affects the sync sound by making it slower and lowering the pitch. Our video was cut to a music track so that was no big deal; however, we did have one sync dialogue line. I decided to fix just the one line by using Soundtrack Pro. I went back to the original 30fps camera file and used STP’s TimeStretch. This let me adjust the sync speed (approximately 83% of the original) to 25fps, yet maintain the proper pitch.
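
All of the numbers from steps two and three fall out of the same ratio. A quick summary in Python:

import math

speed = 25 / 30                        # conformed playback speed
duration_factor = 30 / 25              # every clip gets 1.2x longer
semitones = 12 * math.log2(speed)      # pitch drop of unstretched audio

print(f"Playback speed: {speed:.1%}")
print(f"Duration: {duration_factor:.2f}x")
print(f"Pitch shift: {semitones:.2f} semitones")
# Prints 83.3%, 1.20x and about -3.16 semitones - low enough that the
# one sync dialogue line clearly needed TimeStretch to restore its pitch.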

Step four – stills. I didn’t want to deal with the stills in their full size within FCP. This would have been incredibly taxing on the system and generally overkill, even for an HD job. I created Photoshop actions to automate the conversion of the stills. The 152 5Dmk2 JPEG stills were converted from 5616×3744 to 3500×2333. The stills from the G10 come in a 4:3 aspect ratio (4416×3312) and were intended to be used as black-and-white portrait shots. Another Photoshop action made quick work of downsampling these to 3000×2250 and also converting them to black-and-white. Photoshop CS4 has a nice black-and-white adjustment tool, which generates slightly more pleasing results than a simple desaturation. These images were further cropped to 16:9 inside FCP during the edit.
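
For those who prefer scripting to Photoshop actions, here is a rough equivalent of that batch step, sketched with the Pillow imaging library. The folder names are hypothetical, and Pillow’s grayscale conversion is a plain luminance mix, so it won’t exactly match Photoshop’s tunable black-and-white adjustment.

from pathlib import Path
from PIL import Image

def batch_resize(src, dst, size, mono=False):
    """Downsample every JPEG in src to size; optionally convert to B&W."""
    out = Path(dst)
    out.mkdir(exist_ok=True)
    for jpeg in Path(src).glob("*.jpg"):
        img = Image.open(jpeg).resize(size, Image.LANCZOS)
        if mono:
            img = img.convert("L")     # simple luminance-based grayscale
        img.save(out / jpeg.name, quality=95)

batch_resize("5d_stills", "5d_resized", (3500, 2333))
batch_resize("g10_stills", "g10_bw", (3000, 2250), mono=True)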

[Image: Frame from My Room video]

Editing

Once I had completed these conversions, the edit was pretty straightforward. The project was like any other PAL-based HD job (1920×1080, 25fps, ProRes). The Canon 5D creates files that are actually easier for an editor to deal with than RED, P2 or EX files. Naming follows the same convention most DSLRs use for stills, with file names such as MVI_0240.mov. There is no in-camera SMPTE timecode and all imported clips start from zero. File organization on a larger project would require a definite process, but on the other hand, you aren’t fighting something being done for you by the camera! There are no cryptic file names and copying the files from the card to other storage is as simple as with any other QuickTime file. There is also no P2-style folder hierarchy to maintain, since the media is not MXF-based.

Singular Software and Glue Tools are both developing FCP-related add-ons to deal with native camera files from the Canon 5D. Singular offers an Easy Setup for the camera files, whereas Glue Tools has announced a Log and Transfer plug-in. The latter will take the metadata from the file and apply the memory card ID number as a reel name. It uses the camera’s time-of-day stamp as a timecode starting point and interpolates clip timecode for the file. Thus, all clips in a 24-hour period would have a unique SMPTE timecode value, as long as they are imported using Log and Transfer.
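
The underlying idea is simple enough to sketch. The snippet below is only an illustration of time-of-day timecode, not Glue Tools’ actual implementation:

from datetime import datetime

def start_timecode(created, fps=30):
    """Map a clip's time-of-day stamp to an HH:MM:SS:FF start timecode."""
    frames = int(created.microsecond / 1_000_000 * fps)
    return f"{created.hour:02d}:{created.minute:02d}:{created.second:02d}:{frames:02d}"

clip_time = datetime(2009, 6, 14, 14, 23, 7, 500_000)  # hypothetical shot time
print(start_timecode(clip_time))       # prints 14:23:07:15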

[Image: Frame from My Room video]

My final FCP sequence was graded in Apple Color – not really because I had to, but rather to see how the footage would react. Canon positioned the 5Dmk2 in that niche between the high-end amateur and the entry-level professional photographer, so it tends to have more automatic control than most pros would like. In fact, a recent firmware update added back some manual exposure control. In general, the camera tends to make good-looking images with rich saturation and contrast – not necessarily ideal for grading, but Stu at ProLost offers this advice. Nevertheless, I really didn’t have any shots that presented major problems, especially given the nature of this shoot, which was closer to a documentary than a commercial. I could have easily graded this with my standard “witches’ brew” of FCP plug-ins, but the roundtrip through Color was flawless.

As a first time out with the Canon EOS 5D Mark II, I think the results were pretty successful (click here to view). I didn’t see any major compression artifacts, and although the footage wasn’t immune from the “jello” effect, I don’t think it got in the way of the emotion we were trying to convey. A filmmaker who was serious about using this as the principal camera on a project could certainly deliver results on par with far more expensive HD cameras. To do that successfully, a) they would need to invest in some of the rigs and accessories needed to utilize the camera in a motion picture environment; and b) they would need to shoot carefully and adhere to set-ups that steer away from some of the problems.


What about 24fps?

25fps worked for us, but until Canon adds 24fps to the 5Dmk2 or a successor, filmmakers will continue to clamor for ways to get 24p footage out of the camera. Philip Bloom and others have posted innovative post “recipes” to achieve this.

I tested one of these solutions on my cut and was amazed at the results. If I needed to maintain sync dialogue on a project, yet wanted the “film look” of 24fps, this is the method I would use. It’s based on Bloom’s blog post (watch his tutorial video). Here are the steps if you are cutting with Final Cut Pro:

1. Edit your video at the native 30fps camera speed.
(Write down the accurate sequence duration in FCP.)

2. Export a self-contained QuickTime file.

3. Conform that exported file to 23.98fps in Cinema Tools.
(This will result in a longer, slowed down file.)

4. Bring the file into Compressor and create and apply a setting to convert the file, but leave the target frame rate at 23.98fps (or same as current file).

5. Click the applied setting to modify it in the Inspector window.

6. Enable Frame Controls and change the duration from “100% of source” to a new duration. Enter the exact original duration of the 30fps sequence (step 1). (Best results are achieved – but with the longest render times – when Rate Conversion is set to “Best – high quality motion compensated”.)

7. Import the converted file into FCP and edit it into a 23.98fps timeline. It should match perfectly against a mixed version of the audio from the original 30fps sequence.

I was able to achieve a perfect conversion from 30fps to 23.98fps using these steps. There were no obvious optical flow artifacts or frame blending. This utilizes Compressor’s standards conversion technology, so even edited cuts in the self-contained file stayed clean without blending. Of course, your mileage may vary.

The edited video segment was 1:44 at 30fps and 2:10 at the slower 23.98fps rate. The retiming conversion necessary to get back to a 1:44-long 23.98 file took two hours on my MacBook Pro. This would be time-prohibitive if you wanted to process all of the raw footage first. Using it only on an edited piece definitely takes away the pain and leaves you with excellent results.
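
Those durations line up with the conform math exactly, which is a handy way to double-check any sequence before committing to the long render:

original = 1 * 60 + 44                 # 1:44 sequence at 30fps, in seconds
conformed = original * 30 / 23.98      # length after the Cinema Tools conform

minutes, seconds = divmod(conformed, 60)
print(f"Conformed length: {int(minutes)}:{seconds:04.1f}")
# Prints 2:10.1 - the Compressor retime then brings it back to 1:44 at 23.98fps.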

Cameras like the Canon EOS 5D Mark II are just the beginning of this DSMC journey. I don’t think Canon realized what they had until the buzz started. I’m sure you’ll soon see more of these cameras from Canon and Nikon, not to mention Panasonic and even Sony, too. Once RED finally starts shipping Scarlet, it will be interesting to see whether this concept really has legs. In any case, from an editor’s perspective, these formats aren’t your tape of old, but they also shouldn’t be feared.

©2009 Oliver Peters