The Canon 5D workflow article is back up. Check here. Thanks.
No, this isn’t the 5D workflow article that you’ve been waiting for. That’s still coming in another couple of weeks. In the meantime, I’ve started on another Canon 5D commercial. This time I’m cutting the project in Avid Media Composer instead of Final Cut Pro. There are a number of reasons, including some recent stability issues I’ve had with FCP. In addition, the creative treatment calls for some nice speed ramp effects. Avid’s FluidMotion is simply a much better slomo technology than anything in Final Cut. So this time, Media Composer is the right tool for the job.
In order to make sure that video levels match what I’m used to with FCP, I’ve been doing some testing of how to roundtrip files back to Final Cut. Ultimately these are web spots, so I want to make sure what I do in Media Composer matches what I do in Final Cut. When I finish editing the spot, there may be a reason to continue in FCP – such as to use Color for grading. That’s another reason to be very sure the images match, regardless of the NLE used.
Here’s the dilemma. Avid has always treated video as Rec. 601/709, which means that black and white equal 16 and 235 on a scale of 0-255. This allows headroom and footroom for superwhites and “blacker than black” shadow areas. FCP doesn’t really honor this scale and seems to internally use adjusted levels of 0-235 (my guess), which makes it tricky whenever you convert clips in and out of QuickTime. Not every QuickTime conversion is equal, and you may get level, gamma, saturation and hue shifts depending on where and how the conversion is done and which codec is used.
One visible sign of this difference is how each UI displays images. An image in a Media Composer window will tend to look “flatter” on the computer display, i.e. less contrast, than the exact same image in a Final Cut window. That really doesn’t matter for most video. If you compare the Avid output through one of Avid’s DX units with FCP’s output through a Kona card, both would look the same on a broadcast monitor and scopes. In the case of these 5D spots, though, the web is the target. I have to make sure the process is as transparent as possible, since there is no I/O hardware between the NLE and the final product.
When you import a QuickTime file into Avid Media Composer you must decide whether the file’s video levels are mapped as RGB (a full 0-255 range) or 601/709 (a scaled 16-235 range). Computer files, like a Photoshop graphic, are almost always RGB. The movie files generated by the Canon EOS 5D Mark II conform to a full RGB range, so set the color level mapping to RGB when importing these files into Media Composer. This tells Media Composer that the range of levels is 0-255 and must be rescaled to 16-235 upon import, when an Avid media file is created. I had both the original H.264 and converted ProRes versions of these files available. Both matched each other, so the resulting levels inside Avid Media Composer were the same whether I picked the H.264 or ProRes file. During the import stage, these were transcoded to the DNxHD145 codec for editing within a 1080p/29.97 project.
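The RGB-to-601/709 mapping described above is a simple linear rescale of the level range. Here’s a minimal sketch of that math in Python (an illustration of the scaling, not Avid’s actual code – the function names are my own):

```python
import numpy as np

def full_to_video(levels):
    """Rescale full-range RGB (0-255) to video-range 601/709 (16-235),
    the kind of remap applied when a file is imported with RGB mapping."""
    levels = np.asarray(levels, dtype=np.float64)
    return np.round(16 + levels * (235 - 16) / 255).astype(np.uint8)

def video_to_full(levels):
    """Inverse mapping: 16-235 video range rescaled back out to 0-255,
    as on an RGB-mapped export."""
    levels = np.asarray(levels, dtype=np.float64)
    return np.round((levels - 16) * 255 / (235 - 16)).clip(0, 255).astype(np.uint8)

print(full_to_video([0, 128, 255]))   # black/mid-grey/white map to 16/126/235
print(video_to_full([16, 126, 235]))  # and back out to 0/128/255
```

Note that superwhites (above 235) and sub-black values survive the video-range representation but would clip once remapped to full range.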
At this point you’d edit the same as with any other project. When done you would export a finished file for web conversion. This was the critical stage in my testing, because I wanted to be sure that I could export a file that matched any FCP version. Obviously, if you are going to color grade the footage, it’s less of an issue, since the image is going to look different than the original anyway. My main concern was to assure that the roundtrip would be as transparent as possible. In theory, the easiest approach would be to simply export a QuickTime file with a target codec (like ProRes) and be done with it. It turns out that this isn’t actually as transparent as you’d expect, presumably because of how Avid is interacting with QuickTime to write a non-Avid QuickTime codec.
The better solution takes a couple of steps, but the results are worth it. First of all, you must export from Media Composer with RGB mapping. The 16-235 levels are thus rescaled back out to 0-255 in order to match your computer display. To get the closest overall level match, you should use the Avid 1:1 codec, not one of the Apple uncompressed or ProRes codecs. You aren’t done yet. The Avid codec does display within FCP, but when I attempted to render it on an FCP timeline, the result was just digital hash. The workaround is to do a second conversion in QuickTime 7. Open the Avid 1:1 exported file in QuickTime Pro 7 and export that file again using the Apple ProRes codec.
When I brought the “round-tripped” ProRes file into FCP and split-screened it with the same clip in H.264 (from the camera) or ProRes (first generation conversion of the camera file), there was very little difference between the two clips – either visually or on the waveform. With this knowledge in hand, I’m now comfortable cutting the spot in Media Composer and won’t feel like I’m making any compromise in image quality.
Here’s a recap of the steps:

1. Import the Canon 5D files (H.264 or ProRes) into Media Composer with the color level mapping set to RGB.
2. Transcode to DNxHD on import and edit as usual.
3. Export the finished sequence from Media Composer with RGB mapping, using the Avid 1:1 codec.
4. Open the Avid 1:1 file in QuickTime Pro 7 and export it again using the Apple ProRes codec.
© 2010 Oliver Peters
One of the results of post production “democratization” is that many of us are literally working in a “cottage industry” – that is, from offices and edit suites right in our homes. We often work in isolation, free of clients hovering over our shoulders and free to set our own hours. Sound like utopia? Well, probably not.
I tend to miss the interaction and feedback from coworkers and clients and often find that this way of working lengthens the time it takes to get the job done rather than shortening it. Nevertheless, it’s here to stay, so develop strategies to make the status quo work for you. Working in the “cottage” specifically means devising the best plan for marketing, client review and interaction, and delivery of your final product.
For most solo editors, this comes down to hanging out the old shingle on a website. For some, it’s a heavy dose of social networking with Twitter and Facebook. I don’t find the stream-of-consciousness world of Twitter to my liking. Plus, I simply don’t have that kind of time to waste. I have had a website online for about a decade, but lately find that the all-inclusive, comprehensive site doesn’t do the trick. After all, the point is to get the message out beyond the boundaries of your own dot com.
Although a company site that elegantly displays all of the demo videos and other details may look nice, it may not actually add any true marketing punch. I’ve opted for a split solution, using a combination of a website, this blog, Vimeo and Flickr. The point is marketing, and each of these hosting communities has its own followers and search functions that increase the chance of a potential client finding YOU. For instance, many corporate clients use YouTube, because it has become a highly-searched resource.
A company website is still a good place for job-related information, like a production bio, list of services and so on. Beyond that, keep it simple. This blog is a place for me to express my running ideas and thoughts. If you look around, many pros have taken the approach of a blog format for their personal site. In addition to articles like this one, I also get a chance to showcase some unique projects that I’ve worked on. One of the things you’ll notice about those fancy, complex sites is that they rarely get updated. That’s the beauty of blogs and video hosting services like Vimeo. You can easily add new content without a major website rebuild, since they are all template driven. This encourages you to keep the content fresh and gives viewers a reason to return.
There are a lot of video hosting options, including YouTube, SmugMug, Exposure Room, Sorenson 360 and Vimeo. I’ve tried various ones and in the end settled on Vimeo’s Plus service. I like the clean look and the level of controls. In general, the videos play smoothly for most connections. It also solves the Mac-PC compatibility issue that people have to deal with when hosting their own videos on a personal website. The Sorenson 360 site is also nice, but I find it a bit pricey, since it’s geared to high traffic. It might make sense for larger companies, but probably not for individual producers and editors interested in simply posting a few demo reels.
There are plenty of ways to handle review-and-approval, ranging from online solutions to shipping tapes and discs. If you opt for the online route then there are two ways to handle this: direct interaction or delayed response. Direct interaction is the closest to face-to-face communication you’re going to get with a client. There’s Apple’s iChat Theater, of course, but if you are looking for something more platform-agnostic, check out Fuze Movie and Fuze Meeting. Fuze Movie (formerly SyncVue) is ideally suited for an editor and director or director and VFX artist working out the details to change a scene or shot. All connected parties can log in (via Skype) and play, control and even mark up frames during the meeting.
A web-based version of this is Fuze Meeting, which doesn’t require the custom player application or the use of Skype. Any web browser will work, but you lose the on-frame mark-up capability. Nevertheless, this solution seems ideal for an editor or director reviewing a spot with a client, such as an ad agency, on the other end of the line.
I tend to work with clients who can’t be online with me at the same time. A system of sending or posting files works best for them and so, solutions like Apple’s MobileMe, Xprove, YouSendIt, Sorenson 360 and DropBox fit the bill. MobileMe’s new share function is one I’ve started to use a lot. I will frequently encode, post and link both large versions and iPhone-compatible versions.
Xprove is my choice when I need something better than a basic send or share function. There is good privacy and version control. Best of all, team members accessing the video can leave comments, giving the entire team access to the running commentary of everyone’s input.
The same services I mentioned above can be used for final delivery. For example, many basic (or even free) services are good for files up to 1GB. That’s enough for a five minute HD clip at Blu-ray specs. Some of the projects I work on these days are targeted exclusively for the web. When that’s the case I can deliver high-quality, high-bit-rate MPEG4 files to the web designer as a “master”. Generally that will be re-encoded into a set of different-sized files. In addition, I ship actual master files to the client burned onto DVD-ROM data discs for their archive. I’ve done a handful of projects like this where I have never actually spoken to my client in person. I could pass them on the street and not even know it was them. How odd is that?
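The 1GB figure checks out with some quick arithmetic. Assuming five minutes of footage and a decimal gigabyte, the sustained bit rate that fits is:

```python
# Rough sanity check: what average bit rate fills a 1 GB file in 5 minutes?
file_bits = 1_000_000_000 * 8       # 1 GB (decimal) in bits
seconds = 5 * 60                    # five minutes of program
avg_mbps = file_bits / seconds / 1_000_000
print(f"{avg_mbps:.1f} Mbps")       # roughly 26.7 Mbps
```

That’s comfortably within the range of Blu-ray-class video bit rates, so a five minute HD clip at those specs does indeed fit under 1GB.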
Client review and final delivery make encoding a key ingredient to post. I use more than one software encoder depending on the type of file I need to create. My current favorite for high-quality HD files for Blu-ray and servers is Adobe Media Encoder, which comes bundled in their collections. It’s also one of the fastest encoders across the board. Standard def DVD files get their MPEG2 pass with either Apple Compressor or Telestream Episode Pro. I’ve also used Innobits BitVice and Adobe Media Encoder, just depending on how I feel.
H.264, MPEG4 and MP4 (all versions of the same family) tend to be the preferred formats for the web these days. These codecs are cross-platform compatible and work with QuickTime and Flash. My new MP4 favorite is Sorenson Squeeze 6. In the past, I’ve had issues with contrast and saturation in Squeeze-encoded files, but Sorenson has completely cleaned that up. The video looks good, the speed is fast enough and the interface has been redesigned. Sorenson Squeeze 6 is the app I like to use for my Vimeo files.
On the other hand, when I post review-and-approval files, I stick with Compressor. Encoding speed is fast and I can set up droplets for my favorite presets. One of these is an iPhone preset, which is ideal when posted to MobileMe with the intent of sharing. This way clients can review the file either on a computer or on their iPhone if they are on the run. It makes a lot of sense due mainly to the success and popularity of the iPhone.
A new option is the Matrox MXO2 capture system configured with MAX technology. Matrox has loaned me an MXO2 Mini as a review and test unit (more in a later article). The Mini is an ideal Final Cut Pro accessory for file-based workflows, because it’s a small unit primarily designed to connect your laptop or desktop to a video monitor. Matrox offers a PCIe card and an ExpressCard/34 adapter, so you can use an MXO2 Mini with both a MacBook Pro and a Mac Pro, if you own one of each. The optional MAX technology adds an integrated chip to provide hardware acceleration of H.264 encoding. It works within Compressor, so after installation, you’ll see additional Matrox presets. Pick one of those and the Mini will accelerate the H.264 compression of that preset for a definite encoding performance boost. If you do a lot of that, then the extra cost of the option will quickly pay for itself.
The current trend of downsizing means that more editors will be working from home. It’s time to develop strategies for making the best of this. Don’t just survive – thrive!
©2010 Oliver Peters
Modern digital acquisition, post and distribution wouldn’t be possible without data rate reduction, AKA compression. People like to disparage compression, but I dare say that few folks – including most post production professionals – have actually seen much uncompressed content. In fact, by the time you see a television program or a digitally-projected movie it has passed through at least three, four or more different compression algorithms – i.e. codecs.
Avid Media Composer and Apple Final Cut Pro dominate the editing landscape, so the most popular high-end HD codecs are the Avid DNxHD and Apple ProRes 422 codec families. Each offers several codecs at differing levels of compression, which are often used for broadcast mastering and delivery. Apple and Avid, along with most other NLE manufacturers, also natively support other camera codecs, such as those from Sony (XDCAM-HD, HD422, EX) and Panasonic (DVCPRO HD, AVC-Intra). Even these camera codecs are being used for intermediate post. I frequently use DVCPRO HD for FCP jobs and I recently received an edited segment as a QuickTime movie encoded with the Sony EX codec. It’s not a question of whether compression is good or bad, but rather, which codec gives you the best results.
Click on the above images to see an enlarged view. (Images from Olympus camera, prior to NLE roundtrip. Resized from original.)
I decided to test some of these codecs to see the results. I started with two stills taken with my Olympus C4000Z – a 4MP point-and-shoot digital camera. These images were originally captured in-camera as 2288-pixel-wide JPEGs in the best setting and then – for this test – converted to 1920×1080 TIFFs in Photoshop. My reason for doing this, instead of using captured video, was to get the best starting point. Digital video cameras often exhibit sensor noise and the footage may not have been captured under optimum lighting conditions, which can tend to skew the results. The two images I chose are of the Donnington Grove Country Club and Hotel near Newbury, England – taken on a nice, sunny day. They had good dynamic range and the size reduction in Photoshop added the advantages of oversampling – thus, very clean video images.
I tested various codecs in both Avid Media Composer 4.0.5 and Apple Final Cut Pro 7. Step one was to import the images into each NLE. In Avid, the conversion occurs during the import stage, so I set my import levels to RGB (for computer files) and imported the stills numerous times in these codecs: 1:1 MXF (uncompressed), DNxHD145, DNxHD220, DNxHD220x, XDCAM-EX 35Mbps and XDCAM-HD422 50Mbps. In Final Cut Pro, the conversion occurs when files are placed on the timeline and rendered to the codec setting of that timeline. I imported the two stills and placed and rendered them onto timelines using these codecs: Apple 8-bit (uncompressed), ProRes LT, ProRes, ProRes HQ, DVCPRO HD and XDCAM-EX 35Mbps. These files were then exported again as uncompressed TIFFs for comparison in Photoshop. For Avid, this means exporting the files with RGB levels (for computer files) and for FCP, using the QuickTime Conversion – Still Image option (set to TIFF).
Note that in Final Cut Pro you have the option of controlling the import gamma settings of stills and animation files. Depending on the selection you choose (source, 1.8, 2.20, 2.22), your video in and back out of Final Cut may or may not be identical to the original. In this case, choosing “source” gamma matched the Avid roundtrip, whereas using a gamma setting of 2.2 resulted in a darker image exported from FCP.
Click on the above images to see an enlarged view.
You’ll notice that in addition to various compressed codecs, I also used an uncompressed setting. The reason is that even “uncompressed” is a media codec. Furthermore, to be accurate, compression comparisons need to be done against the uncompressed video image, not the original computer still or graphic. There are always going to be some changes when a computer file is brought into the video domain, so you can’t fairly judge a compressed video file against the original photo. Had I been comparing video captured through a hardware card, then obviously I would only have uncompressed video files as my cleanest reference images.
I lined up the exported TIFFs as Photoshop layers and generated comparisons by setting the layer mode to “difference”. This generates a composite image based on any pixel value that is different between the two layers. These difference images were generated by matching a compressed layer against the corresponding Avid or FCP uncompressed video layer. In other words, I’m trying to show how much data is lost when you use a given compressed codec versus the uncompressed video image. Most compression methods disproportionately affect the image in the shadow areas. When you look at a histogram displaying these difference results, you only see levels in the darkest portion of an 8-bit scale. On a 0-255 range of levels, the histogram will be flat down to about 20 or 30 and then slope up quickly to a spike at close to 0.
This tells you that the largest difference is in the darkest areas. The maximum compression artifacts are visible in this range. The higher quality codecs (least compressed), exhibit a smaller histogram range that is closer to 0. The more highly-compressed codecs have a fatter range. This fact largely explains why – when you color grade highly compressed camera images – compression artifacts become quite visible if you raise black or gamma levels.
The resulting difference images were then adjusted to show artifacts clearly in these posted images. By adjusted, I mean changing the levels range by dropping the input white point from 255 to 40 and the output black point from 0 to 20. This is mainly for illustration and I want to reiterate that the normal composite images DO NOT look as bad as my adjusted images would imply. In fact, if you looked at the uncorrected images on a computer screen without benefit of a histogram display, you might think there was nothing there. I merely stretched the available dynamic range for demonstration purposes.
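The difference-and-stretch procedure described above is easy to reproduce outside of Photoshop. Here’s a minimal sketch in Python with numpy – the frames are synthetic stand-ins (random noise plus a small perturbation standing in for compression error), and the function names are my own, but the math mirrors the “difference” blend mode and the levels move (input white 255→40, output black 0→20) used for the posted illustrations:

```python
import numpy as np

def difference_layer(a, b):
    """Photoshop-style 'difference' blend: per-pixel absolute difference."""
    return np.abs(a.astype(np.int16) - b.astype(np.int16)).astype(np.uint8)

def stretch_levels(img, in_white=40, out_black=20):
    """Levels adjustment used for the illustrations: drop the input white
    point to 40 and raise the output black point to 20, exaggerating
    shadow-range artifacts that are otherwise nearly invisible."""
    x = img.astype(np.float64) / in_white           # input white 255 -> 40
    x = out_black + x * (255 - out_black)           # output black 0 -> 20
    return np.clip(np.round(x), 0, 255).astype(np.uint8)

# Hypothetical frames: an uncompressed reference vs. a compressed roundtrip
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, (1080, 1920), dtype=np.uint8)
noise = rng.integers(-6, 7, reference.shape)
compressed = np.clip(reference.astype(np.int16) + noise, 0, 255).astype(np.uint8)

diff = difference_layer(reference, compressed)
print(diff.max())                  # differences hug the bottom of the 0-255 scale
print(stretch_levels(diff).max())  # the stretch makes them plainly visible
```

As in the article’s histograms, the raw difference image sits entirely in the darkest few code values; only after the levels stretch do the artifacts become obvious on screen.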
Of these various codecs, the Apple DVCPRO HD codec shows some extreme difference results. That’s because it’s the only one of these codecs that uses horizontal raster scaling. Not only is the data compressed, but the image is horizontally squeezed. In this roundtrip, the image has gone from 1920-pixels-wide (TIFF) to 1280 (DVCPRO HD) back to 1920 (exported TIFF). The effects of this clearly show in the difference image.
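The loss from that raster roundtrip can be sketched numerically. Below, a crude linear resampler (a stand-in for a real scaler, not the codec’s actual filter) squeezes one 1920-sample scanline to 1280 samples and back, showing that horizontal detail discarded at 1280 cannot be recovered:

```python
import numpy as np

def resize_row(row, new_width):
    """Linear resample of one scanline (a crude stand-in for a real scaler)."""
    old = np.linspace(0, 1, len(row))
    new = np.linspace(0, 1, new_width)
    return np.interp(new, old, row)

rng = np.random.default_rng(1)
line = rng.integers(0, 256, 1920).astype(np.float64)  # one detail-rich scanline

squeezed = resize_row(line, 1280)    # DVCPRO HD's narrower raster
restored = resize_row(squeezed, 1920)

err = np.abs(line - restored)
print(err.mean())   # nonzero: horizontal detail lost in the 1920->1280->1920 trip
```

Real picture content fares better than random noise, but the principle stands: the DVCPRO HD roundtrip adds scaling loss on top of its data compression, which is why its difference image stands out from the others.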
Click on the above images to see an enlarged view.
There are a couple of other things you may notice, such as level differences between the Avid and Apple images and between each of these and the originals. As I said before, there will always be some differences in this sort of conversion. Plus, Apple and Avid do not handle color space, level and gamma mapping in the same way, so a round trip through each application will yield slightly different results. Generally, if 2.2 gamma is selected for imported stills, the Apple FCP image will have a bit more contrast and somewhat darker shadow areas when compared to Avid on a computer screen – even when proper RGB versus Rec. 709 settings are maintained for Avid. This is mainly a result of the various QuickTime and other conversions going on.
If I were to capture video with Avid DX hardware on the Media Composer and AJA, Matrox or Blackmagic hardware on FCP – and compared these images on a video monitor and with scopes – there would likely be no such visible difference. When I used “source” gamma in FCP, the two matched each other. Likewise, when you review the difference images below, 2.2 gamma in this case resulted in a false difference composite between the FCP uncompressed clip and the original photo. The “source” gamma version more closely resembles the Avid result and is the right setting for these images.
The take-away from these tests should be that the most important comparisons are those that are relative, i.e. “within species”. In other words, how does ProRes LT compare to ProRes HQ or how does DNxHD 145 compare to DNxHD 220x? Not, how an Avid export compares with a Final Cut export. A valid inter-NLE comparison, however, is whether Avid’s DNxHD220x shows more or less compression artifacts than Apple’s ProRes HQ.
I think these results are pretty obvious: higher-data-rate codecs (less compression) like Apple ProRes HQ or Avid DNxHD 220x yield superb results. Lower-data-rate codecs (more compression) like XDCAM-EX yield results that aren’t as good. I hope that arming you with some visible evidence of these comparisons will help you better decide which post trade-offs to make in the future.
(In case you’re wondering, I do highly recommend the Donnington Grove for a relaxing vacation in the English countryside. Cheers!)
Click on these images to see an enlarged view.
©2010 Oliver Peters