Easy Canon 5D post – Round II

RED’s Scarlet appears to be just around the corner and both Sony and Panasonic seem to be responding to the challenge of the upstart photo manufacturers. No matter what acronym you use – DSMC, HD-DSLR, HDSLR – these hybrid HD video / still photo cameras have grabbed everyone’s attention. 2010 may indeed be the year that hybrid digital SLR cameras hit their stride.

The Canon EOS 5D Mark II showed the possibilities in late 2008 when Vincent Laforet released Reverie, but like all of these new camera products, the big question was how to best handle the post. The 5D (so far) only shoots video at a true 30fps – lacking both the filmic 24fps rate and the video-friendly frame rates (29.97, 25 or 23.976). That oversight was corrected in Canon’s EOS 7D and EOS 1D Mark IV models and may soon be corrected by a firmware update to the 5D. Even so, the 5D has remained a preferred option, because of its low light capabilities and full frame sensor. Photographers, videographers and filmmakers love the shallow depth-of-field, so a 24p-capable 5D is certainly on many wish lists.


Until the 5D gets a 24fps upgrade [EDIT: coming in March, download will be here], folks in post will have to contend with the 30fps footage generated by the camera. Last year I wrote an article on how to post a 5D project, which covers a lot of the basics. I’ve since done more 5D projects and formed a number of opinions and workflow tips. I’ve picked up many of these from reading Philip Bloom and Bruce Sharpe (PluralEyes inventor) and at the end of this post, I’ll include a number of useful links.

My first observation, having posted several 5D projects, is that you get the best results from these new cameras when you treat them like film. Use classical production methods – slow pans, steady hand-held work, tripods, dollies and record audio as double-system sound. Secondly, allow time for processing files and syncing sound before you expect to start editing. 35mm film shoots typically require a day or more between the production day and post for lab processing and film transfer. The equivalent is true for HDSLRs. Whether it’s RED or an HDSLR, you have to become the film lab and transfer house. Once you wrap your head around that concept, the workflow steps make a lot more sense.


I recently cut another Canon 5D Mark II job with Director/DP Toby Phillips. This was an internet commercial for the wine growers of the Yarra Valley region of Australia. Yarra Valley is to Australia what Napa Valley is to California. Coincidentally, it’s also the region ravaged by the horrific fires of 2009. In order to keep the production light, Toby’s crew was bare bones and nearly all images were shot under available light – including sodium vapor lighting in warehouse areas. The creative concept was intended to be tongue-in-cheek. Real workers discussed why their job was the most important role in winemaking. The playful interplay between worker comments and winery/vineyard footage rounds out this :60 commercial.

Production tips

Toby rigged his camera with a modified plate, rails and matte box from his existing film equipment, including Arri and Manfrotto parts modified by Element Technica. The 5D records passable sound on its own, but it isn’t ideal when quality matters. To get around this, a Zoom H4n handheld recorder was used for double-system sound. The Zoom has XLR inputs for external mics, in addition to its built-in XY-pattern stereo mics. A Sennheiser shotgun was plugged into the Zoom, which in turn recorded uncompressed 16-bit/48kHz WAV files. The headphone output of the Zoom was connected to the 5D, so that the camera files always contained reference audio.

There are a number of important tips to note here. First, there’s an impedance mismatch in this connection and the 5D uses an AGC circuit to attenuate audio, so the camera file audio will be clipped. To avoid this, turn down the headphone output level to a very low volume. Second, because the audio is clipped, if you forget to press record on the Zoom, the 5D’s audio is NOT acceptable. Following the traditional approach, a slate with clapstick was used for every sound take. The Zoom records numbered, sequential files, so the crew also wrote the audio file number on the slate for each take. These two steps make it easy to identify the correct audio take and to sync audio and video later in post.

Post workflow / pre-processing

This production configuration isn’t too different from shooting with other tapeless video cameras, but post requires a unique workflow. Key steps include video format conversion, speed adjustment and syncing the sound.

Video conversion – The Canon EOS 5D Mark II records 40Mbps H.264 QuickTime movies in a 1920x1080p/30fps format. H.264 is not conducive to smooth editing in its native form. 5D files can be up to 4GB in length (about 12 minutes), but there is no clip-spanning provision, as in P2 or XDCAM. Where and when you convert the native H.264 camera files depends on your NLE. With Avid Media Composer, files are converted into Avid’s MXF format upon import. The import will be slow, since it’s also transcoding, but this is a one-step process. Unfortunately it ties up your NLE, so maybe in the future Avid’s MetaFuze or AMA will come to the rescue.

I cut with Apple Final Cut Pro, which does permit direct editing with the H.264 files, but you don’t really want to do that. I typically convert 5D files into Apple ProRes, using a batch setting in Compressor. You can use other codecs, of course, like DVCPRO HD, ProRes HQ, ProRes LT, etc. Philip Bloom likes to convert his files to the EX format using MPEG Streamclip. The reason for EX, according to him, is that the data rate is similar to the 5D files, so storage requirements don’t expand significantly.

The wine commercial had 127 camera files (2 hours 11 minutes of raw footage), which were converted to ProRes in about 4 hours on an 8-core Mac Pro. Storage needs increased from 40GB (H.264) to 142GB (ProRes). The nice part of this step (at least for FCP users) is that the conversion can be left as a batch to churn unattended. One word of caution, though. Compressor has a tendency to choke and crash when you throw tons of files at it, like 100+ camera files. So I usually do these conversions in groups of 20 or so files at a time.
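
If you want to budget drive space before committing to a transcode, the arithmetic is simple. Here’s a quick Python sketch – the bitrates are ballpark figures back-figured from this project’s actual file sizes, not official specs:

```python
# Rough storage math for an H.264-to-ProRes transcode.

def gigabytes(minutes, mbps):
    # megabits -> gigabytes (decimal), for a given duration
    return minutes * 60 * mbps / 8 / 1000

raw_minutes = 2 * 60 + 11               # 2 hours 11 minutes of footage
print(gigabytes(raw_minutes, 40))       # H.264 @ ~40Mbps  -> ~39GB
print(gigabytes(raw_minutes, 145))      # ProRes @ ~145Mbps -> ~142GB
```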

Video speed adjustment - The 5D files are a true 30fps and not the fractional video rate of 29.97fps. Avid will convert these files to the correct rate on import, if audio and video tracks have been separated. According to Michael Phillips of Avid (one of their workflow gurus), “If the MOV file is video-only, then I use the ‘ignoreQtrate true’ console command and get a frame-for-frame import, resulting in a .1% slow down.” This is analogous to what happens when film is transferred to video. In my testing, it was important to first strip off the audio track of the MOV in order for this to work. You can do this using QuickTime Player Pro 7.

Final Cut permits native 30fps editing, but then your files won’t play through standard video gear, like a KONA card. I suppose for an internet spot this wouldn’t matter, however we had other uses, so a speed adjustment would have to happen at some point. I could either convert to 29.97 first and be done with it – or I could cut at 30fps and convert the finished spot. I normally opt to convert the ProRes files to 29.97fps first. To do this I use the Cinema Tools “conform” feature. That’s a nearly instantaneous process, which only alters the file’s metadata. It tells media players to run the file at the fractional frame rate of 29.97fps instead of 30fps.
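
For anyone who wants to see the arithmetic behind that conform, here it is in a few lines of Python – just the math, not anything Cinema Tools actually runs:

```python
# The 0.1% relationship: a conform keeps every frame but plays them
# back slightly slower, so the clip runs a touch longer.

frames = 60 * 30              # a :60 clip shot at a true 30fps
print(29.97 / 30)             # 0.999 -> the 0.1% slow down
print(frames / 29.97)         # new duration: ~60.06 seconds
```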

Audio speed adjustment - Changing the frame rate from 30 to 29.97 means the picture has been slowed by .1% and so audio must also undergo the same pulldown. If you use a location sound recorder capable of a 48.048kHz sample rate, then Avid Media Composer will automatically adjust the rate upon import back down to 48kHz and achieve the pulldown. In addition, there are various utilities that can “restamp” the metadata for the sample rate. A good choice is Sound Devices’ free Wave Agent. The Zoom recorder created 48kHz files, but these could be restamped as 47.952kHz by such a software utility. In the case of Media Composer, the software sees this on import and slows the file by .1% to achieve the desired 48kHz sample rate. Thus the audio is back in sync.
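
To make the restamp idea concrete, here’s a minimal Python sketch using the standard wave module. It isn’t how Wave Agent works internally – it rewrites the file rather than patching the header in place – but the effect is the same: identical samples, a new playback rate, a .1% slowdown. As always, run it on copies only:

```python
import wave

# "Restamp" a PCM WAV copy from 48000 to 47952Hz (48000 x .999).

with wave.open("zoom_take_copy.wav", "rb") as src:
    params = src.getparams()
    audio = src.readframes(params.nframes)

with wave.open("zoom_take_pulldown.wav", "wb") as dst:
    dst.setnchannels(params.nchannels)
    dst.setsampwidth(params.sampwidth)
    dst.setframerate(47952)     # was 48000; samples are untouched
    dst.writeframes(audio)
```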

Final Cut Pro works differently from Media Composer, so your results may vary. FCP simply tries to maintain the same duration, and thus forces a render in the timeline to convert the sample rate to 48kHz without altering the speed. Instead, I recommend rendering new versions of the audio – with the speed change applied – before importing the files into FCP. When I initially tried the restamp approach, I got sync drift. After posting this entry, I tried it again with Wave Agent and the results were dead-on in sync. The only issue is that you then have to render the audio in FCP to get the correct sample rate. I’m not a big fan of how FCP renders audio files, so I prefer to correct them prior to import. I have also had inconsistent results with how FCP handles sync with external audio files.

Because of these various concerns, I used Telestream Episode Pro and created an audio-only preset that included a speed change with a .999 value. I used this preset to batch-convert twenty 16-bit/48kHz WAV files from the Zoom recorder (1 hour 9 minutes of raw dialogue) into “pulled down” AIF files. This took about two minutes. Whichever approach you take, I urge you to do this only with copies of your files. Some of these utilities use destructive processes, so you don’t want to change your originals.

(Note: For a better understanding of how BWF (broadcast wave files), QuickTime and Final Cut Pro interact, check out this product (BWF2XML) and description by Spherico.)

Syncing the dailies – After these conversion steps, the files are ready to import into FCP. Audio and video files are now in optimized formats that will match FCP’s native media settings. Next, you’ll have to sync the audio and video takes. If the crew used a clapstick, it’s easy to sync in either Avid or Final Cut using the standard group or multiclip routines.

For this wine spot, I used Singular Software’s PluralEyes to automatically sync all sound takes. PluralEyes was one of the highlights of NAB 2009 and is about as close to magic as any software can get. It analyzes audio waveforms to compare and align the reference camera audio against the separate audio files. This is why it’s critical to record even poor-quality reference audio to the camera in order to give PluralEyes something to analyze. Unfortunately for the Avid editor, PluralEyes only works with Final Cut and Sony Vegas Pro. It’s not a plug-in, but works on a timeline labeled “pluraleyes” in an open and saved FCP project.

Here are the steps:

a) Create a blank FCP timeline named “pluraleyes”.

b) Drag & drop all camera clips with dialogue (audio & video) onto the timeline (random order is OK).

c) Drag & drop all separate audio files onto the same timeline onto unused audio tracks (random order is OK).

d) Disable any redundant audio track (speeds up analysis).

e) Save the project, launch PluralEyes, start analysis/sync processing.

After a few minutes of processing, PluralEyes will automatically create a series of new FCP sequences – one for each sync take. The audio will be aligned so that the double-system sound files are now perfectly in sync with the camera audio.
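
Under the hood, this kind of waveform matching boils down to cross-correlation. PluralEyes’ actual algorithm is proprietary, so treat the following as the textbook version of the idea only (real tools use FFT-based correlation and other tricks to cope with hours of audio – np.correlate is far too slow for that):

```python
import numpy as np

# Find the offset between the camera's scratch audio and the Zoom
# recording by locating the cross-correlation peak.

def sync_offset(camera_audio, zoom_audio, sample_rate=48000):
    """Both inputs are mono float arrays at the same sample rate."""
    corr = np.correlate(camera_audio, zoom_audio, mode="full")
    lag = int(np.argmax(corr)) - (len(zoom_audio) - 1)
    # lag = samples to slide the Zoom clip so it lines up with camera audio
    return lag / sample_rate    # offset in seconds
```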

Post workflow / edit / mix / grade

Now that you have sync takes, you can pretty much edit any way you like. I picked the following tip up from Bloom’s blog. To make editing easier on the wine spot, I took these new sequences and renamed them according to the person who was speaking and which take it was. I exported the sequences as QuickTime reference movies (not self-contained) to a location on my media drives. I then re-imported these reference movies, in effect turning them into master clips with merged 5D video and Zoom audio. These became my source for all sync takes. Any b-roll shots came from the regular ProRes files.

The rest of the edit went normally. I’ve got my Mac Pro set up with two internal 1TB drives configured as a software RAID-0 for media files (2TB). No issues with cutting ProRes this way. I bounced the audio to Soundtrack Pro for the final mix. No real reason, other than to take advantage of some of the plug-ins to add a touch of “sparkle” to the dialogue.

I used Apple Color for the grade. If you follow my blog, you know that I could have tackled this easily with various plug-ins and stayed inside FCP, however, I do like the Color interface and toolset. This spot was ideally suited to go through a grading pass using Color. As it turned out, this step might have been a bit premature due to client revisions. In hindsight, using plug-ins might have been preferable. I thought the cut was locked, so proceeded with the correction in Color.

The first version of the spot was a faster paced cut (57 shots in :60), so the client requested a second version with a little more breathing room and a few alternate dialogue takes. This necessitated going back into the footage. Those familiar with Color know that it generates new media files when it renders color correction. This is required to “bake in” the color corrections. If you assign handles of a few seconds to each shot, you have some room to trim shots when you are back in Final Cut – but it doesn’t help with footage that was never sent to Color.

I decided to step back to the sequence before “sending to” Color and cut a second, more-relaxed version (46 shots in :60). Although this meant starting a new Color project, I was aided by Color’s ability to store grades. I could save the settings for each of the shots in version one and apply these settings to the similar or same shot in version two, within the new Color project. Adjust keyframes, tweak a few settings, render and bingo! – the grade is done. With :02 handles on each shot, version one (57 shots) rendered in about 40 minutes and version two (46 shots) took about 30 minutes. Both as 1920×1080 ProRes (29.97fps) media. Of course, like many commercials this wasn’t the end and a few more changes were made! The final version ended up being a combination of these two cuts.

(As an aside, Stu Maschwitz has done a nice post about Color Correcting Canon 7D Footage on his ProLost blog.)

Post-processing / 24fps conversion

This could have been the end of the post for the wine spot, but there’s one more step. A big reason people like these HDSLRs is because they provide a very cost-effective way of getting that elusive “film look”. One part of that look is the 24fps frame rate. Yes – some film is shot at 30fps for spots and TV shows – so technically the 5D’s 30p footage is just fine. But clients really do want that 24fps look.

You can convert these 5D files quite cleanly to 24fps. This is a process I picked up from Bloom and discussed in my previous Canon post.  Here are the steps:

a) Note the exact duration of the 29.97fps timeline.

b) Export a self-contained QuickTime movie of the finished 29.97 sequence.

c) Bring that exported file into Compressor and set up a ProRes-to-ProRes conversion. Use a frame rate of 24fps (it actually is 23.98, but Compressor labels it as 24).

d) Turn Frame Controls on, set Rate Conversion to Best and change Duration from 100% of source to the exact duration of the original 29.97 timeline.

Now let Compressor crunch for a while. My :60 spot took about 36 minutes to convert from 29.97 to 23.98. For good measure, I also take the finished file into Cinema Tools and conform it to 23.98, just in case it’s 24 and not 23.98. Then I import the file back into FCP, create a new 23.98 timeline and edit the converted clip into it. If everything is done correctly, this media should match without any rendering needed. Then I’ll copy and paste the audio from the 29.97 timeline to the 23.98 timeline. This should be in sync.
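
If you’re curious why this render takes so long for a :60 spot, the frame math tells the story (a sketch of the numbers only – Compressor’s optical flow does the heavy lifting):

```python
# The duration stays fixed while the frame count changes, so every
# output frame must be synthesized via motion-compensated conversion.

duration = 60.0                          # the :60 spot
print(round(duration * 30000 / 1001))    # 1798 frames at 29.97fps
print(round(duration * 24000 / 1001))    # 1439 frames at 23.976fps
```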

A couple of additional pointers. Since I don’t want to have this conversion process get confused with titles and dissolves, I remove all graphics and make dissolves into cuts (with handles) in the 29.97 sequence, prior to export. I actually exported the wine spot timeline as 1:04 instead of :60. When I was back in the 23.98 timeline, I fixed these trims, added back the fades, dissolves and graphics in order to complete the sequence.

The second issue is speed changes. I sped up two shots, which actually passed through Color and this 24p conversion just fine – except for one problem. My 29.97 timeline was actually an interlaced timeline. This doesn’t matter for the camera files, as they are inherently progressive. However, any timeline effects, like speed changes, titles and transitions are processed with interlaced motion. This affected the two sped-up shots in the 24p conversion, resulting in interlace artifacts. The simple fix was to replace these with the normal-speed media and redo the speed change in the 23.98 timeline. No big deal, but something to be mindful of in the future.

Finally, although this conversion is very good, it isn’t perfect. Cuts stay as clean cuts and slow action converts cleanly, looking as if it were shot at 24fps. Fast motion, however, does introduce some artifacts. These mainly show up as blended frames in areas of fast activity or fast camera movement. It’s no big deal really, as it tends to add to the filmic look of the material – a bit like motion blur.

Remember that this is an OPTIONAL and SUBJECTIVE step. I personally think that 30p is a “sweet spot” for LCD and plasma screens. This is especially true for the web and computer displays. In the end, my client decided they liked the 30p image better, because it was crisper.

The finished video can be viewed in HD on Vimeo, along with an “Alternate Cut” at 30fps (no 24p conversion).

Additional tools

Since the media files the HDSLR cameras generate are an outgrowth of consumer-level file creation, there is very little metadata in them that an NLE would care about. No reel numbers, SMPTE timecode, edge numbers, etc. That’s good and bad. Good – in that the folder and file structure is quite simple and very malleable. Bad – in that you can have duplicate file names and there’s no ability to span clips. Think of it like a roll of 35mm negative, which would have about 11 minutes of capacity and only gains new metadata when it’s transferred to video.

Since files are sequentially numbered on the memory card, once you start recording to the next card, it’s likely to have repeating file names. This is true both in the camera and on a recorder like the Zoom, simply because there is no reel (i.e. card) ID name or number. The good news is that you can rename these files without corrupting metadata – something you can’t safely do with RED or P2 media – but it means you have to manually impose some sort of structure yourself.

R-Name - One utility that can help is R-Name. Unfortunately it may be out of development, but I still use version 3, which works with Snow Leopard. You might be able to find a download still lurking in the depths of the internet – or, if not – a similar utility or an Automator routine. R-Name lets you rename files (as the name implies), but you can also append prefix or suffix character strings to a file name. For example, a set of media files from a 5D may be named MVI_1073.mov through MVI_1200.mov and you’d like to add a prefix for Card 1. Simply create an R-Name batch that adds a prefix such as “C001_” to all these files. Run the batch and voila – your files are now named C001_MVI_1073.mov through C001_MVI_1200.mov. Follow this process for each card and it becomes a nice, fast way of organizing your media.
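
If you can’t find R-Name, the same prefixing job is a few lines of Python. This is a hypothetical stand-in script, not R-Name itself – and as with any batch rename, test it on copies first:

```python
import os

# Prepend a card ID to every camera file in a folder, so
# MVI_1073.mov becomes C001_MVI_1073.mov. Run per card folder,
# changing the prefix each time ("C002_", etc.).

def prefix_card(folder, prefix="C001_"):
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith(".mov") and not name.startswith(prefix):
            os.rename(os.path.join(folder, name),
                      os.path.join(folder, prefix + name))

# prefix_card("/Volumes/Media/Card1", "C001_")
```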

QtChange – If reel numbers and timecode are important for you to have, then check out VideoToolshed’s QtChange. This is a comprehensive QuickTime utility, which lets you alter several file parameters. Most importantly, you can add or change reel number and timecode values. Although this isn’t essential for you to cut in FCP, certain functions, like dupe detection, won’t work without an assigned reel number. There are several ways to alter this info in QtChange, but one of the ways it can work is to automatically use the date stamp of the file for the reel number and the time stamp as a starting timecode number. Files can be changed in a batch, but be careful as these are destructive changes. Developer Bouke Vahl has been making ongoing changes to the product and recently added Avid Log Exchange functionality.
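
To illustrate the date/time-stamp idea, here’s a small Python sketch that derives a reel name and a starting timecode the way QtChange can. It only shows the derivation – actually writing these values back into the QuickTime file is QtChange’s job:

```python
import os
from datetime import datetime

# Reel from the file's date stamp, start timecode from its time of day.

def reel_and_tc(path):
    stamp = datetime.fromtimestamp(os.path.getmtime(path))
    reel = stamp.strftime("%y%m%d")                 # e.g. "100215"
    start_tc = stamp.strftime("%H:%M:%S") + ":00"   # time-of-day TC
    return reel, start_tc

# print(reel_and_tc("C001_MVI_1073.mov"))
```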

MetaCheater – One deficiency of Avid Media Composer has been the inability to directly read all of the metadata from a QuickTime file. For instance, older versions of Media Composer and Symphony would not read QuickTime timecode. This has been corrected in the most recent versions; these apps now import the timecode, but still no reel number. In addition, the Canon cameras don’t generate timecode or reel numbers, so you must add them if you need such information. You could use QtChange to add reel IDs and timecode, but while Media Composer would import the timecode, the reel ID still wouldn’t come across. MetaCheater is a simple way around this. This program extracts QuickTime metadata and creates an Avid Log Exchange (ALE) file with proper reel numbers and timecode values. Import the ALE file into Media Composer and then batch import the corresponding QuickTime movies. In this process, Media Composer uses the timecodes and reel numbers from the ALE instead of default values, so your Avid bins properly reflect the reel and timecode information added to the 5D files. It would be just as if this media had been captured from a videotape source.
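
For the curious, an ALE is just a tab-delimited text file with Heading, Column and Data sections. The sketch below shows the general shape with hypothetical clip values; the exact heading fields you need depend on your project, so treat this as illustrative rather than a guaranteed-importable file:

```python
# Write a minimal ALE with made-up clips.

clips = [
    ("C001_MVI_1073", "01:00:00:00", "01:00:45:10", "C001"),
    ("C001_MVI_1074", "01:01:02:00", "01:01:31:15", "C001"),
]

with open("canon5d.ale", "w") as ale:
    ale.write("Heading\nFIELD_DELIM\tTABS\nVIDEO_FORMAT\t1080\n"
              "FPS\t29.97\n\nColumn\nName\tTracks\tStart\tEnd\tTape\n\n"
              "Data\n")
    for name, start, end, tape in clips:
        ale.write(f"{name}\tVA1\t{start}\t{end}\t{tape}\n")
```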

Here are a few comparisons of the color grading applied to these shots – for each, the original image, the graded image and a split screen view.

Addendum (Feb 2010)

After I initially wrote this article in January, I pulled it down for some tweaks. In the interim, I got busier for a few weeks until I could repost it. In that time, I was able to do some more testing with Avid Media Composer 4.0.5 on another Canon 5D spot. I am adding my observations here, since many of my readers are Avid cutters and want to know the best way to handle these files in Media Composer.

Unlike FCP, there’s no simple drag-and-drop method in Avid. If you elect to convert the files using an external encoding application, you still have to bring the files in through Avid’s import routines. This adds a step and effectively doubles the total time it takes to convert and import as compared with FCP. Another frustrating issue is that when you move from the native camera files into Avid, you have to move out of the QuickTime color and gamma architecture and into an MXF structure using Avid codecs.

In the Avid world, video files are treated using the rec. 601/709 colorspace (16-235 on an 8-bit scale) and computer files are assumed to be in RGB space (0-255). When you import or export files to and from Media Composer, you always need to check the proper setting – RGB or 601/709. Unfortunately (or fortunately depending on your POV), this is largely hidden from view in the QuickTime world. Furthermore, Canon really hasn’t provided documentation that I’m aware of regarding the colorspace that these cameras work in and how closely color scaling conforms to either RGB or rec. 709. The long and short of it is that when you move in and out of QuickTime, you are often fighting level and gamma changes to varying degrees.
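
The practical consequence is easy to quantify. A minimal Python sketch of the two level mappings shows exactly how far blacks and whites shift if the wrong interpretation is applied on import or export:

```python
# Full-swing RGB (0-255) versus video range (16-235), as plain math.

def rgb_to_video(x):
    return 16 + x * 219 / 255       # 0 -> 16, 255 -> 235

def video_to_rgb(y):
    return (y - 16) * 255 / 219     # 16 -> 0, 235 -> 255

print(rgb_to_video(0), rgb_to_video(255))    # 16.0 235.0
```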

I tried a number of different import and encoding methods with Media Composer. All of them work, but with various trade-offs. The easiest method is as I outlined earlier in this article – simply import the H.264 camera files into Media Composer. When you do that, select RGB color space. The import will take approximately 3:1 to 4:1 (three to four times the footage’s run time) on a fast machine, depending on the target codec you choose, because the media is being transcoded during this import stage. I had the fastest encoding times using the Sony XDCAM-EX codec, which is now natively supported by Media Composer.

A second option is to use Apple Compressor (or another QuickTime encoder) to convert the camera files into QuickTime movies using an Avid DNxHD codec. This is the same approach as converting to Apple ProRes 422. Unfortunately, Avid still imposes a longer import time to get these files from QuickTime MOVs into the MXF media format. Although Compressor offers a choice between RGB and 709 when you select DNxHD, it doesn’t seem to make any difference in the appearance of the files. The files are converted to 709 color space and so should be imported into Avid with the import setting on 709. I hope that this import step will be eliminated at some point in the future, when and if Avid decides to support QuickTime files through its AMA feature.

The fastest, current method was to use Episode Pro again. MXF is now supported in this encoder, so I was able to convert the H.264 files into MXF-wrapped XDCAM-EX files that were ready for Avid. The beauty of this is that the work can be done on an external machine in a batch, and the import back into Media Composer is very fast. No transcoding is needed, as this just becomes a file copy. The EX codec looked clean and wasn’t too taxing on my Mac Pro. You also have the option of using XDCAM-HD and XDCAM HD 422 (50Mbps) codecs in the MXF file format. The only issue was that one of the media files appeared to be corrupt after encoding and had to be re-encoded. This might be an anomaly, but we ARE dealing with two long-GOP codecs in this process! Another benefit of this route is that no user interaction is required to determine color space settings.

Now to the level issues. In all of this back and forth – once I exported back out to QuickTime (ProRes 422 codec, using RGB setting on export) – no conversion identically matched the original camera files. When I compared versions, direct import of the files (H.264 into Avid) yielded slightly darker results. External conversion to DNxHD and then importing, yielded a slight gamma shift. Conversion/import via the MXF route appeared a bit lighter than the original. None of these were major differences, though. If you are going to color grade the final product anyway, it doesn’t really matter. I finally settled on a 2-step conversion workflow (described in my February 21 post) that yielded good results going from the 5D files into Media Composer and then to FCP.

As far as editing, syncing and grading, that is the same as with any other acquisition media. I used the same preparatory steps as outlined earlier (Cinema Tools conform to 29.97 and a .999 speed adjustment of the audio) – then converted and imported the video files. Inside Media Composer (1080p/29.97 project), everything synced and edited just as I expected.

Also in early February, Canon announced its EOS Movie Plugin-E1 for Final Cut Pro. Click here for the description. It’s supposed to be released in March and if I understand their description correctly, it allows you to import camera clips via FCP’s Log and Transfer module. During the import stage, files are transcoded to ProRes. Unfortunately there is no explanation of how frame rates are handled, so I presume the files are imported and remain at their original frame rate.

My conclusion after all of this is that both FCP and Media Composer are just fine for working with HDSLR projects. FCP seems a bit faster at the front, but in the end, you’re just traveling two different roads to get to the same destination.

I leave you with one last tidbit to ponder. Apple has just introduced Aperture 3, which includes HD video clip support in slideshows. I wonder how apps like Aperture, Lightroom and Photoshop (which already supports some video functions) will impact these HDSLR workflows in the future?

(UPDATE: If you got here through links from other blogs, make sure you read the updated Round III post as well.)

Useful Links

5DMk2 blog – 1001 Noisy Cameras

Assisted Editing

Philip Bloom

Canon Explorers of Light

Canon Filmmakers

Cinema5D

DSLR HD

DVinfo

DVXuser

Element Technica

FreshDV

Tyler Ginter

Vincent Laforet

ProLost

Red Rock Micro

Bruce Sharpe

Spherico

Peter Wiggins

Planet5D

Video Toolshed

Zacuto

©2010 Oliver Peters

Tips for Small Camera and Hybrid DSLR Production


It started in earnest last year and shows no sign of abating. Videographers are clearly in the midst of two revolutions: tapeless recording and the use of the hybrid still/video camera (HDSLR). The tapeless future started with P2 and XDCAM, but these storage devices have been joined by other options, including CompactFlash, SD and SDHC memory cards. The acceptance of small cameras in professional operations first took off with DV cameras from Sony and Panasonic, especially the AG-DVX100. These solutions have evolved into cameras like the Sony HVR-Z7U and PMW-EX3 and Panasonic’s AG-HPX170 and AVCCAM product line. Modern compressed codecs have made it possible to record high-quality 1080 and 720 HD footage using smaller form factors than ever before.

This evolution has sparked the revolution of the HDSLR cameras, like the Canon EOS 5D Mark II, the new Canon EOS 7D and 1D Mark IV and the Nikon D90, D300s and D3s, to name a few. Although veteran videographers might have initially scoffed at such cameras, it’s important to note that Canon developed the 5D at the urging of Reuters and the Associated Press, so its photographers could deliver both stills and motion video with the least hassle. Numerous small films, starting with photographer Vincent Laforet’s Reverie, have more than proven that HDSLRs are up to the task of challenging their video cousins. From the standpoint of a news or sports department, we have entered an era where every reporter can become a video journalist, simply by having a small camera at the ready. That’s not unlike the days when reporters carried a Canon Scoopic 16mm, in case something newsworthy happened.

These cameras come with challenges, so here is some advice that will make your experience more successful:

1. Ergonomics / stability – Both small video camcorders and HDSLRs are designed for handheld, not shoulder-mounted, operation. This isn’t a great design for stability while recording motion. In order to get the best image out of these cameras, invest in an appropriate tripod and fluid head. For more advanced operations, check out the various camera mounting accessories from companies like Zacuto and Red Rock Micro.

2. Rolling shutter – This phenomenon affects all CMOS cameras to varying degrees. Horizontal movement skews the image, a distortion caused by the time differential between reading out information at the top and the bottom of the sensor. The HDSLRs have been criticized for these defects, but others, like the EX or the RED One, have displayed the same artifacts to a lesser degree. This defect can be minimized by using a tripod and slow (or no) camera movement.

3. Focus – One of the reasons that shooters like HDSLRs is the large image sensor (compared to video cameras) and film lenses, which provide a shallow depth-of-field. This is a mixed blessing when you are covering a one-time event. Still photo zoom lenses aren’t mechanically designed to be zoomed and focused during the shot like film or video zoom lenses. This makes it harder to nail the shot on-the-fly. Since the depth-of-field is shallow, the focus is also less forgiving. Lastly, the focus is often done using an LCD viewer instead of a high-quality viewfinder. Many shooters using both small video cameras and HDSLRs have added an externally-mounted LCD monitor, as a better device for judging shots.

4. Audio – The issue of audio depends on whether we are talking about a Canon 5D or a Panasonic 170. Professional and even prosumer camcorders have been designed to have mics connected. To date, HDSLRs have not. If you are shooting extensive sync-sound projects with a hybrid camera, then you will want to consider using double-system sound with a separate recorder and mixer (human). At the very least, you’ll want to add an XLR mic adapter/mixer, like the BeachTek DXA-5D.

5. Movie files – Each of these cameras records its own specific format, codec and file wrapper. Production and post personnel have become comfortable with P2 and XDCAM, but the NLE manufacturers are still catching up to the best way of integrating consumer AVCHD content or files from these HDSLRs. Regardless of the camera system you plan to use, make sure that the file format is compatible with (or easily transcoded to) your NLE of choice.

6. Capacity – Most of the cameras use a recording medium that is formatted as FAT32. This limits a single file to 4GB, which in the case of the Canon 5D means the longest recording cannot exceed 12 minutes of HD (1920x1080p at 30fps). Unlike P2, there is no spanning provision to extend the length of a single recording. Make sure to plan your shot list to stay within the file limit. Come with enough media. In the case of P2, many productions bring along a “data wrangler” and a laptop. This person will offload the P2 cards to drives and then reformat (erase) the cards so that the crew can continue recording throughout the day with a limited number of P2 cards.

7. Back-up – Always back-up your camera media onto at least two devices in the original file format. I’ve known producers who merely transferred the files to the edit system’s local array and then trashed the camera media, believing the files were safe. Unfortunately, I’ve seen Avids quarantine files, making them inaccessible. On rare occasion, I’ve also seen Final Cut Pro media files simply disappear. The moral of the story is to treat your original camera media like film negative. Make two, verified back-ups and store them in a safe place should you ever need them again.

The new generation of small video camcorders and Hybrid DSLRs offers the tantalizing combination of lower operating cost and stunning imagery. That’s only possible with some care and planning. These tools aren’t right for every application, but the choices will continue to grow in the coming years. Those who embrace the trend will find new and exciting production options.

© 2009 Oliver Peters

Written for NewBay Media and TV Technology magazine

Canon EOS 5D Mark II in the real world


A case study on dealing with Canon 5D Mk2 footage on actual productions.

You could say that it started with Panasonic and Nikon, but it wasn’t until professional photographer Vincent Laforet posted his ground-breaking short film Reverie that the idea of shooting video with a DSLR (digital single lens reflex) camera caught everyone’s imagination. The concept of shooting high definition video with a relatively simple digital still camera was enough for Red Digital Cinema Camera Company to announce the dawn of the DSMC (digital still and motion camera) and to retool the concepts for its much anticipated Scarlet.

The Scarlet has yet to be released, but nevertheless, people have been busy shooting various projects with the Canon EOS 5D Mark II like the one used by Laforet. Check out these projects by directors of photography Philip Bloom and Art Adams. To meet the demand, companies like Red Rock Micro and Zacuto have been busy manufacturing a number of accessories designed specifically for the Canon 5D in order to make it a friendlier rig for the operator shooting moving video.


Frame from Reverie

Why use a still camera for video?

The HOW and WHY are pretty simple. Digital camera technology has advanced to the point that full-frame-rate video is possible using the miniaturized circuitry of a digital still photography camera. Nearly all DSLRs provide real-time video feedback to the LCD display on the back of the camera. Canon was able to use this concept to record the “live view” signal as a file to its memory card. The 5Dmk2 uses a large “full frame 35mm” 21.1 MP sensor, which is bigger than the RED One’s sensor or a 35mm motion picture film frame. Raw or JPEG stills captured with the camera are 5616×3744 pixels in a 3:2 aspect ratio. The video view used for the live display is a downsampled image from the same sensor, which is recorded as a 1920×1080 high-def file. This is a compressed file (H.264 codec) at a data rate of about 40Mbps. 16:9 is wider than 3:2, so the file for the moving image is cropped on the top and bottom compared with a comparable still photo.

The true beauty of the camera is its versatility. A photographer can shoot both still images and motion video with the same camera and at the same settings. When JPEG images are recorded, the same colorimetry, exposure and balance will be applied to both. Alternatively, one could opt for camera raw stills, in which case the photos can still be adjusted with great latitude after the fact, since this data would not be “baked in” as it is with the video. Stills from the camera use the full resolution of this large sensor, so photographs from the Canon 5D are much better than any stills extracted from an HD camera, including the RED One.


Frame from Reverie

Videographers have long used various film lens adapters to gain the lens selection and shallow depth-of-field advantages enjoyed by film DPs. The Canon 5D gives them the advantage of a wide range of glass that many may already own. The camera creates a relatively small footprint compared to the typical video and film camera – even with added accessories – so it becomes a very interesting option in run-and-gun situations, like documentaries. Last but not least, the camera body (no lenses) costs under $3K. So, compared with a Sony EX3 or a RED One, the 5Dmk2 starts to look even more attractive to low-budget filmmakers.

What you lose in the deal

As always, there are some trade-offs and the Canon EOS 5D Mark II is no exception. The first issue is recording time. The Canon 5D uses CF (CompactFlash) memory cards. These are formatted as FAT32 and have a 4GB file limit. Due to this limit, the maximum clip length for a single file recorded by the 5Dmk2 is about 12 minutes. Unlike P2 or EX, there is no provision for file spanning. The second issue is that the camera records at a true 30fps – not a video-friendly 29.97 and not the highly desirable film rate of 23.98 or 24fps.
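
The 12-minute figure follows from simple arithmetic. Here’s a rough Python sketch – video bitrate only, so the theoretical number runs high; audio, container overhead and the camera’s variable bitrate pull the practical limit down to about 12 minutes:

```python
# Back-of-the-envelope math for the 4GB FAT32 file limit.

limit_bits = 4 * 1024**3 * 8          # 4GB expressed in bits
video_bps = 40_000_000                # ~40Mbps H.264 video
print(limit_bits / video_bps / 60)    # ~14.3 minutes, video alone
```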

Audio is considered passable, but for serious projects, double-system, film-style sound is recommended. This workflow would be the same as if you were shooting on film. Traditional slates and/or software like PluralEyes (Singular Software) or FCPauxTC Reader (VideoToolshed) make post syncing picture and sound a lot easier.


Example of the rolling shutter effects used for interesting results

One major limitation cited by many is the rolling shutter that causes the so-called “jello” effect. The Canon 5D uses a single CMOS sensor and nearly all CMOS cameras have the same problem to some degree. This includes the RED One. This image artifact arises because the sensor is not globally exposed at the same point in time, like exposing a frame of 35mm film. Instead, portions of the sensor are sequentially exposed. This means that fast motion of an image or the camera translates into the image appearing to wobble or skew. In the worst case, the object in the frame takes on a certain rubbery quality, hence the name the “jello” effect. It can also show up with strobes and flashes. For example, I’ve seen it on strobe light and gun shot footage from a Sony EX3. In this case, the rolling shutter caused half of the frame to be exposed and the other half to be dark.

Skew or wobble becomes most obvious when there are distinct vertical lines within the frame, such as a lamp post or the edge of some furniture. Fast panning motion of the camera or subject can cause it, but it’s also quite visible in just the normal shakiness of handheld shots. Watch many of the short films on the web and you’ll notice the camera is almost always stationary, tripod-mounted or moving very slowly. In addition, lens stabilization circuitry can also exacerbate the appearance of these artifacts. Yet, in other instances, it helps reduce the severity.
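
If you want to visualize why verticals lean, the skew is easy to model: each row of the sensor is read slightly later than the row above it, so horizontal motion displaces lower rows further. Here’s a toy numpy sketch (the readout fraction is a made-up illustrative value, not a measured Canon spec):

```python
import numpy as np

# Shift each row of a frame by a per-row offset to mimic rolling shutter.

def simulate_skew(frame, velocity_px_per_frame, readout_fraction=0.8):
    """frame: 2D grayscale array; velocity: horizontal motion per frame."""
    height = frame.shape[0]
    skewed = np.empty_like(frame)
    for row in range(height):
        shift = int(velocity_px_per_frame * readout_fraction * row / height)
        skewed[row] = np.roll(frame[row], shift)
    return skewed
```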


Note the skew on the passing subway cars

High-end CMOS cameras are engineered in ways that the effect is less noticeable, except in extreme circumstances. On the other hand, the Canon 5D competitor – the Nikon D90 – gained a bit of a reputation specifically for this artifact. To combat this issue, The Foundry recently announced RollingShutter, an After Effects and Nuke plug-in designed to tackle these image distortion problems.

Don’t let this all scare you away, though. Even a camera that is more subject to the phenomenon will turn out great images when the subject is organic in nature and care is taken with the camera movement. Check out some of the blog posts, like those from Stu Maschwitz, about these issues.


Frame from My Room video

But, how do you post it?

Like my RED blog post, I’ve given you a rather long-winded intro, so let’s take a look at a real-life project I recently posted that was shot using the Canon EOS 5D Mark II. Toby Phillips is a renowned international director, director of photography and Steadicam operator with tons of credits on commercials, music videos and feature films. I’ve worked with him on numerous spots where his medium of choice is 35mm film. Toby is also an avid photographer and Canon owner (including a 5D Mark II). We recently had a chance to use his 5Dmk2 for a good cause – a pro bono fundraiser for My Room, an Australian charity that assists the Children’s Cancer Centre at the Royal Children’s Hospital in Melbourne. Toby needed to shoot his scenes with minimal fuss in the ward. This became an ideal situation in which to test the capabilities of the Canon and to see how the concept translated into a finished piece in the real world.


Frame from My Room video

Toby has a definite shooting style. It typically involves keeping the camera in motion and pulling focus to just hit a point that’s optimally in focus at the sweet spot of the camera move. That made this project a good test bed for the Canon 5D in production. Lighting was good and the images had a warm and appealing quality. The footage generally turned out well, but Toby did express to me that shooting in this style – and shooting handheld without any of the Red Rock or Zacuto accessories or a focus puller – was tough to do. Remember that still camera lenses are not mechanically engineered like a motion picture lens. Focus and zoom ranges are meant to be set and left, not smoothly adjusted during the exposure time.


Posting footage from the 5Dmk2 is relatively easy, but you have to take the right steps, depending on what you want to end up with. The movie files recorded by the camera are QuickTime files using the H.264 codec, so any Mac or PC QuickTime-compatible application can deal with the files. They are a true 30fps, so you can choose to work natively in 30fps (FCP) or first convert them to 29.97fps (for FCP or Avid). That speed change is minor, so there are no significant sync or pitch issues with the onboard audio. If you opt to edit with Media Composer, simply import the camera movies into a 29.97 project, using the RGB import settings, and the result will be standard Avid media files. The camera shoots in progressive scan, so footage converted to 29.97 looks like that shot with any video camera in a 30p mode.

Canon 5D and Final Cut Pro

I edited the My Room project in Final Cut. Although I could have cut these natively (H.264 at 30fps), I decided to first convert the files out of H.264 for a smoother edit. I received the raw footage on a FireWire drive containing the clips copied from the CF cards. This included 150 motion clips for a total of about one hour of footage (18GB). The finished video would use a mixture of motion footage and moves on stills, so I also received another 152 stills from the 5Dmk2 plus 242 stills from a Canon G10 still camera.

Step one was file conversion to ProRes at 1920×1080. Apple Compressor on a MacBook Pro took under five hours for this step. Going to ProRes increased the storage needs from 18GB to 68GB.

Step two was frame rate conversion. The target audience is in Australia, so we decided to alter the speed to 25fps. This gives all shots a slight slomo quality as if the footage was shot in an overcranked setting. The 5Dmk2 by itself isn’t capable of variable frame rates or off-speed shooting, so any speed changes have to be handled in post. Although a frame rate change is possible in the Compressor setting (step 1), I opted to do it in Cinema Tools using the conform function. When you conform a file in Cinema Tools, you are altering the metadata information of that file. This tells a QuickTime-compatible application to play the file at a specific speed, such as 25fps instead of 30fps. I could also have used this to conform the rate to 29.97 or 23.98. Because only the metadata was changed, the time needed to conform a batch of 150 clips was nearly instantaneous.

Step three – pitch. Changing the frame rate through conform slows the clips, but it also affects the sync sound by making it slower and lowering the pitch. Our video was cut to a music track so that was no big deal; however, we did have one sync dialogue line. I decided to fix just the one line by using Soundtrack Pro. I went back to the original 30fps camera file and used STP’s TimeStretch. This let me adjust the sync speed (approximately 83% of the original) to 25fps, yet maintain the proper pitch.
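
The speed and pitch numbers work out like this (just the math; TimeStretch does the actual pitch-preserving resampling):

```python
import math

# Playing 30fps material at 25fps is 25/30 of the original speed;
# uncorrected audio drops in pitch by the same ratio.

speed = 25 / 30
print(round(speed * 100, 1))        # 83.3 percent of original speed
print(12 * math.log2(30 / 25))      # ~3.16 semitones lower in pitch
```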

Step four – stills. I didn’t want to deal with the stills in their full size within FCP. This would have been incredibly taxing on the system and generally overkill, even for an HD job. I created Photoshop actions to automate the conversion of the stills. The 152 5Dmk2 JPEG stills were converted from 5616×3744 to 3500×2333. The stills from the G10 come in a 4:3 aspect ratio (4416×3312) and were intended to be used as black-and-white portrait shots. Another Photoshop action made quick work of downsampling these to 3000×2250 and also converting them to black-and-white. Photoshop CS4 has a nice black-and-white adjustment tool, which generates slightly more pleasing results than a simple desaturation. These images were further cropped to 16:9 inside FCP during the edit.
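
Something equivalent can be scripted outside Photoshop, too. Here’s a hypothetical Pillow version of the G10 batch (Photoshop’s black-and-white adjustment offers far more control than the plain grayscale conversion used here):

```python
import glob
from PIL import Image  # Pillow

# Downsample each 4416x3312 still to 3000x2250 and convert to grayscale.

for path in glob.glob("g10_stills/*.jpg"):
    img = Image.open(path)
    img = img.resize((3000, 2250), Image.LANCZOS)
    img.convert("L").save(path.replace(".jpg", "_bw.jpg"))
```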


Frame from My Room video

Editing

Once I had completed these conversions, the edit was pretty straightforward. The project was like any other PAL-based HD job (1920×1080, 25fps, ProRes). The Canon 5D creates files that are actually easier for an editor to deal with than RED, P2 or EX files. Naming follows the same convention most DSLRs use for stills, with file names such as MVI_0240.mov. There is no in-camera SMPTE timecode and all imported clips start from zero. File organization over a larger project would require a definite process, but on the other hand, you aren’t fighting something being done for you by the camera! There are no cryptic file names and copying the files from the card to other storage is as simple as any other QuickTime file. There is also no P2-style folder hierarchy to maintain, since the media is not MXF-based.

Singular Software and Glue Tools are both developing FCP-related add-ons to deal with native camera files from the Canon 5D. Singular offers an Easy Set-up for the camera files, whereas Glue Tools has announced a Log and Transfer plug-in. The latter will take the metadata from the file and apply the memory card ID number as a reel name. It uses the camera’s time-of-day stamp as a timecode starting point and interpolates clip timecode for the file. Thus, all clips in a 24-hour period would have a unique SMPTE timecode value, as long as they are imported using Log and Transfer.


Frame from My Room video

My final FCP sequence was graded in Apple Color. Not really because I had to, but rather to see how the footage would react. Canon positioned the 5Dmk2 in that niche between the high-end amateur and the entry level professional photographer, so it tends to have more automatic control than most pros would like. In fact, a recent firmware update added back some manual exposure control. In general, the camera tends to make good-looking images with rich saturation and contrast. Not necessarily ideal for grading, but Stu at ProLost offers this advice. Nevertheless, I really didn’t have any shots that presented major problems – especially given the nature of this shoot, which was closer to a documentary than a commercial shoot. I could have easily graded this with my standard “witches brew” of FCP plug-ins, but the roundtrip through Color was flawless.

As a first time out with the Canon EOS 5D Mark II, I think the results were pretty successful (click here to view). I certainly didn’t see any major compression artifacts to speak of and although the footage wasn’t immune from the “jello” effect, I don’t think it got in the way of the emotion we were trying to convey. A filmmaker who was serious about using this as the principal camera on a project could certainly deliver results on par with far more expensive HD cameras. To do that successfully, a) they would need to invest in some of the rigs and accessories needed to utilize the camera in a motion picture environment; and b) they would need to shoot carefully and adhere to set-ups that steer away from some of the problems.


What about 24fps?

25fps worked for us, but until Canon adds 24fps to the 5Dmk2 or a successor, filmmakers will continue to clamor for ways to get 24p footage out of the camera. Philip Bloom and others have posted innovative post “recipes” to achieve this.

I tested one of these solutions on my cut and was amazed at the results. If I needed to maintain sync dialogue on a project, yet wanted the “film look” of 24fps, this is the method I would use. It’s based on Bloom’s blog post (watch his tutorial video). Here are the steps if you are cutting with Final Cut Pro:

1. Edit your video at the native 30fps camera speed.
(Write down the accurate sequence duration in FCP.)

2. Export a self-contained QuickTime file.

3. Conform that exported file to 23.98fps in Cinema Tools.
(This will result in a longer, slowed down file.)

4. Bring the file into Compressor and create and apply a setting to convert the file, but leave the target frame rate at 23.98fps (or same as current file).

5. Click the applied setting to modify it in the Inspector window.

6. Enable Frame Controls and change the duration from “100% of source” to a new duration. Enter the exact original duration of the 30fps sequence (step 1). (Best results are achieved – but with the longest render times – when Rate Conversion is set to “Best – high quality motion compensated”.)

7. Import the converted file into FCP and edit it to a 23.98 fps timeline. This should match perfectly to a mixed version of the audio from the original 30fps sequence.

I was able to achieve a perfect conversion from 30fps to 23.98fps using these steps. There were no obvious optical flow artifacts or frame blending. This utilizes Compressor’s standards conversion technology, so even edited cuts in the self-contained file stayed clean without blending. Of course, your mileage may vary.

The edited video segment was 1:44 at 30fps and 2:10 at the slower 23.98fps rate. The retiming conversion necessary to get back to a 1:44-long 23.98 file took two hours on my MacBook Pro. This would be time-prohibitive if you wanted to process all of the raw footage first. Using it only on an edited piece definitely takes away the pain and leaves you with excellent results.
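
The quoted durations check out against the conform math (a quick sanity-check sketch):

```python
# Conforming 30fps material to 23.976fps stretches it by 30/23.976.

frames = 104 * 30                 # 1:44 of true 30fps video
print(frames / (24000 / 1001))    # ~130.1 seconds, i.e. about 2:10
```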

Cameras like the Canon EOS 5D Mark II are just the beginning of this DSMC journey. I don’t think Canon realized what they had until the buzz started. I’m sure you’ll soon see more of these cameras from Canon and Nikon, not to mention Panasonic and even Sony, too. Once RED finally starts shipping Scarlet, it will be interesting to see whether this concept really has legs. In any case, from an editor’s perspective, these formats aren’t your tape of old, but they also shouldn’t be feared.

©2009 Oliver Peters

Resolution Purists and the Real World

I love to lurk over at RedUser.net, the unofficial online forum for RED owners and enthusiasts. It’s a great place to gain insight about the technology, but it’s also just pure fun reading the various perceptions of the less experienced RED aficionados. The RED One camera employs a single 4520 x 2540 CMOS sensor to capture various image sizes – the most popular of which is 4096 x 2048. This is considered to be a 4K file with a 2:1 aspect ratio. Many people confuse resolution and file size, so a 4K file isn’t necessarily 4K worth of resolution. There’s also a lot of confusion between the terms resolution and sharpness. The simplest explanation is that resolution is the measurable ability to resolve fine detail, while sharpness relates to your eyes’ and brain’s perception of whether or not an image is crisp and shows a lot of detail. Both Mark Schubin (Videography magazine’s technical editor) and Adam Wilt (Pro Video Coalition) have written at length on these subjects.


As a poor country editor who isn’t a DP or image scientist, I defer to the authorities on these subjects, but I have spent several decades working in all sorts of image formats, resolutions and display technologies. From this experience, I can say that often the supposed resolution of the sensor, as expressed in pixels, has very little to do with how the image looks. I see a lot of folks online expressing the desire to finish in 4K, without any understanding of the real world cost or desirability of 4K post and distribution. Not to mention the fact that true 4K theatrical displays are quite a few years off, if for no other reason than the lack of financial incentive for major theater chains to convert all their 35mm film projection to something like Sony’s SRX-series digital cinema projectors. So in spite of an interest on the part of content producers to see 4K presentation venues, the reality is that high-resolution-originated product will continue to end up being viewed on various displays, from web movies to SD and HD television up to film projection and/or digital cinema projection at 2K or less.


Been There – Done That – Got the Belt Buckle

The irony of all of this is that we’ve been there before. I even have the limited edition belt buckle to prove it! In the late 70s I worked with the CEI 310 camera. This was a 2-piece electronic field camera that was definitely geared towards high-quality production and not news. The CEI 310 eventually became the basis of Panavision’s Panacam – their first foray into electronic cameras equipped with Panavision film lenses. Bear in mind that the 310 and Panacam were always SD cameras without any 24P capabilities. On the plus side, the colorimetry of the CEI camera appeared more “filmic” than its ENG counterparts, which was further enhanced by the addition of Panavision lenses and accessories.


At the time, I was responsible for a facility that cranked out a ton of grocery store commercials. “Painting” the camera to get the most out of tabletop shots was the job of the video engineer (often called the “video shader”). A lot of what I learned about color correction (and have since passed on to others) came from trying to get a cooked ham or roast to look appetizing using our RCA studio cameras! When Panavision set up the deal with CEI to market Panacams, they established a number of authorized rental/production facilities who would supply the camera accompanied by a trained technician. Again, this person’s job was to paint the image for the most pleasing look. Fast forward a couple of decades and you have the position of the DIT (digital imaging technician), who today fulfills the role of video shading, among other tasks, when HD cameras are used on high-budget shoots, like feature films.


These early attempts at electronic cinematography really didn’t go far, due to the limiting resolution of NTSC and PAL video. Sure the images looked great, but you were really only working in a medium that was acceptable for television and not the big screen. Nevertheless, companies like Panavision, CEI and other competitors (like Ikegami with the EC-35) proved that properly adjusted video cameras coupled with high-quality glass could be a good marriage, regardless of the resolution of the camera.

High Definition to Small Definition

Fortunately HD came along, reviving interest in using electronic cameras for theatrical distribution. The company I worked for in the 90s was an early adopter of HD. We bought two of Sony’s HDW-730 cameras, which were interlaced 1080 HDCAM camcorders. Interlacing causes many purists to turn up their noses, preferring the later 24P models for true film-style images. In spite of this, we produced quite a lot of impressive content, including a Biblical-based dramatic production for a themed attraction called “The Holyland Experience”. Our 20-minute film was shot on location in Israel and projected in a custom theater that rivaled any big-screen movie theater in size and scope. The final master was edited in 1080i but encoded to 720p and projected using a Barco data-grade (not digital cinema) projector. Interlaced or not, this image was as impressive and as high-quality to the eye as if it had been a full-blown 35mm film production.

On the other end of the scale, I’ve also posted the video portions of IllumiNations: Reflections Of Earth, Disney’s nighttime show at EPCOT – a fireworks and laser extravaganza choreographed to music. ROE’s video segments are presented on a 29’-tall rotating earth globe mounted on a barge in the middle of the EPCOT lagoon. The continental masses on that globe consist of LED displays. The final image that fills these screens is actually a 360 x 128 pixel video movie composited like a world map. The pixels for the continents are, in turn, mapped onto the matching LED coordinates of the globe. Australia only has the resolution of a typical computer desktop icon, yet it is still possible to discern imagery with a display this coarse. The trick is that viewing distances are 500’ to 700’, and your brain fills in the gaps. This works much like the image of Lincoln’s face that’s made up of a mosaic of other images. When you get far enough back, you recognize Lincoln, instead of focusing on the individual components.
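
For the curious, the underlying pixel-to-globe math is simple. Here’s a hypothetical Python sketch of the general idea – treating the 360 x 128 frame as an equirectangular world map and converting each pixel to latitude/longitude (the show’s actual LED mapping data was, of course, its own custom affair):

    # Hypothetical sketch: map a pixel in a 360 x 128 frame, treated
    # as an equirectangular world map, to latitude/longitude on a
    # globe. Each LED is then driven by the pixel at its coordinates.
    WIDTH, HEIGHT = 360, 128

    def pixel_to_lat_lon(x, y):
        """x runs west-to-east (-180..180 deg longitude),
        y runs north-to-south (90..-90 deg latitude)."""
        lon = (x + 0.5) / WIDTH * 360.0 - 180.0
        lat = 90.0 - (y + 0.5) / HEIGHT * 180.0
        return lat, lon

    # The center pixel lands near 0 degrees latitude and longitude.
    print(pixel_to_lat_lon(WIDTH // 2, HEIGHT // 2))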

High Definition and the Silver Screen

Most folks now agree that the actual resolution of the RED One camera with proper lenses and accurate focus is in excess of 3K, though not quite as high as 4K. Compare this to film. 35mm negative is said to hold as much as the equivalent of 8K (though 4K is generally accepted as “full” resolution), but it is typically scanned at 4K or 2K. However, the image you see in the theater from a projected release print is generally considered to be closer to 1K. This varies with the quality of the print, the projector lens and the brightness of the projector lamp. Meanwhile, most of the popular HD cameras used for digital cinematography (Grass Valley Viper, Sony F900, Sony F23, etc.) capture images at 1920 x 1080, leaving you with a 16 x 9 image that’s comparable to a 2K film scan when the aspect ratio is 1.85:1. I’ve seen quite a few movies in theaters that were “filmed” using digital cameras (Collateral, Apocalypto, Zodiac, Star Wars, Once Upon A Time In Mexico, etc.) and I find very little to quibble about. In fact, Star Wars was shot for the wider 2.35:1 aspect, meaning that the top and bottom were cropped. So really only about 800 pixels out of the actual 1080-pixel height show up in the final prints.
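
That crop is easy to check with quick arithmetic – assuming the scope extraction keeps the full 1920-pixel width of the HD frame:

    # How many of the 1080 lines survive a 2.35:1 extraction from a
    # full 1920 x 1080 frame? Assumes the crop keeps the full width.
    width, height = 1920, 1080
    target_aspect = 2.35

    cropped_height = round(width / target_aspect)
    print(f"Active picture: {width} x {cropped_height} "
          f"({cropped_height / height:.0%} of the frame height)")
    # -> Active picture: 1920 x 817 (76% of the frame height)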

I’ve also edited a film that was finished through a DI process using Assimilate SCRATCH. Our film was shot on 4-perf Super35mm negative and transferred to HDCAM-SR. Since we intended to finish at 1.85:1, the 4-perf Super35mm frame provided the closest fit to the 16 x 9 aspect ratio of HD, without wasting part of the top and bottom of the negative’s frame. This technique results in smaller film grain within the HD frame, because more of the whole film frame is used. Internally our SCRATCH files were 2K DPX files and the output went back to an HDCAM-SR master. I’ve seen this film projected at DCI spec in the lab’s screening room, as well as from HDCAM running through a projector at 1080i (interlaced with added 3:2 pulldown), and I must say that this image would not have looked any better had we worked from a 4K film scan.

The reason I say this is the general texture of film and the creative choices made for exposure, lighting and lens/filter selection. Images that are more pleasing to the eye are often technically lower in sharpness. In other words, when you stick your nose up close to the screen, the image will tend to appear soft. Higher resolution doesn’t matter, because there is no more real detail in the image to bring out – except bigger film grain. One interesting comparison is last year’s There Will Be Blood versus No Country For Old Men. Blood went through a traditional film finish rather than a digital one, whereas No Country was completed at 2K resolution using a digital intermediate process. Both were nominated for the Best Cinematography Oscar. By all rights, Blood should have had the higher-resolution image, yet in point of fact, both looked about the same to the casual eye when seen in theaters. The cinematography was striking enough to earn each a nomination.

It’s in the Glass

Going back to the Panacam example, you start to find that the quality of the glass is a major factor in what ends up being recorded. I once did a film shot with a Sony F900 camera (24P). The DP/owner-operator opted to rent a “Panavised” Sony F900 (like those used on Star Wars) instead of using his own camera, so that he could take advantage of the better Panavision lenses. The result was a dramatic difference in image quality compared with standard HD lenses. Likewise, some of the RED examples I’ve seen online that were shot with various non-optimized lenses, such as primes designed for still photo cameras, exhibited less-than-superb quality. This is also why there have been a number of successful indie films shot with a Panasonic VariCam. Technically the VariCam, with its 1280 x 720 imager, should look significantly worse on the big screen than a Sony F900. Yet many of these films were shot using 35mm lens adapters and high-quality film lenses, and the results on screen speak for themselves. The funny thing is that there’s a lot of talk about 4K, yet when I’ve seen Sony’s 4K projector demos, the content came from 1920 x 1080 sources – shot with various Sony or Panavision digital cameras. I can assure you that these look awesome.

You ARE Paying for Something

Aside from lenses, another thing to keep in mind is the electronics a camera uses for image enhancement and filtering. Part of the big price difference between a RED One and a competing Sony, Grass Valley or Panasonic camera is the electronics used to enhance the image. The RED One generates a camera raw, Bayer-pattern image. The intent is to do all processing in post, just like sending film negative to a lab. The other cameras have a lot of circuitry designed to control the image in-camera. You may opt for a neutral, flat image, but there’s still processing applied to generate that finished RGB image from the camera, regardless of whether it’s flat or painted. This processing not only applies color matrices, but also sharpens detail and reduces noise. By contrast, RED skips this in-camera processing and uses an OLPF (optical low-pass filter), common in digital still camera sensors. The OLPF essentially filters out the highest-resolution transients so that you don’t get excess aliasing on things like contrasting diagonal lines, such as a car grille. The design goal is to leave you with true, not artificial, resolution. This means the image may at times appear soft, so sharpening and detail enhancement have to be added back (to taste) during the post-production conversion of the camera raw files.
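
To make the “you become the film lab” idea concrete, here’s a minimal sketch of the textbook bilinear demosaic for an RGGB Bayer mosaic, written in Python with NumPy and SciPy. This is nothing like the sophisticated math inside RED’s own conversion tools – just the classic baseline technique, with sharpening deliberately left for a later step:

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(raw):
        """Naive bilinear demosaic of an RGGB Bayer mosaic (2D array)
        into an RGB image. Real raw converters use far smarter math."""
        h, w = raw.shape
        rows, cols = np.mgrid[0:h, 0:w]
        # Sample-site masks for the RGGB layout.
        r_mask = (rows % 2 == 0) & (cols % 2 == 0)
        b_mask = (rows % 2 == 1) & (cols % 2 == 1)
        g_mask = ~(r_mask | b_mask)

        # Interpolation kernels: green lives on a quincunx grid,
        # red and blue on sparser rectangular grids.
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

        rgb = np.zeros((h, w, 3))
        rgb[..., 0] = convolve(raw * r_mask, k_rb)
        rgb[..., 1] = convolve(raw * g_mask, k_g)
        rgb[..., 2] = convolve(raw * b_mask, k_rb)
        return rgb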

The dilemma with all of this file conversion in post is that you often don’t get the best results. On the plus side, you may reap the benefit of oversampling, meaning that at times an HD image downsampled to SD may look better than if it had been shot in SD to begin with. I have, however, also found the opposite to be true. HD is a very high-resolution format – more actual resolution than our monitors and projectors can truly display. An image looks more natural in HD when less detail enhancement is dialed in. If you cranked up the enhancement, as is typical in most SD cameras, that image would look garish in HD. Unfortunately, when you downsample this very natural-looking HD image to SD, it tends to look soft, because we are used to the look of overly-enhanced SD cameras. Therefore, downsampling with a dedicated device like a Teranex will give you better results than using the built-in functions of Final Cut Pro, a Kona card or an HD deck, because it lets you subjectively add enhancement, color control and noise reduction as part of the HD-to-SD conversion.
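
You can approximate the same idea in software – scale down, then dial enhancement back in to taste. Here’s a minimal Python sketch using the Pillow imaging library (the filenames and amounts are placeholders to be judged by eye):

    from PIL import Image, ImageFilter

    # Downsample an HD frame to an SD raster, then add back a touch
    # of enhancement so it doesn't read as soft next to typical
    # SD-camera output. The amounts are purely to taste - start low.
    hd_frame = Image.open("frame_1920x1080.png")  # placeholder file
    sd_frame = hd_frame.resize((720, 480), Image.LANCZOS)
    sd_frame = sd_frame.filter(
        ImageFilter.UnsharpMask(radius=1.0, percent=60, threshold=2))
    sd_frame.save("frame_720x480.png")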

Aliasing is another issue. A lot of HD content is captured in progressive formats (such as 24P). Progressive HD images on a native progressive display (projectors, plasmas, LCDs) look great, but when you display these same images as scaled-down NTSC or PAL on an interlaced CRT, something’s got to give. Take a high-contrast transition, such as the light-to-dark changes between the metal bars in our car grille example. The HD image is able to retain all of the anti-aliasing information for the in-between gradients in those transitions from light to dark and back. When this image is downsampled, some of that detail is lost and there’s less anti-aliasing information. The transitions become harsher when displayed on an interlaced SD CRT and the metal of the grille appears to scintillate with any movement. In other words, the diagonal edges of the grille appear more jagged and tend to “dance” between the scanlines.

Unfortunately this is a normal phenomenon and can exist whether you shoot digitally or on film. A few years back Cintel, an established telecine manufacturer, introduced SCAN’dAL, a feature designed specifically to deal with this issue when transferring 35mm footage to video. Although a lot of ink has been spilled about the benefits of oversampling, in some cases shooting at the delivery size yields the best results. I go back to SD videos I’ve cut, which were shot using a Sony Digital Betacam camcorder, and am amazed at how much better they look in SD than newer versions of the same program shot on HD and downsampled for SD presentation. When downsampling is part of the workflow and quality is critical, it is important to try a number of options. Sometimes hardware does a better job; at other times software is king. Some of the better HD-to-SD scaling in software is achieved in After Effects and Shake. Often just the smallest touch of Gaussian blur helps as well, as in the sketch below.
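
Continuing the earlier Pillow sketch, that “smallest touch” might look like this – a slight Gaussian blur ahead of the scale, throwing away the finest HD detail that would otherwise scintillate (the radius is a guess to be tuned by eye):

    from PIL import Image, ImageFilter

    # Pre-filter before downsampling: a very slight Gaussian blur
    # removes the finest HD detail that SD can't represent, trading
    # a hint of softness for less edge "dancing" on interlaced CRTs.
    hd_frame = Image.open("frame_1920x1080.png")  # placeholder file
    softened = hd_frame.filter(ImageFilter.GaussianBlur(radius=0.6))
    sd_frame = softened.resize((720, 480), Image.LANCZOS)
    sd_frame.save("frame_720x480_soft.png")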

Reality Check for the Indie Filmmaker

One of the reasons this isn’t cut-and-dried is that camera manufacturers play so many games with the image. For example, the Panasonic HVX200 makes outstanding images and is popular with indie filmmakers. Yet it uses only a 960 x 540 pixel sensor to generate 720 or 1080 images – getting there through the magic of pixel shifting (see Adam Wilt). As good as the camera looks, when you put it side-by-side with Panasonic’s VariCam, the latter will appear noticeably sharper than the HVX200, because it does indeed have higher resolution.

I’m sure you’re wondering if this is all just a can of worms. You’re right. It is. But often the best-calibrated measuring instruments are simply your own two eyes. Forget the specs and trust your instincts. A recent example is Shine A Light. This film was shot using a combination of 35mm film cameras and one Panavision Genesis. All footage ended up on HDCAM-SR (1920 x 1080) and the master was recorded out not only to 35mm film for release prints, but also to IMAX. Even though HD isn’t close to the resolution of a 70mm IMAX negative, the Stones’ concert in Shine A Light looks incredible in IMAX projected onto a 5-story-tall screen!

In the real world, it’s amazing what you can get away with. Last year the Billy Graham Library opened with video modules that I edited and finished. The largest screen is in the Finale theater – an ultra-widescreen format that’s a horizontal composite of three 720p projections. Our sources were largely HD, but there was also a smattering of audience close-up shots from Graham’s last crusade in New York City that originated on a Panasonic DVX100A (mini-DV) camera. It was amazing how well these images held up in the finished product. Other great examples are the documentaries Murderball and The War Tapes. Each was shot with a variety of mini-DV cameras, yet in spite of the image defects, the stories and personalities are so enthralling that image quality is the least important factor.

I have a lot of respect for what the team at RED has done, but I’m not yet willing to concede that shooting with the RED One will give you a better film than other cameras, like an Arri D-21, Sony F23 or Panasonic’s new HPX3000, just because RED has a higher pixel count for its sensor. In the end, like everything else in this business, content and emotion are the most important ingredients. When it comes to capturing an image, the technical resolution of the camera is a big factor, but it doesn’t automatically guarantee the best results from the point of view of your audience.

© 2008 Oliver Peters