11 More Final Cut Pro Tips

During my years with Final Cut, I’ve accumulated a number of workflow steps to eliminate some of the “gotchas” and bottlenecks. I’ll share a few in this post and hope you find them helpful.

1. Gamma setting

A couple of versions ago, FCP added a gamma settings preference (User Preferences / Editing). This lets you compensate for gamma differences in graphics created on other platforms. Ever since FCP7, I have found that the best setting to use is 2.2 (the native gamma of PCs). “Source” and other options don’t seem to yield the best results. As such, I now set all systems I use to 2.2 gamma.

This gamma preference applies to imported QuickTime movies as well, if they were created with an RGB codec, like Animation. Bring them in as “source” instead of 2.2 gamma and the level will be wrong. Like all other such settings, changes made to this setting only affect files imported after the change was made.

While we are looking at this tab, note the Still/Freeze Duration time. Crank this baby up. You aren’t creating any more media by doing so. If you have it set to 4 minutes, then that means you can have a still or a freeze last up to 4 minutes long as a single clip on the timeline.

2. Graphic Converter

I started a big project at the launch of FCP7 and ran into all sorts of new level problems with customer-supplied PNG files. This format is a lossless compressed format that stores images at high quality with a small file size. Unfortunately, the color syncing system used by the Mac OS looks for various color profile flags to get the levels right, which also affects what happens to such files in QuickTime. In my case, this caused unacceptable gamma shifts in FCP7. Most likely the culprit was either an incorrect color profile or an incorrect “assumption” by QuickTime. The solution for me was Lemkesoft’s Graphic Converter. This app has been a graphics conversion staple that I’ve used for two decades. It used to be bundled by Apple and in fact, the version I’m using under Snow Leopard was still the one I migrated over from a PowerBook G4!

I use Graphic Converter mainly for its batch conversion functions. In this example, I “washed” all the PNG files through Graphic Converter and turned them into uncompressed BMP files. Voilà – no gamma shift – and proper levels! I have also run into cases where Photoshop-generated JPEGs had issues on a Mac. Here again Graphic Converter saved the day, by stripping out the Photoshop color profile info and rewriting a clean JPEG file.

3. Gaussian blur on stills

FCP is frequently slammed for the quality of moves on high-res photos using the DVE functions of the motion tab. Typically the offense is that detail in the image tends to scintillate or that diagonal lines look aliased. This really isn’t the fault of FCP, but rather the fact that these stills have a lot more detail and resolution than can be properly displayed when reduced to that size. As far as I know, FCP has no subpixel filtering, so the texture or detail in an image falls on one scan line or another without any smoothing in between. Other apps don’t necessarily create better results, but are actually softening, blurring or filtering the image in ways that look better in a video display format.

The secret for FCP users is to do the same thing. In other words, add blur to the image yourself. This can either be achieved in a graphics program like Photoshop or within FCP by adding a filter. If you use Photoshop, then prep the image by adding a slight Gaussian blur. Experiment with the right setting, but typically a value that looks a bit soft in Photoshop will ultimately look good on a TV screen. If you decide to just add an FCP filter, then use either the Gaussian blur or the Flicker filter. Again, play around with settings to taste, but I find that a Gaussian blur value of between .5 and 2 is generally right.

4. Audio as 48kHz

Repeat after me: Always work with uncompressed audio at a sample rate of 48kHz inside Final Cut. Yes, you can import MP3s and yes, you can work with 44.1kHz audio on the timeline, but in the end, it will bite you. Typically these files seem to be OK if left unrendered – leaving FCP to do all conversions on-the-fly. However, once they are rendered, forcing a sample rate conversion, you run into nastiness. For example, I’ve made music edits and found that the music at the edit points shifted. Or that levels didn’t react properly when I was trying to mix the audio.

When I work with audio in FCP, I will almost always convert the files to 48kHz, 16-bit AIFF files first. This can be done in two easy ways. I use QuickTime Pro (make sure to use QuickTime 7 if you are on Snow Leopard) to convert the files before importing. If you’d rather have this be automatic, simply purchase and install Digital Heaven’s Loader. It runs resident with FCP. Audio files that are imported into FCP by dragging them to the Loader tab will automatically be copied and converted during the import process.

5. Audio files running at the right speed

One of the quirks with FCP is handling audio synchronization. This is mainly a factor in film projects using double-system sound or music videos shot to a playback track. FCP determines speed based on audio samples and not timecode. Unfortunately it’s not as simple as making sure your audio setting matches the project. That’s because FCP apparently does some hidden things under the hood.

For example, I’ve seen issues where a project was started with one frame rate setting, but later changed. I hit this on a PAL music video job. When I imported the final mixed track and matched it against the client’s rough cut, nothing I could do would eliminate the drift between the final mix and the temp track. Even though everything appeared to be correct, imported audio would not line up as anticipated. The only culprit appeared to be the settings used when the project was initially created. Sometimes you have no control over this, because you’ve inherited the project already in progress.

It appears that this is an issue that affects audio files – AIFF and WAVE. One workaround is to convert the audio to QuickTime movies, which forces timecode onto the clip. This isn’t a silver bullet, though, and I have found it to work at times and not at others. If it doesn’t work, you may be forced to alter the audio speed in order to maintain sync. This becomes a big issue if you happen to record double-system sound when shooting with a Canon EOS 5D Mark II at 30fps. You can post in a true 30fps project and maintain sync or you can post at 29.97 and be video-friendly. In the latter case, it’s easy to change the video 30fps frame rate to 29.97fps using Cinema Tools, but the audio will have to be speed-adjusted by .999 to maintain sync.
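If you want to sanity-check the math behind that .999 adjustment, here’s a quick Python sketch. (The “.999” figure editors quote is shorthand for the exact NTSC ratio of 1000/1001; the drift figure is simply what happens if picture is conformed to 29.97 but the audio isn’t slowed to match.)

```python
# Back-of-the-envelope math for the 30fps -> 29.97fps conform described above.
# The ".999" speed factor is shorthand for the exact NTSC ratio 1000/1001.
from fractions import Fraction

NTSC = Fraction(1000, 1001)           # exact slowdown factor

video_fps = 30 * NTSC                 # 30000/1001, i.e. ~29.97fps
print(float(video_fps))               # ~29.97003

# If the audio is NOT slowed by the same factor, it drifts against picture.
# Drift after one hour of program time:
one_hour = 3600                        # seconds
drift = one_hour * (1 - float(NTSC))   # seconds of offset after an hour
print(round(drift, 2))                 # ~3.6 seconds per hour
```

Over a three- or four-minute music video the offset is smaller, but it’s still several frames, which is exactly the kind of creeping drift described above.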

6. Compressor and interlaced video

Compressor 3.5 (part of the “new” Final Cut Studio) changed the way the Inspector tab works. Now it includes an A/V Attributes tab that will actually “read” the format of the source file. This will also affect what an encoding preset does to that file. If you click on the source file in the Job panel, the Inspector will identify, among other things, the field order of the file. Unfortunately, it randomly gets it wrong. Files that are top field first may be identified as progressive or bottom field first, and vice versa.

If you are using Compressor to convert an interlaced camera file or a stock footage file into ProRes for use in FCP, for example, it is important that the field order be correct. If the field order is misidentified, the file will either be improperly converted or the field order will be incorrectly flagged. For example, HD files are supposed to be top field first. If this is identified as bottom first and converted to top, the field order will be swapped and the result on an FCP HD timeline will look wrong.

In other cases, the files won’t necessarily be converted incorrectly, but will only be flagged incorrectly. The file will actually be top field first, but it will show up inside FCP as bottom field first. When this happens a Field Shift filter is automatically applied by FCP onto the clip in the timeline. The file is actually the correct field order, but the filter makes it wrong, which again, yields the wrong results. Unfortunately, you won’t see this issue unless you are monitoring the video through a hardware I/O card or device to a broadcast display or CRT.

When you work in Compressor 3.5, it is important to check each source file for field order. If the file is misidentified, you must change the setting manually in the Inspector pane. This will generally fix the problem. In addition, check the Field Dominance column in the FCP Browser. If a file is incorrectly identified, this can be changed in that column. Once it is properly set, this will prevent a Field Shift filter from being automatically applied when it isn’t needed.

7. Better slomos

FCP7 has improved the variable speed workflow, but not the actual quality of the video. Video fields and/or frames are blended, which doesn’t result in the smoothest rendering of motion. This is especially true of “24p-over-30i” content. Other NLEs, like Avid Media Composer, employ more advanced technology, offering several options for motion rendering. The best looking Avid choice is FluidMotion Timewarp. Like other retiming technologies, this creates new “in-between” frames derived from actual surrounding frames when motion is slowed or sped up. A similar quality improvement can be achieved if you opt to use either Compressor or Motion instead of FCP.

In Compressor, apply a conversion preset and use Frame Controls to achieve a speed change. Select Best for Rate Conversion. Click Duration and set a percentage or a time value consistent with the speed you are trying to achieve. Inverse telecine can also be applied, but remember that this method only works for constant speeds and not speed ramps.

Motion is your other option. Speed changes can be achieved by either changing Properties for the clip or adding a Retiming behavior. To alter the image quality, reveal the Timing controls in the Properties tab. Set the speed and then select the type of Frame Blending in the drop-down menu. Optical flow yields the best result, although it will also on occasion introduce unwanted motion artifacts.

8. Controlling filter selections in Final Cut Pro

Ever wonder why some filter choices are listed twice in the FCP effects palette? That’s usually because you are seeing both the native FCP and the native Motion filters in the same pulldown menu. This is controlled by the Effects / Effects Availability setting. For example, when you select All Effects, you’ll see two Gaussian Blur and two Zoom Blur filters in the Blur category. Look in the Effects Class column of the FCP Browser and you’ll see that one of these is listed as an FxPlug filter. That’s the native Motion version of this filter. Change the setting to Only Recommended Effects and the FxPlug version of those two disappears, leaving you with only the native Final Cut version (not FxPlug). In most cases, either version can be used without issue.

Lastly, if you’ve installed a lot of plug-ins and would like to reduce the clutter of the effects palette, then use the Only My Preferred Effects selection. When this is selected, filters will only show up if you have placed a check in the Preferred column of the Browser.

9. Preparing stills for Final Cut

Final Cut Pro is resolution independent, but this doesn’t mean “resolution infinite”! Working with large, high-resolution files is almost always an issue when incorporating digital still photos into an FCP timeline. Although you can throw a lot at the software, some things work better than others. The exact point that you’ll choke FCP depends on factors like the software version, processing power, amount of RAM and the installed graphics card.

Final Cut Pro performs best with stills that are RGB-mode JPEG, TIFF or BMP files under 4,000 x 4,000 pixels. Many print-resolution files or digital still photos from high-megapixel cameras like the Canon 5D will exceed these sizes. Merely clicking on a 6,000 x 6,000 pixel TIFF file that’s CMYK and not RGB in the FCP Browser will often crash the app. Poof! Straight to Jail – do not pass GO, do not collect $200! My recommendation is always to review and prepare your graphics and stills prior to using them inside FCP.

The best applications for organizing, preparing and adjusting stills include Photoshop, Graphic Converter, Aperture, iPhoto and Lightroom. Files should always be 8-bit/channel RGB. Remove any alpha channels unless you intend to use the file for keying. Some folks will argue that FCP does better with uncompressed files, like TIFFs, as this reduces the decoding overhead of other files. That may be true, but I’ve always used JPEGs with good results. I’ll export the files in the JPEG format at the highest quality setting (12 in Photoshop) and reduce the frame size if it’s an extremely large file. Typically I’ll resize the frame to a maximum horizontal or vertical dimension of 2,500 pixels for SD and 3,500 for HD. This will usually give me plenty of space for a camera-style move on the image and still stay below 100% of the actual frame size.
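The sizing rule above is easy to script if you’re prepping a big batch of stills. This is just a sketch of the arithmetic (clamp the longest side, keep the aspect ratio); the pixel dimensions are from a Canon 5D Mark II frame and are only for illustration:

```python
# Sketch of the sizing rule above: clamp a still's longest side to a maximum
# pixel dimension (2,500 for SD work, 3,500 for HD) while keeping aspect ratio.
def fit_to_max(width, height, max_dim):
    longest = max(width, height)
    if longest <= max_dim:
        return width, height           # already small enough, leave it alone
    scale = max_dim / longest
    return round(width * scale), round(height * scale)

# A 21-megapixel Canon 5D Mark II frame, prepped for HD work:
print(fit_to_max(5616, 3744, 3500))    # -> (3500, 2333)
```

You would feed these target dimensions to whatever app is actually doing the resizing (Photoshop’s Image Processor, Graphic Converter’s batch functions and so on).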

10. Changing alpha setting

An ongoing dilemma with NLEs is how alpha channels are treated. Once imported into FCP you have the option of changing the alpha setting – None/Ignore, Straight, Black or White. The default is typically fine, but in some cases, a glow or a soft shadow will not have the proper transparency. For example, a white glow against a white background may result in a darker halo at the edge of the glow instead of seamlessly blending into the background. This can be corrected in FCP by opening the Format setting for that clip and changing the alpha setting to one of the other options. Change the master clip in the FCP Browser so it’s correct every time it is used, or correct it only in the timeline if it just needs to be fixed for one instance.

11. Positioning text and graphics

When you move a graphic or line of generated text using the FCP motion tab’s positioning controls, the result will often be soft or blurry. I believe this is related to FCP’s lack of subpixel filtering. There are two rules I generally apply in these situations. First, whenever a generator (like Boris 3D or the Text generator) has a Control tab, I use the X,Y positioner in that tab and not the motion tab to reposition the text. This aligns the graphic without adding an additional modifying layer.

Second, any time I change scale or position parameters, I try to stay with even and whole number values. In other words, a scale value of 32, not 33; or a Y position value of 202, not 201.33. Try this for yourself and I think you’ll quickly see that the quality of anything with definite lines and detail, like text or a logo, will look much sharper at these even values. On rare occasions the opposite will be true. For example, it might look crisper if you nudge the text up to an odd number setting. In either case, experiment and see what looks best to you. Remember to only judge this when your Canvas is at a view setting of exactly 100%.
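If you like to be systematic about it, the “even and whole numbers” habit is just snapping to the nearest even integer. A trivial sketch (the sample values are hypothetical):

```python
# Sketch of the "even and whole numbers" rule: snap a scale or position
# parameter to the nearest even integer before committing it.
def snap_even(value):
    return 2 * round(value / 2)

print(snap_even(201.33))   # -> 202 (a Y position)
print(snap_even(97.8))     # -> 98 (a scale percentage)
```

Start from the snapped value, and only nudge to an odd number if the even one happens to look worse on your monitor.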

If these were helpful, check out Ten Tips For A Better Final Cut Pro Experience and 12 Tips for Better Film Editing.

© 2009 Oliver Peters

Remember film?

With all the buzz about various digital cameras like RED and the latest HDSLRs, it’s easy to forget that most national commercial campaigns, dramatic television shows, feature films and many local and regional spots are still filmed with ACTUAL 16mm and 35mm motion picture film. As an editor, you need to have a good understanding about the film transfer workflow and what information needs to be communicated between an editor and the transfer facility or lab.

Film transfers and speed

Film is typically exposed in the camera at a true 24fps. This is transferred in real-time to video using a scanner or telecine device like a Cintel Ursa or a DFT Spirit. During this process, the film’s running speed is slowed by 1/1000th to 23.98fps (also expressed as 23.976) – a rate compatible with the 29.97fps video rate of the NTSC signal. In addition, film that is being transferred to NTSC (525i) or high definition video for television (1080i/29.97 or 720p/59.94) is played with a cadence of repeated film frames, known as 3-2 pulldown. Film frames are repeated in a 2-3-2-3 pattern of video fields, so that 24 film frames equals 30 interlaced video frames (or 60 whole frames in the case of 720p) within one second of time. (Note: This is specific to the US and other NTSC-based countries. Many PAL countries shoot and post film content targeted for TV at a true 25fps.)
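The 2-3 cadence is easier to see worked out than described. Here’s a small Python sketch that spreads 24 film frames across video fields in the alternating 2-3 pattern and confirms the totals:

```python
# Sketch of the 2-3 pulldown cadence: each film frame contributes alternately
# 2 and 3 video fields, so four film frames become five interlaced frames.
def pulldown_fields(film_frames):
    cadence = [2, 3]                        # fields contributed, alternating
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * cadence[i % 2])
    return fields

one_second = [f"F{n}" for n in range(24)]   # 24 film frames
fields = pulldown_fields(one_second)
print(len(fields))          # 60 fields in one second
print(len(fields) // 2)     # = 30 interlaced video frames
```

Twelve frames contribute two fields and twelve contribute three, which is where the 60 fields (30 interlaced frames) per second come from.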

Film production requires the use of an external sound recorder. This production method is known as double-system sound recording. Analog audio recorders for film, like a Nagra, record at a true sound speed synced to 60Hz, or if timecode was used, at a true timecode value of 30fps. When the audio tape is synced to the film during the film-to-tape transfer session, the audio goes through a similar .999 speed adjustment, resulting in the sound (and timecode) running at 29.97fps instead of 30fps as compared to a real-time clock.

The film sound industry has largely transitioned from analog recorders – through DATs – to current file-based location recorders, like the Aaton Cantar or the Zaxcom Deva, which record multichannel Broadcast WAVE files. Sound speed and the subsequent sync-to-picture is based on sample rates. One frequent approach is for the location sound mixer to record the files at 48.048kHz, which are then “slowed” when adjusted to 48kHz inside the NLE or during film-to-tape transfer.
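The 48.048kHz trick works out exactly, and it’s worth seeing why. Playing back 48,048 samples per second through a 48kHz clock slows the audio by precisely 1000/1001, the same ratio that takes 24fps picture down to 23.976:

```python
# The 48.048kHz trick, worked out exactly: audio recorded at 48,048
# samples/sec and played back at a 48kHz clock is slowed by exactly
# 1000/1001 -- the same factor applied to the picture.
from fractions import Fraction

record_rate = 48048
playback_rate = 48000
speed = Fraction(playback_rate, record_rate)
print(speed)                    # 1000/1001
print(float(24 * speed))        # ~23.976, matching the slowed film rate
```

So picture and sound slow down in lockstep, with no resampling artifacts and no manual speed adjustment needed in the NLE.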

Check out 24p.com and zerocut.com for expanded explanations.

Film transfer

The objective of a film-to-tape transfer session is to color-correct the image, sync the sound and provide a tape and metadata for the editor. Sessions are typically booked as “unsupervised” (no client or DP looking over the colorist’s shoulders) or “supervised” (you are there to call the shots). The latter costs more and always takes more time. Unsupervised sessions are generally considered to be “one-light” or “best-light” color correction sessions. In a true one-light session, the telecine is set-up to a standard reference film loop and your footage is transferred without adjustment, based on that reference. During a best-light session, the colorist will do general, subjective color-correction to each scene based on his eye and input from the DP.

Truthfully, most one-light sessions today are closer to a best-light session than a true one-light. Few colorists are going to let something that looks awful go through, even if it matches a reference set-up. The best procedure is for the DP to film a few seconds of a Macbeth and a Grayscale chart as part of each new lighting set-up, which can be used by the colorist as a color-correction starting point. This provides the colorist with an objective reference relative to the actual lighting and exposure of that scene as intended by the DP.

Most labs will prep film negative for transfer by adding a countdown leader to a camera roll or lab roll (several camera rolls spliced together). They may also punch a hole in the leader (usually on the “picture start” frame or in the first slate). During transfer, it is common for the colorist to start each camera roll with a new timecode hour. The :00 rollover of that hour typically coincides with this hole punch. The average 35mm camera roll constitutes about 10-11 minutes of footage, so an hour-long video tape film transfer master will contain about five full camera rolls. The timecode would ascend from 1:00:00:00 up through 5:00:00:00 – a new hour value starting each new camera roll. A sync reference, like a hole-punched frame, corresponds to each new hour value at the :00 rollover. The second videotape reel would start with 6:00:00:00 and so on.
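With that convention (one timecode hour per camera roll, roughly five 35mm rolls per tape), the hour digit alone tells you where a shot lives. A toy sketch of the bookkeeping, assuming exactly five rolls per videotape reel:

```python
# Sketch of the roll-per-hour convention above: with one timecode hour per
# camera roll and about five 35mm rolls per tape, the hour digit identifies
# both the camera roll and the videotape reel a shot came from.
ROLLS_PER_TAPE = 5

def locate(tc_hour):
    camera_roll = tc_hour
    videotape_reel = (tc_hour - 1) // ROLLS_PER_TAPE + 1
    return camera_roll, videotape_reel

print(locate(3))    # -> (3, 1): camera roll 3, on videotape reel 1
print(locate(6))    # -> (6, 2): camera roll 6 starts videotape reel 2
```

In practice roll lengths vary, so treat this as the logic of the convention rather than a guarantee; the transfer logs are still the authority.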

Many transfer sessions will also include the simultaneous syncing of the double-system audio. This depends on how the sound was recorded (Nagra, DAT or digital file) and the gear available at the facility. Bear in mind that when sound has to be manually synced by the colorist for each take – especially if this is by manually matching a slate with an audible clap – then the film-to-tape transfer session is going to take longer. As a rule of thumb, MOS (picture-only), one-light transfer sessions take about 1.5 to 2 times the running length of the footage. That’s because the colorist can do a basic set-up and let a 10 minute camera roll transfer to tape without the need to stop and make adjustments or sync audio. Adding sound syncing and client supervision often means the length of the session will increase by a factor of 4x or 5x.

The procedure for transferring film-to-tape is a little different for features versus a television commercial or a show. When film is transferred for a feature film, it is critical that a lot of metadata be included to facilitate the needs of a DI or cutting negative at the end of the line. I won’t go into that here, because it tends to be very specialized, but the information tracked includes audio and picture roll numbers, timecode, film keycode and scene/take information. This data is stored in a telecine log known as a FLEX file. This is a tab delimited text file, which is loaded by the editor into a database used by the NLE. It becomes the basis for ingesting footage and is used later as a cross-reference to create various film lists for negative cutting from the edited sequences.

If your use of film is for a commercial or TV show, then it’s less critical to track as much metadata. TV shows generally rely on tape-to-tape (or inside the NLE) color-correction and will almost never return to the film negative. You still want to “protect” for a negative cut, however, so you still need to track the film information. It’s nice to have the metadata as a way to go back to the film if you had to. Plus, some distributors still require cut negative or at least the film lists.

It’s more important that the film be transferred with a set-up that lends itself to proper color grading in post. This means that the initial transfer is going to look a bit flatter without any clipped highlights or crushed blacks. Since each show has its own unique workflow, it is important that the editors, post supervisor and dailies colorists are all on the same page. For instance, they might not want each camera roll to start with a new hour code. Instead, they might prefer to have each videotape reel stick with consistent ascending timecode. In other words, one hour TC value per videotape reel, so you know that 6:00:00:00 is going to be the start of videotape reel 6, and not film camera reel 6 / videotape reel 2, as in my earlier example.

Communication and guidelines are essential. It’s worth noting that the introduction of Digital Intermediate Mastering (DI) for feature films has clouded the waters. Many DI workflows no longer rely on keycode as a negative cut would. Instead, they have adopted a workflow not unlike the spot world, which I describe in the next section. Be sure to nail down the requirements before you start. Cover all the bases, even if there are steps that everyone assumes won’t be used. In the end, that may become a real lifesaver!

The spot world

I’m going to concentrate on the commercial spot world, since many of the readers are more likely to work here than in the rarified world of films and film-originated TV shows. Despite the advances of nonlinear color grading, most ad agencies still prefer to retransfer from the film negative when finishing the commercial.

This is the typical workflow:

- Transfer a one-light to a video format for offline editing, like DVCAM
- Offline edit with your NLE of choice
- Generate transfer lists for the colorist based on the approved cut
- Retransfer (supervised correction) selects to Digibeta or HD for finishing
- Online editing/finishing plus effects

In this world, often different labs and transfer facilities, as well as editorial shops, may be used for each of these steps. Communication is critical. In many cases the director and DP may not be involved in the transfer and editing stages of the project, so the offline editor frequently plays the role of a producer. This is how spot editors worked in the film days and how many of the top commercial cutters still work today in New York, LA, Chicago or London.

In the first two steps, the objective is to get all of the footage that was shot ready to edit in the least time-consuming and most inexpensive manner possible. No time wasted in color-correction or using more expensive tape formats just to make creative decisions. The downside to this approach is that the client sometimes sees an image that isn’t as good as it could be (and will be in the end). This means the editor might have to do some explaining or add some temporary color-correction filters, just so the client understands the potential.

When the offline editing is done, the editor must get the correct info to the colorist who will handle the retransfer of the negative. For example, if each camera roll used a different hour digit, it will be important for the editor to know – and to relay – the correct relationship between camera rolls and timecode starts. For instance, if a hole punch was not used, then does 1:00:00:00 match “picture start” on the camera one leader? Does it match the 2-pop on the countdown? Does it match the first frame of the slate?

When film negative is retransferred, the colorist will transfer only the shots used in the finished cut of the commercial. Standard procedure is to transfer the complete shot “flash-to-flash”. In other words, from the start to the end of exposure on that shot. If it’s too long – as in an extended recording with many takes – then the colorist will transfer the shot as cut into the spot, plus several seconds of “handles”. This is almost always a client-supervised session and it can easily take 6-8 hours to work through the 40-50 shots that make up a fast paced spot.

The reason it’s important to know how the timecode corresponds to the original transfer, is because the colorist will use these same values in the retransfer. The colorist will line up camera roll one to a start frame that matches 1:00:00:00. If a shot starts at 1:05:10:00, then the colorist will roll down to that point, color-correct the shot and record it to tape with the extra handle length. Colorists will work in the ascending scene order of the source camera rolls – not in the order in which these shots occur in the edited sequence. This is done so that film negative rolls are shuttled back and forth as little as possible.

As shots are recorded to videotape, matching source timecode will be recorded to the video master. As a result, the videotape transfer master will have ascending timecode values, but the timecode will not be contiguous. The numbers will jump between shots. During the online editing (finishing) session, the new footage will be batch-captured according to the shots in the edited sequence, so it’s critical that the retransferred shots match the original dailies as frame-accurately as possible. Otherwise the editor would be forced to match each shot visually! Therefore, it’s important to have a sufficient amount of footage before and after the selected portion of the shot, so that the VTR can successfully cue, preroll and be ingested. If all these steps are followed to the letter, then the online edit (or the “uprez” process) will be frame-accurate compared with the approved rough cut of the spot.

To make sure this happens smoothly, you need to give the colorist a “C-mode” list. This is an edit decision list that is sorted in the ascending timecode order of the source clips. This sort order should correspond to the same ascending order of shots as they occur on the camera rolls. Generating a proper C-mode EDL in some NLEs can be problematic, based on how they compute the information. Final Cut is especially poor at this. A better approach is to generate a log-style batch list. The colorist doesn’t use these files in an electronic fashion anyway, so it doesn’t matter if it’s an EDL, a spreadsheet, a hand-written log or a PDF. One tactic I take in FCP is to duplicate the sequence and strip out all effects, titles and audio from the dupe. Next, I copy & paste the duped sequence to a new, blank bin, which creates a set of corresponding subclips. This can be sorted and exported as a batch list. The batch list, in turn, can be further manipulated. You may add color correction instructions, reference thumbnail images and so on.
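Conceptually, a C-mode list is nothing more than the events of the cut re-sorted by source reel and ascending source timecode. Here’s a minimal Python sketch of that sort; the reel names and timecodes are made up for illustration:

```python
# A minimal sketch of building a C-mode list: sort the events of a cut by
# source reel and ascending source timecode, not by their position in the
# edited sequence. Reel names and timecodes here are hypothetical.
def tc_to_frames(tc, fps=30):
    h, m, s, f = (int(x) for x in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

events = [
    {"reel": "001", "src_in": "03:12:05:10"},   # appears first in the cut...
    {"reel": "001", "src_in": "01:05:10:00"},   # ...but earlier on the roll
    {"reel": "002", "src_in": "04:20:00:15"},
]

c_mode = sorted(events, key=lambda e: (e["reel"], tc_to_frames(e["src_in"])))
print([e["src_in"] for e in c_mode])
# -> ['01:05:10:00', '03:12:05:10', '04:20:00:15']
```

Whether the deliverable ends up as an EDL, a spreadsheet or a PDF, this source-order sort is what the colorist actually needs.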

Once I get the tape back from the retransfer session, I will Media Manage (FCP) or Decompose (Avid) the sequence to create a new offline sequence. These clips can then be batch-captured for the final sequence with full-quality video (also called “uprezzing”). In some cases, FCP’s Media Manager has let me down and I’ve had to resort to exporting an EDL and using that as a basis for the batch capture. EDLs have proven to be pretty bullet-proof in the spot world.

Even though digital is where it’s at – or so I’ve heard – film will be here for years. So don’t forget how to work with it. If you’ve never had to work with it yet, no time like the present to learn. Your day will come soon.

©2009 Oliver Peters

The annual Editors Retreat is coming


If winter is the time for a bit of R&R in Florida, then it’s time to consider the annual Editors Retreat, now in its fifth year. This is an opportunity to spend a few days with a select group of editors for professional training, an exchange of ideas and just plain fun. The Editors Retreat has its origins loosely in the Avid Master Editors Workshop, but has morphed into the Retreat thanks to the efforts of Future Media Concepts. The 2010 Retreat will be at the Deauville Beach Resort in Miami Beach, January 13 – 16. (On a historical note, the Beatles taped their second appearance on The Ed Sullivan Show from the Napoleon Ballroom at the Deauville on February 16, 1964.)

Attendance to the Editors Retreat is limited and registration is for professional editors with five years or more experience. It doesn’t matter which platform you use, because there’s something for everyone. Sessions are handled by FMC trainers and guest editor/speakers, but one unique aspect is the Peer Presentations. Up until December 1, FMC is offering a discount to attendees who are willing to prepare a Peer Presentation of their own. These should be on a technical topic related to a project that they’ve done commercially.

Past examples include:

Producing from the Editor’s Chair: The Hurricane Katrina Project by Stig Daniels

Cutting the Independent Film by Abba Shapiro

Documentary Work and Workflow by Steve Audette

The whole point of the Retreat is to bring together editors from various disciplines and editing platforms and give them the opportunity to learn and share – not only from official training sessions – but also from their own collective experiences. Of course, there is plenty of formal training, including expert tips on Photoshop, Avid, Final Cut, After Effects, mixing and color correction.

One Editors Retreat highlight is a keynote presentation by a leading industry veteran. This year, the keynote speaker will be Christopher Nelson, who is an Emmy-nominated television series and movie editor. Nelson’s credits include episodes of Lost, Six Feet Under, The West Wing, House and Mad Men.

I had a chance to make it to the Editors Retreat the last time it was in Miami Beach and I’ve got to say that it was a blast. In addition to the sessions, there’s plenty of casual time to compare notes with other editors, instructors and speakers, as well as to rub shoulders with product managers for many of the products that we use on a daily basis. So, needless to say, I’m making plans for January. Hope to see you there.

©2009 Oliver Peters

Tips for Small Camera and Hybrid DSLR Production


It started in earnest last year and shows no sign of abating. Videographers are clearly in the midst of two revolutions: tapeless recording and the use of the hybrid still/video camera (HDSLR). The tapeless future started with P2 and XDCAM, but these storage devices have since been joined by other options, including Compact Flash, SD and SDHC memory cards. The acceptance of small cameras in professional operations first took off with DV cameras from Sony and Panasonic, especially the AG-DVX100. These solutions have evolved into cameras like the Sony HVR-Z7U and PMW-EX3 and Panasonic’s AG-HPX170 and AVCCAM product line. Modern compressed codecs have made it possible to record high-quality 1080 and 720 HD footage in smaller form factors than ever before.

This evolution has sparked the revolution of the HDSLR cameras, like the Canon EOS 5D Mark II, the new Canon EOS 7D and 1D Mark IV and the Nikon D90, D300s and D3s, to name a few. Although veteran videographers might have initially scoffed at such cameras, it’s important to note that Canon developed the 5D at the urging of Reuters and the Associated Press, so its photographers could deliver both stills and motion video with the least hassle. Numerous small films, starting with photographer Vincent Laforet’s Reverie, have more than proven that HDSLRs are up to the task of challenging their video cousins. From the standpoint of a news or sports department, we have entered an era where every reporter can become a video journalist, simply by having a small camera at the ready. That’s not unlike the days when reporters carried a Canon Scoopic 16mm, in case something newsworthy happened.

These cameras come with challenges, so here is some advice that will make your experience more successful:

1. Ergonomics / stability – Both small video camcorders and HDSLRs are designed for handheld, not shoulder-mounted, operation. This isn’t a great design for stability while recording motion. In order to get the best image out of these cameras, invest in an appropriate tripod and fluid head. For more advanced operations, check out the various camera mounting accessories from companies like Zacuto and Red Rock Micro.

2. Rolling shutter – This phenomenon affects all CMOS cameras to varying degrees. Because the sensor is read out line by line, there is a time differential between information at the top and the bottom of the frame, so rapid horizontal movement skews the image. The HDSLRs have been criticized for this defect, but other CMOS cameras, like the Sony EX models or the RED One, display the same artifact to a lesser degree. It can be minimized by using a tripod and slow (or no) camera movement.

3. Focus – One of the reasons shooters like HDSLRs is the large image sensor (compared to video cameras), which, combined with still-photo and film lenses, provides a shallow depth-of-field. This is a mixed blessing when you are covering a one-time event. Still-photo zoom lenses aren’t mechanically designed to be zoomed and focused during a shot the way film or video zoom lenses are, which makes it harder to nail the shot on-the-fly. Since the depth-of-field is shallow, focus is also less forgiving. Lastly, focusing is often done with an LCD screen instead of a high-quality viewfinder. Many shooters using both small video cameras and HDSLRs have added an externally-mounted LCD monitor as a better device for judging shots.

4. Audio – The issue of audio depends on whether we are talking about a Canon 5D or a Panasonic 170. Professional and even prosumer camcorders have been designed with mic connections in mind; to date, HDSLRs have not. If you are shooting extensive sync-sound projects with a hybrid camera, consider using double-system sound with a separate recorder and a (human) mixer. At the very least, you’ll want to add an XLR mic adapter/mixer, like the BeachTek DXA-5D.

5. Movie files – Each of these cameras records its own specific format, codec and file wrapper. Production and post personnel have become comfortable with P2 and XDCAM, but the NLE manufacturers are still catching up to the best way of integrating consumer AVCHD content or files from these HDSLRs. Regardless of the camera system you plan to use, make sure that the file format is compatible with (or easily transcoded to) your NLE of choice.

6. Capacity – Most of these cameras use a recording medium formatted as FAT32. This limits a single file to 4GB, which in the case of the Canon 5D means the longest recording cannot exceed 12 minutes of HD (1920x1080p at 30fps). Unlike P2, there is no spanning provision to extend the length of a single recording, so plan your shot list to stay within the file limit and bring enough media. In the case of P2, many productions bring along a “data wrangler” and a laptop. This person offloads the P2 cards to drives and then reformats (erases) the cards, so the crew can continue recording throughout the day with a limited number of P2 cards.
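The 4GB ceiling is easy to sanity-check against a camera’s bitrate. Here’s a quick back-of-the-envelope calculation; the ~45 Mbps combined video/audio figure for 1080p30 H.264 is an assumption on my part, so plug in the numbers from your own camera’s spec sheet:

```python
# Estimate the longest single clip that fits under a FAT32 file-size cap.

def recording_limit_seconds(file_cap_gb, bitrate_mbps):
    """Longest single clip (in seconds) that fits under the file-size cap."""
    cap_bits = file_cap_gb * (1024 ** 3) * 8   # GB -> bits
    return cap_bits / (bitrate_mbps * 1_000_000)

# Assumed ~45 Mbps combined stream for 1080p30 H.264 (hypothetical figure)
seconds = recording_limit_seconds(4, 45.0)
print(f"{seconds / 60:.1f} minutes")  # in the ballpark of 12 minutes
```

Halving the bitrate (say, for 720p material) roughly doubles the available recording time, which is why different cameras hit the 4GB wall at very different clip lengths.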

7. Back-up – Always back up your camera media onto at least two devices in the original file format. I’ve known producers who merely transferred the files to the edit system’s local array and then trashed the camera media, believing the files were safe. Unfortunately, I’ve seen Avids quarantine files, making them inaccessible. On rare occasion, I’ve also seen Final Cut Pro media files simply disappear. The moral of the story is to treat your original camera media like film negative: make two verified back-ups and store them in a safe place in case you ever need them again.
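One way to make those back-ups genuinely “verified” is to checksum each file after copying and compare the copy’s hash against the original. A minimal sketch of that idea; the volume paths are hypothetical, and MD5 is used purely as an integrity check, not for security:

```python
import hashlib
import shutil
from pathlib import Path

def md5_of(path, chunk=1024 * 1024):
    """Hash a file in chunks so multi-gigabyte clips don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def backup_verified(card_dir, dest_dir):
    """Copy every file from the card, then confirm each copy's checksum."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for src in Path(card_dir).rglob("*"):
        if src.is_file():
            target = dest / src.relative_to(card_dir)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, target)   # preserves timestamps
            if md5_of(src) != md5_of(target):
                raise IOError(f"Checksum mismatch: {target}")

# Hypothetical usage -- run once per back-up drive, before erasing the card:
# backup_verified("/Volumes/EOS_CARD", "/Volumes/Backup_A/shoot_day1")
# backup_verified("/Volumes/EOS_CARD", "/Volumes/Backup_B/shoot_day1")
```

Only after both copies pass the checksum comparison should the data wrangler reformat the card for the next take.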

The new generation of small video camcorders and Hybrid DSLRs offers the tantalizing combination of lower operating cost and stunning imagery. That’s only possible with some care and planning. These tools aren’t right for every application, but the choices will continue to grow in the coming years. Those who embrace the trend will find new and exciting production options.

© 2009 Oliver Peters

Written for NewBay Media and TV Technology magazine