Why 4K

Ever since the launch of RED Digital Cinema, 4K imagery has become an industry buzzword. The concept stems from 35mm film post, where a 4K digital scan of a film frame is considered full resolution and a 2K scan half resolution. In the proper use of the term, 4K refers only to frame dimensions, although it is frequently and incorrectly used as an expression of visual resolution or perceived sharpness. There is no single 4K size, since it varies with how it is used and the related aspect ratio. For example, full-aperture film 4K is 4096 x 3112 pixels, while academy-aperture 4K is 3656 x 2664. The RED One and EPIC use several different frame sizes. Most displays use the Quad HD standard of 3840 x 2160 (a multiple of 1920 x 1080), while the Digital Cinema Projection standard is 4096 x 2160 for 4K and 2048 x 1080 for 2K. The DCP standard is a “container” specification, which means the 2.40:1 or 1.85:1 film aspects are fit within these dimensions and the difference is padded with black pixels.
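
To make the “container” idea concrete, here’s a quick back-of-the-envelope calculation (a sketch only – actual DCI deliverable sizes are rounded a little differently, such as 4096 x 1716 for scope):

```python
# Fit a film aspect ratio inside the 4K DCP container; the leftover is black.
def dcp_padding(cw, ch, aspect):
    if cw / ch > aspect:                  # container is wider: pad the sides
        w = round(ch * aspect)
        return (w, ch), (cw - w) // 2, "columns per side"
    h = round(cw / aspect)                # container is taller: pad top/bottom
    return (cw, h), (ch - h) // 2, "rows top and bottom"

print(dcp_padding(4096, 2160, 2.40))   # ((4096, 1707), 226, 'rows top and bottom')
print(dcp_padding(4096, 2160, 1.85))   # ((3996, 2160), 50, 'columns per side')
```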

Thanks to the latest interest in stereo 3D films, 4K-capable projection systems have been installed in many theaters. The same system that can display two full-bandwidth 2K signals can also be used to project a single 4K image. Even YouTube offers some 4K content, so larger-than-HD production, post and distribution have quickly gone from the lab to reality. For now, though, most distribution is still predominantly 1920 x 1080 HD or a slightly larger 2K film size.

Large sensors

The 4K discussion starts at sensor size. Camera manufacturers have adopted larger sensors to emulate the look of film for characteristics such as resolution, optics and dynamic range. Although different sensors may be of a similar physical dimension, they don’t all use the same number of pixels. A RED EPIC and a Canon 7D use similarly sized sensors, but the resulting pixels are quite different. Three measurements come into play: the actual dimensions, the maximum area of light-receiving pixels (photosites) and the actual output size of recorded frames. One manufacturer might use fewer, but larger photosites, while another might use more pixels of a smaller size that are more densely packed. There is a very loose correlation between actual pixel size, resolution and sensitivity. Larger pixels yield more stops and smaller pixels give you more resolution, but that’s not an absolute. RED has shown with EPIC that it is possible to have both.

The biggest visual attraction of large-sensor cameras appears to be the optical characteristics they offer – namely a shallower depth of field (DoF). Depth of field is a function of aperture and focal length, so larger sensors don’t inherently create a shallow depth of field and out-of-focus backgrounds. But because larger sensors require longer focal lengths to cover an equivalent field of view compared with standard 2/3-inch video cameras, a shallower depth of field is easier to achieve, which makes these cameras the preferred creative tool. Even if you work with a camera today that doesn’t provide a 4K output, you are still gaining the benefits of this engineering. If your target format is HD, you will get similar results – as it relates to these optical characteristics – regardless of whether you use a RED, an ARRI ALEXA or an HDSLR.

Camera choices

Quite a few large-sensor cameras have entered the market in the past few years. Typically these use a so-called Super 35mm-sized sensor, meaning one of a dimension comparable to a frame of 3-perf 35mm motion picture film. Some examples are the RED One, RED EPIC, ARRI ALEXA, Sony F65, Sony F35, Sony F3 and Canon 7D, among others. That list has just grown to include the brand new Canon EOS C300 and the RED SCARLET-X. Plus, there are other variations, such as the Canon EOS 5D Mark II and EOS 1D X (even bigger sensors) and the Panasonic AF100 (Micro Four Thirds format). Most of these deliver an output of 1920 x 1080, regardless of the sensor. RED, of course, sports up to 5K frame sizes and the ALEXA can also generate a 2880 x 1620 output when ARRIRAW is used.

This year was the first time the industry at large started to take 4K seriously, with new 4K cameras and post solutions. Sony introduced the F65, which incorporates a 20-megapixel 8K sensor. Like other CMOS cameras, the F65 uses a Bayer light filtering pattern, but unlike the others, Sony has deployed more green photosites – one for each pixel in the 4K image. Today, this 8K sensor can yield 4K, 2K and HD images. The F65 will be Sony’s successor to the F35 and become a sought-after tool for TV series and feature film work, challenging RED and ARRI.

November 3rd became a day for competing press events when Canon and RED Digital Cinema both launched their newest offerings. Canon introduced the Cinema EOS line of cameras designed for professional, cinematic work. The first products seem to be straight out of the lineage that stems from Canon’s original XL1 or maybe even the Scoopic 16mm film camera. The launch was complete with a short Blade Runner-esque demo film produced by Stargate Studios, along with a new film called Möbius, shot by Vincent Laforet (the photographer who launched the 5D revolution with his short film Reverie).

The Canon EOS C300 and EOS C300 PL use an 8.3MP CMOS Super 35mm-sized sensor (3840 x 2160 pixels). For now, these only record at 1920 x 1080 (or 1280 x 720 overcranked) using the Canon XF codec. So, while the sensor is a 4K sensor, the resulting images are standard HD. The difference between this and the way Canon’s HDSLRs record is a more advanced downsampling technology, which delivers the full pixel information from the sensor to the recorded frame without line-skipping and excessive aliasing.

RED launched SCARLET-X to a fan base that has been chomping at the bit for years waiting for some version of this product. It’s far from the original concept of SCARLET as a high-end “soccer mom” camera (fixed lens, 2/3” sensor, 3K resolution with a $3,000 price tag). In fact, SCARLET-X is, for all intents and purposes, an “EPIC Lite”. It has a higher price than the original SCARLET concept, but also vastly superior specs and capabilities. Unlike the Canon release, it delivers 4K recorded motion images (plus 5K stills) and inherits some of EPIC’s developing feature set, like HDRx (high dynamic range imagery).

If you think that 4K is only a high-end game, take a look at JVC. This year JVC has toured a number of prototype 4K cameras based on a proprietary new LSI chip technology that can record a single 3840 x 2160 image or two 1920 x 1080 streams for the left and right eye views of a stereo 3D recording. The GY-HMZ1U is a derivative of this technology and uses dual 3.32MP CMOS sensors for stereo 3D and 2D recordings.

Post at 4K

Naturally the “heavy iron” systems from Quantel and Autodesk have been capable of post at 4K sizes for some time; however, 4K is now within the grasp of most desktop editors. Grass Valley EDIUS, Adobe Premiere Pro and Apple Final Cut Pro X all support editing with 4K media and 4K timelines. Premiere Pro even includes native camera raw support for RED’s .r3d format at up to EPIC’s 5K frames. Avid just released its 6.0 version (Media Composer 6, Symphony 6 and NewsCutter 10), which includes native support for RED One and EPIC raw media; for now, though, edited sequences are still limited to a maximum size of 1920 x 1080. With the $299 FCP X and RED’s free REDCINE-X (or REDCINE-X PRO) media management and transcoding tool, you, too, can be editing with relative ease on DCP-compliant 4K timelines.

Software is easy, but what about hardware? Both AJA and Blackmagic Design have announced 4K solutions using the KONA 3G or DeckLink 4K cards. Each uses four HD-SDI connections to feed four quadrants of a 4K display or projector at up to 4096 x 2160 sizes. At NAB, AJA previewed for the press its upcoming 5K technology, code-named “Riker”. This is a multi-format I/O system in development for SD up to 5K sizes, complete with a high-quality, built-in hardware scaler. According to AJA, it will be capable of handling high-frame-rate 2K stereo 3D images at up to 60Hz per eye and 4K stereo 3D at up to 24/30Hz per eye.

Even if you don’t own such a display, 27″ and 30″ computer monitors, such as an Apple Cinema Display, feature native display resolutions of up to 2560 x 1600 pixels. Sony and Christie both manufacture a number of 4K projection and display solutions. In keeping with its plans to round out a complete 4K ecosystem, RED continues in the development of REDRAY PRO, a 4K player designed specifically for RED media.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

Easy Canon 5D post – Round III

The interest in HDSLR production and post shows no sign of waning. Although some of this information will seem redundant with earlier articles (here and here), I decided it was a good time to set down a working recipe of how I like to deal with these files. To some extent this is a “refresh” of the Round II article, given the things I’ve learned since then. The Canon cameras are the dominant choice, but that’s for today. Nikon is coming on strong with its D7000 and Panasonic has made a serious entry into the large-format-sensor video camera market with its Micro Four Thirds AG-AF100. In six months, the post workflows might once again change.

To date, I have edited about 40 spots and short-form videos that were all shot using the Canon EOS 5D Mark II. Many of the early post issues, like the need to convert frame rates, are now behind us. This means fewer variables to consider. Here is a step-by-step strategy for working with HDSLR footage, specifically from the Canon 5D/7D/1D cameras.

Conversion

Before doing anything with the camera files, it is IMPERATIVE that you clone the camera cards. This is your “negative” and you ALWAYS want to preserve it in its original and UNALTERED form. One application to consider for this purpose is VideoToolShed’s Offloader.
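
If you’re curious what such a tool does under the hood, here is a minimal sketch of a checksum-verified copy – the core of any offload utility, minus the many safeguards a real one adds. The paths are hypothetical examples:

```python
# Minimal sketch: copy every file off the card and verify each clone by checksum.
import hashlib, shutil
from pathlib import Path

def md5sum(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def clone_card(card_root, backup_root):
    for src in Path(card_root).rglob("*"):
        if src.is_file():
            dst = Path(backup_root) / src.relative_to(card_root)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)                  # copy2 preserves timestamps
            assert md5sum(src) == md5sum(dst), src  # verify before trusting it

clone_card("/Volumes/EOS_CARD", "/Volumes/Backup/SF01A001")
```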

Once that’s out of the way, the first thing I do with files from a Canon 5D or 7D is convert them to the Apple ProRes codec. Yes, various NLEs can natively work with the camera’s H.264 movie files, but I still find this native performance to be sluggish. I prefer to organize these files outside of the NLE and get them into a codec that’s easy to deal with in just about any editing or compositing application. Generally, I will use ProResLT; however, if there is a real quality concern – because the project may go through heavier post – then use standard ProRes or ProResHQ. Avid editors may choose to use an Avid DNxHD codec instead.

I have tried the various encoders, like Compressor or Grinder, but in the end have come back to MPEG Streamclip. I haven’t tried 5DtoRGB yet, because it is supposed to be a very slow conversion and most TV projects don’t warrant the added quality it may offer. I have also had unreliable results using the FCP Log and Transfer EOS plug-in. So, in my experience, MPEG Streamclip has not only been the fastest encoder, but will easily gobble a large batch without crashing and delivers equal quality to most other methods. 32GB CF cards will hold about 90-96 minutes of Canon video, so a shoot that generates 4-8 cards in a day means quite a lot of file conversion and you need to allow for that.
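
That capacity figure is easy to sanity-check. Assuming the camera’s H.264 averages roughly 45 Mbps (my approximation, not an official spec):

```python
# Rough capacity check for a 32GB CF card at an assumed ~45 Mbps bitrate.
card_gb, bitrate_mbps = 32, 45
seconds = card_gb * 8 * 1000 / bitrate_mbps   # decimal gigabytes -> megabits
print(seconds / 60)                           # ~95 minutes, in line with 90-96
```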

MPEG Streamclip allows you to initiate four processes in the batch at one time, which means that on a 4, 8 or 12-core Mac Pro, your conversion will be approximately real-time. The same conversion runs about 1.5x real-time (slower) using the EOS plug-in. The real strength of MPEG Streamclip is that it doesn’t require FCP, so data conversion can start on location on an available laptop, if you are really in that sort of rush.

Timecode and reel numbers

The Canon camera movie files contain little or no metadata, such as a timecode track. There is a THM file (thumbnail file) that contains a date/time stamp. The EOS plug-in, as well as some applications, use this to derive timecode that more-or-less corresponds to TOD (time-of-day) code. In theory, this means that consecutive clips should not have any timecode overlap, but unfortunately I have not found that to be universally true. In my workflow, I generally never use these THM files. My converted ProRes files end up in separate folders that simply contain the movie files and nothing else.

It is important to settle on a naming strategy for the cards. This designator will become the reel ID number, which will make it easy to trace back to the origin of the footage months later. You may use any scheme you like, but I recommend a simple abbreviation for location/day/camera/card. For example, if you shoot for several days in San Francisco with two cameras, then Day 1, Camera 1, Card 1 would be SF01A001 (cameras are designated as A, B, C, etc.); Day 1, Cam 2, Card 1 would be SF01B001; Day 2, Cam 1, Card 3 would be SF02A003 and so on. These card ID numbers are consistent with standard EDL conventions for numbering videotape reels. Create a folder for each card’s contents using this scheme and make sure the converted ProRes files end up in the corresponding folders.
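
In script form, the scheme amounts to this (a sketch of my convention, not any standard):

```python
# Build a reel ID from location, shoot day, camera and card number.
def reel_id(location, day, camera, card):
    cam_letter = "ABCDEFGH"[camera - 1]   # camera 1 -> A, camera 2 -> B ...
    return f"{location}{day:02d}{cam_letter}{card:03d}"

print(reel_id("SF", 1, 1, 1))   # SF01A001 (Day 1, Cam 1, Card 1)
print(reel_id("SF", 2, 1, 3))   # SF02A003 (Day 2, Cam 1, Card 3)
```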

I use QtChange to add timecode to the movie files. I will do this one folder at a time, using the folder name as the reel number. QtChange will embed the folder name (like SF01A001) into the file as the reel number when it writes the timecode track. I’m not a big fan of TOD code and, as I mentioned, the THM files have posed some problems. Instead, I’ll assign new timecode values in QtChange – typically a new hour digit to start each card. Card 1 starts at 1:00:00:00. Card 2 starts at 2:00:00:00 and so on. If Card 1 rolled over into the next hour digit, I might increment the next card’s starting value. So Card 2 might start at 2:30:00:00 or 3:00:00:00, just depending on the overall project. The objective is to avoid overlapping timecodes.
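
The starting-timecode logic is just as mechanical. A sketch:

```python
# One hour per card keeps clip timecodes from overlapping. If a card rolled
# past the hour, push the next card's start further along with extra_hours.
def start_timecode(card_number, extra_hours=0):
    return f"{card_number + extra_hours:02d}:00:00:00"

print(start_timecode(1))                 # 01:00:00:00 for Card 1
print(start_timecode(2, extra_hours=1))  # 03:00:00:00 if Card 1 ran long
```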

Renaming files

I never change the names of the original H.264 camera files. Since I might need to get back to these files from the converted ProRes media at some point in the future, I will need to be able to match names, like MVI_9877.mov or MVI_1276.mov. This means that I won’t remove the movie file name from the ProRes files either, but it is quite helpful to append additional info to the file name. I use R-Name (a file renaming batch utility) to do this. For example, I might have a set of files that constitute daytime B-roll exterior shots in Boston. With R-Name, I’ll add “-Bos-Ext” after the file name and before the .mov extension.
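
Any batch renamer will do; in script form the step looks like this (folder and tag are hypothetical examples):

```python
# Append a descriptive tag between the camera's file name and the extension.
from pathlib import Path

def tag_clips(folder, tag):
    for clip in Path(folder).glob("MVI_*.mov"):
        clip.rename(clip.with_name(f"{clip.stem}{tag}.mov"))

tag_clips("/Media/SF01A001", "-Bos-Ext")  # MVI_9877.mov -> MVI_9877-Bos-Ext.mov
```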

In the case of interview clips, I’ll manually append a name, like “-JSmith-1” after the movie name. By using this strategy, I am able to maintain the camera’s naming convention for an easy reference back to the original files, while still having a file that’s easy to recognize simply by its name.

Double-system sound

The best approach for capturing high-quality audio on an HDSLR shoot is to bring in a sound mixer and employ film-style, double-system sound techniques. Professional audio recorders, like a Zaxcom DEVA, record broadcast WAVE files, which will sync up just fine and hold sync through the length of the recording. Since the 5D/7D/1D cameras now record properly at 23.98, 29.97 or 25fps, no audio pulldown or speed adjustment should be required for sync.

If you don’t have the budget for this level of audio production, then a Zoom H4n (not the H4) or a Tascam DR-100 are viable options. Record the files at 48kHz sampling in a 16-bit or 24-bit WAVE format. NO MP3s. NO 44.1kHz.

The Zaxcom will have embedded timecode, but the consumer recorders won’t. This doesn’t really matter, because you should ALWAYS use a slate with a clapstick to provide a sync reference. If you use a recorder like a Zaxcom, then you should also use a slate with an LED timecode display. This makes it easy to find the right sound file. In the case of the Zoom, you should write the audio track number on the slate, so that it’s easy to locate the correct audio file in the absence of timecode.

You can sync up the audio manually in your NLE by lining up the clap on the track with the picture – or you can use an application like Singular Software’s PluralEyes. I recommend tethering the output of the audio recorder to the camera whenever possible. This gives you a guide track, which is required by PluralEyes. Ideally, this should have properly matched impedances so it’s useable as a back-up. It may be impractical to tether the camera, in which case, make sure to record reference audio with a camera mic. This may pose more problems for PluralEyes, but it’s better than nothing.
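
PluralEyes’ algorithm is proprietary, but the underlying idea is waveform alignment: slide one track against the other and keep the offset where they match best. Here’s a bare-bones sketch with SciPy, assuming mono WAVE files at the same sample rate (the file names are hypothetical):

```python
# Find the offset (in samples) where the recorder's track best matches the
# camera's guide track, using cross-correlation of the two waveforms.
from scipy.io import wavfile
from scipy.signal import correlate

rate, guide = wavfile.read("camera_guide.wav")   # camera's reference audio
_, master = wavfile.read("zoom_track.wav")       # double-system recording

c = correlate(master.astype(float), guide.astype(float), mode="full")
offset = int(c.argmax()) - (len(guide) - 1)      # where the camera take starts
print(offset / rate, "seconds into the recorder file")
```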

Singular Software has recently introduced DualEyes as a standalone application for syncing double-system dailies.

Your edit system

As you can see, most of this work has been done before ever bringing the files into an NLE application. To date, all of my Canon projects have been cut in Final Cut and I continue to find it to be well-suited for these projects – thanks, in part, to this “pre-edit” file management. Once you’ve converted the files to ProRes or ProResLT, though, they can easily be brought into Premiere Pro CS5 or Media Composer 5. The added benefit is that the ProRes media will be considerably more responsive in all cases than the native H.264 camera files.

Although I would love to recommend editing directly via AMA in Media Composer 5, I’m not quite sure Avid is ready for that. In my own experience, Canon 5D/7D/1D files brought in using AMA as either H.264 or ProRes are displayed at the proper video levels. Unfortunately others have had a different experience, where their files come in with RGB values that exhibit level excursions into the superwhite and superblack regions. The issue I’ve personally encountered is that when I apply non-native Avid AVX effects, like Boris Continuum Complete, Illusion FX or Sapphire, the rendered files exhibit crushed shadow detail and a shifted gamma value. For some reason, the native Avid effects, like the original color effect, don’t cause the same problem. However, it hasn’t been consistent – that is, levels aren’t always crushed.

Recommendations for Avid Media Composer editors

If you are an Avid editor using Media Composer 5, then I have the following recommendations for when you are working with H.264 or ProRes files. If you import the file via AMA and the levels are correct (black = 16, peak white = 235), then transcode the selected cut to DNxHD media before adding any effects and you should be fine. On the other hand, if AMA yields incorrect levels (black = 0, peak white = 255), then avoid AMA. Import “the old-fashioned way” and set the import option for the incoming file as having RGB levels. Avid has been made aware of these problems, so this behavior may be fixed in some future patch.
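
For reference, the difference between those two interpretations is one simple linear scale – the standard 8-bit mapping between full-swing RGB and video levels:

```python
# Map full-range 0-255 code values into the 16-235 video range (8-bit).
import numpy as np

def full_to_video(rgb):
    return np.round(16 + rgb.astype(float) * (235 - 16) / 255).astype(np.uint8)

print(full_to_video(np.array([0, 128, 255])))   # [ 16 126 235]
```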

There is a very good alternative for Avid Media Composer editors using MPEG Streamclip for conversion. Instead of converting the files to one of the ProRes codecs, convert them to Avid DNxHD (using 709 levels), which is also available under the QuickTime options. I have found that these files link well to AMA and, at least on my system, display correct video levels. If you opt to import these the “old” way (non-AMA), the files will come in as a “fast import”. In this process, the QuickTime files are copied and rewrapped as MXF media, without any additional transcoding time.

“Off-speed” files, like “overcranked” 60fps clips from a Canon 7D can be converted to a different frame rate (like 23.98, 25 or 29.97) using the “conform” function of Apple Cinema Tools. This would be done prior to transcoding with MPEG Streamclip.

Avid doesn’t use the embedded reel number from a QuickTime file in its reel number column. If this is important for your workflow, then you may have to manually modify files after they have been imported into Media Composer or generate an ALE file (QtChange or MetaCheater) prior to import. That’s why a simple mnemonic, like SF01A001, is helpful.
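
Rolling a minimal ALE yourself is not hard. The sketch below writes the tab-delimited skeleton with a trimmed-down column set – the clip data is hypothetical, and a real ALE typically carries many more columns:

```python
# Write a minimal ALE so Media Composer picks up reel (Tape) numbers at import.
clips = [("MVI_9877-Bos-Ext", "01:00:00:00", "01:00:32:12", "SF01A001")]

with open("cards.ale", "w") as ale:
    ale.write("Heading\nFIELD_DELIM\tTABS\nVIDEO_FORMAT\t1080\nFPS\t29.97\n\n")
    ale.write("Column\nName\tStart\tEnd\tTape\n\n")
    ale.write("Data\n")
    for name, start, end, tape in clips:
        ale.write(f"{name}\t{start}\t{end}\t{tape}\n")
```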

Although this workflow may seem a bit convoluted to some, I love the freedom of being able to control my media in this way. I’m not locked into fixed metadata formats like P2. This freedom makes it easier to move files through different applications without being wedded to a single NLE.

Here are some more options for Canon HDSLR post from another article written for Videography magazine.

©2010 Oliver Peters

Grind those EOS files!

I have a love/hate relationship with Apple Compressor and am always on the lookout for better encoding tools. Part of our new file-based world is the regular need to process/convert/transcode native acquisition formats. This is especially true of the latest crop of HDSLRs, like the Canon EOS 5D Mark II and its various siblings. A new tool in this process is Magic Bullet Grinder from Red Giant Software. Here’s a nice description by developer Stu Maschwitz as well as another review by fellow editor and blogger, Scott Simmons.

I’ve already pointed out some workflows for getting the Canon H.264 files into an editable format in a previous post. Although Avid Media Composer 5, Adobe Premiere Pro CS5 and Apple Final Cut Pro natively support editing with the camera files – and although there’s already a Canon EOS Log and Transfer plug-in for FCP – I still prefer to convert and organize these files outside of my host NLE. Even with the newest tools, native editing is clunky on a large project and the FCP plug-in precludes any external organization, since the files have to stay in the camera’s folder structure with their .thm files.

Magic Bullet Grinder offers a simple, one-step batch conversion utility that combines several functions that otherwise require separate applications in other workflows. Grinder can batch-convert a set of HDSLR files, add timecode and simultaneously create proxy editing files with burn-in. In addition, it will upscale 720p files to 1080p. Lastly, it can conform frame rates to 23.976fps. This is helpful if you want to shoot 720p/60 with the intent of overcranking (displayed as slow motion at 24fps).
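
The overcranking math is simple, since a conform merely re-stamps the frame rate without touching the frames themselves:

```python
# Conforming changes playback rate only, so slow motion is the rate ratio.
shot_rate, conform_rate = 59.94, 23.976
print(conform_rate / shot_rate)   # 0.4  -> plays back at 40% speed
print(shot_rate / conform_rate)   # 2.5  -> a 2.5x overcrank
print(600 / conform_rate)         # a 10-second 59.94p clip runs ~25 seconds
```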

The main format files are converted to either the original format (with added timecode), ProRes, ProRes 4444 or two quality levels of PhotoJPEG. Proxies are either ProRes Proxy or PhotoJPEG, with the option of several frame size settings. In addition, proxy files can have a burn-in with various details, such as frame numbers, timecode, file name + timecode or file name + frame numbers. Proxy generation is optional, but it’s ideal for offline/online editing workflows or if you simply need to generate low-bandwidth files for client review.

Grinder’s performance is based on the number of cores. It sends one file to each core, so in theory, eight files would be simultaneously processed on an 8-core machine. Speed and completion time will vary, of course, with the number, length and type of files and whether or not you are generating proxies. I ran a head-to-head test (main format only, no proxy files) on my 8-core Mac Pro with MPEG Streamclip and Compressor, using 16 H.264 Canon 5D files (about 1.55GB of media or 5 minutes of footage). Grinder took 12 minutes, Compressor 11 minutes and MPEG Streamclip 6 minutes. Of course, neither Compressor nor MPEG Streamclip would be able to handle all of the other functions – at least not within the same, simplified process. The conversion quality of Magic Bullet Grinder was quite good, but like MPEG Streamclip, it appears that ProRes files are generated with the QuickTime “automatic gamma correction” set to “none”. As such, the Compressor-converted files appeared somewhat lighter than those from either Grinder or MPEG Streamclip.

This is a really good effort for a 1.0 product, but in playing with it, I’ve discovered it has a lot of uses outside of HDSLR footage. That’s tantalizing and brings to mind some potential suggestions as well as issues with the way the product currently works. First of all, I was able to convert other files, such as existing ProRes media. In this case, I would be interested in using it to ONLY generate proxy files with a burn-in. The trouble now is that I have to generate both a new main file (which isn’t needed) and the proxy. It would be nice to have a “proxy-only” mode.

The second issue is that timecode is always newly generated from the user entry field. Grinder doesn’t read and/or use an existing QuickTime timecode track, so you can’t use it to generate a proxy with a burn-in that matches existing timecode. In fact, if your source file has a valid timecode track, Grinder generates a second timecode track on the converted main file, which confuses both FCP and QuickTime Player 7. Grinder also doesn’t generate a reel number, which is vital data used by many NLEs in their media management.

I would love to see other format options. For instance, I like ProResLT as a good format for these Canon files. It’s clean and consumes less space, but isn’t a choice with Grinder. Lastly, there are the conform options. When Grinder conforms 30p and 60p files to 24p (23.976), it’s merely doing the same as Apple Cinema Tools by rewriting the QuickTime playback rate metadata. The file isn’t converted, but simply told to play more slowly. As such, it would be great to have more options, such as 30fps to 29.97fps for the pre-firmware-update Canon 5D files, or conform to 25fps for PAL countries.

I’ve seen people comment that it’s a shame it won’t convert GoPro camera files. In fact it does! Files with the .mp4 extension are seen as an unsupported format. Simply change the file extension from .mp4 to .mov and drop it into Grinder. Voila! Ready to convert.
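
In script form, the whole trick is a batch re-extension (folder hypothetical) – no re-encoding involved:

```python
# Rename .mp4 to .mov so Grinder will accept the GoPro files as-is.
from pathlib import Path

for clip in Path("/Media/GoPro").glob("*.mp4"):
    clip.rename(clip.with_suffix(".mov"))
```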

At $49 Magic Bullet Grinder is a great, little utility that can come in handy in many different ways. At 1.0, I hope it grows to add some of the ideas I’ve suggested, but even with the current features, it makes life easier in so many different ways.

©2010 Oliver Peters

Solutions to Improve FCP’s Media Management

Media management has long been considered Apple Final Cut Pro’s Achilles’ heel. In reality, FCP has gotten better in this regard and does a pretty decent job of linking project master clips to media. The shortcomings of FCP media management become apparent when projects are moved around among different edit systems, hard drives and editors. I’ve started to dabble with a few different applications that improve on FCP’s native abilities. I’ll bring these to you on an irregular basis, once I get a chance to do a bit more testing.

The first of these is FcpReconnect from VideoToolShed. This is the brainchild of Bouke Vahl, a Dutch editor and software developer. FcpReconnect may be used in a number of different ways, but in general, works by linking files based on matching reel numbers and timecode. For FCP editors, it provides an excellent solution to projects that use an offline-online edit workflow. Since reel number and timecode are the key, you are less subject to FCP’s need to have file names that completely match. For most workflows, there are two basic ways of using FcpReconnect: a) consolidation and relink or b) relink via XML.

Method A – Consolidate and Relink

As a test, I started with footage from a recent Canon EOS 5D Mark II project. The native camera files are 1920×1080 H.264, 30fps and have no reel numbers or timecode. As I described in a previous post, I converted the media to Apple ProResLT in Compressor, conformed the files to 29.97fps in Cinema Tools and added reel numbers and timecode using QtChange – another handy application from VideoToolShed.

To test FcpReconnect, I used Compressor again to convert the hi-res ProResLT “master” files into DV anamorphic “proxy” files for offline editing. The DV files have the same reel number and timecode, but aren’t an exact file name match, as they had a “DV” suffix appended to the clip name.

I created an FCP edit project (NTSC DV anamorphic) and assembled a basic edit sequence using the DV proxy files. In this example, the DV clips are independent from the hi-res files, which would be the workflow if I decided to do an offline edit on my laptop or gave the files to another editor to cut segments for me. Only the DV files would be the sources in this edit.

Once the edit is done, the next step is to use FCP’s Media Manager to create an offline project. Set the target format to match the hi-res media (1920×1080/30p ProResLT) and set short handle lengths. This creates a new project, with only the clips that were used in the cut. The media for these hi-res clips will show up as “offline”, of course. Next, export a Batch List of this new sequence.

Open FcpReconnect and first make sure you have selected the right timecode standard in the Set Up pulldown menu. VideoToolShed is in The Netherlands, so the default at first launch will be PAL. Once you’ve set this, select the target media folder (the hi-res HD files) and select the Batch List that you just exported. Once it finds all of the matches, you have a few options.

For this test, I chose to use clip names and copy/trim self-contained media of the selected files. This is the equivalent of Avid’s “consolidate” feature.

The clips that are used in the edited sequence are copied to a new folder with a duration equal to the edited length on the timeline, plus the handles. It also renames the media files to match the clip names used in the sequence.

Return to FCP and reconnect the media (currently offline) of the hi-res sequence to the newly consolidated files. Typically, once the first file is located, the others will be automatically found. You will get an FCP dialogue box, because the new media attributes will not completely match the expected attributes. This is normal. Simply click “continue” and you’ll be OK.

Let me caution that I would still avoid wildly renaming clips inside the FCP browser. The Canon files are sequentially numbered movie files. I tried some tests in which I completely renamed these files. For example, “MVI_2061-DV” might have been renamed to “Richard CU”. Most of the time this worked fine, but I did have a few clips that would not relink. My recommendation is still to use other columns in FCP or at least to leave the number as part of the new clip name. This will make it easier if you must manually locate a few files. I had no such problems in the tests where I left the master clip name the same as its corresponding media file name.

Method B – Relink via XML

An alternate method is to skip the consolidation step. After all, if you already have the hi-res media on your drives, you might not want to copy these files again. In Method B, you’d start the same way with hi-res and proxy files. Edit the proxy project and then use FCP Media Manager to create a new offline project matching the hi-res format. Export a Batch List AND an XML file from this new offline sequence.

In FcpReconnect, pick the target (hi-res) media folder and the Batch List. Instead of copying media, open the XML file. FcpReconnect analyzes the XML against the Batch List and the target media folder and generates a new XML.

Open this new XML file in Final Cut Pro and select “create new project”. The result will be a new FCP project containing one sequence, which is linked to the hi-res media. If you have done this properly, the sequence settings should match the target HD format (ProResLT in my example).

You can make sure the sequence clips are linked to the right media by checking the media path in “item settings”.

In addition, you can also verify frame-accuracy by placing the proxy edit sequence over the hi-res edit sequence and making sure everything lines up. My tests were all accurate.

VideoToolShed’s FcpReconnect is one of a number of applications being developed to fill in the gaps of Final Cut Pro’s media management. With a little care, it doesn’t take much to make FCP a far more robust NLE.

©2010 Oliver Peters

CoreMelt Lock and Load X

CoreMelt offers a number of GPU-accelerated plug-in sets for Final Cut Pro, Final Cut Express, Motion and After Effects. One powerful collection is CoreMelt Complete V2 (“v-twin”), which is currently up to 200 filters, transitions and generators. However, a very cool, separate filter is their Lock & Load X stabilization plug-in for Final Cut Pro. The original Lock & Load filter was updated to Lock & Load X (a free upgrade for L&L users) shortly after NAB2010 and gained a significant new feature – Rolling Shutter Reduction.

Rolling shutter artifacts – the so-called “jello-cam” effect – have been the bane of CMOS-sensor cameras, most notably the HD-capable DSLR still cameras. The short answer for why this happens is that objects in the frame move during the interval between the moment data is read from the top of the sensor and the moment it is read from the bottom. The visual manifestation is skewing or a wobble to the image on fast horizontal motion or shaky handheld shots. CoreMelt’s Lock & Load X is designed to be used for both standard image stabilization, as well as reduction of these artifacts.
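
To put a rough number on the skew: if the sensor takes tens of milliseconds to read from top to bottom, an object panning across the frame lands at a different horizontal position on each successive row. A worked example with an assumed readout time (not a published spec):

```python
# Estimate the horizontal skew between the top and bottom rows of the frame.
readout_s = 1 / 30        # assume ~33 ms top-to-bottom sensor readout
pan_px_per_s = 1000       # subject crossing ~1000 pixels of frame per second
print(pan_px_per_s * readout_s)   # ~33 px of slant from top to bottom
```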

Final Cut Pro already includes a very good, built-in stabilization filter in the form of Smoothcam – a technology inherited from Shake. So why buy Lock & Load X when you already own Smoothcam? Two answers: speed and rolling shutter artifact reduction. Generally, Lock & Load X is faster than Smoothcam, although this isn’t always a given. CoreMelt claims up to 12 times faster than Smoothcam, but that’s relative. One important factor is the length of the clip. When Smoothcam analyzes a clip to apply stabilization, it must process the entire media clip, regardless of how long a clip was cut into the sequence. If the media clip is five minutes long, then Smoothcam processes all five minutes. Fortunately, this can proceed as a background function.

In contrast, Lock & Load X only analyzes the length of the clip that is actually in the sequence. If you only used ten seconds out of the five minutes, then Lock & Load X only processes those ten seconds. In this example, processing times between Smoothcam and Lock & Load X would be dramatically different. On the other hand, if you used the complete length of the clip, then processing times for the two might be similar. I’m not exactly sure whether Lock & Load X uses the same type of GPU-acceleration as the V2 filters, so I don’t know whether these processing times change with the card you have in your machine. I’m running a stock NVIDIA GeForce 120 in my Mac Pro, so it could be that an ATI or NVIDIA FX4800 card might show even better results with Lock & Load X. I don’t know the answer to that one, but in any case, processing a 1920 x 1080 ProResLT clip that was several seconds long took less than a minute for both stabilization and rolling shutter reduction.

When you compare the stabilized results between Smoothcam and Lock & Load X, you’ll generally prefer the latter. Most of the time the filter doesn’t zoom in quite as far and if you leave some movement in the image (such as with handheld shots), the “float” of the image feels more natural. However, there are exceptions. I tested one clip with a hard vertical adjustment by the cameraman. At that point, Smoothcam looked more natural than Lock & Load X, which introduced a slight rotation in correcting that portion of the clip. Another difference is real-time performance. On my machine, Smoothcam left me with a green render bar and Lock & Load X was orange. In FCP terms, this means that the unrendered Smoothcam clip played without degraded performance, while the Lock & Load X clip dropped frames. Once rendered, there’s no difference, of course, and render times were similar between the two. Again, this result might differ with another display card.

Rolling shutter artifact reduction is not unique to Lock & Load X, but as far as I know, is currently only available in one other, more expensive filter from The Foundry. In CoreMelt’s implementation, you must select the shutter coefficient, which is based on certain camera profiles supplied by CoreMelt with the filter. If you are working with Canon EOS 5D Mark II or EOS 7D footage, simply pick the camera, run the tracking analysis and you are done. You can choose to stabilize, reduce rolling shutter artifacts or both. In many cases, rolling shutter reduction is very subtle, so you might not see a massive change in the image. Sometimes, the filter simply corrects minor vertical distortions in the frame.

One application I find quite useful is with handheld shots that are intended to look like Steadicam shots. Lock & Load X does a nice job of steadying these shots without losing the natural “float” that you want to keep in the image. The “before” version might look decent, but when you compare the “after” version, it is definitely the preferable image. In order for Lock & Load X to do its magic, it has to blow up the image slightly, so that the picture fills out to the edges of the frame. This is true of any stabilization filter, including Smoothcam. Lock & Load X does this expansion within the filter and doesn’t change motion tab size values. The filter includes a “smart zoom” feature – intelligently resizing the image throughout the clip so that the least amount of blow-up is performed at any time. For a subtle stabilization, like the handheld shot example, Lock & Load X will typically zoom the image between 7% and 12% throughout the length of the clip. Thanks to the processing used, the quality of the rendered clip will be better than if you had zoomed in 10% in FCP’s motion tab.

CoreMelt’s Lock & Load X is a specialized filter. When you have the need for this function, it’s hard to beat. Clearly a new selling point is rolling shutter artifact reduction. Pro video cameras aren’t immune to the effect, since even a Sony EX uses a CMOS chip, but it’s a big factor for the HDSLRs. These cameras will continue to be the hot ticket for a while, so Lock & Load X becomes an indispensable tool for editors posting a lot of Canon and Nikon projects.

©2010 Oliver Peters