The NLE that wouldn’t die II


With echoes of Monty Python in the background, two years on, Final Cut Pro 7 and Final Cut Studio are still widely in use. As I noted in my post from last November, I still see facilities with firmly entrenched and mature FCP “legacy” workflows that haven’t moved to another NLE yet. Some were ready to move to Adobe until they learned subscription was the only choice going forward. Others maintain a fanboy’s faith in Apple that the next version will somehow fix all the things they dislike about Final Cut Pro X. Others simply haven’t found the alternative solutions compelling enough to shift.

I’ve been cutting all manner of projects in FCP X since the beginning and am currently using it on a feature film. I augment it in lots of ways with plug-ins and utilities, so I’m about as deep into FCP X workflows as anyone out there. Yet, there are very few projects in which I don’t touch some aspect of Final Cut Studio to help get the job done. Some of that is fueled by need, some by personal preference. Here are some ways that Studio can still work for you as a suite of applications to fill in the gaps.

DVD creation

There are no more version updates to Apple’s (or Adobe’s) DVD creation tools. FCP X and Compressor can author simple “one-off” discs using their export/share/batch functions. However, if you need a more advanced, authored DVD with branched menus and assets, DVD Studio Pro (as well as Adobe Encore CS6) is still a very viable tool, assuming you already own Final Cut Studio. For me, the need to do this has been reduced, but not completely gone.

Batch export

Final Cut Pro X has no batch export function for source clips. This is something I find immensely helpful. For example, many editorial houses specify that their production company client supply edit-friendly “dailies” – especially when final color correction and finishing will be done by another facility or artist/editor/colorist. This is a throwback to film workflows and is most often the case with RED and ALEXA productions. Certainly a lot of the same processes can be done with DaVinci Resolve, but it’s simply faster and easier with FCP 7.

In the case of ALEXA, a lot of editors prefer to do their offline edit with LUT-corrected, Rec 709 images, instead of the flat, Log-C ProRes 4444 files that come straight from the camera. With FCP 7, simply import the camera files, add a LUT filter like the one from Nick Shaw (Antler Post), enable TC burn-in if you like and run a batch export in the codec of your choice. When I do this, I usually end up with a set of Rec 709 color, ProRes LT files with burn-in that I can use to edit with. Since the file name, reel ID and timecode are identical to the camera masters, I can easily edit with the “dailies” and then relink to the camera masters for color correction and finishing. This works well in Adobe Premiere Pro CC, Apple FCP 7 and even FCP X.
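
If you need to approximate this dailies pass without FCP 7, a command-line route can work, too. Here is a minimal sketch using ffmpeg and ffprobe (both assumed installed); the LUT file name and folder layout are hypothetical, and the burn-in assumes an ffmpeg build with font support. Because the output keeps the source file name and embedded timecode, the relink-to-camera-masters trick described above still applies.

```python
# dailies_batch.py -- a sketch of a LUT + timecode burn-in dailies pass.
# Assumes ffmpeg/ffprobe on the PATH; "rec709.cube" and the folder names
# are hypothetical, and the "dailies" folder must already exist.
import glob
import subprocess

LUT = "rec709.cube"   # assumed Log-C -> Rec 709 3D LUT

def source_timecode(path):
    """Read the clip's embedded start TC so the burn-in matches the master."""
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format_tags=timecode:stream_tags=timecode",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True).stdout.split()
    return out[0] if out else "00:00:00:00"

for clip in glob.glob("camera_masters/*.mov"):
    tc = source_timecode(clip)
    tc_esc = tc.replace(":", r"\:")   # drawtext needs escaped colons
    vf = (f"lut3d=file={LUT},"
          f"drawtext=timecode='{tc_esc}':rate=24000/1001:"
          "x=20:y=20:fontsize=36:fontcolor=white")
    subprocess.run(
        ["ffmpeg", "-i", clip, "-vf", vf,
         "-c:v", "prores_ks", "-profile:v", "1",  # profile 1 = ProRes LT
         "-timecode", tc,                         # keep TC in the dailies file
         "-c:a", "copy",
         clip.replace("camera_masters", "dailies")],
        check=True)
```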

Timecode and reel IDs

When I work with files from the various HDSLRs, I prefer to convert them to ProRes (or DNxHD) and add timecode and reel ID info. In my eyes, this makes the file professional video media that’s much more easily dealt with throughout the rest of the post pipeline. I have a specific routine for doing this, but when some of these steps fail due to a file error, I find that FCP 7 is a good back-up utility. From inside FCP 7, you can easily add reel IDs and also modify or add timecode. This metadata is embedded into the actual media file and readable by other applications.
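
For comparison, ffmpeg can write similar metadata during the transcode. A hedged sketch (file names hypothetical): the -timecode flag writes a real embedded timecode track, but whether a given NLE reads the reel_name tag back varies, which is exactly why FCP 7 remains a dependable fallback.

```python
# retag.py -- sketch: transcode an HDSLR clip to ProRes while embedding a
# start timecode and a reel ID. Assumes ffmpeg is installed; the reel_name
# tag is an assumption -- not every application honors it.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "MVI_0042.MOV",
     "-c:v", "prores_ks", "-profile:v", "2",   # profile 2 = ProRes 422
     "-timecode", "01:00:00:00",               # written as an embedded tmcd track
     "-metadata:s:v:0", "reel_name=A001",      # hypothetical reel ID tag
     "-c:a", "copy",
     "A001_0042.mov"], check=True)
```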

Log and Transfer

Yes, I know that you can import and optimize (transcode) camera files in FCP X. I just don’t like the way it does it. The FCP 7 Log and Transfer module allows the editor to set several naming preferences upon ingest. This includes custom names and reel IDs. That metadata is then embedded directly into the QuickTime movie created by the Log and Transfer module. FCP X doesn’t embed name and ID changes into the media file, but rather into its own database. Consequently, this information cannot be transported simply by reading the media file in another application. As a result, when I work with media from a C300, for example, my first step is still Log and Transfer in FCP 7, before I start editing in FCP X.

Conform and reverse telecine

A lot of cameras offer the ability to shoot at higher frame rates with the intent of playing this at a slower frame rate for a slow motion effect – “overcranking” in film terms. Advanced cameras like the ALEXA, RED One, EPIC and Canon C300 write a timebase reference into the file that tells the NLE that a file recorded at 60fps is to be played at 23.98fps. This is not true of HDSLRs, like a Canon 5D, 7D or a GoPro. You have to tell the NLE what to do. FCP X only does this through its Retime effect, which means you are telling the file to be played as slomo, thus requiring a render.

I prefer to use Cinema Tools to “conform” the file. This alters the file header information of the QuickTime file, so that any application will play it at the conformed, rather than recorded frame rate. The process is nearly instant and when imported into FCP X, the application simply plays it at the slower speed – no rendering required. Just like with an ALEXA or RED.
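
Outside of Cinema Tools, the same conform idea can be approximated by rescaling timestamps during a stream copy, so nothing is re-encoded. A sketch, assuming ffmpeg is installed and using hypothetical file names; unlike Cinema Tools, which alters the header in place, this writes a new file:

```python
# conform.py -- sketch of a Cinema Tools-style frame rate conform.
# Rescaling input timestamps by shot_fps / play_fps (60 -> 23.98 is exactly
# 2.5) makes the file play as slow motion without re-encoding a frame.
from fractions import Fraction
import subprocess

def conform(src, dst,
            shot_fps=Fraction(60000, 1001),    # 59.94
            play_fps=Fraction(24000, 1001)):   # 23.976
    scale = shot_fps / play_fps                # exactly 5/2 here
    subprocess.run(
        ["ffmpeg", "-itsscale", str(float(scale)), "-i", src,
         "-c", "copy",   # stream copy: near-instant, no render
         "-an",          # off-speed audio is useless; drop it
         dst], check=True)

conform("gopro_60p.mov", "gopro_2398.mov")
```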

Another function of Cinema Tools is reverse telecine. If a camera file was recorded with built-in “pulldown” – sometimes called 24-over-60 – additional redundant video fields are added to the file. You want to remove these if you are editing in a native 24p project. Cinema Tools will let you do this and in the process render a new, 24p-native file.
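
Cinema Tools has its own reverse telecine implementation; for comparison, here is a generic inverse-telecine sketch using ffmpeg’s fieldmatch/decimate filters (ffmpeg assumed installed, file names hypothetical). As with Cinema Tools, this pass renders a new file rather than just rewriting a header.

```python
# ivtc.py -- generic inverse telecine: 29.97 with 2:3 pulldown -> 23.976p.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "pulldown_2997.mov",
     # fieldmatch re-pairs the original film frames, yadif deinterlaces any
     # leftover combed frames, decimate drops the duplicate frame the 2:3
     # cadence inserts (every 5th frame), leaving clean 24p
     "-vf", "fieldmatch,yadif=deint=interlaced,decimate",
     "-c:v", "prores_ks", "-profile:v", "3",   # profile 3 = ProRes HQ
     "-c:a", "copy",
     "native_2398p.mov"], check=True)
```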

Color correction

I really like the built-in and third-party color correction tools for Final Cut Pro X. I also like Blackmagic Design’s DaVinci Resolve, but there are times when Apple Color is still the best tool for the job. I prefer its user interface to Resolve’s, especially when working with dual displays, and if you use an AJA capture/monitoring product, Resolve is a non-starter. For me, Color is the best choice when I get a color correction project from outside, where the editor used FCP 7 to cut. I’ve also done some jobs in X and then gone to Color via Xto7 and then FCP 7. It may sound a little convoluted, but it’s pretty painless and the results speak for themselves.

Audio mixing

I do minimal mixing in X. It’s fine for simple mixes, but for me, a track-based application is the only way to go. I do have X2Pro Audio Convert, but many of the out-of-house Pro Tools mixers I work with prefer to receive OMFs rather than AAFs. This means going to FCP 7 first and then generating an OMF from within FCP 7. This has the added advantage that I can proof the timeline for errors first. That’s something you can’t do if you are generating an AAF without any way to open and inspect it. FCP X timelines tend to include many clips that are muted and normally out of your way inside X. By going to FCP 7 first, you have a chance to clean up the timeline before the mixer gets it.

Any complex projects that I mix myself are done in Adobe Audition or Soundtrack Pro. I can get to Audition via the XML route – or I can go to Soundtrack Pro through XML and FCP 7 with its “send to” function. Either application works for me and most of my third-party plug-ins show up in each. Plus they both have a healthy set of their own built-in filters. When I’m done, simply export the mix (and/or stems) and import the track back into FCP X to marry it to the picture.

Project trimming

Final Cut Pro X has no media management function.  You can copy/move/aggregate all of the media from a single Project (timeline) into a new Event, but these files are the source clips at full length. There is no ability to create a new project with trimmed or consolidated media. That’s when source files from a timeline are shortened to only include the portion that was cut into the sequence, plus user-defined “handles” (an extra few frames or seconds at the beginning and end of the clip). Trimmed, media-managed projects are often required when sending your edited sequence to an outside color correction facility. It’s also a great way to archive the “unflattened” final sequence of your production, while still leaving some wiggle room for future trimming adjustments. The sequence is editable and you still have the ability to slip, slide or change cuts by a few frames.
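
The arithmetic behind a trimmed project is simple. Here is a small sketch of the per-clip calculation that a tool like FCP 7’s Media Manager performs; the frame numbers and two-second handles are purely illustrative:

```python
# trim_math.py -- the per-clip math of a "trimmed" media-managed project.
FPS = 24
HANDLE = 2 * FPS   # two seconds of handle on each side

def trimmed_range(used_in, used_out, clip_len):
    """Frames to keep from the source clip: used portion plus handles."""
    keep_in = max(0, used_in - HANDLE)           # don't run off the head...
    keep_out = min(clip_len, used_out + HANDLE)  # ...or past the tail
    return keep_in, keep_out

# a 30,000-frame source clip whose frames 1000-1100 appear in the cut
print(trimmed_range(1000, 1100, 30000))   # -> (952, 1148)
```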

I ran into this problem the other day, when I needed to take a production home for further work. It was a series of commercials cut in FCP X, from which I had recut four spots as director’s cuts. The edit was locked, but I wanted to finish the mix and grade at home. No problem, I thought. Simply duplicate the project with “used media”, create the new Event and “organize” (which copies media into the new Event folder). I could live with the fact that the media was full length, but there was one rub. Since I had originally edited the series of commercials using Compound Clips for selected takes, the duping process brought over all of these Compounds – even though none was actually used in the edit of the four director’s cuts. This would have resulted in copying nearly two-thirds of the total source media. I could not remove the Compounds from the copied Event without also removing them from the original, which I didn’t want to do.

The solution was to send the sequence of four spots to FCP 7 and then media manage that timeline into a trimmed project. The difference was 12GB of trimmed source clips instead of HUNDREDS of GB. At home, I then sent the audio to Soundtrack Pro for a mix and the picture back to FCP X for color correction. Connect the mix back to the primary storyline in FCP X and call it done!

I realize that some of this may sound a bit complex to some readers, but professional workflows are all about having a good toolkit and knowing how to use it. FCP X is a great tool for productions that can work within its walls, but if you still own Final Cut Studio, there are a lot more options at your disposal. Why not continue to use them?

©2013 Oliver Peters

A RED post production workflow

When you work with RED Digital Cinema’s cameras, part of the post production workflow is a “processing” step, not unlike the lab and transfer phase of film post. The RED One, EPIC and SCARLET cameras record raw images using Bayer-pattern light filtering to the sensor. The resulting sensor data is compressed with the proprietary REDCODE codec and stored to CF cards or hard drives. In post, these files have to be decompressed and converted into RGB picture information, much the same as if you had shot camera raw still photography with a Nikon or Canon DSLR.

RED has been pushing the concept of working natively with the .r3d media (skipping any interim conversion steps) and has made an SDK (software development kit) available to NLE manufacturers. This permits REDCODE raw images to be converted and adjusted right inside the editing interface. Although each vendor’s implementation varies, the raw module enables control over the metadata for color temperature, tint, color space, gamma space, ISO and other settings. You also have access to the various quality increments available to “de-Bayer” the image (data-to-RGB interpolation). The downside to working natively is that even with a fast machine, performance can be sluggish. This is magnified when dealing with a large quantity of footage, such as a feature film or other long-form projects. The native clips in your editing project are encumbered by the overhead of 4K compressed camera files.

For these and other reasons, I still advocate an offline-online procedure, rather than native editing, when working on complex RED projects. You could convert to a high-quality format like ProRes 4444 or 10-bit uncompressed at the beginning and never touch the RED files again, but the following workflow is one designed to give you the best of all worlds – easy editing, plus grading to get the best out of the raw files. There are many possible RED workflows, but I’ve used a variation of these steps quite successfully on a recent indie feature film – cut on Final Cut Pro 7 and graded in Apple Color. My intent here is to describe an easy workflow for projects mastering at 2K and HD sizes, which are destined for film festivals, TV and Blu-ray.

Conversion for offline editing

When you receive media from the studio or location, start by backing up and verifying all files. Make sure your camera-original media is safe. Then move on to RED’s REDCINE-X PRO. There is no need yet to change color metadata. Simply accept what was shot and set up a batch to convert the .r3d files into editing media, such as Avid DNxHD36 or Apple ProRes LT or ProRes Proxy. 1920×1080 or 1280×720 are the preferred sizes for lightweight editing media.

With a RED ROCKET accelerator card installed, conversion time will be about real-time. Without it, adjust the de-Bayer resolution settings to 1/2, 1/4 or 1/8 for faster rendering. The quality of these dailies only needs to be sufficient for making effective editing decisions. The advantage to using REDCINE-X PRO and not the internal conversion tools of the NLE (like FCP 7’s Log and Transfer) is faster conversion, which can be done on any machine and isn’t dependent on the specific requirements of a given editing application.

Creative (offline) editing

Import the media into your NLE. In the case of Final Cut Pro 7, simply drag the converted QuickTime files into a bin. Import any double-system audio and merge the clips. Edit until the picture cut is locked. Break the final sequence into reels of approximately ten minutes each. Export audio as OMF files for your sound designer/mixer. Duplicate the reels as video-only timelines, remove any effects, extend the length of shots under dissolves and restore all speed-changed shots to full length. Export an XML file for each of these reels.

REDCINE-X PRO primary grading pass

This is a two-step color grading process: Step 1 in REDCINE-X PRO and Step 2 in Apple Color. The advantage of REDCINE-X PRO is direct access to the raw files without the abstraction layer of an SDK. By adjusting the source settings panel within Color, Resolve, Media Composer, Premiere Pro and others, you are adjusting the raw controls, but any further color adjustments (like curves and lift/gamma/gain “color wheels”) are made downstream of the internally-converted RGB image. This is functionally no different than rendering a high-quality, raw-adjusted RGB file from one application and then doing further corrections to it in another. That’s the philosophy here.

Import the XML file for each reel as a timeline into REDCINE-X PRO. This conforms the .r3d files into an edited sequence corresponding to your cut in FCP. Adjust the raw settings for all shots in the timeline. First, set color space to RedColor2. (You may temporarily set gamma space to RedGamma2 and increase saturation to better see the effect of your adjustments.) Remember, this is a primary grading pass, so match all shots and get the most consistent look across the entire timeline.

You can definitely do very extensive color correction in REDCINE-X PRO and never need another grading tool. That’s not the process here, though, so a neutral, plain look tends to be better for the next stage. The point is to create an evenly matched timeline that is within boundaries for more subjective and aggressive grading once you move to Color. When you are ready to export, return saturation to normal, set color/gamma space to RedColor2/RedLogFilm and the de-Bayer quality to full resolution. Export (render) the timeline using Apple ProRes 4444 at either a 2K or 1920×1080 size. Make sure the export preset is configured to create unique file names and an accompanying FCP XML. Repeat this process for each reel.

Sending to Color and FCP completion

Import the REDCINE-X PRO-generated XML for each reel into Final Cut. Reconnect media if needed. Remove any filters that REDCINE-X PRO may have inadvertently added. Double-check the sequence against your rough cut to verify accuracy and then send the new timeline to Color. Each reel becomes a separate Color project file. Grade for your desired look and render the final result as ProRes HQ or ProRes 4444. Lastly, send the project back to Final Cut Pro to complete the roundtrip.

Once the graded timelines are back in FCP, rebuild any visual effects, speed effects and transitions, including dissolves. Combine the video-only sequences with the mixed audio and add any finishing touches necessary to complete your master file and deliverables.

Written for DV Magazine (NewBay Media LLC)

©2012 Oliver Peters

The Girl with the Dragon Tattoo

The director who brought us Se7en has tapped into the dark side again with the Christmas-time release of The Girl with the Dragon Tattoo. Hot off of the success of The Social Network, director David Fincher dove straight into this cinematic adaptation of Swedish writer Stieg Larsson’s worldwide publishing phenomenon. Even though a Swedish film of the book had been released in 2009, Fincher took on the project, bringing his own special touch.

The Girl with the Dragon Tattoo is part of Larsson’s Millennium trilogy. The plot revolves around the disappearance, forty years earlier, of Harriet Vanger, a member of one of Sweden’s wealthiest families. After all these years, her uncle hires Mikael Blomkvist (Daniel Craig), a disgraced financial reporter, to investigate the disappearance. Blomkvist teams with punk computer hacker Lisbeth Salander (Rooney Mara). Together they start to unravel the truth that links Harriet’s disappearance to a string of grotesque murders committed forty years before.

For this production, Fincher once again assembled the production and post team that proved successful on The Social Network, including director of photography Jeff Cronenweth, editors Kirk Baxter and Angus Wall and the music scoring team of Trent Reznor and Atticus Ross. Production started in August of last year and proceeded for 167 shooting days on location and in studios in Sweden and Los Angeles.

Like the previous film, The Girl with the Dragon Tattoo was shot completely with RED cameras – about three-quarters using the RED One with the M-X sensor and the remaining quarter with the RED EPIC, which was finally being released around that time. Since the EPIC cameras were in their very early stages, the decision was made to not use them on location in Sweden, because of the extreme cold. After the first phase in Sweden, the crew moved to soundstages in Los Angeles and continued with the RED Ones. The production started using the EPIC cameras during their second phase of photography in Sweden and during reshoots back in Los Angeles.

The editing team

I recently spoke with Kirk Baxter and Angus Wall, who as a team have cut Fincher’s last three films, earning them a best editing Oscar for The Social Network as well as a nomination for The Curious Case of Benjamin Button. I was curious about tackling a film that had already been done a couple of years before. Kirk Baxter replied, “We were really reacting to David’s material above all, so the fact that there was another film about the same book didn’t really affect me. I hadn’t seen the film before and I purposefully waited until we were about halfway through the fine cut, before I sat down and watched the film. Then it was interesting to see how they had approached certain story elements, but only as a curiosity.”

As in the past, both Wall and Baxter split up editorial duties based on the workload at any given time. Baxter started cutting at the beginning of production, with Wall joining the project in April of this year. Baxter explained, “I was cutting during the production to keep up with camera, but sometimes priorities would shift. For example, if an actor had to leave the country or a set needed to be struck, David would need to see a cut quickly to be sure that he had the coverage he needed. So in these cases, we’d jump on those scenes to make sure he knew they were OK.” Wall continued, “This was a very labor intensive film. David shot 95% to 98% of everything with two cameras. On The Social Network they recorded 324 hours of footage and selected 281 hours for the edit. On Dragon Tattoo that count went up to 483 hours recorded and 443 hours selected!”

The Girl with the Dragon Tattoo has many invisible effects. According to Wall, “At last count there were over 1,000 visual effects shots throughout the film. Most of these are shot stabilizations or visual enhancements, such as adding matte painting elements, lens flares or re-creating split screens from the offline.  Snow and other seasonal elements were added to a number of shots, helping the overall tone, as well as reinforcing the chronology of the film. I think viewers will be hard pressed to tell which shots are real and which are enhanced.” Baxter added, “In a lot of cases the exterior locations were shot in Sweden and elaborate sets were built on sound stages in LA for the interiors. There’s one sequence that takes place in a cabin. All of the exteriors seen through the windows and doors are green screen shots. And those were bright green! I’ve been seeing the composited shots come back and it’s amazing how perfect they are. The door is opened and there’s a bright exterior there now.”

A winning workflow solution

The key to efficient post on a RED project is the workflow. Assistant editor Tyler Nelson explained the process to me. “We used essentially the same procedures as for The Social Network. Of course, we learned things on that, which we refined for this film. Since they used both the RED M-X and the EPIC cameras, there were two different frame sizes to deal with – 4352 x 2176 for the RED One and 5120 x 2560 for the EPIC. Plus each of these cameras uses a different color science to process the data from the sensor. The file handling was done through Datalab, a company that Angus owns. A custom piece of software called Wrangler automates the handling of the RED files. It takes care of copying, verifying and archiving the .r3d files to LTO and transcoding the media for the editors, as well as for review on the secured PIX system. The larger RED files were scaled down to 1920 x 1080 ProRes LT with a center-cut extraction for the editors, as well as 720p H.264 for PIX. The ‘look’ was established on set, so none of the RED color metadata was changed during this process.”
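
Wrangler itself is proprietary, so the following is not Datalab’s code, but the copy-and-verify step at the heart of any such tool looks roughly like this sketch (the paths and the choice of MD5 checksums are assumptions):

```python
# verify_copy.py -- checksum-verified offload, the core of a wrangling tool.
import hashlib
import shutil
from pathlib import Path

def md5(path, chunk=1 << 20):
    """Hash a file in 1 MB chunks so huge .r3d files don't fill RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def copy_verified(src: Path, dst_dir: Path) -> Path:
    dst = dst_dir / src.name
    shutil.copy2(src, dst)          # copy, preserving timestamps
    if md5(src) != md5(dst):        # re-read both sides and compare
        raise IOError(f"checksum mismatch: {src.name}")
    return dst

for r3d in Path("A001").rglob("*.R3D"):
    copy_verified(r3d, Path("/Volumes/backup/A001"))
```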

“When the cut was locked, I used an EDL [edit decision list] and my own database to conform the .r3d files back into reels of conformed DPX image sequences. This part was done in After Effects, which also allowed me to reposition and stabilize shots as needed. Most of the repositioning was generally a north-south adjustment to move a shot up or down for better head room. The final output frame size was 3600 x 1500 pixels. Since I was using After Effects, I could make any last minute fixes if needed. For instance, I saw one shot that had a monitor reflection within the shot. It was easy to quickly paint that out in After Effects. The RED files were set to the RedColor2 / RedLogFilm color space and gamma settings. Then I rendered out extracted DPX image sequences of the edited reels to be sent to Light Iron Digital, who again did the DI on this film.”

On the musical trail

The Girl with the Dragon Tattoo leans heavily on a score by Trent Reznor and Atticus Ross. An early peek came from a teaser cut for the film by Kirk Baxter to a driving Reznor cover of Led Zeppelin’s “Immigrant Song”. Unlike the typical editor and composer interaction – where library temp tracks are used for the edit and then a new score is done at the end of the line – Reznor and Ross were feeding tracks to the editors during the edit.

Baxter explained, “At first Trent and Atticus score to the story rather than to specific scenes. The main difference with their approach to scoring a picture is that they first provide us with a library of original score, removing the need for needledrops. It’s then a collaborative process of finding homes for the tracks. Ren Klyce [sound designer/re-recording mixer] also plays an integral part in this.” Wall added, “David initially reviewed the tracks and made suggestions as to which scenes they might work best in.  We started with these suggestions and refined placement as the edit evolved.  The huge benefit of working this way was that we had a very refined temp score very early in the process.” Baxter concluded, “Then Trent’s and Atticus’s second phase is scoring to picture. They re-sculpt their existing tracks to perfectly fit picture and the needs of the movie. Trent’s got a great work ethic. He’s very precise and a real perfectionist.”

The cutting experience

I definitely enjoyed the Oscar-winning treatment these two editors applied to intercutting dialogue scenes in The Social Network, but Baxter was quick to interject, “I’d have to say Dragon Tattoo was more complicated than The Social Network. It was a more complex narrative, so there were more opportunities to play with scene order. In the first act you are following the two main characters on separate paths. We played with how their scenes were intercut so that their stories were as interconnected as possible, giving promise to the audience of their inevitable union.”

“The first assembly was about three hours long. That hovered at around 2:50 for a while and got a bit longer as additional material was shot, but then shorter again as we trimmed. Eventually some scenes were lost to bring the locked cut in at two-and-a-half hours. Even though scenes were lost, those still have to be fine cut. You don’t know what can be lost unless you finish everything out and consider the film in its full form. A lot of work was put into the back half of the film to speed it up. Most of those changes were a matter of tightening the pace by losing the lead-in and lead-outs of scenes and often losing some detail within the scenes.”

Wall expanded on this, “Fans of any popular book series want a filmed adaptation to be faithful to the original story. In this case, we’re really dealing with a ‘five act’ structure. [laughs]. Obviously, not everything in the book can make it into the movie. Some of the investigative dead ends have to be excised, but you can’t remove every red herring.  So it was a challenging film to cut. Not only was it very labor intensive, with many disturbing scenes to put together, it was also a tricky storytelling exercise. But when you’re done and it’s all put together, it’s very rewarding to see. The teaser calls it the ‘feel-bad film of Christmas’ but it’s a really engaging story about these characters’ human experience. We hope audiences will find it entertaining.”

Some additional coverage from Post magazine.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

Why 4K

Ever since the launch of RED Digital Cinema, 4K imagery has become an industry buzzword. The concept stems from 35mm film post, where the digital scan of a film frame at 4K is considered full resolution and a 2K scan to be half resolution. In the proper use of the term, 4K only refers to frame dimensions, although it is frequently and incorrectly used as an expression of visual resolution or perceived sharpness. There is no single 4K size, since it varies with how it is used and the related aspect ratio. For example, full aperture film 4K is 4096 x 3112 pixels, while academy aperture 4K is 3656 x 2664. The RED One and EPIC use several different frame sizes. Most displays use the Quad HD standard of 3840 x 2160 (a multiple of 1920 x 1080), while the Digital Cinema Projection standard is 4096 x 2160 for 4K and 2048 x 1080 for 2K. The DCP standard is a “container” specification, which means the 2.40:1 or 1.85:1 film aspects are fit within these dimensions and the difference is padded with black pixels.
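
The container idea is just fitting arithmetic, sketched below. (The DCI spec’s own rounded image sizes are 3996 x 2160 for 1.85:1 “flat” and 4096 x 1716 for “scope”; this sketch rounds slightly differently.)

```python
# dcp_fit.py -- fit a film aspect ratio inside the 4K DCP container and
# pad the remainder with black pixels.
CONTAINER = (4096, 2160)

def fit(aspect, container=CONTAINER):
    cw, ch = container
    w, h = cw, round(cw / aspect)        # try using the full width first
    if h > ch:                           # too tall: pin to full height instead
        w, h = round(ch * aspect), ch
    return (w, h), (cw - w, ch - h)      # active image, total black padding

print(fit(2.39))   # ((4096, 1714), (0, 446))  -> letterboxed "scope"
print(fit(1.85))   # ((3996, 2160), (100, 0))  -> pillarboxed "flat"
```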

Thanks to the latest interest in stereo 3D films, 4K-capable projection systems have been installed in many theaters. The same system that can display two full bandwidth 2K signals can also be used to project a single 4K image. Even YouTube offers some 4K content, so larger-than-HD production, post and distribution has quickly gone from the lab to reality. For now though, most distribution is still predominantly 1920 x 1080 HD or a slightly larger 2K film size.

Large sensors

The 4K discussion starts at sensor size. Camera manufacturers have adopted larger sensors to emulate the look of film for characteristics such as resolution, optics and dynamic range. Although different sensors may be of a similar physical dimension, they don’t all use the same number of pixels. A RED EPIC and a Canon 7D use similarly sized sensors, but the resulting pixels are quite different. Three measurements come into play: the actual dimensions, the maximum area of light-receiving pixels (photosites) and the actual output size of recorded frames. One manufacturer might use fewer, but larger photosites, while another might use more pixels of a smaller size that are more densely packed. There is a very loose correlation between actual pixel size, resolution and sensitivity. Larger pixels yield more stops and smaller pixels give you more resolution, but that’s not an absolute. RED has shown with EPIC that it is possible to have both.
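
A quick back-of-envelope comparison makes the point. The sensor widths and photosite counts below are approximate published specs, used here only to illustrate the size/density trade-off:

```python
# pitch.py -- rough photosite pitch: similar sensor sizes, different pixels.
SENSORS = {
    "RED EPIC (Mysterium-X)": (27.7, 5120),   # sensor width mm, photosites across
    "Canon 7D":               (22.3, 5184),
}

for name, (width_mm, photosites) in SENSORS.items():
    pitch_um = width_mm / photosites * 1000   # mm -> micrometers
    print(f"{name}: ~{pitch_um:.1f} um photosite pitch")

# ~5.4 um vs. ~4.3 um: the 7D packs more, smaller photosites into a
# similar area -- "same size sensor" does not mean "same pixels".
```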

The biggest visual attraction of large-sensor cameras appears to be the optical characteristics they offer – namely a shallower depth of field (DoF). Depth of field is a function of aperture and focal length. Larger sensors don’t inherently create shallow depth of field and out-of-focus backgrounds. But because larger sensors require a different selection of lenses for equivalent focal lengths compared with standard 2/3-inch video cameras, a shallower depth of field is easier to achieve, which makes these cameras a preferred creative tool. Even if you work with a camera today that doesn’t provide a 4K output, you are still gaining the benefits of this engineering. If your target format is HD, you will get similar results – as they relate to these optical characteristics – regardless of whether you use a RED, an ARRI ALEXA or an HDSLR.

Camera choices

Quite a few large-sensor cameras have entered the market in the past few years. Typically these use a so-called Super 35mm-sized sensor. This means it’s of a dimension comparable to a frame of 3-perf 35mm motion picture film. Some examples are the RED One, RED EPIC, ARRI ALEXA, Sony F65, Sony F35, Sony F3 and Canon 7D, among others. That list has just grown to include the brand new Canon EOS C300 and the RED SCARLET-X. Plus, there are other variations, such as the Canon EOS 5D Mark II and EOS 1D X (even bigger sensors) and the Panasonic AF100 (Micro Four Thirds format). Most of these deliver an output of 1920 x 1080, regardless of the sensor. RED, of course, sports up to 5K frame sizes and the ALEXA can also generate a 2880 x 1620 output when ARRIRAW is used.

This year was the first time that the industry at large has started to take 4K seriously, with new 4K cameras and post solutions. Sony introduced the F65, which incorporates a 20-megapixel 8K sensor. Like other CMOS sensors, the F65 uses a Bayer light filtering pattern, but unlike the other cameras, Sony has deployed more green photosites – one for each pixel in the 4K image. Today, this 8K sensor can yield 4K, 2K and HD images. The F65 will be Sony’s successor to the F35 and become a sought-after tool for TV series and feature film work, challenging RED and ARRI.

November 3rd became a day of competing press events when Canon and RED Digital Cinema both launched their newest offerings. Canon introduced the Cinema EOS line of cameras designed for professional, cinematic work. The first products seem to be straight out of the lineage that stems from Canon’s original XL1, or maybe even the Scoopic 16mm film camera. The launch was complete with a short Bladerunner-esque demo film produced by Stargate Studios, along with Möbius, a new film shot by Vincent Laforet (the photographer who launched the 5D revolution with his short film Reverie).

The Canon EOS C300 and EOS C300 PL use an 8.3MP CMOS Super 35mm-sized sensor (3840 x 2160 pixels). For now, these only record at 1920 x 1080 (or 1280 x 720 overcranked) using the Canon XF codec. So, while the sensor is a 4K sensor, the resulting images are standard HD. The difference between this and the way Canon’s HDSLRs record is a more advanced downsampling technology, which delivers the full pixel information from the sensor to the recorded frame without line-skipping and excessive aliasing.

RED launched SCARLET-X to a fan base that has been chomping at the bit for years waiting for some version of this product. It’s far from the original concept of SCARLET as a high-end “soccer mom” camera (fixed lens, 2/3” sensor, 3K resolution with a $3,000 price tag). In fact, SCARLET-X is, for all intents and purposes, an “EPIC Lite”. It has a higher price than the original SCARLET concept, but also vastly superior specs and capabilities. Unlike the Canon release, it delivers 4K recorded motion images (plus 5K stills) and features some of the developing EPIC features, like HDRx (high dynamic range imagery).

If you think that 4K is only a high-end game, take a look at JVC. This year JVC has toured a number of prototype 4K cameras based on a proprietary new LSI chip technology that can record a single 3840 x 2160 image or two 1920 x 1080 streams for the left and right eye views of a stereo 3D recording. The GY-HMZ1U is a derivative of this technology and uses dual 3.32MP CMOS sensors for stereo 3D and 2D recordings.

Post at 4K

Naturally the “heavy iron” systems from Quantel and Autodesk have been capable of post at 4K sizes for some time; however, 4K is now within the grasp of most desktop editors. Grass Valley EDIUS, Adobe Premiere Pro and Apple Final Cut Pro X all support editing with 4K media and 4K timelines. Premiere Pro even includes native camera raw support for RED’s .r3d format at up to EPIC’s 5K frames. Avid just released its 6.0 version (Media Composer 6, Symphony 6 and NewsCutter 10), which includes native support for RED One and EPIC raw media. For now, edited sequences are still limited to 1920 x 1080 as a maximum size. For as little as $299 for FCP X and RED’s free REDCINE-X (or REDCINE-X PRO) media management and transcoding tool, you, too, can be editing with relative ease on DCP-compliant 4K timelines.

Software is easy, but what about hardware? Both AJA and Blackmagic Design have announced 4K solutions using the KONA 3G or Decklink 4K cards. Each uses four HD-SDI connections to feed four quadrants of a 4K display or projector at up to 4096 x 2160 sizes. At NAB, AJA previewed for the press its upcoming 5K technology, code-named “Riker”. This is a multi-format I/O system in development for SD up to 5K sizes, complete with a high-quality, built-in hardware scaler. According to AJA, it will be capable of handling high-frame-rate 2K stereo 3D images at up to 60Hz per eye and 4K stereo 3D at up to 24/30Hz per eye.

Even if you don’t own such a display, 27″ and 30″ computer monitors, such as an Apple Cinema Display, feature native display resolutions of up to 2560 x 1600 pixels. Sony and Christie both manufacture a number of 4K projection and display solutions. In keeping with its plans to round out a complete 4K ecosystem, RED continues in the development of REDRAY PRO, a 4K player designed specifically for RED media.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

RED post for My Fair Lidy

I’ve worked on various RED projects, but an interesting recent example is My Fair Lidy, an independent film produced through the Valencia College Film Production Technology program. This was a full-blown feature shot entirely with RED One cameras. In this program, professional filmmakers with real projects in hand partner with a class of eager students seeking to learn the craft of film production. I’ve edited two of the films produced through the program and assisted in various aspects of post on many others. My Fair Lidy – a quirky comedy directed by program director Ralph Clemente – was shot in 17 days this summer at various central Florida locations. Two RED Ones were used – one handled by director of photography Ricardo Galé and the second by student cinematographers. My Fair Lidy was produced by SandWoman Films and stars Christopher Backus and Leigh Shannon.

There are many ways to handle the post production of native RED media and I’ve covered a number of them in these earlier posts. There is no single “best way” to handle these files, because each production is often best-served by a custom solution. Originally, I felt the way to tackle the dailies was to convert the .r3d camera files into ProRes 4444 files using the RedLogFilm profile. This gives you a very flat look and a starting point very similar to ARRI ALEXA files shot with the Log-C profile. My intention would have been to finish and grade straight from the QuickTimes and never return to the .r3d files, unless I needed to fix some problems. Neutral images with a RedLogFilm gamma setting are very easy to grade and they let the colorist swing the image for different looks with ease. However, after my initial discussions with Ricardo, it was decided to do the final grade from the native camera raw files, so that we had the most control over the image, plus the ability to zoom in and reframe using the native 4K files as a source.

The dailies and editorial flow

My Fair Lidy was lensed with a 16 x 9 aspect ratio, with the REDs set to record 4096 x 2304 (at 23.98fps). In addition to a RED One and a healthy complement of grip, lighting and electrical gear, Valencia College owns several Final Cut Pro post systems and a Red Rocket accelerator card. With two REDs rolling most of the time, the latter was a godsend on this production. We had two workstations set up – one as the editor’s station with a large Maxx Digital storage array and the other as the assistant’s station. That system housed the Red Rocket card. My two assistants (Kyle Prince and Frank Gould) handled all data back-up and conversion of 4K RED files to 1920 x 1080 ProResHQ for editorial media. Using ProResHQ was probably overkill for cutting the film (any of the lower ProRes codecs would have been fine for editorial decisions), but this gave us the best possible image for any potential screenings, trailers, etc.

Redcine-X was our tool for .r3d media organization and conversion. All in-camera settings were left alone, except the gamma adjustment. The Red Rocket card handles the full-resolution debayering of the raw files, so conversion time is close to real time. The two stations were networked via AFP (Apple’s file-sharing protocol), which permitted the assistant to handle his tasks without slowing down the editor. In addition, the assistant would sync and merge audio from the double-system, multi-track sound recordings and enter basic scene/take descriptions. Each shoot day had its own FCP project, so when done, project files and media (.r3d, ProRes and audio) were copied over to the editor’s Maxx array. Master clips from these daily FCP projects were then copied-and-pasted (and media relinked) into a single “master edit” FCP project.

For reasons of schedule and availability, I split the editing responsibilities with a second film editor, Patrick Tyler. My initial role was to bring the film to its first cut and then Patrick handled revisions with the producer and director. Once the picture was locked, I rejoined the project to cover final finishing and color grading. My Fair Lidy was on a very accelerated schedule, with sound design and music scoring running on a parallel track. In total, post took about 15 weeks from start to finish.

Finishing and grading

Since we used neither FCP’s Log and Transfer function nor the in-camera QuickTime reference files as edit proxies, there was no easy way to get Apple Color to automatically relink clips to the original .r3d files. You can manually redirect Color to link to RED files, but this must be done one shot at a time – not exactly desirable for the 1300 or so shots in the film.

The recommended workflow is to export an XML from FCP 7, which is then opened in Redcine-X. It will correctly reconnect to the .r3d files in place of the QuickTime movies. From there you export a new XML, which can be imported into Color. Voila! A Color timeline that matches the edit using the native camera files. Unfortunately for us, this is where reality came crashing in – literally. No matter what we did, using both XMLs and EDLs, everything that we attempted to import into Color crashed the application. We also tried ClipFinder, another free application designed for RED media. It didn’t crash Color, but a significant number of shots were incorrectly linked. I suspect some internal confusion because of the A and B camera situation.

On to Plan B. Since Redcine-X correctly links to the media and includes not only controls for the raw settings, but also a healthy toolset for primary color correction, why not use it for part of the grading process? Follow that up with a pass through Color to establish the stylistic “look”. This ended up working extremely well for us. Here are the basic steps I followed.

Step 1. We broke the film into ten reels and exported an XML file for each reel from FCP 7.

Step 2. Each reel’s XML was imported into Redcine-X as a timeline. I changed all the camera color metadata for each shot to create a neutral look and to match shots to each other. I used RedColor (slightly more saturated than RedColor2) and RedGamma2 (not quite as flat as RedLogFilm), plus adjusted the color temp, tint and ISO values to get a neutral white balance and match the A and B camera angles. The intent was to bring the image “within the goalposts” of the histogram. Occasionally I would make minor exposure and contrast adjustments, but for the most part, I didn’t touch any of the other color controls.

My objective was to end up with a timeline that looked consistent but preserved dynamic range. Essentially that’s the same thing I would do as the first step using the primary tab within Color. The nice part about this is that once I matched the settings of the shots, the A and B cameras looked very consistent.

Step 3. Each timeline was exported from Redcine-X as a single ProResHQ file with these new settings baked in. We had moved the Red Rocket card into the primary workstation, so these 1920 x 1080 clips were rendered with full resolution debayering. As with the dailies, rendering time was largely real-time or somewhat slower. In this case, approximately 10-20 minutes per reel.

Step 4. I imported each rendered clip back into FCP and placed it onto video track two over the corresponding clips for that reel to check the conforming accuracy and sync. Using the “next edit” keystroke, I quickly stepped through the timeline and “razored” each edit point on the clip from Redcine-X. This may sound cumbersome, but only took a couple of minutes for each reel. Now I had an FCP sequence from a single media clip, but with each cut split as an edit point. Doing this creates “notches” that are used by the color correction software for cuts between corrections. That’s been the basis for all “tape-to-tape” color correction since DaVinci started doing it and the new Resolve software still includes a similar automatic scene detection function today.
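
The notch positions themselves are just a running sum of the clip durations in the cut, which is why stepping through with “next edit” goes so quickly. A trivial sketch with made-up clip lengths:

```python
# notches.py -- razor points for a flattened reel are the running sum of
# the cut's clip durations (frames); no razor is needed at the very end.
from itertools import accumulate

clip_frames = [120, 96, 240, 48]           # illustrative clip lengths
notches = list(accumulate(clip_frames))[:-1]
print(notches)                             # [120, 216, 456]
```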

Step 5. I sent my newly “notched” timeline to Color and graded as I normally would. By using the Redcine-X step as a “pre-grade”, I had done the same thing to the image as I would have done using the RED tab within Color, thus keeping with the plan to grade from the native camera raw files. I do believe the approach I took was faster and better than trying to do it all inside Color, because of the inefficiency of bouncing in and out of the RED tab in Color for each clip. Not to mention that Color really bogs down when working with 4K files, even with a Red Rocket card in place.

Step 6. The exception to this process was any shot that required a blow-up or repositioning. For these, I sent the ProRes file from dailies in place of the rendered shot from Redcine-X. In Color, I would then manually reconnect to the .r3d file and resize the shot in Color’s geometry room, thus using the file’s full 4K size to preserve resolution at 1080 for the blow-up.

Step 7. The last step was to render in Color and then “Send to FCP” to complete the roundtrip. In FCP, the reels were assembled into the full movie and then married to the mixed soundtrack for a finished film.

© 2011 Oliver Peters