The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will be reached and their clients will demand 4K masters. 4K acquisition has been with us for a while and has generally proven useful for its creative options, like reframing during post. This was possible long before the introduction of the RED One camera, if you were shooting on film. But acquiring in 4K and higher is quite different from working in a complete 4K post production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspects and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is theoretically only 2x better. That’s because resolution is measured along the vertical dimension and is a function of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of focus, contrast and so on. When you blow up a 35mm film frame and analyze high-detail areas within the frame, you often find them blurrier than you’d expect.

This brings us to post. The push for 4K post comes from a number of sources, but many voices in the independent owner-operator camp have been the strongest. These include many RED camera owners, who successfully cut their own material straight from the native media of the camera. NLEs, like Adobe Premiere Pro CC and Apple Final Cut Pro X, make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately it’s an experience that doesn’t extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras works with highly compressed codecs and many newer computers have been optimized to deal with these codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready, native media or by quickly transcoding to ProRes. But that’s not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits into a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline without each unit along the way adding its own color interpretation. It also keeps quality at its highest by avoiding further decompression/recompression cycles, as well as the variation introduced by different debayering methods.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here’s a quick example. I took a short RED clip that was a little over 3 minutes long. It was recorded as 4096 x 2304 at 23.976fps. This file was a bit over 7GB in its raw form. Then I converted it to these formats with the following results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus down-sampled ProRes HQ comparison shows. It’s worth noting that I used the lower DPX and TIFF color depth options, as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.
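
Those per-frame figures are easy to sanity-check. Here’s a quick back-of-the-envelope calculation, assuming straightforward pixel packing (10-bit DPX stored as one 32-bit word per pixel, 8-bit TIFF as three bytes per pixel) and ignoring file headers:

```python
# Rough check of the per-frame sizes quoted above for a 4096 x 2304 image.
WIDTH, HEIGHT = 4096, 2304
FPS = 23.976

dpx_frame = WIDTH * HEIGHT * 4    # 10-bit DPX: 3 x 10-bit channels packed into 32 bits
tiff_frame = WIDTH * HEIGHT * 3   # 8-bit TIFF: 3 bytes per pixel

print(f"DPX frame:  ~{dpx_frame / 1e6:.0f} MB")                            # ~38 MB
print(f"TIFF frame: ~{tiff_frame / 1e6:.0f} MB")                           # ~28 MB
print(f"DPX data rate: ~{dpx_frame * FPS * 60 / 1e9:.0f} GB per minute")   # ~54 GB/min
```

At roughly 54GB per minute, a 3-minute clip lands in the same ballpark as the 173GB DPX figure above.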

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is still 4x the frame area, so using the ProRes HQ example above, 30x that 3-minute clip approximates the running time of a typical feature. That figure comes out to 480GB.

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period will likely generate about 5-10TB of media in the camera-original raw form. If this same media were converted to ProRes 4444, never mind uncompressed, your storage requirements just increased by an additional 16-38TB. Mind you, this is all as 24p media. As we start talking 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x the storage demands of 24p.
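
To put a rough number on that scaling, here’s a minimal sketch assuming the ratio from the single-clip test above (roughly 7GB of Redcode to 27GB of ProRes 4444) holds across a whole project, and that storage scales linearly with frame rate:

```python
# Rough project-level scaling based on the single-clip test above.
prores_ratio = 27 / 7            # ProRes 4444 size vs. camera-original Redcode

for camera_tb in (5, 10):        # typical indie-feature camera-original range
    print(f"{camera_tb}TB of Redcode -> ~{camera_tb * prores_ratio:.0f}TB as ProRes 4444")

print(f"60p footage needs {60 / 24:.1f}x the storage of 24p for the same running time")
```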

The other element is system performance. Compressed codecs work well when the computer is optimized for them. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near-ubiquitous playback support. ProRes HQ, even at 4K, will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image sequences, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial system, since creative editorial generally requires multiple streams of simultaneous media.

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be natively played by an NLE. These must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse as you go. Typically transferring 1TB of data on set is not a fast process.
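
For a sense of why on-set offloads become a bottleneck, here’s a quick estimate of the data rate implied by those Odyssey numbers. The transfer speed used at the end is an illustrative assumption, not a measured figure:

```python
# Implied data rate from the Odyssey 7Q figures above (2 x 512GB for 62 minutes).
capacity_gb = 2 * 512
record_minutes = 62

gb_per_min = capacity_gb / record_minutes
print(f"~{gb_per_min:.1f} GB per minute of footage")                 # ~16.5 GB/min
print(f"~{gb_per_min * 1e9 / 60 / 1e6:.0f} MB/s sustained write")    # ~275 MB/s

# Offloading 1TB at an assumed ~150 MB/s to a typical single-drive target
# works out to nearly two hours - hence the on-set copy bottleneck.
print(f"1TB offload at 150 MB/s: ~{1e12 / 150e6 / 3600:.1f} hours")
```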

Naturally there are ways to make 4K post efficient and not as painful as it might otherwise be. But it requires a commitment to hardware resources. It’s not conducive to easy desktop post running off of a laptop, like DV and even HD have been. That’s why you still see Autodesk Smoke, Quantel Pablo Rio and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters

Offline to Online with Premiere Pro or Final Cut Pro X

Most NLE makers are pushing the ability to edit with native camera media, but there are still plenty of reasons to work in an offline-to-online editing workflow. Both Apple Final Cut Pro X and Adobe Premiere Pro CC make it very easy to do this.

Apple Final Cut Pro X

Apple built offline/online right into the design of FCP X. The application can internally transcode optimized media (such as converting GoPro files to ProRes) and proxy media. Proxy media is usually a half-sized version using the ProRes Proxy codec. There’s a preference toggle to switch between original/optimized or proxy media, with FCP X taking care of making sure all transforms and effects are applied properly between both selections.

What most folks don’t know is that you can “cheat” this system. If you import media and choose to copy it into your Event folder, then source media is stored in the Original Media folder within the Event folder. If you create proxies, those files are stored in the Transcoded Media – Proxy Media folder within the Event folder. It is possible to create and place these folders via the Finder. You just have to be careful about exact names and locations. Once you do this, it is possible, via the Finder, to copy camera media and edit proxies directly into these folders. For example, your DIT might have created proxies for you on location, using Resolve.

Once you launch FCP X, it will automatically find these files. The main criterion is that file names, timecode and duration are identical between the two sets of files. If X properly recognizes the files, you can easily toggle between original/optimized and proxy with the application behaving correctly. If you are unsure about creating these folders in the first place, then I suggest setting them up within FCP X by importing and transcoding a single bogus clip, like a slate or camera bars. Once the folders are set by FCP X, delete this first clip. DO NOT mix the workflows by importing/transcoding some of the clips via FCP X and then later altering or replacing these clips via the Finder. This will completely confuse X. With these few caveats, it is possible to set up a multi-user offline-online workflow using externally-generated media, while still maintaining control via FCP X.
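
As a rough illustration of that Finder-level “cheat” (applicable up through 10.0.9, per the update below), here’s a small Python sketch. The volume names, event name and file extensions are purely hypothetical:

```python
# Sketch: placing camera originals and externally-generated proxies into an
# FCP X Event folder structure by hand. All paths here are hypothetical.
import shutil
from pathlib import Path

event = Path("/Volumes/Media/Final Cut Events/MyProject")   # example event folder
originals = event / "Original Media"
proxies = event / "Transcoded Media" / "Proxy Media"

for folder in (originals, proxies):
    folder.mkdir(parents=True, exist_ok=True)

# File names, timecode and duration must match between the two sets of files.
for clip in Path("/Volumes/Shuttle/CameraOriginals").glob("*.mov"):
    shutil.copy2(clip, originals / clip.name)
for proxy in Path("/Volumes/Shuttle/ResolveProxies").glob("*.mov"):
    shutil.copy2(proxy, proxies / proxy.name)
```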

UPDATE: With the FCP X 10.1 update, you must generate proxies with FCP X. Externally-generated proxies do not link as they did up to 10.0.9.

Adobe Premiere Pro CC

A more customary solution is available to Adobe editors thanks to the new Link and Locate feature. A common scenario is that editors might cut a spot in an offline edit session using proxy edit media – such as low-res files with timecode “burn-ins”. Then the camera files are color corrected in an outside grading session and rendered as final, trimmed clips that match the timeline clip lengths, with a few seconds of “handles”. Now the editor has to conform the sequence by linking to the new high-res, graded files.

With Premiere Pro CC you’d start the process in the normal manner by ingesting and cutting with the proxy files. When the cut is locked, create a trimmed project for the sequence, using the same handle length as the colorist will use. This is created using the Project Manager and you can select the option to make the clips Offline. Next, send an EDL or XML file for your locked cut, plus the camera media, to the colorist.

Once you get the graded files back, open your trimmed Premiere Pro project. All media will be offline. Select the master clips and pick the Link Media option to open the Link Media dialogue window. Using the Match File Properties settings, set the parameters so that Premiere Pro will properly link to the altered files. Sometimes file names will be different, so you will have to adjust the Link and Locate parameters accordingly, by deselecting certain matching options. For example, you might want a match strictly by timecode, ignoring file names.

Press Locate and navigate to the new location of the first missing file and relink. Normally all other clips in the same relative path will automatically relink, as well. Now you’ve got your edited sequence back, except with media populated by the final, high-quality files.
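
Conceptually, that matching step boils down to pairing clips on timecode and duration rather than file name. This little sketch is only an illustration of the idea, not Adobe’s actual implementation:

```python
# Illustrative only: matching offline clips to graded files by start timecode
# and duration, ignoring the (changed) file names.
from dataclasses import dataclass

@dataclass(frozen=True)
class Clip:
    name: str
    start_tc: str     # e.g. "01:00:10:00"
    frames: int       # duration in frames

def relink(offline_clips, graded_files):
    index = {(g.start_tc, g.frames): g for g in graded_files}
    return {c.name: index.get((c.start_tc, c.frames)) for c in offline_clips}

offline = [Clip("A001_proxy.mov", "01:00:10:00", 240)]
graded  = [Clip("A001_graded.mov", "01:00:10:00", 240)]
print(relink(offline, graded))    # proxy paired with the renamed graded clip
```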

©2013 Oliver Peters

Why 4K

Ever since the launch of RED Digital Cinema, 4K imagery has become an industry buzzword. The concept stems from 35mm film post, where a digital scan of a film frame at 4K is considered full resolution and a 2K scan half resolution. In the proper use of the term, 4K only refers to frame dimensions, although it is frequently and incorrectly used as an expression of visual resolution or perceived sharpness. There is no single 4K size, since it varies with how it is used and the related aspect ratio. For example, full aperture film 4K is 4096 x 3112 pixels, while academy aperture 4K is 3656 x 2664. The RED One and EPIC use several different frame sizes. Most displays use the Quad HD standard of 3840 x 2160 (a multiple of 1920 x 1080), while the Digital Cinema Projection standard is 4096 x 2160 for 4K and 2048 x 1080 for 2K. The DCP standard is a “container” specification, which means the 2.40:1 or 1.85:1 film aspects are fit within these dimensions and the difference padded with black pixels.

Thanks to the latest interest in stereo 3D films, 4K-capable projection systems have been installed in many theaters. The same system that can display two full bandwidth 2K signals can also be used to project a single 4K image. Even YouTube offers some 4K content, so larger-than-HD production, post and distribution has quickly gone from the lab to reality. For now though, most distribution is still predominantly 1920 x 1080 HD or a slightly larger 2K film size.

Large sensors

The 4K discussion starts at sensor size. Camera manufacturers have adopted larger sensors to emulate the look of film for characteristics such as resolution, optics and dynamic range. Although different sensors may be of a similar physical dimension, they don’t all use the same number of pixels. A RED EPIC and a Canon 7D use similarly sized sensors, but the resulting pixels are quite different. Three measurements come into play: the actual dimensions, the maximum area of light-receiving pixels (photosites) and the actual output size of recorded frames. One manufacturer might use fewer, but larger photosites, while another might use more pixels of a smaller size that are more densely packed. There is a very loose correlation between actual pixel size, resolution and sensitivity. Larger pixels yield more stops and smaller pixels give you more resolution, but that’s not an absolute. RED has shown with EPIC that it is possible to have both.

The biggest visual attraction to large-sensor cameras appears to be the optical characteristics they offer – namely a shallower depth of field (DoF).  Depth of field is a function of aperture and focal length. Larger sensors don’t inherently create shallow depth of field and out-of-focus backgrounds. Because larger sensors require a different selection of lenses for equivalent focal lengths compared with standard 2/3-inch video cameras, a shallower depth of field is easier to achieve and thus makes these cameras the preferred creative tool. Even if you work with a camera today that doesn’t provide a 4K output, you are still gaining the benefits of this engineering. If your target format is HD, you will get similar results – as it relates to these optical characteristics – regardless of whether you use a RED, an ARRI ALEXA or an HDSLR.

Camera choices

Quite a few large-sensor cameras have entered the market in the past few years. Typically these use a so-called Super 35MM-sized sensor. This means it’s of a dimension comparable to a frame of 3-perf 35MM motion picture film. Some examples are the RED One, RED EPIC, ARRI ALEXA, Sony F65, Sony F35, Sony F3 and Canon 7D among others. That list has just grown to include the brand new Canon EOS C300 and the RED SCARLET-X. Plus, there are other variations, such as the Canon EOS 5D Mark II and EOS 1D X (even bigger sensors) and the Panasonic AF100 (Micro Four Thirds format). Most of these deliver an output of 1920 x 1080, regardless of the sensor. RED, of course, sports up to 5K frame sizes and the ALEXA can also generate a 2880 x 1620 output, when ARRIRAW is used.

This year was the first time that the industry at large has started to take 4K seriously, with new 4K cameras and post solutions. Sony introduced the F65, which incorporates a 20-megapixel 8K sensor. Like other CMOS sensors, the F65 uses a Bayer light filtering pattern, but unlike the other cameras, Sony has deployed more green photosites – one for each pixel in the 4K image. Today, this 8K sensor can yield 4K, 2K and HD images. The F65 will be Sony’s successor to the F35 and become a sought-after tool for TV series and feature film work, challenging RED and ARRI.

November 3rd became a day for competing press events when Canon and RED Digital Cinema both launched their newest offerings. Canon introduced the Cinema EOS line of cameras designed for professional, cinematic work. The first products seem to be straight out of the lineage that stems from Canon’s original XL1 or maybe even the Scoopic 16MM film camera. The launch was complete with a short Blade Runner-esque demo film produced by Stargate Studios, along with a new film called Möbius shot by Vincent Laforet (the photographer who launched the 5D revolution with his short film Reverie).

The Canon EOS C300 and EOS C300 PL use an 8.3MP CMOS Super 35MM-sized sensor (3840 x 2160 pixels). For now, these only record at 1920 x 1080 (or 1280 x 720 overcranked) using the Canon XF codec. So, while the sensor is a 4K sensor, the resulting images are standard HD. The difference between this and the way Canon’s HDSLRs record is a more advanced downsampling technology, which delivers the full pixel information from the sensor to the recorded frame without line-skipping and excessive aliasing.

RED launched SCARLET-X to a fan base that has been chomping at the bit for years waiting for some version of this product. It’s far from the original concept of SCARLET as a high-end “soccer mom” camera (fixed lens, 2/3” sensor, 3K resolution with a $3,000 price tag). In fact, SCARLET-X is, for all intents and purposes, an “EPIC Lite”. It has a higher price than the original SCARLET concept, but also vastly superior specs and capabilities. Unlike the Canon release, it delivers 4K recorded motion images (plus 5K stills) and features some of the developing EPIC features, like HDRx (high dynamic range imagery).

If you think that 4K is only a high-end game, take a look at JVC. This year JVC has toured a number of prototype 4K cameras based on a proprietary new LSI chip technology that can record a single 3840 x 2160 image or two 1920 x 1080 streams for the left and right eye views of a stereo 3D recording. The GY-HMZ1U is a derivative of this technology and uses dual 3.32MP CMOS sensors for stereo 3D and 2D recordings.

Post at 4K

Naturally the “heavy iron” systems from Quantel and Autodesk have been capable of post at 4K sizes for some time; however, 4K is now within the grasp of most desktop editors. Grass Valley EDIUS, Adobe Premiere Pro and Apple Final Cut Pro X all support editing with 4K media and 4K timelines. Premiere Pro even includes native camera raw support for RED’s .r3d format at up to EPIC’s 5K frames. Avid just released its 6.0 version (Media Composer 6, Symphony 6 and NewsCutter 10), which includes native support for RED One and EPIC raw media. For now, edited sequences are still limited to 1920 x 1080 as a maximum size. For as little as $299 for FCP X and RED’s free REDCINE-X (or REDCINE-X PRO) media management and transcoding tool, you, too, can be editing with relative ease on DCP-compliant 4K timelines.

Software is easy, but what about hardware? Both AJA and Blackmagic Design have announced 4K solutions using the KONA 3G or Decklink 4K cards. Each uses four HD-SDI connections to feed four quadrants of a 4K display or projector at up to 4096 x 2160 sizes. At NAB, AJA previewed for the press its upcoming 5K technology, code-named “Riker”. This is a multi-format I/O system in development for SD up to 5K sizes, complete with a high-quality, built-in hardware scaler. According to AJA, it will be capable of handling high-frame-rate 2K stereo 3D images at up to 60Hz per eye and 4K stereo 3D at up to 24/30Hz per eye.

Even if you don’t own such a display, 27″ and 30″ computer monitors, such as an Apple Cinema Display, feature native display resolutions of up to 2560 x 1600 pixels. Sony and Christie both manufacture a number of 4K projection and display solutions. In keeping with its plans to round out a complete 4K ecosystem, RED continues in the development of REDRAY PRO, a 4K player designed specifically for RED media.

Written for DV magazine (NewBay Media, LLC)

©2011 Oliver Peters

RED Post – the Easy Way III

If you’ve read some of my past articles about RED, you know I’m not a huge fan of “native” editing using the camera raw files as source clips. I find that an offline/online workflow is still best for smoothly editing RED projects, yet it still retains access to the raw color data during the finishing process. Previously I discussed an easy workflow for Apple Final Cut Pro and Color users, but this isn’t the only solution. As you know, Avid Media Composer 5 and Adobe Premiere Pro CS5 have both integrated support for RED’s camera raw files. In this post, I’m going to discuss a couple of ways to use these tools in a non-native fashion.

Option A:  Avid Media Composer 5 offline-online RED workflow

Thanks to AMA and RED camera’s SDK, Media Composer 5 offers access to RED’s .R3D files. You can import camera files and adjust the source color settings from within the NLE’s interface. You can either edit directly from these files or transcode them to Avid media for a smoother and faster editing experience. Here is a short step-by-step explanation of a Media Composer-based workflow.

Step 1. Access/import RED .R3D files via AMA (Avid Media Access). Camera clips will open inside Media Composer bins, complete with camera metadata.

Step 2. If you want to change the levels/gamma/exposure/balance of the file by altering the camera raw data, then open the Source Settings for each clip and adjust the video.

Step 3. Adjust the clip framing by opening the bin Reformat column and set the option for each clip (center cut, letterboxed, etc.). Remember that your RED clips may have a 2:1 aspect ratio, but your Avid sequence will be either HD 16:9 or SD 16:9 / 4:3.

Step 4. Set the Media Creation render tab to a video resolution of DNxHD36 with a Debayer quality of “quarter”. Since the objective is a good rough cut – not “finishing” – this quality setting is more than adequate for editing and screening your creative edits.

Step 5. Transcode all source clips. This process runs at close to real-time on a fast machine. When transcoding is done, close all AMA bins and do not use them during the edit. You’ll edit with the transcoded media only.

Step 6. Edit as normal until you get an approved, “locked” picture.

Step 7. Now it’s time to switch to “finishing”. Move or hide all Avid media (the transcoded DNxHD36 clips) by taking them out of the Avid MediaFiles/MXF/1 folder(s) on your media hard drive(s). You could also delete them, but it’s safer not to do that unless you really have to. Best to simply move them into a relabeled folder. Once you’ve done this, your edited sequence will appear with all media off-line.

Step 8. Open the AMA bins (with the .R3D files) and relink the edited sequence to the AMA clips. Make sure the “Allow relinking of imported/AMA clips by Source File name” is NOT checked in the Relink dialogue window. When relinking is completed, the sequence will be repopulated with AMA media, which will be the native, camera raw .R3D files. If you want to change the raw color data at this point, you will need to change each source clip and then refresh the sequence to update the color for clips that appear within the timeline.

Step 9. Change the Media Creation settings to a higher video resolution (such as DNxHD 175 X) and a Debayer quality of “full”.

Step 10. Consolidate/transcode your sequence. This will create new Avid media clips at full quality that are only the length of the clips as they appear in the cut, plus handles. Since a transcode using a “full” Debayer setting will be EXTREMELY SLOW, make sure you set very short handle lengths. (Note: If you have a Red Rocket card installed, Avid supports hardware-assisted rendering to accelerate the transcoding of RED media.)

Step 11. Finish all effects and color grading within the NLE as you normally would.

Option B:  Apple FCP / Automatic Duck / Adobe CS5 workflow

You might be asking, why not just edit in Final Cut Pro or Premiere Pro? The hitch is that Final Cut doesn’t support 4K files, and Premiere Pro has a good native workflow but not a good offline-online workflow for RED files. FCP users clearly outnumber Premiere Pro users among professional film and video editors; however, both After Effects and Premiere Pro offer some interesting finishing options. In fact, a number of feature films have used both for all or part of the finishing process. A combination of Apple and Adobe tools creates some interesting scenarios for RED post. (Note: Automatic Duck Pro Import AE 5.0 is required.)

Step 1. Ingest your RED .R3D clips into Final Cut Pro using Log and Transfer. Set the preferences to use ProRes Proxy (NOT “native”). Set the color to “as shot”. This requires that the RED plug-in for FCS has been installed. (Refer to the previous article for a more in-depth explanation of this first step.) Please note that it is important to do this with the R3D files and not to start by simply dragging the in-camera-generated H, M or P QuickTime reference files into the FCP browser. Many RED users erroneously consider these to be “proxy” edit files. They are not. They are reference files at different resolutions/sizes that are linked to the R3D files and do not work correctly in this process.

Step 2. Edit normally in FCP until the cut is “locked”.

Step 3. Export an XML of your Final Cut sequence. I prefer using Automatic Duck’s free XML exporter and have had more reliable results with it, but the built-in FCP XML exporter will also work.

Step 4. Launch Adobe After Effects CS5. (Pro Import AE 5 works with CS3 and CS4, too, but you need to use an Adobe CS version compatible with native RED files.) Import the XML file using Pro Import AE 5. Make sure your Automatic Duck preferences are set to “Replace proxy footage with .R3D files.” The result will be an After Effects timeline with settings that match the Final Cut Pro sequence settings, except that all the clips will now be linked to the original camera files.

Step 5. Since the ProRes Proxy files were most likely 2K files, and the newly relinked camera files are the original 4K size, you will need to reset the scale value of each clip in the composition. This reframes the shot to fit inside the 2K frame, just as they did in FCP. Or you can creatively reframe the shots, since you have all the “bleed” of the full 4K frame. Alternatively, you can change the After Effects composition setting to match the 4K size.

At this point you could completely finish the project in After Effects, and there are a number of folks who would advocate that. From my point-of-view, After Effects is a compositing tool, rather than a DI or editing application. With the changes in Premiere Pro CS5, my druthers would be to get the media into that application. I’m only using After Effects as a conduit between Final Cut Pro and Premiere Pro in this process.

You could go from After Effects to Premiere Pro via Adobe’s Dynamic Linking, but I’d rather not. That simply nests the After Effects composition as a single clip on the Premiere Pro timeline. I want the shots available as individual timeline clips, so follow these steps.

Step 6. Launch a new Premiere Pro CS5 project and select a new sequence setting from one of the RED presets, such as a 4K timeline.

Step 7. Highlight all of the .R3D clips in the After Effects composition and Copy.

Step 8. Switch to the Premiere Pro sequence window and Paste. All of the RED clips will now fill up the Premiere Pro sequence. At this point you should have a native 4K sequence with .R3D camera raw media. Corresponding master clips will show up in the Premiere Pro project window.

Step 9. To change the camera raw color settings of the .R3D files, open a clip from the project window and alter its source settings. These changes will automatically update that clip on the timeline.

Step 10. Finish effects and color grading as desired. If you are using this process with the intent of sending files to a DI house for film finishing, then your settings and any grading should be very neutral to allow for maximum latitude at the next stage.

Step 11. Export media. A big selling point of Premiere Pro CS5 to RED users is that it allows you to export DPX image sequences, in addition to all of the standard media options. DPX is the preferred format of most high-end DI solutions, like Quantel Pablo, Autodesk Lustre, etc. Premiere Pro CS5 is one of the few desktop solutions that enables an export of full-resolution 4K DPX files from the edited timeline.

OK, I’ve given you a lot to chew on. In three articles on RED post, I’ve covered quite a few ways to finish RED-acquired projects. Don’t get overwhelmed. Remember that you don’t have to use them all. Simply pick the one that’s best for you and have fun.

©2010 Oliver Peters

RED Post – the Easy Way II

The RED camera company has succeeded in shaking up the industry and getting all other camera manufacturers to rethink what a digital cinema camera should be. This year, the ARRI Alexa presents the first serious challenge by another system designed around a camera raw workflow. Although RED maintains a resolution advantage, which will increase with the forthcoming Epic, there are many other reasons producers might opt for an Alexa, a Panavision Genesis, a Panasonic VariCam/3700/2700/3000 or a Sony F23/F35/F900/F800.

One of the strategic errors that I feel RED made was to emphasize resolution over workflow. By doing so, their innovative approach was tagged early on by detractors as difficult and time-consuming. It’s actually rather straightforward with a lot of versatility and can be adapted to many different production needs. Unfortunately, no matter how easy it has become today, RED will continue to battle this perception issue. This is exacerbated by RED itself, who has never provided good documentation for its products, especially the post production tools. A byproduct of the “perpetual beta” mode in which the company operates.

Native vs. non-native

I haven’t been a big fan of dealing with the camera raw files during editing, opting instead to pre-grade/render/export the camera files first into an edit-friendly format. If you search through the RedUser forum, you’ll find plenty of posts pointing out that the preferred feature film workflow is to export flat-looking DPX files for conforming and grading in DI systems like daVinci, Pablo and Lustre. This is a common workflow for DI and digital acquisition. I’ve demonstrated some of the latitude such a flat image can offer, even though it isn’t camera raw any longer.

Apple and Assimilate were early adopters of being able to access RED’s raw color data. Since then, RED developed an SDK that has allowed many other NLE manufacturers access to the raw data through this spec. Now others, like Avid and Adobe, can open and manipulate RED files based on the camera raw data. This gives editors wide latitude over how the image can look, without being stuck to a “baked in” camera image as a starting point. It’s like editing from transferred film, yet having access to the original negative in the NLE. I’ve recently reviewed Avid Media Composer 5 and Adobe Premiere Pro CS5 and spent some time testing this out. Both do a very good job with native RED files, but my conclusion is still that an offline/online editing methodology works best for complex, long-form productions.

FCP’s Log and Transfer

Last year, I edited 90% of my projects with Final Cut Pro, so I’ve decided to revisit Apple’s “native” RED workflow with a fresh eye. FCP does not let you work directly with the actual .R3D camera files. Instead, RED files are imported via FCP’s Log and Transfer module. Here you have two options: a) import as native REDCODE (the .R3D file is copied and rewrapped with a QuickTime container); or b) import/transcode to an edit-friendly codec, like one of the ProRes codecs. During Log and Transfer, you may select one of several colorimetry presets or “as shot”. Once imported into FCP, you can’t access the source settings (as in Media Composer or Premiere Pro). Instead, the workflow is designed around Apple Color, where the tools are provided to once again access the camera raw color data.

A lot of the RED appeal is over the fact that the camera records 4K images. 4K refers to a frame size of 4096 x 2048 pixels (2:1 aspect ratio). The RED One camera is capable of various frame sizes, but 4K appeals to indie filmmakers as some sort of Holy Grail. That’s in spite of the fact that most feature film DI is done at 2K sizes and some films are even posted using HD video (1920×1080) as an intermediate step. Avid Media Composer 5 limits you to an HD frame, while Adobe Premiere Pro CS5 and After Effects CS5 will let you work at 4K. FCP doesn’t allow 4K, so the effective workaround is to downsample the 4K RED images to 2K (2048×1024). FCP and Color deal with this image size quite effectively and I/O hardware like the AJA KONA 3 includes presets for 2K images. I like the idea of 4K at the camera, but I’m perfectly okay with 2K and HD in post.

Size and debayering

The downsample issue is confusing, because it affects image size and debayering – the process that turns raw data into RGB video. Unfortunately, RED hasn’t provided clear information as to what is really happening. The rule of thumb is that 2K images are downsampled as 1:1, while larger images use a 2:1 ratio. Since you have no control over the debayering settings in either Final Cut or Color, the belief expressed by some users is that RED’s own post tools, like REDCINE-X, yield better image quality. I haven’t seen anything that’s an issue in my own testing and some of the threads at RedUser would indicate that the results are comparable in head-to-head testing. You’ll have to judge for yourself.

If you are planning to post via this workflow, then it’s important to think about the right image size before production starts. If you shoot at 4K 2:1 (4096×2048), the resulting 2K 2:1 image (2048×1024) in FCP will either have to be center-cut (a blow-up with some cropping on the edges) to fit an HD (1920×1080) frame  – or it will have to be displayed with a letterbox mask.

Color scales the 2K image in the Geometry room as it renders. Since the majority of producers using this workflow are mainly interested in a proper HD image (1920×1080), I would recommend that the original footage be recorded in either 4K 16:9 (4096×2304) or 4K HD 16:9 (3840×2160), aka “quad HD”. The former gives you a little wiggle room for minor reframing, while the latter is an even multiple and will provide the most accurate downsampled image.
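
The arithmetic behind those recommendations, as a quick sketch (simple proportional downsampling from 4K to 2K, compared against a 1920×1080 target):

```python
# How much reframing room each 4K recording size leaves after a 2K downsample.
HD_W, HD_H = 1920, 1080

for w, h, label in [(4096, 2048, "4K 2:1"),
                    (4096, 2304, "4K 16:9"),
                    (3840, 2160, "quad HD / 4K HD")]:
    dw, dh = w // 2, h // 2                      # downsampled 2K size
    print(f"{label}: {w}x{h} -> {dw}x{dh}; "
          f"{dw - HD_W} x {dh - HD_H} pixels of slack over {HD_W}x{HD_H}")
```

The negative vertical slack on the 2:1 frame is why that format ends up center-cut or letterboxed, while 16:9 recording leaves a modest repositioning margin and quad HD maps exactly onto HD.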

RED step-by-step with Final Cut Studio

Let’s take a look at the recommended Apple Final Cut Studio/RED workflow using an offline/online approach and camera raw files. Experienced RED owners who use FCP will be very familiar with this workflow. It’s also clearly described in RED’s FCP whitepaper. On the other hand, if you are about to approach your first RED project and have some trepidation about post, then this is for you. I’ll assume that you didn’t plunk down five grand for a RED Rocket accelerator card and don’t have the budget for a high-end finishing facility using Assimilate Scratch, Quantel Pablo, Avid DS or similar tools. In short, you are looking for the best way to leverage Apple Final Cut Studio and get the most out of your RED files.

Step 1: Download and install the RED Final Cut Studio Installer. This adds the QuickTime codec and the support modules for Final Cut Pro and Color. (The whitepaper is also included in this download.)

Step 2: Copy the RED camera files to your local hard drive array for editing. Back-up the files to other archive media and store in a secure location. (Avoid any illegal characters – like slashes, number signs, etc. – when you label folders.)

Step 3: Start a new FCP project. Use FCP’s Log and Transfer module to import the RED camera files. Set the L&T preferences to a target format of ProRes Proxy. Apply a color preset, like “daylight” if desired or leave “as shot”. This preset will be applied globally to all clips imported in this session.

Step 4: Edit your sequences as you normally would do. If you need to apply certain “looks” to satisfy the producer or client, use the FCP color correction tools for a temporary adjustment. Remember that this is offline editing. The goal is a good rough cut and ultimately an approved, “locked” picture cut.

Step 5: Once the cut is “locked”, use FCP’s Media Manager to generate a version of the final sequence for finishing. Run Media Manager and “create offline” to generate a new FCP project. Set the desired target sequence settings – most likely ProRes HQ or ProRes 4444 (1920×1080 24p 48kHz). Set handle lengths as desired.

Step 6: Open the new media-managed FCP project. Open the Log and Transfer tool. Change the L&T preferences to “native” and “as shot”. Select the master clips (media is currently off-line) and batch capture. The corresponding portions of these RED clips will now be re-imported as native files.

Step 7: Select the final sequence and “Send to Color”. Remember that all of the Color compatibility considerations still apply. Long sequences should be first broken down into shorter sequences. Speed ramps should be “baked in”. In short, do all the usual pre-flight preparation required by the FCP-Color roundtrip.

Step 8: Thanks to the RED Installer, Color has now gained a RED tab in the Primary In room. Camera raw adjustments include gamma, colorspace, temperature, tint, gains, ISO and more. This is similar to making camera raw adjustments to digital still photos in Photoshop. All clips with the native REDCODE codec can be modified by these settings. These changes are on a clip-by-clip basis, but you can copy-and-paste or drag the Primary In settings from one clip to multiple clips.

The rest of the color grading steps follow standard Color operation. Adjust the Geometry settings as desired, render and send back to FCP. There are no raw OLPF (optical low-pass filtering) controls for detail enhancement or sharpening within the RED tab. If you feel that the image is slightly soft, then apply some sharpening within the Color FX room.

It doesn’t really make a lot of difference whether you follow this approach or prep the files first and never return to the native .R3D files. Both methods work and result in great images. It really boils down to what works for you. The process isn’t as hard as people make it out to be. Jump in, test a bit first and then you’re ready to rock!

©2010 Oliver Peters

RED Post – the Easy Way

A commercial case study

Ever since the RED Digital Cinema Camera Company started to ship its innovative RED One camera, producers have been challenged with the best way to post produce its footage. Detractors have slammed RED for a supposed lack of post workflows. This is simply wrong, since there are a number of solid ways to post RED footage. The trouble is that there isn’t a single best way and the path you choose differs depending on your computing platform, NLE of choice and destination. Many RED proponents over-think the workflow and insist on full 4K, native camera raw post. In my experience that’s unnecessary for 99% of all projects, especially those destined for the web or TV screens.

Camera RAW

The RED One records images using a Bayer-pattern color filter array CMOS sensor, meaning that the data recorded is based on the intensity of red, green or blue light at each sensor pixel location. Standard video cameras record images that permanently incorporate (or “bake in”) the colorimetry of the camera as the look of the final image. The RED One stores colorimetry data recorded in the field for white balance, color temperature, ISO rating, etc. only as a metadata software file that can be nondestructively manipulated or even discarded completely in post. Most high-end DSLR still cameras use the same approach and can record either a camera raw image or a JPEG or TIFF that would have camera colorimetry “baked” into the picture. Shooting camera raw stills with a DSLR requires an application like Apple Aperture or Adobe Photoshop Lightroom or other similar image processing tools to generate final, color-corrected images from the stills you have shot.

Likewise, camera raw images from RED One require electronic processing to turn the Bayer pattern information into RGB video. Most of the typical image processing circuitry used in a standard HD video camera isn’t part of RED One, so these processes have to be applied in post. The amount of computation required means that this won’t happen in real-time and applying this processing requires rendering to “bake” the “look” into a final media file. Think of it as the electronic equivalent of a 35mm film negative. The film negative out of the camera rarely looks like your results after lab developing and film-to-tape transfer (telecine). RED One simply shifts similar steps into the digital realm. The beauty of RED One is that these steps can be done at the desktop level if you have the patience. Converting RED One’s camera raw images into useable video files involves the application of de-Bayering, adding colorimetry information, cropping, scaling, noise reduction and image sharpening.

Native workflow

I am not a big believer in native RED workflows, unless you post with an expensive system, like Avid DS, Assimilate Scratch or Quantel. If you post with Apple Final Cut Studio, Adobe Creative Suite or Avid Media Composer, then the native workflow is largely a pain in the rear. “Native” means that you are working with some sort of reference or transcoded file during the creative editorial process. Because you are still dragging along the full 4K’s worth of data, playback tends to be sluggish at the exact point where an editor really wants to rock-n-roll. When you move to the online editing (finishing) phase, you have to go through extra steps to access the original, camera raw media files (.R3D) and do any necessary final conversions. When cutting “native”, not all of the color metadata used with the file is recognized, so you may or may not see the DP’s intended “look” during the offline (creative) editing phase. For example, the application of curves values isn’t passed in the QuickTime reference file.

In some cases, such as visual effects shoots, native post is totally impractical. As the editor (or later the colorist), you may determine one color setting for the video files; but, the visual effects artist creates a different result, because he or she is also working natively with a set of camera raw files. You can easily end up in a situation where the effects shots don’t match the standard shots. Not only don’t they match, but it will be difficult to make them match, unless you go back to the camera raw information. This wouldn’t be possible with final, rendered effects shots. For these and many other reasons, I’m not keen on the native workflow and will discuss an alternative approach.

Commercial post

I just wrapped up two national spots for Honda Generators with area production company, Florida Film & Tape. Brad Fuller (director/director of photography) shot with a RED One and I worked the gig as editor, post supervisor and colorist. The RED One can be set for various frame rates, aspect ratios and frame sizes and until recently, most folks have been shooting at 4096×2048 – a 2:1 aspect ratio. Early camera software builds had issues with 16×9, but that appears to have been fixed, so our footage was recorded as 4096×2304 at 23.98fps. That’s a 16×9 slice of the sensor using 4096 pixels (4K) as the horizontal dimension.

As an aside, there is plenty of discussion on the web about pixel dimensions versus resolution. Our images looked fine at 2K and HD because of the benefits of the oversampled starting point and downsampling that to a smaller size. When I actually extract a 4K TIFF for analysis and look at points of color detail, like the texture on an actor’s face or blades of grass, I see a general, subtle “softness” that I attribute to the REDcode wavelet compression. It’s comparable to the results you get from many standard digital still photo cameras when viewed at 1:1 pixels (a 100% view). I don’t feel that full-size 4K stills look as nice as images from a high-end Nikon or Canon DSLR for print work, but that’s not my destination for this footage. I’m working in the TV world and our final spots were to be finished as HD (both 1080i and 720p) and NTSC (480i letterboxed). In that environment, the footage holds up quite well when compared with a 35mm film, F900 or VariCam commercial shoot.

The spots were shot on a stage and on location over the course of a week. The camera’s digital imaging tech (DIT) set up camera files on location and client, agency and director/DP worked out their looks based on the 720p video tap from the RED One to an HD video monitor. As with most tapeless media shoots, the media cards from the camera were copied to a set of two G-Tech FireWire drives as part of the on-set data wrangling routine. At this point all media was native .R3D and QuickTime reference files generated in-camera. The big advantage of the QuickTime reference files – and a part of the native workflow that IS quite helpful – is the fact that all folks on the set can review the footage. This allowed the client, agency and director to cull out the selected clips for use in editing. Think of it exactly like a film shoot. These are now your “circle” or “print” takes. Since I’m the “lab” in this scenario, it becomes very helpful to boil down the total of 250 clips shot to only 50 or so clips that I need to “process”.

Processing

This approach is similar to a film shoot with a best-light transfer of dailies, final correction in post and no retransferring of the film. The Honda production wrapped on a Friday and I did my processing on Saturday in time for a Monday edit. This is where the various free and low-cost RED tools come into play. RED Digital Cinema offers several post applications as free downloads. In addition, a number of users have developed their own apps – some free, some for purchase. My first step was to select all the RED clips in Clipfinder. This is a free app that you can use to a) select and review all RED media files in a given volume or folder, b) add comments to individual files and c) control the batch rendering of selected files.

The key application for me is RED Alert. The RED One generates color metadata in-camera and RED Alert can be used to review and alter this metadata. It can also be used to export single TIFF, DPX or rendered, self-contained QuickTime media files, as well as to generate new QuickTime reference files. The beauty is that updating color metadata or generating new reference files is a nearly instantaneous process. Since I am functioning in the role of a colorist at this point, it is important that I communicate what I am doing with the DP and/or director to make sure I don’t undo a look painstakingly created during the shoot.

With all due respect to DPs and DITs everywhere, I’m skeptical that the look everyone liked on an HD monitor during the shoot is really the best setting to get an optimal result in post. There have been a number of evolving issues with RED One over successive camera builds. People have often ended up with less-pleasing results than they thought they were getting, simply because what they thought they were seeing on set wasn’t what was being recorded.

Three factors affect this: Color Space, Output LUT and ISO settings. Since color settings are simply metadata and don’t actually affect the raw recording, these are all just different ways to interpret the image. Unfortunately that’s a double-edged sword, because each of these settings have a lot of options that drastically change how the image appears. They also affect what you see on location and, if adjusted incorrectly, can cause the DP to under or overexpose the image. My approach in post is generally to ignore the in-camera data and create my own grade in RED Alert. On this job, I set Color Space to REDspace and the Output LUT (look-up table) to Rec 709. The latter is the color space for HD video. From what I can tell, REDspace is RED’s modified and punchier version of Rec 709. These settings essentially tell RED Alert to interpret the camera raw image with REDspace values and convert those to Rec 709. Remember that my destination is TV anyway, so ultimately Rec 709 is really all I’m going to be interested in at the end.

Some folks recommend the Log settings, but I disagree. Log color settings are great for film and are a way of truncating a wider dynamic range into less space by “squeezing” the portion of the light values pertaining to highlights and shadows. The fallacy of this for TV – especially if you are working with FCP or Media Composer – is that these tools don’t employ log-to-linear image conversion, so there’s really no mathematically-accurate way to expand the actual values of this compressed dynamic range. Instead, I prefer to stay in Rec 709 and work with what I see in front of me.

ISO is another much-discussed setting. The RED One is nominally rated as ISO 320 (default). I really think it’s more like 200, because RED One doesn’t have the best low-light sensitivity. When you compare it with available-light shots from the Canon EOS 5D Mark II (for example, stills from Reverie), the Canon will blow away the RED One. The RED One images are especially noisy in the blue channel. You can bump up the ISO setting as high as 2000, but if you do this in camera (and don’t correct it in post), it really isn’t as pleasant as “pushing” film or even using a high-gain setting on an HD video camera.

On the other hand, there are some very nice examples of corrected low-light shots over at RedUser; however, additional post production filtering techniques were used to achieve these cleaner images. Clean-up in post is certainly no substitute for better lighting during the shoot. In reasonably well-lit evening shots, an ISO of 400 or 500 in RED Alert is still OK, but you do start to see noise in the darker areas of the image.

Pre-grading

The rub in all of this, when working with RED Alert, is that you have no output to a video display or scopes by which to accurately judge the image. You see it on your computer display, which is notoriously inaccurate. That’s an RGB display set to goodness-knows-what gamma value!  The only valid analysis tool is RED Alert’s histogram – so learn to use it. Since I am working this process as a “pre-grade” with the intent of final color grading later, my focus is to create a good starting point – not the final look of the shot. This means I will adjust the image within a safe range. In the case of these Honda spots, I increased the contrast and saturation with the intent that my later grading would actually involve a reduction of saturation for the desired appearance. Since my main tool is the histogram, I basically “stretched” the dynamic range of the displayed image to both ends of the histogram scale without clipping highlights or crushing shadows. I rendered new media and didn’t use the QuickTime reference files for post, which allowed me to apply a slight S-curve to my images. RED Alert lets you save grading presets, so even though you can only view one clip at a time, you can save and load presets to be applied to other clips, such as several takes of the same set-up.

Clipfinder and RED Alert work beautifully together. You can simply click on a clip in Clipfinder and it will open in RED Alert. Tweak the color settings and you’re done. It’s just that simple, fast and easy. The bad news is that these tools are Mac Intel only. Nothing for Power PCs. If you are running Windows, then you have to rely on RED Cine for these same tasks. RED Cine is a stripped down version of Scratch and has a lot of power, but I don’t find it as fast or straightforward as the various Mac tools.

Rendering media files

My premise is not to work within the native flow, so I still have to render media files that I’m going to use for the edit. There is no easy way around this, because the good/fast/cheap triad is in effect. (You can only pick two.) If you are doing this at the desktop level, you can either buy the most fire-breathing computer you can afford or you can wait the amount of time it takes to render. Period!

The Mac RED tools require Intel Macs, but my client owns a G5-based FCP suite. To work around this, I processed the RED files at another FCP facility nearby that was equipped with a quad-core Mac Pro. I rendered the files to ProResHQ, which the faster G5s can still play, even though this codec is optimized for Intels. In addition, our visual effects artist was using After Effects on a PC. His druthers were for uncompressed DPX image sequences, but once Apple released its QuickTime decoder for ProRes on Windows, he was able to work with the ProResHQ files without issue on his PC.

My Saturday was spent adjusting color on the 50 circle takes and then I let the Mac Pro render overnight. You can render media files in RED Alert, Clipfinder or RED Rushes (another free RED application), but all three are actually using RED Line – a command-line-driven rendering engine. Clipfinder and RED Rushes simply provide a front-end GUI and batch capabilities so the user doesn’t have to mess with the Mac command line controls. At this point, you set cropping, scaling and de-Bayer values. Choices made here become a trade-off between time and quality. Since I had a bit of time, I went with better quality settings, such as the “half-high” de-Bayer value. This gives you very good results in a downsampled 2K or HD image, but takes a little longer to render.

OK, so how much longer? My 50 clips equaled about 21 minutes of footage. This was 24fps (23.98) footage and rendering averaged about 1.2 to 1.5fps – roughly 16:1 to 20:1. Ultimately several hours, but not unreasonable for an overnight render on a quad-core. Certainly if I were working with one of the newest, maxed out, octo-core Intel Xeon “Nehalem” Mac Pros, then the rendering would be done in less time!
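
Worked out as a quick estimate (assuming those render averages held across the whole batch):

```python
# Render-time estimate for the overnight transcode described above.
FPS_SOURCE = 23.976
minutes_of_footage = 21
total_frames = minutes_of_footage * 60 * FPS_SOURCE        # ~30,200 frames

for render_fps in (1.2, 1.5):
    hours = total_frames / render_fps / 3600
    print(f"At {render_fps} fps: ~{hours:.1f} hours")       # ~7.0 and ~5.6 hours
```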

On Sunday morning I checked the files and met with the director/DP to review the preliminary color grade. He was happy, so I was happy and could dupe a set of the files to hand off to the visual effects artist.

The edit

I moved back to the client’s G5 suite with the ProResHQ media. As a back-up plan, I brought along my Macbook Pro laptop – an Intel machine – just in case I had to access any additional native .R3D files during the edit. Turns out I did. Not for the edit, but for some extra plate shots that the effects artist needed, which hadn’t been identified as circle takes. Whip out the laptop and a quick additional render. Like most tapeless media shoots, clips are generally short. My laptop rendered at a rate of about .8fps – not really that shabby compared to the tower. Rendering a few additional clips only took several minutes and then we were ready to rock.

I cut these spots on Apple Final Cut Pro, but understand that there’s nothing about this workflow that would have been significantly different using another NLE, especially Avid Media Composer. In that case, I would have simply rendered DNxHD files or image sequence files, instead of ProResHQ. Since I had rendered 1920×1080 ProResHQ files, I was cutting “offline” with finished-quality media. No real issues there, even on the G5. Our spots were very simple edits, so the main need was to work out the pacing and the right shots to lock picture and hand off clips for visual effects. All client review and approval was done long distance using Xprove. Once the client approved a cut, I sent an EDL to the visual effects artist, who had a duplicate drive full of the same ProResHQ media.

Finishing and final grade

The two spots each used a distinctly different creative approach. One spot was a white limbo studio shoot. The camera follows our lead actor walking in the foreground and activity comes to life in the background as he passes by. The inspiration for the look was a Tim McGraw music video in which McGraw wears a white shirt that is blown out and slightly glowing. Spot number two is all location and was intended to have a look reminiscent of the western Days of Heaven. In that film the colors are quite muted. In the white limbo spot, the effects not only involved manipulating the activity in the background, but creating mattes and the bloom effect for our foreground talent. Ultimately the decision was made to have a totally different look to the color and luminance of our foreground actor and the background elements and background actors. That sequence ended up with five layers to create each scene.

Spot number two wasn’t as complex, but required a rig-removal in nearly every scene. With these heavy VFX components, it seems obvious to me that working with native RED camera files would have been totally impractical. The advantage to native, camera raw files in grading is supposed to be that you have a greater correction range than with standard HD files. I had already done most of that, though, in my RED Alert “pre-grade”. There was very little advantage in returning to the native files at this point.

Another wrinkle in our job was the G5. In Apple’s current workflow, you only have direct native access to .R3D files in Apple Color. Most G5s didn’t have graphics display cards up to the task of working with ProResHQ high-def files and Color. I ran a few tests to see if that was even an option and Color just chugged! Instead, I did my final grades in FCP using Magic Bullet Colorista, which was more than capable of handling this grade. Furthermore, the white limbo spot required different grading on different video tracks and interactive adjustment of grading, opacity and blend modes. The background scene was graded with a lower luminance level and colors were desaturated and shifted to an overall blue tone. Our lead foreground actor was graded very bright with much higher saturation and natural color tones. In the end, it would have been hard to accomplish what I needed to do in Color anyway. FCP was actually the better environment in this case, but After Effects would have been the next best alternative.

Framing

One big advantage to RED is the ability to work with oversized images. I rendered my files at 1920×1080, but I did have to reframe one of our hero product shots. In that case, I simply re-rendered the file as 2K (2048×1152) and positioned it inside FCP’s 1920×1080 timeline. Again, this was a quick render on the laptop to generate the 2K ProResHQ clip.

DPs should consider this as something that works to their advantage. When RED footage was commonly shot only at a 2:1 aspect ratio, there was some “bleed room” factored in for repositioning within a 16×9 project. Since shooting in 16×9 now means a 1:1 relationship of the camera file to the edited frame, DPs might actually be best off shooting with a slightly looser composition. This would allow the 4096×2304 file to be rendered to 2K (2048×1152) and then the final position would be adjusted in the NLE. Final Cut Pro, Quantel, Premiere Pro, Autodesk Smoke and Avid DS can all handle 2K files. I understand that DPs might be reluctant to leave the final framing to someone else, but the fact of the matter is that this happens with every film-to-tape transfer of a 35mm negative. It’s easily controlled through proper communication, the use of registration/framing charts on set and ALWAYS keeping the DP in the loop.
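 
Here’s the arithmetic on how much repositioning room that approach actually buys inside a 1080 timeline (using the 2K and full 4K sizes mentioned above):

```python
# How much reposition "slop" an oversized render leaves inside a 1920x1080 timeline.
timeline = (1920, 1080)

for name, (w, h) in {"2K render": (2048, 1152), "full 4K file": (4096, 2304)}.items():
    slop_x, slop_y = w - timeline[0], h - timeline[1]
    print(f"{name}: {slop_x} px of horizontal and {slop_y} px of vertical travel")

# 2K render: 128 px of horizontal and 72 px of vertical travel
# full 4K file: 2176 px of horizontal and 1224 px of vertical travel
```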

Needless to say, most commercials still run as 4×3 on many TV stations and networks, so DPs should frame to protect for 4×3 cropping. This way “center-cut” conversions of HD masters retain the important part of the composition. Many shots composed for 16×9 will work fine in 4×3, but certain shots, like product shots, probably won’t. To avoid problems on the distribution end, compose your shots for both formats when possible and double shoot a shot when it’s not practical. The alternative is to only run letterboxed versions in standard def, but not every client has control of this down the line.
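 
To put numbers on that 4×3 protection: a center-cut from a 16×9 HD master keeps only 1440 of the 1920 pixels of width, as this quick calculation shows.

```python
# Width of a 4x3 center-cut extracted from a 16x9 HD master.
hd_w, hd_h = 1920, 1080
cut_w = round(hd_h * 4 / 3)           # 1440 pixels of width survive the center-cut
lost_each_side = (hd_w - cut_w) // 2  # 240 pixels trimmed from each edge
print(cut_w, lost_each_side)          # 1440 240
```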

Click to see the finished spots.

Final thoughts

The RED One is an innovative camera that has many converts on the production side. It doesn’t have to become a megillah in post if you treat it like digital “film” and design an efficient workflow that accommodates processing, editing, VFX and grading. I believe the honeymoon is waning for RED (in a good way). Now serious users are leaving much of the unabashed enthusiasm behind and are getting down to brass tacks. They are learning how to use the camera and the post tools in the most efficient and productive manner. There are many solutions, but pick the one that’s best for you and stick to it.

Click here for additional RED-related posts on DigitalFilms.

Follow these links to some of the available RED resources:

Clipfinder
Crimson
Cineform
Rubber Monkey Software
R3D Data Manager
Imagine Products
RED’s free tools
MetaCheater
Assimilate
Avid
Quantel
Autodesk
Adobe

©2009 Oliver Peters

Resolution Purists and the Real World

I love to lurk over at RedUser.net, the unofficial online forum for RED owners and enthusiasts. It’s a great place to gain insight about the technology, but it’s also just pure fun reading the various perceptions of the less-experienced RED aficionados. The RED One camera employs a single 4520 x 2540 CMOS sensor to capture various image sizes – the most popular of which is 4096 x 2048. This is considered to be a 4K file with a 2:1 aspect ratio. Many people confuse resolution and file size, so a 4K file doesn’t necessarily contain 4K’s worth of resolution. There’s also a lot of confusion between the terms resolution and sharpness. The simplest explanation is that resolution is the measurable ability to resolve fine detail, while sharpness relates to your eyes’ and brain’s perception of whether or not an image is crisp and shows a lot of detail. Both Mark Schubin (Videography magazine’s technical editor) and Adam Wilt (Pro Video Coalition) have written at length on these subjects.

As a poor country editor who isn’t a DP or image scientist, I defer to the authorities on these subjects, but I have spent several decades working in all sorts of image formats, resolutions and display technologies. From this experience, I can say that the supposed resolution of the sensor, as expressed in pixels, often has very little to do with how the image looks. I see a lot of folks online expressing the desire to finish in 4K, without any understanding of the real-world cost or desirability of 4K post and distribution. Not to mention the fact that true 4K theatrical displays are quite a few years off, if for no other reason than the lack of financial incentive for major theater chains to convert all their 35mm film projection to something like Sony’s SRX-series digital cinema projectors. So in spite of content producers’ interest in seeing 4K presentation venues, the reality is that high-resolution-originated product will continue to be viewed on all sorts of displays, from web movies to SD and HD television, up to film projection and/or digital cinema projection at 2K or less.

Been There – Done That – Got the Belt Buckle

The irony of all of this is that we’ve been there before. I even have the limited edition belt buckle to prove it! In the late 70s I worked with the CEI 310 camera. This was a 2-piece electronic field camera that was definitely geared towards high-quality production and not news. The CEI 310 eventually became the basis of Panavision’s Panacam – their first foray into electronic cameras equipped with Panavision film lenses. Bear in mind that the 310 and Panacam were always SD cameras without any 24P capabilities. On the plus side, the colorimetry of the CEI camera appeared more “filmic” than its ENG counterparts, which was further enhanced by the addition of Panavision lenses and accessories.

At the time, I was responsible for a facility that cranked out a ton of grocery store commercials. “Painting” the camera to get the most out of tabletop shots was the job of the video engineer (often called the “video shader”). A lot of what I learned about color correction (and have since passed on to others) came from trying to get a cooked ham or roast to look appetizing using our RCA studio cameras! When Panavision set up the deal with CEI to market Panacams, they established a number of authorized rental/production facilities who would supply the camera accompanied by a trained technician. Again, this person’s job was to paint the image for the most pleasing look. Fast forward a couple of decades and you have the position of the DIT (digital imaging technician), who today fulfills the role of video shading, among other tasks, when HD cameras are used on high-budget shoots, like feature films.

These early attempts at electronic cinematography really didn’t go far, due to the limited resolution of NTSC and PAL video. Sure, the images looked great, but you were really working in a medium that was only acceptable for television and not the big screen. Nevertheless, companies like Panavision, CEI and other competitors (like Ikegami with the EC-35) proved that properly adjusted video cameras coupled with high-quality glass could be a good marriage, regardless of the resolution of the camera.

High Definition to Small Definition

Fortunately HD came along, reviving the interest in using electronic cameras for theatrical distribution. The company I worked for in the 90s was an early adopter of HD. We bought two of Sony’s HDW-730 cameras, which were interlaced 1080 HDCAM camcorders. Interlacing causes many of the purists to turn up their noses, preferring the later 24P models as true film-style images. In spite of this, we produced quite a lot of impressive content, including a Bible-based dramatic production for a themed attraction called “The Holyland Experience”. Our 20-minute film was shot on location in Israel and projected in a custom theater that rivaled any big-screen movie theater in size and scope. The final master was edited in 1080i but encoded into 720p and projected using a Barco data-grade (not digital cinema) projector. Interlaced or not, this image was as impressive and as high-quality to the eye as if this had been a full-blown 35mm film production.

On the other end of the scale, I’ve also posted the video portions of IllumiNations: Reflections Of Earth, Disney’s nighttime show at EPCOT – a fireworks and laser extravaganza choreographed to music. ROE’s video segments are presented on a 29’ tall rotating earth globe mounted on a barge in the middle of the EPCOT lagoon. The continental masses on that globe consist of LED displays. The final image that fills these screens is actually a 360 x 128 pixel video movie composited like a world map. The pixels for the continents are, in turn, mapped onto the matching LED coordinates of the globe. Australia only has the resolution of a typical computer desktop icon, yet it is still possible to discern imagery with a display this coarse. The trick is in the fact that viewing distances are 500’ to 700’ away and your brain fills in the gaps. This works much like the image of Lincoln’s face that’s made up of a mosaic of other images. When you get far enough back, you recognize Lincoln, instead of focusing on the individual components.
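 
A rough bit of math shows why the eye is so forgiving at those distances. If you approximate the pixel pitch as the globe’s height divided by the 128-pixel map height (my back-of-the-envelope assumption, not Disney’s spec), each pixel covers only an arc minute or two of your field of view, which is right around the limit of normal visual acuity:

```python
import math

# Rough angular size of one LED "pixel" on the globe, as seen from across the lagoon.
# Assumes pixel pitch ~= globe height / vertical pixel count -- an approximation on my part.
globe_height_ft = 29
map_height_px = 128
pitch_ft = globe_height_ft / map_height_px       # about 0.23 ft per pixel

for distance_ft in (500, 700):
    arcmin = math.degrees(math.atan(pitch_ft / distance_ft)) * 60
    print(f"{distance_ft} ft: ~{arcmin:.1f} arc minutes per pixel")

# 500 ft: ~1.6 arc minutes per pixel
# 700 ft: ~1.1 arc minutes per pixel
# Normal 20/20 acuity resolves detail down to about 1 arc minute.
```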

High Definition and the Silver Screen

Most folks now agree that the actual resolution of the RED One camera with proper lenses and accurate focus is in excess of 3K, though not quite as high as 4K. Compare this to film. 35mm negative is said to be as high as the equivalent of 8K (though 4K is generally accepted by most as “full” resolution), but typically is scanned at 4K or 2K resolution. However, the image you see in the theater from a projected release print is generally considered to be closer to 1K. This varies with the quality of the print, the projector lens and the brightness of the projector lamp. Meanwhile, most of the popular HD cameras used for digital cinematography (Grass Valley Viper, Sony F900, Sony F23, etc.) capture images at 1920 x 1080, leaving you with a 16 x 9 image that’s comparable to a 2K film scan when the aspect ratio is 1.85:1. I’ve seen quite a few of the movies in theaters that were “filmed” using digital cameras (Collateral, Apocalypto, Zodiac, Star Wars, Once Upon A Time In Mexico, etc.) and I find very little to quibble about. In fact, Star Wars was shot for the wider 2.35:1 aspect, meaning that the top and bottom were cropped. So really only a little over 800 of the actual 1080 pixels of height show up in the final prints.
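 
The crop math is easy to verify:

```python
# Active picture height left when a wide aspect ratio is extracted
# from a full-width 1920x1080 recording.
width = 1920
for aspect in (1.85, 2.35):
    height_used = round(width / aspect)
    print(f"{aspect}:1 uses {height_used} of the 1080 lines")

# 1.85:1 uses 1038 of the 1080 lines
# 2.35:1 uses 817 of the 1080 lines
```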

I’ve also edited a film that was finished through a DI process using Assimilate SCRATCH. Our film was shot on 4-perf Super35mm negative and transferred to HDCAM-SR. Since we intended to end up in 1.85:1, the 4-perf Super35mm frame provided the closest fit to the 16 x 9 aspect ratio of HD, without wasting part of the top and bottom of the negative’s frame. This technique results in smaller film grain within the HD frame because more of the whole film frame is used. Internally our SCRATCH files were 2K DPX files and the output was back to an HDCAM-SR master. I’ve seen this film projected at DCI spec in the lab’s screening room, as well as HDCAM running through a projector at 1080i (interlaced with added 3:2 pulldown) and I must say that this image would not have looked any better had we worked off of a 4K film scan.

The reason I say this is due to the general texture of film and the creative choices made for exposure, lighting and lens/filter selection. Images that are more pleasing to the eye are sometimes technically lower in sharpness. In other words, when you stick your nose up close to the screen, the image will tend to appear soft. Having higher resolution doesn’t matter, because there is no more real detail in the image to bring out except bigger film grain. One interesting comparison is last year’s There Will Be Blood versus No Country For Old Men. Blood went through a traditional film finish rather than a digital one, whereas No Country was completed at 2K resolution using a digital intermediate process. Both were nominated for an Oscar for Best Cinematography. By all rights, Blood should have had the higher-resolution image, yet in point of fact, both looked about the same to the casual eye when seen in the theaters. The cinematography was striking enough to earn each a nomination.

It’s in the Glass

Going back to the Panacam example, what you start to find out is that the quality of the glass is a major factor in what ends up being recorded. I once did a film shot with a Sony F900 camera (24P). The DP/owner-operator opted to rent a “Panavised” Sony F900 (like those used on Star Wars) instead of using his own camera, so that he could take advantage of the better Panavision lenses. The result was a dramatic difference in image quality compared to standard HD lenses. Likewise, some of the RED examples I’ve seen online that were shot with various non-optimized lenses, such as prime lenses designed for still photo cameras, exhibited less-than-superb quality. This is also why there have been a number of successful indie films shot with a Panasonic VariCam. Technically the VariCam, with its 1280 x 720 imager, should look significantly worse on the big screen than a Sony F900. Yet many of these have been shot using 35mm lens adapters and high-quality film lenses. The results on screen speak for themselves. The funny thing is that there’s a lot of talk of 4K, yet when I’ve seen Sony’s 4K projector demos, the content comes from 1920 x 1080 sources – shot with various Sony or Panavision digital cameras. I can assure you that these look awesome.

You ARE Paying for Something

Aside from lenses, another thing to keep in mind is the electronics used by the camera for image enhancement and filtering. Part of the big price difference between a RED One and a competing Sony, Grass Valley or Panasonic camera pays for the electronics used to enhance the image. The RED One generates a camera raw, Bayer-pattern image. The intent is to do all processing in post, just like sending film negative to a lab. The other cameras have a lot of circuitry designed to control the image in-camera. You may opt for a neutral, flat image, but there’s still processing applied to generate that finished RGB image from the camera, regardless of whether it’s flat or painted. This processing not only applies color matrices but also sharpens detail and reduces noise. By contrast, RED not only skips this in-camera processing, but also uses OLPF (optical low pass filtering), common in digital still camera sensors. OLPF essentially filters out the highest-resolution transients so that you don’t have excess aliasing in the image on things like contrasting diagonal lines, such as on a car grill. The design goal is to leave you with true, not artificial, resolution. This means the image may at times appear soft, so sharpening and detail enhancement have to be added back (to taste) during the post production conversion of the camera raw files.
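 
For anyone curious what “de-Bayering in post” actually involves, here’s a toy bilinear demosaic of an RGGB mosaic written in a few lines of numpy. To be clear, this is nothing like RED’s own de-Bayer algorithms or any shipping converter; it’s only meant to illustrate the kind of RGB reconstruction that has to happen somewhere, whether in camera electronics or later in software.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw: np.ndarray) -> np.ndarray:
    """Toy bilinear de-Bayer of an RGGB mosaic (float array, H x W).
    Illustrative only -- real de-Bayer algorithms are far more sophisticated."""
    h, w = raw.shape
    rows, cols = np.mgrid[0:h, 0:w]

    # Where each color actually exists in the RGGB pattern.
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Interpolation kernels: green from its 4 neighbors, red/blue from nearby same-color sites.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

    out = np.zeros((h, w, 3), float)
    for idx, mask, kernel in ((0, r_mask, k_rb), (1, g_mask, k_g), (2, b_mask, k_rb)):
        plane = np.where(mask, raw, 0.0)          # keep only that color's samples
        out[..., idx] = convolve(plane, kernel, mode="mirror")
    return out

# Quick check: a flat mid-gray mosaic comes back as flat mid-gray RGB.
rgb = bilinear_demosaic(np.full((8, 8), 0.5))
```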

The dilemma of all of this file conversion needed in post is that you often don’t get the best results. On the plus side, you may reap the benefit of oversampling, meaning that at times an HD image downsampled to SD may look better than if it had been shot in SD to begin with. I have, however, also found the opposite to be true. HD is a very high-resolution image that has more actual resolution than our monitors and projectors can truly display. An image looks more natural in HD when less detail enhancement is dialed in. If you crank up the enhancement, like you typically do in most SD cameras, then that image would look garish in HD. Unfortunately, when you downsample this very natural-looking HD image into SD, the image tends to look soft, because we are used to the look of overly-enhanced SD cameras. Therefore, downsampling with a dedicated device like the Teranex Mini will give you better results than using the built-in functions of Final Cut Pro or a Kona card or an HD deck, because the Mini lets you subjectively add enhancement, color control and noise reduction as part of the HD-to-SD conversion.
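 
In software terms, the idea is simply: scale down with a good filter, then dial some enhancement back in to taste. Here’s a crude sketch using the Pillow imaging library as a stand-in for what dedicated hardware does better; the sharpening values are arbitrary starting points, not anyone’s recommended settings, and it ignores the non-square-pixel bookkeeping of real SD.

```python
from PIL import Image, ImageFilter

def hd_to_sd(src_path: str, dst_path: str, sharpen_percent: int = 60) -> None:
    """Downsample a 1920x1080 still to 720x480 and add back a touch of enhancement."""
    frame = Image.open(src_path)
    sd = frame.resize((720, 480), Image.LANCZOS)   # high-quality downscale
    # Re-introduce some "detail enhancement" so the SD frame doesn't read as soft.
    sd = sd.filter(ImageFilter.UnsharpMask(radius=1.0, percent=sharpen_percent, threshold=2))
    sd.save(dst_path)

hd_to_sd("hd_frame.png", "sd_frame.png")
```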

Aliasing is another issue. A lot of HD content is captured in progressive formats (such as 24P). Progressive HD images on a native progressive display (projectors, plasmas, LCDs) look great, but when you display these same images as scaled-down NTSC or PAL on an interlaced CRT, something’s got to give. If you take a high-contrast transition, such as the light-to-dark changes between the metal bars in our car grill example, the HD image is able to retain all the anti-aliasing information for the in-between gradients in those transitions from light to dark and back. When this image is downsampled, some of this detail is lost and there’s less anti-aliasing information. The transitions become harsher when displayed on the interlaced SD CRT and the metal of the grill appears to scintillate with any movement. In other words, the diagonal edges of the metal grill appear more jagged and tend to “dance” between the scanlines.

Unfortunately this is a normal phenomenon and can exist whether you shoot digitally or on film. A few years back Cintel, an established telecine manufacturer, introduced SCAN’dAL, a feature designed specifically to deal with this issue when transferring 35mm footage to video. Although a lot of ink has been spilled about the benefits of oversampling, in some cases a matching size yields the best results. I go back to SD videos I’ve cut, which were shot using a Sony Digital Betacam camcorder, and am amazed at how much better these look in SD than newer versions of the same program shot on HD and downsampled for SD presentation. When downsampling is part of the workflow, it is important to try a number of options if quality is critical. For example, sometimes hardware does a better job and at other times software is king. Some of the better HD-to-SD scaling in software is achieved in After Effects and Shake. Often just the smallest touch of Gaussian blur will help as well.
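 
That last trick is easy to experiment with in software. Something like this, where the half-pixel blur radius is just a starting point to adjust by eye:

```python
from PIL import Image, ImageFilter

# Take the edge off high-contrast detail before scaling, so diagonals
# don't scintillate on an interlaced SD display. Radius is to taste.
hd = Image.open("hd_frame.png")
softened = hd.filter(ImageFilter.GaussianBlur(radius=0.5))
sd = softened.resize((720, 480), Image.LANCZOS)
sd.save("sd_frame_smooth.png")
```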

Reality Check for the Indie Filmmaker

One of the reasons this isn’t cut-and-dried is that camera manufacturers play so many games with the image. For example, the Panasonic HVX200 makes outstanding images and is popular with indie filmmakers. Yet it only uses a 960 x 540 pixel sensor to generate 720 or 1080 images – getting there through the magic of pixel shifting (see Adam Wilt). As good as the camera looks, when you put it side-by-side with Panasonic’s VariCam, the latter will appear noticeably sharper than the HVX200, because it indeed has higher resolution.

I’m sure you’re wondering if this is all just a can of worms. You’re right. It is. But often, the best-calibrated measuring devices are simply your two eyes. Forget the specs and trust your instincts. A recent example is Shine A Light. This film was shot using a combination of 35mm film cameras and one Panavision Genesis. All footage ended up on HDCAM-SR (1920 x 1080) and the master was recorded out not only to 35mm film for release prints, but also to IMAX. Even though HD isn’t close to the resolution of a 70mm IMAX negative, the Stones’ concert in Shine A Light looks incredible in IMAX, projected onto a 5-story-tall screen!

In the real world, it’s amazing what you can get away with. Last year the Billy Graham Library opened with video modules that I edited and finished. The largest screen is in the Finale theater – an ultra-widescreen format that’s a horizontal composite of three 720p projections. Our sources were largely HD, but there was also a smattering of audience close-up shots from Graham’s last crusade in New York City that originated on a Panasonic DVX100A (mini-DV) camera. It was amazing how well these images held up in the finished product. Other great examples are the documentaries Murderball and The War Tapes. Each was shot with a variety of mini-DV cameras, yet in spite of the image defects, the stories and personalities are so enthralling that image quality is the least important factor.

I have a lot of respect for what the team at RED has done, but I’m not yet willing to concede that shooting with the RED One is going to give you a better film than if you used other cameras, like an Arri D-21, Sony F23 or Panasonic’s new HPX3000, just because RED has a higher pixel count for its sensor. In the end, like everything else in this business, content and emotion are the most important ingredients. When it comes to capturing an image, the technical resolution of the camera is a big factor, but it doesn’t automatically guarantee the best image results from the point-of-view of your audience.

© 2008 Oliver Peters