4K is kinda meh


Lately I’ve done a lot of looking at 4K content. Not only was 4K all over the place at NAB in Las Vegas, but I’ve also had to provide some 4K deliverables on client projects. This has meant a much closer examination of the 4K image than in the past.

First, let’s define 4K. Typically the term 4K applies to either a “cinema” width of 4096 pixels or a broadcast width of 3840 pixels. The latter is also referred to as QuadHD, UltraHD or UHD and is a 2x multiple of the 1920-wide HD standard. For simplicity’s sake, in this article I’m going to be referring to 4K, but will generally mean the UHD version, i.e. 3840 x 2160 pixels, aka 2160p. While 4K (and greater) acquisition for an HD finish has been used for a while in post, there are already demands for true 4K content. This vanguard is notably led by Netflix and Amazon; however, international distributors are also starting to request 4K masters, if they are available.

In my analysis of the images from various 4K (and higher) cameras, it becomes quite obvious that the 1:1 image in 4K really isn’t all that good. In fact, if you compare a blow-up from HD to 4K against the true 4K version of the same image, it becomes very hard to distinguish the two. Why is that?

When you analyze a native 4K image, you become aware of the deficiencies in the image. These weren’t as obvious when that 4K original was down-sampled to an HD timeline and master. That’s because in the HD timeline you are seeing the benefit of oversampling, which results in a superb HD image. Here are some factors that become more obvious when you view the footage in its original size.

1. Most formats use a high-compression algorithm to squeeze the data into a smaller file size. In some cases compression artifacts start to become visible at the native size.

2. Many DPs like to shoot with vintage or otherwise “lower quality” lenses. This gives the image “character” and, in the words of one cinematographer that I worked with, “takes the curse off of the digital image.” That’s all fine, but again, viewed natively, you start to see the defects in the optics, like chromatic aberration in the corners, coloration of the image, and general softness.

3. Due to the nature of video viewfinders, run-and-gun production methods, and smaller crews, many operators do not nail the critical focus on a shot. That’s not too obvious when you down-convert the image; however, at 100% you notice that focus was on your talent’s ear and not their nose.

The interesting thing to me is that when you take a 4K (or greater) image, down-convert that to HD, and then up-convert it back to 4K, much of the image detail is retained. I’ve especially noticed this when high quality scalers are used for the conversion. For example, even the free version of DaVinci Resolve offers one of the best up-scalers on the market. Second, scaling from 1920 x 1080 to 3840 x 2160 is an even 2x multiple, so a) the amount you are zooming in isn’t all that much, and b) whole-number multiples give you better results than fractional scaling factors. In addition, Resolve offers several scaling methods for sharper versus smoother results.
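The round trip is easy to reason about precisely because 2x is a whole-number multiple. As a toy illustration (not how Resolve’s scaler actually works – production scalers use much better filters than this), here’s a minimal Python sketch of a 2x box-filter downsample and a nearest-neighbor upsample:

```python
# Toy sketch: why an even 2x multiple makes HD<->UHD scaling clean.
# A 2x box downsample maps exactly four source pixels to one target
# pixel -- no fractional sample positions, so no resampling ambiguity.

def downsample_2x(img):
    """Average each 2x2 block of a grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

def upsample_2x(img):
    """Nearest-neighbor 2x upscale: each pixel becomes a 2x2 block."""
    out = []
    for row in img:
        wide = [p for p in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

# A flat (constant) image survives the UHD->HD->UHD round trip exactly.
uhd = [[100.0] * 8 for _ in range(8)]   # tiny stand-in for a 3840x2160 frame
hd = downsample_2x(uhd)                 # the "HD" master
restored = upsample_2x(hd)              # back up to "UHD"
assert restored == uhd
```

Real footage obviously isn’t flat, which is where the quality of the scaler’s filter comes in, but the even multiple means the filter never has to interpolate between fractional pixel positions.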

In general, I feel that the most quality is retained when you start with 4K footage rather than HD, but that’s not a given. I’ve blown ARRI ALEXA clips – which only ever existed as HD – up to 4K and the results were excellent. That has a lot to do with what ARRI is doing in their sensor and the general detail of the ALEXA image. Clearly that’s been proven time and time again in the theaters, where footage recorded on ALEXAs in 2K, HD or 2.8K ARRIRAW has been blown up for 4K projection onto the large screen with excellent results.

Don’t get me wrong. I’m not saying you shouldn’t post in 4K if you have an easy workflow (see my post next week) to get there. What I am saying is that staying in 4K versus a 4K-HD-4K workflow won’t result in a dramatic difference in image quality, when you compare the two side-by-side at 100% pixel-for-pixel resolution. The samples below come from a variety of sources, including the blogs of John Brawley, Philip Bloom and OffHollywood Productions. In some cases the source images originated from pre-production cameras, so there may be image anomalies not found in actual shipping models of these cameras. Grades applied are mine.

View some of the examples below. Click on any of these images for the slide show. From there you can access the full size version of any of these comparisons.

©2016 Oliver Peters

The On Camera Interview


Many projects are based on first person accounts using the technique of the on camera interview. This approach is used in documentaries, news specials, corporate image presentations, training, commercials, and more. I’ve edited a lot of these, especially for commercials, where a satisfied customer might give a testimonial that gets cut into a five-ish minute piece for the web or a DVD and then various commercial lengths (:10, :15, :30, :60, 2 min.). The production approach and editing techniques are no different in this application than if you are working on a documentary.

The interviewer

The interview is going to be no better than the quality of the interviewer asking the off camera (and unheard) questions. Asking good questions in the right manner will yield successful results. Obviously the interviewer needs to be friendly enough to establish a rapport with the subject. People get nervous on camera, so the interviewer needs to get them relaxed. Then they can comfortably answer the questions and tell the story in their own words. The interviewer should structure the questions in a way that the totality of the responses tells a story. Think in terms of story arc and strive to elicit good beginning and ending statements.

Some key points to remember. First, make sure you get the person to rephrase the question as part of their answer, since the audience won’t hear the interviewer. This makes their answer a self-contained statement. Second, let them talk. Don’t interject or jump on the end of the answer, since this will make editing more difficult.

Sometimes in a commercial situation, you have a client or consultant on set, who wants to make sure the interviewee hits all the marketing copy points. Before you get started, you’ll need to have an understanding with the client that the interviewee’s answers will often have to be less than perfect. The interviewees aren’t experienced spokespersons. The more you press them to phrase the answer in the exact way that fits the marketing points or to correctly name a complex product or service in every response, the more stilted their speaking style will become. Remember, you are going for naturalness, honesty and emotion.

The basics

As you design the interview set, think of it as staging a portrait. Be mindful of the background, the lighting, and the framing. Depending on the subject matter, you may want a matching background. For example, a doctor’s interview might look best in a lab or in the medical office with complex surgical gear in the background. An interview with an average person is going to look more natural in a neutral environment, like their living room.

You will want to separate the interview subject from the background and this can be achieved through lighting, lens selection, and color scheme. For example, a blonde woman in a peach-colored dress will stand out quite nicely against a darker blue-green background. A lot of folks like the shallow depth-of-field and bokeh effect achieved by a full-frame Canon 5D camera with the right lens. This is a great look, but you can achieve it with most other cameras and lenses, too. In most cases, your video will be seen in the 16:9 HD format, so an off-center framing is desirable. If the person is looking camera left, then they should be on the right side of the frame. Looking camera right, then they should be on the left side.

Don’t forget something as basic as the type of chair they are sitting in. You don’t want a chair that rocks, rolls, leans back, or swivels. Some interviews take a long time and subjects that have a tendency to move around in a chair become very distracting – not to mention noisy – in the interview, if that chair moves with them. And of course, make sure the chair itself doesn’t creak.

Camera position

The most common interview design you see is one where the subject is looking slightly off camera, interacting with the interviewer, who is sitting to the left or the right of the camera. You do not want to instruct them to look into the camera lens while you are sitting next to the camera, because most people’s eyes will dart between the interviewer and the lens when they attempt this. It’s unnatural.

The one caveat is that if the camera and interviewer are far enough away from the interview subject – and the interviewer is also the camera operator – then it will appear as if the interviewee is actually looking into the lens. Because the interviewer and the camera are so close to each other at that distance, when the subject addresses the interviewer, he or she appears to be looking at the lens.

If you want them looking straight into the lens, then one solution is to set up a system whereby the subject can naturally interact with the lens. This is the style documentarian Errol Morris has used in a rig that he dubbed the Interrotron. Essentially it’s a system of two teleprompters. The interviewer and subject can be in the same studio, although separated in distance – or even in other rooms. The two-way mirror of the teleprompter is projecting each person to the other. While looking directly at the interviewer in the teleprompter’s mirror, the interviewee is actually looking directly into the lens. This feels natural, because they are still looking right at the person.

Most producers won’t go to that length, and in fact the emotion of speaking directly to the audience may or may not be appropriate for your piece. Whether you use Morris’ solution or not, the single camera approach makes it harder to avoid jump cuts. Morris actually embraces and uses these; however, most producers and editors prefer to cover them in some way. Covering the edit with a b-roll shot is a common solution, but another is to “punch in” on the frame by blowing up the shot digitally by 15-30% at the cut. Now the cut looks like you used a tighter lens. This is where 4K resolution cameras come in handy if you are finishing in 2K or HD.
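The punch-in itself is just arithmetic: crop a centered window out of the frame and let it fill the screen. A minimal sketch (the function name and the 20% zoom figure are illustrative, not any NLE’s actual API):

```python
def punch_in_crop(src_w, src_h, zoom):
    """Centered crop rectangle that simulates a tighter lens.
    zoom=1.2 means the image appears 20% larger on screen."""
    crop_w = round(src_w / zoom)
    crop_h = round(src_h / zoom)
    x = (src_w - crop_w) // 2
    y = (src_h - crop_h) // 2
    return x, y, crop_w, crop_h

# A 20% punch-in on a UHD source...
x, y, w, h = punch_in_crop(3840, 2160, 1.2)
print((x, y, w, h))    # (320, 180, 3200, 1800)

# ...still leaves more than 1920x1080 pixels, so an HD finish
# needs no upscaling at the cut.
assert w >= 1920 and h >= 1080
```

This is why a 4K source is so handy for an HD finish: even a 30% punch-in still crops a region larger than the HD raster, so no pixels are invented.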

With the advent of lower-cost cameras, like the various DSLR models, it’s quite common to produce these interviews as two camera shoots. Both cameras may be positioned on the same side of the interviewer, or one placed on either side. There really is no right or wrong approach. I’ve done a few where the A-camera is right next to the interviewer, but the B-camera is almost 90-degrees to the side. I’ve even seen it where the B-camera exposes the whole set, including the crew, the other camera, and the lights. This gives the other angle almost a voyeuristic quality. When two cameras are used, each should have a different framing, so a cut between the cameras doesn’t look like a jump cut. The A-camera might have a medium framing including most of the person’s torso and head, while the B-camera’s framing might be a tight close-up of their face.

While it’s nice to have two matched cameras and lens sets, this is not essential. For example, if you end up with two totally mismatched cameras out of necessity – like an Alexa and a GoPro or a C300 and an iPhone – make the best of it. Do something radical with the B-camera to give your piece a mixed media feel. For example, your A-camera could have a nice grade to it, but the B-camera could be black-and-white with pronounced film grain. Sometimes you just have to embrace these differences and call it a style!


When you are there to get an interview, be mindful to also get additional b-roll footage for cutaway shots that the editor can use. Tools of the trade, the environment, the interview subject at work, etc. Some interviews are conducted in a manner other than sitting down. For example, a cheesemaker might take you through the storage room and show off different rounds of cheese. Such walking-talking interviews might make up the complete interview or they might be simple pieces used to punctuate a sit-down interview. Remember, if you have the time, get as much coverage as you can!

Audio and sync

It’s best to use two microphones on all interviews – a lavaliere on the person and a shotgun mic just out of the camera frame. I usually prefer the sound of the shotgun, because it’s more open; but depending on how noisy the environment is, the lav may be the better channel to use. Recording both is good protection. Not all cameras have great sound systems, so you might consider using an external audio recorder. Make sure you patch each mic into separate channels of the camera and/or external recorder, so that they are NOT summed.

Wherever you record, make sure all sources receive audio. It would be ideal to feed the same mics to all cameras and recorders, but that’s not always possible. In that case, make sure that each camera is at least using an onboard camera mic. The reason to do this is for sync. The two best ways to establish sync are common timecode and a slate with a clapstick. Ideally both. Absent either of those, some editing applications (as well as a tool like PluralEyes) can analyze the audio waveforms and automatically sync clips based on matching sound. Worst case, the editor can manually sync clips by marking common aural or visual cues.
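Waveform-based sync tools work by finding the time offset at which two recordings’ audio best correlates. A stripped-down sketch of the idea in Python (real tools like PluralEyes work on long recordings with far more robust, windowed correlation – this is just the core concept):

```python
def best_offset(ref, clip, max_lag):
    """Return the lag (in samples) at which `clip` best lines up
    with `ref`, found by brute-force cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(
            ref[i] * clip[i + lag]
            for i in range(len(ref))
            if 0 <= i + lag < len(clip)
        )
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A clap transient captured by two devices; the second recording
# picks it up 5 samples later.
ref  = [0, 0, 0, 1.0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0]
clip = [0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0.5, 0, 0, 0]
assert best_offset(ref, clip, 8) == 5
```

Once the offset is known, the NLE simply slips one clip by that many samples, which is exactly what you’d do by eye when marking a clapstick frame.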

Depending on the camera model, recordings may not span media files, with the camera automatically starting a new clip every 4GB (about every 12 minutes with some formats). The interviewer should be mindful of these limits. If possible, all cameras should be started together and re-slated at the beginning of each new clip.
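The ~12 minute figure falls out of simple arithmetic once you assume a recording bitrate. A quick back-of-the-envelope calculation in Python (the 45 Mb/s video rate and the audio overhead are assumptions for illustration, not any specific camera’s spec):

```python
def clip_minutes(file_limit_gb, video_mbps, audio_mbps=1.5):
    """Rough duration of one clip before a FAT32-style 4 GB file split."""
    total_bits = file_limit_gb * 1024**3 * 8          # GB -> bits
    bitrate = (video_mbps + audio_mbps) * 1_000_000   # Mb/s -> b/s
    return total_bits / bitrate / 60                  # seconds -> minutes

# e.g. an H.264 camera recording at roughly 45 Mb/s
minutes = clip_minutes(4, 45)
assert 11 < minutes < 13    # in the ballpark of the ~12 minutes cited
```

Higher-bitrate formats hit the split sooner, which is why the interval varies so much from camera to camera.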

Editing workflow

Most popular nonlinear editing applications (NLEs) include great features that make editing on camera interviews reasonably easy. To end up with a solid five minute piece, you’ll probably need about an hour of recorded interview material (per camera angle). When you cut out the interviewer’s questions, the little bit of chit chat at the beginning, and the repeats or false starts that an interviewee may have, you are generally left with about thirty minutes of useable responses. That’s a 6:1 ratio of selects to finished piece.
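That ratio is worth sketching as a quick planning calculation (the 50% loss to questions, chit chat and false starts is just the rough figure cited above, not a universal rule):

```python
def shooting_ratio(raw_minutes, unusable_fraction, finished_minutes):
    """Usable selects vs. finished length -- a rough planning number."""
    usable = raw_minutes * (1 - unusable_fraction)
    return usable / finished_minutes

# ~60 min of interview, about half lost to questions and false
# starts, cut down to a 5-minute piece:
assert shooting_ratio(60, 0.5, 5) == 6.0
```

Run the numbers in reverse when scheduling the shoot: a target length and an expected ratio tell you how much interview time to book.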

Your goal as an editor is to be a storyteller through the soundbites you select and the order into which you arrange them – to have the subject seamlessly tell their story without the aid of an on camera host or voice-over narrator. To aid the editing process, use NLE tools like favorites, markers, and notes, along with human tools like written transcripts and your own notes to keep the project organized.

This is the standard order of things for me:

Sync sources and create multi-cam sequences or multi-cam clips depending on the NLE.

Pass 1 – create a sequence with all clips synced up and organized into a single timeline.

Pass 2 – clean up the interview and remove all interviewer questions.

Pass 3 – whittle down the responses into a sequence of selected answers.

Pass 4 – rearrange the soundbites to best tell the story.

Pass 5 – cut between cameras if this is a multi-camera production.

Pass 6 – clean up the final order by editing out extra words, pauses, and verbal gaffes.

Pass 7 – color correct clips, mix audio, add b-roll shots.

As I go through this process, I am focused on creating a good “radio cut” first. In other words, how does the story sound if you aren’t watching the picture? Once I’m happy with this, I can worry about color correction, b-roll, etc. When building a piece that includes multiple interviewees, you’ll need to pay attention to several other factors. These include getting a good mix of diversity – ethnic, gender, job classification. You might want to check with the client first as to whether each and every person interviewed needs to be used in the video. Clearly some people are going to be duds, so it’s best to know up front whether or not you’ll need to go through the effort to find a passable soundbite in those cases.

There are other concerns when re-ordering clips among multiple people. Arranging the order of clips so that you can cut between alternating left and right-framed shots makes the cutting flow better. Some interviewees come across better than others; however, make sure not to lean totally on their responses. When you get multiple, similar responses, pick the best one, but if possible spread around who you pick in order to get the widest mix of respondents. As you tell the story, pay attention to how one soundbite might naturally lead into another – or how one person’s statement can complete another’s thoughts. It’s those serendipitous moments that you are looking for in Pass 4. It’s what should take the most creative time in your edit.

Philosophy of the cut

In any interview, the editor is making editorial selections that alter reality. Some broadcasters have guidelines as to what is and isn’t permissible, due to ethical concerns. The most common editorial technique in play is the “Frankenbite”. That’s where an edit is made to truncate a statement or combine two statements into one. Usually this is done because the answer went off on a tangent and that portion isn’t relevant. By removing the extraneous material and creating the “Frankenbite” you are actually staying true to the intent of the answer. For me, that’s the key. As long as your edit is honest and doesn’t twist the intent of what was said, then I personally don’t have a problem with doing it. That’s part of the art in all of this.

It’s for these reasons, though, that directors like Morris leave the jump cuts in. This lets the audience know an edit was made. Personally, I’d rather see a smooth piece without jump cuts and that’s where a two camera shoot is helpful. Cutting between two camera angles can make the edit feel seamless, even though the person’s expression or body position might not truly match on both sides of the cut. As long as the inflection is right, the audience will accept it. Occasionally I’ll use a dissolve, white flash or blur dissolve between sections, but most of the time I stick with cuts. The transitions seem like a crutch to me, so I use them only when there is a complete change of thought that I can’t bridge with an appropriate soundbite or b-roll shot.

The toughest interview edit tends to be when you want to clean things up, like a repeated word, a stutter, or the inevitable “ums” and “ahs”. Fixing these by cutting between cameras normally results in a short camera cut back and forth. At this point, the editing becomes a distraction. Sometimes you can cheat these jump cuts by staying on the same camera angle and using a short dissolve or one of the morphing transitions offered by Avid, Adobe, or MotionVFX (for FCPX). These vary in their success depending on how much a person has moved their body and head or changed expressions at the edit point. If their position is largely unchanged, the morph can look flawless. The more the change, the more awkward the resulting transition can be. The alternative is to cover the edit with a cutaway b-roll shot, but that’s often not desirable if this happens the first time we see the person. Sometimes you just have to live with it and leave these imperfections alone.

Telling the story through sight and sound is what an editor does. Working with on camera interviews is often the closest an editor comes to being the writer, as well. But remember that mixing and matching soundbites can present nearly infinite possibilities. Don’t get caught in the trap, as so many do, of never finishing. Bring it to a point where the story is well-told and then move on. If the entire production is approached with some of these thoughts in mind, the end result can indeed be memorable.

©2015 Oliver Peters

Camerama 2015


The design of a modern digital video camera comes down to the physics of the sensor and shutter, the software to control colorimetry and smart industrial design to optimize the ergonomics for the operator. Couple that with a powerful internal processor and recording mechanism and you are on your way. Although not exactly easy, these traits no longer require skills that are limited to the traditional camera manufacturers. As a result, innovative new cameras have been popping up from many unlikely sources.

The newest of these entrants is AJA, which delivered the biggest surprise of NAB 2014 in the form of their CION 4K/UltraHD/2K/HD digital camera. Capitalizing on a trend started by ARRI, the CION records directly to the edit-ready Apple ProRes format, using AJA Pak solid state media. The CION features a 4K APS-C sized CMOS sensor with a global shutter to eliminate rolling-shutter artifacts. AJA claims 12 stops of dynamic range and uses a PL mount for lenses designed for Super 35mm. The CION is also capable of outputting AJA camera raw at frame rates up to 120fps. It can send out 4K or UHD video from its four 3G-SDI outputs to the AJA Corvid Ultra for replay and center extraction during live events.

The darling of the film and high-end television world continues to be ARRI Digital with its line of ALEXA cameras. These now include the Classic, XT, XT Plus, XT M and XT Studio configurations. They vary based on features and sensor size. The Classic cameras have a maximum active sensor photosite size of 2880 x 2160, while the XT models go as high as 3414 x 2198. Another difference is that the XT models allow in-camera recording of ARRIRAW media. The ALEXA introduced ProRes recording and all current XT models permit Apple ProRes and Avid DNxHD recording.

The ALEXA has been joined by the newer, lighter AMIRA, which is targeted at documentary-style shooting with smaller crews. The AMIRA is tiered into three versions, with the Premium model offering 2K recording in all ProRes flavors at up to 200fps. ARRI has added 4K capabilities to both the ALEXA and AMIRA lines by utilizing the full sensor size in their Open Gate mode. In the AMIRA, this 3.4K image is internally scaled by a factor of 1.2 to record a UHD file at up to 60fps to its in-camera CFast 2.0 cards. The ALEXA uses a similar technique, but only records the 3.4K signal in-camera, with scaling to be done later in post.

To leapfrog the competition, ARRI also introduced its ALEXA 65, which is available through the ARRI Rental division. This camera is a scaled up version of the ALEXA XT and uses a sensor that is larger than a 5-perf 65mm film frame. That’s an Open Gate resolution of 6560 x 3102 photosites. The signal is captured as uncompressed ARRIRAW. Currently the media is recorded on ALEXA XR Capture drives at a maximum frame rate of 27fps.

Blackmagic Design had been the most unexpected camera developer a few years ago, but has since grown its DSLR-style camera line into four models: Studio, Production 4K, Cinema and Pocket Cinema. These vary in cosmetic style and size, which formats they are able to record and the lens mounts they use. The Pocket Cinema Camera is essentially a digital equivalent of a Super 16mm film camera, but in a point-and-shoot, small camera form factor. The Cinema and Production 4K cameras feature a larger, Super 35mm sensor. Each of these three incorporates ProRes and/or CinemaDNG raw recording. The Studio Camera is designed as a live production camera. It features a larger viewfinder, housing, accessories and connections designed to integrate this camera into a television studio or remote truck environment. There is an HD and a 4K version.

The biggest Blackmagic news was the introduction of the URSA. Compared to the smaller form factors of the other Blackmagic Design cameras, the URSA is literally a “bear” of a camera. It is a rugged 4K camera built around the idea of user-interchangeable parts. You can get EF, PL and broadcast lens mounts, but you can also operate it without a lens as a standalone recording device. It’s designed for UltraHD (3840 x 2160), but can record up to 4,000 pixels wide in raw. Recording formats include CinemaDNG raw (uncompressed and 3:1 compressed), as well as Apple ProRes, with speeds up to 80fps. There are two large displays on both sides of the camera, which can be used for monitoring and operating controls. It has a 10” fold-out viewfinder and a built-in liquid cooling system. As part of the modular design, users can replace mounts and even the sensor in the field.

Canon was the most successful company out of the gate when the industry adopted HD-video-capable DSLR cameras as serious production tools. Canon has expanded these offerings with its Cinema EOS line of small production cameras, including the C100, C100 Mark II, C300 and C500, which all share a similar form factor. Also included in this line-up is the EOS-1D C, a 4K camera that retains its DSLR body. The C300 and C500 cameras both use a Super 35mm sized sensor and come in EF or PL mount configurations. The C300 is limited to HD recording using the Canon XF codec. The C500 adds 2K and 4K (4096 cinema and 3840 UHD) recording capabilities, but this signal must be externally recorded using a device like the Convergent Design Odyssey 7Q+. HD signals are recorded internally as Canon XF, just like the C300. The Canon EOS C100 and C100 Mark II share the design of the C300, except that they record to AVCHD instead of Canon XF. In addition, the Mark II can also record MP4 files. Both C100 models record to SD cards, whereas the C300/C500 cameras use CF cards. The Mark II features improved ergonomics over the base C100 model.

The Canon EOS-1D C is included because it can record 4K video. Since it is also a still photography camera, the sensor is an 18MP full-frame sensor. When recording 4K video, it uses a Motion JPEG codec, but for HD it can also use the AVCHD codec. The big plus over the C500 is that the 1D C records 4K onboard to CF cards, so it is better suited to hand-held work. The DSLR cameras that started the craze for Canon continue to be popular, including the EOS 5D Mark III and the new EOS 7D Mark II, plus the consumer-oriented Rebel versions. All are outstanding still cameras. The 5D features a 22.3MP CMOS sensor and records HD video as H.264 MOV files to onboard CF cards. Thanks to the sensor size, the 5D is still popular for videographers who want extremely shallow depth-of-field shots from a handheld camera.

Digital Bolex has become a Kickstarter success story. These out-of-the-box thinkers coupled the magic of a venerable name from the film era with innovative design and marketing to produce the D16 Cinema Camera. Its form factor mimics older, smaller, handheld film camera designs, making it ideal for run-and-gun documentary production. It features a Super 16mm sized CCD sensor with a global shutter and claims 12 stops of dynamic range. The D16 records in 12-bit CinemaDNG raw to internal SSDs, but media is offloaded to CF cards or via USB3.0 for media interchange. The camera comes with a C-mount, but EF, MFT and PL lens mounts are available. Currently the resolutions include 2048 x 1152 (“S16mm mode”), 2048 x 1080 (“S16 EU”) and HD (“16mm mode”). The D16 records 23.98, 24 and 25fps frame rates, but variable rates up to 32fps in the S16mm mode are coming soon. To expand on the camera’s attractiveness, Digital Bolex also offers a line of accessories, including Kish/Bolex 16mm prime lens sets. These fixed aperture F4 lenses are C-mount for native use with the D16 camera. Digital Bolex also offers the D16 in an MFT mount configuration and in a monochrome version.

The sheer versatility and disposable quality of GoPro cameras has made the HERO line a staple of many productions. The company continues to advance this product with the HERO4 Black and Silver models as their latest. These are both 4K cameras and have similar features, but if you want full video frame rates in 4K, then the HERO4 Black is the correct model. It will record up to 30fps in 4K, 50fps in 2.7K and 120fps in 1080p. As a photo camera, it uses a 12MP sensor and is capable of 30 frames in one second in burst mode and time-lapse intervals from .5 to 60 seconds. The video signal is recorded as an H.264 file with a high-quality mode that’s up to 60 Mb/s. MicroSD card media is used. HERO cameras have been popular for extreme point-of-view shots and their waterproof housing is good down to 40 meters. The new HERO4 series offers more manual control, new night time and low-light settings, and improved audio recording.

Nikon actually beat Canon to market with HD-capable DSLRs, but lost the momentum when Canon capitalized on the popularity of the 5D. Nevertheless, Nikon has its share of supportive videographers, thanks in part to the quantity of Nikon lenses in general use. The Nikon range of high-quality still photo and video-enabled cameras falls under Nikon’s D-series product family. The Nikon D800/800E camera has been updated to the D810. This is the camera of most interest to professional videographers. It’s a 36.3MP still photo camera that can also record 1920 x 1080 video in 24/30p modes internally and 60p externally. It can also record up to 9,999 images in a time-lapse sequence. A big plus for many is its optical viewfinder. It records H.264/MPEG-4 media to onboard CF cards. Other Nikon video cameras include the D4S, D610, D7100, D5300 and D3300.

Panasonic used to own the commercial HD camera market with the original VariCam HD camera. They’ve now reimagined that brand in the new VariCam 35 and VariCam HS versions. The new VariCam uses a modular configuration, with each of these two cameras using the same docking electronics back. In fact, a customer can purchase one camera head and back and then only need to purchase the other head, thus owning both the 35 and the HS models for less than the total cost of two cameras. The VariCam 35 is a 4K camera with wide color gamut and wide dynamic range (14+ stops are claimed). It features a PL lens mount, records from 1 to 120fps and supports dual-recording. For example, you can simultaneously record a 4K log AVC-Intra master to the main recorder (expressP2 card) and 2K/HD Rec 709 AVC-Intra/AVC-Proxy/Apple ProRes to a second internal recorder (microP2 card) for offline editing. VariCam V-Raw camera raw media can be recorded to a separate Codex V-RAW recorder, which can be piggybacked onto the camera. The Panasonic VariCam HS is a 2/3” 3MOS broadcast/EFP camera capable of up to 240fps of continuous recording. It supports the same dual-recording options as the VariCam 35 using AVC-Intra and/or Apple ProRes codecs, but is limited to HD recordings.

With interest in DSLRs still in full swing, many users’ interest in Panasonic veers to the Lumix GH4. This camera records 4K cinema (4096) and 4K UHD (3840) sized images, as well as HD. It uses SD memory cards to record in MOV, MP4 or AVCHD formats. It features variable frame rates (up to 96fps), HDMI monitoring and a professional 4K audio/video interface unit. The latter is a dock that fits onto the bottom of the camera. It includes XLR audio and SDI video connections with embedded audio and timecode.

RED Digital Cinema started the push for 4K cameras and camera raw video recording with the original RED One. That camera is now only available in refurbished models, as RED has advanced the technology with the EPIC and SCARLET. Both are modular camera designs that are offered with either the Dragon or the Mysterium-X sensor. The Dragon is a 6K, 19MP sensor with 16.5+ stops of claimed dynamic range. The Mysterium-X is a 5K, 14MP sensor that claims 13.5 stops, but up to 18 stops using RED’s HDRx (high dynamic range) technology. The basic difference between the EPIC and the SCARLET, other than cost, is that the EPIC features more advanced internal processing and this computing power enables a wider range of features. For example, the EPIC can record up to 300fps at 2K, while the SCARLET tops out at 120fps at 1K. The EPIC is also sold in two configurations: EPIC-M, which is hand-assembled using machined parts, and the EPIC-X, which is a production-run camera. With the interest in 4K live production, RED has introduced its 4K Broadcast Module. Coupled with an EPIC camera, you could record a 6K file for archive, while simultaneously feeding a 4K and/or HD live signal for broadcast. RED is selling studio broadcast configurations complete with camera, modules and support accessories as broadcast-ready packages.

Sony has been quickly gaining ground in the 4K market. Its CineAlta line includes the F65, PMW-F55, PMW-F5, PMW-F3, NEX-FS700R and NEX-FS100. All are HD-capable and use Super 35mm sized image sensors, with the lower-end FS700R able to record 4K raw to an external recorder. At the highest end is the 20MP F65, which is designed for feature film production. The camera is capable of 8K raw recording, as well as 4K, 2K and HD variations. Recordings must be made on a separate SR-R4 SR MASTER field recorder. For most users buying from Sony, the F55 will be their high-end choice. It permits onboard recording in four formats: MPEG-2 HD, XAVC HD, SR File and XAVC 4K. With an external recorder, 4K and 2K raw recording is also available. High speeds up to 240fps (2K raw with the optional, external recorder) are possible. The F5 is the F55’s smaller sibling. It’s designed for onboard HD recording (MPEG-2 HD, XAVC HD, SR File). 4K and 2K recordings require an external recorder.

The Sony camera that has caught everyone’s attention is the PXW-FS7. It’s designed as a lightweight, documentary-style camera with a form factor and rig that’s reminiscent of an Aaton 16mm film camera. It uses a Super 35mm sized sensor and delivers 4K resolution using onboard XAVC recording to XQD memory cards. XDCAM MPEG-2 HD recording is available now, with ProRes promised in a future upgrade. Raw output to an outboard recorder will also be possible.

Sony has also not been left behind by the DSLR revolution. The A7s is a full-frame, mirrorless 12.2MP camera that’s optimized for 4K and low light. It can record up to 1080p/60 (or 720p/120) onboard (50Mbps XAVC S) or feed uncompressed HD and/or 4K (UHD) out via its HDMI port. It will record onboard audio and sports such pro features as Sony’s S-Log2 gamma profile.

With any overview, there’s plenty that we can’t cover. If you are in the market for a camera, remember many of these companies offer a slew of other cameras ranging from consumer to ENG/EFP offerings. I’ve only touched on the highlights. Plus there are others, like Grass Valley, Hitachi, Samsung and Ikegami that make great products in use around the world every day. Finally, with all the video-enabled smart phones and tablets, don’t be surprised if you are recording your next production with an iPhone or iPad!

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2015 Oliver Peters

More 4K


I’ve talked about 4K before (here, here and here), but I’ve recently done some more 4K jobs that have me thinking again. 4K means different things to different people and in terms of dimensions, there’s the issue of cinema 4K (4096 pixels wide) versus the UltraHD/QuadHD/4K 16:9 (whatever you want to call it) version of 4K (3840 pixels wide). That really doesn’t make a lot of difference, because these are close enough to be the same. There’s so much hype around it, though, that you really have to wonder if it’s “the Emperor’s new clothes”.

First of all, 4K used as a marketing term is not a resolution, it’s a frame dimension. As such, 4K is not four times the resolution of HD. That’s a measurement of area and not resolution. True resolution is usually measured in the vertical direction based on the ability to resolve fine detail (regardless of the number of pixels) and, therefore, 4K is only twice the resolution of HD at best. 4K is also not sharpness, which is a human perception affected by many things, such as lens quality, contrast, motion and grading. It’s worth watching Mark Schubin’s excellent webinar on the topic to get a clearer understanding of this. There’s also a very good discussion among top DoPs here about 4K, lighting, high dynamic range and more.
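The area-versus-resolution distinction is simple arithmetic, and worth spelling out. This is just a sketch of the numbers discussed above:

```python
# UHD versus HD: frame area quadruples, but resolution (conventionally
# measured along the vertical axis) only doubles.
hd_w, hd_h = 1920, 1080
uhd_w, uhd_h = 3840, 2160

area_ratio = (uhd_w * uhd_h) / (hd_w * hd_h)  # 4.0 -> the "4x" claim is area
res_ratio = uhd_h / hd_h                      # 2.0 -> at best 2x the resolution

print(area_ratio, res_ratio)  # -> 4.0 2.0
```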

A lot of arguments have been made that 4K cameras using a single CMOS sensor with a Bayer-style color filter pattern don’t even deliver the resolution they claim. The reason is that in many designs 50% of the pixels are green versus 25% each for red and blue. Green carries the luminance information, which determines detail, so you do not have a 1:1 pixel relationship between green and the stated frame resolution of the sensor. That’s in part why RED developed 5K and 6K sensors and why Sony uses an 8K sensor (F65) to deliver a 4K image.

The perceived image quality is also not all about total pixels. The pixels of the sensor, called photosites, are the light-receiving elements of the sensor. There’s a loose correlation between pixel size and light sensitivity. For any given sensor of a certain physical dimension, you can design it with a lot of small pixels or with fewer, but larger, pixels. This roughly correlates to a sensor that’s of high resolution, but a smaller dynamic range (many small pixels) or one with lower resolution, but a higher dynamic range (large, but fewer pixels). Although the equation isn’t nearly this simplistic, since a lot of color science and “secret sauce” goes into optimizing a sensor’s design, you can certainly see this play out in the marketing battles between the RED and ARRI camps. In the case of the ALEXA, ARRI adds some on-the-sensor filtering, which results in a softer image that gives it a characteristic filmic quality.

Why do you use 4K?

With 4K there are two possible avenues. The first is to shoot 4K for the purpose of reframing and repositioning within HD and 2K timelines. Reframing isn’t a new production idea. When everyone shot on film, some telecine devices, like the Rank Cintel Mark III, sported zoom boards that permitted an optical blow-up of the 35mm negative. You could zoom in for a close-up in transfer that didn’t cost you resolution. Many videographers shoot 1080 for a 720 finish, as this allows a nice margin for reframing in post. The second is to deliver a final 4K product. Obviously, if your intent is the latter, then you can’t count on the techniques of the former in post.

When you shoot 4K for HD post, then workflow is an issue. Do you shoot everything in 4K or just the items you know you’ll want to deal with? How will this cut with HD and 2K content? That’s where it gets dicey, because some NLEs have good 4K workflows and others don’t. But it’s here that I contend you are getting less than meets the eye, so to speak. I have run into plenty of editors who have dropped a 4K clip into an HD timeline and then blown it up, thinking that they are really cropping into the native 4K frame and maintaining resolution. Depending on the NLE and the settings used, often they are simply blowing up an HD shot. The NLE scaled the 4K to HD first and then expanded the downscaled HD image. It didn’t crop into the actual 4K native resolution. So you have to be careful. And guess what, if the blow-up isn’t that extreme, it may not look much different than the crop.

One thing to remember is that a 4K image that is scaled to fit into an HD timeline gains the benefits of oversampling. The result in HD will be very sharp and, in fact, will generally look better perceptually than the exact same image natively shot in an HD size. When you now crop into the native image, you are losing some of that oversampling effect. A 1:1 pixel relationship is the same effective image size as a 200% blow-up. Of course, it’s not the same result. When you compare the oversampled “wide shot” (4K scaled to HD) to the “close-up” (native 4K crop), the close-up will often look softer. You’ll see defects of the image, like chromatic aberration in the lens, missed critical focus and sensor noise. Instead, if you shoot a wide and then an actual close-up, that result will usually look better.
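The wide-shot/close-up equivalence is quick to verify. A minimal sketch of the arithmetic:

```python
# Cropping a 1:1 HD window out of a UHD frame shows the scene at twice the
# magnification of the full frame downscaled to HD -- the same effective
# framing as a 200% blow-up, though not the same quality.
uhd_width, hd_width = 3840, 1920

oversample = uhd_width / hd_width             # 2.0x oversampling in the "wide shot"
equivalent_blowup = (uhd_width / hd_width) * 100

print(f"A 1:1 crop matches a {equivalent_blowup:.0f}% blow-up")  # -> 200%
```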

On the other hand, if you blow up the 4K-to-HD or a native HD shot, you’ll typically see a result that looks pretty good. That’s because there’s often a lot more information there than monitors or the eye can detect. In my experience, you can commonly get away with a blow-up in the range of 120% of the original image size and in some cases, as much as 150%.

To scale or not to scale

Let me point out that I’m not saying a native 4K shot doesn’t look good. It does, but often the associated workflow hassles aren’t worth it. For example, let’s take a typical 1080p 50” Panasonic plasma that’s often used as a client monitor in edit suites. You or your client may be sitting 7 to 10 feet away from it, which is closer than most people sit in a living room with that size of a screen. If I show a client the native image (4K at 1:1 in an HD timeline) compared with a separate HD image at the same framing, it’s unlikely that they’ll see a difference. Another test is to take two exact images – one native HD and the other 4K. Scale up the HD and crop down the 4K to match. In theory, the 4K should look better and sharper. In fact, sitting back on the client sofa, most won’t see a difference. It’s only when they step to about 5 feet in front of the monitor that a difference is obvious and then only when looking at fine detail within the shot.

Not all scaling is equal. I’ve talked a lot about comparisons with scaled HD, but the results really depend on the scaling you use. For a quick shot, sure, use what your NLE has built in. For more critical operations, you might want to scale images separately. DaVinci Resolve has excellent built-in scaling and lets you pick from smooth, sharp and bilinear algorithms. If you want a plug-in, then the best I’ve found is the new Red Giant Instant 4K filter. It’s a variation of their Instant HD plug-in and works in After Effects and Premiere Pro. There are a lot of quality tweaks and naturally, the better the result, the longer the render will be. Nevertheless, it offers outstanding results and in one test that I ran, it actually provided a better look within portions of the image than the native 4K shot.

In that case, it was a C500 shot of a woman on a park bench with a name badge. I had three identical versions of the shot (not counting the raw files) – the converted 4K ProRes4444 file, a converted 1080 ProRes4444 “proxy” file for editing and the in-camera 1080 Canon XF file. I blew up the two 1080 shots using Instant 4K and cropped the 4K shot so all were of equal framing. When I compared the native 4K shot to the expanded 1080 ProRes4444 shot, the woman’s hair was sharper in the 1080 blow-up, but the letters on the name badge were better on the original. The 1080 Canon XF blow-up was softer in both areas. I think this shows that some of the controls in the plug-in may give you superior results to the original (crisper hair); but a blow-up suffers when you are using a worse codec, like Canon’s XF (50 Mbps 4:2:2). It’s fine for native HD, but the ProRes4444 codec has twice the chroma resolution and less compression, which makes a difference when scaling an image larger. Remember all of this pertains to viewing the image in HD.

4K deliverables

So what about working in native 4K for a 4K deliverable? That certainly has validity for high-resolution projects (films, concerts, large corporate presentations), but I’m less of a believer for television and web viewing. I’d rather have “better” pixels and not simply “more” pixels. Most of the content you watch at theaters using digital projection is 2K playback. Sometimes the master for that DCP was HD, 2K or 4K. If you are in a Sony 4K projector-equipped theater, most of the time it’s simply the projector upscaling the content to 4K as part of the projection. Even though you may see a Sony 4K logo at the head of the trailers, you aren’t watching 4K content – definitely not if it’s a stereo3D film. Yet, much of this looks pretty good, doesn’t it?

Everything I talked about, regarding blowing up HD by up to 120% or more, still applies to 4K. Need to blow up a shot a bit in a 4K timeline? Go ahead, it will look fine. I think ARRI has proven this as well, taking films shot with the ALEXA all the way up to IMAX. In fact, ARRI just announced that the AMIRA will get in-camera, on-the-fly upscaling of its image with the ability to record 4K (3840 x 2160 at up to 60fps) on the CFast 2.0 cards. They can do this, because the sensor starts with more pixels than HD or 2K. The AMIRA will expose all of the available photosites (about 3.4K sensor pixels) in what they call the “open gate” method. This image is lightly cropped to 3.2K and then scaled by a 1.2 factor, which results in UltraHD 4K recording on the same hardware. Pretty neat trick and judging by ARRI’s image quality, I’ll bet it will look very good. Doubling down on this technique, the ALEXA XT models will also be able to record ProRes media at this 3.2K size. In the case of the ALEXA, the designers have opted to leave the upscaling to post, rather than to do it in-camera.
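ARRI’s numbers check out if you run them: cropping the open-gate readout to 3.2K and scaling by 1.2 lands exactly on the UHD width. (The open-gate figure below is an approximation; the exact photosite count is ARRI’s spec.)

```python
# AMIRA "open gate" upscale, per the figures quoted above.
open_gate = 3424   # approximate open-gate width in photosites (assumption)
cropped = 3200     # the lightly cropped 3.2K image
scale = 1.2

print(cropped * scale)  # -> 3840.0, the UltraHD width
```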

To conclude, if you are working in 4K today, then by all means continue to do so. It’s a great medium with a lot of creative benefits. If you aren’t working in 4K, then don’t sweat it. You won’t be left behind for a while and there are plenty of techniques to get you to the same end goal as much of the 4K production that’s going on.


©2014 Oliver Peters

Filmmaking Pointers

If you want to be a good indie filmmaker, you have to understand some of the basic principles of telling interesting visual stories and driving the audience’s emotions. These six ideas transcend individual components of filmmaking, like cinematography or editing. Rather, they are concepts that every budding director should understand and weave into the entire structure of how a film is approached.

1. Get into the story quickly. Films are not books and don’t always need a lengthy backstory to establish characters and plot. Films are a journey and it’s best to get the characters on that road as soon as possible. Most scripts are structured as three-act plays, so with a typical 90-100 minute running time, you should be through act one at roughly one third of the way into the film. If not, you’ll lose the interest of the audience. If you are 20 minutes into the film and you are still establishing the history of the characters without having advanced the story, then look for places to start cutting.

Sometimes this isn’t easy to tell and an extended start may indeed work well, because it does advance the story. One example is There Will Be Blood. The first reel is a tour de force of editing, in which editor Dylan Tichenor builds a largely dialogue-free montage that quickly takes the audience through the first part of Daniel Plainview’s (Daniel Day-Lewis) history in order to bring the audience up to the film’s present day. It’s absolutely instrumental to the rest of the film.

2. Parallel story lines. A parallel story structure is a great device to show the audience what’s happening to different characters at different locations, but at more or less the same time. With most scripts, parallel actions are designed to eventually converge as related or often unrelated characters ultimately end up in the same place for a shared plot. An interesting take on this is Cloud Atlas, in which an ensemble cast plays different characters spread across six different eras and locations – past, present and future.

The editing style pulled off by Alexander Berner is quite a bit different than traditional parallel story editing. A set of characters might start a scene in one era. Halfway through the scene – through some type of abrupt cut, such as walking through a door – the characters, location and eras shift to somewhere else. However, the story and the editing are such that you clearly understand how the story continues for the first half of that scene, as well as how it led into the second half. This is all without explicitly shooting those parts of each scene. Scene A/era A informs your understanding of scene B/era B and vice versa.

3. Understand camera movement. When a camera zooms, moves or is used in a shaky, handheld manner, this elicits certain emotions from the audience. As a director or DP, you need to understand when each style is appropriate and when it can be overdone. Zooming into a close-up while an actor delivers a line should be done intentionally. It tells the audience, “Listen up. This is important.” If you shoot handheld footage, like most of the Bourne series, it drives a level of documentary-style, frenetic action that should be in keeping with the concept.

The TV series NYPD Blue is credited with introducing TV audiences to the “shaky-cam” style of camera work. Many pros thought it was overdone, with movement often being introduced in an unmotivated fashion. Yet, the original Law & Order series also made extensive use of handheld photography. As this was more in keeping with a subtle documentary style, few complained about its use on that show.

4. Color palettes and art direction. Many new filmmakers often feel that you can get any look you want through color grading. The reality is that it all starts with art direction. Grading should enhance what’s there, not manufacture something that isn’t. To get that “orange & teal” look, you need to have a set and wardrobe that has some greens and blues in it. To get a warm, earthy look, you need a set and wardrobe with browns and reds.

This even extends to black & white films. To get the right contrast and tonal values in black & white, you often have to use set/wardrobe color choices that are not ideal in a color world. That’s because different colors carry differing luminance and midrange values, which becomes very obvious, once you eliminate the color information from the picture. Make sure you take that into account if you plan to produce a black & white film.

5. Score versus sound design. Music should enhance and underscore a film, but it does not have to be wall-to-wall. Some films, like American Hustle and The Wolf of Wall Street, are driven by a score of popular tunes. Others are composed with an original score. However, often the “score” consists of sound design elements and simple musical drones designed to heighten tension and otherwise manipulate emotion. The absence of score in a scene can achieve the same effect. Sound effects elements with stark simplicity may have more impact on the audience than music. Learn when to use one or the other or both. Often less is more.

6. Don’t tell too much story. Not every film requires extensive exposition. As I said at the top, a film is not a book. Visual cues are as important as the spoken word and will often tell the audience a lot more in shorthand than pages and pages of script. The audience is interested in the journey your film’s characters are on and frequently needs very little backstory to get an understanding of the characters. Don’t shy away from shooting enough of that sort of detail, but also don’t be afraid to cut it out when it becomes superfluous.

©2014 Oliver Peters

The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will be reached and their clients will demand 4K masters. 4K acquisition has been with us for a while and has generally proven to be useful for its creative options, like reframing during post. This was possible long before the introduction of the RED One camera, if you were shooting on film. But acquiring in 4K and higher is quite a lot different from working a complete 4K post production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspects and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is only theoretically 2x better. That’s because resolution is measured based on the vertical dimension and is a factor of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of the focus, contrast, etc. When you blow up a 35mm film frame and analyze high-detail areas within the frame, you often find them blurrier than you’d expect.

This brings us to post. The push for 4K post comes from a number of sources, but many voices in the independent owner-operator camp have been the strongest. These include many RED camera owners, who successfully cut their own material straight from the native media of the camera. NLEs, like Adobe Premiere Pro CC and Apple Final Cut Pro X, make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately it’s an experience that doesn’t extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras work with highly compressed codecs and many newer computers have been optimized to deal with these codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready, native media or by quickly transcoding to ProRes. But that’s not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits into a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline without each unit along the way adding their own color interpretation. It also keeps the quality highest without further decompression/recompression cycles, as well as various debayering methods used.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here’s a quick example. I took a short RED clip that was a little over 3 minutes long. It was recorded as 4096 x 2304 at 23.976fps. This file was a bit over 7GB in its raw form. Then I converted this to these formats with the following results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus down-sampled ProRes HQ comparison shows. It’s worth noting that I used the lower DPX and TIFF color depth options, as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.
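The per-frame figures are easy to verify. A minimal sketch, assuming 10-bit DPX packs its three channels into a 32-bit word (4 bytes per pixel) and 8-bit TIFF uses 3 bytes per pixel:

```python
# Uncompressed per-frame sizes for a 4096 x 2304 image.
width, height = 4096, 2304
pixels = width * height

dpx_bytes = pixels * 4    # 10-bit DPX: three 10-bit channels per 32-bit word
tiff_bytes = pixels * 3   # 8-bit TIFF: 3 bytes per pixel

print(f"DPX frame:  {dpx_bytes / 1e6:.0f} MB")   # -> 38 MB
print(f"TIFF frame: {tiff_bytes / 1e6:.0f} MB")  # -> 28 MB
```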

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is still 4x the frame area, so if we use the ProRes HQ example above, 30x that 3 min. clip would give us the count for a typical feature. That figure comes out to 480GB.

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period, will likely generate about 5-10TB of media in the camera original raw form. If this same media were converted to ProRes444, never mind uncompressed, your storage requirements just increased to an additional 16-38TB. Mind you this is all as 24p media. As we start talking 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x more storage demands than 24p.
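These projections follow from storage scaling linearly with frame rate and running time. A rough sketch of that conversion:

```python
# Storage scales linearly with frame rate, so a 24p figure converts easily.
def scaled_storage_gb(storage_24p_gb, fps):
    """Scale a 24p storage estimate to another frame rate (rough, linear)."""
    return storage_24p_gb * fps / 24

# The ~480 GB ProRes HQ UHD feature mentioned above, mastered at 60fps:
print(scaled_storage_gb(480, 60))  # -> 1200.0 GB, i.e. 2.5x the 24p figure
```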

The other element is system performance. Compressed codecs work when the computer is optimized for these. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near ubiquitous playback support. ProRes HQ even at 4K will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image strings, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial system, since creative editorial generally requires multiple streams of simultaneous media.

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be natively played by an NLE. These must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse as you go. Typically transferring 1TB of data on set is not a fast process.
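The Odyssey record-time figure is roughly consistent with Canon 4K raw storing about one 10-bit sample per photosite. That packing is an assumption on my part, and real-world container overhead trims the result a little, but the sketch lands close to the quoted number:

```python
# Estimating record time for two 512 GB SSDs at 24fps Canon 4K raw.
width, height, bits = 4096, 2160, 10
frame_bytes = width * height * bits / 8       # ~11 MB per raw frame (assumed packing)

ssd_bytes = 2 * 512e9                         # two 512 GB SSDs
minutes = ssd_bytes / (frame_bytes * 24 * 60)

print(f"~{minutes:.0f} minutes")  # -> ~64, close to the quoted 62 minutes
```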

Naturally there are ways to make 4K post efficient and less painful than it otherwise would be. But it requires a commitment to hardware resources. It’s not conducive to easy desktop post running off of a laptop, the way DV and even HD have been. That’s why you still see Autodesk Smokes, Quantel Rio Pablos and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters

Final Cut “Studio 2014”


A few years ago I wrote some posts about Final Cut Pro as a platform and designing an FCP-centric facility. Those options have largely been replaced by an Adobe approach built around Creative Cloud. Not everyone has warmed up to Creative Cloud. Either they don’t like the software or they dislike the software rental model or they just don’t need much of the power offered by the various Adobe applications.

If you are looking for alternatives to a Creative Cloud-based production toolkit, then it’s easy to build your own combination with some very inexpensive solutions. Most of these are either Apple software or others that are sold through the Mac App Store. As with all App Store purchases, you buy the product once and get updates for free, so long as it is still sold as the same product. Individual users may install the apps onto as many Mac computers as they personally own and control, all for the one purchase price. With this in mind, it’s very easy for most editors to create a powerful bundle that’s equal to or better than the old Final Cut Studio bundle – at less than its full retail price back in the day.

The one caveat to all of this is how entrenched you may or may not be with Adobe products. If you need to open and alter complex Illustrator, Photoshop, After Effects or Premiere Pro project files, then you will absolutely need Adobe software to do it. In that case, maybe you can get by with an old version (CS6 or earlier) or maybe trial software will work. Lastly you could outsource to a colleague with Adobe software or simply pick up a Creative Cloud subscription on a month-by-month rental. On the other hand, if you don’t absolutely need to interact with Adobe project files, then these solutions may be all you need. I’m not trying to advocate for one over the other, but rather to add some ideas to think about.

Final Cut Pro X / Motion / Compressor

The last Final Cut Studio bundle included FCP 7, Motion, Compressor, Cinema Tools, DVD Studio Pro, Soundtrack Pro and Color. The current Apple video tools of Final Cut Pro X, Motion and Compressor cover all of the video bases, including editing, compositing, encoding, transcoding and disc burning. The latter may be a bone of contention for many – since Apple has largely walked away from the optical disc world. Nevertheless, simple one-off DVDs and Blu-ray discs can still be created straight from FCP X or Compressor. Of course, FCP X has been a mixed bag for editors, with many evangelists and haters on all sides. If you square off Premiere Pro against Final Cut Pro X, then it really boils down to tracks versus trackless. Both tools get the job done. Which one do you prefer?

Motion versus After Effects is a tougher call. If you are a power user of After Effects, then Motion may seem foreign and hard to use. If the focus is primarily on motion graphics, then you can certainly get the results you want in either. There is no direct “send to” from FCP X to Motion, but on the plus side, you can create effects and graphics templates using Motion that will appear and function within FCP X. Just like with After Effects, you can also buy stock Motion templates for graphics, show opens and other types of design themes and animations.

Logic Pro X

Logic Pro X is the DAW in our package. It becomes the replacement for Soundtrack Pro and the alternative to Adobe Audition or Avid Pro Tools. It’s a powerful music creation tool, but more importantly for editors, it’s a strong single file and multitrack audio production and post production application. You can get FCP X files to it via FCPXML or AAF (converted using X2Pro). There are a ton of plug-ins and mixing features that make Logic a solid DAW. I won’t dive deeply into this, but suffice it to say that if your main interest in using Logic is to produce a better mix, then you can learn the essentials quickly and get up and running in short order.

DaVinci Resolve

Every decent studio bundle needs a powerful color correction tool. Apple Color is gone, but Blackmagic Design’s DaVinci Resolve is a best-of-breed replacement. You can get the free Resolve Lite version through the App Store, as well as Blackmagic’s website. It does most everything you need, so for most editors who do some color correction there’s little reason to buy the paid version.

Resolve 11 (due out soon) adds improved editing. There is a solid synergy with FCP X, making it not only a good companion color corrector, but also a finishing editorial tool. OFX plug-ins are supported, which adds a choice of industry standard creative effects if you need more than FCP X or Motion offer.

Pixelmator / Aperture

This one’s tough. Of all the Adobe applications, Photoshop and Illustrator are hardest to replace. There are no perfect alternatives. On the other hand, most editors don’t need all that power. If direct feature compatibility isn’t a need, then you’ve got some choices. One of these is Pixelmator, a very lightweight image manipulation tool. It’s a little like Photoshop in the version 4-7 stages, with a mix of Illustrator tossed in. There are vector drawing and design tools and it’s optimized for Core Image, complete with a nice set of image filters. However, it does not include some of Photoshop CC’s power user features, like smart objects, smart filters, 3D, layer groups and video manipulation. But, if you just need to doctor some images, extract or modify logos or translate various image formats, Pixelmator might be the perfect fit. For more sophistication, another choice (not in the App Store) is Corel’s Painter, as well as Adobe Photoshop Elements (also available at the App Store).

Although Final Cut Studio never included a photo application, the Creative Cloud does include Lightroom. Since the beginning, Apple’s Aperture and Adobe’s Lightroom have been leapfrogging each other with features. Aperture hasn’t changed much in a few years and is likely the next pro app to get the “X” treatment from Apple’s engineers. Photographers have the same type of “Chevy vs. Ford” arguments about Aperture and Lightroom as editors do about NLEs. Nevertheless, editors deal a lot with supplied images and Aperture is a great tool to use for organization, clean up and image manipulation.


The list I’ve outlined creates a nice set of tools, but if you need to interchange with other pros using a variety of different software, then you’ll need to invest in some “glue”. There are a number of utilities designed to go to and from FCP X. Many are available through the App Store. Examples include Xto7, 7toX, EDL-X, X2Pro, Shot Notes X, Lumberjack and many others.

For a freewheeling discussion about this topic and other matters, check out my conversation with Chris Fenwick at FCPX Grille.

©2014 Oliver Peters