More 4K

I’ve talked about 4K before (here, here and here), but I’ve recently done some more 4K jobs that have me thinking again. 4K means different things to different people, and in terms of dimensions there’s the issue of cinema 4K (4096 pixels wide) versus the UltraHD/QuadHD/4K 16:9 (whatever you want to call it) version of 4K (3840 pixels wide). That doesn’t make much practical difference, because the two are close enough to be treated as the same. There’s so much hype around 4K, though, that you really have to wonder whether it’s “the Emperor’s new clothes”.

First of all, 4K used as a marketing term is not a resolution, it’s a frame dimension. As such, 4K is not four times the resolution of HD. That’s a measurement of area and not resolution. True resolution is usually measured in the vertical direction based on the ability to resolve fine detail (regardless of the number of pixels) and, therefore, 4K is only twice the resolution of HD at best. 4K is also not sharpness, which is a human perception affected by many things, such as lens quality, contrast, motion and grading. It’s worth watching Mark Schubin’s excellent webinar on the topic to get a clearer understanding of this. There’s also a very good discussion among top DoPs here about 4K, lighting, high dynamic range and more.

A lot of arguments have been made that 4K cameras built around a single CMOS sensor with a Bayer-style color filter array don’t even deliver the resolution they claim. The reason is that in many designs 50% of the photosites are green versus 25% each for red and blue. Green carries the luminance information, which determines detail, so you do not have a 1:1 relationship between green photosites and the stated frame resolution of the sensor. That’s in part why RED developed 5K and 6K sensors and why Sony uses an 8K sensor (F65) to deliver a 4K image.
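To picture that split, here’s a trivial Python sketch of my own that counts the photosites in one repeating RGGB Bayer tile (the exact tile layout varies by camera, so treat it purely as an illustration):

```python
# Count the fraction of photosites per color in a repeating Bayer tile.
# RGGB is the common arrangement; vendors vary the exact layout.
def bayer_fractions(tile):
    flat = [color for row in tile for color in row]
    return {color: flat.count(color) / len(flat) for color in set(flat)}

RGGB = [["R", "G"],
        ["G", "B"]]

print(bayer_fractions(RGGB))  # fractions: G = 0.5, R = 0.25, B = 0.25
```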

The perceived image quality is also not all about total pixel count. The pixels of the sensor, called photosites, are its light-receiving elements. There’s a loose correlation between photosite size and light sensitivity. For any given sensor of a certain physical dimension, you can design it with a lot of small photosites or with fewer, larger ones. This roughly translates into a sensor that’s high resolution but with a smaller dynamic range (many small photosites), or one that’s lower resolution but with a higher dynamic range (fewer, larger photosites). The trade-off isn’t nearly this simple, since a lot of color science and “secret sauce” goes into optimizing a sensor’s design, but you can certainly see it play out in the marketing battles between the RED and ARRI camps. In the case of the ALEXA, ARRI adds some on-sensor filtering, which results in a softer image that gives it a characteristic filmic quality.

Why do you use 4K?

With 4K there are two possible avenues. The first is to shoot 4K for the purpose of reframing and repositioning within HD and 2K timelines. Reframing isn’t a new production idea. When everyone shot on film, some telecine devices, like the Rank Cintel Mark III, sported zoom boards that permitted an optical blow-up of the 35mm negative. You could zoom in for a close-up in transfer that didn’t cost you resolution. Many videographers shoot 1080 for a 720 finish, as this allows a nice margin for reframing in post. The second is to deliver a final 4K product. Obviously, if your intent is the latter, then you can’t count on the techniques of the former in post.

When you shoot 4K for HD post, workflow is an issue. Do you shoot everything in 4K or just the items you know you’ll want to reframe? How will this cut with HD and 2K content? That’s where it gets dicey, because some NLEs have good 4K workflows and others don’t. And it’s here that I contend you are getting less than meets the eye, so to speak. I have run into plenty of editors who have dropped a 4K clip into an HD timeline and then blown it up, thinking that they are really cropping into the native 4K frame and maintaining resolution. Depending on the NLE and the settings used, often they are simply blowing up an HD shot: the NLE scaled the 4K clip to HD first and then expanded that downscaled HD image, rather than cropping into the actual 4K native resolution. So you have to be careful. And guess what, if the blow-up isn’t that extreme, it may not look much different than the crop.
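If you want to convince yourself of the difference, it’s easy to mock up both paths outside of any NLE. Here’s a rough sketch using Python and the Pillow library (my own example; the file name and frame sizes are just assumptions for illustration):

```python
# Compare the two scaling paths described above, assuming a UHD-sized
# source frame saved as "source_4k.png" (3840 x 2160) and Pillow installed.
from PIL import Image

src = Image.open("source_4k.png")

# Path 1: the accidental workflow -- downscale to HD first, then blow up 200%
# and crop the center HD window so the framing matches.
hd_first = src.resize((1920, 1080), Image.LANCZOS)
blow_up = hd_first.resize((3840, 2160), Image.LANCZOS).crop((960, 540, 2880, 1620))

# Path 2: the intended workflow -- crop an HD-sized window directly
# out of the native 4K frame (a true 1:1 pixel crop).
native_crop = src.crop((960, 540, 2880, 1620))

blow_up.save("hd_then_blowup.png")
native_crop.save("native_4k_crop.png")
```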

One thing to remember is that a 4K image scaled to fit an HD timeline gains the benefit of oversampling. The result in HD will be very sharp and will generally look better, perceptually, than the exact same image shot natively at HD size. When you crop into the native image instead, you lose some of that oversampling effect. A 1:1 pixel crop is the same effective image size as a 200% blow-up, though of course not the same result. When you compare the oversampled “wide shot” (4K scaled to HD) to the “close-up” (native 4K crop), the close-up will often look softer. You’ll see defects of the image, like chromatic aberration in the lens, missed critical focus and sensor noise. If you instead shoot a wide and then an actual close-up, that result will usually look better.

On the other hand, if you blow up the 4K-to-HD or a native HD shot, you’ll typically see a result that looks pretty good. That’s because there’s often a lot more information there than monitors or the eye can detect. In my experience, you can commonly get away with a blow-up in the range of 120% of the original image size and in some cases, as much as 150%.

To scale or not to scale

Let me point out that I’m not saying a native 4K shot doesn’t look good. It does, but often the associated workflow hassles aren’t worth it. For example, take a typical 1080p 50” Panasonic plasma that’s often used as a client monitor in edit suites. You or your client may be sitting 7 to 10 feet away from it, which is closer than most people sit in a living room with a screen that size. If I show a client the native image (4K at 1:1 in an HD timeline) compared with a separate HD image at the same framing, it’s unlikely that they’ll see a difference. Another test is to take two identical images – one native HD and the other 4K. Scale up the HD and crop down the 4K to match. In theory, the 4K should look better and sharper. In fact, sitting back on the client sofa, most won’t see a difference. It’s only when they step to about 5 feet in front of the monitor that a difference becomes obvious, and then only when looking at fine detail within the shot.

Not all scaling is equal. I’ve talked a lot about comparing scaled 4K against native HD, but the outcome really depends on the scaler you use. For a quick shot, sure, use whatever your NLE has built in. For more critical work, you might want to scale images separately. DaVinci Resolve has excellent built-in scaling and lets you pick from smooth, sharp and bilinear algorithms. If you want a plug-in, the best I’ve found is the new Red Giant Instant 4K filter. It’s a variation of their Instant HD plug-in and works in After Effects and Premiere Pro. There are a lot of quality tweaks and, naturally, the better the result, the longer the render. Nevertheless, it offers outstanding results, and in one test that I ran it actually provided a better look within portions of the image than the native 4K shot.

In that case, it was a C500 shot of a woman on a park bench wearing a name badge. I had three identical versions of the shot (not counting the raw files) – the converted 4K ProRes 4444 file, a converted 1080 ProRes 4444 “proxy” file for editing and the in-camera 1080 Canon XF file. I blew up the two 1080 shots using Instant 4K and cropped the 4K shot so all were of equal framing. When I compared the native 4K shot to the expanded 1080 ProRes 4444 shot, the woman’s hair was sharper in the 1080 blow-up, but the letters on the name badge were better in the original. The 1080 Canon XF blow-up was softer in both areas. I think this shows that some of the controls in the plug-in may give you results superior to the original (crisper hair), but a blow-up suffers when you are starting from a weaker codec, like Canon’s XF (50 Mbps 4:2:2). It’s fine for native HD, but the ProRes 4444 codec has twice the chroma resolution and less compression, which makes a difference when scaling an image larger. Remember, all of this pertains to viewing the image in HD.

4K deliverables

So what about working in native 4K for a 4K deliverable? That certainly has validity for high-resolution projects (films, concerts, large corporate presentations), but I’m less of a believer for television and web viewing. I’d rather have “better” pixels than simply “more” pixels. Most of the content you watch at theaters using digital projection is 2K playback, even if the master for that DCP was HD, 2K or 4K. If you are in a Sony 4K projector-equipped theater, most of the time it’s simply the projector upscaling the content to 4K as part of the projection. Even though you may see a Sony 4K logo at the head of the trailers, you aren’t watching 4K content – definitely not if it’s a stereo 3D film. Yet much of this looks pretty good, doesn’t it?

Everything I said about blowing up HD by 120% or more still applies in 4K. Need to blow up a shot a bit in a 4K timeline? Go ahead, it will look fine. I think ARRI has proven this as well, taking films shot with the ALEXA all the way up to IMAX. In fact, ARRI just announced that the AMIRA will get in-camera, on-the-fly upscaling, with the ability to record 4K (3840 x 2160 at up to 60fps) to its CFast 2.0 cards. They can do this because the sensor starts with more pixels than HD or 2K. The AMIRA will expose all of the available photosites (about 3.4K across) in what they call the “open gate” method. This image is lightly cropped to 3.2K and then scaled by a factor of 1.2, which results in UltraHD 4K recording on the same hardware. It’s a neat trick and, judging by ARRI’s image quality, I’ll bet it will look very good. Doubling down on this technique, the ALEXA XT models will also be able to record ProRes media at this 3.2K size. In the case of the ALEXA, the designers have opted to leave the upscaling to post rather than do it in-camera.
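The math behind ARRI’s trick is simple enough to check on the back of an envelope. Here it is as a quick Python sketch, using the approximate figures quoted above:

```python
# Rough arithmetic for the AMIRA 3.2K-to-UHD path described above.
# "About 3.4K" open gate photosites is the approximate figure from the article.
cropped_width = 3200        # open gate lightly cropped to 3.2K
scale_factor = 1.2

uhd_width = cropped_width * scale_factor
print(uhd_width)            # 3840.0 -- exactly the UHD frame width
```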

To conclude, if you are working in 4K today, then by all means continue to do so. It’s a great medium with a lot of creative benefits. If you aren’t working in 4K, don’t sweat it. You won’t be left behind for a while, and there are plenty of techniques to get you to the same end result as much of the 4K production that’s going on.

©2014 Oliver Peters

Filmmaking Pointers

If you want to be a good indie filmmaker, you have to understand some basic principles of telling interesting visual stories and driving the audience’s emotions. These six ideas transcend individual components of filmmaking, like cinematography or editing. Rather, they are concepts that every budding director should understand and weave into the entire structure of how a film is approached.

1. Get into the story quickly. Films are not books and don’t always need a lengthy backstory to establish characters and plot. Films are a journey and it’s best to get the characters on that road as soon as possible. Most scripts are structured as three-act plays, so with a typical 90-100 minute running time, you should be through act one at roughly one third of the way into the film. If not, you’ll lose the interest of the audience. If you are 20 minutes into the film and you are still establishing the history of the characters without having advanced the story, then look for places to start cutting.

Sometimes this isn’t easy to tell and an extended start may indeed work well, because it does advance the story. One example is There Will Be Blood. The first reel is a tour de force of editing, in which editor Dylan Tichenor builds a largely dialogue-free montage that quickly takes the audience through the first part of Daniel Plainview’s (Daniel Day-Lewis) history in order to bring the audience up to the film’s present day. It’s absolutely instrumental to the rest of the film.

2. Parallel story lines. A parallel story structure is a great device to show the audience what’s happening to different characters at different locations, but at more or less the same time. With most scripts, parallel actions are designed to eventually converge as related or often unrelated characters ultimately end up in the same place for a shared plot. An interesting take on this is Cloud Atlas, in which an ensemble cast plays different characters spread across six different eras and locations – past, present and future.

The editing style pulled off by Alexander Berner is quite a bit different than traditional parallel story editing. A set of characters might start a scene in one era. Halfway through the scene – through some type of abrupt cut, such as walking through a door – the characters, location and eras shift to somewhere else. However, the story and the editing are such that you clearly understand how the story continues for the first half of that scene, as well as how it led into the second half. This is all without explicitly shooting those parts of each scene. Scene A/era A informs your understanding of scene B/era B and vice versa.

3. Understand camera movement. When a camera zooms, moves or is used in a shaky, handheld manner, this elicits certain emotions from the audience. As a director or DP, you need to understand when each style is appropriate and when it can be overdone. Zooming into a close-up while an actor delivers a line should be done intentionally. It tells the audience, “Listen up. This is important.” If you shoot handheld footage, like most of the Bourne series, it drives a level of documentary-style, frenetic action that should be in keeping with the concept.

The TV series NYPD Blue is credited with introducing TV audiences to the “shaky-cam” style of camera work. Many pros thought it was overdone, with movement often being introduced in an unmotivated fashion. Yet, the original Law & Order series also made extensive use of handheld photography. As this was more in keeping with a subtle documentary style, few complained about its use on that show.

4. Color palettes and art direction. Many new filmmakers often feel that you can get any look you want through color grading. The reality is that it all starts with art direction. Grading should enhance what’s there, not manufacture something that isn’t. To get that “orange & teal” look, you need to have a set and wardrobe that has some greens and blues in it. To get a warm, earthy look, you need a set and wardrobe with browns and reds.

This even extends to black & white films. To get the right contrast and tonal values in black & white, you often have to make set and wardrobe color choices that would not be ideal in a color world. That’s because different colors carry different luminance and midrange values, which becomes very obvious once you eliminate the color information from the picture. Make sure you take that into account if you plan to produce a black & white film.
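If you want to see why, the standard Rec. 709 luma weights tell the story. Here’s a quick Python sketch of my own showing how pure red, green and blue wardrobe colors land at very different gray values once the color is gone:

```python
# Rec. 709 luma weights: green contributes far more to perceived brightness
# than red or blue, which is why wardrobe colors shift value in black & white.
def rec709_luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(rec709_luma(1.0, 0.0, 0.0))  # pure red   -> ~0.21 (fairly dark gray)
print(rec709_luma(0.0, 1.0, 0.0))  # pure green -> ~0.72 (light gray)
print(rec709_luma(0.0, 0.0, 1.0))  # pure blue  -> ~0.07 (nearly black)
```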

5. Score versus sound design. Music should enhance and underscore a film, but it does not have to be wall-to-wall. Some films, like American Hustle and The Wolf of Wall Street, are driven by a score of popular tunes. Others are composed with an original score. However, often the “score” consists of sound design elements and simple musical drones designed to heighten tension and otherwise manipulate emotion. The absence of score in a scene can achieve the same effect. Sound effects elements with stark simplicity may have more impact on the audience than music. Learn when to use one or the other or both. Often less is more.

6. Don’t tell too much story. Not every film requires extensive exposition. As I said at the top, a film is not a book. Visual cues are as important as the spoken word and will often tell the audience far more, in shorthand, than pages and pages of script. The audience is interested in the journey your film’s characters are on and frequently needs very little backstory to understand them. Don’t shy away from shooting enough of that sort of detail, but also don’t be afraid to cut it out when it becomes superfluous.

©2014 Oliver Peters

The Ouch of 4K Post

4K is the big buzz. Many in the post community are wondering when the tipping point will come and their clients will start demanding 4K masters. 4K acquisition has been with us for a while and has generally proven useful for its creative options, like reframing during post. This was possible long before the introduction of the RED One camera if you were shooting on film. But acquiring in 4K and higher is quite different from working in a complete 4K post production pipeline.

There are a lot of half-truths surrounding 4K, so let me tackle a couple. When we talk about 4K, the moniker applies only to frame dimensions in pixels, not resolution, as in sharpness. There are several 4K dimensions, depending on whether you mean cinema specs or television specs. The cinema projection spec is 4096 x 2160 (1.9:1 aspect ratio) and within that, various aspects and frame sizes can be placed. The television or consumer spec is 3840 x 2160 (16:9 or 1.78:1 aspect ratio), which is an even multiple of HD at 1920 x 1080. That’s what most consumer 4K TV sets use. It is referred to by various labels, such as Ultra HD, UHD, UHDTV, Quad HD, 4K HD and so on. If you are delivering a digital cinema master it will be 4096 pixels wide, but if you deliver a television 4K master, it will be 3840 pixels wide. Regardless of which format your deliverable will be, you will most likely want to acquire at 4096 x 2304 (16:9) or larger, because this gives you some reframing space for either format.
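A little arithmetic shows why that acquisition size works for both deliverables. This quick Python sketch (my own illustration) simply compares the frames:

```python
# How much reframing room a 4096 x 2304 (16:9) acquisition frame leaves
# for each deliverable discussed above. Simple arithmetic, nothing more.
ACQ = (4096, 2304)
DELIVERABLES = {"DCI 4K": (4096, 2160), "UHD": (3840, 2160)}

for name, (w, h) in DELIVERABLES.items():
    slack_x, slack_y = ACQ[0] - w, ACQ[1] - h
    print(f"{name}: {slack_x} px horizontal and {slack_y} px vertical slack")
    # DCI 4K: 0 x 144 px of slack (vertical repositioning only)
    # UHD:    256 x 144 px of slack
```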

This brings us to resolution. Although the area of the 4K frame is 4x that of a 1080p HD frame, the actual resolution is, at best, only 2x better. That’s because resolution is measured along the vertical dimension and is a function of the ability to resolve small detail in the image (typically based on thin lines of a resolution chart). True resolution is affected by many factors, including lens quality, depth of field, accuracy of focus, contrast, etc. When you blow up a 35mm film frame and analyze high-detail areas within it, you often find them blurrier than you’d expect.

This brings us to post. The push for 4K post comes from a number of sources, but many voices in the independent owner-operator camp have been the strongest. These include many RED camera owners, who successfully cut their own material straight from the camera’s native media. NLEs like Adobe Premiere Pro CC and Apple Final Cut Pro X make this a fairly painless experience for small, independent projects, like short films and commercials. Unfortunately, it’s an experience that doesn’t extrapolate well to the broader post community, which works on a variety of projects and must interchange media with numerous other vendors.

The reason 4K post seems easy and viable to many is that the current crop of 4K cameras work with highly compressed codecs, and many newer computers have been optimized to deal with those codecs. Therefore, if you shoot with a RED (Redcode), Canon 1DC (Motion-JPEG), AJA Cion (ProRes), BMD URSA (ProRes) or Sony F55 (XAVC), you are going to get a tolerable post experience using post-ready native media, or by quickly transcoding to ProRes. But that’s not how most larger productions work. A typical motion picture or television show will take the camera footage and process it into something that fits a known pipeline. This usually means uncompressed DPX image sequences, plus proxy movies for the editors. This allows a base level of color management that can be controlled through the VFX pipeline, without each unit along the way adding its own color interpretation. It also keeps quality at its highest by avoiding further decompression/recompression cycles, as well as variations among debayering methods.

Uncompressed or even mildly compressed codecs mean a huge storage commitment for an ongoing facility. Here’s a quick example. I took a short RED clip that was a little over 3 minutes long. It was recorded as 4096 x 2304 at 23.976fps. This file was a bit over 7GB in its raw form. I then converted it to the following formats, with these results:

ProRes 4444 – 27GB

ProRes HQ (also scaled to UHD 3840 x 2160) – 16GB

Uncompressed 10-Bit – 116GB

DPX images (10-bits per channel) – 173GB

TIFF images (8-bits per channel) – 130GB

As you can see, storage requirements increase dramatically. This can be mitigated by tossing out some data, as the ProRes 4444 versus down-sampled ProRes HQ comparison shows. It’s worth noting that I used the lower DPX and TIFF color depth options as well. At these settings, a single 4K DPX frame is 38MB and a single 4K TIFF frame is 28MB.

For comparison, a complete 90-100 minute feature film mastered at 1920 x 1080 (23.976fps) as ProRes HQ will consume about 110-120GB of storage. UHD is still 4x the frame area, so using the ProRes HQ example above, 30x that 3-minute clip gives us the total for a typical feature. That figure comes out to 480GB.
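Those per-frame numbers are easy to sanity-check with a little arithmetic. Here’s a quick Python sketch of my own that reproduces the rough DPX and TIFF frame sizes quoted above (assuming 10-bit DPX packs each pixel into a 32-bit word, which is a common packing) and the feature-length extrapolation:

```python
# Sanity-check of the storage figures above. Assumes 4096 x 2304 frames,
# 10-bit DPX packed as one 32-bit word per pixel and 8-bit TIFF at
# 3 bytes per pixel; real files add headers and padding.
W, H = 4096, 2304

dpx_frame = W * H * 4                      # bytes per 10-bit DPX frame
tiff_frame = W * H * 3                     # bytes per 8-bit TIFF frame
print(dpx_frame / 1e6, tiff_frame / 1e6)   # ~37.7 MB and ~28.3 MB per frame

# Feature-length extrapolation from the 3-minute ProRes HQ UHD clip (16 GB):
clip_minutes, clip_gb = 3, 16
feature_minutes = 90
print(clip_gb * feature_minutes / clip_minutes)   # 480 GB
```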

This clearly has storage ramifications. A typical indie feature shot with two RED cameras over a one-month period will likely generate about 5-10TB of media in camera-original raw form. If this same media were converted to ProRes 4444 – never mind uncompressed – your storage requirements just increased by an additional 16-38TB. Mind you, this is all 24p media. As we start talking about 4K in television-centric applications around the world, this also means 4K at 25, 30, 50 and 60fps. 60fps means 2.5x the storage demands of 24p.

The other element is system performance. Compressed codecs work when the computer is optimized for them. RED has worked hard to make Redcode easy to work with on modern computers. Apple ProRes enjoys near-ubiquitous playback support. ProRes HQ, even at 4K, will play reasonably well from a two-drive RAID-0 stripe on my Mac Pro. Redcode plays if I lower the debayer quality. Once you start getting into uncompressed files and DPX or TIFF image sequences, it takes a fast drive array and a fast computer to get anything approaching consistent real-time playback. Therefore, the only viable workflow is an offline-online editorial approach, since creative editorial generally requires multiple streams of simultaneous media.

This workflow gets even worse with other cameras. One example is the Canon C500, which records 4K camera raw files to an external recorder, such as the Convergent Design Odyssey 7Q. These are proprietary Canon camera raw files, which cannot be natively played by an NLE. These must first be turned into something else using a Canon utility. Since the Odyssey records to internal SSDs, media piles up pretty quickly. With two 512GB SSDs, you get 62 minutes of record time at 24fps if you record Canon 4K raw. In the real world of production, this becomes tough, because it means you either have to rent or buy numerous SSDs for your shoot or copy and reuse as you go. Typically transferring 1TB of data on set is not a fast process.

Naturally there are ways to make 4K post efficient and not as painful as it might otherwise be. But it requires a commitment to hardware resources. It’s not conducive to easy desktop post running off of a laptop, the way DV and even HD have been. That’s why you still see Autodesk Smoke, Quantel Pablo Rio and other high-end systems dominate at the leading facilities. Think, plan and buy before you jump in.

©2014 Oliver Peters

Final Cut “Studio 2014”

A few years ago I wrote some posts about Final Cut Pro as a platform and about designing an FCP-centric facility. Those options have largely been replaced by an Adobe approach built around Creative Cloud. Not everyone has warmed up to Creative Cloud, though. Some don’t like the software, some dislike the rental model, and others just don’t need much of the power offered by the various Adobe applications.

If you are looking for alternatives to a Creative Cloud-based production toolkit, it’s easy to build your own combination from some very inexpensive solutions. Most of these are either Apple software or sold through the Mac App Store. As with all App Store purchases, you buy the product once and get updates for free, as long as it continues to be sold as the same product. Individual users may install the apps onto as many Mac computers as they personally own and control, all for the one purchase price. With this in mind, it’s very easy for most editors to create a powerful bundle that’s equal to or better than the old Final Cut Studio bundle – at less than its full retail price back in the day.

The one caveat to all of this is how entrenched you may or may not be with Adobe products. If you need to open and alter complex Illustrator, Photoshop, After Effects or Premiere Pro project files, then you will absolutely need Adobe software to do it. In that case, maybe you can get by with an old version (CS6 or earlier) or maybe trial software will work. Lastly you could outsource to a colleague with Adobe software or simply pick up a Creative Cloud subscription on a month-by-month rental. On the other hand, if you don’t absolutely need to interact with Adobe project files, then these solutions may be all you need. I’m not trying to advocate for one over the other, but rather to add some ideas to think about.

Final Cut Pro X / Motion / Compressor

The last Final Cut Studio bundle included FCP 7, Motion, Compressor, Cinema Tools, DVD Studio Pro, Soundtrack Pro and Color. The current Apple video tools of Final Cut Pro X, Motion and Compressor cover all of the video bases, including editing, compositing, encoding, transcoding and disc burning. The latter may be a bone of contention for many, since Apple has largely walked away from the optical disc world. Nevertheless, simple one-off DVDs and Blu-ray discs can still be created straight from FCP X or Compressor. Of course, FCP X has been a mixed bag for editors, with plenty of evangelists and haters on both sides. If you square off Premiere Pro against Final Cut Pro X, it really boils down to tracks versus trackless. Both tools get the job done. Which one do you prefer?

Motion versus After Effects is a tougher call. If you are a power user of After Effects, then Motion may seem foreign and hard to use. If the focus is primarily on motion graphics, then you can certainly get the results you want in either. There is no direct “send to” from FCP X to Motion, but on the plus side, you can create effects and graphics templates using Motion that will appear and function within FCP X. Just like with After Effects, you can also buy stock Motion templates for graphics, show opens and other types of design themes and animations.

Logic Pro X

Logic Pro X is the DAW in our package. It becomes the replacement for Soundtrack Pro and the alternative to Adobe Audition or Avid Pro Tools. It’s a powerful music creation tool, but more importantly for editors, it’s a strong single-file and multitrack audio production and post production application. You can get FCP X files to it via FCPXML or AAF (converted using X2Pro). There are a ton of plug-ins and mixing features that make Logic a solid DAW. I won’t dive deeply into this, but suffice it to say that if your main interest in using Logic is to produce a better mix, you can learn the essentials quickly and get up and running in short order.

DaVinci Resolve

Every decent studio bundle needs a powerful color correction tool. Apple Color is gone, but Blackmagic Design’s DaVinci Resolve is a best-of-breed replacement. You can get the free Resolve Lite version through the App Store, as well as from Blackmagic’s website. It does most of what you need, so for editors who do some color correction there’s little reason to buy the paid version.

Resolve 11 (due out soon) adds improved editing. There is a solid synergy with FCP X, making it not only a good companion color corrector, but also a finishing editorial tool. OFX plug-ins are supported, which adds a choice of industry standard creative effects if you need more than FCP X or Motion offer.

Pixelmator / Aperture

This one’s tough. Of all the Adobe applications, Photoshop and Illustrator are the hardest to replace. There are no perfect alternatives. On the other hand, most editors don’t need all that power. If direct file compatibility isn’t a requirement, then you’ve got some choices. One of these is Pixelmator, a very lightweight image manipulation tool. It’s a little like Photoshop in its version 4-7 days, with a bit of Illustrator tossed in. There are vector drawing and design tools and it’s optimized for Core Image, complete with a nice set of image filters. However, it does not include some of Photoshop CC’s power-user features, like smart objects, smart filters, 3D, layer groups and video manipulation. But if you just need to doctor some images, extract or modify logos or translate various image formats, Pixelmator might be the perfect fit. For more sophistication, another choice (not in the App Store) is Corel’s Painter, as well as Adobe Photoshop Elements (which is available in the App Store).

Although Final Cut Studio never included a photo application, Creative Cloud does include Lightroom. Since the beginning, Apple’s Aperture and Adobe’s Lightroom have been leapfrogging each other with features. Aperture hasn’t changed much in a few years and is likely the next pro app to get the “X” treatment from Apple’s engineers. Photographers have the same type of “Chevy vs. Ford” arguments about Aperture and Lightroom as editors do about NLEs. Nevertheless, editors deal a lot with supplied images, and Aperture is a great tool for organization, clean-up and image manipulation.

Other

The list I’ve outlined creates a nice set of tools, but if you need to interchange with other pros using a variety of different software, then you’ll need to invest in some “glue”. There are a number of utilities designed to go to and from FCP X. Many are available through the App Store. Examples include Xto7, 7toX, EDL-X, X2Pro, Shot Notes X, Lumberjack and many others.

For a freewheeling discussion about this topic and other matters, check out my conversation with Chris Fenwick at FCPX Grille.

©2014 Oliver Peters

Why film editors love Avid Media Composer

The editing of feature films is a small niche of the overall market for editing software, yet companies continue to highlight features edited with their software as a form of aspirational marketing to attract new users. Avid Technology has had plenty of competition since the start of the company, but the majority of mainstream feature films are still edited using Avid Media Composer software. Lightworks and Final Cut Pro “legacy” have their champions (soon to be joined by FCP X and Premiere Pro CC), but Media Composer has held the lead – at least in North America – as the preferred software for feature film editors.

Detractors of Avid like to characterize these film editors as Luddites who are resistant to change. They like to suggest that the interface is stodgy and rigid and just not modern enough. I would suggest that change for change’s sake is not always a good thing. Originally, Final Cut Pro got a foothold because it did well with file-based workflows and was very cheap compared to turnkey workstations running Media Composer or Film Composer. Those days are long gone, so trying to make the argument on cost alone doesn’t go very far.

Editing speed is gained through familiarity and muscle memory. When you hire a top-notch feature editor, you aren’t hiring them for their software prowess. Instead, you are hiring them for their mind, ideas and creativity. As such, there is no benefit to one of these editors in changing to another piece of software, just because it’s the cool kid on the block. Most know how they need to manipulate the software tools so well, that thinking about what to do in the interface just disappears.

Change is attractive to new users, with no preconceived preferences. FCP X acolytes like to say how much easier it is to teach new users FCP X than a track-based system, like FCP 7, Premiere Pro or Media Composer. As someone who’s taught film student editing workshops, my opinion is that it simply isn’t true. It’s all in what you teach, how you teach it and what you expect them to accomplish. In fact, I’ve had many who are eager to learn Media Composer, precisely because they know that it continues to be the “gold standard” for feature film editing software.

There are some concrete reasons why film editors prefer Media Composer. For many, it’s because Avid was their first NLE and it felt logical to them. For others, it’s because Avid has historically incorporated a lot of user input into the product. Here are a dozen factors that I believe keep the equation in favor of Avid Media Composer.

1. Film metadata - At the start of the NLE era, Avid was an offline editing system, designed to do the creative cut electronically. The actual final cut for release was done by physically conforming (cutting) camera negative to match the rough cut. Avid built in tools to cut at 24fps and to track the metadata back to film for frame-accurate lists that went to the lab and the negative cutter. Although negative cutting is all but dead today, this core metadata tracking benefits modern versions of Media Composer and is still applicable to file-based workflows.

2. Trimming - Avid editors rave about the trim mode in Media Composer. It continues to be the best there is and has been augmented by Smart Tools for FCP-style contextual timeline editing. Many editors spend a lot of time trimming and nothing matches Media Composer.

3. Logical layout - When Avid started out, they sought the direct input from many working editors and this helped the interface evolve into something totally logical. For example, the keyboard position of JKL (transport controls) or mark/clear/go-to in/out is based on hand positions when placed on the keyboard and not an arbitrary choice by a software designer. If you look at the default keyboard map in Media Composer, there are fewer layers than the other apps. I would argue that Media Composer’s inherent design makes more layers unnecessary. In fact, more layers become more confusing.

4. Script integration – Early on, Avid’s designers looked at how an actual written script might be used within the software. This is completely different than simply attaching copied-and-pasted text to a clip. With Media Composer, you can set up the bin with the actual script pages and link clips to the text of the dialogue. This is included with the base software as a manual process, but if you want to automate the linking, then the optional ScriptSync add-on will lighten the load. A second dialogue-driven option, PhraseFind, is great for documentary filmmakers. Some editors never use these features, but those that do wouldn’t want to work any other way.

5. Built-in effect tools - The editorial team on most features gets involved in creating temporary visual effects. These are placeholders and style ideas meant to help the director and others visualize the effects. Sometimes these are editorial tricks, like an invisible split screen to combine different takes. The actual final effect is done by the visual effects compositors. Avid’s internal tools, however, allow a talented film editor or assistant editor to temp in an effect at a very high quality level. While Media Composer is certainly not a finishing tool equal to Avid DS (now EOL’ed) or Autodesk Smoke, its internal tools surpass all other desktop offline editors. FCP X requires third-party plug-ins or Motion 5, and Premiere Pro CC requires After Effects. With Media Composer, you have built-in rotosplining, tracking, one of the better keyers, stabilization and more. All without leaving the primary editing interface.

6. Surround mixing – Often film editors will build their rough cut with LCR (left-center-right) or full 5.1 surround panning. This helps to give a better idea of the theatrical mix and preps a sequence for early screenings with a preview or focus group audience. Other systems let you work in surround, too, but none as easily as with Media Composer, assuming you have the right i/o hardware.

7. Project sharing – You simply cannot share the EXACT SAME project file among simultaneous, collaborative users with any other editing application in the way you can with Media Composer and Avid’s Unity or ISIS shared storage networks. Not every user needs that, and there are certainly functional alternatives for FCP and Premiere Pro, as well as Media Composer. For film editors, the beauty of the Avid approach is that everyone on the team can be looking at the exact same project. When changes are made to a sequence for a scene and the associated bin is saved, that updated info ripples to everyone else’s view. Large films may have as many as 15 to 20 connected users, once you tally editors, assistants and visual effects editors. This function is hard to duplicate with any alternative software.

8. Cross-platform and easy authorization – Media Composer runs under both Mac OS X and Windows on a wide range of machines. This makes it easy for editors on location to shift between a desktop workstation and a laptop, which may be of differing OS platforms. In the past, software licensing was via a USB license key (dongle), but newer versions use software authorization to activate the application. The software may be installed on any number of machines, with one active and authorized at any given time. De-activation and re-activation only takes a few seconds if you are connected to the Internet.

9. Portability of projects and media – Thanks to Avid’s solid media management with internal media databases, it’s easy to move drives between machines with no linking issues. Keep a common and updated project file on two machines and you can easily move a media drive back and forth between them. The software will instantly find all the correct media when Media Composer is launched. In addition, Avid has held one of the best track records for project interchange among older and newer versions.

10. Interoperability with lists – Feature film workflows are all about “playing well with others”. This means industry-standard list formats, like EDLs, AAFs and OMFs. I wish Media Composer would also natively read and write XMLs, but that’s a moving target and generally not as widely accepted in the facilities that do studio-level work. The other standards are all there and built into the tools. So sending lists to a colorist or audio editor/mixer requires no special third-party software.

11. Flexible media architecture – Avid has moved forward from the days when it only handled proprietary Avid media formats. Thanks to AMA, many native camera file formats and QuickTime codecs are supported. Through a licensing deal with Apple, even ProRes is natively supported, including writing ProRes MXF files on Apple workstations. This gives Media Composer wider support for professional codecs than nearly every other editing application. On top of that, you still have Avid’s own DNxHD, one of the best compression schemes currently in professional film and video use.

12. Robust – In most cases, Media Composer is a rock-solid application, with minimal hiccups and crashes. Avid editors have become very used to reliability and will definitely pipe up when that doesn’t happen. Generally, Avid editors do not experience the sorts of RAM leaks that seem to plague other editing software.

For the sake of full disclosure, I am a member of one of the advisory councils that are part of the Avid Customer Association. Obviously, you might feel that this taints what I’ve written above. It does not.

I’ve edited with Avid software since the early 90s, but I’ve edited for years with other applications, too. Most of the last decade leading up to Apple’s launch of Final Cut Pro X was spent on FCP “legacy”. The last couple of years have been spent trying to work the kinks out of FCP X. I’ve cut feature films on Media Composer, FCP 4-7, FCP X and even a Sony BVE-9100. I take a critical view towards all of them and go with what is best for the project.

Even though I don’t use many of the Avid-specific features mentioned above, like ScriptSync, I do see the strengths and why other film editors wouldn’t want to use anything else. My main goal here was to answer the question I hear so often, which is, “Why do they still use Avid?” I hope I’ve been able to offer a few answers.

For some more thoughts, take a look at these videos about DigitalFilm Tree’s transition from FCP to Media Composer and Alan Bell’s approach to using Avid products for cutting films like “The Hunger Games: Catching Fire”.

©2014 Oliver Peters

Color Concepts and Terminology

It’s time to dive into some of the terms and concepts that brought you modern color correction software. First of all – color grading versus color correction. Many use these terms to identify different processes, such as technical shot matching versus giving a shot a subjective “look”. I do this too, but the truth of the matter is that the terms describe the same thing and are interchangeable. Grading tends to be the more European way of naming the process, but it is the same as color correction.

All of our concepts stem from the film lab process known as color timing. Originally this described the person who knew how long to leave the negative in the chemical bath to achieve the desired result (the “timer”). Once the industry figured out how to manipulate color in the negative-to-positive printing process, the “color timer” was the person who controlled the color analyzer and who dialed in degrees of density and red/blue/green coloration. The Dale Grahn Color iPad application will give you a good understanding of this process. Alexis Van Hurkman also covers it in his “Color Correction Handbook”.

Electronic video color correction started with early color cameras and telecine (film-to-tape transfer or “film chain”) devices. These were based on red/blue/green color systems, where the video engineer (or “video shader”) would balance out the three components, along with exposure and black level (shadows). He or she would adjust the signal of the pick-up systems, including tubes, CCDs and photoelectric cells.

RCA added circuitry to their cameras called a chroma proc, which divided the color spectrum according to the six divisions of the vectorscope – red, blue, green, cyan, magenta and yellow. The chroma proc let the operator shift the saturation and/or hue of each one of these six slices. For instance, you could desaturate the reds within the image. Early color correction modules for film-to-tape transfer systems adopted this same circuitry. The “primary” controls manipulated the actual pick-up devices, while the “secondary” controls were downstream in the signal chain and let you further fine-tune the color according to this simple, six-vector division.

Early color correction systems were built to transfer color film to air or to videotape. They were part machine control and part color corrector. Modern color correction for post production came to be because of three key advances: memory storage, scene detection and signal decoding.

Memory storage. Once you could store and recall color correction settings, it was easy to go back and forth between camera angles or shots and apply a different setting to each. Or you could create several looks and preview them for the client. The addition of this technology was the basis for a seminal patent lawsuit, known as the Rainbow patent suit, as the battle raged over who first developed the technology.

Scene detection. Film transfer systems had to play in real-time to be recorded to videotape, which meant that shot changes had to trigger the change from one color correction setting to the next. Early systems did this via the operator manually marking an edit point (called “notching”), via an EDL (edit decision list) or through automatic scene detection circuitry. This was important for the real-time transfer of edited content, including film prints, cut negative and eventually videotape programs.

Signal decoding. The ability of color correction systems to decode a composite or component analog (and later digital) signal through added hardware shifted color correction from camera shading and film transfer to being another general tool at a post facility. The addition of a signal decoder board in a DaVinci unit split the input signal into RGB parts and enabled the colorist to enhance the correction of an already-edited master using the “secondary” signal electronics of the system. This enabled “tape-to-tape” color correction of edited masters. Thanks to scene detection or an EDL, color correction could be shot-to-shot and frame-accurate when played back in real-time for re-encoded, corrected output to a second videotape master.

Eventually the tools used in hardware-based, tape-to-tape color correction systems became standard. Quantel and Avid led the way by being first to incorporate these features into their nonlinear editing software.

_________________________________

Color correction software tends to break up its controls into primary and secondary functions. As you can see from the earlier explanations, there’s really no reason to do that, since we are no longer controlling the pick-up devices within a camera or telecine. Nevertheless, it’s terminology we seem to be comfortable with. Often secondary controls enable masking and keys to isolate color – not because it has to be that way, but because DaVinci added these features into their secondary control set. In modern correction tools, any function could happen on any layer, node, room, etc.

The core language for color manipulation still boils down to the simple controls exemplified by the Dale Grahn app. A signal can be made brighter or darker, more or less “dense” (contrast), and have its colorimetry shifted by adding or subtracting red, blue or green for the overall image or in the highlight, midrange or shadow portions of the image. This basic approach can be controlled through sliders, knobs, color wheels and other user interfaces. Different software applications and plug-ins get to the same point through different means, so I’ll cover a few approaches here. Bear in mind that since some of these actually represent somewhat different color science and math, the examples I present might not yield exactly the same results. Many controls are equivalent in their effect, though not necessarily identical in how they affect the image.

A common misconception is that shadow/mid/highlight controls on a 3-way color corrector will evenly divide the waveform into three discrete ranges. In fact, these are very large, overlapping ranges that interact with each other. If you shift a shadow luminance control up, it doesn’t typically just expand or compress the lower third of the waveform. Although some correctors act this way, most tend to shift the whole waveform up or down. If you change the color balance of the midrange, this color change will also affect shadows and highlights. The following is a quick explanation of some of the popular color control models.

Contrast/pivot/temperature/tint

Contrast and temperature controls have recently become more popular and are considered a more photographic approach to correction. When you adjust contrast, the image levels expand or stretch as viewed on a waveform. Highlights get brighter and shadows deepen. This contrast expansion centers on a pivot point, which by default is at the center of the signal. If you change the pivot slider you are shifting the center point of this contrast expansion. In one direction, this means the contrast control will stretch the range below the pivot point more than above it. Shift the pivot slider in the other direction for the opposite effect.
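As a simple numerical model of what that control is doing, here’s a short Python sketch. It’s a generic formulation I’ve chosen for illustration; individual grading applications implement contrast and pivot differently:

```python
# Generic contrast-around-a-pivot model (illustrative only; real grading
# tools differ in detail). Values are normalized 0.0-1.0.
def apply_contrast(value, contrast=1.0, pivot=0.5):
    """Stretch the signal away from (or toward) the pivot point."""
    return (value - pivot) * contrast + pivot

print(apply_contrast(0.2, contrast=1.5))             # 0.05 -- shadows pushed down
print(apply_contrast(0.8, contrast=1.5))             # 0.95 -- highlights pushed up
print(apply_contrast(0.2, contrast=1.5, pivot=0.3))  # 0.15 -- pivot moved lower,
                                                     # so shadows stretch less
```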

Color temperature and tint (also called magenta) controls balance the red/blue/green signal channels in relationship to each other. If you slide a color temperature control while watching an RGB parade display on a waveform, you’ll note that adjustments shift the red and blue channels up or down in the opposite direction to each other, while leaving green unaffected. When you adjust the tint (or magenta) slider, you are adjusting the green channel. As you raise or lower the green, both the red and blue channels move together in a compensating direction.
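And here’s a companion sketch for temperature and tint that follows the channel behavior just described. Again, this is only an illustrative model, not the math of any specific application:

```python
# Illustrative temperature/tint model: temperature moves red and blue in
# opposite directions (green untouched); tint moves green while red and
# blue compensate together. Real grading apps scale these differently.
def temp_tint(r, g, b, temperature=0.0, tint=0.0):
    r += temperature - tint / 2
    b += -temperature - tint / 2
    g += tint
    return r, g, b

print(temp_tint(0.5, 0.5, 0.5, temperature=0.1))  # warmer: red up, blue down
print(temp_tint(0.5, 0.5, 0.5, tint=0.1))         # green up, red/blue down together
```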

Slope/offset/power

The SOP model is used for CDL (color decision list) values. It breaks the signal down into luma (master), red, green and blue components, expressed as plus or minus values for slope, offset and power. Scratch Play’s color adjustments are a good example of the SOP model in action. Slope is equivalent to gain. Picture the waveform as a diagonal line from dark to light. As you rotate this imaginary line, the upper part becomes taller, which represents brighter values. Think of the slope concept as this rotating line. As such, its results are comparable to a contrast control.

The offset control shifts the entire signal up or down, similar to other shadow or lift controls. The power control alters gamma. As you adjust power, the gamma signal is curved in a positive or negative direction, effectively making the midrange tones lighter or darker.
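The core per-channel SOP math is compact. Here’s a Python sketch of it; note that clamping behavior and the separate CDL saturation operation vary by implementation, so treat this as the idea rather than a reference implementation:

```python
# Per-channel SOP (slope/offset/power) transfer, as used for CDL values.
# Only the core per-channel math; clamping and the separate saturation
# operation differ between implementations.
def sop(value, slope=1.0, offset=0.0, power=1.0):
    v = value * slope + offset
    v = max(v, 0.0)            # avoid negative values before the power stage
    return v ** power

print(sop(0.5, slope=1.2))     # 0.60 -- signal stretched brighter (gain-like)
print(sop(0.5, offset=-0.05))  # 0.45 -- whole signal shifted down
print(sop(0.5, power=1.3))     # ~0.41 -- midtones pushed darker
```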

Lift/gamma/gain

The LGG model is the common method used for most 3-way color wheel-style correctors. It effectively works in a similar manner to contrast and SOP, except that the placement of controls makes more sense to most casual users. Gain, as the name implies, increases the signal, effectively expanding the overall values and making highlights brighter. Lift shifts the entire signal higher or lower. Changing a lift control to darken shadows, will also have some effect on the overall image. Gamma bends the curve and effectively makes the midrange values lighter or darker.
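For comparison, here’s one common simplified way to express lift/gamma/gain in code. Real correctors differ in how lift scales toward white and how gamma is parameterized, so this is only a sketch of the roles each control plays:

```python
# One simplified lift/gamma/gain formulation (normalized 0.0-1.0 values).
# Meant only to show the role of each control, not any corrector's exact math.
def lgg(value, lift=0.0, gamma=1.0, gain=1.0):
    v = value * gain + lift            # gain expands, lift shifts the signal
    v = min(max(v, 0.0), 1.0)
    return v ** (1.0 / gamma)          # gamma bends the midtones

print(lgg(0.5, gain=1.2))    # 0.60 -- brighter overall
print(lgg(0.5, lift=0.05))   # 0.55 -- shadows (and everything else) raised
print(lgg(0.5, gamma=1.2))   # ~0.56 -- midtones lifted, black/white points unchanged
```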

Luma ranges

The portions of the signal altered by highlight/shadow/midrange controls (like SOP, LGG or other) overlap. If you change the color balance for the midrange tones, you will also contaminate shadows and highlights with this color shift. The extent of the portion that is affected is controlled by a luma range control. Many color correction applications do not give you control over shifting the crossover points of these luma ranges. Some that do, include Avid Symphony, Synthetic Aperture Color Finesse and Adobe SpeedGrade. Each offers curves or sliders to reduce or expand the area controlled by each luma range and effectively tightens or widens the overlap or crossover between the ranges.
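To visualize those overlapping ranges, here’s a small Python sketch that weights shadows, midtones and highlights with soft crossovers. The crossover points and smoothing curve are my own arbitrary choices, not those of any particular application:

```python
# Illustrative shadow/mid/highlight weighting with soft, overlapping
# crossovers; the three weights always sum to 1.0. Crossover points and
# smoothing are arbitrary choices for demonstration.
def smoothstep(edge0, edge1, x):
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3 - 2 * t)

def luma_weights(luma, low=(0.25, 0.45), high=(0.55, 0.75)):
    highlights = smoothstep(high[0], high[1], luma)
    shadows = 1.0 - smoothstep(low[0], low[1], luma)
    midtones = 1.0 - shadows - highlights
    return shadows, midtones, highlights

for y in (0.1, 0.35, 0.5, 0.65, 0.9):
    s, m, h = luma_weights(y)
    print(f"luma {y:.2f}: shadows {s:.2f}, mids {m:.2f}, highlights {h:.2f}")
```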

DaVinci Resolve includes a similar function within its log-style color wheels panel. It uses range adjustments that can limit the area affected by the balance and saturation controls. Similar results may be achieved by using HSL keyers or qualifiers that include softening controls.

Channels or printer lights

Video signals are made up of red, blue and green channel information. It is not uncommon for properly-balanced digital cameras to still maintain a green color cast to the overall image, especially if log-profile recording was used. Here, it’s best to simply balance the overall channels first to neutralize the image, rather than attempt to do this through color wheel adjustments. Some software uses actual channel controls, so it’s easy to make a base-level adjustment to the output or mix of a channel. If your software uses printer lights, you can achieve the same results. Printer lights harken back to lab color timing, using point values that equate to color analysis values. Regardless, dialing in a plus or minus red/blue/green printer light value effectively gives you the same results as altering the output value of a specific color channel.

This is just a short post to go over some of the more confusing terminology found in modern color correction software. Many applications tend to blend these color science models, so as you apply the points mentioned here to your favorite tool, you may see somewhat different results. Hopefully I’ve gotten you into the ballpark of understanding what happens the next time you twirl a knob.

©2014 Oliver Peters

NAB 2013 Distilled

Another year – another NAB exhibition. A lot of fun stuff to see. Plenty of innovation and advances, but no single “shocker” like last year’s introduction of the Blackmagic Cinema Camera. Here are some observations based on this past week in Las Vegas.

4K

Yes, 4K was all over. I was a bit surprised that many of the pieces for a complete end-to-end solution are in place. The term 4K refers to the horizontal pixel width of the image, but two common specs are used – the DCI (film) standard of 4096 and the UltraHD (aka QuadHD) standard of 3840. Both are “4K”. Forgotten in the discussion is frame rate. Many displays were showing higher frame rates, such as 4K at 60fps. 120fps is also being discussed.

4K (and higher) cameras were there from Canon, Sony, RED, JVC, GoPro and now Blackmagic Design. Stereo 3D was there too, in pockets, but it’s all but dead (again). 4K, though, will have legs. The TV sets and distribution methods are coming into position and this is a nonintrusive experience for the viewer. SD to HD was an obvious “in your face” difference. 4K is noticeably better, but not by as much as SD to HD – more like 720p versus 1080p. This means that consumer prices will have to continue to drop (as they will) for 4K to really catch hold, except for special venue applications. Right now, it’s pretty obvious how gorgeous 4K is when you’re standing a few feet away from an 84” screen, but few folks can afford that yet.

Interestingly enough, you can even do live 4K broadcasts, using 4K cameras and production products from Astro Designs. This will have value in live venues like sporting events and large corporate meetings. A new factor – “region of interest” – comes into play. This means you can shoot 4K and then scale/crop the portion of the image that interests you. Naturally there was also 8K by NHK and also Quantel. Both have been on the forefront of HD and then 4K. Quantel was demonstrating 8K (downsampled to a 4K monitor) just to show their systems have the headroom for the future.

ARRI did not have a 4K camera, but the 4 x 3 sensor of the ALEXA XT model features 2880 x 2160 photosites. When you use a 2:1 anamorphic lens and record ARRIRAW, you effectively end up with an unsqueezed image of 5760 x 2160 pixels. Downsample that to a widescreen 2.4:1 image inside a 4096-wide DCI frame and you have visually similar results to a Sony or RED camera delivering in 4K. This was demonstrated in the booth and the results were quite pleasing. The ALEXA looked a bit softer than comparable displays at the Sony and RED booths, but most cinematographers would probably opt for the ARRI image, since it appears a lot closer to the look of scanned film at 4K. Part of this is inherent in ARRI’s sensor array, which includes optical filtering in-camera. Sony was showing clips from the upcoming Oblivion feature film, which was shot with an F65. To many attendees these clips looked almost too crisp.
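The arithmetic behind that comparison is straightforward. Here it is as a quick Python sketch, using the figures quoted above:

```python
# Quick check of the ALEXA XT anamorphic math described above.
sensor_w, sensor_h = 2880, 2160       # 4 x 3 ARRIRAW photosite area
squeeze = 2.0                         # 2:1 anamorphic lens

unsqueezed_w = int(sensor_w * squeeze)
print(unsqueezed_w, sensor_h)         # 5760 x 2160 unsqueezed

# Fit a 2.4:1 widescreen extraction into a 4096-wide DCI container:
dci_w = 4096
scope_h = round(dci_w / 2.4)
print(dci_w, scope_h)                 # 4096 x ~1707 delivery extraction
```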

In practical terms, most commercial, corporate, television or indie film users of 4K cameras want an easy workflow. If that’s your goal, then the best “true” 4K paths are to shoot with the Canon C500 or the Sony F55. The C500 can be paired with the (now shipping) AJA KiPro Quad to record 4K ProRes files. The Sony records in the XAVC codec (a variant of AVC-Intra). Both are ready to edit (importer plug-ins may be required) without conversions.

You can also record ARRI 2K ProRes in an ALEXA or use one of the various raw workflows (RED, Canon, Blackmagic, Sony, ARRI). Raw is nice, but adds extra steps to the process – often with little benefit over log-profile recording to an encoded file format.

Edit systems

With the shake-up that Apple’s introduction of Final Cut Pro X has brought to the market, brand dominance has been up for grabs. Apple wasn’t officially at the show, but did have some off-site presence, as well as a few staffers at demo pods. For example, they were showing the XAVC integration in an area of the Sony booth. FCP X was well-represented as part of other displays all over the floor. An interesting metric I noticed was that all of the press covering the show on video were cutting their reports on laptops using FCP X. That is a sweet spot for the application. No new FCP X news (beyond the features released with 10.0.8) was announced.

Adobe is currently the most aggressive in trying to earn the hearts of editors. The “next” versions of Premiere Pro, SpeedGrade, Audition and After Effects have a ton of features that respond to customer requests and will speed up workflows. Adobe’s main stage demos were packed, and among editors discussing a move away from FCP 7 (and even Avid), the general consensus was a move to Adobe. In early press, Adobe mentioned working with the Coen brothers, who have committed to cutting their next film with Premiere.

The big push was for Adobe Anywhere – their answer for cloud-based editing. It’s an interesting technology, but it will compete in the same space as Quantel Qtube and Avid Interplay Sphere. These are enterprise solutions that require servers, storage, software and support, so they will tend to be of more interest to larger news operations and educational facilities than to smaller post shops.

Avid countered with Media Composer 7 at a new, lower price, with Symphony now offered as an add-on option. The biggest features are the ability to edit with larger-than-HD video sources (output is still limited to HD), LUT support, improved media management of AMA files and background transcoding using managed folders (watch folders). In addition, Pro Tools goes to 11 with a new video engine – it can natively play Avid sequences from AAF imports – and faster-than-real-time bounce. The Media Composer background transcode and the Pro Tools 11 bounce will be time savers for Avid users, and that translates into money saved.
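
For readers unfamiliar with the watch-folder concept, here’s a generic sketch (Python) of the pattern: poll a folder, hand any new clip to a transcoder, then move the source aside. To be clear, this is not how Media Composer implements its managed-folder transcoding – the paths, the MXF filter and the ffmpeg placeholder command are all hypothetical.

    # Generic watch-folder pattern: poll a drop folder, transcode anything new,
    # then move the source aside. Not Avid's implementation - just the concept.
    # Paths, extensions and the transcode command are placeholders.

    import shutil
    import subprocess
    import time
    from pathlib import Path

    WATCH = Path("/media/ingest")        # hypothetical drop folder
    OUTPUT = Path("/media/transcoded")   # hypothetical destination for new files
    DONE = Path("/media/ingest/done")    # processed sources get parked here

    def transcode(src: Path) -> None:
        # Placeholder command - substitute whatever encoder your pipeline uses.
        dst = OUTPUT / (src.stem + ".mov")
        subprocess.run(["ffmpeg", "-i", str(src), str(dst)], check=True)

    def watch_loop(poll_seconds: int = 10) -> None:
        for folder in (OUTPUT, DONE):
            folder.mkdir(parents=True, exist_ok=True)
        while True:
            for clip in sorted(WATCH.glob("*.mxf")):
                transcode(clip)
                shutil.move(str(clip), DONE / clip.name)
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        watch_loop()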

Avid Interplay Sphere (announced last year) now works on Macs, but its main benefit is remote editing for stations that have invested in Interplay solutions. Avid is also bundling packages of ISIS storage, Interplay asset management and seats of Media Composer at even lower price points. Although still premium solutions, these bundles are finally in a range that may be attractive to some small edit facilities and broadcasters, given that they include installation and support.

The other NLE players include Avid DS (not shown), Quantel Pablo Rio, Autodesk Smoke 2013, Grass Valley EDIUS, Sony Vegas, Media 100 (not shown) and Lightworks. Most of these have little bearing on my market. Smoke 2013 is gaining traction, and Autodesk is actively gathering user feedback to improve the application as it moves deeper into a market segment that is new to the company. EditShare is forging ahead with Lightworks on the Mac. It looked pretty solid at the show, but expect something that’s ready for users toward the end of the year. It’s got the film credits to back it up, so a free (or nearly free) Mac version should shake things up even further.

One interesting addition to the market is DaVinci Resolve 10, which gains editing features. Right now the editing bells and whistles are still rudimentary, though all of the standard functions are there, plus titles, speed changes with optical flow and a plug-in API (OpenFX). You can already apply GenArts Sapphire filters to your clips. These are applied in the color correction timeline as nodes, rather than as effects added to an editing timeline, which means the Sapphire filters can be baked into any clip renders. Resolve 10 is positioned as an online editing tool – conforming, titling and trims/tweaks after grading – so you now have even greater editing capabilities at the grading stage without having to return to an NLE. Ultimately the best synergy will be between FCP X and Resolve. Together the two apps make for a very interesting package, and Apple seems to be working closely with Blackmagic Design to make this happen. Ironically, the editing page looks a lot like what FCP X might have looked like with tracks and dual viewers.

Final thoughts

I was reading John Buck’s Timeline on the plane. Even though we think of the linear days as having been dominated by CMX, the reality is that there were many systems, including Mach One, Epic, ISC, Strassner, Convergence, Datatron, Sony, RCA and Ampex. In Hollywood, the TV industry was split among them, which is why the EDL was developed as a common interchange standard. For a while, Avid became the dominant tool of the nonlinear era, but the truth is that such dominance hasn’t always been the norm – nor should it be. The design dilemma of engineering versus creative has been a factor from the beginning of video editing. Should a system be simple enough that producers, directors and non-technical editors can run it? Sound familiar?

When I look at the show, I am struck by how one makes a buying choice. To use the dreaded car analogy, FCP X is the sports car and Avid is the truck. But the sports car is a temperamental Ferrari that does some things very well, but isn’t appropriate for others. The truck is a Tundra with all the built-in, office-on-the-road niceties.

If I were a facility manager making a purchase for a large-scale facility, it would probably still be Avid. It’s the safe bet – the “you don’t get fired for buying IBM” bet. Avid’s innovations at the show were conservative, but they meet the practical needs of current customers. There simply is no other system with a proven track record across all types of productions that scales from one user to massive installations. But conservative innovation isn’t a growth strategy – you don’t gain new users that way. Media Composer has become complex in ways that only veteran users can accept, and that has to change fast.

Apple FCP X is the wild card, of course. Apple is playing the long game, looking for the next generation of users. If FCP X weren’t an Apple product, it would receive the same level of attention as Vegas Pro, at best – also a great tool with a passionate user base, but nothing with the potential to dominate market share. The trouble is that Apple gets in its own way due to corporate secrecy. I’ve been using FCP X for a while and it certainly is a professional product. But to use it effectively, you have to change your workflow. In a multi-editor, multi-production facility, this means changing a lot of practices and retraining staff. It also means augmenting the software with a host of other applications to fix the shortcomings.

Broadening the appeal of FCP X beyond one-man-band operations may be tough for that reason. It’s too non-standard and no one has any idea where it’s headed. On the other hand, as an editor who’s willing to deal with new challenges, I like the fast, creative cutting performance of FCP X, which makes it a great offline editing tool in my book. I find a “start in X, finish in Resolve” approach quite intriguing.

Right now, Adobe feels like the horse to beat. They have the ear of the users and an outreach reminiscent of Apple in the early FCP “legacy” era. Adobe is working hard to build a community, and the interoperability between its applications is the best in the industry. They are only hampered by the past indifference toward Premiere that many pro users have, but that seems to be changing, with many new converts. Although Premiere Pro “next” feels like FCP 7.5, that appears to be what users really want. The direction, at least, feels right. Apple may have been “skating to where the puck will be”, but it could be that no one is following – or the puck simply wasn’t going there in the first place.

For an additional look – click over to my article for CreativePlanetNetwork – DV magazine.

©2013 Oliver Peters