Baby Driver

You don’t have to be a rabid fan of Edgar Wright’s work to know of his films. His comedy trilogy (Shaun of the Dead, Hot Fuzz, The World’s End) and cult classics like Scott Pilgrim vs. the World loom large in pop culture. His films have enjoyed a life well beyond the brief release window of most movies and have earned Wright a loyal following. His latest is Baby Driver, a music-fueled action film that Wright wrote and directed, which just made a big splash at SXSW. It stars Ansel Elgort, Kevin Spacey, Jon Hamm, Jamie Foxx, and Eiza Gonzalez.

At NAB, Avid brought in a number of featured speakers for its main stage presentations, as well as its Avid Connect event. One of these speakers was Paul Machliss (Scott Pilgrim vs. the World, The World’s End, Baby Driver), who spoke to packed audiences about the art of editing these films. I had a chance to go in-depth with Machliss about the complex process of working on Baby Driver.

From Smoke to baptism by fire

We started our conversation with a bit of the backstory of the connection between Wright and Machliss. He says, “I started editing as an online editor and progressed from tape-based systems to being one of the early London-based Smoke editors. My boss at the time passed along a project that he thought would be perfect for Smoke. That was onlining the sitcom Spaced, directed by Edgar Wright. Edgar and I got on well. Concurrent to that, I had started learning Avid. I started doing offline editing jobs for other directors and had a ball. A chance came along to do a David Beckham documentary, so I took the plunge from being a full-time online editor to taking my chances in the freelance world. On the tail end of the documentary, I got a call from Edgar, offering me the gig to be the offline editor for the second season of Spaced, because Chris Dickens (Hot Fuzz, Berberian Sound Studio, Slumdog Millionaire) wasn’t available to complete the edit. And that was really jumping into the deep end. It was fantastic to be able to work with Edgar at that level.”

Machliss continues, “Chris came back to work with Edgar on Shaun of the Dead and Hot Fuzz, so over the following years I honed my skills working on a number of British comedies and dramas. After Slumdog Millionaire came out, which Chris cut and for which he won a number of awards, including an Oscar, Chris suddenly found himself very busy, so the rest of us working with Edgar all moved up one in the queue, so to speak. The opportunity to edit Scott Pilgrim came up, so we all threw ourselves into the world of feature films, which was definitely a baptism by fire. We were very lucky to be able to work on a project of that nature during a time where the industry was in a bit of a slump due to the recession. And it’s fantastic that people still remember it and talk about it seven years on. Which brings us to Baby Driver. It’s great when a studio is willing to invest in a film that isn’t a franchise, a sequel, or a reboot.”

Music drives the film

In Baby Driver, Ansel Elgort plays “Baby”, a young kid who is the getaway driver for a gang. A car accident at a young age left him with tinnitus, so he listens to music 24/7 to drown out the ringing. Machliss explains, “His whole life becomes regimented to whatever music he is listening to – different music for different moods or occasions. Somehow everything falls magically into sync with whatever he is listening to – when he’s driving, swerving to avoid a car, making a turn – it all seems to happen on the beat. Music drives every single scene. Edgar deliberately chose commercial top-20 tracks from the 1960s up to today. Each song Baby listens to also slyly comments on whatever is happening at the time in the story. Everything is seemingly choreographed to musical rhythms. You’re not looking at a musical, but everything is musically driven.”

Naturally, building a film to popular music brings up a whole host of production issues. Machliss tells how this film had been in the planning for years, “Edgar had chosen these tracks years ago. I believe it was in 2011 that Edgar and I tried to sequence the tracks and intersperse them with sound effects. A couple of months later, he did a table read in LA and sent me the sound files. In the Avid, I combined the sound files, songs, and some sound effects to create effectively a 100-minute radio play, which was, in fact, the film in audio form. The big thing is that we had to clear every song before we could start filming. Eventually we cleared 30-odd songs for the film. In addition, Edgar worked with his stunt team and editor Evan Schiff in LA to create storyboards and animatics for all of the action scenes.”

Editor on the front lines

Unlike most films, a significant amount of the editing took place on-set with Machliss working from a portable set-up. He says, “Based on our experiences with Scott Pilgrim and World’s End, Edgar decided it would be best to have me on-set during most of the Atlanta shoot for Baby Driver. Even though a cutting room was available, I was in there maybe ten percent of the time. The rest of the time I was on set. I had a trolley with a laptop, monitor, an Avid Mojo, and some hard drives and I would connect myself via ethernet to the video assist’s hard drive. Effectively I was crew in the front lines with everyone else. Making sure the edit worked was as important as getting a good take in the can. If I assured Edgar that a take would work, then he knew it wasn’t going to come back and cause problems for us six months later. We wanted things to work naturally in camera without a lot of fiddling in post. We didn’t want to have to fall back on frame-cutting and vari-speeding if we didn’t have to. There was a lot of prep work in making sure actions correctly coincided with certain lyrics without the action seeming mechanical.”

The nature of the shoot added to the complexity of the production audio configuration, too. Machliss explains, “Sound-wise, it was very complicated. We had playback going to earwigs in the actors’ ears, Edgar wanted to hear music plus the dialogue in his cans, and then I needed to get a split feed of the audio, since I already had the clean music on my timeline. We shot this mostly on 35mm film. Some days were A-camera only, but usually two cameras running. It was a combination of Panavision, Arricams, and occasionally Arri Alexas. Sometimes there were some stunt shots, which required nine or ten cameras running. Since the action all happened against playback of a track, this allowed me to use Avid’s multicam tools to quickly group shots together. Avid’s AMA tools have really come of age, so I was able to work without needing to ingest anything. I could treat the video assist’s hard drive as my source media, as long as I had the ethernet connection to it. If we were between set-ups, I could get Avid to background-transcode the media, so I’d have my own copy.”

Did all of this on-set editing speed up the rest of the post process? He continues, “All of the on-set editing helped a great deal, because we went into the real post-production phase knowing that all the sequences basically worked. During that time, as I’d fill up a LaCie Rugged drive, I would send that back to the suites. My assistant, Jerry Ramsbottom, would then patiently overcut my edits from the video assist with the actual scanned telecine footage as it came in. We shot from mid-February until mid-May and then returned to England. Jonathan Amos came on board a few weeks into the director’s cut edit and worked on the film with Edgar and myself up until the director’s cut picture lock. He did a pass on some of the action scenes while Edgar and myself concentrated on dialogue and the overall shape of the film. He stayed on board up until the final picture lock and made an incredible contribution to the action and the tension of the film. By the end of the year we’d locked and then we finished the final mix mid-February of this year. But the great thing was to be able to come into the edit and have those sequences ready to go.”

Editing from set is something many editors try to avoid. They feel they can be more objective that way. Machliss sees it a bit differently, “Some editors don’t like being on set, but I like the openness of it – taking it all in. Because when you are in the edit, you can recall the events of the day a particular scene was shot – ‘I can remember when Kevin Spacey did this thing on the third take, which could be useful’. It’s not vital to work like this, but it does lend itself to a kind of shorthand, which is something Edgar and I have developed over these years anyway. The beauty of it is that Edgar and I will take the time to try every option. You can never hit on the perfect cut the first time. Often you’ll get feedback from screenings, such as ‘we’d like to see more emotion between these characters’. You know what’s available and sometimes four extra shots can make all the difference in how a scene reads without having to re-imagine anything. We did drop some scenes from the final version of the film. Of course, you go ‘that’s a shame’, but at least these scenes were given a chance. However, there are always bits where upon the 200th viewing you can decide, ‘well, that’s completely redundant’ – and it’s easy to drop. You always skate as close to the edge as you can – making the film shorter without doing any damage to it.”

The challenge of sound

During sound post, Baby Driver also presented some unique challenges. Machliss says, “For the sound mix – and even for the shoot – we had to make sure we were working with the final masters of the song recordings to make sure the pitch and duration remained constant throughout. Typically these came in as mono or stereo WAVs. Because music is such an important element to the film, the concept of perceived direction becomes important. Is the music emanating from Baby’s earbuds? What happens to it when the camera moves or he turns his head? We had to work out a language for the perception of sound. This was Edgar’s first film mixed in Dolby Atmos and ours was the second film mixed in Goldcrest London’s new Atmos-certified dubbing theater. Then we did a reduction to 7.1 and 5.1. Initially we were thinking this film would have no score other than the songs. Invariably you need something to get from A to B. We called on the services of Steven Price (Gravity, Fury, Suicide Squad), who provided us with some original cues and some musical textures. He did a very clever thing where he would match the end pitch or notes of a commercial song and then by the time he came to the end of his cue, it would match to the incoming note or key of the next song. And you never notice the change.”

Working with Avid in a new way

To wrap up the conversation, we talked a bit about how he uses Avid Media Composer in his work. Machliss has used numerous other systems, but Media Composer still fits the bill for his work today. He says, “For me, the speed of working with AMA in Avid in the latest software was a real benefit. I could actually keep up with the speed of the shoot. You don’t want to be the one holding up a crew of 70. I also made good use of background transcoding. On a different project (Fleabag), I was able to work with native 2K Alexa ProRes camera files at full resolution. It was fantastic to be able to use Frameflex and apply LUTs – doing the cutting, but then bringing back my old skills as an online editor to paint out booms and fix things up. Once we locked, I could remove the LUTs and export DPX files, which went straight to the grading facility. It was exciting to work in a new way.”

Baby Driver opens this summer in the US and should be a fun ride. You can certainly enjoy a film like this without knowing the nitty-gritty of the production that goes into it. However, after you’ve read this article, you just might need to see it at least twice – once to just enjoy and once again to study the “invisible art” that’s gone into bringing it to screen.

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

Bricklayers and Sculptors

One of the livelier hangouts on the internet for editors to kick around their thoughts is the Creative COW’s Apple Final Cut Pro X Debates forum. Part forum, part bar room brawl, it started as a place to discuss the relative merits (or not) of Apple’s FCP X. As such, the COW’s bosses allow a bit more latitude than in other forums. However, threads often veer off into really thoughtful discussions about editing concepts.

Recently one of its frequent contributors, Simon Ubsdell, posted a thread called Bricklayers and Sculptors. In his words, “There are two different types of editors: Those who lay one shot after another like a bricklayer builds a wall. And those who discover the shape of their film by sculpting the raw material like a sculptor works with clay. These processes are not the same. There is no continuum that links these two approaches. They are diametrically opposed.”

Simon Ubsdell is the creative director, partner, and editor/mixer for London-based trailer shop Tokyo Productions. Ubsdell is also an experienced plug-in developer, having developed and/or co-developed the TKY, Tokyo, and Hawaiki effects plug-ins. But beyond that, Simon is one of the folks with whom I often have e-mail discussions regarding the state of editing today. We were both early adopters of FCP X who have since shifted almost completely to Adobe Premiere Pro. In keeping with the theme of his forum post, I asked him to share his ideas about how to organize an edit.

With Simon’s permission, the following are his thoughts on how best to organize editing projects in a way that keeps you immersed in the material and results in editing with greater assurance that you’ve made the best possible edit decisions.

________________________________________________

Simon Ubsdell – Bricklayers and Sculptors in practical terms

To avoid getting too general about this, let me describe a job I did this week. The producer came to us with a documentary that’s still shooting and only roughly “edited” into a very loose assembly – it’s the stories of five different women that will eventually be interwoven, but that hasn’t happened yet. As I say, extremely rough and unformed.

I grabbed all the source material and put it on a timeline. That showed me at a glance that there was about four hours of it in total. I put in markers to show where each woman’s material started and ended, which allowed me to see how much material I had for each of them. If I ever needed to go back to “everything”, it would make searching easier. (Not an essential step by any means.)

I duplicated that sequence five times to make sequences of all the material for each woman. Then I made duplicates of those duplicates and began removing everything I didn’t want. (At this point I am only looking for dialogue and “key sound”, not pictures which I will pick up in a separate set of passes.)

Working subtractively

From this point on I am working almost exclusively subtractively. A lot of people approach string-outs by adding clips from the browser – but here all my clips are already on the timeline and I am taking away anything I don’t want. This is for me the key part of the process because each edit is not a rough approximation, but a very precise “topping and tailing” of what I want to use. If you’re “editing in the Browser” (or in Bins), you’re simply not going to be making the kind of frame accurate edits that I am making every single time with this method.

The point to grasp here is that instead of “making bricks” for use later on, I am already editing in the strictest sense – making cuts that will stand up later on. I don’t have to select and then trim – I am doing both operations at the same time. I have my editing hat on, not an organizing hat. I am focused on a timeline that is going to form the basis of the final edit. I am already thinking editorially (in the sense of creative timeline-based editing) and not wasting any time merely thinking organizationally.

I should mention here that this is an iterative process – not just one pass through the material, but several. At certain points I will keep duplicates as I start to work on shorter versions. I won’t generally keep that many duplicates – usually just an intermediate “long version”, which has lost all the material I definitely don’t want. And by “definitely don’t want” I’m not talking about heads and tails that everybody throws away where the camera is being turned on or off or the crew are in shot – I am already making deep, fine-grained editorial and editing decisions that will be of immense value later on. I’m going straight to the edit point that I know I’ll want for my finished show. It’s not a provisional edit point – it’s a genuine editorial choice. From this point of view, the process of rejecting slates and tails is entirely irrelevant and pointless – a whole process that I sidestep entirely. I am cutting from one bit that I want to keep directly to the next bit I want to keep and I am doing so with fine-tuned precision. And because I am working subtractively I am actually incorporating several edit decisions in one – in other words, with one delete step I am both removing the tail from the outgoing clip and setting the start of the next clip.

Feeling the pacing and flow

Another key element here is that I can see how one clip flows into another – even if I am not going to be using those two clips side-by-side. I can already get a feel for the pacing. I can also start to see what might go where, so as part of this phase, I am moving things around as options start suggesting themselves. Because I am working in the timeline with actual edited material, those options present themselves very naturally – I’m getting offered creative choices for free. I can’t stress too strongly how relevant this part is. If I were simply sorting through material in a Browser/Bin, this process would not be happening or at least not happening in anything like the same way. The ability to reorder clips as the thought occurs to me and for this to be an actual editorial decision on a timeline is an incredibly useful thing and again a great timesaver. I don’t have to think about editorial decisions twice.

And another major benefit that is simply not available to Browser/Bin-based methods, is that I am constructing editorial chunks as I go. I’m taking this section from Clip A and putting it side-by-side with this other section from Clip A, which may come from earlier in the actual source, and perhaps adding a section from Clip B to the end and something from Clip C to the front. I am forming editorial units as I work through the material. And these are units that I can later use wholesale.

Another interesting spin-off is that I can very quickly spot “duplicate material”, by which I mean instances where the same information or sentiment is conveyed in more or less the same terms at different places in the source material. Because I am reviewing all of this on the timeline and because I am doing so iteratively, I can very quickly form an opinion as to which of the “duplicates” I want to use in my final edit.

Working towards the delivery target

Let’s step back and look at a further benefit of this method. Whatever your final film is, it will have the length that it needs to be – unless you’re Andy Warhol. You’re delivering a documentary for broadcast or theatrical distribution, or a short form promo or a trailer or TV spot. In each case you have a rough idea of what final length you need to arrive at. In my case, I knew that the piece needed to be around three minutes long. And that, of course, throws up a very obvious piece of arithmetic that it helps to know. I had five stories to fit into those three minutes, which meant that the absolute maximum of dialogue that I would need would be just over 30 seconds from each story! The best way of getting to those 30 seconds is obviously to work subtractively.
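To make that arithmetic explicit, here is a trivial sketch – the target length and strand count are the figures from this job, while the allowance for music-only moments and titles is an arbitrary illustration rather than a rule:

```python
# Dialogue budget per story strand, given a target running time.
# The target length and strand count come from the job described above;
# the "overhead" for music-only moments and titles is an arbitrary example.
def dialogue_budget(target_seconds, strands, overhead_seconds=0):
    """Maximum dialogue per strand, in seconds."""
    return (target_seconds - overhead_seconds) / strands

print(dialogue_budget(180, 5))                        # 36.0 - just over 30 seconds
print(dialogue_budget(180, 5, overhead_seconds=20))   # 32.0 with some breathing room
```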

I know I need to get my timeline of each story down to something approaching this length. Because I’m not simply topping and tailing clips in the Browser, but actually sculpting them on the timeline (and forming them into editorial units, as described above), I can keep a very close eye on how this is coming along for each story strand. I have a continuous read-out of how well I am getting on with reducing the material down to the target length. By contrast, if I approach my final edit with 30 minutes of loosely selected source material to juggle, I’m going to spend a lot more time on editorial decisions that I could have successfully made earlier.

So the final stage of the process in this case was simply to combine and rearrange the pre-edited timelines into a final timeline – a process that is now incredibly fast and a lot of fun. I’ve narrowed the range of choices right down to the necessary minimum. A great deal of the editing has literally already been done, because I’ve been editing from the very first moment that I laid all the material on the original timeline containing all the source material for the project.

As you can see, the process has been essentially entirely subtractive throughout – a gradual whittling down of the four hours to something closer to three minutes. This is not to say there won’t be additive parts to the overall edit. Of course, I added music, SFX, and graphics, but from the perspective of the process as a whole, this is addition at the most trivial level.

Learning to tell the story in pictures

There is another layer of addition that I have left out and that’s what happens with the pictures. So far I’ve only mentioned what is happening with what is sometimes called the “radio edit”. In my case, I will perform the exact same (sometimes iterative) process of whittling the entirety of the source material down to the shots I want to keep – again, this is obviously happening on a timeline or timelines. The real delight of this method is to review all the “pictures” without reference to the sound, because in doing so you can get a real insight into how the story can be told pictorially. I will often review the pictures having very, very roughly laid up some of the music tracks that I have planned on using. It’s amazing how this lets you gauge both whether your music suits the material and conversely whether the pictures are the right ones for the way you are planning to tell the story.

This brings me to a key point I would make about how I personally work with this method and that’s that I plunge in and experiment even at the early stages of the project. For me, the key thing is to start to get a feel for how it’s all going to come together. This loose experimentation is a great way of approaching that. At some point in the experimentation something clicks and you can see the whole shape or at the very least get a feeling for what it’s all going to look like. The sooner that click happens, the better you can work, because now you are not simply randomly sorting material, you are working towards a picture you have in your head. For me, that’s the biggest benefit of working in the timeline from the very beginning. You’re getting immersed in the shape of the material rather than just its content and the immersion is what sparks the ideas. I’m not invoking some magical thinking here – I’m just talking about a method that’s proven itself time and time again to be the best and fastest way to unlock the doors of the edit.

Another benefit is that although one would expect this method to make it harder to collaborate, in fact the reverse is the case if each editor is conversant with the technique. You’re handing over vastly more useful creative edit information with this process than you could by any other means. What you’re effectively doing is “showing your workings” and not just handing over some versions. It means that the editor taking over from you can easily backtrack through your work and find new stuff and see the ideas that you didn’t end up including in the version(s) that you handed over. It’s an incredibly fast way for the new editor to get up to speed with the project without having to start from scratch by acquainting him or herself with where the useful material can be found.

Even on a more conventional level, I personally would far rather receive string-outs of selects than all the most carefully organized Browser/Bin info you care to throw at me. Obviously if I’m cutting a feature, I want to be able to find 323T14 instantly, but beyond that most basic level, I have no interest in digging through bins or keyword collections or whatever else you might be using, as that’s just going to slow me down.

Freeing yourself of the Browser/Bins

Another observation about this method is how it relates to the NLE interface. When I’m working with my string-outs, which is essentially 90% of the time, I am not ever looking at the Browser/Bins. Accordingly, in Premiere Pro or Final Cut Pro X, I can fully close down the Project/Browser windows/panes and avail myself of the extra screen real estate that gives me, which is not inconsiderable. The consequence of that is to make the timeline experience even more immersive and that’s exactly what I want. I want to be immersed in the details of what I’m doing in the timeline and I have no interest in any other distractions. Conversely, having to keep going back to Bins/Browser means shifting the focus of attention away from my work and breaking the all-important “flow” factor. I just don’t want any distractions from the fundamentally crucial process of moving from one clip to another in a timeline context. As soon as I am dragged away from that, there’s a discontinuity in what I am doing.

The edit comes to shape organically

I find that there comes a point, if you work this way, when the subsequence you are working on organically starts to take on the shape of the finished edit and it’s something that happens without you having to consciously make it happen. It’s the method doing the work for you. This means that I never find myself starting a fresh sequence and adding to it from the subsequences and I think that has huge advantages. It reinforces my point that you are editing from the very first moment when you lay all your source material onto one timeline. That process leads without pause or interruption to the final edit through the gradual iterative subtraction.

I talked about how the iterative sifting process lets you see “duplicates”, that’s to say instances where the same idea is repeated in an alternative form – and that it helps you make the choice between the different options. Another aspect of this is that it helps you to identify what is strong and what is not so strong. If I were cutting corporates or skate videos this might be different, but for what I do, I need to be able to isolate the key “moments” in my material and find ways to promote those and make them work as powerfully as possible.

In a completely literal sense, when you’re cutting promos and trailers, you want to create an emotional, visceral connection to the material in the audience. You want to make them laugh or cry, you want to make them hold their breath in anticipation, or gasp in astonishment. You need to know how to craft the moments that will elicit the response you are looking for. I find that this method really helps me identify where those moments are going to come from and how to structure everything around them so as to build them as strongly as possible. The iterative sifting method means you can be very sure of what to go for and in what context it’s going to work the best. In other words, I keep coming back to the realization that this method is doing a lot of the creative work for you in a way that simply won’t happen with the alternatives. Even setting aside the manifest efficiency, it would be worth it for this alone.

There’s a huge amount more that I could say about this process, but I’ll leave it there for now. I’m not saying this method works equally well for all types of projects. It’s perhaps less suited to scripted drama, for instance, but even there it can work effectively with certain modifications. Like any method, every editor will want to tweak it to their own taste and inclinations. The one thing I have found to its advantage above all others is that it almost entirely circumvents the problem of “what shot do I lay down next?” Time and again I’ve seen Browser/Bin-focused editors get stuck in exactly this way and it can be a very real block.

– Simon Ubsdell

For an expanded version of this concept, check out Simon’s in-depth article at Creative COW.

For more creative editing tips, click on this link for Film Editor Techniques.

©2017 Oliver Peters

The Handmaid’s Tale

With tons of broadcast, web, and set-top outlets for dramatic television, there’s a greater opportunity than ever for American audiences to be exposed to excellent productions produced outside of Hollywood or New York. Some of the most interesting series come out of Canada from a handful of production vendors. One such company is Take 5 Productions, which has worked on such co-productions as Vikings, American Gothic, Penny Dreadful, and others. One of their newest offerings is The Handmaid’s Tale, currently airing in ten hour-long episodes on Hulu, as well as being distributed internationally through MGM.

The Handmaid’s Tale is based on a dystopian novel written in 1985 by Margaret Atwood. It’s set in New England in the near future, when an authoritarian theocracy has overthrown the United States government and replaced it with the Republic of Gilead. Births have declined due to pollution and disease, so a class of women considered fertile (the handmaids) are kept by the ruling class (the Commanders) as concubines for the purpose of bearing their children. This disturbing tale and series, with its nods to Nazi Germany and life behind the Iron Curtain, not to mention Orwell and Kubrick, stars Elisabeth Moss (Mad Men, The One I Love, Girl, Interrupted) as Offred, one of the handmaids, as she tries to survive her new reality.

The visual style and tone of The Handmaid’s Tale were set by cinematographer-turned-director Reed Morano (Frozen River, Meadowland, The Skeleton Twins). She helmed three of the episodes, including the pilot. As with many television series, a couple of editors traded off the cutting duties. For this series, Julian Clarke (Deadpool, Chappie, Elysium) started the pilot, but it was wrapped up by Wendy Hallam Martin (Queer As Folk, The Tudors, The Borgias). Hallam Martin and Christopher Donaldson (Penny Dreadful, Vikings, The Right Kind of Wrong) alternated episodes in the series, with one episode cut by Aaron Marshall (Vikings, Penny Dreadful, Warrior).

Cutting a dystopian future

I recently spoke with Wendy Hallam Martin about this series and working in the Toronto television scene. She says, “As a Canadian editor, I’ve been lucky to work on some of the bigger shows. I’ve done a lot of Showtime projects, but Queer As Folk was really the first big show for me. With the interest of outlets like Netflix and Hulu, budgets have increased and Canadian TV has had a chance to produce better shows, especially the co-productions. I started on The Handmaid’s Tale with the pilot, which was the first episode. Julian [Clarke] started out cutting the pilot, but had to leave due to his schedule, so I took over. After the pilot was shot (with more scenes to come), the crew took a short break. Reed [Morano] was able to start her director’s cut before she shot episodes two and three to set the tone. The pilot didn’t lock until halfway through the season.”

One might think a mini-series that doesn’t run on a broadcast network would have a more relaxed production and post schedule, akin to a feature film. But not so with The Handmaid’s Tale, which was produced and delivered on a schedule much like other dramatic television series. Episodes were shot in blocks of two at a time, with eight days allotted per episode. The editor’s assembly was due five days later, followed by two weeks working with the director for a director’s cut. Subsequent changes from Hulu and MGM notes resulted in a locked cut three months after the first day of production for those two episodes. Finally, it was three days to color grade and about a month for the sound edit and mix.
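Laid out as simple date arithmetic, that cadence for a single two-episode block looks roughly like the sketch below. The start date is hypothetical, the durations are the approximate figures described above, and everything is counted in calendar days, so treat it purely as an illustration of the overlap rather than the actual production calendar.

```python
# Rough sketch of the per-block post cadence described above.
# The start date is hypothetical; durations are the article's approximate
# figures, counted as calendar days for simplicity.
from datetime import date, timedelta

block_start = date(2016, 9, 5)                      # hypothetical day one of a two-episode block
shoot_wrap = block_start + timedelta(days=8 * 2)    # eight days allotted per episode, shot in pairs
assembly_due = shoot_wrap + timedelta(days=5)       # editor's assembly due five days later
directors_cut = assembly_due + timedelta(weeks=2)   # two weeks working with the director
locked_cut = block_start + timedelta(days=90)       # notes from Hulu/MGM; lock ~3 months after day one
grade_done = locked_cut + timedelta(days=3)         # three days of color grading
mix_done = locked_cut + timedelta(days=30)          # about a month for sound edit and mix

for label, day in [("Shoot wrap", shoot_wrap), ("Assembly due", assembly_due),
                   ("Director's cut", directors_cut), ("Locked cut", locked_cut),
                   ("Grade complete", grade_done), ("Mix complete", mix_done)]:
    print(f"{label}: {day}")
```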

Take 5 has its own in-house visual effects department, which handles simple VFX, like wire removals, changing closed eyes to open, and so on. A few of the more complex VFX shots are sent to outside vendors. The episodes average about 40 VFX shots each; however, the season finale had 70 effects shots in one scene alone.

Tackling the workload

Hallam Martin explained how they dealt with the post schedule. She continues, “We had two editors handling the shows, so there was always some overlap. You might be cutting one show while the next one was being assembled. This season we had a first and second assistant editor. The second would deal with the dailies and the first would be handling visual effects hand-offs, building up sound effects, and so on. For the next season we’ll have two firsts and one second assistant, due to the load. Reed was very hands-on and wanted full, finished tracks of audio. There were always 24 tracks of sound on my timelines. I usually handle my own temp sound design, but because of the schedule, I handed that off to my first assistant. I would finish a scene and then turn it over to her while I moved on to the next scene.”

The Handmaid’s Tale has a very distinctive look for its visual style. Much of the footage carries a strong orange-and-teal grade. The series is shot with an ARRI ALEXA Mini in 4K (UHD). The DIT on set applies a basic look to the dailies, which are then turned into Avid DNxHD36 media files by Deluxe in Toronto to be delivered to the editors at Take 5. Final color correction is handled from the 4K originals by Deluxe under the supervision of the series director of photography, Colin Watkinson (Wonder Woman, Entourage, The Fall). A 4K (UHD) high dynamic range master is delivered to Hulu, although currently only standard dynamic range is streamed through the service. Hallam Martin adds, “Reed had created an extensive ‘look book’ for the show. It nailed what [series creator] Bruce Miller was looking for. That, combined with her interview, is why the executive producers hired her. It set the style for the series.”

Another departure from network television is that episodes do not have a specific duration that they must meet. Hallam Martin explains, “Hulu doesn’t dictate exact lengths like 58:30, but they did want the episodes to be under an hour long. Our episodes range from about 50 to 59 minutes. 98% of the scenes make it into an episode, but sometimes you do have to cut for time. I had one episode that was 72 minutes, which we left that long for the director’s cut. For the final version, the producers told me to ‘go to town’ in order to pace it up and get it under an hour. This show had a lot of traveling, so through the usual trimming, but also a lot of jump cuts for the passage of time, I was able to get it down. Ironically the longest show ended up being the shortest.”

Adam Taylor (Before I Fall, Meadowland, Never a Neverland) was the series composer, but during the pilot edit, Morano and Hallam Martin had to set the style. Hallam Martin says, “For the first three episodes, we pulled a lot of sources from other film scores to set the style. Also a lot of Trent Reznor stuff. This gave Adam an idea of what direction to take. Of course, after he scored the initial episodes, we could use those tracks as temp for the next episodes and as more episodes were completed, that increased the available temp library we had to work with.”

Post feelings

Story points in The Handmaid’s Tale are often exposed through flashbacks and Moss’ voice over. Naturally voice over pieces affect the timing of both the acting and the edit. I asked Hallam Martin how this was addressed. She says, “The voice over was recorded after the fact. Lizzie Moss would memorize the VO and act with that in mind. I would have my assistant do a guide track for cutting and when we finally received Lizzie’s, we would just drop it in. These usually took very little adjustment thanks to her preparation while shooting. She’s a total pro.” The story focuses on many ideas that are tough to accept and watch at times. Hallam Martin comments, “Some of the subject matter is hard and some of the scenes stick with you. It can be emotionally hard to watch and cut, because it feels so real!”

Wendy Hallam Martin uses Avid Media Composer for these shows and I asked her about editing style. She comments, “I watch all the dailies from top to bottom, but I don’t use ScriptSync. I will arrange my bins in the frame view with a representative thumbnail for each take. This way I can quickly see what my coverage is. I like to go from the gut, based on my reaction to the take. Usually I’ll cut a scene first and then compare it against the script notes and paperwork to make sure I haven’t overlooked anything that was noted on set.” In wrapping up, we talked about films versus TV projects. Hallam Martin says, “I have done some smaller features and movies-of-the-week, but I like the faster pace of TV shows. Of course, if I were asked to cut a film in LA, I’d definitely consider it, but the lifestyle and work here in Toronto is great.”

The Handmaid’s Tale continues with season one on Hulu and a second season has been announced.

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

A Conversation with Thomas Grove Carter

The NAB Show is a great place to see the next level of media hardware and software. Even better, it’s also a great place to meet old friends, make new ones, and pick up the tips and tricks of your craft through the numerous tutorials, seminars, and off-site events that accompany the show.

This year I had the chance to interview Thomas Grove Carter, an editor at Trim Editing, which is a London-based creative editorial shop. He appeared at several sessions to present his techniques for maximizing the power of Final Cut Pro X. These sessions were moderated by Apple and FCPWORKS.

Thomas Grove Carter has a number of high-profile projects on his reel, including work for Honda, Game of Thrones, Audi, and numerous music artists. Carter is a familiar name in the Final Cut Pro X editing community. He first came to prominence with Honda’s “The Other Side” long-form web commercial. In it, Carter juxtaposes parallel day and night driving scenarios covering the main actor – dad by day, undercover police officer by night. On the interactive website, you can toggle in-sync between the two versions. Thanks to FCPX’s way of connecting clips and the nature of its magnetic timeline, Carter could use this then-young application to build the commercial, as well as preview the interactivity for the client – all on a very tight deadline.

I had the pleasure of sitting down with Carter in a semi-quiet corner of the NAB Press Room shortly after his Post Production World keynote session on Sunday evening.

____________________________________

[Oliver Peters]: We first started hearing your name when Honda’s “The Other Side” long-form commercial hit the web. That fit ideally with Final Cut Pro X’s unique ability to connect clips above and below the primary storyline on the timeline. Was that something you came up with intuitively?

[Thomas Grove Carter]: I knew that Final Cut Pro X was going to be good for this interactive piece. As you’re playing back in FCPX you can enable and disable layers. This meant I could actually do a rough preview of what it’s going to look like. I knew that I was going to have these two layers of video, but I didn’t exactly know what it was going to be until the edit, so I started to assemble each story separately. Then at some point, once I had each narrative roughly built, I put them both together on the same timeline and started adding the sound. From then on I was able to play it ‘interactively’ right inside FCPX.  Back then, I split the day and night audio above and below the primary storyline. Today though, I’d probably assign a role for the day and a role for all of the night. Because, you can’t add audio-only above the primary storyline anymore. So that’s what I’d do to divide it out. All the audio and video still connects in exactly the same way – it just looks slightly different. Another great advantage of doing this in X was clip connections. For any given shot, there was the day and night version, and then, all the audio for the day and all the audio for the night. Just by grabbing the one clip in the primary and moving it or trimming it – everything for day and night – picture and audio – both would move together.

[OP]: Tell me a bit about your relationship with Trim Editing.

[TGC]: There are three partners, who are the most senior three editors. Then there are four or five other main editors and two or three junior editors, plus a number of assistants and runners.

It’s been running over 12 years and I joined the team just over 4 years ago.

[OP]: Are all of you using Final Cut Pro X?

[TGC]: Originally, before anyone started using Final Cut Pro X, we had a mix of Avid and Final Cut Pro 7. Then we began to move to Avid as we saw that Final Cut Pro 7 was not going to be improved. So I started to move to Avid, too. But, I was using Final Cut Pro X on my own personal projects. I began to use it on smaller jobs and one of the other editors said, “That’s cool, that thing you’re doing there.” And he started to try it out. Now we’re kind of at a point where most of the editors are on Final Cut Pro X. One is using Avid, so our assistants need to be able to work with both.

[OP]: Have you been able to convert the last hold-out?

[TGC]: He’s always been Avid. That’s what he uses. The company doesn’t dictate what we use to edit with. It’s all about making the best work. If I decided tomorrow that I wanted to cut in Avid or Premiere – it wouldn’t be an issue. Anyone can cut with anything they like.

[OP]: Any thoughts of going to Premiere?

[TGC]: We’ve fallen in love with the way FCPX works – the browser and the timeline. I think Premiere is good, because it feels very much like a continuation of where Final Cut Pro 7 was, which is why loads of people have moved to it. I understand that. It’s an easy move. But it’s the core way that X functions that I love. That stuff just isn’t in any other NLE. What I’ve found with everyone who has moved to it, including myself – there were always a few little hooks that keep people coming back, even if you don’t like the whole app initially. For me, the first thing I liked is how you can pull out the audio clips and things move out of the way automatically. And I always just thought ‘I can’t make this thing work, but that feature is cool’. And then I kept coming back to it and slowly fell in love with the rest of it. One of the other editors loved the way of making dynamic selects in the browser and said, “I’m going to do this job in X.” He’d select in the browser using favorites and rejects and he absolutely loved it. Loved the way it was so fluid with the thumbnails and he felt immersed in his rushes. Then he gets to the timeline. “Oh, I can’t make this work.” He sent it back to Final Cut Pro 7 and finished up there. He did that on two or three jobs, because it takes time to get comfortable with the timeline. It’s strange when you come from track-based. But once it clicks, it’s amazing.

[OP]: How do your assistant editors fit into the workflow?

[TGC]: Generally I go from one job to the next. It might be two weeks or a month and a quick turnaround. Occasionally there might be an overlap – like, the next job has already started shooting and I haven’t finished the last one off yet. So it might be that I need an assistant editor to load my stuff. Or maybe I have to move on to the next job and I’ve got an assistant doing final tweaks on the last one. It’s much simpler to load projects in X than it is in Avid and one thing I’ve heard in the industry is, “Oh, does that mean you’re going to fire a lot of assistants, because you don’t need them?” No! Of course, we’re going to employ them, but we’ll actually give them editing work to do whenever we can – not just grunt work. Let them do the cut-downs, versions, first assemblies. There’s more time now for them to be doing creative work.

We also try to promote from within. I was the first person who was hired from outside of the company. Almost all the other editors, apart from the partners, have been people who’ve moved up from within. Yes, we could be paying this assistant to be loading all our stuff and making QuickTimes. But if you can be paying the assistant and they can be doing another job, why wouldn’t you do that? It’s another revenue stream for the company. So it’s great to be able to get them up to a level where they can pick up work and build up their own reels and creative chops.

[OP]: Are you primarily working with proxy media?

[TGC]: Not ‘Final Cut Pro X proxy media’, but we use ProRes Proxy or LT files, which are often transcoded by a DIT on set. They look great, but the post house always goes back to the camera originals for the grade. Sometimes if it’s a smaller job – a low budget music video, for example – I’ll get the ARRI files if they’re shooting ProRes and just take them into Final Cut straight away – just to get working quicker.
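(As an aside for readers curious what those DIT transcodes involve: the sketch below uses ffmpeg’s prores_ks encoder to turn camera originals into ProRes Proxy or LT editing copies. It’s a generic illustration with made-up folder names, not a description of Trim’s or any particular DIT’s actual pipeline.)

```python
# Generic illustration only - not Trim's or any particular DIT's pipeline.
# Uses ffmpeg's prores_ks encoder to create ProRes Proxy (profile 0) or
# ProRes LT (profile 1) editing copies. Paths and naming are hypothetical.
import subprocess
from pathlib import Path

PRORES_PROFILES = {"proxy": "0", "lt": "1"}

def make_editing_copy(src: Path, dst_dir: Path, flavor: str = "lt") -> Path:
    dst = dst_dir / f"{src.stem}_{flavor}.mov"
    subprocess.run([
        "ffmpeg", "-i", str(src),
        "-c:v", "prores_ks", "-profile:v", PRORES_PROFILES[flavor],
        "-c:a", "pcm_s16le",   # uncompressed audio for the edit
        str(dst),
    ], check=True)
    return dst

if __name__ == "__main__":
    out_dir = Path("editing_media")
    out_dir.mkdir(exist_ok=True)
    for clip in sorted(Path("camera_originals").glob("*.mov")):
        make_editing_copy(clip, out_dir, flavor="lt")
```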

[OP]: Since you work in the area of high-end commercials, do you typically send out audio, color and effects to outside post facilities?

[TGC]: Sound and post work is finished off elsewhere. We work with all the big post facilities –  The Mill, Framestore, and MPC, for example. The directors we work with have their favorite colorists. They’re hiring them because they have the right eye, the right creative skills – not just because they can push the buttons. But we’re doing more and more in the offline now. Clients aren’t used to seeing things as ‘offline’ these days. They’re used to things looking slick. I do a lot of sound design, because it goes so hand in hand with the picture edit. Sometimes the picture doesn’t work without any of the sound, so I do quite a lot of it – get it sounding really great, but it will ultimately be remixed later. I might be working on a project for a month and the sound becomes a very integral creative element. And then the sound mixer only gets a day to pull it all together. They do a great job, but it’s really important to give them as much as we can to work with – to really set the creative direction of the audio.

[OP]: In your presentations, you’ve mentioned Trim’s light hardware footprint. How is the facility configured?

[TGC]: Well, we’ve got ‘cylinder’ Mac Pros, Retina iMacs, and more recently we’ve been trying out a few of the new MacBook Pros, alongside the LG 5K displays. I’ve actually been cutting with that set up a lot recently. I really like it, because I turn up at the suite with my laptop, plug two cables in and that’s it! One cable for the 5K display, power and audio. The second cable goes out to HDMI. It runs the client monitor (HD/4K TV) and a USB hub. It’s a really slick and flexible set up.

For storage, we’re currently using Samsung T3 SSD drives, which are so fast and light, they can handle most things we throw at them. But with a few potential feature films in the near future, we are looking again at shared storage. I think that’s an interesting area of the market these days. There are some really amazing new products, which don’t come from the same old vendors.

[OP]: How do clients react to this modular suite approach?

[TGC]: If we’re doing our jobs, clients shouldn’t really notice the tech we’re using to drive the edit. And people love the space we’ve created. We’ve got really nice rooms – none of our suites are small. Clients are looking at a 50″ to 60″ TV, which is 4K in some of our suites. And we’ve got really great sound systems. So, in terms of what clients are seeing and hearing, it doesn’t get much better in an edit suite.

Sometimes directors will come by even when they’re not editing with us. They’ll come by and write their treatments and just hang out, which is really nice. There’s a lot of common space with areas to work and meet.

There’s a lot of art all over the place and when anyone sees a sign that has the word ‘trim’ in it – they buy it. It might be a street sign or a ‘trim something’ logo. So, you see these signs all over the building. It adds a really nice character to the place. When I joined the company, I wanted to bring something to it – and I love LEGO – so I built our logo using it. That’s mounted at our entrance now.

[OP]: There’s a certain mentality in working with agencies. How does Trim approach that?

[TGC]: We tend to focus on the directors. That’s where you develop the greatest relationships, which is where the best work comes from. Not that I dislike working with an agency, but you build a much closer creative bond with your directors.

One small way we help build a good working environment for directors and agencies is to all have lunch together, every single day. We sit down for lunch rather than editing and eating at our desks. One of the great things about this is that directors get to meet other agencies and editors get to meet other directors. It’s really good to be able to socialize like that. It also helps build different relationships than what would ever happen if we were all locked away in a suite all day.

[OP]: At what point do you typically get involved with a job?

[TGC]: I’ll usually get pencilled on a job while the director is still pitching it. And then I’ll start work straight after the shoot. Occasionally we’ll be on set, but only if it’s a really tight deadline. On that Honda job, that was a six-day shoot to make two 2 1/2-minute films and then they needed to see it really soon after the shoot. So, I had to be on set. But typically I like not being on set, because when you’re on set you’re suddenly part of the, “Oh, this shot was amazing. It took us four hours to get in the pouring rain.” You’re invested in that baggage. Whereas, when you just view it coldly in the edit, you don’t know what happened on set. You can go, “This shot doesn’t work – let’s lose it.” That fresh vision is a great reason for the editor to be as far from a shoot as possible.

[OP]: One of the projects on your reel is a Game of Thrones promo. How did that job come your way?

[TGC]: That was actually a director I hadn’t worked with – but, just a director who wanted to work with me. He’d been trying to get me on a few jobs that I hadn’t been able to do. It was an outside director that HBO brought in to shoot. It wasn’t a trailer made of footage from the show. They brought in a commercials and music director to shoot the piece and he wanted to work with me. So, it came down like that and then I worked with him and HBO to bring it all together.

[OP]: Do you have any preferences for the types of projects you work on?

[TGC]: Things like the Audi commercial are really fun, because there’s a lot of sound design. A lot of commercials are heavily storyboarded, but it can often be more satisfying if the director has been a bit more loose in the filming. It might be a montage of different people doing activities, for example. And those can be quite fun, because the final thing – you’ve come up with it and you’ve created the narrative and the flow of it. I say that with hindsight, because they turn out to be the most creatively satisfying. But, the process can be much harder when you’re in the thick of it – because it’s on your shoulders and you haven’t got a really locked storyboard to fall back on. I’ll happily do really long hours and work really hard, if it’s a good bit of work – and, at the end of the day, I’ve worked with nice people.

[OP]: With Final Cut Pro X – anything that you’d like to see different?

[TGC]: Maybe collaboration is one thing that would be interesting to see if there’s a new and interesting take on it. Avid bin-locking is great, but actually when you boil it down, it’s quite a simple thing. It locks this bin, you can’t go in there. You can make a copy of it. That’s all it’s doing, but it’s simple and it works really well. All the cloud-based things I’ve seen so far – they’ve not really gotten me excited. I don’t feel like anyone has really nailed what that is yet. Everyone is just doing it because they can, not because it works really well, or is actually useful. I’d be interested to see if there’s something that can be done there.

In the timeline, I’d like to be able to look inside compound clips without stepping into them. I often use compound clips to combine sound effects or music stems. I’d like to be able to open them in context in the timeline and edit the contents inline with the master timeline. And I’d love some kind of dupe detection in the timeline. But otherwise, I’m really enjoying the new version.

Click this link to watch Thomas Grove Carter in action with FCPX at this year’s Las Vegas SuperMeet at NAB.

____________________________________

I certainly appreciated the time Thomas Grove Carter spent with me to do this interview. Along with a few other interviews, it made for a better-than-average Vegas trip. As a side note, I recorded my interviews (for transcription only) on my iPad, with the aid of the Apogee MetaRecorder app. This works with iPhones and iPads and is free to start; however, you should spend $4.99 on the in-app upgrade to be able to do anything useful with it. It can use the built-in mic and records full-quality WAV audio files – and it features a connection to FCPX via fcpxml. Finally, to aid in generating a text transcript, I used Digital Heaven’s SpeedScriber. Although still in beta, it worked well for what I needed. As with all audio-to-text transcription applications, there’s no such thing as perfect. I did need to do a fair amount of clean-up; however, that’s not uncommon.
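If you’re curious what that fcpxml hand-off looks like under the hood, the short sketch below reads an FCPXML file with Python’s standard library and lists its assets and markers. The file name is made up, and the attribute names reflect my reading of Apple’s FCPXML format, so double-check them against the reference for the version you’re working with.

```python
# Minimal sketch: skim the assets and markers in an FCPXML file.
# The file name is hypothetical; the attribute names ("name", "src",
# "start", "value") reflect my reading of Apple's FCPXML format - verify
# against the FCPXML reference for your version.
import xml.etree.ElementTree as ET

root = ET.parse("interview_recording.fcpxml").getroot()

print("Assets:")
for asset in root.iter("asset"):
    print(" ", asset.get("name"), "->", asset.get("src"))

print("Markers:")
for marker in root.iter("marker"):
    print(" ", marker.get("start"), marker.get("value"))
```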

©2017 Oliver Peters

Blackmagic Design DaVinci Resolve Panels

I started my editing career in the era of linear editing suites, where dedicated control panels ruled. CMX keyboards, Grass Valley switchers, ADOs – you name it. These enabled operational speed, and experienced editors could drive these rooms like virtuoso pianists. Much of that dexterity has been lost, thanks to the ubiquity of software-based user interfaces for applications running on general purpose computers and controlled by a mouse. But Grant Petty and Blackmagic Design have set out to change that. At the beginning of March, he introduced two new color correction control panels as companion tools to the company’s DaVinci Resolve editing and grading solution. According to Petty, more people are using Resolve to edit than to color correct. By introducing these new panels, he hopes to get more of these users involved in the color correction side of Resolve.

Blackmagic Design now offers three DaVinci Resolve panels: Advanced ($29,995), Mini ($2,995) and Micro ($995). Obviously, the Advanced panel is for serious, dedicated color correction facilities with the traffic to support that investment. It’s a large, three-module console with four trackball/ring controls in the center section. The Mini and Micro panels are designed to be more portable than the Advanced panel. The Mini is essentially a three-trackball subset of the center section of the larger panel. The Micro is the trackball section of the Mini, without the Mini’s tilted backplane. If you are an editor who uses Resolve for color correction, but that’s less than 50% of your workload, then the Micro is probably the right panel for you. If you color correct more than 50%, then the Mini is the better bet. However, these panels are designed for more than just editors. If you work as a DIT (digital imaging technician) in the field or on-set, you most likely use Resolve, making these panels the perfect addition to your toolkit.

Taking the Mini for a spin

Blackmagic Design loaned me a Resolve Mini panel for about two weeks for this review. I have to say, it was love at first sight. These panels continue Blackmagic’s modern industrial design style, which earned the company an international Design Team of the Year award from the Red Dot Awards last year. The Mini panel is a well-constructed metal console with precision trackballs, rings, knobs and buttons. (The panel also uses some high-impact plastic in its construction.) With packaging, it weighs 24 pounds, so it’s more “transportable” than portable. If you want something to toss into a gig bag, then the Micro would be the panel to buy. The Mini is better for facility use; however, it’s easy enough to move between rooms as needed.

The smaller Micro is bus-powered over USB, but the Mini includes several connection and power options. Communications can be over ethernet or USB/USB-C. Power options include standard AC wall power, 12-volt 4-pin, or ethernet PoE. Like other Blackmagic Design products, you have to supply your own power cord, but the Mini does include a USB-to-USB-C adapter cable. To run the panel, you need to install Resolve Studio (paid) or Resolve (free), version 12.5.5 or later. And yes, these panels only work with Resolve. Connection is drop-dead easy. Just power it up and plug the USB cable into any available USB port on your computer (or looped off of a connected device, like a monitor). Then select the panel in Resolve’s preferences. This ease of installation is refreshing, without any of the finickiness of other protocols, like EUCON. The one downside for editors is that this panel only controls the color mode of Resolve. There are no dedicated controls for editing, importing or exporting, so you won’t be able to shed the keyboard and mouse completely.

Everything at your fingertips

The main section of the panel includes three trackballs for hue control and rings for luminance control. Generally, these correspond to the shadow, midtone and highlight ranges of the image. Across the top of this flat section are twelve knobs for additional color controls. Push in the knobs to reset their adjusted values. On the right are buttons to move through nodes, clips and stills, along with play/stop buttons. The slanted backplane of the Mini panel features two five-inch, high-resolution LCD menu/control displays, fifteen buttons on either side, eight soft keys across the top, and eight knobs under the displays. The buttons on the left select the portion of the interface that you need to deal with, like primary correction, tracking, sizing, blurs, etc. The buttons on the right add nodes, copy and paste, move through stills and keyframes, and toggle the computer display to a full-screen viewer.

Resolve’s primary color correction window is pretty deep, requiring paging through different sections of the control window, such as primary bars, primary wheels, log, raw and more. There’s actually a fourth control wheel for offset, in addition to lows, mids and highs. Much of this is exposed on the panel. For example, you can use the knobs to adjust the primary bars, while also moving the trackballs, which would normally adjust the primary wheels. Across the bottom of Resolve’s primary window are additional controls for contrast, saturation and more, which spread across two pages of that interface window. All of these controls are active on the Mini via the twelve knobs located above the trackballs. In some cases, you’ll need to change the part of the interface that appears on the two LCDs, which is done with the two arrow keys in the upper-left corner of the panel. However, switching pages on the panel is required less often than when you use only the mouse with the interface.

The offset function (the fourth primary wheel and fourth trackball on the Advanced panel) can be accessed by selecting the offset key located above the middle trackball. In that mode, the left ring controls temperature, middle ring controls tint and right ring controls level. The right trackball controls color balance.

Resolve is built around controls that may or may not be present in other applications. For example, it is designed as a YRGB system, meaning you can gang level and color controls, but you can also correct Y (luminance) lift/gamma/gain levels independently of color (RGB). In addition to standard three-wheel color correction, you also have contrast/pivot control, as well as some photographic-style enhancements. These include color boost (like a vibrance control), mid detail (which softens or sharpens the image), plus blurs. If you are using Resolve Studio, then temporal noise reduction is also active. From what I can tell, this is the only control not active when using the panel with the free version of Resolve.
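To make the YRGB idea a bit more concrete, here is a minimal Python sketch of generic lift/gamma/gain math and of grading luminance independently of the RGB channels. It is only an illustration of the concept under assumed formulas and coefficients – it is not Resolve’s actual internal implementation.

```python
# A minimal sketch of three-way (lift/gamma/gain) correction, for illustration only.
# The exact formulas vary by application; this is NOT how Resolve computes its grade.
import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """Apply a common lift/gamma/gain transfer function to normalized (0-1) values."""
    x = np.clip(x * gain + lift * (1.0 - x), 0.0, 1.0)  # gain scales, lift raises the blacks
    return x ** (1.0 / gamma)                            # gamma bends the midtones

# YRGB idea: adjust luminance (Y) on its own, then rescale RGB to match the new Y.
rgb = np.random.rand(10, 10, 3)                                          # stand-in for an image
y = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]   # Rec. 709 luma weights
y_graded = lift_gamma_gain(y, lift=0.02, gamma=1.1, gain=0.95)
rgb_graded = rgb * (y_graded / np.maximum(y, 1e-6))[..., None]           # hue/saturation preserved
```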

Resolve uses an elaborate curves system, which you would think would be difficult to implement with knobs and buttons. However, Blackmagic has done a wonderful job. The normal curve levels (ganged or independent channels) can be adjusted by six of the knobs under the LCD displays. These work at preset intervals of 0, 20, 40, 60, 80 and 100% along the curve path from dark to light. If you use hue curves, you start with one of six preset colors selected from the panel. Then an “input hue” knob lets you change the selected color left or right within its hue range, based on the last color knob selected. Custom curves also offer a YSFX tool. This is an adjustment to shrink and even invert the curve range. The extreme opposite setting results in a negative image.
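For readers who want to picture what those six knobs are doing, here is a small sketch of a tone curve driven by fixed control points at 0/20/40/60/80/100%. The knot values are arbitrary, and simple linear interpolation stands in for whatever smooth spline Resolve actually uses – it is a conceptual illustration, not the app’s math.

```python
# A tone curve defined by six fixed control points, analogous to the panel's curve knobs.
import numpy as np

knots_in  = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])      # fixed positions along the curve
knots_out = np.array([0.0, 0.18, 0.42, 0.65, 0.82, 1.0])  # output values a knob might dial in

def apply_curve(x):
    """Map normalized pixel values through the curve (linear interpolation between knots)."""
    return np.interp(x, knots_in, knots_out)

pixels = np.linspace(0.0, 1.0, 5)
print(apply_curve(pixels))  # sample values pushed through the curve
```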

There are plenty of other tools. Resolve has a powerful point-cloud tracker, which can also be accessed from the Mini. One handy feature is the ability to automatically add a node with a preset box or circle window; once applied, you then adjust the window. Although you can step through keyframes, the panel still requires the mouse to add or delete keyframes. You also have to delete nodes with the mouse rather than from the panel. Some keys, like FX and User, are reserved for future expansion.

In practice

I spent about a week with the panel, working on and off with some test projects. Needless to say, I enjoyed the process, but there are a few things I wish were different. The Mini panel is really designed for full-time color grading. If your desk is laid out for editing, then there probably isn’t enough extra space to situate the Mini in an optimal location. For instance, if you wanted to place the panel between your keyboard and display, then the Micro would be a better option. There is no power switch, so the panel is always on. Fortunately, it’s fan-less and quiet. There are no illumination controls for the displays or the backlit keys. That’s fine in a normally lit room, but it might be too bright for some, if you keep the light level very low in the suite.

I’d like to see more versatile transport control. Resolve supports faster-than-real-time playback and scrubbing, but the panel only gives you 1X play in the forward or reverse direction. Some Resolve functions, like adding LUTs, can’t be handled from the panel at all. The controls to select HSL qualifiers for secondary color correction include eyedroppers, but you still need the mouse to graphically pick the right area of the screen; it would be nice if you could do this with the trackball. These are minor points and by no means deal-breakers.

A dedicated color correction panel will not only make you a faster colorist – it will also make you a better one. More controls are front and center, which means you are likely to discover and use processes that you would otherwise miss if you simply relied on the mouse or a pen and tablet. You have two hands to control the panel. As with any other tactile task, such as audio mixing with a mixing board, your hands will soon know instinctively what to adjust without having to look at the panel. You can stay more focused on your video display and the scopes. Grading is not only faster, but it’s more intuitive.

Some are going to balk at the price, no matter how reasonable these portable panels are. To place that into context, at $2,995 the Mini is still less expensive than a decked-out Mac Pro or MacBook Pro, which might be your main editing/grading workstation. Plus, both panels work with the free version of Resolve. So if color correction is part of your business model and Resolve is your color correction tool of choice, then either of these two DaVinci Resolve panels is easily justified. The more I’ve used the Resolve Mini, the more I like it. It’s the Porsche of small grading control panels – solid, stylish and powerful.

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

CrumplePop and FxFactory

If you edit with Final Cut Pro – either the classic or the new version – then you are familiar with two of its long-running plug-in developers: FxFactory (Noise Industries) and CrumplePop. Last year the two companies joined forces to bring the first audio plug-ins to the FxFactory plug-in platform. CrumplePop has since expanded its offerings through FxFactory to a total of six audio and video products: AudioDenoise, EchoRemover, VideoDenoise, AutoWhiteBalance, EasyTracker and BetterStabilizer.

Like much of the eclectic mix of products curated through FxFactory, the CrumplePop effects work across a mix of Apple and Adobe applications (macOS only). You’ll have to check the info for each specific plug-in to make sure it works with the applications you need. Compatibility is listed on the FxFactory site; however, that list isn’t always complete. For example, an effect that is listed for Premiere Pro may also work in After Effects or Audition (in the case of audio). While most are cross-application compatible, the EasyTracker effect only works in Final Cut Pro X. On the other hand, the audio filters work not only in the editing applications, but also in Audition, Logic Pro X and even GarageBand. As with all FxFactory effects, you can download a trial through the FxFactory application and see for yourself whether or not to buy.

I’ve tested several of these effects and they are simple to apply and adjust. The controls are minimal, but simplicity doesn’t mean a lack of power. Naturally, whenever you compare a given effect or filter from company A with one from company B, you can never definitively say which is best. Some of these functions, like stabilization, are also available within the host application itself. Ultimately, the results depend on the individual clip; one tool or the other will do better, depending on the challenges that clip presents. Regardless, the tools are easy to use and usually deliver good results.

In my testing, a couple of the CrumplePop filters proved very useful to me. EchoRemover is a solid, go-to “fix it” filter for location and studio interviews, voiceovers, and other types of dialogue. Often those recordings have a touch of “boominess” to the sound because of the room ambience. EchoRemover did the trick on my trouble clip. The default setting was a bit heavy-handed, but after a few tweaks, I had the clean track I was looking for.

BetterStabilizer is designed to tame shaky and handheld camera footage. There are several starting parameters to choose from, such as “handheld walking”, which determine the type of analysis done on the clip. One test shot had the camera operator with a DSLR moving in a semi-circle around a group of people at a construction site, which is a tough shot to stabilize. Comparing the results to the host application’s built-in tools didn’t leave any clear winner in my mind. Both results were good, but neither was without some subtle motion artifacts.

I also tested EasyTracker, which is designed only for Final Cut Pro X. I presume that’s because Premiere Pro and After Effects both already offer good tracking, or maybe there’s something in those apps that makes the effect harder to develop. In any case, EasyTracker gives you two methods: point and planar. Point tracking is ideal when you want to pin an object to something that moves in the frame. Planar tracking is designed for flat objects, like inserting a screen into a phone or monitor. When 3D is enabled, the pinned object scales in size as the tracked object gets larger in the frame. UPDATE: I had posted earlier that the foreground video seemed to work only with static images, like graphic logos, but that was incorrect. The good folks at CrumplePop pointed me to one of their tutorials. The trick is that you first have to make a compound clip of the foreground clip; then it works fine with a moving foreground and background image.
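For the curious, here is a generic sketch of what point tracking involves under the hood, using OpenCV’s pyramidal Lucas-Kanade tracker in Python. It only illustrates the general concept; it has no relation to how EasyTracker is implemented, and “clip.mov” is a hypothetical file name.

```python
# Generic point tracking: follow corner features from frame to frame with optical flow.
import cv2

cap = cv2.VideoCapture("clip.mov")          # hypothetical source clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Pick strong corner features in the first frame to follow.
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.3, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Estimate where each tracked point moved between the previous and current frame.
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    points = new_points[status.flatten() == 1].reshape(-1, 1, 2)  # keep successful tracks
    prev_gray = gray

cap.release()
```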

Like other FxFactory effects, you buy only the filters you want, without a huge investment in a large plug-in package where many of the options might go unused. It’s nice to see FxFactory add audio filters, which expands its versatility and usefulness within the greater Final Cut Pro X (and Premiere Pro) ecosystem.

©2017 Oliver Peters

Faster, Together at NAB

With the NAB trade show just around the corner, it’s time to shore up your last-minute plans for things to do and see. In addition to the tons of exhibits in the Las Vegas Convention Center halls, there are numerous outside meetings, conferences, training sessions, and places for production and post professionals to meet and greet.

A new addition this year is LumaForge’s Faster, Together Stage presentations. These are being held Monday through Wednesday across the street at the Courtyard by Marriott Las Vegas Convention Center. I’ll be part of the “State of the NLE” panel discussion Wednesday at 3PM. It should be fun and although some have referred to this as the “NLE cage match”, we are all friends and looking forward to an enlightening discussion. The presentations are free, but you must register in advance. See you there!

©2017 Oliver Peters