Five Decades of Edit Suite Evolution

I spent last Friday setting up two new Apple iMac Pros as editing workstations. When I started as an editor in the 1970s, it was the early days of computer-assisted video editing. Edit suites (or bays) were either “offline” rooms with simple hardware, where creative cutting was the goal, or “online” rooms designed for finishing and outfitted with the most expensive gear. Sometimes the online bay would do double-duty for both creative and final post.

The minimum investment for such a linear edit suite would include three 2” videotape recorders, a video switcher (vision mixer), edit controller, audio mixer, and a small camera for titles and artwork. Suites were designed with creature comforts, since clients would often spend days at a time supervising the edit session. Before smart phones and the internet, clients welcomed the chance to get out of the office and go to the edit. Outfitting one of these edit suites would start at several hundred thousand dollars.

At my current edit gig, the company runs nine Mac workstations within a footprint that would have only supported three edit suites of the past, including a centralized machine room. Clients rarely come to supervise an edit, so the layout is more akin to the open office plan of a design studio. Editing can be self-contained on a Mac or PC and editors work in a more collegial, collaborative environment. There’s one “hero” room for when clients do decide to drop in.

In these five decades, computer-assisted editing has gone through four phases:

Phase 1 – Offline and online edit suites, primarily based on linear videotape technology.

Phase 2 – Nonlinear editing took hold with the introduction of Avid, EMC, Media 100, and Lightworks. The resolution was too poor for finishing, but the systems were ideal for the creative process. VTR-based linear rooms still handled finishing.

Phase 3 – As the quality improved, nonlinear systems could deliver finished masters. But camera acquisition and delivery were still centered on videotape. Nonlinear systems still had to be able to output to tape, which required specialized I/O hardware.

Phase 4 (current) – Editing is completely based around the computer. Most general-purpose desktop and even laptop computers are capable of the whole gamut of post services without the need for specialized hardware, which has become optional. The full shift to Phase 4 came when file-based acquisition and delivery became the norm.

This transition brought about a sea change in cost, workflow, facility design, and talent needs. It has been driven by technology, but also a number of socioeconomic factors.

1. Technology always advances. Computers get more powerful at a lower cost point. Moore’s Law and all that. Although our demands increase – SD, HD, 4K, 8K, and beyond – computers, so far, have not been outpaced. I can edit 4K today with an investment of under $10K, which was impossible in 1980, even with an investment of $500K or more. This cost reduction also applies to shared storage solutions (NAS and SAN systems). They are cheaper, easier to install, and more reliable than ever. Even the smallest production company can now afford to design editing around the collaboration of several editors and workstations.

2. The death of videotape came with the 2011 Tohoku earthquake and tsunami in Japan that disabled the Fukushima nuclear plant. A byproduct of this natural disaster was that it damaged the Sony videotape manufacturing plant, putting supplies of HDCAM-SR stock on indefinite backorder. This pointed to the vulnerability of videotape and hastened the acceptance of file-based delivery for masters by key networks and distributors.

3. Interactions with clients – and human beings in general – have changed, thanks to smartphones, personal computers, and the internet. While both good and bad, the result is a shift in our communication with clients. Most of the time, edit session review and approval is handled over internet services. Post your cut. Get feedback. Make your changes and post again. Repeat. Along with a smaller hardware footprint than in the past, this is one of the prime reasons that room designs have changed. You don’t need a big, comfortable edit suite designed for clients if they aren’t going to come. A smaller room will do, as long as your editors are happy and productive.

Such a transition isn’t new. It’s been mirrored in the worlds of publishing, graphic design, and recording studios. Nevertheless, it is interesting to look back at how far things have come. Naturally, some will view this evolution as a threat and others as filled with opportunities. And, of course, where it goes from here is anyone’s guess.

All I know is that setting up two edit systems in a day would have been inconceivable in 1975!

Originally written for RedShark News

©2018 Oliver Peters

Editing and Music Composition

A nip is in the air and snow is falling in some regions. All signs of Fall and Winter soon to come. The sights, smells, and sounds of the season will be all around us. Festive events. Holiday celebrations. Joy. But no other season is so associated with memorable music to put us in the mood. That makes this a perfect time to talk about how video and film editing has intrinsic similarities with musical composition.

Fellow editor Simon Ubsdell has a lot of thoughts on the subject – perfect for one of my rare guest blog posts. Simon is Creative Director of Tokyo Productions, a London-based post-production shop specializing in trailers. Simon is multi-talented with experience in music, audio post, editing, and software development.

Grab a cup of holiday cheer and sit back for this enlightening read.

______________________________________

Simon Ubsdell – Editing and Music Composition

There is a quote attributed to several different musicians, including Elvis Costello, Miles Davis, and Thelonious Monk, which goes: “Talking about music is like dancing about architecture”. It sounds good and it seems superficially plausible, but I think it’s wrong on two levels. Firstly, a good choreographer would probably say that it’s perfectly possible to use dance to say something interesting about architecture, and a good architect might well say that they could design a building that said something about dance. Secondly, I think it’s unhelpful to imply that one art form can’t tell us useful things about another. We can learn invaluable lessons both from the similarities and the differences, particularly if we focus on process rather than the end result.

Instead, here’s Ingmar Bergman: “I would say that there is no art form that has so much in common with film as music. Both affect our emotions directly, not via the intellect. And film is mainly rhythm; it is inhalation and exhalation in continuous sequence.”

Bergman is certainly not the only filmmaker to have made this observation and I think everyone can recognise the essential truth of it. However, what I want to consider here is not so much what film and music have in common as art forms, but rather whether the process of music composition can teach us anything useful about the process of film editing. As an editor who also composes music, I have found thinking about this to be useful in both directions.

In films you’ll often see a composer sitting down at a piano and laboriously writing a score one note after another. He bangs around until he finds one note and then he scribbles it into the manuscript; then he bangs around looking for the next one. Music composition is made to look like a sequential process where each individual note is decided upon (with some difficulty usually!) before moving on to the next. The reality is of course that music composition doesn’t work this way at all. So I’d like to look at some of the ways that one does actually go about writing a piece of music and how the same principles might apply to how we edit films. Because music is such a vast subject, I’m going to limit myself largely to the concepts of classical music composition, but the same overall ideas apply to whatever kind of music you might be writing in whatever genre.

What both music and film have in common is that they unfold over time: they are experienced sequentially. So the biggest question that both the composer and the editor need to address is how to organise the material across time, and to do that we need to think about structure.

Musical Structure

From the Baroque period onwards and even before, composers have drawn on a very specific set of musical structures around which to build their compositions. 

The Canon (as in Pachelbel’s famous example) is the repetition of the same theme over and over again with added ornamentation that becomes increasingly more elaborate. The Minuet and Trio is an A/B/A sandwich in which a theme is repeated (Minuet), but with a contrasting middle section (Trio). The Rondo is a repeated theme that alternates with multiple contrasting sections, in other words A/B/A/C/A/D, etc. The Theme and Variations sets out a basic theme and follows it with a series of elaborations in different keys, tempi, time signatures, and so on. 

Sonata Form, widely used for the opening movements of most symphonic works, is a much more sophisticated scheme that starts by setting out two contrasting themes (the “1st and 2nd Subjects”) in two different keys (the “Exposition”), before moving into an extended section where those ideas undergo numerous changes and augmentations and key modulations (the “Development Section”), before returning to the original themes, both now in the home key of the piece (the “Recapitulation Section”), often leading to a final epilogue called the “Coda”.

In all these cases the structure is built out of thematic and other contrasts, and contrast is a word I’m going to be coming back to repeatedly here, because it goes to the core of where music composition and editing come together.

Now the point of using musical structures of this kind is that the listener can form an idea of how the piece is unfolding even when hearing it for the first time. They provide a map that helps you orientate yourself within the music, so it doesn’t come across as just some kind of confused and arbitrary ramble across terrain that’s hard to read. Music that doesn’t come with signposts is not easy to listen to with concentration, precisely because you don’t know where you are. (Of course, the humble pop song illustrates this, too. We can all recognise where the verse ends and the chorus begins and the chorus repetitions give us clear anchor points that help us understand the structure. The difference with the kind of classical music I’m talking about is that a pop song doesn’t have to sustain itself for more than a few minutes, whereas some symphonies last well over an hour and that means structure becomes vastly more important.) 

What structure does is effectively twofold: on the one hand it gives us a sense of comprehensibility, predictability, even familiarity; and on the other hand it allows the composer to surprise us by diverging from what is expected. The second part obviously follows from the first. If we don’t know where we are, then we don’t know what to expect and everything is a constant surprise. And that means nothing is a surprise. We need familiarity and comprehensibility in order to be able to be surprised by the surprises when they come. Conversely, music that is wholly without surprises gets dull very quickly. Just as quickly as music that is all surprise, because again it offers us no anchor points. 

Editing Structure

So what comparisons can we draw with editing in terms of structure? Just as with our fictional movie “composer” sitting at the piano picking out one note after another, so you’ll find that many newcomers to editing believe that that’s how you put together a film. Starting at the beginning, you take your first shot and lay it down, and then you go looking for your next shot and you add that, and then the next one and the next one. Of course, you can build a film this way, but what you are likely to end up with is a shapeless ramble rather than something that’s going to hold the viewer’s attention. It will be the equivalent of a piece of music that has no structural markers and doesn’t give us the clues we need to understand where we are and where we are going. Without those cues the viewer quickly gets lost and we lose concentration. Not understanding the structure means we can’t fully engage with the film.

So how do we go about creating structure in our editing? Music has an inherently much more formal character, so in many ways the composer has an easier job, but I’d suggest that many of the same principles apply.

Light and Shade in Music

Music has so many easy-to-use options to help define structure. We have tempo – how fast or slow the music is at any one point. Rhythm – the manner in which accented notes are grouped with non-accented notes. Pitch – how high or low the musical sounds are. Dynamics – how loud or soft the music is, and how soft becomes loud and vice versa. Key – how far we have moved harmonically from the dominant key of the piece. Mode – whether we are experiencing the bright optimism of a major key or the sombre darkness of a minor key (yes, that’s a huge over-simplification!). Harmony – whether we are moving from the tension of dissonance to the resolution of consonance, or vice versa.

All of these options allow for contrasts – faster/slower, brighter/darker, etc. It’s out of those contrasts that we can build structure. For example, we can set out our theme in a bright, shiny major key with a sprightly rhythm and tempo, and then move into a slow minor key variation shrouded in mystery and suspense. It’s from those contrasts that we grasp the musical structure. And of course moving through those contrasts becomes a journey. We’re not fixed in one place, but instead we’re moving from light to dark, from peaceful to agitated, from tension to resolution, and so on. Music satisfies and nourishes and delights and surprises us, because it takes us on those journeys and because it is structured so that we experience change.

Light and Shade in Editing

So what are the editing equivalents? Let’s start with the easiest scenario and that’s where we are cutting with music. Because music has the properties we’ve discussed above, we can leverage those to give our films the same contrasts. We can change the pace and the mood simply by changing the pace and mood of the music we use. That’s easy and obvious, but very often overlooked. Far too many music-driven pieces are remorselessly monotonous, relying far too heavily for far too long on music of the same pace and mood. That very quickly dissipates the viewer’s engagement for the reasons we have talked about. Instead of feeling as though we are going on a journey of contrasts, we are stuck in one repetitive loop and it’s dull – and that means we stop caring and listening and watching. Instead of underscoring where the film is going, it effectively tells us that the film is going nowhere, except in circles.

(Editing Tip: So here’s a suggestion: if you’re cutting with pre-composed music, don’t let that music dictate the shape of your film. Instead cut the music so it works for you. Make sure you have changes of pace and intensity, changes of key and mode, that work to enhance the moments that are important for your film. Kill the music, or change it, or cut it so that it’s driving towards the moments that really matter. Master it and don’t let it master you. Far too often we see music that steamrolls through everything, obliterating meaning, flattening out the message – music that fails to point up what’s important and de-emphasise what is not. Be in control of your structure and don’t let anything dictate what you are doing, unless it’s the fundamental meaning you are trying to convey.

Footnote: Obviously what I’ve said here about music applies to the soundtrack generally. Sound is one of the strongest structural markers we have as editors. It builds tension and relaxation, it tells us where moments begin and end, it guides us through the shape of the film in a way that’s even more important than the pictures.)

And that brings me to a really important general point. Too many films feel like they are going in circles, because they haven’t given enough thought to when and how the narrative information is delivered. So many film-makers think it’s important to tell us everything as quickly as possible right up front. They’re desperate to make sure they’ve got their message across right here, right now, in its entirety. And then they simply end up recycling stuff we already know and that we care about less and less with each repetition. It’s a bit like a composer piling all his themes and all their variations into the first few bars (a total, unapproachable cacophony) and then being left with nothing new to say for the rest of the piece.

A far better approach is to break your narrative down into a series of key revelations and delay each one as long as you dare. Narrative revelations are your key structural points and you must cherish them and nurture them and give them all the love you can and they will repay you with enhanced audience engagement. Whatever you do, don’t throw them away unthinkingly and too soon. Every narrative revelation marks a way station on the viewer’s journey, and those way stations are every bit as crucial and valuable as their musical equivalents. They are the map of the journey. They are why we care. They are the hooks that make us re-engage.

Tension and Relaxation

This point about re-engagement is important too, and it brings me back to music. Music that is non-stop tension is exhausting to listen to, just as music that is non-stop relaxation quickly becomes dull. As we’ve discussed, good music moves between tension and relaxation the whole time at both the small and the large scale, and that alternation creates and underpins structure. We feel the relaxation, because it has been preceded by tension and vice versa.

And the exact same principle applies to editing. We want the viewer to experience alternating tension and relaxation, moments of calm and moments of frenzied activity, moments where we are absorbing lots of information and moments where we have time to digest it. (Remember, Bergman talking about “inhalation and exhalation”.) Tension/relaxation applies at every level of editing, from the micro-level of the individual cuts to the macro level of whole scenes and whole sequences. 

As viewers we understand very well that a sudden burst of drama after a period of quiet is going to be all the more striking and effective. Conversely we know about the effect of getting our breath back in the calms that come after narrative storms. That’s at the level of sequences, but even within scenes, we know that they work best when the mood and pace are not constant, when they have corners and changes of pace, and their own moments of tension and relaxation. Again it’s those changes that keep us engaged. Constant tension and its opposite, constant relaxation, have the opposite effect. They quickly end up alienating us. The fact is we watch films, because we want to experience that varied journey – those changes between tension and relaxation.

Even at the level of the cut, this same principle applies. I was recently asked by a fellow editor to comment on a flashy piece of cutting that was relentlessly fast, with no shot even as long as half a second. Despite the fact that the piece was only a couple of minutes long, it felt monotonous very quickly – I’d say after barely 20 seconds. Whereas of course, if there had been even just a few well-judged changes of pace, each one of those would have hooked me back in and re-engaged my attention. It’s not about variety for variety’s sake, it’s about variety for structure’s sake.

The French have an expression: “reculer pour mieux sauter”, which roughly means taking a step back so you can jump further, and I think that’s a good analogy for this process. Slower shots in the context of a sequence of faster shots act like “springs”. When faster shots hit slower shots, it’s as if they apply tension to the spring, so that when the spring is released the next sequence of faster shots feels faster and more exciting. It’s the manipulation of that tension of alternating pace that creates exciting visceral cutting, not just relentlessly fast cutting in its own right.

Many great editors build tension by progressively increasing the pace of the cutting, with each shot getting incrementally shorter than the last. We may not be aware of that directly as viewers, but we definitely sense the “accelerated heartbeat” effect. The obvious point to make is that acceleration depends on having started slow, and deceleration depends on having increased the pace. Editing effects are built out of contrasts. It’s the contrasts that create the push/pull effect on the viewer and bring about engagement.

(Editing Tip: It’s not strictly relevant to this piece, but I wanted to say a few words on the subject of cutting to music. Many editors seem to think it’s good practice to cut on the downbeats of the music track and that’s about as far as they ever get. Let’s look at why this strategy is flawed. If our music track has a typical four beats to the bar, the four beats have the following strengths: the first, the downbeat, is the dominant beat; the third beat (often the beat where the snare hits) is the second strongest beat; then the fourth beat (the upbeat); and finally the second beat, the weakest of the four.

Cutting on the downbeat creates a pull of inertia, because of its weight. If you’re only ever cutting on that beat, then you’re actually creating a drag on the flow of your edit. If you cut on the downbeat and the third beat, you create a kind of stodgy marching rhythm that’s also lacking in fluid forward movement. Cutting on the upbeat, however, because it’s an “offbeat”, actually helps to propel you forward towards the downbeat. What you’re effectively doing is setting up a kind of cross-rhythm between your pictures and your music, and that has a really strong energy and flow. But again the trick is to employ variety and contrast. Imagine a drummer playing the exact same pattern in each bar: that would get monotonous very quickly, so what the drummer actually does is to throw in disruptions to the pattern that build the forward energy. He will, for example, de-emphasise the downbeat by exaggerating the snare, or he will even shift where the downbeat happens, and add accents that destabilise the four-square underlying structure. And all that adds to the energy and the sense of forward movement. And that’s the exact principle we should be aiming for when cutting to music.

There’s one other crucial, but often overlooked, aspect to this: making your cut happen on a beat is far less effective than making a specific moment in the action happen on a beat. That creates a much stronger sense of forward-directed energy and a much more satisfying effect of synchronisation overall. But that’s not to say you should only ever cut this way. Again variety is everything, but always with a view to what is going to work best to propel the sequence forward, rather than let it get dragged back. Unless, of course, dragging back on the forward motion is exactly what you want for a particular moment in your film, in which case, that’s the way to go.)

Building Blocks

You will remember that our fictional composer sits down at the piano and picks out his composition note by note. The implicit assumption there is that individual notes are the building blocks of a piece of music. But that’s not how composers work. The very smallest building block for a composer is the motif – a set of notes that exists as a tiny seed out of which much larger musical ideas are encouraged to grow. The operas of Wagner, despite notoriously being many hours long, are built entirely out of short motifs that grow through musical development to truly massive proportions. You might be tempted to think that a motif is the same thing as a riff, but riffs are merely repetitive patterns, whereas motifs contain within them the DNA for vast organic structures and the motifs themselves can typically grow other motifs.

Wagner is, of course, more of an exception than a rule, and other composers work with building blocks on a larger scale than the simple motif. The smallest unit is typically something we call a phrase, which might be several bars long. And then again one would seldom think of a phrase in isolation, since it only really exists as part of a larger thematic whole. If we look at the famous opening to Mozart’s 40th Symphony, we can see that he starts with a two-bar phrase that rises on the last note, which is answered by a phrase that descends back down from that note. The first phrase is then revisited along with its answering phrase – both shifted one step lower.

But that resulting eight bars is only half of the complete theme, while the complete 1st Subject is 42 bars long. So what is Mozart’s basic building block here? It most certainly isn’t a note, or even a phrase. In this case it’s something much more like a combination of a rhythm pattern (da-da-Da) and a note pattern (a falling interval of two adjacent notes). But built into that is a clear sense of how those patterns are able to evolve to create the theme. In other words, it’s complicated.

The fundamental point is that notes on their own are nothing; they are inert; they have no meaning. It’s only when they form sequences that they start to become music.

The reason I wanted to highlight this point is that I think it too gives us a useful insight into the editing process. The layperson tends to think of the single shot as being the basic building block, but just as single notes on their own are inert, so the single shot on its own (typically, unless it’s an elaborate developing shot) is lacking in meaning. It’s when we build shots into sequences that they start to take on life. It’s the dynamic, dialectical interplay of shots that creates shape and meaning and audience engagement. And that means it’s much more helpful to think of shot sequences as the basic building blocks. It’s as sequences that shots acquire the potential to create structure. Shots on their own do not have that quality. So it pays to have an editing strategy that is geared towards the creation and concatenation of “sequence modules”, rather than simply a sifting of individual shots. That’s a huge subject that I won’t go into in any more detail here, but which I’ve written about elsewhere.

Horizontal and Vertical Composition

Although the balance keeps shifting down the ages, music is both horizontal and vertical and exists in a tension between those aspects. Melody is horizontal – a string of notes that flows left to right across the page. Harmony is vertical – a set of notes that coexist in time. But these two concepts are not in complete opposition. Counterpoint is what happens when two or more melodies combine vertically to create harmony. The fugue is one of the most advanced expressions of that concept, but there are many others. It’s a truly fascinating, unresolved question that runs throughout the history of music, with harmony sometimes in the ascendant and sometimes melody.

Melody typically has its own structure, most frequently seen in terms of groups of four bars, or multiples of four bars. It tends to have shapes that we instinctively understand even when hearing it for the first time. Harmony, too, has a temporal structure, even though we more typically think of it as static and vertical. Vertical harmonies tend to suggest a horizontal direction of travel, again based on the notion of tension and relaxation, with dissonance resolving towards consonance. Harmonies typically point to where they are planning to go, although of course, just as with melody, the reason they appeal to us so much is that they can lead us to anticipate one thing and then deliver a surprise twist.

In editing we mostly consider only melody, in other words, how one shot flows into another. But there is also a vertical, harmonic component. It’s only occasionally that we layer our pictures to combine them vertically (see footnote). But we do it almost all the time with sound – layering sound components to add richness and complexity. I suppose one way of looking at this would be to think of pictures as the horizontal melody and the soundtrack as the vertical harmony, or counterpoint.

One obvious way in which we can approach this is to vary the vertical depth to increase and decrease tension. A sound texture that is uniformly dense quickly becomes tiresome. But if we think in terms of alternating moments where the sound is thickly layered and moments where it thins out, then we can again increase and decrease tension and relaxation.

(Footnote: One famous example of vertical picture layering comes in Apocalypse Now, where Martin Sheen is reading Kurtz’s letter while the boat drives upstream towards the waiting horror. Coppola layers up gliding images of the boat’s passage in dissolves that are so long they are more like superimpositions – conveying the sense of the hypnotic, awful, disorientating journey into the unknowable. But again contrast is the key here, because punctuating that vertical layering, Coppola interjects sharp cuts that hit us full in the face: suspended corpses, the burning helicopter in the branches of a tree. The key thing to notice is the counterpoint between the hard cuts and the flowing dissolves/superimpositions. The dissolves lull us into an eerie fugue-like state, while the cuts repeatedly jolt us out of it to bring us face to face with the horror. The point is that they both work together to draw us inexorably towards the climax. The cuts work because of the dissolves, and the dissolves work because of the cuts.)

Moments

The moments that we remember in both music and films are those points where something changes suddenly and dramatically. They are the magical effects that take your breath away. There is an incredibly famous cut in David Lean’s Lawrence of Arabia that is a perfect case in point. Claude Rains (Mr. Dryden) and Peter O’Toole (Lawrence) have been having a lively discussion about whether Lawrence really understands how brutal and unforgiving the desert is going to be. O’Toole insists that “it’s going to be fun”. He holds up a lighted match, and we cut to a close-up as he blows it out. On the sound of him blowing, we cut to an almost unimaginably wide shot of the desert as the sun rises almost imperceptibly slowly in what feels like complete silence. The sudden contrast of the shot size, the sudden absence of sound, the abruptness of cutting on the audio of blowing out the match – all of these make this one of the most memorable moments in film history. And of course, it’s a big narrative moment too. It’s not just clever, it has meaning. 

Or take another famous moment, this time from music. Beethoven’s massive Choral Symphony, the Ninth, is best known for its famous final movement, the Ode to Joy, based on Schiller’s poem of the same name. The finale follows on from a slow movement of celestial tranquillity and beauty, but it doesn’t launch immediately into the music that everyone knows so well. Instead there is a sequence built on the most incredible dissonance, which Wagner referred to as “the terror fanfare”. Beethoven has the massed ranks of the orchestra blast out a phenomenally powerful fortissimo chord that stacks up all seven notes of the D minor harmonic scale. It’s as if we are hearing the foul demons of hatred and division being sent screeching back to the depths of hell. And as the echoes of that terrifying sound are still dying away, we suddenly hear the solo baritone, the first time in nearly an hour of music that we have heard a human voice: “O Freunde, nicht diese Töne“, “Friends, let us not hear these sounds”. And so begins that unforgettable ode to the brotherhood of all mankind.

The point about both the Lawrence of Arabia moment and the Beethoven moment is that in each case, they form giant pivots upon which the whole work turns. The Lawrence moment shows us one crazy Englishman pitting himself against the limitless desert. The Beethoven moment gives us one lone voice stilling the forces of darkness and calling out for something better, something to unite us all. These are not mere stylistic tricks, they are fundamental structural moments that demand our attention and engage us with what each work is really about.

I’m not suggesting that everything we cut is going to have moments on this kind of epic scale, but the principle is one we can always benefit from thinking about and building into our work. When we’re planning our edit, it pays to ask ourselves where we are going to make these big turning points and what we can do with all the means at our disposal to make them memorable and attention-engaging. Our best, most important stuff needs to be reserved for these pivotal moments and we need to do everything we can to do it justice. And the best way of doing that, as Beethoven and David Lean both show us, is to make everything stop.

When the Music Stops

One of my favourite quotes about music comes from arguably the greatest composer ever: “The music is not in the notes, but in the silence between.” Mozart saw that the most magical and profound moments in music are when the music stops. The absence of music is what makes music. To me that is one of the most profound insights in art.

From an editing point of view, that works, too. We need to understand the importance of not cutting, of not having sound, of not filling every gap, of creating breaths and pauses and beats, of not rushing onto the next thing, of allowing moments to resonate into nothingness, of stepping away and letting a moment simply be.

The temptation in editing is always to fill every moment with something. It’s a temptation we need to resist wherever we can. Our films will be infinitely better for it. Because it’s in those moments that the magic happens.

Composing and Editing with Structure

I hope by now you’ll agree with me about the fundamental importance of structure in editing. So let’s come back to our original image of the composer hammering out his piece of music note by note, and our novice editor laying out his film shot by shot.

It should be obvious that a composer needs to pre-visualise the structure of the piece before starting to think about the individual notes. At every level of the structure he needs to have thought about where the structural changes might happen – both on a large and small scale. He needs to plan the work in outline: where the key changes are going to happen, where the tempo shifts from fast to slow or slow to fast, where the tension escalates and where it subsides, where the whole orchestra is playing as one and where we hear just one solitary solo line. 

It goes without saying that very few composers have ever plotted out an entire work in detail and then stuck rigidly to the plan. But that’s not the point. The plan is just a plan until a better one comes along. The joy of composition is that it throws up its own unexpected surprises, ideas that grow organically out of other ideas and mushroom into something bigger, better and more complex than the composer could envisage when starting out. But those ideas don’t just shoot off at random. They train themselves around the trelliswork of the original structure. 

As I’ve mentioned, classical composers have it easy, because they can build upon pre-conceived structures like Sonata Form and the rest.  As editors we don’t have access to the same wealth of ready-built conventions, but we do have a few. 

One of the structures that we very frequently call upon is the famous three-act structure. It works not only for narrative, but for pretty much any kind of film you can think of. The three-act structure does in fact have a lot in common with Sonata Form. Act One is the Exposition, where we set out the themes to be addressed. Act Two is the Development Section, where the themes start to get complicated and we unravel the problems and questions that they pose. And Act Three is the Recapitulation (and Coda), where we finally resolve the themes set out in Act One. Almost anything you cut at whatever length can benefit from being thought of in these structural terms: a) set out your theme or themes; b) develop your themes and explore their complexities; c) resolve your themes (or at least point to ways in which they might be resolved). And make sure your audience is aware of how those sections break down. As an editor who has spent a lot of my working life cutting movie trailers, I know that every experienced trailer editor deploys three-act structure pretty much all the time and works it very hard indeed.

 Of course, scripted drama comes into the cutting room with its own prebuilt structure, but the script is by no means necessarily the structural blueprint for the finished film. Thinking about how to structure what was actually shot (as against what was on the page) is still vitally important. The originally conceived architecture might well not actually function as it was planned, so we can’t simply rely on that to deliver a film that will engage as it should. The principles that we’ve discussed of large scale composition, of pace, of contrast, of rhythm, and so on are all going to be useful in building a structure that works for the finished film.

Other kinds of filmmaking rely heavily on structural planning in the cutting room and a huge amount of work can go into building the base architecture. And it really helps if we think of that structural planning as more than simply shifting inert blocks into a functional whole. If we take inspiration from the musical concepts described here, we can create films that breathe a far more dynamic structural rhythm, that become journeys through darkness and light, through tension and relaxation, between calm and storm, journeys that engage and inspire.

Conclusion

Obviously this is just an overview of what is in reality a huge subject, but what I want to stress is that it really pays to be open to thinking about the processes of editing from different perspectives. Music, as a time-based art form, has so many useful lessons to draw from, both in terms of large scale architecture and small scale rhythms, dynamics, colours, and more. And those lessons can help us to make much more precise, refined and considered decisions about editing practice, whatever we are cutting.

– Simon Ubsdell

For more of Simon’s thoughts on editing, check out his blog post Bricklayers and Sculptors.

© 2018 Simon Ubsdell, Oliver Peters

Mary Queen of Scots

Few feature film editors have worked on such a diverse mix of films as Chris Dickens. His work ranges from Shaun of the Dead to Les Misérables, picking up an Oscar along the way for editing Slumdog Millionaire. His latest film is Mary Queen of Scots, starring Gemma Chan, Margot Robbie, and Saoirse Ronan. This historical drama is helmed by Josie Rourke (Much Ado About Nothing), an experienced theatre director who has also worked on film and TV projects. Readers will be familiar with Dickens from my Hot Fuzz interview. I recently had the pleasure to chat with him again about Mary Queen of Scots.

______________________________________________________

[OP] I know that there’s a big mindset difference between directing for the stage and directing for film. How was it working with Josie Rourke for this film?

[CD] She was very solid with the actors’ performances and how to rehearse them. There are great performances and that was the major thing she was concentrating on. She knew about the creative side of filmmaking, but not about the technical. We were essentially helping her with that to get what she wanted on screen. It’s a dialogue-driven movie and so she was very at home with that. But we had to work with her to adapt her normal approach for the screen, such as when to use images instead of dialogue.

Filmmaking is all about seeing something more than you can just see with the naked eye. Plus seeing emotionally what an actor is delivering. The way they’re doing it is different than on stage. It’s smaller. Film acting is much subtler. I don’t think we ever had a difference of opinion about that. It was more that in the theatre you are trying to communicate things through an actor’s movement and language and not so much through their eyes and the subtleties of their face. With film, one close-up of an actor can do more than a whole page of dialogue. Nevertheless, she certainly gave the cameramen freedom, while she concentrated on performance. And she shot all of that stuff so we had enough to use to make it work on the screen.

[OP] Did that dynamic affect how and where you edited?

[CD] I was mostly at the studio, but I did go on location with them. We shot at Pinewood and then on location in Scotland and around England. I went up to Scotland, where we had some action scenes, to help with that. Josie needed feedback about what she was shooting and needed to see it quickly. I also did some second unit shooting and things like that.

[OP] Typically, period dramas require extensive visual effects to disguise modern locations and make them appear historically appropriate. They also are still frequently shot on film. What was the case with this film?

[CD] It was shot digitally. The DoP [John Mathieson, Logan, The Man from U.N.C.L.E., X-Men: First Class] would have preferred film, because of the genre, but that would have been too expensive. There were always two and sometimes three cameras for most set-ups. But, there are very few visual effects. Just a few clean-ups. There is an epic feel, but that’s not the main direction. The film is a more psychological story about these two women, the Queen of England and the Queen of Scotland. They are opposed to each other, but also like each other. It’s about their relationship and the sort of psychological connection between them. The story is more intimate in that way. So it’s about the performance and the subtleties of that story.

[OP] Walk me through the production and post timeline.

[CD] We shot it a year ago last August for about three months. I assembled the film during that time and then we started the director’s cut in October of last year. We actually had a long edit and didn’t finish until July of this year. I think we spent about thirteen weeks doing a director’s cut. Then the producer’s cut, and then, a director’s cut again. I think we did about two or three test screenings and we had sound editors on board quite early. In fact, we never stopped cutting almost right until the end. If you have a lot of screenings, everyone involved with the film wants to do a lot of changes and it keeps happening right down to the wire. So we basically carried on cutting almost right through till the middle of June.

[OP] It sounds like you had more changes than usual for most film edits – especially after your test screenings. Tell me more.

[CD] The core of the film is about Mary, who was a Catholic Queen, and Elizabeth, who was a Protestant Queen. Mary had the claim to be not just Queen of Scotland, but Queen of England, as well. She’s a threat to Elizabeth, so the film is about that threat. These women essentially had an agreement between them. Elizabeth agreed that Mary’s child would succeed her if she died. This was a private agreement between the two women. The men around them who are in their government are trying to stop them from interacting with each other and having any kind of agreement. So it’s about women in a very archaic world. They are leaders, but they are not men, and the system around them is not happy for them to be leaders. This was the first time ever that there was a queen in either country – and at the same time.

The theme is kind of modern, so the script – written by Beau Willimon, who writes House of Cards – was a bit like a political drama. In his writing, he intercuts scenes to give it a modern, more interesting feel. I followed that pattern – crosscutting scenes and stuff like that. When we started screening, a lot of people found that difficult to understand, so we went the other way around. We put things together and made the structure more classic. But when we then started screening it again, we realized that the film had ceased to be unique. It started becoming more like other dramas from this genre. So we put it all the way back to how it originally was. We went back to the spirit of what Beau had written and did more intercutting, but in different places. That is why it took so long to cut the film, because the balance was difficult to arrive at. Often a script is written in a very linear fashion and you cut it up later. But in this case it was the opposite way around.

If you listen too much to the audience or even producers of the film you can lose what makes it unique. The hands of the director are very important. Particularly here, because this is a women’s story, directed by a woman director, and it was very important to preserve that point of view, which could very easily be eroded. She wrote it with Beau and he doesn’t explain everything. He doesn’t have characters telling you how they got to a certain place or why. We needed to preserve that, but we also needed to let people into the story a little more. So we had to make adjustments to allow an audience to understand it.

[OP] I’m sure that such changes, as with every film, affected its final length. How was Mary Queen of Scots altered through these various cuts and recuts?

[CD] The original cut was about two hours and 45 minutes, but we ended up at an hour and 55. To get there, we started to cut back on the more epic scenes within the film. For instance, we had a battle scene early on in the film and there was a battle at the end of the film where Mary is beaten and expelled from Scotland. They didn’t really have the budget for a classic battle like in Braveheart. It was a slightly more impressionistic battle – more abstract and about how it feels. It was a beautiful sequence, but we found that the film didn’t need that. It just didn’t need to be that complete. We had to make a lot of choices like that – cutting things down.

We cut nearly an hour of material, which obviously I’m used to doing. However, what we found is that, because it was a performance piece, by cutting it down so far, we also lost a little bit of the air between scenes. It became quite brutal – just story without any kind of feeling. So once we got the story working well, we then had to breathe life back into it. I literally went all the way back to the first edit of the film and looked at what was good about it in terms of the life and the subtleties. Then we very carefully started putting that back into the film. When you screen the film for audiences, you get very tunneled into making the story tighter and understandable, which is often at the expense of quite a lot. It’s an interesting part of the process – going back to the core of the story. You always have to do that. Sometimes you lose a little through the editing process and then you have to try and get it back.

We also had quite a lot of work on music. We had a composer on board [Max Richter, White Boy Rick, Hostiles, Morgan] quite early and he gave us a lot of ideas. But, as we changed the edit, we had to change the direction of the music somewhat. Of course, this also contributed to the length of the editing schedule. 

[OP] Music can certainly make or break a film. Some editors start with it right away and others wait until the end to play with options. It sounds like music was a bit of a challenge.

[CD] I normally go with it dry at the beginning. When I start putting the scenes together I tend to start using temp music. But I try to avoid it for as long as possible – even into the director’s cut. I think sometimes you can just use it as a bandage if you’re not careful. But on this film, we had a very specific tone that we needed to sell. It was a slightly more modern, suspenseful take on the music. We did end up using music a little earlier than I would have hoped.

We had a cut of the film and we had a soundtrack, but we were constantly changing it – trying new things – as the edit changed. The music was more avant garde to start with and that was our intention, but the studio wanted it to be a little more melodic. The composer is very respected in the classical world, so he took that on board and wrote some themes for us that took it in a slightly different direction. He would write something – maybe not even to picture – and then give us the stems. The music editor and I would edit the music and try it out in different places. Then the composer would see what we had done with it to picture. We would then give it back to him. He would do a bit more work and give it back to us. It was actually a very unusual process.

[OP] With such a diverse set of films under your belt, what are some of your tips in tackling a scene?

[CD] I go through the rushes and try to watch everything that they shot. If there are A and B cameras, then I try to watch the B camera, as well. You get different emotional things from that, since it is a different angle. In the ideal situation when there’s time, I watch everything, mark what I like, and then make a roll with all my selected takes. Then I watch it again. I prune it down even more and then start a cut. Ideally, I try to find one take that works all the way through a scene as my first port of call. Then I go through the roll of my selects and look at what I marked and what I liked and try to work those things into the cut. I look at each one to see if that’s the best performance for that line and I literally craft it like that.

When you’ve watched half of a roll of rushes, you don’t know how to cut the scene. But once you’ve watched it all – everything they’ve shot – you then can organize the scene in your head. The actual cutting is quite quick then. I tend to watch it and think, ‘Okay I know what I’m going to do for the first cut. I’m going to use that shot for the beginning, that bit for the end, and so on.’ I map it in my head and quickly put that together with largely the selected takes that I like. Then I watch it and start refining it, honing it, and going through the roll again – adding things. Of course that depends on time. If I don’t have much time, I have to work fast, so I can’t do that all the time.

[OP] Any closing thoughts to wrap this up?

[CD] The experience of editing Mary Queen of Scots really reminded me how important it is to stick to the original intention and ambition of the film and make editorial decisions based on that. This doesn’t mean sticking to the letter of the script, but looking at how to communicate its intent overall. Film editing, of course, always means lots of changes and so it’s easy to get lost. Therefore, going back to the original thought always helps in making the right choices in the end.

© 2018 Oliver Peters

The Old Man & the Gun

Stories of criminal exploits have long captivated the American public. But no story is quirkier than that of Forrest Silva “Woody” Tucker. He was a lifelong bank robber and escape artist who was in and out of prison. His most famous escape came in 1979 from San Quentin State Prison. His last crimes were a series of bank robberies around the Florida retirement community where he lived. He was captured in 2000 and died in prison in 2004 at the age of 83. Apparently good at his job – he stole an estimated four million dollars over his lifetime – Tucker was aided by a set of older partners, dubbed the “Over the Hill Gang”. His success, in part, was because he tended to rob lower profile, local banks and credit unions. While he did carry a gun, it seems he never actually used it in any of the robberies.

The Old Man & the Gun is a semi-fictionalized version of Tucker’s story brought to the screen by filmmaker David Lowery (A Ghost Story, Pete’s Dragon, Ain’t Them Bodies Saints). It stars Robert Redford as Tucker, along with Danny Glover and Tom Waits as his gang. Casey Affleck plays John Hunt, a detective who is on his trail. Sissy Spacek is Jewel, a woman who takes an interest in Tucker. Lowery wrote the script in a romanticized style that is reminiscent of how outlaws of the old west are portrayed. The screenplay is based on a 2003 article in The New Yorker magazine by David Grann, which chronicled Tucker’s real-life exploits.

David Lowery is a multi-talented filmmaker with a string of editing credits. (He was his own editor on A Ghost Story.) But for this film, he decided to leave the editing to Lisa Zeno Churgin, A.C.E. (Dead Man Walking, Pitch Perfect, Cider House Rules, House of Sand and Fog), with whom he had previously collaborated on Pete’s Dragon. I recently had the opportunity to chat with Churgin about working on The Old Man & the Gun.

___________________________________

[OP] Please tell me a bit about your take on the story and how the screenplay’s sequence ultimately translated into the finished film.

[LZC] The basis of Redford’s character is a boy who started out stealing a bicycle, went to reform school when he was 13, and it continued along that way for the rest of his life. Casey Affleck is a cop in the robbery division who takes it as a personal affront when the bank where he is trying to make a deposit gets robbed. He makes it his mission to discover who did it, which he does. But because it crosses state lines, the case gets taken over by the FBI. Casey’s character then continues the search on his own. It’s a wonderful cat and mouse game.

There are three storylines in the film. The story begins when Tucker is leaving the scene of a robbery and pulls over to the side of the road to help Jewel [Sissy Spacek] while evading the police on his trail. Their story provides a bit of a love interest.  The second storyline is that of the “Over the Hill Gang”. And the third storyline is the one between Tucker and Hunt. It’s not a particularly linear story, so we were always balancing these three storylines. Whenever it started to feel like we’d been away too long from a particular storyline and set of characters, it was time to switch gears.

Although David wrote the script, he wasn’t particularly overprotective of it. As in most films, we experimented a lot, moving scenes around to make those three main stories find their proper place. David dressed Redford in the same blue suit for the entire movie with occasional shirt or tie changes. This made it easier to shift things than when you have costume constraints. Often scenes ended up back where they started, but a lot of times they didn’t – just trying to find the right balance of those three stories. We had absolute freedom to experiment, and because David is a writer, director, and an editor in his own right, he really understands and appreciates the process.

The nature of this film was so unique, because it is of another time and place [the 1980s], but still modern in its own way. I also see it partly as an homage to Bob [Redford], because this is possibly his last starring role. Shooting on 16mm film certainly lends itself to another time and place. The score is a jazz score. That jazz motor places it in time, but also keeps it contemporary. As an aside, a nice touch is when Casey visits Redford in the hospital and he does a little ‘nose salute’ from The Sting, which was Casey’s idea.

[OP] On some films the editor is on location, keeping up to camera with the cut. On others, the editing team stays at a home base. For The Old Man & the Gun, you two were separated during the initial production phase. Tell me how that was handled.

[LZC] David was filming in Cincinnati and I was simultaneously cutting in LA. Because it was being shot on film, they sent it to Fotokem to be developed and then to Technicolor to be digitized. Then it was brought over to us on a drive. When you don’t get to watch dailies together, which is pretty much the norm these days, I try to ask the director to communicate with the script supervisor as much as possible while they are shooting: circled takes, particular line readings, any idea that the director might want to communicate to the editor. That sort of input always helps. Their distant location and the need to process film meant it would be a few days before I got the film and before David could see a scene that he’d shot, cut together. Getting material to him as quickly as possible is the best thing that I can do. That’s always my goal.

When I begin cutting a scene, I start by loading a sequence of all of the set-ups and then scroll through this sequence (what most editors who worked on film call a KEM roll) so that I can see what has been shot. Occasionally, I’ll put together selects, but generally I just start at the beginning and go cut to cut. The hardest part is always figuring out what’s going to be the first cut. Are we going to start tight? Are we going to start wide where we show everything? What is that first cut going to be? I seem to spend more time on that than anything else and once I get into it – and I’m not the first person to say this – the film tells you what to do. My goal is to get it into form as quickly as possible, so I can get a cut back to the director.

I finished the editor’s cut in LA and then we moved the cutting room to Dallas. Then David and I worked on the director’s cut – traditionally ten weeks – and after that, we showed it to the producers. Our time was extended a bit, because we had to wait for Bob’s availability to shoot some of the robbery sequences. They always knew that they were going to have to do some additional filming.

[OP] I know David is an experienced editor. How did you divide up the editorial tasks? Or was David able to step back from diving in and cutting, too?

[LZC] David is an excellent editor in his own right, but he is very happy to have someone else do the first pass. On this film I think he was more interested in playing around with some of the montage sequences. Then he’d hand it back to me so that I could incorporate it back into the film, sometimes making changes that kept it within the style of the film as a whole.

[OP] The scenes used in a film and the final length are always malleable until the final version of the cut. I’m sure this one was no different. Please tell me a bit about that.

[LZC] We definitely lost a fair number of scenes. My assistant makes scene cards that we put up on the wall and then when we lift a scene it goes on the back of the door. That way, you can just open the door and look on the back and see what has been taken out. In this particular film, because of the three separate storylines, scenes went in, came out, and were rearranged – and then in, out, and rearranged again. Often, scenes that we dropped at the very beginning ended up back in the movie, because it’s like a house of cards. You know you really have to weigh everything and try to juxtapose and balance the storylines and keep it moving. The movie is quite short now, but my first cut wasn’t that long either. The final cut is 94 minutes and I think the first cut wasn’t much more than two hours.

[OP] Let me shift gears a bit. As I understand it, David is a fan of Adobe Creative Cloud and in particular, Premiere Pro. On The Old Man & the Gun, you shifted to Premiere Pro, as well. As someone who comes from a film and Avid editorial background, how was it to work with Premiere Pro?

[LZC] Over the course of my career, I’ve done what we call ‘doctor jobs’, where an editor comes in and does a recut of a film. On some of these jobs, I had the opportunity to work on Lightworks and on Final Cut. When we began Pete’s Dragon, David asked if I would consider doing it on Premiere Pro. David Fincher’s team had just done Gone Girl using it and David was excited about the possibility of doing Pete’s using Premiere. But for a big visual effects film, Premiere at that stage really wasn’t ready. I said if we do another film together, I’d be happy to learn Premiere. So, when we knew we would be doing Old Man, David spoke to the people at Adobe. They arranged to have Christine Steele tutor me. I worked with her before we began shooting. It was perfect, because we live close to each other and we were able to work in short, three- and four-hour blocks of time. (Note: Steele is an LA-based editor, who is frequently a featured presenter for Adobe.)

I also hired my first assistant, Mike Melendi, who was experienced with Premiere Pro. It was definitely a little intimidating at first, but within a week, I was fine. I actually ended up doing another film on Avid afterwards and I was a little nervous to go back to Avid. But that was like riding a bike. And after that, I took over another film that was on Premiere. Now I know I can go back and forth and that it’s perfectly fine.

[OP] Many feature film editors with an extensive background on Media Composer often rely on Avid’s script integration tools (ScriptSync). That’s something Premiere doesn’t have. Any concerns there?

[LZC] I think ScriptSync is the most wonderful thing in the world, but I grew up without it. When my assistants prepare dailies for me, they’ll put in a bunch of locators, so I know where there are multiple takes within a take. I think ScriptSync is great if you can get the labor of somebody to do it. I know there are a lot of editors who do it themselves while they’re watching dailies. I worked on a half-hour comedy where there was just a massive amount of footage and a tremendous amount of ‘keep rollings’. After working for one week I said to them, ‘We have to get ScriptSync’. And they did! We had a dedicated person to do it and that’s all they did. It’s a wonderful luxury, which I would always love to have, but because I learned without it, I’ve created other ways to work without it.

My biggest issue with Premiere was the fact that, because I always work in the icon view and not list view, I had to contend with their grid arrangement within the bins. With Media Composer, you can arrange your clips however you want. Adobe knew that it was a really big issue for me and for other editors, so they are working on a version where you can move and arrange the clips within a bin. I’ve had the opportunity to give input on that and I know we’ll see that changed in a future version.

I would love to keep working on Premiere. Coming back to it again recently, I felt really confident about being able to go back and forth between the two systems. But some directors and studios have specific preferences. Still, I think it would be a lot of fun to continue working in Premiere.

[OP] Any final thoughts on the experience?

[LZC] I enjoyed the opportunity to work on such a wonderful project with such great actors. For me as an editor, that’s always my goal – to work with great performances. To help have a hand in shaping and creating wonderful moments like the ones we have in our film. I hope others feel that we achieved that.

For more, check out Adobe’s customer stories and blog. Also Steve Hullfish’s Art of the Cut interview.

This interview transcribed with the assistance of SpeedScriber.

©2018 Oliver Peters

Beyond the Supernova

No one typifies hard-driving, instrumental guitar rock better than Joe Satriani. The guitar virtuoso – known to his fans as Satch – has sixteen studio albums under his belt, along with several other EPs, live concert, and compilation recordings. In addition to his solo tours, Satriani founded “G3”, a series of short tours that feature Satriani along with a changing cast of two other all-star solo guitarists, such as Steve Vai, Yngwie Malmsteen, Guthrie Govan, and others. In another side project, Satriani is the guitarist for the supergroup Chickenfoot, which is fronted by former Van Halen lead singer Sammy Hagar.

The energy behind Satriani’s performances was captured in the new documentary film, Beyond the Supernova, which is currently available on the Stingray Qello streaming channel. This documentary grew out of the general behind-the-scenes coverage of Satriani’s 2016 and 2017 tours in Asia and Europe, to promote his 15th studio album, Shockwave Supernova. Tour filming was handled by Satriani’s son, ZZ (Zachariah Zane) – an up-and-coming young filmmaker. The tour coincided with Joe Satriani’s 60th birthday and the 30th anniversary of his multi-platinum album Surfing with the Alien. These elements, as well as capturing Satriani’s introspective nature, provided the ingredients for a more in-depth project, which ZZ Satriani produced, directed and edited.

According to Joe Satriani in an interview on Stingray’s PausePlay, “ZZ was able to capture the real me in a way that only a son would understand how to do; because I was struggling with how I was going to record a new record and go in a new direction. So, as I’m on the tour bus and backstage – I guess it’s on my face. He’s filming it and he’s going ‘there’s a movie in here about that. It’s not just a bunch of guys on tour.’”

From music to filmmaking

ZZ Satriani graduated from Occidental College in 2015 with a BA in Art History and Visual Arts, with a focus on film production. He moved to Los Angeles to start a career as a freelance editor. I spoke with ZZ Satriani about how he came to make this film. He explained, “For me it started with skateboarding in high school. Filmmaking and skateboarding go hand-in-hand. You are always trying to capture your buddies doing cool tricks. I gravitated more to filmmaking in college. For the 2012 G3 Tour, I produced a couple of web videos that used mainly jump cuts and were very disjointed, but fun. They decided to bring me on for the 2016 tour in order to produce something similar. But this time, it had to have more of a story. So I recorded the interviews afterwards.”

Although ZZ thinks of himself as primarily an editor, he handled all of the backstage, behind-the-scenes, and interview filming himself, using a Sony PXW-FS5 camera. He comments, “I was learning how to use the camera as I was shooting, so I got some weird results – but in a good way. I wanted the footage to have more of a filmic look – to have more the feeling of a memory, than simply real-time events.”

The structure of Beyond the Supernova intersperses concert performances with events on the tour and introspective interviews with Joe Satriani. The multi-camera concert footage was supplied by the touring support company and is often mixed with historical footage provided by Joe Satriani’s management team. This enabled ZZ to intercut performances of the same song, not only from different locations, but even different years, going back to Joe Satriani’s early career.

The style of cutting the concert performances is relatively straightforward, but the travel and interview bridges that join them together have more of a stream-of-consciousness feel to them and are often quite psychedelic. ZZ says, “I’m not a big [Adobe] After Effects guy, so all of the ‘effects’ are practical and built up in layers within [Adobe] Premiere Pro. The majority of ‘effects’ dealt with layering, blending and cropping different clips together. It makes you think about the space within the frame – different shapes, movement, direction, etc. I like playing around that way – you end up discovering things you wouldn’t have normally thought of. Let your curiosity guide you, keep messing with things and you will look at everything in a new way. It keeps editing exciting!”

Premiere Pro makes the cut

Beyond the Supernova was completely cut and finished in Premiere Pro. ZZ explains why, “Around 2011-12, I made the switch from [Apple] Final Cut Pro to Premiere Pro while I was in a film production class. They informed us that it was the new standard, so we rolled with it and the transition was very smooth. I use other apps in the Adobe suite and I like the layout of everything in each one, so I’ve never felt the need to switch to another NLE.”

ZZ Satriani continues, “We had a mix of formats to deal with, including the need to upscale some of the standard definition footage to HD, which I did in software. Premiere handled the PXW-FS5’s XAVC-L codec pretty well in my opinion. I didn’t transcode to ProRes, since I had so much footage, and not a lot of external hard drive space. I knew this might make things go more slowly – but honestly, I didn’t notice any significant drawbacks. I also handled all of the color correction, using Premiere’s Lumetri color controls and the FilmConvert plug-in.” Satriani created the sound design for the interview segments, but John Cuniberti (who has also mixed Joe Satriani’s albums) re-mixed the live concert segments in his studio in London. The final 5.1 surround mix of the whole film was handled at Skywalker Sound.

The impetus pushing completion was entry into the October 2017 Mill Valley Film Festival. ZZ says, “I worked for a month putting together the trailer for Mill Valley. Because I had already organized the footage for this and an earlier teaser, the actual edit of the film came easily. It took me about two months to cut – working by myself in the basement on a [2013] Mac Pro. Coffee and burritos from across the street kept me going.” 

Introspection brings surprises

Fathers and sons working together can often be an interesting dynamic and even ZZ learned new things during the production. He comments, “The title of the film evolved out of the interviews. I learned that Joe’s songs on an album tend to have a theme tied to the theme of the album, which often has a sci-fi basis to it. But it was a real surprise to me when Joe explained that Shockwave Supernova was really his character or persona on stage. I went, ‘Wait! After all these years, how did I not know that?’”

As with any film, you have to decide what gets cut and what stays. In concert projects, the decision often comes down to which songs to include. ZZ says, “One song that I initially thought shouldn’t be included was Surfing with the Alien. It’s a huge fan favorite and such an iconic song for Joe. Including it almost seemed like giving in. But, in a way it created a ‘conflict point’ for the film. Once we added Joe’s interview comments, it worked for me. He explained that each time he plays it live that it’s not like repeating the past. He feels like he’s growing with the song – discovering new ways to approach it.”

The original plan for Beyond the Supernova after Mill Valley was to showcase it at other film festivals. But Joe Satriani’s management team thought that it coincided beautifully with the release of his 16th studio album, What Happens Next, which came out in January of this year. Instead of other film festivals, Beyond the Supernova made its video premiere on AXS TV in March and then started its streaming run on Stingray Qello this July. Qello is known as a home for classic and new live concerts, so this exposes the documentary to a wider audience. Whether you are a fan of Joe Satriani or just rock documentaries, ZZ Satriani’s Beyond the Supernova is a great peek behind the curtain into life on the road and some of the thoughts that keep this veteran solo performer fresh.

Images courtesy of ZZ Satriani.

©2018 Oliver Peters

Wild Wild Country

Sometimes real life is far stranger than fiction. Such is the tale of the Rajneeshees – disciples of the Indian guru Bhagwan Shree Rajneesh – who moved to Wasco County, Oregon in the 1980s. Their goal was to establish a self-contained, sustainable, utopian community of spiritual followers, but the story quickly took a dark turn. Conflicts with the local Oregon community escalated, including the single largest bioterror attack in United States history, when a group of followers poisoned 751 people at ten local restaurants through intentional salmonella contamination.

Additional criminal activities included attempted murder, conspiracy to assassinate the U.S. Attorney for the District of Oregon, arson, and wiretapping. The community was largely controlled by Bhagwan Shree Rajneesh’s personal secretary, Sheela Silverman (Ma Anand Sheela), who served 29 months in federal prison on related charges. She moved to Switzerland upon her release. Although the Rajneeshpuram community is no more and its namesake is now deceased, the community of followers lives on as the Osho International Foundation. This slice of history has now been chronicled in the six-part Netflix documentary Wild Wild Country, directed by Chapman and Maclain Way.

Documentaries are truly an editor’s medium. More so than in any other cinematic genre, the final draft of the script is written in the cutting room. I recently interviewed Wild Wild Country’s editor, Neil Meiklejohn, about putting this fascinating tale together.

Treasure in the archives

Neil Meiklejohn explains, “I had worked with the directors before to help them get The Battered Bastards of Baseball ready for Sundance. That is also an Oregon story. While doing their research at the Oregon Historical Society, the archivist turned them on to this story and the footage available. The 1980s was an interesting time in local broadcast news, because that era marked the transition from film to video. Often stories were shot on film and then transferred to videotape for editing and airing. Many times stations would simply erase the tape after broadcast and reuse the stock. The film would be destroyed. But in this case, the local stations realized that they had something of value and held onto the footage. Eventually it was donated to the historical society.”

“The Rajneeshees on the ranch were also very proud of what they were doing – farming and building a utopian city – so, they would constantly invite visitors and media organizations onto the ranch. They also had their own film crews documenting this, although we didn’t have as much access to that material. Ultimately, we accumulated approximately 300 hours of archival media in all manner of formats, including Beta-SP videotape, ripped DVDs, and the internet. It also came in different frame rates, since some of the sources were international. On top of the archival footage, the Ways also recorded another 100 hours of new interviews with many of the principals involved on both sides of this story. That was RED Dragon 6K footage, shot in two-camera, multi-cam set-ups. So, pretty much every combination you can think of went into this series. We just embraced the aesthetic defects and differences – creating an interesting visual texture.”

Balancing both sides of the story

“Documentaries are an editor’s time to shine,” continues Meiklejohn. “We started by wanting to tell the story of the battle between the cult and the local community without picking sides. This really meant that each scene had to be edited twice. Once from each perspective. Then those two would be combined to show both sides as point-counterpoint. Originally we thought about jumping around in time. But, it quickly became apparent that the best way to tell the story was as a linear progression, so that viewers could see why people did what they did. We avoided getting tricky.”

“In order to determine a structure to our episodes, we first decided the ‘ins’ and ‘outs’ for each and then the story points to hit within. Once that was established, we could look for ‘extra gold’ that might be added to an episode. We would share edits with our executive producers and Netflix. On a large research-based project like this, their input was crucial to making sure that the story had clarity.”

Managing the post production

Meiklejohn normally works as an editor at LA post facility Rock Paper Scissors. For Wild Wild Country, he spent ten months in 2017 at an ad hoc cutting room located at the offices of the film’s executive producers, Jay and Mark Duplass. His set-up included Apple iMacs running Adobe Creative Cloud software, connected to an Avid ISIS shared storage network. Premiere Pro was the editing tool of choice.

Meiklejohn says, “The crew was largely the directors and myself. Assistant editors helped at the front end to get all of the media organized and loaded, and then again when it came time to export files for final mastering. They also helped to take my temp motion graphics – done in Premiere – and then polish them in After Effects. These were then linked back into the timeline using Dynamic Link between Premiere and After Effects. Chapman and Maclain [Way] were very hands-on throughout, including scanning in stills and prepping them in Photoshop for the edit. We would discuss each new segment to sort out the best direction the story was taking and to help set the tone for each scene.”

“Premiere Pro was the ideal tool for this project, because we had so many different formats to deal with. It dealt well with the mess. All of the archival footage was imported and used natively – no transcoding. The 6K RED interview footage was transcoded to ProRes for the ‘offline’ editing phase. A lot of temp mixing and color correction was done within Premiere, because we always wanted the rough cuts to look smooth with all of the different archival footage. Nothing should be jarring. For the ‘online’ edit, the assistants would relink to the full-resolution RED raw files. The archival footage was already linked at its native resolution, because I had been cutting with that all along. Then the Premiere sequences were exported as DPX image sequences with notched EDLs and sent to EFILM, where color correction was handled by Mitch Paulson. Unbridled Sound handled the sound design and mix – and then Encore handled mastering and 1080p deliverables.”

Working with 400 hours of material and six hour-long episodes in Premiere might be a concern for some, but it proved flawless for Meiklejohn. He continues, “We worked the whole series as one large project, so that at any given time, we could go back to scenes from an earlier episode and review and compare. The archival material was organized by topic and story order, with corresponding ‘selects’ sequences. As the project became bigger, I would pare it down by deleting unnecessary sequences and saving a newer, updated version. So, no real issue by keeping everything in a single project.”

As with any real-life event, where many of the people involved are still alive, opinions will vary as to how balanced the storytelling is. Former Rajneeshees have both praised and criticized the focus of the story. Meiklejohn says, “Sheela is one of our main interview subjects and in many ways, she is both the hero and the villain of this story. So, it was interesting to see how well she has been received on social media and in the public screenings we’ve done.”

Wild Wild Country shares a pointed look into one of the most bizarre clashes in the past few decades. Meiklejohn says, “Our creative process was really focused on the standoff between these two groups and the big inflection points. I tried to let the raw emotions that you see in these interviews come through and linger a bit on-screen to help inform the events that were unfolding. The story is sensational in and of itself, and I didn’t want to distract from that.”

For more information, check out Steve Hullfish’s interview at Art of the Cut.

Originally written for CreativePlanetNetwork.

©2018 Oliver Peters

Editing the FX Series Atlanta

Atlanta just wrapped its second season on the FX Network. The brainchild of actor/writer/producer/director Donald Glover, Atlanta is the story of Earn Marks, a Princeton drop-out who returns home to Atlanta, where he decides to manage his cousin’s rap career. The show is very textural and plot is secondary. It loosely follows Earn and the people in his life – specifically his cousin, Paper Boi, an up-and-coming rapper, and his friend and posse-mate, Darius.

The visual architect of the show is director Hiro Murai, who has directed the majority of the episodes. He has set an absurdist tone for much of the story. Any given episode can be wildly different from the episodes that come on either side of it. The episodes taken as a whole make up what the series is about.

I recently had a chance to interview the show’s editors, Kyle Reiter and Isaac Hagy, about working on Atlanta and their use of Adobe Premiere Pro CC to edit the series.

Isaac Hagy: “I have been collaborating with Hiro for years. We went to college together and ever since then, we’ve been making short films and music videos. I started out doing no-budget music videos, then eventually moved into documentaries and commercials, and now television. A few years ago, we made a short film called Clapping for the Wrong Reasons, starring Donald. That became kind of an aesthetic precursor that we used in pitching this show. It served as a template for the tone of Atlanta.”

“I’ve used pretty much every editing software under the sun – cutting short films in high school on iMovie, then Avid in college when I went to film school at USC. Once I started doing short film projects, I found Final Cut Pro to be more conducive to quick turnarounds than Avid. I used that for five or six years, but then they stopped updating it, so I needed to switch over to a more professional alternative. Premiere Pro was the easiest transition from Final Cut Pro and, at that time, Premiere was starting to be accepted as a professional platform. A lot of people on the show come from a very DIY background, where we do everything ourselves. Like with the early music videos – I would color and Hiro would do effects in After Effects. So, Premiere was a much more natural fit. I am on a show using [Avid] Media Composer right now and it feels like a step backwards.”

With a nod to their DIY ethos, post-production for Atlanta also follows a small, collective approach. 

Kyle Reiter: “We rent a post facility that is just a single-story house. We have a DIY server called a NAS that one of our assistants built and all the media is stored there. It’s just a tower. We brought in our own desktop iMacs with dual monitors that we connect to the server over Ethernet. The show is shot with ARRI Amira cameras in a cinema 2K format. Then that is transcoded to proxy media for editing, which makes it easy to manage. The color correction is done in Resolve. Our assistant editors online it for the colorist, so there’s no grading in-house.” Atlanta airs on the FX Network in the 720p format.

The structure and schedule of this production make it possible to use a simple team approach. Projects aren’t typically shared among multiple editors and assistants, so a more elaborate infrastructure isn’t required to get the job done. 

Isaac Hagy: “It’s a pretty small team. There’s Kyle and myself. We each have an assistant editor. We just split the episodes, so I took half of the season and Kyle the other half. We were pretty self-contained, but because there were an odd number of episodes, we ended up sharing the load on one of them. I did the first cut of that episode and Kyle took it through the director’s cut. But other than that, we each had our individual episodes.”

Kyle Reiter: “They’re in Atlanta for several months shooting. We’ll spend five to seven days doing our cut and then typically move on to the next thing, before we’re finished. That’s just because they’re out of town for several months shooting and then they’ll come back and continue to work. So, it’s actually quite a bit of time calendar-wise, but not a lot of time in actual work hours. We’ll start by pulling selects and marking takes. I do a lot of logging within Premiere. A lot of comments and a lot of markers about stuff that will make it easy to find later. It’s just breaking it down to manageable pieces. Then from there, going scene-by-scene, and putting it all together.”

Many scripted television series that are edited on Avid Media Composer rely on Avid’s script integration features. This led me to wonder whether Reiter and Hagy missed such tools in Premiere Pro.

Isaac Hagy: “We’re lucky that the way in which the DP [Christian Sprenger] and the director shoot the series is very controlled. The projects are never terribly unwieldy, so really simple organizing usually does the trick.”

Kyle Reiter: “They’re never doing more than a handful of takes and there aren’t more than a handful of set-ups, so it’s really easy to keep track of everything. I’ve worked with editors that used markers and just mark every line and then designate a line number; but, we don’t on this show. These episodes are very economical in how they are written and shot, so that sort of thing is not needed. It would be nice to have an Avid ScriptSync type of thing within Premiere Pro. However, we don’t get an unwieldy amount of footage, so frankly it’s almost not necessary. If it were on a different sort of show, where I needed that, then absolutely I would do it. But this is the sort of show I can get away with not doing it.”

Kyle Reiter: “I’m on a show right now being cut on Media Composer, where there are 20 to 25 takes of everything. Having ScriptSync is a real lifesaver on that one.”

Both editors are fans of Premiere Pro’s advanced features, including the ability to use it with After Effects, along with the new sound tools added in recent versions.

Isaac Hagy: “In the offline, we create some temp visual effects to set the concepts. Some of the simpler effects do make it into the show. We’ll mock it up in Premiere and then the AEs will bring it into After Effects and polish the effect. Then it will be Dynamic Link-ed back into the Premiere timeline.”

“We probably go deeper on the sound than any other technical aspect of the show. In fact, a lot of the sound that we temp for the editor’s cut will make it to the final mix stage. We not only try to source sounds that are appropriate for a scene, but we also try to do light mixing ourselves – whether it’s adding reverb or putting the sound within the space – just giving it some realism. We definitely use the sound tools in Premiere quite a bit. Personally, I’ve had scenes where I was using 30 tracks just for sound effects.”

“I definitely feel more comfortable working in sound in Premiere than in Media Composer – and even more so than I did in Final Cut. It’s way easier working with filters, mixing, panning, and controlling multiple tracks at once. This season we experimented with the Essential Sound Panel quite a bit. It was actually very good in putting a song into the background or putting sound effects outside of a room – just creating spaces.”

When a television series or film is about the music industry, the music in the series plays a principal role. Sometimes that is achieved with a composed score and on other shows, the soundtrack is built from popular music.

Kyle Reiter: “There’s no score on the show that’s not diegetic music, so we don’t have a composer. We had one episode this year where we did have score. Flying Lotus and Thundercat are two music friends of Donald’s that scored the episode. But other than that, everything else is just pop songs that we put into the show.”

Isaac Hagy: “The decision of which music to use is very collaborative. Some of the songs are written in the script. A lot are choices that Kyle and I make. Hiro will add some. Donald will add some. We also have two great music supervisors. We’re really lucky that we get nearly 90% of the music that we fall in love with cleared. But when we don’t, our music supervisors recommend some great alternatives. We’re looking for an authenticity to the world, so we try to rely on tracks that exist in the real world.”

Atlanta provides an interesting look at the city’s hip-hop culture on the fringe. A series that has included an alligator and Donald Glover in weird prosthetic make-up – and where Hiro Murai takes inspiration from The Shining – certainly isn’t your run-of-the-mill television series. It definitely leaves fans wanting more, but to date, a third season has not yet been announced.

This interview was recorded using the Apogee MetaRecorder for iOS application and transcribed thanks to Digital Heaven’s SpeedScriber.

Originally written for CreativePlanetNetwork.

©2018 Oliver Peters