The Nuances of Overcranking

The concepts of overcranking and undercranking in film and video production go back to the origins of motion picture technology. The earliest film cameras had no internal motors – the camera operator had to crank the film mechanism by hand. A good camera operator was judged in part by how consistent a frame rate he or she could maintain while cranking film through the camera.

Prior to the introduction of sound, the correct frame rate was 18fps. If the camera was cranked faster than 18fps (overcranking), then playback during projection appeared in slow motion. If the camera was cranked slower than 18fps (undercranking), the motion was sped up. With sound, the default frame rate shifted from 18 to 24fps. One by-product of this shift is that old B&W films took on the fast, jerky motion we now incorrectly attribute to “old time movies.” That characteristic look exists because these films are no longer projected at their intended speeds.

While manual film cranking seems anachronistic today, it had the benefit of in-camera, variable-speed capture – aka speed ramps. Some modern film cameras include controlled mechanisms that can still do this – in production, not in post.

Videotape recording

With the advent of videotape recording, the television industry was locked into constant recording speeds. Variable-speed recording wasn’t possible using tape transport mechanisms. Once color technology was established, the standard record, playback, and broadcast frame rates became 29.97fps and/or 25.0fps worldwide. Motion picture films captured at 24.0fps were transferred to video at the slightly slower rate of 23.976fps (23.98) in the US and converted to 29.97 by employing pulldown – a method to repeat certain frames according to a specific cadence. (I’ll skip the field versus frame, interlaced versus progressive scan discussion.)

Once we shifted to high definition, an additional frame rate category of 59.94fps was added to the mix. All of this was still pinned to physical videotape transports and constant frame rates. Slow motion and fast speed effects required specialized videotape or disk pack recorders that could play back at variable speeds. A few disk recorders could record at different speeds, but in general this was a post-production function.

File-based recording

Production shifted to in-camera, file-based recording, while post shifted to digital, computer-based methods rather than electro-mechanical ones. The nexus of these two shifts is that the industry is no longer locked into a limited number of frame rates. So-called off-speed recording is now possible with nearly every professional production camera. All NLEs can handle multiple frame rates within the same timeline (albeit at a constant timeline frame rate).

Modern video displays, the web, and streaming delivery platforms let viewers watch videos mastered at different frame rates, without being dependent on the broadcast transmission standard in their country or region. Common system frame rates today include 23.98, 24.0, 25.0, 29.97, 30.0, 59.94, and 60.0fps. If you master in one of these, anyone around the world can see your video on a computer, smartphone, or tablet.

Record rate versus system/target rate

Since cameras can now record at different rates, it is imperative that the production team and the post team are on the same page. If the camera operator records everything at 29.97 (including sync sound), but post is designed around 23.98, then the editor has four options:

1. Play the files in real time (29.97 in a 23.98 sequence), which causes frames to be dropped, resulting in some stuttering on motion.

2. Play the footage at the slowed speed, so that there is a one-to-one relationship of frames – which doesn’t work for sync sound.

3. Run a frame rate conversion before editing starts, which results in blended and/or dropped frames.

4. Change the sequence setting to 29.97, which may or may not be acceptable for final delivery.

Professional production cameras allow the operator to set the system or target frame rate in addition to the actual recording rate. These may go by different names in the menus, but the concepts are the same. The system or target rate is the base frame rate at which the file will be edited and/or played. The record rate is the frame rate at which images are exposed. When the record rate is higher than the target rate, you are effectively overcranking – that is, recording slow motion in-camera.

(Note: from here on I will use simplified whole numbers rather than the exact rates in this post.) A record rate of 48fps and a target rate of 24fps results in an automatic 50% slow motion playback speed in post, with a one-to-one frame relationship (no duplicated or blended frames). Conversely, a record rate of 12fps with a target rate of 24fps results in fast motion playback at 200%. That’s the basis for hyperlapse/timelapse footage.
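
To make that relationship concrete, here’s a minimal Python sketch of the math (the function name is mine, purely for illustration):

```python
def playback_speed(record_fps: float, target_fps: float) -> float:
    """Percentage speed at which footage plays back when the
    record rate differs from the system/target rate."""
    return target_fps / record_fps * 100

print(playback_speed(48, 24))   # 50.0  -> half-speed slow motion
print(playback_speed(12, 24))   # 200.0 -> double-speed fast motion
print(playback_speed(120, 24))  # 20.0  -> extreme slow motion
```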

The good news is that professional production cameras embed the pertinent metadata into the file, so editing and player software automatically know what to do. Import an ARRI Alexa file that was recorded at 120fps with a target rate of 24fps (23.98/23.976) into Final Cut Pro X or Premiere Pro and it will automatically play back in slow motion. The browser will identify the correct target rate and the clip’s timecode will be based on that same rate.

The bad news is that many cameras used in production today are consumer products or at best “prosumer” cameras. They are relatively “dumb” when it comes to such settings and metadata. Record 30fps on a Canon 5D or Sony A7S and you get 30fps playback. If you are cutting that into a 24fps (23.98) sequence, you will have to decide how to treat it. If the footage is non-sync-sound B-roll, then altering the frame rate (making it play in slow motion) is fine. In many cases, like drone shots and handheld footage, that will be an intentional choice, since the slower footage helps smooth out the vibration introduced by such a lightweight camera.

The worst recordings are those made with iPhones, iPads, or similar devices. These use variable-bit-rate codecs and variable-frame-rate recording, making them especially difficult in post. For example, an iPhone recording at 30.0fps isn’t exactly at that speed – it wobbles around that rate, sometimes slightly slower and sometimes slightly faster. My recommendation for that type of footage is to always transcode to an optimized format before editing. If you must shoot with one of these devices, you really need to invest in the FiLMiC Pro application, which will give you a certain level of professional control over the iPhone/iPad camera.

Transcode

Time and storage permitting, I generally recommend transcoding consumer/prosumer formats into professional, optimized editing formats, like Avid DNxHD/HR or Apple ProRes. If you are dealing with speed differences, then set your file conversion to change the frame rate. In our 30 over 24 example (29.97 record/23.98 target), the new footage will be slowed accordingly with matching timecode. Recognize that any embedded audio will also be slowed, which changes its sample rate. If this is just for B-roll and cutaways, then no problem, because you aren’t using that audio. However, one quirk of Final Cut Pro X is that even when silent, the altered sample rate of the audio on the clip can induce strange sound artifacts upon export. So in FCPX, make sure to detach and delete audio from any such clip on your timeline.
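
To see why that audio becomes a problem, here’s a small Python sketch of the conform math, assuming the camera recorded standard 48kHz audio (my numbers, not any particular tool’s output):

```python
record_fps, target_fps = 29.97, 23.976
camera_sample_rate = 48000  # Hz, assumed production audio

conform_ratio = target_fps / record_fps          # ~0.8
conformed_rate = camera_sample_rate * conform_ratio

print(conform_ratio)   # ~0.8 -> the clip now plays at 80% speed
print(conformed_rate)  # 38400.0 Hz (48,000 x 0.8) - not a standard rate
```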

Interpret footage

This may have a different name in any given application, but interpret footage is a function that makes the application treat a file as though it should be played at a different rate than the one at which it was recorded. You may find this in your NLE, but also in your encoding software. Plus, there are apps that can rewrite the QuickTime header information without transcoding the file. That file then shows up at the desired rate inside the NLE. In the case of FCPX, the same potential audio issues described above can arise if you go this route.

In an NLE like Premiere or Resolve, it’s possible to bring 30-frame files into a 24-frame project. Then highlight these clips in the browser and modify the frame rate. Instant fix, right? Well, not so fast. While I use this in some cases myself, it comes with some caveats. Interpreting footage often results in mismatched clip linking when you are using the internal proxy workflow – the proxy and full-res files don’t sync up to each other. Likewise, in a roundtrip with Resolve, file relinking will be incorrect. You may not be able to relink these files at all, because the timecode that Resolve looks for falls outside the boundaries of the file. So use this function with caution.
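
Here’s a simplified sketch of why that relink can fail, using made-up numbers for a 30fps clip reinterpreted as 24fps:

```python
def end_point_seconds(start_seconds: float, frames: int, fps: float) -> float:
    """Where a clip ends, in seconds from midnight, given its start
    timecode and its length in frames at a given frame rate."""
    return start_seconds + frames / fps

start = 3600    # clip starts at timecode 01:00:00:00
frames = 3600   # two minutes of material as shot at 30fps

native_end = end_point_seconds(start, frames, 30)    # 3720 -> 01:02:00:00
reinterp_end = end_point_seconds(start, frames, 24)  # 3750 -> 01:02:30:00

# Resolve relinks against the file's native 30fps timecode, so any
# timeline reference past 01:02:00:00 asks for frames the file
# doesn't contain - and the relink fails.
print(native_end, reinterp_end)
```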

Speed adjustments

There’s a rub when working with standard speed changes (not frame rate offsets). Many editors simply apply an arbitrary speed based on what looks right to them. Unfortunately, this introduces issues like skipped frames. To perfectly apply slow or fast motion to a clip, you MUST stick to simple multiples of the frame rate, much like traditional film post. A 200% speed increase is a proper multiple; 150% is not. The former plays every other frame of a clip for smooth action. The latter skips roughly every third frame, leaving you with some unevenness in the movement.
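
A quick way to sanity-check a speed setting, using the simplified whole-number logic described above (a sketch, not any NLE’s actual test):

```python
def is_clean_speed(percent: float) -> bool:
    """True when the speed plays or skips frames in an even pattern,
    i.e. the source-to-output frame ratio is a whole number in one
    direction or the other."""
    ratio = percent / 100.0
    return ratio.is_integer() or (1.0 / ratio).is_integer()

for pct in (200, 150, 300, 50, 25, 177):
    print(pct, is_clean_speed(pct))
# 200 True, 150 False, 300 True, 50 True, 25 True, 177 False
```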

Naturally there are times when you simply want the speed you picked, even if it’s something like 177%. That’s when you have to play with the interpolation options of your NLE. Typically these include frame duplication, frame blending, and optical flow. All will give you different looks. When it comes to optical flow, some NLEs handle it better than others. Optical flow “creates” new in-between frames. In the best case it can truly look like a shot was captured at that native frame rate. However, the computation is tricky and can often lead to unwanted image artifacts.

If you use Resolve for a color correction roundtrip, changing the motion interpolation settings in Resolve is pointless, unless the final export of the timeline comes from Resolve. If clips go back to your NLE for finishing, then that software determines the quality of motion effects. Twixtor is a plug-in many editors turn to when they need even more refined control over motion effects.

Doing the math

Now that I’ve discussed interpreting footage and ways to deal with standard speed changes, let’s look at how best to handle off-speed clips. The proper workflow in most NLEs is to import the footage at its native frame rate. Then, when you cut the clip into the sequence, alter the speed to the proper rate so that frames play one-to-one (no blended, duplicated, or skipped frames). Final Cut Pro X handles this best, because it provides an automatic speed adjustment command. This not only makes the correct speed change, but also takes care of any potential audio sample rate issues. With other NLEs, like Premiere Pro, you will have to work out the math manually.

The easiest way to get a value that yields clean frames (a one-to-one frame relationship) is to divide the timeline frame rate by the clip frame rate. The answer is the percentage to apply to the clip’s speed in the timeline. The simplified numbers yield the same results as the exact rates. If you are in a 23.98 timeline and have 29.97 clips, then 24 divided by 30 equals .8 – i.e. 80% slow motion. A 59.94fps clip plays at 40%. A 25fps clip plays at 96%.

Going in the other direction, if you are editing in a 29.97 timeline and add a 23.98 clip, the NLE will normally add a pulldown cadence (duplicated frames). If you want a one-to-one relationship, the clip will have to be sped up. But the calculation is the same: 30 divided by 24 yields a 125% speed adjustment. And so on.
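
Both directions reduce to the same one-line calculation. A minimal sketch (the function name is mine):

```python
def one_to_one_speed(timeline_fps: float, clip_fps: float) -> float:
    """Speed percentage that plays every source frame exactly once."""
    return timeline_fps / clip_fps * 100

print(one_to_one_speed(24, 30))  # 80.0  -> 30fps clip in a 24fps timeline
print(one_to_one_speed(24, 60))  # 40.0  -> 60fps clip
print(one_to_one_speed(24, 25))  # 96.0  -> 25fps clip
print(one_to_one_speed(30, 24))  # 125.0 -> 24fps clip in a 30fps timeline
```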

Understanding the nuances of frame rates and following these simple guidelines will give you a better finished product. It’s the kind of polish that will make your videos stand out from those of your fellow editors.

© 2019 Oliver Peters

Preparing your Film for Distribution

First-time filmmakers are elated when their film finally gets picked up for distribution. But the hardest work may be next. Preparing your film and companion materials can be a very detailed and complex endeavor if you didn’t plan for it properly from the outset. While each distributor and/or network has slightly different specs, the general requirements are the same. Here are the more common ones.

1. Film master. Supplying a master file is self-evident, but the exact details are not consistent across the board. Usually some additional post will be required once you get distribution. You will need to add the distributor’s logo animation up front, make sure the first frame of video starts at a specified timecode, and supply the audio channels in a certain configuration (see Item 2).

In spite of the buzz over 4K, many distributors still want 1920×1080 files at 23.98fps (or possibly 24.0fps) – usually in the Apple ProResHQ* video codec. The frame rate may differ for broadcast-oriented films, such as documentaries. In that case, 29.97fps might be required. Also, some international distributors will require 25.0fps. If you have any titles over the picture, then “textless” material must also be supplied. Generally, you can add those sections, such as the video under opening titles, at the end of the master, following the end credits of the film.

*Occasionally film festivals and some distributors will also require a DCP package instead of a single QuickTime or MXF master file.

2. Audio mixes and tracks. Stereo and/or 5.1 surround mixes are the most commonly requested audio configurations. You’ll often be asked to supply both the full mixes and the “stems”. The latter are separate submixes of only dialogue, sound effects, and music. Some distributors want these stems as separate files, while others want them attached to the master file. These are easy to supply if the film was originally mixed with that in mind. But if your mixer only produced a final mix, then it’s a lot harder to go back and create new stem tracks. A typical delivery master carries the 5.1 surround mix (L, R, C, LFE, Ls, Rs) and the stereo mix (left, right) on eight tracks, plus a stereo M&E mix (combined music and effects, minus the dialogue) on two more.

3. Subtitles and captions. In order to be compliant with various accessibility regulations, you will likely have to supply closed captioning sidecar files that sync to your master. There are numerous formats and several NLEs allow you to create these. However, it’s far easier and usually more accurate to have a service create your files. There are numerous vendors, with prices starting as low as $1/minute. Closed captions should not be confused with subtitles, also called open captions. These appear on-screen and are common when someone is speaking in another language. Check with your distributor if this applies to you, because they may want the video without titles, in the event of international distribution.

4. Legal documentation. There’s a wide range of paperwork that you should be prepared to turn over. This includes licensing for any music and stock footage, talent releases, contracts, and deal memos. One important element is “chain-of-title” – you must be able to prove that you own the rights to the story and the film. Music is often a sticking point for indie filmmakers. If you used temp music or had a special deal for film festival showings, now is the time to pay up. You won’t get distribution until all music is clearly licensed. Music info should also include a cue sheet (song names, lengths, and positions within the film).

5. Errors and omissions insurance. This is a catch-all policy you’ll need to buy to satisfy many distributors. It’s designed to cover you in the event that there’s a legal claim (frivolous or otherwise) against the film. For example, if someone comes out of the woodwork saying that you ripped them off and stole their story idea and that you now owe them money.

6. Trailer. Distributors often request a trailer to be used to promote the film. The preference seems to be that the trailer is under two minutes in length. It may or may not need to include the MPAA card at the front and should have a generic end tag (no “coming soon” or date at the end). Often a simple stereo mix will be fine, but don’t take that for granted. If you are going through full sound post anyway in creating a trailer, be sure to generate the full audio package – stereo and surround mixes and splits in various combinations, just like your feature film master.

7. Everything else. Beyond this list, you’ll often be asked for additional “nice to have” items. These include screeners (DVD or web), behind-the-scenes press clips or photos, frame grabs from the film, a final script, biographies of the creative team and lead actors, as well as a poster image.

As you can see, none of this seems terribly difficult if you are aware of these needs going in. But if you have prepared none of this in advance, it will become a mad scramble at the end to keep the distributor happy.

Originally written for RedShark News

©2018 Oliver Peters

Editing and Music Composition

A nip is in the air and snow is falling in some regions. All signs of Fall and Winter soon to come. The sights, smells, and sounds of the season will be all around us. Festive events. Holiday celebrations. Joy. But no other season is so associated with memorable music to put us in the mood. That makes this a perfect time to talk about how video and film editing has intrinsic similarities with musical composition.

Fellow editor Simon Ubsdell has a lot of thoughts on the subject – perfect for one of my rare guest blog posts. Simon is Creative Director of Tokyo Productions, a London-based post-production shop specializing in trailers. Simon is multi-talented with experience in music, audio post, editing, and software development.

Grab a cup of holiday cheer and sit back for this enlightening read.

______________________________________

Simon Ubsdell – Editing and Music Composition

There is a quote attributed to several different musicians, including Elvis Costello, Miles Davis, and Thelonious Monk, which goes: “Talking about music is like dancing about architecture.” It sounds good and it seems superficially plausible, but I think it’s wrong on two levels. Firstly, a good choreographer would probably say that it’s perfectly possible to use dance to say something interesting about architecture, and a good architect might well say that they could design a building that said something about dance. Secondly, I think it’s unhelpful to imply that one art form can’t tell us useful things about another. We can learn invaluable lessons both from the similarities and the differences, particularly if we focus on process rather than the end result.

Instead, here’s Ingmar Bergman: “I would say that there is no art form that has so much in common with film as music. Both affect our emotions directly, not via the intellect. And film is mainly rhythm; it is inhalation and exhalation in continuous sequence.”

Bergman is certainly not the only filmmaker to have made this observation and I think everyone can recognise the essential truth of it. However, what I want to consider here is not so much what film and music have in common as art forms, but rather whether the process of music composition can teach us anything useful about the process of film editing. As an editor who also composes music, I have found thinking about this to be useful in both directions.

In films you’ll often see a composer sitting down at a piano and laboriously writing a score one note after another. He bangs around until he finds one note and then he scribbles it into the manuscript; then he bangs around looking for the next one. Music composition is made to look like a sequential process where each individual note is decided upon (with some difficulty usually!) before moving on to the next. The reality is of course that music composition doesn’t work this way at all. So I’d like to look at some of the ways that one does actually go about writing a piece of music and how the same principles might apply to how we edit films. Because music is such a vast subject, I’m going to limit myself largely to the concepts of classical music composition, but the same overall ideas apply to whatever kind of music you might be writing in whatever genre.

What both music and film have in common is that they unfold over time: they are experienced sequentially. So the biggest question that both the composer and the editor need to address is how to organise the material across time, and to do that we need to think about structure.

Musical Structure

From the Baroque period onwards and even before, composers have drawn on a very specific set of musical structures around which to build their compositions. 

The Canon (as in Pachelbel’s famous example) is the repetition of the same theme over and over again with added ornamentation that becomes increasingly more elaborate. The Minuet and Trio is an A/B/A sandwich in which a theme is repeated (Minuet), but with a contrasting middle section (Trio). The Rondo is a repeated theme that alternates with multiple contrasting sections, in other words A/B/A/C/A/D, etc. The Theme and Variations sets out a basic theme and follows it with a series of elaborations in different keys, tempi, time signatures, and so on. 

Sonata Form, widely used for the opening movements of most symphonic works, is a much more sophisticated scheme, that starts by setting out two contrasting themes (the “1st and 2nd Subjects”) in two different keys (the “Exposition”), before moving into an extended section where those ideas undergo numerous changes and augmentations and key modulations (the “Development Section”), before returning to the original themes, both now in the home key of the piece (the “Recapitulation Section”), often leading to a final epilogue called the “Coda”. 

In all these cases the structure is built out of thematic and other contrasts, and contrast is a word I’m going to be coming back to repeatedly here, because it goes to the core of where music composition and editing come together.

Now the point of using musical structures of this kind is that the listener can form an idea of how the piece is unfolding even when hearing it for the first time. They provide a map that helps you orientate yourself within the music, so it doesn’t come across as just some kind of confused and arbitrary ramble across terrain that’s hard to read. Music that doesn’t come with signposts is not easy to listen to with concentration, precisely because you don’t know where you are. (Of course, the humble pop song illustrates this, too. We can all recognise where the verse ends and the chorus begins and the chorus repetitions give us clear anchor points that help us understand the structure. The difference with the kind of classical music I’m talking about is that a pop song doesn’t have to sustain itself for more than a few minutes, whereas some symphonies last well over an hour and that means structure becomes vastly more important.) 

What structure does is effectively twofold: on the one hand it gives us a sense of comprehensibility, predictability, even familiarity; and on the other hand it allows the composer to surprise us by diverging from what is expected. The second part obviously follows from the first. If we don’t know where we are, then we don’t know what to expect and everything is a constant surprise. And that means nothing is a surprise. We need familiarity and comprehensibility in order to be able to be surprised by the surprises when they come. Conversely, music that is wholly without surprises gets dull very quickly. Just as quickly as music that is all surprise, because again it offers us no anchor points. 

Editing Structure

So what comparisons can we draw with editing in terms of structure? Just as with our fictional movie “composer” sitting at the piano picking out one note after another, so you’ll find that many newcomers to editing believe that that’s how you put together a film. Starting at the beginning, you take your first shot and lay it down, and then you go looking for your next shot and you add that, and then the next one and the next one. Of course, you can build a film this way, but what you are likely to end up with is a shapeless ramble rather than something that’s going to hold the viewer’s attention. It will be the equivalent of a piece of music that has no structural markers and doesn’t give us the clues we need to understand where we are and where we are going. Without those cues the viewer quickly gets lost and we lose concentration. Not understanding the structure means we can’t fully engage with the film.

So how do we go about creating structure in our editing? Music has an inherently much more formal character, so in many ways the composer has an easier job, but I’d suggest that many of the same principles apply.

Light and Shade in Music

Music has so many easy-to-use options to help define structure. We have tempo – how fast or slow the music is at any one point. Rhythm – the manner in which accented notes are grouped with non-accented notes. Pitch – how high or low the musical sounds are. Dynamics – how loud or soft the music is, and how soft becomes loud and vice versa. Key – how far we have moved harmonically from the home key of the piece. Mode – whether we are experiencing the bright optimism of a major key or the sombre darkness of a minor key (yes, that’s a huge over-simplification!). Harmony – whether we are moving from the tension of dissonance to the resolution of consonance, or vice versa.

All of these options allow for contrasts – faster/slower, brighter/darker, etc. It’s out of those contrasts that we can build structure. For example, we can set out our theme in a bright, shiny major key with a sprightly rhythm and tempo, and then move into a slow minor key variation shrouded in mystery and suspense. It’s from those contrasts that we grasp the musical structure. And of course moving through those contrasts becomes a journey. We’re not fixed in one place, but instead we’re moving from light to dark, from peaceful to agitated, from tension to resolution, and so on. Music satisfies and nourishes and delights and surprises us, because it takes us on those journeys and because it is structured so that we experience change.

Light and Shade in Editing

So what are the editing equivalents? Let’s start with the easiest scenario and that’s where we are cutting with music. Because music has the properties we’ve discussed above, we can leverage those to give our films the same contrasts. We can change the pace and the mood simply by changing the pace and mood of the music we use. That’s easy and obvious, but very often overlooked. Far too many music-driven pieces are remorselessly monotonous, relying far too heavily for far too long on music of the same pace and mood. That very quickly dissipates the viewer’s engagement for the reasons we have talked about. Instead of feeling as though we are going on a journey of contrasts, we are stuck in one repetitive loop and it’s dull – and that means we stop caring and listening and watching. Instead of underscoring where the film is going, it effectively tells us that the film is going nowhere, except in circles.

(Editing Tip: So here’s a suggestion: if you’re cutting with pre-composed music, don’t let that music dictate the shape of your film. Instead cut the music so it works for you. Make sure you have changes of pace and intensity, changes of key and mode, that work to enhance the moments that are important for your film. Kill the music, or change it, or cut it so that it’s driving towards the moments that really matter. Master it and don’t let it master you. Far too often we see music that steamrolls through everything, obliterating meaning, flattening out the message – music that fails to point up what’s important and de-emphasise what is not. Be in control of your structure and don’t let anything dictate what you are doing, unless it’s the fundamental meaning you are trying to convey.

Footnote: Obviously what I’ve said here about music applies to the soundtrack generally. Sound is one of the strongest structural markers we have as editors. It builds tension and relaxation, it tells us where moments begin and end, it guides us through the shape of the film in a way that’s even more important than the pictures.)

And that brings me to a really important general point. Too many films feel like they are going in circles, because they haven’t given enough thought to when and how the narrative information is delivered. So many film-makers think it’s important to tell us everything as quickly as possible right up front. They’re desperate to make sure they’ve got their message across right here right now in its entirety. And then they simply end up recycling stuff we already know and that we care about less and less with each repetition. It’s a bit like a composer piling all his themes and all their variations into the first few bars (a total, unapproachable cacophony) and then being left with nothing new to say for the rest of the piece.

A far better approach is to break your narrative down into a series of key revelations and delay each one as long as you dare. Narrative revelations are your key structural points and you must cherish them and nurture them and give them all the love you can and they will repay you with enhanced audience engagement. Whatever you do, don’t throw them away unthinkingly and too soon. Every narrative revelation marks a way station on the viewer’s journey, and those way stations are every bit as crucial and valuable as their musical equivalents. They are the map of the journey. They are why we care. They are the hooks that make us re-engage.

Tension and Relaxation

This point about re-engagement is important, too, and it brings me back to music. Music that is non-stop tension is exhausting to listen to, just as music that is non-stop relaxation quickly becomes dull. As we’ve discussed, good music moves between tension and relaxation the whole time at both the small and the large scale, and that alternation creates and underpins structure. We feel the relaxation, because it has been preceded by tension and vice versa.

And the exact same principle applies to editing. We want the viewer to experience alternating tension and relaxation, moments of calm and moments of frenzied activity, moments where we are absorbing lots of information and moments where we have time to digest it. (Remember, Bergman talking about “inhalation and exhalation”.) Tension/relaxation applies at every level of editing, from the micro-level of the individual cuts to the macro level of whole scenes and whole sequences. 

As viewers we understand very well that a sudden burst of drama after a period of quiet is going to be all the more striking and effective. Conversely we know about the effect of getting our breath back in the calms that come after narrative storms. That’s at the level of sequences, but even within scenes, we know that they work best when the mood and pace are not constant, when they have corners and changes of pace, and their own moments of tension and relaxation. Again it’s those changes that keep us engaged. Constant tension and its opposite, constant relaxation, have the opposite effect. They quickly end up alienating us. The fact is we watch films, because we want to experience that varied journey – those changes between tension and relaxation.

Even at the level of the cut, this same principle applies. I was recently asked by a fellow editor to comment on a flashy piece of cutting that was relentlessly fast, with no shot even as long as half a second. Despite the fact that the piece was only a couple of minutes long, it felt monotonous very quickly – I’d say after barely 20 seconds. Whereas of course, if there had been even just a few well-judged changes of pace, each one of those would have hooked me back in and re-engaged my attention. It’s not about variety for variety’s sake, it’s about variety for structure’s sake.

The French have an expression: “reculer pour mieux sauter“, which roughly means taking a step back so you can jump further, and I think that’s a good analogy for this process. Slower shots in the context of a sequence of faster shots act like “springs”. When faster shots hit slower shots, it’s as if they apply tension to the spring, so that when the spring is released the next sequence of faster shots feels faster and more exciting. It’s the manipulation of that tension of alternating pace that creates exciting visceral cutting, not just relentlessly fast cutting in its own right.

Many great editors build tension by progressively increasing the pace of the cutting, with each shot getting incrementally shorter than the last. We may not be aware of that directly as viewers, but we definitely sense the “accelerated heartbeat” effect. The obvious point to make is that acceleration depends on having started slow, and deceleration depends on having increased the pace. Editing effects are built out of contrasts. It’s the contrasts that create the push/pull effect on the viewer and bring about engagement.

(Editing Tip: It’s not strictly relevant to this piece, but I wanted to say a few words on the subject of cutting to music. Many editors seem to think it’s good practice to cut on the downbeats of the music track and that’s about as far as they ever get. Let’s look at why this strategy is flawed. If our music track has a typical four beats to the bar, the four beats have the following strengths: the first, the downbeat, is the dominant beat; the third beat (often the beat where the snare hits) is the second strongest beat; then the fourth beat (the upbeat); and finally the second beat, the weakest of the four.

Cutting on the downbeat creates a pull of inertia, because of its weight. If you’re only ever cutting on that beat, then you’re actually creating a drag on the flow of your edit. If you cut on the downbeat and the third beat, you create a kind of stodgy marching rhythm that’s also lacking in fluid forward movement. Cutting on the upbeat, however, because it’s an “offbeat”, actually helps to propel you forward towards the downbeat. What you’re effectively doing is setting up a kind of cross-rhythm between your pictures and your music, and that has a really strong energy and flow. But again the trick is to employ variety and contrast. Imagine a drummer playing the exact same pattern in each bar: that would get monotonous very quickly, so what the drummer actually does is to throw in disruptions to the pattern that build the forward energy. He will, for example, de-emphasise the downbeat by exaggerating the snare, or he will even shift where the downbeat happens, and add accents that destabilise the four-square underlying structure. And all that adds to the energy and the sense of forward movement. And that’s the exact principle we should be aiming for when cutting to music.

There’s one other crucial, but often overlooked, aspect to this: making your cut happen on a beat is far less effective than making a specific moment in the action happen on a beat. That creates a much stronger sense of forward-directed energy and a much more satisfying effect of synchronisation overall. But that’s not to say you should only ever cut this way. Again variety is everything, but always with a view to what is going to work best to propel the sequence forward, rather than let it get dragged back. Unless, of course, dragging back on the forward motion is exactly what you want for a particular moment in your film, in which case, that’s the way to go.)

Building Blocks

You will remember that our fictional composer sits down at the piano and picks out his composition note by note. The implicit assumption there is that individual notes are the building blocks of a piece of music. But that’s not how composers work. The very smallest building block for a composer is the motif – a set of notes that exists as a tiny seed out of which much larger musical ideas are encouraged to grow. The operas of Wagner, despite notoriously being many hours long, are built entirely out of short motifs that grow through musical development to truly massive proportions. You might be tempted to think that a motif is the same thing as a riff, but riffs are merely repetitive patterns, whereas motifs contain within them the DNA for vast organic structures and the motifs themselves can typically grow other motifs.

Wagner is, of course, more of an exception than a rule, and other composers work with building blocks on a larger scale than the simple motif. The smallest unit is typically something we call a phrase, which might be several bars long. And then again one would seldom think of a phrase in isolation, since it only really exists as part of a larger thematic whole. If we look at the famous opening to Mozart’s 40th Symphony, we can see that he starts with a two-bar phrase that rises on the last note, which is answered by a phrase that descends back down from that note. The first phrase is then revisited along with its answering phrase – both shifted one step lower.

But those resulting eight bars are only half of the complete theme, while the complete 1st Subject is 42 bars long. So what is Mozart’s basic building block here? It most certainly isn’t a note, or even a phrase. In this case it’s something much more like a combination of a rhythm pattern (da-da-Da) and a note pattern (a falling interval of two adjacent notes). But built into that is a clear sense of how those patterns are able to evolve to create the theme. In other words, it’s complicated.

The fundamental point is that notes on their own are nothing; they are inert; they have no meaning. It’s only when they form sequences that they start to become music.

The reason I wanted to highlight this point is that I think it too gives us a useful insight into the editing process. The layperson tends to think of the single shot as being the basic building block, but just as single notes on their own are inert, so the single shot on its own (typically, unless it’s an elaborate developing shot) is lacking in meaning. It’s when we build shots into sequences that they start to take on life. It’s the dynamic, dialectical interplay of shots that creates shape and meaning and audience engagement. And that means it’s much more helpful to think of shot sequences as the basic building blocks. It’s as sequences that shots acquire the potential to create structure. Shots on their own do not have that quality. So it pays to have an editing strategy that is geared towards the creation and concatenation of “sequence modules”, rather than simply a sifting of individual shots. That’s a huge subject that I won’t go into in any more detail here, but which I’ve written about elsewhere.

Horizontal and Vertical Composition

Although the balance keeps shifting down the ages, music is both horizontal and vertical and exists in a tension between those aspects. Melody is horizontal – a string of notes that flows left to right across the page. Harmony is vertical – a set of notes that coexist in time. But these two concepts are not in complete opposition. Counterpoint is what happens when two or more melodies combine vertically to create harmony. The fugue is one of the most advanced expressions of that concept, but there are many others. It’s a truly fascinating, unresolved question that runs throughout the history of music, with harmony sometimes in the ascendant and sometimes melody.

Melody typically has its own structure, most frequently seen in terms of groups of four bars, or multiples of four bars. It tends to have shapes that we instinctively understand even when hearing it for the first time. Harmony, too, has a temporal structure, even though we more typically think of it as static and vertical. Vertical harmonies tend to suggest a horizontal direction of travel, again based on the notion of tension and relaxation, with dissonance resolving towards consonance. Harmonies typically point to where they are planning to go, although of course, just as with melody, the reason they appeal to us so much is that they can lead us to anticipate one thing and then deliver a surprise twist.

In editing we mostly consider only melody, in other words, how one shot flows into another. But there is also a vertical, harmonic component. It’s only occasionally that we layer our pictures to combine them vertically (see footnote). But we do it almost all the time with sound – layering sound components to add richness and complexity. I suppose one way of looking at this would be to think of pictures as the horizontal melody and the soundtrack as the vertical harmony, or counterpoint.

One obvious way in which we can approach this is to vary the vertical depth to increase and decrease tension. A sound texture that is uniformly dense quickly becomes tiresome. But if we think in terms of alternating moments where the sound is thickly layered and moments where it thins out, then we can again increase and decrease tension and relaxation.

(Footnote: One famous example of vertical picture layering comes in Apocalypse Now where Martin Sheen is reading Kurtz’s letter while the boat drives upstream towards the waiting horror. Coppola layers up gliding images of the boat’s passage in dissolves that are so long they are more like superimpositions – conveying the sense of the hypnotic, awful, disorientating journey into the unknowable. But again contrast is the key here, because punctuating that vertical layering, Coppola interjects sharp cuts that hit us full in the face: suspended corpses, the burning helicopter in the branches of a tree. The key thing to notice is the counterpoint between the hard cuts and the flowing dissolves/superimpositions. The dissolves lull us into an eerie fugue-like state, while the cuts repeatedly jolt us out of it to bring us face to face with the horror. The point is that they both work together to draw us inexorably towards the climax. The cuts work because of the dissolves, and the dissolves work because of the cuts.)

Moments

The moments that we remember in both music and films are those points where something changes suddenly and dramatically. They are the magical effects that take your breath away. There is an incredibly famous cut in David Lean’s Lawrence of Arabia that is a perfect case in point. Claude Rains (Mr. Dryden) and Peter O’Toole (Lawrence) have been having a lively discussion about whether Lawrence really understands how brutal and unforgiving the desert is going to be. O’Toole insists that “it’s going to be fun”. He holds up a lighted match, and we cut to a close-up as he blows it out. On the sound of him blowing, we cut to an almost unimaginably wide shot of the desert as the sun rises almost imperceptibly slowly in what feels like complete silence. The sudden contrast of the shot size, the sudden absence of sound, the abruptness of cutting on the audio of blowing out the match – all of these make this one of the most memorable moments in film history. And of course, it’s a big narrative moment too. It’s not just clever, it has meaning. 

Or take another famous moment, this time from music. Beethoven’s massive Choral Symphony, the Ninth, is best known for its famous final movement, the Ode to Joy, based on Schiller’s poem of the same name. The finale follows on from a slow movement of celestial tranquillity and beauty, but it doesn’t launch immediately into the music that everyone knows so well. Instead there is a sequence built on the most incredible dissonance, which Wagner referred to as “the terror fanfare”. Beethoven has the massed ranks of the orchestra blast out a phenomenally powerful fortissimo chord that stacks up all seven notes of the D minor harmonic scale. It’s as if we are hearing the foul demons of hatred and division being sent screeching back to the depths of hell. And as the echoes of that terrifying sound are still dying away, we suddenly hear the solo baritone, the first time in nearly an hour of music that we have heard a human voice: “O Freunde, nicht diese Töne“, “Friends, let us not hear these sounds”. And so begins that unforgettable ode to the brotherhood of all mankind.

The point about both the Lawrence of Arabia moment and the Beethoven moment is that in each case, they form giant pivots upon which the whole work turns. The Lawrence moment shows us one crazy Englishman pitting himself against the limitless desert. The Beethoven moment gives us one lone voice stilling the forces of darkness and calling out for something better, something to unite us all. These are not mere stylistic tricks, they are fundamental structural moments that demand our attention and engage us with what each work is really about.

I’m not suggesting that everything we cut is going to have moments on this kind of epic scale, but the principle is one we can always benefit from thinking about and building into our work. When we’re planning our edit, it pays to ask ourselves where we are going to make these big turning points and what we can do with all the means at our disposal to make them memorable and attention-engaging. Our best, most important stuff needs to be reserved for these pivotal moments and we need to do everything we can to do it justice. And the best way of doing that, as Beethoven and David Lean both show us, is to make everything stop.

When the Music Stops

Arguably the greatest composer ever has one of my favourite ever quotes about music: “The music is not in the notes, but in the silence between.” Mozart saw that the most magical and profound moments in music are when the music stops. The absence of music is what makes music. To me that is one of the most profound insights in art.

From an editing point of view, that works, too. We need to understand the importance of not cutting, of not having sound, of not filling every gap, of creating breaths and pauses and beats, of not rushing onto the next thing, of allowing moments to resonate into nothingness, of stepping away and letting a moment simply be.

The temptation in editing is always to fill every moment with something. It’s a temptation we need to resist wherever we can. Our films will be infinitely better for it. Because it’s in those moments that the magic happens.

Composing and Editing with Structure

I hope by now you’ll agree with me about the fundamental importance of structure in editing. So let’s come back to our original image of the composer hammering out his piece of music note by note, and our novice editor laying out his film shot by shot.

It should be obvious that a composer needs to pre-visualise the structure of the piece before starting to think about the individual notes. At every level of the structure he needs to have thought about where the structural changes might happen – both on a large and small scale. He needs to plan the work in outline: where the key changes are going to happen, where the tempo shifts from fast to slow or slow to fast, where the tension escalates and where it subsides, where the whole orchestra is playing as one and where we hear just one solitary solo line. 

It goes without saying that very few composers have ever plotted out an entire work in detail and then stuck rigidly to the plan. But that’s not the point. The plan is just a plan until a better one comes along. The joy of composition is that it throws up its own unexpected surprises, ideas that grow organically out of other ideas and mushroom into something bigger, better and more complex than the composer could envisage when starting out. But those ideas don’t just shoot off at random. They train themselves around the trelliswork of the original structure. 

As I’ve mentioned, classical composers have it easy, because they can build upon pre-conceived structures like Sonata Form and the rest.  As editors we don’t have access to the same wealth of ready-built conventions, but we do have a few. 

One of the structures that we very frequently call upon is the famous three-act structure. It works not only for narrative, but for pretty much any kind of film you can think of. The three-act structure does in fact have a lot in common with Sonata Form. Act One is the Exposition, where we set out the themes to be addressed. Act Two is the Development Section, where the themes start to get complicated and we unravel the problems and questions that they pose. And Act Three is the Recapitulation (and Coda), where we finally resolve the themes set out in Act One. Almost anything you cut at whatever length can benefit from being thought of in these structural terms: a) set out your theme or themes; b) develop your themes and explore their complexities; c) resolve your themes (or at least point to ways in which they might be resolved). And make sure your audience is aware of how those sections break down. As an editor who has spent a lot of my working life cutting movie trailers, I know that every experienced trailer editor deploys three-act structure pretty much all the time and works it very hard indeed.

 Of course, scripted drama comes into the cutting room with its own prebuilt structure, but the script is by no means necessarily the structural blueprint for the finished film. Thinking about how to structure what was actually shot (as against what was on the page) is still vitally important. The originally conceived architecture might well not actually function as it was planned, so we can’t simply rely on that to deliver a film that will engage as it should. The principles that we’ve discussed of large scale composition, of pace, of contrast, of rhythm, and so on are all going to be useful in building a structure that works for the finished film.

Other kinds of filmmaking rely heavily on structural planning in the cutting room and a huge amount of work can go into building the base architecture. And it really helps if we think of that structural planning as more than simply shifting inert blocks into a functional whole. If we take inspiration from the musical concepts described here, we can create films that breathe a far more dynamic structural rhythm, that become journeys through darkness and light, through tension and relaxation, between calm and storm, journeys that engage and inspire.

Conclusion

Obviously this is just an overview of what is in reality a huge subject, but what I want to stress is that it really pays to be open to thinking about the processes of editing from different perspectives. Music, as a time-based art form, has so many useful lessons to draw from, both in terms of large scale architecture and small scale rhythms, dynamics, colours, and more. And those lessons can help us to make much more precise, refined and considered decisions about editing practice, whatever we are cutting.

– Simon Ubsdell

For more of Simon’s thoughts on editing, check out his blog post Bricklayers and Sculptors.

© 2018 Simon Ubsdell, Oliver Peters

Websites for Filmmakers

There are plenty of places on the web for budding filmmakers to learn more about their craft. Naturally some are better than others, and forums often take on the atmosphere of a biker bar. Here are some of my top suggestions (in no particular order) for getting a better handle on the art of filmmaking, especially when it comes to editing.

Tony Zhou does a great job of film analysis – breaking down scene construction and finding similarities in a director’s signature style(s) or a particular technique. This Vimeo page puts these all in one handy spot.

This Guy Edits covers similar territory to Zhou’s videos, with the added angle of actually watching the editor work through his thought process while cutting an indie film. The brainchild of film editor Sven Pape, these videos have gained significant traction. In addition, he’s been doing this with Final Cut Pro X, so I’m sure that’s added even more interest.

Vashi Visuals is the site of editor Vashi Nedomansky. His blog covers a lot of film analysis without relying on videos like the first two I’ve mentioned. Nevertheless, there are a number of insights, ranging from film aspect ratio changes to the workflow he followed on such large-scale films as “6 Below”.

While I’m partial to my own interview articles, it’s hard to beat Steve Hullfish’s Art of the Cut interview series for sheer depth and volume. Hullfish is an editor, colorist, trainer, and author with several books to his credit, including a curated version of some of these interviews. It’s a great place to go to understand what the leading editors go through in cutting films.

If you are more partial to cinematography than editing, you’ve got to check out the site and forum of renowned director of photography Roger Deakins. It’s free to sign up to the forum, where you can learn a ton about film production. Deakins also chimes in at times, schedule permitting.

Another cinematographer gracious with his time is David Mullen. The “Ask David Mullen ANYTHING” thread was started on one of the forums at RedUser a few years ago and at present has spawned nearly 600 pages. In this thread, Mullen responds to user questions about cinematography with answers from his personal experience and study of the art.

Like Hullfish, another ProVideo Coalition contributor is Brian Hallett, with his Art of the Shot series. He’s adapted the interview concept to talk with leading cinematographers about their art, choice of gear, and more.

Of course, there’s more to filmmaking than editing and cinematography. It all starts with the word, so let me include Scott Smith’s blog. Smith is a writer/director whose Screenwriting from Iowa blog covers a wide range of topics – from film, writing, and TV analysis to his own experiences and travels.

I’ll wrap this up with some “honorable mentions”. Plenty of companies have blogs on their websites, but some invite collaborators to submit guest posts, so it’s not just content related to that company’s product. Of course, some, like Avid, focus on product-centric posts. Here are a few sites worth reading:

Academy Originals

Avid

Premium Beat

Frame IO

Wipster

Note: Some of these links load better in Chrome than in Safari.

©2017 Oliver Peters

Nocturnal Animals

Some feature films are entertaining popcorn flicks, while others challenge the audience to go deeper. Writer/director Tom Ford’s (A Single Man) second film, Nocturnal Animals, definitely fits into the latter group. Right from the start, the audience is confronted with a startling and memorable main title sequence, which we soon learn is actually part of an avant-garde art gallery opening. From there the audience never quite knows what’s around the next corner.

Susan Morrow (Amy Adams) is a privileged Los Angeles art gallery owner who seems to have it all, but whose life is completely unfulfilled. One night she receives an unsolicited manuscript from Edward Sheffield (Jake Gyllenhaal), her ex-husband with whom she’s been out of touch for years. With her current husband (Armie Hammer) away on business, she settles in for the night to read the novel. She is surprised to discover it is dedicated to her. The story being told by Edward is devastating and violent, and it triggers something in Susan that arouses memories of her past love with the author.

Nocturnal Animals keeps the audience on edge and is told through three parallel storylines – Susan’s current reality, flashbacks of her past with Edward, and the events that are unfolding in the novel. Managing this delicate balancing act fell to Joan Sobel, ACE, the film’s editor. In her film career, Sobel has worked with such illustrious directors as Quentin Tarantino, Billy Bob Thornton, Paul Thomas Anderson and Paul Weitz.  She was Sally Menke’s First Assistant Editor for six-and-a-half years on four films, including Kill Bill, vol. 1 and Kill Bill, vol. 2.  Sobel also edited the Oscar-winning short dark comedy, The Accountant.  This is her second feature with Tom Ford at the helm.

Theme and structure

In our recent conversation, Joan Sobel discussed Nocturnal Animals. She says, “At its core, this film is about love and revenge and regret, with art right in the middle of it all. It’s about people we have loved and then carelessly discarded, about the cruelties that we inflict upon each other, often out of fear or ambition or our own selfishness.  It is also about art and the stuff of dreams.  Susan has criticized Edward’s ambition as a writer. Edward gets Susan to feel again through his art – through that very same writing that Susan has criticized in the past. But art is also Edward’s vehicle for revenge – revenge for the hurt that Susan has caused him during their past relationship. The film uses a three-pronged story structure, which was largely as Tom scripted. The key was to find a fluid and creative way to transition from one storyline to the other, to link those moments emotionally or visually or both. Sometimes that transition was triggered by a movement, but other times just a look, a sound, a color or an actor’s nuanced glance.”

Nocturnal Animals was filmed (yes, on film, not digital) over 31 days in California, with the Mojave Desert standing in for west Texas. Sobel was cutting while the film was being shot and turned in her editor’s cut about a week after the production wrapped. She explains, “Tom likes to work without a large editorial infrastructure, so it was just the two of us working towards a locked cut. I finished my cut in December and then we relocated to London for the rest of post. I always put together a very polished first cut, so that there is already an established rhythm and a flow. That way the director has a solid place to begin the journey. Though the movie was complex with its three-pronged structure – along with the challenge of bringing to life the inner monologue that is playing in Susan’s head – the movie came together rather quickly. Tom’s script was so well written and the performances so wonderful that by the end of March we pretty much had a locked cut.”

The actors provided fruitful ground for the editor.  Sobel continues, “It was a joy to edit Amy Adams’ performance. She’s a great actress, but when you actually edit her dailies, you get to see what she brings to the movie. Her performance is reliant less on dialogue (she actually doesn’t have many lines), instead emphasizing Amy’s brilliance as a film actor in conveying emotion through her mind and through her face and her eyes.”

“Tom is a staggering talent, and working with him is a total joy.  He’s fearless and his creativity is boundless.  He is also incredibly generous and very, very funny (we laugh a lot!), and we share an endless passion for movies.  Though the movie is always his vision, his writing, he gravitates towards collaboration. So we would get quite experimental in the cut. The trust and charm and sharp, clear intelligence that he brings into the cutting room resulted in a movie that literally blossoms with creativity. Editing Nocturnal Animals was a totally thrilling experience.”

Tools of the trade

Sobel edited Nocturnal Animals with Avid Media Composer. Although she’s used other editing applications, Media Composer is her tool of choice. I asked about how she approaches each new film project. She explains, “The first thing I do is read the script. Then I’ll read it again, but this time out loud. The rhythms of the script become more lucid that way and I can conceptualize the visuals. When I get dailies for a scene, I start by watching everything and taking copious notes about every nuance in an actor’s performance that triggers an emotion in me, that excites me, that moves me, that shows me exactly where this scene is going. Those moments can be the slightest look, a gesture, a line reading.”

“I like to edit very organically based on the footage. I know some editors use scene cards on a wall or they rely on Avid’s Script Integration tools, but none of those approaches are for me. Editing is like painting – it’s intuitive. My assistants organize bins for themselves in dailies order. Then they organize my bins in scene/script order. I do not make selects sequences or ‘KEM rolls’. I simply set up the bins in frame view and then rearrange the order of clips according to the flow – wide to tight and so on. As I edit, I’m concentrating on performance and balance. One little trick I use is to turn off the sound and watch the edit to see what is rhythmically and emotionally working. Often, as I’m cutting the scene, I find myself actually laughing with the actor or crying or gasping! Though this is pretty embarrassing if someone happens to walk into my cutting room, I know that if I’m not feeling it, then the audience won’t either.”

Music and sound are integral for many editors, especially Sobel. She comments, “I love to put temp music into my editor’s cuts. That’s a double-edged sword, though, because the music may or may not be to the taste of the director. Though Tom and I are usually in sync when it comes to music, Tom doesn’t like to start off with temp music in the initial cut, so I didn’t add it on this film. Once Tom and I started working together, we played with music to see what worked. This movie is one that we actually used very little music in and when we did, added it quite sparingly. Mostly the temp music we used was music from some of Abel’s [Korzeniowski, composer] other film scores. I also always add layers of sound effects to my tracks to take the movie and the storytelling to a further level. I use sound to pull your attention, to define a character, or a mood, or elevate a mystery.”

Unlike many films, Nocturnal Animals flew through the post process without any official test screenings. Its first real screening was at the Venice Film Festival where it won the Silver Lion Grand Jury Prize. “Tom has the unique ability to both excite those working with him and to effortlessly convey his vision, and he had total confidence in the film. The film is rich with many layers and is the rare film that can reveal itself through subsequent viewings, hopefully providing the audience with that unique experience of being completely immersed in a novel, as our heroine becomes immersed in Nocturnal Animals,” Sobel says. The film opened in the US during November and is a Focus Features release.

Check out more with Joan Sobel at “Art of the Cut”.

Originally written for Digital Video magazine / Creative Planet Network.

©2017 Oliver Peters