Editing and Music Composition

A nip is in the air and snow is falling in some regions. All signs of Fall and Winter soon to come. The sights, smells, and sounds of the season will be all around us. Festive events. Holiday celebrations. Joy. But no other season is so associated with memorable music to put us in the mood. That makes this a perfect time to talk about how video and film editing has intrinsic similarities with musical composition.

Fellow editor Simon Ubsdell has a lot of thoughts on the subject – perfect for one of my rare guest blog posts. Simon is Creative Director of Tokyo Productions, a London-based post-production shop specializing in trailers. Simon is multi-talented with experience in music, audio post, editing, and software development.

Grab a cup of holiday cheer and sit back for this enlightening read.

______________________________________

Simon Ubsdell – Editing and Music Composition

There is a quote attributed to several different musicians, including Elvis Costello, Miles Davis, and Thelonious Monk, which goes: “Talking about music is like dancing about architecture”. It sounds good and it seems superficially plausible, but I think it’s wrong on two levels. Firstly, a good choreographer would probably say that it’s perfectly possible to use dance to say something interesting about architecture, and a good architect might well say that they could design a building that said something about dance. Secondly, I think it’s unhelpful to imply that one art form can’t tell us useful things about another. We can learn invaluable lessons both from the similarities and the differences, particularly if we focus on process rather than the end result.

Instead, here’s Ingmar Bergman: “I would say that there is no art form that has so much in common with film as music. Both affect our emotions directly, not via the intellect. And film is mainly rhythm; it is inhalation and exhalation in continuous sequence.”

Bergman is certainly not the only filmmaker to have made this observation and I think everyone can recognise the essential truth of it. However, what I want to consider here is not so much what film and music have in common as art forms, but rather whether the process of music composition can teach us anything useful about the process of film editing. As an editor who also composes music, I have found thinking about this to be useful in both directions.

In films you’ll often see a composer sitting down at a piano and laboriously writing a score one note after another. He bangs around until he finds one note and then he scribbles it into the manuscript; then he bangs around looking for the next one. Music composition is made to look like a sequential process where each individual note is decided upon (with some difficulty usually!) before moving on to the next. The reality is of course that music composition doesn’t work this way at all. So I’d like to look at some of the ways that one does actually go about writing a piece of music and how the same principles might apply to how we edit films. Because music is such a vast subject, I’m going to limit myself largely to the concepts of classical music composition, but the same overall ideas apply to whatever kind of music you might be writing in whatever genre.

What both music and film have in common is that they unfold over time: they are experienced sequentially. So the biggest question that both the composer and the editor need to address is how to organise the material across time, and to do that we need to think about structure.

Musical Structure

From the Baroque period onwards and even before, composers have drawn on a very specific set of musical structures around which to build their compositions. 

The Canon (as in Pachelbel’s famous example) has the same theme repeated over and over by voices entering in turn, with added ornamentation that becomes increasingly elaborate. The Minuet and Trio is an A/B/A sandwich in which a theme is repeated (Minuet), but with a contrasting middle section (Trio). The Rondo is a repeated theme that alternates with multiple contrasting sections, in other words A/B/A/C/A/D, etc. The Theme and Variations sets out a basic theme and follows it with a series of elaborations in different keys, tempi, time signatures, and so on.

Sonata Form, widely used for the opening movements of most symphonic works, is a much more sophisticated scheme that starts by setting out two contrasting themes (the “1st and 2nd Subjects”) in two different keys (the “Exposition”), before moving into an extended section where those ideas undergo numerous changes and augmentations and key modulations (the “Development Section”), before returning to the original themes, both now in the home key of the piece (the “Recapitulation Section”), often leading to a final epilogue called the “Coda”.

In all these cases the structure is built out of thematic and other contrasts, and contrast is a word I’m going to be coming back to repeatedly here, because it goes to the core of where music composition and editing come together.

Now the point of using musical structures of this kind is that the listener can form an idea of how the piece is unfolding even when hearing it for the first time. They provide a map that helps you orientate yourself within the music, so it doesn’t come across as just some kind of confused and arbitrary ramble across terrain that’s hard to read. Music that doesn’t come with signposts is not easy to listen to with concentration, precisely because you don’t know where you are. (Of course, the humble pop song illustrates this, too. We can all recognise where the verse ends and the chorus begins and the chorus repetitions give us clear anchor points that help us understand the structure. The difference with the kind of classical music I’m talking about is that a pop song doesn’t have to sustain itself for more than a few minutes, whereas some symphonies last well over an hour and that means structure becomes vastly more important.) 

What structure does is effectively twofold: on the one hand it gives us a sense of comprehensibility, predictability, even familiarity; and on the other hand it allows the composer to surprise us by diverging from what is expected. The second part obviously follows from the first. If we don’t know where we are, then we don’t know what to expect and everything is a constant surprise. And that means nothing is a surprise. We need familiarity and comprehensibility in order to be able to be surprised by the surprises when they come. Conversely, music that is wholly without surprises gets dull very quickly. Just as quickly as music that is all surprise, because again it offers us no anchor points. 

Editing Structure

So what comparisons can we draw with editing in terms of structure? Just as with our fictional movie “composer” sitting at the piano picking out one note after another, so you’ll find that many newcomers to editing believe that that’s how you put together a film. Starting at the beginning, you take your first shot and lay it down, and then you go looking for your next shot and you add that, and then the next one and the next one. Of course, you can build a film this way, but what you are likely to end up with is a shapeless ramble rather than something that’s going to hold the viewer’s attention. It will be the equivalent of a piece of music that has no structural markers and doesn’t give us the clues we need to understand where we are and where we are going. Without those cues the viewer quickly gets lost and we lose concentration. Not understanding the structure means we can’t fully engage with the film.

So how do we go about creating structure in our editing? Music has an inherently much more formal character, so in many ways the composer has an easier job, but I’d suggest that many of the same principles apply.

Light and Shade in Music

Music has so many easy-to-use options to help define structure. We have tempo – how fast or slow the music is at any one point. Rhythm – the manner in which accented notes are grouped with non-accented notes. Pitch – how high or low the musical sounds are. Dynamics – how loud or soft the music is, and how soft becomes loud and vice versa. Key – how far we have moved harmonically from the home key of the piece. Mode – whether we are experiencing the bright optimism of a major key or the sombre darkness of a minor key (yes, that’s a huge over-simplification!). Harmony – whether we are moving from the tension of dissonance to the resolution of consonance, or vice versa.

All of these options allow for contrasts – faster/slower, brighter/darker, etc. It’s out of those contrasts that we can build structure. For example, we can set out our theme in a bright, shiny major key with a sprightly rhythm and tempo, and then move into a slow minor key variation shrouded in mystery and suspense. It’s from those contrasts that we grasp the musical structure. And of course moving through those contrasts becomes a journey. We’re not fixed in one place, but instead we’re moving from light to dark, from peaceful to agitated, from tension to resolution, and so on. Music satisfies and nourishes and delights and surprises us, because it takes us on those journeys and because it is structured so that we experience change.

Light and Shade in Editing

So what are the editing equivalents? Let’s start with the easiest scenario and that’s where we are cutting with music. Because music has the properties we’ve discussed above, we can leverage those to give our films the same contrasts. We can change the pace and the mood simply by changing the pace and mood of the music we use. That’s easy and obvious, but very often overlooked. Far too many music-driven pieces are remorselessly monotonous, relying far too heavily for far too long on music of the same pace and mood. That very quickly dissipates the viewer’s engagement for the reasons we have talked about. Instead of feeling as though we are going on a journey of contrasts, we are stuck in one repetitive loop and it’s dull – and that means we stop caring and listening and watching. Instead of underscoring where the film is going, it effectively tells us that the film is going nowhere, except in circles.

(Editing Tip: So here’s a suggestion: if you’re cutting with pre-composed music, don’t let that music dictate the shape of your film. Instead cut the music so it works for you. Make sure you have changes of pace and intensity, changes of key and mode, that work to enhance the moments that are important for your film. Kill the music, or change it, or cut it so that it’s driving towards the moments that really matter. Master it and don’t let it master you. Far too often we see music that steamrolls through everything, obliterating meaning, flattening out the message – music that fails to point up what’s important and de-emphasise what is not. Be in control of your structure and don’t let anything dictate what you are doing, unless it’s the fundamental meaning you are trying to convey.)

(Footnote: Obviously what I’ve said here about music applies to the soundtrack generally. Sound is one of the strongest structural markers we have as editors. It builds tension and relaxation, it tells us where moments begin and end, it guides us through the shape of the film in a way that’s even more important than the pictures.)

And that brings me to a really important general point. Too many films feel like they are going in circles, because they haven’t given enough thought to when and how the narrative information is delivered. So many filmmakers think it’s important to tell us everything as quickly as possible right up front. They’re desperate to make sure they’ve got their message across right here, right now, in its entirety. And then they simply end up recycling stuff we already know and that we care about less and less with each repetition. It’s a bit like a composer piling all his themes and all their variations into the first few bars (a total, unapproachable cacophony) and then being left with nothing new to say for the rest of the piece.

A far better approach is to break your narrative down into a series of key revelations and delay each one as long as you dare. Narrative revelations are your key structural points and you must cherish them and nurture them and give them all the love you can and they will repay you with enhanced audience engagement. Whatever you do, don’t throw them away unthinkingly and too soon. Every narrative revelation marks a way station on the viewer’s journey, and those way stations are every bit as crucial and valuable as their musical equivalents. They are the map of the journey. They are why we care. They are the hooks that make us re-engage.

Tension and Relaxation

This point about re-engagement is important too and it brings me back to music. Music that is non-stop tension is exhausting to listen to, just as music that is non-stop relaxation quickly becomes dull. As we’ve discussed, good music moves between tension and relaxation the whole time at both the small and the large scale, and that alternation creates and underpins structure. We feel the relaxation, because it has been preceded by tension and vice versa.

And the exact same principle applies to editing. We want the viewer to experience alternating tension and relaxation, moments of calm and moments of frenzied activity, moments where we are absorbing lots of information and moments where we have time to digest it. (Remember Bergman talking about “inhalation and exhalation”.) Tension/relaxation applies at every level of editing, from the micro-level of the individual cuts to the macro level of whole scenes and whole sequences.

As viewers we understand very well that a sudden burst of drama after a period of quiet is going to be all the more striking and effective. Conversely we know about the effect of getting our breath back in the calms that come after narrative storms. That’s at the level of sequences, but even within scenes, we know that they work best when the mood and pace are not constant, when they have corners and changes of pace, and their own moments of tension and relaxation. Again it’s those changes that keep us engaged. Constant tension and its opposite, constant relaxation, have the opposite effect. They quickly end up alienating us. The fact is we watch films, because we want to experience that varied journey – those changes between tension and relaxation.

Even at the level of the cut, this same principle applies. I was recently asked by a fellow editor to comment on a flashy piece of cutting that was relentlessly fast, with no shot even as long as half a second. Despite the fact that the piece was only a couple of minutes long, it felt monotonous very quickly – I’d say after barely 20 seconds. Whereas of course, if there had been even just a few well-judged changes of pace, each one of those would have hooked me back in and re-engaged my attention. It’s not about variety for variety’s sake, it’s about variety for structure’s sake.

The French have an expression: “reculer pour mieux sauter”, which roughly means taking a step back so you can jump further, and I think that’s a good analogy for this process. Slower shots in the context of a sequence of faster shots act like “springs”. When faster shots hit slower shots, it’s as if they apply tension to the spring, so that when the spring is released the next sequence of faster shots feels faster and more exciting. It’s the manipulation of that tension of alternating pace that creates exciting visceral cutting, not just relentlessly fast cutting in its own right.

Many great editors build tension by progressively increasing the pace of the cutting, with each shot getting incrementally shorter than the last. We may not be aware of that directly as viewers, but we definitely sense the “accelerated heartbeat” effect. The obvious point to make is that acceleration depends on having started slow, and deceleration depends on having increased the pace. Editing effects are built out of contrasts. It’s the contrasts that create the push/pull effect on the viewer and bring about engagement.

(Editing Tip: It’s not strictly relevant to this piece, but I wanted to say a few words on the subject of cutting to music. Many editors seem to think it’s good practice to cut on the downbeats of the music track and that’s about as far as they ever get. Let’s look at why this strategy is flawed. If our music track has a typical four beats to the bar, the four beats have the following strengths: the first, the downbeat, is the dominant beat; the third beat (often the beat where the snare hits) is the second strongest beat; then the fourth beat (the upbeat); and finally the second beat, the weakest of the four.

Cutting on the downbeat creates a pull of inertia, because of its weight. If you’re only ever cutting on that beat, then you’re actually creating a drag on the flow of your edit. If you cut on the downbeat and the third beat, you create a kind of stodgy marching rhythm that’s also lacking in fluid forward movement. Cutting on the upbeat, however, because it’s an “offbeat”, actually helps to propel you forward towards the downbeat. What you’re effectively doing is setting up a kind of cross-rhythm between your pictures and your music, and that has a really strong energy and flow. But again the trick is to employ variety and contrast. Imagine a drummer playing the exact same pattern in each bar: that would get monotonous very quickly, so what the drummer actually does is to throw in disruptions to the pattern that build the forward energy. He will, for example, de-emphasise the downbeat by exaggerating the snare, or he will even shift where the downbeat happens, and add accents that destabilise the four-square underlying structure. And all that adds to the energy and the sense of forward movement. And that’s the exact principle we should be aiming for when cutting to music.

There’s one other crucial, but often overlooked, aspect to this: making your cut happen on a beat is far less effective than making a specific moment in the action happen on a beat. That creates a much stronger sense of forward-directed energy and a much more satisfying effect of synchronisation overall. But that’s not to say you should only ever cut this way. Again variety is everything, but always with a view to what is going to work best to propel the sequence forward, rather than let it get dragged back. Unless, of course, dragging back on the forward motion is exactly what you want for a particular moment in your film, in which case, that’s the way to go.)

Building Blocks

You will remember that our fictional composer sits down at the piano and picks out his composition note by note. The implicit assumption there is that individual notes are the building blocks of a piece of music. But that’s not how composers work. The very smallest building block for a composer is the motif – a set of notes that exists as a tiny seed out of which much larger musical ideas are encouraged to grow. The operas of Wagner, despite notoriously being many hours long, are built entirely out of short motifs that grow through musical development to truly massive proportions. You might be tempted to think that a motif is the same thing as a riff, but riffs are merely repetitive patterns, whereas motifs contain within them the DNA for vast organic structures and the motifs themselves can typically grow other motifs.

Wagner is, of course, more of an exception than a rule and other composers work with building blocks on a larger scale than the simple motif. The smallest unit is typically something we call a phrase, which might be several bars long. And then again one would seldom think of a phrase in isolation, since it only really exists as part of a larger thematic whole. If we look at the famous opening to Mozart’s 40th Symphony, we can see that he starts with a two-bar phrase that rises on its last note, which is answered by a phrase that descends back down from that note. The first phrase is then revisited along with its answering phrase – both shifted one step lower.

But those resulting eight bars are only half of the complete theme, while the complete 1st Subject is 42 bars long. So what is Mozart’s basic building block here? It most certainly isn’t a note, or even a phrase. In this case it’s something much more like a combination of a rhythm pattern (da-da-Da) and a note pattern (a falling interval of two adjacent notes). But built into that is a clear sense of how those patterns are able to evolve to create the theme. In other words, it’s complicated.

The fundamental point is that notes on their own are nothing; they are inert; they have no meaning. It’s only when they form sequences that they start to become music.

The reason I wanted to highlight this point is that I think it too gives us a useful insight into the editing process. The layperson tends to think of the single shot as being the basic building block, but just as single notes on their own are inert, so the single shot on its own (unless it’s an elaborate developing shot) is typically lacking in meaning. It’s when we build shots into sequences that they start to take on life. It’s the dynamic, dialectical interplay of shots that creates shape and meaning and audience engagement. And that means it’s much more helpful to think of shot sequences as the basic building blocks. It’s as sequences that shots acquire the potential to create structure. Shots on their own do not have that quality. So it pays to have an editing strategy that is geared towards the creation and concatenation of “sequence modules”, rather than simply a sifting of individual shots. That’s a huge subject that I won’t go into in any more detail here, but which I’ve written about elsewhere.

Horizontal and Vertical Composition

Although the balance keeps shifting down the ages, music is both horizontal and vertical and exists in a tension between those aspects. Melody is horizontal – a string of notes that flows left to right across the page. Harmony is vertical – a set of notes that coexist in time. But these two concepts are not in complete opposition. Counterpoint is what happens when two or more melodies combine vertically to create harmony. The fugue is one of the most advanced expressions of that concept, but there are many others. It’s a truly fascinating, unresolved question that runs throughout the history of music, with harmony sometimes in the ascendant and sometimes melody.

Melody typically has its own structure, most frequently seen in terms of groups of four bars, or multiples of four bars. It tends to have shapes that we instinctively understand even when hearing it for the first time. Harmony, too, has a temporal structure, even though we more typically think of it as static and vertical. Vertical harmonies tend to suggest a horizontal direction of travel, again based on the notion of tension and relaxation, with dissonance resolving towards consonance. Harmonies typically point to where they are planning to go, although of course, just as with melody, the reason they appeal to us so much is that they can lead us to anticipate one thing and then deliver a surprise twist.

In editing we mostly consider only melody, in other words, how one shot flows into another. But there is also a vertical, harmonic component. It’s only occasionally that we layer our pictures to combine them vertically (see footnote). But we do it almost all the time with sound – layering sound components to add richness and complexity. I suppose one way of looking at this would be to think of pictures as the horizontal melody and the soundtrack as the vertical harmony, or counterpoint.

One obvious way in which we can approach this is to vary the vertical depth to increase and decrease tension. A sound texture that is uniformly dense quickly becomes tiresome. But if we think in terms of alternating moments where the sound is thickly layered and moments where it thins out, then we can again increase and decrease tension and relaxation.

(Footnote: One famous example of vertical picture layering comes in Apocalypse Now where Martin Sheen is reading Kurtz’s letter while the boat drives upstream towards the waiting horror. Coppola layers up gliding images of the boat’s passage in dissolves that are so long they are more like superimpositions – conveying the sense of the hypnotic, awful, disorientating journey into the unknowable. But again contrast is the key here, because punctuating that vertical layering, Coppola interjects sharp cuts that hit us full in the face: suspended corpses, the burning helicopter in the branches of a tree. The key thing to notice is the counterpoint between the hard cuts and the flowing dissolves/superimpositions. The dissolves lull us into an eerie fugue-like state, while the cuts repeatedly jolt us out of it to bring us face to face with the horror. The point is that they both work together to draw us inexorably towards the climax. The cuts work, because of the dissolves, and the dissolves work because of the cuts.)

Moments

The moments that we remember in both music and films are those points where something changes suddenly and dramatically. They are the magical effects that take your breath away. There is an incredibly famous cut in David Lean’s Lawrence of Arabia that is a perfect case in point. Claude Rains (Mr. Dryden) and Peter O’Toole (Lawrence) have been having a lively discussion about whether Lawrence really understands how brutal and unforgiving the desert is going to be. O’Toole insists that “it’s going to be fun”. He holds up a lighted match, and we cut to a close-up as he blows it out. On the sound of him blowing, we cut to an almost unimaginably wide shot of the desert as the sun rises almost imperceptibly slowly in what feels like complete silence. The sudden contrast of the shot size, the sudden absence of sound, the abruptness of cutting on the audio of blowing out the match – all of these make this one of the most memorable moments in film history. And of course, it’s a big narrative moment too. It’s not just clever, it has meaning. 

Or take another famous moment, this time from music. Beethoven’s massive Choral Symphony, the Ninth, is best known for its famous final movement, the Ode to Joy, based on Schiller’s poem of the same name. The finale follows on from a slow movement of celestial tranquillity and beauty, but it doesn’t launch immediately into the music that everyone knows so well. Instead there is a sequence built on the most incredible dissonance, which Wagner referred to as “the terror fanfare”. Beethoven has the massed ranks of the orchestra blast out a phenomenally powerful fortissimo chord that stacks up all seven notes of the D minor harmonic scale. It’s as if we are hearing the foul demons of hatred and division being sent screeching back to the depths of hell. And as the echoes of that terrifying sound are still dying away, we suddenly hear the solo baritone, the first time in nearly an hour of music that we have heard a human voice: “O Freunde, nicht diese Töne”, “Friends, let us not hear these sounds”. And so begins that unforgettable ode to the brotherhood of all mankind.

The point about both the Lawrence of Arabia moment and the Beethoven moment is that in each case, they form giant pivots upon which the whole work turns. The Lawrence moment shows us one crazy Englishman pitting himself against the limitless desert. The Beethoven moment gives us one lone voice stilling the forces of darkness and calling out for something better, something to unite us all. These are not mere stylistic tricks, they are fundamental structural moments that demand our attention and engage us with what each work is really about.

I’m not suggesting that everything we cut is going to have moments on this kind of epic scale, but the principle is one we can always benefit from thinking about and building into our work. When we’re planning our edit, it pays to ask ourselves where we are going to make these big turning points and what we can do with all the means at our disposal to make them memorable and attention-engaging. Our best, most important stuff needs to be reserved for these pivotal moments and we need to do everything we can to do it justice. And the best way of doing that, as Beethoven and David Lean both show us, is to make everything stop.

When the Music Stops

Arguably the greatest composer ever is credited with one of my favourite quotes about music: “The music is not in the notes, but in the silence between.” Mozart saw that the most magical and profound moments in music are when the music stops. The absence of music is what makes music. To me that is one of the most profound insights in art.

From an editing point of view, that works, too. We need to understand the importance of not cutting, of not having sound, of not filling every gap, of creating breaths and pauses and beats, of not rushing onto the next thing, of allowing moments to resonate into nothingness, of stepping away and letting a moment simply be.

The temptation in editing is always to fill every moment with something. It’s a temptation we need to resist wherever we can. Our films will be infinitely better for it. Because it’s in those moments that the magic happens.

Composing and Editing with Structure

I hope by now you’ll agree with me about the fundamental importance of structure in editing. So let’s come back to our original image of the composer hammering out his piece of music note by note, and our novice editor laying out his film shot by shot.

It should be obvious that a composer needs to pre-visualise the structure of the piece before starting to think about the individual notes. At every level of the structure he needs to have thought about where the structural changes might happen – both on a large and small scale. He needs to plan the work in outline: where the key changes are going to happen, where the tempo shifts from fast to slow or slow to fast, where the tension escalates and where it subsides, where the whole orchestra is playing as one and where we hear just one solitary solo line. 

It goes without saying that very few composers have ever plotted out an entire work in detail and then stuck rigidly to the plan. But that’s not the point. The plan is just a plan until a better one comes along. The joy of composition is that it throws up its own unexpected surprises, ideas that grow organically out of other ideas and mushroom into something bigger, better and more complex than the composer could envisage when starting out. But those ideas don’t just shoot off at random. They train themselves around the trelliswork of the original structure. 

As I’ve mentioned, classical composers have it easy, because they can build upon pre-conceived structures like Sonata Form and the rest. As editors we don’t have access to the same wealth of ready-built conventions, but we do have a few.

One of the structures that we very frequently call upon is the famous three-act structure. It works not only for narrative, but for pretty much any kind of film you can think of. The three-act structure does in fact have a lot in common with Sonata Form. Act One is the Exposition, where we set out the themes to be addressed. Act Two is the Development Section, where the themes start to get complicated and we unravel the problems and questions that they pose. And Act Three is the Recapitulation (and Coda), where we finally resolve the themes set out in Act One. Almost anything you cut at whatever length can benefit from being thought of in these structural terms: a) set out your theme or themes; b) develop your themes and explore their complexities; c) resolve your themes (or at least point to ways in which they might be resolved). And make sure your audience is aware of how those sections break down. As an editor who has spent a lot of my working life cutting movie trailers, I know that every experienced trailer editor deploys three-act structure pretty much all the time and works it very hard indeed.

Of course, scripted drama comes into the cutting room with its own prebuilt structure, but the script is by no means necessarily the structural blueprint for the finished film. Thinking about how to structure what was actually shot (as against what was on the page) is still vitally important. The originally conceived architecture might well not actually function as it was planned, so we can’t simply rely on that to deliver a film that will engage as it should. The principles that we’ve discussed of large scale composition, of pace, of contrast, of rhythm, and so on are all going to be useful in building a structure that works for the finished film.

Other kinds of filmmaking rely heavily on structural planning in the cutting room and a huge amount of work can go into building the base architecture. And it really helps if we think of that structural planning as more than simply shifting inert blocks into a functional whole. If we take inspiration from the musical concepts described here, we can create films that breathe a far more dynamic structural rhythm, that become journeys through darkness and light, through tension and relaxation, between calm and storm, journeys that engage and inspire.

Conclusion

Obviously this is just an overview of what is in reality a huge subject, but what I want to stress is that it really pays to be open to thinking about the processes of editing from different perspectives. Music, as a time-based art form, has so many useful lessons to draw from, both in terms of large scale architecture and small scale rhythms, dynamics, colours, and more. And those lessons can help us to make much more precise, refined and considered decisions about editing practice, whatever we are cutting.

– Simon Ubsdell

For more of Simon’s thoughts on editing, check out his blog post Bricklayers and Sculptors.

© 2018 Simon Ubsdell, Oliver Peters
