LumaFusion – an iOS NLE

As Apple’s iOS platform becomes more powerful, applications for it begin to rival the power and complexity of desktop software. LumaFusion is a recently introduced nonlinear video editing product from Luma Touch. Its founders previously created the iOS NLE that shipped under the Avid, Pinnacle, and (now) Corel brands, but LumaFusion takes a fresh approach. Luma Touch currently offers three iOS products: LumaClip (a single-clip editor), LumaFX (video effects for clips), and LumaFusion (a full-fledged NLE that integrates the features of the other two products). All three apps run on what Luma Touch dubs its Spry Engine, a framework for iOS video applications.

LumaFusion works on both the iPhone and iPad; however, the iPad version comes closest to a professional desktop experience. Ideally you’ll want one of the iPad Pros, but it runs perfectly fine on an iPad Air 2 with the A8X chip, which is what I used. I’ve tried other iOS NLEs, including Adobe Clip, iMovie, and TouchEdit, each of which has its pros and cons. For instance, iMovie doesn’t deal with fractional video frame rates, and TouchEdit tries to mimic a flatbed film editor. That brings me to LumaFusion, which has been designed as a modern, professional-grade NLE for the iOS platform.

The iOS ecosystem

Like other iOS apps that tie into the ecosystem, LumaFusion can import media from iTunes, Photos, and third-party applications, like FiLMiC Pro. As a “pro” app, it understands various whole and fractional frame rates and sizes up to 3840 x 2160 (UHD 4K), depending on your device. However, for me, the interest is not in cutting things that I’ve shot with my iPad, but rather in fitting the app into an offline/online editing workflow. This means import and export are critical.

If you own an iPad Pro, then you can get an SD card reader as an accessory. With the card reader, only native DSLR movie clips can be imported into the Photos app; other file formats are not supported. Typically, you are going to transfer media using cloud syncing tools, like Dropbox, Box, OneDrive, etc. LumaFusion also includes a number of royalty-free music cuts, which can be accessed through its integrated media browser.

To use it as a rough-cut tool, simply create H.264 proxies on your desktop system and sync those to the iPad using Dropbox (or another cloud service). I created a test project of about 60 clips (720p, 6Mbps, 29.97fps) that only consumed 116MB of storage space. Therefore, even a free 2GB Dropbox account would be fine. Within LumaFusion, import the files from Dropbox and start editing.
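If you script that proxy pass rather than exporting from an NLE, a batch tool such as ffmpeg makes quick work of it. The following is a minimal Python sketch, assuming ffmpeg is installed and using placeholder folder paths of my own invention, that transcodes camera originals to the same 720p/6Mbps/29.97 spec as my test project and drops them into a Dropbox folder for syncing:

import subprocess
from pathlib import Path

# Placeholder paths - adjust for your own system
SOURCE = Path("/Volumes/Media/MyJob/camera_originals")
PROXIES = Path.home() / "Dropbox" / "MyJob_proxies"
PROXIES.mkdir(parents=True, exist_ok=True)

for clip in sorted(SOURCE.rglob("*.mov")):
    out = PROXIES / (clip.stem + ".mp4")
    # 1280 x 720, ~6Mbps H.264 at 29.97fps - the spec used for the test project above
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=1280:720",
        "-r", "30000/1001",
        "-c:v", "libx264", "-b:v", "6M",
        "-c:a", "aac", "-b:a", "192k",
        str(out),
    ], check=True)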

Luma Touch will soon start beta testing LumaConnect – a macOS companion application designed to facilitate offline/online editing roundtrips. It will feature automatic iOS proxy creation and the ability to relink high-res media – as well as any iOS-captured content – back on your desktop computer. LumaConnect will also allow the rendering of projects as Apple ProRes files.

User interface and editing workflow

Overall, the interface design and editing model approximate Apple Final Cut Pro X more closely than any other NLE. The app is built around a media pool with various editing projects (sequences). This is similar to the approach of FCPX 10.0, which had separate Events (bins) and Projects (sequences), but no combined Libraries. It’s almost like FCPX “Lite” for iOS.

There are three main windows: media browser, timeline, and a single combination viewer. Fly-out panels for tools and mode changes provide access to the clip editing and effects modules. These modules are, in fact, LumaClip and LumaFX integrated into LumaFusion. The timeline is “magnetic”, much like FCPX. Clip construction on the timeline also follows the layout of primary and connected clips, rather than discrete target tracks. Up to three integrated audio/video clips can be stacked vertically, along with another three audio-only clips, for a total of six audio “tracks”. Audio can be adjusted through a fly-out track mixer. LumaFusion includes four clip editing tools: speed and reverse, frame fit, color effects, and audio editing. In addition, there’s a multi-layered title tool, along with a number of customizable title templates to choose from. Clip-based volume and video effect adjustments can be keyframed.

Effects are pretty sophisticated and would often be GPU-accelerated on a desktop system. These include color correction, blurs, transforms, transitions, and more. You can stack a number of these onto a single clip without any impact on playback. The effects priority can be rearranged and the interface also provides an indication of how many resources you are tying up on the iPad.

The editing experience

Serious video editing on an iPad isn’t for everyone, but the more I worked with it, the more I enjoyed the experience. If you have an iPad-compatible keyboard, LumaFusion supports some generic commands, including JKL playback and I and O for mark-in and mark-out. There are also a few FCPX keystrokes, like W for insert/overwrite (depending on which edit mode is selected). Unfortunately, J (reverse playback) only works in the clip viewer, not in the timeline. I’d love to see a more extensive keyboard command set. Naturally, being an iOS app, everything can be accessed via touch, which works best (though the Pencil isn’t essential) with the Apple Pencil on an iPad Pro.

There are a few standard editing functions that I missed. For example, there’s no “rolling-edit” trim function. If you want to move a cut point – equally trimming the left and right sides – you have to do it in the overwrite edit mode and trim the incoming or outgoing side of one of the clips. But, if you trim it back, a gap is left. J-cuts and L-cuts require that you detach the audio from the clip, as there is no way to expand an a/v clip in the timeline.

It is definitely possible to finish and export a polished piece from LumaFusion. You can also export an audio-only mix. This enables you to embellish your audio track outside of LumaFusion and then reimport and marry it to the picture for the final version. Because you can layer vertical tracks, cutting a two-camera interview piece on your iPad is pretty easy. Rough-cutting a first pass or pulling edited selects on an iPad becomes completely viable with LumaFusion.

Sharing your edit

Once you’ve edited your piece, it’s easy to share (export) your final sequence as a single audio/video file, audio-only file, project (currently only compatible with LumaFusion), or trimmed media. Be aware that there’s a disconnect between the frame rate terminology for settings versus exports. For example, with project settings, you can pick 24 or 30, which are actually 23.98 or 29.97; however, on export, you must pick between 24 and 23.98 or 30 and 29.97. Nevertheless, exports up to UHD frame sizes are fine, including downscaled sizes, if needed. So, you can import and cut in UHD and export a 1080 file. A flattened H.264 movie file of your sequence – wrapped in either an .mp4 or QuickTime .mov container – may be exported at up to 50Mbps (1080p) or 100Mbps (UHD).

If your intention is to use LumaFusion for “offline” editing, then for now, your only option is to embed “burn-in” timecode into the media that you send to the iPad. Then manually write down edit points based on the visible timecode at the cuts. The upcoming LumaConnect macOS application will make it possible to send projects to both Final Cut Pro X and Premiere Pro via XML. According to Luma Touch, they will also be adding XML export from LumaFusion as an in-app purchase, most likely before the release of LumaConnect.

Using an iPad or iPad Pro as your only computer isn’t for everyone, but LumaFusion is definitely a tool that brings iOS editing closer to the desktop experience. To get you started, the company has posted over 30 short tutorials on their YouTube channel. Sure, there are compromises, but not as many as you might think for simple projects. Even if an iPad is only a supplemental tool, LumaFusion – like so many other iOS apps – is another way to add efficiency in the modern, mobile world.

Originally written for RedShark News.

©2017 Oliver Peters

Premiere Pro Workflow Tips

When you are editing on projects that only you touch, your working practices can be as messy as you want them to be. However, if you work on projects that need to be interchanged with others down the line, or you’re in a collaborative editing environment, good operating practices are essential. This starts at the moment you first receive the media and carries through until the project has been completed, delivered, and archived.

Any editor who’s worked with Avid Media Composer in a shared storage situation knows that it’s pretty rock solid and takes measures to assure proper media relinking and management. Adobe Premiere Pro is very powerful, but much more freeform. Therefore, the responsibility of proper media management and editor discipline falls to the user. I’ve covered some of these points in other posts, but it’s good to revisit workflow habits.

Folder templates. I like to have things neat, and one way to ensure that is with project folder templates. You can use a tool like Post Haste to automatically generate a new set of folders for each new production – or you can simply design your own set of folders as a template layout and copy those for each new job. Since I’m working mainly in Premiere Pro these days, my folder template includes a Premiere Pro template project, too. This gives me an easy starting point that has been tailored for the kinds of narrative/interview projects that I’m working on. Simply rename the root folder and the project for the new production (or let Post Haste do that for you). My layout includes folders for projects, graphics, audio, documents, exports, and raw media. I spend most of my time working at a multi-suite facility connected to a NAS shared storage system. There, the folders end up on the NAS volume and are accessible to all editors.
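If you’d rather roll your own than use Post Haste, copying a template layout is easy to automate. Here’s a minimal sketch of the idea in Python – the folder names and template project path are my own placeholders modeled on the layout described above, so adjust them to taste:

import shutil
import sys
from pathlib import Path

# Placeholder layout - mirror your own template folders here
TEMPLATE_FOLDERS = [
    "01_Projects", "02_Graphics", "03_Audio",
    "04_Documents", "05_Exports", "06_Raw_Media",
]
TEMPLATE_PROJECT = Path("/Volumes/NAS/Templates/_template.prproj")

def new_production(root: Path, name: str) -> None:
    """Create the standard folder set plus a renamed copy of the template project."""
    job = root / name
    for folder in TEMPLATE_FOLDERS:
        (job / folder).mkdir(parents=True, exist_ok=True)
    shutil.copy2(TEMPLATE_PROJECT, job / "01_Projects" / f"{name}.prproj")

if __name__ == "__main__":
    # e.g.: python new_job.py /Volumes/NAS/Productions Acme_Spot_July
    new_production(Path(sys.argv[1]), sys.argv[2])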

Media preparation. When the crew comes back from the shoot, the first priority is to back up their files to an archive drive and then copy the files again to the storage used for editing – in my case a NAS volume. If we follow the folder layout described above, then those files get copied to the production dailies or raw media (whatever you called it) folder. Because Premiere Pro is very fluid and forgiving with all types of codecs, formats, and naming conventions, it’s easy to get sloppy and skip the next steps. DON’T. The most important thing for proper media linking is to have consistent locations and unique file names. If you don’t, then future relinking, moving the project into an application like Resolve for color correction/finishing, or other such processes may fail to link to the correct files.

Premiere Pro works better when ALL of the media is in a single common format, like DNxHD/HR or ProRes. However, for most productions, the transcoding time involved would be unacceptable. A large production will often shoot with multiple camera formats (Alexa, RED, DSLRs, GoPros, drones, etc.) and generate several cards worth of media each day. My recommendation is to leave the professional format files alone (like RED or Alexa), but transcode the oddball clips, like DJI cameras. Many of these prosumer formats place the media into various folder structures or hide them inside a package container format. I will generally move these outside of this structure so they are easily accessible at the Finder level. Media from the cameras should be arranged in a folder hierarchy of Date, Camera, and Card. Coordinate with the DIT and you’ll often get the media already organized in this manner. Transcode files as needed and delete the originals if you like (as long as they’ve been backed up first).

Unfortunately, these prosumer cameras often use repeated, rather than unique, file names. Every card starts over with clip number 0001. That’s why we need to rename these files. You can usually skip renaming professional format files. It’s optional. Renaming Alexa files is fine, but avoid renaming RED or P2 files. However, definitely rename DSLR, GoPro, and DJI clips. When renaming clips I use an app called Better Rename on the Mac, but any batch renaming utility will do. Follow a consistent naming convention. Mine is a descriptive abbreviation, month/day, camera, and card. So a shoot in Palermo on July 22, using the B camera, recorded on card 4, becomes PAL0722B04_. This prefix is prepended to the camera-generated clip name, so clip number 0057 becomes PAL0722B04_0057. You don’t need the year, because the folder location, general project info, or the embedded file info will tell you that.
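Any batch renaming utility can apply this convention, but it also takes only a few lines of script. Here’s a minimal Python sketch – the card folder path is hypothetical, and as noted above, only run something like this on formats that tolerate renaming:

from pathlib import Path

def prefix_clips(card_folder: Path, prefix: str) -> None:
    """Prepend a location/date/camera/card prefix to every clip on one card."""
    for clip in sorted(card_folder.iterdir()):
        if clip.is_file() and not clip.name.startswith(prefix):
            clip.rename(clip.with_name(prefix + clip.name))

# Palermo, July 22, B camera, card 4: clip 0057.MP4 becomes PAL0722B04_0057.MP4
prefix_clips(Path("/Volumes/NAS/Job/Raw_Media/0722/B_Cam/Card04"), "PAL0722B04_")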

A quick word on renaming. Stick with universal alphanumeric conventions in both the files and the folder names. Avoid symbols, emojis, etc. Otherwise, some systems will not be able to read the files. Don’t get overly lengthy in your names. Stick with upper and lower case letters, numbers, dashes, underscores, and spaces. Then you’ll be fine.
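If you want to audit a drive for unsafe names before handing media off to another system, a short script can flag offenders. This sketch simply encodes the character rule above; the root path and the 60-character length limit are my own placeholder choices:

import re
import sys
from pathlib import Path

# Letters, numbers, dashes, underscores, and spaces only
SAFE_NAME = re.compile(r"^[A-Za-z0-9 _-]+$")

def flag_unsafe_names(root: Path, max_length: int = 60) -> None:
    """Print any file or folder whose name breaks the naming rules."""
    for item in root.rglob("*"):
        if item.name.startswith("."):
            continue  # skip hidden system files like .DS_Store
        if not SAFE_NAME.match(item.stem) or len(item.name) > max_length:
            print("rename this:", item)

if __name__ == "__main__":
    flag_unsafe_names(Path(sys.argv[1]))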

Project location. Premiere Pro has several basic file types that it generates with each project. These include the project file itself, auto-saved project files, renders, media cache files, and audio peak (.pek) files. Some of these are created in the background as new media is imported into the project. You can choose to store these anywhere you like on the system, although there are optimal locations.

When working on a NAS, there is no problem in letting the project file, auto-saves, and renders stay in the same section of the NAS as all of your other media. I do this because it’s easy to back up the whole job at the end of the line and have everything in one place. However, you don’t want all the small, application-generated cache files to be there. While it’s an option in preferences, it is highly recommended to have these media cache files go to the internal hard drive of the workstation or a separate, external local drive. The reason is that there are a lot of these small files, and that traffic will tend to bog down the NAS’s overall performance. So set them to be local (the default).

The downside of doing this is that when another editor opens the Premiere Pro project on a different computer, these files have to be regenerated on that new system. The project will react sluggishly until this background process is complete. While this is a bit of a drag, it’s what Adobe recommends to keep the system operating well.

One other cache setting to be mindful of is the automatic delete option. A recent Premiere Pro problem cropped up when users noticed that original media was disappearing from their drives. Although this was a definite bug, the situation mainly affected users who had set the media cache to live with their original media files and had enabled automatic deletion. You are better off keeping the default location, but changing the deletion setting to manual. You’ll have to occasionally clean your caches yourself, but this is preferable to losing your original content.

Premiere Pro project locking. A recent addition to Premiere Pro is project locking. This came about because of Team Projects, which are cloud-only shared project files. However, in many environments, facilities do not want their projects in the cloud. Yet, they can still take advantage of this feature. When project locking is enabled in Premiere Pro (every user on the system must do this), the application creates a temporary .prlock file next to the project file. This is intended to prevent other users from opening the same project and overwriting the original editor’s work and/or revisions.

Unfortunately, this only works correctly when you open a project from the launch window. Do not launch Premiere Pro by double-clicking the project file itself. If you open through the launch window, then Premiere Pro will prevent you from opening a locked project file. However, if you open through the Finder, the locking system is circumvented, which can cause crashes and potentially lost work.

Project layout templates. Like folder layouts, I’m fond of using a template for my Premiere Pro projects, too. This way all projects have a consistent starting point, which is good when working with several editors collaboratively. You can certainly create multiple templates depending on the nature and specs of the job, e.g. commercials, narrative, 23.98, 29.97, etc. As with the folder layout, I’ll often use a leading underscore with a name to sort an item to the top of a list, or start the name with a “z” to sort it to the bottom. A lot of my work is interview-driven with supportive B-roll footage. Most of the time I’m cutting in 23.98fps. So, that’s the example shown here.

My normal routine is to import the camera files (using Premiere Pro’s internal Media Browser) according to the date/camera/card organization described earlier. Then I’ll review the footage and rearrange the clips. Interview files go into an interview sources bin. I will add sub-bins in the B-roll section for general categories. As I review footage, I’ll move clips into their appropriate area, until the date/camera/card bins are empty and can be deleted from the project. Interviews will be grouped as multi-cam clips and edited to a single sequence for each person. This sequence gets moved into the Interview Edits sub-bin and becomes the source for any clips from this interview. I do a few other things before starting to edit, but that’s for another time and another post.

Working as a team. There are lots of ways to work collaboratively, so the concept doesn’t mean the same thing in every type of job. Sometimes it requires different people working on the same job. Other times it means several editors accessing a common pool of media, but working in their own discrete projects. In any case, Premiere Pro does not allow the same sort of flexibility that Media Composer or Final Cut Pro editors enjoy. You cannot have two or more editors working inside the same project file. You cannot open more than one project at a time. This means Premiere Pro editors need to think through their workflows in order to effectively share projects.

There are different strategies to employ. The easiest is to use the standard “save as” function to create alternate versions of a project. This is also useful for keeping project bloat low. As you edit a project over a long period, you build up a lot of old “in progress” sequences. After a while, it’s best to save a copy and delete the older sequences. But the best approach is to organize a structure to follow.

As an example, let’s say a travel-style show covers several locations in an episode. Several editors and an assistant are working on it. The assistant would create a master project with all the footage imported and organized, interviews grouped/synced, and so on. At this point each editor takes a different location to cut that segment. There are two options. The first is to duplicate the project file for each location, open each one, and delete the content that’s not for that location. The second option is to create a new project for each location and then import media from the master project using Media Browser. This is Adobe’s built-in module that enables the editor to access files, bins, and sequences from inside other Premiere Pro projects. When these are imported, there is no dynamic link between the two projects. The two sets of files/sequences are independent of each other.

Next, each editor cuts their own piece, resulting in a final sequence for each segment. Back in the master project, each edited sequence can be imported – again, using Media Browser – for the purposes of the final show build and tweaks. Since all of the media is common, no additional media files will be imported. Another option is to create a new final project and then import each sequence into it (using Media Browser). This will import the sequences and any associated media files. Then use the segment sequences to build the final show sequence and tweak as needed.

There are plenty of ways to use Premiere Pro and maintain editing versatility within a shared storage situation. You just have to follow a few rules for “best practices” so that everyone will “play nice” and have a successful experience.

Click here to download a folder template and enclosed Premiere Pro template project.

©2017 Oliver Peters

Websites for Filmmakers

There are plenty of places on the web for budding filmmakers to learn more about their craft. Naturally, some places are better than others, and forums often take on the atmosphere of a biker bar. Here are some of my top suggestions (in no particular order) for getting a better handle on the art of filmmaking, especially when it comes to editing.

Tony Zhou does a great job of film analysis – breaking down scene construction and finding similarities in a director’s signature style(s) or a particular technique. This Vimeo page puts these all in one handy spot.

This Guy Edits tackles similar territory to Zhou’s videos, with the additional angle of actually watching the host go through his thought process while editing an indie film. The brainchild of film editor Sven Pape, these videos have gained significant traction. In addition, he’s been doing this with Final Cut Pro X, so I’m sure that’s added even more interest.

Vashi Visuals is the site for editor Vashi Nedomansky. His blog covers a lot of film analysis without relying on videos like the first two I’ve mentioned. Nevertheless, there are a number of insights, ranging from film aspect ratio changes to the workflow he’s followed on such large-scale films as “6 Below”.

While I’m partial to my own interview articles, it’s hard to beat Steve Hullfish’s Art of the Cut interview series for sheer depth and volume. Hullfish is an editor, colorist, trainer, and author with several books to his credit, including a curated version of some of these interviews. It’s a great place to go to understand what the leading editors go through in cutting films.

If you are more partial to cinematography than editing, you’ve got to check out the site and forum of renowned director of photography Roger Deakins. It’s free to sign up to the forum, where you can learn a ton about film production. Deakins also chimes in at times, schedule permitting.

Another cinematographer gracious with his time is David Mullen. The “Ask David Mullen ANYTHING” thread was started on one of the forums at RedUser a few years ago and at present has spawned nearly 600 pages. In this thread, Mullen responds to user questions about cinematography with answers from his personal experience and study of the art.

Like Hullfish, another ProVideo Coalition contributor is Brian Hallett, with his Art of the Shot series. He’s adapted the interview concept to talk with leading cinematographers about their art, choice of gear, and more.

Of course, there’s more to filmmaking than editing and cinematography. It all starts with the word, so let me include Scott Smith’s blog. Smith is a writer/director whose Screenwriting from Iowa blog covers a wide range of topics – from film, writing, and TV analysis to his own experiences and travels.

I’ll wrap this up with some “honorable mentions”. Plenty of companies have blogs on their websites, but some invite collaborators to submit guest posts, so it’s not just content related to that company’s product. Of course, some, like Avid, focus on product-centric posts. Here are a few sites worth reading:

Academy Originals

Avid

Premium Beat

Frame IO

Wipster

Note: Some of these links load better in Chrome than in Safari.

©2017 Oliver Peters

Spice with Templates

One way in which Apple’s Final Cut Pro X has altered editing styles is through the use of effects built as Motion templates, using the common engine shared with Apple Motion. There are a number of developers marketing effects templates, but the biggest batch can be found at the FxFactory website. A regular development partner is idustrial Revolution, the brainchild of editor (and owner of FCP.co) Peter Wiggins. Wiggins offers a number of different effects packages, but the group marketed under the XEffects brand includes various templates that are designed to take the drudgery out of post, more so than just being eye-catching visual effects plug-ins.

XEffects includes several packages designed to be compatible with the look of certain styles of production, such as news, sports, and social media. These packages are only for FCP X and come with modifiable, preset moves, so you don’t have to build complex title and video moves through a lot of keyframe building. The latest is XEffects Viral Video, which is a set of moves, text, and banners that fit in with the style used today for trendy videos. The basic gist of these effects covers sliding or moving banners with titles and templates that have been created to conform to both 16:9 and square video projects. In addition, there is a set of plug-ins to create simple automatic moves on images, which is helpful in animating still photos. Naturally, several title templates can be used together to create a stacked graphic design.

Another company addressing this market is Rampant Design Tools with a series of effects templates for both Apple Final Cut Pro X and Adobe Premiere Pro CC. Their Premiere Pro templates include both effects presets and template projects. The effects presets can be imported into Premiere and become part of your arsenal of presets. For example, if you want to have text slide in from the side, blurred, and then resolve itself when it comes to rest – there’s a preset for that. Since these are presets, they are lightweight, as no extra media is involved.

The true templates are actually separate Premiere Pro template projects. Typically these are very complex, layered, and nested timelines that allow you to create very complex effects without the use of traditional plug-ins. These projects are designed to easily guide you where to place your video, so no real compositing knowledge is needed. Rampant has done the hard part for you. As with any Premiere Pro project, you can import the final effects sequence into your active project, so there’s no need to touch the template project itself. However, these template projects do include media and aren’t as lightweight as the presets, so be mindful of your available hard drive space.

For Final Cut Pro X, Rampant has done much the same, creating both a set of installable Motion template effects, like vignette or grain, as well as more complex FCP X Libraries designed for easy and automatic use. As with the Premiere products, some of these Libraries contain media and are larger than others, so be mindful of your space.

Both of these approaches offer new options in the effects market. These developers give you plug-in style effects without actually coding a specific plug-in. This makes for faster development and less concern that a host application version change will break the plug-in. As with this new breed of effects in general, the cost is much lower than in the past, and effects can be purchased a la carte, which enables you to tailor your editor’s tool bag to your immediate needs.

©2017 Oliver Peters

Baby Driver

You don’t have to be a rabid fan of Edgar Wright’s work to know of his films. His comedy trilogy (Shaun of the Dead, Hot Fuzz, The World’s End) and cult classics like Scott Pilgrim vs. the World loom large in pop culture. His films have enjoyed a life beyond most films’ brief release period and earned Wright a loyal following. His latest is Baby Driver, a musically-fueled action film that Wright wrote and directed, which just made a big splash at SXSW. It stars Ansel Elgort, Kevin Spacey, Jon Hamm, Jamie Foxx, and Eiza Gonzalez.

At NAB, Avid brought in a number of featured speakers for its main stage presentations, as well as its Avid Connect event. One of these speakers was Paul Machliss (Scott Pilgrim vs. the World, The World’s End, Baby Driver), who spoke to packed audiences about the art of editing these films. I had a chance to go in-depth with Machliss about the complex process of working on Baby Driver.

From Smoke to baptism by fire

We started our conversation with a bit of the backstory of the connection between Wright and Machliss. He says, “I started editing as an online editor and progressed from tape-based systems to being one of the early London-based Smoke editors. My boss at the time passed along a project that he thought would be perfect for Smoke. That was onlining the sitcom Spaced, directed by Edgar Wright. Edgar and I got on well. Concurrent to that, I had started learning Avid. I started doing offline editing jobs for other directors and had a ball. A chance came along to do a David Beckham documentary, so I took the plunge from being a full-time online editor to taking my chances in the freelance world. On the tail end of the documentary, I got a call from Edgar, offering me the gig to be the offline editor for the second season of Spaced, because Chris Dickens (Hot Fuzz, Berberian Sound Studio, Slumdog Millionaire) wasn’t available to complete the edit. And that was really jumping into the deep end. It was fantastic to be able to work with Edgar at that level.”

Machliss continues, “Chris came back to work with Edgar on Shaun of the Dead and Hot Fuzz, so over the following years I honed my skills working on a number of British comedies and dramas. After Slumdog Millionaire came out, which Chris cut and for which he won a number of awards, including an Oscar, Chris suddenly found himself very busy, so the rest of us working with Edgar all moved up one in the queue, so to speak. The opportunity to edit Scott Pilgrim came up, so we all threw ourselves into the world of feature films, which was definitely a baptism by fire. We were very lucky to be able to work on a project of that nature during a time where the industry was in a bit of a slump due to the recession. And it’s fantastic that people still remember it and talk about it seven years on. Which brings us to Baby Driver. It’s great when a studio is willing to invest in a film that isn’t a franchise, a sequel, or a reboot.”

Music drives the film

In Baby Driver, Ansel Elgort plays “Baby”, a young kid who is the getaway driver for a gang. At a young age, he was in a car accident which leaves him with tinnitus, so it takes listening to music 24/7 to drown out the tinnitus. Machliss explains, “His whole life becomes regimented to whatever music he is listening to – different music for different moods or occasions. Somehow everything falls magically into sync with whatever he is listening to – when he’s driving, swerving to avoid a car, making a turn – it all seems to happen on the beat. Music drives every single scene. Edgar deliberately chose commercial top-20 tracks from the 1960s up to today. Each song Baby listens to also slyly comments on whatever is happening at the time in the story. Everything is seemingly choreographed to musical rhythms. You’re not looking at a musical, but everything is musically driven.”

Naturally, building a film to popular music brings up a whole host of production issues. Machliss tells how this film had been in the planning for years, “Edgar had chosen these tracks years ago. I believe it was in 2011 that Edgar and I tried to sequence the tracks and intersperse them with sound effects. A couple of months later, he did a table read in LA and sent me the sound files. In the Avid, I combined the sound files, songs, and some sound effects to create effectively a 100-minute radio play, which was, in fact, the film in audio form. The big thing is that we had to clear every song before we could start filming. Eventually we cleared 30-odd songs for the film. In addition, Edgar worked with his stunt team and editor Evan Schiff in LA to create storyboards and animatics for all of the action scenes.”

Editor on the front lines

Unlike most films, a significant amount of the editing took place on-set with Machliss working from a portable set-up. He says, “Based on our experiences with Scott Pilgrim and World’s End, Edgar decided it would be best to have me on-set during most of the Atlanta shoot for Baby Driver. Even though a cutting room was available, I was in there maybe ten percent of the time. The rest of the time I was on set. I had a trolley with a laptop, monitor, an Avid Mojo, and some hard drives and I would connect myself via ethernet to the video assist’s hard drive. Effectively I was crew in the front lines with everyone else. Making sure the edit worked was as important as getting a good take in the can. If I assured Edgar that a take would work, then he knew it wasn’t going to come back and cause problems for us six months later. We wanted things to work naturally in camera without a lot of fiddling in post. We didn’t want to have to fall back on frame-cutting and vari-speeding if we didn’t have to. There was a lot of prep work in making sure actions correctly coincided with certain lyrics without the action seeming mechanical.”

The nature of the production added to the complexity of the production audio configuration, too. Machliss explains, “Sound-wise, it was very complicated. We had playback going to earwigs in the actors’ ears, Edgar wanted to hear music plus the dialogue in his cans, and then I needed to get a split feed of the audio, since I already had the clean music on my timeline. We shot this mostly on 35mm film. Some days were A-camera only, but usually two cameras running. It was a combination of Panavision, Arricams, and occasionally Arri Alexas. Sometimes there were some stunt shots, which required nine or ten cameras running. Since the action all happened against playback of a track, this allowed me to use Avid’s multicam tools to quickly group shots together. Avid’s AMA tools have really come of age, so I was able to work without needing to ingest anything. I could treat the video assist’s hard drive as my source media, as long as I had the ethernet connection to it. If we were between set-ups, I could get Avid to background-transcode the media, so I’d have my own copy.”

Did all of this on-set editing speed up the rest of the post process? He continues, “All of the on-set editing helped a great deal, because we went into the real post-production phase knowing that all the sequences basically worked. During that time, as I’d fill up a LaCie Rugged drive, I would send that back to the suites. My assistant, Jerry Ramsbottom, would then patiently overcut my edits from the video assist with the actual scanned telecine footage as it came in. We shot from mid-February until mid-May and then returned to England. Jonathan Amos came on board a few weeks into the director’s cut edit and worked on the film with Edgar and myself up until the director’s cut picture lock. He did a pass on some of the action scenes while Edgar and myself concentrated on dialogue and the overall shape of the film. He stayed on board up until the final picture lock and made an incredible contribution to the action and the tension of the film. By the end of the year we’d locked and then we finished the final mix mid-February of this year. But the great thing was to be able to come into the edit and have those sequences ready to go.”

Editing from set is something many editors try to avoid. They feel they can be more objective that way. Machliss sees it a bit differently, “Some editors don’t like being on set, but I like the openness of it – taking it all in. Because when you are in the edit, you can recall the events of the day a particular scene was shot – ‘I can remember when Kevin Spacey did this thing on the third take, which could be useful’. It’s not vital to work like this, but it does lend itself to a kind of shorthand, which is something Edgar and I have developed over these years anyway. The beauty of it is that Edgar and I will take the time to try every option. You can never hit on the perfect cut the first time. Often you’ll get feedback from screenings, such as ‘we’d like to see more emotion between these characters’. You know what’s available and sometimes four extra shots can make all the difference in how a scene reads without having to re-imagine anything. We did drop some scenes from the final version of the film. Of course, you go ‘that’s a shame’, but at least these scenes were given a chance. However, there are always bits where upon the 200th viewing you can decide, ‘well, that’s completely redundant’ – and it’s easy to drop. You always skate as close to the edge of making a film shorter without doing any damage to it.”

The challenge of sound

During sound post, Baby Driver also presented some unique challenges. Machliss says, “For the sound mix – and even for the shoot – we had to make sure we were working with the final masters of the song recordings to make sure the pitch and duration remained constant throughout. Typically these came in as mono or stereo WAVs. Because music is such an important element to the film, the concept of perceived direction becomes important. Is the music emanating from Baby’s earbuds? What happens to it when the camera moves or he turns his head? We had to work out a language for the perception of sound. This was Edgar’s first film mixed in Dolby Atmos and we were the second film in Goldcrest London’s new Atmos-certified dubbing theater. Then we did a reduction to 7.1 and 5.1. Initially we were thinking this film would have no score other than the songs. Invariably you need something to get from A to B. We called on the services of Steven Price (Gravity, Fury, Suicide Squad), who provided us with some original cues and some musical textures. He did a very clever thing where he would match the end pitch or notes of a commercial song and then by the time he came to the end of his cue, it would match to the incoming note or key of the next song. And you never notice the change.”

Working with Avid in a new way

To wrap up the conversation, we talked a bit about how he uses Avid Media Composer. Machliss has worked on numerous other systems, but Media Composer still fits the bill for his work today. He says, “For me, the speed of working with AMA in Avid in the latest software was a real benefit. I could actually keep up with the speed of the shoot. You don’t want to be the one holding up a crew of 70. I also made good use of background transcoding. On a different project (Fleabag), I was able to work with native 2K Alexa ProRes camera files at full resolution. It was fantastic to be able to use Frameflex and apply LUTs – doing the cutting, but then bringing back my old skills as an online editor to paint out booms and fix things up. Once we locked, I could remove the LUTs and export DPX files, which went straight to the grading facility. That was exciting to work in a new way.”

Baby Driver opened at the start of July in the US and is a fun ride. You can certainly enjoy a film like this without knowing the nitty gritty of the production that goes into it. However, after you’ve read this article, you just might need to see it at least twice – once to just enjoy and once again to study the “invisible art” that’s gone into bringing it to screen.

(For more with Paul Machliss, check out these interviews at Studio Daily, ProVideoCoalition, and FrameIO.)

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

Bricklayers and Sculptors

One of the livelier hangouts on the internet for editors to kick around their thoughts is the Creative COW’s Apple Final Cut Pro X Debates forum. Part forum, part bar room brawl, it started as a place to discuss the relative merits (or not) of Apple’s FCP X. As such, the COW’s bosses allow a bit more latitude than in other forums. However, often threads derail into really thoughtful discussions about editing concepts.

Recently one of its frequent contributors, Simon Ubsdell, posted a thread called Bricklayers and Sculptors. In his words, “There are two different types of editors: Those who lay one shot after another like a bricklayer builds a wall. And those who discover the shape of their film by sculpting the raw material like a sculptor works with clay. These processes are not the same. There is no continuum that links these two approaches. They are diametrically opposed.”

Simon Ubsdell is the creative director, partner, and editor/mixer for London-based trailer shop Tokyo Productions. Ubsdell is also an experienced plug-in developer, having developed and/or co-developed the TKY, Tokyo, and Hawaiki effects plug-ins. But beyond that, Simon is one of the folks with whom I often have e-mail discussions regarding the state of editing today. We were both early adopters of FCP X who have since shifted almost completely to Adobe Premiere Pro. In keeping with the theme of his forum post, I asked him to share his ideas about how to organize an edit.

With Simon’s permission, the following are his thoughts on how best to organize editing projects in a way that keeps you immersed in the material and results in editing with greater assurance that you’ve made the best possible edit decisions.

________________________________________________

Simon Ubsdell – Bricklayers and Sculptors in practical terms

To avoid getting too general about this, let me describe a job I did this week. The producer came to us with a documentary that’s still shooting and only roughly “edited” into a very loose assembly – it’s the stories of five different women that will eventually be interwoven, but that hasn’t happened yet. As I say, extremely rough and unformed.

I grabbed all the source material and put it on a timeline. That showed me at a glance that there was about four hours of it in total. I put in markers to show where each woman’s material started and ended, which allowed me to see how much material I had for each of them. If I ever needed to go back to “everything”, it would make searching easier. (Not an essential step by any means.)

I duplicated that sequence five times to make sequences of all the material for each woman. Then I made duplicates of those duplicates and began removing everything I didn’t want. (At this point I am only looking for dialogue and “key sound”, not pictures which I will pick up in a separate set of passes.)

Working subtractively

From this point on I am working almost exclusively subtractively. A lot of people approach string-outs by adding clips from the browser – but here all my clips are already on the timeline and I am taking away anything I don’t want. This is for me the key part of the process because each edit is not a rough approximation, but a very precise “topping and tailing” of what I want to use. If you’re “editing in the Browser” (or in Bins), you’re simply not going to be making the kind of frame accurate edits that I am making every single time with this method.

The point to grasp here is that instead of “making bricks” for use later on, I am already editing in the strictest sense – making cuts that will stand up later on. I don’t have to select and then trim – I am doing both operations at the same time. I have my editing hat on, not an organizing hat. I am focused on a timeline that is going to form the basis of the final edit. I am already thinking editorially (in the sense of creative timeline-based editing) and not wasting any time merely thinking organizationally.

I should mention here that this is an iterative process – not just one pass through the material, but several. At certain points I will keep duplicates as I start to work on shorter versions. I won’t generally keep that many duplicates – usually just an intermediate “long version”, which has lost all the material I definitely don’t want. And by “definitely don’t want” I’m not talking about heads and tails that everybody throws away where the camera is being turned on or off or the crew are in shot – I am already making deep, fine-grained editorial and editing decisions that will be of immense value later on. I’m going straight to the edit point that I know I’ll want for my finished show. It’s not a provisional edit point – it’s a genuine editorial choice. From this point of view, the process of rejecting slates and tails is entirely irrelevant and pointless – a whole process that I sidestep entirely. I am cutting from one bit that I want to keep directly to the next bit I want to keep and I am doing so with fine-tuned precision. And because I am working subtractively I am actually incorporating several edit decisions in one – in other words, with one delete step I am both removing the tail from the outgoing clip and setting the start of the next clip.

Feeling the pacing and flow

Another key element here is that I can see how one clip flows into another – even if I am not going to be using those two clips side-by-side. I can already get a feel for the pacing. I can also start to see what might go where, so as part of this phase, I am moving things around as options start suggesting themselves. Because I am working in the timeline with actual edited material, those options present themselves very naturally – I’m getting offered creative choices for free. I can’t stress too strongly how relevant this part is. If I were simply sorting through material in a Browser/Bin, this process would not be happening or at least not happening in anything like the same way. The ability to reorder clips as the thought occurs to me and for this to be an actual editorial decision on a timeline is an incredibly useful thing and again a great timesaver. I don’t have to think about editorial decisions twice.

And another major benefit that is simply not available to Browser/Bin-based methods, is that I am constructing editorial chunks as I go. I’m taking this section from Clip A and putting it side-by-side with this other section from Clip A, which may come from earlier in the actual source, and perhaps adding a section from Clip B to the end and something from Clip C to the front. I am forming editorial units as I work through the material. And these are units that I can later use wholesale.

Another interesting spin-off is that I can very quickly spot “duplicate material”, by which I mean instances where the same information or sentiment is conveyed in more or less the same terms at different places in the source material. Because I am reviewing all of this on the timeline and because I am doing so iteratively, I can very quickly form an opinion as to which of the “duplicates” I want to use in my final edit.

Working towards the delivery target

Let’s step back and look at a further benefit of this method. Whatever your final film is, it will have the length that it needs to be – unless you’re Andy Warhol. You’re delivering a documentary for broadcast or theatrical distribution, or a short form promo or a trailer or TV spot. In each case you have a rough idea of what final length you need to arrive at. In my case, I knew that the piece needed to be around three minutes long. And that, of course, throws up a very obvious piece of arithmetic that it helps me to know. I had five stories to fit into those three minutes, which meant that the absolute maximum of dialogue that I would need would be just over 30 seconds from each story!  The best way of getting to those 30 seconds is obviously subtractively.

I know I need to get my timeline of each story down to something approaching this length. Because I’m not simply topping and tailing clips in the Browser, but actually sculpting them on the timeline (and forming them into editorial units, as described above), I can keep a very close eye on how this is coming along for each story strand. I have a continuous read-out of how well I am getting on with reducing the material down to the target length. By contrast, if I approach my final edit with 30 minutes of loosely selected source material to juggle, I’m going to spend a lot more time on editorial decisions that I could have successfully made earlier.

So the final stage of the process in this case was simply to combine and rearrange the pre-edited timelines into a final timeline – a process that is now incredibly fast and a lot of fun. I’ve narrowed the range of choices right down to the necessary minimum. A great deal of the editing has literally already been done, because I’ve been editing from the very first moment that I laid all the material on the original timeline containing all the source material for the project.

As you can see, the process has been essentially entirely subtractive throughout – a gradual whittling down of the four hours to something closer to three minutes. This is not to say there won’t be additive parts to the overall edit. Of course, I added music, SFX, and graphics, but from the perspective of the process as a whole, this is addition at the most trivial level.

Learning to tell the story in pictures

There is another layer of addition that I have left out and that’s what happens with the pictures. So far I’ve only mentioned what is happening with what is sometimes called the “radio edit”. In my case, I will perform the exact same (sometimes iterative) process of subtracting the shots I want to keep from the entirety of the source material – again, this is obviously happening on a timeline or timelines. The real delight of this method is to review all the “pictures” without reference to the sound, because in doing so you can get a real insight into how the story can be told pictorially. I will often review the pictures having very, very roughly laid up some of the music tracks that I have planned on using. It’s amazing how this lets you gauge both whether your music suits the material and conversely whether the pictures are the right ones for the way you are planning to tell the story.

This brings to me a key point I would make about how I personally work with this method and that’s that I plunge in and experiment even at the early stages of the project. For me, the key thing is to start to get a feel for how it’s all going to come together. This loose experimentation is a great way of approaching that. At some point in the experimentation something clicks and you can see the whole shape or at the very least get a feeling for what it’s all going to look like. The sooner that click happens, the better you can work, because now you are not simply randomly sorting material, you are working towards a picture you have in your head. For me, that’s the biggest benefit of working in the timeline from the very beginning. You’re getting immersed in the shape of the material rather than just its content and the immersion is what sparks the ideas. I’m not invoking some magical thinking here – I’m just talking about a method that’s proven itself time and time again to be the best and fastest way to unlock the doors of the edit.

Another benefit is that although one would expect this method to make it harder to collaborate, in fact the reverse is the case if each editor is conversant with the technique. You’re handing over vastly more useful creative edit information with this process than you could by any other means. What you’re effectively doing is “showing your workings” and not just handing over some versions. It means that the editor taking over from you can easily backtrack through your work and find new stuff and see the ideas that you didn’t end up including in the version(s) that you handed over. It’s an incredibly fast way for the new editor to get up to speed with the project without having to start from scratch by acquainting him or herself with where the useful material can be found.

Even on a more conventional level, I personally would far rather receive string-outs of selects than all the most carefully organized Browser/Bin info you care to throw at me. Obviously if I’m cutting a feature, I want to be able to find 323T14 instantly, but beyond that most basic level, I have no interest in digging through bins or keyword collections or whatever else you might be using, as that’s just going to slow me down.

Freeing yourself of the Browser/Bins

Another observation about this method is how it relates to the NLE interface. When I’m working with my string-outs, which is essentially 90% of the time, I am not ever looking at the Browser/Bins. Accordingly, in Premiere Pro or Final Cut Pro X, I can fully close down the Project/Browser windows/panes and avail myself of the extra screen real estate that gives me, which is not inconsiderable. The consequence of that is to make the timeline experience even more immersive and that’s exactly what I want. I want to be immersed in the details of what I’m doing in the timeline and I have no interest in any other distractions. Conversely, having to keep going back to Bins/Browser means shifting the focus of attention away from my work and breaking the all-important “flow” factor. I just don’t want any distractions from the fundamentally crucial process of moving from one clip to another in a timeline context. As soon as I am dragged away from that, there is a discontinuity in what I am doing.

The edit comes to shape organically

I find that there comes a point, if you work this way, when the subsequence you are working on organically starts to take on the shape of the finished edit and it’s something that happens without you having to consciously make it happen. It’s the method doing the work for you. This means that I never find myself starting a fresh sequence and adding to it from the subsequences and I think that has huge advantages. It reinforces my point that you are editing from the very first moment when you lay all your source material onto one timeline. That process leads without pause or interruption to the final edit through the gradual iterative subtraction.

I talked about how the iterative sifting process lets you see “duplicates”, that’s to say instances where the same idea is repeated in an alternative form – and that it helps you make the choice between the different options. Another aspect of this is that it helps you to identify what is strong and what is not so strong. If I were cutting corporates or skate videos this might be different, but for what I do, I need to be able to isolate the key “moments” in my material and find ways to promote those and make them work as powerfully as possible.

In a completely literal sense, when you’re cutting promos and trailers, you want to create an emotional, visceral connection to the material in the audience. You want to make them laugh or cry, you want to make them hold their breath in anticipation, or gasp in astonishment. You need to know how to craft the moments that will elicit the response you are looking for. I find that this method really helps me identify where those moments are going to come from and how to structure everything around them so as to build them as strongly as possible. The iterative sifting method means you can be very sure of what to go for and in what context it’s going to work the best. In other words, I keep coming back to the realization that this method is doing a lot of the creative work for you in a way that simply won’t happen with the alternatives. Even setting aside the manifest efficiency, it would be worth it for this alone.

There’s a huge amount more that I could say about this process, but I’ll leave it there for now. I’m not saying this method works equally well for all types of projects. It’s perhaps less suited to scripted drama, for instance, but even there it can work effectively with certain modifications. Like every method, every editor wants to tweak it to their own taste and inclinations. The one thing I have found to its advantage above all others is that it almost entirely circumvents the problem of “what shot do I lay down next?” Time and again I’ve seen Browser/Bin-focused editors get stuck in exactly this way and it can be a very real block.

– Simon Ubsdell

For an expanded version of this concept, check out Simon’s in-depth article at Creative COW. Click here to link.

For more creative editing tips, click on this link for Film Editor Techniques.

©2017 Oliver Peters

The Handmaid’s Tale

With tons of broadcast, web, and set-top outlets for dramatic television, there’s a greater opportunity than ever for American audiences to be exposed to excellent productions produced outside of Hollywood or New York. Some of the most interesting series come out of Canada from a handful of production vendors. One such company is Take 5 Productions, which has worked on such co-productions as Vikings, American Gothic, Penny Dreadful, and others. One of their newest offerings is The Handmaid’s Tale, currently airing in ten hourlong episodes on Hulu, as well as being distributed internationally through MGM.

The Handmaid’s Tale is based on the dystopian novel written in 1985 by Margaret Atwood. It’s set in New England in the near future, after an authoritarian theocracy has overthrown the United States government and replaced it with the Republic of Gilead. Births have declined due to pollution and disease, so a class of women considered fertile (the handmaids) are kept by the ruling class (the Commanders) as concubines for the purpose of bearing their children. This disturbing tale and series, with its nods to Nazi Germany and life behind the Iron Curtain, not to mention Orwell and Kubrick, stars Elisabeth Moss (Mad Men, The One I Love, Girl, Interrupted) as Offred, a handmaid trying to survive her new reality.

The tone and visual style of The Handmaid’s Tale were set by cinematographer-turned-director Reed Morano (Frozen River, Meadowland, The Skeleton Twins), who helmed three of the episodes, including the pilot. As with many television series, a couple of editors traded off the cutting duties. For this series, Julian Clarke (Deadpool, Chappie, Elysium) started the pilot, but it was wrapped up by Wendy Hallam Martin (Queer As Folk, The Tudors, The Borgias). Hallam Martin and Christopher Donaldson (Penny Dreadful, Vikings, The Right Kind of Wrong) alternated episodes across the season, with one episode cut by Aaron Marshall (Vikings, Penny Dreadful, Warrior).

Cutting a dystopian future

I recently spoke with Wendy Hallam Martin about this series and working in the Toronto television scene. She says, “As a Canadian editor, I’ve been lucky to work on some of the bigger shows. I’ve done a lot of Showtime projects, but Queer As Folk was really the first big show for me. With the interest of outlets like Netflix and Hulu, budgets have increased and Canadian TV has had a chance to produce better shows, especially the co-productions. I started on The Handmaid’s Tale with the pilot, which was the first episode. Julian [Clarke] started out cutting the pilot, but had to leave due to his schedule, so I took over. After the pilot was shot (with more scenes to come), the crew took a short break. Reed [Morano] was able to start her director’s cut before she shot episodes two and three to set the tone. The pilot didn’t lock until halfway through the season.”

One might think a mini-series that doesn’t run on a broadcast network would have a more relaxed production and post schedule, akin to a feature film. Not so with The Handmaid’s Tale, which was produced and delivered on a schedule much like any other dramatic television series. Episodes were shot in blocks of two at a time, with eight days allotted per episode. The editor’s assembly was due five days later, followed by two weeks working with the director on a director’s cut. Subsequent changes from Hulu and MGM notes resulted in a locked cut three months after the first day of production for those two episodes. Finally, there were three days to color grade and about a month for the sound edit and mix.

Take 5 has its own in-house visual effects department, which handles simple VFX, like wire removals, changing closed eyes to open, and so on. A few of the more complex shots are sent to outside vendors. The episodes average about 40 VFX shots each; however, the season finale had 70 effects shots in one scene alone.

Tackling the workload

Hallam Martin explains how they dealt with the post schedule. She continues, “We had two editors handling the shows, so there was always some overlap. You might be cutting one show while the next one was being assembled. This season we had a first and second assistant editor. The second would deal with the dailies and the first would be handling visual effects hand-offs, building up sound effects, and so on. For the next season we’ll have two firsts and one second assistant, due to the load. Reed was very hands-on and wanted full, finished tracks of audio. There were always 24 tracks of sound on my timelines. I usually handle my own temp sound design, but because of the schedule, I handed that off to my first assistant. I would finish a scene and then turn it over to her while I moved on to the next scene.”

The Handmaid’s Tale has a very distinctive look for its visual style. Much of the footage carries a strong orange-and-teal grade. The series is shot with an ARRI ALEXA Mini in 4K (UHD). The DIT on set applies a basic look to the dailies, which are then turned into Avid DNxHD36 media files by Deluxe in Toronto to be delivered to the editors at Take 5. Final color correction is handled from the 4K originals by Deluxe under the supervision of the series director of photography, Colin Watkinson (Wonder Woman, Entourage, The Fall). A 4K (UHD) high dynamic range master is delivered to Hulu, although currently only standard dynamic range is streamed through the service. Hallam Martin adds, “Reed had created an extensive ‘look book’ for the show. It nailed what [series creator] Bruce Miller was looking for. That, combined with her interview, is why the executive producers hired her. It set the style for the series.”

Another departure from network television is that episodes do not have a specific duration that they must meet. Hallam Martin explains, “Hulu doesn’t dictate exact lengths like 58:30, but they did want the episodes to be under an hour long. Our episodes range from about 50 to 59 minutes. 98% of the scenes make it into an episode, but sometimes you do have to cut for time. I had one episode that was 72 minutes, which we left that long for the director’s cut. For the final version, the producers told me to ‘go to town’ in order to pace it up and get it under an hour. This show had a lot of traveling, so through the usual trimming, but also a lot of jump cuts for the passage of time, I was able to get it down. Ironically the longest show ended up being the shortest.”

Adam Taylor (Before I Fall, Meadowland, Never a Neverland) was the series composer, but during the pilot edit, Morano and Hallam Martin had to set the musical style. Hallam Martin says, “For the first three episodes, we pulled a lot of sources from other film scores to set the style. Also a lot of Trent Reznor stuff. This gave Adam an idea of what direction to take. Of course, after he scored the initial episodes, we could use those tracks as temp for the next episodes, and as more episodes were completed, that increased the available temp library we had to work with.”

Post feelings

Story points in The Handmaid’s Tale are often exposed through flashbacks and Moss’ voice over. Naturally, the voice over affects the timing of both the acting and the edit. I asked Hallam Martin how this was addressed. She says, “The voice over was recorded after the fact. Lizzie Moss would memorize the VO and act with that in mind. I would have my assistant do a guide track for cutting and when we finally received Lizzie’s, we would just drop it in. These usually took very little adjustment thanks to her preparation while shooting. She’s a total pro.” The story focuses on many ideas that are tough to accept and watch at times. Hallam Martin comments, “Some of the subject matter is hard and some of the scenes stick with you. It can be emotionally hard to watch and cut, because it feels so real!”

Wendy Hallam Martin cuts these shows on Avid Media Composer, and I asked her about her editing style. She comments, “I watch all the dailies from top to bottom, but I don’t use ScriptSync. I will arrange my bins in the frame view with a representative thumbnail for each take. This way I can quickly see what my coverage is. I like to go from the gut, based on my reaction to the take. Usually I’ll cut a scene first and then compare it against the script notes and paperwork to make sure I haven’t overlooked anything that was noted on set.” In wrapping up, we talked about films versus TV projects. Hallam Martin says, “I have done some smaller features and movies-of-the-week, but I like the faster pace of TV shows. Of course, if I were asked to cut a film in LA, I’d definitely consider it, but the lifestyle and work here in Toronto is great.”

The Handmaid’s Tale continues with season one on Hulu and a second season has been announced.

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters