Sound Forge Pro Mac 3

There are plenty of modern tools that deal with audio, but sometimes you need a product with a very narrow focus to do the job without compromise. That's where Sound Forge Pro fits in. Originally developed by Sonic Foundry, the product, along with its siblings Vegas Pro and Acid, migrated to Sony Creative Software. Sony, in turn, sold those products to German software developer Magix, where they appear to have found a good home. I recently tested Sound Forge Pro Mac 3, the macOS companion to Sound Forge Pro 11 on the Windows side. (Sound Forge Pro 12 is expected to roll out in 2018.) Both are advanced, multichannel audio editors dedicated to editing, processing, and mastering individual audio files, as opposed to a DAW application, which is designed for mixing.

Although Magix's other products are PC-centric, the company has done a good job of embracing and improving the Mac products. Sound Forge also comes in an Audio Studio version – a lower-cost Windows product designed for users who don't need quite as many features. There is no Mac equivalent for it yet, and the former Mac version that was sold through Apple's Mac App Store is no longer available. Naturally, a product like Sound Forge Pro may require some justification for its price tag, since the application competes with the capable audio tools built into most modern NLEs, like Final Cut Pro X, Premiere Pro, or Resolve. It's also competing with Adobe Audition (included with a Creative Cloud subscription) and Apple Logic Pro X, which costs less.

Sound Forge Pro is primarily designed as a dedicated audio mastering application that also does precision audio editing. It works with multichannel audio files (up to 32 tracks) at bit depths of 24-bit, 32-bit, and 64-bit float, and at sample rates up to 192kHz. It also works with video files, although it will only import the audio channels for processing. Sound Forge Pro for the Mac comes with several iZotope plug-ins, including Declicker, Declipper, Denoiser, Ozone Elements 7, and RX Elements. (The Windows version includes a slightly different mix of iZotope plug-ins.) That's on top of Magix's own plug-ins and any other AU plug-ins that might already be installed on your Mac from other applications. The bottom line is that you have a lot of effects and processing horsepower to work with when using Sound Forge Pro.

Even though Sound Forge Pro is essentially a single-file editor, you can work with multiple individual files. Multiple files are displayed within the interface as horizontal tabs or in a vertical stack. You can process multiple files at the same time and copy and paste between them. You can also copy and paste between individual channels within a single multichannel file.

As an audio editor, it’s fast, tactile, and non-destructive, making it ideal for music editing, podcasts, radio interviews, and more. For audio producers, it complies with Red Book Standard CD authoring. The attraction for video editors is its mastering tools, especially loudness control for broadcast compliance. Both Magix’s Wave Hammer and iZotope Ozone 7 Elements’ mastering tools are great for solving loudness issues. That’s aided by accurate LUFS metering. Other cool tools include AutoTrim, which automatically removes gaps of silence at the beginnings and ends of files or from regions within a file. There is also élastique Timestretch, a processing tool to slow down or speed up audio, while maintaining the correct pitch. Timestretch can be applied to an entire file or simply a section within a file. Effects tools and plug-ins are divided into those that require processing and those that can be played in real-time. For example, Timestretch is applied as a processing step, whereas a reverb filter would play in real time. Processing is typically fast on any modern desktop or laptop computer, thanks to the application’s 64-bit engine.
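
To put the loudness discussion in concrete terms, here is a minimal sketch of what LUFS metering measures, using the open-source pyloudnorm and soundfile Python libraries. This is an illustration of the concept, not Sound Forge's code; the file name is hypothetical, and the -23 LUFS target follows the EBU R128 broadcast spec (US ATSC A/85 targets -24 LKFS instead).

```python
# Minimal LUFS check: measure integrated loudness and the gain
# change needed to hit a broadcast target. Uses the open-source
# pyloudnorm (an ITU-R BS.1770 meter) and soundfile libraries --
# an illustration of the metering concept, not Sound Forge's code.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -23.0  # EBU R128 target; ATSC A/85 uses -24 LKFS

data, rate = sf.read("master.wav")   # hypothetical file path
meter = pyln.Meter(rate)             # BS.1770 integrated loudness meter
loudness = meter.integrated_loudness(data)

print(f"Integrated loudness: {loudness:.1f} LUFS")
print(f"Gain to target:      {TARGET_LUFS - loudness:+.1f} dB")
```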

Basic editing is as simple as marking a section and hitting the delete key. You can also split a file into events and then trim, delete, move, or copy and paste event blocks. If you slide an event to overlap another, a crossfade is automatically created, and you can adjust the fade-in/fade-out slopes of these crossfades. The only missing item is the ability to scrub through audio in any fashion – no mouse scrubbing or JKL-key jogging with audible audio, as you'd normally find in an NLE application. That feature is apparently present in the Windows version, but not in this Mac version.

All in all, if audio is a significant part of your workload and you want to handle it in a better and easier fashion, then Sound Forge Pro Mac 3 is worth the investment.

Originally written for RedShark News.

©2017 Oliver Peters

SpeedScriber

Script-based video editing started with Ediflex. But it really came into its own when Avid created script integration as a way to cut dialogue-driven stories, like feature films, in Media Composer. The key ingredient is a written script or a transcription of the spoken audio. That's easy with a scripted feature, where actors perform defined lines, but much harder with something freeform, like a documentary or news interview. In those projects, you first need a person or service to transcribe the audio into a written document – or you simply cut without one and hunt around when you're looking for that one specific sentence.

Modern technology has come to the rescue in the form of artificial intelligence, which has enabled a number of transcription services to offer very fast turnaround times from audio upload to a transcribed, speech-to-text document. Several video developers have tapped into these resources to create new transcription services/applications, which can be tied into several of the popular NLE applications.

Transcription for the three “A” companies

One of these new products is SpeedScriber, a transcription application for macOS and its companion service developed by Digital Heaven, which was founded by veteran UK editor and plug-in developer Martin Baker. To start using SpeedScriber, install the free SpeedScriber application, which is available from the Apple Mac App Store. The next steps depend on whether you just want to create transcribed documents, captioning files, or script integration for Avid Media Composer, Adobe Premiere Pro CC, or Apple Final Cut Pro X.

If you just want a document, or plan to use Media Composer or FCPX, then no other tools are required. For Premiere Pro CC workflows, you'll want to download a panel installer for macOS or Windows from the SpeedScriber website. This installs as a standard Premiere Pro panel and permits you to import transcription files directly into Premiere Pro. The SpeedScriber application enables roundtripping to/from Final Cut using FCPXML.

First, let's talk about the transcription itself. Transcription should generally be clip-based and not from edited timelines, unless you just want to document a completed project or create captions. When you launch SpeedScriber for the first time, you'll need to create an account, which includes 15 minutes of free transcription time. The file length determines the time used. Billing for the service is based on time and is tiered, ranging from $.50/minute (30/60/120 minutes) down to $.37/minute (6,000 minutes). Minutes are pre-purchased and don't expire.
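
Since minutes are pre-purchased, estimating cost is simple multiplication. A quick worked example using the two published end points of the pricing range (the intermediate tier rates aren't listed here):

```python
# Worked example of SpeedScriber's tiered, pre-purchased pricing,
# using only the two published end points of the range.
def block_cost(minutes: int, rate_per_min: float) -> float:
    return minutes * rate_per_min

print(f"120 minutes @ $0.50/min  = ${block_cost(120, 0.50):,.2f}")    # $60.00
print(f"6000 minutes @ $0.37/min = ${block_cost(6000, 0.37):,.2f}")   # $2,220.00
```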

Once your account is ready, drag and drop a file onto the application or point it to the file you want to import. Disable any unwanted audio channels, so that the transcription is based on the best audio channel within the file. Even if all channels are equal, disable all but one of them. Set the number of speakers and the language format, such as British, Australian, or American English. According to Baker, support for five European languages will be added in version 1.1. The service will automatically determine when speakers change, such as between an interviewer and the subject. It's hard for the system to determine this with great accuracy, so don't expect these speaker changes to be perfect.

The transcription experience

Accuracy of the transcription can be extremely good, but it depends on the audio quality that you’ve supplied. A clean interview track – well mic’ed and in a quiet room – can be dead-on with only a few corrections needed. Slower speakers who enunciate well result in greater accuracy. On the other hand, having several speakers in a noisy environment, or a very fast speaker with a heavy accent, will require a lot of correction – enough so that manual transcription might be better in those cases.

Once SpeedScriber has completed its automatic transcription, you can play the file to proof it and make any corrections to the text that are required. It's easy to type corrections to the transcription within the SpeedScriber text editing window. When done, you can export the text in a number of different formats. I ran a test clip of a clear-spoken woman with well-recorded audio. She had a slight southern drawl, but the result from SpeedScriber was excellent. It also did a good job of ignoring speech idiosyncrasies, such as frequent "ums". This eight-minute test clip only required about a dozen text corrections throughout.

If the objective is script integration into an NLE, then the process varies depending on brand. Typically such integration is clip-based, although multi-cam clips are supported. However, it's tougher when you try to connect the transcription to a timeline. For example, I like to do cutdowns of interviews first, before transcribing, and that's not really how SpeedScriber works best. In version 1.1, FCPX compound clips will be supported, so segments can be cut before transcription.

A clear set of tutorial videos is available in the support section of the SpeedScriber website.

Integration with NLEs

Media Composer is easy, because it already has a Script Integration feature. Import the text file that was exported from SpeedScriber as a new script into Media Composer and link the video clip to it. If you purchased Avid's ScriptSync option, its speech analysis function automatically lines up the clip to sentences within the script. If you didn't purchase this add-on, simply add sync points manually.

With Premiere Pro, select the clip, open the SpeedScriber panel and from it, import the corresponding transcription. The text appears in the Speech Analysis section of that clip’s metadata display. It will actually be embedded into the media file so that the clip can be moved between projects complete with that clip’s transcription. You can view and use this text display to mark in/out by words for accurate script-based selections. When you import the script and link it to a multi-cam clip, synced clip, or sequence, text will show up as markers and can be viewed in the markers panel. Premiere Pro is the only integration that can easily update existing speech metadata or markers. So you can start editing with the raw transcript and then update it later when corrections have been made. However, when I tested transcriptions on an edited sequence instead of a clip, it locked up Premiere Pro, requiring a Force Quit. Fortunately, when I re-opened the recovered project, the markers were there as expected.

The most straightforward approach seems to be its use with Final Cut Pro X. According to Baker, "This is the first Digital Heaven product with broad appeal by supporting Avid and Premiere Pro. But FCPX has ended up having the deepest integration due to the ability to drag-and-drop the Library, which was introduced in 10.3. So with roundtripping, SpeedScriber rebuilds the clip's timeline without any need to export. Another advantage of the roundtripping is that SpeedScriber can read the audio channel status from the dropped XML, which is important for getting the best accuracy."

There’s a roundtrip procedure with FCPX, but even without it, simply export an FCPXML from SpeedScriber. Import that into your Final Cut Pro X Library. The clip will then show a number of keyword entries corresponding to line breaks. For each keyword entry, the browser notes field will display the associated text, making it easy to find any dialogue. Plus, these entries are already marked as selections. When clips are edited into the sequence (an FCPX Project), the timeline index enables these notes to be displayed under the Tags section.
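
Those keyword-plus-note entries are also easy to mine outside of Final Cut. Here is a hedged sketch that walks an exported FCPXML and prints each keyword range with its text, assuming a simplified file in which every transcription line is a keyword element with start, value, and note attributes; a real FCPXML nests these inside resources, events, and clips, and the file name here is hypothetical.

```python
# Sketch: list transcription keyword ranges from an FCPXML export.
# Assumes a simplified structure in which each line of dialogue is
# a <keyword> element carrying its text in the note attribute.
import xml.etree.ElementTree as ET

tree = ET.parse("interview.fcpxml")
for kw in tree.iter("keyword"):
    start = kw.get("start", "?")   # FCPXML rational time, e.g. "3600/2500s"
    print(f"{start:>14}  {kw.get('value', '')}: {kw.get('note', '')}")
```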

SpeedScriber shows tremendous potential to improve the efficiency of many spoken-word projects, like documentaries. Half the battle is figuring out the story that you want to tell, so having the text right in front of you makes that job easier. Applying modern technology to this challenge is refreshing, and the constantly improving accuracy of these systems makes it an easy consideration. SpeedScriber is one of those tools that not only gets you home earlier, but also gives you the assurance that you can easily find the clip you're looking for in the proverbial haystack of clips.

©2017 Oliver Peters

Adobe’s Late-2017 Creative Cloud Updates

According to Cisco, 82% of internet traffic will be video by 2021. Adobe believes over 50% of that will be produced video and not just simple user content. This means producers will be expected to produce more – working faster and smarter. In the newest Creative Cloud update, Adobe has focused on just such workflow improvements. These were previewed at IBC and will be released later this year.

Adobe Premiere Pro CC

With this release, Adobe has finally enabled the ability to have more than one project file open at the same time. You can move clips and sequences between open projects. In addition, projects can be locked by the user, making Premiere Pro the first NLE to enable multiple open projects and locking within a single application. Adobe has also expanded project types to include both Team Projects (your project is in the cloud) and shared projects (your project is local). The latter is ideal for SAN/NAS environments and adds Avid-style collaboration.

Editors will enjoy specific timeline enhancements, like “close all gaps” and up to 16 label colors. The Essential Graphics panel gets some love with font filtering and a visual font preview window. Graphics templates will now include a minimum duration, so that these clips can be extended on the timeline, while leaving the fade-in and fade-out constant.

Adobe is doubling down on VR using its acquired Skybox technology. New are 19 immersive effects and transitions specific to VR projects. These are needed to properly seam wraparound edges when effects are added to VR clips. They are all GPU-only effects; however, as some VR clips can be 5K wide and larger, performance can be challenging. Nevertheless, Adobe reports decent performance with 6K VR clips at half-resolution on laptops like the HP ZBook or the 2017 15" MacBook Pro. There is also an immersive playback viewer designed for HMDs (head-mounted displays). It will display the image along with the Premiere Pro timeline window.

Premiere Pro’s non-VR editing updates, including shared projects, are explained well by the reTooled blog (video here).

Adobe Audition

Audition is the place to finalize your Premiere Pro mix, so a new auto-ducking mix tool has been added. It's based on Sensei, Adobe's umbrella name for its artificial intelligence technologies. To use auto-ducking, the editor simply adjusts sensitivity, amount of reduction, and fades, and then lets Audition do the rest. Using AI, it detects pauses in the dialogue and adjusts music volume accordingly.
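
Adobe hasn't published Sensei's internals, but the underlying idea is easy to sketch: measure the dialogue's short-term level, and wherever it rises above a silence threshold, pull the music down by the chosen reduction amount, smoothing the gain changes so they behave like fades. A naive numpy illustration follows (emphatically not Audition's implementation).

```python
# Naive auto-ducking sketch -- not Adobe's Sensei implementation.
# Where the dialogue's RMS envelope exceeds a threshold, the music
# gain is reduced; smoothing the gain curve approximates the fades
# Audition lets you adjust. Assumes mono arrays of equal length.
import numpy as np

def duck(dialog, music, rate, thresh_db=-40.0, reduce_db=-12.0, fade_s=0.25):
    win = max(1, rate // 50)                        # ~20 ms RMS windows
    pad = np.pad(dialog ** 2, (win // 2, win // 2), mode="edge")
    rms = np.sqrt(np.convolve(pad, np.ones(win) / win, "same"))[: len(dialog)]
    level_db = 20.0 * np.log10(rms + 1e-12)

    gain_db = np.where(level_db > thresh_db, reduce_db, 0.0)  # duck under speech
    fade = max(1, int(fade_s * rate))               # smooth steps into fades
    gain_db = np.convolve(gain_db, np.ones(fade) / fade, "same")
    return music * (10.0 ** (gain_db / 20.0))
```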

Other Audition enhancements include a timeline timecode overlay for the video viewer, the ability to simultaneously adjust dual-sided fades on clips, and new record and punch-in preferences for ADR work (“looping”).

After Effects

Here's another example of this focus on time-savings. After Effects gains a new start-up window to set up the first composition. It also gains a keyboard command editor and, in this release, adds the same font previewing tools as Premiere Pro. The biggest new feature is an expansion of the expression controls, which can be tied to data files for the quick updating of template graphics. If you create a graphic – such as a map of the US with certain information displayed by colors for each state – and it's based on a template tied to data, then changing the supporting data will automatically update the graphic. Other enhancements include GPU acceleration for third-party plug-ins that use the Mercury Playback Engine.
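
The data-file concept is easy to picture outside of After Effects: the template reads a structured data file and maps each record to a visual property. Here's a generic Python sketch of that mapping step; the file name and the blue-to-red color ramp are invented for the example, and in AE the equivalent lookup would live in an expression tied to the imported data.

```python
# Generic sketch of data-driven graphics: map per-state values from
# a JSON file to fill colors. File name and color ramp are
# hypothetical; After Effects would do this lookup in an expression.
import json

with open("states.json") as f:          # e.g. {"FL": 0.82, "GA": 0.41, ...}
    values = json.load(f)

def value_to_color(v: float) -> str:
    """Linear ramp from blue (low) to red (high), v in 0..1."""
    r, b = int(255 * v), int(255 * (1.0 - v))
    return f"#{r:02x}00{b:02x}"

for state, v in sorted(values.items()):
    print(state, value_to_color(v))     # swap the JSON, colors update
```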

Character Animator

This live-capture, cartoon animation tool finally comes out of beta. A new feature will be the adjustment of the responsiveness of the animation tracking. This will permit live animation to look more hand-drawn. Actions can now be triggered by MIDI control panels. Triggers are editable in the timeline with a waveform for better matching of lip-sync.

There's plenty of good user news, too, including the release of 6 Below, an ultra-wide film designed for the Barco three-screen format. It was edited by Vashi Nedomansky using Premiere Pro. Other Premiere Pro news includes the dramatic feature film Only The Brave, edited by Bill Fox, and Coup 53, a documentary in post being cut by Walter Murch. Both of these noted editors have been using Premiere Pro.

For more in-depth info, check out these links for a solid overview of Adobe’s soon-to-come Creative Cloud application updates:

ProVideo Coalition – Scott Simmons

Premiere Bro blog

Adobe’s own Digital Video & Audio blog

Originally written for Digital Video magazine / Creative Planet Network

©2017 Oliver Peters

Chromatic

Since its introduction six years ago, Apple Final Cut Pro X has only offered the Color Board as its color correction/grading tool. That’s in addition to some automatic correction features and stylized “look” effects. The Color Board interface is based on color swatches and puck sliders, instead of traditional color wheels, leaving many users pining for something else. To answer this need, several third-party, plug-in developers have created color corrector effects modules to fill the void. The newest of these is Chromatic from Coremelt – a veteran Final Cut plug-in developer.

The toolset

Chromatic is the most feature-rich color correction module currently available for FCPX. It offers four levels of color grading, including inside and/or outside of a mask, overall frame, and also a final output correction. When you first apply the Chromatic Grade effect to a clip, you’ll see controls appear within the FCPX inspector window. These are the final output adjustments. To access the full toolset, you need to click on the Grade icon, which launches a custom UI. Like other grading tools that require custom interfaces, Chromatic’s grading toolset opens as a floating window. This is necessitated by the FCPX architecture, which doesn’t give developers the ability to integrate custom interface panels, like you’ll find in Adobe applications. To work around this limitation, developers have come up with various ingenious solutions, including floating UI windows, HUDs (heads up displays), and viewer overlays. Chromatic uses all of these approaches.

The Chromatic toolset includes nine correction effects, which can be stacked in any order onto a clip. These include lift/gamma/gain sliders, lows/mids/highs color wheels, auto white balance, replace color, color balance/temperature/exposure/saturation, three types of curves (RGB, HSL, and Lab), and finally, color LUTs. As you use more tools on a clip, these will stack into the floating window like layers. Click on any of these tools within the window to access those specific controls. Drag tools up or down in this window to rearrange the order of operation of Chromatic’s color correction processes. The specific controls work and look a lot like similar functions within DaVinci Resolve. This is especially true of HSL Curves, where you can control Hue vs. Sat or Hue vs. Luma.
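
Coremelt doesn't publish Chromatic's internal math, but lift/gamma/gain corrections generally follow a well-known shape. One common formulation, sketched with numpy on pixel values normalized to 0-1 (an assumption for illustration, not Chromatic's actual code):

```python
# One common lift/gamma/gain formulation -- an assumption for
# illustration; Chromatic's internal math isn't published. Pixel
# values are normalized 0..1: lift raises blacks, gain scales
# whites, gamma bends the midtones.
import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    y = x * gain + lift * (1.0 - x)       # lift affects shadows most
    return np.clip(y, 0.0, 1.0) ** (1.0 / gamma)

px = np.array([0.0, 0.18, 0.5, 1.0])      # black, mid-gray, midtone, white
print(lift_gamma_gain(px, lift=0.05, gamma=1.1, gain=0.95))
```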

Masking with the power of Mocha

Corrections can be masked, in order to affect only specific regions of the image. If you select "overall", then your correction will affect the entire image. But if you select "inside" or "outside" of the mask, then you can grade regions of the image independently of each other. Take, for example, a common on-camera interview situation with a darkened face in front of a brightly exposed exterior window. Once you mask around the face, you can then apply different correction tools and values to the face, as opposed to the background window. Plus, you can still apply an overall grade to the image, as well as final output adjustment tweaks with the sliders in the inspector window. That's a total of four processes, with a number of correction tools used in each process.

To provide masking, Coremelt has leveraged its other products, SliceX and TrackX. Chromatic uses the same licensed Mocha planar tracker for fast, excellent mask tracking. In our face example, if the talent moves around within the frame, simply use the tracker controls in the masking HUD to track the talent's movement within the shot. Once tracked, the mask is locked onto the face.

Color look-up tables (LUTs)

When you purchase Chromatic, you’ll also get a LUT (color look-up table) browser and a default collection of looks. (More looks may be purchased from Coremelt.) The LUT browser is accessible within the grading window. I’m not a huge fan of LUTs, as these are most often a very subjective approach to a scene that simply doesn’t work with all footage equally well. All “bleach bypass” looks are not equal. Chromatic’s LUT browser also enables access to any other LUTs you might have installed on your system, regardless of where they came from, as long as they are in the .cube format.
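
The .cube format itself is plain text and simple to parse: a header line gives the 3D table size N, followed by N³ rows of RGB values with the red axis varying fastest. Here's a minimal nearest-neighbor applier, just to illustrate what a LUT does; real implementations, presumably including Chromatic's, interpolate between lattice points.

```python
# Minimal .cube 3D LUT reader and nearest-neighbor applier -- an
# illustration of the format, not production code; real tools
# interpolate (e.g. trilinearly) between lattice points.
import numpy as np

def load_cube(path):
    size, rows = None, []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                continue                      # skip blanks and comments
            if parts[0] == "LUT_3D_SIZE":
                size = int(parts[1])
            elif len(parts) == 3:
                try:
                    rows.append([float(v) for v in parts])
                except ValueError:
                    pass                      # TITLE and other keywords
    # red varies fastest in .cube data, so axes come out (b, g, r)
    return np.asarray(rows).reshape(size, size, size, 3), size

def apply_nearest(lut, size, rgb):
    """rgb: float array (..., 3) in 0..1 -> nearest lattice entry."""
    idx = np.clip(np.rint(rgb * (size - 1)).astype(int), 0, size - 1)
    return lut[idx[..., 2], idx[..., 1], idx[..., 0]]
```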

LUTs get even more confusing with camera profiles, which are designed to expand flat-looking, log-encoded camera files into colorful Rec709 video. Under the best of circumstances, these are mathematically correct LUTs developed by the camera manufacturer, working as an inverse of the color transforms applied as the image is recorded. But in many cases, commonly available camera profile LUTs don't come from the manufacturers themselves; they are reverse-engineered to closely match the manufacturer's own LUT. They will look good, but might not yield identical results to a true camera LUT.

In the case of FCPX, Apple has built in a number of licensed camera manufacturer LUTs for specific brands. These are usually auto-detected and applied to the footage without appearing as an effect in the inspector. So, for instance, with ARRI Alexa footage that was recorded as Log-C, FCPX automatically adds a LogC-to-Rec709 LUT. However, if you disable that and then subsequently add Chromatic’s LogC-to-Rec709 LUT, you’ll see quite a bit of difference in gamma levels. Apple actually uses two of these LUTs – a 2D and a 3D cube LUT. Current Alexa footage defaults to the 3D LUT, but if you change the inspector pulldown to the regular LogC LUT, you’ll see similar gamma levels to what Chromatic’s LUT shows. I’m not sure if the differences are because the LUT isn’t correct, or whether it’s an issue of where, within the color pipeline, the LUT is being inserted. My recommendation is to stick with the FCPX default camera profile LUTs and then use the Chromatic LUTs for creative looks.

In use

Chromatic is a 1.0 product and it’s not without some birthing issues. One that manifested itself is a clamping issue with 2013 Mac Pros. Apparently this depends on which model of AMD D-series GPU your machine has. On some machines with the D-500 chips, video will clamp at 0 and 100, regardless of whether or not clamping has been enabled in the plug-in. Coremelt is working on a fix, so contact them for support if you have this or other issues.

Overall, Chromatic is well-behaved as custom plug-ins go. Performance is good and rendering is fast. Remember that each tool you use on a clip is like adding an additional effects filter. Using all nine tools on a clip is like applying nine effects filters. Performance will depend on a lot of circumstances. For example, if you are working with 4K footage playing back from a fast NAS storage system, then it will take only a few applied tools before you start impacting performance. However, 1080p local media on a fast machine is much more forgiving, with very little performance impact during standard grading using a number of applied tools.

Coremelt has put a lot of work into Chromatic. To date, it’s the most comprehensive grading toolset available within Final Cut Pro X. It is like having a complete grading suite right inside of the Final Cut timeline. If you are serious about grading within the application and avoiding a roundtrip through DaVinci Resolve, then Chromatic is an essential plug-in tool to have.

©2017 Oliver Peters

Affinity Photo for iPad

UK developer Serif has been busy creating a number of Mac, Windows, and now iOS applications that challenge Adobe's stranglehold on the imaging industry. The newest of these is Affinity Photo for the iPad. As newer iPads become more powerful – starting with the Air 2 and moving into the present with two Pro models – iOS app developers are taking notice. There have been a number of graphic and design apps available for iOS for some time, including Adobe Photoshop Express (PS Express), but none is as full-featured as Affinity Photo. There is very little compromise between the desktop version and the iPad version, making it the most sophisticated iOS application currently on the market.

Affinity Photo starts with an elegant user interface that's broken down into five "personas". These are essentially workspaces and include Photo, Selections, Liquify, Develop, and Tone Mapping. Various tools, specific to each persona, populate the left edge of the screen. So in Photo, that's where you'll find crop, move, brush tools, and more. The right edge displays a series of "Studios". These often contain a set of tools, like layer management, adjustment filters, channel control, text, and so on. Everything we've come to expect from an advanced desktop graphics application is there. Naturally, if you own an iPad Pro with the Apple Pencil, then you can further take advantage of Serif's support for the pressure sensitivity of that input device.

Best of all, response is very fluid. For example, the Liquify persona offers an image mesh that you can drag around to bend or deform an image. There’s virtually no lag while doing this. Some changes require rasterization before moving on. In the case of Liquify, changes are non-destructive, until you exit that persona. Then you are asked whether or not to commit to those changes. If you commit, then the distortion you’ve done in that persona is rendered to the image inside of the Photo app.

When working with photography, you’ll do your work either in the Develop or the Tone Mapping persona. As you would expect, Develop includes the standard photo enhancement tools, including color, red-eye, and lens distortion correction. There’s also detail enhancement, noise reduction, and a blemish removal tool. Tone mapping is more exotic. While intended for work with high dynamic range images, you can use these tools to create very stylized enhancement effects on non-HDR images, too.

All of this is great, but how do you get files in and out of the iPad? That's one of Affinity Photo's best features. Like most iOS apps, you can bring in files from various cloud services like Dropbox. Being a photography application, it can also import native iPad images from other applications, like Apple Photos. Therefore, if you snap a photo with your iPad camera, it's available to Affinity Photo for enhancement. When you "save a copy" of the document, the processed file is saved to iCloud in its native .afphoto file format. These images can be accessed from iCloud on a regular Mac desktop or laptop computer. So if you also have the desktop (macOS) version of Affinity Photo, it will read the native file format, preserving all of the layer and effects information within that file. In addition, you can export a version from the iPad in a wide range of graphic formats, including Photoshop.

Affinity Photo includes sophisticated color management tools that aren’t commonly available in an iOS photo/graphics application. Exports may be saved in various color profiles. In addition, you can set various default color profiles and convert a document’s profile, such as from RGB to CMYK. While having histograms available for image analysis isn’t unusual, Affinity Photo also includes tools like waveform displays and a vectorscope, which are familiar to video-centric users.
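
Proper RGB-to-CMYK conversion goes through ICC profiles, which is precisely what Affinity Photo's color management handles. Still, the naive math behind the idea is compact enough to show as a sketch (this ignores inks, paper, and gamut entirely, so treat it as a conceptual illustration only):

```python
# Naive RGB -> CMYK conversion, for illustration only. Real
# conversions -- including Affinity Photo's -- use ICC color
# profiles; this formula ignores inks, paper, and gamut.
def rgb_to_cmyk(r: float, g: float, b: float):
    """r, g, b in 0..1 -> (c, m, y, k) in 0..1."""
    k = 1.0 - max(r, g, b)
    if k >= 1.0:                        # pure black: avoid divide by zero
        return (0.0, 0.0, 0.0, 1.0)
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return (c, m, y, k)

print(rgb_to_cmyk(0.9, 0.3, 0.1))       # an orange-red
```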

Serif has made it very easy to get up and running for new users. At the launch screen, you have access to an interactive introduction, an extensive list of help topics, and tutorials. You can also access a series of complex sample images. When you pick one of these, it’s downloaded to your iPad where you can dive in and deconstruct or modify it to your heart’s content. Lastly, all personas include a question mark icon in the lower right corner. Touch and hold the icon and it will display the labels for all of the tools in that persona. Thus, it’s very easy to switch over if you come from a Photoshop-centric background.

Affinity Photo is a great example of what the newest iPads are capable of. Easy interchange between the iOS and macOS versions is the icing on the cake, enabling the iPad to be part of a designer's arsenal and not simply a media consumption device.

©2017 Oliver Peters

LumaFusion – an iOS NLE

As Apple’s iOS platform becomes more powerful, applications for it begin to rival the power and complexity of desktop software. LumaFusion is a recently introduced nonlinear video editing product from Luma Touch. Its founders created the Avid/Pinnacle/Corel iOS NLE, but LumaFusion takes a fresh approach. Luma Touch currently offers three iOS products: LumaClip (a single-clip editor), LumaFX (video effects for clips), and LumaFusion (a full-fledged NLE that integrates the features of the other two products). All three apps run under what Luma Touch dubs their Spry Engine, a framework for iOS video applications.

LumaFusion works on both the iPhone and iPad; however, the iPad version comes closest to a professional desktop experience. Ideally you’ll want one of the iPad Pros, but it runs perfectly fine on an iPad Air 2 with the A8X chip, which is what I used. I’ve tried other iOS NLEs, including Adobe Clip, iMovie, and TouchEdit, which have their pros and cons. For instance, iMovie doesn’t deal with fractional video frame rates and TouchEdit tries to mimic a flatbed film editor. This brings me to LumaFusion, which has been designed as a modern, professional-grade NLE for the iOS platform.

UPDATE: Watch this video for a rundown of the new features in version 1.4, released in September 2017.

The iOS ecosystem

Like other iOS apps that tie into the ecosystem, LumaFusion can import media from iTunes, Photos, and other third-party applications, like FiLMiC Pro. As a "pro" app, it understands various whole and fractional frame rates and sizes up to 3840 x 2160 (UHD 4K), depending on your device. However, for me, the interest is not in cutting things that I've shot with my iPad, but rather fitting it into an offline/online editing workflow. This means import and export are critical.

If you own an iPad Pro, then you can get an SD card reader as an accessory. With the card reader, however, only native DSLR movie clips can be imported into the Photos app – not other file formats. Typically, you are going to transfer media using cloud syncing tools, like Dropbox, Box, OneDrive, etc. LumaFusion also includes a number of royalty-free music cuts, which can be accessed through its integrated media browser.

To use it as a rough-cut tool, simply create H.264 proxies on your desktop system and sync those to the iPad using Dropbox (or another cloud service). I created a test project of about 60 clips (720p, 6Mbps, 29.97fps) that only consumed 116MB of storage space. Therefore, even a free 2GB Dropbox account would be fine. Within LumaFusion, import the files from Dropbox and start editing.
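
If you'd rather script that proxy step on the desktop, something like the following sketch works, assuming the free ffmpeg tool is installed. The scale and bitrate settings mirror the 720p, 6Mbps test files described above, and the folder paths are hypothetical.

```python
# Batch H.264 proxy generation for a LumaFusion rough cut, assuming
# ffmpeg is installed. Settings mirror the 720p / 6 Mbps proxies
# described above; folder paths are hypothetical.
import pathlib
import subprocess

SRC = pathlib.Path("camera_originals")
DST = pathlib.Path("Dropbox/proxies")
DST.mkdir(parents=True, exist_ok=True)

for clip in sorted(SRC.glob("*.mov")):
    out = DST / (clip.stem + "_proxy.mp4")
    subprocess.run([
        "ffmpeg", "-i", str(clip),
        "-vf", "scale=-2:720",          # 720p, preserve aspect ratio
        "-c:v", "libx264", "-b:v", "6M",
        "-c:a", "aac", "-b:a", "128k",
        str(out),
    ], check=True)
```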

Luma Touch will soon start beta testing LumaConnect – a macOS companion application designed to facilitate offline/online editing roundtrips. It will feature automatic iOS proxy creation and the ability to relink high-res media – as well as any iOS-captured content – back on your desktop computer. LumaConnect will also allow the rendering of projects as Apple ProRes files.

User interface and editing workflow

Overall, the interface design and editing model more closely approximates Apple Final Cut Pro X than any other NLE. The app’s design is built around a media pool with various editing projects (sequences). This is a similar approach to FCPX 10.0, which had separate Events (bins) and Projects (sequences), but no combined Libraries. It’s almost like FCPX “Lite” for iOS.

There are three main windows: media browser, timeline, and a single, combo viewer. It uses fly-out panels for tools and mode changes to access clip editing and effects modules. These modules are, in fact, LumaClip and LumaFX integrated into LumaFusion. The timeline is “magnetic”, much like FCPX. Clip construction on the timeline also follows the layout of primary and connected clips, rather than discrete target tracks. A total of three integrated audio/video clips can be stacked vertically, along with another three audio-only clips, for a total of six audio “tracks”. Audio can be adjusted through a fly-out track mixer. LumaFusion includes four clip editing tools: speed and reverse, frame fit, color effects, and audio editing. In addition, there’s a multi-layered title tool, along with a number of customizable title templates to choose from. Clip-based volume and video effect adjustments can be keyframed.

Effects are pretty sophisticated and would often be GPU-accelerated on a desktop system. These include color correction, blurs, transforms, transitions, and more. You can stack a number of these onto a single clip without any impact on playback. The effects priority can be rearranged and the interface also provides an indication of how many resources you are tying up on the iPad.

The editing experience

Serious video editing on an iPad isn’t for everyone, but the more I worked with it, the more I enjoyed the experience. If you have an iPad-compatible keyboard, it follows some generic commands, including JKL playback and I and O for mark-in and mark-out. There are also a few FCPX keystrokes, like W for insert/overwrite (depending on which edit mode is selected). Unfortunately J (reverse playback) only works in the clip viewer, but not in the timeline. I’d love to see a more extensive keyboard command set. Naturally, being an iOS app, everything can be accessed via touch, which is best (though not essential) if you have the Apple Pencil for the iPad Pro.

There are a few standard editing functions that I missed. For example, there’s no “rolling-edit” trim function. If you want to move a cut point – equally trimming the left and right sides – you have to do it in the overwrite edit mode and trim the incoming or outgoing side of one of the clips. But, if you trim it back, a gap is left. J-cuts and L-cuts require that you detach the audio from the clip, as there is no way to expand an a/v clip in the timeline.

It is definitely possible to finish and export a polished piece from LumaFusion. You can also export an audio-only mix. This enables you to embellish your audio track outside of LumaFusion and then reimport and marry it to the picture for the final version. Because you can layer vertical tracks, cutting a two-camera interview piece on your iPad is pretty easy. Rough-cutting a first pass or pulling edited selects on an iPad becomes completely viable with LumaFusion.

Sharing your edit

Once you’ve edited your piece, it’s easy to share (export) your final sequence as a single audio/video file, audio-only file, project (currently only compatible with LumaFusion), or trimmed media. Be aware that there’s a disconnect between the frame rate terminology for settings versus exports. For example, with project settings, you can pick 24 or 30, which are actually 23.98 or 29.97; however, on export, you must pick between 24 and 23.98 or 30 and 29.97. Nevertheless, exports up to UHD frame sizes are fine, including downscaled sizes, if needed. So, you can import and cut in UHD and export a 1080 file. A flattened H.264 movie file of your sequence – wrapped in either an .mp4 or QuickTime .mov container – may be exported at up to 50Mbps (1080p) or 100Mbps (UHD).
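
The reason for that labeling mismatch is historical: the NTSC-family "fractional" rates are the integer rates divided by 1.001, so a project labeled 24 or 30 may really be running at 23.976 or 29.97. A quick check:

```python
# The "fractional" NTSC-family frame rates are the integer rates
# divided by 1.001 -- hence the 24 vs. 23.98 labeling mismatch.
from fractions import Fraction

for nominal in (24, 30, 60):
    exact = Fraction(nominal * 1000, 1001)
    print(f"{nominal} fps nominal -> {exact} = {float(exact):.5f} fps")
```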

If your intention is to use LumaFusion for "offline" editing, then for now, your only option is to embed "burn-in" timecode into the media that you send to the iPad. Then manually write down edit points based on the visible timecode at the cuts. The upcoming LumaConnect macOS application will make it possible to send projects to both Final Cut Pro X and Premiere Pro via XML. According to Luma Touch, they will also be adding XML export from LumaFusion as an in-app purchase, most likely before the release of LumaConnect.

Using an iPad or iPad Pro as your only computer isn't for everyone, but LumaFusion is definitely a tool that brings iOS editing closer to the desktop experience. To get you started, the company has posted over 30 short tutorials on its YouTube channel. Sure, there are compromises, but not as many as you might think for simple projects. And even if an iPad is only a supplemental tool, then, like so many other iOS apps, LumaFusion is another way to add efficiency in the modern, mobile world.

Originally written for RedShark News.

©2017 Oliver Peters

Spice with Templates

One way in which Apple's Final Cut Pro X has altered editing styles is through the use of effects built as Motion templates, using the common engine shared with Apple Motion. There are a number of developers marketing effects templates, but the biggest batch can be found at the FxFactory website. A regular development partner is idustrial Revolution, the brainchild of editor (and owner of FCP.co) Peter Wiggins. Wiggins offers a number of different effects packages, but the group marketed under the XEffects brand includes various templates that are designed to take the drudgery out of post, more so than just being eye-catching visual effects plug-ins.

XEffects includes several packages designed to be compatible with the look of certain styles of production, such as news, sports, and social media. These packages are only for FCP X and come with modifiable preset moves, so you don't have to build complex title and video moves through a lot of keyframing. The latest is XEffects Viral Video, a set of moves, text, and banners that fit the style used today for trendy videos. The basic gist of these effects covers sliding or moving banners with titles, and the templates have been created to conform to both 16:9 and square video projects. In addition, there is a set of plug-ins to create simple automatic moves on images, which is helpful in animating still photos. Naturally, several title templates can be used together to create a stacked graphic design.

Another company addressing this market is Rampant Design Tools, with a series of effects templates for both Apple Final Cut Pro X and Adobe Premiere Pro CC. Their Premiere Pro templates include both effects presets and template projects. The effects presets can be imported into Premiere and become part of your arsenal of presets. For example, if you want text to slide in from the side, blurred, and then resolve itself when it comes to rest – there's a preset for that. Since these are presets, they are lightweight, as no extra media is involved.

The true templates are actually separate Premiere Pro template projects. Typically these are complex, layered, and nested timelines that allow you to create sophisticated effects without the use of traditional plug-ins. These projects are designed to make it easy to see where to place your video, so no real compositing knowledge is needed. Rampant has done the hard part for you. As with any Premiere Pro project, you can import the final effects sequence into your active project, so there's no need to touch the template project itself. However, these template projects do include media and aren't as lightweight as the presets, so be mindful of your available hard drive space.

For Final Cut Pro X, Rampant has done much the same, creating both a set of installable Motion template effects, like vignette or grain, as well as more complex FCP X Libraries designed for easy and automatic use. As with the Premiere products, some of these Libraries contain media and are larger than others, so be mindful of your space.

Both of these approaches offer new options in the effects market. These developers give you plug-in-style effects without actually coding a specific plug-in. This makes for faster development and less concern that a host application version change will break the plug-in. As with any of this new breed of effects, the cost is much lower than in the past, and effects can be purchased a la carte, which enables you to tailor your editor's tool bag to your immediate needs.

©2017 Oliver Peters