More 4K

I’ve talked about 4K before (here, here and here), but I’ve recently done some more 4K jobs that have me thinking again. 4K means different things to different people. In terms of dimensions, there’s the issue of cinema 4K (4096 pixels wide) versus the UltraHD/QuadHD/4K 16:9 version (3840 pixels wide) – whatever you want to call it. That really doesn’t make a lot of difference, because the two are close enough to be treated as the same. There’s so much hype around 4K, though, that you really have to wonder if it’s a case of “the Emperor’s new clothes”.

First of all, 4K used as a marketing term is not a resolution, it’s a frame dimension. As such, 4K is not four times the resolution of HD – that’s a measurement of area, not resolution. True resolution is usually measured in the vertical direction, based on the ability to resolve fine detail (regardless of the number of pixels), so 4K is at best only twice the resolution of HD. 4K is also not sharpness, which is a human perception affected by many things, such as lens quality, contrast, motion and grading. It’s worth watching Mark Schubin’s excellent webinar on the topic to get a clearer understanding of this. There’s also a very good discussion among top DoPs here about 4K, lighting, high dynamic range and more.
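
To make the dimension-versus-resolution distinction concrete, here’s a quick back-of-the-envelope calculation in Python, using the UltraHD and HD frame sizes:

# UltraHD vs. HD: pixel count is a measure of area, not resolution.
uhd_w, uhd_h = 3840, 2160
hd_w, hd_h = 1920, 1080

area_ratio = (uhd_w * uhd_h) / (hd_w * hd_h)   # 4.0 - four times the pixels...
linear_ratio = uhd_h / hd_h                    # 2.0 - ...but only twice the
                                               # resolving power, at best
print(area_ratio, linear_ratio)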

A lot of arguments have been made that 4K cameras built around a single CMOS sensor with a Bayer-style color filter array don’t even deliver the resolution they claim. The reason is that in many designs 50% of the photosites are green versus 25% each for red and blue. Green is used for luminance, which determines detail, so you do not have a 1:1 pixel relationship between green and the stated frame resolution of the sensor. That’s in part why RED developed 5K and 6K sensors, and why Sony uses an 8K sensor (F65) to deliver a 4K image.
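
As a rough sketch of that argument (the frame size here is illustrative, not any specific camera’s sensor):

# A Bayer color filter array: in every 2x2 block of photosites,
# two are green, one is red and one is blue.
sensor_w, sensor_h = 4096, 2160        # hypothetical "4K" Bayer sensor
total = sensor_w * sensor_h
green = total // 2                     # 50% green - carries the luma detail
red = blue = total // 4                # 25% each

# Only half the stated photosite count samples green, so luma detail
# must be interpolated (demosaiced) from neighboring photosites - one
# reason true resolving power falls short of the stated frame dimension.
print(f"green photosites: {green:,} of {total:,}")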

The perceived image quality is also not all about total pixels. The pixels of the sensor, called photosites, are its light-receiving elements. There’s a loose correlation between photosite size and light sensitivity. For a sensor of a given physical size, you can design it with many small photosites or with fewer, larger ones. This roughly translates to a sensor with high resolution but smaller dynamic range (many small photosites), or one with lower resolution but higher dynamic range (fewer, larger photosites). The equation isn’t nearly this simplistic, since a lot of color science and “secret sauce” goes into optimizing a sensor’s design, but you can certainly see this trade-off play out in the marketing battles between the RED and ARRI camps. In the case of the ALEXA, ARRI adds some on-sensor filtering, which results in a softer image with a characteristic filmic quality.

Why do you use 4K?

With 4K there are two possible avenues. The first is to shoot 4K for the purpose of reframing and repositioning within HD and 2K timelines. Reframing isn’t a new production idea. When everyone shot on film, some telecine devices, like the Rank Cintel Mark III, sported zoom boards that permitted an optical blow-up of the 35mm negative. You could zoom in for a close-up in transfer that didn’t cost you resolution. Many videographers shoot 1080 for a 720 finish, as this allows a nice margin for reframing in post. The second is to deliver a final 4K product. Obviously, if your intent is the latter, then you can’t count on the techniques of the former in post.

When you shoot 4K for HD post, workflow becomes an issue. Do you shoot everything in 4K or just the shots you know you’ll want to manipulate? How will this cut with HD and 2K content? That’s where it gets dicey, because some NLEs have good 4K workflows and others don’t. And it’s here that I contend you are getting less than meets the eye, so to speak. I have run into plenty of editors who have dropped a 4K clip into an HD timeline and then blown it up, thinking that they are really cropping into the native 4K frame and maintaining resolution. Depending on the NLE and the settings used, often they are simply blowing up an HD shot: the NLE scaled the 4K to HD first and then expanded the downscaled HD image, rather than cropping into the actual 4K native resolution. So you have to be careful. And guess what – if the blow-up isn’t that extreme, it may not look much different than the crop.
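
The pitfall is the order of operations. Here’s a minimal sketch of the two pipelines using Python and Pillow (the file name is hypothetical); both end up with the same framing, but only the second one samples the native 4K detail:

from PIL import Image

frame = Image.open("clip_4k_frame.png")       # a 3840x2160 source frame

# What some NLEs do: conform the clip to the HD timeline first...
hd = frame.resize((1920, 1080), Image.LANCZOS)
# ...then "zoom in" 200% on the already-downscaled image.
soft = hd.resize((3840, 2160), Image.LANCZOS).crop((960, 540, 2880, 1620))

# The real crop: a 1:1 window into the native 4K frame.
sharp = frame.crop((960, 540, 2880, 1620))    # 1920x1080, straight from 4K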

One thing to remember is that a 4K image scaled to fit an HD timeline gains the benefit of oversampling. The result in HD will be very sharp and, in fact, will generally look better perceptually than the exact same image natively shot at HD size. When you crop into the native image, you are losing some of that oversampling effect. A 1:1 pixel crop gives the same effective image size as a 200% blow-up – though, of course, not the same result. When you compare the oversampled “wide shot” (4K scaled to HD) to the “close-up” (native 4K crop), the close-up will often look softer. You’ll see defects of the image, like chromatic aberration in the lens, missed critical focus and sensor noise. If you instead shoot a wide and then an actual close-up, that result will usually look better.
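
A short worked example of that math (UltraHD numbers, illustrative only):

def oversampling(zoom):
    """Source oversampling left when punching in on a 3840-wide clip
    sitting in a 1920-wide timeline."""
    return 3840 / (1920 * zoom)

for zoom in (1.0, 1.5, 2.0):
    print(f"{zoom:.0%} punch-in -> {oversampling(zoom):.2f}x oversampled")
# 100% -> 2.0x; 150% -> 1.33x; 200% -> 1.0x, i.e. a 1:1 crop,
# the point where the oversampling benefit is gone.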

On the other hand, if you blow up the 4K-to-HD scale-down or a native HD shot, you’ll typically see a result that looks pretty good. That’s because there’s often a lot more information there than monitors or the eye can detect. In my experience, you can commonly get away with a blow-up of up to 120% of the original image size and, in some cases, as much as 150%.

To scale or not to scale

Let me point out that I’m not saying a native 4K shot doesn’t look good. It does, but often the associated workflow hassles aren’t worth it. For example, take a typical 1080p 50” Panasonic plasma that’s often used as a client monitor in edit suites. You or your client may be sitting 7 to 10 feet away from it, which is closer than most people sit in a living room with a screen that size. If I show a client the native image (4K at 1:1 in an HD timeline) alongside a separate HD image with the same framing, it’s unlikely that they’ll see a difference. Another test is to take two identical images – one native HD and the other 4K. Scale up the HD and crop down the 4K to match. In theory, the 4K should look better and sharper. In fact, sitting back on the client sofa, most won’t see a difference. It’s only when they step to about 5 feet in front of the monitor that a difference is obvious, and then only when looking at fine detail within the shot.

Not all scaling is equal. I’ve talked a lot about HD scaling comparisons, but the outcome really depends on the scaler you use. For a quick shot, sure, use what your NLE has built in. For more critical operations, you might want to scale images separately. DaVinci Resolve has excellent built-in scaling and lets you pick from smooth, sharp and bilinear algorithms. If you want a plug-in, the best I’ve found is the new Red Giant Instant 4K filter. It’s a variation of their Instant HD plug-in and works in After Effects and Premiere Pro. There are a lot of quality tweaks and, naturally, the better the result, the longer the render. Nevertheless, it offers outstanding results and, in one test that I ran, it actually provided a better look within portions of the image than the native 4K shot.

In that case, it was a C500 shot of a woman on a park bench with a name badge. I had three identical versions of the shot (not counting the raw files) – the converted 4K ProRes 4444 file, a converted 1080 ProRes 4444 “proxy” file for editing and the in-camera 1080 Canon XF file. I blew up the two 1080 shots using Instant 4K and cropped the 4K shot so all were of equal framing. When I compared the native 4K shot to the expanded 1080 ProRes 4444 shot, the woman’s hair was sharper in the 1080 blow-up, but the letters on the name badge were better on the original. The 1080 Canon XF blow-up was softer in both areas. I think this shows that some of the controls in the plug-in may give you results superior to the original (crisper hair), but a blow-up suffers when you are using a weaker codec, like Canon’s XF (50 Mbps 4:2:2). It’s fine for native HD, but the ProRes 4444 codec has twice the chroma resolution and less compression, which makes a difference when scaling an image larger. Remember, all of this pertains to viewing the image in HD.
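
The chroma difference between those codecs is easy to quantify. A small sketch of the per-row sample counts (the subsampling ratios are standard; the framing of the example is mine):

def chroma_samples_per_row(luma_samples, scheme):
    # J:a:b notation - horizontal chroma subsampling factor.
    horizontal_factor = {"4:4:4": 1, "4:2:2": 2}[scheme]
    return luma_samples // horizontal_factor

print(chroma_samples_per_row(1920, "4:4:4"))  # 1920 - ProRes 4444: full chroma
print(chroma_samples_per_row(1920, "4:2:2"))  # 960  - Canon XF: half the
                                              # horizontal chroma resolution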

4K deliverables

So what about working in native 4K for a 4K deliverable? That certainly has validity for high-resolution projects (films, concerts, large corporate presentations), but I’m less of a believer for television and web viewing. I’d rather have “better” pixels than simply “more” pixels. Most of the content you watch at theaters using digital projection is 2K playback, and the master for that DCP may have been HD, 2K or 4K. If you are in a theater equipped with a Sony 4K projector, most of the time it’s simply the projector upscaling the content to 4K as part of the projection. Even though you may see a Sony 4K logo at the head of the trailers, you aren’t watching 4K content – definitely not if it’s a stereo 3D film. Yet much of this looks pretty good, doesn’t it?

Everything I said about blowing up HD by 120% or more still applies in 4K. Need to blow up a shot a bit in a 4K timeline? Go ahead, it will look fine. I think ARRI has proven this as well, taking films shot with the ALEXA all the way up to IMAX. In fact, ARRI just announced that the AMIRA will get in-camera, on-the-fly upscaling of its image, with the ability to record 4K (3840 x 2160 at up to 60fps) on CFast 2.0 cards. They can do this because the sensor starts with more pixels than HD or 2K. The AMIRA will expose all of the available photosites (about 3.4K of sensor width) in what ARRI calls the “open gate” method. This image is lightly cropped to 3.2K and then scaled by a factor of 1.2, which results in UltraHD 4K recording on the same hardware. Pretty neat trick and, judging by ARRI’s image quality, I’ll bet it will look very good. Doubling down on this technique, the ALEXA XT models will also be able to record ProRes media at this 3.2K size. In the case of the ALEXA, the designers have opted to leave the upscaling to post, rather than doing it in-camera.
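
The AMIRA numbers check out with simple arithmetic (my back-of-the-envelope reading of the published figures):

open_gate_width = 3414     # roughly "3.4K" of photosites (approximate figure)
cropped_width = 3200       # lightly cropped to 3.2K
scale_factor = 1.2
print(cropped_width * scale_factor)   # 3840.0 - exactly the UltraHD width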

To conclude, if you are working in 4K today, then by all means continue to do so. It’s a great medium with a lot of creative benefits. If you aren’t working in 4K, don’t sweat it. You won’t be left behind for a while, and there are plenty of techniques to get you to the same end goal as much of the 4K production that’s going on.

©2014 Oliver Peters

HP Z1 G2 Workstation

Hewlett-Packard is known for developing workstations that set a standard for reliability and performance, exemplified by the Z-series workstation towers. HP has sought to extend what it calls the “Z experience” to other designs, like mobile and all-in-one computers. The latest of these is the HP Z1 G2 Workstation – the second-generation model of the Z1 series.

Most readers will associate the all-in-one concept with the Apple iMac. Like the iMac, the Z1 G2 is a self-contained unit housing all electronics and the display in one chassis. Whereas the top-end iMacs are targeted at advanced consumers and pros with less demanding computing needs, the HP Z1 G2 is strictly for the serious user who requires advanced horsepower. The iMac is a sealed unit that cannot be upgraded by the user (except for RAM) and is largely configured with laptop-grade parts. In contrast, the HP Z1 G2 is a Rolls-Royce. The build is very solid and it exudes a sense of performance. The user has the option to configure a Z1 G2 from a wide range of components, and the display lifts like a car hood for easy access to the “engine”, making user upgrades nearly as easy as on a tower.

Configuration options

The HP Z1 G2 offers processor choices that include Intel Core i3, Core i5 and three Xeon models. There are a variety of storage and graphics card choices, and it supports up to 32GB of RAM. You may also choose between a Touch and a non-Touch display. The Touch screen adds a glass overlay and offers finger or stylus interaction; non-Touch screens have a matte finish, while Touch screens are glossy. You have a choice of operating systems, including Windows 7, Windows 8 and Linux distributions.

I was able to specify the built-to-order configuration of the Z1 G2 for my review. This included a quad-core Xeon E3 (3.6GHz), 16GB of RAM, an optical drive and the NVIDIA K4100M graphics card. For storage, I selected a 256GB mSATA boot drive (“flash” storage), plus two 512GB SSDs set up in a RAID-0 configuration. I also ordered the Touch option with 64-bit Windows 8.1 Pro. Z1 G2 models start at $1,999; however, as configured, this system would retail at over $6,100, including a 20% eCoupon promotional discount.

An important new feature is support for Thunderbolt 2 via an optional module. HP is one of the first PC manufacturers to support Thunderbolt. I didn’t order that module, but reps from AJA, Avid and Blackmagic Design all confirmed to me that their Thunderbolt units should work fine with this workstation, as long as you install their Windows device drivers. One of these units would be required for any external broadcast or grading monitor.

In addition to the custom options, the Z1 G2 includes wireless support, four USB 2.0 ports, two USB 3.0 ports, Gigabit Ethernet, a DisplayPort connector for a secondary computer monitor, S/PDIF, analog audio connectors, a webcam and a media card reader.

Arrival and set-up

The HP Z1 G2 ships as a single 57-pound package, complete with a wireless mouse and keyboard. The display/electronics chassis is attached to an adjustable arm that connects to the base. This allows the system to be tilted at any angle, as well as laid completely flat for shipping and access to the electronics. It locks into place when it’s flat (as in shipping), so you have to push down lightly on the display in order to release the latch button.

The display features a 27” (diagonal) screen, but the chassis is actually 31” corner-to-corner. Because the stand has to support the unit and counterbalance the weight at various angles, it sticks out about 12” behind the back of the chassis. Some connectors (including the power cord) are at the bottom center of the back of the chassis; others are along the sides. The adjustable arm allows any angle from vertical to horizontal, so it would be feasible to work standing or from a high chair, looking down at the monitor – a bit like a drafting table. I liked the fact that the arm lets you drop the display all the way down to the desk surface, which put the bottom of the screen lower than my stationary 20” Apple Cinema displays.

First impressions

I picked the Touch option in order to test the concept, but quite frankly I decided it wasn’t for me. In order to control items by touch, you have to be a bit closer than the full length of your arm. As a glasses-wearer, this distance is uncomfortable, as I prefer to be a little farther away from a screen of this size. Although the touch precision is good, it’s not as precise as a mouse or a pen and tablet – even when using an iPad stylus. In Photoshop – an application that seems natural for Touch – only menu and navigation operations worked, but no drawing tools. While the Touch option wasn’t that interesting to me, I did like the screen that comes with it. It’s glossy, which gives your images nice density, but not so reflective as to be annoying in a room with ambient lighting.

The second curiosity item for me was Windows 8.1. The Microsoft “Metro” look has been maligned and many pros opt for Windows 7 instead. I actually found the operating system to function well, and the “flat” design philosophy is much like what Apple is doing with Mac OS X and iOS. The tiled Start screen that highlights this release can easily be avoided when you set up your preferences. If you prefer to pin application shortcuts to the Windows task bar or the Desktop, that’s easily done. Once you are in an application like Premiere Pro or Media Composer, the OS differences tend to disappear anyway.

Since I had configured this unit with an mSATA boot/applications drive and RAID-0 SSDs for media, launching and operating any application was very fast. Naturally, the difference in cold-start time between the Z1 G2 and my 2009 Mac Pro with standard 7200RPM drives was night and day. With most actual operations, the differences in application responsiveness were less dramatic.

One area that I think needs improvement is screen calibration. The display is not a DreamColor panel, but color accuracy seems quite good and it’s very crisp at 2560 x 1440 pixels. Unfortunately, both the HP and NVIDIA calibration applications were weak, using consumer-level nomenclature for their settings. For instance, based on how the sliders were labeled, I found no way to accurately set a 6500K color temperature or a 2.2 gamma level. Some of the NVIDIA software controls didn’t appear to work at all.
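
For reference, the two targets I was hunting for are well defined: a D65 (roughly 6500K) white point and a 2.2 display gamma, meaning light output proportional to the signal raised to the power of 2.2. A minimal sketch of that gamma relationship:

def display_output(signal, gamma=2.2):
    """Relative light output for a normalized 0..1 signal at a given
    display gamma - the value I couldn't set with the bundled tools."""
    return signal ** gamma

print(round(display_output(0.5), 3))   # 0.218 - a mid-gray signal yields
                                       # only ~22% of full luminance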

Performance stress testing

I loaded up the Z1 G2 with a potpourri of media and applications, including Adobe CC 2014 (Photoshop, Premiere Pro, After Effects, SpeedGrade), Avid Media Composer 8, DaVinci Resolve 11 Lite (beta) and Sony Vegas Pro 13. Media included Sony XAVC 4K, Avid DNxHD175X, Apple ProRes 4444, REDCODE raw from an EPIC Dragon camera and more. This allowed me to make direct comparisons with the same applications and media on my 2009 eight-core Mac Pro, configured with dual quad-core Xeon processors (2.26GHz), 28GB of RAM, an ATI 5870 GPU card and a RAID-0 stripe of two internal 7200RPM spinning hard drives. No I/O devices were installed on either computer. While these two systems aren’t exactly “apples-to-apples”, the older machine provides a logical benchmark for the type of system a new Z1 G2 customer might be upgrading from.

In typical side-by-side testing with edited single-layer timelines, most applications on both machines performed in a similar fashion, even with 4K media. It’s when I started layering sequences and comparing performance and render times that the differences became obvious.

My first test compared Premiere Pro CC 2014 on a 7-layer 4K timeline. The V1 track was a full-screen base layer of Sony XAVC. On top of that I layered six tracks of picture-in-picture (PIP) clips consisting of RED Dragon raw footage at various resolutions up to 5K. Some clips were recorded with in-camera slomo. I applied color correction, scaling/positioning and a drop shadow. The 24p timeline was one minute long and was exported as a 4K .mp4 file. The HP handled this task in just under 11 minutes, compared with almost two hours for the Mac Pro.

My second Premiere Pro test was a little more “real world” – a 48-second sequence of ARRI Alexa 1080p ProRes 4444 log-C clips. These were round-tripped through SpeedGrade to add a Rec 709 LUT, a primary grade and two vignettes to blur and darken the outer edge of the clips. This sequence was exported as a 720/24p .mp4 file. The Z1 G2 tackled this in about 14 minutes compared with 37 minutes for the Mac Pro.

Premiere Pro CC 2014 uses GPU acceleration, and the superior performance of the NVIDIA K4100M in the HP versus the ATI 5870 in the Mac Pro is likely the reason for this drastic difference. The render times were closer in After Effects, which makes less use of the GPU for effects processing. My six-layer After Effects stress test was an 8-second composition consisting of six layers of 1080p ProRes clips from the Blackmagic Cinema Camera. I applied various Cycore and color correction effects and then moved the layers in 3D space with motion blur enabled. These were rendered out using the QuickTime Animation codec. Times for the Z1 G2 and Mac Pro were 6.5 minutes versus 8.5 minutes, respectively.

My last test for the HP Z1 G2 involved Avid Media Composer. My 10-layer test sequence included nine PIP video tracks (using the 3D Warp effect) over a full-screen background layer on V1. All media was Avid DNxHD175X (1080p, 10-bit, 23.976fps). No frames were dropped at medium display quality, but at full quality, frames started to drop at V6. When I added a drop shadow to the PIP clips, frames dropped starting at V4 at full quality and V9 at medium quality.

Conclusion

The HP Z1 G2 is an outstanding workstation. Like any alternative form factor, you have to weigh the options of legacy support for older storage systems and PCIe cards. Thunderbolt addresses many of those concerns as an increasing number of adapters and expansion units hits the market. Those interested in shifting from Mac to Windows – and looking for the best in what the PC side has to offer – won’t go wrong with HP products. The company also maintains close ties to Avid and other software vendors, to make sure the engineering of their workstations matches the future needs of the software.

Whether an all-in-one is right for you comes down to individual needs and preferences. I was very happy with the overall ease of installation, operation and performance of the Z1 G2. By adding MacDrive, QuickTime and ProRes software and codecs, I could easily move files between the Z1 and my Mac. The screen is gorgeous, it’s very quiet and the heat output feels less than from my Mac tower. In these various tests, I never heard any fans kick into high. Whether you are upgrading from an older PC or switching platforms, the HP Z1 G2 is definitely worth considering.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

Sony Vegas Pro 13

If you are looking for an easy-to-use editing application that’s optimized for a Windows workstation, one option is the Vegas Pro family from Sony Creative Software. There are several configurations, including Vegas Pro 13 Edit, Vegas Pro 13 and Vegas Pro 13 Suite. The big difference among these is the selection of Sony and third-party tools that come with each bundle. The Edit version is mainly the NLE software. The standard Vegas Pro 13 package adds a Dolby Digital Professional encoder, DVD Architect Pro 6, the NewBlueFX Video Essentials VI plug-in collection and Nectar Elements from iZotope. All three products include CALM Act-compliant loudness metering and the HitFilm video plug-in collection from FXHOME. The Suite bundle adds Sound Forge Pro 11 (a file-based audio editor), HitFilm 2 Ultimate (a separate compositing application), Vegas Pro Production Assistant and 25 royalty-free music tracks.

Vegas Pro is a 64-bit application that requires a 64-bit version of Windows 7, 8 or 8.1. In my testing, I installed it on a Xeon-powered HP Z1 G2 configured with Windows 8.1, an NVIDIA K4100M GPU and 16GB of RAM. I didn’t have a video I/O device connected, so I wasn’t able to test that, but Vegas Pro supports AJA hardware and various external control surfaces. If you’ve ever used a version of Vegas Pro in the past, then Vegas Pro 13 will feel comfortable. For those who’ve never used it, the layout might be a bit of a surprise compared with other NLE software. In spite of its power, Vegas is definitely a niche product in the market, but fans of the software are as loyal to it as those on the Mac side who love Final Cut Pro X.

Vegas Pro 13 supports a wide range of I-frame and long-GOP video codecs, including many professional and consumer media formats. For those moving into 4K, Vegas Pro 13 supports XAVC (used by the F55) and XAVC-S, a format used in Sony’s 4K prosumer cameras. Other common professional formats supported include Panasonic P2 (AVC-Intra), Sony XDCAM, HDCAM-SR, ProRes (requires ProRes for Windows and QuickTime installed) and REDCODE raw. 4K timeline support goes up to a frame size of 4096 x 4096 pixels. As an application with deep roots in audio, the list naturally includes most audio formats, as well.

What’s new

Fans of Vegas Pro will find a lot in version 13 to justify an upgrade. One item is Vegas Pro Connect, an iPad companion application designed for review and approval. It features online and offline modes for reviewing and adding comments to a Vegas Pro project. There’s also a new “proxy-first” workflow. For example, videographers shooting XDCAM can use the Sony Wireless Adapter to send camera proxies to the cloud. While the XDCAM discs are being shipped back to the facility, editors can download the proxies and start cutting. When the high-resolution media arrives, the project automatically relinks to it. Vegas Pro 13 also adds a project archive feature to back up projects and associated media.

The plug-ins have been expanded in this release by bundling in new effects from NewBlueFX, FXHOME and iZotope. The video effects include color modification, keying, bleach bypass, light flares, TV damage and a number of other popular looks. These additions augment Vegas Pro’s extensive selection of Sony audio and video effects. Vegas supports the VST audio plug-in and OpenFX (OFX) video plug-in formats, which means compatible plug-ins installed for other applications on your system can be detected and used. For example, the FXHOME HitFilm plug-ins also showed up in the Resolve 11 Lite (beta) I had installed on this computer, because both applications share the OFX architecture.

Given its audio heritage, Vegas Pro 13 includes a comprehensive audio mixer. New with this release is the inclusion of iZotope Nectar Elements, a single audio plug-in designed for one-click voice processing. Another welcome addition is a loudness meter window to measure levels and mixes in order to be compliant with the CALM Act and EBU R-128.

Putting Vegas Pro 13 through the paces

One big selling point of version 13 is GPU acceleration based on OpenCL, supporting NVIDIA, AMD and Intel graphics. This becomes especially important when dealing with 4K formats. The performance advances are most noticeable once you start layering video tracks. Certainly, working with 4K XAVC, RED EPIC Dragon and 1080p ProRes 4444 media was easy. Scrubbing and real-time playback never caused any issues. The Vegas Pro preview window lets you manually or automatically adjust preview quality to maintain real-time playback. If you are a RED user, you’ll appreciate access to the R3D decode properties. The Z1 G2 felt very responsive working with native RED camera media.

Many editors take a while to get comfortable with Vegas Pro’s interface. Vegas started life as multi-track audio software (a DAW) and the layout and track design stem from that. Each video and audio track is designed like a mixing board channel strip: you have read/touch/latch automation control, a plug-in chain and a level slider. With audio you also get panning and a meter. With video, you get a spatial control, a parent/child track hierarchy control (for track grouping) and a compositing mode. Many of the functions can be manipulated in real time while the timeline is playing. This may seem obvious when writing audio levels in an automated mixing pass; it’s more unusual for video. For example, you can do the same for video opacity – writing a real-time pass of opacity level changes on the fly, simply by adjusting the video level fader as the timeline plays.

Once you get deeper into Vegas, you’ll find quite a few surprises. For example, it supports stereoscopic workflows, the Title Generator effects include numerous animated text templates and, together with DVD Architect, you have a solid Blu-ray Disc authoring system. Unfortunately, there were also a few things I wanted to test that simply didn’t work. Vegas Pro 13 is supposed to be able to import and export a range of project files, including XML, AAF, FCPXML, Premiere projects, etc. I attempted to import XML, FCPXML and Premiere Pro project files, but came up empty each time. I was never able to export an FCPXML file. I was able to export FCP 7 XML and Premiere project files, but the Premiere file crashed Premiere Pro CC 2014 on both my Mac and this test PC. The FCP 7 XML did work in Premiere Pro, though. I tried to bring an XML into Final Cut Pro X using the 7toX translation utility, but FCP X was unable to relink to the media files. So, while this should be a great feature, it seems to be a work in progress at this point.

It was hard for me to warm up to the interface itself. While it’s very fast to operate, Vegas Pro is still designed like an audio application and so is very different from most traditional NLEs. For example, double-clicking a clip edits it straight to the timeline by default. To first send it to the source viewer in order to select in and out points, you have to use the “Open in Trimmer” command. Fortunately, there is a preference setting to flip this behavior. Vegas Pro projects contain only a single timeline – also referred to as the project (as in FCP X). You cannot have multiple timelines within a single production; however, you can have more than one instance of Vegas Pro open at the same time and switch between them using the Windows task bar. It is also possible to edit a .veg (Vegas Pro project) file to the timeline, which gives you the same result as nesting one timeline inside another in other NLE software.

Speaking of the interface, the application badly needs a redesign. It looks like it’s still from the Windows 98 world. Some people appreciate starkness – and I know this probably helps the application’s speed – but if you’re going to stare at a screen all day long, it should look a bit more elegant. Even Sony’s Sound Forge Pro for the Mac, which shares a similarly stark design, is cleaner and feels more modern. Plus, it’s very bright; in fact, disabling the Vegas theme in preferences makes it even more painfully bright. It would be great if Vegas Pro had a UI brightness slider, like Adobe has offered for years.

Conclusion

Sony’s Vegas Pro 13 is a useful application with a lot of power for users at all levels. At only a few hundred dollars, it’s a strong application suite to have in your Windows toolkit, even if you prefer other NLEs. The prime reasons are the wide codec support and easy 4K editing. If that’s how you use it, then the interface issues I mentioned won’t be a big deal.

On the other hand, if you’re an experienced Vegas Pro user and happy with it as is, then version 13 is a worthy upgrade, especially on a high-end machine. It’s fast, efficient and gets the job done. If Sony fixes the import/export problems I encountered, Vegas Pro could become indispensable.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

Red Giant Universe

Red Giant Software, developer of such popular effects and editing tools as Trapcode and Magic Bullet, recently announced Red Giant Universe. Red Giant has adopted a hybrid free/subscription model. Once you sign into Universe with a Red Giant account, you have access to all the free filters and transitions that are part of the package. Initially this includes 31 free plug-ins (22 effects, 9 transitions) and 19 premium plug-ins (12 effects, 7 transitions). Universe users have a 30-day trial period before the premium effects become watermarked. Premium membership pricing is $10/month, $99/year or $399/lifetime. Lifetime members will receive routine updates without any further cost.
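
The tiers are easy to compare with a little arithmetic (list prices as quoted above):

monthly, yearly, lifetime = 10, 99, 399

print(yearly / monthly)    # 9.9  - the annual plan beats about ten
                           # months of monthly billing
print(lifetime / yearly)   # ~4.0 - lifetime breaks even at roughly
                           # four years of annual renewals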

A new approach to a fresh and growing library of effects

The general mood among content creators has been against subscription models; however, when I polled thoughts about the Universe model on one of the Creative COW forums, the comments were very positive. From Red Giant’s early press on Universe, I had gotten the impression that Universe would be an environment in which users could create their own custom effects. In fact, this isn’t the case at all. The Universe concept is built on Supernova, an internal development tool that Red Giant’s designers use to create new effects and transitions. Supernova draws from a library of building-block filters that can be combined to create new plug-ins. This is somewhat like Apple’s Quartz Composer development tool; however, it is not part of the package that members can access.

Red Giant plans to build a community around Universe members, who will have some input into the types of new plug-ins created. These plug-ins will only be generated by Red Giant designers and partner developers. Currently they are working with Crumplepop, with whom they created Retrograde – one of the premium plug-ins. The point of being a paid premium member is to continue receiving routine updates that add to the repertoire of Universe effects that you own. In addition, some of the existing Red Giant products will be ported to Universe in the future as new premium effects.

This model is similar to what GenArts had done with Sapphire Edge, which was based on an upfront purchase plus a subscription for updated effects “collections” (essentially new preset versions of an Edge plug-in). These were created by approved designers and added to the library each month. (Note: Sapphire Edge – or at least the FX Central subscription – appears to have been discontinued this year.) Unlike the Sapphire Edge collections, the Universe updates are not limited to presets, but will include brand new plug-ins. Red Giant tells me they already have several dozen in the development pipeline.

Red Giant Universe supports both Mac and Windows and runs in recent versions of Adobe After Effects, Premiere Pro, Apple Final Cut Pro X and Motion. For now, Universe doesn’t support Avid, Sony Vegas, DaVinci Resolve, EDIUS or Nuke hosts. Members will be able to install the software on two computers, and a single installation of Universe installs the effects into all applicable hosts, so only one purchase is necessary for all of them.

Free and premium effects with GPU acceleration

In this initial release, the range of free effects covers many standards, including blurs, glows, distortion effects, generators and transitions. The premium effects include some that have been ported over from other Red Giant products, including Knoll Light Factory EZ, Holomatrix, Retrograde, ToonIt and others. In case you are concerned about duplication if you’ve already purchased some of these effects, Red Giant answers this in their FAQ: “We’ve retooled the tools. Premium tools are faster, sleeker versions of the Red Giant products that you already know and love. ToonIt is 10x faster. Knoll Light Factory is 5x faster. We’ve streamlined [them] with fewer controls so you can work faster. All of the tools work seamlessly with [all of the] host apps, unlike some tools in the Effects Suite.”

The big selling point is that these are high-quality, GPU-accelerated effects, which use 32-bit float processing for trillions of colors. Red Giant is using OpenGL rather than OpenCL or NVIDIA’s CUDA technology, because it is easier to support across various graphics cards and operating systems. The recommendation is to have one of the newer, faster NVIDIA or AMD cards or mobile GPUs; the minimum GPU is an Intel HD 3000 integrated graphics chip. According to Red Giant, “Everything is rendered on the GPU, which makes Universe up to 10 times faster than CPU-based graphics. Many tools use advanced render technology that’s typically used in game development and simulation.”

In actual use

After Universe is installed, updates are managed through the Red Giant Link utility, which now keeps track of all Red Giant products you have installed (along with Universe) and lets you update as needed. The effects themselves are nice and the quality is high, but so far these are largely standard effects. There’s nothing major yet that isn’t already represented by a similar effect within the built-in filters and transitions that come as part of FCP X, Motion or After Effects. Obviously, there are subjective differences between one company’s “bad TV” or “cartoon” look and another’s, so whether or not you need any additional plug-ins becomes a personal decision.

As far as GPU acceleration is concerned, I do find the effects to be responsive when I adjust them and preview the video. This is especially true in a host like Final Cut Pro X, which is really tuned for the GPU. For example, adding and adjusting a Knoll lens flare from the Universe package performs better on my 2009 Mac Pro (8-core with an NVIDIA Quadro 4000) than the other third-party flare filters I have available on this unit.

The field is pretty crowded when you stack Universe up against such established competitors as GenArts Sapphire, Boris Continuum Complete, Noise Industries FxFactory Pro and others. As yet, Universe does not offer any tools that fill in workflow gaps, like tracking, masking or even keyers. I’m not sure the monthly subscription makes sense for many customers. It would seem that free will be attractive to many, while an annual or lifetime subscription will be the way most users purchase Universe. The lifetime price lines up well when you compare it to the others in terms of purchasing a filter package.

Red Giant Universe is an ideal package of effects for editors. While Apple has developed a system with Motion where any user can create new FCP X effects based on templates, the reality is that few working editors have the time or interest to do that. They want effects that can be quickly applied with a minimum amount of tweaking and that perform well on a timeline. This is what impresses clients and what wins editors over to your product. With that target in mind, Red Giant will definitely do well with Universe if it holds to its promise. Ultimately the success of Universe will hinge on how prolific the developers are and how quickly new effects come through the subscription pipeline.

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

Adobe Anywhere

Adobe Anywhere for video is Adobe’s first foray into collaborative editing. Anywhere functions a lot like other shared storage environments, except that editors and producers are not bound to working within the facility and its hard-wired network. The key difference between Adobe Anywhere and other NLE/SAN combinations is that all media is stored at the central location and the system’s servers handle the actual editing and compositing functions of the editing software. This means no media is stored on the editor’s local computer, and lightweight client stations can be used, since the required horsepower exists at the central location. Anywhere works within a facility using the existing LAN, or externally over the internet when client systems connect remotely via VPN. Currently Adobe Anywhere is integrated directly into Adobe Premiere Pro CC and Prelude CC (Windows and OS X). Early access to After Effects integration is part of Adobe Anywhere 1.6, with improved integration available in the next release.

The Adobe Anywhere cluster

Adobe Anywhere software is installed on a set of Windows servers – general-purpose server computers that you would buy from a vendor like Dell or HP. The software creates two types of nodes: a single Adobe Anywhere Collaboration Hub node and three or more Adobe Mercury Streaming Engine nodes. Each node is installed on a separate server, so a minimum configuration requires four computers. This is separate from the shared storage. If you use a SAN, such as a Facilis Technology or EditShare system, it is mounted at the OS level by the cluster of Anywhere servers. Local and remote editors can upload source media to the SAN for shared access via Anywhere.

The Collaboration Hub computer stores all of the Anywhere project metadata, manages user access and coordinates the other nodes in the system. The Mercury Streaming Engine computers provide real-time, dynamic viewing streams of Premiere Pro and Prelude sequences with GPU-accelerated effects. Media stays in its native file format on the storage servers. There are no proxy files created by the system. In order to handle real-time effects, each of the Streaming Engine servers must be equipped with a high-end NVIDIA graphics card.

As a rule of thumb, this minimum cluster size supports 10-15 active users, according to Adobe. However, the actual number depends on media type, resolution and the number of simultaneous source clips needed per editor, as well as automated activities like import and export. Adobe prices the Anywhere software based on the number of named users, as a subscription of $1,000/year/user. That’s in addition to installed seats of Creative Cloud and the cost of the hardware to make the system work, which is supplied by other vendors, not Adobe. Since this is not sold as a turnkey installation by Adobe, certain approved vendors, like TekServe and Keycode Media, have been qualified as Adobe Anywhere system integrators.

How it works

While connected to Adobe Anywhere and working with an Anywhere project, the Premiere Pro or Prelude application on the local computer is really just a software front end driving the application running back at the server. The result of the edit decisions is streamed back to the local machine in real time as a single stream of video. The live stream from the Mercury Streaming Engine is handled in a similar fashion to the playback resolution throttle that’s already part of Premiere Pro. As native media is played, the system adjusts the stream’s compression based on available bandwidth. Whenever playback is paused, the parked frame is updated to full resolution – enabling an editor to tweak an effect or composite while always seeing the full-resolution image.

To understand this better, let’s use the example of a quad split. If this were done locally, the drives would play back four streams of video, and the software and GPU of the local computer would composite the quad split and present a single stream of video to the viewer display. With Adobe Anywhere, the playback of these four streams and the compositing of the quad split take place on the Mercury Streaming Engine computer, which streams the live composite back to the remotely connected computer as a single video feed. Since all the “heavy lifting” is done at “home base”, the system requirements for the client machine can be far more modest. In theory, you could be editing RED EPIC 5K footage on a MacBook Air.
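
Conceptually, the client sees only the final composite, compressed to fit the pipe while playing and refined to full quality when parked. A toy sketch of that behavior (the thresholds are invented for illustration; this is not Adobe’s actual protocol or API):

def stream_quality(playing: bool, bandwidth_mbps: float) -> str:
    """Pick the quality of the single stream returned to the client."""
    if not playing:
        return "full"       # parked frames update to full resolution
    if bandwidth_mbps >= 50:
        return "high"
    if bandwidth_mbps >= 10:
        return "medium"
    return "low"

for state in [(True, 80.0), (True, 12.0), (True, 4.0), (False, 4.0)]:
    print(state, "->", stream_quality(*state))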

Productions

Another difference with Adobe Anywhere is that instead of Premiere Pro or Prelude project files, users create shared productions, designed for multi-user and multi-application access. This way, a collaborating team is set up as a workgroup with assigned permission levels. Media is common and centralized to avoid duplication. Any media added on-site is uploaded to the production in its native resolution and becomes part of the production’s shared assets. The Collaboration Hub computer manages the database for all productions.

When a user remotely logs into an Adobe Anywhere production, media to which he or she has been granted access is available for browsing via Premiere Pro’s standard Media Browser panel. When an editor starts working, Anywhere automatically makes a virtual “clone” of his or her production items and opens them in a private session. Because multiple people can be working in the same production at the same time, Adobe Anywhere provides protection against conflicts and overwrites. In order to share your private changes, you must first get any updates from the shared production. This pulls all shared changes into your private view. If another person has changed the same asset you are working on, you are given information about the conflict and the opportunity to keep the other person’s changes, your changes or both. Once you make your choices, you can then transfer your changes back to the shared production. Anywhere also maintains a version history, so if unwanted changes are made, you can revert to an earlier or alternate version.
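
If this sounds like version control for editors, that’s essentially the model. A hypothetical sketch of the flow described above (illustrative names only, not Adobe’s API):

class Production:
    def __init__(self):
        self.shared = {}      # asset -> latest shared state
        self.history = []     # version history, for reverts

def resolve(mine, theirs):
    # Stand-in for the interactive choice: keep mine, theirs or both.
    return mine

class PrivateSession:
    """Each editor works against a virtual 'clone' of the production."""
    def __init__(self, production):
        self.production = production
        self.changes = {}     # asset -> my private edits

    def share(self):
        # You must pull shared changes before sharing your own;
        # conflicting assets surface for a per-asset decision.
        for asset, theirs in self.production.shared.items():
            mine = self.changes.get(asset)
            if mine is not None and mine != theirs:
                self.changes[asset] = resolve(mine, theirs)
        self.production.shared.update(self.changes)
        self.production.history.append(dict(self.production.shared))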

Adobe Anywhere in the wild

Although large installations like CNN make for great publicity headlines, Adobe Anywhere is proving to be useful at smaller facilities, too. G-Men Media is a production company based in Venice, California, focused primarily on feature film and commercial broadcast work. According to G-Men COO Jeff Way, “G-Men was originally founded with the goal of utilizing the latest digital technologies available to reduce costs, accelerate workflow and minimize turnaround time for our clients. Adobe Anywhere allowed us to provide our clients a more efficient workflow on post productions without having to grow infrastructure on a per project basis.”

“A significant factor of Adobe Anywhere, which increased the growth of our client base, was the system’s ability to organize production teams based on talent instead of location. If we can minimize or eliminate time required for coordinating actual production work (i.e. shipping hard drives, scheduling meetings with editors, awaiting review/approval), we can save clients money that they can then invest into more creative aspects of the project – or simply undercut their budget. Furthermore, we have the ability to scale up or down without added expenses in infrastructure. All that’s required on our end is simply granting the Creative Cloud seat access to the system assets for their production.”

The G-Men installation was handled by Keycode Media, based on the recommended Adobe configuration described at the beginning of this article. This includes four SuperMicro 1U rack-mounted SuperServers. Three of these operate as Adobe Anywhere Mercury Streaming Engines and the fourth acts as the Adobe Anywhere Collaboration Hub. Each of the Mercury Streaming Engines has its own NVIDIA Tesla K10 GPU card. The servers are connected to a Facilis TerraBlock shared storage array via a 10 Gigabit Ethernet switch. Their internet feed is a fiber optic connection, typically operating at 500Mbps down / 150Mbps up. G-Men has used the system on every project since it went live in August of 2013. Noteworthy was its use for post on Savageland – the first feature film to run through an Adobe Anywhere system.

Way continued, “Savageland ended up being a unique situation and the ultimate test of the system’s capabilities. Savageland was filmed over three years with various forms of media from iPhone and GoPro footage to R3D raw and Canon 5D. It was really a matter of what the directors/producers could get their hands on from day-to-day. After ingesting the assets into our system, we were able to see a fluid transition straight into editing without having to transcode media assets. One of the selling factors of gaining Savageland as a client was the flexibility and feasibility of allowing all of the directors and editors (who lived large distances from each other in Los Angeles) to work at their convenience. The workflow for them changed from setting aside their weekends and nights for review meetings at a single location to a readily available review via their MacBooks and iPads.”

“For most of our clients, the system has allowed them to bring on the editorial talent they want without having to worry about the location of the editor. At the same time, the editors enjoyed the flexibility of working from wherever they wanted – many times out of their own homes. The benefit for editors and directors is the capability to remotely collaborate and provide feedback immediately. We’ve had a few productions where there are more than one editor working on the same assets – both creating different versions of the same edit. At the same time we had a director viewing the changes immediately after they were shared, with notes on each version. Then they had the ability to immediately make a decision on one or the other or provide creative feedback, so the editors could immediately apply the changes in real time.”

G-Men is in production on Divine Access, a feature film being shot in Austin, Texas. Way explained, “We’re currently in Austin beginning principal photography. Knowing the cloud-based editing workflows available to us, we wanted to expand the benefits we are gaining in post to the entirety of a feature film production from first location scout to principal photography and all the way through to delivery. We’re using our infrastructure to ingest and begin edits as we shoot, which is really new and exciting to all of the producers working on the film. With the upload speeds we have available to us, we are able to provide review/approvals to our director the same day.”

Originally written for Digital Video magazine / CreativePlanetNetwork.

©2014 Oliver Peters

Apple’s New Mac Pro

The run of the brushed aluminum tower design that defined Apple’s PowerMac G5 and Intel Mac Pros ended with the introduction of a radical replacement in late 2013. No matter what the nickname – “the cylinder”, “the tube” or whatever – Apple’s new 2013 Mac Pro is a tour de force of industrial design. Few products have had such pent-up demand. The lead times for custom machines originally ran to months, but by now, with accelerated production, they have been reduced to 24 hours. And if you are happy with a stock configuration, it’s possible to walk out with a new unit the same day at some Apple Store or reseller retail locations.

Design

The 2013 Mac Pro features a cylindrical design. It’s about ten inches tall, six-and-a-half inches in diameter and, thanks to very dense component construction, weighs about eleven pounds. The outer shell – actually a sleeve that can be unlocked and lifted off – uses a dark (not black) reflective coating. Internally, the circuits are mounted onto a triangular core. A central vent system draws air in through the bottom and out through the top, much like a chimney. You can still mount the Mac Pro sideways without issue, as long as the vents are not blocked. This design keeps the unit quiet and cool most of the time. During my tests, the fan noise was quieter than my tower (generally a pretty quiet unit) and the fans never kicked into high.

Despite the small size, all components are workstation class, not the mobile or desktop parts used in Apple’s laptops and iMacs. It employs the fastest memory and storage of any Mac and is designed to pick up where the top-of-the-line iMac leaves off. The processors are Intel Xeons instead of Core i5 or Core i7 CPUs, and the graphics cards are AMD FirePro GPUs. The Xeon here is a single multicore CPU, with four processor options offered (4, 6, 8 and 12-core), ranging in speed from 3.7GHz (4-core) to 2.7GHz (12-core). RAM can be maxed out at a full 64GB and is the only component of the Mac Pro where a user-installed, third-party upgrade is an easy option.

The Mac Pro is optimized for dual graphics processors, with three GPU choices: D300 (2GB VRAM each), D500 (3GB VRAM each) or D700 (6GB VRAM each). Internal storage is PCIe-based flash memory in 256GB, 512GB or 1TB configurations. These are not solid state drives (SSDs), but rather flash storage like that used in iPads, connected directly to the PCIe bus for the fastest possible data I/O. The stock models start at $2,999 (4-core) and $3,999 (6-core).

Apple shipped me a reviewer’s unit, configured in a way that they feel is the “sweet spot” for high-end video. My Mac Pro was the 8-core model, with 32GB of RAM, dual D700 GPUs and 512GB of storage. This configuration, with a keyboard, mouse and AppleCare extended warranty, would retail at $7,166.

Connectivity

All connectors are on the back – four USB 3.0, six Thunderbolt 2, two Gigabit Ethernet and one HDMI 1.4. There is also wireless, Bluetooth, headset and speaker support. The six Thunderbolt 2 ports are split out from three internal Thunderbolt 2 buses, with the bottom bus also handling the HDMI port.

You can have multiple Thunderbolt monitors connected, as well as a 4K display via the HDMI spigot; however, you will want to separate these onto different buses. For example, you wouldn’t be able to support two 27” Apple displays and a 4K HDMI-connected monitor on one single Thunderbolt bus. However, you can support up to six non-4K displays if you distribute the load across all of the connections. Since the Thunderbolt plug is the same as Mini DisplayPort, you can connect nearly any standard computer monitor to these ports with the proper adapter. For example, I used my 20” Apple Cinema Display, which has a DVI plug, by simply adding a DVI-to-MDP adapter.

The change to Thunderbolt 2 enables faster throughput. The first version of Thunderbolt used two 10Gb/s channels of data and video, with each channel going in an opposite direction. Thunderbolt 2 bonds these into two channels going in the same direction, for a total of 20Gb/s. You can daisy-chain Thunderbolt devices, and it is possible to combine Thunderbolt 1 and Thunderbolt 2 devices in the same chain. First-generation Thunderbolt devices (such as monitors) should go at the end of the chain, so as not to create a bottleneck.
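
The throughput math is straightforward (nominal link rates; real-world throughput will be lower):

tb1_per_direction = 10     # Gb/s available to any one Thunderbolt 1 transfer
tb2_per_direction = 20     # both channels bonded in one direction
print(tb2_per_direction / tb1_per_direction)   # 2.0 - double the usable
                                               # bandwidth for a single
                                               # stream, e.g. 4K video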

The USB 3.0 ports will support USB 1.0 and 2.0 devices, but of course with no increase in their speed. There is no legacy support for FireWire or eSATA, so if you want to connect older drives, you’ll need to invest in additional docks, adapters and/or expansion units. (Apple sells a $29 Thunderbolt-to-FireWire 800 adapter.) This might also include a USB hub; for example, I have more than four USB-connected devices on my current 2009 Mac Pro. The benefit of standardizing on Thunderbolt is that all of these peripherals will work with any of Apple’s other computers, including MacBook Pros, Minis and iMacs.

The tougher dilemma is if you need to accommodate current PCIe cards, such as a RED Rocket accelerator card, a FibreChannel adapter or a mini-SAS/eSATA card. In that case, a Thunderbolt 2 expansion unit will be required. One such solution is the Sonnet Technologies Echo Express III-D expansion chassis.

Mac Pro as your main edit system

I work in many facilities with various vintages of Mac Pro towers. There’s a wide range of connectivity needs, including drives, shared storage and peripherals. Although it’s very sexy to think about just a 2013 Mac Pro sitting on your desk with nothing else other than a Thunderbolt monitor, that’s not the real world of post. If you are evaluating one of these as your next investment, consider what you must add. First and foremost is storage. Flash storage and SSDs are great for performance, but you’re never going to put a lot of video media on a 1TB (or smaller) drive. Then you’ll need monitors and, most likely, adapters or expansion products for any legacy connections.

I priced out the same unit I’m reviewing and then factored in an Apple 27” display, the Sharp 32” UHD monitor, a Promise Pegasus2 R6 12TB RAID, plus a few other peripherals, like speakers, audio I/O, docks and adapters. This bumps the total to over $15K. Granted, that’s pretty much a full system that will last for years. The point is that it’s important to look at all the ramifications when you weigh the new Mac Pro against a loaded iMac, a MacBook Pro or simply upgrading a recently-purchased Mac Pro tower.

Real world performance

Most of the tests promoting the new Mac Pro have focused on 4K video editing. That’s coming and the system is certainly good for it, but it’s not what most people encounter today. Editors deal with a mix of media, formats, frame rates, frame sizes, etc. I ran a set of identical tests on the 2013 Mac Pro and on my own 2009 Mac Pro tower – an eight-core (dual 4-core Xeons) 2.26GHz model with 28GB of RAM, a single NVIDIA Quadro 4000 video card and media on an internal two-drive (7200RPM eSATA) RAID-0 array. Since I had no external drives connected to the 2013 Mac Pro, all media was playing from and writing to the internal flash storage. This means performance would be about as good as you can get – possibly better than with externally-connected drives.

I tested Apple Final Cut Pro X, Motion, Compressor, Adobe Premiere Pro CC and After Effects CC. Media included RED EPIC 5K camera raw, ARRI ALEXA 1080p ProRes 4444, Blackmagic Cinema Camera 2.5K ProRes HQ and more. Most of the sequences included built-in effects and some of the new Red Giant Universe filters.

To summarize the test results, performance, as measured in render and export times, was significantly better on the 2013 Mac Pro. Most of the tests showed a 2X to 3X bump in performance, even with the Adobe products. Naturally, FCP X loves the GPU power of this machine. The “BruceX” test, developed by Alex Gollner as a benchmark for FCP X, consists of a 5K timeline with a series of generators. I exported this as a 5K ProRes 4444 file. The older tower accomplished this in 1:47, while the new Mac Pro smoked it in just :19. My After Effects timeline consisted of ProRes 4444 clips with a bunch of intensive Cycore filters. The old versus new renders were 23:26 and 12:53, respectively. I also ran tests with DaVinci Resolve 10, another application that loves more than one GPU. These were RED EPIC 5K files in a 1080p timeline, with debayer resolution set to full (no RED Rocket card used). The export times ran at 4-12fps (depending on the clip) on the tower versus 15-40fps on the new Mac Pro.
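
For reference, the speedups work out like this, computed directly from the times quoted above:

```python
# Speedup factors from the render times quoted above (m:ss to seconds).

def to_seconds(timecode):
    minutes, seconds = timecode.split(":")
    return int(minutes) * 60 + int(seconds)

tests = {
    "BruceX 5K export": ("1:47", "0:19"),
    "After Effects render": ("23:26", "12:53"),
}
for name, (tower_2009, mac_pro_2013) in tests.items():
    factor = to_seconds(tower_2009) / to_seconds(mac_pro_2013)
    print(f"{name}: {factor:.1f}x faster")
# BruceX 5K export: 5.6x faster
# After Effects render: 1.8x faster
```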

In general, all operations with applications were more responsive. Some of that is, of course, true of any solid-state storage: the computer boots faster and applications load and respond more quickly. Beyond that, more RAM, faster processors and other factors all help to optimize the 2013 Mac Pro for best performance. For example, the interaction between Adobe Premiere Pro CC and SpeedGrade CC, using Direct Link and the Lumetri filters, was noticeably better on the new machine. Certainly that’s true of Final Cut Pro X and Motion, which are ideally suited to it. I would add that a single 20” monitor connected to the Mac Pro placed very little drag on one GPU, leaving the second fully devoted to processing power. Performance might vary with two 27” displays plus a 4K monitor hooked to it.

I also tested Avid Media Composer. This software doesn’t make much use of GPU processing, so performance was about the same as on my 2009 Mac Pro. It also takes a trick to get it to work: the 2013 Mac Pro has no built-in audio device, which Media Composer needs to see in order to launch. If you have an audio device connected, such as an Mbox 2 Mini or even just a headset with a microphone, Media Composer detects a core audio device and will launch. Alternatively, I downloaded and installed the free Soundflower software. This acts as a virtual core audio device and can be set as the computer’s audio input in the Sound panel of System Preferences. Doing so enabled Media Composer to launch and operate normally.
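
If you want to confirm what Media Composer will see before launching it, you can list the audio devices macOS reports. A quick sketch using the standard system_profiler command:

```python
# List the audio devices macOS reports; Media Composer needs at least
# one core audio device present in order to launch.
import subprocess

report = subprocess.run(
    ["system_profiler", "SPAudioDataType"],
    capture_output=True,
    text=True,
).stdout

print(report.strip() or "No audio devices reported")
```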

Whether the new 2013 Mac Pro is the ideal tower replacement for you comes down to budget and many other variables. Rest assured that it’s the best machine Apple has to offer today. Analogies to powerful small packages (like the Mini Cooper or Bruce Lee) are quite apt. The build quality is superb and the performance is outstanding. If you are looking for a machine to service your needs for the next five years, then it’s the ideal choice.

(Note: This unit was tested prior to the release of OS X 10.9.3, so I didn’t encounter any of the render issues that have been plaguing Adobe and DaVinci users.)

Originally written for Digital Video magazine/CreativePlanetNetwork.

©2014 Oliver Peters

Amira Color Tool and your NLE

I was recently alerted to the new Amira Color Tool by Michael Phillips’ 24p blog. This is a lightweight ARRI software application designed to create custom in-camera looks for the Amira camera. You do this by creating custom color look-up tables (LUTs). The Amira Color Tool is available as a free download from the ARRI website (free registration required). Although the application is designed for the camera, you can also export looks in a variety of LUT file formats, which, in turn, may be installed and applied to footage in a number of different editing and color correction applications. I tested this in both Apple Final Cut Pro X and Avid Media Composer | Software (v8) with good results.

The Amira Color Tool is designed to correct log-C encoded footage, either to a straight Rec709 conversion or with a custom look applied. ARRI offers very good instructions, white papers, sample looks and tutorials covering the operation of this software. The signal flow runs from the log-C image, to the Rec709 correction, and then to the CDL-based color correction. To my eye, the math appears to be floating point, because a Rec709 conversion that throws a shot into clipping can be pulled back out of clipping in the look tab, using the CDL color correction tools. It is therefore possible to use this tool for shots other than ARRI Amira or Alexa log-C footage, as long as the image is sufficiently flat.
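
A toy example (my own numbers, not ARRI’s internals) of why that floating-point behavior matters:

```python
# Toy illustration of float headroom: values pushed past 1.0 by the
# Rec709 conversion keep their differences until the CDL stage pulls
# them back, so highlight detail can be recovered.

highlights = [1.1, 1.25]  # two distinct super-white pixel values

clamped = [round(min(v, 1.0) * 0.8, 2) for v in highlights]  # early clamp
floated = [round(v * 0.8, 2) for v in highlights]            # float pipeline

print(clamped)  # [0.8, 0.8]  - both identical, detail destroyed
print(floated)  # [0.88, 1.0] - still distinct, detail recovered
```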

The CDL correction tools are based on slope, offset and power. In that model, slope is equivalent to gain, offset to lift and power to gamma. In addition to color wheels, there’s a second video look parameters tab with hue intensities for the six main vectors (red, yellow, green, cyan, blue and magenta). The Amira Color Tool is Mac-only and opened both the QuickTime and DPX files among the clips I tested. It worked successfully with clips shot on an Alexa (log-C), Blackmagic Cinema Camera (BMD Film profile), Sony F3 (S-Log) and Canon 1DC (4K Canon Log). Remember that the software is designed to correct flat, log-encoded images, so you probably don’t want to use it on images that were already encoded with vibrant Rec709 color.
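
Those three controls follow the standard ASC CDL transfer function, which is applied per color channel. A minimal sketch:

```python
# Standard ASC CDL transfer function, applied per color channel:
# out = (in * slope + offset) ** power, clamped to zero before the power.

def apply_cdl(value, slope=1.0, offset=0.0, power=1.0):
    graded = value * slope + offset   # slope acts as gain, offset as lift
    return max(graded, 0.0) ** power  # power acts as gamma

# Example: mild gain, a touch of lift and a slight gamma adjustment
# applied to an 18% gray value.
print(round(apply_cdl(0.18, slope=1.1, offset=0.02, power=0.9), 3))  # ~0.254
```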

FCP X

To use the Amira Color Tool, import your clip via the application’s file browser, set the look and export a 3D LUT in the appropriate format. I used the DaVinci Resolve setting, which creates a 3D LUT as a .cube format file. To get this into FCP X, you need to buy and install a LUT filter, such as Color Grading Central’s LUT Utility. To install a new LUT there, open the LUT Utility pane in System Preferences, click the “+” symbol and navigate to where the file was saved. In FCP X, apply LUT Utility to the clip as a filter. From the filter’s pulldown selection in the inspector, choose the new LUT that you’ve created and installed. One caveat: be careful with ARRI files. Any files recorded with newer ARRI firmware are flagged as log-C, and FCP X automatically corrects these to Rec709. Since you don’t want to double up on LUTs, make sure “log processing” is unchecked for those clips in the info tab of the inspector pane.
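
For the curious, a .cube file is just plain text. Here’s a sketch, based on my understanding of the format, that writes a tiny identity 3D LUT (size 2 for brevity; exported LUTs typically use 33 points per axis):

```python
# Write a minimal identity 3D LUT in .cube format. In .cube files the
# red index varies fastest and blue slowest; values are normalized 0-1.

size = 2  # kept tiny for illustration; real LUTs are typically 33
step = 1.0 / (size - 1)

lines = [f"LUT_3D_SIZE {size}"]
for b in range(size):
    for g in range(size):
        for r in range(size):
            lines.append(f"{r * step:.6f} {g * step:.6f} {b * step:.6f}")

with open("identity.cube", "w") as f:
    f.write("\n".join(lines) + "\n")
```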

Media Composer

To use the custom LUTs in Media Composer, select “source settings” for the clip, go to the color management tab and install the LUT. It will then be available in the pull-down menu for color conversions. This color management change can be applied to a single clip or to a batch of clips within a bin.

In both cases, the source clips in FCP X or Media Composer play back in real time with the custom look already applied.


©2014 Oliver Peters