
I always enjoy the show – partly for the new toys – but also to hook up once again, face-to-face, with many friends in the business. I’m back home now and have had a day to decompress and make a few observations about the NAB convention.
First off, this was an extremely strong show for post. Tons of new versions of many of your favorite NLEs, color grading tools and other items. Second, the attendance was good – a bit more than last year, so still a “down” year compared with the peaks of a few years ago. Yet I felt the floor density was higher than the 2009 vs. 2010 numbers indicated. Thursday was still well-attended and not the ghost town I would have expected. So, on the purely subjective metric of how crowded the floor felt, I would have to say that the daily averages were much better than in 2009.
If you want more specific product knowledge about what was on the floor, check out the various NAB reports at Videography, DV, TV Technology, Studio Daily, Post and Pro Video Coalition. I would encourage you to check out DV’s “(Almost) Live From the NAB Show Blog” – Part 1 and Part 2. The following thoughts fall under opinion and observation, so I’m bound to skip a lot of the details that you might really want to know.
Apple
It never ceases to amaze me when I see blog posts and forum comments that seem to expect Apple to pop up out of nowhere at the show with some amazing new version of Final Cut Studio. Have these folks been under a rock? Apple swore off trade shows several years ago and there’s no indication this policy has changed. They were never on the 2009 or 2010 exhibitor lists, and you can’t plan a 1,500-3,000-seat “user event” at an area ballroom without word getting out. So, I have no idea why people persist in this fantasy game.
The short-term reality is that a feature-laden new version of FCP/FCS is unlikely any time soon. Maybe an incremental update like the “new” Final Cut Studio from last year, but I wouldn’t expect that until a few months down the road at the earliest – or maybe not until 2011. Even if that doesn’t happen, or if the release strikes many as lackluster instead of awesome, it won’t change the breakdown of NLEs to any great degree. If you work with FCP today, you are getting the job done and are probably relatively happy with the product. I don’t foresee any change in the product that would greatly alter that situation.
The more important news – as it pertains to NAB – is that Apple is doing a good job of attracting a number of new partners to its core technologies. Autodesk’s Smoke for Mac OS X is a good example, but Autodesk is just one of the more than 300 developers that constitute the Final Cut ecosystem. A number of companies, such as ARRI, have licensed the ProRes codec, which is a pretty good endorsement of its image quality, as well as its workflow.
Avid
Certain versions tend to become milestones for a company’s software. I believe Media Composer 5 will be one of those. Avid renumbered versions with the release of Adrenaline several years ago, so this version 5 is really more like version 17. Numbers notwithstanding, the other milestones for Media Composer were the old versions 5.x and 7.x, and I believe this newest release (targeted for June) will have just as much impact for Avid editors.
Media Composer 5 goes a long way towards keeping Avid editors in the fold and may even get some Avid-to-FCP “switchers” to come back. It adds limited third-party i/o hardware support, wider codec support (including RED and QuickTime through AMA), Pro Tools-style audio features and more FCP-like timeline editing functions. I highly doubt that it will really get any FCP diehards to convert, but it might pique the interest of those selecting their first high-end NLE. Down the road, I’ll have a proper review when it’s ready for actual use.
In addition to Media Composer 5, Avid also previewed its “editing in the cloud” concept. This is largely based on work already done by Maximum Throughput, which had been acquired by Avid. The demo looked pretty fluid, but I think it’s probably a number of years off. That’s OK as this was merely a technology preview; however, it does have relevance to large enterprises. The same concepts developed for editing over the internet clearly apply to editing on an internal companywide LAN or WAN system.
The direction that Avid seems to be taking here – along with its expansion of Interplay into a family of asset management products – sets them up to make the Professional Services department into an IBM-style corporate consultation service and profit center. In other words, if you are a large company or TV network and want to implement the “cloud” editing concept along with the necessary asset management tools, it’s going to take a knowledgeable organization to do that for you. Avid naturally has such expertise and is poised to leverage its internal assets into billable services. The small editing boutique may not have any interest in that concept, but if it makes Avid a stronger company overall, then I’m all for it.
Adobe Creative Suite 5
CS5 is just about here. It’s 64-bit and uses the Mercury Playback Engine. But will Premiere Pro really pick up steam as an NLE of choice? As with Media Composer, expect a real review in the coming months. I’ve used Premiere Pro in the past on paying gigs and didn’t have the sort of issues I see people complain about. These were smaller projects, though, so I didn’t hit some of the problems that have plagued Premiere Pro, which mainly relate to scalability. Although it’s not touted in the CS5 press info, it does appear to me that Adobe has done a lot of tweaking under the hood. This is related to the changes for 64-bit, so I really expect Premiere Pro CS5 to be a far better product than previous versions.
Whether that proves true is going to depend on your particular system. For example, much has been written about the Mercury Playback Engine. This is an optimization for the CUDA technology of specific high-end NVIDIA graphics cards. If you don’t have one of these cards installed, Premiere Pro shifts into software emulation. In some cases the difference will be big; in others it won’t. There’s lots of native codec and format support, but not all camera codecs are equal. Some are CPU-intensive, some GPU-intensive and others require fast disk arrays. If your system is optimized for DVCPRO HD, for example (older CPU, but fast arrays), you won’t see outstanding results with AVC-Intra, which is processor-intensive and requires the newest, fastest CPUs.
There’s plenty in the other apps to sell editors on the CS5 Production Premium bundle, even if they never touch Premiere Pro. On the other hand, Premiere Pro CS5 is still pretty powerful, so editors without a vested interest in Avid, Apple or something else will probably be quite happy with it.
One format to rule them all
With apologies to J. R. R. Tolkien, the hope of a single media format seems to have been totally shattered at this NAB. When MXF and AAF were originally bounced around, the goal was a common media and metadata format that could be used from camera to NLE to server without the need for translation, transcoding or any other sort of conversion.
I think that idea is toast, thanks to the camera manufacturers, who – along with impatient users – have pushed NLE developers to natively support just about every new camera format and codec imaginable. Since the software can handle it, we see NLEs evolving into more of a browser-style model. This is the basis for how Premiere Pro and Final Cut Pro are structured, and it is now a model that others are embracing. Avid has AMA (a plug-in API for camera manufacturers), but you also see “soft import” in the Autodesk systems and “soft mount” in Quantel. All variations on the same theme. In fact, Apple is the “odd man out” in this scenario, forcing everything into QuickTime before FCP can work with it.
The three advanced formats that seem to have the broadest support today are Avid DNxHD, Apple ProRes and Panasonic AVC-Intra. To a lesser extent you can add AVCHD, Sony XDCAM (various flavors) and DV/DVCPRO/DV50/DVCPRO HD.
Stereo 3D
Just when we thought we had this HD thing figured out, the electronics manufacturers are pushing us into stereo 3D. There was plenty of 3D on the floor, but bear in mind that there are very few in the production community pushing to do this. It’s driven almost entirely by display manufacturers and studios looking to cash in on 3D theater distribution. I think we are headed for a 3D bubble that will eventually drop back into a niche, albeit a large niche for some.
Whether 3D is big or not doesn’t matter. It’s here now and something many of us will have to deal with, so you might as well start figuring things out. The industry is at the starting point and a lot is in flux. First off – the terminology. Walking around the floor, I saw references to Stereo 3D, S3D, Stereoscopic and so on. Or what about marketing slogans like Panasonic’s “from camera to couch”? Or Sony’s “make.believe”? Hmm… Did the marketing people really think that one through? New crew positions will evolve. Are you a “stereographer”? Or should you be called a “stereoscopist”?
I watched a lot of stereo 3D demos and I generally didn’t like most of them. Too much of 3D looks like a visual effect and not the way my eyes see reality. It also affects the creative direction. For instance, the clip from a Kenny Chesney 3D concert film, which was edited in a typical, fast-paced, rock-n-roll style of cutting, was harder to adjust to than the nice, slow camera moves from the Masters golf coverage.
I also observed that most 3D shots have an extremely deep depth-of-field – even more so in 3D than if you just looked at the same shot in 2D. Shallow depth-of-field, like the gorgeous shots from the HDSLRs that everyone loves, doesn’t seem to work in 3D. I tended to pay attention to objects in the background instead of the foreground, which I presume is the opposite of what the director would have wanted. Many of the 3D shots felt like multi-planed pieces of animation. I have heard this referred to as “density zones” and it seems to be an anomaly of 3D shots. A lot of these shots simply had the effect of a moving version of the vintage View-Masters of the past.
Obviously a lot of companies will try to produce 3D content from archival 2D masters. To answer that need, JVC showed a real-time 2D-to-3D converter, which was able to take standard programs and adjust shots on the fly using a set of sophisticated algorithms. This creates some interesting artifacts. First off, the information has to be interpolated so that alternating fields become left- and right-eye views, and viewing the result shows visible scanlines on an HD display. That seems to be a common problem with current 3D displays.
Second, there are errors in the 3D. Some of the computation is based on colors, which means that occasionally some objects are incorrectly placed due to their color. That part of an object (like a shirt or certain colors in a flag) will appear at a different point in Z-space compared to the rest of the object to which it is attached. My guess is that casual viewers will almost never see these things and therefore such products will be quite successful.
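To picture why a color-driven depth guess misplaces objects, here is a minimal, purely hypothetical sketch of the general depth-image-based rendering idea: assign each pixel an estimated depth (in this toy, simply its brightness), then shift pixels horizontally by a disparity proportional to that depth to synthesize left- and right-eye views. This is not JVC’s algorithm – the heuristic, array sizes and example frame are all invented for illustration.

```python
import numpy as np

def to_stereo(frame, max_disparity=8):
    """Toy depth-image-based rendering.

    frame: H x W x 3 float array (values 0.0-1.0). Returns (left, right) views.
    The per-pixel "depth" is just normalized luminance, a deliberately crude
    stand-in for whatever cues a real converter uses. Any region whose color
    or brightness fools the heuristic gets shifted to the wrong plane, which
    is the kind of artifact described above.
    """
    h, w, _ = frame.shape
    luma = 0.299 * frame[..., 0] + 0.587 * frame[..., 1] + 0.114 * frame[..., 2]
    # Crude assumption: brighter pixels are treated as closer to the camera.
    depth = (luma - luma.min()) / (luma.max() - luma.min() + 1e-6)

    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    cols = np.arange(w)
    for y in range(h):
        # Disparity grows with assumed nearness; shift each eye's view oppositely.
        shift = (depth[y] * max_disparity).astype(int)
        left[y, np.clip(cols + shift, 0, w - 1)] = frame[y, cols]
        right[y, np.clip(cols - shift, 0, w - 1)] = frame[y, cols]
    return left, right  # unfilled (black) pixels are disocclusion holes

# A gray frame with one bright red patch: the patch "pops" toward the viewer
# simply because it is bright, whether or not it is actually nearer.
frame = np.full((90, 160, 3), 0.4)
frame[30:60, 60:100] = [0.9, 0.2, 0.2]
left_view, right_view = to_stereo(frame)
```

In this toy, the red patch lands on a different plane than its surroundings purely because of its brightness, which is roughly the “shirt or flag in the wrong Z-space” effect described above.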
My whole take on this is that we simply don’t see real life the way that stereo 3D films force us to see. Many folks will disagree with me on this, including a number of scientists, but I feel that people largely view life in 2D. Your eyes converge on an object and focus (both physically and mentally) on that object. Other things are on the periphery, so you are aware of them, but not focused on them. When you want to look at something else, you change your attention and change your focus, much like a pan or tilt with a rack focus. By the same token, we don’t see the sort of extreme shallow depth-of-field caused by some lenses, but that somehow feels more natural. These issues may evolve as stereo 3D evolves, but for me, the most natural images were those that were closest to 2D. If that’s the case, then you have to conclude, “What’s the point?”
Disruptive technology
Blackmagic Design definitely generated the buzz this year. They bought the ailing DaVinci Systems last year and promptly told everyone in the media that they had no intention of selling cheaper versions of those flagship systems. We now know that wasn’t true. It turns out that Blackmagic has once again been true to form – as everyone had initially thought – and brought a brand new Mac version of DaVinci Resolve to NAB at a very low price.
Upon acquiring DaVinci, Blackmagic decided to “end-of-life” all hardware products (like the DaVinci 2K), end all support contracts and focus on rebuilding the company around its flagship software products – Resolve (grading) and Revival (film restoration). They redesigned the signature DaVinci control surfaces to better fit into Blackmagic Design’s manufacturing pipeline. You can now purchase Resolve in three configurations: software-only Mac ($1K), software (Mac) with panels ($30K) or a Linux version with panels ($50K). Add to this the computer, high-end graphics cards and drives.
The software-only version will work with a panel like the Tangent Wave, so it will allow a user to create a color grading room with the “name brand” product at a ridiculously low price. This has plenty of folks on various forums pretty steamed. I suspect there will be three types of DaVinci customers.
Customer A is the existing facility that upgrades from an older DaVinci to Resolve 7.0. These customers will build a high-end room using a cluster of Linux towers. That’s not cheap, but it will still cost far less than in the past.
Customer B will be the facility that wants to set up a less powerful “assist” station. It may also be the entrepreneurial colorist who decides to set up his own home system – either to branch out on his own – or to be able to work from home to avoid the commute.
Customer C – the one that scares most folks – is the shop that sets up a bare bones grading room around Resolve, just so they can say that they have a DaVinci room. There are obvious performance differences between Resolve on a Mac and a full-featured, real-time 2K-capable-and-more DaVinci suite, so the fear is that some folks will represent one as being the other.
No matter what, that’s the same argument made when FCP came out and also when Color arrived. Grant Petty (Blackmagic Design’s founder) has always been about empowering people by lowering the cost of entry. This is just another step in that journey. I think the real question will be whether owners who have set up Apple Color rooms will convert these to DaVinci. Color is good, but DaVinci has the brand recognition and there are plenty of experienced DaVinci colorists around. At an extra $1K for software, this might be an easy transition. Likewise for Avid shops. Media Composer’s and Symphony’s color correction tools are pretty long-in-the-tooth and those owners are looking for options. DaVinci makes a lot more sense for these shops than investing in the Final Cut Studio approach. Hard to tell at this point.
Digital cameras
RED had its RED Day event. I was registered, but blew it off. Too much other stuff to see and quite frankly, I have little or no interest in being teased by cameras that are yet to come (late or if ever). In my world, HDSLRs have far greater impact than RED One or Epic. Judging by the number of Canons and Nikons I saw being used on the floor for video coverage and podcasts, I’d have to say the rest of the world shares that experience.
The real news is that RED is no longer the only game if you want a digital cinematography camera. Sure, there’s Sony and Panasonic, but more importantly there’s ARRI with the Alexa and Aaton with the Penelope-∆ (Delta). Both companies have a strong film pedigree, and these new cameras, coming this year and in 2011, will offer options that will interest DPs. The Penelope is the odder of the two in that it’s a hybrid film/digital camera using two interchangeable magazines – one for film and another that’s a digital back. It uses an optical viewfinder, so the sensor is attached to the digital magazine in precisely the same location as the film loop in the film magazine. This leaves it exposed when you swap magazines, but the folks at Aaton don’t see this as an issue, aside from occasional, simple cleaning. In reality, you probably won’t be swapping back and forth between film and digital on the same production.
In my opinion, where RED has gone wrong has been in placing resolution over workflow. No matter how smooth, native or fast current RED post workflow is, they will have a hard time shaking the common “slam” that their workflow is slow, hard or expensive. ARRI and Aaton offer somewhat lower resolution than RED, but they record both camera RAW and direct-to-edit formats. The Alexa records in ARRI RAW as well as ProRes, while Aaton uses DNxHD (for now) as its compressed file format. This means that the camera generates a file that is ready to edit in Avid or FCP straight from the shoot. If you are working in TV, that may be all you need. If you are doing a feature film, it becomes an offline editing format. The camera RAW file is preserved as a “digital negative”, which would be used for color grading and finishing. ARRI RAW is already supported by a number of systems, including Avid (with Metafuze) and Assimilate Scratch.
Pure magic
Last year I was “wowed” by Singular Software’s PluralEyes. This year it was GET from AV3 Software. GET is a phonetic search tool based on the same Nexidia technology that is licensed to Avid for Media Composer’s ScriptSync feature. Think of GET as Spotlight for speech. GET operates as a standalone application that can be used in conjunction with Final Cut Pro. It shouldn’t be thought of as just a plug-in.
The process is simple. First, index the media files that are to be reviewed. This only needs to happen once, and the company claims that files can be indexed 200 times faster than real time. (ScriptSync’s indexing is extremely fast.) Once files are indexed, enter a search term into the GET search field and all the possible matches are located. Adjusting the accuracy setting up or down changes how many matching clips are returned.
You can also do searches using multiple parameters, such as a search term plus a date or a reel number. Since the algorithms are phonetic, correct spelling is less important, as long as the term sounds the same. GET includes its own player, and clips imported into FCP will have markers at the matching points within the master clip. The shipping version of the product (due in a few months) will also subclip the matching segments.
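For anyone curious how spelling can matter so little, here is a minimal, hypothetical sketch of the general idea behind phonetic search – not the actual Nexidia engine that GET licenses. It reduces words to crude sound-alike keys and keeps anything whose key similarity clears an accuracy threshold; the index dictionary, clip names and helper functions are invented for the example.

```python
from difflib import SequenceMatcher

def phonetic_code(word):
    """Very crude phonetic key: collapse a few sound-alike consonants, drop
    vowels after the first letter, then remove repeats. Not Nexidia's method,
    just enough to show why 'Chesney' and 'Chesnee' land on the same key."""
    word = word.lower()
    word = word.translate(str.maketrans({"k": "c", "q": "c", "z": "s",
                                         "y": "i", "w": "v", "f": "v", "j": "g"}))
    head, tail = word[0], "".join(ch for ch in word[1:] if ch not in "aeiou")
    out = head
    for ch in tail:
        if ch != out[-1]:          # collapse doubled letters
            out += ch
    return out

def search(index, term, accuracy=0.7):
    """Return (clip, word, score) hits whose phonetic key is close enough to
    the search term. Lowering 'accuracy' widens the pool of matches."""
    key = phonetic_code(term)
    hits = []
    for clip, words in index.items():
        for w in words:
            score = SequenceMatcher(None, key, phonetic_code(w)).ratio()
            if score >= accuracy:
                hits.append((clip, w, round(score, 2)))
    return hits

# Hypothetical pre-built index: clip name -> words "heard" in that clip.
index = {
    "interview_01": ["golf", "masters", "augusta"],
    "concert_02": ["chesnee", "stadium", "guitar"],
}
print(search(index, "Chesney", accuracy=0.7))   # finds the misspelled clip
```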
Other snapshots
There are a few other interesting things to mention.
CatDV from Square Box Systems has come along nicely. Many of my FCP friends have looked at this and characterize it as “what Final Cut Server should have been.” Check it out.
I ran into Boris Yamnitsky (Boris FX founder) at the show and he was more than happy to show me some of their upcoming release. Boris FX wasn’t officially exhibiting this year, but they are starting to roll out BCC 7, starting with the After Effects version (ready for CS5). It will include a number of key new features, like particles. What really caught my eye, though, was a color correction filter that combined functionality from both Colorista and Color. It’s a single layer color correction filter with 3 color wheels, but the twist is that you can apply masks with both inside and outside grades – all within the same instance of the filter.
Lastly, Lightworks is back. Well, it never actually left – it just changed hands a few times, most recently landing with EditShare after that company acquired Gee Broadcast last year. Rather than slug it out with the “A” NLE vendors, EditShare has opted to release it as open source and see what the development community can do for the product. It already has a small, loyal following among film editors and has a few unmatched touches for collaborative editing. For instance, two editors can work on exactly the same sequence (not copies). One editor at a time has “record” control; as one makes changes, the other can see these updates appear on his own timeline!
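Purely as an illustration of that single-writer idea – and not a claim about how Lightworks actually implements it – here is a hypothetical sketch: one shared sequence object, one “record” token that only a single editor can hold at a time, and readers who always see the live state.

```python
import threading

class SharedSequence:
    """Toy model of single-writer collaboration: one shared edit list and one
    'record' token. Whoever holds the token can change the sequence; everyone
    else reads the same live state. Purely illustrative."""

    def __init__(self):
        self._events = []                 # the one shared timeline (not a copy)
        self._record = threading.Lock()   # the "record" control token
        self.owner = None

    def take_record(self, editor):
        if self._record.acquire(blocking=False):
            self.owner = editor
            return True
        return False                      # someone else is already in record

    def release_record(self, editor):
        if self.owner == editor:
            self.owner = None
            self._record.release()

    def add_edit(self, editor, edit):
        if self.owner != editor:
            raise PermissionError(f"{editor} does not have record control")
        self._events.append(edit)         # immediately visible to the other editor

    def timeline(self):
        return list(self._events)         # readers always see the latest state

seq = SharedSequence()
seq.take_record("editor_a")
seq.add_edit("editor_a", "insert shot 12 at 00:01:10")
print(seq.timeline())                     # editor_b sees the change right away
```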
See, I told you it was a fun year.
©2010 Oliver Peters