Most DSLRs shoot in a compact, complex format (usually H.264 or AVCHD) that takes up relatively little space. These compressed formats take more work for your Mac to decode than a format like ProRes 422, in which each frame is encoded independently. When you optimize media—on import or afterwards—you’re transcoding to ProRes, theoretically making the editing process smoother. But how much help does your Mac really need, and how much difference does it make?
Most current Macs can play back unfiltered H.264 just fine, without dropping frames. You don’t need to optimize to skim your footage, nor to edit it into your timeline. Throw a filter or a transition into the mix, though, and the power of your Mac comes into play. My Mac Pro (2009, 2.66 GHz with ATI 4870, WD Green SATA HD) can play back transitions between native H.264 clips, and even transitions between two native clips that each have color correction and a couple of filters applied. My 13” MacBook Pro (2011, 2.7 GHz i7, SSD) can’t handle a single filter without dropping frames, but optimizing doesn’t help: it drops frames just as readily with optimized media as it does with native.
FCP X’s Preferences window’s Playback tab, where you can deactivate or delay background rendering, and choose to be notified of dropped frames during playback.
After rendering, everything’s fine on any Mac: filters and transitions are pre-computed, and any problematic areas are converted to ProRes (no matter what the source was), so playback needs no further processing. By default, background rendering kicks in after 5 seconds of inactivity. It’s also worth noting that you can speed up playback in general by choosing Better Performance over High Quality in the Playback section of Preferences. You won’t see every pixel, but if you don’t routinely work in full screen or with a second monitor, it likely won’t be a problem.
Now, it is true that native media hits your Mac’s performance ceiling slightly sooner than optimized media. Before you start dropping frames, you might squeeze one more filter onto optimized media than onto native media. But this is very dependent on the filter; there’s really not much in it. The performance hit from real-time decoding of H.264 is small, but you’d need to test on your own Mac to discover how significant it is for you.
So what about color? The native footage uses 4:2:0 chroma subsampling, while the 4:2:2 of ProRes carries twice as much color information. Still, when the original source was 4:2:0, there’s only so much you can do. No matter what the source, you’ll be rendering (and probably exporting) the finished edit to ProRes 4:2:2 anyway — the question is, does converting to 4:2:2 up-front make any difference?
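To put numbers on that “twice as much color information” claim, here’s a quick sketch in plain Python (the frame size is just an illustration; the subsampling factors are standard):

```python
# Chroma samples per frame for common subsampling schemes,
# illustrated for a 1920x1080 frame. In J:a:b notation, chroma is
# subsampled horizontally and (for 4:2:0) vertically as well.
def chroma_samples(width, height, scheme):
    factors = {
        "4:4:4": (1, 1),  # full chroma resolution
        "4:2:2": (2, 1),  # half horizontal chroma (ProRes 422)
        "4:2:0": (2, 2),  # half horizontal and vertical (H.264/AVCHD sources)
    }
    h, v = factors[scheme]
    # Two chroma planes (Cb and Cr), each subsampled by (h, v)
    return 2 * (width // h) * (height // v)

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, chroma_samples(1920, 1080, scheme))
```

So a 4:2:2 ProRes frame really does store twice the chroma samples of a 4:2:0 original — but the transcode can’t invent color detail the camera never recorded.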
To test, I took two copies of the same clip, one in ProRes and one in the original H.264, and applied the same extreme color correction to both, then compared the same frame in each clip visually, and with the histogram. Differences were obvious in the scopes before rendering, but minimal afterwards. For what it’s worth, the histograms are quite different even before a color correction is applied, indicating that some kind of chroma smoothing is done as part of the transcode to ProRes.
Comparison of Pre- and Post-Render for Native H.264 and ProRes 422 sources.
Looking at the edges of the histograms, there’s slightly more detail in the shadows and highlights (on extreme color corrections like this) in the optimized ProRes clips. It’s worth doing your own tests, but the visual differences in the final image are very slight. You may have a different opinion if you’re used to color grading feature films.
What about the speed of rendering effects on native H.264 vs. optimized ProRes? The difference is negligible. I took the first ten seconds of two identical clips, one native H.264, one optimized to ProRes, and put the same Bleach Bypass filter on each with the amount set to 90%. The ProRes clip took 52 seconds to render; the native H.264 clip took 53. Easily within the margin of error.
If you edit with native media, your finished timeline will be made up of untouched native clips, plus ProRes render files for any sections with effects — and then you export everything to ProRes anyway. This brings us to an interesting point.
Exporting without rendering is much, much faster than rendering first and then exporting. Why? During export, the GPU can be used to speed up the rendering of effects. Remember those two clips that took 52–53 seconds each to render? I changed the amount slider on the filter to force a re-render, then immediately exported to a full-resolution ProRes QuickTime with Share > Export Media.
The full export, including rendering both clips and writing it all to disk, took about 14 seconds. The original renders took a total of 105 seconds!
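The back-of-the-envelope arithmetic, using the figures from my informal test above (so treat the exact ratio as illustrative, not a benchmark):

```python
# Informal timings from the test above, in seconds
render_in_timeline = 52 + 53  # rendering each clip in the timeline first
export_unrendered = 14        # Share > Export Media, rendering on the fly

speedup = render_in_timeline / export_unrendered
print(f"Exporting unrendered was about {speedup:.1f}x faster")
```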
Comparison of rendering two clips in the timeline and exporting both clips without rendering, approximate times in seconds
Now, it may well be true that background rendering only occurs while you’re not using your Mac, making the time spent in rendering effectively free. It may also be true that you couldn’t make some creative decisions effectively without seeing them at full speed and full resolution in the context of the edit. Both of these are good reasons to let a render happen anyway. But it’s almost certainly a waste of your time to render before exporting.
Multicam, if you use it, deserves a closer look. Since the feature plays back multiple streams of video simultaneously, your computer has to do more work, and you’re more likely to drop frames while cutting from angle to angle if you choose not to optimize. Luckily, a dedicated preference setting (“Create optimized media for multicam clips” under Playback) lets you optimize multicam media only. It’s worth leaving this option enabled if you have any problems.
FCP X’s Preferences window’s Playback tab, where you can also find “Create optimized media for multicam clips”.
My choice? I edit with native media, and I have background rendering off. Generally, my Mac is fast enough to show me filters and transitions without rendering, though I’m not worried by the occasional dropped frame (and choose not to be told about it). Anything complex that I need to review in context can be rendered manually by selecting the clip(s) and pressing Control-R.
Your Mac’s mileage may vary, but it’s absolutely worth testing your workflow and revising if necessary. If your main target is the web, any compression will likely obliterate the minor color differences seen with native H.264 sources. If you’re targeting a cinema release, then sure, maybe optimizing your media is a good idea. The slightly improved color correction ability could enhance a marginal shot or two.
My importing preferences.
OK, so slightly improved color will be a point for some users. Is the extra time up front and the extra 60GB/hour (approximate, depending on frame rate and resolution) worth it? Not to me. The argument that hard drives are cheap is obvious: they are. But data wrangling isn’t free; managing and backing up all that extra footage takes time as well as space. All things being equal, I’d rather store 3–4 times the amount of footage in the same amount of space.
Approximate data rate comparisons in GB/hr and Mb/s (ProRes data rates from Apple’s white paper).
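If you want to run the numbers for your own camera, the conversion from megabits per second to gigabytes per hour is simple. The sample rates below are rough: the ProRes 422 figure is the approximate 1080p30 target from Apple’s white paper, and the H.264 figure is merely typical of DSLRs of this era — yours may well differ, which is why the totals above are approximate.

```python
def gb_per_hour(mbps):
    # Mb/s -> GB/hr: 3600 seconds per hour, 8 bits per byte, 1000 MB per GB
    return mbps * 3600 / 8 / 1000

# Approximate bit rates in Mb/s (assumptions, not measurements)
rates = {
    "DSLR H.264 (typical)": 48,
    "ProRes 422, 1080p30 (Apple white paper)": 147,
}

for name, mbps in rates.items():
    print(f"{name}: ~{gb_per_hour(mbps):.0f} GB/hr")
```

At those rates, optimizing roughly triples your storage needs; at lower H.264 bit rates (many DSLRs shoot around 24 Mb/s), the multiple is higher still.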
A final, heretical note: In Final Cut Pro 6 and 7, you could use native HDV footage in a ProRes sequence with few issues. You could even use HDV footage in a native HDV sequence, with an option to render effects to ProRes. Editing natively was slightly slower, but hardly the demon child many made it out to be. I remember cutting native HDV in a press box at a Test Cricket match on a 2006 MacBook Pro and having a fine time. At the time, the default advice was to transcode all source material to ProRes. Nowadays, FCP X won’t even let you “optimize” HDV or XDCAM sources; it’s native all the way.
You can’t “optimize” XDCAM 422 footage, nor HDV/DV.
These days, computers can easily handle on-the-fly decoding of complex, highly compressed codecs. At the end of the day, optimizing media can slightly speed up the playback of filtered video before rendering, and can slightly improve color fidelity in certain circumstances. That’s it. You had to transcode H.264 footage in FCP 7, but you (probably!) don’t in FCP X.