Camera vendors design whatever produces the best pictures, because pictures sell cameras. That freedom has led to all manner of coding schemes and compression formats, not to mention the separate matter of containers and wrappers. The rise of the single large sensor has added a further choice: raw or coded output.
Camera designers have to adopt a codec format that meets a number of sometimes conflicting requirements. First, it must meet the quality expectations for the camera, for its price and format. Second, it must not be power-hungry. Third, the data rate must be as low as possible to ease demands on the camera storage cards. And fourth, sometimes a little overlooked, it must be compatible with popular NLEs.
The demand for low data rates calls for an efficient codec design, but the more recent the compression format, the more processing power it needs, immediately conflicting with the low-power requirement. Hence the popularity of MPEG-2 long after AVC was released. This is where the big engineering compromise comes in. If the camera has an adequate internal compression format, then uncompressed or raw data can be made available via SDI or HDMI for users who want more of the sensor information. External recorders have become common, especially with single-large-sensor cameras. Many allow encoding into an edit format such as DNxHD or ProRes, speeding the ingest process in post.
For the broadcaster, all this choice gives flexibility at the production stage but does not lead to standardization in the workflow. The edit bay must deal with this plethora of formats, a far cry from the days of two primary tape formats, the Betacam family and the DV family. Even a format like AVC I-frame encoding comes in two flavors: Panasonic’s AVC-Intra (and Ultra) and Sony’s XAVC. The former is High 4:2:2 profile at level 4.1, and the latter at level 5.2. So much for interoperability.
The drive to support 4K is one reason Sony has adopted 5.2, as lower levels only support up to 2K resolution, and Panasonic has introduced AVC-Ultra to support higher data rates.
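Those level numbers come from the limit tables in Annex A of the H.264/AVC specification: each level caps the frame size in macroblocks (MaxFS) and the macroblock throughput per second (MaxMBPS). A minimal sketch of the arithmetic, in Python, shows why level 4.1 tops out around 1080p while level 5.2 accommodates 4K at 60 fps. The constants are transcribed from the spec; the full conformance rules add further caps (bitrate, decoded picture buffer) that are omitted here for brevity.

```python
# Simplified H.264 level check. Each level caps the frame size in
# macroblocks (MaxFS) and the macroblock throughput (MaxMBPS).
# Constants from Annex A of the H.264 spec; the full rules also
# constrain bitrate and buffer sizes, which are ignored here.
LEVELS = {
    "4.1": {"max_fs": 8_192,  "max_mbps": 245_760},
    "5.2": {"max_fs": 36_864, "max_mbps": 2_073_600},
}

def fits_level(level: str, width: int, height: int, fps: float) -> bool:
    """True if a frame size and rate fit within the level's two caps."""
    # A macroblock is 16x16 luma samples; partial blocks round up.
    mbs_per_frame = ((width + 15) // 16) * ((height + 15) // 16)
    caps = LEVELS[level]
    return (mbs_per_frame <= caps["max_fs"]
            and mbs_per_frame * fps <= caps["max_mbps"])

print(fits_level("4.1", 1920, 1080, 30))  # True: 1080p30 fits level 4.1
print(fits_level("4.1", 4096, 2160, 24))  # False: 4K exceeds the frame cap
print(fits_level("5.2", 4096, 2160, 60))  # True: 4K60 fits level 5.2
```

A 4K DCI frame is 34,560 macroblocks, far past level 4.1's cap of 8,192, which is the arithmetic behind Sony's move to level 5.2.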
Editing AVC natively demands considerable processing resources, so it requires a recent NLE workstation. Many editors prefer to work with DNxHD or ProRes, transcoding everything at ingest, which eases the demands on the power of the workstation.
Will there ever be a single codec for cameras? I think not. The requirements of each programming genre are too different. Compare newsgathering with a high-end VFX shoot: one needs small files for backhaul; the other needs as much of the original sensor information as possible. And what of HEVC? So far it looks set to find application as a distribution codec. The processing resources needed for encoding do not make it practical for current camera electronics, but if we get to 4K 3-D newsgathering, who knows?
—David Austerberry is the editor of Broadcast Engineering World.