It seems like we have had compression technology forever. We haven’t, unless you include the adoption of interlace scanning, the original analog compression system. Digital compression has existed for essentially as long as we have had digitized video (and audio). I distinctly remember attending SMPTE conferences as a much younger person in which a major topic was “bit-rate reduction” technology, another term for what we now call compression. Context is everything, so let’s venture back a bit to see why this was so complicated.
Before CCIR 601
Before ITU-R BT.601 (or its original popular name, CCIR 601) there was little agreement on the sampling format for moving images. The research into how to sample and store video was centered in fine research institutes on several continents. Of course, video was what we would today call standard definition, a distinction that was not meaningful before NHK showed HDTV to the world in the late ’80s. Nor was there common agreement on component video as the basis for imaging, storage and transmission.
So sampling grids did not need to be locked to any other conventions, and in fact there was no requirement that sampling be based on a rectangular grid at all. Some quite popular proposals aligned samples at 45 degrees to the line structure.
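As an illustration only (not a reconstruction of any specific proposal of the era), the sketch below contrasts a conventional rectangular lattice with a "quincunx" lattice: offsetting alternate lines by half a sample pitch places each sample's nearest neighbors at 45 degrees to the line structure.

```python
# Illustrative sketch of two sampling lattices. The function names and
# parameters are hypothetical, purely for demonstration.

def rectangular(lines, samples, pitch=1.0):
    """Sample positions on a conventional rectangular grid."""
    return [(x * pitch, y * pitch)
            for y in range(lines) for x in range(samples)]

def quincunx(lines, samples, pitch=1.0):
    """Same grid, but alternate lines offset by half a pitch, so each
    sample's nearest vertical neighbors sit at 45 degrees to it."""
    return [(x * pitch + (0.5 * pitch if y % 2 else 0.0), y * pitch)
            for y in range(lines) for x in range(samples)]

print(rectangular(2, 3))
print(quincunx(2, 3))
```

On the quincunx grid the second line's samples fall midway between those of the lines above and below, which is what gives the diagonal sample alignment those proposals exploited.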
Opening the barn door
When researchers looked into how to compress images, the work first had to be done to define some basic parameters, like how many samples per second it took to reasonably represent and transmit quality images. When SMPTE and EBU did the heavy lifting of trying to define a standard (601) for sampling images, it made a huge difference to the advancement of digital imaging.
It led directly to the development of the first practical digital recorder from Sony, which was uncompressed. Picture quality was never approximate; it was a full and exact reproduction of the sampled image. One might argue that sampling itself threw away valuable content, and some at the time no doubt did. Indeed, for some applications, that was certainly true. However, the adoption of the 601 standard opened the barn door and let research proceed on bit-rate reduction of standardized streams.
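To appreciate why an uncompressed 601 recorder was such a feat, the standard's own numbers tell the story. A back-of-the-envelope calculation, using the sampling rates defined in BT.601 (13.5 MHz for luma, 6.75 MHz for each of the two chroma components in 4:2:2):

```python
# Data rate of a Rec. 601 4:2:2 component stream, from the
# sampling rates the standard defines.

LUMA_RATE = 13.5e6      # Y samples per second
CHROMA_RATE = 6.75e6    # Cb (or Cr) samples per second

samples_per_second = LUMA_RATE + 2 * CHROMA_RATE  # 27 million samples/s

for bits in (8, 10):
    bitrate = samples_per_second * bits
    print(f"{bits}-bit 4:2:2: {bitrate / 1e6:.0f} Mb/s")
# 8-bit -> 216 Mb/s, 10-bit -> 270 Mb/s
```

Well over 200 Mb/s of sustained payload to tape, in mid-1980s hardware, with no bit-rate reduction whatsoever: that is the scale of the engineering achievement, and also the economic pressure that made compression research so attractive.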
DCT-based compression was not invented for 601 sampled images, but work quickly centered on using the DCT as the basis for compression. Two international standards bodies were established to harmonize the work on compression worldwide. JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group) created, in due-process bodies, the base standards we still use today, from which all manufacturers and users would benefit. Other work preceded MPEG and JPEG, including work that created standards adopted widely in Europe but never very successful in North America, for reasons I have never fully understood.
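To show concretely why the DCT became the workhorse, here is a minimal, naive 2-D DCT-II in Python, the textbook transform underlying JPEG and MPEG intra coding. This is a sketch for illustration, not any codec's actual implementation; real encoders use fast factorized forms rather than this O(N^4) loop.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an N x N block of samples."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = cu * cv * s
    return out

# A flat 8x8 block of mid-gray: the DCT packs all of its energy into
# the single DC coefficient; every AC coefficient is (near) zero.
flat = [[128] * 8 for _ in range(8)]
coeffs = dct2(flat)
print(round(coeffs[0][0]))  # 1024 (= 8 * 128); the rest are ~0
```

That energy compaction is the whole trick: smooth image areas collapse to a handful of significant coefficients, and quantizing and entropy-coding those few numbers is what turns a transform into bit-rate reduction.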
The ETSI (European Telecommunications Standards Institute) compression standard (ETS 300 174) used rates based on integer fractions of European data transmission standards, especially the 34Mb/s E-3 rate. One-half and one-quarter of that rate yielded roughly 17Mb/s and 8.5Mb/s, which the EBU deployed widely on its satellite network in the ’90s. But with the rapid advances in MPEG compression, it became clear that MPEG offered higher quality at the same bit rate, and thus better economics for interconnection. Eventually, broadcasters worldwide adopted MPEG-2, most based on the DVB specifications, which facilitated interoperability.