Ingest can be defined as the storage of content, together with the logging by a media asset management system of the descriptive and technical metadata needed to identify and locate audio, video, data or any other asset resident in a broadcast infrastructure.
This broad definition addresses all forms of content storage, including disk and archival tape. Simply stated: for content to be considered “ingested” it must also be catalogued by the asset management system, regardless of the storage medium or format.
Repurposing general formats
Video content within a facility can be broken down into three generalized categories:
Repurposing requires that content ingest must be done in a way that accounts for the variety of delivery channels needed.
Resident content can be in formats ranging from 1080i and 720p HD and SD to analog video and any number of compression formats, such as DV, HDV, P2, IMX and so on.
User-generated video (UGV), also referred to as viewer-generated programming (VGP), is becoming an increasingly prevalent source of content, especially for news, adding HDV and other compression formats to those a station must support.
Each new compression format requires the addition of another conversion codec to the infrastructure. At present, the emerging use of AVC/MPEG-4/H.264 and VC-1 compression and their coexistence with widely installed MPEG-2 DTV distribution and consumption devices are creating an engineering challenge.
This evolution to non-MPEG-2 compression is happening right now. HBO has revealed that it intends to use MPEG-4 compression for program distribution beginning in 2008. With the continual emergence of new compression formats, infrastructure design must be treated as dynamic, and change must be anticipated.
Consider the source
Planning the workflow for source/ingest storage requires creative engineering. The source formats and ultimate delivery must be considered. Multiple video format conversions should be avoided whenever possible (zero conversion is ideal).
For example, a major event such as the Daytona 500 or the Grammy Music Awards may have relevance for decades. Storing an HD baseband copy with a bit rate of 1.5Gb/s requires a disk write data rate of 187.5MB/s and occupies 675GB per hour. Storage that can support this data rate is expensive and will be a RAID array or other parallel write configuration. A three-hour event will need just over 2TB; clearly, this is an unrealistic standard operating procedure to use for all ingested content.
When HD content is compressed to 100Mb/s MPEG-2, the disk write rate is reduced to 12.5MB/s, occupying 45GB per hour. With 5.1 audio (six channels), approximately one more gigabyte is needed per hour. At three hours of primetime HD programming per day, one year will require 50TB. This is just for finished programs; what about the unedited sources?
Other possible storage formats to consider include the MPEG elementary stream (ES), packetized ES (PES) or even the entire transport stream (TS). The maximum TS bit rate is slightly less than 20Mb/s, which requires a disk write data rate of 2.5MB/s and 9GB of storage per hour. At three hours per day for 365 days, that is nearly 10TB per year. A three-hour-per-day HD elementary stream at 12Mb/s requires nearly 6TB per year.
If the program content is backhauled from the site as 270Mb/s compressed HD, the disk write data rate is 33.75MB/s and uses 121.5GB of storage per hour. Again, it is not cost effective to store all content at a high bit rate or as uncompressed video.
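The storage figures above follow directly from the bit rates. A brief Python sketch (the format names and the assumption of decimal units, 1GB = 1000MB, are illustrative) reproduces the arithmetic for each format discussed:

```python
# Back-of-the-envelope storage math for the bit rates discussed above.
# Assumes decimal units (1 GB = 1000 MB), matching the article's figures.

def write_rate_mb_s(bit_rate_mbps: float) -> float:
    """Disk write data rate in MB/s for a given bit rate in Mb/s (8 bits per byte)."""
    return bit_rate_mbps / 8

def storage_gb_per_hour(bit_rate_mbps: float) -> float:
    """Storage consumed per hour of content, in GB."""
    return write_rate_mb_s(bit_rate_mbps) * 3600 / 1000

def storage_tb_per_year(bit_rate_mbps: float, hours_per_day: float) -> float:
    """Storage per year, in TB, at the given daily ingest hours."""
    return storage_gb_per_hour(bit_rate_mbps) * hours_per_day * 365 / 1000

# Bit rates from the text, in Mb/s (labels are this sketch's own).
formats = {
    "HD baseband (1.5 Gb/s)":        1500,
    "Compressed backhaul (270 Mb/s)": 270,
    "MPEG-2 HD (100 Mb/s)":           100,
    "Transport stream (20 Mb/s)":      20,
    "Elementary stream (12 Mb/s)":     12,
}

for name, rate in formats.items():
    print(f"{name}: {write_rate_mb_s(rate):.2f} MB/s, "
          f"{storage_gb_per_hour(rate):.1f} GB/hr, "
          f"{storage_tb_per_year(rate, 3):.1f} TB/yr at 3 hr/day")
```

Running the comparison confirms the figures in the text: 187.5MB/s and 675GB per hour for baseband HD, 45GB per hour at 100Mb/s (about 50TB per year with audio), and 9GB per hour for a full transport stream.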
Future content repurposing must also be considered. If the full bandwidth copy were used, it would eventually be compressed to an MPEG TS. But if the source was stored as an MPEG TS, a rebroadcast could be accomplished by using this TS copy of the event. An added incentive of this practice is that by storing a transport stream copy, compression resources are free to be used for other purposes.
At some time in the near future, however, repurposing big events for presentation in digital cinema theaters may become an attractive revenue source. New York's Metropolitan Opera is doing this now for live broadcasts. Expanding compressed content, whether from a transport stream or from the 100Mb/s or 270Mb/s versions, may or may not produce visible artifacts. In this instance, a full-bandwidth tape copy would be ideal.
Some form of compression must be used if electronic delivery is required. A compressed copy is the most efficient approach for distribution, but it may not produce the highest quality video. Only real world testing and evaluation can identify the best content delivery format for a given presentation scenario.
Editing adds to the problem
If the source content is to be edited for repurposing, with graphics and scene transition effects added, working with uncompressed content will yield the highest-quality finished product. It is really the only practical option: commercially available compressed-domain effects processing is largely limited to simple features such as logo insertion, and repeated compression and expansion of content as it moves through the editing process will degrade image quality.
Add to this the constraints of limited production facilities, editing suites, compression equipment and operations personnel, and it becomes obvious that a holistic view of the complete broadcast operation center must be taken before committing to any potential system design. Throw in multiple broadcasts (an OTA multicast or a cable network that simultaneously delivers more than one channel), Web and handheld distribution, and it gets even more complex. Perhaps broadcast engineers should turn to computer simulations of system design, workflows and equipment usage to explore infrastructure design. Service-oriented architecture (SOA) is a highly useful way to model broadcast and production workflows.
There is no shortage of manufacturers offering ingest servers with a wide variety of feature sets. Some have partnered with editing and media management vendors to produce turnkey systems. While this is a good thing, the solutions are still somewhat proprietary.
In this discussion, storage requirements for various video sources that are ingested into the facility’s asset management system have been presented. The design goal may be to store all content in both its original format and the house format. More information is still necessary before attempting to design a supporting infrastructure for multichannel distribution.
The next Transition to Digital will investigate audio ingest issues.