A great deal has been written about the importance of file-based workflows for broadcast applications. Much of the discussion revolves around video servers and nearline storage systems. These devices are central to file-based workflows, holding files for some period of time. Less attention is paid to the processes and circumstances that move media through these devices, or to what controls how media changes through its life cycle.
The life of a media asset often begins before there is a video or audio essence file. For example, it may be a placeholder in a newsroom system that calls for a video clip to add substance to breaking news. It could also be an entry in a schedule for a soap opera or investigative report that is still in planning or production, or any of a vast array of syndicated media or commercial assets that need to be scheduled for playout.
In the case of a newsroom, typically the media asset will reside on a shared storage system until it is archived. Other content may first enter a facility as a file or baseband signal and either be ingested directly to a video server, sent straight to nearline or archive storage, or passed very quickly from an ingest server to nearline or archive storage.
Regardless of what triggers the need for a media asset in a broadcast environment, a human being helps create that need, and a varying number of other people will want to track, log, modify, monetize and eventually purge that media from the workflow.
We will assume for the purposes of this discussion that media needs to exist in some form of video server for playout at some point in its life and that it will be archived once its immediate usefulness has been exhausted.
It makes sense that a number of automated steps exist to track the media asset throughout its life cycle so that its status and location are known at all times. This requires a system that learns about the existence of the media when it first enters the broadcast workflow and tracks its changes through retirement to a tape- or disk-based archive system. The asset management system should always retain a record that the media existed and be able to restore it in the future for further manipulation and reuse.
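As a rough sketch of what such tracking involves, the Python fragment below models an asset record with lifecycle states and a location history. The states and field names are illustrative, not those of any particular asset management product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class LifecycleState(Enum):
    PLACEHOLDER = "placeholder"   # scheduled, but no essence file yet
    INGESTED = "ingested"
    ONLINE = "online"             # resident on a playout server
    NEARLINE = "nearline"
    ARCHIVED = "archived"

@dataclass
class AssetRecord:
    """Minimal asset-management entry: status and location, always known."""
    asset_id: str
    state: LifecycleState = LifecycleState.PLACEHOLDER
    locations: list = field(default_factory=list)   # (storage tier, path)
    history: list = field(default_factory=list)     # (timestamp, event)

    def move_to(self, state: LifecycleState, tier: str, path: str) -> None:
        """Record a lifecycle transition so the asset is never lost track of."""
        self.state = state
        self.locations.append((tier, path))
        self.history.append((datetime.now(timezone.utc),
                             f"{state.value} at {tier}:{path}"))
```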
Low-resolution browse copies can help verify content when conducting a search. These can be generated when the media enters a video server or when it is created and/or modified. Ideally, the low-resolution content should be in a generic format so the browse copies can be stored in an open storage environment such as nearline storage, where asset management and browse/editing systems have equal access to them.
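One plausible way to generate such a browse copy is to shell out to a transcoder as media arrives. The sketch below assumes the open-source ffmpeg tool is on the path; the codec and bit-rate choices are purely illustrative.

```python
import subprocess
from pathlib import Path

def make_browse_proxy(hires: Path, proxy_dir: Path) -> Path:
    """Transcode a high-resolution file into a small H.264 browse copy."""
    proxy = proxy_dir / (hires.stem + "_proxy.mp4")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(hires),
         "-vf", "scale=640:-2",               # reduced raster for browsing
         "-c:v", "libx264", "-b:v", "800k",   # low-bit-rate video
         "-c:a", "aac", "-b:a", "96k",        # low-bit-rate audio
         str(proxy)],
        check=True,
    )
    return proxy
```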
New technologies for “fingerprinting” low-resolution content allow the asset management system to uniquely link it with every instance of the high-resolution content, whether in online, nearline or archive storage. This enables, for example, partial restores by absolute reference rather than by clip name alone.
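Conceptually, the link works like a lookup table keyed on the fingerprint. In the sketch below the fingerprint computation itself is stubbed out (real systems derive a perceptual signature from the picture content, so proxy and master yield the same value); the point is that one key resolves to every instance.

```python
from collections import defaultdict

# One fingerprint -> every known instance of that content, at any
# resolution, in any storage tier.
instances: dict = defaultdict(list)

def register_instance(fingerprint: str, tier: str, path: str) -> None:
    instances[fingerprint].append((tier, path))

def find_all_copies(fingerprint: str) -> list:
    """Locate every copy (online, nearline or archive) by absolute
    reference, not clip name -- e.g. to drive a partial restore."""
    return instances[fingerprint]
```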
Directly linking the low-resolution copy's location in the asset management system lets the operator immediately preview the content before making decisions on what to do with it. It also allows the operator to determine if the copy is the right version. This is particularly beneficial with assets that have the same or similar names and metadata but are significantly different in content.
Modifying media for delivery
Size often matters. The original media asset may have been uncompressed or captured using high-quality compression during production to get the best possible image quality with the least amount of generational loss. This may well be the level of quality at which the master or “mezzanine” copy will be stored. However, this is likely less than optimal for delivery.
Many versions of the original may be spawned from the mezzanine copy as the number of distribution platforms continues to grow. We are now tracking dozens of potential versions, all generated from the same original media, in different storage systems throughout the organization. The situation is magnified as promos and teases are cut and linked to the original asset.
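A sketch of that fan-out, assuming a hypothetical transcode() wrapper along the lines of the ffmpeg call above; the profile names and parameters stand in for whatever each distribution platform actually requires:

```python
# Hypothetical delivery profiles; real platform specs vary widely.
DELIVERY_PROFILES = {
    "broadcast": {"codec": "mpeg2video", "bitrate": "50M", "height": 1080},
    "web":       {"codec": "libx264",    "bitrate": "5M",  "height": 720},
    "mobile":    {"codec": "libx264",    "bitrate": "1M",  "height": 360},
}

def spawn_versions(asset_id: str, mezzanine: str) -> list:
    """Generate one rendition per platform, all linked to the same asset."""
    outputs = []
    for name, profile in DELIVERY_PROFILES.items():
        out = f"{asset_id}_{name}.mxf"
        # transcode(mezzanine, out, **profile)   # hypothetical wrapper
        outputs.append((asset_id, name, out))    # register against the original
    return outputs
```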
Good practice dictates that any operator with the appropriate privileges should be able to find the location of the right version of the media, determine its current status, add to or modify its metadata or the essence itself, and move or modify it for the next step in the workflow. This means searching is unified throughout the workflow, with user rights controlling the level of interaction the operator has with the asset. In this way, there is no mystery as to the location or current state of the media.
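A minimal way to express “user rights control the level of interaction” is a rights mask checked at every operation; the roles below are purely illustrative.

```python
from enum import Flag, auto

class Rights(Flag):
    VIEW = auto()
    EDIT_METADATA = auto()
    EDIT_ESSENCE = auto()
    MOVE = auto()

# Illustrative role table; a real system would manage these centrally.
ROLES = {
    "journalist":    Rights.VIEW | Rights.EDIT_METADATA,
    "editor":        Rights.VIEW | Rights.EDIT_METADATA | Rights.EDIT_ESSENCE,
    "media_manager": Rights.VIEW | Rights.EDIT_METADATA | Rights.MOVE,
}

def permitted(role: str, action: Rights) -> bool:
    """One unified search for everyone; rights gate what each hit allows."""
    return action in ROLES.get(role, Rights(0))
```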
Creating value from media
Making money on an aired commercial is unlikely without proof that it played back on schedule and for its full duration. The “as-run” log is a proven output of playout automation that shows when content was played and for how long. Tight integration with the sales and scheduling system is required to monetize this effort in a way that removes errors and adds simplicity. Enter BXF (Broadcast eXchange Format) and its rich metadata and associated messaging.
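To give a flavor of the messaging (and only a flavor: the element names below are simplified illustrations, not the actual SMPTE BXF schema), an as-run notification might be assembled like this:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def as_run_message(house_id: str, start: datetime, duration_s: float) -> bytes:
    """Build a simplified as-run event; real BXF messages follow the
    SMPTE 2021 schemas and carry far richer metadata."""
    msg = ET.Element("AsRunEvent")
    ET.SubElement(msg, "HouseNumber").text = house_id
    ET.SubElement(msg, "AiredStart").text = start.isoformat()
    ET.SubElement(msg, "AiredDuration").text = f"{duration_s:.2f}"
    ET.SubElement(msg, "Status").text = "AiredComplete"
    return ET.tostring(msg, encoding="utf-8")

print(as_run_message("COM1234", datetime.now(timezone.utc), 30.0).decode())
```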
The traffic system generates a playlist of content for air, which is automatically imported into the playout automation system's playlist. As a clip approaches its predetermined pre-air time window, automation will collect the media from nearline or archive storage if it is not already on the playout server. Issues are reported if the media is missing or unavailable, leaving time to react by ingesting the media into the playout server or replacing it with alternative content.
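In outline, that caching step reduces to a look-ahead loop of roughly this shape; the two-hour window and the callback names are placeholders, not any product's API.

```python
import datetime as dt

LOOK_AHEAD = dt.timedelta(hours=2)   # placeholder pre-air window

def precache(playlist, on_server, restore, alert):
    """Pull upcoming clips from nearline/archive; flag anything missing
    early enough that an operator can ingest or substitute content."""
    now = dt.datetime.now(dt.timezone.utc)
    for house_id, air_time in playlist:   # (house ID, scheduled air time)
        if now <= air_time <= now + LOOK_AHEAD and house_id not in on_server:
            if not restore(house_id):     # try nearline first, then archive
                alert(f"{house_id} unavailable -- ingest or replace before air")
```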
Status is fed back immediately to the scheduling/sales system once the asset has played out. This avoids the need to reconcile as-run logs at the end of the day. Spotting problems and rectifying them is, therefore, possible at the sales level as well as the master control level. The result is a reduction in make-goods, which adds to the bottom line.
The dynamic capabilities of BXF can also instigate late changes in the schedule directly from the scheduling/sales system. This not only supports the instant scheduling of make-goods within the same time period, thereby minimizing losses, but also opens up the possibility of more dynamic sales and promotional models.
Moving media in a workflow
Whether it's a tape, DVD, hard drive, file or data stream, knowing where your media is and how quickly you can move it around your workflow is important to operational efficiency. The knowledge of whether the movement of media was successful is important to overall quality. Asset management is key to the task of moving content from nearline or archive storage, and verifying that the media is intact after the transfer is complete. (See Figure 1.)
How media, particularly files, reach their target destination(s) varies; FTP is the most common approach. When moving media over great distances, it is important to know the size of your data pipe and the latency of the link. The latency will help dictate whether file transfer acceleration is required to add efficiency to the process, ensuring the full value of the purchased bandwidth is attained while moving the media in a timely fashion.
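The arithmetic is worth doing explicitly. A single TCP stream can never exceed its window size divided by the round-trip time, so on a long-haul link a big pipe can sit mostly idle:

```python
def tcp_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Throughput ceiling of one TCP stream: window size / round-trip time."""
    return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

# A classic 64 KB window over a 100 ms transcontinental link:
print(tcp_ceiling_mbps(64 * 1024, 100))   # ~5.2 Mb/s, regardless of pipe size
```

That gap between link capacity and achieved throughput is what acceleration techniques, such as larger windows, parallel streams or UDP-based protocols, are bought to close.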
Scheduling tools and network optimization techniques further enhance efficient delivery, allowing the asset management system to choose when and by what route to send the material to even out network usage and prevent bottlenecks.
Automating the task of moving content, whether based on play-to-air schedules, archive and restore needs, or production workflow requirements, can become daunting. Ensuring that a versatile rules-based engine drives the processes, with tight integration to the asset management system, will simplify the operator's life. He or she will have a real-time view of an asset in all its current locations and a history of its travels.
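A toy version of such an engine, with conditions and actions that are obvious stand-ins for the real thing:

```python
# Each rule pairs a condition on an asset's state with a movement action.
RULES = [
    (lambda a: a["hours_to_air"] <= 2 and a["tier"] != "online",
     "restore_to_playout"),
    (lambda a: a["aired"] and a["tier"] == "online",
     "move_to_nearline"),
    (lambda a: a["days_idle"] > 30 and a["tier"] == "nearline",
     "archive"),
]

def evaluate(asset: dict) -> list:
    """Return every action whose condition fires; logging each resulting
    move is what gives the operator a real-time view and a travel history."""
    return [action for condition, action in RULES if condition(asset)]
```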
Why we're moving file-based media
A single piece of media can eventually exist in multiple locations and in a variety of quality levels. It can be cut to length or edited into promos before transmission. We move media because there is finite storage capacity in transmission servers. We also move media because play-to-air devices in the expanding universe of multiplatform delivery systems have different needs.
We archive because we believe the media has value in the future. The media, we hope, will eventually be recycled, repackaged and transmitted on future generations of multiplatform delivery broadcast devices.
Andrew Warman is product marketing manager for servers, editing and graphics, and Chris Simons is vice president of automation and asset management for Harris Broadcast Communications.