Online streaming video has become a major part of every broadcaster's technology and business plans over the last few years. In October 2010, this new mainstream acceptance of online video was reflected in the United States by the passage of the Twenty-First Century Communications and Video Accessibility Act (CVAA), which instructed the FCC to create rules ensuring that Internet-delivered videos would be held to accessibility standards similar to those applied to traditional television. The most important of these requirements is mandatory closed captioning for the deaf and hard-of-hearing, which has been a requirement for most television programming since 1996.
In January 2012, the FCC released a Report and Order detailing the types of Internet video content that would be subject to closed captioning requirements, and providing a schedule for compliance. These rules will eventually require captioning for all “full-length” video programming that has also been aired over traditional TV channels with closed captions. Consumer-generated and other videos that are shown only over the Internet, or videos that are only clips or outtakes of full TV programs, will not be covered.
What's the timeline? The compliance deadlines set by the commission give broadcasters six months (September 2012) to comply for new prerecorded programming; 12 months (March 2013) for new live programming such as news and sports; and 18 months to bring into full compliance archival material that was already publicly streaming before the passage of the rules.
While most major broadcasters' online streaming sites have already begun to implement closed captioning in some form, few, if any, are already in compliance with the full regulations. This is especially true because the new FCC rules require that the “online captioning experience is equivalent to the television captioning experience.” The rules specifically state that this will require site operators to provide advanced decoder options such as adjustable colors and sizes. They may also rule out altogether the now-common practice of providing captions in a separate, non-integrated window apart from the video presentation. These may seem like small details for the many broadcasters who have no captioning for online content yet, but they are essential to take into consideration at an early stage to avoid future problems.
The most difficult task for many broadcasters will likely be the development of a workflow for streaming captions authored in real time during live news and sports. For simulcasts, where the program is distributed concurrently through broadcast and online, this requires a system where broadcast caption data is captured as it is encoded onto the production video signal, and uplinked in real time to the streaming media source server for synchronized delivery with the consumer video streams.
An additional difficulty for live news captions is that while many existing streaming media player technologies have built-in support for display of captions from a previously prepared caption file, most are not prepared to display the incremental, roll-up style of captioning that is used during news broadcasts. Smooth integration with existing automation systems that recognize commercials and various other forms of content in the streaming feed is also a concern.
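The display behavior that makes roll-up captions tricky for players can be sketched in a few lines. The following Python is a minimal, hypothetical model (the class and method names are illustrative, not from any real player SDK): characters arrive incrementally on a bottom row, and a carriage-return command scrolls the visible window up, discarding the oldest row once the window is full — typically two to four rows on television decoders.

```python
from collections import deque

class RollUpBuffer:
    """Hypothetical sketch of roll-up caption display logic."""

    def __init__(self, rows=3):
        self.rows = deque(maxlen=rows)  # committed rows; oldest drops off
        self.current = ""               # row being typed in real time

    def append_text(self, text):
        # Text arrives character by character, not as complete cues
        self.current += text

    def carriage_return(self):
        # Scroll command: commit the in-progress row; the deque's
        # maxlen silently discards the oldest row when full
        self.rows.append(self.current)
        self.current = ""

    def visible(self):
        # What the viewer sees: committed rows plus the row in progress
        return list(self.rows) + ([self.current] if self.current else [])
```

A player built around prepared caption files expects complete, timed cues; supporting live roll-up means handling this kind of incremental, stateful display instead.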
Current streaming captioning solutions
Solutions to the challenge presented by streaming live captions taken from broadcast may need to vary significantly, at least at an early stage, for different server and player technologies. Adobe Flash, for PC browser viewing, and HTTP Live Streaming, for Apple iOS devices, are currently the most relevant technologies for sites seeking the broadest possible compatible user base, but Microsoft Silverlight also maintains a presence, and use of new HTML5 video technologies is widely expected to grow in the next several years.
At present, there is no single file type that works across all stream types; the FCC recommended (but did not require) use of the Timed Text format, but many commonly used players do not yet read these files. Meanwhile, there is even more variation in player support for real-time captions, since a previously recorded file will not be available at the beginning of the stream.
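For readers unfamiliar with the Timed Text format mentioned above, the sketch below generates a bare-bones TTML document from a list of cues using Python's standard library. This is a simplified illustration of the document shape (a `tt` root in the TTML namespace containing timed `p` elements), not a full implementation of any TTML profile; the function name and cue structure are assumptions for the example.

```python
import xml.etree.ElementTree as ET

TT_NS = "http://www.w3.org/ns/ttml"
ET.register_namespace("", TT_NS)  # serialize TTML as the default namespace

def make_ttml(cues):
    """Build a minimal Timed Text document from (begin, end, text) tuples.

    Times are clock strings like "00:00:01.000". Styling, regions and
    other profile features are omitted for brevity.
    """
    tt = ET.Element(f"{{{TT_NS}}}tt")
    body = ET.SubElement(tt, f"{{{TT_NS}}}body")
    div = ET.SubElement(body, f"{{{TT_NS}}}div")
    for begin, end, text in cues:
        p = ET.SubElement(div, f"{{{TT_NS}}}p", begin=begin, end=end)
        p.text = text
    return ET.tostring(tt, encoding="unicode")

doc = make_ttml([("00:00:01.000", "00:00:03.500", "GOOD EVENING.")])
```

Even a document this simple illustrates the gap: a player must parse the XML, schedule each cue against the video clock, and render it — none of which helps when captions are being authored live and no file exists at stream start.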
Assuming for a moment that a suitable format (or set of formats) has been settled on, the requirements broadcasters should set for a solution are clear. The system should:
Repurpose the closed captions already in use in the broadcast, with little or no additional labor required on the part of the provider of the real-time transcription service or master control staff;
Enable both live simulcasting of the captions with minimal delay, and archival recording of the captions for use in future on-demand streaming;
Use existing automation systems to recognize commercials and other content that will be replaced on the streaming feed; and
Make use of a decoder on the consumer end with a maximal feature set to satisfy the “equivalent experience” mandate from the FCC rules.
Signal path example
To demonstrate the type of system that could enable efficient repurposing of live broadcast captions while meeting many of the above requirements, Figure 1 shows an example workflow using Adobe's Flash Media Server technology.
In this workflow, software included on the broadcast closed caption encoder directly uplinks closed captions in the Timed Text format to the Flash Media Server, with the same destination and required network routing as the video streaming software (Adobe's Flash Media Live Encoder).
The closed caption encoder is a logical uplink point because there is already direct access to all captions for the program, regardless of source, including remote delivery by IP or modem, locally connected teleprompters and captions already included in the upstream video. Automation can be used to block caption uplink during commercials and interstitials, as ad content is likely to differ between the broadcast and online feeds.
Uplinking directly from the closed caption encoder also provides a new opportunity to reduce caption delay on the online stream; while live broadcast captioning is always several seconds behind due to the transcription delay, this delay can be removed from the online feed in a programmable way, since it is likely that the total latency of the video path through the streaming compression software is greater than the original caption latency.
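The delay-reduction idea above can be stated as a one-line timestamp adjustment. This is a hedged sketch, not vendor logic: the caption can be advanced by at most the streaming encoder's own video latency, since advancing it further would reference video that has not yet been transmitted.

```python
def compensate(caption_pts, transcription_delay, encoder_latency):
    """Shift a caption timestamp earlier to offset live transcription lag.

    All values are in seconds. `caption_pts` is when the caption was
    produced relative to the stream clock; `transcription_delay` is the
    estimated lag of the live captioner behind the audio; and
    `encoder_latency` is the video path delay through the streaming
    compression software, which bounds how far we can safely advance.
    """
    advance = min(transcription_delay, encoder_latency)
    return caption_pts - advance
```

When the encoder latency exceeds the transcription delay, the caption can land fully in sync with the audio on the online feed; otherwise the residual lag is whatever part of the transcription delay the video path could not absorb.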
At the Flash Media Server, the uplinked closed caption data merges again with the compressed video, and is at this point synchronized permanently to the stream. All consumers will see the captions synchronized perfectly to the video, regardless of their connection quality. Also, if archival recording is performed on the server, the synchronized captions will be preserved within the saved video file for future on-demand playback. Additional recording capability is available on the caption encoder, which can provide complete Timed Text files, saving the real-time broadcast captions as commanded by the automation system.
Consumers will view the captions through the streaming player provided on the broadcaster's website. A component included in Adobe's basic development kit will play the Timed Text captions, though without some of the features standard on typical television decoders. Additional third-party plug-in components are available to augment these features, providing the rich experience captioning users are accustomed to.
For broadcasters seeking to offer a live simulcast of all of their televised offerings, this system would provide a single workflow that would automatically ensure that all content captioned on the broadcast feed would also have equivalent captioning on the online stream. In other environments, a system like this could be used for news and live events only, while caption files obtained from the original post-production captioning vendors would be posted separately to the media servers for pre-recorded content.
Meeting the deadline
Full streaming caption compliance, on the tight deadlines specified by the FCC, will be a big subject this year for many broadcasters. As with many streaming media tasks, the proliferation of format technologies will be one of the biggest challenges to identifying the required solutions, and demonstrating compliance with the necessary set of supported software and devices.
Successful implementations do exist that will provide broadcasters with compliance certainty, while providing consumers with the level of experience they have come to expect from broadcast captioning over the past 20 years.
Bill McLaughlin is Software Systems Manager at EEG.