When discussing IT networks for broadcast and post production, there are two main issues to consider. The first is streaming vs. file transfer. The second is performing these functions over a local area network (LAN) (within a facility) vs. over a wide area network (WAN). The single asterisk (*) in Figure 1 indicates that current ubiquitous IT-based protocols do not support functions such as partial file transfer and automatic resumption in the case of an interruption. The double asterisks (**) indicate that successful deployment over a WAN almost always requires a private network, dark fiber and/or ATM using permanent virtual circuits (PVCs).
Figure 1. There are two main axes to the discussion of IT technology for broadcast and post production: streaming vs. file transfer and local vs. WAN.
The drawing shows that, as you might expect, LANs and WANs designed for file transfer can be used to move video files. They can also be used to stream video in real time, but with caveats. In the LAN environment, the user must exercise caution; in the WAN environment, a user without a purpose-built network is unlikely to get results that meet his or her needs. Local streaming using TCP/IP is possible, but the user must carefully design the network to support isochronous operation, and even then interruptions can occur. Work is underway in the Video Services Forum to identify user requirements for IP streaming over public networks.
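The isochronous requirement can be made concrete with a small pacing sketch (Python, purely illustrative; the article does not prescribe an implementation, and the `send` callback is a hypothetical stand-in for the actual network write). The sender transmits on a fixed schedule derived from the stream's bit rate; whatever delay variation the network then adds is what the receiver must buffer against.

```python
import time

def packet_interval(bitrate_bps: float, packet_bytes: int) -> float:
    """Seconds between transmissions for a constant-bit-rate stream."""
    return packet_bytes * 8 / bitrate_bps

def pace_stream(bitrate_bps, packet_bytes, num_packets, send):
    """Send packets on a fixed schedule so the stream is isochronous
    at the source; the network must preserve that timing."""
    interval = packet_interval(bitrate_bps, packet_bytes)
    deadline = time.monotonic()
    for seq in range(num_packets):
        send(seq)
        deadline += interval
        sleep_for = deadline - time.monotonic()
        if sleep_for > 0:
            time.sleep(sleep_for)
```

For example, a 25 Mbits/s stream in 1500-byte packets requires one packet every 480 microseconds; any switch or host that cannot keep that schedule introduces jitter the receiver must absorb.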
Video over LANs
Inside a facility, Fibre Channel and Ethernet are the primary technologies used for networking. Most video applications use Fibre Channel to establish a channel over the network so that remote storage looks like it is physically connected to the local computer using SCSI (see Broadcast Engineering, June 2001, Computers and Networks, Fibre Channel Storage). Fibre Channel provides high data rates (up to 1 Gbits/s if optimized protocols are used), and can be designed so that the network is quite reliable.
TCP/IP over Ethernet is used in almost all post production and broadcast facilities. Video users move files over Ethernet all the time. Gigabit Ethernet (Gig-E) has a throughput of about 700 Mbits/s. 10Gig-E is under development and will likely have a throughput of 7 Gbits/s. Ethernet switches are now commodity products, allowing network designers to provide redundant links and to easily expand network capacity (see Broadcast Engineering, April 2001, Computers and Networks, Cable and Wiring for LANs). Furthermore, 100Base-T has become a commodity with very low prices due to huge volumes sold worldwide. You can expect Gig-E prices to fall rapidly in the next 18 months.
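To put those throughput numbers in perspective, a little arithmetic shows what they mean for moving an hour of video. This is a sketch: the 25 Mbits/s DV rate is an assumed example payload, and the 70 Mbits/s figure for 100Base-T assumes the same roughly 70 percent efficiency cited above for Gig-E.

```python
def transfer_seconds(file_bits: float, throughput_bps: float) -> float:
    """Time to move a file at a given effective throughput."""
    return file_bits / throughput_bps

one_hour_dv = 25_000_000 * 3600   # one hour of 25 Mbits/s DV = 90 Gbits

# Assumed effective throughputs: ~70 Mbits/s for 100Base-T,
# ~700 Mbits/s for Gig-E (the figure cited in the text).
print(transfer_seconds(one_hour_dv, 70_000_000) / 60)    # ~21 minutes on 100Base-T
print(transfer_seconds(one_hour_dv, 700_000_000) / 60)   # ~2 minutes on Gig-E
```

In other words, Gig-E turns a coffee-break transfer into a near-instant one, which is why falling Gig-E prices matter to video facilities.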
Asynchronous transfer mode (ATM) can be used inside a facility, but it may not be the best answer. ATM cells are small compared to typical video payloads, and the header consumes a relatively large share of each cell, making ATM less efficient than Internet-protocol (IP) technology for this traffic. Additionally, some claim that setting up an ATM environment requires specialists trained in configuring and maintaining ATM, and that the cost of ATM is high compared to conventional IP networks. For these reasons, ATM is not likely to become dominant in this application.
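The cell-overhead argument is easy to quantify (a sketch using standard framing sizes; ATM adaptation-layer overhead and Ethernet framing are ignored for simplicity):

```python
# ATM: every 53-byte cell carries a 5-byte header and 48 bytes of payload.
ATM_CELL = 53
ATM_PAYLOAD = 48
atm_efficiency = ATM_PAYLOAD / ATM_CELL              # about 0.906

# IP: a full 1500-byte Ethernet MTU loses only the IPv4 + UDP headers.
IP_MTU = 1500
IP_UDP_HEADERS = 20 + 8
ip_efficiency = (IP_MTU - IP_UDP_HEADERS) / IP_MTU   # about 0.981

print(atm_efficiency, ip_efficiency)
```

Roughly nine percent of an ATM pipe is consumed by cell headers before any adaptation-layer overhead, versus about two percent of header overhead for large IP packets.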
In summary, in the local environment, fast network speeds are now available at near commodity pricing. Furthermore, the network speeds available now are approaching those needed to support multi-user distributed applications that regularly move video over the network. This is in contrast to several years ago when the highest speeds available were in the 6 Mbits/s to 7 Mbits/s range.
So, as of 2002, if you want to build an IT network for video, you can do so. But you still need to employ your hard-won engineering skills to be sure that the network has adequate capacity, and you must be sure to keep video traffic separate from the business network.
Video over WANs
When moving material between two facilities, the predominant technologies are ATM and IP, both running over synchronous optical networks (SONET). In many locations, you can now use ATM to provide video service between two distant facilities. Through the use of permanent virtual circuits, service providers can create virtual point-to-point circuits that are capable of delivering 100 percent of their rated bandwidth 100 percent of the time. Provisioning these circuits takes engineering skill, and the service is not yet offered everywhere, but it is available. ATM pipes can be much larger than required for video streams, so bandwidth in the ATM environment is generally not a problem. You might think that ordering a big pipe from your local service provider is a waste of money. After all, why pay for such a big pipe when occasional video traffic is only going to take a portion of this bandwidth?
You might want a bigger pipe for several reasons. First, you should realize that, even if you have a big pipe, the service provider is only going to transmit the active payload in the pipe. For this reason, you may only have to pay for the portion of the pipe you use. Second, the extra bandwidth is almost immediately available if you need more capacity. Finally, the pipe can be used for other services besides video, such as voice and data. When talking to your video service provider, you may be surprised to hear that prices have become very competitive over the last several years.
It is also possible to order very big IP pipes from a service provider. These pipes are capable of handling video, but there are severe limits on the ability of IP-based networks to carry real-time video. Also, for some facilities, the “last mile” problem still exists. That is, the service provider's central facility in town has plenty of bandwidth, but getting a circuit from the central facility to the broadcaster is an issue. Watching bandwidth trends in recent years, however, one can conclude that this problem either has been overcome or soon will be.
If bandwidth is no longer the major stumbling block to using IT infrastructure for video, then why is it that this technology has not become the norm for our facilities? The reason is that there are still infrastructure issues to be resolved.
Infrastructure and QoS
What is infrastructure? It involves a broad range of topics, from network monitoring and control, to establishing appropriate service-level agreements, to identifying or developing IT protocols suited to broadcast and post production.
Many television engineers operate on the basis of “implied quality of service.” So even though QoS is not a term used to specify analog television circuits, television network engineers expect a certain level of performance, delay, jitter and so on. They order the transmission service that is most appropriate to the case at hand and expect the appropriate quality of service.
If the engineer is confronted with a link that has a quality level below the tolerated level (i.e., a link whose implied QoS does not live up to his expectations), then serious repercussions will result. As users move to new technologies, especially technologies employed within the Internet, they bring with them this innate sense of QoS. When television engineers or computer network designers try to map this implied QoS onto the types of pathological conditions that may occur in digital transport networks, especially IP networks, there is a strong possibility that misunderstanding will occur if the parties involved do not establish clear specifications for these networks.
Regarding the challenge of carrying real-time video over IP, IP is a connectionless network-transport protocol built around an any-to-any environment that does not require provisioning of individual circuits to connect each site on the network. IP networks — in particular IP network backbones — use two classes of routing protocols to distribute routes to all routers connected to the network, or to make them reachable through another connected network. Interior-gateway protocols (IGPs), such as the open-shortest-path-first (OSPF) protocol, distribute routes within any particular backbone IP network, while the border-gateway protocol (BGP), an exterior-gateway protocol, distributes routes externally to peer backbone IP networks; together, they control how traffic flows from end to end. If a link fails, traffic may be forced to switch quickly from one path to another, causing packet loss and changing the packet delay.
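The path-change effect can be sketched with the shortest-path computation at the heart of IGPs such as OSPF (Dijkstra's algorithm). The topology and link costs below are hypothetical; costs stand in for delay.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: least-cost path from src to dst.
    graph maps each node to {neighbor: link_cost}."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    seen = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            break
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]

# Hypothetical backbone topology.
net = {'A': {'B': 1, 'C': 5}, 'B': {'D': 1}, 'C': {'D': 1}, 'D': {}}
print(shortest_path(net, 'A', 'D'))   # (['A', 'B', 'D'], 2)
del net['A']['B']                     # the A-B link fails
print(shortest_path(net, 'A', 'D'))   # (['A', 'C', 'D'], 6): new path, higher cost
```

When the A-B link fails, traffic reconverges onto a path with three times the cost; packets in flight during reconvergence may be lost, and every subsequent packet arrives with a different delay.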
Because IP networks handle traffic on a per-packet basis, not a fixed-cell-size basis, latency through any one port can dramatically vary depending on the size of the packet, the link and other traffic transiting the link. In other words, the transmission delay over a public IP network can change dramatically from one packet to the next.
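One standard way to quantify this per-packet variation is the interarrival-jitter estimator defined for RTP in RFC 3550, sketched here over hypothetical packet transit times (in milliseconds):

```python
def interarrival_jitter(transit_ms):
    """RFC 3550 jitter estimate: a smoothed running average (gain 1/16)
    of the change in transit time from one packet to the next."""
    jitter, prev = 0.0, None
    for t in transit_ms:
        if prev is not None:
            jitter += (abs(t - prev) - jitter) / 16
        prev = t
    return jitter

print(interarrival_jitter([40, 40, 40, 40]))   # 0.0 (constant delay, no jitter)
print(interarrival_jitter([40, 55, 38, 60]))   # positive (variable delay)
```

A circuit-based service such as an ATM PVC keeps this figure near zero; on a public IP network it can swing widely, which is exactly the problem for real-time video.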
Additionally, the automatic scheduling adaptation across the network provided by ATM's connection admission control (CAC) capability is not currently available for IP-based networks. CAC is a critical technology in ATM, used to reserve bandwidth for subsequent use.
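In outline, admission control is bookkeeping against link capacity: a new connection is admitted only if its reservation still fits. The sketch below illustrates the idea only and is not the ATM algorithm itself; the class name and link rate are assumptions.

```python
class AdmissionController:
    """Admit a new connection only if the link can still honor
    every bandwidth reservation already granted."""

    def __init__(self, capacity_bps):
        self.capacity = capacity_bps
        self.reserved = 0

    def admit(self, request_bps):
        if self.reserved + request_bps <= self.capacity:
            self.reserved += request_bps   # reserve the bandwidth
            return True
        return False                       # refuse rather than degrade others

cac = AdmissionController(capacity_bps=155_000_000)   # e.g. an OC-3 link
print(cac.admit(100_000_000))   # True: reserved
print(cac.admit(100_000_000))   # False: would exceed capacity
print(cac.admit(50_000_000))    # True: still fits
```

The key property is the refusal: a busy link rejects the new connection outright instead of letting it degrade the connections already admitted, which is precisely the guarantee best-effort IP networks lack.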
To compound the complexity of providing the level of QoS required for real-time video services, it is impossible to monitor all of the traffic flows on a backbone network. Traffic flows can shift significantly in a matter of minutes; significant changes can be seen, for example, during specialized IP video streaming events. And because IP networks are dependent on each other, problems on one backbone network can significantly impact the routing on another.
Multi-protocol label switching (MPLS) has been touted as the cure for all of the current shortcomings of IP-based backbone networks. It does have several characteristics that can provide a better mechanism for controlling, routing and monitoring traffic across a backbone network. But MPLS requires modifications to existing routing protocols to provide traffic engineering features (i.e., CAC-like functionality), and equipment manufacturers are just now realizing that they need to modify their products to better support a broader range of QoS offerings. MPLS and various protocol enhancements are still being developed and have not yet been completely standardized. Meanwhile, several equipment manufacturers have already implemented proprietary MPLS and traffic engineering methods, which limits their equipment's ability to interoperate with other manufacturers' equipment.
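The core MPLS forwarding operation, swapping labels hop by hop along a pre-established label-switched path (LSP), can be sketched as a table lookup. The router names and label values below are hypothetical.

```python
# Per-router label-forwarding tables: in-label -> (next hop, out-label).
# An out-label of None means the label is popped at the egress router.
lfib = {
    'R1': {17: ('R2', 23)},
    'R2': {23: ('R3', 9)},
    'R3': {9: ('R4', None)},
}

def traverse_lsp(ingress, label):
    """Follow a label-switched path, swapping labels at each hop."""
    router, path = ingress, [ingress]
    while label is not None:
        router, label = lfib[router][label]
        path.append(router)
    return path

print(traverse_lsp('R1', 17))   # ['R1', 'R2', 'R3', 'R4']
```

Because the path is fixed when the labels are distributed, an operator can pin traffic to engineered routes and reserve capacity along them, which is the CAC-like behavior the text describes.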
It appears that bandwidth will no longer be the limiting factor in using IT-based technology for video — if not now, then in the near future. File transfer technology is well advanced and, with some exceptions, can be used to move video both locally and in the WAN environment. IP technology is ubiquitous and can be used for streaming in the local environment, but with several caveats. IP streaming of real-time, high-quality video over public networks, however, faces serious challenges that stem from fundamental technology choices in the IP protocols. Finally, the video user must begin to understand QoS and to think in these terms if he or she is to make successful use of IT technology for video.
It is worth mentioning that the Video Services Forum has been especially active in the area of IP for video. Recently, the Forum submitted an informative paper to the ITU in an attempt to raise IP for video as an issue to the telecommunications and computer industries.
Brad Gilmer is president of Gilmer & Associates, executive director of the AAF Association and technical moderator for the Video Services Forum.
Send questions and comments to: firstname.lastname@example.org