Computers on an Ethernet communicate across a shared network, drawing on the same pool of bandwidth. There is no “nailed up,” full-time connection from a sender to a receiver. There should be enough bandwidth for the network to function well, but that does not mean bandwidth will always be available when it is needed. When two computers try to talk at the same time (a collision), both back off for a random amount of time before making another attempt.
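The random back-off classic shared-medium Ethernet uses is truncated binary exponential backoff: after each successive collision, a station picks a random wait from a range that doubles in size (up to a cap), which quickly spreads contending senders apart in time. A minimal sketch, using the classic 10 Mb/s slot time for illustration:

```python
import random

SLOT_TIME_US = 51.2  # slot time for classic 10 Mb/s Ethernet, in microseconds

def backoff_delay(collision_count):
    """Truncated binary exponential backoff: after the nth collision on
    the same frame, wait a random number of slot times drawn from
    [0, 2^min(n, 10) - 1]. The doubling range makes a repeat collision
    between the same two stations progressively less likely."""
    exponent = min(collision_count, 10)  # range stops growing after 10 tries
    slots = random.randint(0, 2 ** exponent - 1)
    return slots * SLOT_TIME_US
```

After the first collision a station waits 0 or 1 slot times; after the third, anywhere from 0 to 7; and so on. It is this randomness that lets two colliding stations find different moments to retry.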
A central assumption behind Ethernet networking is that applications will be well-behaved. By this, I mean applications will observe the rules of the road and will not hog all of the available bandwidth.
When Ethernet was created, the assumption was that most of the data transferred across the network would be small. (Think of file transfers of small documents, short network control messages and so on.) When you put heavy, continuous loads on Ethernet networks, they start to collapse. Network designers assumed there would always be some gaps in transmission, and that everyone could find a time to talk on the network even when things were pretty busy. But if you load a network with professional video traffic, for example, a single transmitter can quickly suck all of the air out of the room, leaving no time for others to get a word in edgewise. Similarly, poorly behaved clients using the User Datagram Protocol (UDP) can dominate a network, destroying communications for everyone. This matters because most professional video-over-IP applications use UDP.
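The reason UDP clients can misbehave is that UDP itself applies no congestion control: a sender can call `sendto()` as fast as its loop runs, and nothing in the protocol slows it down. A well-behaved sender has to pace itself. The sketch below is illustrative, not a production sender; the function names and the pacing-by-sleep approach are my own choices for the example:

```python
import socket
import time

def packet_interval(bitrate_bps, payload_bytes):
    """Seconds a sender must wait between datagrams of `payload_bytes`
    to hold a target rate of `bitrate_bps`. UDP will not enforce this
    for you; the application must."""
    return payload_bytes * 8 / bitrate_bps

def send_paced(dest_addr, payload, bitrate_bps, count):
    """Transmit `count` UDP datagrams, self-pacing to `bitrate_bps`.
    Remove the sleep and sendto() will blast packets as fast as the
    loop can run -- UDP has no built-in congestion control."""
    interval = packet_interval(bitrate_bps, len(payload))
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for _ in range(count):
            sock.sendto(payload, dest_addr)
            time.sleep(interval)  # the only thing keeping this sender polite
    finally:
        sock.close()
```

For example, 1,250-byte datagrams at 1 Mb/s work out to one packet every 10 milliseconds; a sender that skips that wait simply takes the bandwidth from everyone else on the segment.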
We will explore many of these assumptions in more detail over the coming months.
—Brad Gilmer is executive director of the Video Service Forum, executive director of the Advanced Media Workflow Association and president of Gilmer & Associates.