What is in this article?
It may seem strange to start our discussion on networking with a discussion on nuclear war, but if you really want to understand how networks are designed, this is a good place to start.
The period after World War II was a difficult time. The U.S. fought the Korean War, which became a proxy conflict between the U.S. and China, and at the same time entered the Cold War with the Soviet Union. A nuclear arms race ensued, and some of us practiced “duck and cover” drills at school. Many in this country took the threat of a nuclear attack extremely seriously. It was in this environment that modern computer networking was born.
The country needed a military command and control technology that could survive a “smoking hole” scenario — one in which one or even several cities were reduced to smoking holes in the ground. The technology could not rely on centralized switching centers or a central control system. Initially, designers considered traditional systems with backup switching and control centers in several locations, but the threat of multiple successful strikes during an attack rendered these traditional designs unacceptable. It fell to DARPA (the Defense Advanced Research Projects Agency, part of the U.S. Department of Defense) to solve this problem. Survivability is probably the most significant assumption behind the design; most of the other assumptions follow directly from it.
Now, everyone takes it for granted that if an application wants to send information over a network, that information is broken into small pieces, loaded into the payload section of packets, and launched onto the network. But remember, at the time this technology was being developed, punched paper tape and teletypes were the order of the day. These systems operated over wire-line or radio networks and required a continuous carrier to work. Breaking the information into smaller packets was a radical departure, and it remains a critical assumption behind modern network design.
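The idea can be illustrated with a minimal sketch in Python. This is not any real protocol's format — the field names, the tiny payload size, and the reassembly scheme are all illustrative assumptions — but it shows the core move: split a message into independent, sequence-numbered packets that can travel separately and be reassembled at the far end.

```python
# Illustrative packetization sketch. Field names and sizes are
# hypothetical; real networks use larger payloads (e.g. ~1500 bytes
# for Ethernet) and richer headers.
from dataclasses import dataclass

MAX_PAYLOAD = 8  # bytes of data per packet (deliberately tiny for the demo)

@dataclass
class Packet:
    seq: int        # position of this fragment in the original message
    total: int      # total number of fragments in the message
    payload: bytes  # the slice of data this packet carries

def packetize(message: bytes) -> list[Packet]:
    """Break a message into fixed-size payloads, each tagged with a
    sequence number so the receiver can put them back in order."""
    chunks = [message[i:i + MAX_PAYLOAD]
              for i in range(0, len(message), MAX_PAYLOAD)] or [b""]
    return [Packet(seq=i, total=len(chunks), payload=c)
            for i, c in enumerate(chunks)]

def reassemble(packets: list[Packet]) -> bytes:
    # Packets may arrive out of order; sorting by sequence number
    # restores the original byte stream.
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))
```

Because each packet carries everything needed to place it back in the stream, no single path or switching center has to survive: different packets can take different routes, and the destination can still reconstruct the message.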