____________________________________
Oral-History:Paul Baran
https://ethw.org/Oral-History:Paul_Baran
Cold War Threat, ~ 1959
Baran:
In late 1959 when I joined the RAND Corporation, the Air Force was synonymous with National Defense. The other services were secondary. The major problem facing the Country and the World was that the Cold War between the two superpowers had escalated by 1959 to the point where both sides were starting to build highly vulnerable missile systems prone to accidents. Whichever side fired its thermonuclear weapons first would essentially destroy the retaliatory capacity of the other. This was a highly unstable and dangerous era. A single accidentally fired weapon could set off an unstoppable nuclear war. A preferred alternative would be to have the ability to withstand a first strike and the capability of returning the damage in kind. This reduces the overwhelming advantage of a first strike, and allows much tighter control over nuclear weapons. This is sometimes called Second Strike Capability. If both sides had a retaliatory capability that could withstand a first-strike attack, a more stable situation would result. This situation is sometimes called Mutually Assured Destruction, also known by its appropriate acronym, MAD. Those were crazy times.
Communications: the Achilles Heel, 1960+
Baran:
The weakest spot in assuring a second strike capability was the lack of reliable communications. At the time we didn’t know how to build a communication system that could survive even collateral damage by enemy weapons. RAND determined through computer simulations that the AT&T Long Lines telephone system, which carried essentially all the Nation’s military communications, would be cut apart by relatively minor physical damage. While essentially all of the links and nodes of the telephone system would survive, a few critical points of this very highly centralized analog telephone system would be destroyed by collateral damage alone from missiles directed at air bases, and the system would collapse like a house of cards. This rendered critical long distance communications unlikely. Well, what about high frequency radio, i.e. the HF or short wave band? The problem here is that a single high altitude nuclear burst destroys sky wave propagation for hours. While propagation would continue via the ground wave, the sky wave badly needed for long distance radio would not function, reducing usable radio ranges to a few tens of miles.
The fear was that our communications were so vulnerable that each missile base commander would face the dilemma of either doing nothing in the event of a physical attack, or taking action that would mean an all-out irrevocable war. A communications system that could withstand attack was needed, one that would allow reduction of tension at the height of the Cold War.
Broadcast Station Distributed Teletypewriter Network, 1960
Baran:
At that time the expressed concern was for a system able to support Minimum Essential Communications -- a euphemism for the President authorizing a weapons launch.
In 1960 I proposed using broadcast stations as the links of a network. Broadcast stations during the daytime depend solely on the ground wave, which is not subject to the loss of the sky wave. This is the reason that AM broadcast stations have such a short range during the day. I was able to demonstrate using FCC station data that there were enough AM broadcast stations in the right locations and of the right power levels to allow signals to be relayed across the country. I proposed a very simple protocol: just flood the network with the same message.
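The flooding protocol described here is simple enough to sketch directly. The station names and relay topology below are purely illustrative, not the actual FCC station data Baran used:

```python
# Minimal sketch of the flooding protocol: every station rebroadcasts each
# new message to all of its neighbors, so the message reaches every station
# still reachable, regardless of which individual links survive.
# The topology is an illustrative stand-in for the AM-station relay graph.

def flood(topology, origin, message):
    """Deliver `message` from `origin`; return {station: message} for all reached."""
    delivered = {}
    frontier = [(origin, message)]
    while frontier:
        station, msg = frontier.pop()
        if station in delivered:
            continue                     # already relayed: drop the duplicate copy
        delivered[station] = msg
        for neighbor in topology.get(station, []):
            frontier.append((neighbor, msg))
    return delivered

# Illustrative relay chain; a destroyed station would simply be absent here.
topology = {
    "DC":  ["PIT"],
    "PIT": ["DC", "CHI"],
    "CHI": ["PIT", "DEN"],
    "DEN": ["CHI", "LA"],
    "LA":  ["DEN"],
}
reached = flood(topology, "DC", "minimum essential message")
```

Because every station relays every new message, no routing knowledge is needed at all; the price is that each message traverses every surviving link, which is acceptable only for the very low data rates involved.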
When I took this briefing around to the Pentagon and other parts of the defense establishment, I received the objection that it didn’t fix the problem of the military: “OK, a very narrow band capacity may take care of the President issuing the orders at the start of a war, but how do you support all the other important communications requirements that you need to operate the military during such a critical time?”
High Data Rate Distributed Communications, 1961 - 64
Baran:
The response was unambiguous. What I proposed wouldn’t fully hack it. So it was “back to the drawing board” time. I started to examine which military communications needs were regarded as essential by reading reports on the subject and asking people at various military command centers. The more I examined the issues, the longer the list grew. So I said to myself, “As I can’t figure out what essential communications are needed, let’s take a different tack. I’ll give those guys so much damn bandwidth that they wouldn’t know what in Hell to do with it all.” In other words, I viewed the challenge to be the design of a secure network able to send signals over a network being cut up, and yet have the signals delivered with perfect reliability. And, with more capacity than anything built to date. When one starts a project, aim for the moon. Reality will cut you back later. But if you don’t aim high at the outset you can never advance very far.
Why Digital? Why Message Blocks?
Baran:
I knew that the signals would have to find their way through surviving paths, which would mean a lot of switching through multiple tandem links. But, at that time long distance telephone communications systems transmitted only analog signals. This placed a fundamental restriction on the number of tandem connected links that could be used before the voice signal quality became unusable. A telephone voice signal could pass through no more than about five independent tandem links before it would become inaudible. This ruled out analog transmission in favor of digital transmission. Digital signals have a wonderful property. As long as the noise is less than the signal’s amplitude it is possible to reconstruct the digital signal without error.
The future survivable system had to be all-digital. At each connected node, the digital signal would be verified to confirm that the next node correctly received it. And, if not, the signal would be retransmitted. As one day the network would have to carry voice as well as teletypewriter and computer data, all traffic would be in the same form – bits. All analog signals would first be digitized. To keep the delay times short, the digital stream would be packaged into small message blocks, each with a standardized format. Work on time division multiplexing of digital telephone signals was in an early state at Bell Labs. Their experimental equipment used a data rate of about 1.5 Megabits/sec. I then started with the premise that it would be feasible to use digital transmission, at least for short distances, at 1.5 Megabits/sec., since the signals could be reconstructed at each node. A big problem blocking long distance digital transmission was transmission jitter buildup. Every mile a repeater amplifier chopped the tops off the wave and reconstituted a clean digital signal. But noise caused a cumulative shifting of the zero crossing points. This limited the span distance. I thought that a node terminating each link in a non-synchronous manner should effectively clean up the accumulated jitter. This would provide a de facto way of achieving long distances by such jitter error cleanup. And I felt that if that didn’t work, then our fallback technology would be the use of extremely cheap microwaves, which could be feasible in this noise-margin-tolerant application.
On Parallelism
Baran:
By this time it was beginning to become clear that the new system’s overall reliability would be significantly greater than the reliability of any one component. Hence I could think in terms of building the entire system out of cheap parts – something previously inconceivable in the all-analog world.
Hochfelder:
Because it is in parallel?
Baran:
Yes. In parallelism there is strength. Many parts must fail before no path can be found through the network. It took a redundancy level of only about three times the theoretical minimum to build a very tough network. If you didn’t have to worry about enemy attacks, then a redundancy level of about 1.5 would suffice to build a very reliable network out of very inexpensive and unreliable parts. And, it would later be shown that it was possible to reduce the cost of communication by almost two decimal orders of magnitude. The saving in part came from being able to design the long distance transmission systems as links of a meshed network with alternative paths, without the huge fade margins needed when all the links are connected in tandem. With analog transmission every link of the network must be “gold plated” to achieve reliability.
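The strength-in-parallelism claim can be illustrated with a small Monte Carlo experiment. The grid mesh, damage level, and trial count below are illustrative assumptions, not RAND's actual simulation parameters:

```python
# Monte Carlo sketch of the parallelism argument: on a grid-like mesh
# (each interior node has about 4 links vs. the ~1 of a spanning tree),
# two distant nodes usually still have a path after heavy random damage.
import random

def grid_links(n):
    """All links of an n x n grid mesh."""
    links = []
    for r in range(n):
        for c in range(n):
            if c + 1 < n:
                links.append(((r, c), (r, c + 1)))
            if r + 1 < n:
                links.append(((r, c), (r + 1, c)))
    return links

def connected(links, a, b):
    """Graph search over surviving links: is there any path from a to b?"""
    adj = {}
    for u, v in links:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, frontier = {a}, [a]
    while frontier:
        node = frontier.pop()
        if node == b:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

random.seed(0)
n, damage, trials = 8, 0.3, 200
survived = 0
for _ in range(trials):
    surviving = [l for l in grid_links(n) if random.random() > damage]
    survived += connected(surviving, (0, 0), (n - 1, n - 1))
# Even with 30% of links destroyed at random, a path between opposite
# corners survives in the large majority of trials on this mesh.
```

A tandem chain of the same length, by contrast, fails if any single one of its links fails, which is why every analog link had to be "gold plated."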
Hot-Potato Routing
Baran:
A key element of the concept was that it would be necessary to keep a “carbon copy” of each message block, using computer technology, until the next station successfully received the message. The next challenge was to find a way for the packets to seek their own way through the network. This meant that some implicit path information must be contained as housekeeping data within the message block itself. The housekeeping includes data about the source and destination of the packet, together with an implied time measurement such as the number of times the message block had been retransmitted. This small amount of information allowed creation of an algorithm that did a very effective job of routing dynamically changing traffic to always find the best instantaneous path through the network.
Basic Concepts Underlying Packet Switching, 1960
Baran:
I had earlier discovered that very robust networks could be built with only modest increases in redundancy over that required for minimum connectivity. And then it dawned on me that the process of resending defective or missing packets would allow the creation of an essentially error-free network. Since it didn’t make any difference whether a failure was due to enemy attack or to unreliable components, it would be possible to build systems whose reliability is far greater than the reliability of any of their parts. And, even with inexpensive components, a super reliable network would result.
Another interesting characteristic was that the network’s learning property would allow users to move around the network, with each person’s address following them. This would allow separating the physical address from the logical address throughout the network, a fundamental characteristic of the Internet.
Another thing I learned was that in building self-learning systems it is as important to forget as it is to learn. For example, when you destroy parts of a network, the network must quickly adapt to routing traffic entirely differently. I found that using two different time constants, one for learning and the other for forgetting, provided the balanced properties desired. And, I found it helpful to view the network as an organism, as it had many of the characteristics of an organism in the way it responds to overloads and sub-system failures.
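One common way to realize the two-time-constant idea is exponential smoothing with a fast learning weight and a slow decay. This is an illustrative sketch of the principle, not Baran's actual table mechanics; the constants and field names are assumptions:

```python
# Illustrative two-time-constant scheme: each node keeps a quality score
# per (destination, outgoing link). Fresh measurements pull a score up
# quickly (learning); in their absence every score decays slowly toward
# zero (forgetting), so routes through destroyed links fade away.

LEARN = 0.5    # fast: weight given to each new measurement
FORGET = 0.01  # slow: per-tick decay applied to every entry

def observe(table, dest, link, quality):
    """Blend a fresh measurement into the routing table (learning)."""
    old = table.get((dest, link), 0.0)
    table[(dest, link)] = (1 - LEARN) * old + LEARN * quality

def tick(table):
    """Decay all entries a little (forgetting)."""
    for key in table:
        table[key] *= (1 - FORGET)

table = {}
observe(table, "PHL", "east", 1.0)   # good traffic seen arriving from the east
for _ in range(100):                 # the east link is then cut: no new news
    tick(table)
# After 100 silent ticks the score has decayed to 0.5 * 0.99**100, about 0.18,
# so the stale route loses out to any fresher alternative.
```

The asymmetry is the point: learning must be fast enough to exploit a good path immediately, while forgetting must be slow enough that one lost measurement does not erase a working route.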
Dynamic Routing, 1961
Baran:
I first thought that it might be possible to build a system capable of smart routing through the network after reading about Shannon’s mouse-through-a-maze mechanism. But instead of remembering only a single path, I wanted a scheme that not only remembered, but also knew when to forget, if the network was chopped up. It is interesting to note that early simulation showed that after 50% of the hypothetical network was instantly destroyed, the surviving pieces of the network reconstituted themselves within half a second of real world time and again worked efficiently in handling the packet flow.
Hochfelder:
How would the packets know how to do that?
Baran:
Through the use of a very simple routing algorithm. Imagine that you are a hypothetical postman and mail comes in from different directions, North, South, East and West. You, the postman, would look at the cancellation dates on the mail from each direction. If, for example, our postman was in Chicago, mail from Philadelphia would tend to arrive from the East with the latest cancellation date. If the mail from Philadelphia had arrived from the North, South, or West it would carry an earlier cancellation date, because it would have had to take a longer route (statistically). Thus, the preferred direction to send traffic to Philadelphia would be out over the channel connected from the East, as it had the latest cancellation date. Just by looking at the time stamps on traffic flowing through the post office you get all the information you need to route traffic efficiently.
Each hypothetical post office would be built the same way. And each would have a local table that recorded the statistics of traffic flowing through the post office. With packets, it was easier to increment a count in a field of the packet than to time stamp. So, that is what I did. It’s simple and self-learning. And when this “handover number” got too big, then we knew that the end point was unreachable and dropped that packet so that it didn’t clutter the network.
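The handover-number doctrine can be sketched in a few lines. The field names, the drop threshold, and the single-entry table are illustrative assumptions, not the actual message-block format:

```python
# Sketch of handover-number routing: each packet carries a count of how
# many times it has been relayed. A node seeing a packet *from* source S
# arrive on link L with a low count learns that L is a short path back
# *toward* S -- the digital analog of the postman's cancellation dates.

MAX_HANDOVERS = 20  # beyond this the destination is presumed unreachable

def note_arrival(table, packet, arrival_link):
    """Learn from a transiting packet: remember the best link back to its source."""
    best = table.get(packet["src"])
    if best is None or packet["handovers"] < best[1]:
        table[packet["src"]] = (arrival_link, packet["handovers"])

def forward(table, packet):
    """Pick the outgoing link, or drop packets that have wandered too long."""
    packet["handovers"] += 1
    if packet["handovers"] > MAX_HANDOVERS:
        return None  # drop it so it doesn't clutter the network
    entry = table.get(packet["dst"])
    return entry[0] if entry else "any"  # no knowledge yet: hot-potato it out

table = {}
note_arrival(table, {"src": "PHL", "dst": "CHI", "handovers": 3}, "east")
note_arrival(table, {"src": "PHL", "dst": "CHI", "handovers": 7}, "north")
link = forward(table, {"src": "CHI", "dst": "PHL", "handovers": 0})
# The lowest count seen from PHL arrived from the east, so traffic
# to PHL is sent out the east link.
```

Note that the table is built entirely from traffic passing through; no routing messages are exchanged, which is what makes the scheme self-learning.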
Hochfelder:
Always searching for the shortest path.
Baran:
Yes, that is the scheme. We needed a learning constant and a forgetting constant as no single measurement could be completely trusted. The forgetting constant also allows the network to respond to rapidly varying loads from different places. If the instantaneous load exceeded the capacity of the links, then the traffic is automatically spread through more of the network. I called this doctrine, “Hot Potato Routing.” These days this approach is called “Deflection Routing.” By the way, the routing doctrine used in the Internet differs from the original Hot Potato approach, and is the result of a large number of improvements over the years.
Basic Properties of Packet Switching, 1960 - 62
Baran:
The term “packet switching” was first used by Donald Davies of the National Physical Laboratory in England, who independently came up with the same general concept in November 1965.
Essentially all the basic concepts of today’s packet switching can be found described either in the 1962 paper or in the August 1964 RAND Memoranda, in which such key concepts as the virtual circuit are described in detail.
The concept of the “virtual circuit” is that the links and nodes of the system are all free, except during those instances when actually sending packets. This allows a huge saving over circuit switching, because 99 percent of the time nothing is being sent so the same facilities can be shared with other potential users.
Then there is the concept of “flow control”, which is the mechanism to automatically prevent any node from overloading. All the basic concepts were worked out in engineering detail in a series of RAND Memoranda (between 10 and 14 volumes, depending on how they are counted). What resulted was a realization that the system would be extremely robust, with the end to end error rate essentially zero, even if built with inexpensive components. And, it would be very efficient in traffic handling in comparison to the circuit-switching alternative.
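The economics behind the virtual-circuit idea follow from simple arithmetic. The 1% activity figure follows the "99 percent of the time nothing is being sent" remark above; the utilization target is an illustrative assumption:

```python
# Back-of-envelope sketch of the virtual-circuit saving: if each user
# actually transmits only ~1% of the time, one shared channel can do
# the statistical work of many dedicated circuits.
activity = 0.01     # fraction of time each user is actually sending
target_load = 0.5   # illustrative: run the shared link at 50% utilization

# Circuit switching dedicates one channel per user for the whole call;
# packet switching admits users until the average load hits the target.
users_per_shared_channel = round(target_load / activity)
# One shared channel carries 50 such users, versus 50 dedicated circuits.
```

The real gain was of course bounded by burstiness and flow control, but the order of magnitude is what made circuit switching look so wasteful for data.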
Economic Payoff Potential Versus Perceived Risks
Baran:
This combination of economy and capability suggested that if built and maintained at a cost of $60,000,000 (1964 Dollars) that it could handle the long distance telecommunications within the Department of Defense that was costing the taxpayer about $2 billion a year.
At the time, the claimed saving in cost was so great that it made the story intuitively unbelievable. It violated the common sense instincts of the listener, who would say in effect: “If it were ever possible to achieve such efficiencies the phone company (AT&T) would have done it already.”
Another understandable objection was “This couldn’t possibly work. It is too complicated.” This perception was based on the common view, correct at the time, that computers were big, taking up large glass-walled rooms, and were notoriously unreliable. When I said that each switching node could be a shoe-sized box with the required computer capabilities, many didn’t believe it. (I had planned doing everything in miniaturized hardware in lieu of using off-the-shelf minicomputers.) So I had the burden of proof, to define the small box down to the circuit level to show that it could indeed be done.
Another issue was the separation of the transmission network from the analog to digital conversion points. This is described in detail in Vol. 8 of the ODC series. This RAND Memorandum describes in detail how users are connected to the switching network. The separate unit that is described connects up to 1024 users and converts their analog signals into digital signals. This included voice, teletypewriters, computer modems, etc. One side of the box would connect to the existing analog telephones, while the other, digital side would connect to the switching network, preferably at multiple points to eliminate a single point of failure.
This constant demand for ever more engineering detail caused a great deal of paper to be written at the time, cluttering up the literature. On a positive note, it left us with a very detailed description of packet switching as proposed at that time. This record has been helpful in straightening out some of the later misrepresentations of who did what and when, as found in the popular press’s view of history.
Opposition and Detailed Definition Memoranda, 1961+
Baran:
The enthusiasm that this early plan encountered was mixed. I obtained excellent support from RAND (after a cool and cautious initial start). Others, particularly those from AT&T (the telephone monopoly at the time), objected violently. Many of the objections were at the detail level, so the burden of proof was then on me to provide proposed implementation descriptions at an ever finer level of detail. Time after time I would return with increasingly detailed briefing charts and reports. But each time I would hear a mantra: “It won’t work because of (some new objection).” I gave the briefings in many places: to various government agencies, to research laboratories, to commercial companies, but primarily to the military establishment. I gave briefings at least 35 times. It was hard for a visitor with an interest in communications to visit RAND without being subjected to a presentation. My chief purpose in giving these presentations so broadly was that I was looking for reasons that it might not work. I wanted to be absolutely sure that I hadn’t overlooked anything that could affect workability. After each encounter where I could not answer the questions quantitatively, I would go back and study each of the issues raised and fill in the missing details. This was an iterative process constituting a wire brush treatment of a wild set of concepts.
In fairness, much of the early criticism was valid. Of course the burden of proof belongs to the proponent. Among the many positive outcomes of the exercise were that: 1) I was building a better understanding of the details of such new systems, 2) I was building a growing degree of confidence in the notions, and 3) I had accumulated a growing pile of paper, including simulation data, to support the idea that the system would be self-learning and stable.
Publication, 1964
Baran:
Most of the work was done in the period 1960-62. As you can imagine, old era analog transmission engineers were unable to understand what was being contemplated in detail. And, not understanding, they were negative and intuitively believed that it couldn’t possibly work. However, I did build up a set of great supporters as I went along. My most loyal supporters at RAND included Keith Uncapher, my boss at the time, and Paul Armer and Willis Ware, co-heads of the Computer Science Department. RAND provided a remarkable degree of freedom to do this controversial work, and supported me in external disagreements. By 1963 I felt that I had carried this work about as far as appropriate to RAND (which some jokingly say stands for “Research And No Development”). Having completed the bulk of my work, I began wrapping up the technical development phase by publishing the set of memoranda, which were primarily written on airplanes in the 1960 to 1962 era. There were some revisions in 1963, and the RAND Memoranda came out in 1964. I continued to work on some of the non-technical issues and gave tutorials in many places, including summer courses at the University of Michigan in 1965 and 1966.
In May 1964 I published a paper in the IEEE Communications Transactions which summarizes the work and provides a pointer to each of a dozen volumes of RAND Memoranda for the serious reader who wanted the backup material. Essentially all this work was unclassified, in the belief that we would all be better off if the fate of the world relied on more robust communications networks. Only two of the first twelve Memoranda were classified. One dealt with cryptography, and the other with weak spots that were discovered and the patches to counter them. A thirteenth classified volume was written in 1965 by Rose Hirshfield on the real world geographical layout of the network. And there was a 14th, describing a secure telephone that could be used with the system; it had possible applications outside of the network and so wasn’t included in the number series. This was co-authored with Dr. Rein Turn.
Baran:
Getting a new idea out to a larger audience is always challenging. Perhaps more so if it is a departure from the classical way of doing things. The IEEE Spectrum, which is sent to all IEEE members, picked up the article in a “Scanning the Transactions” item. I looked to this short summary to be a pointer to the IEEE Transactions article for those who didn’t normally read the Communications Society Transactions. That article in turn pointed to the RAND Memoranda, readily available either from RAND or its depositories around the world. In those days RAND Publications were mailed free to anyone who requested a copy.
But no matter how hard one tries, it seems that it is impossible to get the word out to everyone. This is not a novel problem. And it contributes to duplicative research, made more common by the reluctance of some to take the time to review the literature before proceeding with their own research. Some even regard reviewing the literature as a waste of time. I was surprised many years later to find a few key people in closely related research say that they were totally unaware of this work until many years later. I recall describing the system in detailed discussions, only to find out at a later date that the listener had completely forgotten what was said, and didn’t receive his epiphany until much later, ostensibly through a different channel.
Conceptual Gap Between Analog and Digital Thinking
Baran:
The fundamental hurdle in acceptance was whether the listener had digital experience or knew only analog transmission techniques. The older telephone engineers had problems with the concept of packet switching. On one of my several trips to AT&T Headquarters at 195 Broadway in New York City I tried to explain packet switching to a senior telephone company executive. In mid-sentence he interrupted me: “Wait a minute, son. Are you trying to tell me that you open the switch before the signal is transmitted all the way across the country?” I said, “Yes sir, that’s right.” The old analog engineer looked stunned. He looked at his colleagues in the room while his eyeballs rolled up, sending a signal of his utter disbelief. He paused for a while, and then said, “Son, here’s how a telephone works….” And then he went on with a patronizing explanation of how a carbon button telephone worked. It was a conceptual impasse.
On the other hand, the computer people over at Bell Labs in New Jersey did understand the concept. That was insufficient. When I told the AT&T Headquarters folks that their own research people at Bell Labs had no trouble understanding and didn’t share the Headquarters objections, their response was, “Well, Bell Labs is made up of impractical research people who don’t understand real world communication.”
Willis Ware of RAND tried to build a bridge early in the process. He knew Dr. Edward David, Executive Director of Bell Labs, and he asked for help. Ed set up a meeting at his house with the chief engineer of AT&T and myself to try to overcome the conceptual hurdle. At this meeting I would describe something in language familiar to those who knew digital technology. Ed David would translate what I was saying into language more familiar in the analog telephone world (he practically used Western Electric part numbers) for our AT&T friend, who responded in a like manner. Ed David would then translate it back into computer nerd language.
I would encounter this cultural impasse time after time: on one side, those who were familiar only with the then state of the art of analog communications – highly centralized, with highly limited intelligence circuit switching – and on the other, myself talking about all-digital transmission, smart switches and self-learning networks. But, through this process of erosion, more and more people came to understand what was being said. The base of support strengthened in RAND, the Air Force, academia, government and some industrial companies – and parts of Bell Labs. But I could never penetrate the objections of AT&T Headquarters, which at that time had a complete monopoly on telecommunications. It would have been the perfect organization to build the network. Our initial objective was to have the Air Force contract the system out to AT&T to build the network, but unfortunately AT&T was dead set against the idea.
Hochfelder:
Were there financial objections as well?
AT&T Headquarters Lack of Receptivity
Baran:
Possibly, but not frontally. They didn’t want to do it for a number of reasons and dug their heels in, looking for publicly acceptable reasons. For example, AT&T asserted that there were not enough paths through the country to provide for the number of routes that I had proposed for the National packet based network, but refused to show us their route maps. (I didn’t tell them that someone at RAND had already acquired a back door copy of the AT&T maps containing the physical routes across the US, since AT&T refused to voluntarily provide these maps, which were needed to model collateral damage to the telephone plant by attacks on the US Strategic Forces.) I told AT&T that I thought they were in error and asked them to please check their maps more carefully. After a month’s delay, in which they never directly answered the question, one of their people responded by grumbling, “It isn’t going to work, and even if it did, damned if we are going to put anybody in competition to ourselves.”
I suspect the major reason for the difficulty in accommodating packet switching at the digital transmission level was that it would violate a basic ground rule of the Bell System – everything added to the telephone system had to work with all previous equipment presently installed. Everything had to fit into the existing plan. Nothing totally different could be allowed, except as a self contained unit that fit into the overall system. The concept of long distance all-digital communications links connecting small computers serving as switches represented a totally different technology and paradigm, and was too hard for them to swallow. I can understand and respect that reason, but can also appreciate the later necessity for divestiture. Competition better serves the public interest in the longer term than a monopoly, no matter how competent and benevolent that monopoly might be. There is always the danger that the monopoly can be in error, with no way to correct it.
On Bell Labs Response
Baran:
While the folks at AT&T Headquarters violently opposed the technology, there were digitally competent people at Bell Labs who appreciated what it was all about. One of the mysteries that I have never figured out is why, after packet switching was shown to be feasible in practice and many papers had been published by others, it took so many years before papers on packet switching emerged from Bell Labs.
The first paper on the subject that I recall being published in the Bell System Technical Journal was by Dr. John Pierce. This paper described a packet network made up of overlapping Ballantine rings. It was a brilliant idea, and his architecture is used in today’s ATM systems.
Hochfelder:
What is a Ballantine ring?
Baran:
Have you ever seen the Ballantine Beer logo? It is made up of three overlapping rings. Since a signal can be sent in both directions on a loop, no single cut of a loop need stop communications; the signal simply flows in from the other direction. Any single cut can be tolerated without loss, allowing time for repair. It is a powerful idea.
The RAND Formal Recommendation to the Air Force, 1965
Baran:
In 1965 the RAND Corporation issued a formal Recommendation to the Air Force (which it does very rarely) that the Air Force proceed to build the proposed network. The Air Force then asked the MITRE Corporation, a not-for-profit organization that worked for the government, to set up a study and review committee. The Committee, after independent investigation, concluded that the design was valid, that a viable system could be built, and that the Air Force should immediately proceed with implementation.
As the project was about to launch, the Department of Defense said that as this system was to be a National communications system, it would, in accordance with the Defense Reorganization Act of 1949 (finally being implemented in 1965), fall into the charter of the Defense Communications Agency.
The choice of DCA would have been fine years later when DCA was more appropriately staffed. But at that time the DCA was a shell organization staffed by people who lacked strength in digital understanding. I had learned through the many briefings I had given to various audiences that there was an impenetrable barrier to understanding packet switching among those who lacked digital experience. At RAND I was essentially free to work on anything that I felt to be of most importance to National Security. This allowed me, for example, to serve on various ad hoc DDR&E (Department of Defense Research & Engineering) committees. I sometimes consulted with Frank Eldridge in the Comptroller’s Office of the Department of Defense, helping him to review items in the command and control budgets submitted by the services. Frank Eldridge was an old RAND colleague initially responsible for the project on the protection of command and control. He was among the strongest supporters of the work that I was doing on Distributed Communications. He had gone over to the Pentagon, working with McNamara’s “whiz kids.” Frank Eldridge had undergone many of the same battles with AT&T and understood the issues of the RAND, and later Air Force, proposal.
Approval of the money for the Defense Communications Agency (DCA) to undertake the RAND distributed communications system development was under Frank Eldridge’s responsibility. Both Frank and I agreed that DCA lacked the people at that time who could successfully undertake this project and would likely screw up the program. An expensive failure would make it difficult for a more competent agency to later undertake the project. I recommended that the program not be funded at that time and that it be quietly shelved, waiting for a more auspicious opportunity to resurrect it.
The Cold War at this time had cooled from loud threats of thermonuclear warheads to the lower level of surrogate small wars. And we were bogged down in Vietnam.
source:
https://ethw.org/Oral-History:Paul_Baran
The fear was that our communications were so vulnerable that each missile base commander would face the dilemma of either doing nothing in the event of a physical attack, or taking action that would mean an all-out, irrevocable war. A communications system was needed that could withstand attack and allow a reduction of tension at the height of the Cold War.
Broadcast Station Distributed Teletypewriter Network, 1960
Baran:
At that time the expressed concern was for a system able to support Minimum Essential Communications -- a euphemism for the President authorizing a weapons launch.
In 1960 I proposed using broadcast stations as the links of a network. Broadcast stations during the daytime depend solely on the ground wave, which is not subject to the loss of the sky wave. This is the reason that AM broadcast stations have such a short range during the day. I was able to demonstrate using FCC station data that there were enough AM broadcast stations in the right locations and of the right power levels to allow signals to be relayed across the country. I proposed a very simple protocol: just flood the network with the same message.
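The flooding idea is simple enough to sketch in a few lines. The station graph and names below are hypothetical, chosen only to illustrate the relay principle: every station that hears a message it has not heard before rebroadcasts it once to every neighbor in ground-wave range.

```python
from collections import deque

def flood(neighbors, origin):
    """Relay a message by flooding: each station that receives it for
    the first time rebroadcasts it to every station it can reach."""
    heard = {origin}
    queue = deque([origin])
    while queue:
        station = queue.popleft()
        for nxt in neighbors[station]:
            if nxt not in heard:      # ignore copies already heard
                heard.add(nxt)
                queue.append(nxt)
    return heard

# Hypothetical chain of AM stations relaying a message coast to coast.
stations = {
    "LA": ["Phoenix"], "Phoenix": ["LA", "Denver"],
    "Denver": ["Phoenix", "Chicago"], "Chicago": ["Denver", "NYC"],
    "NYC": ["Chicago"],
}
print(sorted(flood(stations, "LA")))  # every station receives the message
```

Flooding wastes capacity, but it needs no routing tables at all, which suited the minimal go/no-go traffic the scheme was meant to carry.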
When I took this briefing around to the Pentagon and other parts of the defense establishment, I received the objection that it didn't solve the military's problem: “OK, a very narrow band capacity may take care of the President issuing the orders at the start of a war, but how do you support all the other important communications requirements that you need to operate the military during such a critical time?”
High Data Rate Distributed Communications, 1961 - 64
Baran:
The response was unambiguous. What I proposed wouldn’t fully hack it. So it was “back to the drawing board” time. I started to examine what military communications needs were regarded as essential by reading reports on the subject and asking people at various military command centers. The more I examined the issues, the longer the list grew. So I said to myself, “As I can’t figure out what essential communications are needed, let’s take a different tack. I’ll give those guys so much damn bandwidth that they wouldn’t know what in Hell to do with it all.” In other words, I viewed the challenge to be the design of a secure network able to send signals through a network being cut up, and yet having the signals delivered with perfect reliability. And with more capacity than anything built to date. When one starts a project, aim for the moon. Reality will cut you back later. But if you don’t aim high at the outset you can never advance very far.
Why Digital? Why Message Blocks?
Baran:
I knew that the signals would have to find their way through surviving paths, which would mean a lot of switching through multiple tandem links. But, at that time long distance telephone communications systems transmitted only analog signals. This placed a fundamental restriction on the number of tandem connected links that could be used before the voice signal quality became unusable. A telephone voice signal could pass through no more than about five independent tandem links before it would become inaudible. This ruled out analog transmission in favor of digital transmission. Digital signals have a wonderful property. As long as the noise is less than the signal’s amplitude it is possible to reconstruct the digital signal without error.
The future survivable system had to be all-digital. At each node, the system would verify that the next node correctly received the digital signal; if not, the signal would be retransmitted. As one day the network would also have to carry voice as well as teletypewriter and computer data, all traffic would be in the same form – bits. All analog signals would first be digitized. To keep the delay times short, the digital stream would be packaged into small message blocks, each with a standardized format. Work on time division multiplexing of digital telephone signals was at an early stage at Bell Labs. Their experimental equipment used a data rate of about 1.5 Megabits/sec. I then started with the premise that it would be feasible to use digital transmission, at least for short distances, at 1.5 Megabits/sec., since the signals could be reconstructed at each node. A big problem blocking long distance digital transmission was transmission jitter buildup. Every mile a repeater amplifier chopped the tops off the wave and reconstituted a clean digital signal. But noise caused a cumulative shifting of the zero crossing points. This limited the span distance. I thought that a node terminating each link in a non-synchronous manner should effectively clean up the accumulated jitter. This would provide a de facto way of achieving long distances through such jitter cleanup. And I felt that if that didn’t work, our fallback technology would be extremely cheap microwave links, feasible in this noise-margin-tolerant application.
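The per-link verify-and-retransmit behavior can be sketched as follows. The checksum, loss rate, and message contents are all hypothetical, not details of Baran's design; the point is only that each node keeps the block until the next node confirms a correct copy.

```python
import random
import zlib

def send_over_hop(block: bytes, loss_rate: float, rng: random.Random) -> int:
    """Send one message block across one lossy link, retransmitting
    until the receiving node verifies it; returns attempts used."""
    checksum = zlib.crc32(block)          # sender computes a check value
    attempts = 0
    while True:
        attempts += 1
        if rng.random() >= loss_rate:     # block arrived this time
            received = block
            if zlib.crc32(received) == checksum:   # receiver verifies
                return attempts           # acknowledged; sender may now
                                          # discard its retained copy
        # otherwise: no acknowledgment arrives, so retransmit

rng = random.Random(42)
print(send_over_hop(b"digitized voice block", loss_rate=0.5, rng=rng))
```

Because each hop is verified independently, errors do not accumulate across tandem links the way analog noise does.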
On Parallelism
Baran:
By this time it was beginning to become clear that the new system’s overall reliability would be significantly greater than the reliability of any one component. Hence I could think in terms of building the entire system out of cheap parts – something previously inconceivable in the all-analog world.
Hochfelder:
Because it is in parallel?
Baran:
Yes. In parallelism there is strength. Many parts must fail before no path can be found through the network. It took a redundancy level of only about three times the theoretical minimum to build a very tough network. If you didn’t have to worry about enemy attacks, then a redundancy level of about 1.5 would suffice for a very reliable network built out of very inexpensive and unreliable parts. And it would later be shown that it was possible to reduce the cost of communication by almost two decimal orders of magnitude. The saving came in part from being able to design the long distance transmission systems as links of a meshed network with alternative paths, without the huge fade margins required when all the links are connected in tandem. With analog transmission every link of the network must be “gold plated” to achieve reliability.
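A back-of-the-envelope calculation shows why parallel paths beat gold plating. The availability figures below are invented purely for illustration; only the shape of the arithmetic matters.

```python
def tandem(p_link: float, n_links: int) -> float:
    """A tandem chain works only if every link in it works."""
    return p_link ** n_links

def parallel(p_path: float, n_paths: int) -> float:
    """Independent parallel paths fail only if all of them fail."""
    return 1.0 - (1.0 - p_path) ** n_paths

cheap_chain = tandem(0.9, 10)               # ten cheap 90%-available links
print(round(cheap_chain, 2))                # the chain alone is poor
print(round(parallel(cheap_chain, 3), 2))   # three such paths in parallel
print(round(parallel(cheap_chain, 8), 2))   # eight paths: near certainty
```

Tandem reliability decays exponentially with chain length, while parallel reliability improves exponentially with path count, which is why modest redundancy over a mesh of cheap parts can outperform a gold-plated chain.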
Hot-Potato Routing
Baran:
A key element of the concept was that it would be necessary to keep a “carbon copy” of each message block, using computer technology, until the next station successfully received the message. The next challenge was to find a way for the packets to seek their own way through the network. This meant that some implicit path information must be contained as housekeeping data within the message block itself. The housekeeping includes data about the source and destination of the packet, together with an implied time measurement such as the number of times the message block had been retransmitted. This small amount of information allowed creation of an algorithm that did a very effective job of routing dynamically changing traffic, always finding the best instantaneous path through the network.
Basic Concepts Underlying Packet Switching, 1960
Baran:
I had earlier discovered that very robust networks could be built with only modest increases in redundancy over that required for minimum connectivity. And then it dawned on me that the process of resending defective or missing packets would allow the creation of an essentially error-free network. Since it didn’t make any difference whether a failure was due to enemy attack or to unreliable components, it would be possible to build systems where the system reliability is far greater than the reliability of any of its parts. Even with inexpensive components, a super reliable network would result.
Another interesting characteristic was that the network’s learning property would allow users to move around the network, with each person’s address following them. This would allow separating the physical address from the logical address throughout the network, a fundamental characteristic of the Internet.
Another thing I learned was that in building self-learning systems, it is as important to forget as it is to learn. For example, when you destroy parts of a network, the network must quickly adapt to routing traffic entirely differently. I found that using two different time constants, one for learning and the other for forgetting, provided the balanced properties desired. And I found it helpful to view the network as an organism, as it had many of the characteristics of one in the way it responds to overloads and sub-system failures.
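One way to realize the two-time-constant idea is an exponentially weighted estimate per direction, pulled quickly toward fresh observations and decayed slowly back toward "no information." The constants and the "unknown" sentinel below are hypothetical illustrations, not Baran's actual values.

```python
def smooth(estimate: float, observation: float, alpha: float) -> float:
    """Exponentially weighted step from the estimate toward the observation."""
    return (1 - alpha) * estimate + alpha * observation

class DirectionEstimate:
    """Routing-table entry that learns with one time constant and
    forgets with another (all constants here are hypothetical)."""
    LEARN = 0.5       # fast: trust fresh evidence quickly
    FORGET = 0.05     # slow: let stale routes fade gradually
    UNKNOWN = 100.0   # sentinel handover count meaning "no information"

    def __init__(self):
        self.hops = self.UNKNOWN

    def observe(self, handover_count: float) -> None:
        self.hops = smooth(self.hops, handover_count, self.LEARN)

    def tick(self) -> None:
        # Called periodically: drift back toward "unknown" so that a
        # chopped-up network does not keep routing over dead paths.
        self.hops = smooth(self.hops, self.UNKNOWN, self.FORGET)

e = DirectionEstimate()
for _ in range(10):
    e.observe(3)           # traffic from this direction took ~3 handovers
print(round(e.hops, 1))    # → 3.1, converging on the observed 3
```

The asymmetry is the point: learning fast keeps the table responsive, while forgetting slowly keeps one bad measurement from erasing a good route.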
Dynamic Routing, 1961
Baran:
I first thought that it might be possible to build a system capable of smart routing through the network after reading about Shannon’s mouse-through-a-maze mechanism. But instead of remembering only a single path, I wanted a scheme that not only remembered, but also knew when to forget if the network was chopped up. It is interesting to note that the early simulation showed that after 50% of the hypothetical network was instantly destroyed, the surviving pieces of the network reconstituted themselves within half a second of real world time and again worked efficiently in handling the packet flow.
Hochfelder:
How would the packets know how to do that?
Baran:
Through the use of a very simple routing algorithm. Imagine that you are a hypothetical postman and mail comes in from different directions: North, South, East and West. You, the postman, would look at the cancellation dates on the mail from each direction. If, for example, our postman were in Chicago, mail from Philadelphia would tend to arrive from the East with the latest cancellation date. If the mail from Philadelphia had arrived from the North, South, or West, it would arrive with an earlier cancellation date, because it would (statistically) have had to take a longer route. Thus, the preferred direction to send traffic to Philadelphia would be out over the channel connected from the East, as it had the latest cancellation date. Just by looking at the time stamps on traffic flowing through the post office you get all the information you need to route traffic efficiently.
Each hypothetical post office would be built the same way. And each would have a local table that recorded the statistics of traffic flowing through the post office. With packets, it was easier to increment a count in a field of the packet than to time-stamp. So that is what I did. It’s simple and self-learning. And when this “handover number” got too big, then we knew that the end point was unreachable and dropped that packet so that it didn’t clutter the network.
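The handover-number scheme translates almost directly into code. The node names, table layout, and drop threshold below are illustrative assumptions, but the logic is the one described above: learn from the counts on arriving packets, route out the lowest-count direction, and drop packets whose count grows too big.

```python
MAX_HANDOVERS = 20   # hypothetical limit; beyond this the packet is dropped

class Node:
    """Sketch of a self-learning switching node: each transiting packet's
    handover count teaches the node which neighbor offers the shortest
    known path back toward the packet's source."""
    def __init__(self):
        self.table = {}   # destination -> {neighbor: best handover count seen}

    def packet_arrived(self, source, via_neighbor, handovers):
        # A packet FROM `source` arriving via `via_neighbor` with a low
        # handover count means `via_neighbor` is a good direction TO `source`.
        best = self.table.setdefault(source, {})
        best[via_neighbor] = min(best.get(via_neighbor, float("inf")), handovers)

    def route(self, destination, handovers):
        if handovers > MAX_HANDOVERS:
            return None                       # drop: end point unreachable
        options = self.table.get(destination)
        if not options:
            return None                       # no information learned yet
        return min(options, key=options.get)  # lowest handover count wins

chicago = Node()
chicago.packet_arrived("Philadelphia", via_neighbor="East", handovers=3)
chicago.packet_arrived("Philadelphia", via_neighbor="North", handovers=7)
print(chicago.route("Philadelphia", handovers=0))  # → East
```

No global map of the network is needed; every node learns purely from the traffic that happens to flow through it.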
Hochfelder:
Always searching for the shortest path.
Baran:
Yes, that is the scheme. We needed a learning constant and a forgetting constant, as no single measurement could be completely trusted. The forgetting constant also allows the network to respond to rapidly varying loads from different places. If the instantaneous load exceeded the capacity of the links, then the traffic was automatically spread through more of the network. I called this doctrine “Hot Potato Routing.” These days this approach is called “Deflection Routing.” By the way, the routing doctrine used in the Internet differs from the original Hot Potato approach, and is the result of a large number of improvements over the years.
Basic Properties of Packet Switching, 1960 - 62
Baran:
The term “packet switching” was first used by Donald Davies of the National Physical Laboratory in England, who independently came up with the same general concept in November 1965.
Essentially all the basic concepts of today’s packet switching can be found described either in the 1962 paper or in the August 1964 RAND Memoranda, in which such key concepts as the virtual circuit are described in detail.
The concept of the “virtual circuit” is that the links and nodes of the system are all free except during those instants when packets are actually being sent. This allows a huge saving over circuit switching, because 99 percent of the time nothing is being sent, so the same facilities can be shared with other potential users.
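The economics of that sharing can be checked with a little binomial arithmetic. The user count, duty cycle, and channel count below are invented for illustration only.

```python
from math import comb

def overload_probability(n_users: int, p_active: float, channels: int) -> float:
    """Chance that more than `channels` of `n_users` independent users,
    each active a fraction `p_active` of the time, transmit at once."""
    return sum(comb(n_users, k) * p_active ** k * (1 - p_active) ** (n_users - k)
               for k in range(channels + 1, n_users + 1))

# 100 users, each sending 1% of the time, sharing capacity sized for 5:
print(overload_probability(100, 0.01, 5))   # a tiny fraction of a percent
```

Circuit switching would have to dedicate a channel per conversation; sharing the 99 percent idle time is where the saving comes from.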
Then there is the concept of “flow control”, which is the mechanism that automatically prevents any node from overloading. All the basic concepts were worked out in engineering detail in a series of RAND Memoranda (between 10 and 14 volumes, depending on how they are counted). What resulted was a realization that the system would be extremely robust, with the end to end error rate essentially zero, even if built with inexpensive components. And it would be very efficient in traffic handling in comparison to the circuit-switching alternative.
Economic Payoff Potential Versus Perceived Risks
Baran:
This combination of economy and capability suggested that, if built and maintained at a cost of $60,000,000 (1964 dollars), it could handle the long distance telecommunications within the Department of Defense that were costing the taxpayer about $2 billion a year.
At the time, the claimed saving in cost was so great that it made the story intuitively unbelievable. It violated the common sense instincts of the listener, who would say in effect: “If it were ever possible to achieve such efficiencies, the phone company (AT&T) would have done it already.”
Another understandable objection was “This couldn’t possibly work. It is too complicated.” This perception was based on the common view, correct at the time, that computers were big, taking up large glass-walled rooms, and were notoriously unreliable. When I said that each switching node could be a shoe-sized box with the required computer’s capabilities, many didn’t believe it. (I had planned to do everything in miniaturized hardware in lieu of using off the shelf minicomputers.) So I had the burden of proof, to define the small box down to the circuit level to show that it could indeed be done.
Another issue was the separation of the transmission network from the analog to digital conversion points. This is described in detail in Vol. 8 of the ODC series. This RAND Memorandum describes in detail how users are connected to the switching network. The separate unit that is described connects up to 1024 users and converts their analog signals into digital signals. This included voice, teletypewriters, computer modems, etc. One side of the box would connect to the existing analog telephones, while the other, digital side would connect to the switching network, preferably at multiple points to eliminate a single point of failure.
This constant increase in the desire for engineering details caused a great deal of paper to be written at the time, cluttering up the literature. On a positive note, it left us with a very detailed description of packet switching as proposed at that time. This record has been helpful in straightening out some of the later misrepresentations of who did what and when, as found in the popular press’s view of history.
Opposition and Detailed Definition Memoranda, 1961+
Baran:
The enthusiasm that this early plan encountered was mixed. I obtained excellent support from RAND (after a cool and cautious initial start). Others, particularly those from AT&T (the telephone monopoly at the time), objected violently. Many of the objections were at the detail level, so the burden of proof was on me to provide proposed implementation descriptions at an ever finer level of detail. Time after time I would return with increasingly detailed briefing charts and reports. But each time I would hear the mantra, “It won’t work because of (some new objection).” I gave the briefings in many places: to various government agencies, to research laboratories, to commercial companies, but primarily to the military establishment. I gave the briefing at least 35 times. It was hard for a visitor with an interest in communications to visit RAND without being subjected to a presentation. My chief purpose in giving these presentations so broadly was that I was looking for reasons that it might not work. I wanted to be absolutely sure that I hadn’t overlooked anything that could affect workability. After each encounter where I could not answer the questions quantitatively, I would go back, study each of the issues raised, and fill in the missing details. This was an iterative process constituting a wire brush treatment of a wild set of concepts.
In fairness, much of the early criticism was valid. Of course the burden of proof belongs to the proponent. Among the many positive outcomes of the exercise were that 1) I built a better understanding of the details of such new systems, 2) I built a growing degree of confidence in the notions, and 3) I accumulated a growing pile of paper, including simulation data, to support the idea that the system would be self-learning and stable.
Publication, 1964
Baran:
Most of the work was done in the period 1960-62. As you can imagine, old-era analog transmission engineers were unable to understand what was being contemplated in detail. And, not understanding, they were negative and intuitively believed that it couldn’t possibly work. However, I did build up a set of great supporters as I went along. My most loyal supporters at RAND included Keith Uncapher, my boss at the time, and Paul Armer and Willis Ware, co-heads of the Computer Science Department. RAND provided a remarkable degree of freedom to do this controversial work, and supported me in external disagreements. By 1963 I felt that I had carried this work about as far as appropriate to RAND (which some jokingly say stands for “Research And No Development”). Having completed the bulk of my work, I began wrapping up the technical development phase, and in 1964 I published the set of memoranda, which were primarily written on airplanes in the 1960 to 1962 era. There were some revisions in 1963, and the RAND Memoranda came out in 1964. I continued to work on some of the non-technical issues and gave tutorials in many places, including summer courses at the University of Michigan in 1965 and 1966.
In May 1964 I published a paper in the IEEE Communications Transactions which summarizes the work and provides a pointer to each of a dozen volumes of RAND Memoranda for the serious reader who wanted to read the backup material. Essentially all this work was unclassified, in the belief that we would all be better off if the fate of the world relied on more robust communications networks. Only two of the first twelve Memoranda were classified. One dealt with cryptography and the other with weak spots that were discovered and the patches to counter them. A thirteenth, classified volume was written in 1965 by Rose Hirshfield on the real world geographical layout of the network. And there was a 14th, co-authored with Dr. Rein Turn, describing a secure telephone that could be used with the system; it had possible applications outside the network and so wasn’t included in the numbered series.
Baran:
Getting a new idea out to a larger audience is always challenging, perhaps more so if it is a departure from the classical way of doing things. The IEEE Spectrum, which is sent to all IEEE members, picked up the article in a “Scanning the Transactions” note. I looked to this short summary to be a pointer to the IEEE Transactions article for those that didn’t normally read the Communications Society Transactions. This article in turn pointed to the RAND Memoranda, readily available either from RAND or its depositories around the world. In those days RAND publications were mailed free to anyone who requested a copy.
But no matter how hard one tries, it seems that it is impossible to get the word out to everyone. This is not a novel problem. And it contributes to duplicative research, made more common by the reluctance of some to take the time to review the literature before proceeding with their own research. Some even regard reviewing the literature as a waste of time. I was surprised many years later to find a few key people in closely related research say that they were totally unaware of this work until many years later. I recall describing the system in detailed discussions, only to find out at a later date that the listener had completely forgotten what was said, and didn’t receive his epiphany until much later, ostensibly through a different channel.
Conceptual Gap Between Analog and Digital Thinking
Baran:
The fundamental hurdle in acceptance was whether the listener had digital experience or knew only analog transmission techniques. The older telephone engineers had problems with the concept of packet switching. On one of my several trips to AT&T Headquarters at 195 Broadway in New York City, I tried to explain packet switching to a senior telephone company executive. In mid-sentence he interrupted me: “Wait a minute, son. Are you trying to tell me that you open the switch before the signal is transmitted all the way across the country?” I said, “Yes sir, that’s right.” The old analog engineer looked stunned. He looked at his colleagues in the room while his eyeballs rolled up, sending a signal of his utter disbelief. He paused for a while, and then said, “Son, here’s how a telephone works….” And then he went on with a patronizing explanation of how a carbon button telephone worked. It was a conceptual impasse.
On the other hand, the computer people over at Bell Labs in New Jersey did understand the concept. That was insufficient. When I told the AT&T Headquarters folks that their own research people at Bell Labs had no trouble understanding it and didn’t have the same objections, their response was, “Well, Bell Labs is made up of impractical research people who don’t understand real world communication.”
Willis Ware of RAND tried to build a bridge early in the process. He knew Dr. Edward David, Executive Director of Bell Labs, and asked him for help. Ed set up a meeting at his house with the chief engineer of AT&T and myself to try to overcome the conceptual hurdle. At this meeting I would describe something in language familiar to those who knew digital technology. Ed David would translate what I was saying into language more familiar in the analog telephone world (he practically used Western Electric part numbers) for our AT&T friend, who responded in a like manner. Ed David would then translate it back into computer nerd language.
I would encounter this cultural impasse time after time: on one side, those who were familiar only with the then state of the art of analog communications – highly centralized circuit switching with highly limited intelligence – and on the other, myself, talking about all-digital transmission, smart switches and self-learning networks. But through this process of erosion, more and more people came to understand what was being said. The base of support strengthened in RAND, the Air Force, academia, government and some industrial companies – and parts of Bell Labs. But I could never penetrate the objections of AT&T Headquarters, which at that time had a complete monopoly on telecommunications. It would have been the perfect organization to build the network. Our initial objective was to have the Air Force contract the system out to AT&T, but unfortunately AT&T was dead set against the idea.
Hochfelder:
Were there financial objections as well?
AT&T Headquarters Lack of Receptivity
Baran:
Possibly, but not frontally. They didn’t want to do it for a number of reasons and dug their heels in, looking for publicly acceptable reasons. For example, AT&T asserted that there were not enough paths through the country to provide for the number of routes that I had proposed for the national packet based network, but refused to show us their route maps. (I didn’t tell them that someone at RAND had already acquired a back door copy of the AT&T maps containing the physical routes across the US, since AT&T refused to voluntarily provide these maps, which were needed to model collateral damage to the telephone plant by attacks at the US Strategic Forces.) I told AT&T that I thought that they were in error and asked them to please check their maps more carefully. After a month’s delay in which they never directly answered the question, one of their people responded by grumbling, “It isn’t going to work, and even if it did, damned if we are going to put anybody in competition to ourselves.”
I suspect the major reason for the difficulty in accommodating packet switching at the digital transmission level was that it would violate a basic ground rule of the Bell System – everything added to the telephone system had to work with all previously installed equipment. Everything had to fit into the existing plan. Nothing totally different could be allowed except as a self-contained unit that fit into the overall system. The concept of long distance all-digital communications links connecting small computers serving as switches represented a totally different technology and paradigm, and was too hard for them to swallow. I can understand and respect that reason, but can also appreciate the later necessity for divestiture. Competition better serves the public interest in the longer term than a monopoly, no matter how competent and benevolent that monopoly might be. There is always the danger that the monopoly can be in error, with no way to correct it.
On Bell Labs Response
Baran:
While the folks at AT&T Headquarters violently opposed the technology, there were digitally competent people at Bell Labs who appreciated what it was all about. One of the mysteries that I have never figured out is why, after packet switching was shown to be feasible in practice and many papers had been published by others, it took so many years before papers on packet switching emerged from Bell Labs.
The first paper on the subject that I recall being published in the Bell System Technical Journal was by Dr. John Pierce. This paper described a packet network made up of overlapping Ballantine rings. It was a brilliant idea, and his architecture is used in today’s ATM systems.
Hochfelder:
What is a Ballantine ring?
Baran:
Have you ever seen the Ballantine Beer logo? It is made up of three overlapping rings. Since a signal can be sent in both directions on a loop, no single cut of the loop can stop communications; traffic simply flows the other way around. Because the signal can go both ways, any single cut can be tolerated without loss, allowing time for repair. It is a powerful idea.
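A quick graph check illustrates the property. The ring size is arbitrary, and the code is only a sketch of the cut-tolerance argument, not of Pierce's actual design.

```python
def survives_cut(n: int, cut: tuple) -> bool:
    """Build a ring of n nodes, remove the single link `cut`, and check
    that every node can still reach every other (either way around)."""
    links = [(i, (i + 1) % n) for i in range(n)]
    links.remove(cut)
    reached, stack = {0}, [0]
    while stack:                      # walk the surviving links
        node = stack.pop()
        for a, b in links:
            if a == node and b not in reached:
                reached.add(b); stack.append(b)
            elif b == node and a not in reached:
                reached.add(a); stack.append(a)
    return len(reached) == n

# Cutting any single link of an 8-node ring never partitions it:
print(all(survives_cut(8, (i, (i + 1) % 8)) for i in range(8)))  # → True
```

A ring minus one link is still a connected path, which is exactly why bidirectional loops tolerate any single cut.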
The RAND Formal Recommendation to the Air Force, 1965
Baran:
In 1965 the RAND Corporation issued a formal Recommendation to the Air Force (which it does very rarely) that the Air Force proceed to build the proposed network. The Air Force then asked the MITRE Corporation, a not-for-profit organization that worked for the government, to set up a study and review committee. The Committee, after independent investigation, concluded that the design was valid, that a viable system could be built, and that the Air Force should immediately proceed with implementation.
As the project was about to launch, the Department of Defense said that, as this system was to be a national communications system, it would, in accordance with the Defense Reorganization Act of 1949 (finally being implemented in 1965), fall under the charter of the Defense Communications Agency.
The choice of DCA would have been fine years later, when DCA was more appropriately staffed. But at that time the DCA was a shell organization staffed by people who lacked strength in digital understanding. I had learned through the many briefings I had given to various audiences that there was an impenetrable barrier to understanding packet switching by those who lacked digital experience. At RAND I was essentially free to work on anything that I felt to be of most importance to National Security. This allowed me, for example, to serve on various ad hoc DDR&E (Department of Defense Research & Engineering) committees. I sometimes consulted with Frank Eldridge in the Comptroller’s Office of the Department of Defense, helping him review items in the command and control budgets submitted by the services. Frank Eldridge was an old RAND colleague, initially responsible for the project on the protection of command and control. He was among the strongest supporters of the work that I was doing on Distributed Communications. He had gone over to the Pentagon, working with McNamara’s “whiz kids.” Frank had undergone many of the same battles with AT&T and understood the issues of the RAND, and later Air Force, proposal.
Approval for the money for the Defense Communication Agency (DCA) to undertake the RAND distributed communications system development was under Frank Eldridge’s responsibility. Both Frank and I agreed that DCA lacked the people at that time who could successfully undertake this project and would likely screw up this program. An expensive failure would make it difficult for a more competent agency to later undertake this project. I recommended that this program not be funded at this time and the program be quietly shelved, waiting for a more auspicious opportunity to resurrect it.
The Cold War at this time had cooled from loud threats of thermonuclear warheads to the lower level of surrogate small wars. And, we were bogged down in Viet Nam.
source:
https://ethw.org/Oral-History:Paul_Baran
____________________________________
• Internet backbone
https://en.wikipedia.org/wiki/Internet_backbone
• Tier 1 network
https://en.wikipedia.org/wiki/Tier_1_network
─ List of Tier 1 networks
• https://asrank.caida.org/
• Internet exchange point
https://en.wikipedia.org/wiki/Internet_exchange_point
https://en.wikipedia.org/wiki/List_of_Internet_exchange_points
https://en.wikipedia.org/wiki/List_of_Internet_exchange_points_by_size
____________________________________
Russ Haynal's ISP Page
This page links to the major pieces of the Internet's infrastructure.
http://navigators.com/isp.html
Internet backbone maps
https://web.archive.org/web/20060411203358/http://www.nthelp.com/maps.htm
https://prefix.pch.net/applications/ixpdir/?show_active_only=0&sort=traffic&order=desc
PCH (Packet Clearing House)
Internet Exchange Directory
https://www.pch.net/ixp/dir
http://www.telegeography.com/products/internet-exchange-directory/
http://lookinglass.org/wix.php
https://ixpdb.euro-ix.net/en/
The IXP Database (IXPDB) is an authoritative, comprehensive, public source of data related to IXPs. It collects data directly from IXPs through a recurring automated process. It also integrates data from third-party sources in order to provide a comprehensive and corroborated view of the global interconnection landscape. The combined data can be viewed, analyzed, and exported through this web-based interface and an API.
https://www.opte.org/the-internet
Route Views Project
http://routeviews.org/
The University of Oregon's Route Views project has BGP feeds from all over the Internet.
____________________________________
https://www.opte.org/faq
What is Routeviews?
http://routeviews.org/
University of Oregon Route Views Project
The University's Route Views project was originally conceived as a tool for Internet operators to obtain real-time BGP information about the global routing system from the perspectives of several different backbones and locations around the Internet. Although other tools handle related tasks, such as the various Looking Glass Collections (see e.g. TRACEROUTE.ORG), they typically either provide only a constrained view of the routing system (e.g., either a single provider or the route server) or they do not provide real-time access to routing data.
While the Route Views project was originally motivated by interest on the part of operators in determining how the global routing system viewed their prefixes and/or AS space, there have been many other interesting uses of this Route Views data. For example, NLANR has used Route Views data for AS path visualization and to study IPv4 address space utilization (archive).
The Internet maps created here leverage the Route Views archive data. However, the data only begins in 1997 and we are still hoping to find older routing table dumps that pre-date the Routeviews archive.
source:
https://www.opte.org/faq
____________________________________
http://www.telegeography.com/products/internet-exchange-directory/
https://www.submarinecablemap.com/
https://www.cloudinfrastructuremap.com/
https://www.internetexchangemap.com/
____________________________________
What is the Internet?
─ watch the following time-lapse youtube.com video of a 3-D colored graph; watch it grow over time
https://youtu.be/-L1Zs_1VPXA
US DoD Internet Research
https://www.youtube.com/watch?v=BDV1KZxCKi0
____________________________________
https://en.wikipedia.org/wiki/Internet_backbone
Internet backbone
From Wikipedia, the free encyclopedia
Each line is drawn between two nodes, representing two IP addresses. This is a small look at the backbone of the Internet.
The Internet backbone may be defined by the principal data routes between large, strategically interconnected computer networks and core routers of the Internet. These data routes are hosted by commercial, government, academic and other high-capacity network centers, as well as the Internet exchange points and network access points, that exchange Internet traffic between the countries, continents, and across the oceans. Internet service providers, often Tier 1 networks, participate in Internet backbone traffic by privately negotiated interconnection agreements, primarily governed by the principle of settlement-free peering.
The Internet, and consequently its backbone networks, do not rely on central control or coordinating facilities, nor do they implement any global network policies. The resilience of the Internet results from its principal architectural features, most notably the idea of placing as few network state and control functions as possible in the network elements and instead relying on the endpoints of communication to handle most of the processing to ensure data integrity, reliability, and authentication. In addition, the high degree of redundancy of today's network links and sophisticated real-time routing protocols provide alternate paths of communications for load balancing and congestion avoidance.
The largest providers, known as Tier 1 networks, have such comprehensive networks that they do not purchase transit agreements from other providers.[1]
Infrastructure
Undersea Internet cables
Routing of prominent undersea cables that serve as the physical infrastructure of the Internet.
The Internet backbone consists of many networks owned by numerous companies. Optical fiber trunk lines consist of many fiber cables bundled to increase capacity, or bandwidth. Fiber-optic communication remains the medium of choice for Internet backbone providers for several reasons. Fiber optics allow for fast data speeds and large bandwidth, they suffer relatively little attenuation, allowing them to cover long distances with few repeaters, and they are also immune to crosstalk and other forms of electromagnetic interference which plague electrical transmission.[citation needed] The real-time routing protocols and redundancy built into the backbone are also able to reroute traffic in case of a failure.[2] The data rates of backbone lines have increased over time. In 1998,[3] all of the United States' backbone networks utilized the slowest data rate of 45 Mbit/s. However, technological improvements allowed 41 percent of backbones to reach data rates of 2,488 Mbit/s or faster by the mid-2000s.[4]
History
In the early days of the Internet, backbone providers exchanged their traffic at government-sponsored network access points (NAPs), until the government privatized the Internet, and transferred the NAPs to commercial providers.[1]
Modern backbone
Because of the overlap and synergy between long-distance telephone networks and backbone networks, the largest long-distance voice carriers such as AT&T Inc., MCI (acquired in 2006 by Verizon), Sprint, and Lumen also own some of the largest Internet backbone networks. These backbone providers sell their services to Internet service providers (ISPs).[1]
Each ISP has its own contingency network and is equipped with an outsourced backup. These networks are intertwined and crisscrossed to create a redundant network. Many companies operate their own backbones, which are all interconnected at various Internet exchange points (IXPs) around the world.[7] In order for data to navigate this web, backbone routers are required: routers on the Internet backbone powerful enough to handle the traffic and capable of directing data to other routers in order to send it to its final destination. Without them, information would be lost.[8]
Regional backbone
...
http://navigators.com/isp.html
https://web.archive.org/web/20060411203358/http://www.nthelp.com/maps.htm
https://www.opte.org/about
##### ##### #####
https://en.wikipedia.org/wiki/Tier_1_network
Tier 1 network
From Wikipedia, the free encyclopedia
A Tier 1 network is an Internet Protocol (IP) network that can reach every other network on the Internet solely via settlement-free interconnection (also known as settlement-free peering).[1][2] Tier 1 networks can exchange traffic with other Tier 1 networks without paying any fees for the exchange of traffic in either direction.[3] In contrast, some Tier 2 networks and all Tier 3 networks must pay to transmit traffic on other networks.[3]
Relationship between the various tiers of Internet providers
There is no authority that defines tiers of networks participating in the Internet.[1] The most common and well-accepted definition of a Tier 1 network is a network that can reach every other network on the Internet without purchasing IP transit or paying for peering.[2] By this definition, a Tier 1 network must be a transit-free network (purchases no transit) that peers for free with every other Tier 1 network and can reach all major networks on the Internet. Not all transit-free networks are Tier 1 networks, as it is possible to become transit-free by paying for peering, and it is also possible to be transit-free without being able to reach all major networks on the Internet.
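As a sketch, the definition above can be turned into a toy reachability check. Everything here is hypothetical: the AS numbers, the topology, and the assumption that we know each network's full customer and peering lists (which, as noted below, is rarely public information):

```python
# Hypothetical AS-level topology (invented AS numbers, not real data).
# customers[a] lists the direct customers of AS a; peers[a] lists its
# settlement-free peers; buys_transit[a] says whether a pays for transit.
customers = {1: [3], 2: [4, 5], 3: [], 4: [], 5: []}
peers = {1: [2], 2: [1]}
buys_transit = {1: False, 2: False, 3: True, 4: True, 5: True}

def cone(asn):
    """AS `asn` plus all of its direct and indirect customers."""
    out, stack = {asn}, list(customers[asn])
    while stack:
        c = stack.pop()
        if c not in out:
            out.add(c)
            stack.extend(customers[c])
    return out

def is_tier1(asn):
    """Transit-free and able to reach every AS via peering plus customers."""
    if buys_transit[asn]:
        return False
    reach = cone(asn)
    for p in peers.get(asn, []):
        reach |= cone(p)
    return reach == set(customers)

print([a for a in sorted(customers) if is_tier1(a)])  # [1, 2]
```

In this toy graph only ASes 1 and 2 qualify: they buy no transit and their peering link lets each reach the other's customer tree, so together they cover every network.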
The most widely quoted source for identifying Tier 1 networks is published by Renesys Corporation,[4] but the base information to prove the claim is publicly accessible from many locations, such as the RIPE RIS database,[5] the Oregon Route Views servers, Packet Clearing House, and others.
It can be difficult to determine whether a network is paying for peering or transit, as these business agreements are rarely public information, or are covered under a non-disclosure agreement. The Internet peering community is roughly the set of peering coordinators present at the Internet exchange points on more than one continent. The subset representing Tier 1 networks is collectively understood in a loose sense, but not published as such.
History
The original Internet backbone was the ARPANET when it provided the routing between most participating networks. The development of the British JANET (1984) and U.S. NSFNET (1985) infrastructure programs to serve their nations' higher education communities, regardless of discipline,[6] resulted by 1989 in the NSFNet backbone. The Internet could be defined as the collection of all networks connected and able to interchange Internet Protocol datagrams with this backbone. Such was the weight of the NSFNET program and its funding ($200 million from 1986 to 1995), and the quality of the protocols themselves, that by 1990, when the ARPANET itself was finally decommissioned, TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide.
When the Internet was opened to the commercial markets, multiple for-profit Internet backbone and access providers emerged. The network routing architecture then became decentralized, creating a need for exterior routing protocols; in particular, the Border Gateway Protocol emerged. New Tier 1 ISPs and their peering agreements supplanted the government-sponsored NSFNet, a program that was officially terminated on April 30, 1995.[6] The NSFNet-supplied regional networks then sought to buy national-scale Internet connectivity from the now numerous, private, long-haul networks.
List of Tier 1 networks
These networks are universally recognized as Tier 1 networks, because they can reach the entire internet (IPv4 and IPv6) via settlement-free peering. The CAIDA AS rank is a rank of importance on the internet.[10]
• https://asrank.caida.org/
##### ##### #####
https://asrank.caida.org/
ASRank is CAIDA's ranking of Autonomous Systems (AS) (which approximately map to Internet Service Providers) and organizations (Orgs) (which are a collection of one or more ASes). This ranking is derived from topological data collected by CAIDA's Archipelago Measurement Infrastructure and Border Gateway Protocol (BGP) routing data collected by the Route Views Project and RIPE NCC.
ASes and Orgs are ranked by their customer cone size, which is the number of their direct and indirect customers. Note: We do not have data to rank ASes (ISPs) by traffic, revenue, users, or any other non-topological metric.
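The customer-cone idea can be sketched on a toy provider-to-customer graph. The AS numbers below are invented for illustration; CAIDA's real ranking is derived from measured BGP and traceroute topology data:

```python
# Toy customer-cone ranking on an invented provider->customer graph.
customers = {
    64500: [64501, 64502],   # 64500 sells transit to 64501 and 64502
    64501: [64503],
    64502: [64503, 64504],
    64503: [],
    64504: [],
}

def customer_cone(asn):
    """All direct and indirect customers of `asn` (excluding itself)."""
    cone, stack = set(), list(customers[asn])
    while stack:
        c = stack.pop()
        if c not in cone:
            cone.add(c)
            stack.extend(customers[c])
    return cone

# Rank by cone size, largest first; ties broken by AS number.
ranking = sorted(customers, key=lambda a: (-len(customer_cone(a)), a))
print(ranking)  # [64500, 64502, 64501, 64503, 64504]
```

AS 64500 ranks first because all four other ASes sit in its cone, directly or through its customers.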
https://asrank.caida.org/
##### ##### #####
https://en.wikipedia.org/wiki/Tier_1_network
Winther, Mark (May 2006). "Tier1 ISPs: What They Are and Why They Are Important" (PDF). NTT America Corporate.
http://www.us.ntt.net/downloads/papers/IDC_Tier1_ISPs.pdf
https://www.thousandeyes.com/learning/techtorials/isp-tiers
ThousandEyes is an interesting startup that has made a name for itself with a service that watches pretty much the whole Internet to help companies figure out the source of performance problems in websites and web-based apps. For instance, it can determine whether an outage is the company's fault or that of its service providers.
##### ##### #####
https://en.wikipedia.org/wiki/Internet_exchange_point
Internet exchange point
From Wikipedia, the free encyclopedia
Internet exchange points (IXes or IXPs) are common grounds of IP networking, allowing participant Internet service providers (ISPs) to exchange data destined for their respective networks.[1] IXPs are generally located at places with preexisting connections to multiple distinct networks, i.e., datacenters, and operate physical infrastructure (switches) to connect their participants. Organizationally, most IXPs are each independent not-for-profit associations of their constituent participating networks (that is, the set of ISPs which participate at that IXP). The primary alternative to IXPs is private peering, where ISPs directly connect their networks to each other.
IXPs reduce the portion of an ISP's traffic that must be delivered via their upstream transit providers, thereby reducing the average per-bit delivery cost of their service. Furthermore, the increased number of paths available through the IXP improves routing efficiency (by allowing routers to select shorter paths) and fault-tolerance. IXPs exhibit the characteristics of the network effect.[2]
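One rough way to see the scaling argument is a back-of-the-envelope comparison (an illustration, not a figure from the article): a full mesh of bilateral private peering among n networks needs n(n-1)/2 links, while an exchange switch needs only one port per participant.

```python
# Back-of-the-envelope comparison: bilateral private-peering links needed
# for a full mesh of n networks, versus one IXP port per network.
def full_mesh_links(n):
    return n * (n - 1) // 2

for n in (5, 50, 500):
    print(f"{n} networks: {full_mesh_links(n)} private links vs {n} IXP ports")
```

The quadratic growth of the full mesh is one reason a shared switch fabric exhibits the network effect the article mentions: each new participant makes the exchange more valuable for everyone already connected.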
History
Internet exchange points began as Network Access Points (NAPs).
NSFNet Internet architecture, c. 1995
Operations
A 19-inch rack used for switches at the DE-CIX in Frankfurt, Germany
Technical operations
A typical IXP consists of one or more network switches, to which each of the participating ISPs connect. Prior to the existence of switches, IXPs typically employed fiber-optic inter-repeater link (FOIRL) hubs or Fiber Distributed Data Interface (FDDI) rings, migrating to Ethernet and FDDI switches as those became available in 1993 and 1994.
Asynchronous Transfer Mode (ATM) switches were briefly used at a few IXPs in the late 1990s, accounting for approximately 4% of the market at their peak, and there was an attempt by Stockholm-based IXP NetNod to use SRP/DPT, but Ethernet has prevailed, accounting for more than 95% of all existing Internet exchange switch fabrics. All Ethernet port speeds are to be found at modern IXPs, ranging from 10 Mb/second ports in use in small developing-country IXPs, to ganged 10 Gb/second ports in major centers like Seoul, New York, London, Frankfurt, Amsterdam, and Palo Alto. Ports with 100 Gb/second are available, for example, at the AMS-IX in Amsterdam and at the DE-CIX in Frankfurt.[citation needed]
http://www.drpeering.net/white-papers/Art-Of-Peering-The-IX-Playbook.html
##### ##### #####
https://en.wikipedia.org/wiki/List_of_Internet_exchange_points
https://www.peeringdb.com/
The Interconnection Database
Join. Search. Grow your network.
PeeringDB is a freely available, user-maintained, database of networks, and the go-to location for interconnection data. The database facilitates the global interconnection of networks at Internet Exchange Points (IXPs), data centers, and other interconnection facilities, and is the first stop in making interconnection decisions.
The database is a non-profit, community-driven initiative run and promoted by volunteers. It is a public tool for the growth and good of the Internet. Join the community and support the continued development of the Internet.
##### ##### #####
https://en.wikipedia.org/wiki/List_of_Internet_exchange_points_by_size
____________________________________
https://www.howtogeek.com/751880/the-foundation-of-the-internet-tcpip-turns-40/
How Does TCP/IP Work?
TCP and IP are two separate technologies that work together, hand-in-hand, to achieve reliable connections through a heterogeneous (many different types of computers and links) computer network.
As previously mentioned, IP handles addressing machines on the network and how blocks of data (called “packets“) reach the proper destination. TCP ensures that the packets reach their destination without error, calling ahead to make sure there is a host to receive the information and, if the information is lost on the way or corrupted, re-transmitting the data until it gets there safely.
What's the Difference Between TCP and UDP?
TCP/IP’s architects purposely separated the implementation of TCP and IP to make the network more flexible and modular. In fact, TCP can be swapped out with a different protocol called UDP that is faster but allows data loss in situations where 100% transmission accuracy isn’t necessary, such as a telephone call or a video broadcast.
Network engineers call this modular design a “protocol stack,” and it allows some of the lower layers in the stack to be handled independently in a way that is most appropriate for the local machine architecture. Then the upper layers can work on top of those to communicate with each other. In the case of the Internet, this stack typically consists of four layers:
• Link Layer – Low-level protocols that work with a physical medium (such as Ethernet)
• Internet Layer – Routes packets (IP, for example)
• Transport Layer – Makes and breaks connections (TCP, for example)
• Application Layer – How people use the network (the web, FTP, and others)
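The TCP-versus-UDP trade-off described above can be sketched with Python's standard socket API over the loopback interface. This is an illustration of the transport layer, not production networking code; over loopback the UDP datagram will in practice arrive, but nothing in the protocol guarantees it:

```python
# TCP vs UDP over loopback, using Python's standard socket API.
import socket
import threading

HOST = "127.0.0.1"

# --- TCP: connection-oriented, reliable, ordered byte stream ---
tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp_srv.bind((HOST, 0))            # port 0: let the OS pick a free port
tcp_srv.listen(1)
port = tcp_srv.getsockname()[1]

def echo_once():
    conn, _ = tcp_srv.accept()     # the handshake completes before data flows
    conn.sendall(conn.recv(1024))  # echo the bytes back
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

client = socket.create_connection((HOST, port))
client.sendall(b"hello over TCP")
tcp_reply = client.recv(1024)
client.close()
t.join()
tcp_srv.close()

# --- UDP: connectionless datagrams; no handshake, delivery not guaranteed ---
udp_srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_srv.bind((HOST, 0))
udp_port = udp_srv.getsockname()[1]

udp_cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_cli.sendto(b"hello over UDP", (HOST, udp_port))  # fire and forget
datagram, _ = udp_srv.recvfrom(1024)
udp_cli.close()
udp_srv.close()

print(tcp_reply, datagram)
```

Note how the TCP side needs `listen`/`accept`/`connect` before any data moves, while the UDP side just addresses each datagram and sends it, which is exactly the modularity the "protocol stack" design allows.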
____________________________________
https://www.opte.org/about
What is this about?
Some people need to see to understand.
Since the Internet is an enormous amalgamation of individual networks that provide the relatively seamless communication of data, it seemed logical to draw lines from one point to another.
This project has been a 17+ year labor of love under the moniker of The Opte Project. The map has been an icon of what the Internet looks like in hundreds of books, in movies, museums, office buildings, educational discussions, and countless publications. The map has also become a teaching tool, allowing visual learners to quickly understand the Internet and networking.
Now I hope this map will be a teaching tool on why we need to build a new Internet with new core principles built into it. The Internet is woven into society, and by changing the Internet, it's possible to change the world.
There are many other answers below and in our FAQ section of the site.
https://en.wikipedia.org/wiki/Protocol_Wars
____________________________________
https://en.wikipedia.org/wiki/Domain_Name_System
Most prominently, Domain_Name_System translates readily memorized domain names to the numerical IP addresses needed for locating and identifying computer services and devices with the underlying network protocols.[1] The Domain Name System has been an essential component of the functionality of the Internet since 1985.
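The translation DNS provides can be sketched with Python's stdlib resolver interface. "localhost" is used here because it resolves without any network access; a real domain name would be looked up through DNS servers:

```python
# Name-to-address translation via the system resolver.
import socket

infos = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print(addresses)  # e.g. ['127.0.0.1', '::1']
```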
____________________________________