Friday, March 31, 2023

deep state wins

Investment Theory of Party Competition

https://www.counterpunch.org/2014/09/02/the-zero-sum-game-of-perpetual-war/


selected text from
       Why the deep state always wins
       The zero-sum game of perpetual war
       by Bill Blunden, August 29, 2014 (www.belowgotham.com)

So just who are the “deciders”? American philosopher John Dewey answered this question in one crisp sentence [14]:

   “Politics is the shadow cast on society by big business.”

   This view is formalized in the Investment Theory of Party Competition, a model devised by political scientist Thomas Ferguson, which describes the political process as dominated by corporate interests that coalesce into factions and compete to guide policy. Two researchers, Martin Gilens and Benjamin Page, have published a paper that offers quantitative validation of Ferguson’s model, concluding that [17]:

“Multivariate analysis indicates that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.”

    Jacob Hacker and Paul Pierson explain in their book, Winner-Take-All Politics, that corporations have used similar collective strategies to coordinate their efforts and implement policy changes. The media likes to portray political contests as one individual versus another (as American culture is rooted in the myth of rugged individualism), but it is more accurate to view political struggle as a form of conflict between organizations. A billionaire like George Soros isn’t just a lone citizen; he represents a small army of people.

Richard Fisher of the Dallas Federal Reserve Bank has reported that 12 American megabanks control something on the order of 70% of the American banking industry’s assets [28]. Or consider the investment management company BlackRock, which holds over $3 trillion in assets [29]. This figure is on par with the 2013 U.S. Federal Budget.

   More than a decade ago John Stockwell presciently pointed out an unsettling logic, an instance of Hegelian dialectic in which the ruling class creates its own enemies so as to feed off the ensuing carnage [52]:

“Enemies are necessary for the wheels of the U.S. military machine to turn. If the world were peaceful, we would never put up with this kind of ruinous expenditure on arms at the cost of our own lives. This is where the thousands of CIA destabilizations begin to make a macabre kind of economic sense. They function to kill people who never were our enemies--that’s not the problem--but to leave behind, for each one of the dead, perhaps five loved ones who are now traumatically conditioned to violence and hostility toward the United States. This insures that the world will continue to be a violent place, populated with contras and Cuban exiles and armies in Southeast Asia, justifying the endless, profitable production of arms to ‘defend’ ourselves in such a violent world.”

The defense industry thrives on regional conflicts like these: a constant stream of flash points in America’s self-perpetuating campaign to eradicate terrorism. The cost of the U.S. military campaigns in Iraq, Afghanistan, and Pakistan runs into the trillions of dollars, and much of that funding ends up covering military expenses [53].

Before the United States invaded Iraq, its oil wells weren’t accessible to outside firms. After the invasion, Western oil interests like Shell, BP, and ExxonMobil all gained entry to one of the world’s largest sources of oil [57].

source:
       Why the deep state always wins
       The zero-sum game of perpetual war
       by Bill Blunden, August 29, 2014 (www.belowgotham.com)
       filename:  deep-state-wins.pdf
       http://www.belowgotham.com/deep-state-wins.pdf
   ____________________________________

1 http://www.nuclearsecrecy.com/nukemap/

2 Louis Menand, “Fat Man: Herman Kahn and the nuclear age,” New Yorker, June 27, 2005,
http://www.newyorker.com/magazine/2005/06/27/fat-man

10 “ExxonMobil’s Dirty Secrets, from Indonesia to Nigeria to Washington: Steve Coll on ‘Private Empire’,”
Democracy Now!, May 7, 2012,
http://www.democracynow.org/2012/5/7/exxonmobils_dirty_secrets_from_indonesia_to#

11 Wilford, Hugh, The Mighty Wurlitzer: How the CIA Played America, Harvard University Press, 2008.

12 Excerpts from Manufacturing Consent: Noam Chomsky interviewed by various interviewers,
http://www.chomsky.info/interviews/1992----02.htm

14 Robert Brett Westbrook, John Dewey and American Democracy, Cornell University Press, 1991, page 440.

15 G. William Domhoff, “C. Wright Mills, Power Structure Research, and the Failures of Mainstream Political
Science,” New Political Science 29 (2007), pp. 97-114,
http://www2.ucsc.edu/whorulesamerica/theory/mills_critique.html

Power structure research
http://www2.ucsc.edu/whorulesamerica/

16 Peter Phillips, “Inside Bohemian Grove,” Counterpunch, August 13, 2003,
http://www.counterpunch.org/2003/08/13/inside-bohemian-grove/print

17 Martin Gilens and Benjamin Page, “Testing Theories of American Politics: Elites, Interest Groups, and Average
Citizens,” Perspectives on Politics, Fall 2014,
https://www.princeton.edu/~mgilens/Gilens%20homepage%20materials/Gilens%20and%20Page/Gilens%20and%20Page%202014-Testing%20Theories%203-7-14.pdf

26 Nomi Prins, All the Presidents’ Bankers, Nation Books, 2014,
http://www.democracynow.org/2014/4/8/all_the_presidents_bankers_nomi_prins#

28 Richard W. Fisher, Ending 'Too Big to Fail': A Proposal for Reform Before It's Too Late (With Reference to Patrick
Henry, Complexity and Reality), Dallas Federal Reserve, January 16, 2013,
http://www.dallasfed.org/news/speeches/fisher/2013/fs130116.cfm

29 Peter Phillips and Kimberly Soeiro, “The Global 1%: Exposing the Transnational Ruling Class,” Project Censored,
August 22, 2012, http://www.projectcensored.org/the-global-1-exposing-the-transnational-ruling-class/

31 Top Spenders 1998-2014, OpenSecrets.org, https://www.opensecrets.org/lobby/top.php?indexType=s

40 Mike Lofgren, “Essay: Anatomy of the Deep State,” Bill Moyers and Company, February 21, 2014,
http://billmoyers.com/2014/02/21/anatomy-of-the-deep-state/

41 Dexter Filkins, “The Deep State,” New Yorker, March 12, 2012,
http://www.newyorker.com/magazine/2012/03/12/the-deep-state
   ____________________________________

Le Carre novel

[Mike] Lofgren says he first encountered the term in a spy novel, A Delicate Truth by John le Carre, which describes the hidden power brokers at work in Great Britain.


https://www.americanrhetoric.com/speeches/dwightdeisenhowerfarewell.html


https://www.mikelofgren.net/essay-anatomy-of-the-deep-state/#content

https://en.wikipedia.org/wiki/Mike_Lofgren

https://en.wikipedia.org/wiki/Bill_Blunden_(author)


https://www.counterpunch.org/2014/09/02/the-zero-sum-game-of-perpetual-war/


https://en.wikipedia.org/wiki/Deep_state

https://en.wikipedia.org/wiki/Deep_state_in_the_United_States


https://www.npr.org/2019/11/06/776852841/the-man-who-popularized-the-deep-state-doesnt-like-the-way-its-used#mainContent


https://www.washingtonpost.com/wp-dyn/content/article/2011/01/14/AR2011011404915_pf.html
   ____________________________________



who rules america

 
https://whorulesamerica.ucsc.edu/


WhoRulesAmerica.net
Power, Politics, & Social Change
by G. William Domhoff
Welcome to WhoRulesAmerica.net, a site about how power is distributed and wielded in the United States. It both builds upon and greatly supplements the book Who Rules America?, now in its 8th edition. The book's latest subtitle is "The Corporate Rich, White Nationalist Republicans, and Inclusionary Democrats in the 2020s"; the story of how we got here and where we're going next is complicated, but most of the answers are in Who Rules America? and/or this Web site; you can also watch some videos of Bill Domhoff giving invited lectures on the topic.


Wednesday, March 22, 2023

Burn pit


firefighters

cohort studies of exposure to chemical carcinogens.
   ____________________________________
   (2.2) industries were able to successfully push back, delay, and slow down any significant change, viewed in light of a scout-mindset (accuracy-motivated reasoning) assessment of the situation:
     (2.21) tobacco & cigarettes and cancer [United States] [interesting because of the UK healthcare system, they did not have this problem],
     (2.22) the petroleum & fossil fuel industry and global warming, and
     (2.23) a chemical company and an insecticide [European Union]; the industry can delay change for over two decades, stay in business, maintain the revenue stream, and accumulate the body count; ...
     (2.24) the automobile industry was able to successfully lobby and delay the California Air Resources Board's adoption, implementation, and mandate to switch from internal combustion engine (ICE) vehicles to electric vehicles (EVs) for ... .
     (2.25) 3M and DuPont on the PFOA (PFOA/C-8, FC-143) product line
         for 40 years DuPont knew C-8 was poison
     (2.26) pp.208-209
         Eric Schaeffer, chief of EPA's office of regulatory enforcement
         In his resignation letter, Schaeffer said the nine companies he sued “emit an incredible 5 million tons of sulfur dioxide every year (a quarter of the emissions in the entire country) as well as 2 million tons of nitrogen oxide.” The agency's uncontested scientific data showed that 10,800 people died prematurely each year because of that pollution. The White House, he said, had snatched “defeat from the jaws of victory” because most of the plant owners had been ready to settle. Two of them withdrew from consent decrees.
         As the second Bush-Cheney term neared an end, final rules had yet to be written or enforced.
         “Every day you can postpone saves them a lot of money,” Schaeffer said in an interview, speaking of plant owners. “It's a lot of money. In the long run, finally, finally, most of these power plants will have scrubbers, but they've dragged it out for thirty years.”
          (Angler: The Cheney Vice Presidency, Barton Gellman, 2008, pp. 208-209)
   ____________________________________
Donella H. Meadows, Edited by Diana Wright, Thinking in systems             [ ]

p.104
Changing the length of a delay may utterly change behaviour. Delays are often sensitive leverage points for policy, if they can be made shorter or longer. You can see why that is. If a decision point in a system (or a person working in that part of the system) is responding to delayed information, or responding with a delay, the decisions will be off target. 

p.105
On the other hand, if action is taken too fast, it may nervously amplify short-term variation and create unnecessary instability. Delays determine how fast systems can react, how accurately they hit their targets, and how timely is the information passed around a system. Overshoots, oscillations, and collapses are always caused by delays.
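([ a minimal Python sketch, my own illustration rather than anything from the book, of the point on pp. 104-105: the same corrective decision rule settles smoothly with fresh information but overshoots and oscillates when it acts on delayed information ])

def simulate(delay_steps, target=100.0, gain=0.5, steps=40):
    # The actor adjusts the stock toward the target, but only "sees"
    # the stock level as it was `delay_steps` ago (an information delay).
    stock = 50.0
    history = [stock] * (delay_steps + 1)
    for _ in range(steps):
        perceived = history[-(delay_steps + 1)]   # delayed information
        stock += gain * (target - perceived)      # corrective action
        history.append(stock)
    return history

for d in (0, 2, 5):
    trace = simulate(d)
    print("delay=%d  min=%.1f  max=%.1f" % (d, min(trace), max(trace)))
# delay=0 approaches the target of 100 smoothly; delay=2 overshoots and
# oscillates; delay=5 oscillates with growing amplitude (instability).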

p.107
  In your new position, you experience the information flows, the incentives and disincentives, the goals and discrepancies, the pressures──the bounded rationality──that goes with that position.

p.108
  Change comes first from stepping outside the limited information that can be seen from any single place in the system and getting an overview.  ...

p.108
  It's amazing how quickly and easily behaviour changes can come, with even slight enlargement of bounded rationality, by providing better, more complete, timelier information. 

     (Thinking in Systems: A Primer, Donella H. Meadows, edited by Diana Wright, Sustainability Institute, 2008, QA 402 .M425 2008)
   ____________________________________
Soldiers and Scouts: Why our minds weren't built for truth | Julia Galef
https://www.youtube.com/watch?v=yfRC8ZgBXZw
Long Now Foundation
  Oct 18, 2019

soldier mindset : (directionally motivated reasoning)
                  cling to a single worldview (course of action) and 
                  look for evidence to support that worldview
                   So, it's reasoning that is unconscious, usually. 
                   So, we're hunting for arguments in favor of something that we want to believe or that we already believe.

  scout mindset :  (accuracy-motivated reasoning)
                   get as accurate a picture of the landscape as possible
                    figuring out what is actually true. 
                   their role is not to attack or defend, but it's to go out, see what's really there and form as accurate a map of an issue or a situation as possible, including all of the uncertainties and unknowns. 
   ____________________________________

Soldiers and Scouts with Julia Galef - The Decision Lab

    So, there are all sorts of reasons why soldier mindset might be a really good thing for us. But it's not always a good thing, and it's not always rational. And that's where scout mindset comes in. Scout mindset is the antidote to soldier mindset. It's the mindset in which we try to be honest with ourselves and to be objective about what's actually true. And that's important because it allows us to make better decisions, to see the world in a more accurate way, and to act more effectively in our personal and professional lives.
    BROOKE: That was great. So, what is the difference between perseverance, adaptation, and rational perseverance?
    JULIA: So, perseverance is basically trying to keep doing something even when it's not going well. So, it's persevering in the face of difficulty or adversity. Adaptation is basically trying to change or adjust to the changing situation. And rational perseverance is basically trying to do the best that you can with the information that you have, and the constraints that you have.
    So, it's trying to make the best use of the information that you have to try to achieve your goals. And I think that's really important to remember, because a lot of the time, when we're stuck, or when we're struggling, or when we're not feeling very good about ourselves, a lot of the time what we might want to do is give up. We might want to stop trying.
    And the thing that I think is really important to remember is that there is always hope. Even when things are tough, even when we're facing difficulty, there is always hope. We can always try to adjust and to persevere, even if we don't feel like we're doing very well. And in the end, I think that that's really what matters.
   ____________________________________
https://thedecisionlab.com/podcasts/soldiers-and-scouts-with-julia-galef
   ────────────────────────────────────
Julia: Right. And so, Tetlock dubbed them the superforecasters and wrote this book with Dan Gardner about what they were doing right. Like what was the secret sauce to Superforecasting? It's really interesting and I think unusually rigorous for a social science study. And so, there's a bunch of things in there and you should read it yourself. But one of the things that he notices superforecasters doing a lot more than other people is adjusting their beliefs incrementally.

So, a lot of people will either never update their view at all, or they'll do a... Oh, sorry, just knocked over my mic. A lot of people will never update their view at all or they'll do a 180 where they're like, "Well, all right, I guess I'll give up on that idea because of counter evidence." And the superforecasters did a much more subtle thing where they'd have this view and then they'd read some more news articles and they'd go, "Hmm, that makes me a little less confident."

 • adjusting their beliefs incrementally.
 • one of the things that he notices superforecasters doing a lot more than other people is adjusting their beliefs incrementally.
 • A lot of people will never update their view at all or they'll do a 180 where they're like, "Well, all right, I guess I'll give up on that idea because of counter evidence." And the superforecasters did a much more subtle thing where they'd have this view and then they'd read some more news articles and they'd go, "Hmm, that makes me a little less confident."

 • Or some new development would happen and they'd go, "Actually, this makes war more likely." And so, they'd adjust their confidence upwards from 65 to 75 or something like that. So, it was this very careful, delicate process. And by the end of it, the percentage they eventually landed on was much more accurate than the norm.
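([ a hypothetical Python sketch, mine and not from the podcast, of what incremental updating looks like numerically: Bayes' rule in odds form nudges a 65% belief a few points per piece of evidence, instead of flipping it 180 degrees ])

def update(prob, likelihood_ratio):
    # Multiply the prior odds by the likelihood ratio of the new evidence
    # (how much more probable the evidence is if the belief is true).
    odds = prob / (1.0 - prob)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

p = 0.65   # starting confidence, e.g. "this conflict will escalate"
for lr in (1.3, 0.9, 1.5):   # weak support, weak counter-evidence, support
    p = update(p, lr)
    print("LR=%.1f -> confidence %.2f" % (lr, p))
# Prints roughly 0.71, 0.68, 0.77: small adjustments, never a wholesale flip.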

source:
       https://thedecisionlab.com/podcasts/soldiers-and-scouts-with-julia-galef

Julia Galef is the co-founder and president of the Center for Applied Rationality and host of the podcast Rationally Speaking. Her work on rationality has been featured in publications such as Scientific American, Forbes, The Wall Street Journal, The Atlantic, and TED. Julia is also an acclaimed YouTuber and writer; The Scout Mindset: Why Some People See Things Clearly and Others Don't is her first book.
   ____________________________________

   (4.4)  on measurement 
          p.296
           W. Edwards Deming said that 97 percent of what matters in an organization can't be measured.
          the result of conventional measurement was “tampering”:  manipulating without  genuine understanding.  
          p.296
           Deming wrestled with this issue regularly; when he heard people talk about measuring results, he often asked:
           “How do you know?”  In other words, how can you possibly assess success or failure based on the few minuscule elements tracked by your “measurement machine”?
            How do you know you have chosen the right elements, and weighted them effectively?
            If the assumptions put into the measurement model are false, how do you come to recognize the problem?
           (Peter Senge, Art Kleiner, Charlotte Roberts, Richard Ross, George Roth, Bryan Smith, The Dance of Change: The Challenge of Sustaining Momentum in Learning Organizations (a fifth discipline resource), 1999)
([ in general, some measurement is better than no measurement; the more basic the measurement, in general, the bigger the picture it provides; data is dynamic (like system dynamics) ])
   ____________________________________
Today I learned that after decades of breathing polluted air, my immune system would very likely be compromised; ... 

Decades of Air Pollution Undermine the Immune System

November 21, 2022

The diminished power of the immune system in older adults is usually blamed on the aging process. But a new study by Columbia immunologists shows that decades of particulate air pollution also take a toll.

The study found that inhaled particles from environmental pollutants accumulate over decades inside immune cells in lymph nodes associated with the lung, eventually weakening the cells’ ability to fight respiratory infections.

The findings—published Nov. 21 in Nature Medicine—offer a new reason why individuals become more susceptible to respiratory diseases with age.

Elderly people are especially vulnerable to respiratory infections, a fact brought into stark relief by the COVID pandemic. The death rate from COVID is 80 times greater in people over age 75 than in younger adults, and the elderly are also more vulnerable to influenza and other infections of the lung.

The Columbia researchers weren’t initially looking at air pollution’s influence on the immune system. More than ten years ago, they began to collect tissues from deceased organ donors to study immune cells in multiple mucosal and lymphoid tissues. Such cells have been largely inaccessible to researchers studying the immune system where sampling is limited to peripheral blood.

“When we looked at people’s lymph nodes, we were struck by how many of the nodes in the lung appeared black in color, while those in the GI tract and other areas of the body were the typical beige color,” says Donna Farber, PhD, the George H. Humphreys II Professor of Surgical Sciences (in Surgery) and professor of microbiology & immunology at Columbia University Vagelos College of Physicians and Surgeons, who led the study.

And as the researchers collected more tissue from younger donors, they also noticed an age difference in the appearance of the lung’s lymph nodes: Those from children and teenagers were largely beige, while those from donors over age 30 were tinged with black and got darker with increasing age.

[Image: Lung lymph nodes from six non-smokers between the ages of 20 and 62. Particles of air pollution darken the lymph nodes and impair immune cells within the nodes. Images: Donna Farber / Columbia University Irving Medical Center.]
“When we imaged the lung’s blackened lymph nodes and found they were clogged with particles from airborne pollutants, we started to think about their impact on the lung’s ability to fight infection as people age,” Farber says.

In the new study, she and her colleagues examined tissues from 84 deceased human organ donors ranging in age from 11 to 93, all nonsmokers.

They found that the pollutant particles in the lung’s lymph nodes were located inside macrophages, immune cells that engulf and destroy bacteria, viruses, cellular debris, and other potentially dangerous substances.

The macrophages containing particulates were significantly impaired: they were much less capable of ingesting other particles and producing cytokines—chemical “help” signals—that activate other parts of the immune system. Macrophages in those same lymph nodes that did not contain particulates were unimpaired.

“These immune cells are simply choked with particulates and could not perform essential functions that help defend us against pathogens,” Farber says.

“We do not know yet the full impact pollution has on the immune system in the lung,” Farber adds, “but pollution undoubtedly plays a role in creating more dangerous respiratory infections in elderly individuals and is another reason to continue the work in improving air quality.”

James P. Kiley, PhD, director of the Division of Lung Diseases at the National Heart, Lung, and Blood Institute, part of the National Institutes of Health, agrees. “This is an interesting study that suggests air pollution may contribute to why older people become more susceptible to respiratory infections,” says Kiley, who was not a part of the study. “In addition to supporting ongoing efforts to control air pollution, these findings underscore the importance of additional research to better understand the lung effects of inhaled particulates and the interactions between air pollution and chronic lung diseases.”

source:
       https://www.cuimc.columbia.edu/news/decades-air-pollution-undermine-immune-system
   ____________________________________
April 2021
National Geographic
the fight for clean air

Beth Gardiner
Choked : life and breath in the age of air pollution

Air pollution causes 7 million premature deaths a year
p.40
pp.40-67
p.66
PM2.5 (fine particulate matter: particles with diameter 2.5 micrometers or smaller)
average annual particle pollution (micrograms/m^3)

PM2.5 air pollution by U.S. Air Quality Index
  AQI range    health category
    0- 50      good
   51-100      moderate
  101-150      unhealthy for sensitive groups
  151-200      unhealthy
  201-300      very unhealthy
  301+         hazardous
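
([ a small Python sketch of the lookup implied by the table above; the breakpoints are the standard U.S. EPA AQI bands ])

# Upper bound of each AQI band and its health category, per the table above.
AQI_BANDS = [
    (50,  "good"),
    (100, "moderate"),
    (150, "unhealthy for sensitive groups"),
    (200, "unhealthy"),
    (300, "very unhealthy"),
]

def aqi_category(aqi):
    for upper, name in AQI_BANDS:
        if aqi <= upper:
            return name
    return "hazardous"   # 301 and above

print(aqi_category(42))    # good
print(aqi_category(135))   # unhealthy for sensitive groups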

Beth Gardiner
Choked : life and breath in the age of air pollution 

p.63
pollution gets far less attention, though it kills far more people

p.43
Francesca Dominici, biostatistics professor
air pollution
dirty air
chronic illnesses
weaken the immune system
inflame the airways
leaving the body less able to fight off a respiratory virus
Harvard T.H. Chan School of Public Health
60 million older Americans enrolled in Medicare
 - age, gender, race, zip code
 - the dates and diagnostic codes for all deaths and hospitalizations


Joel Schwartz, Harvard epidemiologist
daily pollutant levels, over 17 years 
p.48
In a 2017 study they found that even in places where air met national standards, pollution was linked to higher death rates
That means “the standard is not safe”

p.48
 - That means “the standard is not safe”
 - viral death rates were higher in places with more PM2.5
 - dirty air ends far more lives, and with far greater regularity than the novel coronavirus.
 - outdoor air pollution
 - smoke from indoor cookstoves 
 - Pollution is a hidden killer
 - It [pollution] doesn't get listed on death certificate. 

p.49
 -  1993 landmark project known as the “six cities” study
 -  Pollution is harmful at much lower levels than once thought, and in many more ways. 
 -  Douglas Dockery, lead author
    -  1993 landmark project
    -  “six cities” study

p.49
Dean Schraufnagel, a pulmonary medicine professor at the University of Illinois at Chicago, when he led a panel in 2018 that reviewed and summarized decades of research [on air pollution]

p.49
Dirty air affects nearly all the body's essential systems triggering heart attacks and arrhythmias, 

p.49
everybody has bronchitis
 - dirty air [pollution] is harmful at much lower level than once thought 
 - Dirty air, his committee reported, affects nearly all the body's essential systems.


p.49
toxic fuel: diesel
Paris, Barcelona, Rome, Frankfurt

p.50
nitrogen dioxide emissions

p.50
diesel and coal

p.50
Valentina Bosetti and
Massimo Tavoni
agriculture
Modern industrial farming is a major polluter
manure, chemical fertilizers,
ammonia, which reacts with other pollutants 

p.57
medical waste incinerator, a chemical plant, a landfill

p.56
Jyoti Pande Lavakare, book
Breathing Here Is Injurious to Your Health

p.56
But economic growth quickly outpaced all antipollution measures.

p.54
India's urban poor, who work or even live on streets, breathe far worse air.

p.57
[Air pollution] is a pandemic in slow motion
   ____________________________________
Dave Oliver, USN (Ret.), Against the Tide: Rickover's Leadership Principles and the Rise of the Nuclear Navy

p.21
   The Navy had not yet invented equipment that could adequately control the atmosphere inside the submarine, so everyone breathed air containing carbon dioxide, as well as other nasty contaminants, at about thirty times normal levels. We understood that our atmosphere wasn't exactly the same as the one that grew corn and soybeans back in Indiana, although doctors weren't sure of the long-term effects. Without reading the New England Journal of Medicine, we could tell excess carbon dioxide affected the body's platelets, as our blood took a long time clotting until we had been off the submarine for a couple of weeks. We assumed the air was also the reason for the low-level pounding ache in the back of our brains.
  ([ chronic exposure to polluted air ])
  ([ continuous exposure to polluted air ])
  ([ symptoms from continuous exposure to polluted air ])
   (Against the Tide: Rickover's Leadership Principles and the Rise of the Nuclear Navy, Rear Admiral Dave Oliver, USN (Ret.), 2014)
   ____________________________________

before the Internet

    ____________________________________
Oral-History:Paul Baran
https://ethw.org/Oral-History:Paul_Baran

Cold War Threat, ~ 1959
Baran:

In late 1959 when I joined the RAND Corporation, the Air Force was synonymous with National Defense. The other services were secondary. The major problem facing the Country and the World was that the Cold War between the two superpowers had escalated by 1959 to the point where both sides were starting to build highly vulnerable missile systems prone to accidents. Whichever side fired their thermonuclear weapons first would essentially destroy the retaliatory capacity of the other. This was a highly unstable and dangerous era. A single accidentally fired weapon could set off an unstoppable nuclear war. A preferred alternative would be to have the ability to withstand a first strike and the capability of returning the damage in kind. This reduces the overwhelming advantage of a first strike, and allows much tighter control over nuclear weapons. This is sometimes called Second Strike Capability. If both sides had a retaliatory capability that could withstand a first-strike attack, a more stable situation would result. This situation is sometimes called Mutually Assured Destruction, also known by its appropriate acronym, MAD. Those were crazy times.

Communications: the Achilles Heel, 1960+
Baran:

The weakest spot in assuring a second strike capability was in the lack of reliable communications. At the time we didn’t know how to build a communication system that could survive even collateral damage by enemy weapons. RAND determined through computer simulations that the AT&T Long Lines telephone system, which carried essentially all the Nation’s military communications, would be cut apart by relatively minor physical damage. While essentially all of the links and the nodes of the telephone system would survive, a few critical points of this very highly centralized analog telephone system would be destroyed by collateral damage alone from missiles directed at air bases, and it would collapse like a house of cards. This rendered critical long distance communications unlikely. Well, what about high frequency radio, i.e. the HF or short wave band? The problem here is that a single high altitude nuclear burst destroys sky wave propagation for hours. While propagation would continue via the ground wave, the sky wave badly needed for long distance radio would not function, reducing usable radio ranges to a few tens of miles.

The fear was that our communications were so vulnerable that each missile base commander would face the dilemma of either doing nothing in the event of a physical attack, or taking action that would mean an all out irrevocable war. A communications system that could withstand attack was needed that would allow reduction of tension at the height of the Cold War.

Broadcast Station Distributed Teletypewriter Network, 1960
Baran:

At that time the expressed concern was for a system able to support Minimum Essential Communications -- a euphemism for the President authorizing a weapons launch.

In 1960 I proposed using broadcast stations as the links of a network. Broadcast stations during the daytime depend solely on the ground wave, which is not subject to the loss of the sky wave. This is the reason that AM broadcast stations have such a short range during the day. I was able to demonstrate using FCC station data that there were enough AM broadcast stations in the right locations and of the right power levels to allow signals to be relayed across the country. I proposed a very simple protocol: just flood the network with the same message.

When I took this briefing around to the Pentagon and other parts of the defense establishment, I received the objection that it didn’t solve the military’s problem: “OK, a very narrow band capacity may take care of the President issuing the orders at the start of a war, but how do you support all the other important communications requirements that you need to operate the military during such a critical time?”

High Data Rate Distributed Communications, 1961 - 64
Baran:

The response was unambiguous. What I proposed wouldn’t fully hack it. So it was “back to the drawing board” time. I started to examine what military communications needs were regarded as essential by reading reports on the subject and asking people at various military command centers. The more I examined the issues, the longer the list. So I said to myself, “As I can’t figure out what essential communications is needed, let’s take a different tack. I’ll give those guys so much damn bandwidth that they wouldn’t know what in Hell to do with it all.” In other words, I viewed the challenge to be the design of a secure network able to send signals over a network being cut up, and yet having the signals delivered with perfect reliability. And with more capacity than anything built to date. When one starts a project, aim for the moon. Reality will cut you back later. But if you don’t aim high at the outset you can never advance very far.

Why Digital? Why Message Blocks?
Baran:

I knew that the signals would have to find their way through surviving paths, which would mean a lot of switching through multiple tandem links. But, at that time long distance telephone communications systems transmitted only analog signals. This placed a fundamental restriction on the number of tandem connected links that could be used before the voice signal quality became unusable. A telephone voice signal could pass through no more than about five independent tandem links before it would become inaudible. This ruled out analog transmission in favor of digital transmission. Digital signals have a wonderful property. As long as the noise is less than the signal’s amplitude it is possible to reconstruct the digital signal without error.

The future survivable system had to be all-digital. At each connected node, the digital signal would be verified as correctly received by the next node. And if not, the signal would be retransmitted. As one day the network would also have to carry voice as well as teletypewriter and computer data, all traffic would be in the same form – bits. All analog signals would first be digitized. To keep the delay times short, the digital stream would be packaged into small message blocks, each with a standardized format. Work on time division multiplexing of digital telephone signals was in an early state at Bell Labs. Their experimental equipment used a data rate of about 1.5 Megabits/sec. I then started with the premise that it would be feasible to use digital transmission, at least for short distances, at 1.5 Megabits/sec, since the signals could be reconstructed at each node. A big problem blocking long distance digital transmission was transmission jitter buildup. Every mile a repeater amplifier chopped the tops off the wave and reconstituted a clean digital signal. But noise caused a cumulative shifting of the zero crossing points. This limited the span distance. I thought that a node terminating each link in a non-synchronous manner should effectively clean up the accumulated jitter. This would provide a de facto way of achieving long distances by such jitter clean-up. And I felt that if that didn’t work, our fallback technology would be extremely cheap microwave links, feasible in this noise-margin-tolerant application.
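
([ a toy Python sketch, my own illustration of the property Baran describes: over tandem links, analog amplify-and-forward accumulates noise, while digital regeneration re-decides each bit at every node, so errors stay rare as long as per-link noise stays below the signal amplitude ])

import random

def trial(links, noise_sigma=0.3, bits=10000, seed=1):
    rng = random.Random(seed)
    analog_err = digital_err = 0
    for _ in range(bits):
        bit = rng.choice((-1.0, 1.0))
        analog = digital = bit
        for _ in range(links):
            analog += rng.gauss(0, noise_sigma)        # noise piles up
            noisy = digital + rng.gauss(0, noise_sigma)
            digital = 1.0 if noisy > 0 else -1.0       # regenerate the bit
        analog_err += (analog > 0) != (bit > 0)
        digital_err += (digital > 0) != (bit > 0)
    return analog_err / float(bits), digital_err / float(bits)

for n in (1, 5, 20):
    a, d = trial(n)
    print("%2d links: analog error rate %.3f, digital %.4f" % (n, a, d))
# At 20 tandem links the analog noise has grown by sqrt(20), so errors are
# common, while the regenerated digital signal remains nearly error-free.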

On Parallelism
Baran:

By this time it was beginning to become clear that the new system’s overall reliability would be significantly greater than the reliability of any one component. Hence I could think in terms of building the entire system out of cheap parts – something previously inconceivable in the all-analog world.

Hochfelder:

Because it is in parallel?

Baran:

Yes. In parallelism there is strength. Many parts must fail before no path can be found through the network. It took a redundancy level of only about three times the theoretical minimum to build a very tough network. If you didn’t have to worry about enemy attacks, then a redundancy level of about 1.5 would suffice to build a very reliable network out of very inexpensive and unreliable parts. And it would later be shown that it was possible to reduce the cost of communication by almost two decimal orders of magnitude. The saving came in part from being able to design the long distance transmission systems as links of a meshed network with alternative paths, rather than allowing huge fade margins where all the links are connected in tandem. With analog transmission every link of the network must be “gold plated” to achieve reliability.
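
([ a toy Monte Carlo sketch, my own and not RAND's simulation, of "in parallelism there is strength": in a redundantly meshed grid, a substantial fraction of links must be destroyed before a path between opposite corners disappears ])

import random

def grid_links(n):
    # Build the links of an n-by-n grid network.
    links = set()
    for r in range(n):
        for c in range(n):
            if c + 1 < n: links.add(((r, c), (r, c + 1)))
            if r + 1 < n: links.add(((r, c), (r + 1, c)))
    return links

def connected(links, n):
    # Depth-first search: can traffic still get corner to corner?
    src, dst = (0, 0), (n - 1, n - 1)
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, stack = {src}, [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        for nb in adj.get(node, []):
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return False

def survival(n=8, kill_frac=0.3, trials=200, seed=42):
    rng = random.Random(seed)
    base = grid_links(n)
    k = int(len(base) * kill_frac)
    ok = 0
    for _ in range(trials):
        dead = set(rng.sample(sorted(base), k))   # random link destruction
        ok += connected(base - dead, n)
    return ok / float(trials)

for frac in (0.2, 0.4, 0.5):
    print("%.0f%% of links destroyed -> connected in %.0f%% of trials"
          % (frac * 100, survival(kill_frac=frac) * 100))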

Hot-Potato Routing
Baran:

A key element of the concept was that it would be necessary to keep a “carbon copy” of each message block, using computer technology, until the next station successfully received the message. The next challenge was to find a way for the packets to seek their own way through the network. This meant that some implicit path information must be contained as housekeeping data within the message block itself. The housekeeping includes data about the source and destination of the packet, together with an implied time measurement such as the number of times the message block had been retransmitted. This small amount of information allowed creation of an algorithm that did a very effective job of routing dynamically changing traffic to always find the best instantaneous path through the network.
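
([ a hypothetical Python rendering of the housekeeping fields described above; the field names are my own illustration, not RAND's actual message-block format ])

from dataclasses import dataclass

@dataclass
class MessageBlock:
    source: str        # originating node
    destination: str   # target node
    handovers: int     # times retransmitted: the implied time measurement
    payload: bytes     # one fixed-size chunk of the digitized data stream

    def relay(self):
        # Each store-and-forward hop increments the handover count; the
        # sending node keeps its "carbon copy" until receipt is verified.
        return MessageBlock(self.source, self.destination,
                            self.handovers + 1, self.payload)

blk = MessageBlock("A", "B", 0, b"...")
print(blk.relay().handovers)   # 1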

Basic Concepts Underlying Packet Switching, 1960
Baran:

I had earlier discovered that very robust networks could be built with only modest increases in redundancy over that required for minimum connectivity. And then it dawned on me that the process of resending defective or missing packets would allow the creation of an essentially error-free network. Since it didn’t make any difference whether a failure was due to enemy attack or unreliable components, it would be possible to build systems where the system reliability is far greater than the reliability of any of its parts. And even with inexpensive components, a super reliable network would result.

Another interesting characteristic was that the network’s learning property would allow users to move around the network, with each person’s address following them. This would allow separating the physical address from the logical address throughout the network, a fundamental characteristic of the Internet.

Another thing that I learned was that in building self-learning systems, it is equally important to forget as it is to learn. For example, when you destroy parts of a network, the network must quickly adapt to routing traffic entirely differently. I found that using two different time constants, one for learning and the other for forgetting, provided the balanced properties desired. And I found it helpful to view the network as an organism, as it had many of the characteristics of an organism in the way it responds to overloads and sub-system failures.

Dynamic Routing, 1961
Baran:

I first thought that it might be possible to build a system capable of smart routing through the network after reading about Shannon’s mouse-through-a-maze mechanism. But instead of remembering only a single path, I wanted a scheme that not only remembered, but also knew when to forget, if the network was chopped up. It is interesting to note that an early simulation showed that after the hypothetical network was 50% instantly destroyed, the surviving pieces of the network reconstituted themselves within half a second of real-world time and again worked efficiently in handling the packet flow.

Hochfelder:

How would the packets know how to do that?

Baran:

Through the use of a very simple routing algorithm. Imagine that you are a hypothetical postman and mail comes in from different directions: North, South, East, and West. You, the postman, would look at the cancellation dates on the mail from each direction. If, for example, our postman were in Chicago, mail from Philadelphia would tend to arrive from the East with the latest cancellation date. If the mail from Philadelphia had arrived from the North, South, or West, it would arrive with an earlier cancellation date, because it would have had to take a longer route (statistically). Thus, the preferred direction to send traffic to Philadelphia would be out over the channel connected from the East, as it had the latest cancellation date. Just by looking at the time stamps on traffic flowing through the post office you get all the information you need to route traffic efficiently.

Each hypothetical post office would be built the same way. And each would have a local table that recorded the statistics of traffic flowing through the post office. With packets, it was easier to increment a count in a field of the packet than to time-stamp. So that is what I did. It’s simple and self-learning. And when this “handover number” got too big, then we knew that the end point was unreachable, and we dropped that packet so that it didn’t clutter the network.

Hochfelder:

Always searching for the shortest path.

Baran:

Yes, that is the scheme. We needed a learning constant and a forgetting constant as no single measurement could be completely trusted. The forgetting constant also allows the network to respond to rapidly varying loads from different places. If the instantaneous load exceeded the capacity of the links, then the traffic is automatically spread through more of the network. I called this doctrine, “Hot Potato Routing.” These days this approach is called “Deflection Routing.” By the way, the routing doctrine used in the Internet differs from the original Hot Potato approach, and is the result of a large number of improvements over the years.
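
([ an illustrative Python sketch, my reconstruction and not RAND's code, of the handover-number scheme just described: each node keeps, per source and per incoming link, a smoothed estimate of observed handover counts and routes toward a destination over the link with the lowest estimate; the single smoothing constant below stands in for the learning constant, and a second, slower decay would implement the forgetting constant ])

from collections import defaultdict

LEARN = 0.3   # learning constant: weight given to each new observation

class Node:
    def __init__(self):
        # est[node][link] -> smoothed handover count of traffic from node
        self.est = defaultdict(lambda: defaultdict(lambda: 16.0))

    def observe(self, source, via_link, handovers):
        # Arriving traffic reveals how "far away" its source is via this
        # link, like the postmarks in Baran's post-office analogy.
        old = self.est[source][via_link]
        self.est[source][via_link] = (1 - LEARN) * old + LEARN * handovers

    def best_link(self, destination):
        links = self.est[destination]
        return min(links, key=links.get)

chicago = Node()
chicago.observe("PHL", "east", 3)    # fresh "postmarks" from the East
chicago.observe("PHL", "north", 7)   # staler ones from the North
print(chicago.best_link("PHL"))      # east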

Basic Properties of Packet Switching, 1960 - 62
Baran:

The term “packet switching” was first used by Donald Davies of the National Physical Laboratory in England, who independently came up with the same general concept in November 1965.

Essentially all the basic concepts of today’s packet switching can be found described either in the 1962 paper or in the August 1964 RAND Memoranda, in which such key concepts as the virtual circuit are described in detail.

The concept of the “virtual circuit” is that the links and nodes of the system are all free, except during those instances when actually sending packets. This allows a huge saving over circuit switching, because 99 percent of the time nothing is being sent so the same facilities can be shared with other potential users.
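
([ a back-of-envelope Python sketch, my own numbers, of why the virtual-circuit saving is so large: if each of 1000 users transmits only 1% of the time, a shared trunk sized for a few dozen simultaneous senders almost never blocks anyone, versus 1000 dedicated circuits under circuit switching ])

from math import comb

def p_overflow(k, n=1000, p=0.01):
    # Probability that more than k of n independent users are active at
    # once, under a simple binomial model of 1%-duty-cycle users.
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i)
                     for i in range(k + 1))

for k in (20, 30, 40):
    print("capacity %d: overflow probability %.1e" % (k, p_overflow(k)))
# With capacity for ~30 simultaneous users (3% of the population), the odds
# that demand exceeds capacity are on the order of one in ten million.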

Then there is the concept of “flow control”, which is the mechanism to automatically prevent any node from overloading. All the basic concepts were worked out in engineering detail in a series of RAND Memoranda (between 10 and 14 volumes, depending on how they are counted). What resulted was a realization that the system would be extremely robust, with the end-to-end error rate essentially zero, even if built with inexpensive components. And it would be very efficient in traffic handling in comparison to the circuit-switching alternative.

Economic Payoff Potential Versus Perceived Risks
Baran:

This combination of economy and capability suggested that, if built and maintained at a cost of $60,000,000 (1964 dollars), it could handle the long distance telecommunications within the Department of Defense that were costing the taxpayer about $2 billion a year.

At the time, the claimed saving in cost was so great that it made the story intuitively unbelievable. It violated the common-sense instincts of the listener, who would say in effect: “If it were ever possible to achieve such efficiencies, the phone company (AT&T) would have done it already.”

Another understandable objection was “This couldn’t possibly work. It is too complicated.” This perception was based on the common view, correct at the time, that computers were big, taking up large glass-walled rooms, and were notoriously unreliable. When I said that each switching node could be a shoe-sized box with the required computer capabilities, many didn’t believe it. (I had planned on doing everything in miniaturized hardware in lieu of using off-the-shelf minicomputers.) So I had the burden of proof, to define the small box down to the circuit level to show that it could indeed be done.

Another issue was the separation of the transmission network from the analog-to-digital conversion points. This is described in detail in Vol. 8 of the ODC series. This RAND Memorandum describes in detail how users are connected to the switching network. The separate unit that is described connects up to 1024 users and converts their analog signals into digital signals. This included voice, teletypewriters, computer modems, etc. One side of the box would connect to the existing analog telephones, while the other side, which was digital, would connect to the switching network, preferably at multiple points to eliminate a single point of failure.

This constant increase in the desire for engineering details caused a great deal of paper to be written at the time, cluttering up the literature. On a positive note, it left us with a very detailed description of packet switching as proposed at that time. This record has been helpful in straightening out some of the later misrepresentations of who did what and when, as found in the popular press’s view of history.

Opposition and Detailed Definition Memoranda, 1961+
Baran:

The enthusiasm that this early plan encountered was mixed. I obtained excellent support from RAND (after an early cool and cautious initial start). Others, particularly those from AT&T (the telephone monopoly at the time), objected violently. Many of the objections were at the detail level, so the burden of proof was then on me to provide proposed implementation descriptions at an ever finer level of detail. Time after time I would return with increasingly detailed briefing charts and reports. But each time I would hear a mantra: “It won’t work because of ____.” “It won’t work because of (some new objection).” I gave the briefing in many places: to various government agencies, to research laboratories, to commercial companies, but primarily to the military establishment. I gave the briefing at least 35 times. It was hard for a visitor with an interest in communications to visit RAND without being subjected to a presentation. My chief purpose in giving these presentations so broadly was that I was looking for reasons that it might not work. I wanted to be absolutely sure that I hadn’t overlooked anything that could affect workability. After each encounter where I could not answer the questions quantitatively, I would go back and study each of the issues raised and fill in the missing details. This was an iterative process constituting a wire-brush treatment of a wild set of concepts.

In fairness, much of the early criticism was valid. Of course the burden of proof belongs to the proponent. Among the many positive outcomes of the exercise were that: 1) I was building a better understanding of the details of such new systems, 2) I was building a growing degree of confidence in the notions, and 3) I had accumulated a growing pile of paper, including simulation data, to support the idea that the system would be self-learning and stable.

Publication, 1964
Baran:

Most of the work was done in the period 1960-62. As you can imagine, old-era analog transmission engineers were unable to understand what was being contemplated in detail. And, not understanding, they were negative and intuitively believed that it couldn’t possibly work. However, I did build up a set of great supporters as I went along. My most loyal supporters at RAND included Keith Uncapher, my boss at the time, and Paul Armer and Willis Ware, co-heads of the Computer Science Department. RAND provided a remarkable degree of freedom to do this controversial work, and supported me in external disagreements. By 1963 I felt that I had carried this work about as far as was appropriate to RAND (which some jokingly say stands for “Research And No Development”). Having completed the bulk of my work, I began wrapping up the technical development phase and published the set of memoranda in 1964; they were primarily written on airplanes in the 1960-62 era. There were some revisions in 1963, and the RAND Memoranda came out in 1964. I continued to work on some of the non-technical issues and gave tutorials in many places, including summer courses at the University of Michigan in 1965 and 1966.

In May 1964 I published a paper in the IEEE Communications Transactions which summarizes the work and provides a pointer to each of a dozen volumes of RAND Memoranda for the serious reader who wanted to read the backup material. Essentially all this work was unclassified, in the belief that we would all be better off if the fate of the world relied on more robust communications networks. Only two of the first twelve Memoranda were classified. One dealt with cryptography and the other with weak spots that were discovered and the patches to counter them. A thirteenth classified volume was written in 1965 by Rose Hirshfield on the real-world geographical layout of the network. And there was a 14th describing a secure telephone that could be used with the system; it had possible applications outside of the network and so wasn’t included in the numbered series. This was co-authored with Dr. Rein Turn.

Baran:

Getting a new idea out to a larger audience is always challenging. Perhaps more so if it is a departure from the classical way of doing things. IEEE Spectrum, which is sent to all IEEE members, picked up the article in a “Scanning the Transactions” item. I looked to this short summary to be a pointer to the IEEE Transactions article for those that didn’t normally read the Communications Society Transactions. This article in turn pointed to the RAND Memoranda, readily available either from RAND or its depositories around the world. In those days RAND publications were mailed free to anyone who requested a copy.

But no matter how hard one tries, it seems that it is impossible to get the word out to everyone. This is not a novel problem. And it contributes to duplicative research, made more common by the reluctance of some to take the time to review the literature before proceeding with their own research. Some even regard reviewing the literature as a waste of time. I was surprised many years later to find a few key people in closely related research say that they were totally unaware of this work until many years later. I recall describing the system in detailed discussions, only to find out at a later date that the listener had completely forgotten what was said, and didn’t receive his epiphany until much later, ostensibly through a different channel.

Conceptual Gap Between Analog and Digital Thinking
Baran:

The fundamental hurdle in acceptance was whether the listener had digital experience or knew only analog transmission techniques. The older telephone engineers had problems with the concept of packet switching. On one of my several trips to AT&T Headquarters at 195 Broadway in New York City, I tried to explain packet switching to a senior telephone company executive. In mid-sentence he interrupted me: “Wait a minute, son. Are you trying to tell me that you open the switch before the signal is transmitted all the way across the country?” I said, “Yes sir, that’s right.” The old analog engineer looked stunned. He looked at his colleagues in the room while his eyeballs rolled up, sending a signal of his utter disbelief. He paused for a while, and then said, “Son, here’s how a telephone works….” And then he went on with a patronizing explanation of how a carbon-button telephone worked. It was a conceptual impasse.

On the other hand, the computer people over at Bell Labs in New Jersey did understand the concept. That was insufficient. When I told the AT&T Headquarters folks that their own research people at Bell Labs had no trouble understanding it and didn’t have the same objections, their response was, “Well, Bell Labs is made up of impractical research people who don’t understand real world communication.”

Willis Ware of RAND tried to build a bridge early in the process. He knew Dr. Edward David, Executive Director of Bell Labs, and asked him for help. Ed set up a meeting at his house with the chief engineer of AT&T and myself to try to overcome the conceptual hurdle. At this meeting I would describe something in language familiar to those who knew digital technology. Ed David would translate what I was saying into language more familiar in the analog telephone world (he practically used Western Electric part numbers) for our AT&T friend, who responded in a like manner. Ed David would then translate it back into computer-nerd language.

I would encounter this cultural impasse time after time between those who were familiar only with the then state of the art of analog communications – highly centralized circuit switching with highly limited intelligence – and myself, talking about all-digital transmission, smart switches, and self-learning networks. But, all through this process of erosion, more and more people came to understand what was being said. The base of support strengthened in RAND, the Air Force, academia, government and some industrial companies -- and parts of Bell Labs. But I could never penetrate the objections of AT&T Headquarters, which at that time had a complete monopoly on telecommunications. It would have been the perfect organization to build the network. Our initial objective was to have the Air Force contract the system out to AT&T to build the network, but unfortunately AT&T was dead set against the idea.

Hochfelder:

Were there financial objections as well?

AT&T Headquarters Lack of Receptivity
Baran:

Possibly, but not frontally. They didn’t want to do it for a number of reasons and dug their heels in, looking for publicly acceptable reasons. For example, AT&T asserted that there were not enough paths through the country to provide for the number of routes that I had proposed for the National packet-based network, but refused to show us their route maps. (I didn’t tell them that someone at RAND had already acquired a back-door copy of the AT&T maps containing the physical routes across the US, since AT&T refused to voluntarily provide these maps, which were needed to model collateral damage to the telephone plant from attacks on the US Strategic Forces.) I told AT&T that I thought that they were in error and asked them to please check their maps more carefully. After a month’s delay in which they never directly answered the question, one of their people responded by grumbling, “It isn’t going to work, and even if it did, damned if we are going to put anybody in competition to ourselves.”

I suspect the major reason for the difficulty in accommodating packet switching at the digital transmission level was that it would violate a basic ground rule of the Bell System -- everything added to the telephone system had to work with all previous equipment presently installed. Everything had to fit into the existing plan. Nothing totally different could be allowed except as a self-contained unit that fit into the overall system. The concept of long distance all-digital communications links connecting small computers serving as switches represented a totally different technology and paradigm, and was too hard for them to swallow. I can understand and respect that reason, but can also appreciate the later necessity for divestiture. Competition better serves the public interest in the longer term than a monopoly, no matter how competent and benevolent that monopoly might be. There is always the danger that the monopoly can be in error, with no way to correct it.

On Bell Labs' Response
Baran:

While the folks at AT&T Headquarters violently opposed the technology, there were digitally competent people in Bell Labs who appreciated what it was all about. One of the mysteries that I have never figured out is why, after packet switching was shown to be feasible in practice and many papers had been published by others, it took so many years before papers on packet switching emerged from Bell Labs.

The first paper on the subject that I recall being published in the Bell System Technical Journal was by Dr. John Pierce. This paper described a packet network made up of overlapping Ballantine rings. It was a brilliant idea, and his architecture is used in today’s ATM systems.

Hochfelder:

What is a Ballantine ring?

Baran:

Have you ever seen the Ballantine Beer logo? It is made up of three overlapping rings. Since a signal can be sent in both directions around a loop, no single cut of the loop need stop communications; traffic simply flows in from the other direction. Any single cut can therefore be tolerated without loss, allowing time for repair. It is a powerful idea.
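To make the ring property concrete, here is a minimal sketch (my illustration in Python, not anything from Pierce's paper): on a bidirectional ring, cutting any single link still leaves every node reachable from every other.

    # Illustrative sketch: a bidirectional ring survives any single link cut.
    def reachable_from_zero(n, cut):
        """Nodes reachable from node 0 on an n-node ring with one link removed.

        The ring joins node i to (i + 1) % n; `cut` is the failed link,
        given as a pair of adjacent node numbers.
        """
        links = {frozenset((i, (i + 1) % n)) for i in range(n)}
        links.discard(frozenset(cut))
        seen, stack = {0}, [0]
        while stack:
            node = stack.pop()
            for nbr in ((node - 1) % n, (node + 1) % n):
                if frozenset((node, nbr)) in links and nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        return seen

    n = 8
    for i in range(n):                     # cut each of the n links in turn
        assert reachable_from_zero(n, (i, (i + 1) % n)) == set(range(n))
    print("every single-link cut leaves the ring fully connected")

Two simultaneous cuts, by contrast, partition a single ring into two arcs, which is presumably why the architecture overlapped several rings.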

The RAND Formal Recommendation to the Air Force, 1965
Baran:

In 1965 the RAND Corporation issued a formal Recommendation to the Air Force (which it does very rarely) for the Air Force to proceed to build the proposed network. The Air Force then asked the MITRE Corporation, a not-for-profit organization that worked for the government, to set up a study and review committee. The Committee, after independent investigation, concluded that the design was valid, that a viable system could be built, and that the Air Force should immediately proceed with implementation.

As the project was about to launch, the Department of Defense said that since this system was to be a national communications system, it would, in accordance with the Defense Reorganization Act of 1949 (finally being implemented in 1965), fall under the charter of the Defense Communications Agency.

The choice of DCA would have been fine years later, when DCA was more appropriately staffed. But at that time the DCA was a shell organization staffed by people who lacked strength in digital understanding. I had learned through the many briefings I had given to various audiences that there was an impenetrable barrier to understanding packet switching among those who lacked digital experience. At RAND I was essentially free to work on anything that I felt to be of most importance to national security. This allowed me, for example, to serve on various ad hoc DDR&E (Department of Defense Research & Engineering) committees. I sometimes consulted with Frank Eldridge in the Comptroller's Office of the Department of Defense, helping him to review items in the command and control budgets submitted by the services. Frank Eldridge was an old RAND colleague, initially responsible for the project on the protection of command and control. He was among the strongest supporters of the work that I was doing on Distributed Communications. He had gone over to the Pentagon, working with McNamara’s “whiz kids.” Frank Eldridge had undergone many of the same battles with AT&T and understood the issues of the RAND, and thence Air Force, proposal.

Approval of the money for the Defense Communications Agency (DCA) to undertake the RAND distributed communications system development fell under Frank Eldridge’s responsibility. Both Frank and I agreed that DCA lacked the people at that time who could successfully undertake this project and would likely screw up the program. An expensive failure would make it difficult for a more competent agency to undertake the project later. I recommended that the program not be funded at that time and instead be quietly shelved, waiting for a more auspicious opportunity to resurrect it.

The Cold War at this time had cooled from loud threats of thermonuclear warheads to the lower level of surrogate small wars. And, we were bogged down in Viet Nam.

source:
        https://ethw.org/Oral-History:Paul_Baran
   ____________________________________

  •  Internet backbone
       https://en.wikipedia.org/wiki/Internet_backbone

  •  Tier 1 network
       https://en.wikipedia.org/wiki/Tier_1_network
      ─ List of Tier 1 networks
         •  https://asrank.caida.org/

  •  Internet exchange point
       https://en.wikipedia.org/wiki/Internet_exchange_point

       https://en.wikipedia.org/wiki/List_of_Internet_exchange_points

       https://en.wikipedia.org/wiki/List_of_Internet_exchange_points_by_size
   ____________________________________
Russ Haynal's ISP Page
This page links to the major pieces of the Internet's infrastructure.
http://navigators.com/isp.html

Internet backbone maps
https://web.archive.org/web/20060411203358/http://www.nthelp.com/maps.htm

https://prefix.pch.net/applications/ixpdir/?show_active_only=0&sort=traffic&order=desc

PCH (Packet Clearing House)
Internet Exchange Directory
https://www.pch.net/ixp/dir

http://www.telegeography.com/products/internet-exchange-directory/

http://lookinglass.org/wix.php

https://ixpdb.euro-ix.net/en/
The IXP Database (IXPDB) is an authoritative, comprehensive, public source of data related to IXPs. It collects data directly from IXPs through a recurring automated process. It also integrates data from third-party sources in order to provide a comprehensive and corroborated view of the global interconnection landscape. The combined data can be viewed, analyzed, and exported through this web-based interface and an API.


https://www.opte.org/the-internet

Route Views Project
Route Views
           http://routeviews.org/
University of Oregon’s Route Views Project. Route Views has feeds from all over the Internet
   ____________________________________
       https://www.opte.org/faq

What is Routeviews?

       http://routeviews.org/

University of Oregon Route Views Project

The University's Route Views project was originally conceived as a tool for Internet operators to obtain real-time BGP information about the global routing system from the perspectives of several different backbones and locations around the Internet. Although other tools handle related tasks, such as the various Looking Glass Collections (see e.g. TRACEROUTE.ORG), they typically either provide only a constrained view of the routing system (e.g., either a single provider or the route server) or they do not provide real-time access to routing data.

While the Route Views project was originally motivated by interest on the part of operators in determining how the global routing system viewed their prefixes and/or AS space, there have been many other interesting uses of this Route Views data. For example, NLANR has used Route Views data for AS path visualization and to study IPv4 address space utilization (archive).

The Internet maps created here leverage the Route Views archive data. However, the data only begins in 1997 and we are still hoping to find older routing table dumps that pre-date the Routeviews archive.

source:
       https://www.opte.org/faq
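For anyone who wants to work with Route Views data directly, here is a hedged sketch (my own, in Python; it assumes a RIB snapshot has already been converted to text with the widely used bgpdump tool's one-line -m output, where pipe-separated fields carry the prefix and AS path) that tallies how many prefixes each origin AS announces:

    # Hedged sketch: tally prefixes per origin AS from a Route Views RIB
    # dump converted with `bgpdump -m` (assumed pipe-separated format:
    # TABLE_DUMP2|time|B|peer_ip|peer_as|prefix|as_path|...).
    import sys
    from collections import Counter

    origins = Counter()
    with open(sys.argv[1]) as f:
        for line in f:
            fields = line.rstrip("\n").split("|")
            if len(fields) < 7 or not fields[6]:
                continue
            origin = fields[6].split()[-1]    # last AS on the path
            if origin.startswith("{"):        # skip AS-set origins like {1,2}
                continue
            origins[origin] += 1

    for asn, count in origins.most_common(10):
        print(f"AS{asn}: {count} prefixes")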
   ____________________________________

http://www.telegeography.com/products/internet-exchange-directory/

https://www.submarinecablemap.com/

https://www.cloudinfrastructuremap.com/

https://www.internetexchangemap.com/
   ____________________________________

What is the Internet?
 ─ watch the following time-lapse youtube.com video of a 3-D colored graph; watch it grow over time

https://youtu.be/-L1Zs_1VPXA

US DoD Internet Research
https://www.youtube.com/watch?v=BDV1KZxCKi0
   ____________________________________
https://en.wikipedia.org/wiki/Internet_backbone

Internet backbone
From Wikipedia, the free encyclopedia

Each line is drawn between two nodes, representing two IP addresses. This is a small look at the backbone of the Internet.
The Internet backbone may be defined by the principal data routes between large, strategically interconnected computer networks and core routers of the Internet. These data routes are hosted by commercial, government, academic and other high-capacity network centers, as well as the Internet exchange points and network access points, that exchange Internet traffic between the countries, continents, and across the oceans. Internet service providers, often Tier 1 networks, participate in Internet backbone traffic by privately negotiated interconnection agreements, primarily governed by the principle of settlement-free peering.

The Internet, and consequently its backbone networks, do not rely on central control or coordinating facilities, nor do they implement any global network policies. The resilience of the Internet results from its principal architectural features, most notably the idea of placing as few network state and control functions as possible in the network elements and instead relying on the endpoints of communication to handle most of the processing to ensure data integrity, reliability, and authentication. In addition, the high degree of redundancy of today's network links and sophisticated real-time routing protocols provide alternate paths of communications for load balancing and congestion avoidance.

The largest providers, known as Tier 1 networks, have such comprehensive networks that they do not purchase transit agreements from other providers.[1]

Infrastructure
Undersea Internet cables
Routing of prominent undersea cables that serve as the physical infrastructure of the Internet.
The Internet backbone consists of many networks owned by numerous companies. Optical fiber trunk lines consist of many fiber cables bundled to increase capacity, or bandwidth. Fiber-optic communication remains the medium of choice for Internet backbone providers for several reasons. Fiber-optics allow for fast data speeds and large bandwidth; they suffer relatively little attenuation, allowing them to cover long distances with few repeaters; and they are immune to crosstalk and other forms of electromagnetic interference which plague electrical transmission.[citation needed] The real-time routing protocols and redundancy built into the backbone are also able to reroute traffic in case of a failure.[2] The data rates of backbone lines have increased over time. In 1998,[3] all of the United States' backbone networks utilized the slowest data rate of 45 Mbit/s. However, technological improvements allowed for 41 percent of backbones to have data rates of 2,488 Mbit/s or faster by the mid-2000s.[4]

History
In the early days of the Internet, backbone providers exchanged their traffic at government-sponsored network access points (NAPs), until the government privatized the Internet, and transferred the NAPs to commercial providers.[1]

Modern backbone
Because of the overlap and synergy between long-distance telephone networks and backbone networks, the largest long-distance voice carriers such as AT&T Inc., MCI (acquired in 2006 by Verizon), Sprint, and Lumen also own some of the largest Internet backbone networks. These backbone providers sell their services to Internet service providers (ISPs).[1]

Each ISP has its own contingency network and is equipped with an outsourced backup. These networks are intertwined and crisscrossed to create a redundant network. Many companies operate their own backbones, which are all interconnected at various Internet exchange points (IXPs) around the world.[7] In order for data to navigate this web, it is necessary to have backbone routers on the Internet backbone: routers powerful enough to handle the load and capable of directing data to other routers in order to send it to its final destination. Without them, information would be lost.[8]

Regional backbone
 ...

http://navigators.com/isp.html

https://web.archive.org/web/20060411203358/http://www.nthelp.com/maps.htm

https://www.opte.org/about

  #####    #####    #####

https://en.wikipedia.org/wiki/Tier_1_network

Tier 1 network
From Wikipedia, the free encyclopedia

A Tier 1 network is an Internet Protocol (IP) network that can reach every other network on the Internet solely via settlement-free interconnection (also known as settlement-free peering).[1][2] Tier 1 networks can exchange traffic with other Tier 1 networks without paying any fees for the exchange of traffic in either direction.[3] In contrast, some Tier 2 networks and all Tier 3 networks must pay to transmit traffic on other networks.[3]

Relationship between the various tiers of Internet providers
There is no authority that defines tiers of networks participating in the Internet.[1] The most common and well-accepted definition of a Tier 1 network is a network that can reach every other network on the Internet without purchasing IP transit or paying for peering.[2] By this definition, a Tier 1 network must be a transit-free network (purchases no transit) that peers for free with every other Tier 1 network and can reach all major networks on the Internet. Not all transit-free networks are Tier 1 networks, as it is possible to become transit-free by paying for peering, and it is also possible to be transit-free without being able to reach all major networks on the Internet.
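The transit-free test in the definition above can be checked mechanically against public AS-relationship data. A hedged sketch (mine, in Python; it assumes CAIDA's as-rel file format, where a line "a|b|-1" means a is a provider of b and "a|b|0" means the two peer, and a hypothetical local filename) lists ASes that purchase no transit, a necessary but not sufficient condition for Tier 1 status:

    # Hedged sketch: find transit-free ASes in a CAIDA as-rel file
    # (assumed format: "a|b|-1" = a provides transit to b; "a|b|0" = peers).
    customers = set()        # every AS that appears as someone's customer
    ases = set()
    with open("as-rel.txt") as f:        # hypothetical local filename
        for line in f:
            if line.startswith("#"):
                continue
            a, b, rel = line.strip().split("|")[:3]
            ases.update((a, b))
            if rel == "-1":
                customers.add(b)         # b buys transit from a

    transit_free = ases - customers
    print(f"{len(transit_free)} of {len(ases)} ASes purchase no transit here")

As the article notes, being transit-free is only part of the definition; this sketch cannot tell paid peering from settlement-free peering, since that distinction lives in private contracts rather than routing data.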

The most widely quoted source for identifying Tier 1 networks is published by Renesys Corporation,[4] but the base information to prove the claim is publicly accessible from many locations, such as the RIPE RIS database,[5] the Oregon Route Views servers, Packet Clearing House, and others.

It can be difficult to determine whether a network is paying for peering or transit, as these business agreements are rarely public information, or are covered under a non-disclosure agreement. The Internet peering community is roughly the set of peering coordinators present at the Internet exchange points on more than one continent. The subset representing Tier 1 networks is collectively understood in a loose sense, but not published as such.

History
The original Internet backbone was the ARPANET when it provided the routing between most participating networks. The development of the British JANET (1984) and U.S. NSFNET (1985) infrastructure programs to serve their nations' higher education communities, regardless of discipline,[6] resulted in 1989 with the NSFNet backbone. The Internet could be defined as the collection of all networks connected and able to interchange Internet Protocol datagrams with this backbone. Such was the weight of the NSFNET program and its funding ($200 million from 1986 to 1995)—and the quality of the protocols themselves—that by 1990 when the ARPANET itself was finally decommissioned, TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide.

When the Internet was opened to the commercial markets, multiple for-profit Internet backbone and access providers emerged. The network routing architecture then became decentralized, and a need arose for exterior routing protocols; in particular, the Border Gateway Protocol emerged. New Tier 1 ISPs and their peering agreements supplanted the government-sponsored NSFNet, a program that was officially terminated on April 30, 1995.[6] The NSFNet-supplied regional networks then sought to buy national-scale Internet connectivity from these now numerous, private, long-haul networks.

List of Tier 1 networks
These networks are universally recognized as Tier 1 networks, because they can reach the entire internet (IPv4 and IPv6) via settlement-free peering. The CAIDA AS rank is a rank of importance on the internet.[10]
  •  https://asrank.caida.org/

  #####    #####    #####

https://asrank.caida.org/
ASRank is CAIDA's ranking of Autonomous Systems (AS) (which approximately map to Internet Service Providers) and organizations (Orgs) (which are a collection of one or more ASes). This ranking is derived from topological data collected by CAIDA's Archipelago Measurement Infrastructure and Border Gateway Protocol (BGP) routing data collected by the Route Views Project and RIPE NCC.

ASes and Orgs are ranked by their customer cone size, which is the number of their direct and indirect customers. Note: We do not have data to rank ASes (ISPs) by traffic, revenue, users, or any other non-topological metric.
https://asrank.caida.org/
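CAIDA's customer cone is straightforward to compute once you have provider-to-customer links: start from an AS and follow provider->customer edges transitively. A minimal sketch (mine, reusing the as-rel format and hypothetical filename assumed above):

    # Hedged sketch: customer cone = an AS plus everything reachable by
    # repeatedly following provider->customer links in an as-rel file.
    from collections import defaultdict

    down = defaultdict(set)              # provider -> direct customers
    with open("as-rel.txt") as f:        # hypothetical local filename
        for line in f:
            if line.startswith("#"):
                continue
            a, b, rel = line.strip().split("|")[:3]
            if rel == "-1":
                down[a].add(b)

    def customer_cone(asn):
        seen, stack = {asn}, [asn]
        while stack:                     # depth-first walk downward
            for c in down[stack.pop()]:
                if c not in seen:
                    seen.add(c)
                    stack.append(c)
        return seen

    print(len(customer_cone("3356")))    # e.g. the cone size of AS3356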

  #####    #####    #####

https://en.wikipedia.org/wiki/Tier_1_network

 Winther, Mark (May 2006). "Tier1 ISPs: What They Are and Why They Are Important" (PDF). NTT America Corporate.
http://www.us.ntt.net/downloads/papers/IDC_Tier1_ISPs.pdf

https://www.thousandeyes.com/learning/techtorials/isp-tiers

ThousandEyes is an interesting startup that has made a name for itself with a service that watches pretty much the whole internet to help companies figure out the source of performance problems in websites and web-based apps. For instance, it can determine whether an outage is the company's fault or that of its service providers.

  #####    #####    #####
 
https://en.wikipedia.org/wiki/Internet_exchange_point

Internet exchange point
From Wikipedia, the free encyclopedia

Internet exchange points (IXes or IXPs) are common grounds of IP networking, allowing participant Internet service providers (ISPs) to exchange data destined for their respective networks.[1] IXPs are generally located at places with preexisting connections to multiple distinct networks, i.e., datacenters, and operate physical infrastructure (switches) to connect their participants. Organizationally, most IXPs are each independent not-for-profit associations of their constituent participating networks (that is, the set of ISPs which participate at that IXP). The primary alternative to IXPs is private peering, where ISPs directly connect their networks to each other.

IXPs reduce the portion of an ISP's traffic that must be delivered via their upstream transit providers, thereby reducing the average per-bit delivery cost of their service. Furthermore, the increased number of paths available through the IXP improves routing efficiency (by allowing routers to select shorter paths) and fault-tolerance. IXPs exhibit the characteristics of the network effect.[2]
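The per-bit cost argument is easy to put in numbers. Below is a back-of-the-envelope sketch (all prices and the peered-traffic share are invented placeholders, not quoted market rates) of when an IXP port pays for itself against transit:

    # Hypothetical arithmetic only: invented prices, not market data.
    transit_price = 0.50      # $ per Mbps per month via upstream transit
    ixp_port_cost = 900.00    # $ per month for an IXP port, all-in
    peered_fraction = 0.35    # share of traffic deliverable to IXP peers

    def monthly_saving(traffic_mbps):
        offloaded = traffic_mbps * peered_fraction
        return offloaded * transit_price - ixp_port_cost

    for mbps in (1_000, 5_000, 10_000):
        print(f"{mbps} Mbps -> ${monthly_saving(mbps):,.0f}/month")
    # Break-even near 5,143 Mbps here; beyond that, peering cuts per-bit cost.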

History
Internet exchange points began as Network Access Points or NAPs ...
NSFNet Internet architecture, c. 1995

Operations

A 19-inch rack used for switches at the DE-CIX in Frankfurt, Germany
Technical operations
A typical IXP consists of one or more network switches, to which each of the participating ISPs connect. Prior to the existence of switches, IXPs typically employed fiber-optic inter-repeater link (FOIRL) hubs or Fiber Distributed Data Interface (FDDI) rings, migrating to Ethernet and FDDI switches as those became available in 1993 and 1994.

Asynchronous Transfer Mode (ATM) switches were briefly used at a few IXPs in the late 1990s, accounting for approximately 4% of the market at their peak, and there was an attempt by Stockholm-based IXP NetNod to use SRP/DPT, but Ethernet has prevailed, accounting for more than 95% of all existing Internet exchange switch fabrics. All Ethernet port speeds are to be found at modern IXPs, ranging from 10 Mb/second ports in use in small developing-country IXPs, to ganged 10 Gb/second ports in major centers like Seoul, New York, London, Frankfurt, Amsterdam, and Palo Alto. Ports with 100 Gb/second are available, for example, at the AMS-IX in Amsterdam and at the DE-CIX in Frankfurt.[citation needed]

http://www.drpeering.net/white-papers/Art-Of-Peering-The-IX-Playbook.html

  #####    #####    #####

https://en.wikipedia.org/wiki/List_of_Internet_exchange_points

https://www.peeringdb.com/

The Interconnection Database
Join. Search. Grow your network.
PeeringDB is a freely available, user-maintained, database of networks, and the go-to location for interconnection data. The database facilitates the global interconnection of networks at Internet Exchange Points (IXPs), data centers, and other interconnection facilities, and is the first stop in making interconnection decisions.

The database is a non-profit, community-driven initiative run and promoted by volunteers. It is a public tool for the growth and good of the Internet. Join the community and support the continued development of the Internet.

  #####    #####    #####

https://en.wikipedia.org/wiki/List_of_Internet_exchange_points_by_size

https://prefix.pch.net/applications/ixpdir/?show_active_only=0&sort=traffic&order=desc

PCH (Packet Clearing House)
Internet Exchange Directory
https://www.pch.net/ixp/dir

http://www.telegeography.com/products/internet-exchange-directory/

http://lookinglass.org/wix.php

https://ixpdb.euro-ix.net/en/
The IXP Database (IXPDB) is an authoritative, comprehensive, public source of data related to IXPs. It collects data directly from IXPs through a recurring automated process. It also integrates data from third-party sources in order to provide a comprehensive and corroborated view of the global interconnection landscape. The combined data can be viewed, analyzed, and exported through this web-based interface and an API.

   ____________________________________
http://www.telegeography.com/products/internet-exchange-directory/

https://www.submarinecablemap.com/

https://www.cloudinfrastructuremap.com/

https://www.internetexchangemap.com/
   ____________________________________
https://www.howtogeek.com/751880/the-foundation-of-the-internet-tcpip-turns-40/

How Does TCP/IP Work?
TCP and IP are two separate technologies that work together, hand-in-hand, to achieve reliable connections through a heterogeneous (many different types of computers and links) computer network.

As previously mentioned, IP handles addressing machines on the network and how blocks of data (called “packets”) reach the proper destination. TCP ensures that the packets reach their destination without error, calling ahead to make sure there is a host to receive the information and, if the information is lost on the way or corrupted, re-transmitting the data until it gets there safely.

What's the Difference Between TCP and UDP?
TCP/IP’s architects purposely separated the implementation of TCP and IP to make the network more flexible and modular. In fact, TCP can be swapped out with a different protocol called UDP that is faster but allows data loss in situations where 100% transmission accuracy isn’t necessary, such as a telephone call or a video broadcast.

Network engineers call this modular design a “protocol stack,” and it allows some of the lower layers in the stack to be handled independently in a way that is most appropriate for the local machine architecture. Then the upper layers can work on top of those to communicate with each other. In the case of the Internet, this stack typically consists of four layers:

  •  Link Layer – Low-level protocols that work with a physical medium (such as Ethernet)
  •  Internet Layer – Routes packets (IP, for example)
  •  Transport Layer – Makes and breaks connections (TCP, for example)
  •  Application Layer – How people use the network (the web, FTP, and others)
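To ground the TCP/UDP distinction described above, here is a minimal sketch using Python's standard socket module; example.com and the local port number are placeholders:

    # Minimal sketch: the same idea sent two ways.
    import socket

    # TCP: handshake first, then a reliable, ordered byte stream.
    with socket.create_connection(("example.com", 80), timeout=5) as tcp:
        tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tcp.recv(200).decode(errors="replace"))

    # UDP: no handshake, no delivery guarantee; the datagram just goes out.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"ping", ("127.0.0.1", 9999))   # nothing confirms arrival
    udp.close()

The asymmetry is the point: TCP's reliability is implemented above IP by the endpoints themselves, exactly the end-to-end design the backbone article described.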
   ____________________________________
https://www.opte.org/about

What is this about?
Some people need to see to understand.

Since the Internet is an enormous amalgamation of individual networks that provide the relatively seamless communication of data, it seemed logical to draw lines from one point to another.

This project has been a 17+ year labor of love under the moniker of The Opte Project. The map has been an icon of what the Internet looks like in hundreds of books, in movies, museums, office buildings, educational discussions, and countless publications. The map has also become a teaching tool, allowing visual learners to quickly understand the Internet and networking.

Now I hope this map will be a teaching tool on why we need to build a new Internet with new core principles built into it.  The Internet is woven into society, and by changing the Internet, it's possible to change the world.

There are many other answers below and in our FAQ section of the site.

https://en.wikipedia.org/wiki/Protocol_Wars
   ____________________________________
https://en.wikipedia.org/wiki/Domain_Name_System

Most prominently, the Domain Name System translates readily memorized domain names to the numerical IP addresses needed for locating and identifying computer services and devices with the underlying network protocols.[1] The Domain Name System has been an essential component of the functionality of the Internet since 1985.
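That name-to-address step is one function call away in most languages; a minimal Python sketch using the system resolver (the domain is just an example):

    # Minimal sketch: ask the system resolver for the addresses behind a name.
    import socket

    addrs = {sa[0] for _, _, _, _, sa in socket.getaddrinfo("wikipedia.org", None)}
    print(sorted(addrs))    # numerical IPv4/IPv6 addresses for the domain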
   ____________________________________

Robert Freegard (1 March 1971)

career conman
Robert Freegard (wiki ??)
an illusionist, sociopath
all of us have a story we want to be told and if you are told that story at the wrong time by the wrong person, they can have an incredibly powerful hold over you;
magician who is able to look at people, see exactly what it is they are missing in their lives, and give it to them and ...
has this same ability to put forth this huge illusion
these people do exist, they have these powers; be very wary of them;
12 years
what's the dramatic essence of this story
what's the dramatic center that we can tell and how do we do that
screenplay (research) ==> mechanic for the script

articles, trial, Robert Freegard
manipulated
trying to crush emotionally
FBI agent, Scotland Yard
Robert Freegard, gifted at manipulation
tending bar, selling car
a woman missing for 10 years
breaking her confidence, breaking her peace of mind
convicted of kidnapping by fraud
as opposed to kidnapping by force
kidnapping by mind control (mind fuck)
he deprived people freedom with his mind rather than by force
that's an incredible concept

Robert Freegard appealed the conviction.
He was released and freed.

control, pleasure, reassurance, power
sociopath, does not tip into psychopath
fraud, fear, brainwashing, cult leader

source:
        Rogue agent 2021 (movie, DVD format)
   ____________________________________
https://en.wikipedia.org/wiki/Robert_Hendy-Freegard

Robert Hendy-Freegard

From Wikipedia, the free encyclopedia
Born:    Robert Freegard, 1 March 1971 (age 52), Dronfield, Derbyshire, England
Other names:    David Hendy, David Clifton
Known for:    Conman and impostor
Criminal charges:    Theft, deception and kidnapping-by-fraud (2005); kidnap conviction quashed on appeal (2007)
Criminal penalty:    Life in prison (2005); reduced to 9 years on appeal (2007)
Criminal status:    Released in 2009

Robert Hendy-Freegard (born Robert Freegard, 1 March 1971)[1] is a British convicted conman and impostor who masqueraded as an MI5 agent while working as a barman and car salesman.[2] He is also known as David Hendy and David Clifton.[3]
 
Career

Hendy-Freegard was born in Dronfield and started his career as a barman and car salesman.[4] Hendy-Freegard met his victims on social occasions or as customers in the pub or car dealership where he worked. Having met the victims, he claimed to be an undercover agent, working for MI5, the Special Branch or for Scotland Yard. He applied pressure and psychological stress to his victims, claiming they were threatened with assassination by the IRA, to coerce them into following his demands.[5]

Having won his victims over, he coerced money out of them and pressured them to do his bidding, including cutting off contact with their family and friends; performing "loyalty tests"; and living alone in poor conditions. He seduced five women, claiming that he wanted to marry them. Initially some of the victims refused to cooperate with the police because he had warned them that police would be double agents or MI5 agents performing another "loyalty test".[6]

In 1992, while working in The Swan, a pub in Newport, Shropshire, Freegard befriended two women, Sarah Smith and Maria Hendy, and a man, John Atkinson. All three were agricultural students at Harper Adams University in Edgmond.[7] He told Atkinson that he was an MI5 undercover agent who was investigating an IRA cell in the college. He forced Atkinson to let himself be beaten up to prove his loyalty and to show that he was "hard enough". He also persuaded him to behave in a bizarre manner in college to prove his loyalty and to alienate him from friends. Hendy-Freegard then told Atkinson his cover was blown and both of them had to go undercover. He persuaded Atkinson to tell Smith, who at the time was Atkinson's girlfriend, and Hendy that he had liver cancer, and persuaded them to accompany him on a "farewell tour" all over England.[8]

Later he let them in on "the story". He told them to sever all contact with their families because they were in danger just through being associated with him. They moved to Sheffield and gave him all their money.[9] Maria Hendy became his lover and gave birth to his two daughters; she was later beaten by him, losing one of her teeth.[9] During their relationship he changed his surname by deed poll to Hendy-Freegard.[10]

Next, in 1995, Hendy-Freegard had an affair with a recently married personal assistant, Elizabeth Bartholomew (née Richardson). He told her to take out loans, supposedly to settle her debts following her divorce, and then made her sleep on park benches.[7]

In 1996, Hendy-Freegard told a woman in Newcastle, Lesley Gardner, that he needed money to buy off IRA killers, who had been released after the Good Friday agreement. She gave him £16,000 over six years. He also sold her car and again kept the money.[7]

In 2000, Hendy-Freegard convinced a female company director, Renata Kister, that MI5 had told him to watch someone in the Sheffield car dealership where he was working and persuaded her to buy a better car. He sold her original car, kept the money and persuaded her to take out a £15,000 loan for him. He also asked Kister for a room for Sarah Smith because she was supposedly in a witness protection programme. He told her that Smith could not speak English, and told Smith that for security reasons she had to pretend that she could not understand anything said to her, so that the two women would not speak to each other.[6][7]

In 2000, Hendy-Freegard met a lawyer, Caroline Cowper, who was a customer at the car dealership in Chiswick, West London. He helped her change her car, pocketed the difference, asked for more, persuaded her to give more money for a leasing business they would run together, and stole £14,000 from her building society account. They later became lovers and went on holidays all over the world. They then became engaged, before her family intervened. When the leased car did not materialize, he told her that the Polish mafia had taken it.[7]

In 2002, Hendy-Freegard seduced an American child psychologist, Kimberley Adams, with stories of how he had infiltrated a criminal network and killed a criminal who had threatened to expose him. He said he wanted to marry her, on condition that she would also become an agent and cut off the contact with her family.[7]

In 2002, Scotland Yard and the FBI organized a sting operation with the help of Kimberley Adams' parents. First, the FBI bugged the parents' phone. Adams' mother told Hendy-Freegard she would give him $10,000, but only in person. Hendy-Freegard met the mother at Heathrow Airport, where police apprehended him.[11]

On 23 June 2005, after an eight-month trial, Blackfriars Crown Court convicted Hendy-Freegard on two counts of kidnapping (John Atkinson and Sarah Smith), 10 of theft and eight of deception. On 6 September 2005, he was given a life sentence.[9][12]

On 25 April 2007, it was reported that Hendy-Freegard had appealed against his kidnapping convictions.[13] The Court of Appeal judgment played an important role in defining the modern offence of kidnap, holding that inducing a person to go from one place to another by fraud does not constitute kidnapping.[14] His life sentence was revoked but he still faced nine years for the other offences.[15][16] He was released in May 2009.

In 2011, using the name David Hendy, he met and seduced a British woman named Sandra Clifton through an online dating site.[17] He claimed to work in the media industry selling advertising space to large companies, and often hinted at the amount of money he had, convincing Sandra of his wealth with expensive gifts and trips. It has been alleged that as their relationship developed over the course of the next few years, he gradually manipulated Sandra and her family members through deception, coercive control and mental abuse, progressively isolating Sandra from her family and friends.[17][18] Hendy-Freegard has denied these claims.[17] Sandra Clifton later ceased contact with her children Sophie and Jake, and with their father Mark Clifton.[17][19]

It has been reported that Hendy-Freegard and Sandra Clifton have been working in the beagle breeding and showing business, and that he uses the name David Clifton.[19] According to the Netflix documentary he lived and worked in France where he traded luxury dogs;[20] however, more recent media suggests he lived in Reading, Berkshire.[21]

On 26 August 2022, a British man, whom the French media identified as Hendy-Freegard, caused severe injuries to two gendarmes after a veterinary health inspection of his dog-breeding activities in Vidaillat, France.[22]

On 2 September 2022, after being on the run for eight days, he was apprehended by the Police of Dilbeek in Groot-Bijgaarden (near Brussels) following the issuance of a European arrest warrant. He was travelling alone.[23] On 3 September, he appeared before a Belgian magistrate and was detained pending his extradition to France.[24] On 15 September, the Brussels council court authorised Hendy-Freegard's extradition to France.[25][26] On 19 September, the prosecutor's office confirmed that he had appealed against the extradition.[26] Hendy-Freegard was handed over to the French authorities on 17 October.[27] He appeared before judges in Limoges on 20 October, when he was indicted for the alleged attempted murder of 'persons holding public authority' and held in pre-trial detention at the Limoges remand centre.[27]

In popular culture

The Spy Who Stole My Life, a television documentary about Hendy-Freegard, was broadcast on Channel Five on 7 September 2005.[28]

He was the subject of the January 2022 Netflix documentary mini-series The Puppet Master: Hunting the Ultimate Conman.[29] In July 2022, Netflix UK streamed the feature film Rogue Agent, based on his life.[30]
   ____________________________________