── Why learn to cope with competition when you can eliminate it?
── The Bells were corporate America's reigning champs in the rope-a-dope game of keeping up appearances in the front office while quietly pummeling their rivals in the parking lot.
── (when in doubt, follow the money; beware of the nature of the beast)
____________________________________
Tim Wu, The Master Switch, 2010 [ ]
p.240
Why learn to cope with competition when you can eliminate it?
p.241
If there was any hope of recovering even a bit of that former power, it lay in a painstaking long-term strategy.
But Whitacre and others with revanchist longing would bide their time. They knew that while Bell was officially a public menace, the old regime retained many loyalists and friends in Congress, federal agencies, and most of all, state and local governments.
p.242
The perceived value of competition has varied considerably in American history. In the late 19th and early 20th centuries, broadly speaking, many business leaders like Vail, as well as labor leaders and economists, thought that competition, particularly in operating utilities or other economic necessities, was wasteful and destructive. In such sectors, government regulation was deemed prudent to protect businesses serving a vital social function from the excesses of competition, assuring them of, if not monopoly, at least a reasonable degree of market share.
p.243
Embracing the process of “competition” that was under way, the Bells prepared to make their comeback as a dominant player in a nominally open industry.
p.244
The Bells were corporate America's reigning champs in the rope-a-dope game of keeping up appearances in the front office while quietly pummeling their rivals in the parking lot.
While not keen to share their toys under the Act's so-called unbundling rules, the Bells immediately understood that the deal was a win for them. What mattered most was one critical fact: the 1996 law superseded the consent decree that had ended the Bell antitrust lawsuit. With that decree abrogated, the Bells were now under the supervision of the FCC, as opposed to the hawk-eyed taskmaster Judge Greene. It was for them the catbird seat: there was no rival they couldn't handle, except for the federal courts and the Department of Justice.
p.244-247
MCI, Microwave Communication Inc.
(The Master Switch: The Rise and Fall of Information Empires, Tim Wu, 2010.)
____________________________________
── Mark Lemley of Stanford Law School is director of the Stanford Program in Law, Science and Technology.
── Qualcomm's various court cases for several years.
── "Qualcomm made a commitment that it would licence its chips on reasonable and non-discriminatory terms, because they wanted their chips to be included in the industry standards, and then they created a structure to avoid doing this,"
https://hardware.slashdot.org/story/20/08/29/057257/tesla-intel-and-others-urge-americas-ftc-to-oppose-qualcomm-ruling
Tesla, Intel, and Others Urge America's FTC to Oppose Qualcomm Ruling (bbc.co.uk)
Posted by EditorDavid on Saturday August 29, 2020 @03:37PM
Prof Mark Lemley of Stanford Law School is director of the Stanford Program in Law, Science and Technology. He has been following Qualcomm's various court cases for several years. "Qualcomm made a commitment that it would licence its chips on reasonable and non-discriminatory terms, because they wanted their chips to be included in the industry standards, and then they created a structure to avoid doing this," he said.
"I think they are in fact violating the antitrust laws."
____________________________________
Conceptual Gap Between Analog and Digital Thinking
Baran:
The fundamental hurdle in acceptance was whether the listener had digital experience or knew only analog transmission techniques. The older telephone engineers had problems with the concept of packet switching. On one of my several trips to AT&T Headquarters at 195 Broadway in New York City I tried to explain packet switching to a senior telephone company executive. In mid-sentence he interrupted me, “Wait a minute, son. Are you trying to tell me that you open the switch before the signal is transmitted all the way across the country?” I said, “Yes sir, that’s right.” The old analog engineer looked stunned. He looked at his colleagues in the room while his eyeballs rolled up sending a signal of his utter disbelief. He paused for a while, and then said, “Son, here’s how a telephone works….” And then he went on with a patronizing explanation of how a carbon button telephone worked. It was a conceptual impasse.
On the other hand, the computer people over at Bell Labs in New Jersey did understand the concept. But that was insufficient. When I told the AT&T Headquarters folks that their own research people at Bell Labs had no trouble understanding the concept and didn’t share the Headquarters objections, their response was, “Well, Bell Labs is made up of impractical research people who don’t understand real world communication.”
Willis Ware of RAND tried to build a bridge early in the process. He knew Dr. Edward David, Executive Director of Bell Labs, and he asked for help. Ed set up a meeting at his house with the chief engineer of AT&T and myself to try to overcome the conceptual hurdle. At this meeting I would describe something in language familiar to those who knew digital technology. Ed David would translate what I was saying into language more familiar in the analog telephone world (he practically used Western Electric part numbers) for our AT&T friend, who responded in a like manner. Ed David would translate it back into computer nerd language.
I would encounter this cultural impasse time after time between those who were familiar only with the then state of the art of analog communications (highly centralized circuit switching with highly limited intelligence) and myself talking about all-digital transmission, smart switches, and self-learning networks. But, all through the process of erosion, more and more people came to understand what was being said. The base of support strengthened in RAND, the Air Force, academia, government, and some industrial companies, and parts of Bell Labs. But I could never penetrate the objections of AT&T Headquarters, which at that time had a complete monopoly on telecommunications. It would have been the perfect organization to build the network. Our initial objective was to have the Air Force contract the system out to AT&T to build the network, but unfortunately AT&T was dead set against the idea.
Hochfelder:
Were there financial objections as well?
AT&T Headquarters Lack of Receptivity
Baran:
Possibly, but not frontally. They didn’t want to do it for a number of reasons and dug their heels in looking for publicly acceptable reasons. For example, AT&T asserted that there were not enough paths through the country to provide for the number of routes that I had proposed for the national packet-based network, but refused to show us their route maps. (I didn’t tell them that someone at RAND had already acquired a back-door copy of the AT&T maps containing the physical routes across the US, since AT&T refused to voluntarily provide these maps, which were needed to model collateral damage to the telephone plant by attacks aimed at the US Strategic Forces.) I told AT&T that I thought that they were in error and asked them to please check their maps more carefully. After a month’s delay in which they never directly answered the question, one of their people responded by grumbling, “It isn’t going to work, and even if it did, damned if we are going to put anybody in competition to ourselves.”
I suspect the major reason for the difficulty in accommodating packet switching at the digital transmission level was that it would violate a basic ground rule of the Bell System -- everything added to the telephone system had to work with all previous equipment presently installed. Everything had to fit into the existing plan. Nothing totally different could be allowed except as a self-contained unit that fit into the overall system. The concept of long distance all-digital communications links connecting small computers serving as switches represents a totally different technology and paradigm, and was too hard for them to swallow. I can understand and respect that reason, but can also appreciate the later necessity for divestiture. Competition better serves the public interest in the longer term than a monopoly, no matter how competent and benevolent that monopoly might be. There is always the danger that the monopoly can be in error and there is no way to correct this.
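The conceptual leap Baran describes, sending data without first holding open an end-to-end circuit, can be sketched in a few lines. This is my own toy illustration, not RAND's design: each packet carries its own addressing and sequence information, packets may travel and arrive independently, and the message is reassembled at the destination.

```python
import random

# Toy sketch of packet switching (illustrative only, not Baran's actual
# scheme): a message is cut into self-describing packets that the network
# may reorder; the destination reassembles them by sequence number.

def packetize(message, size=4):
    """Split a message into numbered packets, each carrying its address."""
    return [
        {"seq": i, "dst": "B", "data": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

def deliver(packets):
    """Packets may arrive in any order; reassemble by sequence number."""
    random.shuffle(packets)                        # the network reorders them
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["data"] for p in ordered)

msg = "no dedicated end-to-end circuit is ever opened"
assert deliver(packetize(msg)) == msg
```

The point the analog engineers found unbelievable is visible in `deliver`: nothing guarantees an in-order, continuous path, yet the message still arrives intact.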
source:
____________________________________
An AnandTech Interview with Jim Keller: 'The Laziest Person at Tesla'
by Dr. Ian Cutress on June 17, 2021 12:20 PM EST
IC: Would you say that engineers need more people skills these days? Because everything is complex, everything has separate abstraction layers, and if you want to work between them you have to have the fundamentals down.
JK: Now here’s the fundamental truth, people aren't getting any smarter. So people can't continue to work across more and more things - that's just dumb. But you do have to build tools and organizations that support people's ability to do complicated things. The VAX 8800 team was 150 people. But the team that built the first or second processor at Apple, the first big custom core, was 150 people. Now, the CAD tools are unbelievably better, and we use 1000s of computers to do simulations, plus we have tools that could place and route 2 million gates versus 200. So something has changed radically, but the number of people an engineer might talk to in a given day didn't change at all. If you have an engineer talk to more than five people a day, they'll lose their mind. So, some things are really constant.
── here’s the fundamental truth, people aren't getting any smarter.
── you do have to build tools and organizations that support people's ability to do complicated things.
CPU Instruction Sets: Arm vs x86 vs RISC-V
IC: You’ve spoken about CPU instruction sets in the past, and one of the biggest requests for this interview I got was around your opinion about CPU instruction sets. Specifically, questions came in about how we should deal with fundamental limits on them, how we pivot to better ones, and what your skin in the game is in terms of Arm versus x86 versus RISC-V. I think at one point, you said most compute happens on a couple of dozen op-codes. Am I remembering that correctly?
JK: [Arguing about instruction sets] is a very sad story. It's not even a couple of dozen [op-codes] - 80% of core execution is only six instructions - you know, load, store, add, subtract, compare and branch. With those you have pretty much covered it. If you're writing in Perl or something, maybe call and return are more important than compare and branch. But instruction sets only matter a little bit - you can lose 10%, or 20%, [of performance] because you're missing instructions.
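Keller's six-instruction claim can be made concrete with a toy machine (entirely my own sketch, not any real ISA) whose only operations are the six he names: load, store, add, subtract, compare, and branch. Even a loop needs nothing more.

```python
# Toy machine whose entire ISA is load/store/add/sub/cmp/branch
# (my own illustration of the "six instructions" point).

def run(program, mem):
    regs, flag, pc = [0] * 4, False, 0
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "load":    regs[args[0]] = mem[args[1]]
        elif op == "store": mem[args[1]] = regs[args[0]]
        elif op == "add":   regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "sub":   regs[args[0]] = regs[args[1]] - regs[args[2]]
        elif op == "cmp":   flag = regs[args[0]] < regs[args[1]]
        elif op == "branch" and flag: pc = args[0]
    return mem

# Sum 10 + 9 + ... + 1 into mem[2]: a complete loop in six ops.
prog = [
    ("load", 0, 0),      # r0 = mem[0]   (loop counter, starts at 10)
    ("load", 2, 1),      # r2 = mem[1]   (the constant 1)
    ("add", 1, 1, 0),    # r1 += r0      (accumulate)
    ("sub", 0, 0, 2),    # r0 -= 1
    ("cmp", 3, 0),       # flag = (0 < r0)   r3 is always zero
    ("branch", 2),       # loop back while counter > 0
    ("store", 1, 2),     # mem[2] = r1
]
assert run(prog, [10, 1, 0])[2] == 55
```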
For a while we thought variable-length instructions were really hard to decode. But we keep figuring out how to do that. You basically predict where all the instructions are in tables, and once you have good predictors, you can predict that stuff well enough. So fixed-length instructions seem really nice when you're building little baby computers, but if you're building a really big computer, to predict or to figure out where all the instructions are, it isn't dominating the die. So it doesn't matter that much.
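The decode problem Keller alludes to can be shown with a toy byte stream and a hypothetical opcode-to-length table (real x86 sizing is far more involved): instruction N+1's position is unknown until instruction N has been sized, which is why big machines predict boundaries rather than discover them serially.

```python
# Why variable-length decode is inherently serial (toy model; the length
# table below is hypothetical, not real x86 encoding rules).

LENGTHS = {0x90: 1, 0xB8: 5, 0x0F: 2, 0xE9: 5}   # first byte -> total bytes

def boundaries(code):
    """Walk the byte stream and return each instruction's start offset."""
    starts, pc = [], 0
    while pc < len(code):
        starts.append(pc)
        pc += LENGTHS[code[pc]]        # must size insn N to locate insn N+1
    return starts

stream = bytes([0x90, 0xB8, 0, 0, 0, 0, 0x0F, 0x05, 0x90])
assert boundaries(stream) == [0, 1, 6, 8]
```

A wide decoder that guesses these offsets from a predictor table, and verifies later, can start decoding several instructions in parallel instead of walking the chain one at a time.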
When RISC first came out, x86 was half microcode. So if you look at the die, half the chip is a ROM, or maybe a third or something. And the RISC guys could say that there is no ROM on a RISC chip, so we get more performance. But now the ROM is so small, you can't find it. Actually, the adder is so small, you can hardly find it. What limits computer performance today is predictability, and the two big ones are instruction/branch predictability, and data locality.
Now the new predictors are really good at that. They're big - two predictors are way bigger than the adder. That's where you get into the CPU versus GPU (or AI engine) debate. The GPU guys will say ‘look there's no branch predictor because we do everything in parallel’. So the chip has way more adders and subtractors, and that's true if that's the problem you have. But they're crap at running C programs.
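A minimal sketch of what such a predictor does, using the classic textbook two-bit saturating-counter scheme rather than any shipping design: each branch gets a counter from 0 to 3, values of 2 or more predict "taken", and the counter drifts toward observed outcomes, so one surprise does not flip a well-established prediction.

```python
# Two-bit saturating-counter branch predictor (textbook scheme, shown here
# only to illustrate the idea; real predictors are far more elaborate).

class TwoBitPredictor:
    def __init__(self):
        self.table = {}                        # branch PC -> counter (0..3)

    def predict(self, pc):
        return self.table.get(pc, 1) >= 2      # start at weakly-not-taken

    def update(self, pc, taken):
        c = self.table.get(pc, 1)
        self.table[pc] = min(3, c + 1) if taken else max(0, c - 1)

# A loop branch taken 9 times then not taken: after a one-iteration warm-up
# the predictor is right every time except at the loop exit.
p = TwoBitPredictor()
hits = 0
for taken in [True] * 9 + [False]:
    hits += (p.predict(0x40) == taken)
    p.update(0x40, taken)
assert hits == 8
```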
GPUs were built to run shader programs on pixels, so if you're given 8 million pixels, and the big GPUs now have 6000 threads, you can cover all the pixels with each one of them running 1000 programs per frame. But it's sort of like an army of ants carrying around grains of sand, whereas big AI computers, they have really big matrix multipliers. They like a much smaller number of threads that do a lot more math because the problem is inherently big. Whereas the shader problem was that the problems were inherently small because there are so many pixels.
There are genuinely three different kinds of computers: CPUs, GPUs, and AI. NVIDIA is kind of doing the ‘inbetweener’ thing where they're using a GPU to run AI, and they're trying to enhance it. Some of that is obviously working pretty well, and some of it is obviously fairly complicated. What's interesting, and this happens a lot, is that general-purpose CPUs when they saw the vector performance of GPUs, added vector units. Sometimes that was great, because you only had a little bit of vector computing to do, but if you had a lot, a GPU might be a better solution.
── 80% of core execution is only six instructions - you know, load, store, add, subtract, compare and branch. With those you have pretty much covered it.
── If you're writing in Perl or something, maybe call and return are more important than compare and branch.
──
── What limits computer performance today is predictability, and the two big ones are instruction/branch predictability, and data locality.
── There are genuinely three different kinds of computers: CPUs, GPUs, and AI.
── NVIDIA is kind of doing the ‘inbetweener’ thing where they're using a GPU to run AI, and they're trying to enhance it.
──
── general-purpose CPUs
── vector performance of GPUs
──
IC: So going back to the ISA question - many people were asking what you think about Arm versus x86. Which one has the legs, which one has the performance? Do you care much, if at all?
JK: I care a little. Here's what happened - so when x86 first came out, it was super simple and clean, right? Then at the time, there were multiple 8-bit architectures: x86, the 6800, the 6502. I programmed probably all of them way back in the day. Then x86, oddly enough, was the open version. They licensed that to seven different companies. Then that gave people opportunity, but Intel surprisingly licensed it. Then they went to 16 bits and 32 bits, and then they added virtual memory, virtualization, security, then 64 bits and more features. So what happens to an architecture as you add stuff, you keep the old stuff so it's compatible.
So when Arm first came out, it was a clean 32-bit computer. Compared to x86, it just looked way simpler and easier to build. Then they added a 16-bit mode and the IT (if then) instruction, which is awful. Then [they added] a weird floating-point vector extension set with overlays in a register file, and then 64-bit, which partly cleaned it up. There was some special stuff for security and booting, and so it has only got more complicated.
Now RISC-V shows up and it's the shiny new cousin, right? Because there's no legacy. It's actually an open instruction set architecture, and people build it in universities where they don’t have time or interest to add too much junk, like some architectures have. So relatively speaking, just because of its pedigree, and age, it's early in the life cycle of complexity. It's a pretty good instruction set, they did a fine job. So if I was just going to say if I want to build a computer really fast today, and I want it to go fast, RISC-V is the easiest one to choose. It’s the simplest one, it has got all the right features, it has got the right top eight instructions that you actually need to optimize for, and it doesn't have too much junk.
── So what happens to an architecture as you add stuff, you keep the old stuff so it's compatible.
── ([ legal code and regulation ─ don't mess too much with the core or the foundation ])
── IT (if then) instruction
──
──
IC: So modern instruction sets have too much bloat, especially the old ones. Legacy baggage and such?
JK: Instructions that have been iterated on, and added to, have too much bloat. That's what always happens. As you keep adding things, the engineers have the struggle. You can have this really good design, there are 10 features, and so you add some features to it. The features all make it better, but they also make it more complicated. As you go along, every new feature added gets harder to do, because the interaction for that feature, and everything else, gets terrible.
The marketing guys, and the old customers, will say ‘don't delete anything’, but in the meantime they are all playing with the new fresh thing that only does 70% of what the old one does, but it does it way better because it doesn't have all these problems. I've talked about diminishing return curves, and there's a bunch of reasons for diminishing returns, but one of them is the complexity of the interactions of things. They slow you down to the point where something simpler that did less would actually be faster. That has happened many times, and it's some result of complexity theory and you know, human nefariousness I think.
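One way to see why feature interactions produce diminishing returns (my own toy model, not Keller's): features grow linearly, but the pairwise interactions that must be designed, tested, and kept from breaking each other grow quadratically.

```python
# Toy model of Keller's complexity point: linear feature growth,
# quadratic growth in pairwise interactions to get right.

def pairwise_interactions(n_features):
    """Number of distinct feature pairs that can interact."""
    return n_features * (n_features - 1) // 2

for n in [10, 20, 40]:
    print(n, "features ->", pairwise_interactions(n), "pairwise interactions")
# 10 features -> 45; doubling to 20 gives 190; 40 gives 780.
```

Doubling the feature count roughly quadruples the interaction surface, which is one concrete reason "something simpler that did less would actually be faster" to build and verify.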
── You can have this really good design, there are 10 features, and so you add some features to it. The features all make it better, but they also make it more complicated. As you go along, every new feature added gets harder to do, because the interaction for that feature, and everything else, gets terrible.
──
── I've talked about diminishing return curves, and there's a bunch of reasons for diminishing returns, but one of them is the complexity of the interactions of things.
── They slow you down to the point where something simpler that did less would actually be faster.
──
──
IC: So did you ever see a situation where x86 gets broken down and something just gets reinvented? Or will it just remain sort of legacy, and then just new things will pop up like RISC-V to kind of fill the void when needed?
JK: x86-64 was a fairly clean slate, but obviously it had to carry all the old baggage for this and that. They deprecated a lot of the old 16-bit modes. There's a whole bunch of gunk that disappeared, and sometimes if you're careful, you can say ‘I need to support this legacy, but it doesn't have to be performant, and I can isolate it from the rest’. You either emulate it or support it.
We used to build computers such that you had a front end, a fetch, a dispatch, an execute, a load store, an L2 cache. If you looked at the boundaries between them, you'd see 100 wires doing random things that were dependent on exactly what cycle or what phase of the clock it was. Now these interfaces tend to look less like instruction boundaries – if I send an instruction from here to there, now I have a protocol. So the computer inside doesn't look like a big mess of stuff connected together, it looks like eight computers hooked together that do different things. There’s a fetch computer and a dispatch computer, an execution computer, and a floating-point computer. If you do that properly, you can change the floating-point without touching anything else.
That's less of an instruction set thing – it’s more ‘what was your design principle when you build it’, and then how did you do it. The thing is, if you get to a problem, you could say ‘if I could just have these five wires between these two boxes, I could get rid of this problem’. But every time you do that, every time you violate the abstraction layer, you've created a problem for future Jim. I've done that so many times, and like if you solve it properly, it would still be clean, but at some point if you hack it a little bit, then that kills you over time.
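The discipline Keller describes, stages that talk only through a narrow protocol, can be sketched as follows (my own illustration, not any real microarchitecture): the fetch and execute "computers" share nothing but a message queue, so either can be replaced without touching the other, and the tempting "five extra wires" shortcut is exactly what the structure forbids.

```python
from queue import Queue

# Two pipeline "computers" joined only by a message protocol (a queue),
# illustrating clean abstraction boundaries; purely a toy model.

class Fetch:
    def __init__(self, program, out):
        self.program, self.out = program, out

    def run(self):
        for insn in self.program:
            self.out.put(insn)          # protocol: one instruction message
        self.out.put(None)              # end-of-stream marker

class Execute:
    def __init__(self, inbox):
        self.inbox, self.acc = inbox, 0

    def run(self):
        while (insn := self.inbox.get()) is not None:
            op, val = insn
            self.acc = self.acc + val if op == "add" else self.acc - val

q = Queue()
fetch, execute = Fetch([("add", 5), ("add", 3), ("sub", 2)], q), Execute(q)
fetch.run()
execute.run()
assert execute.acc == 6
```

Rewriting `Execute` (say, swapping in a different ALU) cannot break `Fetch`, because the only contract between them is the message format; adding a private side channel between the two would recreate the "100 random wires" problem.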
── it looks like eight computers hooked together that do different things. There’s a fetch computer and a dispatch computer, an execution computer, and a floating-point computer.
── ‘what was your design principle when you build it’, and then how did you do it.
──
── The thing is, if you get to a problem, you could say ‘if I could just have these five wires between these two boxes, I could get rid of this problem’. But every time you do that, every time you violate the abstraction layer, you've created a problem for future Jim.
──
──
source:
____________________________________
Tim Wu, The Master Switch, 2010 [ ]
p.114
Having won its case, Hush-A-Phone ran a series of advertisements proclaiming its device newly approved for use by federal tariff. Unfortunately, it could not keep up with Bell's own stately pace of product design, and when the phone company began to sell new handsets again, sometime in the 1960s, Hush-A-Phone folded. Such are the wages of stifling innovation: to this day, while the annoyance of mobile phone chatter, the banality of overheard conversations, has become a cliché, there is not a Hush-A-Phone or its equivalent to be found.
Hush-A-Phone's valiant founder died sometime in the 1970s, to be forgotten, apart from one great cultural reference. In the 1985 film Brazil, Robert De Niro plays a maverick repairman who does unauthorized repairs and leads a resistance movement against a totalitarian state. The hero and hope of that dystopia is named Harry Tuttle.
(The Master Switch: The Rise and Fall of Information Empires, Tim Wu, 2010.)
____________________________________
____________________________________
https://image.slideserve.com/1447410/the-three-faces-of-power-l.jpg
The three faces of power
• The ability to force someone to do something. A causes B to act, and B knows A has the “power.” Coercive.
• The ability to influence the actions of another. A persuades B to do something, though B is not aware of the persuasion.
• The structure of the set of institutions benefits A over B, while neither is aware of the background relationship.
source:
https://image.slideserve.com/1447410/the-three-faces-of-power-l.jpg
Kenneth E. Boulding (1990). “Three Faces of Power”
____________________________________
··<────────────────────────────────────────────────────────────────────────────>