Further examples of group madness in technology
First set of examples here.
Many of these were taken from the comments on the last one; thanks, bros. Again, one of the worst arguments I hear is that “thing X is inevitable because the smart people are doing it.” There are tons of examples of smart people doing and working on stupid things because everyone else is doing it. Everyone conveniently forgets about these stupid things the “smart people” did in the past, blinded by modern marketing techniques trumpeting The Latest Thing. It’s one of my fundamental theorems that “smart people” are even more susceptible to crazes and tulip bulb nonsense than primitive people, mostly because of how they become “smart.” Current year “smart people” achieve their initial successes by book learning. This is fine and dandy as long as someone who is actually smart selected good and true books to read. The problem is, current year “smart people” take marketing baloney as valid input. Worse, they also take “smart people” social cues as important inputs: they fear standing out from the herd, even when it is obvious the herd is insane. It’s how stupid political ideas spread as well.
That’s how we have very smart people working on very obvious nonsense like battery powered airplanes. Just to remind everyone: electric motors are heavy, and batteries are like 100x heavier than guzzoline for the equivalent stored energy. To say nothing of the fact that batteries take a lot longer to charge than filling up gas tanks. If you want to make air travel greener, make the FAA testing requirements for certifying small engines for flightworthiness cheaper. We still use 1950s designs requiring leaded gasoline, from back when certs were cheaper because of the lack of this bureaucratic overhead. You can get 2-4x more fuel efficiency this way. Better than idiocy like hoping a battery operated airplane will work. You can also just make air travel 10x more expensive so there are fewer scumbags from America (and everywhere else: I don’t like you either unless you’re Japanese) filling up my favorite places.
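Back-of-envelope, with round numbers that are if anything generous to the battery (call it 12.8 kWh/kg for gasoline, 0.25 kWh/kg for lithium-ion cells, ~30% engine efficiency against ~90% for an electric drivetrain):

$$\frac{12.8\ \mathrm{kWh/kg}}{0.25\ \mathrm{kWh/kg}} \approx 50 \quad \text{raw}, \qquad \frac{0.30 \times 12.8}{0.90 \times 0.25} \approx 17 \quad \text{at the shaft}.$$

Even after crediting the motor with 3x the efficiency, you're hauling roughly 17x the weight for the same useful work, and unlike fuel, the battery doesn't get lighter as the flight goes on.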
Distributed manufacturing. This is a recent one where solid printers are indeed useful and have become important tools in various applications, but people attributed magical properties to additive manufacturing. Lots of hobbyists now use plastic solid printers to make telescope rings or Warhammer 40k figurines. They get used in big boy machine shops to make enclosures for prototypes, molds, and, on occasion, metal-sintered objects which would be extremely difficult to cast. Believe it or not, making enclosures was a very time consuming part of prototyping in my lifetime. I remember the first solid printed enclosure I got; I thought it was wonderful. Molds: you can now email the pattern to people rather than mailing hand-carved styrofoam molds. Metal sintering solid printing is extremely time and energy intensive (and by nature probably always will be), but it’s totally worth it for something thin with lots of internal structure, like a rocket nozzle. All this is great and welcome progress, but it’s not how it was sold to people maybe 15-20 years ago. Back then, people were overtly saying it would be the end of centralized manufacturing. Every neighborhood would have a star trek replicator which would make them stuff from plans emailed over the internet. This was very obviously the sheerest nonsense to anyone familiar with objects made out of matter and how they are made, yet it was uncritically repeated and amplified by millions of people who should know better. In fact it’s still uncritically repeated, though I guess it is less of a craze than it was 20 years ago as more people have experience with these things. It’s tough to get excited about the “materials savings” of solid printing things when you’re paying 50 bucks for the feedstock needed to make a 2-inch-tall Yoda figurine. Back in 2012, there was a mass hysteria about solid printers making guns. As I pointed out at the time, you could make guns of similar quality by whittling pieces of wood, or using pipes you get from the hardware store. The only reason this is viable is that the legal “gun” part in the US is the lower receiver, which is not a part that needs to be made well to function on an AR-15. Magic star trek replicators for AR-15s, alas, are not going to be a thing in our lifetimes. You can make them on machine tools though, and machine tools don’t cost much. Nobody would have worried about solid printed “guns” if they didn’t think solid printers were magic star trek replicators, which, back in 2012, they did. FWIIW, the lizard men at the WEF are still trying to sell this, at least when combined with IoT, AI, 5G and … gene editing. Absolute proof nobody involved with the WEF has ever manufactured any object of worth to humanity.

From Byte magazine, Vol. 16 No. 4, 1991: https://vintageapple.org/byte/pdf/199104_Byte_Magazine_Vol_16-04_Soviet_Computing.pdf
The Paperless Office was a past craze. Lots of companies were based around this idea. I never bought into it; I had access to the best screens available at the time, and still sent all the physics papers to the LaserJet printer, or walked to the library and photocopied journal articles. Even if you make giant thin PDF readers for portability, it’s a lot easier to scribble notes on a piece of stapled paper, which is also easier to read, lighter, lasts longer, and doesn’t need a charge on the batteries. Some of the ideas in the Byte articles above ended up being used in search engines. Stuff like collaborative documents was a useful idea. The OCR approaches in use at the time made scanned documents pretty worthless; not sure that’s improved any. There is still lots of paper around every office and, barring some giant breakthrough in e-ink, always will be. The vision of paperless sounded real good: you could search a bunch of papers in your pre-cloud document fog. The reality was you’d do a shitload of work, spend a shitload of money, and a filing cabinet with paper organized by subject tabs still worked a lot better and was a zillion times cheaper. BTW, I still maintain that having a couple of nice nerdy ladies with filing cabinets, telephones and fax machines is more economically efficient than running a database system for your business in like 98% of situations. That’s why the Solow “paradox” is still in play. Pretty sure that’s why places like Japan and Germany still use such systems instead of generating GDP by firing the nice nerdy ladies, hiring a bunch of H1B computard programmers and buying a lot of expensive IT hardware.
Lisp unt Prolog: yes, my favorite programming language is a Lisp. The Prolog nonsense was from the same era; both were intertwined with the fifth generation computing project, but as Lisp is still a cult language, and insane people still use Prolog, it’s worth a few words. Prolog is easily disposed of: it is trivial to code up constraints which have NP-hard solutions in Prolog. That’s kind of inherent to how constraint programming works. This probably looked absolutely amazing on 1988-tier technology. 1989-tier technology was 32-bit, hard-limited to 2GB of address space; in real life limited to 64MB, because that’s how many chips you could stuff into a SPARCstation 1. That was a giant, super advanced machine in its day; running a Prolog compiled by Lisp (which took up a good chunk of this memory), you actually could solve NP-hard problems, because there isn’t much to them when you’re constrained to such small problem sizes. This was even more true on the 24-bit, 1MB Lisp machines. The idea that adding a couple of bits to the result you were interested in would explode the computation didn’t occur to people back then. We should all know this by now and avoid doing any Prolog without thinking about how the compiler works (in which case, why not use something else where it is more obvious), waiting for super-Turing architectures where NP=P, or using it as a front end for some kind of solver.
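To make the blow-up concrete, here's a minimal sketch in C (not Prolog; the names are mine) of the exhaustive search a declarative subset-sum constraint compiles down to, subset-sum being the canonical NP-hard toy problem:

```c
#include <stdio.h>

/* Brute-force subset-sum: does some subset of v[0..n-1] add up to target?
   One elegant declarative line in Prolog; underneath, it is this
   exhaustive search either way. */
int subset_sum(const int *v, int n, long target)
{
    if (target == 0) return 1;  /* empty subset works */
    if (n == 0) return 0;       /* no items left, no luck */
    return subset_sum(v + 1, n - 1, target)          /* skip v[0] */
        || subset_sum(v + 1, n - 1, target - v[0]);  /* take v[0] */
}

int main(void)
{
    int v[] = { 3, 34, 4, 12, 5, 2 };
    printf("%d\n", subset_sum(v, 6, 9));  /* prints 1: {4, 5} works */
    /* Worst case is 2^n calls: every item added doubles the search.
       1988-sized instances feel instant; a few more bits of problem
       size and the same code outlives you. */
    return 0;
}
```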
Lisp has a different, more fundamental basket of problems. You can easily write a Prolog in Lisp, which one might notice is a serious problem considering the above. This is given as a student exercise by Lisp’s strongest soldier, who, despite being a rich VC, doesn’t seem to have gotten rich investing in any successful Lisp companies. He claims he got his first pile of loot writing a bunch of web shop nonsense in Lisp (which later got translated into shitty C++). I dunno, I wasn’t there back in ’95; we used Fortran in those days. Even by his own admission, it was a creation of N=2 programmers, very possibly mostly N=1, and it was not modifiable or fixable by anybody else. I think that’s both what is cool and what is wrong with Lisp. You can write macros in it, and modify the language to potentially be quite productive in a small project: but how are others supposed to deal with this? Did you write extensive documentation of your language innovations and make everyone on the team read it? What happens when N>3 people do this? Is it an N^N problem of communication when N people write macros? In R (an infix version of Scheme), people deal with this by ignoring the packages which use certain kinds of alterations of the language (aka the Hadleyverse, which I personally ignore religiously), or by embracing that one kind of macro and only doing that thing. Maybe it’s better to keep your language super bureaucratic and spend 400 lines of code every time you send some data to a function to make sure the function knows what’s up. That’s how almost everything that has successfully made money has done it. They all use retard languages that are at least OK at solving problems, not mentat languages that self-modify as part of the language specification. Maybe Paul Graham got lucky back in 1995 because generating valid HTML which holds state was something one or two dudes could do in Lisp. It wasn’t like they had very many choices; most languages sucked at that sort of thing. In fact, in the year of our Lord 1995, a lot of people developed programming languages designed to emit stuff like a valid HTML webstore: JavaScript, Java, Ruby and PHP are examples we all remember, and which went on to create trillions in value. That is greater value than anything Lisp has ever done, basically by being kind of limited and “squirting stateful HTML over wires” domain-specific retarded, and by not giving users superpowers to easily modify the language. One of the fun things about Paul Graham’s big claims about Lisp is we know for a fact it all could have as easily been done in Perl: because, actually, it was done in Perl, multiple times. Perl was not only more productive in terms of value created, it was more legible too, and amenable to collaboration. Lisp of course had the ability to mutate HTML with state: it was a sort of specification language for other languages. That’s what first-gen AI inherently was: custom interpreters. Maybe if they had just solidified the macros and made everyone use them, or, like, written a library, it would still be used somehow. Anyway, fuck Lisp, even if I am overly fond of one of its dialects.
CORBA was a minor craze in the mid-90s. I remember building some data acquisition/control gizmo using RPCGEN; it took like a day of reading the manual, despite my never having done anything like that before. As far as I know my smooth-brain thing still functions to this day. An architecture astronaut two beamlines over wondered why I didn’t use CORBA. As I recall, his thing didn’t quite work right, and never actually did, but as he was senior to me I just told him I didn’t know C++ very well (plus it didn’t work on VxWorks and lacked a LabVIEW hook). I never learned much about this thing, but I think its selling point was its “advanced suite of features” and its object orientation. It was a bit of a craze; if you go look at old Byte magazines you’ll find software vendors bragging about using it in the mid-90s. Java, Domino, Lotus Notes; are you not impressed? Did these CORBA things not set the world on fire? If you look at what it actually was, it looks like a student project to make teacher happy with fashionable object orientation rather than something used to solve real problems.
Come to think of it, whatever happened to Object Oriented Everything? I remember in the early 1990s, when I was still using Fortran, people were always gabbling on about this stuff. People selling the idea would have these weird diagrams of boxes with wires connecting to other boxes; you could tell it was designed to appeal to pointy-headed managers. I couldn’t make much of it, thinking perhaps it might make sense for something which has physical box-like things, such as a GUI. Later on I realized what people were really looking for was namespaces; something you could get a ghetto version of using naming conventions, or stuff like structs with function pointers in C if you want to get fancy. The other things (polymorphism, operator overloading, inheritance) usually weren’t so helpful or useful for anything. People came up with elaborate mysticisms and patterns about objects: remember things like “factory objects” and “decorator patterns” and “iterator patterns?” You could make these nice block diagrams in UML so retards could understand it! All this gorp was supposed to help with “code reuse,” but it absolutely didn’t: mostly it just added a layer of bureaucratic complexity to whatever you were doing, making it the opposite of code reuse: you had to write MOAR CODE to get it to do anything. You could probably trace a history of objecty dead ends by looking at C++ “innovations” over the years: objects, generics/templates, eventually some functional features allowing one to do the kind of programming that looks a lot like what we were doing with C macros, all while maintaining backward compatibility with all 50 of the previous generations of C++ paradigms.
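For what it's worth, the "ghetto version" mentioned above really does cover most of what people wanted from objects; a sketch in C, with names invented for illustration:

```c
#include <stdio.h>

/* A struct with function pointers: namespacing plus dispatch,
   no factories, decorators or UML required. */
struct shape {
    double w, h;
    double (*area)(const struct shape *self);
};

static double rect_area(const struct shape *s) { return s->w * s->h; }
static double tri_area(const struct shape *s)  { return 0.5 * s->w * s->h; }

int main(void)
{
    struct shape r = { 3.0, 4.0, rect_area };
    struct shape t = { 3.0, 4.0, tri_area };
    /* "polymorphic" dispatch through the pointer is the whole trick */
    printf("%g %g\n", r.area(&r), t.area(&t));  /* 12 6 */
    return 0;
}
```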
Related: this 5 minute video is worth the time if you’re still an LLM vibe coding respecter. The man has a simple argument: if LLMs are so great at writing code that they’re going to replace people googling Stack Overflow, where’s all the code? He brings receipts from a few of the efforts to measure programmer productivity with LLM code assistants. The comments are funny too!

You can also transport yourself to 1991 and read a post-mortem of the first AI winter here.

Examples of group madness in technology
One of the worst arguments I hear is that “thing X is inevitable because the smart people are doing it.” As I’ve extensively documented over the last 15 years on this blog, smart people in groups are not smart, and are even more subject to crazes and mob behavior than everyone else. Noodle theory, nanotech, quantum computing: there are plenty of examples in history of large groups of smarty-pants people marching off a cliff. The same holds true for actual technological trends. I think LLMs are overrated magic-8-ball word predictors, autonomous vehicles are mostly vaporware Potemkin technologies, and we’re never going to replace internal combustion engine cars with electric golf carts, no matter how many dumb iMac screens you put in them, because batteries have 20-100x less energy density than guzzoline and the grid won’t support it. These are unpopular opinions which non-thinking “smart” people will scoff at due to the ideology of continual progress (which these dorks haven’t noticed has failed for 50 years, hypnotized by their fuggin screens). Yet my track record is better than that of non-thinking “smart” people, because I think about things rather than uncritically accept what “smart” people tell me. It’s useful to consider the record of non-thinking “smart” word-regurgitators on a few technological examples.
Itanium: this is one I watched develop with some amusement. In the late 90s there was a thriving ecosystem of competing server architectures: (SGI) MIPS, (DEC) Alpha, (Sun) SPARC, (HP) PA-RISC. All of these architectures were superior to Intel chips: most of them were more or less 64-bit-clean RISC architectures. At some point in the late 90s, they all decided, quite unprompted, to get out of the hardware business and use Intel’s pending Itanium chip. I assume the main reason was MBA reasoning: designing chips is capital intensive, complicated, risky, and you can’t put clean estimates into spreadsheets, which makes bean-counters sad. Why not use somebody else’s silicon? Intel had more money than any of these server companies, and all the other server companies were gonna do it. Itanium, of course, was a massive dud. This was completely predictable: the last time Intel tried to innovate away from their antiquated 8086 architecture, with the 32-bit iAPX 432, was also a massive failure. Same reason for failure, too: they did all the hip and trendy things, making an actual stack machine with hardware support for object oriented programming, multitasking and hardware support for Ada (which everyone at the time assumed was the language of the future: it still is). It was 10x slower than its much less futuristic-looking competitor, the Motorola 68000, which persisted in its atavistic use of registers; those were too backward-looking for the iAPX 432. The compiler was also hot garbage, despite the chip supposedly being designed to make writing compilers for it easier. Intel by all rights should have died from this, because they are fucking retarded, but the IBM PC was invented around that time and saved their bacon, more or less by IBM accidentally picking their shitty 8086 chip for inside-baseball reasons (basically, they hated Motorola). The Itanium used the trendy VLIW architecture, which never really worked as well as predicted (again, a group-of-“experts” failure) and offloaded a lot of work to the compiler. What could go wrong? The first time someone explained to me why VLIW was supposed to be great, it was obvious to me it wouldn’t work: “bro, compilers suck.” I guess the Russians have one VLIW chip called Elbrus, but they don’t sell them to outsiders, probably because they suck for anything other than being free of NSA spyware in the chipsets. VLIW does work OK for signal processing, which is why they thought it was a good idea at the time, but signal processing and general computing are only tangentially related: that’s why we didn’t use signal processing chips for general purpose computing. What do people use now? Basically the same thing DEC, Sun and HP were using: RISC. RISC was the right approach all along, with some additions for speculative execution. AMD did the obvious thing and produced a RISCy 64-bit extension to the Intel 32-bit architecture, emulating all the crufty 16/32-bit stuff. This was such a huge success, Intel now uses AMD’s architecture. Retards. All of these companies, which literally had the right answer already (64-bit RISC chips), went out of business anyway because they listened to “experts” and MBAs.
Fifth generation computing project. I’ve written about this already, but for the sake of my listicle, it is a great example. For some reason people in the 80s, particularly Japanese people, thought that GUIs + VLSI + parallel computing + SQL + Prolog magic = Star Trek Brain in a Can computard <TM>. The hysteria around this was worse than the present hysteria around LLM “AI.” Governments got involved in an imaginary arms race with the Japanese, starting their own fifth generation computing projects. Tens of billions (of 80s/90s dollars) were spent on these boondoggles, careers were ruined, and the whole thing went on for a decade and a half before people realized they were being retarded and had no idea how to build Star Trek Brain in a Can computard <TM>. There were also startups, which everyone praising the OpenAI goons or quantum computard startups seems to have forgotten. Lisp machines still inhabit the fever dreams of Lisp enthusiasts, but the startup of that name was a massive failure, and was definitely part of the fifth generation computing craze. Thinking Machines was another MIT-related fifth generation startup. This one got the biggest, fastest, perhaps because it was the most explicitly fifth generation: 1000 employees to make a Rube Goldberg massively parallel computer (the Connection Machine) that literally nobody could make do anything interesting. It should have been obvious: the CPUs were radically interconnected, but lacked enough RAM to do much more than keep track of when some other CPU was asking them to do something. They also could only process 1 bit at a time, serially, which is completely insane. Oh yeah, they also made you program this atrocity in Lisp, which is also insane. Later they tried and failed to compete with Cray by switching to sane CPU architectures. Symbolics lasted the longest because they had an actual product people would pay for, the computer algebra system Macsyma, but they didn’t exactly set the world on fire, and their Lisp-machine-wannabe fifth generation computer was a giant failure. By the 90s they were basically just selling a Lisp, and a Macsyma that ran on their Lisp, and were a nerf ball for various private equity plays. I guess theoretically Cyc still exists, but nobody knows what they’re up to. This one is analogous to LLMs as “AI” and quantum computardism in that nobody had the slightest idea how to do any of the intermediate steps between having VLSI and having Star Trek Brain in a Can computard <TM>. It seems unforgivably stupid from the present perspective, but at the time it probably seemed reasonable: Horn clauses were doing neat things comparable to present day LLMs, but were godawful slow and limited by small databases. But it was retarded, just like OpenAI’s approach to “general AI” is retarded.
Underwater Colonization. For some reason, from the 50s through the 70s, people thought we might build giant underwater colonies, or at least single-family housing. It’s difficult to imagine today why anybody might think this was a good idea, but if we think it through, it makes some sense. Up until maybe 1900, Western Civilization had been hugely expansive: finishing off the colonization of the Americas and Siberia, and holding colonies on all the major continents. 1960 wasn’t so far off from those days, and people still had the colonial spirit: pulling up roots and moving to California or Vladivostok or Australia for a better life. Aquariums had become a popular hobby in the 50s and 60s with the advent of electric pumps and ways of shipping live fish; why not live in one? The new technology of SCUBA had been developed recently, as well as massive offshore drilling platforms. Jacques Cousteau was on the TeeVee showing his undersea explorations. Cousteau even built an undersea house for himself, where he enjoyed smoking in the pure oxygen atmosphere. The CIA spread some disinfo about underwater mineral mining to cover up their retrieval of Russian nukes from a sunken submarine; people believed it. They believed it to the extent that people still talk about seabed mining; it seems like one of the worst ideas imaginable, but at the time it probably seemed inevitable. They even saw underwater colonization as a first step towards colonizing space; after all, it’s a lot cheaper to get to the ocean. Old books are filled with the advantages: beautiful undersea views, getting away from the overpopulation, raising underwater crops to feed the hungry (another nuts idea from the past, though I guess we do it now with groace fish farming), freedom and exploration for everyone! Of course this would never happen on any kind of scale. It’s much cheaper and safer to build concrete boxes with a door than waterproof metal chambers and airlocks. Some people like adventure, but nobody really likes living in a metal tube with no air, surrounded by imminent death by suffocation. This didn’t reach the level of craze of something like fifth generation computing or the Itanium, but the old books and documentaries are filled with breathy predictions that it was going to happen. As far as I know, only Cousteau actually tried doing it. Cousteau was a really cool dude. That said, it would be really cool if there were underwater colonies of aquanauts. Sort of like it would be really cool if there were space colonies.
Personal Digital Assistants and handwriting recognition. These achieved some market penetration before they got wiped out by phones, and it’s kind of a peculiar example because of this. I was an enthusiast, owning the excellent HP 95LX, HP 100LX and HP 200LX systems, which I used as a PDA and a kind of mini-laptop with two weeks of battery life on AA batteries. I got enormous amounts of work done on these things in grad school in the 90s; I had everything from Derive (a Mathematica-like computer algebra system with a superior UI) to a Fortran compiler, and could upload work to the Cray from it using Kermit or whatever, while it had the ordinary notepad, alarm, contact and todo lists that all PDAs came with. It’s still much superior to an ipotato for getting things done (and had a better keyboard than my new T14), and a battery life of two weeks was cool. Nobody but me and some Wall Street types thought this was good. Everyone else thought little screenpad things with handwriting recognition were the future. Magic Cap, Apple Newton, various Geoworks devices, Palm Pilots. I was an early enough adopter that I made a few bucks speculating on these weird little companies in short-term trades. They were very heavily traded, so people figured they’d be awfully important. I guess in a way they were, but nobody at the time realized that teenage girls texting each other and sending pictures were the real market for having a computard in your pocket. Absolutely nobody realized that thumb-typing on a screen was way more acceptable than memorizing a shorthand handwriting technique your computer could recognize. Probably Magic Cap came closest, by realizing carrying around a small computer was useless without fancy GUIs and internet capabilities, but their market failure was epic (and made me a few thousand bucks in grad school, for which I am forever grateful). Smart people all agreed PDAs would be important and would use some kind of handwriting, because people back then were used to writing things in a little notebook. Nobody could foresee that the computard part was unimportant: it was the communication and escapism functions which people wanted. Nobody could foresee that handwriting would actually become an obsolete skill. That’s what you get for listening to “smart people” talking about inevitabilities.
Towards a new kind of science and technology
Reading the history of science and technology, one is struck by how important two branches of physics are: electricity and magnetism, and thermodynamics. The former is mostly how we conveniently pipe energy around. The latter is … how we get the energy in the first place, as well as what makes most of chemistry possible. Even humble inventions such as the small electric motor had enormous implications for how things get done. Before the invention of small electric motors, for example, machine shops or printing presses were powered directly by thermodynamics and mechanical connections: usually leather belts driven off a line shaft. Other sorts of mechanical connections were also used: wire rope systems (elevators still use them, which is kind of an odd anachronism) or hydraulics. Now we pipe electricity around and electric motors turn the power into motion. It must have seemed like magic at the time; it is pretty cool when you stop to think about it. Burn something in one place, pipe the energy via thin bits of metal into little motors which do useful work right where you need it. Much better than strapping leather belts to the output shaft of a steam engine, with dudes shoveling coal into it in another room.
Thermodynamics is how we get power from heat. It is also how we design chemical reactions to make useful substances. I could imagine a modern world without modern chemistry (there would be fewer people without the Haber-Bosch process), even without electricity (certainly without computards: life was more fun without them), but not without thermodynamics and heat engines. Human standards of living are essentially proportional to the amount of heat converted into power which humans can use to do useful work. Without heat engines and the science which drove their creation and perfection, we’re back to Renaissance or Roman times, where virtually everything is moved and built with muscle. It is now possible to get electricity directly from the sun, and we’ve had wind and water mills for millennia. There are exotic ways of extracting electricity directly from heat, magnetohydrodynamics for example, but those are still ultimately thermodynamic. Most accessible human power comes from thermodynamics. We live in a thermodynamic age: without it, we go back to subsistence farming and chattel slavery.
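The core of the heat-to-power business fits in one line: the Carnot limit, discovered with nothing but humble measurements of temperature. For any heat engine running between a hot reservoir at absolute temperature $T_h$ and a cold one at $T_c$,

$$\eta \le 1 - \frac{T_c}{T_h},$$

so, for example, a steam plant running at 800 K and rejecting heat at 300 K can never beat about 62% efficiency, no matter how clever the machinery.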
Thermodynamics has a sort of reductionist version, which is what is mostly taught in schools: statistical mechanics. It’s “nice” for physicists to think about things in this way, as we can derive most of the ideas of thermodynamics from simpler ideas in probability and statistics, plus the knowledge that matter is made out of atoms. We like to think of it as more fundamental for this reason, but I’m not sure it’s more general than the kind of thermodynamics developed to optimize heat engines. It is neat that we can derive all this thermodynamic stuff from simpler ideas, and it represents a great intellectual achievement. Nonetheless, something was lost, didactically, in thinking only about the statistical physics stuff.
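For the record, the reduction is compact: from Boltzmann's counting and the canonical partition function, the engine-room quantities of thermodynamics fall out as derivatives,

$$S = k_B \ln \Omega, \qquad Z = \sum_i e^{-E_i / k_B T}, \qquad F = -k_B T \ln Z, \qquad S = -\left(\frac{\partial F}{\partial T}\right)_V,$$

and essentially all of equilibrium thermodynamics follows from these definitions.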
I think it’s within the realm of possibility that the statistical physics stuff also makes us blind to things we should be able to see if we approach the problem differently. Physicists probably wouldn’t have invented statistical physics and derived all those cool canonical ensembles if they hadn’t first invented thermodynamics. It’s not something that comes naturally from ideas about probability and atoms, though everyone who worked on thermodynamics had the notion that it was probably something like that. Thermodynamics does come naturally from thinking real hard about making better steam engines (before that, better cannons). That’s how we got there. Practical observation of nature, not smoking the pipe and thinking big thoughts about mathematics. Statistical physics was a cleanup operation; a very successful one, but it came about after the laws were discovered.
There are other attempts to embed thermodynamics in some other kind of abstraction. Ruppeiner geometry is a way of embedding it into information geometry: I think mostly because it makes it easier to reason about thermodynamics in other differential geometric systems like General Relativity, though I haven’t made a study of it and I could be wrong. Other extensions of this idea may have more generality. Maybe not, though: physics nerdoids like mapping ideas onto other kinds of models, especially geometric models (guilty). That doesn’t mean you get any extra insights from them. Other examples of formulating thermodynamics in higher forms: contact geometry, Hamilton-Jacobi theory (and here as well), E.T. Jaynes’ MaxEnt attempts using information theory, Carathéodory’s axiomatic thermodynamics. Are they important? I don’t know: they don’t seem to have influenced much of anything so far, beyond “that’s pretty nifty.”
Thinking about the history of thermodynamics, it was essentially people trying to come to grips with the concept of heat. Heat and its absence is something we observe in nature through fairly humble kinds of observation. Drill a hole and everything gets hot: what means? People had been thinking about heat in various ways for hundreds of years before thermodynamics was formulated in the mid-1800s and finalized in the early 1900s. The observations of heat and its behavior are fairly humble stuff: looking at the microscopic theory is interesting, and you can find odd effects you might not have looked for otherwise, but it was the basic stuff which brought about the biggest insights. Very humble measurements: pressure, temperature, volume, work. Retard monkey measure things, notice things are conserved or equal to combinations of other variables in funny ways. None of the doofus “looking into the mind of God” crap we’ve been afflicted with since the 20th century brought public relations to the physics community: just hammer-and-tongs science by the men who actually created the modern world.
I postulate there are higher orders of “thermodynamics” which are discoverable, yet undiscovered. Everyone knows there is something called non-equilibrium thermodynamics, which almost certainly has numerous undiscovered laws, and a couple of discovered ones like the Onsager reciprocal relations. Note here that the Onsager reciprocal relations are strictly formulated, like all the other thermodynamic laws. He didn’t start from the microscopic version of statistical physics; he used normal physical conceptions of continuity and thought about what was going on in a manner mostly devoid of statistical mechanics until the very end of the second paper. You can read his original papers here and here for his reasoning. The impetus for the work was the thermoelectric effect, which had previously been a classic subject of interest for thermodynamics pioneers like Lord Kelvin. Onsager finished the job: the first and so far only law of non-equilibrium thermodynamics. It’s criminally under-taught despite having extreme real-world utility: I think because you’d have to be quite familiar with thermodynamics itself, which is also criminally under-taught in physics school.
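The relations themselves are disarmingly simple to state: near equilibrium, every flux $J_i$ (heat, charge, matter) responds linearly to every thermodynamic force $X_j$, and Onsager's theorem is that the matrix of cross-coefficients is symmetric,

$$J_i = \sum_j L_{ij} X_j, \qquad L_{ij} = L_{ji}.$$

Applied to thermoelectricity, that symmetry is exactly what ties the Peltier coefficient to the Seebeck coefficient, $\Pi = S T$: the relation Kelvin guessed and Onsager finally justified.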
People mostly stopped thinking seriously about this stuff with the advent of useful computers. If you have a non-equilibrium system, you can figure out most of the stuff you need using computer simulations. The problem with letting the computard look for answers in specific cases is you don’t get the higher-order Onsager-type thing allowing you to look for effects you haven’t thought of yet. I’m talking about humble phenomena you can see. There are numerous phenomena in matter which show order and which are not easily described by higher-order thinking or some kind of Onsageresque thermodynamic relationship. They may be describable by some kind of simulator, or they may not. When things behave in an orderly fashion, they should be describable by mathematics.
There was this guy Herman Haken, who, to my surprise, only died this year (at 97), and who wrote books on something he called Synergetics. It was an exciting series of books to read as an undergraduate in the early 90s, as it seemed to tie together a bunch of stuff which bothered me about physics, and promised a way forward (it also redpilled me that information and entropy were the same thing). Haken’s ideas came from studying phase transitions, and particularly the self-organization of laser dynamics. He and his colleagues were interested in things which self-organize: stuff like turbulence, patterns in fluid mechanics and plasmas, brains, Fokker-Planck equations. He assumed (as I do) there must be some unifying mathematics behind this sort of weird stuff that looks familiar and classically mathematical.
I don’t think anything useful came of these efforts, in part because it was long on qualitative stuff, fiddling with differential equations and interdisciplinary work, and short on trying to make something in the world of matter function properly, unlike thermodynamics and steam engines. You can’t blame them for trying though; Ilya Prigogine won a Nobel in chemistry around this time for his work on self-organizing systems, fractals were a big thing, and results from chaos theory were pouring in, giving mathematical order to another kind of complex system which appeared to have self-organizing properties. Simple computer simulations of flocking birds and ant swarms were newly possible and showed enticing order from simple rules. Solitons were something people studied back then: the kind of self-organizing system you could make in a little tub of water. Sure, it’s described by the Korteweg-de Vries equation, problem solved, right? Nah, not really. The phenomenon is more universal than that, and it’s basically only understood via computard simulation. For a while people thought maybe solitons could be a hidden key to what’s really going on in quantum mechanics. It seems a stretch, but it’s as good a guess as any I know of. When I started my physics career in the early 90s, this stuff looked like it was the future. Unfortunately it remains something for the future: as far as I can tell people have mostly ceased to think about these things, and ultimately never really did give them a good think. Nowadays, physicists seem to prefer fiddling with neural nets.
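The soliton case at least has a clean closed form. The Korteweg-de Vries equation and its one-soliton solution:

$$u_t + 6\,u\,u_x + u_{xxx} = 0, \qquad u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}(x - ct)\right),$$

a hump that propagates without dispersing, with taller solitons moving faster. What stays computard-bound is everything past this: soliton interactions in real media, and why the same structures keep showing up in places where the KdV derivation doesn't apply.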
Of course it’s possible that all this spontaneous order is just a coinkidink and there is no over-arching principle to such things. I don’t think so, though. I think it mostly remains unstudied, except perhaps in classified work in naval and aerospace laboratories, where it isn’t likely to help anybody; though there are some interesting exceptions where new results come out of military studies of turbulence. It is perhaps over-ambitious to think something like turbulence is connected to non-equilibrium chemical reactions, chaos or solitons, but I think Haken and his friends were onto something, and there is stuff there which can unify many seemingly unrelated unusual behaviors of matter. These sorts of things may be used to solve practical problems, and for progress here we should probably use one or more of them as test cases if we want to figure stuff out, the same way we figured out thermodynamics by thinking about steam engines. I think Haken’s project was a failure mostly because it was a late-career vanity project, done in the usual dreary “hold conferences, publish proceedings” way, but I think his assumption of an underlying order is probably right. This kind of thing probably isn’t going to make progress by throwing money at it and having conferences (no other problem in human history has been solved in this way): it’s probably going to take attempting to build something practical that involves this sort of self-organizing system, or at least a deep Onsager-like study of some particular one. Figuring out what that might be almost certainly won’t happen in the “physics community,” for the same reasons nothing else happens in the “physics community,” but it should happen. Humanity is leaving money on the table otherwise.
Planning of invention 3: Westinghouse Gas Turbines
I like to write about technical triumphs, but it’s also worth remembering giant failures. We have had plenty of examples of technological failure in recent times, but most of them are familiar, or are some variation on the sexy marketing vaporware trope (nanotech, quantum computing). There’s scant remembrance of the Westinghouse Gas Turbine Division. It was important historically; the Westinghouse company delivered the country’s first native-design jet engine, based on its long expertise with steam turbines. This was in 1943, before any knowledge of British or German (or other American) jet engine research. GE was its only native competition. GE had both a turbo/supercharger division and a steam turbine division. Pratt & Whitney got its start in jets as a contractor for Westinghouse (it was also good at turbochargers). GE and P&W are still around making jet engines. Westinghouse isn’t. There’s a reason for this.
Jet turbines are almost comically a black art. For guys who think science and technology is all about muh papers, jet engines are a great counterexample: jet engines are all trade secrets. You can trace the lineage of most of these things to Whittle and Rolls Royce; they gave the Russians their start, and ultimately GE and P&W as well, as these firms manufactured RR designs on contract. The Russians taught the Chinese (who are still behind the Russians). SNECMA (frenchies) might have developed their own using BMW scientists, but they’re so tied in with the Americans and British it would be difficult to credit them with independent creation. GE was the supplier for the Air Forces; the Navy wanted Westinghouse to be their supplier, I assume for similar reasons they have different football teams.
The Westinghouse J30 was the first successful non-German axial-flow turbojet: this was a legit home-grown first in the USA. Almost all jets are now axial flow; the early ones by Frank Whittle were “centrifugal,” aka more like turbochargers. So this was a real leapfrog in a way: they got the ultimate design form of the jet engine correct. Mostly, I think, because steam turbines were axial. The J34, a larger version, powered the F2H Banshee and Douglas F3D Skyknight: successful first generation carrier jet fighters. The J34 gave about 3000 or 3400 lbs of thrust; 4000 with afterburner (which, FWIIW, was designed by somebody else). The first GE axial turbojet (the J35) made comparable thrust. The next jet Westinghouse designed, the J40, was supposed to roughly double this to 7500 lbs, or 11,000 lbs with afterburner, with pie-eyed ideas that it might hit 16,000 lbs one day.
The Navy bet big on this engine: it was supposed to power several of their most futuristic looking planes (of all time) to supersonic speeds. These planes should have been as good as the Air Force’s Century series. They were unfortunately turds, in large part because the J40 failed. Westinghouse jet engines in general are a great case study in technological development failure. Not only were they turds which didn’t deliver on the thrust requirements (which Westinghouse cravenly lied about), they were unreliable turds. It was a huge scandal at the time and resulted in Westinghouse getting out of the jet engine business.

Westinghouse failed at jet engines in part because it failed to recognize that this was a substantively different technology and business from the steam turbines it evolved from. It was a small team, entirely converted from steam turbine research; none of them knew anything about combustion (people at GE and P&W did). They even attempted to use oiled babbitt sleeve bearings on the turbine instead of ball bearings; worked in steam turbines, bro! The other thing the steam guys had never done was run a mass production line; their peak wartime production of steam turbines for naval shipping was something like 4 per month. Jet engines required hundreds a month. Steam turbines could be tinkered with in production due to the smaller numbers. Not so with mass-produced jet engines.
That’s how Pratt & Whitney got involved with jet engines: they knew how to do mass production (and combustion). Their management was also more serious than Westinghouse management about jet engines, so P&W’s team was given space, resources, even an R&D lab to produce the Westinghouse J30. They were literally called into the project by the Navy to save Westinghouse’s bacon. Westinghouse’s engine design group was only given a corner of the steam turbine factory for both R&D and production, which was insane. Westinghouse CEO Gwilym Price was ideologically against investing company funds in government projects, hence the lack of investment in the program. It is amazing they were able to pop out any working engines under these conditions. Apparently they did so in part by relying on enormously skilled old machinists who used to work with George Westinghouse himself. Most of the Westinghouse-manufactured engines were unworkable, but most of the P&W-made versions of the J30 worked. This was an early hint that the Westinghouse engineering team wasn’t doing something right, and P&W was.

Eventually an assembly line was leased from the Navy in Kansas City, and the J34, a larger evolution of the J30, was successfully mass-produced there by Westinghouse. Unfortunately it was half a continent away from the R&D section (still in a corner of the steam turbine plant) in Philadelphia. This production arrangement worked for the J34, probably because Pratt & Whitney engineers had already worked out the production bugs on the J30.
It didn’t work for anything else; the subsequent J40 and J46 engine projects were failures. The Westinghouse people just assumed they could wing it and would get a J34-like outcome. Incidentally, the Soviets were able to put their factories far from the design bureaus, but they were disciplined Soviets who understood production engineering, not corporate apparatchik nitwits who were waiting to be promoted to the more profitable washing machine division. For all I know they frog-marched the engineers to the production line to make sure there were no problems, and sent them to the gulag if there were (feel free to speak up if you know: Soviet high technology groups were quite successful somehow).
The J40 had a higher-compression compressor, and its turbine was two-stage; the Westinghouse engineers had done neither before. If you look at successful R&D-to-production programs, you put the engineer next to the machinist, not in different cities. GE, Rolls Royce and Pratt & Whitney were all doing this. There was also a clownish Naval Bureau of Aeronautics, which prevaricated on requirements and seemed to think it could plan things without, like, checking progress, or even doing basic reality checks. They were the ones that assumed the J40 would go smoothly and ordered a bunch of aircraft types designed around the J40, despite their earlier experience with J30 production failures. They were also enablers, treating Westinghouse with kid gloves in many ways rather than holding them accountable. To complicate things, they also didn’t sign any contracts with Westinghouse, just letters of intent, which required Westinghouse, run by a nickel-and-dime ideologue, to pony up the rest rather than take a loan out against a contract. The same Bureau of Aeronautics tried to shut down the Sidewinder program, BTW; they were across the board as bad as Westinghouse. By contrast, P&W engines were internally funded at 10x the level of the Westinghouse project, and were given one of their top engineering managers, Perry Pratt (no relation). P&W management recognized that in capitalism you have to invest in new products, instead of whining about taxes, winging it with a corner lab, and engaging in petty swindles designed to defraud and outrage the customer.
Westinghouse dealt with their failing J40 project not by investing further resources in the project or changing management to someone more aggressive, but by placing old-fashioned newspaper ads trumpeting the triumph of their J40. This ought to be recognizable as the standard Silly Con Valley vaporware approach used with AI, VR and autonomous vehicles. Unlike current year stenographers for tech and military companies, everyone laughed at this ridiculous imposture. Contemplate this the next time the gibbering dummkopf at Yoyodyne tells you about the wonders of his new vaporous quantum neural metaverse bullshit. This preposterous marketing campaign of course cheesed off the Navy, as they knew better than anyone that it was false and the J40 was a huge basket of failure. The Navy had also just subsidized construction of a new engine facility in Columbus, Ohio, which Westinghouse promptly used to manufacture refrigerators: a ridiculous swindle which probably didn’t make Westinghouse any new friends.
This failure by Westinghouse caused a congressional inquiry. Contrast this with our present den of whores who vacuum F35 Lockheed peen0r when they’re not busy taking it in the keister from their favorite foreign government who has kompromat on them diddling teenagers or whatever. Back in the 40s and 50s you could become Vice President by looking for fraud and waste in military contractors. Now I assume it would only get you unelected or worse. It’s fun to read this historical stuff, because the Westinghouse swine were actually considerably better intentioned than any current year defense contractor I can think of. Yet the congress of the people took them to task for it, and very much rightly so; the Westinghouse executives were both incompetent and scumbags.
The steampunk clowns of Westinghouse never delivered a working production J40, and their next venture, an upsized J34 called the J46, was also essentially a failure, giving a fraction of its advertised power and regularly breaking down spectacularly. It was deployed in the F7U “Gutless” Cutlass, which had other problems, but a shitty underpowered engine didn’t help matters any. Same story with the Convair F2Y Sea Dart supersonic seaplane.

This used to be my screensaver image
The McDonnell F3H Demon was almost sunk by its reliance on the J40; the airframes were eventually fitted with the GE J71. This airframe more or less evolved into the wildly successful F-4 Phantom II.

The batlike Douglas F4D Skyray was also supposed to use the J40, but they eventually shoehorned a P&W J57 into the thing, setting it back years and changing the airframe considerably. It looked cool; it might have been better if they hadn’t had to fatten up the fuselage to fit the J57 in it.

The early swing-wing/variable geometry jet, the Grumman XF10F Jaguar, was also supposed to use the J40, and it failed in part because of the engine’s lack of power.
Westinghouse failed at jet engines in part by being ahead of its time and overconfident. It acted much like current year Lockheed: asking for the government teat to take on all the risk, and offering up rampant excuses for its failures to invest human and capital resources in the new line of business. Westinghouse (like Lockheed) also simply lied about performance characteristics and costs. Various commentators have attributed the failure to so-and-so’s theories of organizational capabilities. The reality was pretty simple: the engineering team was too small, under-resourced, and had insufficient experience with both combustion and mass production, and they got somewhat lucky the first time. The management was mildly retarded, and had an ideological commitment to not investing in government projects and to not recognizing that the new technology wasn’t just a sort of steam turbine. The weird layout of production and R&D, and the inability to predict the obvious fact that the military would eventually want faster jets, didn’t help either. GE and P&W succeeded because they were more bloody-minded, experienced in related problems, and had better management which invested in the new technology.
Very interesting thesis on Westinghouse jet engine development, where most of this comes from: WestinghouseAGT.pdf
The website is also good.



