Very small engines
One of the interesting things to contemplate is the scale of the internal combustion engine. It’s a very human scale device: pistons the size of fists, valves about as wide as knuckles. It’s the kind of thing a man with normal sized machine tools can make. Most internal combustion engines in the world are on this human scale. The ideas came about from the very human business of making cannons and pumps for coal mines, so no real surprise there. There are also fairly large ones driving cargo ships, with pistons about a yard in diameter. Those are about as big as they get: pistons half a meter to a meter in diameter, and something around that size has been in existence for about a century (along with the similarly sized steam engines they evolved from). Cars with fist sized pistons have a thermodynamic efficiency of around 25%, maybe 35% on a good day. The thing with manhole sized pistons hits 50% and is able to burn tar-like bunker fuel.
The more important prime mover is the turbine. For gas turbines, the turbine blade is on a similarly human length scale: the things that convert heat into motion are single crystals of nickel superalloy about 6 inches long -not real different in scale from car or marine engine pistons. Steam turbine blades are made of less exotic materials and are considerably longer; maybe a few feet long -just like the old timey big piston steam engines. If we ever switch to supercritical CO2 turbines, the blades will be much smaller -back to gas turbine size or smaller.
There are lots of reasons for this, but the primary reason is people are people sized and tend to make things out of parts on people scales. If you start thinking about other length scales, things get very different. For the same reasons you can’t just make a lathe very small and expect it to function similarly, you can’t make an efficient heat engine very small and expect it to work the same way. For example, the surface area to volume ratio in smaller engines becomes unfavorable for standard designs. Combustion looks different on millimeter length scales than it does in fist sized objects; it’s much more unstable, and the droplet size from something like a fuel injector or carburetor isn’t so favorable to very small motors. To put a scale on it: diesel motor injectors make droplets around 5 microns. Gasoline/alcohol, maybe 25 microns. If you’re using a carburetor, which on a small engine you probably are for “it’s difficult to fit a fuel injector in here” reasons, probably 100 or 200 micron droplet sizes. Imagine you have a 5mm (aka 5000 micron) bore engine: the droplets start to look like giant beach balls bouncing around inside the cylinder. That’s going to produce very strange burn dynamics compared to the same droplets bouncing around in an average motor with a bore around 90mm, where it doesn’t look so bad. Going smaller than 0.1cc, this obviously gets worse. Same story but worse for stuff like steam piston engines, along with the additional hurdle of having a tiny steam bomb in your prime mover.
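To put rough numbers on the beach ball picture, here’s a toy calculation; the droplet and bore figures are the illustrative ones above, not data from any particular engine:

```python
# Rough scale comparison: fuel droplet diameter as a fraction of cylinder bore.
# The droplet and bore numbers are illustrative figures, not measurements.

def droplet_to_bore(droplet_um: float, bore_mm: float) -> float:
    """Droplet diameter as a fraction of bore diameter (bore converted to microns)."""
    return droplet_um / (bore_mm * 1000.0)

# A carbureted droplet (~150 microns) in a 5mm model-engine bore is ~3% of the
# bore: a beach ball. The same droplet in a 90mm car bore is ~0.17%: a speck.
tiny = droplet_to_bore(150, 5)
car = droplet_to_bore(150, 90)
```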
There’s obvious reasons why a small heat engine might be desirable. Hydrocarbons are a great way of storing energy; much better than the present generation of battery technologies in terms of weight and volume. That’s why life uses hydrocarbons to store energy. Having a little motor and some ethanol for a laptop battery sounds pretty cool to me. I mean, a fuel cell would be quieter and more futuristic, but nobody can make those work right, and people do make motors work on a regular basis and have for 150 years or more. Again, 1200kJ/kg for lithium batteries versus 40,000kJ/kg for kerosene. Imagine you’d like an insect sized drone (people definitely want this); you ain’t gonna power such a thing for very long with a tiny volume of lithium polymer, but you could certainly do it with some hydrocarbons.

Of course model engineers have made small heat engines for over a century now, but as far as I know, none of them have concentrated on making them efficient small heat engines; just making them function, or push a model airplane around, is enough work.
Starting with the Carnot model, we can begin to see even more reasons why there might be challenges with building small, efficient heat engines:
Squeezing big heat differences into a small space is going to be more difficult than squeezing big heat differences into a large space. An efficient heat engine burning kerosene or whatever might have a hot side temperature of 2700 kelvin, with a cold side of 300 kelvin. Maintaining a temperature difference of 2400K over a few feet is fairly easily doable, but seems more difficult over millimeters unless you start making the things out of zirconia or other ceramics.
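For reference, the Carnot bound with those temperatures is trivial to compute; real engines get nowhere near it, but it sets the ceiling:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum possible efficiency of a heat engine between two reservoirs (kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Kerosene flame (~2700K) against ambient (~300K): about 89% in the ideal limit.
# Real fist-sized-piston engines get 25-35%; big marine diesels, ~50%.
eta = carnot_efficiency(2700.0, 300.0)
```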
Making flame on a small length scale is also inherently difficult: there is a phenomenon called “flame quenching distance.” Below a length scale of a few millimeters the flame can’t propagate well. I believe this is independent of the beach-ball sized fuel droplets in tiny motors, but it’s probably somewhat related.
Speaking of scale: stuff like piston rings assumes a piston-cylinder gap which involves a piston of a couple of inches drilled by conventional boring bars. These have gaps of a certain size, which work very well for pistons of this size. They work like shit on much smaller pistons/bores because, like, geometry: a gap that is tiny on a 95mm piston looks huge on a 5mm piston in comparison to total area.
Surface area to volume: this is why we don’t have very large insects (but did when there was more oxygen in the atmosphere). Bugs breathe through holes in their skin rather than through lungs. Works fine for small critters, falls over for anything bigger than current year bugs. Similarly the surface to volume ratio is very different for a 1cc (model airplane) or 0.1cc or 0.01cc engine than for a more ordinary 2000cc or 4000cc motor pushing your car around (7000cc for Americans I guess). There are many implications: heat transmission is one of them. It’s easy to maintain large temperature differentials in a bigger motor, and large temperature differentials mean higher efficiency. At smaller length scales, thermal conductances scale differently from the forces as well, so something like a steam engine is going to look radically different at 0.1cc than at 50,000cc like a big old timey ship steam engine piston.
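A sketch of how surface-to-volume scales with displacement, assuming a “square” cylinder (bore equal to stroke); that shape is a simplifying assumption, not any real engine geometry:

```python
import math

def surface_to_volume(displacement_cc: float) -> float:
    """Surface area / volume (1/cm) of a closed cylinder with bore = stroke."""
    # displacement v = (pi/4) * b^2 * s, and with b = s: s = (4v/pi)^(1/3)
    s = (4.0 * displacement_cc / math.pi) ** (1.0 / 3.0)
    b = s
    area = math.pi * b * s + 2.0 * (math.pi / 4.0) * b * b  # wall plus both ends
    return area / displacement_cc

# S/V goes as displacement^(-1/3): a 0.1cc cylinder has 10x the relative
# surface (hence relative heat leakage) of a 100cc cylinder of the same shape.
```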

The other thing is a little motor is necessarily going to have to run at a higher RPM for high energy density, and that’s kind of bad for combustion efficiency: the flame has to propagate, and high RPMs give it less time to do so.
There are wackier ideas. Thermoacoustic engines were a pretty interesting foray into strange domains. In effect, a thermoacoustic engine is a Stirling engine where the pistons are standing sound waves in a resonant chamber. These use different kinds of physics to get rid of moving parts. They are pretty good sized -something like a foot long. There’s some crazy German dude on youtube building such things in hopes of powering his house using the effect, burning self-generated biogas. It’s not so much this design that interests me as being inspired by it: using new kinds of physics to make small prime movers.
Keeping with the idea of using sound, there’s an idea called the thermoacoustic ratchet. You can create microcavities which sustain standing waves at very high frequencies when there is a temperature differential; from there you can harvest the energy using some other idea, maybe piezoelectrics. There are other material properties to exploit: people have started using pyroelectric materials to harvest such energy. Even weirder: using little vapor bubbles in liquid capillaries. Other ideas: evaporation has been looked at. Squeezing liquid through weird little pores. There’s probably a lot of crazy ideas in tribology and materials science that could be put to work here. One of the cool things about all this is much of it is open to tinkerers.
Small steam engine:
https://www.mpg.de/4691201/thermodynamics_microscopic_steam_engine
https://www.sciencedaily.com/releases/2011/12/111211134002.htm
Pre-Dreadnoughts: an aesthetic appreciation
Continuing my fascination with transitional designs, I’ve been looking at Pre-Dreadnought battleships. Battleships are ridiculous, but also awesome. The things that came before the classical battleship were even more ridiculous. The Dreadnought, what we think of as a battleship, was the culmination of years of design thought on the topic of sticking a metal boat with lots of cannons on it on the ocean. The basic change of the Dreadnoughts/modern battleships from the earlier idea was using the new wonder technology of steam turbines, and making the armament “all big guns.” Pre-Dreadnought battleships can be thought of as “battleships” through their use of turrets and steel armor, but they often had a bewildering array of different caliber cannons; some heavies, some rapid fire and closer to the waterline. The idea being that small torpedo boats were a real threat, better dealt with using smaller rapid fire guns rather than the big guns. Quite a few militaries thought capital ships were obsolete thanks to the invention of the torpedo; remember the Caio Duilio torpedo boat carrier? The Jeune Ecole group of military intellectuals came to this conclusion, and the French even stopped battleship production altogether in favor of torpedo boats and other small vessels. They were early innovators in submarines and destroyers as a result, though they later went back to battleships as seen below. Jeune Ecole thinking is probably relevant again today: drones, aka autonomous torpedoes, pose a threat to large ships similar to the one torpedoes did in the past. One can build fairly long range autonomous torpedoes which would be difficult to detect or defend against. I’m not sure that idea has fully percolated through the navies of the world.
The Dreadnought ships were faster than previous generations and had fewer kinds of guns. Faster is an obvious advantage. Fewer kinds of guns is less obvious, but also important: you only need one fire control system for one kind of gun. Might as well make it a big gun. HMS Dreadnought was the first ship of this kind, giving its name to the idea, but it could easily have been called the IJN Satsuma or USS Michigan type of battleship, as the Japanese and Americans had the idea at around the same time. They arguably all cribbed the concept from an Italian idea published in Jane’s Fighting Ships. All subsequent battleships were more or less of the Dreadnought type, with various incremental improvements in artillery, engines and armor.
The idea for the Dreadnought came about as the result of recent sea battles, mostly involving the Japanese (against China and especially Russia). Gun battles happened at surprisingly long range, making the smaller, shorter range quick firing guns previous generations of battleships were festooned with, and the arbitrary mid-size guns, extraneous. It also made ballistics calculations easier if you only had one kind of heavy gun. This is a big deal without microchips; fire control systems were cobbled together using machine tools and mechanical calculators. Turbines made everything happen faster: 21-28 knots instead of the 16-18 or so from reciprocating steam engines. Oil firing eventually became the standard, so stokers could do something else, like work on the fire control system.
What I like best about the pre-Dreadnoughts is the way they look. They mostly had short lifespans and didn’t achieve much in battle, but they looked cool. It’s as if you took a post-Dreadnought battleship and festooned it with a bunch of steampunk nonsense that belongs to the horse and buggy era. Casemate guns, birds nests, small turrets, portholes, ventilation tubes (obviated, I think, by the invention of the small electric motor), giant square masts, davits: all the kind of stuff that had been on boats for decades or centuries before, but arguably no longer relevant to the era of steam.
They’re all weird to modern eyes, but one of the weirdest was the Charles Martel class. It used a tumblehome hull, absurdly fatter at the waterline than anything that makes sense to the modern eye.

French Battleship Charles Martel
I’m not going to list the armaments the thing had; you can see most of them for yourself: a seemingly haphazard festooning of random sized turrets and gizmos. Why does it have multiple birds nests on two masts? The thing is also covered in holes, which makes no sense to me. Maybe it didn’t have electric lighting? Seems risky putting holes in your armor though. Notice the gloriously retro ventilation tubes aft of the second funnel.

USS Texas
The USS Texas (launched 1892) kept the main batteries amidships; something the Dreadnought itself preserved for some of its guns before everyone realized this was ridiculous. Festooned with smaller casemate cannon, it was designed to fight… South American battleships, which were considered a threat back in those days when the Souf Americans still had well functioning economic and legal systems. It was successful in the Spanish American war, despite being obsolete by that time. I like the cheerful decoration at the prow and the steampunk robotech vibe of the rest of the hull.

French battleship Danton
French battleship Danton has a different set of excesses from the Charles Martel: five funnels, all different sizes, exhausting 26 boilers which drove four turbine engines they got from the British. A semi-dreadnought on account of the turbines, but it retained the older casemate cannons and oddball calibers. The giant masts give it some of the character of old sailing ships. It was sunk by a U-boat in WW-1.

USS Indiana
This is a late photo of the USS Indiana, around WW-1 times, well past its heyday. I find the added radio mast (the giant tube like tower aft) and numerous ventilation funnels to be festive.

HMS Jupiter
This thing looks like a contemporary cargo boat and a previous generation two masted ship of the line collided and got encrusted with mechanical barnacles. One of its most noteworthy feats was a tour as an icebreaker for the Rooskies in WW-1.

Peresvet
Russian battleship Peresvet was relatively lightly armored and was sunk by the Japanese at Port Arthur. The Japanese salvaged it, drove it around for a decade or so as IJN Sagami, then sold it back to the Russians, by then allied with them, for WW-1. It sank after hitting a German mine off the Egyptian coast shortly after this. It’s recognizably of its time with the birds nests, ram prow, casemate guns and so on. But it looks like it’s 4-5 stories above the water; massive freeboard. Why? To present a larger broadside target to the Japanese?

Tsesarevich
With the Russian battleship Tsesarevich we’re back to wacky bulbous tumblehome hull designs; it was actually a relative of the Charles Martel above and was made in France. So we know the Russians weren’t fetishists for having 5 story buildings above the water for Japanese target practice. It was also attacked at Port Arthur, and its main contributions to WW-1 were malingering communist outbreaks among the crew.
Cool pre-Dreadnought autism playlist by Drachinifel which inspired this:
A-5 Vigilante
The North American Aviation company was pretty much wiped out by the Apollo-1 accident; a political sacrifice, as the all-oxygen atmosphere which caused the disaster had been called out by… North American Aviation. The company made some amazing planes before it bit the dust and got bought out by the Rockwell conglomerate. For WW-2, it made the game changing P-51 Mustang, and for Korea, the F-86 Sabre. Later came the F-100 Super Sabre, the first supersonic fighter deployed. There are a bunch of other impressive firsts by North American Aviation: Navaho intercontinental cruise missiles, Hound Dog cruise missiles, the X-15, various rockets, Space Shuttles, B-1 bombers, but we’ll focus on other matters here. Probably the craziest thing they built was the XB-70 Valkyrie, a Mach-3 bomber prototype. There was also a crazy interceptor design, the XF-108 Rapier, which was supposed to defend against Soviet Mach-3 bombers (which never materialized once everyone switched to ICBMs). It was also considered for a role accompanying the B-70. It was a real purdy gizmo in the sketches, though it never actually flew. After its cancellation, some of the work they did on the F-108 manifested in the A-5 Vigilante.

XF-108 Rapier
The A-5 is a mostly unknown and underappreciated jet. It was, like the XF-108 design, very aesthetic: the first test pilot took one look at it and knew it would be pretty good, and it was. It wasn’t a Mach-3 design; it was designed for Mach-2. That’s why the wingtips didn’t tip down on the A-5: no Mach-3 shockwave to ride for compression lift. Mach-2 was still pretty fast back in those days: the thing first flew in 1958. Originally it was designed to be a nuclear bomber for the Navy; one with a peculiar way of flinging the bomb out the space between the engines at the rear end of the thing: sort of like excreting a supersonic radioactive turd.

Nuke turd delivery system
That never really worked right, and the Navy eventually got sub launched ballistic nuke missiles and lost interest in the system. They did buy a bunch of the things for the reconnaissance role: the space that would have held a nuke was a pretty good size and shape to stuff full of cameras. This was an essential role in Vietnam for seeing if bombing runs were successful. Enough of them were shot down (18 of them) that the Navy ordered 30 more late in the 1960s to cover the role.

The airframe is the real innovation: many subsequent jets used something similar. The air intake should look familiar to modern eyes: most dual engine fourth generation fighters used it. The F-15, F-14, Mig-25, Mig-31, Su-27 and Tu-22M all use the same wedge ramp design; even the Concorde kind of did. It was the first such design, and it has a lot of favorable qualities. The wing is high, as with most of the above planes. There’s a weird rumor that the A-5 heavily influenced the design of the Mig-25, but other than the similar air intake and high wing, they don’t look much alike to me. Look for yourself:


Yes, they’re both high wing dual engine planes; other than that they don’t look alike to me
I think the main resemblance comes from the fact that the A-5 descended from a Mach-3 interceptor design, so it is no surprise that the Mig-25, an approximately contemporary Mach-3 interceptor design, would have some modest family resemblances. So do all the other 4th generation fighter designs the A-5 doesn’t get priority credit for. Otherwise the Russian plane looked like a very fast brick, where the A-5 was nice and aesthetically swoopy, the way most North American Aviation products were.
The A-5, like many planes of the time, used boundary layer effects to add lift for those carrier takeoffs: blown flaps, basically. This idea always weirded me out: you divert some jet thrust and spray it over the wings and you get more lift. Worked pretty well though. I think the idea was eventually mostly abandoned due to the maintenance requirements of keeping the blowers clean. The jets used were the same as in the popular F-4 Phantom: the GE J79, a workhorse of the period. Overall the A-5 was similar to the F-4 in weight and thrust, though it was considerably longer and had more wing for no-wind takeoffs (a weird requirement the F-4 didn’t have); roughly speaking they had similar capabilities. The A-5 had much longer range though, which is why they used it for reconnaissance instead of the more common F-4.
The A-5 had one of the first onboard digital computers, one of the first heads up displays, a radar supplemented inertial navigation system cribbed from Project Navaho, and a complex radar. The ground crew hated it, as electronics were not so reliable in those days, and its electronic complexity ended up being the main cause of its early retirement in 1979. It also had some early stealthy characteristics; probably all that swoopy business.
I find the A-5 interesting for a couple of reasons: it looked good, and its overall airframe looked maybe 15 years ahead of its time. It held an obscure but important specialty role. It also has the heritage of the XF-108, which was a hugely influential plane in US history, both for the airframe which eventually became the definitive 4th generation heavy fighter airframe, and for the AIM-47 standoff missile it would have used; a device which was the direct ancestor of the AIM-54 Phoenix used by the F-14, still one of our best missiles. Had the F-108 been deployed, it might have had capabilities similar to the Mig-31 evolution of the Mig-25, which has proved itself effective even today with the philosophy of “go fast and high and use long range missiles.” Probably the electronics in those days weren’t quite up to it, but the basic idea was sound. The “fighter mafia” pushed US air combat doctrine in a different direction, but the original idea of all the century series fighters was pushing towards something like the Mig-31. In an odd way, stealth fighters were developed to fight this kind of idea of the powerful high flying fast interceptor with long range missiles. The Avro Arrow of Canook legend was the same basic idea. Incidentally the Mig-31 and the F-108 are both considerably heavier planes than the A-5 was: because of the role, basically.
The A-5, though, was a bunch of prototype ideas slapped together into a pretty good airplane. It was the purdiest one on the carrier flight deck for the duration of its career, and it had a fairly long service life in its reconnaissance role. Its aerodynamic bones influenced most of the great 4th generation fighters, yet most people have never heard of it.
Reversible computards: classical and quantum
Reversible computing is interesting as a concept. There is something called the Landauer limit, which back in 2010 I used to calculate the thermodynamic limits of AI versus the human brain. The idea here is, every bit has some minimal amount of entropy involved in its destruction. Dissipating entropy creates heat. All computers destroy bits routinely; tremendous quantities of them. That’s how logic gates work. You can make any kind of digital computational element from NAND gates, and NAND gates slaughter bits with majestic heavenly force: two bits go in, one comes out, so a bit dies somewhere in every gate operation. Landauer pointed out there is some minimal entropy increase involved with murdering perfectly innocent bits: k*ln(2) of entropy, or k*T*ln(2) of heat, is the Landauer cost of the death of a bit, k being the Boltzmann constant aka 1.38×10^-23 Joules/Kelvin.
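The irreversibility is visible right in the truth table; a trivial sketch counting how many input states a NAND gate collapses onto each output:

```python
from itertools import product

def nand(a: int, b: int) -> int:
    """NAND: the universal, bit-destroying workhorse gate."""
    return 0 if (a and b) else 1

# Two input bits become one output bit. Three of the four input states
# collapse onto output 1, so the inputs can't be recovered from the output:
# information is destroyed, and Landauer says that costs heat.
preimages = {0: [], 1: []}
for a, b in product((0, 1), repeat=2):
    preimages[nand(a, b)].append((a, b))
```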
I think it was Tommaso Toffoli who came up with the idea of not throwing away those bits, thus potentially avoiding the k*T*ln(2) of heat dissipated into the heat bath per bit. It’s a cool idea, but not a complicated idea: just keep the bits around. That way when you do a NAND or XOR or whatever destructive boolean operation, you can in principle reverse it. The downside is you have to keep around a shitload of bits and devise a way to use them to power your computation.

These bits are not particularly important in the larger scheme of the energy budget of a CPU doing its thing. A modern CPU has something like 10^9 gates and clocks at say 3×10^9 Hz. Theoretically that’s like 3×10^18 potentially dead bits a second. Really you’re probably using a small fraction of those gates per cycle: call it 3×10^17 bits killed a second when it’s thinking hard (this is a huge overestimate for most situations). Boltzmann’s constant k is 1.38×10^-23 J/K. Times ln(2) and 300 degrees kelvin is about 2.9×10^-21 joules/bit. Assuming you kill off 3×10^17 bits a second, that’s under a milliwatt. A CPU uses like 30-100 watts, so even with this overestimate of dead bits, there is something like 100,000x more waste heat coming from all the other stuff going on in a CPU.
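The back-of-envelope above, as a sketch; the bits-per-second figure is the same deliberate overestimate:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_power_watts(bits_per_second: float, temp_k: float = 300.0) -> float:
    """Minimum power dissipated erasing bits_per_second at temperature temp_k."""
    return bits_per_second * K_B * temp_k * math.log(2)

# ~3e17 erased bits/s at room temperature comes to under a milliwatt,
# versus 30-100W of actual CPU dissipation: roughly 5 orders of magnitude apart.
p = landauer_power_watts(3e17)
```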
The idea of reversible computers is interesting in a strictly theoretical sense. The problem is people take this theoretical idea, sort of like the idea of having actual infinite bit length real numbers, as a physical idea. While you can build a reversible computer in principle or in some limited way, you can’t perfectly cancel out the bit-heat dissipated. You just can’t: same reason you can’t build a perpetual motion machine. Bennett claims you can build a perpetual motion machine of this kind, effectively running the calculation in reverse to power the forward calculation, neatly canceling out the big balloon of bits Toffoli kept around by immediately using them to power the forward calculation. He’s almost certainly wrong mathematically, or making unphysical assumptions that are effectively Maxwell’s Demon (aka a perpetual motion machine). Usually it gets phrased as “adiabatic,” aka extremely slow changes to the bits, which rather defeats the purpose of doing a shitload of bit operations without generating any heat. A shitload of bit slaughtering computations turns into a pico-shitload of “adiabatic” computations, and if we don’t want to wait around for the calculation, we’re back to heating up the air with bits, and, what the hell, all the resistive heat which is already 100,000 times more important.
Resistive heating is way more important than dead Landauer bits, even assuming you could do something about them to power a Bennett style perpetual calculation machine; like tens and hundreds of thousands of times more important. Removing stuff like resistive heating is extremely difficult, and not as “fancy” to nerdoids, so nobody actually wants to do this kind of work, certainly no pencil and paper theorist dingalings. Beyond that: imagine we actually have a computing device which is close to the Landauer limit; further imagine we can build magic Toffoli gates which keep the extra bits around so we can in principle reverse the computation; further imagine we can actually get the Landauer entropy back via magic powerless Bennett bit refrigerators. Virtually all of these things are vast lacunae. We’re already several orders deep into “nobody has any idea how to do one, let alone all of these quite possibly individually impossible things.”
Yet, we still get breathy bullshit like this. Granted it’s the New Scientist; I’m pretty sure I’ve seen things on the cover talking about psychic bees or whatever. They paywalled this because they knew I was going to make fun of it, but here’s the video precis:

They mention this startup: Vaire Computing. They claim they’re gonna make reversible computards for us real soon now. I glanced at their website and publicity materials and it’s even more vacuous than the quantum computard startups: it’s obvious this company exists to collect ARPA baksheesh and produce nothing of value. Many such cases. You can go to the founder’s personal website, look at his research papers and tell me if there’s something there. If there is, I can’t see it. It looks like very old ideas from the 1980s, already funded by DARPA back then. Split-level charge recovery logic (SCRL) was a candidate replacement for CMOS; one that people thought might be in principle reversible. Nobody actually made it reversible. This startup probably isn’t gonna do it either. Might be cool to do more with charge recovery logic though.
We can do a lot better than present CMOS technologies without getting into “woo” bullshit like reversible calculations using Toffoli gates or even SCRL. The kinds of things that need to be done are difficult, otherwise someone would have done at least one of them by now. But this sort of thing is iterative and progress can often be made with existing techniques rather than gambling on something more fruity.
If we stick with present CMOS semiconductor technology, there are quite a few obvious things people talk about doing. You can shut down unused circuitry on your chip. You can slow down clock rates on unused circuitry (already done for unused cores; it can be done on a finer scale). You can use process technologies to improve capacitor performance and shrink required capacitance. You can use process technologies to allow variable voltages on non-critical paths. We can optimize gate designs to reduce the number of transistors needed for groups of gates. We can lower the number of bits used in an operation (aka FP16 is cheaper than FP64). We can improve wire performance by shrinking or sharing wires. We can lower overall chip voltage. We can recycle some of the waste heat using the Peltier effect or some other heat engine. We can lower transistor leakage current by using alternative transistor designs. People have asserted factors of 100 or 1000 are available from these techniques. That would be way better than adiabatic reversible computers. We need to do all kinds of stuff like this before we start worrying about Landauer bit heat.
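Most items on that list attack the same classic switching-power relation, P ≈ α·C·V²·f; here is a sketch of why voltage is the big lever (the activity factor and capacitance numbers are made up for illustration, not data from any real chip):

```python
def dynamic_power(activity: float, cap_farads: float, volts: float, freq_hz: float) -> float:
    """Classic CMOS dynamic (switching) power: P = alpha * C * V^2 * f."""
    return activity * cap_farads * volts**2 * freq_hz

# Voltage enters squared: halving V alone cuts switching power 4x;
# halving V and the clock together cuts it 8x. Hypothetical numbers follow.
base = dynamic_power(0.1, 1e-9, 1.0, 3e9)
half_v = dynamic_power(0.1, 1e-9, 0.5, 3e9)
half_vf = dynamic_power(0.1, 1e-9, 0.5, 1.5e9)
```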
Less obvious perhaps: compilers and modern operating systems suck. There’s no real reason they are as bad as they are, but they are bad. I’ll go out on a limb and assert a factor of 4 is possible here; on many problems, factors of 100 or more are possible. You get a hint of this by observing that specially tuned libraries like BLAS or FFTW are absurdly faster than naive compiled code. Getting the computard compiler to understand people’s intentions better, and to understand the limitations of the particular CPU/memory system it runs on (instead of assuming everything is basically a PDP-11, as is the model of virtually all compilers), pays huge dividends. That’s not even counting getting rid of failson ideas like JIT compilers which make everything slower.
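A toy illustration of the kind of win on the table: the two functions below compute the same matrix product, but the second reorders the loops so the inner loop walks rows contiguously; this is the sort of trivial transformation tuned libraries like BLAS bake in and naively compiled code misses. In a compiled language the reordered version is typically several times faster from memory locality alone; in Python both are slow, this just shows the transformation.

```python
def matmul_naive(a, b):
    """Textbook i-j-k triple loop: strides through b column-wise (cache hostile)."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c

def matmul_ikj(a, b):
    """Same arithmetic in i-k-j order: the inner loop walks rows of b and c
    contiguously, the kind of reordering BLAS (or a smarter compiler) does."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]
            row_b = b[k]
            row_c = c[i]
            for j in range(p):
                row_c[j] += aik * row_b[j]
    return c
```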
These ideas are things people can do now, and probably should, to the extent that they are not already done. Of course, unlike theorist reddit-man wanking about reversible computing, we have some ideas on how to accomplish these things by iterating on what already exists. It has zero “woo” content to it: all of it is hammer and tongs engineering. No jobs writing programming languages and operating systems for imaginary reversible architectures from the armchair wanker squadrons farting out worthless papers on the topic. Those people should be thinking about how to make compilers for actually existing architectures better.

Oh yeah, while I’m at it, the “quantum computing” part: quantum computers can’t have resistive heating either, and they have to be fully reversible or they don’t work. You can figure this out by looking at the Schrödinger equation. Or you can look at the latest press release baloney. Resistive heating is effectively an observation: nothing quantum is happening if your “qubits” are being observed by the environment. Worse than that, quantum computards are analog wave computers, at least if you believe wave functions are physical. And nobody knows how to build the qubit registers or gates.