Variable beta decay
Trigger warning: PDF link heavy shitpoast.
One of the weird and potentially breakthrough things in physics is variable beta decay. Beta decay is, essentially, a free neutron turning into a proton, an electron and an (anti)neutrino. The Feynman diagram looks like this:
The W (W- in this case) boson is the “weak field” mediating boson. In electroweak theory, you calculate the cross section of the reaction this way: adding up vertices in the Feynman diagram just so. The neutrino is key here: all beta decays involve one of these, along with some kind of lepton (usually an electron, aka “beta ray”). Many of the slower half-life nuclear decays are essentially beta decay. Too many neutrons sticking out of a sloppy nucleus end up looking like independent neutrons, more or less.
The opposite can happen with an anti-electron/positron when the nucleus is in a lower energy state after the decay. Also, a neutrino can boink into a proton or neutron and make an extra electron or positron, changing the proton to a neutron or vice versa, at least if it has enough energy. That’s the weak force: it involves leptons (neutrinos included) and baryons.
One of the things people occasionally notice is what appears to be periodicity in the half-lives of beta-decay radioactive isotopes. A lot of this is probably bad science: Geiger counters are bad tools for this job. They have variable baseline count rates. They are sensitive to air pressure and humidity (since the detected electron has to travel from the radioactive stuff to the detector window). They are sensitive to electrical charge and magnetic effects since they’re basically avalanche tubes, including power supply variability. Worse, they have variable sensitivity at different frequencies due to the capacitance of the humble cable connecting the detector to the electronics box. For some reason a lot of the experiments involve Geiger counters, and a lot of kvetching about taking out these variabilities; the criticisms of them mostly talk about Geiger counter problems. I guess this happens because a lot of people have spare Geiger counters sitting around not doing anything (I have one FWIW, though unsuited for beta detection). This makes it easy to wire one up to a recording device and stick it in a box with some random beta-decay element. Maybe you get fancy and put it in an argon-sealed, temperature-controlled, lead-encased (to remove some of the background) Faraday cage to try to remove some of these problems.
In my opinion these sorts of experiments would be a lot more impressive if they used ordinary wristwatch tritium tubes and a photodiode and found some kind of correlation between the results with the Geiger counters. This removes some subtle experimental crap you might not have thought of, like the fact that radon gas is periodically emitted in your basement, or that cosmic rays which you have no way of distinguishing from a beta decay have various storm-like variations. The tritium tubes are well calibrated for stuff like degradation of the phosphorescent material, escape of the tritium and other such factors. Tritium has a short half life (~12y), and so, a strong signal where periodicities should show up well if something were going on. A correlation between two kinds of devices, presumably being held in different places would be ideal. Different buildings would be ideal for short term (time of day); different continents for longer term (day of year) periodicities.
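The two-detector idea is easy to sanity check numerically. Here is a minimal sketch, with everything made up for illustration (the count rates, the detector pairing, and the deliberately exaggerated 2% daily modulation are all assumptions, not data), showing how a shared physical modulation survives as a cross-correlation between two otherwise independent Poisson count series:

```python
import numpy as np

rng = np.random.default_rng(0)

# One year of hourly counts from two hypothetical detectors in different
# buildings, each with its own baseline, sharing an (exaggerated) 2% daily
# modulation on top of independent Poisson counting noise.
t = np.arange(24 * 365)
common = 1.0 + 0.02 * np.sin(2 * np.pi * t / 24.0)
geiger = rng.poisson(400 * common)   # Geiger counter, ~400 counts/hour
diode = rng.poisson(900 * common)    # tritium tube + photodiode, ~900 counts/hour

# Pearson correlation of the two series: detector-specific noise averages
# away, while a shared physical modulation does not.
a = (geiger - geiger.mean()) / geiger.std()
b = (diode - diode.mean()) / diode.std()
r = float(np.mean(a * b))
print(f"cross-correlation between detectors: {r:.3f}")
```

Detector-specific drifts (pressure, humidity, cable capacitance) would decorrelate between instruments of different types in different buildings; only a common physical signal should show up in the correlation.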
There are other techniques which could potentially be brought to bear as well: scintillation detectors can be used to detect gammas resulting from daughter particles of a beta decay. These use photomultiplier tubes, which have their own problems similar to those involved in Geiger counters, but a lot of labs have them kicking around and their changes in performance characteristics with most kinds of environmental drift will be different from that of a Geiger counter.
The main hypothesis discussed is that observed periodicities involving time of day or week of year involves neutrinos somehow, which is a pretty good guess, since we know there are lots of neutrinos shooting out from the Sun, space (galactic center perhaps) and so on. This is a big old guess, of course. People have postulated all kinds of wacky stuff; variability of the fine structure constant, hidden variable quantum mechanics, new cosmological factors. I can imagine other wacky ideas: there’s formalism developed for extremely intense magnetic fields: imagine if the scalar wave wackos are right and longitudinal electromagnetic waves are a thing.
There are interesting speculations (and measurements) that stuff like chemical binding potentials, or even pressure can vary beta-decay rates. These sorts of experiments have been going on for a long time; for example, this 1977 paper talking about physical or chemical decay rate studies dating back to 1947. Also this Soviet paper about experiments dating back to the 1950s. People stick Be-7 in fullerenes and observe different decay rates. They even have noticed different decay rates in this system at different temperatures. You can imagine why they might think these things may effect beta-decay rates; different kinds of chemicals and pressure put electrons closer to or farther from the nucleus. These sorts of things mostly fit within conventional beta-decay theory, but they should be examined more carefully as all kinds of nuclear chemistry depends on the assumptions that beta-decay is invariant.
All this is fun, but it also all suffers from both statistical problems and systematic error problems. Favorite isotopes used are common ones which often have inconveniently long half-lives. Tritium is something like 12 years. Others use Caesium-137, which is about 30 years. Be-7 is better in that its half-life is something like 53 days. Some of the observations have been random weird stuff where you happen to have a lot of very short-lived isotopes around. Favorite detectors are apparently whatever you have kicking around the house. Direct beta-decay measurements with, say, a Geiger counter are fraught for the reasons stated above, plus variability in mica-window transmission and all kinds of other things. All the different detectors have different tradeoffs.
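To see why half-life matters for signal strength: the fraction of atoms decaying per day, hence the count rate and its sensitivity to any modulation, scales with the decay constant. A quick check (half-life figures from memory; treat them as approximate):

```python
import math

# Shorter half-life -> larger decay constant -> more decays per atom per day,
# hence a stronger count-rate signal for hunting modulations.
def fraction_decayed_per_day(half_life_days: float) -> float:
    lam = math.log(2) / half_life_days      # decay constant, 1/day
    return 1.0 - math.exp(-lam)

# Approximate half-lives for the isotopes discussed above.
isotopes = [("tritium", 12.3 * 365.25), ("Cs-137", 30.1 * 365.25),
            ("Be-7", 53.2), ("P-32", 14.3)]
for name, t_half in isotopes:
    frac = fraction_decayed_per_day(t_half)
    print(f"{name:8s} {frac:.2e} of atoms decay per day")
```

P-32 gives you roughly 300 times the per-atom signal of tritium, which is the whole argument for cheap short-lived isotopes.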
Statistically speaking, measuring a half-life usually amounts to a least-squares likelihood fit on log counts, which has all the kinds of problems linear regression fans know about. Usually they look for periodicities using Lomb-Scargle periodograms, which have less well understood problems. As far as I know, none of these datasets have been analyzed with conformal techniques which could look in detail at the actual observed error bounds.
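For the curious, scipy ships a Lomb-Scargle implementation; here is a toy sketch (synthetic counts with an injected 1% annual modulation, not any real dataset) of the sort of periodicity hunt these papers do:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Three years of synthetic daily count totals with an injected 1% annual
# modulation on top of Poisson noise (illustrative, not real data).
t = np.arange(3 * 365, dtype=float)                       # days
rate = 1e5 * (1.0 + 0.01 * np.cos(2 * np.pi * t / 365.25))
counts = rng.poisson(rate).astype(float)

# Scan periods from 30 to 1000 days; lombscargle wants angular frequencies.
periods = np.linspace(30.0, 1000.0, 2000)
power = lombscargle(t, counts - counts.mean(), 2 * np.pi / periods)

best_period = float(periods[np.argmax(power)])
print(f"strongest periodicity near {best_period:.0f} days")
```

The catch, of course, is that a periodogram will happily hand you a “peak” from detector drift or seasonal lab temperature just as readily as from new physics.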
I could imagine something like finding a cheap way of making lots of shorter half life stuff and measuring its decays at different times of year. Maybe phosphorus-32 in a liquid scintillator; half life of about 14 days. You can make it by sticking sulphur in a neutron howitzer. I’ve never done this, so I don’t know what other shit the neutrons will make, but it appears to be a common industrial isotope, so someone’s presumably worked this out. Tritium could be a cheap thing, coupled with low-noise photodetectors. Such systems can be tuned and used as standards for longer term variations in beta decay. This is the sort of thing well within the means of any physics department and most “gentleman scientists,” and distributed data could be collected with time-clock standards using a blockchain kind of thing. If the effect is real it must happen all over the place.
Probably these are just sloppy experiments with apparatus ill-suited to these sorts of longer term periodicities, and/or bad statistics. Nonetheless, it’s a weird enough thing that people should be more curious about it. We build giant neutrino detectors and kvetch a lot about this sort of physics; doing some small science, perhaps in the decentralized way suggested above, seems very much worth doing. Unfortunately the scientific community prefers empire building to curiosity-driven experiments like this, so I can’t imagine it happening under current conditions. There’s lots of jobs and prestige involved in huge installations like Super-Kamiokande, and none in sticking a raspi and some mildly radioactive shit in a cupboard in 400 physics departments.
Sonoluminescence
Sonoluminescence is one of those effects that redditors and “experts” would deny the existence of if it were not so easy to generate. It’s been known for about a century now; I think it was originally observed but not understood in cavitating propellers, before some German stuck photographic plates in a sonicator. The study of it had a sort of renaissance in the 1990s when Seth Putterman wrote a Scientific American article on the subject pointing out how weird it was, with his colleagues Hiller and Barber showing how to build a doodad using simple equipment to observe single bubble sonoluminescence. The past is a foreign country: consider some of the topics of current year Scientific American and despair. Imagining current year SciAm inspiring a new research direction, let alone showing you how to build it in your living room is sort of like imagining antigravity technology being developed on 4chan, but old school SciAm was pretty based. Current year SciAm is a complete joke now that they’ve purged Horgan; maggots subsisting on the corpse of a once great institution.
I know about this sonoluminescence thing because my old boss in 1995 had me fool around with it a little bit: he didn’t have the moolah to finish his lab or pay me, but he thought it was an interesting question and there were a couple of relevant pieces of equipment kicking around. I even met Putterman briefly. I was a bit sore that he looked at me like I was some loathsome species of intestinal fluke. I also found it amusing when a based Herr Doktor Professor from the Atomic Physics department roasted him mightily in his talk, dismissing the whole thing as trivial somehow.
In hindsight Putterman is an interesting and admirable guy, and physics grad students really are lower than intestinal flukes. Putterman was a student of George Uhlenbeck; co-discoverer of electron spin and co-inventor of the Ornstein-Uhlenbeck formalism beloved of statistical arbitrage types. Putterman has worked on a lot of cool and weird stuff; he’s done table-top fusion using pyroelectric properties of lithium niobate, he’s done neat stuff with solitons, and he also does research into triboelectrics. Triboelectrics is another mysterious table-top kind of physics you could do yourself (peel tape or crack a lifesaver candy in the dark; you’ll see weird lights); most physicists ignore it and as a result it remains mysterious. This sort of curiosity and sense of scientific adventure is rare and admirable. There’s a legend he spent his grad student support grant on fancy wines, like a character out of Brideshead Revisited. Putterman was originally a theorist; mostly low temperature stuff, which is generally the type of research with lots of feedback from experimentalists. He even asked some interesting “quantum information theory” questions before that was a subject. Rather than loaf around publishing theory papers nobody reads, he branched out post-tenure into a productive career as an eclectic experimentalist as outlined above. Gentlemen: This is The Way. Don’t be a goofy bongo-playing wannabe theorist doing unfruitful “looking into the mind of God” masturbation: be like Seth Putterman -that’s what real physicists do. Go play with some actual matter.
Back to sonoluminescence: you make cavitating bubbles in water using sound and you see sparks of light. The bubble collapse creates tremendous pressures and temperatures and you get a flash of light. The SciAm single bubble example is the most canonical one and you can find lots of hobbyists reproducing it on youtube (21m50s if the hot link doesn’t work):
You’re basically zapping the blob of water in the Erlenmeyer flask with piezoelectric transducer “speakers” at its resonant frequency (25kHz in this instance). You have to degas the water beforehand to get it to about 0.2 atm partial pressure; basically you boil it, then stick a cork in it while it cools. These days it should be possible to automate the setup further using image recognition, a DAC, and maybe a needle actuator to form the bubble. Since SciAm is too busy praising girl-bosses and kvetching about imaginary slights to publish home sonoluminescence articles, I doubt it will happen.
Hu seo þrag gewat, genap under nihthelm, swa heo no waere. (“How that time has passed away, grown dark under the cover of night, as if it had never been.”)
There are several very remarkable things about this effect beyond the fact that you’re turning sound into light. For one thing, the light flashes are happening on a picosecond time scale. It was unknown back in ’95 how short the flashes were; we were going to hook it up to a streak camera to find out. Since it happens at a 25kHz repetition rate, your eyeball integrates it all together, but the shortness of the pulses is quite remarkable. Light travels only 0.3mm in 1 picosecond; think about that for a minute. Something extraordinary must be happening inside the collapsing bubble. Pulses that narrow are difficult even to measure; you need special equipment like a streak camera to do it.
The other extraordinary thing: the spark of light is blue. Blue light implies things are very, very hot. Blue stars, for example, are something like 10x hotter than red stars (up to 40,000 Kelvin versus 4,000 Kelvin). Worse, since both water and glass Erlenmeyer flasks are opaque to much of the ultraviolet, the spectrum of created light might actually peak higher than what you see: that means it might be even hotter. I remember trying to think of ways of getting the light out with a capillary or something to better bound what the temperature was. Unsurprisingly this is still kind of an unknown figure; somewhere between 5,000 and 20,000 Kelvin, I guess depending in part on materials, frequency, stability and people’s best guesses and so on.
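You can put rough numbers on the “blue means hot” argument with the Wien displacement law, lambda_peak * T = 2.898e-3 m*K, under the big assumption that the flash is thermal blackbody radiation:

```python
# Wien displacement law: lambda_peak * T = b. If the flash were a blackbody
# peaking at a given wavelength, this is the implied temperature.
WIEN_B = 2.898e-3   # Wien's displacement constant, m*K

def peak_temperature_K(lambda_peak_nm: float) -> float:
    return WIEN_B / (lambda_peak_nm * 1e-9)

for label, lam in [("red star (650 nm)", 650.0),
                   ("blue flash (450 nm)", 450.0),
                   ("hidden UV peak (300 nm)", 300.0)]:
    print(f"{label:24s} -> ~{peak_temperature_K(lam):.0f} K")
```

A visible blue peak implies roughly 6,000-7,000 K; if the true peak is hiding in the UV behind the water and glass absorption, the implied temperature climbs accordingly.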

They don’t make articles like this any more
The other extraordinary thing: the simple single-bubble effect described above somehow depends on the partial pressure of argon in the water. No argon, no sonoluminescence. It’s my understanding that other kinds of sonoluminescence do not depend on argon specifically, and you can get the same thing with other noble gasses, but it was just a peculiar and unexpected thing. You’d expect the hot stuff to come from ionized nitrogen or something. Nope; not unless the argon is involved.
Back in ’95 there were all kinds of weird speculations about it; the Casimir effect, some kind of hydrogen fusion (disproved), quantum radiation, miniature black holes (lol), proton tunneling. Back then we didn’t even know how small the bubble gets (about a micron, from the cursory literature search I did), so you could fit all kinds of wild 1/0 kinds of ideas into this thing. Probably, though, it’s noble gas bremsstrahlung like Herr Doktor Professor suggested in his roast of Putterman’s talk. Still a pretty wild effect. Also hilarious that 28 years later it remains fairly poorly understood despite so many eyes on it. We now know how to do this with substances other than water and argon. But actual understanding, critical points (there obviously are a couple), mathematical modeling: even how hot it is and what the hell is really going on are all mysteries. Seems like the type of thing which could be automated a little better; we have lots of real time computer doodads capable of helping make this effect a kind of off the shelf item rather than the Rube Goldberg contraption shown above. Maybe some future Kapitsa will figure it out for us.
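For anyone wanting to fool with the mathematical modeling: the standard starting point for the bubble dynamics (not the light emission, which is the mysterious part) is the Rayleigh-Plesset equation. Here is a minimal numerical sketch; the equilibrium radius and drive amplitude are textbook-ish single-bubble values I have assumed for illustration, not fitted to any experiment:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rayleigh-Plesset equation for bubble radius R(t) in an acoustic field:
#   rho*(R*R'' + 1.5*R'^2) = p_gas(R) - p_inf(t) - 2*sigma/R - 4*mu*R'/R
# Nondimensionalized with x = R/R0 and tau = f*t.
rho, sigma, mu = 998.0, 0.0725, 1.0e-3   # water: density, surface tension, viscosity
p0, gamma = 101325.0, 5.0 / 3.0          # ambient pressure; monatomic (argon) gas
R0, f, pa = 4.5e-6, 25e3, 1.3 * p0       # assumed radius, drive frequency, amplitude
pg0 = p0 + 2 * sigma / R0                # equilibrium gas pressure in the bubble

def rhs(tau, y):
    x, v = y
    p_gas = pg0 * x ** (-3 * gamma)                    # adiabatic compression
    p_inf = p0 - pa * np.sin(2 * np.pi * tau)          # acoustic forcing
    acc = (p_gas - p_inf - 2 * sigma / (R0 * x) - 4 * mu * f * v / x) \
          / (rho * R0**2 * f**2 * x) - 1.5 * v**2 / x
    return [v, acc]

# Integrate two acoustic cycles starting from the equilibrium radius.
sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], method="LSODA",
                rtol=1e-9, atol=1e-9, max_step=1e-3)
x = sol.y[0]
print(f"R_max/R0 = {x.max():.1f}, R_min/R0 = {x.min():.2f}")
```

Even this crude model reproduces the qualitative story: a big slow expansion during the rarefaction half-cycle followed by a violent inertial collapse to a small fraction of the equilibrium radius, which is where the extreme temperatures come from.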
Anomalies in the calculations of the anomalous magnetic moment of the electron
People regularly send me Alexander Unzicker videos. He’s a grouch and gadfly of similar timber. Unlike me Unzicker seems to have remained a professional physicist of some kind, or at least he seems to identify as such, which if you believe modern knuckleheads, should be good enough for him to be taken at face value. He’s written a book warning of physics gambling its credibility away. I haven’t read it, more or less because I don’t think physics has much credibility left, at least other than among the type of people who somehow manage to remain enthusiastic 1950s style Science Fiction fans who think the future is right around the corner. I think Unzicker has come to his conclusions fairly recently, though he appears to be a guy who has always tried to get to the bottom of things based on his publications. I am guessing he’s dismissed as an outsider by the physics community, but he appears to be a bright guy, and he certainly knows more field theory than I do. I haven’t seen more than a couple of his videos, but have enjoyed the ones I’ve seen. This one got me to read a couple of amusing papers:
You can watch it or not; it’s based on the work of a gentleman by the name of Oliver Consa. Consa’s paper is a quick read (quicker than watching the video, at least for me: I absolutely despise how much time podcasts waste –only the bapcast is worth the time), though it must have involved painstaking scholarship over the course of many months in a good physics library. It’s an important topic: that of the anomalous magnetic moment of the electron.
History lesson time: all larval physicists take a course on quantum mechanics and they solve the Schroedinger equation for the spectrum of the hydrogen atom: there are closed form solutions with spherical harmonics and Laguerre polynomials. The spin and angular momentum of the electron is sort of folded in space around the central potential of the proton of the nucleus, and you get answers which are more or less the energy levels of hydrogen. You basically can’t do this closed form trick with any other atoms without making all kinds of approximations (OK fine, you can kinda do lower levels of the helium spectra with higher dimensional spherical harmonics, treating the two electrons as one in a higher dimensional space), but it’s such a beautiful result, it makes for great indoctrination for young fizzy-cysts. We then use the basic ideas developed here to solve all kinds of other atomic and scattering problems.
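If you have forgotten, the punchline of that indoctrination fits in a few lines: E_n = -13.6 eV / n^2, and the spectral lines follow from differences of levels. A quick check, using the reduced-mass-corrected Rydberg energy for hydrogen:

```python
# Hydrogen energy levels E_n = -Ry / n^2; photon wavelength lambda = h*c / dE.
RY_H = 13.5984    # hydrogen Rydberg energy (reduced-mass corrected), eV
HC = 1239.842     # h*c, eV*nm

def line_nm(n_low: int, n_high: int) -> float:
    delta_e = RY_H * (1.0 / n_low**2 - 1.0 / n_high**2)   # photon energy, eV
    return HC / delta_e

print(f"Lyman-alpha (2->1): {line_nm(1, 2):.1f} nm")   # ~121.6 nm (UV)
print(f"H-alpha     (3->2): {line_nm(2, 3):.1f} nm")   # ~656.5 nm (red)
```

Those two numbers matching experiment to four digits from a one-line formula is exactly why this calculation is such effective propaganda for the young.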
Later in your coursework you slap on some relativistic corrections called “fine structure” they tell you come from the Dirac equation, which you may or may not eventually fool around with, but which everyone should. You chug along and make corrections to the spectrum of hydrogen bringing it in greater agreement with the observed values. At some point they mention something called the Lamb shift, which, since it is tiny, gets lumped in with the “hyperfine structure” (strictly speaking, hyperfine structure comes from coupling to the nuclear spin; the Lamb shift is its own thing). It is related to the anomalous magnetic moment of the electron mentioned in the title. You can see that it’s proportional to various spin orbit couplings, and so you more or less concentrate on those, perhaps getting a bit of inadvertent group theory on SU(2). They tell you that this further correction has something to do with the quantization of the electromagnetic field itself. You then go on to forget all this and do something else unless you’re an atomic guy (like me) or a particle guy.
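For scale: the leading QED correction to the electron’s magnetic moment is Schwinger’s famous one-loop result, a_e = alpha / (2*pi), one of the very few parts of this story you can actually check in one line:

```python
import math

# Schwinger's one-loop QED correction to the electron magnetic moment:
#   a_e = (g - 2) / 2 = alpha / (2 * pi)
alpha = 1.0 / 137.035999   # fine structure constant
a_e_one_loop = alpha / (2.0 * math.pi)
print(f"a_e at one loop: {a_e_one_loop:.8f}")
# The measured value is about 0.00115965; the higher-loop terms (the hard,
# historically error-prone part discussed below) account for the difference.
```

The one-loop term gets you to three significant figures; everything after that is the pages-and-pages-of-diagrams territory where the trouble described below lives.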

If you’re a particle guy, or an atom jockey getting above his station, you will eventually get to the dreaded course on quantum electrodynamics (QED): this is the so-called second quantization, where the fields themselves are quantized. The Dirac/Schroedinger equations are matter quantization -the fields remain classical and continuous. I won’t bore you with the hocus-pocus that goes on here, but if you make it through your course (Itzykson and Zuber for me -I assume some other text is now canonical since my $120-in-the-90s book is now $21 from Dover) you eventually run up against something that literally everyone knows is nonsense called “renormalization.” In renormalization you do some ridiculous thing where you subtract one infinity from another in a special way which just happens to produce the “correct” answer. When you come up against it, if you have any thumos, you will say “hey, this sure looks like bullshit; you’d give me a D on a test if I thought it up on my own.” If your professor has any soul, he will agree with you. You then both shrug and move on, assuming that people more intelligent than you have done the work and think it is OK. Nobel prizes have been handed out for this, which, after all, somehow gives the “right” answer. One of the things about this course is that any nontrivial calculation is astoundingly complicated; involving pages and pages of solutions of differential equations, Green’s functions, trace formulas and probably other things I’ve forgotten. You go on to do a couple of simple Feynman diagrams in the course, then either stop thinking about it (me) or continue with this baloney and become some kinda particle weirdo -the sort who should probably all neck themselves for the shame of it at this point, assuming they haven’t found gainful employment outside the particle physics communitay.
With that preamble in mind, and keeping in mind my coursework of almost 30 years ago is all I have to go on for this subject, I present to you this aforementioned work of Oliver Consa. You see, there is an old saw that QED is the most precisely verified theory in the history of the human race. This is, of course, bullshit for various reasons. One of the most important reasons is the one Consa covers; one I was dimly aware of -that the theorists have always had a hard time getting it right in their calculations, and often got it “right” in lock step with experimental errors; aka they got the same errors as the experimentalists did, meaning, more or less, they cheated at their homework. I knew this had happened; it also happened with the charge of the electron: Millikan reported an incorrect result and subsequent results were … closer to his erroneous result, and only crept towards the true value sequentially. People are chickenshit to publish stuff which contradicts established results. Also, people with experience notice theorists are often over-eager to get the “right” answer -my own thesis advisor had a theorist collaborator (famous dude) who borked up a paper “explaining” some experimental results in a similarly embarrassing way.
It turns out the theorists for this allegedly ultra-precise theory have gotten the right answer almost exactly never, at least according to Consa’s scholarship. This I hadn’t realized. It looked like a jolly game of whack-a-mole: the experimentalists would come up with a new number (after discovering the secretary left the toaster on while performing the experiment, or whatever), then the theorists would claim to match the number and tell some shaggy dog story about how someone else checked their unpublishably long work and “no, you can’t see it; maybe we’ll publish it one day.” Really: that’s how it happened. In fact, that’s happened any number of times, and for all I know is still happening. The analytic results we accept as true are partially unpublished, and the people who did the original calculation are long dead. The precision claim rings pretty hollow when it changes every couple of years based on what the latest experimental results are.

“Look how big my brain is!”
I mention the didactic indoctrination that all physicists take above for an important reason. Pretty much every living physicist goes through that: a sort of psychological initiation or indoctrination. It’s how we are formed into physicists. To be clear, you go through 5 years of this crap, culminating in the quantum field theory: the thing done by “the important people” in the physics department. Feynman did it! We all love Feynman! It’s all presented to you as beautiful, inevitable even. Lots of physics really is beautiful and looks inevitable in hindsight, but QED and the rest of the standard model certainly ain’t: it’s a disgusting trash fire of nerds painting over errors and sawing off things that don’t fit together. It is hard to deal with, and only people of a certain kind of intelligence are able to do so in all its gory details. Such people are more like medieval theologians than actual physicists with insight into, you know, physics: the science of matter.
If America were to fight a real war with flying saucers or the Chinese or whatever where we’d need to make actual progress, would we ask the “wicked smaht” field theorists or even the slightly practical experimental particle physics guys to help? I mean, assuming we wanted to actually win the thing. The particle dorks ain’t what they used to be: Feynman made real contributions to the War effort, even though he may have led us all down a useless rat hole with QED. I’m trying to imagine someone from current year field theory working on industrial production problems, or even coming up with something like neutron cross section heuristics using human calculators: having a hard time imagining it TBH. Working mechanical or electrical engineers: no problem. Working field theorists: absolutely not.
I think Consa gets the sociology right in the preamble: in the mid-late 1940s, American physicists were masters of the universe for having come up with nuclear weapons, the Loran navigation system, the proximity fuze and radar. Their prestige was unmatched and the money began to flow from the federal government in ways it never had before. At this moment in time, high energy physics became a racket: the field theorists were top of the pops for this. They had to come up with some kind of answers: it wasn’t possible to not know what the answer was -billions of dollars in research funding and thousands of jobs were at stake. The guys who won the Nobel for QED more or less had to be right to justify the business. They themselves figured they were just showing the outlines of something better that would come later on; something without all the renormalization baloney and 90 page long calculations. That later thing never came about, so we’re stuck with this Nobel prize winning turkey. I think Consa works on some alternative to QED, which is a noble and brave, and probably necessary thing to do. He’s not the only person to notice this stuff; a trivial google turns up other examples of rather important people noticing.
Even when I was in grad school in the 90s, it seemed like high energy experimental physics should have been taken out back and shot like Old Yeller. People working on it were usually bureaucrats tossed into a meat grinder: working preposterous hours and doing things not remotely recognizable as science for the payoff of getting your name on some ridiculous paper with 5000 “authors.” There were the fumes of a certain cachet to the field, but it was obvious back then only a fool would do this to himself as a career. It’s not clear to me how the field will eventually die, but die it should.
Even assuming it’s broadly correct, field theory itself is absurdly abstract and intellectually impotent, autistically going over the same Glasperlenspiel ideas of the last 75 years rather than attempting something new. This is a statement rather beyond my previous sneerings at noodle theory: the whole broad category of “field theorists” and high energy physicists appear to be morally and intellectually bankrupt. Even assuming the field theory I learned in school is correct in some sense, and all these calculational blunders are a kind of progress or were illusory somehow: it is a subject irrelevant to virtually all of the practically observable world, quite unlike quantum mechanics, special relativity and other 100 year old “modern” physics ideas which have numerous real world consequences we use daily. Second quantization is essentially irrelevant; some weird lines in the hydrogen atom -that’s it. The promise of physics is that sperging out on something like hyperfine structure in atomic spectra (or mesonic spectra or whatever) is going to teach us useful things about the rest of the world, possibly bringing some other kinds of benefit or at least a deeper understanding of matter and the universe. The archetypical unified field theory was the theory of electricity and magnetism, which brought preposterously huge increases in both human understanding and power over nature; almost immediately. Quantum field theory has done no such thing and seems unable to do such a thing. Unless it changes radically it will never do such a thing. At best it’s a glorified IQ test; though a double humped one -people who continue to do it after graduation certainly seem to exhibit a certain kind of stupidity.

I think a lot of the problem with this sort of thing is … the process described above. By the time you’ve begun to master something like field theory, even at a low Itzykson-Zuber level like I did, you’re subsumed in sunk cost. Doing these calculations is hard; even solving simple Feynman diagrams makes you feel pretty damn clever: you’re just like the big boys of history. The problem is, what if those big boys were dead wrong? There’s plenty of historical precedent for large groups of intellectual workers wandering off on intellectual branches that don’t make any sense due to incorrect abstractions. Kabbalah, numerology, Marxist economics, Prolog constraint solvers, astrology, alchemy, the fountain of youth, Atlantis: these sorts of clownish bullshit are the norm for human beings, and actual physics done by men of power like Poincare is the exceptional weird thing that sometimes, apparently fairly rarely, happens. Countless fortunes and the lives of many thousands of very talented people were wasted on, say, alchemy. It is becoming clear that the same can be said for various forms of quantum field theory and the “standard model” as an intellectual enterprise.
I mean, OK, I get it: I’m one of the first people to make such an assertion in a strong way, and I’m just some guy. It seems pretty crazy, with all those smart people “study hard differential equation,” that none of the high profile ones could also come to this conclusion, though Unzicker and Consa (and Hossenfelder, I think) certainly seem to be coming around to it. I could be wrong! Surely all this looks very suspicious. I know there are a few former field theorist types in my readership: feel free to tell me this is bullshit and Consa’s scholarship is wrong.
Putting aside these specifics, there’s the fact that all of these QED measurements boil down to measuring the fine structure constant (a number which would be there without QED, and which after all predates QED by a couple of decades): it’s something that turns up naturally in the quantum mechanics of electric-field-related things. If I were still in the game I wouldn’t look for particle soup “tests” of QED: those people are involved in a bureaucratic folie à deux with theorist nerds. A more potentially precise measurement, to say nothing of a more potentially relevant measurement, might be something like the Casimir effect, or some other solid state test of QED. There are a few of these tests out there for various forms of the Casimir effect. If they’re not looking for an anomaly they won’t find one, but that’s where I’d look. This also has the benefit of bringing ideas from QED into the relevant physical world of matter. Theoretically you should be able to do stuff with MEMS or other forms of lithography. Heck, you could probably also do something with very macroscopic objects like Fabry-Perot cavities, Wheatstone bridges and cantilevers. I’m not going to think about this for long enough to make concrete suggestions or try to build something in my machine shop. For one thing, I have better things to do. For another, there are people out there who make a living at this shit, and it’s what they should be doing if they weren’t chicken-hearted poltroons or unimaginative bureaucratic goblins. Further investigation of Casimir is even a relatively low-risk career move: you’re going to get something out of it by pulling 2nd quantization into the macroscopic world. Maybe some adventurous person outside the degenerate welfare-queen anglosphere will figure it out.
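For a sense of why MEMS-scale gaps are the sweet spot: the ideal parallel-plate Casimir pressure is a one-liner, P = pi^2 * hbar * c / (240 * d^4). (Real experiments typically use sphere-plate geometry and need finite-conductivity and roughness corrections; this is just the textbook ideal case.)

```python
import math

HBAR, C = 1.054571817e-34, 2.99792458e8   # SI units

def casimir_pressure_pa(gap_m: float) -> float:
    """Ideal attractive pressure between perfect parallel conducting plates."""
    return math.pi**2 * HBAR * C / (240.0 * gap_m**4)

for gap_nm in (1000, 100, 10):
    print(f"gap = {gap_nm:5d} nm -> {casimir_pressure_pa(gap_nm * 1e-9):.3g} Pa")
```

At a micron the pressure is a measly millipascal; at 100 nm it is around 13 Pa, and the steep 1/d^4 scaling is why lithography-scale devices can turn second quantization into a measurable mechanical force.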
Edit add (Mar 14 2023): this is a really cool recent measurement, not at all done in a particle physics way. No word on whether or not the theorists are adding digits: https://arxiv.org/abs/2209.13084
Humble tokamak physicist owns generations of cosmological wankers
It’s not often I get excited about papers from the physics community. The field I used to love has turned into a dreary ghetto of noodle-theory wankers, experimental particle physics bureaucrats, cosmological mountebanks, “phenomenologists” and quantum computing charlatans. But I’m excited about this paper:
https://link.springer.com/article/10.1140%2Fepjc%2Fs10052-021-08967-3
The author, Gerson Otto Ludwig, is a lifelong plasma physicist from Brazil; a noble profession, even if controlled nuclear fusion is unlikely as a near-future energy source. Plasma physics is a fiendishly difficult field; it is mathematically hard, and unlike the more “woo” grandiose kinds of physics hiding behind formalism, your ideas are generally testable by experiment. Maybe some cosmological wanker pissed him off and he said “segure minha cerveja” (“hold my beer”). Maybe he just noticed something from fooling around with magnetohydrodynamic models all day. But if he’s right, he’s basically written the most dramatic single-paper own of the physics and astronomy community, like, ever. Assuming this paper is correct, it is a literal extinction event for thousands of wankers; a fiery asteroid across the sky, with a bunch of cud-chewing cosmological dinosaurs staring at it in dumb disbelief.
One of the things cosmologists, noodle theorists and astronomers worry a lot about is “dark matter.” When you look out in space at rotating galaxies, they appear to contain more mass than we can actually see; even weirder, the mass appears not to be concentrated in the bright centroid for some reason. Something is making those galaxies stick together and rotate in funny ways, and we can’t see it. If you do a physics major and your professor isn’t incompetent, they’ll probably make you work through an example of this. I remember doing so, thinking “huh, that’s pretty weird,” then proceeding to attempt a career on objects about 10^68 times smaller than a galaxy. I had always assumed that someone had worked through the General Relativity version of this calculation in detail, or at least given a reason why GR doesn’t apply. But I guess nobody did. There’s a larger issue here: why do galaxies look that way at all? You can mumble a bit about angular momentum and so on, but it is kind of peculiar that there are so many things out there that look like this. When you read books on galactic dynamics, there will always be a chapter wondering why galaxies are spirals; lots of hand-wavey theories are given, but it’s pretty obvious nobody has a good idea.
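The homework version of the anomaly is one line of Newtonian mechanics: set the centripetal acceleration of a star on a circular orbit equal to the pull of the mass $M(r)$ interior to its orbit:

```latex
% Circular orbit at radius r around enclosed mass M(r):
\frac{v^{2}}{r} \;=\; \frac{G\,M(r)}{r^{2}}
\qquad\Longrightarrow\qquad
v(r) \;=\; \sqrt{\frac{G\,M(r)}{r}}
```

Observed rotation curves stay roughly flat at large $r$ instead of falling off as $v \propto 1/\sqrt{r}$ outside the luminous disk, which in this Newtonian picture forces $M(r) \propto r$: mass keeps piling up out where almost no light is. That, in a nutshell, is the dark matter inference.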

I never studied GR, though I had the opportunity to do so with the great Ezra Newman and Carlo Rovelli. Skipping that for a dumb quantum optics course, or whatever my excuse was, was an error. Still, one picks up a smattering of these things. There are analogies to the classical Maxwell equations in GR. It’s obvious there must be a component that works like electrostatics, since Newtonian gravity looks exactly like Coulomb’s law with different constants and mass substituting for electrical charge. What isn’t obvious is that there is also a gravitomagnetic term, which looks like Ampère’s law relating the flow of charge to the magnetic field. So there is a sort of gravitational analog to the magnetic field that appears when masses flow; it’s an old idea, and people think it has something to do with quasars.
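For the curious: in the weak-field, slow-motion limit, the Einstein equations can be rearranged into a Maxwell-like “gravitoelectromagnetic” (GEM) form. Sign and factor conventions vary between authors (the spin-2 character of gravity scatters factors of 4 around depending on how the gravitomagnetic field is defined), but one common form looks like this:

```latex
% GEM equations, weak-field slow-motion limit (one common convention);
% E_g is the gravitoelectric (Newtonian) field, B_g the gravitomagnetic
% field, and j = rho v is the mass current density:
\nabla \cdot \mathbf{E}_g = -4\pi G \rho
\qquad
\nabla \cdot \mathbf{B}_g = 0
```
```latex
\nabla \times \mathbf{E}_g = -\frac{\partial \mathbf{B}_g}{\partial t}
\qquad
\nabla \times \mathbf{B}_g = -\frac{4\pi G}{c^{2}}\,\mathbf{j}
  + \frac{1}{c^{2}}\frac{\partial \mathbf{E}_g}{\partial t}
```

The point is the Ampère-like term: a mass current (say, a rotating disk of stars and gas) sources a $\mathbf{B}_g$ just as an electric current sources a magnetic field, and that field in turn exerts a Lorentz-like force on other moving masses.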

You can see where this is going: plasma physicists think about Lorentz forces on gasses of charged particles all goddamned day. Professor Ludwig related all this gravitic stuff to some equations from magnetohydrodynamics, ran the numbers, and realized the weird dark matter forces are probably a consequence of the geometry of spacetime. Theoretically, any ambitious grad student of the last 50 years could have thought of it. I never studied magnetohydrodynamics myself, or GR, but if I were sitting around thinking about why galaxies look weird, or about dork matter, and I knew there was such a thing as gravitomagnetic effects, I might be… slightly curious about what plasma physicists had come up with. It’s not like the z-pinch effect or tokamaks are particularly secret ideas; tokamaks at least have been bellowed about for decades.
Anyway, unless I’m missing something big here, it’s all straightforward stuff; a workmanlike piece of physics scholarship, and it seems to give the right answer (I haven’t checked). If he’s right, it’s going to make lots of people real mad, then sad about their wasted lives. The type of people who deserve a comeuppance. I will be unspeakably happy if this is true and the man wins the Nobel for it, making fools of a field full of fools.
