Locklin on science

Conditional probability: an educational defect in Physics didactics

Posted in physics by Scott Locklin on January 16, 2026

Conditional probability is something physicists have a hard time with. There are a number of reasons I know this is true. Primarily I know it is true from my own experience: I had a high-middling to excellent didactic experience in physics, and was basically never exposed to the idea. When I got out into the “real world” of, say, calculating probable ad impressions, this concept became of towering importance. It took me a while to grasp it, and I still occasionally struggle with the idea, but it’s actually pretty simple.

What is the probability a man is over 6′ tall? Well, in the US, you look at the normal distribution and find it’s about 14%. If you know both his parents are 6′ tall, the number is higher. If both his parents are 5′ tall, the number is lower. That’s a practical example of conditional probability. Making it super concrete, imagine you have a deck of cards. Probability of drawing an ace is 4/52. Probability of drawing an ace if (conditionally) 10 cards have been drawn with no aces is 4/42. Probability of drawing an ace if you pulled 10 cards and two of them are aces (conditionally) is 2/42.  You can do it with urns or dice or whatever; make yourself happy with your favorite example.
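
If you don’t trust the arithmetic, the whole example fits in a few lines. A minimal Python sketch of the deck example above (nothing here beyond the numbers already stated):

```python
from fractions import Fraction

# Unconditional: 4 aces among 52 cards
p_ace = Fraction(4, 52)              # 1/13, about 7.7%

# Conditional on 10 cards drawn, none of them aces:
# all 4 aces remain among the 42 cards left
p_no_aces_drawn = Fraction(4, 42)    # 2/21, about 9.5%

# Conditional on 10 cards drawn, 2 of them aces:
# only 2 aces remain among the 42 cards left
p_two_aces_drawn = Fraction(2, 42)   # 1/21, about 4.8%

print(p_ace, p_no_aces_drawn, p_two_aces_drawn)
```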

Statistical mechanics seems like the place you should learn such things in physics, since we have no independent probability theory classes. I looked in Reif and Ma, the two books I learned statistical mechanics from. Reif doesn’t have the concept in the index, though he does use conditional probability where he treats Markoff processes and Fokker-Planck. Ma only mentions it to argue that he doesn’t need it to teach statistical mechanics (later bringing it back in various places in a sort of ad-hoc way: I shouldn’t have slept in so much in that class). Ma even manages to avoid mentioning conditional probability in his treatment of Fokker-Planck, a considerable intellectual achievement for a set of equations for calculating a conditional probability. As such, most physicists end up thinking of probabilities as funny sorts of ratios that must add up to one, which is right for a lot of cases in physics, but which is not correct in the general sense. Most of the classical statistical physics done with canonical ensembles (aka most of it) assumes we can ignore conditional probability. Stuff like non-equilibrium thermodynamics is going to contain a lot of conditional probability, since it is dynamic and one-way in the same sense as the above card game. Our one example of a non-equilibrium thermodynamic relation which rises to the level of a law, the Onsager relations, certainly uses conditional probability, though Onsager himself never mentions it explicitly. The fact that he never uses the words, nor are they used in didactic explanations, probably keeps physicists from having a good think about the implications of conditional probability in this and other places. Out of sight, out of mind.
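
For the record, the thing Ma declines to name is right there in the notation: the one-dimensional Fokker-Planck equation is written directly for the transition (i.e. conditional) probability density. In the usual convention, with drift A(x) and diffusion B(x):

$$\frac{\partial}{\partial t} P(x,t \mid x_0,t_0) = -\frac{\partial}{\partial x}\Big[A(x)\,P(x,t \mid x_0,t_0)\Big] + \frac{1}{2}\frac{\partial^2}{\partial x^2}\Big[B(x)\,P(x,t \mid x_0,t_0)\Big]$$

The vertical bar in P(x,t | x₀,t₀) is conditional probability; you can’t even write the equation down without it.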

There are more pedestrian examples of physicists missing out on conditional probability; I’ll list a few below:

Jung/Pauli synchronicity. When I was a young pot smoking man, I read with great interest a book on the correspondence between Jung and Wolfgang Pauli on the subject of synchronicity. If you’re unfamiliar with the topic, the following clip from Repo Man explains it well; lots of weird coincidences happen, and our brains ascribe meaning to them. Feels a lot like psychic powers or something. The reality is, the otherwise incredibly meticulous Pauli didn’t know enough about conditional probability, even to the level of understanding the trivial Birthday Paradox. It’s all conditional probability: it’s only surprising because our brains don’t intuitively grasp how conditional probability works. The brain observes many things in a short period of time; if some of them happen to overlap in a conditional way over a human consciousness tier period of time (minutes, hours, a day or two), the brain flags it as significant, even when it’s entirely expected, like a group of 23 people being 50% likely to have a shared birthday. Pauli was a lot smarter than me; arguably smarter than any living current year physicist whose name isn’t Roger Penrose, yet he missed this obvious thing. Probably because his life was a mess and he was drinking too much, but also because he was probably never exposed to the idea in school or anyplace else.
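
The Birthday Paradox arithmetic, if you don’t believe it, is an exact product; a quick Python check (assuming 365 equally likely birthdays):

```python
import math

# P(at least one shared birthday among n people)
# = 1 - P(all n birthdays are distinct)
def p_shared(n, days=365):
    p_distinct = math.prod((days - k) / days for k in range(n))
    return 1.0 - p_distinct

print(p_shared(23))   # ~0.507: better than even odds at 23 people
```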

The Fermi Paradox is a case where a Nobel prize winning physicist kind of left important conditional probability aspects out of a model. As we all know, it is a calculation of the odds of there being other forms of intelligent life in the universe, based on approximated probabilities. The Drake Equation lists the number of stars in the galaxy, the approximate probability of a planet in the habitable zone, the age of solar systems, the probability of life, intelligent life, civilizations, civilizations with space travel, etc. In the end everything is summed up by multiplying all the numbers together, and the conclusion is that there must be intelligent life which we should be able to observe or which should have visited us, or there are hidden and depressing dangers which wiped out all these space faring alien cultures. If you look carefully at what he did, you might notice he didn’t use any conditional probability, and probably elided some important ones. For example, most species go extinct in a way that fits a standard survival model; there’s no reason to think intelligent ones have any special advantages, and lots of reasons to think any sort of intelligent megafauna is going to be at least as likely as any other species of megafauna to go extinct over time. This is just one of the conditional probability factors at work here. Though maybe Earths are just rare, or intelligent life is unlikely in conditions where it might discover electricity (aka aquatic life). Conditional probability isn’t necessarily the right tool here for a quick look at orders of magnitude, but it is conspicuous for its continued absence in a calculation which heavily implies it might be useful.
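
To make the complaint concrete, here is a sketch of the Drake-style bare product next to the conditional piece it elides. All numbers are placeholders I made up for illustration, not anyone’s considered estimates; the point is the structure, not the values:

```python
import math

# Drake-style bare product (all numbers are invented placeholders)
n_stars        = 1e11   # stars in the galaxy
f_planet       = 0.5    # habitable-zone planet, given a star
f_life         = 0.1    # life, given a habitable planet
f_intelligent  = 0.01   # intelligence, given life
f_civilization = 0.1    # technological civilization, given intelligence

n_civs_ever = n_stars * f_planet * f_life * f_intelligent * f_civilization

# The conditional piece: P(still around when we look | arose t years ago),
# here a crude exponential survival model with mean lifetime L
L = 1e4   # mean civilization lifetime in years (placeholder)
t = 1e9   # typical time since emergence in years (placeholder)
p_still_around = math.exp(-t / L)   # effectively zero when t >> L

print(n_civs_ever, n_civs_ever * p_still_around)
```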

The thermodynamic arrow of time. The arrow of time is considered a root problem in physics. In microscopic classical physics, there is no obvious arrow of time: the equations work the same way backwards as forwards. Yet you can assemble the microscopic equations into large ensembles and get the very irreversible laws of thermodynamics. Watanabe wrote an important paper on this subject in 1965, where he noticed that we leave out the conditional probabilities when formulating the statistical mechanical ensembles we use to calculate things and derive the thermodynamic relations which make things like steam engines possible. Watanabe’s paper is influential with people with good taste, but has mostly been ignored. Certainly ignored in didactics, and often disputed for reasons which remain obscure to me. Rovelli and friends, for example, think it’s a bad argument for various fiddly reasons which make no sense to me, but the idea of using conditional probability to ascertain where the arrow of time comes from seems obvious. Of course I don’t know how to do it; I’m a mere statistical dabbler. Physicists resist this with all their might; you can find otherwise obviously intelligent people saying, effectively, “it just isn’t, OK.”

My favorite potential example of this is E.T. Jaynes’ idea that the mysteries of quantum entanglement go away when you think about conditional probability. I like this one a lot. Mostly because it dispenses with all the psychic powers quantum mysticism that has sprung up around the ideas of quantum mechanics. Also because it dispenses with quantum computers, which are both obviously fake and retarded. But mostly because Jaynes is the patron saint of physicists who make the jump to data science, and so was uniquely qualified to bring this sort of thing up. Data science people have to know all about conditional probability: that’s pretty much what they’re doing, all day, every day. If nothing else, the fact that the main engagement with this idea in the literature ends up agreeing with it, rather than deboonking it, kind of indicates that the conditional probability is weak among physicists. That’s not to say Jaynes was right, but the lack of informed argument against him indicates a weakness in the topic of conditional probability. If the ideas of Jaynes turn out to be true (I’m in no position to adjudicate), this example will be held up by some future Thomas Kuhn type of thinker as a spectacular example of a field of very smart people deluding themselves through didactic deficiencies, mathematical ignorance and group-think. As Mencken put it:

The liberation of the human mind has never been furthered by such learned (pedant) dunderheads; it has been furthered by gay fellows who heaved dead cats into sanctuaries and then went roistering down the highways of the world, proving to all men that doubt, after all, was safe – that the god in the sanctuary was finite in his power and hence a fraud. One horse-laugh is worth ten thousand syllogisms. It is not only more effective; it is also vastly more intelligent.

As an aside, I found another contemporary researcher who seems to take the conditional probability approach to getting rid of quantum woo. I haven’t read his papers in detail, but they seem to be thoughts along the same lines as Kracklauer and others mentioned in the previous article. It’s entirely possible that entanglement is exactly what Scott Aaronson thinks it is, but considering that its one application thus far has only been useful for pumping up fraudulent penny stocks, and considering the above, it wouldn’t surprise me if the big wrinkly brains got this one wrong.

I suppose statisticians also have a hard time with conditional probability, with Simpson’s “paradox” being a prime example and Berkson’s paradox a less known one. Contemporary statistical practitioners aren’t supposed to be deep thinkers though, so they get a pass.
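
For the unfamiliar, Simpson’s “paradox” is just conditioning done wrong. The classic kidney-stone numbers (Charig et al., as usually quoted) give a treatment that wins in every subgroup and loses in the aggregate, because stone size confounds the comparison:

```python
# Classic kidney-stone numbers (Charig et al., as usually quoted):
# treatment A wins within each stone-size stratum, loses in aggregate.
A = {"small": (81, 87),   "large": (192, 263)}   # (successes, cases)
B = {"small": (234, 270), "large": (55, 80)}

for stratum in ("small", "large"):
    sa, na = A[stratum]
    sb, nb = B[stratum]
    print(stratum, f"A: {sa/na:.0%}", f"B: {sb/nb:.0%}")   # A wins both

ta = sum(s for s, n in A.values()) / sum(n for s, n in A.values())
tb = sum(s for s, n in B.values()) / sum(n for s, n in B.values())
print("overall", f"A: {ta:.0%}", f"B: {tb:.0%}")           # B "wins"
```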

Wacky fun physics ideas

Posted in physics by Scott Locklin on November 22, 2025

My reading lately has ventured into weird physics papers. Mainstream physics (unlike machine learning and classical statistics, where real progress has been made) is booooring these days. There’s no point in reading another “shittonium on silicon 111” paper, nor am I interested in stupid big budget projects where people always get the expected answer, nor is there any return in reading yet another “confirmation of the standard model” phenomenology gobbledygook. Black holes, dork matter; anything that can’t be observed is uninteresting and generally just gabbling on about our ignorance of how things actually work. I don’t care for any of the PR touted baloney unified field theories by gentleman surfers or purple hair quaternion enthusiasts, which are about quirky personalities rather than quality of ideas. I’m also sick to death of anyone paying any attention to Avi Loeb, who sees flying saucers in every piece of space junk flying through the solar system. Anyone who hires a PR firm for his “results” is a fraud. Anyone whom the media likes for other reasons is likely also a fraud.

I do like weird science though. Stuff that makes you think, “hey, what would happen if the universe were this way?” It might not be right, it might even be obviously retarded to people who work in the field, but considering that the post-1945 order is ending, we’d expect that to show up in physics as well, just as it did after 1918. When the great upheavals happen in human history, previous certainties become less certain and things start to move in the arts and sciences. These are all theoretical noodlings, but at least they take the time to have an interesting thought. Anyway, in no particular order, here are some WEIRD SCIENCE papers and a few notes about each.

Leptons might not generate gravity. I mean, they might, they might not. Leptons (aka mostly electrons) are weird because they’re like 2000x lighter than everything else. Leptons obviously have inertial mass. I’m pretty sure someone has shown by now that they are subject to gravity as well, but I never looked for evidence of this. Probably this is part of particle accelerator physics: electrons should drop as quickly as anything else in the direction of the earth’s core. Whether or not they generate gravity has definitely not been measured to within the accuracy we’d need to know for sure. This is in principle a knowable thing. The Kreuzer experiment was an early attempt to compare the gravitational and inertial mass of two different kinds of normal matter. This paper gives a decent argument that such an experiment couldn’t distinguish the case where leptons or binding energy don’t generate gravity, as General Relativity says they must. Maybe it’s just a wacky idea, but it’s an interesting wacky idea that should have physicists getting out their torsion balances.

Gravity is an entropic result of matrix mechanics. I don’t fully understand this one, as it involves some references to noodle theory, but it’s fun to pretend gravity is entropic and see what happens. In this case, the author defines a fast and a slow timescale in the matrix mechanics. Gravity is experienced by the slow timescale matrix elements through complex entanglement interactions with the fast timescale matrix elements (which act like a heat bath). It’s a little annoying he’s using noodle theory, as that’s where the gravitational constants come from, making this a sort of self-licking ice cream cone. Why do we need noodle theory if gravity is entropic? There are a couple of other papers in the same genre; I didn’t pick this one for any particular strengths, it’s just the first one I came across. I’ve mentioned that I find the whole entropic gravity idea interesting, though there are no really compelling papers on it. The main argument for it is the fact that most of the everyday forces we encounter in life are thermodynamic in origin; why not gravity also? Hand wavy argument for sure, but many great things, such as special relativity, originated from humble reasoning.

Is the electron a photon with toroidal topology? It seems like this must be false, since the electron has charge and all, but it’s an interesting idea, and it has an excellent answer to this objection. The present theory of point-like electrons is pretty weird if you stop to think about it: points imply singularities. I mean, the electron has charge, spin and a magnetic dipole moment: how do you get that from a point? This is a very clever paper: wrap the electromagnetic field of a photon around a torus and you naturally get a lot of interesting properties of the electron back, including spin-1/2 properties, a net magnetic dipole moment and an electric charge, effectively from the orientation of the photon’s electric field around the torus. The precession of the field around the torus makes it look like a sphere on most reasonable time scales, which looks like what we think of as an electron. It even gets the charge of the electron right using simple arguments. Some dopes say it’s not quite right, then happily go back to QED, which also doesn’t get the charge of the electron right, and misses by a significantly larger margin. The crazy thing is that the Compton wavelength of the electron also falls out naturally: basically you get quantum mechanics for free, and the mechanism is demonstrated, rather than the normal state of affairs, which is to simply accept that there’s a wavelength. This has to do with the Doppler shift of the photon’s momentum on the torus. This is a really elegant and cool idea. There are about 130 citations so far. I came across this one in a Huygens Optics video presentation on the paper, which is worth a look, though the paper is quite clear also. It’s an old idea; the Dirac equation has a funny oscillation (the Zitterbewegung) in it. Here’s a nice review paper of this group of ideas, which says nice things about this paper specifically, but points out that it doesn’t explain how the photon got all twisted up in a doughnut in the first place. Though they kind of suggest smashing a couple of high energy photons together in a special way might do it; pair production, basically, though the toroidal symmetry implies something a little more detailed (probably involving circular polarization). This is my favorite of this small collection of weird science papers: the type of thing that could grow into something which sweeps away a lot of the mystical nonsense accreted over the last century. Some of the bits of it seem arbitrary, but nowhere near as arbitrary as the standard model, and it has satisfying mechanistic explanations without any mysticism. All you need is electromagnetic waves, special relativity and the Doppler shift. Oh yeah, and a torus.
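
As a sanity check on the scales involved (my arithmetic, not the paper’s; its actual calculation is more detailed): a photon circulating at the electron’s reduced Compton wavelength carries exactly the electron’s rest energy, which is the self-consistency this class of models trades on:

```python
# Scale check (not the paper's calculation): a photon confined at the
# electron's reduced Compton wavelength carries the electron rest energy.
hbar = 1.054571817e-34    # J*s
m_e  = 9.1093837015e-31   # kg
c    = 2.99792458e8       # m/s

lambda_bar = hbar / (m_e * c)       # reduced Compton wavelength, ~3.86e-13 m
E_photon   = hbar * c / lambda_bar  # photon energy at that length scale
E_rest     = m_e * c**2             # electron rest energy, ~511 keV

print(lambda_bar, E_photon / E_rest)   # the ratio is 1 by construction
```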

I guess this one is not physics, but astrobiology is weird enough to count. Astrobiology was housed in the same building as the physics department at Pitt after all, near the guy who used rockets to learn about the atmosphere. Imagine if there were very different kinds of life, say, something based on arsenic or silicon. Unless the organisms were big, how would we detect them? I’ve mentioned before that new forms of normal life are discovered all the time in places like mud; if a form of life were weird enough we might not be able to detect it at all. The concept has been explored a little; for example, in the paper Signatures of a Shadow Biosphere, they speculate that “desert varnish” may be such a thing. Desert varnish looks weird and is enriched in arsenic and manganese. Maybe it is the result of a form of life with a radically different metabolism involving arsenic and manganese. Others say no, but of course others say no; it’s worth investigating further. Less exotic: we could look for L-sugars (amino acids have more inherent racemic qualities).

Variable beta decay

Posted in physics anomalies by Scott Locklin on June 20, 2025

Trigger warning: PDF link heavy shitpoast.

One of the weird and potentially breakthrough things in physics is variable beta decay. Beta decay, essentially, is a free neutron turning into a proton, an electron and an antineutrino. The Feynman diagram looks like this:
The W boson (W⁻ in this case) is the “weak field” mediating boson. In electroweak theory, you calculate the cross section of the reaction in this way, adding up vertices in the Feynman diagram just so. The neutrino is key here: all beta decays involve one, along with some kind of lepton (usually an electron or “beta ray”). Many of the slower half-life nuclear decays are essentially beta decay. Too many neutrons sticking out of a sloppy nucleus end up looking like independent neutrons, more or less.

The opposite can happen, with an anti-electron/positron, when the nucleus is in a lower energy state after the decay. Also, a neutrino can boink into a proton or neutron and make an extra electron or positron, changing the proton to a neutron or vice versa, at least if it has enough energy. The weak force: it involves leptons, baryons and neutrinos.

One of the things people occasionally notice is what appears to be periodicity in the half-lives of beta-decay radioactive isotopes. A lot of this is probably bad science: Geiger counters are bad tools for this job. They have variable baseline count rates. They are sensitive to air pressure and humidity (since the detected electron has to travel from the radioactive stuff to the detector window). Since they’re basically avalanche tubes, they are sensitive to electrical charge and magnetic effects, including power supply variability. Worse, they have variable sensitivity at different frequencies due to the capacitance of the humble cable connecting the detector to the electronics box. For some reason a lot of the experiments involve Geiger counters, and a lot of kvetching about taking out these variabilities; the criticisms of them mostly talk about Geiger counter problems. I guess this happens because a lot of people have spare Geiger counters sitting around not doing anything (I have one FWIIW, though it’s unsuited for beta detection). This makes it easy to wire one up to a recording device and stick it in a box with some random beta-decay element. Maybe you get fancy and put it in an argon sealed, temperature controlled, lead encased (to remove some of the background) Faraday cage to try to remove some of these problems.

In my opinion these sorts of experiments would be a lot more impressive if they used ordinary wristwatch tritium tubes and a photodiode, and found some kind of correlation between those results and the Geiger counter results. This removes some subtle experimental crap you might not have thought of, like the fact that radon gas is periodically emitted in your basement, or that cosmic rays, which you have no way of distinguishing from a beta decay, have various storm-like variations. The tritium tubes are well calibrated for stuff like degradation of the phosphorescent material, escape of the tritium and other such factors. Tritium has a short half-life (~12y), and so a strong signal in which periodicities should show up well if something were going on. A correlation between two kinds of devices, presumably held in different places, would be ideal. Different buildings would be ideal for short term (time of day) periodicities; different continents for longer term (day of year) ones.
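
To show what the cross-detector correlation buys you, here’s a toy simulation; every number in it is invented for illustration (two hypothetical detectors with different baseline rates, a common 0.3% annual modulation roughly the size of the claimed effects, independent Poisson noise):

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(3 * 365)                  # three years of daily bins

# Common annual modulation (0.3% amplitude, invented) seen by two
# detectors with different baseline rates and independent Poisson noise
modulation = 1 + 3e-3 * np.sin(2 * np.pi * days / 365.25)
geiger  = rng.poisson(5e4 * modulation)    # counts/day, detector 1
tritium = rng.poisson(2e5 * modulation)    # counts/day, detector 2

# Correlate the fractional residuals: a shared physical modulation shows
# up as nonzero correlation; detector-specific drift mostly does not.
r = np.corrcoef(geiger / geiger.mean() - 1,
                tritium / tritium.mean() - 1)[0, 1]
print(f"correlation: {r:.2f}")             # ~0.3 here, ~0 for pure noise
```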

There are other techniques which could potentially be brought to bear as well: scintillation detectors can be used to detect gammas from the daughter particles of a beta decay. These use photomultiplier tubes, which have their own problems similar to those of Geiger counters, but a lot of labs have them kicking around, and their performance characteristics will drift differently under most kinds of environmental change than a Geiger counter’s will.

The main hypothesis discussed is that observed periodicities involving time of day or week of year involve neutrinos somehow, which is a pretty good guess, since we know there are lots of neutrinos shooting out from the Sun, from space (the galactic center perhaps) and so on. This is a big old guess, of course. People have postulated all kinds of wacky stuff: variability of the fine structure constant, hidden variable quantum mechanics, new cosmological factors. I can imagine other wacky ideas: there’s formalism developed for extremely intense magnetic fields; imagine if the scalar wave wackos are right and longitudinal electromagnetic waves are a thing.

There are interesting speculations (and measurements) that stuff like chemical binding potentials, or even pressure, can vary beta-decay rates. These sorts of experiments have been going on for a long time; for example, this 1977 paper talks about physical and chemical decay-rate studies dating back to 1947. Also this Soviet paper about experiments dating back to the 1950s. People stick Be-7 in fullerenes and observe different decay rates. They have even noticed different decay rates in this system at different temperatures. You can imagine why these things might affect beta-decay rates; different kinds of chemicals and pressures put electrons closer to or farther from the nucleus. These sorts of things mostly fit within conventional beta-decay theory, but they should be examined more carefully, as all kinds of nuclear chemistry depends on the assumption that beta-decay rates are invariant.

All this is fun, but it also all suffers from both statistical problems and systematic error problems. Favorite isotopes used are common ones which often have inconveniently long half-lives. Tritium is something like 12 years. Others use Caesium-137, which is about 30 years. Be-7 is better in that its half-life is around 53 days. Some of the observations have been random weird stuff where you happen to have a lot of very short lived isotopes around. Favorite detectors are apparently whatever you have kicking around the house. Direct beta decay measurements with, say, a Geiger counter are fraught for the reasons stated above, plus variability in mica-window transmission and all kinds of other things. All the different detectors have different tradeoffs.

Statistically speaking, measuring a half-life amounts to an L2 (least squares) likelihood fit on log counts, which has all the problems linear regression fans know about. Usually they look for periodicities using Lomb-Scargle periodograms, which have less well understood problems. As far as I know, none of these datasets have been analyzed with conformal techniques, which could look in detail at the actual observed error bounds.
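
Concretely, the usual pipeline is the two steps below: a least squares fit on log counts for the half-life, then a Lomb-Scargle periodogram of the residuals. A sketch with simulated Be-7-ish data (scipy’s lombscargle takes angular frequencies):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.arange(365.0)                       # one year of daily counts
half_life = 53.0                           # days, Be-7-ish
counts = rng.poisson(1e5 * np.exp(-np.log(2) * t / half_life))

# Half-life via least squares on log counts (the "log L2" fit):
# log N(t) = log N0 - (ln 2 / T_half) * t
coef = np.polyfit(t, np.log(counts), 1)
print("fitted half-life (days):", -np.log(2) / coef[0])

# Lomb-Scargle periodogram of the fractional residuals, scanning
# periods of 10..200 days (lombscargle wants angular frequencies)
resid = counts / np.exp(np.polyval(coef, t)) - 1
periods = np.linspace(10, 200, 400)
power = lombscargle(t, resid, 2 * np.pi / periods)
print("peak period (days):", periods[np.argmax(power)])   # noise here
```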

I could imagine something like finding a cheap way of making lots of shorter half-life stuff and measuring its decays at different times of year. Maybe phosphorus-32 in a liquid scintillator; half-life of about 14 days. You can make it by sticking sulphur in a neutron howitzer. I’ve never done this, so I don’t know what other shit the neutrons will make, but it appears to be a common industrial isotope, so someone’s presumably worked this out. Tritium could be a cheap thing too, coupled with low-noise photodetectors. Such systems could be tuned and used as standards for longer term variations in beta decay. This is the sort of thing well within the means of any physics department and most “gentleman scientists,” and distributed data could be collected with time-clock standards using a blockchain kind of thing. If the effect is real it must happen all over the place.

Probably these are just sloppy experiments with apparatus ill suited to these sorts of longer term periodicities, and/or bad statistics. Nonetheless, it’s a weird enough thing that people should be more curious about it. We build giant neutrino detectors and kvetch a lot about this sort of physics; doing some small science, perhaps in the decentralized way suggested above, seems very much worth doing. Unfortunately the scientific community prefers empire building to curiosity driven experiments like this, so I can’t imagine it happening under current conditions. There are lots of jobs and prestige involved in huge installations like Super-Kamiokande, and none in sticking a raspi and some mildly radioactive shit in a cupboard in 400 physics departments.

Entropic gravity

Posted in physics by Scott Locklin on March 21, 2025

One of my reasonably firmly held beliefs is that gravity is not quantizable. The idea that gravity should be quantizable and unified with the other forces of nature in ways analogous to how electricity and magnetism were unified is one of the many shibboleths afflicting the physics community in current year, responsible for atrocities like noodle theory, loop quantum gravity and the television career of Michio Kaku. As far as I can tell this obsession originates with two ideas, sociologically speaking.

The first reason people are obsessed with this is because Einstein worked on it, and considered it an important unsolved problem.  This is a reasonable heuristic: Einstein was a genius. Einstein also worked on statistical mechanics,  relativity, quantum mechanics, the photoelectric effect, condensed matter physics; he even invented a novel form of refrigerator. Current year would-be quantizers of gravity who want to flex on old Albert never work on such a broad array of topics: they  only work on mathematical masturbations that don’t go anyplace.  That’s why Einstein was a great physicist and current year unification chodes aren’t.

Don’t forget who won the argument!

The second reason people are obsessed with this is physics chodes talk about “the moments before the big bang” and claim not to have a physics that works in the first 10^-43 seconds of the creation of the universe. This is completely unreasonable. You can know the complete unreasonableness of this idea for a physical fact by looking through a backyard tier telescope and taking note of galactic and globular cluster motions. We’ve known this is retarded for almost 100 years: Fritz Zwicky proved it in 1933. “Dark matter” and the even more annoying “dark energy” are basically the statement that our theory of gravity is de-facto incorrect on large scales. FWIIW Zwicky thought it was MOND, not dork matter, since there’s no evidence for the latter. Nobody has ever detected dark matter or dark energy in a laboratory: we only know about them because galactic scale objects don’t behave as if they obey our theory of gravity. So the conceit that some egotistical theoretical physics dork could tell you everything that ever happened from t=0 on to the present is absurd. He can’t even explain what’s happening now.

The reason for this 10^-43 number is they approximate that the universe was sufficiently dense back then that gravity must have behaved in a quantum way, which somehow unifies with the other two and a half quantizable forces (the electroweak and strong nuclear forces). That gives you a big old hint as to why gravity ain’t quantum: it is an absurdly weak force compared to all the other ones we know about. Gravity is only measurable at all when you pile up preposterously large amounts of matter. In Cavendish’s direct gravitational force experiment, he measured the attraction to a couple of 350 lb lead balls held 8″ from the test mass: about 0.018 milligrams of force. We can’t have a directly measured form of quantum gravity because we can’t have 350 lb quantum mechanical chunks of lead: quantum mechanics holds for objects which are like 10^34 times smaller. There’s an old saying I attribute (I can’t find the reference) to Phil Anderson: “there is no more reason to find a quantum theory of gravity than a quantum theory of steam engines.”
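
The 0.018 milligram figure checks out with Newton’s law and roughly Cavendish’s geometry. One assumption of mine: a test mass of about 1.6 lb, the usually quoted value; everything else is from the paragraph above:

```python
G = 6.674e-11       # m^3 kg^-1 s^-2
M = 350 * 0.4536    # big lead ball: 350 lb in kg
m = 1.6 * 0.4536    # test mass: ~1.6 lb in kg (assumed value)
r = 8 * 0.0254      # separation: 8 inches in m
g = 9.81            # m/s^2, to express the force as a weight

F = G * M * m / r**2               # ~1.9e-7 newtons
print(F / g * 1e6, "mg of force")  # ~0.019 milligrams: absurdly feeble
```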

Cavendish doing actual physics

Which brings me to today’s actual topic: the idea that there may be a steam engine theory of gravity. I was inspired to write by a recent series of papers by Ginestra Bianconi, which I don’t think are necessarily anything special, as they’re mostly mathematical formalism devoid of physical insight. The original insight itself is OK: the idea that gravity is a sort of secondary effect of thermodynamics seems reasonable, as gravity is an extremely macroscopic thing. Amusingly, one of the idea’s big proponents according to wakipedia is a guy whose book I made fun of a year and a half ago, Thanu Padmanabhan.

An earlier thing along these lines is by Erik Verlinde, where he derives Newton’s law of gravitation from information theoretic considerations using something from noodle theory called “the holographic principle.” Kinda sorta inertia as well. Sabine Hossenfelder wrote a response to the paper which is kind of better than the original paper. The basic idea is Gauss and Stokes theorems: arbitrarily stick a “holographic” bag around objects and out comes the expected 1/r potential. Sabine notes that the same argument works for E&M-as-thermodynamics if you stick different constants in it. She also notes that festooning it with bits doesn’t really do anything, and points out there’s no need for “holographic screens”; they’re just constant entropy surfaces.
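
For reference, the whole Verlinde argument fits in four ingredients (this is the standard presentation, nothing original here): Bekenstein’s entropy jump for a mass m a Compton wavelength from the screen, the holographic bit count of a spherical screen of radius r, equipartition of the enclosed energy Mc² over those bits, and the entropic force relation:

$$\Delta S = 2\pi k_B \frac{mc}{\hbar}\,\Delta x, \qquad N = \frac{4\pi r^2 c^3}{G\hbar}, \qquad Mc^2 = \frac{1}{2}N k_B T, \qquad F\,\Delta x = T\,\Delta S$$

Eliminate T, N and ΔS and Newton falls out:

$$F = T\,\frac{\Delta S}{\Delta x} = \frac{2Mc^2}{N k_B}\cdot\frac{2\pi k_B mc}{\hbar} = \frac{GMm}{r^2}$$

Sabine’s point is visible right here: swap the constants and the same bookkeeping “derives” other 1/r² forces, so the bits aren’t doing any real work.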

Of course none of this is remotely physical. There ain’t any holographic bags, and there’s no reason to think there might be. Constant entropy surfaces: well, maybe someone can come up with some reason for them to be there. Other people have claimed that cold neutron trampolines having quantum states disprove entropic gravity, but obviously these fellows haven’t heard of phonons, and it ends up sounding like a reddit tier argument.

Ted Jacobson is probably the originator of the idea. It came up in ideas on the information content of the event horizons of black holes (I don’t think much of black hole “physics” either; rather hard to check). He doesn’t mention holography, nor does he stick Gauss’ theorem in anywhere (probably because it was obvious that one could), but the essential argument is in there. I also looked at a paper by Thanu Padmanabhan, but like his book it seems to be a recitation of well known facts.

Going back to Bianconi’s paper, it’s a riff on these sorts of efforts. In her case, she assumes General Relativity, then presumes a quantum matter field, then presumes a “quantum relative entropy” field which causes a sort of MOND modification to GR. Quantum relative entropy isn’t the type of thing which has ever been measured to be an actual thing; more the type of wanking “quantum information theorists” came up with. You know, in case we ever invent actual large scale quantum coherent forms of matter. It’s an interesting idea, and I’d like to see an analogous Hossenfelder comment on this one, as most of the physics and “physics” is unfamiliar to me. If I had to guess, she’d probably say something along the lines of: there’s no reason to postulate the “matter field” other than getting the answer Bianconi did get, particularly since we’re talking about enormous quantities of matter which aren’t particularly quantum looking. Ginestra Bianconi is an interesting person, but not much of a physicist. Most of her work is “network theory” stuff, and it isn’t much concerned with matter or physical reality. Rotating shapes look impressive, but unless the thing predicts something surprising you can later measure, you’re just flicking the bean. Perihelion of Mercury, not measurements of pyramid inches.

It would be cool if gravity came from some kind of thermodynamic property of matter, or some higher order kind of thermodynamics. As I said above, it wouldn’t surprise me if it worked that way: gravity is a large scale thing, and most of the interesting dynamics we see around us come from thermodynamics (and gravity). Would be even cooler if it came from quantum relative entropy (though I doubt it). Really though, people do pretty well assuming gravity is a plain old force.

Fritz Zwicky doesn’t like thermodynamic gravity even if it gives MOND-like results