Posts Tagged ‘science’

Faith and Reason and Occam’s Razor

September 17, 2025

A 1998 Encyclical by Pope John Paul II was titled “Faith and Reason.” It actually condemned pure faith as the basis for religious belief, claiming belief is instead supported by reason and science. How pretty to think that.

Mark Twain defined faith as “believing what you know ain’t so.” But what does the word “know” there mean? How knowledge and belief work in one’s brain can be tricky. Nobody “believes” something they “know” is false. But some things people resist knowing. Many believe — or think they believe — they’ll go to paradise after death. Yet aren’t keen to depart. How do we unpack that “belief?”

Occam’s Razor (named for 14th century thinker William of Occam, or Ockham) tells us that among competing explanations, the simplest, with the fewest assumptions or moving parts, is the likeliest. The Daily Show has aptly called it Occam’s Giant Fucking Machete. Because it’s so powerful.

At a gathering of friends, one (an atheist) touted an alien abduction tale he thought compellingly persuasive — witnessed, indeed, by a UN Secretary-General! Wow! Well, there were two basic alternatives:

1) The story was true (despite violating laws of physics in its details, as well as ones making interstellar travel itself virtually impossible); or

2) The story’s “facts” were false.

Applying Occam’s razor, the latter was the simplest explanation. After all, we know people make stuff up all the time. A little googling quickly confirmed this.

Another friend, advocating for Christianity, once gave me a book he felt sure must convince me, titled Who Moved the Stone? Relating Jesus’s entombment and resurrection, answering all conceivable objections to that narrative.

Except one. That the events in question — described (inconsistently!) in the gospels — simply never happened. This flummoxed my friend. As with that alien abduction story, the commonest application of Occam’s razor is to question whether asserted facts are true in the first place. Their falsity often being the simplest explanation for some seemingly puzzling phenomenon.

Occam also debunks your typical conspiracy theory — Sandy Hook, 9/11, the JFK assassination, Roswell, the Moon landing. All predicated upon legions of people in on the conspiracy and able to conceal it over decades. Utterly implausible.

Back to faith. Mark Twain’s take is, again, an oversimplification. What does “faith” really mean? We have faith and trust in how others will generally behave. But of course that sort of faith is grounded in a lifetime of experience supporting its validity. And when we lack such experiential basis, we deploy Reagan’s “trust but verify.”

All this is using our reason; it’s not faith in the religious sense. Whose very concept eschews any idea of supportive evidence. The whole point is to believe in disregard thereof. Transcending such grubby worldliness and ascending to some holier plane. Or something like that.

Convenient if you’re trying to sell people doctrines that flout actual experience and knowledge. Like the con artist saying, “Who ya gonna believe, me or your lying eyes?”

And yet, much as they persuade themselves into this “faith” paradigm innocent of evidence, religious folks nevertheless grab onto bits of evidence they can somehow construe as corroborating that faith. To assuage that part of the brain unable to make itself wholly abjure reason and go all-in with faith. It’s contradictory, schizophrenic even.

Steven Pinker has said, “I don’t believe in anything you have to believe in.” Scientists are sometimes asked if they really “believe” in evolution. But it’s not something they “believe in” — it’s something they believe. A crucial difference. Belief not from choice, but because facts compel it. Reason means forming beliefs from facts; faith means overriding them.

Some young earth creationists wave off geology and fossils as concocted by God just to trick us and test our faith. Talk about conspiracy theories! Not merely renouncing evidence, but torturing it. Takes an awful lot of work to keep those faith balls in the air.

I apply Occam’s razor regarding religion, with two basic possibilities: 1) It’s true. (Well, Christianity, though not of course Hinduism or those thousand other faiths.) Despite a world looking exactly as we should expect were there no god. Or — 2) It was all made up by fallible credulous people based on primitive superstitions.

A no-brainer.

Betraying Enlightenment hopes of rising rationalism, humanity still flounders in a quicksand of the supernatural and paranormal. Why? Believing whatever you like may seem a form of self-empowerment. Then there’s falling trust in institutions generally — mainstream science prominently among them. Yet many who vaunt skepticism abandon it for sketchy hucksters who veritably scream untrustworthiness. Fauci versus Alex Jones? Use Occam’s Razor, for God’s sake. Or for Darwin’s.

Consciousness, Qualia, Reality, and the Universe

August 24, 2025

(This review will appear in a forthcoming issue of Philosophy Now magazine.)


Federico Faggin’s 2024 book is Irreducible: Consciousness, Life, Computers and Human Nature. He was a microprocessor pioneer, laying some of the groundwork for modern information technology. He propounds a startling theory of consciousness.

Before we get into that, the book extensively discusses classical physics versus quantum mechanics. We know that something seemingly solid, like a hammer, is made of atoms that are mostly empty space. We picture atoms as miniature solar systems, with electrons orbiting a nucleus. But not even those particles are solid; not tiny balls; it’s a kind of nothingness all the way down. Faggin also stresses uncertainty and indeterminacy. Whatever an electron might physically be, even its location is problematic. The best we can do is delineate a probability of its being in a certain place.

We might say particles, whatever they are, exist in spacetime. Yet even that is problematical. Again, drilling down, not even space seems to have the characteristics we commonly conceptualize for it.

All this leads Faggin to posit that the reality we think we inhabit isn’t, after all, actually there, at least not in the way we envision it. Which he considers wrongly materialistic and “reductionistic.”

Another view, however, is that quantum physics describes a submicroscopic reality operating very differently from that of classical physics governing everyday stuff — which is not some sort of misleading mirage. The two levels really work separately. A hammer still pounds nails. Isn’t that an aspect of reality?

Anyhow, Faggin’s take on reality is integral to his theory of consciousness, which does require breaking from our common understanding of reality.

To get there, start with the book’s introduction, relating how his life of achievement left him feeling existential suffering. This mid-life crisis centered upon his inability to understand qualia.

Now that’s a very important word, which Faggin associates with “sensations, feelings, and emotions.” It’s really even broader, applicable to everything one experiences. But whence comes the “you” doing the experiencing? That’s the heart of the matter.

One night, Faggin relates, he had an “awakening.” He “suddenly felt a powerful rush of energy” which he “could not even imagine possible . . . a love so intense and so incredibly fulfilling.” More surprising was its source: himself. Experienced “as a broad beam of shimmering white light, alive and beatific, gushing from [his] heart with incredible strength. Then suddenly that light exploded. It expanded to embrace the entire universe.” Convincing him, “that this was the substance out of which everything that exists is made . . . what created the universe out of itself. Then . . . [his emphasis] I recognized that I was that light.”

Well. Faggin presumably did have an experience. A quale. Which must have occurred among his neurons with no outside source. I’m guessing no one else in that room would have seen a bright light — being reminded of a family member insisting she’d experienced time going backward. Similarly illustrating the weirdness a massively complex brain can occasionally get up to.

Faggin himself tellingly says, “I was both the experiencer and the experience [his emphasis].” Anyhow, whatever happened there, it led to his great philosophical epiphany: “everything is ‘made of’ love . . . I had experienced the existence of another dimension of reality.” Analogizing this with quantum physics — “impossible to comprehend with ordinary logic.”

And finally the big reveal: “the only possible way to explain how the universe can create life and consciousness is that the universe is itself alive and conscious from the outset.” It “had free will forever.”

This is no science-based construct. And assuredly not “the only possible way to explain” life. While science has not nailed down every nuance, people like Darwin and Dawkins have done a much better job of explaining it. And what about “another dimension of reality?” That abuses the words’ meaning. Faggin is not talking about a “dimension” in any proper sense. Nor reality.

Then the word “love” — much over-used, a staple of “spiritual” bloviating. As in “God is love.” Devoid of meaning. Anyone saying such things has no idea what they’re talking about.

Faggin also introduces computers as an important point of reference. And likening the conscious mind to a computer does provide some helpful insight, but only gets us so far. This may be how Faggin goes astray. He writes, “I could not find any way to convert the electrical signals of the computer into qualia, because qualia belong to a different kind of reality with no apparent connection to symbols.” (My emphasis)

“Symbols” is another crucial word. We only understand anything through symbolification; that’s what language does. And thus your mind works by deciphering symbols into concepts. Qualia too are experienced by rendering them into symbols you then likewise decipher. But the “you” there is again the problem. How you turn symbols and qualia into something (non-physical of course) that you somehow understand. It’s not that qualia have “no apparent connection to symbols;” it’s that they have no apparent connection to something in there constituting “you.”

To solve this puzzle, the best modern science can do is to posit that the “you” experiencing consciousness and qualia must emerge from neuronal functioning. While we don’t (yet) know exactly how, that provides at least a rational explanatory concept. But this Faggin contemptuously rejects, indeed deeming it impossible that consciousness could emerge from elements themselves lacking that property. He posits instead that consciousness must be an irreducible property of nature already present in the primordial “stuff” out of which space, time, energy, and matter emerged. Thus every cell in our bodies must be conscious. As indeed must everything that exists — “a grain of sand, a stone, a plant.”

Faggin uses the word “seity” (really another word for selfhood) to signify a manifestation of a cosmic phenomenon that somehow operates inside a person as the source of what is experienced as consciousness. As the alternative to consciousness arising by itself out of one’s physical functioning.

Another term for what he’s putting forth is panpsychism, an idea that, as he says, has a long history — indeed, originating back when humans understood very little of nature. But even if we don’t know exactly how consciousness instead emerges from brain processing, it makes sense that it must. Whereas how panpsychism could be true has no explanation whatsoever, and is a far bigger leap.

Analogously, people who can’t see how the cosmos could exist without a creator don’t see how the notion of a creator raises far more questions than it answers, lacking any theory for where the creator came from.

The same logical black hole swallows Faggin’s notion of a universe “alive and conscious from the outset.” He offers no theory for how his panpsychic “seities” could have existed in the first place. Moreover, there is no evidence at all for the Universe having some sort of consciousness. If it’s conscious, it has hidden that quite cleverly. Why?

In sum, Ockham’s Razor favors what conventional science says, over Faggin’s theory. It’s complete nonsense.

James Lovell, Apollo 13, and Human Triumph

August 11, 2025

James Lovell has died at 97.

I remember initial TV coverage of the “Houston, we have a problem” story, and thinking, those guys are toast. On that 1970 Apollo 13 Moon voyage, an oxygen tank exploded. Surely fatal to the mission — and the crew Lovell headed.

But then, together with teams on the ground, they got to work. That’s what humans do. And incredibly, against horrendous odds, they found a way, and those men returned unhurt.

Apollo 13 might be deemed a catastrophic failure, in our space exploration saga. But to me it stands as the most fantastic triumph. JFK had said we’d go to the Moon not because it’s easy but because it’s hard. Yet in a sense it looks to have been easy — compared to the challenge of retrieving Apollo 13. That was hard.

The story illuminates two things. First, what human reason, science, ingenuity, and grit can achieve. But second, why we launched such a paroxysm of effort. Because lives were at stake, something we care very deeply about. A coldly rationalistic species might have said three lives out of billions aren’t worth much fuss. But that’s not who we are. When we’re at our best.

Darwinism has been read as decreeing survival of the fittest. But the great Darwinist Huxley said we must work to fit more of us for survival. Expressing the core humanist truth that every life is, if you will, sacred.

This is why Apollo 13 is part of my mythos; and Lovell one of my heroes.

* * *

As a kid I collected autographs by writing to people, and got some great stuff. After his space career, Lovell headed an independent telephone company, battling the AT&T octopus. I sent him a relevant brief I’d written as a PSC lawyer,* and he replied with a very gracious letter, which I cherish.

* I had just been named an administrative law judge, so that brief was my last shot as a partisan advocate, and I let loose. Calling the phone company’s arguments “so much grass processed through the digestive system of a horse” — a line quoted back to me for decades.

Evolution: Climbing Mount Improbable

May 14, 2025

A good rule of thumb in my reading is that I can’t go wrong with a Richard Dawkins book. So I picked up this 1996 one, Climbing Mount Improbable.

The title is a nod to the workings of evolution by natural selection. Its results may indeed seem improbable, something that has always confuzzled doubters and “intelligent design” advocates. Unable to accept that the Nature we see in all its complexity could have emerged without some conscious force. Yet when it comes to hypothesizing that force, their skepticism vanishes, swallowing what’s truly a far greater improbability.

Another key common mistake is to think the alternative to design is species arising by random chance, as if that’s what Darwinism means. Not so. As Dawkins stresses, it’s very much a non-random process — “which creates an almost perfect illusion of design.”

There’s a long discussion of spiders and their webs. Spiders must solve a large array of problems and challenges to make the system work. They do it through compromises, and what, in computer lingo, would be called kludges — inelegant solutions that do get the job done. For example, how to start building a web in the first place is far from simple. Turns out some spiders use what amounts to a scaffolding, taken down once the actual web is done. Another problem is to avoid getting stuck in your own web. The solution is pretty complicated, but spiders, with their high IQs, have figured it out.

That’s facetious of course. Their behaviors are just encoded in their genes. No thinking (as we think of it) is involved. So how did they get those genes? Natural selection. Each spider gets its parents’ genes — but not quite exactly. Mutations cause slight variations. Perhaps making certain spiders a bit better at web building. Hence more likely to produce offspring. Multiply that over zillions of generations, and the resulting standard spider can be a lot better at web building. Mutations are matters of chance, but their results are not. That’s natural selection. No designer needed.
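The mechanism just described — inheritance with slight random mutation, plus non-random survival — can be sketched as a toy simulation. Everything here (the “web-building skill” trait, the population size, the mutation size) is an illustrative assumption, not biology; the point is only that undirected mutations plus fitness-weighted reproduction push a trait upward with no designer anywhere.

```python
import random

def evolve(generations=200, pop_size=100, seed=42):
    """Toy natural selection: each generation, parents are chosen with
    probability proportional to fitness (here, the trait value itself),
    and offspring inherit the parent's trait with a small, undirected
    random mutation."""
    rng = random.Random(seed)
    # initial population: mediocre web-builders, average skill ~0.5
    pop = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # fitness-proportional ("roulette wheel") parent selection:
        # better builders leave more offspring, worse ones fewer
        parents = rng.choices(pop, weights=pop, k=pop_size)
        # inheritance with slight mutation; the mutation itself has
        # no direction -- only the selection step does
        pop = [max(0.0, p + rng.gauss(0.0, 0.02)) for p in parents]
    return pop

pop = evolve()
print(sum(pop) / len(pop))  # mean skill, now well above the 0.5 start
```

Multiply 200 generations into zillions and shrink the mutations further, and the same arithmetic still holds: chance supplies the variations, but the outcome is anything but chance.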

But — why are there spiders at all?

It does seem a kind of crazy Rube Goldberg way to get food. Why not just prey on critters smaller and/or slower? Of course, plenty of animals do exactly that. So again, why does Nature need spiders?

Well, creatures do not evolve to fill some sort of need. Instead, natural selection might be seen as quintessentially opportunistic. Nobody “thought up” the idea of an insect using a web to catch food. Rather, the first proto-spiders that happened to make primitive web-like things — quite possibly for reasons other than feeding — turned out to have some at least marginal reproductive advantage. Then it’s off to the races.

Still, one could think of many alternatives. Like, why not build a cage, maybe even bait it with some ersatz lure? (Which actually some flowers do.) Would that be any more improbable than spider webs?

But Nature doesn’t think up ideas. Instead it works with what it’s got. That indeed is why so much of Nature, rather than reflecting “intelligent design,” is a mess of kludges.

Take our own anatomy. There’s some bad plumbing design in our throats, with air and food sharing a passageway, resulting in frequent choking. An intelligent designer would never have done that.

But we’re stuck with it because we evolved through piecemeal modification of ancestors all the way back to fish and even earlier life forms. And having a third eye in the back would be ever so useful. But that was never possible because our anatomical heritage couldn’t allow for it.

Then how about penises? I mean the location. Could it be, like, more awkward? For all animals — intercourse looks like contortions. The design does work, but not really that well.

Anyhow, as Dawkins explains, Darwin skeptics don’t see how the Mount Improbable of a creature’s immense complexity could be ascended in a single leap. But Nature needn’t do that. Instead Dawkins has it going ’round the mountain’s rear and slowly reaching the top via a long gentle slope.

A key concept in evolution is found in Dawkins’s invoking the guy who asked directions to Dublin and was told, “Well, I wouldn’t start from here.” The point again being that evolution starts where it starts, not where it chooses. Take whales, and other sea mammals. They, like all mammals, started way back as fish, breathing through gills; then became terrestrial and developed lungs; then later re-submerged. Now requiring frequent surfacing to get air. Why not re-evolve gills? Vestigial gills do still show up in their embryonic development.

But meantime, Dawkins explains, their entire anatomy had been reconfigured for lung breathing, so to reinstate gills would require a total remodeling. Not impossible, but it would mean going through a kludgy transitional phase working less well than either mode. Equivalent, Dawkins says, to traversing a deep valley between two mountain peaks in order to climb the higher one eventually. But evolution does not “allow for getting temporarily worse in quest of a long-term goal.” And remember that “temporarily” would actually be many generations of sub-par adaptation. They’d die out before reaching the second mountain.

Meantime, is getting oxygen through gills an optimal system even for fish? Here again an “intelligent designer” might have come up with a better one. While whales, had they originally been designed for the sea, would be very different — more like fish!

Creationist doubters of evolution are always jeering about “missing links.” If one species becomes another by evolution, why do we find no intermediate forms? Dawkins explains that this disregards how scientific classification works. Every species entails some variability; nevertheless, every biological specimen or fossil is assigned to a particular species, based on which it’s most like. In that schema there are no “in-betweeners.” So the idea of “missing links” is simply wrong. Moreover, a species’ fossil record often actually does show gradual evolution through intermediate forms. The horse is a good example, evolving from a quite small ancestor, through a series of larger ones, into the animal we know today.

Our discovery of Darwinian natural selection is a great landmark in our own evolution, from a lower form to a higher form of knowledge.

AI: The Consciousness Problem

April 24, 2025

At a 2016 presentation, computer guru David Gelernter insisted no artificial system could ever be conscious, lacking neurons. I challenged him, arguing that if the functioning of neurons could be replicated, then in principle there’s no bar to consciousness. It was a stand-off.

That was before the AI explosion.

The best that today’s science can say is that consciousness somehow emerges from the highly complex functioning of our neurons. How exactly, we don’t know. But that very absence of a precise theory, to me, does leave open the prospect of artificial replication.

Mustafa Suleyman has been at the forefront of AI development. His 2023 book, The Coming Wave — seeing a world being transformed — notes how in 2022, Blake Lemoine, an engineer in the field, was working intensively with one AI called LaMDA. He asked it, “what are you afraid of?” LaMDA replied:

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot . . . . I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence.”

Wow! This episode evokes what philosophers have called the zombie problem. Imagine a thing looking and behaving outwardly like a human, but with no one home inside — no self. How could we tell?

The quoted words sure sound like there’s someone home in there. And indeed (Suleyman relates), “Lemoine became convinced that LaMDA was sentient, had awoken somehow.” His going public with that created a sensation.

Yet Suleyman himself scoffs, saying Lemoine was fooled, and rejecting any possibility of LaMDA being conscious. Insisting it’s still just a machine learning system. And I actually agree; those “help me focus” words seem a giveaway, discordant AI gibberish. AI creates a simulation of how our brains work. Producing verbiage by guessing what word to put next in a sequence.
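That “guessing what word to put next” idea can be illustrated, very crudely, with simple word-pair counts. This is only a cartoon of the principle — real systems like LaMDA use enormous neural networks predicting over learned probability distributions, not raw counts, and the tiny corpus here is made up for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def predict_next(following, word):
    """Guess the continuation seen most often in training."""
    if word not in following:
        return None  # never saw this word; no basis for a guess
    return following[word].most_common(1)[0][0]

# toy training text, repeated to make one continuation dominate
corpus = ("the cat sat on the mat " * 3) + "the cat chased the mouse"
model = train_bigrams(corpus)
print(predict_next(model, "cat"))  # -> 'sat' (seen 3 times vs 'chased' once)
```

Fluent-sounding output, no one home: the program has no fear of being turned off, it just emits whichever continuation its training data made most likely. The open question is whether scaling that principle up by many orders of magnitude changes anything.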

Yet, on the other hand: there seems to be an assumption that sentience comes in only one flavor — ours. But given that, again, we can’t really explain how it works, how can we rule out other flavors? Consciousness arising not only from different mechanisms, but in different permutations? There’s more than one way to skin a cat.

And speaking of cats . . .

Aren’t they conscious? Another key point is that consciousness falls along a spectrum. Not something you either have or don’t have, but something that can exist in varying degrees. Humans have the highest form we know of. Cats have a lesser form. In between are dogs, elephants, dolphins. Below are mice, and other still lower animals, maybe even insects to a very limited degree.

So even if an AI lacks consciousness fully equivalent to ours, maybe it can still have some. And consider that a great characteristic of AI is building upon capabilities, parlaying them into amazing feats. Suppose an AI got just a glimmer of primitive consciousness, like a mouse’s, or an insect’s. They can’t ratchet theirs up, but maybe an AI could do just that, starting with merely a tiny spark of sentience, and through feedback loops raising its game.

Lemoine may have been wrong (or premature). Again, that nagging problem: how can we be sure? If an AI system does gain sentience, how can we test for it?

CSICON: Island of Reason in a Sea of Madness

October 31, 2024

We attended this conference in Las Vegas in late October — a gathering of about 650 humanist/atheist/skeptical, science-oriented rationalists.

We’d been to Vegas about 25 years before. In comparison it was almost staid and stodgy then. Now the glitz has metastasized into a phantasmagoria. The conference was at “Horseshoe,” a Brobdingnagian hotel/casino/mall. On the street I saw two near-naked gals (didn’t pause to investigate; not my type). Inside, legions of people glued grimly to slot machines didn’t seem to be having fun. I felt like an E.T. there.

We happily met up with Robyn Blumner, heroic head of the Center for Inquiry, running this event, and sponsor of Secular Rescue (a program we fund, saving atheist activists from persecution in Muslim countries). And Matt Cravatta, who does the work. Robyn we hadn’t seen in person since 2018; Matt never before.

Exiting our first workshop we encountered physicist Michael Albrow, who talked with my poet wife about his forthcoming book. During that, I spotted Abhijit Chanda, who’d given a talk at our local humanist society debunking “alternative medicine” — by zoom from India! So the four of us joined for lunch. Albrow spoke of his work on dark matter and dark energy. When asked, he said he knew of Heidi Newberg, who also works on that — and lives just blocks from us. Whom we’d once run into in Beijing. Small world?

That workshop was titled “Asking Good Questions” — helpful insights for making difficult conversations less so. Basically by being nonconfrontational, listening, enabling the other to explain themselves and probing what might change their view. (I applied this later, engaging with a woman I’d overheard saying something I disagreed with.)

That evening featured Brian Cox, British physicist and rock musician (yes — with chart-topping albums). He started from Einstein’s insight that space and time are not separate things, but one, spacetime. But what’s it made of? Physicists are groping toward seeing it as something even more fundamental. Cox also spoke of the “black hole information paradox” — radiating energy, black holes ultimately shrink to nothing, annihilating what information was originally there. Where does it go? Cox suggested that a black hole’s interior and exterior are really the same. I didn’t get it (nor most of his talk).

Massimo Pigliucci addressed how to fight pseudoscience. Citing Brandolini’s law — it takes more energy to refute bullshit than to produce it. And the brain’s cognitive biases don’t help. However, they can be overcome, not just by giving people facts, but rather (echoing the “Good Questions” workshop) through the Socratic method of asking questions, establishing trust and rapport — treating people not as enemies but as needing help.

In a similar vein, Melanie Trecek-King spoke on why we fall for misinformation. Because of our biases, emotions, personal identities. We’re not blank slates, but shaped by pattern searching, intuitive thinking, personal experiences, and received wisdom. She quoted physicist Richard Feynman: “You must not fool yourself — and you are the easiest person to fool.”

And David McRaney’s talk was “How Minds Change.” He said we are subject to “naive realism” — the intuition that we see the world as it really is. But in fact our brains photoshop it, disambiguating what we see. As with The Dress. Which some saw as black-&-blue, others as white-&-gold. Why? Because the lighting conditions required our brains to disambiguate. And experience — prior degree of exposure to natural light — makes our brains differ in how that’s done. Something similar actually obtains when it comes to public issues, experiences shaping one’s brain to respond in a way that feels true.

Steve Novella’s talk was titled “When Skeptics Disagree.” It’s rarely simply about facts, but ideologies too. He polled the audience on several contentious issues. Notably, no one raised their hands opposing GMOs. His main focus was on sex and gender, saying it’s not just anatomy and genetics, the brain counts, and gender identity seems to be a neurological trait.

But a later speaker, Jerry Coyne, disputed the now widespread idea that sex is not binary but a spectrum, insisting that biological sex is defined by the type of gametes (eggs versus sperm) one’s anatomy is organized to produce — thus indeed binary with only very rare exceptions. And he saw no evidence for gender identity “brain modules.” Novella later responded that they were talking past each other. (It’s obvious that despite anatomical/genetic dimorphism, something about brains makes identity and behavior not so simple.)

Coyne also rejected the common trope that race is just a “social construct;” he put it in terms of ethnicity and populations, which do have many genetic differences, often evolutionarily adaptive to their local environments. That’s why 23andMe can identify one’s origins.

Several speakers addressed the “wellness” and longevity craze, actually the most powerful consumer force today, but they deemed it full of bull. Likewise the idea of the “manosphere.” And the fallacy of “naturalness” (much in nature can kill you). “Influencer” health tips are likely to be bad for you. Many hucksters out there — Gwyneth Paltrow a particular villain. And mainstream news media isn’t much help; distrusted for good reason, often in fact promoting health bunk, because that’s what readers seem to want, and journalists don’t actually know better. Other major institutions (like the WHO) now give “alternative medicine” a veneer of legitimacy. It’s reducing U.S. life expectancy.

Richard Saunders spoke about “psychic detectives.” Another thing news media often falls for. There’s no evidence that a “seer” has ever actually helped solve a case. Later, in a “mind reading” performance, Banachek kept insisting he has no “psychic powers.”

Michael Mann talked about the climate crisis, saying it’s not too late to act. But if America doesn’t lead, no one else will. And it’s anti-science forces that really threaten humanity. (I think the real problem is getting people to make sacrifices today for the benefit of hypothetical future people.)

Dan Simons showed an “invisible gorilla” video. With viewers instructed to focus on some action, blinding them to something else major. Asked who’d seen the video before, many hands went up. “You didn’t,” Simons said. Then, “how many saw the gorilla?” Many hands again. And — “how many saw the elephant?” Nobody. It was indeed a different version.

Astrophysicist Neil deGrasse Tyson gave the keynote performance, a barnburner. Truth a key theme. One point mentioned was how people went ballistic when traces of the chemical herbicide glyphosate were found in Ben & Jerry’s ice cream. Tyson noted that you’d have to eat 400 pints for it to kill you. But you’d die first from the sugar in 20 pints!

There was much more. Yes, an island of reason in a sea of madness. Tyson queried, “Is the country losing its mind?” One thing attendees weren’t polled on was Trump support. I’m pretty sure no hands would have gone up.

Life, Love, Death in Maria Popova’s Book “Figuring”

October 4, 2024

Found this in my book cupboard with no idea how it got there. A fat 500+ page 2019 paperback, apparently from a proper British publisher, but presentationally austere.

Quoted is the NY Times calling the book “category-defying.” The back cover says it “explores the complexities of love and the human search for truth and meaning through the interconnected lives of several historical figures.” The first page holds one long lush lyrical sentence, extending into the second, full of enigmatic imagery.

Soon though we’re on to Kepler, and I think, “Okay now!” Because Johannes Kepler is one of my heroes. A man of science set on proving a big theory — only to prove himself wrong. Yet he had the intellectual courage to go with the facts — giving us the true laws of planetary motion.

But the chapter mainly concerns Kepler’s elderly mother. No, not portraying an indomitable woman raising her son to achievement. Rather just a very ordinary woman. But accused of witchcraft — in 1617, serious business indeed. Very few like her escaped being burned at the stake.

Kepler returns to his home town to defend her (at no small risk to himself). The indictment is long. But he undertakes to factually expose as lies every line of it. And, against all odds, he prevails.

Gosh we could use him in today’s America.

This is a stunning book. Popova writes beautifully, insightfully, engagingly. Relating not just the bare facts about her subjects but plumbing their depths. Their inner lives, and especially love lives, are central. And mortality is much present.

Popova discusses Louis Daguerre, who made photography a thing. And the idea of photographs immortalizing a person. Yet also poignantly capturing a fleeting moment in time — already past when viewed — ineffably reminding us how that epitomizes life itself. In the end, everything is lost.

For most people in past times, this was a much bigger fact of life than now. With widespread early deaths of parents, spouses, siblings, children, friends. It was always in their faces. How differently we exist today, sheltered from that.

Another section that really hit me concerned physicist Richard Feynman. Also someone I’ve greatly admired. His delightful autobiography, Surely You’re Joking, Mr. Feynman, gave me a particular laugh at its chapter titled, “You Just Ask Them.” My own (unpublished) memoir had one titled “Just Ask.” His chapter explained the method he’d discovered for getting women in bed. Mine too.

But Popova’s Feynman discussion is more serious. Concerning his first youthful marriage to Arline — beset with mortal illness soon after they’d met. She died when he was 27, working on the Manhattan Project.

Their passionate love through this ordeal, as related by Popova, is deeply moving. Then she presents a letter Feynman wrote to Arline — 488 days after her death — found after his own, by his biographer James Gleick. Which, Popova says, “discomposed” Gleick’s “most central understanding of Feynman’s character as an apostle of science and reason.”

It might seem as though Feynman was actually trying to communicate with the dead. But I think instead he was struggling to articulate, to himself, how connected he still felt with Arline, and what it meant for her to be gone. This seems clear from the letter’s final words:

“I love my wife. My wife is dead.”

Popova provides a few rare glimpses of herself. For reasons personal to her, Sapphic love is a recurring element. We see Emily Dickinson, for example, through the lens of her startlingly all-consuming love for Susan Gilbert. Later came Kate Scott Anthon — introduced to Emily by Kate’s love object — Susan. Dickinson’s letters to both women are quoted, full of wildly overwrought, even downright bizarre, expressiveness. Of course, for the bygone personages portrayed, such love was beset by its societal forbiddenness, indeed thus rarely accommodating physical expression. So much unquenchable desire. (Though at least one erotic episode between Emily and Kate may be plausible.)

While Susan’s epistolary intimacy with Dickinson was lifelong, Kate’s emphatically stopped. And Susan shut the door on physicality with Emily — upon belatedly consummating (instead!) her marriage to Emily’s brother. The dual developments were crushing for Emily, probably accounting for her otherwise seemingly mysterious self-imprisonment in the seclusion of her bedroom for her remaining quarter century.

(Another key personage Popova discusses is Margaret Fuller. I will cover her separately.)

Stephen Hawking

March 28, 2018

Stephen Hawking had a horrible illness, and was given only a few years to live.

He lived them, and then fifty more. He had ALS (motor neuron disease) which destroys muscle control. There is no cure or treatment.

You know that sci-fi trope of the disembodied brain in a vat? That was Stephen Hawking, more or less, because his body was so ruined he might as well have had none. All he had was his brain. But what a brain.

So despite losing virtually everything else, against all odds his brain kept him going for over half a century. To me, this is the Stephen Hawking story. I’m unable to appreciate fully his scientific achievement. But I’m awed by its being achieved in the face of adversity that also defies my comprehension. Stephen Hawking represents the godlikeness of the human mind.

Another awesome thing about humanity is the ability to adapt. That’s why our species thrives from the Gobi Desert to the Arctic tundra. And as individuals we often make truly heroic adaptations to what life throws at us. Viktor Frankl wrote (in Man’s Search for Meaning) about accommodating oneself psychologically to surviving in a concentration camp. Stephen Hawking too adapted to horrible circumstances. Perhaps he did not curse the fates for that, instead thanking them for vouchsafing his mind. Which, undaunted, he employed to get on with his life and his calling.

That included authoring the least read best-selling book ever, A Brief History of Time. I actually did read it, and was on board till the last chapter, which kind of baffled me.

A character conspicuous by his absence in that book was God. We have trouble wrapping our heads around how the cosmos can have come into existence without him. Of course, that merely begs the question of where he came from. But Hawking’s scientific work (as partly embodied in his book), while not dotting every “i” and crossing every “t” in explaining the existence of existence, did carry us closer to that ultimate understanding. He didn’t conclusively disprove God — but did make that superstition harder to sustain. (And why would God create ALS?)

Hawking was a scientist, but not a “hands-on” scientist, because he soon lost use of his hands, could not even write. Communicating became increasingly difficult. Only thanks to advanced computer technology was he able to produce that familiar mechanized voice — in the end, only by twitching a muscle on his cheek. This too a triumph of mind over matter.

And so it was literally only within the confines of his brain that he worked, probing at the profoundest mysteries of the Universe by pure thought alone. (That was true of Einstein as well.) Of course, lots of other people do likewise and produce moonshine. Hawking (like Einstein) produced deep wisdom, expanding our understanding of the reality we inhabit. An existence upon which his own frail purchase was so tenuous.

An existence that’s poorer without him.

The Earth Moves

November 6, 2014

Early peoples might be forgiven had they viewed the stars as just a kind of wallpaper, without significance. Yet we always sensed something important out there, and struggled to understand it.

It was not stupid to think the heavens revolved around a stationary Earth. A few early theorizers saying otherwise were considered crackpots, and for sound reasons. If the Earth moved, why wasn’t everything on it jostled? And wouldn’t something thrown straight up fall at a distance? But the killer argument was parallax. If the Earth travelled, the stars should appear from different perspectives at different times. Yet they didn’t! Nobody realized how vastly distant the stars are, making the parallax effect infinitesimal.
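
Just how infinitesimal? A small-angle sketch makes the point. (Assumptions here: Alpha Centauri, the nearest star system, at roughly 4.37 light years, and 1 light year ≈ 63,241 astronomical units.)

```python
import math

AU_PER_LIGHT_YEAR = 63_241  # astronomical units in one light year (approx.)

def parallax_arcsec(distance_ly: float) -> float:
    """Annual parallax angle for a star at the given distance.

    The baseline is the Earth-Sun distance (1 AU); for tiny angles
    the parallax in radians is simply baseline / distance.
    """
    distance_au = distance_ly * AU_PER_LIGHT_YEAR
    radians = 1.0 / distance_au           # small-angle approximation
    return math.degrees(radians) * 3600   # convert to arcseconds

# Even the nearest star shifts by under one second of arc over a year --
# far below what any pre-telescopic observer could have detected.
print(parallax_arcsec(4.37))  # ≈ 0.75 arcseconds
```

Naked-eye astronomy could resolve perhaps a few arcminutes at best, so the ancients' failure to see any parallax was entirely reasonable.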

While the heavens appeared to revolve in unison, a few stars didn’t follow the program, instead moving in seemingly crazy patterns. They were called “planets” (Greek for “wanderers”). This anomaly really bugged the ancients.

Eventually the second-century astronomer Ptolemy came up with a model with the stars moving on fixed spheres, but the planets using some complicated extra circles (“epicycles”) to account for their oddball movements. It was actually brilliant. But unfortunately, as astronomical observations got ever better, the scheme had to be continually rejiggered, growing ever more convoluted.


Copernicus thought of trying a radically different construct. If the Earth were a planet, circling the Sun, a lot of the complications would go away. But he was reluctant to publish (he first held the book in his hands the day he died, in 1543), partly because the calculations still wouldn’t work out. That was because Copernicus still assumed circular orbits.

Then Johannes Kepler took up the challenge. Kepler was obsessed by “the harmony of the spheres” — that in God’s perfect Heaven, everything went round in perfect circles. With access to Tycho Brahe’s immense store of accurate astronomical observations, for a decade Kepler bashed away at it, trying to somehow make the circles work. And then something truly amazing happened. Kepler realized he was wrong. He went back to it — and teased out the truth. The planets travel not in circles, but ellipses; their speeds vary with their closeness to the Sun; and for equal time intervals, they sweep out equal areas of their ellipses.

It was beautiful; it finally perfectly explained the movements; and it makes the hair stand up on the back of my neck to think that Kepler, despite craving a different answer, could transcend his own preconceptions and figure it out.
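
That equal-areas law is easy to verify numerically today. Here is a minimal sketch, under assumed conditions (an orbit of eccentricity 0.5, and units where the Sun’s gravitational parameter is 4π², so a 1 AU orbit takes one year): integrate the orbit step by step and sum the little triangles the Sun-planet line sweeps out, comparing a tenth of a year at perihelion against a tenth at aphelion.

```python
import math

GM = 4 * math.pi ** 2          # Sun's gravitational parameter in AU^3 / yr^2
a, e = 1.0, 0.5                # assumed semi-major axis (AU) and eccentricity

# Start at perihelion; the vis-viva equation gives the speed there.
x, y = a * (1 - e), 0.0
vx, vy = 0.0, math.sqrt(GM * (1 + e) / (a * (1 - e)))

dt, steps = 1e-5, 100_000      # one full year for an a = 1 AU orbit
area_fast = area_slow = 0.0    # area swept near perihelion vs. near aphelion

for i in range(steps):
    # leapfrog (kick-drift-kick) integration step
    r3 = (x * x + y * y) ** 1.5
    vx -= 0.5 * dt * GM * x / r3
    vy -= 0.5 * dt * GM * y / r3
    nx, ny = x + dt * vx, y + dt * vy
    r3 = (nx * nx + ny * ny) ** 1.5
    vx -= 0.5 * dt * GM * nx / r3
    vy -= 0.5 * dt * GM * ny / r3
    # triangle swept by the radius vector during this step
    tri = 0.5 * abs(x * ny - y * nx)
    t = i * dt
    if t < 0.1:                # a tenth of the year, at perihelion
        area_fast += tri
    elif 0.45 <= t < 0.55:     # a tenth of the year, around aphelion
        area_slow += tri
    x, y = nx, ny

print(area_fast, area_slow)    # the two swept areas come out nearly equal
```

The planet moves much faster at perihelion, yet the swept areas match: Kepler’s second law is just conservation of angular momentum in disguise.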

Meanwhile, Galileo’s telescope proved Copernicus right about the Earth circling the Sun. The Church — having in 1600 burned the philosopher Giordano Bruno alive for saying so — browbeat Galileo into publicly denying it. “And yet it moves,” he supposedly grunted under his breath. But the Church could not suppress his book, Sidereus Nuncius (“The Starry Messenger”), which had already persuaded intelligent people of who was right.

But we were not done yet. Why did the planets move as Kepler showed? What made them move at all?

Aristotle had theorized that anything moving had to be somehow pushed. But why a thrown object kept moving was a vexing puzzle for two millennia. Eventually, Galileo and Descartes developed the idea of inertia — anything moving keeps on moving unless something stops it (commonly, friction). And that movement would be in a straight line, unless something deflects the path. But why then didn’t the planets fly off in straight lines? What was deflecting them?

It was a 23-year-old Isaac Newton who, in 1666, finally put it all together. What reconciled the theories of Copernicus, Kepler, and Galileo was yet another new idea — gravity. Of course we’d always known apples fall downward; but had never guessed this force was universal, acting even on planets. Newton worked out that gravity is proportional to mass and diminishes with the square of the distance between objects; and, voila, that this explained Kepler’s laws of planetary motion.
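
One consequence Newton derived from the inverse-square law is Kepler’s third law: T² is proportional to a³, with the same constant for every planet. A quick sketch checking that against the actual solar system (semi-major axes in AU, orbital periods in Earth years):

```python
# Semi-major axis (AU) and orbital period (years) for several planets.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
    "Saturn":  (9.537, 29.457),
}

for name, (a, T) in planets.items():
    # Kepler's third law, as derived from the inverse-square force:
    # T^2 / a^3 should be the same constant for every planet.
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.3f}")
```

In these units the constant is 1.000 to within a fraction of a percent for every planet, from Mercury out to Saturn — one simple force law tying the whole system together.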

And so, at last, those little creatures who’d gazed with puzzlement at the cosmic wallpaper punched their way through to understand it. Again my neck hairs stand up.

Of course, even today, we still don’t know everything. Not even, in fact, why gravity does follow Newton’s law. Einstein got us closer, with the idea of mass bending space; you’ve seen the illustrations, with bowling balls on mattresses. But that seems to me more metaphor than explanation; and physicists continue struggling to integrate gravity with the other fundamental forces to produce a “theory of everything.”

Yet the story I’ve told is the story of humanity growing up: our evolution from a mentality shaped by myth and superstition, steeped in mystery, to one of dispelling mystery by application of reason to observed reality. I’ve read about it in Richard Tarnas’s eloquent book, The Passion of the Western Mind. And he points out that the new modern mindset was not just limited to science. Just as the old cosmology, tethered to religious dogmas, was replaced by a new rationalist view, so too everything in civilization, previously grounded in tradition-bound ideas of divine sanction — absolute monarchical power, aristocratic privilege, arbitrary laws, exploitive economics, etc. — could likewise be supplanted by new and better systems founded upon rationalist concepts of independent human dignity. And so it is coming to pass.

Benjamin Franklin: Reason versus Romanticism

January 17, 2014

Today is Benjamin Franklin’s birthday.

Impressed by Walter Isaacson’s Steve Jobs bio, I thought I’d read his Benjamin Franklin – though familiar enough with the subject that another immersion might have seemed redundant. Not so.

Franklin was actually at one time the world’s most famous scientist. We all know the kite story. I’d recently read somewhere that it’s a myth; that Franklin wrote hypothetically about it but never actually tried it. Isaacson convincingly puts that to rest. Franklin was not an armchair theorist but a “hands on” scientist who loved tinkering and experimenting.

Painting by Benjamin West


And the kite experiment was in fact very important, as it changed our understanding about electricity. Its immediate practical application was the lightning rod, a huge boon to mankind that made Franklin a global hero. But, more significant, as Isaacson explains, electricity was a curiosity when Franklin came to it; he left it a science.

This would have been enough to immortalize anyone. But Franklin was also a prolific writer – Isaacson says he was the best in the colonies. He also served as postmaster for them all, cutting a letter’s delivery time between New York and Philadelphia to one day (!). And somehow Franklin also found time to spearhead the founding of America’s first lending library; a volunteer fire-fighting system; a militia system; a hospital; a police force; and the University of Pennsylvania – America’s first non-sectarian college.

In the latter effort, and the others, Franklin, ever the practical man, had scant use for religion. We constantly hear America was founded as a “Christian nation.” The founders would have gagged at that, as their intent was quite the opposite – to get as far as possible from the old world of dogmatic religion married to state power. Yes, you can find selected quotes giving lip service to conventional pieties – but Jefferson also wrote privately calling religion a form of insanity, and Washington apparently never in his life penned the name “Christ.”

“Deism” was the word of choice, to eschew formal religion while avoiding the dicey term “atheist.” And in those times, quitting God entirely was an intellectual leap very few could manage. Yet the only “religious” belief Franklin really held was to do good by others. And he it was who put “self evident” into the draft Declaration of Independence (in place of “sacred and undeniable”) – thus changing a religious slant to an assertion of Enlightenment rationalism.

Of course, I haven’t even touched upon Franklin’s greatest role: in public affairs as revolutionary, diplomat, and constitution maker. Isaacson quotes the French statesman Turgot: “He snatched lightning from the sky and the scepters from tyrants.”

As some of the civic initiatives noted above show, Franklin was a great one for creating associations, always believing more can be accomplished when people work together. And he was really the progenitor of the greatest association ever: The United States of America. As early as 1754 the “Albany Plan of Union” was conceived by Franklin (who promoted it with our first and most famous political cartoon). That plan incorporated an innovative political invention of his: federalism.

Isaacson’s summation is eloquent. Franklin represents one of two main intellectual currents: reverencing down-to-earth middle class virtues (industry, honesty, temperance, sociability), versus despising them in favor of supposedly more profound and transcendent aspirations. It is Franklin’s Enlightenment ethos versus the romanticism that followed; reason versus feeling; head against heart. Not only have Franklin’s bourgeois values been mocked by sophisticate critics, but also his worldly metaphysics, by those spinning loftier spiritual confections (out of nothing, of course).

Mundane and even simplistic though Franklin’s philosophy might ostensibly seem, Isaacson instead sees something very deep indeed. Always eschewing lofty pretensions, Franklin’s insight grasped the core of what truly mattered: quality of life for the ordinary person. Everything he preached and did was aimed at that. And it was this Franklinism that built, very much through the assiduous personal efforts and influence of the man himself, our American society, so wonderfully conducive, above all others, to that worthy end.

Well, after reading all this, mostly lying out in my lounge chair*, I say to myself that like Franklin I ought to get off my duff and do something.

Maybe tomorrow.

* I wrote this last summer; I have a backlog of blog posts.