‘The climate is always changing’: true, but not helpful

“When we already believe the world to be a certain way, then we interpret new experiences to fit with those beliefs, whether they actually do or not.”

Thus spake Veritasium

I had another post scheduled for today, but this one is far more timely.

My post last week highlighted an example of a meme (‘learning styles’) that’s been around since the 1970s. Despite it being strongly contested and having little evidence to back it up, it clearly strikes a chord that many find hard to resist.

The UK Met Office's 'warning impact matrix', showing 'amber' ticked

Yesterday, the UK Met Office issued its first ever ‘amber extreme heat warning’ for the UK. I’ve not been able to determine exactly when this new warning category was introduced, but even so, its first use should suffice to underscore the reality of climate change.

However, one clear effect this alert has had is to bring out the climate science deniers, spouting dismissive and misleading memes such as “I remember the UK heatwave in 1976”. (I saw half a dozen of those in just one comment thread I read this morning.)

That was then, this is now (take 1)

Yes, I remember the UK heatwave of 1976, too. It was exceptional, it’s true. But it’s not at all relevant. The very fact that it’s raised in knee-jerk response to (yet another) heatwave warning tells me that those bringing it up believe it’s somehow evidence that human-caused climate change isn’t happening. That’s utter nonsense, as demonstrated by the following thirty-second video clip:

Watching the Land Temperature Bell Curve Heat Up (1950-2020) (NASA)
(Hat tip to Peter Sinclair for finding this!)

Just in case you don’t have time to watch that right now, I’ve grabbed two screenshots from it: one from 1976, and the other from 2020.

That was then, this is now (take 2)

“Ah,” says the typical climate science denier, “but, the climate is always changing”. This is true. But it’s also a truism — and extremely misleading; it’s about as relevant as stating that the Sun will rise tomorrow. The point that it misses is that at no time in the geological record have global temperatures risen as fast as they are doing right now.

Here’s another short clip. This one’s just under three minutes long, and I urge you to watch it to the end — when it takes us back, through several ice ages, to the time when our species first appeared on Earth.

History of atmospheric CO2, from 800,000 years ago until January 2019 (CarbonTracker)

That was then, this is now (take 3)

“But carbon dioxide is just a trace gas!” splurts the average clueless denier, seemingly oblivious of the reality that even a ‘trace’ amount of any one of several poisons in their body would render them stone dead. Just as dead, in fact, as if there were no carbon dioxide at all in our atmosphere (because Earth would be a snowball planet).

Svante Arrhenius, winner of the Nobel Prize in Chemistry in 1903, was the first to use basic principles of physical chemistry to estimate the extent to which increases in atmospheric carbon dioxide (CO2) increase Earth’s surface temperature through the greenhouse effect. He did this in 1896. We’ve been twiddling our thumbs, and ‘debating’ the idea (even those who aren’t chemists, physicists, or climate scientists), for one and a quarter centuries. If we’ve learned one thing in all that time, it’s that you can lead someone to knowledge, but you can’t help them think.
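For the curious, Arrhenius’s insight survives in modern climate science as a simple logarithmic law, and it lends itself to a back-of-envelope check. Here’s a minimal sketch in Python; the 5.35 W/m² forcing coefficient is a standard published estimate (Myhre et al., 1998) and the sensitivity parameter is an assumed mid-range value, not Arrhenius’s own 1896 figure:

```python
import math

# Modern form of the CO2 forcing law (Myhre et al., 1998):
#   delta_F = 5.35 * ln(C / C0)   [W/m^2]
# Multiplying by a climate sensitivity parameter gives a rough
# equilibrium warming estimate.
FORCING_COEFF = 5.35   # W/m^2 per e-folding of CO2 concentration
SENSITIVITY = 0.8      # K per (W/m^2); an assumed mid-range value

def warming_k(c_now_ppm: float, c_preindustrial_ppm: float = 280.0) -> float:
    """Rough equilibrium warming for a CO2 rise from c_preindustrial to c_now."""
    forcing = FORCING_COEFF * math.log(c_now_ppm / c_preindustrial_ppm)
    return SENSITIVITY * forcing

print(f"280 -> 415 ppm: ~{warming_k(415):.1f} K")  # roughly where we are now
print(f"280 -> 560 ppm: ~{warming_k(560):.1f} K")  # a doubling: ~3 K
```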

Posted in ... wait, what?, balance, Climate, Communication, Core thought, Education, Environment, GCD: Global climate disruption, Health, perception, Phlyarology, Strategy | 22 Comments

Veritasium on the biggest myth in education

“When we already believe the world to be a certain way, then we interpret new experiences to fit with those beliefs, whether they actually do or not.”

Thus spake Veritasium

Veritasium: This video is about learning styles. What kind of learner are you?

Audience: [Various interactions with people on the street.]

Veritasium: There is this idea in education that everyone has their own preferred way of learning, their so-called ‘learning style’; if information is presented in accordance with the learning style, then they’ll learn better. Now, there are dozens of different learning style theories, but the most common one identifies four main learning styles: visual, auditory, reading/writing and kinesthetic, or ‘VARK’ for short. Visual learners learn best from images, demonstrations and pictures. […] Auditory learners learn best from listening to an explanation. […] Reading/writing learners learn best from reading and writing. […] And kinesthetic learners learn best by doing; physically interacting with the world. […]

Now, learning styles make intuitive sense because we know everyone is different. Some people have better spatial reasoning; others have better listening comprehension. We know some people are better readers, while others are good with their hands.

Daniel Willingham: It sort of very much fits with a broad strain of thought in the recent Western tradition: we’re all unique, we’re all different. And so you don’t want to say, like, “everybody learns the same way”; that sort of conflicts with our feelings about what it means to be human.

Veritasium: So, doesn’t it make sense that people should learn better in their own preferred learning style? Well, teachers certainly seem to think so. A survey of nearly 400 teachers from the UK and the Netherlands found that over 90% believed that individuals learn better when they receive information in their preferred learning style.

The misconception: Just like every professor has a different style of teaching, you have a different style of learning. But when his teacher starts using visuals, Jonathan finds it easier to focus and understand the material, so he might be a visual learner.

Veritasium: Can you tell me what that means to you? What does it mean to be a visual learner?

Guy in black jacket: To me, it means that for me to learn something, sometimes I need to draw it, or I need to write it down, or I need to see a picture or a movie.

‘Honey’: For example, science classes: I get bored easily just listening, and I think it’s more interesting for me to actually be able to do it.

Veritasium: How do you know that you’re a visual learner?

Smart guy!: I don’t. I just assume.

Veritasium: To take advantage of learning styles, then, teachers need to do two things. First, identify the learning style of each of their students, and second, teach each student in accordance with their learning style. On the VARK website, it says: “Once you know about VARK, its power to explain things will be a revelation”. But, before you take an online learning styles quiz, it’s a good idea to ask, “do learning styles even exist?” I mean, do you have one? And if you’re taught in accordance with it, would you learn better?

Well, you could test this by running a randomized controlled trial, where first you would identify learners with at least two different learning styles, say, visual and auditory; and then randomly assign the learners to one of two educational presentations: one visual, one auditory. So, for half of the students the experience will match their learning style, and, for the other half, it won’t. And then you give everyone the same test. If the learning style hypothesis is correct, the results should show better performance when the presentation matches the learning style than when it’s mismatched.
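[Aside (mine, not Veritasium’s): for the statistically minded, here’s a minimal sketch of the comparison such a trial implies. The data are simulated under the ‘no learning styles’ null hypothesis; every number is invented for illustration, and only the design mirrors the one described above.]

```python
# Simulated matched/mismatched trial under the null hypothesis that
# learning styles make no difference to test performance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100  # students per arm

# Test scores out of 100; both arms draw from the same distribution
# because, under the null, matching presentation to 'style' is irrelevant.
matched = rng.normal(loc=65, scale=10, size=n)
mismatched = rng.normal(loc=65, scale=10, size=n)

t, p = stats.ttest_ind(matched, mismatched)
print(f"matched mean    = {matched.mean():.1f}")
print(f"mismatched mean = {mismatched.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")  # p >> 0.05: no detectable effect
```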

I tried a very unscientific version of this experiment on the street. For some people, I matched their learning style, so I showed visual learners pictures of 10 items. But for other visual learners, I read out the items instead.

Veritasium: Bell, penguin, Sun…

Audience: [Interviewees on the street trying to remember the list of things.]

Veritasium: Most people could remember only about five or six things. […] But a few could remember substantially more, say, eight or nine items. […] But the reason didn’t seem to be because the presentation matched their preferred learning style, but because they employed a memory strategy.

‘Honey’: So, like, as you were showing, I was like making an order in my head, so, as I saw more, I would just add it to the list, and I was repeating the list as I was looking at them so I could just say it out loud.

Veritasium: Did you try a strategy while you were looking at those pictures?

‘Story guy’: Yeah. So I guess I tried, like, creating a story because it’s easier to remember a story than just individual objects. So I try to, like, tie it all into one story.

Veritasium: This is obviously anecdotal evidence, but rigorous studies like the one I outlined have been conducted. For example, one looked at visualizers versus verbalizers instead of visual versus auditory learners. The study was computer based. So, first, students’ learning styles were assessed using questions like, “would you rather read a paragraph or see a diagram describing an atom?” The researchers also provided some challenging explanations with two buttons: ‘Visual Help’ or ‘Verbal Help’. The visual one played a short animation, whereas the verbal help gave a written explanation. From these measures combined, the researchers categorized the students as either visualizers or verbalizers, and then the students were randomly assigned to go through a text-based or picture-based lesson on electronics. When a student hovered their mouse over keywords in the lesson in the text-based group, a definition and clarification came up; but in the picture group, an annotated diagram was shown instead. And, after the lesson, the students did a test to assess their learning. The students whose preferred learning style matched their instruction performed no better on the test than those whose instruction was mismatched. The researchers ran the test again with 61 non-college-educated adults and found exactly the same result.

But learning styles are a preference. So, how strongly do learners stick to their preference? Well, in a 2018 study, during the first week of semester, over 400 students at a university in Indiana completed the VARK questionnaire and were classified according to their learning style. Then, at the end of the semester, the same students completed a study strategy questionnaire. So, how did they actually study during the term? Well, an overwhelming majority of students used study strategies that were supposedly incompatible with their learning style; and the minority who did use compatible strategies did not perform significantly differently on the course assessments.

The visual, auditory, reading/writing, kinesthetic — or ‘VARK’ model — came about from Neil Fleming, a school inspector in New Zealand. Describing the origins of VARK, he says, “I was puzzled when I observed excellent teachers who did not reach some learners, and poor teachers who did. I decided to try to solve this puzzle. There are, of course, many reasons for what I observed. But one topic that seemed to hold some magic, some explanatory power, was preferred modes of learning; ‘modal preferences'”. And thus VARK was born. There was no study that revealed students naturally cluster into four distinct groups, just some magic that might explain why some teachers can reach students while others can’t.

But how can this be? If we accept that some people are more skilled at interpreting and remembering certain kinds of stimuli than others, like visual or auditory, then why don’t we see differences in learning or recall with different presentations? Well, it’s because what we actually want people to recall is not the precise nature of the images or the pitch or quality of the sound; it’s the meaning behind the presentations.

There are some tasks that obviously require the use of a particular modality. Learning about music, for example, should have an auditory component. Similarly, learning about geography will involve looking at maps. And some people will have greater aptitude to learn one task over another. Someone with perfect pitch, for example, will be better able to recall certain tones in music. Someone with excellent visual-spatial reasoning will be better at learning the locations of countries on a map. But the claim of learning style theories is that these preferences will be consistent across learning domains. The person with perfect pitch should learn everything better auditorially; but that is clearly not the case. Most people will learn geography better with a map.

Review articles of learning styles consistently conclude there is no credible evidence that learning styles exist. In a 2009 review the researchers note, “the contrast between the enormous popularity of the learning styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing. If classification of students’ learning styles has practical utility, it remains to be demonstrated”.

Daniel Willingham: What we’re expecting is, if your style was honoured, you’re going to perform better than if you had some experience that conflicted with your style. And this is where we don’t see any support for the learning styles theory.

Veritasium: One of the reasons many people find learning styles so convincing is because they already believe it to be true. For example, they might already think that they’re a visual learner, and then when a teacher shows them a diagram of, say, a bike pump and suddenly the concept clicks, well, they interpret this as evidence for their visual learning style.

Daniel Willingham: You already believe that learning styles is right. When you have an experience, the first thing you think is, “is that in some way consistent with learning styles?” And if it is, you don’t think further.

Veritasium: … when in reality that diagram might just be a great diagram that would have helped anyone learn. When we already believe the world to be a certain way, then we interpret new experiences to fit with those beliefs, whether they actually do or not.

So, if learning styles don’t improve learning, then what does? Well, there’s a large body of literature that supports the claim that everyone learns better with multimodal approaches, where words and pictures are presented together rather than either words or pictures alone. […] And this is known as the multimedia effect. And it explains, in part, at least, why videos can be such powerful tools for learning when the narration complements the visuals. […] In my PhD research, I found explicit discussion of misconceptions was essential in multimedia teaching for introductory physics. […]

Ultimately, the most important thing for learning is not the way the information is presented, but what is happening inside the learner’s head. People learn best when they’re actively thinking about the material, solving problems or imagining what happens if different variables change. I talked about how and why we learn best in my video ‘The Science of Thinking’, so check that out.

Now, the truth is, there are many evidence-based teaching methods that improve learning; ‘learning styles’ is just not one of them. And it is likely, given the prevalence of the learning styles misconception, that it actually makes learning worse. I mean, learning styles give teachers unnecessary things to worry about, and they may make some students reluctant to engage with certain types of instruction. And all the time and money spent on learning styles and related training could be better spent on interventions that actually improve learning. You are not a visual learner, nor an auditory learner, nor a kinesthetic learner; or, more accurately, you are all these kinds of learner in one. The best learning experiences are those that involve multiple different ways of understanding the same thing. And best of all, this strategy works not just for one subset of people, but for everyone.

This part of the video was sponsored by Google Search. Now, there are lots of topics out there that are controversial, like learning styles, for example. Most people believe learning styles are a thing, whereas educational researchers find no robust evidence for them. And if you search for ‘learning styles’, you’ll get lots of sites with resources and quizzes. But if you search for ‘learning styles debunked’, well, then you’ll find articles about how there is very little evidence for the learning styles hypothesis.

I think one of the most common traps people fall into is only searching for information that confirms what they already believe. A common mistake is putting the answer you’re looking for right in the search query. A better idea is to try another search, adding ‘debunked’ or ‘false’ at the end and see what comes up. And Google makes it easy to get more detail about the source of the information. Just click the three dots next to any search result and then you can judge for yourself whether the information is trustworthy and if you want to visit the site. A Google search is meant to surface the most relevant information for your query, but it’s up to you to formulate that query; try a few different searches and assess whether the information is reliable. And the whole point of Veritasium is to get to the truth. So, I’m excited to encourage everyone to think more critically about how we get information. I want to thank Google for sponsoring this part of the video, and I want to thank you for watching.

The transcript above was made with the help of Sonix, which did most of the donkey work for a tiny fee (I did have to spend some time tidying it up). Note that I do not have the copyright owner’s permission to publish this transcript here. I’ve investigated the copyright rules regarding transcriptions (more about that here), and one thing I’ve learned is that it’s no defence to make a disclaimer like “these aren’t my words, no copyright infringement intended.” However, I offer the transcription here as a service to society (especially the deaf community). I do hope the copyright owner won’t object. And I hope that you find this video as interesting as I did.

Posted in ... wait, what?, Communication, Core thought, Education, perception, Science | 18 Comments

Extraordinary times call for extraordinary measures

It may (or may not) have been Carl Linnaeus who classified our species as ‘homo sapiens sapiens’ in the 18th Century. As you probably know, that label derives from Latin: ‘homo’ means ‘man’, while ‘sapiens’ can be translated as a number of almost-synonyms; the double-barrelled use here might be read as ‘the wise, thinking man’.

I’ve experienced several decades of life as a member of this species, being a spectator (thankfully, on the sidelines) to the various nonsensical and outrageous behaviours our kind exhibits. I’m sure you know of what I speak: miscellaneous evils and injustices of all sorts, such as installing building cladding that’s not fire-resistant (Grenfell), building clusters of massive condominium complexes on reclaimed land (Champlain Towers), launching ships deemed ‘unsinkable’ (Titanic), warring against others of our own kind (often perpetrated by fanatics whose mantras include “love thy neighbour” and “kill the infidel”) — those kinds of things.

An utterly barmy crusade

I believe that Linnaeus (if it was him) made a poor choice. Particularly in the light of our species’ inherent inability to acknowledge threats that aren’t imminent (climate change being the obvious example), it became clear to me that far from being ‘the wise, thinking man’, we ought to have a moniker that’s more honest, and a whole lot less pretentious.

And so, almost exactly a decade ago, I found myself pondering the question, “what would be a more appropriate name for our species?” ‘Homo sapiens sapiens’ was clearly a misnomer. I settled on ‘homo fatuus brutus’, which translates as ‘the foolish, stupid man’. And thus, I embarked upon a (mostly tongue-in-cheek) campaign to try to get our name changed.

Why would I even try such a thing? One lone nutcase on a blog trying to persuade others to join a lunatic crusade is only going to elicit ridicule, right?

It’s just a crazy thought experiment, but imagine if enough people were to think it a Good Idea, and the name were to actually be changed… it could have far-reaching effects.

Unsurprisingly, my campaign, such as it is, hasn’t been all that successful to date. It’s got some laughs along the way, and some funny looks, but that’s about it. I didn’t really expect much more.

Contemplating the receptacle’s exterior

Over time, I came to believe that this is more important than it would at first appear. In order to address an issue, it is first necessary to admit that there is an issue; only by recognising that there is a problem can one ever hope to take steps to rectify it.

Labels are important. And while we all think of ourselves as ‘wise and thinking’ (some even expanding that to ‘masters of the universe’), we are less likely to concede that we are capable of making mistakes. Contrariwise, any serious attempt to rename our species would, at the very least, bring heated debate, which, regardless of the success or failure of the endeavour, would shine a light on the matter.

And so, I recently considered the idea of trying to get our species name formally changed. The only avenue I can think of by which this might be achieved would be by setting up a petition, in the (admittedly foolish) hope that it might be possible to gain enough signatures to be taken seriously. After all, if millions of US citizens can follow the lead of a prevaricating moron, and (a somewhat smaller number of) millions of UK citizens can idolize a corrupt buffoon, well, the planet is one’s carpius nana.

Archaic rules prohibit reclassifying an existing species

To be effective, a petition needs to be addressed to someone (or some body) that has the power to act on it.

I did some digging. It transpires that the body responsible for ‘zoological nomenclature’ (a fancy way of saying ‘animal naming’) is the International Commission on Zoological Nomenclature (ICZN) [not to be confused with the International Code of Zoological Nomenclature (ICZN)]. Huzzah! thought I, believing I had identified to whom I should address my petition, should it ever gain enough signatories to not be laughed out of court.

Founded in 1895, the ICZN (not the ICZN) is an organization dedicated to “achieving stability and sense in the scientific naming of animals”.

‘Stability’. And ‘sense’. I strongly suspect the sequence of those two words is important, and that the ICZN (entirely understandably) values the former over the latter. And I’m reasonably certain that a request to rename the human species would be dismissed out of hand as utter nonsense.

Which, of course, it is (that’s pretty much the whole point). But, on the other hand, is it any more nonsensical than perpetuating the use of a ‘wise’ label for a species that is arguably on the verge of committing suicide?

Unfortunately, more digging revealed that our fate would appear to be sealed: according to the ICZN, names are locked in by the ‘Principle of Priority’, which says, in a nutshell, that as we have already been named, we can’t be renamed.

The eternally hungry ouroboros

It seems that the phlyarological ouroboros is complete, and self-sustaining.

Having ascended to the cliff’s edge, the short-sighted homo fatuus brutus is about to step off.

(Just out of curiosity… if I were to actually set up such a petition, would you sign it?)

Posted in ... wait, what?, Communication, Core thought, perception, Phlyarology, Strategy | 23 Comments

The Koala Conspiracy

The most recent post from Larry Oliver’s blog Echoes from a Pale Blue Dot is from 2017. (That site, sadly, seems to have tumbleweeds rolling through it at present, though Larry is still active on Twitter at @tweetingdonal.) That post is a reblog of this insightful article by Jacob A Tennessen (@JacobPhD) from the year before. Though it’s half a decade old, it’s a great example of how just because something’s old doesn’t mean it no longer has legs. Well worth a read.

Adaptive Diversity

Do marsupials even exist?

The word of the year for 2016 is officially “post-truth.” It seems a lot of folks just don’t care very much for facts. Instead, they form beliefs based on subjective feelings about what kind of experts are trustworthy and what kinds of stories fit their existing worldview. Fake news is rampant. It thrives under a secular version of Poe’s law: when politics has been fractured into extremes, any tale about the opposition sounds plausible. We are at an impasse. If showing people the data is not good enough, what is?

For science educators, this is nothing new. The most dispiriting and challenging aspect of science outreach isn’t ignorance, it’s willful denial. Folks who have heard about climate change, evolution, the effectiveness of vaccines, or the safety of GMOs, but simply refuse to believe it. It’s frustrating. How do objective scientists reach out to…

View original post 714 more words

Posted in balance, Communication, Core thought, GCD: Global climate disruption, perception, Reblogs, Science, Strategy | 11 Comments

Wax off! (A lunar epiphany)

A couple of years ago, I had a minor epiphany about the Moon. I wrote a post about it at the time, called ‘How to tell at a glance if the Moon is waxing or waning’. Since then, when I’ve seen the Moon in its not-full phase, I’ve used my ‘wax on/wax off’ rule (as described in that post) to determine whether it’s waxing or waning.

And in all that time, I can only remember ever thinking, “Ah, wax on!”. I can’t recall ever thinking, “Ah, wax off!”.

… until this morning, that is:

It’s not easy to see, but the Moon is up there….

The Sun was coming up — and I suddenly realised why I hadn’t seen a ‘wax off’ before. A waxing moon is on the part of its monthly cycle when it’s moving away from the Sun in the sky, trailing it by ever more hours… and so it’s up in the evening, easy to spot against a dark sky. A waning moon is closing back in on the Sun: it rises ever later at night and lingers into the morning, so when it’s above the horizon it’s more likely to be competing with daylight, and much harder to see.

… at least, I think that’s right. Assuming that it is, it’s entirely possible that you may be thinking, “Well, duh!” If so, please cut me a bit of slack; it’s taken me over half a century to figure it out on my own :)
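Here’s a toy model of that geometry, in Python. It assumes a circular orbit and ignores the Moon’s orbital inclination and the observer’s latitude, so it’s a sketch of the asymmetry rather than an ephemeris:

```python
# Toy model: a moon at eastward elongation E (degrees from the Sun)
# rises and sets roughly E/360 * 24 hours after the Sun does.

def trail_hours(elongation_deg: float) -> float:
    """Hours by which the Moon trails the Sun across the sky."""
    return elongation_deg / 360.0 * 24.0

phases = [("waxing crescent", 45), ("first quarter", 90), ("full", 180),
          ("last quarter", 270), ("waning crescent", 315)]

for name, elong in phases:
    lag = trail_hours(elong)
    if lag < 12:
        sky = "evening, dark sky"   # sets after the Sun: easy to spot
    elif lag == 12:
        sky = "up all night"        # full moon: rises at sunset
    else:
        sky = "morning, daylight"   # lingers past sunrise: washed out
    print(f"{name:16s} trails the Sun by {lag:4.1f} h -> {sky}")
```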

Header image adapted from
white round shape on black background
by Mason Kimbarovsky on Unsplash

Posted in ... wait, what?, Communication, Education, memetics, Science | 17 Comments

Daryl Davis Is Still Going Strong

This:

In one case, Davis said, he listened as a K.K.K. district leader brought up crime by African Americans and told him that Black people are genetically wired to be violent. Davis responded by acknowledging that many crimes are committed by Black people but then noted that almost all well-known serial killers have been white and mused that white people must have a gene to be serial killers.

When the K.K.K. leader sputtered that this was ridiculous, Davis agreed: It’s silly to say that white people are predisposed to be serial killers, just as it’s ridiculous to say that Black people have crime genes.

The man went silent, Davis said, and about five months later quit the K.K.K.

Filosofa's Word

In 2017, Keith and I both wrote about a man named Daryl Davis, a Black man who is doing more than his share to help white supremacists stop being white supremacists, one at a time. If you’re interested, here are links to Keith’s post and mine. Last weekend, Pulitzer Prize-winning journalist Nicholas Kristof’s column looked to Davis and his technique, in hopes of taking a page from his playbook to find ways to deal with people on the other side of the many divisive issues we are confronted with today. I think it is well worth considering …


‘How Can You Hate Me When You Don’t Even Know Me?’

By Nicholas Kristof

Opinion Columnist

One of the questions I’m asked most is: How do I talk to those on the other side of America’s political and cultural abyss? What can I say to my brother/aunt/friend who thinks Joe…

View original post 866 more words

Posted in ... wait, what?, balance, Communication, Core thought, People, perception, Phlyarology, Reblogs, Strategy | 5 Comments

Is it actually true that seeing is believing?

What colour are the dots?

The dots inside these hearts are distinctly blue or green, right?

Distinctly blue and green, right?

Wrong. The dots inside the hearts are all exactly the same colour. Here’s a section of the top left of the same image, where I’ve blanked out the misleading colours (I made a bit of a mess of it to be honest, but hey, I don’t get paid to do this).

Don’t believe me? By all means, prove it to yourself if you need to. (If you’re using the Firefox browser, you can use the built-in eyedropper: Click the main menu in the upper right corner, scroll to More Tools and then to the eyedropper.)
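(For the command-line inclined, a couple of lines of Python will do the same job as the eyedropper. This is a minimal sketch assuming the Pillow library is installed; the filename and pixel coordinates are placeholders you’d adjust for wherever you save the image and wherever the dots fall in your copy.)

```python
from PIL import Image

img = Image.open("hearts.png").convert("RGB")  # placeholder filename

# Pick one dot from a 'blue' heart and one from a 'green' heart;
# the coordinates below are placeholders: adjust to land on the dots.
blue_looking = img.getpixel((120, 80))
green_looking = img.getpixel((340, 200))

print("dot in 'blue' heart: ", blue_looking)
print("dot in 'green' heart:", green_looking)
# If the illusion works as described, both print the same RGB triple.
```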

What colour is the star Betelgeuse?

The other day, my brother (no, not him, another one) told me that he’d seen the star Betelgeuse in his binoculars, and that it was yellow. I found this odd, as it’s my understanding that Betelgeuse is a red supergiant; I’ve seen it in the sky on numerous occasions, and it’s always looked red to me. At first, in discussion with my bro, I accepted that I could be mistaken: knowing that it’s called a ‘red supergiant’ may have primed me to see it as red, when, maybe, its colour doesn’t appear that way in reality.

But the very next day, by one of those ‘coincidence’ things, YouTube presented me with the following video:

Transcription below

… and, naturally, because it’s entitled ‘This Is Not Yellow’, I watched the video (I couldn’t resist: I’d been primed to do so by my conversation with my brother the previous day). And what I learned from it in the first few minutes (as is his wont, Michael Stevens soon changes tracks to other matters) was that the human eye has no receptor dedicated to yellow: we perceive it indirectly, via the combined response of our red and green cones, which is why a screen can fake it so easily.

So, I did some digging. And various sources confirmed my original belief that Betelgeuse is a red supergiant star.

What colour is our sun?

This reminded me of a conversation I’d (coincidentally) had recently with Professor Kipping, of ‘Cool Worlds’ fame, in the comments section of a recent video of his (The Red Sky Paradox) in which he’d claimed that our own sun is ‘yellow’. Although I respect Prof. Kipping immensely, he’s wrong about this: our sun is actually not yellow, it’s white (with a hint of pink). Interestingly, in the comments on that video, Prof. Kipping talks about our sun being yellow ‘because that’s how it appears when we look at it’. I find it odd that he would make this assertion, since we can’t look at our sun directly, at least when it’s overhead: anyone foolish enough to try would quickly go blind. The only time one can look at it in relative safety is at sunrise and sunset; and then it appears red (not yellow, not white), because the long path through the intervening atmosphere scatters the shorter, bluer wavelengths out of the direct beam, leaving mostly red to reach the eye.
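The physics behind that reddening is Rayleigh scattering, whose strength scales as 1/λ⁴. A quick sketch with representative wavelengths shows why the blues go first:

```python
# Rayleigh scattering strength goes as 1/wavelength^4, so over a long
# sunset path the blues are stripped from the direct beam first.
blue, green, red = 450.0, 550.0, 650.0  # representative wavelengths, nm

def rayleigh_relative(wavelength_nm: float, reference_nm: float = red) -> float:
    """Scattering strength relative to red light."""
    return (reference_nm / wavelength_nm) ** 4

for name, wl in [("blue", blue), ("green", green), ("red", red)]:
    print(f"{name:5s} ({wl:.0f} nm) is scattered "
          f"{rayleigh_relative(wl):.1f}x as strongly as red")
# blue comes out ~4.3x red: most of it never reaches your eye directly.
```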

Since moonlight is just reflected sunlight, if the Sun were yellow, the Moon would appear to be, too. And anyone who’s been fortunate enough to observe Baily’s Beads during a total solar eclipse will confirm that the ‘diamond ring’ is white (not yellow, and not red).

All of which is a bit of a sidetrack: the main point I wanted to make here is that these are good illustrations of how easily our brains and our beliefs can mislead us, and how important it is to retain an open mind, and to remain sceptical of beliefs — especially our own.


Transcript of ‘This Is Not Yellow’

Michael Stevens: Using GPS, these trails represent pizza delivery in Manhattan on a typical Friday night, and is this a frog… or a horse? It’s episode 52 of IMG.

This lemon looks yellow to me, and it probably looks yellow to you as well, but not in the same way. You see, here in this room, this lemon is subtractively yellow. It absorbs all visible wavelengths of light except for yellow light, which it reflects onto my retina. But the screen that you are using to watch this video doesn’t produce yellow light at all. In fact, it can only produce red, blue or green light. The really cool but kind of disturbing thing about this is that here in the room I am actually seeing real yellow light; but you are seeing fake yellow. Absolutely no yellow is coming off of your screen and falling on your retina, but it still looks yellow because it’s quite easy to lie to the brain.

Our retinas contain three different types of cone cells that are receptive to color, and each one is best suited to detect a certain color: one is great for blue; the other is great for green, and the third is great for red. Notice that there’s no individual cell looking for yellow. So the way we actually see yellow happens like this: The wavelength of yellow light falls between the wavelengths of red and green, and so when an object reflects yellow light onto your retina, both the green and the red cones are slightly activated, which your brain notices and says, well, that’s what happens when something’s yellow, so it must be yellow. All a computer monitor or a mobile phone screen has to do to make you think you’re seeing yellow is send a little bit of red and a little bit of green light at you. As long as the pixels and the little sub-pixels on them are small enough that you can’t distinguish them individually, your brain will just say, well, I’m receiving some red and some green; that’s what yellow things do: it must be yellow, even though it actually is not.
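(An aside of my own, not part of the video: you can fake yellow yourself in a few lines of Python. This sketch assumes the Pillow library is installed; it tiles one-pixel red and green stripes, then downsamples so they blend, just as your eye blends a screen’s sub-pixels at normal viewing distance.)

```python
from PIL import Image

# Alternate 1-pixel red and green columns, like RGB sub-pixels.
stripes = Image.new("RGB", (400, 200))
for x in range(400):
    for y in range(200):
        stripes.putpixel((x, y), (255, 0, 0) if x % 2 == 0 else (0, 255, 0))

# Downsampling averages neighbouring pixels: red + green -> dark yellow.
blended = stripes.resize((100, 50), Image.LANCZOS)
print(blended.getpixel((50, 25)))  # roughly (128, 128, 0): olive/yellow
blended.show()  # optional: view the blended patch
```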

(The rest of Vsauce’s video is totally irrelevant to this post, but it’s been transcribed anyway so here it is — you may find it of interest.)

Lemons can also produce electricity: a little bit of zinc, a little bit of copper and boom, you’re moving electrons around; but not that many. I mean, the current and voltage are quite low; you could run an LCD, but even a potato could do that. If you wanted to run a flashlight bulb, that would take 3,000 lemons. And if you wanted to run a halogen bulb, well, that would take 37,000 lemons. But artist Caleb Charland doesn’t care. He spent 11 hours hammering nails into 300 living apples hanging on trees. By connecting them to a household lamp, he was able to make it glow just dimly enough to capture this image with a four-hour exposure.

Less alive and more frightening are Steve Shaheen‘s sculptures; little dudes with bulb heads desperately trying to plug themselves in.

Merve Kahraman’s ‘revitalizer’ never dies. It’s a light bulb surrounded by wax. The wax melts from the bulb’s heat and drips into a special container, forming all kinds of weird new shapes. But whenever you want, you can just flip it so that the newly cooled wax is at the top. But my favorite is the Fukasada wooden lightbulb: it looks like a solid block of wood, but it’s actually hollow, chipped to a nearly paper-thin shell. When you turn it on, you can see the light coming from inside.

Combos: Artist Tang Yau Hoong blew my mind this week. We’ve got clear days and smoke; boats and crocs; whales and hearts; pi-bike; brains and boxing gloves; day and night — but don’t be scared, you can always paint yourself some light or just swing on some light. OK, let’s frame it this way: climbing wall. This fitness club in Japan uses frames and other pretty interior elements to create a decidedly less rugged climbing wall.

But let’s get simple, like minimal. Thanks to Lego, here are their bricks arranged to represent famous characters.

Now for some art illusions. Here’s a cute couple, but can you see in this very same image the baby they will soon have? Or how about these zebras? There’s a lion hiding amongst them; can you find it? Billboards can be clever, but here’s a great one that makes it look like someone is pushing out a section of the building.

But how many of you will remember seeing it? If we assume that you don’t remember experiencing major cultural events before you’re five or six years old, that means that every year there are fewer and fewer people alive who remember experiencing recent historical events. XKCD made this amazing chart to show when, in the future, the majority, more than half, of living Americans will not remember being alive when certain things happened. For instance, he calculated, using data from the U.S. Census Bureau, that 2012, this very year, is the first year in American history in which fewer than half of living Americans remember being alive in the 1970s. By 2041, most of us won’t remember a time when Pluto was actually called a planet. By 2043, most of us alive won’t remember living during George W Bush’s presidency. And by 2047, more than half of living Americans will not have been alive to remember anything that you did today, like when you made that funny face in the yearbook. No, no, no. That funny face. If you’re not following @tweetsauce, you are missing out on daily Vsauce content, most of which never makes it to a video. So go follow us on Twitter and I’m going to leave you with another combo, a tessellation, while you listen to Jake Chudnow’s [unclear]. He made a music video for the song over on his channel, so check that out.

And, as always, thanks for watching.

Michael Stevens AKA Vsauce; transcript courtesy of Sonix

Header image adapted from
closeup photo of person
by Marina Vitale on Unsplash

Posted in ... wait, what?, Core thought, illusion, perception, Phlyarology | 11 Comments

From 1985: Warnings from Carl Sagan and Al Gore (Take Two)

Climate deniers will have a hard time explaining these to their grandchildren, the kids who are now woke to the disasters they’ve been served by blindness and greed.

Astounding find by climatestate.

Source: Climate Denial Crock of the Week

I don’t usually repost content; I think it’s much better to simply offer links (though I’m aware they’re rarely followed). But exceptional times call for exceptional measures.

The story so far: I recently stumbled upon an earlier post of mine from 2018, a standard reblog from Peter Sinclair’s ‘climate crocks’ site. The post was just a video, accompanied solely by Peter’s comment, above. In the comment thread on that post, Climate State (@climatestate) said of the video:

Thanks Peter for sharing, took me two days to compile this gem. Notice also that Manabe mentions briefly sea ice decline.

Comment by ‘Climate State’ on Peter’s post

Unfortunately, the YouTube video in the original post — and so on my reblog too — has gone walkies. Given that this was a ‘gem’ that had taken ‘two days to compile’, I wanted to revisit it — and, of course, make it available to others again. So I got in touch with Climate State on Vimeo, who kindly pointed me to another copy. However, due to a WordPress bug that (currently, at least) prevents embedding in reblog posts, I’m posting it here, instead.

With a little luck, this copy of the video may enjoy better longevity. Whether homo fatuus brutus will enjoy the same has yet to be revealed….

Source: climatestate.com
Posted in balance, Biodiversity, Climate, Core thought, Environment, GCD: Global climate disruption, Reblogs, Science | 15 Comments

Planned obsolescence: conspiracy fact, not conspiracy theory

Full transcription below

One of my very earliest wibblettes, back in 2007, was a simple rant about the very topic that ‘Veritasium’ expounds here. I’ve touched on the subject several times since (for instance in 2011, when I asked the question “Hey, clever people, can you design to last?”; I’m still waiting on an answer that isn’t “No, because we have to ‘design for the dump’ or civilization will collapse”… /eyeroll).

We really do, desperately, need to get off this ‘stuff’ treadmill. But before we can even think about doing that, we need to open our eyes to the fact that it exists. Veritasium sheds some light on the matter:

Veritasium: This is a video about things, like cars, phones and lightbulbs; and an actual conspiracy that made them worse. This video was sponsored by NordVPN; more about them at the end of the video. I am outside Livermore Fire Station #6; and in here, they have the longest continuously-on lightbulb in the world. It has been on for 120 years, since 1901. It’s not even connected to a light switch — but it does have a back-up battery and generator. So, the big question is: how has this lightbulb lasted so long? It was manufactured by hand not long after commercial lightbulbs were first invented, and yet it has been running for over a million hours; way longer than any lightbulb today is meant to last.

A while back, a friend of mine told me this story: that someone had invented a lightbulb that would last forever — years ago; but they never sold it because an everlasting lightbulb makes for a terrible business model. I mean, you would never have any repeat customers and eventually you would run out of people to sell lightbulbs to. I thought this story sounded ridiculous. If you could make an everlasting lightbulb, then everyone would buy your lightbulb over the competitors’: and so you could charge really high prices, make a lot of money — even if demand would eventually dry up. I just couldn’t imagine that we had better lightbulbs in the past, and then intentionally made them worse. But it turns out I was wrong. At least, sort of.

Inventing a viable electric light was hard. I mean, this is the typical incandescent design, which just involves passing electric current through a material, making it so hot that it glows. You know, less than 5% of the electrical energy comes out as light; the other 95% is released as heat. So, these are really ‘heat bulbs’ which give off a little bit of light as a by-product. You know, the temperature of the filament can get up to 2800 Kelvin. That is half as hot as the surface of the sun. At temperatures like those, most materials melt; and if they don’t melt, they burn. Which is why in the 1840s, Warren De la Rue came up with the idea of putting the filament in a vacuum bulb so there’s no oxygen to react with. By 1879, Thomas Edison had made a bulb with a cotton thread filament that lasted 14 hours. Other inventors created bulbs with platinum filaments or other carbonized materials; and, gradually, the lifespan of bulbs increased. The filaments changed from carbon to tungsten, which has a very high melting point; and by the early 1920s, average bulb lifetimes were approaching 2000 hours, with some lasting 2500 hours. But this is when lifetimes stopped getting longer, and started getting shorter.
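[Aside (mine, not Veritasium’s): Wien’s displacement law makes the ‘heat bulb’ point nicely. At filament temperature, a blackbody’s peak emission sits deep in the infrared; a quick Python check:]

```python
# Wien's displacement law: peak emission wavelength of a blackbody.
# At filament temperature the peak is well into the infrared, which is
# why an incandescent bulb is mostly a heater that leaks a little light.
WIEN_B = 2.898e-3  # m*K

def peak_wavelength_nm(temp_k: float) -> float:
    return WIEN_B / temp_k * 1e9

print(f"filament (2800 K): {peak_wavelength_nm(2800):.0f} nm (infrared)")
print(f"sun (5800 K):      {peak_wavelength_nm(5800):.0f} nm (visible)")
```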

In Geneva, Switzerland, just before Christmas 1924, there was a secret meeting of top executives from the world’s leading lightbulb companies: Philips, International General Electric, Tokyo Electric, Osram from Germany and the UK’s Associated Electric, among others. They formed what became known as the Phoebus cartel, named after Phoebus, the Greek god of light. There, all these companies agreed to work together to help each other: by controlling the world’s supply of lightbulbs. In the early days of the electrical industry, there had been lots of different small lightbulb manufacturers; but, by now, they had largely been consolidated into these big corporations, each dominant in a particular part of the world. The biggest threat they all faced was from longer lasting lightbulbs. For example, in 1923, Osram sold 63 million lightbulbs: but the following year they sold only 28 million. Lightbulbs were lasting too long, eating into sales. So, all the companies in the cartel agreed to reduce the lifespan of their bulbs to 1000 hours, cutting the existing average almost in half.

But how could each company ensure that the other companies would actually follow the rules and make shorter lasting lightbulbs? After all, it would be in each of their individual interests to make a better product to out-sell the others. Well, to enforce the thousand-hour limit, each of the manufacturers had to send in sample bulbs from their factories, and they were tested on big test stands like this one. If a bulb lasted significantly longer than a thousand hours, then the company was fined. If a bulb lasted longer than three thousand hours? Well, the fine was 200 Swiss Francs for every thousand bulbs sold; and there are records of these fines being issued to companies. But, how do you make a worse lightbulb in the first place? Well, the same engineers who had previously been tasked with extending the lifespan now had to find ways to decrease it. So they tried different materials, different shaped filaments and thinner connections: and if you look at the data, they were successful. Ever since the formation of the cartel, the lifespan of lightbulbs steadily decreased so that by 1934 the average lifespan was just 1205 hours. And just as they had planned, sales increased for cartel members by 25% in the four years after 1926. And even though the cost of components came down, the cartel kept prices virtually unchanged, so they increased their profit margins.

So, did people know that the lightbulb companies were conspiring together to make their products worse? No. The Phoebus cartel claimed that its purpose was to increase standardization and efficiency of lightbulbs; I mean, they did establish this screw thread as standard: you can find it on virtually all lightbulbs around the world now. But all evidence points to the cartel’s being motivated by profits and increased sales, not by what was best for consumers.

So, one of the reasons this lightbulb has lasted so long is because it was made before the cartel era. Another reason is because the filament has always been run at low power, just four or five Watts. It was meant to be a night light for the fire station, to provide just enough light so that firemen wouldn’t run into things at night. And the fact that it was always on reduced the thermal cycling of the filament and components, limiting the stress caused by thermal expansion and contraction. The Phoebus cartel was initially planned to last at least until 1955, but it fell apart in the 1930s. It was already struggling due to outside competition and non-compliance amongst some of its members, but the outbreak of World War Two is really what finished it off. So, this cartel was dead; but its methods survive to this day. There are lots of companies out there that intentionally shorten the lifespan of their products. It’s a tactic known now as ‘planned obsolescence’.

This was actually the subject of Casey Neistat’s first viral video, all the way back in 2003:

Ryan: Thank you for calling Apple; my name’s Ryan, may I have your first name, please?

Casey: Casey.

Ryan: All right, what seems to be the issue today?

Casey: I have an iPod that I bought about 18 months ago, and the battery is dead on it?

Ryan: Mmhm? 18 months? OK, it’s past its year, which basically means there’ll be a charge of $255, plus a mailing fee to send it to us to refurb it, to correct it. But at that price, you know, you might as well go get a new one.

Veritasium: This video got millions of views in a time before YouTube or social media, and it spawned a class action lawsuit which Apple settled out of court. But it didn’t stop the company from practising planned obsolescence. After an iOS update in 2017, users of older iPhones found apps loading significantly slower, or the device shutting down altogether. Apple said they’d throttled performance to protect the battery of older devices and increase their longevity. Of course, that wouldn’t be an issue if the battery were replaceable. In a series of lawsuits that concluded in 2020, Apple was fined, or reached settlements to pay, hundreds of millions of dollars. Undoubtedly, this amount pales in comparison to the extra revenue they generate by limiting the lifespan of their products.

But some would argue that planned obsolescence isn’t just about greed, but that it’s also good for everyone. During the Great Depression in the 1930s, when as much as a quarter of Americans were out of work, an American real estate broker, Bernard London, proposed mandatory planned obsolescence as a way to get people back to work and lift America out of the Depression. He wrote, “I would have the government assign a lease of life to shoes, and homes, and machines when they are first created, and they would be sold and used within the term of their existence, definitely known by the consumer”. After the allotted time had expired, these things would be legally ‘dead’, and would be controlled by the duly appointed governmental agency and destroyed if there is widespread unemployment. Now, this might sound like a wild fringe idea; but people were clearly afraid of being put out of work by technological progress and products that were too good.

There was even a popular Oscar-nominated film about it. This is ‘The Man in the White Suit’, from 1951. It’s about a scientist who invents the perfect fibre; it won’t stain, or break, or fray….

Sidney Stratton (Alec Guinness): I think I’ve succeeded in the co-polymerisation of amino acid residues and carbohydrate molecules; both containing ionic groups. It’s really perfectly simple.

Veritasium: The Academy Award nomination was for best screenplay: I kid you not. Anyway, everyone is initially excited about our hero’s scientific discovery: he makes a suit out of the thread, and it has to be white because the fibre is so stain-resistant it can’t even be dyed. But this is when trouble strikes: the factory owners realize they won’t be able to sell as much of this thread because it’s so durable; and the workers worry it’ll put them out of a job.

Ignorant old washerwoman: Why can’t you scientists leave things alone? What about my bit of washing when there’s no washing to do?

Veritasium: This is when you get the climactic scene where factory workers and factory owners team up to chase down the scientist to destroy him and his invention. And believe it or not, this movie may have been inspired by real events. In the 1940s, the synthetic fibre nylon replaced silk in stockings, and it was so durable that the products became an overnight sensation. There were literal riots when women tried to get their hands on them. When the manufacturers realized they had made the product too good, they didn’t destroy the fibre; but they did follow the example of the Phoebus cartel: they instructed their engineers and scientists to find ways to weaken the product; to shorten its lifespan so people would have to buy more.

Now, it seems like consumers are finally fighting back against planned obsolescence. In the European Union, and in over 25 states in the US, there’s proposed legislation to enshrine the ‘right to repair’. These laws would force manufacturers to make it easier to repair their products. They would have to provide information and access to parts so you could replace a battery or fix a cracked screen at a third party repair shop without voiding your warranty. So, if the right to repair does become law, does that mean artificial obsolescence will be gone for good? Sadly, no, because there is one last thing manufacturers can use to make their products obsolete, which is you.

Henry Ford released the first mass market car, the Model T, in 1908, and he envisioned it like a workhorse, an affordable tool that wouldn’t wear out; a bit like the everlasting lightbulb. In 1922, Ford said, “We want the man who buys one of our cars never to have to buy another. We never make an improvement that renders any previous model obsolete”. But by 1920, 55% of American families already owned a car. Nearly everyone that could afford one, had one. And that same year, there was a small economic downturn, driving down sales for both Ford and General Motors. In 1921, Dupont, the chemical and paint company, took over the controlling share in General Motors, and they started experimenting with painting cars different colours. Up until then, Henry Ford had said, “you could have whatever colour you like — so long as it’s black”. It took a couple of years of testing, but in 1924, GM released their first cars in different colours, and soon after they introduced a trick that feels very familiar now. Each new year, they would introduce cars in different colours.

The goal wasn’t just to make Ford’s Model T look outdated, but to make their own cars feel outdated every year; encouraging customers to trade in their old cars for shiny new ones. Years later, GM’s head of design, Harley Earl, candidly discussed his role in creating what he called ‘dynamic obsolescence’: “Our big job is to hasten obsolescence. In 1934 the average car ownership span was five years; now (he was speaking in 1955) it is two years. When it is one year, we will have a perfect score.” By the time he said this, General Motors was the most valuable company in the world, and it sold half of all vehicles purchased in the US every year. These days, the world’s most valuable company, Apple, seems to have copied directly out of this play-book. I mean, new styles every year? Check. New special colours every year? Check. Marginal technological improvement? Check. I mean, is this useful innovation, or just a gimmick?

The inspiration for General Motors, and hence for Apple, comes from fashion; where real innovation is all but impossible. So, the only way to make people feel the urgency to get out there and buy is to create styles that last but one season. The trouble then is, you run through these styles too quickly — and then what are you supposed to do? Well, just recycle the styles from a few decades ago. The iPhone also shows this recycling trend. I mean, just look at the way the edges were initially rounded; and then they were squared off; and then they were rounded again; and now they’re squared off. And how much do you want to bet that the iPhone 14 has rounded edges? I think the point is that with design and styling, there is no ‘best,’ there’s only ‘different,’ which is apparently enough to remind us that we don’t have the latest and greatest, and so we have to rush out and keep buying.

The only type of obsolescence we should support is technological. Which brings us back to the lightbulb. You know, in the last 20 years, lightbulbs have gone from incandescent, which was basically unchanged for a hundred years, to compact fluorescent; and now to LED. These use just a tenth the energy and can last anywhere from 10 to 50 times longer. Yeah, that’s pretty bright. So, you’re more likely to sell your house than to have to replace an LED bulb that you’ve installed inside it. So, we’ve finally reached the point of what is essentially an everlasting lightbulb.
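[Another aside of mine: the video’s round numbers make the economics easy to sanity-check. The wattages and electricity price below are assumptions for illustration, not figures from the video:]

```python
# Back-of-envelope running-cost comparison over one cartel-era
# incandescent lifetime, using assumed round numbers.
HOURS = 1000                  # one 1000-hour incandescent lifetime
INCANDESCENT_W, LED_W = 60, 6 # 'a tenth the energy'
PRICE_PER_KWH = 0.20          # assumed tariff; adjust for your own

def running_cost(watts: float, hours: float) -> float:
    return watts / 1000 * hours * PRICE_PER_KWH

print(f"incandescent, {HOURS} h: ${running_cost(INCANDESCENT_W, HOURS):.2f}")
print(f"LED, same {HOURS} h:     ${running_cost(LED_W, HOURS):.2f}")
```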

[NordVPN advert snipped] So, I want to thank NordVPN for sponsoring Veritasium, and I want to thank you for watching.

Veritasium

… and I want to thank Sonix for making it possible to transcribe this video without it taking me a dozen hours or more!


PS I’ve been reminded, by some visitors who have searched Wibble for the keyword ‘conspiracy’, that I posted a short article last year entitled ‘The Conspiracy Theory Handbook’. If you’ve got this far and are interested in the topic, you may want to take a look at that; I think you’ll find it interesting.


Header image adapted from
people sitting on white concrete stairs‘ (?)
by Susan Q Yin on Unsplash

Posted in ... wait, what?, balance, Business, Capitalism, Communication, Core thought, Economics, Phlyarology, Strategy | 25 Comments

Is “better late than never” always true?

The Question

Is “better late than never” always true, or are there times where never would be the preferred option?

PCGuyIV’s ‘Truthful Tuesday’: June 1st, 2021

After pondering this awhile, I have a problem with it: it’s not one question, but two. And I think that, to answer it, it’s necessary to focus on the words ‘always’ and ‘never’.

To take the second part first: I think that ‘never’ can never be the ‘preferred option,’ as in ‘one that is deliberately chosen’. One could only ever choose the ‘never’ option if the benefits of doing so were to outweigh the benefits of taking the planned action, and, barring a change in circumstances that might negate the original plan, I cannot think of a single example where that might be true.

Missing the launch window

As PCGuyIV points out in his own answer, ‘late’ implies a deadline. If that is time-critical, failing to meet it can make further action towards the goal pointless, or nonsensical.

Take, for instance, a Mars probe: if its launch window has passed and the rocket is still on the launch pad, the mission has to be postponed — for two years or more — or even scrubbed entirely. While scrubbing equates with ‘never’, the option is not one that was chosen in advance; the situation has changed. There will, presumably, have been unavoidable reasons for the delay, despite best efforts to achieve the objective.
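That ‘two years or more’ comes from the synodic period: the time it takes Earth to ‘lap’ Mars so that the launch geometry repeats. A quick back-of-envelope check in Python:

```python
# Launch windows to Mars recur once per synodic period:
#   1 / (1/T_earth - 1/T_mars), with sidereal periods in years.
earth_period = 1.000
mars_period = 1.881

synodic_years = 1 / (1 / earth_period - 1 / mars_period)
print(f"{synodic_years:.2f} years")          # ~2.14 years
print(f"~{synodic_years * 12:.0f} months")   # ~26 months between windows
```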

Altered circumstances after the event

‘Never’ can also appear to be an ‘option’ if new facts come to light revealing that the failure to meet the deadline has changed — or even improved — the situation, despite initial conditions that may have suggested action was required. But since this implies a faulty analysis, or misunderstanding, of the conditions leading to the original action plan, the ‘never’ option can never be the preferred choice for that original plan. And it should never be used as an excuse to procrastinate, although sometimes that is exactly what does happen….

So, is “better late than never” always true?

My answer to this is a simple ‘no’. As PCGuyIV says, one has to consider the consequences of not meeting the deadline; but since there are some situations in which being too late can have disastrous results, defying rectification by any action (the Titanic comes to mind), the maxim is not always true.

Delaying action on climate change, for example, is something homo fatuus brutus has been doing now for decades. The “we need more information” mantra has been pushed by many, fuelled largely by vested interests and the merchants of doubt with the intention of maintaining business as usual as long as possible, despite the evidence. While the mantra sounds reasonable, what some demand is 100% certainty, which is totally unreasonable (it’s a science denial technique known as ‘impossible expectations’).

Graph of CO2 mitigation curves at different starting points
(click to embiggen) — blatantly thieved from The Conversation without permission

Coming up to the tail-end of 2014, I asked a question of my own: “Are we ready for 2015?”. It was, of course, rhetorical; but the CO2 mitigation graph above gives a clear answer: no, we weren’t ready. And now, six and a half years later, we’re still not.

Posted in ... wait, what?, Communication, Core thought, GCD: Global climate disruption, Phlyarology, Strategy | 16 Comments