Three things we learnt from a new megastudy of climate messaging
Evidence on changing pro-environmental attitudes and behaviour
This post is too long for email. Click on the header to open it in a browser and read the full thing
Out last week: A registered report megastudy on the persuasiveness of the most-cited climate messages. The study tested the effects of 10 well-researched climate change messaging strategies on Americans’ pro-environmental attitudes and behaviour.
Science is a tapestry of evidence, where we shouldn’t put too much weight on a single study. This study, however, is worth paying some attention to, for the following reasons:
It takes evidence from a lot of people. 13,544, to be precise, which is approximately 13,500 more than some studies of persuasion from the dark, unreliable days of psychology, pre-replication crisis.
It has a large, diverse authorship. 25 authors by my count. Most importantly, these researchers come from different research groups, ensuring that the study design has input from people who don’t necessarily agree, and who have different favoured theories they want to support.
I don’t know exactly what qualifies something as a “megastudy”, but I don’t think many participants or many authors alone should count - the crucial thing is the attempt to provide something comprehensive. For this study, the authors first reviewed the literature (157 research papers on climate messaging) and from it selected the ten most-cited messaging strategies for which there was evidence of effectiveness. Testing them all in the same study allowed, first, agreement on how each strategy should be implemented and, second, direct comparison of their effects. Here’s the description of the strategies from the paper (individual strategies bolded by me):
Six selected messaging strategies described supporting climate change mitigation as consistent with moral values and ideologies, such as care and considerateness, free market beliefs, belief in a just world, patriotism, purity and system preservation. Two messaging strategies emphasized the scientific consensus on climate change. The remaining two strategies described the costs for a socially distant group or the gains associated with climate change mitigation.
The study has a number of credibility indicators. Most importantly - it’s right there in the title - it is a registered report, a mechanism whereby the methods for a study are reviewed by the journal, which agrees to publish regardless of how the results turn out. This shifts the focus of journal review onto the importance of the research question, and the solidity of the methods proposed for addressing it, rather than the results. You realise how revolutionary this is for the credibility of published results when you appreciate the range of biases generated by the “review when you know the results” convention (basically, positive results get published, negative results get file-drawered, and authors are incentivised to write up their studies as though they showed a positive result and as though what happened was what they predicted all along). So the registered report format meant the authors had every incentive to design a strong study, and reduced incentive to manipulate the analysis to present positive results, leaving us, the readers, with more faith that what they claim to show is representative1. They also share the anonymised data and analysis code, which is an important transparency measure and credibility signal.
So, worth spending some time with. What do they show?
1. There is bipartisan agreement on climate issues (in many ways)
This first thing isn’t emphasised by the authors of the study, but it is revealed by their results and worth comment. The study polled a sample of American adults, recruiting approximately equal numbers of Democrats and Republicans, so in this sense at least representative2.
These plots show the distribution of responses from participants of different political alignments. Here’s the one for belief in climate change. Although there is a famous partisan divide on this in the US, I’d argue that there is as much similarity as difference here. Most Republicans believe climate change is more likely happening than not (they are above 50% on the belief scale), the same as independents and Democrats (admittedly, more Democrats express 100% certainty on this one).
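To make that midpoint claim concrete, here is a minimal sketch of the calculation involved: the share of each partisan group scoring above 50 on a 0-100 belief scale. The data frame and its column names (party, belief) are my invention for illustration, not the study’s actual variables.

```python
import pandas as pd

# Hypothetical responses; column names are illustrative inventions,
# not those of the study's shared dataset.
df = pd.DataFrame({
    "party": ["Democrat", "Republican", "Independent", "Republican", "Democrat"],
    "belief": [95, 62, 78, 40, 100],  # 0-100: belief that climate change is happening
})

# Share of each group above the scale midpoint, i.e. believing climate
# change is more likely happening than not.
share_above_midpoint = (
    df.assign(above=df["belief"] > 50)
      .groupby("party")["above"]
      .mean()
)
print(share_above_midpoint)
```

Run on the study’s real data, a table like this is all that sits behind my “as much similarity as difference” reading.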
Democrats and Republicans are also similar in their intentions to take “non-political” pro-environmental actions, such as “eat less meat”, “fly less” and “compost”. Or, at least, they are similar in that many who identify as Democrat and many who identify as Republican intend to take pro-environmental action, and some from both groups intend not to.
Where the partisan groups really pull apart is in how they interpret climate change and in the political choices they favour in response to it. Republicans, for example, just seem a great deal less worried about climate change than Democrats:
You can check out the full results in the paper, but my takeaway was that - despite elite messaging questioning the reality of climate change - most of the US population doesn’t reject the reality of climate change; people just differ in what they think should be done about it.
Consequently, we shouldn’t assume that disagreements about climate policies are disagreements about the science of climate change (and if we do, that could be misdirection). There’s a similar story in the UK, where the reality of climate change and the need to do something about it have cross-partisan consensus.
2. Message targeting doesn’t matter
“Persuasiveness varied little across party lines”, report the study authors, “inconsistent with theories predicting heterogeneous effects for targeted messages.”
Put another way, messages were persuasive or not; it didn’t seem to matter whether a Democrat or a Republican was seeing them.
This chimes with the argument from political scientist David Broockman that I reported in my December newsletter, and with what Alex Coppock argued in his book Persuasion in Parallel: How Information Changes Minds about Politics.
The idea of targeted messaging fulfils our hope for a magic bullet that will persuade those who think differently from us (and surely also plays well for the business model of campaign consultants), but the reality may be that the differences between messages are larger (and easier to take advantage of) than the differences in how the same message lands with different demographics.
Although the study authors don’t rub this in, six of the ten messages they tested used a moral value reframing, inspired by the Moral Foundations school of thought. The lack of difference across Democrats and Republicans should specifically dampen our enthusiasm for how practical Moral Foundations is as a way of reaching people of different political persuasions. (That said, many good theories contain deep truths and are also entirely impractical, or impractical until applied right, but at the least this result shows that simplistic translation of Moral Foundations into persuasive messaging doesn’t allow special access to one partisan group or the other).
3. Persuasion is possible, just hard
The third thing I’d take from this study is that, consistent with a lot of the research I review in this newsletter, persuasion is possible. All groups of participants responded to some of the persuasion messages, but the response was typically small - a few percentage points.
Here’s just one of the outcomes they measured: attitude (on a scale of 0 to 100) towards climate mitigation policies (things like “Ban construction of new coal-fired power plants”).
The top strategy - a purity-framing message (“Participants read a text arguing that mitigating climate change is consistent with values such as protecting the purity of American land.”) - looked like it had an average effect of moving people 1.5 points on the 100-point scale. In an election, that can be enough to matter, but in an election you also have the other side trying to push voters in the opposite direction. Does it matter in other contexts? Well, you could say that it is impressive to get any movement in beliefs or attitudes from a short persuasive message. It shows people haven’t completely made up their minds on the issue.
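For a sense of what an effect like that looks like analytically, here is a minimal sketch of the standard comparison: difference in mean attitude between one message condition and the control, with a Welch 95% confidence interval. The numbers, group sizes and variable names are all hypothetical; the study’s real analysis lives in its shared code, not here.

```python
import numpy as np
from scipy import stats

# Hypothetical 0-100 attitude scores for a message condition and the
# control condition; simulated, not the study's data.
rng = np.random.default_rng(0)
control = rng.normal(60.0, 20.0, size=600).clip(0, 100)
message = rng.normal(61.5, 20.0, size=600).clip(0, 100)  # built-in ~1.5pt shift

effect = message.mean() - control.mean()

# Welch's standard error and degrees of freedom for unequal variances.
v_m = message.var(ddof=1) / len(message)
v_c = control.var(ddof=1) / len(control)
se = np.sqrt(v_m + v_c)
dof = (v_m + v_c) ** 2 / (v_m**2 / (len(message) - 1) + v_c**2 / (len(control) - 1))
t_crit = stats.t.ppf(0.975, dof)

print(f"effect: {effect:.2f} points on the 0-100 scale")
print(f"95% CI: [{effect - t_crit * se:.2f}, {effect + t_crit * se:.2f}]")
```

With groups of this size and spread, the confidence interval spans several points, which is why a 1.5-point effect is detectable in aggregate but hard to pin down precisely.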
That said, my native optimism about how reachable people are was given pause by what was perhaps the most interesting question the researchers asked: would participants be willing to give up a proportion of their payment for taking part as a donation to a pro-environmental organisation? No messaging strategy - not one - increased donations relative to the control condition (which was reading a non-persuasive message on a non-climate related topic, such as “the history of neckties”).
In fact, it looked like several messaging strategies decreased participants’ willingness to donate. I don’t read too much into this backfire effect; the donation measure wasn’t a planned part of the study. The most I’d take from it is that the results are incompatible with any persuasion effect on expressed belief flowing directly through to behaviour (willingness to donate). Simply put, what people say is easier to shift than what they do (especially when cash is on the line).
In the words of the study authors, these results highlight “the limits of short-form messages for increasing Americans’ support for action to address climate change.”
Overall
We need more collaborative studies which test different approaches head to head. These particular results should shift down our expectation that it is possible to find bespoke messages for different political groups, while doing little to shift our prior expectation that persuasion, while always possible, is hard.
This newsletter is free for everyone to read and always will be. To support my writing you can upgrade to a paid subscription (more on why here)
Keep reading for the references, more on persuasion, a recommendation for a great podcast and links to some recent things published elsewhere by me.
Reference & More
Voelkel, J.G., Ashokkumar, A., Abeles, A.T. et al. A registered report megastudy on the persuasiveness of the most-cited climate messages. Nat. Clim. Chang. (2026). https://doi.org/10.1038/s41558-025-02536-2
Related, by me
Our intuitions about political advertising are poor. Notes on a great interview with David Broockman
Language models are persuasive - and that’s a good thing. Two new studies provide insights into exactly how LLMs persuade, and what that means.
The truth about digital propaganda. Reasonable People #55: Our piece in New Scientist brings evidence to worries about online manipulation
How persuasive is AI-generated propaganda? Reasonable People #51: Bullet review of a new paper suggesting LLMs can create highly persuasive text and will supercharge covert propaganda campaigns.
Amy Mount: Political Heat
If climate policy and politics is your thing, please read Amy Mount’s reflection on the last 12 months: 2025: When the UK’s climate consensus crumbled
The short version: emissions went down. Renewables and electrification went up. The climate narrative was muted. Wider politics got harder. Voters stayed volatile and parties were polarised. The general election in 2029 will be a defining moment for the UK’s climate progress. But I remain hopeful – because the public is onside. The question is, will it stay there?
And for deep dives on everything from the UK energy market to the Conservatives’ environmental legacy, Amy has you covered in the Political Heat podcast archives
Political Heat
Full disclosure: Amy is my cousin, so I have a conflict of interest here
The Transmitter: The 1,000 neuron challenge
By me, in the neuroscience magazine The Transmitter
A competition to design small, efficient neural models might provide new insight into real brains—and perhaps unite disparate modeling efforts.
Link: The 1,000 neuron challenge
WonkHE: A new model moves research in a more democratic direction – it could be faster and fairer too
By me and Anna Butters. This is about research funding, which is an interesting and important decision problem. I’m proud of the way we wrote this to start very zoomed out on the big picture, then take the reader through to the specific mechanism - distributed peer review - on which we have new evidence.
Here’s the start:
Research quality is the dark matter of the university sector. It is hard enough to assess research after it has been done; research funders must find some way to evaluate proposals for projects which don’t exist yet. The established model for this is external expert review, combined with a panel stage where proposals and their reviews are discussed, and hard choices made.
UK researchers will be familiar with this via our own UKRI, and everyone who has had a funding application rejected will recognise that the reviews received may be partial or misdirected. This speaks to the idiosyncrasy and variability of individual judgments of what makes a good project, and has downstream consequences for what ultimately gets funded.
Link: A new model moves research in a more democratic direction – it could be faster and fairer too
Preprint: Applicants as reviewers: A mixed methods evaluation of the risks, benefits, and potential of distributed peer review for grant funding allocations
Speaking of which, the preprint of our work on this has just been published. It’s a shorter (by 50%) version of the working paper, so not strictly new. Here’s the conclusion:
This trial demonstrates feasibility of DPR [Distributed Peer Review] for an interdisciplinary funding programme. Implementing a DPR process requires funders to accept trade-offs, but can offer valuable benefits, including shortening time to funding decision. Building on established peer review principles, it has the potential to contribute to the democratisation of funding evaluation.
Citation:
Butters, A., Marshall, M. B., Pinfield, S., Stafford, T., Bondarenko, A., Neubauer, B., … Denecke, H. (2026, January 6). Applicants as reviewers: A mixed methods evaluation of the risks, benefits, and potential of distributed peer review for grant funding allocations. https://doi.org/10.31222/osf.io/t2p56_v1
…And finally
Tom Thomson. Northern River, 1914-15. (via @tarnport)
Comments? Feedback? Urban legends? I am tom@idiolect.org.uk and on Mastodon at @tomstafford@mastodon.online
1. Shout out to Norway’s Dam Foundation, which promises higher success rates and enhanced support for research funding applications which use registered reports https://www.dam.no/nyheter/vi-kan-ikke-lenger-akseptere-at-resultater-blir-liggende-i-skuffen/
2. The methods section makes clear that they used a non-representative sample via a survey company, but recruited to be representative on certain dimensions. I didn’t have time to dig into exactly how they managed this. It is perhaps worth noting that, of the people they initially recruited, 44% were excluded from the analysis, which is fairly typical of the low quality of samples from survey companies in my experience (i.e. lots of participants are just in it for the money and are happy to provide duff responses).