Disconnexion

One thing we left out in last week’s complaint is generative AI’s undoubted ability to magnify the worst of human online behavior. A few days ago, the world discovered that X’s chatbot, Grok, can be commanded to “nudify” images of women and children – that is, digitally remove their clothes without their consent. A number of commenters also note that some of the same British politicians who are calling out X and Grok about this and who more broadly insist on increasing restrictions in the name of online safety nonetheless continue to post there. Even Ashley St. Clair, the mother of one of Elon Musk’s sons, is unable to get these images taken down. Some ministers have called for banning this form of deepfake software.

Among those calling for Elon Musk to act “urgently” are technology secretary Liz Kendall and prime minister Keir Starmer. The BBC reported this morning (January 9) that the government is calling on Ofcom to use “all its powers”. At Variety, Naman Ramachandran reports that X has moved AI image editing behind a paywall.

On January 2, at the National Observer, Jimmy Thompson calls on the Canadian government to delete its accounts. On Wednesday, the Commons women and equalities committee announced it would stop using X. As of January 8, both Kendall and Starmer are still posting on X, along with the UK’s Supreme Court and the Regulatory Policy Committee and doubtless many others. Ofcom, the regulatory agency in charge of enforcing the Online Safety Act, posted a statement on January 5 saying it has contacted X and plans a “swift assessment to determine whether there are potential compliance issues that warrant investigation”. At the Online Safety Act Network, Lorna Woods explains the relevant law.

My guess is that few politicians manage their own social media – an extreme form of mental compartmentalization – and their aides are schooled in the belief that “we must meet the audience where they are”. In that sense, these accounts are not ordinary users, who use social media to connect to their friends and other interesting people. Politicians, like many others who are paid to show off in public, use social media to broadcast, not so much to participate. But much depends on whether you think that Grok’s behavior is one piece of a fundamental structural problem with X and its ownership or whether you believe it’s an isolated ill-thought-out feature to be solved by tweaking software, a distinction Jason Koebler explores at 404 Media.

The politicians’ accounts doubtless predate Musk’s takeover. Twitter was – and X is – small compared to other social media. But the short-burst style perfectly suited journalists, who gave it far more coverage than it probably deserved. Politicians go where they perceive the public to be, which is often signaled by media coverage.

It’s not necessarily wrong for politicians and government agencies to argue that they should be on X to serve their constituents who use it. But to legitimize that claim they should also be cross-posting on every significant platform, especially the open web. We can then argue about the threshold for “significant”. At a guess, it’s bigger than a blog but smaller than Mastodon, where politicians are notoriously absent.

***

The early 2020s’ exciting future of cryptocurrencies has gotten lost in the last couple of years’ excitement over our new future of technologies pretending to be “smart”. In 2023’s “crypto winter”, we thought anyone still interested was either an early booster or someone who thought they could smell profit. As Molly White wrote this week, those people have spent the last two years nursing grudges and building a political machine that could sink large parts of the economy.

More quietly, as Dave Birch predicted in 2017 (and repeated in his 2020 book, The Currency Cold War) “serious people” were considering their approach. Among them, Birch numbered banks, governments, and communities.

Now, governments are hatching proposals. As 2025 ended, the European Council backed the European Central Bank’s digital euro plan; the European Parliament will vote on it this year. The Financial Times reports that this electronic alternative to cash could help European central bankers pull back some control over electronic retail payments from the US organizations that dominate the field. The ECB hopes to start issuing the currency in 2029. In the UK, the Bank of England is mulling the design of the digital pound. The International Monetary Fund sees the digital euro as a means of continuing financial stability.

Birch dates government interest to Facebook’s now-defunct 2019 cryptocurrency plan. Today, I imagine new motives: the US’s diminishing reliability as an ally raises the desirability of lessening reliance on its infrastructure generally. Visa, Mastercard, and other payment mechanisms largely transit US systems, a reality the FT says European banks are already working to change. In March, ECB board member Philip R. Lane argued that the digital euro will foster monetary autonomy.

We’ll see. The Economist writes that many countries are recognizing cash’s greater resilience, and are rethinking plans to go all-digital.

It remains hard to know how much central bank digital currencies will matter. As I wrote in 2023, there are few obvious benefits to individuals. For most of us the problem isn’t the mechanism for payments, it’s finding the money.

Illustrations: Bank of England facade.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

The AI who was God

Three subjects dominated 2025: increasing AI infestation, expanding surveillance use of biometrics, and age verification and online safety. The last spent the year spreading across the world, most recently to Louisiana, where on December 22 a US District Court blocked the state’s law on First Amendment grounds in a suit brought by the trade association NetChoice. Less than a week earlier, NetChoice also won, on similar grounds, a suit in Arkansas against a law that would have penalized platforms for “using designs or algorithms” that they “know or should have known” could harm users by, for example, leading to addiction, drug use, or self-harm. The judge in this case called the law “unconstitutionally vague”. Personally, I suspect cause and effect would be hard to prove anyway.

However, much of the rest of the year felt like rinse-and-repeat, only bigger and more frustrating. The immediate future – 2026 – therefore looks like more of all of those perennial topics, especially surveillance. This time next year we will still be fighting over age verification, network neutrality, national identification systems, surveillance, data protection, security issues surrounding the Internet of Things and other “smart” devices, social media bans, and access to strong encryption, along with other perennials such as copyright and digital sovereignty.

It is however possible that AI might have gone quiet by then. Three types of reasons: financial, technical, and social.

To take finances first, concerns about the AI bubble have been building all year. In the latest of his series of diatribes about this and the “rot economy”, Ed Zitron writes that AI is bringing “enshittification Stage Four”, in which companies, having already turned on their users and customers, turn on their shareholders. Zitron traces the circular deals, the massive debt, the extravagant claims, and the disproportionately small revenues, and invokes the adage, if something can’t go on forever, it will stop.

On the technical side, no matter what Elon Musk predicts, more sober commentary at MIT Technology Review is calling for a “reset”. As Adam Becker writes in More Everything Forever, one thing that can’t go on ad infinitum is exponentially increasing computing power: exponential growth always hits resource limits. It is entirely possible that come 2027 we’ll have run out of all sorts of road on this current paradigm of “AI”. If so, expect to hear a lot more about how quantum is ready to remake the world. Generative AI will still be around, and bigger, ten years from now (just as the Internet was ten years after 2000, when the dot-com boom crashed), but it won’t become sentient and fix climate change.

Brief digression. On Mastodon, Icelandic web developer Baldur Bjarnason posts that he’s hearing people claim that studies showing that large language models won’t lead to AGI are “whitewashing creationism”. Uh…huh?

On the social side, pressure is mounting to curb the industry’s growth. Politicians as different as US senator Bernie Sanders (D-VT) and Florida governor Ron DeSantis (R) are working to slow data center construction. Data centers guzzle power and water, as Zitron also explains, and nearby residents pay both directly and indirectly.

Other harms keep mounting up. The year’s Retraction Watch annual report includes myriad fake references. Salesforce fired 4,000 people before realizing, only now, that large language models can’t do their jobs; other companies nonetheless want to copy it. Organizers canceled a concert by Canadian musician Ashley MacIsaac after a Google AI summary wrongly said he’d been convicted of sexual assault.

At Utah’s Park Record, Cannon Taylor reported recently that in late October an AI summary indicated that a West Jordan, Utah police officer had morphed into a frog. Simple explanation: a Harry Potter movie playing in the background had been recorded by the officer’s bodycam during an investigation. Per the story, the summary seemed human-written until, “And then the officer turned into a frog, and a magic book appeared and began granting wishes.”

The story goes on to report several different AI software trials. One product, used in Summit County, has a setting that inserts deliberate errors into the summaries to expose officers who don’t thoroughly check them. With that setting turned off, the time savings over having officers write their own summaries are considerable. Summit County turned it on. The time savings vanished. The county decided to pass.

Back when pranksters used to deface web pages for fun, a pastime more embarrassing than harmful, I thought it would be much worse when they learned to make small, hard-to-detect changes that poisoned the information supply.

AI is perfect for automating this.

In their “FakeParts” paper (PDF), researchers at the Institut Polytechnique de Paris discuss a disturbing example: subtle, localized AI-driven changes to otherwise real videos. These fakeparts blend in seamlessly; identifying them is far harder than spotting a complete fake, which on its own is hard enough. The researchers warn that subtle changes to facial expressions or gestures can change the emotional content of genuine statements, great for creating targeted attacks and sophisticated disinformation campaigns.

Cut to James Thurber‘s 1939 fable, The Owl Who Was God. If AI kills us, it will be because we trust it without applying common sense.

Illustrations: Barred owl (photo by Steve Bellovin, used by permission).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: More Everything Forever

More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity
By Adam Becker
Basic Books (Hachette)
ISBN: 9781541619593
Publication date: April 22, 2025

A friend who would be 93 now used to say that the first time he’d read about the idea of living long enough to live forever was when he was about eight. Even at that age, he was a dedicated reader of science fiction, though he also said this was a habit so weird at the time that he had to hide it from his classmates.

Cut to 1992, when I reviewed Ed Regis’s book Great Mambo Chicken and the Transhuman Condition for New Scientist. Regis traveled the American southwest, finding cryonicists, guys building rockets in the desert, people wondering whether gravity was really necessary, people figuring out how to make backups of our brains, people spinning chickens in centrifuges to understand the impact of heavier-than-Earth gravity, that sort of thing. Regis called it “fin-de-siècle hubris”.

In 1992 it was certainly tempting to believe that this sort of craziness was somehow related to the upcoming millennium. Today’s techbros have no such excuse, yet their dreams are the same. This is the collection Timnit Gebru and Émile Torres have dubbed TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism, all of it, as Adam Becker explains in More Everything Forever, more of a rebranding than a new vision of the future.

You could accordingly view Becker’s book as a follow-up, more than 30 years on. Regis could present the people involved as a mostly whacked-out bunch of dreamers, but since then it’s all become much more serious. Today’s chicken-spinners are armed with massive amounts of money and power and are willing to ignore the present suffering of millions if it means enabling their image of the future. We’ve met this crowd before, in the pages of Douglas Rushkoff’s Survival of the Richest. These are the folks who treat science fiction’s cautionary tales as a manual for what to build.

Becker does a fine job of tracing the history of the various TESCREAL strands. Most are older than one might expect, some with roots in Christian beliefs thousands of years old. Isn’t fear of death, which Becker believes lies at the core of all this, as old as humanity? At last year’s CPDP, Mireille Hildebrandt called TESCREAL “paradise engineering”.

“If it violates physics, you can ignore it,” I was told at a conference on these topics after I asked how to distinguish the appealing-but-impossible from the well-maybe-someday. Becker proves the wisdom of this: his grounding in engineering and physics helps him provide essential debunking. Mars is too far away and too poisonous for humans to settle there any time soon. Meanwhile, he points out, Moore’s Law, which underpins projections by folks like Ray Kurzweil that computational power will continue to accelerate exponentially, is far more likely to end, like all other exponential trends. Physics, resource constraints, the increasing difficulty of finding new technological paradigms, and the fact that we understand so little of how the human brain or consciousness really works are all factors. The reality, Becker concludes, is that AGI is at best a long, long way off.

The censorship-industrial complex

In a sign of the times, the Academy of Motion Picture Arts and Sciences has announced that in 2029 the annual Oscars ceremony will move from ABC to YouTube, where it will be viewable worldwide for free. At Variety, Clayton Davis speculates about how advertising will work – mid-roll, perhaps? The obvious answer is to place the ads between the list of nominees and opening the envelope to announce the winner. Cliff-hanger!

The move is notable. Ratings for the awards show have been declining for decades. In 1960, 45.8 million people in the US watched the Oscars – live, before home video recording. In 1998, the peak, 55.2 million, after VCRs, but before YouTube. In 2024: 19.5 million. This year, the Oscars drew under 18.1 million viewers.

On top of that, broadcast TV itself is in decline. One of the biggest audiences ever gathered for a single episode of a scripted show was in 1983: 100 million, for the series finale of M*A*S*H. In 2004, the Friends finale drew 52.5 million. In 2019, the Big Bang Theory finale drew just 17.9 million. YouTube has more than 2.7 billion active users a month. Whatever ABC was paying for the Oscars, reach may matter more than money, especially in an industry that is also threatened by shrinking theater audiences. In the UK, YouTube is the second most-watched TV service ($), after only the BBC.

The move suggests that the US audience itself may no longer be as uniquely important as it was historically. The Academy’s decision fits a number of similar trends.

***

During this week’s San Francisco power outage, an apparently unexpected consequence was that non-functioning traffic lights paralyzed many of the city’s driverless Waymo taxis. In its blog posting, the company says, “While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets.”

Friends in San Francisco note that the California Driver’s Handbook (under “Traffic Control”) is specific about what to do in such situations: treat the intersection as if it had all-way stop signs. It’s a great example of trusting human social cooperation.

Robocars are, of course, not in on this game. In an uncertain situation they can’t read us. So the volume of requests overwhelmed the remote human controllers and the cars froze, blocking intersections and even sidewalks. Waymo suspended the service temporarily, and says it is updating the cars’ software to make them act “more decisively” in such situations in future.

Of course, all these companies want to do away with the human safety drivers and remote controllers as they improve cars’ programming to incorporate more edge cases. I suspect, however, that we’ll never really reach the point where humans aren’t needed; there will always be new unforeseen issues. Driving a car is a technical challenge. Sharing the roads with others is a social effort requiring the kind of fuzzy flexibility computers are bad at. Getting rid of the humans will mean deciding what level of dysfunction we’re willing to accept from the cars.

Self-driving taxis are coming to London in 2026, and I’m struggling to imagine it. It’s a vastly more complex city to navigate than San Francisco, and has many narrow, twisty little streets to flummox programmers used to newer urban grids.

***

The US State Department has announced sanctions barring five people and potentially their families from obtaining visas to enter or stay in the US, labeling them radical activists and weaponized NGOs. They are: Imran Ahmed, an ex-Labour advisor and founder and CEO of the Center for Countering Digital Hate; Clare Melford, founder of the Global Disinformation Index; Thierry Breton, a former member of the European Commission, whom under secretary of state for public diplomacy Sarah B. Rogers called “a mastermind” of the Digital Services Act; and Josephine Ballon and Anna-Lena von Hodenberg, managing directors of the independent German organization HateAid, which supports people affected by digital violence. Ahmed, who lives in Washington, DC, has filed suit to block his deportation; a judge has issued a temporary restraining order.

It’s an odd collection as a “censorship-industrial complex”. Breton is no longer in a position to make laws calling US Big Tech to account; his inclusion is presumably a warning shot to anyone seeking to promote further regulation of this type. GDI’s site’s last “news” posting was in 2022. In August 2025, HateAid helped a client file suit against Google, and in July it sued X for failing to remove criminal antisemitic content. The Center for Countering Digital Hate has also been in court to oppose antisemitic content on X and Instagram; in 2024 Elon Musk called it a “criminal organization”. There was more logic to “the three people in hell” taught to an Irish friend as a child (Cromwell, Queen Elizabeth I, and Martin Luther).

Whatever the Trump administration’s intention, the result is likely to simply add more fuel to initiatives to lessen European dependence on US technology.

Illustrations: Christmas tree in front of the US Capitol in 2020 (via Wikimedia).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Slop

Sometimes it doesn’t pay to be first. iRobot, the maker of the Roomba, has filed for Chapter 11 bankruptcy protection and been acquired by Picea, one of its Chinese suppliers, Lauren Almeida reports at the Guardian. The company’s value has cratered since 2021.

Given the wild enthusiasm that greeted the Roomba’s release in 2002, the fall seems incredible. Years before then, I recall hearing a speaker, whose identity I don’t remember, say that ever since he’d mentioned the possibility of a robot vacuum cleaner, sometime back in the 1960s, he’d gotten thousands of letters asking when it would be ready. There was definitely customer demand. It helped that the Roomba itself was kind of cute as it banged randomly into furniture. People named them, and took them on vacation. But, as often happens, the Roomba’s success attracted lower-cost competitors, and the first mover failed to keep up.

I got one in 2003. After a great few months, I realized that Roombas are not compatible with long hair, which ties them into knots that take longer to cut out than vacuuming. I gave it away within a year and haven’t tried again.

At Mashable, Leah Stodart warns that although the Roombas people already have will continue to work “for now”, users can’t be confident that this state of affairs will continue. Like so many other things that used to be things we owned and are now things we subscribe to (but still think we “buy”), newer-model Roombas are controlled by an app that the manufacturer can change or discontinue at will. She calls it “unplanned obsolescence”. Her advice not to buy a new one this year is sound from the consumer’s point of view, but hardly likely to help the company survive.

***

If generative AI is so great, why is everyone forcing it on us? The latest example, Luke James reports at Tom’s Hardware, is LG “smart” TVs whose users woke up the other day to find a new update had installed “CoPilot: Your AI Companion” without asking permission and that there was no option to remove it. The most you can do to disable it, James says, is keep your TV disconnected from the Internet.

There are of course many more, the automated summaries popping up everywhere being the most obvious. Then, Matthew Gault reports at 404 Media, a Discord moderator and an Anthropic executive added Anthropic’s Claude chatbot to a community for queer gamers, who had voted to restrict Claude to its own channel. Result: major exodus. Duh.

And, of course, as Lance Ulanoff reminds us at TechRadar, there is “AI slop” everywhere – music playlists, YouTube videos, ebooks – threatening people’s livelihoods even though, as Cory Doctorow has written, “AI can’t do your job. But an AI salesman can convince your boss to fire you and replace you with a chatbot that can’t do your job.” For a while, anyway: Microsoft is halving its sales targets for AI.

And thus we get “slop” as the word of the year, per Merriam-Webster. Any time companies are this intent on foisting something on us – chatbots, ads – you have to know the push favors their interests, not ours.

***

Last week, Customs and Border Protection published a notice in the Federal Register proposing new rules for foreigners traveling to the US on an ESTA (“Electronic System for Travel Authorization”) as part of the visa waiver program. It has drawn a lot of discussion in the UK, one of the 42 affected countries. Under the new rules, applicants must install CBP’s app, into which they must submit a massive load of “high-value” personal information. The list is long, allows for a so-far-imaginary future of DNA sampling, and expects you to be able to give five years’ worth of family members’ residences, phone numbers, and places of birth, plus all the email addresses you’ve used for ten years. CBP thinks the average applicant should be able to complete it on their smartphone in 22 minutes. I think it would take hours of painful, resentful typing on a stupid touch keyboard, and even then I doubt I could fill it out with any certainty that the information I supplied was complete or accurate. Data collection at this scale makes it easy to find an error to use as an excuse to deny entry to or deport someone you want to get rid of. As Edward Hasbrouck writes at Papers, Please, “Welcome to the 2026 World Cup”.

“They have to be planning to use AI on all that data,” a friend commented last week. Probably – to build social graphs and find connections deemed suspicious. Privacy International predicts that the masses of data being demanded will in fact enable the AI tools necessary to implement automated decision making, and calls the proposals disproportionate for “a family’s visit to Disney World”.

One of the problems Hasbrouck highlights while opposing this level of suspicionless data collection is that CBP has not provided any way for would-be respondents to the Federal Register notice to examine the app’s source code. What other data might it be collecting?

As Hasbrouck adds in a follow-up, the rules the US imposes on visitors are often adopted by other countries as requirements for US travelers. In this game of ping-pong escalation, no one wins.

ID is football

On Wednesday, Australia woke up to its new social media ban for under-16s. As Ange Lavoipierre explains at ABC News, the ban isn’t total. Under-16s are barred from owning their own accounts on a list of big platforms – Facebook, Instagram, Threads, Twitch, YouTube, TikTok, X, Reddit, Kick, and Snapchat – but not barred from *using* those platforms. So, inevitably, there are already reports of errors and kids figuring out how to bypass the rules in order to stay in touch with their friends. The Washington Post’s report contains this contradiction: “Numerous recent polls indicate that a solid majority of Australians support the ban, but that young respondents largely don’t plan to comply.”

Helpfully, ABC News reported a couple of months ago that researchers, led by the UK’s Age Check Certification Scheme, have tested age assurance vendors and found that “old man” masks and other cheap party costumes apparently work to fool age estimation algorithms.

Edge cases are appearing, such as the country’s teen Olympians – skateboarders and triathletes – for whom the ban disrupts years of building fan communities, potentially also disrupting some of their funding.

Meanwhile, the BBC reports that a pair of 15-year-olds, backed by the Digital Freedom Project, are challenging the ban in court. Josh Taylor reports at the Guardian that Reddit is also suing.

At Nature, Rachel Fieldhouse and Mohana Basu write that the ban’s wider effects will be independently assessed by scientists. This is good; defining “success” solely by the number of blocks bypassed substitutes an easy measure for the long-term impacts, which are diffuse, difficult to measure, and subject to many confounding variables.

But we know this: the ratchet effect applies. I first encountered it in the context of alternative medicine. Chronic illnesses have cycles; they improve, plateau, get worse. Apply a harmless remedy. If the patient gets better, the remedy is working. If they stay the same, the remedy has halted the decline. If they get worse, the remedy came too late. In all cases, the answer is more of the remedy. So it is with online child safety: the answer is always that more restrictions are needed. In the UK, where the Online Safety Act has been in force for mere months, three members of the House of Lords have already proposed a similar ban as an amendment to the Children’s Wellbeing and Schools Bill.

***

Keir Starmer’s vague plan for a mandatory digital ID is back. This week saw a Westminster Hall debate, as required after nearly three million people signed an online petition opposing it.

At Computer Weekly, Lis Evenstad reports that MPs across all parties attacked the plan, making familiar points: the target such a scheme could create for criminals, the change it would bring to the relationship between citizens and the state, and the potential threat to civil liberties. They also attacked its absence from Labour’s election manifesto; last month, Fiona Brown reported at The National that Palantir UK head Louis Mosley said on Times Radio that the company would not bid on contracts for the digital ID because it hasn’t had “a clear, resounding ballot box”.

Also a potential issue is cost, which the Office for Budget Responsibility recently estimated at £1.8 billion. According to SA Mathieson at The Register, the government has rejected the figure but declined to provide an alternative estimate until its soon-to-be-launched consultation has been completed.

Also hovering in the background, weirdly ignored, is the digital identity and attributes trust framework, which has been in progress for the last several years at least.

Beyond that, we still have no real details. For this reason, in a panel I moderated at this week’s UK Internet Governance Forum, I asked the panelists – Dave Birch, Karla Prudencio, and Mirca Madianou – to try to produce some principles for what digital ID should and should not be. Birch in particular has often said he thinks Britain as a sovereign state in the 21st century sorely needs a digital identity infrastructure – by which he *doesn’t* mean anything like the traditional “ID card” so many are talking about. As we all agree, technology has changed a lot since 2005, when this was last attempted. Since then: blockchain, smartphones, social media, machine learning, generative AI. So we agree that far: anything the government proposes really should look very different from the last attempt.

Here are the principles our discussion came up with:
– Design for edge cases, as a system that works for them will work for everyone.
– Design for plural identities.
– Don’t design the system as a hostile environment.
– Don’t create a target for hackers.
– Understand the real purpose.
– Identification is not authentication.
– Understand public-private partnerships as three-way relationships with users.
– Design to build public trust.

And one last thought:
– Sometimes, ID is football.

That last is from Madianou’s field work in Karen refugee camps along the border between Thailand and Myanmar. One teenaged boy badly wanted an ID card so that he could leave the camp to play football in a nearby village and return safely without being arrested. It’s a reminder: identification can mean many different things in different situations.

Illustrations: The Mae La refugee camp in Thailand (by Tayzar44 at Wikimedia).

Also this week: TechGrumps 3.34 – ChatGPT is not my wingman.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

A road not taken

Nearly 20 years ago, I attended a conference on road pricing. The piece I wrote about it (PDF) for Infosecurity magazine suggests it was in late 2007, three years after transport secretary Alistair Darling proposed bringing in a national road pricing scheme. The idea represented a profound change; until a few years earlier, congestion had always led to building more roads. In 2003, however, London mayor Ken Livingstone instead implemented the congestion charge – and both traffic and pollution levels dropped.

So this conference explored the idea that road pricing would cut traffic to match road capacity, taking us off the vicious spiral of increasing road capacity and watching traffic rise to choke it. Darling’s proposal, which followed a 2004 feasibility study, was for a satellite tracking scheme. In 2007, however, prime minister Tony Blair effectively dropped the idea after 1.8 million people signed a petition opposing it.

This week’s announcement of road pricing for electric vehicles is rather differently motivated, but it reawakened my memory of the 2008 discussion. Roads must be paid for somehow, and, as the Institute for Fiscal Studies foresaw in 2012, the rise of electric vehicles inevitably eats away at revenues from fuel taxes. EVs have many benefits: they can be powered without fossil fuels; their engines emit no carbon or other pollutants; and they are quieter. However, they weigh 10% to 30% more than internal combustion engine vehicles, and tire wear remains a significant pollutant.

Back in 2005 there were three main contenders for per-mile road pricing: automated number/license plate readers; tag and beacon; and time-distance-place. At the time, versions of these were already in use: the first was in place to administer London’s congestion charge; the second, effectively an update to paying at the tollbooth, was in place on turnpikes in the American northeast and in the UK at Dartford Crossing; the third was being used in Germany’s HGV system, which collects tolls for the kilometers driven on the country’s autobahns. In a 2007 paper, Cambridge researchers David N. Cottingham, Alastair Beresford, and Robert K. Harle analyzed the technologies available.

Whatever you call them, limited-access highways – autobahns, motorways, interstates, thruways – are a relatively simple problem because there are relatively few entry and exit points. Tracking, as transponders read by automated tollbooths have made possible, remains a privacy concern. Such a scheme was deemed unworkable for London, where TfL counted 227 entry points to the most congested area, and barriers would simply create new chokepoints. For this reason, and also because it estimated that 80% of cars entering the congestion zone are infrequent users, TfL opted for a system of cameras that read license plates on the fly and an automated system to send out penalty notices if someone hasn’t paid. This system also seems difficult to imagine scaling to a national level; every road, street, and back alley would have to have ANPR cameras. In the US, where Flock cameras are collecting ANPR data at scale, law enforcement and immigration authorities are already exploiting it in anti-democratic ways, as 404 Media reports.

In 2008, TDP, a much more likely approach for a nationwide system of per-mile pricing, would have required a box installed in every vehicle to track it, likely via GPS, and to report time and location data via mobile networks for calculating what the owner should pay. No one was then sure whether road users would accept having tags in their vehicles or be willing to pay the considerable expense; as I seem to have written in that 2008 Infosecurity article, “‘We’re going to change your behavior and charge you for the privilege’ isn’t much of a sales pitch.” But such a system would enable charging people proportionately, based on their actual road use.
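To make concrete what a TDP back end would have to compute, here’s a minimal sketch; the zones, rates, and coordinates are all invented for illustration, not taken from any actual proposal:

```python
# Hypothetical time-distance-place (TDP) charging: the rate varies by both
# where and when you drive. All zones, periods, and rates here are invented.
from math import dist

PENCE_PER_MILE = {
    ("urban", "peak"): 12.0, ("urban", "offpeak"): 4.0,
    ("rural", "peak"): 2.0, ("rural", "offpeak"): 1.0,
}

def tdp_charge(samples):
    """samples: ordered GPS fixes as (x_miles, y_miles, zone, period)."""
    total = 0.0
    for (x1, y1, zone, period), (x2, y2, _, _) in zip(samples, samples[1:]):
        miles = dist((x1, y1), (x2, y2))          # length of this segment
        total += miles * PENCE_PER_MILE[(zone, period)]
    return total                                  # pence owed for the trip

trip = [(0, 0, "urban", "peak"), (1, 1, "urban", "peak"),
        (5, 4, "rural", "offpeak")]
print(f"{tdp_charge(trip):.1f}p")                 # ≈77.0p for this toy trip
```

Every vehicle would have to report a continuous stream of such fixes, which is exactly where the privacy worries and the expense come from.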

If we were updating that discussion, parts would be unchanged. Congestion charge-style ANPR cameras everywhere would be no more feasible now than they were then. Germany’s motorway system would similarly not be feasible for smaller roads and within cities. TDP, however…

Here in 2025, most people are already carrying smartphones with GPS just part of the package. So there could be a choice: buy a box that is irretrievably embedded in the vehicle, or download a TDP app that’s somehow tied to and paired with the car, perhaps via its electronic key, so that it won’t start unless the app-car link is enabled. (Fun for anyone whose battery dies in the course of an evening out.) In addition, cars already collect all sorts of data and send it to their manufacturers. So it’s also possible to imagine a government requiring manufacturers active in the UK to transmit time and location data to a specified authority.

Obviously, the privacy implications of such a system would be staggering. Law enforcement would demand access. Businesses whose fleet patterns are commercially sensitive would hate it. And the UK’s successive governments have shown themselves to be highly partial to centralized databases that are built for one purpose and then are exploited in other ways. For this reason, Beresford’s idea in 2008 was for a privacy-protecting decentralized system using low-cost equipment that would allow cars to identify neighboring non-payers and report only those.

The good news is that the details we have so far of the government’s proposals suggest something far simpler: report the odometer reading at each year’s annual vehicle check and multiply the miles driven by the per-mile charge. So unusual these days to see a government propose something so simple and cheap. Whether it’s a good idea to discourage the shift to EVs at this particular time is a different question.
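A minimal sketch of that calculation, with a made-up rate and hypothetical readings (the real scheme’s details remain unconfirmed):

```python
# Odometer-based per-mile charging: miles driven since last year's vehicle
# check times a flat rate. The rate and readings below are hypothetical.

def annual_ev_charge(last_reading: int, this_reading: int,
                     rate_pounds_per_mile: float) -> float:
    """Return the charge in pounds owed at this year's annual check."""
    miles = this_reading - last_reading
    if miles < 0:
        raise ValueError("odometer reading went down; suspect tampering")
    return miles * rate_pounds_per_mile

# Example: 8,000 miles at a hypothetical 3p per mile comes to £240.
print(annual_ev_charge(52_000, 60_000, 0.03))  # 240.0
```

No tracking, no boxes, no apps; the only data point needed is a number already recorded at the annual check.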

Illustrations: A fork in a road (via Wikimedia).

At Plutopia, we interview Bruce Schneier about his new book, Rewiring Democracy, which examines the good and bad of what AI may bring to democracy.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Review: The Seven Rules of Trust

The Seven Rules of Trust: Why It Is Today’s Essential Superpower
by Jimmy Wales
Bloomsbury
ISBN: 978-1-5266-6501-0

Probably most people have either forgotten or never known that when Jimmy Wales first founded Wikipedia it was widely criticized. A lot of people didn’t believe an encyclopedia written and edited by volunteers could be any good. Many others believed free access would destroy Britannica’s business model, and reacted resentfully. Teachers warned students against using it, despite the fact that Wikipedia’s talk pages offer rare transparency into how knowledge is curated.

Now we know the Internet is big enough for both Wikipedia and Britannica.

Much of Wikipedia’s immediate value lay in its infinite expandability; it covered in detail many subjects the more austere Britannica considered unworthy. But, as Wales writes at the beginning of his recent book, The Seven Rules of Trust, Wikipedia’s biggest challenge was finding a way to become trusted. Britannica must have faced this too, once. Its solution was to build upon the reputation of the paid experts who write its entries. Wikipedia settled on passion, transparency, and increasingly rigorous referencing. As it turns out, collectively we know a lot. Today, Wikipedia is nearly 100 times the size of Britannica, has hundreds of language editions, and is so widely trusted that most of us don’t even think about how often we consult it.

In The Seven Rules of Trust, Wales tells the story of how Wikipedia got from joke to trusted resource. It began, he says, with its editors trusting each other. For this part of his story, he relies on Frances Frei‘s model of trust, a triangle balancing authenticity, empathy, and logic. Editors’ trust enabled the collaboration that could build public trust in their work, which is guided by Wikipedia’s five pillars.

Wales’s seven rules are not complicated: trust is personal, even at scale; people are born to connect and collaborate; successful collaboration requires a clear positive shared purpose; give trust to get trust; practice civility; stick to your mission and avoid getting involved in others’ disputes; embrace transparency. Some of these could be reframed as the traditional virtues, as when Wales talks about the principle of “assume good faith” when trying to negotiate the diversity of others’ opinions to reach consensus on how to present a topic. I think of this as “charity”. Either way, it’s not meant to be infinite; good faith can be abused, and Wales goes on to talk about how Wikipedia handles trolls, self-promoters, and other problems.

Yet, Wales’s account feels rosy. Many of his stories about remediating the site’s flaws revolve around one or two individuals who personally built up areas such as Wikipedia’s coverage of female scientists. I’m not sure he’s in a position to recognize how often would-be contributors are quickly deterred by an editor fiercely defending their domain or how difficult it’s become to create a new page and make sure it stays up. And, although he nods at the hope that the book will help recruit new editors, he doesn’t discuss the problem of churn Wikipedia surely faces.

Having steered the creation of something as gigantic and seemingly unlikely as Wikipedia, Wales has certainly earned the right to explain how he did it in the hope of helping others embarking on similarly large and unlikely projects. Wales argues that trust has enabled diversity of opinion, and the resulting internal disagreement has improved Wikipedia’s quality. Almost certainly true, but hard to apply to more diffuse missions; see today’s cross-party politics.

Sovereign immunity

At the Gikii conference in 2018, a speaker told us of her disquiet after receiving a warning from Tumblr that she had replied to several messages posted there by a Russian bot. After inspecting the relevant thread, her conclusion was that this bot’s postings were designed to increase the existing divisions within her community. There would, she warned, be a lot more of this.

We’ve seen confirming evidence over the years since. This week provided even more when X turned on location identification for all accounts, whether they wanted it or not. The result has been, as Jason Koebler writes at 404 Media, to expose the true locations of accounts purporting to be American, posting on political matters. A large portion of the accounts behind viral posts designed to exacerbate tensions are being run by people in countries like Bangladesh, Vietnam, India, Cambodia, and Russia, among others, with generative AI acting as an accelerant.

Unlike the speaker we began with, Koebler finds in his analysis that the intention behind most of this is not to stir up divisions but simply to make money from an automated ecosystem that makes doing so easy. The US is the main target simply because it’s the most lucrative market. He also points out that while X’s new feature has led people to talk about it, the similar feature that has long existed on Facebook and YouTube has never led to change because, he writes, “social media companies do not give a fuck about this”. Cue the Upton Sinclair quote: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

The incident reminded me that this type of fraud in general seems to be endemic, especially in the online advertising ecosystem. In March, Portsmouth senior lecturer Karen Middleton submitted evidence (PDF) to a UK Parliamentary Select Committee inquiry arguing that the advertising ecosystem urgently needs regulatory attention as a threat to information integrity. At the Financial Times, Martin Wolf thinks that users should be able to sue the platforms for reimbursement when they are tricked by fraudulent ads – a model that might work for fraudulent ads that cause quantifiable harm but not for those that cause wider, less tangible, social harm. Wolf cites a Reuters report from Jeff Horwitz, who analyzes internal Facebook documents to find that the company itself expected 10% of its 2024 revenues – $16 billion – to come from ads for scams and banned goods.

Search Engine Land, citing Juniper Research, estimated in 2023 that $84 billion in advertising spend would be lost to ad fraud that year, and predicted a rise to $172 billion by 2028. Spider Labs estimates 2024 losses at over $37.7 billion, based on traffic data it’s analyzed through its fraud prevention tool, and 2025 losses at $41.4 billion. For context, DataReportal puts global online ad revenue at close to $790.3 billion in 2024. Also for comparison, Adblock Tester estimated last week that ad blockers cut publishers’ advertising revenues on average by 25% in 2023, costing them up to $50 billion a year.

If Koebler is correct in his assessment, until or unless advertisers rebel, the incentives are misplaced and change will not happen.

***

Enforcement of the Online Safety Act has continued to develop since it came into force in July. This week, Substack became the latest to announce it would implement age verification for whatever content it deems to be potentially harmful. Paid subscribers are exempt on the basis that they have signed up with credit cards, which are unavailable in the UK to those under 18.

In October, we noted the arrival of a lawsuit against Ofcom brought in US courts by 4chan and Kiwi Farms. The lawyer’s name, Preston Byrne, sounded familiar; I now remember he talked bitcoin at the 2015 Tomorrow’s Transactions Forum.

James Titcomb writes at the Daily Telegraph that Ofcom’s lawyers have told the US court that it is a public regulatory authority and therefore has “sovereign immunity”. The lawsuit contends that Ofcom is run as a “commercial enterprise” and therefore doesn’t get to claim sovereign immunity. Plus: the First Amendment.

Meanwhile, with age verification spreading to Australia and the EU, on X Byrne is advocating that US states enact foreign censorship shield laws. One state – Wyoming – has already introduced one: the draft GRANITE Act was filed on November 19. Among other provisions, the law would permit US citizens who have been threatened with fines to demand three times the amount in damages – potentially billions for a company like Meta, which can be fined up to 10% of global revenue under various UK and EU laws. That clause would have to pass the US Congress. In the current mood, it might; in July, a House of Representatives Judiciary Committee report called the EU’s Digital Services Act a foreign censorship threat.

It’s hard to know how – or when – this will end. In 1990s debates, many imagined that the competition to enforce national standards for speech across the world would lead either to unrestricted free speech or to a “least common denominator” regime in which the most restrictive laws applied everywhere. Byrne’s battle isn’t about that; it’s about who gets to decide.

Illustrations: A wild turkey strutting (by Frank Schulenberg at Wikimedia). Happy Thanksgiving!

Also this week:
At Plutopia, we interview Jennifer Granick, surveillance and cybersecurity counsel at ACLU.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.

Mean time between failures

Normal people should not know names like “US-East-1”, someone said, more or less, on Mastodon this week. “US-East-1” is the Amazon Web Services region that went down last month to widely disruptive effect. What this social media poster was getting at, while contemplating this week’s Cloudflare outage, is that the series of recent Internet-related outages has turned network nodes previously known only to technical specialists into household names.

For the history-minded, there was a moment like this in 1988, when a badly-written worm put the Internet on newspapers’ front pages for the first time. The Internet was then so little known that every story had to explain what it was – primarily, then, a network connecting government, academic, and corporate scientific research institutions. Now, stories are explaining network architecture. I guess that’s progress?

Much less detailed knowledge was needed to understand what happened on Tuesday, when Cloudflare went down, taking with it access to Spotify, Uber, Grindr, Ikea, Microsoft Copilot, Politico, and even, in London, Cloudflare’s own VPN service (says Wikipedia). Cloudflare offers content delivery and protection against distributed denial of service attacks, and as such it interposes itself into all sorts of Internet interactions. I often see it demanding action to prove I’m not a robot; in that mode it’s hard to miss. That said, many sites really do need the protection it offers against large-scale attacks. Attacks at scale require defense at scale.

Ironically, one of the sites lost in the Cloudflare outage was DownDetector, a site that helps you know if the site you can’t reach is down for everyone or just you, one of several such debugging tools for figuring out who needs to fix what.

So, Cloudflare was Tuesday. Amazon’s outage was just about a month ago, on October 20. Microsoft Azure’s, another DNS error, came just a week later. All three of these had effects across large parts of the network.

Is this a trend or just a random coincidental cluster in a sea of possibilities?

One thing that’s dispiriting about these outages is that so often the causes are traceable to issues that have been well-understood for years. With Amazon it was a DNS error. Microsoft also had a DNS issue “following an inadvertent configuration change”. Cloudflare’s issue may have been less predictable; The Verge reports its problem was a software crash caused by a “feature file” used by its bot management system abruptly doubling in size, taking it above the size the software was designed to handle.

Also at The Verge, Emma Roth thinks it’s enough of a trend that website owners need to start thinking about backup – that is, failover – plans. Correctly, she says the widespread impact of these outages shows how concentrated infrastructure service provision has become. She cites Signal president Meredith Whittaker: the encrypted messaging service can’t find an alternative to using one of the three or four major cloud providers.

At Krebs on Security, Brian Krebs warns that sites that managed to pivot their domains away from Cloudflare to keep themselves available during the outage need to examine their logs for signs of the attacks Cloudflare normally protects them from and put effort into fixing the common vulnerabilities they find. And then also: consider spreading the load so there isn’t a single point of failure. As I understand it, Netflix did this after the 2017 AWS outage.

For any single one of these giant providers, significant outages are not common. This was, Jon Brodkin says at Ars Technica, Cloudflare’s worst outage since 2019. That one was due to a badly written firewall rule. But increasing size also brings increasing complexity, and, as these outages have also shown, even the largest network can be disrupted at scale by a very small mistake.

Elsewhere, a software engineer friend and I have been talking about “mean time between failures”, a measure normally applied to hard drives, servers, or other components. There, it’s much more easily measured – run a load of drives, time when they fail, take an average… With the Internet, so much depends on your individual setup. But beyond that: what counts as failure? My friend suggested setting thresholds based on impact: number of people, length of time, extent of cascading failures. Being able to quantify outages might help get a better sense of whether it’s a trend or a random cluster. The bottom line, though, is clear already: increasing centralization means that when outages occur they are further-reaching and disruptive in unpredictable ways. This trend can only continue, even if the outages themselves become rarer.
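As a minimal sketch of that thresholding idea – with invented impact figures, since real ones are exactly what’s hard to get – it might look like this:

```python
# Sketch of my friend's suggestion: count only outages that clear impact
# thresholds, then average the gaps between them. Dates are approximate;
# the impact figures are invented placeholders, not real measurements.
from datetime import date
from statistics import mean

outages = [  # (date, users affected, hours down, dependent services hit)
    (date(2025, 10, 20), 10_000_000, 15, 2500),  # AWS US-East-1
    (date(2025, 10, 29), 5_000_000, 8, 1000),    # Microsoft Azure
    (date(2025, 11, 18), 8_000_000, 6, 1800),    # Cloudflare
]

def mtbf_days(events, min_users=1_000_000, min_hours=1, min_services=100):
    """Mean days between outages exceeding all three impact thresholds."""
    big = sorted(d for d, users, hours, services in events
                 if users >= min_users and hours >= min_hours
                 and services >= min_services)
    if len(big) < 2:
        return None  # too few qualifying failures to measure an interval
    return mean((b - a).days for a, b in zip(big, big[1:]))

print(mtbf_days(outages))  # 14.5 days between qualifying outages
```

Three data points can’t distinguish a trend from a cluster, of course; the value of the exercise is that setting the thresholds forces you to say what you mean by “failure”.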

Most of us have no control over the infrastructure decisions sites and services make, or even any real way to know what they are. We can counter this to some extent by diversifying our own dependencies.

In the first decade or two of the Internet, we could always revert to older ways of doing things. Increasingly, this is impossible because either those older methods have been turned off or because technology has taken us places where the old ways didn’t go. We need to focus a lot more on making the new systems robust, or face a future as hostages.

Illustrations: Traffic jam in New York’s Herald Square, 1973 (via Wikimedia).

Also this week:
– At the Plutopia podcast, we interview Jennifer Granick, surveillance and cybersecurity counsel at the ACLU, about the expansion of government and corporate surveillance and the increasing threat to civil liberties.
– At Skeptical Inquirer, I interview juggling mathematician Colin Wright about spreading enthusiasm for mathematics.

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.