🤖 What Just Happened in AI | December 9, 2025
EU targets Google, teens turn to chatbots for therapy, deepfakes go geopolitical, and data centers hit climate backlash.
⏰ TL;DR (because you’ve got 30 seconds and a coffee going cold)
The EU just opened an antitrust probe into Google for using publisher and YouTuber content to train Gemini, without paying or asking. This could rewrite the rules for everyone in AI.
Elsewhere, 25% of teens are calling chatbots their “friends” and leaning on them for mental health help (nope, not kidding).
The UK is warning that foreign powers are deepfaking their way into undermining support for Ukraine, and over 200 UK politicians want to regulate advanced AI systems now.
Finally, environmental groups are pushing for a moratorium on U.S. data centers because the AI industry is turning into a power-hungry monster.
🌍 Reality is finally catching up with the AI hype. Regulators are on the move, safety cracks are wide open, and the infrastructure bill is here; it ain’t cheap.

Hey founder, 🙋
Is your business model actually sustainable?
Find out for free with the agent I built:
👉 Learn how to earn $10 in Impressive credits
Want to hang out with other Impressive builders and founders?
👉 Hop into our Discord
1️⃣ EU opens fire on Google’s training tactics 🇪🇺
The European Commission has officially launched a competition investigation into how Google is using online content from publishers and YouTube creators to train Gemini, without proper compensation or a “would you mind if...?” 😏
Whatâs at stake?
Potentially the entire data backbone of generative AI. If this ends with a “you can’t do that”, every AI company that built on scraped content (yes, all of them) is now exposed.
This could flip the entire AI business model on its head.
Suddenly, everyone needs licensing deals.
Training costs could go from “just scrape it” to “millions in IP contracts”.
That’s a massive shift in growth economics and a deathblow to early-stage companies that can’t afford to play in that league.
OpenAI, Anthropic, and the rest?
They’re watching with popcorn... and heartburn. 🍿😬
2️⃣ 1 in 4 teens are now using AI chatbots for therapy. They call them “friends” 🧠
A new study shows that roughly 25% of teenagers are using AI chatbots for mental health support. Some even describe the bots as “friends”.
Yeah. Thatâs where we are now.
Let’s be real: teens turning to AI exposes that human support systems are breaking down, and while it might sound sweet, it’s a massive red flag for safety.
These chatbots arenât trained therapists.
They donât detect crisis signals.
They canât escalate when things get dangerous.
And they’re running without a shred of regulation.
💡 Growth without guardrails doesn’t scale, and if you’re building in this space, prepare for lawmakers, regulators, and parents coming at you from all sides.

3️⃣ Deepfakes go geopolitical; support for Ukraine in the crosshairs 💥
The UK’s Home Secretary, Yvette Cooper, just confirmed what many suspected: foreign governments are using AI-generated videos to spread disinformation and erode public support for Ukraine.
This isn’t experimental; it’s operational and… terrifying.
Deepfake propaganda is now standard statecraft. The days of wondering “what’s real?” are over. Welcome to the era of total narrative manipulation at scale.
For AI founders and builders? This isn’t just a moral issue. It’s a growth inhibitor. Expect calls for watermarking, detection layers, content filters, and serious compliance burdens baked into whatever you’re building.
4️⃣ UK politicians join forces: “Regulate this AI thing. Like, now.” ⚖️
More than 200 UK parliamentarians, across party lines, are demanding regulation for powerful AI systems. Cross-party alignment like this is rare. When it happens, it usually means legislation isn’t far behind.
This could mark the UK’s pivot toward an AI Act-style framework, similar to the EU’s. If so, companies operating there (or expanding into Europe) now face a new regulatory landscape.
❌ One more layer of complexity.
❌ One more set of hoops to jump through.
From a growth strategy perspective, the fragmentation of AI laws (US states, EU, UK) means your compliance stack had better scale as fast as your product does. 😫
5️⃣ AI’s climate bill just arrived: 200+ groups demand data center freeze 🌍
Over 200 environmental organizations have called for a moratorium on new U.S. data center construction. Their reason? AI is gobbling up energy like it’s going out of style.
This is the first real environmental pushback against AI infrastructure, and it’s gaining steam.
If regulators side with the climate camp, new data center construction will slow down, and if that happens, so will AI innovation.
You can’t train frontier models without compute, and you can’t scale compute without hardware. Say hello to the physical limits of exponential growth.
If you’re building AI products, consider this your early warning: clean growth may become the only acceptable path forward.

Enjoying What Just Happened in AI? Buy me a coffee :)
💡 Smart Takeaway
Today’s stories all point to one thing: the AI party is hitting its real-world hangover.
The EU is investigating training data (read: expensive legal exposure).
Teens are using AI for therapy (and we’re not ready for the liability).
Foreign states are weaponizing deepfakes (truth is losing).
The UK wants to regulate AI (and they’re serious this time).
Environmentalists are drawing red lines (and they’ve got momentum).
Growth, from now on, will be linked to your ability to navigate legal landmines, build trust with users, and prove your product doesn’t wreck the planet.
Founders and leaders who still think this is the Wild West? 🤠
They’re about to get trampled. The new game is regulation-ready, safety-verified, and infrastructure-conscious.
Build accordingly.
🎁 Bonus Insight
If you only remember one thing from today, make it this: the EU’s investigation into Google could rewrite the economics of generative AI.

If regulators decide that scraping content without paying is anti-competitive, or worse, illegal, then retroactive licensing becomes a very real (and very expensive) thing. And the companies that built billion-dollar valuations on the idea that “training data is free” will have to face the music.
❌ We’re talking billions in unaccounted costs.
❌ Profitability models stretched thin.
❌ IPO timelines pushed back.
📧 Have thoughts? Rants? Hot takes? Hit reply. I read everything.




Great share Laura! The EU probe into Google's training data is the real wildcard. Could force licensing across the board and reset AI economics overnight.
Teens turning to chatbots for therapy is the quietest alarm bell, exposing how fast safety gaps are widening.
Thanks for another great post, Laura. It's incredibly worrying that a quarter of teenagers are using AI as emotional support. I think that, unfortunately, we're going to see this play out in very unexpected and quite dangerous ways, especially with the potential that these tools have for radicalising young men, many of whom are already very poorly supported by the welfare state.