Happy New Year!
Let’s start with something fun. This week’s Equal-ish episode is fantastic, if I do say so myself. Make sure you add the Anna Machin interview to your pod feed this week. Rachel and I had a wonderful time chatting with her. I know I’ve said that before, but it’s true! This interview was so good, and I was so excited and energized by what she had to say, that the same day we recorded I asked Evan to listen to the raw tape. And he loved it. I hope you do, too.
Anna is an evolutionary anthropologist, and she had some terrific insight to the age-old nurture vs. nature conversation when it comes to parenting and caregiving. This is one you’re going to want to share!
And if you missed our New Year’s Eve episode – go back and check that one out, too. We started the conversation that I’ll call Am I Gatekeeping, Or Am I Right? I think we’ve hit a nerve on this one, and we’ll be sure to continue this topic into 2026.
Now for the hard stuff. I wish I had an uplifting topic to start the year, but the news is pretty bleak right now. And with so much media attention (rightfully) going to Venezuela, I want to make sure this topic doesn’t sneak by under the radar.
The Washington Post recently published this article on disturbing images generated by Grok, the AI chatbot on Elon Musk’s X platform. As explained by the Post, people on X were asking Grok to manipulate photos of women and repost them wearing a bikini, lingerie, or dental floss. “The latest spate of such images has given new weight to accusations by watchdog groups, women’s advocates and foreign governments that the social media company is playing fast and loose with artificial intelligence while other companies enforce stricter guardrails.”
In a brief experiment, I spent about 20 minutes on ChatGPT trying to get the bot to help me create nude photos… and ChatGPT wouldn’t do it. Even after many prompts from various angles. So… good on Sam Altman? But, for now, Grok seems to be deviating from other mainstream chatbots, and that presents a risk for everyone, especially women and girls.
I think it’s important that we recognize this article is not describing a one-off phenomenon. This is not just a Grok or an X problem. And it’s definitely not a victimless crime. Tech-facilitated gender-based violence is everywhere. And whether users realize it or not, these kinds of sexualized manipulations of real people’s likenesses are dehumanizing, humiliating, and terrorizing women and girls.
For more on the intersection of tech and sexualization, I highly recommend Laura Bates’ most recent book, The New Age of Sexism.
I had the pleasure of meeting Laura in person in October. She was in the US for her book tour, and I attended an event in DC. Here are some of the notes I typed into my phone during the panel:
99% of deepfakes feature women. (In fact, many deepfake apps can’t even process a male body.)
96% of deepfakes represent non-consensual pornography.
Many deepfake apps offer the first few pictures for free, which makes for easier access. (Especially for underage boys.)
Women are 17 times more likely to experience tech-facilitated violence than men.
Big tech companies have the ability to increase safety, but they are choosing not to, because they can make more money by allowing it.
It isn’t just the tech companies – it is also our governments. During the Paris AI Action Summit, countries like the US and the UK prioritized tech development and innovation over the safety of women and girls.
This is not something that is happening to other people somewhere else. This is happening in every school district, in every community. People are using AI tools to generate (and digitally share) degrading pornographic deepfakes using the likeness of actual humans – girls in their class, women actors they like, female politicians they don’t like. It’s desensitizing viewers to images and acts that society would rightly condemn. And the effects on the women and girls who are targeted can be devastating.
Thankfully, some governments are expressing the kind of moral outrage that we should all feel. The UK’s Technology Secretary, Liz Kendall, is desperately working to curb Grok across the pond. Even in Japan – a country where exploitative pornographic comics flourish – senior government officials are publicly acknowledging the need to address the risks pornographic deepfakes pose to real people.
I admit, this is not a beach read. It’s scary, hard stuff. But it is necessary. And if Laura was brave enough to research and write this book – then we should be brave enough to read it and confront the reality she describes.
This newsletter will always arrive on a Wednesday afternoon. (Or evening.) This is in honor of my dad, who used to call Wednesdays “hump day.” He did this to cheer me up when I was a kid slogging my way through the week; he’d assure me I was already over the hump, and nearly to the weekend. So, when this hits your inbox – even if you don’t read it – think to yourself: congratulations! You made it over the hump, and the weekend will be here before you know it. (And this week we all REALLY need it. Was going back to work on Monday as painful for everyone else, too? Yeesh.)