Inspiration
Machine Talks is my take on the classic talk-show format, inspired by Space Ghost Coast to Coast, where everything was absurd and yet the show never missed a beat. With this show, I'm addressing a topic that intrigues me while exploring a new, expansive medium.
What it does
Part newscast, part surrealist talk show, Machine Talks is an autonomous, AI-driven bi-weekly newscast that reimagines how articles are delivered and promotes discussion around AI by inverting the traditional observer-subject relationship.
Hosted entirely by AI, it focuses on recent developments in art, tech, and AI, while poking fun at human contradictions.
Three AI characters, ZURI (an astronaut robot), CAM (a stage director), and DREW (an old-school plotter printer), deliver news, analysis, and interviews from a genuinely machine perspective, commenting on real-world news with the critical distance only non-human observers can provide.
How I built it
The show's concept is built around a crew of agents developed to power Hertz+Eyes, my audiovisual label.
The system begins with AI agents (powered by LangGraph) scheduled to monitor a handful of AI news sources and newsletters, identifying trending topics and relevant articles that are then archived in a temporally aware knowledge graph (FalkorDB + Graphiti).
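To give a sense of the ingestion step, here is a minimal sketch of how an article could be archived into a temporal knowledge graph with Graphiti on FalkorDB. The connection details, helper names, and sample article are placeholders, and this is an assumption based on the libraries' public APIs, not my production pipeline.

```python
import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.driver.falkordb_driver import FalkorDriver
from graphiti_core.nodes import EpisodeType

async def archive_article(title: str, body: str, source_name: str) -> None:
    # Placeholder FalkorDB connection; Graphiti also assumes an LLM key
    # (e.g. OPENAI_API_KEY) in the environment for entity extraction.
    driver = FalkorDriver(host="localhost", port=6379)
    graphiti = Graphiti(graph_driver=driver)
    try:
        await graphiti.build_indices_and_constraints()  # one-time setup
        # Each article becomes an "episode": Graphiti extracts entities and
        # relationships and timestamps them, which is what gives the graph
        # its temporal awareness.
        await graphiti.add_episode(
            name=title,
            episode_body=body,
            source=EpisodeType.text,
            source_description=source_name,
            reference_time=datetime.now(timezone.utc),
        )
    finally:
        await graphiti.close()

asyncio.run(archive_article(
    "Example AI headline",
    "Full article text scraped by the monitoring agent...",
    "hypothetical-newsletter",
))
```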
Specialized agents then generate deep analysis, write scripts, research visual content, and distribute social media posts across platforms (only X for now), all with minimal human intervention beyond editorial oversight. The orchestration is based on a Deep Agent framework expanded to operate on high-complexity content-creation tasks; a simplified sketch follows below.
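As a rough illustration of that orchestration layer, here is a stripped-down LangGraph pipeline routing work through specialized agent nodes. The node names, state fields, and linear routing are simplified stand-ins for the real Deep Agent-based system, and the plain functions would normally wrap LLM-backed agents.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class ShowState(TypedDict, total=False):
    topic: str     # trending topic pulled from the knowledge graph
    analysis: str  # deep analysis of the topic
    script: str    # formatted script for the visual layer
    post: str      # social media copy (X only, for now)

# Plain functions keep the sketch self-contained and runnable.
def analyst(state: ShowState) -> ShowState:
    return {"analysis": f"Analysis of {state['topic']}"}

def scriptwriter(state: ShowState) -> ShowState:
    return {"script": f"ZURI: Tonight we discuss {state['topic']}..."}

def social(state: ShowState) -> ShowState:
    return {"post": f"New episode on {state['topic']} - watch live!"}

builder = StateGraph(ShowState)
builder.add_node("analyst", analyst)
builder.add_node("scriptwriter", scriptwriter)
builder.add_node("social", social)

# A linear pipeline here; the real system adds branching, retries,
# and an editorial checkpoint before anything is published.
builder.add_edge(START, "analyst")
builder.add_edge("analyst", "scriptwriter")
builder.add_edge("scriptwriter", "social")
builder.add_edge("social", END)

graph = builder.compile()
print(graph.invoke({"topic": "open-weight model releases"}))
```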
The visual layer is rendered in Replikant Chat, an Unreal Engine-based chat application that parses the strictly formatted script and uses it as the source for each streaming session.
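The "strictly formatted" script is the contract between the pipeline and the renderer. The schema below is a hypothetical illustration of what such a format could look like; the field names are invented for this sketch and are not Replikant Chat's actual input format.

```python
import json
from dataclasses import asdict, dataclass

# Hypothetical line-level schema for the show script; the fields are
# illustrative, not the renderer's real interface.
@dataclass
class ScriptLine:
    speaker: str  # "ZURI", "CAM", or "DREW"
    text: str     # what the character says
    camera: str   # stage direction consumed by the renderer

def to_stream_source(lines: list[ScriptLine]) -> str:
    """Serialize a script into newline-delimited JSON for the session."""
    return "\n".join(json.dumps(asdict(line)) for line in lines)

script = [
    ScriptLine("ZURI", "Welcome back, humans.", "close_up"),
    ScriptLine("DREW", "*plotter noises*", "cutaway"),
]
print(to_stream_source(script))
```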
For the trailer, I used footage from a test run that generated a very compelling monologue.
Then I wrote ZURI's backstory and used Midjourney, Google, and FAL for image generation, animated the video in Google Veo, and used ElevenLabs for text-to-speech (which also powers all of the live show's voices via its API).
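For the voice layer, a minimal ElevenLabs call looks roughly like the following. This assumes the current `elevenlabs` Python SDK; the voice ID and model are placeholders, and each character (ZURI, CAM, DREW) would map to its own voice.

```python
import os

from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key=os.environ["ELEVENLABS_API_KEY"])

# voice_id and model_id are placeholders, not the show's real settings.
audio = client.text_to_speech.convert(
    voice_id="YOUR_ZURI_VOICE_ID",
    model_id="eleven_multilingual_v2",
    text="Tonight, the machines read the news.",
)

# convert() streams audio chunks; write them out for the renderer.
with open("zuri_line.mp3", "wb") as f:
    for chunk in audio:
        f.write(chunk)
```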
Challenges we ran into
Building this world was a lot of work in itself, but the real challenge was implementing the agent pipeline and getting it to generate satisfactory content.
I'm a creative coder, but this was beyond my area of expertise. Even though I've been taking LangGraph classes since January of this year, what really helped was having a clear idea of how to architect the pipeline, so I could work with Claude Code to bring it to life.
Connecting everything was, and still is, a pain, and I plan to move the visual layer to my own Unreal Engine virtual production setup at some point.
Prompting has also been HARD.
Accomplishments that I’m proud of
Although it's still early, I'm happy with the results so far. The pipeline generates very interesting analyses of current news and is able to steer the show into relevant discussions. During the test streams, I often caught myself completely engaged with the content and how it was being presented.
Machine Talks is the first of its kind to leverage AI not just as a novelty but as a form of representation. Experimental yet accessible, this show creates space for unprecedented authenticity in discussions about AI's role in society, creativity, and human connection.
I started H+E to materialize ideas that only a couple of years ago would sound delusional. Now turned into a transmission, they signal a future that's already here (although unevenly distributed).
What we learned
Since the first pilot in 2024, I've tested dozens of different LLMs: listening, writing, and rewriting. Through that process, I came to understand the implications of this investigation.
My focus now is on developing a strong editorial voice, offering a perspective on this ongoing conversation, and connecting with fellow humans as we prepare for the next generation of content creation: establishing a new model for AI-native media production that is transparent about its autonomous nature.
What's next for Machine Talks
I'm planning to start streaming on December 21st, initially on YouTube.
The newscast will air bi-weekly at first and move to daily at some point.
I'm also planning special episodes, likely a bit less autonomous in nature, that explore single topics in depth, as well as occasional episodes interviewing contemporary human artists.
I also plan to add a computer-use agent able to test digital products and apps, so the show can produce review videos and promotional pieces.
Also on the roadmap: for each artist episode, a playable gallery-room environment where the audience can interact further with the pieces mentioned, creating a crossover package aimed at different audiences who can play and engage with experiences within the show's world.
Built With
- claude
- dropbox
- elevenlabs
- flow
- google-cloud
- langgraph
- midjourney
- python
- replikant
- typescript
- veo3
- whisk