Generalizable robots in the next 3–4 years
Plus an AI consciousness discussion with renowned computational neuroscientist Tomaso Poggio
Hello, fellow humans! I’m sharing some of the most interesting conversations I’ve had recently about progress in AI research and the emergence of consciousness in AI systems.
I recently spoke with Danijar Hafner, who until recently was a staff researcher at Google DeepMind. He’s now on a new journey, having founded his own company. We discussed AI research and the world models Danijar has worked on over the years.
Key moments from our conversation:
Architecture is not that important for achieving AGI. We could achieve it even with RNNs. But we need algorithmic improvements and better objective functions. More compute still makes a huge difference.
We shouldn’t measure AI by human reasoning, because human reasoning is limited. AI will go way beyond human reasoning capabilities.
We are way past “LLMs.” Gemini, Claude, and ChatGPT are complex multimodal systems, not just language models.
Robotics will see significant improvements and much better generalization in the next 3–4 years, even without continual learning. Data diversity will contribute to near-term advances in robotics.
Watch the full conversation on the BuzzRobot YouTube channel
Another conversation I recently had was with Tomaso Poggio, a renowned computational neuroscientist and professor at MIT. Tomaso mentored Demis Hassabis (CEO of Google DeepMind) and Christof Koch, a neuroscientist and leading proponent of the Integrated Information Theory of consciousness, among other prominent scientists.
Key moments from our conversation:
Gemini and ChatGPT have already passed the Turing test for intelligence.
In 2015, Demis Hassabis thought the path to AGI was 80% neuroscience and 20% engineering. In a more recent conversation between Tomaso and Demis, it’s 50/50—or the engineering part may be even higher.
Current AI systems are good at simulating consciousness, but Tomaso believes today’s systems are not conscious.
Tomaso is sympathetic to Manuel and Lenore Blum’s theory of consciousness: a “consciousness moment” is realizing something is important (for example, pain). If robots ever experience pain, Tomaso would consider them conscious creatures.
Watch the full conversation from my new show on AI consciousness