WorldMind Blog
What is World Mind?
The pursuit of artificial general intelligence needs something like a paradigm shift. There is a growing consensus that large language models have hit a wall. The AI labs behind the frontie...
What is Ontology?
When people in AI talk about “ontology,” they usually mean something technical, such as an organized chart of entities and categories, like a knowledge graph that specifies how “doctor” relates to “ho...
Why Philosophy is Useless and Yet Matters for AI
Philosophy is often accused of being useless, and in a certain sense that’s true. Philosophy doesn’t build bridges, cure diseases, or put rockets on the moon. It doesn’t provide grounded methods for s...
What Does Heideggerian Ontology Have To Do With Transformers?
Transformers work because they accidentally approximated an ontological structure.
The Huge Energy Costs of LLMs Reveal an Absence of Grounding
The numbers are staggering. To train a frontier-scale large language model requires thousands of GPUs running in parallel, billions of parameters finely tuned, and trillions of tokens drawn from the e...
The Forgotten Oracle – Hubert Dreyfus and the First AI Winter
In the mid-1960s, at the very moment when artificial intelligence was first being celebrated as the future of science, one voice stood apart. It was the voice of Hubert Dreyfus, a young philosopher at...
How Did Philosophy Become as Polarized as Our Politics?
While Being and Time is approaching its hundredth anniversary, there is still a reason why most scientists, and especially AI researchers, continue to think in a Cartesian mindset rather than in terms...
Scaling Has Reached Its Limit Exactly in Coherence
The marvel of large language models is their coherence. Ask them a question, and they respond with sentences that flow, paragraphs that hold together, and arguments that appear structured. Coherence h...
AI Safety is Out of Control
AI safety is the most urgent conversation in the field today. Companies publish safety charters, researchers debate alignment strategies, governments scramble to regulate. But most of what passes for ...
Why Large Language Models Will Never Think
Large language models are astonishing machines. They can generate essays, write code, summarize books, even carry on conversations that feel uncannily human. They are fluent in language to a degree no...
How Language and World are the Same Web of Meaning
We usually think of “world” and “language” as two very different things: the world is everything out there, and language is how we talk about it. But Heidegger turned this picture upside down. For him...
Why Transformers Speak but Don’t Understand
Large language models astonish us with their fluency. Ask them to explain a concept, write an essay, or carry on a conversation, and the words flow in ways that feel remarkably human. Yet the very sam...
The Irony of AI Fear
We’ve all seen predictions by various luminaries that superintelligent AI will hunt us down, enslave us, or wipe us out. The imagery is apocalyptic, machines turning against their creators. But let’s ...
Why Transformers Work: See Heidegger, Not Descartes
AI researchers have inherited more from Descartes than they might realize. The Cartesian worldview divides reality into two realms, the external world of objects and the internal realm of ideas. In th...
The Quixotic Quest of Interpretability
In the ongoing effort to understand artificial intelligence, few pursuits have taken on such a romantic air as the quest for interpretability. Researchers enter the dense interior of deep neural netwo...
How Large Language Models are Designed to Hallucinate
I feel a real sense of relief that my new paper has finally been posted on arxiv.org: http://arxiv.org/abs/2509.16297. Getting it accepted was not easy, and perhaps that is fitting because the argumen...
Sutton’s Stream and Heidegger’s World
Dwarkesh Patel just posted a video interview with Richard Sutton titled “Richard Sutton – Father of RL thinks that LLMs are a dead end.” I love this video for a number of reasons. I won’t comment her...
Solving Sutton’s Transfer Problem
Richard Sutton’s four pillars of reinforcement learning (policy, value function, perception, and transition) are elegant in their clarity, yet they also reveal a persistent weakness. Each is learned w...