I had coffee with a friend last week—a machine learning researcher deep into the technical trenches of AI. We were supposed to talk about AGI timelines. Instead, we stumbled into a question that’s been bothering me since: Can thinking happen without language?
It started with Meta’s departing AI director.
The World Model Problem
Yann LeCun, Meta’s chief AI scientist for over a decade, is leaving to start his own research company. The interesting part isn’t that he’s leaving—it’s what he’s working on.
LeCun wants to teach AI without forcing everything through language first. The current approach—large language models like GPT—processes the world by converting it into tokens, words, text. Everything gets interpreted as language before the model can “think” about it.
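To make that translation layer concrete, here's a toy sketch of tokenization in Python. The vocabulary is hypothetical and tiny; real models learn subword vocabularies tens of thousands of entries long. But the shape of the process is the same: whatever the input, it ends up as a sequence of integer IDs.

```python
# Toy tokenizer: a stand-in for the "translation layer" in a language model.
# (Hypothetical vocabulary for illustration only; real tokenizers use
# learned subword units, not whole words.)
vocab = {"the": 0, "cat": 1, "jumps": 2, "off": 3, "table": 4, "<unk>": 5}

def encode(sentence: str) -> list[int]:
    """Map each word to an integer ID; anything unknown collapses to <unk>."""
    return [vocab.get(word, vocab["<unk>"]) for word in sentence.lower().split()]

print(encode("The cat jumps off the table"))  # [0, 1, 2, 3, 0, 4]
print(encode("The cat leaps off the ledge"))  # [0, 1, 5, 3, 0, 5]
```

Notice what happens in the second line: "leaps" and "ledge" both collapse into the same unknown token. Whatever distinguished them is gone before the model ever "thinks" about the sentence. That, in miniature, is the information loss LeCun is worried about.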
His bet: that’s causing massive information loss.
Think about it. When you see a cat jump off a table, you don’t internally narrate “the feline quadruped is executing a downward trajectory from an elevated horizontal surface.” You just see it. You understand it. The experience is richer than any description.
LeCun’s “world models” try to let AI learn directly from images and video—the way humans actually perceive reality—without the translation layer.
That’s when I asked my friend: if AI doesn’t need language to think, do we?
The Language-First Assumption
We assume thinking happens in words. Internal monologue. That voice in your head narrating your thoughts.
But is that actually thinking, or just reporting on thinking that already happened?
Noam Chomsky argued that language is too slow, too imprecise to be the mechanism of thought itself. The real thinking happens beneath the surface—vast, wordless patterns of understanding. Language is just how we try (and often fail) to externalize it.
William James described it better: “Great thinkers have vast premonitory glimpses of schemes of relations between terms, which hardly even as verbal images enter the mind, so rapid is the whole process.”
You’ve felt this. The moment you know the solution to a problem but can’t yet articulate it. The insight that arrives fully formed, then takes minutes to translate into words. The poem you write because prose can’t capture what you’re trying to say.
If language were thinking, none of that would make sense.
How Language Shapes What We Can Think
But here’s where it gets interesting: even if thinking doesn’t require language, language absolutely shapes what we can think and how we think it.
This is the Sapir-Whorf hypothesis—linguistic relativity. The structure of your language influences your perception of reality.
My friend brought up German. The language builds words through concatenation—stacking concepts together into compound words. Schadenfreude (harm-joy). Weltanschauung (world-view). Zeitgeist (time-spirit).
This isn’t just vocabulary. It’s a different way of constructing ideas—taking abstract concepts and combining them into precise, technical terms. Some researchers argue this is why German-speaking regions produced so many physicists and philosophers. The language itself encourages systematic, modular thinking.
You can build concepts in German in ways other languages make harder.
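If you wanted to caricature that compounding mechanism in code, it's almost literally string concatenation. A playful sketch (the glosses are illustrative, and real German compounding also inserts linking elements like -s- or -n-, which this ignores):

```python
# Toy sketch of German-style compounding: glue existing concepts into one word.
# (Simplified; ignores the linking elements real German compounds often take.)
def compound(*parts: str) -> str:
    """Concatenate morphemes into a single compound noun."""
    return "".join(parts).capitalize()

print(compound("schaden", "freude"))   # Schadenfreude (harm-joy)
print(compound("zeit", "geist"))       # Zeitgeist (time-spirit)
print(compound("welt", "anschauung"))  # Weltanschauung (world-view)
```

The point isn't the code, it's the operation: the language hands you a composition operator for concepts, and speakers use it constantly.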
The Emotional Structure of Arabic
Then we talked about Arabic.
Arabic is an intensely emotional language. It has dozens of words for different shades of love, anger, sadness. حزن isn’t just sadness—it’s grief, deep sorrow. غضب isn’t just anger—it ranges from mild irritation to explosive rage. The language forces you to be specific about which emotion you’re experiencing.
More than that, Arabic uses vivid, poetic imagery for feelings. “في قلبي غصة”—”there is a lump in my heart.” Not “I’m sad.” The language makes you feel the emotion through metaphor.
Does this shape how Arabs think? How they process emotions? How they relate to each other?
The research suggests yes. Native speakers of different languages show different patterns of brain connectivity. In one neuroimaging study comparing native German and Arabic speakers, the German speakers showed stronger connectivity in frontal-parietal networks—areas associated with complex syntax and logical processing. The language you grow up in appears to shape how your brain is wired.
If language shapes cognition, and different languages shape it differently, then people speaking different languages are thinking in fundamentally different ways.
What This Means for AI (and Us)
Back to LeCun’s world models.
If he’s right—if forcing AI to process everything through language causes information loss—then we’ve been building AI wrong. We’ve been trying to teach machines to think the way we talk, not the way we actually think.
The irony: we might understand human intelligence better by building AI that doesn’t use language.
But here’s what really stayed with me from that conversation:
We can’t escape our own languages.
I think in English and Arabic. My thoughts are shaped by the structures, vocabularies, and metaphors of both. I can’t think “outside” of language because language is the water I swim in.
Even this blog post—me trying to capture a wordless insight from a conversation—is constrained by what English lets me say and how it lets me say it.
The thoughts I’m having right now as I write are being filtered through linguistic structures. Some ideas make it through clearly. Others get distorted. Some can’t be expressed at all, so they stay trapped as vague feelings.
If I wrote this in Arabic, it would be a different post. Not a translation—a different thought.
The Question That Won’t Go Away
Can thinking happen without language?
Maybe. Probably.
But can we think without language? That’s harder. Because even when we try to think pre-linguistically—in images, feelings, patterns—we immediately reach for words to stabilize the thought, to make it graspable.
Language might not be thinking. But it might be the only way we know how to catch our thoughts before they slip away.
And if machines learn to think without language—if LeCun’s world models work—we might finally build AI that understands the world more directly than we can.
Which is either the most exciting or most terrifying thing I’ve thought about all week.
I’m still not sure which.