Where is the development of AI actually heading right now?
Yann LeCun — Turing Award laureate and longtime Chief AI Scientist at Meta — launched a new startup in Paris in March 2026: AMI Labs (Advanced Machine Intelligence Labs). Seed funding of around €890 million, from investors including Nvidia, Samsung, Toyota, and Jeff Bezos. The largest round of its kind in Europe.
What caught our attention wasn't the amount. It was a single sentence on the startup's website:
“Real intelligence does not start in language. It starts in the world.”
The sentence isn’t revolutionary — but it opens a window. Because it raises a question we keep coming back to at Schaltzeit: Where is the development of AI actually taking us and our work? What can and should AI do for us? Have we been too narrowly focused on large language models since their breakthrough a few years ago?
In a nutshell: LLMs and World Models
LLMs (Large Language Models) like ChatGPT, Claude, or Gemini learn from vast amounts of text. They recognise statistical patterns in language and use them to generate responses that simulate human communication. Their ‘worldview’ is always mediated by language: what isn’t written down simply doesn’t exist for them.
World Models take a fundamentally different approach: they build an internal model of the physical world and simulate cause and effect within it — much like toddlers who learn through touch, movement, and observation before they can even speak. This approach is seen as especially promising for AI robotics and physical interaction.
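To make the contrast concrete, here is a deliberately toy-sized sketch in Python. The corpus, the bigram ‘model’, and the physics numbers are all invented for this illustration – neither half resembles a real LLM or World Model, but the difference in what is being learned should be visible:

```python
from collections import Counter, defaultdict

# --- LLM-style: learn statistical patterns in text -----------------------
corpus = "the ball falls down the ball bounces up the ball falls down".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next_word(word):
    """Return the statistically most likely next word seen in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next_word("ball"))  # 'falls' -- a pattern in language

# --- World-Model-style: simulate cause and effect ------------------------
GRAVITY = -9.81  # m/s^2

def step(height, velocity, dt=0.1):
    """Advance a falling object's state by one time step (no text involved)."""
    velocity += GRAVITY * dt
    height = max(0.0, height + velocity * dt)
    return height, velocity

state = (2.0, 0.0)  # a ball two metres above the ground, at rest
for _ in range(5):
    state = step(*state)
print(state)  # a prediction of what the ball will do, not of what is said about it
```

The first half only ever knows what the corpus happens to say about the ball; the second half predicts what the ball will do.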
Where language models fall short
Anyone working with ChatGPT, Claude, or Gemini today quickly sees how well these systems can analyse texts, build arguments, and write creatively. Impressive. And at the same time, that very impressiveness conceals a structural problem: language always encodes what already exists. What people have already thought, experienced, and written down feeds into the training data. What hasn’t yet been put into words – new connections, unnamed phenomena, embodied or intuitive knowledge – simply remains invisible to language models.
For those of us working in foresight, this isn’t an abstract problem. The foresight researcher and philosopher of science Armin Grunwald has described it precisely: the future doesn’t exist as an empirically researchable fact – it’s only ever accessible through language, through scenarios, narratives, images, metaphors. We negotiate the future in our language. And in doing so, we carry all the blind spots of language into our anticipation of futures.
Wittgenstein’s proposition ‘The limits of my language mean the limits of my world’ applies to language models too. Perhaps especially to them.
World Models: an alternative paradigm gains momentum
LeCun’s answer to this limitation: World Models. The idea is that AI shouldn’t learn about the world through text patterns, but build an internal model of the physical world. Like a toddler learning about gravity by falling. Not by reading a Wikipedia article on gravity.
And LeCun isn’t alone. World Labs (also funded at around one billion US dollars), Meta with its JEPA architecture, and Google with Genie 3 are all working on similar approaches in parallel. This is no fringe project, but quite possibly a paradigm shift taking shape in AI research.
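For readers who want a more technical feel for this, here is a minimal sketch of the joint-embedding idea behind JEPA-style architectures: predict the representation of the next observation, not the observation itself. All dimensions, modules, and data below are invented for illustration – this is a schematic of the principle, not Meta’s actual implementation:

```python
import torch
import torch.nn as nn

OBS_DIM, LATENT_DIM = 32, 8  # toy sizes, chosen arbitrarily

encoder = nn.Linear(OBS_DIM, LATENT_DIM)       # observation -> latent embedding
predictor = nn.Linear(LATENT_DIM, LATENT_DIM)  # latent_t -> predicted latent_{t+1}

obs_t = torch.randn(16, OBS_DIM)   # a batch of "current" observations (random stand-ins)
obs_t1 = torch.randn(16, OBS_DIM)  # the observations one time step later

z_t = encoder(obs_t)             # embed the present
z_t1 = encoder(obs_t1).detach()  # target embedding (gradient stopped)

# The prediction error lives in latent space -- not in pixels, and not in words.
loss = ((predictor(z_t) - z_t1) ** 2).mean()
loss.backward()
```

The design choice matters for the argument above: such a model is never asked to reproduce the world in words or pixels, only to anticipate how its internal representation of the world will change.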
What would that change in practice? An AI that no longer views the world through human language could recognise patterns for which we don’t yet have words. It could model connections that don’t yet fit into our linguistic categories. That makes it exciting — and at the same time harder to get a handle on.
The black box grows – what does that mean for us?
With LLMs, we at least have a linguistic surface to work with. We can observe which language a model uses, which metaphors it favours, which futures it frames as ‘normal’ or ‘inevitable’. Hermeneutic methods – the systematic interpretation and questioning of texts – still give us, as human foresight experts, a lever here.
World Models remove that lever, at least in part. Once the processing logic is no longer organised linguistically, we lose the surface where we can reflect on and critically evaluate interpretive patterns.
We know this problem from our own practice. Our BLACK BOX workshop concept, developed for the Kapitel 21 AI conference, made this tension tangible: both human intuition and AI decision logic are opaque in their processing. Input and output are visible, but what happens in between is not. With World Models, that black box grows. And the questions we raised in the workshop become more urgent.
Three starting points that remain
Does that mean reflection is no longer possible? We don’t think so. We see three starting points:
- The framing stays linguistic. What we ask a World Model, how we define the problem, what goals we set – all of that remains shaped by human language. And that is exactly where we can still intervene: in the questions we ask, and in the assumptions embedded in them.
- From process analysis to impact assessment. Perhaps we have to accept that we can no longer ask ‘How did the model arrive at this result?’ But we can still ask ‘What kind of world does this result lead us towards?’ – shifting the focus from understanding the process to evaluating its consequences and inherent conditions.
- Collective negotiation gains importance. When no one fully understands what’s happening inside the model, whether a result is acceptable becomes even more of a social question. Teams, organisations, democratic publics – all of them will increasingly be called upon to decide collectively what to do with AI-generated results.
Understanding the world without experiencing it – is that possible?
World Models promise an understanding of the world beyond language. But they remain bodiless. They model the physical world mathematically – yet have no access to what shapes human learning so fundamentally: lived experience. A toddler learns about gravity through falling and pain. Not through an equation.
Behind this lies a question bigger than the technology: can something that grasps the world without a body and without social embeddedness actually understand what we intend to do with it? What is desirable, what is just, what is worthy of human dignity – that doesn’t emerge from equations. It emerges from shared experience, from conversations, from conflicts, from living together.
And perhaps that’s precisely the point not to lose sight of, amid all the excitement about World Models: language isn’t just a limitation. It’s also a tool of empowerment. We negotiate justice, dignity, and visions for the future in language. Political and social movements are always also struggles over language – over concepts, metaphors, narratives. This capacity to reflect on ourselves through language is a democratic asset.
What this means for foresight – four thoughts
- Linguistic reflection remains indispensable. Precisely because AI may increasingly operate beyond language, we need people who can interpret results, contextualise them, and relate them to values. Hermeneutic competence becomes even more important.
- Participatory processes need to become more robust. The less we can trace how an AI system arrives at its results, the more we need social mechanisms of accountability – in teams, in organisations, in public debate.
- The question ‘which AI, for what?’ becomes more pressing. ‘Should we use AI?’ is by now an outdated question. What kind of AI, for what goal, under what conditions – that is what we ask ourselves in many projects. World Models broaden the spectrum and may make that decision more demanding.
- The question of the future remains a human one. AI World Models may model the world more precisely than language can. But which world we want to build – that’s still a normative question. And normativity doesn’t emerge from mathematical models. It emerges between people.
A productive paradigm shift
LeCun has reopened an old question: is intelligence possible without language? For those of us in foresight, it leads to one of our own: Is foresight conceivable without language?
Our provisional answer: probably not. But the question is worth asking. World Models might open up new possibilities for recognising connections that can’t yet be put into words. At the same time, they will challenge us to organise the reflection on their results differently. And they remind us that working with AI is ultimately not about the technology – but about the questions we bring to it.
What do you think: how do World Models change our understanding of foresight? What questions does this raise for your work? We’d love to hear your thoughts!
AI in Foresight at Schaltzeit
AI wasn’t a topic that first appeared on our agenda with the ChatGPT hype in 2022. Back in 2020, we designed and piloted a foresight process for Germany’s Federal Ministry of Labour and Social Affairs that incorporated AI methods. At the time, that was still pioneering work. What has become clearer since then: there is a wide spectrum between a simple GPT application and a structured AI workflow system – and which approach makes sense always depends on the goals and specific questions at hand.
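To illustrate that spectrum, here is a rough sketch – with a hypothetical call_llm() helper standing in for any LLM API, and workflow stages that are purely illustrative rather than our actual process:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM API call."""
    raise NotImplementedError("wire up the provider of your choice here")

# One end of the spectrum: a single prompt, a single answer.
def simple_gpt_app(question: str) -> str:
    return call_llm(question)

# The other end: a structured workflow with explicit, inspectable stages,
# where each intermediate result can be checked before it feeds the next step.
def foresight_workflow(topic: str) -> str:
    signals = call_llm(f"List weak signals related to: {topic}")
    drivers = call_llm(f"Cluster these signals into drivers of change:\n{signals}")
    scenarios = call_llm(f"Draft three contrasting scenarios from:\n{drivers}")
    return call_llm(f"Summarise implications for decision-makers:\n{scenarios}")
```

The difference is not sophistication for its own sake: the structured variant makes assumptions visible at each stage, which is exactly where the question of goals and conditions comes back in.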
The debate around World Models fits right into this. It makes visible what we experience in our daily work: AI is not a monolithic block. It is a field that changes rapidly, one where the right question matters more than the latest technology.
Interested in talking about this? We welcome the exchange – on ongoing projects, methodological questions, or the use of AI in your foresight context.
Further links and reading
- AMI Labs: amilabs.xyz
- Heise Online: “World model instead of LLM: Yann LeCun’s startup receives 890 million euros” (March 2026)
- World Labs: worldlabs.ai/blog/funding-2026
- Grunwald, A.: Wovon ist die Zukunftsforschung eine Wissenschaft?
- Soetebeer, M. (2022): BLACK BOX workshop concept, Schaltzeit Blog
- Weh, L. & Soetebeer, M. (2021): KI-Ethik und Neuroethik fördern relationalen KI-Diskurs. In: Arbeitswelt und KI 2030, Springer Gabler