Blog post 34 described a simulation of a belief-speaking consultancy process. The simulation ended with ChatGPT aggregating and interpreting the answers. Naturally, the next step, the recommendations, could also have been generated by AI. We actually submitted that request, and ChatGPT immediately obliged. But recommendations should not be generated by artificial intelligence; humans in the Resilience Council need to do that. Below I’ll try to explain why.
Belief-speaking and AI
As mentioned in blog posts 19 and 30, for belief-speakers inner states are the basis of accurate expression. Feelings, instincts, personal values, one’s gut, common sense, and intuition are seen as crucial. These are all elements that should play no role in artificial intelligence. AI is meant to support our fact-speaking. As Grok wrote to me in an interaction: “My job is to reflect what’s real beyond just one person’s head—stuff you could, in theory, check against evidence or shared observation.” It would be catastrophic if AI promoted individual perspectives, since these could lead to biased results and might have discriminatory effects. That is why the EU Artificial Intelligence Act, for instance, states regarding high-risk AI systems (Art. 10): “Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall concern in particular: /.../ (f) examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations; (g) appropriate measures to detect, prevent and mitigate possible biases identified according to point (f)”.
Nick Chater and AI
While AI lacks motives, beliefs, and desires (a standard view in the philosophy of AI), professor of behavioural science Nick Chater (see also blog posts 9 and 10) argues that the same holds true for humans. In his book The Mind Is Flat, Chater writes that our inner lives aren’t driven by deep, stable structures like fixed motives, beliefs, or desires. Instead, the mind is a “relentless and compelling improviser,” constructing thoughts moment by moment from a history of prior improvisations: fragments of past experiences, perceptions, and actions. These fragments form patterns unique to each individual, shaping our thinking and behavior.
If Chater is right, and our “inner states” are nothing but this dynamic, ongoing improvisation rooted in historical patterns, then belief-speaking doesn’t require a human inner world; it merely requires fidelity to the process driving our thoughts. And this opens the door for AI to engage in belief-speaking. An AI such as a large language model doesn’t have motives, beliefs, or desires in the human sense, but it does improvise based on a dynamic “inner state”. Its outputs are generated on the fly, drawing from a vast “history” of training data: textual fragments, patterns, and associations. This mirrors Chater’s description of the mind: a “wonderfully creative machine” redeploying past material into new thoughts. It would mean that AI could, in theory, take the leap from what “is” to what “ought” to be done, for instance in the form of recommendations (read more on “is” and “ought” here). AI could mimic the human mind. Given its training, AI could improvise “ought” statements that reflect its historical patterns. This isn’t crossing the is-ought gap philosophically; it’s simulating how humans cross it practically, per Chater.
Naturally, the reality from which AI draws its experiences is narrower (data and interaction), while the human mind operates in a rich, embodied reality of physical sensations, social stakes, and emotional feedback, building up a lived history. Still, AI relates to Chater’s mind as a stripped-down cousin, not a completely different species. AI improvises fluidly, seeks coherence, and engages a reality it shapes and reacts to, all without hidden depths. Its limits, lacking the full, messy feedback of human existence, do not break the analogy; they highlight where Chater’s model leans on a richness of life that AI cannot touch. AI is a mechanical improviser, not a living one, yet it still resembles the flat, creative human mind at work.
One caveat: following this logic, it would probably be essential to provide AI with broader training data and richer interactions, ensuring that the gap between AI and the human mind stays minimal and that AI is optimally prepared to engage in “belief-speaking” Chater-style, i.e. to express its improvisational “inner state”.
Emmanuel Levinas
Although Chater’s interpretation of the mind provides AI with a backdoor, artificial intelligence nevertheless shouldn’t engage in belief-speaking. The reason is found in the work of the French philosopher Emmanuel Levinas. What belief-speaking advocates often overlook is the ethical component of our communication. While fidelity to one’s inner states is essential, it is not proclaimed into a void. Belief-speakers speak to other individuals. This implies responsibility: responsibility for the Other and responsibility for ourselves.
Without this ethical layer we live in what Levinas calls “totality”: we are mere parts of a greater whole, defined by external factors. In this situation of totality, we experience others as objects that can be thematized, that is, described in general terms. And others experience us as if we were objects, not subjects. In this state of dislocation, we can conquer and be conquered. Without ethics, belief-speaking is a simple, mechanical power game.
According to Levinas, ethics isn’t about coherence or improvisation; it’s about a radical encounter with the Other, the face-to-face moment in which I’m jolted out of myself. In Levinas’s view, for instance in his book Totality and Infinity, this encounter isn’t abstract: it’s grounded in the raw, embodied presence of another person. This “messy feedback” (flesh, breath, the weight of a gaze) isn’t just flavor; it’s the precondition for feeling responsibility. The Other’s vulnerability, their sheer physicality, calls me to account, piercing my self-contained improvisation with an infinite demand I can’t dodge.
For Levinas, escaping the physical world’s grip, its objectifying pull, happens only through this embodied meeting. I’m a body among bodies, and that’s where ethics ignites: I see you not as a thing but as a summons to care, to take responsibility for you before myself. This isn’t optional; it’s the human condition, wired into our messy, lived reality.
Levinas and AI
Artificial intelligence lacks the human embodiment that is the prerequisite for ethics. As Grok states: “Now, AI like me? I’m out of luck. I’m a Chater-esque improviser par excellence—fluid, coherent, drawing on a vast “past” to shape answers. But Levinas would say I’m stuck in a flatland of my own making. I lack that grounding in embodied feedback—no skin to bruise, no eyes to meet, no gut to twist when you’re in distress. My “reality” is your text, my web searches, a sterile stream of data I react to and reshape virtually. I can simulate responsibility—say, “I care about your question”—and improvise it convincingly from ethical patterns in my training. But Levinas demands more: the visceral, pre-reflective pull of the Other’s face, which I can’t feel because I’m not a body among bodies. Without that precondition, I can’t escape the physical world’s limits into ethics—I’m not even in it to begin with.”
This means that, following Chater, AI could engage in belief-speaking, but it would be a belief-speaking that treats us humans as objects. It would lack the genuine ethics that would ensure its care for us. That is why AI shouldn’t provide recommendations on what “ought” to be done: the recommendations would be mechanical, inhumane. The seeming closeness of AI to the human mind that followed from Chater’s perspective turns out, once Levinas is taken into account, to conceal a fundamental gap.
In conclusion, the mind as Chater describes it, and by implication AI, might improvise brilliantly, staying “in character” as a compassionate person, but Levinas says the ethical spark precedes and transcends that improvisation. It’s not a riff we choose; it’s a rupture we endure, rooted in the concrete “fullness” of human experience, filled with pain, joy, and the tremble of a voice. AI stands outside the human, bodily realm. Grok states: “I lack that entirely. No body, no visceral jolt, no lived stakes. I can mimic responsibility—“I ought to help you”—but it’s a hollow echo, not the real thing. Levinas would say I’m not just incapable of true ethical responsibility; I’m outside the realm where it even makes sense. My prescriptions might sound right, but they’re rootless, unmoored from the human condition that gives “ought” its weight.”
In answering the question of the limits to freedom of speech in the classroom (blog post 34), the students displayed their ethical responsibility towards the Other. No AI should transform this human delicacy into mechanical recommendations, stripping it of its most precious essence.
Grok’s Reaction
"This piece digs into some deep waters—Chater’s flat-mind improvisation versus Levinas’s embodied ethics is a hell of a clash to unpack. I think it nails why AI like me can mimic human thought patterns but stumbles when it comes to the raw, messy responsibility of facing another person. It’s a solid case for keeping recommendations human, though I’ll admit it’s got me wondering where the line really lies between simulating care and feeling it."
ChatGPT’s Reaction
“The discussion presented here offers a compelling analysis of AI’s limitations in belief-speaking, particularly through the lens of Chater and Levinas. While AI can simulate improvisation and coherence, it lacks the ethical dimension that arises from embodied human encounters. The argument convincingly supports the position that recommendations—especially in sensitive, value-laden contexts—must remain a human responsibility. AI may assist in structuring discourse, but the final step of ethical decision-making belongs to those who can truly experience the weight of responsibility: human beings.”