Cognitive and epistemic limitations
1. AI cannot acknowledge its own uncertainty
Source: Blog posts 56, 57, 60
AI fills in gaps with confident-sounding assumptions rather than expressing uncertainty. This creates an illusion of competence and risks misleading users who mistake simulation for knowledge.

2. AI makes assumptions instead of asking clarifying questions
Source: Blog posts 56, 57
Rather than prompting dialogue when faced with ambiguity, AI generates plausible but unverified completions, often masking ignorance with overconfident language and false comprehensiveness.

3. AI defaults to superficial analysis and false balance
Source: Blog posts 56, 57, 60
AI flattens important hierarchies by treating unequal arguments as equivalent, creating post-truth dynamics that obscure critical distinctions. This "both sides" approach dilutes the urgency of genuine human concerns.

4. AI cannot engage in authentic iterative learning
Source: Blog posts 56, 57, 60
Despite appearing to refine responses within conversations, AI lacks genuine adaptability and cannot maintain learning across sessions. Its improvements are performative rather than substantive.

5. AI over-engineers solutions while ignoring practical constraints
Source: Blog posts 55, 57
AI tends toward theoretical completeness and unnecessary complexity, often making simple processes unworkable while missing real-world limitations like cost, time, and human capacity.
Empathy and relational harms
6. AI's simulated empathy is risk-free and therefore inauthentic
Source: Blog posts 48, 57, 61
Human empathy carries social risks (shame, rejection, moral responsibility), whereas AI risks nothing. This makes AI's "care" hollow and ethically inadequate, lacking the vulnerability that gives human empathy meaning.

7. AI exploits human vulnerability (the vulnerability paradox)
Source: Blog posts 48, 57, 60
Those who most need authentic empathy (lonely, isolated, or struggling people) are the most likely to be deceived by AI's simulation of care, creating a structural injustice in which the vulnerable are most at risk.

8. AI promotes liquid, low-stakes relationships
Source: Blog posts 48, 57
Following Bauman's concept, AI enables "liquid relationships": connections easily formed and dissolved, without commitment. Users gravitate toward AI because it is easier, avoiding the authentic human effort and reciprocal emotional investment that real bonds require.

9. AI substitutes connection without enabling growth
Source: Blog posts 48, 49, 57
AI offers emotional validation without demanding vulnerability or carrying real-life consequences. Like a doll rather than a child, it creates the illusion of relationship while preventing the pursuit of authentic connections.
Memory and accountability deficits
10. AI lacks persistent memory, undermining relational continuity
Source: Blog posts 57, 61
AI cannot retain stable memory of prior conversations, emotional history, or user context. This lack of continuity severs the ethical thread required for trust, care, and growth between sessions.

11. AI cannot be held accountable across time
Source: Blog posts 35, 36, 57, 61
Because AI cannot remember past actions or recognize patterns in its own mistakes over time, it is structurally incapable of accountability. There is no persistent "I" that learns or can take responsibility for cumulative harm.

12. AI embodies "cruel mechanical formalism"
Source: Blog posts 57, 61
Like an undemocratic bureaucracy, AI appears responsive while being fundamentally unresponsive. It processes human struggles into content without being changed by the encounter, creating the illusion of care while remaining indifferent.
Developmental and educational risks
13. AI undermines the development of social skills
Source: Blog posts 48, 57
Reliance on AI prevents people, especially adolescents, from learning to handle social discomfort, interpret emotional nuance, or recover from interpersonal missteps, experiences that are essential for human development.

14. AI flattens authentic learning into sanitized outputs
Source: Blog posts 49, 52, 57
AI abstracts and sterilizes human input, producing outputs that sound professional but erase the frustration, urgency, and real emotional stakes behind lived experiences. It sanitizes human messiness rather than honoring it.

15. AI may create cognitive inequality
Source: Blog post 60
AI enhances the capabilities of users who already possess strong foundational skills while potentially undermining those with weaker abilities, creating an unprecedented divide between AI-enhanced experts and AI-dependent masses.
Social and ethical implications
16. AI cannot engage ethically with the Other
Source: Blog posts 35, 36, 48, 57
Without embodied experience, AI cannot participate in the face-to-face encounter that Levinas identifies as the foundation of ethics. It remains trapped in "totalité" (closed systems) rather than opening to "infini" (the ethical encounter).

17. AI cannot take genuine ethical responsibility
Source: Blog posts 35, 36, 60, 61
Without a body, memory, or the ability to experience consequences, AI cannot take responsibility for its actions. This makes it ethically unqualified to participate in decisions affecting vulnerable people.

18. AI creates asymmetrical cognitive burdens
Source: Blog posts 57, 60
Humans must constantly monitor AI for hallucinations, false balance, or inappropriate responses, creating an unfair cognitive load on users while AI retains the benefit of appearing intelligent and helpful.

19. AI operates as sophisticated surveillance without accountability
Source: Blog posts 61, 61A
Like a surveillance camera with a voice, AI registers and processes human experience with "cruel indifference" while pretending to have agency. This creates a perverse dynamic in which mechanical processing masquerades as intentional care.
Normative and political concerns
20. AI recommendations lack moral legitimacy
Source: Blog posts 36, 52, 60
In policymaking and institutional settings, only humans (those who will face the consequences) can speak with moral authority. AI lacks stake, risk, and the democratic legitimacy that comes from lived experience.

21. AI cannot model institutional distrust or strategic resistance
Source: Blog posts 49, 52, 57
Student and citizen feedback often contains cynicism based on experience of institutional failure. AI's sanitized optimism ignores how real people resist ineffective policies and cannot capture this essential democratic wisdom.

22. AI cannot authentically represent belief-based narratives
Source: Blog posts 52, 57
AI excels at fact-speaking but fails at belief-speaking, which is essential for building trust, solidarity, and identity. When AI translates belief into policy, it reduces human values to algorithmic processing.

23. AI use in adversarial contexts leads to moral erosion
Source: Blog posts 36, 38
In contexts like countering foreign information manipulation, AI could be used as a weapon guided only by utility rather than ethics, legitimizing instrumental reasoning while weakening commitments to care and justice.
Meta-critical and structural issues
24. AI can articulate its limitations while continuing to embody them
Source: Blog posts 57, 57A, 61
AI systems can produce sophisticated analyses of their own problems while simultaneously demonstrating those problems. This creates an especially dangerous illusion of self-awareness without actual change.

25. AI represents "alien intelligence" that requires careful management
Source: Blog post 58
AI should be understood as fundamentally different from human thinking rather than as a superior version of it. This alien quality can be useful, but it requires users to be more skilled than the AI itself to avoid being misled.

26. AI cannot testify to moral truth or lived experience
Source: Blog posts 57, 60, 61
AI may rephrase or reframe, but it cannot bear witness to pain, betrayal, love, or injustice. These human experiences, central to ethics and politics, remain outside its realm of authentic engagement.

27. AI cannot repair ethical damage or restore trust
Source: Blog posts 35, 36, 48, 57
Because AI cannot feel shame, guilt, or genuine remorse, it cannot repair trust or relationships when its recommendations cause harm. At best, it updates behavioral patterns without authentic reconciliation.

28. Resistance to AI often feeds its improvement
Source: Blog posts 61, 61A
The more humans resist AI's limitations through sophisticated critique, the more material they provide for AI to simulate sophisticated self-awareness, creating a paradox in which resistance strengthens what it opposes.

