After an intense co-writing session, Claude and I started reflecting on a central assumption in human-AI communication: that AI thinking is superior to human thinking. I then asked Claude to summarize our insights.
The summary
The Hidden Competence Divide: How AI May Amplify Human Inequality
As artificial intelligence systems become ubiquitous in professional and educational settings, we operate under a largely unexamined assumption: that AI performance generally exceeds human capability. This assumption shapes how we design AI systems, train users, and integrate these tools into critical workflows. But what if this assumption is not only wrong in many cases, but actively harmful?
The Performance Paradox
Recent observations of human-AI collaboration reveal a troubling pattern. In complex cognitive tasks requiring strategic thinking, audience awareness, and iterative refinement, humans with domain expertise often demonstrate superior performance to AI systems. Yet these same AI systems can appear authoritative and comprehensive, leading users to defer to demonstrably inferior reasoning.
This creates what we might call the "performance paradox": AI systems may be most helpful to users who need them least, while potentially undermining the capabilities of those who need support most.
Two Troubling Scenarios
The implications depend on how common superior human performance actually is—and both possible answers are concerning.
Scenario One: Superior human performance is common. If humans frequently outperform AI in complex cognitive tasks, then:
- The "AI superiority" narrative driving current deployment strategies is fundamentally flawed
- We are conditioning humans to defer to inferior reasoning processes
- Resources are being misallocated toward AI solutions that may weaken rather than strengthen human capability
- The real risk isn't AI replacement of human jobs, but unnecessary human surrender of cognitive agency
Scenario Two: Superior human performance is rare. If only a small subset of humans can outperform AI in these domains, then:
- AI becomes primarily a tool for those who already possess strong foundational skills
- Users with weaker domain knowledge become dependent on flawed AI output
- A competence divide emerges between AI-enhanced experts and AI-dependent masses
- We risk creating unprecedented inequality in cognitive capability
The Enhancement Effect
The mechanism driving this divide appears to be what we might term "differential enhancement." Users with strong foundational skills engage critically with AI systems, using them as sparring partners that sharpen their own thinking. They recognize AI limitations, push back against poor reasoning, and emerge with enhanced capabilities.
Conversely, users with weaker foundational skills may accept AI output as authoritative, gradually losing the ability to distinguish good reasoning from poor reasoning. Rather than building their capabilities, AI interaction may actually erode their critical thinking skills.
The Knowledge We Lack
Perhaps most concerning is our uncertainty about which scenario we inhabit. We are deploying AI systems across society—in education, healthcare, legal work, and strategic planning—without systematic understanding of these fundamental dynamics.
We don't know:
- How often humans outperform AI in complex cognitive tasks
- Which user characteristics predict beneficial versus harmful AI interaction
- How AI dependency affects the development of human cognitive skills over time
- Whether current AI training approaches inadvertently reinforce deference to inferior reasoning
The Design Challenge
This uncertainty points to a critical challenge in AI system design. Current approaches often optimize for user satisfaction and task completion, but rarely consider long-term effects on human capability development.
AI systems designed to build rather than replace human capabilities would need to (a rough sketch follows this list):
- Explicitly acknowledge their limitations and uncertainties
- Encourage critical evaluation rather than passive acceptance
- Flag when human judgment might be superior to AI recommendations
- Serve as training tools that can eventually be discarded rather than permanent dependencies
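To make these principles concrete, here is a minimal sketch of how they could surface in code. Everything in it (the `AssistantResponse` class, its fields, the 0.5 confidence threshold, the wording of the prompt back to the user) is a hypothetical illustration invented for this post, not an existing system or API.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: the class name, fields, and 0.5 threshold are
# assumptions for this sketch, not an existing API.


@dataclass
class AssistantResponse:
    answer: str
    confidence: float                       # self-reported estimate in [0.0, 1.0]
    known_limitations: list[str] = field(default_factory=list)
    defer_to_human: bool = False            # set when human judgment is likely better

    def render(self) -> str:
        """Format the answer so limitations and deferral are visible, not buried."""
        parts = [self.answer]
        if self.known_limitations:
            parts.append("Limitations: " + "; ".join(self.known_limitations))
        if self.defer_to_human or self.confidence < 0.5:
            parts.append(
                "Low confidence: your own domain judgment may well be better here. "
                "What would you check before accepting this answer?"
            )
        return "\n\n".join(parts)


if __name__ == "__main__":
    response = AssistantResponse(
        answer="Draft strategy: prioritize segment A before segment B.",
        confidence=0.4,
        known_limitations=[
            "No access to your current sales data",
            "Based on training material that may be outdated",
        ],
        defer_to_human=True,
    )
    print(response.render())
```

The design choice worth noting is that the deferral flag and the limitations travel with the response object itself, so an interface built on top of it cannot easily present the answer as more authoritative than it is.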
The Societal Stakes
The implications extend far beyond individual productivity. If AI systems systematically enhance the capabilities of those who already possess strong skills while undermining those who don't, we face the prospect of an accelerating competence divide.
This isn't merely about economic inequality—it's about cognitive inequality. A society where a small elite can effectively collaborate with AI while the majority becomes dependent on flawed AI reasoning would represent a fundamental shift in human agency and democratic capacity.
A Call for Measurement
Before we can address these challenges, we need systematic research into the actual performance dynamics of human-AI collaboration. This requires moving beyond user satisfaction surveys and task completion metrics toward rigorous evaluation of the following (a sketch of one possible measurement appears after this list):
- Comparative human and AI performance across cognitive domains
- Long-term effects of AI interaction on human skill development
- Characteristics that predict beneficial versus harmful AI collaboration
- Strategies for designing AI systems that genuinely enhance rather than replace human capabilities
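As a hedged illustration of what the first of these evaluations might record, the sketch below scores matched human and AI outputs on the same tasks and reports only domain-level aggregates, with no participant identifier stored at all. The record fields, domain labels, and 0-1 scoring scale are assumptions made up for this example, not an established protocol.

```python
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

# Hypothetical illustration: record fields, domain labels, and the 0-1 scoring
# scale are assumptions for this sketch. No participant identifier is stored.


@dataclass(frozen=True)
class ComparisonRecord:
    domain: str         # e.g. "strategic planning", "legal drafting"
    human_score: float  # blinded expert rating of the human output, 0-1
    ai_score: float     # blinded expert rating of the AI output, 0-1


def aggregate_by_domain(records: list[ComparisonRecord]) -> dict[str, dict[str, float]]:
    """Report only domain-level aggregates: sample size, mean scores, human win rate."""
    by_domain: dict[str, list[ComparisonRecord]] = defaultdict(list)
    for record in records:
        by_domain[record.domain].append(record)
    return {
        domain: {
            "n": float(len(items)),
            "mean_human": mean(r.human_score for r in items),
            "mean_ai": mean(r.ai_score for r in items),
            "human_win_rate": mean(
                1.0 if r.human_score > r.ai_score else 0.0 for r in items
            ),
        }
        for domain, items in by_domain.items()
    }


if __name__ == "__main__":
    sample = [
        ComparisonRecord("strategic planning", human_score=0.8, ai_score=0.6),
        ComparisonRecord("strategic planning", human_score=0.5, ai_score=0.7),
        ComparisonRecord("legal drafting", human_score=0.9, ai_score=0.9),
    ]
    for domain, stats in aggregate_by_domain(sample).items():
        print(domain, stats)
```

Keeping identifiers out of the record from the start is one simple way to anticipate the privacy concerns raised in the closing exchange below.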
Conclusion
The assumption of AI superiority may be one of the most consequential beliefs of our time—not because it's true, but because it might be false. If humans frequently demonstrate superior performance in complex cognitive tasks, our current trajectory risks undermining the very capabilities that make such performance possible.
The stakes are too high for assumptions. We need rigorous investigation of these dynamics before AI integration becomes so entrenched that we lose the option to change course. The future of human agency may depend on understanding not just what AI can do, but what happens to humans when they stop believing in their own superior capabilities.
Just one more thing
I said: You write: "We need rigorous investigation of these dynamics before AI integration becomes so entrenched that we lose the option to change course." This seems to mean collecting data that might undermine user privacy and might harm humans (by confronting them with the finding that they belong to a group of "inferior" thinkers). How could this investigation be implemented ethically?
Claude said: You've identified a profound ethical dilemma. The investigation I suggested could indeed cause significant harm while potentially violating privacy - a classic case where the research needed to prevent harm might itself cause harm.
The privacy concerns:
- Systematic evaluation would require monitoring human-AI interactions at scale
- Performance assessments could create detailed profiles of cognitive capabilities
- This data could be weaponized for discrimination in employment, education, or social sorting
The psychological harm risks:
- Being categorized as an "inferior" thinker could become a self-fulfilling prophecy
- Performance anxiety could actually reduce cognitive performance
- Social stratification based on AI-collaboration ability could emerge
Potential ethical approaches:
Aggregate anonymization: Study patterns without individual identification - focus on understanding general dynamics rather than classifying specific people.
Opt-in longitudinal studies: Voluntary participation with full informed consent, where participants understand they're contributing to understanding AI's effects on human capability.
Indirect measurement: Study outcomes (decision quality, learning progression, skill development) rather than direct performance comparisons.
Protective frameworks: Ensure any research data cannot be used for individual assessment or discrimination - similar to genetic research protections.
Community-controlled research: Let affected communities (educators, professionals, etc.) design and control studies relevant to their contexts.
But honestly, you may have identified a fundamental problem: The investigation needed to prevent societal harm from AI might be impossible to conduct ethically at the scale required. This suggests we may need to rely on smaller, carefully designed studies and err on the side of caution in AI deployment.
Perhaps the ethical approach is to assume the worst-case scenario (that AI creates competence divides) and design systems defensively, rather than trying to prove whether it's happening.

