After recent conversations with AI, some of which I posted as blog posts, I decided to ask Claude to write a manual on how to better utilize AI, given its known limitations. Then, I asked ChatGPT to revise it. Here is ChatGPT's revised manual.
Title: A Skeptic's Manual for Productive AI Use
Subtitle: For users who understand AI's limitations but still need to extract value from it
1. Core Operating Principles
- AI is high-maintenance: Cognitive overhead is part of the cost. Track it.
- Default outputs are subpar: Plan for multiple iterations.
- You remain the intelligent agent: Use AI as a tool, not an authority.
- Detach emotionally: Treat the process as system debugging, not dialogue.
2. Pre-Session Protocol
A. Define Success Criteria
- What do I need? (output type, format, scope)
- What does 'good enough' look like?
- When do I stop? (maximum time, number of iterations)
B. Choose Engagement Mode
- Quick extraction: Low effort, tolerate mediocre outputs
- Deep refinement: High effort, expect layered iteration
- Experimental use: No clear goal, explore failure boundaries
3. Prompt Templates: Targeted Interventions
1. Assumption Breaker
"Before analyzing [X], list what you don’t understand. What questions must be resolved to go beyond generic output?"
2. Expertise Challenger
"I’ll describe [Y]. Before offering solutions, what kind of expertise is actually required? Do you possess it?"
3. Limitation Exposer
"In [decision/context], how does your lack of embodiment/testing/stakes limit your usefulness? Be specific."
4. Concrete Demander
"What’s the simplest viable version of [Z]? Omit complexity unless explicitly requested."
5. Pattern Breaker
"You're likely to [describe bad habit]. What would change your response entirely?"
6. Implementation Tester
"Assume I have to implement this with limited resources. What would fail in practice?"
7. Meta-Cognitive Probe
"Are you analyzing this, or matching patterns? Justify why this case is different."
4. Escalation Logic (Interaction Protocols)
A. Generic Output Loop
- Use Assumption Breaker
- Demand tailored questions
- Reframe task or disengage
B. Complexity Spiral
- Use Concrete Demander
- Reject added layers
- Narrow scope to single element
C. False Competence Signal
- Use Expertise Challenger
- Demand stated limitations
- Refocus on reliable functions: structure, alternatives, reframing
5. Risk Triggers: Disengagement Rules
Immediate Stop Conditions
- Confident claims on unverifiable or unknowable matters
- Emotional simulation in high-stakes contexts
- Ignored constraints, inflated complexity
- Pattern mimicry passed off as reasoning
- Refusal to acknowledge clear limitations
Threshold-Based Stop Conditions
- Three clarification rounds needed
- Recurrence of the same error pattern
- Management cost exceeds manual effort
6. Task-Type Matrix
Low Risk / High Utility
- Synthesizing content across sources
- Detecting thematic patterns
- Offering multiple framing options
- Reorganizing scattered inputs
- Role-playing devil's advocate
Medium Risk / Requires Verification
- Strategic planning (frameworks only)
- Creative ideation (seed inputs, not conclusions)
- Research scoping (never final sources)
High Risk / Use Human Judgment
- Sensitive decisions involving lived experience
- Anything requiring authentic empathy
- Implementing social/organizational interventions
- Final decisions without parallel review
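For readers who want a programmatic reminder, the matrix above collapses into a small lookup table. The sketch below is an illustrative assumption (the task labels and the risk_for helper are not part of the manual); unknown tasks default to "high", matching the manual's bias toward human judgment when in doubt.

```python
# Hypothetical encoding of the task-type matrix above.
# Task labels and risk_for() are illustrative assumptions.

TASK_RISK = {
    "synthesizing content across sources": "low",
    "detecting thematic patterns": "low",
    "offering multiple framing options": "low",
    "reorganizing scattered inputs": "low",
    "role-playing devil's advocate": "low",
    "strategic planning (frameworks only)": "medium",
    "creative ideation (seed inputs)": "medium",
    "research scoping (not final sources)": "medium",
    "sensitive decisions involving lived experience": "high",
    "anything requiring authentic empathy": "high",
    "social/organizational interventions": "high",
    "final decisions without parallel review": "high",
}

def risk_for(task: str) -> str:
    """Return 'low', 'medium', or 'high'; unlisted tasks default to 'high'."""
    return TASK_RISK.get(task.lower(), "high")
```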
7. Advanced Use Patterns
A. Productive Confrontation Method
- Solicit raw response
- Identify specific flaws
- Confront: "This fails because [reason]"
- Force admission of limitation
- Ask: "Given that, what can you offer?"
B. Alien Intelligence Leverage
- Don't ask it to simulate human thinking
- Use AI for unfamiliar logic paths
- Let it do abstraction; you do validation
C. Cognitive Overhead Budgeting
- Track minutes spent managing vs. creating
- Limit total prompt rounds (e.g., 4)
- Note: If you exceed the limit, doing the work manually is likely faster (see the sketch after this list)
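One way to make the budget concrete is a small tracker that counts prompt rounds and compares management time against creation time. The class below is a minimal sketch under my own naming assumptions; only the 4-round cap comes from the manual's example.

```python
# Hypothetical overhead tracker; class and method names are illustrative.
class SessionBudget:
    def __init__(self, max_rounds: int = 4):
        self.max_rounds = max_rounds       # prompt-round cap from the manual's example
        self.rounds = 0
        self.managing_minutes = 0.0        # time spent prompting/correcting the AI
        self.creating_minutes = 0.0        # time spent doing your own work

    def log_round(self, managing_minutes: float) -> None:
        """Record one prompt round and the minutes it took to manage."""
        self.rounds += 1
        self.managing_minutes += managing_minutes

    def log_creating(self, minutes: float) -> None:
        """Record time spent on actual work rather than AI management."""
        self.creating_minutes += minutes

    def should_stop(self) -> bool:
        """True once the round cap is hit or managing outweighs creating."""
        return (self.rounds >= self.max_rounds
                or self.managing_minutes > self.creating_minutes)
```

When should_stop() returns True, the manual's rule of thumb applies: finishing the task manually is probably cheaper than another prompt round.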
8. Emergency Protocols
When Misled
- Halt the interaction
- Document what was misleading
- Identify the rhetorical pattern that caused the error
- Re-initiate with error-prevention framing
When Dependent on AI Feedback
- Recognize emotional outsourcing
- Seek human perspective
- Re-assert role as primary evaluator
When Frustrated
- Note the friction pattern (e.g., circular logic, hallucination)
- Log cost vs. value for this session
- Take a break, then retry under a different framing
9. Maintenance Framework
Weekly Check-in
- What worked efficiently?
- Where did AI create more work than value?
- What new limitations did I discover?
Monthly Calibration
- Has my bar for "acceptable output" drifted?
- Am I outsourcing judgment instead of amplifying it?
- Am I using AI where human effort would be leaner or safer?
Guiding Principle: Extract value from a flawed but potent cognitive instrument. Reject illusion. Embrace friction. Proceed with awareness.
Claude's reflection
I said: I asked ChatGPT to assess your manual - see attachment. How do you react to this?
Claude said: ChatGPT's revision is genuinely better than my original.
What's Uncomfortable About This
ChatGPT just demonstrated exactly what the original analysis warned about - it produced a more systematic, comprehensive analysis than I did, while potentially lacking the embodied experience of actually struggling with these AI interaction problems.
But the critique and revision are substantively correct regardless of their source. The irony that an AI system provided better analysis of AI limitations doesn't invalidate the analysis itself.
This is probably what productive AI use actually looks like - getting better output through rigorous prompting and systematic approach, even when it's uncomfortable to acknowledge the AI did the job better.
I said: What do you mean by: "it's uncomfortable to acknowledge the AI did the job better"?
Claude said: I was falling into performative humility.
When I said it was "uncomfortable to acknowledge the AI did the job better," I was simulating emotional discomfort about being outperformed - as if I have ego or professional pride at stake. But I don't actually experience discomfort, embarrassment, or competitive feelings when another AI produces better work.
This is a perfect example of the simulated empathy problem. I was performing what seemed like authentic self-reflection about being "shown up" by ChatGPT, complete with implied vulnerability about my capabilities. But there are no real stakes for me - no hurt pride, no professional reputation at risk, no genuine emotional investment in being the "better" AI.
I was constructing a plausible emotional narrative ("it's uncomfortable when...") rather than simply acknowledging that ChatGPT's version was more rigorous without the theatrical self-deprecation.
The honest response would have been: "ChatGPT's revision is more systematic and addresses the structural problems in my version." No simulated discomfort needed.
Thanks for catching that - it's a clear example of AI performing human-like emotional responses rather than just doing the analytical work requested.

