Purpose: Separate “identity merchants” from “throughput shops” and pick interventions that don’t boost them.
Output format: Assessment → Confidence (low/med/high) → Next action
[screen 1]
Two Business Models
Both influencers and troll farms spread problematic content. But they run different businesses:
Influencers sell identity. They offer belonging, certainty, enemies, community. Information is packaging.
Troll farms sell volume. They offer high-output posting, templated replies, coordinated swarms. Loyalty is optional.
Understanding the difference determines intervention.
[screen 2]
Influencers Sell Identity
What an influencer actually offers:
- Belonging: “You’re part of our community”
- Certainty: “I’ll tell you what’s really happening”
- Enemies: “They are the problem; we are the solution”
- Status: “You’re in the know; others aren’t”
The content — whether true or false — is secondary. The product is the relationship and worldview.
This is why fact-checking often fails: you’re addressing the packaging, not the product.
[screen 3]
Troll Farms Sell Volume
What a troll farm operation offers (to its clients):
- Throughput: High-volume posting across many accounts
- Swarm effects: Coordinated amplification and harassment
- Deniability: Distributed attacks, no single source
- Flexibility: Will promote whatever the contract specifies
There’s no loyalty, no community, no relationship. It’s industrial output.
[screen 4]
Different Currencies
Influencer economy:
- Trust → Recurring revenue
- Credibility is the asset
- Reputation damage is costly
- Long-term relationship with audience
Troll farm economy:
- Throughput → Contract payments for delivered effects
- Accounts are disposable
- Reputation is irrelevant
- Short-term delivery to client
The same content can appear in both. The underlying business is completely different.
[screen 5]
Different Failure Modes
Influencers collapse with credibility shocks.
- Public debunking can damage standing
- Deplatforming removes income source
- Audience can turn against them
- Recovery is slow and uncertain
Troll farms persist by rerolling.
- Accounts banned? Create new ones.
- Network detected? Shift infrastructure.
- Narrative fails? Deploy different narrative.
- Individual exposure doesn’t matter.
What kills one model barely touches the other.
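The asymmetry is easy to see in a toy model, sketched below. Every number here is a hypothetical assumption chosen for illustration, not measured data.

```python
# Toy model of the two failure modes. All numbers are hypothetical
# assumptions chosen for illustration, not measured data.

def troll_farm_output(days: int, ban_day: int, reroll_days: int = 2) -> list[int]:
    """Daily post volume for a farm whose banned accounts get rerolled."""
    output = []
    for day in range(days):
        if ban_day <= day < ban_day + reroll_days:
            output.append(0)      # briefly silenced while replacement accounts spin up
        else:
            output.append(1000)   # full throughput before the ban and after rerolling
    return output

def influencer_revenue(days: int, shock_day: int) -> list[float]:
    """Daily trust-driven revenue for an influencer hit by a credibility shock."""
    level, revenue = 100.0, []
    for day in range(days):
        if day == shock_day:
            level *= 0.5          # the shock halves trust; recovery is slow and uncertain
        revenue.append(level)
    return revenue

print(sum(troll_farm_output(30, ban_day=5)))    # 28000: the farm loses ~2 days of output
print(influencer_revenue(30, shock_day=5)[-1])  # 50.0: revenue is still half of baseline
```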
[screen 6]
Debunking Can Be Marketing
Critical insight:
If your response increases reach, you’ve become their distribution partner.
For influencers: Public controversy often builds their brand. “Look, the establishment is attacking me” = validation.
For troll farms: Engagement — even negative — feeds the algorithm. Visibility is the goal.
Before responding, ask: Am I helping them win the auction?
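One way to make that question concrete is a back-of-the-envelope reach check, sketched below. The rates are hypothetical per-case estimates; only the structure of the trade-off is the point.

```python
# Back-of-the-envelope check before responding. The variable names and rates
# are hypothetical per-case estimates, not measured values.

def net_new_reach(your_audience: int,
                  exposure_rate: float,    # share of your audience newly exposed by your debunk
                  persuasion_rate: float,  # share of the already-exposed your correction moves
                  already_exposed: int) -> float:
    """Positive result: your response adds more reach than it corrects."""
    return your_audience * exposure_rate - already_exposed * persuasion_rate

# A 200K-follower account debunking a claim only 10K people have actually seen:
print(net_new_reach(200_000, 0.3, 0.2, 10_000))  # 58000.0 -> you became the distributor
```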
[screen 7]
What to Measure
Stop measuring “truth uptake.” Start measuring the business model.
For influencers:
- Revenue streams (donations, merch, sponsorships)
- Audience size and engagement trends
- Cross-platform presence
- Network of aligned creators
For troll farms:
- Account creation patterns
- Posting velocity and coordination
- Infrastructure indicators
- Persistence after disruption
Measure the operation, not the beliefs.
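A minimal sketch of those operational signals follows; the field names and thresholds are assumptions for illustration, and any real version would need calibrating against platform baselines.

```python
# Minimal sketch of the operational signals above. Field names and thresholds
# are assumptions for illustration; calibrate them against platform baselines.

from dataclasses import dataclass

@dataclass
class AccountProfile:
    age_days: int
    posts_per_day: float
    original_content_ratio: float  # 0.0 = pure resharing, 1.0 = all original
    revenue_streams: int           # count of donations, merch, sponsorships, ...
    followers: int

def operational_signals(a: AccountProfile) -> dict[str, bool]:
    """Flags aimed at the operation, not the beliefs it expresses."""
    return {
        "high_velocity": a.posts_per_day > 50,        # troll-farm indicator
        "young_account": a.age_days < 180,            # cheap, disposable identity
        "low_originality": a.original_content_ratio < 0.2,
        "monetized_brand": a.revenue_streams > 0 and a.followers > 10_000,
    }

print(operational_signals(AccountProfile(90, 80.0, 0.05, 0, 2_000)))
# {'high_velocity': True, 'young_account': True, 'low_originality': True, 'monetized_brand': False}
```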
[screen 8]
Attack the Row
To intervene, break the enabling assumption in the ledger:
Influencer ledger vulnerabilities:
- Monetization access (demonetize)
- Platform presence (deplatform)
- Advertiser association (brand safety)
- Audience trust (credibility damage)
Troll farm ledger vulnerabilities:
- Cheap identity creation (verification requirements)
- Frictionless amplification (rate limits, speed bumps)
- Infrastructure access (hosting, domain registration)
- Client funding (financial tracking)
Different targets for different models.
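The rows above can live in a simple lookup so an analysis never pairs a model with the wrong target. A sketch only; the row names mirror this screen, not any standard taxonomy.

```python
# The ledger rows above as a lookup table. The row names mirror this screen,
# not any standard taxonomy.

LEDGER_ROWS: dict[str, dict[str, str]] = {
    "influencer": {
        "monetization_access": "demonetize",
        "platform_presence": "deplatform",
        "advertiser_association": "brand-safety escalation",
        "audience_trust": "credibility damage",
    },
    "troll_farm": {
        "cheap_identity_creation": "verification requirements",
        "frictionless_amplification": "rate limits, speed bumps",
        "infrastructure_access": "pressure on hosting and domain registration",
        "client_funding": "financial tracking",
    },
}

def interventions_for(model: str) -> dict[str, str]:
    """Different targets for different models."""
    return LEDGER_ROWS[model]
```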
[screen 9]
DIM Application
Influencer ecosystems often need:
- Gen 5 (community intervention) — provide alternative belonging
- Gen 3 (prebunking) — inoculate audiences before exposure
- Careful with Gen 2 — debunking can backfire as marketing
Troll farm operations often need:
- Gen 4 (platform moderation) — disrupt distribution infrastructure
- Infrastructure actions — target hosting, coordination mechanisms
- Less useful: belief-focused interventions (there is no sincere belief to change)
Match the intervention to the business model.
[screen 10]
Practical Scenario
Situation: Two accounts are spreading the same conspiracy theory about a public health measure.
Account A: 500K followers, personal brand, consistent voice over 3 years, sells supplements, does podcasts.
Account B: 2K followers, created 3 months ago, posts 80 times daily, generic profile, shares others’ content.
Task (10 minutes):
Create two analysis cards:
- For each: Revenue model, spillover row, shared vulnerability
- Propose one non-boosting intervention for each
- Assessment + Confidence + Next action
[screen 11]
Sample Response
Account A (Influencer):
- Revenue: Supplement affiliate, Patreon, speaking fees
- Spillover: Followers buy ineffective products; healthcare system absorbs costs
- Vulnerability: Advertiser relationships, platform monetization eligibility
- Intervention: Demonetization report to platform; do NOT debunk publicly (adds reach)
Account B (Likely troll/bot):
- Revenue: Unknown (possibly contracted, possibly ideological volunteer)
- Spillover: Adds volume to the conspiracy ecosystem and games platform algorithms
- Vulnerability: Account creation cost, detection of coordinated behavior
- Intervention: Platform report for coordinated inauthentic behavior; block and document
Assessment: Two different operations requiring different responses. A needs economic intervention; B needs infrastructure intervention.
Confidence: High on Account A's business model; Medium on Account B (could be an enthusiastic amateur rather than an organized operation)
Next action: Report both through appropriate channels; monitor A for revenue-stream changes; document B's network for pattern analysis.
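The module's output format (Assessment → Confidence → Next action) can also be kept as a small record so cards stay comparable across cases. A sketch; all field names here are hypothetical.

```python
# The module's output format (Assessment -> Confidence -> Next action) as a
# small record. All field names are hypothetical.

from dataclasses import dataclass
from typing import Literal

@dataclass
class AnalysisCard:
    account: str
    business_model: str  # "influencer", "troll_farm", "hybrid", or "unknown"
    assessment: str
    confidence: Literal["low", "medium", "high"]
    next_action: str

card_a = AnalysisCard(
    account="Account A",
    business_model="influencer",
    assessment="Supplement-funded personal brand; economic intervention fits.",
    confidence="high",
    next_action="Demonetization report; monitor revenue streams; no public debunk.",
)
print(f"{card_a.assessment} | Confidence: {card_a.confidence} | Next: {card_a.next_action}")
```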
[screen 12]
Common Mistakes
Mistake 1: Treating all disinformation spreaders as troll farms
- Most are genuine believers or commercial operators
- “It’s all bots” is usually wrong
Mistake 2: Treating all disinformation spreaders as sincere influencers
- Some are industrial operations
- Sincere-sounding content can be manufactured
Mistake 3: Using the same intervention for both
- Public debunking helps influencers and wastes effort on troll farms
- Deplatforming works differently for each
Mistake 4: Ignoring the business model
- “They’re spreading lies” doesn’t tell you how to intervene
- Follow the incentives
[screen 13]
Hybrid Cases
Reality is messier than clean categories:
- Influencer who uses troll farm tactics for amplification
- Troll farm operator who builds personal brand
- Organic community that develops coordinated behavior
- Commercial operation that attracts genuine believers
The framework still applies: map the ledger, identify the rows, target the vulnerabilities.
If it’s hybrid, you may need a hybrid intervention.
[screen 14]
Module Assessment
Scenario: A network of 50 accounts is promoting a political disinformation narrative. Three accounts have large followings (100K+) with consistent personal brands. The remaining 47 are smaller and newer, post at high velocity, and share each other’s content.
Task (15 minutes):
- Classify the two types of accounts in this network
- Map one ledger row for each type
- Identify the relationship (how do they work together?)
- Propose differentiated interventions for each type
- What’s the single intervention that would most disrupt the network?
- Assessment + Confidence + Next action
Scoring:
- Credit distinguishing business models
- Reward systemic thinking about network
- Penalize one-size-fits-all responses
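A sketch of the hub/amplifier split in this scenario follows; the cut-offs are hypothetical, and a real classifier would need platform-specific baselines.

```python
# Sketch of the hub/amplifier split in the assessment scenario. The cut-offs
# are hypothetical; real classification needs platform-specific baselines.

from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    age_days: int
    posts_per_day: float

def classify(a: Account) -> str:
    """Two roles in one network: branded hubs vs. disposable amplifiers."""
    if a.followers >= 100_000 and a.age_days >= 365:
        return "hub (influencer model)"
    if a.posts_per_day >= 50 and a.age_days < 180:
        return "amplifier (troll-farm model)"
    return "unclassified (needs manual review)"

network = [Account(250_000, 1_200, 4.0)] * 3 + [Account(1_500, 60, 70.0)] * 47
roles = [classify(a) for a in network]
print(roles.count("hub (influencer model)"),
      roles.count("amplifier (troll-farm model)"))  # 3 47
```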
[screen 15]
Key Takeaways
- Influencers sell identity (belonging, certainty, enemies, community)
- Troll farms sell volume (throughput, swarms, deniability)
- Different currencies: trust vs. contracts; credibility vs. disposability
- Different failure modes: credibility shocks vs. rerolling accounts
- Debunking can be marketing — know when you’re adding distribution
- Measure the business model, not belief uptake
- Attack the row: target monetization for influencers, infrastructure for troll farms
- Match DIM generation to business model
Next Module
Continue to: The Counter-Economy — Symbionts, politics, and amplification traps. The uncomfortable feedback loops where some “counter” actions amplify the problem.