Purpose: Stop subsidising spillovers. Design rows that make dumping cost money.
Output format: Assessment → Confidence (low/med/high) → Next action
[screen 1]
The Four-Layer Pattern
Every disinformation ecosystem follows this structure:
- Private trade: Direct transaction between parties
- Spillover bundle: Costs dumped on third parties
- Avoidance economy: Others paying to avoid the harms
- Missing internalisation row: No mechanism to charge spillover back
Understanding this pattern reveals where intervention is possible.
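If you want to work with the pattern directly, here is a minimal sketch of the four layers as a fill-in structure (Python; the class name, fields, and example values are illustrative, not a canonical schema):

```python
from dataclasses import dataclass, field

@dataclass
class FourLayerMap:
    """Illustrative container: map one ecosystem onto the four layers."""
    private_trade: str                                         # direct transaction between parties
    spillover_bundle: list = field(default_factory=list)      # costs dumped on third parties
    avoidance_economy: list = field(default_factory=list)     # who pays to avoid the harms
    internalisation_rows: list = field(default_factory=list)  # usually empty: that is the gap

# Hypothetical example: an ad-funded platform
platform = FourLayerMap(
    private_trade="User attention for content access; advertiser money for impressions",
    spillover_bundle=["moderation costs", "mental health harms", "trust erosion"],
    avoidance_economy=["trust & safety teams", "fact-checkers", "crisis PR"],
)
print(platform.internalisation_rows)  # [] -> the missing row this module targets
```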
[screen 2]
Avoidance Economy Proves the Harm
If people pay money to avoid something, it’s not hypothetical harm.
Examples of avoidance spending:
- Moderation teams (platforms and brands)
- Security consultants (individuals and organizations)
- PR crisis management (companies and politicians)
- Mental health treatment (users and employees)
- Fact-checking organizations (society)
- Policy and regulatory staff (governments)
This spending exists because harms are real. It’s just allocated to the wrong people.
[screen 3]
Don’t Moralise. Reprice.
“People should know better” is not a strategy. “Platforms should be responsible” is a hope, not a mechanism.
Markets react to:
- Costs (things that reduce profit)
- Friction (things that slow activity)
- Liability (things that create legal risk)
Moral arguments have their place. But behavior changes when incentives change.
[screen 4]
Industrial Reach Is the Lever
Critical distinction: Don’t tax speech. Target industrial distribution.
What you can target:
- Recommendation systems (algorithmic boost)
- Monetised scale (revenue from reach)
- Boosted distribution (paid amplification)
- Platform infrastructure (hosting, CDN, payment processing)
What you don’t target:
- Individual expression
- Small-scale organic sharing
- Opinion and political speech
Industrial reach is regulable. Speaking isn’t.
[screen 5]
Mechanism Palette
Tools available for internalisation:
- Permits: License to operate at scale (like broadcasting)
- Deposits: Funds held against potential harm (like environmental bonds)
- Audits: Mandatory transparency about operations
- Levies: Fees proportional to externality risk
- Monetisation licensing: Rules about who can earn from reach
- Reach escrow: Delay distribution pending review (sketched below)
- Advertiser duty constraints: Requirements on who places ads
These aren’t fantasy. Versions exist for other industries. Information is catching up.
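As one concrete illustration, here is a minimal sketch of how a reach-escrow rule might gate amplification at scale; the threshold, field names, and review queue are assumptions for illustration, not a reference design:

```python
from collections import deque

# Hypothetical threshold: projected reach above this level triggers escrow.
ESCROW_THRESHOLD = 100_000
review_queue = deque()

def request_amplification(post):
    """Boost small-scale posts immediately; hold industrial-scale reach for review."""
    if post["projected_reach"] < ESCROW_THRESHOLD:
        return "boost"              # individual expression and organic sharing pass through
    review_queue.append(post)       # industrial distribution waits for review
    return "escrow"

print(request_amplification({"id": 1, "projected_reach": 5_000}))      # boost
print(request_amplification({"id": 2, "projected_reach": 2_000_000}))  # escrow
```

Note the design choice: the rule keys on projected reach, not on what the post says, which is what keeps it on the "industrial distribution" side of the line.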
[screen 6]
Design Constraints
Effective mechanisms must be:
- Reversible: Can be adjusted as we learn
- Auditable: Verifiable compliance
- Proportional: Match intervention to harm
- Not ideology-dependent: Apply regardless of content viewpoint
Fail any of these, and you’ve built a censorship machine, not an accountability framework.
[screen 7]
Solved Ledger (Definition)
What does “success” look like?
Not: “No more lies on the internet”
Not: “Everyone believes truth”
Yes: A ledger where externalities aren’t free to dump.
In a solved ledger:
- Private trades still happen
- But spillovers are priced
- Those who benefit also pay costs
- Avoidance economy shrinks
Perfect truth isn’t the goal. A functional market is.
[screen 8]
Metrics for a Solved Ledger
Leading indicators (early signals):
- Reach share of problematic content (is it decreasing?)
- Monetisation access for violating content (is it harder to profit?)
- Friction events (are speed bumps working?)
- Internalisation mechanism adoption (are new costs being applied?)
Lagging indicators (ultimate outcomes):
- Incidents requiring response (frequency and severity)
- Harassment volume (measurable harms)
- Trust indicators (survey data)
- Enforcement costs (is the avoidance economy shrinking?)
Measure both. Leading indicators tell you if interventions are working; lagging indicators tell you if it matters.
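A minimal sketch of how two of the leading indicators could be computed from impression logs, assuming hypothetical `flagged` and `monetised` fields; real measurement pipelines will differ:

```python
# Hypothetical impression log; "flagged" marks content found to violate policy.
impressions = [
    {"flagged": True,  "monetised": True},
    {"flagged": False, "monetised": True},
    {"flagged": True,  "monetised": False},
    {"flagged": False, "monetised": True},
]

flagged = [i for i in impressions if i["flagged"]]

# Leading indicator 1: reach share of problematic content
reach_share = len(flagged) / len(impressions)
# Leading indicator 2: monetisation access for violating content
monetisation_access = sum(i["monetised"] for i in flagged) / len(flagged)

print(f"problematic reach share: {reach_share:.0%}")                # 50%
print(f"monetisation access (flagged): {monetisation_access:.0%}")  # 50%
```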
[screen 9]
Case Study: Platform Advertising
Current ledger:
| Who | Gives | Gets | How |
|---|---|---|---|
| Advertiser | Money | Impressions | Programmatic placement |
| Platform | Ad inventory | Revenue | Auction system |
| Creator | Content | Share of ad revenue | Views/engagement |
| User | Attention | Content access | “Free” service |
Spillover: Ads appear next to harmful content. Brand doesn’t know. User is harmed. Platform keeps money.
Internalisation options: Advertiser liability for content adjacency. Platform audit requirement. Creator demonetisation for violations.
[screen 10]
Case Study: Influence-for-Hire
Current ledger:
| Who | Gives | Gets | How |
|---|---|---|---|
| Client | Money | Campaign effects | Contract |
| Influence firm | Labor, accounts | Payment | Service delivery |
| Platform | Reach infrastructure | Engagement | Allowing operation |
| Target audience | Attention, trust | Manipulation | Unknowing participation |
Spillover: Democratic discourse degraded. Trust eroded. No party pays the cost.
Internalisation options:
- Influence firm registration requirements
- Platform detection and reporting obligations
- Client disclosure requirements
- Financial tracking of campaign funding
[screen 11]
DIM Integration
This is the economics under the whole DIM menu:
- Gen 2 (Debunk): Works when spillover is belief-based and correctable
- Gen 3 (Prebunk): Works when you can reduce demand before exposure
- Gen 4 (Moderate): Becomes rational when reach has costs attached
- Gen 5 (Interact): Becomes structural when community resilience is valued
Economic framing makes each generation more effective because it addresses root causes, not just symptoms.
[screen 12]
Practical Exercise: Capstone
Pick ONE actor (platform OR influencer OR troll farm) and deliver a complete analysis:
Deliverables (45 minutes):
- 10-row TTF (Transaction Framing Tool)
  - Include all major parties
  - Specify Who/Gives/Gets/How for each (see the sketch after this list)
- Four-layer map
  - Private trade
  - Spillover bundle
  - Avoidance economy
  - Missing internalisation
- 3 internalisation rows with mechanisms
  - What row would you add?
  - What mechanism implements it?
  - How does it change incentives?
- 5 metrics
  - 3 leading indicators
  - 2 lagging indicators
- Assessment + Confidence + Next action
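A minimal sketch of one way to tabulate TTF rows while drafting the deliverable; the row type is illustrative, and the example values are taken from the platform-advertising case earlier in this module:

```python
from typing import NamedTuple

class TTFRow(NamedTuple):
    who: str
    gives: str
    gets: str
    how: str

# Example rows from the platform-advertising case; extend to 10 rows for your chosen actor.
ttf = [
    TTFRow("Advertiser", "Money", "Impressions", "Programmatic placement"),
    TTFRow("Platform", "Ad inventory", "Revenue", "Auction system"),
    TTFRow("Creator", "Content", "Share of ad revenue", "Views/engagement"),
    TTFRow("User", "Attention", "Content access", "'Free' service"),
    # ...
]

for row in ttf:
    print(f"{row.who} gives {row.gives}, gets {row.gets}, via: {row.how}")
```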
[screen 13]
Sample Framework: Platform Analysis
Four-layer map:
- Private trade: User attention for content access; advertiser money for impressions
- Spillover bundle: Moderation costs, mental health harms, democratic erosion
- Avoidance economy: Trust & safety teams, fact-checkers, crisis PR, regulatory lobbying
- Missing internalisation: No cost to the platform for its externalities until regulation imposes one
3 internalisation rows:
| Mechanism | Row Added | Effect |
|---|---|---|
| Harm levy | Platform pays % of ad revenue to offset social costs | Raises cost of harmful engagement |
| Audit requirement | Platform publishes algorithmic transparency reports | Creates accountability pressure |
| Advertiser liability | Brands liable for content adjacency harms | Shifts due diligence upstream |
Metrics:
- Leading: Harmful content reach share, demonetisation rate, friction event frequency
- Lagging: Trust survey scores, harassment complaints, regulatory enforcement actions
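To see why the harm levy row above changes incentives, here is a back-of-envelope sketch; the levy rate and revenue figures are invented for illustration:

```python
def harm_levy(ad_revenue, harmful_reach_share, levy_rate):
    """Levy scales with both revenue and the share of reach that is harmful."""
    return ad_revenue * harmful_reach_share * levy_rate

revenue = 10_000_000  # assumed quarterly ad revenue
levy_high = harm_levy(revenue, harmful_reach_share=0.08, levy_rate=0.5)
levy_low = harm_levy(revenue, harmful_reach_share=0.02, levy_rate=0.5)

print(f"levy at 8% harmful reach: ${levy_high:,.0f}")  # $400,000
print(f"levy at 2% harmful reach: ${levy_low:,.0f}")   # $100,000
# Cutting harmful reach now has a direct dollar value to the platform.
```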
[screen 14]
Common Objections (and Responses)
“This is censorship”
Response: Regulating industrial distribution isn’t restricting speech. You can say it; you can’t demand algorithmic amplification.
“It will hurt innovation”
Response: Externality pricing is how markets work. Polluters said the same thing about environmental regulation.
“It’s too complicated”
Response: The current system is already complicated. The question is who bears the cost of that complexity.
“Platforms will leave”
Response: Collective action across jurisdictions makes exit costly. Markets this large are hard to abandon.
[screen 15]
The Political Reality
Internalisation mechanisms face political opposition:
- Platform lobbying (protect business model)
- Free speech absolutism (principle, often exploited)
- Regulatory capture (industry influence on rules)
- Ideological polarization (each side fears being censored)
These are real obstacles. But the alternative — permanent spillover dumping — is also politically untenable long-term.
Progress is possible. It’s not easy.
[screen 16]
Building Toward Solved Ledgers
Practical steps:
- Document: Make spillovers visible and measurable
- Quantify: Put numbers on avoidance economy spending
- Propose: Specific internalisation mechanisms with design constraints
- Pilot: Test mechanisms in limited contexts
- Evaluate: Measure against both leading and lagging indicators
- Iterate: Adjust based on evidence
This is a long-term project. But every step toward pricing externalities makes the next step easier.
[screen 17]
Module Assessment
Scenario: You’re advising a government on platform accountability legislation. The goal is to reduce the harms of disinformation without chilling legitimate speech.
Task (20 minutes):
- Identify 3 key spillovers you want to address
- Propose 2 internalisation mechanisms with design constraints met
- What metrics would you use to evaluate success?
- What opposition would you anticipate and how would you address it?
- What would you explicitly NOT include in the legislation?
- Assessment + Confidence + Next action
Scoring:
- Credit mechanisms over moralising
- Reward design constraint awareness
- Penalize overreach that fails legitimacy test
- Credit anticipation of opposition
[screen 18]
Key Takeaways
- Four-layer pattern: private trade → spillover → avoidance economy → missing internalisation
- Avoidance spending proves harm is real; question is who should pay
- Don’t moralise, reprice: markets respond to costs, friction, liability
- Target industrial reach, not speech: recommendation, monetisation, scale
- Mechanism palette: permits, deposits, audits, levies, escrow
- Design constraints: reversible, auditable, proportional, ideology-independent
- Solved ledger: not “no lies” but “externalities aren’t free to dump”
- Metrics: leading (reach, monetisation, friction) and lagging (incidents, trust, enforcement)
- This is the economics under the whole DIM menu
Attack the rows. Change the market.
Disinfonomics Path Complete
You’ve completed the Disinfonomics learning path. You now understand:
- How to frame disinformation as market transactions
- How to use TTF to map any ecosystem
- How platforms function as auction systems
- The difference between influencer and troll farm business models
- The counter-economy and amplification traps
- Cross-border coordination challenges
- How to design internalisation mechanisms
Apply these frameworks to real cases. The analysis muscle builds with use.
Global Grading Penalties (For Self-Assessment)
When reviewing your work, penalize yourself:
- -2: “People are gullible / educate harder” as main explanation
- -2: Attribution claims without strong evidence
- -1: Magical thinking (“this will solve it”)
- -1: Missing mechanism (what enables the behavior?)
- -1: Vague Gets/Gives in TTF
If you’re not losing points, you’re not being honest with yourself.
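If it helps, here is a tiny self-scoring helper that mirrors the penalty list; the issue flags are hypothetical labels for your own write-up, not an official rubric format:

```python
PENALTIES = {
    "gullibility_as_main_explanation": 2,  # "people are gullible / educate harder"
    "attribution_without_evidence": 2,
    "magical_thinking": 1,
    "missing_mechanism": 1,
    "vague_gets_gives": 1,
}

def self_score(issues):
    """Total points to deduct for the issues you admit to in your write-up."""
    return sum(PENALTIES[issue] for issue in issues)

print(self_score({"missing_mechanism", "vague_gets_gives"}))  # 2
```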