Purpose: You’ll learn to distinguish foreign information manipulation from domestic noise, assess evidence systematically, and avoid the overclaiming that damages credibility.
[screen 1]
Starting Point
You’ve heard the terms: “election interference,” “foreign influence,” “hybrid threats.” Politicians use them. Headlines invoke them. Sometimes accurately. Often not.
FIMI — Foreign Information Manipulation and Interference — is a real phenomenon. It’s also frequently misdiagnosed, overclaimed, and weaponized in domestic political fights.
Your job is to be precise. Not paranoid. Not dismissive. Precise.
[screen 2]
What FIMI Actually Is
FIMI refers to information operations conducted by foreign actors — typically states or their proxies — designed to manipulate public opinion, undermine trust, or interfere with political processes in target countries.
Key word: foreign. But “foreign” is an attribute to establish, not assume. We’ll get to that.
FIMI is coordinated, resourced, and strategically directed. It’s not your neighbor sharing a dubious meme. It’s organized activity with identifiable infrastructure.
[screen 3]
What FIMI Is Not
Let’s clear the deck:
Not FIMI:
- Domestic activists you disagree with
- Organic grassroots movements, even those whose views happen to align with a foreign state's
- News outlets from other countries reporting critically on your government
- Citizens sharing foreign perspectives
- Your political opponents
Possibly FIMI:
- Coordinated inauthentic behavior with foreign infrastructure
- State-funded operations designed to deceive about origin
- Campaigns using deceptive identities to manipulate
The distinction matters. Mislabeling domestic dissent as “foreign interference” is itself a form of information manipulation.
[screen 4]
The “Foreign” Problem
Here’s the uncomfortable truth: establishing foreign origin is hard.
A VPN makes attribution harder. Hiring local contractors obscures origin. Using genuine local grievances as amplification material creates plausible deniability.
So we don’t start with “is it foreign?” We start with:
- Is there harm? (What’s the negative effect?)
- Is there coordination? (Patterns suggesting organized activity)
- Is there deception? (About identity, origin, or intent)
- What’s the scale? (Industrial reach vs. organic spread)
“Foreign” comes last, if evidence supports it.
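This ordering can be sketched as a simple triage function. Everything here — field names, the reach cutoff, the return format — is an illustrative assumption, not part of any standard tooling:

```python
# Illustrative sketch: run the triage questions in order, and only ask
# "is it foreign?" once the earlier questions have supporting evidence.
# All field names and thresholds below are hypothetical.

def triage(observation: dict) -> list[str]:
    """Return the triage questions that currently have supporting evidence."""
    findings = []
    if observation.get("harm_observed"):
        findings.append("harm")
    if observation.get("coordination_signals"):
        findings.append("coordination")
    if observation.get("deception_signals"):
        findings.append("deception")
    if observation.get("reach", 0) > 10_000:  # arbitrary scale cutoff
        findings.append("industrial_scale")
    # "Foreign" comes last: only relevant if other findings exist AND
    # there is actual infrastructure evidence, never by default.
    if findings and observation.get("foreign_infrastructure_links"):
        findings.append("possible_foreign_origin")
    return findings

example = {
    "harm_observed": True,
    "coordination_signals": True,
    "deception_signals": False,
    "reach": 250_000,
    "foreign_infrastructure_links": False,
}
print(triage(example))  # no foreign-origin finding without infrastructure evidence
```

Note that high reach and coordination alone never produce a foreign-origin finding — that flag requires its own evidence.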
[screen 5]
The Evidence Ladder
When assessing potential FIMI, classify your evidence:
Strong signals:
- Technical infrastructure linked to foreign state actors
- Leaked documents confirming foreign direction
- Platform attributions with supporting evidence
- Cross-platform coordinated behavior from identified foreign assets
- Financial flows to foreign entities
Medium signals:
- Timing aligned with foreign strategic interests
- Content amplification patterns matching known foreign playbooks
- Accounts with creation patterns suggesting bulk production
- Narrative alignment with foreign state media
Weak signals:
- “It benefits [foreign country]”
- Bad grammar or translation artifacts
- “It feels coordinated”
- “It went viral suspiciously fast”
Weak signals are reasons to investigate. They’re not conclusions.
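The ladder can be expressed as a lookup that caps the confidence an assessment may state. The tier names mirror the lists above; the rankings and labels are illustrative assumptions:

```python
# Illustrative sketch: the strongest evidence tier present sets a ceiling
# on the confidence an analyst may state. Names are assumptions.

TIER_RANK = {"strong": 3, "medium": 2, "weak": 1}
MAX_CONFIDENCE = {3: "high", 2: "medium", 1: "low", 0: "none"}

def confidence_ceiling(signals: list[str]) -> str:
    """signals is a list of tier labels, e.g. ['medium', 'weak', 'weak']."""
    best = max((TIER_RANK[s] for s in signals), default=0)
    return MAX_CONFIDENCE[best]

# Stacking weak signals never raises the ceiling: ten weak signals
# still only justify investigation, not attribution.
print(confidence_ceiling(["weak", "weak", "weak"]))   # low
print(confidence_ceiling(["medium", "weak"]))         # medium
print(confidence_ceiling(["strong", "medium"]))       # high
```

The design point is that confidence is set by the single strongest signal, not by counting — volume of weak evidence does not convert into strength.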
[screen 6]
A Practical Scenario
Situation: Three days before a national election, a hashtag emerges attacking a candidate. Within 12 hours, it’s trending. Some accounts pushing it have limited history. A journalist asks: “Is this foreign interference?”
Your constraints: 4 hours. Public tools only. Briefing for an editor.
What you produce:
- Assessment of what’s happening
- Confidence level (low/medium/high)
- Recommended next action
What you don’t do: Claim “foreign interference” based on timing and suspicion.
[screen 7]
Working Through the Scenario
Step 1: Check the accounts
- Creation dates (bulk creation = medium signal)
- Posting history (new accounts with intense activity = medium signal)
- Network connections (clustered following = medium signal)
Step 2: Trace the origin
- First appearance of the hashtag
- Who amplified early
- Any cross-platform coordination
Step 3: Assess the content
- Does it align with known foreign narratives?
- Is it factually false or just negative?
- What’s the intended effect?
Step 4: Check your evidence tier
- Strong signals? Proceed carefully.
- Medium signals only? Note the uncertainty.
- Weak signals only? Don’t claim FIMI.
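Step 1's account check can be sketched with public data only. The 7-day window and 30% cutoff are arbitrary assumptions for illustration, not established thresholds:

```python
# Illustrative sketch of Step 1: measure how many early amplifiers are
# newly created accounts. Window and cutoff values are assumptions.
from datetime import date

def new_account_share(creation_dates: list[date], today: date,
                      window_days: int = 7) -> float:
    """Fraction of accounts created within the last `window_days`."""
    recent = sum(1 for d in creation_dates if (today - d).days <= window_days)
    return recent / len(creation_dates)

# Hypothetical creation dates for the accounts that pushed the hashtag first
accounts = [date(2024, 5, 1), date(2024, 5, 2),
            date(2024, 5, 2), date(2023, 1, 15)]
share = new_account_share(accounts, today=date(2024, 5, 3))
print(f"{share:.0%} of early amplifiers are under 7 days old")

# A high share is a *medium* signal: investigate further, don't attribute.
if share > 0.3:
    print("medium signal: possible bulk account creation")
```

A usage caution: this measures one behavioral pattern, not origin. A high share supports "coordinated amplification", never "foreign interference" on its own.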
[screen 8]
Sample Assessment
“A coordinated campaign is pushing #CandidateXCorrupt. Evidence suggests inauthentic amplification: 40% of early spreaders are accounts created in the past week with no prior posting history. Content includes both legitimate criticism and unverified claims.
Assessment: Organized amplification campaign. Origin unclear.
Confidence: Medium. Coordination patterns visible; foreign attribution unsupported.
Next action: Monitor for infrastructure links. Do not claim foreign interference without technical attribution.”
This is the discipline. Describe what you see. Flag what you don’t know. Recommend proportionate action.
[screen 9]
Why Foreign Actors Do This
Understanding motivation helps assessment (but isn’t proof of involvement).
Strategic goals:
- Undermine trust in democratic institutions
- Amplify social divisions already present
- Support preferred candidates or policies
- Distract from domestic criticism
- Justify their own actions through “whataboutism”
- Test and refine techniques for future operations
Key insight: FIMI rarely creates divisions. It exploits existing ones. If there’s no genuine grievance to amplify, campaigns fail.
[screen 10]
The Attribution Trap
Attribution is seductive. “Russia did it” makes a cleaner headline than “coordinated campaign of unclear origin.”
But overclaiming damages credibility:
- Future accurate warnings are dismissed
- Domestic actors learn to dismiss criticism as “foreign interference”
- Real foreign operations benefit from noise and confusion
Rule: Your confidence in attribution should match your evidence tier. Low-medium evidence = low-medium confidence.
Government agencies with classified access can sometimes make stronger claims. You probably can’t. That’s fine. Your job is accurate assessment, not attribution theater.
[screen 11]
FIMI vs. Domestic Disinformation
The distinction matters for response:
Domestic disinformation:
- Protected speech in most democracies
- Address through counter-speech, education, platform policies
- Often involves genuine (if misguided) belief
Foreign information manipulation:
- May invoke national security frameworks
- Can justify stronger platform action
- Involves coordinated deception about origin
The gray zone:
- Foreign actors working with domestic partners
- Domestic actors using foreign infrastructure
- Genuine domestic movements amplified by foreign actors
Most real cases are gray. Get comfortable with uncertainty.
[screen 12]
The Amplification Question
Foreign actors often don’t create content. They amplify it.
This creates analytical problems:
- The content may be true, or true-ish
- The domestic originators may be genuine
- The foreign amplification may be invisible
Question to ask: Is the harm from the content, or from the amplification?
- If amplification is the problem → Gen 4 responses (reduce reach)
- If content is the problem → Gen 2/3 responses (debunk/prebunk)
- If both → layered approach
(The generations are defined in the DIM framework below.)
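That routing logic is small enough to write down. The "Gen" labels follow the DIM framework used in this module; the function and its default fallback are illustrative assumptions:

```python
# Illustrative sketch: route to DIM response generations based on where
# the harm lies. The "monitor only" fallback is an assumption.

def choose_responses(content_problem: bool, amplification_problem: bool) -> list[str]:
    responses = []
    if content_problem:
        responses += ["Gen 2 (debunk)", "Gen 3 (prebunk)"]
    if amplification_problem:
        responses += ["Gen 4 (reduce reach)"]
    return responses or ["monitor only"]

# True-ish content pushed by inauthentic amplification:
# act on the amplification, not the content.
print(choose_responses(content_problem=False, amplification_problem=True))
```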
[screen 13]
Stop Rules
Know when to stop investigating:
Stop when:
- Your timebox is reached and evidence remains weak
- Harm threshold isn’t met (minor reach, limited impact)
- Mitigation is more urgent than attribution
- You’re chasing signals that don’t converge
Don’t stop just because:
- Attribution is hard (it usually is)
- The content is politically inconvenient
- Someone wants a faster answer
The goal is accurate assessment, not closure. “Insufficient evidence” is a valid conclusion.
[screen 14]
Real-World FIMI Examples
Russian Internet Research Agency (2016)
- Technical evidence: Infrastructure links to Russian entities
- Platform evidence: Facebook, Twitter attributions with supporting data
- Leaked evidence: Mueller investigation documents
- Evidence tier: Strong
Chinese COVID-19 narratives (2020-2022)
- State media alignment: Clear
- Coordinated inauthentic behavior: Platform-documented
- Strategic timing: Aligned with diplomatic objectives
- Evidence tier: Medium-Strong
Note how attribution relies on multiple evidence types, not single signals.
[screen 15]
Your Response Options (DIM Framework)
Detection isn’t the intervention. It’s input for choosing one.
Gen 2 (Debunk): Fact-check false claims after they spread
- Use when: Specific false claims, addressable audience
- Limitation: Often amplifies the content
Gen 3 (Prebunk): Inoculate audiences before exposure
- Use when: Predictable narratives, time to prepare
- Limitation: Requires anticipation
Gen 4 (Moderation): Reduce reach through platform action
- Use when: Coordinated inauthentic behavior, clear policy violation
- Limitation: Censorship concerns, whack-a-mole dynamics
Gen 5 (Interaction): Build resilient communities, address underlying grievances
- Use when: Long-term, systemic vulnerability
- Limitation: Slow, hard to measure
[screen 16]
Matching Response to Evidence
Your evidence tier should guide your response:
Strong evidence of foreign FIMI:
- Platform action justified
- Government communication may be appropriate
- Detailed public attribution possible
Medium evidence:
- Focus on the behavior, not the actor
- “Coordinated campaign” rather than “Russian operation”
- Platform action on policy violations, not attribution
Weak evidence:
- Monitor and document
- Don’t make public claims
- Avoid response that amplifies
Proportionality matters. Overclaiming on weak evidence creates more problems than it solves.
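The tier-to-action mapping above can be made explicit. The action strings paraphrase this screen's guidance; the table itself is an illustrative assumption, not a formal policy:

```python
# Illustrative sketch: evidence tier caps the actions (and public claims)
# that are proportionate. Action lists paraphrase the guidance above.

ACTIONS_BY_TIER = {
    "strong": ["platform action",
               "government communication where appropriate",
               "detailed public attribution"],
    "medium": ["describe the behavior ('coordinated campaign')",
               "platform action on policy violations only"],
    "weak":   ["monitor and document",
               "no public claims"],
}

def proportionate_actions(tier: str) -> list[str]:
    return ACTIONS_BY_TIER[tier]

# Medium evidence never licenses naming a foreign actor.
print(proportionate_actions("medium"))
```

The invariant worth checking in any such table: public attribution appears only in the strong tier.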
[screen 17]
Common Mistakes
Mistake 1: “Cui bono” as evidence
“This benefits Russia, therefore Russia did it.”
Reality: Many actors may benefit. Benefit isn’t causation.
Mistake 2: Assuming coordination from alignment
“These accounts share the same message, therefore coordinated.”
Reality: Organic movements also share messages. Look for behavioral coordination, not just content alignment.
Mistake 3: Foreign = illegitimate
Not all foreign speech is manipulation. Foreign journalists, activists, and citizens have legitimate voices too.
Mistake 4: Attribution before investigation
“It’s Russian disinformation” as a starting hypothesis rather than a conclusion.
[screen 18]
Building Your Assessment Habit
Every time you encounter potential FIMI:
- What do I see? (Observable behavior)
- What signals am I relying on? (Strong/medium/weak)
- What don’t I know? (Gaps in evidence)
- What would change my assessment? (What evidence would strengthen/weaken)
- What’s proportionate action? (Given evidence tier)
Write it down. The discipline is in the documentation.
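If you want the documentation discipline enforced by structure, the five questions map onto a simple record. The field names and schema are illustrative assumptions, not a standard format:

```python
# Illustrative sketch: a written assessment record capturing the five
# questions above. Field names are assumptions, not a standard schema.
from dataclasses import dataclass

@dataclass
class Assessment:
    observed: str                   # what do I see?
    signals: dict[str, str]         # signal -> tier (strong/medium/weak)
    gaps: list[str]                 # what don't I know?
    would_change_view: list[str]    # what evidence would move me?
    recommended_action: str         # proportionate to the tier

    def strongest_tier(self) -> str:
        for tier in ("strong", "medium", "weak"):
            if tier in self.signals.values():
                return tier
        return "none"

record = Assessment(
    observed="Hashtag amplified by recently created accounts",
    signals={"bulk account creation": "medium",
             "narrative alignment": "weak"},
    gaps=["no infrastructure links checked"],
    would_change_view=["platform attribution", "financial-flow evidence"],
    recommended_action="monitor; do not claim foreign origin",
)
print(record.strongest_tier())
```

Forcing yourself to fill in `gaps` and `would_change_view` is the point: an assessment with empty gaps is usually an overclaim.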
[screen 19]
Why This Matters
FIMI is real. Foreign actors do attempt to manipulate information environments. Pretending otherwise is naive.
But FIMI is also overdiagnosed. “Foreign interference” has become a convenient label for content people don’t like. This degrades the concept and makes real threats harder to address.
Your job: Be the analyst who gets it right. Not the one who sees foreign plots everywhere. Not the one who dismisses genuine threats as paranoia.
Precision is the skill. Evidence is the foundation.
[screen 20]
Module Assessment
Scenario: A Facebook group with 50,000 members suddenly starts sharing anti-NATO content. Posting frequency increased 300% in two weeks. Group admins have no public identity. Content includes both factual criticism and unverified claims about NATO “plans to attack Russia.”
Your task (10 minutes):
- List the signals you see (classify as strong/medium/weak)
- What additional information would you seek?
- Write a 3-sentence assessment with confidence level
- Recommend next action
- Identify one thing you explicitly will NOT claim
Scoring guide:
- Penalize overclaiming
- Reward acknowledgment of uncertainty
- Credit proportionate recommendations
[screen 21]
Key Takeaways
- FIMI is organized foreign information manipulation — but “foreign” must be established, not assumed
- Classify evidence by strength: strong signals support attribution, weak signals only justify investigation
- Detection is input for response selection, not the goal itself
- Overclaiming damages credibility and helps adversaries
- Proportionate response matches evidence tier
- “Insufficient evidence” is a valid and often correct conclusion
You now have a framework. Practice will build the skill.
Next Module
Continue to: Recognizing Manipulation Tactics — Practical techniques for identifying coordinated behavior without falling into common analytical traps.