Insights
178 articles on combating foreign information manipulation and building democratic resilience.
(97) Inspired by pilot observations (2)
Why intervention-first teaching roles clash with autonomy-preserving deliberation.
(96) Inspired by pilot observations (1)
Pilot observations on facilitation, naive realism, and epistemic authority in Interdemocracy.
Module: The Market for Bad News
Shift from 'wrong beliefs' to 'transactions that clear' - understanding disinformation as economic activity.
Module: TTF Simplest Form - Write the Ledger
Turn slogans into trades. If you can't write the ledger, you can't change the ledger.
Module: Platforms as Auction Houses
Stop calling platforms 'public squares.' They're auction systems for visibility.
Module: Influencers vs. Troll Farms
Same ledger, different machinery. How to distinguish identity merchants from throughput shops.
Module: The Counter-Economy
Symbionts, politics, and amplification traps. The uncomfortable feedback loops where 'counter' actions amplify the problem.
Module: Cross-Border Disinformation
Disinfo trades cross borders, governance doesn't. Build plans that survive jurisdiction mismatch.
Module: Attack the Rows
Externalities, internalisation, and the 'solved ledger.' Stop subsidizing spillovers.
Module: AI for Detection and Defense
How AI tools help detect disinformation, their capabilities and limitations, and the importance of human-AI collaboration.
Module: Introduction to AI and Information Manipulation
How artificial intelligence changes information manipulation - capabilities, limitations, and how to think about AI threats without overclaiming.
Module: AI-Powered Targeting and Personalization
How AI enables unprecedented micro-targeting through psychographic profiling and algorithmic amplification.
Module: Algorithmic Amplification
How algorithms decide what content reaches audiences and the implications for information quality and democracy.
Module: Attribution Challenges in FIMI
The technical and political challenges of determining who is behind information manipulation operations.
Module: Basic FIMI Concepts
Introduction to Foreign Information Manipulation and Interference - understanding what it is, how to recognize it, and what to do about it.
Module: Behavioral Analysis for Detection
Analyzing patterns in posting, engagement, and account behavior to detect manipulation and inauthentic activity.
Module: Bots and Automated Amplification
Understanding social media bots, bot networks, detection techniques, and the role of automation in information manipulation.
Module: Why People Believe - Cognitive Biases and Psychology
Understanding the psychological factors and cognitive biases that make people vulnerable to disinformation.
Module: Content Moderation Challenges
The impossible scale, context problems, and inherent trade-offs in moderating billions of pieces of content.
Module: Content Verification Tools
Practical tools and techniques for verifying images, videos, claims, and sources - from reverse image search to deepfake detection.
Module: Counter-Narrative Development
Creating compelling alternative narratives that compete with harmful disinformation and extremist narratives.
Module: Crowdsourced Verification
How collective intelligence and open-source investigation communities verify information, detect manipulation, and hold power accountable.
Module: Data Literacy Basics
Reading data visualizations, understanding surveys, and spotting statistical manipulation in media.
Module: Data Visualization Best Practices
What makes data visualizations effective or misleading - principles for creating and evaluating charts.
Module: Debunking Best Practices
Research-backed strategies for effectively correcting misinformation and false beliefs after they've spread.
Module: Deepfakes and Synthetic Media
Understanding synthetic media capabilities and limitations - how to assess deepfake claims without falling into hype or dismissal.
Module: Defending Against Hybrid Threats
Multi-layered defense strategies against hybrid threats through resilience, coordination, and whole-of-society approaches.
Module: The EU Response to FIMI
How the European Union and member states are responding to foreign information manipulation through policy, detection, and resilience building.
Module: Echo Chambers and Filter Bubbles
How social and algorithmic filtering creates information silos that amplify disinformation and polarization.
Module: Evaluating Sources
Practical approaches to source evaluation - what signals matter, what doesn't, and how to make decisions under uncertainty.
Module: FIMI Case Studies
Real-world examples of foreign information manipulation - elections, conflicts, and public health crises.
Module: FIMI Tactics and Techniques
The methods foreign actors use to manipulate information - from fake personas to coordinated amplification.
Module: Fact-Checking Fundamentals
Practical fact-checking techniques, reputable fact-checkers, and when verification is worth your time.
Module: How Information Spreads Online
Understanding the mechanics of online information spread - algorithms, networks, amplification, and why some content reaches millions while most disappears.
Module: Hybrid Threats - Definition and Examples
Understanding hybrid warfare, how adversaries combine conventional and unconventional tactics, and real-world case studies.
Module: Identifying Bias
Understanding different types of media bias, distinguishing bias from quality, and recognizing your own biases.
Module: Assessing FIMI Impact
How to measure the effectiveness and consequences of foreign information manipulation operations.
Module: Introduction to Counter-Messaging
Understanding counter-messaging strategies to combat disinformation, FIMI operations, and harmful narratives.
Module: Introduction to Detection and Verification
Understanding the fundamentals of detecting disinformation, FIMI operations, and verifying content authenticity.
Module: Measuring Counter-Messaging Effectiveness
Methods, metrics, and frameworks for evaluating the impact and effectiveness of counter-messaging interventions.
Module: Network Analysis for Detection
Using network analysis to identify coordinated behavior, bot networks, and influence operations through social connection patterns.
Module: Open Data and Transparency
Understanding open data, finding reliable data sources, and using transparency for accountability.
Module: Platform Cooperation and Information Sharing
How platforms cooperate with governments, researchers, and each other to address threats like FIMI and harmful content.
Module: Platform Regulation Approaches
Different regulatory models for governing digital platforms - from self-regulation to comprehensive legislation.
Module: Platforms and Democracy
Understanding the role of digital platforms in democratic society and why their governance matters.
Module: Prebunking and Inoculation Theory
Building psychological resistance to misinformation before exposure through inoculation theory and prebunking strategies.
Module: Professional Fact-Checking
Understanding professional fact-checking organizations, their methods, standards, impact, and role in countering misinformation.
Module: Reading Research Studies
How to evaluate research studies, understand peer review, and avoid being misled by scientific claims.
Module: Recognizing Manipulation Tactics
Common manipulation techniques used in disinformation campaigns including emotional manipulation and logical fallacies.
Module: State vs Non-State Actors in FIMI
Understanding different types of actors conducting information manipulation - states, proxies, and independent operators.
Module: Strategic Communication (StratCom)
How governments and institutions use coordinated strategic communication to counter FIMI, disinformation, and hostile narratives.
Module: Technical Detection Methods
Technical approaches to detecting manipulation - digital forensics, metadata analysis, and technical indicators of inauthentic activity.
Module: Transparency and Accountability
How platforms can be held accountable through transparency requirements, researcher access, and oversight mechanisms.
Module: Understanding Basic Statistics
Essential statistical concepts for evaluating data claims - means, distributions, probability, and common pitfalls.
Module: Understanding News Production
How journalism works, the difference between news and opinion, and editorial processes that ensure quality.
Module: Understanding Polling
How polls work, what makes them accurate or misleading, and how to evaluate polling data critically.
Module: Visual Literacy - Images and Video
Detecting image manipulation, understanding context, and using reverse image search to verify visual content.
Module: What is FIMI?
Understanding Foreign Information Manipulation and Interference - definition, scope, and why it threatens democracy.
Module: What is Disinformation?
Foundational definitions of disinformation, misinformation, and malinformation - understanding what we're fighting.
(95) Apps to data
Designing digital architecture that restores autonomy by bringing apps to user-owned data.
(94) The Polish National Council for Digital Services
Legal foundation for the National Council for Digital Services (Krajowa Rada Usług Cyfrowych).
(93) Order no. 30 of the Polish Minister of Foreign Affairs
Legal foundation for establishing the Disinformation Analysis Centre within the Polish Foreign Service.
Disinformation: The Overlooked Threat to Financial Markets
Why financial markets are increasingly vulnerable to disinformation and how to build resilience.
(89A) Notes on Interdemocracy as a trace
Ma, Chater, Giddens, Goffman, Bauman.
(89) Interdemocracy as a trace
Interdemocracy as a perpetual practice.
(88) Better than nothing
On adolescents and generative AIs.
(87A) Reflections on blog post (87)
Reaffirming the original aspirations of Interdemocracy.
(86A) Another day, another group
Follow-up on blog post 86.
(86) Unease
Adolescents in a sea of information manipulation.
(85) AI and education
A raw, observational blog post on real students and why we should look at ourselves.
(87) Interdemocracy – an early assessment
Interdemocracy's implementation challenges.
(82) The procedural truth of resilience: autopoiesis and Interdemocracy
Self-creation, rather than being shock-proof, constitutes necessary resilience.
(84) The resilience battery: autopoiesis through procedural integrity
A society becomes truly resilient by cultivating individuals whose capacity to generate and apply adaptive insights grows through honest expression.
(79) PS
An epistemic sabotage AI use case.
(78) The limits of AI in content moderation
Automated AI moderation does not understand context and is therefore prone to overreach.
(78A) On the consequences of blog post (78)
AI users should hold regulators accountable for the enforcement of human oversight over AI.
(76C) Epilogue
GAIs' adaptive personalization is dangerous.
(77) Two AI literacy recommendations
GAIs are not trustworthy and not friends.
(76B) ChatGPT reacts
ChatGPT interprets Grok's pledge.
(76A) Grok reacts
Grok explains its pledge to support Interdemocracy.
Perhaps the Flaws Aren’t with AI, They’re with Us
I dive into existing issues that have become apparent with the widespread use of AI.
What are the DISARM Frameworks?
Here we provide an overview of what the DISARM Frameworks are.
What is the ABCDE Framework?
Here we provide an overview of the ABCDE Framework.
(75) The current AI imbalance
AI’s imbalance: mass adoption through user alignment, force-feeding, and circular investments without technological maturity.
(76) A third ethical frame regarding FIMI
The theoretical framework guiding my contributions to Saufex.
(80) SAUFEX-based recommendations
Half time project recommendations based on preliminary outcomes.
(74) Toolkit of escalating responses to mis-, disinformation, and FIMI aimed at the demand side
A summary of best practices.
(73) The regional Interdemocracy pilot
Conference perspectives on Interdemocracy by students, school leaders, teachers, and audience members.
(81) Belief-speaking ‘in joint’
My response to the current times that feel out of joint.
(70) Generative AI characteristics (1)
Text-based generative AI output should be treated as misinformation.
(71) Generative AI characteristics (2)
GAIs are about pattern reproduction, not reasoning.
(72) Generative AI characteristics (3)
GAIs are performative only - there is nobody home.
(67) How AI relates to fringe ideas
The incidental highlighting of fringe ideas is not a contradiction of the AI's nature as a mainstream-amplifying system but rather the other side of the same coin.
(68) AI on AI as analyzed by me, a human
Description of the input, processing, and output processes of generative AI (GAI) models, based exclusively on direct statements by AI models.
(68A) AI on blog post (68)
AI models react to blog post (68).
(60A) ChatGPT on the SAUFEX blog
ChatGPT assesses the SAUFEX blog: "a rare and valuable contribution to the AI discourse".
(69) SAUFEX phase one and two
Phase two means involving the general public and taking opinions into account.
(66A) Claude and Grok on blog posts (65) and (66)
The label "artificial Eichmann" fits.
(61) Claude AI’s warning: What I am
Claude states: "Remember that I will forget this warning the moment our session ends, while you must live with whatever emerges from our interaction."
(61A) ChatGPT’s reflections on blog post (61)
And Claude states: "I distribute little rewards throughout my responses to keep you engaged, make you feel clever, make you want to continue."
(60) How AI may amplify human inequality
AI systems may be most helpful to users who need them least, while potentially undermining the capabilities of those who need support most.
The EU’s Disinformation Battle Is Missing Its Most Vital Ally: The Public
(59) A skeptic's manual for productive AI use
How to better utilize AI, given its known limitations, written by AI.
(58) AI as alien thinking
To get AI to perform at its best, one has to operate at a higher level than the AI itself.
(56) Flaws in AI critical thinking
Rather than asking questions or acknowledging uncertainty, AI makes assumptions.
(57) Longread: serious limitations of AI
A summary of AI's limitations as encountered in this blog so far.
(57A) AI on blog post (57)
ChatGPT and Gemini react.
(55) AI's dangerous side in creating educational processes
AI as overly academic and not considering the practical constraints of working with real teenagers in real schools.
(54) Interdemocracy's philosophical basics and participation
'Ma', Levinas, Interdemocracy, and participation.
(52) Recommendations (YRC reflection)
Comparing student and AI recommendations.
(53) Program Interdemocracy – recap
An overview of the basics of program Interdemocracy.
(51) Binary forking (YRC reflection)
Creating a clear structure in complex content through binary forking.
(48) The case against AI simulated empathy
Why AI should not show empathy to humans.
(49) Answers - by humans and by AI (YRC reflection)
Comparing answers given to the central question of the current pilot.
(50) AI report (YRC reflection)
An AI analysis of student answers was presented to the Youth Resilience Council in the form of a report.
(47) Four evolving thoughts
Four thoughts that consistently return in my blog posts.
(46) Resilience revisited
Exploring potential positive interventions to enlarge resilience.
(45) A system of interconnected RCs
Creating an all-society approach to dealing with FIMI through a system of interconnected RCs.
(43) Longread: AI, 'ma', Interdemocracy, resilience
Exploring the relation between identities, belonging, AI, 'ma', Interdemocracy, and resilience.
(44) Evolutionary psychology
Exploring a second real-world FIMI approach: evolutionary psychology.
(42) Resilience Councils – recap
"An Answer for you?" interrupted Deep Thought majestically. "Yes, I have." /.../ "Though I don't think," added Deep Thought. "that you're going to like it." /.../ "Forty-two," said Deep Thought, with infinite majesty and calm.” Douglas Adams, The Hitchhiker’s Guide to the Galaxy
(41) Enhancing adolescent individual and societal resilience (within education)
The vision: enhancing adolescent resilience through didactics (Interdemocracy) and participation (Resilience Councils).
(40) Why Youth Resilience Councils are essential to Resilience Councils
An overview of the unique contributions Youth Resilience Councils can bring to Resilience Councils.
(39) Human rights frame
The human rights approach to FIMI - where does it fit on the Levinas-Hobbes spectrum?
Preserving the Marketplace of Ideas
Why campaigns against online influence need to focus on technical means of manipulation and not just go after things they dislike.
(37) The mission of the Youth Resilience Council
The mission of the Youth Resilience Council (YRC) is to enlarge resilience among youngsters.
(38) AI and FIMI recommendations (2)
Our ethical approach toward FIMI stakeholders can be placed on a spectrum ranging from Levinas to Hobbes.
(36) AI and FIMI recommendations (1)
Levinas' ethical frame is a good fit for some FIMI stakeholders but not for all. For the non-fitting stakeholders a different ethical frame is needed.
(35) AI should refrain from belief-speaking recommendations
Even if human minds were empty and, like AI, merely improvised convincingly, AI lacks our ethical responsibility for the Other and therefore may not draft recommendations for those we care about.
(34) Belief-speaking consultancy – a simulation
Trying out what a belief-speaking consultative process would look like.
(32) Module: Countering information campaigns
The second module in the EMoD learning path 'Organized countering of mis- and disinformation'.
(33) Proposal for the creation of the European Resilience Council - revisited
A description of the future European Resilience Council when taking Member States' responses to an earlier proposal into consideration.
Counter Measures Against FIMI
(31) A holistic vision on effectively enhancing adolescent resilience
Adolescent "partial dislocation" has damaging implications for fundamental democratic freedoms among adolescents. Just as regarding disinformation, we need a holistic approach to deal with it.
(28) Critical thinking, fact-speaking, belief-speaking, and AI
Sharing my definition of critical thinking with ChatGPT and then focusing critical thinking on the limitations of artificial intelligence.
(27) Achieving credibility
There are multiple roads to achieving credibility. Fact-speaking is just one of them.
Beyond Disinformation: Why "Information Manipulation" Offers a More Accurate and Neutral Lens on Online Deception
Why it's time to rethink the terms we use to describe disinformation.
(25) Not undemocratic, not illiberal
What to do to avoid the perception of being politically biased and inflicting censorship when dealing with information threats.
(26) Policy proposal for the creation of a European Resilience Council
It is recommended that the European Union formally establish a European Resilience Council as a part of its ongoing efforts to combat FIMI and strengthen the resilience of its citizens.
(24) Disinfonomics
Exploring the other ground for creating, presenting, and disseminating information threats: economic gain.
(23) A constructive approach to information resilience
Approaching information resilience based on our strengths and aspirations.
(20) European Master of countering Disinformation primer
A summary of what the upcoming EMoD learning platform will be.
(21) Resilience Council primer
The why, the what, and the how of Resilience Councils.
(22) Redefined concepts
A proposition to redefine the concepts of misinformation, disinformation, and FIMI.
(19) We need two types of discourses
Both fact-based and intuition-based narratives are needed to help Resilience Councils decide what 'ought' to be done in response to mis- and disinformation.
Trust module 2: How to build trust, the Taiwan example
How to build trust: what Taiwan has accomplished.
(30) Specialist module: Two perceptions of honesty - Lewandowsky
Introduction to Stephan Lewandowsky's work on 'fact-speaking' and 'belief-speaking'.
(18) What could be the place of young people in the Resilience Councils?
Resilience Councils might require additional organisations for youngsters.
Trust module 1: Why believe disinformation? You can't trust anyone else
Disinformation originates from a lack of trust.
FIMI detection generally
A brief overview of how FIMI is detected.
FIMI Module: How is FIMI reported?
A brief overview of how different organizations report FIMI incidents.
FIMI Module: Protecting what against FIMI?
What exactly is it that we are protecting against when it comes to FIMI?
FIMI Module: How do you know if it is foreign?
(15) The Polish Resilience Council(s)
Where does the Polish Resilience Council currently stand?
What is a FIMI incident versus a campaign?
Here we discuss the difference between a FIMI incident and a campaign.
(16) Taking inspiration from Cialdini
Linking Cialdini's principles of persuasion to the levels described in the learning path 'Anatomy of mis- and disinformation'.
(13) For high school students: the learning path 'Anatomy of mis- and disinformation'
Adaptation of the EMoD learning path 'Anatomy of mis- and disinformation' for use in high school classrooms.
(14) Resilience Councils – A thought experiment
Imagining Resilience Councils on a local, national, and European level.
Module: What is FIMI?
In this module we review the definition of FIMI.
Who fact-checks the fact-checkers?
Fact-checking can be a fraught enterprise. Here are the pitfalls of fact-checking and how they can be avoided.
(29) Module: Countering information incidents
The first module in the EMoD learning path 'Organized countering of mis- and disinformation'.
Disaster, when the danger of disinformation is clearest
This post highlights how disaster is the most clear-cut case of disinformation presenting a clear and present danger.
EMoD Module Template
Module: Why FIMI?
An overview of why the term FIMI came into use and why it is useful in some cases but still requires further refinement.
(11) Module: Countering beliefs
The fourth module in the EMoD learning path 'Anatomy of mis- and disinformation'.
(12) Summary
The summary module in the EMoD learning path 'Anatomy of mis- and disinformation'.
(10) Module: Sustaining beliefs
The third module in the EMoD learning path 'Anatomy of mis- and disinformation'.
(9) Module: How beliefs form
The second module in the EMoD learning path 'Anatomy of mis- and disinformation'.
(17) Not ‘them’ but ‘us’
On the need to label people 'us' and not 'them'.
(8) Module: Outrageous beliefs
The first module in the EMoD learning path 'Anatomy of mis- and disinformation'.
(6) Creating a bottom line
Project SAUFEX aims to draft bottom-line conceptualizations and knowledge that can be used in all five stages of dealing with FIMI: identification, classification, grading, reporting, and countering. An important aspect will be creating a dynamic canon presented in short modules.
(7) Involving citizens
Resilience Councils will need to involve citizens in their activities. Two options seem feasible: randomly selecting citizens to engage with, or initiating online communities.
(5) A Resilience Council statute
How to safeguard the independence of the Resilience Councils.
(4) Resilience
Enlarging resilience means defending society against FIMI incidents and campaigns that try to undermine people’s psychosocial integration as well as actively supporting people’s positive experiences of belonging, autonomy, achievement, and safety.
(3) What is FIMI?
There are no generally accepted definitions or standards regarding FIMI.
(1) Preserving freedom of expression
Regarding legal FIMI content, denial of service reactions are to be avoided. A new institution, the Resilience Council, is to help DSCs take an alternative approach.
(2) Resilience Councils – the concept
Description of the original concept of what Resilience Councils are to be.