Monday, September 22, 2025

Here we go, straight into the worst world you can imagine.

How dictators (and autocrats) will — and already do — use AI to suppress dissent

1) Mass surveillance + facial recognition to identify and track protesters.
AI powers city-wide camera networks, matches faces to ID databases, and flags people who attend protests or meet with known activists — enabling arrests, reprisals, or pre-emptive detention. China is the canonical example, and similar systems are documented elsewhere. (European Parliament)
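
To make the matching step concrete, here is a minimal sketch: each face becomes an embedding vector, and a camera frame is compared against every stored ID by cosine similarity. Everything below is hypothetical (the IDs, the 128-dimensional vectors, the 0.6 threshold); real deployments use trained face-recognition models and databases orders of magnitude larger, but the pipeline shape is the same.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def match_face(probe, database, threshold=0.6):
        """Return the stored ID most similar to the probe embedding,
        if the similarity clears the threshold; otherwise None."""
        best_id, best_score = None, threshold
        for person_id, stored in database.items():
            score = cosine_similarity(probe, stored)
            if score > best_score:
                best_id, best_score = person_id, score
        return best_id

    # Hypothetical 128-dimensional embeddings, one per ID photo on file.
    rng = np.random.default_rng(0)
    database = {f"citizen-{i}": rng.normal(size=128) for i in range(1000)}
    probe = database["citizen-42"] + rng.normal(scale=0.1, size=128)  # camera frame
    print(match_face(probe, database))  # -> "citizen-42"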

2) Phone/network spyware and remote device compromise.
Governments deploy offensive tools (commercial spyware, zero-click exploits) to read messages, capture contacts, and plant evidence — often targeting journalists, lawyers, and opposition figures. These tools become far more effective when combined with AI to triage and analyze the harvested data. (AP News)
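
The triage step is worth a sketch, because it is what turns bulk device compromise into targeted repression: a scorer ranks thousands of harvested messages so a human operator reads only the "interesting" ones. The keyword weights and messages below are invented; real pipelines use trained classifiers, entity extraction, and contact-graph analysis rather than keyword matching.

    # Crude stand-in for an AI triage model over harvested messages.
    KEYWORDS = {"protest": 3, "lawyer": 2, "journalist": 2, "meeting": 1}

    def triage_score(message: str) -> int:
        text = message.lower()
        return sum(weight for word, weight in KEYWORDS.items() if word in text)

    harvested = [
        "dinner at 8?",
        "the protest meeting moved to Friday",
        "call my lawyer before you talk to anyone",
    ]
    for msg in sorted(harvested, key=triage_score, reverse=True):
        print(triage_score(msg), msg)  # highest-priority messages first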

3) Social-media monitoring and automated content removal.
AI systems can crawl huge volumes of posts, flag “undesirable” content, and automatically request takedowns or block accounts. That enables rapid censorship at scale and makes it easy to silence voices before a story spreads. Platforms’ moderation tools can be co-opted by governments or tuned to follow local repressive laws. (Freedom House)
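
As a sketch of the pipeline shape, assume a per-post "undesirability" score feeding a takedown queue with no human in the loop. The keyword scorer below is a stand-in for a trained text classifier; the topics, threshold, and posts are all invented for the example.

    from dataclasses import dataclass

    BANNED_TOPICS = ("strike", "corruption", "election fraud")

    @dataclass
    class Post:
        post_id: int
        text: str

    def undesirability(post: Post) -> float:
        # Stand-in for a model score in [0, 1].
        hits = sum(topic in post.text.lower() for topic in BANNED_TOPICS)
        return min(1.0, hits / 2)

    def scan(posts, threshold=0.5):
        """Return post IDs queued for automatic takedown."""
        return [p.post_id for p in posts if undesirability(p) >= threshold]

    feed = [Post(1, "Great weather today"),
            Post(2, "Join the strike over corruption on Monday")]
    print(scan(feed))  # -> [2]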

4) Disinformation, deepfakes, and identity-forgery to delegitimize opponents.
Generative AI can produce fake audio/video, realistic sock-puppet accounts, and automated propaganda tailored to micro-audiences — all used to smear dissidents, confuse the public, or create plausible pretexts for repression. State or state-linked influence ops have already used AI to run fake networks. (Reuters)

5) Predictive policing and risk-scoring.
By analyzing mobility, social ties, and communications, AI models can produce “risk” scores that flag people as potential troublemakers — then trigger surveillance, stops, or administrative actions with little transparency or human review. Reports warn this amplifies discrimination and arbitrary enforcement. (European Parliament)
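
A sketch makes the opacity point tangible: the "risk" score can be as simple as a weighted sum over behavioral features, yet the person scored never sees the weights. Every feature name and weight below is invented for illustration; real systems may use far more complex models, which only deepens the opacity.

    # Hypothetical weights over behavioral features; none of these names
    # come from a real system.
    WEIGHTS = {
        "attended_protest": 0.5,
        "contacts_with_flagged_people": 0.3,
        "foreign_messaging_apps": 0.1,
        "night_travel_anomaly": 0.1,
    }

    def risk_score(features: dict) -> float:
        return sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())

    person = {"attended_protest": 1.0, "contacts_with_flagged_people": 0.7}
    score = risk_score(person)
    print(f"risk = {score:.2f}", "-> flag for surveillance" if score > 0.5 else "")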

6) Social control systems (e.g., social-credit / behavior scoring).
AI can aggregate financial, social, and behavioral data to reward compliant citizens and penalize dissenters (travel restrictions, job/school access, public shaming). Even where full “social credit” systems don’t exist, partial scoring systems are being used to shape behavior. (National Endowment for Democracy)

7) Automated harassment, doxxing and intimidation at scale.
Bots and AI agents can amplify abusive messages, threaten critics, flood comment sections, or dox opponents — drowning out real voices and creating fear that discourages organizing. (Freedom House)

8) Legal and regulatory capture enabled by AI “evidence.”
Governments can use AI-generated “evidence” (analytics, pattern reports, allegedly incriminating content) to justify arrests or court cases. Because models are opaque, it’s easy to present machine output as factual while avoiding scrutiny. (European Parliament)

Real examples / documented incidents

  • State-linked influence operations using AI to run fake accounts and influence public discourse (reported and disrupted by law enforcement). (Reuters)

  • Reports of spyware used to monitor journalists and opposition in Serbia and other countries. (AP News)

  • Extensive use of facial-recognition and biometric systems in occupied/contested areas to monitor populations. (The Guardian)

  • NGO and academic analyses documenting how AI amplifies digital repression and weakens internet freedom. (Freedom House)

What makes AI especially dangerous for repression

  • Scale & speed: AI automates tasks that previously needed many human analysts.

  • Cheapness: once trained, automated systems are inexpensive to run and can be exported.

  • Plausible deniability / opacity: opaque model outputs make it easy to pass off biases or errors as neutral “technical” decisions.

  • Personalization: propaganda can be micro-targeted to exploit emotional triggers and social fractures. (Freedom House)

Practical defenses — what citizens, platforms and policymakers can do

For individuals & activists

  • Use end-to-end encrypted messaging and practice device hygiene (keep OS/apps updated; avoid suspicious links). (AP News)

  • Use privacy tools: VPNs (with caution), anonymity-preserving browsers, and adversarial obfuscation (e.g., altering appearance in public photos where legal/feasible).

  • Minimize metadata footprint (separate accounts for activism, use burner numbers when necessary).

  • Document abuses safely (secure backups, distribute copies with trusted organizations); a sketch for stripping identifying photo metadata follows this list.
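
On the documentation point: photos carry EXIF metadata (GPS coordinates, device model, timestamps) that can identify the photographer. Here is a minimal sketch, assuming the Pillow library, of stripping it before sharing; the file names are examples, and results should be checked with an EXIF viewer before anything sensitive is distributed.

    from PIL import Image

    def strip_metadata(src_path: str, dst_path: str) -> None:
        """Re-save only the pixel data so EXIF tags are left behind."""
        with Image.open(src_path) as img:
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))  # pixels only, no metadata
            clean.save(dst_path)

    strip_metadata("evidence.jpg", "evidence_clean.jpg")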

For platforms & tech companies

  • Harden user verification against state coercion; resist government takedown demands that violate human rights; publish transparency reports and process-level audits. (Congress.gov)

  • Implement provenance / watermarking for AI-generated media and better tools to detect deepfakes; a toy watermarking sketch follows this list. (PMC)

  • Offer safer defaults, stronger account protections for journalists/activists, and independent redress mechanisms when governments request takedowns.
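
To show what provenance marking means at the simplest level, here is a toy sketch, assuming the Pillow library, that hides a short tag in the least significant bit of each pixel's red channel. Production provenance schemes (for example, C2PA-style signed metadata or robust watermarks) are far more tamper-resistant; the tag, file names, and method here are purely illustrative.

    from PIL import Image

    TAG = "AI-GEN"  # hypothetical provenance tag

    def embed(src: str, dst: str) -> None:
        img = Image.open(src).convert("RGB")
        bits = "".join(f"{byte:08b}" for byte in TAG.encode())
        pixels = img.load()
        for i, bit in enumerate(bits):
            x, y = i % img.width, i // img.width
            r, g, b = pixels[x, y]
            pixels[x, y] = ((r & ~1) | int(bit), g, b)  # set red-channel LSB
        img.save(dst, format="PNG")  # lossless, so the bits survive

    def extract(path: str, length: int = len(TAG)) -> str:
        img = Image.open(path).convert("RGB")
        pixels = img.load()
        bits = [str(pixels[i % img.width, i // img.width][0] & 1)
                for i in range(length * 8)]
        data = bytes(int("".join(bits[i:i + 8]), 2)
                     for i in range(0, len(bits), 8))
        return data.decode(errors="replace")

    embed("generated.png", "tagged.png")
    print(extract("tagged.png"))  # -> "AI-GEN"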

For governments, international bodies & civil society

  • Pass targeted export controls on surveillance tech and require human rights due diligence from vendors. (National Endowment for Democracy)

  • Fund open-source tools to detect/mitigate repression (deepfake detectors, traffic obfuscation, secure comms).

  • Support independent audits of government AI systems and require explainability/procedural safeguards if AI influences policing or legal actions; see the sketch below. (European Parliament)
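
On the explainability point, a minimal sketch of the kind of safeguard audits can demand: whenever a score influences an action, the system emits a per-factor breakdown that a defendant or auditor can contest. It reuses the invented weights from the risk-scoring sketch above; real audit requirements would also cover training data, error rates, and appeal procedures.

    # Invented weights, reused from the risk-scoring sketch above.
    WEIGHTS = {
        "attended_protest": 0.5,
        "contacts_with_flagged_people": 0.3,
        "foreign_messaging_apps": 0.1,
        "night_travel_anomaly": 0.1,
    }

    def explain(features: dict) -> str:
        """Return a contestable per-factor breakdown of a risk score."""
        rows = [(name, w * features.get(name, 0.0)) for name, w in WEIGHTS.items()]
        rows = [(name, c) for name, c in rows if c]
        rows.sort(key=lambda row: -row[1])
        total = sum(c for _, c in rows)
        lines = [f"risk = {total:.2f}, driven by:"]
        lines += [f"  {name}: {c:+.2f}" for name, c in rows]
        return "\n".join(lines)

    print(explain({"attended_protest": 1.0, "contacts_with_flagged_people": 0.7}))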

Bottom line

AI doesn’t invent new motives for repression — it multiplies them. The same political incentives that lead governments to silence critics become dramatically more powerful when combined with automated surveillance, disinformation, and opaque decision systems. But technology + policy + civic action can blunt those risks if actors move deliberately: better laws, platform accountability, defensive tech for citizens, and international pressure to restrict the sale and misuse of repressive AI. (Freedom House)

