Consider a camera. Not the grand, ostentatious apparatus of a propaganda ministry, with its brass fittings and implied threat — but a modest, matte-black hemisphere affixed to a lamp post in a pedestrian corridor, indistinguishable from any other fixture of modern urban furniture. It watches you cross the street. It watches you hesitate outside a bookshop. It watches you speak into your phone. And, depending on the software processing its feed, it may recognize your face in 0.3 seconds, cross-reference that recognition against a database of flagged individuals, score your proximity to an “area of interest,” and log all of this in a ledger that will outlast both you and anyone who authorized its creation.
This is not a hypothetical. Nor is it, as many comfortable liberals prefer to believe, a problem exclusive to places with red stars on their flags.
Surveillance infrastructure now constitutes the connective tissue of modern governance — in Beijing, yes, but also in London, San Francisco, and, with quieter determination, Ottawa. The architecture of control has been democratized, one might say, with considerable irony. The great question of our moment — the question that democracy and technology now pose together, with mounting urgency — is whether the institutions built to constrain power are capable of keeping pace with the tools that expand it. The evidence, examined with clear eyes and without the anesthetic of optimism, is not entirely reassuring.
What Is Digital Authoritarianism?
Digital authoritarianism, as a concept, demands more precision than its frequent deployment in op-ed columns typically affords. The Brookings Institution defined it, with admirable directness, as “the use of digital information technology by authoritarian regimes to surveil, repress, and manipulate domestic and foreign populations.” That definition is serviceable — but incomplete. Its fatal flaw is the word regimes: it locates the pathology exclusively in governments we have already agreed to disapprove of. It lets Silicon Valley and its parliamentary enablers off the scaffold entirely.
A more productive understanding holds that digital authoritarianism is the systematic use of data extraction, behavioral prediction, and algorithmic governance to shape — or foreclose — the range of permissible political and social action. It need not arrive with jackboots. It need not announce itself. It accumulates, instead, through a thousand banal legislative clauses, a thousand well-intentioned platform policies, a thousand machine learning models trained on datasets whose biases nobody has fully audited.
This is where digital authoritarianism diverges crucially from its classical antecedent. The Stalinist state required informants — fallible, expensive, prone to personal vendettas and bureaucratic laziness. The digital surveillance state requires only sensors, servers, and sufficiently motivated engineers. Classical authoritarianism left scars on bodies and communities that memory could later excavate. The digital variant leaves no such physical residue; it operates at the level of behavioral modification, quietly editing what people feel able to say, to assemble around, to believe in private. Repression as infrastructure. Control as a default setting.
The Infrastructure of Control: AI, Surveillance, and Data
AI Surveillance Technology
The AI surveillance market — and one uses the word “market” deliberately, since commerce and coercion have never been so elegantly entangled — has expanded at a pace that would have struck a 1984-era reader of Orwell as excessive. Over half of the world’s one billion surveillance cameras are currently operating in China. But Clearview AI, to select one illustration from the democratic world’s abundant supply, scraped billions of facial images from the open internet to build a recognition database deployed by law enforcement agencies across the United States and beyond — without the knowledge or consent of a single face in its files. The system has been used to make arrests. Some of those arrests have been of innocent people. The affected individuals were not notified. The database continues to grow.
Facial recognition is merely the most photogenic expression of a broader AI surveillance apparatus. Gait recognition — identifying individuals by the rhythm of their walk, rendering masks and hoods irrelevant — is operational in Chinese cities and commercially available to any government with a procurement budget and minimal scruples. Voice biometrics, behavioral analytics, predictive threat-scoring: these technologies exist not as science fiction but as line items in the catalogues of firms with anodyne names and prestigious investors.
What distinguishes AI surveillance from its predecessors is not merely its scale but its granularity. Previous surveillance states could monitor crowds. The contemporary apparatus can monitor intentions — or, more accurately, can score the statistical probability of certain behaviors with sufficient confidence to trigger preemptive responses. This is the transformation from policing what people do to policing what algorithms predict they are about to do. The epistemological implications of that shift are, to deploy understatement, considerable.
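To see how stark that shift is, consider a minimal sketch. Every feature, weight, and threshold below is invented for exposition; no deployed system is being described, only the shape of the logic:

```python
from dataclasses import dataclass

# Illustrative only: the features, weights, and threshold are invented.

@dataclass
class Observation:
    loitering_minutes: float      # time spent near an "area of interest"
    prior_flags: int              # times previously flagged, however unfairly
    gait_match_confidence: float  # 0.0-1.0 output of a biometric model

def threat_score(obs: Observation) -> float:
    """Collapse observed behavior into a single number. The weights
    encode policy choices, but arrive dressed as engineering constants."""
    return (0.40 * min(obs.loitering_minutes / 30.0, 1.0)
            + 0.35 * min(obs.prior_flags / 5.0, 1.0)
            + 0.25 * obs.gait_match_confidence)

ALERT_THRESHOLD = 0.6  # an arbitrary line between citizen and suspect

def preemptive_response(obs: Observation) -> str:
    # No act has occurred and no charge exists; the system responds
    # to a probability, not an event.
    return "dispatch_officer" if threat_score(obs) >= ALERT_THRESHOLD else "log_and_continue"

pedestrian = Observation(loitering_minutes=25, prior_flags=2, gait_match_confidence=0.7)
print(round(threat_score(pedestrian), 2), preemptive_response(pedestrian))  # 0.65 dispatch_officer
```

Nothing in the sketch requires anything to have happened; a probability crossing an arbitrary line is the entire event.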
Predictive Governance
Algorithmic governance — the administration of populations through automated decision-making systems — operates on an assumption so deeply embedded in its design that its political character is rarely remarked upon: that human behavior is essentially predictable, and that prediction is essentially neutral. Both propositions are false.
Predictive policing systems, deployed in cities across the United States, the United Kingdom, and elsewhere, consistently reproduce the biases embedded in historical crime data — data that reflects decades of racially discriminatory enforcement patterns. The algorithm, in other words, does not predict crime. It predicts who police have previously chosen to investigate, and recommends they investigate those same communities again. This is not a glitch. It is the system functioning as designed, laundering institutional prejudice through the prestige of mathematics.
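The feedback loop is easy to demonstrate. The toy model below resembles no vendor's product and invents all of its numbers; it exists only to show that when patrol allocation is trained on enforcement history, an initial disparity sustains itself even where the underlying rates are identical by construction:

```python
import random

# A toy model of the feedback loop, with invented numbers throughout.
# Both districts have the same true crime rate by construction; only
# the historical arrest counts differ.

random.seed(0)
TRUE_RATE = {"district_a": 0.10, "district_b": 0.10}
arrests = {"district_a": 30, "district_b": 10}  # decades of biased enforcement

for year in range(10):
    total = sum(arrests.values())
    for district, rate in TRUE_RATE.items():
        # "Data-driven" allocation: patrols go where past arrests were recorded.
        patrols = round(100 * arrests[district] / total)
        # Crime is only observed where patrols are present.
        arrests[district] += sum(1 for _ in range(patrols) if random.random() < rate)

print(arrests)  # district_a keeps roughly its 3:1 head start; the bias never washes out
```

The model never corrects itself because it only collects data where it already patrols.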
Beyond policing, algorithmic governance has quietly colonized welfare systems, immigration processing, parole decisions, and credit access. In each domain, the logic is identical: replace the expensive, inconsistent, occasionally compassionate judgment of a human bureaucrat with the cheap, consistent, entirely dispassionate calculation of a model. The appeal to administrators is obvious. The consequences for the administered — particularly those rendered legible to the model primarily through markers of poverty, race, or precarity — are considerably less pleasant.
Privatized Censorship and Platform Power
The surveillance state does not require a state in any traditional sense. This is the discovery — still inadequately processed by democratic theory — of the past two decades. Digital censorship now operates primarily through platforms that are nominally private enterprises governed by terms of service agreements that no sentient adult reads in full and that can be altered without notice, consent, or meaningful appeal.
When Meta’s content moderation AI suppresses a post about Palestinian casualties, or Twitter’s successor-entity amplifies certain political content while throttling others, or YouTube’s recommendation engine routes users through an increasingly radical succession of videos — these are acts of profound political consequence conducted by private actors accountable, in any meaningful sense, to no one except their shareholders. The surveillance state and the attention economy have discovered a productive symbiosis: governments harvest data that platforms generate; platforms receive regulatory forbearance in return. The citizen sits at the center of this arrangement, comprehensively legible to both parties, capable of meaningful resistance against neither.
The China Model vs. The Western Model
China presents the most explicit and fully theorized version of digital authoritarianism, and it merits examination without the reflexive horror that sometimes passes for analysis in Western commentary. The Chinese Communist Party has built, with genuine technical sophistication, what its architects call “smart governance” — a unified apparatus integrating facial recognition, behavioral scoring, predictive policing, and internet content control. General Secretary Xi Jinping articulated the aspiration with characteristic economy: the trustworthy should find “everything convenient,” while the untrustworthy should be “unable to move a single step.”
The system works through what the political scientist Margaret Roberts terms “friction” and “flooding” — not blanket censorship that would be obvious and provocative, but the more elegant technique of making dissenting information tiresome to access while drowning inconvenient discourse in state-directed content. Authoritarianism has discovered, at last, the wisdom of the paywall.
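The technique is simple enough to caricature. The sketch below invents its terms, delays, and ratios, and implements no real system; it shows only that friction and flooding compose naturally, with no deletion anywhere in the code path:

```python
import time

# A cartoon of friction and flooding, with invented terms, delays,
# and ratios. No request is ever denied; the disfavored result is
# merely made tiresome to reach and easy to lose.

SENSITIVE_TERMS = {"protest", "strike"}

def search(query: str, organic: list[str], official: list[str]) -> list[str]:
    if any(term in query for term in SENSITIVE_TERMS):
        time.sleep(2.0)  # friction: every sensitive request is just slow enough
        padded = []
        for item in organic:
            padded.extend(official[:3])  # flooding: three official items per organic one
            padded.append(item)
        return padded
    return organic

results = search("protest downtown", ["eyewitness_report"], ["official_statement"] * 5)
print(results)  # the report is present, buried in noise
```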
The Western model is less architecturally unified but no less structurally significant — and considerably more difficult to critique, since its most coercive elements wear the livery of consumer choice. No Canadian is legally required to carry a smartphone. But to decline participation in the digital infrastructure is to remove oneself from employment markets, social networks, and civic communication. The choice to opt out is formally available; practically, it is reserved for hermits and the extraordinarily affluent.
This hybrid corporate-state alignment produces a surveillance apparatus that is, in certain respects, more pervasive than its Chinese counterpart — because it generates its data voluntarily, at the enthusiastic instigation of the surveilled. One imagines a future historian noting, with restrained bewilderment, that the liberal democracies of the early twenty-first century persuaded their populations to pay monthly fees for the privilege of comprehensive behavioral monitoring.
The crucial distinction — worth defending, even as one notes its fragility — is that Western democracies retain institutional mechanisms through which the surveillance architecture can be contested: courts, legislatures, a somewhat functioning if financially distressed press. These mechanisms are not nothing. But they are operating at the speed of constitutional deliberation against technologies that iterate at the speed of venture capital.
Canada’s Quiet Digital Expansion
Canada has long cultivated a self-image as the reasonable alternative — the country that said no to Iraq, that pioneered peacekeeping, that produces Nobel laureates in physics with suspicious regularity. In the domain of digital regulation, this self-image has produced something genuinely admirable: a tradition of serious engagement with privacy law that predates the current crisis, and a political culture marginally more capable of nuanced debate than its southern neighbor.
Yet the picture that has emerged from the legislative record of the past several years is considerably more complicated than Canadian modesty permits.
The federal government’s Bill C-27 — which died on the Order Paper in 2025 amid the parliamentary prorogation and subsequent election — proposed the Artificial Intelligence and Data Act (AIDA), Canada’s first attempt at a comprehensive AI regulatory framework. The proposal was earnest and, in its better provisions, admirably rigorous about demanding transparency and accountability from operators of “high-impact” AI systems. It was also, critics from civil society noted with some precision, opaque about who precisely would determine which systems qualified as high-impact — a definitional ambiguity large enough to accommodate several surveillance programs and a predictive policing algorithm or two.
Bill C-63, the Online Harms Act — also deceased, also likely to return in altered form — sought to create a Digital Safety Commissioner with powers to compel platforms to remove illegal content. The ambitions were defensible; the mechanism for age verification that the bill implied would necessarily require platforms to collect and process identity documentation at a scale that privacy advocates, not unreasonably, found alarming. The cure proposed for the pathology of online harm was, in certain provisions, a species of the pathology itself.
Bill C-2, the Strong Borders Act, introduced in 2025, drew significant scrutiny from privacy advocates for provisions that would grant new powers for warrantless searches and obligate electronic service providers to enable lawful access to their systems — language that, in any other jurisdiction, would prompt immediate comparison to the architecture of a surveillance state. In Canada, it prompted a parliamentary committee and a measured letter from the Privacy Commissioner. This is either admirable institutional restraint or a failure of proportionate alarm, depending on one’s assessment of the current threat.
According to the Office of the Privacy Commissioner’s most recent survey, 87% of Canadians have some level of concern about their privacy — a figure that suggests the public has a clearer reading of the situation than the legislative calendar might imply. Mark Carney’s government has appointed Canada’s first minister responsible for AI and Digital Innovation, which is the kind of institutional gesture that can mean everything or nothing depending on the mandate and budget that accompany it. The architecture is being constructed. What it ultimately houses remains, at this juncture, genuinely contested.
Psychological and Political Effects
The chilling effect is not a metaphor. It is a documented, replicable phenomenon — a measurable contraction in the range of speech and association that occurs when individuals know, or merely suspect, that they are observed. Surveillance studies has spent three decades mapping its contours. The finding is consistent: surveillance changes behavior, and not in the direction of civic confidence.
The digital surveillance state has introduced a refinement the classical theorists did not anticipate: the internalized algorithm. People now self-censor not merely because they fear a specific observer but because they have absorbed, through years of interacting with platforms that penalize certain content through reduced distribution, an intuition about what is algorithmically safe to say. This is behavioral conformity achieved not through law or even explicit policy but through the accumulated weight of machine learning systems whose decision criteria are deliberately obscured. Digital censorship of this variety leaves no visible mark. The person who does not write the post, does not attend the demonstration, does not donate to the cause — has not been silenced. They have simply, of their own apparently free will, chosen not to speak. This is the sophistication of the arrangement.
Algorithmic nudging operates on a longer timescale. The Facebook whistleblower Frances Haugen's documents, and a subsequent library's worth of academic research, established that recommendation algorithms systematically favor content that generates engagement — and that outrage, fear, and tribal contempt generate more engagement than nuance or the considered weighing of evidence. A population whose primary information environment rewards emotional intensity and penalizes complexity is not well-positioned to make the judgments democratic self-governance requires. This is not a conspiracy. It is a business model. The effect on democratic legitimacy is functionally equivalent.
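The mechanism is banal enough to fit in a few lines. The ranker below is schematic, with invented identifiers and numbers, and describes no particular platform; it shows only that once predicted engagement is the sole sorting key, nothing further needs to go wrong:

```python
# A schematic feed ranker with invented identifiers and scores; it is
# not any platform's production system. The objective function is
# facially neutral, and that is the entire problem.

posts = [
    {"id": "nuanced_policy_thread", "predicted_engagement": 0.02},
    {"id": "outraged_hot_take", "predicted_engagement": 0.11},
    {"id": "tribal_dunk_video", "predicted_engagement": 0.09},
]

def rank_feed(posts: list[dict]) -> list[dict]:
    # Sort purely by predicted engagement: no editor chose outrage,
    # yet outrage reliably tops the feed.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in rank_feed(posts):
    print(post["id"], post["predicted_engagement"])
```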
What corrodes, finally, is not any specific right or institution but something more diffuse: the sense that democratic participation is meaningful, that the choices presented in voting booths represent genuine alternatives, that the political process responds to expressed preferences rather than to managed preferences shaped by algorithmic governance in the service of commercial and state interests. Democratic legitimacy rests on consent. Consent requires genuine information. Genuine information requires an information environment not comprehensively mediated by systems citizens did not design and cannot audit.
Can Liberal Democracies Resist Digital Authoritarianism?
The skeptical reader — entirely justified at this juncture — might demand some accounting of what can actually be done. The preceding paragraphs were not designed to produce despair, though despair remains readily available if one insists on it. They were designed to produce clarity, which is the necessary precondition for any response worthy of the name.
Democratic institutions have demonstrated a capacity for resistance that elegiac analysis tends to underestimate. The EU AI Act, which entered into force in 2024, prohibits certain applications of digital authoritarianism outright — social scoring systems, predictive policing based solely on profiling — and confines real-time biometric surveillance in public spaces to a short list of law-enforcement exceptions. All of these practices remain operational and largely uncontested in North America. These are not achievements to be dismissed.
Institutional resilience also requires transparency. The opacity of algorithmic governance — the proprietary models, the confidential training data, the absent audit regime — is not inherent to the technology. It is a political choice, sustained by lobbying and regulatory capture, and it can be reversed. Algorithmic impact assessments, conducted by independent bodies with genuine access to the systems under review, are technically feasible. The obstacles are not engineering problems.
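The feasibility claim deserves substantiation. One standard audit primitive, sketched below over invented records, compares a system's selection rates across groups, in the spirit of the familiar four-fifths rule of thumb; what independent assessors lack is access to decisions like these, not the mathematics to analyze them:

```python
from collections import defaultdict

# One audit primitive an independent assessor could run with real
# access: compare a system's selection rates across groups. The
# records and field names here are invented.

decisions = [
    {"group": "A", "flagged": True},  {"group": "A", "flagged": False},
    {"group": "A", "flagged": False}, {"group": "A", "flagged": False},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": True},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": False},
]

def selection_rates(records: list[dict]) -> dict[str, float]:
    flagged, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        flagged[r["group"]] += r["flagged"]
    return {g: flagged[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
disparity = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio = {disparity:.2f}")  # well below the 0.8 rule of thumb
```

The computation is trivial once access exists, which is precisely why opacity is the point of contention.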
Digital civil liberties organizations — the Electronic Frontier Foundation, Access Now, the Canadian Internet Policy and Public Interest Clinic at the University of Ottawa — have spent years constructing legal frameworks through which surveillance architecture can be challenged. Their resources are a small fraction of the systems they contest. Their persistence is, by any fair accounting, remarkable.
What liberal democracies cannot afford is the comfortable fiction that the problem is elsewhere. That digital authoritarianism is a Chinese export. The surveillance infrastructure of the West was not smuggled in by adversaries. It was built here, by companies incorporated here, deployed by governments elected here, against populations never fully informed of the bargain on offer.
History offers occasional examples of institutions successfully constraining the technologies they initially failed to regulate — labor movements constraining industrial capitalism, antitrust doctrine constraining monopoly, environmental law constraining industrial pollution. Each required decades of organizing, litigation, and public pressure. Each faced identical claims that regulation would kill innovation — which it did not, because innovation is more robust than its beneficiaries prefer to admit when threatening legislators.
What resistance requires, above all else, is the willingness to look directly at what is being built — without the anesthetic of progress narratives, without the comfortable reassurance that our surveillance state is different in kind because we have better intentions, without the luxury of pretending that a camera is merely a camera because it sits atop a lamp post in a city where people vote.
Forgetting is convenient. Attention is the civic obligation that replaces it.
Further Reading:
- Freedom House – The Repressive Power of Artificial Intelligence — A current, authoritative report on how AI is intensifying digital repression and threatening global internet freedom.
- Oxford University Press – Digital Authoritarianism overview — A scholarly treatment of digital authoritarian practices, information control, and the interplay of state and non-state actors shaping digital technologies.
- The Bulletin of the Atomic Scientists – Digital authoritarianism and the threat to global democracy — A widely cited analysis situating digital authoritarianism within global democratic erosion and surveillance network expansion.