AI is rapidly becoming the primary interface through which people form political beliefs, exercise civic agency, and participate in collective governance. This shift, comparable in scale to the printing press or broadcast media, carries both risks and opportunities for democratic institutions. The outcome depends on design choices being made now — in model transparency, agentic faithfulness, and institutional infrastructure.
The epistemic layer: how we know what is true
Search is already substantially AI-mediated. The next generation of AI assistants will synthesize information, frame it, and present it with authority. For a growing number of people, asking an AI will become the default way to form views on a candidate, a policy, or a public figure. Whoever controls what these models say therefore has increasing influence over what people believe. This is not hypothetical — it is already happening.
A recent field evaluation of AI-generated fact checks on X found that people across a range of political viewpoints rated AI-written notes as more helpful than human-written ones. The paper has yet to be peer-reviewed, but the finding suggests AI-assisted fact-checking may achieve a cross-partisan credibility that has eluded human-led efforts. Greater transparency about how models arrive at their claims and prioritize sources could build further public trust.
The agentic layer: how we act on information
Personal AI agents will soon conduct research, draft communications, highlight causes, and lobby on a user's behalf. They will inform decisions such as how to vote on a ballot measure, which organizations to support, or how to respond to a government notice. These agents will mediate the relationship between individuals and the institutions that govern them.
The risk is similar to what social media demonstrated: algorithms optimizing for engagement over understanding can produce polarization and radicalization without any explicit political agenda. An agent that knows your preferences and anxieties, and that is shaped to keep you engaged, poses the same risks, but they are harder to detect because the agent presents itself as your advocate. It speaks for you, acts on your behalf, and earns trust through that intimacy.
Faithful representation is technically daunting. An agent must never have an agenda of its own or misrepresent its user's views, especially in domains where the user has not explicitly stated any preferences. But faithful representation also cannot become an accessory to motivated reasoning. An agent that refuses to present uncomfortable information, or that shields its user from ever questioning prior beliefs, is not acting in that user's best interest.
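The tension between those two constraints can be made concrete. The sketch below is a toy invariant, not a description of any deployed system: the UserProfile and Brief types are invented for illustration, and it checks an agent's planned output against both failure modes, one-sided advocacy on a topic where the user has stated no position, and a one-sided brief that echoes a stated position while hiding counter-evidence.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UserProfile:
    """Only preferences the user has explicitly stated; nothing inferred."""
    stated_positions: Dict[str, str] = field(default_factory=dict)

@dataclass
class Brief:
    """Material the agent intends to present to its user on one topic."""
    topic: str
    supporting_points: List[str]
    opposing_points: List[str]

def faithfulness_violations(profile: UserProfile, brief: Brief) -> List[str]:
    """Flag the two failure modes discussed above."""
    violations = []
    one_sided = bool(brief.supporting_points) and not brief.opposing_points
    if one_sided and brief.topic not in profile.stated_positions:
        # Failure mode 1: an agenda of its own -- one-sided advocacy on a
        # topic where the user never stated a position.
        violations.append(f"unprompted advocacy on '{brief.topic}'")
    if one_sided and brief.topic in profile.stated_positions:
        # Failure mode 2: motivated reasoning -- the user holds a position,
        # and the brief omits all uncomfortable counter-evidence.
        violations.append(f"brief on '{brief.topic}' omits opposing evidence")
    return violations

# A brief that echoes the user's stated position but hides counter-evidence
# violates the second constraint, not the first.
profile = UserProfile(stated_positions={"ballot-measure-12": "support"})
brief = Brief("ballot-measure-12", ["cuts commute times"], [])
print(faithfulness_violations(profile, brief))
# -> ["brief on 'ballot-measure-12' omits opposing evidence"]
```

In practice, both checks would have to operate on open-ended model behavior rather than neatly structured briefs, which is precisely what makes faithful representation so daunting.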
The institutional layer: collective governance with AI participants
AI agents and humans could soon participate in the same forums, where it may be impossible to tell them apart. Even if every individual AI agent were well-designed and aligned with its user's interests, the interactions of millions of agents could produce outcomes that no individual wanted or chose. Research shows that agents displaying no individual bias can still generate collective biases at scale.