The Signal #5
Regulate or resist? AI's Atlantic divide.
Two developments on opposite sides of the Atlantic have spotlighted a core tension in AI governance. In Brussels, the EU's AI Act is shifting from legislative blueprint to regulatory hammer, aiming to curb the power of big tech firms whose systems threaten rights and democratic norms. Meanwhile, in Washington and Silicon Valley, Anthropic, a frontrunner in frontier AI, has flatly rejected U.S. Department of Defense requests to repurpose its models for mass domestic surveillance and fully autonomous weapons. The company insists that laws lag behind AI's explosive capabilities, and certain applications cross ethical red lines it won't abide, even under pressure.
Taken together, these developments raise a key structural question: who holds power in the age of AI, and how is that power constrained?
The European Union has chosen to constrain the power of large private companies by defining categories of unacceptable and high-risk uses, embedding fundamental rights into regulatory design. The underlying logic is that when technologies pose systemic risks to democracy, privacy, and human dignity, democratically mandated institutions must draw red lines.
Yet the Anthropic episode complicates a simple narrative of public virtue and private excess. Here, a private company appears to be resisting state demands, arguing that certain applications, specifically mass domestic surveillance and autonomous lethal decision-making, should not be pursued even if requested by the government. This flips the script: not a rogue corporation evading democracy, but a government chasing applications one company views as morally bankrupt.
This duality points to what I have previously described as the narrowing corridor in the age of AI. The original idea of a “narrow corridor”, from Daron Acemoglu and James Robinson, captures the fragile balance between state capacity and societal constraints: too weak a state, and private actors dominate; too strong a state, and liberty erodes. In the context of AI, that corridor is becoming narrower still. Technological capabilities are expanding at a pace that strains both legal frameworks and institutional reflexes. States, markets, and societies are adjusting in real time and often asymmetrically.
The EU’s regulatory push can be read as an effort to keep powerful firms within the corridor, ensuring that private innovation does not outpace democratic oversight. But the Anthropic standoff suggests that the corridor also requires limits on state power. If governments can compel frontier firms to deploy AI systems for surveillance or autonomous warfare without robust democratic deliberation, the balance tips in the opposite direction. The risk is not only private overreach, but public overreach enabled by private capability.
What makes the AI moment distinctive is that both risks are intensifying simultaneously. Frontier models possess unprecedented leverage: they can shape information flows, automate strategic decisions, and potentially alter the distribution of military power. That leverage invites both regulatory constraint and strategic demand. The state seeks security and geopolitical advantage; firms seek innovation and profit; societies seek protection of rights and dignity. The corridor narrows because each actor’s incentives pull in different directions, and the speed of change reduces the margin for institutional error.
There is also a subtler dynamic at play. When Anthropic argues that the law has not kept pace with AI capabilities, it is implicitly claiming a form of normative authority: that corporate governance and internal ethical commitments must temporarily fill the vacuum left by slow-moving legislatures. That may be responsible behaviour in the short term. But it is not a stable equilibrium. Private restraint cannot substitute indefinitely for democratically legitimised rules. Nor can regulatory frameworks designed today anticipate every capability that may emerge tomorrow.
The structural challenge, then, is to design institutions that keep both state and corporate power within the corridor, even as technological capabilities accelerate. This demands anticipatory governance: mechanisms that bring policymakers, technical experts, civil society, and firms into iterative dialogue before lines are crossed. It requires strengthening parliamentary and regulatory capacity, and it may require new forms of global coordination, since national borders do not confine AI development and deployment.
An important signal in this direction is the United Nations’ creation of an Independent International Scientific Panel on Artificial Intelligence, a 40-member body convened to assess the opportunities and risks of AI and provide a shared evidence base for governance debates. Its mandate echoes previous global assessment mechanisms, but recently, Secretary-General António Guterres went further, proposing a multibillion-dollar global fund to expand computing capacity, data infrastructure, and AI expertise in developing countries. The goal is to help ensure that capabilities and benefits are more equitably distributed rather than concentrated in a handful of wealthy states or private labs.
Both efforts reflect an understanding that evidence and capacity are prerequisites for balanced governance, but they stop short of answering the question of who decides which risks are too great and whose voices shape those decisions.
This tension reveals that the corridor in the age of AI is not simply about balancing state and corporate power. It is also about determining the locus of normative judgment. If governments alone define acceptable risk, there is a danger of executive overreach, securitisation, or politicisation. If firms do so, market incentives and strategic calculations may shape decisions that carry profound societal consequences. Neither pole fully satisfies democratic principles.
A third possibility lies in widening the decision-making base itself. In my earlier reflection on the futures of AI governance, I noted how Switzerland’s semi-direct democratic system introduces another layer to the equation. Initial regulatory steps there have relied heavily on evidence-based analysis, yet the system also allows for referenda that could directly involve citizens in shaping AI policy. That model gestures towards something more ambitious: co-designing AI institutional frameworks through structured public participation, ensuring that regulatory boundaries reflect societal priorities instead of solely bureaucratic or corporate judgment.
Such participatory mechanisms are not a panacea. Complex technical questions cannot be resolved by plebiscite alone. But they can embed legitimacy, surface diverse concerns, and anchor high-level governance choices in lived experience.
The creation of a global scientific panel, then, addresses one dimension of the corridor challenge: shared knowledge. It strengthens the evidentiary foundations upon which decisions can be made. Yet the normative dimension of who decides, through what processes, and with what accountability, remains contested. If the corridor is narrowing from both sides, the answer cannot simply be to empower one side against the other. It must involve designing institutions that distribute decision-making authority in ways that are transparent, participatory, and adaptive.
The deeper implication is that AI governance may need to move beyond regulatory territory into constitutional territory. It forces democratic systems to confront how power is allocated, constrained, and legitimised under conditions of rapid technological change. Whether the future tilts toward concentrated executive authority, corporate self-regulation, or more participatory forms of oversight will shape not only AI outcomes but the texture of democracy itself.
Maintaining the corridor in the age of AI will therefore likely require three layers operating together: global evidence infrastructures to clarify risks; robust national institutions capable of acting on that evidence; and democratic processes (representative and, where appropriate, direct) that determine which risks a society is willing to bear.
What I read this week:
Future Tense Fiction: Where Many Visions of the Future Battle at Once
Nature: UN creates new scientific AI advisory panel: what will it do?
Project Syndicate: The War on Iran and the War on Anthropic
—
Warmly,
Sanja