Radical Optionality
An Essay on AI Governance


"If capacity-building begins only once risks and benefits are unmistakable, the decisive window for proportionate action will likely have closed."
Authors Christoph Winter & Charlie Bullock
Reading time 25 minutes
Contents
  I. The Challenge of Regulating Transformative AI
  II. Let the Market Handle It
  III. Anticipatory Governance & the Precautionary Principle
  IV. The Case for Radical Optionality
  V. Policy Recommendations
  VI. Conclusion
  VII. References

The Five Assumptions


  1. That there is a real possibility of transformative AI systems being invented in the next, say, ten years or so;
  2. That the benefits of transformative AI will likely outweigh its risks, especially if sensible governance measures are implemented;
  3. That significant uncertainty exists as to how and when transformative AI will be developed and what the best way to govern it will be;
  4. That transformative AI, as a dual-use technology with significant national security implications, will eventually require some degree of governance;
  5. That building the institutional capacity to govern well has long lead times, such that waiting for uncertainty to resolve may leave insufficient time to prepare.
Whether you're optimistic or pessimistic about transformative AI, the stakes are high enough and the uncertainty deep enough that building governance capacity now is justified.
I

The Challenge of Regulating Transformative AI


The importance of getting the regulatory response to a truly transformative technology right is obvious. The complexity of the challenge is perhaps less so. Governments today face a genuinely difficult dilemma: act too early, with incomplete information, and risk stifling innovation or locking in poorly designed rules. Act too late, and the window for meaningful governance may have closed.

This problem is compounded by the speed of AI development. Unlike previous transformative technologies, where decades passed between fundamental breakthroughs and widespread deployment, advances in AI capabilities can be measured in months. Regulatory institutions built for a slower pace of technological change struggle to keep up.

The challenge is further complicated by deep uncertainty about AI's trajectory. Reasonable experts disagree not only about when transformative AI will arrive, but about what form it will take, what risks it will pose, and what governance measures will be appropriate. This uncertainty makes it tempting to delay action—but delay carries its own risks.

II

Let the Market Handle It


One common response to regulatory uncertainty is to defer to market forces. On this view, the competitive dynamics of the AI industry will naturally incentivize safety and responsible development, making government intervention unnecessary or even counterproductive.

There is something to this argument. Companies developing frontier AI systems have strong incentives to avoid catastrophic failures that would damage their reputations and invite regulatory backlash. The voluntary commitments that major AI companies have made to safety testing and responsible deployment reflect, at least in part, genuine concern about the risks their products might pose.

But the market-only approach has serious limitations. Market incentives are imperfect proxies for the public interest. Companies face competitive pressure to move quickly, which can lead to underinvestment in safety. The most consequential risks from transformative AI—those involving national security, democratic stability, or large-scale social disruption—are precisely the kinds of externalities that markets handle poorly.

III

Anticipatory Governance & the Precautionary Principle


At the other end of the spectrum from market deference is the precautionary principle: the idea that, in the face of potentially catastrophic risks, governments should act decisively to prevent harm, even in the absence of complete scientific certainty about the nature or magnitude of the risk.

Applied to AI, this might mean imposing strict regulations on frontier AI development, requiring extensive pre-deployment safety testing, or even implementing moratoriums on certain kinds of AI research. The appeal of this approach is obvious: it takes seriously the possibility that transformative AI could pose risks that are difficult or impossible to reverse once they materialize.

The problems with the precautionary approach are equally significant. Overly restrictive regulation risks driving AI development to jurisdictions with less oversight. It may also stifle beneficial innovation, delaying the realization of AI's potential to address pressing social challenges. Most fundamentally, the precautionary principle offers little guidance about what, specifically, to regulate, because it is premised on uncertainty about the very risks it seeks to address.

IV

The Case for Radical Optionality


Instead of regulating or failing to regulate, governments can prepare to regulate in a way that will improve their ability to respond to a wide range of possible scenarios, foreseen or unforeseen. This can be done by building strong regulatory institutions, equipping them with appropriately flexible authorities, and ensuring that they have access to the information they'll need to respond competently and decisively.

Unlike precautionary regulation, these measures would impose only negligible burdens on AI companies and innovation. The costs of an optionality-maximizing approach would instead be measured in taxpayer dollars.

Consider Leopold Aschenbrenner's "Situational Awareness" essays.[1] Aschenbrenner argues that the U.S. government will eventually need to prohibit private companies from working on transformative AI systems and instead pursue a government-run "AGI Manhattan Project." In another influential piece, Vitalik Buterin proposes "defensive acceleration" (d/acc) as a way of avoiding a closed, centralized future.[5] And Anthropic CEO Dario Amodei's "Machines of Loving Grace" suggests an "entente strategy" for democracies.[6]

The case for capacity-building follows from the scale of the problem: our starting premise is that a transition at least as significant as the Industrial Revolution may occur over the next few years or decades. If governments genuinely accept that dual-use transformative AI systems may arrive in the near or medium term, the logical consequence is that virtually anything they can do to even marginally improve the odds of that transition going well will be cost-justified.

V

Policy Recommendations

Steps that can and should be taken as soon as possible to increase regulatory capacity without creating significant barriers to innovation.

Reporting & Transparency Requirements: Information-Gathering Authorities

Implement well-designed transparency and reporting requirements that allow governments to develop expertise in securely collecting, analyzing, and sharing information about frontier AI systems.

Secure Disclosure Channels: Whistleblower Protections

Ensure employees at frontier AI companies can report information about risks to public safety or national security, without fear of retaliation, to appropriate government offices.

Inter-Governmental Coordination: Information Sharing

Establish channels for securely sharing appropriate information about model capabilities and risks between governments and close allies.

Adaptive Regulatory Frameworks: Flexible Rules & Definitions

Create regulatory definitions that can be updated more rapidly and reliably than definitions baked into statutes, reducing the risk of obsolescence.

Pre-Deployment Testing: Assessments & Evaluations

Build capacity for pre-deployment testing of frontier models, including voluntary and mandatory government testing regimes.

National Security Priorities: Securing Model Weights

Promulgate comprehensive voluntary standards for physical and cybersecurity throughout the frontier AI development supply chain.

VI

Conclusion


Humanity may soon face the most consequential regulatory challenge in history: governing artificial intelligence powerful enough to precipitate a societal transformation on par with the Industrial Revolution, but compressed into years rather than generations.

We argue for radical optionality—avoiding overregulation in the short term while building up the government's capacity to regulate competently when and if it becomes clear that regulation is needed. This approach equips democratic governments to respond to a range of technological developments by increasing the quality of intelligence, judgment, and authority available to public authorities.

Optimists and pessimists may disagree about what to do with that capacity when the time comes, but they should agree on building it.

VII

References


  [1] Leopold Aschenbrenner, "Situational Awareness: The Decade Ahead" (2024).
  [2] Yoshua Bengio et al., "Managing AI Risks in an Era of Rapid Progress" (2023).
  [3] Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (2023).
  [4] EU Artificial Intelligence Act, 2024 O.J. (L 1689).
  [5] Vitalik Buterin, "My techno-optimism" (2023).
  [6] Dario Amodei, "Machines of Loving Grace" (2024).
  [7] Christoph Winter & Charlie Bullock, "The Governance Misspecification Problem," Institute for Law & AI Working Paper No. 3-2024 (2024).
  [8] Frontier Model Forum, "Advancing Frontier AI Safety" (2023).