
Anton Leicht, a prominent voice in AI policy discussions, has issued a stark warning about the trajectory of accelerationist AI policy, asserting that it is "losing ground" and fails to give moderate political factions a viable "pro-AI case" for 2028. Leicht argues that without a change in approach, its defeat could make AI policy worse for everyone as anti-AI sentiment gains traction.
"Accelerationist AI policy is losing ground, and the current strategy does not give moderates a pro-AI case for 2028. Without it, they'll get pulled apart by anti-AI sentiment," Leicht stated in a recent tweet. He further emphasized, "Today, I argue accelerationism needs to change, or its defeat will make AI policy worse for everyone."
The term "accelerationist AI policy," often associated with "effective accelerationism" (e/acc), advocates for the rapid and largely unrestricted development of artificial intelligence, particularly artificial general intelligence (AGI). Proponents of e/acc, sometimes labeled "boomers" or "accels," believe that advanced AI can solve humanity's most pressing challenges, from disease to climate change, and caution that overregulation could stifle innovation and economic growth. This perspective often contrasts sharply with that of "AI safety" advocates, known as "decels" or "doomers," who prioritize robust safeguards, ethical guidelines, and even pauses in development to mitigate potential existential risks, such as misaligned superintelligent AI or autonomous weapons.
The debate between AI safety and acceleration is currently dividing Silicon Valley, policymakers, and researchers. Recent analyses call for a more balanced approach, arguing that the binary framing of the debate is overly simplistic, fuels polarization, and distracts from the concrete work of responsible AI development. Experts instead suggest focusing on interdisciplinary research, transparent and accountable AI systems, adaptive regulatory frameworks, and international cooperation.
Leicht's call for a change in accelerationist policy suggests a recognition that the current unbridled push for speed may be politically unsustainable. His previous writings have explored the complex political landscape of AI, advocating for strategic policy interventions and a nuanced approach that considers both the benefits and risks. The challenge for moderates, as Leicht implies, is to craft a compelling narrative that supports AI's potential while addressing legitimate concerns, thereby preventing a complete capitulation to anti-AI sentiment. The outcome of this evolving debate will significantly shape the future of AI governance and its societal impact.