
The tech world is reeling this week after OpenAI CEO Sam Altman unveiled a radical vision for managing the existential threat of artificial superintelligence, only to face immediate and blistering criticism that his proposed “New Deal” is less a solution than a cynical power grab dressed in regulatory language. The controversy erupted after Altman’s comments, reported by Fortune, suggested that the development of superintelligent AI requires unprecedented global cooperation and oversight; critics counter that his framework is designed to entrench OpenAI’s dominance while stifling competition and democratic accountability.
What Happened?
During a high-profile speech at a tech conference, Altman proposed a sweeping international framework to govern the development of artificial general intelligence (AGI) and artificial superintelligence (ASI). He framed it as a necessary “New Deal” to prevent catastrophic outcomes, arguing that the pace of AI advancement outstrips current regulatory capabilities. The core proposal involves creating a new international body, potentially modeled on nuclear non-proliferation treaties, to oversee and certify safe AI development. Crucially, Altman suggested this body should have the authority to license and monitor AI labs, including OpenAI itself, and impose strict safety standards. The proposal immediately dominated tech and policy discussions, with reactions ranging from cautious optimism to outright fury.
Who Was Involved?
The immediate reaction came from within the tech industry and policy circles. Sam Altman, as CEO of OpenAI, is the central figure, presenting a vision that positions his company as the indispensable leader in navigating AI’s future. Critics include Gary Marcus, a cognitive scientist and prominent AI skeptic, who dismissed Altman’s proposal as “regulatory nihilism”: in his telling, a strategy to dodge meaningful regulation by shifting the burden onto a new, unaccountable international body that would effectively grant OpenAI a monopoly on “safe” AI development. Marcus and others argue that Altman’s plan is less about safety than about digging a regulatory moat around OpenAI’s proprietary technology, making it harder for competitors to challenge its dominance.
What Did They Say?
Altman’s proposal was met with a mix of intrigue and hostility. While some experts welcomed the call for international cooperation, the criticism from figures like Marcus was swift and scathing. Marcus, in a widely shared op-ed, argued that Altman’s “New Deal” is a smokescreen, allowing OpenAI to continue its rapid, often opaque, development of powerful AI models while deflecting scrutiny. “This isn’t about safety,” Marcus wrote. “It’s about Sam Altman wanting to be the gatekeeper for the future of AI, backed by a global bureaucracy that he can influence. It’s regulatory nihilism dressed up as a grand solution.” Other critics, including some former OpenAI employees and independent AI researchers, echoed this sentiment, warning that the proposal could concentrate power in the hands of a single corporation and its allies, undermining democratic control over a technology with profound societal implications.
Why Did It Happen?
The backdrop to Altman’s proposal is the accelerating race for AI supremacy. OpenAI, alongside competitors like Google DeepMind and Anthropic, is locked in a fierce battle to develop the most advanced and capable AI systems. The fear of an uncontrolled AI takeoff, where superintelligent systems surpass human control, has driven calls for regulation. However, Altman’s proposal reflects a specific strategy: positioning OpenAI as the indispensable partner in solving the very problem it helps create. By advocating for a top-down, international regulatory body, Altman aims to shape the rules of the game before they are fully established, potentially creating a framework that favors established players like OpenAI while making it harder for smaller firms or open-source initiatives to compete. This move also comes amidst growing regulatory pressure globally, with the EU’s AI Act and US discussions on AI oversight, suggesting Altman is trying to influence the regulatory landscape to his advantage.
What Are the Consequences?
The immediate consequence is a significant escalation in the debate over AI governance. Altman’s proposal has forced a public reckoning on who controls the future of AI and how. If implemented, the “New Deal” could centralize immense power in a new international body, raising concerns about accountability and democratic oversight. For businesses, it signals a potential shift towards more stringent, globally coordinated AI development standards, which could increase compliance costs but also provide a degree of predictability. For ordinary citizens, the stakes are high: the outcome could determine whether AI development is guided by transparent, democratic processes or by the interests of a few powerful corporations and their chosen regulators. The controversy also highlights the deep divisions within the tech industry itself, with some executives supporting Altman’s bold stance while others fear it will stifle innovation and competition.
What Happens Next?
The road ahead is fraught with challenges. Altman’s proposal will now be scrutinized by governments, international bodies like the UN, and rival tech companies. The EU, in particular, will likely view it with suspicion, given its own ambitious AI legislation. Critics like Marcus are already mobilizing, planning to push back hard in policy forums and public discourse. The key question is whether Altman can build sufficient international consensus for his vision, or if it will be dismissed as a self-serving power grab. The coming months will see intense lobbying, policy debates, and public campaigns as stakeholders try to shape the future governance of AI. The outcome could determine whether the development of superintelligence is guided by open, democratic principles or by the strategic interests of a few tech giants.
My Take: The Bloody Bollocks of It All
This whole spectacle is a masterclass in tech hubris. Sam Altman, the man who once promised to “democratise superintelligence” while building a company that hoards data and patents like a dragon atop its gold, now wants to be the global sheriff of AI Armageddon. His “New Deal” sounds like something dreamt up in a boardroom after too many coffees, a desperate attempt to frame OpenAI’s self-interest as global salvation. The critics aren’t wrong; “regulatory nihilism” is a spot-on description. It’s a fancy term for “we’ll let the AI companies regulate themselves, but only if they let us be the ones doing the regulating.” The real danger isn’t superintelligence; it’s super-corporate power masquerading as super-responsibility. The only thing this “New Deal” guarantees is that Sam Altman gets to keep his crown as king of the AI jungle, with a shiny new global bureaucracy to back him up. Bloody hell, it’s enough to make you want to switch off your phone and go for a bloody walk in the bloody countryside.