Hundreds of public figures including ‘AI godfathers’ urge ‘superintelligence’ ban


A group of prominent figures, including artificial intelligence and technology experts, has called for an end to efforts to create ‘superintelligence’ — a form of AI that would surpass humans on essentially all cognitive tasks. 

Over 850 people, including tech leaders like Virgin Group founder Richard Branson and Apple cofounder Steve Wozniak, signed a statement published Wednesday calling for a prohibition on the development of superintelligence. 

The list of signatories was notably topped by prominent AI pioneers, including the computer scientists Yoshua Bengio and Geoffrey Hinton, who are widely considered “godfathers” of modern AI. Leading AI researchers like UC Berkeley’s Stuart Russell also signed on. 

Superintelligence has become a buzzword in the AI world as companies from Elon Musk’s xAI to Sam Altman’s OpenAI compete to release ever more advanced large language models. Meta has gone so far as to name its AI division Meta Superintelligence Labs. 

But signatories of the recent statement warn that the prospect of superintelligence has “raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”

The statement called for a ban on developing superintelligence until there is strong public support and a broad scientific consensus that it can be built safely and kept under control.

In addition to AI and tech figures, the signatories formed a broad coalition of academics, media personalities, religious leaders and a bipartisan group of former U.S. politicians and officials. 

Those retired officials included former chairman of the Joint Chiefs of Staff Mike Mullen and former National Security Advisor Susan Rice.

Meanwhile, Steve Bannon and Glenn Beck — influential media allies of U.S. President Donald Trump — were also prominently featured on the list.

Other high-profile signatories included Britain’s Prince Harry and his wife, Meghan Markle, as well as former president of Ireland Mary Robinson. As of Wednesday, the list was still growing.


Meanwhile, Elon Musk said on a podcast earlier this year that advanced AI surpassing human intelligence posed “only a 20% chance of annihilation.” 

The ‘Statement on Superintelligence’ cited a recent survey from the Future of Life Institute showing that only 5% of U.S. adults support “the status quo of fast, unregulated” superintelligence development. 

The survey of 2,000 American adults also found that a majority believe “superhuman AI” shouldn’t be created until proven safe or controllable and want robust regulation on advanced AI. 

In a comment published alongside the statement, computer scientist Bengio said AI systems could surpass most individuals in most cognitive tasks within a few years. He added that while such advances could help solve global challenges, they also carry significant risks.

“To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use,” he said. 

“We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”