Elon Musk among experts urging a halt to AI training
The Future of Life Institute penned a letter to the AI community, asking for a six-month break in development to allow risk analysis and regulation to catch up.
Since the introduction of OpenAI’s ChatGPT in November 2022, a new air of anticipation has come over the AI community. Large language models (LLMs) appear to endow machines with human-like reasoning abilities, allowing them to “think” for themselves.
Now tech leaders, including Elon Musk, one of the founding members of OpenAI, are sounding the alarm. In an open letter, the SpaceX CEO and his colleagues warn that unfettered AI development poses serious risks and that the race to create a machine with human-like intelligence is out of control.
Signatories organised by the Future of Life Institute are calling for a six-month pause (and possibly longer) in the training of advanced AI systems. Experts want OpenAI, Microsoft, and other firms at the forefront of development to take steps to prevent artificial intelligence from getting out of control and for regulatory frameworks to catch up.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter reads. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The letter then goes on to define a threshold for development. “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” currently the most capable version of OpenAI’s technology.
Besides Musk, other luminary signatories include Apple co-founder Steve Wozniak, Pinterest co-founder Evan Sharp, Ripple co-founder Chris Larsen, MIT professor Max Tegmark, and 2020 presidential candidate Andrew Yang. The list extends to many significant figures in the AI community, including tech leaders, researchers, lab heads, and academics.
The Future of Life Institute’s open letter comes on the heels of a Goldman Sachs report estimating that AI could affect as many as 300 million jobs globally. Even if artificial intelligence doesn’t pose an existential threat to humanity, it could have profound implications for the job market, denying individuals opportunities to do meaningful work and support themselves. For instance, learning how to add online chat to your website won’t disrupt the job market, but replacing human agents outright with AI solutions could.
Computer scientist Stuart Russell of the University of California, Berkeley, one of the open letter’s signatories, told BBC News that AI systems could disrupt democracy through weaponised disinformation and the displacement of human employment. They could also degrade the educational system because of the risk of plagiarism and the falling value of university degrees.
Longer term, the risks become more profound. A misaligned artificial general intelligence (AGI) could cause grievous harm to the planet, particularly if autocratic regimes took possession of it.
While Musk resigned from the board of OpenAI several years ago, he continues to publicly criticise its direction. Despite developing autonomous vehicles at Tesla, Musk worries that humanity may not survive an encounter with a true artificial general intelligence. He says that if the industry doesn’t pause AI development voluntarily, governments should step in and institute a moratorium, and he has called for new regulatory authorities capable of dealing with the threats AI poses.