OpenAI leaders call for regulation to prevent AI from destroying humanity
OpenAI is asking for regulation to reduce the risk of artificial intelligence presenting an existential threat to humanity but says that pausing development could be dangerous.
OpenAI, the developer of ChatGPT, is calling for more regulation in the artificial intelligence sector to prevent systems from destroying or harming humanity. The tech leaders believe the industry requires the equivalent of the International Atomic Energy Agency, an international organisation that ensures the safe and proper use of new systems.
CEO Sam Altman and co-founders Greg Brockman and Ilya Sutskever are calling on international regulators to work on methods to audit and inspect systems in the interests of human civilisation. While they expect AI to benefit humanity, they also believe in mitigating risks.
“It’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” the OpenAI team says. “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”
The industry leaders believe governments should act quickly to enshrine new regulations to protect the public. AI development continues apace, following an exponential trajectory in most domains, such as processing speed, model size, and sophistication. Regulators must put systems in place today to prevent tomorrow’s autonomous agents from gaining control of civilisational systems and leveraging them to their own ends.
Even in the short term, there are significant risks. Malicious human actors could decide to harness AI to foster bias in public discourse or manipulate the political process. Artificial intelligence is a powerful tool for spreading misinformation convincingly in ways that weren’t possible before.
As such, institutions like the US-based Center for AI Safety (CAIS) believe that relinquishing human labour to AI systems could do “existential” or “catastrophic” damage to society. Humanity, the organisation says, could lose the ability to govern itself, becoming “enfeebled” relative to the emerging superintelligence. Humans in control of such systems could achieve “value lock-in,” becoming a new ruling class while everyone else enters dystopian serfdom – assuming those humans can retain control of the systems at all.
Even so, OpenAI believes that artificial intelligence technology will lead to a better world than we have today. Productivity, education, and creative work will all improve. Moreover, they say that it could also be dangerous to pause development, as called for in a recent open letter to the AI community from worried tech leaders, scientists, and academics. Halting the development of such systems would require a global surveillance regime, according to the company – one that might itself use AI to entrench its own power. Artificial intelligence is “inherently part of the technological path we are on,” the organisation says, ruling out any AI moratorium.