A group of influential artificial intelligence scientists from around the world is calling for the establishment of an international authority to oversee AI development and prevent potential “catastrophic outcomes” as the technology advances at a rapid pace.
The New York Times reports that leading AI scientists from the United States, China, and other nations have come together to issue a stark warning about the potential risks posed by rapidly advancing AI technology. The group, which includes pioneers and influential figures in the field, is urging countries to create a global system of oversight to mitigate these risks and ensure the safe development of AI.
The scientists, who helped lay the foundation for modern AI, expressed their concerns in a statement released on Monday. They cautioned that AI technology could, within a matter of years, surpass the capabilities of its creators, and that “loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”
The call for action comes amidst the rapid commercialization of AI, which has seen the technology move from the fringes of science to mainstream applications in smartphones, cars, and classrooms. Governments worldwide are grappling with the challenge of regulating and harnessing this powerful technology.
Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University, highlighted the current lack of any plan to control AI systems if they were to develop dangerous capabilities. “If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?” Dr. Hadfield questioned.
To address this critical issue, the group of scientists proposed the establishment of AI safety authorities in each country. These authorities would be responsible for registering the AI systems within their borders and working together to agree on a set of red lines and warning signs, such as an AI system’s ability to copy itself or intentionally deceive its creators. The collaboration would be coordinated by an international body.
The statement was signed by an impressive roster of AI luminaries, including Yoshua Bengio, Andrew Yao, and Geoffrey Hinton, all recipients of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several leading AI research institutions in China, some of which are state-funded and advise the government.
Breitbart News has previously reported that Geoffrey Hinton, known as the “Godfather of AI,” has warned that time is running out to properly regulate AI.
Hinton does not shy away from shedding light on the darker aspects and uncertainties surrounding AI. He candidly expresses, “We’re entering a period of great uncertainty where we’re dealing with things we’ve never done before. And normally the first time you deal with something totally novel, you get it wrong. And we can’t afford to get it wrong with these things.”
One of the most pressing concerns Hinton raises relates to the autonomy of AI systems, particularly their potential ability to write and modify their own computer code. This, he suggests, is an area where control may slip from human hands, and the consequences of such a scenario are not fully predictable. Furthermore, as AI systems continue to absorb information from various sources, they become increasingly adept at manipulating human behaviors and decisions. Hinton warns, “I think in five years time it may well be able to reason better than us.”
The meetings, organized by the nonprofit research group Far.AI, took place in Venice and served as a rare venue for engagement between Chinese and Western scientists amidst the tense technological competition between the United States and China.
Despite the challenges posed by the distrust between the two nations, the scientists emphasized the importance of their conversations and the need for collaboration. Dr. Bengio drew parallels to the talks between American and Soviet scientists during the Cold War that helped bring about coordination to avert nuclear catastrophe.
Read more at the New York Times here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.