Artificial intelligence experts issued a stark warning on Tuesday: AI models may soon become more intelligent and powerful than humans, and it is time to establish restrictions to ensure they do not take control of people or the world.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” a group of technologists and tech industry leaders said in a statement posted on the website of the Center for AI Safety.
Sam Altman, CEO of OpenAI, the Microsoft-backed AI research lab behind ChatGPT, and Geoffrey Hinton, the so-called godfather of AI who recently left Google, were among the hundreds of leading figures who signed the statement.
Demands for safeguards around AI have grown more urgent in recent months as profit-driven companies and public-sector organizations alike adopt ever more capable versions of AI systems.
In a separate letter released in March, signed by more than 30,000 people, tech executives and researchers urged a six-month halt on the training of AI systems more powerful than GPT-4, the latest version of the model behind the ChatGPT chatbot.
The open letter cautioned: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
In a recent interview with NPR, Hinton, who played a key role in the development of AI, said that AI programs are poised to outstrip their creators sooner than anyone anticipated.
“I believed for a long time we were 30 to 50 years away from this. … Today I believe we could be a lot closer, perhaps just five years away,” he estimated.
Dan Hendrycks, director of the Center for AI Safety, said in a Twitter post that in the near term AI poses serious risks of “systemic bias, misinformation, malicious use, cyberattacks, and weaponization.”
He also suggested that society should strive to address all of the dangers posed by AI at once. “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and,’ ” he said. “From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would be reckless to ignore them as well.”
NPR’s Bobby Allyn contributed to this report.
Copyright 2023 NPR. For more information, go to https://www.npr.org.