
AI Companies And Researchers Sign Statement Warning Of ‘Extinction’ Risk

Post by: Hillary Musyoki

From movies and animations depicting how AIs take over the world to real-world warnings, dozens of AI industry leaders and academics on Tuesday called for reducing the risk of global annihilation due to artificial intelligence, arguing that the threat of an AI extinction event should be a top global priority.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read a statement published by the Center for AI Safety.

The statement was signed by leading industry figures including OpenAI CEO Sam Altman; the ‘godfather of AI’ Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Microsoft chief technology officer Kevin Scott; internet security and cryptography pioneer Bruce Schneier; and climate advocate Bill McKibben, among others.

The statement reflects wide-ranging concerns about the ultimate danger of unchecked artificial intelligence. AI experts have said society is still a long way from developing the kind of artificial general intelligence that is the stuff of science fiction; today’s AI systems largely reproduce patterns based on the training data they have been fed and do not think for themselves.

AI Growth

Still, heavy investment in the AI industry has led to calls for regulation at the outset of the AI age, before any serious mishaps occur.

The statement follows the rapid growth of OpenAI’s ChatGPT, which has helped intensify an arms race in the tech industry over artificial intelligence. Meanwhile, a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs.

Geoffrey Hinton, the ‘godfather of AI’ who helped shape the systems underpinning modern AI, previously told CNN he decided to leave his role at Google and “blow the whistle” on the technology after realizing “that these things are getting smarter than us.”

Dan Hendrycks, director of the Center for AI Safety, said in a tweet that the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not prevent society from addressing other types of AI risk, such as algorithmic bias or misinformation.

He compared Tuesday’s statement to atomic scientists “issuing warnings about the very technologies they’ve created.”

