Artificial intelligence expert Eliezer Yudkowsky believes the US government should go further than the immediate six-month “pause” on AI research that several tech innovators, including Elon Musk, have suggested.
In a recent Time op-ed, Yudkowsky, a decision theorist at the Machine Intelligence Research Institute who has studied AI for more than 20 years, argued that the letter signed by Twitter’s CEO underestimates the “seriousness of the situation,” as AI could become smarter than humans – and turn on them.
The open letter, issued by the Future of Life Institute, was signed by more than 1,600 people, including Musk and Apple co-founder Steve Wozniak.
It asks the federal government to halt the development of any AI system more powerful than the current GPT-4.
The letter argues that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable,” a position Yudkowsky disputes.
“The key issue is not ‘human-competitive’ intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence,” he wrote.
“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” Yudkowsky said. “Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”
Yudkowsky fears that artificial intelligence could turn against its creators and would not care about human life.
“Visualize an entire alien civilization, thinking at millions of times human speed, initially confined to computers – in a world of creatures that are, from its perspective, very stupid and very slow,” he wrote.
He added that six months is not enough time to devise a plan for dealing with the rapidly advancing technology.
“It took more than 60 years between when the notion of artificial intelligence was first proposed and studied, and for us to reach today’s capabilities,” he continued. “Solving safety of superhuman intelligence – not perfect safety, safety in the sense of ‘not killing literally everyone’ – could very reasonably take at least half that long.”
Yudkowsky’s proposal, then, is international cooperation to halt the development of powerful AI systems.
He argued that this would be more important than “preventing a full nuclear exchange.”
“Shut it all down,” he wrote. “Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries.”
His warning comes as AI is already making it harder for humans to decipher what’s real.
Just last week, computer-generated photos of former president Donald Trump struggling against and being arrested by NYPD officers went viral as he awaits a possible indictment.
Another set of fake photos showing Pope Francis in an extremely drippy white puffer jacket also fooled the internet into thinking the religious leader had upped his fashion sense.