Thomas Bangalter, one half of the Grammy-winning music duo Daft Punk, has revealed that he’s “terrified” of artificial intelligence – despite the pair having performed as robots for nearly 30 years.
The 48-year-old musician explained that the growing popularity of artificial intelligence was not what Daft Punk really stood for, despite the performers’ robotic outward appearance.
“We tried to use these machines to express something incredibly moving that a machine cannot feel but a human can,” he told the BBC on Tuesday, adding that it was not their goal to glorify the rise of robots.
“We’ve always been on the side of humanity, not technology,” he said of Daft Punk, which consisted of himself and Guy-Manuel de Homem-Christo, 49, before they broke up in 2021.
“I almost consider the character of the robots as an art installation by Marina Abramovic that lasted for 20 years,” he said, referring to the eclectic Serbian artist.
The “Harder, Better, Faster, Stronger” singer – who said the duo “blurred the line between reality and fiction” – added that his concerns about AI begin when it goes beyond making music and extends to “human aging”.
“As much as I love the character, the last thing I would want to be, in the world we live in in 2023, is a robot,” he declared.
Computer-generated ‘deepfake’ images have shocked the internet twice in recent weeks: once with the feisty – and faux – Pope Francis in a white puffer jacket, and again with fake photos of former President Donald Trump causing a scene as he was “arrested” by the NYPD.
Last week, it was reported that an AI chatbot allegedly convinced a Belgian man to commit suicide.
The news comes as many tech industry leaders have called for an “immediate pause” in the training of advanced AI systems for at least six months.
But AI expert Eliezer Yudkowsky argued that the proposed moratorium didn’t go far enough.
“It took more than 60 years between when the concept of AI was first proposed and explored and us reaching today’s capabilities,” he said. “Solving the safety of superhuman intelligence – not perfect safety, safety in the sense of ‘not literally killing everyone’ – could very reasonably take at least half that time.”