A top Google executive told a German newspaper that the current form of generative AI, such as ChatGPT, can be unreliable and prone to producing so-called hallucinations.
“The kind of AI we’re talking about right now can sometimes lead to something we call hallucinations,” Prabhakar Raghavan, Google’s senior vice president and head of Google Search, told Welt am Sonntag.
“This manifests itself in such a way that the machine delivers a convincing but completely made-up answer,” he said.
Indeed, many ChatGPT users, including Apple co-founder Steve Wozniak, have complained that the AI is frequently wrong.
One possible cause of AI hallucinations is error introduced when encoding and decoding between text and the model’s internal representations.
Ted Chiang on ChatGPT’s “hallucinations”: “if a compression algorithm is designed to reconstruct text after 99% of the original has been discarded, we should expect that significant portions of what it generates will be entirely fabricated…” https://t.co/7QP6zBgrd3
— Matt Bell (@mdbell79) February 9, 2023
It wasn’t clear if Raghavan was referring to Google’s own forays into generative AI.
Last week, the company announced it was testing a chatbot called Bard. The service is built on Google’s LaMDA large language model, a counterpart to the model OpenAI uses for ChatGPT.
A demonstration of the technology in Paris was widely regarded as a PR disaster, and investors were largely unimpressed.
Google’s developers have been under intense pressure since the launch of OpenAI’s ChatGPT, which took the world by storm and now threatens Google’s core search business.
“Of course we feel the urgency, but we also feel a great responsibility,” Raghavan told the newspaper. “We certainly don’t want to mislead the public.”