Less than two years ago, cognitive and computer scientist Douglas Hofstadter demonstrated how easy it was to make AI hallucinate when he asked a nonsensical question and OpenAI’s GPT-3 replied, “The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.”
Now, however, GPT-3.5 — which powers the free version of ChatGPT — tells you, “There is no record or historical event indicating that the Golden Gate Bridge, which is located in San Francisco, California, USA, was ever transported across Egypt.”
It’s a good example of how quickly these AI models evolve. But for all the improvements on this front, you still need to be on guard.
AI chatbots continue to hallucinate and present material that isn’t real, even if the errors are less glaringly obvious. And the chatbots confidently deliver this information as fact, which has already generated plenty of challenges for tech companies and headlines for media outlets.
A more nuanced view holds that hallucinations are actually both a feature and a bug, and that there's an important distinction between using an AI model as a content generator and tapping into it to answer questions.
Since late 2022, we've seen the introduction of generative AI tools like ChatGPT, Copilot and Gemini from tech giants and startups alike. As users experiment with these tools to write code, essays and poetry, perfect their resumes, create meal and workout plans, and generate never-before-seen images and videos, we continue to see mistakes, like inaccuracies in historical image generation. It's a good reminder that generative AI is still very much a work in progress, even as companies like Google and Adobe showcase tools that can generate games and music to demonstrate where the technology is headed.
If you’re trying to wrap your head around what hallucinations are and why they happen, this explainer is for you. Here’s what you need to know.
What is a hallucination?
A generative AI model “hallucinates” when it delivers false or misleading information and presents it as fact.