If you ask ChatGPT whether it thinks like a human, this chatbot will tell you that it doesn’t. “I can process and understand language to a certain extent,” ChatGPT writes. But “my understanding is based on patterns in data, [not] humanlike comprehension.”
Still, talking to this artificial intelligence, or AI, system can sometimes feel like talking to a human. A pretty smart, talented person at that. ChatGPT can answer questions about math or history on demand — and in a lot of different languages. It can crank out stories and computer code. And other similarly “generative” AI models can produce artwork and videos from scratch.
“These things seem really smart,” said Melanie Mitchell. She’s a computer scientist at the Santa Fe Institute in New Mexico. She spoke at the annual meeting of the American Association for the Advancement of Science. It was held in Denver, Colo., in February.
AI’s increasing “smarts” have a lot of people worried. They fear generative AI could take people’s jobs — or take over the world. But Mitchell and other experts think those fears are overblown. At least, for now.
The problem, those experts argue, is exactly what ChatGPT itself admits. Today’s most impressive AI still doesn’t truly understand what it is saying or doing the way a human would. And that puts some hard limits on its abilities.
Concerns about AI are not new
People have worried for decades that machines are getting too smart. This fear dates back to at least 1997. That’s when the computer Deep Blue defeated world chess champion Garry Kasparov.
At that time, though, it was still easy to show that AI failed miserably at many things we do well. Sure, a computer could play a mean game of chess. But could it diagnose disease? Transcribe speech? Not very well. In many key areas, humans remained supreme.
About a decade ago, that began to change.
Computer brains — known as neural networks — got a huge boost from a new…