Should I tell my friend their boyfriend is cheating on them? Should I intervene when I hear an off-color joke?
When faced with moral questions (situations in which the right course of action turns on our sense of right and wrong), we often seek advice. And now people can turn to ChatGPT and other large language models (LLMs) for guidance, too.
Many people seem satisfied by the answers these models offer. In one preprint study, participants rated the responses that LLMs produced when presented with moral quandaries as more trustworthy, reliable and even nuanced than those of Kwame Anthony Appiah, who writes the Ethicist column for the New York Times.
That study joins several others that together suggest LLMs can offer sound moral advice. Another study, published last April, found that people rated an AI's reasoning as "superior" to a human's in virtuousness, intelligence and trustworthiness. Some researchers have even suggested that LLMs can be trained to offer ethical financial guidance despite being "inherently sociopathic."
These findings imply that virtuosic ethical advice is at our fingertips, so why not ask an LLM? But this takeaway rests on several questionable assumptions. First, research shows that people do not always recognize good advice when they see it. In addition, many people assume that the content of advice (the literal words, written or spoken) matters most when judging its value, but social connection may be particularly important for tackling dilemmas, especially moral ones.
In a 2023 paper, researchers analyzed many studies to examine, among other things, what made advice most persuasive. The more expert people perceived an advice giver to be, it turned out, the more likely they were to…