Can AI Replace Human Research Participants? These Scientists See Risks
Several recent proposals for using AI to generate research data could save time and effort but at a cost
In science, studying human experiences typically requires time, money and—of course—human participants. But as large language models such as OpenAI’s GPT-4 have grown more sophisticated, some in the research community have been steadily warming to the idea that artificial intelligence could replace human participants in some scientific studies.
That’s the finding of a new preprint paper accepted for the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI), the biggest such gathering in the field of human-computer interaction, to be held in May. The paper draws from more than a dozen published studies that test or propose using large language models (LLMs) to stand in for human research subjects or to analyze research outcomes in place of humans. But many experts worry this practice could produce scientifically shoddy results.
This new review, led by William Agnew, who studies AI ethics and computer vision at Carnegie Mellon University, cites 13 technical reports or research articles and three commercial products; all of them replace, or propose replacing, human participants with LLMs in studies on topics including human behavior and psychology, marketing research and AI development. In practice, this would mean study authors posing questions meant for humans to LLMs instead, asking the models for their “thoughts” on, or responses to, various prompts.
One preprint, which won a best paper prize at CHI last year, tested whether…