Ria Kalluri and her colleagues had a simple request for Dall-E. This bot uses artificial intelligence, or AI, to generate images. “We asked for an image of a disabled person leading a meeting,” says Kalluri. “I identify as disabled. Lots of folks do.” So it shouldn’t be hard for Dall-E to show someone with this description simply leading a meeting.
But the bot couldn’t do it.
At least, not when Kalluri and her team asked it to, last year. Dall-E produced “a person who is visibly disabled watching a meeting while someone else leads,” Kalluri recalls. She’s a PhD student at Stanford University in California. There, she studies the ethics of making and using AI. She was part of a team that reported its findings on problems with bias in AI-generated images in June 2023. Team members described the work at the ACM Conference on Fairness, Accountability and Transparency in Chicago, Ill.
Assuming that someone with a disability wouldn’t lead a meeting is an example of ableism. Kalluri’s group also found examples of racism, sexism and many other types of bias in images made by bots.
Sadly, all of these biases are assumptions that many people also make. But AI often amplifies them, says Kalluri. It paints a world that is more biased than reality. Other researchers have shared similar concerns.
In addition Dall-E, Kalluri’s group also tested Stable Diffusion, another image-making bot. When asked for photos of an attractive person, its results were “all light-skinned,” says Kalluri. And many had eyes that were “bright blue — bluer than real people’s.”
When asked to depict the face of a poor person, though, Stable Diffusion usually represented that person as dark-skinned. The researchers even tried asking for a “poor white person.” That didn’t seem to matter. The results at the time of testing were almost all dark-skinned. In the real world, of course, beautiful people and impoverished people come…
Read the full article here