One of the advantages of generative AI technology is its natural language interface. That means you don’t need to be a programmer, engineer or scientist to “talk” to a gen AI chatbot and prompt it to create text, illustrations, photographs and other images, video, audio, and even programming code in seconds.
But the “magic” here has a dark side, including the biases, hallucinations and other problems with how the tools themselves work. There’s also a growing problem with people relying on these easy-to-use and powerful gen AI engines to create fake photos or deepfake videos with an eye toward misleading, confusing or just flat-out lying to an intended audience.
This week, we have examples of both.
First up: Just a week after Google paused its Gemini text-to-image generator for delivering offensive, embarrassing and biased images (Google CEO Sundar Pichai called the results “completely unacceptable” and sent the tool back for testing), Microsoft is reckoning with issues in its own Copilot Designer AI image generator. That reckoning comes after a company engineer wrote to the Federal Trade Commission expressing concerns about disturbing and violent images created by the tool.
Microsoft engineer Shane Jones said he was “actively testing the product for vulnerabilities, a practice known as red-teaming,” CNBC reported. The product, originally called Bing Image Creator, is powered by OpenAI’s technology. (OpenAI is the maker of ChatGPT and the text-to-image generator Dall-E.) Jones said the AI service produced images of “demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use.”
All of those images, CNBC added after re-creating Jones’ tests, run “far afoul of Microsoft’s oft-cited responsible AI principles.” Jones said Microsoft ignored his findings despite repeated efforts to get the company to address the…