“It’s important you save your vote for the November election,” a voice that sounded an awful lot like President Joe Biden’s told Democrats in New Hampshire during a January robocall that discouraged them from voting in that month’s primary.
But it wasn’t Biden. It was an AI-generated voice that one of the men behind it now says was intended to draw attention to how AI can be harnessed to influence voter behavior.
Technology has long been used to sway voters. In the last two presidential elections, the main vector was social media, where manipulated content — like videos of former House Speaker Nancy Pelosi that went viral after being edited to make her appear incompetent — spread like wildfire. By 2023, nearly two-thirds of US internet users said misinformation and fake news were widespread on these platforms.
The advent of generative AI tools, which can easily create realistic text, images and videos (and audio like the fake-Biden call), only exacerbates the potential for misinformation in 2024. It’s a new reality that government, tech companies and voters will be grappling with in the coming months.
It’s something that software giant Adobe, maker of Photoshop, is mindful of. Last week, Adobe released the results of a study, Future of Trust, in which 6,000 consumers in the US, the UK, France and Germany were asked about online misinformation and generative AI. The study found that a majority are concerned, particularly within the context of elections.
Adobe itself has a gen AI tool, Firefly, that’s part of a growing landscape that includes chatbot and image-generation options from the likes of Anthropic, Google, Microsoft and OpenAI. As these tools become more sophisticated — like, say, offering the ability to create lifelike images and videos — they increase the potential for creativity, as well as for misuse. The tech companies behind them have guardrails to limit the creation of harmful content, but users have found loopholes. The cat and mouse…