One of the things that’s turned generative AI into a global phenomenon is how easy it is for just about anyone to use powerful tools to generate text, audio and video. And while there are many good uses for the tech, the bad use cases — including creating deepfakes designed to trick, scam and generally wreak havoc — have pushed otherwise slow-moving government organizations to act faster to try to minimize those harms.
Case in point: About a month after New Hampshire voters got an AI-generated call mimicking President Joe Biden and telling them not to vote in the presidential primary, the Federal Communications Commission last week made fake robocall voices illegal. CNET’s Gael Cooper has an explainer, noting that the FCC has been working on this issue since November and the agency is also hoping to use AI to create tech that could stop such illegal calls from even going out in the first place.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” FCC Chairwoman Jessica Rosenworcel said in a statement. “State attorneys general will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”
You can use the FCC’s online form to file a complaint about a robocall — AI-generated or not. The fake Biden robocalls, by the way, apparently originated with a Texas company, according to the New Hampshire attorney general, who’s started a criminal investigation.
The Biden administration, which released an executive order on AI in October calling for standards and guardrails to ensure AI tech is safe and secure, said government agencies have “completed all of the 90-day actions tasked by the EO and advanced other vital directives that the Order tasked over a longer timeframe.” Among those actions: creating an AI Safety Institute in the Department of Commerce to set…