That one aunt of yours (you know the one) may finally think twice before forwarding Facebook posts of “lost” photos of hipster Einstein and a fashion-forward Pope Francis. On Tuesday, Meta announced that “in the coming months,” it will begin flagging AI-generated images made with tools from major companies such as Microsoft, OpenAI, Midjourney, and Google that are flooding Facebook, Instagram, and Threads.
But to tackle the rampant generative AI abuse that experts are calling “the world’s biggest short-term threat,” Meta needs cooperation from every major AI company, self-reporting from its roughly 5.4 billion users, and technologies that have not yet been released.
Nick Clegg, Meta’s President of Global Affairs, explained in his February 6 post that the policy and tech rollouts are expected to debut ahead of pivotal election seasons around the world.
“During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve,” Clegg says.
Meta’s nebulous roadmap centers on working with “other companies in [its] industry” to develop and implement common technical standards for identifying AI imagery. Examples might include digital signature algorithms and cryptographic information “manifests,” as suggested by the Coalition for Content Provenance and Authenticity (C2PA) and the International Press Telecommunications Council (IPTC). Once AI companies begin using these watermarks, Meta will begin labeling content accordingly, using “classifiers” to automatically detect AI-generated content.
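To make the idea of a cryptographically signed provenance “manifest” concrete, here is a minimal Python sketch. It is not the actual C2PA format, and the tool name and key are hypothetical; it only illustrates the principle that a signed metadata record lets a platform detect whether provenance information has been tampered with. (The `digital_source_type` value borrows IPTC’s real `trainedAlgorithmicMedia` vocabulary term for AI-generated media.)

```python
import hashlib
import hmac
import json

# Stand-in for a real signing key held by the image generator.
SECRET_KEY = b"example-shared-key"

def sign_manifest(manifest: dict) -> str:
    """Serialize the manifest deterministically and sign it with HMAC-SHA256."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_manifest(manifest: dict, signature: str) -> bool:
    """Re-compute the signature and compare in constant time."""
    return hmac.compare_digest(sign_manifest(manifest), signature)

manifest = {
    "generator": "ExampleImageAI",  # hypothetical tool name
    "digital_source_type": "trainedAlgorithmicMedia",  # IPTC-style tag
}
sig = sign_manifest(manifest)

print(verify_manifest(manifest, sig))  # untouched manifest verifies: True
manifest["generator"] = "edited-by-hand"
print(verify_manifest(manifest, sig))  # any edit breaks the signature: False
```

In a real C2PA implementation the signature would use public-key certificates rather than a shared secret, so anyone can verify the manifest without being able to forge one.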
“If AI companies begin using watermarks” might be more accurate. While the company’s own Meta AI feature already labels its content with an “Imagined with AI” watermark, such easy identifiers aren’t…