Meta Implements AI-Generated Image Labeling Across Platforms

Meta, the parent company of Facebook, Threads, and Instagram, has announced that it will label artificial intelligence (AI)-generated images across all of its platforms. The announcement, made on February 6, follows a call from Meta's Oversight Board to revise the company's policy on AI-generated content, prompted by concerns over a digitally altered video of US President Joe Biden circulating online.

Nick Clegg, Meta’s President of Global Affairs, emphasized the importance of labeling AI-generated content to safeguard users and combat disinformation. He stated, “We’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI.” Meta disclosed that it can currently label images generated by AI models from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, in addition to its own AI-generated content labeled as “Imagined with AI”.

To identify AI-generated images reliably, detection tools depend on a common identifier embedded in such images. Some AI image generators embed invisible watermarks and metadata that mark their output as synthetic, but Meta acknowledges that not all generators employ these techniques, and some methods can strip invisible watermarks out. To address these challenges, Meta is collaborating with industry partners to develop a unified, non-removable watermarking technology. Last year, Meta's AI research wing, Fundamental AI Research (FAIR), introduced a watermarking mechanism called Stable Signature, while Google DeepMind released SynthID.
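To illustrate the general idea behind an invisible watermark (this is not how Stable Signature or SynthID work, both of which are designed to survive editing and compression), the minimal Python sketch below hides a short bit string in the least significant bits of an image's pixel values and reads it back. Every name in the sketch is illustrative rather than drawn from any real detection tool.

```python
# Toy illustration of an invisible watermark: embed a short bit string in the
# least significant bits (LSBs) of an image's pixel values, then read it back.
# NOTE: This is a didactic sketch, not Stable Signature or SynthID.
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: str) -> np.ndarray:
    """Write each bit into the LSB of successive pixel values."""
    flat = pixels.flatten().copy()
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)  # clear the LSB, then set it to the bit
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> str:
    """Read the hidden bit string back from the LSBs."""
    flat = pixels.flatten()
    return "".join(str(flat[i] & 1) for i in range(length))

# Example: tag a synthetic 64x64 grayscale image and recover the tag.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
tag = "10110010"
marked = embed_watermark(image, tag)
assert extract_watermark(marked, len(tag)) == tag
```

A simple LSB scheme like this is invisible to the eye but trivially destroyed by re-compressing, resizing, or cropping the image, which is precisely why Meta and its partners are pursuing more robust, non-removable watermarking approaches.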

Although Meta’s efforts primarily focus on images, AI-generated audio and video content also pose challenges. While detection technology for audio and video is still in development, Meta has introduced a feature for users to disclose when they share AI-generated video or audio content. Failure to disclose such content may result in penalties, especially for high-risk content that could deceive the public. In such cases, Meta may apply prominent labels to provide users with necessary context.
