Meta wants industry-wide labels for AI-made images

Meta on Tuesday said it is working with other tech firms on standards that will let it better detect and label artificial intelligence-generated images shared with its billions of users.

The Silicon Valley social media titan expects to have a system in place within months to identify and tag AI-created images posted on its Facebook, Instagram and Threads platforms.

“It’s not perfect, it’s not going to cover everything; the technology is not fully matured,” Meta head of global affairs Nick Clegg told AFP.

While Meta has implemented visible and invisible tags on images created using its own AI tools since December, it also wants to work with other companies “to maximize the transparency the users have,” Clegg added.

“That’s why we’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI,” the company said in a blog post.

This will be done with companies Meta already works with on AI standards, including OpenAI, Google, Microsoft, Midjourney and other firms involved in the fierce race to lead the nascent sector, Clegg said.


But while companies have started embedding such “signals” in images made with their AI tools, the industry has been slower to add identifying markers to AI-generated audio and video, according to Clegg.

Clegg admitted that this large-scale labeling, using invisible markers, “won’t totally eliminate” the risk of false images being produced, but argued that “it would certainly minimize” their proliferation “within the limits of what technology currently allows.”

In the meantime, Meta advised people to look at online content critically, checking whether accounts posting it are trustworthy and looking for details that look or sound unnatural.

Politicians and women have been prime targets for so-called “deepfake” images, with AI-created nudes of superstar Taylor Swift recently going viral on X, formerly Twitter.

The rise of generative AI has also raised fears that people could use ChatGPT and other platforms to sow political chaos via disinformation or AI clones.

OpenAI last month announced it would “prohibit any use of our platform by political organizations or individuals.”

Meta already asks that advertisers disclose when AI is used to create or alter imagery or audio in political ads.


– By: © Agence France-Presse
