Adobe Proposes New Labeling System to Limit AI Training Use of Images

For years, websites have used the robots.txt file to control which bots can crawl their content. Now, Adobe is proposing a similar mechanism for images, designed to help artists and creators keep their work from being used to train AI models. The company has added a feature to its content credentials system that signals a creator's preferences for how their media may be used.

However, the bigger hurdle for Adobe may be getting AI developers to actually comply with this new signal—especially since some AI crawlers already disregard the traditional robots.txt rules.
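
Robots.txt compliance itself is purely voluntary: the file expresses a policy, and it is up to each crawler to check it before fetching. Below is a minimal sketch of that check using Python's standard urllib.robotparser, with an illustrative policy that blocks OpenAI's GPTBot crawler; the rules and URL here are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# An illustrative robots.txt policy: block one AI training crawler
# (OpenAI's published "GPTBot" user agent) while allowing everyone else.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler performs this check before fetching; nothing
# forces a non-compliant one to do so.
print(parser.can_fetch("GPTBot", "https://example.com/photo.jpg"))   # False
print(parser.can_fetch("SomeBot", "https://example.com/photo.jpg"))  # True
```

Adobe's image-level signal works the same way in spirit: it records a preference that well-behaved systems can honor, but it cannot technically prevent scraping.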

Content credentials are pieces of metadata embedded in media files that establish authenticity and ownership. They are part of the C2PA (Coalition for Content Provenance and Authenticity) standard, which promotes transparent content sourcing.

Adobe is launching a web-based tool that enables creators to add content credentials to their images, regardless of whether they were made using Adobe products. This tool also gives users the option to indicate that their images should not be used to train AI models.

Called the Adobe Content Authenticity App, this new tool allows users to attach credentials—like names and social media profiles—to as many as 50 JPG or PNG files at once. Adobe has also partnered with LinkedIn to integrate its verification system, allowing creators to confirm their identity via their LinkedIn accounts. Although users can link Instagram or X (formerly Twitter) accounts too, those platforms don’t support official verification within this app.

A checkbox in the app allows users to mark their content as off-limits for AI model training. While this preference is stored in the image’s metadata, Adobe hasn’t yet secured any formal agreements with AI developers to honor this signal. The company says it's in ongoing discussions with major AI firms to encourage them to respect this emerging standard.
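
Under the C2PA standard, this kind of preference is recorded as an assertion inside the image's content credential manifest; the spec defines a "training and data mining" assertion for exactly this purpose. The sketch below shows roughly what such an entry looks like. The labels follow the published C2PA assertion vocabulary, but the exact schema Adobe's app writes is an assumption here.

```python
import json

# Rough sketch of a C2PA-style "training and data mining" assertion,
# the part of a content credential manifest that records an opt-out.
# Field names follow the public C2PA spec; whether Adobe's app emits
# exactly this structure is an assumption.
assertion = {
    "label": "c2pa.training-mining",
    "data": {
        "entries": {
            "c2pa.ai_generative_training": {"use": "notAllowed"},
            "c2pa.ai_training": {"use": "notAllowed"},
            "c2pa.data_mining": {"use": "notAllowed"},
        }
    },
}

print(json.dumps(assertion, indent=2))
```

In a real manifest, this assertion would be bundled with the creator's identity claims and cryptographically signed, so a consumer could verify both who set the preference and that it hasn't been tampered with.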

Adobe's effort echoes a growing push to give creators more control in the face of expanding generative AI tools. But its effectiveness will largely depend on voluntary compliance from AI companies.

The topic of AI labeling also sparked debate last year when Meta’s automated image labeling system added “Made with AI” tags to photos, including some that had only been lightly edited, prompting backlash from photographers. Meta later renamed the label “AI info.” Although both Adobe and Meta sit on the C2PA steering committee, their implementation strategies differ significantly.

Andy Parsons, Adobe’s Senior Director of the Content Authenticity Initiative, emphasized that the new app was developed with feedback from creators. Given the fragmented global legal landscape surrounding AI and copyright, he said Adobe aims to empower creators to clearly express their intentions regarding AI usage.

“Creators want a straightforward way to indicate they don’t want their work used in generative AI training. We’ve heard this from both independent artists and creative agencies,” Parsons said.

Adobe is also releasing a Chrome extension to help users identify images with embedded content credentials. The extension looks for a combination of techniques: cryptographically signed metadata in the file, plus digital fingerprinting and open-source watermarking applied directly to an image’s pixels. Because the fingerprint and watermark travel with the pixels, origin information can be recovered even if the image is altered or its metadata is stripped. Images with these credentials are marked with a small “CR” symbol when viewed in supported browsers.
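
On the metadata side, C2PA manifests in JPEG files are carried in APP11 marker segments as JUMBF boxes. Here is a rough heuristic for spotting that container, offered as a sketch only; a real verifier, such as the open-source c2patool, would parse the manifest and validate its signatures.

```python
import struct

def has_c2pa_segment(path: str) -> bool:
    """Heuristically check a JPEG for an APP11 segment containing a
    JUMBF box, the container C2PA manifests are embedded in. This
    detects the container only; it does not validate any signatures."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":             # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                 # lost marker sync; give up
            return False
        marker = data[i + 1]
        if marker == 0xDA:                  # SOS: compressed image data begins
            return False
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"jumb" in payload:  # APP11 + JUMBF box type
            return True
        i += 2 + length                     # skip to the next marker segment
    return False

print(has_c2pa_segment("photo.jpg"))        # hypothetical local file
```

The fingerprinting and watermarking layers are what let the extension re-associate an image with its credentials even when this embedded metadata has been removed.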

While the current tool only supports images, Adobe plans to expand it to cover video and audio content in the future.