Tech Firms Combat Misinformation with AI-Generated Content Verification
With fraudsters increasingly using generative AI to run scams and smear reputations, technology firms are building ways for users to verify content authenticity, starting with still images. As part of its 2024 misinformation strategy, OpenAI is adding provenance metadata to images generated with ChatGPT on the web and with the DALL-E 3 API; the mobile apps will receive the same upgrade by February 12.
This metadata follows the C2PA (Coalition for Content Provenance and Authenticity) open standard. By uploading an image to the Content Credentials Verify tool, anyone can trace its provenance lineage. An image generated in ChatGPT, for instance, shows an initial metadata manifest marking its origin in the DALL-E 3 API, followed by a second manifest noting that it surfaced in ChatGPT.
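What does that lineage look like at the byte level? In JPEG files, C2PA manifests ride in APP11 segments as JUMBF boxes (other formats, including the PNGs DALL-E typically emits, embed them differently). The Python sketch below is only a crude presence check under those assumptions; it does not verify any signatures, and the helper name is our own invention:

```python
import struct
import sys

def has_content_credentials(path: str) -> bool:
    """Crude presence check for a C2PA manifest in a JPEG file.

    C2PA manifests travel as JUMBF boxes inside APP11 (0xFFEB)
    segments. This only detects that such a segment exists; it does
    not validate the cryptographic manifest, so use the Content
    Credentials Verify tool or an official c2pa SDK for real checks.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # no SOI marker: not a JPEG
        return False
    i = 2
    while True:
        # skip 0xFF fill bytes that may pad the next marker
        while i + 1 < len(data) and data[i] == 0xFF and data[i + 1] == 0xFF:
            i += 1
        if i + 4 > len(data) or data[i] != 0xFF:
            break                          # lost sync or ran out of data
        marker = data[i + 1]
        if marker == 0xDA:                 # start of scan: headers are over
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:
            return True                    # APP11 segment carrying C2PA data
        i += 2 + length                    # jump to the next marker
    return False

if __name__ == "__main__":
    print(has_content_credentials(sys.argv[1]))
```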
Despite the sophisticated cryptography underpinning the C2PA standard, this verification method only works while the metadata remains intact. The tool is no help when an AI-generated image arrives stripped of its metadata, as happens with screenshots or images re-uploaded to social media. Notably, the sample images on the official DALL-E 3 page came back blank when run through the tool, confirming this limitation. OpenAI acknowledges in its FAQ that while the approach is no panacea in the misinformation battle, encouraging users to actively look for such signals is pivotal.
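A quick way to see how fragile the approach is: re-save an image with almost any common tool and the manifest disappears. A minimal sketch using Pillow, with placeholder file names:

```python
from PIL import Image   # pip install pillow

# Re-encoding an image is effectively what a screenshot or a social
# platform's upload pipeline does, and Pillow, like most re-encoders,
# does not copy the APP11/JUMBF segments that hold the C2PA manifest.
img = Image.open("dalle_original.png")          # hypothetical source file
img.convert("RGB").save("reuploaded_copy.jpg", quality=85)
# reuploaded_copy.jpg now carries pixels only: run it through the
# Verify tool and it comes back blank, exactly like a screenshot would.
```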
While OpenAI's initiative covers only still images, Google DeepMind's SynthID digitally watermarks both AI-generated images and audio. Meanwhile, Meta has been experimenting with invisible watermarking in its AI image generator, an approach that may prove more resistant to tampering.
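Neither company has published its algorithm, so the following is strictly a conceptual illustration, not SynthID's or Meta's method: a toy least-significant-bit watermark that hides a bit pattern invisibly in pixel values. A mark this naive is destroyed by any re-encode, which is exactly the weakness the production systems aim to overcome:

```python
import numpy as np
from PIL import Image

def embed(img: Image.Image, bits: list[int]) -> Image.Image:
    """Overwrite the lowest bit of the first len(bits) channel values."""
    px = np.array(img.convert("RGB"))
    flat = px.reshape(-1)                  # view into px, edits propagate
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b     # clear LSB, then set payload bit
    return Image.fromarray(px)

def extract(img: Image.Image, n: int) -> list[int]:
    """Read the hidden bits back out of the lowest bits."""
    flat = np.array(img.convert("RGB")).reshape(-1)
    return [int(v) & 1 for v in flat[:n]]

# Self-contained demo on a synthetic gray image; a change of +/-1 per
# channel value is imperceptible to the eye, hence "invisible".
base = Image.fromarray(np.full((64, 64, 3), 128, dtype=np.uint8))
marked = embed(base, [1, 0, 1, 1, 0, 0, 1, 0])
print(extract(marked, 8))                  # -> [1, 0, 1, 1, 0, 0, 1, 0]
```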