Highlights:

  • Hackers might alter the C2PA manifest of a tampered image to conceal the fact that the image was altered.
  • Google intends to incorporate C2PA into its “About this image” function as part of the recently unveiled program.

Google LLC is set to roll out C2PA technology to validate the authenticity of media in its advertising systems and search engine.

The company recently provided an overview of the initiative.

Intel Corp., Adobe Inc., and several other companies founded the Coalition for Content Provenance and Authenticity (C2PA) in 2021. The consortium welcomed Google earlier this year. The C2PA technical standard is designed to make it possible to identify whether an image was produced by artificial intelligence or altered after it was created.

The standard attaches a metadata file, called a manifest, to every image. The manifest records the image’s creation date and time and whether AI was used to make it. Additional information, such as changes made to the file after its release, can also be included.
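As a rough illustration only, such a manifest can be thought of as a small structured record. The field names below are invented for this sketch and are not the real C2PA schema:

    # A simplified, hypothetical sketch of the kind of metadata a
    # C2PA-style manifest carries; field names are illustrative only.
    manifest = {
        "created": "2024-09-17T10:32:00Z",   # creation date and time
        "generator": "Example Camera App",   # tool that produced the image
        "ai_generated": False,               # whether AI was used to make it
        "edits": [                           # changes made after release
            {"action": "crop", "timestamp": "2024-09-18T08:05:00Z"},
        ],
    }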

Hackers could conceivably alter the C2PA manifest of a tampered image to conceal the fact that the image was changed. To eliminate that risk, the system uses a cryptographic method known as hashing to shield manifests from malicious modifications.

Hashing produces a unique digital fingerprint for a file. C2PA uses this technique to create a distinct fingerprint for each photo’s manifest. Tampering attempts are easy to identify because altering a photo’s manifest also changes the manifest’s fingerprint.
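The idea can be sketched in a few lines of Python. This is a simplified illustration that assumes SHA-256 as the hash function and JSON as the manifest format; the actual C2PA standard additionally signs manifests cryptographically:

    import hashlib
    import json

    def manifest_fingerprint(manifest: dict) -> str:
        """Return a hex fingerprint of the manifest's canonical JSON bytes."""
        canonical = json.dumps(manifest, sort_keys=True).encode("utf-8")
        return hashlib.sha256(canonical).hexdigest()

    original = {"created": "2024-09-17T10:32:00Z", "ai_generated": False}
    recorded_fingerprint = manifest_fingerprint(original)

    # An attacker edits the manifest to hide that the image was AI-generated...
    tampered = dict(original, ai_generated=True)

    # ...but its fingerprint no longer matches the one recorded earlier.
    print(manifest_fingerprint(tampered) == recorded_fingerprint)  # False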

C2PA also includes other cybersecurity elements. Its hashing feature is the most notable because it creates a distinct fingerprint of both the image and the manifest associated with it. C2PA then embeds the image’s fingerprint into the manifest. This setup links each image to its corresponding manifest, making it difficult for hackers to swap out either file for a fake or outdated one.
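The following sketch illustrates that binding idea in Python. The helper names and the single image_hash field are hypothetical simplifications, not the standard’s real data model:

    import hashlib

    def sha256_hex(data: bytes) -> str:
        """Hex-encoded SHA-256 digest of raw bytes."""
        return hashlib.sha256(data).hexdigest()

    def build_manifest(image_bytes: bytes, metadata: dict) -> dict:
        """Embed the image's fingerprint in the manifest so the two files are bound."""
        return {**metadata, "image_hash": sha256_hex(image_bytes)}

    def verify_binding(image_bytes: bytes, manifest: dict) -> bool:
        """Check that this manifest actually describes this image."""
        return manifest.get("image_hash") == sha256_hex(image_bytes)

    image = b"...raw image bytes..."
    manifest = build_manifest(image, {"ai_generated": False})

    print(verify_binding(image, manifest))                  # True: image and manifest match
    print(verify_binding(b"a swapped-in image", manifest))  # False: substitution is detected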

Google intends to incorporate C2PA into its “About this image” feature as part of the newly unveiled initiative. The feature provides contextual information about photos and is available in Google Search, Lens, and Circle to Search. When a user selects an image that supports C2PA, the feature will indicate whether the file was created or altered using AI.

Google is also bringing the technical standard to its advertising platforms. The objective is to help enforce advertising policies, according to Laurie Richardson, the company’s Vice President of Trust and Safety. Google’s search and ad systems will begin receiving C2PA support in the coming months.

The company may later bring C2PA to YouTube as well. “We’re also exploring ways to relay C2PA information to viewers on YouTube when content is captured with a camera, and we’ll have more updates on that later in the year,” Richardson added.

Google is working on several projects aimed at curbing the spread of misleading AI-generated content. Last year, the company unveiled SynthID, a tool that embeds a watermark in images created with some of its AI models. The watermark is invisible and remains in the file even if it is compressed or altered.