Google has announced a new tool that can embed a hidden digital watermark into images generated by artificial intelligence. The tool, called SynthID, is designed to help identify the source and authenticity of AI-generated images, as well as detect any modifications or tampering. SynthID is available now to Google Cloud customers who use Imagen, a service that creates realistic images from text descriptions.
SynthID works by subtly altering the pixels of an image in a way that is imperceptible to the human eye but can be decoded by a dedicated detection algorithm. The embedded mark can carry information such as the date, time, and location of the image's creation, as well as the identity of the creator and the model used. SynthID can also help determine whether an image has been modified, by comparing the mark recovered from the image against the one originally embedded.
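Google has not published SynthID's actual algorithm, which is a learned technique designed to survive common edits. As a purely illustrative sketch of the general idea of hiding a decodable payload in pixel values, the toy Python example below embeds and recovers a bit string using least-significant-bit changes; the function names and payload format are assumptions for demonstration, not SynthID's design.

```python
import numpy as np

# Toy illustration of pixel-level watermarking (NOT Google's SynthID
# algorithm): write a bit string into the least significant bit of the
# first len(bits) pixels, then read it back out.

def embed_watermark(image: np.ndarray, bits: str) -> np.ndarray:
    """Replace the lowest bit of each of the first len(bits) pixels."""
    flat = image.flatten()           # flatten() returns a copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)
    return flat.reshape(image.shape)

def decode_watermark(image: np.ndarray, n_bits: int) -> str:
    """Recover the embedded bits from the pixel LSBs."""
    flat = image.flatten()
    return "".join(str(flat[i] & 1) for i in range(n_bits))

# Example: a 4x4 grayscale "image" and an 8-bit payload.
img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
payload = "10110010"
marked = embed_watermark(img, payload)
assert decode_watermark(marked, len(payload)) == payload
# Each pixel changes by at most 1 intensity level, invisible to the eye.
```

A scheme this simple is destroyed by re-encoding, resizing, or filtering the image, which is exactly why a production watermark like SynthID is engineered to be far more robust than raw bit manipulation.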
Google says SynthID is useful for a range of applications, such as verifying the credibility of news and social media content, protecting the intellectual property rights of AI-generated images, and preventing the misuse of synthetic media for malicious purposes. Google also says it hopes to extend SynthID beyond Imagen to other AI models and platforms; its cloud service recently added third-party models such as Meta’s Llama 2 and Anthropic’s Claude 2.
Google is not the only company working on digital watermarking for AI-generated content. OpenAI, the research lab behind GPT-3 and DALL-E, has also proposed a similar technique called Neural Watermarking. The technique uses a secret key to embed and extract a watermark from any type of neural network output, such as text, audio, or video. OpenAI says Neural Watermarking can help track the provenance and ownership of AI-generated content, as well as deter unauthorized copying and distribution.
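The exact mechanics of such keyed schemes have not been published, so the sketch below shows only one generic way a secret key can mark and later verify generated text: a keyed hash splits candidate tokens into a preferred "green" set during generation, and a detector measures how often tokens land in that set. The key, helper names, vocabulary, and thresholds are hypothetical, and this is not OpenAI's actual method.

```python
import hashlib
import random

SECRET_KEY = b"demo-watermark-key"   # hypothetical key, for illustration only

def is_green(prev: str, token: str) -> bool:
    """Keyed hash of (previous token, candidate) splits the vocabulary in half."""
    digest = hashlib.sha256(SECRET_KEY + prev.encode() + token.encode()).digest()
    return digest[0] % 2 == 0

def pick_next(prev: str, candidates: list[str]) -> str:
    """During generation, prefer candidates from the keyed 'green' half."""
    green = [c for c in candidates if is_green(prev, c)]
    return random.choice(green or candidates)

def green_fraction(tokens: list[str]) -> float:
    """Detection: fraction of adjacent token pairs that land in the green set.
    Unmarked text scores around 0.5; text generated with the key scores higher."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)

# Example: text generated with the key is statistically distinguishable.
vocab = ["alpha", "beta", "gamma", "delta", "epsilon"]
tokens = ["start"]
for _ in range(50):
    tokens.append(pick_next(tokens[-1], vocab))
print(green_fraction(tokens))                                     # close to 1.0
print(green_fraction(["the", "cat", "sat", "on", "the", "mat"]))  # typically near 0.5
```

In this toy setup only the holder of the secret key can compute the green set, so only they can run the detector, which mirrors the provenance-tracking goal the article describes.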