AI watermarks are coming – but will they work?

George Lawton, August 3, 2023
Summary:
Watermarking techniques for content have been used for years to protect intellectual property. How does watermarking apply to AI?

(© putilich - Canva.com)

Leading US AI companies recently promised the government they would implement AI watermarks that would make identifying AI-generated content easier. This follows on the heels of recent Chinese regulations that mandate watermarks on AI-generated content and make it illegal to delete or alter them. Regulators in Europe are also considering similar watermarking rules that would make it easier to identify content created by generative AI.

It sounds like a great idea, but will these schemes work in practice? The answer is: sort of. From a regulatory perspective, the right rules could require watermarks to persist across all AI-generated content, with fines for those who fail to comply. Opportunistic students, job seekers, and black hats may not be as compliant.

Bharath Thota, a partner in the Advanced Analytics practice at Kearney, a global strategy and management consulting firm, predicts:

Software used to create and edit content might soon have built-in tools that can find and stop the removal of watermarks, similar to how printing tech today prevents fake money.

Technically speaking, well-established watermarking techniques have been developed for images, video, and audio that are robust to tampering or removal. Paper document watermarks were first introduced in Fabriano, Italy, in 1282. Modern digital variants use various kinds of steganography and other encodings that can be imperceptibly embedded into digital content at different scales. These techniques are hard to remove and are widely used to protect intellectual property.

But plain text is harder to watermark reliably. Simple paraphrasing tools, or even grammar checkers, can thwart these schemes.

Kit Cox, CEO of Enate, a business workflow platform, says:

The practicalities around implementing AI watermarks differ depending on the content. With text content, for example, there are promising technical approaches with researchers working at breakneck speed to test them and break them adversarially. These approaches require the developers of large language models (LLMs) to code them so that a human reading the wording won’t realize it’s LLM-generated. In these instances, the code can also be checked with an algorithm based on the statistical distribution of words within the output. However, Google, OpenAI, Facebook, etc., will only start watermarking if mandated by legislation. There is a strong argument that slightly overregulating the use of watermarks, to begin with, is a better approach than going light and then being forced to tighten the regulation further down the line.
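
Cox’s point about checking output against the statistical distribution of words can be made concrete with a short sketch. The Python below is in the spirit of the “green list” schemes researchers have demonstrated: a hash of the previous token pseudo-randomly marks part of the vocabulary as green, watermarked generation biases sampling toward green tokens, and a detector simply counts how improbably green the text is. The function names, the SHA-256 seeding, the green fraction, and the suggested threshold are illustrative assumptions, not any vendor’s actual implementation.

import hashlib
import math

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary marked "green" at each step

def is_green(prev_token: int, token: int) -> bool:
    """Pseudo-randomly assign `token` to the green list, seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:4], "big") / 2**32 < GREEN_FRACTION

def detection_z_score(token_ids: list[int]) -> float:
    """Deviation of the observed green-token count from chance, under a binomial null."""
    n = len(token_ids) - 1
    if n < 1:
        return 0.0
    greens = sum(is_green(p, t) for p, t in zip(token_ids, token_ids[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

Text sampled with a green-list bias scores a z-value well above 4 or so, while ordinary human text hovers near zero. The same arithmetic also shows why paraphrasing is so effective at breaking such schemes: rewording replaces tokens independently of the green list and drags the score back toward chance.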

AI watermarking techniques

  • Text: In June, University of Maryland (UMD) researchers demonstrated one promising approach for watermarking text from LLMs. A separate team at UMD subsequently found that automated paraphrasing tools could defeat many AI watermarks and confound AI content detection in general. Bret Greenstein, data, AI & analytics leader at PwC, says the primary way to embed a watermark in text is to change the order and choice of words to encode a message. However, this can also constrain the way the text is written, which may not suit a writer’s intent.
  • Video and Images: Thota says that invisible watermarking methods using the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) work well for digital content because they don’t degrade perceptual quality and can survive operations like resizing, cropping, and recompression (a simplified DCT sketch follows this list). Deep neural network techniques show similar promise.
  • Voice: Resemble AI introduced its PerTh Neural Speech Watermarker in January; the name combines Perceptual and Threshold. Resemble AI founder and CEO Zohaib Ahmed says it was designed to encode the watermark into sounds that are inaudible to humans, in a redundant way that makes it difficult to corrupt with common audio manipulations. It can also help identify IP infringements for enterprise content.
  • AI confounders: Other promising work is exploring anti-AI watermarks that can protect human-generated content from generative AI models. MIT recently reported work on PhotoGuard, which perturbs images so that AI models misinterpret them (for example, treating a photo of a person as pure grey) or disrupts the image-editing process itself through a separate, more computationally expensive technique.
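
To make the DCT idea above more tangible, here is a minimal Python sketch of invisible embedding and recovery in a grayscale image: each 8x8 block carries one bit by snapping a mid-frequency DCT coefficient to an even or odd multiple of a quantization step. The block size, the coefficient position, and the step value are illustrative assumptions; production schemes, along with the DWT and neural-network variants Thota mentions, are considerably more sophisticated.

import numpy as np
from scipy.fft import dctn, idctn

STEP = 16.0    # quantization step: larger survives recompression better but is more visible
COEF = (3, 4)  # mid-frequency coefficient: robust to compression yet hard to perceive

def embed_bits(gray: np.ndarray, bits: list[int]) -> np.ndarray:
    """Embed one bit per 8x8 block: even multiples of STEP encode 0, odd multiples encode 1."""
    out = gray.astype(float).copy()
    h, w = out.shape
    positions = [(r, c) for r in range(0, h - 7, 8) for c in range(0, w - 7, 8)]
    for (r, c), bit in zip(positions, bits):
        block = dctn(out[r:r+8, c:c+8], norm="ortho")
        q = int(round(block[COEF] / STEP))
        if q % 2 != bit:
            q += 1
        block[COEF] = q * STEP
        out[r:r+8, c:c+8] = idctn(block, norm="ortho")
    return np.clip(out, 0, 255).astype(np.uint8)

def read_bits(gray: np.ndarray, n_bits: int) -> list[int]:
    """Recover the bits from the parity of the same coefficient in each block."""
    img = gray.astype(float)
    h, w = img.shape
    positions = [(r, c) for r in range(0, h - 7, 8) for c in range(0, w - 7, 8)]
    return [int(round(dctn(img[r:r+8, c:c+8], norm="ortho")[COEF] / STEP)) % 2
            for r, c in positions[:n_bits]]

Any edit that shifts the chosen coefficient by less than half the step, such as mild recompression, leaves the recovered bits intact, which is what gives frequency-domain schemes their resilience; push the step too high, though, and the watermark starts to become visible.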

Circumventing watermarks

Experts have mixed opinions about AI watermarking schemes’ effectiveness without clear regulation and enforcement. 

Pete Nicoletti, Global CISO at Check Point Software, says:

Marketers and students who are not tech-savvy but are using AI to generate papers, artwork, music, and homework will find it extremely challenging to identify and eliminate both the perceptible watermarks and the imperceptible ones. Black hat hackers and advanced users will take advantage of tools that do not comply with the currently voluntary watermarking instructions. Even if the guidance is eventually enacted as law, there are already tools accessible on the dark web or through easy downloads that can aid programmers in developing new forms of malware and manipulated images.

Greenstein notes:

The challenge with watermarking is that it is not infallible. Since watermarks in images can be detected by design, they can also be removed with great effort. For video watermarking, if the watermarking is across frames of the video, it can be lost in common methods of video compression. For text, watermarking is easily broken through simple text edits.

Thota says:

For people who aren't very good with technology, getting around AI watermarks could be hard unless they use certain tools. Some might not even know that these hidden marks are in the content. AI watermarks are designed to survive things like cropping, scaling, and rotating, making it hard for the average person to get around them.

Human content watermarks could protect against black hats

In response to black hats and IP theft, it may be useful to think about how watermarks on human-generated content could create a chain of trust. Greenstein says:

Watermarking, especially for images and video, is a cost-effective and valuable way to identify the source and potential ownership of the content. While they are breakable, it is hard to do, and the cost of adding watermarks is low. Ultimately, better laws, common practices and technical solutions will be required here. AI-generated content will dramatically exceed human-generated content in a very short time. It is likely we will spend more time trying to prove something was human-created rather than trying to detect AI-generated content.

Resemble is already working on this for audio and video content. Resemble argues that enterprises should have control over who uses their voice data. Without legal consent, AI models such as LLMs shouldn’t be able to access company data to train their models.

Resemble AI founder and CEO Zohaib Ahmed explains:

The significance of fraud concerns has spurred the recent improvements in Resemble AI’s watermarker, which is now a complete solution to thwart copyright infringement against its customers’ content library. The recent updates allow Resemble AI to identify precisely where the watermark has been embedded within an audio file. This provides more clarity on the origin of an audio clip and efficiently verifies whether it has been tampered with. Having a greater understanding of the watermark’s location within the audio data provides an additional layer of AI fraud prevention. It also ensures that any modification in the audio data can be detected quickly and efficiently.

How AI watermarking could unfold

Regulators are still in the early stages of figuring out how AI watermarking should be implemented and subsequently enforced. It’s also important to consider the role social media platforms and other channels will play in enforcing AI watermarking provisions. Will they be required to check content for AI watermarks and surface the results to users? Cox says:

The first step to regulation sounds simple – but we need to decide on what exactly should be regulated. Realistically, I suspect this will fall to models that have more than a certain number of parameters. The legislation will then instruct all organizations/individuals developing such models to algorithmically watermark the content they produce and provide a free service to validate that text against the watermark.
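
If legislation does require model developers to offer that kind of free validation service, the client side might look something like the hypothetical sketch below. The endpoint URL, request fields, and response shape are invented purely for illustration; no such public API exists today.

import json
from urllib import request

def check_watermark(text: str,
                    endpoint: str = "https://llm-vendor.example/v1/watermark/verify") -> dict:
    """POST text to a (hypothetical) vendor verification endpoint and return its JSON verdict."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = request.Request(endpoint, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"watermarked": true, "z_score": 6.2}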

Ahmed believes the right regulations could also spur innovation in AI watermarking research. He says:

If regulations explicitly prohibit tampering with or circumventing watermarks on AI-generated content, it would legally solidify watermarking as a mandatory security measure. This could accelerate innovation to make watermarks more robust. I expect the combination of technology improvement and appropriate regulation will make AI watermarks a trusted verification method, on par with signatures or serial numbers for physical goods. But care is needed not to impose excessive restrictions that limit constructive innovation.

My take

While working on this story, I got a call from a disembodied-sounding voice informing me about an opportunity to save on health insurance, followed by a request to know my age.

“Are you human?” I asked.

After a long pause, the cheery-sounding voice responded, “Why of course I am human. Now, what is your age?”

I asked back, “What is the provenance of your company?” After a very long pause, the line clicked.

This was the first time I felt like I was being called by a bot rather than just a recording asking me to press a number to talk to a human. And I suspect it’s the beginning of a tsunami of fake people infiltrating our phones, social feeds, news, elections and much more. AI watermarking regulations with appropriate enforcement bodies could help stem this flood of interactive AI spam.

And for all those wondering if AI will put the lawyers out of business, I don’t think so. They are going to have a lot more work on their hands arising from things like IP theft and violations of acceptable use policies that other types of watermarking could help detect.
