“We broke them all” — How researchers broke current image watermarking protections and what it means for a new era of truth-altering ‘reality’


The tool many big tech companies are banking on to help the public and businesses separate fact from fiction amid AI’s meteoric rise has already been undermined before it has even taken off.
Companies including OpenAI, Amazon and Google have pointed to watermarking as a way to combat disinformation online. With generative AI on the rise, particularly in the form of deepfakes, it is seen as one way to identify what’s actually real, and it is one of the key proposals among efforts to make the use of AI safer and more transparent.
There aren’t, however, any watermarking approaches yet that are completely foolproof or reliable, and researchers at the University of Maryland have already found a way to break all of the existing methods, according to TechXplore.
How scientists have already cracked AI watermarking
The researchers used a technique called diffusion purification, which adds Gaussian noise (random noise drawn from a normal distribution) to a watermarked image and then denoises it, completely removing the watermark without noticeably degrading the underlying image.
With AI-generated content on the rise, especially in certain industries, the scope for abuse has become a very real concern, making tools and strategies that can distinguish genuine content from machine-made content essential.
Watermarking is a promising approach, according to the paper, published on 29 September. It involves hiding a signal in a piece of text or an image that marks it as AI-generated. The theory goes that a tool you run the content through would then be able to determine whether it’s real or fake, so people avoid falling for something that isn’t real. But the attack method, diffusion purification, has already been able to nullify today’s watermarks.
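The mechanics can be sketched in a few lines of Python, assuming a toy additive watermark and a crude moving-average blur standing in for the diffusion model’s denoiser. All names here (`embed_watermark`, `detect_watermark`, `diffusion_purify`) are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_watermark(image, pattern, strength=0.05):
    """Toy additive watermark: hide a faint +/-1 pattern in the pixels."""
    return image + strength * pattern

def detect_watermark(image, pattern, threshold=0.02):
    """Correlate with the known secret pattern; a high score means 'watermarked'."""
    return float(np.mean(image * pattern)) > threshold

def diffusion_purify(image, noise_std=0.3):
    """Sketch of the attack: drown the watermark in Gaussian noise, then
    denoise. A real attack runs a diffusion model's reverse process here;
    a crude row-wise moving-average blur stands in for it."""
    noisy = image + rng.normal(0.0, noise_std, image.shape)
    kernel = np.ones(5) / 5.0
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, noisy
    )

image = rng.normal(0.0, 0.1, (128, 128))       # stand-in for a real image
pattern = rng.choice([-1.0, 1.0], (128, 128))  # secret watermark pattern
marked = embed_watermark(image, pattern)

print(detect_watermark(marked, pattern))                    # True: watermark found
print(detect_watermark(diffusion_purify(marked), pattern))  # False: watermark gone
```

The intuition is that the denoiser only reconstructs natural image structure; a faint, high-frequency watermark pattern is statistically indistinguishable from the injected noise, so it gets smoothed away along with it.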
“Based on our results, designing a robust watermark is a challenging, but not necessarily impossible task,” the paper said, offering a glimmer of hope.
“An effective method should possess specific attributes, including a substantial enough watermark perturbation, resistance to naive classification, and resilience to noise transferred from other watermarked images.”