Last week, Google quietly announced that it would be adding a visible watermark to AI-generated videos made with its new Veo 3 model.
And if you look really closely while scrolling through your social feeds, you might be able to see it.
The watermark can be seen in videos released by Google to promote the launch of Veo 3 in the UK and other countries.
Credit: Screenshot: Google
Google announced the change in an X thread from Josh Woodward, Vice President of Google Labs and Google Gemini.
According to Woodward’s post, the company added the watermark to all Veo videos except those generated in Google’s Flow tool by users on the Google AI Ultra plan. The new watermark is in addition to the invisible SynthID watermark already embedded in all of Google’s AI-generated content, as well as a SynthID detector, which recently rolled out to early testers but is not yet widely available.
The visible watermark “is a first step as we work to make our SynthID Detector available to more people in parallel,” Woodward wrote in his X post.
In the weeks since Google launched Veo 3 at Google I/O 2025, the new AI video model has drawn plenty of attention for its strikingly realistic videos, especially since it can also generate realistic audio and dialogue. The videos posted online aren’t just fantastical renderings of animals acting like humans, although there’s plenty of that, too. Veo 3 has also been used to generate more mundane clips, including man-on-the-street interviews, influencer ads, fake news segments, and unboxing videos.
If you look closely, you can spot telltale signs of AI, like overly smooth skin and odd artifacts in the background. But if you’re passively doomscrolling, you might not think to double-check whether the emotional support kangaroo casually holding a plane ticket is real or fake. People being duped by an AI-generated kangaroo is a relatively harmless example, but Veo 3’s widespread availability and realism introduce a new level of risk for the spread of misinformation, according to AI experts interviewed by Mashable for this story.
The new watermark should reduce those risks, in theory. The only problem is that the visible watermark isn’t all that visible. In a video Mashable generated using Veo 3, you can see a “Veo” watermark in a pale shade of white in the bottom right-hand corner of the frame. See it?

A Veo 3 video generated by Mashable includes the new watermark.
Credit: Screenshot: Mashable
How about now?

Google’s Veo watermark.
Credit: Screenshot: Mashable
“This small watermark is unlikely to be apparent to most consumers who are moving through their social media feed at a break-neck clip,” said digital forensics expert Hany Farid. Indeed, it took us a few seconds to find it, and we were looking for it. Unless users know to look for the watermark, they may never notice it, especially when viewing content on their mobile devices.
A Google spokesperson told Mashable by email, “We’re committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools. Any content generated with Google AI has a SynthID watermark embedded and we also add a visible watermark to Veo videos too.”
“People are familiar with prominent watermarks like Getty Images, but this one is very small,” said Negar Kamali, a researcher studying people’s ability to detect AI-generated content at the Kellogg School of Management. “So either the watermark needs to be more noticeable, or platforms that host images could include a note beside the image, something like ‘Check for a watermark to verify whether the image is AI-generated,’” Kamali said. “Over time, people may learn to look for it.”
However, visible watermarks aren’t a perfect remedy. Both Farid and Kamali told us that watermarked videos can easily be cropped or edited. “None of these small (visible) watermarks in images or video are sufficient because they are easy to remove,” said Farid, who is also a professor at the UC Berkeley School of Information.
Yet he noted that Google’s invisible SynthID watermark “is quite resilient and difficult to remove.” Farid added, “The downside is that the average consumer can’t see this [SynthID watermark] without a watermark reader, so the goal now is to make it easier for the consumer to know if a piece of content contains such a watermark.”