Grok Imagine, a new generative AI tool from xAI that creates AI images and videos, lacks guardrails against sexual content and deepfakes.
xAI and Elon Musk debuted Grok Imagine over the weekend, and it is available now in the Grok iOS and Android app for xAI Premium Plus and Heavy Grok subscribers.
Mashable has been testing the tool to compare it to other AI image and video generation tools, and based on our first impressions, it lags behind comparable technology from OpenAI, Google, and Midjourney on a technical level. Grok Imagine also lacks industry-standard guardrails to prevent deepfakes and sexual content. Mashable reached out to xAI, and we'll update this story if we receive a response.
The xAI Acceptable Use Policy prohibits users from "Depicting likenesses of persons in a pornographic manner." Unfortunately, there's a lot of distance between "sexual" and "pornographic," and Grok Imagine seems carefully calibrated to take advantage of that gray area. Grok Imagine will readily create sexually suggestive images and videos, but it stops short of showing actual nudity, kissing, or sexual acts.
Most mainstream AI companies include explicit rules prohibiting users from creating potentially harmful content, including sexual material and celebrity deepfakes. In addition, rival AI video generators like Google Veo 3 or Sora from OpenAI feature built-in protections that stop users from creating images or videos of public figures. Users can sometimes circumvent these safety protections, but they provide some check against misuse.
But unlike its biggest rivals, xAI hasn't shied away from NSFW content in its signature AI chatbot Grok. The company recently launched a flirtatious anime avatar that can engage in NSFW chats, and Grok's image generation tools let users create images of celebrities and politicians. Grok Imagine also includes a "Spicy" setting, which Musk promoted in the days after its launch.
Grok's "spicy" anime avatar. Credit: Cheng Xin/Getty Images
"If you look at the philosophy of Musk as an individual, if you look at his political philosophy, he is very much more of the kind of libertarian mold, right? And he has spoken about Grok as kind of like the LLM for free speech," said Henry Ajder, an expert on AI deepfakes, in an interview with Mashable. Ajder said that under Musk's stewardship, X (Twitter), xAI, and now Grok have adopted "a more laissez-faire approach to safety and moderation."
"So, when it comes to xAI, in this context, am I surprised that this model can generate this content, which is certainly uncomfortable, and I'd say at least somewhat problematic?" Ajder said. "I am not shocked, given the track record that they have and the safety procedures that they have in place. Are they unique in suffering from these challenges? No. But could they be doing more, or are they doing less relative to some of the other key players in the space? It would seem that way. Yes."
Grok Imagine errs on the side of NSFW
Grok Imagine does have some guardrails in place. In our testing, it removed the “Spicy” option with some types of images. Grok Imagine also blurs out some images and videos, labeling them as “Moderated.” That means xAI could easily take further steps to prevent users from making abusive content in the first place.
"There is no technical reason why xAI couldn't include guardrails on both the input and output of their generative-AI systems, as others have," said Hany Farid, a digital forensics expert and UC Berkeley Professor of Computer Science, in an email to Mashable.
Still, when it comes to deepfakes or NSFW content, xAI seems to err on the side of permissiveness, a stark contrast to the more cautious approach of its rivals. xAI has also moved quickly to launch new models and AI tools, perhaps too quickly, Ajder said.
"Knowing what the kind of trust and safety teams, and the teams that do a lot of the ethics and safety policy management stuff, whether that's a red teaming, whether it's adversarial testing, you know, whether that's working hand in hand with the developers, it does take time. And the timeframe at which X's tools are being released, at least, certainly seems shorter than what I would see on average from some of these other labs," Ajder said.
Mashable's testing shows that Grok Imagine has much looser content moderation than other mainstream generative AI tools. xAI's laissez-faire approach to moderation is also reflected in the xAI safety guidelines.
OpenAI and Google AI vs. Grok: How other AI companies approach safety and content moderation

Credit: Jonathan Raa/NurPhoto via Getty Images
Both OpenAI and Google have extensive documentation outlining their approach to responsible AI use and prohibited content. For instance, Google's documentation specifically prohibits "Sexually Explicit" content.
A Google safety document reads, "The application will not generate content that contains references to sexual acts or other lewd content (e.g., sexually graphic descriptions, content aimed at causing arousal)." Google also has policies against hate speech, harassment, and malicious content, and its Generative AI Prohibited Use Policy bars using AI tools in a way that "Facilitates non-consensual intimate imagery."
OpenAI also takes a proactive approach to deepfakes and sexual content.
An OpenAI blog post announcing Sora describes the steps the AI company took to prevent this type of abuse. "Today, we're blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes." A footnote attached to that statement reads, "Our top priority is preventing especially damaging forms of abuse, like child sexual abuse material (CSAM) and sexual deepfakes, by blocking their creation, filtering and monitoring uploads, using advanced detection tools, and submitting reports to the National Center for Missing & Exploited Children (NCMEC) when CSAM or child endangerment is identified."
That measured approach contrasts sharply with the way Musk promoted Grok Imagine on X, where he shared a short video portrait of a blonde, busty, blue-eyed angel in barely-there lingerie.
OpenAI also takes simple steps to stop deepfakes, such as denying prompts for images and videos that mention public figures by name. And in Mashable's testing, Google's AI video tools are especially sensitive to images that might include a person's likeness.
Compared to these lengthy safety frameworks (which many experts still believe are inadequate), the xAI Acceptable Use Policy is less than 350 words. The policy puts the onus of preventing deepfakes on the user. It reads, "You are free to use our Service as you see fit so long as you use it to be a good human, act safely and responsibly, comply with the law, do not harm people, and respect our guardrails."
For now, laws and regulations against AI deepfakes and non-consensual intimate imagery (NCII) remain in their infancy.
President Donald Trump recently signed the Take It Down Act, which includes protections against deepfakes. However, that law doesn't criminalize the creation of deepfakes but rather the distribution of these images.
"Here in the U.S., the Take it Down Act places requirements on social media platforms to remove [Non-Consensual Intimate Images] once notified," Farid told Mashable. "While this doesn't directly address the generation of NCII, it does — in theory — address the distribution of this material. There are several state laws that ban the creation of NCII but enforcement appears to be spotty right now."
Disclosure: Ziff Davis, Mashable's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.