The latest on Grok’s gross AI deepfakes problem
X safety teams ‘repeatedly warned management’ about undressing tools.
While X has long allowed NSFW images, The Washington Post reports that the platform’s content moderation filters couldn’t keep up with the estimated millions of sexualized deepfakes of real women and children being generated by Grok.
“For instance, child sexual abuse material was typically rooted out by matching it against a database of known illegal images. But an AI edited image wouldn’t automatically trigger these warnings.”