The latest on Grok’s gross AI deepfakes problem

X safety teams ‘repeatedly warned management’ about undressing tools.

While X has long allowed NSFW images, The Washington Post reports that the platform’s content moderation filters couldn’t keep up with the estimated millions of sexualized deepfakes of real women and children being generated by Grok.

“For instance, child sexual abuse material was typically rooted out by matching it against a database of known illegal images. But an AI edited image wouldn’t automatically trigger these warnings.”
