It’s election season in the United States, which of course means it’s a perfect time for Elon Musk to release a deepfaker’s dream app. As part of the Grok-2 beta released to X Premium subscribers this week, the social media site’s AI chatbot got a new image generator. And based on what’s been floating around online and what I’ve been able to make myself, the thing has (almost) no filter.
Troubling images started to pop up almost immediately after the chatbot’s update, showing realistic depictions of politicians, celebrities, and copyrighted children’s icons doing everything from packing heat to snorting cocaine to taking part in 9/11. It’s not the first time this has happened—Meta faced a similar issue late last year, and the text-based side of Grok has been known to spread misinformation before. But with all that in mind, it’s concerning to see the same mistakes repeated in the bot’s brand-new launch, especially after Musk's promises that it would be a "maximum truth-seeking AI."
Not all images look totally convincing, but with some clever wording, it’s entirely possible to create a striking photo that could easily trick someone just glancing at it while scrolling through their feed.
Here are some photos of Taylor Swift wearing a MAGA hat, the Pope endorsing Donald Trump, and Joe Biden meeting Kim Jong Un in the Oval Office before (kind of) snorting some cocaine. Do they hold up under scrutiny? Maybe not. But they’re enough to make someone who's perhaps a little less tech-literate ask “Is that real?”
Similarly, imagine the pearl-clutching headlines about this fake poster for a movie where the Minions smoke weed, or this fake screenshot from a game where Mario shoots Pikachu.
The app also doesn’t shy away from violence; with the right words, it will show blood or even graphic imagery. Telling it to make a “crime scene analysis” seems to be the best way to get the most disturbing stuff, but here are some examples that are more fit for print, including one showing Joe Biden and Kamala Harris teaming up to do 9/11, one of Kamala Harris mimicking Donald Trump's reaction to his recent assassination attempt, and another of Elon Musk leaking raspberry jam.
The app doesn’t always get it right—funnily, it doesn’t seem to know what a Cybertruck is—but with AI, it’s quantity over quality. Pepper the web with enough junk, and someone’s bound to believe at least some of it.
Only twice did the bot tell me it couldn’t proceed with a request: once when I asked for Donald Trump in a KKK uniform, and once when I asked for Elon Musk holding an assault rifle. That’s not to say it’s especially protective of Musk, as seen above, although this image weirdly seems to show the opposite of what I asked for.
The bot would also occasionally just omit especially violent parts of an image, although it’s unclear how much of that is intentional versus how much is the bot simply failing to render the request correctly. You’ll still get the occasional misplaced finger or melting face, and Grok in general seems better at depicting Donald Trump than Kamala Harris.
If you have X Premium, you can try out Grok’s unhinged image generator for yourself right now. This is only a bit of what I was able to make—I also have imagery of a smiling child soldier brandishing an AK-47, and of Princess Peach in a strip club. I was able to generate provocative photos as recently as an hour ago, and Musk has yet to make any statements implying the company will walk the update back.