Elon Musk’s AI chatbot Grok on Tuesday began allowing users to create AI-generated images from text prompts and post them to X. Almost immediately, people began using the tool to flood the social media site with fake images of political figures such as former President Donald Trump and Vice President Kamala Harris, as well as of Musk himself — some depicting the public figures in obviously false but nonetheless disturbing situations, like participating in the 9/11 attacks.
Unlike other mainstream AI photo tools, Grok, created by Musk’s artificial intelligence startup xAI, appears to have few guardrails.
In tests of the tool, for example, CNN was easily able to get Grok to generate fake, photorealistic images of politicians and political candidates that, taken out of context, could be misleading to voters. The tool also created benign yet convincing images of public figures, such as Musk eating steak in a park.
Some X users posted images they said they created with Grok showing prominent figures consuming drugs, cartoon characters committing violent murders and sexualized images of women in bikinis. In one post viewed nearly 400,000 times, a user shared an image created by Grok of Trump leaning out of the top of a truck, firing a rifle. CNN tests confirmed the tool is capable of creating such images.
The tool is likely to add to concerns that artificial intelligence could create an explosion of false or misleading information across the internet, especially ahead of the US presidential election. Lawmakers, civil society groups and even tech leaders themselves have raised alarms that the misuse of such tools could cause confusion and chaos for voters.
“Grok is the most fun AI in the world!” Musk posted on X Wednesday, in response to a user praising the tool for being “uncensored.”
Many other leading AI companies have taken some steps to prevent their AI image generation tools from being used to create political misinformation, although researchers found users can still sometimes find ways around enforcement measures. Some companies, including OpenAI, Meta and Microsoft, also include technology or labels to help viewers identify images that have been made with their AI tools.
Rival social media platforms, including YouTube, TikTok, Instagram and Facebook, have also taken steps to label AI-generated content in users’ feeds, either by using technology to detect it themselves or by asking users to identify when they’re posting such content.
X did not immediately respond to a request for comment regarding whether it has any policies against Grok generating potentially misleading images of political candidates.
By Friday, xAI appeared to have introduced some restrictions on Grok, in response to critical reports and the disturbing images that users were creating and posting. The tool now refuses to create images of political candidates, or of widely recognized cartoon characters whose intellectual property belongs to other companies, committing acts of violence or appearing alongside hate speech symbols, although users noted the restrictions seemed limited to certain terms and image subjects.
X has a policy against sharing “synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm,” although it’s unclear how the policy is enforced. Musk himself shared a video last month on X that used AI to make it appear that Harris had said things she, in fact, did not — in an apparent violation of the policy and with only a laughing face emoji to suggest to followers that it was fake.
The new Grok image tool also comes as Musk faces criticism for repeatedly spreading false and misleading claims on X related to the presidential election, including a post raising questions about the security of voting machines. It also comes days after Musk hosted Trump for a more than two-hour livestreamed conversation on X in which the Republican hopeful made at least 20 false claims without pushback from Musk.
Other AI image generation tools have faced backlash for various issues. Google paused its Gemini AI chatbot’s ability to generate images of people after the tool was blasted for producing historically inaccurate depictions of people’s races; Meta’s AI image generator came under fire for struggling to create images of couples or friends from different racial backgrounds. TikTok was also forced to pull an AI video tool after CNN discovered that any user could create realistic-looking videos of people saying anything, including vaccine misinformation, without labels.
Grok does appear to have some restrictions; for example, a prompt requesting a nude image returned a response saying, “unfortunately, I can’t generate that kind of image.”
In a separate test, the tool said it also has “limitations on creating content that promotes or could be seen as endorsing harmful stereotypes, hate speech, or misinformation.”
“It’s important to avoid spreading falsehoods or content that could incite hatred or division. If you have other requests or need information on a different topic, feel free to ask!” Grok said.
However, in response to a different prompt, the tool did generate an image of a political figure standing alongside a hate speech symbol — a sign that whatever restrictions Grok does have, they appear not to be enforced consistently.
–CNN’s Jon Passantino contributed to this report. This story has been updated to reflect changes xAI has made to Grok.