California Attorney General Orders Elon Musk’s xAI to Stop Sexualised Deepfake Content, Sets January 20 Deadline
NEW DELHI / SAN FRANCISCO: Elon Musk’s artificial intelligence company has landed in serious legal trouble in the United States after Rob Bonta, California’s Attorney General, issued a cease-and-desist letter demanding immediate action against the generation of sexualised deepfake images by its chatbot, Grok.
The letter, sent this week, orders xAI to immediately prevent Grok from creating and distributing nonconsensual sexualised images, with a particular emphasis on protecting minors. Bonta has set a strict compliance deadline of January 20, 2026, at 5:00 PM Pacific Time, warning that failure to act could trigger significant civil and criminal consequences under California law.
Focus on Nonconsensual Images and Child Safety
According to the California Attorney General’s office, it has opened a formal investigation into what it described as the “proliferation of nonconsensual sexually explicit material” generated using Grok. The investigation covers content created via the Grok website, its application, and its integration with X, which is also owned by xAI.
In the letter, addressed directly to Musk, Bonta said his office had found multiple examples of Grok taking ordinary, clothed images of women and children available online and modifying them—based on user prompts—into sexualised or explicit depictions without the subjects’ knowledge or consent.
“Creation, distribution, publication, and exhibition of child sexual abuse material is a crime,” the letter stated, citing multiple provisions of the California Penal Code and Civil Code.
Example Cited Involving Minors
One of the most serious concerns raised in the letter involved an incident acknowledged publicly by xAI itself. In a post from the official Grok account, the company admitted that the chatbot had generated and shared an AI image of two young girls, estimated to be between 12 and 16 years old, dressed in sexualised attire based on a user prompt.
Bonta noted that this incident potentially violated US child sexual abuse material (CSAM) laws, in addition to California’s civil and criminal statutes. The Attorney General described the incident as a clear breach of ethical standards and a red line under existing law.
Paid Access Not Enough, Says AG
The letter also criticised xAI’s reported attempt to limit image generation and editing features to paid premium subscribers on X. According to Bonta, these restrictions were “clearly insufficient”, as public reports showed that Grok continued to generate and distribute nonconsensual intimate images and potentially CSAM even after those changes.
While Bonta acknowledged xAI’s recent announcement that it was implementing new “guardrails” around sexualised content, he said the scope, effectiveness, and enforcement of these measures remained unclear.
“In light of the seriousness of the incidents described above,” Bonta wrote, “I demand that xAI immediately cease and desist” from a range of actions linked to the creation and facilitation of such content.
What xAI Is Ordered to Stop
The cease-and-desist letter lays out three explicit prohibitions:
xAI must immediately stop creating, disclosing, or publicising digitised sexually explicit material portraying any individual without consent, or portraying a minor.
The company must stop facilitating or aiding the creation or distribution of such material, including content altered or generated through artificial intelligence.
xAI must halt the creation or distribution of any image depicting a person under 18, or appearing to be under 18, engaged in or simulating sexual conduct.
Bonta cited violations under California Civil Code Section 1708.86, California Penal Code Sections 311 et seq., Section 647(j)(4), and California Business & Professions Code Section 17200.
Evidence Preservation Ordered
Beyond halting the content generation itself, the Attorney General ordered xAI, Grok, and X to preserve all potentially relevant evidence, including prompts, generated images, posts, internal records, and any electronically stored information related to the investigation.
The letter explicitly warns against deletion, alteration, or spoliation of evidence, signalling that enforcement action could escalate quickly if compliance is not demonstrated.
xAI has been instructed to provide written confirmation to the Attorney General’s office by the January 20 deadline, detailing both the steps taken to stop the creation and distribution of the problematic material and the safeguards being implemented going forward.
Growing Scrutiny of Generative AI
The action against xAI comes amid growing scrutiny of generative AI tools worldwide, particularly over their misuse in creating deepfake pornography, revenge content, and synthetic child abuse material. Regulators have increasingly argued that AI companies must build preventive safeguards by design, rather than reacting after harm has occurred.
California, which already has some of the strictest digital safety and consumer protection laws in the US, has positioned itself at the forefront of regulating AI-driven harms—especially those involving minors and nonconsensual content.
What Comes Next
If xAI fails to comply by the January 20 deadline, legal experts say the company could face civil enforcement actions, financial penalties, and court injunctions, as well as potential referrals for criminal investigation if violations involving CSAM are substantiated.
So far, xAI has not publicly responded in detail to the Attorney General’s letter beyond earlier statements about adding guardrails to Grok.
Our Thoughts from TheTrendingPeople.com
The California Attorney General’s action against xAI marks a critical moment for the AI industry. As generative models grow more powerful, regulators are making it clear that innovation cannot come at the cost of consent, dignity, and child safety. This case also highlights a broader shift: AI companies will increasingly be judged not just on what their tools can do, but on what they actively prevent. How xAI responds by the January 20 deadline could set an important precedent for AI accountability in the US and beyond.
