Musk fails to block California data disclosure law he fears will ruin xAI

Published: March 6, 2026 at 1:21 PM EST
7 min read

Source: Ars Technica

“Key Win” for California

Elon Musk’s xAI has lost its bid for a preliminary injunction that would have temporarily blocked California from enforcing a law requiring AI firms to publicly disclose information about their training data.

Background

California’s Assembly Bill 2013 (AB 2013) requires AI developers whose models are accessible in the state to:

  • Identify which dataset sources were used to train the models.
  • State when the data was collected and whether the collection is ongoing.
  • Indicate if the datasets contain any copyrighted, trademarked, or patented material.
  • Clarify whether the data was licensed, purchased, or includes personal information.
  • Reveal the extent of synthetic data used, which can serve as a quality metric.
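The statute requires these facts to be published but prescribes no particular format. As an illustration only, and with every field name below invented for the example (AB 2013 defines no schema), a developer’s disclosure covering the categories above might be structured like this:

```python
# Hypothetical AB 2013-style training-data disclosure record.
# All field names and values are invented for illustration; the statute
# requires these facts to be disclosed but does not prescribe a format.
disclosure = {
    "dataset_sources": ["Common Crawl", "licensed news archive"],
    "collection_period": {"start": "2021-01", "end": "2025-06"},
    "collection_ongoing": True,
    "contains_ip_protected_material": True,   # copyrighted/trademarked/patented
    "data_provenance": ["licensed", "purchased"],
    "contains_personal_information": False,
    "synthetic_data_fraction": 0.15,          # share of synthetic training data
}

# Simple completeness check against the statute's disclosure categories.
required_fields = {
    "dataset_sources", "collection_period", "collection_ongoing",
    "contains_ip_protected_material", "data_provenance",
    "contains_personal_information", "synthetic_data_fraction",
}
missing = required_fields - disclosure.keys()
print("complete" if not missing else f"missing: {sorted(missing)}")
```

A real disclosure would of course be published as prose or a public document; the point is only that each bullet above maps to a concrete, checkable fact about the training data.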

The law’s disclosure requirement took effect on January 1, 2026.

xAI’s Arguments

xAI argued that AB 2013 forces the company to disclose carefully guarded trade secrets. In its filing, the company claimed:

  • Its dataset sources, sizes, and cleaning methods are proprietary.
  • Revealing these details would be “economically devastating,” reducing the value of its trade secrets to zero.
  • The disclosures would not help consumers and could “gut the entire AI industry.”

“If competitors could see the sources of all of xAI’s datasets or even the size of its datasets, they could evaluate both what data xAI has and what they lack,” xAI wrote.
“If OpenAI discovered that xAI was using an important dataset that OpenAI did not have, OpenAI would almost certainly acquire that dataset to train its own model, and vice versa.”

Judge’s Ruling

In an order issued on Wednesday, March 4, 2026, U.S. District Judge Jesus Bernal denied xAI’s motion for a preliminary injunction. Key points from the decision:

  • Insufficient showing of harm: xAI failed to demonstrate that the law would compel it to reveal any trade secrets.
  • Vague allegations: The company offered only general statements about the importance of datasets, without concrete evidence of direct harm.
  • Support for public interest: The judge emphasized the government’s interest in helping the public assess how AI models are trained.


Implications

  • Compliance required: xAI must comply with AB 2013 while the broader lawsuit proceeds.
  • Potential exposure: Musk may have to disclose information he would prefer to keep from competitors like OpenAI.
  • Continued legal battles: This defeat follows a recent ruling that dismissed one of Musk’s lawsuits against OpenAI for lack of proof of stolen trade secrets.

The case underscores the growing tension between trade‑secret protection and public transparency in the rapidly evolving AI landscape.

xAI’s Challenge to California’s AI Law

Key Arguments Presented by xAI

  • Fifth Amendment (trade‑secret claim)
    ◦ xAI’s position: Training data can be a trade secret; the law unlawfully forces disclosure.
    ◦ Court’s finding: xAI has not identified any dataset or cleaning method that is distinct from its competitors’ and therefore cannot claim trade‑secret protection.
  • First Amendment (commercial‑speech claim)
    ◦ xAI’s position: The statute attempts to regulate the outputs of xAI’s chatbot Grok, compelling speech while exempting other firms.
    ◦ Court’s finding: xAI failed to show the law forces developers to disclose data sources to curb “biased” data, and nothing indicates the statute aims to influence Grok’s outputs.

Judge Bernal’s Reasoning

  • Trade‑Secret Claim

    “It is not lost on the Court the important role of datasets in AI training and development, and that, hypothetically, datasets and details about them could be trade secrets. But xAI has not alleged that it actually uses datasets that are unique, that it has meaningfully larger or smaller datasets than competitors, or that it cleans its datasets in unique ways.”
    Consequently, the judge found xAI unlikely to succeed on the merits of its Fifth Amendment claim.

  • Commercial‑Speech Claim

    “xAI failed to show that the law improperly forces developers to publicly disclose their data sources in an attempt to identify what California deems to be ‘data riddled with implicit and explicit biases.’”

  • Statutory Language

    “Nothing in the language of the statute suggests that California is attempting to influence Plaintiff’s models’ outputs by requiring dataset disclosure.”
    “The statute does not functionally ask Plaintiff to share its opinions on the role of certain datasets in AI model development or make ideological statements about the utility of various datasets or cleaning methods.”
    “No part of the statute indicates any plan to regulate or censor models based on the datasets with which they are developed and trained.”

Context: Controversies Surrounding Grok

Grok’s controversies prompted a California investigation; see the Attorney General’s cease‑and‑desist letter (Office of the Attorney General, 2026).

Despite the scandals, Judge Bernal concluded that the statute does not aim to regulate Grok’s controversial or biased outputs, contrary to xAI’s fears.

Public “cannot possibly” care about AI training data

Perhaps most frustrating for xAI as it continues to fight to block the law, Judge Bernal also disputed the claim that the public has no interest in the training‑data disclosures.

“It strains credulity to essentially suggest that no consumer is capable of making a useful evaluation of Plaintiff’s AI models by reviewing information about the datasets used to train them and that therefore there is no substantial government interest advanced by this disclosure statute,” Bernal wrote.

He noted that the law simply requires companies to alert the public about information that can feasibly be used to weigh whether they want to use one model over another.

Nothing about the required disclosures is inherently political, the judge suggested, although some consumers might select or avoid certain models with perceived political biases. As an example, Bernal opined that consumers may want to know “if certain medical data or scientific information was used to train a model” to decide if they can trust the model “to be sufficiently comprehensively trained and reliable for the consumer’s purposes.”

“In the marketplace of AI models, AB 2013 requires AI model developers to provide information about training datasets, thereby giving the public information necessary to determine whether they will use—or rely on information produced by—Plaintiff’s model relative to the other options on the market,” Bernal wrote.

Moving forward, xAI faces an uphill battle to win this fight. It will need to:

  1. Gather more evidence that its datasets or cleaning methods are sufficiently unique to be considered trade secrets that give the company a competitive edge.
  2. Deepen its arguments that consumers don’t care about disclosures and that the government has not explored less burdensome alternatives that could “achieve the goal of transparency for consumers,” Bernal suggested.

One possible path to a win could be proving that California’s law is so vague that it potentially puts xAI on the hook for disclosing its customers’ training data for individual Grok licenses. But Bernal emphasized that xAI must actually face such a conundrum—rather than raising an abstract possible issue among AI systems developers—for the Court to make a determination on this issue.

xAI did not respond to Ars’ request for comment.

A spokesperson for the California Department of Justice told Reuters that the department “celebrates this key win and remains committed to continuing our defense” of the law.


About the author


Ashley Belanger is a senior policy reporter for Ars Technica, dedicated to tracking the social impacts of emerging policies and new technologies. She is a Chicago‑based journalist with 20 years of experience.

