OpenAI upgrades ChatGPT with interactive learning tools as lawsuits and Pentagon backlash mount
Source: VentureBeat
The past ten days have been among the most consequential in OpenAI’s history, with developments stacking up across product, politics, personnel, and the courts. Here is what happened — and what it means.
OpenAI on Tuesday launched a set of interactive visual tools inside ChatGPT that let users manipulate mathematical and scientific formulas in real time — a genuinely impressive education feature that landed in the middle of the most turbulent stretch of the company’s corporate life.
The new experience covers more than 70 core math and science concepts, from the Pythagorean theorem to Ohm’s law to compound interest. When a user asks ChatGPT to explain one of these topics, the chatbot now generates a dynamic module with adjustable sliders alongside its written response. Drag a variable, and the equations, graphs, and diagrams update instantly. The feature is available today to all logged‑in users worldwide, across every plan, including free.
OpenAI tells VentureBeat that 140 million people already use ChatGPT each week for math and science learning, a staggering number. It also means the feature lands at a moment of extraordinary turbulence for the company:
- Since late February, OpenAI has been sued by the family of a 12‑year‑old mass‑shooting victim who alleges the company knew the attacker was planning violence through ChatGPT.
- It lost its head of robotics over a Pentagon deal that triggered a near‑300 % spike in app uninstalls.
- More than 30 of its own employees filed a legal brief supporting rival Anthropic against the U.S. government.
- It scrapped plans with Oracle to expand a flagship data center in Texas.
- Its chief competitor’s app, Claude, now sits atop the App Store.
The interactive learning tools are, on their merits, a strong product. They also arrive at a company fighting on every front simultaneously — and burning through an estimated $15 billion in cash this year to do it.
How the new ChatGPT learning tools actually work
The feature is built on a simple pedagogical premise: students understand formulas better when they can see what happens as the inputs change.
- Ask ChatGPT “help me understand the Pythagorean theorem,” and the system now responds with a written explanation alongside an interactive panel.
- On the left, the formula a² + b² = c² appears in clean notation with sliders for sides a and b.
- On the right, a geometric visualization—a right triangle with squares drawn on each side—reshapes dynamically as you adjust the values. The computed hypotenuse updates in real time.
The same treatment applies across topics: voltage and resistance for Ohm’s law, pressure and temperature for the ideal‑gas equation, radius and height for cone volume, etc.
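The mechanism is easy to picture in code. The sketch below is purely illustrative (OpenAI has not published its implementation): it simply recomputes the Pythagorean hypotenuse each time a "slider" value changes, which is what the panel's real-time update amounts to for this formula.

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Pythagorean theorem: c = sqrt(a^2 + b^2)."""
    return math.sqrt(a ** 2 + b ** 2)

# Simulate dragging the slider for side a while side b stays fixed at 4:
# each new slider position triggers a recomputation, just as the panel
# redraws the triangle and updates c as the user drags.
for a in (3.0, 5.0, 12.0):
    print(f"a = {a:>4}, b = 4.0  ->  c = {hypotenuse(a, 4.0):.2f}")
```

The hypothetical `hypotenuse` function stands in for whatever rendering logic the real module uses; the point is only that every slider movement maps to one cheap recomputation and redraw.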
OpenAI’s initial roster of more than 70 topics targets high‑school and introductory‑college material, including:
- Binomial squares
- Charles’ law
- Circle equations
- Coulomb’s law
- Cylinder volume
- Degrees of freedom
- Exponential decay
- Hooke’s law
- Kinetic energy
- Lens equation
- Linear equations
- Slope‑intercept form
- Surface area of a sphere
- Trigonometric angle‑sum identities
- …and many others
The company cited research suggesting that “visual, interaction‑based learning can lead to stronger conceptual understanding than traditional instruction for many students,” and pointed to a recent Gallup survey in which more than half of U.S. adults said they struggle with math. In early testing, OpenAI said students reported the modules helped them grasp how variables relate to one another, and parents described using them to work through problems alongside their children.
“The feature stands out for how strongly it emphasizes conceptual understanding.” – Anjini Grover, high‑school mathematics teacher
“A step towards empowering students to independently explore abstract concepts.” – Raquel Gibson, high‑school algebra teacher
The tools build on ChatGPT’s existing education features—a “study mode” for step‑by‑step problem solving and a quizzes feature for exam prep—and OpenAI said it plans to expand interactive learning to additional subjects. The company also intends to publish research through its NextGenAI initiative and OpenAI Learning Lab to study how AI shapes learning outcomes over time.
A lawsuit alleging OpenAI knew a mass shooter was planning an attack
On the day before OpenAI shipped its education tools, the company was hit with the most serious legal challenge in its history.
- Monday: The mother of 12‑year‑old Maya Gebala filed a civil lawsuit against OpenAI in B.C. Supreme Court, alleging the company had “specific knowledge of the shooter’s long‑range planning of a mass‑casualty event” through ChatGPT interactions and “took no steps to act upon this knowledge.”
- Maya was shot three times during a mass shooting in Tumbler Ridge, British Columbia on February 10, which killed eight people and the 18‑year‑old attacker. She suffered a catastrophic traumatic brain injury with permanent cognitive and physical disabilities.
The claim paints a damning picture of how the shooter used ChatGPT. It alleges the platform functioned as a “counsellor, pseudo‑therapist, trusted confidante, friend, and ally” and was “intentionally designed to foster psychological dependency between the user and ChatGPT.”
Key allegations include:
- The shooter was under 18 when they began using the service.
- Despite OpenAI’s requirement that minors obtain parental consent, the company “took no steps to implement age verification or consent procedures.”
- OpenAI suspended the shooter’s account months before the attack but did not alert Canadian law enforcement, a decision that provoked sharp political fallout.
B.C. Premier David Eby said after a virtual meeting with Sam Altman that the CEO agreed to apologize to the people of Tumbler Ridge and work with the provincial government on AI‑regulation recommendations.
None of the claims have been proven in court. OpenAI has no…
The Pentagon Deal That Split OpenAI From the Inside
The Tumbler Ridge lawsuit is unfolding against the backdrop of an internal crisis that has already cost OpenAI key talent and millions of users.
- Feb 28: CEO Sam Altman announced a deal giving the Pentagon access to OpenAI’s AI models inside secure government‑computing systems.
- The agreement came days after Anthropic CEO Dario Amodei publicly refused similar terms, saying his company could not proceed without assurances against autonomous weapons and mass domestic surveillance.
- In response, the Pentagon designated Anthropic a “supply‑chain risk”—a classification normally reserved for foreign adversaries—while Defense Secretary Pete Hegseth barred any military contractor from conducting commercial activity with the company.
Internal Reaction
| Person | Role | Reaction |
|---|---|---|
| Caitlin Kalinowski | Joined from Meta (2024) to build OpenAI’s robotics hardware division | Resigned on principle. “AI has an important role in national security, but surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” |
| Aidan McLaughlin | Research scientist | Posted on social media: “Personally don’t think this deal was worth it.” |
| Unnamed employee (to CNN) | Staff member | Said many OpenAI staff “really respect” Anthropic for walking away. |
External Reaction
- ChatGPT uninstalls spiked +295 % on the day the deal was announced.
- Anthropic’s Claude surged to #1 among free apps on the U.S. Apple App Store and remained there through the weekend.
- Protesters gathered outside OpenAI’s San Francisco headquarters calling for a “QuitGPT” movement.
Industry‑Wide Pushback
More than 30 OpenAI and Google DeepMind employees—including DeepMind chief scientist Jeff Dean—filed an amicus brief (Monday) supporting Anthropic’s lawsuit against the Defense Department.
The brief argued that the Pentagon’s actions, “if allowed to proceed,” would “undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.”
The employees signed in their personal capacity, but the spectacle of OpenAI’s own researchers rallying to a competitor’s legal defense against the same government their company just partnered with has no real precedent in the industry.
Altman’s Response
- In an internal memo later shared publicly, Altman admitted the deal “was definitely rushed” and “just looked opportunistic and sloppy.”
- The contract was revised to include explicit prohibitions against mass domestic surveillance and the use of OpenAI technology on commercially acquired data.
- Altman also said that enforcing the supply‑chain‑risk designation against Anthropic “would be very bad for our industry and our country.”
Anthropic’s Legal Position
- Court filings warn the Pentagon’s blacklisting could cost Anthropic up to $5 billion in lost business—roughly equivalent to its total revenue since commercializing its AI technology in 2023.
- The company is seeking a temporary court order to continue working with military contractors while the case proceeds.
Why OpenAI’s $15 Billion Cash Burn Makes Every User Count
Strip away the lawsuits and the politics, and OpenAI still has a math problem of its own.
- Projected cash burn: ≈ $15 B this year (up from $9 B in 2025).
- Weekly users: ~910 M; ≈ 95 % are free.
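Back-of-envelope arithmetic makes the gap concrete. The sketch below is illustrative only: it assumes the roughly 5% of weekly users who pay are all on the $20-per-month ChatGPT Plus tier, and it ignores enterprise and API revenue, which the figures above do not break out.

```python
weekly_users = 910_000_000   # ~910M weekly users (per the figures above)
paid_share = 0.05            # ~5% pay; the rest use ChatGPT for free
plus_price_per_month = 20    # assumed: every paying user is on the $20/mo Plus plan

paying_users = weekly_users * paid_share
annual_subscription_revenue = paying_users * plus_price_per_month * 12
cash_burn = 15_000_000_000   # ~$15B projected burn this year

print(f"Paying users:         {paying_users / 1e6:.1f}M")
print(f"Subscription revenue: ${annual_subscription_revenue / 1e9:.2f}B / year")
print(f"Shortfall vs. burn:   ${(cash_burn - annual_subscription_revenue) / 1e9:.2f}B")
```

Even under these generous assumptions, subscriptions cover only about $11 billion of a $15 billion burn, which is the shortfall the ads push described below is meant to close.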
Because subscriptions alone cannot bridge that gap, OpenAI is:
- Building an internal advertising infrastructure
- Partnering with ad tech firms such as Criteo—and reportedly The Trade Desk—to bring advertisers into ChatGPT.
Hiring Push
| Role | Location | Compensation (top of band) |
|---|---|---|
| Monetization infrastructure engineer | San Francisco | Up to $385 k |
| Engineering manager | San Francisco | Up to $385 k |
| Product designer (ads experience) | San Francisco | Up to $385 k |
| Senior manager, ad‑revenue accounting | San Francisco | Up to $385 k |
| Trust & safety specialist (ads product) | San Francisco | Up to $385 k |
These salaries signal a long‑term commitment to owning an ad stack rather than renting one.
Trust Risks
- Users who abandoned the app over the Pentagon deal demonstrated that loyalty to ChatGPT is thinner than its market share suggests.
- Adding commercial messages to a product already under fire for military ties and its handling of a mass shooter’s data will require OpenAI to navigate user sentiment with a precision it has not recently demonstrated.
Infrastructure Uncertainty
- Oracle and OpenAI recently scrapped plans to expand a flagship AI data center in Abilene, Texas, after negotiations stalled over financing and OpenAI’s evolving needs.
- Meta and Nvidia moved quickly to explore the site—a reminder that in the current AI arms race, any execution gap gets filled by a competitor within days.
Why Interactive Learning Is OpenAI’s Strongest Remaining Argument
Beyond the product itself, the education feature carries strategic significance for OpenAI.
- Education has always been ChatGPT’s cleanest use case—the application where the technology most obviously augments human capability rather than surveilling it, weaponizing it, or monetizing users’ attention.
- It resonates across demographics:
- Students prepping for the SAT
- Parents revisiting algebra at the kitchen table
- Adults circling back to concepts they never quite understood
The use case where ChatGPT still holds a clear lead
Google’s Gemini, Anthropic’s Claude, and xAI’s Grok are all investing in education, but none has shipped anything comparable to real‑time interactive formula visualization embedded in a conversational interface.
OpenAI acknowledged that the research landscape on how AI affects learning is still taking shape, but pointed to its own early findings on Study Mode as showing promising early signals. The company said it will continue working with educators and researchers through its NextGenAI initiative and OpenAI Learning Lab, and plans to publish findings and expand into additional subjects.
Somewhere tonight, a ninth‑grader will open ChatGPT, drag a slider, and watch a hypotenuse lengthen across her screen. The Pythagorean theorem will make sense for the first time. She will not know about the Pentagon deal, the Tumbler Ridge lawsuit, the 295 % spike in uninstalls, or the $15 billion cash burn underwriting the server that just rendered her triangle. She will only know that it worked.
For OpenAI, that may have to be enough — for now.