Thousands of Public Google Cloud API Keys Exposed with Gemini Access After API Enablement

Published: February 28, 2026 at 04:56 AM EST

Source: The Hacker News

Google Cloud API Keys Can Authenticate to Gemini


New research has found that Google Cloud API keys—typically used as project identifiers for billing—can be abused to authenticate to sensitive Gemini endpoints and access private data.

The findings come from Truffle Security, which discovered nearly 3,000 Google API keys (identified by the prefix AIza) embedded in client‑side code to provide Google‑related services such as embedded maps.

“With a valid key, an attacker can access uploaded files, cached data, and charge LLM‑usage to your account,” security researcher Joe Leon said, adding that the keys “now also authenticate to Gemini even though they were never intended for it.”
Truffle Security blog

How the Issue Occurs

The problem arises when users enable the Gemini API (Generative Language API) on a Google Cloud project. Enabling it silently grants every existing API key in that project—including keys exposed in website JavaScript—access to Gemini endpoints, with no warning to the project owner.


This effectively allows any attacker who scrapes websites to harvest such API keys and abuse them, including for:

  • Accessing sensitive files via the /files and /cachedContents endpoints.
  • Making Gemini API calls that run up large bills for the victim.
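The exposure described above can be checked defensively against keys you own. A minimal sketch, assuming Python's standard library and the `/files` and `/cachedContents` REST paths named in the research (the helper names here are ours, not from the report):

```python
# Sketch: check whether an API key grants access to Gemini endpoints.
# Run this only against keys from your own projects. The endpoint paths
# (/v1beta/files, /v1beta/cachedContents) are the ones cited in the research.
import urllib.error
import urllib.request

BASE = "https://generativelanguage.googleapis.com/v1beta"

def probe_url(endpoint: str, key: str) -> str:
    """Build the GET URL for a Gemini endpoint using key-based auth."""
    return f"{BASE}/{endpoint}?key={key}"

def probe(endpoint: str, key: str) -> int:
    """Return the HTTP status for the key (200 means the key has access)."""
    try:
        with urllib.request.urlopen(probe_url(endpoint, key)) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 403 when the key cannot reach Gemini

if __name__ == "__main__":
    key = "AIza..."  # placeholder: substitute one of your own keys
    for endpoint in ("files", "cachedContents"):
        print(endpoint, probe(endpoint, key))
```

A 200 response here means the key is a live Gemini credential, exactly the condition the researchers warn about.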

Truffle Security also found that creating a new API key in Google Cloud defaults to “Unrestricted,” meaning it works for every enabled API in the project, Gemini included.
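Unrestricted keys like these can be triaged from the output of `gcloud services api-keys list --format=json`. A minimal sketch, assuming the resource shape of Google's API Keys v2 `Key` resource (keys carry a `restrictions.apiTargets` list; a key with no `apiTargets` works against every enabled API—verify the fields against your own output):

```python
# Sketch: flag API keys with no API restrictions, given the parsed JSON
# of `gcloud services api-keys list --format=json`. Field names are
# assumed from the API Keys v2 Key resource.

def unrestricted_keys(keys: list[dict]) -> list[str]:
    """Return display names of keys whose restrictions cover every API."""
    flagged = []
    for key in keys:
        targets = key.get("restrictions", {}).get("apiTargets", [])
        if not targets:  # no apiTargets => key usable with all enabled APIs
            flagged.append(key.get("displayName", key.get("uid", "<unnamed>")))
    return flagged

# Illustrative data in the assumed shape:
sample = [
    {"displayName": "maps-embed", "restrictions": {
        "apiTargets": [{"service": "maps-embed-backend.googleapis.com"}]}},
    {"displayName": "legacy-web-key", "restrictions": {}},
]
print(unrestricted_keys(sample))  # ['legacy-web-key']
```

Any key this flags becomes a live Gemini credential the moment the Generative Language API is enabled on its project.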

“The result: thousands of API keys that were deployed as benign billing tokens are now live Gemini credentials sitting on the public internet.”
—Joe Leon

In total, the company reported 2,863 live keys accessible on the public internet, including one found on a website associated with Google.

The disclosure comes as Quokka published a similar report, finding over 35,000 unique Google API keys embedded in its scan of 250,000 Android apps.

“Beyond potential cost abuse through automated LLM requests, organizations must also consider how AI‑enabled endpoints might interact with prompts, generated content, or connected cloud services in ways that expand the blast radius of a compromised key.”
Quokka blog


“Even if no direct customer data is accessible, the combination of inference access, quota consumption, and possible integration with broader Google Cloud resources creates a risk profile that is materially different from the original billing‑identifier model developers relied upon.”

Google’s Response

Although the behavior was initially classified as intended, Google has since stepped in to address the problem.

“We are aware of this report and have worked with the researchers to address the issue,” a Google spokesperson told The Hacker News via email. “Protecting our users’ data and infrastructure is our top priority. We have already implemented proactive measures to detect and block leaked API keys that attempt to access the Gemini API.”

It is currently unknown whether the issue has been exploited in the wild. However, a recent Reddit post claimed a "stolen" Google Cloud API key resulted in $82,314.44 in charges between February 11 and 12, 2026, up from a typical spend of about $180 per month.

We have reached out to Google for further comment and will update the story if we hear back.

Recommendations for Google Cloud Users

  1. Audit your APIs and services – Verify whether any AI‑related APIs (e.g., Gemini/Generative Language) are enabled.
  2. Check for public exposure – Ensure API keys are not present in client‑side JavaScript or checked into public repositories.
  3. Rotate exposed keys – Start with your oldest keys first, as they are most likely to have been deployed publicly under the old guidance that API keys are safe to share.
  4. Apply restrictions – When creating new keys, explicitly limit them to the required APIs and set appropriate referrer or IP restrictions.
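Step 2 above can be automated with a simple scan for the well-known Google API key pattern (the `AIza` prefix followed by 35 URL-safe characters, as matched by tools like TruffleHog). A minimal sketch:

```python
# Sketch: scan a source tree's client-side files for candidate Google
# API keys. A match is a candidate for rotation, not proof of abuse.
import re
from pathlib import Path

KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_keys(text: str) -> list[str]:
    """Return all candidate Google API keys found in a blob of text."""
    return KEY_RE.findall(text)

def scan_tree(root: str, suffixes=(".js", ".html", ".json")) -> dict:
    """Map each matching file under `root` to the keys it contains."""
    hits = {}
    for path in Path(root).rglob("*"):
        if path.suffix in suffixes and path.is_file():
            found = find_keys(path.read_text(errors="ignore"))
            if found:
                hits[str(path)] = found
    return hits

if __name__ == "__main__":
    for path, keys in scan_tree(".").items():
        print(path, keys)
```

Anything this finds in deployed JavaScript should be treated as public and rotated per step 3.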

“This is a great example of how risk is dynamic, and how APIs can be over‑permissioned after the fact.” – Truffle Security.

Tim Erlin, security strategist at Wallarm, added:

“Security testing, vulnerability scanning, and other assessments must be continuous.”

“APIs are tricky in particular because changes in their operations or the data they can access aren’t necessarily vulnerabilities, but they can directly increase risk. The adoption of AI running on these APIs, and using them, only accelerates the problem. Finding vulnerabilities isn’t really enough for APIs. Organizations have to profile behavior and data access, identifying anomalies and actively blocking malicious activity.”
