In early 2026, a developer reported receiving a Google Cloud bill for more than $82,000.
His normal monthly spend was about $180.
The issue was not a traditional cyberattack in the way most business owners imagine it. No dramatic ransomware note. No obvious network intrusion. No Hollywood-style breach.
The reported cause was a compromised API key.
And once that key was abused, the cost escalated fast.
That is the lesson. A compromised API key can become a financial event before anyone in leadership even understands what happened.
This is the third piece in our AI security series. The first covered Zero Data Retention. The second asked whether AI companies are building behavioral profiles the way consumer platforms do. This one is about a more immediate operational threat:
The credentials that power AI tools, cloud platforms, integrations, and automation may already be exposed. They may be sitting in public GitHub repositories. Buried in old configuration files. Stored in chatbot conversations. Present in training datasets. Embedded in development tools your team does not fully understand.
There is a good chance leadership has never asked where those keys are, who created them, what they access, or how quickly the business would know if one was compromised.
That is not a developer problem. That is a governance problem.
The Numbers Are Not Small
GitGuardian's 2026 State of Secrets Sprawl report found 28.65 million new hardcoded secrets added to public GitHub commits in 2025 — a 34% year-over-year increase and the largest single-year jump on record. GitHub's own reporting found more than 39 million secret leaks across GitHub in 2024.
And AI is making the problem worse. GitGuardian reported that AI service credential leaks increased 81% year over year, and that AI-assisted commits leaked secrets at roughly 2x the baseline rate.
That matters because modern businesses run on API-driven systems. Cloud platforms. AI tools. SaaS integrations. Automation workflows. CRM systems. Financial platforms. Backup tools. Security platforms. Client portals. Reporting systems.
All of them rely on credentials. And when those credentials leak, the blast radius depends entirely on what that key can access. A narrow read-only key may create limited exposure. A production cloud key with broad permissions can create data exposure, service disruption, and five- or six-figure billing events before anyone realizes something is wrong.
AI Did Not Create the Secrets Problem
AI did not invent hardcoded passwords. AI did not cause developers to put credentials in configuration files. That problem has existed for years.
What AI changed is the speed.
More code is being written. More integrations are being built. More automation is being created. More non-developers are experimenting with technical tools. More employees are pasting logs, scripts, and configuration files into AI assistants to get help.
The result is simple: the number of places a secret can leak has multiplied. That is the new operating reality.
The New Attack Vector: Vibe Coding
In 2025, Andrej Karpathy popularized the term vibe coding to describe a new style of software development where people describe what they want in natural language and let AI generate much of the code. The productivity gains are real. A skilled developer can build faster. A junior developer can move beyond their current skill level. A non-developer can create prototypes that would have been impossible a few years ago.
But there is a security problem hiding inside that productivity gain.
AI coding tools optimize for working code. They do not always optimize for secure code.
A developer asks an AI assistant to build an integration. The tool generates code with a placeholder API key. The developer replaces the placeholder with a real key to test. The code works. The developer commits the file. The key ends up in a repository. An automated scanner finds it.
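Here is a minimal sketch of that failure mode and the safer alternative. The service, endpoint, and variable names are hypothetical; the point is that the source file should never contain the secret.

```python
import os

import requests

# Unsafe: the placeholder gets replaced with a real key "just to test",
# the file gets committed, and the key is now in Git history.
# API_KEY = "sk-live-REPLACE_ME"  # hypothetical key format

# Safer: resolve the credential from the environment at runtime,
# so the committed file never contains the secret itself.
API_KEY = os.environ["EXAMPLE_SERVICE_API_KEY"]  # hypothetical variable name

response = requests.get(
    "https://api.example.com/v1/reports",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()
```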
That entire sequence can happen in an afternoon. Nobody intended to create a security incident. They were just trying to make the thing work. That is what makes this dangerous. The risk does not always come from reckless behavior. Sometimes it comes from speed, convenience, and weak guardrails.
The Problem Inside the Problem
There is another layer that should make every technology leader pay attention.
In early 2025, Truffle Security scanned Common Crawl — a massive public dataset used in AI training pipelines — and found approximately 12,000 live API keys and passwords baked into the training data. Their research specifically warned that LLMs trained on insecure code may inadvertently generate unsafe outputs.
That is the feedback loop. Developers publish insecure code. Large datasets ingest insecure code. AI systems learn from those patterns. AI coding assistants generate similar patterns. Developers accept the output because it works. More insecure code gets published.
The model is not malicious. The developer is not malicious. But the system is now moving faster than the governance layer around it. And that is where business risk lives.
Three Ways Keys Get Exposed in an AI Environment
1. Employees paste technical context into AI tools
When someone is debugging a problem, the instinct is to paste in the relevant context. That context might include environment files, configuration files, stack traces, error logs, connection strings, headers, tokens, and API keys.
To the employee, it looks like troubleshooting. To the business, it may be an unauthorized disclosure of credentials. This is especially risky when employees use consumer-tier AI tools without clear retention policies, logging, contractual protections, or administrative controls.
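One practical control is a redaction pass before anything leaves the machine. A rough sketch, with illustrative patterns, that masks key-shaped strings in whatever is about to be pasted:

```python
import re
import sys

# Redact values that look like credentials before a log or config snippet
# is pasted into an AI tool. Patterns are illustrative, not exhaustive.
REDACTIONS = [
    (re.compile(r"(Authorization:\s*Bearer\s+)\S+", re.I), r"\1[REDACTED]"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"sk-[A-Za-z0-9_-]{20,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"(api[_-]?key\s*[=:]\s*)\S+", re.I), r"\1[REDACTED]"),
]

text = sys.stdin.read()
for pattern, replacement in REDACTIONS:
    text = pattern.sub(replacement, text)
sys.stdout.write(text)
```

Run it as a filter before pasting: python redact.py < error.log. It will miss things, which is why it complements policy rather than replaces it.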
2. AI coding tools see more than users realize
AI-powered code editors and assistants are becoming deeply embedded in development workflows. That creates a new question: what files are being included in AI context?
Developers may assume that ignored files, local environment files, or sensitive configuration files are excluded from AI processing. Sometimes that assumption is wrong. In multiple developer communities, users have raised concerns about whether AI coding tools were including .env files in AI context despite expectations that those files would be excluded.
Whether the root cause is product behavior, configuration, or user misunderstanding, the lesson is the same: ignore files are not an AI data-loss prevention strategy. You need controls. You need policy. You need technical enforcement.
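What technical enforcement can look like, at its simplest: a script in CI or a pre-commit hook that checks whether sensitive files are at least git-ignored, and reminds the team that ignored is not the same as excluded. File names here are illustrative:

```python
import subprocess
from pathlib import Path

# Files that should never reach a repository or an AI tool's context.
# The list is illustrative; extend it for your stack.
SENSITIVE = [".env", ".env.local", "config/secrets.yml"]

for name in SENSITIVE:
    if not Path(name).exists():
        continue
    # `git check-ignore -q` exits 0 when the path is ignored by Git.
    ignored = subprocess.run(
        ["git", "check-ignore", "-q", name], capture_output=True
    ).returncode == 0
    if not ignored:
        print(f"WARNING: {name} is not in .gitignore and could be committed")
    else:
        # Ignored by Git, but still on disk and readable by any local tool,
        # including AI assistants that index the working directory.
        print(f"{name} is git-ignored, but .gitignore is not a DLP boundary")
```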
3. AI agent configuration files introduce a new secrets surface
AI agents are increasingly being connected to systems using configuration files, plugins, and protocols like the Model Context Protocol. That creates another place where secrets can leak.
GitGuardian's 2026 report highlights MCP-related exposure, including thousands of unique secrets found in MCP configuration files on public GitHub. This is not surprising. Documentation often shows examples with hardcoded keys. Developers copy the pattern, replace the placeholder with a real key, the file gets committed, and the secret is in Git history. Even if someone deletes the file later, the key remains recoverable from that history.
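A sketch of what reviewing those files can look like: walk a JSON config and flag string values that match well-known key formats. The patterns are illustrative, not exhaustive, and dedicated scanners do far more:

```python
import json
import re
import sys

# Well-known credential shapes (AWS access key IDs, OpenAI-style keys,
# GitHub personal access tokens, Google API keys). Illustrative only.
KEY_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    re.compile(r"ghp_[A-Za-z0-9]{36}"),
    re.compile(r"AIza[0-9A-Za-z_-]{35}"),
]

def flag_secrets(node, path="$"):
    """Walk parsed JSON and report values that look like live keys."""
    if isinstance(node, dict):
        for key, value in node.items():
            flag_secrets(value, f"{path}.{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            flag_secrets(value, f"{path}[{i}]")
    elif isinstance(node, str):
        if any(p.search(node) for p in KEY_PATTERNS):
            print(f"possible hardcoded secret at {path}")

with open(sys.argv[1]) as f:  # e.g. an MCP client configuration file
    flag_secrets(json.load(f))
```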
Once a real credential hits a public repository, the safest assumption is that it is compromised.
Why This Matters Even If You Are Not a Software Company
A lot of business leaders will read this and think: "We are not a software company." That thinking is increasingly out of date.
Your business may not sell software. But your business runs on software. Your MSP, your CRM, your finance platform, your cloud environment, your cybersecurity tools, your backup platform, your vendors, your integrations — all of them rely on API keys. Somewhere in that ecosystem, those keys exist. Somewhere, someone configured them. Somewhere, those keys have permissions. Somewhere, those keys create risk.
The question is whether anyone is managing that risk intentionally.
For a financial services firm, law firm, architecture firm, or professional services business, this is not just a technical issue. It is operational. It is contractual. It is regulatory. It is reputational. If a vendor exposes a key that can access your data, your clients may not care that the mistake happened two layers down the chain. They will ask why your firm did not understand the risk.
What Good Looks Like
Establish a clear employee policy. Employees cannot paste credentials, environment variables, configuration files, source code, client data, or sensitive logs into consumer-tier AI tools. "Be careful with AI" is not a policy. "Do not paste credentials or sensitive business data into unapproved AI tools" is closer.
Implement automated secret scanning. GitHub's secret scanning and push protection can identify and block exposed credentials before they reach a public repository. Tools like GitGuardian and TruffleHog provide broader coverage. The goal is simple: catch secrets before they become incidents.
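The dedicated tools are the right answer at scale, but even a minimal pre-commit hook illustrates the idea: scan staged changes for key-shaped strings and refuse the commit. A sketch with illustrative patterns:

```python
import re
import subprocess
import sys

# Pre-commit hook sketch: block commits whose added lines look like secrets.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),          # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),       # GitHub personal access token
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

# Only lines being added in the staged diff can introduce a new leak.
diff = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [
    line for line in diff.splitlines()
    if line.startswith("+")
    and not line.startswith("+++")  # skip diff file headers
    and any(p.search(line) for p in PATTERNS)
]
if hits:
    print(f"Blocked: {len(hits)} staged line(s) look like secrets.")
    sys.exit(1)  # a nonzero exit aborts the commit when run as a pre-commit hook
```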
Treat AI API keys like production credentials. Least-privilege permissions. Usage limits. Spend limits. Expiration dates. Rotation schedules. Monitoring. Alerting. Owner assignment. A key that can generate usage-based charges should have guardrails. A key that can access sensitive systems should have even stronger ones.
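At minimum, that means the business keeps an inventory. A sketch, with hypothetical keys and fields, of the metadata every credential should carry:

```python
from dataclasses import dataclass
from datetime import date

# A minimal internal inventory of API keys: who owns each key, what it can
# do, and when it must be rotated. Names and fields are illustrative.
@dataclass
class ManagedKey:
    name: str
    owner: str
    scopes: list[str]          # least privilege: list exactly what the key may do
    monthly_spend_cap: float   # enforced upstream where the provider supports caps
    rotate_by: date

KEYS = [
    ManagedKey("reporting-readonly", "data-team", ["reports:read"], 50.0, date(2026, 6, 1)),
    ManagedKey("prod-llm-gateway", "platform", ["chat:write"], 500.0, date(2026, 4, 15)),
]

for key in KEYS:
    if key.rotate_by <= date.today():
        # Wire this into alerting. An unowned or overdue key is an incident
        # waiting for a billing statement.
        print(f"ROTATE NOW: {key.name} (owner: {key.owner})")
```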
Audit AI agent and MCP configurations. Every configuration file needs review before it goes near a repository. Do not assume example documentation is production-safe. Do not assume a placeholder is harmless. Testing keys often become production keys. Temporary credentials often become permanent.
Ask vendors better questions. Do you use AI coding assistants in development? Do you scan for secrets before code is committed? Do you rotate production keys? Do you monitor AI API usage? Do you allow developers to paste client data or configuration files into AI tools? Do your contracts prohibit our data from being used in unmanaged AI systems?
The Common Thread
If you have followed this series, the pattern is clear. The first issue is data retention: what happens to your information after it goes into an AI system. The second is profiling: whether your behavior or business context is being used to build a deeper model of you. The third is credentials: whether the keys that connect your systems are leaking through AI-assisted workflows, public repositories, or development tools.
Different risks. Same root cause.
AI adoption is moving faster than governance. Not because AI is bad. Not because employees are reckless. But because organizations are adopting powerful tools before they have updated the operating model around those tools.
That gap is where incidents happen.
The firms that navigate this era well will not be the firms that ban AI. They will be the firms that build governance rails early enough to use it confidently. They will know which tools are approved, which data can go where, where their API keys live, who owns them, what those keys can access, and how fast they can revoke them when something goes wrong.
That is not bureaucracy. That is operational maturity.
And in the AI era, operational maturity is the difference between leverage and liability.