Cisco Just Proved My Point Better Than I Ever Could
You cannot make this up.
On March 23, Cisco stood on stage at the RSA Conference 2026 in San Francisco. Their headline announcement: DefenseClaw, an enterprise governance layer for AI agents that scans, enforces, and audits every skill before it runs. The pitch was slick. The demos were polished. The promise was simple: trust us to secure your AI deployment.
About a week later, reports surfaced that Cisco had allegedly been breached. Not a minor incident. A full supply chain compromise. Attackers used credentials stolen in the Trivy vulnerability-scanner supply chain attack to breach Cisco's internal development environment. They walked away with source code from over 300 GitHub repositories. Multiple AWS keys were stolen. Customer code from banks, BPOs, and US government agencies was allegedly compromised.
The stolen repositories allegedly include code for Cisco's AI Assistant and AI Defense products.
Yes, you read that right. The same company selling AI governance tools had the source code for those very tools stolen in a breach.
And with that, welcome to the most expensive case study in why fundamentals matter.
The Pattern You Cannot Ignore
This is not about beating up on Cisco.
No, this is about the industry's collective delusion that you can skip basic security hygiene and jump straight to AI governance.
You want the numbers? You know I got them! Here they are:
- The Trivy supply chain attack compromised credential management across thousands of organizations.
- TeamPCP threat actors exploited a misconfigured GitHub Actions workflow that had been sitting vulnerable since October 2025.
- The attackers stole personal access tokens, planted credential-stealing malware in official releases, and set up persistent backdoors.
- Organizations using the tool had their CI/CD credentials harvested.
- Build environments became attack vectors.
Cisco's response teams contained the breach, which involved dozens of devices, including developer and lab workstations. The company expects continued fallout from follow-on LiteLLM and Checkmarx supply chain attacks.
But here’s the point. This was not sophisticated. This was basic DevSecOps failures at scale.
What Actually Happened
Let me break down the fundamentals Cisco missed:
- Credential management. The Trivy maintainers rotated credentials after the initial February compromise. The rotation was incomplete. Attackers retained access to credentials that survived. Incomplete containment created the conditions for the next attack.
- Supply chain visibility. Cisco used Trivy for vulnerability scanning in their development environment. They did not have visibility into the security posture of their security tools. The irony burns.
- GitHub Actions hardening. The vulnerable workflow exploited by attackers had been present in the Trivy repository since October 2025. This is a documented attack pattern. The standard mitigation is to pin GitHub Actions to full commit SHAs rather than version tags, which prevents tag manipulation attacks. Cisco either did not know this or did not enforce it.
- Least privilege access. The stolen AWS keys enabled unauthorized activities across multiple Cisco AWS accounts. If your AWS key theft results in multi-account compromise, your IAM policies are too permissive.
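The pinning fix mentioned above is a one-line change in a workflow file. A sketch of the pattern (the action, SHA, and version comment below are placeholders for illustration, not references to any real release):

```yaml
# Unpinned: a mutable tag or branch that an attacker who compromises
# the action's repository can silently move to malicious code.
- uses: example-org/example-action@v1

# Pinned: an immutable full commit SHA, with the human-readable
# version kept in a comment for maintainers.
- uses: example-org/example-action@8f152de45cc393bb48ce5d89d36b731f54556e65 # v1.2.3
```

Tools like Dependabot can still bump the SHA automatically when a new release is verified, so pinning does not mean freezing your dependencies forever.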
Again, these are not advanced persistent threats requiring nation-state resources or Jedi mind-trick hacking skills. These are table stakes: basic security controls that should have been in place before anyone wrote a single line of AI governance code.
The Question Every Executive Should Ask
If Cisco, a company that builds networking infrastructure and security products for a living, cannot secure its own development environment from a supply chain attack, what makes you think your organization is ready to deploy AI agents that interact with your most sensitive systems?
The answer is uncomfortable. You are not ready.
And before you protest with "but we have a security team" or "we passed our audit," let me remind you: Cisco has security teams too, and they passed their audits. Cisco still announced an AI governance framework at the premier security conference. And none of that stopped them from having their source code stolen because they could not execute basic credential rotation correctly.
The Math Is Simple and The Truth Hurts
Poor security fundamentals multiplied by AI complexity equals exponential risk. I said this on stage at the AI Summit New York last December and a CISO on the panel turned around and called me a doomer, and then told the audience, “We’re winning!”
Since we are “winning,” remember:
- You cannot patch your way out of organizational dysfunction.
- You cannot buy a tool that fixes broken processes.
- You cannot deploy AI governance on top of an infrastructure you do not understand how to secure properly in the first place.
These are the hard truths that no consulting firm wants you to hear because they do not sell well. Can you imagine? “Hey customer, before you chase AI transformation, you need to master the basics.” Yeah, that will land as soft as concrete.
So, what are the basics? As a primer, you could start here:
- Credential management that actually works. Not a rotation policy that exists in a document. Actual, enforced, complete rotation when a breach occurs. Monitoring that detects when credentials are used from unexpected locations or contexts.
- Supply chain security posture management. If you are using open-source tools in your CI/CD pipeline, and you do not have automated monitoring for compromises in those tools, you have a blind spot the size of your entire software supply chain.
- IAM policies that are built with least privilege. If a single stolen key can access multiple AWS accounts, your trust boundaries are too wide. Period.
- Deprecate your provisioned keys. If you use AWS, you should already be moving toward IAM Identity Center and OIDC federation, and away from long-lived, self-rolled access keys. This is why.
- GitHub Actions security. Pin to commit SHAs. Restrict pull_request_target workflows. Audit who has write access to your workflows. This is documented in every security guide. It is not optional.
- Developer environment hardening. Your developers are running code that pulls from external repositories. That code has access to secrets. If you do not have controls around what gets executed in those environments, you are one malicious package away from a breach.
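To make the least-privilege item in the list above concrete, here is a hedged sketch of what a narrowly scoped IAM policy statement looks like. The bucket name and `Sid` are illustrative placeholders; the point is the absence of `"Resource": "*"`, so a stolen key grants access to one artifact store in one account, not your whole cloud footprint:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ScopedBuildArtifactAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-build-artifacts/*"
    }
  ]
}
```

If the blast radius of any single credential is one bucket, a key theft is an incident. If it is every account the key can reach, a key theft is a headline.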
What Cisco Got Right
Cisco didn’t do everything wrong; they got a few things right. They contained the breach. Their Unified Intelligence Center, CSIRT, and EOC teams responded. And they were transparent about expecting continued fallout from related supply chain attacks. Now, this is the correct response to a security incident.
What makes me scratch my head and say “wow” is the timing. I’m not going to say it’s wrong, because maybe that works for them, but the timing is bold. How do you announce an AI governance tool at a security conference when your own house is not in order? How can you, with a straight face, sell enterprise customers on trusting your security products when you have just demonstrated that you have not mastered the fundamentals those products are supposed to enforce?
The Uncomfortable Reality for Buyers
You are making purchasing decisions based on vendor promises. Cisco promised to secure your AI agents. The evidence suggests they cannot secure their own development environment from a well-known attack pattern.
And this isn’t isolated to Cisco. This is an industry-wide pattern. Vendors are racing to ship AI governance, AI security, AI monitoring, AI everything. The fundamentals are an afterthought.
Case in point, two recent incidents:
1. Lloyds Banking Group, a UK retail and commercial financial services giant, rolled out a faulty software update on March 12 that exposed transaction details of 447,936 mobile banking customers to other users. The defect was in the design of the code used to update the application programming interface. Users accessing their transaction lists at the same time could see each other's account details, payment references, and National Insurance numbers. The update went live at 03:28. The bank did not catch the error until customers reported it at 06:20. This is a failure of basic update testing and rollout procedures at a financial institution that handles billions in customer deposits.
2. Anthropic, you know, the company building Claude AI, accidentally leaked the entire source code for Claude Code on March 31. An employee failed to exclude source map files from their npm package. The result: 512,000 lines of TypeScript code across 1,906 files became publicly accessible. The irony burns here, too. The leaked code includes a feature called "Undercover Mode" specifically designed to prevent Anthropic's internal information from leaking when employees use Claude Code on public repositories. They built a system to stop their AI from accidentally revealing internal codenames. Then they shipped the entire source code in a debug file because someone missed a basic build configuration step.
Your job as a buyer is to ask the uncomfortable questions:
1. Show me your credential rotation process after a security incident. Not the policy. The process. How do you verify completeness?
2. Show me your supply chain security controls. What tools are you using? How do you detect compromises? How fast can you respond?
3. Show me your IAM audit trail. If one of your developers' credentials gets stolen, what can an attacker access? How would you know? How long would it take you to revoke access?
4. Show me your GitHub Actions security posture. Are workflows pinned to commit SHAs? Who has write access? How do you audit changes?
If the vendor cannot answer these questions with specifics, you are buying promises, not protection.
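Question 4 is one a buyer can partially verify, and a vendor can self-audit, with a few lines of code. A minimal sketch, under my own assumptions (the regex and the "full 40-character SHA" rule are mine, not any official tooling), that flags `uses:` references in a workflow file that are not pinned to a commit SHA:

```python
import re

# A 'uses:' reference counts as pinned only when it targets a full
# 40-character commit SHA; tags and branch names are mutable and can
# be repointed by whoever controls the action's repository.
USES_RE = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list:
    """Return 'action@ref' strings that are not pinned to a commit SHA."""
    findings = []
    for match in USES_RE.finditer(workflow_text):
        action, ref = match.groups()
        if not SHA_RE.match(ref):
            findings.append(f"{action}@{ref}")
    return findings

if __name__ == "__main__":
    # The action names and SHA below are illustrative placeholders.
    sample = """
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@8f152de45cc393bb48ce5d89d36b731f54556e65
    """
    print(unpinned_actions(sample))  # prints ['actions/checkout@v4']
```

Run it across `.github/workflows/` in every repository your CI touches. An empty result does not prove you are safe; a non-empty one proves you have work to do.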
What You Should Do Right Now
Stop buying AI governance tools from vendors who cannot demonstrate they have mastered basic security hygiene. This is not optional. This is existential.
Audit your own fundamentals before you deploy a single AI agent in production. Can you rotate credentials completely? Can you detect supply chain compromises in your dependencies? Can you enforce least privilege in your cloud accounts? Can you secure your GitHub Actions workflows?
If the answer to any of those questions is "we think so" or "it is on the roadmap," you ain’t ready for AI. You are ready for the same headline Cisco is dealing with right now.
Build a culture where security fundamentals are non-negotiable. Not compliance checkbox exercises. Not policies that sit in SharePoint. Actual, enforced, measured controls that get tested under pressure. And this can’t be vibe coded.
Treat vendor security claims with healthy skepticism. Cisco is a multi-billion-dollar networking and security company with thousands of engineers, and they still got breached because of incomplete credential rotation. And trust me, your startup vendor with 50 employees is not doing better.
The Only Path Forward
The gap between AI promises and AI reality is not a technology problem. It is an organizational readiness problem.
Organizations that succeed with AI will not be the ones with the biggest budgets or the most advanced models. They will be the ones with the discipline to master fundamentals before adding complexity.
Cisco just gave you the most expensive case study in what happens when you skip that step. You can learn from it, or you can repeat it.
Your investors might not care which path you choose. Your customers will, and the regulators definitely will.
Here’s the thing: the fundamentals aren’t sexy. They don’t make slick conference demos, and they won’t get you featured in TechCrunch. But, in the end, they are the only thing standing between your AI deployment and a catastrophic breach that puts your organization on the front page for all the wrong reasons.
Cisco learned this the hard way. You do not have to.
Fix your fundamentals first. Then talk to me about implementing AI governance.
Until then, every AI tool you deploy is just adding velocity to your existing security dysfunction. And velocity without control does not end well.
Ask Cisco.
