Attackers Compromise AI Security Tools at Over 90 Organizations
TL;DR
Teams using AI security tools should check them for hijacked service accounts. Researchers warn the next wave of attacks may pursue write access to firewalls, letting attackers alter defenses.
What changed
Attackers compromised AI-driven security tools deployed at more than 90 organizations, hijacking the tools' own service accounts to operate inside victim networks. Researchers warn the next wave may pursue write access to firewall and EDR policy, letting attackers disable defenses rather than just observe traffic. Affected products span SOC automation and AI-assisted detection categories.
Why it matters
AI security tools typically request broad read and sometimes write scopes across cloud accounts, identity providers, and network appliances. A single compromised vendor token can therefore unlock dozens of systems at once. For engineering and security teams, this is a concentrated blast-radius problem, not a generic supply chain issue.
What to watch for
Audit every OAuth grant and service principal issued to AI security vendors and downscope where possible. Require short-lived credentials and per-action approvals for any tool that can mutate firewall, IAM, or EDR policy. Watch your egress logs for unusual API call patterns from vendor IP ranges, since hijacked tools will look like legitimate traffic until you correlate timing and scope.
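The audit step above can be sketched as a small script that walks an exported inventory of vendor grants and flags any scope that can mutate policy. The grant records and scope names here are hypothetical placeholders; substitute the grant report from your own identity provider.

```python
# Sketch: flag AI-security-vendor grants that carry write-capable scopes.
# Scope names and the grant structure are hypothetical, assuming an
# inventory exported as a list of {vendor, scopes} records.

WRITE_MARKERS = ("write", "mutate", "admin", "manage")

def flag_overbroad(grants):
    """Return grants whose scopes include any write-capable marker."""
    flagged = []
    for grant in grants:
        risky = [s for s in grant["scopes"]
                 if any(m in s.lower() for m in WRITE_MARKERS)]
        if risky:
            flagged.append({"vendor": grant["vendor"], "risky_scopes": risky})
    return flagged

if __name__ == "__main__":
    # Hypothetical inventory of vendor service-account grants
    grants = [
        {"vendor": "soc-automation-bot",
         "scopes": ["logs.read", "firewall.policy.write"]},
        {"vendor": "ai-detect-agent",
         "scopes": ["events.read"]},
    ]
    for hit in flag_overbroad(grants):
        print(hit["vendor"], "->", hit["risky_scopes"])
```

Anything this flags is a downscoping candidate: either revoke the write scope or move it behind an approval step.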
Who this matters for
- Developers: Inventory every AI security vendor token in your org, downscope to read-only where possible, and require step-up approval for any policy-mutating action.
The bottom line
If your AI security tool has write access to your firewall, you have built a single point of catastrophic failure and called it automation. The vendors will not fix this for you because broad scopes are how their products demo well. Audit every grant this week, revoke write access where the read path is enough, and put any remaining mutate calls behind a human approval gate. You will lose a small amount of automation speed and gain a large amount of survivability.
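The human approval gate described above can be as simple as a wrapper that refuses to forward any mutating action without explicit sign-off. This is a minimal sketch; the action names and the `vendor_call` function are hypothetical stand-ins for a real vendor API client.

```python
# Sketch: put policy-mutating vendor calls behind a human approval gate.
# Action names and vendor_call are hypothetical; in practice the wrapper
# would sit in front of the real vendor API client.

class ApprovalRequired(Exception):
    """Raised when an action lacks human sign-off."""

def approval_gate(approved_actions):
    """Decorator factory: block any action not in the approved set."""
    def decorator(fn):
        def wrapper(action, *args, **kwargs):
            if action not in approved_actions:
                raise ApprovalRequired(f"{action} needs human sign-off")
            return fn(action, *args, **kwargs)
        return wrapper
    return decorator

# Read paths stay automated; write paths fail closed until approved.
@approval_gate(approved_actions={"firewall.rule.read"})
def vendor_call(action, payload=None):
    return f"executed {action}"
```

With this shape, a hijacked token can still read, but every mutate attempt surfaces as an exception your team reviews, which is exactly the trade of speed for survivability argued above.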
by Harsh Desai