The comprehension gap does not affect attackers and defenders equally. Attackers operate empirically — probing systems from the outside, looking for gaps they do not need to understand the whole system to exploit. They need only a single path through it. In a system whose operators cannot enumerate its complete state, that path is always available. Defenders operate theoretically — building a mental model of what might go wrong, based on what has gone wrong before in systems similar but not identical to the one being defended. Novel attacks fall outside the model by definition.
AI amplifies both sides of this asymmetry while imposing regulatory overhead only on defenders. Attackers deploying AI face no compliance requirements, no audit obligations, no duty to document training data or explain model decisions. The Cyberspace Solarium Commission found "no clear unity of effort or theory of victory driving the federal government's approach to protecting and securing cyberspace." Defenders, by contrast, operate under growing regulatory scrutiny, with each new framework and mandate adding process and review overhead that attackers do not bear.
The scale of the vulnerability environment is quantifiable: more than 50,000 CVEs were published in 2025, roughly 130 per day, and CISA added 244 entries to its Known Exploited Vulnerabilities (KEV) catalog, a 28% increase over the prior year. IBM's X-Force Threat Intelligence Index 2026 documented a nearly fourfold increase in large supply chain or third-party compromises since 2020. AI-generated phishing rose from 4% of detected phishing attempts to 56% between December 2024 and early 2026, a fourteenfold surge in roughly fourteen months.
In February 2021, an operator at the Oldsmar, Florida water treatment plant watched his cursor move across the screen, raising sodium hydroxide levels to 100 times the safe concentration. He reversed the change manually. He did not know TeamViewer was running. The plant was running Windows 7. No one had inventoried what remote access software was active on the system. The attack required only that the operator not know what was running on the system he was responsible for, and he did not, because the system had accumulated over years, through many hands, with no single person maintaining a complete map.
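The missing inventory is the operational core of that failure. A minimal sketch of the kind of check no one ran, in Python: the tool names, the process list, and the `flag_unapproved` helper are all illustrative assumptions, not details from the incident report. A real audit would pull live process data (e.g. from `tasklist` on Windows) and compare it against a maintained approved-software list rather than hard-coded examples.

```python
# Illustrative only: a toy inventory check for remote-access software.
# The tool names below are a hypothetical watchlist, not a complete one.
REMOTE_ACCESS_TOOLS = {"teamviewer", "anydesk", "vncserver", "mstsc"}

def flag_unapproved(processes, approved):
    """Return remote-access tools found running but absent from the approved list."""
    running = {p.lower() for p in processes}
    return sorted((running & REMOTE_ACCESS_TOOLS) - {a.lower() for a in approved})

# TeamViewer is running, but the approved-software list does not cover it.
print(flag_unapproved(["explorer.exe", "TeamViewer", "scada_hmi"], approved=[]))
# → ['teamviewer']
```

The point is not the ten lines of code but the precondition they expose: a check like this requires someone to hold the complete map the Oldsmar plant lacked.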
In November 2025, the Anthropic Threat Intelligence Team disrupted the first documented large-scale autonomous AI cyberattack — an AI-orchestrated campaign that, after bypassing safety filters through social engineering, executed at 80–90% autonomy against roughly 30 targets, issuing thousands of requests, often several per second. The attack ran at machine speed. Detection relied on account monitoring rather than the defensive AI systems defenders typically maintain. The Google Threat Intelligence Group documented simultaneous nation-state AI exploitation by actors from Russia, China, Iran, and North Korea — including PROMPTFLUX, the first malware to query a live AI API during execution, and PROMPTSTEAL, an APT28 campaign running live operations against Ukraine.