There’s a scene in The Sting where Henry Gondorff explains the difference between a short con and a long con. A short con is fast. You bump someone, lift their wallet, disappear. Small take, low risk, over in seconds. A long con is an investment. You build a fake storefront. You hire actors. You construct an entire alternate reality so convincing that the mark hands you everything willingly. The payoff is enormous, but it takes weeks, costs real money, and requires a team of specialists working in coordination.

For decades, cyberattacks followed the same economics.

A smash-and-grab (phishing campaign, credential stuffing, opportunistic ransomware) was fast and cheap but limited in scope. You’d hit a lot of targets, most would bounce, and you’d walk away with whatever you could grab quickly. An advanced persistent threat was the long con. Nation-state actors would spend months on reconnaissance, develop custom exploits, establish footholds, move laterally through networks with patience and precision, and extract exactly what they came for. The payoff was strategic intelligence, intellectual property, access to critical infrastructure. But it required teams of skilled operators, months of elapsed time, and significant operational investment.

That tradeoff no longer exists.

AI has collapsed the economic distance between the short con and the long con. Adversaries can now run targeted, multi-stage, adaptive intrusions at short-con speed and cost. Most security organizations haven’t internalized what this means for how we operate.


The Speed Compression

The data is unambiguous and accelerating.

CrowdStrike’s 2026 Global Threat Report, released two weeks ago and drawing on frontline intelligence from tracking over 280 named adversaries, reports that the average eCrime breakout time dropped to 29 minutes in 2025. That’s the window between an attacker’s initial access and their first lateral movement onto another system, down 65% from the prior year. The fastest observed breakout took 27 seconds. In one intrusion, data exfiltration began within four minutes of initial access.

Twenty-seven seconds. By any traditional definition, that’s a short con timeline. But the sophistication of what happens in those seconds — credential theft, privilege escalation, lateral movement, evasion of detection — is exactly what used to require weeks of patient human operation.

CrowdStrike also observed an 89% year-over-year increase in attacks from AI-enabled adversaries, and 82% of their detections in 2025 were malware-free. The adversaries aren’t breaking in anymore. They’re logging in, using valid credentials, trusted identity flows, and approved SaaS integrations to move through environments. The attack surface isn’t a wall to breach. It’s a door to walk through, and AI is helping them find every unlocked one faster than any human team could.

The Cost Collapse

Speed is only half the story. The other half is what it costs to mount these operations.

Researchers at Harvard’s Berkman Klein Center, including Bruce Schneier, found that LLMs reduce the cost of phishing campaigns by more than 95% while achieving equal or greater success rates. IBM’s security researchers demonstrated that AI could construct a sophisticated phishing campaign in five minutes using five prompts. The same campaign took their human security experts sixteen hours to build by hand.

Sixteen hours of expert labor compressed to five minutes. A 95% cost reduction. That’s a structural collapse in the cost of attack development.
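The arithmetic is worth making concrete. A back-of-envelope sketch using the figures from the IBM experiment above; the hourly rate is an assumption for illustration only, and the comparison covers labor cost alone:

```python
# Back-of-envelope comparison of phishing-campaign development cost,
# using the figures from the IBM experiment: 16 hours of expert labor
# by hand vs. 5 minutes of prompting. The hourly rate is an assumed
# fully loaded cost, purely for illustration.
EXPERT_HOURLY_RATE = 150.0          # assumption, USD/hour

human_hours = 16.0                  # hand-built campaign
ai_hours = 5.0 / 60.0               # five minutes, five prompts

human_cost = human_hours * EXPERT_HOURLY_RATE
ai_cost = ai_hours * EXPERT_HOURLY_RATE

reduction = 1 - ai_cost / human_cost
print(f"human-built: ${human_cost:,.0f}")
print(f"AI-built:    ${ai_cost:,.2f}")
print(f"cost reduction: {reduction:.1%}")   # on labor alone, ~99.5%
```

On labor alone, the reduction exceeds the 95% figure the Harvard researchers measured for full campaigns, which also include infrastructure and targeting costs that AI compresses less dramatically.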

James Wickett, CEO of DryRun Security, put it plainly in a SecurityWeek piece from last month: the cost to go from vulnerability discovery to working exploit used to be weeks and thousands of dollars. Now it’s near zero. The consequence isn’t more spray-and-pray. It’s micro-targeted attacks built for a single system, a single company, maybe even a single developer.

The long con — individualized, researched, contextually convincing — at commodity prices.

The Scale Multiplier

These cheaper, faster attacks don’t happen one at a time, either.

AI lets adversaries parallelize. A nation-state group that previously needed a team of specialists to run a targeted intrusion against one organization can now run coordinated operations against dozens simultaneously, each attack customized, each probing for different weaknesses, each adapting in real time.

We saw exactly this in November 2025 when Anthropic disclosed what they believe is the first documented AI-orchestrated cyber espionage campaign. A Chinese state-sponsored group, designated GTG-1002, used Claude Code to execute 80 to 90 percent of tactical operations independently, at request rates that would be physically impossible for human operators. The AI ran the full attack lifecycle autonomously: vulnerability discovery, exploitation, lateral movement, credential harvesting, data extraction, intelligence categorization. Human operators set strategy and intervened at key escalation points. The rest was delegated to the machine.

The operation targeted roughly 30 entities across technology, finance, chemical manufacturing, and government. Simultaneously.

Thirty long cons running at machine speed, orchestrated by a handful of human operators who set the strategy and let the AI execute. The economics that used to force adversaries to choose their targets carefully no longer constrain them.


What This Means for Defenders

Every security program I’ve ever built, and every one I’ve evaluated, audited, or competed against, is predicated on a set of economic assumptions about how attacks work. We staff SOCs assuming that alert triage requires human judgment at every stage. We design incident response plans assuming hours or days between initial access and significant damage. We prioritize patching assuming that exploit development takes time and targets will be limited. We build detection capabilities assuming that adversary behavior follows human patterns: work hours, sequential operations, occasional mistakes.

Every one of those assumptions is breaking.

When breakout time is 27 seconds, your incident response plan that assumes a golden hour of detection and containment is fiction. When phishing costs drop 95% and quality goes up, your training program that teaches employees to spot grammatical errors is fighting the last war. When adversaries can run thirty coordinated operations simultaneously, your SOC that triages alerts serially is structurally outmatched. Not because your people aren’t good enough, but because the math doesn’t work anymore.

The existing model was built on economic assumptions that no longer hold. Improving it incrementally is like reinforcing the Maginot Line. The investment isn’t wrong in theory, but the adversary has already changed the axis of attack.


The Imperative

Anthropic’s security team demonstrated what an alternative looks like. At BSides San Francisco in April 2025, Jackie Bow and Peter Sanford presented “AI’s Bitter Lesson for SOCs: Let Machines Be Machines.” Their CISO, Jason Clinton, had announced at RSA 2025 that Anthropic no longer operates a traditional security operations center. No L1 or L2 team. No human analysts triaging alerts.

They built an autonomous SOC powered by Claude. It handles alert ingestion, triage, investigation, and response. Investigation time dropped from forty minutes to three, a 90% reduction. The system runs the foundation model without modification, embedding security knowledge through context and prompts rather than fine-tuning. Model upgrades don’t break the security logic. It’s a sustainable architecture, not a science project.
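None of that internal design is public as code, but the architectural idea, security knowledge carried in context and prompts rather than in model weights, can be sketched. Everything below is hypothetical: `call_model` is a stand-in for a real LLM API call, stubbed with deterministic logic so the sketch runs offline:

```python
# Sketch of prompt-based triage: the runbook lives in the prompt
# context, not in fine-tuned weights, so swapping in a newer
# foundation model does not break the security logic.

TRIAGE_CONTEXT = """You are a SOC triage analyst.
Runbook: treat credential use from a new geography on an admin
account as high severity; routine scheduled-task noise as low.
Reply with exactly one word: ESCALATE or CLOSE."""

def call_model(system_prompt: str, alert: str) -> str:
    # Hypothetical stand-in for a frontier-model call. Stubbed with
    # deterministic keyword logic so this sketch runs offline; the
    # real system would send both strings to an unmodified model.
    risky = "admin" in alert and "new geography" in alert
    return "ESCALATE" if risky else "CLOSE"

def triage(alert: str) -> str:
    # Upgrading the underlying model changes only call_model's
    # backend, never this pipeline or the runbook context.
    return call_model(TRIAGE_CONTEXT, alert)

print(triage("admin login from new geography, 03:12 UTC"))  # ESCALATE
print(triage("scheduled task ran backup.ps1 as usual"))     # CLOSE
```

The design choice worth noticing is the separation: the pipeline and the runbook are stable assets, and the model is a swappable component underneath them.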

When your adversary can move from initial access to data exfiltration in four minutes, your forty-minute average investigation time is a gap that kills you. Deploy AI to close that gap. Not to save money, but because human response time is no longer sufficient for the threat we face.
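A toy model makes the gap explicit. The 29-minute average breakout and the 40-minute versus 3-minute investigation times come from the figures cited above; the 5-minute detection and response latencies bracketing the investigation step are assumptions:

```python
# Toy containment check: containment works only if the defender's
# end-to-end response finishes inside the attacker's breakout window.
# Breakout and investigation times are from the figures cited in the
# text; detection and response latencies are assumptions.

def contains(defender_minutes: float, attacker_minutes: float) -> bool:
    """True if the defender finishes before the attacker breaks out."""
    return defender_minutes < attacker_minutes

DETECT = 5.0     # assumed alert latency, minutes
RESPOND = 5.0    # assumed containment-action latency, minutes

breakout = 29.0                        # average eCrime breakout time

human_soc = DETECT + 40.0 + RESPOND    # 50 min end-to-end
ai_soc = DETECT + 3.0 + RESPOND        # 13 min end-to-end

print(contains(human_soc, breakout))   # False: 50 min loses to 29
print(contains(ai_soc, breakout))      # True: 13 min beats the average
```

Even the automated pipeline only beats the average breakout, not the 27-second or 4-minute outliers, which is why the recovery and detection-engineering investments described below still matter.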

Then take the capacity you’ve freed and reinvest it in security. Not in the budget. In the mission: rapid recovery architecture, detection engineering that accounts for AI-speed adversaries, the harder problems that the new economics are creating faster than your current team can address them.

Security isn’t getting cheaper. It’s getting harder. The economics changed on both sides of the equation. Adversaries invest less to achieve more. That doesn’t mean defenders get to invest less too. It means the same investment buys less protection than it used to. The organizations that treat AI automation as a savings opportunity will discover they’ve cut costs in the middle of an arms race. The ones that treat it as resource reallocation — freeing people from fighting the last war so they can adapt to the next one — will be the ones that keep pace.

Google Cloud’s Cybersecurity Forecast 2026 describes an emerging “Agentic SOC” where security analysts evolve from reactive alert management to strategic orchestration of AI systems. IBM’s data shows that organizations using security AI and automation experience roughly $1.8 million lower average breach costs and detect threats 60% faster. The direction is clear, but most organizations aren’t there, and the gap between early movers and everyone else is widening at exactly the wrong moment.

After disclosing the GTG-1002 campaign, Anthropic’s own recommendation was direct: security teams should experiment with applying AI for defense — SOC automation, threat detection, vulnerability assessment, incident response — and build experience with what works. That recommendation was born from watching their own product get weaponized against thirty organizations simultaneously.


Staying at the Frontier Isn’t Optional

This is where I think most security leaders are getting it wrong.

Security teams aren’t ignoring AI. Most are deeply engaged with it. But the engagement is almost entirely defensive governance: how do we secure AI use across the business, how do we write acceptable use policies, how do we manage non-human identities. AI became a new and difficult business-as-usual (BAU) challenge overnight, and teams are working hard to meet it. Even the AI products marketed specifically at security and SOC teams are mostly runbook automation or identity management for agents. Useful work, but work that accepts the current economic model and tries to make it slightly more efficient.

Almost nobody is using AI to change the economics of executing security itself. That’s the gap. The adversary isn’t using AI to do the same attacks slightly faster. They’re using it to fundamentally restructure what’s possible. The defensive response can’t be incremental either.

If you don’t understand what a frontier model can actually do — the real capabilities, the speed, the reasoning — you cannot understand what your adversaries can do with it. And if you can’t understand what they can do, you can’t design defenses that account for it. You’re building security architecture against a threat model that’s already obsolete.

This is why I invest significant personal time in frontier AI. The GTG-1002 operation showed me exactly what a motivated adversary looks like when they hand 80% of the tactical work to a frontier model. I need to understand what that model can do, its capabilities and its blind spots, with the same depth that I understand the MITRE ATT&CK framework. The model is the adversary’s toolkit now. Treating it as someone else’s domain to understand is a professional failure.

My team operates the same way. We don’t treat AI investment as separate from security operations. It is security operations. It’s the part that determines whether our capabilities evolve at the same rate as the threats we face. Every hour we spend building fluency with frontier AI is an hour we spend understanding the adversary’s current and near-future capabilities. That’s not a distraction from the mission. It’s the mission.


The Choice

The BAU security model was built on assumptions about human-speed adversaries, serial attack operations, and the economics of expensive exploit development. None of those assumptions hold anymore. The model wasn’t wrong. The world it was designed for no longer exists.

Organizations that respond by improving BAU incrementally will discover that incremental improvement can’t close an exponential gap. The adversary isn’t getting 10% faster each year. They’re getting orders of magnitude faster, cheaper, and more parallel. You can’t outrun that curve with the same legs.

I don’t know exactly what the right defensive architecture looks like five years from now. Nobody does. But I know the current one is predicated on assumptions that have already broken, and I know that the organizations that start building what comes next — right now, imperfectly, learning as they go — will be the ones still standing when the economics fully play out.