How AI Is Rewriting the Economics of Security

Security Is No Longer a Probability Game — AI Is Rewriting the Economics of Attack and Defense

The Underlying Assumption of Traditional Security Is Cracking

Traditional security has been built on an economic asymmetry: the cost of an attack outweighs the potential reward, so systems remain safe.
But AI is rapidly lowering the cost of finding and exploiting vulnerabilities. Systems once considered "secure enough" are becoming soft targets.
To establish true digital ownership in this new era, we must rebuild trust from the hardware level upward — with hardware-backed security and cryptography as the foundation.


Security, at Its Core, Is an Economic Game

I often say there's no such thing as "100% secure." Some hear that as pessimism. In fact, it's the opposite — it's the most honest starting point for building real security.

Real security advantage comes from creating asymmetry: making attack much harder and more expensive than defense.
If compromising a system costs an attacker far more than the value they could extract, you win.
Or if retrieving a secret takes so long that by the time they get it, it's already obsolete — that's also a win for the defender.

For decades, this asymmetry has been the foundation of security. Encryption, access control, patching, hardening — all of it serves the same goal: making the attacker's job economically irrational.

That foundation is now cracking.


How Attackers Think: It's All About Opportunity Cost

To understand what's changing, you need to see how attackers think. They are rational economic actors, driven by one simple formula: which target offers the lowest cost and the highest return?

Here's the nuance: when you start attacking a system, you don't know exactly how much effort it will take. You might spend weeks and hit a dead end — the cost is uncertain.
But the reward side is remarkably clear. There's an established market, with real brokers, for vulnerabilities and exploits. An attacker knows roughly what a working exploit is worth before investing a single hour.

This information asymmetry has historically benefited defenders (yes, the headlines tell a different story, but I mean defenders who invested seriously in security; the rest were already losing).
Because the attacker doesn't know if a target will take one week or one year, that uncertainty — multiplied across all targets — has kept the security equilibrium barely intact.
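As a back-of-the-envelope sketch of this calculus (all figures below are hypothetical illustrations, not market data), the attacker's decision reduces to expected profit: a reward that the exploit market makes roughly known, minus a research cost that is uncertain. AI's effect is to compress that cost distribution downward, flipping previously irrational targets into attractive ones.

```python
# Toy expected-value model of an attacker's target selection.
# All dollar figures are hypothetical, chosen only to show the flip.

def expected_profit(reward: float, cost_estimates: list[float]) -> float:
    """Reward is roughly known (the exploit's market price); cost is
    uncertain, so average over plausible research-cost outcomes."""
    expected_cost = sum(cost_estimates) / len(cost_estimates)
    return reward - expected_cost

reward = 2_000_000  # what a broker would plausibly pay for the exploit

# Manual research on a hardened target: cost ranges from weeks of work
# to a year-long dead end. On average, the attack is irrational.
manual_costs = [500_000, 2_000_000, 5_000_000]
print(expected_profit(reward, manual_costs))   # -500000.0

# AI-assisted research compresses the same distribution downward.
ai_costs = [30_000, 90_000, 180_000]
print(expected_profit(reward, ai_costs))       # 1900000.0
```

Nothing about the target changed; only the cost side of the equation did, and that alone moves it from "not worth attacking" to "obvious target."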

Until now.


The Vulnerability Market: From Hidden Deals to a Global Industry

The security ecosystem has matured into a full-fledged market. On one side, researchers who find vulnerabilities. On the other, buyers and brokers.

Buyers fall into two broad categories: vendors who patch their products, and, far more often, nation-states, intelligence agencies, and criminal organizations willing to pay top dollar for weaponizable exploits.
Brokers like Zerodium and Crowdfense operate publicly, with million-dollar bounties, offering negotiation, escrow, and validation.

Prices tell the story. A zero-click iOS exploit chain now fetches $5–7 million; Android up to $5 million; Chrome and Safari chains sell for $3–3.5 million — a 44% annual increase.
The broader commercial surveillance software market is valued at over $16 billion, with over 435 entities across 42 countries. This is a structured global industry.

Ironically, these high prices once maintained a balance: they signaled that breaking into well-defended systems was still hard, rare, and expensive.

That signal is about to become obsolete.


AI Is Collapsing the Cost of Attack

Now, AI is breaking that balance at its core. The cost of vulnerability research — and even more critically, exploit development — is plummeting.

What once required months of expert work (reverse engineering binaries, large-scale fuzzing, crafting reliable exploits) is becoming faster, cheaper, and more accessible.
AI tools can analyze massive codebases in hours, spotting vulnerability patterns that would take a skilled human weeks.

Here's a way to think about it. In security, "secure" means zero exploitable vulnerabilities — a perfect 20/20. "Insecure" means having even one.
For years, most well-defended systems were, optimistically, at 19/20. Not perfect, but the remaining vulnerability was so expensive to find and exploit that it didn't matter in practice.

The real defense was the cost of finding that last flaw. AI is removing that cost.
Your 19/20 system, once considered effectively secure because exploiting it required months of expert work and a seven-figure budget, is now a zero. The vulnerability is still there, and finding and exploiting it is now cheap.

This is the shift. Services once deemed "secure enough" because attacks were uneconomical are now exposed. The economic equation that protected them no longer holds.


The Evidence Is Already Here

This isn't speculation. The data is already there.

The ITRC's 2025 Annual Data Breach Report shows a record 3,322 data compromises in 2025 — a 79% increase over five years. In 2024, over 16.8 billion records were exposed globally. This isn't a fluctuation; it's a structural surge.
Cybercriminal activities once limited to sophisticated groups are being commoditized. Attack toolkits are cheaper, more automated, and require less skill.

Examples of AI-assisted attacks at scale are already abundant. In early 2026, a solo operator used Anthropic's Claude to breach multiple Mexican government agencies, exploiting at least 20 vulnerabilities and exfiltrating 150GB of sensitive data — tax records, voter registration files, government credentials.
No custom malware, no C2 infrastructure, no nation-state backing. Just a commercial AI subscription and persistence. Operations that would normally take a red team 2–4 weeks were completed in under 72 hours.

Anthropic itself disclosed a Chinese state-sponsored campaign that used Claude Code to target roughly 30 global entities — tech companies, financial institutions, government agencies — with AI autonomously executing 80–90% of tactical operations.

Perhaps the most alarming signal: general-purpose OS exploits, once the exclusive arsenal of top-tier intelligence agencies, are now being used at scale by criminal organizations. Capabilities that cost tens of millions of dollars to develop a few years ago are trickling down the value chain at an accelerating rate.


The Attack Surface Is Also Exploding

At the same time, the attack surface is exploding. AI lets anyone generate software at near-zero cost, without deep technical expertise, and certainly without secure development knowledge.
Code volume is skyrocketing — and a huge proportion of it is written or prompted by people who have no idea how to build secure software. Every line of that code is a potential entry point.

A world of software abundance is also a world of vulnerability abundance.


Old Doctrines Won't Save Us

For years, security has relied on defense in depth, hardening, obfuscation, security by obscurity. These worked when bypassing them was expensive enough.

AI makes them insufficient. Obfuscation? AI can deobfuscate code in seconds. Security by obscurity? AI can map hidden logic and explore code paths at a scale and speed no human team could match. Hardening helps, but when probing costs drop by orders of magnitude, it becomes a speed bump, not a wall.

The bar for what counts as "real security" is about to rise dramatically. To rebuild the defense-attack asymmetry, we need fundamentally different approaches.


The Path Forward: Cryptography as the Foundation

The good news is that the tools to rebuild asymmetry already exist. AI is making them more tractable to deploy at scale. The common thread is cryptography — not as a bolt-on feature, but as architectural foundation.

1. Secure Enclaves: Hardware-Rooted Trust
Hardware-backed secure elements will become mandatory for critical applications. They provide two things software alone never can: confidentiality of secrets and integrity of code execution — even when the surrounding system is compromised.
You can't prompt-inject a silicon chip. You can't social-engineer a hardware gate. When trust is anchored in physics, the attacker faces a fundamentally different battle.
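The core contract of an enclave can be sketched in plain Python, purely as an illustration of the interface shape: secrets live inside the boundary, and only operations cross it. (A real enclave, such as a TPM, an HSM, or Apple's Secure Enclave, enforces this in silicon; a Python class obviously does not.)

```python
# Conceptual simulation of the enclave contract: the key is generated
# inside and never exported; callers get operations, never the secret.
# Hardware enforces this boundary physically; this class only mimics it.
import hmac, hashlib, secrets

class ToyEnclave:
    def __init__(self):
        self.__key = secrets.token_bytes(32)  # born inside, dies inside

    def sign(self, message: bytes) -> bytes:
        # The operation crosses the boundary; the key does not.
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(self.sign(message), tag)

enclave = ToyEnclave()
tag = enclave.sign(b"transfer 100 to alice")
assert enclave.verify(b"transfer 100 to alice", tag)
assert not enclave.verify(b"transfer 999 to mallory", tag)
# Note there is deliberately no get_key() method:
# the API surface *is* the security model.
```

The design point is that the attacker's only options are the operations you chose to expose, which is a far smaller surface than "all the memory of a compromised host."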

2. Zero-Knowledge Proofs: Execution Integrity Without Trust Assumptions
ZK technology lets you prove that a specific computation was performed correctly, with specific inputs and outputs, without revealing internals and without trusting the executor.
For systems like financial settlement, smart contracts, and critical infrastructure, this offers unprecedented assurance.
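To give a flavor of the idea, here is the classic Schnorr identification protocol in toy form: the prover convinces the verifier that it knows the discrete log x of y = g^x mod p, without ever revealing x. The parameters below are deliberately tiny and insecure, chosen only to make the protocol's shape readable; real deployments use large groups and non-interactive variants.

```python
# Toy Schnorr zero-knowledge proof of knowledge of a discrete log.
# Tiny, insecure parameters for readability only.
import secrets

p, q, g = 23, 11, 4        # g generates the order-q subgroup of Z_p*
x = 7                      # prover's secret
y = pow(g, x, p)           # public key: y = g^x mod p

# Commit: prover picks random r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Challenge: verifier picks a random c.
c = secrets.randbelow(q)

# Response: prover sends s = r + c*x mod q.
# s alone reveals nothing about x, because r is random.
s = (r + c * x) % q

# Verify: g^s == t * y^c (mod p), i.e. g^(r + cx) == g^r * g^(xc).
# This holds only if the prover actually knows x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; x was never revealed")
```

The verifier learns that the statement is true and nothing else, which is exactly the property that makes ZK useful when the executor itself cannot be trusted.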

3. Formal Verification: Provable Code
Mathematically proving that code behaves exactly as specified, for all possible inputs and states. For real-world systems, this was once intractable — too complex, too slow, too rare.
AI is changing that. AI-assisted formal verification is making it practical at scale. Critical code — the code that protects secrets, manages keys, executes transactions — should be formally proven.
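Real formal verification relies on proof assistants and SMT solvers that reason symbolically over unbounded domains, but the core idea, checking a specification against every input rather than a sample, can be demonstrated exhaustively on a tiny domain. The saturating-add function below is a hypothetical example of such "critical code":

```python
# Exhaustive verification of a tiny function over its *entire* input
# space. Tools like Coq, Lean, or SMT solvers prove this symbolically;
# for 8-bit inputs we can simply check all 65,536 cases.

def saturating_add_u8(a: int, b: int) -> int:
    """Add two bytes, clamping at 255 instead of wrapping around."""
    return min(a + b, 255)

# Specification, checked for every possible input pair:
for a in range(256):
    for b in range(256):
        r = saturating_add_u8(a, b)
        assert 0 <= r <= 255                   # result stays in u8 range
        assert r >= max(a, b)                  # never wraps below an input
        assert (a + b > 255) or (r == a + b)   # exact when no overflow

print("specification holds for all 65,536 input pairs")
```

The contrast with testing is the point: a test suite samples the input space, while verification (here by brute force, in practice by proof) covers all of it, which is precisely the "zero remaining vulnerabilities" standard the new economics demands.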

4. The Longer Horizon: FHE and Indistinguishability Obfuscation
Fully Homomorphic Encryption and iO represent the ultimate security primitives. But even with AI, they remain impractical for now. They are the destination, not today's toolkit.


The Operational Reality

Of course, not every system can be rewritten with formal proofs, wrapped in a secure enclave, and verified with ZK. The real world is messy: legacy systems, finite budgets, tight timelines.

In practice, we'll continue with layered defense:

  • Supply chain integrity — knowing exactly what code you're running and where it came from

  • Memory-safe languages — rewriting the most exposed components to eliminate entire classes of vulnerabilities

  • Detection and response speed — for systems that can't be perfectly hardened, fast detection, fast patching, and blast radius containment
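The first of those bullets, in its simplest form, is hash pinning: record a digest of the artifact while it is trusted, and refuse to run anything that does not match. A minimal sketch follows; real pipelines layer on signed manifests, SBOMs, and tooling such as Sigstore.

```python
# Minimal supply-chain integrity check: pin a SHA-256 digest for an
# artifact and reject anything that does not match it exactly.
import hashlib

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    return hashlib.sha256(data).hexdigest() == pinned_digest

# At release time: record the digest of the trusted artifact.
trusted_build = b"#!/bin/sh\necho deploy\n"
pinned = hashlib.sha256(trusted_build).hexdigest()

# At install time: verify before executing.
assert verify_artifact(trusted_build, pinned)

# A single flipped byte (a tampered build) fails verification.
tampered_build = b"#!/bin/sh\necho dep1oy\n"
assert not verify_artifact(tampered_build, pinned)
```

Cheap as it is, this check converts "trust whatever the mirror served" into "trust only what was hashed at build time," which is the essence of knowing exactly what code you are running.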

Speed matters more than ever. The window between vulnerability disclosure and real-world exploitation is shrinking rapidly.
AI is also helping defense: automated code scanning (SAST, DAST), behavioral detection (EDR, XDR, UEBA), real-time alerting, automated patching — these are operational today, and they are the immediate line of defense while deeper cryptographic foundations are being built.

The same technology lowering attack costs is also lowering defense costs. The first question is who moves faster. The second is whether that recreates the asymmetry.

To be blunt: it does not fully recreate it.


The Race Is On

The old equilibrium is gone. The economic assumptions that made "secure enough" possible have been invalidated by AI. Every system not built with security as a first principle is living on borrowed time.

But this is not a story of inevitable defeat. It is a story of urgency.
Secure enclaves, zero-knowledge proofs, formal verification — these tools are more powerful and more accessible than ever. AI is also strengthening defense. And for systems that cannot yet reach the highest bar, practical measures — supply chain integrity, memory-safe rewrites, rapid detection and containment — can meaningfully narrow the gap.

The question is not whether these tools will be adopted. They will.
The question is: will we adopt them proactively, before the next wave of breaches forces our hand — or after?

The asymmetry can be rebuilt. But the window to do it proactively is closing fast.
