Adversarial Intelligence: How AI Powers the Next Wave of Cybercrime

AI Summit New York City – December 11, 2025

On December 11, 2025, I spoke at the AI Summit in New York City on a topic that is becoming unavoidable for every security leader: AI is not just improving cyber attacks, it is transforming cybercrime into an intelligence discipline. The talk was titled Adversarial Intelligence: How AI Powers the Next Wave of Cybercrime.

The premise of the talk was simple: adversaries are no longer running isolated campaigns with a clear beginning and end. They are building living, learning models of target organizations (e.g., your people, workflows, identity fabric, operational rhythms) and then using generative-class models and autonomous agents to probe, personalize, adapt, and persist.

The core shift: AI gives attackers decision advantage

In an AI-accelerated threat environment, the attacker’s edge often comes down to decision advantage. They see you earlier, target you more precisely, and adapt in real time when controls block them. In a pre-AI world, that level of precision required time and rare talent. Now it is becoming repeatable, automated, scalable, and accessible to actors with little real skill.

Where AI shows up in the modern attack lifecycle

When people think about “AI in cybercrime”, they often jump straight to malware generation. That is not wrong, but it is incomplete. In practice, AI technologies are being applied across the attack lifecycle.

Reconnaissance becomes continuous

Autonomous agents can enumerate exposed assets, map third-party relationships, and monitor public signals that reveal how teams operate. Recon becomes less like a phase and more like a background process, always learning and always refreshing the target model.

Social engineering becomes high-context

Generative models do not just write better phishing emails. They enable sentiment analysis, tone and context matching, multi-step pretexting, and persuasion that mirrors internal language and business cadence. The outcome is fewer “obvious” lures and more synthetic conversations that simply feel real.

Identity attacks scale faster than traditional controls

Identity is the front door to modern enterprises (e.g., SaaS, SSO, MFA workflows, help desk interactions, API keys). AI-powered adversaries can probe identity systems at scale, adaptively test variants, and blend into normal traffic patterns, especially when enforcement is inconsistent.

“Proof” gets cheaper: impersonation goes operational

Deepfakes and impersonation have moved from novelty to operational enablement. They can be used for vibe hacking (e.g., pressuring targets, accelerating trust, pushing high-risk decisions), especially in finance, vendor-payment, and administrative workflows.

The defensive answer is not “more AI”. It is better strategy.

A common trap is thinking, “attackers are using AI, so we need AI too”. Yes, AI is part of the answer, but on its own it is not enough. Winning here requires adversary-informed security: security designed to shape attacker behavior, increase attacker cost, and force outcomes that favor the defender.

Three tactics that disrupt malicious automation

Deception Engineering: make the attacker waste time … on purpose

Deception is no longer just honeypots and honeytokens. Done well, it is environment design: believable paths that look like privilege or data access, instrumented to capture telemetry and shaped to slow, misdirect, and segment adversary activity. The goal is not only detection. It is decision disruption, raising uncertainty and forcing changes within the adversary’s ecosystem.
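
To make “instrumented” concrete, here is a minimal honeytoken sketch using only the Python standard library. The hostname, port, and planting locations are illustrative assumptions, not a specific product. The idea: each decoy gets a unique URL, and because nothing legitimate ever requests it, any hit is high-confidence telemetry about what the adversary touched.

```python
import logging
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

# token -> where the decoy was planted, so an alert immediately tells you what was taken
PLANTED = {}

def mint_canary_url(planted_in: str, host: str = "decoys.example.internal") -> str:
    """Create a unique URL to embed in a decoy config, document, or fake credential store."""
    token = secrets.token_urlsafe(16)
    PLANTED[token] = planted_in
    return f"https://{host}/cfg/{token}"

class CanaryHandler(BaseHTTPRequestHandler):
    """Nothing legitimate ever requests these paths, so any hit is high-signal telemetry."""
    def do_GET(self):
        token = self.path.rsplit("/", 1)[-1]
        logging.warning("CANARY TRIPPED token=%s planted_in=%s src=%s ua=%s",
                        token, PLANTED.get(token, "unknown"),
                        self.client_address[0], self.headers.get("User-Agent", "?"))
        self.send_response(404)  # look boring to the attacker; the alert is the point
        self.end_headers()

if __name__ == "__main__":
    print(mint_canary_url("finance-team shared drive / payroll-backup.xlsx"))
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```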

Adversarial Counterintelligence: treat your enterprise as contested information space

Assume adversaries are collecting, correlating, and modeling your ecosystem, then design against that reality. Practical counterintelligence includes reducing open-source signal leakage, hardening executive and finance workflows against impersonation, and introducing verification into high-risk decisions without paralyzing the business.
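
As one deliberately simplified example of verification without paralysis, the sketch below encodes a hypothetical payment policy in Python. The dollar threshold, channels, and field names are assumptions; the point is only that high-risk requests get routed to out-of-band verification while routine ones flow through untouched.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount_usd: float
    vendor_changed_bank_details: bool
    requested_via: str           # e.g., "email", "ticketing", "in_person"
    verified_out_of_band: bool   # callback to a number on file, not one supplied in the request

HIGH_RISK_THRESHOLD = 25_000  # illustrative assumption; tune to the business

def requires_extra_verification(req: PaymentRequest) -> bool:
    """Flag the requests where impersonation (human or synthetic) would be most profitable."""
    return (
        req.amount_usd >= HIGH_RISK_THRESHOLD
        or req.vendor_changed_bank_details
        or req.requested_via == "email"   # the channel most easily spoofed or AI-generated
    )

def approve(req: PaymentRequest) -> bool:
    """Approve only when high-risk requests have been verified through an independent channel."""
    return req.verified_out_of_band if requires_extra_verification(req) else True

if __name__ == "__main__":
    urgent_wire = PaymentRequest(80_000, True, "email", verified_out_of_band=False)
    print("approved:", approve(urgent_wire))  # False: exactly the request a deepfaked CFO would make
```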

AI honeypots and canary systems: fight automation with instrumented ambiguity

AI-enabled adversaries love clean feedback loops. So do not give them any. Modern deception systems can present plausible but fake assets (APIs, credentials, source code repositories, data stores), generate dynamic content, and create unique fingerprints per interaction so automation becomes a liability.
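
One way to turn that ambiguity into attribution is to watermark every decoy response so that any reuse of the fake data points back to the exact interaction that produced it. Below is a minimal sketch in Python; the field names, the fake-key format, and the HMAC-based fingerprint are illustrative assumptions, not a specific product’s behavior.

```python
import hashlib
import hmac
import json
import logging
import secrets
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
SERVER_SECRET = secrets.token_bytes(32)  # known only to the defender

def decoy_record(client_ip: str, user_agent: str) -> dict:
    """Build a plausible fake 'secrets' payload whose values fingerprint this exact interaction."""
    context = f"{client_ip}|{user_agent}|{datetime.now(timezone.utc).isoformat()}"
    fp = hmac.new(SERVER_SECRET, context.encode(), hashlib.sha256).hexdigest()
    # Keep the fingerprint-to-context mapping server-side so later reuse can be attributed.
    logging.info("decoy issued fp=%s context=%s", fp[:16], context)
    return {
        "api_key": f"sk_live_{fp[:24]}",      # looks real, resolves to nothing
        "db_password": fp[24:40],
        "rotation_note": "rotate quarterly",  # flavor text to keep the decoy believable
    }

if __name__ == "__main__":
    print(json.dumps(decoy_record("203.0.113.7", "python-requests/2.32"), indent=2))
```

Because every scraper, agent, or script gets different values, automated collection at scale produces a trail of unique markers instead of a clean, reusable dataset.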

What this means for CISOs: measure money, not security activity

If you are briefing a board, do not frame this as anything like “AI is scary”. Frame it as: AI changes loss-event frequency, loss magnitude, and time-to-detection/time-to-containment. These can directly impact revenue, downtime, regulatory exposure, and brand trust. If attackers can industrialize reconnaissance and/or persuasion, then defenders must industrialize identity visibility, verification controls, detection-to-decision workflows, and deception at scale.
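
Even a back-of-the-envelope model of loss-event frequency times loss magnitude helps translate this into money the board can act on. The numbers below are invented purely to show the shape of the argument, not benchmarks.

```python
def annualized_loss_exposure(loss_event_frequency: float, loss_magnitude: float) -> float:
    """Expected annual loss = loss events per year * average cost per event."""
    return loss_event_frequency * loss_magnitude

# Illustrative, made-up figures: AI-scaled phishing before and after verification + deception controls.
before = annualized_loss_exposure(loss_event_frequency=4.0, loss_magnitude=350_000)   # 4 incidents/yr
after = annualized_loss_exposure(loss_event_frequency=1.5, loss_magnitude=200_000)    # fewer, cheaper incidents
control_cost = 250_000

print(f"Expected annual loss before controls: ${before:,.0f}")
print(f"Expected annual loss after controls:  ${after:,.0f}")
print(f"Risk reduction net of control cost:   ${before - after - control_cost:,.0f}")
```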

Key takeaways

  • Assume continuous and automated recon.
  • Harden verification workflows against synthetic content; train executive and administrative teams regularly.
  • Deploy deception at scale; raise attacker cost to reduce downtime.
  • Operationalize counterintelligence; reduce signal leakage and blind spots to shrink exposure.
  • Quantify decision advantage to accelerate funding decisions and defend revenue/margins.

Closing thought

AI is accelerating the adversary, no question. It has also lowered the entry barrier to cybercrime. But it is also giving defenders a chance to re-architect advantage: to move from passive defense to active disruption, from generic controls to adversary-shaped environments, and from security activity to measurable business outcomes.

The real message behind adversarial intelligence is this: the winners will not be the organizations that merely “adopt AI”. They will be the organizations that use it to deny attackers decision advantage, and can then prove it with metrics the business understands and values.