Industrialized Identity – The New Factory Model for Fraud

Adversaries now run identity like a factory. Yet most organizations still talk about identity breaches like they talk about storms: unfortunate, occasional, and mostly out of their control. But attackers don’t forecast storms; they manufacture them.

The adversary does not see it that way. Instead, they treat identity as raw material. They harvest it, refine it, enrich it, and operationalize it, over and over, until they can monetize it by running fraud, impersonation, and Account Takeover (ATO) campaigns like a production line.

This dynamic doesn’t just change adversarial TTPs per se. It changes the adversary’s economics. It changes defender timelines. And it changes what “good” looks like for a CISO who needs to protect revenue, customers, and business operations.

In the 2026 Identity Breach Report from Constella Intelligence, we see the signal clearly: identity exposure now moves at machine speed and scale, driven by industrial processes, not opportunistic one-offs.

Identity risk didn’t just get “worse.” It got productized.

And once it’s productized, attackers don’t need to break in to create impact. They can often log in, have data changed or reset, or impersonate. The impact becomes real when they assemble “attackable profiles.” In practice, that means they can:

  • pass help desk or account recovery checks
  • bypass “knowledge-based” verification
  • look legitimate across channels
  • scale automation without spiking obvious alarms

To turn attackable profiles into reality, adversaries have built an identity supply chain:

Ingest → Clean → Correlate → Enrich → Package → Operationalize

Quarterly controls and reactive incident response will not stand up to this pattern. Worse, it industrializes at scale. Defense models need to run at that same tempo.

The Identity Density Gap – the story behind +135% record growth vs. +11% unique identifiers

Let’s quantify the shift. Here’s a 2025 statistic that should force a mindset change: breach record volume grew by 135% while unique identifiers only grew 11%.

That says something simple and brutal: the problem isn’t more identities. It’s more context per identity, more data per person. This is the Identity Density Gap.
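The gap is easy to quantify. A minimal sketch of the arithmetic, using the report’s growth rates (the function name and baseline framing are mine, for illustration only):

```python
# Illustrative arithmetic for the Identity Density Gap.
# The growth rates (+135% records, +11% unique identifiers) come from
# the report; everything else here is just the math made explicit.

def density_growth(record_growth: float, identifier_growth: float) -> float:
    """Return the growth multiple of records-per-identifier (density)."""
    return (1 + record_growth) / (1 + identifier_growth)

multiple = density_growth(1.35, 0.11)
print(f"Density grew ~{(multiple - 1) * 100:.0f}% per identity")  # ~112%
```

In other words, even though the population of exposed people barely grew, the amount of context attached to each person roughly doubled.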

Put differently, density is leverage:

  • A thin identity (email + password) supports commodity credential stuffing.
  • A dense identity (email + phone + address + DOB + linked accounts + recovery hints + active session objects) supports high-confidence impersonation and repeatable fraud.

Density gives attackers options. Options create resilience. Resilience creates pathways that can also be leveraged at scale.

For years, many security teams fixated on fixing authentication, yet they kept losing to ATO and fraud. The adversary no longer cares about the login prompt; they see attack surface across the entire identity lifecycle:

  • onboarding and enrollment
  • authentication
  • session handling and token reuse
  • account recovery and help desk flows
  • high-risk transactions and workflow approvals

Defending only one link in that chain is a mere inconvenience; attackers route around fragmented strategies, and they do it fast.

Industrialized data correlation – how attackers turn billions of attributes into attackable profiles

Attackers don’t win because they possess data. Attackers win because they correlate data. When an operation runs at the scale of 400 billion+ attributes, correlation stops being a research activity and becomes a manufacturing step. Couple this with the vast amount of OSINT in existence and a picture starts to form.

Here’s how the factory works:

First – Normalization

Adversaries normalize raw material – they standardize fields, clean formatting, remove duplicates, and fix missing pieces. They don’t need perfection. They need enough consistency to automate.

Next – Linking

Data gets linked across disparate datasets – the adversary matches email addresses to phone numbers. Phone numbers to addresses. Addresses to dates of birth, and so on. One dataset fills the gaps in another.

Then – Scoring

Adversaries score attackable profiles to measure ROI. They don’t ask, “Can I compromise this account?” They ask, “Can I monetize this identity fast?”

They prioritize identities that connect to:

  • financial access
  • enterprise privileges
  • payroll and HR workflows
  • customer support recovery paths
  • vendor payment processes

Finally – Packaging

Profiles get packaged for operations. This is where identity becomes attackable. The profile supports repeatable playbooks: ATO, recovery bypass, SIM swap targeting, impersonation, and payment diversion.
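The normalize-link-score flow described above can be sketched in a few lines, from the defender’s point of view. All field names, sample records, and the scoring heuristic below are hypothetical; real pipelines operate across billions of attributes:

```python
# Minimal sketch of Normalize -> Link -> Score, the core of the
# identity "factory." Records and fields are hypothetical examples.

def normalize(record: dict) -> dict:
    """Standardize formatting so records can be matched automatically."""
    return {k: str(v).strip().lower() for k, v in record.items() if v}

def link(datasets: list[list[dict]], key: str = "email") -> dict:
    """Merge records across disparate datasets on a shared identifier."""
    profiles: dict[str, dict] = {}
    for dataset in datasets:
        for record in map(normalize, dataset):
            if key in record:
                profiles.setdefault(record[key], {}).update(record)
    return profiles

def score(profile: dict) -> int:
    """Density proxy: more linked attributes means more attack options."""
    return len(profile)

breach_a = [{"email": "Jo@Example.com", "password": "hunter2"}]
breach_b = [{"email": "jo@example.com", "phone": "555-0100", "dob": "1990-01-01"}]

profiles = link([breach_a, breach_b])
# One thin record plus one enrichment record yields a denser profile.
print(score(profiles["jo@example.com"]))  # 4 attributes
```

Note how the merge step is where the leverage appears: neither breach alone supports impersonation, but the linked profile starts to.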

That’s why identity risk now behaves like a business function for adversaries. They build a pipeline. That pipeline gets refined. Then it gets scaled.

And then exposure events feed that pipeline.

The Top Exposure Events – why mega breaches punch above their weight

When massive exposure events hit, many leaders respond with the familiar: “We’ll monitor. We’ll see if we’re affected.”

That script fails at machine speed. Large exposure events don’t just increase volume, they increase operational certainty for attackers:

  • consistent record structure
  • high overlap of data points with prior leaks
  • fast enrichment potential
  • easy automation with AI-powered technologies

There are many examples of large data breaches. At this point they need to be treated as more than headlines. Treat them as inventory injections: the raw material of the modern-day identity supply chain.

Once that inventory enters circulation, attackers don’t “use it once.” They:

  • monetize it
  • repackage it
  • enrich it with other datasets
  • resell it
  • and operationalize it in waves

That’s why identity exposure rarely behaves like a single incident. It behaves like a persistent condition.

And that’s why “wait for confirmed compromise” becomes the wrong approach.

Machine-speed defense – stop chasing events, interdict the pipeline

If attackers run identity like a factory, defenders must respond in kind. Defenders need to treat identity like a control plane.

This isn’t about perfect security; there is no such thing. Defenders do, however, need faster cycles:

  • faster detection-to-decision
  • faster decision-to-enforcement
  • tighter governance around automation
  • metrics that prove reduced operational risk

Here are some practical steps to improve an ecosystem:

Convert exposure into action

Alerts don’t help if they don’t trigger changes in systems or behavior. If an alert doesn’t change enforcement, it’s just telemetry. Build an identity exposure-to-action playbook that answers:

  • Which identities matter most? (executives, finance, privileged admins, support)
  • Which workflows create the largest blast radius? (recovery, vendor payments, payroll, customer support)
  • What control do we trigger first? (session resets, account recovery restrictions, throughput throttling)
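A playbook like the one above only works at machine speed if it is encoded as data rather than tribal knowledge. A hypothetical sketch (tiers, workflow names, and actions are placeholders, not a prescribed taxonomy):

```python
# A hypothetical exposure-to-action playbook encoded as data, so
# enforcement can be triggered automatically instead of debated per
# alert. Tiers and control names are illustrative placeholders.

PLAYBOOK = {
    "privileged_admin": ["reset_sessions", "restrict_recovery", "notify_soc"],
    "finance":          ["reset_sessions", "restrict_recovery"],
    "support":          ["restrict_recovery"],
    "standard":         ["monitor"],
}

def actions_for(identity_tier: str) -> list[str]:
    """Map an exposed identity's tier to its first-response controls."""
    return PLAYBOOK.get(identity_tier, PLAYBOOK["standard"])

print(actions_for("finance"))  # ['reset_sessions', 'restrict_recovery']
```

The point of the design is that the highest-blast-radius identities get the most aggressive default response, and an unknown tier fails safe to monitoring rather than silence.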

Next, attack their economics.

Render stolen credentials less valuable

Kill the advantages that adversaries love by:

  • deploying phish-resistant MFA, especially for privileged roles
  • binding sessions to devices where possible
  • tightening token lifetimes and reuse policies

Then, close the side doors.

Harden the bypass routes

Adversaries don’t always brute-force their way in. They take the paths of least resistance, such as socially engineering account resets via a help desk. Treat recovery like a privileged operation by:

  • restricting recovery pathways for users, especially privileged ones
  • requiring stronger proof for recovery than login credentials
  • adding friction (e.g., synchronous checks via phone call) to high-impact changes (bank info, payout routing, email changes)
  • training support teams on identity manipulation patterns and escalation guardrails

Finally, scale your response.

Automate enforcement

Automation wins at machine speed when done right, but beware: it can also break business operations. Start slow with low-risk actions and require human approval for high-impact actions (account lockouts, financial workflow freezes, privileged access resets).

And if you want to win long-term, measure what matters.

Measure the right outcomes

Generally speaking, what gets measured can be improved. Consider the following metrics to improve your security posture:

  • time-to-detect exposure (requires analysis to unearth original exposure)
  • time-to-enforce controls
  • % of privileged users on phish-resistant MFA
  • reduction in successful recovery abuse
  • reduction in ATO attempts that reach “valid session” state

Some of these metrics are not trivial and require analysis. But they translate cleanly to business outcomes: less fraud, fewer outages, fewer customer escalations.
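Two of the metrics above can be computed from nothing more than timestamped events and an identity inventory. A minimal sketch with hypothetical event fields and sample values:

```python
# Sketch of computing time-to-detect and phish-resistant MFA coverage.
# Timestamps, field names, and the sample roster are hypothetical.
from datetime import datetime

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-style timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# time-to-detect exposure: original exposure -> first detection
ttd = hours_between("2025-03-01T09:00", "2025-03-02T15:00")  # 30.0 hours

# % of privileged users on phish-resistant MFA
privileged = [{"user": "a", "fido2": True}, {"user": "b", "fido2": False}]
coverage = 100 * sum(u["fido2"] for u in privileged) / len(privileged)
print(ttd, coverage)  # 30.0 50.0
```

The hard part in practice is the input data (unearthing the original exposure date takes analysis), but once the events exist, the metrics themselves are trivial to automate and trend.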

The bottom line

Identity risk didn’t just automagically grow. It got industrialized.

Interestingly, attackers now build identity products. They run correlation pipelines. They operationalize exposure at machine speed. And they scale fraud the way mature businesses scale customer acquisition: with automation, testing, and iteration.

Here’s the modern posture. Instead of relying on outdated perimeter strategies, consider:

  • treating exposure as a leading indicator
  • hardening the identity lifecycle, not just the login
  • interdicting the pipeline wherever possible

Defending identity in the industrial era requires a new mindset.

AI Powered Cybercrime – How AI Supercharged a Sextortion Wave

Part 3 of AI Powered Cybercrime

Sextortion isn’t new. But velocity has increased, personalization has sharpened, and attackers can now run campaigns at industrial scale. This wave is a collision of the first two posts in this series: facilitation (credible intimidation) and scale (high-volume delivery).

Many security programs have an important blind spot: they treat coercion as a personal problem. In reality, coercion quickly becomes an enterprise problem when it pressures employees into silence, errors, or unsafe, unethical, illegal behavior.

What happened (high level)

Following the large-scale exposure of personal data from a data broker, threat actors began sending extortion emails that included real names, real email addresses, and real home addresses. The goal was not technical proof; it was psychological terrorism. When a recipient of these types of emails and/or files sees real personal details, the scam feels “more real,” even if the core claim is false.

Some variants escalated intimidation by including a photo of the victim’s home sourced from publicly available mapping imagery. That addition is a masterclass in facilitation: it takes something the attacker can generate cheaply and turns it into a credibility anchor that increases stress and compresses decision time.

Why it works: plausibility beats truth under pressure

Most victims don’t evaluate these messages like analysts. They evaluate them like humans under threat, with emotion. The campaign design is built around that reality: shock, shame, urgency, and a narrow window to “fix” the situation. The scam doesn’t need to be technically accurate to be operationally effective; it only needs to feel plausible long enough to trigger payment.

This is the same vibe hacking dynamic we see in enterprise fraud: urgency is used as a control bypass. When the attacker can manufacture plausibility quickly, policy and verification become the only reliable defenses.

Where AI fits (without fluff)

AI does not need to run the entire scheme to increase harm. It only needs to improve the leverage points. First, it enables endless text variation while maintaining a consistent tone and similar messaging. Second, it makes personalization easy by merging templates with leaked data and also seamlessly integrating with Open Source Intelligence (OSINT) sources. Third, it reduces the human effort required to run a campaign, which increases throughput.

The result isn’t “smarter extortion.” It’s cheaper extortion at higher volume paired with sharper intimidation artifacts. That combination is what makes waves like this so disruptive.

Why CISOs should care: coercion becomes a business threat

Even when a victim is targeted “personally,” the downstream effects can land inside a CISO’s organization. An employee under threat may avoid reporting, reuse credentials poorly, or comply with nefarious demands. Panic and stress can lead to unsafe behaviors, and that noise can weave its way into an organization, distracting security teams from secondary attacks that aim to exploit the chaos.

If a resilience program covers ransomware but not coercion-driven fraud and extortion, there could be an operational gap. Sextortion waves are a reminder that the adversary’s true target is often decision-making under pressure.

What organizations should do during a wave

The objective in a wave is speed, clarity, and support. Issue a same-day bulletin that states what is happening, what employees should do, and how to report. Keep it stigma-free. The most important message is this: employees can report safely, and they won’t get in trouble.

Next, harden the identity bridge. Ensure MFA is enforced for email and sensitive applications, watch for anomalous sign-ins, and monitor for new device enrollments. Then improve detection quality by treating this as a campaign: pattern match across inboxes and route messaging to a single owner to reduce confusion and duplicate work.

The resilience lesson

Waves like this are not just security events; they’re leadership events. The organizations that respond well reduce harm by moving fast, communicating clearly, and providing support. They also learn: which workflows were stress-tested, where employees hesitated, and what verification gates were missing.

If you treat coercion as out-of-scope, you will eventually treat it as an incident, under pressure. Build the playbook now.

Key takeaways

  • Normalize coercion reporting; activate employee assistance programs immediately to protect people and organizational reputation.
  • Instrument wave-messaging detection to tune signals and reduce both alert fatigue and operational distraction.
  • Harden identity ecosystems fast; enforce MFA immediately to prevent panic-driven account takeover actions.
  • Operationalize extortion playbooks and drill them regularly to reduce chaos and decision latency.

AI Powered Cybercrime – Scale: From One-off attacks to broad campaigns

Part 2 of AI Powered Cybercrime

Once AI facilitates and reduces the skill barrier, the next step is predictable: industrialization. Scale is not simply “more X.” It’s more volume, more experiments, parallel campaigns, faster iteration, and lower cost per attempt. Attackers can tolerate failure because machines keep trying, and keep learning.

In practice, scale changes how risk is experienced. The question stops being “can this attack be blocked?” and becomes “can we withstand continuous throughput without fatigue, mistakes, or control bypass?” If the attacker runs campaigns like a high-volume system, defenders must design controls that behave like high-volume systems too.

Scale is attack throughput based on more attempts, more variation, and faster learning loops than human teams can match.

How scale happens

Cybercrime at scale is a stack: commodity infrastructure to deliver, automation to orchestrate, and AI to generate convincing content and decision support. That stack allows adversaries to operate like entire sophisticated teams, testing, measuring response rates, iterating on what works, and abandoning what doesn’t.

This matters because “good enough” at massive volume beats “excellent” at low volume. Even if your controls catch 99.9% of attempts, at enough throughput the remaining 0.1% becomes a real business problem.
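That 99.9% point is worth making concrete. A quick sketch, using hypothetical campaign volumes:

```python
# The "99.9% is not enough" arithmetic. Volumes are hypothetical.

def residual_successes(attempts: int, catch_rate: float) -> int:
    """Attempts that get through a control at a given catch rate."""
    return round(attempts * (1 - catch_rate))

# A campaign engine sending 5 million attempts a month against a
# control that blocks 99.9% still lands thousands of successes.
print(residual_successes(5_000_000, 0.999))  # 5000
```

At industrial throughput, the defender’s question shifts from the block rate to the absolute number of residual successes the business can absorb.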

Agentic workflows: campaigns become orchestrated systems

The most important mental model for scale is orchestration. Instead of one attacker manually working a process, you face workflows that plan tasks, execute in parallel, and adapt based on outcomes. Target research, lure writing, follow-ups, and handoffs can be partially automated, even when a human remains in the loop for high-value steps.

For defenders, this means control gaps are discovered faster, exploited more accurately, and reused more reliably. If your organization has exception-heavy processes (e.g., ad hoc approvals, inconsistent vendor change procedures, unclear escalation paths) those become discoverable cracks that an attacker’s system can exploit repeatedly.

Dark social distribution: coordination at platform speed

Distribution and coordination channels accelerate scale by enabling rapid churn: new templates, new lists, new scripts, and fast feedback loops from peers. The operational consequence is that takedowns and blocks often trail behind the adaptation cycle. If you rely solely on external enforcement or on the hope that a campaign will “fade out,” you will lose the timing battle.

This is why brand and executive impersonation monitoring matters. When attackers can quickly align a pretext with what’s visible about your leadership, partners, or vendors, they can now manufacture credibility in hours.

DDoS and distraction: availability pressure as a cover layer

At scale, disruption is often a tactic, not an outcome. Availability pressure can consume attention, create noise, and induce rushed decisions that enable secondary goals (e.g., fraud, credential abuse, or data theft). The attacker doesn’t need to “win” the DDoS battle; they need to win the operational tempo battle.

The resilience countermeasure is degraded-mode planning. If you pre-stage how the business continues when systems are strained (e.g., what gets paused, what gets routed differently, who approves exceptions) you reduce the attacker’s ability to force mistakes through urgency.

A/B testing on humans: volume plus variation

A subtle but powerful aspect of scale is experimentation. Attackers don’t need a perfect lure. They need a pipeline that generates variants, tests them across segments, measures responses, and doubles down on what works. AI makes this cheap: the cost of a new variant approaches zero.

This turns awareness training into an operational control problem. You’re no longer defending against one “phishing style.” You’re defending against a continuously mutating persuasion engine. The stable defense is workflow integrity, consistent rules for high-risk actions, enforced regardless of how convincing the request appears.

What to do: control throughput with identity and workflow gates

To survive scale, design defenses like you’re protecting a high-traffic API. The objective is not perfect prevention; it’s making irreversible actions rare, gated, and verifiable. Start with the workflows that move money, grant access, or export sensitive data.

Phishing-resistant MFA and risk-based session controls reduce account takeover success. Dual control and out-of-band verification reduce fraud success. Campaign-level detection reduces fatigue by catching patterns across many inboxes or users rather than treating each event as a one-off.
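Campaign-level detection can be illustrated with a toy example: instead of judging each message in isolation, cluster near-duplicate lures across inboxes. The similarity threshold and sample subjects below are hypothetical, and real systems would compare far richer features than subject lines:

```python
# Toy illustration of campaign-level detection: treat near-duplicate
# lures as one campaign rather than many independent events.
# Threshold and sample data are hypothetical.
from difflib import SequenceMatcher

def same_campaign(subject_a: str, subject_b: str, threshold: float = 0.8) -> bool:
    """Flag near-duplicate subjects as belonging to one campaign."""
    ratio = SequenceMatcher(None, subject_a.lower(), subject_b.lower()).ratio()
    return ratio >= threshold

inbox = [
    "Urgent: verify your payroll details",
    "URGENT - verify your payroll detail",
    "Lunch next week?",
]
campaign = [s for s in inbox if same_campaign(inbox[0], s)]
print(len(campaign))  # 2 lures grouped into one campaign
```

Grouping this way turns hundreds of per-user alerts into a single campaign object with one owner, which is what actually reduces fatigue.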

Board-level framing

Scale bends the loss curve upward even if individual success rates decline. Boards should ask a small set of questions that map directly to business continuity: Which workflows are irreversible? Which are gated? How fast can we verify? How quickly can we contain identity-driven compromise?

If you can answer those questions with metrics (e.g., time-to-verify, exception rates, time-to-contain) you can translate a complex threat into operational readiness and financial risk reduction.

Key takeaways

  • Assume nonstop attack throughput; model it monthly to reduce fraud and downtime exposure.
  • Harden approval workflows; enforce dual control to prevent irreversible payment loss.
  • Automate identity containment and tune it regularly to cut attacker dwell time and blast radius.
  • Instrument dark social risk; monitor it weekly to reduce brand-driven compromise and extortion.
  • Govern exceptions tightly; review them regularly to prevent blind-spot failures and audit fallout.

Part 3 of AI Powered Cybercrime

AI Powered Cybercrime – Facilitation: How AI lowers the skill barrier for attackers

Part 1 of AI Powered Cybercrime

Cybercrime has historically had a skills bottleneck. Someone had to do the research, craft a believable story, write the lure, build the tooling, and then keep the victim engaged long enough for an outcome. Even for seasoned operators, that work takes time, and time is money.

Generative AI has changed the economics of that effort. It acts like a quality assistant that can draft, rephrase, personalize, and refine at machine speed. The net effect is not simply “smarter attackers.” It’s more adversaries, including many who historically could not operate in this space, now performing at a higher baseline, at larger scale, with fewer mistakes and more believable artifacts.

In this series, I use “facilitation” to describe the first-order impact of AI on cybercrime: removing friction across the attack lifecycle so that an individual attack becomes easier to execute, easier to adapt, and more likely to succeed.

Facilitation is where AI makes individual attacks better by lowering the skill barrier and improving content and/or persuasion quality.

The Facilitation Lens

A useful way to think about AI-enabled crime is as a pipeline. Attackers rarely win because they have one magic tool; they win because they can move smoothly from one stage to the next: recon, pretext, access, execution, and monetization. AI can assist at every stage, and it doesn’t need to be perfect. It only needs to be good enough to keep the process flowing.

For defenders, this creates a trap: many programs still focus on blocking discrete artifacts (one phishing email, one payload hash, one suspicious domain). Facilitation shifts the advantage to the attacker because artifacts can be generated rapidly and at volume, while the human processes and identity controls on the defensive side often remain static.

AI-powered malware: from coding to assembling outcomes

“AI malware” may inspire unrealistic notions of a fully autonomous super-virus. The more realistic, and more dangerous, reality is simpler: AI compresses development and iteration cycles. Instead of writing everything from scratch, adversaries can draft components, refactor quickly, generate variants, and troubleshoot faster. That matters because it reduces the time between idea and execution. It also empowers people who would not otherwise be operating in cybercrime.

For defenders, the implication is that static signatures and one-off IOCs degrade faster. The same intent can show up as many slightly different implementations, and the “shape” of attacks changes just enough to evade brittle detection logic.

What can be done about this? Shift emphasis toward behavior and context; static defense models need to become adaptable. Even if a payload changes dynamically, attackers will likely still need access to credentials or session tokens, the creation of persistence, or the exfiltration of data. Those are the slivers of opportunity where defenders have a chance at stable detection and containment. Given today’s dynamic, the best place to shrink an attacker’s options is identity: the stronger and more tightly governed the identity boundary, the fewer places malicious tooling can successfully land.

Deepfakes: visual presence is no longer identity

Deepfakes move social engineering from “message deception” to “presence deception.” It’s one thing to spoof a sender name; it’s another to appear on a call as someone your team recognizes. That’s why deepfake-enabled fraud is so consequential: it attacks the human verification shortcuts we’ve relied on for decades: voice, face, and confidence.

The operational lesson is straightforward: “I saw them on video” is no longer a control, nor a point of trust. A convincing presence can be manufactured, and group dynamics can be exploited to create social proof. The only reliable protection is to place high-risk actions behind verification steps that synthetic media cannot satisfy: out-of-band callbacks to known numbers, dual control for sensitive payments, and defined escalation rituals when urgency appears.

Social engineering: AI adds memory, consistency, and coordination

The biggest upgrade AI brings to social engineering is not grammar, it’s continuity. AI can maintain context over time, keep a persona consistent across messages and disparate systems, and pivot smoothly when a target makes adjustments. That capability turns many “one-and-done” lures into persistent conversations that wear down defenses.

This is why awareness training that focuses on typos and awkward phrasing is losing relevance. The tell is increasingly a process violation: a new payment path, a new channel, a sudden bypass of normal approvals, or an exception request that tries to compress decision time. If your employees know how to spot workflow bypass, they can defeat even polished, highly personalized lures.

Vibe hacking: weaponizing emotion to bypass analysis

Vibe hacking is the weaponization of emotion as a control bypass. Attackers don’t need you to believe every detail; they need you to act before you verify. Shame, urgency, fear, status, and belonging are levers that move decisions faster than policy can intervene.

The countermeasure is not “tell people to be calm.” The countermeasure is building organizational escape hatches: clear permission to slow down, explicit escalation paths, and operational friction for irreversible actions. If urgency is treated as a trigger for verification, not a reason to move faster, we can turn the attacker’s primary advantage into a liability.

Short term reset

If you want one practical takeaway from facilitation, it’s this: identity and workflow integrity are choke points. AI can generate unlimited persuasion and/or manipulation artifacts, but we have to force it to cross an authorization boundary somewhere.

Start by identifying the three most irreversible workflows in your organization, for example from a pool like this: payments, vendor banking changes, payroll updates, privileged access grants, or large data exports. Then ensure those workflows have step-up verification that cannot be satisfied by a sense of urgency, polished messaging, or synthetic media. Finally, run a short blind red-team exercise on a deepfake or coercion scenario and measure how long it takes the organization to verify and contain. Blind means the exercise must mimic reality.

Key takeaways

  • Assume high-quality lures; retrain workflow owners monthly to reduce fraud loss and downtime.
  • Gate privileged actions and always enforce out-of-band checks to prevent unauthorized transactions.
  • Detect behavior shifts; tune telemetry regularly to cut dwell time and response costs.
  • Standardize escalation and drill managers quarterly to reduce coercion-driven errors.
  • Institutionalize dissent; review exceptions monthly to avoid governance blind spots and audit fallout.

Part 2 of AI Powered Cybercrime

Adversarial Intelligence: How AI Powers the Next Wave of Cybercrime

AI Summit New York City – December 11, 2025

On December 11, 2025, I spoke at the AI Summit in New York City on a topic that is becoming unavoidable for every security leader: AI is not just improving cyber attacks, it is transforming cybercrime into an intelligence discipline.

The premise of the talk was simple: adversaries are no longer running isolated campaigns with a clear beginning and end. They are building living, learning models of target organizations (e.g., your people, workflows, identity fabric, operational rhythms) and then using generative-class models and autonomous agents to probe, personalize, adapt, and persist.

The core shift: AI gives attackers decision advantage

In an AI-accelerated threat environment, the attacker’s edge often comes down to decision advantage. They see you earlier, target you more precisely, and adapt in real time when controls block them. In a pre-AI world, that level of precision required time and rare talent. Now it is becoming repeatable, automated, scalable, and accessible to people with no real skill.

Where AI shows up in the modern attack lifecycle

When people think about “AI in cybercrime”, they often jump straight to malware generation. That is not wrong, but it is incomplete. In practice, AI technologies are being applied across the attack lifecycle.

Reconnaissance becomes continuous

Autonomous agents can enumerate exposed assets, map third-party relationships, and monitor public signals that reveal how teams operate. Recon becomes less like a phase and more like a background process, always learning and always refreshing the target model.

Social engineering becomes high-context

Generative models do not just write better phishing emails. They enable sentiment analysis, tone and context matching, multi-step pretexting, and persuasion that mirrors internal language and business cadence. The outcome is fewer “obvious” lures and more synthetic conversations that simply feel real.

Identity attacks scale faster than traditional controls

Identity is the front door to modern enterprises (e.g., SaaS, SSO, MFA workflows, help desk interactions, API keys). AI-powered adversaries can probe identity systems at scale, adaptively test variants, and blend into normal traffic patterns, especially when enforcement is inconsistent.

“Proof” gets cheaper: impersonation goes operational

Deepfakes and impersonation have moved from novelty to operational enablement. They can be used for vibe hacking (e.g., pressure targets, accelerate trust, push high-risk decisions), especially in finance, vendor-payment, and administrative workflows.

The defensive answer is not “more AI“. It is better strategy.

A common trap is thinking, “attackers are using AI, so we need AI too.” Yes, some AI is necessary, but alone it is not enough. Winning here requires adversary-informed security: security designed to shape attacker behavior, increase attacker cost, and force outcomes.

Three tactics that disrupt malicious automation

Deception Engineering: make the attacker waste time … on purpose

Deception is no longer just honeypots and honeytokens. Done well, it is environment design: believable paths that look like privilege or data access, instrumented to capture telemetry and shaped to slow, misdirect, and segment adversary activity. The goal is not only detection. It is decision disruption, raising uncertainty and forcing changes within the adversary’s ecosystem.

Adversarial Counterintelligence: treat your enterprise as contested information space

Assume adversaries are collecting, correlating, and modeling your ecosystem, then design against that reality. Practical counterintelligence includes reducing open-source signal leakage, hardening executive and finance workflows against impersonation, and introducing verification into high-risk decisions without paralyzing the business.

AI honeypots and canary systems: fight automation with instrumented ambiguity

AI-enabled adversaries love clean feedback loops. So do not give them any. Modern deception systems can present plausible but fake assets (APIs, credentials, source code repositories, data stores), generate dynamic content, and create unique fingerprints per interaction so automation becomes a liability.

What this means for CISOs: measure money, not security activity

If you are briefing a board, do not frame this as anything like “AI is scary”. Frame it as: AI changes loss-event frequency, loss magnitude, and time-to-detection/time-to-containment. These can directly impact revenue, downtime, regulatory exposure, and brand trust. If attackers can industrialize reconnaissance and/or persuasion, then defenders must industrialize identity visibility, verification controls, detection-to-decision workflows, and deception at scale.

Key takeaways

  • Assume continuous and automated recon.
  • Harden verification workflows against synthetic content; train executive and administrative teams regularly.
  • Deploy deception at scale; raise attacker cost to reduce downtime.
  • Operationalize counterintelligence to eliminate blind spots and reduce exposure.
  • Quantify decision advantage to accelerate funding decisions and defend revenue/margins.

Closing thought

AI is accelerating the adversary, no question. It has also lowered the entry barrier to cybercrime. But it is also giving defenders a chance to re-architect advantage: to move from passive defense to active disruption, from generic controls to adversary-shaped environments, and from security activity to measurable business outcomes.

The real message behind adversarial intelligence is this: the winners will not be the organizations that merely “adopt AI”. They will be the organizations that use it to deny attackers decision advantage, and can in turn prove it with metrics the business understands and values.