How Security Chaos Engineering Disrupts Adversaries in Real Time

In an age where cyber attackers have become more intelligent, agile, persistent, sophisticated, and empowered by Artificial Intelligence (AI), defenders must go beyond traditional detection and prevention. Traditional protective security models are rapidly losing their effectiveness. In the pursuit of a proactive model, one approach has emerged: security chaos engineering. It offers a proactive strategy that doesn’t just lead to hardened systems but can also actively disrupt and deceive attackers in the midst of their nefarious operations.

By intentionally injecting controlled failures or disinformation into production-like environments, defenders can observe attacker behavior, test the resilience of security controls, and frustrate adversarial campaigns in real time.

Two of the most important frameworks shaping modern cyber defense are MITRE ATT&CK (https://attack.mitre.org/) and MITRE Engage (https://engage.mitre.org/). Together, they provide defenders with a common language for understanding adversary tactics and a practical roadmap for implementing active defense strategies. This can transform intelligence about attacker behavior into actionable, measurable security outcomes. The convergence of these frameworks with security chaos engineering adds valuable structure for building actionable and measurable programs.

What is MITRE ATT&CK?

MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) is an open, globally adopted framework developed by MITRE (https://www.mitre.org/) to systematically catalog and describe the observable tactics and techniques used by cyber adversaries. The ATT&CK matrix provides a detailed map of real-world attacker behaviors throughout the lifecycle of an intrusion, empowering defenders to identify, detect, and mitigate threats more effectively. By aligning security controls, threat hunting, and incident response to ATT&CK’s structured taxonomy, organizations can close defensive gaps, benchmark their capabilities, and respond proactively to the latest adversary tactics.

What is MITRE Engage?

MITRE Engage is a next-generation knowledge base and planning framework focused on adversary engagement, deception, and active defense. Building upon concepts from MITRE Shield, Engage provides structured guidance, practical playbooks, and real-world examples to help defenders go beyond detection. These data points enable defenders to actively disrupt, mislead, and study adversaries. Engage empowers security teams to plan, implement, and measure deception operations using proven techniques such as decoys, disinformation, and dynamic environmental changes. This bridges the gap between understanding attacker Tactics, Techniques, and Procedures (TTPs) and taking deliberate actions to shape, slow, or frustrate adversary campaigns.

What is Security Chaos Engineering?

Security chaos engineering is the disciplined practice of simulating security failures and adversarial conditions in running production environments to uncover vulnerabilities and test resilience before adversaries can. Its value lies in the fact that it is truly the closest thing to a real incident. Tabletop Exercises (TTXs) and penetration tests always have constraints and/or rules of engagement that distance them from real-world attacker scenarios, where there are no constraints. Security chaos engineering extends the principles of chaos engineering, popularized by Netflix (https://netflixtechblog.com/chaos-engineering-upgraded-878d341f15fa), to the security domain.

Instead of waiting for real attacks to reveal flaws, defenders can use automation to introduce “security chaos experiments” (e.g. shutting down servers from active pools, disabling detection rules, injecting fake credentials, modifying DNS behavior) to understand how systems and teams respond under pressure.

The Real-World Value of this Convergence

When paired with security chaos engineering, the combined use of ATT&CK and Engage opens up a new level of proactive, resilient cyber defense strategy. ATT&CK gives defenders a comprehensive map of real-world adversary behaviors, empowering teams to identify detection gaps and simulate realistic attacker TTPs during chaos engineering experiments. MITRE Engage extends this by transforming that threat intelligence into actionable deception and active defense practices, in essence providing structured playbooks for engaging, disrupting, and misdirecting adversaries. By leveraging both frameworks within a security chaos engineering program, organizations not only validate their detection and response capabilities under real attack conditions, but also test and mature their ability to deceive, delay, and study adversaries in production-like environments. This fusion shifts defenders from a reactive posture to one of continuous learning and adaptive control, turning every attack simulation into an opportunity for operational hardening and adversary engagement.

Here are some security chaos engineering techniques to consider as this becomes part of a proactive cybersecurity strategy:

Temporal Deception – Manipulating Time to Confuse Adversaries

Temporal deception involves distorting how adversaries perceive time in a system (e.g. injecting false timestamps, delaying responses, or introducing inconsistent event sequences). By disrupting an attacker’s perception of time, defenders can introduce doubt and delay operations.

Example: Temporal Deception through Delayed Credential Validation in Deception Environments

In a deception-rich enterprise network, temporal deception can be implemented by intentionally delaying credential validation responses on honeypot systems. For instance, when an attacker attempts to use harvested credentials to authenticate against a decoy Active Directory (AD) service or an exposed RDP server designed as a trap, the system introduces variable delays in login response times, irrespective of the result (e.g. success, failure). These delays mimic overloaded systems or network congestion, disrupting an attacker’s internal timing model of the environment. This is particularly effective when attackers use automated tooling that depends on timing signals (e.g. Kerberos brute-forcing or timing-based account validation). It can also randomly slow down automated processes that an attacker hopes will complete within a given time frame.

By altering expected response intervals, defenders can inject doubt about the reliability of activities such as reconnaissance and credential validity. Furthermore, the delayed responses provide defenders with crucial dwell time for detection and the tracking of lateral movement. This subtle manipulation of time not only frustrates attackers but also forces them to second-guess whether their tools are functioning correctly or if they’ve stumbled into a monitored and/or deceptive environment.
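As a rough illustration, the delayed-validation idea can be sketched in a few lines of Python. The names (`MIN_DELAY`, `MAX_DELAY`, `validate_decoy_login`), the delay range, and the always-reject policy are hypothetical choices for this sketch, not features of any particular deception product:

```python
import random
import time

# Illustrative bounds for the artificial delay; tune these to mimic plausible
# network congestion or system load in the decoy environment.
MIN_DELAY = 0.5   # seconds
MAX_DELAY = 1.5   # seconds

def jittered_delay() -> float:
    """Pick a delay independent of the authentication result, so response
    timing leaks nothing and breaks attacker timing models."""
    return random.uniform(MIN_DELAY, MAX_DELAY)

def validate_decoy_login(username: str, password: str) -> bool:
    """Decoy authentication: the same delay path runs for success and failure."""
    delay = jittered_delay()
    time.sleep(delay)
    # A decoy service can simply always reject while logging the attempt for
    # defenders; nothing here reveals whether the credentials were "valid".
    print(f"decoy auth attempt user={username} delayed={delay:.2f}s")
    return False
```

Because the delay is drawn independently of the outcome, tooling that infers account validity from response timing receives only noise.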

The following ATT&CK techniques and Engage activities can be mapped to this example of temporal deception to support the desired defensive disruption:

MITRE ATT&CK Mapping

  • T1110 – Brute Force – many brute force tools rely on timing-based validation. By introducing delays, defenders interfere with the attacker’s success rate and timing models.
  • T1556 – Modify Authentication Process – typically this is seen as an adversary tactic. But defenders can also leverage this by modifying authentication behavior in decoy environments to manipulate attacker perception.
  • T1078 – Valid Accounts – delaying responses to login attempts involving potentially compromised credentials can delay attacker progression and reveal account usage patterns.

MITRE Engage Mapping

  • Elicit > Reassure > Artifact Diversity – deploying decoy credentials or artifacts to create a convincing and varied environment for the adversary. Temporal manipulation of login attempts involving decoy credentials helps track adversary interactions and delay their movement.
  • Elicit > Reassure > Burn-In – introducing friction, delays, or noise to slow down or frustrate automated attacker activities.
  • Affect > Disrupt > Software Manipulation – modifying system or application software to alter attacker experience, disrupt automation, or degrade malicious tooling. Introducing time delays and inconsistent system responses creates false environmental cues, leading attackers to make incorrect decisions. Also, introducing artificial latency into authentication routines or system responses slows down the attack lifecycle and adds noise to attacker automation.
  • Affect > Disrupt | Direct > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations. Also influencing or steering attacker decision-making and movement in the environment.
  • Affect > Disrupt > Isolation – segregating attacker interactions or dynamically altering access to increase confusion and contain threats.
  • Expose > Detect > Lures | Network Analysis – observing, logging, and analyzing adversary actions for intelligence and response purposes.
  • Expose > Collect > API Monitoring | Network Monitoring | System Activity Monitoring – extended interaction windows (due to delayed responses) give defenders more opportunity to monitor adversary behavior within the decoy environment.

Honey Timing and Time-Based Traps

Time-bound honeypots such as fake cron jobs, scheduled updates, or bogus backup routines can serve as deceptive traps. Interaction with these elements reveals unauthorized probing or access attempts. The very existence of these traps implies that any entity interacting with them (excluding the creators of course) needs to be treated as hostile and investigated.

Example: Deceptive Backup Scripts as Time-Based Traps in Cloud Environments

Defenders can deploy a bogus scheduled backup script named “nightly-db-backup.sh” on a decoy cloud instance. The script can be set to appear as if it runs daily at 04:00 via a convincing-looking cron job (e.g. /etc/cron.d/backup_job). The script can contain clear-text references to fake database credentials, S3 storage paths, and mock sensitive data exports. This serves as a timing-based honeypot, existing to attract unauthorized access attempts during off-hours when legitimate activity is minimal.

Any attempt to execute this script triggers hidden canary tokens that act as an alerting system. This can take the form of an HTTP request to a receiving entity (e.g. a web server) that has been configured to log and alert on any relevant interaction. This of course captures timestamps showing interactions with the script outside of the bogus scheduled execution window. Defenders can then not only detect the unauthorized access but also track subsequent movements using some of the metadata captured.

This approach demonstrates how time-based decoy elements, especially those aligned with off-hour routines, can effectively expose stealthy adversaries who are mimicking typical system administrator behavior.
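A minimal sketch of the alerting logic in Python follows. `SCHEDULED_HOUR`, `WINDOW_MINUTES`, and the alert record format are invented for this example; a real deployment would send the alert to a canary endpoint via an HTTP request rather than return a dictionary:

```python
import datetime

# The decoy "nightly-db-backup.sh" advertises a daily 04:00 run; any execution
# outside a small window around that time is treated as hostile.
SCHEDULED_HOUR = 4
WINDOW_MINUTES = 5

def is_suspicious_execution(ts: datetime.datetime) -> bool:
    """True when the decoy script runs outside its advertised cron window."""
    scheduled = ts.replace(hour=SCHEDULED_HOUR, minute=0, second=0, microsecond=0)
    minutes_off = abs((ts - scheduled).total_seconds()) / 60.0
    return minutes_off > WINDOW_MINUTES

def fire_canary(ts: datetime.datetime, source_ip: str) -> dict:
    """Build the alert record a canary listener would log for this interaction."""
    return {
        "alert": "decoy-backup-executed",
        "time": ts.isoformat(),
        "src": source_ip,
        "suspicious": is_suspicious_execution(ts),
    }
```

An execution at 13:22 would be flagged suspicious, while one at 04:02 would fall inside the advertised window.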

The following ATT&CK techniques and Engage activities can be mapped to this example of time-based decoys to support the desired defensive disruption:

MITRE ATT&CK Mapping

  • T1059 – Command and Scripting Interpreter – the attacker manually executes some script using bash or another shell interpreter.
  • T1083 – File and Directory Discovery – the attacker browses system files and cron directories to identify valuable scripts.
  • T1070.004 – Indicator Removal: File Deletion – often attackers attempt to clean up after interacting with trap files.
  • T1562.001 – Impair Defenses: Disable or Modify Tools – attempting to disable cron monitoring or logging after detection is common.

MITRE Engage Mapping

  • Elicit > Reassure > Artifact Diversity – deploying decoy credentials or artifacts to create a convincing and varied environment for the adversary.
  • Affect > Disrupt > Software Manipulation – modifying system or application software to alter attacker experience, disrupt automation, or degrade malicious tooling.
  • Affect > Disrupt > Isolation – segregating attacker interactions or dynamically altering access to increase confusion and contain threats.
  • Expose > Detect > Lures – observing, logging, and analyzing adversary actions for intelligence and response purposes.

Randomized Friction

Randomized friction aims at increasing an attacker’s work factor, in turn increasing the operational cost for the adversary. Introducing unpredictability in system responses (e.g. intermittent latency, randomized errors, inconsistent firewall behavior) forces attackers to adapt continually, degrading their efficiency and increasing the likelihood of detection.

Example: Randomized Edge Behavior in Cloud Perimeter Defense

Imagine a blue/red team exercise within a large cloud-native enterprise. The security team deploys randomized friction techniques on a network segment believed to be under passive recon by red team actors. The strategy can include intermittent firewall rule randomization. Some of these rules ensure that attempts to reach specific HTTP-based resources are met with occasional timeouts, 403 errors, misdirected HTTP redirects, or, at other times, an actual response.

When the red team conducts external reconnaissance and tries to enumerate target resources, they experience inconsistent results. One of their obvious objectives is to remain undetected. Some ports appear filtered one moment and open the next. API responses switch between errors, basic authentication challenges, or other missing-element challenges (e.g. a required HTTP request header is absent). This forces red team actors to waste time revalidating findings, rewriting tooling, and second-guessing whether their scans were flawed or whether detection had occurred.

Crucially, during this period, defenders are capturing every probe and fingerprint attempt. The friction-induced inefficiencies increase attacker dwell time and the volume of telemetry, making detection and attribution easier. Eventually, frustrated by the lack of consistent results, the red team escalates their approach. This kills their attempts at stealth and triggers active detection systems.

This experiment successfully degrades attacker efficiency, increases their operational cost, and expands the defenders’ opportunity window for early detection and response, all without disrupting legitimate internal operations. While it does take effort on the defending side to set all of this up, the outcome would be well worth it.
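The response randomization can be sketched as a weighted choice in Python. The behavior set, weights, and handler shape are hypothetical assumptions for this sketch; a real deployment would live in a reverse proxy or firewall layer:

```python
import random

# Candidate behaviors for each inbound probe; the weights are illustrative and
# would be tuned so legitimate clients rarely notice the friction.
BEHAVIORS = ("timeout", "403", "redirect", "real")
WEIGHTS = (0.25, 0.25, 0.2, 0.3)

def pick_behavior(rng: random.Random) -> str:
    """Choose one behavior per request, weighted."""
    return rng.choices(BEHAVIORS, weights=WEIGHTS, k=1)[0]

def handle_probe(path: str, rng: random.Random) -> dict:
    """Return an inconsistent response so scanners never see a stable surface."""
    behavior = pick_behavior(rng)
    if behavior == "timeout":
        return {"path": path, "status": None}  # drop: let the client hang
    if behavior == "403":
        return {"path": path, "status": 403}
    if behavior == "redirect":
        return {"path": path, "status": 302, "location": "/login"}
    return {"path": path, "status": 200}       # the occasional genuine response
```

Repeated probes of the same path yield different outcomes, which is exactly what forces scanners to revalidate their findings.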

The following ATT&CK techniques and Engage activities can be mapped to this example of randomized friction to support the desired defensive disruption:

MITRE ATT&CK Mapping

  • T1595 – Active Scanning – adversaries conducting external enumeration are directly impacted by inconsistent firewall responses.
  • T1046 – Network Service Discovery – random port behavior disrupts service mapping efforts by the attacker.
  • T1583.006 – Acquire Infrastructure: Web Services – attackers using disposable cloud infrastructure for scanning may burn more resources due to retries and inefficiencies.

MITRE Engage Mapping

  • Elicit > Reassure > Artifact Diversity – deploying decoy credentials or artifacts to create a convincing and varied environment for the adversary.
  • Elicit > Reassure > Burn-In – introducing friction, delays, or noise to slow down or frustrate automated attacker activities.
  • Affect > Disrupt > Software Manipulation – modifying system or application software to alter attacker experience, disrupt automation, or degrade malicious tooling.
  • Affect > Disrupt > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations.
  • Affect > Disrupt > Isolation – segregating attacker interactions or dynamically altering access to increase confusion and contain threats.
  • Expose > Detect > Network Analysis – observing, logging, and analyzing adversary actions for intelligence and response purposes.

Ambiguity Engineering

Ambiguity engineering aims to obscure the adversary’s mental model. It is the deliberate obfuscation of system state, architecture, and behavior. When attackers cannot build accurate models of the target environments, their actions become riskier and more error-prone. Tactics include using ephemeral resources, shifting IP addresses, inconsistent responses, and mimicking failure states.

Example: Ephemeral Infrastructure and Shifting Network States in Zero Trust Architectures

A SaaS provider operating in a zero trust environment can implement ambiguity engineering as part of its cloud perimeter defense strategy. In this setup, let’s consider a containerized ecosystem that leverages Kubernetes-based orchestration. This platform can utilize elements such as ephemeral IPs and DNS mappings, rotating them at set intervals. These container-hosted backend services would be accessible only via authenticated service mesh gateways, but appear (to external entities) to intermittently exist, fail, or time out, depending on timing and access credentials.

Consider an external attacker’s experience against a target such as this. These attackers would be looking for initial access followed by lateral movement and service enumeration inside the target environment. What they would encounter are API endpoints that resolve one moment and vanish the next. Port scans would deliver inconsistent results across multiple iterations. Even successful service calls can return varying error codes depending on timing and the identity of the caller. When attackers try to correlate observed system behaviors into a coherent attack path, they continually hit dead ends.

This environment is not broken; it is intentionally engineered for ambiguity. The ephemeral nature of resources, combined with intentional mimicry of common failure states, prevents attackers from forming a reliable mental model of system behavior. Frustrated and misled, attackers see their attack chains slow, their errors increase, and their risk of detection rise. Meanwhile, defenders can capture behavioral fingerprints from the failed attempts and gather critical telemetry for informed future threat hunting and active protection.
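One way to sketch the “appears and vanishes” behavior is to derive endpoint visibility from a secret and a rotating time slot, so only parties holding the secret can predict when a surface exists. `ROTATION_SECONDS`, the placeholder `SECRET`, and the roughly-half-visible policy are assumptions for this sketch, not a real key-management scheme:

```python
import hashlib

# Visibility rotates on a fixed cadence; scanners without the secret see the
# endpoint intermittently exist and disappear on an unpredictable schedule.
ROTATION_SECONDS = 300
SECRET = b"rotate-me"  # illustrative placeholder only

def endpoint_visible(name: str, now: float) -> bool:
    """Deterministic for a given time slot, unpredictable without SECRET."""
    slot = int(now // ROTATION_SECONDS)
    digest = hashlib.sha256(SECRET + name.encode() + str(slot).encode()).digest()
    return digest[0] % 2 == 0  # visible in roughly half of all slots
```

Within one rotation slot the answer is stable (so authenticated mesh clients can be routed consistently), but across slots it flips on a schedule an outside scanner cannot model.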

The following ATT&CK techniques and Engage activities can be mapped to this example of ambiguity engineering to support the desired defensive disruption:

MITRE ATT&CK Mapping

  • T1046 – Network Service Discovery – scanning results are rendered unreliable by ephemeral network surfaces and dynamic service allocation.
  • T1590 – Gather Victim Network Information – environmental ambiguity disrupts adversary reconnaissance and target mapping.
  • T1001.003 – Data Obfuscation: Protocol or Service Impersonation – false failure states and protocol behavior can mimic broken or legacy services, confusing attackers.

MITRE Engage Mapping

  • Elicit > Reassure > Artifact Diversity – deploying decoy credentials or artifacts to create a convincing and varied environment for the adversary.
  • Elicit > Reassure > Burn-In – introducing friction, delays, or noise to slow down or frustrate automated attacker activities.
  • Affect > Disrupt > Software Manipulation – modifying system or application software to alter attacker experience, disrupt automation, or degrade malicious tooling.
  • Affect > Disrupt > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations.
  • Affect > Disrupt > Isolation – segregating attacker interactions or dynamically altering access to increase confusion and contain threats.
  • Expose > Detect > Network Analysis – observing, logging, and analyzing adversary actions for intelligence and response purposes.
  • Affect > Direct > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations.

Disinformation Campaigns and False Flag Operations

Just as nation-states use disinformation to mislead public opinion, defenders can plant false narratives within ecosystems. Examples include fake internal threat intel feeds, decoy sensitive documents, or impersonated attacker TTPs designed to confuse attribution.

In a false flag operation, an environment mimics the behaviors of known APTs. The goal is to get one attack group to think another group is at play within a given target environment. This can redirect adversaries’ assumptions and deceive real actors at an operational stage.

Example: False Flag TTP Implantation to Disrupt Attribution

Consider a long-term red vs. blue engagement inside a critical infrastructure simulation network. The blue team defenders implement a false flag operation by deliberately injecting decoy threat actor behavior into their environment. This can include elements such as:

  • Simulated PowerShell command sequences that mimic APT29 (https://attack.mitre.org/groups/G0016/) based on known MITRE ATT&CK chains.
  • Fake threat intel logs placed in internal ticketing systems referring to OilRig or APT34 (https://attack.mitre.org/groups/G0049/) activity.
  • Decoy documents labeled as “internal SOC escalation notes” with embedded references to Cobalt Strike Beacon callbacks allegedly originating from Eastern European IPs.

All of these artifacts can be placed in decoy systems, honeypots, and threat emulation zones designed to be probed or breached. The red team, tasked with emulating an external APT, stumbles upon these elements during lateral movement and begins adjusting its operations based on the perceived threat context. The red team will incorrectly assume that a separate advanced threat actor is, or was, already in the environment.

This seeded disinformation can slow the red team’s operations, divert their recon priorities, and lead them to take defensive measures that burn time and resources (e.g. avoiding fake IOC indicators and misattributed persistence mechanisms). On the defense side, telemetry confirms which indicators were accessed and how attackers reacted to the disinformation. This can become very predictive of what a real attack group would do. Ultimately, defenders can control the narrative within an engagement of this sort by manipulating perception.
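Seeding the decoy artifacts can be as simple as generating plausible-looking records. The ticket fields, note strings, and ID format here are all invented for illustration; none of the indicator text is a real IOC:

```python
import datetime

# Decoy "threat intel" notes referencing another actor's tradecraft; placed in
# systems expected to be probed so intruders misattribute prior activity.
DECOY_NOTES = [
    "SOC escalation: Cobalt Strike Beacon callback observed, pending review",
    "Possible APT29-style encoded PowerShell on build host",
]

def make_decoy_ticket(note: str, seq: int) -> dict:
    """Shape one fake internal ticketing-system record."""
    return {
        "id": f"SOC-{seq:05d}",
        "created": datetime.datetime(2024, 1, 1, 4, 0).isoformat(),
        "note": note,
        "classification": "internal-only",  # bait: invites closer inspection
    }

def seed_tickets() -> list:
    """Generate one decoy ticket per seeded note."""
    return [make_decoy_ticket(n, i) for i, n in enumerate(DECOY_NOTES, start=1)]
```

Instrumenting reads of these records (e.g. with the canary approach from the earlier time-based trap example) tells defenders exactly which pieces of disinformation the intruder consumed.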

The following ATT&CK techniques and Engage activities can be mapped to this example of disinformation to support the desired defensive disruption:

MITRE ATT&CK Mapping

  • T1005 – Data from Local System – adversaries collect misleading internal documents and logs during lateral movement.
  • T1204.002 – User Execution: Malicious File – decoy files mimicking malware behavior or containing false IOCs can trigger adversary toolchains or analysis pipelines.
  • T1070.001 – Indicator Removal: Clear Windows Event Logs – adversaries may attempt to clean up logs that include misleading breadcrumbs, thereby reinforcing the deception.

MITRE Engage Mapping

  • Elicit > Reassure > Artifact Diversity – deploying decoy credentials or artifacts to create a convincing and varied environment for the adversary.
  • Elicit > Reassure > Burn-In – introducing friction, delays, or noise to slow down or frustrate automated attacker activities.
  • Affect > Disrupt > Software Manipulation – modifying system or application software to alter attacker experience, disrupt automation, or degrade malicious tooling.
  • Affect > Disrupt > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations.
  • Affect > Disrupt > Isolation – segregating attacker interactions or dynamically altering access to increase confusion and contain threats.
  • Affect > Direct > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations.
  • Expose > Detect > Network Analysis – observing, logging, and analyzing adversary actions for intelligence and response purposes.

Real-World Examples of Security Chaos Engineering

One of the most compelling real-world examples of this chaos-based approach comes from UnitedHealth Group (UHG). As one of the largest healthcare enterprises in the United States, UHG faced the dual challenge of maintaining critical infrastructure uptime while ensuring robust cyber defense. Rather than relying solely on traditional security audits or simulations, UHG pioneered the use of chaos engineering for security.

UHG

UHG’s security team developed an internal tool called ChaoSlingr (no longer maintained, located at https://github.com/Optum/ChaoSlingr). This was a platform designed to inject security-relevant failure scenarios into production environments. It included features like degrading DNS resolution, introducing latency across east-west traffic zones, and simulating misconfigurations. The goal wasn’t just to test resilience; it was to validate that security operations mechanisms (e.g. logging, alerting, response) would still function under duress. In effect, UHG weaponized unpredictability, making the environment hostile not just to operational errors, but to adversaries who depend on stability and visibility.
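The hypothesis-driven loop behind tools of this kind can be sketched generically: state a hypothesis, inject a fault, observe whether the security control still fired, and always roll back. The function names and the toy state dictionary below are hypothetical, not ChaoSlingr’s actual API:

```python
def run_experiment(name, hypothesis, inject, observe, rollback):
    """Run one security chaos experiment and report whether the hypothesis held."""
    try:
        inject()          # e.g. open an unexpected port, disable a detection rule
        held = observe()  # e.g. did the expected alert still fire?
    finally:
        rollback()        # always restore the environment, even on failure
    return {"experiment": name, "hypothesis": hypothesis, "held": held}

# Toy usage: "an unexpectedly opened port still triggers an alert".
state = {"alert_fired": False, "port_open": False}
result = run_experiment(
    name="unexpected-open-port",
    hypothesis="misconfigured port triggers an alert",
    inject=lambda: state.update(port_open=True),
    observe=lambda: state.update(alert_fired=state["port_open"]) or state["alert_fired"],
    rollback=lambda: state.update(port_open=False),
)
```

The `finally` block is the important design choice: the environment is restored whether or not the hypothesis held, which is what makes such experiments safe to run repeatedly.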

Datadog

This philosophy is gaining traction. Forward-thinking vendors like Datadog have begun formalizing security chaos engineering practices and providing frameworks that organizations can adopt regardless of scale. In its blog post “Chaos Engineering for Security” (https://www.datadoghq.com/blog/chaos-engineering-for-security/), Datadog outlines practical attack-simulation experiments defenders can run to proactively assess resilience. These include:

  • Simulating authentication service degradation to observe how cascading failures are handled in authentication and/or Single Sign-On (SSO) systems.
  • Injecting packet loss to measure how network inconsistencies are handled.
  • Disrupting DNS resolution.
  • Testing how incident response tooling behaves under conditions of network instability.

By combining production-grade telemetry with intentional fault injection, teams gain insights that traditional red teaming and pen testing can’t always surface. This is accentuated when considering systemic blind spots and cascading failure effects.

What ties UHG’s pioneering work and Datadog’s vendor-backed framework together is a shift in mindset. The shift is from static defense to adaptive resilience. Instead of assuming everything will go right, security teams embrace the idea that failure is inevitable. As such, they engineer their defenses to be antifragile. But more importantly, they objectively and fearlessly test those defenses and adjust when original designs were simply not good enough.

Security chaos engineering isn’t about breaking things recklessly. It’s about learning before the adversary forces you to. For defenders seeking an edge, unpredictability might just be the most reliable ally.

From Fragility to Adversary Friction

Security chaos engineering has matured from a resilience validation tool to a method of influencing and disrupting adversary operations. By incorporating techniques such as temporal deception, ambiguity engineering, and the use of disinformation, defenders can force attackers into a reactive posture. Moreover, defenders can delay offensive objectives targeted at them and increase their attackers’ cost of operations. This strategic use of chaos allows defenders not just to protect an ecosystem but to shape adversary behavior itself. This is how security chaos engineering disrupts adversaries in real time.

Identity Risk Intelligence and its Role in Disinformation Security


From Indicators to Identity: A CISO’s guide to identity risk intelligence and its role in disinformation security

The power of signals, or indicators, is evident to those who understand them. They are the basis for identity risk intelligence and its role in disinformation security. For years, cybersecurity teams have anchored their defenses on Indicators of Compromise (IOCs), such as IP addresses, domain names, and file hashes, to identify and neutralize threats.

Technical artifacts offer security value, but alone, they’re weak against advanced threats. Attackers possess the capability to seamlessly spoof their traffic sources and rapidly cycle through their operational infrastructure. Malicious IP addresses change quickly, making reactive blocking continuously futile. Flagged IPs might be transient The Onion Router (Tor) nodes, not the actual attackers themselves. Similarly, the static nature of malware file hashes makes them susceptible to trivial alterations. Attackers can modify a file’s hash in mere seconds, effectively evading signature-based detection systems. The proliferation of polymorphic malware, which automatically changes its code after each execution, further exacerbates this issue, rendering traditional hash-based detection methods largely ineffective.

Cybersecurity teams that subscribe to voluminous threat intelligence feeds face an overwhelming influx of data, a substantial portion of which rapidly loses its relevance. These massive “blacklists” of IOCs quickly become outdated or irrelevant due to the ephemeral nature of attacker infrastructure and the ease of modifying malware signatures. This data overload presents a significant challenge for security analysts and operations teams, making it increasingly difficult to discern genuine threats from the surrounding noise and to construct effective proactive protective mechanisms. The overload obscures critical signals, rendering traditional intelligence ineffective: it details attacks but often misses the responsible actor, and critically, it provides little to no insight into how to prevent similar attacks from occurring in the future.

The era of readily identifying malware before user execution is largely behind us. Contemporary security breaches frequently involve elements that traditional IOC feeds cannot reveal – most notably, compromised identities. Verizon’s 2024 Data Breach Investigations Report (DBIR) indicated that the use of stolen credentials has been a factor in nearly one-third (31%) of all breaches over the preceding decade (https://www.verizon.com/about/news/2024-data-breach-investigations-report-emea). This statistic is further underscored by Varonis’ 2024 research, which revealed that 57% of cyberattacks initiate with a compromised identity (https://www.varonis.com/blog/the-identity-crisis-research-report).

Essentially, attackers are increasingly opting to log in rather than hack in. These crafty adversaries exploit exposed valid username and password combinations, whether obtained through phishing campaigns, purchased on dark web marketplaces, or harvested from previous data breaches. With these compromised credentials, attackers can impersonate legitimate users and quietly bypass numerous security controls. This approach extends to authenticated session objects, effectively nullifying the security benefits of Multi-Factor Authentication (MFA) in certain scenarios. While many CISOs advocate for MFA as a panacea for various security challenges, the reality is that it does not address the fundamental risks associated with compromised identities. IOCs and traditional defenses miss attacks from seemingly legitimate, compromised users. This paradigm shift necessitates a proactive and forward-thinking approach to cybersecurity, leading strategists to pivot towards identity-centric cyber intelligence.

Identity intelligence shifts the focus from technical IOCs to monitoring digital entities: rather than simply blocking IP addresses, security teams now ask, “Which identities are compromised?” This evolved approach involves establishing connections between various signals, including usernames, email addresses, and even passwords, across a multitude of data breaches and leaks to construct a more comprehensive understanding of both risky identities and the threat actors employing them, along with their associated tactics. The volume of data analyzed directly determines this approach’s efficacy; more data leads to richer and more accurate intelligence. When an unusual login occurs, identity intelligence can immediately check whether the credentials involved are known to be compromised, and historical data can be examined to identify patterns of misuse. When a pattern recurs, an isolated anomaly is elevated to a significant event that may indicate a broader attack. This kind of data correlation provides contextual awareness that traditional intelligence lacks.
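The correlation idea above can be illustrated with a minimal sketch. The breach records, field names, and the password-reuse heuristic below are illustrative assumptions, not the schema of any real intelligence platform:

```python
from collections import defaultdict

# Hypothetical breach records: (breach_name, email, password_hash).
BREACH_RECORDS = [
    ("shopsite-2023", "j.doe@example.com", "5f4dcc3b"),
    ("forum-2024", "j.doe@example.com", "5f4dcc3b"),
    ("saas-2024", "a.smith@example.com", "d8578edf"),
    ("forum-2024", "a.smith@example.com", "25d55ad2"),
]

def correlate_identities(records):
    """Group breach exposures by identity and flag likely password reuse."""
    by_email = defaultdict(list)
    for breach, email, pw_hash in records:
        by_email[email].append((breach, pw_hash))
    report = {}
    for email, pairs in by_email.items():
        breaches = {breach for breach, _ in pairs}
        hash_to_breaches = defaultdict(set)
        for breach, pw_hash in pairs:
            hash_to_breaches[pw_hash].add(breach)
        report[email] = {
            "breach_count": len(breaches),
            # The same password hash surfacing in multiple breaches is a
            # strong signal of password reuse and elevated identity risk.
            "password_reuse": any(len(b) > 1 for b in hash_to_breaches.values()),
        }
    return report

report = correlate_identities(BREACH_RECORDS)
print(report["j.doe@example.com"])
```

Even this toy correlation surfaces context a raw IOC feed cannot: the same identity exposed across multiple unrelated breaches, with the same reused password.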

Fundamentally, identity signals play a crucial role in distinguishing legitimate users from imposters or synthetic identities operating within an environment. In an era characterized by remote and hybrid work models, widespread adoption of cloud services, and the ease of leveraging Virtual Private Network (VPN) services, attackers are increasingly attempting to create synthetic identities – fictitious users, IT personnel, or contractors – to infiltrate organizations. They may also target and compromise the identities of valid users within a given environment.

While traditional indicators like the source IP address of a login offer little value in determining whether a user truly exists within an organization’s Active Directory (AD) or whether that user is a genuine employee versus a fabricated identity, an identity-centric approach excels in this area. It does so by meticulously analyzing multiple attributes associated with an identity, such as the employee’s email address, phone number, or other Personally Identifiable Information (PII), against extensive data stores of known breached data and fraudulent identities. Identity risk intelligence can also unearth data on identities that simply appear risky. For example, if an email address with no prior legitimate online presence suddenly appears across numerous unrelated breach datasets, it could strongly suggest a synthetic profile.

Some advanced threat intelligence platforms now employ entity graphing to visually map and correlate these intricate and seemingly unrelated signals. Entity graphing involves constructing a network of relationships between various signals – connecting email addresses to passwords, passwords to specific data breaches, usernames to associated online personas, IP addresses to user accounts, and so forth. These interconnected graphs can become highly complex, yet they possess a remarkable ability to reveal hidden links that would remain invisible to a human analyst examining raw data.

An entity graph might reveal that a single Gmail address links multiple accounts across different companies and surfaces within criminal forums, strongly implicating a single threat actor who orchestrates activities across various environments. Often, these email addresses utilize convoluted strings for the username component to deliberately obfuscate the individual’s real name. By pivoting on identity-focused nodes within the graph, analysts can uncover associations between threat actors who employ obscure data points. The resulting intelligence is of high fidelity, sometimes pointing not merely to isolated threat artifacts but directly to the human adversary orchestrating a malicious campaign. This represents a new standard for threat intelligence, one where understanding the identity of the individual behind the keyboard is as critical as comprehending the specific Tactics, Techniques, and Procedures (TTPs) they employ.
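The pivoting described above amounts to a graph traversal. The sketch below uses plain adjacency sets and a breadth-first search; the node labels and edges are hypothetical signals invented for illustration:

```python
from collections import defaultdict, deque

def build_graph(edges):
    """Undirected entity graph: each edge links two observed artifacts."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def connected_entities(graph, start):
    """Breadth-first traversal: everything reachable from one pivot node."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Hypothetical signals: an email tied to a breach dump, a forum alias,
# an IP address, and a corporate account.
edges = [
    ("email:xk9q@gmail.example", "breach:forum-dump-2023"),
    ("email:xk9q@gmail.example", "alias:darkweb_persona_42"),
    ("alias:darkweb_persona_42", "ip:203.0.113.7"),
    ("ip:203.0.113.7", "account:corpA\\jsmith"),
    ("email:unrelated@example.com", "breach:other-2022"),
]

graph = build_graph(edges)
cluster = connected_entities(graph, "email:xk9q@gmail.example")
print(sorted(cluster))
```

Pivoting on the single email node pulls the underground alias, the IP address, and the corporate account into one cluster, while the unrelated identity stays outside it; production platforms apply the same principle at vastly larger scale.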

The power of analyzing signals for threat intelligence is not a new concept. For example, the NSA’s ThinThread project in the 1990s aimed to analyze massive amounts of phone and email metadata to identify potential threats (https://en.wikipedia.org/wiki/ThinThread). ThinThread was designed to sort through this data, encrypt US-related communications for privacy, and use automated systems to audit how analysts handled the information. By analyzing relationships between callers and their contacts, the system could identify potential threats, and only then would the data be decrypted for further analysis.

Despite rigorous testing and demonstrating superior data-sorting capabilities compared to existing systems, ThinThread was discontinued shortly before the 9/11 attacks. The core component of ThinThread, known as MAINWAY, which focused on analyzing communication patterns, was later deployed and became a key part of the NSA’s domestic surveillance program. This historical example illustrates the potential of analyzing seemingly disparate signals to gain critical insights into potential threats, a principle that underpins modern identity risk intelligence.

Real-World Example: North Korean IT Workers Using Disinformation/Synthetic Identities for Cyber Espionage

No recent event more clearly underscores the urgent need for identity-centric intelligence than the numerous documented cases of North Korean intelligence operatives nefariously infiltrating companies by masquerading as remote IT workers. While this scenario might initially sound like a plot from a Hollywood thriller, it is unfortunately a reality that many organizations have fallen victim to. Highly skilled agents from North Korea meticulously craft elaborate fake personas, complete with fabricated online presences, counterfeit resumes, stolen personal data, and even AI-generated profile pictures, all to secure employment at companies in the West. Once these operatives successfully gain employment, data exfiltration, or at the very least the attempt thereof, becomes virtually inevitable. In some particularly insidious cases, these malicious actors diligently perform the IT work they were hired to do, effectively keeping suspicions at bay for extended periods.

In 2024, U.S. investigators corroborated the widespread nature of this tactic, revealing compelling evidence that groups of North Korean nationals had fraudulently obtained employment with American companies by falsely presenting themselves as citizens of other countries (https://www.justice.gov/archives/opa/pr/fourteen-north-korean-nationals-indicted-carrying-out-multi-year-fraudulent-information). These operatives engaged in the creation of entirely synthetic identities to successfully navigate background checks and interviews. They acquired personal information, either by “borrowing” or purchasing it from real citizens, and presented themselves as proficient software developers or IT specialists available for remote work. In one particularly concerning confirmed case, a North Korean hacker secured a position as a software developer for a cybersecurity company by utilizing a stolen American identity further bolstered by an AI-generated profile photo – effectively deceiving both HR personnel and recruiters. This deceptive “employee” even successfully navigated multiple video interviews and passed typical scrutiny.

In certain instances, the malicious actors exhibited a lack of subtlety and wasted no time in engaging in harmful activities. Reports suggest that North Korean actors exfiltrated sensitive proprietary data within mere days of commencing employment. They often stole valuable source code and other confidential corporate information, which they then used for extortion. In one instance, KnowBe4, a security training firm, discovered that a newly hired engineer on their AI team was covertly downloading hacking tools onto the company network (https://www.knowbe4.com/press/knowbe4-issues-warning-to-organizations-after-hiring-fake-north-korean-employee). Investigators later identified this individual as a North Korean operative utilizing a fabricated identity, and proactive monitoring systems allowed them to apprehend him in time by detecting the suspicious activity.

For HR leaders, CISOs, and CTOs, the lesson is that traditional security fails against sophisticated insider threats of this kind, and that detecting a synthetic insider early is crucial to preventing late-stage damage. This is precisely where the intrinsic value of identity risk intelligence becomes evident. By proactively incorporating identity risk signals early in the screening process, organizations can identify red flags indicating a potentially malicious imposter before they gain access to the internal network. For example, an identity-centric approach might have flagged the KnowBe4 hire as high-risk even before onboarding by uncovering inconsistencies or prior exposure of their personal data. Conversely, the complete absence of any historical data breaches associated with an identity could also be a suspicious indicator. Consider the types of disinformation security that identity intelligence enables:

  • Digital footprint verification – by leveraging extensive breach and darknet databases, security analysts and operators can thoroughly investigate whether a job applicant’s claimed identity has any prior history. If an email address or name appears exclusively in breach data associated with entirely different individuals, or if a supposed U.S.-based engineer’s records trace back to IP addresses in other countries, these discrepancies should immediately raise concerns. In the context of disinformation security, examining a user’s online presence across platforms in this way exposes inconsistencies, or a complete lack of a genuine footprint, that suggest a fabricated identity used to spread false information or gain unauthorized access.
  • Proof of life or Synthetic identity detection – advanced platforms possess the capability to analyze combinations of PII to determine the chain of life, or the likelihood of an identity being genuine versus fabricated. For instance, if an individual’s social media presence is non-existent or their provided photo is identified as AI-generated (as was the case with the deceptive profile picture used by the hacker at KnowBe4), these are strong indicators of a synthetic persona. This is a critical aspect of disinformation security, as threat actors often use AI-generated profiles to create believable but fake identities for malicious purposes. AI algorithms and machine learning techniques play a crucial role in detecting these subtle anomalies within vast datasets. Behavioral biometrics, which analyzes unique user interaction patterns with devices, can further aid in distinguishing between genuine and synthetic identities.
  • Continuous identity monitoring – even after an individual is hired, the continuous monitoring of their activity and credentials can expose anomalies. For example, if a contractor’s account suddenly appears in a credential dump online, identity-focused alerts should immediately notify security teams. For disinformation security, this allows for the detection of compromised accounts that might be used to spread malicious content or propaganda.
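The screening checks above can be sketched as a simple rule set. Every signal name and threshold below is an assumption made for illustration; a real platform would derive these from large breach and darknet datasets rather than a hand-built dictionary:

```python
# Illustrative red-flag checks for a claimed identity. Signals and thresholds
# here are assumptions for this sketch, not the output of any real product.
def synthetic_identity_flags(applicant):
    flags = []
    # A total absence of history can be as suspicious as a breach-laden one.
    if applicant["breach_appearances"] == 0 and applicant["account_age_years"] < 1:
        flags.append("no digital footprint: identity may be newly fabricated")
    if applicant["breach_names_mismatch"]:
        flags.append("breach records tie this email to different real names")
    if applicant["claimed_country"] != applicant["observed_login_country"]:
        flags.append("claimed location inconsistent with observed logins")
    if applicant["photo_ai_generated"]:
        flags.append("profile photo classified as AI-generated")
    return flags

applicant = {
    "breach_appearances": 0,
    "account_age_years": 0,
    "breach_names_mismatch": False,
    "claimed_country": "US",
    "observed_login_country": "KP",
    "photo_ai_generated": True,
}
flags = synthetic_identity_flags(applicant)
for flag in flags:
    print("-", flag)
```

In practice these rules would feed a screening workflow: any flag raised before onboarding triggers manual identity verification rather than an automatic rejection.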

These types of sophisticated disinformation campaigns underscore the critical importance of linking cyber threats to identity risk intelligence. Static IOCs would fail to reveal the inherent danger of a seemingly “normal” user account that happens to belong to a hostile actor. However, identity-centric analysis – meticulously vetting the true identity of an individual and determining whether their digital persona has any connections to known threat activity – can provide defenders with crucial early warnings before an attacker gains significant momentum.

This is threat attribution in action: by prioritizing identity signals, suspicious activity can be attributed to the actual threat actor. The Lazarus Group, for instance, uses social engineering on platforms like LinkedIn to distribute malware and steal credentials, highlighting the need for identity-focused monitoring even on professional networks. Similarly, APT29 (Cozy Bear) employs advanced spear-phishing campaigns, underscoring the importance of verifying the legitimacy of individuals and their digital footprints.

The Role of Identity Risk Intelligence in Strengthening Security Posture

To proactively defend against the evolving landscape of modern threats, organizations must embrace disinformation security strategies and seamlessly integrate identity-centric intelligence directly into their security operations. The core principle is to enrich every security decision with valuable context about identity risk. This means that whenever a security alert is triggered, or an access request is initiated, the security ecosystem should pose the additional critical question: “is this identity potentially compromised or fraudulent?”. By adopting this proactive approach, companies can transition from a reactive posture to a proactive one in mitigating threats:

  • Early compromised credential detection – imagine an employee’s credentials leak in a third-party breach. Traditional security would not notice until active login attempts begin, but identity risk intelligence alerts the moment the credentials are detected in a breach or dark web dump. This early warning allows the security team to take immediate and decisive action, such as forcing a password reset or invalidating active sessions. Integrating these timely identity risk signals into Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) systems enables such alerts to trigger automated responses without requiring manual intervention. Taking this further, one can proactively enrich Single Sign-On (SSO) systems and web application authentication frameworks with real-time identity risk intelligence. The following table illustrates recent high-profile data breaches where compromised credentials played a significant role:

    Table 1: Recent High-Profile Data Breaches Involving Compromised Credentials (2024-2025)

| Organization | Date | Estimated Records Compromised | Attack Vector | Reference |
| --- | --- | --- | --- | --- |
| Change Healthcare | Feb 2024 | 100M+ | Compromised Credentials | Reference |
| Snowflake | May 2024 | 165+ Orgs | Compromised Credentials | Reference |
| AT&T | Apr 2024 | 110M | Compromised Credentials | Reference |
| Ticketmaster | May 2024 | 560M | Compromised Credentials (implied) | Reference |
| UK Ministry of Defence | May 2024 | 270K | Compromised Credentials (potential) | Reference |
| New Era Life Insurance Companies | Feb 2025 | 335K | Hacking | Reference |
| Hospital Sisters Health System | Feb 2025 | 882K | Cyberattack | Reference |
| PowerSchool | Feb 2025 | 62M | Cyberattack | Reference |
| GrubHub | Feb 2025 | Undisclosed | Compromised Third-Party Account | Reference |
| DISA Global | Feb 2025 | 3.3M | Unauthorized Access | Reference |
| Finastra | Nov 2024 & Feb 2025 | 400GB & 3.3M | Unauthorized Access | Reference |
| Legacy Professionals LLP | Feb 2025 | 215K | Suspicious Activity | Reference |
| Bankers Cooperative Group, Inc | Aug 2024 | Undisclosed | Compromised Email | Reference |
| Medusind Inc. | Jan 2025 | 112K | Data Seizure | Reference |
| TalkTalk | Jan 2025 | 18.8M | Third-Party Supplier Breach | Reference |
| Gravy Analytics | Jan 2025 | Millions | Unauthorized Access | Reference |
| Unacast | Jan 2025 | Undisclosed | Misappropriated Key | Reference |
  • Identity risk posture for users – leading providers offer something like an “Identity Risk Posture” Application Programming Interface (API), which yields a categorized value representing the level of exposure or risk associated with a given identity. The score derives from meticulous analysis of a vast amount of data about that identity across the digital landscape: the types of exposed attributes, the categories of breaches, and the recency of the data. A CISO’s team can strategically use such a posture value to prioritize decisions and security actions. For example, if a Data Security Posture Management (DSPM) solution identifies a series of users with access to specific data resources, and the security team finds that any of those users have a high-risk posture, they could respond with investigations, mandated hardware MFA devices, or more frequent and specialized security awareness training.
  • Threat attribution and hunting – identity-centric intelligence significantly empowers threat hunters to connect seemingly disparate signals, security events, and incidents. In the event of a phishing attack, a traditional response might conclude by simply blocking the sender’s email address and domain. However, incorporating identity data into the analysis might reveal that the phishing email address previously registered an account on a popular developer forum, and the username on that forum corresponds to a known alias of a specific cybercrime group. This enriched attribution helps establish a definitive link between attacks and specific threat actors or groups. Knowing precisely who is targeting your organization enables you to tailor your defenses and incident response processes more effectively. Moreover, a security team can then proactively hunt for specific traces within a given environment. This type of intelligence introduces a new dimension to threat attribution, transforming anonymous attacks into attributable actions by identifiable adversaries.
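The automated-response idea in the first bullet can be sketched as a tiny SOAR-style playbook. The event shape and action names below are hypothetical stand-ins; a real deployment would invoke the identity provider's or SIEM's actual APIs:

```python
# Minimal SOAR-style playbook sketch: map an identity-risk event to automated
# responses. Event fields and action names are illustrative assumptions.
def handle_credential_exposure(event, actions):
    responses = []
    if event["type"] == "credential_in_breach_dump":
        # Contain first: rotate the password and kill live sessions.
        responses.append(actions["force_password_reset"](event["user"]))
        responses.append(actions["revoke_sessions"](event["user"]))
        # Escalate for identities whose overall risk posture is high.
        if event.get("risk_posture") == "high":
            responses.append(actions["require_hardware_mfa"](event["user"]))
    return responses

# Stub actions; a real deployment would call the SSO / identity provider's API.
actions = {
    "force_password_reset": lambda user: f"reset:{user}",
    "revoke_sessions": lambda user: f"revoke:{user}",
    "require_hardware_mfa": lambda user: f"mfa:{user}",
}
event = {"type": "credential_in_breach_dump", "user": "jdoe", "risk_posture": "high"}
print(handle_credential_exposure(event, actions))
```

Keeping the playbook logic separate from the action implementations, as above, makes it straightforward to test the policy without touching production systems.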

A best practice is to integrate identity risk signals into security tools via API. Effective solutions offer API access to vast identity intelligence datasets, providing real-time alerts and comprehensive risk posture data based on a data lake of compromised identities and related data points (e.g., infostealer data). Tailored intelligence feeds continuously deliver actionable data to security operations, enabling security teams to answer critical questions such as:

  • Which employee credentials have shown up in breaches, data leaks, and/or underground markets?
  • Is an executive’s personal email account being impersonated or misused?
  • Is an executive’s personal information being used to create synthetic, realistic looking public email addresses?
  • Are there any fake social media profiles impersonating our brand or our employees?

These identity risk questions exceed traditional network security’s scope. They bring crucial external insight – information about internet activity that could potentially threaten the organization – into internal defense processes.

Furthermore, identity-centric digital risk intelligence significantly strengthens an organization’s ability to progress towards a Zero Trust (ZT) security posture. ZT security models operate on the fundamental principle of “never trust, always verify” – particularly as it relates to user identities. Real-time information about a user’s identity compromise allows the system to dynamically adjust trust levels. For example, if an administrator account’s risk posture rapidly changes from low to high, a system can require re-authentication until investigation and resolution. This dynamic and adaptive response dramatically reduces the window of opportunity for attackers. Proactive interception of stolen credentials and fake identities replaces reactive breach response.
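A minimal sketch of the dynamic trust adjustment described above, assuming a simple three-level posture model. The posture labels, policy thresholds, and session fields are illustrative assumptions, not any vendor's API:

```python
# "Never trust, always verify": re-check identity risk posture on every
# request and adjust the session decision accordingly. Illustrative policy.
def evaluate_session(identity_risk, session):
    posture = identity_risk.get(session["user"], "unknown")
    if posture == "high":
        return "step_up_auth"  # force re-authentication before continuing
    if posture == "unknown":
        return "deny"          # an unverifiable identity is not trusted
    if session["is_admin"] and posture == "medium":
        return "step_up_auth"  # tighter policy for privileged accounts
    return "allow"

# Risk postures as they might arrive from an identity intelligence feed.
identity_risk = {"alice": "low", "admin-bob": "high"}
print(evaluate_session(identity_risk, {"user": "alice", "is_admin": False}))
print(evaluate_session(identity_risk, {"user": "admin-bob", "is_admin": True}))
```

Because the posture map is re-read on every evaluation, a feed update that moves an administrator from low to high risk changes the decision on the very next request, which is exactly the shrinking attack window the Zero Trust model aims for.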

Embracing Identity-Centric Intelligence: A Call to Action

The landscape of cyber threats is in a constant state of evolution, and our defenses must adapt accordingly. IOCs alone cannot stop modern attackers, and identity-focused threats demand stronger protection. For CIOs, CISOs, and CTOs, identity-centric intelligence, and an understanding of its role in disinformation security, is now a critical strategic necessity. This necessary shift does not necessitate abandoning your existing suite of security tools; rather, it involves empowering them, where appropriate, with richer context and more identity risk intelligence signals.

By seamlessly integrating identity risk data into every aspect of security operations, from authentication workflows to incident response protocols, security teams gain holistic visibility into an attack, moving beyond fragmented views. Threat attribution capabilities become significantly enhanced, as cybersecurity teams can more accurately pinpoint who is targeting their organization. Identifying compromised credentials or accounts speeds incident response, enabling faster breach containment. Ultimately, an organization can transition to security strategies that are both proactive and resilient to disinformation.

Several key questions warrant honest and critical consideration:

  • How well do we truly know our users and their associated identities?
  • How quickly can we detect an adversary if they were operating covertly amongst our legitimate users?

If either of these questions elicits uncertainty, it is time to rigorously evaluate how identity risk intelligence can effectively bridge that critical gap. I recommend you begin by exploring solutions that aggregate breach data and provide actionable insights, such as a comprehensive risk score or posture, which your current security ecosystem can seamlessly leverage.

Identity-centric intelligence is vital against sophisticated attacks, surpassing traditional methods in its ability to detect breaches early. CISOs who view identity risk holistically, moving beyond basic IOCs, substantially improve breach prevention. The North Korean infiltration cases and the breaches cataloged above highlight the urgent need for identity-focused security. By implementing identity risk intelligence, entity graphing, and Zero Trust, organizations build a proactive, resilient security posture. Fundamentally, this requires understanding identity risk intelligence and its role in disinformation security; organizations that understand and secure their identities are equipped to navigate the complex threats ahead.