
In an age where cyber attackers have become more intelligent, agile, persistent, sophisticated, and empowered by Artificial Intelligence (AI), defenders must go beyond traditional detection and prevention. Traditional models of protective security are rapidly losing effectiveness. In the pursuit of a more proactive model, one approach has emerged: security chaos engineering. It offers a proactive strategy that doesn’t just lead to hardened systems but can also actively disrupt and deceive attackers during their operations. This article explores how security chaos engineering disrupts adversaries in real time.
By intentionally injecting controlled failures or disinformation into production-like environments, defenders can observe attacker behavior, test the resilience of security controls, and frustrate adversarial campaigns in real time.
Two of the most important frameworks shaping modern cyber defense are MITRE ATT&CK (https://attack.mitre.org/) and MITRE Engage (https://engage.mitre.org/). Together, they provide defenders with a common language for understanding adversary tactics and a practical roadmap for implementing active defense strategies, transforming intelligence about attacker behavior into actionable, measurable security outcomes. The convergence of these frameworks with security chaos engineering adds valuable structure when building actionable and measurable programs.
What is MITRE ATT&CK?
MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) is an open, globally adopted framework developed by MITRE (https://www.mitre.org/) to systematically catalog and describe the observable tactics and techniques used by cyber adversaries. The ATT&CK matrix provides a detailed map of real-world attacker behaviors throughout the lifecycle of an intrusion, empowering defenders to identify, detect, and mitigate threats more effectively. By aligning security controls, threat hunting, and incident response to ATT&CK’s structured taxonomy, organizations can close defensive gaps, benchmark their capabilities, and respond proactively to the latest adversary tactics.
What is MITRE Engage?
MITRE Engage is a next-generation knowledge base and planning framework focused on adversary engagement, deception, and active defense. Building upon concepts from MITRE Shield, Engage provides structured guidance, practical playbooks, and real-world examples to help defenders go beyond detection and actively disrupt, mislead, and study adversaries. Engage empowers security teams to plan, implement, and measure deception operations using proven techniques such as decoys, disinformation, and dynamic environmental changes. This bridges the gap between understanding attacker Tactics, Techniques, and Procedures (TTPs) and taking deliberate actions to shape, slow, or frustrate adversary campaigns.
What is Security Chaos Engineering?
Security chaos engineering is the disciplined practice of simulating security failures and adversarial conditions in running production environments to uncover vulnerabilities and test resilience before adversaries do. Its value lies in the fact that it is the closest thing to a real incident. Tabletop Exercises (TTXs) and penetration tests always have constraints and/or rules of engagement that distance them from real-world attacker scenarios, where there are no constraints. Security chaos engineering extends the principles of chaos engineering, popularized by Netflix (https://netflixtechblog.com/chaos-engineering-upgraded-878d341f15fa), to the security domain.
Instead of waiting for real attacks to reveal flaws, defenders can use automation to introduce “security chaos experiments” (e.g. shutting down servers from active pools, disabling detection rules, injecting fake credentials, modifying DNS behavior) to understand how systems and teams respond under pressure.
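As a minimal sketch of what such an experiment might look like in automation form, the snippet below follows the inject, observe, and restore pattern. The helper functions, rule identifier, and decoy account name are hypothetical placeholders for whatever SIEM, EDR, or cloud APIs an organization actually uses; this is an illustration of the pattern, not a specific product integration.

```python
# Hypothetical helpers -- in a real program these would wrap whatever SIEM,
# EDR, or cloud APIs the organization uses. They are placeholders only.
def disable_detection_rule(rule_id: str) -> None: ...
def enable_detection_rule(rule_id: str) -> None: ...
def simulate_suspicious_login(decoy_user: str) -> None: ...
def alert_fired_for(decoy_user: str, within_seconds: int) -> bool: ...

def run_security_chaos_experiment(rule_id: str, decoy_user: str) -> bool:
    """Inject a controlled failure (a disabled detection rule), replay a
    known-bad behavior, and verify the rest of the pipeline still catches it."""
    disable_detection_rule(rule_id)              # the injected "chaos"
    try:
        simulate_suspicious_login(decoy_user)    # known-bad, benign stimulus
        detected = alert_fired_for(decoy_user, within_seconds=300)
    finally:
        enable_detection_rule(rule_id)           # always restore steady state
    return bool(detected)

if __name__ == "__main__":
    survived = run_security_chaos_experiment("brute-force-rule-017", "decoy-svc-account")
    print("defense-in-depth held" if survived else "gap found: no alert fired")
```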
The Real-World Value of this Convergence
When paired with security chaos engineering, the combined use of ATT&CK and Engage opens up a new level of proactive, resilient cyber defense strategy. ATT&CK gives defenders a comprehensive map of real-world adversary behaviors, empowering teams to identify detection gaps and simulate realistic attacker TTPs during chaos engineering experiments. MITRE Engage extends this by transforming that threat intelligence into actionable deception and active defense practices, in essence providing structured playbooks for engaging, disrupting, and misdirecting adversaries. By leveraging both frameworks within a security chaos engineering program, organizations not only validate their detection and response capabilities under real attack conditions, but also test and mature their ability to deceive, delay, and study adversaries in production-like environments. This fusion shifts defenders from a reactive posture to one of continuous learning and adaptive control, turning every attack simulation into an opportunity for operational hardening and adversary engagement.
Here are some security chaos engineering techniques to consider as this becomes part of a proactive cybersecurity strategy:
Temporal Deception – Manipulating Time to Confuse Adversaries
Temporal deception involves distorting how adversaries perceive time in a system (e.g. injecting false timestamps, delaying responses, or introducing inconsistent event sequences). By disrupting an attacker’s perception of time, defenders can introduce doubt and delay operations.
Example: Temporal Deception through Delayed Credential Validation in Deception Environments
In a deception-rich enterprise network, temporal deception can be implemented by intentionally delaying credential validation responses on honeypot systems. For instance, when an attacker attempts to use harvested credentials to authenticate against a decoy Active Directory (AD) service or an exposed RDP server designed as a trap, the system introduces variable delays in login response times, irrespective of the result (e.g. success, failure). These delays mimic either overloaded systems or network congestion, disrupting an attacker’s internal timing model of the environment. This is particularly effective when attackers use automated tooling that depends on timing signals (e.g. Kerberos brute-forcing or timing-based account validation). It can also randomly slow down automated processes that an attacker hopes will complete within some time frame.
By altering expected response intervals, defenders can inject doubt about the reliability of activities such as reconnaissance and credential validity. Furthermore, the delayed responses provide defenders with crucial dwell time for detection and the tracking of lateral movement. This subtle manipulation of time not only frustrates attackers but also forces them to second-guess whether their tools are functioning correctly or if they’ve stumbled into a monitored and/or deceptive environment.
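A minimal sketch of the delay mechanism is shown below, assuming a honeypot service that routes every login attempt through a decoy validation function. The decoy account names, jitter range, and logging destination are illustrative assumptions, not a prescription.

```python
import logging
import random
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decoy-auth")

# Decoy accounts planted in the environment; any use of them is hostile.
DECOY_CREDENTIALS = {"svc_backup": "Winter2024!", "adm_legacy": "P@ssw0rd1"}

def validate_with_temporal_deception(username: str, password: str,
                                     source_ip: str) -> bool:
    """Answer a login attempt against the decoy service, but only after a
    randomized delay so automated tooling cannot build a stable timing model."""
    delay = random.uniform(1.5, 12.0)   # jitter regardless of success/failure
    log.info("decoy login attempt user=%s src=%s delay=%.1fs",
             username, source_ip, delay)
    time.sleep(delay)
    # The result itself is almost irrelevant; the interaction is the signal.
    return DECOY_CREDENTIALS.get(username) == password
```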
The following ATT&CK TTPs and Engage mappings can be used when modeling this example of temporal deception and support the desired defensive disruption:
MITRE ATT&CK Mapping
- T1110 – Brute Force – many brute force tools rely on timing-based validation. By introducing delays, defenders interfere with the attacker’s success rate and timing models.
- T1556 – Modify Authentication Process – typically this is seen as an adversary tactic. But defenders can also leverage this by modifying authentication behavior in decoy environments to manipulate attacker perception.
- T1078 – Valid Accounts – delaying responses to login attempts involving potentially compromised credentials can delay attacker progression and reveal account usage patterns.
MITRE Engage Mapping
- Elicit > Reassure > Artifact Diversity – deploying decoy credentials or artifacts to create a convincing and varied environment for the adversary. Temporal manipulation of login attempts involving decoy credentials helps track adversary interactions and delay their movement.
- Elicit > Reassure > Burn-In – introducing friction, delays, or noise to slow down or frustrate automated attacker activities.
- Affect > Disrupt > Software Manipulation – modifying system or application software to alter attacker experience, disrupt automation, or degrade malicious tooling. Introducing time delays and inconsistent system responses creates false environmental cues, leading attackers to make incorrect decisions. Introducing artificial latency into authentication routines or system responses also slows down the attack lifecycle and adds noise to attacker automation.
- Affect > Disrupt | Direct > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations. Also influencing or steering attacker decision-making and movement in the environment.
- Affect > Disrupt > Isolation – segregating attacker interactions or dynamically altering access to increase confusion and contain threats.
- Expose > Detect > Lures | Network Analysis – observing, logging, and analyzing adversary actions for intelligence and response purposes.
- Expose > Collect > API Monitoring | Network Monitoring | System Activity Monitoring – extended interaction windows (due to delayed responses) give defenders more opportunity to monitor adversary behavior within the decoy environment.
Honey Timing and Time-Based Traps
Time-bound honeypots such as fake cron jobs, scheduled updates, or bogus backup routines can serve as deceptive traps. Interaction with these elements reveals unauthorized probing or access attempts. The very existence of these traps implies that any entity interacting with them (excluding the creators of course) needs to be treated as hostile and investigated.
Example: Deceptive Backup Scripts as Time-Based Traps in Cloud Environments
Defenders can deploy a bogus scheduled backup script named “nightly-db-backup.sh” on a decoy cloud instance. The script can be set to appear as if it runs daily at 04:00 via a convincing cron entry (e.g. /etc/cron.d/backup_job). The script can contain clear-text references to fake database credentials, S3 storage paths, and mock sensitive data exports. This serves as a timing-based honeypot, existing to attract unauthorized access attempts during off-hours when legitimate activity is minimal.
Any attempt to execute this script triggers hidden canary tokens that act as an alerting system, for example an HTTP request whose receiving endpoint (e.g. the web server processing the request) has been configured to log and alert on any relevant interaction. This captures timestamps showing interactions with the script outside of the bogus scheduled execution window. Defenders can then not only detect the unauthorized access but also track subsequent movements using the captured metadata.
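The alerting half of such a trap can be very small. The sketch below assumes the decoy backup script (or a wrapper invoked alongside it) phones home to a hypothetical internal canary endpoint whenever it is executed; the URL, field names, and trap name are invented for illustration.

```python
import datetime
import getpass
import json
import socket
import urllib.request

# Hypothetical internal canary endpoint; replace with your own listener.
CANARY_URL = "https://canary.internal.example/api/trip"

def trip_canary(trap_name: str) -> None:
    """Report any execution of the decoy backup script, with enough metadata
    to tell whether it ran outside the advertised 04:00 window."""
    event = {
        "trap": trap_name,
        "host": socket.gethostname(),
        "user": getpass.getuser(),
        "utc_time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    req = urllib.request.Request(
        CANARY_URL,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Fail silently so the trap does not reveal itself to the intruder.
    try:
        urllib.request.urlopen(req, timeout=3)
    except OSError:
        pass

if __name__ == "__main__":
    trip_canary("nightly-db-backup.sh")
```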
This approach demonstrates how time-based decoy elements, especially those aligned with off-hour routines, can effectively expose stealthy adversaries who are mimicking typical system administrator behavior.
The following ATT&CK TTPs and Engage mappings can be used when modeling this example of time-based decoys and support the desired defensive disruption:
MITRE ATT&CK Mapping
- T1059 – Command and Scripting Interpreter – the attacker manually executes some script using bash or another shell interpreter.
- T1083 – File and Directory Discovery – the attacker browses system files and cron directories to identify valuable scripts.
- T1070.004 – Indicator Removal: File Deletion – often attackers attempt to clean up after interacting with trap files.
- T1562.001 – Impair Defenses: Disable or Modify Tools – attempting to disable cron monitoring or logging after detection is common.
MITRE Engage Mapping
- Elicit > Reassure > Artifact Diversity – deploying decoy credentials or artifacts to create a convincing and varied environment for the adversary.
- Affect > Disrupt > Software Manipulation – modifying system or application software to alter attacker experience, disrupt automation, or degrade malicious tooling.
- Affect > Disrupt > Isolation – segregating attacker interactions or dynamically altering access to increase confusion and contain threats.
- Expose > Detect > Lures – observing, logging, and analyzing adversary actions for intelligence and response purposes.
Randomized Friction
Randomized friction aims to increase an attacker’s work factor and, in turn, the adversary’s operational cost. Introducing unpredictability in system responses (e.g. intermittent latency, randomized errors, inconsistent firewall behavior) forces attackers to adapt continually, degrading their efficiency and increasing the likelihood of detection.
Example: Randomized Edge Behavior in Cloud Perimeter Defense
Imagine a blue/red team exercise within a large cloud-native enterprise. The security team deploys randomized friction techniques on a network segment believed to be under passive recon by red team actors. The strategy can include intermittent firewall rule randomization. Some of these rules cause attempts to reach specific HTTP-based resources to be met with occasional timeouts, 403 errors, or misdirected HTTP redirects, while at other times simply returning a normal response.
When the red team conducts external reconnaissance and tries to enumerate target resources, they experience inconsistent results. One of their obvious objectives is to remain undetected. Some ports appear filtered one moment and open the next. API responses switch between errors, basic authentication challenges, or other missing-element challenges (e.g. a required HTTP request header reported as missing). This forces red team actors to waste time revalidating findings, rewriting tooling, and second-guessing whether their scans were flawed or whether detection had occurred.
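One way to prototype this friction is a thin handler in front of the decoy HTTP surface that randomly drops, rejects, redirects, or answers requests. The sketch below uses Python’s standard http.server purely for illustration; in practice this logic would more likely live in a WAF rule set, load balancer, or service mesh policy, and the outcome weights are arbitrary assumptions.

```python
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class FrictionHandler(BaseHTTPRequestHandler):
    """Respond unpredictably so external enumeration cannot stabilize."""

    def do_GET(self):
        outcome = random.choices(
            ["timeout", "forbidden", "redirect", "ok"],
            weights=[2, 3, 2, 3],
        )[0]
        if outcome == "timeout":
            time.sleep(random.uniform(5, 20))   # looks like congestion
            return                              # drop without a response
        if outcome == "forbidden":
            self.send_response(403)
            self.end_headers()
            return
        if outcome == "redirect":
            self.send_response(302)
            self.send_header("Location", "/status")  # misdirection
            self.end_headers()
            return
        self.send_response(200)                 # occasionally behave normally
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), FrictionHandler).serve_forever()
```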
Crucially, during this period, defenders are capturing every probe and fingerprint attempt. The friction-induced inefficiencies increase attack dwell time and the volume of telemetry, making detection and attribution easier. Eventually, frustrated by the lack of consistent results, the red team escalates their approach. This undermines their attempts at stealth and triggers active detection systems.
This experiment successfully degrades attacker efficiency, increases their operational cost, and expands the defenders’ opportunity window for early detection and response, all without disrupting legitimate internal operations. While it does take effort on the defending side to set all of this up, the outcome would be well worth it.
The following ATT&CK TTPs and Engage mappings can be used when modeling this example of randomized friction and support the desired defensive disruption:
MITRE ATT&CK Mapping
- T1595 – Active Scanning – adversaries conducting external enumeration are directly impacted by inconsistent firewall responses.
- T1046 – Network Service Discovery – random port behavior disrupts service mapping efforts by the attacker.
- T1583.006 – Acquire Infrastructure: Web Services – attackers using disposable cloud infrastructure for scanning may burn more resources due to retries and inefficiencies.
MITRE Engage Mapping
- Elicit > Reassure > Artifact Diversity – deploying decoy credentials or artifacts to create a convincing and varied environment for the adversary.
- Elicit > Reassure > Burn-In – introducing friction, delays, or noise to slow down or frustrate automated attacker activities.
- Affect > Disrupt > Software Manipulation – modifying system or application software to alter attacker experience, disrupt automation, or degrade malicious tooling.
- Affect > Disrupt > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations.
- Affect > Disrupt > Isolation – segregating attacker interactions or dynamically altering access to increase confusion and contain threats.
- Expose > Detect > Network Analysis – observing, logging, and analyzing adversary actions for intelligence and response purposes.
Ambiguity Engineering
Ambiguity engineering aims to obscure the adversary’s mental model. It is the deliberate obfuscation of system state, architecture, and behavior. When attackers cannot build accurate models of the target environments, their actions become riskier and more error-prone. Tactics include using ephemeral resources, shifting IP addresses, inconsistent responses, and mimicking failure states.
Example: Ephemeral Infrastructure and Shifting Network States in Zero Trust Architectures
A SaaS provider operating in a zero trust environment can implement ambiguity engineering as part of its cloud perimeter defense strategy. In this setup, let’s consider a containerized ecosystem that leverages Kubernetes-based orchestration. This platform can utilize elements such as ephemeral IPs and DNS mappings, rotating them at certain intervals. These container-hosted backend services would be accessible only via authenticated service mesh gateways, but would appear (to external entities) to intermittently exist, fail, or time out, depending on timing and access credentials.
Consider the experience of an external entity against a target such as this. These attackers would be looking for initial access followed by lateral movement and service enumeration inside the target environment. What they would encounter are API endpoints that resolve one moment and vanish the next. Port scans would deliver inconsistent results across multiple iterations. Even successful service calls can return varying error codes depending on timing and the identity of the caller. When this entity tries to correlate observed system behaviors into a coherent attack path, they continually hit dead ends.
This environment is not broken; it is intentionally engineered for ambiguity. The ephemeral nature of resources, combined with intentional mimicry of common failure states, prevents attackers from forming a reliable mental model of system behavior. Frustrated and misled, their attack chain slows, errors increase, and their risk of detection rises. Meanwhile, defenders can capture behavioral fingerprints from the failed attempts and gather critical telemetry for informed future threat hunting and active protection.
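A simplified rotation loop conveys the idea. The service names, port pool, rotation intervals, and the publish_endpoint helper below are hypothetical stand-ins for whatever DNS, service mesh, or Kubernetes APIs would actually re-home the decoy endpoints.

```python
import random
import time

DECOY_SERVICES = ["billing-api", "reports-api", "sync-worker"]  # illustrative names
PORT_POOL = range(20000, 21000)

# Hypothetical helper -- in practice this would call a DNS or service-mesh API.
def publish_endpoint(name: str, port: int, ttl_seconds: int) -> None:
    print(f"publish {name} -> :{port} (ttl={ttl_seconds}s)")

def rotate_forever() -> None:
    """Periodically re-home externally visible decoy endpoints so scans taken
    minutes apart disagree about what exists and where it lives."""
    while True:
        for name in DECOY_SERVICES:
            if random.random() < 0.3:
                continue                      # let some services "vanish" this cycle
            publish_endpoint(name,
                             port=random.choice(PORT_POOL),
                             ttl_seconds=random.randint(60, 300))
        time.sleep(random.randint(120, 600))  # irregular rotation interval

if __name__ == "__main__":
    rotate_forever()
```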
The following ATT&CK TTPs and Engage mappings can be used when modeling this example of ambiguity engineering and support the desired defensive disruption:
MITRE ATT&CK Mapping
- T1046 – Network Service Discovery – scanning results are rendered unreliable by ephemeral network surfaces and dynamic service allocation.
- T1590 – Gather Victim Network Information – environmental ambiguity disrupts adversary reconnaissance and target mapping.
- T1001.003 – Data Obfuscation: Protocol or Service Impersonation – false failure states and protocol behavior can mimic broken or legacy services, confusing attackers.
MITRE Engage Mapping
- Elicit > Reassure > Artifact Diversity – deploying decoy credentials or artifacts to create a convincing and varied environment for the adversary.
- Elicit > Reassure > Burn-In – introducing friction, delays, or noise to slow down or frustrate automated attacker activities.
- Affect > Disrupt > Software Manipulation – modifying system or application software to alter attacker experience, disrupt automation, or degrade malicious tooling.
- Affect > Disrupt > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations.
- Affect > Disrupt > Isolation – segregating attacker interactions or dynamically altering access to increase confusion and contain threats.
- Expose > Detect > Network Analysis – observing, logging, and analyzing adversary actions for intelligence and response purposes.
- Affect > Direct > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations.
Disinformation Campaigns and False Flag Operations
Just as nation-states use disinformation to mislead public opinion, defenders can plant false narratives within ecosystems. Examples include fake internal threat intel feeds, decoy sensitive documents, or impersonated attacker TTPs designed to confuse attribution.
In a false flag operation, the environment mimics behaviors of known APTs. The goal is to convince one attack group that another group is at play within a given target environment. This can redirect adversaries’ assumptions and deceive real actors at the operational stage.
Example: False Flag TTP Implantation to Disrupt Attribution
Consider a long-term red vs. blue engagement inside a critical infrastructure simulation network. The blue team defenders implement a false flag operation by deliberately injecting decoy threat actor behavior into their environment. This can include elements such as:
- Simulated PowerShell command sequences that mimic APT29 (https://attack.mitre.org/groups/G0016/) based on known MITRE ATT&CK chains.
- Fake threat intel logs placed in internal ticketing systems referring to OilRig (also tracked as APT34, https://attack.mitre.org/groups/G0049/) activity.
- Decoy documents labeled as “internal SOC escalation notes” with embedded references to Cobalt Strike Beacon callbacks allegedly originating from Eastern European IPs.
All of these artifacts can be placed in decoy systems, honeypots, and threat emulation zones designed to be probed or breached. The red team, tasked with emulating an external APT, stumbles upon these elements during lateral movement and begins adjusting its operations based on the perceived threat context, incorrectly assuming that a separate advanced threat actor is, or was, already in the environment.
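Seeding these breadcrumbs can itself be scripted so the false narrative stays consistent across decoy hosts. The sketch below writes a few fabricated artifacts to a honeypot filesystem; the paths, file names, and IOC values are invented for illustration (the IP address is from a documentation/test range).

```python
import pathlib
import textwrap

DECOY_ROOT = pathlib.Path("/srv/honeypot/share")   # illustrative decoy path

FAKE_SOC_NOTE = textwrap.dedent("""\
    SOC escalation 2024-118 (internal only)
    Observed beacon callbacks consistent with prior APT29-style tradecraft.
    Suspected C2: 203.0.113.77 (fabricated IOC, documentation address range)
    Action: do not remediate yet; monitoring persistence under scheduled tasks.
    """)

FAKE_INTEL_LOG = "2024-04-28T03:12Z TIP match: OilRig (APT34) infrastructure overlap\n"

def seed_false_flag_artifacts() -> None:
    """Drop decoy documents and logs that imply another actor is already present."""
    DECOY_ROOT.mkdir(parents=True, exist_ok=True)
    (DECOY_ROOT / "SOC_escalation_notes.txt").write_text(FAKE_SOC_NOTE)
    (DECOY_ROOT / "threat_intel_feed.log").write_text(FAKE_INTEL_LOG)

if __name__ == "__main__":
    seed_false_flag_artifacts()
```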
This seeded disinformation can slow the red team’s operations, divert their recon priorities, and lead them to take defensive measures that burn time and resources (e.g. avoiding fake IOC indicators and misattributed persistence mechanisms). On the defensive side, telemetry confirms which indicators were accessed and how the attackers reacted to the disinformation. This can be highly predictive of what a real attack group would do. Ultimately, the defenders can control the narrative within an engagement of this sort by manipulating perception.
The following ATT&CK TTPs and Engage mappings can be used when modeling this example of disinformation and support the desired defensive disruption:
MITRE ATT&CK Mapping
- T1005 – Data from Local System – adversaries collect misleading internal documents and logs during lateral movement.
- T1204.002 – User Execution: Malicious File – decoy files mimicking malware behavior or containing false IOCs can trigger adversary toolchains or analysis pipelines.
- T1070.001 – Indicator Removal: Clear Windows Event Logs – adversaries may attempt to clean up logs that include misleading breadcrumbs, thereby reinforcing the deception.
MITRE Engage Mapping
- Elicit > Reassure > Artifact Diversity – deploying decoy credentials or artifacts to create a convincing and varied environment for the adversary.
- Elicit > Reassure > Burn-In – introducing friction, delays, or noise to slow down or frustrate automated attacker activities.
- Affect > Disrupt > Software Manipulation – modifying system or application software to alter attacker experience, disrupt automation, or degrade malicious tooling.
- Affect > Disrupt > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations.
- Affect > Disrupt > Isolation – segregating attacker interactions or dynamically altering access to increase confusion and contain threats.
- Affect > Direct > Network Manipulation – changing or interfering with network traffic, services, or routing to disrupt attacker operations.
- Expose > Detect > Network Analysis – observing, logging, and analyzing adversary actions for intelligence and response purposes.
Real-World Examples of Security Chaos Engineering
One of the most compelling real-world examples of this chaos-based approach comes from UnitedHealth Group (UHG). As one of the largest healthcare enterprises in the United States, UHG faced the dual challenge of maintaining critical infrastructure uptime while ensuring robust cyber defense. Rather than relying solely on traditional security audits or simulations, UHG pioneered the use of chaos engineering for security.
UHG
UHG’s security team developed an internal tool called ChaoSlingr (no longer maintained, located at https://github.com/Optum/ChaoSlingr). This was a platform designed to inject security-relevant failure scenarios into production environments. It included features like degrading DNS resolution, introducing latency across east-west traffic zones, and simulating misconfigurations. The goal wasn’t just to test resilience; it was to validate that security operations mechanisms (e.g. logging, alerting, response) would still function under duress. In effect, UHG weaponized unpredictability, making the environment hostile not just to operational errors, but to adversaries who depend on stability and visibility.
DataDog
This philosophy is gaining traction. Forward-thinking vendors like Datadog have begun formalizing Security Chaos Engineering practices and providing frameworks that organizations can adopt regardless of scale. In its blog “Chaos Engineering for Security”, Datadog (https://www.datadoghq.com/blog/chaos-engineering-for-security/) outlines practical attack-simulation experiments defenders can run to proactively assess resilience. These include:
- Simulating authentication service degradation to observe how cascading failures are handled in authentication and/or Single Sign-On (SSO) systems.
- Injecting packet loss to measure how network inconsistencies are handled (a minimal sketch of this experiment follows the list).
- Disrupting DNS resolution.
- Testing how incident response tooling behaves under conditions of network instability.
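As one possible shape for the packet-loss experiment above, the following is a minimal sketch using Linux tc/netem, assuming a Linux host, root privileges, and an illustrative interface name; the observation window is a placeholder for whatever dashboards, alerts, and incident tooling are actually being evaluated.

```python
import subprocess
import time

IFACE = "eth0"   # illustrative interface name; requires root to modify qdiscs

def inject_packet_loss(percent: int, duration_seconds: int) -> None:
    """Add transient packet loss with tc/netem, observe, then always roll back."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
         "loss", f"{percent}%"],
        check=True,
    )
    try:
        # Window in which telemetry, alerting, and response tooling are observed.
        time.sleep(duration_seconds)
    finally:
        subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root", "netem"],
                       check=True)

if __name__ == "__main__":
    inject_packet_loss(percent=10, duration_seconds=300)
```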
By combining production-grade telemetry with intentional fault injection, teams gain insights that traditional red teaming and pen testing can’t always surface. This is accentuated when considering systemic blind spots and cascading failure effects.
What ties UHG’s pioneering work and Datadog’s vendor-backed framework together is a shift in mindset. The shift is from static defense to adaptive resilience. Instead of assuming everything will go right, security teams embrace the idea that failure is inevitable. As such, they engineer their defenses to be antifragile. But more importantly, they objectively and fearlessly test those defenses and adjust when original designs were simply not good enough.
Security chaos engineering isn’t about breaking things recklessly. It’s about learning before the adversary forces you to. For defenders seeking an edge, unpredictability might just be the most reliable ally.
From Fragility to Adversary Friction
Security chaos engineering has matured from a resilience validation tool to a method of influencing and disrupting adversary operations. By incorporating techniques such as temporal deception, ambiguity engineering, and the use of disinformation, defenders can force attackers into a reactive posture. Moreover, defenders can delay offensive objectives targeted at them and increase their attackers’ cost of operations. This strategic use of chaos allows defenders not just to protect an ecosystem but to shape adversary behavior itself. This is how security chaos engineering disrupts adversaries in real time.