What Chaos Engineering Is (and Isn’t)

The Birth of Chaos

The beginning of Chaos Engineering goes back to 2008, when Netflix moved from the datacenter to the cloud. The move didn’t go as planned.

The thinking at the time was that the datacenter locked them into an architecture of single points of failure, like large databases and vertically scaled components. Moving to the cloud would necessitate horizontally scaled components, which would reduce the number of single points of failure.

The move to horizontally scaled cloud deployments did not deliver the boost in streaming-service uptime that they expected.

The specific means of making a system robust enough to handle instances disappearing was not important; it might even differ depending on the context of the system. What mattered was that it had to be done, because the streaming service was facing availability deficits due to frequent instance-instability events. In a way, Netflix had simply multiplied the single-point-of-failure effect.

Enter Chaos Monkey. Chaos Monkey gave them a way to proactively test everyone’s resilience to instance failure, and to do it during business hours so that people could respond to any potential fallout when they had the resources to do so, rather than at 3 a.m. when pagers typically go off.
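
The mechanics are simple enough to sketch. Below is a minimal, hypothetical Python version of the idea using boto3; it is not Netflix’s implementation (the real Chaos Monkey is open source and considerably more careful), and the opt-in tag convention, region, and business-hours window are all assumptions for illustration.

```python
import datetime
import random

import boto3  # AWS SDK for Python; an assumed dependency


def terminate_one_instance(region="us-east-1", tag_key="chaos:enabled"):
    """Terminate one random chaos-eligible instance, business hours only."""
    now = datetime.datetime.now()
    # Run during business hours so humans can respond to any fallout,
    # rather than at 3 a.m. when pagers typically go off.
    if now.weekday() >= 5 or not (9 <= now.hour < 15):
        return None

    ec2 = boto3.client("ec2", region_name=region)
    # Consider only running instances that opted in via a (hypothetical) tag.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "instance-state-name", "Values": ["running"]},
            {"Name": f"tag:{tag_key}", "Values": ["true"]},
        ]
    )["Reservations"]
    candidates = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not candidates:
        return None

    victim = random.choice(candidates)
    ec2.terminate_instances(InstanceIds=[victim])
    return victim
```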

Then December 24, 2012 happened. AWS suffered a rolling outage of Elastic Load Balancers (ELBs). These components connect requests and route traffic to the compute instances where services are deployed. As the ELBs went down, additional requests couldn’t be served. Since Netflix’s control plane ran on AWS, customers were not able to choose videos and start streaming them.

To ensure that every team’s service was up to the task, an exercise was created to take an entire region offline. Well, AWS wouldn’t allow Netflix to take an actual region offline (something about having other customers in the region), so instead the outage was simulated.

The activity was labeled “Chaos Kong.”

This brings us to about 2015. Netflix had Chaos Monkey and Chaos Kong, working on the small scale of vanishing instances and the large scale of vanishing regions, respectively.

In these early days of Chaos Engineering at Netflix, it was not obvious what the discipline actually was. There were some catchphrases about “pulling out wires,” or “breaking things,” or “testing in production,” paired with many misconceptions about how to make services reliable, and very few examples of actual tools to support that work.

To formalize the discipline, I was given the task of developing a charter and roadmap for a Chaos Engineering Team at Netflix in early 2015. I built and managed that team for three years. My co-author of Chaos Engineering: System Resiliency in Practice, Nora Jones, joined the Chaos Engineering team early on as an engineer and technical leader. She was responsible for significant architectural decisions about the tools we built as well as overseeing implementation.

I sat down with my team to formally define Chaos Engineering. We specifically wanted clarity on:

  • What is the definition of Chaos Engineering?
  • What is the point of it?
  • How do I know when I’m doing it?
  • How can I improve my practice of it?

We researched Resilience Engineering and other disciplines to come up with a definition and a blueprint for how others could also participate in Chaos Engineering. After about a month of working on a manifesto of sorts, we produced the Principles of Chaos Engineering. The discipline was officially formalized:

“Chaos Engineering is the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.” 

This definition established that it is a form of experimentation, which sits apart from testing. It also mentions “turbulent conditions in production” to highlight that this isn’t about creating chaos. Chaos Engineering is about making the chaos inherent in the system visible. The point of doing Chaos Engineering in the first place is to build confidence.

The Principles go on to describe a basic template for experimentation, which borrows heavily from Karl Popper’s principle of falsifiability. In this regard, Chaos Engineering is modeled very much as a science rather than a technique.

What Chaos Engineering Is

The Principles define the discipline so that we know when we are doing Chaos Engineering, how to do it, and how to do it well. The more common definition of Chaos Engineering today is “The facilitation of experiments to uncover systemic weaknesses.”

The steps of Chaos Engineering experimentation are as follows:

  1. Start by defining “steady state” as some measurable output of a system that indicates normal behavior.
  2. Hypothesize that this steady state will continue in both the control group and the experimental group.
  3. Introduce variables that reflect real-world events like servers that crash, hard drives that malfunction, network connections that are severed, etc.
  4. Try to disprove the hypothesis by looking for a difference in steady state between the control group and the experimental group.

By design, there is great latitude in how to implement these experiments.
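
To make the template concrete, here is one hedged sketch of how those four steps might translate into code. Everything in it is hypothetical: the steady-state metric, the stubbed measurement and fault-injection hooks, and the tolerance threshold all stand in for whatever your system actually exposes.

```python
import random
import statistics


# Hypothetical hooks: in a real system these would read production metrics
# and drive real fault injection. They are stubbed here so the sketch runs.
def stream_starts_per_second(group):
    """Steady state: a measurable output that indicates normal behavior."""
    return random.gauss(100.0, 2.0)  # simulated metric reading


def inject_fault(group, fault):
    """Introduce a variable reflecting a real-world event (e.g., crash a server)."""
    print(f"injecting {fault} into {group}")


def run_experiment(control, experimental, fault, samples=30, tolerance=0.05):
    # 1. Define "steady state" from the control group's measurable output.
    baseline = statistics.mean(
        stream_starts_per_second(control) for _ in range(samples)
    )

    # 2. Hypothesize that steady state continues in both groups, then
    # 3. introduce the variable in the experimental group only.
    inject_fault(experimental, fault)

    # 4. Try to disprove the hypothesis by looking for a difference in
    #    steady state between the control group and the experimental group.
    observed = statistics.mean(
        stream_starts_per_second(experimental) for _ in range(samples)
    )
    deviation = abs(observed - baseline) / baseline
    return deviation <= tolerance  # True: hypothesis stands, confidence grows.


if __name__ == "__main__":
    print(run_experiment("control-cluster", "canary-cluster", "instance-termination"))
```

The tolerance is the operative design choice: too loose and real deviations slip by; too tight and ordinary variance falsifies every hypothesis.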

Experimentation Versus Testing

Testing, strictly speaking, does not create new knowledge. Testing requires that the engineer writing the test knows specific properties about the system that they are looking for in advance. Complex systems are opaque to that type of analysis—humans are simply not capable of understanding all of the potential side effects from all of the potential interactions of parts in a complex system. This leads us to one of the key properties of a test.

Tests make an assertion, based on existing knowledge, and then running the test collapses the valence of that assertion, usually into either true or false. Tests are statements about known properties of the system.

Experimentation, on the other hand, creates new knowledge. Experiments propose a hypothesis, and as long as the hypothesis is not disproven, confidence grows in that hypothesis. If it is disproven, then we learn something new. This kicks off an inquiry to figure out why our hypothesis is wrong. In a complex system, the reason why something happens is often not obvious. Experimentation either builds confidence, or it teaches us new properties about our own system. It is an exploration of the unknown.

No amount of testing in practice can equal the insight gained from experimentation, because testing requires a human to come up with the assertions ahead of time.

Experimentation formally introduces a way to discover new properties. It is entirely possible to translate newly discovered properties of a system into tests after they are discovered. It also helps to encode new assumptions about a system into new hypotheses, which creates something like a “regression experiment” that explores system changes over time. Because Chaos Engineering was born from complex system problems, it is essential that the discipline favors experimentation over testing. 
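
The contrast is easy to see side by side. In the hedged sketch below (which reuses run_experiment from the earlier sketch, with illustrative names), the test collapses an assertion about a known property into true or false, while the regression experiment restates a previously discovered property as a hypothesis that future changes to the system might yet disprove.

```python
RETRY_BUDGET = 3  # a known property, established ahead of time


# A test: an assertion based on existing knowledge. Running it collapses
# the assertion into true or false; it can never surprise us.
def test_retry_budget_is_three():
    assert RETRY_BUDGET == 3


# A "regression experiment": a previously discovered property encoded as a
# hypothesis and re-run as the system evolves. A disproof is not a red build
# to silence; it kicks off an inquiry into why the hypothesis is now wrong.
def regression_experiment_cache_failover():
    survived = run_experiment(  # from the sketch in the previous section
        "control-cluster", "canary-cluster", fault="kill-one-cache-node"
    )
    if not survived:
        print("hypothesis disproven: steady state did not survive; investigate")
```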

Verification Versus Validation

Using definitions of verification and validation inspired by operations management and logistical planning, we can say that Chaos Engineering is strongly biased toward verification over validation.

Verification

Verification of a complex system is a process of analyzing output at a system boundary. A homeowner can verify the quality of the water (output) coming from a sink (system boundary) by testing it for contaminants without knowing anything about how plumbing or municipal water service (system parts) functions.

Validation

Validation of a complex system is a process of analyzing the parts of the system and building mental models that reflect the interaction of those parts. A homeowner can validate the quality of water by inspecting all of the pipes and infrastructure (system parts) involved in capturing, cleaning, and delivering water (mental model of functional parts) to a residential area and eventually to the house in question.

Both of these practices are potentially useful, and both build confidence in the output of the system. As software engineers we often feel a compulsion to dive into code and validate that it reflects our mental model of how it should be working. Contrary to this predilection, Chaos Engineering strongly prefers verification over validation.

Chaos Engineering cares whether something works, not how.

Note that in the plumbing metaphor we could validate all of the components that go into supplying clean drinking water, and yet still end up with contaminated water for some reason we did not expect. In a complex system, there are always unpredictable interactions. But if we verify that the water is clean at the tap, then we do not necessarily have to care about how it got there. In most business cases, the output of the system is much more important than whether or not the implementation matches our mental model. Chaos Engineering cares more about the business case and output than about the implementation or mental model of interacting parts.
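
In code, a bias toward verification looks like sampling the output at the system boundary and ignoring the pipes entirely. A minimal sketch, assuming a hypothetical health endpoint and success threshold:

```python
import urllib.request


def verify_output(url="https://example.com/health", attempts=20, min_success=0.95):
    """Verification: sample the system's output at its boundary.

    We only ask whether the water is clean at the tap; we never inspect
    the plumbing (the services, queues, and databases behind the endpoint).
    """
    successes = 0
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    successes += 1
        except OSError:  # timeouts and connection errors count as failures
            pass
    return successes / attempts >= min_success
```

Validation, by contrast, would mean instrumenting and reasoning about each internal component; useful at times, but not where Chaos Engineering spends its attention.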

What Chaos Engineering Is Not

There are two concepts that are often confused with Chaos Engineering, namely breaking stuff in production and antifragility.

Breaking Stuff

Occasionally in blog posts or conference presentations we hear Chaos Engineering described as “breaking stuff in production.” While this might sound cool, it doesn’t appeal to enterprises running at scale and other complex system operators who can most benefit from the practice. 

A better characterization of Chaos Engineering would be: fixing stuff in production. “Breaking stuff” is easy; the difficult parts are mitigating the blast radius, thinking critically about safety, determining whether something is worth fixing, deciding whether it is worth the investment to experiment on… the list goes on.

“Breaking stuff” could be done in countless ways, with little time invested. The larger question here is, how do we reason about things that are already broken, when we don’t even know they are broken?

“Fixing stuff in production” does a much better job of capturing the value of Chaos Engineering since the point of the whole practice is to proactively improve availability and security of a complex system. Chaos Engineering is the only major discipline in software that focuses solely on proactively improving safety in complex systems.

Antifragility

People familiar with the concept of antifragility, introduced by Nassim Taleb, often assume that Chaos Engineering is essentially the software version of the same thing. Taleb argues that words like “hormesis” are insufficient to capture the ability of complex systems to adapt, and so he invented the word “antifragile” to refer to systems that get stronger when exposed to random stress.

A critical distinction between Chaos Engineering and antifragility is that Chaos Engineering educates human operators about the chaos already inherent in the system, so that they can operate as a more resilient team. Antifragility, by contrast, adds chaos to a system in the hope that it will grow stronger in response rather than succumb to it.

As a framework, antifragility puts forth guidance at odds with the scholarship of Resilience Engineering, Human Factors, and Safety Systems research. For example, antifragility proposes that the first step in improving a system’s robustness is to hunt for weaknesses and remove them. This proposal seems intuitive, but Resilience Engineering tells us that hunting for what goes right in safety is much more informative than investigating what goes wrong.

The next step in antifragility is to add redundancy. This also seems intuitive, but adding redundancy can cause failure just as easily as it can mitigate it, and the Resilience Engineering literature is rife with examples in which redundancy actually contributes to safety failures. Perhaps the most famous example is the 1986 Challenger disaster. The redundancy of the O-rings was one of three reasons NASA approved continuing the launches, even though damage to the primary O-ring had been well known internally across more than fifty prior launch missions spanning five years.

There are numerous other examples of divergence between these two schools of thought. Resilience Engineering is an ongoing area of research with decades of support, whereas antifragility is a theory that exists largely outside of academia and peer review. It is easy to imagine how the two concepts become conflated, since both deal with chaos and complex systems, but the spirit of antifragility does not share the empiricism and fundamental grounding of Chaos Engineering. For these reasons we should consider them fundamentally different pursuits.

Real-world experience applying the original four steps to experimentation on systems at scale, paired with thoughtful introspection, led the Chaos Engineering team at Netflix to push the practice beyond experimentation alone. These insights became the “Advanced Principles,” which guide teams through the maturity of their Chaos Engineering programs and set a gold standard to which we can aspire; we’ll cover these in a follow-up post.

You can read more about the history and practice of Chaos Engineering in Chaos Engineering: System Resiliency in Practice by Casey Rosenthal and Nora Jones. For a limited time Verica is giving away free digital copies of the book at verica.io/book.


Casey Rosenthal
Verica Co-founder & CEO

Casey Rosenthal was formerly the Engineering Manager of the Chaos Engineering Team at Netflix. His superpower is transforming misaligned teams into high-performance teams, and his personal mission is to help people see that something different, something better, is possible. For fun, he models human behavior using personality profiles in Ruby, Erlang, Elixir, and Prolog.

