Verification vs Validation
Many people use the words Verification and Validation interchangeably, which undermines our ability to focus on the system-level behaviors that correspond to business value. We prefer definitions inspired by the field of Operations Research and Management Science.
Verification is finding congruence between what you expect from a system and the actual output.
Validation is finding congruence between an explicit model of how something works and how it actually works.
We think of Verification as focusing on the output of a system, whereas Validation focuses on the inner workings. We use similar language colloquially in a few situations: if I Verify your driver’s license, I am checking that your external appearance matches a photo, whereas if I Validate you as a person, I am acknowledging your intrinsic qualities.
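To make the distinction concrete, here is a minimal sketch in Python. The function and the checks are hypothetical illustrations, not anyone's official test suite: the first assertion verifies (judges only the output), while the second validates (checks an explicit model of how the system behaves internally, in this case that it does not mutate its input).

```python
def sort_numbers(values):
    """A hypothetical system under test."""
    return sorted(values)

# Verification: compare the system's output against what we expect,
# without caring how the result was produced.
assert sort_numbers([3, 1, 2]) == [1, 2, 3]

# Validation: check that the inner workings match our explicit model
# of how the system operates -- here, our model says it returns a new
# list rather than sorting the input in place.
original = [3, 1, 2]
sort_numbers(original)
assert original == [3, 1, 2]  # input untouched, consistent with our model
```

Both checks pass, but they build confidence in different ways: the first would survive a complete rewrite of the internals, while the second would catch a change to how the function works even if the output stayed correct.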
Why does this matter for complex software systems?
As engineers we often have a natural inclination to dive into Validation. We want to understand how something works. As much as we are able, we want to dive down the stack and see if the code does what we expect it to do. This is a necessary skill for our job.
Yet in many cases this tendency can lead us astray. The return on investment for any business depends on the output of a system, not its internal machinery. At the end of the day, the business cares whether the system works, not how it works.
We acknowledge that a single human brain can’t mentally model how all of the pieces interact in a complex system; however, a single human can interpret the output of a complex system. It is therefore far more likely that a person can judge the output of a complex system than understand all of the interactions that produced it.
This is particularly important for systems that can’t be introspected. Consider a neural network, for example. If you could break open a neural network and inspect it, you would see floating-point values representing weights along certain paths. These numbers mean nothing to a human, and they aren’t designed to. The output of a neural network, on the other hand, is often easily confirmed by inspection: either a training set or a human can verify the correctness of the network’s output.
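A toy sketch makes the asymmetry visible. The network below is hypothetical and hand-set for illustration (a single sigmoid neuron approximating logical OR): the weights are opaque floats that resist validation, yet the output is trivially verified against labeled examples.

```python
import math

# Hypothetical, hand-set parameters. Inspecting them tells a human
# very little about what the network does:
weights = [5.9, 5.9]
bias = -2.7

def predict(x1, x2):
    """Single sigmoid neuron approximating logical OR."""
    z = weights[0] * x1 + weights[1] * x2 + bias
    return 1 / (1 + math.exp(-z))

# Validation would mean making direct sense of 5.9 and -2.7 -- opaque.
# Verification just checks the output against a labeled set:
labeled = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
for (x1, x2), expected in labeled.items():
    assert round(predict(x1, x2)) == expected
```

Scale this from two weights to billions and the validation path closes off entirely, while the verification path stays exactly as simple as it is here.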
If some of these distinctions sound familiar, you might have experience with scientific disciplines that take a reductionist approach versus those that take an emergent approach.
Reductionist approaches assume that you can break all systems down into smaller parts. The smaller parts are easier to understand. If you understand all of the smaller parts, then you can understand the assembled system as well.
Emergent approaches assume that there are unique properties of the system that cannot be found in any constituent parts. The only way to understand the system is to observe it holistically, within the context in which it usually operates.
Neural networks as a class of solutions are a great example of an emergent system. They are easier to understand at a holistic level, and easier to verify than to validate. And if we can verify the output, then we get the value we want from that architecture without ever needing to validate it.
Reductionist and Emergent viewpoints are fundamentally different, but they aren’t necessarily in conflict. In medicine, for example, we benefit from evidence gathered both through clinical trials (isolating properties) and through epidemiological studies (a holistic, in-context view). There, the different approaches complement each other.
Similarly, there are many methods in software engineering that validate properties of a system, like unit tests, static analysis, and peer code reviews. Verification and Validation can complement each other as methods to increase our confidence in software. That confidence translates into faster feature velocity. As the industry moves into increasingly complex systems, Verification will take precedence as the primary way organizations optimize for business value and operate complex systems with confidence.
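The same complementarity shows up inside a single test suite. As a sketch (the functions here are hypothetical), a unit test validates our model of one constituent part in isolation, while an end-to-end check verifies only the assembled system's output:

```python
def parse_price(text):
    """A constituent part: parse a price string like '$12.50' into cents."""
    return int(round(float(text.lstrip("$")) * 100))

def cart_total(cart):
    """The assembled system: total a cart of price strings, in cents."""
    return sum(parse_price(price) for price in cart)

# Validation: a unit test isolates one part and checks our explicit
# model of how that part works.
assert parse_price("$12.50") == 1250

# Verification: an end-to-end check judges only the system's output,
# with no reference to how the parts fit together.
assert cart_total(["$12.50", "$0.99"]) == 1349
```

Each catches failures the other would miss: the unit test pinpoints a broken part before it is assembled, and the end-to-end check catches emergent misbehavior that no part exhibits on its own.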