On April 7, 2014, the world first learned about the Heartbleed vulnerability. A small flaw in OpenSSL's implementation of the heartbeat extension (RFC 6520) to the TLS/DTLS transport layer security protocols, Heartbleed enables an attacker to read sensitive memory from systems protected by vulnerable OpenSSL software, defeating their encryption. Some at the time estimated that to be approximately two-thirds of the surface web. An attacker can then eavesdrop on seemingly encrypted communications, steal private data, and impersonate services and users. OpenSSL released a fix for Heartbleed as soon as the vulnerability made headlines. Even so, more than a year and a half later, over 200,000 devices still had not received the patch.
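To appreciate just how small the flaw was, here's a simplified sketch of the vulnerable logic in C. It illustrates the shape of the bug rather than reproducing the actual OpenSSL source; the function and variable names are our own.

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Simplified sketch of the Heartbleed flaw -- not the actual OpenSSL
 * source. A heartbeat request carries a payload plus a length field,
 * and the server is supposed to echo the payload back. */
unsigned char *build_heartbeat_reply(const unsigned char *payload,
                                     uint16_t claimed_len, /* attacker-controlled */
                                     size_t actual_len)    /* bytes really received */
{
    unsigned char *reply = malloc(claimed_len);
    if (reply == NULL)
        return NULL;

    /* BUG: trusts claimed_len without checking it against actual_len.
     * If claimed_len > actual_len, this memcpy reads past the received
     * payload into adjacent heap memory -- leaking up to 64 KB of
     * secrets (private keys, session cookies) back to the attacker.
     *
     * The fix was essentially one bounds check before this copy:
     *     if (claimed_len > actual_len) { free(reply); return NULL; }
     */
    memcpy(reply, payload, claimed_len);
    return reply;
}
```

The entire vulnerability, and its fix, comes down to a single missing comparison between two lengths.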
Heartbleed's Just One Flaw
All the attention (and unnecessary FUD) surrounding Heartbleed demonstrates just how damaging a small coding error can be in today's interconnected age. In the early days of computer technology, a bug didn't mean much more than a mild inconvenience. Computers weren't connected to one another, so a programming flaw at most meant a piece of software sometimes malfunctioned. Maybe you needed to restart your computer now and then, but that was largely the end of it.

Not so in the world of the Internet. Global connectivity allows hackers to target greater numbers of users than ever before. And given the proliferation of software designed to secure people's banking credentials and other sensitive data, a bug today isn't a mere inconvenience. More often than not, it's a security vulnerability, one that attackers can exploit to steal or expose the personal information of thousands if not millions of users. In a nutshell, that's why vulnerabilities like Heartbleed can and do become such a big deal.

But don't get the wrong idea. It's not as if all of our software is broken. OpenSSL still "worked" in most use cases; Heartbleed was just one of a few scenarios in which an attacker could manipulate a coding error to make the software do something unintended. Such is the reality of programming. Flaws are commonplace because it's no easy task to take an idea one can explain in English and translate it into commands a computer can read. Beyond that, the goal is to make a program implement only its intended set of operations. No respectable programmer wants an attacker to find a hole in their software that allows them to eavesdrop on supposedly encrypted communications or steal personal information. It's therefore up to the programmer to make their code as airtight as possible.

Easier said than done. Anyone who's ever taken a course in coding can tell you how easy it is to make a programming error. If only there were some way programmers could make sure their code did only what it's supposed to do…. As it turns out, there is such a resource.
Code Verification at Its Best
Formal verification is a style of programming that's been around since the 1970s. Rather than merely checking that a program works in certain conceivable use cases, it aims to prove that the program behaves correctly in every case. Bryan Parno, who works on verification at Microsoft Research, told Quanta Magazine that programmers achieve this level of assurance by laying out their code like a mathematical proof, with each statement following logically from the one before it:
"You’re writing down a mathematical formula that describes the program’s behavior and using some sort of proof checker that’s going to check the correctness of that statement."
For formal verification to work, programmers first need to write a formal specification: a precise, mathematical description of what the program is supposed to do. They can then prove that the software actually satisfies that specification. It's not an effortless process. Between the specification and the statements required to verify it, a formally verified program may end up several times longer than one that merely "works" in most use cases. Fortunately, coders can now lean on verification-oriented programming languages and proof-assistant programs to check their work. In fact, it was the absence of those kinds of tools decades ago that kept formal verification impractical until the modern age of the internet. Princeton computer science professor Andrew Appel elaborates on that point:
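To make that concrete, here's a toy example of what a specification looks like in practice, written in C with ACSL annotations (the contract language understood by the Frama-C proof tools). The function and its contract are our own illustration, not code from the projects discussed here.

```c
#include <stddef.h>

/* A toy formal specification written in ACSL. The @-annotations state
 * what the function must do; a proof checker then verifies that the
 * body meets this contract for EVERY valid input, not just the inputs
 * a tester happens to try. */

/*@ requires n > 0;
    requires \valid_read(a + (0 .. n-1));
    assigns  \nothing;
    ensures  \forall integer i; 0 <= i < n ==> \result >= a[i];
    ensures  \exists integer i; 0 <= i < n && \result == a[i];
*/
int max_element(const int *a, size_t n)
{
    int max = a[0];
    size_t i;
    /*@ loop invariant 1 <= i <= n;
        loop invariant \forall integer j; 0 <= j < i ==> max >= a[j];
        loop invariant \exists integer j; 0 <= j < i && max == a[j];
        loop assigns i, max;
        loop variant n - i;
    */
    for (i = 1; i < n; i++) {
        if (a[i] > max)
            max = a[i];
    }
    return max;
}
```

Notice how the annotations dwarf the four lines of actual logic: the contract says the result is at least as large as every element and really occurs in the array, and the loop invariants supply the step-by-step reasoning a proof checker needs to confirm it. If any proof obligation fails, either the code or the spec has a bug.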
"There were many parts of science and technology that just weren’t mature enough to make that work, and so around 1980, many parts of the computer science field lost interest in it."
Getting to Work
Security researchers aren't wasting a second in making up for lost time. For instance, in the High-Assurance Cyber Military Systems (HACMS) project created by the Defense Advanced Research Projects Agency (DARPA), engineers set out to make a recreational quadcopter unhackable. They did so first by partitioning the quadcopter's code and rewriting its software architecture using "high-assurance building blocks." One of those blocks included a proof that an intruder who broke into one partition could not escalate privileges to gain access to the others. In the experiment codenamed "Little Bird," a Red Team of hackers received access to the partition that controlled the quadcopter's camera. Their mission was to gain control over the quadcopter's essential functions. But after working with the device for six weeks, they weren't able to break into any other code partition. That success attracted the attention of researchers at the Defense Department. As explained by Kathleen Fisher, a professor of computer science at Tufts University and the founding program manager of HACMS:
"They were not able to break out and disrupt the operation in any way. That result made all of DARPA stand up and say, oh my goodness, we can actually use this technology in systems we care about."
The Road Ahead
Since Little Bird, DARPA has applied formal verification to other technology, such as self-driving cars and satellites. Researchers have set some lofty goals for the years ahead, too. Appel intends to use a $10 million grant to build a fully verified end-to-end system, such as a web server. Meanwhile, over at Microsoft, engineers are working on a verified version of HTTPS as well as verified specifications for complex systems like drones. Could formal verification one day make a program truly unhackable? What applications would you like researchers to find for formal verification in the future? Let us know in the comments!