I recently read the (in)famous “Social Processes” paper that dismissed the then-nascent area of program verification. Forty years after its publication, there is still a lot the paper got right, especially its skepticism toward the assumed “continuity” between verifying toy programs and verifying real systems. Of course, in a world where CompCert and seL4 exist, verification has not quite died as De Millo, Lipton, and Perlis expected (or wished), and these developments show that the authors were overly pessimistic about program verification’s viability.

Also, I found the paper’s use of the disanalogy between proofs of mathematical theorems and program verifications, from which it draws much of its rhetorical power, to be misguided. The authors make much hay of the difference between proofs and verifications, but caring about this difference betrays a mistaken view of the role of verification in software development. In this post, I will explain why this is the case.

NB: I don’t work in verification, so take what I’m writing as from the perspective of a lay observer, not an expert!

Verifications are not proofs

Despite the dream of the formalists, the everyday work of mathematicians still consists of writing proofs that are very far removed from a sequence of deductions in, say, a Hilbert-style formal system.1 Their informality means that, as evidenced by the paper’s summary of erroneous independence proofs in set theory, it is actually quite common for proofs to be wrong.

But this informality is by design, since proofs are meant to provide that most mystical of psychological phenomena: intuition. As the authors beautifully put it:

[T]he proof of a theorem is a message. A proof is not a beautiful abstract object with an independent existence. No mathematician grasps a proof, sits back, and sighs happily at the knowledge that he can now be certain of the truth of his theorem. He runs out into the hall and looks for someone to listen to it.

More than anything, mathematical proofs are inputs to a social process, where mathematicians read proofs; internalize and paraphrase them in their own words; discuss them with other mathematicians; refute them and give counterexamples; generalize them; connect them with other known theorems. A proof alone does not lead to the establishment of some mathematical fact---rather, it is the organic whole of this social process that leads the mathematical community to believe the fact.

In contrast to their mathematical counterparts, verifications of programs---then and now---function very poorly as vehicles for intuition. They are long and tedious, involving endless bits of shallow reasoning, as anyone who has tried to verify programs in Hoare logic knows. Unlike with mathematical proofs, no one is champing at the bit to read how some program was verified:

Verifications are not messages; a person who ran out into the hall to communicate his latest verification would rapidly find himself a social pariah. Verifications cannot really be read; a reader can flay himself through one of the shorter ones by dint of heroic effort, but that’s not reading. Being unreadable and---literally---unspeakable, verifications cannot be internalized, transformed, generalized, used, connected to other disciplines, and eventually incorporated into a community consciousness. They cannot acquire credibility gradually, as a mathematical theorem does; one either believes them blindly, as a pure act of faith, or not at all.
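To make the tedium concrete, here is a toy example of my own (not from the paper): the obligations Hoare logic generates for a trivial loop summing the first n naturals. Even this tiny program produces a pile of shallow checks, sketched in LaTeX notation:

```latex
% Program: s := 0; i := 0; while i < n do s := s + i; i := i + 1 od
% Claimed triple:
\{\, n \ge 0 \,\}\ \mathit{prog}\ \{\, s = n(n-1)/2 \,\}
% Loop invariant: I \equiv s = i(i-1)/2 \land 0 \le i \le n
% Obligation 1 (initialization): precondition implies I[s := 0, i := 0]
n \ge 0 \implies 0 = 0 \land 0 \le 0 \le n
% Obligation 2 (preservation): I and the guard imply I after the body
s = i(i-1)/2 \land 0 \le i \le n \land i < n
  \implies s + i = (i+1)i/2 \land 0 \le i+1 \le n
% Obligation 3 (exit): I and the negated guard imply the postcondition
s = i(i-1)/2 \land 0 \le i \le n \land i \ge n \implies s = n(n-1)/2
```

None of these steps is deep; each is arithmetic bookkeeping, and a realistic program multiplies them by the hundreds.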

…but they’re not meant to be

So why bother with verifying programs? Here I think is where the paper’s argument goes off the rails. The authors’ mistake is to notice that mathematical proofs and program verifications have the same form---that of logical reasoning---and from that conclude that they must have the same function---to provide intuition as part of a social process whereby a community establishes the correctness of a theorem or a program. Program verifications do provide reasons to believe a program is correct, but not by providing intuition for why it is so.

An analogy might help to explain this. Say you are part of a construction crew building a house. The owner wants to know whether the house has been constructed properly, but there are a lot of fussy details to check: Is there enough insulation? Are the studs too far apart? Were the electrical wiring and plumbing installed properly? To allay the owner’s concerns, an inspector goes through the newly constructed house and makes sure everything is up to code. The inspector provides a document outlining all the checks she performed, and signs her name to attest that the house was built properly.

Does the owner need to read the entire document and ruminate on its contents to believe the house was built properly? No. It’s probably long and tedious. Most likely the owner will just make sure that the inspector signed the document and indeed attested to its accuracy. The function of the document is to get the owner to believe something, but not primarily by making her understand why it is true. Rather, it is a testimony to some statement, backed by the role of the inspector in the social process of constructing houses safely. The testimony of the inspector has power to persuade us because she has incentives to provide correct testimony, in the form of professional and reputational harm if she doesn’t. If she signs off on houses that are poorly built, she will soon lose her certification as an inspector when other people find out.

Showing the correctness of a theorem involves deep understanding of a mathematical structure. Proofs are often fun to read because they help make things “click” and align your vision of a structure properly, like a pair of mental glasses. Verifying a program is not like this, because a program is less like a theorem and more like a house.

Showing a program meets some specification involves checking a lot of boring, fussy details you need to get right---that the right variable is incremented by the right amount; that the right function is called with the right arguments; that the loop guard is right; and so on. Whereas the inspector might go around the house and check off a list of specification items the house needs to meet, we might write proofs in Coq to discharge a set of specification obligations the program needs to meet. There is not the aesthetic pleasure of finally having the right mental picture of a mathematical structure in your mind’s eye that comes with reading a good proof, but rather the tedium of bookkeeping. There is nothing to really “understand,” no deeper meaning to glean like in a proof of a theorem. But no one said verifying programs should be like proving theorems!
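As a sketch of what discharging such an obligation can look like in Coq, here is a hypothetical toy example of my own (the program `add_n` and its lemma are illustrations, not anything from the paper):

```coq
(* A toy "program": compute addition by repeated increment. *)
Fixpoint add_n (n x : nat) : nat :=
  match n with
  | O => x
  | S n' => add_n n' (S x)
  end.

(* The specification obligation: add_n really computes +. *)
Lemma add_n_correct : forall n x, add_n n x = n + x.
Proof.
  induction n as [| n IH]; intro x; simpl.
  - reflexivity.            (* base case: x = x *)
  - rewrite IH.             (* use the induction hypothesis *)
    rewrite <- plus_n_Sm.   (* library fact: S (n + x) = n + S x *)
    reflexivity.
Qed.
```

The proof script is pure bookkeeping: an induction, a rewrite with the hypothesis, and an appeal to a standard-library arithmetic fact. Nothing here illuminates a structure; it simply closes a checklist item.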

To believe the verification of a program is not to believe it “blindly, as a pure act of faith,” as the authors put it. To call belief in the verification of a program “blind faith” would be as absurd as calling trust in the inspector “blind faith”---we have perfectly good reasons for holding both. Instead of just believing the construction workers, which the owner might be loath to do, having an inspector sign off on the house lowers her testimonial burden. She now has to rely only on the inspector’s good word, on which the inspector’s job depends, instead of the word of many people. Likewise, the verification of a program lowers the testimonial burden---or what is usually called the trusted computing base (TCB)---of believing in the correctness of that program. To believe the program is correct, you only have to believe that Coq’s kernel, designed to be as simple as possible,2 is implemented correctly. Otherwise, without verification, you have to believe the entirety of the program was written without error: a much riskier proposition.

So, to conclude: programs are not like theorems, verifications are not like proofs---and it is fine that this is the case. The social process of establishing mathematical knowledge is not the same as the social process of writing correct and reliable software. Once we rid ourselves of this spurious analogy, we will have a clearer idea of verification’s role in software development.

Footnotes

  1. The formalist dream might be deferred, but perhaps not forever!

  2. A proof assistant satisfies the “de Bruijn criterion” if the language its proof terms are written in is sufficiently simple that its type checker can be understood to be correct or even reimplemented. See the “Easy-to-Check Kernel Proof Language” section of the introduction to CPDT.