Context and motivations
Can you trust your compiler?
Compilers are complicated pieces of software that implement delicate algorithms. Bugs in compilers do occur and can cause incorrect executable code to be silently generated from a correct source program. In other words, a buggy compiler can insert bugs in the programs that it compiles. This phenomenon is called miscompilation.
Several empirical studies demonstrate that many popular production compilers suffer from miscompilation issues. For example, in 1995, the authors of the Nullstone C conformance test suite reported that
Nullstone isolated defects [in integer division] in twelve of twenty commercially available compilers that were evaluated. (source)
A decade later, E. Eide and J. Regehr showed similar sloppiness in C compilers, this time concerning volatile memory accesses:
We tested thirteen production-quality C compilers and, for each, found situations in which the compiler generated incorrect code for accessing volatile variables. This result is disturbing because it implies that embedded software and operating systems --- both typically coded in C, both being bases for many mission-critical and safety-critical applications, and both relying on the correct translation of volatiles --- may be being miscompiled. (EMSOFT 2008)
More recently, Yang et al. generalized their testing of C compilers and, again, found many instances of miscompilation:
We created a tool that generates random C programs, and then spent two and a half years using it to find compiler bugs. So far, we have reported more than 325 previously unknown bugs to compiler developers. Moreover, every compiler that we tested has been found to crash and also to silently generate wrong code when presented with valid inputs. (PLDI 2011)
For non-critical, "everyday" software, miscompilation is an annoyance but not a major issue: bugs introduced by the compiler are negligible compared to those already present in the source program. The situation changes dramatically, however, for safety-critical or mission-critical software, where human lives, critical infrastructures, or highly-sensitive information are at stake. There, miscompilation is a non-negligible risk that must be addressed by additional, difficult, and costly verification activities, such as extra testing and code reviews performed on the generated assembly code.
An especially worrisome aspect of the miscompilation problem is that it weakens the usefulness of formal, tool-assisted verification of source programs. Increasingly, the development process for critical software includes the use of formal verification tools such as static analyzers, deductive verifiers (program provers), and model checkers. Advanced verification tools are able to automatically establish valuable safety properties of the program, such as the absence of run-time errors (no out-of-bound array accesses, no arithmetic overflows, etc). However, most of these tools operate at the level of C source code. A buggy compiler has the potential to invalidate the safety guarantees provided by source-level formal verification, producing an incorrect executable that crashes or misbehaves from a formally-verified source program.
Formal verification of compilers
The CompCert project puts forward a radical, mathematically-grounded solution to the miscompilation problem: the formal, tool-assisted verification of the compiler itself. By applying program proof techniques to the source code of the compiler, we can prove, with mathematical certainty, that the executable code produced by the compiler behaves exactly as specified by the semantics of the source C program, therefore ruling out all risks of miscompilation.
Compiler verification, as outlined above, is not a new idea: the first compiler correctness proof (for the translation of arithmetic expressions to a stack machine) was published in 1967, then mechanized as early as 1972 using the Stanford LCF proof assistant. Since then, compiler verification has been the topic of much academic research. The CompCert project carries this line of work all the way to a complete, realistic, optimizing compiler that can be used in the production of critical embedded software systems.
Semantic preservation. The formal verification of CompCert consists in proving the following theorem, which we take as the high-level specification of a correct compiler:
Semantic preservation theorem:
For all source programs S and compiler-generated code C,
if the compiler, applied to the source S, produces the code C,
without reporting a compile-time error,
then the observable behavior of C is one of the possible observable behaviors of S.
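Using an informal notation of ours (a summary, not CompCert's actual Coq statement), write compile(S) = OK(C) for "the compiler succeeds on S and produces C" and P ⇓ B for "program P has observable behavior B". The theorem can then be sketched as:

    \forall S\, C\, B,\quad \mathit{compile}(S) = \mathrm{OK}(C) \;\wedge\; C \Downarrow B \;\Longrightarrow\; S \Downarrow B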
In CompCert, this theorem has been proved, with the help of the Coq proof assistant, taking S to be abstract syntax trees for the CompCert C language (after preprocessing, parsing, type-checking and elaboration), and C to be abstract syntax trees for the assembly-level Asm language (before assembling and linking). Work is in progress to extend the proof "on both sides" to encompass e.g. the parsing phase or the assembling phase.
There are two noteworthy points in the statement of semantic preservation above. First, the compiler is allowed to fail at compile-time and refuse to generate code. This can happen if the source program S is syntactically incorrect or contains a type error, but also if the internal capacity of the compiler is exceeded. (For instance, CompCert C will refuse to compile a function having more than 4 GB of local variables, as such a function cannot be executed on any 32-bit target platform.) Second, the compiler is allowed to select one of the possible behaviors of the source program. The C language has some nondeterminism in expression evaluation order; different orders can result in several different observable behaviors. By choosing an evaluation order of its liking, the compiler implements one of these valid observable behaviors.
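As a small illustration (this example is ours, not taken from CompCert's documentation), the following C program has two valid observable behaviors, depending on which operand of + the compiler evaluates first: its output is either "fg3" or "gf3".

    #include <stdio.h>

    int f(void) { printf("f"); return 1; }
    int g(void) { printf("g"); return 2; }

    int main(void) {
      /* The C standard leaves the evaluation order of the operands of +
         unspecified, so a compiler may call f before g or g before f;
         both resulting traces are valid observable behaviors. */
      int x = f() + g();
      printf("%d\n", x);
      return 0;
    }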
What are observable behaviors? In a nutshell, they include everything the user of the program, or the physical world in which it executes, can "see" about the actions of the program, with the notable exception of execution time and memory consumption. More precisely, we follow the ISO C standards in considering that we can observe:
- Whether the program terminates or diverges (runs forever), and if it terminates, whether it terminates normally (by returning from the main function) or on an error (by running into an undefined behavior such as integer division by zero).
- All calls to standard library functions that perform input/output, such as printf() or getchar().
- All read and write accesses to global variables of volatile types. These variables can correspond to memory-mapped hardware devices, hence any read or write over such a variable is treated as an input/output operation.
The observable behavior of a program is, therefore, a trace of all I/O and volatile operations it performs, plus an indication of whether it terminates and how it terminates (normally or on an error).
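For instance, in the following sketch (the device address and names are hypothetical, chosen purely for illustration), the observable behavior is a trace consisting of one volatile read followed by one printf event, ending in normal termination; the computation on the ordinary local variable in between leaves no trace.

    #include <stdio.h>

    /* Hypothetical memory-mapped device register; the address is made up
       for this illustration and is only meaningful on a suitable target. */
    #define DEVICE_STATUS (*(volatile unsigned int *)0x40000000u)

    int main(void) {
      unsigned int s = DEVICE_STATUS;   /* observable: volatile read       */
      unsigned int twice = 2u * s;      /* not observable: ordinary memory */
      printf("status = %u\n", twice);   /* observable: input/output call   */
      return 0;                         /* observable: normal termination  */
    }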
How do we define the possible behaviors of a source or executable program? This is the purpose of a formal semantics for the corresponding languages. A formal semantics is a mathematically-defined relation between programs and their possible behaviors. Several such semantics are defined as part of CompCert's verification, including one for the CompCert C language and one for the Asm language (assembly code for each of the supported target platforms). These semantics can be viewed as mathematically-precise renditions of (relevant parts of) the informal ISO C99 standard document and of (relevant parts of) the reference manuals for the PowerPC, ARM, RISC-V and x86 architectures.
What does semantic preservation tell us about source-level verification? A straightforward corollary of the semantic preservation theorem shows the following:
Let Spec be a set of acceptable behaviors, characterizing a desired safety or liveness property of the program.
Assume that a source program S satisfies Spec: all possible observable behaviors of S are in Spec.
Further assume that the compiler, applied to the source S, produces the code C.
Then, the compiled code C satisfies Spec: the observable behavior of C is in Spec.
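In the same informal notation as above (again a sketch of ours, not the Coq statement), the corollary reads:

    \big(\forall B,\; S \Downarrow B \Rightarrow B \in \mathit{Spec}\big) \;\wedge\; \mathit{compile}(S) = \mathrm{OK}(C) \;\Longrightarrow\; \big(\forall B,\; C \Downarrow B \Rightarrow B \in \mathit{Spec}\big)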
The purpose of a sound source-level verification tool is precisely to establish that a specification Spec holds for all possible executions of a source program S. The specification can be defined by the user, for instance as pre- and post-conditions, or fixed by the tool, for instance the absence of run-time errors. Therefore, a formally-verified compiler guarantees that if a sound source-level verification tool says "yes, this program satisfies this specification", then the compiled code that really executes also satisfies this specification. In other words, using a formally-verified compiler justifies verification at the source level, insofar as the guarantees established over the source program carry over to the compiled code that actually executes in the end.
How do we conduct the proof of semantic preservation?
Because of the inherent complexity of an optimizing compiler, the proof is a major endeavor. We split it into 13 separate proofs of semantic preservation, one for each pass of the CompCert compiler. The final semantic preservation theorem, then, follows from the composition of these separate proofs. For every pass, we must prove semantic preservation for all possible input programs and for all possible executions of the input program (there can be many such executions depending on the unpredictable results of input operations). To this end, we need to consider every possible reachable state in the execution of the program and every transition that can be performed from this state according to the formal semantics. The proofs take advantage of the inductive structure of programming languages: for example, to show that a compound expression a + b is correctly compiled, we assume, by induction hypothesis, that the two smaller subexpressions a and b are correctly compiled, then combine these results with reasoning specific to the + operator.
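To make the shape of this induction concrete, here is a toy expression compiler to a stack machine, written purely for illustration (CompCert's passes are written and proved in Coq, not C); the comments mark where the induction hypotheses and the operator-specific reasoning enter.

    #include <stdio.h>

    /* A toy abstract syntax of expressions: constants and additions. */
    typedef struct expr {
      enum { CONST, ADD } kind;
      int value;                    /* used when kind == CONST */
      struct expr *left, *right;    /* used when kind == ADD   */
    } expr;

    /* Emit stack-machine code that leaves the value of e on the stack. */
    void compile(const expr *e) {
      switch (e->kind) {
      case CONST:
        printf("push %d\n", e->value);
        break;
      case ADD:
        compile(e->left);    /* induction hypothesis: left compiled correctly  */
        compile(e->right);   /* induction hypothesis: right compiled correctly */
        printf("add\n");     /* reasoning specific to the + operator           */
        break;
      }
    }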
If the compiler proof were conducted using paper and pencil, it would fill hundreds of pages, and no mathematician would be willing to check it. Instead, we leverage the power of the computer: CompCert's proof of correctness is conducted using the Coq proof assistant, a software tool that helps us construct the proof interactively and then automatically re-checks its validity. Such mechanization of the proof brings near-absolute confidence in its validity.
How effective is formal compiler verification? As mentioned above, CompCert is still a work in progress, and complete, end-to-end formal verification has not been achieved yet: as of this writing, about 90% of the compiler's algorithms (including all optimizations and all code generation algorithms) are proved correct in Coq, but the remaining 10% (including elaboration, presimplifications, assembling and linking) are not verified. This can only improve in the future. Nonetheless, this incomplete formal verification already demonstrates major correctness improvements compared with ordinary compilers. Yang et al report:
The striking thing about our CompCert results is that the middle-end bugs we found in all other compilers are absent. As of early 2011, the under-development version of CompCert is the only compiler we have tested for which Csmith cannot find wrong-code errors. This is not for lack of trying: we have devoted about six CPU-years to the task. The apparent unbreakability of CompCert supports a strong argument that developing compiler optimizations within a proof framework, where safety checks are explicit and machine-checked, has tangible benefits for compiler users. (PLDI 2011)