Using functions, rather than loops and assignment statements, to express algorithms changes everything. First of all, an algorithm expressed as a function is composed of other, more basic functions that can be studied separately and reused in other algorithms. For instance, a sorting algorithm may be specified in terms of building a tree of some kind and then flattening it in some way. Functions that build trees can be studied separately from functions that consume trees. Furthermore, the properties of each of these basic functions, and their relationships to one another, can be captured by simple equational laws. As a result, one can talk and reason about the ‘deep’ structure of an algorithm in a way that is not easily possible with imperative code. To be sure, one can reason formally about imperative programs by formulating their specifications in the predicate calculus and using loop invariants to prove they are correct. But, and this is the nub, one cannot easily reason about the properties of an imperative program directly in terms of the language of its code. Consequently, books on formal program design have a quite different tone from those on algorithm design: they demand fluency in both the predicate calculus and the necessary imperative dictions. In contrast, many texts on algorithm design traditionally present algorithms with a step-by-step commentary, and use informally stated loop invariants to help one understand why the algorithm is correct.
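
To make the idea concrete, here is a minimal sketch in Haskell, not an algorithm taken from any particular chapter: a sort expressed as the composition of a function that builds a binary search tree and a function that flattens it. The names mkTree, flatten and treeSort are chosen only for this illustration.

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Build a binary search tree by inserting each element in turn.
    mkTree :: Ord a => [a] -> Tree a
    mkTree = foldr insert Leaf
      where
        insert x Leaf = Node Leaf x Leaf
        insert x (Node l y r)
          | x <= y    = Node (insert x l) y r
          | otherwise = Node l y (insert x r)

    -- Flatten a tree into a list with an in-order traversal.
    flatten :: Tree a -> [a]
    flatten Leaf         = []
    flatten (Node l x r) = flatten l ++ [x] ++ flatten r

    -- The sort itself is simply the composition of the two parts.
    treeSort :: Ord a => [a] -> [a]
    treeSort = flatten . mkTree

Each half can be studied on its own, and simple equational laws, such as the fact that flatten applied to a search tree yields an ordered list, or that treeSort preserves the multiset of elements of its input, can be stated and proved directly in the language of the code.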