1.9. PROVING THE SCHORR-WAITE ALGORITHM

beginning of the computation, there is a path called the spine from root to this value:

[Figure: the spine, a path of links descending from root down to t.]

The assertion restoredListR(stack, t) describes this state of the spine; the abstract variable stack encodes the information contained in the spine. At the instant described by the invariant, however, the links in the spine are reversed:

[Figure: the spine with its links reversed, ascending from t through p back up to root.]

This reversed state of the spine, again containing the information encoded by stack, is described by the assertion listMarkedNodesR(stack, p). The assertion spansR(STree, root), which also occurs in the precondition of the algorithm, asserts that the abstract structure STree is a spanning tree of the heap. Thus, the second and third lines of the invariant use separating implication elegantly to assert that, if the spine is correctly restored, then the heap will have the same spanning tree as it had initially. (In fact, the proof goes through if spansR(STree, root) is any predicate about the heap
that is independent of the boolean fields in the records; spanning trees are used only because they are sufficient to determine the heap, except for the boolean fields.) To the author's knowledge, this part of the invariant is the earliest conceptual use of separating implication in a real proof of a program (as opposed to its formal use in expressing backward-reasoning rules and weakest preconditions).

In the rest of the invariant, the heap is partitioned into marked and unmarked records, and it is asserted that every active unmarked record can be reached from the variable t or from certain fields in the spine. However, since this assertion lies within the right operand of the separating conjunction that separates marked and unmarked records, the paths by which the unmarked records are reached must consist of unmarked records. Anyone (such as the author [41, Section 5.1]) who has tried to verify this kind of graph traversal, even informally, will appreciate the extraordinary succinctness of the last two lines of Yang's invariant.

1.10 Shared-Variable Concurrency

O'Hearn has extended separation logic to reason about shared-variable concurrency, drawing upon early ideas of Hoare [42] and Owicki and Gries [43]. For the simplest case, where concurrency is unconstrained by any kind of synchronization mechanism, Hoare had given the straightforward rule:

    {p1} c1 {q1}     {p2} c2 {q2}
    ------------------------------
    {p1 ∧ p2} c1 ∥ c2 {q1 ∧ q2},

when the free variables of p1, c1, and q1 are not modified by c2, and vice-versa. Unfortunately, this rule fails in separation logic since, even though the side condition prohibits the processes from interfering via assignments to variables, it does not prevent interference via mutations in the heap.
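The failure can be made concrete. In the following instantiation (our own illustration, not from the text), both premises are valid and no program variables are modified at all, so the side condition is satisfied; yet the two processes race on the same heap cell, and the derived postcondition is unsatisfiable:

```latex
% Both premises hold, and the variable side condition is met,
% but the postcondition  x \mapsto 1 \wedge x \mapsto 2  is false of every heap:
\[
\frac{\{x \mapsto -\}\ [x] := 1\ \{x \mapsto 1\}
      \qquad
      \{x \mapsto -\}\ [x] := 2\ \{x \mapsto 2\}}
     {\{x \mapsto -\ \wedge\ x \mapsto -\}\ \ [x] := 1 \parallel [x] := 2\ \
      \{x \mapsto 1 \wedge x \mapsto 2\}}
\]
```

Both processes claim the very same cell x, so each can mutate storage that the other's proof relies on; this heap-level interference is exactly what the side condition on variables cannot see.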
O'Hearn realized that the rule could be saved by replacing the ordinary conjunctions by separating conjunctions, which separate the heap into parts that can only be mutated by a single process:

    {p1} c1 {q1}     {p2} c2 {q2}
    ------------------------------
    {p1 ∗ p2} c1 ∥ c2 {q1 ∗ q2}

(with the same side condition as above).
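For instance (again our own illustration), the rule composes two independent mutations; here the ∗ in the precondition forces the two cells to be distinct, so the interference seen above cannot arise:

```latex
\[
\frac{\{x \mapsto -\}\ [x] := 1\ \{x \mapsto 1\}
      \qquad
      \{y \mapsto -\}\ [y] := 2\ \{y \mapsto 2\}}
     {\{x \mapsto -\ *\ y \mapsto -\}\ \ [x] := 1 \parallel [y] := 2\ \
      \{x \mapsto 1\ *\ y \mapsto 2\}}
\]
```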
Things became far less straightforward, however, when synchronization was introduced. Hoare had investigated conditional critical regions, keyed to "resources", which were disjoint collections of variables. His crucial idea was that there should be an invariant associated with each resource, such that when one entered a critical region keyed to a resource, one could assume that the invariant was true, but when one left the region, the invariant must be restored. O'Hearn was able to generalize these ideas so that both processes and resources could "own" portions of the heap, and this ownership could move among processes and resources dynamically as the processes entered and left critical regions.

As a simple example, consider two processes that share a buffer consisting of a single cons-cell. At the level of the processes, there are simply two procedures: put(x), which accepts a cons-cell and makes it disappear, and get(y), which makes a cons-cell appear. The first process allocates a cons-cell and gives it to put(x); the second process obtains a cons-cell from get(y), uses it, and deallocates it:

    {emp}
    {emp ∗ emp}

      {emp}                          {emp}
      x := cons(. . . , . . .) ;        get(y) ;
      {x ↦ −, −}             ∥       {y ↦ −, −}
      put(x) ;                        "Use y" ;
      {emp}                          {y ↦ −, −}
                                     dispose y ;
                                     {emp}

    {emp ∗ emp}
    {emp}

Behind the scenes, however, there is a resource buf that implements a small buffer that can hold a single cons-cell. Associated with this resource are a boolean variable full, which indicates whether the buffer currently holds a cell, and an integer variable c that points to the cell when full is true. Then put(x) is implemented by a critical region that checks the buffer is empty and then fills it with x, and get(y) is implemented by a conditional critical
region that checks the buffer is full and then empties it into y:

    put(x) = with buf when ¬full do (c := x ; full := true)
    get(y) = with buf when full do (y := c ; full := false)

Associated with the resource buf is an invariant:

    R ≜ (full ∧ c ↦ −, −) ∨ (¬full ∧ emp).

The effect of O'Hearn's inference rule for critical regions is that the resource invariant is used explicitly to reason about the body of a critical region, but is hidden outside of the critical region:

    {x ↦ −, −}
    put(x) = with buf when ¬full do (
        {(R ∗ x ↦ −, −) ∧ ¬full}
        {emp ∗ x ↦ −, −}
        {x ↦ −, −}
        c := x ; full := true
        {full ∧ c ↦ −, −}
        {R}
        {R ∗ emp})
    {emp}

    {emp}
    get(y) = with buf when full do (
        {(R ∗ emp) ∧ full}
        {c ↦ −, − ∗ emp}
        {c ↦ −, −}
        y := c ; full := false
        {¬full ∧ y ↦ −, −}
        {(¬full ∧ emp) ∗ y ↦ −, −}
        {R ∗ y ↦ −, −})
    {y ↦ −, −}
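The conditional critical region "with buf when B do C" can be rendered with a condition variable: acquire the resource's lock, wait until B holds, run the body, and signal waiters. The sketch below (hypothetical Python names; the chapter's programming language has no direct Python analogue, and disposal is left to garbage collection) mirrors the one-cell buffer and the two processes above:

```python
import threading

class Buf:
    """The resource buf: variables full and c, guarded by one condition variable."""
    def __init__(self):
        self.cond = threading.Condition()
        self.full = False
        self.c = None

    def put(self, x):
        # with buf when ¬full do (c := x ; full := true)
        with self.cond:
            while self.full:
                self.cond.wait()
            self.c = x
            self.full = True
            self.cond.notify_all()

    def get(self):
        # with buf when full do (y := c ; full := false)
        with self.cond:
            while not self.full:
                self.cond.wait()
            y = self.c
            self.full = False
            self.cond.notify_all()
            return y

buf = Buf()
result = []

def producer():
    x = ("cons", "cell")   # stands in for x := cons(. . . , . . .)
    buf.put(x)             # ownership of the cell passes into buf

def consumer():
    y = buf.get()          # ownership of the cell passes out to this process
    result.append(y)       # "Use y"; dispose is implicit here (GC)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(result)
```

The `while` loops re-test the guard after each wakeup, which is the standard way a "when B" guard is realized over a condition variable.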
On the other hand, the resource invariant reappears outside the declaration of the resource, indicating that it must be initialized beforehand, and will remain true afterwards:

    {R ∗ emp}
    resource buf in
        {emp}
        {emp ∗ emp}
        . . . ∥ . . .
        {emp ∗ emp}
        {emp}
    {R ∗ emp}

1.11 Fractional Permissions

Especially in concurrent programming, one would like to extend separation logic to permit passivity, i.e., to allow processes or other subprograms that are otherwise restricted to separate storage to share access to read-only variables. R. Bornat [32] has opened this possibility by introducing the concept of permissions, originally devised by John Boyland [31].

The basic idea is to associate a fractional real number called a permission with the ↦ relation. We write e ↦z e′, where z is a real number such that 0 < z ≤ 1, to indicate that e points to e′ with permission z. Then e ↦1 e′ has the same meaning as e ↦ e′, so that a permission of one allows all operations, but when z < 1 only lookup operations are allowed.

This idea is formalized by a conservation law for permissions:

    e ↦z e′ ∗ e ↦z′ e′  iff  e ↦z+z′ e′,

along with local axiom schemata for each of the heap-manipulating operations:

    {emp} v := cons(e1, . . . , en) {v ↦1 e1, . . . , en}
    {e ↦1 −} dispose(e) {emp}
    {e ↦1 −} [e] := e′ {e ↦1 e′}
    {e ↦z e′} v := [e] {e ↦z e′ ∧ v = e′}
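A toy model makes the conservation law and the read/write discipline concrete. Below (an illustrative sketch only; the names split, join, lookup, and mutate are our own, not from the text), a heap maps an address to a (value, permission) pair. Splitting a full permission yields two read-only halves; rejoining them restores the right to write:

```python
from fractions import Fraction

def split(heap, addr, z):
    """e ↦z+z′ e′ read right to left: carve off permission z, leaving the rest."""
    val, perm = heap[addr]
    assert 0 < z < perm
    return ({addr: (val, z)}, {addr: (val, perm - z)})

def join(h1, h2, addr):
    """The conservation law read left to right: recombine two partial permissions."""
    v1, z1 = h1[addr]
    v2, z2 = h2[addr]
    assert v1 == v2 and 0 < z1 + z2 <= 1
    return {addr: (v1, z1 + z2)}

def lookup(heap, addr):
    # lookup is allowed at any permission z > 0
    return heap[addr][0]

def mutate(heap, addr, val):
    # mutation requires the full permission z = 1
    _, z = heap[addr]
    assert z == 1, "write requires permission 1"
    heap[addr] = (val, z)

h = {100: (42, Fraction(1))}
h1, h2 = split(h, 100, Fraction(1, 2))   # two read-only halves of the same cell
print(lookup(h1, 100), lookup(h2, 100))  # both halves can still read
h3 = join(h1, h2, 100)                   # halves recombined: full permission
mutate(h3, 100, 7)                       # writing is legal again once z = 1
print(h3[100])
```

The assertion in mutate is what makes permissions less than one passive: two processes each holding a half permission can read the cell concurrently, but neither alone can write it.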