(2.2.1) using inc would have different observable behaviors from the client calling the atomic INC, because the former may observe the divergence of the left thread (i.e., the left thread calling inc may execute infinitely many steps), while the latter cannot (i.e., the left thread calling INC always terminates if executed). However, the client has the same whole-program termination behaviors when using inc and INC.

• Second, is the scheduling fair or not? Some of the progress properties assume fair scheduling, while others do not. Thus it seems natural for the contextual refinement to make the same assumption about scheduling as the corresponding progress property.

In fact, different fairness assumptions may lead to different contextual refinement results. Consider the deadlock-free counter inc in Figure 2.1(b) and the atomic INC in Figure 2.1(e). The contextual refinement inc ⊑ INC does not hold if we assume fair scheduling (no matter whether we observe whole-program termination or per-thread termination), because the client (2.2.2) produces different output events using inc and INC in fair executions. Using inc, it is possible that the left thread in (2.2.2) never prints 1 and the whole program generates an empty trace, because the lock acquire in the left thread may always fail. But the left thread must print out 1 if INC is called instead and the scheduling is fair: as long as the left thread gets a chance to run, the atomic INC finishes in one step. However, if we drop the fairness assumption when the abstract atomic specification INC is called, we can use an unfair scheduling that never schedules the left thread to simulate the execution of inc in which the lock acquire always fails (Liang et al., 2013).

• Finally, are the abstractions atomic or non-atomic? To characterize linearizability, Π ⊑fin Γ directly uses the atomic specification Γ as the abstraction of Π. But to characterize progress properties, the abstractions may have to be non-atomic to simulate the non-terminating behaviors of the concrete implementations under interleavings. As we explain above, the client (2.2.2) produces different output events using inc and INC in fair executions.
One way to re-establish the refinement is to allow unfair scheduling for the right-hand side of the refinement (i.e., for clients calling INC). Another way is to use a non-atomic specification while keeping the assumption of fair scheduling. For instance, the client (2.2.2) cannot distinguish inc from the following non-atomic INC_NA.

  INC_NA(){
    while (done) {};
    ⟨ x := x + 1; done := true ⟩;
    done := false;
  }

Here done is a newly introduced object variable whose initial value is false, and ⟨C⟩ is an atomic block. When (2.2.2) uses INC_NA, the left thread may not print 1, because the loop while(done){} may fail to terminate if the right thread infinitely often sets done to true. So (2.2.2) using inc and using INC_NA can produce the same output traces in fair executions. In fact, inc ⊑ INC_NA holds under fair scheduling.

Abstractions for objects with partial methods. The atomic specifications defined in (2.1.1) can characterize the atomic behaviors of lock objects, but they fail to specify that L_ACQ should be partial, in the sense that it should be blocked when the lock is unavailable.

To address the problem, a natural way is to introduce the atomic partial specification Γ, where each method specification is of the form await(B){C}. The execution of await(B){C} is blocked if B does not hold, and C executes atomically if B holds (moreover, the current thread cannot be interrupted between the test showing that B holds and the execution of C). When B holds, we say await(B){C} is enabled. Note that await(B){C} is the only blocking primitive we introduce in this work. All other primitive commands we have seen so far are non-blocking and are always enabled. We say a thread is enabled if the next command to be executed by the thread is enabled.
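To make the semantics of await(B){C} concrete, the following is a minimal sketch of how it could be encoded in Java on top of a monitor. The class name Await, the method signature, and the assumption that every update to the shared state goes through the same monitor are ours for illustration, not part of the paper's formal semantics.

  import java.util.function.BooleanSupplier;

  // Monitor-based encoding of await(B){C}: block while B is false, then run C
  // without interference from other threads using the same Await instance.
  final class Await {
      private final Object monitor = new Object();

      void await(BooleanSupplier b, Runnable c) throws InterruptedException {
          synchronized (monitor) {
              while (!b.getAsBoolean()) {
                  monitor.wait();     // disabled: the enabling condition B is false
              }
              c.run();                // enabled: execute C atomically w.r.t. this monitor
              monitor.notifyAll();    // let other blocked awaits re-check their conditions
          }
      }
  }

Whether such an encoding behaves more like strong or weak fairness depends on the scheduler and on the runtime's notification order, which is precisely the kind of distinction discussed next.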
For the lock objects, we can define the atomic partial specification Γ as follows.

  L_ACQ'(){ await (l = 0) { l := cid }; }
  L_REL(){ l := 0; }                                                        (2.3.2)

The await block naturally specifies the atomicity of the method functionality, just like the traditional atomic specification ⟨C⟩ (which can be viewed as syntactic sugar for await(true){C}); therefore Γ may serve as a specification for linearizable objects. It also makes explicit that the object method is partial, by exposing the enabling condition B. Below we will use the atomic partial specification as the starting point to characterize the progress of objects.

Although the atomic partial specification Γ describes the atomic functionality and the enabling condition of each method, it is insufficient to serve as a progress-aware abstraction, for the following reasons.

First, the progress of the await command itself is affected by the fairness of scheduling, which may be strong fairness or weak fairness.

• Strong fairness: Every thread which is infinitely often enabled will execute infinitely often. Then await(B){C} fails to execute only if B is continuously false after some point in the execution trace.

• Weak fairness: Every thread which is eventually always enabled will execute infinitely often. Then await(B){C} may never execute if B is infinitely often false. This fairness notion is weaker than strong fairness.

As a result, the choice of fairness notion affects the behaviors of a program or a specification containing await commands. To see this, we consider the following client program (2.3.3).

  [ _ ]acq; [ _ ]rel; print(1);   ||   while(true){ [ _ ]acq; [ _ ]rel; }        (2.3.3)

where [ _ ]acq and [ _ ]rel represent holes to be filled with method calls of lock acquire and release, respectively.
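For concreteness, the client (2.3.3) could be rendered in Java roughly as follows; the interface SpinLock and the method names acquire and release are hypothetical placeholders for whatever lock object fills the [ _ ]acq and [ _ ]rel holes.

  // Hypothetical lock interface standing for the holes in client (2.3.3).
  interface SpinLock {
      void acquire();   // fills [ _ ]acq
      void release();   // fills [ _ ]rel
  }

  final class Client233 {
      // The left thread locks, unlocks, and prints 1; the right thread locks
      // and unlocks forever, competing with the left thread for the lock.
      static void run(SpinLock lock) {
          Thread left = new Thread(() -> {
              lock.acquire();
              lock.release();
              System.out.println(1);          // print(1) in (2.3.3)
          });
          Thread right = new Thread(() -> {
              while (true) {
                  lock.acquire();
                  lock.release();
              }
          });
          left.start();
          right.start();
      }
  }

Whether the left thread is guaranteed to print 1 then depends on which lock implementation is passed in and on the fairness of scheduling.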
Table 2.1 shows the behaviors of the client with different locks.

  Table 2.1: Client (2.3.3) with different locks. "Yes" means it must print out 1, "No" otherwise.

                                        Strong Fairness    Weak Fairness
    spec. (2.3.2)                       Yes                No
    ticket lock (Fig. 2.1(c))           Yes                Yes
    test-and-set lock (Fig. 2.1(a))     No                 No

If the client calls the abstract specifications in (2.3.2), it must execute print(1) under strongly fair scheduling, but may not do so under weakly fair scheduling. This is because the call of L_ACQ' by the left thread could be infinitely often enabled and infinitely often disabled in an execution. In detail, whenever the lock is owned by the right thread, L_ACQ' on the left is disabled, and whenever the right thread releases the lock, L_ACQ' on the left is enabled. In a weakly fair execution, it is possible that the left thread does not execute any step, since weak fairness guarantees that a thread executes only when it is always enabled. However, such an execution does not satisfy strong fairness, which requires the thread to execute infinitely often when it is infinitely often enabled (instead of being always enabled). This example also explains in what sense weak fairness is weaker than strong fairness.

Also note that the two fairness notions coincide when the program contains no blocking operations and is thus always enabled. Both strong and weak fairness degrade to plain fairness (saying that every thread executes infinitely often) for programs without await commands. Therefore, as shown in Table 2.1, regardless of strongly or weakly fair scheduling, the client (2.3.3) using a ticket lock always executes print(1), but it may not do so if using a test-and-set lock instead. In detail, the ticket lock implementation guarantees that the left thread eventually acquires the lock. But with the test-and-set lock in Figure 2.1(a), it is possible that the cas in the lock-acquire method L_acq() of the left thread always fails, and hence L_acq() does not return.
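To make the contrast concrete, here are minimal Java sketches of the two lock shapes; they follow the standard test-and-set and ticket lock algorithms and may differ in details from the code in Figure 2.1.

  import java.util.concurrent.atomic.AtomicBoolean;
  import java.util.concurrent.atomic.AtomicInteger;

  // Test-and-set lock: acquire spins on a cas that an unlucky thread may lose
  // forever, so the lock is deadlock-free but not starvation-free.
  final class TestAndSetLock {
      private final AtomicBoolean held = new AtomicBoolean(false);

      void acquire() {
          while (!held.compareAndSet(false, true)) { /* spin and retry */ }
      }

      void release() {
          held.set(false);
      }
  }

  // Ticket lock: threads are served in the FIFO order of their tickets, so
  // under fair scheduling every acquiring thread eventually gets the lock.
  final class TicketLock {
      private final AtomicInteger next  = new AtomicInteger(0);
      private final AtomicInteger owner = new AtomicInteger(0);

      void acquire() {
          int ticket = next.getAndIncrement();       // take the next ticket
          while (owner.get() != ticket) { /* spin until it is my turn */ }
      }

      void release() {
          owner.incrementAndGet();                   // serve the next ticket
      }
  }

Running client (2.3.3) with a ticket lock, the left thread eventually acquires the lock because its ticket is eventually the one being served; with a test-and-set lock, the right thread may win the cas every time, so the left thread may never print 1.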
As a result, for the same object implementation, we may need different abstractions under different scheduling. As shown in Table 2.1, the specification (2.3.2) cannot serve as the specification of the test-and-set lock under either strong or weak fairness.

Second, even under the same scheduling, different implementations demonstrate different progress, and therefore need different abstractions. As shown in Table 2.1, the different lock implementations have different behaviors even under the same scheduling.

For the above two reasons, we need different abstractions for different combinations of fairness and progress. For PSF and PDF under strong and weak fairness respectively, we may need four different abstractions. Can we generate all of them in a systematic way?

2.4 Verifying Progress Properties

Below we first give an overview of the traditional rely-guarantee logic for safety proofs (Jones, 1983), which serves as the basis of our concurrency reasoning. Then we show how it can be extended to do relational reasoning and to verify the contextual refinement equivalent to linearizability. Our program logic LiLi further extends the idea to verify different contextual refinements corresponding to different progress properties and linearizability. Finally, we explain the challenges in supporting progress verification, and give a quick look at the ideas behind LiLi.

2.4.1 Rely-Guarantee Reasoning

In rely-guarantee reasoning (Jones, 1983), each thread is verified in isolation under some assumptions on its environment (i.e., the other threads in the system). The judgment is of the form R, G ⊢ {P}C{Q}, where the pre- and post-conditions P and Q are assertions over program states; they specify the initial and final states of the program C, respectively. The rely condition R and the guarantee condition G are assertions over pairs of states (i.e., state transitions). The rely condition R specifies the permitted state transitions of the environment threads, while the guarantee condition G specifies the possible transitions made by the thread C itself. Informally, the judgment R, G ⊢ {P}C{Q} says that, starting from an initial state satisfying P, and assuming that the environment's state transitions all satisfy R, the execution of C is safe, every state transition made by C satisfies G, and the final state (if C terminates) satisfies Q.
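To see how such a judgment is used, here is a minimal instance constructed purely for illustration (it is not an example taken from this text). Consider a thread whose code C is the atomic increment ⟨x := x + 1⟩, running with environment threads that may increase x but never decrease it. One possible choice of rely and guarantee is

  R ≜ x ≤ x'        G ≜ (x' = x) ∨ (x' = x + 1)

where the primed variable denotes the value of x after a transition, and the judgment

  R, G ⊢ {x ≥ 0} ⟨x := x + 1⟩ {x ≥ 1}

holds: the only transition made by C increments x and thus satisfies G, and both x ≥ 0 and x ≥ 1 are stable under R, since an environment that can only increase x cannot invalidate them.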