2.5. Our Approach

In this paper we propose a Rely-Guarantee-based Simulation (RGSim) ≼ between the target and the source programs. It establishes a weak simulation, ensuring that for every externally observable event made by the target program there is a corresponding one in the source. We choose to view intermediate state accesses as silent steps, so that we can relate programs with different implementation details. This also makes our simulation independent of language details.

To support parallel compositionality, our relation explicitly takes into account the expected interference between threads and their parallel environments. Inspired by the Rely-Guarantee (R-G) verification method [Jones 1983], we specify the interference using rely/guarantee conditions. In Rely-Guarantee reasoning, the rely condition R of a thread specifies the permitted state transitions of its environment, and its guarantee G specifies the possible transitions made by the thread itself. To ensure that parallel threads can collaborate, we need to check the interference constraint, i.e., the guarantee of each thread is permitted in the rely of every other. Then we can verify their parallel composition by verifying each thread separately, showing that its behaviors under the rely condition indeed satisfy its guarantee. After parallel composition, the threads should be executed under their common environment (i.e., the intersection of their relies) and guarantee all the possible transitions made by them (i.e., the union of their guarantees).

Parametrized with rely/guarantee conditions at the two levels, our relation (C, ℛ, 𝒢) ≼ (C, R, G) talks about not only the target C and the source C, but also the interference ℛ and 𝒢 between C and its target-level environment, and R and G between C and its environment at the source level. Informally, (C, ℛ, 𝒢) ≼ (C, R, G) says that the executions of C under the environment ℛ do not exhibit more observable behaviors than the executions of C under the environment R, and that the state transitions of C and C satisfy 𝒢 and G respectively. RGSim is now compositional, as long as the threads are composed with well-behaved environments only. The parallel compositionality lemma has the following form. If we know (C1, ℛ1, 𝒢1) ≼ (C1, R1, G1) and (C2, ℛ2, 𝒢2) ≼ (C2, R2, G2), and the interference constraints are satisfied, i.e., 𝒢2 ⊆ ℛ1, 𝒢1 ⊆ ℛ2, G2 ⊆ R1 and G1 ⊆ R2, we can conclude

    (C1 ∥ C2, ℛ1 ∩ ℛ2, 𝒢1 ∪ 𝒢2) ≼ (C1 ∥ C2, R1 ∩ R2, G1 ∪ G2).

The compositionality of RGSim gives us a proof theory for concurrent program transformations.

Also, unlike fully abstract semantics for threads, which assume arbitrary behaviors of environments, RGSim allows us to instantiate the interference ℛ, 𝒢, R and G differently for different assumptions about environments, so it can be used to verify the aforementioned four classes of transformations. For instance, if we want to prove that a transformation preserves the behaviors of data-race-free programs, we can specify the data-race-freedom in R and G. Then we are no longer concerned with the examples in Figure 1, both of which have data races.

Example. Below we give an example of loop-invariant hoisting to illustrate how RGSim works. The formal proofs are shown in Section 5.2.1.

    Target Code (C1)              Source Code (C)
    local t;                      local t;
    t := x + 1;                   while (i < n) {
    while (i < n) {        ⇐        t := x + 1;
      i := i + t;                   i := i + t;
    }                             }
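To see concretely why the environment's interference matters for this example, below is a minimal executable sketch (ours, not part of the paper's development) that runs the two fragments with and without a single interfering write to the shared variable x. The initial values (x = 1, i = 0, n = 10) and the point at which the environment writes are assumptions chosen purely for illustration.

# Minimal sketch: the hoisted code (target C1) and the original code (source C),
# each run once without interference and once with one environment write to x.

def run_target(x0, n, env_write=None):
    # Target C1: t is computed from x once, before the loop (the hoisted read).
    x, i = x0, 0
    t = x + 1
    if env_write is not None:
        x = env_write            # environment updates x after the hoisted read
    while i < n:
        i = i + t
    return i

def run_source(x0, n, env_write=None):
    # Source C: t is recomputed from x on every iteration.
    x, i = x0, 0
    while i < n:
        if env_write is not None:
            x = env_write        # environment updates x before the read of x
        t = x + 1
        i = i + t
    return i

# Without interference the two programs agree on the final value of i ...
assert run_target(1, 10) == run_source(1, 10)

# ... but under an environment that sets x to 100 they diverge (10 vs. 101),
# which is why the relies discussed below must forbid updates of x and t.
print(run_target(1, 10, env_write=100), run_source(1, 10, env_write=100))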
    (Events)  e ::= . . .        (Labels)  o ::= e | τ

        (a) Events and Transition Labels

    (LState)  σ ::= . . .
    (LExpr)   E ∈ LState → Int⊥
    (LBExp)   B ∈ LState → {true, false}⊥
    (LInstr)  c ∈ LState ⇀ P((Labels × LState) ∪ {abort})
    (LStmt)   C ::= skip | c | C1; C2 | if (B) C1 else C2 | while (B) C | C1 ∥ C2
    (LStep)   −→L ∈ P((LStmt\{skip} × LState) × Labels × ((LStmt × LState) ∪ {abort}))

        (b) The Low-Level Language

    (HState)  Σ ::= . . .
    (HExpr)   E ∈ HState → Int⊥
    (HBExp)   B ∈ HState → {true, false}⊥
    (HInstr)  c ∈ HState ⇀ P((Labels × HState) ∪ {abort})
    (HStmt)   C ::= skip | c | C1; C2 | if B then C1 else C2 | while B do C | C1 ∥ C2
    (HStep)   −→H ∈ P((HStmt\{skip} × HState) × Labels × ((HStmt × HState) ∪ {abort}))

        (c) The High-Level Language

    Fig. 2. Generic Languages at Target and Source Levels

Benton [2004] has proved that the optimized code C1 preserves the sequential behaviors of the source C. In a concurrent setting, however, this optimization is incorrect within arbitrary environments. For instance, if other threads may update x, the final values of i might differ at the two levels. In fact, the optimization works only when the environments ℛ and R at the two levels update neither x nor t. In practice, this requires the programmer to mark x as a volatile variable. The guarantees 𝒢 and G of C1 and C can be specified as arbitrary transitions. Then we can prove the RGSim relation (C1, ℛ, 𝒢) ≼ (C, R, G) and conclude the correctness of the transformation.

3. BASIC TECHNICAL SETTINGS

In this section, we present the source and the target programming languages. Then we define a basic refinement ⊑, which naturally says that the target has no more externally observable event traces than the source. We use ⊑ as an intuitive formulation of the correctness of transformations. Our RGSim relation, which will be formally defined in Section 4, is proposed as a proof technique for ⊑.

3.1. The Languages

Following standard simulation techniques, we model the semantics of target and source programs as labeled transition systems. Before showing the languages, we first define events and labels in Figure 2(a). We leave the set of events unspecified here; it can be instantiated by program verifiers, depending on their interest (e.g., input/output events). A label that will be associated with a state transition is either an event or τ, which means the corresponding transition does not generate any event (i.e., it is a silent step).
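As a small, concrete reading of Figure 2(a) and of the instruction format used in parts (b) and (c), the following sketch (our own encoding; the event constructor out and the toy dictionary states are assumptions made only for illustration) represents a label as either the silent label τ or an event, and writes two one-step instructions that return lists of (label, next state) outcomes.

TAU = ("tau",)                        # the silent label: the step emits no event

def out(v):                           # a hypothetical output event, one instance of "e ::= ..."
    return ("event", "out", v)

# Two toy instructions over states represented as dictionaries; each returns a
# list of (label, next state) pairs, matching the shape of LInstr/HInstr.
def assign_x_one(state):
    # a silent step: changes the state, generates no event
    return [(TAU, {**state, "x": 1})]

def print_x(state):
    # an observable step: leaves the state unchanged, emits an output event
    return [(out(state["x"]), dict(state))]

print(assign_x_one({"x": 0}))         # [(('tau',), {'x': 1})]
print(print_x({"x": 1}))              # [(('event', 'out', 1), {'x': 1})]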
The target language, which we also call the low-level language, is shown in Figure 2(b). We abstract away the forms of states, expressions and primitive instructions in the language. An arithmetic expression E is modeled as a function from states to integers lifted with an undefined value ⊥. Boolean expressions B are modeled similarly. An instruction c is a partial function from states to sets of label and state pairs, describing the state transitions and the events it generates. We use P(·) to denote the power set. Unsafe executions lead to abort. Note that the semantics of an instruction could be non-deterministic. Moreover, it might be undefined on some states, which makes it possible to model blocking operations such as acquiring a lock.

Statements are either primitive instructions or compositions of them. skip is a special statement used as a flag to show the end of executions; when it is sequentially composed with other statements, it has no computational effects. A single-step execution of statements is modeled as a labeled transition −→L, which is a triple of an initial program configuration (a pair of a statement and a state), a label and a resulting configuration. It is undefined when the initial statement is skip. The step aborts if an unsafe instruction is executed.

The high-level language (source language) is defined similarly in Figure 2(c), but it is important to note that its states and primitive instructions may be different from those in the low-level language. The compound statements are almost the same as their low-level counterparts: C1; C2 and C1 ∥ C2 are the sequential and parallel compositions of C1 and C2 respectively. Note that we choose to use the same set of compound statements in the two languages for simplicity only. This is not required by our simulation relation, although the analogous program constructs of the two languages (e.g., the parallel compositions at the two levels) make it convenient for us to discuss compositionality later.

    (o, Σ′) ∈ c Σ
    ─────────────────────
    (c, Σ) −o→ (skip, Σ′)

    abort ∈ c Σ
    ───────────────
    (c, Σ) −→ abort

    Σ ∉ dom(c)
    ────────────────
    (c, Σ) −→ (c, Σ)

    ─────────────────────────────
    (skip ∥ skip, Σ) −→ (skip, Σ)

    (C1, Σ) −o→ (C1′, Σ′)
    ───────────────────────────────
    (C1 ∥ C2, Σ) −o→ (C1′ ∥ C2, Σ′)

    (C2, Σ) −o→ (C2′, Σ′)
    ───────────────────────────────
    (C1 ∥ C2, Σ) −o→ (C1 ∥ C2′, Σ′)

    (C1, Σ) −→ abort   or   (C2, Σ) −→ abort
    ─────────────────────────────────────────
    (C1 ∥ C2, Σ) −→ abort

    Fig. 3. Selected Operational Semantics Rules of the High-Level Language

Figure 3 shows part of the definition of −→H, which gives the high-level operational semantics of statements. We often omit the subscript H (or L) in −→H (or −→L), and omit the label on the arrow when it is τ. The semantics is mostly standard; we only show the rules for primitive instructions and parallel compositions here. Note that when a primitive instruction c is blocked at a state Σ (i.e., Σ ∉ dom(c)), we let the program configuration reduce to itself. For example, the instruction lock(l) is blocked when l is not 0, so it is retried until l becomes 0; whereas unlock(l) simply sets l to 0 at any time and is never blocked. Primitive instructions in the high-level and low-level languages are atomic in the interleaving semantics. Below we use −→* for zero or multiple-step transitions with no events generated, and −e→* for multiple-step transitions with exactly one event e generated.
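The partiality of instructions and the "blocked configuration reduces to itself" rule can be made concrete with a small sketch (ours; encoding instructions as Python functions that return None when blocked, over a one-field state containing only the lock variable l, is an illustrative assumption only):

# Instructions as partial functions: None means the state is outside the
# instruction's domain (the instruction is blocked); otherwise a list of
# (label, next state) outcomes is returned.

TAU = "tau"

def lock_l(state):
    # lock(l): enabled only when l = 0; otherwise undefined, i.e. blocked
    if state["l"] == 0:
        return [(TAU, {**state, "l": 1})]
    return None

def unlock_l(state):
    # unlock(l): always enabled, simply sets l to 0
    return [(TAU, {**state, "l": 0})]

def step(instr, state):
    # One step of a primitive instruction in the style of Figure 3: a blocked
    # configuration reduces to itself, so the instruction will be retried.
    outcomes = instr(state)
    if outcomes is None:
        return [(TAU, state)]            # (c, Σ) −→ (c, Σ) when Σ ∉ dom(c)
    return outcomes

print(step(lock_l, {"l": 1}))            # blocked, state unchanged
print(step(lock_l, {"l": 0}))            # acquires the lock: l becomes 1
print(step(unlock_l, {"l": 1}))          # releases the lock: l becomes 0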
3.2. The Event Trace Refinement

Now we can formally define the refinement relation ⊑, which relates the sets of externally observable event traces generated by the target and the source programs. A trace is a sequence of events e, and may end with a termination marker done or a fault marker abort.

    (EvtTrace)  E ::= ϵ | done | abort | e :: E

Definition 3.1 (Event Trace Set). ETrSet_n(C, σ) represents the set of external event traces produced by C in n steps from the state σ:

(1) ETrSet_0(C, σ) ≜ {ϵ};
(2) ETrSet_{n+1}(C, σ) ≜
      {E |  (C, σ) −→ (C′, σ′) ∧ E ∈ ETrSet_n(C′, σ′)
          ∨ (C, σ) −e→ (C′, σ′) ∧ E′ ∈ ETrSet_n(C′, σ′) ∧ E = e :: E′
          ∨ (C, σ) −→ abort ∧ E = abort
          ∨ C = skip ∧ E = done}.

We define ETrSet(C, σ) as ∪_n ETrSet_n(C, σ).

We overload the notation and use ETrSet(C, Σ) for the high-level language. Note that we treat abort as a specific behavior rather than as undefined arbitrary behaviors; the choice should depend on the application. The ideas in this paper should also apply in the latter setting, though we would need to change the refinement and simulation relations defined below.

Then we define an event trace refinement as the subset relation between event trace sets, which is similar to Leroy's refinement property [Leroy 2009].

Definition 3.2 (Event Trace Refinement). We say (C, σ) is an e-trace refinement of (C, Σ), written (C, σ) ⊑ (C, Σ), if and only if ETrSet(C, σ) ⊆ ETrSet(C, Σ).

The refinement is defined for program configurations instead of for code only, because the initial states may affect the behaviors of programs. In this case, the transformation T should translate states as well as code. We overload the notation and use T(Σ) to represent the state transformation, and write C ⊑T C for

    ∀σ, Σ. σ = T(Σ) ⟹ (C, σ) ⊑ (C, Σ).

Then Correct(T), defined in formula (1.1), can be reformulated as

    Correct(T) ≜ ∀C, C. C = T(C) ⟹ C ⊑T C.                                  (3.1)

From the above e-trace refinement definition, we can derive an e-trace equivalence relation by requiring that both directions hold:

    (C, σ) ≈ (C, Σ) ≜ (C, σ) ⊑ (C, Σ) ∧ (C, Σ) ⊑ (C, σ),

and we write C ≈T C for ∀σ, Σ. σ = T(Σ) ⟹ (C, σ) ≈ (C, Σ).

4. THE RGSIM RELATION

The e-trace refinement is defined directly over the externally observable behaviors of programs. It is intuitive, and also abstract in that it is independent of language details. However, as we explained before, it is not compositional w.r.t. parallel compositions. In this section we propose RGSim, which can be viewed as a compositional proof technique that allows us to derive the simple e-trace refinement and then verify the corresponding transformation T.
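Before turning to RGSim, here is a small executable sketch of Definitions 3.1 and 3.2 (ours; the two hand-coded transition systems and the step bound are made up solely to illustrate the subset check). It enumerates ETrSet_n over an explicit transition relation and then tests the e-trace refinement in both directions.

def etr_set(trans, conf, n):
    # ETrSet_n(conf) following Definition 3.1, over a hand-coded transition
    # relation: each outcome is ('tau', c'), ('evt', e, c') or ('abort',),
    # and the configuration named 'skip' stands for a terminated program.
    if n == 0:
        return {()}                               # ETrSet_0 = {empty trace}
    traces = set()
    for outcome in trans[conf]:
        if outcome[0] == 'tau':                   # silent step
            traces |= etr_set(trans, outcome[1], n - 1)
        elif outcome[0] == 'evt':                 # step generating event e
            _, e, succ = outcome
            traces |= {(e,) + t for t in etr_set(trans, succ, n - 1)}
        elif outcome[0] == 'abort':               # unsafe step
            traces.add(('abort',))
    if conf == 'skip':                            # terminated configuration
        traces.add(('done',))
    return traces

def etr_set_all(trans, conf, bound):
    # ETrSet(conf): the union of ETrSet_n over n, cut off at 'bound' here.
    result = set()
    for n in range(bound + 1):
        result |= etr_set(trans, conf, n)
    return result

# Target: emit event "a", then terminate.  Source: emit "a" or "b", then terminate.
low  = {'c0': [('evt', 'a', 'skip')], 'skip': []}
high = {'C0': [('evt', 'a', 'skip'), ('evt', 'b', 'skip')], 'skip': []}

print(etr_set_all(low, 'c0', 3) <= etr_set_all(high, 'C0', 3))   # True: target refines source
print(etr_set_all(high, 'C0', 3) <= etr_set_all(low, 'c0', 3))   # False: not the other way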
    Fig. 4. Related Transitions: (a) α-Related Transitions; (b) The Side Condition of TRANS.

4.1. The Definition

Our co-inductively defined RGSim relation has the form (C, σ, ℛ, 𝒢) ≼α;γ (C, Σ, R, G), which is a simulation between the program configurations (C, σ) and (C, Σ). It is parametrized with the rely and guarantee conditions at the low level and the high level, which are binary relations over states:

    ℛ, 𝒢 ∈ P(LState × LState),    R, G ∈ P(HState × HState).

The simulation also takes two additional parameters: the step invariant α and the postcondition γ, which are both relations between low-level and high-level states:

    α, γ ∈ P(LState × HState).

Before we formally define RGSim in Definition 4.2, we first introduce α-related transitions.

Definition 4.1 (α-Related Transitions).

    ⟨ℛ, R⟩α ≜ {((σ, σ′), (Σ, Σ′)) | (σ, σ′) ∈ ℛ ∧ (Σ, Σ′) ∈ R ∧ (σ, Σ) ∈ α ∧ (σ′, Σ′) ∈ α}.

⟨ℛ, R⟩α represents the set of α-related transitions in ℛ and R, putting together the corresponding transitions in ℛ and R that can be related by α, as illustrated in Figure 4(a). ⟨𝒢, G⟩α is defined in the same way.

Definition 4.2 (RGSim). Whenever (C, σ, ℛ, 𝒢) ≼α;γ (C, Σ, R, G), then (σ, Σ) ∈ α and the following are true:

(1) if (C, σ) −→ (C′, σ′), then there exist C′ and Σ′ such that (C, Σ) −→* (C′, Σ′), ((σ, σ′), (Σ, Σ′)) ∈ ⟨𝒢, G*⟩α and (C′, σ′, ℛ, 𝒢) ≼α;γ (C′, Σ′, R, G);
(2) if (C, σ) −e→ (C′, σ′), then there exist C′ and Σ′ such that (C, Σ) −e→* (C′, Σ′), ((σ, σ′), (Σ, Σ′)) ∈ ⟨𝒢, G*⟩α and (C′, σ′, ℛ, 𝒢) ≼α;γ (C′, Σ′, R, G);
(3) if C = skip, then there exists Σ′ such that (C, Σ) −→* (skip, Σ′), ((σ, σ), (Σ, Σ′)) ∈ ⟨𝒢, G*⟩α, (σ, Σ′) ∈ γ and γ ⊆ α;
(4) if (C, σ) −→ abort, then (C, Σ) −→* abort;
(5) if ((σ, σ′), (Σ, Σ′)) ∈ ⟨ℛ, R*⟩α, then (C, σ′, ℛ, 𝒢) ≼α;γ (C, Σ′, R, G).

Then, (C, ℛ, 𝒢) ≼α;ζ⋉γ (C, R, G) iff for all σ and Σ, if (σ, Σ) ∈ ζ, then (C, σ, ℛ, 𝒢) ≼α;γ (C, Σ, R, G). Here the precondition ζ ∈ P(LState × HState) is used to relate the initial states σ and Σ.

Informally, (C, σ, ℛ, 𝒢) ≼α;γ (C, Σ, R, G) says that the low-level configuration (C, σ) is simulated by the high-level configuration (C, Σ) with behaviors 𝒢 and G respectively, no matter how their environments ℛ and R interfere with them. It requires the following to hold for every execution of C: