Fig. 2. An example illustrating the idea of Unity. S1, S2, and S3 are three input selective context-sensitivity approaches. The four squares show the selected context-sensitivity information for the same program P (i.e., which methods of P should be analyzed by which context-sensitivity variants) yielded by the three input selective approaches (S1, S2, and S3) and by Unity, respectively. For example, the square of S1 means: the S1 approach determines that the methods enclosed in the green circle will be analyzed by 2obj while the remaining methods of P will be analyzed context-insensitively (i.e., CI).

heavier context elements and longer context lengths, if possible. This counterintuitive proposal is based on the following insight.

Insight. Analyzing more methods context-sensitively may not incur an efficiency decline (sometimes it may even accelerate an analysis, as more false data flows are pruned away), as long as the right methods are chosen. Avoiding scalability-threat methods is essential [Li et al. 2020; Smaragdakis et al. 2014], but the net number of methods analyzed context-sensitively is not.

As described in Section 2, each of the existing selective context-sensitivity approaches (henceforth, just "selective approaches") has its own insight (e.g., based on expert experience, parameterized heuristics, principled program patterns, etc.) for identifying which methods are likely to improve precision without incurring much efficiency cost when analyzed context-sensitively. Limited by the effectiveness of each insight, every approach may miss a collection of methods that are truly precision-useful and efficiency-friendly; however, this problem can be alleviated by combining more selective approaches to identify more target methods. As a result, we propose to simultaneously and context-sensitively analyze all the methods identified by a set of selective approaches, S. Although possibly many more methods now need to be analyzed context-sensitively, we still have a chance to obtain a scalable analysis even when analyzing them all together, as each of these extra methods has been determined not to be a scalability threat by at least one approach in S. Hence, although it is hard to give one principle that accurately decides which methods, say M, are precision-useful but not scalability-threatening, our proposal naturally takes advantage of the insights of different selective approaches to identify M from different perspectives.

We illustrate the key idea of Unity in Figure 2. Assume we have three selective approaches S1, S2, and S3. Figure 2 depicts the selected context-sensitivity information determined by each approach for the same program P. For instance, after analyzing P, S1 reports that the set of methods in the green circle (in S1 of Figure 2) should be analyzed by 2obj and the remaining methods (the gray part in S1) should be analyzed context-insensitively (CI). Similarly, we have the results for S2 and S3 in Figure 2; in the case of S3, three context-sensitivity variants, 3obj, 2obj, and 1type, are selected for different sets of methods. What does Unity produce given the above three approaches and their selected context-sensitivity information?
Briefly, Unity finds the most precise configuration by reassigning context-sensitivity variants to the methods that are reported to be analyzed context-sensitively by at least two input approaches. For example, among the overlapping methods reported by S1 and S3, from the point
of view of S1, some methods that should have been analyzed under 2obj are now assigned to 3obj (their color turns from green to red), as 3obj is more precise than 2obj; the rest of the overlapping methods remain green (i.e., they are still analyzed by 2obj), as 2obj is more precise than 1type (reported by S3). After applying a similar treatment to the other overlapping methods (with the disjoint ones unchanged), we get the output of Unity shown in Figure 2. Note that the methods reported by all three input approaches (the middle part of the Unity square) remain green, as 2obj (reported by S1) is more precise than 2type (by S2) and 1type (by S3). Section 4 will detail the rules for how Unity deals with more input approaches and different context-sensitivity variants.

From the point of view of each selective approach, Unity modifies it by (1) adding many more methods (identified by the other approaches) to be analyzed context-sensitively, and (2) upgrading the context-sensitivity variant to the most precise one for the methods that are also selected by other approaches. We have explained the insight behind (1) at the beginning of Section 3.2, and the insight behind (2) is similar: if a method m is reported to be analyzed under a certain context-sensitivity variant, cs, by an input approach s, this implies that, according to the insight of s, m is considered not to be a scalability threat, regardless of how costly cs is. Then, if the input approaches are reliable, we still have a chance to produce a scalable analysis, even when selecting the most precise cs. Later, experimental results further demonstrate the validity of the Unity insight: even though many more methods are analyzed context-sensitively, and under the most precise configuration (w.r.t. the input approaches), Unity is still able to scale for 12 out of 13 hard-to-analyze programs (with default settings).
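To make the reassignment concrete, below is a minimal sketch (not the authors' implementation) of how a Unity configuration could be computed from the per-method selections of the input approaches. The Variant enum, its total precision ordering, and the map-based representation are illustrative assumptions on our part; the paper's actual rules for comparing variants are detailed in Section 4.

import java.util.*;

// Hypothetical representation: each input approach yields a map from methods to the
// context-sensitivity variant it selects; methods absent from a map are analyzed under CI.
enum Variant {
    CI(0), TYPE1(1), TYPE2(2), OBJ2(3), OBJ3(4); // assumed precision order, least to most precise
    final int precision;
    Variant(int p) { this.precision = p; }
}

class UnitySketch {
    // For every method selected context-sensitively by at least one input approach,
    // keep the most precise variant suggested for it; all other methods stay CI.
    static Map<String, Variant> combine(List<Map<String, Variant>> selections) {
        Map<String, Variant> unified = new HashMap<>();
        for (Map<String, Variant> selection : selections) {
            for (Map.Entry<String, Variant> e : selection.entrySet()) {
                unified.merge(e.getKey(), e.getValue(),
                        (v1, v2) -> v1.precision >= v2.precision ? v1 : v2);
            }
        }
        return unified; // methods not in the map are analyzed context-insensitively
    }

    public static void main(String[] args) {
        // Hypothetical method names m1-m3: S1 selects 2obj for m1 and m2; S3 selects 1type for m2 and 3obj for m3.
        Map<String, Variant> s1 = Map.of("m1", Variant.OBJ2, "m2", Variant.OBJ2);
        Map<String, Variant> s3 = Map.of("m2", Variant.TYPE1, "m3", Variant.OBJ3);
        System.out.println(combine(List.of(s1, s3))); // m1=OBJ2, m2=OBJ2, m3=OBJ3
    }
}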
However, there is no guarantee regarding the scalability of Unity. What can we do if Unity fails to scale? As a second try, a straightforward approach is to not make the analysis too costly. For example, if some overlapping methods M are reported to be analyzed under 3obj and 1type by different input approaches, what about choosing the less precise (and also less costly) one, i.e., 1type? Nevertheless, this may introduce significant precision loss, and if M is critical for improving precision, such treatment may make the analysis even less precise than the original one(s) guided by certain input approach(es) individually. To reap as much precision as possible while achieving good scalability, we employ the Relay technique.

The core insight of Relay is to divide the scalability burden of Unity into different passes (each pass is a run of selective context-sensitive pointer analysis), such that the precision achieved by each pass can be transmitted to and accumulated in the next pass. Based on this insight, we design two options of Relay, called Relay-o1 and Relay-o2, where the former is more precise but less scalable than the latter. Figure 3 depicts the above idea with the same input selective approaches as in Figure 2. We introduce Relay-o1 first.

3.3 Relay

The first pass of Relay-o1 in Figure 3 differs from Unity in that it context-sensitively analyzes only the methods (say M1) selected by S1 (enclosed in the solid circle), with the remaining ones analyzed under CI. Similarly, only the methods, say M2 (M3), selected by S2 (S3) are analyzed context-sensitively in pass 2 (pass 3). Thus, for each context-sensitive method of M1 (M2 or M3), as shown in the figure, Relay selects the same context-sensitivity variant as Unity does, namely, still the most precise configuration suggested by any of the combined approaches. For example, for M1 in pass 1, its distribution of context-sensitivity variants is the same as that of Unity in Figure 2 (i.e., the same subset of methods is assigned 3obj while the remaining methods are assigned 2obj). In each pass of Relay, only a subset of Unity's methods are analyzed context-sensitively. Although this does not ensure better scalability than Unity, no more methods than in the base approach are analyzed context-sensitively; only the same methods may be analyzed with a more precise context. Therefore, the probability of selecting scalability-threat methods is low, making Relay more likely to scale than Unity.

Fig. 3. Examples illustrating the idea of Relay, with the same input approaches as in Figure 2. Relay has two options, Relay-o1 and Relay-o2, where the former is more precise but less scalable than the latter. In each pass of both options, only the methods enclosed in the solid circles, say M, are analyzed context-sensitively (the dotted lines in Relay-o1 are to facilitate understanding). Both options adopt the same precision-filtering mechanism; their difference lies only in which context-sensitivity variants are assigned to M.
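As a small illustration (our own, reusing the map-based representation assumed in the previous sketch), the configuration of a Relay-o1 pass can be viewed as the Unity configuration restricted to the methods selected by the corresponding input approach:

import java.util.*;

// Sketch of Relay-o1's per-pass selection (illustrative assumption): pass i analyzes
// context-sensitively only the methods selected by input approach S_i, but assigns each
// of them the variant chosen by Unity (the most precise variant any approach suggested).
class RelayO1Sketch {
    static <V> Map<String, V> passConfig(Set<String> selectedBySi, Map<String, V> unityConfig) {
        Map<String, V> passCfg = new HashMap<>();
        for (String m : selectedBySi) {
            passCfg.put(m, unityConfig.get(m)); // methods omitted here are analyzed under CI in this pass
        }
        return passCfg;
    }

    public static void main(String[] args) {
        // Hypothetical example: Unity assigns 3obj/2obj/2type; S1 selected only m1 and m2.
        Map<String, String> unity = Map.of("m1", "3obj", "m2", "2obj", "m3", "2type");
        System.out.println(passConfig(Set.of("m1", "m2"), unity)); // pass 1: m1=3obj, m2=2obj
    }
}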
After running the first pass, its precision, which is reflected in the points-to set of every variable, is transmitted to the next pass (pass 2). Then, in pass 2, we can use the precision of pass 1, i.e., its points-to information, to soundly filter the points-to sets of all variables while performing the analysis of pass 2. For example, assume a variable v points to three objects under CI, with its points-to set, say pt(v), being {o1, o2, o3}. After pass 1, pt(v) becomes {o1, o3}, where o2 is identified as a spurious object by analyzing M1 context-sensitively. In pass 2, purely analyzing M2 under the contexts shown in Figure 3 cannot recognize that v should not point to o2. However, this imprecision does not matter, as the pt(v) transmitted from pass 1 can be used to filter pt(v) during the analysis of pass 2, i.e., to prevent o2 from being propagated to pt(v). Such filtering is safe because the pointer analysis in pass 1 is sound, so no true objects will be filtered away in Relay (this will be formally proven in Section 4).
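The sketch below shows this filtering step under our own assumptions about the data structures (string-keyed points-to maps and a propagate routine); it illustrates the idea, not the paper's implementation:

import java.util.*;

// Inter-pass filtering sketch: an object is propagated into a variable's points-to set
// in the current pass only if the (sound) previous pass also reported that the variable
// may point to it. Variables with no recorded previous result are not filtered.
class PassFilterSketch {
    private final Map<String, Set<String>> prev; // points-to results from the previous pass

    PassFilterSketch(Map<String, Set<String>> prev) { this.prev = prev; }

    boolean allow(String var, String obj) {
        Set<String> previous = prev.get(var);
        return previous == null || previous.contains(obj);
    }

    // Called wherever the current pass would add obj to pt(var).
    void propagate(Map<String, Set<String>> curr, String var, String obj) {
        if (allow(var, obj)) {
            curr.computeIfAbsent(var, k -> new HashSet<>()).add(obj);
        }
        // Otherwise obj is blocked: the previous pass soundly showed var cannot point to it.
    }

    public static void main(String[] args) {
        // Mirrors the example in the text: pass 1 shrank pt(v) to {o1, o3}.
        PassFilterSketch filter = new PassFilterSketch(Map.of("v", Set.of("o1", "o3")));
        Map<String, Set<String>> pt = new HashMap<>();
        filter.propagate(pt, "v", "o1"); // kept
        filter.propagate(pt, "v", "o2"); // blocked by the pass-1 result
        System.out.println(pt);          // {v=[o1]}
    }
}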
Finally, although uncommon, what if Relay-o1 is not scalable? After all, given a program, we hope to confidently rely on Unity-Relay to yield precise pointer analysis results all the time. In this context, we design Relay-o2 as the last line of defense of Unity-Relay. Relay-o2 employs the same principle as Relay-o1, with the exception that context-sensitivity variants (selected by input approach S𝑖) are not reassigned in each pass, say Pass𝑖 (see Relay-o2 in Figure 3). In other words, Pass𝑖 and the analysis guided by S𝑖, say A𝑖, adopt the same selective context sensitivity. The only difference from running the selective approaches independently is that precision accumulates, using the same filtering mechanism as in Relay-o1. Thus, in Pass𝑖, the points-to set of each variable is never larger than the one in A𝑖, resulting in less data propagation and faster analysis convergence. The only extra cost introduced by Relay-o2 is the filtering process, which is small compared to the overhead of a context-sensitive pointer analysis and is easily offset by the aforementioned efficiency benefit. Hence, we can expect Relay-o2 to scale as long as A𝑖 scales. We will further discuss the scalability of Relay-o2 in Section 4. Note that Relay-o2 is fundamentally different from merely intersecting the points-to results of all pointer analyses guided by the input approaches. In Relay-o2, there is early precision intervention from each analysis pass to the next: the pointer analysis in each pass is filtered, on the fly, by the points-to results produced by the previous passes, likely preventing the computation of more spurious value flows during the analysis. As a result, Relay-o2 is more precise than simply intersecting the final points-to sets after individually running each guided pointer analysis without such precision intervention.

1 x.f = a;
2 b = y.f;
3 ...
4 p.f = a;
5 b = q.f;

Variable   CI       S1       S2       S1&S2    Relay
a          o1       o1       o1       o1       o1
x          o2       o2       o2       o2       o2
y          o2,o3    o3       o2,o3    o3       o3
p          o4       o4       o4       o4       o4
q          o4,o5    o4,o5    o5       o5       o5
b          o1       o1       o1       o1       (empty)

Fig. 4. Example illustrating precision intervention in Relay. Specifically, b is supposed to be a null pointer at run time, and only Relay can analyze it precisely in this case.

We next use the example in Figure 4 to illustrate the benefit of precision intervention. The left side of Figure 4 is an example code snippet, where the value of a is stored into x.f and p.f (lines 1 and 4), and b loads values from y.f and q.f (lines 2 and 5). The table on the right side gives the points-to sets of the variables in the code snippet under different pointer analyses. Column CI gives the points-to sets produced by a context-insensitive pointer analysis, and we consider two selective context-sensitive pointer analyses, S1 and S2, which identify o2 and o4 as spurious objects for variables y and q, respectively. If we simply intersect the points-to results of S1 and S2, we can eliminate the spurious objects for y and q, as shown in column S1&S2. However, b still points to the spurious object o1 under S1&S2, as o1 can flow to b through either line 2 or line 5: both S1 and S2 consider o1 to be valid for b, each for a different reason. Among the analyses in the table, only Relay can block the two flows (through lines 2 and 5) simultaneously. In Relay, with S1 as pass 1 and S2 as pass 2, S2 identifies o4 as spurious for q, so the flow through line 5 is blocked (as p and q are not aliased). Further benefiting from precision intervention, the flow through line 2 is also blocked, as pass 2 also incorporates the result of pass 1, i.e., o2 is identified as spurious for y by S1 (so x and y are not aliased). As a result, o1 is identified as spurious for b in pass 2. (The points-to set of b is empty based on the fragment and example values shown, but other program statements could give it values, orthogonally to the example, e.g., by setting y.f or q.f.)

In the overall Relay mechanism, Relay-o1 and Relay-o2 apply independently to analysis passes and are mixed as necessary for scalability. The two Relay variants can be seen as the best and worst precision options, and there is design space between them for making precision and efficiency trade-offs. In our design, precision is the first priority, so for every Pass𝑖, we first attempt Relay-o1. If Relay-o1 is not scalable, we select Relay-o2 (which scales if the base approach does), and we repeat the same scheme (i.e., first trying Relay-o1, then Relay-o2 if it fails) for Pass𝑖+1, etc., until all 𝑛 passes complete (assuming there are 𝑛 input selective approaches).
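The pass scheduling just described can be sketched as the following control loop, under our own assumptions (a PassRunner that reports whether a pass finished within a scalability budget, and precomputed per-pass configurations for the two options); it illustrates only the described control flow, not the paper's implementation:

import java.util.*;

// Sketch of the Relay pass scheduler: for each pass, try the more precise Relay-o1
// configuration first, and fall back to Relay-o2 if that pass does not scale
// (e.g., exceeds a time budget). Precision from each completed pass is carried forward.
class RelaySchedulerSketch {
    interface PassRunner<C, R> {
        // Runs one pointer-analysis pass with the given configuration, filtered by the
        // result of previous passes; returns empty if the pass did not scale.
        Optional<R> run(C config, Optional<R> previousResult);
    }

    static <C, R> Optional<R> relay(List<C> o1Configs, List<C> o2Configs, PassRunner<C, R> runner) {
        Optional<R> result = Optional.empty(); // precision accumulated so far
        int n = o1Configs.size();              // one pass per input selective approach
        for (int i = 0; i < n; i++) {
            Optional<R> attempt = runner.run(o1Configs.get(i), result);   // try Relay-o1
            if (attempt.isEmpty()) {
                attempt = runner.run(o2Configs.get(i), result);           // fall back to Relay-o2
            }
            if (attempt.isPresent()) {
                result = attempt;              // carry this pass's precision into the next pass
            }
        }
        return result;
    }
}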
The running order of the different passes of Relay affects neither the soundness and precision guarantees of Relay (for both Relay-o1 and Relay-o2) nor the scalability potential of Relay-o2 (as proved and discussed in Section 4.4). In other words, in practice, regardless of the pass ordering, users can rely on Relay to produce an analysis that is more precise than the one (say