1.4 Average-Case Analysis. The mathematical techniques that we consider in this book are not just applicable to solving problems related to the performance of algorithms, but also to mathematical models for all manner of scientific applications, from genomics to statistical physics. Accordingly, we often consider structures and techniques that are broadly applicable. Still, our prime motivation is to consider mathematical tools that we need in order to be able to make precise statements about resource usage of important algorithms in practical applications.

Our focus is on average-case analysis of algorithms: we formulate a reasonable input model and analyze the expected running time of a program given an input drawn from that model. This approach is effective for two primary reasons.

The first reason that average-case analysis is important and effective in modern applications is that straightforward models of randomness are often extremely accurate. The following are just a few representative examples from sorting applications:

• Sorting is a fundamental process in cryptanalysis, where the adversary has gone to great lengths to make the data indistinguishable from random data.
• Commercial data processing systems routinely sort huge files where keys typically are account numbers or other identification numbers that are well modeled by uniformly random numbers in an appropriate range.
• Implementations of computer networks depend on sorts that again involve keys that are well modeled by random ones.
• Sorting is widely used in computational biology, where significant deviations from randomness are cause for further investigation by scientists trying to understand fundamental biological and physical processes.

As these examples indicate, simple models of randomness are effective, not just for sorting applications, but also for a wide variety of uses of fundamental algorithms in practice. Broadly speaking, when large data sets are created by humans, they typically are based on arbitrary choices that are well modeled by random ones. Random models also are often effective when working with scientific data. We might interpret Einstein's oft-repeated admonition that "God does not play dice" in this context as meaning that random models are effective, because if we discover significant deviations from randomness, we have learned something significant about the natural world.
The second reason that average-case analysis is important and effective in modern applications is that we can often manage to inject randomness into a problem instance so that it appears to the algorithm (and to the analyst) to be random. This is an effective approach to developing efficient algorithms with predictable performance, which are known as randomized algorithms. M. O. Rabin [25] was among the first to articulate this approach, and it has been developed by many other researchers in the years since. The book by Motwani and Raghavan [23] is a thorough introduction to the topic.

Thus, we begin by analyzing random models, and we typically start with the challenge of computing the mean: the average value of some quantity of interest for N instances drawn at random. Now, elementary probability theory gives a number of different (though closely related) ways to compute the average value of a quantity. In this book, it will be convenient for us to explicitly identify two different approaches to doing so.

Distributional. Let $\Pi_N$ be the number of possible inputs of size $N$ and $\Pi_{Nk}$ be the number of inputs of size $N$ that cause the algorithm to have cost $k$, so that $\Pi_N = \sum_k \Pi_{Nk}$. Then the probability that the cost is $k$ is $\Pi_{Nk}/\Pi_N$ and the expected cost is
$$\frac{1}{\Pi_N}\sum_k k\,\Pi_{Nk}.$$
The analysis depends on "counting." How many inputs are there of size $N$ and how many inputs of size $N$ cause the algorithm to have cost $k$? These are the steps to compute the probability that the cost is $k$, so this approach is perhaps the most direct from elementary probability theory.

Cumulative. Let $\Sigma_N$ be the total (or cumulated) cost of the algorithm on all inputs of size $N$. (That is, $\Sigma_N = \sum_k k\,\Pi_{Nk}$, but the point is that it is not necessary to compute $\Sigma_N$ in that way.) Then the average cost is simply $\Sigma_N/\Pi_N$. The analysis depends on a less specific counting problem: what is the total cost of the algorithm, on all inputs? We will be using general tools that make this approach very attractive.
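To make the two approaches concrete, here is a small self-contained sketch (in the spirit of, but not taken from, the book's programs) that enumerates all N! inputs for a small N and computes the average of an assumed cost measure both ways: distributionally, by tabulating the counts $\Pi_{Nk}$, and cumulatively, by accumulating the total $\Sigma_N$. The cost measure (comparisons made by insertion sort) and all names in the sketch are ours, chosen only for illustration.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class AverageCost
    {
        // Illustrative cost measure: comparisons made by insertion sort on input a.
        static int cost(int[] a)
        {
            int[] b = a.clone();
            int compares = 0;
            for (int i = 1; i < b.length; i++)
            {
                int v = b[i], j = i;
                while (j > 0)
                {
                    compares++;                     // one comparison of v with b[j-1]
                    if (b[j-1] <= v) break;
                    b[j] = b[j-1]; j--;
                }
                b[j] = v;
            }
            return compares;
        }

        // Generate all N! permutations of {0, 1, ..., N-1} by recursive swapping.
        static void permutations(int[] a, int k, List<int[]> out)
        {
            if (k == a.length) { out.add(a.clone()); return; }
            for (int i = k; i < a.length; i++)
            {
                int t = a[k]; a[k] = a[i]; a[i] = t;
                permutations(a, k+1, out);
                t = a[k]; a[k] = a[i]; a[i] = t;
            }
        }

        public static void main(String[] args)
        {
            int N = 6;
            int[] id = new int[N];
            for (int i = 0; i < N; i++) id[i] = i;
            List<int[]> inputs = new ArrayList<int[]>();
            permutations(id, 0, inputs);
            long piN = inputs.size();               // Pi_N = N!

            Map<Integer, Long> piNk = new HashMap<Integer, Long>();  // k -> Pi_{Nk}
            long sigmaN = 0;                        // Sigma_N: total cost over all inputs
            for (int[] p : inputs)
            {
                int k = cost(p);
                piNk.merge(k, 1L, Long::sum);       // distributional bookkeeping
                sigmaN += k;                        // cumulative bookkeeping
            }

            double distributional = 0;              // sum over k of k * Pi_{Nk} / Pi_N
            for (Map.Entry<Integer, Long> e : piNk.entrySet())
                distributional += e.getKey() * (double) e.getValue() / piN;
            double cumulative = (double) sigmaN / piN;

            System.out.println("distributional average = " + distributional);
            System.out.println("cumulative average     = " + cumulative);
        }
    }

For any cost measure the two computations necessarily agree; the point of distinguishing them is that the cumulative quantity $\Sigma_N$ can often be determined without ever tabulating the full distribution.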
The distributional approach gives complete information, which can be used directly to compute the standard deviation and other moments. Indirect (often simpler) methods are also available for computing moments when using the cumulative approach, as we will see. In this book, we consider both approaches, though our tendency will be toward the cumulative method, which ultimately allows us to consider the analysis of algorithms in terms of combinatorial properties of basic data structures.

Many algorithms solve a problem by recursively solving smaller subproblems and are thus amenable to the derivation of a recurrence relationship that the average cost or the total cost must satisfy. A direct derivation of a recurrence from the algorithm is often a natural way to proceed, as shown in the example in the next section.

No matter how they are derived, we are interested in average-case results because, in the large number of situations where random input is a reasonable model, an accurate analysis can help us:

• Compare different algorithms for the same task.
• Predict time and space requirements for specific applications.
• Compare different computers that are to run the same algorithm.
• Adjust algorithm parameters to optimize performance.

The average-case results can be compared with empirical data to validate the implementation, the model, and the analysis. The end goal is to gain enough confidence in these that they can be used to predict how the algorithm will perform under whatever circumstances present themselves in particular applications. If we wish to evaluate the possible impact of a new machine architecture on the performance of an important algorithm, we can do so through analysis, perhaps before the new architecture comes into existence. The success of this approach has been validated over the past several decades: the sorting algorithms that we consider in the section were first analyzed more than 50 years ago, and those analytic results are still useful in helping us evaluate their performance on today's computers.
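As a small illustration of such validation (a sketch under our own assumptions, not one of the book's experiments), the following program runs a doubling experiment against the classical result that insertion sort uses about N^2/4 comparisons on the average for randomly ordered distinct keys: as N doubles, the observed ratio of average costs should approach 4. All names and parameter values are ours.

    import java.util.Random;

    public class DoublingTest
    {
        // Comparisons made by insertion sort on array a (a is sorted as a side effect).
        static long compares(int[] a)
        {
            long c = 0;
            for (int i = 1; i < a.length; i++)
            {
                int v = a[i], j = i;
                while (j > 0)
                {
                    c++;                             // one comparison of v with a[j-1]
                    if (a[j-1] <= v) break;
                    a[j] = a[j-1]; j--;
                }
                a[j] = v;
            }
            return c;
        }

        // A uniformly random permutation of {0, ..., N-1} (Fisher-Yates shuffle).
        static int[] randomPermutation(int N, Random rnd)
        {
            int[] a = new int[N];
            for (int i = 0; i < N; i++) a[i] = i;
            for (int i = N-1; i > 0; i--)
            {
                int r = rnd.nextInt(i+1);
                int t = a[i]; a[i] = a[r]; a[r] = t;
            }
            return a;
        }

        public static void main(String[] args)
        {
            Random rnd = new Random();
            int trials = 100;
            double previous = 0;
            for (int N = 250; N <= 4000; N *= 2)
            {
                long total = 0;
                for (int t = 0; t < trials; t++)
                    total += compares(randomPermutation(N, rnd));
                double empirical = (double) total / trials;     // observed average cost
                double predicted = N * (double) N / 4;          // leading term of the average
                System.out.printf("N = %5d  empirical = %12.1f  N^2/4 = %12.1f  ratio = %s%n",
                    N, empirical, predicted,
                    previous == 0 ? "-" : String.format("%.2f", empirical / previous));
                previous = empirical;
            }
        }
    }

Agreement between the empirical ratios and the prediction lends confidence in the implementation, the input model, and the analysis alike; the same kind of experiment applies to the quicksort analysis of the next section.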
1.5 Example: Analysis of Quicksort. To illustrate the basic method just sketched, we examine next a particular algorithm of considerable importance, the quicksort sorting method. This method was invented in 1962 by C. A. R. Hoare, whose paper [15] is an early and outstanding example in the analysis of algorithms. The analysis is also covered in great detail in Sedgewick [27] (see also [29]); we give highlights here. It is worthwhile to study this analysis in detail not just because this sorting method is widely used and the analytic results are directly relevant to practice, but also because the analysis itself is illustrative of many things that we will encounter later in the book. In particular, it turns out that the same analysis applies to the study of basic properties of tree structures, which are of broad interest and applicability. More generally, our analysis of quicksort is indicative of how we go about analyzing a broad class of recursive programs.

Program 1.2 is an implementation of quicksort in Java. It is a recursive program that sorts the numbers in an array by partitioning it into two independent (smaller) parts, then sorting those parts. Obviously, the recursion should terminate when empty subarrays are encountered, but our implementation also stops with subarrays of size 1. This detail might seem inconsequential at first blush, but, as we will see, the very nature of recursion ensures that the program will be used for a large number of small files, and substantial performance gains can be achieved with simple improvements of this sort (one such improvement is sketched just after Program 1.2).

The partitioning process puts the element that was in the last position in the array (the partitioning element) into its correct position, with all smaller elements before it and all larger elements after it. The program accomplishes this by maintaining two pointers: one scanning from the left, one from the right. The left pointer is incremented until an element larger than the partitioning element is found; the right pointer is decremented until an element smaller than the partitioning element is found. These two elements are exchanged, and the process continues until the pointers meet, which defines where the partitioning element is put. After partitioning, the program exchanges a[i] with a[hi] to put the partitioning element into position. The call quicksort(a, 0, N-1) will sort the array.

    private void quicksort(int[] a, int lo, int hi)
    {
        if (hi <= lo) return;
        int i = lo-1, j = hi;
        int t, v = a[hi];                           // v is the partitioning element
        while (true)
        {
            while (a[++i] < v) ;                    // scan from the left for an element >= v
            while (v < a[--j]) if (j == lo) break;  // scan from the right for an element <= v
            if (i >= j) break;                      // stop when the pointers cross
            t = a[i]; a[i] = a[j]; a[j] = t;        // exchange a[i] with a[j]
        }
        t = a[i]; a[i] = a[hi]; a[hi] = t;          // put the partitioning element into position
        quicksort(a, lo, i-1);                      // sort the left subarray
        quicksort(a, i+1, hi);                      // sort the right subarray
    }

    Program 1.2 Quicksort
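One simple improvement of the kind just mentioned, widely used in practice but not part of Program 1.2 itself, is to cut the recursion off for small subarrays and finish them with insertion sort. The following sketch, written in the same form as Program 1.2 (methods to be placed in the same class), shows the idea; the cutoff value and the method names are ours, and good cutoff values are best determined empirically.

    private static final int CUTOFF = 10;           // illustrative value; tune empirically

    private void quicksortWithCutoff(int[] a, int lo, int hi)
    {
        if (hi - lo < CUTOFF)                       // small subarray: use insertion sort
        {
            insertionSort(a, lo, hi);
            return;
        }
        int i = lo-1, j = hi;
        int t, v = a[hi];
        while (true)                                // same partitioning as Program 1.2
        {
            while (a[++i] < v) ;
            while (v < a[--j]) if (j == lo) break;
            if (i >= j) break;
            t = a[i]; a[i] = a[j]; a[j] = t;
        }
        t = a[i]; a[i] = a[hi]; a[hi] = t;
        quicksortWithCutoff(a, lo, i-1);
        quicksortWithCutoff(a, i+1, hi);
    }

    private void insertionSort(int[] a, int lo, int hi)
    {
        for (int k = lo+1; k <= hi; k++)            // insert a[k] into a[lo..k-1]
        {
            int v = a[k], j = k;
            while (j > lo && a[j-1] > v) { a[j] = a[j-1]; j--; }
            a[j] = v;
        }
    }

The analysis that follows is for Program 1.2 as given.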
There are several ways to implement the general recursive strategy just outlined; the implementation in Program 1.2 is taken from Sedgewick and Wayne [30] (see also [27]). For the purposes of analysis, we will be assuming that the array a contains randomly ordered, distinct numbers, but note that this code works properly for all inputs, including equal numbers. It is also possible to study this program under perhaps more realistic models allowing equal numbers (see [28]), long string keys (see [4]), and many other situations.

Once we have an implementation, the first step in the analysis is to estimate the resource requirements of individual instructions for this program. This depends on characteristics of a particular computer, so we sketch the details. For example, the "inner loop" instruction

    while (a[++i] < v) ;

might translate, on a typical computer, to assembly language instructions such as the following:

    LOOP  INC I,1       # increment i
          CMP V,A(I)    # compare v with A(i)
          BL  LOOP      # branch if less

To start, we might say that one iteration of this loop might require four time units (one for each memory reference). On modern computers, the precise costs are more complicated to evaluate because of caching, pipelines, and other effects. The other instruction in the inner loop (that decrements j) is similar, but involves an extra test of whether j goes out of bounds. Since this extra test can be removed via sentinels (see [26]), we will ignore the extra complication it presents.

The next step in the analysis is to assign variable names to the frequency of execution of the instructions in the program. Normally there are only a few true variables involved: the frequencies of execution of all the instructions can be expressed in terms of these few. Also, it is desirable to relate the variables to