CHAPTER ONE §1.2

careful scrutiny required for proper analysis often leads to better and more efficient implementation on particular computers. Analysis requires a far more complete understanding of an algorithm that can inform the process of producing a working implementation. Indeed, when the results of analytic and empirical studies agree, we become strongly convinced of the validity of the algorithm as well as of the correctness of the process of analysis.

Some algorithms are worth analyzing because their analyses can add to the body of mathematical tools available. Such algorithms may be of limited practical interest but may have properties similar to algorithms of practical interest, so that understanding them may help us understand more important methods in the future. Other algorithms (some of intense practical interest, some of little or no such value) have a complex performance structure with properties of independent mathematical interest. The dynamic element brought to combinatorial problems by the analysis of algorithms leads to challenging, interesting mathematical problems that extend the reach of classical combinatorics to help shed light on properties of computer programs.

To bring these ideas into clearer focus, we next consider in detail some classical results, first from the viewpoint of the theory of algorithms and then from the scientific viewpoint that we develop in this book. As a running example to illustrate the different perspectives, we study sorting algorithms, which rearrange a list to put it in numerical, alphabetic, or other order. Sorting is an important practical problem that remains the object of widespread study because it plays a central role in many applications.
1.2 Theory of Algorithms. The prime goal of the theory of algorithms is to classify algorithms according to their performance characteristics. The following mathematical notations are convenient for doing so:

Definition. Given a function f(N),

O(f(N)) denotes the set of all g(N) such that |g(N)/f(N)| is bounded from above as N → ∞.

Ω(f(N)) denotes the set of all g(N) such that |g(N)/f(N)| is bounded from below by a (strictly) positive number as N → ∞.

Θ(f(N)) denotes the set of all g(N) such that |g(N)/f(N)| is bounded from both above and below as N → ∞.

These notations, adapted from classical analysis, were advocated for use in the analysis of algorithms in a paper by Knuth in 1976 [21]. They have come
into widespread use for making mathematical statements about bounds on the performance of algorithms. The O-notation provides a way to express an upper bound; the Ω-notation provides a way to express a lower bound; and the Θ-notation provides a way to express matching upper and lower bounds.

In mathematics, the most common use of the O-notation is in the context of asymptotic series. We will consider this usage in detail in Chapter 4. In the theory of algorithms, the O-notation is typically used for three purposes: to hide constants that might be irrelevant or inconvenient to compute, to express a relatively small "error" term in an expression describing the running time of an algorithm, and to bound the worst case. Nowadays, the Ω- and Θ-notations are directly associated with the theory of algorithms, though similar notations are used in mathematics (see [21]).

Since constant factors are being ignored, derivation of mathematical results using these notations is simpler than if more precise answers are sought. For example, both the "natural" logarithm ln N ≡ log_e N and the "binary" logarithm lg N ≡ log_2 N often arise, but they are related by a constant factor, so we can refer to either as being O(log N) if we are not interested in more precision. More to the point, we might say that the running time of an algorithm is Θ(N log N) seconds just based on an analysis of the frequency of execution of fundamental operations and an assumption that each operation takes a constant number of seconds on a given computer, without working out the precise value of the constant.

Exercise 1.1 Show that f(N) = N lg N + O(N) implies that f(N) = Θ(N log N).
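As a worked instance of these definitions (our illustrative example, not from the text): taking f(N) = N², the function g(N) = 3N² + 5N lies in all three classes, since the ratio of the two functions is trapped between positive constants:

```latex
\[
\frac{|g(N)|}{f(N)} \;=\; \frac{3N^2 + 5N}{N^2} \;=\; 3 + \frac{5}{N},
\qquad
3 \;\le\; 3 + \frac{5}{N} \;\le\; 8 \quad \text{for } N \ge 1,
\]
% Bounded above: g(N) \in O(N^2).
% Bounded below by a positive number: g(N) \in \Omega(N^2).
% Bounded both ways: g(N) \in \Theta(N^2).
```

The lower-order term 5N changes the ratio but not the bounds, which is precisely why such terms can be discarded in this style of analysis.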
As an illustration of the use of these notations to study the performance characteristics of algorithms, we consider methods for sorting a set of numbers in an array. The input is the numbers in the array, in arbitrary and unknown order; the output is the same numbers in the array, rearranged in ascending order. This is a well-studied and fundamental problem: we will consider an algorithm for solving it, then show that algorithm to be "optimal" in a precise technical sense.

First, we will show that it is possible to solve the sorting problem efficiently, using a well-known recursive algorithm called mergesort. Mergesort and nearly all of the algorithms treated in this book are described in detail in Sedgewick and Wayne [30], so we give only a brief description here. Readers interested in further details on variants of the algorithms, implementations, and applications are also encouraged to consult the books by Cormen, Leiserson, Rivest, and Stein [6], Gonnet and Baeza-Yates [11], Knuth [17][18][19][20], Sedgewick [26], and other sources.

Mergesort divides the array in the middle, sorts the two halves (recursively), and then merges the resulting sorted halves together to produce the sorted result, as shown in the Java implementation in Program 1.1. Mergesort is prototypical of the well-known divide-and-conquer algorithm design paradigm, where a problem is solved by (recursively) solving smaller subproblems and using the solutions to solve the original problem. We will analyze a number of such algorithms in this book. The recursive structure of algorithms like mergesort leads immediately to mathematical descriptions of their performance characteristics.

To accomplish the merge, Program 1.1 uses two auxiliary arrays b and c to hold the subarrays (for the sake of efficiency, it is best to declare these arrays external to the recursive method). Invoking this method with the call mergesort(a, 0, N-1) will sort the array a[0...N-1]. After the recursive

   private void mergesort(int[] a, int lo, int hi)
   {
      if (hi <= lo) return;
      int mid = lo + (hi - lo) / 2;
      mergesort(a, lo, mid);
      mergesort(a, mid + 1, hi);
      for (int k = lo; k <= mid; k++)
         b[k-lo] = a[k];
      for (int k = mid+1; k <= hi; k++)
         c[k-mid-1] = a[k];
      b[mid-lo+1] = INFTY; c[hi-mid] = INFTY;
      int i = 0, j = 0;
      for (int k = lo; k <= hi; k++)
         if (c[j] < b[i]) a[k] = c[j++];
         else a[k] = b[i++];
   }

   Program 1.1 Mergesort
calls, the two halves of the array are sorted. Then we move the first half of a[] to an auxiliary array b[] and the second half of a[] to another auxiliary array c[]. We add a "sentinel" INFTY that is assumed to be larger than all the elements to the end of each of the auxiliary arrays, to help accomplish the task of moving the remainder of one of the auxiliary arrays back to a[] after the other one has been exhausted. With these preparations, the merge is easily accomplished: for each k, move the smaller of the elements b[i] and c[j] to a[k], then increment k and i or j accordingly.

Exercise 1.2 In some situations, defining a sentinel value may be inconvenient or impractical. Implement a mergesort that avoids doing so (see Sedgewick [26] for various strategies).

Exercise 1.3 Implement a mergesort that divides the array into three equal parts, sorts them, and does a three-way merge. Empirically compare its running time with standard mergesort.

In the present context, mergesort is significant because it is guaranteed to be as efficient as any sorting method can be. To make this claim more precise, we begin by analyzing the dominant factor in the running time of mergesort, the number of compares that it uses.

Theorem 1.1 (Mergesort compares). Mergesort uses N lg N + O(N) compares to sort an array of N elements.

Proof. If C_N is the number of compares that Program 1.1 uses to sort N elements, then the number of compares to sort the first half is C_⌊N/2⌋, the number of compares to sort the second half is C_⌈N/2⌉, and the number of compares for the merge is N (one for each value of the index k). In other words, the number of compares for mergesort is precisely described by the recurrence relation

    C_N = C_⌊N/2⌋ + C_⌈N/2⌉ + N   for N ≥ 2 with C_1 = 0.   (1)

To get an indication for the nature of the solution to this recurrence, we consider the case when N is a power of 2:

    C_{2^n} = 2C_{2^{n−1}} + 2^n   for n ≥ 1 with C_1 = 0.

Dividing both sides of this equation by 2^n, we find that

    C_{2^n}/2^n = C_{2^{n−1}}/2^{n−1} + 1
                = C_{2^{n−2}}/2^{n−2} + 2
                = C_{2^{n−3}}/2^{n−3} + 3
                  ...
                = C_{2^0}/2^0 + n
                = n.
This proves that C_N = N lg N when N = 2^n; the theorem for general N follows from (1) by induction. The exact solution turns out to be rather complicated, depending on properties of the binary representation of N. In Chapter 2 we will examine how to solve such recurrences in detail.

Exercise 1.4 Develop a recurrence describing the quantity C_{N+1} − C_N and use this to prove that

    C_N = Σ_{1≤k<N} (⌊lg k⌋ + 2).

Exercise 1.5 Prove that C_N = N⌈lg N⌉ + N − 2^⌈lg N⌉.

Exercise 1.6 Analyze the number of compares used by the three-way mergesort proposed in Exercise 1.3.

For most computers, the relative costs of the elementary operations used in Program 1.1 will be related by a constant factor, as they are all integer multiples of the cost of a basic instruction cycle. Furthermore, the total running time of the program will be within a constant factor of the number of compares. Therefore, a reasonable hypothesis is that the running time of mergesort will be within a constant factor of N lg N.

From a theoretical standpoint, mergesort demonstrates that N log N is an "upper bound" on the intrinsic difficulty of the sorting problem:

    There exists an algorithm that can sort any N-element file in time proportional to N log N.

A full proof of this requires a careful model of the computer to be used in terms of the operations involved and the time they take, but the result holds under rather generous assumptions. We say that the "time complexity of sorting is O(N log N)."

Exercise 1.7 Assume that the running time of mergesort is cN lg N + dN, where c and d are machine-dependent constants. Show that if we implement the program on a particular machine and observe a running time t_N for some value of N, then we can accurately estimate the running time for 2N by 2t_N(1 + 1/lg N), independent of the machine.
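The prediction formula of Exercise 1.7 is easy to illustrate numerically. In the sketch below, the constants c and d are arbitrary placeholders standing in for the machine-dependent constants (our choice, purely for illustration); note that the prediction 2t_N(1 + 1/lg N) uses only the "observed" value t_N, never c or d themselves:

```java
public class DoublingPrediction {
    // Model from Exercise 1.7: T(N) = c*N*lg N + d*N.
    // The values of c and d are illustrative placeholders only.
    static final double c = 3e-9, d = 7e-9;

    static double T(double N) {
        return c * N * (Math.log(N) / Math.log(2)) + d * N;
    }

    public static void main(String[] args) {
        for (int lgN = 10; lgN <= 24; lgN += 7) {
            double N = Math.pow(2, lgN);
            // Predict the running time at 2N from the time at N alone.
            double predicted = 2 * T(N) * (1 + 1.0 / lgN);
            double actual = T(2 * N);
            System.out.printf("N = 2^%d: relative error = %.5f%n",
                lgN, Math.abs(predicted - actual) / actual);
        }
    }
}
```

The relative error of the prediction shrinks as N grows (it comes from a 2dN/lg N term that the formula cannot see), which is why the estimate is accurate "independent of the machine."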
Exercise 1.8 Implement mergesort on one or more computers, observe the running time for N = 1,000,000, and predict the running time for N = 10,000,000 as in the previous exercise. Then observe the running time for N = 10,000,000 and calculate the percentage accuracy of the prediction.
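An experiment such as Exercise 1.8 requires a complete program, whereas Program 1.1 assumes that the auxiliary arrays b and c and the sentinel INFTY are declared externally. One way they might be wired up, as a minimal self-contained sketch (the class name, the array sizing, and the choice INFTY = Integer.MAX_VALUE are our assumptions, not from the text; with this choice the input must not itself contain Integer.MAX_VALUE):

```java
public class MergesortDemo {
    // Sentinel assumed larger than every input element (our assumption).
    static final int INFTY = Integer.MAX_VALUE;
    // Auxiliary arrays declared external to the recursive method,
    // each sized to hold one half of the input plus a sentinel.
    static int[] b, c;

    static void mergesort(int[] a, int lo, int hi) {
        if (hi <= lo) return;
        int mid = lo + (hi - lo) / 2;
        mergesort(a, lo, mid);
        mergesort(a, mid + 1, hi);
        for (int k = lo; k <= mid; k++) b[k - lo] = a[k];
        for (int k = mid + 1; k <= hi; k++) c[k - mid - 1] = a[k];
        b[mid - lo + 1] = INFTY;        // sentinel ends the first half
        c[hi - mid] = INFTY;            // sentinel ends the second half
        int i = 0, j = 0;
        for (int k = lo; k <= hi; k++)  // merge: take the smaller head element
            if (c[j] < b[i]) a[k] = c[j++];
            else             a[k] = b[i++];
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
        int n = a.length;
        b = new int[n / 2 + 2];         // ceil(n/2) elements + sentinel
        c = new int[n / 2 + 2];
        mergesort(a, 0, n - 1);
        System.out.println(java.util.Arrays.toString(a));
        // prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```

Timing this at N = 1,000,000 and N = 10,000,000, as the exercise asks, then reduces to filling the array with random values and wrapping the call in a stopwatch.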
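The recurrence (1) from the proof of Theorem 1.1 can also be evaluated directly, which gives a quick numerical check on the power-of-two solution C_{2^n} = n·2^n derived above (an illustrative tabulation of ours, not part of the text):

```java
public class MergesortRecurrence {
    public static void main(String[] args) {
        // Evaluate recurrence (1): C_N = C_{floor(N/2)} + C_{ceil(N/2)} + N,
        // with C_1 = 0, then compare against n*2^n at powers of two.
        int max = 1 << 16;
        long[] C = new long[max + 1];
        C[1] = 0;
        for (int N = 2; N <= max; N++)
            C[N] = C[N / 2] + C[(N + 1) / 2] + N;   // floor and ceiling halves

        for (int n = 1; n <= 16; n++) {
            int N = 1 << n;
            System.out.println("C_" + N + " = " + C[N]
                + "  (n*2^n = " + ((long) n << n) + ")");
        }
    }
}
```

The same table, evaluated at values of N that are not powers of two, hints at the more complicated exact behavior, which depends on the binary representation of N and is taken up in Chapter 2.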