6.001 Structure and Interpretation of Computer Programs. Copyright © 2004 by Massachusetts Institute of Technology.

6.001 Notes: Section 4.1

Slide 4.1.1
In this lecture, we are going to take a careful look at the kinds of procedures we can build. We will first go back to look very carefully at the substitution model, to see how the rules for evaluation help us determine the evolution of a process controlled by a procedure. We will then see how different kinds of process evolution can occur, with different demands in terms of computational resources. And we will see how to characterize those differences formally in terms of orders of growth of a process. Finally, we will explore examples of procedures from different classes of growth, helping you begin to relate how choices in procedure design may affect the performance of the actual use of the procedure.

Slide 4.1.2
Now that we have seen two different implementations of the same idea, with different behaviors, we need a way of more formally characterizing why that difference occurs. To do that, we are going to go back to our substitution model. Now we are going to think of our model as a set of rewrite rules, that is, as a set of rules that formally specify how, given an expression of a particular form, to rewrite it in another form.

Slide 4.1.3
So elementary expressions in this viewpoint are just "left alone"; that is, numbers, names of built-in procedures, and lambda expressions are left as is.
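As a quick illustration (these particular expressions are ours, not from the slides), each of the following is already in its simplest form, so the rewrite rules leave it unchanged:

    17                     ; a number rewrites to itself
    +                      ; the name of a built-in procedure is left alone
    (lambda (x) (* x x))   ; a lambda expression is left as is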
Slide 4.1.4
If our expression is a name that we created ourselves, we will rewrite in its place the value that was associated with that name during the definition.

Slide 4.1.5
For the special form if we use the rule we just saw: we evaluate the predicate clause, and based on its value, we either rewrite the if expression as its consequent or as its alternative.

Slide 4.1.6
And finally: combinations. We use the rule that we first evaluate the operator expression to get the procedure, and we evaluate the operands to get the set of arguments. If we have a primitive procedure, we are just going to "do the right thing". Otherwise we are going to replace this entire expression with the body of the compound procedure, with the arguments substituted for their associated formal parameters.
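Putting these rules together, we can trace a complete evaluation. Here is a sketch of how the rewrite rules might unfold for a simple squaring procedure (the square example is ours, not one from the slides):

    (define square (lambda (x) (* x x)))

    (square (+ 3 1))   ; a combination: evaluate operator and operands
    (square 4)         ; (+ 3 1) is a primitive application; it rewrites to 4
    (* 4 4)            ; square names a compound procedure: rewrite the
                       ;   expression as its body, with 4 substituted for x
    16                 ; * is primitive, so we just "do the right thing"

Each line is a rewrite of the line above it, using exactly the rules listed on these slides.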
Slide 4.1.7
Given that model, we can more formally capture the evolution of a process. We can talk about the order of growth of a process as these rewrite rules evolve. This measures how much of a particular resource a process is going to take as these rules evolve, where typically we measure either space or time as the resource. More formally, let n be a parameter that measures the size of the problem of interest. In the case of factorial, this would be the size of the input argument. We let R(n) denote the amount of resources we will need to solve a problem of this size, where, as noted, the resources are usually space and time. We are interested in characterizing R(n) for large values of n, that is, in the asymptotic limit. We say that R(n) has order of growth Θ(f(n)) if we can find a function f(n) that is roughly of the same order as the needed resource; more formally, R(n) has order of growth Θ(f(n)) if there are constants k1 and k2 such that k1 f(n) <= R(n) <= k2 f(n) for large n. For example, a process that takes 2n steps has order of growth Θ(n), since we can take k1 = 1 and k2 = 2.

Slide 4.1.8
In more common terms, this says that when measuring the amount of space we need, we want to find a function that measures the number of deferred operations, as a function of the size of the problem, up to a constant factor. For measuring the amount of time, we want to find a function that measures the number of basic or primitive steps that the rewrite rules go through, again as a function of the size of the problem.

Slide 4.1.9
So let's use our rewrite rule idea, together with this notion of measuring the amount of resource we need as an order of growth in the size of the problem. Here is our recursive factorial procedure:

    (define fact
      (lambda (n)
        (if (= n 1)
            1
            (* n (fact (- n 1))))))

Slide 4.1.10
...and here is a partial trace of fact using those rewrite rules:

    (fact 4)
    (if (= 4 1) 1 (* 4 (fact (- 4 1))))
    (* 4 (fact 3))
    (* 4 (* 3 (fact 2)))
    (* 4 (* 3 (* 2 (fact 1))))
    (* 4 (* 3 (* 2 1)))
    (* 4 (* 3 2))
    (* 4 6)
    24

We start with (fact 4). That reduces to evaluating an if expression. Since the predicate is not true, this reduces to evaluating the alternative statement, which is a combination of a multiplication and another evaluation of fact. Notice how the successive lines of the trace capture the evolution of the rewrite rules. Note how (fact 4) reduces to a deferred operation and an evaluation of (fact 3), and this further reduces to another deferred operation and a subsequent call to fact. Notice the shape of this process: it grows out with a set of deferred operations until it reaches a base case, and then it contracts back in, completing each deferred operation. Using this, we can see that the amount of space grows linearly with the size of the argument. That is, with each increase in the argument size, I add a constant amount of space requirement. In terms of the number of operations, we can see that we basically need twice as many steps as the size of the problem, one to expand out and one to reduce back down. This is also linear growth, since the constant 2 does not change with problem size.
Slide 4.1.11
And let's compare that to our iterative factorial. Here is our code again:

    (define ifact-helper
      (lambda (product count n)
        (if (> count n)
            product
            (ifact-helper (* product count) (+ count 1) n))))

    (define ifact
      (lambda (n)
        (ifact-helper 1 1 n)))

...and here is a partial trace of the rewrite evolution of that process:

    (ifact 4)
    (ifact-helper 1 1 4)
    (if (> 1 4) 1 (ifact-helper (* 1 1) (+ 1 1) 4))
    (ifact-helper 1 2 4)
    (if (> 2 4) 1 (ifact-helper (* 1 2) (+ 2 1) 4))
    (ifact-helper 2 3 4)
    (if (> 3 4) 2 (ifact-helper (* 2 3) (+ 3 1) 4))
    (ifact-helper 6 4 4)
    (if (> 4 4) 6 (ifact-helper (* 6 4) (+ 4 1) 4))
    (ifact-helper 24 5 4)
    (if (> 5 4) 24 (ifact-helper (* 24 5) (+ 5 1) 4))
    24

Here we see the different shape of the process, and our order of growth analysis captures that difference. In particular, there are no deferred operations, and you can see that the maximum amount of space I need at any stage, that is, the maximum amount of space used by any rewrite, is independent of the size of the problem. We say that it is constant, as it does not grow with n. In terms of time, there is basically one operation for each increment in the argument, so this is again a linear order of growth.

Slide 4.1.12
So we can formally capture this. Fact has linear growth in space, written Θ(n), because the maximum amount of space needed by any rewrite stage is a linear multiple of the size of the argument. It also has linear growth in time, Θ(n), because, as we saw, it takes a number of basic steps that is also a linear multiple of the size of the argument.

Slide 4.1.13
On the other hand, our iterative factorial, ifact, has no deferred operations, and is constant in space, written Θ(1), while, as we argued, its time order of growth is linear, Θ(n). Notice that the fact that the recursive version took 2n steps and the iterative version took n steps doesn't matter in terms of order of growth. Both are said to be linear. Thus we see that we can formally characterize the difference between these two processes, which have different shapes of evolution.
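Both versions compute the same function, of course; it is only the shapes of their processes that differ. For instance, using the definitions reconstructed above:

    (fact 6)    ;=> 720
    (ifact 6)   ;=> 720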
Slide 4.1.14
So why have we done this? The primary goal is to allow you to start to recognize different kinds of behaviors, and the associated procedure forms that give rise to those behaviors. This will start to allow you to work backwards in designing a procedure, by enabling you to visualize the consequences of performing a computation in different ways.

6.001 Notes: Section 4.2

Slide 4.2.1
Having seen two different kinds of processes, one linear and one constant, we want to fill out our repertoire of processes by looking at other kinds of processes. The next one is an example of an exponential process, and deals with a classic function called Fibonacci. Its definition is that if its argument is 0, its value is 0; if its argument is 1, its value is 1; and for all other positive integer arguments, its value is the sum of its values for the two preceding arguments. We would like to write a procedure to compute Fibonacci, and in particular to see how it gives rise to a different kind of behavior.

Slide 4.2.2
To solve this problem, let's use our tool of wishful thinking. Here, that wishful thinking says: let's assume that, given an argument n, we know how to solve Fibonacci for any smaller sized argument. Using that idea, we can then work out a solution to the problem. With this in hand, it is clear that the solution to the general problem is just to solve two smaller sized problems, then add the results together. Note that in this case we are using wishful thinking twice, not once as in our previous examples. Here is a procedure that captures this idea:

    (define fib
      (lambda (n)
        (cond ((= n 0) 0)
              ((= n 1) 1)
              (else (+ (fib (- n 1)) (fib (- n 2)))))))

First, we introduce a new expression, called a cond expression. Cond uses the following rules of evaluation. The cond consists of a set of clauses, each of which has within it a predicate clause and one or more subsequent expressions:

    (cond (<predicate> <consequent> ... <consequent>)
          ...
          (else <consequent> ... <consequent>))

Cond proceeds by first evaluating the predicate of the first clause, in this case (= n 0). If it is true, then we evaluate in turn each of the other expressions in this clause, returning the value of the last one as the value of the whole cond. In the case of the first clause of this cond, that is the expression 0. If the predicate of the first clause is false, we move on to the next clause and repeat the process.
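To preview the behavior this definition produces, here is a short hand-worked rewrite trace for a small argument, eliding the intermediate cond and arithmetic steps (this illustration is ours, not from the slides):

    (fib 3)
    (+ (fib 2) (fib 1))
    (+ (+ (fib 1) (fib 0)) (fib 1))
    (+ (+ 1 0) 1)
    2

Notice that each call to fib with an argument larger than 1 spawns two further calls to fib; this branching is the source of the exponential behavior we will analyze shortly.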