6.001 Structure and Interpretation of Computer Programs. Copyright © 2004 by Massachusetts Institute of Technology.

...clause of the cond and repeat the process. This continues for however many clauses are contained in the cond, until either a true predicate is reached or we reach a clause with the special keyword else. In this latter case, that predicate is treated as true, and the subsequent expressions are evaluated.

Given that form for cond, we can see how our fibonacci procedure captures the idea we expressed. We keep recursively solving smaller-sized problems until we get to a base case. Notice that here we have two base cases, not one. Also notice that while this is a recursive procedure, it is different from our earlier ones: here there are two recursive calls to the procedure in the body, rather than just one. Our question is whether this leads to a different kind of behavior, or a different order of growth.

Slide 4.2.3: A tree recursion

This does give rise to a different kind of behavior, which we can easily see with the illustrated diagram. To solve a problem of size 4, notice that we have to solve two problems, one of size 3 and one of size 2, and each of these requires solving two smaller problems, and so on. This gives rise to a kind of tree of things we have to do, and each recursive call gives rise to two subproblems of smaller size. This leads to a different order of growth.

Slide 4.2.4: Orders of growth for Fibonacci

To measure the order of growth, let's let t of n denote the number of time steps we need to solve a problem of size n. From our tree, we see that to do this we need to solve a problem of size n-1 and a problem of size n-2. We could actually work out a detailed analysis of this relationship, but for our purposes we can approximate this as roughly the same as solving two problems of size n-2. Expanding a step further, this is roughly the same as solving 4 problems of size n-4, and roughly the same as solving 8 problems of size n-6.
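The fibonacci procedure under discussion is not reproduced in this transcript. A sketch of the standard 6.001 definition, with the cond, the two base cases, and the two recursive calls described above:

```scheme
;; Tree-recursive Fibonacci, as described in the text:
;; a cond with two base cases and two recursive calls.
(define (fib n)
  (cond ((= n 0) 0)   ; first base case
        ((= n 1) 1)   ; second base case
        (else (+ (fib (- n 1))       ; first recursive call
                 (fib (- n 2))))))   ; second recursive call
```

Evaluating (fib 4) produces exactly the tree in the slide: it requires (fib 3) and (fib 2), each of which spawns two smaller calls, giving the time recurrence t(n) = t(n-1) + t(n-2) that the approximation above expands.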
A little math shows that in general this reduces to 2 to the power of n/2 steps. This is an example of an exponential order of growth, and it is very different from what we saw earlier. To convince yourself of this, assume that each step takes one second, and see how much time an exponential process would take compared to a linear one, as n gets large.

In terms of space, our tree shows us that we have basically one deferred operation per step; in other words, the maximum depth of that tree is linear in the size of the problem, and the maximum depth exactly captures the maximum number of deferred operations.

6.001 Notes: Section 4.3
Slide 4.3.1: Using different processes for the same goal

Let's take another quick look at how we can create procedures with different orders of growth to compute the same function. As a specific example, suppose we want to compute exponentials, such as a raised to the b power, but to do so only using the simpler operations of multiplication and addition. How might we use the tools we have been developing to accomplish this?

Slide 4.3.2

So, recall the stages we used to solve problems like this: we will use some wishful thinking to assume that solutions to simpler versions of the problem exist; we will then decompose the problem into a simpler version of the same problem, plus some other simple operations, and use this to construct a solution to the more general problem; and finally, we will determine the smallest-sized subproblem at which the decomposition should stop. Let's look at using these tools on the problem of exponentiation.

Slide 4.3.3

Wishful thinking is our tool for using induction. It says: let's assume that some procedure my-expt exists, so that we can use it, but that it only solves smaller versions of the same problem.

Slide 4.3.4

Given that assumption, we can then turn to the tricky part, which is determining how to decompose the problem into a simpler version of the same problem. Here is one method: notice that a to the power b mathematically is just b products of a. But we can also group this as a times b-1 products of a, and that latter we recognize as a smaller version of the same exponentiation problem. Thus we can rewrite a to the bth power as a times a to the b-1st power. That is, we have reduced the problem to a simpler version of the same problem, together with some simple operations, in this case an additional multiplication.
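This decomposition translates directly into code. The transcript cuts off before a base case is chosen, so the b = 0 clause below (returning 1, the usual choice) is an assumption; a minimal sketch:

```scheme
;; a^b = a * a^(b-1), using only multiplication.
;; Base case: b = 0 returns 1. This is assumed here;
;; the transcript has not yet specified it.
(define (my-expt a b)
  (if (= b 0)
      1
      (* a (my-expt a (- b 1)))))
```

For example, (my-expt 2 5) performs five multiplications, one per recursive step, so this process grows linearly in b.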