762 Chapter 17. Two Point Boundary Value Problems

Sample page from NUMERICAL RECIPES IN C: THE ART OF SCIENTIFIC COMPUTING (ISBN 0-521-43108-5). Copyright (C) 1988-1992 by Cambridge University Press. Programs Copyright (C) 1988-1992 by Numerical Recipes Software. Permission is granted for internet users to make one paper copy for their own personal use. Further reproduction, or any copying of machine-readable files (including this one) to any server computer, is strictly prohibited. To order Numerical Recipes books or CDROMs, visit website http://www.nr.com or call 1-800-872-7423 (North America only), or send email to directcustserv@cambridge.org (outside North America).

    for (i=1;i<=n;i++) f[i]=f1[i]-f2[i];
    free_vector(y,1,nvar);
    free_vector(f2,1,nvar);
    free_vector(f1,1,nvar);
}

There are boundary value problems where even shooting to a fitting point fails; the integration interval has to be partitioned by several fitting points, with the solution being matched at each such point. For more details see [1].

CITED REFERENCES AND FURTHER READING:

Acton, F.S. 1970, Numerical Methods That Work; 1990, corrected edition (Washington: Mathematical Association of America).

Keller, H.B. 1968, Numerical Methods for Two-Point Boundary-Value Problems (Waltham, MA: Blaisdell).

Stoer, J., and Bulirsch, R. 1980, Introduction to Numerical Analysis (New York: Springer-Verlag), §§7.3.5–7.3.6. [1]

17.3 Relaxation Methods

In relaxation methods we replace ODEs by approximate finite-difference equations (FDEs) on a grid or mesh of points that spans the domain of interest. As a typical example, we could replace a general first-order differential equation

    \frac{dy}{dx} = g(x, y)    (17.3.1)

with an algebraic equation relating function values at two points k, k-1:

    y_k - y_{k-1} - (x_k - x_{k-1})\, g\left[ \tfrac{1}{2}(x_k + x_{k-1}),\ \tfrac{1}{2}(y_k + y_{k-1}) \right] = 0    (17.3.2)

The form of the FDE in (17.3.2) illustrates the idea, but not uniquely: There are many ways to turn the ODE into an FDE.
When the problem involves N coupled first-order ODEs represented by FDEs on a mesh of M points, a solution consists of values for N dependent functions given at each of the M mesh points, or N × M variables in all. The relaxation method determines the solution by starting with a guess and improving it, iteratively. As the iterations improve the solution, the result is said to relax to the true solution. While several iteration schemes are possible, for most problems our old standby, multidimensional Newton’s method, works well. The method produces a matrix equation that must be solved, but the matrix takes a special, “block diagonal” form, that allows it to be inverted far more economically both in time and storage than would be possible for a general matrix of size (MN) × (MN). Since MN can easily be several thousand, this is crucial for the feasibility of the method. Our implementation couples at most pairs of points, as in equation (17.3.2). More points can be coupled, but then the method becomes more complex. We will provide enough background so that you can write a more general scheme if you have the patience to do so. Let us develop a general set of algebraic equations that represent the ODEs by FDEs. The ODE problem is exactly identical to that expressed in equations (17.0.1)–(17.0.3) where we had N coupled first-order equations that satisfy n1 boundary conditions at x1 and n2 = N − n1 boundary conditions at x2. We first define a mesh or grid by a set of k = 1, 2, ..., M points at which we supply values for the independent variable xk. In particular, x1 is the initial boundary, and xM is the final boundary. We use the notation yk to refer to the entire set of
dependent variables y1, y2, ..., yN at point xk. At an arbitrary point k in the middle of the mesh, we approximate the set of N first-order ODEs by algebraic relations of the form

    0 = E_k \equiv y_k - y_{k-1} - (x_k - x_{k-1})\, g_k(x_k, x_{k-1}, y_k, y_{k-1}), \quad k = 2, 3, \ldots, M    (17.3.3)

The notation signifies that gk can be evaluated using information from both points k, k-1. The FDEs labeled by Ek provide N equations coupling 2N variables at points k, k-1. There are M-1 points, k = 2, 3, ..., M, at which difference equations of the form (17.3.3) apply. Thus the FDEs provide a total of (M-1)N equations for the MN unknowns. The remaining N equations come from the boundary conditions. At the first boundary we have

    0 = E_1 \equiv B(x_1, y_1)    (17.3.4)

while at the second boundary

    0 = E_{M+1} \equiv C(x_M, y_M)    (17.3.5)

The vectors E1 and B have only n1 nonzero components, corresponding to the n1 boundary conditions at x1. It will turn out to be useful to take these nonzero components to be the last n1 components. In other words, Ej,1 ≠ 0 only for j = n2 + 1, n2 + 2, ..., N. At the other boundary, only the first n2 components of EM+1 and C are nonzero: Ej,M+1 ≠ 0 only for j = 1, 2, ..., n2.

The “solution” of the FDE problem in (17.3.3)–(17.3.5) consists of a set of variables yj,k, the values of the N variables yj at the M points xk.
The algorithm we describe below requires an initial guess for the yj,k. We then determine increments ∆yj,k such that yj,k + ∆yj,k is an improved approximation to the solution.

Equations for the increments are developed by expanding the FDEs in first-order Taylor series with respect to small changes ∆yk. At an interior point, k = 2, 3, ..., M, this gives:

    E_k(y_k + \Delta y_k,\, y_{k-1} + \Delta y_{k-1}) \approx E_k(y_k, y_{k-1}) + \sum_{n=1}^{N} \frac{\partial E_k}{\partial y_{n,k-1}} \Delta y_{n,k-1} + \sum_{n=1}^{N} \frac{\partial E_k}{\partial y_{n,k}} \Delta y_{n,k}    (17.3.6)

For a solution we want the updated value E(y + ∆y) to be zero, so the general set of equations at an interior point can be written in matrix form as

    \sum_{n=1}^{N} S_{j,n} \Delta y_{n,k-1} + \sum_{n=N+1}^{2N} S_{j,n} \Delta y_{n-N,k} = -E_{j,k}, \quad j = 1, 2, \ldots, N    (17.3.7)

where

    S_{j,n} = \frac{\partial E_{j,k}}{\partial y_{n,k-1}}, \quad S_{j,n+N} = \frac{\partial E_{j,k}}{\partial y_{n,k}}, \quad n = 1, 2, \ldots, N    (17.3.8)

The quantity Sj,n is an N × 2N matrix at each point k. Each interior point thus supplies a block of N equations coupling 2N corrections to the solution variables at the points k, k-1.

Similarly, the algebraic relations at the boundaries can be expanded in a first-order Taylor series for increments that improve the solution. Since E1 depends only on y1, we find at the first boundary:

    \sum_{n=1}^{N} S_{j,n} \Delta y_{n,1} = -E_{j,1}, \quad j = n_2 + 1, n_2 + 2, \ldots, N    (17.3.9)

where

    S_{j,n} = \frac{\partial E_{j,1}}{\partial y_{n,1}}, \quad n = 1, 2, \ldots, N    (17.3.10)

At the second boundary,

    \sum_{n=1}^{N} S_{j,n} \Delta y_{n,M} = -E_{j,M+1}, \quad j = 1, 2, \ldots, n_2    (17.3.11)
where

    S_{j,n} = \frac{\partial E_{j,M+1}}{\partial y_{n,M}}, \quad n = 1, 2, \ldots, N    (17.3.12)

We thus have in equations (17.3.7)–(17.3.12) a set of linear equations to be solved for the corrections ∆y, iterating until the corrections are sufficiently small. The equations have a special structure, because each Sj,n couples only points k, k-1. Figure 17.3.1 illustrates the typical structure of the complete matrix equation for the case of 5 variables and 4 mesh points, with 3 boundary conditions at the first boundary and 2 at the second. The 3 × 5 block of nonzero entries in the top left-hand corner of the matrix comes from the boundary condition Sj,n at point k = 1. The next three 5 × 10 blocks are the Sj,n at the interior points, coupling variables at mesh points (2,1), (3,2), and (4,3). Finally we have the block corresponding to the second boundary condition.

We can solve equations (17.3.7)–(17.3.12) for the increments ∆y using a form of Gaussian elimination that exploits the special structure of the matrix to minimize the total number of operations, and that minimizes storage of matrix coefficients by packing the elements in a special blocked structure. (You might wish to review Chapter 2, especially §2.2, if you are unfamiliar with the steps involved in Gaussian elimination.)
Recall that Gaussian elimination consists of manipulating the equations by elementary operations such as dividing rows of coefficients by a common factor to produce unity in diagonal elements, and adding appropriate multiples of other rows to produce zeros below the diagonal. Here we take advantage of the block structure by performing a bit more reduction than in pure Gaussian elimination, so that the storage of coefficients is minimized. Figure 17.3.2 shows the form that we wish to achieve by elimination, just prior to the backsubstitution step. Only a small subset of the reduced MN × MN matrix elements needs to be stored as the elimination progresses. Once the matrix elements reach the stage in Figure 17.3.2, the solution follows quickly by a backsubstitution procedure.

Furthermore, the entire procedure, except the backsubstitution step, operates only on one block of the matrix at a time. The procedure contains four types of operations: (1) partial reduction to zero of certain elements of a block using results from a previous step, (2) elimination of the square structure of the remaining block elements such that the square section contains unity along the diagonal, and zero in off-diagonal elements, (3) storage of the remaining nonzero coefficients for use in later steps, and (4) backsubstitution. We illustrate the steps schematically by figures.

Consider the block of equations describing corrections available from the initial boundary conditions. We have n1 equations for N unknown corrections. We wish to transform the first block so that its left-hand n1 × n1 square section becomes unity along the diagonal, and zero in off-diagonal elements. Figure 17.3.3 shows the original and final form of the first block of the matrix. In the figure we designate matrix elements that are subject to diagonalization by “D”, and elements that will be altered by “A”; in the final block, elements that are stored are labeled by “S”.
We get from start to finish by selecting in turn n1 “pivot” elements from among the first n1 columns, normalizing the pivot row so that the value of the “pivot” element is unity, and adding appropriate multiples of this row to the remaining rows so that they contain zeros in the pivot column. In its final form, the reduced block expresses values for the corrections to the first n1 variables at mesh point 1 in terms of values for the remaining n2 unknown corrections at point 1, i.e., we now know what the first n1 elements are in terms of the remaining n2 elements. We store only the final set of n2 nonzero columns from the initial block, plus the column for the altered right-hand side of the matrix equation.

We must emphasize here an important detail of the method. To exploit the reduced storage allowed by operating on blocks, it is essential that the ordering of columns in the s matrix of derivatives be such that pivot elements can be found among the first n1 rows of the matrix. This means that the n1 boundary conditions at the first point must contain some dependence on the first j = 1, 2, ..., n1 dependent variables, y[j][1]. If not, then the original square n1 × n1 subsection of the first block will appear to be singular, and the method will fail. Alternatively, we would have to allow the search for pivot elements to involve all N columns of the block, and this would require column swapping and far more bookkeeping. The code provides a simple method of reordering the variables, i.e., the columns of the s matrix, so that this can be done easily. End of important detail.
Figure 17.3.1. Matrix structure of a set of linear finite-difference equations (FDEs) with boundary conditions imposed at both endpoints. Here X represents a coefficient of the FDEs, V represents a component of the unknown solution vector, and B is a component of the known right-hand side. Empty spaces represent zeros. The matrix equation is to be solved by a special form of Gaussian elimination. (See text for details.)

Figure 17.3.2. Target structure of the Gaussian elimination. Once the matrix of Figure 17.3.1 has been reduced to this form, the solution follows quickly by backsubstitution.
Figure 17.3.3. Reduction process for the first (upper left) block of the matrix in Figure 17.3.1. (a) Original form of the block, (b) final form. (See text for explanation.)

Figure 17.3.4. Reduction process for intermediate blocks of the matrix in Figure 17.3.1. (a) Original form, (b) final form. (See text for explanation.)

Figure 17.3.5. Reduction process for the last (lower right) block of the matrix in Figure 17.3.1. (a) Original form, (b) final form. (See text for explanation.)