Here A is, as usual, an N × N matrix, while U and V are N × P matrices with P < N and usually P ≪ N. The inner piece of the correction term may become clearer if written as the tableau,

$$
\mathbf{U}\cdot\left(\mathbf{1}+\mathbf{V}^{T}\cdot\mathbf{A}^{-1}\cdot\mathbf{U}\right)^{-1}\cdot\mathbf{V}^{T}
\tag{2.7.13}
$$

where you can see that the matrix whose inverse is needed is only P × P rather than N × N.

The relation between the Woodbury formula and successive applications of the Sherman-Morrison formula is now clarified by noting that, if U is the matrix formed by columns out of the P vectors u_1, ..., u_P, and V is the matrix formed by columns out of the P vectors v_1, ..., v_P,

$$
\mathbf{U}\equiv\begin{pmatrix}\mathbf{u}_{1}&\cdots&\mathbf{u}_{P}\end{pmatrix}
\qquad
\mathbf{V}\equiv\begin{pmatrix}\mathbf{v}_{1}&\cdots&\mathbf{v}_{P}\end{pmatrix}
\tag{2.7.14}
$$

then two ways of expressing the same correction to A are

$$
\mathbf{A}+\sum_{k=1}^{P}\mathbf{u}_{k}\otimes\mathbf{v}_{k}=\left(\mathbf{A}+\mathbf{U}\cdot\mathbf{V}^{T}\right)
\tag{2.7.15}
$$

(Note that the subscripts on u and v do not denote components, but rather distinguish the different column vectors.)
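To make the equivalence in (2.7.15) concrete, here is the case P = 2 written out from the column partitions of (2.7.14); each column of U pairs with the corresponding column of V to contribute one rank-one (outer product) term:

$$
\mathbf{U}\cdot\mathbf{V}^{T}
=\begin{pmatrix}\mathbf{u}_{1}&\mathbf{u}_{2}\end{pmatrix}\cdot\begin{pmatrix}\mathbf{v}_{1}^{T}\\ \mathbf{v}_{2}^{T}\end{pmatrix}
=\mathbf{u}_{1}\otimes\mathbf{v}_{1}+\mathbf{u}_{2}\otimes\mathbf{v}_{2}
$$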
Equation (2.7.15) reveals that, if you have A^{-1} in storage, then you can either make the P corrections in one fell swoop by using (2.7.12), inverting a P × P matrix, or else make them by applying (2.7.5) P successive times.

If you don't have storage for A^{-1}, then you must use (2.7.12) in the following way: To solve the linear equation

$$
\left(\mathbf{A}+\sum_{k=1}^{P}\mathbf{u}_{k}\otimes\mathbf{v}_{k}\right)\cdot\mathbf{x}=\mathbf{b}
\tag{2.7.16}
$$

first solve the P auxiliary problems

$$
\mathbf{A}\cdot\mathbf{z}_{1}=\mathbf{u}_{1}\qquad
\mathbf{A}\cdot\mathbf{z}_{2}=\mathbf{u}_{2}\qquad
\cdots\qquad
\mathbf{A}\cdot\mathbf{z}_{P}=\mathbf{u}_{P}
\tag{2.7.17}
$$

and construct the matrix Z by columns from the z's obtained,

$$
\mathbf{Z}\equiv\begin{pmatrix}\mathbf{z}_{1}&\cdots&\mathbf{z}_{P}\end{pmatrix}
\tag{2.7.18}
$$

Next, do the P × P matrix inversion

$$
\mathbf{H}\equiv\left(\mathbf{1}+\mathbf{V}^{T}\cdot\mathbf{Z}\right)^{-1}
\tag{2.7.19}
$$
Finally, solve the one further auxiliary problem

$$
\mathbf{A}\cdot\mathbf{y}=\mathbf{b}
\tag{2.7.20}
$$

In terms of these quantities, the solution is given by

$$
\mathbf{x}=\mathbf{y}-\mathbf{Z}\cdot\left[\mathbf{H}\cdot\left(\mathbf{V}^{T}\cdot\mathbf{y}\right)\right]
\tag{2.7.21}
$$
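The procedure (2.7.16)-(2.7.21) maps directly onto routines already at hand: LU-decompose A once with ludcmp (§2.3), backsubstitute with lubksb for each of the P + 1 right-hand sides, and then deal with the small P × P system. The following sketch is for illustration only (the routine name woodbury_solve and its argument list are ours, not a routine of this book); it assumes the unit-offset allocation utilities of nrutil.h, and rather than forming H explicitly it solves (1 + V^T·Z)·w = V^T·y, which is the same thing as w = H·(V^T·y).

#include "nrutil.h"

void ludcmp(float **a, int n, int *indx, float *d);
void lubksb(float **a, int n, int *indx, float b[]);

void woodbury_solve(float **a, float **u, float **v, float b[], float x[],
    int n, int p)
/* Solves (A + U.V^T).x = b by equations (2.7.16)-(2.7.21).  On input,
a[1..n][1..n] holds A (it is replaced by its LU decomposition),
u[1..n][1..p] and v[1..n][1..p] hold the column vectors u_k and v_k, and
b[1..n] is the right-hand side.  The solution is returned in x[1..n]. */
{
    int i,j,k,*indx,*indxp;
    float d,sum,*col,*y,*w,**z,**h;

    indx=ivector(1,n);
    indxp=ivector(1,p);
    col=vector(1,n);
    y=vector(1,n);
    w=vector(1,p);
    z=matrix(1,n,1,p);
    h=matrix(1,p,1,p);
    ludcmp(a,n,indx,&d);                    /* LU-decompose A once (section 2.3). */
    for (k=1;k<=p;k++) {                    /* The P auxiliary problems A.z_k = u_k, eq. (2.7.17). */
        for (i=1;i<=n;i++) col[i]=u[i][k];
        lubksb(a,n,indx,col);
        for (i=1;i<=n;i++) z[i][k]=col[i];  /* Build Z by columns, eq. (2.7.18). */
    }
    for (i=1;i<=n;i++) y[i]=b[i];           /* The further auxiliary problem A.y = b, eq. (2.7.20). */
    lubksb(a,n,indx,y);
    for (i=1;i<=p;i++) {                    /* Form 1 + V^T.Z of eq. (2.7.19) and the vector V^T.y. */
        for (j=1;j<=p;j++) {
            sum=(i == j ? 1.0 : 0.0);
            for (k=1;k<=n;k++) sum += v[k][i]*z[k][j];
            h[i][j]=sum;
        }
        sum=0.0;
        for (k=1;k<=n;k++) sum += v[k][i]*y[k];
        w[i]=sum;
    }
    ludcmp(h,p,indxp,&d);                   /* Solve (1 + V^T.Z).w = V^T.y, i.e. w = H.(V^T.y). */
    lubksb(h,p,indxp,w);
    for (i=1;i<=n;i++) {                    /* Assemble x = y - Z.w, eq. (2.7.21). */
        sum=y[i];
        for (k=1;k<=p;k++) sum -= z[i][k]*w[k];
        x[i]=sum;
    }
    free_matrix(h,1,p,1,p);
    free_matrix(z,1,n,1,p);
    free_vector(w,1,p);
    free_vector(y,1,n);
    free_vector(col,1,n);
    free_ivector(indxp,1,p);
    free_ivector(indx,1,n);
}

Note that A is LU-decomposed only once; the P + 1 backsubstitutions and the work on the small P × P system are comparatively cheap when P ≪ N.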
Inversion by Partitioning

Once in a while, you will encounter a matrix (not even necessarily sparse) that can be inverted efficiently by partitioning. Suppose that the N × N matrix A is partitioned into

$$
\mathbf{A}=\begin{pmatrix}\mathbf{P}&\mathbf{Q}\\ \mathbf{R}&\mathbf{S}\end{pmatrix}
\tag{2.7.22}
$$

where P and S are square matrices of size p × p and s × s respectively (p + s = N). The matrices Q and R are not necessarily square, and have sizes p × s and s × p, respectively.

If the inverse of A is partitioned in the same manner,

$$
\mathbf{A}^{-1}=\begin{pmatrix}\widetilde{\mathbf{P}}&\widetilde{\mathbf{Q}}\\ \widetilde{\mathbf{R}}&\widetilde{\mathbf{S}}\end{pmatrix}
\tag{2.7.23}
$$

then P̃, Q̃, R̃, S̃, which have the same sizes as P, Q, R, S, respectively, can be found by either the formulas

$$
\begin{aligned}
\widetilde{\mathbf{P}}&=\left(\mathbf{P}-\mathbf{Q}\cdot\mathbf{S}^{-1}\cdot\mathbf{R}\right)^{-1}\\
\widetilde{\mathbf{Q}}&=-\left(\mathbf{P}-\mathbf{Q}\cdot\mathbf{S}^{-1}\cdot\mathbf{R}\right)^{-1}\cdot\left(\mathbf{Q}\cdot\mathbf{S}^{-1}\right)\\
\widetilde{\mathbf{R}}&=-\left(\mathbf{S}^{-1}\cdot\mathbf{R}\right)\cdot\left(\mathbf{P}-\mathbf{Q}\cdot\mathbf{S}^{-1}\cdot\mathbf{R}\right)^{-1}\\
\widetilde{\mathbf{S}}&=\mathbf{S}^{-1}+\left(\mathbf{S}^{-1}\cdot\mathbf{R}\right)\cdot\left(\mathbf{P}-\mathbf{Q}\cdot\mathbf{S}^{-1}\cdot\mathbf{R}\right)^{-1}\cdot\left(\mathbf{Q}\cdot\mathbf{S}^{-1}\right)
\end{aligned}
\tag{2.7.24}
$$

or else by the equivalent formulas

$$
\begin{aligned}
\widetilde{\mathbf{P}}&=\mathbf{P}^{-1}+\left(\mathbf{P}^{-1}\cdot\mathbf{Q}\right)\cdot\left(\mathbf{S}-\mathbf{R}\cdot\mathbf{P}^{-1}\cdot\mathbf{Q}\right)^{-1}\cdot\left(\mathbf{R}\cdot\mathbf{P}^{-1}\right)\\
\widetilde{\mathbf{Q}}&=-\left(\mathbf{P}^{-1}\cdot\mathbf{Q}\right)\cdot\left(\mathbf{S}-\mathbf{R}\cdot\mathbf{P}^{-1}\cdot\mathbf{Q}\right)^{-1}\\
\widetilde{\mathbf{R}}&=-\left(\mathbf{S}-\mathbf{R}\cdot\mathbf{P}^{-1}\cdot\mathbf{Q}\right)^{-1}\cdot\left(\mathbf{R}\cdot\mathbf{P}^{-1}\right)\\
\widetilde{\mathbf{S}}&=\left(\mathbf{S}-\mathbf{R}\cdot\mathbf{P}^{-1}\cdot\mathbf{Q}\right)^{-1}
\end{aligned}
\tag{2.7.25}
$$

The parentheses in equations (2.7.24) and (2.7.25) highlight repeated factors that you may wish to compute only once. (Of course, by associativity, you can instead do the matrix multiplications in any order you like.) The choice between using equation (2.7.24) and (2.7.25) depends on whether you want P̃ or S̃ to have the simpler formula; or on whether the repeated expression (S − R·P^{-1}·Q)^{-1} is easier
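One quick way to check (and to remember) these formulas is to write out the block form of A·A^{-1} = 1 using the partitions (2.7.22) and (2.7.23), which gives four block equations,

$$
\mathbf{P}\cdot\widetilde{\mathbf{P}}+\mathbf{Q}\cdot\widetilde{\mathbf{R}}=\mathbf{1}\qquad
\mathbf{P}\cdot\widetilde{\mathbf{Q}}+\mathbf{Q}\cdot\widetilde{\mathbf{S}}=\mathbf{0}\qquad
\mathbf{R}\cdot\widetilde{\mathbf{P}}+\mathbf{S}\cdot\widetilde{\mathbf{R}}=\mathbf{0}\qquad
\mathbf{R}\cdot\widetilde{\mathbf{Q}}+\mathbf{S}\cdot\widetilde{\mathbf{S}}=\mathbf{1}
$$

For example, the second of these gives Q̃ = −P^{-1}·Q·S̃; substituting that into the fourth gives (S − R·P^{-1}·Q)·S̃ = 1, which is exactly the S̃ and Q̃ of equation (2.7.25). The remaining blocks, and the formulas (2.7.24), follow in the same way.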