with specified basis B = {x1, ..., xn}, then, since any element x ∈ V may be written uniquely as x = a1x1 + ··· + anxn, in which each ai ∈ F, we may identify x with the n-vector [x]B = [a1 ... an]T. For any basis B, the mapping x → [x]B is an isomorphism between V and Fn.

0.2 Matrices

The fundamental object of study here may be thought of in two important ways: as a rectangular array of scalars and as a linear transformation between two vector spaces, given specified bases for each space.

0.2.1 Rectangular arrays. A matrix is an m-by-n array of scalars from a field F. If m = n, the matrix is said to be square. The set of all m-by-n matrices over F is denoted by Mm,n(F), and Mn,n(F) is often denoted by Mn(F). The vector spaces Mn,1(F) and Fn are identical. If F = C, then Mn(C) is further abbreviated to Mn, and Mm,n(C) to Mm,n. Matrices are typically denoted by capital letters, and their scalar entries are typically denoted by doubly subscripted lowercase letters. For example, if

    A = [ 2  -3/2  0 ] = [aij]
        [-1    π   4 ]

then A ∈ M2,3(R) has entries a11 = 2, a12 = −3/2, a13 = 0, a21 = −1, a22 = π, a23 = 4.

A submatrix of a given matrix is a rectangular array lying in specified subsets of the rows and columns of that matrix. For example, [π 4] is a submatrix (lying in row 2 and columns 2 and 3) of A.

Suppose that A = [aij] ∈ Mn,m(F). The main diagonal of A is the list of entries a11, a22, ..., aqq, in which q = min{n, m}. It is sometimes convenient to express the main diagonal of A as a vector diag A = [aii]_{i=1}^{q} ∈ Fq. The pth superdiagonal of A is the list a1,p+1, a2,p+2, ..., ak,p+k, in which k = min{n, m − p}, p = 0, 1, 2, ..., m − 1; the pth subdiagonal of A is the list ap+1,1, ap+2,2, ..., ap+ℓ,ℓ, in which ℓ = min{n − p, m}, p = 0, 1, 2, ..., n − 1.

0.2.2 Linear transformations. Let U be an n-dimensional vector space and let V be an m-dimensional vector space, both over the same field F; let BU be a basis of U and let BV be a basis of V.
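The diagonal definitions in 0.2.1 can be checked numerically. The following sketch (an illustration using NumPy, not part of the original text) extracts the main diagonal and the p = 1 super- and subdiagonals of the 2-by-3 example matrix A:

```python
import numpy as np

# The 2-by-3 example matrix from 0.2.1: A in M_{2,3}(R)
A = np.array([[2.0, -1.5, 0.0],
              [-1.0, np.pi, 4.0]])

# Main diagonal: a11, a22, ..., aqq with q = min{n, m} = 2
main_diag = np.diagonal(A)
# p = 1 superdiagonal: a12, a23 (length min{n, m - p} = 2)
super1 = np.diagonal(A, offset=1)
# p = 1 subdiagonal: a21 (length min{n - p, m} = 1)
sub1 = np.diagonal(A, offset=-1)

print(main_diag, super1, sub1)
```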
We may use the isomorphisms x → [x]BU and y → [y]BV to represent vectors in U and V as n-vectors and m-vectors over F, respectively. A linear transformation is a function T : U → V such that T(a1x1 + a2x2) = a1T(x1) + a2T(x2) for any scalars a1, a2 and vectors x1, x2. A matrix A ∈ Mm,n(F) corresponds to a linear transformation T : U → V in the following way: y = T(x) if and only if [y]BV = A[x]BU. The matrix A is said to represent the linear transformation T (relative to the bases BU and BV); the representing matrix A depends on the bases chosen. When we study a matrix A, we realize that we are studying a linear transformation relative to a particular choice of bases, but explicit appeal to the bases is usually not necessary.

0.2.3 Vector spaces associated with a matrix or linear transformation. Any n-dimensional vector space over F may be identified with Fn; we may think of
A ∈ Mm,n(F) as a linear transformation x → Ax from Fn to Fm (and also as an array). The domain of this linear transformation is Fn; its range is range A = {y ∈ Fm : y = Ax for some x ∈ Fn}; its null space is nullspace A = {x ∈ Fn : Ax = 0}. The range of A is a subspace of Fm, and the null space of A is a subspace of Fn. The dimension of nullspace A is denoted by nullity A; the dimension of range A is denoted by rank A. These numbers are related by the rank-nullity theorem

    dim(range A) + dim(nullspace A) = rank A + nullity A = n        (0.2.3.1)

for A ∈ Mm,n(F). The null space of A is the set of vectors in Fn whose entries satisfy m homogeneous linear equations.

0.2.4 Matrix operations. Matrix addition is defined entrywise for arrays of the same dimensions and is denoted by + ("A + B"). It corresponds to addition of linear transformations (relative to the same bases), and it inherits commutativity and associativity from the scalar field. The zero matrix (all entries are zero) is the additive identity, and Mm,n(F) is a vector space over F.

Matrix multiplication is denoted by juxtaposition ("AB") and corresponds to the composition of linear transformations. Therefore, it is defined only when A ∈ Mm,n(F) and B ∈ Mn,q(F). It is associative, but not always commutative. For example,

    [1 0] [1 2]   [1 2]   [1 4]   [1 2] [1 0]
    [0 2] [3 4] = [6 8] ≠ [3 8] = [3 4] [0 2]

The identity matrix I ∈ Mn(F) is the multiplicative identity in Mn(F); its main diagonal entries are 1, and all other entries are 0. The identity matrix and any scalar multiple of it (a scalar matrix) commute with every matrix in Mn(F); they are the only matrices that do so. Matrix multiplication is distributive over matrix addition.

The symbol 0 is used throughout the book to denote each of the following: the zero scalar of a field, the zero vector of a vector space, the zero n-vector in Fn (all entries equal to the zero scalar in F), and the zero matrix in Mm,n(F) (all entries equal to the zero scalar).
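The 2-by-2 example above, together with the claim that scalar matrices commute with every matrix, can be checked directly; this NumPy sketch is illustrative only:

```python
import numpy as np

# The worked example: matrix multiplication is not always commutative.
A = np.array([[1, 0],
              [0, 2]])
B = np.array([[1, 2],
              [3, 4]])
AB = A @ B
BA = B @ A
assert np.array_equal(AB, [[1, 2], [6, 8]])
assert np.array_equal(BA, [[1, 4], [3, 8]])
assert not np.array_equal(AB, BA)

# Scalar matrices (scalar multiples of I) commute with every matrix:
S = 3 * np.eye(2, dtype=int)
assert np.array_equal(S @ B, B @ S)
```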
The symbol I denotes the identity matrix of any size. If there is potential for confusion, we indicate the dimension of a zero or identity matrix with subscripts, for example, 0p,q, 0k, or Ik.

0.2.5 The transpose, conjugate transpose, and trace. If A = [aij] ∈ Mm,n(F), the transpose of A, denoted by AT, is the matrix in Mn,m(F) whose i, j entry is aji; that is, rows are exchanged for columns and vice versa. For example,

    [1 2 3]T   [1 4]
    [4 5 6]  = [2 5]
               [3 6]

Of course, (AT)T = A. The conjugate transpose (sometimes called the adjoint or Hermitian adjoint) of A ∈ Mm,n(C) is denoted by A* and defined by A* = ĀT, in
which Ā is the entrywise conjugate. For example,

    [1+i  2−i]*   [1−i  −3]
    [−3   −2i]  = [2+i  2i]

Both the transpose and the conjugate transpose obey the reverse-order law: (AB)* = B*A* and (AB)T = BTAT. For the complex conjugate of a product, there is no reversing: the entrywise conjugate of AB is ĀB̄. If x, y are real or complex vectors of the same size, then y*x is a scalar and its conjugate transpose and complex conjugate are the same: (y*x)* = x*y = yTx̄.

Many important classes of matrices are defined by identities involving the transpose or conjugate transpose. For example, A ∈ Mn(F) is said to be symmetric if AT = A, skew symmetric if AT = −A, and orthogonal if ATA = I; A ∈ Mn(C) is said to be Hermitian if A* = A, skew Hermitian if A* = −A, essentially Hermitian if e^{iθ}A is Hermitian for some θ ∈ R, unitary if A*A = I, and normal if A*A = AA*.

Each A ∈ Mn(F) can be written in exactly one way as A = S(A) + C(A), in which S(A) is symmetric and C(A) is skew symmetric: S(A) = (1/2)(A + AT) is the symmetric part of A; C(A) = (1/2)(A − AT) is the skew-symmetric part of A.

Each A ∈ Mm,n(C) can be written in exactly one way as A = B + iC, in which B, C ∈ Mm,n(R): B = (1/2)(A + Ā) is the real part of A; C = (1/(2i))(A − Ā) is the imaginary part of A.

Each A ∈ Mn(C) can be written in exactly one way as A = H(A) + iK(A), in which H(A) and K(A) are Hermitian: H(A) = (1/2)(A + A*) is the Hermitian part of A; iK(A) = (1/2)(A − A*) is the skew-Hermitian part of A. The representation A = H(A) + iK(A) of a complex or real matrix is its Toeplitz decomposition.

The trace of A = [aij] ∈ Mm,n(F) is the sum of its main diagonal entries: tr A = a11 + ··· + aqq, in which q = min{m, n}. For any A = [aij] ∈ Mm,n(C), tr AA* = tr A*A = Σi,j |aij|², so

    tr AA* = 0 if and only if A = 0        (0.2.5.1)

A vector x ∈ Fn is isotropic if xTx = 0. For example, [1 i]T ∈ C2 is a nonzero isotropic vector. There are no nonzero isotropic vectors in Rn.

0.2.6 Metamechanics of matrix multiplication.
In addition to the conventional definition of matrix-vector and matrix-matrix multiplication, several alternative viewpoints can be useful.

1. If A ∈ Mm,n(F), x ∈ Fn, and y ∈ Fm, then the (column) vector Ax is a linear combination of the columns of A; the coefficients of the linear combination are the entries of x. The row vector yTA is a linear combination of the rows of A; the coefficients of the linear combination are the entries of y.

2. If bj is the jth column of B and aiT is the ith row of A, then the jth column of AB is Abj and the ith row of AB is aiTB. To paraphrase, in the matrix product AB, left multiplication by A multiplies the columns of B and right multiplication by B multiplies the rows of A. See (0.9.1) for an important special case of this observation when one of the factors is a diagonal matrix.
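Viewpoints 1 and 2 can be verified numerically. In this sketch (illustrative matrices, not from the text), the checks mirror the statements above:

```python
import numpy as np

# Viewpoint 1: Ax is a linear combination of the columns of A, with the
# entries of x as coefficients; y^T A likewise combines the rows of A.
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
x = np.array([10, -1])
assert np.array_equal(A @ x, x[0] * A[:, 0] + x[1] * A[:, 1])

y = np.array([1, 0, 2])
assert np.array_equal(y @ A, y[0] * A[0, :] + y[1] * A[1, :] + y[2] * A[2, :])

# Viewpoint 2: the jth column of AB is A b_j; the ith row of AB is a_i^T B.
B = np.array([[1, 0, 2],
              [0, 3, 1]])
AB = A @ B
assert all(np.array_equal(AB[:, j], A @ B[:, j]) for j in range(B.shape[1]))
assert all(np.array_equal(AB[i, :], A[i, :] @ B) for i in range(A.shape[0]))
```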
Suppose that A ∈ Mm,p(F) and B ∈ Mn,q(F). Let ak be the kth column of A and let bk be the kth column of B. Then

3. If m = n, then ATB = [aiTbj]: the i, j entry of ATB is the scalar aiTbj.

4. If p = q, then ABT = Σ_{k=1}^{p} akbkT: each summand is an m-by-n matrix, the outer product of ak and bk.

0.2.7 Column space and row space of a matrix. The range of A ∈ Mm,n(F) is also called its column space because Ax is a linear combination of the columns of A for any x ∈ Fn (the entries of x are the coefficients in the linear combination); range A is the span of the columns of A. Analogously, {yTA : y ∈ Fm} is called the row space of A. If the column space of A ∈ Mm,n(F) is contained in the column space of B ∈ Mm,k(F), then there is some X ∈ Mk,n(F) such that A = BX (and conversely); the entries in column j of X tell how to express column j of A as a linear combination of the columns of B.

If A ∈ Mm,n(F) and B ∈ Mm,q(F), then

    range A + range B = range [A B]        (0.2.7.1)

If A ∈ Mm,n(F) and B ∈ Mp,n(F), then

    nullspace A ∩ nullspace B = nullspace [A]
                                          [B]        (0.2.7.2)

0.2.8 The all-ones matrix and vector. In Fn, every entry of the vector e = e1 + ··· + en is 1. Every entry of the matrix Jn = eeT is 1.

0.3 Determinants

Often in mathematics, it is useful to summarize a multivariate phenomenon with a single number, and the determinant function is an example of this. Its domain is Mn(F) (square matrices only), and it may be presented in several different ways. We denote the determinant of A ∈ Mn(F) by det A.

0.3.1 Laplace expansion by minors along a row or column. The determinant may be defined inductively for A = [aij] ∈ Mn(F) in the following way. Assume that the determinant is defined over Mn−1(F) and let Aij ∈ Mn−1(F) denote the submatrix of A ∈ Mn(F) obtained by deleting row i and column j of A.
Then, for any i, j ∈ {1, ..., n}, we have

    det A = Σ_{k=1}^{n} (−1)^{i+k} aik det Aik = Σ_{k=1}^{n} (−1)^{k+j} akj det Akj        (0.3.1.1)

The first sum is the Laplace expansion by minors along row i; the second sum is the Laplace expansion by minors along column j. This inductive presentation begins by
defining the determinant of a 1-by-1 matrix to be the value of the single entry. Thus,

    det [a11] = a11

    det [a11 a12] = a11a22 − a12a21
        [a21 a22]

    det [a11 a12 a13]
        [a21 a22 a23] = a11a22a33 + a12a23a31 + a13a21a32
        [a31 a32 a33]   − a11a23a32 − a12a21a33 − a13a22a31

and so on. Notice that det AT = det A, det A* is the complex conjugate of det A if A ∈ Mn(C), and det I = 1.

0.3.2 Alternating sums and permutations. A permutation of {1, ..., n} is a one-to-one function σ : {1, ..., n} → {1, ..., n}. The identity permutation satisfies σ(i) = i for each i = 1, ..., n. There are n! distinct permutations of {1, ..., n}, and the collection of all such permutations forms a group under composition of functions. Consistent with the low-dimensional examples in (0.3.1), for A = [aij] ∈ Mn(F) we have the alternative presentation

    det A = Σ_σ sgn σ Π_{i=1}^{n} aiσ(i)        (0.3.2.1)

in which the sum is over all n! permutations σ of {1, ..., n} and sgn σ, the "sign" or "signum" of a permutation, is +1 or −1 according to whether the minimum number of transpositions (pairwise interchanges) necessary to achieve it starting from {1, ..., n} is even or odd. We say that a permutation σ is even if sgn σ = +1; σ is odd if sgn σ = −1.

If sgn σ in (0.3.2.1) is replaced by certain other functions of σ, one obtains generalized matrix functions in place of det A. For example, the permanent of A, denoted by per A, is obtained by replacing sgn σ by the function that is identically +1.

0.3.3 Elementary row and column operations. Three simple and fundamental operations on rows or columns, called elementary row and column operations, can be used to transform a matrix (square or not) into a simple form that facilitates such tasks as solving linear equations, determining rank, and calculating determinants and inverses of square matrices. We focus on row operations, which are implemented by matrices that act on the left. Column operations are defined and used in a similar fashion; the matrices that implement them act on the right.
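The two presentations of the determinant above, the Laplace expansion (0.3.1.1) and the permutation sum (0.3.2.1), can be cross-checked with a short program. The sketch below (illustrative only, and factorial-time, so unsuitable for real computation) also computes the permanent by dropping sgn σ:

```python
import itertools
import numpy as np

def laplace_det(A):
    # Laplace expansion (0.3.1.1) along the first row, applied recursively.
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        # Minor: delete row 1 and column k+1 (1-based); sign is (-1)^{1+k}.
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)
        total += (-1) ** k * A[0, k] * laplace_det(minor)
    return total

def sgn(perm):
    # Sign of a permutation via its inversion count: even count gives +1.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def perm_sum(A, signed=True):
    # The sum (0.3.2.1) over all n! permutations; signed=False gives per A.
    n = A.shape[0]
    return sum((sgn(s) if signed else 1) * np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

A = np.array([[2.0, -3.0, 1.0],
              [4.0, 0.0, 5.0],
              [-1.0, 2.0, 3.0]])
assert np.isclose(laplace_det(A), perm_sum(A, signed=True))
assert np.isclose(laplace_det(A), np.linalg.det(A))
per_A = perm_sum(A, signed=False)  # the permanent of A
```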