Ch. 1 Matrix Algebra

1 Some Terminology

A matrix is a rectangular array of numbers, denoted
\[
A_{i \times k} = [a_{ik}] =
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1k} \\
a_{21} & a_{22} & \cdots & a_{2k} \\
\vdots & \vdots &        & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ik}
\end{bmatrix},
\]
where a subscripted element of a matrix is always read as $a_{\text{row},\text{column}}$. Here we confine the elements to be real numbers.

A vector is a matrix with one row or one column. Therefore a row vector is $A_{1 \times k}$ and a column vector is $A_{i \times 1}$, commonly denoted $a'_k$ and $a_i$, respectively. In the remainder of this course we follow the conventional custom that a vector is a column vector unless otherwise noted.

The dimension of a matrix is the number of rows and columns it contains. If $i$ equals $k$, then $A$ is a square matrix. Several particular types of square matrices occur in econometrics.

(1). A symmetric matrix $A$ is one in which $a_{ik} = a_{ki}$ for all $i$ and $k$.
(2). A diagonal matrix is a square matrix whose nonzero elements appear only on the main diagonal, moving from the upper left to the lower right.
(3). A scalar matrix is a diagonal matrix with the same value in all diagonal elements.
(4). An identity matrix is a scalar matrix with ones on the diagonal. This matrix is always denoted $I$. A subscript is sometimes included to indicate its size; for example,
\[
I_3 =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}.
\]
(5). A triangular matrix is one that has only zeros either above or below the main diagonal.
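These special types can be illustrated concretely. The following NumPy sketch is not part of the original notes; the entries are arbitrary examples chosen only to satisfy each definition.

```python
import numpy as np

# A symmetric matrix: a_ik = a_ki for all i and k, i.e. A equals its transpose.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 4.0, 5.0]])
print(np.array_equal(A, A.T))    # -> True

# A diagonal matrix: nonzero elements only on the main diagonal.
D = np.diag([1.0, 2.0, 3.0])

# A scalar matrix: a diagonal matrix with the same value in every diagonal element.
S = 4.0 * np.eye(3)

# The identity matrix I_3: a scalar matrix with ones on the diagonal.
I3 = np.eye(3)

# A (lower) triangular matrix: only zeros above the main diagonal.
L = np.tril(np.array([[1.0, 9.0],
                      [2.0, 3.0]]))
print(L)
```

Every one of these matrices is square, and each class nests the next: the identity is a scalar matrix, a scalar matrix is diagonal, and a diagonal matrix is both symmetric and triangular.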
2 Algebraic Manipulation of Matrices

2.1 Equality of Matrices

Matrices $A$ and $B$ are equal if and only if they have the same dimensions and each element of $A$ equals the corresponding element of $B$:
\[
A = B \quad \text{if and only if} \quad a_{ik} = b_{ik} \ \text{for all } i \text{ and } k.
\]

2.2 Transposition

The transpose of a matrix $A$, denoted $A'$, is obtained by creating the matrix whose $k$th row is the $k$th column of the original matrix. If $A$ is $i \times k$, $A'$ is $k \times i$. For example,
\[
A =
\begin{bmatrix}
1 & 5 & 6 & 3 \\
2 & 1 & 4 & 1 \\
3 & 5 & 5 & 4
\end{bmatrix},
\qquad
A' =
\begin{bmatrix}
1 & 2 & 3 \\
5 & 1 & 5 \\
6 & 4 & 5 \\
3 & 1 & 4
\end{bmatrix}.
\]
If $A$ is symmetric, $A = A'$. It is also apparent that for any matrix $A$, $(A')' = A$. Finally, the transpose of a column vector $a_i$ is a row vector:
\[
a'_i = [\,a_1 \ a_2 \ \cdots \ a_i\,].
\]

2.3 Matrix Addition

Matrices cannot be added unless they have the same dimension. The operation of addition is extended to matrices by defining
\[
C = A + B = [a_{ik} + b_{ik}].
\]
We also extend the operation of subtraction to matrices precisely as if they were scalars, by performing the operation element by element. Thus,
\[
C = A - B = [a_{ik} - b_{ik}].
\]
It follows that matrix addition is commutative,
\[
A + B = B + A,
\]
and associative,
\[
(A + B) + C = A + (B + C),
\]
and that
\[
(A + B)' = A' + B'.
\]

2.4 Matrix Multiplication

Matrices are multiplied by using the inner product. The inner product of two vectors $a$ and $b$, each with $n$ elements, is a scalar and is written
\[
a'b = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = b'a.
\]
For an $n \times k$ matrix $A$ and a $k \times T$ matrix $B$, the product matrix $C = AB$ is an $n \times T$ matrix whose $ik$th element is the inner product of row $i$ of $A$ and column $k$ of $B$. In general,
\[
AB \neq BA.
\]
The product of a matrix and a vector is a vector and is written as
\[
c = Ab = b_1 a_1 + b_2 a_2 + \cdots + b_k a_k,
\]
where $b_i$ is the $i$th element of the vector $b$ and $a_i$ is the $i$th column of the matrix $A$. Here we see that the right-hand side is a linear combination of the columns of the matrix, where the coefficients are the elements of the vector.

In the calculation of a matrix product $C = A_{n \times k} B_{k \times T}$, it can be written as
\[
C = AB = [\,Ab_1 \ Ab_2 \ \cdots \ Ab_T\,],
\]
where $b_i$ is the $i$th column of the matrix $B$.

Some general rules for matrix multiplication are as follows:
Associative law: $(AB)C = A(BC)$.
Distributive law: $A(B + C) = AB + AC$.
Transpose of a product: $(AB)' = B'A'$.
Scalar multiplication: $\alpha A = [\alpha a_{ik}]$ for a scalar $\alpha$.
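The multiplication and transposition rules above can be checked numerically. The following NumPy sketch is not part of the original notes; the $2 \times 2$ matrices are arbitrary examples.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Matrix multiplication is generally not commutative: AB != BA.
print(np.array_equal(A @ B, B @ A))        # -> False

# Ab is a linear combination of the columns of A,
# with the elements of b as the coefficients.
b = np.array([2, 3])
print(np.array_equal(A @ b, 2 * A[:, 0] + 3 * A[:, 1]))   # -> True

# Column-by-column form of a product: C = AB = [Ab_1  Ab_2  ...  Ab_T].
C = np.column_stack([A @ B[:, t] for t in range(B.shape[1])])
print(np.array_equal(C, A @ B))            # -> True

# Transpose of a product: (AB)' = B'A'.
print(np.array_equal((A @ B).T, B.T @ A.T))   # -> True
```

Note that `@` is NumPy's matrix product; `A * B`, by contrast, multiplies element by element and is not the matrix product defined above.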
2.5 Matrix Inversion

Definition: A square matrix $A$ is said to be nonsingular or invertible if there exists a unique (square) matrix $B$ such that
\[
AB = BA = I.
\]
The matrix $B$ is said to be a multiplicative inverse of $A$. We will refer to the multiplicative inverse of a nonsingular matrix $A$ simply as the inverse of $A$ and denote it by $A^{-1}$.

Some computational results involving inverses are
\[
|A^{-1}| = \frac{1}{|A|}, \qquad (A^{-1})^{-1} = A, \qquad (A^{-1})' = (A')^{-1},
\]
and
\[
(AB)^{-1} = B^{-1} A^{-1},
\]
when both inverse matrices exist. Finally, if $A$ is symmetric, then $A^{-1}$ is also symmetric.

Lemma: Suppose that $A$, $B$ and $A + B$ are all $m \times m$ nonsingular matrices. Then
\[
(A + B)^{-1} = A^{-1} - A^{-1}(B^{-1} + A^{-1})^{-1} A^{-1}.
\]

2.6 A Useful Idempotent Matrix

Definition: An idempotent matrix is one that is equal to its square, that is,
\[
M^2 = MM = M.
\]
A useful idempotent matrix we will often encounter is the matrix
\[
M^0 = I - \frac{1}{n} ii',
\]
such that
\[
M^0 x =
\begin{bmatrix}
x_1 - \bar{x} \\
x_2 - \bar{x} \\
\vdots \\
x_n - \bar{x}
\end{bmatrix},
\]
where $i$ is a column vector of ones, $x = [x_1, x_2, \ldots, x_n]'$ and $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$.

Proof. By definition,
\[
M^0 x = \Big(I - \frac{1}{n} ii'\Big)x = x - \frac{1}{n} ii'x = x - i\bar{x}.
\]

Exercise: Use the idempotent matrix $M^0$ to calculate $\sum_{j=1}^{200}(j - \bar{j})^2$, where $\bar{j} = \frac{1}{200}\sum_{j=1}^{200} j$, in GAUSS.

2.7 Trace of a Matrix

The trace of a square $k \times k$ matrix is the sum of its diagonal elements:
\[
\mathrm{tr}(A) = \sum_{i=1}^{k} a_{ii}.
\]
Some useful results are:
1. $\mathrm{tr}(cA) = c\,\mathrm{tr}(A)$,
2. $\mathrm{tr}(A) = \mathrm{tr}(A')$,
3. $\mathrm{tr}(A + B) = \mathrm{tr}(B) + \mathrm{tr}(A)$,
4. $\mathrm{tr}(ABCD) = \mathrm{tr}(BCDA) = \mathrm{tr}(CDAB) = \mathrm{tr}(DABC)$.
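The computational results and the lemma of Section 2.5 can be verified numerically. The following NumPy sketch is not part of the original notes; the nonsingular $2 \times 2$ matrices are arbitrary examples.

```python
import numpy as np

# Arbitrary nonsingular 2x2 examples; A is symmetric.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])

Ainv = np.linalg.inv(A)
Binv = np.linalg.inv(B)

# |A^{-1}| = 1/|A|.
print(np.isclose(np.linalg.det(Ainv), 1 / np.linalg.det(A)))   # -> True

# (A^{-1})' = (A')^{-1}.
print(np.allclose(Ainv.T, np.linalg.inv(A.T)))                 # -> True

# (AB)^{-1} = B^{-1} A^{-1}.
print(np.allclose(np.linalg.inv(A @ B), Binv @ Ainv))          # -> True

# A symmetric implies A^{-1} symmetric.
print(np.allclose(Ainv, Ainv.T))                               # -> True

# Lemma: (A + B)^{-1} = A^{-1} - A^{-1} (B^{-1} + A^{-1})^{-1} A^{-1}.
lhs = np.linalg.inv(A + B)
rhs = Ainv - Ainv @ np.linalg.inv(Binv + Ainv) @ Ainv
print(np.allclose(lhs, rhs))                                   # -> True
```

The lemma is a special case of the Woodbury identity; it is useful because it expresses the inverse of a sum in terms of the inverses of the individual matrices.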
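The exercise above is assigned in GAUSS; as an illustration only, the same computation in NumPy, also checking that $M^0$ is idempotent and two of the trace results. The closed-form value $\sum_{j=1}^{n}(j-\bar{j})^2 = n(n^2-1)/12$ used in the check follows from the standard sums $\sum j$ and $\sum j^2$.

```python
import numpy as np

n = 200
ones = np.ones(n)                          # i, a column vector of ones
M0 = np.eye(n) - np.outer(ones, ones) / n  # M0 = I - (1/n) ii'

# M0 is idempotent: M0 M0 = M0.
print(np.allclose(M0 @ M0, M0))            # -> True

# M0 x gives deviations from the mean: x - i*xbar.
x = np.arange(1.0, n + 1.0)                # x = (1, 2, ..., 200)'
d = M0 @ x
print(np.allclose(d, x - x.mean()))        # -> True

# The exercise: sum_{j=1}^{200} (j - jbar)^2 = (M0 x)'(M0 x)
# = n(n^2 - 1)/12 = 666650 for n = 200.
print(np.isclose(d @ d, n * (n**2 - 1) / 12))   # -> True

# Trace results: tr(A + B) = tr(A) + tr(B) and tr(cA) = c tr(A).
Aa = np.array([[1.0, 2.0], [3.0, 4.0]])
Bb = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.isclose(np.trace(Aa + Bb), np.trace(Aa) + np.trace(Bb)))  # -> True
print(np.isclose(np.trace(3.0 * Aa), 3.0 * np.trace(Aa)))          # -> True
```

Note also that $\mathrm{tr}(M^0) = n - \frac{1}{n}\mathrm{tr}(ii') = n - 1$, a fact that reappears later when counting degrees of freedom.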