where $\eta_t$ is a stationary process. Premultiplying (3) by $A'$ results in

$$A'y_t = A'(y_0 - \eta_0) + A'\delta \cdot t + A'\Psi(1)\,(\varepsilon_1 + \varepsilon_2 + \cdots + \varepsilon_t) + A'\eta_t \sim I(0). \tag{4}$$

If $E(\varepsilon_t \varepsilon_t')$ is nonsingular, then $c'(\varepsilon_1 + \varepsilon_2 + \cdots + \varepsilon_t)$ is $I(1)$ for every nonzero $(k \times 1)$ vector $c$. Moreover, if some of the series exhibit nonzero drift ($\delta \neq 0$), the linear combination $A'y_t$ will grow deterministically at rate $A'\delta$. Thus, if the underlying hypothesis suggesting the possibility of cointegration is that certain linear combinations of $y_t$ are $I(0)$, this requires that both conditions $A'\Psi(1) = 0$ and $A'\delta = 0$ hold. The second condition means that despite the presence of a drift term in the process generating $y_t$, there is no linear trend in the cointegrated combination; see Banerjee et al. (1993), p. 151, for details. As for the implication of the first condition, partitioned matrix multiplication gives

$$A'\Psi(1) = \begin{bmatrix} a_1' \\ a_2' \\ \vdots \\ a_h' \end{bmatrix}_{(h \times k)} \Psi(1)_{(k \times k)} = \begin{bmatrix} a_1'\Psi(1) \\ a_2'\Psi(1) \\ \vdots \\ a_h'\Psi(1) \end{bmatrix} = \begin{bmatrix} 0' \\ 0' \\ \vdots \\ 0' \end{bmatrix},$$

which implies

$$a_i'\Psi(1) = \begin{bmatrix} a_{1i} & a_{2i} & \cdots & a_{ki} \end{bmatrix} \begin{bmatrix} \psi(1)_1' \\ \psi(1)_2' \\ \vdots \\ \psi(1)_k' \end{bmatrix} = \sum_{s=1}^{k} a_{si}\,\psi(1)_s' = 0_{(1 \times k)} \quad \text{for } i = 1, 2, \ldots, h, \tag{5}$$

where $a_{si}$ is the $s$th element of the row vector $a_i'$ and $\psi(1)_i'$ is the $i$th row of the matrix $\Psi(1)$. Equation (5) implies that certain linear combinations of the rows of $\Psi(1)$ are zero, meaning that the rows of $\Psi(1)$ are linearly dependent. That is, $\Psi(1)$ is a singular matrix, or equivalently, the determinant of $\Psi(1)$ is zero, i.e. $|\Psi(1)| = 0$.¹
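The singularity of $\Psi(1)$ can be verified numerically on a concrete system. The sketch below is a minimal Python illustration; the bivariate system and the value of $\gamma$ are assumptions introduced here, not taken from the text. With $y_{1t} = \gamma y_{2t} + u_{1t}$ and $y_{2t}$ a random walk, the differences satisfy $\Delta y_{1t} = (1 - L)u_{1t} + \gamma u_{2t}$ and $\Delta y_{2t} = u_{2t}$, so $\Psi(1)$ has a zero first column and $a' = (1, -\gamma)$ is a cointegrating vector:

```python
import numpy as np

# Assumed bivariate example: y_1t = gamma*y_2t + u_1t, y_2t a random walk.
# Its Wold matrix evaluated at L = 1 is Psi(1) = [[0, gamma], [0, 1]].
gamma = 0.5                          # illustrative value
Psi1 = np.array([[0.0, gamma],
                 [0.0, 1.0]])

a = np.array([1.0, -gamma])          # candidate cointegrating vector a'

print(a @ Psi1)                      # [0. 0.]    : a' Psi(1) = 0', as in (5)
print(np.linalg.det(Psi1))           # 0.0        : |Psi(1)| = 0
print(np.linalg.matrix_rank(Psi1))   # 1 (< k = 2): rows linearly dependent
```

The zero product reproduces (5), and the zero determinant confirms the singularity of $\Psi(1)$.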
This in turn means that the matrix operator $\Psi(L)$ is non-invertible.² Thus, because $\Psi(L)$ in

$$\Delta y_t - \delta = \Psi(L)\varepsilon_t$$

cannot be inverted, a cointegrated system can never be represented by a finite-order vector autoregression in the differenced data $\Delta y_t$.

2.2 Implication of Cointegration for the VAR Representation

Suppose that the level of $y_t$ can be represented as a non-stationary $p$th-order vector autoregression:³

$$y_t = c + \Phi_1 y_{t-1} + \Phi_2 y_{t-2} + \cdots + \Phi_p y_{t-p} + \varepsilon_t, \tag{6}$$

or

$$\Phi(L)y_t = c + \varepsilon_t, \tag{7}$$

where $\Phi(L) \equiv [I_k - \Phi_1 L - \Phi_2 L^2 - \cdots - \Phi_p L^p]$. Suppose that $\Delta y_t$ has the Wold representation

$$(1 - L)y_t = \delta + \Psi(L)\varepsilon_t. \tag{8}$$

Premultiplying (8) by $\Phi(L)$ results in

$$(1 - L)\Phi(L)y_t = \Phi(1)\delta + \Phi(L)\Psi(L)\varepsilon_t. \tag{9}$$

¹ Recall from Theorem 4 on page 7 of Chapter 22 that this condition violates an assumption of the spurious-regression result.
² If the determinant of an $(n \times n)$ matrix $H$ is not equal to zero, its inverse is found by dividing the adjoint by the determinant: $H^{-1} = (1/|H|)\,[(-1)^{i+j}|H_{ji}|]$.
³ This is not the only model for $I(1)$ variables; see Saikkonen and Luukkonen (1997) on infinite-order VAR and VARMA models.
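To make the objects in (6) and (7) concrete, the following sketch (a numerical illustration under assumed coefficient values, not an example from the text) builds a hypothetical bivariate VAR(1) with one unit root and evaluates $\Phi(1) = I_k - \Phi_1 - \cdots - \Phi_p$ at $p = 1$:

```python
import numpy as np

# Hypothetical bivariate VAR(1): y_t = c + Phi_1 y_{t-1} + eps_t.
# Phi_1 is chosen (illustratively) as I + alpha @ beta, which yields
# Phi(1) = I - Phi_1 = -alpha @ beta, a rank-one matrix.
alpha = np.array([[-0.3],
                  [ 0.2]])           # assumed (2 x 1) adjustment column
beta  = np.array([[1.0, -0.5]])      # assumed (1 x 2) cointegrating row
Phi_1 = np.eye(2) + alpha @ beta

print(np.linalg.eigvals(Phi_1))      # eigenvalues {1.0, 0.6}: one unit root

Phi_at_1 = np.eye(2) - Phi_1         # Phi(L) evaluated at L = 1 (p = 1)
print(Phi_at_1)                      # equals -alpha @ beta
print(np.linalg.matrix_rank(Phi_at_1))  # 1: Phi(1) is singular
```

For a stationary VAR, $\Phi(1)$ would be nonsingular; the reduced rank found here is precisely the structure that the argument below arrives at in equation (13), $\Phi(1) = BA'$.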
Substituting (7) into (9), we have

$$(1 - L)\varepsilon_t = \Phi(1)\delta + \Phi(L)\Psi(L)\varepsilon_t, \tag{10}$$

since $(1 - L)c = 0$. Now, equation (10) has to hold for all realizations of $\varepsilon_t$, which requires that

$$\Phi(1)\delta = 0 \quad \text{(a vector)} \tag{11}$$

and that $(1 - L)I_k$ and $\Phi(L)\Psi(L)$ represent identical polynomials in $L$. In particular, for $L = 1$, equation (10) implies that

$$\Phi(1)\Psi(1) = 0. \quad \text{(a matrix)} \tag{12}$$

Let $\phi_i'$ denote the $i$th row of $\Phi(1)$. Then (11) and (12) state that $\phi_i'\Psi(1) = 0'$ (a row of zeros) and $\phi_i'\delta = 0$ (a zero scalar). Recalling conditions (a) and (b) of section 2.1, this means that $\phi_i$ is a cointegrating vector. If $a_1, a_2, \ldots, a_h$ form a basis for the space of cointegrating vectors, then it must be possible to express $\phi_i$ as a linear combination of $a_1, a_2, \ldots, a_h$; that is, there exists an $(h \times 1)$ vector $b_i$ such that $\phi_i = [a_1\ a_2\ \cdots\ a_h]\,b_i$, or $\phi_i' = b_i'A'$ for $A'$ the $(h \times k)$ matrix whose $i$th row is $a_i'$. Applying this reasoning to each of the rows of $\Phi(1)$ gives

$$\Phi(1) = \begin{bmatrix} \phi_1' \\ \phi_2' \\ \vdots \\ \phi_k' \end{bmatrix} = \begin{bmatrix} b_1'A' \\ b_2'A' \\ \vdots \\ b_k'A' \end{bmatrix} = BA', \tag{13}$$

where $B$ is a $(k \times h)$ matrix. However, the matrices $A$ and $B$ are not separately identified, since for any nonsingular $(h \times h)$ matrix $\Upsilon$, the factorization $\Phi(1) = B\Upsilon^{-1}\Upsilon A' = B^*A^{*\prime}$ implies the same distribution as $\Phi(1) = BA'$. What can be determined