Ch. 18 Vector Time Series

1 Introduction

In dealing with economic variables, the value of one variable is often related not only to its own predecessors in time but also to past values of other variables. This naturally extends the concept of a univariate stochastic process to vector time series analysis. This chapter describes the dynamic interactions among a set of variables collected in a $(k \times 1)$ vector $y_t$.

Definition:
Let $(S, \mathcal{F}, P)$ be a probability space and $T$ an index set of real numbers, and define the $k$-dimensional vector function $y(\cdot, \cdot)$ by $y(\cdot, \cdot): S \times T \rightarrow \mathbb{R}^k$. The ordered sequence of random vectors $\{y(\cdot, t),\ t \in T\}$ is called a vector stochastic process.

1.1 First Two Moments of a Stationary Vector Time Series

From now on in this chapter we follow the convention of writing $y_t$ instead of $y(\cdot, t)$ to indicate that we are considering a discrete vector time series. The first two moments of a vector time series $y_t$ are
$$E(y_t) = \mu_t \quad \text{and} \quad \Gamma_{t,j} = E[(y_t - \mu_t)(y_{t-j} - \mu_{t-j})'] \quad \text{for all } t \in T.$$
If neither $\mu_t$ nor $\Gamma_{t,j}$ is a function of $t$, that is, $\mu_t = \mu$ and $\Gamma_{t,j} = \Gamma_j$, then we say that $y_t$ is a covariance-stationary vector process. Note that although $\gamma_j = \gamma_{-j}$ for a scalar stationary process, the same is not true of a vector process: $\Gamma_j \neq \Gamma_{-j}$. Instead, the correct relation is $\Gamma_j' = \Gamma_{-j}$, since
$$\Gamma_j = E[(y_{t+j} - \mu)(y_{(t+j)-j} - \mu)'] = E[(y_{t+j} - \mu)(y_t - \mu)'],$$
and taking transposes,
$$\Gamma_j' = E[(y_t - \mu)(y_{t+j} - \mu)'] = E[(y_t - \mu)(y_{t-(-j)} - \mu)'] = \Gamma_{-j}.$$
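The asymmetry can be made concrete numerically. The following sketch (assuming NumPy; the bivariate process, in which the first noise series feeds the second variable with a one-period lag, is a hypothetical example, not one from the text) estimates $\Gamma_1$ and $\Gamma_{-1}$ from a long simulation:

```python
# Numerical check that Gamma_1 != Gamma_{-1} while Gamma_1' = Gamma_{-1}.
# The process y_{1t} = eps_{1t}, y_{2t} = eps_{1,t-1} + eps_{2t} is invented
# for illustration: y1 leads y2, so the cross-covariances are one-sided.
import numpy as np

rng = np.random.default_rng(0)
T = 200_000
eps = rng.standard_normal((T, 2))

y = np.empty((T - 1, 2))
y[:, 0] = eps[1:, 0]                 # y_{1t} = eps_{1t}
y[:, 1] = eps[:-1, 0] + eps[1:, 1]   # y_{2t} = eps_{1,t-1} + eps_{2t}

def sample_gamma(y, j):
    """Sample autocovariance (1/n) * sum_t (y_t - ybar)(y_{t-j} - ybar)'."""
    yc = y - y.mean(axis=0)
    n = len(yc)
    if j >= 0:
        return yc[j:].T @ yc[:n - j] / n
    return yc[:n + j].T @ yc[-j:] / n   # computed directly, not via transpose

print(np.round(sample_gamma(y, 1), 2))    # ~ [[0, 0], [1, 0]]
print(np.round(sample_gamma(y, -1), 2))   # ~ [[0, 1], [0, 0]] = Gamma_1'
```

Here the lagged influence of $\varepsilon_{1}$ on $y_2$ shows up in the $(2,1)$ element of $\Gamma_1$ but in the $(1,2)$ element of $\Gamma_{-1}$, which is exactly the transpose relation derived above.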
1.2 Vector White Noise Process

Definition:
A $k \times 1$ vector process $\{\varepsilon_t,\ t \in T\}$ is said to be a white-noise process if
(i). $E(\varepsilon_t) = 0$;
(ii). $E(\varepsilon_t \varepsilon_\tau') = \begin{cases} \Omega & \text{if } t = \tau \\ 0 & \text{if } t \neq \tau, \end{cases}$
where $\Omega$ is a $(k \times k)$ symmetric positive definite matrix. It is important to note that in general $\Omega$ need not be a diagonal matrix; it is precisely this contemporaneous correlation among the variables that creates the need for vector time series analysis.

1.3 Vector MA(q) Process

A vector moving average process of order $q$ takes the form
$$y_t = \mu + \varepsilon_t + \Theta_1 \varepsilon_{t-1} + \Theta_2 \varepsilon_{t-2} + \cdots + \Theta_q \varepsilon_{t-q},$$
where $\varepsilon_t$ is a vector white noise process and $\Theta_j$ denotes a $(k \times k)$ matrix of MA coefficients for $j = 1, 2, \ldots, q$. The mean of $y_t$ is $\mu$, and the variance is
$$\begin{aligned}
\Gamma_0 &= E[(y_t - \mu)(y_t - \mu)'] \\
&= E[\varepsilon_t \varepsilon_t'] + \Theta_1 E[\varepsilon_{t-1} \varepsilon_{t-1}'] \Theta_1' + \Theta_2 E[\varepsilon_{t-2} \varepsilon_{t-2}'] \Theta_2' + \cdots + \Theta_q E[\varepsilon_{t-q} \varepsilon_{t-q}'] \Theta_q' \\
&= \Omega + \Theta_1 \Omega \Theta_1' + \Theta_2 \Omega \Theta_2' + \cdots + \Theta_q \Omega \Theta_q',
\end{aligned}$$
with autocovariances (compare with $\gamma_j$ of Ch. 14, p. 3)
$$\Gamma_j = \begin{cases}
\Theta_j \Omega + \Theta_{j+1} \Omega \Theta_1' + \Theta_{j+2} \Omega \Theta_2' + \cdots + \Theta_q \Omega \Theta_{q-j}' & \text{for } j = 1, 2, \ldots, q, \\
\Omega \Theta_{-j}' + \Theta_1 \Omega \Theta_{-j+1}' + \Theta_2 \Omega \Theta_{-j+2}' + \cdots + \Theta_{q+j} \Omega \Theta_q' & \text{for } j = -1, -2, \ldots, -q, \\
0 & \text{for } |j| > q,
\end{cases}$$
where $\Theta_0 = I_k$. Thus any vector MA(q) process is covariance-stationary.
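For $q = 1$ these formulas reduce to $\Gamma_0 = \Omega + \Theta_1 \Omega \Theta_1'$ and $\Gamma_1 = \Theta_1 \Omega$, which can be verified by simulation. The following is a minimal sketch assuming NumPy; the particular $\Theta_1$ and $\Omega$ are arbitrary illustrative choices, not values from the text:

```python
# Check the vector MA(1) moment formulas against sample moments:
# Gamma_0 = Omega + Theta_1 Omega Theta_1'  and  Gamma_1 = Theta_1 Omega.
import numpy as np

rng = np.random.default_rng(1)
theta1 = np.array([[0.5, 0.2],
                   [-0.3, 0.4]])
omega = np.array([[1.0, 0.4],     # non-diagonal: contemporaneous correlation
                  [0.4, 2.0]])

T = 500_000
eps = rng.multivariate_normal(np.zeros(2), omega, size=T)
y = eps[1:] + eps[:-1] @ theta1.T          # y_t = eps_t + Theta_1 eps_{t-1}

yc = y - y.mean(axis=0)
gamma0_hat = yc.T @ yc / len(yc)
gamma1_hat = yc[1:].T @ yc[:-1] / len(yc)  # pairs (y_t, y_{t-1})

print(np.round(gamma0_hat - (omega + theta1 @ omega @ theta1.T), 2))  # ~ 0
print(np.round(gamma1_hat - theta1 @ omega, 2))                       # ~ 0
```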
1.4 Vector MA(∞) Process

The vector MA(∞) process is written
$$y_t = \mu + \varepsilon_t + \Psi_1 \varepsilon_{t-1} + \Psi_2 \varepsilon_{t-2} + \cdots,$$
where $\varepsilon_t$ is a vector white noise process and $\Psi_j$ denotes a $(k \times k)$ matrix of MA coefficients.

Definition:
For an $(n \times m)$ matrix $H$, the sequence of matrices $\{H_s\}_{s=-\infty}^{\infty}$ is absolutely summable if each of its elements forms an absolutely summable scalar sequence.

Example:
If $\psi_{ij}^{(s)}$ denotes the row $i$, column $j$ element of the moving average parameter matrix $\Psi_s$ associated with lag $s$, then the sequence $\{\Psi_s\}_{s=0}^{\infty}$ is absolutely summable if
$$\sum_{s=0}^{\infty} |\psi_{ij}^{(s)}| < \infty \quad \text{for } i = 1, 2, \ldots, k \text{ and } j = 1, 2, \ldots, k.$$
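As a concrete instance of the definition, suppose the MA matrices decay geometrically, say $\Psi_s = \rho^s C$ for some fixed $(k \times k)$ matrix $C$ and $|\rho| < 1$ (an illustrative choice, not from the text). Then
$$\sum_{s=0}^{\infty} |\psi_{ij}^{(s)}| = |c_{ij}| \sum_{s=0}^{\infty} |\rho|^s = \frac{|c_{ij}|}{1 - |\rho|} < \infty \quad \text{for all } i, j,$$
so $\{\Psi_s\}_{s=0}^{\infty}$ is absolutely summable. Geometric decay of this kind is what a stationary VAR produces in its MA(∞) representation (see Section 2).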
Theorem:
Let
$$y_t = \mu + \varepsilon_t + \Psi_1 \varepsilon_{t-1} + \Psi_2 \varepsilon_{t-2} + \cdots,$$
where $\varepsilon_t$ is a vector white noise process and $\{\Psi_l\}_{l=0}^{\infty}$ is absolutely summable. Let $y_{it}$ denote the $i$th element of $y_t$, and let $\mu_i$ denote the $i$th element of $\mu$. Then
(a). the autocovariance between the $i$th variable at time $t$ and the $j$th variable $s$ periods earlier, $E(y_{it} - \mu_i)(y_{j,t-s} - \mu_j)$, exists and is given by the row $i$, column $j$ element of
$$\Gamma_s = \sum_{v=0}^{\infty} \Psi_{s+v} \Omega \Psi_v' \quad \text{for } s = 0, 1, 2, \ldots;$$
(b). the sequence of matrices $\{\Gamma_s\}_{s=0}^{\infty}$ is absolutely summable.

Proof:
(a). By definition $\Gamma_s = E(y_t - \mu)(y_{t-s} - \mu)'$, or
$$\begin{aligned}
\Gamma_s &= E\big\{[\varepsilon_t + \Psi_1 \varepsilon_{t-1} + \Psi_2 \varepsilon_{t-2} + \cdots + \Psi_s \varepsilon_{t-s} + \Psi_{s+1} \varepsilon_{t-s-1} + \cdots] \\
&\qquad \times [\varepsilon_{t-s} + \Psi_1 \varepsilon_{t-s-1} + \Psi_2 \varepsilon_{t-s-2} + \cdots]'\big\} \\
&= \Psi_s \Omega \Psi_0' + \Psi_{s+1} \Omega \Psi_1' + \Psi_{s+2} \Omega \Psi_2' + \cdots \\
&= \sum_{v=0}^{\infty} \Psi_{s+v} \Omega \Psi_v' \quad \text{for } s = 0, 1, 2, \ldots,
\end{aligned}$$
since $\varepsilon_t$ is white noise, so only contemporaneous noise terms have nonzero cross-products (here $\Psi_0 = I_k$). The row $i$, column $j$ element of $\Gamma_s$ is therefore the autocovariance between the $i$th variable at time $t$ and the $j$th variable $s$ periods earlier, $E(y_{it} - \mu_i)(y_{j,t-s} - \mu_j)$.

(b).
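A sketch of the argument, using only part (a) and the absolute summability of $\{\Psi_l\}_{l=0}^{\infty}$: from (a), the row $i$, column $j$ element of $\Gamma_s$ is bounded in absolute value by
$$|\gamma_{ij}^{(s)}| \le \sum_{v=0}^{\infty} \sum_{a=1}^{k} \sum_{b=1}^{k} |\psi_{ia}^{(s+v)}|\,|\omega_{ab}|\,|\psi_{jb}^{(v)}|.$$
Summing over $s$ and writing $\bar{\omega} = \max_{a,b} |\omega_{ab}|$,
$$\sum_{s=0}^{\infty} |\gamma_{ij}^{(s)}| \le \bar{\omega} \sum_{a=1}^{k} \sum_{b=1}^{k} \left( \sum_{l=0}^{\infty} |\psi_{ia}^{(l)}| \right) \left( \sum_{v=0}^{\infty} |\psi_{jb}^{(v)}| \right) < \infty,$$
since each inner sum is finite by the absolute summability of $\{\Psi_l\}$. Hence every element of $\{\Gamma_s\}_{s=0}^{\infty}$ forms an absolutely summable scalar sequence.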
2 Vector Autoregressive Process, VAR

A $p$th-order vector autoregression, denoted VAR(p), is written as
$$y_t = c + \Phi_1 y_{t-1} + \Phi_2 y_{t-2} + \cdots + \Phi_p y_{t-p} + \varepsilon_t, \tag{1}$$
where $c$ denotes a $(k \times 1)$ vector of constants, $\Phi_j$ a $(k \times k)$ matrix of autoregressive coefficients for $j = 1, 2, \ldots, p$, and $\varepsilon_t$ is a vector white noise process.

2.1 Population Characteristics

Let $c_i$ denote the $i$th element of the vector $c$ and let $\phi_{ij}^{(s)}$ denote the row $i$, column $j$ element of the matrix $\Phi_s$; then the first row of the vector system in (1) specifies that
$$\begin{aligned}
y_{1t} = c_1 &+ \phi_{11}^{(1)} y_{1,t-1} + \phi_{12}^{(1)} y_{2,t-1} + \cdots + \phi_{1k}^{(1)} y_{k,t-1} \\
&+ \phi_{11}^{(2)} y_{1,t-2} + \phi_{12}^{(2)} y_{2,t-2} + \cdots + \phi_{1k}^{(2)} y_{k,t-2} \\
&+ \cdots + \phi_{11}^{(p)} y_{1,t-p} + \phi_{12}^{(p)} y_{2,t-p} + \cdots + \phi_{1k}^{(p)} y_{k,t-p} + \varepsilon_{1t}.
\end{aligned}$$
Thus, a vector autoregression is a system in which each variable is regressed on a constant and $p$ of its own lags as well as on $p$ lags of each of the other $(k - 1)$ variables in the VAR. Note that each regression has the same explanatory variables.

Using lag operator notation, (1) can be written in the form
$$[I_k - \Phi_1 L - \Phi_2 L^2 - \cdots - \Phi_p L^p] y_t = c + \varepsilon_t$$
or
$$\Phi(L) y_t = c + \varepsilon_t. \tag{2}$$
Here $\Phi(L)$ indicates a $k \times k$ matrix polynomial in the lag operator $L$. The row $i$, column $j$ element of $\Phi(L)$ is a scalar polynomial in $L$:
$$\Phi(L)_{ij} = \delta_{ij} - \phi_{ij}^{(1)} L - \phi_{ij}^{(2)} L^2 - \cdots - \phi_{ij}^{(p)} L^p,$$
where $\delta_{ij}$ is unity if $i = j$ and zero otherwise.
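Because every equation shares the same explanatory variables (a constant plus $p$ lags of all $k$ variables), the whole system can be estimated with a single least-squares call, one column of coefficients per equation. The following sketch assumes NumPy; the bivariate VAR(1) and its coefficient values are invented for illustration, not taken from the text:

```python
# Simulate a bivariate VAR(1) as in equation (1) with p = 1, then recover
# c and Phi_1 by equation-by-equation OLS on the shared regressor matrix.
import numpy as np

rng = np.random.default_rng(2)
c = np.array([0.5, -0.2])
phi1 = np.array([[0.6, 0.1],
                 [0.2, 0.3]])   # eigenvalues ~ 0.66, 0.24: inside unit circle

T = 100_000
eps = rng.multivariate_normal(np.zeros(2), [[1.0, 0.3], [0.3, 1.0]], size=T)
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = c + phi1 @ y[t - 1] + eps[t]   # y_t = c + Phi_1 y_{t-1} + eps_t

# Shared regressors X_t = [1, y_{t-1}']; one lstsq call fits both equations.
X = np.column_stack([np.ones(T - 1), y[:-1]])
B, *_ = np.linalg.lstsq(X, y[1:], rcond=None)   # ((1 + kp) x k) coefficients

print(np.round(B.T, 2))   # row i ~ [c_i, phi_i1, phi_i2] of equation i
```

Each row of the printed matrix is one regression of the system, matching the first-row equation written out above.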