Using this stabilization result, a loop-shaping design technique is proposed. The proposed technique uses only the basic concept of loop-shaping methods; a robust stabilization controller for the normalized coprime factor perturbed system is then used to construct the final controller.

Chapter 17 introduces the gap metric and the ν-gap metric. The frequency domain interpretation and applications of the ν-gap metric are discussed. The controller order reduction in the gap or ν-gap metric framework is also considered.

Chapter 18 considers briefly the problems of model validation and the mixed real and complex µ analysis and synthesis.

Most computations and examples in this book are done using MATLAB. Since we shall use MATLAB as a major computational tool, it is assumed that readers have some basic working knowledge of MATLAB operations (for example, how to input vectors and matrices). We have also included in this book some brief explanations of MATLAB, Simulink®, Control System Toolbox, and µ Analysis and Synthesis Toolbox¹ commands. In particular, this book is written consistently with the µ Analysis and Synthesis Toolbox. (Robust Control Toolbox, LMI Control Toolbox, and other software packages may equally be used with this book.) Thus it is helpful for readers to have access to this toolbox. It is suggested at this point to try the following demo programs from this toolbox:

    msdemo1
    msdemo2

We shall introduce many more MATLAB commands in the subsequent chapters.

1.3 Notes and References

The original formulation of the H∞ control problem can be found in Zames [1981]. Relations have now been established between H∞ control and many other topics in control: for example, risk-sensitive control of Whittle [1990]; differential games (see Başar and Bernhard [1991]; Limebeer, Anderson, Khargonekar, and Green [1992]; Green and Limebeer [1995]); and chain-scattering representation and J-lossless factorization (Green [1992] and Kimura [1997]).
See also Zhou, Doyle, and Glover [1996] for additional discussions and references. The state-space theory of H∞ has also been carried much further, by generalizing time invariant to time varying, infinite horizon to finite horizon, and finite dimensional to infinite dimensional, and even to some nonlinear settings.

¹Simulink is a registered trademark of The MathWorks, Inc.; µ-Analysis and Synthesis is a trademark of The MathWorks, Inc. and MUSYN Inc.; Control System Toolbox, Robust Control Toolbox, and LMI Control Toolbox are trademarks of The MathWorks, Inc.
1.4 Problems

Problem 1.1 We shall solve an easy problem first. When you read a paper or a book, you often come across a statement like this: "It is easy ...". What the author really meant was one of the following: (a) it is really easy; (b) it seems to be easy; (c) it is easy for an expert; (d) the author does not know how to show it but he or she thinks it is correct.

Now prove that when I say "It is easy" in this book, I mean it is really easy. (Hint: If you can prove it after you read the whole book, ask your boss for a promotion. If you cannot prove it after you read the whole book, trash the book and write a book yourself. Remember to use something like "it is easy ..." if you are not sure what you are talking about.)
Chapter 2

Linear Algebra

Some basic linear algebra facts will be reviewed in this chapter. The detailed treatment of this topic can be found in the references listed at the end of the chapter. Hence we shall omit most proofs and provide proofs only for those results that either cannot be easily found in the standard linear algebra textbooks or are insightful to the understanding of some related problems.

2.1 Linear Subspaces

Let R denote the real scalar field and C the complex scalar field. For the interest of this chapter, let F be either R or C and let Fn be the vector space over F (i.e., Fn is either Rn or Cn). Now let x1, x2, ..., xk ∈ Fn. Then an element of the form α1x1 + ... + αkxk with αi ∈ F is a linear combination over F of x1, ..., xk. The set of all linear combinations of x1, x2, ..., xk ∈ Fn is a subspace called the span of x1, x2, ..., xk, denoted by

    span{x1, x2, ..., xk} := {x = α1x1 + ... + αkxk : αi ∈ F}.

A set of vectors x1, x2, ..., xk ∈ Fn is said to be linearly dependent over F if there exist α1, ..., αk ∈ F, not all zero, such that α1x1 + ... + αkxk = 0; otherwise the vectors are said to be linearly independent.

Let S be a subspace of Fn; then a set of vectors {x1, x2, ..., xk} ⊂ S is called a basis for S if x1, x2, ..., xk are linearly independent and S = span{x1, x2, ..., xk}. Such a basis for a subspace S is not unique, but all bases for S have the same number of elements. This number is called the dimension of S, denoted by dim(S).

A set of vectors {x1, x2, ..., xk} in Fn is mutually orthogonal if xi*xj = 0 for all i ≠ j and orthonormal if xi*xj = δij, where the superscript * denotes complex conjugate transpose and δij is the Kronecker delta function with δij = 1 for i = j and δij = 0 for i ≠ j. More generally, a collection of subspaces S1, S2, ..., Sk of Fn is mutually orthogonal if x*y = 0 whenever x ∈ Si and y ∈ Sj for i ≠ j.
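The definitions above are easy to experiment with numerically. The book's computations use MATLAB, but the same checks can be sketched in Python/NumPy; the particular vectors below are our own illustrative choices, not from the text. Linear independence is tested via the matrix rank, and an orthonormal basis plus its orthonormal completion is obtained from a full QR factorization:

```python
import numpy as np

# Three vectors in R^3, stacked as the columns of a matrix.
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 1.0])
x3 = np.array([1.0, 1.0, 2.0])   # x3 = x1 + x2, so the set is dependent
X = np.column_stack([x1, x2, x3])

# The vectors are linearly independent iff rank equals the number of vectors;
# the rank also gives dim(span{x1, x2, x3}).
rank = np.linalg.matrix_rank(X)
print(rank)                       # 2 (< 3, hence linearly dependent)

# An orthonormal basis for S = span{x1, x2} via the full QR factorization;
# the leftover column is an orthonormal completion spanning S-perp.
Q, _ = np.linalg.qr(np.column_stack([x1, x2]), mode="complete")
u1, u2 = Q[:, 0], Q[:, 1]         # orthonormal basis of S
u3 = Q[:, 2]                      # orthonormal completion

# Orthonormality check: Q*Q = I.
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```

The same checks in MATLAB would use `rank` and `qr`.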
The orthogonal complement of a subspace S ⊂ Fn is defined by

    S⊥ := {y ∈ Fn : y*x = 0 for all x ∈ S}.

We call a set of vectors {u1, u2, ..., uk} an orthonormal basis for a subspace S ⊂ Fn if the vectors form a basis of S and are orthonormal. It is always possible to extend such a basis to a full orthonormal basis {u1, u2, ..., un} for Fn. Note that in this case

    S⊥ = span{uk+1, ..., un},

and {uk+1, ..., un} is called an orthonormal completion of {u1, u2, ..., uk}.

Let A ∈ Fm×n be a linear transformation from Fn to Fm; that is, A : Fn → Fm. Then the kernel or null space of the linear transformation A is defined by

    KerA = N(A) := {x ∈ Fn : Ax = 0},

and the image or range of A is

    ImA = R(A) := {y ∈ Fm : y = Ax, x ∈ Fn}.

Let ai, i = 1, 2, ..., n, denote the columns of a matrix A ∈ Fm×n; then

    ImA = span{a1, a2, ..., an}.

A square matrix U ∈ Fn×n whose columns form an orthonormal basis for Fn is called a unitary matrix (or orthogonal matrix if F = R), and it satisfies U*U = I = UU*. Now let A = [aij] ∈ Cn×n; then the trace of A is defined as

    trace(A) := Σ_{i=1}^{n} aii.

Illustrative MATLAB Commands:

    basis_of_KerA = null(A); basis_of_ImA = orth(A); rank_of_A = rank(A);

2.2 Eigenvalues and Eigenvectors

Let A ∈ Cn×n; then the eigenvalues of A are the n roots of its characteristic polynomial p(λ) = det(λI − A). The maximal modulus of the eigenvalues is called the spectral radius, denoted by

    ρ(A) := max_{1≤i≤n} |λi|,

where λi is a root of p(λ) and, as usual, |·| denotes the magnitude. The real spectral radius of a matrix A, denoted by ρR(A), is the maximum modulus of the real eigenvalues
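As a hedged Python/NumPy analogue of the MATLAB commands `null(A)`, `orth(A)`, and `rank(A)` above (the example matrix is our own), the kernel and image bases can be read off from the singular value decomposition: the first rank(A) left singular vectors span ImA, and the remaining right singular vectors span KerA:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank-1 example (row 2 = 2 * row 1)

# rank(A) -- MATLAB: rank(A)
r = np.linalg.matrix_rank(A)

U, s, Vh = np.linalg.svd(A)

# Orthonormal basis of Im A -- MATLAB: orth(A).
im_basis = U[:, :r]

# Orthonormal basis of Ker A -- MATLAB: null(A).
ker_basis = Vh[r:, :].conj().T

print(r)                              # 1
print(ker_basis.shape[1])             # dim Ker A = 3 - 1 = 2
assert np.allclose(A @ ker_basis, 0)  # A maps the kernel to zero

# trace(A) for a square matrix is the sum of its diagonal entries:
B = np.array([[1.0, 5.0],
              [7.0, 3.0]])
print(np.trace(B))                    # 4.0
```

Note that dim KerA + dim ImA = n, consistent with the rank computed above.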
of A; that is,

    ρR(A) := max_{λi∈R} |λi|,

and ρR(A) := 0 if A has no real eigenvalues.

A nonzero vector x ∈ Cn that satisfies Ax = λx is referred to as a right eigenvector of A. Dually, a nonzero vector y is called a left eigenvector of A if y*A = λy*. In general, eigenvalues need not be real, and neither do their corresponding eigenvectors. However, if A is real and λ is a real eigenvalue of A, then there is a real eigenvector corresponding to λ. In the case that all eigenvalues of a matrix A are real, we will denote by λmax(A) the largest eigenvalue of A and by λmin(A) the smallest eigenvalue. In particular, if A is a Hermitian matrix (i.e., A = A*), then there exist a unitary matrix U and a real diagonal matrix Λ such that A = UΛU*, where the diagonal elements of Λ are the eigenvalues of A and the columns of U are the eigenvectors of A.

Lemma 2.1 Consider the Sylvester equation

    AX + XB = C,                                                  (2.1)

where A ∈ Fn×n, B ∈ Fm×m, and C ∈ Fn×m are given matrices. There exists a unique solution X ∈ Fn×m if and only if

    λi(A) + λj(B) ≠ 0, ∀ i = 1, 2, ..., n and j = 1, 2, ..., m.

In particular, if B = A*, equation (2.1) is called the Lyapunov equation, and the necessary and sufficient condition for the existence of a unique solution is that

    λi(A) + λ̄j(A) ≠ 0, ∀ i, j = 1, 2, ..., n.

Illustrative MATLAB Commands:

    [V, D] = eig(A)     % AV = VD
    X = lyap(A, B, -C)  % solving Sylvester equation

2.3 Matrix Inversion Formulas

Let A be a square matrix partitioned as follows:

    A := [ A11  A12 ]
         [ A21  A22 ],

where A11 and A22 are also square matrices. Now suppose A11 is nonsingular; then A has the following decomposition:

    [ A11  A12 ]   [  I        0 ] [ A11  0 ] [ I  A11⁻¹A12 ]
    [ A21  A22 ] = [ A21A11⁻¹  I ] [ 0    ∆ ] [ 0      I    ]

with ∆ := A22 − A21A11⁻¹A12.
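Lemma 2.1 can be checked numerically. MATLAB's `lyap(A, B, -C)` solves equation (2.1) directly; as a hedged Python/NumPy sketch (the matrices are our own example), one can instead vectorize the equation using the identity vec(AXB) = (Bᵀ ⊗ A) vec(X), so that (2.1) becomes a linear system whose coefficient matrix has eigenvalues λi(A) + λj(B), exactly the quantities in the lemma's condition:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])          # eigenvalues 1, 3
B = np.array([[4.0, 0.0],
              [1.0, 5.0]])          # eigenvalues 4, 5
C = np.array([[1.0, 0.0],
              [2.0, 1.0]])
n, m = A.shape[0], B.shape[0]

# Lemma 2.1 condition: lambda_i(A) + lambda_j(B) != 0 for all i, j.
lam_A = np.linalg.eigvals(A)
lam_B = np.linalg.eigvals(B)
assert np.all(np.abs(lam_A[:, None] + lam_B[None, :]) > 1e-12)

# Vectorize AX + XB = C: (I_m kron A + B^T kron I_n) vec(X) = vec(C),
# using column-major (Fortran-order) vec.
M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(M, C.flatten(order="F"))
X = x.reshape((n, m), order="F")

print(np.allclose(A @ X + X @ B, C))  # True
```

For large problems one would use a dedicated solver (e.g. SciPy's `solve_sylvester`) rather than the m·n-dimensional Kronecker system, but the small version above makes the solvability condition transparent.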
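The block decomposition above (with ∆ the Schur complement of A11 in A) can likewise be verified numerically. A minimal Python/NumPy sketch with our own 1×1 blocks, forming the three factors explicitly and multiplying them back:

```python
import numpy as np

# Block partition of A; A11 must be nonsingular (the values are our own example).
A11 = np.array([[2.0]])
A12 = np.array([[1.0]])
A21 = np.array([[3.0]])
A22 = np.array([[4.0]])
A = np.block([[A11, A12], [A21, A22]])

A11_inv = np.linalg.inv(A11)
Delta = A22 - A21 @ A11_inv @ A12      # Schur complement of A11 in A

I1 = np.eye(A11.shape[0])
I2 = np.eye(A22.shape[0])
Z12 = np.zeros_like(A12)
Z21 = np.zeros_like(A21)

L = np.block([[I1, Z12], [A21 @ A11_inv, I2]])   # unit lower block triangular
D = np.block([[A11, Z12], [Z21, Delta]])         # block diagonal
U = np.block([[I1, A11_inv @ A12], [Z21, I2]])   # unit upper block triangular

print(np.allclose(L @ D @ U, A))       # True: the block LDU factorization holds
```

Since L and U are unit block triangular (hence always invertible), this factorization shows that A is invertible exactly when ∆ is, which is the starting point for the inversion formulas developed in this section.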