Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS
by A. Megretski

Lecture 7: Finding Lyapunov Functions¹

¹ Version of September 26, 2003

This lecture gives an introduction to basic methods for finding Lyapunov functions and storage functions for given dynamical systems.

7.1 Convex search for storage functions

The set of all real-valued functions of the system state which do not increase along system trajectories is convex, i.e. closed under the operations of addition and multiplication by a positive constant. This serves as a basis for a general procedure of searching for Lyapunov functions or storage functions.

7.1.1 Linearly parameterized storage function candidates

Consider a system model given by the discrete time state space equations

    x(t + 1) = f(x(t), w(t)),   y(t) = g(x(t), w(t)),                         (7.1)

where x(t) ∈ X ⊂ R^n is the system state, w(t) ∈ W ⊂ R^m is the system input, y(t) ∈ Y ⊂ R^k is the system output, and f : X × W → X, g : X × W → Y are given functions. A functional V : X → R is a storage function for system (7.1) with supply rate σ : Y × W → R if

    V(x(t + 1)) − V(x(t)) ≤ σ(y(t), w(t))                                     (7.2)

for every solution of (7.1), i.e. if

    V(f(x̄, w̄)) − V(x̄) ≤ σ(g(x̄, w̄), w̄)   for all x̄ ∈ X, w̄ ∈ W.              (7.3)

In particular, when σ ≡ 0, this yields the definition of a Lyapunov function.
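When the sets X and W are finite, condition (7.3) can be verified by direct enumeration. The following minimal Python sketch (not part of the original notes; the function name and the toy data are illustrative) spells out that check; f, g, sigma and V are user-supplied callables.

    def is_storage_function(V, f, g, sigma, X, W):
        """True iff V(f(x,w)) - V(x) <= sigma(g(x,w), w) for all x in X, w in W."""
        return all(V(f(x, w)) - V(x) <= sigma(g(x, w), w)
                   for x in X for w in W)

    # Tiny illustration with made-up data: x(t+1) = x(t) & w(t) on X = W = {0, 1},
    # y = x, V(x) = x, supply rate sigma(y, w) = w.
    print(is_storage_function(V=lambda x: x, f=lambda x, w: x & w,
                              g=lambda x, w: x, sigma=lambda y, w: w,
                              X=[0, 1], W=[0, 1]))   # prints True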
Finding, for a given supply rate, a valid storage function (or at least proving that one exists) is a major challenge in the constructive analysis of nonlinear systems. The most common approach is based on considering a linearly parameterized subset 𝒱 of storage function candidates defined by

    𝒱 = { V(x̄) = τ1 V1(x̄) + ... + τN VN(x̄) },                                 (7.4)

where {Vq} is a fixed set of basis functions, and the τq are parameters to be determined. Here every element of 𝒱 is considered as a storage function candidate, and one wants to set up an efficient search for the values of τq which yield a function V satisfying (7.3).

Example 7.1 Consider the finite state automaton defined by equations (7.1) with value sets

    X = {1, 2, 3},   W = {0, 1},   Y = {0, 1},

and with dynamics defined by

    f(1, 1) = 2,  f(2, 1) = 3,  f(3, 1) = 1,  f(1, 0) = 1,  f(2, 0) = 2,  f(3, 0) = 2,
    g(1, 1) = 1,  g(x̄, w̄) = 0  for all (x̄, w̄) ≠ (1, 1).

In order to show that the number of 1's in the output is never much larger than one third of the number of 1's in the input, one can try to find a storage function V with supply rate σ(ȳ, w̄) = w̄ − 3ȳ. Taking three basis functions V1, V2, V3 defined by

    Vk(x̄) = 1 if x̄ = k,   Vk(x̄) = 0 if x̄ ≠ k,

the conditions imposed on τ1, τ2, τ3 can be written as the set of six affine inequalities (7.3), two of which (with (x̄, w̄) = (1, 0) and (x̄, w̄) = (2, 0)) are satisfied automatically, while the other four are

    τ2 − τ3 ≤ 0    at (x̄, w̄) = (3, 0),
    τ2 − τ1 ≤ −2   at (x̄, w̄) = (1, 1),
    τ3 − τ2 ≤ 1    at (x̄, w̄) = (2, 1),
    τ1 − τ3 ≤ 1    at (x̄, w̄) = (3, 1).

Solutions of this linear program are given by

    τ1 = c,   τ2 = c − 2,   τ3 = c − 1,

where c ∈ R is arbitrary.
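These four inequalities form a small feasibility linear program in (τ1, τ2, τ3). As an illustration (not part of the original notes, and assuming SciPy is available), the following sketch solves it numerically and applies the normalization discussed in the next paragraph.

    import numpy as np
    from scipy.optimize import linprog

    # The four nontrivial inequalities of Example 7.1, written as A @ tau <= b
    # for tau = (tau1, tau2, tau3); there is nothing to optimize, so the cost
    # vector is zero.  The box bounds just keep the LP well behaved.
    A = np.array([[ 0.,  1., -1.],   # tau2 - tau3 <=  0  at (x, w) = (3, 0)
                  [-1.,  1.,  0.],   # tau2 - tau1 <= -2  at (x, w) = (1, 1)
                  [ 0., -1.,  1.],   # tau3 - tau2 <=  1  at (x, w) = (2, 1)
                  [ 1.,  0., -1.]])  # tau1 - tau3 <=  1  at (x, w) = (3, 1)
    b = np.array([0., -2., 1., 1.])

    res = linprog(c=np.zeros(3), A_ub=A, b_ub=b, bounds=[(-10., 10.)] * 3)
    tau = res.x - res.x.min()    # normalize so that the minimum value is zero
    print(tau)                   # approximately [2. 0. 1.]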
It is customary to normalize storage and Lyapunov functions so that their minimum equals zero, which yields c = 2 and

    τ1 = 2,   τ2 = 0,   τ3 = 1.

Now, summing the inequalities (7.2) over t = 0, ..., T − 1 yields

    3 [ y(0) + ... + y(T − 1) ] ≤ V(x(0)) − V(x(T)) + [ w(0) + ... + w(T − 1) ],

which implies the desired relation between the numbers of 1's in the input and in the output, since V(x(0)) − V(x(T)) cannot be larger than 2.

7.1.2 Storage functions via cutting plane algorithms

The possibility of reducing the search for a valid storage function to convex optimization, as demonstrated by the example above, is a general trend. One general situation in which an efficient search for a storage function can be performed is when a cheap procedure for checking condition (7.3) (an "oracle") is available.

Assume that for every given element V ∈ 𝒱 it is possible to find out whether condition (7.3) is satisfied, and, when the answer is negative, to produce a pair of vectors x̄ ∈ X, w̄ ∈ W for which the inequality in (7.3) does not hold. Select a sufficiently large set T0 (a polytope or an ellipsoid) in the space of the parameter vector τ = (τ1, ..., τN) (this set will limit the search for a valid storage function). Let τ* be the "center" of T0. Define V by the parameter vector τ*, and apply the verification "oracle" to it. If V is a valid storage function, the search ends successfully. Otherwise, the "invalidity certificate" (x̄, w̄) produced by the oracle yields a hyperplane separating τ* from the (unknown) set of τ defining valid storage functions, thus cutting a substantial portion from the search set T0 and reducing it to a smaller set T1. Now re-define τ* as the center of T1 and repeat the process, constructing a sequence of monotonically decreasing search sets Tk, until either a valid storage function is found or Tk shrinks to nothing.

With an appropriate selection of the class of search sets Tk (ellipsoids or polytopes are most frequently used) and with an adequate definition of a "center" (the so-called "analytical center" is used for polytopes), the volume of Tk can be made exponentially decreasing, which constitutes fast convergence of the search algorithm. A simple version of this iteration, applied to the automaton of Example 7.1, is sketched below.
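The following Python sketch (not from the original notes, and assuming SciPy is available) runs such a cutting plane iteration on the automaton of Example 7.1. For simplicity it uses the Chebyshev center of the current polytope, computed by a small auxiliary linear program, rather than the analytical center, and the oracle simply enumerates all (x̄, w̄) pairs.

    import numpy as np
    from scipy.optimize import linprog

    # Automaton data from Example 7.1.
    f = {(1, 1): 2, (2, 1): 3, (3, 1): 1, (1, 0): 1, (2, 0): 2, (3, 0): 2}
    g = lambda x, w: 1 if (x, w) == (1, 1) else 0
    sigma = lambda y, w: w - 3 * y

    def oracle(tau):
        """Return a violated affine cut (a, b), meaning a @ tau <= b, or None."""
        for (x, w), xn in f.items():
            rhs = sigma(g(x, w), w)
            if tau[xn - 1] - tau[x - 1] > rhs + 1e-7:
                a = np.zeros(3)
                a[xn - 1] += 1.0
                a[x - 1] -= 1.0
                return a, rhs
        return None

    # Start from the box -10 <= tau_k <= 10 and add cuts until the center is valid.
    A_ub = np.vstack([np.eye(3), -np.eye(3)])
    b_ub = np.full(6, 10.0)
    for _ in range(20):
        # Chebyshev center: maximize r subject to a_i @ tau + r * ||a_i|| <= b_i.
        norms = np.linalg.norm(A_ub, axis=1, keepdims=True)
        res = linprog(c=[0, 0, 0, -1], A_ub=np.hstack([A_ub, norms]), b_ub=b_ub,
                      bounds=[(None, None)] * 3 + [(0, None)])
        tau = res.x[:3]
        cut = oracle(tau)
        if cut is None:
            print("valid storage function found:", tau - tau.min())
            break
        A_ub = np.vstack([A_ub, cut[0]])
        b_ub = np.append(b_ub, cut[1])

Each invalidity certificate contributes one affine inequality on τ, so on this example the loop terminates after a handful of cuts.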
7.1.3 Completion of squares

The success of the search procedure described in the previous section depends heavily on the choice of the basis functions Vq. A major difficulty to overcome is verification of (7.3) for a given V. It turns out that the only known large linear space of functionals F : R^n → R which admits an efficient check of non-negativity of its elements is the set of quadratic forms

    F(x̄) = [x̄; 1]' Q [x̄; 1],   Q = Q',

for which non-negativity is equivalent to positive semidefiniteness of the coefficient matrix Q.

This observation is exploited in the linear-quadratic case, when f, g are linear functions

    f(x̄, w̄) = Ax̄ + Bw̄,   g(x̄, w̄) = Cx̄ + Dw̄,

and σ is a quadratic form

    σ(x̄, w̄) = [x̄; w̄]' Σ [x̄; w̄].

Then it is natural to consider only quadratic storage function candidates V(x̄) = x̄'Px̄, and the corresponding dissipation inequality (in the continuous time case ẋ = Ax̄ + Bw̄, cf. Section 7.2.1) transforms into the (symmetric) matrix inequality

    [ PA + A'P   PB ]
    [    B'P      0 ]  ≤  Σ.                                                  (7.5)

Since this inequality is linear with respect to its parameters P and Σ, it can be solved relatively efficiently even when additional linear constraints are imposed on P and Σ; a numerical illustration is given at the end of this section.

Note that a quadratic functional is non-negative if and only if it can be represented as a sum of squares of linear functionals. The idea of checking non-negativity of a functional by trying to represent it as a sum of squares of functions from a given linear set can be used in searching for storage functions of general nonlinear systems as well. Indeed, let Ĥ : R^n × R^m → R^M and V̂ : R^n → R^N be arbitrary vector-valued functions. For every τ ∈ R^N, condition (7.3) with V(x̄) = τ'V̂(x̄) is implied by the identity

    τ'V̂(f(x̄, w̄)) − τ'V̂(x̄) + Ĥ(x̄, w̄)' S Ĥ(x̄, w̄) = σ(g(x̄, w̄), w̄)   for all x̄ ∈ X, w̄ ∈ W,     (7.6)

as long as S = S' ≥ 0 is a positive semidefinite symmetric matrix. Note that both the storage function candidate parameter τ and the "sum of squares" parameter S = S' ≥ 0 enter constraint (7.6) linearly. Thus, the search for a valid storage function is reduced to a semidefinite program. In practice, the scalar components of the vector Ĥ should include enough elements so that identity (7.6) can be achieved for every τ ∈ R^N by choosing an appropriate S = S' (not necessarily positive semidefinite). For example, if f, g, σ are polynomials, it may be a good idea to use a polynomial V̂ and to define Ĥ as the vector of monomials up to a given degree.
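As an illustration of (7.5) as a semidefinite program (not part of the original notes), the following sketch searches for P with CVXPY, using hypothetical system data and the L2-gain supply rate σ(x̄, w̄) = γ²|w̄|² − |Cx̄|²; it assumes CVXPY with an SDP-capable solver is installed.

    import numpy as np
    import cvxpy as cp

    # Hypothetical data (not from the notes): xdot = A x + B w, y = C x, and
    # the supply rate sigma(x, w) = gamma^2 |w|^2 - |C x|^2.
    A = np.array([[0., 1.], [-2., -3.]])
    B = np.array([[0.], [1.]])
    C = np.array([[1., 0.]])
    gamma = 1.0
    n, m = B.shape

    # sigma(x, w) = [x; w]' Sigma [x; w]
    Sigma = np.block([[-C.T @ C, np.zeros((n, m))],
                      [np.zeros((m, n)), gamma ** 2 * np.eye(m)]])

    # Search for P = P' >= 0 such that the block matrix of (7.5) is <= Sigma.
    P = cp.Variable((n, n), symmetric=True)
    M = cp.bmat([[P @ A + A.T @ P, P @ B],
                 [B.T @ P, np.zeros((m, m))]])
    prob = cp.Problem(cp.Minimize(0), [M << Sigma, P >> 0])
    prob.solve()
    print(prob.status, np.round(P.value, 3))

For this particular data the problem is feasible (the transfer function C(sI − A)⁻¹B has H-infinity norm 1/2 < γ), so a valid quadratic storage function should be reported.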
7.2 Storage functions with quadratic supply rates

As described in the previous section, one can search for storage functions by considering linearly parameterized sets of storage function candidates. It turns out that storage functions derived for subsystems of a given system can serve as convenient building blocks (i.e. as the components Vq of V̂).

Indeed, assume that the Vq = Vq(x(t)) are storage functions with supply rates σq = σq(z(t)). Typically, z(t) includes x(t) as a component, and has some additional elements, such as inputs, outputs, and other nonlinear combinations of system states and inputs. If the objective is to find a storage function VΣ with a given supply rate σΣ, one can search for VΣ in the form

    VΣ(x(t)) = τ1 V1(x(t)) + ... + τN VN(x(t)),   τq ≥ 0,                       (7.7)

where the τq are the search parameters. Note that in this case it is known a priori that every VΣ in (7.7) is a storage function with supply rate

    σ(z(t)) = τ1 σ1(z(t)) + ... + τN σN(z(t)).                                   (7.8)

Therefore, in order to find a storage function with supply rate σΣ = σΣ(z(t)), it is sufficient to find τq ≥ 0 such that

    τ1 σ1(z̄) + ... + τN σN(z̄) ≤ σΣ(z̄)   for all z̄.                             (7.9)

When σΣ and the σq are generic functions, even this simplified task can be difficult. However, in the important special case when σΣ and the σq are quadratic functionals, the search for τq in (7.9) becomes a semidefinite program (a small numerical sketch is given at the end of this lecture). In this section, the use of storage functions with quadratic supply rates is discussed.

7.2.1 Storage functions for LTI systems

A quadratic form V(x̄) = x̄'Px̄ is a storage function for the LTI system

    ẋ = Ax + Bw                                                                  (7.10)

with quadratic supply rate

    σ(x̄, w̄) = [x̄; w̄]' Σ [x̄; w̄]

if and only if the matrix inequality (7.5) is satisfied. The well-known Kalman-Popov-Yakubovich Lemma, or positive real lemma, gives a useful frequency domain condition for the existence of such a P = P' for given A, B, Σ.
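When the supply rates are quadratic forms σq(z̄) = z̄'Σq z̄, condition (7.9) is simply the matrix inequality τ1Σ1 + ... + τNΣN ≤ Σ*, where Σ* is the matrix of σΣ, with τq ≥ 0, i.e. a semidefinite program in τ. The sketch below (not from the original notes; the matrices are illustrative placeholders) sets this up with CVXPY, assuming an SDP-capable solver is installed.

    import numpy as np
    import cvxpy as cp

    # Quadratic supply rates sigma_q(z) = z' Sigma_q z for z = (x, w) in R^2.
    # Illustrative placeholder data, not taken from the notes.
    Sigmas = [np.array([[-1., 0.], [0., 1.]]),   # a gain-type supply rate
              np.array([[0., 1.], [1., 0.]])]    # a passivity-type supply rate
    Sigma_target = np.array([[-0.5, 0.5], [0.5, 1.0]])

    # Find tau_q >= 0 with tau_1 Sigma_1 + tau_2 Sigma_2 <= Sigma_target.
    tau = cp.Variable(len(Sigmas), nonneg=True)
    lhs = sum(tau[q] * Sigmas[q] for q in range(len(Sigmas)))
    prob = cp.Problem(cp.Minimize(0), [lhs << Sigma_target])
    prob.solve()
    print(prob.status, tau.value)

If the solver reports a feasible (optimal) status, the corresponding combination of the subsystem storage functions, as in (7.7), is a storage function with supply rate σΣ.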