Given f : X → R, define the bordered Hessian matrix Bf(x):

    fi ≡ ∂f(x)/∂xi,   fij ≡ ∂²f(x)/∂xi∂xj,

    Bf(x) ≡ [ 0    f1    ···  fn  ]
            [ f1   f11   ···  f1n ]
            [ ...  ...   ...  ... ]
            [ fn   fn1   ···  fnn ]

Theorem 2.6. Given convex X ⊂ R^n, twice differentiable f : X → R, and the principal minors b1(x), ..., bn+1(x) of Bf(x),

1. For X ⊂ R^n_+, f is quasi-convex ⇒ bk(x) ≤ 0, ∀ x ∈ X, ∀ k.
2. For X ⊂ R^n_+, f is quasi-concave ⇒ (−1)^k bk(x) ≤ 0, ∀ x ∈ X, ∀ k.
3. For X = R^n_+ or R^n: bk(x) < 0, ∀ x ∈ X, ∀ k ≥ 2 ⇒ f is strictly quasi-convex.
4. For X = R^n_+ or R^n: (−1)^k bk(x) < 0, ∀ x ∈ X, ∀ k ≥ 2 ⇒ f is strictly quasi-concave.

Example 2.8. For f(x, y) = x^α + y^β, defined on R^2_++, where α, β ≥ 0, f is
• quasi-concave, if 0 ≤ α, β ≤ 1;
• strictly quasi-concave, if 0 < α, β ≤ 1 and (α ≠ 1 or β ≠ 1).

Example 2.9. For the Cobb-Douglas function f(x, y) = x^α y^β, defined on R^2_++, where α, β ≥ 0, f is
• quasi-concave, if α, β ≥ 0;
• strictly quasi-concave, if α, β > 0.
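As a numerical sanity check on Theorem 2.6 applied to Example 2.9, the sketch below evaluates the principal minors b2 and b3 of the bordered Hessian of the Cobb-Douglas function in closed form and verifies the alternating-sign pattern (−1)^k bk ≤ 0 for quasi-concavity. The exponents and the test point are illustrative choices, not taken from the text.

```python
# Quasi-concavity check for f(x, y) = x**a * y**b (Example 2.9)
# via the sign conditions of Theorem 2.6 on the bordered Hessian.

def bordered_minors(a, b, x, y):
    """Return the principal minors b2, b3 of Bf(x, y) for f = x^a * y^b."""
    f1 = a * x**(a - 1) * y**b              # df/dx
    f2 = b * x**a * y**(b - 1)              # df/dy
    f11 = a * (a - 1) * x**(a - 2) * y**b   # d2f/dx2
    f22 = b * (b - 1) * x**a * y**(b - 2)   # d2f/dy2
    f12 = a * b * x**(a - 1) * y**(b - 1)   # d2f/dxdy
    b2 = -f1**2                             # det of the top-left 2x2 block
    # det of the full 3x3 bordered Hessian, expanded along its first row:
    b3 = -f1 * (f1 * f22 - f12 * f2) + f2 * (f1 * f12 - f11 * f2)
    return b2, b3

# Illustrative exponents a = b = 1/2 and test point (2, 3) in R^2_++
# (assumptions for this sketch):
b2, b3 = bordered_minors(0.5, 0.5, x=2.0, y=3.0)

# Quasi-concavity requires (-1)^2 * b2 <= 0 and (-1)^3 * b3 <= 0:
assert b2 <= 0 and -b3 <= 0
```

Note that b2 = −f1² is automatically non-positive, so for n = 2 the binding condition is b3 ≥ 0; with α, β > 0 one in fact gets b3 > 0, matching the strict case of the example.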
4. Unconstrained Optimization

See Sydsæter (2005, Chapter 3) and Chiang (1984, Chapter 9).

If there is a neighborhood Nr(x*) of x* with radius r such that x* is the maximum point of f on Nr(x*), then x* is a local maximum point of f.

For f : R^n → R, if Df(x̂) = 0, call x̂ or (x̂, f(x̂)) a stationary point, and f(x̂) the stationary value. Given a stationary point x̂, there are three possible situations at x̂: a local maximum point, a local minimum point, or an inflection point.

Example 2.10. Compare y = x^2 with y = x^3 at x = 0.

Theorem 2.7 (Extreme-Value Theorem). For continuous f : R^n → R and compact A ⊂ R^n, max_{x∈A} f(x) has at least one solution.

Theorem 2.8. Let A ⊂ R^n.

(a) If x* is an interior solution of max_{x∈A} f(x), then (FOC) Df(x*) = 0 and (SONC) D²f(x*) ≤ 0.
(b) If Df(x*) = 0 and (SOSC) D²f(x*) < 0, then ∃ Nr(x*) s.t. x* is the maximum point of f on Nr(x*).
(c) If f is concave on A, any point x* ∈ A satisfying Df(x*) = 0 is a maximum point.
(d) If f is strictly quasi-concave, a local maximum over a convex set A is the unique global maximum.

Note: the FOC and SONC are not necessary for corner solutions; they are also not sufficient for local maximization, even for interior points.

Example 2.11. Find a maximum point for f(x1, x2) = x2 − 4x1² + 3x1x2 − x2².
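Example 2.11 can be worked by hand: the FOC gives −8x1 + 3x2 = 0 and 1 + 3x1 − 2x2 = 0, so (x1*, x2*) = (3/7, 8/7), and the Hessian [[−8, 3], [3, −2]] is negative definite, so Theorem 2.8(b) applies. A short sketch (using only the example itself) confirms the arithmetic in exact rational form:

```python
from fractions import Fraction

# Example 2.11: f(x1, x2) = x2 - 4*x1**2 + 3*x1*x2 - x2**2.
# FOC: df/dx1 = -8*x1 + 3*x2 = 0,  df/dx2 = 1 + 3*x1 - 2*x2 = 0.
# Substituting x2 = 8*x1/3 into the second equation gives x1 = 3/7:
x1 = Fraction(3, 7)
x2 = Fraction(8, 7)
assert -8 * x1 + 3 * x2 == 0          # first FOC holds
assert 1 + 3 * x1 - 2 * x2 == 0       # second FOC holds

# SOSC: the Hessian [[-8, 3], [3, -2]] has negative diagonal and
# positive determinant, hence is negative definite, so (3/7, 8/7)
# is a maximum point (global, since f is concave).
det = (-8) * (-2) - 3 * 3
assert det > 0 and -8 < 0

f_star = x2 - 4 * x1**2 + 3 * x1 * x2 - x2**2
print(f_star)  # 4/7
```

Because f is a concave quadratic, Theorem 2.8(c) already guarantees that the unique stationary point is the global maximum.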
5. Constrained Optimization

See Sydsæter (2005, Chapter 3) and Chiang (1984, Chapters 12 and 21).

Theorem 2.9 (Lagrange). For f : R^n → R, G : R^n → R^m, consider the problem

    max_{x∈R^n} f(x)   s.t. G(x) = 0.

Let L(λ, x) ≡ f(x) + λ · G(x) (the Lagrange function).

• If x* is a solution and DG(x*) has full rank, then ∃ λ ∈ R^m (the Lagrange multiplier) s.t.

    FOC:  DxL(λ, x*) = 0,
    SONC: h′D²f(x*)h ≤ 0, ∀ h satisfying DGi(x*)h = 0, ∀ i.

• If the FOC is satisfied, G(x*) = 0, G is quasi-concave, and

    SOSC: h′D²f(x*)h < 0, for h ≠ 0 satisfying DG(x*)h = 0,

then we have a unique local maximum.

Example 2.12. For a > 0 and b > 0, consider

    F(a, b) ≡ max_{x1, x2} −ax1² − bx2²   s.t. x1 + x2 = 1.

Theorem 2.10. Let A ∈ R^{n×n} be symmetric, let C ∈ R^{m×n} have full rank with m < n, and let b1, ..., b_{m+n} be the principal minors of

    B ≡ [ 0   C ]
        [ C′  A ]

Then,

1. x′Ax > 0 for x ≠ 0 satisfying Cx = 0 ⟺ (−1)^m bk > 0, ∀ k ≥ 2m + 1.
2. x′Ax < 0 for x ≠ 0 satisfying Cx = 0 ⟺ (−1)^{m+k} bk > 0, ∀ k ≥ 2m + 1.
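For Example 2.12, the Lagrangian L = −ax1² − bx2² + λ(x1 + x2 − 1) gives the FOC −2ax1 + λ = 0 and −2bx2 + λ = 0, so ax1 = bx2, and with x1 + x2 = 1 this yields x1* = b/(a+b), x2* = a/(a+b) and F(a, b) = −ab/(a+b). The sketch below (the parameter values are illustrative assumptions) verifies the closed form against a brute-force search along the constraint and checks the sign condition of Theorem 2.10:

```python
# Example 2.12: F(a, b) = max -a*x1**2 - b*x2**2  s.t. x1 + x2 = 1.

def solve(a, b):
    """Closed-form solution from the Lagrange FOC a*x1 = b*x2."""
    x1 = b / (a + b)
    x2 = a / (a + b)
    return x1, x2, -a * x1**2 - b * x2**2

a, b = 2.0, 3.0                       # illustrative parameters, a, b > 0
x1, x2, F = solve(a, b)
assert abs(F - (-a * b / (a + b))) < 1e-12   # F(a, b) = -ab/(a+b)

# Brute-force check: F dominates the objective along x1 + x2 = 1.
grid = [i / 1000 for i in range(-1000, 2001)]
assert all(F >= -a * t**2 - b * (1 - t)**2 - 1e-12 for t in grid)

# Theorem 2.10 with A = diag(-2a, -2b), C = (1, 1), m = 1, n = 2:
# b3 = det [[0, 1, 1], [1, -2a, 0], [1, 0, -2b]] = 2a + 2b, and
# (-1)^(m+k) * bk = b3 > 0 for k = 3, so x'Ax < 0 on {Cx = 0}
# (the constrained objective is strictly concave at the solution).
b3 = -1 * (-2 * b - 0) + 1 * (0 + 2 * a)
assert b3 == 2 * a + 2 * b > 0
```

Here k runs only over k = 2m + 1 = 3, so the single bordered determinant b3 > 0 settles the second-order condition.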