Technique Overview

UNCONSTRAINED
• Steepest Descent
• Conjugate Gradient
• Quasi-Newton
• Newton

CONSTRAINED
• Simplex – linear
• SLP – linear
• SQP – nonlinear, expensive, common in engineering applications
• Exterior Penalty – nonlinear, discontinuous design spaces
• Interior Penalty – nonlinear
• Generalized Reduced Gradient – nonlinear
• Method of Feasible Directions – nonlinear
• Mixed Integer Programming
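Several of the techniques above have off-the-shelf implementations. As a minimal sketch (SciPy and the toy objective are assumptions here, not part of the lecture), the same problem can be handed to a conjugate gradient solver, a quasi-Newton (BFGS) solver, and an SQP solver (SciPy's SLSQP):

```python
import numpy as np
from scipy.optimize import minimize

# Invented Rosenbrock-like test objective (not from the slides)
J = lambda x: (x[0] - 1)**2 + 10 * (x[1] - x[0]**2)**2
x0 = np.array([-1.0, 2.0])

# Unconstrained: conjugate gradient and quasi-Newton (BFGS)
for method in ("CG", "BFGS"):
    res = minimize(J, x0, method=method)
    print(method, res.x, res.fun)

# Constrained: SLSQP is an SQP implementation that accepts
# nonlinear constraints. SciPy's "ineq" convention is fun(x) >= 0.
cons = {"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]}  # x1 + x2 <= 1
res = minimize(J, x0, method="SLSQP", constraints=[cons])
print("SLSQP", res.x, res.fun)
```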
Standard Problem Definition

$$
\begin{aligned}
\min \quad & J(\mathbf{x}) \\
\text{s.t.} \quad & g_j(\mathbf{x}) \le 0, \qquad j = 1, \dots, m_1 \\
& h_k(\mathbf{x}) = 0, \qquad k = 1, \dots, m_2 \\
& x_i^{l} \le x_i \le x_i^{u}, \quad i = 1, \dots, n
\end{aligned}
$$

• For now, we consider a single objective function, J(x).
• There are n design variables and a total of m constraints (m = m1 + m2).
• The bounds $x_i^{l} \le x_i \le x_i^{u}$ are known as side constraints.
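This standard form maps directly onto numerical solvers. A hedged sketch follows: the objective, constraints, and bounds are invented examples (the slides define only the general form), and SciPy's inequality convention is fun(x) ≥ 0, so the standard form's g_j(x) ≤ 0 is passed with a sign flip:

```python
import numpy as np
from scipy.optimize import minimize

def J(x):                      # objective J(x)
    return x[0]**2 + x[1]**2 + x[2]**2

def g1(x):                     # inequality constraint, g1(x) <= 0
    return x[0] + x[1] - 1.0

def h1(x):                     # equality constraint, h1(x) = 0
    return x[2] - 0.5

constraints = [
    {"type": "ineq", "fun": lambda x: -g1(x)},  # g1(x) <= 0  ->  -g1(x) >= 0
    {"type": "eq",   "fun": h1},                # h1(x) = 0
]
bounds = [(-2.0, 2.0)] * 3     # side constraints x_l <= x_i <= x_u

res = minimize(J, x0=np.zeros(3), method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.x)   # here n = 3 design variables, m = m1 + m2 = 2 constraints
```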
Linear vs. Nonlinear

The objective function is a linear function of the design variables if each design variable appears only to the first power with constant coefficients multiplying it.

$J(\mathbf{x}) = x_1 + 2x_2 - 3.4x_3$ is linear in $\mathbf{x} = [x_1 \; x_2 \; x_3]^T$

$J(\mathbf{x}) = x_1 x_2 + 2x_2 - 3.4x_3$ is nonlinear in $\mathbf{x}$

$J(\mathbf{x}) = \cos(x_1) + 2x_2 - 3.4x_3$ is nonlinear in $\mathbf{x}$
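The distinction can be checked numerically: a function with no constant term is linear exactly when it satisfies superposition, J(a·u + b·v) = a·J(u) + b·J(v). A quick sketch (the random test points are assumptions, not from the slides) applies this test to the first two objectives above:

```python
import numpy as np

J_lin    = lambda x: x[0] + 2*x[1] - 3.4*x[2]          # linear objective
J_nonlin = lambda x: x[0]*x[1] + 2*x[1] - 3.4*x[2]     # nonlinear (x1*x2 term)

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)  # arbitrary test points
a, b = 2.0, -0.7                                       # arbitrary scalars

for J in (J_lin, J_nonlin):
    lhs = J(a*u + b*v)
    rhs = a*J(u) + b*J(v)
    print(np.isclose(lhs, rhs))   # True for J_lin, False for J_nonlin
```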
Linear vs. Nonlinear

A constraint is a linear function of the design variables if each design variable appears only to the first power with constant coefficients multiplying it.

$6x_1 - x_2 + 2x_3 \le 10$ is linear in $\mathbf{x}$

$6x_1 - x_2^2 + 2x_3 \le 10$ is nonlinear in $\mathbf{x}$

$6x_1 - \sin(x_2) + 2x_3 \le 10$ is nonlinear in $\mathbf{x}$
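One practical consequence of this distinction: a linear constraint has a constant gradient, while a nonlinear constraint's gradient changes with the design point. A short sketch (the evaluation points are assumptions) illustrates this for the first two constraints above:

```python
import numpy as np

# Gradient of 6x1 - x2 + 2x3 (linear): constant everywhere
grad_linear    = lambda x: np.array([6.0, -1.0, 2.0])
# Gradient of 6x1 - x2^2 + 2x3 (nonlinear): depends on x2
grad_nonlinear = lambda x: np.array([6.0, -2.0 * x[1], 2.0])

for x in (np.array([0.0, 1.0, 0.0]), np.array([0.0, 3.0, 0.0])):
    print(grad_linear(x), grad_nonlinear(x))
# The linear gradient is identical at both points; the nonlinear one is not.
```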
Iterative Optimization Procedures

Most optimization algorithms are iterative:

$$\mathbf{x}^q = \mathbf{x}^{q-1} + \alpha^q \mathbf{S}^q$$

where
q = iteration number
S = vector search direction
α = scalar distance
and the initial solution x^0 is given.

The algorithm determines the search direction S according to some criteria. Gradient-based algorithms use gradient information to decide where to move.
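A minimal sketch of this iteration follows, using the steepest-descent choice S = −∇J and a fixed step α; both choices and the quadratic test objective are assumptions, since the slide leaves the direction rule and step rule open:

```python
import numpy as np

def J(x):
    return x[0]**2 + 5 * x[1]**2          # simple quadratic test objective

def grad_J(x):
    return np.array([2 * x[0], 10 * x[1]])  # analytic gradient of J

x = np.array([2.0, 1.0])                  # initial solution x^0 (given)
alpha = 0.05                              # scalar step distance alpha

for q in range(1, 101):                   # q = iteration number
    S = -grad_J(x)                        # search direction from gradient info
    x = x + alpha * S                     # x^q = x^(q-1) + alpha * S^q
    if np.linalg.norm(S) < 1e-6:          # stop once the gradient vanishes
        break

print(q, x, J(x))                         # converges toward the minimum at [0, 0]
```

In practice, α is usually chosen by a line search at each iteration rather than held fixed; the fixed step here keeps the sketch short.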