Table 2-1: Minimization (Continued)

  Type                          Notation                                                     Function
  Constrained Minimization      min_x f(x)                                                   fmincon
                                such that c(x) ≤ 0, ceq(x) = 0,
                                A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u
  Goal Attainment               min_{x,γ} γ                                                  fgoalattain
                                such that F(x) − w·γ ≤ goal, c(x) ≤ 0, ceq(x) = 0,
                                A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u
  Minimax                       min_x max_i {F_i(x)}                                         fminimax
                                such that c(x) ≤ 0, ceq(x) = 0,
                                A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u
  Semi-Infinite Minimization    min_x f(x)                                                   fseminf
                                such that K(x,w) ≤ 0 for all w, c(x) ≤ 0, ceq(x) = 0,
                                A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u

Table 2-2: Equation Solving

  Type                                  Notation                                Function
  Linear Equations                      C·x = d, n equations, n variables       \ (slash)
  Nonlinear Equation of One Variable    f(a) = 0                                fzero
  Nonlinear Equations                   F(x) = 0, n equations, n variables      fsolve
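The entries in Table 2-2 can be tried directly from the MATLAB command line. The following sketch uses made-up data; the matrix C, the vector d, and the scalar expression are illustrative only and are not taken from the toolbox documentation:

    C = [3 1; 1 2];              % n-by-n coefficient matrix
    d = [9; 8];                  % right-hand side
    x = C\d                      % solve the linear system C*x = d with the backslash operator

    fun = inline('cos(a) - a');  % scalar nonlinear function f(a), as an inline object
    a = fzero(fun, 1)            % find a root of f(a) = 0, starting the search near a = 1

Systems of nonlinear equations F(x) = 0 are handled in the same style by fsolve, given an M-file or inline object that returns the vector F(x).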
Table 2-3: Least-Squares (Curve Fitting)

  Type                                Notation                                                 Function
  Linear Least-Squares                min_x ||C·x − d||₂²,                                     \ (slash)
                                      m equations, n variables
  Nonnegative Linear-Least-Squares    min_x ||C·x − d||₂² such that x ≥ 0                      lsqnonneg
  Constrained Linear-Least-Squares    min_x ||C·x − d||₂²                                      lsqlin
                                      such that A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u
  Nonlinear Least-Squares             min_x (1/2)||F(x)||₂² = (1/2) Σ_i F_i(x)²                lsqnonlin
                                      such that l ≤ x ≤ u
  Nonlinear Curve Fitting             min_x (1/2)||F(x,xdata) − ydata||₂²                      lsqcurvefit
                                      such that l ≤ x ≤ u

Using the Optimization Functions

Most of these optimization routines require the definition of an M-file containing the function to be minimized, i.e., the objective function. Alternatively, you can use an inline object created from a MATLAB expression. Maximization is achieved by supplying the routines with -f, where f is the function being optimized.

Optimization options passed to the routines change optimization parameters. Default optimization parameters are used extensively but can be changed through an options structure. Gradients are calculated using an adaptive finite-difference method unless they are supplied in a function. Parameters can be passed directly to functions, avoiding the need for global variables.

This guide separates “medium-scale” algorithms from “large-scale” algorithms. Medium-scale is not a standard term and is used here only to differentiate these algorithms from the large-scale algorithms, which are designed to handle large-scale problems efficiently.
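As a minimal sketch of these conventions (the function and the option values below are chosen for illustration and are not prescribed by the toolbox), maximizing a function amounts to minimizing its negative, and the options structure is built with optimset:

    % Function to maximize, g(x) = x1 - x1^2 - x2^2, written as an inline object;
    % the minimization routine receives its negative.
    negg = inline('-(x(1) - x(1)^2 - x(2)^2)', 'x');
    options = optimset('LargeScale', 'off', 'Display', 'iter');  % medium-scale, show iterations
    [x, fneg] = fminunc(negg, [0; 0], options);
    gmax = -fneg                          % the maximum of g is the negative of the minimum found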
Medium-Scale Algorithms

The Optimization Toolbox routines offer a choice of algorithms and line search strategies. The principal algorithms for unconstrained minimization are the Nelder-Mead simplex search method and the BFGS (Broyden, Fletcher, Goldfarb, and Shanno) quasi-Newton method. For constrained minimization, minimax, goal attainment, and semi-infinite optimization, variations of sequential quadratic programming (SQP) are used. Nonlinear least-squares problems use the Gauss-Newton and Levenberg-Marquardt methods. Nonlinear equation solving also uses the trust-region dogleg algorithm.

A choice of line search strategy is given for unconstrained minimization and nonlinear least-squares problems. The line search strategies use safeguarded cubic and quadratic interpolation and extrapolation methods.

Large-Scale Algorithms

All the large-scale algorithms, except linear programming, are trust-region methods. Bound constrained problems are solved using reflective Newton methods. Equality constrained problems are solved using a projective preconditioned conjugate gradient iteration. You can use sparse iterative solvers or sparse direct solvers in solving the linear systems to determine the current step. Some choice of preconditioning in the iterative solvers is also available.

The linear programming method is a variant of Mehrotra’s predictor-corrector algorithm, a primal-dual interior-point method.
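Which of the two families a routine uses is selected through the same options structure; a minimal sketch, assuming an objective M-file objfun and starting point x0 like those in the examples later in this chapter:

    options = optimset('LargeScale', 'off');   % request a medium-scale algorithm
    % options = optimset('LargeScale', 'on');  % request the large-scale counterpart, where available
    [x, fval] = fminunc(@objfun, x0, options);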
Examples that Use Standard Algorithms

This section presents the medium-scale (i.e., standard) algorithms through a tutorial. Examples similar to those in the first part of this tutorial (“Unconstrained Minimization Example” through the “Equality Constrained Example”) can also be found in the first demonstration, “Tutorial Walk Through,” in the M-file optdemo. The examples in this manual differ in that they use M-file functions for the objective functions, whereas the online demonstrations use inline objects for some functions.

Note   Medium-scale is not a standard term and is used to differentiate these algorithms from the large-scale algorithms described in “Large-Scale Algorithms” on page 4-1.

The tutorial uses the functions fminunc, fmincon, and fsolve. The other optimization routines, fgoalattain, fminimax, lsqnonlin, and fseminf, are used in a nearly identical manner, with differences only in the problem formulation and the termination criteria. The section “Multiobjective Examples” on page 2-21 discusses multiobjective optimization and gives several examples using lsqnonlin, fminimax, and fgoalattain, including how Simulink can be used in conjunction with the toolbox.

This section includes the following examples:
• Unconstrained Minimization Example
• Nonlinear Inequality Constrained Example
• Constrained Example with Bounds
• Constrained Example with Gradients
• Gradient Check: Analytic Versus Numeric
• Equality Constrained Example

It also discusses
• Maximization
• Greater-Than-Zero Constraints
• Additional Arguments: Avoiding Global Variables
• Nonlinear Equations with Analytic Jacobian
• Nonlinear Equations with Finite-Difference Jacobian
• Multiobjective Examples

Unconstrained Minimization Example

Consider the problem of finding a set of values [x1, x2] that solves

    min_x f(x) = e^x1 (4x1² + 2x2² + 4x1x2 + 2x2 + 1)        (2-1)

To solve this two-dimensional problem, write an M-file that returns the function value. Then, invoke the unconstrained minimization routine fminunc.

Step 1: Write an M-file objfun.m.

    function f = objfun(x)
    f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);

Step 2: Invoke one of the unconstrained optimization routines.

    x0 = [-1,1];                              % Starting guess
    options = optimset('LargeScale','off');
    [x,fval,exitflag,output] = fminunc(@objfun,x0,options);

After 40 function evaluations, this produces the solution

    x =
        0.5000   -1.0000

The function value at the solution x is returned in fval:

    fval =
        1.3030e-10

The exitflag tells whether the algorithm converged. An exitflag > 0 means a local minimum was found:

    exitflag =
        1

The output structure gives more details about the optimization. For fminunc, it includes the number of iterations in iterations, the number of function evaluations in funcCount, the final step size in stepsize, a measure of first-order optimality (which in this unconstrained case is the infinity norm of