If you do not set these parameters to 'on' in the options structure, fmincon does not use the analytic gradients.

The arguments lb and ub place lower and upper bounds on the independent variables in x. In this example, there are no bound constraints, so they are both set to [].

Step 3: Invoke constrained optimization routine.

x0 = [-1,1];            % Starting guess
options = optimset('LargeScale','off');
options = optimset(options,'GradObj','on','GradConstr','on');
lb = [ ]; ub = [ ];     % No upper or lower bounds
[x,fval] = fmincon(@objfungrad,x0,[],[],[],[],lb,ub,...
                   @confungrad,options)
[c,ceq] = confungrad(x) % Check the constraint values at x

After 20 function evaluations, the solution produced is

x =
   -9.5474    1.0474
fval =
    0.0236
c =
  1.0e-14 *
    0.1110
   -0.1776
ceq =
     []

Gradient Check: Analytic Versus Numeric

When analytically determined gradients are provided, you can compare the supplied gradients with a set calculated by finite-difference evaluation. This is particularly useful for detecting mistakes in either the objective function or the gradient function formulation.

If you want such gradient checks, set the DerivativeCheck parameter to 'on' using optimset:

options = optimset(options,'DerivativeCheck','on');

The first cycle of the optimization checks the analytically determined gradients (of the objective function and, if they exist, the nonlinear constraints). If they do not match the finite-differencing gradients within a given tolerance, a warning message indicates the discrepancy and gives the option to abort the optimization or to continue.
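The same comparison is easy to perform by hand. The following minimal sketch (not part of the toolbox) checks the analytic gradient returned by objfungrad from the earlier example against a central finite-difference estimate; the test point and the step size h are illustrative choices:

x = [-1;1];                      % point at which to check the gradient
[f,G] = objfungrad(x);           % analytic gradient from the M-file
h = 1e-6;                        % illustrative finite-difference step
Gfd = zeros(size(x));
for i = 1:length(x)
   e = zeros(size(x)); e(i) = h;
   Gfd(i) = (objfungrad(x+e) - objfungrad(x-e))/(2*h);
end
max(abs(G(:) - Gfd(:)))          % discrepancy; should be near zero

A large discrepancy usually points to a sign error or a missing term in the hand-coded gradient.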
Equality Constrained Example

For routines that permit equality constraints, nonlinear equality constraints must be computed in the M-file with the nonlinear inequality constraints. For linear equalities, the coefficients of the equalities are passed in through the matrix Aeq and the right-hand-side vector beq.

For example, if you have the nonlinear equality constraint $x_1^2 + x_2 = 1$ and the nonlinear inequality constraint $x_1 x_2 \ge -10$, rewrite them as

$$x_1^2 + x_2 - 1 = 0$$
$$-x_1 x_2 - 10 \le 0$$

and then solve the problem using the following steps.

Step 1: Write an M-file objfun.m.

function f = objfun(x)
f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);

Step 2: Write an M-file confuneq.m for the nonlinear constraints.

function [c, ceq] = confuneq(x)
% Nonlinear inequality constraints
c = -x(1)*x(2) - 10;
% Nonlinear equality constraints
ceq = x(1)^2 + x(2) - 1;

Step 3: Invoke constrained optimization routine.

x0 = [-1,1];           % Make a starting guess at the solution
options = optimset('LargeScale','off');
[x,fval] = fmincon(@objfun,x0,[],[],[],[],[],[],...
                   @confuneq,options)
[c,ceq] = confuneq(x)  % Check the constraint values at x

After 21 function evaluations, the solution produced is

x =
   -0.7529    0.4332
fval =
    1.5093
c =
   -9.6739
ceq =
  4.0684e-010

Note that ceq is equal to 0 within the default tolerance on the constraints of 1.0e-006 and that c is less than or equal to zero, as desired.
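The linear equality case mentioned at the start of this section takes a different route: it bypasses the constraint M-file entirely. As a purely hypothetical illustration (the constraint $x_1 + 2x_2 = 1$ is not part of the original problem), such an equality would be supplied through Aeq and beq:

Aeq = [1 2];   % coefficients of the hypothetical equality x1 + 2*x2 = 1
beq = 1;       % its right-hand side
x0 = [-1,1];
options = optimset('LargeScale','off');
[x,fval] = fmincon(@objfun,x0,[],[],Aeq,beq,[],[],...
                   @confuneq,options)

Nonlinear constraints still come from confuneq; only the linear equality moves into the matrix arguments.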
Maximization

The optimization functions fminbnd, fminsearch, fminunc, fmincon, fgoalattain, fminimax, lsqcurvefit, and lsqnonlin all perform minimization of the objective function $f(x)$. Maximization is achieved by supplying the routines with $-f(x)$. Similarly, to achieve maximization for quadprog supply -H and -f, and for linprog supply -f.

Greater-Than-Zero Constraints

The Optimization Toolbox assumes that nonlinear inequality constraints are of the form $C_i(x) \le 0$. Greater-than-zero constraints are expressed as less-than-zero constraints by multiplying them by -1. For example, a constraint of the form $C_i(x) \ge 0$ is equivalent to the constraint $-C_i(x) \le 0$; a constraint of the form $C_i(x) \ge b$ is equivalent to the constraint $-C_i(x) + b \le 0$.
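These two conventions often appear together. The following sketch works through an invented problem (it does not appear in the original text): maximize $f(x) = 2x_1 + 3x_2 - x_1^2 - x_2^2$ subject to $x_1 x_2 \ge 1$, by minimizing $-f(x)$ and rewriting the constraint as $-x_1 x_2 + 1 \le 0$:

% maxobj.m -- the negated objective, so that fmincon maximizes f
function f = maxobj(x)
f = -(2*x(1) + 3*x(2) - x(1)^2 - x(2)^2);

% maxcon.m -- x1*x2 >= 1 rewritten in the required <= 0 form
function [c,ceq] = maxcon(x)
c = -x(1)*x(2) + 1;
ceq = [];

% At the command line
x0 = [1,1];
options = optimset('LargeScale','off');
[x,fval] = fmincon(@maxobj,x0,[],[],[],[],[],[],@maxcon,options)

Because the objective was negated, -fval is the maximum value of the original f.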
Additional Arguments: Avoiding Global Variables

You can pass parameters that would otherwise have to be declared as global directly to M-file functions, using additional arguments at the end of the calling sequence. For example, entering a number of variables at the end of the call to fsolve

[x,fval] = fsolve(@objfun,x0,options,P1,P2,...)

passes the arguments directly to the function objfun when it is called from inside fsolve:

F = objfun(x,P1,P2, ...)

Consider, for example, finding zeros of the function ellipj(u,m). The function needs parameter m as well as input u. To look for a zero near u = 3, for m = 0.5, enter

m = 0.5;
options = optimset('Display','off');  % Turn off Display
x = fsolve(@ellipj,3,options,m)

which returns

x =
    3.7081

Then evaluate ellipj at the solution; the result is close to zero, as expected:

f = ellipj(x,m)

f =
  -2.9925e-008

The call to optimset to get the default options for fsolve implies that default tolerances are used and that analytic gradients are not provided.
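A minimal sketch of what such a parameterized M-file can look like (the name paramfun and the formula are hypothetical, invented for illustration):

function F = paramfun(x,P1,P2)
% Hypothetical system: P1 and P2 arrive through the extra arguments
% appended to the fsolve call, so no global variables are needed.
F = P1*x.^2 - P2;

% At the command line: solves P1*x^2 = P2, here 2*x^2 = 8
x = fsolve(@paramfun,1,optimset('Display','off'),2,8)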
Nonlinear Equations with Analytic Jacobian

This example demonstrates the use of the default medium-scale fsolve algorithm. It is intended for problems where

• The system of nonlinear equations is square, i.e., the number of equations equals the number of unknowns.
• There exists a solution $x$ such that $F(x) = 0$.

The example uses fsolve to obtain the minimum of the banana (or Rosenbrock) function by deriving and then solving an equivalent system of nonlinear equations. The Rosenbrock function, which has a minimum at $F(x) = 0$, is a common test problem in optimization. It has a high degree of nonlinearity and converges extremely slowly if you try to use steepest descent type methods. It is given by

$$f(x) = 100\left(x_2 - x_1^2\right)^2 + \left(1 - x_1\right)^2$$

First generalize this function to an n-dimensional function, for any positive, even value of n:

$$f(x) = \sum_{i=1}^{n/2} \left[ 100\left(x_{2i} - x_{2i-1}^2\right)^2 + \left(1 - x_{2i-1}\right)^2 \right]$$

This function is referred to as the generalized Rosenbrock function. It consists of n squared terms involving n unknowns.

Before you can use fsolve to find the values of $x$ such that $F(x) = 0$, i.e., obtain the minimum of the generalized Rosenbrock function, you must rewrite the function as the following equivalent system of nonlinear equations:

$$\begin{aligned} F(1) &= 1 - x_1 \\ F(2) &= 10\left(x_2 - x_1^2\right) \\ F(3) &= 1 - x_3 \\ F(4) &= 10\left(x_4 - x_3^2\right) \\ &\;\;\vdots \\ F(n-1) &= 1 - x_{n-1} \\ F(n) &= 10\left(x_n - x_{n-1}^2\right) \end{aligned}$$

This system is square, and you can use fsolve to solve it. As the example demonstrates, this system has a unique solution given by $x_i = 1$, $i = 1, \ldots, n$.

Step 1: Write an M-file bananaobj.m to compute the objective function values and the Jacobian.

function [F,J] = bananaobj(x)
% Evaluate the vector function and the Jacobian matrix for
% the system of nonlinear equations derived from the general
% n-dimensional Rosenbrock function.
% Get the problem size
n = length(x);
if n == 0, error('Input vector, x, is empty.'); end
if mod(n,2) ~= 0
   error('Input vector, x, must have an even number of components.');
end
% Evaluate the vector function
odds  = 1:2:n;
evens = 2:2:n;
F = zeros(n,1);
F(odds,1)  = 1 - x(odds);
F(evens,1) = 10.*(x(evens) - x(odds).^2);
% Evaluate the Jacobian matrix if it is requested
if nargout > 1
   c = -ones(n/2,1);    C = sparse(odds,odds,c,n,n);
   d = 10*ones(n/2,1);  D = sparse(evens,evens,d,n,n);
   e = -20.*x(odds);    E = sparse(evens,odds,e,n,n);
   J = C + D + E;
end
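A natural next step is to pass bananaobj to fsolve. The following is a minimal sketch of such a call; the problem size, starting point, and option settings are illustrative assumptions, not taken from the original text:

n = 64;                               % an assumed problem size
x0 = -ones(n,1);  x0(2:2:n) = 1;      % an assumed starting guess
options = optimset('Jacobian','on');  % have fsolve use the analytic J
[x,F,exitflag] = fsolve(@bananaobj,x0,options)

With 'Jacobian' set to 'on', fsolve uses the J returned by bananaobj rather than approximating the Jacobian by finite differences; at the solution, every component of x should be 1.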