Technical Conventions

Matrix, Vector, and Scalar Notation

Uppercase letters such as A are used to denote matrices. Lowercase letters such as x are used to denote vectors, except where noted that the variable is a scalar. For functions, the notation differs slightly to follow the usual conventions in optimization. For vector functions, we use an uppercase letter such as F in F(x). A function that returns a scalar value is denoted with a lowercase letter such as f in f(x).
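As a brief illustration of this convention (a minimal sketch, not an example from this guide; the function and file names are made up), a scalar-valued f and a vector-valued F might be written as M-file functions:

    % In a file named scalarfun.m: plays the role of f(x), returning a scalar
    function y = scalarfun(x)
    y = sum(x.^2);        % sum of squares of the components of x

    % In a file named vectorfun.m: plays the role of F(x), returning a vector
    function Y = vectorfun(x)
    Y = x.^2;             % one value per component of x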
Acknowledgments

The MathWorks would like to acknowledge these contributors:

Thomas F. Coleman researched and contributed the large-scale algorithms for constrained and unconstrained minimization, nonlinear least squares and curve fitting, constrained linear least squares, quadratic programming, and nonlinear equations.

Dr. Coleman is Professor of Computer Science and Applied Mathematics at Cornell University. He is Director of the Cornell Theory Center and the Cornell Computational Finance Institute. Dr. Coleman is Chair of the SIAM Activity Group on Optimization, and a member of the Editorial Boards of Applied Mathematics Letters, SIAM Journal of Scientific Computing, Computational Optimization and Applications, Communications on Applied Nonlinear Analysis, and Mathematical Modeling and Scientific Computing. Dr. Coleman has published 4 books and over 70 technical papers in the areas of continuous optimization and computational methods and tools for large-scale problems.

Yin Zhang researched and contributed the large-scale linear programming algorithm.

Dr. Zhang is Associate Professor of Computational and Applied Mathematics on the faculty of the Keck Center for Computational Biology at Rice University. He is on the Editorial Board of SIAM Journal on Optimization, and is Associate Editor of Journal of Optimization: Theory and Applications. Dr. Zhang has published over 40 technical papers in the areas of interior-point methods for linear programming and computational mathematical programming.
Tutorial

The Tutorial provides information on how to use the toolbox functions. It also provides examples for solving different optimization problems. It consists of these sections:

Introduction (p. 2-3)
Summarizes, in tabular form, the functions available for minimization, equation solving, and solving least-squares or data-fitting problems. It also provides basic guidelines for using the optimization routines and introduces the algorithms and line-search strategies that are available for solving medium- and large-scale problems.

Examples that Use Standard Algorithms (p. 2-7)
Presents medium-scale algorithms through a selection of minimization examples. These examples include unconstrained and constrained problems, as well as problems with and without user-supplied gradients. This section also discusses maximization, greater-than-zero constraints, passing additional arguments, and multiobjective examples.

Large-Scale Examples (p. 2-33)
Presents large-scale algorithms through a selection of large-scale examples. These examples include specifying sparsity structures and preconditioners, as well as unconstrained and constrained problems.

Default Parameter Settings (p. 2-65)
Describes the use of default parameter settings and tells you how to change them. It also tells you how to determine which parameters are used by a specified function, and provides examples of setting some commonly used parameters.

Displaying Iterative Output (p. 2-68)
Describes the column headings used in the iterative output of both medium-scale and large-scale algorithms.
Optimization of Inline Objects Instead of M-Files (p. 2-74)
Tells you how to represent a mathematical function at the command line by creating an inline object from a string expression (a brief sketch follows this list).

Typical Problems and How to Deal with Them (p. 2-76)
Provides tips to help you improve solutions found using the optimization functions, improve efficiency of the algorithms, overcome common difficulties, and transform problems that are typically not in standard form.

Converting Your Code to Version 2 Syntax (p. 2-80)
Compares a Version 1.5 call to the equivalent Version 2 call for each function. This section also describes the Version 2 calling sequences and provides a detailed example of converting from constr to its replacement fmincon.

Selected Bibliography (p. 2-92)
Lists published materials that support concepts implemented in the Optimization Toolbox.
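As a brief preview of the inline-object and parameter-setting sections (a minimal sketch; the expression, interval, and parameter values are made up, not taken from those sections), a function created from a string expression can be passed directly to an optimization routine, with selected parameters changed through optimset:

    % Create an inline object from a string expression
    f = inline('sin(x) + 3');

    % Override two commonly used parameters, leaving the rest at their defaults
    options = optimset('Display','iter','TolX',1e-6);

    % Minimize f on the interval [0, 2*pi] using the modified options
    [x,fval] = fminbnd(f, 0, 2*pi, options);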
Introduction

Optimization concerns the minimization or maximization of functions. The Optimization Toolbox consists of functions that perform minimization (or maximization) on general nonlinear functions. Functions for nonlinear equation solving and least-squares (data-fitting) problems are also provided.

This introduction includes the following sections:

• Problems Covered by the Toolbox
• Using the Optimization Functions

Problems Covered by the Toolbox

The following tables show the functions available for minimization, equation solving, and solving least-squares or data-fitting problems.

Note   The following tables list the types of problems in order of increasing complexity.

Table 2-1: Minimization

Type                         Notation                                          Function
Scalar Minimization          min f(a) over a, such that a1 < a < a2            fminbnd
Unconstrained Minimization   min f(x) over x                                   fminunc, fminsearch
Linear Programming           min f^T x over x, such that                       linprog
                             A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u
Quadratic Programming        min (1/2) x^T H x + f^T x over x, such that       quadprog
                             A·x ≤ b, Aeq·x = beq, l ≤ x ≤ u
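As a small illustration of how the notation in Table 2-1 maps onto a toolbox call (a minimal sketch with made-up problem data, not an example from this section), the linear program min f^T x subject to A·x ≤ b and l ≤ x could be solved with linprog:

    % Made-up data for a two-variable linear program
    f  = [-5; -4];          % objective coefficients (minimize f'*x)
    A  = [6 4; 1 2];        % inequality constraint matrix
    b  = [24; 6];           % inequality right-hand sides
    lb = [0; 0];            % lower bounds, l <= x

    % No equality constraints or upper bounds, so those arguments are empty
    [x,fval] = linprog(f, A, b, [], [], lb, []);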