Department of Combinatorics and Optimization, The University of Waterloo
The Fields Institute, Toronto

Numerical Solution of Optimization Problems

Winter 2002

Prerequisites

Background material from linear algebra and multivariable calculus is used throughout this course. We will review some of it in class as needed, but it is a good idea to refresh your knowledge beforehand. In addition, you should be familiar with Matlab.

Lectures


Class 1

  1. Introduction (Chapter 1)
    1. Mathematical Formulation: example, level sets
    2. Definitions: local and global optima
    3. Definitions: convexity
  2. Fundamentals of Unconstrained Optimization (Chapter 2)
    1. Example of nonlinear least squares data fitting
    2. Definitions: gradient, Hessian, Taylor's Theorem
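As a concrete illustration of the gradient/Taylor's Theorem material, an analytic gradient can be checked against a central finite-difference approximation (a standard consequence of Taylor's theorem). A minimal NumPy sketch; the Rosenbrock test function is an illustrative choice, not part of the course notes:

```python
import numpy as np

def f(x):
    # Rosenbrock test function (illustrative choice, not from the syllabus)
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad_f(x):
    # Analytic gradient of f
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

def fd_grad(f, x, h=1e-6):
    # Central-difference approximation: error is O(h^2) by Taylor's theorem
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

x0 = np.array([-1.2, 1.0])
err = np.linalg.norm(grad_f(x0) - fd_grad(f, x0))
```

The discrepancy `err` should be tiny (dominated by truncation and rounding error), which is a quick sanity check before handing a gradient to an optimization routine.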
Class 2, C&O 466/666

Class 2

  1. Fundamentals of Unconstrained Optimization, cont... (Chapter 2)
    1. Recognizing Solutions
      1. Definitions: gradient, Hessian, Taylor's Theorem, order notation (big and little O)
      2. directional derivative, curvature, linear model, direction of steepest descent
      3. first and second order necessary optimality conditions
      4. second order sufficient optimality conditions
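The second-order conditions above can be verified numerically at a candidate point: the gradient should vanish and the Hessian should be positive definite. A minimal sketch, with hypothetical example functions (a strict minimum and a saddle) chosen for illustration:

```python
import numpy as np

def check_second_order(grad, hess, x, tol=1e-8):
    # Second-order sufficient conditions at a candidate point x:
    # the gradient vanishes and the Hessian is positive definite.
    stationary = np.linalg.norm(grad(x)) < tol
    pos_def = np.linalg.eigvalsh(hess(x)).min() > 0.0
    return bool(stationary and pos_def)

# Hypothetical examples, both with a stationary point at the origin:
is_min = check_second_order(lambda x: np.array([2.0 * x[0], 4.0 * x[1]]),
                            lambda x: np.diag([2.0, 4.0]),
                            np.zeros(2))      # f(x, y) = x^2 + 2 y^2: minimum
is_saddle = check_second_order(lambda x: np.array([2.0 * x[0], -2.0 * x[1]]),
                               lambda x: np.diag([2.0, -2.0]),
                               np.zeros(2))   # f(x, y) = x^2 - y^2: saddle
```

The saddle fails the test because its Hessian is indefinite, even though the gradient vanishes — exactly the gap between the necessary and sufficient conditions.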

Class 3

  1. Overview of Algorithms
    1. Importance of convex functions and sets for global optima
    2. line search and trust region methods
      1. line search: steepest descent, Newton's method (scale invariance)

Class 4

  1. Overview of Algorithms continued...
    1. quasi-Newton methods, trust region methods (comparisons)
    2. scaling
    3. rates of convergence
    4. sparsity considerations

Class 5

  1. Line Search Methods (Chapter 3): iterations of the form x_{k+1} = x_k + alpha_k p_k, where p_k is the search direction and alpha_k is the step length
    1. Step Length
      1. Wolfe conditions (geometric interpretations)
          sufficient decrease: step is not too large; curvature condition: step is not too small
      2. Lemma 3.1 (existence of step lengths)
    2. Convergence of Line Search Methods (global, rates)
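A backtracking line search enforcing the sufficient-decrease (first Wolfe) condition can be sketched in a few lines; backtracking handles the "step not too small" side implicitly by starting from a full step, so the curvature condition is not checked explicitly here. The test function is a hypothetical example:

```python
import numpy as np

def backtracking(f, grad, x, p, alpha=1.0, rho=0.5, c1=1e-4):
    # Shrink alpha until the sufficient-decrease (Armijo) condition
    #   f(x + alpha p) <= f(x) + c1 * alpha * grad(x)^T p
    # holds; p must be a descent direction (grad(x)^T p < 0).
    fx, gp = f(x), grad(x) @ p
    while f(x + alpha * p) > fx + c1 * alpha * gp:
        alpha *= rho
    return alpha

f = lambda x: x @ x                    # hypothetical test function ||x||^2
grad = lambda x: 2.0 * x
x = np.array([2.0, -1.0])
p = -grad(x)                           # steepest-descent direction
a = backtracking(f, grad, x, p)
```

On this quadratic the full step alpha = 1 overshoots, and one halving (alpha = 0.5) lands exactly on the minimizer, satisfying the condition.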

Class 6

  1. Line Search Methods continued... (Chapter 3)
    1. Rate of Convergence
      1. Theorems 3.3, 3.4 (linear convergence rate for steepest descent; dependence on the condition number of the Hessian)
      2. Theorem 3.5 (Dennis-Moré condition: a quasi-Newton method's Hessian approximation needs (asymptotic) convergence in the search direction for superlinear convergence)
      3. Theorem 3.7 (quadratic convergence of Newton's method near the optimum)
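The quadratic convergence of Theorem 3.7 is easy to observe numerically: near the minimizer, the error is roughly squared at every Newton step. A one-dimensional sketch on the (hypothetical, strictly convex) function f(x) = x^2 - ln(x), whose minimizer is x* = 1/sqrt(2):

```python
import math

# Newton's method on f(x) = x^2 - ln(x); f'(x) = 2x - 1/x vanishes at
# x* = 1/sqrt(2), and f''(x) = 2 + 1/x^2 > 0 for all x > 0.
xstar = 1.0 / math.sqrt(2.0)
x = 1.0
errs = []
for _ in range(5):
    fp = 2.0 * x - 1.0 / x          # f'(x)
    fpp = 2.0 + 1.0 / x**2          # f''(x)
    x = x - fp / fpp                # Newton step
    errs.append(abs(x - xstar))
```

The recorded errors drop roughly as e, e^2, e^4, ... until machine precision is reached — the signature of quadratic convergence.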

Class 7

  1. Discussion of the derivative as a linear operator
  2. Line Search Methods continued... (Chapter 3)
    1. Theorem 3.7 (quadratic convergence of Newton's method near the optimum)
  3. Trust Region Methods (Chapter 4)
    1. basic outline

Class 8

  1. Trust Region Subproblem
    1. secular equation, optimality conditions, scaling, Cauchy point
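The Cauchy point — the minimizer of the quadratic model along the steepest-descent direction, restricted to the trust region — has a closed form. A minimal sketch (the particular g, B, and radius below are a hypothetical example):

```python
import numpy as np

def cauchy_point(g, B, Delta):
    # Cauchy point for the trust-region subproblem
    #   min  g^T p + 0.5 p^T B p   subject to  ||p|| <= Delta.
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    if gBg <= 0.0:
        tau = 1.0                                  # model decreases along -g all the way to the boundary
    else:
        tau = min(gnorm**3 / (Delta * gBg), 1.0)   # unconstrained minimizer, capped at the boundary
    return -tau * (Delta / gnorm) * g

g = np.array([1.0, 0.0])
B = np.eye(2)
p = cauchy_point(g, B, Delta=2.0)
```

Here the unconstrained minimizer along -g lies inside the radius, so the Cauchy point is p = (-1, 0); with a smaller Delta it would sit on the trust-region boundary.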

Class 9

  1. Conjugate Direction Methods
    1. Solving Ax = b for A positive definite; least-squares problems.
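The conjugate-direction idea can be demonstrated directly: given any set of A-conjugate directions, exact steps along them solve Ax = b in n iterations. A sketch using the eigenvectors of A as one convenient conjugate set (the 2x2 system is a hypothetical example):

```python
import numpy as np

# Solve A x = b, A symmetric positive definite, with A-conjugate directions
# p_i (p_i^T A p_j = 0 for i != j); orthogonal eigenvectors of A qualify.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

_, P = np.linalg.eigh(A)          # columns of P are A-conjugate
x = np.zeros(2)
r = b.copy()                      # residual b - A x at x = 0
for i in range(2):
    p = P[:, i]
    alpha = (r @ p) / (p @ A @ p) # exact step along p
    x = x + alpha * p
    r = b - A @ x
```

After n = 2 steps the residual is zero (up to rounding): each step removes the component of the error along one conjugate direction and never disturbs the others.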

Class 10

  1. Conjugate Gradient Methods
    1. Solving Ax = b for A positive definite. Practical form. Rates of convergence (clustering of eigenvalues). Preconditioning.
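The practical form of the conjugate gradient method — which generates the conjugate directions on the fly from residuals, one matrix-vector product per iteration — fits in a few lines. A minimal sketch (the small test system is a hypothetical example):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxit=None):
    # Practical CG for A x = b, A symmetric positive definite.
    n = len(b)
    x = np.zeros(n)
    r = b.copy()                  # residual b - A x
    p = r.copy()                  # first search direction: steepest descent
    rs = r @ r
    for _ in range(maxit or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # exact step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # beta = rs_new / rs keeps p A-conjugate
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n steps, and in practice far sooner when the eigenvalues of A are clustered — which is what preconditioning tries to arrange.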

Class 11

  1. Conjugate Direction Methods cont...
    1. Gauss-Newton method for nonlinear least squares: properties of GN direction, Theorem 10.1, convergence rate
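A Gauss-Newton iteration for a small data-fitting problem can be sketched directly: each step solves the normal equations J^T J p = -J^T r built from the residual Jacobian. The exponential model, synthetic data, and step-halving safeguard below are illustrative assumptions, not taken from the course notes:

```python
import numpy as np

def model(c, t):
    # hypothetical two-parameter model y ≈ c[0] * exp(c[1] * t)
    return c[0] * np.exp(c[1] * t)

t = np.linspace(0.0, 1.0, 8)
y = model(np.array([2.0, -1.0]), t)       # synthetic, noise-free data

c = np.array([1.5, -0.5])                 # starting guess
for _ in range(50):
    r = model(c, t) - y                   # residual vector
    # analytic Jacobian of the residuals with respect to (c1, c2)
    J = np.column_stack([np.exp(c[1] * t), c[0] * t * np.exp(c[1] * t)])
    p = np.linalg.solve(J.T @ J, -J.T @ r)    # Gauss-Newton direction
    a = 1.0                               # crude step halving as a safeguard
    while np.linalg.norm(model(c + a * p, t) - y) > np.linalg.norm(r) and a > 1e-10:
        a *= 0.5
    c = c + a * p
```

Because the data are noise-free (a zero-residual problem), Gauss-Newton behaves like Newton's method near the solution and recovers the true parameters essentially exactly.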

Class 12

  1. Nonlinear Equations
    1. Newton's Method, derivation using linearization, Theorem 11.2, convergence rate, Inexact Newton method, merit functions
    2. Continuation (Homotopy) methods, (general outline only)
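Newton's method for F(x) = 0, derived by linearizing F(x + p) ≈ F(x) + J(x)p and solving J(x)p = -F(x), can be sketched on a tiny system. The particular system (intersection of a circle and a parabola) is a hypothetical example:

```python
import numpy as np

def F(x):
    # hypothetical test system: circle x1^2 + x2^2 = 2 meets parabola x2 = x1^2
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     x[1] - x[0]**2])

def J(x):
    # analytic Jacobian of F
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [-2.0 * x[0], 1.0]])

x = np.array([1.5, 0.5])
for _ in range(20):
    p = np.linalg.solve(J(x), -F(x))   # Newton step from the linearization
    x = x + p
```

From this starting point the iterates converge rapidly to the root (1, 1); an inexact Newton method would solve the linear system only approximately, and a merit function such as ||F(x)||^2 would be used to safeguard steps far from a root.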

Class 13

  1. Theory of Constrained Optimization, Chap. 12
    1. Definitions: constrained program, feasible set Omega, tangent cone of Omega at x, polar cone (dual cone).
    2. Geometric necessary conditions of optimality (extension of Fermat's theorem) for min f(x) subject to x in Omega
    3. Geometric characterization of optimality in the convex case, min f(x) subject to x in Omega, where f is a convex function and Omega is a convex set

Class 14

  1. Theory of Constrained Optimization, Chap. 12, cont...
    1. Definitions: local vs. global minimum, smoothing problems by adding constraints, active constraints

Class 15

  1. Theory of Constrained Optimization, Chap. 12, cont...
    (with basic convex analysis added)
    1. Basic Hyperplane Separation Theorem
    2. constraint qualifications, stability and sensitivity analysis

Class 16

  1. Theory of Constrained Optimization, Chap. 12, cont...
    (with basic convex analysis/duality added)
    1. Basic separation theorems

Class 17

  1. Theory of Constrained Optimization, Chap. 12, cont...
    (with basic convex analysis/duality added)
    1. generalized Farkas' Lemma (with respect to a cone ordering, with examples, e.g. semidefinite programming)
    2. examples where Farkas' lemma fails

Class 18

  1. Theory of Constrained Optimization, Chap. 12, cont...
    (with basic convex analysis/duality added)
    1. Lagrange Multiplier Theorem for the generalized convex program
    2. dual of (generalized) convex programs (derivation using game theory)

Class 19

  1. Theory of Constrained Optimization, Chap. 12, cont...
    (with basic convex analysis/duality added)
    1. Examples using KKT conditions
    2. Examples of duals of convex programs
    3. sensitivity interpretation of Lagrange multipliers for convex programs
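As a worked illustration of checking KKT conditions, consider the (hypothetical) convex program min (x1 - 1)^2 + (x2 - 2)^2 subject to x1 + x2 <= 2: the minimizer is the projection of (1, 2) onto the half-plane, x* = (0.5, 1.5), with multiplier 1 on the single active constraint. A minimal verification sketch:

```python
import numpy as np

# KKT conditions at the candidate x* = (0.5, 1.5), lam = 1, for
#   min (x1 - 1)^2 + (x2 - 2)^2   subject to  c(x) = x1 + x2 - 2 <= 0
x = np.array([0.5, 1.5])
lam = 1.0

grad_f = 2.0 * (x - np.array([1.0, 2.0]))    # gradient of the objective
grad_c = np.array([1.0, 1.0])                # gradient of the constraint

stationarity = np.linalg.norm(grad_f + lam * grad_c)  # ∇f + lam ∇c = 0
primal_feas = x[0] + x[1] - 2.0                       # c(x) <= 0
comp_slack = lam * primal_feas                        # lam * c(x) = 0
dual_feas = lam >= 0.0                                # lam >= 0
```

All four conditions hold exactly, and lam = 1 carries the usual sensitivity interpretation: relaxing the constraint to x1 + x2 <= 2 + t decreases the optimal value at rate lam to first order.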

Class 20

  1. Sequential Quadratic Programming Methods

Class 21

  1. Sequential Quadratic Programming Methods cont...
  2. Basics of Linear Programming

Class 22

  1. Zoutendijk's Feasible Directions Method, Penalty Function Methods, Barrier Methods, Augmented Lagrangian Methods

Class 23

  1. Zoutendijk's Feasible Directions Method, Penalty Function Methods, Barrier Methods, Augmented Lagrangian Methods, cont...

Class 24

  1. Primal-Dual Interior-Point Methods for Linear (and Semidefinite) Programming (via log-barrier problem)
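The primal-dual path-following idea can be sketched end to end on a toy LP: apply Newton's method to the perturbed KKT system A x = b, A^T y + s = c, x_i s_i = mu (the optimality conditions of the log-barrier problem), while driving mu toward 0 and damping steps to stay strictly interior. The tiny LP and all parameter choices below are hypothetical illustrations, not production choices:

```python
import numpy as np

# Toy standard-form LP:  min c^T x  s.t.  A x = b, x >= 0
A = np.array([[1.0, 1.0, 1.0]])               # simplex constraint x1 + x2 + x3 = 1
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])                 # minimized by x = (1, 0, 0)

n = 3
x = np.ones(n) / n                            # strictly feasible interior start
y = np.zeros(1)
s = c - A.T @ y                               # dual slacks, strictly positive here
for _ in range(30):
    gap = x @ s                               # complementarity (duality) gap
    if gap < 1e-10:
        break
    mu = 0.1 * gap / n                        # shrink the barrier parameter
    # Newton step for the perturbed KKT system in (x, y, s)
    KKT = np.block([
        [A,                np.zeros((1, 1)), np.zeros((1, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)       ],
        [np.diag(s),       np.zeros((n, 1)), np.diag(x)      ],
    ])
    rhs = np.concatenate([b - A @ x, c - A.T @ y - s, mu * np.ones(n) - x * s])
    d = np.linalg.solve(KKT, rhs)
    dx, dy, ds = d[:n], d[n:n + 1], d[n + 1:]
    a = 1.0                                   # damp to keep x, s strictly positive
    while np.any(x + a * dx <= 0.0) or np.any(s + a * ds <= 0.0):
        a *= 0.5
    x, y, s = x + 0.9 * a * dx, y + 0.9 * a * dy, s + 0.9 * a * ds
```

The iterates follow the central path as mu shrinks, and the gap x^T s drops by roughly an order of magnitude per iteration once full steps are accepted. Practical codes factor the reduced (normal-equations) system instead of the full KKT matrix, but the structure is the same.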