Class 1



  1. What is an Algorithm?
    1. Illustration of a trust region algorithm for unconstrained minimization.
  2. Introduction to Nonlinear Programming
    1. Mathematical Formulation: example, level sets
    2. Definitions: local and global optima
    3. Definitions: convexity
  3. Fundamentals of Unconstrained Optimization (Recognizing Solutions)
    1. Definitions: gradient, Hessian, Taylor's Theorem, order notation (big and little O)
    2. directional derivative, curvature, linear model, direction of steepest descent
    3. first and second order necessary optimality conditions
    4. second order sufficient optimality conditions (a numerical check of these conditions is sketched after this outline)
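
A minimal numerical illustration of recognizing a solution (an added sketch, not from the course notes; the Rosenbrock function and the candidate point are illustrative choices): at x* = (1, 1) the gradient vanishes (first-order necessary condition) and the Hessian is positive definite (second-order sufficient condition), so x* is a strict local minimizer.

  import numpy as np

  def f(x):  # Rosenbrock function, minimized at (1, 1)
      return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

  def grad(x):  # analytic gradient of f
      return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                       200.0 * (x[1] - x[0]**2)])

  def hess(x):  # analytic Hessian of f
      return np.array([[1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                       [-400.0 * x[0], 200.0]])

  x_star = np.array([1.0, 1.0])
  print(np.linalg.norm(grad(x_star)))      # 0.0: first-order condition holds
  print(np.linalg.eigvalsh(hess(x_star)))  # both positive: Hessian is PD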

Class 2



  1. Overview of Algorithms
    1. Importance of convex functions and sets for global optima
    2. line search and trust region methods
      1. line search: steepest descent, Newton's method (scale invariance)
    3. quasi-Newton methods, trust region methods (comparisons)
    4. scaling
    5. rates of convergence
  2. Line Search Methods, of the type x_{k+1} = x_k + alpha_k p_k, where p_k is the search direction and alpha_k is the step length
    1. Step Length
      1. Wolfe conditions (geometric interpretations; a backtracking sketch follows this outline)
          sufficient decrease: the step is not too large
          curvature condition: the step is not too small
      2. existence of step lengths
    2. Convergence of Line Search Methods (geometric interpretation of the cos theta condition)
    3. Rate of Convergence
      1. linear convergence rate for steepest descent - dependence on the condition number of the Hessian
      2. Dennis-Moré condition - for superlinear convergence, the quasi-Newton Hessian approximation must (asymptotically) agree with the true Hessian along the search directions
      3. quadratic convergence of Newton's method near the optimum (with proof)
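
As referenced in the Wolfe-conditions item above, a minimal sketch of a line search method of this class: steepest descent with a backtracking step length that enforces the sufficient-decrease (Armijo) half of the Wolfe conditions. The test problem and the constants (c1, rho, tolerances) are illustrative choices; note how the ill-conditioned Hessian slows the linear rate, as in item 1.

  import numpy as np

  def backtracking(f, grad_f, x, p, alpha0=1.0, rho=0.5, c1=1e-4):
      # Shrink alpha until f(x + alpha p) <= f(x) + c1 alpha grad_f(x)^T p.
      alpha, fx, slope = alpha0, f(x), grad_f(x) @ p
      while f(x + alpha * p) > fx + c1 * alpha * slope:
          alpha *= rho
      return alpha

  def steepest_descent(f, grad_f, x0, tol=1e-8, max_iter=10000):
      x = x0.copy()
      for _ in range(max_iter):
          g = grad_f(x)
          if np.linalg.norm(g) < tol:
              break
          p = -g                                     # direction of steepest descent
          x = x + backtracking(f, grad_f, x, p) * p  # x_{k+1} = x_k + alpha_k p_k
      return x

  A = np.diag([1.0, 100.0])   # condition number 100: slow linear convergence
  f = lambda x: 0.5 * x @ A @ x
  grad_f = lambda x: A @ x
  print(steepest_descent(f, grad_f, np.array([1.0, 1.0])))  # -> (0, 0)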

Class 3



  1. Trust Region Methods
    1. basic outline
    2. Trust Region Subproblem
      1. secular equation, optimality conditions, scaling, Cauchy point
  2. Conjugate Direction Methods
    1. Solving Ax=b for A pos. def. Expanding subspace theorem
    2. Conjugate Gradient Methods
      1. Solving Ax=b for A pos. def. Practical form (a short implementation is sketched after this outline). Rates of convergence (clustering eigenvalues). Preconditioning.
  3. Gauss-Newton method for nonlinear least squares: properties of the GN direction, convergence rate
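
The short implementation referenced in the conjugate gradient item above, a minimal sketch for Ax = b with A symmetric positive definite (the 2x2 test matrix is an illustrative choice). In exact arithmetic CG terminates in at most n steps, and clustered eigenvalues or a good preconditioner make it effectively faster.

  import numpy as np

  def conjugate_gradient(A, b, x0=None, tol=1e-10):
      n = len(b)
      x = np.zeros(n) if x0 is None else x0.copy()
      r = b - A @ x            # residual
      p = r.copy()             # first search direction
      rs = r @ r
      for _ in range(n):
          Ap = A @ p
          alpha = rs / (p @ Ap)       # exact minimizer along p
          x += alpha * p
          r -= alpha * Ap
          rs_new = r @ r
          if np.sqrt(rs_new) < tol:
              break
          p = r + (rs_new / rs) * p   # A-conjugate update of the direction
          rs = rs_new
      return x

  A = np.array([[4.0, 1.0], [1.0, 3.0]])
  b = np.array([1.0, 2.0])
  print(conjugate_gradient(A, b))     # agrees with np.linalg.solve(A, b)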

Class 4



  1. Nonlinear Equations
    1. Newton's method: derivation using linearization, convergence rate (with proof), inexact Newton method, merit functions (the basic iteration is sketched after this outline)
    2. Continuation (homotopy) methods (general outline only)
  2. Theory of Constrained Optimization
    1. Definitions: constrained program, feasible set Omega, tangent cone of Omega at x, polar cone (dual cone)
    2. Geometric necessary conditions of optimality (extension of Fermat's theorem) for min f(x), x in Omega
    3. Geometric characterization of optimality for the convex case, min f(x), x in Omega, where f is a convex function and Omega a convex set
    4. Definitions: local vs. global minimum, smoothing problems by adding constraints, active constraints
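
The basic Newton iteration referenced above, a minimal sketch for F(x) = 0: linearize F at x_k and solve J(x_k) s = -F(x_k) for the step (the 2x2 test system, a circle-parabola intersection, is an illustrative choice). Near a root with nonsingular Jacobian the convergence is quadratic.

  import numpy as np

  def newton_system(F, J, x0, tol=1e-12, max_iter=50):
      x = x0.copy()
      for _ in range(max_iter):
          Fx = F(x)
          if np.linalg.norm(Fx) < tol:
              break
          s = np.linalg.solve(J(x), -Fx)  # step from the linear model
          x = x + s
      return x

  # Intersection of the circle x^2 + y^2 = 2 with the parabola y = x^2.
  F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[1] - x[0]**2])
  J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [-2.0 * x[0], 1.0]])
  print(newton_system(F, J, np.array([1.5, 0.5])))  # converges to (1, 1)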

Class 5



  1. Theory of Constrained Optimization, cont.
    (with basic convex analysis)
    1. Basic Hyperplane Separation Theorem (with proof)
    2. Lemma: K is a closed convex cone if and only if K = K++ (needed for the derivation of Farkas' Lemma)
    3. Linearizing Cone, Cone of Gradients, Weakest Constraint Qualification, Karush-Kuhn-Tucker conditions
    4. Proof of the lemma (K is a closed convex cone iff K = K++) using the separation theorem
    5. Derivation of Farkas' Lemma (using the above lemma); extensions and failure cases of Farkas' Lemma (the statement is recalled after this outline)
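
For reference, the statement of Farkas' Lemma as used above (a standard formulation; the matrix notation is an added choice):

  % Farkas' Lemma: for A in R^{m x n} and b in R^m, exactly one holds.
  \text{either}\quad \exists\, x \ge 0 \ \text{with}\ Ax = b,
  \qquad\text{or}\qquad
  \exists\, y \ \text{with}\ A^{\mathsf T} y \ge 0 \ \text{and}\ b^{\mathsf T} y < 0.

In the cone language of the lemma above, this is K = K++ applied to the finitely generated (hence closed convex) cone K = {Ax : x >= 0}.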

Class 6



  1. Theory of Constrained Optimization, cont.
    1. Lagrange Multiplier Theorem (with proof) for the generalized convex program
    2. Dual of (generalized) convex programs, derived using game theory (the standard form of the dual is written out after this outline)
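
For reference, the dual referred to above, written for a convex program in standard inequality form (the notation is an added choice; the game-theoretic derivation views the primal as inf-sup and the dual as sup-inf of the Lagrangian):

  % Primal, Lagrangian, dual function, and dual problem:
  \text{(P)}\quad \min_x\ f(x) \ \ \text{s.t.}\ \ g_i(x) \le 0,\ i = 1,\dots,m
  \qquad (f,\ g_i \ \text{convex})
  L(x,\lambda) = f(x) + \sum_{i=1}^{m} \lambda_i\, g_i(x), \qquad
  q(\lambda) = \inf_x L(x,\lambda)
  \text{(D)}\quad \max_{\lambda \ge 0}\ q(\lambda),
  \qquad \max_{\lambda \ge 0} q(\lambda) \ \le\ \inf_{x\ \text{feasible}} f(x)
  \quad \text{(weak duality)}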

Class 7



  1. Theory of Constrained Optimization, cont.
    1. Examples using the KKT conditions (one worked example follows this outline)
    2. Examples of duals of convex programs
    3. Sensitivity interpretation of Lagrange multipliers for convex programs
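
One worked KKT example of the kind listed above (the specific program is an illustrative choice): minimize x^2 + y^2 subject to x + y >= 1.

  % Lagrangian (constraint written as 1 - x - y <= 0):
  L(x, y, \lambda) = x^2 + y^2 + \lambda\,(1 - x - y), \qquad \lambda \ge 0
  % Stationarity: 2x = \lambda, \ 2y = \lambda; complementarity:
  \lambda\,(1 - x - y) = 0
  % \lambda = 0 would give x = y = 0, infeasible, so the constraint is active:
  x^{*} = y^{*} = \tfrac{1}{2}, \qquad \lambda^{*} = 1, \qquad f^{*} = \tfrac{1}{2}

The multiplier also illustrates item 3: perturbing the constraint to x + y >= 1 + t gives optimal value (1 + t)^2 / 2, whose derivative at t = 0 equals lambda* = 1.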

Class 8



  1. Optimality and Duality, continued
    1. Examples of LP and convex QP duals in finite and infinite dimensions (the finite-dimensional LP primal-dual pair is written out after this outline)
      • Comparison of examples in Hilbert and Banach spaces: the Lagrange multiplier acts through an inner product when the image of the constraint map lies in an inner-product space; the action is still a bilinear form when the Lagrange multiplier lies in the dual of a normed space. (The dual of C[0,1] is BV[0,1]. The optimal value in Example 3 is attained by a discontinuous function in (22), but the value can be approximated as closely as desired by a continuous function. This does not contradict the fact that C[0,1] is a Banach space.)
    2. Examples of duality gaps in convex programs
    3. First and second order optimality conditions for general NLPs
  2. Basics of Linear Programming
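
For reference, the finite-dimensional LP primal-dual pair underlying the first set of examples above (standard form; the notation is an added choice):

  \text{(P)}\quad \min_x\ c^{\mathsf T} x \ \ \text{s.t.}\ \ Ax = b,\ x \ge 0
  \qquad\qquad
  \text{(D)}\quad \max_y\ b^{\mathsf T} y \ \ \text{s.t.}\ \ A^{\mathsf T} y \le c
  % Weak duality: c^T x - b^T y = x^T (c - A^T y) >= 0 for any feasible pair.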

Class 9



  1. Penalty Function Methods and SQP (Sequential Quadratic Programming) Methods (a quadratic penalty sketch follows this outline)
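
The quadratic penalty sketch referenced above (a minimal illustration; the test problem, the mu schedule, and the use of scipy's BFGS for the inner minimizations are added choices, not the course's): each outer iteration minimizes f(x) + (mu/2) c(x)^2, and the unconstrained minimizers approach the constrained solution as mu grows.

  import numpy as np
  from scipy.optimize import minimize

  f = lambda x: x[0] + x[1]                # objective
  c = lambda x: x[0]**2 + x[1]**2 - 2.0    # equality constraint c(x) = 0

  x = np.array([0.0, 0.0])
  for mu in [1.0, 10.0, 100.0, 1000.0]:
      penalty = lambda x, mu=mu: f(x) + 0.5 * mu * c(x)**2
      x = minimize(penalty, x, method="BFGS").x   # warm-start each subproblem
  print(x)   # approaches the constrained minimizer (-1, -1)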

Class 10



  1. Primal-Dual Interior-Point Methods for Linear (and Semidefinite) Programming, via the log-barrier problem (the barrier formulation and central-path equations are written out below)
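
For reference, the log-barrier problem behind the method named above, written for the LP min c^T x s.t. Ax = b, x >= 0 (standard setup; the notation is an added choice):

  \min_x\ c^{\mathsf T} x - \mu \sum_{i=1}^{n} \ln x_i
  \quad \text{s.t.}\quad Ax = b, \qquad \mu > 0
  % Optimality conditions = perturbed KKT system (the central path):
  Ax = b, \qquad A^{\mathsf T} y + s = c, \qquad x_i\, s_i = \mu,\ \ i = 1,\dots,n

Primal-dual methods apply Newton's method to this system while driving mu to 0; in the semidefinite case, roughly, x and s become symmetric matrices and the barrier becomes -mu ln det X.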