ABSTRACT: Slater's condition -- the existence of a ``strictly feasible solution'' -- lies at the heart of convex optimization. Its importance is evident even at the most basic level: without strict feasibility, first-order optimality conditions may be meaningless, the dual problem may yield little information about the primal, and small perturbations of the data may render the problem infeasible. As a consequence, many off-the-shelf numerical methods can perform poorly, primal-dual interior point methods in particular.
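The failure of first-order optimality conditions without strict feasibility can be seen in a minimal one-dimensional example (our illustration, not part of the abstract):
\[
\min_{x \in \mathbb{R}} \; x \quad \text{subject to} \quad x^2 \le 0.
\]
The only feasible point is $x^\ast = 0$, which is therefore optimal; yet the KKT stationarity condition $1 + 2\lambda x^\ast = 0$ has no solution $\lambda \ge 0$, because Slater's condition fails: there is no $x$ with $x^2 < 0$.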
New optimization modelling techniques and convex relaxations for hard nonconvex problems have shown that the loss of strict feasibility occurs far more often than previously realized. These developments suggest a reappraisal.
In this talk we illustrate how to exploit structure so that the failure of Slater's condition can be turned to advantage. We apply this to three specific applications: compressed sensing, sensor network localization, and molecular conformation.