Deep Learning in Computational Discrete Optimization
CO 759, Winter 2018
Class meets in MC 6486, Monday and Wednesday, 11:30--12:50.
Click here for an updated version of the notes (Spring 2019, Johns Hopkins University).
The goal of the course is to examine research-level topics in the application of deep-learning techniques to the solution of computational problems in discrete optimization. The first part of the course will cover background material, introducing students to deep learning (with a focus on practical aspects) and covering major topics in computational discrete optimization: heuristic methods, dynamic programming, linear programming, cutting planes, column generation, and branch-and-bound. We will then make an in-depth study of recent papers in which deep learning has been proposed as a solution technique in discrete optimization, aiming toward discussions of open research questions.
General mathematical maturity is expected: students should feel comfortable reading on their own Part 1 (Applied Math and Machine Learning Basics) in the text Deep Learning.
Although no specific background in the topics is required, students will get the most out of the course if they have prior knowledge in at least one of three areas: (1) discrete optimization (CO255 and/or CO450/650 and/or CO452/652), (2) machine learning (CS489/695 or the Deep Learning text), (3) computation (TensorFlow or a good background in computer programming).
HW1: Hello World
Outline of Lectures
Computational discrete optimization (Weeks 1-4)
Computation, linear-programming bounds, the cutting-plane method, column generation, branch decomposition.
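As a small taste of the computational flavor of these opening weeks, here is a minimal sketch of the dynamic-programming recursion mentioned in the course description, applied to the 0/1 knapsack problem. The instance data is invented purely for illustration.

```python
def knapsack(values, weights, capacity):
    """0/1 knapsack by dynamic programming over capacities.

    dp[c] holds the best total value achievable with capacity c
    using the items processed so far.
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# small made-up instance: best choice is items 2 and 3
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # → 220
```

The downward sweep over capacities is what distinguishes the 0/1 version from the unbounded one; sweeping upward would allow an item to be reused.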
Rectified linear units (Week 5)
Understanding deep neural networks with rectified linear units, R. Arora, A. Basu, P. Mianjy, A. Mukherjee. 2017
Bounding and counting linear regions of deep neural networks, T. Serra, C. Tjandraatmadja, S. Ramalingam. 2018
Maximum resilience of artificial neural networks, C.-H. Cheng, G. Nührenberg, H. Ruess. 2017.
Deep neural networks as 0-1 mixed integer linear programs: a feasibility study, M. Fischetti, J. Jo. 2017.
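A common thread in these papers is that a network of rectified linear units computes a continuous piecewise-linear function, which is what makes the MIP formulations and region-counting arguments possible. A minimal sketch in Python (the weights below are arbitrary, chosen only for illustration): two inputs that share an activation pattern lie in the same linear region, where the network agrees with an affine function.

```python
def relu(z):
    return max(0.0, z)

# arbitrary one-hidden-layer network: 2 inputs, 3 hidden ReLU units, 1 output
W1 = ((1.0, -1.0), (0.5, 2.0), (-1.0, 1.0))
b1 = (0.0, -1.0, 0.5)
w2 = (1.0, -2.0, 1.0)
b2 = 0.3

def hidden_pre(x):
    # pre-activation values of the hidden units at input x
    return [w[0] * x[0] + w[1] * x[1] + b for w, b in zip(W1, b1)]

def net(x):
    return sum(c * relu(z) for c, z in zip(w2, hidden_pre(x))) + b2

def pattern(x):
    # activation pattern: which hidden units are "on" at x
    return tuple(z > 0 for z in hidden_pre(x))

# two inputs with the same activation pattern lie in one linear region,
# so the network is affine between them: the midpoint value is the average
x, y = (1.0, 0.2), (1.05, 0.2)
assert pattern(x) == pattern(y)
mid = ((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)
print(abs(net(mid) - (net(x) + net(y)) / 2) < 1e-9)  # True: affine on the region
```

Counting how many distinct activation patterns are attainable is exactly the linear-region counting problem studied by Serra et al., and fixing the on/off status of each unit with a binary variable gives the 0-1 MIP view of Fischetti and Jo.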
Q-Learning (Week 6)
Learning combinatorial optimization algorithms over graphs, H. Dai, E. B. Khalil, Y. Zhang, B. Dilkina, L. Song. 2017. This is the main paper we will discuss in this week's lectures.
Implementation of the techniques in 'Learning combinatorial optimization algorithms over graphs', Hanjun-Dai. December 30, 2017. Code for the methods presented in the Dai et al. paper.
Machine Learning for Humans, Part 5: Reinforcement Learning, V. Maini. 2017. Very gentle introduction; good way to get accustomed to the terminology used in Q-learning.
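For readers new to the terminology, here is a bare-bones tabular Q-learning loop on a toy chain environment (the environment and all constants are invented for illustration; the Dai et al. paper replaces the table with a graph neural network).

```python
import random

random.seed(0)

N = 5                    # chain of states 0..4; reaching state 4 ends the episode
actions = (-1, +1)       # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    done = s2 == N - 1
    return s2, (1.0 if done else 0.0), done   # reward 1 only at the right end

def greedy(s):
    best = max(Q[(s, a)] for a in actions)
    return random.choice([a for a in actions if Q[(s, a)] == best])

for _ in range(2000):
    s, done = 0, False
    while not done:
        # epsilon-greedy action choice
        a = random.choice(actions) if random.random() < eps else greedy(s)
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # Q-learning update
        s = s2

policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)  # the learned policy should always step right toward the reward
```

The update rule is the standard one-step Q-learning target r + γ maxₐ Q(s′, a); everything else (ε-greedy exploration, the chain MDP) is scaffolding around it.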
Branch-and-Bound (Week 7)
On learning and branching: a survey, A. Lodi, G. Zarpellon.
Learning to branch in mixed integer programming, E. B. Khalil, P. Le Bodic, L. Song, G. Nemhauser, B. Dilkina.
Sequence-to-Sequence Learning (Weeks 8-9)
Neural machine translation by jointly learning to align and translate, D. Bahdanau, K. Cho, Y. Bengio. 2014.
Sequence to sequence learning with neural networks, I. Sutskever, O. Vinyals, Q. V. Le. 2014.
Pointer networks, O. Vinyals, M. Fortunato, N. Jaitly. 2015.
Neural combinatorial optimization with reinforcement learning, I. Bello, H. Pham, Q. V. Le, M. Norouzi, S. Bengio. 2017. Lecture by Samy Bengio (Berlin, June 2017).
Deep reinforcement learning for solving the vehicle routing problem, M. Nazari, A. Oroojlooy, L. V. Snyder, M. Takac. February 2018.
Attention (Week 10a)
Attention is all you need, A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, L. Kaiser, I. Polosukhin. December 2017.
Attention solves your TSP, W. Kool, M. Welling. March 2018. (Source code on GitHub.)
Computational Testing (Week 10b)
Constraint Integer Programming, Tobias Achterberg. Ph.D. thesis, TU Berlin, 2007. Beautiful examples of careful computational testing.
Benchmarking optimization software with performance profiles, E. Dolan, J. Moré. 2002. See also Performance profiles at GAMS for background on performance profiles.
Experimental analysis of algorithms, Catherine McGeoch. Notices of the AMS, March 2001.
A theoretician's guide to the experimental analysis of algorithms, David S. Johnson, 2001.
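The Dolan–Moré performance profile from the paper above is straightforward to compute by hand: for each solver s and problem p, form the ratio of solver s's time to the best time recorded on p, then report the fraction of problems whose ratio is at most τ. A minimal sketch on made-up timings:

```python
def performance_profile(times, taus):
    """times[solver] is a list of run times, one per problem (same order).

    Returns, for each solver, the fraction of problems whose
    time-to-best ratio is at most tau, for each tau in taus.
    """
    solvers = list(times)
    nprob = len(times[solvers[0]])
    best = [min(times[s][p] for s in solvers) for p in range(nprob)]
    profile = {}
    for s in solvers:
        ratios = [times[s][p] / best[p] for p in range(nprob)]
        profile[s] = [sum(r <= tau for r in ratios) / nprob for tau in taus]
    return profile

# made-up timings for two solvers on four problems
times = {"A": [1.0, 2.0, 4.0, 1.0], "B": [2.0, 1.0, 1.0, 3.0]}
print(performance_profile(times, [1.0, 2.0, 3.0]))
```

Reading the result at τ = 1 gives the fraction of problems on which each solver was (tied for) fastest; as τ grows, each curve climbs toward the fraction of problems the solver completed at all.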
1. Youthful Indiscretion
Deep learning to test if a graph is Hamiltonian
2. Funny, It Worked Last Time
Deep learning for online knapsack and bin-packing problems
3. Displacement Activity
Improving local-search methods using deep neural networks
4. Another Fine Product from the Nonsense Factory
A mixed convex-combinatorial approach for training hard-threshold networks
5. So Much for Subtlety
Reinforcement learning for MIP branch-and-bound decisions
6. Well, I was in the Neighborhood
Using deep neural networks to generate local-cut vertex clusters
7. Limiting Factor
Deep learning for branch-and-bound variable selection in graph optimization
Oberwolfach Seminar: Mathematics of Deep Learning
Deep Learning
Ian Goodfellow, Yoshua Bengio, Aaron Courville
MIT Press, 2016
The main course text for fundamentals of deep learning.
A Practical Guide to Discrete Optimization, Chapters 1 and 7
David Applegate, William Cook, Sanjeeb Dash
Computational studies in discrete optimization.