Learning the travelling salesperson problem requires rethinking generalization

Authors
Cappart, Quentin 
Rousseau, Louis-Martin 
Laurent, Thomas 

Abstract

End-to-end training of neural network solvers for graph combinatorial optimization problems such as the Travelling Salesperson Problem (TSP) has seen a surge of interest recently, but remains intractable and inefficient beyond graphs with a few hundred nodes. While state-of-the-art learning-driven approaches for TSP perform closely to classical solvers when trained on trivially small sizes, they are unable to generalize the learnt policy to larger instances of practical scale. This work presents an end-to-end neural combinatorial optimization pipeline that unifies several recent papers in order to identify the inductive biases, model architectures and learning algorithms that promote generalization to instances larger than those seen in training. Our controlled experiments provide the first principled investigation into such zero-shot generalization, revealing that extrapolating beyond training data requires rethinking the neural combinatorial optimization pipeline, from network layers and learning paradigms to evaluation protocols. Additionally, we analyze recent advances in deep learning for routing problems through the lens of our pipeline and provide new directions to stimulate future research.

Publication Date
2022-04
Online Publication Date
2022-04-28
Acceptance Date
2022-03-16
Keywords
46 Information and Computing Sciences, 4611 Machine Learning
Journal Title
Constraints
Journal ISSN
1383-7133
1572-9354
Volume Title
27
Publisher
Springer Science and Business Media LLC