Abstract
In this article, we study reinforcement learning (RL) for vehicle routing problems (VRPs). Recent work has shown that attention-based RL models outperform recurrent neural network-based methods on these problems in terms of both effectiveness and efficiency. However, existing RL models simply aggregate node embeddings to generate the context embedding without taking the dynamic network structure into account, making them incapable of modeling the state-transition and action-selection dynamics. In this work, we develop a new attention-based RL model that provides enhanced node embeddings via batch normalization reordering and gate aggregation, as well as a dynamic-aware context embedding produced by an attentive aggregation module over multiple relational structures. We conduct experiments on five types of VRPs: 1) the travelling salesman problem (TSP); 2) the capacitated VRP (CVRP); 3) the split delivery VRP (SDVRP); 4) the orienteering problem (OP); and 5) the prize collecting TSP (PCTSP). The results show that our model not only outperforms the learning-based baselines but also solves the problems much faster than the traditional baselines. In addition, our model generalizes better when evaluated on larger problem instances and on problems with different data distributions.
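The gate aggregation and attentive context aggregation mentioned in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the sigmoid gate form, single-head scaled dot-product attention, and all dimensions are assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_nodes, d = 5, 8                            # 5 nodes, embedding size 8 (illustrative)
node_emb = rng.normal(size=(n_nodes, d))     # per-node embeddings from an encoder

# Gate aggregation (assumed form): a sigmoid gate weights each node
# embedding before averaging, instead of a plain mean over nodes.
W_gate = rng.normal(size=(d, d))             # hypothetical learned gate weights
gates = sigmoid(node_emb @ W_gate)           # (n_nodes, d) element-wise gates
graph_emb = (gates * node_emb).mean(axis=0)  # gated graph-level embedding

# Attentive aggregation (assumed form): a state-dependent query (e.g. the
# last visited node) attends over all node embeddings, so the context
# embedding changes as the route is constructed.
query = node_emb[0]                          # stand-in for the state query
scores = node_emb @ query / np.sqrt(d)       # scaled dot-product scores
alpha = softmax(scores)                      # attention weights over nodes
context_emb = alpha @ node_emb               # dynamic-aware context embedding
```

Under this sketch, `context_emb` is recomputed at every decoding step from the current query, which is what makes the context "dynamic-aware" compared with a fixed mean of node embeddings.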
Original language | English |
---|---|
Pages (from-to) | 1-14 |
Number of pages | 14 |
Journal | IEEE Transactions on Cybernetics |
DOIs | |
Publication status | Accepted/In press - 8 Jul 2021 |
Externally published | Yes |
Keywords
- Attention mechanism
- combinatorial optimization
- Computational modeling
- Context modeling
- Decoding
- deep reinforcement learning (DRL)
- Optimization
- Task analysis
- traveling salesman problem
- Vehicle dynamics
- Vehicle routing
- vehicle routing problem (VRP)
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Information Systems
- Human-Computer Interaction
- Computer Science Applications
- Electrical and Electronic Engineering