TY - JOUR
T1 - Deep Reinforcement Learning for Real-Time Assembly Planning in Robot-Based Prefabricated Construction
AU - Zhu, Aiyu
AU - Dai, Tianhong
AU - Xu, Gangyan
AU - Pauwels, Pieter
AU - de Vries, Bauke
AU - Fang, Meng
N1 - Funding Information:
This article was recommended for publication by Editor X. Xie upon evaluation of the reviewers’ comments. This work was supported in part by the China Scholarship Council under Grant 202007720036 and in part by the National Natural Science Foundation of China under Grant 72174042. An earlier version of this paper was presented in part at the 2021 IEEE International Conference on Automation Science and Engineering [DOI: 10.1109/CASE49439.2021.9551402]. (Aiyu Zhu, Tianhong Dai, and Gangyan Xu contributed equally to this work.)
Publisher Copyright:
© 2023 IEEE.
PY - 2023/7/1
Y1 - 2023/7/1
N2 - The adoption of robotics promises to improve the efficiency, quality, and safety of prefabricated construction. Besides technologies that improve the capability of a single robot, automated assembly planning for robots at construction sites is vital for further improving efficiency and bringing robots into practice. However, given the highly dynamic and uncertain nature of construction environments and the varied scenarios across construction sites, making appropriate and up-to-date assembly plans is challenging. Therefore, this paper proposes a Deep Reinforcement Learning (DRL)-based method for automated assembly planning in robot-based prefabricated construction. Specifically, a re-configurable simulator for assembly planning is developed based on a Building Information Model (BIM) and an open game engine, which supports the training and testing of various optimization methods. Furthermore, the assembly planning problem is modelled as a Markov Decision Process (MDP), and a set of DRL algorithms is developed and trained using the simulator. Finally, experimental case studies in four typical scenarios are conducted, and the performance of the proposed methods has been verified; these results can also serve as benchmarks for future research within the automated construction community. Note to Practitioners - This paper is based on a comprehensive analysis of real-life assembly planning processes in prefabricated construction, and the proposed methods could bring many benefits to practitioners. Firstly, the proposed simulator can easily be re-configured to simulate diverse scenarios, which can be used to evaluate and verify operation optimization methods and new construction technologies.
Secondly, the proposed DRL-based optimization methods can be directly adopted in various robot-based construction scenarios and can also be tailored to support assembly planning in traditional human-based or human-robot construction environments. Thirdly, the proposed DRL methods and their performance in the four typical scenarios can serve as benchmarks for developing new construction technologies and optimization methods in assembly planning.
AB - The adoption of robotics promises to improve the efficiency, quality, and safety of prefabricated construction. Besides technologies that improve the capability of a single robot, automated assembly planning for robots at construction sites is vital for further improving efficiency and bringing robots into practice. However, given the highly dynamic and uncertain nature of construction environments and the varied scenarios across construction sites, making appropriate and up-to-date assembly plans is challenging. Therefore, this paper proposes a Deep Reinforcement Learning (DRL)-based method for automated assembly planning in robot-based prefabricated construction. Specifically, a re-configurable simulator for assembly planning is developed based on a Building Information Model (BIM) and an open game engine, which supports the training and testing of various optimization methods. Furthermore, the assembly planning problem is modelled as a Markov Decision Process (MDP), and a set of DRL algorithms is developed and trained using the simulator. Finally, experimental case studies in four typical scenarios are conducted, and the performance of the proposed methods has been verified; these results can also serve as benchmarks for future research within the automated construction community. Note to Practitioners - This paper is based on a comprehensive analysis of real-life assembly planning processes in prefabricated construction, and the proposed methods could bring many benefits to practitioners. Firstly, the proposed simulator can easily be re-configured to simulate diverse scenarios, which can be used to evaluate and verify operation optimization methods and new construction technologies.
Secondly, the proposed DRL-based optimization methods can be directly adopted in various robot-based construction scenarios and can also be tailored to support assembly planning in traditional human-based or human-robot construction environments. Thirdly, the proposed DRL methods and their performance in the four typical scenarios can serve as benchmarks for developing new construction technologies and optimization methods in assembly planning.
KW - assembly planning
KW - automated construction
KW - building information modelling (BIM)
KW - deep reinforcement learning (DRL)
KW - prefabricated construction
UR - http://www.scopus.com/inward/record.url?scp=85147277557&partnerID=8YFLogxK
U2 - 10.1109/TASE.2023.3236805
DO - 10.1109/TASE.2023.3236805
M3 - Journal article
AN - SCOPUS:85147277557
SN - 1545-5955
VL - 20
SP - 1515
EP - 1526
JO - IEEE Transactions on Automation Science and Engineering
JF - IEEE Transactions on Automation Science and Engineering
IS - 3
ER -