TY - GEN
T1 - Benchmarking TPU and GPU for Stock Price Forecasting Using LSTM Model Development
AU - Kehinde, T. O.
AU - Chung, S. H.
AU - Chan, Felix T.S.
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2023/9
Y1 - 2023/9
N2 - Due to the surge in annual stock debuts, it is vital to use deep learning algorithms, which require in-depth knowledge of the technology and how to utilize it. Assessing and projecting stock prices are valuable tasks. The complex, uncertain behaviour of the stock market requires deploying deep learning models on capable computer hardware. This research uses TPU and GPU hardware processors and accelerators to train and assess a deep learning model employing a long short-term memory (LSTM) recurrent neural network. The model was trained on HKEX, FTSE100, and S&P500 stock price datasets across several periods (1-year, 3-year, 5-year, and 10-year), and the results were compared using runtime, execution time, and evaluation metrics. Model runtime on the TPU increases as the number of stacked layers rises. Across all three datasets, the TPU model was faster in every scenario considered. Training the LSTM model on the stock price datasets yields predictions with reduced root mean squared error and shows that the GPU achieves a shorter runtime with accuracy close to that of the TPU, the TPU's runtime on big datasets being more than ten times that of the GPU. The TPU outperforms the GPU in stock price prediction when trained on large datasets, whereas the GPU outperforms the TPU on smaller datasets. Further work is needed to evaluate the TPU and GPU for stock price prediction with different deep-learning frameworks, adjusting the LSTM hyper-parameters and running them on more than three datasets.
AB - Due to the surge in annual stock debuts, it is vital to use deep learning algorithms, which require in-depth knowledge of the technology and how to utilize it. Assessing and projecting stock prices are valuable tasks. The complex, uncertain behaviour of the stock market requires deploying deep learning models on capable computer hardware. This research uses TPU and GPU hardware processors and accelerators to train and assess a deep learning model employing a long short-term memory (LSTM) recurrent neural network. The model was trained on HKEX, FTSE100, and S&P500 stock price datasets across several periods (1-year, 3-year, 5-year, and 10-year), and the results were compared using runtime, execution time, and evaluation metrics. Model runtime on the TPU increases as the number of stacked layers rises. Across all three datasets, the TPU model was faster in every scenario considered. Training the LSTM model on the stock price datasets yields predictions with reduced root mean squared error and shows that the GPU achieves a shorter runtime with accuracy close to that of the TPU, the TPU's runtime on big datasets being more than ten times that of the GPU. The TPU outperforms the GPU in stock price prediction when trained on large datasets, whereas the GPU outperforms the TPU on smaller datasets. Further work is needed to evaluate the TPU and GPU for stock price prediction with different deep-learning frameworks, adjusting the LSTM hyper-parameters and running them on more than three datasets.
KW - Deep Learning Models
KW - LSTM Model
KW - Recurrent Neural Network
KW - Stock Price Forecasting
UR - https://www.scopus.com/pages/publications/85174675018
U2 - 10.1007/978-3-031-37717-4_20
DO - 10.1007/978-3-031-37717-4_20
M3 - Conference article published in proceeding or book
AN - SCOPUS:85174675018
SN - 9783031377167
VL - 711
T3 - Lecture Notes in Networks and Systems
SP - 289
EP - 306
BT - Intelligent Computing - Proceedings of the 2023 Computing Conference
A2 - Arai, Kohei
PB - Springer Science and Business Media Deutschland GmbH
T2 - Proceedings of the Computing Conference 2023
Y2 - 22 June 2023 through 23 June 2023
ER -