A Test Oracle for Reinforcement Learning Software Based on Lyapunov Stability Control Theory

Shiyu Zhang, Haoyang Song, Qixin Wang (Corresponding Author), Henghua Shen, Yu Pei

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

Abstract

Reinforcement Learning (RL) has gained significant attention in recent years. As RL software becomes more complex and infiltrates critical application domains, ensuring its quality and correctness becomes increasingly important. An indispensable aspect of software quality/correctness assurance is testing. However, testing RL software faces unique challenges compared to testing traditional software, due to the difficulty of defining the correctness of its outputs. This leads to the RL test oracle problem. Current approaches to testing RL software often rely on human oracles, i.e., convening human experts to judge the correctness of RL software outputs. This depends heavily on the availability and quality (including the experience, subjective state, etc.) of the human experts, and cannot be fully automated. In this paper, we propose a novel approach to designing test oracles for RL software by leveraging Lyapunov stability control theory. By incorporating Lyapunov stability concepts to guide RL training, we hypothesize that correctly implemented RL software should output an agent that respects Lyapunov stability control theory. Based on this heuristic, we propose a Lyapunov-stability-based oracle, LPEA(ϑ,θ), for testing RL software. We conduct extensive experiments over representative RL algorithms and RL software bugs to evaluate the proposed oracle. The results show that it outperforms the human oracle on most metrics. In particular, LPEA(ϑ=100%,θ=75%) outperforms the human oracle by 53.6%, 50%, 18.4%, 34.8%, 18.4%, 127.8%, 60.5%, 38.9%, and 31.7% respectively on accuracy, precision, recall, F1 score, true positive rate, true negative rate, false positive rate, false negative rate, and ROC curve's AUC; and LPEA(ϑ=100%,θ=50%) outperforms the human oracle by 48.2%, 47.4%, 10.5%, 29.1%, 10.5%, 127.8%, 60.5%, 22.2%, and 26.0% respectively on these metrics.
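
The abstract's core heuristic (a correctly trained agent should follow trajectories along which a candidate Lyapunov function does not increase) can be sketched as a pass/fail check. The sketch below is one minimal reading of an LPEA(ϑ,θ)-style oracle, not the paper's actual implementation: vartheta and theta stand in for ϑ and θ, lyapunov_v is an assumed user-supplied candidate Lyapunov function, and the agent/environment interface (agent.act, env.reset, env.step) follows the Gymnasium convention; all of these names are assumptions.

    def lpea_oracle(agent, env, lyapunov_v,
                    n_episodes=50, max_steps=200,
                    theta=0.75, vartheta=1.0):
        """Pass (True) if the trained agent satisfies a Lyapunov-style
        decrease condition on at least a theta fraction of steps within
        an episode, for at least a vartheta fraction of episodes."""
        episodes_ok = 0
        for _ in range(n_episodes):
            state, _ = env.reset()
            non_increasing_steps, total_steps = 0, 0
            for _ in range(max_steps):
                action = agent.act(state)  # hypothetical agent API
                next_state, _, terminated, truncated, _ = env.step(action)
                # Decrease condition: V must not grow along the trajectory.
                if lyapunov_v(next_state) <= lyapunov_v(state):
                    non_increasing_steps += 1
                total_steps += 1
                state = next_state
                if terminated or truncated:
                    break
            if total_steps and non_increasing_steps / total_steps >= theta:
                episodes_ok += 1
        # A buggy RL implementation is flagged (False) when too few
        # episodes satisfy the per-episode decrease condition.
        return episodes_ok / n_episodes >= vartheta

For an inverted-pendulum-style task, for instance, one might take lyapunov_v(s) as the squared deviation from the upright equilibrium (angle squared plus angular velocity squared); a False verdict then marks the RL software build as suspicious without convening human experts.
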
Original language: English
Title of host publication: Proceedings - 2025 IEEE/ACM 47th International Conference on Software Engineering, ICSE 2025
Publisher: IEEE Computer Society
Pages: 502-513
Number of pages: 12
ISBN (Electronic): 979-8-3315-0569-1
DOIs
Publication status: Published - Apr 2025

Publication series

Name: Proceedings - International Conference on Software Engineering
ISSN (Print): 0270-5257

Keywords

  • reinforcement learning
  • test oracle

ASJC Scopus subject areas

  • Software
