Explainable Fault Diagnosis of Oil-Immersed Transformers: A Glass-Box Model

Wenlong Liao, Yi Zhang, Di Cao, Takayuki Ishizaki, Zhe Yang, Dechang Yang

Research output: Journal article › Academic research › peer-review

12 Citations (Scopus)

Abstract

Recently, remarkable progress has been made in the application of machine learning (ML) techniques (e.g., neural networks) to transformer fault diagnosis. However, the diagnostic processes employed by these techniques often suffer from a lack of interpretability. To address this limitation, this article proposes a novel glass-box model that combines interpretability with high accuracy for transformer fault diagnosis. In particular, the model captures the nonlinear relationship between dissolved gases and fault types using shape functions, while also modeling the correlations between dissolved gases through interaction terms. Simulation results demonstrate that the performance of the proposed glass-box model surpasses that of benchmark models. Furthermore, the model offers both global and instance-specific perspectives for the interpretation of diagnostic processes. The combination of high accuracy and interpretability makes the proposed glass-box model an attractive option for reliable transformer fault diagnosis.
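The additive structure described in the abstract (per-gas shape functions plus pairwise interaction terms) can be sketched in a few lines. The gas names, shape functions, and coefficients below are illustrative assumptions, not the paper's fitted model; the point is that each term's contribution to the score is directly readable, which is what makes the model a glass box.

```python
# Minimal sketch of a glass-box additive model for dissolved-gas-analysis
# (DGA) based fault diagnosis. All shape functions and coefficients here
# are hypothetical placeholders, not the fitted model from the paper.
import math

# One shape function per dissolved gas: concentration (ppm) -> score term.
shape_functions = {
    "H2":   lambda x: 0.8 * math.log1p(x) - 2.0,
    "C2H2": lambda x: 1.5 * math.log1p(x) - 1.0,
    "CH4":  lambda x: 0.4 * math.log1p(x) - 0.5,
}

# A pairwise interaction term modelling the correlation between two gases.
def interaction_h2_c2h2(h2, c2h2):
    return 0.3 * math.log1p(h2) * math.log1p(c2h2)

def diagnose(sample):
    """Return the additive fault score and per-term contributions."""
    contributions = {g: f(sample[g]) for g, f in shape_functions.items()}
    contributions["H2 x C2H2"] = interaction_h2_c2h2(sample["H2"], sample["C2H2"])
    return sum(contributions.values()), contributions

# Instance-specific explanation: rank the terms driving one prediction.
sample = {"H2": 120.0, "C2H2": 35.0, "CH4": 60.0}
score, contribs = diagnose(sample)
for term, value in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{term:>10}: {value:+.3f}")
```

Plotting each shape function over its input range would give the global view of the diagnosis logic, while the per-sample contribution ranking above gives the instance-specific view mentioned in the abstract.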

Original language: English
Article number: 2506204
Pages (from-to): 1-4
Number of pages: 4
Journal: IEEE Transactions on Instrumentation and Measurement
Volume: 73
Publication status: Published - 5 Jan 2024

Keywords

  • Explainable artificial intelligence
  • fault diagnosis
  • glass box
  • machine learning (ML)
  • power transformer

ASJC Scopus subject areas

  • Instrumentation
  • Electrical and Electronic Engineering
