Robust and Explainable Autoencoders for Time Series Outlier Detection

Tung Kieu, Bin Yang*, Chenjuan Guo, Christian S. Jensen, Yan Zhao, Feiteng Huang, Kai Zheng

*Corresponding author for this work

Research output: Contribution to book/anthology/report/conference proceeding › Article in proceeding › Research › peer-review

Abstract

Time series data occurs widely, and outlier detection in such data is a fundamental data mining problem with numerous applications. Existing autoencoder-based approaches deliver state-of-the-art performance on challenging real-world data but are vulnerable to outliers and exhibit low explainability. To address these two limitations, we propose robust and explainable unsupervised autoencoder frameworks that decompose an input time series into a clean time series and an outlier time series. Explainability improves because clean time series are better explained by easy-to-understand patterns such as trends and periodicities; we provide insight into this by means of a post-hoc explainability analysis and empirical studies. In addition, since outliers are separated from the clean time series iteratively, our approach offers improved robustness to outliers, which in turn improves accuracy. We evaluate our approach on five real-world datasets and report improvements over state-of-the-art approaches in terms of robustness and explainability.
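The core decomposition idea in the abstract, splitting an input series X into a clean component C and an outlier component S with X ≈ C + S, can be illustrated with a minimal sketch. The sketch below is an assumption-laden stand-in, not the paper's architecture: a truncated SVD plays the role of the autoencoder's reconstruction step, and soft-thresholding separates out sparse outliers, alternated for a few iterations.

```python
import numpy as np

def soft_threshold(x, lam):
    # Shrink residuals toward zero; only large residuals survive as outliers.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def decompose(X, rank=2, lam=0.5, n_iter=20):
    """Split X into a clean part C and a sparse outlier part S (X ≈ C + S).
    Hypothetical sketch: a rank-`rank` SVD stands in for the autoencoder."""
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # "Autoencoder" pass: low-rank reconstruction of the outlier-free signal.
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        C = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparsity pass: whatever the smooth model cannot explain is an outlier.
        S = soft_threshold(X - C, lam)
    return C, S

# Toy data: four rank-2 sinusoidal series plus noise and one injected spike.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
X = np.stack([np.sin(t), np.cos(t), np.sin(t) + np.cos(t), np.sin(t) - np.cos(t)])
X += 0.05 * rng.standard_normal(X.shape)
X[0, 50] += 5.0  # injected outlier
C, S = decompose(X, rank=2, lam=0.5)
# The spike should land in the outlier component S, leaving C smooth.
print(abs(S[0, 50]), np.count_nonzero(S))
```

This mirrors the robustness argument: because S absorbs the outliers at each iteration, the reconstruction of C is fit only to clean structure, and the clean component is explainable in terms of the underlying periodic patterns.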
Original language: English
Title of host publication: Proceedings of the 38th IEEE International Conference on Data Engineering, ICDE 2022
Number of pages: 12
Publication date: 2022
Publication status: Published - 2022

Keywords

  • Time Series Analysis
  • Outlier Detection
  • Data Mining
  • Explainable AI
  • Machine Learning
  • Autoencoders

