Robust and Explainable Autoencoders for Unsupervised Time Series Outlier Detection

Tung Kieu, Bin Yang*, Chenjuan Guo, Christian S. Jensen, Yan Zhao, Feiteng Huang, Kai Zheng

*Corresponding author

Publication: Contribution to book/anthology/report/conference proceeding › Conference article in proceeding › Research › peer review

17 Citations (Scopus)

Abstract

Time series data occurs widely, and outlier detection in time series is a fundamental data mining problem with numerous applications. Existing autoencoder-based approaches deliver state-of-the-art performance on challenging real-world data but are vulnerable to outliers and exhibit low explainability. To address these two limitations, we propose robust and explainable unsupervised autoencoder frameworks that decompose an input time series into a clean time series and an outlier time series. Improved explainability is achieved because clean time series are better explained with easy-to-understand patterns such as trends and periodicities. We provide insight into this by means of a post-hoc explainability analysis and empirical studies. In addition, since outliers are separated from clean time series iteratively, our approach offers improved robustness to outliers, which in turn improves accuracy. We evaluate our approach on five real-world datasets and report improvements over the state-of-the-art approaches in terms of robustness and explainability.
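The core idea in the abstract, separating an observed series into a clean component reconstructed by an autoencoder and a sparse outlier component, can be illustrated with a short sketch. The architecture, the sparsity weight `lam`, the alternating update schedule, and the window-based input are illustrative assumptions in the spirit of robust-autoencoder decompositions, not the authors' exact method.

```python
# Minimal sketch: decompose X ~ L + S, where L is the "clean" part
# reconstructed by an autoencoder and S is a sparse "outlier" part.
# Hyperparameters and architecture are assumptions for illustration.
import torch
import torch.nn as nn


class Autoencoder(nn.Module):
    def __init__(self, dim, hidden=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))


def decompose(X, lam=0.1, outer=10, inner=50, lr=1e-3):
    """Alternate between fitting the autoencoder on L = X - S and
    updating S by soft-thresholding the reconstruction residual."""
    ae = Autoencoder(X.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    S = torch.zeros_like(X)
    for _ in range(outer):
        L = X - S
        for _ in range(inner):              # fit the AE to the clean part
            opt.zero_grad()
            loss = ((ae(L) - L) ** 2).mean()
            loss.backward()
            opt.step()
        with torch.no_grad():               # sparse update of the outlier part
            R = X - ae(L)
            S = torch.sign(R) * torch.clamp(R.abs() - lam, min=0.0)
    return (X - S).detach(), S              # clean series, outlier series


# Usage: rows of X are windows of a time series; large |S| entries flag outliers.
X = torch.randn(256, 32)
clean, outliers = decompose(X)
scores = outliers.abs().sum(dim=1)
```

Because the autoencoder is repeatedly refit on `X - S` rather than on the raw input, the outliers absorbed by `S` have less influence on the learned reconstruction, which is the intuition behind the robustness claim.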
Original language: English
Title: Proceedings of the 38th IEEE International Conference on Data Engineering, ICDE 2022
Number of pages: 13
Publisher: IEEE
Publication date: 2022
Pages: 3038-3050
ISBN (Electronic): 9781665408837
DOI
Status: Published - 2022

