Graph Neural Network-Enhanced Feature Learning for Unsupervised Anomalous Sound Detection

Published in 2025 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2025

Abstract: Anomalous sound detection (ASD) is crucial in industrial applications due to its non-invasive and real-time capabilities. However, existing ASD methods often rely on autoencoders, which require machine-specific tuning, or on large pre-trained models with high computational costs. Additionally, many self-supervised approaches depend on extensive meta-information, increasing deployment complexity. To address these limitations, we propose a lightweight, metadata-free ASD framework that generalizes across machine types without requiring complex hyperparameter tuning. Our approach extracts high-dimensional features from log-Mel spectrograms using MobileNetV2, and then refines the feature representations through relational learning with a Graph Neural Network-based SAGE-GAT model. Unlike conventional methods that treat machine types independently, our approach leverages cross-category feature propagation through local neighbor relationships, capturing discriminative information from nearby samples. Furthermore, an MLP optimized with ArcFace loss enhances feature structuring, while anomaly detection is performed using K-means clustering. Experiments on the DCASE 2024 Task 2 dataset validate the effectiveness of our approach, demonstrating its robustness, efficiency, and suitability for real-world industrial deployment.
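
Below is a minimal, hypothetical sketch of the pipeline described in the abstract, written with PyTorch, PyTorch Geometric, and scikit-learn: MobileNetV2 embeds log-Mel spectrograms, a SAGEConv + GATConv stack refines the embeddings over a neighbor graph, an ArcFace-style margin loss structures the embedding space, and distance to the nearest K-means centroid serves as the anomaly score. The layer sizes, the k-NN graph construction, the helper names (`ArcFaceHead`, `SageGat`, `knn_edges`, `anomaly_scores`), and the choice of auxiliary labels for the ArcFace objective are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of the described pipeline; hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights
from torch_geometric.nn import SAGEConv, GATConv
from sklearn.cluster import KMeans


class ArcFaceHead(nn.Module):
    """Additive angular margin (ArcFace) objective on normalized embeddings."""
    def __init__(self, dim, n_classes, s=30.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, dim))
        self.s, self.m = s, m

    def forward(self, x, labels):
        cos = F.linear(F.normalize(x), F.normalize(self.weight))   # cosine logits
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = torch.cos(theta + self.m)                          # add margin to target class
        onehot = F.one_hot(labels, cos.size(1)).float()
        logits = self.s * (onehot * target + (1 - onehot) * cos)
        return F.cross_entropy(logits, labels)


class SageGat(nn.Module):
    """GraphSAGE layer followed by a GAT layer for neighbor-based refinement."""
    def __init__(self, in_dim=1280, hid=256, out_dim=128, heads=4):
        super().__init__()
        self.sage = SAGEConv(in_dim, hid)
        self.gat = GATConv(hid, out_dim, heads=heads, concat=False)

    def forward(self, x, edge_index):
        x = F.relu(self.sage(x, edge_index))
        return self.gat(x, edge_index)


def extract_features(log_mel):
    """MobileNetV2 backbone on 3-channel log-Mel 'images' -> 1280-d embeddings."""
    backbone = mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).features.eval()
    with torch.no_grad():
        h = backbone(log_mel.repeat(1, 3, 1, 1))   # (N, 1, F, T) -> (N, 1280, F', T')
        return h.mean(dim=(2, 3))                  # global average pooling


def knn_edges(x, k=8):
    """One assumed way to define local neighbors: a k-NN graph over embeddings."""
    d = torch.cdist(x, x)
    nbrs = d.topk(k + 1, largest=False).indices[:, 1:]      # drop self-matches
    dst = torch.arange(x.size(0)).repeat_interleave(k)
    return torch.stack([nbrs.reshape(-1), dst])             # edges: neighbor -> sample


def anomaly_scores(embeddings, n_clusters=16):
    """Distance to the nearest K-means centroid as the anomaly score."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)
    return km.transform(embeddings).min(axis=1)
```

In this sketch the SAGE-GAT refinement and the MLP/ArcFace head would be trained jointly on normal clips (using, e.g., machine-type labels as the auxiliary classes), and unseen clips would then be scored by their distance to the fitted K-means centroids.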

Keywords: Anomalous Sound Detection, Graph Neural Network, K-means, ArcFace, MobileNetV2

Recommended citation: J. Lu, W. Guan, M. Zhang and T. Li, "Graph Neural Network-Enhanced Feature Learning for Unsupervised Anomalous Sound Detection," 2025 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Vienna, Austria, 2025, pp. 1511-1517, doi: 10.1109/SMC58881.2025.11342493.