An Efficient and Memory-Friendly Unsupervised Industrial Anomaly Detection Model

2025/07/31
  • Research

Researchers develop a novel unsupervised model that uses paired well-lit and low-light images for efficient and memory-friendly industrial anomaly detection
Industrial anomaly detection is crucial for maintaining quality control and reducing production errors, but traditional supervised models require extensive datasets. While embedding-based methods are promising for unsupervised anomaly detection, they are highly memory-intensive and unsuited to low-light conditions. In a new study, researchers developed an unsupervised model that utilizes both well-lit and low-light images to achieve computationally efficient and memory-friendly industrial anomaly detection.


Title: Proposed unsupervised framework for industrial anomaly detection
Caption: The proposed industrial anomaly detection model is computationally efficient, memory-friendly, and also suitable for low-light conditions, common in manufacturing environments, making it well-suited for real-time industrial applications.

Credit: Dr. Phan Xuan Tan from SIT, Japan
Source Link: https://doi.org/10.1093/jcde/qwaf043
License: CC BY-NC 4.0

Usage restrictions: Credit must be given to the creator. Only non-commercial uses of the work are permitted.

In a bid to improve quality control and reduce production errors, manufacturers are increasingly adopting automated anomaly detection systems. These systems automatically identify defects in manufacturing products and are widely employed in various industries like automotive manufacturing, electronics, and pharmaceuticals. Traditional anomaly detection systems rely on supervised learning, where models are trained on labeled datasets containing both normal and defective samples. However, in most industrial settings, defect data is usually scarce or highly variable, making these approaches impractical. As a result, unsupervised learning models that rely exclusively on defect-free data have emerged as a promising alternative.
Recently, embedding-based methods have achieved state-of-the-art results for unsupervised anomaly detection. Unfortunately, these methods rely on memory banks, resulting in substantial increases in memory consumption and inference times. This is problematic for industrial settings, which are often constrained in memory resources and processing speed. Additionally, these methods perform poorly in low-light or variable lighting conditions, which are common in manufacturing environments.
To address these challenges, a research team led by Associate Professor Phan Xuan Tan from the Innovative Global Program and College of Engineering at Shibaura Institute of Technology, Japan, and Dr. Hoang Dinh Cuong from FPT University, Vietnam, along with other researchers from FPT University, has developed an innovative unsupervised model for industrial anomaly detection (IAD) and localization using paired well-lit and low-light images. “Our approach is the first to leverage paired well-lit and low-light images for detecting anomalies, while remaining lightweight and memory-efficient,” explains Dr. Tan. Their study was made available online on April 24, 2025, and published in Volume 12, Issue 5, of the Journal of Computational Design and Engineering on May 1, 2025.
The proposed model first extracts feature maps for both well-lit and low-light images, capturing their structural and textural details. It then refines these features using two innovative modules. The Low-pass Feature Enhancement (LFE) module emphasizes low-frequency components of feature maps. In low-light environments, high-frequency features are often corrupted by noise, obscuring subtle surface details crucial for anomaly detection. By focusing on low-frequency features, the LFE module effectively suppresses noise in low-light images.
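To make the idea concrete, below is a minimal sketch of how low-frequency components of a feature map could be isolated with a Fourier-domain low-pass filter in PyTorch. This is an illustrative assumption of one possible mechanism, not the authors' exact LFE implementation; the `cutoff` radius and the hard circular mask are assumed choices.

```python
import torch
import torch.fft


def low_pass_enhance(feat: torch.Tensor, cutoff: float = 0.25) -> torch.Tensor:
    """Keep only the low-frequency content of a feature map of shape (B, C, H, W).

    `cutoff` is the fraction of the (normalized) frequency radius that is
    retained around the zero frequency; value and mask shape are assumptions.
    """
    B, C, H, W = feat.shape
    # 2D FFT over the spatial dimensions, with the zero frequency shifted to the center
    spec = torch.fft.fftshift(torch.fft.fft2(feat, norm="ortho"), dim=(-2, -1))

    # Build a centered circular low-pass mask in normalized coordinates [-1, 1]
    yy = torch.linspace(-1.0, 1.0, H, device=feat.device).view(H, 1)
    xx = torch.linspace(-1.0, 1.0, W, device=feat.device).view(1, W)
    mask = ((yy ** 2 + xx ** 2).sqrt() <= cutoff).to(feat.dtype)

    # Filter the spectrum, invert the FFT, and return the real part
    out = torch.fft.ifft2(torch.fft.ifftshift(spec * mask, dim=(-2, -1)), norm="ortho")
    return out.real
```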
Low-light images also suffer from nonuniform lighting. The Illumination-aware Feature Enhancement module addresses this issue by generating an estimated illumination map to dynamically adjust features based on lighting variations. This ensures correct feature representation even in uneven lighting.
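A simple way to picture this module is a small network that predicts a per-pixel illumination map and uses it to rescale the features so that dimly lit regions are boosted. The sketch below is a hypothetical stand-in under that assumption; the layer sizes, the sigmoid gating, and the `1/(illumination + 0.1)` gain are all illustrative, not the paper's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class IlluminationAwareEnhancement(nn.Module):
    """Hypothetical sketch: estimate a spatial illumination map from the
    low-light image and use it to modulate the feature map."""

    def __init__(self, channels: int):
        super().__init__()
        # Tiny CNN that predicts a single-channel illumination map in [0, 1]
        self.illum_net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor, low_light_img: torch.Tensor) -> torch.Tensor:
        # Predict illumination at image resolution, then resize to the feature resolution
        illum = self.illum_net(low_light_img)
        illum = F.interpolate(illum, size=feat.shape[-2:], mode="bilinear",
                              align_corners=False)
        # Darker regions (small illumination values) receive a larger gain;
        # the additive constant prevents the gain from blowing up
        gain = 1.0 / (illum + 0.1)
        return self.refine(feat * gain)
```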
The outputs from these two modules are then integrated using an Adaptive Feature Fusion module, which captures their complementary features and enhances the most relevant details. It does so by employing channel-attention and spatial-attention mechanisms. The former ensures that only the relevant features are emphasized while the redundant ones are suppressed, and the latter preserves fine-grained details.
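For readers who think in code, the following is a minimal sketch of fusing the two enhanced feature maps with channel attention followed by spatial attention, in the spirit of CBAM-style gating. The exact attention design, reduction ratio, and kernel sizes in the paper may differ; everything here is an assumed illustration.

```python
import torch
import torch.nn as nn


class AdaptiveFeatureFusion(nn.Module):
    """Hypothetical sketch: concatenate the two enhanced feature maps, apply
    channel attention to emphasize informative channels, then spatial attention
    to preserve fine-grained detail, and project back to the original width."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatially, excite per channel
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: one convolution over pooled channel statistics
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_lfe: torch.Tensor, feat_illum: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_lfe, feat_illum], dim=1)
        x = x * self.channel_gate(x)                              # emphasize relevant channels
        pooled = torch.cat([x.mean(dim=1, keepdim=True),          # average over channels
                            x.amax(dim=1, keepdim=True)], dim=1)  # max over channels
        x = x * self.spatial_gate(pooled)                         # keep fine spatial detail
        return self.project(x)
```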
Anomalies are then detected by comparing the well-lit features reconstructed from the low-light inputs to the original extracted features. This produces an anomaly map for localizing defects and a global anomaly score for detection. This approach improves computational efficiency by minimizing reliance on memory-intensive feature embeddings, making it suitable for real-time industrial applications.
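As a rough illustration of this comparison step, the sketch below scores each spatial location by the cosine distance between the reconstructed and reference features, upsamples the result to image resolution as the anomaly map, and takes the map maximum as the global score. The distance metric and the max-pooling choice are assumptions for illustration, not necessarily the paper's scoring rule.

```python
import torch
import torch.nn.functional as F


def anomaly_map_and_score(recon_feat: torch.Tensor,
                          ref_feat: torch.Tensor,
                          image_size: tuple[int, int]):
    """Compare reconstructed well-lit features (B, C, H, W) against the
    reference features and return a pixel-level anomaly map plus an
    image-level anomaly score (assumed scoring scheme)."""
    # 1 - cosine similarity per spatial location -> (B, H, W)
    amap = 1.0 - F.cosine_similarity(recon_feat, ref_feat, dim=1, eps=1e-6)
    amap = amap.unsqueeze(1)  # (B, 1, H, W)
    # Upsample to the input resolution for defect localization
    amap = F.interpolate(amap, size=image_size, mode="bilinear", align_corners=False)
    # Global anomaly score: the strongest local deviation in the map
    score = amap.flatten(1).max(dim=1).values
    return amap, score
```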
To evaluate their model, the researchers developed a new dataset called LL-IAD, where LL stands for low-light. It comprises 6,600 paired well-lit and low-light images across 10 object categories. They also tested their model on additional external industrial datasets, including Insulator and Clutch. The model significantly outperformed existing state-of-the-art methods across all datasets. Notably, even in the absence of real well-lit images, it maintained high detection accuracy, proving its practical applicability in challenging industrial environments.
“Our model offers a computationally efficient, memory-friendly, and robust solution for IAD in common low-light manufacturing conditions,” remarks Dr. Tan. “Additionally, LL-IAD is the first dataset specifically designed for IAD under low-light conditions, which we hope will serve as a valuable resource for future research.”
By introducing both an innovative anomaly detection framework and a comprehensive new dataset, this research addresses a key challenge in IAD, with potential benefits across diverse industries.


Reference

Title of original paper: Unsupervised industrial anomaly detection using paired well-lit and low-light images

Journal: Journal of Computational Design and Engineering

DOI: 10.1093/jcde/qwaf043

About Associate Professor Phan Xuan Tan from SIT, Japan

Phan Xuan Tan received the B.E. degree in Electrical-Electronic Engineering from Le Quy Don Technical University, Vietnam, the M.E. degree in Computer and Communication Engineering from Hanoi University of Science and Technology, Vietnam, and the Ph.D. degree in Functional Control Systems from Shibaura Institute of Technology, Japan. He is currently an Associate Professor at Shibaura Institute of Technology. His current research interests include computer vision, deep learning, and image processing.


Funding Information

N/A