Hazard Detection for Robotic Applications as Visual Anomaly Detection

Staff - Faculty of Informatics

Date: 3 May 2024 / 10:00 - 13:00

USI East Campus, Room D1.14

You are cordially invited to attend the PhD Dissertation Defence of Dario Mantegazza on Friday, 3 May 2024, at 10:00 in room D1.14.

For a robot, a hazard is an event or object that poses a risk to its mission or to the robot itself. Some hazards, such as obstacles, are known and can be accounted for; others, such as piercing debris or dense fog, might be unexpected and may occur only on rare occasions. For these, collecting samples to train a perception model is impossible, so a hazard detection system should not require hazard samples to function correctly. In this thesis, we propose to use deep-learning-based visual anomaly detection models to solve hazard detection for mobile robots employed in industry. Relying on visual anomaly detection is particularly well suited to these robots, since most of them are equipped with cameras.

Anomaly detection is a machine learning topic focused on finding rare, unexpected patterns in data that deviate from expected behavior. It can be applied to various fields and data types, but its application in robotics is rather new and limited to specific use cases. Nonetheless, anomaly detection fits hazard detection well, as it requires datasets composed only of non-anomalous (i.e., expected, normal) samples.

No public datasets are available for the task of hazard detection in robotics. We start by closing this gap with a general-purpose visual hazard detection dataset for mobile robots. We then introduce a hazard detection system based on convolutional undercomplete autoencoders. Our approach detects multiple types of hazards using only images from the robot's front-facing camera. We test this solution in two real-world qualitative demonstrations, with a wheeled robot in a lab and an industrial drone in a factory; in both cases, all anomalies are detected. Based on the expectation that a few anomalous samples will be collected during deployment, we experiment with an outlier exposure approach to learn from these key anomalous samples.
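The core mechanism behind the autoencoder-based detector can be illustrated with reconstruction error: a model trained to compress and reconstruct only normal observations will reconstruct unfamiliar inputs poorly, so a large reconstruction error flags an anomaly. Below is a minimal numpy sketch of that idea, using a linear undercomplete autoencoder (computed via SVD) on synthetic data; the bottleneck size, the 99th-percentile threshold, and the toy data are illustrative assumptions, not the convolutional model or data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" observations: flattened 8-D vectors lying near a
# 2-D subspace, standing in for images of hazard-free scenes.
basis = rng.normal(size=(2, 8))
train = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 8))

# Linear undercomplete autoencoder: encode 8 -> 2, decode 2 -> 8,
# with the optimal linear codec given by the top singular vectors.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
encode = vt[:2].T   # (8, 2) bottleneck projection
decode = vt[:2]     # (2, 8) reconstruction

def anomaly_score(x):
    """Reconstruction error: large when x deviates from the normal data."""
    recon = (x - mean) @ encode @ decode + mean
    return np.linalg.norm(x - recon, axis=-1)

# Threshold chosen from the training distribution (assumed 99th percentile).
threshold = np.percentile(anomaly_score(train), 99)

normal_obs = rng.normal(size=(1, 2)) @ basis + 0.05 * rng.normal(size=(1, 8))
hazard_obs = 3.0 * rng.normal(size=(1, 8))   # off-distribution "hazard"

print(anomaly_score(normal_obs)[0], anomaly_score(hazard_obs)[0], threshold)
```

The off-distribution sample receives a score far above the threshold learned from normal data alone, which is the property that lets such a detector run without any hazard samples at training time.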
We employ a Real-NVP model, combined with a feature extractor and a novel loss, to train on a few detected anomalies in addition to normal samples. Our experiments show that this solution effectively increases detection performance across all anomalies, improving the AUC by 9.6%. Similarly, we can expect the data collected by deployed robots to grow too large to be manually inspected and labeled in full. We propose two novel active learning methods designed for anomaly detection with Real-NVP, and test them against six other query strategies from the literature, across more than 6500 experiments. We show that, when multiple samples are collected, our approaches are the best at selecting informative samples. Lastly, we study deep learning solutions for 3D vision that fit our task: we introduce a novel 3D point cloud dataset for semantic segmentation, explore how deep learning approaches generalize to unseen point clouds, and study how pre-trained feature extraction models perform on 3D anomaly detection tasks. Our results show that, while our approaches outperform older models and baselines, ad hoc methods remain the models of choice for 3D anomaly detection.
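The outlier exposure idea can be sketched independently of Real-NVP: a density model is fit to assign high likelihood to normal samples, while an extra term pushes down the likelihood assigned to the few collected anomalies. The toy sketch below uses a 1-D Gaussian standing in for the normalizing flow, with a crude grid search in place of gradient descent; the exposure weight `lam`, the NLL margin of 4.0, and the toy data are illustrative assumptions, not the novel loss from the thesis.

```python
import math

def nll(x, mu, sigma):
    """Negative log-likelihood of x under N(mu, sigma^2)."""
    return (0.5 * ((x - mu) / sigma) ** 2
            + math.log(sigma) + 0.5 * math.log(2 * math.pi))

def outlier_exposure_loss(normals, anomalies, mu, sigma, lam=0.5):
    """Fit normals (low NLL) while pushing exposed anomalies toward high NLL.
    `lam` and the 4.0 margin are illustrative assumptions."""
    fit = sum(nll(x, mu, sigma) for x in normals) / len(normals)
    # Hinge-style exposure term: penalize anomalies whose NLL is too low,
    # capped so it cannot dominate the fitting objective.
    expose = sum(max(0.0, 4.0 - nll(x, mu, sigma)) for x in anomalies)
    return fit + lam * expose / len(anomalies)

normals = [0.1, -0.2, 0.05, 0.3, -0.1]   # toy "normal" scores
anomalies = [2.5, 3.0]                   # the few detected anomalies

# Grid search over (mu, sigma) in place of gradient-based training.
candidates = [(m / 10, s / 10) for m in range(-10, 31) for s in range(2, 21)]
mu, sigma = min(candidates,
                key=lambda p: outlier_exposure_loss(normals, anomalies, *p))
print(mu, sigma)
```

The fitted density stays centered on the normal samples, and the exposed anomalies end up with much higher NLL than normal ones, which is the behavior the exposure term is meant to enforce.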

Dissertation Committee:
- Prof. Luca Maria Gambardella, Università della Svizzera italiana, Switzerland (Research Advisor)
- Prof. Alessandro Giusti, Università della Svizzera italiana, Switzerland (Research co-Advisor)
- Prof. Kai Hormann, Università della Svizzera italiana, Switzerland (Internal Member)
- Prof. Paolo Tonella, Università della Svizzera italiana, Switzerland (Internal Member)
- Prof. Giacomo Boracchi, Politecnico di Milano, Italy (External Member)
- Prof. Simone Gasparini, Institut National Polytechnique de Toulouse, France (External Member)