Results 1–3 of 3
Visual Novelty Detection for Autonomous Inspection Robots, 2006
Abstract - Cited by 4 (0 self)
Mobile robot applications that involve automated exploration and inspection of environments are often dependent on novelty detection, the ability to differentiate between common and uncommon perceptions. Because novelty can be anything that deviates from the normal context, we argue that in order to implement a novelty filter it is necessary to exploit the robot’s sensory data from the ground up, building models of normality rather than abnormality. In this work we use unrestricted colour visual data as perceptual input to on-line incremental learning algorithms. Unlike other sensor modalities, vision can provide a variety of useful information about the environment through massive amounts of data, which often need to be reduced for real-time operation. Here we use mechanisms of visual attention to select candidate image regions to be encoded and fed to higher levels of processing, enabling the localisation of novel features within the input image frame. An extensive series of experiments using visual input, obtained by a real …
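The core idea of the abstract — learning a model of normality online and flagging anything that deviates from it — can be sketched with a simple prototype-distance filter. This is only an illustrative toy, loosely inspired by GWR-style networks; the threshold, learning rate, and class name are arbitrary choices, not taken from the paper:

```python
import numpy as np

class NoveltyFilter:
    """Toy online model of 'normality': keep prototype vectors of
    previously seen inputs; anything far from all prototypes is novel.
    Hypothetical sketch; the GWR network used in the paper is more
    elaborate (node insertion criteria, habituation counters, edges)."""

    def __init__(self, threshold=0.5, lr=0.1):
        self.threshold = threshold  # distance beyond which an input counts as novel
        self.lr = lr                # how fast the nearest prototype tracks familiar input
        self.prototypes = []

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        if not self.prototypes:
            self.prototypes.append(x.copy())
            return True  # first input is novel by definition
        dists = [np.linalg.norm(x - p) for p in self.prototypes]
        i = int(np.argmin(dists))
        if dists[i] > self.threshold:
            self.prototypes.append(x.copy())  # novel: extend the model of normality
            return True
        # familiar: nudge the nearest prototype toward the input (habituation)
        self.prototypes[i] += self.lr * (x - self.prototypes[i])
        return False

f = NoveltyFilter(threshold=0.5)
print(f.observe([0.0, 0.0]))  # True  (nothing seen yet)
print(f.observe([0.1, 0.0]))  # False (close to a known prototype)
print(f.observe([2.0, 2.0]))  # True  (far from the model of normality)
```

In a robot pipeline, `x` would be the encoded feature vector of an attended image region, so novelty can be localised to the region that produced it.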
Visual novelty detection with automatic scale selection, 2007
Abstract
This paper presents experiments with an autonomous inspection robot, whose task was to highlight novel features in its environment from camera images. The experiments used two different attention mechanisms – saliency map and multi-scale Harris detector – and two different novelty detection mechanisms – the Grow-When-Required (GWR) neural network and an incremental Principal Component Analysis (PCA). For all mechanisms we compared fixed-scale image encoding with automatically scaled image patches. Results show that automatic scale selection provides a more efficient representation of the visual input space, but that performance is generally better using a fixed-scale image encoding. © 2007 Elsevier B.V. All rights reserved.
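The PCA-based novelty mechanism mentioned in the abstract can be illustrated by scoring inputs with their reconstruction error against a low-dimensional subspace fitted to "normal" data. This sketch uses a one-shot batch SVD for brevity, whereas the paper uses an incremental PCA; all data and dimensions here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal" feature vectors: they vary mostly along one direction.
normal = rng.normal(size=(200, 1)) @ np.array([[1.0, 2.0, 0.5]])
normal += 0.05 * rng.normal(size=normal.shape)

# Fit a 1-component PCA to the normal data (batch SVD here for brevity;
# an incremental variant would update mean and basis per sample).
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:1]  # top principal direction

def reconstruction_error(x):
    """Distance between x and its projection onto the learned subspace:
    small for familiar inputs, large for novel ones."""
    centered = np.asarray(x, dtype=float) - mean
    recon = centered @ basis.T @ basis
    return float(np.linalg.norm(centered - recon))

familiar = normal[0]
novel = np.array([0.0, 0.0, 5.0])  # lies off the learned subspace
print(reconstruction_error(familiar) < reconstruction_error(novel))  # True
```

Thresholding this error gives a novelty decision; the paper's comparison with the GWR network then comes down to which model of normality (subspace vs. prototype graph) generalises better for a given image encoding.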