Results 1 - 10 of 318
Hierarchical Models of Object Recognition in Cortex, 1999
Cited by 836 (84 self)
The classical model of visual processing in cortex is a hierarchy of increasingly sophisticated representations, extending in a natural way the model of simple to complex cells of Hubel and Wiesel. Somewhat surprisingly, little quantitative modeling has been done in the last 15 years to explore the biological feasibility of this class of models to explain higher level visual processing, such as object recognition. We describe a new hierarchical model that accounts well for this complex visual task, is consistent with several recent physiological experiments in inferotemporal cortex and makes testable predictions. The model is based on a novel MAX-like operation on the inputs to certain cortical neurons which may have a general role in cortical function.
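The MAX-like operation the abstract describes can be illustrated with a toy comparison (an illustrative sketch, not the paper's full model): unlike a linear SUM, a MAX over afferent inputs leaves the response to the preferred stimulus unchanged when a weaker distractor is added.

```python
import numpy as np

def sum_unit(afferents):
    # linear pooling: response changes whenever any afferent changes
    return np.sum(afferents)

def max_unit(afferents):
    # MAX-like pooling: response tracks only the strongest afferent
    return np.max(afferents)

preferred = np.array([1.0, 0.0, 0.0])   # preferred stimulus alone
cluttered = np.array([1.0, 0.4, 0.0])   # a weaker distractor added

# the MAX response is invariant to the added clutter, the SUM is not
```

This clutter tolerance is one reason the abstract argues a MAX-like operation may play a general role in cortical pooling.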
Slow Feature Analysis: Unsupervised Learning of Invariances
Cited by 245 (13 self)
Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
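The core SFA recipe in the abstract — nonlinear expansion, then principal component analysis of the expanded signal and its time derivative — can be sketched in a few lines (a minimal sketch with a quadratic expansion; parameter choices are illustrative):

```python
import numpy as np

def sfa(x, n_features=2):
    """Minimal SFA sketch. x: (T, d) input time series."""
    # nonlinear (quadratic) expansion of the input signal
    z = np.hstack([x] + [x[:, i:i + 1] * x[:, i:] for i in range(x.shape[1])])
    z = z - z.mean(axis=0)
    # PCA step 1: whiten the expanded signal
    d, E = np.linalg.eigh(z.T @ z / len(z))
    keep = d > 1e-10
    zw = z @ (E[:, keep] / np.sqrt(d[keep]))
    # PCA step 2: on the time derivative; the slowest (most invariant)
    # features correspond to the smallest eigenvalues
    dz = np.diff(zw, axis=0)
    dd, V = np.linalg.eigh(dz.T @ dz / len(dz))
    return zw @ V[:, :n_features]   # features ordered by slowness
```

Fed a mixture of a slow and a fast sinusoid, the first output recovers the slowly varying source, matching the abstract's claim that features come out ordered by their degree of invariance.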
Emergence of Phase- and Shift-Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces, 2000
Cited by 201 (31 self)
In this article, we show that the same principle of independence maximization can explain the emergence of phase- and shift-invariant features, similar to those found in complex cells. This new kind of emergence is obtained by maximizing the independence between norms of projections on linear subspaces (instead of the independence of simple linear filter outputs). The norms of the projections on such "independent feature subspaces" then indicate the values of invariant features.
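The phase invariance of a subspace norm can be seen in a toy instance (an illustrative sketch, not the paper's learning algorithm): the norm of the projection onto a quadrature sine/cosine filter pair is constant across stimulus phase, while a single linear filter's output is not.

```python
import numpy as np

n = 64
k = 2 * np.pi * 3 / n            # filter frequency: 3 cycles per window
idx = np.arange(n)
f_even, f_odd = np.cos(k * idx), np.sin(k * idx)   # subspace basis

def subspace_energy(x):
    # norm of the 2-D projection onto the quadrature pair
    return np.hypot(f_even @ x, f_odd @ x)

phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
energies = [subspace_energy(np.cos(k * idx + p)) for p in phases]
linear = [f_even @ np.cos(k * idx + p) for p in phases]
# energies is (nearly) constant across phase; linear varies strongly
```

This is the classical energy model of complex cells; the paper's point is that such subspaces emerge from independence maximization rather than being built in by hand.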
Invariant Object Recognition in the Visual System with Novel Views of 3D Objects, 2002
Cited by 111 (17 self)
... In this article, we show how trace learning could solve the problem of in-depth rotation-invariant object recognition by developing representations of the transforms that features undergo when they are on the surfaces of 3D objects. Moreover, we show that having learned how features on 3D objects transform geometrically as the object is rotated in depth, the network can correctly recognize novel 3D variations within a generic view of an object composed of a new combination of previously learned features. These results are demonstrated in simulations of a hierarchical network model (VisNet) of the visual system that show that it can develop representations useful for the recognition of 3D objects by forming perspective-invariant representations to allow generalization within a generic view.
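Trace learning, the mechanism this abstract builds on, can be sketched for a single unit (a Földiák-style sketch under assumed parameters, not the VisNet implementation): a decaying trace of the unit's recent activity gates the Hebbian update, binding together inputs that occur close in time — such as successive views of one object.

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_learning(seq, eta=0.1, decay=0.8):
    """One-unit trace-rule sketch. seq: (T, d) sequence of input views."""
    w = rng.random(seq.shape[1]) * 0.01   # small positive initial weights
    trace = 0.0
    for x in seq:
        y = w @ x
        trace = decay * trace + (1 - decay) * y   # temporal trace of activity
        w += eta * trace * x                      # Hebbian update gated by the trace
        w /= np.linalg.norm(w)                    # keep weights bounded
    return w
```

Trained on the view sequence of one object, the unit comes to respond to all of that object's views more than to another object's, which is the transform-invariance the abstract describes.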
Transfer of Coded Information from Sensory to Motor Networks, 1995
Cited by 101 (14 self)
During sensory-guided motor tasks, information must be transferred from arrays of neurons coding target location to motor networks that generate and control movement. We address two basic questions about this information transfer. First, what mechanisms assure that the different neural representations align properly so that activity in the sensory network representing target location evokes a motor response generating accurate movement toward the target? Coordinate transformations may be needed to put the sensory data into a form appropriate for use by the motor system. For example, in visually guided reaching the location of a target relative to the body is determined by a combination of the position of its image on the retina and the direction of gaze. What assures that the motor network responds to the appropriate combination of sensory inputs corresponding to target position in body- or arm-centered coordinates? To answer these questions, we model a sensory network coding target p...
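The retina-plus-gaze combination mentioned in the abstract is often modeled with gain fields; here is a hedged sketch (tuning curves, gain slopes, and readout are all illustrative assumptions, not the paper's network): units with Gaussian retinal tuning are multiplicatively scaled by gaze, and a linear readout trained by least squares recovers the body-centered position retinal + gaze.

```python
import numpy as np

rng = np.random.default_rng(1)
centers = np.linspace(-20.0, 20.0, 15)   # preferred retinal positions (deg)

def population(retinal, gaze):
    f = np.exp(-(retinal - centers) ** 2 / 50.0)   # Gaussian retinal tuning
    # two subpopulations with opposite gaze gain ("gain fields")
    return np.concatenate([f * (1 + 0.05 * gaze), f * (1 - 0.05 * gaze)])

retinal = rng.uniform(-15, 15, 500)
gaze = rng.uniform(-10, 10, 500)
R = np.array([population(x, g) for x, g in zip(retinal, gaze)])
target = retinal + gaze                  # body-centered target position
w, *_ = np.linalg.lstsq(R, target, rcond=None)   # linear readout weights
```

The point of the sketch: gain-modulated responses form a basis from which downstream networks can read out the transformed coordinate with purely linear weights.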
Face recognition by humans: Nineteen results all computer vision researchers should know about
Proceedings of the IEEE, 2006
Cited by 97 (0 self)
Increased knowledge about the ways people recognize each other may help to guide efforts to develop practical automatic face-recognition systems.
Learning Optimized Features for Hierarchical Models of Invariant Object Recognition, 2002
Cited by 93 (28 self)
There is an ongoing debate over the capabilities of hierarchical neural feed-forward architectures for performing real-world invariant object recognition. Although a variety of hierarchical models exists, appropriate supervised and unsupervised learning methods are still an issue of intense research. We propose a feedforward model for recognition that shares components like weight sharing, pooling stages, and competitive nonlinearities with earlier approaches, but focuses on new methods for learning optimal feature-detecting cells in intermediate stages of the hierarchical network.
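The three shared components the abstract names — weight sharing, competitive nonlinearities, and pooling — can be sketched in one dimension (filter values and stage sizes are illustrative, not the paper's architecture):

```python
import numpy as np

def conv1d(x, w):
    # weight sharing: the same filter w is applied at every position
    return np.array([x[i:i + len(w)] @ w for i in range(len(x) - len(w) + 1)])

def competitive(r):
    # soft winner-take-all across feature channels at each position
    e = np.exp(r - r.max(axis=0))
    return r * (e / e.sum(axis=0))

def pool(r, width=2):
    # max pooling over neighboring positions for local shift invariance
    return np.array([r[:, i:i + width].max(axis=1)
                     for i in range(0, r.shape[1] - width + 1, width)]).T

x = np.array([0., 1., 0., -1., 0., 1., 0., -1.])
filters = np.array([[1., -1.], [1., 1.]])      # illustrative feature detectors
r = np.stack([conv1d(x, w) for w in filters])  # shape: (channels, positions)
out = pool(competitive(r))
```

The paper's contribution sits in how the intermediate filters (here fixed by hand) are learned.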
A theory of object recognition: computations and circuits in the feedforward path of the ventral stream in primate visual cortex, 2005
"... ..."
(Show Context)
Learning to Represent Spatial Transformations with Factored Higher-Order Boltzmann Machines, 2010
Cited by 75 (18 self)
To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.
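The low-rank approximation of the three-way interaction tensor can be sketched numerically (notation and sizes assumed for illustration): each factor f contributes a rank-1 term Wx[i,f] * Wy[j,f] * Wh[k,f], so the hidden input can be computed by projecting both images through the factors and multiplying, without ever forming the cubic tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nh, nf = 16, 16, 8, 4            # pixels, pixels, hiddens, factors
Wx = rng.normal(size=(nx, nf))           # filters on the first image
Wy = rng.normal(size=(ny, nf))           # filters on the second image
Wh = rng.normal(size=(nh, nf))           # factor-to-hidden weights

def hidden_input(x, y):
    # each factor filters both images; the products drive the hiddens
    return Wh @ ((Wx.T @ x) * (Wy.T @ y))

# equivalent full-tensor computation, for comparison only (cubic cost)
W = np.einsum('if,jf,kf->ijk', Wx, Wy, Wh)
x, y = rng.normal(size=nx), rng.normal(size=ny)
direct = np.einsum('ijk,i,j->k', W, x, y)
```

The factored form needs nf * (nx + ny + nh) parameters instead of nx * ny * nh, which is what makes learning on larger image patches feasible.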
A hierarchical Bayesian model of invariant pattern recognition in the visual cortex
In Proceedings of the International Joint Conference on Neural Networks. IEEE, 2005
Cited by 71 (2 self)
We describe a hierarchical model of invariant visual pattern recognition in the visual cortex. In this model, the knowledge of how patterns change when objects move is learned and encapsulated in terms of high-probability sequences at each level of the hierarchy. Configuration of object parts is captured by the patterns of coincident high-probability sequences. This knowledge is then encoded in a highly efficient Bayesian network structure. The learning algorithm uses a temporal stability criterion to discover object concepts and movement patterns. We show that the architecture and algorithms are biologically plausible. The large-scale architecture of the system matches the large-scale organization of the cortex, and the micro-circuits derived from the local computations match the anatomical data on cortical circuits. The system exhibits invariance across a wide variety of transformations and is robust in the presence of noise. Moreover, the model also offers alternative explanations for various known cortical phenomena.
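The temporal stability criterion mentioned in the abstract can be illustrated with a toy grouping procedure (an assumption-laden sketch, not the paper's algorithm): patterns that frequently follow one another in time are assigned to the same group, which is how successive views of one object get bound together.

```python
import numpy as np

def temporal_groups(sequence, n_patterns, threshold=0.2):
    """Group pattern indices by temporal adjacency in `sequence`."""
    counts = np.zeros((n_patterns, n_patterns))
    for a, b in zip(sequence, sequence[1:]):
        counts[a, b] += 1          # symmetric co-occurrence counts
        counts[b, a] += 1
    p = counts / counts.sum(axis=1, keepdims=True).clip(min=1)
    adj = (p > threshold) | (p.T > threshold)   # strong temporal links
    groups, seen = [], set()
    for s in range(n_patterns):    # connected components over the links
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(np.flatnonzero(adj[u]))
        seen |= comp
        groups.append(sorted(comp))
    return groups
```

Given a sequence that alternates within {0, 1} and then within {2, 3}, the procedure recovers those two temporal groups despite the single crossing transition.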