Results 1–10 of 110
Face recognition by independent component analysis
IEEE Transactions on Neural Networks, 2002
Cited by 348 (5 self)
Abstract—A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance. Index Terms—Eigenfaces, face recognition, independent component analysis (ICA), principal component analysis (PCA), unsupervised learning.
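The two ICA "architectures" described in this abstract differ only in which axis of the data matrix is treated as the random-variable axis before an ICA algorithm (the paper uses an infomax variant) is applied. A minimal numpy sketch of that setup, with random data standing in for the FERET images and all sizes invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a face database: 50 images of 16x16 = 256 pixels each.
# (The paper uses FERET face images; random data here only fixes shapes.)
X = rng.standard_normal((50, 256))
X = X - X.mean(axis=0)           # center each pixel across images

# PCA basis images: right singular vectors of the centered data matrix,
# i.e. eigenvectors of the pixelwise covariance.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pca_basis = Vt[:20]              # top 20 basis images (20 x 256)
coeffs = X @ pca_basis.T         # each face as a linear combination (50 x 20)

# The two ICA architectures are transposes of one another at the input:
arch1_input = X                  # images as random variables -> local basis images
arch2_input = X.T                # pixels as random variables -> factorial face code
```

An ICA routine run on `arch1_input` versus `arch2_input` would then yield the spatially local basis images and the factorial code, respectively, that the abstract contrasts.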
Local Feature Analysis: A general statistical theory for object representation, 1996
Cited by 285 (8 self)
Low-dimensional representations of sensory signals are key to solving many of the computational problems encountered in high-level vision. Principal Component Analysis has been used in the past to derive practically useful compact representations for different classes of objects. One major objection to the applicability of PCA is that it invariably leads to global, non-topographic representations that are not amenable to further processing and are not biologically plausible. In this paper we present a new mathematical construction, Local Feature Analysis (LFA), for deriving local topographic representations for any class of objects. The LFA representations are sparse-distributed and, hence, are effectively low-dimensional and retain all the advantages of the compact representations of the PCA. But unlike the global eigenmodes, they give a description of objects in terms of statistically derived local features and their positions. We illustrate the theory by using it to extract loca...
A connectionist approach to knowledge representation and limited inference
Cognitive Science, 1988
Cited by 35 (5 self)
Although the connectionist approach has led to elegant solutions to a number of problems in cognitive science and artificial intelligence, its suitability for dealing with problems in knowledge representation and inference has often been questioned. This paper partly answers this criticism by demonstrating that effective solutions to certain problems in knowledge representation and limited inference can be found by adopting a connectionist approach. The paper presents a connectionist realization of semantic networks, that is, it describes how knowledge about concepts, their properties, and the hierarchical relationship between them may be encoded as an interpreter-free massively parallel network of simple processing elements that can solve an interesting class of inheritance and recognition problems extremely fast, in time proportional to the depth of the conceptual hierarchy. The connectionist realization is based on an evidential formulation that leads to principled solutions to the problems of exceptions and conflicting multiple inheritance situations during inheritance, and the best-match or partial-match computation during recognition. The paper also identifies constraints that must be satisfied by the conceptual structure in order to arrive at an efficient parallel realization.
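The inheritance-with-exceptions behaviour this abstract describes, with retrieval time proportional to hierarchy depth, can be illustrated by a toy sequential lookup (class and property names here are invented; the paper's model realizes this as a massively parallel network, not a loop):

```python
# Toy concept hierarchy: a property lookup walks up the parent chain,
# so its cost grows with the depth of the conceptual hierarchy.
class Concept:
    def __init__(self, name, parent=None, **props):
        self.name, self.parent, self.props = name, parent, props

    def lookup(self, prop):
        """Return (value, steps): steps = hierarchy levels climbed."""
        node, steps = self, 0
        while node is not None:
            if prop in node.props:
                return node.props[prop], steps
            node, steps = node.parent, steps + 1
        return None, steps

animal = Concept("animal", breathes=True)
bird = Concept("bird", animal, flies=True)
penguin = Concept("penguin", bird, flies=False)  # exception overrides parent

print(penguin.lookup("flies"))     # (False, 0): local exception wins
print(penguin.lookup("breathes"))  # (True, 2): inherited from 'animal'
```

The exception case shows why a principled treatment of conflicting values is needed: the locally stored `flies=False` must take precedence over the inherited default.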
Convergence-Zone Episodic Memory: Analysis and Simulations
Neural Networks, 1997
Cited by 33 (1 self)
Human episodic memory provides a seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. The system is believed to consist of a fast, temporary storage in the hippocampus, and a slow, long-term storage within the neocortex. This paper presents a neural network model of the hippocampal episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse coded in the binding layer, and stored on the weights between layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. For many configurations of the model, a theoretical lower bound for the memory capacity can be derived, and it can be an order of magnitude or higher than the number of all units in the model, and several orders of magnitude higher than the number of binding-layer units. Computational simulations further indicate that the average capacity is an order of magnitude larger than the theoretical lower bound, and making the connectivity between layers sparser causes an even further increase in capacity. Simulations also show that if more descriptive binding patterns are used, the errors tend to be more plausible (patterns are confused with other similar patterns), with a slight cost in capacity. The convergence-zone episodic memory therefore accounts for the immediate storage and associative retrieval capability and large capacity of the hippocampal memory, and shows why the memory encoding areas can be much smaller than the perceptual maps, consist of rather coarse computational units, and be only sparsely connected t...
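The store/retrieve cycle the abstract describes (feature pattern → binding pattern → reactivated feature pattern) can be sketched with clipped-Hebbian binary weights between a feature layer and a much smaller binding layer. Layer sizes, sparsity, and thresholds below are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, n_bind, k = 200, 40, 5     # layer sizes and binding sparsity (assumed)

W = np.zeros((n_feat, n_bind), dtype=bool)   # binary Hebbian weights

def store(pattern):
    """Bind a sparse feature pattern to a random sparse binding pattern."""
    binding = np.zeros(n_bind, dtype=bool)
    binding[rng.choice(n_bind, k, replace=False)] = True
    W[np.ix_(pattern, binding)] = True       # clipped-Hebbian outer product
    return binding

def retrieve(partial):
    """Partial cue -> binding pattern -> full feature pattern."""
    drive = W[partial].sum(axis=0)           # input to each binding unit
    binding = drive >= drive.max()           # maximally driven binding units
    feats = W[:, binding].sum(axis=1)
    return np.flatnonzero(feats >= binding.sum())  # fully supported features

pattern = rng.choice(n_feat, 10, replace=False)
store(pattern)
recovered = retrieve(pattern[:4])            # cue with only 4 of 10 features
```

With a single stored pattern the partial cue recovers the full feature set exactly; with many stored patterns, binding patterns begin to overlap, which is where the paper's capacity analysis applies.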
Iterative Retrieval of Sparsely Coded Associative Memory Patterns
Neural Networks, 1995
Cited by 32 (15 self)
We investigate the pattern completion performance of neural autoassociative memories composed of binary threshold neurons for sparsely coded binary memory patterns. Focussing on iterative retrieval, effective threshold control strategies are introduced. These are investigated by means of computer simulation experiments and analytical treatment. To evaluate the system's performance we consider the completion capacity C and the mean retrieval errors. The asymptotic completion capacity values for the recall of sparsely coded binary patterns in one-step retrieval are known to be ln 2/4 ≈ 17.3% for binary Hebbian learning, and 1/(8 ln 2) ≈ 18% for additive Hebbian learning [Palm, 1988]. These values are accomplished with vanishing error probability and yet are higher than those obtained in other known neural memory models. Recent investigations on binary Hebbian learning have proved that iterative retrieval as a more refined retrieval method does not improve the asymptotic completion capacit...
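The two asymptotic capacity figures quoted in this abstract can be checked directly:

```python
import math

# Asymptotic completion capacities for one-step retrieval (Palm, 1988):
binary_hebb = math.log(2) / 4          # ln 2 / 4  for binary Hebbian learning
additive_hebb = 1 / (8 * math.log(2))  # 1/(8 ln 2) for additive Hebbian learning

print(f"{binary_hebb:.4f}  {additive_hebb:.4f}")  # 0.1733  0.1803
```

So additive Hebbian learning is asymptotically slightly better (about 18% versus about 17.3%), which matches the ordering stated in the abstract.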
Erlbaum Associates
User Centered System Design: New Perspectives on Human-Computer Interaction, 1986
Cited by 32 (0 self)
Storing and restoring visual input with collaborative rank coding and
Pattern Separation and Synchronization in Spiking Associative Memories and Visual Areas
Neural Networks, 2001
Cited by 30 (13 self)
Scene analysis in the mammalian visual system, conceived as a distributed and parallel process, faces the so-called binding problem. As a possible solution, the temporal correlation hypothesis has been suggested and implemented in phase-coding models.
Towards Cortex Sized Artificial Neural Systems
Neural Networks, 2006
Cited by 27 (3 self)
We propose, implement, and discuss an abstract model of the mammalian neocortex. This model is instantiated with a sparse recurrently connected neural network that has spiking leaky integrator units and continuous Hebbian learning. First we study the structure, modularization, and size of neocortex, and then we describe a generic computational model of the cortical circuitry. A characterizing feature of the model is that it is based on the modularization of neocortex into hypercolumns and minicolumns. Both a floating-point and a fixed-point arithmetic implementation of the model are presented along with simulation results. We conclude that an implementation on a cluster computer is computation bounded rather than communication bounded. Mouse and rat cortex sized versions of our model execute in 44% and 23% of real time, respectively. Further, an instance of the model with 1.6 × 10^6 units and 2 × 10^11 connections performed noise reduction and pattern completion. These implementations represent the current frontier of large-scale abstract neural network simulations in terms of network size and running speed.
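The spiking leaky integrator unit this abstract mentions can be sketched in a few lines; the time constant, threshold, reset value, and input level below are assumed for illustration, not taken from the paper:

```python
# One Euler step of a spiking leaky integrator unit: the membrane value
# leaks toward zero, is driven by its input, and spikes-and-resets when
# it crosses threshold.  All parameter values here are assumptions.
def step(v, inp, tau=10.0, dt=1.0, threshold=1.0):
    """Advance one time step; return (new_v, spiked)."""
    v = v + dt * (-v / tau + inp)
    if v >= threshold:
        return 0.0, True            # spike and reset to 0
    return v, False

v, spikes = 0.0, 0
for t in range(50):
    v, spiked = step(v, inp=0.15)   # constant suprathreshold drive
    spikes += spiked
```

With this constant input the unit charges toward 1.5, crosses threshold every 11 steps, and therefore emits a regular spike train; a subthreshold input (e.g. `inp=0.05`, equilibrium 0.5) would never spike.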
Scene Segmentation by Spike Synchronization in Reciprocally Connected Visual Areas I. Local Effects of Cortical Feedback
Biological Cybernetics, 2002
Cited by 24 (5 self)
To investigate scene segmentation in the visual system we present a model of two reciprocally connected visual areas using spiking neurons. Area P corresponds to the orientation selective subsystem of the primary visual cortex, while the central visual area C is modeled as associative memory representing stimulus objects according to Hebbian learning. Without feedback from area C, a single stimulus results in relatively slow and irregular activity, synchronized only for neighboring patches (slow state), while in the complete model activity is faster with enlarged synchronization range (fast state). Presenting a superposition of several stimulus objects, scene segmentation happens on a time scale of hundreds of milliseconds by alternating epochs of the slow and fast state, where neurons representing the same object are simultaneously in the fast state. Correlation analysis reveals synchronization on different time scales as found in experiments (T, C, H peaks). On the fast time scale (T peaks, gamma frequency range), recordings from two sites coding either different or the same object lead to correlograms that are either flat or exhibit oscillatory modulations with a central peak. This is in agreement with experimental findings while standard phase coding models would predict shifted peaks in the case of different objects.