Results 1–10 of 27
Efficient coding of natural sounds
 Nature Neuroscience
, 2002
Cited by 135 (3 self)
The auditory system encodes sound by decomposing the amplitude signal arriving at the ear into multiple frequency bands whose center frequencies and bandwidths are approximately logarithmic functions of the distance from the stapes. This particular organization is thought to result from the adaptation of cochlear mechanisms to the statistics of an animal’s auditory environment. Here we report that several basic auditory nerve fiber tuning properties can be accounted for by adapting a population of filter shapes to optimally encode natural sounds. The form of the code is dependent on the class of sounds, resembling a Fourier transformation when optimized for animal vocalizations and a wavelet transformation when optimized for non-biological environmental sounds. Only for a combined set of vocalizations and environmental sounds does the optimal code follow scaling characteristics that are consistent with physiological data. These results suggest that the population of auditory nerve fibers encodes a broad set of natural sounds in a manner that is consistent with information-theoretic principles.
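The logarithmic center-frequency organization the abstract describes can be illustrated numerically. The sketch below is a toy constant-Q filter-bank layout, not the paper's model; the function name, frequency range, and Q value are all assumptions:

```python
import numpy as np

def log_filterbank(n_filters=32, f_lo=100.0, f_hi=8000.0, q=4.0):
    """Toy cochlea-like layout: center frequencies spaced geometrically
    (i.e. logarithmic in cochlear position), bandwidths proportional to
    center frequency (constant Q). Illustrative only."""
    centers = f_lo * (f_hi / f_lo) ** (np.arange(n_filters) / (n_filters - 1))
    bandwidths = centers / q
    return centers, bandwidths

centers, bws = log_filterbank()
# adjacent center-frequency ratios are constant -- the signature of log spacing
ratios = centers[1:] / centers[:-1]
```

Geometric spacing means each filter sits a fixed fraction of an octave above its neighbor, mirroring the roughly logarithmic tonotopic map described above.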
Sparse and shift-invariant representations of music
 IEEE Transactions on Speech and Audio Processing
, 2006
Cited by 47 (9 self)
Graph regularized sparse coding for image representation
 IEEE Transactions on Image Processing
, 2011
Cited by 32 (1 self)
Sparse coding has received an increasing amount of interest in recent years. It is an unsupervised learning algorithm that finds a basis set capturing high-level semantics in the data and learns sparse coordinates in terms of the basis set. Originally applied to modeling the human visual cortex, sparse coding has been shown useful for many applications. However, most of the existing approaches to sparse coding fail to consider the geometrical structure of the data space. In many real applications, the data is more likely to reside on a low-dimensional submanifold embedded in the high-dimensional ambient space. It has been shown that the geometrical information of the data is important for discrimination. In this paper, we propose a graph-based algorithm, called graph regularized sparse coding, to learn sparse representations that explicitly take into account the local manifold structure of the data. By using the graph Laplacian as a smoothing operator, the obtained sparse representations vary smoothly along the geodesics of the data manifold. Extensive experimental results on image classification and clustering have demonstrated the effectiveness of our proposed algorithm. Index Terms—Image classification, image clustering, manifold learning, sparse coding.
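The graph-regularization idea can be made concrete as an objective: reconstruction error, plus a graph-Laplacian smoothness term Tr(S L Sᵀ), plus an ℓ1 sparsity penalty. The sketch below is an illustrative sanity check of that smoothness term, not the paper's solver; matrix shapes and names are assumptions:

```python
import numpy as np

def graph_laplacian(W):
    # L = D - W, with D the diagonal degree matrix of adjacency W
    return np.diag(W.sum(axis=1)) - W

def objective(X, D, S, L, alpha, beta):
    """Graph-regularized sparse coding objective (illustrative):
    reconstruction error + alpha * Tr(S L S^T) + beta * l1 sparsity."""
    recon = np.linalg.norm(X - D @ S) ** 2
    smooth = np.trace(S @ L @ S.T)
    sparse = np.abs(S).sum()
    return recon + alpha * smooth + beta * sparse

# Two linked data points (W[0,1] = 1): identical codes incur no
# smoothness cost, while codes that differ across the edge are penalized.
W = np.array([[0.0, 1.0], [1.0, 0.0]])
L = graph_laplacian(W)
S_same = np.array([[1.0, 1.0]])      # 1 basis atom, 2 samples
S_diff = np.array([[1.0, -1.0]])
smooth_same = np.trace(S_same @ L @ S_same.T)
smooth_diff = np.trace(S_diff @ L @ S_diff.T)
```

Since Tr(S L Sᵀ) = ½ Σᵢⱼ Wᵢⱼ ‖sᵢ − sⱼ‖², minimizing it forces codes of graph-adjacent samples to agree, which is exactly the manifold-smoothness behavior the abstract claims.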
Modularity in the Motor System: Decomposition of Muscle Patterns as Combinations of Time-Varying Synergies
, 2001
Cited by 18 (0 self)
The question of whether the nervous system produces movement through the combination of a few discrete elements has long been central to the study of motor control. Muscle synergies, i.e. coordinated patterns of muscle activity, have been proposed as possible building blocks. Here we propose a model based on combinations of muscle synergies with a specific amplitude and temporal structure. Time-varying synergies provide a realistic basis for the decomposition of the complex patterns observed in natural behaviors. To extract time-varying synergies from simultaneous recordings of EMG activity, we developed an algorithm that extends existing non-negative matrix factorization techniques.
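The building block being extended here is standard non-negative matrix factorization. The sketch below shows only the classic Lee–Seung multiplicative updates on a toy "EMG" matrix, under that stated assumption; it is not the paper's time-varying-synergy algorithm:

```python
import numpy as np

def nmf(V, k, n_iter=500, eps=1e-9, seed=0):
    """Minimal NMF, V ~= W H with W, H >= 0, via Lee-Seung
    multiplicative updates for the Frobenius loss. Illustrative only."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update synergies
    return W, H

# Toy nonnegative data built from 2 sources; rank-2 NMF should fit it closely
truth_W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
truth_H = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
V = truth_W @ truth_H
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form keeps W and H non-negative at every step, which is why this family of updates suits muscle-activity data.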
Fast convolutional sparse coding
In CVPR, 2013
Cited by 11 (1 self)
Sparse coding has become an increasingly popular method in learning and vision for a variety of classification, reconstruction, and coding tasks. The canonical approach intrinsically assumes independence between observations during learning. For many natural signals, however, sparse coding is applied to sub-elements (i.e., patches) of the signal, where such an assumption is invalid. Convolutional sparse coding explicitly models local interactions through the convolution operator; however, the resulting optimization problem is considerably more complex than traditional sparse coding. In this paper, we draw upon ideas from signal processing and Augmented Lagrange Methods (ALMs) to produce a fast algorithm with globally optimal subproblems and superlinear convergence.
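The signal-processing idea the abstract alludes to is that circular convolution diagonalizes in the Fourier domain, so convolution-heavy gradients become elementwise products. The sketch below uses that trick inside plain proximal-gradient (ISTA) steps for a single-filter problem; the paper's actual solver is ALM-based and multi-filter, so this is only a simplified stand-in:

```python
import numpy as np

def soft(v, t):
    # soft-thresholding: proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_step(x, d, z, lam, step):
    """One proximal-gradient step for
        min_z 0.5 * ||x - d (*) z||^2 + lam * ||z||_1,
    with (*) circular convolution, evaluated in the Fourier domain
    where convolution is an elementwise product. Illustrative sketch."""
    n = x.size
    D = np.fft.fft(d, n)
    residual = np.real(np.fft.ifft(D * np.fft.fft(z))) - x  # d (*) z - x
    grad = np.real(np.fft.ifft(np.conj(D) * np.fft.fft(residual)))
    return soft(z - step * grad, step * lam)

# Tiny demo: recover a single-spike code from its convolution with a filter
d = np.array([1.0, 0.5, 0.0, 0.0])
z_true = np.array([0.0, 2.0, 0.0, 0.0])
x = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(z_true)))
z = np.zeros(4)
for _ in range(300):
    z = ista_step(x, d, z, lam=0.01, step=0.4)
```

After a few hundred steps z concentrates on the true spike location, with the usual small ℓ1 shrinkage bias on its amplitude.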
Sparse Recovery of Streaming Signals Using ℓ1-Homotopy
Cited by 4 (3 self)
Most of the existing methods for sparse signal recovery assume a static system: the unknown signal is a finite-length vector for which a fixed set of linear measurements and a sparse representation basis are available, and an ℓ1-norm minimization program is solved for the reconstruction. However, the same representation and reconstruction framework is not readily applicable in a streaming system: the unknown signal changes over time, and it is measured and reconstructed sequentially over small time intervals. A streaming framework for the reconstruction is particularly desired when dividing a streaming signal into disjoint blocks and processing each block independently is either infeasible or inefficient. In this paper, we discuss two such streaming systems and a homotopy-based algorithm for quickly solving the associated weighted ℓ1-norm minimization programs: 1) recovery of a smooth, time-varying signal for which, instead of using block transforms, we use lapped orthogonal transforms for sparse representation; 2) recovery of a sparse, time-varying signal that follows a linear dynamic model. For both systems, we iteratively process measurements over a sliding interval and solve a weighted ℓ1-norm minimization problem for estimating sparse coefficients. Since we estimate overlapping portions of the streaming signal while adding and removing measurements, instead of solving a new ℓ1 program …
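The primitive underneath weighted ℓ1-norm programs like these is the weighted soft-thresholding operator, which has a closed form. The sketch below shows only that operator (the homotopy algorithm itself is far more involved); names and the example weights are illustrative assumptions:

```python
import numpy as np

def weighted_soft_threshold(v, w):
    """Closed-form solution of min_z 0.5 * ||z - v||^2 + sum_i w_i |z_i|.
    Per-coordinate weights w_i >= 0 let different portions of a sliding
    streaming estimate be penalized differently. Illustrative sketch."""
    return np.sign(v) * np.maximum(np.abs(v) - w, 0.0)

v = np.array([3.0, -0.5, 1.0])
w = np.array([1.0, 1.0, 2.0])   # heavier weight -> stronger shrinkage
z = weighted_soft_threshold(v, w)
```

Coordinates whose magnitude falls below their weight are zeroed, which is how a weighted ℓ1 penalty selectively suppresses coefficients.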
Monte Carlo methods for adaptive sparse approximations of time-series
, 2007
Hierarchical spike coding of sound
Cited by 3 (0 self)
Natural sounds exhibit complex statistical regularities at multiple scales. Acoustic events underlying speech, for example, are characterized by precise temporal and frequency relationships, but they can also vary substantially according to the pitch, duration, and other high-level properties of speech production. Learning this structure from data while capturing the inherent variability is an important first step in building auditory processing systems, as well as understanding the mechanisms of auditory perception. Here we develop Hierarchical Spike Coding, a two-layer probabilistic generative model for complex acoustic structure. The first layer consists of a sparse spiking representation that encodes the sound using kernels positioned precisely in time and frequency. Patterns in the positions of first-layer spikes are learned from the data: on a coarse scale, statistical regularities are encoded by a second-layer spiking representation, while fine-scale structure is captured by recurrent interactions within the first layer. When fit to speech data, the second-layer acoustic features include harmonic stacks, sweeps, frequency modulations, and precise temporal onsets, which can be composed to represent complex acoustic events. Unlike spectrogram-based methods, the model gives a probability distribution over sound pressure waveforms. This allows us to use the second-layer representation to synthesize sounds directly, and to perform model-based denoising, on which we demonstrate a significant improvement over standard methods.
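A first-layer "spike code" of the kind described above can be caricatured with greedy matching pursuit over time shifts of a kernel: find the best-matching shift, record a (time, amplitude) spike, subtract, repeat. This toy uses one kernel and time only (the paper's representation spans time and frequency), so treat it purely as a sketch:

```python
import numpy as np

def matching_pursuit(x, kernel, n_spikes):
    """Greedy spike coding with a single unit-norm kernel: each iteration
    finds the time shift with the largest correlation against the current
    residual, records (time, amplitude), and subtracts that contribution."""
    residual = x.copy()
    k = kernel / np.linalg.norm(kernel)
    spikes = []
    for _ in range(n_spikes):
        corr = np.correlate(residual, k, mode="valid")
        t = int(np.argmax(np.abs(corr)))
        a = corr[t]
        residual[t:t + k.size] -= a * k
        spikes.append((t, a))
    return spikes, residual

# Signal built from two placed copies of the kernel; MP should find both
kernel = np.array([1.0, 1.0])
unit = kernel / np.linalg.norm(kernel)
x = np.zeros(10)
x[2:4] += 3.0 * unit    # spike at t=2, amplitude 3
x[6:8] += 1.5 * unit    # spike at t=6, amplitude 1.5
spikes, residual = matching_pursuit(x, kernel, n_spikes=2)
```

The recovered (time, amplitude) pairs are exactly the event-like code that a second layer could then model statistically.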
Modeling human motion trajectories by sparse activation of motion primitives learned from unpartitioned data
 In: KI 2012: Advances in Artificial Intelligence
Cited by 1 (0 self)
We interpret biological motion trajectories as being composed of sequences of sub-blocks, or motion primitives. Such primitives, together with the information of when they occur during an observed trajectory, provide a compact representation of movement in terms of events that is invariant to temporal shifts. Based on this representation, we present a model for the generation of motion trajectories that consists of two layers. In the lower layer, a trajectory is generated by activating a number of motion primitives from a learned dictionary, according to a given set of activation times and amplitudes. In the upper layer, the process generating the activation times is modeled by a group of Integrate-and-Fire neurons that emit spikes, dependent on a given class of trajectories, which activate the motion primitives in the lower layer. We learn the motion primitives, together with their activation times and amplitudes, in an unsupervised manner from unpartitioned data, with a variant of shift-NMF that is extended to support the event-like encoding. We demonstrate our model on the generation of handwritten character trajectories and show that we can generate good reconstructions of characters with shared primitives across all characters modeled.
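The lower-layer generative step described above amounts to placing each dictionary primitive at its activation times, scaled by the activation amplitudes, and summing. The sketch below is a minimal illustration of that reconstruction; the function name and data shapes are assumptions, not the paper's API:

```python
import numpy as np

def reconstruct(primitives, activations, T):
    """Sum amplitude-scaled copies of each primitive at its activation
    times -- equivalently, convolving a sparse spike train with each
    primitive and summing the results. Illustrative sketch."""
    x = np.zeros(T)
    for prim, events in zip(primitives, activations):
        for t, a in events:
            x[t:t + prim.size] += a * prim
    return x

p1 = np.array([1.0, 2.0, 1.0])   # two toy learned primitives
p2 = np.array([0.0, -1.0])
# p1 fires at t=0 (amp 1) and t=5 (amp 2); p2 fires at t=2 (amp 3)
traj = reconstruct([p1, p2], [[(0, 1.0), (5, 2.0)], [(2, 3.0)]], T=10)
```

The sparse list of (time, amplitude) events is the shift-invariant code; shift-NMF learns the primitives and these events jointly from unpartitioned data.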