Results 1–10 of 2,193
Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging
 MAGNETIC RESONANCE IN MEDICINE 58:1182–1195
, 2007
"... The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain – for example, in terms of spatial finit ..."
Cited by 538 (11 self)
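The snippet's core idea, reconstructing a pixel-sparse image from undersampled k-space, can be sketched in a few lines. This is a toy illustration only, not the paper's reconstruction code: the synthetic image, the random sampling mask, the step size, and the threshold are all arbitrary choices, and iterative soft-thresholding stands in for the paper's nonlinear reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# Toy "angiogram-like" image: already sparse in the pixel representation.
x_true = np.zeros((n, n))
x_true[rng.integers(0, n, 40), rng.integers(0, n, 40)] = 1.0

# Randomly undersample k-space, keeping roughly 30% of the coefficients.
mask = rng.random((n, n)) < 0.3
y = mask * np.fft.fft2(x_true)              # observed k-space samples

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Iterative soft-thresholding: a data-consistency gradient step through
# the FFT, followed by sparsity-promoting shrinkage in image space.
x = np.zeros((n, n))
for _ in range(200):
    resid = mask * np.fft.fft2(x) - y
    x = x - np.real(np.fft.ifft2(mask * resid))
    x = soft(x, 0.01)
```

The zero-filled reconstruction `np.fft.ifft2(y).real` is the natural baseline; the thresholded iteration suppresses the aliasing that the zero-filled image contains.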
Regularization paths for generalized linear models via coordinate descent
, 2009
"... We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include ℓ1 (the lasso), ℓ2 (ridge regression) and mixtures of the two (the elastic ..."
Cited by 724 (15 self)
elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.
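The cyclical coordinate descent the abstract describes has a closed-form update per coordinate. A minimal sketch for the squared-error case with the elastic-net penalty follows; the objective scaling, parameter names, and iteration count are assumptions of this sketch, not glmnet's API.

```python
import numpy as np

def soft(z, g):
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def enet_cd(X, y, lam, alpha=1.0, n_iter=100):
    """Cyclic coordinate descent for
    (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.astype(float).copy()            # residual y - Xb (b starts at 0)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial-residual correlation for coordinate j.
            rho = X[:, j] @ r / n + col_sq[j] * b[j]
            b_new = soft(rho, lam * alpha) / (col_sq[j] + lam * (1 - alpha))
            r += X[:, j] * (b[j] - b_new)  # O(n) residual maintenance
            b[j] = b_new
    return b
```

Maintaining the running residual `r` is what makes each coordinate update O(n) and lets the method exploit sparse features.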
Sequential minimal optimization: A fast algorithm for training support vector machines
 Advances in Kernel Methods: Support Vector Learning
, 1999
"... This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possi ..."
Cited by 461 (3 self)
for linear SVMs and sparse data sets. On real-world sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.
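The core of SMO is the analytical solution of the smallest possible QP subproblem: two Lagrange multipliers at a time. The sketch below shows just that pairwise update, following Platt's clipping formulas; working-set selection, error-cache maintenance, and the threshold update are omitted, and the function name and signature are this sketch's own.

```python
import numpy as np

def smo_pair_update(K, y, alpha, errors, i, j, C):
    """One SMO inner step: jointly optimize alpha[i] and alpha[j]
    analytically, holding sum_k alpha[k]*y[k] fixed, then clip the
    result to the box [0, C]. errors[k] = f(x_k) - y[k]."""
    if i == j:
        return False
    # Ends of the feasible segment for alpha[j] under the constraints.
    if y[i] != y[j]:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    if L >= H:
        return False
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]   # curvature along the segment
    if eta <= 0:
        return False                          # skip the degenerate case
    a_j = np.clip(alpha[j] + y[j] * (errors[i] - errors[j]) / eta, L, H)
    a_i = alpha[i] + y[i] * y[j] * (alpha[j] - a_j)
    alpha[i], alpha[j] = a_i, a_j
    return True
```

Because each subproblem is solved in closed form, no numerical QP solver is invoked anywhere in the inner loop, which is what makes SMO fast and memory-light.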
Locality-constrained linear coding for image classification
 IN: IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION
, 2010
"... The traditional SPM approach based on bag-of-features (BoF) requires nonlinear classifiers to achieve good image classification performance. This paper presents a simple but effective coding scheme called Locality-constrained Linear Coding (LLC) in place of the VQ coding in traditional SPM. LLC util ..."
Cited by 443 (20 self)
, achieving state-of-the-art performance on several benchmarks. Compared with the sparse coding strategy [22], the objective function used by LLC has an analytical solution. In addition, the paper proposes a fast approximated LLC method by first performing a K-nearest-neighbor search and then solving a
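The approximated LLC pipeline the snippet describes reduces to a K-NN search over the codebook followed by a small constrained least squares with an analytical solution. A sketch, where the exact regularization form and all parameter values are assumptions of this illustration:

```python
import numpy as np

def llc_code(x, B, k=5, reg=1e-4):
    """Approximated LLC: K-nearest-neighbor search over the codebook,
    then the analytical solution of the local least squares under the
    sum-to-one constraint."""
    d2 = ((B - x) ** 2).sum(axis=1)        # B: (M, d) codebook, x: (d,)
    idx = np.argsort(d2)[:k]               # k nearest codewords
    z = B[idx] - x                         # shift codewords to the point
    C = z @ z.T
    C += reg * np.trace(C) * np.eye(k)     # regularize the local Gram
    c = np.linalg.solve(C, np.ones(k))
    c /= c.sum()                           # enforce 1^T c = 1
    code = np.zeros(len(B))
    code[idx] = c
    return code
```

Each descriptor thus costs one nearest-neighbor search plus a k-by-k linear solve, with no iterative optimization, which is why the coding step is fast enough for linear-classifier pipelines.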
Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons
, 1999
"... The dynamics of networks of sparsely connected excitatory and inhibitory integrate-and-fire neurons is studied analytically. The analysis reveals a very rich repertoire of states, including: Synchronous states in which neurons fire regularly; Asynchronous states with stationary global activity and very ..."
Cited by 306 (16 self)
A Data Structure for Dynamic Trees
, 1983
"... A data structure is proposed to maintain a collection of vertex-disjoint trees under a sequence of two kinds of operations: a link operation that combines two trees into one by adding an edge, and a cut operation that divides one tree into two by deleting an edge. Each operation requires O(log n) ti ..."
Cited by 347 (21 self)
) time. Using this data structure, new fast algorithms are obtained for the following problems: (1) Computing nearest common ancestors. (2) Solving various network flow problems including finding maximum flows, blocking flows, and acyclic flows. (3) Computing certain kinds of constrained minimum spanning
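The link and cut operations described above form a small interface. Below is a deliberately naive stand-in that shows just the link/cut/root contract with O(n)-depth root-finding; the paper's structure achieves O(log n) per operation and additionally maintains path costs, which this sketch omits entirely.

```python
class NaiveDynamicForest:
    """Interface of dynamic trees with naive parent-pointer operations
    (the actual data structure achieves O(log n) per operation)."""

    def __init__(self, n):
        self.parent = [None] * n     # forest of rooted trees

    def find_root(self, v):
        # Walk parent pointers to the root of v's tree.
        while self.parent[v] is not None:
            v = self.parent[v]
        return v

    def link(self, v, w):
        # Combine two trees: make tree root v a child of w.
        assert self.parent[v] is None and self.find_root(w) != v
        self.parent[v] = w

    def cut(self, v):
        # Split one tree in two by deleting the edge above v.
        self.parent[v] = None
```

The maximum-flow applications in the abstract hinge on augmenting paths being represented as tree paths, so that the link/cut interface replaces repeated edge-by-edge scanning.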
New spectral methods for ratio cut partition and clustering
 IEEE TRANS. ON COMPUTER-AIDED DESIGN
, 1992
"... Partitioning of circuit netlists is important in many phases of VLSI design, ranging from layout to testing and hardware simulation. The ratio cut objective function [29] has received much attention since it naturally captures both min-cut and equipartition, the two traditional goals of partitionin ..."
Cited by 296 (17 self)
of partitioning. In this paper, we show that the second smallest eigenvalue of a matrix derived from the netlist gives a provably good approximation of the optimal ratio cut partition cost. We also demonstrate that fast Lanczos-type methods for the sparse symmetric eigenvalue problem are a robust basis
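The eigenvector criterion in this abstract can be sketched directly for bipartitioning: take the eigenvector of the graph Laplacian belonging to its second smallest eigenvalue (the Fiedler vector) and split vertices by its sign. A dense eigensolver stands in here for the Lanczos-type iteration the paper advocates for sparse netlists.

```python
import numpy as np

def spectral_bipartition(A):
    """Split vertices by the sign of the Fiedler vector of the
    graph Laplacian L = D - A (A: symmetric adjacency matrix)."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]             # second smallest eigenvalue
    return fiedler >= 0              # boolean partition indicator
```

On a graph made of two cliques joined by a single edge, the sign split recovers the two cliques, which is the behavior the ratio-cut approximation result predicts.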
Efficient learning of sparse representations with an energybased model
 ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NIPS 2006)
, 2006
"... We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying nonlinearity that turns a code vector into a quasi-binary sparse code vector. Given an input, the optimal code minimizes the distance b ..."
Cited by 219 (15 self)
Sparse multinomial logistic regression: fast algorithms and generalization bounds
 IEEE Trans. on Pattern Analysis and Machine Intelligence
"... Abstract—Recently developed methods for learning sparse classifiers are among the state-of-the-art in supervised learning. These methods learn classifiers that incorporate weighted sums of basis functions with sparsity-promoting priors encouraging the weight estimates to be either significantly larg ..."
Cited by 190 (1 self)
introduce a true multiclass formulation based on multinomial logistic regression. Second, by combining a bound optimization approach with a component-wise update procedure, we derive fast exact algorithms for learning sparse multiclass classifiers that scale favorably in both the number of training samples
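As a simple stand-in for the paper's bound-optimization component-wise updates, the same sparse multinomial model can be illustrated with proximal gradient descent on the ℓ1-penalized multinomial logistic loss. The function name, parameter names, and all values below are assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np

def softmax(S):
    S = S - S.max(axis=1, keepdims=True)   # stabilize the exponentials
    e = np.exp(S)
    return e / e.sum(axis=1, keepdims=True)

def sparse_mlr(X, Y, lam=0.05, lr=0.5, n_iter=400):
    """L1-penalized multinomial logistic regression via proximal
    gradient: a gradient step on the log-loss, then a soft-threshold
    that drives many weights exactly to zero."""
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))          # Y is one-hot, shape (n, k)
    for _ in range(n_iter):
        P = softmax(X @ W)
        W -= lr * (X.T @ (P - Y) / n)      # gradient of the log-loss
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)
    return W
```

The shrinkage step is what yields exact zeros, so the fitted classifier uses only a subset of the basis functions, which is the sparsity property the abstract emphasizes.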
Sparse Geometric Image Representations with Bandelets
, 2004
"... This paper introduces a new class of bases, called bandelet bases, which decompose the image along multiscale vectors that are elongated in the direction of a geometric flow. This geometric flow indicates directions in which the image grey levels have regular variations. The image decomposition in ..."
Cited by 192 (4 self)
in a bandelet basis is implemented with a fast subband filtering algorithm. Bandelet bases lead to optimal approximation rates for geometrically regular images. For image compression and noise removal applications, the geometric flow is optimized with fast algorithms, so that the resulting bandelet