Results 1-10 of 67
Large scale multiple kernel learning
 JOURNAL OF MACHINE LEARNING RESEARCH, 2006
"... While classical kernel-based learning algorithms are based on a single kernel, in practice it is often desirable to use multiple kernels. Lanckriet et al. (2004) considered conic combinations of kernel matrices for classification, leading to a convex quadratically constrained quadratic program. We s ..."
Cited by 340 (20 self)
with sparse feature maps as they appear for string kernels, allowing us to train a string kernel SVM on a 10 million real-world splice data set from computational biology. We integrated multiple kernel learning in our machine learning toolbox SHOGUN, for which the source code is publicly available at
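The conic-combination idea from the entry above can be sketched in a few lines: given base kernel matrices and nonnegative weights summing to one, the combined kernel is their weighted sum. This is a minimal illustration of the combination itself, not the SHOGUN implementation or the QCQP solver of Lanckriet et al.

```python
import numpy as np

def combine_kernels(kernels, weights):
    """Convex combination of base kernel matrices.

    kernels: list of (n, n) PSD matrices; weights: nonnegative, summing to 1.
    A nonnegative combination of PSD matrices is again PSD, so the
    result is a valid kernel matrix.
    """
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return sum(w * K for w, K in zip(weights, kernels))

# Two toy base kernels on three 1-D points: linear and its elementwise square.
X = np.array([[1.0], [2.0], [3.0]])
K_lin = X @ X.T
K_sq = K_lin ** 2
K = combine_kernels([K_lin, K_sq], [0.7, 0.3])
```

In an MKL method the weights would themselves be optimized (e.g. by the QCQP or the large-scale wrapper the paper describes); here they are fixed by hand.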
Sparse Fuzzy Model Identification Matlab Toolbox RuleMaker Toolbox
"... Fuzzy systems applying a sparse rule base and a fuzzy rule interpolation-based reasoning method are popular solutions in cases with partial knowledge of the modeled area or cases when the full coverage of the input space by rule antecedents would require too many rules. In several practica ..."
Cited by 1 (1 self)
to the increasing popularity of fuzzy rule interpolation (FRI). The rest of this paper is organized as follows. Section II gives a brief survey of the main tendencies in sparse fuzzy model identification and briefly introduces the implemented techniques. Section III presents the RuleMaker toolbox.
DSPCA: a Toolbox for Sparse Principal Component Analysis
, 2006
"... In this paper, we describe DSPCA, a toolbox for sparse principal component analysis (PCA). Sparse PCA seeks sparse factors, or linear combinations of the data variables, explaining a maximum amount of variance in the data while having only a limited number of nonzero coefficients. We begin with a br ..."
Cited by 2 (0 self)
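DSPCA itself solves a semidefinite relaxation; as a much simpler stand-in, the sparsity/variance trade-off it targets can be illustrated by computing the leading principal component and keeping only its k largest-magnitude loadings. This is a naive thresholding heuristic, not the DSPCA algorithm.

```python
import numpy as np

def thresholded_pc(data, k):
    """Leading principal component truncated to k nonzero loadings.

    A naive heuristic for sparse PCA: compute the top eigenvector of
    the sample covariance matrix, zero out all but the k largest-magnitude
    entries, and renormalize. DSPCA instead solves a convex relaxation
    with a guaranteed bound on the trade-off.
    """
    cov = np.cov(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
    keep = np.argsort(np.abs(v))[-k:]  # indices of the k largest loadings
    sparse_v = np.zeros_like(v)
    sparse_v[keep] = v[keep]
    return sparse_v / np.linalg.norm(sparse_v)

rng = np.random.default_rng(0)
# 200 samples in 5 dimensions; variance concentrated on the first two variables.
data = rng.normal(size=(200, 5)) * np.array([3.0, 2.5, 0.1, 0.1, 0.1])
v = thresholded_pc(data, k=2)
```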
Fuzzy Rule Interpolation Matlab Toolbox – FRI Toolbox
 Proc. of the IEEE World Congress on Computational Intelligence (WCCI'06), 15th Int. Conf. on Fuzzy Systems (FUZZ-IEEE'06), July 16-21, 2006, Vancouver, BC, Canada
"... In most fuzzy systems, the completeness of the fuzzy rule base is required to generate meaningful output when classical fuzzy reasoning methods are applied. This means, in other words, that the fuzzy rule base has to cover all possible inputs. Regardless of the way of rule base construction, be it c ..."
Cited by 4 (2 self)
Toolbox, which is freely available. With the introduction of this Matlab Toolbox, different FRI methods can be used for different real-time applications that have a sparse or incomplete fuzzy rule base.
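The core idea behind FRI, generating a conclusion even when an observation falls in a gap between rules, reduces to simple proportionality in the crisp-singleton special case. The sketch below collapses the KH (Kóczy-Hirota) interpolation idea to singletons; the toolbox's methods operate on full fuzzy sets.

```python
def interpolate_rule(x, a1, b1, a2, b2):
    """Crisp-singleton special case of linear fuzzy rule interpolation.

    For a sparse rule base with flanking rules A1 -> B1 and A2 -> B2,
    an observation x falling in the gap between antecedent points a1 and
    a2 gets a conclusion at the proportional position between b1 and b2.
    """
    lam = (x - a1) / (a2 - a1)          # relative position inside the gap
    return (1.0 - lam) * b1 + lam * b2

# Rules: "if temp is ~10 then valve is 20%", "if temp is ~30 then valve is 80%".
# No rule covers temp = 20, so interpolate between the two neighbors.
y = interpolate_rule(20.0, 10.0, 20.0, 30.0, 80.0)
```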
Efficient MATLAB computations with sparse and factored tensors
 SIAM JOURNAL ON SCIENTIFIC COMPUTING, 2007
"... In this paper, the term tensor refers simply to a multidimensional or $N$-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose stori ..."
Cited by 84 (17 self)
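Coordinate storage, keeping only the subscripts and values of the nonzero entries, is the natural format for such tensors and makes many computations touch only the nonzeros. A minimal sketch of the idea (not the MATLAB Tensor Toolbox API the paper describes):

```python
import numpy as np

class CooTensor:
    """Coordinate-format storage for a sparse N-way tensor.

    Stores only the nonzero entries as (subscript row, value) pairs,
    so memory and work scale with nnz rather than with the product
    of the dimensions.
    """
    def __init__(self, subs, vals, shape):
        self.subs = np.asarray(subs)               # (nnz, N) integer subscripts
        self.vals = np.asarray(vals, dtype=float)  # (nnz,) values
        self.shape = tuple(shape)

    def norm(self):
        # Frobenius norm needs only the stored nonzeros.
        return float(np.sqrt(np.sum(self.vals ** 2)))

    def mode_sum(self, mode):
        # Sum of entries within each slice along the given mode,
        # accumulated directly from the coordinate list.
        out = np.zeros(self.shape[mode])
        np.add.at(out, self.subs[:, mode], self.vals)
        return out

# A 100 x 100 x 100 tensor (a million entries) holding only three nonzeros.
T = CooTensor(subs=[(0, 0, 0), (5, 9, 2), (5, 3, 7)],
              vals=[1.0, 2.0, 3.0],
              shape=(100, 100, 100))
```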
SOLVING A SEQUENCE OF SPARSE LEAST SQUARES PROBLEMS
"... We describe how to maintain an explicit sparse orthogonal factorization in order to solve the sequence of sparse least squares subproblems needed to implement an active-set method to solve the nonnegative least squares problem for a matrix with more columns than rows. In order to do that, we have ad ..."
Cited by 6 (1 self)
adapted the sparse direct methodology of Björck and Oreborn of the late 80s in a similar way to Coleman and Hulbert, but without forming the Hessian matrix, which is only positive semidefinite in this case. We comment on our implementation on top of the sparse toolbox of Matlab 5, and we emphasize
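For orientation, the problem this entry targets is min ||Ax - b|| subject to x >= 0, with A having more columns than rows. SciPy ships a dense Lawson-Hanson-style active-set routine for it; the snippet below shows the problem being solved, not the sparse factorization-maintaining implementation the paper describes.

```python
import numpy as np
from scipy.optimize import nnls

# A "more columns than rows" nonnegative least squares instance,
# the problem class targeted by the sparse active-set method above.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 8))
b = rng.normal(size=4)

# Dense active-set NNLS: returns the nonnegative minimizer and ||Ax - b||.
x, residual = nnls(A, b)
```

Each active-set iteration solves an unconstrained least squares subproblem on the current passive set; maintaining one sparse orthogonal factorization across those subproblems, rather than refactorizing, is the contribution the entry summarizes.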
SOLVING A SEQUENCE OF SPARSE COMPATIBLE SYSTEMS
"... We describe how to use an upper trapezoidal sparse orthogonal factorization to solve the sequence of sparse compatible systems needed to implement sparse reduced gradient versions of certain non-simplex active-set LP methods. For reasons of familiarity, we focus on the reduced gradient non-simplex a ..."
Cited by 3 (2 self)
the given formulae, we report the results obtained with our implementation on top of the sparse toolbox of Matlab 5 when solving the 15 smallest Netlib problems with a highly degenerate Phase I, and several parallelizability issues are noted.
Updating and downdating an upper trapezoidal sparse orthogonal factorization
"... We describe how to update and downdate an upper trapezoidal sparse orthogonal factorization, namely the sparse QR factorization of A_k^T, where A_k is a “tall and thin” full column rank matrix formed with a subset of the columns of a fixed matrix A. In order to do that, we have adapted to rectangular ..."
Cited by 3 (1 self)
update/downdate; it fits well into the LINPACK downdating algorithm and ensures that the updated trapezoidal factor will remain sparse. We give all the necessary formulae even if the orthogonal factor is not available, and we comment on our implementation using the sparse toolbox of Matlab 5.
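The basic updating move behind this line of work is: given the triangular factor R of A = QR, absorb a newly appended row of A by eliminating it against R with a sequence of Givens rotations. A small dense sketch of that one move (the entry handles sparse trapezoidal factors and downdating as well):

```python
import numpy as np

def qr_add_row(R, new_row):
    """Update the triangular factor R of A = QR after appending a row to A.

    Stacks the new row under R and eliminates it column by column with
    Givens rotations; since rotations are orthogonal, the returned factor
    R_new satisfies R_new^T R_new = A_new^T A_new for the enlarged matrix.
    """
    n = R.shape[1]
    Rext = np.vstack([R, new_row.astype(float)])
    for j in range(n):
        a, b = Rext[j, j], Rext[-1, j]
        r = np.hypot(a, b)
        if r == 0:
            continue
        c, s = a / r, b / r              # rotation zeroing the new row's entry j
        row_j, row_new = Rext[j].copy(), Rext[-1].copy()
        Rext[j] = c * row_j + s * row_new
        Rext[-1] = -s * row_j + c * row_new
    return Rext[:-1]                     # the eliminated row is now zero

A = np.array([[2.0, 1.0], [0.0, 3.0], [1.0, 1.0]])
R = np.linalg.qr(A, mode="r")
new_row = np.array([1.0, 2.0])
R_new = qr_add_row(R, new_row)
```

Downdating (removing a row) is the harder direction, since it can destroy positive definiteness in finite precision; that is where the LINPACK-style algorithm referenced above comes in.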
A SPARSE IMPLEMENTATION OF LAWSON AND HANSON’S CONVEX NNLS METHOD
, 2000
"... An explicit sparse orthogonal factorization is maintained in order to solve the sequence of sparse linear least squares subproblems needed to implement Lawson and Hanson’s active-set method to solve the nonnegative least squares problem for a matrix with more columns than rows. The sparse direct m ..."
methodology of Björck and Oreborn of the late 80s is used in a similar way to Coleman and Hulbert, but without forming the Hessian matrix, which is only positive semidefinite in this case. We comment on our implementation on top of the sparse toolbox of Matlab, and to highlight its robustness, some preliminary
The Multicomputer Toolbox Project
"... The contribution of this paper is to lay a foundation for the development of next-generation standard sequential mathematical libraries for numerical linear algebra. The primary areas of concern are functionality, data formats, performance, and functional interfaces. We review and analyze the existin ..."
the existing dense standard (levels 1, 2, and 3) and sparse proposals for basic linear algebra subprograms (BLAS) on sequential platforms. Based on this analysis of the BLAS, we propose a set of requirements for the next generation of standard libraries, which we call the basic linear algebra instruction set