Results 1–10 of 195,719
Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm
Machine Learning, 1988
"... learning Boolean functions, linear-threshold algorithms Abstract. Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in which the learner responds to each ex ..."
Cited by 773 (5 self)
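The algorithm in this entry is Littlestone's Winnow, a multiplicative-update linear-threshold learner whose mistake bound grows only logarithmically in the number of irrelevant attributes. A minimal sketch, assuming the standard parameterization (promotion/demotion factor alpha = 2, threshold theta = n); the incremental setting in the snippet corresponds to processing one example at a time:

```python
def winnow(examples, n, alpha=2.0):
    """Winnow: multiplicative-update linear-threshold learner (sketch).

    examples: iterable of (x, y) pairs, x a 0/1 list of length n, y in {0, 1}.
    Returns (weights, mistake_count). Parameter choices (alpha=2, theta=n)
    follow the usual textbook setting, not any one experiment in the paper.
    """
    w = [1.0] * n
    theta = float(n)  # fixed threshold
    mistakes = 0
    for x, y in examples:
        y_hat = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if y_hat != y:
            mistakes += 1
            if y == 1:   # false negative: promote the active weights
                w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
            else:        # false positive: demote the active weights
                w = [wi / alpha if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes
```

On a target disjunction of k out of n Boolean variables, Winnow makes O(k log n) mistakes, essentially independent of how many of the remaining attributes are irrelevant.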
Using Linear-threshold Algorithms to Combine Multiclass Subexperts
In Proc. of the 20th ICML Conference, 2003
"... We present a new type of multiclass learning algorithm called a linear-max algorithm. Linear-max algorithms learn with a special type of attribute called a subexpert. A subexpert is a vector attribute that has a value for each output class. The goal of the multiclass algorithm is to learn a linea ..."
Cited by 3 (1 self)
"... class linear-threshold algorithm. We apply these techniques to three linear-threshold algorithms: Perceptron, Winnow, and ROMMA. We show these algorithms give good performance on artificial and real datasets. ..."
Transforming Linear-threshold Learning Algorithms into Multiclass Linear Learning Algorithms
2001
"... In this paper, we present a new type of multiclass learning algorithm called a linear-max algorithm. Linear-max algorithms learn with a special type of attribute called a subexpert. A subexpert is a vector attribute that has a value for each output class. The goal of the multiclass algorithm ..."
Cited by 4 (3 self)
"... is to learn a linear function combining the subexperts and to use this linear function to make correct class predictions. We will prove that, in the online mistake-bounded model of learning, these multiclass learning algorithms have the same mistake bounds as a related two-class linear-threshold ..."
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
2008
Tracking Linear-threshold Concepts with Winnow
Proc. 15th Annu. Conf. on Comput. Learning Theory, 2002
"... In this paper, we give a mistake bound for learning arbitrary linear-threshold concepts that are allowed to change over time in the online model of learning. We use a standard variation of the Winnow algorithm and show that the bounds for learning shifting linear-threshold functions have many of ..."
Cited by 7 (2 self)
A fast iterative shrinkage-thresholding algorithm with application to . . .
2009
"... We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive due to its simplicity; however, these methods are also known to converge quite slowly. In this paper we present a Fast Iterat ..."
Cited by 1058 (9 self)
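The ISTA iteration named in this abstract alternates a gradient step on the smooth least-squares term with elementwise soft-thresholding, the proximal map of the l1 penalty. A minimal NumPy sketch, assuming the objective min_x 0.5*||Ax - b||^2 + lam*||x||_1 and the constant step size 1/L with L = ||A||_2^2; the problem sizes below are illustrative, not from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the prox operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Basic (non-accelerated) ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Uses step size 1/L with L = ||A||_2^2 (the largest eigenvalue of A^T A),
    which guarantees monotone descent of the objective. Sketch only; the
    paper's fast variant (FISTA) adds a momentum term on top of this loop.
    """
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The slow convergence the abstract mentions is the O(1/k) objective decay of this plain loop; FISTA improves it to O(1/k^2) by evaluating the gradient at an extrapolated point.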
Regret bounds for hierarchical classification with linear-threshold functions
Proceedings of the 17th Annual Conference on Learning Theory, 2004
"... Abstract. We study the problem of classifying data in a given taxonomy when classifications associated with multiple and/or partial paths are allowed. We introduce an incremental algorithm using a linear-threshold classifier at each node of the taxonomy. These classifiers are trained and evaluated i ..."
Cited by 10 (3 self)
For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution
Comm. Pure Appl. Math, 2004
"... We consider linear equations y = Φα where y is a given vector in R^n, Φ is a given n-by-m matrix with n < m ≤ An, and we wish to solve for α ∈ R^m. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ..."
Cited by 568 (10 self)
"... In contrast, heuristic attempts to sparsely solve such systems – greedy algorithms and thresholding – perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues ..."
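The minimal-ℓ1 solution studied in this entry is computable by linear programming: writing α = u − v with u, v ≥ 0 turns min ||α||_1 subject to Φα = y into an LP. A small sketch using SciPy; the problem sizes and the 1-sparse test vector below are illustrative choices, not the paper's regime:

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_solution(Phi, y):
    """Return argmin ||alpha||_1 subject to Phi @ alpha = y.

    Standard LP reformulation: alpha = u - v with u, v >= 0, so we
    minimize 1^T (u + v) subject to [Phi, -Phi] @ [u; v] = y.
    linprog's default bounds already constrain all variables to be >= 0.
    """
    n, m = Phi.shape
    c = np.ones(2 * m)                 # objective: sum of u and v entries
    A_eq = np.hstack([Phi, -Phi])      # equality constraint on [u; v]
    res = linprog(c, A_eq=A_eq, b_eq=y)
    u, v = res.x[:m], res.x[m:]
    return u - v
```

For a sufficiently sparse α generating y, this LP recovers α exactly; the paper quantifies how sparse "sufficiently" can be for typical Φ.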
Robust face recognition via sparse representation
IEEE Trans. Pattern Analysis and Machine Intelligence, 2008
"... We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models, and argue that new theory from sparse signa ..."
Cited by 936 (40 self)
Learning probabilistic linear-threshold classifiers via selective sampling
In Proc. 16th COLT, 2003
"... Abstract. In this paper we investigate selective sampling, a learning model where the learner observes a sequence of i.i.d. unlabeled instances each time deciding whether to query the label of the current instance. We assume that labels are binary and stochastically related to instances via a linear ..."
Cited by 32 (9 self)
"... linear probabilistic function whose coefficients are arbitrary and unknown. We then introduce a new selective sampling rule and show that its expected regret (with respect to the classifier knowing the underlying linear function and observing the label realization after each prediction) grows not much ..."