Results 1–10 of 22,159
A training algorithm for optimal margin classifiers
Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, 1992
"... A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjust ..."
Cited by 1838 (43 self)
Perceptrons
"... Perceptrons have been at the forefront of neural network research since its beginning. This paper gives a brief review of the perceptron concept and attempts to point out some critical issues involved in the design and implementation of multilayer perceptrons. ..."
Cited by 2 (1 self)
Perceptron
"... Aggregation of the "proper" input signals results in the activation potential, v, which can be expressed as the inner product of "proper" input signals and related weights:

    v = Σ_{i=1}^{p−1} w_i x_i = w_{1:p−1} · x_{1:p−1}

The augmented activation potential, v̂, can now be expressed simply as v̂ = w · x = v − θ. For each input signal, the output is determined as:

    y = σ(v̂) = 0 if v < θ (v̂ < 0), 1 if v ≥ θ (v̂ ≥ 0) ..."
(A.P. Paplinski, NNets, Lecture 3, March 22, 2000)
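The thresholded unit described in this snippet can be sketched in Python. This is a minimal illustration of the snippet's definitions (activation potential v, augmented potential v̂ = v − θ, binary output); the weights, inputs, and threshold below are invented for the example, not taken from the lecture notes.

```python
# Sketch of a single linear threshold unit, following the snippet:
#   v     = sum_i w_i * x_i        (activation potential)
#   v_hat = v - theta              (augmented activation potential)
#   y     = 1 if v_hat >= 0 else 0 (thresholded output)
# All numeric values here are illustrative.

def perceptron_output(w, x, theta):
    """Return the binary output of a linear threshold unit."""
    v = sum(wi * xi for wi, xi in zip(w, x))  # inner product of weights and inputs
    v_hat = v - theta                          # augmented activation potential
    return 1 if v_hat >= 0 else 0

w = [0.5, -0.2, 0.1]   # hypothetical weights
x = [1.0, 0.5, 2.0]    # hypothetical input signals
print(perceptron_output(w, x, theta=0.3))  # v = 0.6, v_hat = 0.3 >= 0, prints 1
```

Raising the threshold to 0.7 makes v̂ negative for the same inputs, so the unit outputs 0 instead.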
Residual Algorithms: Reinforcement Learning with Function Approximation
In Proceedings of the Twelfth International Conference on Machine Learning, 1995
"... A number of reinforcement learning algorithms have been developed that are guaranteed to converge to the optimal solution when used with lookup tables. It is shown, however, that these algorithms can easily become unstable when implemented directly with a general function-approximation system, such ..."
Cited by 304 (6 self)
Perceptron Classifiers
2010
Suppose that we have n training examples. The training data are a matrix with n rows and p columns, where each example is represented by values for p different features. Assume that each feature value is a real number. Let feature value j for example number i be written x_ij. The label of example i is y_i. For example, y_i = 1 if message i is spam and y_i = 0 if it is not spam. We have separate training and test sets of examples. Each test example is also represented as a row vector of length p. The label y for a test example is unknown. The output of a classifier is a guess at y. The simplest way to distinguish between two classes in p-dimensional Euclidean space R^p is a hyperplane, that is a linear subspace of dimension p−1. The parameters defining a hyperplane are a vector w in R^p and a scalar b. The former gives the orientation of the hyperplane, which is at right angles (also called perpendicular, also called orthogonal) to w. Only the direction of w is important, not its length, so the true number of parameters of w is p − 1. The scalar b specifies the distance from the origin to the hyperplane along the direction specified by w.
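The hyperplane classifier this abstract describes can be sketched as follows. This is a hedged illustration of the setup (an n × p data matrix, a vector w and scalar b defining the decision boundary), with made-up data and parameters rather than anything from the paper:

```python
# Sketch of classification by a hyperplane w . x + b = 0, as in the
# abstract: w gives the orientation, b the offset from the origin.
# An example is labeled 1 on the positive side and 0 otherwise.
# The data matrix and parameters below are invented for illustration.

def predict(w, b, x):
    """Classify one p-dimensional example against the hyperplane (w, b)."""
    score = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if score > 0 else 0

# Training matrix with n = 3 rows and p = 2 feature columns (made up).
X = [[2.0, 1.0],
     [0.5, 0.2],
     [3.0, 2.5]]
w = [1.0, -1.0]  # hypothetical orientation vector
b = -0.4         # hypothetical offset

print([predict(w, b, x) for x in X])  # prints [1, 0, 1]
```

Scaling w by any positive constant leaves every prediction unchanged, which is why the abstract counts only p − 1 effective parameters for w.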
Sequence perceptron
2009
"... Minimum error rate classification; discriminant functions and decision surfaces; 3. parametric models and parameter estimation; 4. nonparametric techniques; K-nearest-neighbors classifier ..."
Fast Training of Multilayer Perceptrons
1997
"... Training a multilayer perceptron by an error backpropagation algorithm is slow and uncertain. This paper describes a new approach which is much faster and more certain than error backpropagation. The proposed approach is based on combined iterative and direct solution methods. In this approach, we use an ..."
Cited by 15 (3 self)
A "Thermal" Perceptron Learning Rule
1992
"... The thermal perceptron is a simple extension to Rosenblatt's perceptron learning rule for training individual linear threshold units. It finds stable weights for nonseparable problems as well as separable ones. Experiments indicate that if a good initial setting for a temperature parameter, T0, has ..."
Cited by 51 (0 self)
Revisiting the Perceptron Predictor
2004
"... The perceptron branch predictor has been recently proposed by Jimenez and Lin as an alternative to conventional branch predictors. In this paper, we build upon this original proposal in three directions. First, we show ..."
Cited by 20 (1 self)