Results 1–10 of 2,005,296
GRADE DISCRIMINANT FUNCTION IN TWO-CLASS PROBLEMS
"... Abstract. The so-called v-level grade discriminant function is introduced and its properties are investigated. In particular, its distributions in class 1 and class 2 serve to characterize some natural ordering of discriminant models. Rules defined by comparing the value of a chosen grade discriminant fu ..."
Pattern Classification Using Polynomial Neural Networks for Two Classes' Problem
"... Abstract—Polynomial networks have been known to have excellent properties as classifiers and universal approximators to the optimal Bayes classifier. In this paper, the use of polynomial neural networks is proposed for efficient implementation of polynomial-based classifiers. The polynomial neural network is a trainable device consisting of some rules and two processes. The two processes are the assumption and effect processes. The assumption process is driven by fuzzy c-means and the effect process deals with a polynomial function. A learning algorithm for the polynomial neural network ..."
Differential Negative Reinforcement Improves Classifier System Learning Rate in Two-Class Problems with Unequal Base Rates
In Koza et al., 1990
Cited by 7 (1 self)
The effect of biasing negative reinforcement levels on learning rate and classification accuracy in a learning classifier system (LCS) was investigated. Simulation data at five prevalences (base rates) were used to train and test the LCS. Erroneous decisions made by the LCS during training were punished differentially according to type: false positive (FP) or false negative (FN), across a range of four FP:FN ratios. Training performance was assessed by learning rate, determined from the number of iterations required to reach 95% of the maximum area under the receiver operating characteristic (ROC) curve obtained during learning. Learning rates were compared across the three biased ratios with those obtained at the unbiased ratio. Classification performance of the LCS at testing was evaluated by means of the area under the ROC curve. During learning, differences were found between the biased and unbiased penalty schemes, but only at unequal base rates. A linear relationship between bias...
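The entry above evaluates training and test performance by the area under the ROC curve. One standard way to compute that area, shown here as a minimal illustrative sketch (the function name and data are mine, not the paper's), is the rank-sum identity: AUC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counting half.

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney rank-sum identity: the fraction of
    (positive, negative) pairs where the positive outscores the
    negative, counting ties as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # Compare every positive against every negative.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives AUC 1.0, a perfectly inverted one gives 0.0, and chance performance sits at 0.5, which is what makes the measure robust to the unequal base rates the study varies.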
Additive Logistic Regression: a Statistical View of Boosting
Annals of Statistics, 1998
"... Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classifiers thereby produced. We show that this seemingly mysterious phenomenon can be understood in terms of well known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can ..."
Cited by 1719 (25 self)
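The mechanism the abstract describes (sequential fitting on reweighted data, then a weighted majority vote) can be sketched as a minimal AdaBoost with decision stumps. This is an illustrative toy implementation, not the paper's statistical derivation; the helper names and the exhaustive stump search are my own simplifications.

```python
import numpy as np

def stump_predict(X, feat, thresh, polarity):
    # A decision stump: predict +1 on one side of a threshold, -1 on the other.
    return polarity * np.where(X[:, feat] > thresh, 1, -1)

def adaboost_fit(X, y, n_rounds=20):
    """Stagewise additive modeling: each round fits the stump with the
    lowest weighted error on the reweighted data, then upweights the
    examples that stump got wrong."""
    n = len(y)
    w = np.full(n, 1.0 / n)            # example weights, initially uniform
    ensemble = []                      # list of (alpha, feat, thresh, polarity)
    for _ in range(n_rounds):
        best = None
        for feat in range(X.shape[1]):     # exhaustive stump search (toy scale)
            for thresh in np.unique(X[:, feat]):
                for pol in (1, -1):
                    err = w[stump_predict(X, feat, thresh, pol) != y].sum()
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, pol)
        err, feat, thresh, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)    # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)    # this stump's vote weight
        pred = stump_predict(X, feat, thresh, pol)
        w *= np.exp(-alpha * y * pred)           # reweight: boost the mistakes
        w /= w.sum()
        ensemble.append((alpha, feat, thresh, pol))
    return ensemble

def adaboost_predict(ensemble, X):
    # Weighted majority vote of the sequence of stumps.
    score = sum(a * stump_predict(X, f, t, p) for a, f, t, p in ensemble)
    return np.sign(score)
```

The exponential reweighting step is exactly the point of contact with the paper's thesis: it is the stagewise update that falls out of fitting an additive logistic model by (approximate) maximum likelihood.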
Estimating Attributes: Analysis and Extensions of RELIEF
1994
"... In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well-known real-world problem. ..."
Cited by 450 (23 self)
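The original two-class RELIEF that this paper extends admits a short sketch: sample an instance, find its nearest neighbour of the same class (the "hit") and of the other class (the "miss"), and credit features that separate the miss while matching the hit. This is an illustrative rendering under simplifying assumptions (continuous features on comparable scales, L1 distance); the function name is mine.

```python
import numpy as np

def relief(X, y, n_iter=100, rng=None):
    """Original two-class RELIEF (Kira & Rendell 1992), sketched:
    a feature's weight grows when it differs on the nearest miss
    and shrinks when it differs on the nearest hit."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)    # L1 distance to every point
        dist[i] = np.inf                       # exclude the point itself
        same = y == y[i]
        hit = np.where(same, dist, np.inf).argmin()    # nearest same-class point
        miss = np.where(~same, dist, np.inf).argmin()  # nearest other-class point
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iter
    return w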
Task Decomposition and Module Combination Based on Class Relations: A Modular Neural Network for Pattern Classification
IEEE Transactions on Neural Networks, 1999
"... Abstract — In this paper, we propose a new method for decomposing pattern classification problems based on the class relations among training data. By using this method, we can divide a K-class classification problem into a series of K(K−1)/2 two-class problems. These two-class problems are to discrimin ..."
Cited by 83 (36 self)
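The pairwise decomposition in the entry above, one binary classifier per unordered pair of classes for K(K−1)/2 problems in total, can be sketched generically. This toy uses simple voting to recombine the modules, not the paper's min-max module combination; `fit_binary` is an assumed interface (any trainer returning a predict function), and all names are illustrative.

```python
from itertools import combinations
import numpy as np

def one_vs_one_fit(X, y, fit_binary):
    """Decompose a K-class problem into K*(K-1)/2 two-class problems:
    one binary model per class pair, trained only on that pair's data."""
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        # Relabel: 0 means class a, 1 means class b.
        models[(a, b)] = fit_binary(X[mask], (y[mask] == b).astype(int))
    return models

def one_vs_one_predict(models, X):
    # Each pairwise model casts one vote per sample; most-voted class wins.
    classes = sorted({c for pair in models for c in pair})
    idx = {c: k for k, c in enumerate(classes)}
    votes = np.zeros((len(X), len(classes)), dtype=int)
    for (a, b), predict in models.items():
        p = predict(X)
        votes[np.arange(len(X)), np.where(p == 1, idx[b], idx[a])] += 1
    return np.array(classes)[votes.argmax(axis=1)]
```

Each module only ever sees two classes, which is what makes the two-class discriminant machinery of the other entries on this page directly reusable for K-class problems.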
Where the REALLY Hard Problems Are
In J. Mylopoulos and R. Reiter (eds.), Proceedings of 12th International Joint Conference on AI (IJCAI-91), Volume 1, 1991
"... It is well known that for many NP-complete problems, such as K-Sat, etc., typical cases are easy to solve, so that computationally hard cases must be rare (assuming P != NP). This paper shows that NP-complete problems can be summarized by at least one "order parameter", and that the hard problems occur at a critical value of such a parameter. This critical value separates two regions of characteristically different properties. For example, for K-colorability, the critical value separates overconstrained from underconstrained random graphs, and it marks the value at which the probability ..."
Cited by 681 (1 self)
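The K-colorability order parameter the abstract mentions, the edge/node ratio of a random graph, can be probed directly at toy scale: sample random graphs at a fixed edge count and measure how often they are 3-colorable. This is an illustrative experiment of mine (brute-force check, tiny graphs), not the paper's methodology.

```python
import itertools
import random

def colorable(n, edges, k=3):
    # Brute-force k-colorability check; fine for very small n.
    for colors in itertools.product(range(k), repeat=n):
        if all(colors[u] != colors[v] for u, v in edges):
            return True
    return False

def fraction_colorable(n, n_edges, trials=30, seed=0):
    """Fraction of random n-node graphs with exactly n_edges edges
    that are 3-colorable; the order parameter is edges/nodes."""
    rng = random.Random(seed)
    all_pairs = list(itertools.combinations(range(n), 2))
    hits = 0
    for _ in range(trials):
        edges = rng.sample(all_pairs, n_edges)
        hits += colorable(n, edges)
    return hits / trials
```

Sweeping `n_edges` shows the qualitative picture the paper quantifies: sparse (underconstrained) graphs are almost always colorable, dense (overconstrained) ones almost never, and the computational cost of deciding peaks in between.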
The Extended Linear Complementarity Problem
1993
"... We consider an extension of the horizontal linear complementarity problem, which we call the extended linear complementarity problem (XLCP). With the aid of a natural bilinear program, we establish various properties of this extended complementarity problem; these include the convexity of the biline ..."
Cited by 776 (28 self)
A Note on the Confinement Problem
1973
"... This note explores the problem of confining a program during its execution so that it cannot transmit information to any other program except its caller. A set of examples attempts to stake out the boundaries of the problem. Necessary conditions for a solution are stated and informally justified. ..."
Cited by 532 (0 self)
The Symbol Grounding Problem
1990
"... There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the "symbol grounding problem": How can the semantic interpretation of a formal symbol system be m ..."
Cited by 1072 (18 self)