Results 1–10 of 100
Instance-based learning algorithms
 Machine Learning
, 1991
Abstract

Cited by 1389 (18 self)
Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
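The storage-reduction idea in this abstract, keeping a training instance only when the concept description built so far misclassifies it, can be sketched as follows. This is a minimal nearest-neighbour illustration in the spirit of the approach, not the paper's exact algorithm; the example data are hypothetical:

```python
import math

def nearest_label(stored, x):
    """Return the label of the stored instance closest to x (Euclidean)."""
    best = min(stored, key=lambda s: math.dist(s[0], x))
    return best[1]

def train_reduced(instances):
    """Incrementally keep only the instances the current store misclassifies.

    Correctly classified training instances are discarded rather than
    stored, which is the storage-reduction idea described above.
    """
    stored = []
    for x, y in instances:
        if not stored or nearest_label(stored, x) != y:
            stored.append((x, y))
    return stored

# Two well-separated classes: once one exemplar per class is stored,
# the remaining instances are classified correctly and discarded.
data = [((0.0, 0.0), "a"), ((0.1, 0.0), "a"), ((0.0, 0.1), "a"),
        ((5.0, 5.0), "b"), ((5.1, 5.0), "b"), ((5.0, 5.1), "b")]
store = train_reduced(data)
print(len(store))  # 2: one exemplar per class suffices here
```

On this toy data only two of the six instances are retained, illustrating the reduced storage requirement at unchanged classification behaviour.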
Extracting tree-structured representations of trained networks.
 In Touretzky et al. (1996)
, 1996
Extracting Comprehensible Models from Trained Neural Networks
, 1996
Abstract

Cited by 84 (3 self)
To Mom, Dad, and Susan, for their support and encouragement.
Adaptive Playout Scheduling and Loss Concealment for Voice Communication over IP Networks
 IEEE Transactions on Multimedia
, 2002
Abstract

Cited by 56 (2 self)
A new receiver-based playout scheduling scheme is proposed to improve the tradeoff between buffering delay and late loss for real-time voice communication over IP networks. The scheme estimates the network delay from past statistics and adaptively adjusts the playout time of the voice packets. In contrast to previous work, the adjustment is not only performed between talkspurts, but also within talkspurts in a highly dynamic way. Proper reconstruction of continuous playout speech is achieved by scaling individual voice packets using a time-scale modification technique based on the WSOLA algorithm. Results of subjective listening tests show that this operation does not impair audio quality, since the adaptation process requires infrequent scaling of the voice packets and low playout jitter is perceptually tolerable. The same time-scale modification technique is also used to conceal packet loss at very low delay, i.e., one packet time. Simulation results based on Internet measurements show that the tradeoff between buffering delay and late loss can be improved significantly. The overall audio quality is investigated based on subjective listening tests, showing typical gains of 1 on a 5-point MOS scale.
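Estimating network delay "from past statistics" is classically done with exponentially smoothed estimates of the mean delay and its variation, with the playout deadline set a few deviations above the mean. The sketch below shows that classic talkspurt-level estimator, not this paper's within-talkspurt scheme; the smoothing factor and initial values are illustrative assumptions:

```python
ALPHA = 0.998  # smoothing factor; a commonly cited value in the literature

def update(d_hat, v_hat, delay):
    """Exponentially smoothed estimates of network delay and its variation."""
    d_hat = ALPHA * d_hat + (1 - ALPHA) * delay
    v_hat = ALPHA * v_hat + (1 - ALPHA) * abs(d_hat - delay)
    return d_hat, v_hat

def playout_delay(d_hat, v_hat, k=4.0):
    """Playout deadline: mean delay plus a safety margin for jitter."""
    return d_hat + k * v_hat

d_hat, v_hat = 100.0, 10.0           # initial estimates in ms (hypothetical)
for d in [95.0, 105.0, 250.0, 98.0]:  # observed one-way delays in ms
    d_hat, v_hat = update(d_hat, v_hat, d)
print(playout_delay(d_hat, v_hat))
```

Because late packets (like the 250 ms spike) inflate the variation estimate, the deadline grows after congestion and shrinks again as delays stabilise, which is the buffering-delay/late-loss tradeoff the abstract describes.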
Gamut constrained illuminant estimation
 International Journal of Computer Vision
, 2006
Abstract

Cited by 47 (0 self)
This paper presents a novel solution to the illuminant estimation problem: the problem of how, given an image of a scene taken under an unknown illuminant, we can recover an estimate of that light. The work is founded on previous gamut mapping solutions to the problem, which solve for a scene illuminant by determining the set of diagonal mappings which take image data captured under an unknown light to a gamut of reference colours taken under a known light. Unfortunately, a diagonal model is not always a valid model of illumination change, and so previous approaches sometimes return a null solution. In addition, previous methods are difficult to implement. We address these problems by recasting the problem as one of illuminant classification: we define a priori a set of plausible lights, thus ensuring that a scene illuminant estimate will always be found. A plausible light is represented by the gamut of colours observable under it, and the illuminant in an image is classified by determining the plausible light whose gamut is most consistent with the image data. We show that this step (the main computational burden of the algorithm) can be performed simply, quickly, and efficiently by means of a non-negative least-squares optimisation. We report results on a large set of real images which show that it provides excellent illuminant estimation, outperforming previous algorithms.
Scene illuminant estimation: Past, present, and future
 Color Research & Application
, 2006
Abstract

Cited by 33 (0 self)
This paper addresses the problem of colour constancy: how a visual system is able to ensure that the colours it perceives remain stable, regardless of the prevailing scene illuminant. Our aim is first to summarise and review the most important theoretical advances that have been made in this field. Second, we present a comparative analysis of algorithm performance, which we use as the basis of a discussion of the current state of colour constancy research and of the important issues which future research in this field should address. Finally, we highlight some areas of recent research which we believe are important in the context of further improving the performance of colour constancy algorithms.
On connected multiple point coverage in wireless sensor networks
 Journal of Wireless Information Networks
, 2006
Abstract

Cited by 30 (0 self)
We consider a wireless sensor network consisting of a set of sensors deployed randomly. A point in the monitored area is covered if it is within the sensing range of a sensor. In some applications, when the network is sufficiently dense, area coverage can be approximated by guaranteeing point coverage. In this case, the positions of the wireless devices can be used to represent the whole area, and the working sensors are supposed to cover all the sensors. Many applications related to security and reliability require guaranteed k-coverage of the area at all times. In this paper, we formalize the k-(Connected) Coverage Set (k-CCS/k-CS) problems, develop a linear programming algorithm, and design two non-global solutions for them. Some theoretical analysis is also provided, followed by simulation results. Index Terms — Coverage problem, linear programming, localized algorithms, reliability, wireless sensor networks.
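The k-coverage predicate underlying the k-CS/k-CCS problems, every sensor point must lie within the sensing range of at least k working sensors, can be checked directly. This sketch shows only that predicate, not the paper's linear programming or localized algorithms; positions and radius are hypothetical:

```python
import math

def k_covered(points, active, radius, k):
    """True if every point lies within `radius` of at least k active sensors."""
    return all(
        sum(math.dist(p, a) <= radius for a in active) >= k
        for p in points
    )

# Hypothetical unit-square deployment: four sensor positions, two kept active.
sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
active = [(0.0, 0.0), (1.0, 1.0)]
print(k_covered(sensors, active, radius=1.2, k=1))  # True
print(k_covered(sensors, active, radius=1.2, k=2))  # False: corners see only one
```

A working-set selection algorithm would search for a small `active` set for which this predicate holds (plus connectivity, in the k-CCS variant).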
Interpretation and inference in mixture models: Simple MCMC works
 Journal of Econometrics
, 2007
Abstract

Cited by 28 (1 self)
The mixture model likelihood function is invariant with respect to permutation of the components of the mixture. If functions of interest are permutation sensitive, as in classification applications, then interpretation of the likelihood function requires valid inequality constraints, and a very large sample may be required to resolve ambiguities. If functions of interest are permutation invariant, as in prediction applications, then there are no such problems of interpretation. Contrary to assessments in some recent publications, simple and widely used Markov chain Monte Carlo (MCMC) algorithms with data augmentation reliably recover the entire posterior distribution.
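The permutation invariance in the first sentence is easy to verify numerically: relabelling the components (swapping weights and means together) leaves the mixture likelihood unchanged. A minimal sketch with a two-component Gaussian mixture on hypothetical data:

```python
import math

def mixture_loglik(data, weights, means, sd=1.0):
    """Log-likelihood of a Gaussian mixture with a common, fixed sd."""
    ll = 0.0
    for x in data:
        dens = sum(w * math.exp(-0.5 * ((x - m) / sd) ** 2) /
                   (sd * math.sqrt(2 * math.pi))
                   for w, m in zip(weights, means))
        ll += math.log(dens)
    return ll

data = [-1.2, -0.8, 0.9, 1.1]
# Swapping the component labels permutes weights and means together:
ll_ab = mixture_loglik(data, [0.4, 0.6], [-1.0, 1.0])
ll_ba = mixture_loglik(data, [0.6, 0.4], [1.0, -1.0])
print(abs(ll_ab - ll_ba) < 1e-9)  # True
```

This is why permutation-invariant functions of interest (e.g. predictive densities) are unproblematic, while component-specific quantities require identifying constraints.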
Re-evaluating colour constancy algorithms.
 17th International Conference on Pattern Recognition, IEEE Computer Society
, 2004
Abstract

Cited by 18 (1 self)
We present a re-evaluation of previous experimental data for five …
A New Statistical Testing for Symmetric Ciphers and Hash Functions
 Proc. Information and Communications Security 2002, volume 2513 of LNCS
, 2002
Abstract

Cited by 17 (1 self)
This paper presents a new, powerful statistical test for symmetric ciphers and hash functions which allowed us to detect biases in both of these systems where previously known tests failed. We first give a complete characterization of the Algebraic Normal Form (ANF) of random Boolean functions by means of the Möbius transform. We then build a new test based on the comparison between the structure of the Algebraic Normal Forms of the Boolean functions characterizing symmetric ciphers and hash functions and those of purely random Boolean functions. Detailed testing results on several cryptosystems are presented. As a main result, we show that AES, DES, SNOW and LILI-128 fail all or part of the tests and thus present strong biases.
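The Möbius transform mentioned here maps a Boolean function's truth table to its Algebraic Normal Form coefficients over GF(2); the abstract's test then compares the ANF structure of a cipher's component functions against that of random Boolean functions. A minimal sketch of the transform itself (the standard in-place butterfly, not the paper's full statistical test):

```python
def anf(truth_table):
    """Möbius transform: truth table of f over GF(2) -> ANF coefficients.

    truth_table[x] is f(x) for x = 0 .. 2^n - 1; entry u of the result is 1
    iff the monomial whose variable set is the bit support of u appears in
    the ANF of f.
    """
    a = list(truth_table)
    n = len(a)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                a[j + step] ^= a[j]  # XOR butterfly over GF(2)
        step *= 2
    return a

# f(x1, x0) = x1 AND x0, inputs enumerated 00, 01, 10, 11:
print(anf([0, 0, 0, 1]))  # [0, 0, 0, 1] -> ANF is the single monomial x1*x0
```

Running the transform over many component functions and comparing the monomial-degree statistics against the random-function characterization is the comparison the abstract describes.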