Results 1-10 of 20
Training Digital Circuits with Hamming Clustering
 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS
, 2000
Abstract

Cited by 19 (15 self)
A new algorithm, called Hamming Clustering (HC), for the solution of classification problems with binary inputs is proposed. It builds a logical network containing only AND, OR, and NOT gates which, besides satisfying all the input-output pairs included in a given finite consistent training set, is able to reconstruct the underlying Boolean function. The basic …
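The cube-growing idea can be illustrated with a minimal sketch (a drastic simplification of my own, not the published HC procedure; the greedy bit order is an arbitrary choice): each positive pattern seeds a cube whose bits are turned into don't-cares as long as no negative pattern falls inside, and the OR of the resulting AND terms realises the training set.

```python
def hamming_cluster(positives, negatives, n_bits):
    """Toy sketch of the cube-growing idea: greedily widen a cube
    around each uncovered positive pattern, turning bits into
    don't-cares ('-') whenever the enlarged cube still excludes
    every negative pattern."""
    def covers(cube, x):
        return all(c == '-' or c == b for c, b in zip(cube, x))

    cubes, uncovered = [], list(positives)
    while uncovered:
        cube = list(uncovered[0])             # seed from a positive pattern
        for i in range(n_bits):
            saved, cube[i] = cube[i], '-'     # try to generalise bit i
            if any(covers(cube, neg) for neg in negatives):
                cube[i] = saved               # a negative slipped in: undo
        cubes.append(''.join(cube))
        uncovered = [p for p in uncovered if not covers(cube, p)]
    return cubes

def network_output(cubes, x):
    """OR of the AND terms encoded by the cubes (an AND-OR network)."""
    return any(all(c == '-' or c == b for c, b in zip(cube, x))
               for cube in cubes)
```

On the XOR training set ({01, 10} positive, {00, 11} negative) no bit can be generalised, so the sketch returns one minterm per positive pattern.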
On Convergence Properties of Pocket Algorithm
 IEEE Transactions on Neural Networks
, 1997
Abstract

Cited by 11 (5 self)
The problem of finding optimal weights for a single threshold neuron starting from a general training set is considered. Among the variety of possible learning techniques, the pocket algorithm possesses a convergence theorem that asserts its optimality.
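For reference, the pocket rule itself is easy to sketch (a minimal illustration with unit learning rate; the epoch count and seed are arbitrary choices of mine):

```python
import random

def pocket_train(samples, n_features, epochs=2000, seed=0):
    """Pocket algorithm sketch: run perceptron updates on randomly
    drawn samples, but keep in the 'pocket' the weight vector that
    has classified the most training samples correctly so far."""
    def correct_count(w):
        return sum(1 for x, y in samples
                   if y * (sum(wi * xi for wi, xi in zip(w, x)) + w[-1]) > 0)

    rng = random.Random(seed)
    w = [0.0] * (n_features + 1)              # last entry is the bias
    pocket, best = list(w), correct_count(w)
    for _ in range(epochs):
        x, y = rng.choice(samples)            # labels y are -1 or +1
        if y * (sum(wi * xi for wi, xi in zip(w, x)) + w[-1]) <= 0:
            for i in range(n_features):       # misclassified: perceptron step
                w[i] += y * x[i]
            w[-1] += y
            c = correct_count(w)
            if c > best:                      # pocket the best weights seen
                best, pocket = c, list(w)
    return pocket, best
```

On a linearly separable set such as logical AND the pocket eventually holds a perfect separator; on non-separable data it retains the best weights visited, which is what the convergence theorem is about.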
Spectral Technique for Hidden Layer Neural Network Training
, 1997
Abstract

Cited by 3 (3 self)
We propose a new constructive algorithm for learning binary-to-binary mappings. Weight constraints derived from a spectral summation are used to check separability during the partitioning phase and to limit hyperplane movement during training.
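One way to read "weight constraints derived from a spectral summation" (my interpretation, not the paper's exact construction) is through the first-order spectral, or Chow, coefficients of a Boolean function over {-1,+1}^n, which for threshold functions carry the sign pattern of a realising weight vector:

```python
from itertools import product

def spectral_weights(f, n):
    """First-order spectral (Chow) coefficients of a Boolean
    function f over {-1,+1}^n: the correlation of f with each
    input variable, plus the bias coefficient (correlation with 1)."""
    pts = list(product((-1, 1), repeat=n))
    w0 = sum(f(x) for x in pts)
    w = [sum(f(x) * x[i] for x in pts) for i in range(n)]
    return w0, w

def threshold_unit(w0, w, x):
    """Single threshold neuron using the spectral coefficients as weights."""
    return 1 if w0 + sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
```

For AND over {-1,+1}^2 the coefficients (-2, [2, 2]) already realise the function exactly.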
STRIP: a strip-based neural-network growth algorithm for learning multiple-valued functions
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 2001
Abstract

Cited by 3 (0 self)
We consider the problem of synthesizing multiple-valued logic functions by neural networks. A genetic algorithm (GA) which finds the longest strip in the input space is described. A strip contains points located between two parallel hyperplanes. Repeated application of the GA partitions the space into a certain number of strips, each of them corresponding to a hidden unit. We construct two neural networks based on these hidden units and show that they correctly compute the given, arbitrary multiple-valued function. Preliminary experimental results are presented and discussed.
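The strip notion is simple to state in code. Below is a sketch of the fitness a GA candidate (a direction w plus two offsets) might receive, assuming (as my own simplification, not the paper's exact fitness) that only class-pure strips score:

```python
def strip_size(points, labels, w, b_low, b_high):
    """A strip for direction w is the set of points x with
    b_low <= w.x <= b_high, i.e. the region between two parallel
    hyperplanes.  Score a candidate strip by the number of training
    points inside it, but only if they all share one label."""
    inside = [lab for x, lab in zip(points, labels)
              if b_low <= sum(wi * xi for wi, xi in zip(w, x)) <= b_high]
    if inside and all(lab == inside[0] for lab in inside):
        return len(inside)
    return 0
```

A strip capturing the two middle points of the 1-D pattern 0-1-1-0 scores 2, while a strip spanning all four points mixes labels and scores 0.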
A Comparison of Methods for Learning of Highly Non-Separable Problems
Abstract

Cited by 2 (2 self)
Learning in cases that are almost linearly separable is easy, but on highly non-separable problems all standard machine learning methods fail. Many strategies for building adaptive systems are based on the "divide-and-conquer" principle. Constructive neural network architectures with novel training methods make it possible to overcome some drawbacks of standard backpropagation MLP networks. They are able to handle complex multidimensional problems in reasonable time, creating models with a small number of neurons. In this paper a comparison of our new constructive c3sep algorithm, based on the k-separability idea, with several sequential constructive learning methods is reported. Tests have been performed on the parity function, the three artificial Monk's problems, and a few benchmark problems. Simple and accurate solutions have been discovered by the c3sep algorithm even in highly non-separable cases.
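The k-separability idea underlying c3sep can be checked with a small sketch (mine, not the c3sep training procedure itself): project the data on a direction and count the class-pure intervals formed by the sorted projections.

```python
def interval_count(points, labels, w):
    """Number of class-pure intervals formed by the data projected
    on direction w; the data are k-separable along w if k such
    intervals suffice."""
    proj = sorted((sum(wi * xi for wi, xi in zip(w, x)), lab)
                  for x, lab in zip(points, labels))
    k, prev = 0, object()                 # sentinel: first label always counts
    for _, lab in proj:
        if lab != prev:
            k, prev = k + 1, lab
    return k
```

n-bit parity, the classic hard case, is (n+1)-separable along the diagonal direction even though it is not linearly separable: for n = 3 the projected classes form four alternating intervals.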
Constructive Neural Network Algorithms that Solve Highly Non-Separable Problems
Abstract

Cited by 1 (1 self)
Learning from data with complex non-local relations and a multimodal class distribution is still very hard for widely used classification algorithms. Even if an accurate solution is found, the resulting model may be too complex for the given data and will not generalize well. New types of learning algorithms are needed to extend the capabilities of standard machine learning systems. Projection pursuit methods can avoid the "curse of dimensionality" by discovering interesting structures in low-dimensional subspaces. This paper introduces constructive neural architectures based on projection pursuit techniques that are able to discover simple models of data with inherent, highly complex logical structure.
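A projection pursuit step can be sketched as a search over candidate directions for the most "interesting" one-dimensional view. Here the interestingness index is just the separation of the two class means, a deliberately crude stand-in of my own for the indices such architectures actually use:

```python
def best_projection(points, labels, candidates):
    """Pick, among candidate directions, the one whose 1-D
    projection separates the two class means the most."""
    def project(w, x):
        return sum(wi * xi for wi, xi in zip(w, x))

    def score(w):
        p0 = [project(w, x) for x, lab in zip(points, labels) if lab == 0]
        p1 = [project(w, x) for x, lab in zip(points, labels) if lab == 1]
        return abs(sum(p1) / len(p1) - sum(p0) / len(p0))

    return max(candidates, key=score)
```

When the class depends only on the first coordinate, the axis direction (1, 0) wins over (0, 1); a mean-separation index would of course fail on parity-like data, which is exactly why richer indices are needed for complex logical structures.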
A Unified Approach to Sequential Constructive Methods
 WIRN’98, THE 10TH ITALIAN WORKSHOP ON NEURAL NETS
, 1998
Abstract

Cited by 1 (1 self)
A general treatment of a particular class of learning techniques for neural networks, called sequential constructive methods, is proposed. They successively add units to the hidden layer until all the input-output relations contained in a given training set are satisfied. Every addition …
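The skeleton shared by this class of methods fits in a few lines; how each unit is built is exactly where the individual algorithms differ, so in this sketch of mine it is left as a parameter:

```python
def sequential_construct(samples, fit_unit):
    """Generic sequential constructive loop: keep adding hidden
    units until every input-output pair in the training set has
    been satisfied by some unit.  fit_unit must return a unit (any
    callable x -> label) that is correct on at least one
    still-unsatisfied pair, which guarantees termination."""
    hidden, remaining = [], list(samples)
    while remaining:
        unit = fit_unit(remaining, samples)
        hidden.append(unit)
        # drop every pair the new unit classifies correctly
        remaining = [(x, y) for x, y in remaining if unit(x) != y]
    return hidden
```

Even a trivial fit_unit that memorises the first unsatisfied pair (answering its label there and the opposite elsewhere) absorbs XOR in two units, since each unit happens to satisfy some of the remaining pairs as well.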