Results 1–10 of 11
The principle of presence: A heuristic for growing knowledge structured neural networks
In Proceedings of the NeuroSymbolic Workshop at IJCAI (NeSy'05), 2005
Abstract

Cited by 2 (0 self)
Fully connected neural networks such as multilayer perceptrons can approximate any given bounded function provided they have sufficient time. But this time grows quickly with the number of connections. In lifelong learning, the agent must acquire more and more knowledge in order to solve problems growing in complexity. For this purpose, it does not seem reasonable to fully connect huge networks. Adopting the point of view of locality, we hypothesize that memorization takes into account only what one perceives and thinks. Based on this principle of presence, a neural network is constructed for structuring knowledge online. Advantages and limitations are discussed.
Extraction of Symbolic Rules from Artificial Neural Networks
, 2005
Abstract

Cited by 2 (0 self)
... better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions cannot be explained as those of decision trees can. In many applications, it is desirable to extract knowledge from trained ANNs so that users can gain a better understanding of how the networks solve the problems. A new rule extraction algorithm, called rule extraction from artificial neural networks (REANN), is proposed and implemented to extract symbolic rules from ANNs. A standard three-layer feedforward ANN is the basis of the algorithm. A four-phase training algorithm is proposed for backpropagation learning. Explicitness of the extracted rules is supported by comparing them to the symbolic rules generated by other methods. Extracted rules are comparable with other methods in terms of number of rules, average number of conditions per rule, and predictive accuracy. Extensive experimental studies on several benchmark classification problems, such as breast cancer, iris, diabetes, and season classification, demonstrate the effectiveness of the proposed approach and its good generalization ability.
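The REANN pipeline described in this abstract ends by reading rules off discretized hidden activations. As a rough illustration of that final step (the function name and the toy data are hypothetical, not taken from the paper), one can group examples by their discretized hidden-state vector and record the class the network predicts for each state; these state-to-class pairs are the intermediate rules that are then rewritten in terms of the inputs:

```python
def hidden_to_output_rules(discrete_hidden, predictions):
    """Map each distinct discretized hidden-state vector to the set of
    classes the network predicts for it. Illustrative sketch only;
    not the paper's exact procedure.
    """
    rules = {}
    for state, cls in zip(discrete_hidden, predictions):
        rules.setdefault(tuple(state), set()).add(cls)
    # Sort class sets for deterministic, readable output.
    return {state: sorted(classes) for state, classes in rules.items()}
```

For example, two examples landing in hidden state `(1, 0)` with prediction "benign" and one in `(0, 1)` with "malignant" yield the rule table `{(1, 0): ["benign"], (0, 1): ["malignant"]}`.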
Improving Rule Extraction from Neural Networks by Modifying Hidden Layer Representation
Abstract

Cited by 1 (1 self)
This paper describes a new method for extracting symbolic rules from multilayer feedforward neural networks. Our approach is to encourage backpropagation to learn a sparser representation at the hidden layer and to use the improved representation to extract fewer, easier-to-understand rules. A new error term defined over the hidden layer is added to the standard sum of squared errors so that the total squared distance between hidden activation vectors is increased. We show that this method helps extract fewer rules without decreasing classification accuracy on four publicly available data sets.
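The modified objective described above can be sketched as follows, under one plausible formulation: subtract a pairwise-distance term over hidden activations from the sum of squared errors, so that minimizing the total pushes hidden representations apart. The weight `lam` and the exact form of the separation term are assumptions, not taken from the paper:

```python
import numpy as np

def modified_error(outputs, targets, hidden, lam=0.01):
    """Standard sum-of-squared-error minus a hidden-layer separation term.

    Because the total pairwise squared distance between hidden activation
    vectors enters with a negative sign, gradient descent on this objective
    *increases* that distance. `lam` is a hypothetical trade-off weight;
    the paper's exact formulation may differ.
    """
    sse = 0.5 * np.sum((outputs - targets) ** 2)
    # Total pairwise squared distance: sum over i < j of ||h_i - h_j||^2.
    diffs = hidden[:, None, :] - hidden[None, :, :]
    separation = 0.5 * np.sum(diffs ** 2)  # each pair counted twice, halved
    return sse - lam * separation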
Extracting Symbolic Representations Learned by Neural Networks (dissertation)
Abstract
Understanding what neural networks learn from training data is of great interest in data mining, data analysis, critical applications, and in evaluating neural network models. Unfortunately, the product of neural network training is typically opaque matrices of floating-point numbers that are not obviously understandable. This difficulty has inspired substantial past research on how to extract symbolic, human-readable representations from a trained neural network, but the results obtained so far are very limited (e.g., large rule sets are produced). This problem occurs in part due to the distributed hidden layer representation created during learning. Most past symbolic knowledge extraction algorithms have focused on progressively more sophisticated ways to cluster this distributed representation. In contrast, in this dissertation, I take a different approach. I develop ways to alter the error backpropagation training process itself so that it creates a representation of what has been learned in the hidden layer activation space that is more amenable to existing symbolic representation extraction methods. In this context, this dissertation research makes four main contributions. First, ...
Extracting Propositional Logic Programs From Neural Networks:
, 2006
Abstract
Student: Valentin Mayer-Eichberger. Matriculation number: 2889037. Date and place of birth: October 14th, 1982, Tübingen. Task: construction and evaluation of a new decompositional extraction algorithm in the field of neural-symbolic integration. Combining artificial neural networks and logic programming for machine learning tasks is the main objective of neural-symbolic integration. One important step towards practical applications in this field is the development of techniques for extracting symbolic knowledge from neural networks. In this thesis a new extraction method is proposed and thoroughly investigated. It translates the class of feedforward networks with binary threshold functions into propositional logic programs by means of a decompositional ...
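The decompositional translation the thesis describes works unit by unit. A minimal sketch of one such step, for a single binary threshold unit over 0/1 inputs: enumerate the input assignments that make the unit fire and emit each as a propositional conjunction. The function name and the brute-force enumeration are illustrative; practical decompositional methods prune this exponential search:

```python
from itertools import product

def unit_to_clauses(weights, theta, names):
    """Return the conjunctions of input literals under which a binary
    threshold unit (fires iff sum(w_i * x_i) >= theta) outputs 1.
    Hypothetical helper, not the thesis's actual algorithm.
    """
    clauses = []
    for bits in product([0, 1], repeat=len(weights)):
        if sum(w * b for w, b in zip(weights, bits)) >= theta:
            lits = [n if b else f"not {n}" for n, b in zip(names, bits)]
            clauses.append(" and ".join(lits))
    return clauses
```

For instance, a unit with weights `[1, 1]` and threshold `2` over inputs `a`, `b` fires only when both are true, yielding the single clause `"a and b"`.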
An Algorithm to Extract Rules from Artificial Neural Networks for Medical Diagnosis Problems
Abstract
Artificial neural networks (ANNs) have been successfully applied to solve a variety of classification and function approximation problems. Although ANNs can generally predict better than decision trees for pattern classification problems, ANNs are often regarded as black boxes since their predictions cannot be explained clearly like those of decision trees. This paper presents a new algorithm, called rule extraction from ANNs (REANN), to extract rules from trained ANNs for medical diagnosis problems. A standard three-layer feedforward ANN with four-phase training is the basis of the proposed algorithm. In the first phase, the number of hidden nodes is determined automatically by a constructive algorithm. In the second phase, irrelevant connections and input nodes are removed from the trained ANN without sacrificing its predictive accuracy. In the third phase, the continuous activation values of the hidden nodes are discretized using an efficient heuristic clustering algorithm. Finally, rules are extracted from the compact ANN by examining the discretized activation values of the hidden nodes. Extensive experimental studies on three benchmark classification problems, i.e. breast cancer, diabetes, and lenses, demonstrate that REANN can generate high-quality rules from ANNs that are comparable with other methods in terms of number of rules, average number of conditions per rule, and predictive accuracy.
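The third phase, discretizing the continuous hidden activations by heuristic clustering, can be illustrated with a simple greedy one-dimensional scheme: sort the values, open a new cluster whenever the gap exceeds a tolerance, and replace each value by its cluster mean. This is a stand-in under stated assumptions, not the paper's actual clustering algorithm:

```python
def discretize(values, eps=0.1):
    """Greedily cluster 1-D activation values: a new cluster starts when
    the gap to the previous sorted value exceeds `eps`; each activation
    is then replaced by its cluster mean. Illustrative sketch only.
    """
    order = sorted(values)
    clusters, current = [], [order[0]]
    for v in order[1:]:
        if v - current[-1] > eps:
            clusters.append(current)
            current = [v]
        else:
            current.append(v)
    clusters.append(current)
    means = {v: sum(c) / len(c) for c in clusters for v in c}
    return [means[v] for v in values]
```

For example, with `eps=0.2` the activations `[0.1, 0.12, 0.9]` collapse to two discrete levels: the first two values map to their mean (0.11) and the last stays at 0.9.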
A New Data Mining Scheme Using Artificial Neural Networks
, 2011