Results 1–10 of 10
Are Artificial Neural Networks White Boxes?
, 2004
Abstract

Cited by 12 (4 self)
We introduce a novel Mamdani-type fuzzy model, referred to as the all-permutations fuzzy rule-base, and show that it is mathematically equivalent to a standard feedforward neural network. We describe several applications of this equivalence between a neural network and our fuzzy rule-base, including knowledge extraction from and knowledge insertion into neural networks.
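As a minimal sketch of the kind of equivalence described above (the weights, threshold, and linguistic labels here are invented, and the paper's all-permutations construction is more general), a single sigmoid hidden neuron can be read as a pair of fuzzy rules whose weighted-average inference reproduces the network's output exactly:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One hidden sigmoid neuron: f(x) = w_out * sigmoid(w*x + b) + c
w, b, w_out, c = 2.0, -1.0, 3.0, 0.5

def net(x):
    return w_out * sigmoid(w * x + b) + c

# The same mapping read as two fuzzy rules with sigmoidal membership
# mu(x) = sigmoid(w * (x - t)), where t = -b/w (labels are illustrative):
#   R1: IF x is "larger than t"  THEN y = c + w_out
#   R2: IF x is "smaller than t" THEN y = c
t = -b / w

def fuzzy(x):
    mu = sigmoid(w * (x - t))                 # degree of "larger than t"
    return mu * (c + w_out) + (1.0 - mu) * c  # weighted-average inference

for x in (-3.0, 0.0, 2.5):
    assert abs(net(x) - fuzzy(x)) < 1e-12     # identical input-output maps
```

Reading the rules off a trained network gives knowledge extraction; writing rules as weights gives knowledge insertion.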
Extracting symbolic knowledge from recurrent neural networks: a fuzzy logic approach
 Fuzzy Sets and Systems, Volume 160, Issue
, 2009
Abstract

Cited by 8 (3 self)
Considerable research has been devoted to the integration of fuzzy logic (FL) tools with classic artificial intelligence (AI) paradigms. One reason for this is that FL provides powerful mechanisms for handling and processing symbolic information stated using natural language. In this respect, fuzzy rule-based systems are white-boxes, as they process information in a form that is easy to understand, verify and, if necessary, refine. The synergy between artificial neural networks (ANNs), which are notorious for their black-box character, and FL proved to be particularly successful. Such a synergy allows combining the powerful learning-from-examples capability of ANNs with the high-level symbolic information processing of FL systems. In this paper, we present a new approach for extracting symbolic information from recurrent neural networks (RNNs). The approach is based on the mathematical equivalence between a specific fuzzy rule-base and functions composed of sums of sigmoids. We show that this equivalence can be used to provide a comprehensible explanation of the RNN functioning. We demonstrate the applicability of our approach by using it to extract the knowledge embedded within an RNN trained to recognize a formal language.
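A toy illustration of the idea (not the paper's FARB-based procedure; the weights and the language are invented): a steep sigmoid drives the recurrent state to near-binary values, so the RNN's behavior can be summarized by a crisp symbolic rule:

```python
import math

K = 20.0  # steep sigmoid -> near-binary state, easing symbolic reading

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def rnn_accepts(bits):
    s = 0.0
    for x in bits:
        # single recurrent unit acting as a soft OR of input and state
        s = sigmoid(K * (x + s - 0.5))
    return s > 0.5

# Symbolic knowledge we would hope to extract from the trained network:
def rule_accepts(bits):
    return 1 in bits   # "accept iff a 1 occurs somewhere in the string"

for bits in ([0, 0, 0], [0, 1, 0], [1], [], [0, 0, 1]):
    assert rnn_accepts(bits) == rule_accepts(bits)
```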
Rule Extraction from Knowledge-Based Neural Networks with Adaptive Inductive Bias: Application to Magnetic Resonance Spectroscopy of Breast Tissues
 Artificial Intelligence in Medicine
, 2001
Abstract

Cited by 7 (3 self)
The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world applications. The paradigm provides means for using prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias which guides network training, and to extract knowledge from trained networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for dealing with uncertainty in the prior knowledge. We address the open question of how to determine the strength of the inductive bias of programmed weights; we present a quantitative solution which takes the network architecture, the prior knowledge, and the training data into consideration. We apply our solution to the difficult problem of analyzing breast tissue from magnetic resonance spectroscopy; the available database is extremely limited and cannot be adequately explained by expert knowledge alone.
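The weight-programming step can be sketched KBANN-style. The rule, the weights, and the bias-strength value S below are hypothetical, and choosing S well is exactly the open question the paper addresses:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

S = 4.0  # inductive-bias strength: the free parameter the paper studies

# KBANN-style encoding of a hypothetical rule: IF a AND b THEN y.
# Each positive antecedent gets weight +S; the bias places the
# activation threshold between 1 and 2 true antecedents.
w_a, w_b = S, S
bias = -1.5 * S

def y(a, b):
    return sigmoid(w_a * a + w_b * b + bias)

assert y(1, 1) > 0.5                       # both antecedents true -> fires
assert y(1, 0) < 0.5 and y(0, 1) < 0.5     # rule not satisfied -> silent
```

A larger S makes the programmed rule harder for gradient descent to overturn; a smaller S lets the training data revise it more easily.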
A new approach to knowledge-based design of recurrent neural networks
 IEEE Trans. Neural Networks
, 2008
Abstract

Cited by 3 (3 self)
A major drawback of artificial neural networks (ANNs) is their black-box character. This is especially true for recurrent neural networks (RNNs) because of their intricate feedback connections. In particular, given a problem and some initial information concerning its solution, it is not at all clear how to design an RNN that is suitable for solving this problem. In this paper, we consider a fuzzy rule-base with a special structure, referred to as the fuzzy all-permutations rule-base (FARB). Inferring the FARB yields an input-output mapping that is mathematically equivalent to that of an RNN. We use this equivalence to develop two new knowledge-based design methods for RNNs. The first method, referred to as the direct approach, is based on stating the desired functioning of the RNN in terms of several sets of symbolic rules, each one corresponding to a subnetwork. Each set is then transformed into a suitable FARB. The second method is based on first using the direct approach to design a library of simple modules, such as counters or comparators, and realize them using RNNs. Once designed, the correctness of each RNN can be verified. Then, the initial design problem is solved by using these basic modules as building blocks. This yields a modular and systematic approach for knowledge-based design of RNNs. We demonstrate the efficiency of these approaches by designing RNNs that recognize both regular and non-regular formal languages.
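A loose schematic of the modular idea, using an invented linear counter unit and a sigmoidal comparator rather than the paper's FARB-derived modules: simple, individually verifiable modules are composed to recognize a language such as "at least two 1s":

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

K = 20.0  # comparator steepness

def counter(bits):
    # Module 1: a linear recurrent unit accumulating the input stream.
    c = 0.0
    for x in bits:
        c = c + x
    return c

def at_least_two_ones(bits):
    # Module 2: a sigmoidal comparator "count > 1.5" stacked on module 1.
    return sigmoid(K * (counter(bits) - 1.5)) > 0.5

assert at_least_two_ones([1, 0, 1])
assert not at_least_two_ones([1, 0, 0])
```

Each module can be checked in isolation before composition, which is the appeal of the modular design route the abstract describes.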
Rule Extraction from Knowledge-Based Neural Networks with Adaptive Inductive Bias
, 2001
Abstract

Cited by 3 (1 self)
The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world applications. The paradigm provides means for using prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias which guides network training, and to extract knowledge from trained networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for dealing with uncertainty in the prior knowledge. We have previously proposed a heuristic for determining the strength of the inductive bias which takes the network architecture, the prior knowledge, the training data, and the learning algorithm into consideration; networks trained with adaptive inductive bias showed superior performance over networks trained with a standard inductive bias. This paper compares the performance of symbolic rules extracted from networks trained with and without adaptive bias, respectively. We give empirical results for a difficult problem in molecular biology.
Knowledge-Based Neural Networks for Modelling Time Series
, 2001
Abstract

Cited by 1 (0 self)
Various methods exist for extracting rules from data for classification purposes. We propose a new method for initializing a neural network used for time series modelling and prediction. We extract binary rules from a real-valued time series and encode them into a neural network using an adaptation of KBANN. We test the method on the Lorenz system as well as on real-world data in the form of a seismic time series.
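A hypothetical sketch of the binarization step such a method might start from (the series, threshold, and candidate rule are all invented, and the subsequent KBANN weight encoding is not shown):

```python
# Step 1: binarize the real-valued series around its mean.
series = [0.1, 0.4, 0.9, 0.8, 0.2, 0.1, 0.5, 0.9, 0.7, 0.3]
mean = sum(series) / len(series)
bits = [1 if v > mean else 0 for v in series]

# Step 2: tabulate one-step transitions between the binary symbols.
follow = {0: [0, 0], 1: [0, 0]}   # follow[now][next] = count
for now, nxt in zip(bits, bits[1:]):
    follow[now][nxt] += 1

# Step 3: keep the majority-vote rule "IF x_t is high THEN x_{t+1} is high"
# if the transition counts support it; such rules would then seed the
# network's initial weights before training refines them.
rule_holds = follow[1][1] >= follow[1][0]
```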
Declaration
, 2003
Abstract
I, the undersigned, hereby declare that the work contained in this thesis is my own original work and that I have not previously in its entirety or in part submitted it at any university for a degree. Signature: Stellenbosch University
Center of Excellence for IP and Internet Computing
Abstract
We present an artificial intelligence method for the development of decision support systems for environmental management and demonstrate its strengths using an example from the domain of biodiversity and conservation biology. Renosterveld is a vegetation unique to South Africa; it is under threat of extinction as a result of rapidly growing agricultural activities. Our approach takes into account local expert knowledge together with collected field data about plant habitats in order to identify areas which show potential for conserving thriving areas of Renosterveld vegetation and areas that are best suited for agriculture, and combines them in a knowledge-based neural network. The paradigm provides means for using prior knowledge to determine a suitable neural network architecture, to program a subset of weights to induce an explicit learning bias which guides network training, and to extract knowledge from trained networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for dealing with uncertainty in an initial domain theory. We present a quantitative solution to the determination of the learning bias which takes the network architecture, the prior knowledge, the training data and the gradient descent neural network learning algorithm into consideration.
Inductive Bias in Recurrent Neural Networks
, 2001
Abstract
The use of prior knowledge to train neural networks for better performance has attracted increased attention. Initial domain theories exist for many machine learning applications. For both feedforward and recurrent neural networks, algorithms for encoding prior knowledge have been constructed. We propose a heuristic for determining the strength of the prior knowledge (inductive bias) for recurrent neural networks encoded with a DFA as initial domain knowledge. Our heuristic uses gradient information in weight space in the direction of the prior knowledge to enhance performance. Tests on known benchmark problems demonstrate that our heuristic reduces training time, on average, by 30% compared to a random choice of the strength of the inductive bias. It also achieves, on average, near-perfect generalization for that specific choice of the inductive bias.
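One way such a gradient-based heuristic could look in miniature (this is a guess at the flavor of the method, not the paper's algorithm; the toy model, data, and candidate strengths are invented): scan candidate bias strengths and keep the one where the loss gradient along the prior-knowledge direction is smallest in magnitude, i.e. where the data pushes back least against the prior:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy task: one weight w, model p = sigmoid(w * x), noisy binary targets.
data = [(-1.0, 0), (1.0, 1), (-0.3, 1), (0.3, 0)]

def loss(w):
    # cross-entropy of the single-weight model on the toy data
    eps = 1e-12
    return -sum(t * math.log(sigmoid(w * x) + eps)
                + (1 - t) * math.log(1.0 - sigmoid(w * x) + eps)
                for x, t in data)

def grad(w, h=1e-5):
    # numerical loss gradient along the prior-knowledge direction w > 0
    return (loss(w + h) - loss(w - h)) / (2 * h)

# Prior knowledge: "large x -> class 1", i.e. the programmed weight is
# positive; only its strength remains to be chosen.
candidates = [0.5, 1.0, 1.5, 2.0, 4.0]
best_S = min(candidates, key=lambda S: abs(grad(S)))
```

On this data the scan settles on an intermediate strength: too weak and the data pulls the weight up, too strong and it pulls the weight back down.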
Jacobus van Zyl, B.Sc. B.Eng. M.Sc.
Abstract
In 1965 Lotfi A. Zadeh proposed fuzzy sets as a generalization of crisp (or classic) sets to address the incapability of crisp sets to model the uncertainty and vagueness inherent in the real world. Initially, fuzzy sets did not receive a very warm welcome, as many academics stood skeptical towards a theory of “imprecise” mathematics. In the middle to late 1980s the success of fuzzy controllers brought fuzzy sets into the limelight, and many applications using fuzzy sets started appearing. In the early 1970s the first machine learning algorithms started appearing. The AQ (for Aq) family of algorithms pioneered by Ryszard S. Michalski is a good example of the family of set covering algorithms. This class of learning algorithm induces concept descriptions by a greedy construction of rules that describe (or cover) positive training examples but not negative training examples. The learning process is iterative, and in each iteration one rule is induced and the positive examples covered by the rule are removed from the set of positive training examples. Because positive instances are separated from negative instances, the term separate-and-conquer has been used to contrast the learning strategy against decision tree induction, which uses a divide-and-conquer learning strategy. This dissertation proposes fuzzy set covering as a powerful rule induction strategy. We survey existing
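A minimal crisp separate-and-conquer loop in the spirit described above (the attribute names and data are invented, and the dissertation's fuzzy generalization is not shown): each iteration greedily specializes one rule until it covers no negatives, then removes the positives it covers:

```python
# Toy training data over invented boolean attributes.
pos = [{"wings": 1, "feathers": 1}, {"wings": 1, "feathers": 0}]
neg = [{"wings": 0, "feathers": 0}, {"wings": 0, "feathers": 1}]

def covers(rule, example):
    # a rule is a conjunction of attribute = value tests
    return all(example[a] == v for a, v in rule.items())

def learn_rules(pos, neg):
    rules, remaining = [], list(pos)
    while remaining:
        seed, rule = remaining[0], {}
        # specialize: add tests from the seed while negatives are covered
        for attr, val in seed.items():
            if any(covers(rule, n) for n in neg):
                rule[attr] = val
        rules.append(rule)
        # separate: drop the positives this rule now covers
        remaining = [p for p in remaining if not covers(rule, p)]
    return rules

rules = learn_rules(pos, neg)
```

Here a single test (`wings = 1`) already excludes all negatives, so one rule covers every positive and the loop stops after one iteration.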