Results 1-10 of 28
Symbolic knowledge extraction from trained neural networks: A sound approach
, 2001
Abstract

Cited by 57 (10 self)
Although neural networks have shown very good performance in many application domains, one of their main drawbacks lies in their inability to provide an explanation for the underlying reasoning mechanisms. The "explanation capability" of neural networks can be achieved by the extraction of symbolic knowledge. In this paper, we present a new method of extraction that captures nonmonotonic rules encoded in the network, and prove that such a method is sound. We start by discussing some of the main problems of knowledge extraction methods. We then discuss how these problems may be ameliorated. To this end, a partial ordering on the set of input vectors of a network is defined, as well as a number of pruning and simplification rules. The pruning rules are then used to reduce the search space of the extraction algorithm during a pedagogical extraction, whereas the simplification rules are used to reduce the size of the extracted set of rules. We show that, in the case of regular networks, the extraction algorithm is sound and complete. We proceed to extend the extraction algorithm to the class of nonregular networks, the general case. We show that nonregular networks always contain regularities in their subnetworks. As a result, the underlying extraction method for regular networks can be applied, but now in a decompositional fashion. In order to combine the sets of rules extracted from each subnetwork into the final set of rules, we use a method that preserves the soundness of the extraction algorithm. Finally, we present the results of an empirical analysis of the extraction system, using traditional examples and real-world application problems. The results have shown that a very high fidelity between the extracted set of rules and the network can be achieved.
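The pedagogical extraction described in this abstract treats the trained network purely as a black box: it is queried on input vectors and each positive response yields a candidate rule. A minimal sketch of that idea, with an invented stand-in for a trained network (the partial ordering, pruning, and simplification steps of the paper's actual algorithm are omitted):

```python
from itertools import product

# Hypothetical stand-in for a trained network, treated as a black box
# mapping a binary input vector to a binary output. Any trained binary
# classifier could take its place.
def network(inputs):
    a, b, c = inputs
    return int((a and b) or not c)

def pedagogical_extract(net, n_inputs):
    """Query the black box on every binary input vector and keep the
    positive ones; each positive vector is one (unsimplified) rule,
    read as a conjunction of literals implying the output."""
    rules = []
    for vector in product([0, 1], repeat=n_inputs):
        if net(vector) == 1:
            # literal xi when the bit is 1, ~xi when it is 0
            rules.append([f"x{i}" if bit else f"~x{i}"
                          for i, bit in enumerate(vector)])
    return rules

for rule in pedagogical_extract(network, 3):
    print(" & ".join(rule))
```

Exhaustively enumerating input vectors is exponential in the number of inputs, which is exactly why the paper's pruning rules, driven by the partial ordering on input vectors, matter in practice.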
Combining Abductive Reasoning and Inductive Learning to Evolve Requirements Specifications
IEE Proceedings - Software
, 2003
Abstract

Cited by 15 (6 self)
The development of requirements specifications inevitably involves modification and evolution. To support modification while preserving particular requirements goals and properties, we propose the use of a cycle composed of two phases: analysis and revision. In the analysis phase, a desirable property of the system is checked against a partial specification. Should the property be violated, diagnostic information is provided. In the revision phase, the diagnostic information is used to help modify the specification in such a way that the new specification no longer violates the original property.
A Connectionist Inductive Learning System for Modal Logic Programming
Abstract

Cited by 12 (1 self)
Neural-Symbolic integration has become a very active research area in the last decade. In this paper, we present a new massively parallel model for modal logic. We do so by extending the language of Modal Prolog [32, 37] to allow modal operators in the head of the clauses. We then use an ensemble of CILP neural networks [14, 15] to encode the extended modal theory (and its relations), and show that the ensemble computes a fixpoint semantics of the extended theory. An immediate result of our approach is the ability to perform learning from examples efficiently using each network of the ensemble. Therefore, one can adapt the extended CILP system by training possible world representations.
Reasoning about Time and Knowledge in Neural-Symbolic Learning Systems
, 2003
Abstract

Cited by 11 (5 self)
We show that temporal logic and combinations of temporal logics and modal logics of knowledge can be effectively represented in artificial neural networks. We present a Translation Algorithm from temporal rules to neural networks, and show that the networks compute a fixed-point semantics of the rules. We also apply the translation to the muddy children puzzle, which has been used as a testbed for distributed multi-agent systems. We provide a complete solution to the puzzle with the use of simple neural networks, capable of reasoning about time and of knowledge acquisition through inductive learning.
Challenge problems for the integration of logic and connectionist systems (Extended Abstract)
, 1999
Extracting reduced logic programs from artificial neural networks
 Applied Intelligence
, 2010
Abstract

Cited by 7 (2 self)
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simple is understood in some clearly defined and meaningful way.
Neural networks and structured knowledge: Rule extraction and applications
 Applied Intelligence
, 2000
Abstract

Cited by 6 (1 self)
As the second part of a special issue on "Neural Networks and Structured Knowledge," the contributions collected here concentrate on the extraction of knowledge, particularly in the form of rules, from neural networks, and on applications relying on the representation and processing of structured knowledge by neural networks. The transformation of the low-level internal representation in a neural network into higher-level knowledge or information that can be interpreted more easily by humans and integrated with symbol-oriented mechanisms is the subject of the first group of papers. The second group of papers uses specific applications as a starting point, and describes approaches based on neural networks for the knowledge representation required to solve crucial tasks in the respective application.
Connectionist Modal Logic: Representing Modalities in Neural Networks
Abstract

Cited by 4 (2 self)
Modal logics are amongst the most successful applied logical systems. Neural networks have proved to be effective learning systems. In this paper, we propose to combine the strengths of modal logics and neural networks by introducing Connectionist Modal Logics (CML). CML belongs to the domain of neural-symbolic integration, which concerns the application of problem-specific symbolic knowledge within the neurocomputing paradigm. In CML, one may represent, reason about, or learn modal logics using a neural network. This is achieved by a Modalities Algorithm that translates modal logic programs into neural network ensembles. We show that the translation is sound, i.e. the network ensemble computes a fixed-point meaning of the original modal program, acting as a distributed computational model for modal logic. We also show that the fixed-point computation terminates whenever the modal program is well-behaved. Finally, we validate CML as a computational model for integrated knowledge representation and learning by applying it to a well-known testbed for distributed knowledge representation. This paves the way for a range of applications on integrated knowledge representation and learning, from practical reasoning to evolving multi-agent systems.
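Several of the abstracts above appeal to a fixed-point semantics for logic programs. For a propositional definite program, that semantics is the least fixed point of the immediate-consequence operator T_P: repeatedly add the head of every clause whose body is already derived, until nothing changes. A minimal sketch with an invented example program (not one from any of the papers; the papers' networks compute this iteration in a distributed, connectionist way):

```python
# Each clause is (head, body): head holds once every body atom holds.
program = [
    ("p", ["q", "r"]),   # p <- q, r
    ("q", []),           # q <-        (a fact)
    ("r", ["q"]),        # r <- q
    ("s", ["t"]),        # s <- t      (never fires: t is not derivable)
]

def tp_fixpoint(clauses):
    """Least fixed point of T_P: keep adding each head whose body atoms
    are all already derived, until the derived set stabilises."""
    derived = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

print(sorted(tp_fixpoint(program)))  # → ['p', 'q', 'r']
```

Termination is guaranteed here because the derived set only grows and the set of atoms is finite, which mirrors the termination conditions the CML abstract states for well-behaved modal programs.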
First-order deduction in neural networks
 In Proceedings of the 1st International Conference on Language and Automata Theory and Applications (LATA'07)
, 2007
Abstract

Cited by 3 (2 self)
We show how the algorithm of SLD-resolution for first-order logic programs can be performed in connectionist neural networks. The most significant properties of the resulting neural networks are their finiteness and ability to learn. Key words: logic programs, artificial neural networks, SLD-resolution, connectionism, neuro-symbolic integration