Results 1–10 of 19
Logic Programs and Connectionist Networks
Journal of Applied Logic, 2004
Cited by 62 (22 self)
One facet of the question of integration of Logic and Connectionist Systems, and how these can complement each other, concerns the points of contact, in terms of semantics, between neural networks and logic programs. In this paper, we show that certain semantic operators for propositional logic programs can be computed by feedforward connectionist networks, and that the same semantic operators for first-order normal logic programs can be approximated by feedforward connectionist networks. Turning the networks into recurrent ones also allows one to approximate the models associated with the semantic operators. Our methods depend on a well-known theorem of Funahashi, and necessitate the study of when Funahashi's theorem can be applied, and also the study of what means of approximation are appropriate and significant.
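The propositional half of this claim can be illustrated directly. Below is a minimal sketch (the program, atom names, and weight/threshold choices are illustrative assumptions, not the paper's exact construction): a two-layer threshold network with one hidden unit per clause computes the immediate consequence operator T_P of a propositional program.

```python
# Sketch: compute T_P of a small propositional program with a
# feedforward threshold network (one hidden unit per clause,
# one output unit per atom). Program and weights are assumptions.

atoms = ["a", "b", "c"]
# clauses: (head, positive body, negative body)
clauses = [("a", ["b"], ["c"]),   # a <- b, not c
           ("b", [], [])]         # b <-  (a fact)

def step(x):  # Heaviside threshold unit
    return 1 if x >= 0 else 0

def t_p(interp):
    """One application of T_P, realized as a feedforward network."""
    # hidden layer: unit for clause i fires iff its body is true in interp
    hidden = []
    for _, pos, neg in clauses:
        net = sum(interp[p] for p in pos) - sum(interp[n] for n in neg)
        hidden.append(step(net - len(pos)))  # all pos on, all neg off
    # output layer: an atom is on iff some clause with that head fired
    return {a: step(sum(h for h, (hd, _, _) in zip(hidden, clauses)
                        if hd == a) - 1)
            for a in atoms}

i0 = {a: 0 for a in atoms}
i1 = t_p(i0)     # {'a': 0, 'b': 1, 'c': 0}
i2 = t_p(i1)     # {'a': 1, 'b': 1, 'c': 0}, a fixed point of T_P
```

Feeding the output back as the next input, as in the recurrent variant the abstract mentions, iterates T_P toward the least model.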
Dimensions of neural-symbolic integration – a structural survey
 We Will Show Them: Essays in Honour of Dov Gabbay
Cited by 25 (8 self)
Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to …
Connectionist Model Generation: A First-Order Approach, 2007
Cited by 20 (5 self)
Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes, as expressed e.g. by means of first-order predicate logic, it is not at all obvious what neural-symbolic systems would look like such that they are truly connectionist, are able to learn, and allow for a declarative reading and logical reasoning at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feedforward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to reason over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.
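The "recurrent network with feedforward core" architecture can be sketched in a few lines (a propositional stand-in with an assumed toy program; the papers train the core on ground instances, which is omitted here): the core computes one T_P step, and the recurrent loop feeds its output back as the next input until the state stabilizes in a model.

```python
# Sketch of the core method's recurrent wrapper: iterate a feedforward
# core (here, a direct T_P computation on an assumed toy program) until
# the network state stabilizes, yielding a supported model.

clauses = [("a", {"b"}, set()),    # a <- b
           ("b", set(), set())]    # b <-  (a fact)

def core(interp):
    """Feedforward core: one application of T_P."""
    return {h for h, pos, neg in clauses
            if pos <= interp and not (neg & interp)}

def recurrent_run(interp=frozenset(), max_steps=100):
    for _ in range(max_steps):
        nxt = frozenset(core(interp))
        if nxt == interp:          # recurrent loop has stabilized
            return nxt
        interp = nxt
    raise RuntimeError("no fixed point within max_steps")
```

Starting from the empty interpretation, `recurrent_run()` settles into `{"a", "b"}`, the least model of the toy program.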
The integration of connectionism and first-order knowledge representation and reasoning as a challenge for artificial intelligence
In Proceedings of the Third International Conference on Information, 2006
Cited by 12 (8 self)
Intelligent systems based on first-order logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust neural networking machinery with symbolic knowledge representation and reasoning paradigms like logic programming in such a way that the strengths of either paradigm are retained. Current state-of-the-art research, however, falls far short of this ultimate goal. As one of the main obstacles to be overcome, we perceive the question of how symbolic knowledge can be encoded by means of connectionist systems: satisfactory answers to this will naturally lead the way to knowledge extraction algorithms and to integrated neural-symbolic systems.
Integrating First-Order Logic Programs and Connectionist Systems – A Constructive Approach
Proceedings of the IJCAI-05 Workshop on Neural-Symbolic Learning and Reasoning, NeSy'05, 2005
Cited by 11 (7 self)
Significant advances have recently been made concerning the integration of symbolic knowledge representation with artificial neural networks (also called connectionist systems). However, while the integration with propositional paradigms has resulted in applicable systems, the case of first-order knowledge representation has so far hardly proceeded beyond theoretical studies which prove the existence of connectionist systems for approximating first-order logic programs up to any chosen precision.
Extracting reduced logic programs from artificial neural networks
Applied Intelligence, 2010
Cited by 7 (2 self)
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simplicity is understood in some clearly defined and meaningful way.
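One simple extraction strategy from this line of work, pedagogical extraction, can be sketched as follows (the network, its weights, and the reduction heuristic are illustrative assumptions, not the paper's algorithm): treat a small trained network as a black-box Boolean function, enumerate its input space, read off one rule per true point, and then drop literals whose removal keeps the rule sound, yielding a reduced program.

```python
# Sketch of pedagogical rule extraction with a greedy reduction step.
# The "trained network" below is a stand-in threshold unit (assumed).
from itertools import product

def net(a, b, c):
    # stand-in for a trained network with Boolean inputs and output;
    # fires iff (a or b) and not c
    return int(2*a + 2*b - 3*c >= 2)

inputs = ["a", "b", "c"]
# all input vectors on which the network fires
true_points = [v for v in product([0, 1], repeat=3) if net(*v)]

# one rule per true point: out <- literals fixing that point
rules = [frozenset(zip(inputs, v)) for v in true_points]

def covers_only_true(body):
    """A body is sound if every input it covers makes the net fire."""
    return all(net(*v) for v in product([0, 1], repeat=3)
               if all(v[inputs.index(x)] == bit for x, bit in body))

reduced = set()
for body in rules:
    for lit in list(body):            # greedily drop redundant literals
        if covers_only_true(body - {lit}):
            body = body - {lit}
    reduced.add(body)

for body in sorted(reduced, key=sorted):
    print("out <-", ", ".join(x if b else f"not {x}"
                              for x, b in sorted(body)))
```

On this toy network the eight-point enumeration collapses the three raw rules into two: `out <- a, not c` and `out <- b, not c`. Enumeration is exponential in the number of inputs, which is why the literature studies smarter, decompositional extraction methods.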
Computing first-order logic programs by fibring artificial neural networks
Proceedings of the Eighteenth International Florida Artificial Intelligence Research Symposium Conference, 2005
Cited by 6 (3 self)
The integration of symbolic and neural-network-based artificial intelligence paradigms constitutes a very challenging area of research. The overall aim is to merge these two very different major approaches to intelligent systems engineering while retaining their respective strengths. For symbolic paradigms that use the syntax of some first-order language, this appears to be particularly difficult. In this paper, we build on an idea proposed by Garcez and Gabbay (2004) and show how first-order logic programs can be represented by fibred neural networks. The idea is to use a neural network to iterate a global counter n. For each clause Ci in the logic program, this counter is combined (fibred) with another neural network, which determines whether Ci outputs an atom of level n for a given interpretation I. As a result, the fibred network approximates the single-step operator TP of the logic program, thus capturing the semantics of the program.
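The control flow described in the abstract can be simulated loosely (all details here are assumptions: the level mapping, the ground slice, and the plain-Python stand-ins for the counter and per-clause networks; real fibred networks let one network set the weights of another): a counter steps through atom levels, and a per-clause check emits head atoms of that level, jointly approximating T_P on a finite slice of the ground program.

```python
# Loose simulation of the fibred-network control flow (assumed details):
# a counter iterates levels n, and per-clause checks emit head atoms
# of level n, together computing T_P on a finite ground slice.

# ground atoms with an injective level mapping (assumed)
level = {"p0": 1, "p1": 2, "p2": 3}
# ground clauses (head, pos_body, neg_body), e.g. from p(s(X)) <- p(X)
clauses = [("p1", {"p0"}, set()), ("p2", {"p1"}, set())]

def tp_by_levels(interp, max_level):
    out = set()
    for n in range(1, max_level + 1):      # counter network iterates n
        for head, pos, neg in clauses:     # per-clause fibred check
            if (level[head] == n and pos <= interp
                    and not (neg & interp)):
                out.add(head)              # clause outputs a level-n atom
    return out
```

For instance, `tp_by_levels({"p0"}, 3)` yields `{"p1"}`, and `tp_by_levels({"p0", "p1"}, 3)` yields `{"p1", "p2"}`, matching one T_P step on the slice.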
Continuity of Semantic Operators in Logic Programming and Their Approximation by Artificial Neural Networks
Proceedings of the 26th German Conference on Artificial Intelligence (KI-2003), 2001
Cited by 4 (3 self)
One approach to integrating first-order logic programming and neural network systems employs the approximation of semantic operators by feedforward networks. For this purpose, it is necessary to view these semantic operators as continuous functions on the reals. This can be accomplished by endowing the space of all interpretations of a logic program with topologies obtained from suitable embeddings. We present such topologies, which arise naturally out of the theory of logic programming, discuss continuity issues of several well-known semantic operators, and derive some results concerning the approximation of these operators by feedforward neural networks.
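One common embedding from this line of work can be sketched as follows (the base b and the level mapping are assumptions following the standard construction of Hölldobler, Kalinke, and Störr): map each interpretation I to a real number via a base-b expansion over a bijective level mapping on ground atoms.

```latex
% Embedding interpretations into the reals (assumed standard choice):
% \|\cdot\| is a bijective level mapping on ground atoms, b \geq 3.
\iota(I) \;=\; \sum_{A \in I} b^{-\|A\|}
% T_P then induces a real-valued function on the embedded space:
f_P \;=\; \iota \circ T_P \circ \iota^{-1}
% Whenever f_P is continuous (e.g. for suitably restricted P),
% Funahashi's theorem yields feedforward networks approximating
% f_P uniformly on the compact set \iota(I_P).
```

The choice b ≥ 3 keeps the embedding injective with well-separated images, so continuity of f_P becomes the crucial property studied in this paper.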
Ontology Learning as a Use-Case for Neural-Symbolic Integration, 2005
Cited by 3 (3 self)
We argue that the field of neural-symbolic integration is in need of identifying application scenarios to guide further research. We furthermore argue that ontology learning, as occurring in the context of semantic technologies, provides such an application scenario with potential for success and high impact on neural-symbolic integration.
Corollaries on the fixpoint completion: studying the stable semantics by means of the Clark completion, 2004
Cited by 3 (3 self)
The fixpoint completion fix(P) of a normal logic program P is a program transformation such that the stable models of P are exactly the models of the Clark completion of fix(P). This is well known and was studied by Dung and Kanchanasut [15]. The correspondence, however, goes much further: the Gelfond-Lifschitz operator of P coincides with the immediate consequence operator of fix(P), as shown by Wendt [51], and even carries over to standard operators used for characterizing the well-founded and the Kripke-Kleene semantics. We apply this knowledge to the study of the stable semantics, and this allows us to almost effortlessly derive new results concerning fixed-point and metric-based semantics, and neural-symbolic integration.
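The central correspondence can be checked mechanically on a toy program (a sketch under assumptions: the two-clause program and its hand-unfolded fixpoint completion are illustrative, not taken from the paper): the Gelfond-Lifschitz operator of P agrees with the immediate consequence operator of fix(P) on every interpretation.

```python
# Sketch: verify GL_P = T_fix(P) on a toy program by brute force.
from itertools import chain, combinations

atoms = {"a", "b", "c"}
# P:  a <- b.   b <- not c.     clauses: (head, pos_body, neg_body)
P = [("a", {"b"}, set()), ("b", set(), {"c"})]
# fix(P): unfold positive body atoms until bodies are purely negative
FIX = [("a", set(), {"c"}), ("b", set(), {"c"})]

def t(program, interp):
    """Immediate consequence operator T_program."""
    return {h for h, pos, neg in program
            if pos <= interp and not (neg & interp)}

def gl(program, interp):
    """Gelfond-Lifschitz operator: least model of the reduct P^I."""
    reduct = [(h, pos) for h, pos, neg in program if not (neg & interp)]
    model = set()
    while True:
        nxt = {h for h, pos in reduct if pos <= model}
        if nxt == model:
            return model
        model = nxt

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r)
                               for r in range(len(s) + 1))

# GL_P coincides with T_fix(P) on all eight interpretations
assert all(gl(P, set(i)) == t(FIX, set(i)) for i in powerset(atoms))
```

Here the fixed points of both operators single out {a, b} as the unique stable model of P, which is indeed a model of the Clark completion of fix(P).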