Results 1–10 of 12
Logic Programs and Connectionist Networks
 Journal of Applied Logic
, 2004
Abstract

Cited by 62 (22 self)
One facet of the question of integration of Logic and Connectionist Systems, and how these can complement each other, concerns the points of contact, in terms of semantics, between neural networks and logic programs. In this paper, we show that certain semantic operators for propositional logic programs can be computed by feedforward connectionist networks, and that the same semantic operators for first-order normal logic programs can be approximated by feedforward connectionist networks. Turning the networks into recurrent ones also allows one to approximate the models associated with the semantic operators. Our methods depend on a well-known theorem of Funahashi, and necessitate the study of when Funahashi's theorem can be applied, and also of what means of approximation are appropriate and significant.
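The construction behind the propositional part of this result is concrete enough to sketch: each clause becomes a hidden threshold unit that fires exactly when its body is satisfied, and each atom's output unit fires when some clause with that head fires. Below is a minimal, illustrative Python sketch of such a T_P computation; the function and variable names are ours, and the toy program is not from the paper.

```python
def make_tp_network(program):
    """program: list of (head, body) clauses, with body a list of atoms.
    Returns a function mapping an interpretation (a set of atoms) to T_P(I)."""
    def tp(interpretation):
        out = set()
        for head, body in program:
            # Hidden "clause" unit: binary threshold len(body) over its body inputs.
            activation = sum(1 for atom in body if atom in interpretation)
            if activation >= len(body):   # clause body satisfied (empty body: always)
                out.add(head)             # output unit for the head atom fires
        return out
    return tp

# Toy program: p. / q :- p. / r :- p, q.
program = [("p", []), ("q", ["p"]), ("r", ["p", "q"])]
tp = make_tp_network(program)
print(sorted(tp(set())))    # ['p']
print(sorted(tp({"p"})))    # ['p', 'q']
```

One application of tp corresponds to one forward pass through the two weight layers of the 3-layer network.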
Dimensions of neural-symbolic integration – a structural survey
 We Will Show Them: Essays in Honour of Dov Gabbay
Abstract

Cited by 25 (8 self)
Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to …
Connectionist Model Generation: A First-Order Approach
, 2007
Abstract

Cited by 20 (5 self)
Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes, as expressed e.g. by means of first-order predicate logic, it is not at all obvious what neural-symbolic systems would look like such that they are truly connectionist, are able to learn, and allow for a declarative reading and logical reasoning at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feedforward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to reason over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.
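The recurrent wrapping mentioned above, feeding the core's output back as its next input until nothing changes, can be sketched for the propositional case as a plain fixed-point iteration of T_P. This is an illustrative simplification, not the paper's first-order architecture; helper names are ours.

```python
def make_tp(program):
    """Immediate-consequence operator for a propositional definite program."""
    def tp(i):
        return {h for h, body in program if all(a in i for a in body)}
    return tp

def least_model(program, max_iters=100):
    """Iterate T_P from the empty interpretation until a fixed point is reached.
    The fixed point T_P(I) = I is the least model of a definite program."""
    tp = make_tp(program)
    i = set()
    for _ in range(max_iters):
        nxt = tp(i)
        if nxt == i:          # recurrent loop has stabilised
            return i
        i = nxt
    raise RuntimeError("no fixed point within iteration bound")

program = [("p", []), ("q", ["p"]), ("r", ["p", "q"])]
print(sorted(least_model(program)))   # ['p', 'q', 'r']
```

Each loop iteration corresponds to one pass through the feedforward core, with the recurrent connections copying outputs back to inputs.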
Reasoning about Time and Knowledge in Neural-Symbolic Learning Systems
, 2003
Abstract

Cited by 11 (5 self)
We show that temporal logic and combinations of temporal logics and modal logics of knowledge can be effectively represented in artificial neural networks. We present a Translation Algorithm from temporal rules to neural networks, and show that the networks compute a fixed-point semantics of the rules. We also apply the translation to the muddy children puzzle, which has been used as a testbed for distributed multi-agent systems. We provide a complete solution to the puzzle using simple neural networks capable of reasoning about time and of knowledge acquisition through inductive learning.
The core method: Connectionist model generation
 In Proceedings of the 16th International Conference on Artificial Neural Networks (ICANN)
, 2006
Abstract

Cited by 9 (4 self)
Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes, it is not at all obvious what neural-symbolic systems should look like such that they are truly connectionist and allow for a declarative reading at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feedforward core. After an introduction to the core method, this paper will focus on possible connectionist representations of structured objects and their use in structure-sensitive reasoning tasks.
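One well-known way to present structured, first-order interpretations to a feedforward core, in the spirit of the embedding approaches this line of work builds on, is to map each interpretation to a real number via a level mapping on ground atoms. A hedged sketch, assuming a small finite enumeration of ground atoms; base 4 is one choice that keeps the encoding injective (names and toy atoms are ours):

```python
# A finite cut of a ground atom base, and a level mapping over it.
atoms = ["p(0)", "p(s(0))", "p(s(s(0)))"]
level = {a: i + 1 for i, a in enumerate(atoms)}

def embed(interpretation):
    """R(I) = sum over A in I of 4^(-level(A)).
    Distinct interpretations over this base get distinct real numbers,
    so the real-valued input of a feedforward core can stand for I."""
    return sum(4.0 ** -level[a] for a in interpretation)

print(embed({"p(0)"}))               # 0.25
print(embed({"p(0)", "p(s(0))"}))    # 0.3125
```

A network operating on such embeddings approximates the semantic operator as a real function, which is where approximation theorems for feedforward networks enter.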
Extracting reduced logic programs from artificial neural networks
 Applied Intelligence
, 2010
Abstract

Cited by 7 (2 self)
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simplicity is understood in some clearly defined and meaningful way.
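For very small networks over propositional atoms, the extraction idea can be illustrated by brute force: enumerate all input interpretations, record which ones make each output atom fire, and keep only minimal bodies as a reduced rule set. The network below is a stand-in function, not the paper's architecture, and the reduction shown is only one simple notion of "as simple as possible".

```python
from itertools import product

atoms = ["a", "b", "c"]

def network(inputs):
    # Stand-in for a trained black-box network: derives "c" iff a and b hold.
    return {"c"} if {"a", "b"} <= inputs else set()

# Enumerate all 2^n interpretations and collect those deriving each head atom.
supports = {}
for bits in product([0, 1], repeat=len(atoms)):
    i = {a for a, bit in zip(atoms, bits) if bit}
    for head in network(i):
        supports.setdefault(head, []).append(i)

# Reduction step: keep only minimal supporting interpretations as clause bodies.
rules = []
for head, interps in supports.items():
    minimal = [i for i in interps if not any(j < i for j in interps)]
    for body in minimal:
        rules.append((head, sorted(body)))

print(rules)   # [('c', ['a', 'b'])]
```

Exhaustive enumeration is exponential in the number of atoms, which is exactly why the reduced-program extraction studied in the paper matters for anything beyond toy networks.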
Connectionist Representation of Multi-Valued Logic Programs
Abstract

Cited by 3 (3 self)
Hölldobler and Kalinke showed how, given a propositional logic program P, a 3-layer feedforward artificial neural network may be constructed, using only binary threshold units, which can compute the familiar immediate-consequence operator T_P associated with P. In this chapter, essentially these results are established for a class of logic programs which can handle many-valued logics, constraints, and uncertainty; these programs therefore represent a considerable extension of conventional propositional programs. The work of the chapter falls into two parts. In the first, the programs considered extend the syntax of conventional logic programs by allowing elements of quite general algebraic structures to be present in clause bodies. Such programs include many-valued logic programs and semiring-based constraint logic programs. In the second part, the programs considered are bilattice-based annotated logic programs in which body literals are annotated by elements drawn from bilattices. These programs are well suited to handling uncertainty. Appropriate semantic operators are defined for the programs considered in both parts of the chapter, and it is shown that one may construct …
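To illustrate the many-valued direction, here is a hedged sketch of an immediate-consequence operator over the truth-value set {0, 0.5, 1}, with conjunction as min and aggregation over clauses as max. This is one common algebraic choice picked for illustration; the chapter's structures (general algebras, bilattices) are considerably more general, and the names below are ours.

```python
def tp_many_valued(program, interp):
    """program: list of (head, body) clauses, body a list of atoms.
    interp: dict mapping each atom to a truth value in {0.0, 0.5, 1.0}.
    Returns the interpretation T_P(interp)."""
    out = {a: 0.0 for a in interp}
    for head, body in program:
        # Body value: min over body atoms; an empty body is true (1.0).
        val = min((interp.get(a, 0.0) for a in body), default=1.0)
        # Aggregate contributions of all clauses with this head by max.
        out[head] = max(out.get(head, 0.0), val)
    return out

program = [("p", []), ("q", ["p", "r"])]
interp = {"p": 0.0, "q": 0.0, "r": 0.5}
print(tp_many_valued(program, interp))   # {'p': 1.0, 'q': 0.0, 'r': 0.0}
```

With truth values restricted to {0, 1}, this reduces to the classical T_P, which is the sense in which the many-valued programs extend the conventional propositional ones.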
The grand challenges and myths of neural-symbolic computation
 In Recurrent Neural Networks: Models, Capacities, and Applications, number 08041 in Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 2008. Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl
Abstract

Cited by 2 (0 self)
The construction of computational cognitive models integrating the connectionist and symbolic paradigms of artificial intelligence is a standing research issue in the field. The combination of logic-based inference and connectionist learning systems may lead to the construction of semantically sound computational cognitive models in artificial intelligence, computer science, and cognitive science. Over the last decades, results regarding the computation and learning of classical reasoning within neural networks have been promising. Nonetheless, there still remains much to be done. Artificial intelligence, cognitive science, and computer science are strongly based on several non-classical reasoning formalisms, methodologies, and logics. In knowledge representation, distributed systems, hardware design, theorem proving, and systems specification and verification, classical and non-classical logics have had a great impact on theory and real-world applications. Several challenges for neural-symbolic computation are pointed out, in particular for classical and non-classical computation in connectionist systems. We also analyse myths about neural-symbolic computation and shed new light on them, considering recent research advances.