Results 1–10 of 61
Logic Programs and Connectionist Networks
 Journal of Applied Logic
, 2004
Cited by 62 (22 self)
Abstract
One facet of the question of integration of Logic and Connectionist Systems, and how these can complement each other, concerns the points of contact, in terms of semantics, between neural networks and logic programs. In this paper, we show that certain semantic operators for propositional logic programs can be computed by feedforward connectionist networks, and that the same semantic operators for first-order normal logic programs can be approximated by feedforward connectionist networks. Turning the networks into recurrent ones also allows one to approximate the models associated with the semantic operators. Our methods depend on a well-known theorem of Funahashi, and necessitate the study of when Funahashi's theorem can be applied, and also the study of what means of approximation are appropriate and significant.
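The semantic operator in question is the immediate consequence operator T_P. The following is an illustrative sketch (our own rendering, not the paper's network construction) of T_P for a propositional program and its iteration to a fixed point, which is the computation the paper shows a feedforward network can mirror. The program used is a hypothetical three-clause example.

```python
# Illustrative sketch: the immediate consequence operator T_P for a
# propositional logic program, and its iteration to a fixed point.
# (Our own example; the paper studies how such iteration can be
# realized by feedforward and recurrent networks.)

def tp(program, interpretation):
    """One application of T_P: derive every head whose body is satisfied."""
    return {head
            for head, pos, neg in program
            if pos <= interpretation and not (neg & interpretation)}

def iterate_tp(program, start=frozenset()):
    """Iterate T_P from `start` until a fixed point is reached.

    Note: for arbitrary normal programs the iteration need not
    converge; this hypothetical program is chosen so that it does.
    """
    current = set(start)
    while True:
        nxt = tp(program, current)
        if nxt == current:
            return current
        current = nxt

# Hypothetical program:  p.    q :- p.    r :- q, not s.
program = [("p", set(), set()),
           ("q", {"p"}, set()),
           ("r", {"q"}, {"s"})]

print(sorted(iterate_tp(program)))  # ['p', 'q', 'r']
```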
Generalized Metrics and Uniquely Determined Logic Programs
 Theoretical Computer Science
Cited by 32 (19 self)
Abstract
The introduction of negation into logic programming brings the benefit of enhanced syntax and expressibility, but creates some semantical problems. Specifically, certain operators which are monotonic in the absence of negation become non-monotonic when it is introduced, with the result that standard approaches to denotational semantics then become inapplicable. In this paper, we show how generalized metric spaces can be used to obtain fixed-point semantics for several classes of programs relative to the supported model semantics, and investigate relationships between the underlying spaces we employ. Our methods allow the analysis of classes of programs which include the acyclic, locally hierarchical, and acceptable programs, amongst others, and draw on fixed-point theorems which apply to generalized ultrametric spaces and to partial metric spaces.
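A minimal illustration (our example, not from the paper) of why negation breaks monotonicity: for the one-clause program P = { p :- not q }, enlarging the interpretation can shrink the output of the immediate consequence operator T_P, so the usual order-theoretic fixed-point arguments fail and metric-based ones become attractive.

```python
# Demonstrates non-monotonicity of T_P in the presence of negation.
# (Illustrative example; clause encoding is (head, positive body,
# negative body).)

def tp(program, interpretation):
    """One application of T_P: derive every head whose body is satisfied."""
    return {head for head, pos, neg in program
            if pos <= interpretation and not (neg & interpretation)}

program = [("p", set(), {"q"})]   # the single clause  p :- not q

small = set()      # the empty interpretation
large = {"q"}      # a strictly larger interpretation

print(tp(program, small))  # {'p'}
print(tp(program, large))  # set(): larger input, smaller output
```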
Dimensions of neural-symbolic integration – a structural survey
 We Will Show Them: Essays in Honour of Dov Gabbay
Cited by 25 (8 self)
Abstract
Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to ...
Dislocated Topologies
, 2000
Cited by 24 (7 self)
Abstract
We study a generalized notion of topology which evolved out of applications in the area of logic programming semantics. The generalization is obtained by relaxing the requirement that a neighbourhood of a point includes the point itself, and by allowing neighbourhoods of points to be empty. The corresponding generalized notion of metric is obtained by allowing points to have nonzero distance to themselves. We further show that it is meaningful to discuss neighbourhoods, convergence, and continuity in these spaces. A generalized version of the Banach contraction mapping theorem can also be established. We show finally how the generalized metrics studied here can be obtained from conventional metrics.
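A small numerical check of such a generalized metric. The assumption here is the standard textbook example d(x, y) = max(x, y) on the non-negative reals: symmetry and the triangle inequality hold, yet d(x, x) = x can be non-zero, which an ordinary metric forbids.

```python
# Checks the dislocated-metric axioms for d(x, y) = max(x, y) on a
# sample of non-negative reals. (Illustrative example, not taken from
# the paper.)

import itertools

def d(x, y):
    return max(x, y)

points = [0.0, 0.5, 1.0, 2.5]

for x, y, z in itertools.product(points, repeat=3):
    assert d(x, y) == d(y, x)            # symmetry
    assert d(x, z) <= d(x, y) + d(y, z)  # triangle inequality

print(d(2.5, 2.5))  # 2.5: a point at non-zero distance from itself
```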
A uniform approach to logic programming semantics
 Theory and Practice of Logic Programming
, 2005
Fibring Neural Networks
, 2004
Cited by 20 (7 self)
Abstract
Neural-symbolic systems are hybrid systems that integrate symbolic logic and neural networks. The goal of neural-symbolic integration is to benefit from the combination of features of the symbolic and connectionist paradigms of artificial intelligence. This paper introduces a new neural network architecture based on the idea of fibring logical systems. Fibring allows one to combine different logical systems in a principled way. Fibred neural networks may be composed not only of interconnected neurons but also of other networks, forming a recursive architecture. A fibring function then defines how this recursive architecture must behave by defining how the networks in the ensemble relate to each other, typically by allowing the activation of neurons in one network (A) to influence the change of weights in another network (B). Intuitively, this can be seen as training network B at the same time that one runs network A. We show that, in addition to being universal approximators like standard feedforward networks, fibred neural networks can approximate any polynomial function to any desired degree of accuracy, thus being more expressive than standard feedforward networks.
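A toy sketch of the fibring idea (our own minimal rendering, not the paper's architecture): the activation of a neuron in network A passes through a fibring function that rescales the weights of network B before B computes its output. Multiplicative modulation of this kind is what lets fibred networks represent products, and hence polynomials, exactly.

```python
# Minimal fibred-network sketch: network A's activation modulates
# network B's weights via a multiplicative fibring function.
# (Hypothetical single-neuron networks for illustration.)

def neuron(weights, inputs):
    """A simple linear neuron: weighted sum of its inputs."""
    return sum(w * x for w, x in zip(weights, inputs))

def fibred_forward(wa, wb, xa, xb):
    """Run A, let its activation modulate B's weights, then run B."""
    a = neuron(wa, xa)            # activation of network A
    wb_mod = [a * w for w in wb]  # fibring function: rescale B's weights
    return neuron(wb_mod, xb)     # B runs with the modulated weights

# With this fibring, the composite computes (wa . xa) * (wb . xb),
# a product that a single linear feedforward pass cannot represent.
print(fibred_forward([1.0], [1.0], [3.0], [4.0]))  # 12.0
```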
Connectionist Model Generation: A First-Order Approach
, 2007
Cited by 20 (5 self)
Abstract
Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes as expressed, e.g., by means of first-order predicate logic, it is not at all obvious what neural-symbolic systems would look like such that they are truly connectionist, are able to learn, and allow for a declarative reading and logical reasoning at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feedforward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to do reasoning over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.
Logic Programs, Iterated Function Systems, and Recurrent Radial Basis Function Networks
 Journal of Applied Logic
, 2004
Cited by 19 (14 self)
Abstract
Graphs of the single-step operator for first-order logic programs, displayed in the real plane, exhibit self-similar structures known from topological dynamics, i.e., they appear to be fractals, or more precisely, attractors of iterated function systems. We show that this observation can be made mathematically precise. In particular, we give conditions which ensure that those graphs coincide with attractors of suitably chosen iterated function systems, and conditions which allow the approximation of such graphs by iterated function systems or by fractal interpolation. Since iterated function systems can easily be encoded using recurrent radial basis function networks, we eventually obtain connectionist systems which approximate logic programs in the presence of function symbols.
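A generic sketch of an iterated function system (illustrative only; the paper's IFSs are chosen to match the graphs of single-step operators): random iteration, the "chaos game", drives any starting point toward the attractor. Here the two contractions generate the middle-thirds Cantor set on [0, 1].

```python
# Chaos-game iteration of a two-map IFS whose attractor is the
# middle-thirds Cantor set. (Illustrative example, not one of the
# paper's operator-derived systems.)

import random

def cantor_ifs_orbit(steps, seed=0):
    """Iterate randomly chosen Cantor-set contractions from a random start."""
    rng = random.Random(seed)
    maps = [lambda x: x / 3.0,              # left contraction
            lambda x: x / 3.0 + 2.0 / 3.0]  # right contraction
    x = rng.random()
    for _ in range(steps):
        x = rng.choice(maps)(x)
    return x

# After many steps the orbit lies numerically on the attractor, which
# stays inside the unit interval because both maps send [0, 1] into it.
point = cantor_ifs_orbit(60)
print(0.0 <= point <= 1.0)  # True
```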
On the Integration of Connectionist and LogicBased Systems
, 2004
Cited by 17 (8 self)
Abstract
We discuss the computation by neural networks of semantic operators TP determined by propositional logic programs P. We revisit and clarify the foundations of the relevant notions employed in approximating both TP and its fixed points when P is a first-order program.
A fully connectionist model generator for covered first-order logic programs
 Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India. AAAI Press, Menlo Park, CA (2007), 666–671
, 2007
Cited by 12 (5 self)
Abstract
We present a fully connectionist system for the learning of first-order logic programs and the generation of corresponding models: given a program and a set of training examples, we embed the associated semantic operator into a feedforward network and train the network using the examples. This results in the learning of first-order knowledge while damaged or noisy data is handled gracefully.