Results 1–10 of 11
Dimensions of neural-symbolic integration – a structural survey
 We Will Show Them: Essays in Honour of Dov Gabbay
Cited by 25 (8 self)
Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to …
A fully connectionist model generator for covered first-order logic programs
 Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, Menlo Park CA, AAAI Press (2007) 666–671
, 2007
Cited by 12 (5 self)
We present a fully connectionist system for the learning of first-order logic programs and the generation of corresponding models: given a program and a set of training examples, we embed the associated semantic operator into a feedforward network and train the network using the examples. This results in the learning of first-order knowledge while damaged or noisy data is handled gracefully.
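The train-on-examples idea can be illustrated in a much-simplified propositional analogue (the paper embeds the operator of a first-order program; the program, network size, and training details below are illustrative assumptions, not the paper's construction): sample input/output pairs of a semantic operator T_P and fit a small feedforward network to them.

```python
import numpy as np

# T_P for the assumed program {a <- .  b <- a.  c <- b.} over atoms a, b, c:
# an interpretation is a 0/1 vector; T_P makes true every head whose body holds.
def t_p(i):
    a, b, c = i
    return np.array([1.0, a, b])

rng = np.random.default_rng(0)
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
Y = np.array([t_p(x) for x in X])            # training examples (I, T_P(I))

# One hidden layer of sigmoid units, trained by plain batch gradient
# descent on squared error.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 3)); b2 = np.zeros(3)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    H = sig(X @ W1 + b1)
    O = sig(H @ W2 + b2)
    dO = (O - Y) * O * (1 - O) / len(X)      # averaged squared-error gradient
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= H.T @ dO; b2 -= dO.sum(0)
    W1 -= X.T @ dH; b1 -= dH.sum(0)

# Thresholding the trained network recovers T_P on all 8 interpretations.
pred = (sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
```

The graceful handling of noisy data mentioned in the abstract comes for free with such an approximator: nearby corrupted inputs tend to map to the same thresholded output.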
The core method: Connectionist model generation
 In Proceedings of the 16th International Conference on Artificial Neural Networks (ICANN)
, 2006
Cited by 9 (4 self)
Abstract. Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes, it is not at all obvious what neural-symbolic systems should look like such that they are truly connectionist and allow for a declarative reading at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feedforward core. After an introduction to the core method, this paper will focus on possible connectionist representations of structured objects and their use in structure-sensitive reasoning tasks.
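The recurrent-network-with-feedforward-core idea can be sketched for a tiny definite program. The feedforward core below computes one application of the immediate consequence operator T_P with hand-set weights (one hidden unit per clause, one output unit per atom); feeding the output back as input and iterating reaches the least model. Both the program and the translation scheme are simplified illustrations, not the paper's exact construction.

```python
import numpy as np

atoms = ["p", "q", "r"]
# Program (assumed for illustration): p <- .  q <- p.  r <- p, q.
clauses = [("p", []), ("q", ["p"]), ("r", ["p", "q"])]

idx = {a: i for i, a in enumerate(atoms)}
n, m = len(atoms), len(clauses)
W1 = np.zeros((m, n)); t1 = np.zeros(m)       # input layer -> clause units
W2 = np.zeros((n, m)); t2 = np.full(n, 0.5)   # clause units -> atom units
for j, (head, body) in enumerate(clauses):
    for b in body:
        W1[j, idx[b]] = 1.0
    t1[j] = len(body) - 0.5                   # fire iff the whole body is true
    W2[idx[head], j] = 1.0                    # OR over clauses with this head

step = lambda z, t: (z > t).astype(float)     # binary threshold units
core = lambda i: step(W2 @ step(W1 @ i, t1), t2)   # one application of T_P

i = np.zeros(n)                               # start from the empty interpretation
while not np.array_equal(core(i), i):         # recurrent loop: feed output back
    i = core(i)
print(sorted(a for a in atoms if i[idx[a]]))  # -> ['p', 'q', 'r']
```

For a definite program this iteration is monotone, so the loop always terminates in the least fixed point of T_P, i.e. the least model.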
Extracting reduced logic programs from artificial neural networks
 Applied Intelligence
, 2010
Cited by 7 (2 self)
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simple is understood in some clearly defined and meaningful way.
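A pedagogical sketch of the extract-then-reduce idea (this is not the paper's algorithm): treat a trained network as a black-box Boolean function over its input atoms, read one clause off each input pattern that activates the output, and reduce the program by keeping only subset-minimal bodies. The example restricts itself to positive bodies, i.e. a monotone target function; the stand-in `network` below is an assumption for illustration.

```python
from itertools import product

atoms = ["a", "b", "c"]

def network(i):
    # Stand-in for a trained net computing one output atom "h";
    # in reality this would be the thresholded network output.
    return i["a"] or (i["b"] and i["c"])

# Extraction: one clause "h <- body" per input pattern that fires the output,
# with the body listing exactly the atoms true in that pattern.
clauses = []
for bits in product([False, True], repeat=len(atoms)):
    i = dict(zip(atoms, bits))
    if network(i):
        clauses.append(frozenset(a for a in atoms if i[a]))

# Reduction: a clause is redundant if another clause's body is a proper
# subset of its body, so keep only the subset-minimal bodies.
reduced = [c for c in clauses if not any(d < c for d in clauses)]
for body in sorted(reduced, key=sorted):
    print("h <-", ", ".join(sorted(body)))
# prints:
#   h <- a
#   h <- b, c
```

Here "simple" is made concrete as subset-minimality of clause bodies; the exhaustive enumeration is exponential in the number of atoms and only serves to make the extraction step explicit.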
Unification Neural Networks: Unification by Error-Correction Learning
Cited by 4 (4 self)
We show that the conventional first-order algorithm of unification can be simulated by finite artificial neural networks with one layer of neurons. In these unification neural networks, the unification algorithm is performed by error-correction learning. Each time-step of adaptation of the network corresponds to a single iteration of the unification algorithm. We present this result together with a library of learning functions and examples fully formalised in the MATLAB Neural Network Toolbox.
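The symbolic algorithm that these networks simulate is Robinson's first-order unification, sketched plainly below (the network encoding itself is specific to the paper and not reproduced here). Terms are represented by an assumed convention: capitalised strings are variables, lower-case strings are constants, and compound terms are `(functor, [args])` pairs.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:      # chase variable bindings in s
        t = s[t]
    return t

def occurs(v, t, s):                 # occurs check: does variable v occur in t?
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1])

def unify(t1, t2, s=None):
    """Return a most general unifier as a dict, or None on failure."""
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1) and not occurs(t1, t2, s):
        s[t1] = t2
        return s
    if is_var(t2) and not occurs(t2, t1, s):
        s[t2] = t1
        return s
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1[1]) == len(t2[1])):
        for a, b in zip(t1[1], t2[1]):
            s = unify(a, b, s)       # one iteration per argument pair
            if s is None:
                return None
        return s
    return None                      # functor clash or occurs-check failure

# f(X, g(Y)) unified with f(a, g(X)):
print(unify(("f", ["X", ("g", ["Y"])]), ("f", ["a", ("g", ["X"])])))
# -> {'X': 'a', 'Y': 'a'}
```

Each recursive call here corresponds to one iteration of the algorithm, which is the granularity the abstract maps onto a single adaptation time-step of the network.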
Ontology Learning as a Use-Case for Neural-Symbolic Integration
, 2005
Cited by 3 (3 self)
We argue that the field of neural-symbolic integration is in need of identifying application scenarios for guiding further research. We furthermore argue that ontology learning, as occurring in the context of semantic technologies, provides such an application scenario with potential for success and high impact on neural-symbolic integration.
Computation of normal logic programs by fibring neural networks
 In Proceedings of the Seventh International Workshop on First-Order Theorem Proving (FTP’05)
, 2005
Cited by 2 (0 self)
Abstract. In this paper, we develop a theory of the integration of fibring neural networks (a generalization of conventional neural networks) into model-theoretic semantics for logic programming. We present some ideas and results about the approximate computation by fibring neural networks of the semantic immediate consequence operator TP and of a generalization of TP relative to a many-valued logic analogous to Kleene’s strong logic. We establish a minimal-fixed-point semantics for normal logic programs somewhat analogous to the least-fixed-point semantics for definite logic programs. We argue that the class of logic programs for which the approximation by fibring neural networks may be employed to compute minimal fixed points of these operators is the class of normal programs. Our theorems on the approximation of both operators for normal programs extend recent results on the approximation of these operators for definite programs by conventional neural networks. Key words: logic programs, fibring neural networks, immediate consequence operators, least-fixed-point semantics, Kleene’s strong logic.
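The many-valued generalisation of TP relative to Kleene's strong three-valued logic can be illustrated for a tiny normal program by a standard Fitting-style three-valued immediate consequence operator (chosen here for illustration; it is not the paper's fibring construction, and the program is an assumption). Interpretations map atoms to `True`, `False`, or `None` (unknown), and the operator is iterated from the all-unknown interpretation to a minimal fixed point.

```python
# Program (assumed): p <- not q.  q <- s, t.  s <- .  t has no clauses.
# Each head maps to a list of bodies; a body is a list of (atom, positive?).
program = {
    "p": [[("q", False)]],
    "q": [[("s", True), ("t", True)]],
    "s": [[]],          # a fact: one clause with empty body
    "t": [],            # no clauses at all
}

def lit_val(i, atom, pos):
    # Kleene strong negation: not(unknown) stays unknown.
    v = i[atom]
    return v if pos or v is None else (not v)

def phi(i):
    out = {}
    for a, bodies in program.items():
        vals = [[lit_val(i, b, pos) for b, pos in body] for body in bodies]
        if any(all(v is True for v in body) for body in vals):
            out[a] = True           # some body is wholly true
        elif all(any(v is False for v in body) for body in vals):
            out[a] = False          # every body (incl. none) contains a falsity
        else:
            out[a] = None           # otherwise: unknown
    return out

i = {a: None for a in program}      # all-unknown interpretation
while phi(i) != i:                  # iterate to the minimal fixed point
    i = phi(i)
print(i)                            # -> {'p': True, 'q': False, 's': True, 't': False}
```

The iteration is monotone in the knowledge ordering (unknown below true and false), which is what guarantees the minimal fixed point the abstract's semantics is built on.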
Unification by Error-Correction
Cited by 2 (2 self)
The paper formalises the famous algorithm of first-order unification by Robinson by means of error-correction learning in neural networks. The significant achievement of this formalisation is that, for the first time, the first-order unification of two arbitrary first-order atoms is performed by a finite (two-neuron) network.
Extracting Reduced Logic Programs from Artificial Neural Networks
Artificial neural networks can be trained to perform excellently in many application areas. While they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simple is understood in some clearly defined and meaningful way.