Results 1 - 10 of 27
A foundational architecture for artificial general intelligence
- Advances in Artificial General Intelligence (IOS Press), 2007
"... Abstract. Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive “theory of everything. ” With modules or processes for perception, working memory, episodic memories, “consciousness, ” procedural memory, ..."
Abstract
-
Cited by 16 (2 self)
- Add to MetaCart
(Show Context)
Abstract. Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive “theory of everything.” With modules or processes for perception, working memory, episodic memories, “consciousness,” procedural memory, action selection, perceptual learning, episodic learning, deliberation, volition, and non-routine problem solving, the LIDA model is ideally suited to provide a working ontology that would allow for the discussion, design, and comparison of AGI systems. The LIDA architecture is based on the LIDA cognitive cycle, a sort of “cognitive atom.” The more elementary cognitive modules and processes play a role in each cognitive cycle. Higher-level processes are performed over multiple cycles. In addition to giving a quick overview of the LIDA conceptual model and its underlying computational technology, we argue for the LIDA architecture’s role as a foundational architecture for an AGI. Finally, lessons for AGI researchers drawn from the model and its architecture are discussed.
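To make the cycle-based organization concrete, here is a minimal, purely illustrative Python sketch of a LIDA-style cognitive cycle loop. The module names follow the abstract, but the functions, salience values, and data structures are our own assumptions, not the LIDA Framework's API.

```python
# A minimal, illustrative sketch (not the LIDA Framework API) of the
# repeating cognitive cycle described above: perceive, update the
# workspace, "broadcast" the most salient content, then select an action.
# All names and salience values here are hypothetical.

SALIENCE = {"red light": 0.9, "tone": 0.4}           # assumed toy saliences


def cognitive_cycle(stimulus, workspace):
    # Perception: turn the raw stimulus into a percept with a salience.
    percept = (stimulus, SALIENCE.get(stimulus, 0.1))

    # Workspace / working memory: keep recent percepts around.
    workspace.append(percept)
    if len(workspace) > 5:                            # crude decay of old content
        workspace.pop(0)

    # "Consciousness": the most salient workspace content wins the
    # competition and is broadcast to the rest of the system.
    broadcast = max(workspace, key=lambda p: p[1])

    # Action selection: procedural memory would normally propose actions
    # conditioned on the broadcast; here we just pick a canned response.
    action = "attend" if broadcast[1] > 0.5 else "explore"
    return broadcast, action


if __name__ == "__main__":
    ws = []
    for step, stim in enumerate(["tone", "red light", "tone"]):
        b, a = cognitive_cycle(stim, ws)
        print(f"cycle {step}: broadcast={b[0]} (salience {b[1]}), action={a}")
```

Higher-level processes such as deliberation would, on this picture, simply be built from many such cycles run in sequence.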
LIDA: A Computational Model of Global Workspace Theory and Developmental Learning
- BICS 2006: Brain Inspired Cognitive Systems, 2006
"... In this paper, we present LIDA, a working model of, and theoretical foundation for, machine consciousness. LIDA’s architecture and mechanisms were inspired by a variety of computational paradigms and LIDA implements the Global Workspace Theory of consciousness. The LIDA architecture’s cognitive modu ..."
Abstract
-
Cited by 11 (6 self)
- Add to MetaCart
In this paper, we present LIDA, a working model of, and theoretical foundation for, machine consciousness. LIDA’s architecture and mechanisms were inspired by a variety of computational paradigms and LIDA implements the Global Workspace Theory of consciousness. The LIDA architecture’s cognitive modules include perceptual associative memory, episodic memory, functional consciousness, procedural memory and action selection. Cognitive robots and software agents controlled by the LIDA architecture will be capable of multiple learning mechanisms. With artificial feelings and emotions as primary motivators and learning facilitators, such systems will ‘live’ through a developmental period during which they will learn in multiple, human-like ways to act effectively in their environments. We also provide a comparison of the LIDA model with other models of consciousness.
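The competition-and-broadcast step at the heart of Global Workspace Theory can be sketched in a few lines. The Coalition and GlobalWorkspace classes below are hypothetical stand-ins, intended only to show the pattern of coalitions competing by activation and the winner being broadcast to every registered module.

```python
# Hedged illustration of a Global Workspace broadcast: coalitions of content
# compete by activation, and the winner is sent to every registered module
# so each can learn from it. The classes and numbers are invented for the sketch.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Coalition:
    content: str
    activation: float


@dataclass
class GlobalWorkspace:
    listeners: List[Callable[[str], None]] = field(default_factory=list)

    def register(self, listener):
        self.listeners.append(listener)

    def compete_and_broadcast(self, coalitions):
        winner = max(coalitions, key=lambda c: c.activation)
        for listener in self.listeners:           # the "conscious" broadcast
            listener(winner.content)
        return winner


if __name__ == "__main__":
    gw = GlobalWorkspace()
    gw.register(lambda msg: print("episodic memory stores:", msg))
    gw.register(lambda msg: print("procedural memory learns from:", msg))
    gw.compete_and_broadcast([
        Coalition("loud noise behind the robot", 0.8),
        Coalition("routine battery reading", 0.2),
    ])
```

Because every module receives the broadcast, each of the multiple learning mechanisms the abstract mentions can update from the same consciously selected content.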
A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents
"... Abstract: Recently there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer ..."
Abstract
-
Cited by 9 (3 self)
- Add to MetaCart
Abstract: Recently there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of Artificial General Intelligence, or AGI. Moral decision making is arguably one of the most challenging tasks for computational approaches to higher order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics or Friendly AI. In this paper we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model we will demonstrate how moral decisions can be made in many domains using the same
Cognitive robots: Perceptual associative memory and learning
- In Proceedings of the 14th annual international
"... Abstract- In this position paper we attempt to derive an architecture and mechanism for perceptual associative memory and learning for software agents and cognitive robots from what is known, or believed, about the same faculties in human and other animal cognition. Based on that of the IDA model of ..."
Abstract
-
Cited by 6 (4 self)
- Add to MetaCart
Abstract. In this position paper we attempt to derive an architecture and mechanism for perceptual associative memory and learning for software agents and cognitive robots from what is known, or believed, about the same faculties in human and other animal cognition. Based on that of the IDA model of Global Workspace Theory, a conceptual and computational model of cognition, this architecture, together with its mechanisms, offers the real possibility of autonomous software agents and cognitive robots learning their own ontologies during a developmental period. Thus the onerous chore of designing and implementing such an ontology can be avoided.
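As a rough sketch of the kind of perceptual associative memory and learning described here, the following Python snippet strengthens a feature node's base-level activation each time the feature is noticed, so recurring features gradually enter the agent's learned ontology. The learning rate and recognition threshold are arbitrary assumptions for illustration, not the IDA/LIDA mechanism itself.

```python
# Toy perceptual associative memory: repeated exposure to a feature raises
# its base-level activation toward 1.0; once it crosses a threshold the
# feature counts as a recognized node in the agent's learned ontology.

from collections import defaultdict

LEARNING_RATE = 0.2            # assumed
RECOGNITION_THRESHOLD = 0.5    # assumed


class PerceptualAssociativeMemory:
    def __init__(self):
        self.base_activation = defaultdict(float)   # feature node -> strength

    def perceive(self, features):
        """Reinforce nodes for observed features; report the recognized ones."""
        recognized = []
        for f in features:
            self.base_activation[f] += LEARNING_RATE * (1.0 - self.base_activation[f])
            if self.base_activation[f] >= RECOGNITION_THRESHOLD:
                recognized.append(f)
        return recognized


if __name__ == "__main__":
    pam = PerceptualAssociativeMemory()
    for _ in range(4):                               # repeated exposure during "development"
        print(pam.perceive(["red", "round"]))
    print(pam.perceive(["blue"]))                    # novel feature, not yet recognized
```

The point of the sketch is the developmental claim: nothing here is hand-built ontology; recognized categories emerge from the agent's own perceptual history.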
Some Knowledge Representation and Reasoning Requirements for Self-awareness
- In Proc. AAAI Spring Symposium on Metacognition in Computation
"... This paper motivates and defines a notion of explicit selfawareness, one that implies human-like scope of the selfmodel, and an explicit internal representation susceptible to general inference methods and permitting overt communication about the self. The features proposed for knowledge representat ..."
Abstract
-
Cited by 6 (6 self)
- Add to MetaCart
This paper motivates and defines a notion of explicit self-awareness, one that implies human-like scope of the self-model, and an explicit internal representation susceptible to general inference methods and permitting overt communication about the self. The features proposed for knowledge representation and reasoning supporting explicit self-awareness include natural language-like expressiveness, autoepistemic inference grounded in a computable notion of knowing/believing, certain metasyntactic devices, and an ability to abstract and summarize stories. A small preliminary example of self-awareness involving knowledge of knowledge categories is attached as an appendix.
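One of the listed requirements, autoepistemic inference grounded in a computable notion of knowing/believing, can be illustrated with a toy sketch. The knowledge base and the knows()/believes()/introspect() helpers below are hypothetical and are meant only to show an agent explicitly reasoning about, and reporting, its own knowledge and ignorance.

```python
# Illustrative only: a computable stand-in for knowing/believing, plus an
# autoepistemic introspection step in which the agent can explicitly
# represent its own ignorance. The facts and helpers are invented.

KNOWN_FACTS = {("capital", "France", "Paris")}
BELIEVED_FACTS = {("weather", "tomorrow", "rain")}


def knows(fact):
    """'The agent knows fact' reduced to membership in its knowledge store."""
    return fact in KNOWN_FACTS


def believes(fact):
    """Knowledge entails belief; other beliefs lack the status of knowledge."""
    return knows(fact) or fact in BELIEVED_FACTS


def introspect(fact):
    # Autoepistemic step: absence of knowledge is itself represented
    # and can be communicated overtly.
    if knows(fact):
        return f"I know that {fact}"
    if believes(fact):
        return f"I believe, but do not know, that {fact}"
    return f"I do not know whether {fact}"


if __name__ == "__main__":
    print(introspect(("capital", "France", "Paris")))
    print(introspect(("weather", "tomorrow", "rain")))
    print(introspect(("capital", "Mars", "Olympus")))
```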
Perceptual Memory and Learning: Recognizing, Categorizing and Relating
- Proc. Developmental Robotics AAAI Spring Symposium, 2005
"... In this position paper we attempt to derive an architecture and mechanism for perceptual memory and learning for software agents and robots from what is known, or believed, about the same faculties in human and other animal cognition. Based on that of the IDA model of Global Workspace Theory, a conc ..."
Abstract
-
Cited by 4 (2 self)
- Add to MetaCart
In this position paper we attempt to derive an architecture and mechanism for perceptual memory and learning for software agents and robots from what is known, or believed, about the same faculties in human and other animal cognition. Based on that of the IDA model of Global Workspace Theory, a conceptual and computational model of cognition, this architecture, together with its mechanisms, offers the real possibility of autonomous software agents and robots learning their own ontologies during a developmental period. Thus the onerous chore of designing and implementing such an ontology can be avoided.
The Role of Consciousness in
- Consciousness, Intentionality and Causality. In Reclaiming Cognition, 2005
"... ..."
(Show Context)
Implications of resource limitations for a conscious machine. Neurocomputing
- ICONIP, 2008
"... A machine with human like consciousness would be an extremely complex system. Prior work has demonstrated that the way in which information handling resources are organized (the resource architecture) in an extremely complex learning system is constrained within some specific bounds if the available ..."
Abstract
-
Cited by 2 (1 self)
- Add to MetaCart
(Show Context)
A machine with human-like consciousness would be an extremely complex system. Prior work has demonstrated that the way in which information-handling resources are organized (the resource architecture) in an extremely complex learning system is constrained within some specific bounds if the available resources are limited, and that there is evidence that the human brain has been constrained in this way. An architectural concept is developed for a conscious machine that is within the architectural bounds imposed by resource limitations. This architectural concept includes a resource-driven architecture, a description of how conscious phenomena would be supported by information processes within that architecture, and a description of actual implementations of the key information processes. Other approaches to designing a conscious machine are reviewed. The conclusion is reached that although they could be capable of supporting human consciousness-like phenomena, they do not take into account the architectural bounds imposed by resource limitations. Systems implemented using these approaches to learn a full range of cognitive features, including human-like consciousness, would therefore require more information-handling resources, could have difficulty learning without severe interference with prior learning, and could require add-on subsystems to support some conscious phenomena that emerge naturally as consequences of a resource-driven architecture.
Keywords: consciousness; information model; system resource architecture; system design
Testing for machine consciousness using insight learning
- This volume, 2007
"... We explore the idea that conscious thought is the ability to men-tally simulate the world in order to optimize behavior. A com-puter simulation of an autonomous agent was created in which the agent had to learn to explore its world and learn (using Bay-esian Networks) that pushing a box over a squar ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
We explore the idea that conscious thought is the ability to mentally simulate the world in order to optimize behavior. A computer simulation of an autonomous agent was created in which the agent had to learn to explore its world and learn (using Bayesian Networks) that pushing a box over a square would lead to a reward. Afterward, the agent was placed in a novel situation, and had to plan ahead via "mental" simulation to solve the new problem. Only after learning the environmental contingencies was the agent able to solve the novel problem. In the animal learning literature this type of behavior is called insight learning, and provides possibly the best indirect evidence of consciousness in the absence of language. This work has implications for testing for consciousness in machines and animals.
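A minimal sketch of the experimental setup, with the Bayesian network replaced by simple frequency counts (our simplification, not the authors' implementation): the agent first tallies which outcomes were rewarded, then "mentally" simulates candidate plans against a toy world model and picks the plan whose simulated outcome has the highest learned reward probability.

```python
# Learning-then-planning sketch of the insight-learning experiment described
# above. Frequency counts stand in for the paper's Bayesian network; the
# world model and plan space are invented for illustration.

from collections import defaultdict
from itertools import product

# --- Learning phase: tally observed (outcome, reward) contingencies.
reward_counts = defaultdict(lambda: [0, 0])        # outcome -> [rewards, trials]

def observe(outcome, rewarded):
    rewards, trials = reward_counts[outcome]
    reward_counts[outcome] = [rewards + int(rewarded), trials + 1]

for _ in range(10):
    observe("box_on_square", True)                 # pushing the box onto the square pays off
    observe("box_off_square", False)

def p_reward(outcome):
    rewards, trials = reward_counts[outcome]
    return rewards / trials if trials else 0.0

# --- Planning phase: "mentally" simulate plans in a novel layout and pick
#     the one whose simulated outcome has the highest learned reward odds.
def simulate(plan):
    """Toy world model: the box ends on the square only if it is pushed there."""
    return "box_on_square" if plan == ("go_to_box", "push_box_to_square") else "box_off_square"

plans = list(product(["go_to_box", "wander"], ["push_box_to_square", "push_box_away"]))
best = max(plans, key=lambda plan: p_reward(simulate(plan)))
print("chosen plan:", best, "expected reward:", p_reward(simulate(best)))
```

The "insight" in this toy version is simply that the correct plan is found offline, by simulation against learned contingencies, rather than by trial and error in the novel situation.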
Exploring the Complex Interplay between AI and Consciousness
"... This paper embodies the authors ’ suggestive, hypothetical and sometimes speculative attempts to answer questions related to the interplay between consciousness and AI. We explore the theoretical foundations of consciousness in AI systems. We provide examples that demonstrate the potential utility o ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
This paper embodies the authors’ suggestive, hypothetical and sometimes speculative attempts to answer questions related to the interplay between consciousness and AI. We explore the theoretical foundations of consciousness in AI systems. We provide examples that demonstrate the potential utility of incorporating functional consciousness in cognitive AI systems. We also explore the possible contributions to the scientific study of consciousness from insights obtained by building and experimenting with conscious AI systems. Finally, we evaluate the possibility of phenomenally conscious machines.