Results 1 - 8 of 8
Typicality, graded membership, and vagueness
- Cognitive Science, 2007
Abstract - Cited by 29 (4 self)
A vague concept is a concept that picks out a category with no clearly defined boundary (see Keefe & Smith, 1997 for a review of theories of vagueness). Classical examples are the concepts labelled by adjectives such as BALD, TALL, or RED, and nouns such as CHAIR or VEGETABLE (Rosch, 1975). There is arguably no precise height at which a man or woman becomes tall, and so the class TALL MEN is not a well-defined set; consequently the truth of a statement such as “John is tall” may be in some sense undecidable if John is of intermediate height. Vague concepts are also susceptible to Zeno’s infamous sorites paradox. If a man who is 1 m tall is clearly not tall, then nor is a man who is 1.0001 m tall. In general, if a man who is x meters tall is clearly not tall, then nor is a man who is (x + 0.0001) m tall. But repeated application of this deduction can be used to prove that there are no tall men. Equivalently the argument can be reversed, starting with a man who is 2 m tall and working down the scale to prove on the contrary that all men are tall. Vagueness poses a serious challenge to theories of the logic of conceptual thought, and thus to theories of cognitive science—particularly those in the symbol-processing, representational theory of mind tradition. The problem of vagueness has long been acknowledged as posing serious difficulties for philosophy and epistemology. How, it is asked, can we claim to have certain knowledge of the world when the very words that we use to express that knowledge are prone to such vagueness? Not only does vagueness cause difficulties with theories of reference—how it is that our concepts refer to classes of entities in the world—but it also creates major problems for the development of cognitively plausible logics that are consistent with conceptual thought. Logics that incorporate vagueness have been proposed (Zadeh’s, 1965, Fuzzy Logic being the most well-known), but with limited success, and they have proved of little value as accounts of the psychology of vague reasoning (see, for example, Osherson & Smith, 1981; Cohen & ...
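The graded-membership idea that fuzzy logic brings to such predicates can be sketched in a few lines. This is an illustrative toy, not the paper's model; the 1.5 m and 1.9 m thresholds and the linear interpolation are assumptions chosen for the example:

```python
def tall(height_m, lower=1.5, upper=1.9):
    """Zadeh-style degree of membership in TALL, interpolated linearly
    between clearly-not-tall (<= lower) and clearly-tall (>= upper).
    The thresholds are illustrative assumptions only."""
    if height_m <= lower:
        return 0.0
    if height_m >= upper:
        return 1.0
    return (height_m - lower) / (upper - lower)

# Each 0.0001 m increment changes the membership degree only slightly,
# so the sorites step "if x is not tall, neither is x + 0.0001" is
# only *almost* truth-preserving rather than strictly valid.
print(tall(1.0))  # 0.0: clearly not tall
print(tall(1.7))  # ~0.5: borderline
print(tall(2.0))  # 1.0: clearly tall
```

On this reading the sorites argument fails because many almost-truth-preserving steps accumulate into a large loss of truth degree; whether such degree-talk is psychologically adequate is exactly what the Osherson and Smith line of criticism disputes.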
Symbol Grounding Transfer with Hybrid Self-Organizing/Supervised Neural Networks
- in IJCNN04 International Joint Conference on Neural Networks, 2004
Abstract - Cited by 8 (4 self)
This paper reports new simulations on an extended neural network model for the transfer of symbol grounding. It uses a hybrid and modular connectionist model, consisting of an unsupervised, self-organizing map for stimulus classification and a supervised network for category acquisition and naming. The model is based on a psychologically plausible view of symbolic communication, where unsupervised concept formation precedes the supervised acquisition of category names. The simulation results demonstrate that grounding is transferred from symbols denoting object properties to newly acquired symbols denoting the object as a whole. The implications for cognitive models integrating neural networks and multi-agent systems are discussed.
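The two-stage architecture described above, unsupervised classification followed by supervised naming, can be sketched minimally as below. This is not the paper's implementation: the toy 2-D stimuli, prototype count, learning rates, and the simplification of the self-organizing map to plain competitive learning (no neighborhood function) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (unsupervised): competitive prototype layer classifies stimuli.
def train_prototypes(data, n_units=4, epochs=30, lr=0.3):
    w = rng.normal(0, 1, size=(n_units, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            w[bmu] += lr * (x - w[bmu])  # move only the winning unit
    return w

def code(w, x):
    """One-hot activation of the best-matching prototype."""
    c = np.zeros(len(w))
    c[np.argmin(np.linalg.norm(w - x, axis=1))] = 1.0
    return c

# Stage 2 (supervised): a perceptron maps prototype activations to names.
def train_namer(w, data, labels, n_names, epochs=20, lr=0.5):
    v = np.zeros((n_names, len(w)))
    for _ in range(epochs):
        for x, y in zip(data, labels):
            c = code(w, x)
            pred = np.argmax(v @ c)
            if pred != y:          # simple error-driven update
                v[y] += lr * c
                v[pred] -= lr * c
    return v

# Two stimulus clusters, each paired with a category name (0 or 1).
data = np.vstack([rng.normal(-2, 0.2, (20, 2)), rng.normal(2, 0.2, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)

protos = train_prototypes(data)
namer = train_namer(protos, data, labels, n_names=2)
pred = [np.argmax(namer @ code(protos, x)) for x in data]
print(np.mean(pred == labels))
```

The point of the split is the one the abstract makes: the prototypes form without any labels, and the names are grounded only indirectly, through the prototype activations rather than the raw stimuli.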
T.: Self-refreshing SOM as a semantic memory model
- In: Proceedings of AKRR’05, International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning, 2005
Abstract - Cited by 1 (1 self)
Natural and artificial cognitive systems suffer from forgetting information. However, in natural systems forgetting is typically gradual, whereas in artificial systems forgetting is often catastrophic. Catastrophic forgetting is also a problem for the Self-Organizing Map (SOM) when used as a semantic memory model in a continuous learning task in a nonstationary environment. Methods based on rehearsal and pseudorehearsal have been successfully applied in feedforward networks to avoid catastrophic interference. A novel method based on pseudorehearsal for avoiding catastrophic forgetting in the SOM is presented. Simulations comparing the performance of a self-refreshing SOM to that of a standard SOM in the task of learning three separate sets of data sequentially show that the use of pseudorehearsal can effectively decrease catastrophic forgetting.
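The pseudorehearsal idea can be pictured with a small sketch: since the old data are gone, pseudo-items are sampled from the map's own current prototypes and mixed into the new training set. Everything below (a 1-D SOM of 8 units, two data sets rather than the paper's three, all rates and sizes) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(weights, data, epochs=20, lr=0.2, sigma=1.0):
    """Minimal 1-D SOM: pull the best-matching unit (BMU) and its
    map neighbours toward each input vector."""
    idx = np.arange(len(weights))
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def pseudo_items(weights, n_items, noise=0.05):
    """Pseudorehearsal: sample the map's current prototypes (plus a
    little noise) as stand-ins for the old, no-longer-available data."""
    picks = rng.integers(0, len(weights), size=n_items)
    return weights[picks] + rng.normal(0, noise, (n_items, weights.shape[1]))

def recall_error(weights, data):
    """Mean quantization error: distance from each item to its BMU."""
    return float(np.mean([np.min(np.linalg.norm(weights - x, axis=1)) for x in data]))

# Two disjoint data sets learned one after the other.
set_a = rng.normal(0.0, 0.1, (50, 2))
set_b = rng.normal(3.0, 0.1, (50, 2))

weights = train_som(rng.normal(1.5, 0.5, (8, 2)), set_a)

# Standard SOM: train on set B alone (risks overwriting set A's prototypes).
plain = train_som(weights.copy(), set_b)

# Self-refreshing SOM: mix pseudo-items from the old map into set B.
refreshed = train_som(weights.copy(), np.vstack([set_b, pseudo_items(weights, 50)]))

print(recall_error(plain, set_a), recall_error(refreshed, set_a))
```

The comparison at the end measures how well each map still represents the first data set after learning the second, which is the quantity catastrophic forgetting degrades.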
Modeling Communities of Experts: Conceptual grounding of expertise (Report TKK-ICS-R24)
ISSN 1796-2803
Visualizing Practice Theory through a Simulation Model, 2007
Abstract
Theories of human action are often constructed in such a way that the emphasis is either on the social or on the individual level. Especially within economics and consumer research, practice theory aims to build a bridge between these points of view. This report describes a simulation model that is a means to visualize some of the basic concepts of practice theory. Some aspects of the system may also be applicable in other domains.
SEARCH FOR MEANING: AN EVOLUTIONARY AGENTS APPROACH
Abstract
To build intelligent systems it is crucial to understand what the phenomenon of meaning is, how such a phenomenon can arise, and what process lies behind it. Meaning is peculiar to living organisms: only for living organisms do things in the surrounding world mean something. It is proposed that meaning arises when an agent starts to distinguish things that are positive or negative in the sense of survival and behaves accordingly, preferring desirable and avoiding undesirable states. As life is a process, so is meaning. In this paper a simulation of evolutionary agents is proposed to find out whether a configuration can arise, from a random initial state or through the evolutionary process, that allows an agent to distinguish good things from bad ones and to choose an appropriate action.
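The proposal can be caricatured with a one-gene evolutionary simulation. The stimulus coding, fitness rule, and all parameters below are invented for illustration, not taken from the paper: each agent's single weight decides whether it approaches a stimulus, and selection favours agents that approach the beneficial stimulus and avoid the harmful one.

```python
import random

random.seed(1)

# Stimuli: +1 stands for something beneficial, -1 for something harmful.
STIMULI = (1, -1)

def approaches(genome, stimulus):
    """An agent with a single weight approaches iff weight * stimulus > 0."""
    return genome * stimulus > 0

def fitness(genome, trials=40):
    """Survival-flavoured score: +1 for approaching a beneficial stimulus,
    -1 for approaching a harmful one; avoidance is neutral."""
    score = 0
    for _ in range(trials):
        s = random.choice(STIMULI)
        if approaches(genome, s):
            score += 1 if s > 0 else -1
    return score

def evolve(pop_size=30, generations=40):
    """Truncation selection with Gaussian mutation from a random start."""
    pop = [random.uniform(-1, 1) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        pop = [p + random.gauss(0, 0.1) for p in parents for _ in range(2)]
    return pop

final = evolve()
# Most surviving genomes should be positive: approach good, avoid bad.
print(sum(g > 0 for g in final) / len(final))
```

The evolved population illustrates the abstract's claim in miniature: the good/bad distinction is not programmed in, but emerges because agents that act on it survive selection.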
Simulating processes of language emergence, communication and agent modeling
Abstract
We discuss two different approaches for modeling other agents in multi-agent systems. One approach is based on language between agents and modeling their cognitive processes. The other utilizes game theory and is based on modeling the utilities of other agents. In both cases, we discuss how different machine learning paradigms can be utilized for acquiring experience from the environment and other agents.
Conceptual combination and negation
Abstract
Copyright © and Moral Rights for this paper are retained by the individual author(s) and/or other copyright holders. The version in City Research Online may differ from the final published version.