Results 11 - 18 of 18
Semantic Advantage for Learning New Phonological Form Representations
"... ■ Learning a new word requires discrimination between a novel sequence of sounds and similar known words. We inves-tigated whether semantic information facilitates the acquisition of new phonological representations in adults and whether this learning enhancement is modulated by overnight consolidat ..."
Abstract
Learning a new word requires discrimination between a novel sequence of sounds and similar known words. We investigated whether semantic information facilitates the acquisition of new phonological representations in adults and whether this learning enhancement is modulated by overnight consolidation. Participants learned novel spoken words either consistently associated with a visual referent or with no consistent meaning. An auditory oddball task tested discrimination of these newly learned phonological forms from known words. The mismatch negativity (MMN), an electrophysiological measure of auditory discrimination, was only elicited for words learned with a consistent semantic association. Immediately after training, this semantic benefit on auditory discrimination was linked to explicit learning of the associations, where participants with greater semantic learning exhibited a larger MMN. However, although the semantic-associated words continued to show greater auditory discrimination than non-associated words after consolidation, the MMN was no longer related to performance in learning the semantic associations. We suggest that the provision of semantic systematicity directly impacts upon the development of new phonological representations and that a period of offline consolidation may promote the abstraction of these representations.
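The MMN referred to above is conventionally quantified as the deviant-minus-standard difference wave from the oddball task. The following is a minimal illustrative sketch, not the authors' analysis pipeline, assuming epoched single-electrode EEG data are already available as NumPy arrays (the array names, sampling rate, and measurement window are placeholders):

```python
import numpy as np

# Illustrative sketch only: assumes epochs for one electrode (e.g. Fz) have
# already been extracted as (n_trials, n_samples) arrays, spanning
# -100 to 500 ms around stimulus onset at `sfreq` Hz.
sfreq = 500.0
times = np.arange(-0.1, 0.5, 1.0 / sfreq)                    # epoch time axis (s)

rng = np.random.default_rng(0)
standard_epochs = rng.normal(0.0, 1.0, (200, times.size))    # placeholder data
deviant_epochs = rng.normal(0.0, 1.0, (40, times.size))      # placeholder data

# ERPs: average across trials for each stimulus type.
standard_erp = standard_epochs.mean(axis=0)
deviant_erp = deviant_epochs.mean(axis=0)

# The mismatch negativity is the deviant-minus-standard difference wave.
mmn_wave = deviant_erp - standard_erp

# Quantify the MMN as the most negative deflection in a typical
# 100-250 ms post-stimulus window.
window = (times >= 0.10) & (times <= 0.25)
mmn_amplitude = mmn_wave[window].min()
mmn_latency = times[window][np.argmin(mmn_wave[window])]
print(f"MMN amplitude {mmn_amplitude:.2f} µV at {mmn_latency * 1000:.0f} ms")
```

In practice the epochs would come from an EEG toolbox such as MNE-Python, and the resulting amplitude would be compared between the semantic-association and no-association conditions.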
Decoding the Formation of New Semantics: MVPA Investigation of Rapid Neocortical Plasticity during Associative Encoding through Fast Mapping
"... Copyright © 2015 Tali Atir-Sharon et al.This is an open access article distributed under theCreative CommonsAttribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Neocortical structures typically only support s ..."
Abstract
Neocortical structures typically only support slow acquisition of declarative memory; however, learning through fast mapping may facilitate rapid learning-induced cortical plasticity and hippocampal-independent integration of novel associations into existing semantic networks. During fast mapping the meaning of new words and concepts is inferred, and durable novel associations are incidentally formed, a process thought to support early childhood’s exuberant learning. The anterior temporal lobe, a cortical semantic memory hub, may critically support such learning. We investigated encoding of semantic associations through fast mapping using fMRI and multivoxel pattern analysis. Subsequent memory performance following fast mapping was more efficiently predicted using anterior temporal lobe than hippocampal voxels, while standard explicit encoding was best predicted by hippocampal activity. Searchlight algorithms revealed additional activity patterns that predicted successful fast mapping semantic learning located in lateral occipitotemporal and parietotemporal neocortex and ventrolateral prefrontal cortex. By contrast, successful explicit encoding could be classified by activity in medial and dorsolateral prefrontal and parahippocampal cortices. We propose that fast mapping promotes incidental rapid integration of new associations into existing neocortical ...
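The region-wise comparison described above rests on cross-validated pattern classification of voxel activity. Below is a minimal, hypothetical sketch of that core MVPA step using scikit-learn; the data, labels, and region choice are placeholders, not the authors' materials or pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Illustrative sketch: decode "subsequently remembered" vs. "forgotten"
# fast-mapping trials from voxel patterns in one region of interest
# (e.g. anterior temporal lobe). Data below are random placeholders.
rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 300
X = rng.normal(size=(n_trials, n_voxels))    # trial-wise activity patterns
y = rng.integers(0, 2, size=n_trials)        # 1 = later remembered, 0 = forgotten

# Linear SVM with feature standardization, scored by cross-validated accuracy,
# the usual way decodability is compared across regions.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```

A searchlight analysis repeats this classification within a small sphere centred on every voxel in turn, producing a map of where the subsequent-memory outcome can be decoded.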
PhD Transfer Report
, 2009
"... Approaches to electronic music are divided between the computational and perceptual. Computational approaches focus on algorithms, formal languages and discrete representations, categorisations and relationships. Perceptual approaches focus on movement, sensors, gesture, articulation and continuous ..."
Abstract
Approaches to electronic music are divided between the computational and perceptual. Computational approaches focus on algorithms, formal languages and discrete representations, categorisations and relationships. Perceptual approaches focus on movement, sensors, gesture, articulation and continuous mappings between domains. This report describes research uniting musical computation and perception, proposing a music that is neither acousmatic nor syntactic but conceptual, where musical concepts are felt out in perceptual space and mapped out using language. The research programme concluding this thesis provides ground for practice to develop upon. In turn, the practice provides a figure around which the research is formed. This figure-ground relationship runs deep, structuring our definition of timbre as the figure which we hear in terms of the grounding source which produced it, as well as the relationship between symbolic figures and their conceptual ground. The programme includes development of mappings between words and timbre, a symbolic language for improvising rhythm, and exploration and empirical testing of the supporting ideas.
and Brain Sciences Unit, UK
, 2014
"... Over the course of development, speech sounds that are contrastive in one’s native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic ..."
Abstract
Over the course of development, speech sounds that are contrastive in one’s native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway
Brain and Language: Evidence for Neural Multifunctionality
"... Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This review paper presents converging evidence from studies of brain damage and longitudinal studies of language in aging which supports the following thes ..."
Abstract
This review paper presents converging evidence from studies of brain damage and longitudinal studies of language in aging which supports the following thesis: the neural basis of language can best be understood by the concept of neural multifunctionality. In this paper the term “neural multifunctionality” refers to incorporation of nonlinguistic functions into language models of the intact brain, reflecting a multifunctional perspective whereby a constant and dynamic interaction exists among neural networks subserving cognitive, affective, and praxic functions with neural networks specialized for lexical retrieval, sentence comprehension, and discourse processing, giving rise to language as we know it. By way of example, we consider effects of executive system functions on aspects of semantic processing among persons with and without aphasia, as well as the interaction of executive and language functions among older adults. We conclude by indicating how this multifunctional view of brain-language relations extends to the realm of language recovery from aphasia, where evidence of the influence of nonlinguistic factors on the reshaping of neural circuitry for aphasia rehabilitation is clearly emerging.
journal Frontiers in Psychology
, 2014
"... This article was submitted to Language Sciences, a section of the ..."