Results 1 - 10 of 136
Morpheme-specific phonology: Constraint indexation and inconsistency resolution - Phonological Argumentation: Essays on Evidence and Motivation, Equinox, 2009
Cited by 43 (1 self)
This paper argues that exceptions and other instances of morpheme-specific phonology are best analyzed in Optimality Theory (OT) in terms of lexically indexed markedness and faithfulness constraints. This approach is shown to capture locality restrictions, distinctions between exceptional and truly impossible patterns, distinctions between blocking and triggering, and distinctions between variation and exceptionality. It is contrasted with other OT analyses of exceptions, in particular those that disallow lexically indexed markedness constraints and those that invoke lexically specified rankings (that is, cophonologies). The data discussed are from Assamese, Finnish and Yine (formerly Piro). A learnability account of the genesis of lexically indexed constraints is also provided, in which indexation is used to resolve inconsistency detected by Tesar and Smolensky's (1998, 2000) Recursive Constraint Demotion algorithm.
Harmonic grammar with linear programming: From linear . . ., 2009
Cited by 41 (9 self)
Harmonic Grammar (HG) is a model of linguistic constraint interaction in which well-formedness is calculated as the sum of weighted constraint violations. We show how linear programming algorithms can be used to determine whether there is a weighting for a set of constraints that fits a set of linguistic data. The associated software package OT-Help provides a practical tool for studying large and complex linguistic systems in the HG framework and comparing the results with those of OT. We first describe the translation from Harmonic Grammars to systems solvable by linear programming algorithms. We then develop an HG analysis of ATR harmony in Lango that is, we argue, superior to the existing OT and rule-based treatments. We further highlight the usefulness of OT-Help, and the analytic power of HG, with a set of studies of the predictions HG makes for phonological typology.
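The translation the abstract describes can be illustrated with a small sketch. Assuming candidates are represented as violation-count vectors (the representation below, including the `hg_to_lp` name and the margin value, is invented for illustration and is not the paper's or OT-Help's code), each winner~loser pair yields one linear inequality over nonnegative weights: the winner's weighted violation total must be lower than the loser's by at least the margin. A feasible weight vector exists exactly when the resulting linear program is feasible.

```python
# Sketch of the HG-to-linear-programming translation (illustrative only).
# For each winner~loser pair with violation vectors w_viols and l_viols,
# require: sum_i weight_i * (l_viols_i - w_viols_i) >= margin, weight_i >= 0.
# The rows built here would then be handed to any LP solver as a
# feasibility problem.

def hg_to_lp(pairs, margin=1.0):
    """Build one LP constraint row (violation differences) per
    winner~loser pair, each required to be >= margin."""
    rows = []
    for winner_viols, loser_viols in pairs:
        rows.append([l - w for w, l in zip(winner_viols, loser_viols)])
    return rows, [margin] * len(rows)

# Two constraints; the winner violates the second once, the loser
# violates the first once.
rows, bounds = hg_to_lp([([0, 1], [1, 0])])
print(rows)    # [[1, -1]]
print(bounds)  # [1.0]
```

The construction only encodes the problem; whether a weighting exists is then a standard LP feasibility question, which is the step the paper delegates to linear programming algorithms.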
Weighted Constraints in Generative Linguistics - Cognitive Science, 2009
Cited by 29 (4 self)
Harmonic Grammar (HG) and Optimality Theory (OT) are closely related formal frameworks for the study of language. In both, the structure of a given language is determined by the relative strengths of a set of constraints. They differ in how these strengths are represented: as numerical weights (HG) or as ranks (OT). Weighted constraints have advantages for the construction of accounts of language learning and other cognitive processes, partly because they allow for the adaptation of connectionist and statistical models. HG has been little studied in generative linguistics, however, largely due to influential claims that weighted constraints make incorrect predictions about the typology of natural languages, predictions that are not shared by the more popular OT. This paper makes the case that HG is in fact a promising framework for typological research, and reviews and extends the existing arguments for weighted over ranked constraints.
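The contrast between ranks and weights can be sketched in a few lines. This toy comparison (all candidate names, violation counts, and weights below are invented, not taken from the paper) shows the "gang effect" that distinguishes HG from OT: several violations of a weaker constraint can jointly outweigh one violation of a stronger constraint, which strict ranking never allows.

```python
# Candidates are violation-count vectors over constraints ordered
# strongest-first. OT compares profiles lexicographically; HG compares
# weighted sums.

def ot_winner(candidates):
    """OT: tuples compare lexicographically, so the candidate with the
    fewest violations on the highest-ranked constraint wins outright."""
    return min(candidates, key=lambda name: candidates[name])

def hg_winner(candidates, weights):
    """HG: the candidate with the lowest weighted violation total
    (highest harmony) wins."""
    return min(candidates,
               key=lambda name: sum(w * v
                                    for w, v in zip(weights, candidates[name])))

# A violates the stronger constraint once; B violates the weaker one
# three times.
cands = {"A": (1, 0), "B": (0, 3)}
print(ot_winner(cands))            # B: strict ranking ignores counts below the top
print(hg_winner(cands, (2, 1)))    # A: B's three weak violations gang up (cost 3 > 2)
```

Under ranking, no number of weaker-constraint violations can ever overturn the top-ranked comparison; under weighting, they can, and this difference is the source of the typological predictions the paper discusses.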
Natural and Unnatural Constraints in Hungarian Vowel Harmony - To appear in Language, 2009
Cited by 19 (1 self)
Phonological constraints can, in principle, be classified according to whether they are natural (founded in principles of Universal Grammar (UG)) or unnatural (arbitrary, learned inductively from the language data). Recent work has used this distinction as the basis for arguments about the role of UG in learning. Some languages have phonological patterns that arguably reflect unnatural constraints. With experimental testing, one can assess whether such patterns are actually learned by native speakers. Becker, Ketrez, and Nevins (2007), testing speakers of Turkish, suggest that such patterns do indeed go unlearned. They interpret this result with a strong UG position: humans are unable to learn data patterns not backed by UG principles. This article pursues the same research line, locating similarly unnatural data patterns in the vowel harmony system of Hungarian, such as the tendency (among certain stem types) for a final bilabial stop to favor front harmony. Our own test leads to the opposite conclusion from Becker et al.: Hungarians evidently do learn the unnatural patterns. To conclude, we consider a bias account—that speakers are able to learn unnatural environments, but devalue them relative to natural ones. We outline a method for testing the strength of constraints as learned by speakers against the strength of the corresponding patterns in the lexicon, and show that it offers tentative support for the hypothesis that unnatural constraints are disfavored by language learners.
Linguistic optimization
Cited by 16 (2 self)
Optimality Theory (OT) is a model of language that combines aspects of generative and connectionist linguistics. It is unique in the field in its use of a rank ordering on constraints, which is used to formalize optimization, the choice of the best of a set of potential linguistic forms. We show that phenomena argued to require ranking fall out equally from the form of optimization in OT’s predecessor Harmonic Grammar (HG), which uses numerical weights to encode the relative strength of constraints. We further argue that the known problems for HG can be resolved by adopting assumptions about the nature of constraints that have precedents both in OT and elsewhere in computational and generative linguistics. This leads to a formal proof that if the range of each constraint is a bounded number of violations, HG generates a finite number of languages. This is nontrivial, since the set of possible weights for each constraint is nondenumerably infinite. We also briefly review some advantages of HG.
Asymmetries in generalizing alternations to and from initial syllables - Becker, Michael, Andrew Nevins & Jonathan Levine - Language 88, 231–268, 2012
Cited by 12 (1 self)
In the English lexicon, laryngeal alternations in the plural (e.g. leaf ∼ leaves) impact monosyllables more than finally stressed polysyllables. This is the opposite of what happens typologically, and would thereby run contrary to the predictions of INITIAL-SYLLABLE FAITHFULNESS. Despite the lexical pattern, in a wug test we found monosyllables to be impacted no more than finally stressed polysyllables were—a ‘surfeit of the stimulus’ effect, in which speakers fail to learn a statistical generalization present in the lexicon. We present two artificial-grammar experiments in which English speakers indeed manifest a universal bias for protecting monosyllables, and initial syllables more generally. The conclusion, therefore, is that speakers can exhibit spontaneous learning that goes directly against the evidence offered by the ambient language, a result we attribute to formal and substantive biases in phonological acquisition.
Learning Long-Distance Phonotactics, 2008
Cited by 12 (7 self)
Two questions regarding the non-local nature of long-distance agreement in consonantal harmony patterns (Hansson 2001, Rose and Walker 2004) are addressed: (1) How can such patterns be learned from surface forms alone? (2) How can we understand a major feature of the typology—the absence of blocking effects? It is shown that a learner which generalizes only by making distinctions with respect to the order of sounds (and not with respect to the distance between sounds) is able to learn major classes of long-distance phonotactic patterns, and is unable to learn hypothetical long-distance phonotactic patterns with blocking effects. Thus the learner not only acquires attested patterns, but also explains the absence of unattested ones. Furthermore, this result lends support to the idea that long-distance phonotactic patterns are phenomenologically distinct from spreading patterns, contra the hypothesis of Strict Locality (Gafos 1999, et seq.).
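A minimal sketch of such an order-sensitive, distance-blind learner (the toy data, segment symbols, and function names below are invented for illustration; this is not the paper's implementation): the grammar is simply the set of attested precedence pairs, i.e. pairs (x, y) such that x occurs somewhere before y in a training word, and a new word is accepted iff every one of its precedence pairs is attested.

```python
from itertools import combinations

def precedence_pairs(word):
    """All pairs (a, b) such that a precedes b in the word, at any
    distance. Distance between the two segments is deliberately ignored."""
    return {(a, b) for a, b in combinations(word, 2)}

def learn(positive_data):
    """The grammar is just the union of attested precedence pairs."""
    grammar = set()
    for word in positive_data:
        grammar |= precedence_pairs(word)
    return grammar

def accepts(grammar, word):
    """A word is well-formed iff all its precedence pairs are attested."""
    return precedence_pairs(word) <= grammar

# Toy sibilant-harmony data: s...s and S...S occur, *s...S does not
# ('S' stands in for a postalveolar sibilant).
grammar = learn(["sasa", "SaSa"])
print(accepts(grammar, "sasasa"))  # True: harmony holds at any distance
print(accepts(grammar, "saSa"))    # False: the pair (s, S) is unattested
```

Because the representation records only which segment precedes which, never what intervenes, a blocking pattern is inexpressible: there is no way for this grammar to state that "x before y" is bad only when some z intervenes, which mirrors the typological gap the abstract highlights.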
Modeling the contribution of phonotactic cues to the problem of . . ., 2009
doi:10.1017/S030500090999050X
Estimating Strictly Piecewise Distributions
Cited by 10 (6 self)
Strictly Piecewise (SP) languages are a subclass of regular languages which encode certain kinds of long-distance dependencies that are found in natural languages. Like the classes in the Chomsky and Subregular hierarchies, there are many independently converging characterizations of the SP class (Rogers et al., to appear). Here we define SP distributions and show that they can be efficiently estimated from positive data.
Improving Word Segmentation by Simultaneously Learning Phonotactics - CoNLL 2008, 2008
Cited by 9 (3 self)
(Show Context)
The most accurate unsupervised word segmentation systems that are currently available (Brent, 1999; Venkataraman, 2001; Goldwater, 2007) use a simple unigram model of phonotactics. While this simplifies some of the calculations, it overlooks cues that infant language acquisition researchers have shown to be useful for segmentation (Mattys et al., 1999; Mattys and Jusczyk, 2001). Here we explore the utility of using bigram and trigram phonotactic models by enhancing Brent’s (1999) MBDP-1 algorithm. The results show the improved MBDP-Phon model outperforms other unsupervised word segmentation systems (e.g., Brent, 1999; Venkataraman, 2001; Goldwater, 2007).