Results 1 - 10 of 19
Weighted constraints and gradient restrictions on place co-occurrence in Muna and Arabic. Natural Language and Linguistic Theory, 2008
"... Abstract. This paper documents a restriction against the co-occurrence of homorganic consonants in the root morphemes of Muna, a western Austronesian language, and compares the Muna pattern with the much-studied similar pattern in Arabic. As in Arabic, the restriction applies gradiently: its force d ..."
Abstract
-
Cited by 34 (3 self)
- Add to MetaCart
This paper documents a restriction against the co-occurrence of homorganic consonants in the root morphemes of Muna, a western Austronesian language, and compares the Muna pattern with the much-studied similar pattern in Arabic. As in Arabic, the restriction applies gradiently: its force depends on the place of articulation of the consonants involved, and on whether the homorganic consonants are similar in terms of other features. Muna differs from Arabic in the relative strengths of these other features in affecting co-occurrence rates of homorganic consonants. Along with the descriptions of these patterns, this paper presents phonological analyses of the Muna and Arabic patterns in terms of weighted constraints, as in Harmonic Grammar. This account uses a gradual learning algorithm that acquires weights that reflect the relative frequency of different sequence types in the two languages. The resulting grammars assign the sequences acceptability scores that correlate with a measure of their attestedness in the lexicon. This application of Harmonic Grammar illustrates its ability to capture both gradient and categorical patterns.
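The gradual weight learning described in this abstract can be sketched as a perceptron-style update: when the grammar prefers the wrong candidate, constraints violated more by the rival gain weight and constraints violated more by the attested form lose weight. The constraint names and numbers below are invented for illustration, not the paper's actual Muna or Arabic analysis.

```python
# Illustrative sketch of one gradual (perceptron-style) weight update
# for Harmonic Grammar. Constraints and data are hypothetical.

def harmony(weights, violations):
    """Harmony = negative weighted sum of constraint violations."""
    return -sum(w * v for w, v in zip(weights, violations))

def update(weights, winner, loser, rate=0.1):
    """Nudge weights so the observed winner beats the rival candidate:
    constraints the loser violates more gain weight, and vice versa."""
    return [w + rate * (vl - vw) for w, vw, vl in zip(weights, winner, loser)]

# Two constraints: say, a place co-occurrence (OCP-style) markedness
# constraint and a faithfulness constraint.
weights = [1.0, 1.0]
winner_violations = [0, 1]   # attested form violates faithfulness once
loser_violations  = [1, 0]   # rival violates the co-occurrence constraint
weights = update(weights, winner_violations, loser_violations)
print(weights)  # → [1.1, 0.9]
```

Iterating such updates over a frequency-weighted sample of forms is what lets the learned weights reflect the relative frequency of different sequence types.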
Linear Optimality Theory as a Model of Gradience in Grammar. In Gradience in Grammar: Generative Perspectives, ed. Gisbert Fanselow, Caroline Féry, Ralph Vogel, and Matthias Schlesewsky, 2005
"... This paper provides an overview of Linear Optimality Theory (LOT), a variant of Optimality Theory (OT) designed for the modeling of gradient acceptability judgment data. We summarize the empirical properties of gradient data that have been reported in the experimental literature, and use them to mot ..."
Abstract
-
Cited by 33 (0 self)
- Add to MetaCart
This paper provides an overview of Linear Optimality Theory (LOT), a variant of Optimality Theory (OT) designed for the modeling of gradient acceptability judgment data. We summarize the empirical properties of gradient data that have been reported in the experimental literature, and use them to motivate the design of LOT. We discuss LOT’s notions of constraint competition and optimality, as well as a new formulation of ranking argumentation, which makes it possible to apply standard parameter estimation techniques to LOT. Then the LOT model is compared to Standard OT, to Harmonic Grammar, and to recently proposed probabilistic versions of OT.
The chicken or the egg? A probabilistic analysis of English binomials. Language 82(2), 2006
"... Abstract. Why is it preferable to say salt and pepper over pepper and salt? Based on an analysis of 692 binomial tokens from on-line corpora, we show that a number of semantic, metrical, and frequency constraints contribute significantly to ordering preferences, overshadowing the phonological facto ..."
Abstract
-
Cited by 28 (4 self)
- Add to MetaCart
Why is it preferable to say salt and pepper over pepper and salt? Based on an analysis of 692 binomial tokens from on-line corpora, we show that a number of semantic, metrical, and frequency constraints contribute significantly to ordering preferences, overshadowing the phonological factors that have traditionally been considered important. The ordering of binomials exhibits a considerable amount of variation. For example, although principal and interest is the more frequent order, interest and principal also occurs. We consider three frameworks for analysis of this variation: traditional Optimality Theory, stochastic Optimality Theory, and logistic regression. Our best models – using logistic regression – predict 79.2% of the binomial tokens and 76.7% of types, and the remainder are predicted as less-frequent – but not ungrammatical – variants.
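A logistic-regression account of ordering preferences can be sketched in a few lines: the probability of the order "A and B" is the sigmoid of a weighted sum of ordering features. The features and weights below are invented for illustration; the paper fits such weights to its corpus of 692 tokens.

```python
import math

# Hypothetical sketch of logistic scoring for a binomial order "A and B".
# Feature names and weights are illustrative, not the paper's fitted model.

def prob_order_ab(features, weights, bias=0.0):
    """P(order A-and-B) = sigmoid of the weighted feature sum."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Features for "salt and pepper": e.g. [first word is more frequent,
# first word is metrically shorter], both true here.
features = [1, 1]
weights = [1.2, 0.8]     # invented weights for the sketch
p = prob_order_ab(features, weights)
print(round(p, 3))       # sigmoid(2.0) ≈ 0.881
```

Unlike a strict constraint ranking, this model assigns the dispreferred order a nonzero probability, matching the observation that variants like interest and principal occur but are less frequent.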
Weighted Constraints in Generative Linguistics. Cognitive Science, 2009
"... Harmonic Grammar (HG) and Optimality Theory (OT) are closely related formal frameworks for the study of language. In both, the structure of a given language is determined by the relative strengths of a set of constraints. They differ in how these strengths are represented: as numerical weights (HG) ..."
Abstract
-
Cited by 21 (3 self)
- Add to MetaCart
(Show Context)
Harmonic Grammar (HG) and Optimality Theory (OT) are closely related formal frameworks for the study of language. In both, the structure of a given language is determined by the relative strengths of a set of constraints. They differ in how these strengths are represented: as numerical weights (HG) or as ranks (OT). Weighted constraints have advantages for the construction of accounts of language learning and other cognitive processes, partly because they allow for the adaptation of connectionist and statistical models. HG has been little studied in generative linguistics, however, largely due to influential claims that weighted constraints make incorrect predictions about the typology of natural languages, predictions that are not shared by the more popular OT. This paper makes the case that HG is in fact a promising framework for typological research, and reviews and extends the existing arguments for weighted over ranked constraints.
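The difference between the two evaluation modes can be made concrete: HG picks the candidate with the smallest weighted violation sum, while OT compares violation profiles lexicographically by rank. The toy candidates and weights below are invented to illustrate a "gang effect", where two individually weaker constraints jointly outweigh the single strongest one, an outcome no ranking of the same three constraints reproduces with C1 on top.

```python
# Minimal sketch (not from the paper) contrasting HG and OT evaluation.

candidates = {            # hypothetical candidates -> violations of C1, C2, C3
    "cand_a": [1, 0, 0],
    "cand_b": [0, 1, 1],
}

def hg_winner(cands, weights):
    """HG: minimize the weighted sum of violations."""
    return min(cands, key=lambda c: sum(w * v for w, v in zip(weights, cands[c])))

def ot_winner(cands, ranking):
    """OT: compare violation profiles lexicographically; `ranking` lists
    constraint indices from highest- to lowest-ranked."""
    return min(cands, key=lambda c: tuple(cands[c][i] for i in ranking))

# C1 has the largest weight (3), yet C2 and C3 gang up (2 + 2 > 3),
# so HG tolerates a C1 violation ...
print(hg_winner(candidates, [3.0, 2.0, 2.0]))   # → cand_a
# ... while OT with C1 top-ranked never does.
print(ot_winner(candidates, [0, 1, 2]))         # → cand_b
```

Whether such cumulative interactions are attested in natural-language typology is exactly the point of contention the abstract describes.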
Linguistic optimization
"... Optimality Theory (OT) is a model of language that combines aspects of generative and connectionist linguistics. It is unique in the field in its use of a rank ordering on constraints, which is used to formalize optimization, the choice of the best of a set of potential linguistic forms. We show tha ..."
Abstract
-
Cited by 16 (2 self)
- Add to MetaCart
(Show Context)
Optimality Theory (OT) is a model of language that combines aspects of generative and connectionist linguistics. It is unique in the field in its use of a rank ordering on constraints, which is used to formalize optimization, the choice of the best of a set of potential linguistic forms. We show that phenomena argued to require ranking fall out equally from the form of optimization in OT’s predecessor Harmonic Grammar (HG), which uses numerical weights to encode the relative strength of constraints. We further argue that the known problems for HG can be resolved by adopting assumptions about the nature of constraints that have precedents both in OT and elsewhere in computational and generative linguistics. This leads to a formal proof that if the range of each constraint is a bounded number of violations, HG generates a finite number of languages. This is nontrivial, since the set of possible weights for each constraint is nondenumerably infinite. We also briefly review some advantages of HG.
Linking speech errors and phonological grammars: Insights from Harmonic Grammar networks. Phonology, 2009
"... Phonological grammars characterize distinctions between relatively well-formed (unmarked) and relatively ill-formed (marked) phonological structures. We review evidence that markedness influences speech error probabilities. Specifically, although errors result in both unmarked as well as marked stru ..."
Abstract
-
Cited by 6 (2 self)
- Add to MetaCart
(Show Context)
Phonological grammars characterize distinctions between relatively well-formed (unmarked) and relatively ill-formed (marked) phonological structures. We review evidence that markedness influences speech error probabilities. Specifically, although errors result in both unmarked and marked structures, there is a markedness asymmetry: errors are more likely to produce unmarked outcomes. We show that stochastic disruption to the computational mechanisms realizing a Harmonic Grammar (HG) can account for the broad empirical patterns of speech errors. We demonstrate that our proposal can account for the general markedness asymmetry. We also develop methods for linking particular HG proposals to speech error distributions, and illustrate these methods using a simple HG and a set of initial consonant errors in English.
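One simple way to model stochastic disruption of an HG computation is to perturb the constraint weights with Gaussian noise before each evaluation (a sketch under that assumption, not the paper's network model): both outcomes then occur, but errors toward the unmarked structure remain more probable, reproducing the markedness asymmetry.

```python
import random

# Sketch: noisy-weight evaluation of a two-constraint Harmonic Grammar.
# Constraints, weights, and candidates are hypothetical.

def noisy_winner(cands, weights, sigma=1.0, rng=random):
    """Evaluate with weights perturbed by Gaussian noise (one draw per use)."""
    noisy = [w + rng.gauss(0.0, sigma) for w in weights]
    return min(cands, key=lambda c: sum(w * v for w, v in zip(noisy, cands[c])))

candidates = {
    "unmarked": [0, 1],   # violates only faithfulness
    "marked":   [1, 0],   # violates the markedness constraint
}

random.seed(0)
counts = {c: 0 for c in candidates}
for _ in range(10000):
    counts[noisy_winner(candidates, [2.0, 1.0])] += 1

# With markedness weighted above faithfulness, the unmarked outcome wins
# most trials, but the marked outcome still surfaces as an "error".
print(counts["unmarked"] > counts["marked"] > 0)  # → True
```

Fitting the weights and noise level to observed error rates is, in spirit, what linking an HG proposal to a speech-error distribution amounts to.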
Phonological constraints on constituent ordering. Proceedings of the 26th West Coast Conference on Formal Linguistics. Somerville, MA: Cascadilla Proceedings Project, 2008
"... Does phonology influence the ordering of meaningful elements (morphemes, words, phrases)? The answer is usually taken to be no (see e.g. Pullum and Zwicky 1989), but an investigation into the quantitative distribution of constituents tells a different story. This paper reports the results of a ..."
Abstract
-
Cited by 5 (0 self)
- Add to MetaCart
(Show Context)
Does phonology influence the ordering of meaningful elements (morphemes, words, phrases)? The answer is usually taken to be no (see e.g. Pullum and Zwicky 1989), but an investigation into the quantitative distribution of constituents tells a different story. This paper reports the results of a
Learning serial constraint-based grammars. In Harmonic Grammar and Harmonic Serialism, 2014
"... In this paper we describe a method for learning grammars in the general framework of Harmonic Serialism (see McCarthy this volume for references and an introduction). We have two main goals. The first is to address the hidden structure problem that serial derivations introduce. The second is to addr ..."
Abstract
-
Cited by 3 (2 self)
- Add to MetaCart
(Show Context)
In this paper we describe a method for learning grammars in the general framework of Harmonic Serialism (see McCarthy this volume for references and an introduction). We have two main goals. The first is to address the hidden structure problem that serial derivations introduce. The second is to address the problem of learning variation, which has yet to be
The VC dimension of constraint-based grammars, 2009
"... We analyze the complexity of Harmonic Grammar (HG), a linguistic model in which licit underlying-to-surfaceform mappings are determined by optimization over weighted constraints. We show that the Vapnik-Chervonenkis Dimension of HG grammars with k constraints is k − 1. This establishes a fundamental ..."
Abstract
-
Cited by 3 (1 self)
- Add to MetaCart
We analyze the complexity of Harmonic Grammar (HG), a linguistic model in which licit underlying-to-surface-form mappings are determined by optimization over weighted constraints. We show that the Vapnik-Chervonenkis Dimension of HG grammars with k constraints is k − 1. This establishes a fundamental bound on the complexity of HG in terms of its capacity to classify sets of linguistic data that has significant ramifications for learnability. The VC dimension of HG is the same as that of Optimality Theory (OT), which is similar to HG, but uses ranked rather than weighted constraints in optimization. The parity of the VC dimension in these two models is somewhat surprising because OT defines finite classes of grammars—there are at most k! ways to rank k constraints—while HG can define infinite classes of grammars because the weights associated with constraints are real-valued. The parity is also surprising because HG permits groups of constraints that interact through so-called ‘gang effects’ to generate languages that cannot be generated in OT. The fact that the VC dimension grows linearly with the number of constraints in both models means that, even in the worst case, the number of randomly chosen training samples needed to weight/rank a known set of constraints is a linear function of k. We conclude that though there may be factors that favor one model or the other, the complexity of learning weightings/rankings is not one of them.