Results 1 - 10 of 34
A maximum entropy model of phonotactics and phonotactic learning, 2006
"... The study of phonotactics (e.g., the ability of English speakers to distinguish possible words like blick from impossible words like *bnick) is a central topic in phonology. We propose a theory of phonotactic grammars and a learning algorithm that constructs such grammars from positive evidence. Our ..."
Abstract
-
Cited by 132 (15 self)
- Add to MetaCart
(Show Context)
The study of phonotactics (e.g., the ability of English speakers to distinguish possible words like blick from impossible words like *bnick) is a central topic in phonology. We propose a theory of phonotactic grammars and a learning algorithm that constructs such grammars from positive evidence. Our grammars consist of constraints that are assigned numerical weights according to the principle of maximum entropy. Possible words are assessed by these grammars based on the weighted sum of their constraint violations. The learning algorithm yields grammars that can capture both categorical and gradient phonotactic patterns. The algorithm is not provided with any constraints in advance, but uses its own resources to form constraints and weight them. A baseline model, in which Universal Grammar is reduced to a feature set and an SPE-style constraint format, suffices to learn many phonotactic phenomena. In order to learn nonlocal phenomena such as stress and vowel harmony, it is necessary to augment the model with autosegmental tiers and metrical grids. Our results thus offer novel, learning-theoretic support for such representations. We apply the model to English syllable onsets, Shona vowel harmony, quantity-insensitive stress typology, and the full phonotactics of Wargamay, showing that the learned grammars capture the distributional generalizations of these languages and accurately predict the findings of a phonotactic experiment.
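The scoring scheme described here is easy to make concrete. A minimal sketch in Python, assuming two hand-written constraints and invented weights; the paper's model induces its own constraints and fits the weights by maximum entropy, so everything below is illustrative only:

```python
import math

# Two hand-written constraints standing in for the induced grammar.
# Each maps a word to its number of violations.
VOWELS = set("aeiou")

def star_bn(word):
    """*#bn: penalize a word-initial 'bn' cluster."""
    return 1 if word.startswith("bn") else 0

def star_ccc(word):
    """*CCC: penalize each window of three consecutive consonants."""
    return sum(1 for i in range(len(word) - 2)
               if not VOWELS & set(word[i:i + 3]))

CONSTRAINTS = [star_bn, star_ccc]
WEIGHTS = [4.5, 2.0]  # invented values, not fitted

def harmony(word):
    """Weighted sum of constraint violations; lower is better."""
    return sum(w * c(word) for w, c in zip(WEIGHTS, CONSTRAINTS))

def maxent_score(word):
    """exp(-harmony), proportional to the word's predicted probability."""
    return math.exp(-harmony(word))

for word in ["blick", "bnick"]:
    print(word, harmony(word), round(maxent_score(word), 4))
```

Running this gives blick a score of 1.0 and bnick roughly 0.011, reproducing in miniature the categorical possible/impossible contrast the abstract opens with; gradient patterns fall out the same way from intermediate scores.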
Harmonic grammar with linear programming: From linear . . ., 2009
"... Harmonic Grammar (HG) is a model of linguistic constraint interaction in which well-formedness is calculated as the sum of weighted constraint violations. We show how linear programming algorithms can be used to determine whether there is a weighting for a set of constraints that fits a set of ling ..."
Abstract
-
Cited by 40 (9 self)
- Add to MetaCart
Harmonic Grammar (HG) is a model of linguistic constraint interaction in which well-formedness is calculated as the sum of weighted constraint violations. We show how linear programming algorithms can be used to determine whether there is a weighting for a set of constraints that fits a set of linguistic data. The associated software package OT-Help provides a practical tool for studying large and complex linguistic systems in the HG framework and comparing the results with those of OT. We first describe the translation from Harmonic Grammars to systems solvable by linear programming algorithms. We then develop an HG analysis of ATR harmony in Lango that is, we argue, superior to the existing OT and rule-based treatments. We further highlight the usefulness of OT-Help, and the analytic power of HG, with a set of studies of the predictions HG makes for phonological typology.
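The translation from HG comparisons to a linear program can be sketched with an off-the-shelf solver. Assumptions: hypothetical winner-loser pairs, a fixed separation margin, and minimizing total weight as the objective; OT-Help packages this kind of computation, but the code below is not OT-Help:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical violation profiles over two constraints, as
# (winner_violations, loser_violations) pairs for observed comparisons.
pairs = [
    ((0, 1), (1, 0)),
    ((1, 0), (0, 2)),
]
margin = 1.0

# Each pair requires sum_i w_i * (loser_i - winner_i) >= margin.
# linprog solves A_ub @ w <= b_ub, so negate both sides.
A_ub = [[wv - lv for wv, lv in zip(win, lose)] for win, lose in pairs]
b_ub = [-margin] * len(pairs)

# Objective: smallest total weight. Default bounds already give w_i >= 0.
res = linprog(c=np.ones(len(pairs[0][0])), A_ub=A_ub, b_ub=b_ub)
print("feasible:", res.success)   # True iff some weighting fits the data
print("weights:", res.x)          # here: [3., 2.]
```

Infeasibility of the program is itself informative: it means no weighting of the given constraints fits the data, which is exactly the question the abstract says the translation answers.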
Convergence properties of a gradual learning algorithm for Harmonic Grammar. Rutgers Optimality Archive 970, 2008
"... Abstract. This paper investigates a gradual on-line learning algorithm for Harmonic Grammar. By adapting existing convergence proofs for perceptrons, we show that for any nonvarying target language, Harmonic-Grammar learners are guaranteed to converge to an appropriate grammar, if they receive compl ..."
Abstract
-
Cited by 39 (14 self)
- Add to MetaCart
Abstract. This paper investigates a gradual on-line learning algorithm for Harmonic Grammar. By adapting existing convergence proofs for perceptrons, we show that for any nonvarying target language, Harmonic-Grammar learners are guaranteed to converge to an appropriate grammar, if they receive complete information about the structure of the learning data. We also prove convergence when the learner incorporates evaluation noise, as in Stochastic Optimality Theory. Computational tests of the algorithm show that it converges quickly. When learners receive incomplete information (e.g. some structure remains hidden), tests indicate that the algorithm is more likely to converge than two comparable Optimality-Theoretic learning algorithms.
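The perceptron connection suggests the shape of such a learner. A minimal sketch, not the paper's exact algorithm: on each error the weights move by the difference between the loser's and the winner's violation vectors, with optional Gaussian evaluation noise as in Stochastic OT. The tableau, rate, and initial weights are invented:

```python
import random

def penalty(weights, viols):
    """Weighted sum of constraint violations; lower is more harmonic."""
    return sum(w * v for w, v in zip(weights, viols))

def learn(data, init_weights, rate=0.1, epochs=100, noise=0.0):
    """Perceptron-style gradual HG learner (a sketch).

    data: list of tableaux; each tableau is a list of
    (violation_vector, is_winner) candidates. On an error, weights move
    by rate * (loser_violations - winner_violations). Real gradual
    learners often also keep weights nonnegative; omitted here.
    """
    w = list(init_weights)
    for _ in range(epochs):
        for tableau in data:
            # Evaluation noise, as in Stochastic OT: perturb each weight
            # once per evaluation.
            ew = [wi + random.gauss(0, noise) for wi in w] if noise else w
            pick = min(range(len(tableau)),
                       key=lambda i: penalty(ew, tableau[i][0]))
            target = next(i for i, (_, win) in enumerate(tableau) if win)
            if pick != target:
                lv, tv = tableau[pick][0], tableau[target][0]
                w = [wi + rate * (l - t) for wi, l, t in zip(w, lv, tv)]
    return w

# Invented tableau: the intended winner violates constraint 1, the
# competitor violates constraint 0; initial weights favor the competitor.
data = [[((0, 1), True), ((1, 0), False)]]
print(learn(data, init_weights=[0.0, 1.0]))
```

Starting from weights [0.0, 1.0], the updates raise the first weight and lower the second until the intended winner is selected, illustrating the error-driven convergence behavior the abstract proves in general.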
Weighted Constraints in Generative Linguistics - Cognitive Science, 2009
"... Harmonic Grammar (HG) and Optimality Theory (OT) are closely related formal frameworks for the study of language. In both, the structure of a given language is determined by the relative strengths of a set of constraints. They differ in how these strengths are represented: as numerical weights (HG) ..."
Abstract
-
Cited by 21 (3 self)
- Add to MetaCart
(Show Context)
Harmonic Grammar (HG) and Optimality Theory (OT) are closely related formal frameworks for the study of language. In both, the structure of a given language is determined by the relative strengths of a set of constraints. They differ in how these strengths are represented: as numerical weights (HG) or as ranks (OT). Weighted constraints have advantages for the construction of accounts of language learning and other cognitive processes, partly because they allow for the adaptation of connectionist and statistical models. HG has been little studied in generative linguistics, however, largely due to influential claims that weighted constraints make incorrect predictions about the typology of natural languages, predictions that are not shared by the more popular OT. This paper makes the case that HG is in fact a promising framework for typological research, and reviews and extends the existing arguments for weighted over ranked constraints.
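The typological contrast at issue comes down to cumulativity: under weighting, several violations of weaker constraints can jointly outweigh one violation of a stronger constraint (a "gang effect"), which OT's strict domination never permits. A small illustration with invented violation profiles:

```python
# Three constraints; weights reflect relative strength (C0 strongest).
weights = [3.0, 2.0, 2.0]

def hg_winner(cands):
    """HG: the lowest weighted sum of violations wins."""
    return min(cands, key=lambda v: sum(w * x for w, x in zip(weights, v)))

def ot_winner(cands):
    """OT with ranking C0 >> C1 >> C2: strict domination amounts to
    lexicographic comparison of violation vectors."""
    return min(cands)  # tuple comparison is lexicographic

# A violates only the strongest constraint; B violates both weaker ones.
A, B = (1, 0, 0), (0, 1, 1)
print("OT picks:", ot_winner([A, B]))  # B: the top constraint decides alone
print("HG picks:", hg_winner([A, B]))  # A: 2.0 + 2.0 gangs up to beat 3.0
```

Whether natural languages attest such cumulative interactions is precisely the empirical question the paper's typological argument turns on.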
Linguistic optimization
"... Optimality Theory (OT) is a model of language that combines aspects of generative and connectionist linguistics. It is unique in the field in its use of a rank ordering on constraints, which is used to formalize optimization, the choice of the best of a set of potential linguistic forms. We show tha ..."
Abstract
-
Cited by 16 (2 self)
- Add to MetaCart
(Show Context)
Optimality Theory (OT) is a model of language that combines aspects of generative and connectionist linguistics. It is unique in the field in its use of a rank ordering on constraints, which is used to formalize optimization, the choice of the best of a set of potential linguistic forms. We show that phenomena argued to require ranking fall out equally from the form of optimization in OT’s predecessor Harmonic Grammar (HG), which uses numerical weights to encode the relative strength of constraints. We further argue that the known problems for HG can be resolved by adopting assumptions about the nature of constraints that have precedents both in OT and elsewhere in computational and generative linguistics. This leads to a formal proof that if the range of each constraint is a bounded number of violations, HG generates a finite number of languages. This is nontrivial, since the set of possible weights for each constraint is nondenumerably infinite. We also briefly review some advantages of HG.
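One way to unpack the finiteness claim (a sketch of the idea as stated in the abstract, not the paper's actual proof):

```latex
% With $n$ constraints whose violation counts are bounded by $k$, every
% candidate's violation profile lies in the finite set $V=\{0,\dots,k\}^n$.
% A weight vector $w \in \mathbb{R}_{\ge 0}^n$ determines winners only
% through the signs of finitely many pairwise differences:
\[
  \operatorname{sgn}\!\bigl(w \cdot (v - v')\bigr),
  \qquad v, v' \in V,
\]
% so the uncountably many weightings realize only finitely many distinct
% winner-selection patterns, and hence only finitely many languages.
```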
A maximum entropy model of phonotactics and phonotactic learning, 2008
"... The study of phonotactics is a central topic in phonology. We propose a theory of phonotactic grammars and a learning algorithm that constructs such grammars from positive evidence. Our grammars consist of constraints that are assigned numerical weights according to the principle of maximum entropy. ..."
Abstract
-
Cited by 12 (0 self)
- Add to MetaCart
The study of phonotactics is a central topic in phonology. We propose a theory of phonotactic grammars and a learning algorithm that constructs such grammars from positive evidence. Our grammars consist of constraints that are assigned numerical weights according to the principle of maximum entropy. The grammars assess possible words on the basis of the weighted sum of their constraint violations. The learning algorithm yields grammars that can capture both categorical and gradient phonotactic patterns. The algorithm is not provided with constraints in advance, but uses its own resources to form constraints and weight them. A baseline model, in which Universal Grammar is reduced to a feature set and an SPE-style constraint format, suffices to learn many phonotactic phenomena. In order for the model to learn nonlocal phenomena such as stress and vowel harmony, it must be augmented with autosegmental tiers and metrical grids. Our results thus offer novel, learning-theoretic support for such representations. We apply the model in a variety of learning simulations, showing that the learned grammars capture the distributional generalizations of these languages and accurately predict the findings of a phonotactic experiment.
A Comparison of Lexicographic and Linear Numeric Optimization Using Violation Difference Ratios, 2007
"... ..."
(Show Context)
Gradual learning and faithfulness: Consequences of ranked vs. weighted constraints - Proceedings of the 38th Meeting of the North East Linguistics Society (NELS 38). GLSA, 2008
"... This paper investigates a class of restrictive intermediate stages that emerge during L1 phonological acquisition, and argues that these stages are naturally accounted for within a gradual learning model that uses weighted constraints. The particular type of pattern of interest here – Intermediate F ..."
Abstract
-
Cited by 7 (2 self)
- Add to MetaCart
(Show Context)
This paper investigates a class of restrictive intermediate stages that emerge during L1 phonological acquisition, and argues that these stages are naturally accounted for within a gradual learning model that uses weighted constraints. The particular type of pattern of interest here – Intermediate Faithfulness (IF) stages – involves the preservation of marked structures just in privileged environments. We illustrate this with data from Bat-El (2007), which shows the innovation of morphologically-sensitive phonology during the acquisition of Hebrew. The resulting IF stage displays greater faithfulness in nouns – a privileged context (Smith 2000, 2001) – than in non-nouns. Like other IF stages, it emerges without any direct support from the target grammar (Revithiadou & Tzakosta, Rose 2000, Tessier 2007a). Our IF-stage analysis makes use of a gradual on-line learner and a grammar of weighted constraints (as in Harmonic Grammar (HG); Legendre et al. 1990, Smolensky & Legendre 2006). In this model, the harmony score $H$ of a candidate $R$ is calculated by multiplying its violations of each constraint $\{C_1(R), C_2(R), \ldots, C_n(R)\}$ by the weights associated with those constraints $\{w_1, w_2, \ldots, w_n\}$ and summing, as in (1). The candidate …
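Equation (1) itself is cut off in this listing; from the description in the abstract, it is the standard HG harmony score (a reconstruction, not a quotation from the paper):

```latex
% Reconstructed from the abstract's description of (1): harmony is the
% weighted sum of the candidate's constraint violations.
\[
  H(R) \;=\; \sum_{i=1}^{n} w_i \, C_i(R) \tag{1}
\]
```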
Phonological acquisition as weighted constraint interaction - Proceedings of the 2nd Conference on Generative Approaches to Language Acquisition North America (GALANA), 2007
"... In the study of acquisition and learnability in Optimality Theory (OT; Prince and Smolensky 1993/2004) learning is characterized in terms of changes in constraint ranking. Learners begin with a ranking of Markedness constraints above Faithfulness constraints, and rerank them on the basis of evidence ..."
Abstract
-
Cited by 3 (1 self)
- Add to MetaCart
In the study of acquisition and learnability in Optimality Theory (OT; Prince and Smolensky 1993/2004), learning is characterized in terms of changes in constraint ranking. Learners begin with a ranking of Markedness constraints above Faithfulness constraints, and rerank them on the basis of evidence from the target language. A theory of learnability that accounts for the human acquisition …