Results 1–10 of 19,750
A new approach to the maximum flow problem
JOURNAL OF THE ACM, 1988
"... All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the pre ..."
Cited by 672 (33 self)
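The "alternative method" this abstract contrasts with augmenting paths maintains a preflow and moves excess between nodes according to height labels. A minimal generic push-relabel sketch, assuming an edge-capacity dict on nodes 0..n-1 (the encoding and names are my own, not the paper's):

```python
def push_relabel_max_flow(cap, s, t, n):
    """Generic push-relabel (preflow) max flow on nodes 0..n-1.
    cap: dict {(u, v): capacity}, mutated into residual capacities."""
    height = [0] * n
    excess = [0] * n
    height[s] = n
    # initial preflow: saturate every edge out of the source
    for (u, v) in list(cap):
        if u == s:
            c = cap[(s, v)]
            cap[(s, v)] = 0
            cap[(v, s)] = cap.get((v, s), 0) + c
            excess[v] += c
            excess[s] -= c
    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        u = active.pop()
        while excess[u] > 0:
            pushed = False
            for v in range(n):
                c = cap.get((u, v), 0)
                # push along an admissible edge (downhill by exactly 1)
                if c > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], c)
                    cap[(u, v)] -= d
                    cap[(v, u)] = cap.get((v, u), 0) + d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and v not in active:
                        active.append(v)
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:
                # relabel: lift u just above its lowest residual neighbor
                height[u] = 1 + min(height[v] for v in range(n)
                                    if cap.get((u, v), 0) > 0)
    return excess[t]
```

On the four-node graph s=0, t=3 with capacities (0,1)=3, (0,2)=2, (1,2)=1, (1,3)=2, (2,3)=3, this returns a flow value of 5.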
A Maximum-Entropy-Inspired Parser
1999
"... We present a new parser for parsing down to Penn treebank style parse trees that achieves 90.1% average precision/recall for sentences of length 40 and less, and 89.5% for sentences of length 100 and less when trained and tested on the previously established [5,9,10,15,17] "standard" se ..."
Cited by 971 (19 self)
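Parser precision/recall of this kind is computed by comparing labeled constituent brackets against the treebank trees. A simplified evalb-style sketch, assuming each constituent is a unique (label, start, end) span (the real scorer handles duplicate brackets and other details):

```python
def bracket_prf(gold, pred):
    """Labeled-bracket precision and recall for one sentence.
    gold, pred: iterables of (label, start, end) constituent spans."""
    gold, pred = set(gold), set(pred)
    correct = len(gold & pred)          # brackets matching exactly
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    return p, r
```

For example, if the predicted tree gets 2 of 3 gold brackets right and proposes 3 brackets total, both precision and recall are 2/3.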
Maximum entropy Markov models for information extraction and segmentation
2000
"... Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word ..."
Cited by 561 (18 self)
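The central move is replacing the HMM's multinomial emission with a maximum-entropy (log-linear) distribution over the next state, conditioned on overlapping features of the observation. A toy sketch of that conditional only; the feature names and weights below are made up for illustration, not the paper's model:

```python
import math

def features(obs):
    """Overlapping, non-independent binary features of one observation."""
    return {
        "is_capitalized": obs[:1].isupper(),
        "ends_in_ing": obs.endswith("ing"),
        "is_short": len(obs) <= 3,
    }

def transition_probs(weights, state, obs):
    """P(next_state | state, obs) via a log-linear model:
    score each next state by summing its weights over the active
    features, then normalize with a softmax."""
    feats = features(obs)
    scores = {}
    for nxt, w in weights[state].items():
        scores[nxt] = sum(w.get(f, 0.0) for f, on in feats.items() if on)
    z = sum(math.exp(s) for s in scores.values())
    return {nxt: math.exp(s) / z for nxt, s in scores.items()}
```

Because the normalization is per source state and observation, arbitrary overlapping features can be used without modeling their joint distribution, which is the advantage over the HMM emission model.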
A Simple, Fast, and Accurate Algorithm to Estimate Large Phylogenies by Maximum Likelihood
2003
"... The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximum-likelihood principle, which clearly satisfies these requirements. The ..."
"... of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance ..."
Cited by 2182 (27 self)
Maximum Likelihood Linear Transformations for HMM-Based Speech Recognition
COMPUTER SPEECH AND LANGUAGE, 1998
"... This paper examines the application of linear transformations for speaker and environmental adaptation in an HMM-based speech recognition system. In particular, transformations that are trained in a maximum likelihood sense on adaptation data are investigated. Other than in the form of a simple bias ..."
Cited by 570 (68 self)
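One common transform in this family adapts Gaussian mean vectors with an affine map, mu_hat = A*mu + b, where (A, b) are estimated from adaptation data by maximum likelihood. A sketch of just the application step (the ML estimation of A and b is the substantive part of the paper and is omitted here; plain lists stand in for vectors):

```python
def adapt_mean(A, b, mu):
    """Apply an affine transform to a Gaussian mean: mu_hat = A @ mu + b.
    A: list of rows, b: bias vector, mu: original mean vector."""
    return [sum(a_ij * m_j for a_ij, m_j in zip(row, mu)) + b_i
            for row, b_i in zip(A, b)]
```

Sharing one (A, b) across many Gaussians is what lets a small amount of adaptation data update a large model.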
Boosting the margin: A new explanation for the effectiveness of voting methods
IN PROCEEDINGS INTERNATIONAL CONFERENCE ON MACHINE LEARNING, 1997
"... One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show ..."
Cited by 897 (52 self)
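The margin definition quoted above is short to state in code. A minimal sketch, normalizing by the total number of votes so the margin lies in [-1, 1] (names are illustrative):

```python
def voting_margin(votes, correct):
    """Margin of one example under a voting classifier:
    (votes for the correct label minus the maximum votes received
    by any incorrect label), normalized by the total vote count.
    Positive margin means the example is classified correctly."""
    total = sum(votes.values())
    wrong = max((v for lbl, v in votes.items() if lbl != correct), default=0)
    return (votes.get(correct, 0) - wrong) / total
```

An example with votes {'a': 7, 'b': 2, 'c': 1} has margin 0.5 if its true label is 'a', and margin -0.5 if its true label is 'b'.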
New empirical relationships among magnitude, rupture length, rupture width, rupture area, and surface displacement
1994
"... Source parameters for historical earthquakes worldwide are compiled to develop a series of empirical relationships among moment magnitude (M), surface rupture length, subsurface rupture length, down-dip rupture width, rupture area, and maximum and average displacement per event. The resultin ..."
Cited by 541 (0 self)
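Relationships of this kind are regressions of magnitude on the logarithm of a rupture parameter. A sketch of the functional form only; the coefficient values in the usage line are placeholders for illustration, not the paper's published fits (which differ by slip type and parameter):

```python
import math

def magnitude_from_rupture_length(a, b, srl_km):
    """Empirical regression of the form M = a + b * log10(SRL),
    where SRL is surface rupture length in km and (a, b) are
    coefficients fit to the compiled earthquake catalog."""
    return a + b * math.log10(srl_km)
```

With placeholder coefficients a = 5.0 and b = 1.0, a 100 km rupture maps to M = 7.0.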
Theoretical improvements in algorithmic efficiency for network flow problems
1972
"... This paper presents new algorithms for the maximum flow problem, the Hitchcock transportation problem, and the general minimum-cost flow problem. Upper bounds on ... the numbers of steps in these algorithms are derived, and are shown to compare favorably with upper bounds on the numbers of steps req ..."
Cited by 560 (0 self)
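A key idea associated with this line of work is that always augmenting along a shortest path (found by BFS) bounds the number of augmentations polynomially. A minimal sketch of that shortest-augmenting-path max-flow method; the dict-of-dicts graph encoding is my own choice, and every node (including the sink) needs an entry:

```python
from collections import deque

def max_flow(cap, s, t):
    """Shortest-augmenting-path max flow: repeatedly find a
    shortest s-t path by BFS in the residual graph and push the
    bottleneck flow along it. cap: dict-of-dicts residual
    capacities, modified in place."""
    flow = 0
    while True:
        # BFS for a shortest s-t path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow          # no augmenting path remains
        # bottleneck capacity along the discovered path
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        # augment: reduce forward edges, grow reverse residual edges
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v].setdefault(u, 0)
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck
```

On a graph with capacities s→a=3, s→b=2, a→b=1, a→t=2, b→t=3, this returns a flow value of 5.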
Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms
2002
"... We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modific ..."
Cited by 660 (13 self)
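The training loop the abstract describes is compact: decode each training sentence with Viterbi under the current weights, and when the prediction disagrees with the gold tags, add the gold sequence's feature counts and subtract the predicted sequence's. A toy sketch with an illustrative feature set of word/tag pairs and tag bigrams (not the paper's features):

```python
def feats(words, tags):
    """Global feature counts for a tagged sentence: word/tag
    emissions plus tag bigrams."""
    phi = {}
    prev = "<s>"
    for w, t in zip(words, tags):
        for f in ((w, t), (prev, t)):
            phi[f] = phi.get(f, 0) + 1
        prev = t
    return phi

def viterbi(words, tagset, w):
    """Highest-scoring tag sequence under the current weights."""
    pi = {"<s>": (0.0, [])}   # best (score, path) ending in each tag
    for word in words:
        nxt = {}
        for t in sorted(tagset):
            nxt[t] = max(
                (s + w.get((word, t), 0.0) + w.get((prev, t), 0.0),
                 path + [t])
                for prev, (s, path) in pi.items()
            )
        pi = nxt
    return max(pi.values())[1]

def train(data, tagset, epochs=5):
    """Perceptron-style additive updates driven by Viterbi decoding."""
    w = {}
    for _ in range(epochs):
        for words, gold in data:
            pred = viterbi(words, tagset, w)
            if pred != gold:
                for f, c in feats(words, gold).items():
                    w[f] = w.get(f, 0.0) + c
                for f, c in feats(words, pred).items():
                    w[f] = w.get(f, 0.0) - c
    return w
```

On a separable toy corpus the updates stop once every training sentence decodes to its gold tags.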
Sticky Information versus Sticky Prices: a Proposal to Replace the New Keynesian Phillips Curve
2002
"... This paper examines a model of dynamic price adjustment based on the assumption that information disseminates slowly throughout the population. Compared with the commonly used sticky-price model, this sticky-information model displays three related properties that are more consistent with accepted views about the effects of monetary policy. First, disinflations are always contractionary (although announced disinflations are less contractionary than surprise ones). Second, monetary policy shocks have their maximum impact on inflation with a substantial delay. Third, the change in inflation ..."
Cited by 489 (25 self)