Results 1–10 of 110
A Simple, Fast, and Accurate Algorithm to Estimate Large Phylogenies by Maximum Likelihood
, 2003
Abstract

Cited by 2182 (27 self)
The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximum-likelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance of distance-based and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page:
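The abstract's central idea, greedy hill climbing from a starting solution until no candidate move improves the objective, can be sketched generically. This is an illustrative sketch only, not PhyML's implementation: `neighbors` and `score` are placeholder callables standing in for the topology/branch-length moves and the log-likelihood.

```python
def hill_climb(initial, neighbors, score, max_iters=1000):
    """Greedy hill climbing: repeatedly move to the best-scoring
    neighbor, stopping at a local optimum or after max_iters rounds."""
    current, current_score = initial, score(initial)
    for _ in range(max_iters):
        scored = [(score(c), c) for c in neighbors(current)]
        best_score, best = max(scored, key=lambda t: t[0])
        if best_score <= current_score:
            return current  # no neighbor improves the score: local optimum
        current, current_score = best, best_score
    return current

# Toy objective: maximize -(x - 3)^2 over integers, moving by +/-1.
print(hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2))  # prints 3
```

In PhyML's setting the neighborhood would consist of candidate rearrangements with re-optimized branch lengths, which is why few iterations suffice; the sketch only shows the control flow.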
NeighborNet: An agglomerative method for the construction of phylogenetic networks
, 2003
Naive Bayesian classifier for rapid assignment of rRNA sequences into the new bacterial taxonomy.
 Appl. Environ. Microbiol.
, 2007
Fast and accurate phylogeny reconstruction algorithms based on the minimum-evolution principle
, 2002
Reconstructing Phylogenies from Gene-Content and Gene-Order Data
 Mathematics of Evolution and Phylogeny, Olivier Gascuel (ed.)
The Omp85 family of proteins is essential for outer membrane biogenesis in mitochondria and bacteria
 J. Cell
, 2004
Exploring among-site rate variation models in a maximum likelihood framework using empirical data: effects of model assumptions on estimates of topology, branch lengths, and bootstrap support
 Syst. Biol.
, 2001
Abstract

Cited by 36 (4 self)
Abstract.—We have investigated the effects of different among-site rate variation models on the estimation of substitution model parameters, branch lengths, topology, and bootstrap proportions under minimum evolution (ME) and maximum likelihood (ML). Specifically, we examined equal rates, invariable sites, gamma-distributed rates, and site-specific rates (SSR) models, using mitochondrial DNA sequence data from three protein-coding genes and one tRNA gene from species of the New Zealand cicada genus Maoricicada. Estimates of topology were relatively insensitive to the substitution model used; however, estimates of bootstrap support, branch lengths, and R-matrices (the underlying relative substitution rate matrix) were strongly influenced by the assumptions of the substitution model. We identified one situation where ME and ML tree building became inaccurate when implemented with an inappropriate among-site rate variation model. Despite the fact that SSR models often have a better fit to the data than do invariable sites and gamma rates models, SSR models have some serious weaknesses. First, SSR rate parameters are not comparable across data sets, unlike the proportion of invariable sites or the alpha shape parameter of the gamma distribution. Second, the extreme among-site rate variation within codon positions is problematic for SSR models, which explicitly assume rate homogeneity within each rate class. Third, the SSR models appear to give severe underestimates of ...
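The gamma-distributed rates model compared in this abstract assigns each site a relative rate drawn from a gamma distribution with mean 1, where the shape parameter alpha controls how unequal the rates are. A minimal sketch of drawing such rates (the function name and the use of Python's standard-library sampler are illustrative, not from the paper):

```python
import random

def site_rates(alpha, n_sites, seed=0):
    """Draw per-site relative rates from Gamma(shape=alpha, scale=1/alpha).

    The scale 1/alpha makes the mean rate exactly 1, so rates rescale
    branch lengths without changing their expectation; small alpha gives
    strong among-site rate variation, large alpha approaches equal rates.
    """
    rng = random.Random(seed)
    return [rng.gammavariate(alpha, 1.0 / alpha) for _ in range(n_sites)]
```

Likelihood programs typically discretize this distribution into a few equal-probability rate categories rather than sampling per site, but the mean-1 parameterization is the same.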
Reconstructing trees from subtree weights
 Applied Mathematics Letters
, 2004
Abstract

Cited by 33 (6 self)
The tree-metric theorem provides a necessary and sufficient condition for a dissimilarity matrix to be a tree metric, and has served as the foundation for numerous distance-based reconstruction methods in phylogenetics. Our main result is an extension of the tree-metric theorem to more general dissimilarity maps. In particular, we show that a tree with n leaves is reconstructible from the weights of the m-leaf subtrees provided that n ≥ 2m − 1. This result can be applied towards the design of more accurate tree reconstruction methods that are based on estimates of the weights of subtrees rather than just pairwise distances.
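The tree-metric theorem referenced here is the classical four-point condition: a dissimilarity matrix is a tree metric exactly when, for every quartet of taxa, the two largest of the three pairwise sums agree. A small checker makes this concrete (illustrative code, not from the paper):

```python
from itertools import combinations

def is_tree_metric(d, tol=1e-9):
    """Four-point condition: for every quartet {i, j, k, l}, the two
    largest of d(i,j)+d(k,l), d(i,k)+d(j,l), d(i,l)+d(j,k) must be equal."""
    n = len(d)
    for i, j, k, l in combinations(range(n), 4):
        sums = sorted((d[i][j] + d[k][l],
                       d[i][k] + d[j][l],
                       d[i][l] + d[j][k]))
        if abs(sums[2] - sums[1]) > tol:
            return False
    return True
```

This is the pairwise (m = 2) case; the paper's contribution is to generalize the input from pairwise distances to weights of m-leaf subtrees.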
On the uniqueness of the selection criterion in neighbor-joining
 Journal of Classification
Abstract

Cited by 30 (1 self)
The Neighbor-Joining (NJ) method of Saitou and Nei is the most widely used distance-based method in phylogenetic analysis. Central to the method is the selection criterion, the formula used to choose which pair of objects to amalgamate next. Here we analyze the NJ selection criterion using an axiomatic approach. We show that any selection criterion that is linear, permutation equivariant, statistically consistent and based solely on distance data will give the same trees as those created by NJ.
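The selection criterion this abstract analyzes is NJ's Q-criterion: at each agglomeration step, join the pair (i, j) minimizing Q(i, j) = (n - 2) * d(i, j) - r(i) - r(j), where r(i) is the sum of distances from i to all other taxa. A minimal sketch of the pair-selection step (illustrative; a full NJ implementation also recomputes the matrix after each join):

```python
def nj_pick_pair(d):
    """Return the pair (i, j) minimizing the Neighbor-Joining Q-criterion
    Q(i, j) = (n - 2) * d[i][j] - r[i] - r[j], where r[i] is the total
    distance from taxon i to all other taxa."""
    n = len(d)
    r = [sum(row) for row in d]
    best, best_q = None, float("inf")
    for i in range(n):
        for j in range(i + 1, n):
            q = (n - 2) * d[i][j] - r[i] - r[j]
            if q < best_q:
                best, best_q = (i, j), q
    return best
```

On an additive matrix this criterion provably selects a cherry of the underlying tree, which is the consistency property the axiomatic analysis builds on.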
Pandit: an evolution-centric database of protein and associated nucleotide domains with inferred trees. Nucleic Acids Res 34: D327–D331
, 2006