Results 1 - 10 of 30,773
A Simple, Fast, and Accurate Algorithm to Estimate Large Phylogenies by Maximum Likelihood
2003. Cited by 2182 (27 self).
The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitate fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximum-likelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. The algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Because topology and branch lengths are adjusted simultaneously, only a few iterations are needed to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of existing maximum-likelihood programs and much higher than that of distance-based and parsimony approaches. The reduction in computing time is dramatic compared with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page.
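As a generic illustration of the strategy this abstract describes (start from an initial solution, then repeatedly apply the best-scoring local move until no move improves the score), here is a minimal hill-climbing sketch. The `neighbors` and `score` callables are illustrative stand-ins for PHYML's topology/branch-length moves and likelihood computation, not its actual interface:

```python
def hill_climb(initial, neighbors, score):
    """Greedy ascent: repeatedly move to the best-scoring neighbor,
    stopping at a local optimum."""
    current = initial
    current_score = score(current)
    while True:
        best = max(neighbors(current), key=score, default=None)
        if best is None or score(best) <= current_score:
            return current, current_score  # no neighbor improves the score
        current, current_score = best, score(best)

# Toy usage: climb toward the maximum of -(x - 3)**2 over the integers.
best, value = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```

The tree-search version differs only in that `neighbors` enumerates candidate topologies (e.g. nearest-neighbor interchanges) and `score` evaluates the likelihood.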
Accurate Estimates Without Calibration?
Cited by 4 (4 self).
Most process models calibrate their internal settings using local data. Collecting these data is expensive, tedious, and often incomplete. Is it possible to make accurate process decisions without historical data? Variability in model output arises from (a) uncertainty in model i…
Accurate Estimation of the Cost of Spatial Selections
In ICDE '00, Proceedings of the 16th International Conference on Data Engineering, 2000. Cited by 30 (2 self).
Optimizing queries that involve operations on spatial data requires estimating the selectivity and cost of these operations. In this paper, we focus on estimating the cost of spatial selections, or window queries, where the query windows and data objects are general polygons. […] Capturing these attributes makes this type of histogram useful for accurate estimation, as we experimentally demonstrate.
Answering the Skeptics: Yes, Standard Volatility Models Do Provide Accurate Forecasts
Cited by 561 (45 self).
Volatility permeates modern financial theories and decision-making processes. As such, accurate measures and good forecasts of future volatility are critical for the implementation and evaluation of asset and derivative pricing theories, as well as trading and hedging strategies. In response to this, …
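The abstract does not name a specific model, but the "standard volatility models" at issue in this literature are typically of the GARCH family. As a hedged illustration only (the parameter values below are arbitrary, not from the paper), a GARCH(1,1) one-step variance forecast is just a recursion over observed returns:

```python
def garch11_forecast(returns, omega, alpha, beta, sigma2_init):
    """One-step-ahead conditional variance under GARCH(1,1):
    sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t]."""
    sigma2 = sigma2_init
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return sigma2

# With a single zero return, the next variance is omega + beta * sigma2_init.
forecast = garch11_forecast([0.0], omega=0.1, alpha=0.1, beta=0.8, sigma2_init=1.0)
```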
Unscented Filtering and Nonlinear Estimation
Proceedings of the IEEE, 2004. Cited by 566 (5 self).
The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that it is difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the …
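The alternative this paper surveys is built on the unscented transform: propagate a small set of deterministically chosen sigma points through the nonlinearity instead of linearizing it. A minimal scalar version, using the standard scaled sigma-point formulas (the default tuning parameters here are common choices, not necessarily the paper's):

```python
import math

def unscented_transform_1d(m, P, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate a scalar Gaussian (mean m, variance P) through a
    nonlinear function f via the scaled unscented transform."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * P)
    sigmas = [m, m + spread, m - spread]            # 2n + 1 sigma points
    wm = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * 2
    wc = [wm[0] + (1 - alpha ** 2 + beta)] + wm[1:]  # covariance weights
    ys = [f(s) for s in sigmas]
    mean = sum(w * y for w, y in zip(wm, ys))
    var = sum(w * (y - mean) ** 2 for w, y in zip(wc, ys))
    return mean, var
```

For a linear map the transform is exact: pushing N(0, 1) through f(x) = 2x + 1 returns mean 1 and variance 4, with no Jacobian ever computed.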
Accurate Estimation of the Cost of Spatial Selections
Optimizing queries that involve operations on spatial data requires estimating the selectivity and cost of these operations. In this paper, we focus on estimating the cost of spatial selections, or window queries, where the query windows and data objects are general polygons. […] Capturing these attributes makes this type of histogram useful for accurate estimation, as we experimentally demonstrate. We also investigate sampling-based estimation approaches. Sampling can yield better selectivity estimates than histograms for polygon data, but at the high cost of performing exact geometry comparisons.
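As a toy version of histogram-based selectivity estimation for window queries: a plain equi-width grid over object locations, with the estimate formed by summing bucket counts weighted by each cell's overlap with the query window. This is illustrative only; the paper's histograms capture richer attributes of polygon data, and all names here are invented:

```python
class GridHistogram:
    """Equi-width 2-D grid histogram over object centroids."""

    def __init__(self, width, height, nx, ny):
        self.cw, self.ch = width / nx, height / ny   # cell dimensions
        self.nx, self.ny = nx, ny
        self.counts = [[0] * ny for _ in range(nx)]

    def add(self, x, y):
        i = min(int(x / self.cw), self.nx - 1)
        j = min(int(y / self.ch), self.ny - 1)
        self.counts[i][j] += 1

    def estimate(self, x0, y0, x1, y1):
        """Estimated number of objects in the query window, assuming
        objects are spread uniformly within each cell."""
        total = 0.0
        for i in range(self.nx):
            for j in range(self.ny):
                ox = max(0.0, min(x1, (i + 1) * self.cw) - max(x0, i * self.cw))
                oy = max(0.0, min(y1, (j + 1) * self.ch) - max(y0, j * self.ch))
                total += self.counts[i][j] * (ox * oy) / (self.cw * self.ch)
        return total
```

The uniformity-within-cell assumption is exactly what richer histograms (and the sampling approaches the abstract mentions) try to improve on.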
Privacy-Preserving Data Mining
2000. Cited by 844 (3 self).
A fruitful direction for future data mining research will be the development of techniques that incorporate privacy concerns. Specifically, we address the following question: since the primary task in data mining is the development of models about aggregated data, can we develop accurate models with… […] and the distribution of data values is also very different from the original distribution. While it is not possible to accurately estimate original values in individual data records, we propose a novel reconstruction procedure to accurately estimate the distribution of original data values. By using …
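The abstract does not reproduce the paper's reconstruction procedure, so here is a simpler stand-in for the same idea: aggregate distributional quantities survive randomization even though individual records do not. With zero-mean additive noise, the mean passes through unchanged and the variances add, so aggregate moments can be recovered by subtraction (all names illustrative):

```python
import random
import statistics

def perturb(values, noise_sd, rng):
    """Each record is released only as value + independent Gaussian noise."""
    return [v + rng.gauss(0.0, noise_sd) for v in values]

def reconstruct_moments(perturbed, noise_sd):
    """Recover aggregate mean/variance of the originals: the noise has
    mean 0, so the mean passes through, and independent variances add,
    so Var(original) ~= Var(perturbed) - noise_sd**2."""
    mean = statistics.fmean(perturbed)
    var = max(statistics.pvariance(perturbed) - noise_sd ** 2, 0.0)
    return mean, var
```

No individual original value can be read off a perturbed record, yet with enough records the population mean and variance come back accurately.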
Boosting and Differential Privacy
2010. Cited by 648 (14 self).
Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved privacy-preserving synopses of an input database. These are data structures that yield, for a given set Q of queries over an input database, reasonably accurate estimates of the resp…
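The synopses discussed here build on basic differentially private query answering. As background only (the paper's boosting construction for whole query sets is not shown), the standard Laplace mechanism for a single counting query can be sketched as follows:

```python
import math
import random

def private_count(database, predicate, epsilon, rng):
    """Answer a counting query with epsilon-differential privacy.
    A count has sensitivity 1, so adding Laplace(0, 1/epsilon) noise
    suffices; the noise is drawn by inverse-CDF sampling."""
    true_count = sum(1 for row in database if predicate(row))
    u = rng.random() - 0.5                 # uniform on [-0.5, 0.5)
    noise = -math.copysign(1.0 / epsilon, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Each answer is unbiased with noise scale 1/epsilon; answering many queries this way degrades either privacy or accuracy, which is the gap the paper's boosted synopses target.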
An Accurate Estimate of Mathieu's Series
Int. Math. Forum. Cited by 2 (1 self).
We dedicate this article to Leonhard Euler, the father of the Euler-Maclaurin formula, on the occasion of the three-hundredth anniversary of his birthday, April 15, 1707. Using Hermite's, i.e. the Euler-Maclaurin, summation formula of order four, new approximations to Mathieu's series S(x) = ∑_{k=1}^∞ 2k/(k² + x²)² are obtained, which are more accurate than the approximations presented recently in the literature.
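The series itself is cheap to evaluate by direct partial summation, which gives a numerical baseline against which closed-form approximations like the paper's (not reproduced here) can be checked:

```python
def mathieu_partial_sum(x, terms=100_000):
    """Direct partial sum of Mathieu's series
    S(x) = sum_{k=1..inf} 2k / (k**2 + x**2)**2."""
    return sum(2.0 * k / (k * k + x * x) ** 2 for k in range(1, terms + 1))

# The classical bound (originally Mathieu's conjecture) is S(x) < 1/x**2;
# e.g. S(1) is roughly 0.794, safely below 1.
s1 = mathieu_partial_sum(1.0)
```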
Fast and Accurate Estimation of RFID Tags
Radio frequency identification (RFID) systems have been widely deployed for applications such as object tracking, 3-D positioning, supply chain management, inventory control, and access control. This paper concerns the fundamental problem of estimating RFID tag population size, which …
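The abstract does not state the paper's estimator, but a classic baseline for this problem is the framed-ALOHA zero-slot estimator: tags reply in randomly chosen slots of a frame, and the count of empty slots is inverted to estimate the population. A sketch under those assumptions:

```python
import math
import random

def estimate_tags(frame_size, empty_slots):
    """If each of n tags replies in one of f slots chosen uniformly at
    random, E[fraction of empty slots] = (1 - 1/f)**n. Inverting this
    for an observed empty-slot count z gives the estimate below
    (requires z > 0, i.e. a frame not fully occupied)."""
    return math.log(empty_slots / frame_size) / math.log(1.0 - 1.0 / frame_size)

def simulate_frame(n_tags, frame_size, rng):
    """One slotted frame: return the number of slots no tag picked."""
    slots = [0] * frame_size
    for _ in range(n_tags):
        slots[rng.randrange(frame_size)] += 1
    return slots.count(0)
```

Averaging the estimate over repeated frames drives down its variance, which is the accuracy/time trade-off such estimation schemes tune.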