Results 1–10 of 7,259
The Maximum Asymptotic Bias of Outlier Identifiers
, 1997
Cited by 2 (1 self)
"... In their paper, Davies and Gather (1993) formalized the task of outlier identification, considering also certain performance criteria for outlier identifiers. One of those criteria, the maximum asymptotic bias, is carried over here to multivariate outlier identifiers. We show how this term depends ..."
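For readers new to the terminology, an "outlier identifier" is simply a rule that flags observations as outlying. The paper above treats the multivariate case; the sketch below is only a minimal univariate stand-in (a Hampel-type rule; the function name, the cutoff c = 3, and the Gaussian consistency factor are illustrative assumptions, not taken from the paper):

```python
import statistics

def hampel_identifier(xs, c=3.0):
    """Flag observations lying more than c robust-scale units from
    the median. The scale is the MAD rescaled by 1.4826 so that it
    estimates the standard deviation under Gaussian data. (Name,
    cutoff, and constant are illustrative assumptions.)"""
    med = statistics.median(xs)
    mad = statistics.median(abs(x - med) for x in xs)
    scale = 1.4826 * mad
    return [x for x in xs if abs(x - med) > c * scale]

# The single gross value 8.0 is flagged; the rest are not.
print(hampel_identifier([1.0, 1.2, 0.9, 1.1, 1.0, 8.0]))  # [8.0]
```

Because both location (median) and scale (MAD) are robust, the gross value barely influences the cutoff, which is the point of using such rules rather than mean/standard-deviation thresholds.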
The Maximum Asymptotic Bias of Outlier Identifiers
 A service of ZBW (EconStor, www.econstor.eu)
"... Standard terms of use: Documents on EconStor may be saved and copied for your personal and scholarly purposes. You are not allowed to copy documents for public or commercial purposes, to exhibit the documents publicly, or to make them publicly available ..."
Claudia Becker and Ursula Gather. Abstract: In their paper, Davies and Gather (1993) formalized the task of outlier identification, considering also certain performance criteria for outlier identifiers. One of those criteria ...
Maximum Likelihood Linear Transformations for HMM-Based Speech Recognition
 Computer Speech and Language
, 1998
Cited by 570 (68 self)
"... This paper examines the application of linear transformations for speaker and environmental adaptation in an HMM-based speech recognition system. In particular, transformations that are trained in a maximum likelihood sense on adaptation data are investigated. Other than in the form of a simple bias ..."
Statistical Analysis of Cointegrated Vectors
 Journal of Economic Dynamics and Control
, 1988
Cited by 2749 (12 self)
"... We consider a nonstationary vector autoregressive process which is integrated of order 1, and generated by i.i.d. Gaussian errors. We then derive the maximum likelihood estimator of the space of cointegration vectors and the likelihood ratio test of the hypothesis that it has a given number of dimensions ..."
Fit indices in covariance structure modeling: Sensitivity to underparameterized model misspecification
 Psychological Methods
, 1998
Cited by 543 (0 self)
"... This study evaluated the sensitivity of maximum likelihood (ML), generalized least squares (GLS), and asymptotic distribution-free (ADF) based fit indices to model misspecification, under conditions that varied sample size and distribution. The effect of violating assumptions of asymptotic robustness ..."
Normalization for cDNA microarray data: a robust composite method addressing single and multiple slide systematic variation
, 2002
Cited by 718 (9 self)
"... There are many sources of systematic variation in cDNA microarray experiments which affect the measured gene expression levels (e.g. differences in labeling efficiency between the two fluorescent dyes). The term normalization refers to the process of removing such variation. A constant adjustment is often used to force the distribution of the intensity log ratios to have a median of zero for each slide. However, such global normalization approaches are not adequate in situations where dye biases can depend on spot overall intensity and/or spatial location within the array. This article proposes ..."
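The global normalization step this abstract says is "often used" (and argues is inadequate for intensity- or location-dependent dye bias) can be sketched as a per-slide median shift of the log ratios; the function name and array layout below are illustrative assumptions:

```python
import numpy as np

def global_median_normalize(log_ratios):
    """Global normalization: shift each slide's intensity log ratios
    so that their median is zero. Rows are slides, columns are spots
    (array layout and function name are illustrative assumptions)."""
    log_ratios = np.asarray(log_ratios, dtype=float)
    return log_ratios - np.median(log_ratios, axis=1, keepdims=True)

# Two slides whose log ratios carry different constant dye offsets.
slides = np.array([[0.4, 0.6, 0.5, 0.7],
                   [-0.2, 0.0, -0.1, 0.1]])
normalized = global_median_normalize(slides)
# Each row of `normalized` now has median 0.0.
```

A single constant per slide is exactly what the article's composite (intensity- and location-dependent) method goes beyond.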
Conditional random fields: Probabilistic models for segmenting and labeling sequence data
, 2001
Cited by 3485 (85 self)
"... We present conditional random fields, a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation ..."
Boosting the margin: A new explanation for the effectiveness of voting methods
 In Proceedings, International Conference on Machine Learning
, 1997
Cited by 897 (52 self)
"... One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show ..."
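The margin definition quoted above can be computed directly for unweighted votes; the helper name and the plain vote-list input are illustrative assumptions:

```python
from collections import Counter

def voting_margin(votes, true_label):
    """Margin of one example: votes for the correct label minus the
    maximum votes received by any incorrect label (unweighted votes;
    helper name is an illustrative assumption)."""
    counts = Counter(votes)
    correct = counts.get(true_label, 0)
    wrong = max((c for lbl, c in counts.items() if lbl != true_label),
                default=0)
    return correct - wrong

# Seven base classifiers vote on an example whose true label is "a":
# 4 votes for "a", at most 2 for any wrong label, so the margin is 2.
print(voting_margin(["a", "a", "b", "a", "c", "b", "a"], "a"))  # 2
```

A positive margin means the example is classified correctly by the vote; the paper's explanation concerns how boosting pushes the whole distribution of these margins upward even after training error hits zero.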
MEGA: Molecular Evolutionary Genetics Analysis software for microcomputers
 CABIOS
, 1994
Cited by 505 (10 self)
"... A computer program package called MEGA has been developed for estimating evolutionary distances, reconstructing phylogenetic trees and computing basic statistical quantities from molecular data. It is written in C++ and is intended to be used on IBM and IBM-compatible personal computers. In this program, various methods for estimating evolutionary distances from nucleotide and amino acid sequence data, three different methods of phylogenetic inference (UPGMA, neighbor-joining and maximum parsimony) and two statistical tests of topological differences are included. For the maximum parsimony method ..."
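As one concrete instance of the distance estimation MEGA performs, the classic Jukes-Cantor correction converts the observed proportion p of differing sites between two aligned sequences into an evolutionary distance d = −(3/4) ln(1 − 4p/3); this sketch is illustrative and is not MEGA's implementation:

```python
import math

def jukes_cantor_distance(seq1, seq2):
    """Jukes-Cantor distance between two aligned nucleotide sequences:
    d = -(3/4) * ln(1 - 4p/3), where p is the observed proportion of
    differing sites. Illustrative code, not MEGA's implementation."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

print(jukes_cantor_distance("ACGT", "AGGT"))  # 1 of 4 sites differ, ~0.304
```

The correction exceeds the raw proportion p because it accounts for multiple substitutions at the same site.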
Lag length selection and the construction of unit root tests with good size and power
 Econometrica
, 2001
Cited by 558 (14 self)
"... It is widely known that when there are errors with a moving-average root close to −1, a high order augmented autoregression is necessary for unit root tests to have good size, but that information criteria such as the AIC and the BIC tend to select a truncation lag (k) that is very small. We consider a class of Modified Information Criteria (MIC) with a penalty factor that is sample dependent. It takes into account the fact that the bias in the sum of the autoregressive coefficients is highly dependent on k and adapts to the type of deterministic components present. We use a local asymptotic ..."