Results 1–10 of 505
Fault Injection and Dependability Evaluation of Fault-Tolerant Systems
 IEEE Trans. Computers
, 1993
Cited by 72 (13 self)
This paper describes a dependability evaluation method based on fault injection that establishes the link between the experimental evaluation of the fault tolerance process and the fault occurrence process. The main characteristics of a fault injection test sequence aimed at evaluating the coverage of the fault tolerance process are presented. Emphasis is given to the derivation of experimental measures. The various steps by which the fault occurrence and fault tolerance processes are combined to evaluate dependability measures are identified, and their interactions are analyzed. The method is illustrated by an application to the dependability evaluation of the distributed fault-tolerant architecture of the ESPRIT Delta-4 project. Index Terms: coverage, dependability modeling and evaluation, experimental evaluation, fault injection, fault tolerance, Markov chains.
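At its simplest, the coverage measured by such a fault injection test sequence is a binomial proportion over the injected faults. The sketch below is illustrative only: the function name and the normal-approximation interval are our own choices, not the paper's (more elaborate) derivation.

```python
import math

def coverage_estimate(n_covered, n_injected, z=1.96):
    """Point estimate and normal-approximation confidence interval for
    fault-tolerance coverage from a fault injection experiment.

    Assumes independent, identically distributed injections; illustrative
    sketch, not the paper's exact estimator."""
    c_hat = n_covered / n_injected
    half = z * math.sqrt(c_hat * (1.0 - c_hat) / n_injected)
    return c_hat, max(0.0, c_hat - half), min(1.0, c_hat + half)
```

For example, 950 tolerated faults out of 1000 injections gives a point estimate of 0.95 with an interval of roughly [0.936, 0.964].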
Uncovering the temporal dynamics of diffusion networks
 in Proc. of the 28th Int. Conf. on Machine Learning (ICML’11)
, 2011
Cited by 53 (10 self)
Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected – but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data – observed infection times of nodes – we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data.
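As a toy illustration of estimating per-edge transmission rates from cascades (observed infection times): under an exponential delay model, the maximum-likelihood rate for a pair is the number of observed delays divided by their sum. This is a drastic simplification of the paper's convex program, which also accounts for nodes that were exposed but never infected; all names below are ours.

```python
from collections import defaultdict

def pairwise_rates(cascades):
    """Crude per-edge transmission-rate estimate from cascade data.

    `cascades` is a list of {node: infection_time} dicts. For every ordered
    pair (i, j) with t_i < t_j in some cascade we collect the delay t_j - t_i
    and apply the exponential MLE: rate = count / sum(delays).
    Simplified stand-in for the full convex inference, not the paper's method."""
    delays = defaultdict(list)
    for times in cascades:
        for i, ti in times.items():
            for j, tj in times.items():
                if ti < tj:
                    delays[(i, j)].append(tj - ti)
    return {edge: len(d) / sum(d) for edge, d in delays.items()}
```

With two cascades where `b` follows `a` after delays of 1 and 3, the estimated rate for edge `(a, b)` is 2/4 = 0.5, and no reverse edge `(b, a)` is inferred.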
Following the leader: A study of individual analysts’ earnings forecasts
 Journal of Financial Economics
, 2001
Cited by 47 (2 self)
This paper develops and tests procedures for identifying lead analysts based on the timeliness of analyst forecast revisions, the trading levels associated with these revisions, and forecast accuracy. Our framework provides an objective assessment of analyst quality that differs from the standard approach that uses survey evidence to rate analysts. Using a sample of equity analysts, we find that lead analysts identified by our procedures have more price impact than follower analysts. Evidence also is presented that suggests analysts use recent stock price trends to help them modify forecast revisions, regardless of whether the analyst is a leader or a follower. Finally, we find that our ranking procedures based on timeliness, trading volume, and accuracy are consistent. That is, if analysts are selected as …

Full-service brokerage firms provide many services to their customers in addition to the execution of trades. One such service is the provision of information concerning the investment value of equity securities. This information is typically produced by analysts with expertise in tracking certain industries and selected firms within those industries.
Generalized confidence intervals
 J Am Stat Assoc
, 1993
Cited by 45 (0 self)
The definition of a confidence interval is generalized so that problems such as constructing exact confidence regions for the difference in two normal means can be tackled without the assumption of equal variances. Under certain conditions, the extended definition is shown to preserve a repeated sampling property that a practitioner expects from exact confidence intervals. The proposed procedure is also applied to the problem of constructing confidence intervals for the difference in two exponential means and for variance components in mixed models. A repeated sampling property of generalized p values is also given. With this characterization one can carry out fixed level tests of parameters of continuous distributions on the basis of generalized p values. Finally, Pratt's paradox is revisited, and a procedure that resolves the paradox is given.
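The headline application — an interval for the difference of two normal means without the equal-variance assumption (the Behrens-Fisher problem) — can be approximated by Monte Carlo over a generalized pivotal quantity. The sketch below follows that idea with R = (x̄₁ − x̄₂) − (T₁·s₁/√n₁ − T₂·s₂/√n₂) for independent t-distributed T₁, T₂; it is an illustration of the concept, not Weerahandi's exact derivation.

```python
import numpy as np

def generalized_ci_diff_means(x, y, alpha=0.05, n_draws=100_000, seed=0):
    """Monte Carlo generalized confidence interval for mu_x - mu_y
    without assuming equal variances (Behrens-Fisher setting).

    Draws samples of a generalized pivotal quantity and returns its
    empirical alpha/2 and 1-alpha/2 quantiles. Sketch only."""
    rng = np.random.default_rng(seed)
    n1, n2 = len(x), len(y)
    xbar, ybar = np.mean(x), np.mean(y)
    s1, s2 = np.std(x, ddof=1), np.std(y, ddof=1)
    # Independent t draws play the role of the studentized means.
    t1 = rng.standard_t(n1 - 1, n_draws)
    t2 = rng.standard_t(n2 - 1, n_draws)
    r = (xbar - ybar) - (t1 * s1 / np.sqrt(n1) - t2 * s2 / np.sqrt(n2))
    return np.quantile(r, alpha / 2), np.quantile(r, 1 - alpha / 2)
```

The returned interval is centered near the observed mean difference and widens with unequal, poorly estimated variances, which is exactly where the classical equal-variance interval misbehaves.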
A common protocol for agent-based social simulation
 Journal of Artificial Societies and Social Simulation
, 2006
Cited by 38 (4 self)
Traditional (i.e., analytical) modelling practices in the social sciences rely on a very well established, although implicit, methodological protocol, both with respect to the way models are presented and to the kinds of analysis that are performed. Unfortunately, computer-simulated models often lack such a reference to an accepted methodological standard. This is one of the main reasons for the scepticism among mainstream social scientists that results in low acceptance of papers with agent-based methodology in the top journals. We identify some methodological pitfalls that, in our view, are common in papers employing agent-based simulations, and propose appropriate solutions. We discuss each issue with reference to a general characterization of dynamic micro models, which encompasses both analytical and simulation models. Along the way, we also clarify some confusing terminology. We then propose a three-stage process that could lead to the establishment of methodological standards in social and economic simulations.
Tutorial in Biostatistics: Multivariable prognostic models
 Statistics in Medicine
, 1996
Cited by 28 (0 self)
Multivariable regression models are powerful tools that are used frequently in studies of clinical outcomes. These models can use a mixture of categorical and continuous variables and can handle partially observed (censored) responses. However, uncritical application of modelling techniques can result in models that poorly fit the dataset at hand, or, even more likely, inaccurately predict outcomes on new subjects. One must know how to measure qualities of a model's fit in order to avoid poorly fitted or overfitted models. Measurement of predictive accuracy can be difficult for survival time data in the presence of censoring. We discuss an easily interpretable index of predictive discrimination as well as methods for assessing calibration of predicted survival probabilities. Both types of predictive accuracy should be unbiasedly validated using bootstrapping or cross-validation, before using predictions in a new data series. We discuss some of the hazards of poorly fitted and overfitted regression models and present one modelling strategy that avoids many of the problems discussed. The methods described are applicable to all regression models, but are particularly needed for binary, ordinal, and time-to-event outcomes. Methods are illustrated with a survival analysis in prostate cancer using Cox regression.
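The "easily interpretable index of predictive discrimination" discussed in this tutorial is commonly computed as Harrell's concordance index (c-index): the fraction of comparable pairs in which the subject with higher predicted risk fails earlier. A minimal stdlib-only re-implementation for right-censored data; the function name and tie-handling are our choices.

```python
def c_index(risk, time, event):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when time[i] < time[j] and subject i's
    failure was observed (event[i] == 1). Ties in predicted risk count
    as half-concordant. Illustrative sketch, not the paper's code."""
    concordant = comparable = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable
```

A c-index of 1.0 means risk predictions perfectly rank failure times; 0.5 is chance-level discrimination.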
Signal detection theory and generalized linear models
 Psychol. Methods
, 1998
Cited by 23 (8 self)
Generalized linear models are a general class of regression-like models for continuous and categorical response variables. Signal detection models can be formulated as a subclass of generalized linear models, and the result is a rich class of signal detection models based on different underlying distributions. An example is a signal detection model based on the extreme value distribution. The extreme value model is shown to yield unit slope receiver operating characteristic (ROC) curves for several classic data sets that are commonly given as examples of normal or logistic ROC curves with slopes that differ from unity. The result is an additive model with a simple interpretation in terms of a shift in the location of an underlying distribution. The models can also be extended in several ways, such as to recognize response dependencies, to include random coefficients, or to allow for more general underlying probability distributions. Signal detection theory (SDT) arose as an application of statistical decision theory to engineering problems, in particular, the detection of a signal embedded in noise. The relevance of the theory to psychophysical studies of detection, recognition, and discrimination was recognized early on by Tanner and Swets.
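The correspondence the paper describes — equal-variance Gaussian SDT as a binomial GLM with a probit link — means d′ is simply the difference of probit-transformed hit and false-alarm rates. A stdlib-only sketch (the +0.5 correction for extreme rates is a common convention, not necessarily the paper's; the probit is computed by bisection on `math.erf`):

```python
import math

def _probit(p):
    """Inverse standard-normal CDF via bisection on math.erf (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT indices via the probit transform:
    d' = z(hit rate) - z(false-alarm rate); the criterion c is the
    negative average of the two z-scores. Adds 0.5 to each count to
    avoid infinite z for rates of exactly 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    zh, zf = _probit(h), _probit(f)
    return zh - zf, -0.5 * (zh + zf)
```

For a symmetric observer with 75% hits and 25% false alarms, d′ is about 1.33 and the criterion is essentially zero (no response bias).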
Consumer store choice dynamics: An analysis of the competitive market structure for grocery stores
 Journal of Retailing
, 2000
Cited by 21 (1 self)
This study aims at formulating and testing a model of store choice dynamics to measure the effects of consumer characteristics on consumer grocery store choice and switching behavior. A dynamic hazard model is estimated to obtain an understanding of the components influencing consumer purchase timing, store choice, and the competitive dynamics of retail competition. The hazard model is combined with an internal market structure analysis using a generalized factor-analytic structure. We estimate a latent structure that is both store and store-chain specific. This allows us to study store competition at the store-chain level, such as price-based competition (an EDLP versus a Hi-Lo pricing strategy), as well as competition specific to a store due to differences in location. Competition in the retailing industry has reached dramatic dimensions. New retailing formats appear in the market increasingly rapidly. A focus on a particular aspect of the retail mix (e.g., service or price) means that retailers can compete on highly diverse dimensions. Scrambled merchandising and similar developments have implied that particular retailers are now competing against retailers they did not compete with in the past.
Visual Tracking: an Experimental Survey
Cited by 14 (1 self)
There is a large variety of trackers that have been proposed in the literature during the last two decades, with mixed success. Object tracking in realistic scenarios is a difficult problem, and it therefore remains one of the most active areas of research in computer vision. A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers has typically been evaluated on fewer than ten videos, or on special-purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering the above aspects. We selected a set of nineteen trackers to include a wide variety of algorithms often cited in the literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively by survival curves, Kaplan-Meier statistics, and Grubbs testing. We find that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers.
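The survival-curve evaluation used in the survey can be reproduced with a plain Kaplan-Meier estimator over per-video tracking durations, treating a tracking failure as the event and an uninterrupted run to the end of the video as censoring. A minimal sketch under that assumed data encoding (1 = tracker failed, 0 = video ended first):

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate.

    `durations` are times-to-failure (or to censoring) and `observed`
    flags whether the failure was seen (1) or censored (0). Returns
    (event_times, S(t)) with one survival value per distinct event time."""
    pairs = sorted(zip(durations, observed))
    n_at_risk = len(pairs)
    s, times, surv = 1.0, [], []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = at_t = 0
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            at_t += 1
            i += 1
        if deaths:  # censoring-only times do not produce a step
            s *= 1.0 - deaths / n_at_risk
            times.append(t)
            surv.append(s)
        n_at_risk -= at_t
    return times, surv
```

A tracker whose curve stays higher for longer survives more of each video before failing, which is the comparison the survey draws across its nineteen trackers.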
The effect of education on mortality among older Taiwanese and its pathways
 Journal of Gerontology: Social Sciences
, 1998
Cited by 14 (1 self)
This series of research reports deals with the status of the elderly in several Asian countries. It presents research that is being