Results 1–10 of 544
Fault Injection and Dependability Evaluation of Fault-Tolerant Systems
IEEE Trans. Computers, 1993
Cited by 73 (13 self)
This paper describes a dependability evaluation method based on fault injection that establishes the link between the experimental evaluation of the fault tolerance process and the fault occurrence process. The main characteristics of a fault injection test sequence aimed at evaluating the coverage of the fault tolerance process are presented. Emphasis is given to the derivation of experimental measures. The various steps by which the fault occurrence and fault tolerance processes are combined to evaluate dependability measures are identified and their interactions are analyzed. The method is illustrated by an application to the dependability evaluation of the distributed fault-tolerant architecture of the ESPRIT Delta-4 Project.
Index Terms: Coverage, dependability modeling and evaluation, experimental evaluation, fault injection, fault tolerance, Markov chains.
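The experimental measure at the heart of such a test sequence — coverage of the fault tolerance process — can be sketched briefly. The counts below are illustrative, not from the Delta-4 experiments, and the paper itself goes further by folding coverage into Markov dependability models:

```python
import math

def coverage_ci(n_tolerated, n_injected, z=1.96):
    """Point estimate and normal-approximation 95% confidence interval
    for fault-tolerance coverage: the probability that an injected
    fault is properly handled (illustrative helper, hypothetical name)."""
    c = n_tolerated / n_injected
    half = z * math.sqrt(c * (1.0 - c) / n_injected)
    return c, max(0.0, c - half), min(1.0, c + half)

# 960 of 1000 injected faults tolerated by the fault tolerance process.
c, lo, hi = coverage_ci(960, 1000)
print(round(c, 3), round(lo, 3), round(hi, 3))
```

The interval width shrinks with the number of injections, which is why a test sequence's size matters as much as its point estimate.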
Uncovering the temporal dynamics of diffusion networks
In Proc. of the 28th Int. Conf. on Machine Learning (ICML'11), 2011
Cited by 56 (11 self)
Time plays an essential role in the diffusion of information, influence and disease over networks. In many cases we only observe when a node copies information, makes a decision or becomes infected – but the connectivity, transmission rates between nodes and transmission sources are unknown. Inferring the underlying dynamics is of outstanding interest since it enables forecasting, influencing and retarding infections, broadly construed. To this end, we model diffusion processes as discrete networks of continuous temporal processes occurring at different rates. Given cascade data – observed infection times of nodes – we infer the edges of the global diffusion network and estimate the transmission rates of each edge that best explain the observed data. The optimization problem is convex. The model naturally (without heuristics) imposes sparse solutions and requires no parameter tuning. The problem decouples into a collection of independent smaller problems, thus scaling easily to networks on the order of hundreds of thousands of nodes. Experiments on real and synthetic data show that our algorithm both recovers the edges of diffusion networks and accurately estimates their transmission rates from cascade data.
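The per-node decoupling can be illustrated on a toy example. Assuming an exponential transmission model (one of the models the paper supports), the negative log-likelihood of one target node's infection times is convex in its incoming rates, so projected gradient descent suffices. The cascades, rates, and learning rate below are made up for illustration:

```python
# Toy per-node convex problem: infer rates alpha[j -> target] from
# cascades (node -> observed infection time) over a window [0, T].
cascades = [
    {0: 0.0, 2: 0.5, 1: 3.0},
    {0: 0.0, 2: 0.4},
    {1: 0.0},                 # target never infected: censored exposure
    {0: 0.0, 2: 0.6, 1: 2.5},
]
T = 10.0
target, parents = 2, [0, 1]

alpha = [1.0, 1.0]            # candidate transmission rates
lr = 0.05
for _ in range(2000):         # projected gradient descent on the NLL
    grad = [0.0, 0.0]
    for c in cascades:
        if target in c:
            t = c[target]
            active = [k for k, j in enumerate(parents)
                      if j in c and c[j] < t]
            haz = sum(alpha[k] for k in active)
            for k in active:
                # survival term plus hazard-at-infection term
                grad[k] += (t - c[parents[k]]) - 1.0 / haz
        else:
            for k, j in enumerate(parents):
                if j in c:    # exposed the target but never infected it
                    grad[k] += T - c[j]
    alpha = [max(a - lr * g, 1e-6) for a, g in zip(alpha, grad)]

print([round(a, 2) for a in alpha])
```

The rate on the genuine edge 0→2 converges to 2.0 while the spurious edge 1→2 is pruned to the floor — the sparsity arises from the likelihood itself, with no extra penalty, as the abstract claims.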
Generalized confidence intervals
J. Am. Stat. Assoc., 1993
Cited by 51 (0 self)
The definition of a confidence interval is generalized so that problems such as constructing exact confidence regions for the difference in two normal means can be tackled without the assumption of equal variances. Under certain conditions, the extended definition is shown to preserve a repeated sampling property that a practitioner expects from exact confidence intervals. The proposed procedure is also applied to the problem of constructing confidence intervals for the difference in two exponential means and for variance components in mixed models. A repeated sampling property of generalized p values is also given. With this characterization one can carry out fixed level tests of parameters of continuous distributions on the basis of generalized p values. Finally, Pratt's paradox is revisited, and a procedure that resolves the paradox is given.
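The Behrens-Fisher case mentioned in the abstract (difference of two normal means, unequal variances) admits a short Monte Carlo sketch of the generalized-pivot idea. The data and helper name are illustrative and the construction is hedged to the standard generalized-pivotal-quantity form, not the paper's notation:

```python
import numpy as np

def generalized_ci(x, y, n_draws=100_000, level=0.95, seed=0):
    """Monte Carlo generalized confidence interval for mu_x - mu_y
    with unequal, unknown variances (Behrens-Fisher setting)."""
    rng = np.random.default_rng(seed)
    nx, ny = len(x), len(y)
    xbar, ybar = np.mean(x), np.mean(y)
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    # Generalized pivot: substitute each mean by xbar - t * s/sqrt(n),
    # with t an independent Student-t draw (df = n - 1).
    tx = rng.standard_t(nx - 1, n_draws)
    ty = rng.standard_t(ny - 1, n_draws)
    pivot = (xbar - tx * sx / np.sqrt(nx)) - (ybar - ty * sy / np.sqrt(ny))
    a = (1 - level) / 2
    return np.quantile(pivot, [a, 1 - a])

x = [9.8, 10.2, 10.1, 9.9, 10.0, 10.3]
y = [5.1, 4.9, 5.2, 4.8, 5.0]
lo, hi = generalized_ci(x, y)
```

The empirical quantiles of the pivot deliver an exact-in-distribution interval without pooling the two sample variances.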
Following the leader: A study of individual analysts’ earnings forecasts.
Journal of Financial Economics, 2001
Cited by 49 (2 self)
This paper develops and tests procedures for ranking the performance of security analysts based on the timeliness of their earnings forecasts, the abnormal trading volume associated with these forecasts, and forecast accuracy. Our framework provides an objective assessment of analyst quality that differs from the standard approach, which uses survey evidence to rate analysts. We find that lead analysts identified by our measure of forecast timeliness have a greater impact on stock prices than follower analysts. Further, we find that performance rankings based on forecast timeliness are more informative than rankings based on abnormal trading volume and forecast accuracy. We also present evidence that analysts' forecast revisions are correlated with recent stock price performance, suggesting that security analysts use publicly …
JEL classification: G11; G14; G24; J44
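One way to make a timeliness ranking concrete is a leader-follower style ratio. The sketch below is an illustration with made-up forecast dates and a hedged definition, not the paper's exact specification:

```python
def leader_follower_ratio(t, other_times, k=2):
    """Timeliness of a forecast released on day t, in the spirit of a
    leader-follower ratio: cumulative days by which the k nearest
    prior forecasts of other analysts precede it, divided by the
    cumulative days by which the k nearest subsequent forecasts
    follow it. Leaders are imitated quickly, so their ratio is high.
    (Illustrative definition only.)"""
    before = sorted(t - s for s in other_times if s < t)[:k]
    after = sorted(s - t for s in other_times if s > t)[:k]
    return sum(before) / sum(after)

# Forecast on day 10; other analysts forecast on days 1, 3, 11, 12, 20.
lfr = leader_follower_ratio(10, [1, 3, 11, 12, 20])
print(round(lfr, 2))
```

A ratio well above 1 flags an analyst whose forecasts are quickly followed by revisions from peers — the "lead analyst" pattern the abstract describes.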
A Common Protocol for Agent-Based Social Simulation
2005
Cited by 39 (4 self)
Traditional (i.e. analytical) modelling practices in the social sciences rely on a very well established, although implicit, methodological protocol, both with respect to the way models are presented and to the kinds of analysis that are performed. Unfortunately, computer-simulated models often lack such a reference to an accepted methodological standard. This is one of the main reasons for the scepticism among mainstream social scientists that results in low acceptance of papers with agent-based methodology in the top journals. We identify some methodological pitfalls that, in our view, are common in papers employing agent-based simulations, and propose appropriate solutions. We discuss each issue with reference to a general characterization of dynamic micro models, which encompasses both analytical and simulation models. Along the way, we also clarify some confusing terminology. We then propose a three-stage process that could lead to the establishment of methodological standards in social and economic simulations.
Keywords: Agent-based, simulations, methodology, calibration, validation
Acknowledgements: A Lagrange fellowship by ISI Foundation is gratefully acknowledged by MR and MS.
Tutorial in Biostatistics: Multivariable prognostic models
Statistics in Medicine, 1996
Cited by 30 (0 self)
Multivariable regression models are powerful tools that are used frequently in studies of clinical outcomes. These models can use a mixture of categorical and continuous variables and can handle partially observed (censored) responses. However, uncritical application of modelling techniques can result in models that poorly fit the dataset at hand, or, even more likely, inaccurately predict outcomes on new subjects. One must know how to measure qualities of a model's fit in order to avoid poorly fitted or overfitted models. Measurement of predictive accuracy can be difficult for survival time data in the presence of censoring. We discuss an easily interpretable index of predictive discrimination as well as methods for assessing calibration of predicted survival probabilities. Both types of predictive accuracy should be unbiasedly validated using bootstrapping or cross-validation, before using predictions in a new data series. We discuss some of the hazards of poorly fitted and overfitted regression models and present one modelling strategy that avoids many of the problems discussed. The methods described are applicable to all regression models, but are particularly needed for binary, ordinal, and time-to-event outcomes. Methods are illustrated with a survival analysis in prostate cancer using Cox regression.
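The "easily interpretable index of predictive discrimination" for censored survival data is commonly computed as a concordance index; a minimal sketch (toy data, not from the tutorial) is:

```python
def concordance_index(times, events, risk):
    """Concordance (c) index: among usable pairs — those where the
    earlier follow-up time ends in an observed event, not censoring —
    the fraction in which the subject who failed earlier carries the
    higher predicted risk (ties in risk count one half)."""
    concordant = ties = usable = 0
    for i in range(len(times)):
        if not events[i]:
            continue                  # censored subjects cannot anchor a pair
        for j in range(len(times)):
            if times[i] < times[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / usable

# Four subjects: follow-up months, event indicator, model risk score.
c = concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.2, 0.4, 0.7])
print(c)
```

A value of 0.5 means no discrimination and 1.0 perfect ranking; validating it by bootstrap, as the abstract recommends, guards against the over-optimism of in-sample estimates.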
Competing risk hazard model of activity choice, timing, sequencing, and duration
Transportation Research Record, Transportation Research Board of the National Academies, 1995
Signal detection theory and generalized linear models
Psychol. Methods, 1998
Cited by 26 (10 self)
Generalized linear models are a general class of regression-like models for continuous and categorical response variables. Signal detection models can be formulated as a subclass of generalized linear models, and the result is a rich class of signal detection models based on different underlying distributions. An example is a signal detection model based on the extreme value distribution. The extreme value model is shown to yield unit slope receiver operating characteristic (ROC) curves for several classic data sets that are commonly given as examples of normal or logistic ROC curves with slopes that differ from unity. The result is an additive model with a simple interpretation in terms of a shift in the location of an underlying distribution. The models can also be extended in several ways, such as to recognize response dependencies, to include random coefficients, or to allow for more general underlying probability distributions. Signal detection theory (SDT) arose as an application of statistical decision theory to engineering problems, in particular, the detection of a signal embedded in noise. The relevance of the theory to psychophysical studies of detection, recognition, and discrimination was recognized early on by Tanner and Swets.
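The classical equal-variance Gaussian case can be sketched compactly; with a single operating point it coincides with a two-parameter probit fit, which is the sense in which SDT sits inside the GLM framework. The counts below are hypothetical:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, fas, crs):
    """Equal-variance Gaussian signal detection: sensitivity d' and
    criterion c from a 2x2 response table. A +0.5 count correction
    guards against hit/false-alarm proportions of exactly 0 or 1
    (a common convention, not mandated by the paper)."""
    z = NormalDist().inv_cdf          # probit link: inverse normal CDF
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (fas + 0.5) / (fas + crs + 1.0)
    return z(h) - z(f), -0.5 * (z(h) + z(f))

# 50 signal trials (45 hits), 50 noise trials (10 false alarms).
d, c = sdt_measures(45, 5, 10, 40)
print(round(d, 2), round(c, 2))
```

Swapping the inverse normal CDF for an inverse logistic or extreme-value link yields the alternative ROC shapes the abstract discusses.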
Reliability of Freeway Traffic Flow: A Stochastic Concept of Capacity
Proceedings of the 16th International Symposium on Transportation and Traffic Theory, 2005
Cited by 25 (1 self)
The paper introduces a new understanding of freeway capacity. Here capacity is understood as the traffic volume below which traffic still flows and above which the flow breaks down into stop-and-go or even standing traffic. It is easy to understand that a capacity in this sense is by no means a constant value. Empirical analysis of traffic flow patterns, counted at 5-minute intervals over several months and at many sites, clearly shows that this type of capacity is Weibull-distributed with a nearly constant shape parameter, which represents the variance. This was identified using the so-called Product Limit Method, which is based on the statistics of lifetime data analysis. It is demonstrated that this method is applicable to all types of freeways. The stochastic methodology allows for a derivation of a theoretical transformation between capacities identified for different interval durations. The technique can also be used to identify effects of different external conditions like speed limits or weather on the capacity of a freeway. The statistical distribution of capacity directly indicates the reliability of the freeway section under investigation. This distribution for one section is then transformed into statistical measures of reliability for larger parts of a network composed of sections of different capacity. Thus, the stochastic concept is also expanded into reliabilities of freeway networks. It is found that a freeway operates at the highest expected efficiency if it is only loaded to 90% of the conventionally estimated (constant-value) capacity. On the one hand, the paper quotes some real-world results from German freeways. On the other hand, the reliability-based analysis leads to a new sophisticated concept for highway traffic engineering.
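The Product Limit Method referred to here is the Kaplan-Meier estimator applied over volume rather than time: intervals that stay fluid are censored observations (capacity exceeds the observed volume), while intervals ending in breakdown are observed "failures". A minimal sketch with made-up counts:

```python
def product_limit(observations):
    """Product Limit (Kaplan-Meier) estimate of the capacity survival
    function S(q) = P(capacity > q). Each observation is
    (volume, breakdown): breakdown=True means flow broke down at that
    volume; False means the interval stayed fluid, so the observation
    is censored. Illustrative helper, not the paper's code."""
    obs = sorted(observations)
    n = len(obs)
    at_risk, surv, curve = n, 1.0, []
    i = 0
    while i < n:
        q = obs[i][0]
        j, d = i, 0
        while j < n and obs[j][0] == q:
            d += obs[j][1]            # breakdowns observed at volume q
            j += 1
        if d:
            surv *= 1 - d / at_risk
            curve.append((q, surv))
        at_risk -= j - i              # all intervals at q leave the risk set
        i = j
    return curve

# Five 5-minute intervals: (volume in veh/h, breakdown observed?).
curve = product_limit([(1800, False), (1900, True), (1900, False),
                       (2000, True), (2100, True)])
print(curve)
```

Fitting a Weibull distribution to the resulting step function is then a standard lifetime-data exercise, which is how the paper arrives at its capacity distributions.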
Consumer store choice dynamics: An analysis of the competitive market structure for grocery stores
Journal of Retailing, 2000
Cited by 23 (1 self)
This study aims at formulating and testing a model of store choice dynamics to measure the effects of consumer characteristics on consumer grocery store choice and switching behavior. A dynamic hazard model is estimated to obtain an understanding of the components influencing consumer purchase timing, store choice, and the competitive dynamics of retail competition. The hazard model is combined with an internal market structure analysis using a generalized factor analytic structure. We estimate a latent structure that is both store and store chain specific. This allows us to study store competition at the store chain level, such as price-based competition (an EDLP versus a Hi-Lo pricing strategy), as well as competition specific to a store due to differences in location. Competition in the retailing industry has reached dramatic dimensions. New retailing formats appear in the market increasingly rapidly. A focus on a particular aspect of the retail mix (e.g., service or price) means that retailers can compete on highly diverse dimensions. Scrambled merchandising and similar developments have implied that particular retailers are now competing against retailers they did not compete with in the past.
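The coupling of purchase timing and store choice in a hazard model can be illustrated with the simplest competing-risks case, constant (exponential) hazards per store. The rates below are invented for illustration, not estimates from the paper, which uses a richer dynamic specification:

```python
import math

def competing_risk_summary(hazards, t):
    """Competing exponential risks for store choice timing: each store
    carries a constant hazard of triggering the household's next trip.
    Returns survival past t (no trip yet), the expected inter-purchase
    time, and each store's choice probability."""
    total = sum(hazards.values())
    survival = math.exp(-total * t)              # P(no trip by time t)
    choice = {s: lam / total for s, lam in hazards.items()}
    return survival, 1.0 / total, choice

# Hypothetical daily trip hazards: an EDLP chain, a Hi-Lo chain, a
# local store.
surv, mean_gap, choice = competing_risk_summary(
    {"EDLP": 0.10, "HiLo": 0.05, "Local": 0.05}, t=7.0)
```

In this baseline the choice probabilities are just the hazard shares; covariate- and time-varying hazards, as in the paper, let those shares shift with consumer characteristics and purchase history.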