RELIABILITY ENGINEERING: OLD PROBLEMS AND NEW CHALLENGES
Abstract

Cited by 35 (3 self)
The first recorded usage of the word reliability dates back to the 1800s, albeit in reference to a person rather than a technical system. Since then, the concept of reliability has become a pervasive attribute, worthy of both qualitative and quantitative connotations. In particular, the revolutionary social, cultural and technological changes that occurred from the 1800s to the 2000s created the need for a rational framework and a quantitative treatment of the reliability of engineered systems and plants, leading to the rise of reliability engineering as a scientific discipline. In this paper, some considerations are shared on a number of problems and challenges that researchers and practitioners in reliability engineering face when analyzing today's complex systems. The focus is on the contribution of reliability to system safety and on its role within system risk analysis.
Probabilities of judgments provided by unknown experts by using the imprecise Dirichlet model
 Risk, Decision and Policy, 9(4):391–400
, 2004
Abstract

Cited by 7 (5 self)
Most models for aggregating expert judgments assume that some information characterizing the experts is available. This information may be incorporated into hierarchical (second-order) uncertainty models. Very often, however, we know nothing about the experts, or their quality is difficult to evaluate. In this case the beliefs assigned to experts may span the whole interval [0,1], and the resulting assessments become noninformative. Moreover, attempts to assign weights or beliefs to the experts themselves have had limited success, because expert behavior may differ across circumstances. This paper therefore proposes to assess the expert judgments rather than the experts themselves, and studies how to assign interval probabilities to expert judgments by using the multinomial model.
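The interval probabilities of the abstract above can be illustrated with Walley's imprecise Dirichlet model (IDM), a standard tool in this setting. A minimal sketch (the function name and the example counts are illustrative, not from the paper): given counts n_j over N observations and hyperparameter s, each category probability gets the interval [n_j/(N+s), (n_j+s)/(N+s)].

```python
def idm_intervals(counts, s=2.0):
    """Interval probabilities for each category under the
    imprecise Dirichlet model (IDM) with hyperparameter s."""
    N = sum(counts)
    return [(n / (N + s), (n + s) / (N + s)) for n in counts]

# Hypothetical example: 10 expert judgments sorted into 3 answer
# categories, with observed counts 6, 3 and 1.
bounds = idm_intervals([6, 3, 1], s=2.0)
```

Note how a category never observed still gets a nonzero upper probability s/(N+s), which is exactly what makes the model cautious rather than noninformative.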
Imprecise reliability. In:
 International Encyclopedia of Statistical Science, M. Lovric
, 2011
On zero-failure testing for Bayesian high-reliability demonstration
 Proc. IMechE, Part O: J. Risk and Reliability
Abstract

Cited by 3 (3 self)
Recent results are summarized on the testing required of a system in order to demonstrate a level of reliability with regard to the system's use in a process after testing. It is explicitly assumed that testing reveals zero failures, which is realistic in situations where high reliability is required and where failures during testing lead to redesign of the system. Several related aspects are discussed, including the choice of prior distribution, what to do if failures do occur during testing, and possibilities for taking dependencies between tasks into account. Throughout, it is emphasized that, for reliability demonstration, one should not rely too heavily on mathematical assumptions that cannot be justified by the data; this imposes particular restrictions owing to the nature of data from zero-failure tests. All of these topics raise interesting questions for future research.
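The flavour of Bayesian zero-failure demonstration can be sketched with the textbook Beta-Binomial case (illustrative only; the paper's own criteria and choices of prior differ): with a uniform Beta(1,1) prior on the failure probability, n zero-failure tests give a Beta(1, 1+n) posterior, so the posterior probability that the failure probability is at most p* is 1 - (1-p*)^(n+1), which yields a closed form for the required number of tests.

```python
import math

def zero_failure_tests(p_star, confidence):
    """Smallest number of zero-failure tests n such that, with a
    uniform Beta(1, 1) prior on the failure probability theta, the
    posterior Beta(1, 1 + n) satisfies P(theta <= p_star) >= confidence.
    Uses the closed form P(theta <= p) = 1 - (1 - p)**(n + 1)."""
    n = math.ceil(math.log(1 - confidence) / math.log(1 - p_star)) - 1
    return max(n, 0)

# e.g. demonstrate failure probability <= 0.01 with 95% posterior probability
n = zero_failure_tests(0.01, 0.95)
```

The steep growth of n as p* shrinks is one reason the abstract warns against leaning on unjustifiable assumptions: the answer is dominated by the prior and the target, not by rich data.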
Nonparametric predictive precedence testing for two groups
 Journal of Statistical Theory and Practice
, 2009
Abstract

Cited by 2 (0 self)
Nonparametric predictive inference (NPI) is a statistical approach based on few assumptions about probability distributions, with inferences based on data. NPI assumes exchangeability of random quantities, both those related to observed data and future observations, and uncertainty is quantified via lower and upper probabilities. In this paper, lifetimes of units from groups X and Y are compared, based on observed lifetimes from an experiment that may have ended before all units had failed. We present upper and lower probabilities for the event that the lifetime of a future unit from X is less than the lifetime of a future unit from Y, and we compare this approach with traditional precedence testing.
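For fully observed (uncensored) data, NPI-style bounds for this comparison can be sketched by counting pairs, with the denominator (n_x+1)(n_y+1) reflecting that each group contributes one future observation. This is an illustrative simplification: the paper's actual bounds also handle experiments terminated before all units fail (right-censoring), which this sketch does not.

```python
def npi_comparison(x, y):
    """Lower and upper probabilities for the event that a future
    lifetime from group X is less than a future lifetime from group Y,
    based on pairwise counts of complete (uncensored) data.
    A simplified sketch in the spirit of NPI, not the paper's
    censoring-capable bounds."""
    nx, ny = len(x), len(y)
    pairs = sum(1 for xi in x for yj in y if xi < yj)
    denom = (nx + 1) * (ny + 1)
    # the upper probability also credits the future observations themselves
    return pairs / denom, (pairs + nx + ny + 1) / denom
```

Even when every observed x lies below every observed y, the lower probability stays strictly below 1: the imprecision encodes that the two future units are not yet observed.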
Nonparametric predictive inference for voting systems
 In: Proceedings MMR 2007
, 2007
Abstract

Cited by 1 (0 self)
We present upper and lower probabilities for the reliability of voting systems, also known as k-out-of-m systems, which include series and parallel systems. We restrict attention to systems with identical components. These interval probabilities are based on the nonparametric predictive inferential (NPI) approach for Bernoulli data presented by Coolen (1998). In this approach, it is assumed that test data are available on the components, and that the future components to be used in the system are exchangeable with these. This approach fits into the general framework of Imprecise Reliability, which has received increasing attention in recent years (Utkin and Coolen 2007). An early overview of NPI in Reliability was presented by Coolen, Coolen-Schrijner and Yan (2002); in recent years NPI has been further developed and presented as an attractive statistical theory for situations where one aims at inference for future observables on the basis of available data, adding only rather limited additional assumptions (Coolen 2006a). A particularly attractive feature of NPI in Reliability, with lower and upper probabilities, is that data containing zero failures can be handled naturally, which will also be illustrated in this paper for the reliability of voting systems.
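The underlying system structure can be illustrated with the familiar precise-probability reliability of a k-out-of-m system with independent, identical components (a background sketch only: the NPI approach of the paper replaces the single component probability p with data-driven lower and upper probabilities, and its exact bounds are not obtained by simply plugging those into this formula).

```python
from math import comb

def k_out_of_m_reliability(k, m, p):
    """Probability that at least k of m independent, identical
    components function, each with probability p: the classical
    k-out-of-m (voting) system reliability."""
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k, m + 1))
```

Setting k = m recovers a series system (p**m) and k = 1 a parallel system (1 - (1-p)**m), matching the abstract's remark that both are special cases of voting systems.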
Bayesian Zero-Failure Reliability Demonstration
, 2005
Abstract

Cited by 1 (1 self)
We study the required numbers of tasks to be tested for a technical system, including systems with built-in redundancy, in order to demonstrate its reliability with regard to its use in a process after testing, where the system has to function for different types of tasks, which we assume to be independent. We consider optimal numbers of tests as required for Bayesian reliability demonstration in terms of failure-free periods, which is suitable in the case of catastrophic failures, and in terms of the expected number of failures in a process after testing. We explicitly assume that testing reveals zero failures. For the process after testing, we consider both deterministic and random numbers of tasks. We also consider optimal numbers of tasks to be tested when aiming at minimal total expected costs, including the costs of testing and of failures in the process after testing. Cost and time constraints on testing are also included in the analysis. We consider such reliability demonstration for a single type of task, as well as for multiple types of tasks to be performed by one system. We also consider optimal Bayesian reliability demonstration testing in combination with flexibility in the system redundancy, where more components can be installed to reduce test effort. For systems with redundancy, we restrict attention to systems with exchangeable components, with testing only at the component level.

We use the Bayesian approach with the Binomial model and Beta prior distributions for the failure probabilities. We discuss the influence of the choice of prior distribution on the required zero-failure test numbers; these inferences are very sensitive to the choice of prior, which requires careful attention to the interpretation of noninformativeness of priors.
MODELLING UNCERTAINTY IN DECISION MAKING: FUZZY PERFORMANCE MEASURES AND FUZZY TARGET
Abstract
Decision making processes are based on performance measures that are compared with other performance measure values or with targets. Generally, a performance measure is only a number, which does not account for the uncertainty associated with obtaining it, such as measurement errors, the precision and accuracy of the measurement tools, and subjective judgment. In this work, the uncertainty associated with performance measures is quantified through fuzzy numbers. The decision criterion is also represented by a fuzzy number. An indicator that considers the uncertainties in both the performance measure and the decision criterion is developed. This indicator will help the decision maker, since it quantifies the risk associated with the decision.
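The fuzzy ingredients can be sketched with triangular fuzzy numbers (l, m, u), their alpha-cuts, and a standard possibility index for comparing a fuzzy performance measure against a fuzzy target. The index below is a textbook possibility measure used purely as an illustration; the paper develops its own indicator.

```python
def alpha_cut(tfn, alpha):
    """Alpha-cut interval of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + alpha * (m - l), u - alpha * (u - m))

def possibility_geq(perf, target):
    """Possibility that fuzzy performance >= fuzzy target, both
    triangular (l, m, u): the height at which the right slope of
    perf meets the left slope of target (standard possibility index,
    illustrative only)."""
    lp, mp, up = perf
    lt, mt, ut = target
    if mp >= mt:
        return 1.0          # modal performance already meets the modal target
    if up <= lt:
        return 0.0          # supports do not overlap at all
    return (up - lt) / ((up - mp) + (mt - lt))
```

Values strictly between 0 and 1 are where such an indicator earns its keep: a crisp comparison of single numbers would force a yes/no answer and hide the decision risk.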
On optimality criteria for age replacement
Abstract
Age replacement is a well-known topic in the Operational Research and Reliability literature. Traditionally, the probability distribution of a unit's failure time is assumed to be known, and the cost criterion is derived via the renewal reward theorem, which implicitly assumes that the same preventive replacement strategy will be used over a very long period of time. As an alternative, one can use a one-cycle criterion, aiming at minimisation of costs per unit of time only over the period that one unit is in place. We discuss these two criteria, and we also consider possible alternatives. Recently, we have presented a nonparametric predictive approach to age replacement, which is based on rather minimal assumptions for the failure time distributions and provides full flexibility to the information from the process. We summarize the main conclusions from this research, in which we considered both the renewal criterion and the one-cycle criterion. We discuss further aspects related to age replacement, highlighting several interesting topics for future research. Key words: Age replacement, nonparametric predictive inference, optimality criteria
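The classical renewal reward criterion referred to above can be sketched numerically: replace preventively at age T at cost cp, or on failure at cost cf, giving long-run cost rate C(T) = (cp*S(T) + cf*F(T)) / integral_0^T S(t) dt, where S is the survival function. The Weibull failure-time model and the cost values below are illustrative assumptions, not from the paper (whose point is precisely to relax the known-distribution assumption).

```python
import math

def cost_rate(T, cp, cf, shape, scale, steps=2000):
    """Long-run cost per unit time of age replacement at age T under the
    renewal reward criterion, for a Weibull(shape, scale) failure time.
    The denominator (expected cycle length) is computed by the
    trapezoidal rule."""
    S = lambda t: math.exp(-((t / scale) ** shape))
    h = T / steps
    cycle = h * (0.5 * S(0.0) + sum(S(i * h) for i in range(1, steps)) + 0.5 * S(T))
    return (cp * S(T) + cf * (1.0 - S(T))) / cycle

# Grid search for the cost-minimising replacement age (illustrative values:
# preventive replacement 10x cheaper than a failure, wear-out shape 2).
Ts = [0.05 * i for i in range(1, 200)]
T_opt = min(Ts, key=lambda T: cost_rate(T, cp=1.0, cf=10.0, shape=2.0, scale=1.0))
```

An interior optimum exists here only because shape > 1 (increasing failure rate); with shape <= 1 preventive replacement never pays, which is why the choice of failure-time model matters so much for this criterion.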