### Case-Based Reasoning for Explaining Probabilistic Machine Learning

Abstract. This paper describes a generic framework for explaining the predictions of probabilistic machine learning algorithms using cases. The framework consists of two components: a similarity metric between cases that is defined relative to a probability model, and a novel case-based approach to justifying a probabilistic prediction by estimating the prediction error using case-based reasoning. As a basis for deriving similarity metrics, we define similarity in terms of the principle of interchangeability: two cases are considered similar or identical if the two probability distributions, derived by excluding either one or the other case from the case base, are identical. Lastly, we show the applicability of the proposed approach by deriving a metric for linear regression, and apply the proposed approach to explaining predictions of the energy performance of households.
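The interchangeability principle can be made concrete for linear regression: removing case i or case j from the case base yields two fitted models, and the cases are interchangeable to the extent that those models coincide. The sketch below is a hypothetical illustration of this idea (the function names and the use of the coefficient-vector norm as the comparison are our assumptions, not the paper's exact metric), using ordinary least squares via NumPy:

```python
import numpy as np

def loo_coefficients(X, y, exclude):
    """OLS coefficients fitted with one case excluded from the case base."""
    mask = np.ones(len(y), dtype=bool)
    mask[exclude] = False
    # Append a bias column so the fit includes an intercept.
    Xb = np.column_stack([X[mask], np.ones(mask.sum())])
    beta, *_ = np.linalg.lstsq(Xb, y[mask], rcond=None)
    return beta

def interchangeability_distance(X, y, i, j):
    """Distance between the models obtained by excluding case i vs. case j.

    Zero distance means the two cases are interchangeable: removing either
    one leaves the fitted model (and hence, for a fixed noise variance,
    its predictive distribution) unchanged.
    """
    return np.linalg.norm(loo_coefficients(X, y, i) - loo_coefficients(X, y, j))

# Toy case base: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(20, 1))
y = 2 * X[:, 0] + 1 + rng.normal(0, 0.1, size=20)

# A duplicated case is perfectly interchangeable with its original:
# excluding either copy leaves the same data, hence the same model.
X_dup = np.vstack([X, X[:1]])
y_dup = np.append(y, y[0])
print(interchangeability_distance(X_dup, y_dup, 0, 20))  # ~0
```

Turning this distance into a similarity score (e.g. via a decreasing function of the distance) is one plausible reading; the paper derives its metric formally from the underlying probability model.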

### Usages of Generalization in Case-based Reasoning


Abstract. The aim of this paper is to analyze how the generalizations built by a CBR method can be used as local approximations of a concept. From this point of view, these local approximations can take a role similar to the global approximations built by eager learning methods. Thus, we propose that local approximations can be interpreted as: 1) a symbolic similitude among a set of cases, 2) a partial domain model, or 3) an explanation of the system's classification. We illustrate these usages by solving the Predictive Toxicology task.