Results 1–10 of 196
The Complexity of Causality and Responsibility for Query Answers and non-Answers
Abstract

Cited by 42 (4 self)
An answer to a query has a well-defined lineage expression (alternatively called how-provenance) that explains how the answer was derived. Recent work has also shown how to compute the lineage of a non-answer to a query. However, the cause of an answer or non-answer is a more subtle notion and consists, in general, of only a fragment of the lineage. In this paper, we adapt Halpern, Pearl, and Chockler’s recent definitions of causality and responsibility to define the causes of answers and non-answers to queries, and their degree of responsibility. Responsibility captures the notion of degree of causality and serves to rank potentially many causes by their relative contributions to the effect. Then, we study the complexity of computing causes and responsibilities for conjunctive queries. It is known that computing causes is NP-complete in general. Our first main result shows that all causes for conjunctive queries can be computed by a relational query which may involve negation. Thus, causality can be computed in PTIME, and very efficiently so. Next, we study computing responsibility. Here, we prove that the complexity depends on the conjunctive query and demonstrate a dichotomy between PTIME and NP-complete cases. For the PTIME cases, we give a non-trivial algorithm, consisting of a reduction to the max-flow computation problem. Finally, we prove that, even when it is in PTIME, responsibility is complete for LOGSPACE, implying that, unlike causality, it cannot be computed by a relational query.
Reasoning With Cause And Effect
, 1999
Abstract

Cited by 39 (0 self)
This paper summarizes basic concepts and principles that I have found to be useful in dealing with causal reasoning. The paper is written as a companion to a lecture under the same title, to be presented at IJCAI-99, and is intended to supplement the lecture with technical details and pointers to more elaborate discussions in the literature. The ruling conception will be to treat causation as a computational schema devised to identify the invariant relationships in the environment, so as to facilitate reliable prediction of the effect of actions. This conception, as well as several of its satellite principles and tools, has been a guiding paradigm for several research communities in AI, most notably those connected with causal discovery, troubleshooting, planning under uncertainty, and modeling the behavior of physical systems. My hopes are to encourage a broader and more effective usage of causal modeling by explicating these common principles in simple and familiar mathematical form.
Intuitive theories of mind: a rational approach to false belief
 Proceedings of the Twenty-Eighth Annual Conference of the Cognitive Science Society. Mahwah, NJ: Erlbaum
, 2006
Abstract

Cited by 34 (11 self)
We propose a causal Bayesian model of false belief reasoning in children. This model realizes theory of mind as the rational use of intuitive theories and supports causal prediction, explanation, and theory revision. The model undergoes an experience-driven false belief transition. We investigate the relationship between prediction, explanation, and surprise; this is used to interpret an empirical study of children’s explanations in an extension of the false belief task. Our study includes the standard outcome, surprising to younger children, and a novel “Psychic Sally” condition that challenges older children with an unexpected outcome. In everyday life, humans constantly attribute unobservable mental states to one another, and use them to …
Approximate Lineage for Probabilistic Databases
Abstract

Cited by 33 (9 self)
In probabilistic databases, lineage is fundamental to both query processing and understanding the data. Current systems such as Trio or Mystiq use a complete approach in which the lineage for a tuple t is a Boolean formula which represents all derivations of t. In large databases lineage formulas can become huge: in one public database (the Gene Ontology) we often observed 10 MB of lineage (provenance) data for a single tuple. In this paper we propose to use approximate lineage, which is a much smaller formula keeping track of only the most important derivations, which the system can use to process queries and provide explanations. We discuss in detail two specific kinds of approximate lineage: (1) a conservative approximation called sufficient lineage that records the most important derivations for each tuple, and (2) polynomial lineage, which is more aggressive and can provide higher compression ratios, and which is based on Fourier approximations of Boolean expressions. In this paper we define approximate lineage formally, describe algorithms to compute approximate lineage, prove formally their error bounds, and validate our approach experimentally on a real data set.
Complexity Results for Structure-Based Causality
 Artificial Intelligence
, 2001
Abstract

Cited by 31 (6 self)
We analyze the computational complexity of causal relationships in Pearl's structural models, where we focus on causality between variables, event causality, and probabilistic causality. In particular, we analyze the complexity of the sophisticated notions of weak and actual causality by Halpern and Pearl. In the course of this, we also prove an open conjecture by Halpern and Pearl, and establish other semantic results. To our knowledge, no complexity aspects of causal relationships have been considered so far, and our results shed light on this issue.
Explaining Counterexamples Using Causality
Abstract

Cited by 28 (1 self)
When a model does not satisfy a given specification, a counterexample is produced by the model checker to demonstrate the failure. A user must then examine the counterexample trace, in order to visually identify the failure that it demonstrates. If the trace is long, or the specification is complex, finding the failure in the trace becomes a non-trivial task. In this paper, we address the problem of analyzing a counterexample trace and highlighting the failure that it demonstrates. Using the notion of causality, introduced by Halpern and Pearl, we formally define a set of causes for the failure of the specification on the given counterexample trace. These causes are marked as red dots and presented to the user as a visual explanation of the failure. We study the complexity of computing the exact set of causes, and provide a polynomial-time algorithm that approximates it. This algorithm is implemented as a feature in the IBM formal verification platform RuleBase PE, where these visual explanations are an integral part of every counterexample trace. Our approach is independent of the tool that produced the counterexample, and can be applied as a lightweight external layer to any model checking tool, or used to explain simulation traces.
Defaults and Normality in Causal Structures
Abstract

Cited by 21 (8 self)
A serious defect with the Halpern-Pearl (HP) definition of causality is repaired by combining a theory of causality with a theory of defaults. In addition, it is shown that (despite a claim to the contrary) a cause according to the HP definition need not be a single conjunct. A definition of causality motivated by Wright’s NESS test is shown to always hold for a single conjunct. Moreover, conditions that hold for all the examples considered by HP are given that guarantee that causality according to (this version of) the NESS test is equivalent to the HP definition.
Causality in Databases
, 2010
Abstract

Cited by 20 (4 self)
Provenance is often used to validate data, by verifying its origin and explaining its derivation. When searching for “causes” of tuples in the query results or in general observations, the analysis of lineage becomes an essential tool for providing such justifications. However, lineage can quickly grow very large, limiting its immediate use for providing intuitive explanations to the user. The formal notion of causality is a more refined concept that identifies causes for observations based on user-defined criteria, and that assigns to them gradual degrees of responsibility based on their respective contributions. In this paper, we initiate a discussion on causality in databases, give some simple definitions, and motivate this formalism through a number of example applications.
Clarifying the usage of structural models for commonsense causal reasoning
 In Proc. AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning
, 2003
Abstract

Cited by 18 (2 self)
Recently, Halpern and Pearl proposed a definition of actual cause within the framework of structural models. In this paper, we explicate some of the assumptions underlying their definition, and re-evaluate the effectiveness of their account. We also briefly contemplate the suitability of structural models as a language for expressing subtle notions of commonsense causation.