Results 1 - 10 of 2,826
Multiparty Communication Complexity
1989
"... A given Boolean function has its input distributed among many parties. The aim is to determine which parties to tMk to and what information to exchange with each of them in order to evaluate the function while minimizing the total communication. This paper shows that it is possible to obtain the Boo ..."
Abstract - Cited by 760 (22 self)
A given Boolean function has its input distributed among many parties. The aim is to determine which parties to talk to and what information to exchange with each of them in order to evaluate the function while minimizing the total communication. This paper shows that it is possible to obtain the Boolean answer deterministically with only a polynomial increase in communication with respect to the information lower bound given by the nondeterministic communication complexity of the function.
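Read as a formula, the abstract's claim says the deterministic cost is at most polynomial in the nondeterministic cost; using D(f) and N(f) as my own shorthand for those two quantities (not notation taken from the paper), this is roughly:

```latex
D(f) \le \mathrm{poly}\bigl(N(f)\bigr)
```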
How bad is selfish routing?
- Journal of the ACM, 2002
"... We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route t ..."
Abstract - Cited by 657 (27 self)
We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times—the total latency—is minimized. In many settings, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a “selfishly motivated” assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance. In this article, we quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and nondecreasing in the edge congestion. Here, the total …
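The 4/3 bound for linear latencies is tight on Pigou's two-link example; the short sketch below checks that instance numerically (the example and numbers are a standard illustration I am supplying, not text from this result page).

```python
# Pigou's example: one unit of traffic from s to t over two parallel edges,
# one with constant latency 1 and one with latency equal to its own flow x.
def total_latency(x_top: float) -> float:
    """Total travel time when x_top units take the constant-latency edge and
    the remaining 1 - x_top units take the edge with latency x."""
    x_bottom = 1.0 - x_top
    return x_top * 1.0 + x_bottom * x_bottom

# Selfish users all take the variable edge (its latency never exceeds 1), so the
# Nash flow puts everything on it; the optimum splits the traffic evenly.
nash = total_latency(0.0)      # 1.0
optimal = total_latency(0.5)   # 0.5 + 0.25 = 0.75
print(nash / optimal)          # 1.333... = 4/3, the worst case for linear latencies
```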
The DLV System for Knowledge Representation and Reasoning
- ACM Transactions on Computational Logic, 2002
"... Disjunctive Logic Programming (DLP) is an advanced formalism for knowledge representation and reasoning, which is very expressive in a precise mathematical sense: it allows to express every property of finite structures that is decidable in the complexity class ΣP 2 (NPNP). Thus, under widely believ ..."
Abstract - Cited by 456 (102 self)
Disjunctive Logic Programming (DLP) is an advanced formalism for knowledge representation and reasoning, which is very expressive in a precise mathematical sense: it allows one to express every property of finite structures that is decidable in the complexity class Σ^P_2 (NP^NP). Thus, under widely believed assumptions, DLP is strictly more expressive than normal (disjunction-free) logic programming, whose expressiveness is limited to properties decidable in NP. Importantly, apart from enlarging the class of applications which can be encoded in the language, disjunction often allows for representing problems of lower complexity in a simpler and more natural fashion. This paper presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to Δ^P_3-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of …
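To make the semantics behind such systems concrete, here is a brute-force sketch of answer-set checking for a tiny propositional disjunctive program via the Gelfond-Lifschitz reduct. The toy program and all names are my own illustration; this is not DLV code or its algorithm.

```python
from itertools import combinations

# A rule is (head, pos_body, neg_body): head is a set of atoms read as a
# disjunction; pos_body / neg_body are atoms occurring positively / under "not".
ATOMS = {"a", "b", "c"}
RULES = [
    ({"a", "b"}, set(), set()),   # a v b.
    ({"c"}, {"a"}, {"b"}),        # c :- a, not b.
]

def is_model(m, rules):
    """m satisfies a rule when, if its body holds, some head atom is in m."""
    return all((not (pos <= m and not (neg & m))) or (head & m)
               for head, pos, neg in rules)

def reduct(rules, m):
    """Gelfond-Lifschitz reduct: drop rules blocked by m, then drop 'not' parts."""
    return [(head, pos, set()) for head, pos, neg in rules if not (neg & m)]

def subsets(s):
    s = sorted(s)
    return (set(c) for r in range(len(s) + 1) for c in combinations(s, r))

def is_answer_set(m, rules):
    """m is an answer set iff it is a subset-minimal model of the reduct w.r.t. m."""
    red = reduct(rules, m)
    return is_model(m, red) and not any(
        is_model(n, red) for n in subsets(m) if n != m)

# Prints the two answer sets of the toy program: [['b'], ['a', 'c']].
print([sorted(m) for m in subsets(ATOMS) if is_answer_set(m, RULES)])
```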
Probabilistic checking of proofs: a new characterization of NP
- Journal of the ACM, 1998
"... We give a new characterization of NP: the class NP contains exactly those languages L for which membership proofs (a proof that an input x is in L) can be verified probabilistically in polynomial time using logarithmic number of random bits and by reading sublogarithmic number of bits from the proof ..."
Abstract - Cited by 414 (26 self)
We give a new characterization of NP: the class NP contains exactly those languages L for which membership proofs (a proof that an input x is in L) can be verified probabilistically in polynomial time using a logarithmic number of random bits and by reading a sublogarithmic number of bits from the proof. We discuss implications of this characterization; specifically, we show that approximating Clique and Independent Set, even in a very weak sense, is NP-hard.
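In the usual PCP notation (my shorthand for the statement above, where the verifier uses O(log n) random bits and reads o(log n) bits of the proof; this exact formula is not quoted from the paper), the characterization reads:

```latex
\mathrm{NP} \;=\; \mathrm{PCP}\bigl[\,O(\log n),\; o(\log n)\,\bigr]
```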
The Complexity of Decentralized Control of Markov Decision Processes
- Mathematics of Operations Research, 2000
"... We consider decentralized control of Markov decision processes and give complexity bounds on the worst-case running time for algorithms that find optimal solutions. Generalizations of both the fullyobservable case and the partially-observable case that allow for decentralized control are described. ..."
Abstract - Cited by 411 (46 self)
We consider decentralized control of Markov decision processes and give complexity bounds on the worst-case running time for algorithms that find optimal solutions. Generalizations of both the fully observable case and the partially observable case that allow for decentralized control are described. For even two agents, the finite-horizon problems corresponding to both of these models are hard for nondeterministic exponential time. These complexity results illustrate a fundamental difference between centralized and decentralized control of Markov decision processes. In contrast to the problems involving centralized control, the problems we consider provably do not admit polynomial-time algorithms. Furthermore, assuming EXP ≠ NEXP, the problems require super-exponential time to solve in the worst case.
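One way to get a feel for why decentralized control is so hard is to count joint deterministic policies, each of which maps an agent's observation histories to actions. The counting convention and the small sketch below are my own back-of-the-envelope illustration, not an algorithm or figure from the paper.

```python
def num_histories(num_obs: int, horizon: int) -> int:
    """Observation histories of length 0 .. horizon-1 seen by a single agent."""
    return sum(num_obs ** t for t in range(horizon))

def num_joint_policies(num_agents: int, num_actions: int,
                       num_obs: int, horizon: int) -> int:
    """Deterministic joint policies: each agent maps every history to an action."""
    per_agent = num_actions ** num_histories(num_obs, horizon)
    return per_agent ** num_agents

# 2 agents, 2 actions, 2 observations, horizon 5: already about 4.6e18
# joint policies, so exhaustive search is hopeless even for toy instances.
print(num_joint_policies(2, 2, 2, 5))
```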
Extending and Implementing the Stable Model Semantics
2002
"... A novel logic program like language, weight constraint rules, is developed for answer set programming purposes. It generalizes normal logic programs by allowing weight constraints in place of literals to represent, e.g., cardinality and resource constraints and by providing optimization capabilities ..."
Abstract - Cited by 396 (9 self)
A novel logic-program-like language, weight constraint rules, is developed for answer set programming purposes. It generalizes normal logic programs by allowing weight constraints in place of literals to represent, e.g., cardinality and resource constraints, and by providing optimization capabilities. A declarative semantics is developed which extends the stable model semantics of normal programs. The computational complexity of the language is shown to be similar to that of normal programs under the stable model semantics. A simple embedding of general weight constraint rules into a small subclass of the language called basic constraint rules is devised. An implementation of the language, the smodels system, is developed based on this embedding. It uses a two-level architecture consisting of a front-end and a kernel language implementation. The front-end allows restricted use of variables and functions and compiles general weight constraint rules to basic constraint rules. A major part of the work is the development of an efficient search procedure for computing stable models for this kernel language. The procedure is compared with and empirically tested against satisfiability checkers and an implementation of the stable model semantics. It offers a competitive implementation of the stable model semantics for normal programs and attractive performance for problems where the new types of rules provide a compact representation.
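As a quick illustration of the kind of construct the abstract describes, the sketch below evaluates a single weight constraint against a candidate set of atoms; cardinality constraints are the unit-weight special case. The representation and function name are my own assumptions, not smodels code.

```python
def weight_constraint_holds(model, lower, upper, pos=(), neg=()):
    """Check  lower <= { a=w for (a,w) in pos,  not a=w for (a,w) in neg } <= upper.

    A positive literal contributes its weight when its atom is in the model;
    a 'not' literal contributes its weight when its atom is absent.
    """
    total = sum(w for a, w in pos if a in model)
    total += sum(w for a, w in neg if a not in model)
    return lower <= total <= upper

# 1 <= { a = 1, b = 1, not c = 1 } <= 2  under the model {a, c}: holds (total = 1).
print(weight_constraint_holds({"a", "c"}, 1, 2,
                              pos=[("a", 1), ("b", 1)], neg=[("c", 1)]))
```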
The complexity of computing a Nash equilibrium
2006
"... We resolve the question of the complexity of Nash equilibrium by showing that the problem of computing a Nash equilibrium in a game with 4 or more players is complete for the complexity class PPAD. Our proof uses ideas from the recently-established equivalence between polynomialtime solvability of n ..."
Abstract - Cited by 329 (23 self)
- Add to MetaCart
We resolve the question of the complexity of Nash equilibrium by showing that the problem of computing a Nash equilibrium in a game with 4 or more players is complete for the complexity class PPAD. Our proof uses ideas from the recently-established equivalence between polynomialtime solvability of normal-form games and graphical games, and shows that these kinds of games can implement arbitrary members of a PPAD-complete class of Brouwer functions. 1