Results 1–10 of 10
Average-case computational complexity theory
 Complexity Theory Retrospective II
, 1997
Abstract

Cited by 32 (2 self)
Being NP-complete has been widely interpreted as being computationally intractable. But NP-completeness is a worst-case concept. Some NP-complete problems are "easy on average", but some may not be. How is one to know whether an NP-complete problem is "difficult on average"? The theory of average-case computational complexity, initiated by Levin about ten years ago, is devoted to studying this problem. This paper is an attempt to provide an overview of the main ideas and results in this important new subarea of complexity theory.
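For orientation, the "easy on average" notion the abstract alludes to has a precise form. The following is a standard paraphrase of Levin's definition, not a quotation from the survey: an algorithm A with running time t_A is polynomial on average with respect to an input distribution mu if, for some eps > 0,

```latex
% Levin-style "polynomial on average" (standard paraphrase):
% A with running time t_A is polynomial on average w.r.t. mu
% if there exists \varepsilon > 0 such that
\sum_{x} \frac{t_A(x)^{\varepsilon}}{|x|}\,\mu(x) \;<\; \infty .
```

This form, rather than a bound on the naive expected running time, is what makes the resulting class robust under the reductions used in the theory.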
Typical-Case Challenges to Complexity Shields That Are Supposed to Protect Elections Against Manipulation and Control: A Survey
, 2012
Abstract

Cited by 13 (2 self)
In the context of voting, manipulation and control refer to attempts to influence the outcome of elections by either setting some of the votes strategically (i.e., by reporting untruthful preferences) or by altering the structure of elections via adding, deleting, or partitioning either candidates or voters. Since by the celebrated Gibbard–Satterthwaite theorem (and other results expanding its scope) all reasonable voting systems are manipulable in principle, and since many voting systems are in principle susceptible to many control types modeling natural control scenarios, much work has been done to use computational complexity as a shield to protect elections against manipulation and control. However, most of this work has yielded NP-hardness results, showing that certain voting systems resist certain types of manipulation or control only in the worst case. The typical case, where votes are given according to some natural distribution, poses a serious challenge to such worst-case complexity results and is frequently open to successful manipulation or control attempts, despite the NP-hardness of the corresponding problems. We survey some recent results on typical-case challenges to worst-case complexity results for manipulation and control.
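To make the manipulation problem concrete, here is a minimal brute-force sketch for a single strategic voter under the Borda rule; the instance, the strict-winner tie handling, and all names are our assumptions, not anything taken from the survey:

```python
from itertools import permutations

def can_make_winner(others, preferred, candidates):
    """Brute-force single-voter Borda manipulation check.

    `others` are the fixed (truthful) ballots; we try every possible
    ballot for the manipulator and ask whether `preferred` can be made
    the unique Borda winner."""
    m = len(candidates)
    for ballot in permutations(candidates):
        scores = {c: 0 for c in candidates}
        for b in list(others) + [ballot]:
            for rank, c in enumerate(b):
                scores[c] += m - 1 - rank   # Borda: rank i earns m-1-i points
        if all(scores[preferred] > scores[c]
               for c in candidates if c != preferred):
            return True
    return False

# One truthful voter ranks b > a > c; a single manipulator can still elect a.
print(can_make_winner([("b", "a", "c")], "a", ["a", "b", "c"]))   # True
# With two b-first voters, b's score is out of the manipulator's reach.
print(can_make_winner([("b", "a", "c"), ("b", "c", "a")], "a", ["a", "b", "c"]))  # False
```

The survey's point is precisely that such exhaustive search is infeasible in general (the coalitional version is NP-hard for many rules), yet on typical instances simple heuristics often succeed anyway.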
Computational Tractability: The View From Mars
 BULLETIN OF THE EUROPEAN ASSOCIATION OF THEORETICAL COMPUTER SCIENCE
Abstract

Cited by 11 (1 self)
We describe a point of view about the parameterized computational complexity framework in the broad context of one of the central issues of theoretical computer science as a field: the problem of systematically coping with computational intractability. Those already familiar with the basic ideas of parameterized complexity will nevertheless find something new here: the emerging systematic connections between fixed-parameter tractability techniques and the design of useful heuristic algorithms, and also, perhaps, the philosophical maturation of the parameterized complexity program.
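As a concrete instance of the fixed-parameter tractability the abstract refers to (a textbook example, not code from the paper): the bounded search tree for k-Vertex-Cover runs in O(2^k · m) time, so the exponential cost depends only on the parameter k, not on the input size.

```python
# FPT illustration: decide k-Vertex-Cover with a bounded search tree.
# Running time O(2^k * m): exponential only in the parameter k.
def has_vertex_cover(edges, k):
    """Return True iff the graph given by `edges` (pairs of vertices)
    has a vertex cover of size at most k."""
    if not edges:
        return True        # no edges left: the chosen cover suffices
    if k == 0:
        return False       # edges remain but no budget
    u, v = edges[0]
    # Branch: any cover must contain u or v. Each branch removes all
    # edges touching the chosen vertex and spends one unit of budget.
    rest_u = [e for e in edges if u not in e]
    rest_v = [e for e in edges if v not in e]
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

# A 4-cycle needs 2 vertices to cover all edges.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_vertex_cover(cycle, 1))  # False
print(has_vertex_cover(cycle, 2))  # True
```

The search tree has depth at most k and branching factor 2, which is exactly the f(k) · poly(n) shape that parameterized complexity takes as its notion of tractability.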
Smoothed Complexity Theory
Abstract

Cited by 1 (0 self)
Smoothed analysis is a new way of analyzing algorithms, introduced by Spielman and Teng (J. ACM, 2004). Classical methods like worst-case or average-case analysis have accompanying complexity classes, like P and AvgP, respectively. While worst-case or average-case analysis gives us a means to talk about the running time of a particular algorithm, complexity classes allow us to talk about the inherent difficulty of problems. Smoothed analysis is a hybrid of worst-case and average-case analysis and compensates for some of their drawbacks. Despite its success for the analysis of single algorithms and problems, there is no embedding of smoothed analysis into computational complexity, which is necessary to classify problems according to their intrinsic difficulty. We propose a framework for smoothed complexity theory, define the relevant classes, prove some first results, and study some examples.
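The hybrid nature of smoothed analysis is visible in its defining quantity. In the Spielman–Teng style (our paraphrase, with sigma the perturbation magnitude and g random noise):

```latex
% Smoothed complexity: worst case over inputs, expectation over noise.
C_A(n, \sigma) \;=\; \max_{|x| = n} \; \mathbb{E}_{g}\!\left[\, T_A(x + \sigma g) \,\right]
```

Taking sigma to 0 recovers worst-case complexity, while a large sigma drowns out the adversarial input and approaches an average-case measure.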
Introduction to Computational Complexity, Mathematical Programming Glossary Supplement
, 2010
Improving Deterministic and Randomized Exponential-Time Algorithms for the Satisfiability, the Colorability, and the Domatic Number Problem
, 2006
Abstract
NP-complete problems cannot have efficient algorithms unless P = NP. Due to their importance in practice, however, it is useful to improve the known exponential-time algorithms for NP-complete problems. We survey some of the recent results on such improved exponential-time algorithms for the NP-complete problems satisfiability, graph colorability, and the domatic number problem. The deterministic time bounds are compared with the corresponding time bounds of randomized algorithms, which often run faster, but only at the cost of having a certain error probability.
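As an illustration of the kind of improvement surveyed, here is a sketch of basic Monien–Speckenmeyer-style branching for SAT (our sketch, not any particular algorithm from the paper): branching on the literals of a clause rather than on single variables gives, for 3-CNF, the recurrence T(n) = T(n-1) + T(n-2) + T(n-3) and hence roughly O(1.84^n) instead of the trivial O(2^n); the refined variant with autark assignments does better still.

```python
def simplify(clauses, literal):
    """Set `literal` to True: drop satisfied clauses, shrink the others.
    Returns None if an empty clause (contradiction) appears."""
    result = []
    for clause in clauses:
        if literal in clause:
            continue                        # clause satisfied
        reduced = [l for l in clause if l != -literal]
        if not reduced:
            return None                     # empty clause: conflict
        result.append(reduced)
    return result

def sat(clauses):
    """Decide satisfiability of a CNF formula given as a list of clauses,
    each clause a list of nonzero ints (positive/negative literals)."""
    if not clauses:
        return True
    falsified = []            # literals of the first clause already set False
    for lit in clauses[0]:
        branch = clauses
        for f in falsified:   # earlier literals of the clause are False here
            branch = simplify(branch, -f)
            if branch is None:
                break
        else:
            branch = simplify(branch, lit)  # current literal is True
            if branch is not None and sat(branch):
                return True
        falsified.append(lit)
    return False

print(sat([[1, -2], [2, 3], [-3]]))   # True  (e.g. x1=T, x2=T, x3=F)
print(sat([[1, 2], [-1, 2], [-2]]))   # False (the clauses force a contradiction)
```

Each successive branch fixes one more variable of the chosen clause, which is exactly where the T(n-1) + T(n-2) + T(n-3) recurrence comes from.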
Distributional Word Problem for Tseitin
Abstract
The main criticism of known algebraic distributional NP (DistNP) complete problems is that they contain too many specific relations, used to simulate a Turing machine. In this paper we present a construction of a semigroup with very few relations whose word problem is DistNP-complete. Our construction follows Tseitin's ideas [Tse56]. We modify the original construction to work with words in standard binary presentation and with arbitrary semigroups, without any special conditions on their relations. The study of average-case complexity (i.e., the complexity of algorithms for problems with a probability distribution on instances) is hard and interesting from many points of view. For example, in industry it is interesting to understand the behavior of programs on the most common inputs, while in cryptography the hardness of a cipher in the worst case is not very interesting. In [Lev86] Levin defined the notion of a distributional NP-complete (DistNP-complete) problem, where the decision-problem component belongs to NP and the distribution on instances is polynomial-time computable.
A Status Report on the P versus NP Question
Abstract
We survey some of the history of the most famous open question in computing: the P versus NP question. We summarize some of the progress that has been made to date, and assess the current situation.
Using Average Case Intractability in Cryptography
, 2004
Abstract
One of the most critical issues in cryptography is the mistaken equivalence between basing a cryptographic application on a difficult problem and actually verifying its security. Under this erroneous perspective, it is common to use problems belonging to NP (hard according to worst-case analysis) in the design stage, after which the verification analysis rests entirely on those complexity assumptions. However, it should be borne in mind that when randomly generated instances are used, fast and efficient algorithms to solve them often exist. This work describes a new multiparty protocol for the sharing of secrets whose main application is key management. The main particularity of this scheme is that it is based on a problem classified as DistNP-complete under average-case analysis, the so-called Distributional Matrix Representability Problem.