Results 11–20 of 729
On the Complexity of Blocks-World Planning
 Artificial Intelligence
, 1992
Abstract

Cited by 92 (15 self)
In this paper, we show that in the best-known version of the blocks world (and several related versions), planning is difficult, in the sense that finding an optimal plan is NP-hard. However, the NP-hardness is not due to deleted-condition interactions, but instead due to a situation which we call a deadlock. For problems that do not contain deadlocks, there is a simple hill-climbing strategy that can easily find an optimal plan, regardless of whether or not the problem contains any deleted-condition interactions. The above result is rather surprising, since one of the primary roles of the blocks world in the planning literature has been to provide examples of deleted-condition interactions such as creative destruction and Sussman's anomaly. However, we can explain why deadlocks are hard to handle in terms of a domain-independent goal interaction which we call an enabling-condition interaction, in which an action invoked to achieve one goal has a side-effect of making it easier to achi...
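The hill-climbing idea for deadlock-free problems can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the state encoding (a dict mapping each block to what it sits on) and all helper names are assumptions made for the example. The strategy makes a constructive move (a block directly to its final position) whenever one exists, and otherwise unstacks a misplaced clear block onto the table.

```python
def clear_blocks(state):
    """Blocks with nothing stacked on top of them."""
    covered = {below for below in state.values() if below != 'table'}
    return [b for b in state if b not in covered]

def in_final_position(b, state, goal):
    """True if b and every block beneath it already match the goal."""
    while True:
        if state[b] != goal[b]:
            return False
        if state[b] == 'table':
            return True
        b = state[b]

def hill_climb_plan(state, goal):
    """Greedy plan: prefer constructive moves, else unstack to the table."""
    state, plan = dict(state), []
    while not all(in_final_position(b, state, goal) for b in state):
        for b in clear_blocks(state):
            if in_final_position(b, state, goal):
                continue
            dest = goal[b]
            if dest == 'table' or (dest in clear_blocks(state)
                                   and in_final_position(dest, state, goal)):
                plan.append((b, dest))      # constructive move
                state[b] = dest
                break
        else:
            # No constructive move: unstack some misplaced clear block.
            # With deadlocks, such extra moves are where optimality is lost.
            b = next(x for x in clear_blocks(state)
                     if not in_final_position(x, state, goal)
                     and state[x] != 'table')
            plan.append((b, 'table'))
            state[b] = 'table'
    return plan

# Sussman's anomaly: C sits on A; the goal is the stack A-on-B-on-C.
print(hill_climb_plan({'A': 'table', 'B': 'table', 'C': 'A'},
                      {'A': 'B', 'B': 'C', 'C': 'table'}))
# → [('C', 'table'), ('B', 'C'), ('A', 'B')]
```

On Sussman's anomaly, a classic deleted-condition example, this greedy loop still finds the optimal three-move plan, consistent with the abstract's claim that such interactions are not the source of hardness.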
Space-Efficiency for Routing Schemes of Stretch Factor Three
 JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING
, 1997
Abstract

Cited by 68 (7 self)
We deal with routing algorithms on arbitrary n-node networks. A routing algorithm is a deterministic distributed algorithm which routes messages from any source to any destination. This notion includes not only the classical routing tables, but also routing algorithms that generate paths with loops. Our goal is to design routing algorithms which minimize, for each router of the network, the amount of routing information that needs to be stored by the router in order to implement its own local routing algorithm. So as to simplify the implementation of a routing algorithm, names of the routers can be chosen in advance. We also take into account the efficiency of the routing, i.e., the length of the routing paths. The stretch factor is the maximum ratio, taken over all source-destination pairs, between the length of the paths computed by the routing algorithm and the distance between the source and the destination. We show that there exists an n-node network on which every routing algorithm o...
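The stretch-factor definition above can be made concrete with a small computation. The graph, the "always forward clockwise" routing scheme, and all names below are made-up illustrations, not the paper's constructions; distances are computed by BFS since edges are unweighted.

```python
from collections import deque
from fractions import Fraction

def bfs_dist(adj, src):
    """Hop distances from src in an unweighted graph (adjacency lists)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def stretch_factor(adj, route_len):
    """Worst ratio of routed path length to shortest-path distance.

    route_len[(s, t)] = edges on the path the routing scheme uses."""
    worst = Fraction(0)
    for s in adj:
        dist = bfs_dist(adj, s)
        for t in adj:
            if t != s:
                worst = max(worst, Fraction(route_len[(s, t)], dist[t]))
    return worst

# 4-cycle: every shortest path has length at most 2, but a scheme that
# always forwards clockwise routes some pairs over 3 edges.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
clockwise = {(s, t): (t - s) % 4 for s in adj for t in adj if s != t}
print(stretch_factor(adj, clockwise))   # → 3 (pair (0, 3): 3 hops vs. 1)
```

A space/stretch trade-off of the kind the paper studies would compare how much routing information per node is needed to push this worst-case ratio down.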
A Mathematica q-Analogue of Zeilberger's Algorithm for Proving q-Hypergeometric Identities
, 1995
Abstract

Cited by 67 (11 self)
Besides an elementary introduction to q-identities and basic hypergeometric series, a newly developed Mathematica implementation of a q-analogue of Zeilberger's fast algorithm for proving terminating q-hypergeometric identities, together with its theoretical background, is described. To illustrate the usage of the package and its range of applicability, nontrivial examples are presented, as well as additional features like the computation of companion and dual identities.
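To give a flavor of the terminating q-identities such a package certifies, here is an exact numeric check of the classical q-binomial theorem (an identity chosen for illustration, not necessarily one of the paper's examples; the helper names are assumptions): for all n, sum_{k=0}^{n} qbinom(n, k) q^{k(k-1)/2} x^k = prod_{i=0}^{n-1} (1 + q^i x).

```python
from fractions import Fraction

def qint(n, q):
    """q-integer [n]_q = 1 + q + ... + q^(n-1)."""
    return sum(q ** i for i in range(n))

def qfact(n, q):
    """q-factorial [n]_q! = [1]_q [2]_q ... [n]_q."""
    f = Fraction(1)
    for i in range(1, n + 1):
        f *= qint(i, q)
    return f

def qbinom(n, k, q):
    """Gaussian binomial coefficient [n choose k]_q."""
    return qfact(n, q) / (qfact(k, q) * qfact(n - k, q))

# Verify the identity exactly at a rational point (q, x).
q, x, n = Fraction(1, 2), Fraction(3), 6
lhs = sum(qbinom(n, k, q) * q ** (k * (k - 1) // 2) * x ** k
          for k in range(n + 1))
rhs = Fraction(1)
for i in range(n):
    rhs *= 1 + q ** i * x
print(lhs == rhs)   # → True
```

A q-Zeilberger-style algorithm proves such identities for all n at once by finding a recurrence certificate, rather than checking individual evaluations as done here.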
Greatest Factorial Factorization and Symbolic Summation
 J. SYMBOLIC COMPUT
, 1995
Abstract

Cited by 66 (7 self)
This paper is self-contained: no difference field knowledge, but only basic facts from algebra, is required. In the following we briefly review its sections. Section 2 presents the basic GFF notions, in particular the Fundamental Lemma and an algorithm for computing the GFF-form of a polynomial. In Section 3 we investigate the relation to the dispersion function (Abramov, 1971) and discuss "shift-saturated" polynomials, which are polynomials with sufficiently nice GFF-form. Due to lattice properties of K[x] with respect to gcd, a minimal shift-saturated polynomial sat(p) can be assigned to each p ∈ K[x]. The canonical S-form of a rational function is introduced as the quotient of two polynomials with denominator of type sat(p). In Section 4 rational telescoping is treated; based on the S-form representation, Theorem 4.1 explains why factorials rather than powers play the essential role in summation. Section 5 presents a new and algebraically motivated approach to Gosper's algorithm; together with the basic notions of Section 2, this section can be read independently from the rest of the paper. In Section 6 we consider the general rational summation problem from the GFF point of view. Two new algorithms are given. The first one works iteratively, similarly to the approach sketched by Moenck (1977). His approach is implemented in the computer algebra system Maple to sum rational functions, but due to several gaps in Moenck's original description the Maple algorithm fails on certain rational function inputs, as observed by the author of this paper; see Example 6.6. The second algorithm provides an analogue to what is called "Horowitz' Method" or the "Hermite-Ostrogradsky Formula" for rational function integration. In addition, discussing minimal-degree answers to...
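The rational telescoping problem of Section 4 asks for a g with g(x+1) - g(x) = f(x). The following toy check is not one of the paper's algorithms, just the defining equation applied to a standard example, and shows how such a certificate collapses a sum.

```python
from fractions import Fraction

# For f(x) = 1/(x(x+1)) one telescoping certificate is g(x) = -1/x.
f = lambda x: Fraction(1, x * (x + 1))
g = lambda x: Fraction(-1, x)

# Certificate check: g(x+1) - g(x) == f(x) at many sample points.
assert all(g(x + 1) - g(x) == f(x) for x in range(1, 50))

# Hence sum_{x=1}^{n} f(x) telescopes to g(n+1) - g(1) = 1 - 1/(n+1).
n = 10
total = sum(f(x) for x in range(1, n + 1))
print(total == g(n + 1) - g(1), total)   # → True 10/11
```

Algorithms such as Gosper's (Section 5) find a certificate like g automatically when one exists; the GFF machinery explains why the answer is naturally expressed through falling factorials rather than powers.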
A Unified Approach to Ranking in Probabilistic Databases
Abstract

Cited by 62 (3 self)
The dramatic growth in the number of application domains that naturally generate probabilistic, uncertain data has resulted in a need for efficiently supporting complex querying and decision-making over such data. In this paper, we present a unified approach to ranking and top-k query processing in probabilistic databases by viewing it as a multi-criteria optimization problem, and by deriving a set of features that capture the key properties of a probabilistic dataset that dictate the ranked result. We contend that a single, specific ranking function may not suffice for probabilistic databases, and we instead propose two parameterized ranking functions, called PRF^ω and PRF^e, that generalize or can approximate many of the previously proposed ranking functions. We present novel generating-function-based algorithms for efficiently ranking large datasets according to these ranking functions, even if the datasets exhibit complex correlations modeled using probabilistic and/xor trees or Markov networks. We further propose that the parameters of the ranking function be learned from user preferences, and we develop an approach to learn those parameters. Finally, we present a comprehensive experimental study that illustrates the effectiveness of our parameterized ranking functions, especially PRF^e, at approximating other ranking functions and the scalability of our proposed algorithms for exact or approximate ranking.
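One ingredient of this approach can be illustrated for the special case of independent tuples: compute each tuple's rank distribution with a simple dynamic program, then aggregate it with a PRF^e-style geometric weight α^i on rank i. The data, the choice of α, and the quadratic DP below are illustrative assumptions made for this sketch; the paper's generating-function algorithms handle correlated tuples and are far more efficient.

```python
from fractions import Fraction

def prf_e(data, alpha):
    """Rank tuples by sum_i Pr(rank = i) * alpha^i.

    data: list of (score, existence probability); scores distinct,
    tuples mutually independent."""
    ranking = []
    for score, p in data:
        higher = [q for s, q in data if s > score]
        dp = [Fraction(1)]        # dp[k] = Pr(exactly k higher tuples exist)
        for q in higher:
            dp = [(dp[k] * (1 - q) if k < len(dp) else 0)
                  + (dp[k - 1] * q if k > 0 else 0)
                  for k in range(len(dp) + 1)]
        # The tuple exists with probability p and has rank k + 1 when
        # exactly k higher-scored tuples exist alongside it.
        value = p * sum(dp[k] * alpha ** (k + 1) for k in range(len(dp)))
        ranking.append((value, score))
    return sorted(ranking, reverse=True)

data = [(Fraction(90), Fraction(1, 2)),    # top score but uncertain
        (Fraction(80), Fraction(9, 10)),   # slightly lower, reliable
        (Fraction(70), Fraction(1))]       # certain
for value, score in prf_e(data, alpha=Fraction(1, 2)):
    print(score, value)
```

Note how the reliable score-80 tuple outranks the uncertain score-90 one, which is exactly the kind of probability/score trade-off a single fixed ranking function cannot capture for every application.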
Secure Two-Party Computation via Cut-and-Choose Oblivious Transfer
 In the 8th TCC, Springer (LNCS 6597)
, 2011
Abstract

Cited by 62 (8 self)
Protocols for secure two-party computation enable a pair of parties to compute a function of their inputs while preserving security properties such as privacy, correctness and independence of inputs. Recently, a number of protocols have been proposed for the efficient construction of two-party computation secure in the presence of malicious adversaries (where security is proven under the standard simulation-based ideal/real model paradigm for defining security). In this paper, we present a protocol for this task that follows the methodology of using cut-and-choose to boost Yao’s protocol to be secure in the presence of malicious adversaries. Relying on specific assumptions (DDH), we construct a protocol that is significantly more efficient and far simpler than the protocol of Lindell and Pinkas (Eurocrypt 2007) that follows the same methodology. We provide an exact, concrete analysis of the efficiency of our scheme and demonstrate that (at least for not very small circuits) our protocol is more efficient than any other known today. Keywords: secure two-party computation, malicious adversaries, cut-and-choose, concrete efficiency
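The security benefit of cut-and-choose can be quantified with a back-of-the-envelope calculation (an illustration of the general technique, not the paper's analysis, which evaluates a majority of circuits for a stronger bound): if a malicious garbler corrupts c of s circuits and the other party opens a uniformly random half, the cheater survives the check only when every corrupted circuit lands in the unopened half.

```python
from fractions import Fraction
from math import comb

def escape_probability(s, c):
    """Pr[all c corrupted circuits avoid the opened half of s circuits].

    Assumes s is even, so the opened and unopened halves have size s // 2."""
    if c > s - s // 2:
        return Fraction(0)      # too many to hide in the unopened half
    return Fraction(comb(s - c, s // 2), comb(s, s // 2))

print(escape_probability(8, 2))                    # → 3/14
print(float(escape_probability(40, 10)) < 1e-3)    # → True
```

Increasing s drives this escape probability down exponentially, which is why cut-and-choose repetition turns the semi-honest Yao protocol into one that resists malicious adversaries at a multiplicative cost in circuits.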
Fast Algorithms to Enumerate All Common Intervals of Two Permutations
 Algorithmica
, 2000
Abstract

Cited by 61 (1 self)
Given two permutations of n elements, a pair of intervals of these permutations consisting of the same set of elements is called a common interval. Some genetic algorithms based on such common intervals have been proposed for sequencing problems and have exhibited good prospects. In this paper, we propose three types of fast algorithms to enumerate all common intervals: i) a simple O(n^2) time algorithm (LHP), whose expected running time becomes O(n) for two randomly generated permutations, ii) a practically fast O(n^2) time algorithm (MNG) using the reverse Monge property, and iii) an O(n + K) time algorithm (RC), where K (0 ≤ K ≤ n(n−1)/2) is the number of common intervals. It will also be shown that the expected number of common intervals for two random permutations is O(1). This result gives a reason for the phenomenon that the expected time complexity O(n) of the algorithm LHP is independent of K. Among the proposed algorithms, RC is most desirable from the theoretical point ...
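The simple quadratic enumeration can be sketched directly from the definition (a sketch in the spirit of LHP, though not necessarily its exact form): relabel each element by its position in the second permutation; an index range [i, j] of the first permutation is then a common interval exactly when the running max minus min of those positions equals j - i.

```python
def common_intervals(p, q):
    """All (i, j), i < j, such that p[i..j] and some interval of q
    contain the same set of elements. O(n^2) time."""
    pos = {v: i for i, v in enumerate(q)}
    sigma = [pos[v] for v in p]          # p expressed in q's positions
    out = []
    for i in range(len(sigma)):
        lo = hi = sigma[i]
        for j in range(i + 1, len(sigma)):
            lo, hi = min(lo, sigma[j]), max(hi, sigma[j])
            if hi - lo == j - i:         # positions form a contiguous block
                out.append((i, j))
    return out

print(common_intervals([1, 2, 3, 4], [3, 4, 1, 2]))
# → [(0, 1), (0, 3), (2, 3)]
```

Since the output size K can itself reach n(n−1)/2, the O(n + K) algorithm RC mentioned above is output-sensitive, while this direct scan always pays the quadratic cost.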