Results 1–10 of 17
Learning Bayesian belief networks: An approach based on the MDL principle
 Computational Intelligence
, 1994
Abstract

Cited by 247 (7 self)
A new approach for learning Bayesian belief networks from raw data is presented. The approach is based on Rissanen's Minimal Description Length (MDL) principle, which is particularly well suited for this task. Our approach does not require any prior assumptions about the distribution being learned. In particular, our method can learn unrestricted multiply-connected belief networks. Furthermore, unlike other approaches our method allows us to trade off accuracy and complexity in the learned model. This is important since if the learned model is very complex (highly connected) it can be conceptually and computationally intractable. In such a case it would be preferable to use a simpler model even if it is less accurate. The MDL principle offers a reasoned method for making this tradeoff. We also show that our method generalizes previous approaches based on Kullback cross-entropy. Experiments have been conducted to demonstrate the feasibility of the approach. Keywords: Knowledge Acquisition; Bayes Nets; Uncertainty Reasoning.
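The accuracy/complexity tradeoff the abstract describes can be sketched as an MDL score: the description length of a node is the cost of encoding its parameters plus the cost of encoding the data given its parents. The decomposition below is a simplified illustration of the idea, not the paper's exact encoding; the 0.5·log2(n) bits-per-parameter convention is an assumption.

```python
import math
from collections import Counter

def mdl_score(data, parents, var, arity):
    """Description length of one node given a candidate parent set:
    model cost (parameter encoding) + data cost (conditional entropy).
    A toy MDL decomposition for illustration, not the paper's encoding."""
    n = len(data)
    # Model cost: (arity-1) free parameters per parent configuration,
    # each encoded with 0.5*log2(n) bits (a common MDL convention).
    configs = 1
    for p in parents:
        configs *= arity[p]
    model_bits = 0.5 * math.log2(n) * configs * (arity[var] - 1)
    # Data cost: empirical conditional log-loss of var given its parents, in bits.
    joint = Counter((tuple(row[p] for p in parents), row[var]) for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)
    data_bits = -sum(c * math.log2(c / marg[cfg]) for (cfg, _), c in joint.items())
    return model_bits + data_bits
```

On strongly correlated data the score prefers adding the parent (near-zero data cost outweighs the extra parameter cost); on a small sample the parameter cost would dominate and the simpler model would win, which is exactly the tradeoff the abstract argues for.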
From Ukkonen to McCreight and Weiner: A Unifying View of Linear-Time Suffix Tree Constructions
 ALGORITHMICA
, 1997
Abstract

Cited by 86 (7 self)
We review the linear-time suffix tree constructions by Weiner, McCreight, and Ukkonen. We use the terminology of the most recent algorithm, Ukkonen's online construction, to explain its historic predecessors. This reveals relationships much closer than one would expect, since the three algorithms are based on rather different intuitive ideas. Moreover, it completely explains the differences between these algorithms in terms of simplicity, efficiency, and implementation complexity.
Executing Reactive, Model-based Programs through Graph-based Temporal Planning
 IN PROCEEDINGS OF IJCAI-2001
, 2001
Abstract

Cited by 64 (22 self)
In the future, webs of unmanned air and space vehicles will act together to robustly perform elaborate missions in uncertain environments. We coordinate these systems by introducing a reactive model-based programming language (RMPL) that combines within a single unified representation the flexibility of embedded programming and reactive execution languages, and the deliberative reasoning power of temporal planners. The KIRK planning system takes as input a problem expressed as an RMPL program, and compiles it into a temporal plan network (TPN), similar to those used by temporal planners, but extended for symbolic constraints and decisions. This intermediate representation clarifies the relation between temporal planning and causal-link planning, and permits a single task model to be used for planning and execution.
Using Causal Information and Local Measures to Learn Bayesian Networks
, 1993
Abstract

Cited by 38 (2 self)
In previous work we developed a method of learning Bayesian network models from raw data. This method relies on the well-known minimal description length (MDL) principle. The MDL principle is particularly well suited to this task as it allows us to trade off, in a principled way, the accuracy of the learned network against its practical usefulness. In this paper we present some new results that have arisen from our work. In particular, we present a new local way of computing the description length. This allows us to make significant improvements in our search algorithm. In addition, we modify our algorithm so that it can take into account partial domain information that might be provided by a domain expert. The local computation of description length also opens the door for local refinement of an existing network. The feasibility of our approach is demonstrated by experiments involving networks of a practical size.
A Comparison of Imperative and Purely Functional Suffix Tree Constructions
 Science of Computer Programming
, 1995
Abstract

Cited by 22 (6 self)
We explore the design space of implementing suffix tree algorithms in the functional paradigm. We review the linear time and space algorithms of McCreight and Ukkonen. Based on a new terminology of nested suffixes and nested prefixes, we give a simpler and more declarative explanation of these algorithms than was previously known. We design two "naive" versions of these algorithms which are not linear time, but use simpler data structures, and can be implemented in a purely functional style. Furthermore, we present a new, "lazy" suffix tree construction which is even simpler. We evaluate both imperative and functional implementations of these algorithms. Our results show that the naive algorithms perform very favourably, and in particular, the lazy construction compares very well to all the others. Suffix trees are the method of choice when a large sequence of symbols, the "text", is to be searched frequently for occurrences of short sequences, the "patterns".
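A top-down "lazy" construction in the declarative spirit this abstract describes can be sketched in a few lines: group the suffixes by their first character and recurse into each group. Note the sketch builds an uncompressed suffix trie rather than a true (edge-compressed) suffix tree, and it is quadratic rather than linear; it illustrates the style, not the paper's algorithm.

```python
def lazy_suffix_tree(suffixes):
    """Top-down construction: bucket suffixes by first character, recurse.
    Produces an uncompressed suffix trie (a simplification for illustration)."""
    groups = {}
    for s in suffixes:
        if s:  # the empty suffix contributes no edge
            groups.setdefault(s[0], []).append(s[1:])
    return {c: lazy_suffix_tree(rest) for c, rest in groups.items()}

def build(text):
    """Build the trie over every suffix of text."""
    return lazy_suffix_tree([text[i:] for i in range(len(text))])

def contains(tree, pattern):
    """A pattern occurs in the text iff it labels a path from the root."""
    for c in pattern:
        if c not in tree:
            return False
        tree = tree[c]
    return True
```

In a lazy functional language the recursive calls would be forced on demand during search, which is the property the abstract's "lazy" construction exploits.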
Formalizing Convex Hulls Algorithms
 IN TPHOLs ’01
, 2001
Abstract

Cited by 15 (3 self)
We study the development of formally proved algorithms for computational geometry. The result of this work is a formal description of the basic principles that make convex hull algorithms work, and two programs that implement convex hull computation and have been automatically obtained from formally verified mathematical proofs. Special attention has been given to handling degenerate cases that are often overlooked by conventional algorithm presentations.
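The degenerate cases the abstract highlights (duplicate points, collinear points) are visible even in a short unverified sketch. The monotone-chain construction below is a conventional algorithm shown only to illustrate where those cases arise, not the paper's certified programs; the `<= 0` cross-product test is what deliberately discards collinear points from the hull.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull.
    Deduplication and the strict turn test handle two classic degeneracies:
    repeated points and collinear points on a hull edge."""
    pts = sorted(set(points))          # degenerate case: duplicate points
    if len(pts) <= 1:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            # <= 0 pops collinear points: only strict left turns survive
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    # drop each chain's last point (it starts the other chain)
    return lower[:-1] + upper[:-1]
```

Changing `<= 0` to `< 0` would instead keep collinear boundary points — precisely the kind of specification choice a formal development must pin down.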
An Algorithm for Finding All the Spanning Trees in Undirected Graphs
, 1993
Abstract

Cited by 9 (0 self)
In this paper, we propose an algorithm for finding all the spanning trees in undirected graphs. The algorithm requires O(n + m + τn) time and O(n + m) space, where the given graph has n vertices, m edges and τ spanning trees. For outputting all the spanning trees explicitly, this algorithm is optimal. This problem has a long history and many algorithms have been proposed (e.g., [4, 5, 9]). In 1975, Read and Tarjan presented an algorithm using a technique called backtracking [7]. Their algorithm requires O(n + m + τm) time and O(n + m) space. In [3], Gabow and Myers refined the backtracking approach and obtained an algorithm with O(n + m + τn) time and O(n + m) space, which is optimal for explicit output.
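To make the quantity τ concrete, here is an exhaustive enumerator: test every (n−1)-edge subset for acyclicity with union-find. This is exponential in m, a baseline to define the problem, not the paper's O(n + m + τn) algorithm.

```python
from itertools import combinations

def all_spanning_trees(n, edges):
    """Enumerate every spanning tree of an undirected graph on vertices
    0..n-1 by testing each (n-1)-edge subset for acyclicity (union-find).
    Exponential brute force -- illustrates what tau counts."""
    trees = []
    for subset in combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x
        acyclic = True
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:        # edge closes a cycle -> not a tree
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:             # n-1 edges and acyclic => spanning tree
            trees.append(subset)
    return trees
```

Cayley's formula gives n^(n-2) spanning trees for the complete graph, a handy sanity check: K4 has 4^2 = 16.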
A Flexible Algorithm For Generating All The Spanning Trees In Undirected Graphs
 Algorithmica
, 1997
Abstract

Cited by 7 (0 self)
In this paper, we propose an algorithm for generating all the spanning trees in undirected graphs. The algorithm requires O(n + m + τn) time, where the given graph has n vertices, m edges and τ spanning trees. For outputting all the spanning trees explicitly, this time complexity is optimal. Our algorithm follows a special rooted tree structure on the skeleton graph of the spanning tree polytope. The rule by which the rooted tree structure is traversed is irrelevant to the time complexity. In this sense, our algorithm is flexible. If we employ the depth-first search rule, we can reduce the memory requirement to O(n + m). A breadth-first implementation requires as much as O(m + τn) space, but when a parallel computer is available, this might have an advantage. When a given graph is weighted, the best-first search rule provides a ranking algorithm for the minimum spanning tree problem. The ranking algorithm requires O(n + m + τn) time and O(m + τn) space when we have a minimum spanning tree...
Parallel Maximum Weight Bipartite Matching for Scheduling in Input-Queued Switches
 International Parallel and Distributed Processing Symposium
, 2004
Abstract

Cited by 6 (2 self)
An input-queued switch with virtual output queuing is able to provide a maximum throughput of 100% while supporting more sophisticated scheduling strategies. Switch scheduling can be cast as a maximum flow problem. In this paper we propose a maximum weight bipartite matching (MWBM) scheduling algorithm for input-queued switches. Our goal is to provide 100% throughput while maintaining fairness and stability. Our algorithm provides sublinear parallel run time complexity using a polynomial number of processing elements. We are able to obtain the MWBM for a time slot in sublinear time by using the matching produced in the previous time slot, based on the observation that in input-queued cell-based switches, the weight of edges changes very little during successive time slots. To the best of our knowledge, our algorithm outperforms all previously proposed MWBM scheduling algorithms for input-queued switches. We also describe a linear time complexity MWBM algorithm for a general bipartite graph which outperforms the best known sublinear MWBM algorithm for any bipartite graph with fewer than 10^15 nodes.
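The per-slot problem at the core of this abstract can be stated concretely: given an n×n matrix of edge weights (for example, virtual-output-queue occupancies), choose the assignment of inputs to outputs that maximizes total weight. The exhaustive O(n!) baseline below only defines the objective that the paper's parallel and incremental algorithms compute efficiently; it is illustrative, not their method.

```python
from itertools import permutations

def max_weight_matching(weights):
    """Exact maximum-weight perfect matching on an n x n bipartite weight
    matrix by trying every input->output permutation. O(n!) baseline that
    defines the MWBM objective; real schedulers use far faster algorithms."""
    n = len(weights)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        w = sum(weights[i][perm[i]] for i in range(n))
        if w > best:
            best, best_perm = w, perm
    return best, best_perm
```

The paper's incremental trick corresponds to the observation that `weights` changes by at most one cell arrival/departure per port between slots, so the previous `best_perm` is usually near-optimal for the next slot.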
Requirement-Based Data Cube Schema Design
, 1999
Abstract

Cited by 4 (2 self)
Online analytical processing (OLAP) requires efficient processing of complex decision support queries over very large databases. It is well accepted that precomputed data cubes can help reduce the response time of such queries dramatically. A very important design issue of an efficient OLAP system is therefore the choice of the right data cubes to materialize. We call this problem the data cube schema design problem. In this paper we show that the problem of finding an optimal data cube schema for an OLAP system with limited memory is NP-hard. As a more computationally efficient alternative, we propose a greedy approximation algorithm cMP and its variants. Algorithm cMP consists of two phases. In the first phase, an initial schema consisting of all the cubes required to efficiently answer the user queries is formed. In the second phase, cubes in the initial schema are selectively merged to satisfy the memory constraint. We show that cMP is very effective in pruning the search space...
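Since the abstract gives only the two-phase shape of cMP, the sketch below shows the generic greedy pattern such cube-selection heuristics follow: repeatedly take the cube with the best benefit-to-size ratio that still fits the memory budget. The cube names, sizes, and benefit figures are hypothetical, and this is not the cMP algorithm itself.

```python
def greedy_select(cubes, budget):
    """Greedy knapsack-style cube selection under a memory budget.
    Each cube is a dict with hypothetical 'name', 'size' (memory units),
    and 'benefit' (query-time savings) fields. Illustrates the greedy
    materialization idea in general, not cMP's merge phase specifically."""
    chosen, used = [], 0
    remaining = list(cubes)
    while remaining:
        # Rank by benefit per unit of memory, best first.
        remaining.sort(key=lambda c: c["benefit"] / c["size"], reverse=True)
        pick = next((c for c in remaining if used + c["size"] <= budget), None)
        if pick is None:
            break  # nothing left fits in the budget
        chosen.append(pick["name"])
        used += pick["size"]
        remaining.remove(pick)
    return chosen, used
```

Greedy heuristics of this family carry no optimality guarantee in general (the abstract's NP-hardness result is why an approximation is used at all), but they are cheap and typically effective in practice.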