Results 1 - 10 of 17
MONA: Monadic Second-Order Logic in Practice
- IN TOOLS AND ALGORITHMS FOR THE CONSTRUCTION AND ANALYSIS OF SYSTEMS, FIRST INTERNATIONAL WORKSHOP, TACAS '95, LNCS 1019
, 1995
"... The purpose of this article is to introduce Monadic Second-order Logic as a practical means of specifying regularity. The logic is a highly succinct alternative to the use of regular expressions. We have built a tool MONA, which acts as a decision procedure and as a translator to finite-state au ..."
Abstract
-
Cited by 149 (20 self)
- Add to MetaCart
The purpose of this article is to introduce Monadic Second-order Logic as a practical means of specifying regularity. The logic is a highly succinct alternative to the use of regular expressions. We have built a tool MONA, which acts as a decision procedure and as a translator to finite-state automata. The tool is based on new algorithms for minimizing finite-state automata that use binary decision diagrams (BDDs) to represent transition functions in compressed form. A byproduct of this work is a new bottom-up algorithm to reduce BDDs in linear time without hashing. The potential ...
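The reduction step mentioned here can be pictured with a small sketch. The following hypothetical Python fragment (the names and data layout are ours, not MONA's) merges duplicate BDD nodes and removes redundant tests; note that it uses a hash table as the uniqueness store, whereas the paper's contribution is achieving the same reduction bottom-up in linear time without hashing.

class BDD:
    """Minimal reduced-BDD builder; node ids 0 and 1 are the terminals."""
    def __init__(self):
        self.nodes = [("leaf", False), ("leaf", True)]
        self.unique = {}                     # (var, low, high) -> node id

    def mk(self, var, low, high):
        if low == high:                      # redundant test: both branches agree
            return low
        key = (var, low, high)
        if key not in self.unique:           # structural sharing of equal nodes
            self.unique[key] = len(self.nodes)
            self.nodes.append(key)
        return self.unique[key]

bdd = BDD()
x2 = bdd.mk(2, 0, 1)                         # node testing variable 2
x1_and_x2 = bdd.mk(1, 0, x2)                 # variable 1 AND variable 2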
Mosel: A Flexible Toolset for Monadic Second-Order Logic
- IN PROCEEDINGS OF CAV'97, LNCS 1254
, 1997
"... Mosel is a new tool-set for the analysis and verification in Monadic Second-order Logic. In this paper we concentrate on the system's design: Mosel is a tool-set to include a flexible set of decision procedures for several theories of the logic complemented byavariety of support components f ..."
Abstract
-
Cited by 27 (5 self)
- Add to MetaCart
Mosel is a new tool-set for analysis and verification in Monadic Second-order Logic. In this paper we concentrate on the system's design: Mosel is designed to include a flexible set of decision procedures for several theories of the logic, complemented by a variety of support components for input format translations, visualization, and interfaces to other logics and tools. The main distinguishing features of Mosel are its layered approach to the logic, based on a formal semantics for a minimal subset, its modular design, and its integration in a heterogeneous analysis and verification environment.
Backward and forward bisimulation minimisation of tree automata
, 2007
"... Abstract. We improve an existing bisimulation minimisation algorithm for tree automata by introducing backward and forward bisimulations and developing minimisation algorithms for them. Minimisation via forward bisimulation is also effective for deterministic automata and faster than the previous al ..."
Abstract
-
Cited by 14 (5 self)
- Add to MetaCart
(Show Context)
We improve an existing bisimulation minimisation algorithm for tree automata by introducing backward and forward bisimulations and developing minimisation algorithms for them. Minimisation via forward bisimulation is also effective for deterministic automata and faster than the previous algorithm. Minimisation via backward bisimulation generalises the previous algorithm and is thus more effective but just as fast. We demonstrate implementations of these algorithms on a typical task in natural language processing.
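As a rough illustration of the backward direction, here is a naive fixed-point refinement in Python (our own simplification, far from the paper's optimised algorithm): two states of a bottom-up tree automaton stay in one block as long as their incoming rules look the same up to the current blocks.

from collections import defaultdict

def backward_bisimulation(states, rules):
    # rules: list of (symbol, child_state_tuple, target_state)
    block = {q: 0 for q in states}          # start from the trivial partition
    while True:
        sig = defaultdict(frozenset)        # incoming rules, children up to blocks
        for sym, kids, tgt in rules:
            sig[tgt] |= {(sym, tuple(block[k] for k in kids))}
        keys = {q: (block[q], sig[q]) for q in states}
        ids, new = {}, {}
        for q in states:                    # group states with equal signatures
            new[q] = ids.setdefault(keys[q], len(ids))
        if new == block:                    # partition stable: done
            return block
        block = new

Merging each block into a single state then yields a smaller automaton recognising the same tree language.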
Modular and polymorphic set-based analysis: Theory and practice
, 1996
"... Set-based analysis (SBA) produces good predictions about the behavior of functional and object-oriented programs. The analysis proceeds by inferring constraints that characterize the data flow relationships of the analyzed program. Experiences with Rice's program development environment, which ..."
Abstract
-
Cited by 13 (2 self)
- Add to MetaCart
(Show Context)
Set-based analysis (SBA) produces good predictions about the behavior of functional and object-oriented programs. The analysis proceeds by inferring constraints that characterize the data flow relationships of the analyzed program. Experiences with Rice's program development environment, which includes a static debugger based on SBA, indicate that SBA can deal with programs of up to a couple of thousand lines of code. However, SBA does not cope with larger programs because it generates large systems of constraints for these programs. These constraint systems are at least linear, and possibly quadratic, in the size of the analyzed program. This paper presents theoretical and practical results concerning methods for reducing the size of constraint systems. The theoretical results include a complete proof-theoretic characterization of the observable behavior of a constraint system, which we use to establish a close connection between the observable equivalence of constraint systems and the equivalence of regular-tree grammars. We then exploit this connection to adapt a variety of algorithms for simplifying grammars to the practical problem of simplifying constraint systems.
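One of the classic grammar simplifications this line of work builds on is the elimination of nonproductive and unreachable nonterminals. A hypothetical sketch (the grammar encoding is ours; the paper adapts more sophisticated simplifications to constraint systems):

def simplify(grammar, start):
    # grammar: nonterminal -> list of right-hand sides (lists of symbols);
    # a symbol is a terminal iff it is not a key of `grammar`
    productive, changed = set(), True
    while changed:                     # keep nonterminals that derive something
        changed = False
        for nt, prods in grammar.items():
            if nt not in productive and any(
                    all(s in productive or s not in grammar for s in p)
                    for p in prods):
                productive.add(nt)
                changed = True
    reachable, stack = {start}, [start]
    while stack:                       # keep nonterminals reachable from start
        for p in grammar.get(stack.pop(), []):
            for s in p:
                if s in grammar and s not in reachable:
                    reachable.add(s)
                    stack.append(s)
    keep = productive & reachable
    return {nt: [p for p in prods
                 if all(s in keep or s not in grammar for s in p)]
            for nt, prods in grammar.items() if nt in keep}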
An algorithmic view of gene teams
"... Comparative genomics is a growing field in computational biology, and one of its typical problem is the identification of sets of orthologous genes that have virtually the same function in several genomes. Many different bioinformatics approaches have been proposed to define these groups, often base ..."
Abstract
-
Cited by 8 (1 self)
- Add to MetaCart
Comparative genomics is a growing field in computational biology, and one of its typical problems is the identification of sets of orthologous genes that have virtually the same function in several genomes. Many different bioinformatics approaches have been proposed to define these groups, often based on the detection of sets of genes that are “not too far” in all genomes. In this paper, we propose a unifying concept, called gene teams, which can be adapted to various notions of distance. We present two algorithms for identifying gene teams formed by n genes placed on m linear chromosomes. The first one runs in O(mn log² n) time and uses a divide-and-conquer approach based on the formal properties of gene teams. We next propose an optimization of the original algorithm, and, in order to better understand the complexity bound of the algorithms, we recast the problem in Hopcroft’s partition refinement framework. This allows us to analyze the complexity of the algorithms with elegant amortized techniques. Both algorithms require linear space. We also discuss extensions to circular chromosomes that achieve the same complexity.
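The base step on a single linear chromosome is easy to state: a team can only break where two consecutive candidate genes lie more than a threshold delta apart. A hypothetical Python sketch of that one-chromosome step (our encoding; the paper's divide-and-conquer intersects such splits across all m chromosomes):

def split_at_gaps(genes, delta):
    # genes: list of (position, name) sorted by position on one chromosome
    teams, current = [], [genes[0]]
    for prev, cur in zip(genes, genes[1:]):
        if cur[0] - prev[0] > delta:   # gap too wide: current team ends here
            teams.append(current)
            current = []
        current.append(cur)
    teams.append(current)
    return teams

split_at_gaps([(1, "a"), (2, "b"), (9, "c")], delta=3)
# -> [[(1, "a"), (2, "b")], [(9, "c")]]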
Incremental Methods for Formal Verification and Logic Synthesis
, 1996
"... IC design is an iterative process; the initial specification of a design is rarely complete and correct. The designer begins with a preliminary and usually incorrect sketch (possibly from a previous generation design), and iteratively refines and corrects it. Usually, refinements are small, and the ..."
Abstract
-
Cited by 6 (0 self)
- Add to MetaCart
IC design is an iterative process; the initial specification of a design is rarely complete and correct. The designer begins with a preliminary and usually incorrect sketch (possibly from a previous-generation design), and iteratively refines and corrects it. Usually, refinements are small, and there is much common information between successive design iterations. The current genre of CAD tools does not take this iterative nature of design into account. For each change made to the design, the design is re-verified and re-optimized without taking advantage of information from previous iterations. This leads to inefficient performance. In this thesis, we propose the paradigm of incremental algorithms for CAD. Incremental algorithms use information from a...
A Demand-Driven Approach for Efficient Interprocedural Data Flow Analysis
- IBM RESEARCH
, 1996
"... ..."
Minimizing Deterministic Weighted Tree Automata
, 2008
"... The problem of efficiently minimizing deterministic weighted tree automata (wta) is investigated. Such automata have found promising applications as language models in Natural Language Processing. A polynomial-time algorithm is presented that given a deterministic wta over a commutative semifield, o ..."
Abstract
-
Cited by 6 (4 self)
- Add to MetaCart
(Show Context)
The problem of efficiently minimizing deterministic weighted tree automata (wta) is investigated. Such automata have found promising applications as language models in Natural Language Processing. A polynomial-time algorithm is presented that, given a deterministic wta over a commutative semifield all of whose operations (including the computation of inverses) are polynomial, constructs an equivalent minimal (with respect to the number of states) deterministic and total wta. If the semifield operations can be performed in constant time, then the algorithm runs in time O(rmn⁴), where r is the maximal rank of the input symbols, m is the number of transitions, and n is the number of states of the input wta.
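For a concrete feel of the semifield requirement, the tropical semifield (min, +) is one instance in which every operation, including inverses, takes constant time, so the O(rmn⁴) bound applies. A hypothetical Python encoding (ours, not from the paper):

import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Tropical:
    v: float
    def __add__(self, other):   # semifield addition is min
        return Tropical(min(self.v, other.v))
    def __mul__(self, other):   # semifield multiplication is numeric +
        return Tropical(self.v + other.v)
    def inverse(self):          # multiplicative inverse is negation
        return Tropical(-self.v)

ZERO = Tropical(math.inf)       # additive identity
ONE = Tropical(0.0)             # multiplicative identity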