Results 1 - 10 of 453
Taverna: A tool for the composition and enactment of bioinformatics workflows
- Bioinformatics, 2004
"... *To whom correspondence should be addressed. Running head: Composing and enacting workflows using Taverna Motivation: In silico experiments in bioinformatics involve the co-ordinated use of computational tools and information repositories. A growing number of these resources are being made available ..."
Abstract - Cited by 465 (8 self)
Motivation: In silico experiments in bioinformatics involve the co-ordinated use of computational tools and information repositories. A growing number of these resources are being made available with programmatic access in the form of Web services. Bioinformatics scientists will need to orchestrate these Web services in workflows as part of their analyses. Results: The Taverna project has developed a tool for the composition and enactment of bioinformatics workflows for the life sciences community. The tool includes a workbench application which provides a graphical user interface for the composition of workflows. These workflows are written in a new language called the Simple conceptual unified flow language (Scufl), whereby each step within a workflow represents one atomic task. Two examples are used to illustrate the ease with which in silico experiments can be represented as Scufl workflows using the workbench application. Availability: The Taverna workflow system is available as open source and can be downloaded with example Scufl workflows from
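To make the workflow idea concrete, here is a minimal Python sketch, not the Taverna workbench or Scufl itself: a workflow as named atomic steps wired together by dependencies. The step names and the stand-in "service" functions are invented for illustration.

```python
# Minimal sketch only -- not the Taverna/Scufl API. Each Step is one atomic
# task; a Workflow runs steps in dependency order.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]                        # one atomic task
    inputs: List[str] = field(default_factory=list)    # upstream step names


@dataclass
class Workflow:
    steps: Dict[str, Step] = field(default_factory=dict)

    def add(self, step: Step) -> None:
        self.steps[step.name] = step

    def enact(self, initial: dict) -> Dict[str, dict]:
        """Run each step once, in dependency order, feeding it its inputs."""
        results: Dict[str, dict] = {"__input__": initial}
        pending = dict(self.steps)
        while pending:
            ready = [s for s in pending.values()
                     if all(i in results for i in s.inputs)]
            if not ready:
                raise ValueError("cyclic or unsatisfiable dependencies")
            for step in ready:
                merged: dict = {}
                for dep in (step.inputs or ["__input__"]):
                    merged.update(results[dep])
                results[step.name] = step.run(merged)
                del pending[step.name]
        return results


# Hypothetical two-step in silico experiment: fetch a sequence, then analyse it.
wf = Workflow()
wf.add(Step("fetch_sequence", run=lambda d: {"seq": "MKTAYIAKQR"}))
wf.add(Step("analyse", run=lambda d: {"length": len(d["seq"])},
            inputs=["fetch_sequence"]))
print(wf.enact({"accession": "P12345"})["analyse"])     # {'length': 10}
```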
Performance Debugging for Distributed Systems of Black Boxes
, 2003
"... Many interesting large-scale systems are distributed systems of multiple communicating components. Such systems can be very hard to debug, especially when they exhibit poor performance. The problem becomes much harder when systems are composed of "black-box" components: software from many ..."
Abstract - Cited by 316 (3 self)
Many interesting large-scale systems are distributed systems of multiple communicating components. Such systems can be very hard to debug, especially when they exhibit poor performance. The problem becomes much harder when systems are composed of "black-box" components: software from many different (perhaps competing) vendors, usually without source code available. Typical solutions-provider employees are not always skilled or experienced enough to debug these systems efficiently. Our goal is to design tools that enable modestly-skilled programmers (and experts, too) to isolate performance bottlenecks in distributed systems composed of black-box nodes.
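As a rough illustration only, and not the causal-path analysis developed in the paper, the sketch below aggregates per-link delays from a timestamped trace of messages between black-box components to point at likely bottlenecks; the trace format, component names, and all numbers are made up.

```python
# Rough illustration only -- aggregate per-link message delays from a trace
# of black-box components. Everything below is invented example data.
from collections import defaultdict
from statistics import mean

# (sender, receiver, send_time_ms, receive_time_ms)
trace = [
    ("web", "app", 0.0, 1.2),
    ("app", "db", 1.5, 9.8),
    ("db", "app", 10.0, 10.9),
    ("app", "web", 11.0, 12.1),
    ("web", "app", 20.0, 21.1),
    ("app", "db", 21.4, 30.2),
]

delays = defaultdict(list)
for src, dst, sent, received in trace:
    delays[(src, dst)].append(received - sent)

# Report the slowest links first.
for (src, dst), ds in sorted(delays.items(), key=lambda kv: -mean(kv[1])):
    print(f"{src} -> {dst}: mean {mean(ds):.1f} ms over {len(ds)} message(s)")
```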
KM3: a DSL for metamodel specification
- In Proc. of 8th FMOODS, LNCS 4037, 2006
"... Abstract. We consider in this paper that a DSL (Domain Specific Language) may be defined by a set of models. A typical DSL is the ATLAS Transformation Language (ATL). An ATL program transforms a source model (conforming to a source metamodel) into a target model (conforming to a target metamodel). B ..."
Abstract - Cited by 117 (23 self)
We consider in this paper that a DSL (Domain Specific Language) may be defined by a set of models. A typical DSL is the ATLAS Transformation Language (ATL). An ATL program transforms a source model (conforming to a source metamodel) into a target model (conforming to a target metamodel). Being itself a model, the transformation program conforms to the ATL metamodel. The notion of metamodel is thus used to define the source DSL, the target DSL and the transformation DSL itself. As a consequence we can see that agility to define metamodels and precision of these definitions are of paramount importance in any model engineering activity. In order to fulfill the goals of agility and precision in the definition of our metamodels, we have been using a notation called KM3 (Kernel MetaMetaModel). KM3 may itself be considered as a DSL for describing metamodels. This paper presents the rationale for using KM3, some examples of its use and a precise definition of the language.
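The sketch below is neither KM3 syntax nor any ATLAS tooling API; it is only a hypothetical Python structure suggesting what a metamodel description captures: classes with attributes and (possibly containment) references.

```python
# Illustrative sketch only -- a made-up in-memory structure for describing
# a metamodel, not KM3 or ATL.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Attribute:
    name: str
    datatype: str                 # e.g. "String", "Integer"


@dataclass
class Reference:
    name: str
    target: str                   # name of the referenced class
    is_containment: bool = False


@dataclass
class MetaClass:
    name: str
    attributes: List[Attribute] = field(default_factory=list)
    references: List[Reference] = field(default_factory=list)


# A made-up metamodel for a tiny workflow language.
step = MetaClass("Step", attributes=[Attribute("name", "String")])
workflow = MetaClass(
    "Workflow",
    attributes=[Attribute("name", "String")],
    references=[Reference("steps", target="Step", is_containment=True)],
)
print(workflow)
```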
Graphviz — open source graph drawing tools
- Lecture Notes in Computer Science, 2001
"... Graphviz is a heterogeneous collection of graph drawing tools containing batch layout programs (dot, neato, fdp, twopi); a platform for incremental layout (Dynagraph); customizable graph editors (dotty, Grappa); a server for including graphs in Web pages (WebDot); support for graphs as COM objects ( ..."
Abstract - Cited by 114 (0 self)
Graphviz is a heterogeneous collection of graph drawing tools containing batch layout programs (dot, neato, fdp, twopi); a platform for incremental layout (Dynagraph); customizable graph editors (dotty, Grappa); a server for including graphs in Web pages (WebDot); support for graphs as COM objects (Montage); utility programs useful in graph visualization; and libraries for attributed graphs. The software is available under an Open Source license. The article [1] provides a detailed description of the package. The Graphviz software began with a precursor of dot in 1988, followed by neato in the early 90s. The features expanded greatly over the years, driven by user request. Graphviz became Open Source in 2000, was recently distributed on about 500,000 CDROMs as an add-on package for the SUSE Linux release, and is redistributed by Debian, Mandrake, SourceForge, and soon OpenBSD. Thanks to the variety of components available and its open "toolkit" design, Graphviz supports a wide variety of applications. The foremost application is probably presentation layouts, such as including graphs in papers. As stream processors, the Graphviz tools can be used as co-processes with interactive components to provide dynamic layouts for debuggers, process monitors, program analysis software, etc. Graphviz tools have been adopted as a visualization service by the W3C Resource Description Framework
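A small example of batch use: the snippet below builds a DOT description in Python and pipes it to the dot layout program. It assumes Graphviz is installed on the system; the graph itself is an arbitrary made-up example.

```python
# Build a DOT description and hand it to dot for layout and rendering.
import subprocess

edges = [("parse", "check"), ("optimise", "emit"), ("check", "optimise")]

dot_source = "digraph pipeline {\n"
dot_source += "".join(f'    "{a}" -> "{b}";\n' for a, b in edges)
dot_source += "}\n"

# Same effect as: dot -Tpng -o pipeline.png pipeline.dot
subprocess.run(["dot", "-Tpng", "-o", "pipeline.png"],
               input=dot_source.encode(), check=True)
```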
Implicit structure and the dynamics of blogspace
- In Workshop on the Weblogging Ecosystem, 2004
"... Weblogs link together in a complex structure through which new ideas and discourse can flow. Such a structure is ideal for the study of the propagation of information. In this paper we describe general categories of information epidemics and create a tool to infer and visualize the paths specific in ..."
Abstract - Cited by 104 (4 self)
Weblogs link together in a complex structure through which new ideas and discourse can flow. Such a structure is ideal for the study of the propagation of information. In this paper we describe general categories of information epidemics and create a tool to infer and visualize the paths specific infections take through the network. This inference is based in part on a novel utilization of data describing historical, repeating patterns of infection. We conclude with a description of a new ranking algorithm, iRank, for blogs. In contrast to traditional ranking strategies, iRank acts on the implicit link structure to find those blogs that initiate these epidemics.
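As a loose illustration only, not the paper's inference algorithm, the sketch below keeps just those links that are consistent with infection, i.e. the linking blog first mentions a topic later than the blog it links to; the blogs, timestamps, and links are all invented.

```python
# Loose illustration: filter a link graph down to edges consistent with
# the topic spreading along them. All data below is invented.
first_mention = {"a": 1, "b": 3, "c": 4, "d": 10}         # blog -> first mention time
links = [("b", "a"), ("c", "a"), ("c", "b"), ("d", "c")]  # (reader, linked source)

plausible_infections = [
    (src, reader)
    for reader, src in links
    if src in first_mention
    and reader in first_mention
    and first_mention[src] < first_mention[reader]
]
print(plausible_infections)   # candidate edges along which the topic spread
```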
MatrixExplorer: a Dual-Representation System to Explore Social Networks
- IEEE Transactions on Visualization and Computer Graphics, 2006
"... MatrixExplorer is a network visualization system that uses two representations: node-link diagrams and matrices. Its design comes from a list of requirements formalized after several interviews and a participatory design session conducted with social science researchers. Although matrices are comm ..."
Abstract - Cited by 91 (13 self)
MatrixExplorer is a network visualization system that uses two representations: node-link diagrams and matrices. Its design comes from a list of requirements formalized after several interviews and a participatory design session conducted with social science researchers. Although matrices are commonly used in social network analysis, very few systems support matrix-based representations to visualize and analyze networks. MatrixExplorer provides several novel features to support the exploration of social networks with a matrix-based representation, in addition to the standard interactive filtering and clustering functions. It provides tools to reorder (layout) matrices, to annotate and compare findings across different layouts, and to find consensus among several clusterings. MatrixExplorer also supports node-link diagram views, which are familiar to most users and remain a convenient way to publish or communicate exploration results. Matrix and node-link representations are kept synchronized at all stages of the exploration process.
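To illustrate the dual-representation idea in the simplest terms, the sketch below keeps one small, invented graph both as an edge list (the node-link view) and as an adjacency matrix (the matrix view); reordering the matrix rows and columns is what "layout" means for the latter.

```python
# Same invented undirected graph as an edge list and as an adjacency matrix.
nodes = ["ana", "bob", "carla", "dan"]
edges = [("ana", "bob"), ("bob", "carla"), ("ana", "carla"), ("carla", "dan")]

index = {n: i for i, n in enumerate(nodes)}
matrix = [[0] * len(nodes) for _ in nodes]
for u, v in edges:
    matrix[index[u]][index[v]] = matrix[index[v]][index[u]] = 1

# Print the matrix view; reordering rows/columns is the "layout" step here.
for name, row in zip(nodes, matrix):
    print(f"{name:>6} " + " ".join(map(str, row)))
```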
Velodrome: A Sound and Complete Dynamic Atomicity Checker for Multithreaded Programs
"... Atomicity is a fundamental correctness property in multithreaded programs, both because atomic code blocks are amenable to sequential reasoning (which significantly simplifies correctness arguments), and because atomicity violations often reveal defects in a program’s synchronization structure. Unfo ..."
Abstract - Cited by 85 (10 self)
Atomicity is a fundamental correctness property in multithreaded programs, both because atomic code blocks are amenable to sequential reasoning (which significantly simplifies correctness arguments), and because atomicity violations often reveal defects in a program’s synchronization structure. Unfortunately, all atomicity analyses developed to date are incomplete in that they may yield false alarms on correctly synchronized programs, which limits their usefulness. We present the first dynamic analysis for atomicity that is both sound and complete. The analysis reasons about the exact dependencies between operations in the observed trace of the target program, and it reports error messages if and only if the observed trace is not conflict-serializable. Despite this significant increase in precision, the performance and coverage of our analysis are competitive with earlier incomplete dynamic analyses for atomicity.
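The property being checked can be illustrated with a toy conflict-serializability test; this is not Velodrome's online algorithm, and the trace format and data are invented. Two operations conflict if they come from different transactions, touch the same variable, and at least one is a write; the trace is conflict-serializable exactly when the resulting graph over transactions is acyclic.

```python
# Toy conflict-serializability check over an invented trace.
from collections import defaultdict

# (transaction id, operation, variable), in observed order
trace = [
    (1, "read", "x"), (2, "write", "x"),
    (2, "read", "y"), (1, "write", "y"),
]

edges = defaultdict(set)
for i, (t1, op1, v1) in enumerate(trace):
    for t2, op2, v2 in trace[i + 1:]:
        if t1 != t2 and v1 == v2 and "write" in (op1, op2):
            edges[t1].add(t2)            # t1's operation must serialize before t2's

def has_cycle(graph):
    state = {}                            # node -> "visiting" or "done"
    def dfs(node):
        state[node] = "visiting"
        for nxt in graph[node]:
            if state.get(nxt) == "visiting":
                return True
            if state.get(nxt) is None and dfs(nxt):
                return True
        state[node] = "done"
        return False
    return any(state.get(n) is None and dfs(n) for n in list(graph))

print("conflict-serializable:", not has_cycle(edges))    # False for this trace
```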
The Worst-Case Execution Time Problem – Overview of Methods and Survey of Tools
- ACM Transactions on Embedded Computing Systems, 2008
"... ATRs (AVACS Technical Reports) are freely downloadable from www.avacs.org Copyright c © April 2007 by the author(s) ..."
Abstract - Cited by 82 (18 self)
ATRs (AVACS Technical Reports) are freely downloadable from www.avacs.org. Copyright © April 2007 by the author(s).
Efficient, correct simulation of biological processes in the stochastic pi-calculus
- Gilmore (Eds.), Proc. Int. Conf. Computational Methods in Systems Biology (CMSB’07), 2007
"... This paper presents a simulation algorithm for the stochastic π-calculus, designed for the efficient simulation of biological systems with large numbers of molecules. The cost of a simulation depends on the number of species, rather than the number of molecules, resulting in a significant gain in e ..."
Abstract - Cited by 65 (12 self)
This paper presents a simulation algorithm for the stochastic π-calculus, designed for the efficient simulation of biological systems with large numbers of molecules. The cost of a simulation depends on the number of species, rather than the number of molecules, resulting in a significant gain in efficiency. The algorithm is proved correct with respect to the calculus, and then used as a basis for implementing the latest version of the SPiM stochastic simulator. The algorithm is also suitable for generating graphical animations of simulations, in order to visualise system dynamics.
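The species-based idea can be sketched with a generic Gillespie-style step whose per-iteration cost is proportional to the number of reaction channels rather than the number of molecules; this is only a hedged illustration, not the SPiM algorithm, and the reactions, rates, and counts are invented.

```python
# Generic Gillespie-style step: cost per step depends on the number of
# channels/species, not on molecule counts. All data below is invented.
import random

counts = {"A": 1000, "B": 800, "AB": 0}          # molecules per species

# Each channel: (rate, reactants, products); propensity = rate * product of counts.
channels = [
    (0.001, ("A", "B"), ("AB",)),                # A + B -> AB
    (0.05,  ("AB",),    ("A", "B")),             # AB -> A + B
]

def step(counts, t):
    """Fire one reaction; return the new time, or None if nothing can fire."""
    props = []
    for rate, reactants, _ in channels:
        a = rate
        for r in reactants:
            a *= counts[r]
        props.append(a)
    total = sum(props)
    if total == 0:
        return None
    t += random.expovariate(total)               # time to the next reaction
    pick = random.uniform(0, total)              # choose a channel proportionally
    acc = 0.0
    for (rate, reactants, products), a in zip(channels, props):
        acc += a
        if pick <= acc:
            for s in reactants:
                counts[s] -= 1
            for s in products:
                counts[s] += 1
            break
    return t

t = 0.0
for _ in range(10_000):
    nxt = step(counts, t)
    if nxt is None:
        break
    t = nxt
print(round(t, 3), counts)
```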
Debugging temporal specifications with concept analysis
- In ACM SIGPLAN Conference on Programming Language Design and Implementation, 2003
"... ABSTRACT Program verification tools (such as model checkers and static ana-lyzers) can find many errors in programs. These tools need formal specifications of correct program behavior, but writing a correctspecification is difficult, just as writing a correct program is difficult. Thus, just as we n ..."
Abstract - Cited by 62 (0 self)
Program verification tools (such as model checkers and static analyzers) can find many errors in programs. These tools need formal specifications of correct program behavior, but writing a correct specification is difficult, just as writing a correct program is difficult. Thus, just as we need methods for debugging programs, we need methods for debugging specifications. This paper describes a novel method for debugging formal, temporal specifications. A straightforward way to debug a specification is based on manually examining the short program execution traces that program verification tools generate from specification violations and that specification miners extract from programs. This method is tedious and error-prone because there may be hundreds or thousands of traces to inspect. Our method uses concept analysis to automatically group traces into highly similar clusters. By examining clusters instead of individual traces, a person can debug a specification with less work. To test our method, we implemented a tool, Cable, for debugging specifications. We have used Cable to debug specifications produced by Strauss, our specification miner. We found that using Cable to debug these specifications requires, on average, less than one third as many user decisions as debugging by examining all traces requires. In one case, using Cable required only 28 decisions, while debugging by examining all traces required 224.
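As a much-simplified stand-in for concept analysis (which groups objects by shared attributes rather than by exact equality), the sketch below clusters violation traces by the set of events they contain, so a user can inspect one representative per cluster instead of every trace; the traces themselves are invented.

```python
# Much-simplified stand-in for concept analysis: cluster traces by their
# event sets. The traces below are invented examples.
from collections import defaultdict

traces = [
    ["open", "read", "close"],
    ["open", "read", "read", "close"],
    ["open", "close", "read"],
    ["lock", "unlock", "lock"],
]

clusters = defaultdict(list)
for trace in traces:
    clusters[frozenset(trace)].append(trace)

for events, members in clusters.items():
    print(sorted(events), "->", len(members), "trace(s)")
```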