Results 11 - 20 of 253,036
A classification and comparison framework for software architecture description languages
IEEE Transactions on Software Engineering, 2000
"... Software architectures shift the focus of developers from linesofcode to coarsergrained architectural elements and their overall interconnection structure. Architecture description languages (ADLs) have been proposed as modeling notations to support architecturebased development. There is, howev ..."
Abstract

Cited by 840 (59 self)
Software architectures shift the focus of developers from lines-of-code to coarser-grained architectural elements and their overall interconnection structure. Architecture description languages (ADLs) have been proposed as modeling notations to support architecture-based development. There is, however, little consensus in the research community on what an ADL is, what aspects of an architecture should be modeled in an ADL, and which of several possible ADLs is best suited for a particular problem. Furthermore, the distinction is rarely made between ADLs on one hand and formal specification, module interconnection, simulation, and programming languages on the other. This paper attempts to provide an answer to these questions. It motivates and presents a definition and a classification framework for ADLs. The utility of the definition is demonstrated by using it to differentiate ADLs from other modeling notations. The framework is used to classify and compare several existing ADLs, enabling us in the process to identify key properties of ADLs. The comparison highlights areas where existing ADLs provide extensive support and those in which they are deficient, suggesting a research agenda for the future.
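To make the framework concrete, here is a minimal Python sketch, not from the paper, of how a feature-based classification can be applied mechanically: each ADL is recorded as the set of architectural aspects it models, and two languages are compared by set difference. The feature names and per-ADL entries below are illustrative assumptions, not the paper's actual classification results.

# Hypothetical feature matrix; entries are illustrative only.
FRAMEWORK = {"components", "connectors", "configurations",
             "evolution support", "tool support"}

adls = {
    "Wright": {"components", "connectors", "configurations"},
    "Rapide": {"components", "configurations", "tool support"},
}
assert all(feats <= FRAMEWORK for feats in adls.values())

def compare(a, b):
    """Features modeled by ADL a but not b, and vice versa."""
    return adls[a] - adls[b], adls[b] - adls[a]

print(compare("Wright", "Rapide"))
# -> ({'connectors'}, {'tool support'})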
A Simple, Fast, and Accurate Algorithm to Estimate Large Phylogenies by Maximum Likelihood
2003
"... The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximumlikelihood principle, which clearly satisfies these requirements. The ..."
Abstract

Cited by 2109 (30 self)
The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximumlikelihood principle, which clearly satisfies these requirements. The core of this method is a simple hillclimbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distancebased method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximumlikelihood programs and much higher than the performance of distancebased and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximumlikelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distancebased and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page:
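As a rough illustration of the search procedure the abstract outlines, the following Python sketch hill-climbs over candidate trees, accepting any local rearrangement (with adjusted branch lengths) that improves the likelihood. It is not the PHYML implementation; initial_tree, propose_moves, and log_likelihood are hypothetical callables standing in for a distance-based starting tree, topology-plus-branch-length proposals, and a likelihood evaluator.

def hill_climb(alignment, initial_tree, propose_moves, log_likelihood,
               max_iters=100):
    tree = initial_tree(alignment)            # e.g. a fast distance method
    best = log_likelihood(tree, alignment)
    for _ in range(max_iters):
        improved = False
        # Each candidate changes topology and branch lengths together,
        # which is why few iterations suffice to reach an optimum.
        for candidate in propose_moves(tree):
            score = log_likelihood(candidate, alignment)
            if score > best:
                tree, best, improved = candidate, score, True
                break
        if not improved:                      # local optimum reached
            break
    return tree, best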
Bandera: Extracting Finite-state Models from Java Source Code
In Proceedings of the 22nd International Conference on Software Engineering, 2000
"... Finitestate verification techniques, such as model checking, have shown promise as a costeffective means for finding defects in hardware designs. To date, the application of these techniques to software has been hindered by several obstacles. Chief among these is the problem of constructing a fini ..."
Abstract

Cited by 653 (35 self)
Finite-state verification techniques, such as model checking, have shown promise as a cost-effective means for finding defects in hardware designs. To date, the application of these techniques to software has been hindered by several obstacles. Chief among these is the problem of constructing a finite-state model that approximates the executable behavior of the software system of interest. Current best practice involves hand-construction of models, which is expensive (prohibitive for all but the smallest systems), prone to errors (which can result in misleading verification results), and difficult to optimize (which is necessary to combat the exponential complexity of verification algorithms). In this paper, we describe an integrated collection of program analysis and transformation components, called Bandera, that enables the automatic extraction of safe, compact finite-state models from program source code. Bandera takes as input Java source code and generates a program model in the input language of one of several existing verification tools; Bandera also maps verifier outputs back to the original source code. We discuss the major components of Bandera and give an overview of how it can be used to model check correctness properties of Java programs.
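A schematic Python sketch of the pipeline the abstract describes, with every stage a caller-supplied placeholder rather than Bandera's actual API: slice and abstract the Java program into a finite-state model, translate it into a verifier's input language, and keep a map from model steps back to source locations.

def extract_model(java_source, property_spec, parse, slice_program,
                  abstract_data, translate):
    """Pipeline sketch; each stage is a hypothetical injected callable."""
    ast = parse(java_source)                       # front end
    relevant = slice_program(ast, property_spec)   # drop irrelevant code
    finite = abstract_data(relevant)               # finite data abstractions
    model, source_map = translate(finite)          # e.g. into SPIN's Promela
    return model, source_map

def map_back(trace, source_map):
    # Counterexample steps -> original Java source locations.
    return [source_map[step] for step in trace]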
Predicting Internet Network Distance with Coordinates-Based Approaches
In INFOCOM, 2001
"... In this paper, we propose to use coordinatesbased mechanisms in a peertopeer architecture to predict Internet network distance (i.e. roundtrip propagation and transmission delay) . We study two mechanisms. The first is a previously proposed scheme, called the triangulated heuristic, which is bas ..."
Abstract

Cited by 633 (5 self)
In this paper, we propose to use coordinates-based mechanisms in a peer-to-peer architecture to predict Internet network distance (i.e., round-trip propagation and transmission delay). We study two mechanisms. The first is a previously proposed scheme, called the triangulated heuristic, which is based on relative coordinates that are simply the distances from a host to some special network nodes. We propose the second mechanism, called Global Network Positioning (GNP), which is based on absolute coordinates computed from modeling the Internet as a geometric space. Since end hosts maintain their own coordinates, these approaches allow end hosts to compute their inter-host distances as soon as they discover each other. Moreover, coordinates are very efficient in summarizing inter-host distances, making these approaches very scalable. By performing experiments using measured Internet distance data, we show that both coordinates-based schemes are more accurate than the existing state-of-the-art system IDMaps, and the GNP approach achieves the highest accuracy and robustness among them.
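The GNP idea lends itself to a compact least-squares sketch. Assuming a host has measured round-trip distances to three landmarks with known 2-D coordinates (all numbers below are made up for illustration), its own coordinates are those that minimize the squared difference between modeled Euclidean and measured distances:

import numpy as np
from scipy.optimize import minimize

# Hypothetical landmark coordinates and measured round-trip distances
# from the host to each landmark (ms-scaled units); all made up.
landmarks = np.array([[0.0, 0.0], [80.0, 10.0], [30.0, 70.0]])
measured = np.array([42.0, 55.0, 38.0])

def error(coords):
    # Squared discrepancy between the Euclidean model and measurements,
    # the kind of objective a GNP-style scheme minimizes.
    model = np.linalg.norm(landmarks - coords, axis=1)
    return np.sum((model - measured) ** 2)

host = minimize(error, x0=np.zeros(2)).x
print("estimated host coordinates:", host)
# Any two hosts with coordinates can now predict their mutual distance:
print("predicted distance to landmark 0:", np.linalg.norm(host - landmarks[0]))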
The FF planning system: Fast plan generation through heuristic search
Journal of Artificial Intelligence Research, 2001
"... We describe and evaluate the algorithmic techniques that are used in the FF planning system. Like the HSP system, FF relies on forward state space search, using a heuristic that estimates goal distances by ignoring delete lists. Unlike HSP's heuristic, our method does not assume facts to be ind ..."
Abstract

Cited by 822 (53 self)
We describe and evaluate the algorithmic techniques that are used in the FF planning system. Like the HSP system, FF relies on forward state space search, using a heuristic that estimates goal distances by ignoring delete lists. Unlike HSP's heuristic, our method does not assume facts to be independent. We introduce a novel search strategy that combines hill-climbing with systematic search, and we show how other powerful heuristic information can be extracted and used to prune the search space. FF was the most successful automatic planner at the recent AIPS-2000 planning competition. We review the results of the competition, give data for other benchmark domains, and investigate the reasons for the runtime performance of FF compared to HSP.
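The delete-relaxation idea admits a short sketch. The Python version below counts relaxed reachability layers until all goals appear, a simplification of FF's actual relaxed-plan extraction; actions are hypothetical (precondition set, add-effect set) pairs with their delete lists dropped:

def relaxed_distance(state, goals, actions, max_layers=100):
    reached = set(state)
    for layer in range(max_layers):
        if goals <= reached:
            return layer                    # goal-distance estimate
        new = set()
        for pre, add in actions:            # delete lists are ignored
            if pre <= reached:
                new |= add
        if new <= reached:                  # fixpoint: goals unreachable
            return float("inf")
        reached |= new
    return float("inf")

# Toy domain: a enables b, b enables c.
actions = [({"a"}, {"b"}), ({"b"}, {"c"})]
print(relaxed_distance({"a"}, {"c"}, actions))  # -> 2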
Shape modeling with front propagation: A level set approach
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1995
"... Abstract Shape modeling is an important constituent of computer vision as well as computer graphics research. Shape models aid the tasks of object representation and recognition. This paper presents a new approach to shape modeling which retains some of the attractive features of existing methods ..."
Abstract

Cited by 804 (20 self)
Shape modeling is an important constituent of computer vision as well as computer graphics research. Shape models aid the tasks of object representation and recognition. This paper presents a new approach to shape modeling which retains some of the attractive features of existing methods and overcomes some of their limitations. Our techniques can be applied to model arbitrarily complex shapes, which include shapes with significant protrusions, and to situations where no a priori assumption about the object's topology is made. A single instance of our model, when presented with an image having more than one object of interest, has the ability to split freely to represent each object. This method is based on the ideas developed by Osher and Sethian to model propagating solid/liquid interfaces with curvature-dependent speeds. The interface (front) is a closed, non-intersecting hypersurface flowing along its gradient field with constant speed or a speed that depends on the curvature. It is moved by solving a Hamilton-Jacobi type equation written for a function in which the interface is a particular level set. A speed term synthesized from the image is used to stop the interface in the vicinity of object boundaries. The resulting equation of motion is solved by employing entropy-satisfying upwind finite difference schemes. We present a variety of ways of computing the evolving front, including narrow bands, reinitializations, and different stopping criteria. The efficacy of the scheme is demonstrated with numerical experiments on some synthesized images and some low contrast medical images. Index Terms: Shape modeling, shape recovery, interface motion, level sets, hyperbolic conservation laws, Hamilton-Jacobi.
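A toy one-dimensional illustration of the front-propagation update, assuming a constant speed F and omitting the image-derived stopping term and higher dimensions: the level-set function is advanced with an entropy-satisfying first-order upwind scheme of the kind the abstract mentions.

import numpy as np

def evolve(phi, F, dx, dt, steps):
    # Advance d(phi)/dt + F * |grad phi| = 0 with upwind differences.
    for _ in range(steps):
        d_minus = np.diff(phi, prepend=phi[:1]) / dx   # backward difference
        d_plus = np.diff(phi, append=phi[-1:]) / dx    # forward difference
        if F > 0:
            grad = np.sqrt(np.maximum(d_minus, 0)**2 + np.minimum(d_plus, 0)**2)
        else:
            grad = np.sqrt(np.minimum(d_minus, 0)**2 + np.maximum(d_plus, 0)**2)
        phi = phi - dt * F * grad
    return phi

x = np.linspace(-1.0, 1.0, 101)
phi = np.abs(x) - 0.5                 # zero level set at x = +/- 0.5
phi = evolve(phi, F=1.0, dx=x[1] - x[0], dt=0.01, steps=20)
# The zero crossings have moved outward by roughly F * dt * steps = 0.2.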
Blind Beamforming for Non-Gaussian Signals
IEE Proceedings-F, 1993
"... This paper considers an application of blind identification to beamforming. The key point is to use estimates of directional vectors rather than resorting to their hypothesized value. By using estimates of the directional vectors obtained via blind identification i.e. without knowing the arrray mani ..."
Abstract

Cited by 704 (31 self)
This paper considers an application of blind identification to beamforming. The key point is to use estimates of directional vectors rather than resorting to their hypothesized value. By using estimates of the directional vectors obtained via blind identification, i.e., without knowing the array manifold, beamforming is made robust with respect to array deformations, distortion of the wave front, pointing errors, etc., so that neither array calibration nor physical modeling is necessary. Rather surprisingly, 'blind beamformers' may outperform 'informed beamformers' in a plausible range of parameters, even when the array is perfectly known to the informed beamformer. The key assumption blind identification relies on is the statistical independence of the sources, which we exploit using fourth-order cumulants. A computationally efficient technique is presented for the blind estimation of directional vectors, based on joint diagonalization of fourth-order cumulant matrices.
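One ingredient named in the abstract, the fourth-order cumulant matrices, can be estimated with a few lines of numpy. The sketch below assumes zero-mean, circular complex snapshots and uses the standard fourth-moment-minus-Gaussian-terms identity; the joint diagonalization step that actually recovers the directional vectors is omitted.

import numpy as np

def cumulant_matrix(X, M):
    """Estimate Q(M)_ij = sum_kl cum(x_i, conj(x_j), x_k, conj(x_l)) M[l, k].

    Assumes X holds zero-mean, circular complex snapshots with shape
    (sensors, samples), so Q(M) = E[(x^H M x) x x^H] - R tr(M R) - R M R,
    where R = E[x x^H].
    """
    n, T = X.shape
    R = X @ X.conj().T / T                      # sample covariance
    Q = np.zeros((n, n), dtype=complex)
    for t in range(T):
        x = X[:, t:t + 1]                       # column snapshot
        Q += (x.conj().T @ M @ x) * (x @ x.conj().T)
    Q /= T
    Q -= R * np.trace(M @ R) + R @ M @ R        # remove the Gaussian part
    return Q

# Example: one cumulant matrix slice; JADE-style methods jointly
# diagonalize a set of such matrices to estimate directional vectors.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 2000)) + 1j * rng.standard_normal((4, 2000))
print(cumulant_matrix(X, np.eye(4)).round(2))   # near zero for Gaussian input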
Inducing Features of Random Fields
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
"... We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the ..."
Abstract

Cited by 664 (14 self)
We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field, and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches, including decision trees, are given. As a demonstration of the method, we describe its application to the problem of automatic word classifica...
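The weight-estimation step can be illustrated with generalized iterative scaling on a toy log-linear model over a small discrete state space. The sketch assumes, for simplicity, that every state activates the same number of binary features (real GIS adds a correction feature otherwise) and omits the paper's greedy feature induction and graphical structure.

import numpy as np

def iterative_scaling(F, p_empirical, iters=200):
    """F: (features x states) binary matrix; p_empirical: target distribution."""
    C = F.sum(axis=0).max()                 # active features per state
    w = np.zeros(F.shape[0])
    for _ in range(iters):
        p = np.exp(w @ F)
        p /= p.sum()                        # current model distribution
        empirical = F @ p_empirical         # empirical feature expectations
        expected = F @ p                    # model feature expectations
        w += np.log(empirical / expected) / C
    return w

# Two features over three states; column sums are constant, as required.
F = np.array([[1, 0, 1],
              [0, 1, 0]])
w = iterative_scaling(F, np.array([0.5, 0.2, 0.3]))
p = np.exp(w @ F); p /= p.sum()
print(p)   # feature expectations match: p[1] -> 0.2, p[0] + p[2] -> 0.8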
Computer support for knowledge-building communities
The Journal of the Learning Sciences, 1994
"... Nobody wants to use technology to recreate education as it is, yet there is not much to distinguish what goes on in most computersupported classrooms versus traditional classrooms. Kay (1991) has suggested that the phenomenon of reframing innovations to recreate the familiar is itself commonplace. ..."
Abstract

Cited by 593 (4 self)
Nobody wants to use technology to recreate education as it is, yet there is not much to distinguish what goes on in most computer-supported classrooms versus traditional classrooms. Kay (1991) has suggested that the phenomenon of reframing innovations to recreate the familiar is itself commonplace. Thus, one sees all manner of powerful technology (Hypercard, CD-ROM, Lego Logo, and so forth) used to conduct shopworn school activities: copying material from one resource into another (e.g., using Hypercard to assemble sound and visual bites produced by others), and following step-by-step procedures (e.g., creating Lego Logo machines by following steps in a manual). With new technologies, student-generated collages and reproductions appear more inventive and sophisticated, with impressive displays of sound, video, and typography, but from a cognitive perspective, it is not clear what, if any, knowledge content has been processed by the students. In this chapter we offer a suggestion for how to escape the pattern of reinventing the familiar with educational technology. Knowledge-building discourse is at the heart of the superior education that we have in mind. We argue that the classroom needs to foster...
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
IEEE Transactions on Medical Imaging, 2001
"... The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogrambased model, the FM has an intrinsic limi ..."
Abstract

Cited by 619 (14 self)
The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation: no spatial information is taken into account. This causes the FM model to work only on well-defined images with low levels of noise; unfortunately, this is often not the case due to artifacts such as partial volume effect and bias field distortion. Under these conditions, FM model-based methods produce unreliable results. In this paper, we propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by an MRF whose state sequence cannot be observed directly but can be indirectly estimated through observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported methods are limited to using the MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. We show that by incorporating both the HMRF model and the EM algorithm into an HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly, the HMRF-EM framework can easily be combined with other techniques. As an example, we show how the bias field correction algorithm of Guillemaud and Brady (1997) can be incorporated into this framework to achieve a three-dimensional fully automated approach for brain MR image segmentation.
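A much-simplified sketch of the HMRF-EM loop for a two-class 2-D image with Gaussian class likelihoods: the E-step weights each class by an MRF-style prior computed from the current neighbor labels, and the M-step re-estimates class means and variances. Bias-field correction, 3-D volumes, and the paper's exact estimation scheme are all omitted; the neighborhood wraps at image borders purely for brevity.

import numpy as np

def hmrf_em(img, beta=1.0, iters=10):
    mu = np.percentile(img, [25, 75]).astype(float)   # initial class means
    var = np.full(2, img.var())
    labels = (img > img.mean()).astype(int)
    for _ in range(iters):
        post = np.empty((2,) + img.shape)
        for k in (0, 1):
            # MRF prior: count 4-neighbors currently sharing label k
            # (np.roll gives a wrap-around neighborhood, for brevity).
            same = sum(np.roll(labels == k, s, axis=a)
                       for s in (-1, 1) for a in (0, 1))
            prior = np.exp(beta * same)
            lik = np.exp(-(img - mu[k])**2 / (2 * var[k])) / np.sqrt(var[k])
            post[k] = prior * lik
        post /= post.sum(axis=0)                      # E-step posteriors
        labels = post.argmax(axis=0)
        for k in (0, 1):                              # M-step updates
            w = post[k]
            mu[k] = (w * img).sum() / w.sum()
            var[k] = (w * (img - mu[k])**2).sum() / w.sum()
    return labels, mu, var

# Toy usage: a bright square on a noisy background.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (64, 64))
img[16:48, 16:48] += 3.0
labels, mu, var = hmrf_em(img)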