Results 1–10 of 587
Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions
2008
Abstract
Cited by 457 (7 self)
In this article, we give an overview of efficient algorithms for the approximate and exact nearest neighbor problem. The goal is to preprocess a dataset of objects (e.g., images) so that later, given a new query object, one can quickly return the dataset object that is most similar to the query. The problem is of significant interest in a wide variety of areas.
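As a concrete illustration of the hashing approach the title refers to, here is a toy random-hyperplane LSH index in Python. All names, parameters, and the bucket layout are illustrative sketches of the general technique, not taken from the article:

```python
import random

def lsh_signature(v, planes):
    """Hash a vector to a bit tuple: one sign bit per random hyperplane."""
    return tuple(1 if sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0 else 0
                 for p in planes)

def build_index(points, n_bits=8, dim=2, seed=0):
    """Bucket every point by its signature; near vectors tend to collide."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    buckets = {}
    for idx, p in enumerate(points):
        buckets.setdefault(lsh_signature(p, planes), []).append(idx)
    return planes, buckets

def query(q, points, planes, buckets):
    # Compare exactly only against points whose signature collides with q's;
    # fall back to brute force if the bucket is empty.
    candidates = buckets.get(lsh_signature(q, planes), range(len(points)))
    return min(candidates,
               key=lambda i: sum((a - b) ** 2 for a, b in zip(points[i], q)))
```

In practice several independent hash tables are kept to boost the collision probability for true near neighbors; the single table here only shows the mechanism.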
Efficient algorithms for web services selection with end-to-end QoS constraints
ACM Transactions on the Web (TWEB)
Abstract
Cited by 160 (1 self)
Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standards-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multi-dimension multi-choice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multi-constraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article, and their performance is studied by simulations. We also compare the pros and cons of the two models.
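For intuition, the MMKP formulation can be sketched in miniature: pick exactly one candidate service per task so total utility is maximal under a single end-to-end cost budget. This brute-force version is my illustration of the problem shape, not one of the paper's heuristics, and only works for tiny instances:

```python
from itertools import product

def select_services(candidates, budget):
    """Pick one option per task maximizing utility s.t. total cost <= budget.

    candidates: list of tasks; each task is a list of (utility, cost) options.
    Exhaustive search over all combinations -- exponential, so purely
    illustrative of the MMKP structure.
    """
    best, best_util = None, float("-inf")
    for choice in product(*[range(len(task)) for task in candidates]):
        util = sum(candidates[i][c][0] for i, c in enumerate(choice))
        cost = sum(candidates[i][c][1] for i, c in enumerate(choice))
        if cost <= budget and util > best_util:
            best, best_util = choice, util
    return best, best_util
```

A real multi-dimensional instance would carry a vector of QoS costs (latency, price, reliability) per option and a budget per dimension; the single scalar here keeps the sketch short.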
Multiple Object Tracking using K-Shortest Paths Optimization
 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
, 2011
Abstract
Cited by 123 (6 self)
Multi-object tracking can be achieved by detecting objects in individual frames and then linking detections across frames. Such an approach can be made very robust to the occasional detection failure: If an object is not detected in a frame but is in previous and following ones, a correct trajectory will nevertheless be produced. By contrast, a false-positive detection in a few frames will be ignored. However, when dealing with a multiple target problem, the linking step results in a difficult optimization problem in the space of all possible families of trajectories. This is usually dealt with by sampling or greedy search based on variants of Dynamic Programming, which can easily miss the global optimum. In this paper, we show that reformulating that step as a constrained flow optimization results in a convex problem. We take advantage of its particular structure to solve it using the k-shortest paths algorithm, which is very fast. This new approach is far simpler formally and algorithmically than existing techniques and lets us demonstrate excellent performance in two very different contexts.
Network Information Flow with Correlated Sources
 TO APPEAR IN THE IEEE TRANSACTIONS ON INFORMATION THEORY
, 2005
Abstract
Cited by 93 (7 self)
Consider the following network communication setup, originating in a sensor networking application we refer to as the "sensor reachback" problem. We have a directed graph G = (V, E), where V = {v_0, v_1, ..., v_n} and E ⊆ V × V. If (v_i, v_j) ∈ E, then node i can send messages to node j over a discrete memoryless channel (X_ij, p_ij(y|x), Y_ij) of capacity C_ij. The channels are independent. Each node v_i gets to observe a source of information U_i (i = 0...M), with joint distribution p(U_0, U_1, ..., U_M). Our goal is to solve an incast problem in G: nodes exchange messages with their neighbors, and after a finite number of communication rounds, one of the M + 1 nodes (v_0 by convention) must have received enough information to reproduce the entire field of observations (U_0, U_1, ..., U_M), with arbitrarily small probability of error. In this paper, we prove that such perfect reconstruction is possible if and only if H(U_S | U_{S^c}) < Σ_{i∈S, j∈S^c} C_ij for all S ⊆ {0...M}, S ≠ ∅, 0 ∈ S^c. Our main finding is that in this setup a general source/channel separation theorem holds, and that Shannon information behaves as a classical network flow, identical in nature to the flow of water in pipes. At first glance, it might seem surprising that separation holds in a ...
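To unpack the cut condition in the smallest case (a worked instance of mine, not from the abstract): with a single sensor v_1 reporting to the collector v_0, the only nonempty set S with 0 ∈ S^c is S = {1}, and the condition collapses to a familiar point-to-point bound:

```latex
H(U_S \mid U_{S^c}) < \sum_{i \in S,\; j \in S^c} C_{ij}
\quad\xrightarrow{\;S = \{1\}\;}\quad
H(U_1 \mid U_0) < C_{10}
```

That is, the conditional entropy of the sensor's observation given the collector's side information must fit within the capacity of the channel into the collector; the general condition asks the same of every cut separating a set of sources from v_0.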
A Novel Coevolutionary Approach to Automatic Software Bug Fixing
In Proceedings of the IEEE Congress on Evolutionary Computation (CEC ’08)
, 2008
Abstract
Cited by 81 (8 self)
... expensive, and that has led to investigation of how to automate them. In particular, Software Testing can take up to half of the resources of the development of new software. Although there has been a lot of work on automating the testing phase, fixing a bug after its presence has been discovered is still a duty of the programmers. In this paper we propose an evolutionary approach to automate the task of fixing bugs. This novel evolutionary approach is based on Coevolution, in which programs and test cases coevolve, influencing each other with the aim of fixing the bugs of the programs. This competitive coevolution is similar to what happens in nature between predators and prey. The user needs only to provide a buggy program and a formal specification of it. No other information is required. Hence, the approach may work for any implementable software. We show some preliminary experiments in which bugs in an implementation of a sorting algorithm are automatically fixed.
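A toy rendition of the competitive loop, under heavy simplifying assumptions of mine (candidate "programs" are just callables standing in for mutated program variants, and the "formal specification" is an oracle function). This sketches the general idea only, not the authors' system:

```python
import random

def evolve_fix(candidates, spec, seed=0, generations=20, pop=8):
    """Competitive coevolution sketch: programs vs. test inputs.

    Program fitness: how many current tests a candidate passes.
    Test fitness: a test survives if it exposes a bug in some candidate.
    """
    rng = random.Random(seed)
    tests = [rng.randint(-10, 10) for _ in range(pop)]
    best = candidates[0]
    for _ in range(generations):
        # select the candidate program passing the most current tests
        scores = [sum(p(x) == spec(x) for x in tests) for p in candidates]
        best = candidates[scores.index(max(scores))]
        # keep only tests that "kill" some candidate, refill with fresh ones
        killers = [x for x in tests if any(p(x) != spec(x) for p in candidates)]
        tests = killers + [rng.randint(-10, 10)
                           for _ in range(pop - len(killers))]
    return best
```

In the actual approach both populations are mutated and recombined each generation; here only the test side changes, which is enough to show the predator-prey pressure.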
Evaluation of Machine Translation and its Evaluation
 In Proceedings of MT Summit IX
, 2003
Abstract
Cited by 75 (4 self)
Evaluation of MT evaluation measures is limited by inconsistent human judgment data. Nonetheless, machine translation can be evaluated using the well-known measures precision, recall, and the F-measure. The F-measure has significantly higher correlation with human judgments than recently proposed alternatives. More importantly, the standard measures have an intuitive graphical interpretation, which can facilitate insight into how MT systems might be improved. The relevant software is publicly available from http://nlp.cs.nyu.edu/GTM/
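The standard measures mentioned are easy to state concretely. Here is a unigram version; the released GTM tool generalizes this by matching longer contiguous runs of tokens, so this simplification is mine:

```python
from collections import Counter

def f_measure(candidate_tokens, reference_tokens):
    """Unigram precision, recall, and F1 between a candidate translation
    and a reference, using clipped (multiset) token overlap."""
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    match = sum((cand & ref).values())  # multiset intersection size
    precision = match / max(sum(cand.values()), 1)
    recall = match / max(sum(ref.values()), 1)
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f
```

The harmonic mean penalizes imbalance: a candidate that copies the whole reference plus garbage gets perfect recall but poor precision, and the F-measure drops accordingly.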
Relay node placement in wireless sensor networks
 IEEE TRANSACTIONS ON COMPUTERS
, 2007
Abstract
Cited by 69 (6 self)
A wireless sensor network consists of many low-cost, low-power sensor nodes, which can perform sensing, simple computation, and transmission of sensed information. Long distance transmission by sensor nodes is not energy efficient, since energy consumption is a superlinear function of the transmission distance. One approach to prolong network lifetime while preserving network connectivity is to deploy a small number of costly, but more powerful, relay nodes whose main task is communication with other sensor or relay nodes. In this paper, we assume that sensor nodes have communication range r > 0 while relay nodes have communication range R ≥ r, and study two versions of relay node placement problems. In the first version, we want to deploy the minimum number of relay nodes so that between each pair of sensor nodes, there is a connecting path consisting of relay and/or sensor nodes. In the second version, we want to deploy the minimum number of relay nodes so that between each pair of sensor nodes, there is a connecting path consisting solely of relay nodes. We present a polynomial time 7-approximation algorithm for the first problem, and a polynomial time (5 + ɛ)-approximation algorithm for the second problem, where ɛ > 0 can be any given constant.
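A classic building block for approximations of this kind is "steinerizing" a minimum spanning tree: connect the sensors by an MST, then subdivide each long edge with evenly spaced relays. This sketch is my illustration of that placement rule; the paper's 7- and (5 + ɛ)-approximation algorithms are more involved:

```python
import math

def steinerize(sensors, r):
    """Place relays on MST edges so every hop is within sensor range r.

    On an edge of Euclidean length d > r, ceil(d / r) - 1 relays are
    placed at equal intervals. Returns the list of relay coordinates.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Prim's algorithm over the complete Euclidean graph
    in_tree, edges = {0}, []
    while len(in_tree) < len(sensors):
        i, j = min(((i, j) for i in in_tree
                    for j in range(len(sensors)) if j not in in_tree),
                   key=lambda e: dist(sensors[e[0]], sensors[e[1]]))
        in_tree.add(j)
        edges.append((i, j))
    relays = []
    for i, j in edges:
        d = dist(sensors[i], sensors[j])
        n = max(0, math.ceil(d / r) - 1)
        for k in range(1, n + 1):
            t = k / (n + 1)  # interpolate along the edge
            relays.append((sensors[i][0] + t * (sensors[j][0] - sensors[i][0]),
                           sensors[i][1] + t * (sensors[j][1] - sensors[i][1])))
    return relays
```

This connects every sensor pair through relays and/or sensors (the first problem variant); the second variant, paths of relays only, needs a different construction.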
On the Maximum Stable Throughput Problem in Random Networks with Directional Antennas
 IN PROC. ACM MOBIHOC
, 2003
Abstract
Cited by 67 (6 self)
We consider the problem of determining rates of growth for the maximum stable throughput achievable in dense wireless networks. We formulate this problem as one of finding maximum flows on random unit-disk graphs. Equipped with the max-flow/min-cut theorem as our basic analysis tool, we obtain rates of growth under three models of communication: (a) omnidirectional transmissions; (b) "simple" directional transmissions, in which sending nodes generate a single beam aimed at a particular receiver; and (c) "complex" directional transmissions, in which sending nodes generate multiple beams aimed at multiple receivers. Our main finding is that an increase of 54 in maximum stable throughput is all that can be achieved by allowing arbitrarily complex signal processing (in the form of generation of directed beams) at the transmitters and receivers. We conclude therefore that neither directional antennas, nor the ability to communicate simultaneously with multiple nodes, can be expected in practice to effectively circumvent the constriction on capacity in dense networks that results from the geometric layout of nodes in space.
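The analysis object itself, an s-t max flow on a unit-disk graph, is easy to compute on small instances. A sketch with illustrative names and unit edge capacities, using Edmonds-Karp (BFS augmenting paths):

```python
import math
from collections import deque, defaultdict

def unit_disk_max_flow(points, radius, s, t):
    """Build a unit-disk graph over the points (edge iff distance <= radius,
    capacity 1 per directed edge) and return the s-t max flow."""
    n = len(points)
    cap = defaultdict(int)
    for i in range(n):
        for j in range(n):
            if i != j and math.dist(points[i], points[j]) <= radius:
                cap[(i, j)] = 1
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # push one unit along the path, updating residual capacities
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
```

The paper studies how this quantity scales on *random* point sets as density grows; the min-cut around a node's geometric neighborhood is what ultimately constrains throughput.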
Exact and approximate algorithms for the extension of embedded processor instruction sets
 IEEE TRANS. ON CAD OF INTEGRATED CIRCUITS AND SYSTEMS
, 2006
Abstract
Cited by 66 (19 self)
In embedded computing, cost, power, and performance constraints call for the design of specialized processors, rather than for the use of the existing off-the-shelf solutions. While the design of these application-specific CPUs could be tackled from scratch, a cheaper and more effective option is that of extending the existing processors and toolchains. Extensibility is indeed a feature now offered in real designs, e.g., by processors such as Tensilica Xtensa [T. R. Halfhill, Microprocess ...
Out-of-core algorithms for scientific visualization and computer graphics
 In Visualization’02 Course Notes
, 2002
Abstract
Cited by 59 (11 self)
Recently, several external memory techniques have been developed for a wide variety of graphics and visualization problems, including surface simplification, volume rendering, isosurface generation, ray tracing, surface reconstruction, and so on. This work has had significant impact given that in recent years there has been a rapid increase in the raw size of datasets. Several technological trends are contributing to this, such as the development of high-resolution 3D scanners, and the need to visualize ASCI-size (Accelerated Strategic Computing Initiative) datasets. Another important push for this kind of technology is the growing speed gap between main memory and caches, which penalizes algorithms that do not optimize for coherence of access. Because of these reasons, much research in computer graphics focuses on developing out-of-core (and often cache-friendly) techniques. This paper surveys fundamental issues, current problems, and unresolved questions, and aims to provide graphics researchers and professionals with an effective knowledge of current techniques, as well as the foundation to develop novel techniques on their own. Keywords: out-of-core algorithms, scientific visualization, computer graphics, interactive rendering, volume rendering, surface simplification.
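The core out-of-core pattern, streaming data in fixed-size blocks so memory stays bounded regardless of file size, fits in a few lines. A minimal Python sketch of my own, unrelated to any specific system in the survey:

```python
def external_sum_of_squares(path, chunk_chars=1 << 16):
    """Sum the squares of one-number-per-line values in a file,
    reading at most chunk_chars characters at a time: memory use is
    O(chunk), not O(file)."""
    total, leftover = 0.0, ""
    with open(path) as f:
        while True:
            block = f.read(chunk_chars)
            chunk = leftover + block
            if not block:  # EOF: flush whatever remains
                if chunk.strip():
                    total += sum(float(x) ** 2 for x in chunk.split())
                return total
            lines = chunk.split("\n")
            leftover = lines.pop()  # last piece may be a number cut in half
            total += sum(float(x) ** 2 for x in lines if x)
```

Real out-of-core visualization algorithms additionally reorder data on disk so that each block read is reused many times before being evicted, which is the cache-coherence concern the survey highlights.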