Results 1–10 of 434
The Hungarian method for the assignment problem
 Naval Res. Logist. Quart., 1955
Abstract

Cited by 1238 (0 self)
Assuming that numerical scores are available for the performance of each of n persons on each of n jobs, the "assignment problem" is the quest for an assignment of persons to jobs so that the sum of the n scores so obtained is as large as possible. It is shown that ideas latent in the work of two Hungarian mathematicians may be exploited to yield a new method of solving this problem.
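The abstract above defines the assignment problem that the Hungarian method solves in polynomial time. As a minimal sketch of the problem itself (not of the Hungarian algorithm, whose reduction steps are more involved), brute-force enumeration over all permutations makes the objective easy to verify for small n:

```python
from itertools import permutations

def best_assignment(scores):
    """Brute-force the assignment problem: pick one job per person so
    the total score is maximized. The Hungarian method finds the same
    optimum in O(n^3); enumeration is O(n!) and shown only because it
    is trivially checkable for small n."""
    n = len(scores)
    best_total, best_perm = None, None
    for perm in permutations(range(n)):
        total = sum(scores[person][job] for person, job in enumerate(perm))
        if best_total is None or total > best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm

# Hypothetical 3x3 score matrix: scores[person][job].
scores = [
    [3, 1, 2],
    [2, 4, 6],
    [5, 2, 1],
]
total, perm = best_assignment(scores)  # perm[i] is the job given to person i
```

Here the optimum assigns person 0 to job 1, person 1 to job 2, and person 2 to job 0, for a total score of 12.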
Bundle Methods for Regularized Risk Minimization
Abstract

Cited by 78 (4 self)
A wide variety of machine learning problems can be described as minimizing a regularized risk functional, with different algorithms using different notions of risk and different regularizers. Examples include linear Support Vector Machines (SVMs), Gaussian Processes, Logistic Regression, Conditional Random Fields (CRFs), and Lasso amongst others. This paper describes the theory and implementation of a scalable and modular convex solver which solves all these estimation problems. It can be parallelized on a cluster of workstations, allows for data locality, and can deal with regularizers such as L1 and L2 penalties. In addition to the unified framework we present tight convergence bounds, which show that our algorithm converges in O(1/ɛ) steps to ɛ precision for general convex problems and in O(log(1/ɛ)) steps for continuously differentiable problems. We demonstrate the performance of our general purpose solver on a variety of publicly available datasets.
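The core object a bundle method maintains is a piecewise-linear lower bound on the convex risk, built from first-order cuts; each iteration minimizes this model rather than the risk itself. A toy one-dimensional sketch (a stand-in convex function, not the paper's regularized-risk solver) shows how the cuts are assembled:

```python
def cutting_plane_model(f, grad, points):
    """Build a piecewise-linear lower bound on a convex f from
    (sub)gradient cuts taken at the given points. Each cut is the
    tangent line f(w0) + f'(w0) * (w - w0); taking the pointwise max
    over cuts never exceeds f, by convexity."""
    cuts = [(grad(w0), f(w0) - grad(w0) * w0) for w0 in points]  # (slope, intercept)
    return lambda w: max(a * w + b for a, b in cuts)

f = lambda w: w * w          # stand-in convex risk functional
grad = lambda w: 2.0 * w     # its gradient
model = cutting_plane_model(f, grad, [-1.0, 0.5])
```

The model is exact at each cut point (model(-1.0) == f(-1.0) == 1.0) and a strict lower bound elsewhere; a bundle method would next minimize `model` (plus a stabilizing regularizer) to choose the following cut point.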
Clothing cosegmentation for recognizing people
 In Proc. of Conf. on Computer Vision and Pattern Recognition, 2008
Abstract

Cited by 76 (3 self)
Researchers have verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are intertwined; a good solution for one aids in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.
On coreference resolution performance metrics
 In Proc. of HLT/EMNLP, 2005
Abstract

Cited by 70 (0 self)
The paper proposes a Constrained Entity-Alignment F-Measure (CEAF) for evaluating coreference resolution. The metric is computed by aligning reference and system entities (or coreference chains) with the constraint that a system (reference) entity is aligned with at most one reference (system) entity. We show that the best alignment is a maximum bipartite matching problem which can be solved by the Kuhn-Munkres algorithm. Comparative experiments are conducted to show that the widely-known MUC F-measure has serious flaws in evaluating a coreference system. The proposed metric is also compared with the ACE-Value, the official evaluation metric in the Automatic Content Extraction (ACE) task, and we conclude that the proposed metric possesses some properties such as symmetry and better interpretability missing in the ACE-Value.
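The alignment step described above is a one-to-one matching of reference and system entities that maximizes a total similarity. A minimal sketch, using shared mentions as the similarity and brute force instead of the Kuhn-Munkres algorithm (and assuming, for brevity, equally many entities on both sides):

```python
from itertools import permutations

def best_alignment_score(reference, system):
    """Score of the best one-to-one alignment between reference and
    system entities, each given as a set of mentions. Similarity here
    is the number of shared mentions; CEAF solves this matching with
    the Kuhn-Munkres algorithm, brute force is used for clarity."""
    n = len(reference)
    sim = lambda r, s: len(r & s)
    return max(
        sum(sim(reference[i], system[p[i]]) for i in range(n))
        for p in permutations(range(n))
    )

# Hypothetical coreference chains as mention-ID sets.
ref = [{"a", "b", "c"}, {"d", "e"}]
sys_chains = [{"a", "b"}, {"c", "d", "e"}]
score = best_alignment_score(ref, sys_chains)
```

Here the identity alignment shares 2 + 2 = 4 mentions while the swapped alignment shares only 1, so the constrained optimum is 4; CEAF then normalizes such a score by the mention totals on each side to obtain precision, recall, and F.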
Framework for Performance Evaluation of Face, Text, and Vehicle Detection and Tracking in Video: Data, Metrics, and Protocol
 IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008
Abstract

Cited by 65 (3 self)
Common benchmark data sets, standardized performance metrics, and baseline algorithms have demonstrated considerable impact on research and development in a variety of application domains. These resources provide both consumers and developers of technology with a common framework to objectively compare the performance of different algorithms and algorithmic improvements. In this paper, we present such a framework for evaluating object detection and tracking in video: specifically for face, text, and vehicle objects. This framework includes the source video data, ground-truth annotations (along with guidelines for annotation), performance metrics, evaluation protocols, and tools including scoring software and baseline algorithms. For each detection and tracking task and supported domain, we developed a 50-clip training set and a 50-clip test set. Each data clip is approximately 2.5 minutes long and has been completely spatially/temporally annotated at the I-frame level. Each task/domain, therefore, has an associated annotated corpus of approximately 450,000 frames. The scope of such annotation is unprecedented and was designed to begin to support the necessary quantities of data for robust machine learning approaches, as well as a statistically significant comparison of the performance of algorithms. The goal of this work was to systematically address the challenges of object detection and tracking through a common evaluation framework that permits a meaningful objective comparison of techniques, provides the research community with sufficient data for the exploration of automatic modeling techniques, encourages the incorporation of objective evaluation into the development process, and contributes useful lasting resources of a scale and magnitude that will prove to be extremely useful to the computer vision research community for years to come.
Video Event Recognition Using Kernel Methods with Multilevel Temporal Alignment
 IEEE Trans. Pattern Analysis and Machine Intelligence, 2008
Abstract

Cited by 40 (6 self)
In this work, we systematically study the problem of event recognition in unconstrained news video sequences. We adopt the discriminative kernel-based method for which video clip similarity plays an important role. First, we represent a video clip as a bag of orderless descriptors extracted from all of the constituent frames and apply the earth mover's distance (EMD) to integrate similarities among frames from two clips. Observing that a video clip is usually comprised of multiple subclips corresponding to event evolution over time, we further build a multilevel temporal pyramid. At each pyramid level, we integrate the information from different subclips with integer-value-constrained EMD to explicitly align the subclips. By fusing the information from the different pyramid levels, we develop Temporally Aligned Pyramid Matching (TAPM) for measuring video similarity. We conduct comprehensive experiments on the TRECVID 2005 corpus, which contains more than 6,800 clips. Our experiments demonstrate that 1) the TAPM multilevel method clearly outperforms single-level EMD (SLEMD) and 2) SLEMD outperforms keyframe and multiframe-based detection methods by a large margin. In addition, we conduct an in-depth investigation of various aspects of the proposed techniques such as weight selection in SLEMD, sensitivity to temporal clustering, the effect of temporal alignment, and possible approaches for speedup. Extensive analysis of the results also reveals an intuitive interpretation of video event recognition through video subclip alignment at different levels. Index Terms—Event recognition, news video, concept ontology, Temporally Aligned Pyramid Matching, video indexing, earth mover's distance.
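The paper applies the general earth mover's distance between sets of frame descriptors; in the special case of two one-dimensional histograms with equal total mass, EMD reduces to a closed form, the sum of absolute differences of the running totals, which makes the "mass transport" intuition easy to see:

```python
def emd_1d(p, q):
    """Earth mover's distance between two 1-D histograms of equal
    total mass: accumulate the surplus/deficit bin by bin and sum its
    absolute value, i.e. the mass that must cross each bin boundary.
    This closed form holds only in 1-D; the paper uses the general
    (linear-programming) EMD between frame-descriptor sets."""
    assert abs(sum(p) - sum(q)) < 1e-9, "EMD requires equal total mass"
    dist, carry = 0.0, 0.0
    for pi, qi in zip(p, q):
        carry += pi - qi     # net mass pushed past this bin boundary
        dist += abs(carry)
    return dist

# Shifting half the mass one bin to the right costs 0.5 * 2 boundaries = 1.0.
d = emd_1d([0.5, 0.5, 0.0], [0.0, 0.5, 0.5])
```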
Learning language semantics from ambiguous supervision
 In AAAI, 2007
Abstract

Cited by 40 (9 self)
This paper presents a method for learning a semantic parser from ambiguous supervision. Training data consists of natural language sentences annotated with multiple potential meaning representations, only one of which is correct. Such ambiguous supervision models the type of supervision that can be more naturally available to language-learning systems. Given such weak supervision, our approach produces a semantic parser that maps sentences into meaning representations. An existing semantic parsing learning system that can only learn from unambiguous supervision is augmented to handle ambiguous supervision. Experimental results show that the resulting system is able to cope with ambiguities and learn accurate semantic parsers.
Reweighted random walks for graph matching
 In ECCV, 2010
Abstract

Cited by 40 (4 self)
Graph matching is an essential problem in computer vision and machine learning. In this paper, we introduce a random walk view on the problem and propose a graph matching algorithm robust against outliers and deformation. Matching between two graphs is formulated as node selection on an association graph whose nodes represent candidate correspondences between the two graphs. The solution is obtained by simulating random walks with reweighting jumps enforcing the matching constraints on the association graph. Our algorithm achieves noise-robust graph matching by iteratively updating and exploiting the confidences of candidate correspondences. In a practical sense, our work is of particular importance since the real-world matching problem is made difficult by the presence of noise and outliers. Extensive and comparative experiments demonstrate that it outperforms the state-of-the-art graph matching algorithms especially in the presence of outliers and deformation.
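The random-walk view above can be approximated in its simplest form by power iteration on the association graph's affinity matrix: the stationary distribution concentrates on mutually consistent correspondences. This sketch omits the paper's reweighting jumps that enforce the one-to-one matching constraints:

```python
def correspondence_confidences(affinity, iters=50):
    """Plain power iteration on an association-graph affinity matrix.
    Each node is a candidate correspondence; entry affinity[i][j]
    measures how compatible candidates i and j are. The normalized
    iterate converges to the dominant eigenvector, whose large entries
    mark clusters of mutually consistent correspondences."""
    n = len(affinity)
    x = [1.0 / n] * n
    for _ in range(iters):
        y = [sum(affinity[i][j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(y)
        x = [v / s for v in y]
    return x

# Hypothetical example: candidates 0 and 1 strongly agree with each
# other; candidate 2 is an outlier weakly tied to the rest.
A = [
    [1.0, 0.9, 0.1],
    [0.9, 1.0, 0.1],
    [0.1, 0.1, 1.0],
]
conf = correspondence_confidences(A)
```

The two consistent candidates receive equal, dominant confidences while the outlier's confidence is suppressed, which is the discrimination the random-walk formulation exploits.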
Comparison of tree-child phylogenetic networks
 IEEE/ACM Trans. Comput. Biol. Bioinform., 2009
Lower Bounds for the Quadratic Assignment Problem Based Upon a Dual Formulation
Abstract

Cited by 39 (7 self)
A new bounding procedure for the Quadratic Assignment Problem (QAP) is described which extends the Hungarian method for the Linear Assignment Problem (LAP) to QAPs, operating on the four-dimensional cost array of the QAP objective function. The QAP is iteratively transformed into a series of equivalent QAPs leading to an increasing sequence of lower bounds for the original problem. To this end, two classes of operations which transform the four-dimensional cost array are defined. These have the property that the values of the transformed objective function Z' are the corresponding values of the old objective function Z, shifted by some amount C. In the case that all entries of the transformed cost array are nonnegative, then C is a lower bound for the initial QAP. If, moreover, there exists a feasible solution U to the QAP, such that its value in the transformed problem is zero, then C is the optimal value of Z and U is an optimal solution for the original QAP. The transformations are iteratively applied until no significant increase in the constant C as above is found, resulting in the so-called Dual Procedure (DP). Several strategies are listed for appropriately determining C, or equivalently, transforming the cost array. The goal is the modification of the elements in the cost array so as to obtain new equivalent problems which bring the QAP closer to solution. In some cases the QAP is actually solved, though solution is not guaranteed. The close relationship between the DP and the Linear Programming formulation of Adams and Johnson is presented. The DP attempts to solve Adams and Johnson's CLP, a continuous relaxation of a linearization of the QAP. This explains why the DP produces bounds close to the optimum values for CLP calculated by Johnson in her dissertation and by...
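The shift-by-a-constant idea described above is easiest to see in the 2-D (LAP) setting the paper generalizes from: subtracting each row's and column's minimum changes every assignment's cost by the same constant C, so once all entries are nonnegative, C is a lower bound. A minimal sketch of these reductions (the paper lifts the same principle to the QAP's 4-D cost array):

```python
def reduce_costs(cost):
    """Row and column reductions on a square LAP cost matrix.
    Every feasible assignment uses exactly one entry per row and per
    column, so subtracting a row's (or column's) minimum shifts all
    assignment costs by that amount. The accumulated shift C is a
    lower bound once the transformed entries are all nonnegative."""
    n = len(cost)
    cost = [row[:] for row in cost]   # don't mutate the caller's matrix
    C = 0.0
    for i in range(n):                # row reductions
        m = min(cost[i])
        C += m
        cost[i] = [c - m for c in cost[i]]
    for j in range(n):                # column reductions
        m = min(cost[i][j] for i in range(n))
        C += m
        for i in range(n):
            cost[i][j] -= m
    return C, cost

C, reduced = reduce_costs([[4.0, 2.0], [3.0, 5.0]])
```

Here C = 5 and the reduced matrix admits a zero-cost assignment (row 0 to column 1, row 1 to column 0), so, exactly as the abstract states for the transformed QAP, C is not just a lower bound but the optimal value (2 + 3 = 5 in the original costs).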