Results 11 – 20 of 378
Fast maximum margin matrix factorization for collaborative prediction
 In Proceedings of the 22nd International Conference on Machine Learning (ICML), 2005
Abstract

Cited by 248 (6 self)
Maximum Margin Matrix Factorization (MMMF) was recently suggested (Srebro et al., 2005) as a convex, infinite-dimensional alternative to low-rank approximations and standard factor models. MMMF can be formulated as a semidefinite program (SDP) and learned using standard SDP solvers. However
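A minimal sketch of the fast (non-SDP) MMMF idea referenced above: factor the rating matrix as X ≈ U Vᵀ, replace the trace norm by the surrogate (‖U‖²_F + ‖V‖²_F)/2, and minimize hinge loss on observed binary ratings by subgradient descent. All names, sizes, and hyperparameters here are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of fast MMMF (assumed setup, not the paper's code):
# factor X ~ U V^T, surrogate trace norm (||U||_F^2 + ||V||_F^2)/2,
# hinge loss on observed binary ratings, plain subgradient descent.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, lam, lr = 20, 15, 3, 0.1, 0.05

# Observed binary ratings in {-1, +1}; mask marks which entries are observed.
Y = np.sign(rng.standard_normal((n_users, n_items)))
mask = rng.random((n_users, n_items)) < 0.5

U = 0.1 * rng.standard_normal((n_users, k))
V = 0.1 * rng.standard_normal((n_items, k))

def objective(U, V):
    margin = Y * (U @ V.T)
    hinge = np.maximum(0.0, 1.0 - margin) * mask  # loss on observed entries
    return hinge.sum() + 0.5 * lam * ((U**2).sum() + (V**2).sum())

obj0 = objective(U, V)
for _ in range(200):
    margin = Y * (U @ V.T)
    # Subgradient of the hinge loss, nonzero only where margin < 1.
    G = -(Y * mask * (margin < 1.0))
    U, V = U - lr * (G @ V + lam * U), V - lr * (G.T @ U + lam * V)
```

The SDP view optimizes over the full matrix with a trace-norm constraint; the factored surrogate above is non-convex but scales to large matrices, which is the trade-off the abstract alludes to.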
The Most Generative Maximum Margin Bayesian Networks
"... *These authors contributed equally to this paper. Although discriminative learning in graphical models generally improves classification results, the generative semantics of the model are compromised. In this paper, we introduce a novel approach of hybrid generative-discriminative learning for Bayesian ne ..."
Abstract

Cited by 1 (1 self)
networks. We use an SVM-type large margin formulation for discriminative training, introducing a likelihood-weighted ℓ1-norm for the SVM-norm penalization. This simultaneously optimizes the data likelihood and therefore partly maintains the generative character of the model. For many network structures
Learning optimal seeds for diffusion-based salient object detection
"... In diffusion-based saliency detection, an image is partitioned into superpixels and mapped to a graph, with superpixels as nodes and edge strengths proportional to superpixel similarity. Saliency information is then propagated over the graph using a diffusion process, whose equilibrium state yiel ..."
Abstract

Cited by 5 (1 self)
and background saliency is then learned, using a large-margin formulation of the discriminant saliency principle. The propagation of the resulting saliency seeds, using a diffusion process, is finally shown to outperform the state of the art on a number of salient object detection datasets.
Video Event Detection by Inferring Temporal Instance Labels
"... Video event detection allows intelligent indexing of video content based on events. Traditional approaches extract features from video frames or shots, then quantize and pool the features to form a single vector representation for the entire video. Though simple and efficient, the final pooling step ..."
Abstract
segments of different temporal intervals. The objective is to learn an instance-level event detection model based on only video-level labels. To solve this problem, we propose a large-margin formulation which treats the instance labels as hidden latent variables, and simultaneously infers the instance
Modified MPE/MMI in a transducer-based framework
 in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, 2009
"... In this paper we show how common training criteria like for example MPE or MMI can be extended to incorporate a margin term. In addition, a transducer-based training implementation is presented, which covers a large variety of discriminative training criteria for ASR, including the standard MMI, MPE ..."
Abstract

Cited by 8 (7 self)
, MPE, and MCE criteria, as well as the modifications to these criteria presented here. The modified criteria are directly related to the conventional large margin formulation of SVMs. In the proposed approach, we can take advantage of the generalization guarantees of large margin classifiers while
Large Margin Filtering
"... Many signal processing problems are tackled by filtering the signal and subsequent feature classification or regression. Both steps are critical and need to be designed carefully to deal with the particular statistical characteristics of both signal and noise. Optimal design of the filter and the cl ..."
Abstract

Cited by 3 (1 self)
segmentation. In all the examples, large margin filtering shows competitive classification performance while offering the advantage of interpretability of the retrieved filtered channels.
Large Relative Margin and Applications
Abstract
Over the last decade or so, machine learning algorithms such as support vector machines, boosting, etc., have become extremely popular. The core idea in these and other related algorithms is the notion of large margin. Simply put, the idea is to geometrically separate two classes with a large
Learning Large Margin Mappings
Abstract
We present a method to simultaneously learn a mixture of mappings and a large margin hyperplane classifier. This method learns useful mappings of the training data to improve classification accuracy. We first present a simple iterative algorithm that finds a greedy local solution and then derive a
Structured Learning from Partial Annotations
"... Structured learning is appropriate when predicting structured outputs such as trees, graphs, or sequences. Most prior work requires the training set to consist of complete trees, graphs or sequences. Specifying such detailed ground truth can be tedious or infeasible for large outputs. Our main contr ..."
Abstract

Cited by 3 (1 self)
contribution is a large margin formulation that makes structured learning from only partially annotated data possible. The resulting optimization problem is non-convex, yet can be efficiently solved by the concave-convex procedure (CCCP) with novel speed-up strategies. We apply our method to a challenging tracking-by