Results

### Rank Consistency based Multi-View Learning: A Privacy-Preserving Approach

Abstract

Complex media objects are often described by multi-view feature groups collected from diverse domains or information channels. Multi-view learning, which attempts to exploit the relationship among multiple views to improve learning performance, has drawn extensive attention. It is noteworthy that in some real-world applications, features of different views may come from different private data repositories, and thus it is desirable to exploit the view relationship while simultaneously preserving data privacy. Existing multi-view learning approaches such as subspace methods and pre-fusion methods are not applicable in this scenario because they need to access all the features, whereas late-fusion approaches cannot exploit information from other views to improve the individual view-specific learners. In this paper, we propose a novel multi-view learning framework which works in a hybrid fusion manner. Specifically, we convert the predicted values of each view into an Accumulated Prediction Matrix (APM) with a low-rank constraint enforced jointly by the multiple views. The joint low-rank constraint enables each view-specific learner to exploit the other views to help improve its performance, without accessing the features of those views. Thus, the proposed RANC framework provides a privacy-preserving way for multi-view learning. Furthermore, we consider variants of solutions to achieve rank consistency and present corresponding methods for the optimization. Empirical investigations on real datasets show that the proposed method achieves state-of-the-art performance on various tasks.
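
As an illustration of the joint low-rank idea (not the paper's RANC algorithm), per-view prediction vectors can be stacked into an instances-by-views matrix and projected onto the set of low-rank matrices, so that views regularize one another while exchanging only predictions, never raw features. The function name and the hard-rank truncated-SVD projection are illustrative assumptions:

```python
import numpy as np

def joint_low_rank_predictions(preds, rank):
    """Illustrative sketch: stack per-view prediction vectors into a
    matrix (instances x views) and project onto rank-r matrices via
    truncated SVD. Only predictions are shared between views, so no
    view ever sees another view's raw features."""
    P = np.column_stack(preds)                 # instances x views
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    s[rank:] = 0.0                             # keep only the top-r singular values
    return U @ np.diag(s) @ Vt
```

In the paper the low-rank constraint is enforced jointly during training; this standalone projection only conveys how agreement across views shows up as low rank of the stacked prediction matrix.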

### A General Framework for Fast Stagewise Algorithms

Abstract

Forward stagewise regression follows a very simple strategy for constructing a sequence of sparse regression estimates: it starts with all coefficients equal to zero, and iteratively updates the coefficient (by a small amount ε) of the variable that achieves the maximal absolute inner product with the current residual. This procedure has an interesting connection to the lasso: under some conditions, it can be shown that the sequence of forward stagewise estimates exactly coincides with the lasso path, as the step size goes to zero. Furthermore, essentially the same equivalence holds outside of least squares regression, with the minimization of a differentiable convex loss function subject to an ℓ1 norm constraint (the stagewise algorithm now updates the coefficient corresponding to the maximal absolute component of the gradient). Even when they do not match their ℓ1-constrained analogues, stagewise estimates provide a useful approximation, and are computationally appealing. Their success in sparse modeling motivates the question: can a simple, effective strategy like forward stagewise be applied more broadly in other regularization settings, beyond the ℓ1 norm and sparsity? The current paper is an attempt to do just this. We present a general framework for stagewise estimation, which yields fast algorithms for problems such as group-structured learning, matrix completion, image denoising, and more.
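
The basic procedure described above fits in a few lines; this is a generic sketch of classic forward stagewise with an assumed step size, not the paper's general framework:

```python
import numpy as np

def forward_stagewise(X, y, step=0.01, n_iter=1000):
    """Forward stagewise regression: start at zero and repeatedly
    nudge, by a small fixed step, the coefficient of the variable
    most correlated (in absolute value) with the current residual."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta                      # current residual
        corr = X.T @ r                        # inner products with residual
        j = int(np.argmax(np.abs(corr)))      # most correlated variable
        beta[j] += step * np.sign(corr[j])    # tiny move in its direction
    return beta
```

As the abstract notes, shrinking `step` toward zero makes the coefficient paths traced by this loop coincide (under conditions) with the lasso path.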

### Editor: U.N.Known

Abstract

We extend the well-known BFGS quasi-Newton method and its limited-memory variant LBFGS to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-regularized risk minimization with the binary hinge loss. To extend our algorithm to the multiclass and multilabel settings we develop a new, efficient, exact line search algorithm. We prove its worst-case time complexity bounds, and show that it can also extend a recently developed bundle method to the multiclass and multilabel settings. We also apply the direction-finding component of our algorithm to L1-regularized risk minimization with logistic loss. In all these contexts our methods perform comparably to or better than specialized state-of-the-art solvers on a number of publicly available datasets. Open source software implementing our algorithms is freely available for download.
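
The nonsmoothness the abstract refers to is concrete: the hinge loss has a kink at margin 1, so only subgradients are available there. A minimal sketch of one subgradient of the L2-regularized binary hinge risk (picking the zero element of the subdifferential at the kink is our convention, not necessarily the paper's):

```python
import numpy as np

def hinge_subgradient(w, X, y, lam):
    """A subgradient of J(w) = lam/2 ||w||^2 + (1/n) sum_i max(0, 1 - y_i <x_i, w>).
    Where the margin is >= 1 the hinge term contributes 0; elsewhere it
    contributes -y_i x_i. At margin exactly 1 any convex combination is
    a valid subgradient; we take the 0 choice via a strict inequality."""
    margins = y * (X @ w)
    active = margins < 1.0                     # examples inside the margin
    g = lam * w - (X[active] * y[active][:, None]).sum(axis=0) / len(y)
    return g
```

A subgradient method (or the paper's subLBFGS) would then build descent directions from such elements of the subdifferential.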

### LEAR-LJK, INRIA Grenoble

Abstract

We consider penalized formulations of machine learning problems with a regularization penalty having conic structure. For several important learning problems, state-of-the-art optimization approaches such as proximal gradient algorithms are difficult to apply and computationally expensive, preventing their use for large-scale learning. We present a conditional gradient algorithm with theoretical guarantees, and show promising experimental results on two large-scale real-world datasets.
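
To illustrate why conditional gradient (Frank-Wolfe) methods can be cheaper than proximal steps in such settings: over a trace-norm ball, the linear subproblem needs only the leading singular vector pair of the gradient, whereas a proximal step would need a full SVD. This sketch solves a simple quadratic instance with exact line search and is an illustration, not the paper's algorithm:

```python
import numpy as np

def frank_wolfe_matrix_ls(M, tau, n_iter=200):
    """Conditional gradient (Frank-Wolfe) for
        min 0.5 * ||W - M||_F^2   s.t.  ||W||_* <= tau.
    Extreme points of the trace-norm ball are scaled rank-1 matrices,
    so the linear minimization oracle is a top singular pair of the
    gradient (a full SVD is used here only for brevity)."""
    W = np.zeros_like(M)
    for _ in range(n_iter):
        G = W - M                                   # gradient of the quadratic
        U, s, Vt = np.linalg.svd(G)
        S = -tau * np.outer(U[:, 0], Vt[0, :])      # LMO solution: extreme point
        D = S - W                                   # Frank-Wolfe direction
        denom = np.sum(D * D)
        if denom == 0:
            break
        gamma = np.clip(-np.sum(G * D) / denom, 0.0, 1.0)  # exact line search
        if gamma == 0:
            break
        W = W + gamma * D
    return W
```

Every iterate is a convex combination of rank-1 extreme points, so feasibility (and a low-rank representation) is maintained for free.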

### Elastic-Net Regularization of Singular Values for Robust Subspace Learning

Abstract

Learning a low-dimensional structure plays an important role in computer vision. Recently, a new family of methods, such as ℓ1 minimization and robust principal component analysis, has been proposed for low-rank matrix approximation problems and shown to be robust against outliers and missing data. But these methods often require a heavy computational load and can fail to find a solution when highly corrupted data are presented. In this paper, an elastic-net regularization based low-rank matrix factorization method for subspace learning is proposed. The proposed method finds a robust solution efficiently by enforcing a strongly convex constraint to improve the algorithm's stability while maintaining the low-rank property of the solution. It is shown that any stationary point of the proposed algorithm satisfies the Karush-Kuhn-Tucker optimality conditions. The proposed method is applied to a number of low-rank matrix approximation problems to demonstrate its efficiency in the presence of heavy corruptions and to show its effectiveness and robustness compared to existing methods.
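
The operator the title suggests, elastic-net shrinkage of singular values, has a well-known closed form: take an SVD, soft-threshold the singular values by the ℓ1 weight, then scale by the ℓ2 weight. A minimal standalone sketch (the paper embeds such regularization inside a matrix factorization; this isolated proximal map is our simplification):

```python
import numpy as np

def elastic_net_sv_shrink(A, lam1, lam2):
    """Proximal map of lam1*||X||_* + (lam2/2)*||X||_F^2 evaluated at A.
    The l1 part (lam1) zeroes out small singular values, encouraging low
    rank; the l2 part (lam2) adds strong convexity, which stabilizes the
    solution under heavy corruption."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_shrunk = np.maximum(s - lam1, 0.0) / (1.0 + lam2)   # threshold, then scale
    return U @ np.diag(s_shrunk) @ Vt
```

Setting `lam2 = 0` recovers plain singular value thresholding; the extra quadratic term is what supplies the strong convexity the abstract credits for stability.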

### A New Convex Relaxation for Tensor Completion

, 2013

"... We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some lim ..."

Abstract

We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable.
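
For reference, the trace-norm generalization the abstract critiques is commonly taken as the (averaged) sum of nuclear norms of the mode-k unfoldings of the tensor; the averaging convention here is an assumption, and this sketch only computes that baseline quantity, not the paper's alternative relaxation:

```python
import numpy as np

def tensor_trace_norm(T):
    """Overlapped tensor trace norm: average over modes k of the nuclear
    norm of the mode-k unfolding (the tensor flattened with mode k as
    rows). A convex surrogate for having low multilinear rank."""
    norms = []
    for k in range(T.ndim):
        Mk = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)   # mode-k unfolding
        norms.append(np.linalg.norm(Mk, ord='nuc'))
    return sum(norms) / T.ndim
```

Regularizing with this quantity penalizes the rank of every unfolding at once, which is exactly the coupling whose limitations motivate the paper's alternative convex relaxation on the Euclidean ball.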