Results

**1 - 3** of **3**

### Detecting the Most Distant (z>7) Objects with ALMA


Abstract Detecting and studying objects at the highest redshifts, out to the end of Cosmic Reionization at z>7, is clearly a key science goal of ALMA. ALMA will in principle be able to detect objects in this redshift range both from high-J (J>7) CO transitions and from emission of ionized carbon, [CII], which is one of the main cooling lines of the ISM. ALMA will even be able to resolve this emission for individual targets, which will be one of the few ways to determine dynamical masses for systems in the Epoch of Reionization. We discuss some of the current problems regarding the detection and characterization of objects at high redshifts and how ALMA will eliminate most (but not all) of them.

### Bayesian Multitask Distance Metric Learning

Abstract
We present a Bayesian approach for jointly learning distance metrics for a large collection of potentially related learning tasks. We assume there exists a relatively smaller set of basis distance metrics and that the distance metric for each task is a sparse, positively weighted combination of these basis distance metrics. The set of basis distance metrics and the combination weights are learned from data. Moreover, taking a nonparametric Bayesian approach, the number of basis distance metrics need not be set a priori. Our proposed construction significantly reduces the number of parameters to be learned, especially when the number of tasks and/or the data dimensionality is large. Several existing methods for multi-task/transfer distance metric learning arise as special cases of our model. Preliminary results on real-world data show that our model outperforms various baselines. We also discuss some possible extensions of our model and future work.
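The core construction described above — each task's metric as a sparse, non-negatively weighted combination of shared basis metrics — can be sketched as follows. This is a minimal illustration with made-up data, not the paper's Bayesian inference procedure; the names `task_metric`, `sq_mahalanobis`, and the random basis matrices are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 3, 4  # feature dim, number of basis metrics, number of tasks

# Shared basis metrics: random PSD matrices B_k = A_k A_k^T.
basis = []
for _ in range(K):
    A = rng.standard_normal((d, d))
    basis.append(A @ A.T)

# Sparse, non-negative combination weights (one row of K weights per task).
W = np.maximum(rng.standard_normal((T, K)) - 0.5, 0.0)
W[0, 0] = 1.0  # ensure at least one active basis for task 0

def task_metric(t):
    """Metric for task t: M_t = sum_k W[t, k] * B_k (PSD because W >= 0)."""
    return sum(W[t, k] * basis[k] for k in range(K))

def sq_mahalanobis(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    diff = x - y
    return float(diff @ M @ diff)

M0 = task_metric(0)
x, y = rng.standard_normal(d), rng.standard_normal(d)
print(sq_mahalanobis(x, y, M0) >= 0.0)  # True: M0 is PSD by construction
```

The sparsity of `W` is what reduces the parameter count: instead of T full d×d metrics, only K basis matrices plus T sparse weight vectors are learned.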

### Similarity Learning for High-Dimensional Sparse Data

Abstract A good measure of similarity between data points is crucial to many tasks in machine learning. Similarity and metric learning methods learn such measures automatically from data, but they do not scale well with respect to the dimensionality of the data. In this paper, we propose a method that can efficiently learn a similarity measure from high-dimensional sparse data. The core idea is to parameterize the similarity measure as a convex combination of rank-one matrices with specific sparsity structures. The parameters are then optimized with an approximate Frank-Wolfe procedure to maximally satisfy relative similarity constraints on the training data. Our algorithm greedily incorporates one pair of features at a time into the similarity measure, providing an efficient way to control the number of active features and thus reduce overfitting. It enjoys very appealing convergence guarantees, and its time and memory complexity depends on the sparsity of the data rather than on the dimension of the feature space. Our experiments on real-world high-dimensional datasets demonstrate its potential for classification, dimensionality reduction and data exploration.
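The greedy idea can be sketched roughly as follows. This is our own simplification, not the authors' exact algorithm: we maintain the similarity matrix S as a convex combination of rank-one bases P_ij = (e_i + e_j)(e_i + e_j)^T and, Frank-Wolfe style, at each step add the single basis (one feature pair) best aligned with the negative subgradient of a triplet hinge loss. The toy triplets and loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6

# Toy relative-similarity triplets (x, x_pos, x_neg): we want
# s(x, x_pos) > s(x, x_neg) + 1, where s(a, b) = a^T S b.
triplets = []
for _ in range(30):
    x = rng.standard_normal(d)
    triplets.append((x, x + 0.1 * rng.standard_normal(d), rng.standard_normal(d)))

def hinge_loss(S):
    return sum(max(0.0, 1.0 - x @ S @ xp + x @ S @ xn) for x, xp, xn in triplets)

S = np.zeros((d, d))
for it in range(50):
    # Subgradient of the hinge loss w.r.t. S, symmetrized.
    G = np.zeros((d, d))
    for x, xp, xn in triplets:
        if 1.0 - x @ S @ xp + x @ S @ xn > 0.0:
            G += np.outer(x, xn) - np.outer(x, xp)
    G = (G + G.T) / 2.0
    # Linear oracle over the bases: <G, P_ij> = G[i,i] + G[j,j] + 2*G[i,j];
    # pick the feature pair with the most negative alignment.
    diag = np.diag(G)
    scores = diag[:, None] + diag[None, :] + 2.0 * G
    i, j = np.unravel_index(np.argmin(scores), scores.shape)
    u = np.zeros(d)
    u[i] = 1.0
    u[j] = 1.0  # if i == j this degenerates to e_i
    gamma = 2.0 / (it + 2.0)  # standard Frank-Wolfe step size
    S = (1.0 - gamma) * S + gamma * np.outer(u, u)

# One feature pair becomes "active" per iteration, so stopping early keeps S
# sparse; its support size controls the number of active features.
print(np.allclose(S, S.T))  # True: S is a convex combination of symmetric bases
```

Because each update touches at most four entries of S, memory scales with the number of iterations (active pairs) rather than with d², which is the point of the construction for sparse high-dimensional data.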