Results

**1 - 4** of **4**

### Scalable Multidimensional Hierarchical Bayesian Modeling on


Abstract

We consider the problem of estimating occurrence rates of rare events for extremely sparse data, using pre-existing hierarchies and selected features to perform inference along multiple dimensions. In particular, we focus on the problem of estimating click rates for {Advertiser, Publisher, User} tuples, where both the Advertisers and the Publishers are organized as hierarchies that capture broad contextual information at different levels of granularity. Typically, the click rates are low and the coverage of the hierarchies and dimensions is sparse. To overcome these difficulties, we decompose the joint prior of the three-dimensional Click-Through-Rate (CTR) using tensor decomposition and propose a Multidimensional Hierarchical Bayesian framework (abbreviated as MadHab). We set up a specific framework for each dimension to model dimension-specific characteristics. More specifically, we consider hierarchical beta process priors for the Advertiser and Publisher dimensions respectively and a feature-dependent mixture model for the User dimension. Besides the centralized implementation, we propose a distributed inference algorithm through Spark, which makes the model highly scalable and suited for large-scale data mining applications. We demonstrate that, on a real-world ads campaign platform, our framework can effectively discriminate extremely rare events in terms of their click propensity.
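The shrinkage idea behind hierarchical rate estimation can be sketched with a single empirical-Bayes step (a minimal illustration, not the MadHab framework itself; the function name, the `strength` pseudo-count, and the use of a single parent rate are assumptions made for this sketch):

```python
# Minimal sketch: shrink a sparse leaf CTR toward its hierarchy
# parent via a Beta prior (illustrative only, not MadHab).

def smoothed_ctr(clicks, impressions, parent_ctr, strength=100.0):
    """Estimate a leaf CTR with a Beta prior centred on the parent rate.

    The prior Beta(alpha, beta) contributes `strength` pseudo-impressions
    at the parent's rate, so leaves with few impressions inherit the
    parent rate while well-observed leaves keep their empirical rate.
    """
    alpha = parent_ctr * strength
    beta = (1.0 - parent_ctr) * strength
    return (clicks + alpha) / (impressions + alpha + beta)

# A rare-event leaf: 1 click in 20 impressions under a 0.5% parent rate.
rate = smoothed_ctr(clicks=1, impressions=20, parent_ctr=0.005)
```

The posterior-mean estimate (here 1.5/120 = 0.0125) falls between the noisy empirical rate (0.05) and the parent rate (0.005), which is the behaviour a hierarchical prior is meant to provide for sparse tuples.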

### Approximate Inference in Graphical Models using Tensor Decompositions


Abstract

We demonstrate that tensor decompositions can be used to transform graphical models into structurally simpler graphical models that approximate the same joint probability distribution. In this way, standard inference algorithms, such as the junction tree algorithm, can then be applied to the transformed graphical model for approximate inference. The usefulness of the technique is demonstrated by means of its application to thirty randomly generated small-world Markov networks. Key words: graphical models; approximate inference; tensor decompositions
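The substitution the abstract describes can be illustrated with a small joint distribution stored as a sum of rank-one terms (a minimal numpy sketch under assumed shapes, not the paper's construction): marginalisation works directly on the factors, without ever materialising the full tensor.

```python
import numpy as np

# Sketch: a joint P(x, y, z) represented as a CP (sum-of-rank-one)
# decomposition supports the same marginalisation as the full tensor.

rng = np.random.default_rng(0)

def normalise(v):
    return v / v.sum()

# Two rank-one components: P(x, y, z) = sum_k w_k * a_k(x) b_k(y) c_k(z)
w = np.array([0.6, 0.4])
A = np.stack([normalise(rng.random(3)) for _ in range(2)])  # a_k over x
B = np.stack([normalise(rng.random(4)) for _ in range(2)])  # b_k over y
C = np.stack([normalise(rng.random(5)) for _ in range(2)])  # c_k over z

# Full joint tensor, built explicitly only for comparison.
P = sum(w[k] * np.einsum('i,j,l->ijl', A[k], B[k], C[k]) for k in range(2))

# Marginal P(x): sum out y and z from the full tensor ...
marg_full = P.sum(axis=(1, 2))
# ... or work per component; each b_k and c_k sums to 1, leaving w and A.
marg_cp = (w[:, None] * A).sum(axis=0)
```

The two marginals agree, but the component-wise route touches only the factor matrices, which is what makes the structurally simpler transformed model cheaper for inference.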

### Decomposition of probability tables representing Boolean functions


Abstract

We apply tensor rank-one decomposition (Savicky and Vomlel, 2005) to conditional probability tables representing Boolean functions. We present a numerical algorithm that can be used to find a minimal tensor rank-one decomposition, together with the results of experiments performed using the proposed algorithm. We pay special attention to a family of Boolean functions that are common in probabilistic models from practice: monotone and symmetric Boolean functions. We show that these functions can be decomposed better than general Boolean functions; specifically, the rank of their corresponding tensor is lower than the average rank of a general Boolean function.
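A minimal numpy sketch of the idea, using the OR gate (a standard monotone, symmetric Boolean function): its conditional table splits into just two rank-one terms via the textbook identity OR(x1, x2) = 1 - (1 - x1)(1 - x2). This illustrates the low-rank structure the abstract refers to, not the paper's numerical algorithm.

```python
import numpy as np

# T[x1, x2] = P(Y = 1 | x1, x2) for the deterministic gate Y = x1 OR x2.
T = np.array([[0.0, 1.0],
              [1.0, 1.0]])

# Two rank-one terms over (x1, x2): the all-ones outer product minus
# the outer product of the "input is 0" indicator vectors.
all_ones = np.outer([1.0, 1.0], [1.0, 1.0])
both_zero = np.outer([1.0, 0.0], [1.0, 0.0])
reconstructed = all_ones - both_zero
```

The reconstruction matches the table exactly with rank 2, whereas a generic 2x2 Boolean table can require full rank; this gap is what makes monotone and symmetric functions attractive targets for rank-one decomposition.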
