Results 1 - 10 of 977
Shape reconstruction with intrinsic priors
, 2009
"... Shape-from-X, a problem of shape reconstruction from some measurements, is a classical inverse problem in computer vision. In this paper, we propose a framework for intrinsic regularization of such problems. The assumption is that we have a shape intrinsically similar to (a bending of) the unknown s ..."
Cited by 1 (1 self)
Propriety of Intrinsic Priors in Invariant Testing Situations
, 2006
"... The Theory of Intrinsic Priors, developed by Berger and Pericchi (1996a,b), is a general method of constructing objective priors for testing and model selection when proper priors are considered for the simpler null hypotheses. When this prior distribution is improper, as is typically the case for O ..."
Cited by 1 (0 self)
Intrinsic priors for testing two lognormal populations with the fractional Bayes factor
 Journal of the Korean Data & Information Science Society
, 2005
"... Bayes factors with improper noninformative priors are defined only up to arbitrary constants, so they are not well defined in Bayesian hypothesis testing and model selection. The intrinsic Bayes factor by Berger and Pericchi (1996) and the fr ..."
Cited by 2 (1 self)
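The arbitrary-constant problem this abstract describes can be made concrete with a toy numeric sketch. The model, numbers, and names below are ours, not the paper's: a sharp null on a normal mean with known variance, and an improper flat prior pi(mu) = c under the alternative.

```python
import numpy as np

# Hypothetical illustration (our toy setup, not the paper's): testing
# H0: mu = 0 vs H1: mu unrestricted, sigma = 1 known, improper flat
# prior pi(mu) = c under H1. The marginal likelihood under H1, and hence
# the Bayes factor, scales with the arbitrary constant c.

def norm_pdf(x, loc, scale):
    return np.exp(-0.5 * ((x - loc) / scale) ** 2) / (scale * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, size=20)
xbar, n = x.mean(), len(x)

def marginal_h1(c):
    # m1(x) = c * integral of N(xbar; mu, sigma/sqrt(n)) dmu = c,
    # since the normal density integrates to 1 over mu (Riemann sum here).
    grid = np.linspace(-50.0, 50.0, 200_001)
    vals = c * norm_pdf(xbar, grid, 1.0 / np.sqrt(n))
    return vals.sum() * (grid[1] - grid[0])

m0 = norm_pdf(xbar, 0.0, 1.0 / np.sqrt(n))  # sharp null: no prior needed

for c in (1.0, 10.0):
    print(f"c = {c:4.1f}  ->  BF01 = {m0 / marginal_h1(c):.4f}")
```

Rescaling c by 10 rescales the Bayes factor by 10, which is exactly the ill-definedness that intrinsic and fractional Bayes factors were introduced to remove.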
Consistency of Bayes factors for intrinsic priors in normal linear models
"... The Jeffreys-Lindley paradox refers to the well-known fact that a sharp null hypothesis on the normal mean parameter is always accepted when the variance of the conjugate prior goes to infinity, thus implying that the resulting Bayesian procedure is not consistent, and that some limiting f ..."
forms of proper prior distributions are not necessarily suitable for testing problems. Intrinsic priors, which are limits of proper priors, have been proved to be extremely useful for testing problems and, in particular, for testing hypotheses on the regression coefficients of normal linear models
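The Jeffreys-Lindley paradox mentioned above has a simple closed-form illustration (the numbers below are our toy choices): with a sharp null H0: mu = 0 and a conjugate N(0, tau^2) prior under H1, the Bayes factor in favor of the null grows without bound as tau^2 goes to infinity, whatever the data say.

```python
import numpy as np

# Toy numeric sketch of the Jeffreys-Lindley paradox. With xbar ~ N(mu,
# sigma^2/n) and prior mu ~ N(0, tau^2) under H1, both marginals are
# normal, so BF01 has a closed form and diverges as tau -> infinity.

def norm_pdf(x, loc, scale):
    return np.exp(-0.5 * ((x - loc) / scale) ** 2) / (scale * np.sqrt(2 * np.pi))

sigma, n, xbar = 1.0, 25, 0.5          # data summary: z = xbar/se = 2.5
se = sigma / np.sqrt(n)

def bf01(tau):
    # m0 = N(xbar; 0, se),  m1 = N(xbar; 0, sqrt(tau^2 + se^2))
    return norm_pdf(xbar, 0.0, se) / norm_pdf(xbar, 0.0, np.sqrt(tau**2 + se**2))

for tau in (1.0, 10.0, 100.0, 1000.0):
    print(f"tau = {tau:7.1f}  BF01 = {bf01(tau):10.3f}")
```

Even though z = 2.5 would usually count as evidence against the null, BF01 increases monotonically with tau, which is why limits of proper priors have to be chosen carefully; intrinsic priors are one such careful limit.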
Motivational and self-regulated learning components of classroom academic performance
 Journal of Educational Psychology
, 1990
"... A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulat ..."
Cited by 679 (6 self)
regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
 IEEE Transactions on Medical Imaging
, 2001
"... The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limi ..."
Cited by 639 (15 self)
 Add to MetaCart
The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogrambased model, the FM has an intrinsic
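The histogram-based FM baseline this abstract starts from can be sketched as a two-class 1-D Gaussian mixture fit by EM to voxel intensities, ignoring spatial context (the paper's hidden Markov random field adds exactly that context). The synthetic intensities and class labels below are ours, purely illustrative:

```python
import numpy as np

# Minimal sketch of the finite-mixture (FM) baseline: EM for a two-class
# 1-D Gaussian mixture over voxel intensities. No spatial coupling --
# that is the limitation the HMRF-EM model addresses.

rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(40, 5, 500),    # e.g. "gray matter"
                              rng.normal(80, 8, 500)])   # e.g. "white matter"

mu, sd, pi = np.array([30.0, 90.0]), np.array([10.0, 10.0]), np.array([0.5, 0.5])

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: posterior responsibility of each class for each voxel
    r = pi * gauss(intensities[:, None], mu, sd)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights, means, standard deviations
    nk = r.sum(axis=0)
    pi = nk / len(intensities)
    mu = (r * intensities[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (intensities[:, None] - mu) ** 2).sum(axis=0) / nk)

labels = np.argmax(pi * gauss(intensities[:, None], mu, sd), axis=1)
print("estimated class means:", np.round(np.sort(mu), 1))
```

Because each voxel is classified from its intensity alone, noise flips labels voxel by voxel; the HMRF prior penalizes such isolated flips.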
I Tube, You Tube, Everybody Tubes: Analyzing the World’s Largest User Generated Content Video System
 In Proceedings of the 5th ACM/USENIX Internet Measurement Conference (IMC'07)
, 2007
"... User Generated Content (UGC) is reshaping the way people watch video and TV, with millions of video producers and consumers. In particular, UGC sites are creating new viewing patterns and social interactions, empowering users to be more creative, and developing new business opportunities. To better ..."
Cited by 373 (7 self)
To better understand the impact of UGC systems, we have analyzed YouTube, the world’s largest UGC VoD system. Based on a large amount of data collected, we provide an in-depth study of YouTube and other similar UGC systems. In particular, we study the popularity lifecycle of videos, the intrinsic
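Popularity in UGC measurement studies of this kind is often summarized by a rank/view-count plot: a straight line in log-log space suggests Zipf-like (power-law) behavior. A toy sketch with synthetic data (ours, not the paper's measurements):

```python
import numpy as np

# Synthetic illustration only: generate an ideal Zipf rank/view-count
# curve and recover its exponent from the slope of a log-log fit.

alpha, N = 1.0, 1000
ranks = np.arange(1, N + 1)
views = 1_000_000 / ranks.astype(float) ** alpha     # ideal Zipf curve

slope, intercept = np.polyfit(np.log(ranks), np.log(views), 1)
print(f"fitted log-log slope: {slope:.2f}")          # ~ -alpha
```

Real traces typically show a truncated tail rather than a clean power law, which is one of the distributional questions such measurement papers investigate.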
Deriving Intrinsic Images from Image Sequences
, 2001
"... Intrinsic images are a useful mid-level description of scenes proposed by Barrow and Tenenbaum [1]. An image is decomposed into two images: a reflectance image and an illumination image. Finding such a decomposition remains a difficult problem in computer vision. Here we focus on a slightly easier pro ..."
Cited by 253 (5 self)
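The sequence idea can be sketched in 1-D (toy data ours, details differ in the paper): with reflectance R fixed and illumination L_t varying over frames, log I_t = log R + log L_t, and the temporal median of log-image derivatives robustly recovers the reflectance derivative, in the spirit of the paper's maximum-likelihood estimator.

```python
import numpy as np

# Hedged 1-D sketch of intrinsic image recovery from a sequence: smooth
# illumination ramps vary per frame, the step-edge reflectance does not.
# Median over time of the log-derivatives isolates the reflectance edge.

n, T = 200, 11
log_R = np.where(np.arange(n) < 100, 0.0, 1.0)      # step-edge reflectance
slopes = np.linspace(-0.5, 0.5, T)                  # per-frame illumination ramp
log_images = np.array([log_R + s * np.arange(n) / n for s in slopes])

dx = np.diff(log_images, axis=1)    # per-frame horizontal log-derivatives
dR = np.median(dx, axis=0)          # temporal median suppresses the ramps
log_R_hat = np.concatenate([[0.0], np.cumsum(dR)])  # integrate back

print("max reconstruction error:", float(np.max(np.abs(log_R_hat - log_R))))
```

The median works because the illumination derivative changes sign across frames while the reflectance derivative is constant; in 2-D the paper integrates the filtered derivatives with a pseudo-inverse rather than a plain cumulative sum.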
Shape Priors for Level Set Representations
 In ECCV
, 2002
"... Level Set Representations, the pioneering framework introduced by Osher and Sethian [14], is the most common choice for the implementation of variational frameworks in Computer Vision, since it is implicit, intrinsic, and parameter- and topology-free. However, many Computer Vision applications refer to ..."
Cited by 202 (14 self)
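The implicit, topology-free property the abstract highlights can be shown in a minimal sketch (our toy grid, not the paper's shape-prior machinery): a circle is the zero level set of a signed distance function phi, and motion with constant normal speed F is one explicit step of phi_t + F·|grad phi| = 0.

```python
import numpy as np

# Minimal level-set sketch: the curve is never parameterized; it is the
# zero crossing of phi. One explicit time step moves it outward by F*dt.

n, r, F, dt = 201, 0.5, 1.0, 0.05
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
phi = np.sqrt(X**2 + Y**2) - r          # signed distance to a circle

gy, gx = np.gradient(phi, xs, xs)       # numerical gradient (axis 0 = y)
phi = phi - dt * F * np.sqrt(gx**2 + gy**2)   # outward motion at speed F

# The zero level set is now (approximately) a circle of radius r + F*dt.
new_r = np.sqrt(X**2 + Y**2)[np.abs(phi) < 5e-3].mean()
print(f"new zero-level-set radius ~ {new_r:.3f}")
```

Because the interface is stored only implicitly, merging or splitting of contours needs no special handling, which is why variational segmentation methods, including the shape-prior approach above, build on this representation.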