Results 1–10 of 11
Sequence prediction for non-stationary processes
 In Proceedings: Combinatorial and Algorithmic Foundations of Pattern and Association Discovery (Dagstuhl Seminar)
, 2006
Abstract

Cited by 16 (11 self)
Suppose we are given two probability measures on the set of one-way infinite finite-alphabet sequences. Consider the question of when one of the measures predicts the other, that is, when conditional probabilities converge (in a certain sense) if one of the measures is chosen to generate the sequence. This question may be considered a refinement of the problem of sequence prediction in its most general formulation: for a given class of probability measures, does there exist a measure which predicts all of the measures in the class? To address this problem, we find some conditions on local absolute continuity which are sufficient for prediction and which generalize several different notions known to be sufficient for prediction. We also formulate some open questions to outline a direction for finding conditions on classes of measures for which prediction is possible.
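The convergence notion in this abstract can be illustrated with a toy case not taken from the paper: the Laplace rule of succession (a Bayes mixture over Bernoulli parameters) predicts any i.i.d. Bernoulli measure, in the sense that its conditional probabilities converge to the true ones. A minimal sketch; the parameter 0.7 and the sample size are arbitrary choices:

```python
import random

random.seed(0)
p_true = 0.7          # Bernoulli parameter of the generating measure
n = 10_000
seq = [1 if random.random() < p_true else 0 for _ in range(n)]

# Laplace's rule of succession: predicted P(x_{t+1}=1 | x_1..x_t) = (ones + 1) / (t + 2)
ones = 0
errors = []
for t, x in enumerate(seq):
    pred = (ones + 1) / (t + 2)
    errors.append(abs(pred - p_true))   # gap between conditional probabilities
    ones += x

print(f"error after 100 steps:  {errors[100]:.3f}")
print(f"error after {n} steps: {errors[-1]:.3f}")
```

The error shrinks roughly like 1/sqrt(t), which is the "convergence of conditional probabilities" the abstract refers to, in its simplest i.i.d. instance.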
On sequence prediction for arbitrary measures
 In Proc. 2007 IEEE International Symposium on Information Theory
, 2007
Abstract

Cited by 10 (9 self)
Suppose we are given two probability measures on the set of one-way infinite finite-alphabet sequences. Consider the question of when one of the measures predicts the other, that is, when conditional probabilities converge (in a certain sense) if one of the measures is chosen to generate the sequence. This question may be considered a refinement of the problem of sequence prediction in its most general formulation: for a given class of probability measures, does there exist a measure which predicts all of the measures in the class? To address this problem, we find some conditions on local absolute continuity which are sufficient for prediction and which generalize several different notions known to be sufficient for prediction. We also formulate some open questions to outline a direction for finding conditions on classes of measures for which prediction is possible.
Is there an Elegant Universal Theory of Prediction?
 IDSIA / USI-SUPSI Dalle Molle Institute for Artificial Intelligence, Galleria 2, 6928
, 2006
Abstract

Cited by 7 (0 self)
Solomonoff’s inductive learning model is a powerful, universal and highly elegant theory of sequence prediction. Its critical flaw is that it is incomputable and thus cannot be used in practice. It is sometimes suggested that it may still be useful to help guide the development of very general and powerful theories of prediction which are computable. In this paper it is shown that although powerful algorithms exist, they are necessarily highly complex. This alone makes their theoretical analysis problematic; it is further shown that beyond a moderate level of complexity the analysis runs into the deeper problem of Gödel incompleteness. This limits the power of mathematics to analyse and study prediction algorithms, and indeed intelligent systems in general.
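As a hedged illustration of the computable analogue the abstract alludes to (not code from the paper): Solomonoff's predictor is a Bayesian mixture over all lower-semicomputable semimeasures, and restricting the mixture to a small finite class keeps it computable while preserving the classic dominance guarantee that its cumulative log-loss exceeds that of the best model in the class by at most ln K. The model class and parameters below are arbitrary choices:

```python
import math
import random

random.seed(1)

# Finite class of computable predictors: constant-probability Bernoulli models.
class_probs = [0.1, 0.3, 0.5, 0.7, 0.9]
K = len(class_probs)

# Generate data from one member of the class.
p_true = 0.7
n = 2000
seq = [1 if random.random() < p_true else 0 for _ in range(n)]

# Bayesian mixture with uniform prior; weights updated multiplicatively.
log_w = [math.log(1.0 / K)] * K
mix_logloss = 0.0
best_logloss = 0.0
for x in seq:
    # Mixture predictive probability of the next symbol being 1.
    m = max(log_w)
    z = sum(math.exp(lw - m) for lw in log_w)
    prob1 = sum(math.exp(lw - m) * p for lw, p in zip(log_w, class_probs)) / z
    px = prob1 if x == 1 else 1.0 - prob1
    mix_logloss -= math.log(px)
    best_logloss -= math.log(p_true if x == 1 else 1.0 - p_true)
    # Update each model's weight by its likelihood of the observed symbol.
    log_w = [lw + math.log(p if x == 1 else 1.0 - p)
             for lw, p in zip(log_w, class_probs)]

regret = mix_logloss - best_logloss
print(f"log-loss regret vs. best model: {regret:.3f} (bound: ln K = {math.log(K):.3f})")
```

Since the mixture assigns the data sequence at least 1/K times the probability the best model does, the regret is bounded by ln K regardless of the sequence; the complexity results in the paper concern how far such constructions can be pushed while remaining analysable.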
Limits of Learning about a Categorical Latent Variable under Prior Near-Ignorance
, 2009
Abstract

Cited by 6 (1 self)
In this paper, we consider the coherent theory of (epistemic) uncertainty of Walley, in which beliefs are represented through sets of probability distributions, and we focus on the problem of modeling prior ignorance about a categorical random variable. In this setting, it is a known result that a state of prior ignorance is not compatible with learning. To overcome this problem, another state of beliefs, called near-ignorance, has been proposed. Near-ignorance resembles ignorance very closely, by satisfying some principles that can arguably be regarded as necessary in a state of ignorance, while still allowing learning to take place. What this paper does is provide new and substantial evidence that near-ignorance, too, cannot really be regarded as a way out of the problem of starting statistical inference in conditions of very weak beliefs. The key to this result is focusing on a setting characterized by a variable of interest that is latent. We argue that such a setting is by far the most common case in practice, and we provide, for the case of categorical latent variables (and general manifest variables), a condition that, if satisfied, prevents learning from taking place under prior near-ignorance. This condition is shown to be easily satisfied even in the most common statistical problems. We regard these results as strong evidence against the possibility of adopting a condition of prior near-ignorance in real statistical problems.
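For a directly observed (manifest) binary variable, Walley's near-ignorance prior can be sketched with the imprecise Beta model: the set of Beta(s·t, s·(1−t)) priors with fixed s and t ranging over (0, 1). The prior expectation interval is vacuous, yet posterior intervals shrink with data; this is exactly the learning that the abstract argues breaks down once the variable is latent. A minimal sketch, with s = 2 a conventional choice not taken from the paper:

```python
# Imprecise Beta model for a binary variable:
# prior set {Beta(s*t, s*(1-t)) : 0 < t < 1} with fixed s.
s = 2.0

def posterior_interval(k, n):
    """Lower/upper posterior expectation of the success probability
    after k successes in n trials, over the whole prior set."""
    return k / (n + s), (k + s) / (n + s)

print(posterior_interval(0, 0))      # prior: vacuous interval (0.0, 1.0)
print(posterior_interval(7, 10))     # interval narrows after 10 observations
print(posterior_interval(700, 1000)) # and keeps narrowing with more data
```

The interval width is s / (n + s), so it vanishes as n grows: near-ignorance is compatible with learning in this manifest-variable setting, which is what makes the paper's negative result for latent variables striking.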
Open problems in universal induction & intelligence
 Algorithms
, 2009
Learnability in Problems of Sequential Inference
, 2012
Abstract
The work presented here is devoted to the possibility of performing statistical inference from sequential data. The problem is as follows. Given a sequence of observations x1,...,xn,..., one seeks to draw inferences about the random process that produced the sequence. Several problems, which moreover have numerous applications in different areas of mathematics and computer science, can be formulated in this way. For example, one may wish to predict the probability of the next observation, xn+1 (the sequential prediction problem); or to answer the question of whether the random process producing the sequence belongs to a certain set H0 versus a different set H1 (hypothesis testing); or to perform actions with the aim of maximizing a certain utility function. In each of these problems, to make inference possible one must first make certain assumptions about the random process that produces the data. The central question addressed in the work presented is the following: under what assumptions is inference possible? This question …
Learning about a Categorical Latent Variable
, 2007
Abstract
It is well known that complete prior ignorance is not compatible with learning, at least in a coherent theory of (epistemic) uncertainty. What is less widely known is that there is a state similar to full ignorance, which Walley calls near-ignorance, that permits learning to take place. In this paper we provide new and substantial evidence that near-ignorance, too, cannot really be regarded as a way out of the problem of starting statistical inference in conditions of very weak beliefs. The key to this result is focusing on a setting characterized by a variable of interest that is latent. We argue that such a setting is by far the most common case in practice, and we show, for the case of categorical latent variables (and general manifest variables), that there is a sufficient condition that, if satisfied, prevents learning from taking place under prior near-ignorance. This condition is shown to be easily satisfied in the most common statistical problems.
Advance Access publication on June 18, 2008 doi:10.1093/comjnl/bxm117
Abstract
One of the second generation of computer scientists, Chris Wallace completed his tertiary education in 1959 with a Ph.D. in nuclear physics, on cosmic ray showers, under Dr Paul George at Sydney University. Needless to say, computer science was not, at that stage, an established academic discipline. With Max Brennan and John Malos he had designed and built a large automatic data logging system for recording cosmic ray air shower events, and with Max Brennan he also developed a complex computer programme for Bayesian analysis of cosmic ray events on the recently installed SILLIAC computer. Appointed lecturer in Physics at Sydney in 1960, he was sent almost immediately to the University of Illinois to copy the design of ILLIAC II, a duplicate of which was to be built at Sydney. ILLIAC II was not in fact completed at that stage and, after an initial less than warm welcome by a department who seemed unsure exactly what this Australian was doing in their midst, his talents were recognized and he was invited to join their staff (under very generous conditions) to assist in the ILLIAC II design. He remained there for two years, helping in particular to design the input-output channels and aspects of the advanced control unit (first-stage pipeline). In the event, Sydney decided it would be too expensive to build a copy of ILLIAC II, although a successful copy (the Golem) was built in Israel using circuit designs developed by Wallace and Ken Smith. In spite of the considerable financial and academic inducements to remain in America, Wallace returned to Australia after three months spent in England familiarizing himself with the KDF9 computer being purchased by Sydney University to replace SILLIAC. Returning to the School of Physics he joined the Basser