Results 11–16 of 16
Optimizing and profiling users online with Bayesian probabilistic modeling
 In Proceedings of the International Networked Learning Conference of Natural and Artificial Intelligence Systems Organization
, 2002
Abstract

Cited by 5 (1 self)
One solution to building adaptive educational material is to model the user with a questionnaire before he/she enters the system, and then use this information to carry out adaptation of the platform. For example, profiled users could be offered personalised links to resources based on their metacognitive strategies or intrinsic goal orientations. These machine-understandable beliefs about the profiles of different users could then be updated by collecting additional information with online questionnaires at regular intervals. The adaptive online questionnaire system EDUFORM is based on intelligent techniques that optimize the number of propositions presented to each respondent. In addition, EDUFORM creates an individual profile for each respondent. The adaptive graphical user interface is generated automatically (e.g., propositions in the questionnaire, collaborative actions and links to resources), and profile analysis and the related ordering of the propositions are performed with Bayesian probabilistic modeling. Preliminary testing implies that the obvious advantage of EDUFORM is that its questionnaires are usually significantly shorter than traditional non-adaptive questionnaires. The empirical results show that after dramatically reducing the number of propositions (by 50–60%) one is still able to control the error ratio (12–22%). In the context of course feedback from a web-based course, the model construction in the Profile creation phase can help teachers find differences among the various learner groups, so that different versions of the web course can be prepared to suit the individual needs of each group. The correct profile information of the respondent is in most cases obtained already with less than 33% of the original prop...
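The adaptive, early-stopping behaviour described above can be pictured with a toy sketch: keep a Bayesian posterior over a small set of candidate profiles and stop presenting propositions once one profile is probable enough. The two profiles, the response probabilities and the stopping threshold below are invented for illustration and are not EDUFORM's actual model.

```python
# Hypothetical sketch of an adaptive questionnaire loop: after each answer,
# update the posterior over candidate profiles and stop early once one
# profile is sufficiently probable. All numbers are made up for the sketch.

# P(answer "agree" | profile) for each of 6 propositions (toy values)
P_AGREE = {
    "deep_learner":    [0.9, 0.8, 0.7, 0.85, 0.6, 0.75],
    "surface_learner": [0.2, 0.3, 0.4, 0.25, 0.5, 0.35],
}

def update(posterior, item, answer):
    """One Bayesian update step after observing an answer (True = agree)."""
    new = {}
    for profile, prior in posterior.items():
        p = P_AGREE[profile][item]
        new[profile] = prior * (p if answer else 1.0 - p)
    z = sum(new.values())
    return {k: v / z for k, v in new.items()}

def profile_respondent(answers, threshold=0.95):
    """Present items in order; stop once one profile exceeds the threshold."""
    posterior = {"deep_learner": 0.5, "surface_learner": 0.5}
    for item, answer in enumerate(answers):
        posterior = update(posterior, item, answer)
        best = max(posterior, key=posterior.get)
        if posterior[best] >= threshold:
            return best, item + 1, posterior  # items actually presented
    return best, len(answers), posterior

profile, asked, post = profile_respondent([True, True, True, True, True, True])
print(profile, asked)  # → deep_learner 3: only 3 of 6 items were needed
```

With these toy numbers a consistently agreeing respondent is profiled after three of the six propositions, which mirrors the abstract's point that the adaptive questionnaire can be roughly half the length of the non-adaptive one.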
Unsupervised Bayesian Visualization of High-Dimensional Data
, 2000
Abstract

Cited by 3 (0 self)
We propose a data reduction method based on a probabilistic similarity framework where two vectors are considered similar if they lead to similar predictions. We show how this type of probabilistic similarity metric can be defined in both a supervised and an unsupervised manner. As a concrete application of the suggested multidimensional scaling scheme, we describe how the method can be used for producing visual images of high-dimensional data, and give several examples of visualizations obtained by using the suggested scheme with probabilistic Bayesian network models. 1. INTRODUCTION Multidimensional scaling (see, e.g., [3, 2]) is a data compression or data reduction task where the goal is to replace the original high-dimensional data vectors with much shorter vectors, while losing as little information as possible. Intuitively speaking, it can be argued that a pragmatically sensible data reduction scheme is such that two vectors close to each other in the original multidimensional s...
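As a loose sketch of the "similar predictions ⇒ similar vectors" idea, one can turn a model's predictive probabilities into pairwise distances and feed those to classical multidimensional scaling. The toy logistic scorer below merely stands in for the paper's Bayesian network models; the weights and data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))   # eight high-dimensional data vectors (toy data)
w = rng.normal(size=5)        # toy predictive model weights (assumption)

# Predictive probability of a binary target for each vector.
p = 1.0 / (1.0 + np.exp(-X @ w))

# Probabilistic-similarity distances: two vectors are close iff the model's
# predictions for them agree.
D2 = (p[:, None] - p[None, :]) ** 2   # squared pairwise prediction distances

# Classical MDS: double-centre the squared distances, eigendecompose, and
# keep the top two components as 2-D display coordinates.
n = len(p)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)        # eigenvalues in ascending order
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))
print(coords.shape)                   # (8, 2) coordinates for plotting
```

Because the toy distances are derived from a single predicted probability per vector, the embedding here is effectively one-dimensional; with richer predictive distributions (as in the paper) the 2-D image carries more structure.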
Comparing Soft Computing Methods in Prediction of Manufacturing Data
, 1998
Abstract
In the literature there exist several soft computing methods for building predictive models: neural network models, fuzzy models and probabilistic approaches. In this paper we are interested in the question of which of these approaches is likely to give the best performance in practice. We study this problem empirically by selecting a set of typical models from the different model families, and by experimentally evaluating their predictive performance. For the evaluation, we use two real-world manufacturing datasets from a production plant of electrical machines. The models considered here include fuzzy rule bases, various neural network models and probabilistic finite mixtures. Our investigation indicates that all the methods can produce predictors that are accurate enough for practical purposes. Moreover, the results show that adding expert knowledge leads to improved predictive performance in the domain where such knowledge was available. In the domain where no expert kno...
Batch Classifications with Discrete Finite Mixtures
, 1998
Abstract
In this paper we study batch classification problems, where multiple predictions are made simultaneously, in contrast to the standard independent classification case, where the predictions are made independently one at a time. The main contribution of this paper is to demonstrate how the standard EM algorithm for finite mixture models can be modified for the batch classification case. In the empirical part of the paper, the results obtained by the batch classification approach are compared to those obtained by independent predictions. 1 Introduction In the standard classification approach, the model used to classify data is first constructed by using the available training data, and each classification problem is then solved independently with the produced model. In this paper, we extend this classification problem by allowing multiple predictions (classifications) to be made at the same time. In this batch classification case, all the classification problems are given simultaneously...
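One way to picture the batch-vs-independent contrast is the simplified sketch below: a fixed two-component Gaussian mixture classifier, where the "batch" variant re-estimates the class proportions from the whole test batch with EM-style iterations before taking the final argmax. This is only a caricature of the modified EM algorithm the paper develops; the means, priors and data are made-up toys.

```python
import numpy as np

# Two fixed Gaussian class-conditionals and uniform training priors (toys).
MEANS, STD = np.array([-1.0, 1.0]), 1.0
TRAIN_PRIORS = np.array([0.5, 0.5])

def likelihoods(x):
    """p(x | class k) for each class, shape (n, 2), up to a constant."""
    return np.exp(-0.5 * ((x[:, None] - MEANS) / STD) ** 2)

def independent_predict(x):
    """Standard case: classify each point on its own with fixed priors."""
    return (likelihoods(x) * TRAIN_PRIORS).argmax(axis=1)

def batch_predict(x, iters=20):
    """Batch case: EM-style re-estimation of class proportions on the batch."""
    priors = TRAIN_PRIORS.copy()
    for _ in range(iters):
        post = likelihoods(x) * priors
        post /= post.sum(axis=1, keepdims=True)   # E-step: responsibilities
        priors = post.mean(axis=0)                # M-step: batch proportions
    return post.argmax(axis=1)

x = np.array([-2.0, -1.5, -1.2, -0.8, 0.1, 2.0])  # toy test batch
print(independent_predict(x), batch_predict(x))
```

On this batch the independent classifier assigns the borderline point 0.1 to class 1, while the batch variant, having noticed that most of the batch belongs to class 0, re-weights the priors and flips that point to class 0; the other five labels are unchanged.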
Using Neural Networks for Descriptive Statistical Analysis of Educational Data
, 1997
Abstract
In this paper we discuss the methodological issues of using a class of neural networks called Mixture Density Networks (MDN) for discriminant analysis. MDN models have the advantage of having a rigorous probabilistic interpretation, and they have proven to be a viable alternative as a classification procedure in discrete domains. We will address both the classification and interpretive aspects of discriminant analysis, and compare the approach to the traditional method of linear discriminants as implemented in standard statistical packages. We show that the MDN approach adopted performs well in both aspects. Many of the observations made are not restricted to the particular case at hand, and are applicable to most applications of discriminant analysis in educational research.