Results 1–10 of 2,386,674
Independent variables
"... ESE & Modeling: 1. Development and continual improvement of empirical-evidence-based software models. 2. Organization-wide capitalization of the results. 3. ICEIS 2002, Ciudad Real, April 4th. © UniRoma2 – DISP – ESEG, Giovanni Cantone. Basic components ..."
Hamiltonian with z as the Independent Variable
"... Deduce the form of the Hamiltonian when z rather than t is considered to be the independent variable. Illustrate this for the case of a particle of charge q and mass m in an external electromagnetic field. ..."
Cited by 3 (3 self)
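A hedged sketch of the standard construction this exercise asks for (sign conventions vary between texts; φ and A denote the scalar and vector potentials, which the snippet does not name explicitly): solving H = E for p_z and taking K = −p_z as the new Hamiltonian leaves (x, p_x), (y, p_y), and (t, −E) as the canonical pairs, with z playing the role of the evolution parameter. For a particle of charge q and mass m:

```latex
K(x, p_x, y, p_y, t, -E;\; z) \;=\; -p_z
  \;=\; -\,qA_z \;-\; \sqrt{\frac{(E - q\varphi)^2}{c^2} \;-\; m^2 c^2
        \;-\; (p_x - qA_x)^2 \;-\; (p_y - qA_y)^2}
```

Hamilton's equations with respect to z then follow in the usual way, with −E conjugate to t.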
Survey on Independent Component Analysis
Neural Computing Surveys, 1999
"... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example, principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation is the one that minimizes ..."
Cited by 2241 (104 self)
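A hedged sketch of the model behind this abstract (the symbols x, s, A, W are the standard ICA notation, not taken from the snippet): the observations are modeled as a linear mixture of statistically independent sources,

```latex
x = A s, \qquad y = W x \approx s,
\qquad
\hat{W} = \arg\min_{W} \; I\!\left(y_1, \dots, y_n\right),
```

where one common estimation principle chooses the unmixing matrix W to minimize the mutual information I of the recovered components, making them as statistically independent as possible; this is the sense in which ICA goes beyond PCA, which only decorrelates.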
Independent Variable Group Analysis
In International Conference on Artificial Neural Networks (ICANN 2001), Georg Dorffner (ed.), 2001
"... When modeling large problems with limited representational resources, it is important to be able to construct compact models of the data. Structuring the problem into subproblems that can be modeled independently is a means for achieving compactness. In this article we introduce Independent Variable ... with VQ. Experimental results are presented to show that variables are grouped according to statistical independence, and that a more compact model ensues due to the algorithm. ..."
Cited by 7 (3 self)
HOW TO CHOOSE THE INDEPENDENT VARIABLE?
"... A case study is presented, where the paper-and-pencil environment and the technological one are combined and designed to address a subtle mathematical problem: how to choose the dependent vs. independent variables in modelling situations? We show how the combined approach allows one to pose the pro ..."
An introduction to variable and feature selection
Journal of Machine Learning Research, 2003
"... Variable and feature selection have become the focus of much research in areas of application for which datasets with tens or hundreds of thousands of variables are available. ..."
Cited by 1283 (16 self)
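As a minimal illustration of the simplest family of methods such surveys cover, here is a univariate "filter" that ranks variables by absolute correlation with the target. This is a sketch, not the paper's method; the helper names `pearson` and `rank_features` are invented for this example.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_features(X, y):
    """Rank feature indices by |correlation with the target| (a univariate filter)."""
    scores = [abs(pearson([row[j] for row in X], y)) for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: -scores[j])

# Toy data: feature 0 tracks the target perfectly, feature 1 is weakly related.
X = [[1, 3], [2, 1], [3, 4], [4, 1], [5, 5]]
y = [1, 2, 3, 4, 5]
ranking = rank_features(X, y)
```

Filters like this are cheap but blind to interactions between variables; wrapper and embedded methods, also treated in the survey, address that at higher computational cost.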
High dimensional graphs and variable selection with the Lasso
Annals of Statistics, 2006
"... The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a ..."
Cited by 751 (23 self)
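A hedged sketch of the neighborhood-selection idea the abstract describes (notation assumed, following the usual presentation): for each node a, one regresses X_a on all other variables with an L1 penalty,

```latex
\hat{\theta}^{a,\lambda} = \arg\min_{\theta \,:\, \theta_a = 0}
  \left( \frac{1}{n} \lVert X_a - X\theta \rVert_2^2 + \lambda \lVert \theta \rVert_1 \right),
\qquad
\widehat{\mathrm{ne}}(a) = \{\, b : \hat{\theta}^{a,\lambda}_b \neq 0 \,\},
```

and the estimated neighborhoods are combined (by an AND or an OR rule) into an estimate of the graph's edge set, i.e. of the zero pattern of the inverse covariance matrix.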
PROBABILITY INEQUALITIES FOR SUMS OF BOUNDED RANDOM VARIABLES
1962
"... Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean E[S] by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for Pr(S − E[S] ≥ nt) depend only on the endpoints of the ranges of the summands ..."
Cited by 2217 (2 self)
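For summands taking values in [0, 1], the bound in question reads Pr(S − E[S] ≥ nt) ≤ exp(−2nt²). A quick sanity check of the inequality by simulation; the helper names `hoeffding_bound` and `empirical_tail` are made up for this sketch, and the Bernoulli(p) summands are just one convenient bounded case.

```python
import math
import random

def hoeffding_bound(n, t):
    """Hoeffding upper bound on Pr(S - E[S] >= n*t) for n independent [0,1]-valued summands."""
    return math.exp(-2 * n * t * t)

def empirical_tail(n, t, trials, p=0.5, seed=0):
    """Monte Carlo estimate of Pr(S - E[S] >= n*t) where S is a sum of n Bernoulli(p) draws."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() < p for _ in range(n))
        if s - n * p >= n * t:
            hits += 1
    return hits / trials

n, t = 100, 0.1
# The simulated tail frequency should never exceed the Hoeffding bound exp(-2) ≈ 0.135.
```

For these parameters the true tail probability (roughly Pr(Binomial(100, 0.5) ≥ 60)) is far below the bound, which illustrates that Hoeffding's inequality is valid but often loose.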
A new scale of social desirability independent of psychopathology
Journal of Consulting Psychology, 1960
"... It has long been recognized that personality test scores are influenced by non-test-relevant response determinants. Wiggins and Rumrill (1959) distinguish three approaches to this problem. Briefly, interest in the problem of response distortion has been concerned with attempts at statistical correction for "faking good" or "faking bad" (Meehl & Hathaway, 1946), the analysis of response sets (Cronbach, 1946, 1950), and ratings of the social desirability of personality test items (Edwards, 1957). A further distinction can be made, however, which results in a somewhat different division of approaches to the question of response distortion. Common to both the Meehl ..."
Cited by 656 (1 self)
Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval
1998
"... The naive Bayes classifier, currently experiencing a renaissance in machine learning, has long been a core technique in information retrieval. We review some of the variations of naive Bayes models used for text retrieval and classification, focusing on the distributional assumptions made about word occurrences in documents. ..."
Cited by 496 (1 self)
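A minimal sketch of a multinomial naive Bayes text classifier of the kind the paper reviews, with Laplace smoothing; the toy corpus and the helper names `train_nb` and `classify` are invented for illustration and are not from the paper.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label) pairs. Returns a multinomial NB model."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab, len(docs)

def classify(model, tokens):
    """Pick the label maximizing log prior + sum of smoothed per-word log likelihoods."""
    class_counts, word_counts, vocab, n_docs = model
    best, best_score = None, -math.inf
    for label, c in class_counts.items():
        total = sum(word_counts[label].values())
        score = math.log(c / n_docs)  # log prior
        for w in tokens:
            # Laplace-smoothed estimate; adding log terms per word IS the
            # conditional-independence ("naive") assumption.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

docs = [("free prize money".split(), "spam"),
        ("meeting agenda today".split(), "ham"),
        ("win money now".split(), "spam"),
        ("project meeting notes".split(), "ham")]
model = train_nb(docs)
```

The per-word log likelihoods are simply summed, which is exactly the distributional independence assumption about word occurrences that the paper examines.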