Results 1–10 of 7,952
Stochastic Perturbation Theory
, 1988
"... In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a first-order perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variatio ..."
Abstract

Cited by 907 (36 self)
and the eigenvalue problem. Key words. perturbation theory, random matrix, linear system, least squares, eigenvalue, eigenvector, invariant subspace, singular value AMS(MOS) subject classifications. 15A06, 15A12, 15A18, 15A52, 15A60 1. Introduction. Let A be a matrix and let F be a matrix-valued function of A
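The first-order approach described in this abstract can be illustrated numerically. The matrix, perturbation scale, and variable names below are illustrative assumptions, not taken from the paper: a symmetric matrix is perturbed by small random symmetric noise, and the first-order eigenvalue estimate is compared against the exact perturbed eigenvalue.

```python
import numpy as np

# Stochastic first-order perturbation sketch (illustrative, not the paper's
# code): for a symmetric matrix A with eigenpair (lam, v), a small symmetric
# perturbation E changes the eigenvalue by approximately v @ E @ v.  Treating
# E as random lets us study the statistics of the perturbed eigenvalue.
rng = np.random.default_rng(0)
A = np.diag([1.0, 2.0, 5.0])           # illustrative symmetric matrix
lam, V = np.linalg.eigh(A)             # eigenvalues in ascending order
v = V[:, -1]                           # eigenvector of the largest eigenvalue

eps = 1e-3                             # perturbation scale (assumed)
samples = []
for _ in range(2000):
    E = rng.standard_normal((3, 3))
    E = eps * (E + E.T) / 2            # random symmetric perturbation
    first_order = lam[-1] + v @ E @ v  # first-order eigenvalue estimate
    exact = np.linalg.eigvalsh(A + E)[-1]
    samples.append(exact - first_order)

# The first-order error is O(eps**2), far smaller than the O(eps) perturbation.
err = np.max(np.abs(samples))
print(f"max first-order error: {err:.2e} (perturbation size ~{eps:.0e})")
```

The gap between the largest eigenvalue and the rest controls the size of the neglected second-order term, which is why the error stays far below the perturbation scale here.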
Semantics of Context-Free Languages
 In Mathematical Systems Theory
, 1968
"... "Meaning" may be assigned to a string in a context-free language by defining "attributes" of the symbols in a derivation tree for that string. The attributes can be defined by functions associated with each production in the grammar. This paper examines the implications of th ..."
Abstract

Cited by 569 (0 self)
towards programming languages, but the same methods appear to be relevant also in the study of natural languages. 1. Introduction. Let
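The attribute idea in this abstract can be sketched in a few lines. The grammar, the "value" attribute, and the function below are an illustrative assumption in the spirit of attribute grammars, not code from the paper: each production of a binary-numeral grammar is paired with a rule that synthesizes the value of a node from the values of its children.

```python
# Synthesized-attribute sketch (illustrative): evaluate binary numerals by
# attaching a "value" attribute to each node of the derivation tree.
# Grammar:  N -> N B | B ;   B -> '0' | '1'
def value(bits: str) -> int:
    # Production N -> N B:  value(N) = 2 * value(N1) + value(B)
    if len(bits) > 1:
        return 2 * value(bits[:-1]) + value(bits[-1])
    # Production B -> '0' | '1':  value(B) is the digit itself
    return int(bits)

print(value("1011"))  # → 11
```

The recursion mirrors the derivation tree: each call corresponds to one production, and the attribute of the root is the "meaning" of the whole string.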
I. Introduction. Let X be a Banach space and G be a closed
"... Let X be a Banach space and G be a closed subspace of X. We say G is 2-simultaneously proximinal in X if for any x1, x2 in X, there exists some y ∈ G such that ‖x1 − y‖ + ‖x2 − y‖ = inf{‖x1 − z‖ + ‖x2 − z‖ : z ∈ G} = d({x1, x2}, G). In this paper, we give a formula for d({x1, x2}, G) in vector valu ..."
Abstract
Let X be a Banach space and G be a closed subspace of X. We say G is 2-simultaneously proximinal in X if for any x1, x2 in X, there exists some y ∈ G such that ‖x1 − y‖ + ‖x2 − y‖ = inf{‖x1 − z‖ + ‖x2 − z‖ : z ∈ G} = d({x1, x2}, G). In this paper, we give a formula for d({x1, x2}, G) in vector
A Tutorial on Learning Bayesian Networks
 Communications of the ACM
, 1995
"... We examine a graphical representation of uncertain knowledge called a Bayesian network. The representation is easy to construct and interpret, yet has formal probabilistic semantics making it suitable for statistical manipulation. We show how we can use the representation to learn new knowledge by c ..."
Abstract

Cited by 365 (12 self)
by combining domain knowledge with statistical data. 1 Introduction Many techniques for learning rely heavily on data. In contrast, the knowledge encoded in expert systems usually comes solely from an expert. In this paper, we examine a knowledge representation, called a Bayesian network, that lets us have
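The representation described in this abstract can be illustrated with a minimal sketch. The two-node network, the variable names (Rain, WetGrass), and all probabilities below are illustrative assumptions, not from the tutorial: the network is just a prior plus a conditional probability table, and inference combines them with Bayes' rule.

```python
# Minimal Bayesian-network sketch (illustrative): two binary nodes,
# Rain -> WetGrass, defined by P(Rain) and the table P(WetGrass | Rain).
p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.1}

def joint(rain: bool, wet: bool) -> float:
    # Chain rule for the network: P(rain, wet) = P(rain) * P(wet | rain)
    pr = p_rain if rain else 1.0 - p_rain
    pw = p_wet_given_rain[rain]
    return pr * (pw if wet else 1.0 - pw)

# Posterior P(Rain = True | WetGrass = True) by enumerating the joint.
evidence = joint(True, True) + joint(False, True)
posterior = joint(True, True) / evidence
print(round(posterior, 3))  # → 0.692
```

Learning, in the sense of the abstract, would replace the hand-set numbers with values estimated from data while keeping the same graph structure.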
1 Introduction Let us consider the following classical Hamiltonian model of a system of rotators:
1. Introduction. Let X be a Hilbert space and V: X → X be an isometry. The well
"... ABSTRACT. This paper studies closed subspaces L of the Hardy spaces Hp which are g-invariant (i.e., g·L ⊆ L) where g is inner, g ≠ 1. If p = 2, the Wold decomposition theorem implies that there is a countable "g-basis" f1, f2, ... of L in the sense that L is a direct sum of spaces fj · H2[g] where ..."
Abstract
ABSTRACT. This paper studies closed subspaces L of the Hardy spaces Hp which are g-invariant (i.e., g·L ⊆ L) where g is inner, g ≠ 1. If p = 2, the Wold decomposition theorem implies that there is a countable "g-basis" f1, f2, ... of L in the sense that L is a direct sum of spaces fj · H2[g], where H2[g] = {f ∘ g : f ∈ H2}. The basis elements fj satisfy the additional property that ∫_T |fj|² g^k = 0, k = 1, 2, .... We call such functions g-2-inner. It also follows that any f ∈ H2 can be factored f = h_{f,2} · (F2 ∘ g), where h_{f,2} is g-2-inner and F is outer, generalizing the classical Riesz factorization. Using Lp estimates for the canonical decomposition of H2, we find a factorization f = h_{f,p} · (Fp ∘ g) for f ∈ Hp. If p ≥ 1 and g is a finite Blaschke product we obtain, for any g-invariant L ⊆ Hp, a finite g-basis of g-p-inner functions.
Let {}
"... Abstract. In this paper we introduce for the first time the fusion of information on infinite discrete frames of discernment and we give general results of the fusion of two such masses using Dempster's rule and the PCR5 rule for Bayesian and non-Bayesian cases. Introduction. Θ = {x1, x2, ..., xn, ...} ..."
Abstract
Abstract. In this paper we introduce for the first time the fusion of information on infinite discrete frames of discernment and we give general results of the fusion of two such masses using Dempster's rule and the PCR5 rule for Bayesian and non-Bayesian cases. Introduction. Θ = {x1, x2, ..., xn, ...
Let
"... Abstract. By means of the Ruscheweyh derivative we define a new class BRp,n(m, µ, α) involving functions f ∈ A (p, n). Parallel results, for some related classes including the class of starlike and convex functions respectively, are also obtained. 2000 Mathematics Subject Classification: 30C45 1. In ..."
Let
"... Abstract. By means of the Sălăgean differential operator we define a new class BS(p, m, µ, α) involving functions f ∈ A (p, n). Parallel results, for some related classes including the class of starlike and convex functions respectively, are also obtained. 2000 Mathematics Subject Classification: 30 ..."