Results 1–10 of 14
Error-Correcting Tournaments
, 2008
Cited by 26 (4 self)
We present a family of adaptive pairwise tournaments that are provably robust against large error fractions when used to determine the largest element in a set. The tournaments use nk pairwise comparisons but have only O(k + log n) depth, where n is the number of players and k is the robustness parameter (for reasonable values of n and k). These tournaments also give a reduction from multiclass to binary classification in machine learning, yielding the best known analysis for the problem.
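The paper's own construction achieves O(k + log n) depth; as a much simpler baseline for intuition only, one can run a single-elimination tournament in which each match is decided by a best-of-(2k+1) majority vote over noisy comparisons. The Python sketch below is hypothetical (all names invented here) and is not the authors' algorithm: it uses roughly n(2k+1) comparisons but has depth O(k log n), not O(k + log n).

```python
import random

def noisy_compare(a, b, error_rate, rng):
    """Return True iff a > b, but flip the answer with probability error_rate."""
    truth = a > b
    return truth if rng.random() >= error_rate else not truth

def best_of_match(a, b, k, error_rate, rng):
    """Decide a match by majority vote over 2k+1 independent noisy comparisons."""
    wins_a = sum(noisy_compare(a, b, error_rate, rng) for _ in range(2 * k + 1))
    return a if wins_a > k else b

def single_elimination(players, k, error_rate, rng):
    """Naive robust tournament: best-of-(2k+1) rounds until one player remains."""
    pool = list(players)
    while len(pool) > 1:
        nxt = [best_of_match(pool[i], pool[i + 1], k, error_rate, rng)
               for i in range(0, len(pool) - 1, 2)]
        if len(pool) % 2:              # an odd player out gets a bye
            nxt.append(pool[-1])
        pool = nxt
    return pool[0]

rng = random.Random(0)
winner = single_elimination(range(64), k=5, error_rate=0.1, rng=rng)
print(winner)  # with high probability the true maximum, 63
```

Each match errs only if a majority of its 2k+1 comparisons err, so the per-match failure probability decays exponentially in k; the point of the paper is to get the same kind of robustness at far smaller depth.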
Reconstructing a Three-Dimensional Model with Arbitrary Errors
 Journal of the ACM
, 1996
Cited by 25 (0 self)
A number of current technologies allow for the determination of interatomic distance information in structures such as proteins and RNA. Thus, the reconstruction of a three-dimensional set of points using information about its interpoint distances has become a task of basic importance in determining molecular structure. The distance measurements one obtains from techniques such as NMR are typically sparse and error-prone, greatly complicating the reconstruction task. Many of these errors result in distance measurements that can be safely assumed to lie within certain fixed tolerances. But a number of sources of systematic error in these experiments lead to inaccuracies in the data that are very hard to quantify; in effect, one must treat certain entries of the measured distance matrix as being arbitrarily "corrupted." The existence of arbitrary errors leads to an interesting sort of error-correction problem: how many corrupted entries in a distance matrix can be efficiently corre...
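As background for the reconstruction task, the noise-free version is classical: with exact distances to three non-collinear anchors, a point in the plane is fixed by trilateration. The sketch below (hypothetical names, 2-D for brevity) illustrates only this idealized case; it handles neither the sparsity, the tolerances, nor the corrupted entries the paper is about.

```python
import math

def trilaterate(d12, r1, r2, r3, a3):
    """Recover a 2-D point from exact distances r1, r2, r3 to three anchors:
    a1 = (0, 0), a2 = (d12, 0), and a3 (not collinear with a1 and a2)."""
    x = (r1 ** 2 - r2 ** 2 + d12 ** 2) / (2 * d12)
    y_abs = math.sqrt(max(r1 ** 2 - x * x, 0.0))
    # Two mirror-image candidates; the distance to the third anchor breaks the tie.
    cands = [(x, y_abs), (x, -y_abs)]
    return min(cands, key=lambda q: abs(math.dist(q, a3) - r3))

# Hypothetical example: true point (3, 4), anchors (0,0), (6,0), (1,5).
p = (3.0, 4.0)
a3 = (1.0, 5.0)
r1, r2, r3 = math.dist(p, (0, 0)), math.dist(p, (6, 0)), math.dist(p, a3)
print(trilaterate(6.0, r1, r2, r3, a3))  # (3.0, 4.0)
```

With even one arbitrarily corrupted distance this procedure can be driven to an arbitrarily wrong answer, which is exactly why the error-correction question in the abstract is nontrivial.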
Three-dimensional unsteady incompressible flow calculations using multigrid
, 1997
Cited by 6 (2 self)
We apply a robust and computationally efficient multigrid-driven algorithm for the simulation of time-dependent three-dimensional incompressible bluff-body wakes at low Reynolds numbers (Re ≤ 350). The computational algorithm combines a generalized time-accurate artificial compressibility approach, a finite-volume discretization in space, and an implicit backward discretization in time. The solution is advanced in time by performing iterative 'pseudo-transient' steady-state calculations at each time step. The key to the algorithm's efficiency is a powerful multigrid scheme that is employed to accelerate the rate of convergence of the pseudo-transient iteration. The computational efficiency is improved even further by the application of residual smoothing and local pseudo-time-stepping techniques, and by using a point-implicit discretization of the unsteady terms. The solver is implemented on a multiprocessor IBM SP2 computer using the MPI standard, and high parallel scalability is demonstrated. The low-Reynolds-number regime (Re ≤ 500) encompasses flow transitions to unsteadiness and to three-dimensionality and attracts considerable attention as an important step on the road to turbulence. In this regime, the slow asymptotics of the wake provide a challenging test for numerical methods, since long integration times are necessary to resolve the flow evolution toward a limit cycle. Our method is extended to three dimensions and applied to low-Reynolds-number flows over a circular cylinder (Re ≤ 250) and a circular semi-cylinder (Re = 350). The computational results are found to be in close agreement with the available experimental and computational data.
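The pseudo-transient idea (marching an artificial time variable until the residual of the steady equations vanishes) can be illustrated on a toy problem far simpler than the flow solver described above. The hypothetical Python sketch below relaxes the 1-D Laplace equation to steady state with plain explicit iteration; it has none of the multigrid, residual-smoothing, or point-implicit machinery the paper employs.

```python
def pseudo_transient_steady(n=33, tol=1e-12, max_iters=100_000):
    """Solve u'' = 0 on [0, 1] with u(0) = 0, u(1) = 1 by marching the
    pseudo-time equation u_t = u_xx until the steady residual vanishes.
    The converged answer is the straight line u(x) = x."""
    u = [0.0] * n
    u[-1] = 1.0                        # Dirichlet boundary values
    omega = 0.4                        # damped-Jacobi factor, i.e. dt = omega * h^2
    for _ in range(max_iters):
        # Discrete residual of u'' = 0 at the interior points (up to a 1/h^2 scale).
        res = [u[i - 1] - 2.0 * u[i] + u[i + 1] for i in range(1, n - 1)]
        if max(abs(r) for r in res) < tol:
            break
        for i in range(1, n - 1):
            u[i] += omega * res[i - 1]
    return u

u = pseudo_transient_steady()
print(u[16])  # midpoint value; relaxes toward the exact 0.5
```

This plain iteration needs O(n²) sweeps to converge; the multigrid acceleration in the paper exists precisely to cut that to a few sweeps per time step.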
On Computing Univariate GCDs over Number Fields.
, 1998
Cited by 2 (0 self)
We compare the two main competing methods for fast univariate polynomial GCD computation over an algebraic number field, namely, the modular method of Langemyr et al. (1987) and the heuristic method of Smedley et al. (1988). Because of recent improvements to the modular method by Encarnacion (1994), we expected that the modular method, if implemented "properly", would now be the method of choice in Maple. This turned out to be the case for several kinds of GCD problems. As an exercise, to complete the comparison, we also implemented a Hensel-based method. We then realized that Hensel lifting is "pointless" when applied to univariate GCD computation and implemented a more direct method that we call the prime-power method. It turns out that not only is the prime-power method simple to implement, it is also better than the heuristic method. Due to the large effort required to implement the modular method "properly", we recommend the prime-power method to systems implementors as a ve...
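The core of any modular method is computing the gcd in a homomorphic image and then recovering the true answer. The hypothetical Python sketch below shows only the image computation: a monic univariate gcd over GF(p) via the Euclidean algorithm, applied to integer-coefficient inputs. A real implementation over a number field additionally needs several primes, rational reconstruction, and Encarnacion's normalization, none of which is shown here.

```python
def poly_rem_mod(a, b, p):
    """Remainder of a divided by b over GF(p); polynomials are coefficient
    lists with the constant term first."""
    a = a[:]
    inv_lead = pow(b[-1], -1, p)       # modular inverse of b's leading coefficient
    while len(a) >= len(b):
        shift = len(a) - len(b)
        q = a[-1] * inv_lead % p
        for i, c in enumerate(b):
            a[shift + i] = (a[shift + i] - q * c) % p
        while a and a[-1] == 0:        # strip the now-zero leading terms
            a.pop()
    return a

def poly_gcd_mod(a, b, p):
    """Monic gcd of a and b over GF(p) by the Euclidean algorithm."""
    while b:
        a, b = b, poly_rem_mod(a, b, p)
    inv = pow(a[-1], -1, p)
    return [c * inv % p for c in a]

# (x - 1)(x + 2) = x^2 + x - 2   and   (x - 1)(x - 3) = x^2 - 4x + 3
g = poly_gcd_mod([-2, 1, 1], [3, -4, 1], 10007)
print(g)  # [10006, 1], i.e. the monic gcd x - 1 (10006 == -1 mod 10007)
```

An unlucky prime can make the image gcd too large, which is why production modular algorithms try several primes and check the result by trial division.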
The Bayesian ARTMAP
 IEEE Transactions on Neural Networks
, 2007
Cited by 1 (0 self)
In this paper, we modify the fuzzy ARTMAP (FA) neural network (NN) using the Bayesian framework in order to improve its classification accuracy while simultaneously reducing its category proliferation. The proposed algorithm, called Bayesian ARTMAP (BA), preserves the FA advantages and also enhances its performance by the following: 1) representing a category using a multidimensional Gaussian distribution, 2) allowing a category to grow or shrink, 3) limiting a category hypervolume, 4) using Bayes' decision theory for learning and inference, and 5) employing the probabilistic association between every category and a class in order to predict the class. In addition, the BA estimates the class posterior probability and thereby enables the introduction of loss and classification according to the minimum expected loss. Based on these characteristics and using synthetic and 20 real-world databases, we show that the BA outperforms the FA, either trained for one epoch or until completion, with respect to classification accuracy, sensitivity to statistical overlapping, ...
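To illustrate item 5) and the posterior-based prediction, the hypothetical Python sketch below classifies a 1-D input by summing Gaussian category likelihoods grouped by the class each category is associated with. The category parameters are invented for the example, and the sketch omits BA's learning, category growth/shrinkage, and hypervolume limiting entirely.

```python
import math

# Invented categories: each has a 1-D Gaussian (mean, sigma), a prior count,
# and an associated class label -- a toy analogue of BA's Gaussian categories.
categories = [
    {"mean": 0.0, "sigma": 1.0, "count": 30, "label": "A"},
    {"mean": 4.0, "sigma": 1.0, "count": 20, "label": "B"},
    {"mean": 5.5, "sigma": 0.5, "count": 10, "label": "B"},
]

def gauss_pdf(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def class_posteriors(x):
    """P(class | x): category likelihoods weighted by category priors,
    accumulated per associated class, then normalized."""
    total = sum(c["count"] for c in categories)
    joint = {}
    for c in categories:
        w = c["count"] / total * gauss_pdf(x, c["mean"], c["sigma"])
        joint[c["label"]] = joint.get(c["label"], 0.0) + w
    z = sum(joint.values())
    return {label: v / z for label, v in joint.items()}

post = class_posteriors(0.5)
print(max(post, key=post.get))  # "A": the input sits near the first category
```

Because the output is a full posterior rather than a hard label, a loss matrix can be applied on top of it, which is the minimum-expected-loss classification the abstract refers to.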
Cryogenic Design of the Liquid Helium Experiment "Critical Dynamics in
Although many well-controlled experiments have been performed to measure the static properties of systems near criticality, few experiments have explored the transport properties in systems driven far away from equilibrium as a phase transition occurs. The cryogenic design of an experiment to study the dynamic aspect of critical phenomena is reported here. Measurements of the thermal gradient across the superfluid (He II)–normal fluid (He I) interface in helium will be performed as a heat flux holds the system away from equilibrium. New technologies are under development for this experiment, which is in the definition phase for a space shuttle flight. KEYWORDS: superfluid helium, lambda transition, critical ... I. INTRODUCTION. Although static critical phenomena are relatively well studied, very little is known about transport properties through criticality in general. This is unfortunate, since in nature virtually all phase transitions occur while a system is driven far from equilibrium.
Parity-violating electron scattering from the pion-correlated relativistic Fermi gas
Parity-violating quasielastic electron scattering is studied within the context of the relativistic Fermi gas and its extensions to include the effects of pionic correlations and meson-exchange currents. The work builds on previous studies using the same model; here the part of the parity-violating asymmetry that contains axial-vector hadronic currents is developed in detail using those previous studies, and a link is provided to the transverse vector-isovector response. Various integrated observables are constructed from the differential asymmetry. These include an asymmetry averaged over the quasielastic peak, as well as the difference of the asymmetry integrated to the left and right of the peak; the latter is shown to be optimal for bringing out the nature of the pionic correlations. Special weighted integrals involving the differential asymmetry and electromagnetic cross section, based on the concepts of y-scaling and sum rules, are constructed and shown to be suited to studies of the single-nucleon form-factor content in the problem, in particular, to determinations of the isovector/axial-vector and electric strangeness form factors. Comparisons are also made with recent predictions made on the basis of relativistic mean-field theory.
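For readers unfamiliar with the central observable, the parity-violating asymmetry is conventionally defined as the helicity difference of cross sections (this is the standard textbook definition, not a formula quoted from the paper):

```latex
\mathcal{A} \;=\; \frac{d\sigma^{+} - d\sigma^{-}}{d\sigma^{+} + d\sigma^{-}},
```

where $d\sigma^{\pm}$ denote the differential cross sections for incident electrons of positive and negative helicity; the asymmetry is nonzero only through the weak (parity-violating) part of the interaction, which is what makes it sensitive to the axial-vector and strangeness form factors discussed above.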