Results 11–20 of 263
Probabilistically accurate program transformations
 In SAS
, 2011
Cited by 38 (14 self)
Abstract. The standard approach to program transformation involves the use of discrete logical reasoning to prove that the transformation does not change the observable semantics of the program. We propose a new approach that, in contrast, uses probabilistic reasoning to justify the application of transformations that may change, within probabilistic accuracy bounds, the result that the program produces. Our new approach produces probabilistic guarantees of the form P(|D| ≥ B) ≤ ε, ε ∈ (0, 1), where D is the difference between the results that the transformed and original programs produce, B is an acceptability bound on |D|, and ε is the maximum acceptable probability of observing a large |D|. We show how to use our approach to justify the application of loop perforation (which transforms loops to execute fewer iterations) to a set of computational patterns.
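The basic mechanics of loop perforation can be illustrated with a toy sketch (not the paper's implementation, and with none of its probabilistic-guarantee machinery): a mean computation is perforated by executing only every k-th iteration and rescaling by the number of iterations actually run.

```python
# A minimal, hypothetical sketch of loop perforation on a reduction loop.
def mean_exact(xs):
    total = 0.0
    for x in xs:                     # original loop: all iterations
        total += x
    return total / len(xs)

def mean_perforated(xs, k=2):
    total, count = 0.0, 0
    for i in range(0, len(xs), k):   # perforated loop: every k-th iteration
        total += xs[i]
        count += 1
    return total / count             # rescale by the iterations actually run

xs = [float(i * i) for i in range(1000)]
exact = mean_exact(xs)               # 332833.5
approx = mean_perforated(xs, k=4)    # ~4x fewer iterations
rel_err = abs(approx - exact) / exact
```

For this smooth input the perforated loop runs roughly a quarter of the iterations while keeping the relative error below one percent; the paper's contribution is bounding such differences probabilistically rather than per-input.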
Adaptive multilevel splitting for rare event analysis
, 2005
Cited by 37 (8 self)
The estimation of rare event probability is a crucial issue in areas such as reliability, telecommunications, and aircraft management. In complex systems, analytical study is out of the question and one has to use Monte Carlo methods. When rare is really rare, which means a probability less than 10⁻⁹, naive Monte Carlo becomes unreasonable. A widespread technique consists in multilevel splitting, but this method requires enough knowledge about the system to decide beforehand where to place the levels. This is unfortunately not always possible. In this paper, we propose an adaptive algorithm to cope with this problem: the estimation is asymptotically consistent, costs just a little bit more than classical multilevel splitting, and has the same efficiency in terms of asymptotic variance. In the one-dimensional case, we prove rigorously the a.s. convergence and the asymptotic normality of our estimator, with the same variance as with other algorithms that use fixed crossing levels. In our proofs we mainly use tools from the theory of empirical processes, which seems to be quite new in the field of rare events.
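The adaptive idea can be sketched on a toy problem (a hypothetical setting chosen for brevity, not the general Markov dynamics the paper treats): estimate p = P(X > q) for X ~ N(0,1) with a "last-particle" variant of adaptive splitting, where each level is placed at the current minimum of the particle cloud rather than fixed in advance.

```python
import random, math

def ams_tail_prob(q, n=200, mcmc_steps=20, seed=1):
    """Last-particle adaptive multilevel splitting for p = P(X > q), X ~ N(0,1).
    Levels are chosen adaptively at the running minimum; each killed particle
    is replaced by a clone of a survivor, moved by Metropolis steps targeting
    the standard normal restricted to (level, +inf)."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    iters = 0
    while min(xs) <= q:
        iters += 1
        i = min(range(n), key=lambda j: xs[j])
        level = xs[i]                      # adaptive level = current minimum
        x = xs[rng.randrange(n)]
        while x <= level:                  # clone must be a survivor
            x = xs[rng.randrange(n)]
        for _ in range(mcmc_steps):
            prop = x + rng.gauss(0.0, 0.5)
            # Metropolis acceptance for N(0,1) truncated above the level
            if prop > level and rng.random() < math.exp((x * x - prop * prop) / 2.0):
                x = prop
        xs[i] = x
    # each iteration discards a fraction 1/n of the remaining probability mass
    return (1.0 - 1.0 / n) ** iters

p_hat = ams_tail_prob(4.0)   # true value is about 3.2e-5
```

Naive Monte Carlo would need on the order of 10⁵–10⁶ samples to see even one such event; the splitting estimator reaches the right order of magnitude with a few hundred particles.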
Fundamental limits of wideband localization – Part II: Cooperative networks
 IEEE Trans. Inf. Theory
, 2010
Cited by 36 (10 self)
Abstract—The availability of position information is of great importance in many commercial, governmental, and military applications. Localization is commonly accomplished through the use of radio communication between mobile devices (agents) and fixed infrastructure (anchors). However, precise determination of agent positions is a challenging task, especially in harsh environments due to radio blockage or limited anchor deployment. In these situations, cooperation among agents can significantly improve localization accuracy and reduce localization outage probabilities. A general framework for analyzing the fundamental limits of wideband localization was developed in Part I of the paper. Here, we build on this framework and establish the fundamental limits of wideband cooperative location-aware networks. Our analysis is based on the waveforms received at the nodes, in conjunction with the Fisher information inequality. We provide a geometrical interpretation of equivalent Fisher information (EFI) for cooperative networks. This approach allows us to succinctly derive fundamental performance limits and their scaling behaviors, and to treat anchors and agents in a unified way from the perspective of localization accuracy. Our results yield important insights into how and when cooperation is beneficial. Index Terms—Cooperative localization, Cramér–Rao bound
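The Fisher-information view can be illustrated with a small numeric sketch (illustrative geometry and link intensities, not values from the paper): for range measurements in 2-D, the position FIM is a sum of rank-one terms λ_k u_k u_kᵀ over links, so a cooperative link adds information and can only tighten the Cramér–Rao-type bound.

```python
import math

def fim_2x2(links):
    """2x2 Fisher information for a 2-D position from range measurements.
    Each link is (angle, intensity): the FIM is sum_k lam_k * u_k u_k^T,
    with u_k the unit vector toward node k (standard ranging model)."""
    a = b = c = 0.0                       # symmetric matrix [[a, b], [b, c]]
    for ang, lam in links:
        ux, uy = math.cos(ang), math.sin(ang)
        a += lam * ux * ux
        b += lam * ux * uy
        c += lam * uy * uy
    return a, b, c

def crlb_trace(links):
    a, b, c = fim_2x2(links)
    det = a * c - b * b
    return (a + c) / det                  # trace of the inverse FIM

anchors = [(0.0, 1.0), (math.pi / 2, 1.0)]          # two orthogonal anchors
with_peer = anchors + [(math.pi / 4, 0.5)]          # plus one cooperative link
bound_alone = crlb_trace(anchors)                   # 2.0
bound_coop = crlb_trace(with_peer)                  # smaller: cooperation helps
```

Even a weak agent-to-agent link (half the anchor intensity here) strictly lowers the position error bound, which is the qualitative message of the EFI analysis.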
Noisy optimization with evolution strategies
 SIAM Journal on Optimization
Cited by 36 (6 self)
Evolution strategies are general, nature-inspired heuristics for search and optimization. Supported both by empirical evidence and by recent theoretical findings, there is a common belief that evolution strategies are robust and reliable, and frequently they are the method of choice if neither derivatives of the objective function are at hand nor differentiability and numerical accuracy can be assumed. However, despite their widespread use, there is little exchange between members of the “classical” optimization community and people working in the field of evolutionary computation. It is our belief that both sides would benefit from such an exchange. In this paper, we present a brief outline of evolution strategies and discuss some of their properties in the presence of noise. We then empirically demonstrate that for a simple but nonetheless nontrivial noisy objective function, an evolution strategy outperforms other optimization algorithms designed to be able to cope with noise. The environment in which the algorithms are tested is deliberately chosen to afford a transparency of the results that reveals the strengths and shortcomings of the strategies, making it possible to draw conclusions with regard to the design of better optimization algorithms for noisy environments.
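A minimal evolution strategy can be sketched in a few lines (a textbook (1+1)-ES with a 1/5th-success-rule step-size rule, not the specific strategy the paper studies), applied to a sphere objective with additive evaluation noise:

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=300, seed=0):
    """(1+1)-ES: mutate the parent with isotropic Gaussian noise, keep the
    offspring if it is no worse, and adapt the step size sigma."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:          # selection: accept non-worsening offspring
            x, fx = y, fy
            sigma *= 1.5      # success: enlarge the step
        else:
            sigma *= 0.9      # failure: shrink it (~1/5th-rule ratio)
    return x, fx

noise_rng = random.Random(42)
def noisy_sphere(x):
    # sphere objective perturbed by additive evaluation noise
    return sum(xi * xi for xi in x) + noise_rng.gauss(0.0, 0.01)

best, fbest = one_plus_one_es(noisy_sphere, [3.0, -2.0])
```

Despite only ever comparing two noisy function values per step, the strategy drives the objective from 13 down to near the noise floor, which is the kind of robustness to noise the abstract refers to.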
Theoretical Foundations Of Linear And Order Statistics Combiners For Neural Pattern Classifiers
 IEEE Transactions on neural networks
, 1996
Cited by 35 (5 self)
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This paper provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and the order statistics combiners introduced in this paper. We show that combining networks in output space reduces the variance of the actual decision region boundaries around the optimum boundary. For linear combiners, we show that in the absence of classifier bias, the added classification error is proportional to the boundary variance. For nonlinear combiners, we show analytically that the selection of the median, the maximum, and in general the ith order statistic improves classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions…
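An order-statistics combiner is easy to state concretely (a toy sketch with made-up scores, not the paper's experimental setup): for each class, take the median of the classifiers' scores and predict the class with the largest median.

```python
import statistics

def median_combiner(outputs):
    """Order-statistics combiner: per-class median of the classifiers'
    scores, then argmax over classes."""
    n_classes = len(outputs[0])
    combined = [statistics.median(o[c] for o in outputs) for c in range(n_classes)]
    return combined.index(max(combined))

# three classifiers' scores for two classes; the third is an outlier
outputs = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]]
pred = median_combiner(outputs)   # the median suppresses the outlier vote
```

The median (the middle order statistic) discards the single aberrant classifier entirely, which is the intuition behind the robustness results for ith-order-statistic combiners.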
Exploiting multiuser diversity with only 1–bit feedback
 in Proc. IEEE Wireless Commun. and Networking Conf
, 2005
Cited by 32 (2 self)
Abstract—In a system with n users, the sum-rate capacity of the downlink channel grows as log log n, assuming optimal scheduling. However, optimal scheduling requires that the downlink channel state information (CSI) for all users be fully available at the base station. In this paper we show that the same capacity growth holds even if the feedback rate from the mobiles to the base station is reduced to one bit. We propose a simple scheduling method to achieve this multiuser capacity, and furthermore we show that by a judicious choice of the one-bit quantizer, not only the growth rate but also most of the capacity of a fully informed system can be preserved.
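The spirit of the scheme can be simulated with a hypothetical sketch (illustrative threshold and Rayleigh-fading model; the paper derives the judicious quantizer choice, not used here): each user feeds back one bit saying whether its channel power exceeds a threshold, and the base station serves a random user among those reporting 1.

```python
import random, math

def one_bit_scheduler(gains, threshold, rng):
    """Serve a user chosen among those whose single feedback bit says
    'gain above threshold'; fall back to a random user if none report 1."""
    above = [i for i, g in enumerate(gains) if g > threshold]
    return rng.choice(above if above else list(range(len(gains))))

def avg_rate(n_users, threshold, trials=2000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        gains = [rng.expovariate(1.0) for _ in range(n_users)]  # Rayleigh power
        g = gains[one_bit_scheduler(gains, threshold, rng)]
        total += math.log2(1.0 + g)                             # Shannon rate
    return total / trials

n = 32
informed = avg_rate(n, threshold=math.log(n))  # threshold near the typical max gain
blind = avg_rate(n, threshold=float("inf"))    # no useful feedback: random user
```

With the threshold set near the typical maximum gain log n, a single feedback bit per user already recovers a large part of the multiuser diversity gain over blind round-robin-style scheduling.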
On maximum likelihood estimation of the extreme value index
 Ann. Appl. Probab
, 2004
Cited by 26 (2 self)
We prove asymptotic normality of the so-called maximum likelihood estimator of the extreme value index. 1. Introduction. Let X1, X2, ... be independent and identically distributed (i.i.d.) random variables (r.v.’s) from some unknown distribution function (d.f.) F. Denote the upper endpoint of F by x∗, where x∗ = sup{x : F(x) < 1} ≤ ∞ …
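The estimator in question maximizes the generalized Pareto likelihood of the excesses over a high threshold. A crude sketch (a grid-search stand-in for the true MLE, on synthetic GPD data; the paper analyzes the exact estimator) recovers the extreme value index γ:

```python
import math, random

def gpd_loglik(y, gamma, sigma):
    """Log-likelihood of the generalized Pareto distribution (gamma > 0 case):
    log f(y) = -log(sigma) - (1/gamma + 1) * log(1 + gamma*y/sigma)."""
    ll = 0.0
    for v in y:
        z = 1.0 + gamma * v / sigma
        if z <= 0:
            return -math.inf
        ll += -math.log(sigma) - (1.0 / gamma + 1.0) * math.log(z)
    return ll

def gpd_mle_grid(y):
    """Grid search over (gamma, sigma) as a coarse stand-in for the MLE."""
    best = (None, None, -math.inf)
    for gi in range(1, 30):
        gamma = gi * 0.05
        for si in range(1, 40):
            sigma = si * 0.1
            ll = gpd_loglik(y, gamma, sigma)
            if ll > best[2]:
                best = (gamma, sigma, ll)
    return best[0], best[1]

rng = random.Random(3)
gamma_true, sigma_true = 0.5, 1.0
# exact GPD sample via inverse transform: y = sigma*(u^-gamma - 1)/gamma
y = [sigma_true * ((1.0 - rng.random()) ** -gamma_true - 1.0) / gamma_true
     for _ in range(1000)]
gamma_hat, sigma_hat = gpd_mle_grid(y)
```

With 1000 exceedances the likelihood surface already pins γ down to within a few hundredths of the true value, consistent with the √n-normality the paper establishes.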
Optimal Auctions Revisited
In Proceedings of AAAI-98
, 1998
Cited by 25 (6 self)
This paper addresses several basic problems inspired by the adaptation of economic mechanisms, and auctions in particular, to the Internet.
On the throughput-delay tradeoff in cellular multicast
 WirelessCom’05
, 2005
Cited by 24 (0 self)
In this paper, we adopt a cross-layer design approach for analyzing the throughput-delay tradeoff of the multicast channel in a single-cell system. To illustrate the main ideas, we start with the single-group case, i.e., pure multicast, where a common information stream is requested by all the users. We consider three classes of scheduling algorithms with progressively increasing complexity. The first class strives for minimum complexity by resorting to a static scheduling strategy along with memoryless decoding. Our analysis for this class of scheduling algorithms reveals the existence of a static scheduling policy that achieves the optimal scaling law of the throughput at the expense of a delay that increases exponentially with the number of users. The second scheduling policy resorts to a higher-complexity incremental redundancy encoding/decoding strategy to achieve a superior throughput-delay tradeoff. The third, and most complex, scheduling strategy benefits from the cooperation between the different users to minimize the delay while achieving the optimal scaling law of the throughput. In particular, the proposed cooperative multicast strategy is shown to simultaneously achieve the optimal scaling laws of both throughput and delay. Then, we generalize our scheduling algorithms to exploit the multi-group diversity available when different information streams are requested by different subsets of the user population. Finally, we discuss the potential gains of equipping the base station with multiple transmit antennas and present simulation results that validate our theoretical claims.
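The role of the worst user in multicast delay can be seen in a toy simulation (a memoryless erasure model chosen for simplicity; it is not the paper's fading-channel model and does not reproduce its exponential-delay scaling): the base station repeats the common message until every user has decoded it at least once.

```python
import random

def multicast_delay(n_users, p_success, rng):
    """Slots until every user has decoded the common message once, under a
    toy model where each pending user independently decodes each broadcast
    slot with probability p_success (memoryless decoding, no combining)."""
    pending = n_users
    slots = 0
    while pending > 0:
        slots += 1
        pending -= sum(1 for _ in range(pending) if rng.random() < p_success)
    return slots

rng = random.Random(11)
def avg_delay(n, trials=400):
    return sum(multicast_delay(n, 0.5, rng) for _ in range(trials)) / trials

small_group = avg_delay(4)
large_group = avg_delay(64)   # completion time is set by the slowest user
```

Because the session ends only when the last user finishes, average delay grows with group size even though each slot serves everyone, which is the tension the three scheduling classes in the paper trade off against throughput.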
Analysis of hybrid selection/maximal-ratio diversity combiners with Gaussian errors
 IEEE Trans. Wireless Commun
, 2002
Cited by 23 (0 self)
Abstract—The paper examines the impact of Gaussian distributed weighting errors (in the channel gain estimates used for coherent combination) on both the output statistics of a hybrid selection/maximal-ratio (SC/MRC) receiver and the degradation of the average symbol-error rate (ASER) performance as compared with the ideal case. New expressions are derived for the probability density function, cumulative distribution function, and moment generating function (MGF) of the coherent hybrid SC/MRC combiner output signal-to-noise ratio (SNR). The MGF is then used to derive exact, closed-form ASER expressions for binary and M-ary modulations in conjunction with a non-ideal hybrid SC/MRC receiver in a Rayleigh fading environment. Results for both selection combining (SC) and maximal-ratio combining (MRC) are obtained as limiting cases. Additionally, the effect of the weighting errors on both the outage rate of error probability and the average combined SNR is investigated. These analytical results provide insights into the tradeoff between diversity gain and combination losses, in concert with increasing orders of diversity branches in an energy-sharing communication system. Index Terms—Binary and M-ary signaling, coherent combiner with weighting errors, diversity methods, hybrid diversity receivers.
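The hybrid SC/MRC structure itself is simple to sketch (ideal combining weights here; the paper's contribution is the analysis under Gaussian weighting errors, which this toy simulation omits): select the L strongest of N diversity branches and maximal-ratio combine them, so the output SNR is the sum of the selected branch SNRs.

```python
import random

def hybrid_sc_mrc_snr(branch_snrs, L):
    """Hybrid SC/MRC with ideal weights: output SNR is the sum of the
    L largest branch SNRs. L=1 reduces to SC, L=N to full MRC."""
    return sum(sorted(branch_snrs, reverse=True)[:L])

rng = random.Random(5)
# 5000 fading realizations of a 4-branch receiver (Rayleigh: exp. power)
trials = [[rng.expovariate(1.0) for _ in range(4)] for _ in range(5000)]

def mean_snr(L):
    return sum(hybrid_sc_mrc_snr(t, L) for t in trials) / len(trials)

sc, hybrid, mrc = mean_snr(1), mean_snr(2), mean_snr(4)
```

Trial by trial, output SNR is monotone in L, so SC ≤ hybrid SC/MRC ≤ full MRC: the hybrid trades a little combining gain for fewer active RF chains, and the paper quantifies how estimation errors erode each option.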