Results 1-10 of 32
Generalized Box-Muller method for generating q-Gaussian random deviates
IEEE Transactions on Information Theory, 2007
The importance of network topology in local contribution games
2009
Abstract

Cited by 10 (0 self)
We consider a stylized model of content contribution in a peer-to-peer network. The model is appealing because it allows for linear-quadratic payoff functions and for very general interaction patterns among agents. We ask: How do different link patterns affect contributions? How are contributions affected by whether these goods are strategic complements, substitutes, or something in between? Who contributes and who free-rides? How is the computational complexity of optimal play affected by the externalities of the game? What networks optimize social contributions and social welfare? And how does the worst-case equilibrium performance of a network compare to its best-case performance? The analysis finds that Nash equilibria of this game always exist and that they are computable by solving a linear complementarity problem. The equilibrium is unique when goods are strategic complements or weak substitutes, and contributions are proportional to a network centrality measure called the Bonacich index. In the case of strong substitutes, the equilibrium is non-unique, and for every network there is an equilibrium where some individuals contribute optimally and others completely free-ride. Moreover, these equilibria with complete free-riders and optimal contributors are the only candidate stable equilibria and are characterized by k-order maximal independent sets. We find that the structure of optimal networks is hub-like when the game exhibits strict or weak complements. Under strong-substitute scenarios, while hub-like networks remain optimal in the best case, they also yield the worst-performing equilibria. Finally, we discuss two network-based policies for improving the equilibrium performance of networks.
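The Bonacich index that governs equilibrium contributions above is straightforward to compute: b(G, a) = (I - aG)^(-1) · 1, which is well defined when a is below the reciprocal of the adjacency matrix's spectral radius. A minimal sketch, assuming NumPy (illustrative only, not the authors' code):

```python
import numpy as np

def bonacich_index(G, a):
    """Bonacich centrality b = (I - a*G)^(-1) @ 1 for adjacency matrix G.

    Requires a < 1 / spectral_radius(G) so the series sum of a^k G^k converges.
    """
    n = G.shape[0]
    if a * max(abs(np.linalg.eigvals(G))) >= 1:
        raise ValueError("a must be below 1/spectral radius for convergence")
    return np.linalg.solve(np.eye(n) - a * G, np.ones(n))

# Example: a 3-node path graph; the center node gets the highest index.
G = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
b = bonacich_index(G, 0.3)
```

On the 3-node path the center node receives the highest index, matching the abstract's claim that, under complements or weak substitutes, the more central players contribute more.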
Faster Monte Carlo simulations at low temperatures. The waiting time method
Computer Physics Communications
Abstract

Cited by 10 (7 self)
We discuss a rejectionless global optimization technique which, while being technically similar to the recently introduced method of Extremal Optimization, still relies on a physical analogy with a thermalizing system. This method can be used at constant temperature or combined with annealing techniques, and is especially well suited for studying the low-temperature relaxation of complex systems such as glasses and spin glasses. PACS: 02.60.Pn; 05.10.Ln; 75.10.Nr
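The rejectionless idea can be sketched as a rejection-free Metropolis step: every candidate move is assigned a stochastic waiting time drawn from its acceptance rate, and the move that would fire first is always executed, so no trials are wasted at low temperature. A hedged sketch (the function names and the toy integer landscape are illustrative, not taken from the paper):

```python
import math
import random

def waiting_time_step(state, moves, energy, beta, rng=random):
    """One rejection-free step: each candidate move gets an exponential
    waiting time with its Metropolis rate; the earliest move is executed."""
    e0 = energy(state)
    best_t, best_move = float("inf"), None
    for move in moves(state):
        dE = energy(move) - e0
        rate = min(1.0, math.exp(-beta * dE))   # Metropolis acceptance rate
        t = -math.log(rng.random()) / rate      # exponential waiting time
        if t < best_t:
            best_t, best_move = t, move
    return best_move, best_t

# Toy example: minimize E(s) = s^2 over the integers with moves s -> s +/- 1.
random.seed(0)
state = 5
for _ in range(50):
    state, _ = waiting_time_step(state, lambda s: [s - 1, s + 1],
                                 lambda s: s * s, beta=2.0)
```

Because some move is always executed, the simulation never stalls rejecting proposals, which is the advantage at low temperatures that the abstract highlights.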
Simulated annealing: Rigorous finite-time guarantees for optimization on continuous domains
Advances in Neural Information Processing Systems 20, 2008
Abstract

Cited by 10 (8 self)
Simulated annealing is a popular method for approaching the solution of a global optimization problem. Existing results on its performance apply to discrete combinatorial optimization, where the optimization variables can assume only a finite set of possible values. We introduce a new general formulation of simulated annealing which allows one to guarantee finite-time performance in the optimization of functions of continuous variables. The results hold universally for any optimization problem on a bounded domain and establish a connection between simulated annealing and up-to-date theory of convergence of Markov chain Monte Carlo methods on continuous domains. This work is inspired by the concept of finite-time learning with known accuracy and confidence developed in statistical learning theory. Optimization is the general problem of finding a value of a vector of variables θ that maximizes (or minimizes) some scalar criterion U(θ). The set of all possible values of the vector θ is called the optimization domain. The elements of θ can be discrete or continuous variables. In the first case ...
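For concreteness, a textbook continuous-domain annealer on a bounded interval looks like the following; the uniform proposal and logarithmic cooling schedule are generic illustrative choices, not the specific formulation whose finite-time guarantees the paper establishes:

```python
import math
import random

def simulated_annealing(U, lo, hi, n_iter=2000, seed=0):
    """Maximize U on the bounded interval [lo, hi] by Metropolis annealing.
    Illustrative sketch: uniform proposals and a slow logarithmic cooling
    schedule, both common textbook choices."""
    rng = random.Random(seed)
    theta = rng.uniform(lo, hi)
    best = theta
    for n in range(1, n_iter + 1):
        beta = math.log(1 + n)            # inverse temperature grows slowly
        z = rng.uniform(lo, hi)           # independent uniform proposal
        if math.log(rng.random()) < beta * (U(z) - U(theta)):
            theta = z                     # accept (uphill moves always pass)
        if U(theta) > U(best):
            best = theta
    return best

# Maximize a smooth criterion with a unique optimum at theta = 0.3.
x = simulated_annealing(lambda t: -(t - 0.3) ** 2, 0.0, 1.0)
```

With a bounded domain and enough iterations the returned point lands near the maximizer; the paper's contribution is to make "enough" precise with finite-time accuracy and confidence bounds.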
Stochastic optimization on continuous domains with finite-time guarantees by Markov chain Monte Carlo methods
Abstract

Cited by 9 (3 self)
We introduce bounds on the finite-time performance of Markov chain Monte Carlo (MCMC) algorithms in solving global stochastic optimization problems defined over continuous domains. It is shown that MCMC algorithms with finite-time guarantees can be developed with a proper choice of the target distribution and by studying their convergence in total variation norm. This work is inspired by the concept of finite-time learning with known accuracy and confidence developed in statistical learning theory.
Optimal Information Transmission in Organizations: Search and Congestion
2003
Abstract

Cited by 4 (1 self)
We propose a stylized model of a problem-solving organization whose internal communication structure is given by a fixed network. Problems arrive randomly anywhere in this network and must find their way to their respective “specialized solvers” by relying on local information alone. The organization handles multiple problems simultaneously. For this reason, the process may be subject to congestion. We provide a characterization of the threshold of collapse of the network and of the stock of floating problems (or average delay) that prevails below that threshold. We build upon this characterization to address a design problem: the determination of what kind of network architecture optimizes performance for any given problem arrival rate. We conclude that, for low arrival rates, the optimal network is very polarized (i.e. star-like or “centralized”), whereas it is largely homogeneous (or “decentralized”) for high arrival rates. We also show that, if an auxiliary assumption holds, the transition between these two opposite structures is sharp and they are the only ones to ever qualify as optimal.
Fast simulated annealing in R^d with an application to maximum likelihood estimation in state-space models
Stochastic Processes and their Applications, 2009
Abstract

Cited by 3 (0 self)
We study simulated annealing algorithms to maximise a function ψ on a subset of R^d. In classical simulated annealing, given a current state θ_n in stage n of the algorithm, the probability to accept a proposed state z at which ψ is smaller is exp(−β_{n+1}(ψ(θ_n) − ψ(z))), where (β_n) is the inverse temperature. With the standard logarithmic increase of (β_n), the probability P(ψ(θ_n) ≤ ψ_max − ε), with ψ_max the maximal value of ψ, then tends to zero at a logarithmic rate as n increases. We examine variations of this scheme in which (β_n) is allowed to grow faster, but also consider other functions than the exponential for determining acceptance probabilities. The main result shows that faster rates of convergence can be obtained, both with the exponential and other acceptance functions. We also show how the algorithm may be applied to functions that cannot be computed exactly but only approximated, and give an example of maximising the log-likelihood function for a state-space model.
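The two ingredients the paper varies, the schedule (β_n) and the acceptance function, can be isolated in a few lines. The exponential rule below is the classical one from the abstract; the polynomial alternative is a generic example of a non-exponential acceptance function, not necessarily the family analysed in the paper:

```python
import math

def accept_exponential(d_psi, beta):
    """Classical rule for maximization: a downhill proposal
    (d_psi = psi(z) - psi(theta) < 0) is accepted with probability
    exp(beta * d_psi); uphill proposals are always accepted."""
    return 1.0 if d_psi >= 0 else math.exp(beta * d_psi)

def accept_polynomial(d_psi, beta, k=2):
    """A generic heavier-tailed acceptance function: illustrative only,
    accepting downhill moves with probability 1 / (1 + (beta*|d_psi|)^k)."""
    return 1.0 if d_psi >= 0 else 1.0 / (1.0 + (-beta * d_psi) ** k)
```

Heavier-tailed acceptance keeps downhill moves somewhat likely even at large β, which is the kind of trade-off the paper exploits to allow (β_n) to grow faster than logarithmically.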
Self-adaptation of mutation distribution in evolutionary algorithms
In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, 2007
Abstract

Cited by 3 (3 self)
This paper proposes a self-adaptation method to control not only the mutation strength parameter but also the mutation distribution for evolutionary algorithms. For this purpose, the isotropic q-Gaussian distribution is employed in the mutation operator. The q-Gaussian distribution allows one to control the shape of the distribution by setting a real parameter q, and can reproduce distributions with either finite or infinite second moment. In the proposed method, the real parameter q of the q-Gaussian distribution is encoded in the chromosome of an individual and is allowed to evolve. An evolutionary programming algorithm with the proposed idea is presented. Experiments were carried out to study the performance of the proposed algorithm.
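The isotropic q-Gaussian deviates used in such a mutation operator can be generated with the generalized Box-Muller transform of the first entry in this list; a sketch assuming the standard construction with auxiliary index q' = (1 + q) / (3 - q), valid for q < 3:

```python
import math
import random

def q_log(x, q):
    """Tsallis q-logarithm: ln_q(x) = (x**(1-q) - 1) / (1 - q); ln_1 = ln."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian(q, rng=random):
    """One generalized Box-Muller deviate: a q-Gaussian sample for q < 3.
    q = 1 recovers the ordinary normal distribution; q > 1 gives heavier
    tails (infinite second moment for q >= 5/3)."""
    qp = (1.0 + q) / (3.0 - q)            # auxiliary index q'
    u1 = 1.0 - rng.random()               # in (0, 1], avoids log(0)
    u2 = rng.random()
    return math.sqrt(-2.0 * q_log(u1, qp)) * math.cos(2.0 * math.pi * u2)
```

In a self-adaptive scheme like the one described, q would be carried in each individual's chromosome and passed to the sampler at mutation time.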
Information theoretic justification of Boltzmann selection and its generalization to Tsallis case
Proceedings of the IEEE Congress on Evolutionary Computation, 2005
Abstract

Cited by 3 (2 self)
A generalized evolutionary algorithm based on Tsallis statistics is proposed. The algorithm uses the Tsallis generalized canonical distribution, which is a one-parameter generalization of the Boltzmann distribution, to weigh the configurations in the selection mechanism. This generalization is motivated by the recently proposed generalized simulated annealing algorithm based on Tsallis statistics. We also present an information-theoretic justification for using the Boltzmann distribution in the selection mechanism, since these ‘canonical’ distributions have deep roots in information theory. Our simulation results show that, for an appropriate choice of the non-extensive index that is offered by Tsallis statistics, evolutionary algorithms based on this generalization outperform algorithms based on the Boltzmann distribution.
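The two selection weightings being compared can be written side by side; a sketch assuming energies E_i to be minimized, where the Tsallis weights recover the Boltzmann case as q approaches 1:

```python
import math

def boltzmann_weights(energies, beta):
    """Boltzmann selection: p_i proportional to exp(-beta * E_i)."""
    w = [math.exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

def tsallis_weights(energies, beta, q):
    """Tsallis generalized canonical weights:
    p_i proportional to max(0, 1 - (1-q)*beta*E_i) ** (1 / (1-q)).
    The q -> 1 limit is the Boltzmann case."""
    if abs(q - 1.0) < 1e-12:
        return boltzmann_weights(energies, beta)
    w = [max(0.0, 1.0 - (1.0 - q) * beta * e) ** (1.0 / (1.0 - q))
         for e in energies]
    z = sum(w)
    return [x / z for x in w]
```

Tuning the non-extensive index q reshapes the selection pressure between these extremes, which is the knob the abstract reports as outperforming plain Boltzmann selection for suitable q.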