Results 1–10 of 20
Maximum pressure policies in stochastic processing networks
, 2005
Abstract

Cited by 71 (6 self)
Complex systems like semiconductor wafer fabrication facilities (fabs), networks of data switches, and large-scale call centers all demand efficient resource allocation. Deterministic models like linear programs (LPs) have been used for capacity planning at both the design and expansion stages of such systems. LP-based planning is critical in setting a medium-range or long-term goal for many systems, but it does not translate into a day-to-day operational policy that must deal with the discreteness of jobs and the randomness of the processing environment. A stochastic processing network, advanced by J. Michael Harrison (2000, 2002, 2003), is a system that takes inputs of materials of various kinds and uses various processing resources to produce outputs of materials of various kinds. Such a network provides a powerful abstraction of a wide range of real-world systems, and it provides high-fidelity stochastic models in diverse economic sectors including manufacturing, service, and information technology. We propose a family of maximum pressure service policies for dynamically allocating service capacities in a stochastic processing network. Under a mild assumption on network structure, we prove that a network operating under a maximum pressure policy achieves the maximum throughput predicted by LPs. These policies are semi-local in the sense that each
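The core of a maximum pressure (back-pressure) decision can be sketched in a few lines. The two-class tandem, service rates, and queue lengths below are invented for illustration, and this single-server sketch omits the paper's general network formulation:

```python
# Illustrative sketch only (not the paper's full formulation): a back-pressure
# decision for a single server choosing among job classes.  The "pressure" of
# serving class k is mu_k * (Q_k - Q_downstream(k)); serve the class with the
# largest positive pressure, or idle if every pressure is nonpositive.

def max_pressure_choice(queues, service_rate, downstream):
    """Return the class to serve under a max-pressure rule, or None to idle."""
    best_class, best_pressure = None, 0.0
    for k, q in queues.items():
        nxt = downstream.get(k)          # None means served jobs exit the network
        diff = q - (queues[nxt] if nxt is not None else 0)
        pressure = service_rate[k] * diff
        if pressure > best_pressure:
            best_class, best_pressure = k, pressure
    return best_class

# Two-class tandem (made-up data): class "a" feeds class "b", "b" exits.
queues = {"a": 5, "b": 2}
rates = {"a": 1.0, "b": 0.8}
route = {"a": "b", "b": None}
print(max_pressure_choice(queues, rates, route))  # "a": 1.0*(5-2) > 0.8*(2-0)
```

Note how the rule needs only local queue lengths and rates, not arrival-rate data, which is the sense in which such policies are "semi-local".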
On Dynamic Scheduling of a Parallel Server System with Complete Resource Pooling
 In Analysis of Communication Networks: Call Centres, Traffic and Performance
, 2000
Abstract

Cited by 67 (5 self)
We consider a parallel server queueing system consisting of a bank of buffers for holding incoming jobs and a bank of flexible servers for processing these jobs. Incoming jobs are classified into one of several different classes (or buffers). Jobs within a class are processed on a first-in-first-out basis, where the processing of a given job may be performed by any server from a given (class-dependent) subset of the bank of servers. The random service time of a job may depend on both its class and the server providing the service. Each job departs the system after receiving service from one server. The system manager seeks to minimize holding costs by dynamically scheduling waiting jobs to available servers. We consider a parameter regime in which the system satisfies both a heavy traffic and a complete resource pooling condition. Our cost function is an expected cumulative discounted cost of holding jobs in the system, where the (undiscounted) cost per unit time is a linear function of the normalized (with heavy traffic scaling) queue length. In a prior work [40], the second author proposed a continuous review threshold control policy for use in such a parallel server system. This policy was advanced as an "interpretation" of the analytic solution to an associated Brownian control problem (formal heavy
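For intuition, here is a minimal sketch of the scheduling decision in such a parallel server system. The greedy cμ-style rule and all numbers are illustrative assumptions; the paper's actual proposal is a threshold policy derived from a Brownian control problem, not this rule:

```python
# Illustrative only: a myopic assignment rule for a parallel server system.
# Each idle server takes the compatible nonempty class with the largest
# holding_cost[k] * rate[(k, server)] (a cmu-style heuristic, NOT the
# threshold policy of the paper).  All data below are made up.

def assign_servers(queues, compat, rate, holding_cost):
    """Map each server to the class it should serve next (servers with no
    compatible waiting jobs are left unassigned)."""
    assignment = {}
    remaining = dict(queues)
    for server, classes in compat.items():
        candidates = [k for k in classes if remaining.get(k, 0) > 0]
        if not candidates:
            continue
        k = max(candidates, key=lambda c: holding_cost[c] * rate[(c, server)])
        assignment[server] = k
        remaining[k] -= 1
    return assignment

# Two classes, two servers; server 1 can serve both classes, server 2 only "b".
queues = {"a": 3, "b": 1}
compat = {1: ["a", "b"], 2: ["b"]}
rate = {("a", 1): 1.0, ("b", 1): 0.5, ("b", 2): 1.2}
cost = {"a": 2.0, "b": 1.0}
print(assign_servers(queues, compat, rate, cost))  # {1: 'a', 2: 'b'}
```

The class-dependent `compat` sets are what make the servers "flexible"; complete resource pooling says these overlapping capabilities let the servers act, in heavy traffic, like one pooled super-server.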
Asymptotic optimality of maximum pressure policies in stochastic processing networks
 Annals of Applied Probability
, 2008
Abstract

Cited by 43 (4 self)
We consider a class of stochastic processing networks. Assume that the networks satisfy a complete resource pooling condition. We prove that each maximum pressure policy asymptotically minimizes the workload process in a stochastic processing network in heavy traffic. We also show that, under each quadratic holding cost structure, there is a maximum pressure policy that asymptotically minimizes the holding cost. A key to the optimality proofs is to prove a state space collapse result and a heavy traffic limit theorem for the network processes under a maximum pressure policy. We extend a framework of Bramson [Queueing Systems Theory Appl. 30 (1998) 89–148] and Williams [Queueing Systems Theory Appl. 30 (1998b) 5–25] from the multiclass queueing network setting to the stochastic processing network setting to prove the state space collapse result and the heavy traffic limit theorem. The extension can be adapted to other studies of stochastic processing networks.
Heavy traffic analysis of open processing networks with complete resource pooling: asymptotic optimality of discrete review policies
 Ann. Appl. Probab.
, 2005
Abstract

Cited by 23 (0 self)
We consider a class of open stochastic processing networks, with feedback routing and overlapping server capabilities, in heavy traffic. The networks
Workload Models for Stochastic Networks: Value Functions and Performance Evaluation
, 2005
Abstract

Cited by 16 (9 self)
This paper concerns control and performance evaluation for stochastic network models. Structural properties of value functions are developed for controlled Brownian motion (CBM) and deterministic (fluid) workload models, leading to the following conclusions: Outside of a null set of network parameters, (i) the fluid value function is a smooth function of the initial state. Under further minor conditions, the fluid value function satisfies the derivative boundary conditions that are required to ensure it is in the domain of the extended generator for the CBM model. Exponential ergodicity of the CBM model is demonstrated as one consequence. (ii) The fluid value function provides a shadow function for use in simulation variance reduction for the stochastic model. The resulting simulator satisfies an exact large deviation principle, while a standard simulation algorithm does not satisfy any such bound. (iii) The fluid value function provides upper and lower bounds on performance for the CBM model. This follows from an extension of recent linear programming approaches to performance evaluation.
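Point (ii), using a value function as a "shadow" control variate in simulation, can be illustrated on the simplest possible example. The M/M/1 model, the rescaled fluid value function J, and the smoothed-estimator form below are illustrative assumptions, not the paper's CBM setting:

```python
import random

def mm1_mean_queue(lam, mu, steps, shadow=None, seed=1):
    """Estimate the stationary mean queue length of an M/M/1 queue from its
    uniformized embedded chain.  Plain estimator: average of X_k.  Shadow
    estimator: average of X_k + (Ph)(X_k) - h(X_{k+1}), where the added term
    is a martingale difference (conditional mean zero); when h approximates
    the solution of Poisson's equation it acts as a control variate."""
    rng = random.Random(seed)
    p = lam / (lam + mu)          # uniformized probability of an arrival step
    x, total = 0, 0.0
    for _ in range(steps):
        x_next = x + 1 if rng.random() < p else max(x - 1, 0)
        if shadow is None:
            total += x
        else:
            Ph = p * shadow(x + 1) + (1 - p) * shadow(max(x - 1, 0))
            total += x + Ph - shadow(x_next)
        x = x_next
    return total / steps

lam, mu = 0.8, 1.0
# Shadow function: fluid value function for cost c(x) = x, rescaled to the
# embedded chain's per-step drift (an illustrative choice):
J = lambda x: x * x * (lam + mu) / (2.0 * (mu - lam))
print(mm1_mean_queue(lam, mu, 100_000))            # both estimates ~ rho/(1-rho) = 4
print(mm1_mean_queue(lam, mu, 100_000, shadow=J))
```

Both estimators have the same limit; the paper's point is that a fluid value function is a principled choice of shadow function, with large-deviation guarantees a naive simulator lacks.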
Stability and asymptotic optimality of generalized maxweight policies
 SIAM Journal on Control and Optimization
Abstract

Cited by 16 (2 self)
It is shown that stability of the celebrated MaxWeight or back-pressure policies is a consequence of the following interpretation: either policy is myopic with respect to a surrogate value function of a very special form, in which the "marginal disutility" at a buffer vanishes for vanishingly small buffer population. This observation motivates the h-MaxWeight policy, defined for a wide class of functions h. These policies share many of the attractive properties of the MaxWeight policy: (i) arrival rate data is not required in the policy; (ii) under a variety of general conditions, the policy is stabilizing when h is a perturbation of a monotone linear function, a monotone quadratic, or a monotone Lyapunov function for the fluid model; (iii) a perturbation of the relative value function for a workload relaxation gives rise to a myopic policy that is approximately average-cost optimal in heavy traffic, with logarithmic regret. The first results are obtained for a general Markovian network model. Asymptotic optimality is established for a general Markovian scheduling model with a single bottleneck and homogeneous servers.
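The surrogate-value-function viewpoint can be sketched directly. This is a minimal example assuming a perturbed monotone-linear h and made-up two-queue data, not the paper's general construction:

```python
import math

def h_maxweight_choice(q, actions, grad_h):
    """h-MaxWeight sketch: pick the action (a service vector; negative entries
    drain queues) giving the steepest myopic descent of h, i.e. maximizing
    -grad_h(q) . a."""
    return max(actions, key=lambda a: sum(-g * ai for g, ai in zip(grad_h(q), a)))

# A perturbed monotone-linear surrogate (illustrative):
#   h(q) = sum_i c_i * (q_i + theta * exp(-q_i / theta)),
# whose marginal disutility dh/dq_i = c_i * (1 - exp(-q_i / theta))
# vanishes as q_i -> 0, exactly the property the abstract highlights.
def h_grad(q, c=(1.0, 2.0), theta=1.0):
    return [ci * (1.0 - math.exp(-qi / theta)) for ci, qi in zip(c, q)]

# One server, two queues: each action drains one queue at its service rate.
actions = [(-1.0, 0.0), (0.0, -0.8)]
print(h_maxweight_choice((4.0, 3.0), actions, h_grad))  # drains queue 2
```

With h a plain quadratic the rule reduces to classical MaxWeight; the vanishing marginal disutility keeps the policy from starving a nearly empty buffer's downstream work.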
Queueing systems with many servers: null controllability in heavy traffic
, 2005
Abstract

Cited by 11 (5 self)
A queueing model has J ≥ 2 heterogeneous service stations, each consisting of many independent servers with identical capabilities. Customers of I ≥ 2 classes can be served at these stations at different rates that depend on both the class and the station. A system administrator dynamically controls scheduling and routing. We study this model in the Central Limit Theorem (or heavy traffic) regime proposed by Halfin and Whitt. We derive a diffusion model on R^I with a singular control term that describes the scaling limit of the queueing model. The singular term may be used to constrain the diffusion to lie in certain subsets of R^I at all times t > 0. We say that the diffusion is null-controllable if it can be constrained to X_−, the minimal closed subset of R^I containing all states of the pre-limit queueing model for which all queues are empty. We give sufficient conditions for null controllability of the diffusion. Under these conditions we also show that an analogous, asymptotic result holds for the queueing model, by constructing control policies under which, for any given 0 < ε < T < ∞, all queues in the system are kept empty on the time interval [ε, T] with probability approaching one. This introduces a new, unusual heavy traffic 'behavior': on one hand the system is critically loaded, in the sense that an increase in any of the external arrival rates at the 'fluid level' results in an overloaded system; on the other hand, as far as queue lengths are concerned, the system behaves as if it were underloaded.
Diffusion approximations for controlled stochastic networks: An asymptotic bound for the value function
 Ann. Appl. Probab
, 2006
Abstract

Cited by 10 (6 self)
We consider the scheduling control problem for a family of unitary networks under heavy traffic, with general interarrival and service times, probabilistic routing, and an infinite-horizon discounted linear holding cost. A natural non-anticipativity condition for admissibility of control policies is introduced. The condition is seen to hold for a broad class of problems. Using this formulation of admissible controls and a time-transformation technique, we establish that the infimum of the cost for the network control problem over all admissible sequencing control policies is asymptotically bounded below by the value function of an associated diffusion control problem (the Brownian control problem). This result provides a useful bound on the best achievable performance for any admissible control policy for a wide class of networks.
Singular Control with State Constraints on Unbounded Domain
 Annals of Probability
Abstract

Cited by 9 (3 self)
We study a class of stochastic control problems where a cost of the form E ∫_[0,∞) e^{−βs} [ℓ(X_s) ds + h(Y°_s) d|Y|_s] is to be minimized over control processes Y whose increments take values in a cone 𝒴 of R^p, keeping the state process X = x + B + GY in a cone 𝒳 of R^k, k ≤ p. Here, x ∈ 𝒳, B is a Brownian motion with drift b and covariance Σ, G is a fixed matrix, and Y° is the Radon–Nikodym derivative dY/d|Y|. Let L = −(1/2) trace(Σ D²) − b · D, where D denotes the gradient. Solutions to the corresponding dynamic programming PDE, [(L + β)f − ℓ] ∨ sup_{y ∈ 𝒴, |Gy| = 1} [−Gy · Df − h(y)] = 0 on 𝒳°, are considered with a polynomial growth condition and are required to be supersolutions up to the boundary (corresponding to a "state constraint" boundary condition on ∂𝒳). Under suitable conditions on the problem data, including continuity and nonnegativity of ℓ and h, and polynomial growth of ℓ, our main result is the unique viscosity-sense solvability of the PDE by the control problem's value function in appropriate classes of functions. In some cases where uniqueness generally fails to hold in the class of functions that grow at most polynomially (e.g., when h = 0), our methods provide uniqueness within the class of functions that, in addition, have compact level sets. The results are new even in the following special cases: (1) the one-dimensional case k = p = 1, 𝒳 = 𝒴 = R_+; (2) the first-order case Σ = 0; (3) the case where ℓ and h are linear. The proofs
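The cost functional and dynamic programming PDE in this abstract were garbled in extraction; a possible reconstruction in display form (the normalization |Gy| = 1 in the supremum is an inference from the garbled "Gy=1") is:

```latex
% Cost to be minimized over controls Y with increments in the cone \mathcal{Y},
% keeping X in the cone \mathcal{X}:
J(x,Y) \;=\; \mathbb{E}\int_{[0,\infty)} e^{-\beta s}
  \bigl[\,\ell(X_s)\,ds + h(\mathring{Y}_s)\,d|Y|_s\,\bigr],
\qquad X = x + B + GY,
% with the operator
\mathcal{L} \;=\; -\tfrac{1}{2}\operatorname{trace}(\Sigma D^{2}) - b \cdot D,
% and dynamic programming PDE on the interior \mathcal{X}^{\circ}:
\bigl[(\mathcal{L}+\beta)f - \ell\bigr] \;\vee\;
\sup_{y \in \mathcal{Y},\; |Gy| = 1} \bigl[-Gy \cdot Df - h(y)\bigr] \;=\; 0.
```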
Workload reduction of a generalized Brownian network
 Ann. Appl. Probab
, 2005
Abstract

Cited by 6 (3 self)
We consider a dynamic control problem associated with a generalized Brownian network, the objective being to minimize expected discounted cost over an infinite planning horizon. In this Brownian control problem (BCP), both the system manager's control and the associated cumulative cost process may be locally of unbounded variation. Due to this aspect of the cost process, both the precise statement of the problem and its analysis involve delicate technical issues. We show that the BCP is equivalent, in a certain sense, to a reduced Brownian control problem (RBCP) of lower dimension. The RBCP is a singular stochastic control problem, in which both the controls and the cumulative cost process are locally of bounded variation.