Results 1-10 of 140
Constrained model predictive control: Stability and optimality
Automatica, 2000
Abstract

Cited by 696 (15 self)
Model predictive control is a form of control in which the current control action is obtained by solving, at each sampling instant, a finite-horizon open-loop optimal control problem, using the current state of the plant as the initial state; the optimization yields an optimal control sequence, and the first control in this sequence is applied to the plant. An important advantage of this type of control is its ability to cope with hard constraints on controls and states. It has therefore been widely applied in petrochemical and related industries, where satisfaction of constraints is particularly important because efficiency demands operating points on or close to the boundary of the set of admissible states and controls. In this review, we focus on model predictive control of constrained systems, both linear and nonlinear, and discuss only briefly model predictive control of unconstrained nonlinear and/or time-varying systems. We concentrate our attention on research dealing with stability and optimality; in these areas the subject has developed, in our opinion, to a stage where it has achieved sufficient maturity to warrant the active interest of researchers in nonlinear control. We distill from an extensive literature the essential principles that ensure stability, and use these to present a concise characterization of most of the model predictive controllers that have been proposed in the literature. In some cases the finite-horizon optimal control problem solved online is exactly equivalent to the same problem with an infinite horizon; in other cases it is equivalent to a modified infinite-horizon optimal control problem. In both situations, the known advantages of infinite-horizon optimal control accrue.
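The receding-horizon principle this abstract describes (solve a finite-horizon open-loop problem from the current state, apply only the first control, repeat) can be sketched in a few lines. This is an illustrative toy, not the paper's formulation: the scalar plant, the discrete input set, and the costs below are all assumptions.

```python
import itertools

def receding_horizon_step(x, horizon, controls, f, stage_cost):
    """Solve the finite-horizon open-loop problem from state x by brute-force
    enumeration over input sequences, then return only the first control."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(controls, repeat=horizon):
        cost, xk = 0.0, x
        for u in seq:
            cost += stage_cost(xk, u)
            xk = f(xk, u)
        cost += xk * xk  # terminal penalty
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]

# Hypothetical scalar plant x+ = x + u with the input constrained to {-1, 0, 1}.
f = lambda x, u: x + u
stage = lambda x, u: x * x + 0.1 * u * u

x = 4.0
for _ in range(6):
    u = receding_horizon_step(x, 3, (-1.0, 0.0, 1.0), f, stage)
    x = f(x, u)
print(x)  # -> 0.0: the closed loop drives the state to the origin
```

Practical MPC replaces the enumeration with a QP or NLP solver, but the loop structure, re-solving at every sample and discarding all but the first input, is the same.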
A survey of industrial model predictive control technology
2003
Abstract

Cited by 436 (5 self)
This paper provides an overview of commercially available model predictive control (MPC) technology, both linear and nonlinear, based primarily on data provided by MPC vendors. A brief history of industrial MPC technology is presented first, followed by results of our vendor survey of MPC control and identification technology. A general MPC control algorithm is presented, and approaches taken by each vendor for the different aspects of the calculation are described. Identification technology is reviewed to determine similarities and differences between the various approaches. MPC applications performed by each vendor are summarized by application area. The final section presents a vision of the next generation of MPC technology, with an emphasis on potential business and research opportunities.
Model Predictive Control: Past, Present and Future
Computers and Chemical Engineering, 1997
Abstract

Cited by 210 (8 self)
More than 15 years after Model Predictive Control (MPC) appeared in industry as an effective means to deal with multivariable constrained control problems, a theoretical basis for this technique has started to emerge. The issues of feasibility of the online optimization, stability and performance are largely understood for systems described by linear models. Much progress has been made on these issues for nonlinear systems but for practical applications many questions remain, including the reliability and efficiency of the online computation scheme. To deal with model uncertainty "rigorously" an involved dynamic programming problem must be solved. The approximation techniques proposed for this purpose are largely at a conceptual stage. Among the broader research needs the following areas are identified: multivariable system identification, performance monitoring and diagnostics, nonlinear state estimation, and batch system control. Many practical problems like control objective prior...
Chromatographic Methods
In International, 1992
Abstract

Cited by 76 (1 self)
Still the doctor — by a country mile! Preferences for health services in two country towns in northwest New South Wales. The relative importance people place on particular healthcare services is a significant factor in meeting their healthcare needs and influencing their health behaviour.
Optimization over state feedback policies for robust control with constraints
2005
Abstract

Cited by 52 (5 self)
This paper is concerned with the optimal control of linear discrete-time systems that are subject to unknown but bounded state disturbances and mixed constraints on the state and input. It is shown that the class of admissible affine state-feedback control policies with memory of prior states is equivalent to the class of admissible feedback policies that are affine functions of the past disturbance sequence. This result implies that a broad class of constrained finite-horizon robust and optimal control problems, where the optimization is over affine state-feedback policies, can be solved in a computationally efficient fashion using convex optimization methods, without having to introduce any conservatism in the problem formulation. This equivalence result is used to design a robust receding horizon control (RHC) state-feedback policy such that the closed-loop system is input-to-state stable (ISS) and the constraints are satisfied for all time and for all allowable disturbance sequences. The cost chosen to be minimized in the associated finite-horizon optimal control problem is a quadratic function of the disturbance-free state and input sequences. It is shown that the value of the receding horizon control law can be calculated at each sample instant using a single, tractable and convex quadratic program (QP) if the disturbance set is polytopic or given by a 1-norm or ∞-norm bound, or a second-order cone program (SOCP) if the disturbance set is ellipsoidal or given by a 2-norm bound.
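The equivalence the abstract claims, between affine state-feedback policies and policies that are affine in past disturbances, can be checked numerically on a toy scalar system (the disturbance parameterization is what makes the optimization convex). This is an illustrative sketch, not the paper's construction; the plant x+ = x + u + w and all gains below are assumptions.

```python
import random

def disturbance_feedback_of(K, c, x0, N):
    """Convert a memoryless affine state-feedback policy u_k = K*x_k + c[k]
    into the equivalent affine disturbance-feedback policy
    u_k = v[k] + sum_j M[k][j]*w_j for the scalar toy plant x+ = x + u + w."""
    a, b = x0, []          # running expansion x_k = a + sum_j b[j]*w_j
    v, M = [], []
    for k in range(N):
        v.append(K * a + c[k])
        M.append([K * bj for bj in b])        # strictly lower-triangular gains
        a = (1 + K) * a + c[k]                # propagate the affine part
        b = [(1 + K) * bj for bj in b] + [1.0]
    return v, M

# Check numerically that the two parameterizations generate identical inputs.
random.seed(0)
K, N, x0 = -0.5, 5, 2.0
c = [0.1 * k for k in range(N)]
v, M = disturbance_feedback_of(K, c, x0, N)

w = [random.uniform(-1, 1) for _ in range(N)]
x, max_err = x0, 0.0
for k in range(N):
    u_sf = K * x + c[k]                                  # state feedback
    u_df = v[k] + sum(M[k][j] * w[j] for j in range(k))  # disturbance feedback
    max_err = max(max_err, abs(u_sf - u_df))
    x = x + u_sf + w[k]
print(max_err)  # agrees up to floating-point rounding
```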
State and output feedback nonlinear model predictive control: An overview
European Journal of Control, 2003
Abstract

Cited by 35 (2 self)
The purpose of this paper is twofold. In the first part we give a review of the current state of nonlinear model predictive control (NMPC). After a brief presentation of the basic principle of predictive control, we outline some of the theoretical, computational, and implementation aspects of this control strategy. Most of the theoretical developments in the area of NMPC are based on the assumption that the full state is available for measurement, an assumption that does not hold in the typical practical case. Thus, in the second part of this paper we focus on the output feedback problem in NMPC. After a brief overview of existing output feedback NMPC approaches, we derive conditions that guarantee stability of the closed loop if an NMPC state-feedback controller is used together with a full state observer for the recovery of the system state.
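The certainty-equivalence structure discussed in the second part (an observer supplies the estimate that a state-feedback controller acts on) can be sketched on a scalar linear plant. Here a plain linear gain stands in for the NMPC law, and all plant and gain values are illustrative assumptions, not from the paper.

```python
def simulate_output_feedback(a, b, c, K, L, x0, xhat0, steps):
    """Certainty-equivalence output feedback for the scalar plant
    x+ = a*x + b*u, y = c*x: a Luenberger observer supplies the state
    estimate that the state-feedback law acts on."""
    x, xhat = x0, xhat0
    for _ in range(steps):
        u = K * xhat                                   # controller sees only the estimate
        y = c * x                                      # measurement
        xhat = a * xhat + b * u + L * (y - c * xhat)   # observer update
        x = a * x + b * u                              # true plant
    return x, x - xhat

x, err = simulate_output_feedback(a=1.2, b=1.0, c=1.0, K=-0.9, L=0.8,
                                  x0=3.0, xhat0=0.0, steps=40)
print(abs(x), abs(err))  # both decay: a + b*K = 0.3 and a - L*c = 0.4 are stable
```

For linear plants this works by the separation principle; the point of the paper's second part is that for NMPC no such separation holds, so extra conditions are needed.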
Decentralized robust receding horizon control for multivehicle guidance
Abstract

Cited by 25 (8 self)
This paper presents a decentralized robust model predictive control algorithm for multi-vehicle trajectory optimization. The algorithm is an extension of a previous robust safe but knowledgeable (RSBK) algorithm that uses the constraint-tightening technique to achieve robustness, an invariant set to ensure safety, and a cost-to-go function to generate an intelligent trajectory around obstacles in the environment. Although the RSBK algorithm was shown to solve faster than previous robust MPC algorithms, the approach was based on a centralized calculation that is impractical for a large group of vehicles. This paper decentralizes the algorithm by ensuring that each vehicle always has a feasible solution under the action of disturbances. The key advantage of this algorithm is that it requires only local knowledge of the environment and the other vehicles while guaranteeing robust feasibility of the entire fleet. The new approach also facilitates a significantly more general implementation architecture for the decentralized trajectory optimization, which further decreases the delay due to computation time.
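The constraint-tightening idea that RSBK builds on can be sketched as follows: under a candidate feedback whose closed-loop gain is phi, the nominal constraint at each horizon step is shrunk by the worst-case accumulated disturbance effect, so that any disturbance realization leaves the true trajectory feasible. A minimal scalar illustration; the numbers are assumptions, not from the paper.

```python
def tightened_bounds(x_max, w_max, phi, N):
    """Constraint tightening: the nominal state bound at horizon step k is
    shrunk by the worst-case disturbance effect accumulated under a candidate
    feedback with closed-loop gain phi (assumed |phi| < 1)."""
    bounds, margin = [], 0.0
    for k in range(N):
        bounds.append(x_max - margin)
        margin += abs(phi) ** k * w_max   # one more step of disturbance propagation
    return bounds

# Hypothetical numbers: |x| <= 5 nominally, |w| <= 0.5, closed-loop gain 0.8.
bounds = tightened_bounds(5.0, 0.5, 0.8, 4)
print(bounds)  # bounds shrink monotonically along the horizon
```

Because |phi| < 1, the margins converge geometrically, so the tightening never exhausts the constraint set for a sufficiently small disturbance bound.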
Robust Dynamic Programming for Min-Max Model Predictive Control of Constrained Uncertain Systems
IEEE Transactions on Automatic Control
Abstract

Cited by 25 (2 self)
We address min-max model predictive control (MPC) for uncertain discrete-time systems by a robust dynamic programming approach, and develop an algorithm that is suitable for linearly constrained polytopic systems with piecewise affine cost functions. The method uses polyhedral representations of the cost-to-go functions and feasible sets, and performs multi-parametric programming by a duality-based approach in each recursion step. We show how to apply the method to robust MPC, and give conditions guaranteeing closed-loop stability. Finally, we apply the method to a tutorial example, a parking car with uncertain mass.
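The backward min-max recursion underlying this approach can be illustrated on a finite grid, with brute-force minimization over inputs and maximization over disturbances standing in for the paper's polyhedral machinery. A toy sketch under assumed dynamics, not the proposed algorithm:

```python
def robust_dp(states, controls, disturbances, f, stage, terminal, N):
    """Min-max dynamic programming on a finite grid: each backward step takes
    the min over u of the max over w of stage cost plus the successor
    cost-to-go (successors are assumed to stay on the grid)."""
    J = {x: terminal(x) for x in states}
    for _ in range(N):
        J = {x: min(max(stage(x, u) + J[f(x, u, w)] for w in disturbances)
                    for u in controls)
             for x in states}
    return J

# Toy integer-state example: x+ = x + u + w on {-3..3}, clipped to the grid.
states = range(-3, 4)
clip = lambda z: max(-3, min(3, z))
f = lambda x, u, w: clip(x + u + w)
J = robust_dp(states, controls=(-1, 0, 1), disturbances=(-1, 1),
              f=f, stage=lambda x, u: abs(x), terminal=abs, N=3)
print(J[0], J[3])  # -> 3 12
```

The grid enumeration scales exponentially with the state dimension; the paper's polyhedral representation of the piecewise affine cost-to-go is precisely what avoids gridding.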
Feedback min-max model predictive control using a single linear program: Robust stability and the explicit solution
2004
Reinforcement learning versus model predictive control: a comparison on a power system problem
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2009
Abstract

Cited by 17 (9 self)
This paper compares reinforcement learning (RL) with model predictive control (MPC) in a unified framework and reports experimental results of their application to the synthesis of a controller for a nonlinear and deterministic electrical power oscillations damping problem. Both families of methods are based on the formulation of the control problem as a discrete-time optimal control problem. The considered MPC approach exploits an analytical model of the system dynamics and cost function and computes open-loop policies by applying an interior-point solver to a minimization problem in which the system dynamics are represented by equality constraints. The considered RL approach infers, in a model-free way, closed-loop policies from a set of system trajectories and instantaneous cost values by solving a sequence of batch-mode supervised learning problems. The results obtained provide insight into the pros and cons of the two approaches and show that RL may certainly be competitive with MPC even in contexts where a good deterministic system model is available.
Index Terms: Approximate dynamic programming (ADP), electric power oscillations damping, fitted Q iteration, interior-point method (IPM), model predictive control (MPC), reinforcement learning (RL), tree-based supervised learning (SL).
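The batch-mode RL scheme the abstract refers to, fitted Q iteration, can be sketched on a tiny finite problem. A lookup table stands in for the tree-based regressors used in the paper, and the toy chain problem below is an assumption for illustration only.

```python
def fitted_q_iteration(transitions, actions, gamma, n_iter):
    """Batch-mode, model-free Q iteration from a fixed set of one-step
    transitions (s, a, r, s'). Each iteration fits Q to the one-step
    bootstrapped targets; on a finite state space an exact table replaces
    the supervised learner."""
    Q = {}
    for _ in range(n_iter):
        newQ = {}
        for s, a, r, s2 in transitions:
            target = r + gamma * max(Q.get((s2, b), 0.0) for b in actions)
            newQ[(s, a)] = target   # exact "regression" on a finite state space
        Q = newQ
    return Q

# Toy chain: states 0..3, action -1/+1 moves along the chain, reward 1 for
# reaching (or staying at) state 3.
actions = (-1, 1)
transitions = [(s, a, 1.0 if min(3, max(0, s + a)) == 3 else 0.0,
                min(3, max(0, s + a)))
               for s in range(4) for a in actions]
Q = fitted_q_iteration(transitions, actions, gamma=0.9, n_iter=50)
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(3)}
print(policy)  # -> {0: 1, 1: 1, 2: 1}: the greedy policy moves toward the reward
```

With continuous states, as in the power-system problem, the table is replaced by a regressor (the paper uses tree-based learners) fitted to the same targets at each iteration.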