Results 1 – 6 of 6
Stability of polynomial differential equations: Complexity and converse Lyapunov questions
 CoRR
Abstract

Cited by 2 (0 self)
Stability analysis of polynomial differential equations is a central topic in nonlinear dynamics and control which in recent years has undergone major algorithmic developments due to advances in optimization theory. Notably, the last decade has seen widespread interest in the use of sum of squares (sos) based semidefinite programs that can automatically find polynomial Lyapunov functions and produce explicit certificates of stability. However, despite their popularity, the converse question of whether such algebraic, efficiently constructable certificates of stability always exist has remained elusive. Naturally, an algorithmic question of this nature is closely intertwined with the fundamental computational complexity of proving stability. In this paper, we make a number of contributions to the questions of (i) complexity of deciding stability, (ii) existence of polynomial Lyapunov functions, and (iii) existence of sos Lyapunov functions. (i) We show that deciding local or global asymptotic stability of cubic vector fields is strongly NP-hard. Simple variations of our proof are shown to imply strong NP-hardness of several other decision problems: testing local attractivity of an equilibrium point, stability of an equilibrium point in the sense of Lyapunov, invariance of the unit ball, boundedness of trajectories, conver…
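The sos programs described in this abstract search over general polynomial Lyapunov functions. As a hedged, minimal baseline (not the paper's method), the linear special case x' = Ax admits a quadratic V(x) = xᵀPx obtained by solving the Lyapunov equation AᵀP + PA = −Q with plain linear algebra; the matrices below are illustrative choices:

```python
import numpy as np

def lyapunov_quadratic(A, Q):
    """Solve A^T P + P A = -Q for P by vectorization (Kronecker products)."""
    n = A.shape[0]
    # vec(A^T P + P A) = (kron(I, A^T) + kron(A^T, I)) vec(P), column-major vec
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape(n, n, order="F")
    return 0.5 * (P + P.T)  # symmetrize against round-off

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # Hurwitz: eigenvalues -1, -2
P = lyapunov_quadratic(A, np.eye(2))

assert np.allclose(A.T @ P + P @ A, -np.eye(2))  # Lyapunov equation holds
assert np.all(np.linalg.eigvalsh(P) > 0)         # P is positive definite
```

The sos machinery generalizes exactly this search: it replaces the fixed quadratic ansatz with a polynomial one and the linear solve with a semidefinite program.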
Control and verification of high-dimensional systems via DSOS and SDSOS optimization
 In Proceedings of the 53rd IEEE Conference on Decision and Control
, 2014
Abstract

Cited by 1 (1 self)
Abstract — In this paper, we consider linear programming (LP) and second-order cone programming (SOCP) based alternatives to sum of squares (SOS) programming and apply this framework to high-dimensional problems arising in control applications. Despite the wide acceptance of SOS programming in the control and optimization communities, scalability has been a key challenge due to its reliance on semidefinite programming (SDP) as its main computational engine. While SDPs have many appealing features, current SDP solvers do not approach the scalability or numerical maturity of LP and SOCP solvers. Our approach is based on the recent work of Ahmadi and Majumdar [1], which replaces the positive semidefiniteness constraint inherent in the SOS approach with stronger conditions based on diagonal dominance and scaled diagonal dominance. This leads to the DSOS and SDSOS cones of polynomials, which can be optimized over using LP and SOCP respectively. We demonstrate this approach on four high-dimensional control problems that are currently well beyond the reach of SOS programming: computing a region of attraction for a 22-dimensional system, analysis of a 50-node network of oscillators, searching for degree-3 controllers and degree-8 Lyapunov functions for an Acrobot system (with the resulting controller validated on a hardware platform), and a balancing controller for a 30-state, 14-control-input model of the ATLAS humanoid robot. While there is additional conservatism introduced by our approach, extensive numerical experiments on smaller instances of our problems demonstrate that this conservatism can be small compared to SOS programming.
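A hedged sketch of the certificate behind the DSOS relaxation mentioned above: a symmetric matrix with nonnegative diagonal that is (row) diagonally dominant is positive semidefinite by Gershgorin's theorem, and dominance is a set of linear inequalities in the entries, so it can be checked (and imposed) with an LP rather than an SDP. The matrix below is an illustrative example, not from the paper:

```python
import numpy as np

def is_diagonally_dominant(M, tol=1e-12):
    """Check M[i,i] >= sum_{j != i} |M[i,j]| for every row i."""
    M = np.asarray(M, dtype=float)
    off_diag = np.sum(np.abs(M), axis=1) - np.abs(np.diag(M))
    return bool(np.all(np.diag(M) >= off_diag - tol))

M = np.array([[ 4.0, 1.0, -1.0],
              [ 1.0, 3.0,  1.0],
              [-1.0, 1.0,  5.0]])

assert is_diagonally_dominant(M)
assert np.all(np.linalg.eigvalsh(M) >= 0)  # dominance implies PSD here
```

The converse fails (many PSD matrices are not diagonally dominant), which is the source of the conservatism the abstract reports.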
Review on Computational Methods for Lyapunov Functions
Abstract
Abstract. Lyapunov functions are an essential tool in the stability analysis of dynamical systems, both in theory and applications. They provide sufficient conditions for the stability of equilibria or more general invariant sets, as well as for their basin of attraction. The necessity, i.e. the existence of Lyapunov functions, has been studied in converse theorems; however, these theorems do not provide a general method to compute them. Because of their importance in stability analysis, numerous computational construction methods have been developed within the engineering, informatics, and mathematics communities. They cover different types of systems such as ordinary differential equations, switched systems, nonsmooth systems, discrete-time systems, etc., and employ different methods such as series expansion, linear programming, linear matrix inequalities, collocation methods, algebraic methods, set-theoretic methods, and many others. This review brings these different methods together. First, the different types of systems where Lyapunov functions are used are briefly discussed. In the main part, the computational methods are presented, ordered by the type of method used to construct a Lyapunov function.
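A hedged illustration of the elementary check that the review's construction methods refine: given a candidate V, sample V̇ = ∇V · f on a grid and verify it is negative away from the equilibrium. The toy vector field and candidate below are chosen for this sketch, not taken from the review:

```python
import numpy as np

def f(x1, x2):
    """A globally asymptotically stable toy vector field."""
    return -x1 + x2, -x1 - x2

def vdot(x1, x2):
    """Derivative of V(x) = x1^2 + x2^2 along f: 2*x1*f1 + 2*x2*f2."""
    f1, f2 = f(x1, x2)
    return 2 * x1 * f1 + 2 * x2 * f2

g = np.linspace(-2.0, 2.0, 41)
X1, X2 = np.meshgrid(g, g)
D = vdot(X1, X2)
mask = (X1**2 + X2**2) > 1e-9   # exclude the equilibrium itself
assert np.all(D[mask] < 0)      # V decreases along every sampled state
```

A grid check of course proves nothing off the grid; the LP, LMI, and collocation methods surveyed in the review turn exactly this condition into certified constraints.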
A Numerical Algebraic Geometry Approach To Regional Stability Analysis of Polynomial Systems
Abstract
Abstract — We explore region of attraction (ROA) estimation for polynomial systems via the numerical solution of polynomial equations. Computing an optimal, stable sublevel set of a Lyapunov function is first posed as a polynomial optimization problem. Solutions to this optimization problem are found by solving a polynomial system of equations using techniques from numerical algebraic geometry. This system describes KKT points and singular points not satisfying a regularity condition. Though this system has exponentially many solutions, the proposed method trivially parallelizes and is practical for problems of moderate dimension and degree. In suitably generic settings, the method solves the underlying optimization problem to arbitrary precision, which could make it a useful tool for studying popular semidefinite programming based relaxations used in ROA analysis.
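A hedged one-dimensional analogue of the idea in this abstract: the minimizers of a polynomial p over a sublevel set {V ≤ ρ} lie among the real roots of polynomial equations (interior stationary points where p′ = 0, plus boundary points where V = ρ), so the optimization reduces to root finding. The polynomials below are illustrative examples, not from the paper:

```python
import numpy as np

p   = np.poly1d([1.0, 0.0, -4.0, 1.0])   # p(x) = x^3 - 4x + 1
V   = np.poly1d([1.0, 0.0, 0.0])         # V(x) = x^2
rho = 4.0                                # feasible set: [-2, 2]

candidates = []
for r in p.deriv().roots:                # interior KKT points: p'(x) = 0
    if abs(r.imag) < 1e-9 and V(r.real) <= rho:
        candidates.append(r.real)
for r in (V - rho).roots:                # boundary points: V(x) = rho
    if abs(r.imag) < 1e-9:
        candidates.append(r.real)

best = min(candidates, key=p)            # global minimizer over [-2, 2]
assert abs(best - (4.0 / 3.0) ** 0.5) < 1e-6
```

In several variables the analogous KKT system is a coupled polynomial system, which is where the numerical algebraic geometry (homotopy continuation) machinery of the paper comes in.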
Revisiting the Complexity of Stability of Continuous and Hybrid Systems
, 2014
Abstract
We develop a general framework for obtaining upper bounds on the “practical” computational complexity of stability problems, for a wide range of nonlinear continuous and hybrid systems. To do so, we describe stability properties of dynamical systems in first-order theories over the real numbers, and reduce stability problems to the δ-decision problems of their descriptions. The framework allows us to give a precise characterization of the complexity of different notions of stability for nonlinear continuous and hybrid systems. We prove that bounded versions of the δ-stability problems are generally decidable, and give upper bounds on their complexity. The unbounded versions are generally undecidable, for which we measure their degrees of unsolvability.
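A hedged, stdlib-only sketch of the δ-weakened semantics this abstract builds on: decide "∃x ∈ [a, b] with f(x) = 0" up to a tolerance δ by naive interval evaluation and bisection, returning either a certain "unsat" or a "delta-sat" verdict (f comes within δ of zero on some tiny box). The function and bounds are examples, not from the paper:

```python
def delta_sat(f_interval, a, b, delta, min_width=1e-6):
    """f_interval maps a box (lo, hi) to an enclosure of f's range on it."""
    stack = [(a, b)]
    while stack:
        lo, hi = stack.pop()
        flo, fhi = f_interval(lo, hi)
        if flo > delta or fhi < -delta:
            continue                    # f provably misses [-delta, delta]
        if hi - lo < min_width:
            return "delta-sat"          # small box where |f| may be <= delta
        mid = 0.5 * (lo + hi)
        stack += [(lo, mid), (mid, hi)]
    return "unsat"

def f_enc(lo, hi):
    """Crude range enclosure for f(x) = x^2 - 2 on [lo, hi]."""
    vals = [lo * lo, hi * hi]
    low = 0.0 if lo <= 0.0 <= hi else min(vals)
    return low - 2.0, max(vals) - 2.0

assert delta_sat(f_enc, 0.0, 3.0, 1e-3) == "delta-sat"  # sqrt(2) in range
assert delta_sat(f_enc, 2.0, 3.0, 1e-3) == "unsat"      # x^2 - 2 >= 2 there
```

The "unsat" answer is a proof; "delta-sat" may be spurious within δ — exactly the one-sided weakening that makes the bounded problems decidable.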