Results 1–10 of 17
Sensor Selection via Convex Optimization
IEEE Transactions on Signal Processing, 2009
Abstract

Cited by 96 (2 self)
We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the (m choose k) possible choices of sensor measurements is not practical unless m and k are small. In this paper we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of k sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m^3 operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2 GHz personal computer.
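The selection problem above can be made concrete with a small sketch. This is not the paper's convex-relaxation heuristic; it is a plain greedy baseline, assuming a linear measurement model in which sensor i contributes a row a_i and the goal is to maximize the log-determinant of the resulting information matrix (all data invented for illustration):

```python
import numpy as np

def greedy_sensor_selection(A, k, eps=1e-6):
    """Greedily pick k of the m sensors (rows of A) so as to maximize
    log det of the information matrix sum_i a_i a_i^T over the chosen set.
    A simple baseline; the paper's convex heuristic also yields a bound."""
    m, n = A.shape
    chosen = []
    info = eps * np.eye(n)          # regularize so log det is always defined
    for _ in range(k):
        best_i, best_val = None, -np.inf
        for i in range(m):
            if i in chosen:
                continue
            sign, logdet = np.linalg.slogdet(info + np.outer(A[i], A[i]))
            if sign > 0 and logdet > best_val:
                best_i, best_val = i, logdet
        chosen.append(best_i)
        info += np.outer(A[best_i], A[best_i])
    return chosen

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))    # m = 50 candidate sensors, n = 3 parameters
sel = greedy_sensor_selection(A, k=5)
print(sorted(sel))                  # indices of the 5 selected sensors
```

Greedy selection costs O(k·m) determinant evaluations here; the paper's heuristic instead solves one convex relaxation and additionally produces a performance bound, which greedy cannot.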
Design of affine controllers via convex optimization, 2008
Abstract

Cited by 14 (1 self)
We consider a discrete-time time-varying linear dynamical system, perturbed by process noise, with linear noise-corrupted measurements, over a finite horizon. We address the problem of designing a general affine causal controller, in which the control input is an affine function of all previous measurements, in order to minimize a convex objective, in either a stochastic or worst-case setting. This controller design problem is not convex in its natural form, but can be transformed into an equivalent convex optimization problem by a nonlinear change of variables, which allows us to solve the problem efficiently. Our method is related to the classical design procedure for time-invariant, infinite-horizon linear controller design, and to the more recent purified output control method. We illustrate the method with applications to supply chain optimization and dynamic portfolio optimization, and show that the method can be combined with model predictive control techniques when perfect state information is available. Index Terms—Affine controller, dynamical system, dynamic linear programming (DLP), linear exponential quadratic Gaussian (LEQG), linear quadratic Gaussian (LQG), model predictive control (MPC), proportional-integral-derivative (PID).
Approximate dynamic programming via iterated Bellman inequalities, 2010
Abstract

Cited by 11 (4 self)
In this paper we introduce new methods for finding functions that lower bound the value function of a stochastic control problem, using an iterated form of the Bellman inequality. Our method is based on solving linear or semidefinite programs, and produces both a bound on the optimal objective and a suboptimal policy that appears to work very well. These results extend and improve bounds obtained by the authors in a previous paper using a single Bellman inequality condition. We describe the methods in a general setting, and show how they can be applied in specific cases including the finite-state case, constrained linear quadratic control, switched affine control, and multi-period portfolio investment.
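For the finite-state case mentioned at the end, the single-Bellman-inequality bound that this paper extends can be sketched as a linear program: any V satisfying V(s) ≤ min over a of [c(s,a) + γ·E V(s')] pointwise underestimates the optimal value, so maximizing the sum of V(s) subject to those linear inequalities yields a lower bound, and with no basis restriction it is exact. A minimal sketch on an invented toy MDP:

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-state, 2-action discounted MDP (data invented for illustration).
n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_s), size=(n_a, n_s))   # P[a, s, :] = next-state dist.
cost = rng.uniform(0.0, 1.0, size=(n_s, n_a))

# Bellman inequality V <= T V, one linear constraint per state-action pair:
#   V(s) - gamma * sum_t P[a, s, t] V(t) <= cost(s, a).
A_ub, b_ub = [], []
for s in range(n_s):
    for a in range(n_a):
        row = -gamma * P[a, s]
        row[s] += 1.0
        A_ub.append(row)
        b_ub.append(cost[s, a])

# Maximize sum_s V(s) over the feasible V (linprog minimizes, so negate).
res = linprog(c=-np.ones(n_s), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_s)
V_lp = res.x

# Cross-check with plain value iteration; with a complete basis the LP is exact.
V = np.zeros(n_s)
for _ in range(2000):
    V = (cost + gamma * np.einsum('ast,t->sa', P, V)).min(axis=1)
print(np.round(V_lp, 4), np.round(V, 4))
```

The paper's contribution is what happens when the value function is restricted to a low-dimensional basis: the LP then gives only a lower bound, and the iterated inequality conditions tighten it.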
A Tractable Method for Robust Downlink Beamforming in Wireless Communications
Abstract

Cited by 10 (0 self)
In downlink beamforming in a multiple-input multiple-output (MIMO) wireless communication system, we design beamformers that minimize the power subject to guaranteeing given signal-to-interference-plus-noise ratio (SINR) threshold levels for the users, assuming that the channel responses between the base station and the users are known exactly. In robust downlink beamforming, we take into account uncertainties in the channel vectors by designing beamformers that minimize the power subject to guaranteeing the given SINR threshold levels over the given set of possible channel vectors. When the uncertainties in the channel vectors are described by complex uncertainty ellipsoids, we show that the associated worst-case robust beamforming problem can be solved efficiently using an iterative method. The method uses an alternating sequence of optimization and worst-case analysis steps, where at each step we solve a convex optimization problem using efficient interior-point methods. Typically, the method provides a fairly robust beamformer design within 5–10 iterations. The robust downlink beamforming method is demonstrated with a numerical example.
Robust Linear Optimization With Recourse, 2010
Abstract

Cited by 8 (1 self)
We propose an approach to two-stage linear optimization with recourse that does not involve a probabilistic description of the uncertainty and allows the decision-maker to adjust the degree of conservativeness of the model, while preserving its linear properties. We model uncertain parameters as belonging to a polyhedral uncertainty set and minimize the sum of the first-stage costs and the worst-case second-stage costs over that set, i.e., we take a robust optimization approach. The decision-maker's conservatism is taken into account through a budget of uncertainty, which determines the size of the uncertainty set around the nominal values of the uncertain parameters. We establish that the robust problem is a linear programming problem with a potentially very large number of constraints, and describe how a cutting-plane algorithm can be used in the two-stage setting. We test the robust modeling approach on two example problems, one with simple and one with general recourse, to illustrate the structure and performance of robust policies as well as to evaluate the performance of the cutting-plane algorithm.
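The cutting-plane idea for robust constraints can be sketched on a single-stage toy problem (the paper treats the harder two-stage case): alternately solve a master LP over the cuts collected so far, find a worst-case realization of the uncertain row under a budget of uncertainty, and add it as a new cut if violated. All data below are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Invented single-stage toy: maximize c^T x over x in [0, 1]^n subject to the
# robust constraint  a^T x <= b  for every  a_j = a0_j + d_j z_j  with
# |z_j| <= 1 and sum_j |z_j| <= Gamma  (a budget of uncertainty).
rng = np.random.default_rng(2)
n, Gamma, b = 6, 2, 3.0
c = rng.uniform(1.0, 2.0, n)
a0 = rng.uniform(0.5, 1.5, n)
d = rng.uniform(0.1, 0.5, n)

def worst_case_a(x):
    """For integer Gamma and x >= 0, the worst case spends the whole budget
    on the Gamma coordinates with the largest deviation impact d_j * x_j."""
    idx = np.argsort(d * x)[-Gamma:]
    a = a0.copy()
    a[idx] += d[idx]
    return a

cuts_A, cuts_b = [a0.copy()], [b]        # start from the nominal constraint
for _ in range(50):
    res = linprog(-c, A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                  bounds=[(0, 1)] * n)
    x = res.x
    a = worst_case_a(x)
    if a @ x <= b + 1e-8:                # robustly feasible: stop
        break
    cuts_A.append(a)                     # add the violated cut, re-solve
    cuts_b.append(b)

print(np.round(x, 4), "robust objective:", round(float(c @ x), 4))
```

Since each cut comes from a finite family of worst-case rows, the loop terminates; in the paper's two-stage setting the separation step is itself an optimization problem rather than this closed-form sort.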
Discovering Support and Affiliated Features from Very High Dimensions
Abstract

Cited by 3 (0 self)
In this paper, a novel learning paradigm is presented to automatically identify groups of informative and correlated features from very high dimensions. Specifically, we explicitly incorporate correlation measures as constraints and then propose an efficient embedded feature selection method using a recently developed cutting-plane strategy. The benefits of the proposed algorithm are twofold. First, it can identify the optimal discriminative and uncorrelated feature subset with respect to the output labels, denoted here as Support Features, which brings about significant improvements in prediction performance over the other state-of-the-art feature selection methods considered in the paper. Second, during the learning process, the underlying group structures of correlated features associated with each support feature, denoted as Affiliated Features, can also be discovered without any additional cost. These affiliated features serve to improve the interpretability of the learning tasks. Extensive empirical studies on both synthetic and very high-dimensional real-world datasets verify the validity and efficiency of the proposed method. To address this issue, a plethora of feature selection methods have been developed in recent decades; in general, these methods have been categorized under three core themes.
Optimizing Performance Measures for Feature Selection
Abstract

Cited by 2 (2 self)
Feature selection with specific multivariate performance measures is key to the success of many applications, such as information retrieval and bioinformatics. Existing feature selection methods are usually designed for classification error. In this paper, we present a unified feature selection framework for general loss functions. In particular, we study the novel feature selection paradigm of optimizing multivariate performance measures. The resulting formulation is a challenging problem for high-dimensional data. Hence, a two-layer cutting-plane algorithm is proposed to solve this problem, and its convergence is presented. Extensive experiments on large-scale and high-dimensional real-world datasets show that the proposed method outperforms l1-SVM and SVM-RFE when choosing a small subset of features, and achieves significantly improved performance over SVMperf in terms of F1-score. Keywords—feature selection; multivariate performance measure; multiple kernel learning; structural SVMs.
A Polyhedral Approximation Framework for Convex and Robust Distributed Optimization, 2013
Abstract

Cited by 2 (0 self)
In this paper we consider a general problem setup for a wide class of convex and robust distributed optimization problems in peer-to-peer networks. In this setup, convex constraint sets are distributed to the network processors, which have to compute the optimizer of a linear cost function subject to the constraints. We propose a novel fully distributed algorithm, named cutting-plane consensus, to solve the problem, based on an outer polyhedral approximation of the constraint sets. Processors running the algorithm compute and exchange linear approximations of their locally feasible sets. Independently of the number of processors in the network, each processor stores only a small number of linear constraints, making the algorithm scalable to large networks. The cutting-plane consensus algorithm is presented and analyzed for the general framework. Specifically, we prove that all processors running the algorithm agree on an optimizer of the global problem, and that the algorithm is tolerant to node and link failures as long as network connectivity is preserved. The cutting-plane consensus algorithm is then specialized to three different classes of distributed optimization problems, namely (i) inequality-constrained problems, (ii) robust optimization problems, and (iii) almost-separable optimization problems with separable objective functions and coupling constraints. For each of these problem classes we solve a concrete problem that can be expressed in that framework and present computational results: position estimation in wireless sensor networks, a distributed robust linear program, and a distributed microgrid control problem.
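A toy simulation of the cut exchange described above (not the paper's algorithm in full: real processors keep only a small basis of active cuts and run asynchronously; here three simulated nodes on a complete graph simply pool their neighbors' stored cuts each round). All data are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Invented toy instance: three processors, each holding a private block of
# linear constraints, all maximizing the same c^T x over the intersection of
# every block, with a box keeping each subproblem bounded.
rng = np.random.default_rng(3)
n, n_nodes, box = 4, 3, 10.0
c = rng.uniform(0.5, 1.5, n)
local_A = [rng.uniform(0.0, 1.0, (5, n)) for _ in range(n_nodes)]
local_b = [rng.uniform(2.0, 3.0, 5) for _ in range(n_nodes)]
neighbors = [(1, 2), (0, 2), (0, 1)]          # complete graph on three nodes

def solve_over(cuts):
    A = np.array([a for a, _ in cuts])
    b = np.array([bb for _, bb in cuts])
    return linprog(-c, A_ub=A, b_ub=b, bounds=[(0, box)] * n).x

# Each node stores only cuts drawn from its own block; per round it solves
# over its own cuts plus its neighbors' stored cuts, then adds its most
# violated local constraint (if any) as a new cut.
store = [[(np.zeros(n), 0.0)] for _ in range(n_nodes)]   # harmless 0 <= 0 cut
x = [np.zeros(n)] * n_nodes
for _ in range(40):
    added = False
    for i in range(n_nodes):
        pool = store[i] + [cut for j in neighbors[i] for cut in store[j]]
        x[i] = solve_over(pool)
        viol = local_A[i] @ x[i] - local_b[i]
        k = int(np.argmax(viol))
        if viol[k] > 1e-8:
            store[i].append((local_A[i][k].copy(), float(local_b[i][k])))
            added = True
    if not added:                              # every node locally feasible
        break

x_star = solve_over(list(zip(np.vstack(local_A), np.concatenate(local_b))))
print([round(float(c @ xi), 4) for xi in x], round(float(c @ x_star), 4))
```

Because each node optimizes over a subset of all constraints, its objective value is always an upper bound on the centralized optimum; the paper proves that, with its basis-exchange rule, all nodes in fact agree on the centralized optimizer.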
Robust Relay Precoder Design for MIMO-Relay Networks
Abstract

Cited by 1 (0 self)
In this paper, we consider a robust design of the MIMO-relay precoder and receive filters for the destination nodes in a non-regenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a single MIMO-relay node. The source and destination nodes are single-antenna nodes, whereas the MIMO-relay node has multiple transmit and multiple receive antennas. The channel state information (CSI) available at the MIMO-relay node for precoding purposes is assumed to be imperfect. We assume that the norms of the errors in the CSI are upper-bounded, and that the MIMO-relay node knows these bounds. We consider the robust design of the MIMO-relay precoder and receive filter based on the minimization of the total MIMO-relay transmit power with constraints on the mean square error (MSE) at the destination nodes. We show that this design problem can be solved by solving an alternating sequence of minimization and worst-case analysis problems. The minimization problem is formulated as a convex optimization problem that can be solved efficiently using interior-point methods. The worst-case analysis problem can be solved analytically using an approximation for the MSEs at the destination nodes. We demonstrate the robust performance of the proposed design through simulations.
Distributed Robust Optimization via Cutting-Plane Consensus
Abstract

Cited by 1 (0 self)
This paper addresses the problem of robust optimization in large-scale networks of identical processors. General convex optimization problems are considered, where uncertain constraints are distributed to the processors in the network. The processors have to compute a maximizer of a linear objective over the robustly feasible set, defined as the intersection of all locally known feasible sets. We propose a novel asynchronous algorithm, based on outer approximations of the robustly feasible set, to solve such problems. Each processor stores a small set of linear constraints that form an outer approximation of the robustly feasible set. Based on its locally available information and the data exchange with neighboring processors, each processor repeatedly updates its local approximation. A computational study for robust linear programming illustrates that the completion time of the algorithm depends primarily on the diameter of the communication graph and is independent of the network size.