Results 1–10 of 482
A tutorial on support vector regression
, 2004
Abstract (Cited by 865, 3 self):
In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from an SV perspective.
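As an illustrative aside (not drawn from the tutorial itself), the epsilon-insensitive loss at the heart of SV regression charges nothing for residuals inside an epsilon-tube and penalizes larger deviations linearly. The function name and the sample data below are invented for the example:

```python
# Epsilon-insensitive loss: residuals within the eps-tube cost nothing,
# larger deviations are penalized linearly (the basic SVR loss idea).

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Return sum of max(0, |y - f(x)| - eps) over all samples."""
    return sum(max(0.0, abs(yt - yp) - eps) for yt, yp in zip(y_true, y_pred))

y_true = [1.0, 2.0, 3.0]
y_pred = [1.05, 2.5, 3.0]  # only the middle residual (0.5) exceeds the tube
print(eps_insensitive_loss(y_true, y_pred, eps=0.1))  # -> 0.4
```

Only the 0.5 residual contributes (0.5 − 0.1 = 0.4); the 0.05 residual falls inside the tube and is ignored, which is what gives SV regression its sparse set of support vectors.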
A Comprehensive Survey of Evolutionary-Based Multiobjective Optimization Techniques
 Knowledge and Information Systems
, 1998
Abstract (Cited by 292, 22 self):
This paper presents a critical review of the most important evolutionary-based multiobjective optimization techniques developed over the years, emphasizing the importance of analyzing their Operations Research roots as a way to motivate the development of new approaches that exploit the search capabilities of evolutionary algorithms. Each technique is briefly described, mentioning its advantages and disadvantages, its degree of applicability, and some of its known applications. Finally, the future trends in this discipline and some of the open areas of research are also addressed. Keywords: multiobjective optimization, multicriteria optimization, vector optimization, genetic algorithms, evolutionary algorithms, artificial intelligence. 1 Introduction. Since the pioneering work of Rosenberg in the late 1960s regarding the possibility of using genetic-based search to deal with multiple objectives, this new area of research (now called evolutionary multiobjective optimization) has grown c...
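As an illustrative sketch (not from the survey itself), the Pareto-dominance test that underlies most evolutionary multiobjective techniques can be stated in a few lines; minimization of all objectives is assumed, and the function names and points are invented for the example:

```python
# Pareto dominance: a dominates b if a is no worse in every objective and
# strictly better in at least one (minimization assumed).

def dominates(a, b):
    """True if objective vector a Pareto-dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 2), (3, 1), (4, 4)]
print(pareto_front(pts))  # -> [(1, 5), (2, 2), (3, 1)]; (4, 4) is dominated by (2, 2)
```

Ranking populations by this relation, rather than by a single scalar fitness, is the common thread among the techniques the survey reviews.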
LAGRANGE MULTIPLIERS AND OPTIMALITY
, 1993
Abstract (Cited by 122, 7 self):
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
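A worked toy example (not taken from the paper) of the classical multiplier rule and of the sensitivity interpretation the abstract mentions, with the problem data chosen purely for illustration:

```latex
% Toy problem: minimize f(x,y) = x^2 + y^2 subject to g(x,y) = x + y - 1 = 0.
L(x, y, \lambda) = x^2 + y^2 - \lambda\,(x + y - 1)
% First-order conditions (the "system of equations" view):
\frac{\partial L}{\partial x} = 2x - \lambda = 0, \qquad
\frac{\partial L}{\partial y} = 2y - \lambda = 0, \qquad
x + y = 1
% Solution: x = y = \tfrac{1}{2}, \ \lambda = 1.
% Sensitivity interpretation: with the constraint x + y = b, the optimal
% value is v(b) = b^2/2, so v'(b) = b and v'(1) = 1 = \lambda:
% the multiplier is the derivative of the optimal value with respect
% to the constraint parameter.
```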
A Review of Kernel Methods in Machine Learning
, 2006
Abstract (Cited by 95, 4 self):
We review recent methods for learning with positive definite kernels. All these methods formulate learning and estimation problems as linear tasks in a reproducing kernel Hilbert space (RKHS) associated with a kernel. We cover a wide range of methods, ranging from simple classifiers to sophisticated methods for estimation with structured data.
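As a small illustration (not from the review itself) of why an RKHS makes learning problems linear, the polynomial kernel k(x, z) = (x·z + 1)² on R² equals an ordinary inner product after an explicit feature map into R⁶, so linear methods in that feature space can be run through k alone. The function names and test points are invented for the example:

```python
import math

def poly_kernel(x, z):
    """Degree-2 polynomial kernel on R^2: (x.z + 1)^2."""
    return (x[0] * z[0] + x[1] * z[1] + 1) ** 2

def feature_map(x):
    """Explicit map phi with <phi(x), phi(z)> = (x.z + 1)^2."""
    a, b = x
    s = math.sqrt(2)
    return [a * a, b * b, s * a * b, s * a, s * b, 1.0]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x, z = (1.0, 2.0), (3.0, -1.0)
print(poly_kernel(x, z))                    # -> 4.0
print(dot(feature_map(x), feature_map(z)))  # -> 4.0, up to floating-point rounding
```

The kernel evaluation costs an inner product in R², while the equivalent feature space already has six dimensions; for higher degrees or the Gaussian kernel the explicit space grows large or infinite, which is what the kernel trick avoids.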
A multiple discrete-continuous extreme value model: formulation and application to discretionary time-use decisions
 Transportation Research Part B
, 2005
Abstract (Cited by 82, 27 self):
Many consumer choice situations are characterized by the simultaneous demand for multiple alternatives that are imperfect substitutes for one another. A simple and parsimonious Multiple Discrete-Continuous Extreme Value (MDCEV) econometric approach to handle such multiple discreteness was formulated by Bhat (2005) within the broader Kuhn-Tucker (KT) multiple discrete-continuous economic consumer demand model of Wales and Woodland (1983). This paper examines several issues associated with the MDCEV model and other extant KT multiple discrete-continuous models. Specifically, the paper proposes a new utility function form that enables clarity in the role of each parameter in the utility specification, presents identification considerations associated with both the utility functional form and the stochastic nature of the utility specification, extends the MDCEV model to the case of price variation across goods and to general error covariance structures, discusses the relationship between earlier KT-based multiple discrete-continuous models, and illustrates the many technical nuances and identification considerations of the multiple discrete-continuous model structure through empirical examples. The paper also highlights the technical problems associated with the stochastic specification used in the KT-based multiple discrete-continuous models formulated in recent Environmental Economics papers.
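A stylized sketch (not Bhat's actual MDCEV specification) of the deterministic Kuhn-Tucker demand structure the abstract refers to: with utility Σᵢ ψᵢ ln(1 + xᵢ), a budget, and unit prices, the KT conditions leave some goods unconsumed (the "discrete" part) and split the budget among the rest (the "continuous" part). All names and numbers below are illustrative assumptions:

```python
def kt_demand(psi, budget, tol=1e-10):
    """Maximize sum_i psi_i * ln(1 + x_i) s.t. sum_i x_i = budget, x_i >= 0,
    with unit prices. The KT conditions give x_i = max(0, psi_i/lam - 1);
    the multiplier lam is found by bisection on the budget constraint."""
    spend = lambda lam: sum(max(0.0, p / lam - 1.0) for p in psi)
    lo, hi = 1e-9, max(psi)  # spend(lo) is huge, spend(hi) = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if spend(mid) > budget:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return [max(0.0, p / lam - 1.0) for p in psi]

x = kt_demand([4.0, 2.0, 1.0], budget=3.0)
print(x)  # -> approximately [2.333, 0.667, 0.0]
```

The third good's marginal utility at zero (ψ₃ = 1) falls below the multiplier (λ = 1.2), so it sits at a corner while the other two share the budget — exactly the mix of zero and interior consumptions that "multiple discreteness" describes.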
LINEAR COMPLEMENTARITY SYSTEMS
, 2000
Abstract (Cited by 79, 26 self):
We introduce a new class of dynamical systems called “linear complementarity systems.” The time evolution of these systems consists of a series of continuous phases separated by “events” which cause a change in dynamics and possibly a jump in the state vector. The occurrence of events is governed by certain inequalities similar to those appearing in the linear complementarity problem of mathematical programming. The framework we describe is suitable for certain situations in which both differential equations and inequalities play a role; for instance, in mechanics, electrical networks, piecewise linear systems, and dynamic optimization. We present a precise definition of the solution concept of linear complementarity systems and give sufficient conditions for existence and uniqueness of solutions.
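As an illustrative aside (not from the paper), the underlying linear complementarity problem asks for z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0; for a diagonal M with positive diagonal it decouples into scalar problems with a closed-form answer. The function name and data are invented for the example:

```python
def solve_diagonal_lcp(m_diag, q):
    """Solve w = M z + q, z >= 0, w >= 0, z^T w = 0 for diagonal M with
    positive diagonal entries: componentwise z_i = max(0, -q_i / m_i)."""
    z = [max(0.0, -qi / mi) for mi, qi in zip(m_diag, q)]
    w = [mi * zi + qi for mi, zi, qi in zip(m_diag, z, q)]
    return z, w

z, w = solve_diagonal_lcp([2.0, 1.0], [-4.0, 3.0])
print(z)  # -> [2.0, 0.0]
print(w)  # -> [0.0, 3.0]; z[i] * w[i] == 0 in each coordinate
```

In each coordinate either the variable or its "slack" is zero, which is the same either/or structure that triggers the mode-switching events in a linear complementarity system (e.g. a diode conducting or blocking, a contact force active or absent).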
Quadratic Optimization
, 1995
Abstract (Cited by 64, 3 self):
Quadratic optimization comprises one of the most important areas of nonlinear programming. Numerous problems in real-world applications, including planning and scheduling, economies of scale, engineering design, and control, are naturally expressed as quadratic problems. Moreover, the quadratic problem is known to be NP-hard, which makes this one of the most interesting and challenging classes of optimization problems. In this chapter, we review various properties of the quadratic problem, and discuss different techniques for solving various classes of quadratic problems. Some of the more successful algorithms for solving the special cases of bound-constrained and large-scale quadratic problems are considered. Examples of various applications of quadratic programming are presented. A summary of the available computational results for the algorithms to solve the various classes of problems is presented. Key words: quadratic optimization, bilinear programming, concave pro...
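A minimal sketch (not from the chapter) of the easiest bound-constrained case: a one-dimensional convex quadratic over an interval, where the unconstrained minimizer is simply projected onto the bounds. The NP-hardness mentioned above concerns the general (indefinite) case, not this convex one; the function name and coefficients are invented for the example:

```python
def min_quadratic_on_interval(a, b, lo, hi):
    """Minimize q(x) = 0.5*a*x^2 + b*x over [lo, hi], assuming a > 0:
    clip the unconstrained minimizer -b/a to the bounds."""
    return min(max(-b / a, lo), hi)

print(min_quadratic_on_interval(2.0, -8.0, 0.0, 3.0))  # -> 3.0 (minimizer 4.0 clipped)
print(min_quadratic_on_interval(2.0, -2.0, 0.0, 3.0))  # -> 1.0 (interior minimizer)
```

Projected-gradient and active-set methods for the large-scale bound-constrained problems the chapter discusses reduce, coordinate by coordinate, to exactly this clip operation.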