CiteSeerX
Results 1 - 10 of 44,010

Table 2 Terms of the convex optimization problem depending on the choice of the loss function.

in A Tutorial on Support Vector Regression
by Alex J. Smola, Bernhard Schölkopf
"... In PAGE 16: ... Table 2 Terms of the convex optimization problem depending on the choice of the loss function. These two cases can be combined into $\alpha \in [0, C]$ and $T(\alpha) = -\frac{p-1}{p}\, C^{-1/(p-1)}\, \alpha^{p/(p-1)}$ (48). Table 2 contains a summary of the various conditions on $\alpha$ and formulas for $T(\alpha)$ for different cost functions. Note that the maximum slope of $\tilde{c}$ determines the region of feasibility of $\alpha$, i.... ..."
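The conjugate term in the combined formula can be sanity-checked numerically. A minimal sketch, assuming the polynomial loss $c(\xi) = \xi^p/p$ scaled by $C$ (an assumption matching the formula's shape, not code from the tutorial):

```python
import numpy as np

# Closed-form conjugate term from the combined formula (48):
# T(alpha) = -(p-1)/p * C^(-1/(p-1)) * alpha^(p/(p-1)),
# assuming the polynomial loss c(xi) = C * xi^p / p.
p, C = 3.0, 2.0
alpha = 1.5
T_closed = -(p - 1) / p * C ** (-1 / (p - 1)) * alpha ** (p / (p - 1))

# -T(alpha) is the Legendre conjugate sup_xi (alpha*xi - C*xi^p/p),
# approximated here on a dense grid.
xi = np.linspace(0.0, 10.0, 2_000_001)
T_numeric = -np.max(alpha * xi - C * xi ** p / p)

assert abs(T_closed - T_numeric) < 1e-6
```

The grid maximum agrees with the closed form to well below the grid resolution, which is consistent with the two piecewise cases collapsing into the single expression above.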

Table 1: Four convex loss functions and the corresponding $\psi$-transform. On the interval $[-B, B]$, each loss function has the indicated Lipschitz constant $L_B$ and modulus of convexity $\delta(\epsilon)$ with respect to $d_\phi$. All have a quadratic modulus of convexity.

in Large Margin Classifiers: Convex Loss, Low Noise, and Convergence Rates
by Peter L. Bartlett, Michael I. Jordan, Jon D. Mcauliffe 2004
"... In PAGE 3: ...) It is immediate from the definitions that $\tilde{\psi}$ and $\psi$ are nonnegative and that they are also continuous on $[0, 1]$. We calculate the $\psi$-transform for exponential loss, logistic loss, quadratic loss and truncated quadratic loss, tabulating the results in Table 1. All of these loss functions can be verified to be classification-calibrated.... In PAGE 7: ... pseudometric $d$ on $\mathbb{R}$: we say that $\phi : \mathbb{R} \to \mathbb{R}$ is Lipschitz with respect to $d$, with constant $L$, if for all $a, b \in \mathbb{R}$, $|\phi(a) - \phi(b)| \le L \, d(a, b)$. (Note that if $d$ is a metric and $\phi$ is convex, then $\phi$ necessarily satisfies a Lipschitz condition on any compact subset of $\mathbb{R}$.) We consider four loss functions that satisfy these conditions: the exponential loss function used in AdaBoost, the deviance function for logistic regression, the quadratic loss function, and the truncated quadratic loss function; see Table 1. We use the pseudometric $d_\phi(a, b) = \inf\{|a - \alpha| + |\beta - b| : \phi \text{ constant on } (\min\{\alpha, \beta\}, \max\{\alpha, \beta\})\}$. For all except the truncated quadratic loss function, this corresponds to the standard metric on $\mathbb{R}$, $d_\phi(a, b) = |a - b|$.... ..."
Cited by 9
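The four losses named in the excerpt can be written down and their convexity and Lipschitz bounds on $[-B, B]$ checked numerically. A rough sketch using the standard margin-loss forms (the formulas below are the usual textbook definitions, assumed rather than quoted from the paper's Table 1):

```python
import numpy as np

# Standard forms of the four margin losses discussed in the excerpt
# (assumed definitions, not copied from the paper's table).
losses = {
    "exponential": lambda a: np.exp(-a),
    "logistic": lambda a: np.log(1 + np.exp(-a)),
    "quadratic": lambda a: (1 - a) ** 2,
    "truncated quadratic": lambda a: np.maximum(1 - a, 0) ** 2,
}

B = 2.0
a = np.linspace(-B, B, 4001)
for name, phi in losses.items():
    v = phi(a)
    # Convexity: discrete second differences are nonnegative.
    assert np.all(np.diff(v, 2) >= -1e-9), name
    # Lipschitz on [-B, B]: the finite-difference slopes are bounded.
    L_B = np.max(np.abs(np.diff(v) / np.diff(a)))
    assert np.isfinite(L_B), name
```

For the quadratic loss the maximal slope on $[-B, B]$ comes out at $2(B + 1)$, matching the behavior one expects of the tabulated $L_B$ constants.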

Table 1: Summary of the characteristics of the benchmarks. The stock-management problems theoretically lead to convex Bellman functions, but their learnt counterparts are not convex. The arm and away problems deal with robot-hand control; these two problems can be handled approximately (but not exactly) by bang-bang solutions. The Walls and Multi-Agent problems are motion-control problems with hard penalties when hitting boundaries; the loss functions are very unsmooth.

in NONLINEAR PROGRAMMING IN APPROXIMATE DYNAMIC PROGRAMMING: BANG-BANG SOLUTIONS, STOCK-MANAGEMENT AND UNSMOOTH PENALTIES
by Olivier Teytaud, Sylvain Gelly

Table 1 Common loss functions and corresponding density models

in A Tutorial on Support Vector Regression
by Alex J. Smola, Bernhard Schölkopf
"... In PAGE 13: ... this particular cost function that matters ultimately. Table 1 contains an overview over some common density models and the corresponding loss functions as defined by (35), whereas figure 2 contains graphs of the corresponding functions. The only requirement we will impose on $c$ in the following is that for fixed $x_i$ and $y_i$ we have convexity in $f(x_i)$.... ..."

Table 1: Common loss functions and corresponding density models

in A Tutorial on Support Vector Regression
by Alex J. Smola, Bernhard Schölkopf 1998
"... In PAGE 6: ... this particular cost function that matters ultimately. Table 1 contains an overview over some common density models and the corresponding loss functions as defined by (37). The only requirement we will impose on c(x, y, f(x)) in the following is that for fixed x and y we have convexity in f(x).... ..."
Cited by 184
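The loss-to-density correspondence the excerpt refers to, a loss $c(\xi)$ inducing a noise model $p(\xi) \propto \exp(-c(\xi))$, can be illustrated with two standard pairings. A sketch (the Gaussian and Laplacian pairings are the textbook examples, assumed here rather than read off the table):

```python
import numpy as np

# Loss -> density via p(xi) proportional to exp(-c(xi)).
# Squared loss c(xi) = xi^2/2 induces a Gaussian noise model;
# absolute loss c(xi) = |xi| induces a Laplacian (standard
# pairings, used as an illustration, not quoted from Table 1).
xi = np.linspace(-10.0, 10.0, 2001)
dx = xi[1] - xi[0]

gaussian = np.exp(-xi ** 2 / 2) / np.sqrt(2 * np.pi)  # from squared loss
laplacian = np.exp(-np.abs(xi)) / 2                   # from absolute loss

# Both normalize to ~1 over the grid, so each loss induces a
# proper noise density.
for density in (gaussian, laplacian):
    assert abs(density.sum() * dx - 1.0) < 1e-3
```

Both candidate densities integrate to one (up to grid truncation), which is the sense in which each convex loss corresponds to a density model.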

Table 1: Common loss functions and corresponding density models

in A tutorial on support vector regression
by Alex J. Smola, Bernhard Schölkopf
"... In PAGE 6: ... this particular cost function that matters ultimately. Table 1 contains an overview over some common density models and the corresponding loss functions as defined by (37). The only requirement we will impose on c(x, y, f(x)) in the following is that for fixed x and y we have convexity in f(x).... ..."

Table 1. Common loss functions and corresponding density models

in A tutorial on support vector regression
by Alex J. Smola, Bernhard Schölkopf 2002
"... In PAGE 6: ... this particular cost function that matters ultimately. Table 1 contains an overview over some common density models and the corresponding loss functions as defined by (37). The only requirement we will impose on c(x, y, f(x)) in the following is that for fixed x and y we have convexity in f(x).... In PAGE 7: ...4. Examples Let us consider the examples of Table 1. We will show explicitly for two examples how (43) can be further simplified to bring it into a form that is practically useful.... ..."

Table 1. Some common loss functions for the domain [0, 1] × [0, 1]

in Averaging Expert Predictions
by Jyrki Kivinen, Manfred K. Warmuth 1999
"... In PAGE 5: ...) 3 Basic Loss Bounds. We begin with a short discussion of some basic properties of loss functions. The definitions of the loss functions most interesting to us are given in Table 1. For a loss function $L$, we define $L_y(\hat{y}) = L(y, \hat{y})$ for convenience in writing derivatives with respect to $\hat{y}$.... In PAGE 5: ... For a loss function $L$, we define $L_y(\hat{y}) = L(y, \hat{y})$ for convenience in writing derivatives with respect to $\hat{y}$. Note that with the exception of the absolute loss, all the loss functions given in Table 1 are convex, i.e.... ..."
Cited by 37
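The convexity remark in the excerpt is easy to verify for a fixed outcome $y$. A sketch using standard definitions of the square, log, and absolute losses on $[0, 1] \times [0, 1]$ (assumed forms, not copied from the paper's Table 1):

```python
import numpy as np

# Standard prediction losses on [0,1] x [0,1], with the excerpt's
# convention L_y(yhat) = L(y, yhat); the definitions below are the
# usual ones, assumed to match the table.
def square(y, yhat):
    return (y - yhat) ** 2

def log_loss(y, yhat):
    return -y * np.log(yhat) - (1 - y) * np.log(1 - yhat)

def absolute(y, yhat):
    return np.abs(y - yhat)

y = 1.0
yhat = np.linspace(0.01, 0.99, 99)
for L in (square, log_loss, absolute):
    # Nonnegative second differences <=> convex in yhat at fixed y.
    assert np.all(np.diff(L(y, yhat), 2) >= -1e-9)
```

The absolute loss still passes this non-strict check; the excerpt presumably singles it out because it is not strictly convex and not differentiable at $\hat{y} = y$.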

Table 6. Predicted coefficients for the weighting function, for different value functions.

in Risk Attitudes of Children and Adults: Choices Over Small and Large Probability Gains and Losses (Experimental Economics 5:53–84, © 2002 Economic Science Association)
by William T. Harbaugh, Kate Krause, Lise Vesterlund
"... In PAGE 17: ... The linear results are identical to those at the bottom of Table 5 and are included for comparison. As seen in Table 6, our qualitative results are not sensitive to relaxing the assumption that the value function is linear. Consider for example the alternative proposed by CPT that the value function is concave over gains and convex over losses.... ..."
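The CPT shape mentioned in the excerpt, concave over gains and convex over losses, can be sketched with the standard Tversky–Kahneman value function (the functional form and parameters below are illustrative defaults, not the paper's estimates):

```python
import numpy as np

# Standard CPT-style value function: concave power function over
# gains, convex (and steeper, via loss aversion lam) over losses.
# Parameters are illustrative, not estimated coefficients.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

gains = np.linspace(0.1, 10.0, 200)
losses = -gains[::-1]

# Concave over gains: second differences <= 0.
assert np.all(np.diff(value(gains), 2) <= 1e-9)
# Convex over losses: second differences >= 0.
assert np.all(np.diff(value(losses), 2) >= -1e-9)
```

This is the qualitative alternative the authors test against their linear value-function specification.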

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University