Results 1–10 of 7,266
Distance metric learning for large margin nearest neighbor classification
 In NIPS
, 2006
"... We show how to learn a Mahalanobis distance metric for k-nearest neighbor (kNN) classification by semidefinite programming. The metric is trained with the goal that the k nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. On seven ..."
Abstract

Cited by 695 (14 self)
convex optimization based on the hinge loss. Unlike learning in SVMs, however, our framework requires no modification or extension for problems in multiway (as opposed to binary) classification.
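The large-margin condition this snippet describes (the true class must beat every other class by a margin) can be illustrated with a short sketch. This is an illustrative toy, not the paper's actual semidefinite-programming solver; the function name and the `margin` parameter are assumptions:

```python
import numpy as np

def multiclass_hinge_loss(scores, y, margin=1.0):
    """Multiclass hinge loss for one example.

    scores: 1-D array of per-class scores; y: index of the true class.
    The loss is zero only when the true class outscores every other
    class by at least `margin`."""
    correct = scores[y]
    # Margin violation for each competing class.
    violations = scores + margin - correct
    violations[y] = 0.0  # the true class never competes with itself
    return max(0.0, violations.max())
```

With a well-separated example the loss vanishes; otherwise it grows linearly with the worst margin violation, which is what makes the objective convex and amenable to the optimization the snippet mentions.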
Monopolistic competition and optimum product diversity. The American Economic Review,
, 1977
"... The basic issue concerning production in welfare economics is whether a market solution will yield the socially optimum kinds and quantities of commodities. It is well known that problems can arise for three broad reasons: distributive justice; external effects; and scale economies. This paper is c ..."
Abstract

Cited by 1911 (5 self)
The inductive approach to verifying cryptographic protocols
 Journal of Computer Security
, 1998
"... Informal arguments that cryptographic protocols are secure can be made rigorous using inductive definitions. The approach is based on ordinary predicate calculus and copes with infinite-state systems. Proofs are generated using Isabelle/HOL. The human effort required to analyze a protocol can be as ..."
Abstract

Cited by 480 (29 self)
be as little as a week or two, yielding a proof script that takes a few minutes to run. Protocols are inductively defined as sets of traces. A trace is a list of communication events, perhaps comprising many interleaved protocol runs. Protocol descriptions incorporate attacks and accidental losses. The model
Online passive-aggressive algorithms
 JMLR
, 2006
"... We present a unified view for online classification, regression, and uniclass problems. This view leads to a single algorithmic framework for the three problems. We prove worst case loss bounds for various algorithms for both the realizable case and the non-realizable case. The end result is new alg ..."
Abstract

Cited by 435 (24 self)
algorithms and accompanying loss bounds for hinge-loss regression and uniclass. We also get refined loss bounds for previously studied classification algorithms.
Rapid worldwide depletion of predatory fish communities.
 Nature,
, 2003
"... Serious concerns have been raised about the ecological effects of industrialized fishing. Ecological communities on continental shelves and in the open ocean contribute almost half of the planet's primary production [9], and sustain three-quarters of global fishery yields [1]. The widespread dec ..."
Abstract

Cited by 367 (7 self)
L1 AND L2 REGULARIZATION FOR MULTICLASS HINGE LOSS MODELS
"... This paper investigates the relationship between the loss function, the type of regularization, and the resulting model sparsity of discriminatively trained multiclass linear models. The effects on sparsity of optimizing log loss are straightforward: L2 regularization produces very dense models whil ..."
Abstract
while L1 regularization produces much sparser models. However, optimizing hinge loss yields more nuanced behavior. We give experimental evidence and theoretical arguments that, for a class of problems that arises frequently in natural-language processing, both L1- and L2-regularized hinge loss lead
Linear Hinge Loss and Average Margin
, 1998
"... We describe a unifying method for proving relative loss bounds for online linear threshold classification algorithms, such as the Perceptron and the Winnow algorithms. For classification problems the discrete loss is used, i.e., the total number of prediction mistakes. We introduce a continuous ..."
Abstract

Cited by 42 (13 self)
loss function, called the "linear hinge loss", that can be employed to derive the updates of the algorithms. We first prove bounds w.r.t. the linear hinge loss and then convert them to the discrete loss. We introduce a notion of "average margin" of a set of examples. We show how
On the coherence of expected shortfall
 In: Szegö, G. (Ed.), “Beyond VaR” (Special Issue). Journal of Banking & Finance
, 2002
"... Expected Shortfall (ES) in several variants has been proposed as a remedy for the deficiencies of Value-at-Risk (VaR), which in general is not a coherent risk measure. In fact, most definitions of ES lead to the same results when applied to continuous loss distributions. Differences may appear when the ..."
Abstract

Cited by 217 (8 self)
the underlying loss distributions have discontinuities. In this case even the coherence property of ES can get lost unless one takes care of the details in its definition. We compare some of the definitions of Expected Shortfall, pointing out that there is one which is robust in the sense of yielding a coherent
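A minimal sketch of the simple tail-average estimator of Expected Shortfall may help fix ideas: the mean of the worst (1 − alpha) fraction of losses. As the abstract notes, definitions diverge on discrete or discontinuous distributions, so this naive average is not necessarily the coherent variant the paper advocates; the function name and parameterization are assumptions:

```python
import numpy as np

def expected_shortfall(losses, alpha=0.95):
    """Naive ES estimate: average of the worst (1 - alpha) share of losses.

    Larger values in `losses` mean larger losses. On discrete samples
    this tail average can differ from coherent definitions of ES."""
    losses = np.sort(np.asarray(losses, dtype=float))
    # Number of observations in the tail (at least one).
    k = max(1, int(round((1 - alpha) * len(losses))))
    return losses[-k:].mean()
```

For 100 equally likely losses of 1, 2, ..., 100 at alpha = 0.95, this averages the five worst losses (96 through 100), giving 98.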
On coding for reliable communication over packet networks
, 2008
"... We consider the use of random linear network coding in lossy packet networks. In particular, we consider the following simple strategy: nodes store the packets that they receive and, whenever they have a transmission opportunity, they send out coded packets formed from random linear combinations of ..."
Abstract

Cited by 217 (37 self)
on links arrive according to processes that have average rates. Thus packet losses on links may exhibit correlations in time or with losses on other links. In the special case of Poisson traffic with i.i.d. losses, we give error exponents that quantify the rate of decay of the probability of error
Smooth Hinge Classification
, 2005
"... In earlier writing [2, 1], we discussed alternate loss functions that might be used for classification. We continue our discussion here by introducing yet another loss function, the Smooth Hinge. Recall that the (Shifted) Hinge loss function is defined as Hinge(z) = max(0, 1 − z). (1) In our eyes, ..."
Abstract

Cited by 1 (0 self)
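The (Shifted) Hinge loss quoted above, together with one common quadratic smoothing, can be sketched as follows. The smoothed form here is a standard construction (quadratic interpolation between the flat and linear regimes) and may differ from the paper's exact Smooth Hinge definition:

```python
def hinge(z):
    """(Shifted) hinge loss from the abstract: Hinge(z) = max(0, 1 - z)."""
    return max(0.0, 1.0 - z)

def smooth_hinge(z):
    """A quadratically smoothed hinge (an assumed common construction).

    Matches the hinge for z <= 0 (slope -1) and z >= 1 (zero), and
    interpolates with a quadratic in between, so the loss is
    differentiable everywhere, unlike the hinge's kink at z = 1."""
    if z >= 1.0:
        return 0.0
    if z <= 0.0:
        return 0.5 - z
    return 0.5 * (1.0 - z) ** 2
```

The point of the smoothing is that gradient-based optimizers no longer see a non-differentiable corner at the margin boundary z = 1.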