Results 1–10 of 2,912
Books in graphs
, 2008
Abstract

Cited by 2380 (22 self)
A set of q triangles sharing a common edge is called a book of size q. We write β(n, m) for the maximal q such that every graph G(n, m) contains a book of size q. In this note 1) we compute β(n, cn²) for infinitely many values of c with 1/4 < c < 1/3; 2) we show that if m ≥ (1/4 − α)n² with 0 < α < 17⁻³, and G has no book of size at least …, graph G1 of order at least …
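For a concrete graph, the quantity β can be computed directly: the book sitting on an edge uv consists of the common neighbours of u and v, so the maximum book size is the maximum number of common neighbours over all edges. A minimal brute-force sketch (the function name is ours):

```python
import itertools

def max_book_size(n, edges):
    """Largest q such that some edge of the graph is shared by q
    triangles, i.e. its endpoints have q common neighbours."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return max(len(adj[u] & adj[v]) for u, v in edges)

# In the complete graph K5 every edge lies in a book of size 3
# (the three remaining vertices).
k5_edges = list(itertools.combinations(range(5), 2))
print(max_book_size(5, k5_edges))  # 3
```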
Performance Analysis of the IEEE 802.11 Distributed Coordination Function
, 2000
Abstract

Cited by 1831 (1 self)
Recently, the IEEE has standardized the 802.11 protocol for Wireless Local Area Networks. The primary medium access control (MAC) technique of 802.11 is called the distributed coordination function (DCF). DCF is a carrier sense multiple access with collision avoidance (CSMA/CA) scheme with binary slotted exponential backoff. This paper provides a simple, but nevertheless extremely accurate, analytical model to compute the 802.11 DCF throughput, under the assumption of a finite number of terminals and ideal channel conditions. The proposed analysis applies to both packet transmission schemes employed by DCF, namely, the basic access and the RTS/CTS access mechanisms. In addition, it also applies to a combination of the two schemes, in which packets longer than a given threshold are transmitted according to the RTS/CTS mechanism. By means of the proposed model, we provide an extensive throughput performance evaluation of both access mechanisms of the 802.11 protocol.
Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality
, 1998
Abstract

Cited by 1017 (40 self)
The nearest neighbor problem is the following: given a set of n points P = {p₁, …, pₙ} in some metric space X, preprocess P so as to efficiently answer queries which require finding the point in P closest to a query point q ∈ X. We focus on the particularly interesting case of the d-dimensional Euclidean space, where X = ℝ^d under some l_p norm. Despite decades of effort, the current solutions are far from satisfactory; in fact, for large d, in theory or in practice, they provide little improvement over the brute-force algorithm which compares the query point to each data point. Of late, there has been some interest in the approximate nearest neighbors problem, which is: find a point p ∈ P that is an ε-approximate nearest neighbor of the query q in that, for all p′ ∈ P, d(p, q) ≤ (1 + ε)d(p′, q). We present two algorithmic results for the approximate version that significantly improve the known bounds: (a) preprocessing cost polynomial in n and d, and a trul…
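The ε-approximate definition above is easy to check directly against a brute-force scan; a small illustrative sketch (helper names are ours):

```python
import math
import random

def nearest(points, q):
    """Brute-force exact nearest neighbor: compare q to every point."""
    return min(points, key=lambda p: math.dist(p, q))

def is_eps_approx_nn(points, q, candidate, eps):
    """The paper's criterion: d(candidate, q) <= (1 + eps) * d(p', q)
    for every p' in the set, i.e. within (1+eps) of the true minimum."""
    best = math.dist(nearest(points, q), q)
    return math.dist(candidate, q) <= (1.0 + eps) * best

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(100)]
q = (0.5, 0.5)
p_star = nearest(pts, q)
print(is_eps_approx_nn(pts, q, p_star, eps=0.1))  # True: the exact NN always qualifies
```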
Controlled and automatic human information processing
 I. Detection, search, and attention. Psychological Review
, 1977
Abstract

Cited by 841 (15 self)
A two-process theory of human information processing is proposed and applied to detection, search, and attention phenomena. Automatic processing is activation of a learned sequence of elements in long-term memory that is initiated by appropriate inputs and then proceeds automatically—without subject control, without stressing the capacity limitations of the system, and without necessarily demanding attention. Controlled processing is a temporary activation of a sequence of elements that can be set up quickly and easily but requires attention, is capacity-limited (usually serial in nature), and is controlled by the subject. A series of studies using both reaction time and accuracy measures is presented, which traces these concepts in the form of automatic detection and controlled search through the areas of detection, search, and attention. Results in these areas are shown to arise from common mechanisms. Automatic detection is shown to develop following consistent mapping of stimuli to responses over trials. Controlled search is utilized in varied-mapping paradigms, and in our studies, it takes the form of serial, terminating search. The approach resolves a number of apparent conflicts in the literature.
A theory of memory retrieval
 PSYCHOL. REV
, 1978
Abstract

Cited by 728 (81 self)
A theory of memory retrieval is developed and is shown to apply over a range of experimental paradigms. Access to memory traces is viewed in terms of a resonance metaphor. The probe item evokes the search set on the basis of probe-memory item relatedness, just as a ringing tuning fork evokes sympathetic vibrations in other tuning forks. Evidence is accumulated in parallel from each probe-memory item comparison, and each comparison is modeled by a continuous random walk process. In item recognition, the decision process is self-terminating on matching comparisons and exhaustive on nonmatching comparisons. The mathematical model produces predictions about accuracy, mean reaction time, error latency, and reaction time distributions that are in good accord with experimental data. The theory is applied to four item recognition paradigms (Sternberg, prememorized list, study-test, and continuous) and to speed-accuracy paradigms; results are found to provide a basis for comparison of these paradigms. It is noted that neural network models can be interfaced to the retrieval theory with little difficulty and that semantic memory models may benefit from such a retrieval scheme.
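A discrete-step simulation gives the flavour of how a single comparison's random walk yields both an accuracy and a latency prediction at once. This is an illustrative sketch with arbitrary parameter values, not the paper's parameterisation:

```python
import random

def walk(drift, bound=3.0, sd=1.0, rng=None):
    """One discrete-step approximation of a continuous random walk:
    evidence starts at 0 and accumulates until it crosses +bound
    (match decision) or -bound (non-match decision).
    Returns (hit_top, number_of_steps)."""
    rng = rng or random
    x, t = 0.0, 0
    while abs(x) < bound:
        x += drift + rng.gauss(0.0, sd)
        t += 1
    return x >= bound, t

rng = random.Random(42)
trials = [walk(0.5, rng=rng) for _ in range(2000)]
accuracy = sum(hit for hit, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

A positive drift (relatedness) pushes the walk toward the matching boundary, so accuracy and latency both fall out of the same two parameters.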
Practical network support for IP traceback
, 2000
Abstract

Cited by 666 (14 self)
This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back towards their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or “spoofed”, source addresses. In this paper we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs). Moreover, this traceback can be performed “post-mortem” – after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backwards compatible and can be efficiently implemented using conventional technology.
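The simplest probabilistic-marking variant (node sampling) is easy to simulate: each router on the path overwrites a single mark field with probability p, so routers nearer the victim dominate the observed marks, and ordering routers by mark frequency recovers the path. A sketch (router names and function names are ours):

```python
import random
from collections import Counter

def send_packet(path, p, rng):
    """Node-sampling sketch: each router on the path overwrites the
    single mark field with probability p; the victim receives the
    last mark written (or None if no router marked the packet)."""
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router
    return mark

rng = random.Random(7)
path = ["R1", "R2", "R3", "R4"]   # attacker side first, victim side last
marks = Counter(send_packet(path, 0.5, rng) for _ in range(20000))
# Routers nearer the victim overwrite earlier marks, so they appear
# more often; frequency ordering reconstructs the path back to R1.
```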
Mixed MNL Models for Discrete Response
 JOURNAL OF APPLIED ECONOMETRICS
, 2000
Abstract

Cited by 466 (14 self)
This paper considers mixed, or random coefficients, multinomial logit (MMNL) models for discrete response, and establishes the following results: Under mild regularity conditions, any discrete choice model derived from random utility maximization has choice probabilities that can be approximated as closely as one pleases by an MMNL model. Practical estimation of a parametric mixing family can be carried out by Maximum Simulated Likelihood Estimation or Method of Simulated Moments, and easily computed instruments are provided that make the latter procedure fairly efficient. The adequacy of a mixing specification can be tested simply as an omitted variable test with appropriately defined artificial variables. An application to a problem of demand for alternative vehicles shows that MMNL provides a flexible and computationally practical approach to discrete response analysis.
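The simulation step behind Maximum Simulated Likelihood is Monte Carlo integration of the logit kernel over the random coefficients. A one-attribute sketch with a normally distributed coefficient (an illustrative specification, not the paper's application):

```python
import math
import random

def mmnl_prob(x, mean, sd, draws=5000, rng=None):
    """Monte Carlo approximation of mixed-logit choice probabilities:
    the coefficient beta ~ Normal(mean, sd) is integrated out by
    averaging the conditional logit probabilities over draws.
    x holds one attribute value per alternative."""
    rng = rng or random
    acc = [0.0] * len(x)
    for _ in range(draws):
        beta = rng.gauss(mean, sd)
        utils = [math.exp(beta * xi) for xi in x]
        s = sum(utils)
        for j, u in enumerate(utils):
            acc[j] += u / s
    return [a / draws for a in acc]

rng = random.Random(3)
probs = mmnl_prob([1.0, 2.0, 3.0], mean=0.5, sd=1.0, rng=rng)
```

Averaging the conditional probabilities (rather than the utilities) is what lets the mixture approximate substitution patterns that a plain MNL model cannot.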
Approximate counting, uniform generation and rapidly mixing markov chains
 Inf. Comput
, 1989
Abstract

Cited by 318 (11 self)
The paper studies effective approximate solutions to combinatorial counting and uniform generation problems. Using a technique based on the simulation of ergodic Markov chains, it is shown that, for self-reducible structures, almost uniform generation is possible in polynomial time provided only that randomised approximate counting to within some arbitrary polynomial factor is possible in polynomial time. It follows that, for self-reducible structures, polynomial time randomised algorithms for counting to within factors of the form (1 …
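The generation-by-Markov-chain idea in miniature: a lazy random walk on {0,1}^n is ergodic with uniform stationary distribution, so running it for enough steps yields almost-uniform samples. This is a toy illustration of the simulation technique, not the paper's self-reducibility construction:

```python
import random
from collections import Counter

def lazy_hypercube_sample(n, steps, rng):
    """Almost-uniform generation by simulating an ergodic Markov chain:
    a lazy random walk on the n-dimensional hypercube. With probability
    1/2 stay put (laziness guarantees aperiodicity), otherwise flip a
    uniformly chosen coordinate. The stationary distribution is uniform."""
    state = [0] * n
    for _ in range(steps):
        if rng.random() < 0.5:
            i = rng.randrange(n)
            state[i] ^= 1
    return tuple(state)

rng = random.Random(11)
counts = Counter(lazy_hypercube_sample(3, 50, rng) for _ in range(8000))
# After enough steps each of the 8 states appears with frequency
# close to 1/8 — the chain has "rapidly mixed" to uniform.
```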