Results 1 - 10 of 14,621
BIRCH: an efficient data clustering method for very large databases
- In Proc. of the ACM SIGMOD Intl. Conference on Management of Data (SIGMOD), 1996
"... Finding useful patterns in large datasets has attracted considerable interest recently, and one of the most widely st,udied problems in this area is the identification of clusters, or deusel y populated regions, in a multi-dir nensional clataset. Prior work does not adequately address the problem of ..."
Cited by 576 (2 self)
is also the first clustering algorithm proposed in the database area to handle “noise” (data points that are not part of the underlying pattern) effectively. We evaluate BIRCH's time/space efficiency, data input order sensitivity, and clustering quality through several experiments. We also present a ...
OPTICS: Ordering Points To Identify the Clustering Structure
1999
"... Cluster analysis is a primary method for database mining. It is either used as a stand-alone tool to get insight into the distribution of a data set, e.g. to focus further analysis and data processing, or as a preprocessing step for other algorithms operating on the detected clusters. Almost all of ..."
Abstract
-
Cited by 527 (51 self)
- Add to MetaCart
of the well-known clustering algorithms require input parameters which are hard to determine but have a significant influence on the clustering result. Furthermore, for many real-data sets there does not even exist a global parameter setting for which the result of the clustering algorithm describes
Automatic Subspace Clustering of High Dimensional Data
- Data Mining and Knowledge Discovery, 2005
"... Data mining applications place special requirements on clustering algorithms including: the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehensibility of the results, non-presumption of any canonical data distribution, and insensitivity to the or ..."
Cited by 724 (12 self)
identical results irrespective of the order in which input records are presented and does not presume any specific mathematical form for data distribution. Through experiments, we show that CLIQUE efficiently finds accurate clusters in large high dimensional datasets.
Data Preparation for Mining World Wide Web Browsing Patterns
- Knowledge and Information Systems, 1999
"... The World Wide Web (WWW) continues to grow at an astounding rate in both the sheer volume of tra#c and the size and complexity of Web sites. The complexity of tasks such as Web site design, Web server design, and of simply navigating through a Web site have increased along with this growth. An i ..."
Cited by 567 (43 self)
is the application of data mining techniques to usage logs of large Web data repositories in order to produce results that can be used in the design tasks mentioned above. However, there are several preprocessing tasks that must be performed prior to applying data mining algorithms to the data collected from
Static Scheduling of Synchronous Data Flow Programs for Digital Signal Processing
- IEEE Transactions on Computers, 1987
"... Large grain data flow (LGDF) programming is natural and convenient for describing digital signal processing (DSP) systems, but its runtime overhead is costly in real time or cost-sensitive applications. In some situations, designers are not willing to squander computing resources for the sake of pro ..."
Cited by 598 (37 self)
Liquidity Risk and Expected Stock Returns
2002
"... This study investigates whether market-wide liquidity is a state variable important for asset pricing. We find that expected stock returns are related cross-sectionally to the sensitivities of returns to fluctuations in aggregate liquidity. Our monthly liquidity measure, an average of individual-sto ..."
Abstract
-
Cited by 629 (6 self)
- Add to MetaCart
-stock measures estimated with daily data, relies on the principle that order flow induces greater return reversals when liquidity is lower. Over a 34-year period, the average return on stocks with high sensitivities to liquidity exceeds that for stocks with low sensitivities by 7.5 % annually, adjusted
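The reversal principle in the excerpt above can be made concrete with a short sketch: estimate, from one stock-month of daily data, how strongly signed volume predicts a next-day return reversal. The regression form, the variable names, the function `liquidity_gamma`, and the use of plain OLS are illustrative assumptions, not the paper's exact specification.

```python
# A hedged sketch of a per-stock, per-month liquidity estimate built on the
# excerpt's principle: signed order flow (volume) predicts next-day return
# reversals, and the reversal is stronger when liquidity is lower. The
# regression form and names are assumptions, not the paper's specification.
import numpy as np

def liquidity_gamma(ret, excess_ret, volume):
    """One stock-month of daily data -> gamma (more negative = less liquid)."""
    y = excess_ret[1:]                              # next-day excess return
    X = np.column_stack([
        np.ones(len(y)),                            # intercept
        ret[:-1],                                   # today's return
        np.sign(excess_ret[:-1]) * volume[:-1],     # signed order-flow proxy
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS fit
    return coef[2]                                  # gamma on the signed-volume term

# A market-wide monthly series would then average gamma across stocks,
# matching the excerpt's "average of individual-stock measures estimated
# with daily data".
```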
Boosting and differential privacy
2010
"... Boosting is a general method for improving the accuracy of learning algorithms. We use boosting to construct improved privacy-preserving synopses of an input database. These are data structures that yield, for a given set Q of queries over an input database, reasonably accurate estimates of the resp ..."
Cited by 648 (14 self)
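The excerpt describes a synopsis as a data structure that yields reasonably accurate answers to a fixed query set Q under privacy. As a point of reference, here is a minimal sketch of the simplest such synopsis, the Laplace mechanism with the privacy budget split across queries; this is a standard baseline, not the paper's boosting construction, and `epsilon`, the counting-query form, and the function name are assumptions.

```python
# Minimal baseline sketch of a "privacy-preserving synopsis": precompute
# noisy answers to a fixed query set Q so each can later be read off with
# reasonable accuracy. Plain Laplace mechanism with the budget split across
# queries; NOT the paper's boosting construction.
import numpy as np

def laplace_synopsis(database, queries, epsilon=1.0):
    """database: iterable of records; queries: list of boolean predicates.

    Each counting query has sensitivity 1, so Laplace noise of scale
    len(queries)/epsilon per answer gives epsilon-DP overall by basic
    composition across the |Q| queries.
    """
    scale = len(queries) / epsilon
    rng = np.random.default_rng()
    return {
        i: sum(q(row) for row in database) + rng.laplace(0.0, scale)
        for i, q in enumerate(queries)
    }

# Usage: answers = laplace_synopsis(ages, [lambda r: r >= 18, lambda r: r < 65])
```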
Aggregate Productivity Growth: Lessons from Microeconomic Evidence
2000
"... Recent research using establishment and firm level data has raised a variety of conceptual and measurement questions regarding our understanding of aggregate productivity growth. 1 Several key, related findings are of interest. First, there is large scale, ongoing reallocation of outputs and input ..."
Cited by 472 (49 self)
Layered depth images
1997
"... In this paper we present an efficient image based rendering system that renders multiple frames per second on a PC. Our method performs warping from an intermediate representation called a layered depth image (LDI). An LDI is a view of the scene from a single input camera view, but with multiple pix ..."
Abstract
-
Cited by 456 (29 self)
- Add to MetaCart
pixels along each line of sight. When n input images are preprocessed to create a single LDI, the size of the representation grows linearly only with the observed depth complexity in the n images, instead of linearly with n. Moreover, because the LDI data are represented in a single image coordinate
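The structure the excerpt describes can be sketched directly: one camera view whose pixels each hold a depth-sorted list of samples, so storage grows with observed depth complexity rather than with the number of input images. A minimal sketch, with class and field names assumed for illustration:

```python
# Minimal sketch of a layered depth image: pixels[y][x] is a front-to-back
# list of samples along that pixel's line of sight. Names are illustrative.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LayerSample:
    depth: float                 # distance along the pixel's line of sight
    color: Tuple[int, int, int]  # (r, g, b)

@dataclass
class LayeredDepthImage:
    width: int
    height: int
    pixels: List[List[List[LayerSample]]] = field(default_factory=list)

    def __post_init__(self):
        if not self.pixels:
            # one (initially empty) sample list per line of sight
            self.pixels = [[[] for _ in range(self.width)]
                           for _ in range(self.height)]

    def add_sample(self, x: int, y: int, depth: float, color) -> None:
        samples = self.pixels[y][x]
        samples.append(LayerSample(depth, color))
        samples.sort(key=lambda s: s.depth)  # keep front-to-back order
```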
Image analogies
2001
"... Figure 1 An image analogy. Our problem is to compute a new “analogous ” image B ′ that relates to B in “the same way ” as A ′ relates to A. Here, A, A ′ , and B are inputs to our algorithm, and B ′ is the output. The full-size images are shown in Figures 10 and 11. This paper describes a new framewo ..."
Cited by 455 (8 self)
is applied to some new target image in order to create an “analogous” filtered result. Image analogies are based on a simple multiscale autoregression, inspired primarily by recent results in texture synthesis. By choosing different types of source image pairs as input, the framework supports a wide variety
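The A : A′ :: B : B′ setup in the excerpt can be illustrated with a toy, single-scale nearest-neighbor version: for each pixel of B, find the pixel of A whose grayscale neighborhood best matches, and copy the corresponding A′ pixel into B′. The actual method is multiscale with richer features, so this simplification (grayscale arrays, the `analogy` function, matching by sum of squared differences) is an assumption for illustration only.

```python
# Toy, single-scale illustration of the image-analogy interface:
# given A, A', and B, synthesize B' so that B' relates to B as A' relates to A.
import numpy as np

def analogy(A, A_prime, B, radius=2):
    """A, A_prime, B: 2-D float arrays (A and A_prime the same shape)."""
    k = 2 * radius + 1
    Ap = np.pad(A, radius, mode="edge")
    Bp = np.pad(B, radius, mode="edge")

    h, w = A.shape
    # Lookup table: every k x k neighborhood of A, flattened to a row.
    feats = np.stack([Ap[y:y + k, x:x + k].ravel()
                      for y in range(h) for x in range(w)])

    B_prime = np.empty_like(B)
    for y in range(B.shape[0]):
        for x in range(B.shape[1]):
            q = Bp[y:y + k, x:x + k].ravel()
            best = int(np.argmin(((feats - q) ** 2).sum(axis=1)))
            B_prime[y, x] = A_prime[best // w, best % w]  # analogous pixel
    return B_prime
```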