Results 1–10 of 80,168
Book Chapter: Anisotropy Estimation of Trabecular Bone in Grayscale: Comparison Between Cone Beam and Micro Computed Tomography Data
Abstract
http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva117950
Reflectance and texture of real-world surfaces
 ACM TRANS. GRAPHICS
, 1999
Abstract
Cited by 586 (23 self)
In this work, we investigate the visual appearance of real-world surfaces and the dependence of appearance on scale, viewing direction and illumination direction. At fine scale, surface variations cause local intensity variation or image texture. The appearance of this texture depends on both illumination and viewing direction and can be characterized by the BTF (bidirectional texture function). At sufficiently coarse scale, local image texture is not resolvable and local image intensity is uniform. The dependence of this image intensity on illumination and viewing direction is described by the BRDF (bidirectional reflectance distribution function). We simultaneously measure the BTF and BRDF of over 60 different rough surfaces, each observed with over 200 different combinations of viewing and illumination direction. The resulting BTF database comprises over 12,000 image textures. To enable convenient use of the BRDF measurements, we fit the measurements to two recent models and obtain a BRDF parameter database. These parameters can be used directly in image analysis and synthesis of a wide variety of surfaces. The BTF, BRDF, and BRDF parameter databases have important implications for computer vision and computer graphics, and each is made publicly available.
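The model-fitting step the abstract describes can be illustrated with a least-squares sketch. The fit below uses a hypothetical Lambertian-plus-Phong-lobe model and synthetic samples, not the models or data from the paper; only the fitting procedure is the point.

```python
import numpy as np

# Hedged sketch: recover the diffuse (kd) and specular (ks) weights of
# a hypothetical Lambertian + Phong-lobe BRDF from synthetic
# reflectance samples by linear least squares. The Phong exponent is
# assumed known, which keeps the model linear in (kd, ks).
rng = np.random.default_rng(0)
n = 200
cos_i = rng.uniform(0.1, 1.0, n)   # cosine of the incident angle
cos_m = rng.uniform(0.0, 1.0, n)   # cosine of the angle to the mirror direction
shininess = 20.0                   # fixed Phong exponent (assumption)
true_kd, true_ks = 0.6, 0.3
samples = true_kd * cos_i + true_ks * cos_m**shininess  # "measurements"

# Overdetermined linear system: one row per viewing/illumination pair.
A = np.column_stack([cos_i, cos_m**shininess])
(kd, ks), *_ = np.linalg.lstsq(A, samples, rcond=None)
```

With noise-free samples the least-squares solution recovers the weights exactly; real measurements would add a noise term and possibly robust weighting.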
Blobworld: Image segmentation using Expectation-Maximization and its application to image querying
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1999
Abstract
Cited by 431 (10 self)
Retrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation which provides a transformation from the raw pixel data to a small set of image regions which are coherent in color and texture. This "Blobworld" representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions whi...
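The EM clustering at the core of this pipeline can be sketched in a few lines: fit a Gaussian mixture to per-pixel feature vectors, then label each pixel with its most likely component. The features below are synthetic 2-D stand-ins; the actual system clusters in a joint color-texture-position space and selects the number of components by model selection.

```python
import numpy as np

# Minimal EM for a Gaussian mixture with isotropic components.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (150, 2)),    # "region" A features
               rng.normal(3.0, 0.3, (150, 2))])   # "region" B features
K, d = 2, X.shape[1]
mu = np.array([X.min(0), X.max(0)], dtype=float)  # crude initialization
var = np.ones(K)                                  # per-component variance
pi = np.full(K, 1.0 / K)                          # mixing weights

for _ in range(30):
    # E-step: responsibilities under isotropic Gaussians (log-domain for stability)
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    logp = -0.5 * d2 / var - 0.5 * d * np.log(2 * np.pi * var) + np.log(pi)
    logp -= logp.max(1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    nk = r.sum(0)
    pi = nk / len(X)
    mu = (r.T @ X) / nk[:, None]
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    var = (r * d2).sum(0) / (d * nk)

labels = r.argmax(1)   # hard per-pixel region assignment
```

On well-separated clusters like these, the hard labels split the data exactly into the two generating "regions".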
Three-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Temperature Analysis. On arXiv.org: astro-ph/0603451
, 2006
Abstract
Cited by 362 (7 self)
A simple cosmological model with only six parameters (matter density, Ωmh²; baryon density, Ωbh²; Hubble constant, H0; amplitude of fluctuations, σ8; optical ...
PHENIX: a comprehensive Python-based system for macromolecular structure solution
 Acta Crystallogr. D Biol. Crystallogr
, 2010
Abstract
Cited by 410 (5 self)
Macromolecular X-ray crystallography is routinely applied to understand biological processes at a molecular level. However, significant time and effort are still required to solve and complete many of these structures because of the need for manual interpretation of complex numerical data using many software packages and the repeated use of interactive three-dimensional graphics. PHENIX has been developed to provide a comprehensive system for macromolecular crystallographic structure solution with an emphasis on the automation of all procedures. This has relied on the development of algorithms that minimize or eliminate subjective input, the development of algorithms that automate procedures traditionally performed by hand and, finally, the development of a framework that allows a tight integration between the algorithms.
Phaser crystallographic software
 658–674
, 2007
Abstract
Cited by 401 (0 self)
Phaser is a program for phasing macromolecular crystal structures by both molecular replacement and experimental phasing methods. The novel phasing algorithms implemented in Phaser have been developed using maximum likelihood and multivariate statistics. For molecular replacement, the new algorithms have proved to be significantly better than traditional methods in discriminating correct solutions from noise, and for single-wavelength anomalous dispersion experimental phasing, the new algorithms, which account for correlations between F+ and F−, give better phases (lower mean phase error with respect to the phases given by the refined structure) than those that use mean F and anomalous differences ΔF. One of the design concepts of Phaser was that it be capable of a high degree of automation. To this end, Phaser (written in C++) can be called directly from Python, although it can also be called using traditional CCP4 keyword-style input. Phaser is a platform for future development of improved phasing methods and their release, including source code, to the crystallographic community.
The curvelet transform for image denoising
 IEEE TRANS. IMAGE PROCESS
, 2002
Abstract
Cited by 396 (40 self)
We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform [2] and the curvelet transform [6], [5]. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in Fourier space which takes Cartesian samples and yields samples on a recto-polar grid, which is a pseudopolar sampling set based on a concentric squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of à trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete, rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with “state of the art” techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and also including tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement.
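The "simple thresholding of coefficients" recipe the abstract relies on is generic and can be sketched without a curvelet implementation: transform, zero the small coefficients, invert. A plain 2-D FFT stands in for the curvelet transform below, so this illustrates the transform-domain denoising principle, not curvelets themselves; the threshold rule is an assumption for this toy setup.

```python
import numpy as np

# Hard thresholding in a transform domain, on a smooth synthetic image.
rng = np.random.default_rng(2)
n, sigma = 64, 0.5
x = np.arange(n) * 2 * np.pi / n
clean = np.outer(np.sin(2 * x), np.cos(3 * x))       # smooth test image
noisy = clean + rng.normal(0, sigma, (n, n))

coeffs = np.fft.fft2(noisy)
# White noise of std sigma has FFT coefficients of std sigma*n, so a
# 3*sigma*n hard threshold suppresses almost all pure-noise coefficients
# while the few large signal coefficients pass untouched.
coeffs[np.abs(coeffs) < 3 * sigma * n] = 0
denoised = np.fft.ifft2(coeffs).real

rmse_noisy = float(np.sqrt(np.mean((noisy - clean) ** 2)))
rmse_denoised = float(np.sqrt(np.mean((denoised - clean) ** 2)))
```

For a signal that is sparse in the chosen transform, as here, thresholding removes most of the noise energy; the paper's argument is that curvelets make images with edges similarly sparse.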
Curvelets: a surprisingly effective non-adaptive representation of objects with edges
 IN CURVE AND SURFACE FITTING: SAINT-MALO
, 2000
Abstract
Cited by 390 (23 self)
It is widely believed that to efficiently represent an otherwise smooth object with discontinuities along edges, one must use an adaptive representation that in some sense ‘tracks’ the shape of the discontinuity set. This folk-belief — some would say folk-theorem — is incorrect. At the very least, the possible quantitative advantage of such adaptation is vastly smaller than commonly believed. We have recently constructed a tight frame of curvelets which provides stable, efficient, and near-optimal representation of otherwise smooth objects having discontinuities along smooth curves. By applying naive thresholding to the curvelet transform of such an object, one can form m-term approximations with rate of L² approximation rivaling the rate obtainable by complex adaptive schemes which attempt to ‘track’ the discontinuity set. In this article we explain the basic issues of efficient m-term approximation, the construction of efficient adaptive representation, the construction of the curvelet frame, and a crude analysis of the performance of curvelet schemes.
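The m-term approximations discussed above have a simple operational definition: expand in an orthonormal basis, keep the m largest-magnitude coefficients, zero the rest. The sketch below applies this to a 1-D "edge" in the orthonormal DFT purely to illustrate the definition; the paper's claim concerns the decay rate curvelets achieve for 2-D edges, which no 1-D Fourier basis attains.

```python
import numpy as np

# m-term approximation of a step signal in the orthonormal DFT basis.
n = 256
signal = np.where(np.arange(n) < n // 2, 1.0, -1.0)   # step discontinuity
coeffs = np.fft.fft(signal) / np.sqrt(n)              # orthonormal scaling

def mterm_error(m):
    """L2 error of the best m-term approximation in this basis."""
    keep = np.argsort(np.abs(coeffs))[::-1][:m]
    approx = np.zeros_like(coeffs)
    approx[keep] = coeffs[keep]
    # By Parseval, the L2 error equals the energy of the dropped coefficients.
    return float(np.sqrt(np.sum(np.abs(coeffs - approx) ** 2)))

errors = {m: mterm_error(m) for m in (4, 16, 64)}
```

The error falls as m grows; how fast it falls, as a function of m, is exactly the quantity the m-term approximation theory compares across bases.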
Ad Hoc Positioning System (APS)
 IN GLOBECOM
, 2001
Abstract
Cited by 362 (8 self)
Many ad hoc network protocols and applications assume knowledge of the geographic location of nodes. Most sensor networks assume the absolute location of each networked node is known, so that sensed information can be presented on a geographical map. Finding location without the aid of GPS in each node of an ad hoc network is important in cases where GPS is either not accessible, or not practical to use due to power, form factor or line-of-sight conditions. Location would ...
Independent Component Filters Of Natural Images Compared With Simple Cells In Primary Visual Cortex
, 1998
Abstract
Cited by 361 (0 self)
In this article we investigate to what extent the statistical properties of natural images can be used to understand the variation of receptive field properties of simple cells in the mammalian primary visual cortex. The receptive fields of simple cells have been studied extensively (e.g., Hubel & Wiesel 1968, DeValois et al. 1982a, DeAngelis et al. 1993): they are localised in space and time, have bandpass characteristics in the spatial and temporal frequency domains, are oriented, and are often sensitive to the direction of motion of a stimulus. Here we will concentrate on the spatial properties of simple cells. Several hypotheses as to the function of these cells have been proposed. As the cells preferentially respond to oriented edges or lines, they can be viewed as edge or line detectors. Their joint localisation in both the spatial domain and the spatial frequency domain has led to the suggestion that they mimic Gabor filters, minimising uncertainty in both domains (Daugman 1980, Marcelja 1980). More recently, the match between the operations performed by simple cells and the wavelet transform has attracted attention (e.g., Field 1993). The approaches based on Gabor filters and wavelets basically consider processing by the visual cortex as a general image processing strategy, relatively independent of detailed assumptions about image statistics. On the other hand, the edge and line detector hypothesis is based on the intuitive notion that edges and lines are both abundant and important in images. This theme of relating simple cell properties to the statistics of natural images was explored extensively by Field (1987, 1994). He proposed that the cells are optimized specifically for coding natural images. He argued that one possibility for such a code, sparse coding...
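The Gabor-filter receptive-field model the abstract invokes is a sinusoidal carrier under a Gaussian envelope, localized in both space and spatial frequency. The parameter values below are illustrative, not fitted to any physiological data.

```python
import numpy as np

def gabor(size=21, wavelength=6.0, theta=0.0, sigma=3.0, phase=0.0):
    """A 2-D Gabor filter: Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # carrier axis
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

g = gabor()
# Orientation selectivity: a grating at the preferred orientation drives
# the filter strongly; the orthogonal grating barely does.
y, x = np.mgrid[-10:11, -10:11]
resp_pref = float(np.sum(g * np.cos(2 * np.pi * x / 6.0)))
resp_orth = float(np.sum(g * np.cos(2 * np.pi * y / 6.0)))
```

Varying theta and wavelength yields the oriented, bandpass filter bank that the Gabor hypothesis compares against measured simple-cell receptive fields.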