Results 1–10 of 179,372
Crack Width Prediction for Interior Portion of Inverted "T" Bent Caps
"... 1.1 Research Objectives Inverted "T" bent caps are used extensively in highway bridges to support elevated roadways on beams. Such bent caps have appeal because they are esthetically pleasing as well as economically sound. The cross-section of an inverted "T" bent cap ..."
Abstract
A simple method for displaying the hydropathic character of a protein
 Journal of Molecular Biology
, 1982
"... A computer program that progressively evaluates the hydrophilicity and hydrophobicity of a protein along its amino acid sequence has been devised. For this purpose, a hydropathy scale has been composed wherein the hydrophilic and hydrophobic properties of each of the 20 amino acid side chains is tak ..."
Abstract

Cited by 2249 (2 self)
correspondence between the interior portions of their sequence and the regions appearing on the hydrophobic side of the midpoint line, as well as the exterior portions and the regions on the hydrophilic side. The correlation was demonstrated by comparisons between the plotted values and known structures
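The procedure this abstract describes is, at its core, a sliding-window average of a per-residue hydropathy scale along the sequence. A minimal sketch in Python; the window size and the scale values below are assumptions for illustration (the paper tabulates its own scale):

```python
# Approximate Kyte-Doolittle hydropathy values per residue
# (hydrophobic > 0, hydrophilic < 0); illustrative, not authoritative.
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
      "R": -4.5}

def hydropathy_plot(seq, window=9):
    """Mean hydropathy over a sliding window, one value per window position."""
    values = [KD[aa] for aa in seq]
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```

Plotting the returned values against sequence position gives the profile described above: runs above the midpoint line suggest interior (hydrophobic) stretches, runs below suggest exterior ones.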
The Digital Michelangelo Project: 3D Scanning of Large Statues
, 2000
"... We describe a hardware and software system for digitizing the shape and color of large fragile objects under non-laboratory conditions. Our system employs laser triangulation rangefinders, laser time-of-flight rangefinders, digital still cameras, and a suite of software for acquiring, aligning, merg ..."
Abstract

Cited by 488 (8 self)
, merging, and viewing scanned data. As a demonstration of this system, we digitized 10 statues by Michelangelo, including the well-known figure of David, two building interiors, and all 1,163 extant fragments of the Forma Urbis Romae, a giant marble map of ancient Rome. Our largest single dataset
Why a diagram is (sometimes) worth ten thousand words
 Cognitive Science
, 1987
"... We distinguish diagrammatic from sentential paper-and-pencil representations of information by developing alternative models of information-processing systems that are informationally equivalent and that can be characterized as sentential or diagrammatic. Sentential representations are sequential, li ..."
Abstract

Cited by 777 (2 self)
We distinguish diagrammatic from sentential paper-and-pencil representations of information by developing alternative models of information-processing systems that are informationally equivalent and that can be characterized as sentential or diagrammatic. Sentential representations are sequential, like the propositions in a text. Diagrammatic representations are indexed by location in a plane. Diagrammatic representations also typically display information that is only implicit in sentential representations and that therefore has to be computed, sometimes at great cost, to make it explicit for use. We then contrast the computational efficiency of these representations for solving several illustrative problems in mathematics and physics. When two representations are informationally equivalent, their computational efficiency depends on the information-processing operators that act on them. Two sets of operators may differ in their capabilities for recognizing patterns, in the inferences they can carry out directly, and in their control strategies (in particular, the control of search). Diagrammatic and sentential representations sup ...
Attention, Similarity, and the Identification–Categorization Relationship
, 1986
"... A unified quantitative approach to modeling subjects ' identification and categorization of multidimensional perceptual stimuli is proposed and tested. Two subjects identified and categorized the same set of perceptually confusable stimuli varying on separable dimensions. The identification dat ..."
Abstract

Cited by 663 (28 self)
A unified quantitative approach to modeling subjects ' identification and categorization of multidimensional perceptual stimuli is proposed and tested. Two subjects identified and categorized the same set of perceptually confusable stimuli varying on separable dimensions. The identification data were modeled using Shepard's (1957) multidimensional scaling-choice framework. This framework was then extended to model the subjects ' categorization performance. The categorization model, which generalizes the context theory of classification developed by Medin and Schaffer (1978), assumes that subjects store category exemplars in memory. Classification decisions are based on the similarity of stimuli to the stored exemplars. It is assumed that the same multidimensional perceptual representation underlies performance in both the identification and categorization paradigms. However, because of the influence of selective attention, similarity relationships change systematically across the two paradigms. Some support was gained for the hypothesis that subjects distribute attention among component dimensions so as to optimize categorization performance. Evidence was also obtained that subjects may have augmented their category representations with inferred exemplars. Implications of the results for theories of multidimensional scaling and categorization are discussed.
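The exemplar-similarity computation at the heart of the context model can be sketched roughly as follows. The exponentially decaying similarity function, the attention-weighted city-block distance, and the sensitivity parameter `c` are common modeling choices in this literature, not necessarily the paper's exact formulation:

```python
import math

def similarity(x, exemplar, weights, c=1.0):
    # Attention-weighted city-block distance between stimulus and exemplar,
    # mapped to similarity by exponential decay.
    d = sum(w * abs(a - b) for w, a, b in zip(weights, x, exemplar))
    return math.exp(-c * d)

def category_prob(x, categories, weights, c=1.0):
    # Summed similarity of stimulus x to each category's stored exemplars;
    # response probabilities follow from the relative sums (choice rule).
    sums = {name: sum(similarity(x, e, weights, c) for e in exemplars)
            for name, exemplars in categories.items()}
    total = sum(sums.values())
    return {name: s / total for name, s in sums.items()}
```

Shifting the attention weights toward the diagnostic dimension is what lets the same stored exemplars account for both identification and categorization performance.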
The Coordination of Arm Movements: An Experimentally Confirmed Mathematical Model
 Journal of neuroscience
, 1985
"... This paper presents studies of the coordination of voluntary human arm movements. A mathematical model is formulated which is shown to predict both the qualitative features and the quantitative details observed experimentally in planar, multijoint arm movements. Coordination is modeled mathematic ..."
Abstract

Cited by 663 (18 self)
This paper presents studies of the coordination of voluntary human arm movements. A mathematical model is formulated which is shown to predict both the qualitative features and the quantitative details observed experimentally in planar, multijoint arm movements. Coordination is modeled mathematically by defining an objective function, a measure of performance for any possible movement. The unique trajectory which yields the best performance is determined using dynamic optimization theory. In the work presented here, the objective function is the square of the magnitude of jerk (rate of change of acceleration) of the hand integrated over the entire movement. This is equivalent to assuming that a major goal of motor coordination is the production of the smoothest possible movement
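Under the stated objective, with zero velocity and acceleration at both endpoints, the trajectory minimizing integrated squared jerk has a well-known closed-form fifth-order polynomial; a one-dimensional sketch (variable names are illustrative):

```python
def minimum_jerk(x0, xf, T, t):
    """Minimum-jerk position at time t for a movement from x0 to xf in time T.

    Closed-form solution of the variational problem assuming zero velocity
    and acceleration at both endpoints (a sketch of the model's prediction).
    """
    tau = t / T  # normalized time in [0, 1]
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
```

Because the profile depends only on the normalized time tau, the model predicts the bell-shaped, duration-invariant velocity profiles reported for planar reaching movements.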
Maximum Likelihood Phylogenetic Estimation from DNA Sequences with Variable Rates over Sites: Approximate Methods
 J. Mol. Evol
, 1994
"... Two approximate methods are proposed for maximum likelihood phylogenetic estimation, which allow variable rates of substitution across nucleotide sites. Three data sets with quite different characteristics were analyzed to examine empirically the performance of these methods. The first, called ..."
Abstract

Cited by 540 (28 self)
Two approximate methods are proposed for maximum likelihood phylogenetic estimation, which allow variable rates of substitution across nucleotide sites. Three data sets with quite different characteristics were analyzed to examine empirically the performance of these methods. The first, called the "discrete gamma model," uses several categories of rates to approximate the gamma distribution, with equal probability for each category. The mean of each category is used to represent all the rates falling in the category. The performance of this method is found to be quite good, and four such categories appear to be sufficient to produce both an optimum, or near-optimum fit by the model to the data, and also an acceptable approximation to the continuous distribution. The second method, called "fixed-rates model," classifies sites into several classes according to their rates predicted assuming the star tree. Sites in different classes are then assumed to be evolving at these fixed rates when other tree topologies are evaluated.
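The discrete-gamma construction can be illustrated numerically: split a gamma distribution with mean 1 into k equal-probability categories and represent each category by its mean. The paper derives the category means via the incomplete gamma function; the brute-force quadrature below is a stand-in sketch, not the paper's method:

```python
import math

def gamma_pdf(x, alpha):
    # Density of Gamma(shape=alpha, rate=alpha), so the mean rate is 1.
    return (alpha ** alpha) * x ** (alpha - 1) * math.exp(-alpha * x) / math.gamma(alpha)

def discrete_gamma_rates(alpha, k=4, grid=100_000, xmax=30.0):
    # Numerically split the distribution into k equal-probability
    # categories and return the mean rate within each category.
    dx = xmax / grid
    target = 1.0 / k
    rates, mass, moment = [], 0.0, 0.0
    for i in range(grid):
        x = (i + 0.5) * dx
        p = gamma_pdf(x, alpha) * dx
        mass += p
        moment += x * p
        if mass >= target and len(rates) < k - 1:
            rates.append(moment / mass)
            mass = moment = 0.0
    rates.append(moment / mass)  # open-ended last category
    return rates
```

Each site's likelihood is then averaged over the k rates with equal weight 1/k, which is how four categories can approximate the continuous distribution at modest cost.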
Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams
 ACM Trans. Graph.
, 1985
"... The following problem is discussed: Given n points in the plane (the sites) and an arbitrary query point q, find the site that is closest to q. This problem can be solved by constructing the Voronoi diagram of the given sites and then locating the query point in one of its regions. Two algorithms ar ..."
Abstract

Cited by 543 (11 self)
The following problem is discussed: Given n points in the plane (the sites) and an arbitrary query point q, find the site that is closest to q. This problem can be solved by constructing the Voronoi diagram of the given sites and then locating the query point in one of its regions. Two algorithms are given, one that constructs the Voronoi diagram in O(n log n) time, and another that inserts a new site in O(n) time. Both are based on the use of the Voronoi dual, or Delaunay triangulation, and are simple enough to be of practical value. The simplicity of both algorithms can be attributed to the separation of the geometrical and topological aspects of the problem and to the use of two simple but powerful primitives, a geometric predicate and an operator for manipulating the topology of the diagram. The topology is represented by a new data structure for generalized diagrams, that is, embeddings of graphs in two-dimensional manifolds. This structure represents simultaneously an embedding, its dual, and its mirror image. Furthermore, just two operators are sufficient for building and modifying arbitrary diagrams.
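The geometric predicate in Delaunay algorithms is conventionally the in-circle test: does a fourth point lie inside the circle through three others? The textbook determinant form can be sketched as follows (this is the standard formulation, not necessarily the paper's exact one, and it ignores the floating-point robustness issues a production implementation must handle):

```python
def in_circle(a, b, c, d):
    """True if point d lies strictly inside the circle through a, b, c.

    Assumes a, b, c are given in counter-clockwise order; points are
    (x, y) tuples. Sign of a 3x3 determinant after translating d to
    the origin (the 'lifted' paraboloid test).
    """
    rows = [(p[0] - d[0], p[1] - d[1],
             (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2) for p in (a, b, c)]
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    det = (ax * (by * cw - bw * cy)
           - ay * (bx * cw - bw * cx)
           + aw * (bx * cy - by * cx))
    return det > 0
```

An edge of a triangulation is locally Delaunay exactly when this test fails for the opposite vertex of the neighboring triangle, which is what drives the edge-flipping in incremental insertion.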
A Volumetric Method for Building Complex Models from Range Images
, 1996
"... A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robus ..."
Abstract

Cited by 1018 (18 self)
A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the f...
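Per voxel, a cumulative weighted signed-distance function reduces to a running weighted average that each new scan updates incrementally; a minimal sketch (variable names are illustrative):

```python
def fuse(D, W, d_new, w_new):
    """Update one voxel's cumulative signed distance.

    D, W   -- running weighted-average distance and accumulated weight
    d_new  -- signed distance from the new range image at this voxel
    w_new  -- that measurement's weight (e.g. confidence)
    Returns the updated (distance, weight) pair.
    """
    W2 = W + w_new
    D2 = (W * D + w_new * d_new) / W2
    return D2, W2
```

Because the update is a simple additive scheme, scans can be merged one at a time in any order, and the reconstructed surface is extracted afterwards as the zero crossing of the fused distance field.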
Representing twentieth-century space–time climate variability, part 1: development of a 1961–90 mean monthly terrestrial climatology
 Journal of Climate
, 1999
"... The construction of a 0.5° lat × 0.5° long surface climatology of global land areas, excluding Antarctica, is described. The climatology represents the period 1961–90 and comprises a suite of nine variables: precipitation, wet-day frequency, mean temperature, diurnal temperature range, vapor pressur ..."
Abstract

Cited by 551 (12 self)
The construction of a 0.5° lat × 0.5° long surface climatology of global land areas, excluding Antarctica, is described. The climatology represents the period 1961–90 and comprises a suite of nine variables: precipitation, wet-day frequency, mean temperature, diurnal temperature range, vapor pressure, sunshine, cloud cover, ground frost frequency, and wind speed. The climate surfaces have been constructed from a new dataset of station 1961–90 climatological normals, numbering between 19 800 (precipitation) and 3615 (wind speed). The station data were interpolated as a function of latitude, longitude, and elevation using thin-plate splines. The accuracy of the interpolations is assessed using cross validation and by comparison with other climatologies. This new climatology represents an advance over earlier published global terrestrial climatologies in that it is strictly constrained to the period 1961–90, describes an extended suite of surface climate variables, explicitly incorporates elevation as a predictor variable, and contains an evaluation of regional errors associated with this and other commonly used climatologies. The climatology is already being used by researchers in the areas of ecosystem modelling, climate model evaluation, and climate change impact assessment. The data are available from the Climatic Research Unit and images of all the monthly fields can be accessed via the World Wide Web.
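Thin-plate spline interpolation fits radial basis functions of the form r² log r through the station values. A heavily simplified two-dimensional sketch follows; it omits the affine (polynomial drift) part of the full thin-plate formulation and the elevation predictor the paper uses, so it illustrates the kernel idea only:

```python
import math

def tps_kernel(r):
    # Thin-plate radial basis: r^2 log r, taken as 0 at r = 0 by continuity.
    return r * r * math.log(r) if r > 0 else 0.0

def solve(A, b):
    # Naive Gaussian elimination with partial pivoting (sketch-quality).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tps_fit(points, values):
    # Radial terms only, no affine part: solve K w = values so that the
    # surface passes exactly through every station.
    A = [[tps_kernel(math.dist(p, q)) for q in points] for p in points]
    return solve(A, values)

def tps_eval(weights, points, p):
    # Evaluate the fitted surface at an arbitrary location p.
    return sum(w * tps_kernel(math.dist(p, q)) for w, q in zip(weights, points))
```

In practice the splines are fitted per variable and per month over latitude, longitude, and elevation, with the elevation term supplying the lapse-rate structure the abstract highlights.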