Results 1 - 10 of 16
Automated Building Extraction from High-Resolution Satellite Imagery in Urban Areas Using Structural, Contextual, and Spectral Information
- EURASIP Journal on Applied Signal Processing 2005:14, 2196–2206, 2005
"... High-resolution satellite imagery provides an important new data source for building extraction. We demonstrate an integrated strategy for identifying buildings in 1-meter resolution satellite imagery of urban areas. Buildings are extracted using structural, contextual, and spectral information. Fir ..."
Cited by 38 (0 self)
High-resolution satellite imagery provides an important new data source for building extraction. We demonstrate an integrated strategy for identifying buildings in 1-meter resolution satellite imagery of urban areas. Buildings are extracted using structural, contextual, and spectral information. First, a series of geodesic opening and closing operations is used to build a differential morphological profile (DMP) that provides image structural information. Building hypotheses are generated and verified through shape analysis applied to the DMP. Second, shadows are extracted using the DMP to provide reliable contextual information to hypothesize the position and size of adjacent buildings. Seed building rectangles are verified and grown on a finely segmented image. Next, bright buildings are extracted using spectral information. The extraction results from the different information sources are combined after independent extraction. Performance evaluation of the building extraction on an urban test site using IKONOS satellite imagery of the City of Columbia, Missouri, is reported. With the combination of structural, contextual, and spectral information, 72.7% of the building areas are extracted with a quality percentage of 58.8%.
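As a rough illustration of the structural step described above, the sketch below builds a differential morphological profile from openings and closings by reconstruction using scikit-image. It is not the authors' code; the structuring-element radii and the input file name are placeholder assumptions.

```python
# A minimal DMP sketch (not the paper's implementation), assuming a
# single-band image and illustrative structuring-element radii.
import numpy as np
from skimage import io, img_as_float
from skimage.color import rgb2gray
from skimage.morphology import disk, erosion, dilation, reconstruction

def dmp(image, radii=(2, 4, 8, 16)):
    """Return differential profiles of openings and closings by reconstruction."""
    img = img_as_float(image)
    open_profile, close_profile = [img], [img]
    for r in radii:
        se = disk(r)
        # Opening by reconstruction: erode, then reconstruct by dilation.
        opened = reconstruction(erosion(img, se), img, method='dilation')
        # Closing by reconstruction: dilate, then reconstruct by erosion.
        closed = reconstruction(dilation(img, se), img, method='erosion')
        open_profile.append(opened)
        close_profile.append(closed)
    # Differential profiles: how much is removed between successive scales.
    dmp_open = [open_profile[i] - open_profile[i + 1] for i in range(len(radii))]
    dmp_close = [close_profile[i + 1] - close_profile[i] for i in range(len(radii))]
    return np.stack(dmp_open), np.stack(dmp_close)

# Example usage (file name is hypothetical): bright, compact structures such
# as building candidates respond strongly in the opening DMP at scales
# matching their size.
# gray = rgb2gray(io.imread('ikonos_tile.tif'))
# dmp_o, dmp_c = dmp(gray)
```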
Improved rooftop detection in aerial images with machine learning
- Machine Learning, 2003
"... Abstract. In this paper, we examine the use of machine learning to improve a rooftop detection process, one step in a vision system that recognizes buildings in overhead imagery. We review the problem of analyzing aerial images and describe an existing system that detects buildings in such images. W ..."
Cited by 23 (2 self)
Abstract. In this paper, we examine the use of machine learning to improve a rooftop detection process, one step in a vision system that recognizes buildings in overhead imagery. We review the problem of analyzing aerial images and describe an existing system that detects buildings in such images. We briefly detail four algorithms that we selected to improve rooftop detection. The data sets were highly skewed and the cost of mistakes differed between the classes, so we used ROC analysis to evaluate the methods under varying error costs. We report three experiments designed to illuminate facets of applying machine learning to the image analysis task. One investigated learning with all available images to determine the best performing method. Another focused on within-image learning, in which we derived training and testing data from the same image. A final experiment addressed between-image learning, in which training and testing sets came from different images. Results suggest that useful generalization occurred when training and testing on data derived from images differing in location and in aspect. They demonstrate that under most conditions, naive Bayes exceeded the accuracy of other methods and a handcrafted classifier, the solution currently used in the building detection system.
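The evaluation strategy in this entry, ROC analysis of learned classifiers on skewed rooftop-candidate data, can be sketched with scikit-learn as below. The synthetic features, class skew, and classifier choices are placeholders for illustration, not the paper's data or exact methods.

```python
# Hedged sketch: compare classifiers on skewed candidate data with ROC
# analysis, since plain accuracy misleads when error costs differ.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# Placeholder features: each candidate rooftop described by a small score vector.
X_pos = rng.normal(1.0, 1.0, size=(100, 4))     # true rooftops (rare class)
X_neg = rng.normal(-1.0, 1.0, size=(1900, 4))   # false candidates (majority)
X = np.vstack([X_pos, X_neg])
y = np.r_[np.ones(100), np.zeros(1900)]

# "Between-image" learning would draw train and test sets from different
# images; here we simply split the synthetic pool for illustration.
idx = rng.permutation(len(y))
train, test = idx[:1000], idx[1000:]

for clf in (GaussianNB(), KNeighborsClassifier(5)):
    clf.fit(X[train], y[train])
    scores = clf.predict_proba(X[test])[:, 1]
    fpr, tpr, _ = roc_curve(y[test], scores)
    print(type(clf).__name__, "AUC =", round(auc(fpr, tpr), 3))
```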
Expandable Bayesian networks for 3D object description from multiple views and multiple mode inputs
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
"... Abstract—Computing 3D object descriptions from images is an important goal of computer vision. A key problem here is the evaluation of a hypothesis based on evidence that is uncertain. There have been few efforts on applying formal reasoning methods to this problem. In multiview and multimode object ..."
Cited by 11 (1 self)
Abstract—Computing 3D object descriptions from images is an important goal of computer vision. A key problem here is the evaluation of a hypothesis based on evidence that is uncertain. There have been few efforts to apply formal reasoning methods to this problem. In multiview and multimode object description problems, reasoning is required on evidence features extracted from multiple images and nonintensity data. One challenge here is that the number of evidence features varies at runtime, because the number of images being used is not fixed and some modalities may not always be available. We introduce an augmented Bayesian network, the expandable Bayesian network (EBN), which instantiates its structure at runtime according to the structure of input. We introduce the use of hidden variables to handle correlation of evidence features across images. We show an application of an EBN to a multiview building description system. Experimental results show that the proposed method gives significant and consistent performance improvement over other methods. Index Terms—Multiview object description, learning, uncertain reasoning, building description, Bayesian network.
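The core idea, evidence nodes instantiated per available view at runtime while sharing one learned conditional table, can be illustrated with the minimal sketch below. It assumes conditional independence of views given the hypothesis (the paper's hidden variables exist precisely to relax this), and all probabilities shown are invented.

```python
# Simplified analogue of the expandable-network idea, not the authors' EBN:
# one evidence node per available view is added at runtime, all sharing the
# same conditional table, and the posterior is computed from however many
# views happen to be present.
import numpy as np

P_H = np.array([0.3, 0.7])                  # prior over hypothesis: [true structure, false]
# Shared table P(evidence_state | hypothesis); rows: hypothesis, cols: discretized evidence.
P_E_given_H = np.array([[0.6, 0.3, 0.1],    # evidence distribution if hypothesis is true
                        [0.1, 0.3, 0.6]])   # evidence distribution if hypothesis is false

def posterior(evidence_states):
    """evidence_states: one discretized feature per view (any number of views)."""
    log_post = np.log(P_H).copy()
    for e in evidence_states:               # the "structure" expands with the input
        log_post += np.log(P_E_given_H[:, e])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

print(posterior([0, 0]))        # two views available
print(posterior([0, 0, 1, 0]))  # four views: same model, expanded structure
```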
Perceptual organization with image formation compatibilities
- Pattern Recognition Letters, 2002
"... The work presents a methodology contributing to boundary extraction in images of approximate polyhedral objects. We make extensive use of basic principles underlying the process of image formation and thus reduce the role of object-specific knowledge. Simple configurations of line segments are extra ..."
Cited by 7 (2 self)
The work presents a methodology contributing to boundary extraction in images of approximately polyhedral objects. We make extensive use of basic principles underlying the process of image formation and thus reduce the role of object-specific knowledge. Simple configurations of line segments are extracted subject to geometric-photometric compatibilities. The perceptual organization into polygonal arrangements is based on geometric regularity compatibilities under projective transformation. The combination of several types of compatibilities yields a saliency function for extracting a list of the most salient structures. Based on systematic measurements during an experimentation phase, the adequacy and degrees of compatibilities are determined. The methodology is demonstrated for objects of various shapes located in cluttered scenes. Key words: Perceptual organization, boundary extraction, geometric-photometric compatibility, geometric regularity compatibility, Hough transformation.
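A toy version of the kind of saliency function described above, combining soft geometric and photometric compatibility terms into a single score for a candidate grouping of line segments, might look like the following. The cue definitions, scales, and weights are placeholders, not the paper's formulation.

```python
# Hypothetical saliency sketch: product of soft compatibility terms for a
# candidate grouping of two line segments.
import numpy as np

def compat(value, scale):
    """Soft compatibility in (0, 1]: equals 1 when the deviation 'value' is 0."""
    return float(np.exp(-(value / scale) ** 2))

def saliency(gap, angle_dev_deg, contrast_diff, weights=(1.0, 1.0, 0.5)):
    """Combine endpoint gap (px), angular deviation from collinearity/parallelism
    (degrees), and photometric contrast difference into one score."""
    cues = (compat(gap, 10.0),
            compat(angle_dev_deg, 5.0),
            compat(contrast_diff, 20.0))
    w = np.asarray(weights)
    return float(np.prod(np.asarray(cues) ** w))

# A nearly collinear, photometrically consistent pair scores high;
# a distant, misaligned, inconsistent pair scores low.
print(saliency(gap=3, angle_dev_deg=1.0, contrast_diff=4))
print(saliency(gap=25, angle_dev_deg=12.0, contrast_diff=40))
```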
Learning Bayesian Networks for Diverse and Varying Numbers of Evidence Sets
- Proc. Int’l Conf. on Machine Learning, 2000
"... We introduce an expandable Bayesian network (EBN) to handle the combination of diverse multiple homogeneous evidence sets. An EBN is an augmented Bayesian network which instantiates its structure at runtime according to the structure of input. We show an application of an EBN for a multi-view ..."
Cited by 5 (4 self)
We introduce an expandable Bayesian network (EBN) to handle the combination of diverse multiple homogeneous evidence sets. An EBN is an augmented Bayesian network which instantiates its structure at runtime according to the structure of input. We show an application of an EBN for a multi-view 3-D object description problem in computer vision. The experiments show that the proposed method gives reasonable performance even for an unlearned structure of input data.
1. Introduction
It is common in machine learning that training data and test data have the same structure. An exception is found in Bayesian networks, which allow missing data. But, in some applications, the structure of input data is not determined when the system is developed. Such a case can be found in computer vision applications dealing with multiple images; the number of images to use is not determined when the classifiers are trained. In this paper, we present an expandable Bayesian network which modi...
Multi-View 3-D Object Description with Uncertain Reasoning and Machine Learning
2001
"... xi Chapter 1. ..."
Robust Detection of Buildings from a Single Color Aerial Image
- Dept. of Geodetic and Geographic Information Technologies, Middle East Technical University, Turkey. Commission VI, WG VI/4
"... In this study, a robust methodology for the detection of buildings from a single color aerial image is proposed. The methodology is initialized with the mean-shift segmentation algorithm. Next, a vector-valued canny edge detection algorithm which relies on the photometric quasi-invariant gradients i ..."
Cited by 2 (0 self)
In this study, a robust methodology for the detection of buildings from a single color aerial image is proposed. The methodology is initialized with the mean-shift segmentation algorithm. Next, a vector-valued Canny edge detection algorithm that relies on photometric quasi-invariant gradients is used to detect edges in the color-segmented image. Morphological operations are applied to the edge image, and two raster datasets are generated from the morphologically reconstructed edge image: (i) the edge pixels that form closed-boundary shapes, and (ii) the edge pixels that do not form closed-boundary shapes. The first dataset, the edge pixels that form closed-boundary shapes, is vectorized using boundary tracing followed by the Douglas-Peucker simplification algorithm. A minimum-bounding convex-hull algorithm followed by a gradient vector flow (GVF) snake is used to generate polygons from the second dataset. The polygon results of both datasets are joined together in a unification step. In a final verification stage, two vegetation indices are used to mask out the polygons that belong to healthy vegetated areas. One color (RGB) aerial ortho-image with a resolution of 30 cm is used to test the performance of the proposed methodology. Based on the computed results, the edge detector suppressed shadow and highlight edges with an accuracy of around 99%. Of the 251 apartment buildings in the site, 231 were almost completely detected. The algorithm provided 73% accuracy for the buildings in a neighboring-house condition (810 out of 1104 detected).
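The front end of such a pipeline can be approximated with OpenCV as sketched below. Note the substitutions: standard Canny stands in for the quasi-invariant color edge detector, and the GVF snake and vegetation-index verification steps are omitted; the file name and parameter values are placeholders.

```python
# Rough front-end sketch (not the authors' implementation): mean-shift
# filtering, edge detection, and Douglas-Peucker simplification of
# closed boundaries.
import cv2
import numpy as np

img = cv2.imread('ortho_rgb_30cm.tif')                  # placeholder 30 cm RGB ortho-image
shifted = cv2.pyrMeanShiftFiltering(img, 15, 30)        # mean-shift filtering (segmentation step)
gray = cv2.cvtColor(shifted, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                        # stand-in for the quasi-invariant edge detector

# Close small gaps, then trace external closed boundaries (OpenCV 4 return convention).
edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

polygons = []
for c in contours:
    if cv2.contourArea(c) < 100:                        # drop tiny fragments
        continue
    eps = 0.01 * cv2.arcLength(c, True)
    polygons.append(cv2.approxPolyDP(c, eps, True))     # Douglas-Peucker simplification

print(len(polygons), "candidate building polygons")
```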
Reconstructing 3D building wireframes from multiple images
- In: Proceedings of the ISPRS Commission III Symposium on Photogrammetric Computer Vision, 2002
"... Building extraction in urban areas is one of the difficult problems in image understanding and photogrammetry. Building delineations are needed in cartographic analysis, urban area planning, and visualization. Although one pair of images is adequate to find the 3D position of two visibly correspondi ..."
Cited by 1 (0 self)
Building extraction in urban areas is one of the difficult problems in image understanding and photogrammetry. Building delineations are needed in cartographic analysis, urban area planning, and visualization. Although one pair of images is adequate to find the 3D position of two visibly corresponding image features, it is not sufficient to extract the entire building due to hidden features that are not projected into the image pair. This paper presents a new technique to detect and delineate buildings with complex rooftops by extracting roof polygons and matching them across multiple images. The algorithm discussed in this paper starts by segmenting the images into regions. Regions are then classified into roof regions and non-roof regions using a two-layered neural network. A rule-based system is then used to convert the roof boundaries to polygons. Polygon correspondence is established geometrically: all possible candidate correspondence sets are considered and the optimal set is selected. Polygon vertices are then refined using the known geometric properties of urban buildings to generate the building wireframes. The algorithm is tested on a number of buildings and the results are evaluated. The RMS error for the extracted building vertices is 0.25 m using 1:4000 scale aerial photographs. The results show the completeness and accuracy that this method can provide for extracting complex urban buildings.
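For the final geometric step, once corresponding roof-polygon vertices have been matched across views, their 3D coordinates follow from standard multi-view triangulation. A hypothetical fragment using OpenCV is shown below; the projection matrices and pixel coordinates are made up.

```python
# Illustrative fragment (not the paper's method): triangulate matched
# roof-corner pixels from two views with known projection matrices.
import numpy as np
import cv2

# 3x4 projection matrices for the two views (assumed known from image orientation).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Matched roof-corner pixels, shape (2, N): one column per vertex.
pts1 = np.array([[100.0, 150.0, 150.0], [100.0, 100.0, 160.0]])
pts2 = np.array([[ 90.0, 140.0, 141.0], [100.0, 100.0, 160.0]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4xN result
X = (X_h[:3] / X_h[3]).T                          # Euclidean roof-corner coordinates
print(np.round(X, 2))
```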
Surveying © Copyright by
"... The automatic recognition and reconstruction of buildings from sensory input data is an important research topic with widespread applications in city modeling, urban planning, environmental studies, and telecommunication. This study presents integration methods to increase the level of automation in ..."
The automatic recognition and reconstruction of buildings from sensory input data is an important research topic with widespread applications in city modeling, urban planning, environmental studies, and telecommunication. This study presents integration methods to increase the level of automation in building recognition and reconstruction. Aerial imagery has been used as a major source in mapping fields and, in recent years, LIDAR data became popular as another type of mapping resource. Regarding their performance, aerial imagery has the ability to delineate object boundaries but omits much of these boundaries during feature extraction. LIDAR data provide direct information about the heights of object surfaces but have limitations with respect to boundary localization. Efficient methods to generate building boundary hypotheses and localize object features are described. Such methods use the complementary characteristics of the two sensors. Graph data structures are used for interpreting surface discontinuities. Buildings are recognized by analyzing contour graphs and modeled with surface patches from LIDAR data. Building model hypotheses are generated as combinations of wing models and are verified by assessing the consistency between corresponding data sets. Experiments using aerial imagery and LIDAR data are presented. Three findings are noted: First, building boundaries are successfully recognized using the proposed contour analysis method. Second, the wing model and hypothesized contours increase the level of automation in building hypothesis generation/verification. Third, the integration of aerial images and LIDAR data enhances the accuracy of reconstructed buildings in the horizontal and vertical directions.
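The sensor complementarity this study exploits can be caricatured in a few lines: LIDAR heights give reliable building regions, image gradients give sharper rims. The sketch below is a hypothetical illustration with placeholder thresholds, not the contour-graph/wing-model method of the thesis.

```python
# Hypothetical fusion sketch: LIDAR-derived heights for detection,
# image gradients for boundary refinement. Thresholds are placeholders.
import numpy as np
from scipy import ndimage as ndi

def building_mask(dsm, dtm, min_height=2.5, min_area=50):
    """Rough building candidates from LIDAR heights (buildings and trees alike)."""
    ndsm = dsm - dtm                          # normalized DSM: above-ground height
    mask = ndsm > min_height                  # elevated objects
    labels, n = ndi.label(mask)
    sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_area))

def refine_with_image(mask, image_gradient_mag, band=3, grad_thresh=30.0):
    """Keep the LIDAR region, but only retain rim pixels backed by strong image gradients."""
    rim = mask ^ ndi.binary_erosion(mask, iterations=band)
    return (mask & ~rim) | (rim & (image_gradient_mag > grad_thresh))

# Usage (arrays would come from co-registered LIDAR and image data):
# refined = refine_with_image(building_mask(dsm, dtm), grad_mag)
```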
Car Detection in Low Resolution Aerial Images
- Image and Vision Computing, 2001
"... We present a system to detect passenger cars in aerial images along the road directions where cars appear as small objects. We pose this as a 3D object recognition problem to account for the variation in viewpoint and the shadow. We started from psychological tests to find important features for hum ..."
We present a system to detect passenger cars in aerial images along the road directions where cars appear as small objects. We pose this as a 3D object recognition problem to account for the variation in viewpoint and the shadow. We started from psychological tests to find important features for human detection of cars. Based on these observations, we selected the boundary of the car body, the boundary of the front windshield, and the shadow as the features. Some of these features are affected by the intensity of the car and whether or not there is a shadow along it. This information is represented in the structure of the Bayesian network that we use to integrate all features. Experiments show very promising results even on some very challenging images.
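A toy stand-in for the described evidence fusion, three binary cues (car-body boundary, windshield boundary, shadow) combined into a posterior with the shadow cue allowed to be absent, is sketched below; all probability values are invented.

```python
# Hypothetical fusion of the three cues named in the abstract; not the
# paper's Bayesian network, and every probability here is made up.
import numpy as np

P_car = np.array([0.1, 0.9])              # prior: [car, not car]
# P(detector fires | car, not car), one row per cue.
P_fire = np.array([[0.85, 0.10],          # car-body boundary found
                   [0.60, 0.05],          # front-windshield boundary found
                   [0.70, 0.20]])         # shadow found (only meaningful when sun-lit)

def p_car(body, windshield, shadow=None):
    """Posterior P(car | observations); shadow=None means the cue is unavailable."""
    obs = [(0, body), (1, windshield)] + ([(2, shadow)] if shadow is not None else [])
    post = P_car.copy()
    for i, fired in obs:
        post *= P_fire[i] if fired else 1.0 - P_fire[i]
    return post[0] / post.sum()

print(round(p_car(True, True, True), 3))    # all cues agree -> high posterior
print(round(p_car(True, False, None), 3))   # shadow cue missing (e.g., overcast image)
```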