Results 1-10 of 255
Reconstructing building interiors from images
In Proc. of the International Conference on Computer Vision (ICCV), 2009
Cited by 109 (13 self)
Abstract:
This paper proposes a fully automated 3D reconstruction and visualization system for architectural scenes (interiors and exteriors). The reconstruction of indoor environments from photographs is particularly challenging due to texture-poor planar surfaces such as uniformly painted walls. Our system first uses structure from motion, multi-view stereo, and a stereo algorithm specifically designed for Manhattan-world scenes (scenes consisting predominantly of piecewise-planar surfaces with dominant directions) to calibrate the cameras and to recover initial 3D geometry in the form of oriented points and depth maps. Next, the initial geometry is fused into a 3D model with a novel depth-map integration algorithm that, again, makes use of Manhattan-world assumptions and produces simplified 3D models. Finally, the system enables the exploration of reconstructed environments with an interactive, image-based 3D viewer. We demonstrate results on several challenging datasets, including a 3D reconstruction and image-based walk-through of an entire floor of a house, the first result of its kind from an automated computer vision system.
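The Manhattan-world assumption used by the paper's stereo stage can be sketched minimally: every surface normal is forced onto the nearest of three dominant, mutually orthogonal directions. This is an illustrative sketch, not the authors' code, and it assumes axis-aligned dominant directions (the real system estimates them from the scene).

```python
def snap_normal(n):
    """Snap a 3D normal to the closest signed coordinate axis
    (one of the six Manhattan-world directions)."""
    # Pick the component with the largest magnitude...
    axis = max(range(3), key=lambda i: abs(n[i]))
    snapped = [0.0, 0.0, 0.0]
    # ...and keep its sign, collapsing the normal onto that axis.
    snapped[axis] = 1.0 if n[axis] >= 0 else -1.0
    return snapped

# A slightly tilted wall normal collapses onto the +X direction.
print(snap_normal([0.9, 0.1, -0.2]))   # -> [1.0, 0.0, 0.0]
```

Snapping like this is what makes texture-poor walls tractable: instead of estimating a free-form depth per pixel, the method only has to choose among a few dominant plane orientations.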
Search-based Procedural Content Generation: A Taxonomy and Survey
2011
Cited by 78 (38 self)
Abstract:
The focus of this survey is on research in applying evolutionary and other metaheuristic search algorithms to automatically generating content for games, both digital and non-digital (such as board games). The term search-based procedural content generation is proposed as the name for this emerging field, which at present is growing quickly. A taxonomy for procedural content generation is devised, centering on what kind of content is generated, how the content is represented and how the quality/fitness of the content is evaluated; search-based procedural content generation in particular is situated within this taxonomy. This article also contains a survey of all published papers known to the authors in which game content is generated through search or optimisation, and ends with an overview of important open research problems.
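The search loop the survey covers can be sketched in a few lines: content is encoded as a genotype, evaluated by a fitness function, and improved by metaheuristic search. This is a hedged toy example, not any specific system from the survey; the "level" encoding (enemy counts per segment) and the fitness target are invented for illustration.

```python
import random

def fitness(level):
    # Toy evaluator: how close the total enemy count is to a
    # target difficulty of 10 (0 is best, negative is worse).
    return -abs(sum(level) - 10)

def evolve(length=5, generations=200, seed=0):
    """(1+1)-style hill climber over integer genotypes."""
    rng = random.Random(seed)
    best = [rng.randint(0, 4) for _ in range(length)]
    for _ in range(generations):
        child = best[:]
        child[rng.randrange(length)] = rng.randint(0, 4)  # mutate one gene
        if fitness(child) >= fitness(best):               # keep non-worse
            best = child
    return best

level = evolve()
print(level, fitness(level))
```

The taxonomy's three axes map directly onto this sketch: the list of integers is the representation, `fitness` is the evaluation function, and the mutate-and-keep loop is the search algorithm.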
Interactive Visual Editing of Grammars for Procedural Architecture
Cited by 59 (11 self)
Abstract:
(Figure 1 caption) Screenshots from our real-time editor for grammar-based procedural architecture. Left: visual editing of grammar rules. Middle left: direct dragging of the red ground-plan vertex and modifying the height with a slider creates the building on the middle right; while dragging, the building is updated instantly. Right: editing is possible at multiple levels; here the high-level shell of a building is modified.
We introduce a real-time interactive visual editing paradigm for shape grammars, allowing the creation of rule bases from scratch without text-file editing. In previous work, shape-grammar-based procedural techniques were successfully applied to the creation of architectural models. However, those methods are text-based and may therefore be difficult to use for artists with little computer science background. Our goal was therefore to enable a visual workflow combining the power of shape grammars with traditional modeling techniques. We extend previous shape grammar approaches by providing direct and persistent local control over the generated instances, avoiding the combinatorial explosion of grammar rules …
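The shape grammars being edited here can be illustrated with a minimal interpreter: rules rewrite a shape symbol into child symbols until only terminal shapes remain, as in split grammars for buildings. The rule set below (a façade split into floors, floors split into walls and windows) is invented for illustration and is not the paper's rule base.

```python
# Invented example rules: nonterminals map to their children.
RULES = {
    "facade": ["floor", "floor", "floor"],
    "floor":  ["wall", "window", "wall"],
}

def derive(symbol):
    """Recursively expand a symbol into its terminal shapes."""
    if symbol not in RULES:
        return [symbol]                      # terminal shape, emit as-is
    out = []
    for child in RULES[symbol]:
        out.extend(derive(child))
    return out

print(derive("facade"))
```

A visual editor like the one described manipulates exactly this kind of rule table and re-runs the derivation instantly, which is why dragging a vertex can update the whole building in real time.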
VideoTrace: Rapid interactive scene modelling from video
2007
Cited by 57 (3 self)
Abstract:
VideoTrace is a system for interactively generating realistic 3D models of objects from video—models that might be inserted into a video game, a simulation environment, or another video sequence. The user interacts with VideoTrace by tracing the shape of the object to be modelled over one or more frames of the video. By interpreting the sketch drawn by the user in light of 3D information obtained from computer vision techniques, a small number of simple 2D interactions can be used to generate a realistic 3D model. Each of the sketching operations in VideoTrace provides an intuitive and powerful means of modelling shape from video, and executes quickly enough to be used interactively. Immediate feedback allows the user to rapidly model those parts of the scene which are …
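The core of this kind of 2D-to-3D lifting can be sketched as a ray-plane intersection: a traced image point defines a viewing ray, which is intersected with a plane fitted to nearby structure-from-motion points. This is an illustrative sketch, not VideoTrace's implementation; the pinhole setup (camera at the origin, plane given as n·x = d) is an assumption made for brevity.

```python
def lift_to_plane(ray_dir, plane_normal, plane_d):
    """Intersect a ray from the origin with the plane n.x = d,
    returning the 3D hit point, or None if the ray is parallel."""
    denom = sum(n * r for n, r in zip(plane_normal, ray_dir))
    if abs(denom) < 1e-9:
        return None                     # ray parallel to the plane
    t = plane_d / denom                 # ray parameter at the intersection
    return [t * r for r in ray_dir]

# A pixel's viewing ray hitting the wall plane z = 2.
print(lift_to_plane([0.5, 0.0, 1.0], [0.0, 0.0, 1.0], 2.0))  # -> [1.0, 0.0, 2.0]
```

Because each traced stroke only has to be lifted onto geometry the reconstruction already provides, a handful of 2D interactions is enough to pin down a 3D surface.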
Image-based façade modeling
In Proc. of SIGGRAPH Asia, 2008
Cited by 52 (11 self)
Abstract:
(Figure 1 caption) A few façade modeling examples from the two sides of a street with 614 captured images: some input images in the bottom row, the recovered model rendered in the middle row, and three zoomed sections of the recovered model rendered in the top row.
We propose in this paper a semi-automatic image-based approach to façade modeling that uses images captured along streets and relies on structure from motion to recover camera positions and point clouds automatically as the initial stage for modeling. We start by considering a building façade as a flat rectangular plane or a developable surface with an associated texture image composited from the multiple visible images. A façade is then decomposed and structured into a directed acyclic graph of rectilinear elementary patches. The decomposition is carried out top-down by recursive subdivision, followed by a bottom-up merging with the detection of architectural bilateral symmetry and repetitive patterns. Each subdivided patch of the flat façade is augmented with a depth optimized using the 3D point cloud. Our system also allows for easy user feedback in the 2D image space on the proposed decomposition and augmentation. Finally, our approach is demonstrated on a large number of façades from a variety of street-side images.
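The top-down step can be sketched in one dimension: a façade interval is recursively subdivided at split positions into elementary patches. This is a hedged sketch, not the paper's algorithm; real split positions come from image edges and symmetry detection, whereas here they are given explicitly.

```python
def subdivide(rect, splits):
    """Split the interval rect = (x0, x1) at the given positions,
    recursing on each side, and return the leaf patches left to right."""
    x0, x1 = rect
    inside = [s for s in splits if x0 < s < x1]
    if not inside:
        return [rect]                     # elementary patch, stop recursing
    mid = inside[len(inside) // 2]        # split at the middle position
    return subdivide((x0, mid), inside) + subdivide((mid, x1), inside)

# A 10-unit facade strip split at x = 2, 5, 8.
print(subdivide((0, 10), [2, 5, 8]))  # -> [(0, 2), (2, 5), (5, 8), (8, 10)]
```

In the paper this runs in 2D on both axes, and the resulting patches are then merged bottom-up wherever repeated or symmetric structure is detected, which is what yields a directed acyclic graph rather than a plain tree.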
What Makes Paris Look Like Paris?
2012
Cited by 49 (8 self)
Abstract:
Given a large repository of geotagged imagery, we seek to automatically find visual elements, e.g. windows, balconies, and street signs, that are most distinctive for a certain geo-spatial area, for example the city of Paris. This is a tremendously difficult task as the visual features distinguishing architectural elements of different places can be very subtle. In addition, we face a hard search problem: given all possible patches in all images, which of them are both frequently occurring and geographically informative? To address these issues, we propose to use a discriminative clustering …
SmartBoxes for Interactive Urban Reconstruction
Cited by 42 (4 self)
Abstract:
We introduce an interactive tool which enables a user to quickly assemble an architectural model directly over a 3D point cloud acquired from large-scale scanning of an urban scene. The user loosely defines and manipulates simple building blocks, which we call SmartBoxes, over the point samples. These boxes quickly snap to their proper locations to conform to common architectural structures. The key idea is that the building blocks are smart in the sense that their locations and sizes are automatically adjusted on-the-fly to fit well to the point data, while at the same time respecting contextual relations with nearby similar blocks. SmartBoxes are assembled through a discrete optimization to balance between two snapping forces defined respectively by a data-fitting term and a contextual term, which together assist the user in reconstructing the architectural model from a sparse and noisy point cloud. We show that a combination of the user’s interactive guidance and high-level knowledge about the semantics of the underlying model, together with the snapping forces, allows the reconstruction of structures which are partially or even completely missing from the input.
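The two snapping forces described above can be sketched in 1D: each candidate box placement is scored by a data-fitting term (how many scan points it covers) plus a contextual term (agreement with a neighboring box's size), and the best-scoring candidate wins. The weights, the 1D setting, and the specific terms are illustrative assumptions, not the paper's actual formulation.

```python
def snap(candidates, points, neighbor_width, w_data=1.0, w_context=2.0):
    """Return the (start, width) candidate maximizing the combined score."""
    def score(box):
        start, width = box
        # Data-fitting term: number of scan points the box covers.
        covered = sum(start <= p <= start + width for p in points)
        # Contextual term: prefer widths similar to the neighboring box.
        context = -abs(width - neighbor_width)
        return w_data * covered + w_context * context
    return max(candidates, key=score)

# Sparse, noisy "scan": a cluster near x = 1..2 plus an outlier at 7.
points = [1.1, 1.4, 1.8, 2.2, 7.0]
candidates = [(1.0, 1.5), (1.0, 3.0), (6.5, 1.5)]
print(snap(candidates, points, neighbor_width=1.5))  # -> (1.0, 1.5)
```

The contextual term is what lets the tool behave sensibly on missing data: a box over a hole in the scan can still snap to the size its neighbors suggest.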
Example-based model synthesis
In I3D '07: Proceedings of the 2007 Symposium on Interactive 3D Graphics and Games, 2007
Cited by 39 (3 self)
Abstract:
Model synthesis is a new approach to 3D modeling which automatically generates large models that resemble a small example model provided by the user. Model synthesis extends the 2D texture synthesis problem into higher dimensions and can be used to model many different objects and environments. The user only needs to provide an appropriate example model and does not need to provide any other instructions about how to generate the model. Model synthesis can be used to create symmetric models, models that change over time, and models that fit soft constraints. There are two important differences between our method and existing texture synthesis algorithms. The first is the use of a global search to find potential conflicts before adding new material to the model. The second difference is that we divide the problem of generating a large model into smaller subproblems which are easier to solve.
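The core idea, adding new material only where it stays consistent with adjacencies observed in the example, can be sketched in 1D. This is a stand-in illustration, not the paper's 3D algorithm: the "model" is a string, and the consistency check is a single look at the previous element rather than a global search.

```python
def learn_adjacencies(example):
    """Record which ordered label pairs occur side by side in the example."""
    return {(a, b) for a, b in zip(example, example[1:])}

def synthesize_model(example, length):
    allowed = learn_adjacencies(example)
    labels = sorted(set(example))
    out = [example[0]]
    while len(out) < length:
        # Check each candidate against the constraint before adding it.
        for lab in labels:
            if (out[-1], lab) in allowed:
                out.append(lab)
                break
        else:
            break                      # no consistent extension exists
    return "".join(out)

# The output reuses only adjacencies seen in the example "abcabc".
print(synthesize_model("abcabc", 8))  # -> 'abcabcab'
```

The paper's two refinements map onto the gaps in this sketch: a global (not just local) search catches conflicts before material is committed, and splitting the output into smaller subproblems keeps that search tractable for large models.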
An example-based procedural system for element arrangement
In Comput. Graph. Forum, 2008
Cited by 32 (1 self)
Abstract:
We present a method for synthesizing two-dimensional (2D) element arrangements from an example. The main idea is to combine texture synthesis techniques based on a local neighborhood comparison with procedural modeling systems based on local growth. Given a user-specified reference pattern, our system analyzes neighborhood information of each element by constructing connectivity. Our synthesis process starts with a single seed and progressively places elements one by one by searching for a reference element whose local features are the most similar to the target place in the synthesized pattern. To support creative design activities, we introduce three types of interaction for controlling global features of the resulting pattern, namely a spray tool, a flow field tool, and a boundary tool. We also introduce a global optimization process that helps to avoid local error concentrations. We illustrate the feasibility of our method by creating several types of 2D patterns.
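The growth step described above can be sketched minimally: each new element is copied from the reference element whose local neighborhood best matches the current frontier of the synthesis. This is a hedged 1D illustration, not the paper's system: elements are scalars and the "neighborhood" is just the predecessor's value, whereas the paper compares full 2D neighborhoods built from element connectivity.

```python
def grow_arrangement(reference, length):
    """Grow from a single seed, copying at each step the reference
    element whose predecessor best matches the last placed element."""
    out = [reference[0]]                  # the seed
    while len(out) < length:
        best = min(range(1, len(reference)),
                   key=lambda i: abs(reference[i - 1] - out[-1]))
        out.append(reference[best])       # copy the best-matching element
    return out

# Growth reproduces the reference's local structure, then saturates
# once it runs past the example's range.
print(grow_arrangement([0, 2, 4, 6], 6))  # -> [0, 2, 4, 6, 6, 6]
```

The saturation at the end is exactly the kind of local error concentration the abstract mentions; the paper's global optimization pass redistributes such errors instead of letting them pile up at the growth frontier.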