Results 1 - 10 of 33
Intrinsic Scene Properties from a Single RGB-D Image
"... In this paper we extend the “shape, illumination and reflectance from shading ” (SIRFS) model [3, 4], which recovers intrinsic scene properties from a single image. Though SIRFS performs well on images of segmented objects, it performs poorly on images of natural scenes, which contain occlusion and ..."
Abstract
-
Cited by 28 (6 self)
- Add to MetaCart
In this paper we extend the “shape, illumination and reflectance from shading” (SIRFS) model [3, 4], which recovers intrinsic scene properties from a single image. Though SIRFS performs well on images of segmented objects, it performs poorly on images of natural scenes, which contain occlusion and spatially-varying illumination. We therefore present Scene-SIRFS, a generalization of SIRFS in which we have a mixture of shapes and a mixture of illuminations, and those mixture components are embedded in a “soft” segmentation of the input image. We additionally use the noisy depth maps provided by RGB-D sensors (in this case, the Kinect) to improve shape estimation. Our model takes as input a single RGB-D image and produces as output an improved depth map, a set of surface normals, a reflectance image, a shading image, and a spatially-varying model of illumination. The output of our model can be used for graphics applications, or for any application involving RGB-D images.
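Scene-SIRFS itself is a rich mixture model, but the Lambertian image formation it inverts can be sketched compactly. The following hypothetical Python/numpy snippet (not the authors' code; the bump-shaped depth map and light direction are made up for illustration) shows how normals follow from a depth map and how a shading image follows from the normals, after which reflectance is recoverable as image / shading:

```python
import numpy as np

def normals_from_depth(depth):
    """Per-pixel surface normals from a depth map via finite differences."""
    dz_dy, dz_dx = np.gradient(depth)
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def lambertian_shading(normals, light_dir):
    """Shading of a Lambertian surface under one distant light."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, None)

# Toy data: a smooth bump, lit from the upper left (hypothetical).
yy, xx = np.mgrid[0:64, 0:64]
depth = 3.0 - np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 300.0)
shading = lambertian_shading(normals_from_depth(depth), [-1.0, -1.0, 2.0])
# In an intrinsic-image model, image ~ reflectance * shading, so given
# shading, reflectance ~ image / np.maximum(shading, 1e-6).
```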
Interactive images: cuboid proxies for smart image manipulation
- ACM Trans. Graph
"... Figure 1: Starting from single images with segmented interest regions, we generate cuboid proxies for partial scene modeling enabling a range of smart image manipulations. Here, we replace furniture in one image using candidates from other images, while automatically conforming to the non-local rela ..."
Abstract
-
Cited by 27 (16 self)
- Add to MetaCart
Figure 1: Starting from single images with segmented interest regions, we generate cuboid proxies for partial scene modeling, enabling a range of smart image manipulations. Here, we replace furniture in one image using candidates from other images, while automatically conforming to the non-local relations extracted from the original scene, e.g., the sofas have the same height, the table is correctly placed, etc. Images are static and lack important depth information about the underlying 3D scenes. We introduce interactive images in the context of man-made environments wherein objects are simple and regular, share various non-local relations (e.g., coplanarity, parallelism, etc.), and are often repeated. Our interactive framework creates partial scene reconstructions based on cuboid proxies with minimal user interaction. It subsequently allows a range of intuitive image edits mimicking real-world behavior, which are otherwise difficult to achieve. Effectively, the user simply provides high-level semantic hints, while our system ensures plausible operations by conforming to the extracted non-local relations. We demonstrate our system on a range of real-world images and validate the plausibility of the results using a user study.
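As a concrete (if simplified) illustration of enforcing such a non-local relation, the toy snippet below snaps cuboid heights that are nearly equal to a shared value; the tolerance and the greedy grouping are assumptions for illustration, not the paper's actual constraint machinery:

```python
import numpy as np

def snap_equal_heights(heights, tol=0.05):
    """Cuboids whose heights differ by less than tol (relative) are
    assumed to obey an 'equal height' relation and are snapped to their
    group mean. A toy stand-in for the paper's non-local relations."""
    heights = np.asarray(heights, dtype=float)
    order = np.argsort(heights)
    snapped = heights.copy()
    group = [order[0]]
    for i, j in zip(order, order[1:]):
        if abs(heights[j] - heights[i]) <= tol * heights[i]:
            group.append(j)          # still within tolerance: same group
        else:
            snapped[group] = heights[group].mean()
            group = [j]              # start a new group
    snapped[group] = heights[group].mean()
    return snapped

print(snap_equal_heights([0.98, 1.02, 1.00, 0.55]))  # sofas snap to ~1.0
```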
Rich intrinsic image decomposition of outdoor scenes from multiple views -- APPENDIX: ILLUMINANT CALIBRATION
- IEEE TRANS. ON
, 2012
"... In this appendix, we describe the details of the illuminant calibration step for the sky and the sun. First, because our model separates sun light from sky light, we need to remove sun pixels from the environment map. We define the sun position as the barycenter of the saturated sun pixels, and use ..."
Abstract
-
Cited by 9 (4 self)
- Add to MetaCart
(Show Context)
In this appendix, we describe the details of the illuminant calibration step for the sky and the sun. First, because our model separates sun light from sky light, we need to remove sun pixels from the environment map. We define the sun position as the barycenter of the saturated sun pixels, and use inpainting to fill in these saturated pixels from their neighbors. Since our model also separates sky light from indirect light, we use a standard color selection tool to label sky pixels that will contribute to the sky illumination, while other pixels (buildings, trees) will contribute to indirect lighting. This is illustrated in Fig. 5c. Second, we align the environment map and sun with the reconstructed scene. To do so we manually mark a vertical edge of the reconstructed geometry and rotate the environment map and sun until the cast shadow of the …
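The sun-removal step described above lends itself to a short sketch. Assuming an 8-bit BGR environment map and using OpenCV's inpainting (the saturation threshold of 250 is a made-up choice, not the paper's), one plausible implementation is:

```python
import numpy as np
import cv2  # OpenCV

def remove_sun(env_map, sat_thresh=250):
    """Locate the sun as the barycenter of saturated pixels, then
    inpaint those pixels from their neighbors. An illustrative sketch,
    not the paper's exact calibration code."""
    gray = cv2.cvtColor(env_map, cv2.COLOR_BGR2GRAY)
    mask = (gray >= sat_thresh).astype(np.uint8)
    ys, xs = np.nonzero(mask)
    sun_pos = (xs.mean(), ys.mean()) if len(xs) else None  # barycenter (x, y)
    cleaned = cv2.inpaint(env_map, mask, inpaintRadius=3,
                          flags=cv2.INPAINT_TELEA)
    return sun_pos, cleaned
```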
3DNN: Viewpoint Invariant 3D Geometry Matching for Scene Understanding
"... We present a new algorithm 3DNN (3D Nearest-Neighbor), which is capable of matching an image with 3D data, independently of the viewpoint from which the image was captured. By leveraging rich annotations associated with each image, our algorithm can automatically produce precise and detailed 3D mode ..."
Abstract
-
Cited by 6 (0 self)
- Add to MetaCart
(Show Context)
We present a new algorithm, 3DNN (3D Nearest-Neighbor), which is capable of matching an image with 3D data, independently of the viewpoint from which the image was captured. By leveraging rich annotations associated with each image, our algorithm can automatically produce precise and detailed 3D models of a scene from a single image. Moreover, we can transfer information across images to accurately label and segment objects in a scene. The true benefit of 3DNN compared to a traditional 2D nearest-neighbor approach is that by generalizing across viewpoints, we free ourselves from the need to have training examples captured from all possible viewpoints. Thus, we are able to achieve comparable results using orders of magnitude less data, and recognize objects from never-before-seen viewpoints. In this work, we describe the 3DNN algorithm and rigorously evaluate its performance for the tasks of geometry estimation and object detection/segmentation. By decoupling the viewpoint and the geometry of an image, we develop a scene matching approach which is truly 100% viewpoint invariant, yielding state-of-the-art performance on challenging data.
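The key idea, matching geometry after factoring out the viewpoint, can be caricatured in a few lines. The sketch below (numpy only) is emphatically not the paper's actual algorithm, and it ignores PCA sign/reflection ambiguities; it merely canonicalizes small point clouds and retrieves the nearest database scene by chamfer distance:

```python
import numpy as np

def canonicalize(points):
    """Center, scale-normalize, and PCA-align a point cloud so that
    comparisons no longer depend on the camera viewpoint."""
    p = points - points.mean(axis=0)
    p /= np.linalg.norm(p, axis=1).max()
    _, _, vt = np.linalg.svd(p, full_matrices=False)
    return p @ vt.T  # rotate into principal axes

def chamfer(a, b):
    """Symmetric chamfer distance between two small point clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def nearest_scene(query, database):
    """Index of the database scene geometrically closest to the query."""
    q = canonicalize(query)
    return min(range(len(database)),
               key=lambda i: chamfer(q, canonicalize(database[i])))
```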
Image-Based Remodeling
"... Abstract—Imagining what a proposed home remodel might look like without actually performing it is challenging. We present an image-based remodeling methodology that allows real-time photorealistic visualization during both the modeling and remodeling process of a home interior. Large-scale edits, li ..."
Abstract
-
Cited by 3 (0 self)
- Add to MetaCart
(Show Context)
Imagining what a proposed home remodel might look like without actually performing it is challenging. We present an image-based remodeling methodology that allows real-time photorealistic visualization during both the modeling and remodeling process of a home interior. Large-scale edits, like removing a wall or enlarging a window, are performed easily and in real time, with realistic results. Our interface supports the creation of concise, parameterized, and constrained geometry, as well as remodeling directly from within the photographs. Real-time texturing of modified geometry is made possible by precomputing view-dependent textures for all faces that are potentially visible to each original camera viewpoint, blending multiple viewpoints and hole-filling when necessary. The resulting textures are stored and accessed efficiently, enabling intuitive real-time realistic visualization, modeling, and editing of the building interior. Index Terms: image-based rendering, modeling packages, visualization systems and software.
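View-dependent texturing of the kind mentioned here typically weights each source camera by how closely its viewing direction agrees with the novel view. Below is a minimal sketch of such a weighting, with a made-up sharpness exponent and not necessarily this paper's exact blending scheme:

```python
import numpy as np

def vdtm_weights(view_dir, cam_dirs, sharpness=8.0):
    """Blend weights for view-dependent texturing: cameras whose viewing
    direction is closest to the novel view contribute most."""
    v = view_dir / np.linalg.norm(view_dir)
    c = cam_dirs / np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    w = np.clip(c @ v, 0.0, None) ** sharpness  # back-facing cameras get 0
    s = w.sum()
    return w / s if s > 0 else np.full(len(w), 1.0 / len(w))

# Example: three source cameras, novel view closest to camera 0.
cams = np.array([[0.0, 0.0, 1.0], [0.7, 0.0, 0.7], [-0.7, 0.0, 0.7]])
print(vdtm_weights(np.array([0.1, 0.0, 1.0]), cams))
```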
Exposing Photo Manipulation from Shading and Shadows
"... We describe a method for detecting physical inconsistencies in lighting from the shading and shadows in an image. This method imposes a mul-titude of shading- and shadow-based constraints on the projected location of a distant point light source. The consistency of a collection of such con-straints ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
We describe a method for detecting physical inconsistencies in lighting from the shading and shadows in an image. This method imposes a multitude of shading- and shadow-based constraints on the projected location of a distant point light source. The consistency of a collection of such constraints is posed as a linear programming problem. A feasible solution indicates that the combination of shading and shadows is physically consistent, while a failure to find a solution provides evidence of photo tampering.
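The feasibility test at the heart of this method maps directly onto an off-the-shelf LP solver. In the sketch below, each shading or shadow cue is modeled as a half-plane constraint a·x <= b on the 2D projected light position; the constraint data are hypothetical, and this is a simplification of the paper's full formulation:

```python
import numpy as np
from scipy.optimize import linprog

def lighting_consistent(half_planes):
    """Test whether the intersection of half-plane constraints on the
    projected light position is non-empty, via LP feasibility (zero
    objective, so any feasible point is optimal)."""
    A = np.array([hp[0] for hp in half_planes], dtype=float)
    b = np.array([hp[1] for hp in half_planes], dtype=float)
    res = linprog(c=[0.0, 0.0], A_ub=A, b_ub=b,
                  bounds=[(None, None)] * 2)
    return res.status == 0  # 0 = feasible solution found

# Two compatible constraints vs. two contradictory ones:
print(lighting_consistent([((1, 0), 5), ((-1, 0), 5)]))    # True
print(lighting_consistent([((1, 0), -5), ((-1, 0), -5)]))  # False
```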
Lighting Estimation in Indoor Environments from Low-Quality Images
"... Abstract. Lighting conditions estimation is a crucial point in many applications. In this paper, we show that combining color images with corresponding depth maps (provided by modern depth sensors) allows to improve estimation of positions and colors of multiple lights in a scene. Since usually such ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Lighting condition estimation is a crucial point in many applications. In this paper, we show that combining color images with corresponding depth maps (provided by modern depth sensors) makes it possible to improve the estimation of the positions and colors of multiple lights in a scene. Since such devices usually provide low-quality images, for many steps of our framework we propose alternatives to classical algorithms that fail when image quality is low. Our approach decomposes an original image into specular shading, diffuse shading, and albedo. The two shading images are used to render different versions of the original image by changing the light configuration. Then, using an optimization process, we find the lighting conditions that minimize the difference between the original image and the rendered one. Keywords: light estimation, depth sensor, color constancy.
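If the candidate light positions are held fixed and only their intensities vary, the render-and-compare optimization reduces to nonnegative least squares. The following sketch assumes precomputed per-light diffuse shading images and grayscale inputs; it is a linear simplification of the paper's procedure, not its implementation:

```python
import numpy as np
from scipy.optimize import nnls

def estimate_light_intensities(image, albedo, shading_basis):
    """Solve for nonnegative light intensities w such that
    albedo * sum_k w[k] * shading_basis[k] best matches the image.
    Inputs are hypothetical 2D float arrays of equal shape."""
    A = np.stack([(albedo * s).ravel() for s in shading_basis], axis=1)
    w, residual = nnls(A, image.ravel())
    return w, residual
```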
SmartAnnotator: An interactive tool for annotating RGBD indoor images. CoRR abs/1403.5718
, 2014
"... bed night stand night stand lamp dresserpillow bed night stand night stand lamp pillow pillow night ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
[Figure: annotated bedroom scene with object labels such as bed, night stand, lamp, dresser, and pillow]
Novel Approach for Image Decomposition from Multiple Views
"... Abstract---Intrinsic images aim is separating an image into its reflectance and illumination components to facilitate further analysis or manipulation. This paper presents the system that is able to estimate shading and reflectance intrinsic images from a single real image, given the direction of t ..."
Abstract
- Add to MetaCart
Intrinsic image decomposition aims to separate an image into its reflectance and illumination components to facilitate further analysis or manipulation. This paper presents a system that estimates shading and reflectance intrinsic images from a single real image, given the direction of the dominant illumination of the scene. Although some properties of real-world scenes, such as occlusion edges, are not modeled directly, the system produces satisfying image decompositions. The basic strategy of the system is to gather local evidence from color and intensity patterns in the image; this evidence is then propagated to other areas of the image. The most computationally intensive steps in recovering the shading and reflectance images are computing the local evidence and running the Generalized Belief Propagation algorithm. One of the primary limitations of this work was the use of synthetic training data, which limited both the performance of the system and the range of processes available for designing the classifiers. An optimization method is then introduced to estimate sun visibility over the point cloud; this algorithm compensates for the lack of accurate geometry and allows the extraction of precise shadows in the final image. Finally, the information computed over the sparse point cloud is propagated to every pixel in the photograph using image-guided propagation. This propagation not only separates reflectance from illumination, but also decomposes the illumination into sun, sky, and indirect layers. The authors expect that performance would be improved by training on a set of intrinsic images gathered from real data. Index Terms: intrinsic images, belief propagation, mean-shift algorithm.
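The "local evidence" step can be illustrated with a classic color-based heuristic: image derivatives that change chromaticity are attributed to reflectance (paint) edges, while purely achromatic intensity changes are attributed to shading. The snippet below is a toy classifier with an arbitrary threshold, not the paper's learned classifiers:

```python
import numpy as np

def local_evidence(image, chroma_thresh=0.03):
    """Classify horizontal derivatives of an (h, w, 3) float image as
    reflectance or shading changes. Derivatives that shift chromaticity
    are labeled reflectance; the threshold is illustrative only."""
    intensity = image.sum(axis=2) + 1e-6
    chroma = image / intensity[..., None]            # normalized rgb
    d_int = np.abs(np.diff(intensity, axis=1))       # intensity change
    d_chr = np.abs(np.diff(chroma, axis=1)).sum(2)   # chromaticity change
    is_reflectance = d_chr > chroma_thresh           # color shift => paint edge
    return is_reflectance, d_int
```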