
## Time-of-Flight Cameras and Microsoft Kinect (2012)

Venue: SpringerBriefs in Electrical and Computer Engineering

Citations: 9 (4 self)

### Citations

3782 | Normalized cuts and image segmentation - Shi, Malik

2394 | Mean shift: A robust approach toward feature space analysis
- Comaniciu, Meer
Citation Context ...a virtual camera placed between L and D). The choice of lattice ΛZ is a major design decision when fusing data from S and D. Approaches like [30, 29, 31] adopt ΛZ = ΛS, while other approaches such as [8] adopt ΛZ = ΛD. The choice of ΛS leads to high resolution depth map estimates (i.e., to depth maps with the resolution of stereo im...

2118 | Fast Approximate Energy Minimization via Graph Cuts
- Boykov, Veksler, et al.
- 2001
Citation Context ...inear support placed on a rotating platform, with consequent motion both in vertical and horizontal directions, as in the case of the scan-systems used for topographic or architectural surveys (e.g., [3, 4, 5, 6]). Since any time-sequential scanning mechanism takes some time in order to scan a scene, such systems are intrinsically unsuited to acquire dynamic scenes, i.e., scenes with moving objects. Different...

939 | Efficient graph-based image segmentation
- Felzenszwalb, Huttenlocher
- 2004
Citation Context ...s and noise or color distortion. Examples of spatial-multiplexing coding are the ones based on non-formal codification [10], the ones based on De Bruijn sequences [11] and the ones based on M-arrays [12]. 3.1.2 About Kinect™ range camera operation 3.1.2.1 Horizontal uncorrelation of the projected pattern The Kinect™ is a proprietary product and its algorithms a...

623 | Mean shift, mode seeking, and clustering
- Cheng
- 1995
Citation Context ...hown in Figure 4.3. Due to the planar shape of the sensor and checkerboard, the position of the sensor plane with respect to the WCS, associated to the checkerboard, can be derived by a 3D homography [5, 4], characterized by 9 coefficients. Once the 3D position of the sensor with respect to the checkerboard is known, as shown by Figure 4.3, the 3D coordinates of the checkerboard corners, P∗i, serving a...

457 | A review on image segmentation techniques
- Pal, Pal
- 1993
Citation Context ...ing another issue of depth estimates obtained by triangulation, i.e., it is possible to manipulate Equation (1.14) to show that the depth estimate resolution decreases with the square of the distance [17]. Therefore the quality of the depth measurements obtained by matricial active triangulation is worse for the furthest scene points than for the closest ones [3]. 3.3 Conclusion and further reading Or...

440 | Compositing digital images
- Porter, Duff
- 1984
Citation Context ... is not controllable and, for instance, it includes colors similar to the ones of the foreground or moving objects, video matting becomes rather difficult. The video matting problem can be formalized [18] by representing each pixel pi in the observed color image IC as the linear combination of a background color B(pi) and a foreground color F(pi) weighted by the opacity value α(pi), i.e.: IC(pi) = α(p...
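The compositing model quoted in this context (each observed color as a linear combination of a foreground and a background color, weighted by the opacity α) can be sketched in a few lines of NumPy; the array shapes and values below are illustrative assumptions, not from the book:

```python
import numpy as np

def composite(alpha, foreground, background):
    """Alpha compositing: I_C(p) = alpha(p) * F(p) + (1 - alpha(p)) * B(p).

    alpha:                  (H, W) opacity map in [0, 1]
    foreground, background: (H, W, 3) color images
    """
    a = alpha[..., np.newaxis]            # broadcast alpha over color channels
    return a * foreground + (1.0 - a) * background

# Toy data: a fully opaque pixel shows only the foreground color,
# a fully transparent one only the background color.
F = np.full((2, 2, 3), 0.9)
B = np.full((2, 2, 3), 0.1)
alpha = np.array([[1.0, 0.0],
                  [0.5, 0.5]])
I_C = composite(alpha, F, B)
```

Matting is the inverse problem: given only I_C, recover α, F and B per pixel, which is what makes the depth channel a useful extra constraint.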

252 | Computer Vision: Algorithms and Applications
- Szeliski
- 2010
Citation Context ... measure distances of dynamic scenes, i.e., scenes with moving objects, subsequent light coding methods focused on reducing the number of projected patterns to few units or to a single pattern, e.g., [23, 24, 25]. The Kinect™ belongs to this last family of methods as seen in greater detail in Chapter 3. 1.3 Plan of the book This introduction motivates the book and provides the basics for understanding the To...

235 | A Bayesian approach to digital matting
- Chuang, Curless, et al.
- 2001
Citation Context ...RGB color channels, and then apply standard matting techniques originally developed for color data to IC,Z(pi). For example, Wang et al. [27] extend the Bayesian matting scheme originally proposed in [6] for color images by introducing the depth component in the probability maximization scheme. Another interesting idea, also proposed in [27], is weighting the confidence of the depth channel on the ba...

149 | Efficient RANSAC for point-cloud shape detection
- Schnabel, Wahl, et al.
- 2007
Citation Context ... of view. The reader interested in more details about camera calibration can find image processing and corner detection techniques in [17, 18], projective geometry in [7, 19] and numerical methods in [20]. Classical camera calibration approaches are the ones of [3, 4, 5]. Some theoretical and practical hints can also be found in [6]. The principles of stereo vision system calibration are reported in [...

115 | Poisson matting
- Sun, Jia, et al.
- 2004
Citation Context ...tances and their usage to still scenes. This type of system was and continues to be the common choice for 3D modeling of still scenes. For the 3D modeling methods used in this field see for instance [2, 22]. In general, active techniques are recognized to be more expensive and slower than passive methods but far more accurate and robust. In order to measure distances of dynamic scenes, i.e., s...

77 | Bi-layer segmentation of binocular stereo video
- Kolmogorov, Criminisi, et al.
- 2005
Citation Context ...3] are based on CW-ToF technology, this book focuses only on this technology, which is presented in detail in Chapter 2. An exhaustive review of the state-of-the-art in ToF technology can be found in [14]. 1.2 Basics of imaging systems and Kinect™ operation The Kinect™ is a special case of 3D acquisition systems based on light coding. Understanding its operation requires a number of preliminary noti...

43 | Segmentation of Point Clouds using Smoothness Constraints
- Rabbani, Heuvel, et al.
- 2006
Citation Context ...ith [f, ku, kv, cx, cy]. The estimation of intrinsic and extrinsic parameters of an imaging system is called geometrical calibration as discussed in Chapter 4. Suitable tools for this task are given by [19] and [20]. 1.2.3 Stereo vision systems A stereo vision (or just stereo) system is made by two standard (typically identical) cameras partially framing the same scene. Such a system can always be calib...

29 | Real-time plane segmentation using RGB-D cameras. In: RoboCup Symposium
- Holz, Holzer, et al.
- 2011
Citation Context ... for this kind of data. Man-made objects usually feature sets of planar surfaces and the recognition of the different planes of the acquired point cloud is a possible depth data segmentation approach [13]. A simple solution to locate the scene planes is to compute the 3D vectors representing the surface normal at each point and then cluster the 3D points on the basis of their normals by a clustering t...
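The normal-based clustering idea described in this context can be illustrated with a toy sketch; the reference directions and the angular threshold below are arbitrary choices for the example, not values from [13]:

```python
import numpy as np

def segment_by_normals(normals, axes, angle_thresh_deg=20.0):
    """Assign each 3D point to the reference direction its surface normal
    is closest to, or to -1 if no direction is within the threshold.

    normals: (N, 3) unit surface normals, one per 3D point
    axes:    (K, 3) unit reference directions (candidate plane normals)
    """
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    # |n . a| is the cosine of the angle between a normal and an axis
    # (the absolute value makes opposite-facing normals equivalent)
    sims = np.abs(normals @ axes.T)                # (N, K)
    labels = sims.argmax(axis=1)
    labels[sims.max(axis=1) < cos_thresh] = -1     # unassigned points
    return labels

# Toy scene: points on a floor (normal ~ +z) and a wall (normal ~ +x)
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.0, -1.0],
                    [1.0, 0.0, 0.0],
                    [0.7, 0.7, 0.0]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
axes = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
labels = segment_by_normals(normals, axes)
```

A real pipeline would first estimate the normals (e.g. by fitting a plane to each point's neighborhood) and discover the reference directions by clustering rather than fixing them in advance.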

26 | Joint optimisation for object class segmentation and dense stereo reconstruction
- Ladicky, Sturgess, et al.
- 2010
Citation Context ...ameras. The fundamental instruments for treating these two problems are the bilateral filter [25] and the Markov-Random-Field (MRF) framework [17, 3]. Other techniques, such as non-local means filter [15, 6] and Conditional-Random-Fields (CRF) [26] have also been adopted in this context. Examples of the main methods for depth super-resolution through the fusion of data from a range camera and a standard ...

21 | Real-time foreground segmentation via range and color imaging
- Crabb, Tracey, et al.
- 2008
Citation Context ...dom-Fields (CRF) [26] have also been adopted in this context. Examples of the main methods for depth super-resolution through the fusion of data from a range camera and a standard color camera are in [28, 10, 9, 15, 20, 14], and examples of the main approaches for the fusion of data from a range camera and a stereo vision system are in [30, 29, 31, 8, 27, 16, 11, 28]. In general it can be said that data fusion by either...

20 | Fusing time-of-flight depth and color for real-time segmentation and tracking.
- Bleiweiss, Werman
- 2009
Citation Context ...estimate of the relationship between the actual scene point color and the color measured by the camera at the corresponding pixel. A comprehensive treatment of photometric calibration can be found in [1, 2]. Photometric calibration is not considered in this chapter, since for the applications treated in this book only geometrical calibration is relevant. Statement 4.2. The geometric calibration of a vid...

13 | Automatic natural video matting with depth
- Wang, Finger, et al.
- 2007
Citation Context ... ways, namely by an independent random variable approach or by a MRF approach, as seen next. 5.4.1 Independent random variable approach The first approach (explicitly adopted in [8] and implicitly in [27]) models Z as a juxtaposition of independent random variables Zi = Z(pi), pi ∈ Λ, where Λ is the considered lattice (either ΛS, ΛD or another lattice). These variables are characterized by prior proba...

12 | TofCut: towards robust real-time foreground extraction using a time-of-flight camera
- Wang, Zhang, et al.
- 2010
Citation Context ...he depth term for foreground pixels only. Better results can be obtained by more complex models of the foreground and background likelihoods. Figure 6.4 shows some results obtained by the approach of [26] that models the two likelihoods as Gaussian Mixture Models (GMM). Another key issue is that color and depth lie in completely different spaces and it is necessary to adjust their mutual relevance. Th...

7 | Scene segmentation assisted by stereo vision
- Mutto, Zanuttigh, et al.
- 2011
Citation Context ...oordinates) and cluster such vectors by state-of-the-art clustering techniques. This approach can be extended in order to handle both depth and color data. The basic idea, used for example in [2] and [10], is replacing the 2D coordinates of the image pixels with the corresponding 3D coordinates of the depth data and associating to each pixel a 6D vector instead of a 5D vector as in image segmentation...
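A minimal sketch of the 6D-vector idea described in this context, with a small k-means implementation standing in for the "state-of-the-art clustering techniques" the book refers to; the weight w and the toy data are illustrative assumptions:

```python
import numpy as np

def build_features(colors, points3d, w=1.0):
    """Stack each pixel's color and its 3D geometry into one 6D vector
    [R, G, B, w*x, w*y, w*z]; w balances the two heterogeneous spaces.

    colors:   (N, 3) per-pixel colors
    points3d: (N, 3) corresponding 3D coordinates from the depth data
    """
    return np.hstack([colors, w * points3d])

def kmeans(X, k, iters=20):
    """Minimal k-means, enough to cluster the 6D vectors.

    Farthest-point initialization keeps this toy example deterministic.
    """
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two toy segments: red pixels near the camera, blue pixels far away
colors = np.array([[1, 0, 0]] * 3 + [[0, 0, 1]] * 3, dtype=float)
points3d = np.array([[0, 0, 1]] * 3 + [[5, 5, 5]] * 3, dtype=float)
X = build_features(colors, points3d, w=1.0)
labels = kmeans(X, k=2)
```

In practice the weight w matters a great deal, since color and metric coordinates have incomparable scales; the book discusses this mutual-relevance adjustment in the segmentation chapter.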

7 | Combining color, depth, and motion for video segmentation,” in Computer Vision Systems
- Leens, Pirard, et al.
- 2009
Citation Context ...es of P = [x, y, z]T can be obtained from P = [hx, hy, hz, h]T dividing them by the fourth coordinate h. An introduction to projective geometry suitable to its computer vision applications can be found in [16]. The homogeneous coordinates representation of p allows rewriting the non-linear relationship (1.4) in a convenient matrix form, namely: z [u, v, 1]T = [[f, 0, cx], [0, f, cy], [0, 0, 1]] [x, y, z]T (1.5) 1.2 Basics ...
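The matrix relationship (1.5) quoted in this context is the standard pinhole projection; a minimal sketch follows, where the focal length and principal point values are illustrative (roughly Kinect-like), not taken from the book:

```python
import numpy as np

def project(P, f, cx, cy):
    """Pinhole projection of Eq. (1.5): z * [u, v, 1]^T = K [x, y, z]^T,
    with intrinsic matrix K = [[f, 0, cx], [0, f, cy], [0, 0, 1]].

    P: 3D point [x, y, z] in the camera reference frame (z > 0)
    """
    K = np.array([[f,   0.0, cx],
                  [0.0, f,   cy],
                  [0.0, 0.0, 1.0]])
    uvw = K @ P                   # homogeneous pixel coordinates, scaled by z
    return uvw[:2] / uvw[2]       # divide by z to recover (u, v)

# A point on the optical axis projects to the principal point (cx, cy)
p = project(np.array([0.0, 0.0, 2.0]), f=525.0, cx=320.0, cy=240.0)
```

The division by the third homogeneous coordinate is exactly the non-linearity that the matrix form (1.5) hides inside the scale factor z.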

5 | Hierarchical fusion of color and depth information at partition level by cooperative region merging
- Calderero, Marques
Citation Context ...on W centered at (uA, vA)T and C(ui, uj, vi) is the horizontal covariance of the projected pattern between supports centered at piA and pjA. Some reverse engineering analyses suggest 7 × 7 [3] or 9 × 9 [4] as support of the spatial multiplexing window adopted by the Kinect™. It is important to note that (3.1) strictly holds only for the covariance of the projected ideal pattern. The patterns actually meas...

5 | Channel coding for joint colour and depth segmentation
- Wallenberg, Felsberg, et al.
- 2011
Citation Context ...r spectral clustering [21], can be used to cluster the set of vectors (6.9) in order to segment the scene. A possible variant of this approach is to exploit the derivatives or gradients of depth data [24]. This is similar to using the surface normals associated to depth samples in [5] and it has the advantage that it can discriminate close surfaces with different orientations. Unfortunately the gradie...

3 | Visual tracking and segmentation using time-of-flight sensor
- Arif, Daley, et al.
- 2010
Citation Context ...nt to remember that there exist fundamental technological differences between them which cannot be ignored. The synopsis of distance measurement methods of Figure 1.1, derived from [1], offers a good framework to introduce such differences. For the purposes of this book it suffices to note that the reflective optical methods of Figure 1.1 are typically classified into passive and a...

3 | Segmenting color images into surface patches by exploiting sparse depth data
- Dellen, Alenya, et al.
- 2011
Citation Context ...ave (CW) intensity modulation approach introduced in [7], the optical shutter (OS) approach of [8, 9] and the single-photon avalanche diodes (SPAD) approach [10]. Since all the commercial solutions of [11, 12, 13] are based on CW-ToF technology, this book focuses only on this technology, which is presented in detail in Chapter 2. An exhaustive review of the state-of-the-art in ToF technology can be found in [1...

3 | Automatic real-time video matting using time-of-flight camera and multichannel Poisson equations
- Wang, Gong, et al.
- 2012
Citation Context ... measure distances of dynamic scenes, i.e., scenes with moving objects, subsequent light coding methods focused on reducing the number of projected patterns to few units or to a single pattern, e.g., [23, 24, 25]. The Kinect™ belongs to this last family of methods as seen in greater detail in Chapter 3. 1.3 Plan of the book This introduction motivates the book and provides the basics for understanding the To...