Capturing the 3D shape of objects and scenes has been explored in computer vision for many years. Whether for computer graphics, motion pictures, or sports medicine, accurately reconstructing the motion of a scene or actor is highly desirable. We have developed a system that generates 3D video viewable from arbitrary positions surrounding a central reconstruction volume. We accomplish this by using three structured light stations positioned around the central volume. Systems that rely solely on multiple passive cameras can have difficulty reconstructing a scene when there is little uniquely identifiable texture. By using structured light rather than passive cameras alone, every point in the scene is guaranteed to have enough texture to be identified by the capturing cameras. Our current system consists of three structured light stations, as illustrated below.
Our current results are generated using a single structured light station. Within each station, two grayscale Point Grey Dragonfly Express cameras are paired with an Optoma TX 780 projector to capture depth. Each station is also equipped with a Point Grey Flea2 color camera for eventual capture of the scene's color texture.
Below is a list of publications, with abstracts, covering our work in structured light depth capture.
We present a multi-view structured light system for markerless motion capture of human subjects. In contrast to existing approaches that use multiple camera streams, we reconstruct the scene by combining six partial 3D scans generated from three structured light stations surrounding the subject and operating in a round robin fashion. We avoid interference between multiple projectors through time multiplexing and synchronization across all cameras and projectors. Assuming known multi-camera projector calibration parameters, we generate point clouds from each station, convert them to partial surfaces, and merge them into a single coordinate frame. We develop algorithms to reconstruct dynamic geometry using a template generated by the system itself. Specifically, we deform the template to each frame of the captured geometry by iteratively aligning each bone of the skeleton. This is done by searching for correspondences between the source template and the captured geometry, solving for rotations of bones, and enforcing constraints on each rotation to prevent the template from taking on anatomically unnatural poses. Once the sequence of captured geometry is processed, the template is textured using color images from the multi-view structured light system. We show the effectiveness of our system on a 50-second sequence of a moving human subject.
R.R. Garcia, A. Zakhor, "Markerless Motion Capture with Multi-view Structured Light," To be submitted. [Adobe PDF]
Click to play video
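The per-bone rotation solve described above is closely related to the classical Kabsch algorithm: given correspondences between template points on a bone and captured surface points, one can recover the best-fitting rotation in closed form via an SVD. The sketch below is illustrative only (the function name and setup are our own, not the paper's implementation):

```python
import numpy as np

def kabsch_rotation(src, dst):
    """Solve for the rotation R minimizing ||R @ src - dst|| between two
    centered 3xN correspondence sets (Kabsch algorithm via SVD)."""
    # Center both point sets so only a rotation remains to be solved.
    src_c = src - src.mean(axis=1, keepdims=True)
    dst_c = dst - dst.mean(axis=1, keepdims=True)
    H = src_c @ dst_c.T                      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Flip the last axis if needed so the result is a rotation, not a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

In a full pipeline, a solve like this would run once per bone per iteration, with joint-angle constraints applied afterwards to keep the pose anatomically plausible.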
In this paper, we describe a calibration method for multi camera-projector systems in which sensors face each other as well as share a common viewpoint. We use a translucent planar sheet framed in PVC piping as a calibration target which is placed at multiple positions and orientations within a scene. In each position, the target is captured by the cameras while it is being illuminated by a set of projected patterns from various projectors. The translucent sheet allows the projected patterns to be visible from both sides, allowing correspondences between devices that face each other. The set of correspondences generated between the devices using this target are input into a bundle adjustment framework to estimate calibration parameters. We demonstrate the effectiveness of this approach on a multiview structured light system made of three projectors and nine cameras.
R.R. Garcia, A. Zakhor, "Geometric Calibration for a Multi-Camera-Projector System," Workshop on the Applications of Computer Vision (WACV) 2013. Clearwater Beach, Florida, January, 2013. [Adobe PDF]
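The bundle adjustment step mentioned in the abstract minimizes, over all device poses and target points, the squared reprojection error of the generated correspondences. A minimal sketch of the residual being minimized, assuming a simple pinhole model (function names are illustrative, not from the paper):

```python
import numpy as np

def project(K, R, t, X):
    """Project 3-D points X (3xN) through a pinhole device with
    intrinsics K, rotation R, and translation t; returns 2xN pixels."""
    Xc = R @ X + t[:, None]          # world -> device coordinates
    x = K @ Xc
    return x[:2] / x[2]              # perspective divide

def reprojection_residuals(K, R, t, X, observed):
    """Per-point pixel errors; bundle adjustment stacks these residuals
    over every camera and projector and minimizes their squared sum."""
    return (project(K, R, t, X) - observed).ravel()
```

A nonlinear least-squares solver would then iterate over these residuals jointly for all nine cameras and three projectors.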
Click to play video
Phase-shifted sinusoidal patterns have proven to be effective in structured light systems, which typically consist of a camera and a projector. They offer low decoding complexity, require as few as three projection frames per reconstruction, and are well suited for capturing dynamic scenes. In these systems, depth is reconstructed by determining the phase projected onto each pixel in the camera and establishing correspondences between camera and projector pixels. Typically, multiple periods are projected within the set of sinusoidal patterns, thus requiring phase unwrapping on the phase image before correspondences can be established. A second camera can be added to the structured light system to help with phase unwrapping. In this work, we present two consistent phase unwrapping methods for two-camera stereo structured light systems. The first method enforces viewpoint consistency by phase unwrapping in the projector domain. Loopy belief propagation is run over the graph of projector pixels to select pixel correspondences between the left and right camera that align in 3-D space and are spatially smooth in each 2-D image. The second method enforces temporal consistency by unwrapping across space and time. We combine a quality guided phase unwrapping approach with absolute phase estimates from the stereo cameras to solve for the absolute phase of connected regions. We present results for both methods to show their effectiveness on real world scenes.
R.R. Garcia, A. Zakhor, "Consistent Stereo-Assisted Absolute Phase Unwrapping Methods for Structured Light Systems," IEEE Journal on Selected Topics in Signal Processing, vol. 6, no. 5, pp. 411-424, Sept. 2012. [Adobe PDF]
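The wrapped phase that these methods start from comes from the standard three-step phase-shifting formula: given three captures of a sinusoid shifted by -2&pi;/3, 0, and +2&pi;/3, the phase at each pixel follows from an arctangent. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Recover the wrapped phase in (-pi, pi] from three captures of a
    sinusoid with phase shifts of -2*pi/3, 0, and +2*pi/3.
    Phase unwrapping must then resolve the 2*pi period ambiguity."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Because this only yields the phase modulo 2&pi;, any pattern with multiple sinusoidal periods requires the unwrapping methods the paper describes before camera-projector correspondences can be established.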
Phase-shifted sinusoids are commonly used as projection patterns in structured light systems consisting of projectors and cameras. They require few image captures per reconstruction and have low decoding complexity. Recently, structured light systems with a projector and a pair of stereo cameras have been used in order to overcome the traditional phase discontinuity problem and allow for the reconstruction of scenes with multiple objects. In this paper, we propose a new approach to the phase unwrapping process in such systems. Rather than iterating through all pixels in the two cameras to determine the global phase of each pixel, we iterate through the projector pixels to solve for correspondences between the two camera views. An energy minimization framework is applied to these initial estimated correspondences to enforce smoothness and to fill in missing pixels. Unlike existing approaches, our method allows simultaneous unwrapping of both camera images and enforces consistency across them. We demonstrate the effectiveness of our approach experimentally on a few scenes.
R.R. Garcia, A. Zakhor, "Projector Domain Phase Unwrapping in a Structured Light System with Stereo Cameras," 3DTV 2011, Antalya, Turkey, May 16-18, 2011. [Adobe PDF]
Phase shifted sinusoidal projection patterns are extensively used in structured light systems due to their low decoding complexity and projection of only three frames per reconstruction. However, they require correct unwrapping of phase images to reconstruct depth accurately. For time varying scenes, it is important for the phase unwrapped results to be temporally coherent. In this paper, we propose a spatio-temporal phase unwrapping algorithm for a structured light system made of a projector and a pair of stereo cameras. Unlike existing frame by frame spatial domain approaches, our proposed algorithm results in a temporally consistent threedimensional unwrapped phase volume for time varying scenes. We experimentally show the effectiveness of our approach in recovering absolute phase for scenes with multiple disjoint objects, significant motion, and large depth discontinuities.
R.R. Garcia, A. Zakhor, "Temporally-Consistent Phase Unwrapping for a Stereo-Assisted Structured Light System," 3DIMPVT 2011, Hangzhou, China, May 16-19, 2011. [Adobe PDF]
Comparison of our phase unwrapping method to T. Weise et al. Click to play video
T. Weise, B. Leibe, L. Van Gool, "Fast 3D Scanning with Automatic Motion Compensation," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-8, June 2007.
Selection of Temporally Dithered Codes for Increasing Virtual Depth of Field in Structured Light Systems
Temporally dithered codes have recently been used for depth reconstruction of fast dynamic scenes using off-the-shelf DLP projectors. Even though temporally dithered codes overcome the DLP projector's limited frame rate, limitations of the optics create challenges for using these codes in an actual structured light system. Specifically, to maximize the amount of light leaving the projector, projector lenses are designed to have large apertures, resulting in projected patterns that appear in focus over only a narrow depth of field. In this paper, we propose a method to design temporally dithered codes in order to extend the virtual depth of field of a structured light system. By simulating the PWM sequences of a DLP projector and the blurring process from the projector lens, we develop algorithms for designing and decoding projection patterns in the presence of out-of-focus blur. Our simulation results show a 47% improvement in the depth of field when compared against randomly selected codewords.
R.R. Garcia, A. Zakhor, "Selection of Temporally Dithered Codes for Increasing Virtual Depth of Field in Structured Light Systems," PROCAMS 2010, San Francisco, CA, June 2010. [Adobe PDF]
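The core idea of blur-aware code selection can be sketched simply: model defocus as a 1-D convolution of each temporal codeword, then keep only codewords whose blurred versions remain well separated and hence decodable. The greedy selection below is a simplified stand-in for the paper's design algorithm, with illustrative names:

```python
import numpy as np

def blur_code(code, kernel):
    """Simulate projector defocus on one temporal code as a 1-D convolution."""
    return np.convolve(code, kernel, mode="same")

def select_codes(candidates, kernel, n):
    """Greedy sketch: keep the n codewords whose blurred versions stay
    maximally separated, so they remain distinguishable when out of focus."""
    blurred = np.array([blur_code(c, kernel) for c in candidates])
    chosen = [0]                          # seed with the first candidate
    while len(chosen) < n:
        # Distance of every candidate to its nearest already-chosen code.
        d = np.min(
            [np.linalg.norm(blurred - blurred[j], axis=1) for j in chosen],
            axis=0)
        d[chosen] = -np.inf               # never re-pick a chosen code
        chosen.append(int(np.argmax(d)))
    return chosen
```

A wider blur kernel models a projection surface farther from the focal plane; codes surviving selection under several kernel widths give the extended virtual depth of field.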
In this paper, we describe an approach to simultaneously capture visual appearance and depth of a time-varying scene. Our approach is based on projecting structured infrared (IR) light. Specifically, we project a combination of (a) a static vertical IR stripe pattern, and (b) a horizontal IR laser line sweeping up and down the scene; at the same time, the scene is captured with an IR-sensitive camera. Since IR light is invisible to the human eye, it does not disturb human subjects or interfere with human activities in the scene; in addition, it does not affect the scene's visual appearance as recorded by a color video camera. Vertical lines in the IR frames are identified using the horizontal line, intra-frame tracking, and inter-frame tracking; depth along these lines is reconstructed via triangulation. Interpolating these sparse depth lines within the foreground silhouette of the recorded video sequence, we obtain a dense depth map for every frame in the video sequence. Experimental results corresponding to a dynamic scene with a human subject in motion are presented to demonstrate the effectiveness of our proposed approach.
C. Frueh and A. Zakhor, "Capturing 2 1/2 D Depth and Texture of Time-Varying Scenes Using Structured Infrared Light," PROCAMS Workshop, San Diego, CA, June 2005. Also presented at 3DIM, Ottawa, Canada, June 2005, pp. 318-325. [Adobe PDF]
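The triangulation step for an identified vertical stripe reduces to intersecting a camera ray with the known 3-D plane swept out by that stripe. A minimal sketch of this geometry, with illustrative names (not the paper's implementation):

```python
import numpy as np

def triangulate_stripe(ray_dir, cam_center, plane_n, plane_d):
    """Intersect a camera ray (cam_center + s * ray_dir) with the light
    plane {X : plane_n . X + plane_d = 0} swept out by one stripe.
    Returns the 3-D point where the ray meets the plane."""
    s = -(plane_n @ cam_center + plane_d) / (plane_n @ ray_dir)
    return cam_center + s * ray_dir
```

Each calibrated stripe defines one such plane, so once a stripe is identified in the IR frame, every pixel on it yields a depth sample along this ray-plane intersection.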
The videos below show some of our recent results for static scene capture as well as dynamic scenes.
Multi-view Dynamic Point Cloud: Click to play video
Captured Point Cloud: Click to play video
Intensity Textured Point Cloud: Click to play video
Dynamic Intensity Textured Point Cloud: Click to play video