This paper presents preliminary work on using a commercial time-of-flight depth camera for real-time 3D scene reconstruction of underwater objects. Typical RGB stereo imaging for 3D capture suffers from blur and haziness caused by water turbidity, in addition to a critical dependence on natural or artificial light sources. We propose a method for repurposing the low-cost Microsoft Kinect™ time-of-flight camera for the underwater environment, enabling dense depth data acquisition that can be processed in real time. Our motivation is the ease of use and low cost of the device for high-quality real-time scene reconstruction compared to multi-view stereo cameras, albeit at a smaller range. Preliminary results of depth data acquisition and surface reconstruction in an underwater environment are also presented. The novelty of our work is the utilization of the Kinect depth camera for real-time 3D mesh reconstruction, and our main objective is to develop an economical and compact solution for underwater 3D mapping.
This paper presents preliminary results of using a commercial time-of-flight depth camera for 3D scanning of underwater objects. Generating accurate and detailed 3D models of objects in an underwater environment is a challenging task. This work presents experimental results of using the Microsoft Kinect™ v2 depth camera for dense underwater depth data acquisition, which yields reasonable 3D scan data, albeit at a reduced scanning range. The motivations for this research are the user friendliness and low cost of the device compared to multi-view stereo cameras or marine-hardened laser scanning solutions and equipment. Preliminary results of underwater point cloud generation and volumetric reconstruction are also presented. The novelty of this work is the utilization of the Kinect depth camera for real-time 3D mesh reconstruction, and the main objective is to develop an economical and compact solution for underwater 3D scanning.
One of the major research directions in robotic vision focuses on calculating the real-world size of objects in a scene using stereo imaging. This information can be used in robot decision making, manipulator localization in the workspace, path planning and collision avoidance, augmented reality, object classification in images, and other areas that require object size as a feature. In this paper we present a novel approach to calculating real-world object size from an RGB-D image acquired with the Microsoft Kinect™, as an alternative to stereo-imaging-based approaches. We introduce a dynamic resolution matrix that estimates the size of each pixel in an image in real-world units. The main objective is to convert the size of objects represented in the image in pixels to real-world units (such as feet, inches, etc.). We verify our results using publicly available open-source RGB-D datasets. The experimental results show that our approach provides accurate measurements.
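The pixel-to-metric conversion described above can be sketched under a standard pinhole camera model, where the real-world extent covered by one pixel grows linearly with depth. The function names, and the focal length and depth values in the example, are illustrative assumptions, not the paper's calibration data:

```python
# Minimal sketch of per-pixel real-world size estimation from an RGB-D
# frame, assuming a pinhole camera model. The focal length (in pixels)
# and depth values below are illustrative, not calibrated parameters.

def pixel_size_mm(depth_mm: float, focal_px: float) -> float:
    """Approximate side length (mm) covered by one pixel at a given depth."""
    return depth_mm / focal_px

def build_resolution_matrix(depth_map, focal_px):
    """Map each depth-map entry to its approximate real-world pixel size (mm)."""
    return [[pixel_size_mm(d, focal_px) for d in row] for row in depth_map]

def object_size_mm(extent_px: int, depth_mm: float, focal_px: float) -> float:
    """Convert an object's extent in pixels to millimetres at a given depth."""
    return extent_px * pixel_size_mm(depth_mm, focal_px)

if __name__ == "__main__":
    # A 100-pixel-wide object at 1 m depth, with a 365 px focal length
    # (a plausible order of magnitude for a Kinect v2 depth sensor):
    print(round(object_size_mm(100, 1000.0, 365.0), 1))  # ≈ 274.0 mm
```

Under this model, the "dynamic resolution matrix" is simply the depth map rescaled by the focal length, so it can be updated per frame at negligible cost.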
Commercial time-of-flight depth cameras such as the Kinect v2 have been shown to perform underwater for 3D scene reconstruction of underwater objects. However, to account for the additional noise and the effect of refraction caused by the change in imaging medium, a customized implementation of the underlying scene reconstruction algorithms, together with additional correction filters, is required. This paper presents the details and performance of such a graphical user interface developed for the Kinect v2. The GUI is a customized implementation of the Kinect Fusion framework and incorporates underwater camera calibration, noise filtering, time-of-flight correction, and refraction correction filters developed to adapt Kinect Fusion to 3D scene reconstruction in an underwater environment. The user interface and the effect of the various sub-functions and additional correction filters on the performance of Kinect Fusion reconstruction are discussed in detail.
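The two medium-dependent corrections mentioned above can be illustrated with a simplified sketch: a phase-based time-of-flight range reads long in water because light propagates more slowly there, and a flat housing port bends each ray according to Snell's law. The function names and the refractive-index value are illustrative assumptions, not the filters actually implemented in the GUI:

```python
import math

# Illustrative sketch of underwater ToF corrections, under simplifying
# assumptions: (1) the raw range overestimates geometric distance by the
# refractive index of water, and (2) rays cross a flat air/water port.
# Names and constants are assumptions for illustration only.

N_WATER = 1.33  # approximate refractive index of water relative to air

def tof_range_correction(measured_mm: float, n: float = N_WATER) -> float:
    """Light travels slower in water, so the raw ToF range reads long;
    dividing by the refractive index recovers the geometric distance."""
    return measured_mm / n

def refraction_angle(theta_air_rad: float, n: float = N_WATER) -> float:
    """Snell's law at a flat air/water interface:
    sin(theta_air) = n * sin(theta_water)."""
    return math.asin(math.sin(theta_air_rad) / n)
```

For example, a raw reading of 1330 mm would correct to roughly 1000 mm, and a ray entering the port at 30° in air continues at a shallower angle in water; in a full pipeline these per-ray corrections would be applied before the depth frame is passed to the fusion stage.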