Commercial depth cameras have recently been shown to work underwater, with a trade-off in measurable depth range but with several advantages over conventional underwater depth acquisition sensors such as sonar and LiDAR. The chief advantages are real-time 3D reconstruction and significantly better accuracy for small-scale 3D scanning of submerged objects. Because they avoid the problems that affect conventional imaging cameras underwater, such as dependence on lighting and sensitivity to water turbidity, commercial depth cameras can open a new direction in small-scale 3D scene reconstruction. This paper extends our previous work, which provided proof of concept that the Microsoft Kinect v2, a time-of-flight depth sensor, can perform real-time 3D scanning in an underwater environment, albeit at a shorter range. The time-of-flight sensing principle, however, introduces several depth measurement errors underwater; preliminary results after correcting the measured distances are provided in this work. Furthermore, since the RGB and NIR cameras of the Kinect v2 are not designed to operate underwater, camera calibration has been performed on underwater images acquired from the Kinect v2 to account for the unwanted effects on the depth values, and the results are elaborated. Finally, a fast, accurate, and intuitive refraction correction method has been developed that corrects the reconstructed 3D mesh in real time.
DOI: 10.1109/ACCESS.2017.2733003
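A minimal sketch of the kind of first-order time-of-flight distance correction described above, assuming the dominant in-water error is the slower speed of light in the medium; the function name, NumPy usage, and exact refractive index are illustrative rather than taken from the paper:

```python
import numpy as np

# Approximate refractive index of water in the visible/NIR band. A ToF sensor
# times its light pulses assuming propagation at c, so in-water ranges are
# over-reported by roughly this factor.
N_WATER = 1.33

def correct_tof_depth(depth_mm: np.ndarray) -> np.ndarray:
    """First-order rescaling of raw in-water ToF depth values (illustrative)."""
    return depth_mm / N_WATER
```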
Commercial time-of-flight depth cameras such as the Kinect v2 have been shown to perform underwater 3D scene reconstruction of submerged objects. However, to handle the additional noise and the effect of refraction caused by the change in imaging medium, a customized implementation of the underlying scene reconstruction algorithms, together with purpose-built correction filters, is required. This paper presents the details and performance of such a graphical user interface developed for the Kinect v2. The GUI is a customized implementation of the Kinect Fusion framework and incorporates underwater camera calibration, noise filtering, time-of-flight correction, and refraction correction filters developed to adapt Kinect Fusion to 3D scene reconstruction in an underwater environment. The user interface and the effect of its various sub-functions and correction filters on the quality of Kinect Fusion reconstruction are discussed in detail.
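As an illustration of how the per-frame filters named above might be chained ahead of the fusion step, the sketch below mirrors the filter stages the GUI exposes; the class, method names, and thresholds are hypothetical stand-ins, and the actual tool is built on the Kinect Fusion SDK rather than a Python pipeline:

```python
import numpy as np

class DepthFilterPipeline:
    """Chains per-frame depth filters before frames reach the fusion backend."""

    def __init__(self, n_water: float = 1.33, max_range_mm: float = 950.0):
        self.n_water = n_water            # refractive index used for ToF rescaling
        self.max_range_mm = max_range_mm  # illustrative reduced in-water range

    def noise_filter(self, depth: np.ndarray) -> np.ndarray:
        # Zero out implausible returns (dropouts and beyond-range values).
        out = depth.copy()
        out[(out <= 0) | (out > self.max_range_mm)] = 0
        return out

    def tof_correction(self, depth: np.ndarray) -> np.ndarray:
        # First-order rescale for the slower speed of light in water.
        return depth / self.n_water

    def process(self, depth: np.ndarray) -> np.ndarray:
        return self.tof_correction(self.noise_filter(depth))
```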
One of the major research directions in robotic vision focuses on calculating the real-world size of objects in a scene using stereo imaging. This information can be used in robot decision making, manipulator localization in the workspace, path planning and collision avoidance, augmented reality, object classification in images, and other areas that require object size as a feature. In this paper we present a novel approach to calculating real-world object size from an RGB-D image acquired with the Microsoft Kinect™, as an alternative to stereo-imaging-based approaches. We introduce a dynamic resolution matrix that estimates the size of each image pixel in real-world units. The main objective is to convert the size of objects represented in the image from pixels to real-world units (such as feet or inches). We verify our results using publicly available open-source RGB-D datasets, and the experimental results show that our approach provides accurate measurements.
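Under a pinhole camera model, a pixel that images a surface at depth Z covers roughly Z/f world units, and this per-pixel size estimate is what a resolution matrix captures. The sketch below illustrates that geometry only; the function name and metre units are assumptions, with fx and fy denoting the camera's focal lengths in pixels:

```python
import numpy as np

def resolution_matrices(depth_m: np.ndarray, fx: float, fy: float):
    """Per-pixel real-world footprint (metres per pixel) under a pinhole model."""
    return depth_m / fx, depth_m / fy

# Example: an object spanning columns c0..c1 on image row r has an approximate
# real-world width of res_x[r, c0:c1].sum() metres, where
# res_x, res_y = resolution_matrices(depth_m, fx, fy).
```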
This paper presents preliminary work on utilizing a commercial time-of-flight depth camera for real-time 3D scene reconstruction of underwater objects. Typical RGB stereo imaging for 3D capture suffers from blur and haziness caused by water turbidity, in addition to a critical dependence on natural or artificial light sources. We propose a method for repurposing the low-cost Microsoft Kinect™ time-of-flight camera for the underwater environment, enabling dense depth data acquisition that can be processed in real time. Our motivation is the ease of use and low cost of the device for high-quality real-time scene reconstruction compared to multi-view stereo cameras, albeit at a smaller range. Preliminary results of depth data acquisition and surface reconstruction in the underwater environment are also presented. The novelty of our work is the utilization of the Kinect depth camera for real-time 3D mesh reconstruction, and our main objective is to develop an economical and compact solution for underwater 3D mapping.
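A minimal sketch of the standard back-projection that turns a dense depth frame into 3D points for surface reconstruction, assuming pinhole intrinsics (fx, fy, cx, cy); this is the generic technique, not the paper's implementation:

```python
import numpy as np

def depth_to_points(depth_m: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map into an (N, 3) point cloud in camera space."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # discard invalid (zero-depth) returns
```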
This paper presents preliminary results of using a commercial time-of-flight depth camera for 3D scanning of underwater objects. Generating accurate and detailed 3D models of objects in an underwater environment is a challenging task. This work presents experimental results of using the Microsoft Kinect™ v2 depth camera for dense underwater depth data acquisition, which yields reasonable 3D scan data, albeit at a reduced scanning range. The motivations for this research are the user-friendliness and low cost of the device compared to multi-view stereo cameras or marine-hardened laser scanning equipment. Preliminary results of underwater point cloud generation and volumetric reconstruction are also presented. The novelty of this work is the utilization of the Kinect depth camera for real-time 3D mesh reconstruction, and the main objective is to develop an economical and compact solution for underwater 3D scanning.
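Volumetric reconstruction in KinectFusion-style pipelines typically integrates each depth frame into a truncated signed distance field (TSDF). The sketch below shows one such integration step under simplifying assumptions (voxel centres already expressed in camera coordinates, projective distance measured along the optical axis); it illustrates the general technique rather than this paper's code:

```python
import numpy as np

def integrate_frame(tsdf, weights, voxels_cam, depth_m,
                    fx, fy, cx, cy, trunc=0.03):
    """Fuse one depth frame into a running TSDF (simplified KinectFusion update).

    tsdf, weights : flat (N,) arrays over N voxels
    voxels_cam    : (N, 3) voxel centres in camera coordinates (metres)
    """
    x, y, z = voxels_cam.T
    h, w = depth_m.shape
    # Project voxel centres into the depth image.
    in_front = z > 0
    u = np.full(z.shape, -1, dtype=int)
    v = np.full(z.shape, -1, dtype=int)
    u[in_front] = np.round(x[in_front] * fx / z[in_front] + cx).astype(int)
    v[in_front] = np.round(y[in_front] * fy / z[in_front] + cy).astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = depth_m[v[valid], u[valid]]
    # Projective signed distance along the optical axis, truncated to [-1, 1].
    sdf = d - z[valid]
    keep = (d > 0) & (sdf > -trunc)
    idx = np.flatnonzero(valid)[keep]
    s = np.clip(sdf[keep] / trunc, -1.0, 1.0)
    # Weighted running average, as in the original KinectFusion formulation.
    tsdf[idx] = (tsdf[idx] * weights[idx] + s) / (weights[idx] + 1.0)
    weights[idx] += 1.0
```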