Customized graphical user interface implementation of Kinect Fusion for underwater application

Conference paper
Atif Anwer, Syed Saad Azhar Ali*, Fabrice Mériaudeau
IEEE 7th International Conference on Underwater System Technology: Theory and Applications (USYS2017)
Publication year: 2017

Commercial time-of-flight depth cameras such as the Kinect v2 have been shown to perform underwater for 3D scene reconstruction of underwater objects. However, to account for the additional noise and for the effect of refraction caused by the change in imaging medium, a customized implementation of the underlying scene reconstruction algorithms, together with purpose-built correction filters, is required. This paper presents the details and performance of such a graphical user interface developed for the Kinect v2. The GUI is a customized implementation of the Kinect Fusion framework and incorporates underwater camera calibration, noise filtering, time-of-flight correction and refraction correction filters developed to adapt Kinect Fusion for 3D scene reconstruction in an underwater environment. The user interface and the effect of the various sub-functions and correction filters on the quality of the Kinect Fusion reconstruction are discussed in detail.
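The abstract does not reproduce the correction filters themselves; as a rough Python illustration of the kind of corrections described (not the authors' implementation; the function names, the fresh-water refractive index of 1.33, and the use of a median filter for noise suppression are all assumptions):

    import numpy as np
    from scipy.ndimage import median_filter

    N_WATER = 1.33  # assumed refractive index of fresh water

    def denoise_depth(depth_m):
        """Noise suppression for an underwater depth frame; a simple
        stand-in for the paper's dedicated noise filters."""
        return median_filter(depth_m, size=3)

    def correct_tof_depth(depth_m):
        """First-order time-of-flight correction: the sensor converts
        round-trip time to range assuming the speed of light in air,
        but in water light travels ~1.33x slower, so the reported
        range overestimates the true range by that factor."""
        return depth_m / N_WATER

    def correct_intrinsics(fx, fy):
        """Paraxial flat-port refraction model: refraction at the flat
        housing window narrows the field of view, which to first order
        scales the effective focal lengths by the refractive index."""
        return fx * N_WATER, fy * N_WATER

Under these assumptions, a per-frame pipeline would denoise the raw depth map, rescale the measured ranges, and hand the corrected frame to Kinect Fusion along with the rescaled intrinsics.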

Underwater online 3D mapping and scene reconstruction

Conference paper
A. Anwer, S. S. A. Ali, F. Mériaudeau
2016 6th International Conference on Intelligent and Advanced Systems (ICIAS)
Publication year: 2016

Real-Time Underwater 3D Scene Reconstruction Using Commercial Depth Sensor

Conference paper
A. Anwer, S. S. A. Ali, Amjad Khan, F. Mériaudeau
IEEE 6th International Conference on Underwater System Technology: Theory and Applications (USYS2016). 13-14 December 2016
Publication year: 2016

This paper presents preliminary work on utilizing a commercial time-of-flight depth camera for real-time 3D scene reconstruction of underwater objects. Typical RGB stereo imaging for 3D capture suffers from blur and haziness due to the turbidity of water, in addition to a critical dependence on natural or artificial light sources. We propose a method for repurposing the low-cost Microsoft Kinect™ time-of-flight camera for the underwater environment, enabling dense depth-data acquisition that can be processed in real time. Our motivation is the ease of use and low cost of the device for high-quality real-time scene reconstruction compared to multi-view stereo cameras, albeit at a smaller range. Preliminary results of depth-data acquisition and surface reconstruction in an underwater environment are also presented. The novelty of our work is the utilization of the Kinect depth camera for real-time 3D mesh reconstruction, and our main objective is to develop an economical and compact solution for underwater 3D mapping.
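As a concrete sketch of the dense depth-data acquisition step, the following Python snippet back-projects a 512x424 Kinect v2 depth frame into a metric point cloud using the standard pinhole model (the intrinsic values are nominal in-air assumptions, not the paper's calibration; an underwater calibration would replace them):

    import numpy as np

    FX, FY = 365.0, 365.0   # assumed Kinect v2 focal lengths (pixels)
    CX, CY = 256.0, 212.0   # assumed principal point (512x424 frame)

    def depth_to_point_cloud(depth_mm):
        """Back-project a Kinect v2 depth frame (millimetres) into an
        Nx3 metric point cloud via the pinhole camera model."""
        h, w = depth_mm.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_mm.astype(np.float32) / 1000.0   # mm -> m
        x = (u - CX) * z / FX
        y = (v - CY) * z / FY
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]                  # drop invalid pixels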

Underwater 3D scanning using Kinect v2 time of flight camera

Conference paper
A. Anwer, S. S. A. Ali, A. Khan, F. Mériaudeau
13th International Conference on Quality Control by Artificial Vision (QCAV2017). 14-16 May 2017
Publication year: 2017

This paper presents preliminary results of using a commercial time-of-flight depth camera for 3D scanning of underwater objects. Generating accurate and detailed 3D models of objects in an underwater environment is a challenging task. This work presents experimental results of using the Microsoft Kinect™ v2 depth camera for dense underwater depth-data acquisition, which yields reasonable 3D scan data, albeit over a smaller scanning range. The motivations for this research are the user-friendliness and low cost of the device compared to multi-view stereo cameras or marine-hardened laser scanning equipment. Preliminary results of underwater point-cloud generation and volumetric reconstruction are also presented. The novelty of this work is the utilization of the Kinect depth camera for real-time 3D mesh reconstruction, and the main objective is to develop an economical and compact solution for underwater 3D scanning.
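The volumetric reconstruction stage is not detailed in the abstract; a minimal TSDF-fusion sketch using the open-source Open3D library (an assumed stand-in for a KinectFusion-style pipeline; the intrinsics, voxel size, and truncation distance are illustrative values only) could look like this:

    import numpy as np
    import open3d as o3d

    # Assumed Kinect v2 depth-stream intrinsics (512x424 frames).
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        512, 424, 365.0, 365.0, 256.0, 212.0)

    # Scalable TSDF volume, as used in KinectFusion-style pipelines.
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.005, sdf_trunc=0.02,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.NoColor)

    def integrate_frame(depth_mm, pose):
        """Fuse one uint16 depth frame (millimetres) captured at the
        given 4x4 camera-to-world pose into the TSDF volume."""
        depth = o3d.geometry.Image(depth_mm.astype(np.uint16))
        color = o3d.geometry.Image(np.zeros((424, 512, 3), np.uint8))
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_scale=1000.0, depth_trunc=3.0,
            convert_rgb_to_intensity=False)
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

    # After all frames: mesh = volume.extract_triangle_mesh()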

Calculating real-world object dimensions from Kinect RGB-D image using dynamic resolution

Conference paper
A. Anwer, A. Baig, R. Nawaz
2015 12th International Bhurban Conference on Applied Sciences and Technology (IBCAST)
Publication year: 2015

One of the major research directions in robotic vision focuses on calculating the real-world size of objects in a scene using stereo imaging. This information can be used for robot decision making, manipulator localization in the workspace, path planning and collision prevention, augmented reality, object classification in images, and other areas that require object size as a feature. In this paper we present a novel approach to calculating real-world object size from an RGB-D image acquired with the Microsoft Kinect™, as an alternative to stereo-imaging-based approaches. We introduce a dynamic resolution matrix that estimates the size of each pixel in an image in real-world units. The main objective is to convert the size of objects represented in the image from pixels to real-world units (such as feet or inches). We verify our results using publicly available open-source RGB-D datasets. The experimental results show that our approach provides accurate measurements.
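As a worked illustration of the idea (not the paper's exact formulation): under a pinhole model with focal lengths fx and fy in pixels, a pixel imaging a surface at depth Z spans roughly Z/fx metres horizontally and Z/fy metres vertically, so each pixel's real-world footprint grows linearly with its depth. The sketch below builds such a per-pixel resolution matrix from a depth frame; the focal-length values are assumptions:

    import numpy as np

    FX, FY = 525.0, 525.0  # assumed Kinect (v1) focal lengths in pixels

    def dynamic_resolution(depth_m):
        """Per-pixel real-world footprint (metres/pixel, HxWx2):
        a pixel at depth Z covers Z/fx m horizontally and Z/fy m
        vertically under the pinhole model."""
        return np.stack([depth_m / FX, depth_m / FY], axis=-1)

    def object_size(px_w, px_h, depth_m):
        """Convert an object's bounding box in pixels to metres,
        given its (roughly constant) depth in metres."""
        return px_w * depth_m / FX, px_h * depth_m / FY

Multiplying an object's pixel extent by the resolution of the pixels it occupies then yields its real-world dimensions, which can be converted to feet or inches as needed.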