Underwater 3D Scene Reconstruction Using Kinect v2 Based on Physical Models for Refraction and Time of Flight Correction

Journal Article
A. Anwer, S. S. A. Ali, Amjad Khan, F. Mériaudeau
IEEE Access, 2017 - (Q1, IF: 3.557)
Publication year: 2017

Commercial depth cameras have recently been tested to work underwater with a trade-off in measured depth distance, while providing several advantages over conventional depth acquisition sensors such as sonars and LiDARs. The biggest advantage is real-time 3D reconstruction with significantly better accuracy for small-scale 3D scanning of submerged objects. Since the traditional issues faced by normal imaging cameras, such as dependence on light and the turbidity of water, are avoided, commercial depth cameras can open a new direction in small-scale 3D scene reconstruction. This paper is an extension of our previous work, in which we provided proof of concept that the Microsoft Kinect v2, a time-of-flight depth sensor, provides real-time 3D scanning in an underwater environment, albeit at a shorter distance. However, the time-of-flight sensor exhibited several depth measurement issues underwater. Preliminary results after correction of the measured distance are also provided in this work. Furthermore, the RGB and NIR cameras of the Kinect v2 are not designed to perform underwater. To cater for the unwanted effects in the depth values, camera calibration has been performed on underwater images acquired from the Kinect v2, and the results are elaborated. A fast, accurate, and intuitive refraction correction method has been developed, providing real-time correction to the created 3D mesh.
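The first-order intuition behind the distance correction can be sketched as follows; the refractive index value and the purely linear rescaling are illustrative assumptions, not the paper's calibrated physical model:

```python
# First-order underwater time-of-flight correction (illustrative sketch).
# A ToF sensor computes distance assuming light travels at its in-air speed;
# in water light propagates more slowly by roughly the refractive index
# n ~ 1.33, so a raw reading overestimates the true range and can be
# rescaled by 1/n as a first approximation.

N_WATER = 1.33  # approximate refractive index of water for near-infrared light

def correct_tof_distance(raw_mm: float, n: float = N_WATER) -> float:
    """Rescale a raw ToF distance (in mm) to first order for in-water use."""
    return raw_mm / n

# A raw reading of 1330 mm then corresponds to roughly 1000 mm of true range.
```

In the paper this step is combined with underwater camera calibration and refraction correction; the snippet only shows the linear rescaling idea.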

DOI: 10.1109/ACCESS.2017.2733003

Customized graphical user interface implementation of Kinect Fusion for underwater application

Conference paper
Atif Anwer, Syed Saad Azhar Ali*, Fabrice Mériaudeau
IEEE 7th International Conference on Underwater System Technology: Theory and Applications (USYS2017)
Publication year: 2017

Commercial time-of-flight depth cameras such as the Kinect v2 have been shown to perform underwater for 3D scene reconstruction of submerged objects. However, to account for the additional noise and for the effect of refraction due to the change in imaging medium, a customized implementation of the underlying scene reconstruction algorithms, together with the additionally developed filters, is needed. This paper presents the details and performance of such a graphical user interface developed for the Kinect v2. The GUI is a customized implementation of the Kinect Fusion framework and incorporates underwater camera calibration, noise filtering, time-of-flight correction, and refraction correction filters developed to adapt Kinect Fusion for 3D scene reconstruction in an underwater environment. Details of the user interface and the effect of the various sub-functions and additional correction filters on the performance of Kinect Fusion reconstruction are discussed.

Underwater image enhancement by wavelet based fusion

Co-Author
Amjad Khan ; Syed Saad Azhar Ali ; Aamir Saeed Malik ; Atif Anwer ; Fabrice Meriaudeau
Underwater System Technology: Theory and Applications (USYS), IEEE International Conference on
Publication year: 2016

Images captured in water are hazy due to several effects of the underwater medium. These effects are governed by suspended particles that lead to absorption and scattering of light during the image formation process. The underwater medium is unfriendly to imaging and introduces low contrast and faded colors. Therefore, during any image-based exploration and inspection activity, it is essential to enhance the imaging data before further processing. This paper presents a wavelet-based fusion method to enhance hazy underwater images by addressing the low-contrast and color-alteration issues. Publicly available hazy underwater images are enhanced and analyzed qualitatively against some state-of-the-art methods. A quantitative study of image quality shows promising results.
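As a rough illustration of the fusion idea (a single-level Haar transform, average-fused approximation coefficients, and max-magnitude detail selection are all assumptions here, not the paper's exact pipeline):

```python
import numpy as np

# Sketch of wavelet-based fusion for two pre-processed versions of a hazy
# image (e.g. a contrast-stretched and a white-balanced version): decompose
# both with a single-level Haar DWT, average the approximation bands, and
# keep the detail coefficients with the larger magnitude to preserve edges.

def haar_dwt2(img):
    """Single-level 2D Haar transform of an even-sized grayscale array."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # approximation
    lh = (a + b - c - d) / 4.0  # horizontal detail
    hl = (a - b + c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img1, img2):
    """Fuse two same-sized grayscale images in the Haar wavelet domain."""
    c1, c2 = haar_dwt2(img1), haar_dwt2(img2)
    ll = (c1[0] + c2[0]) / 2.0  # average the low-frequency content
    details = [np.where(np.abs(x) >= np.abs(y), x, y)  # keep stronger edges
               for x, y in zip(c1[1:], c2[1:])]
    return haar_idwt2(ll, *details)
```

Fusing an image with itself returns the image unchanged, which is a convenient sanity check on the transform pair.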

Control of autonomous underwater vehicle based on visual feedback for pipeline inspection

Co-Author
Amjad Khan ; Syed Saad Azhar Ali ; Aamir Saeed Malik ; Atif Anwer ; Nur Afande Ali Hussain ; Fabrice Meriaudeau
Robotics and Manufacturing Automation (ROMA), 2016 2nd IEEE International Symposium on
Publication year: 2016

For everyday inspection jobs in the offshore oil and gas industry, human divers are being replaced by underwater vehicles. This paper proposes visual-feedback-based control of an autonomous underwater vehicle for pipeline inspection. Hydrodynamic disturbances in water severely affect the movement of the vehicle, degrading its performance. The heading of the autonomous underwater vehicle under such disturbances is controlled using visual feedback to track the pipeline for inspection. The proposed method does not demand expensive position feedback devices such as an underwater acoustic positioning system. By using the vehicle's built-in camera and a few image processing techniques, a simpler, easier, and low-cost solution is proposed. A performance evaluation of the proposed technique on sample underwater images is also presented.
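The control idea can be illustrated with a minimal proportional law on the pipeline's horizontal position in the camera frame; the gain, the normalisation, and the assumption that the pipeline centroid is already detected are all hypothetical simplifications, not the paper's controller:

```python
# Minimal sketch of vision-based heading control for pipeline tracking:
# given the pipeline's detected horizontal centroid (in pixels), compute a
# normalised proportional yaw command that steers the vehicle so the
# pipeline stays centred in the frame.

def heading_correction(pipeline_column: float, image_width: int,
                       kp: float = 0.5) -> float:
    """Proportional yaw command in [-kp, kp]: negative steers left,
    positive steers right, zero when the pipeline is centred."""
    half = image_width / 2.0
    error = (pipeline_column - half) / half  # normalised pixel offset
    return kp * error
```

A centroid at column 480 in a 640-pixel-wide frame yields a positive command (steer right); at column 320 the command is zero.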

Underwater surveying and mapping using rotational potential fields for multiple autonomous vehicles

Co-Author
David McIntyre ; Wasif Naeem ; Syed Saad Azhar Ali ; Atif Anwer
IEEE 6th International Conference on Underwater System Technology: Theory and Applications (USYS2016). 13-14 December
Publication year: 2016

This paper presents a new technique for exploration and mapping/surveying of underwater infrastructure and/or objects of interest using multiple autonomous underwater vehicles (AUVs). The proposed method employs rotational potential fields and extends them for use on multiple vehicles within a three-dimensional environment. An inter-vehicle fluid formation is maintained throughout, free of angular constraints (or the need for a virtual vehicle). When an object of interest is approached, the formation splits and follows a smooth trajectory around opposite sides of its boundary. To fully utilise the potential of rotational fields, a unique local 2D plane is created around every object within the 3D environment, which is employed for boundary coverage. Traditional artificial potential fields are used to guide vehicles towards each object in turn (and maintain the fluid formation), while rotational fields are employed within the local 2D plane, providing a smooth trajectory around opposing sides of every object. Simulation results show the method to be effective, providing a more stable trajectory. Comparison with the standard technique shows that the formation is maintained throughout and that overall journey time is significantly reduced.
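The contrast with conventional repulsive fields can be sketched in the local 2D plane: a standard attractive term pulls the vehicle toward the goal, while within an obstacle's influence radius the radial repulsive direction is rotated by 90 degrees so the vehicle circulates around the boundary instead of stalling. The gains and radii below are assumed for illustration, not taken from the paper:

```python
import math

# Illustrative 2D rotational potential field: attraction toward a goal plus
# a tangential (90-degree-rotated) term around an obstacle inside its
# influence radius, yielding a smooth orbit around the boundary.

def field(pos, goal, obstacle, r_influence=5.0, k_att=1.0, k_rot=2.0):
    """Return the (ax, ay) field vector at position pos."""
    # Conventional attractive component toward the goal.
    ax = k_att * (goal[0] - pos[0])
    ay = k_att * (goal[1] - pos[1])
    # Rotational component: active only inside the obstacle's influence.
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if 0.0 < d < r_influence:
        mag = k_rot * (1.0 / d - 1.0 / r_influence)
        # Rotate the outward radial direction by +90 degrees: (dx, dy) -> (-dy, dx).
        ax += mag * (-dy / d)
        ay += mag * (dx / d)
    return ax, ay
```

Far from any obstacle the field reduces to pure attraction; near the obstacle the added term is tangential, which is what produces the smooth boundary-following trajectory.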

Calculating real-world object dimensions from Kinect RGB-D image using dynamic resolution

Conference paper
A. Anwer, A. Baig, R. Nawaz
2015 12th International Bhurban Conference on Applied Sciences and Technology (IBCAST)
Publication year: 2015

One of the major research directions in robotic vision focuses on calculating the real-world size of objects in a scene using stereo imaging. This information can be used in decision making for robots, manipulator localization in the workspace, path planning and collision prevention, augmented reality, object classification in images, and other areas that require object sizes as a feature. In this paper we present a novel approach to calculate real-world object size using an RGB-D image acquired from the Microsoft Kinect™, as an alternative to stereo-imaging-based approaches. We introduce a dynamic resolution matrix that estimates the size of each pixel in an image in real-world units. The main objective is to convert the size of objects represented in the image from pixels to real-world units (such as feet or inches). We verify our results using publicly available open-source RGB-D datasets. The experimental results show that our approach provides accurate measurements.
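A simplified per-pixel size estimate can be derived from a pinhole model: the footprint of a pixel grows linearly with depth and with the tangent of half the field of view. The field-of-view figures below are commonly cited Kinect values and are assumptions here; the paper's dynamic resolution matrix is not reproduced:

```python
import math

# Sketch of the per-pixel real-world footprint underlying the "dynamic
# resolution" idea: at depth Z, the sensor's horizontal FOV spans a real
# width of 2*Z*tan(fov/2), divided evenly among the image columns.

def pixel_size_mm(depth_mm, image_w=640, image_h=480,
                  fov_h_deg=57.0, fov_v_deg=43.0):
    """Approximate real-world width and height (mm) covered by one pixel
    at the given depth, assuming a pinhole camera model."""
    w = 2.0 * depth_mm * math.tan(math.radians(fov_h_deg) / 2.0) / image_w
    h = 2.0 * depth_mm * math.tan(math.radians(fov_v_deg) / 2.0) / image_h
    return w, h

# An object spanning 100 columns at 1 m depth is then roughly 100 * w mm wide.
```

Because the footprint scales linearly with depth, doubling the depth exactly doubles the per-pixel size, which is the "dynamic" part of the resolution estimate.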

Underwater online 3D mapping and scene reconstruction

Conference paper
A. Anwer, S. S. A. Ali, F. Mériaudeau
2016 6th International Conference on Intelligent and Advanced Systems (ICIAS)
Publication year: 2016

Real-Time Underwater 3D Scene Reconstruction Using Commercial Depth Sensor

Conference paper
A. Anwer, S. S. A. Ali, Amjad Khan, F. Mériaudeau
IEEE 6th International Conference on Underwater System Technology: Theory and Applications (USYS2016). 13-14 December
Publication year: 2016

This paper presents preliminary work to utilize a commercial time-of-flight depth camera for real-time 3D scene reconstruction of underwater objects. Typical RGB stereo camera imaging for 3D capture suffers from blur and haziness due to the turbidity of water, in addition to a critical dependence on light from either natural or artificial sources. We propose a method for repurposing the low-cost Microsoft Kinect™ time-of-flight camera for the underwater environment, enabling dense depth data acquisition that can be processed in real time. Our motivation is the ease of use and low cost of the device for high-quality real-time scene reconstruction as compared to multi-view stereo cameras, albeit at a smaller range. Preliminary results of depth data acquisition and surface reconstruction in an underwater environment are also presented. The novelty of our work is the utilization of the Kinect depth camera for real-time 3D mesh reconstruction, and our main objective is to develop an economical and compact solution for underwater 3D mapping.

Underwater 3D scanning using Kinect v2 time of flight camera

Conference paper
A. Anwer, S. S. A. Ali, A. Khan, F. Mériaudeau
13th International Conference on Quality Control by Artificial Vision (QCAV2017). 14-16 May 2017
Publication year: 2017

This paper presents preliminary results of using a commercial time-of-flight depth camera for 3D scanning of underwater objects. Generating accurate and detailed 3D models of objects in an underwater environment is a challenging task. This work presents experimental results of using the Microsoft Kinect™ v2 depth camera for dense underwater depth data acquisition, which yields reasonable 3D scan data, albeit at a smaller scanning range. Motivations for this research are the user-friendliness and low cost of the device as compared to multi-view stereo cameras or marine-hardened laser scanning solutions and equipment. Preliminary results of underwater point cloud generation and volumetric reconstruction are also presented. The novelty of this work is the utilization of the Kinect depth camera for real-time 3D mesh reconstruction, and the main objective is to develop an economical and compact solution for underwater 3D scanning.

Subsea Pipeline Corrosion Estimation by Restoring and Enhancing Degraded Underwater Images

Co-Author, Journal Article
Amjad Khan, Syed Saad Azhar Ali, Atif Anwer, Syed Hasan Adil, Fabrice Mériaudeau
IEEE Access, vol. 6, pp. 40585-40601, 2018.
Publication year: 2018

Subsea pipeline corrosion is considered a severe problem in the offshore oil and gas industry. It directly affects the integrity of the pipeline, which further leads to cracks and leakages. At present, subsea visual inspection and monitoring are performed by trained human divers; however, offshore infrastructure is moving from shallow to deep waters due to the exhaustion of fossil fuel reserves. The underwater environmental conditions there are inhospitable for human divers and demand an imaging-based robotic solution as an alternative for visual inspection and monitoring of subsea pipelines. However, the unfriendly medium is a challenge for underwater imaging-based inspection and monitoring activities due to absorption and scattering of light, which further lead to blur, color attenuation, and low contrast. This paper presents a new method for subsea pipeline corrosion estimation using the color information of the corroded pipe.
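A loose sketch of colour-based corrosion flagging is shown below; the red-dominance rule and threshold are illustrative assumptions, not the paper's restoration-and-estimation method:

```python
import numpy as np

# Illustrative colour-based corrosion estimate: after an image has been
# restored/enhanced, flag reddish-brown pixels where the red channel clearly
# dominates green and blue, and report their fraction of the pipe region as
# a rough corrosion percentage.

def corrosion_fraction(rgb: np.ndarray, margin: int = 30) -> float:
    """Fraction of pixels in an HxWx3 uint8 RGB image whose red channel
    exceeds both green and blue by at least `margin` levels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (r > g + margin) & (r > b + margin)
    return float(mask.mean())
```

A fully red patch yields a fraction of 1.0 and a neutral grey patch 0.0; on a real restored pipe image the fraction would serve as a crude corrosion severity indicator.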