Posted on 2022-12-16, 15:15. Authored by Matija Rossi.
This thesis describes a body of research work in the domain of underwater robotics,
aimed at improving performance and efficiency, and at achieving partial or full
autonomy in a wide range of inspection and intervention tasks. The emphasis is on
the development and application of real-time vision systems, utilising underwater
cameras for mapping, navigation, and manipulation. Real-time analysis of survey
data, which is typically post-processed, can significantly improve the quality of
inspection operations and reduce their duration. If vision systems prove sufficiently
robust, they could replace inertial navigation systems with image-based,
target-referenced navigation. Additionally, for an intervention task to be
carried out autonomously, it is necessary to know the structure of the scene around
the target and the position of the robot relative to it. This, in turn, makes it possible
to implement higher-level features such as path planning, obstacle avoidance, and
target identification. Even for manual operations, augmented feedback could increase
an ROV pilot's efficiency severalfold compared to the standard 2D camera stream
currently used for teleoperation.
Because offshore operations are particularly expensive, time-consuming, and constrained
by factors such as weather, making them more efficient is of great value. The
work presented in this thesis consists of three systems that aim to bring underwater
robotics closer to realising these and many other new opportunities.
The first is a real-time 2D image mosaicking tool developed to provide instantaneous
feedback on image quality and area coverage during underwater site inspection
or documentation surveys. The algorithm implements a feature extraction and matching
approach to stitching video frames into a single image. Its main advantage is producing
good results for fast documentation in real time, even on low-end computer hardware;
in addition, it can estimate camera motion and therefore serve as a navigation system
or aid.
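As a rough, hypothetical sketch of such a feature-based mosaicking step, the code below uses OpenCV's ORB detector, brute-force matching, and RANSAC homography estimation. The thesis does not specify which detector, matcher, or stitching strategy is actually used, so all function names and parameters here are illustrative assumptions rather than the system's implementation.

```python
# Minimal sketch of one feature-based mosaicking step (assumed OpenCV pipeline).
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=2000)                      # assumed detector/descriptor
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def frame_to_frame_homography(prev_gray, curr_gray):
    """Estimate the homography mapping the current frame onto the previous one."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]
    if len(matches) < 4:
        return None
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def update_mosaic(mosaic, frame, H_accum):
    """Warp the new frame into the mosaic canvas and fill only empty pixels."""
    h, w = mosaic.shape[:2]
    warped = cv2.warpPerspective(frame, H_accum, (w, h))
    empty = (mosaic.sum(axis=2) == 0)
    mosaic[empty] = warped[empty]
    return mosaic
```

Chaining the frame-to-frame homographies (H_accum = H_accum @ H) places each new frame in the mosaic's reference frame, and the same chain provides the kind of rough camera motion estimate referred to above.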
The second is underwater StereoFusion, an algorithm for real-time 3D dense
reconstruction and camera tracking. Unlike KinectFusion, on which it is based,
StereoFusion relies on a stereo camera as its main sensor. The algorithm uses the
depth map obtained from the stereo camera to incrementally build a volumetric
3D model of the environment, while simultaneously using the model for camera
tracking. It has been successfully tested both in a lake and in the ocean, using two
different state-of-the-art underwater ROVs. A monocular camera solution for dense
reconstruction is also investigated and reported.
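As a rough illustration of that pipeline, the hypothetical sketch below computes a depth map from a rectified stereo pair with OpenCV and fuses it into a truncated signed distance (TSDF) volume with Open3D. The calibration values and the externally supplied camera pose are placeholder assumptions; the thesis's own GPU implementation tracks the camera against the fused model itself, KinectFusion-style, rather than taking the pose from an external source.

```python
# Minimal sketch: stereo depth + TSDF fusion (assumed OpenCV/Open3D pipeline).
import cv2
import numpy as np
import open3d as o3d

# Placeholder calibration: focal lengths, principal point, baseline in metres.
fx, fy, cx, cy, baseline = 700.0, 700.0, 640.0, 360.0, 0.10
intrinsic = o3d.camera.PinholeCameraIntrinsic(1280, 720, fx, fy, cx, cy)

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01, sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

def depth_from_stereo(left_gray, right_gray):
    """Convert an SGBM disparity map into a metric depth map (metres)."""
    disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = fx * baseline / disp[valid]
    return depth

def integrate_frame(left_bgr, left_gray, right_gray, cam_pose):
    """Fuse one stereo frame into the TSDF volume, given a 4x4 camera pose."""
    depth = depth_from_stereo(left_gray, right_gray)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        o3d.geometry.Image(cv2.cvtColor(left_bgr, cv2.COLOR_BGR2RGB)),
        o3d.geometry.Image(depth),
        depth_scale=1.0, depth_trunc=4.0, convert_rgb_to_intensity=False)
    # In the actual system the pose comes from tracking against the fused model;
    # here cam_pose is assumed to be supplied externally.
    volume.integrate(rgbd, intrinsic, np.linalg.inv(cam_pose))
```

A mesh of the reconstructed scene can then be extracted from the volume (for example with volume.extract_triangle_mesh() in Open3D) for visualisation or for higher-level tasks such as path planning.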