Sonar Image Composition for Semantic Segmentation Using Machine Learning
Document Type
Conference Proceeding
Publication Date
1-1-2023
Abstract
This paper presents an approach for merging side-scan sonar data and bathymetry information to improve automatic shipwreck identification. The steps to combine a raw side-scan sonar image with a 2D relief map into a new composite RGB image are presented in detail, and a supervised image segmentation approach based on the U-Net architecture is implemented to identify shipwrecks. To validate the effectiveness of the approach, two datasets were created from shipwreck surveys: one containing side-scan sonar images only, and one containing the new composite RGB images. The U-Net model was trained and tested on each dataset, and the results were compared. The test results show a mean accuracy 15% higher for the model trained and tested on the RGB composite dataset than for the model trained and tested on the side-scan-sonar-only dataset. Furthermore, the mean intersection over union (IoU) increases by 9.5% with the RGB composition model.
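For illustration only, the sketch below shows one way a side-scan intensity raster and a co-registered bathymetry raster could be stacked into a three-channel composite image of the kind the abstract describes. The specific channel assignment (side-scan intensity, depth, and a gradient-based relief layer) and the function names are assumptions made for this example, not the authors' exact procedure.

```python
import numpy as np

def normalize(channel):
    """Scale an array to the [0, 1] range, guarding against constant inputs."""
    cmin, cmax = float(channel.min()), float(channel.max())
    if cmax - cmin < 1e-12:
        return np.zeros_like(channel, dtype=np.float32)
    return ((channel - cmin) / (cmax - cmin)).astype(np.float32)

def compose_rgb(sidescan, bathymetry):
    """Stack side-scan intensity, depth, and a simple relief layer
    (gradient magnitude of the bathymetry) into an H x W x 3 image."""
    relief = np.hypot(*np.gradient(bathymetry.astype(np.float32)))
    rgb = np.stack(
        [normalize(sidescan), normalize(bathymetry), normalize(relief)],
        axis=-1,
    )
    return (rgb * 255).astype(np.uint8)

if __name__ == "__main__":
    # Synthetic placeholders standing in for co-registered survey rasters.
    sidescan = np.random.rand(256, 256)
    bathymetry = np.random.rand(256, 256)
    composite = compose_rgb(sidescan, bathymetry)
    print(composite.shape, composite.dtype)  # (256, 256, 3) uint8
```

Such a composite can then be fed to a standard U-Net segmentation pipeline in place of the single-channel side-scan image.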
Publication Source (Journal or Book title)
Proceedings - 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, WACVW 2023
First Page
248
Last Page
254
Recommended Citation
Ard, W., & Barbalata, C. (2023). Sonar Image Composition for Semantic Segmentation Using Machine Learning. Proceedings - 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, WACVW 2023, 248-254. https://doi.org/10.1109/WACVW58289.2023.00031