A Novel Feature Extraction and Mapping Using a Convolutional Autoencoder for Enhancement of Underwater Images/Videos

Marine resources known to humans are very limited, and since 71% of the world is covered by ocean, many species and rich resources remain undiscovered. Underwater scenery is often poorly illuminated, degraded, and distorted owing to the underwater light propagation model, water molecules, and impurities. Because of these factors, images/videos collected in the underwater environment need enhancement. We propose a method based on a convolutional autoencoder, which collects the features of underwater images and enhanced images; the resulting feature mapping can then be used for testing other underwater images/videos. The method combines the benefits of an unsupervised convolutional autoencoder to extract non-trivial features and utilizes them for the enhancement of underwater images. To evaluate the performance, we use both subjective and objective evaluation methods. The evaluation parameters show that the results of the proposed method are significant for the enhancement of underwater imagery. With the proposed network, we expect to advance underwater image enhancement research and its applications in many areas, such as the study of marine organisms and their behaviour according to the environment, ocean exploration, and autonomous underwater vehicles.


Introduction
Underwater imagery collected through approaches such as underwater photography or with the help of autonomous underwater vehicles is often used for data collection, but it faces various issues: underwater images collected at larger depths in the ocean are often dominated by the blue wavelength, they have inappropriate illumination, and particles in the water often corrupt the captured images. As the underwater environment is challenging, we need to mitigate problems such as blurring, poor contrast, and fogging [1,2]. These factors degrade the performance of underwater image enhancement applications, which in turn significantly influences applications such as marine exploration, aquatic robotics, and underwater surveillance. Since we require high-resolution images only lightly affected by the interfering factors generally faced while capturing underwater images or videos, and since these factors can never be fully eliminated, algorithms are needed that can recover the important data from such underwater footage. The methods employed for image enhancement should be fast, optimized, and adaptive.
The underwater image improvement process is often divided into two approaches: Underwater Image Enhancement and Underwater Image Restoration. Of these, the Underwater Image Enhancement approach does not follow the principle of the image formation model. In this paper, we too have adopted the underwater image enhancement approach. In underwater image/video enhancement, we generally consider approaches such as contrast enhancement and colour correction; a few fusion techniques have also been developed by researchers.
In recent years, various researchers have used a variety of underwater image enhancement techniques, such as convolutional neural networks (CNNs), autoencoders, and deep learning. In 2017, Sun et al. [3] suggested the use of a deep CNN architecture for detecting underwater objects using data augmentation; even with an imbalanced dataset, this approach yields appropriate results. They also created a weighted-probability decision mechanism to improve object detection across frames in underwater video. Honnutagi [4] later introduced fusion-based underwater image enhancement by weight-map techniques, which resolved the low-contrast problem usually encountered in underwater images; the MSE, PSNR, and entropy were also improved by the proposed system.
ITM Web of Conferences 44, 03066 (2022) https://doi.org/10.1051/itmconf/20224403066 ICACC-2022
In 2019, Liu et al. [5] created an underwater imaging system and proposed the RUIE benchmark for visibility degradation, colour cast, and higher-level detection/classification; they also benchmarked some of the important challenges of underwater enhancement. Jamadandi et al. [6], in 2019, implemented a deep learning framework to enhance underwater images by augmenting the network with wavelet-corrected transformations, which helps recover highly degraded images and achieves low noise and better overall global contrast while retaining the sharp features obfuscated by the backscattering of light underwater; the results were characterized by PSNR and SSIM values. In the same year, Tang et al. [7] put forward a Retinex-based method to enhance images and video. The method pre-corrects the colour and reduces the dominant colour cast; an improved multiscale Retinex is then applied to the intensity channel to calculate the reflectance and illumination components. The image is restored from the logarithmic domain, and the dynamics are compensated at the same time; in this way, the method preserves colour.
Hashisho et al. [8] proposed an autoencoder network that balances accuracy and computational cost to ensure real-time deployment on underwater visual tasks. Irfan et al. [9] used a classification convolutional autoencoder designed as a hybrid network to classify large underwater images with a high degree of accuracy. Yadav et al. [10] proposed a technique based on the principle of histogram equalization for the enhancement of underwater images; the convolutional neural network model used was able to retain the colours. Wang et al. [11] presented a stacked convolutional sparse denoising autoencoder model, which combined sparse denoising autoencoders and convolutional neural networks to denoise underwater heterogeneous information data. The algorithm proposed by Mello et al. [12] feeds the output image to a degradation block, based on the image formation model, that reinforces its degradation; the degraded and input images are then matched using a loss function, and after training the algorithm can restore the image from the decoder.
Autoencoders are among the important machine learning algorithms that can regenerate images simply by training on them; if the image is mapped properly, it is difficult to visually observe the difference between the original image and the regenerated one. In this paper, we propose an algorithm designed with a convolutional autoencoder architecture for feature extraction and mapping, which is later used in the enhancement of underwater images/videos. Experiments are performed on the EUVP dataset [13], which contains separate sets of paired and unpaired image samples of poor and good perceptual quality to facilitate supervised training of underwater image enhancement models.
The rest of this paper is organized as follows. In Section 2, we review the convolutional autoencoder and present a detailed description of the encoder and decoder models for underwater and enhanced images. In Section 3, we present the details of the proposed technique. Experimental results are presented and discussed in Section 4, in both subjective and objective terms. In Section 5, the conclusion is drawn.

Convolutional autoencoder
An autoencoder is composed of an encoder part and a decoder part [12,13]. Autoencoders are neural networks used for dimensionality reduction [13]; they are also capable of finding structure within an image or data in order to develop a compressed representation of the input. The encoder learns to interpret the input and compresses it into an internal representation defined by the bottleneck layer [14]. The output of the encoder is provided to the decoder, which then tries to regenerate the input.
For an input image $x$, the encoder function is defined as $h = \mathrm{encoder}(x) = \sigma(x * W + b)$, where $b$ is the encoder bias, $h$ is the encoding of the input in a low-dimensional space, $W$ is a 2-D convolutional filter, $*$ denotes 2-D convolution, and $\sigma$ denotes an activation function such as ReLU [15,16].
In the decoding phase, $h$ is the input of the decoding function, which can be defined as $\hat{x} = \mathrm{decoder}(h) = \sigma(h * \tilde{W} + \tilde{b})$, where $\tilde{W}$ is the 2-D convolutional filter in the decoder and $\tilde{b}$ is the bias of the decoder [17].
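As a concrete illustration, the encoder and decoder steps above can be sketched in NumPy for a single channel and a single filter. This is a minimal toy sketch: the filter sizes, random weights, and "full" padding in the decoder are illustrative assumptions, not the trained network's actual configuration.

```python
import numpy as np

def conv2d_valid(x, w):
    """Plain 2-D 'valid' convolution (cross-correlation form, as in CNNs)."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def relu(z):
    return np.maximum(z, 0.0)

def encoder(x, W, b):
    # h = sigma(x * W + b)
    return relu(conv2d_valid(x, W) + b)

def decoder(h, W_t, b_t):
    # x_hat = sigma(h * W~ + b~); 'full' padding restores the spatial size
    pad = W_t.shape[0] - 1
    return relu(conv2d_valid(np.pad(h, pad), W_t) + b_t)

rng = np.random.default_rng(0)
x = rng.random((8, 8))             # toy "image"
W = rng.standard_normal((3, 3))    # encoder filter
W_t = rng.standard_normal((3, 3))  # decoder filter
h = encoder(x, W, 0.1)
x_hat = decoder(h, W_t, 0.0)
print(h.shape, x_hat.shape)        # (6, 6) (8, 8)
```

The bottleneck effect appears here as the smaller spatial extent of $h$; in the real network it is achieved by stacked convolution and pooling layers.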

Proposed Method
The proposed technique utilizes the benefits of the convolutional autoencoder, which comprises two parts, encoder and decoder, combined with a mapping network. In the proposed method, we first train on underwater images for feature extraction, as shown in Figure 1. A similar approach is followed for enhanced-image feature extraction through the convolutional autoencoder, as shown in Figure 2. All the features are then mapped using the mapping network shown in Figure 3. In the later phase of experimentation, while testing, we modified the architecture as shown in Figure 4; the architecture presented there is used only for testing.
Autoencoders are generally designed so that the architecture has a bottleneck at the centre of the model, which reduces the dimensionality of the input representation; reconstruction then follows.
There are various types of autoencoders, and their use varies depending upon the application, but the most common use is as a learned or automatic feature extraction model. Table 1 shows the exact configuration followed for the proposed method.
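The three-stage pipeline described above (two feature extractors plus a mapping between their latent spaces) can be sketched with toy stand-ins. Every component here is an illustrative assumption, not the paper's actual network: the "encoders" are fixed random ReLU projections, the paired data is synthetic, and the mapping network is replaced by a least-squares linear map between the two feature spaces.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_encoder(in_dim, code_dim, rng):
    """Stand-in for a trained encoder: a fixed ReLU projection to a code."""
    P = rng.standard_normal((in_dim, code_dim)) / np.sqrt(in_dim)
    return lambda X: np.maximum(X @ P, 0.0)

n, in_dim, code_dim = 200, 64, 8
underwater = rng.random((n, in_dim))               # toy "underwater" images
enhanced = np.clip(underwater * 1.4 - 0.1, 0, 1)   # toy "enhanced" pairs

enc_u = make_encoder(in_dim, code_dim, rng)        # Figure 1 path
enc_e = make_encoder(in_dim, code_dim, rng)        # Figure 2 path
Zu, Ze = enc_u(underwater), enc_e(enhanced)

# Mapping step (Figure 3), here as a least-squares linear map Zu -> Ze.
M, *_ = np.linalg.lstsq(Zu, Ze, rcond=None)

# Test time (Figure 4): encode an unseen underwater image and map its
# features into the "enhanced" feature space for the decoder to use.
Zt = enc_u(rng.random((1, in_dim)))
mapped = Zt @ M
print(mapped.shape)  # (1, 8)
```

The point of the sketch is the data flow: features from the underwater domain are translated into the enhanced-image feature space, and only then handed to a decoder for reconstruction.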

Experimental Results and discussion
In this section, we discuss the results in both a subjective and an objective manner to present the performance of the proposed technique.

Subjective Evaluation
Figure 5 shows the results obtained with different algorithms and with the proposed method. Subjective evaluation is the process whereby the quality of the results is assessed visually; the results presented in this paper can help us analyse the outcome.

Objective Evaluation
Objective evaluation can be done based on the statistics of the results, and for that we have used performance evaluation parameters [18] such as the peak signal-to-noise ratio (PSNR) [19,22], the structural similarity index measure (SSIM) [20,22], and the underwater image quality measure (UIQM) [20,21,22]. It is evident from the results that the proposed technique performs well under both evaluation mechanisms, subjective and objective.
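Of these metrics, PSNR has a simple closed form that is easy to sketch. The implementation below assumes 8-bit images and a toy 4×4 example; the actual test images and scores come from the experiments, not from this sketch.

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110  # a single pixel off by 10
# MSE = 100/16 = 6.25, so PSNR = 10*log10(255^2 / 6.25) ≈ 40.17 dB
print(round(psnr(a, b), 2))
```

Higher PSNR means the enhanced image is closer to the reference; SSIM and UIQM additionally account for structural and underwater-specific perceptual quality, which pixel-wise MSE cannot capture.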

Conclusion
In this paper, a convolutional autoencoder based approach is proposed to improve the subjective as well as objective quality of underwater images; the same approach can be utilized for the enhancement of underwater videos. The proposed model uses the power of an unsupervised deep convolutional autoencoder to extract useful features from underwater images and from enhanced underwater images; based on the features extracted in both phases of the proposed algorithm, a mapping network then maps the features so that the underwater image can be regenerated. Experiments performed on underwater image datasets demonstrate that the proposed model yields significant outcomes.