Leila Bashmal is awarded best M.Sc. thesis
This research was supported by the Deanship of Scientific Research at King Saud University through research group No. (RG-1435-055).
The student Leila Bashmal has received the best thesis award in the graduate student competition of the College of Computer and Information Sciences for the academic year 1439/40. The thesis was supervised by Dr. Yakoub Bazi, a member of the ALISR team. Part of this work is published in an ISI journal:
Siamese-GAN: Learning Invariant Representations for Aerial Vehicle Image Categorization
Below are the details of this research:
Deep Adaptation Neural Networks for the Classification of Remote Sensing Images
Laila Mohammed Bashmal (436203885)
Advisor: Dr. Yakoub Bazi
Abstract:
Remote sensing techniques have proved to be efficient and reliable for monitoring the environment. The availability of very-high-resolution (VHR) and extremely-high-resolution (EHR) images, acquired by satellites and Unmanned Aerial Vehicles (UAVs) respectively, has opened the door to several interesting applications. On the other hand, the availability of such a large amount of heterogeneous data introduces new challenges and thus calls for the development of advanced methodologies for automatic processing and analysis.
Besides the common pixel-based and object-based classification methodologies, scene-level analysis is currently attracting much interest from the remote sensing community. Unlike the former methods, the latter aims to classify an image into a set of semantic categories in accordance with human interpretation. To address this problem, this thesis proposes advanced methods based on deep adaptation neural networks. In particular, it focuses on classification scenarios characterized by the data-shift problem, typically encountered when dealing with images acquired over different locations of the Earth's surface and with different sensors.
This thesis presents two deep learning methods for cross-domain classification. The first approach is termed the Asymmetric Adaptation Neural Network (AANN). It starts by generating an initial feature representation of both the source and target images under analysis using a pretrained Convolutional Neural Network (CNN). Then, to tackle the data-shift problem, it uses an additional network composed of two fully connected layers. The first hidden layer of the AANN projects the labeled source data to the target space, while the subsequent layers maintain the discrimination ability between the different land-cover classes. To learn its weights, the network minimizes an objective function composed of two loss terms related to class separation and the distance between the source and target data distributions.
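To make the two-term objective concrete, here is a minimal numeric sketch of such a loss: a cross-entropy term for class separation on the labeled source data plus a distribution-distance term between source and target features. The linear classifier (`W`, `b`), the trade-off weight `lam`, and the use of a simple squared mean-distance are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def aann_objective(src_feat, src_labels, tgt_feat, W, b, lam=1.0):
    """Toy two-term objective: cross-entropy on labeled source features
    plus a (squared mean-)distance between source and target
    feature distributions. W, b, lam are hypothetical parameters."""
    probs = softmax(src_feat @ W + b)
    n = src_feat.shape[0]
    # Class-separation term: cross-entropy on the source labels.
    ce = -np.log(probs[np.arange(n), src_labels] + 1e-12).mean()
    # Distribution-alignment term: distance between domain means.
    dist = np.linalg.norm(src_feat.mean(axis=0) - tgt_feat.mean(axis=0)) ** 2
    return ce + lam * dist
```

When the source and target feature distributions coincide, the alignment term vanishes and only the classification loss drives learning; `lam` controls the trade-off between the two goals.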
The second approach relies on an advanced deep learning architecture based on Generative Adversarial Networks (GANs) to reduce the gap between the source and target domains. To this end, it aims to learn robust and domain-invariant feature representations. The proposed method, named Siamese-GAN, consists of two steps. In the first step, the source and target images are fed to a pretrained CNN to generate an initial feature representation. Then, an encoder-decoder network coupled with a discriminator network is trained in an adversarial manner. The encoder-decoder network has the task of matching the distributions of both domains in a shared space regularized by the reconstruction ability, while the discriminator seeks to separate them. After this phase, the resulting encoded labeled and unlabeled features are fed to an additional network composed of two fully connected layers for training and classification, respectively.
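The adversarial step above can be sketched as two competing losses computed from a shared encoding: the discriminator tries to tell domains apart, while the encoder tries to fool it and must still reconstruct its input. The linear maps `E`, `D`, `w_disc` and the weight `lam` below are illustrative stand-ins for the method's deep networks; a real training loop would alternate gradient updates on the two losses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def siamese_gan_losses(src_feat, tgt_feat, E, D, w_disc, lam=1.0):
    """Toy single-step losses for the adversarial matching phase.
    E: encoder matrix, D: decoder matrix, w_disc: discriminator weights
    (all hypothetical linear stand-ins for deep networks)."""
    # Encode both domains into the shared space.
    z_src, z_tgt = src_feat @ E, tgt_feat @ E
    # Reconstruction term regularizes the shared space.
    recon = (((z_src @ D - src_feat) ** 2).mean()
             + ((z_tgt @ D - tgt_feat) ** 2).mean())
    # Discriminator: label source as 1, target as 0.
    p_src, p_tgt = sigmoid(z_src @ w_disc), sigmoid(z_tgt @ w_disc)
    d_loss = -(np.log(p_src + 1e-12).mean()
               + np.log(1.0 - p_tgt + 1e-12).mean())
    # Encoder plays adversarially: make target encodings look like source,
    # while keeping the reconstruction error low.
    g_loss = -np.log(p_tgt + 1e-12).mean() + lam * recon
    return d_loss, g_loss
```

Alternating minimization of `d_loss` (discriminator) and `g_loss` (encoder-decoder) drives the two domains toward the same distribution in the shared space, which is what makes the encoded features usable by a single classifier afterwards.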
The experimental results obtained on several remote sensing datasets acquired by satellite and aerial vehicle platforms confirm the promising capabilities of both methods compared to state-of-the-art approaches.