TY - JOUR
T1 - Self-Attentive Generative Adversarial Network for Cloud Detection in High Resolution Remote Sensing Images
AU - Wu, Zhaocong
AU - Li, Jun
AU - Wang, Yisong
AU - Hu, Zhongwen
AU - Molinier, Matthieu
N1 - Funding Information:
Manuscript received August 27, 2019; revised October 15, 2019; accepted November 19, 2019. Date of publication December 5, 2019; date of current version September 25, 2020. This work was supported in part by the National Key Research and Development Program of China under Grant 2017YFC0506200 and in part by the National Natural Science Foundation of China under Grant 41501369 and Grant 41871227. (Corresponding author: Jun Li.) Z. Wu, J. Li, and Y. Wang are with the School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China (e-mail: [email protected]).
PY - 2020/10
Y1 - 2020/10
N2 - Cloud detection is an important step in the processing of remote sensing images. Most convolutional neural network (CNN)-based methods for cloud detection require pixel-level labels, which are time-consuming and expensive to annotate. To overcome this challenge, this letter proposes a novel semisupervised algorithm for cloud detection that trains a self-attentive generative adversarial network (SAGAN) to extract the feature differences between cloud images and cloud-free images. Our main idea is to introduce visual attention into the process of generating 'real' cloud-free images. The training of SAGAN follows three guiding principles: expansion of the attention maps of cloud regions, which are then replaced with translated cloud-free imagery; reduction of the attention maps to coincide with cloud boundaries; and optimization of the self-attentive network to handle extreme cases. SAGAN is trained only on images and their image-level labels, which are easier, cheaper, and faster to obtain than the pixel-level labels required by existing CNN-based methods. To evaluate SAGAN, experiments are conducted on Sentinel-2A Level-1C image data. The results show that the proposed method achieves very promising results using only the image-level labels of the training samples.
AB - Cloud detection is an important step in the processing of remote sensing images. Most convolutional neural network (CNN)-based methods for cloud detection require pixel-level labels, which are time-consuming and expensive to annotate. To overcome this challenge, this letter proposes a novel semisupervised algorithm for cloud detection that trains a self-attentive generative adversarial network (SAGAN) to extract the feature differences between cloud images and cloud-free images. Our main idea is to introduce visual attention into the process of generating 'real' cloud-free images. The training of SAGAN follows three guiding principles: expansion of the attention maps of cloud regions, which are then replaced with translated cloud-free imagery; reduction of the attention maps to coincide with cloud boundaries; and optimization of the self-attentive network to handle extreme cases. SAGAN is trained only on images and their image-level labels, which are easier, cheaper, and faster to obtain than the pixel-level labels required by existing CNN-based methods. To evaluate SAGAN, experiments are conducted on Sentinel-2A Level-1C image data. The results show that the proposed method achieves very promising results using only the image-level labels of the training samples.
KW - Cloud detection
KW - deep learning (DL)
KW - generative adversarial network (GAN)
KW - remote sensing
KW - self-attention
UR - http://www.scopus.com/inward/record.url?scp=85087502582&partnerID=8YFLogxK
U2 - 10.1109/LGRS.2019.2955071
DO - 10.1109/LGRS.2019.2955071
M3 - Article
AN - SCOPUS:85087502582
SN - 1545-598X
VL - 17
SP - 1792
EP - 1796
JO - IEEE Geoscience and Remote Sensing Letters
JF - IEEE Geoscience and Remote Sensing Letters
IS - 10
M1 - 8924781
ER -