- Comparative Study of Real-Time Semantic Segmentation Networks in Aerial Images During Flooding Events
Authors:
Farshad Safavi;Maryam Rahnemoonfar;
Pages: 15 - 31 Abstract: Real-time semantic segmentation of aerial imagery is essential for unmanned aerial vehicle applications, including military surveillance, land characterization, and disaster damage assessment. Recent real-time semantic segmentation neural networks promise low computation and inference time, appropriate for resource-limited platforms such as edge devices. However, these methods are mainly trained on human-centric-view datasets, such as Cityscapes and CamVid, which are unsuitable for aerial applications. Furthermore, the feasibility of these models under adverse conditions, such as flooding events, is unknown. To address these problems, we train the most recent real-time semantic segmentation architectures on the FloodNet dataset, which contains annotated aerial images captured after Hurricane Harvey. This article comprehensively studies several lightweight architectures, including encoder–decoder and two-pathway designs, evaluating their performance on aerial imagery datasets. Moreover, we benchmark the efficiency and accuracy of different models on the FloodNet dataset to examine their practicability for aerial image segmentation during emergency response. Some lightweight models attain more than 60% test mIoU on the FloodNet dataset and produce convincing qualitative results. This article highlights the strengths and weaknesses of current segmentation models for aerial imagery that require low computation and inference time. Our experiments have direct applications during catastrophic events such as flooding. PubDate:
2023
Issue No: Vol. 16 (2023)
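The mIoU figure quoted above is the mean, over classes, of intersection-over-union between predicted and ground-truth masks. A minimal stdlib-only sketch; the class labels and mask values below are illustrative, not from the article:

```python
def mean_iou(pred, truth, num_classes):
    """Mean intersection-over-union over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Flattened 1-D masks; 0 = background, 1 = flooded, 2 = building (illustrative)
pred  = [0, 1, 1, 2, 2, 0, 1, 0]
truth = [0, 1, 1, 2, 0, 0, 1, 1]
score = mean_iou(pred, truth, 3)
```

In practice the masks are 2-D arrays flattened per image, and the per-class IoUs are accumulated over the whole test set before averaging.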
- Axial Cross Attention Meets CNN: Bibranch Fusion Network for Change Detection
Authors:
Lei Song;Min Xia;Liguo Weng;Haifeng Lin;Ming Qian;Binyu Chen;
Pages: 32 - 43 Abstract: In recent years, the vision transformer has demonstrated a capability for global information extraction in computer vision that the convolutional neural network (CNN) lacks. Because the vision transformer lacks inductive bias, it requires a large amount of data to support its training, and in remote sensing it is costly to obtain a significant number of high-resolution images. Most existing deep learning-based change detection networks rely heavily on the CNN, which cannot effectively exploit long-distance dependencies between pixels for difference discrimination. This work therefore aims to use a high-performance vision transformer to conduct change detection research with limited data. A bibranch fusion network based on axial cross attention (ACABFNet) is proposed. The network extracts local and global information of images through the CNN branch and the transformer branch, respectively, and then fuses local and global features with a bidirectional fusion approach. In the upsampling stage, similar and difference feature information of the two branches is explicitly generated by feature addition and feature subtraction. Considering that the self-attention mechanism is not efficient enough for global attention over small datasets, we propose axial cross attention: global attention is first performed along the height and width dimensions of the image separately, and cross attention then fuses the global feature information along the two dimensions. Compared with the original self-attention, the structure is friendlier to the graphics processing unit and more efficient. Experimental results on three datasets reveal that ACABFNet outperforms existing change detection algorithms. PubDate:
2023
Issue No: Vol. 16 (2023)
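Axial attention, as described above, reduces the quadratic cost of full self-attention by attending along one image axis at a time. A toy stdlib-only sketch of the idea with scalar features and Q = K = V (the values, and the single-head scalar setup, are illustrative, not the authors' implementation):

```python
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_1d(seq):
    """Scalar dot-product self-attention over a 1-D sequence (Q = K = V)."""
    out = []
    for q in seq:
        weights = softmax([q * k for k in seq])
        out.append(sum(w * v for w, v in zip(weights, seq)))
    return out

def axial_attention_2d(grid):
    """Attend along rows (width axis), then along columns (height axis)."""
    rows = [attention_1d(row) for row in grid]
    cols = [attention_1d(list(c)) for c in zip(*rows)]  # transpose, attend
    return [list(r) for r in zip(*cols)]                # transpose back

out = axial_attention_2d([[0.0, 1.0],
                          [1.0, 0.0]])
```

For an H x W image this costs O(HW(H + W)) score evaluations instead of O((HW)^2) for full self-attention.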
- CNN, RNN, or ViT? An Evaluation of Different Deep Learning Architectures for Spatio-Temporal Representation of Sentinel Time Series
Authors:
Linying Zhao;Shunping Ji;
Pages: 44 - 56 Abstract: Rich information in multitemporal satellite images can facilitate pixel-level land cover classification. However, it remains unclear which deep learning architecture is most suitable for high-dimension spatio-temporal representation of remote sensing time series. In this study, we theoretically analyzed the mechanisms of different deep learning structures, including the commonly used convolutional neural network (CNN), the high-dimension [three-dimensional (3-D)] CNN, the recurrent neural network, and the newest vision transformer (ViT), with regard to learning and representing the temporal information in spatio-temporal data. The performance of the different models was comprehensively evaluated on large-scale Sentinel-1 and Sentinel-2 time-series images covering the whole of Slovenia. First, the 3-D CNN, long short-term memory (LSTM) network, and ViT, which all have specific structures that preserve temporal information, can effectively extract the spatio-temporal information, with the 3-D CNN and ViT showing the best performance. Second, the performance of the 2-D CNN, in which the temporal information is collapsed, is lower than that of the 3-D CNN, LSTM, and ViT but exceeds that of conventional methods. Third, using both optical and synthetic aperture radar (SAR) images performs almost the same as using only optical images, indicating that the information extractable from optical images is sufficient for land-cover classification. However, when optical images are unavailable, SAR images can provide satisfactory classification results. Finally, the modern deep learning methods can effectively overcome disadvantageous imaging conditions in which parts of an image, or images of some periods, are missing. The testing data are available at gpcv.whu.edu.cn/data. PubDate:
2023
Issue No: Vol. 16 (2023)
- Adaptive Granulation-Based Convolutional Neural Networks With Single Pass Learning for Remote Sensing Image Classification
Authors:
Sankar K. Pal;Dasari Arun Kumar;
Pages: 57 - 70 Abstract: Convolutional neural networks (CNNs), with characteristics such as spatial filtering, a feed-forward mechanism, and backpropagation-based learning, are widely used for remote sensing (RS) image classification. The fixed architecture of a CNN, with its large number of network parameters, is managed by learning through many iterations, thereby increasing the computational burden. To deal with this issue, an adaptive granulation-based CNN (AGCNN) model is proposed in the present study. AGCNN works in the framework of fuzzy set theoretic data granulation and adaptive learning, upgrading the network architecture to accommodate the information of new samples and avoiding iterative training, unlike a conventional CNN. Here, granulation is performed both on the 2-D input image and on its 1-D representative feature vector, as obtained after a series of convolution and pooling layers. While the class-dependent fuzzy granulation on the input image space exploits more domain knowledge for uncertainty modeling, rough set theoretic reducts computed on them select only the relevant features for input to the CNN. During classification of unknown patterns, a new principle of roughness minimization with weighted membership is adopted on overlapping granules to deal with ambiguous cases. All these together improve the classification accuracy of AGCNN while reducing the computational time significantly. The superiority of AGCNN over some state-of-the-art models in terms of different performance metrics is demonstrated for hyperspectral and multispectral images, both quantitatively and visually. PubDate:
2023
Issue No: Vol. 16 (2023)
- Fine-Grained Object Detection in Remote Sensing Images via Adaptive Label Assignment and Refined-Balanced Feature Pyramid Network
Authors:
Junjie Song;Lingjuan Miao;Qi Ming;Zhiqiang Zhou;Yunpeng Dong;
Pages: 71 - 82 Abstract: Object detection in high-resolution remote sensing images remains a challenging task due to the unique viewing perspective, complex backgrounds, arbitrary orientations, etc. For fine-grained object detection in high-resolution remote sensing images, the high interclass similarity is even more severe, making it difficult for the object detector to recognize the correct classes. In this article, we propose a refined and balanced feature pyramid network (RB-FPN) and a center-scale aware (CSA) label assignment strategy to address these problems. RB-FPN fuses features from different layers and suppresses background information while focusing on regions that may contain objects, providing high-quality semantic information for fine-grained object detection. Intersection over union (IoU) is usually applied to select positive candidate samples for training. However, IoU is sensitive to the angle variation of oriented objects with large aspect ratios, and a fixed IoU threshold leaves narrow oriented objects without enough positive samples to participate in training. To solve this problem, we propose the CSA label assignment strategy, which adaptively adjusts the IoU threshold according to the statistical characteristics of oriented objects. Experiments on the FAIR1M dataset demonstrate that the proposed approach is superior. Moreover, the proposed method was applied to fine-grained object detection in high-resolution optical images in the 2021 Gaofen challenge, where our team ranked sixth and was recognized as a winning team in the final. PubDate:
2023
Issue No: Vol. 16 (2023)
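The CSA idea of relaxing the IoU threshold for narrow oriented boxes can be illustrated with a toy rule in which the positive-sample threshold decreases as the aspect ratio grows. The formula and all constants below are illustrative assumptions, not the authors' actual statistics-based rule:

```python
def adaptive_iou_threshold(width, height, base=0.5, floor=0.25, k=0.05):
    """Lower the positive-sample IoU threshold for elongated boxes.

    base  : threshold for square-ish objects
    floor : minimum threshold, so it never collapses to zero
    k     : how fast the threshold drops per unit of aspect ratio
    (all constants illustrative)
    """
    aspect = max(width, height) / min(width, height)
    return max(floor, base - k * (aspect - 1.0))

square_thr = adaptive_iou_threshold(20, 20)    # square object keeps the base
bridge_thr = adaptive_iou_threshold(100, 10)   # 10:1 box gets a lower threshold
```

The effect is that a slightly rotated anchor on a 10:1 bridge, whose IoU with the ground truth drops sharply with angle, can still be assigned as a positive sample.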
- Application of the LSTM Models for Baltic Sea Wave Spectra Estimation
Authors:
Martin Simon;Sander Rikka;Sven Nõmm;Victor Alari;
Pages: 83 - 88 Abstract: This article proposes to apply long short-term memory (LSTM) deep learning models to transform Sentinel-1 A/B interferometric wide (IW) swath image data into wave density spectra. Although spectral wave estimation methods have been developed for synthetic aperture radar data, similar approaches for coastal areas have not received enough attention. This is partially caused by the lack of high-resolution wave-mode data, as well as by the nature of wind waves, which have more complicated backscattering mechanisms than the swell waves for which the aforementioned methods were developed. The LSTM model allowed the transformation of the Sentinel-1 A/B IW one-dimensional image spectrum into wave density spectra. The best result on the test dataset was a mean Pearson's correlation coefficient of 0.85 for the spectra comparison, achieved with the LSTM model using $VV$ and $VH$ polarization spectra fed into the model independently. Experiments with LSTM neural networks that map images to wave spectra on the Baltic Sea dataset demonstrated promising results in cases where empirical methods were previously considered. PubDate:
2023
Issue No: Vol. 16 (2023)
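The mean Pearson's correlation coefficient quoted above compares estimated and reference wave spectra bin by bin. A stdlib-only sketch of the metric; the two toy spectra are illustrative values, not data from the article:

```python
import math

def pearson(xs, ys):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two toy wave density spectra (energy per frequency bin, illustrative)
estimated = [0.1, 0.4, 0.9, 0.5, 0.2]
reference = [0.1, 0.5, 1.0, 0.4, 0.2]
r = pearson(estimated, reference)
```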
- Monitoring the Catastrophic Flood With GRACE-FO and Near-Real-Time Precipitation Data in Northern Henan Province of China in July 2021
Authors:
Cuiyu Xiao;Yulong Zhong;Wei Feng;Wei Gao;Zhonghua Wang;Min Zhong;Bing Ji;
Pages: 89 - 101 Abstract: Zhengzhou and its surrounding areas, located in northern Henan Province, China, received continuous extreme rainfall from July 17 to July 22, 2021. Northern Henan Province experienced extensive flash floods and urban flooding, causing severe casualties and property damage. Understanding the variation of hydrologic features during this flood event could be valuable for future flood emergency response and flood risk management. This study first characterizes the rainstorm process based on near-real-time precipitation data from the China Meteorological Administration Land Data Assimilation System (CLDAS-V2.0). To meet the temporal resolution required for monitoring this short-term flood event, reconstructed daily terrestrial water storage anomalies (TWSAs) based on GRACE and GRACE-FO data and CLDAS-V2.0 datasets are introduced for the first time. The spatial and temporal evolution of the reconstructed daily TWSA in the study area is analyzed during this heavy rainfall event. We further employ a wetness index based on the reconstructed daily TWSA for flood warnings. Furthermore, the modeled soil moisture data and daily runoff data are used for flood monitoring. Results show that the reconstructed daily TWSA increased by 437.7 mm in just six days (from July 17 to July 22, 2021), corresponding to a terrestrial water storage increment of 9.4 km3. Compared with ITSG-Grace2018, the reconstructed daily TWSA has better potential for near-real-time monitoring of short-term flood events in a small region. The wetness index derived from the reconstructed daily TWSA shows potential for flood early warning. PubDate:
2023
Issue No: Vol. 16 (2023)
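The two figures quoted above are mutually consistent: a 437.7 mm storage rise amounting to 9.4 km3 of water implies an effective area of roughly 21 500 km2. A quick arithmetic check using only the two reported numbers (the article's actual study-area extent is not restated here):

```python
# Reported terrestrial water storage increment, July 17-22, 2021
twsa_rise_m = 0.4377   # 437.7 mm expressed in metres
volume_km3 = 9.4       # reported storage increment in cubic kilometres

# Area over which that layer of water would sit: km3 / km = km2
area_km2 = volume_km3 / (twsa_rise_m / 1000.0)
```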
- Drone-Aided Detection of Weeds: Transfer Learning for Embedded Image Processing
Authors:
Iaroslav Koshelev;Maxim Savinov;Alexander Menshchikov;Andrey Somov;
Pages: 102 - 111 Abstract: In this article, we address the problem of hogweed detection using a drone equipped with red, green, blue (RGB) and multispectral cameras. We study two approaches: 1) offline detection running on the orthophoto of the area scanned within the mission and 2) real-time scanning of the frame stream directly on the edge device performing the flight mission. We show that by fusing the information from an additional multispectral camera installed on the drone, the detection quality can be boosted, and that this gain can be preserved even with a single-RGB-camera setup by introducing an additional convolutional neural network, trained with transfer learning, that produces fake multispectral images directly from the RGB stream. This approach either eliminates the multispectral hardware from the drone or, if only the RGB camera is at hand, boosts the segmentation performance at the cost of a slight increase in the computational budget. To support this claim, we performed an extensive study of network performance in simulations of both the real-time and offline modes, achieving at least a 1.1% increase in the mean intersection over union metric when evaluated on the RGB stream from the camera and 1.4% when evaluated on orthophoto data. Our results show that proper optimization allows the complete elimination of the multispectral camera from the flight mission by adding a preprocessing stage to the segmentation network, without loss of quality. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Back Propagation Neural Network-Based Radiometric Correction Method (BPNNRCM) for UAV Multispectral Image
Authors:
Yin Zhang;Qingwu Hu;Hailong Li;Jiayuan Li;Tiancheng Liu;Yuting Chen;Mingyao Ai;Jianye Dong;
Pages: 112 - 125 Abstract: Radiometric correction is one of the most important preprocessing steps in unmanned aerial vehicle (UAV) multispectral remote sensing data analysis and application. In this article, a back propagation (BP) neural network-based radiometric correction method (BPNNRCM) considering optimal parameters is proposed. First, we used different UAV multispectral sensors (a K6 on the DJI M600 and a D-MSPC2000 on the FEIMA D2000) to collect training, validation, testing, and cross-validation data. Second, the radiometric correction results of BP neural networks with different input variables and hidden-layer node numbers were compared to select the best combination of input parameters and hidden-layer nodes. Finally, the radiometric correction accuracy and robustness of the BP neural network with the optimal parameters were verified. With five input-layer nodes (digital number, UAV sensor height, wavelength, solar altitude angle, and temperature) and eight hidden-layer nodes, the BP neural network had the best overall performance in training time and validation/test accuracy. In terms of accuracy and robustness, the absolute errors of the surface reflectance obtained by the BPNNRCM on the test and cross-validation images were all less than 0.054. The BPNNRCM had a smaller mean squared error (0.0003), mean absolute error (0.0141), and mean relative error (7.1%) than the empirical line method and a radiative transfer model. In general, the results of this article demonstrate the feasibility and promise of the BPNNRCM for radiometric correction of UAV multispectral images. PubDate:
2023
Issue No: Vol. 16 (2023)
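The selected architecture, five inputs (digital number, sensor height, wavelength, solar altitude angle, temperature), eight hidden nodes, and one reflectance output, is small enough to sketch as a forward pass. The weights here are random and purely illustrative; the authors' trained weights and activation choices are not given in the abstract:

```python
import math
import random

random.seed(0)

def layer(n_in, n_out):
    """Random fully connected layer: (weights, biases). Illustrative init."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def forward(x, weights, biases, activate):
    """One dense layer: activation of weighted sums plus bias."""
    return [activate(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
identity = lambda z: z

w1, b1 = layer(5, 8)   # 5 inputs -> 8 hidden nodes
w2, b2 = layer(8, 1)   # 8 hidden -> 1 reflectance output

# Inputs normalized to [0, 1]: DN, height, wavelength, solar altitude, temperature
x = [0.42, 0.30, 0.55, 0.76, 0.60]
reflectance = forward(forward(x, w1, b1, sigmoid), w2, b2, identity)[0]
```

Training such a network is ordinary backpropagation against reference-panel reflectances; only the forward structure is shown here.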
- An Improved Imaging Algorithm for HRWS Space-Borne SAR Data Processing Based on CVPRI
Authors:
Yanan Guo;Pengbo Wang;Xinkai Zhou;Tao He;Jie Chen;
Pages: 126 - 140 Abstract: In space-borne synthetic aperture radar (SAR), the sliding spotlight mode can acquire images with both high resolution and a wide swath in the azimuth direction. Due to the significant two-dimensional spatial variance of the Doppler parameters, traditional imaging algorithms based on conventional range models are not applicable. In this article, the strategy of a continuously varying pulse repetition interval (CVPRI) is adopted as a novel approach to the azimuth variance problem, realizing high-resolution wide-swath (HRWS) imaging in the azimuth direction for sliding spotlight SAR. First, the eighth-order Taylor expansion of the modified equivalent squint range model (MESRM-TE8) is adopted, and its accuracy is explained. Then, the properties of the spatial variance of the MESRM-TE8 are analyzed in detail, based on which the CVPRI strategy is derived theoretically to eliminate the azimuth variance. An improved imaging algorithm based on CVPRI is subsequently proposed to address the azimuthal-variant Doppler parameters and realize batch processing of a large scene in the azimuth frequency domain. The extended scaling method is integrated into this algorithm to uniformly compensate the cubic phase modulation introduced by CVPRI and to circumvent the azimuth time folding caused by subaperture processing in the focused image. Finally, the effectiveness of the CVPRI strategy and the proposed algorithm is demonstrated by simulation results. PubDate:
2023
Issue No: Vol. 16 (2023)
- OFFS-Net: Optimal Feature Fusion-Based Spectral Information Network for Airborne Point Cloud Classification
Authors:
Peipei He;Kejia Gao;Wenkai Liu;Wei Song;Qingfeng Hu;Xingxing Cheng;Shiming Li;
Pages: 141 - 152 Abstract: Airborne laser scanning (ALS) point cloud classification is a necessary step for understanding 3-D scenes and is applied in various industries. However, classification accuracy and efficiency are limited because 1) point cloud classification methods lack effective filtering of the large number of traditional features and 2) ALS point cloud classification suffers from significant category imbalance and coordinate scale problems. To address these problems, this article proposes an airborne LiDAR point cloud classification method based on a deep learning network with optimal feature fusion-based spectral information. The method involves the following steps. First, multiscale point cloud features are extracted and filtered with a random forest, while spectral information is fused to obtain a smaller but more informative point cloud feature dataset. Second, to adapt to the characteristics of airborne point clouds, an improved RandLA-Net retains the advantages of random sampling while learning deeper semantic information by fusing the constructed point cloud features into the network's local feature aggregation module. Third, four fusion models are constructed to verify the effectiveness of the optimal feature fusion-based spectral information network (OFFS-Net) for airborne point cloud classification. Last, these models are trained and tested on the Vaihingen 3-D dataset. OFFS-Net achieves an overall accuracy of 84.9% and an F1-score of 72.3%, better than the mainstream methods. This also validates that the proposed OFFS-Net point cloud classification method, which builds on the complementary strengths of geometric features and spectral information, is effective. PubDate:
2023
Issue No: Vol. 16 (2023)
- Spatial and Temporal Evolution of Ground Subsidence in the Beijing Plain Area Using Long Time Series Interferometry
Authors:
Yueze Zheng;Junhuan Peng;Xue Chen;Cheng Huang;Pinxiang Chen;Sen Li;Yuhan Su;
Pages: 153 - 165 Abstract: Due to the overexploitation of water resources, ground subsidence is becoming increasingly problematic in Beijing, China's political, economic, and cultural capital. This article aims to investigate the relationship between ground subsidence and changes in groundwater depth, and water supply from a long-term point of view. Multisource synthetic aperture radar (SAR) data using the interferometric SAR (InSAR) technique were adopted in this research, combined with a set of leveling and ground subsidence data in the Beijing Plain area from 2003 to 2020. The InSAR results demonstrate that ground subsidence in the plain area increased steadily from 2003 to 2015, expanding from sporadic to continuous laminar dispersion and producing five major subsidence centers. The South-to-North Water Diversion Project (SNWDP) that was completed in 2008 and 2015 considerably reduced the demand for groundwater supply in the Beijing Plain area. Since then, the groundwater level depth has continued to increase. However, since 2016, the ground subsidence rate has dramatically slowed down. The obtained results showed that, thanks to the SNWDP, which resulted in a decline in groundwater exploitation and an increase in renewable water recycling, the ground subsidence in Beijing's plain area has been effectively managed. PubDate:
2023
Issue No: Vol. 16 (2023)
- Panchromatic and Hyperspectral Image Fusion: Outcome of the 2022 WHISPERS Hyperspectral Pansharpening Challenge
Authors:
Gemine Vivone;Andrea Garzelli;Yang Xu;Wenzhi Liao;Jocelyn Chanussot;
Pages: 166 - 179 Abstract: This article presents the scientific outcomes of the 2022 Hyperspectral Pansharpening Challenge organized by the 12th IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (IEEE WHISPERS 2022). The challenge aims at fusing a panchromatic image with hyperspectral data to obtain a hyperspectral cube with the same spatial resolution as the panchromatic image while preserving the spectral information of the hyperspectral data. Four datasets acquired by the PRISMA mission, owned and managed by the Italian Space Agency, were prepared for participants and are made available for the benefit of the scientific community. Each dataset contains a panchromatic image and a hyperspectral cube with different spatial resolutions. More than 100 registrations were received for the event, and four teams submitted their outcomes. Since no team outperformed the baseline provided by the organizers, the challenge was declared inconclusive and no winner was recognized. PubDate:
2023
Issue No: Vol. 16 (2023)
- Shadow Pattern-Enhanced Building Height Extraction Using Very-High-Resolution Image
Authors:
Xiran Zhou;Soe W. Myint;
Pages: 180 - 190 Abstract: Building height is valuable for a variety of topics in urban studies. Traditional field investigations are not practical for updating the heights of massive numbers of buildings in a large-scale urban area. Given the relationship between a building's structure and the size of its shadow, the shadow becomes practical for estimating the corresponding building height when its geometrical shape is visible in newly emerging very-high-resolution (VHR) images. However, the shadow shapes of different buildings can vary significantly, posing a great challenge to determining which shadow edge is useful for predicting building height. This study proposes a shadow pattern classification system (ShadowClass) that summarizes the varied shadow shapes into a number of pattern categories, and employs a cutting-edge CNN model to classify each extracted shadow into a pattern, automatically determining the edge of the building shadow that is useful for height estimation. We integrated the proposed approach into two branches of the state-of-the-art approaches: shadow-based building height estimation with open cyberinfrastructure and shadow-based building height estimation with VHR images. The experimental results showed that the proposed method is a practical solution for single, isolated buildings whose complete shadow shape is visible. PubDate:
2023
Issue No: Vol. 16 (2023)
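Once the usable shadow edge has been identified, the geometric core of shadow-based height estimation is a single trigonometric relation between shadow length and solar elevation (the article's contribution is deciding which edge to measure; the flat-ground relation and the example values below are standard and illustrative):

```python
import math

def building_height(shadow_length_m, sun_elevation_deg):
    """Building height from the length of its shadow cast on flat ground."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 30 m shadow at 45 degrees solar elevation implies a roughly 30 m building.
h = building_height(30.0, 45.0)
```

In practice the solar elevation (and azimuth) comes from the image acquisition metadata, and terrain slope adds correction terms not shown here.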
- Solid Waste Detection in Cities Using Remote Sensing Imagery Based on a Location-Guided Key Point Network With Multiple Enhancements
Authors:
Huifang Li;Chao Hu;Xinrun Zhong;Chao Zeng;Huanfeng Shen;
Pages: 191 - 201 Abstract: Solid waste is a widespread problem that is having a negative effect on the global environment. Owing to its capability for macroscopic observation, remote sensing could be an effective way to detect and monitor solid waste. Solid waste is usually a mixture of various materials with a randomly scattered distribution, which brings great difficulty to precise detection. In this article, we propose a deep learning network for solid waste detection in urban areas, aiming to realize fast and automatic extraction of solid waste from a complicated and large-scale urban background. A novel dataset for solid waste detection was constructed by collecting 3192 images from Google Earth (with resolutions from 0.13 to 0.52 m), and a location-guided key point network with multiple enhancements (LKN-ME) is proposed to perform the urban solid waste detection task. The LKN-ME method uses corner pooling and central convolution to capture the key points of an object. The location guidance is realized by constraining the key point locations to the annotated bounding box of an object. Multiple enhancements, including data mosaicing, an attention enhancement, and path aggregation, are integrated to improve the detection accuracy. The results show that the LKN-ME method achieves a state-of-the-art AR100 (the average recall computed over 100 detections per image) of 71.8% and an average precision of 44.0% on the DSWD dataset, outperforming classic object detection methods on the solid waste detection problem. PubDate:
2023
Issue No: Vol. 16 (2023)
- Baseline-Based Soil Salinity Index (BSSI): A Novel Remote Sensing Monitoring Method of Soil Salinization
Authors:
Zhimei Zhang;Yanguo Fan;Aizhu Zhang;Zhijun Jiao;
Pages: 202 - 214 Abstract: Soil salinization leads to dehydration of plants, seriously threatening ecologically sustainable development and food security. In complex and diverse coastal wetland environments, impervious surfaces and bare soil have spectral features similar to those of salinized soil, making it difficult for traditional satellite data and algorithms to accurately and timely monitor small salinized surface features. This article presents a baseline-based soil salinity index (BSSI) for soil salinization monitoring using medium-resolution data. In the BSSI, we construct a virtual salinization baseline by connecting the near-infrared (NIR) band and the short-wave infrared-2 (SWIR2) band to enhance the spectral feature of salinized soils that border impervious surfaces. We then calculate the distance between the short-wave infrared-1 (SWIR1) band and the virtual salinization baseline as the BSSI, which effectively improves the stability of salinity inversion for different soils. Through data comparisons and model simulations, the BSSI has shown advantages over a series of traditional salinization spectral indices (SSIs). The results show that the saline soil extraction accuracy of the BSSI exceeds 85% and the correlation coefficient between the BSSI and the degree of soil salinization exceeds 0.90. Since the relevant spectral bands (NIR, SWIR1, and SWIR2) are available on many existing satellite sensors, such as Landsat TM/ETM+, OLI, and Sentinel-2, the BSSI concept can be extended to establish long-term records for soil salinization monitoring. PubDate:
2023
Issue No: Vol. 16 (2023)
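The abstract above describes the BSSI as the distance between the SWIR1 reflectance and a virtual baseline connecting the NIR and SWIR2 bands. The exact formulation is defined in the article; the sketch below only illustrates the general baseline-distance idea, with hypothetical Landsat-8 OLI band-center wavelengths.

```python
import numpy as np

# Hypothetical band-center wavelengths (nm); the article defines its own setup.
W_NIR, W_SWIR1, W_SWIR2 = 865.0, 1610.0, 2200.0

def baseline_distance_index(nir, swir1, swir2):
    """Distance of the SWIR1 reflectance from the NIR-SWIR2 baseline.

    The baseline is linearly interpolated between the NIR and SWIR2
    reflectances at the SWIR1 wavelength; the index is the (signed)
    vertical distance of SWIR1 below that baseline.
    """
    frac = (W_SWIR1 - W_NIR) / (W_SWIR2 - W_NIR)
    baseline_at_swir1 = nir + frac * (swir2 - nir)
    return baseline_at_swir1 - swir1

# Example on a tiny 2x2 reflectance patch
nir = np.array([[0.30, 0.28], [0.25, 0.27]])
swir1 = np.array([[0.20, 0.22], [0.24, 0.21]])
swir2 = np.array([[0.15, 0.16], [0.18, 0.17]])
index = baseline_distance_index(nir, swir1, swir2)
```

Because the index is a distance from a per-pixel baseline rather than a fixed band ratio, it is less sensitive to overall brightness differences between soil types, which is the stability property the abstract emphasizes.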
- SelfS2: Self-Supervised Transfer Learning for Sentinel-2 Multispectral
Image Super-Resolution-
Authors:
Xiao Qian;Tai-Xiang Jiang;Xi-Le Zhao;
Pages: 215 - 227 Abstract: The multispectral image captured by the Sentinel-2 satellite contains 13 spectral bands with different resolutions, which may hinder some of the subsequent applications. In this article, we design a novel method to super-resolve the 20- and 60-m coarse bands of the S2 images to 10 m, achieving a complete dataset at the 10-m resolution. To tackle this inverse problem, we leverage the deep image prior expressed by a convolutional neural network (CNN). Specifically, a plain ResNet architecture is adopted, and 3-D separable convolution is utilized to better capture the spatial–spectral features. The loss function is tailored based on the degradation model, enforcing that the network output obeys the degradation process. Meanwhile, a network parameter initialization strategy is designed to further mine the abundant fine information provided by the existing 10-m bands. The network parameters are inferred solely from the observed S2 image in a self-supervised manner without involving any extra training data. Finally, the network outputs the super-resolution result. On the one hand, our method can exploit the high model capacity of CNNs and work without the large amounts of training data required by many deep learning techniques. On the other hand, the degradation process is fully considered, and each module in our work is interpretable. Numerical results on synthetic and real data illustrate that our method outperforms the compared state-of-the-art methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- Monitoring the Spatiotemporal Distribution of Invasive Aquatic Plants in
the Guadiana River, Spain-
Authors:
Elena C. Rodríguez-Garlito;Abel Paz-Gallardo;Antonio Plaza;
Pages: 228 - 241 Abstract: Monitoring the spatiotemporal distribution of invasive aquatic plants is a challenge in many regions worldwide. One of the most invasive species on Earth is the water hyacinth. These plants are harmful to biodiversity and have negative impacts on society and the economy. The Guadiana river (one of the most important in Spain) has suffered from this problem since the early 2000s, and several efforts have been made to mitigate it. However, invasive plants such as the water hyacinth are still present in seed banks at the bottom of the river and can germinate even more than a decade later. In this article, we propose an automatic methodology, based on remote sensing and deep learning techniques, to monitor the water hyacinth in the Guadiana river. Specifically, a multitemporal analysis was carried out over two years using images collected by ESA's Sentinel-2 satellite, analyzed with a convolutional neural network. We demonstrate that, with our strategy, the river can be monitored every few days and the water hyacinth can be detected automatically. Three experiments were carried out to predict the presence of water hyacinth from a few scattered training samples, which represent invasive plants in different phenological stages and with different spectral responses. PubDate:
2023
Issue No: Vol. 16 (2023)
- Burned Area Mapping Using Unitemporal PlanetScope Imagery With a Deep
Learning Based Approach-
Authors:
Ah Young Cho;Si-eun Park;Duk-jin Kim;Junwoo Kim;Chenglei Li;Juyoung Song;
Pages: 242 - 253 Abstract: The risk and damage of wildfires have been increasing for various reasons, including climate change, and the Republic of Korea is no exception. Burned area mapping is crucial not only to prevent further damage but also to manage burned areas. Burned area mapping using satellite data, however, has been limited by the spatial and temporal resolution of the data and by classification accuracy. This article presents a new burned area mapping method by which damaged areas can be mapped using semantic segmentation. For this research, PlanetScope imagery, which provides high-resolution images with a very short revisit time, was used; the proposed method is based on U-Net and requires a single unitemporal PlanetScope image. The network was trained using 17 satellite images of 12 forest fires and corresponding label images that were obtained semiautomatically by setting threshold values. Band combination tests were conducted to produce an optimal burned area mapping model. The results demonstrated that the optimal and most stable band combination is red, green, blue, and near infrared of PlanetScope. To improve classification accuracy, the Normalized Difference Vegetation Index, dissimilarity extracted from the Gray-Level Co-Occurrence Matrix, and land cover maps were used as additional datasets. In addition, topographic normalization was conducted to improve model performance and classification accuracy by reducing shadow effects. The F1 scores and overall accuracies of the final image segmentation models range from 0.883 to 0.939 and from 0.990 to 0.997, respectively. These results highlight the potential of detecting burned areas using the deep learning based approach. PubDate:
2023
Issue No: Vol. 16 (2023)
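Among the auxiliary inputs listed in the abstract above, the Normalized Difference Vegetation Index has a standard, well-known form, (NIR − Red) / (NIR + Red); the tiny sketch below shows that computation on a toy reflectance patch (the band values and the epsilon guard are illustrative, not from the article).

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized Difference Vegetation Index.

    High over healthy vegetation, near zero or negative over burned or
    bare surfaces; eps guards against division by zero on dark pixels.
    """
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance patch: top-left "vegetation", bottom-right "burn scar"
red = np.array([[0.10, 0.40], [0.05, 0.20]])
nir = np.array([[0.50, 0.45], [0.40, 0.20]])
veg = ndvi(red, nir)
```

The contrast between high-NDVI vegetation and low-NDVI burn scars is what makes it a useful extra channel for a segmentation network like the U-Net described above.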
- Multispectral Crop Yield Prediction Using 3D-Convolutional Neural Networks
and Attention Convolutional LSTM Approaches-
Authors:
Seyed Mahdi Mirhoseini Nejad;Dariush Abbasi-Moghadam;Alireza Sharifi;Nizom Farmonov;Khilola Amankulova;Mucsi László;
Pages: 254 - 266 Abstract: In recent years, national economies have been strongly affected by crop yield predictions. With early prediction, market prices can be forecast, import and export plans can be prepared, the social and economic effects of product waste can be minimized, and humanitarian food aid programs can be planned. In addition, agricultural fields are constantly expanding to generate the products required. The use of machine learning (ML) methods in this sector can lead to efficient production of high-quality agricultural products. Traditional predictive models are unable to capture nonlinear relationships in the data. Recently, there has been a revolution in prediction systems via the advancement of ML, which can be used to achieve highly accurate decision-making networks. Thus far, many strategies have been used to evaluate agricultural products, such as DeepYield, CNN-LSTM, and ConvLSTM; however, better prediction accuracy is still required. In this study, two architectures are proposed. The first model includes 2D-CNN, skip connections, and LSTM-Attentions. The second model comprises 3D-CNN, skip connections, and ConvLSTM Attention. The input data are taken from MODIS products, such as Land-Cover, Surface-Temperature, and MODIS land-surface data, from 2003 to 2018 at the county level over 1800 counties where soybean is mainly cultivated in the USA. The proposed methods have been compared with the most recent models, and the results show that the second proposed method notably outperforms the other techniques. In terms of MAE, the second proposed method, DeepYield, ConvLSTM, 3DCNN, and CNN-LSTM obtained 4.3, 6.003, 6.05, 6.3, and 7.002, respectively. PubDate:
2023
Issue No: Vol. 16 (2023)
- Evaluation and Improvement of FY-4A/AGRI Sea Surface Temperature
Data-
Authors:
Quanjun He;Xin Hu;Yanwei Wu;
Pages: 267 - 277 Abstract: The advanced geosynchronous radiation imager (AGRI) aboard the Chinese Fengyun-4A (FY-4A) satellite provides an operational hourly sea surface temperature (SST) product. However, the temporal and spatial variation of the errors in this product is still unclear. In this article, FY-4A/AGRI SST is evaluated against in situ SST from 2019 to 2021, and a cumulative distribution function matching method is adopted to reduce the errors. Statistical results show that the mean bias and root-mean-square error (RMSE) of FY-4A/AGRI SST are −0.37 °C and 0.98 °C, and the median and robust standard deviation (RSD) are −0.30 °C and 0.90 °C. The variations in daily and monthly errors are large, and there are no prominent seasonal variations during the period analyzed. There are negative biases exceeding −1.0 °C in low- to mid-latitude regions and larger positive biases in the southern high-latitude region. The difference between satellite and in situ SST depends on both the satellite zenith angle and the SST itself. After bias correction, the bias and RMSE are reduced to −0.02 °C and 0.72 °C, and the median and RSD are reduced to 0.00 °C and 0.60 °C. On the time scale, the fluctuation ranges of the bias and median are smaller. The difference between satellite and in situ SST can reflect the diurnal variation of SST. The biases are generally within ±0.2 °C over the full disk. The error dependencies on satellite zenith angle and SST are also greatly reduced. PubDate:
2023
Issue No: Vol. 16 (2023)
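The bias correction in the entry above rests on cumulative distribution function matching. The article's operational implementation is not reproduced here; the sketch below only shows the general quantile-mapping idea on synthetic data (the bias and noise figures are illustrative, loosely echoing the −0.37 °C mean bias reported).

```python
import numpy as np

def cdf_match(satellite, reference):
    """Map satellite values onto the reference distribution.

    Each satellite value is replaced by the reference value at the same
    empirical quantile, implemented as interpolation between the two
    sorted samples.
    """
    sat_sorted = np.sort(satellite)
    ref_sorted = np.sort(reference)
    return np.interp(satellite, sat_sorted, ref_sorted)

rng = np.random.default_rng(0)
in_situ = rng.normal(20.0, 2.0, 5000)                 # synthetic "truth" SSTs (deg C)
biased = in_situ - 0.37 + rng.normal(0.0, 0.3, 5000)  # cold-biased satellite SSTs
corrected = cdf_match(biased, in_situ)
```

After matching, the corrected sample follows the reference distribution, so systematic offsets such as a cold bias are removed while the ranking of warm versus cold pixels is preserved.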
- An Automatic and Accurate Method for Marking Ground Control Points in
Unmanned Aerial Vehicle Photogrammetry-
Authors:
Linghao Kong;Ting Chen;Taibo Kang;Qing Chen;Di Zhang;
Pages: 278 - 290 Abstract: Owing to the rapid development of unmanned aerial vehicle (UAV) technology and photogrammetric software, UAV photogrammetry projects are becoming increasingly automated. However, marking ground control points (GCPs) in current UAV surveys still generally needs to be completed manually, which is inefficient and prone to human error. Based on the characteristics of UAV photogrammetry, a novel type of circular coded target, together with its identification and decoding algorithm, is proposed to realize an automatic and accurate approach for marking GCPs. UAV survey experiments validate the feasibility of the proposed method, which has comparative advantages in efficiency, robustness, and accuracy over traditional targets. Additionally, we conducted experiments to examine the effects of projection size and viewing angle, number of coded bits, and environmental conditions on the proposed method. The results show that it can achieve robust identification and accurate positioning even under challenging conditions, and a smaller number of coded bits is recommended for better robustness. PubDate:
2023
Issue No: Vol. 16 (2023)
- PVT-SAR: An Arbitrarily Oriented SAR Ship Detector With Pyramid Vision
Transformer-
Authors:
Yue Zhou;Xue Jiang;Guozheng Xu;Xue Yang;Xingzhao Liu;Zhou Li;
Pages: 291 - 305 Abstract: The development of deep learning has significantly boosted ship detection in synthetic aperture radar (SAR) images. Most previous works rely on convolutional neural networks (CNNs), which extract features through local receptive fields and are sensitive to noise. Moreover, these detectors have limited performance in large-scale and complex scenes due to the strong interference of the inshore background and the variability of target imaging characteristics. In this article, a novel SAR ship detection framework is proposed, which establishes the pyramid vision transformer (PVT) paradigm for multiscale feature representations in SAR images and hence is referred to as PVT-SAR. It breaks the limitation of the CNN receptive field and captures global dependence through the self-attention mechanism. Since the difficulties of object detection in SAR and natural images are quite different, directly applying an existing transformer structure, such as PVT-small, cannot achieve satisfactory performance for SAR object detection. Compared with the PVT, overlapping patch embedding and mixed transformer encoder modules are incorporated to overcome the problems of densely arranged targets and insufficient data. Then, a multiscale feature fusion module is designed to further improve the detection of small targets. Moreover, a normalized Gaussian Wasserstein distance loss is employed to suppress the influence of scattering interference at the ship's boundary. The superiority of the proposed PVT-SAR detector over several state-of-the-art oriented bounding box detectors has been evaluated in both inshore and offshore scenes on two commonly used SAR ship datasets (i.e., RSSDD and HRSID). PubDate:
2023
Issue No: Vol. 16 (2023)
- SWDet: Anchor-Based Object Detector for Solid Waste Detection in Aerial
Images-
Authors:
Liming Zhou;Xiaohan Rao;Yahui Li;Xianyu Zuo;Yang Liu;Yinghao Lin;Yong Yang;
Pages: 306 - 320 Abstract: Waste pollution is one of the most serious environmental issues in the world. Efficient detection of solid waste (SW) in aerial images can improve subsequent waste classification and automatic sorting on the ground. However, traditional methods have problems such as poor generalization and limited detection performance. This article presents an anchor-based object detector for solid waste in aerial images (SWDet). Specifically, we construct an asymmetric deep aggregation (ADA) network with structurally reparameterized asymmetric blocks to extract waste features with inconspicuous appearance. In addition, considering waste with blurred boundaries caused by the resolution of aerial images, this article constructs an efficient attention fusion pyramid network (EAFPN) to obtain contextual information and multiscale geospatial information via attention fusion, enabling the model to capture the scattering features of irregularly shaped waste. We also construct a dataset for solid waste aerial detection (SWAD) by collecting aerial images of SW in Henan Province, China, to validate the effectiveness of our method. Experimental results show that SWDet outperforms most existing methods for SW detection in aerial images. PubDate:
2023
Issue No: Vol. 16 (2023)
- Remote Sensing Scene Classification Via Multigranularity Alternating
Feature Mining-
Authors:
Qian Weng;Zhiming Huang;Jiawen Lin;Cairen Jian;Zhengyuan Mao;
Pages: 318 - 330 Abstract: Models based on convolutional neural networks (CNNs) have achieved remarkable advances in high-resolution remote sensing (HRRS) image scene classification, but challenges remain due to the high similarity among different categories and the loss of local information. To address these issues, a multigranularity alternating feature mining (MGA-FM) framework is proposed in this article to learn and fuse both global and local information for HRRS scene classification. First, a region confusion mechanism is adopted to guide the network's shallow layers to adaptively learn the salient features of distinguishing regions. Second, an alternating comprehensive training strategy is designed to capture and fuse shallow local feature information and deep semantic information to enhance feature representation capabilities. In particular, the MGA-FM framework can be flexibly embedded in various CNN backbone networks as a training mechanism. Extensive experimental results and visualization analysis on three remote sensing scene datasets indicate that the proposed method achieves competitive classification performance. PubDate:
2023
Issue No: Vol. 16 (2023)
- Correction of Sea Surface Wind Speed Based on SAR Rainfall Grade
Classification Using Convolutional Neural Network-
Authors:
Chaogang Guo;Weihua Ai;Xi Zhang;Yanan Guan;Yin Liu;Shensen Hu;Xianbin Zhao;
Pages: 321 - 328 Abstract: The technology for retrieving the sea surface wind field from spaceborne synthetic aperture radar (SAR) is increasingly mature. However, retrieving the sea surface wind field under precipitation remains challenging; in particular, the strong precipitation associated with extreme weather such as tropical cyclones can cause wind speed retrieval errors exceeding 10 m/s. Semantic segmentation and weak supervision methods have been used for SAR rainfall recognition, but rainfall segmentation is not accurate enough to support the correction of wind field retrieval. In this article, we propose using deep learning to classify the rainfall grades in SAR images and combining this with a rainfall correction model to improve the retrieval accuracy of sea surface wind speed. To overcome the challenge of limited training samples, a transfer learning method based on fine-tuning is adopted. Preliminary results demonstrate the effectiveness of this deep learning methodology. The model classifies rain and no-rain images with an accuracy of 96.2% and classifies rainfall intensity grades with an accuracy of 86.2%. The rainfall correction model, with SAR rainfall grades identified by the convolutional neural network, reduces the root-mean-square error of retrieved wind speed from 3.83 to 1.76 m/s. The combination of SAR rainfall grade recognition and the rainfall correction method improves the retrieval accuracy of SAR wind speed, which can further promote the operational application of SAR wind fields. PubDate:
2023
Issue No: Vol. 16 (2023)
- SAR Target Recognition via Random Sampling Combination in Open-World
Environments-
Authors:
Xiaojing Geng;Ganggang Dong;Ziheng Xia;Hongwei Liu;
Pages: 331 - 343 Abstract: Target recognition in SAR images has been widely studied over the years. Most of these works are based on the assumption that the targets in the test set belong to a limited set of classes. In practical scenarios, it is common to encounter various kinds of new targets, so it is more meaningful to study target recognition in open-world environments. In these scenes, it is necessary to reject unknown classes while maintaining classification performance on known classes. In the past years, few works have been devoted to open set target recognition. Though the detection performance on unknown targets could be improved to a certain extent in the preceding works, most detection schemes are independent of a pretrained feature extractor, leading to potential open space risks. Besides, the model architectures are complicated, resulting in huge computational cost. To solve these problems, a family of new methods for open set target recognition is proposed. Targets indistinguishable from known classes are constructed by a random sampling combination strategy and are then fed into the classifier for feature learning. The original open-world environment is thereby transformed into a closed-world environment containing the unknown class. Moreover, the special implication of the generated unknown targets is highlighted and used to realize unknown detection. Extensive experimental results on the MSTAR benchmark dataset illustrate the effectiveness of the proposed methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Fast Large-Scale Path Planning Method on Lunar DEM Using Distributed
Tile Pyramid Strategy-
Authors:
Zhonghua Hong;Bin Tu;Xiaohua Tong;Haiyan Pan;Ruyan Zhou;Yun Zhang;Yanling Han;Jing Wang;Shuhu Yang;Zhenling Ma;
Pages: 344 - 355 Abstract: In lunar exploration missions, path planning for lunar rovers using digital elevation models (DEMs) is currently a hot topic in academic research. However, path planning using large-scale DEMs has rarely been discussed, owing to the low time efficiency of existing algorithms. Therefore, in this article, we propose a fast path-planning method using a distributed tile pyramid strategy and an improved A* algorithm. The proposed method consists of three main steps. First, a tile pyramid is generated for the large lunar DEM and stored in the Hadoop distributed file system. Second, a distributed path-planning strategy based on the tile pyramid (DPPS-TP) is used to accelerate path-planning tasks on large-scale lunar DEMs using Spark and Hadoop. Finally, an improved A* algorithm is proposed to increase the speed of the path-planning task within each tile. The method was tested using lunar DEM images generated by the Chang'e-2 CCD stereo camera. Experimental results demonstrate that, in a single-machine serial setting on the source DEM, the proposed A* algorithm with open and closed lists supporting random access (OC-RA-A*) is 3.59 times faster than the traditional A* algorithm in long-distance path-planning tasks, and that the proposed tile-pyramid-based DPPS-TP with distributed parallel computation is 113.66 times faster in the long-range path-planning task. PubDate:
2023
Issue No: Vol. 16 (2023)
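The entry above builds on A* search within each DEM tile. The article's OC-RA-A* variant and tile pyramid are not reproduced here; the sketch below is only a generic textbook A* on a toy 4-connected occupancy grid, using the heap-based open list and hash-based (random access) closed set whose access costs that variant targets.

```python
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a 2D grid (0 = free, 1 = blocked)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]          # (f, g, node) priority queue
    came_from, g_score, closed = {}, {start: 0}, set()
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:                          # reconstruct path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cur in closed:                        # skip stale heap entries
            continue
        closed.add(cur)
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get((nr, nc), float("inf")):
                    g_score[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                                  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # routes around the blocked middle row
```

On real DEM tiles the unit step cost would be replaced by a terrain-dependent cost (slope, roughness), but the open/closed-list bookkeeping that dominates runtime is the same.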
- Hypothetical Cirrus Band Generation for Advanced Himawari Imager Sensor
Using Data-to-Data Translation With Advanced Meteorological Imager Observations-
Authors:
Jeong-Eun Park;Yun-Jeong Choi;Jaehoon Jeong;Sungwook Hong;
Pages: 356 - 368 Abstract: Cirrus cloud contributes significantly to Earth's radiation budget and the greenhouse effect. The Advanced Himawari Imager (AHI) onboard the Himawari-8 satellite lacks a 1.37 μm band, which is sensitive for monitoring cirrus clouds. This article proposes a conditional generative adversarial network-based data-to-data translation (D2D) model to generate a hypothetical AHI 1.37 μm band. For training and testing the D2D model, the Geo-Kompsat-2A Advanced Meteorological Imager (AMI) 1.37 μm band and other bands highly correlated with cirrus, from July 24, 2019 to July 31, 2020, were used. The D2D model exhibited a high level of agreement (mean statistics: correlation coefficient (CC) = 0.9827, bias = 0.0004, and root-mean-square error (RMSE) = 0.0086 in albedo units) between the observed and D2D-generated AMI 1.37 μm bands on validation datasets. Applying the D2D model to the AHI sensor showed that the D2D-generated AHI 1.37 μm band was qualitatively analogous to the observed AMI 1.37 μm band (average statistics: bias = 0.0026, RMSE = 0.0191 in albedo units, and CC = 0.9158) on the 1st, 15th, and 28th of each month of 2020 in the common observing regions between Korea and Japan. Validation against CALIPSO data also showed that the D2D-generated AHI 1.37 μm band performed similarly to the observed AMI 1.37 μm band. Consequently, this article can significantly contribute to cirrus detection and its application to climatology. PubDate:
2023
Issue No: Vol. 16 (2023)
- Improved Swin Transformer-Based Semantic Segmentation of Postearthquake
Dense Buildings in Urban Areas Using Remote Sensing Images-
Authors:
Liangyi Cui;Xin Jing;Yu Wang;Yixuan Huan;Yang Xu;Qiangqiang Zhang;
Pages: 369 - 385 Abstract: Timely acquisition of earthquake-induced building damage is crucial for emergency assessment and post-disaster rescue. Optical remote sensing is a typical method for obtaining seismic data due to its wide coverage and fast response speed. Convolutional neural networks (CNNs) are widely applied to remote sensing image recognition; however, their insufficient ability to extract and express the global correlations between local image patches limits the performance of dense building segmentation. This paper proposes an improved Swin Transformer to segment dense urban buildings from remote sensing images with complex backgrounds. The original Swin Transformer is used as the backbone of the encoder, and a convolutional block attention module is employed in the linear embedding and patch merging stages to focus on significant features. Hierarchical feature maps are then fused to strengthen the feature extraction process and fed into UPerNet (as the decoder) to obtain the final segmentation map. Collapsed and non-collapsed buildings are labeled from remote sensing images of the Yushu and Beichuan earthquakes. Data augmentations of horizontal and vertical flipping, brightness adjustment, uniform fogging, and non-uniform fogging are performed to simulate actual situations. The effectiveness and superiority of the proposed method over the original Swin Transformer and several mature CNN-based segmentation models are validated by ablation experiments and comparative studies. The results show that the mean intersection-over-union of the improved Swin Transformer reaches 88.53%, an improvement of 1.3% over the original model. The stability, robustness, and generalization ability of dense building recognition under complex weather disturbances are also validated. PubDate:
2023
Issue No: Vol. 16 (2023)
- Regional Characteristics and Impact Factors of Change in Terrestrial Water
Storage in Northwestern China From 2002 to 2020-
Authors:
Jianguo Yin;Jiahua Wei;Qiong Li;Olusola O. Ayantobo;
Pages: 386 - 398 Abstract: This article characterizes the linear trends and interannual signals of terrestrial water storage (TWS) and meteorological variables, including precipitation (P) and evapotranspiration (ET), over arid Northwestern China (NWC). The relative impacts of P, ET, and human water utilization (HU) on TWS variation among the 10 watersheds of NWC (i.e., 5 watersheds in Xinjiang, 3 in the Hexi Corridor, and 2 in Qinghai) were then investigated. The results indicate that groundwater storage (GWS) was the main contributor to TWS variation and matched TWS well in spatial features and watershed-scale variations. The entire NWC presented growth trends for P (0.05 cm/year) and ET (0.22 cm/year) and declining trends for TWS (−0.19 cm/year) and GWS (−0.20 cm/year). The watersheds in Qinghai Province, which were mainly affected by natural factors, showed increasing TWS/GWS trends. The watersheds in Xinjiang and the Hexi Corridor, which were strongly affected by human activities, generally showed declining TWS/GWS trends, with Xinjiang declining more intensively than the Hexi Corridor. The analysis of HU indicates that sustainable water management and water-saving technologies effectively slowed the decline of TWS/GWS in the watersheds of the Hexi Corridor; however, they were not sufficient to address the water shortage caused by farmland expansion, slight P growth, and high ET growth in Xinjiang. Groundwater use, as the main source compensating for the increase in HU (especially agricultural water use), exacerbated TWS/GWS loss in Xinjiang. This article provides valuable information for water management over arid NWC. PubDate:
2023
Issue No: Vol. 16 (2023)
- A New Method for Estimating Signal-to-Noise Ratio in UAV Hyperspectral
Images Based on Pure Pixel Extraction-
Authors:
Wenzhong Tian;Qingzhan Zhao;Za Kan;Xuefeng Long;Hanqing Liu;Juntao Cheng;
Pages: 399 - 408 Abstract: The signal-to-noise ratio (SNR) is an important radiometric parameter for remote sensing image quality assessment as well as a key performance indicator for remote sensing sensors. At present, SNR estimation methods based on regular or continuous segmentation are generally used to obtain the image SNR. However, land cover type has a great influence on the results of SNR estimation with regular segmentation, especially for the high-spectral-resolution and high-spatial-resolution images obtained by low-altitude UAV hyperspectral sensors; in addition, continuous segmentation is difficult to achieve for some land cover types. In view of these limitations of existing SNR estimation methods, a new unsupervised method for estimating the SNR of UAV hyperspectral images, called pure pixel extraction and spectral decorrelation, is developed in this article. By directly extracting pure pixels in the spatial dimension and exploiting the correlation in the spectral dimension to obtain the SNR of the hyperspectral image, this method improves the accuracy of SNR estimation without resorting to the conventional approach of refining the segmentation algorithm. Additionally, the box counting method is introduced to determine the SNR aggregation interval. The results show that the proposed method has higher accuracy and smaller errors than the other SNR estimation methods. Moreover, the method is more robust and can be used for both radiance and reflectance (atmospherically corrected) UAV hyperspectral images. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Two-Step Ensemble-Based Genetic Algorithm for Land Cover Classification
-
Authors:
Yang Cao;Wei Feng;Yinghui Quan;Wenxing Bao;Gabriel Dauphin;Yijia Song;Aifeng Ren;Mengdao Xing;
Pages: 409 - 418 Abstract: Accurate land use and land cover (LULC) maps are effective tools to help achieve sound urban planning and precision agriculture. As an intelligent optimization technology, the genetic algorithm (GA) has been successfully applied to various image classification tasks in recent years. However, a simple GA faces challenges, such as complex calculation, poor noise immunity, and slow convergence. This research proposes a two-step ensemble protocol for LULC classification using a grayscale-spatial-based GA model. The first ensemble framework uses fuzzy c-means to classify pixels into those that are difficult to cluster and those that are easy to cluster, which helps reduce the search space for evolutionary computation. The second ensemble framework uses neighborhood windows as heuristic information to adaptively modify the objective function and mutation probability of the GA, which improves the discrimination ability and decision-making of the GA. In this study, three research areas in Dangyang, China, are used to validate the effectiveness of the proposed method. The experiments show that the proposed method can effectively maintain image details, restrain noise, and achieve rapid convergence. Compared with the reference methods, the best overall accuracy obtained by the proposed algorithm is 88.72%. PubDate:
2023
Issue No: Vol. 16 (2023)
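The first ensemble step, using fuzzy c-means memberships to separate easy-to-cluster pixels from hard ones, can be sketched as below. This is an assumed minimal implementation; the 0.8 membership threshold is a hypothetical choice, not taken from the paper:

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=50):
    """Minimal fuzzy c-means on 1-D grayscale values; returns the n x c membership matrix."""
    x = np.asarray(x, float).reshape(-1, 1)
    centers = np.quantile(x, (np.arange(c) + 0.5) / c).reshape(-1, 1)  # deterministic init
    for _ in range(iters):
        d = np.abs(x - centers.T) + 1e-12                  # n x c distances
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                  # normalize memberships per pixel
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)
    return u

def split_easy_hard(u, thresh=0.8):
    """Pixels with one dominant membership are 'easy'; the rest form the reduced GA search space."""
    return u.max(axis=1) >= thresh
```

Only the pixels flagged as hard would then be passed to the evolutionary search, shrinking its search space as the abstract describes.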
- Spatial–Spectral Split Attention Residual Network for Hyperspectral Image Classification
Authors:
Zhenqiu Shu;Zigao Liu;Jun Zhou;Songze Tang;Zhengtao Yu;Xiao-Jun Wu;
Pages: 419 - 430 Abstract: In the past few years, many convolutional neural networks (CNNs) have been applied to hyperspectral image (HSI) classification. However, many of them have the following drawbacks: they do not fully consider the abundant band spectral information and insufficiently extract the spatial information of HSI; all bands and neighboring pixels are treated equally, so CNNs may learn features from redundant or useless bands/pixels; and a significant amount of hidden semantic information is lost when a single-scale convolution kernel is used. To alleviate these problems, we propose a spatial–spectral split attention residual network (S$^{3}$ARN) for HSI classification. In S$^{3}$ARN, a split attention strategy is used to fuse the features extracted from multireceptive fields, in which both the spectral and spatial split attention modules are composed of bottleneck residual blocks. Thanks to the bottleneck structure, the proposed method can effectively prevent overfitting, speed up model training, and reduce the number of network parameters. Moreover, the spectral and spatial attention residual branches generate attention masks that simultaneously emphasize useful bands and neighboring pixels and suppress useless ones. Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed model for HSI classification. PubDate:
2023
Issue No: Vol. 16 (2023)
- Cross Field-Based Segmentation and Learning-Based Vectorization for Rectangular Windows
Authors:
Xiangyu Zhuo;Jiaojiao Tian;Friedrich Fraundorfer;
Pages: 431 - 448 Abstract: Detection and vectorization of windows from building façades are important for building energy modeling, civil engineering, and architecture design. However, current applications still face the challenges of low accuracy and lack of automation. In this article, we propose a new two-step workflow for window segmentation and vectorization from façade images. First, we propose a cross-field learning-based neural network architecture, augmented by a grid-based self-attention module, for window segmentation from rectified façade images, resulting in pixel-wise window blobs. Second, we propose a regression neural network augmented by squeeze-and-excitation (SE) attention blocks for window vectorization. The network takes the segmentation results together with the original façade image as input and directly outputs the positions of window corners, resulting in vectorized window objects with improved accuracy. To validate the effectiveness of our method, experiments are carried out on four public façade image datasets, with results usually yielding higher accuracy for the final window prediction than baseline methods in terms of intersection-over-union score, F1 score, and pixel accuracy. PubDate:
2023
Issue No: Vol. 16 (2023)
- Multicascaded Feature Fusion-Based Deep Learning Network for Local Climate Zone Classification Based on the So2Sat LCZ42 Benchmark Dataset
Authors:
Weizhen Ji;Yunhao Chen;Kangning Li;Xiujuan Dai;
Pages: 449 - 467 Abstract: A detailed investigation of the microclimate is beneficial for optimizing the urban inner/spatial pattern to enhance thermal comfort or even reduce heatwave disasters, whereas the difficulty of accurately classifying local climate zones (LCZs) severely restricts analysis of thermal characteristics. Generally, deep learning-based approaches are effective for adaptive LCZ mapping, yet they often have poor accuracy because inadequate cascade feature extraction patterns may not adapt to the fuzzy LCZ boundaries produced by intricate urban textures, especially when using large-scale datasets. To address these issues, we propose a novel CNN model in which a triple feature fusion pattern enhances LCZ recognition based on the So2Sat LCZ42 large-scale annotated dataset. The approach connects multilayer cascaded information to participate in category judgment, which avoids, as much as possible, the loss of effective feature information through continuous cascade transformation. The results show that the overall accuracy and kappa coefficient of the proposed model reach 0.70 and 0.68, respectively, an improvement of approximately 4.47% and 6.25% over advanced LCZ classification approaches. In particular, the accuracy of the proposed approach improves even further after the fusion structure or layer depth is partially removed or reduced, respectively. Finally, we discuss several items, including the effectiveness of different parameters and cascaded feature fusion patterns, the superiority of multilayer cascade feature fusion, the mapping impact of seasons and cloud cover, and other issues in LCZ mapping. This article will facilitate improvements in the precision of urban thermal environment research. PubDate:
2023
Issue No: Vol. 16 (2023)
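The reported overall accuracy and kappa coefficient are standard confusion-matrix statistics; a small sketch of how they are computed (generic, not code from the paper):

```python
import numpy as np

def overall_accuracy_and_kappa(conf):
    """OA and Cohen's kappa from a confusion matrix (rows: reference, cols: predicted)."""
    conf = np.asarray(conf, float)
    n = conf.sum()
    po = np.trace(conf) / n                      # observed agreement = overall accuracy
    pe = (conf.sum(0) @ conf.sum(1)) / n ** 2    # agreement expected by chance
    return po, (po - pe) / (1 - pe)
```

Kappa discounts the agreement a random labeler would achieve, which is why it is reported alongside OA for imbalanced LCZ classes.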
- Determination of the Spatial Extent of the Engine Exhaust-Disturbed Region of the Chang'E-4 Landing Site Using LROC NAC Images
Authors:
Yaqiong Wang;Huan Xie;Chao Wang;Xiaohua Tong;Sicong Liu;Xiong Xu;
Pages: 468 - 481 Abstract: The regolith of the Chang'E-4 landing site was disturbed by its engine exhaust. To explore the interaction between the engine exhaust and the regolith, it was necessary to identify the exhaust-disturbed region. This article focuses on determining the extent of the disturbed region by using lunar reconnaissance orbiter camera narrow angle camera (LROC NAC) images. For this purpose, the tools of temporal-ratio images, phase-ratio images, reflectance profiles, and reflectance isoline graphs are employed. The reflectance profiles and isoline graphs derived from the temporal-ratio images reveal the reflectance changes before and after landing. Compared with the reflectance profiles, the isoline graphs also include the spatial information of the isolines and are thus more robust to noise. Based on the magnitudes of the reflectance changes around the lander, the engine exhaust-disturbed region was further divided into the focus disturbed region (FDR) and the diffuse disturbed region (DDR). The final estimated spatial extents along the north–south and east–west directions were ∼9.6 and ∼10.8 m for the FDR and ∼75 and ∼80 m for the DDR. Compared with the estimated spatial extent of the Chang'E-3 landing site, the DDR of the Chang'E-4 landing site was larger, but the FDR was smaller. We attribute this to geological and topographic factors. Reflectance in the FDR increased by ∼10±1% relative to the undisturbed region. This indicates similar processes causing the variations in the regolith properties, likely including the smoothing of the surface from microscopic to macroscopic scales by the destruction of fine-grained regolith components, or changes in surface maturity. PubDate:
2023
Issue No: Vol. 16 (2023)
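A temporal-ratio image is simply the per-pixel ratio of post- to pre-landing reflectance; a minimal sketch follows (the disturbance threshold is illustrative, not the paper's value):

```python
import numpy as np

def temporal_ratio(after, before, eps=1e-6):
    """Ratio of post- to pre-landing reflectance; departures from 1 flag disturbed regolith."""
    return np.asarray(after, float) / (np.asarray(before, float) + eps)

def disturbed_mask(after, before, tol=0.05):
    """Pixels whose reflectance changed by more than tol (the paper reports ~10% in the FDR)."""
    return np.abs(temporal_ratio(after, before) - 1.0) > tol
```

A phase-ratio image works analogously but divides images taken at different phase angles rather than different times.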
- High-Resolution Semantically Consistent Image-to-Image Translation
Authors:
Mikhail Sokolov;Christopher Henry;Joni Storie;Christopher Storie;Victor Alhassan;Mathieu Turgeon-Pelchat;
Pages: 482 - 492 Abstract: Deep learning has become one of remote sensing scientists' most efficient computer vision tools in recent years. However, the lack of training labels for remote sensing datasets means that scientists need to solve the domain adaptation (DA) problem to narrow the discrepancy between satellite image datasets. As a result, image segmentation models trained afterward could generalize better and use an existing set of labels instead of requiring new ones. This work proposes an unsupervised DA model that preserves semantic consistency and per-pixel quality for the images during the style-transferring phase. This article's major contribution is the improved architecture of the SemI2I model, which significantly boosts the proposed model's performance and makes it competitive with the state-of-the-art CyCADA model. A second contribution is testing the CyCADA model on multiband remote sensing datasets, such as WorldView-2 and SPOT-6. The semantic segmentation model, trained on the adapted images, shows a substantial performance gain compared to the SemI2I model and reaches results similar to the state-of-the-art CyCADA model. Future development of the proposed method could include ecological domain transfer, a priori evaluation of dataset quality in terms of data distribution, or exploration of the inner architecture of the DA model. PubDate:
2023
Issue No: Vol. 16 (2023)
- Development of Simulation Models Supporting Next-Generation Airborne Weather Radar for High Ice Water Content Monitoring
Authors:
Yunish Shrestha;Yan Zhang;Greg M. McFarquhar;William Blake;Mariusz Starzec;Steven D. Harrah;
Pages: 493 - 507 Abstract: In this article, a method of extending airborne weather radar modeling to incorporate high-ice-water-content (HIWC) conditions is developed. A novel aspect is the incorporation of flight-test measurement data, including forward-looking radar measurements and in situ microphysics probe data, into the model and into the evaluation of the modeling. The simulation models assume a dual-polarized, airborne forward-looking radar, while for single-polarized operations, the index of dispersion is included as a helpful indicator for HIWC detection. The radar system simulation models are useful for design evaluations of next-generation airborne aviation hazard monitoring and incorporate HIWC hazard detection algorithms. Example applications of the simulator, such as hazard detection for simulated HIWC flight encounter scenarios derived from specific numerical weather prediction (NWP) model outputs, are discussed. PubDate:
2023
Issue No: Vol. 16 (2023)
- GNSS-Based Passive Inverse SAR Imaging
Authors:
Pengbo Wang;Xinkai Zhou;Yue Fang;Hongcheng Zeng;Jie Chen;
Pages: 508 - 521 Abstract: The utilization of global navigation satellite system (GNSS) signals for remote sensing has been a hot topic recently. In this article, the feasibility of GNSS-based passive inverse synthetic aperture radar (P-ISAR) is analyzed. GNSS-based P-ISAR can generate a two-dimensional image of a moving target, providing an estimate of the target size, which is very important information in target recognition. An effective GNSS-based P-ISAR moving target imaging algorithm is proposed. First, a precise direct path interference (DPI) suppression method is derived to eliminate the DPI power in the detection channel. Then, the P-ISAR signal processing method is established. Due to the large synthetic aperture time, the Doppler profile of the ISAR image will defocus if the Fourier transform is performed directly. As a solution, a parametric autofocusing and cross-range scaling algorithm is specially tailored for GNSS-based P-ISAR. The proposed algorithm can not only focus and scale the ISAR image but also provide an estimate of the cross-range velocity of the target. A simulation with an airplane target is designed to test the signal processing method. Finally, an experiment is conducted with a civil airplane as the target and GPS satellites as the illumination source. A focused ISAR image is successfully acquired. The estimated length and velocity of the target are approximately consistent with the ground truth obtained from the flight record. The potential of GNSS-based P-ISAR for multistatic operation is also illustrated by the fusion of ISAR images obtained using different satellites as illumination sources. PubDate:
2023
Issue No: Vol. 16 (2023)
- Spectral–Spatial Generative Adversarial Network for Super-Resolution Land Cover Mapping With Multispectral Remotely Sensed Imagery
Authors:
Cheng Shang;Shan Jiang;Feng Ling;Xiaodong Li;Yadong Zhou;Yun Du;
Pages: 522 - 537 Abstract: Super-resolution mapping (SRM) can effectively predict the spatial distribution of land cover classes within mixed pixels at a higher spatial resolution than the original remotely sensed imagery. The uncertainty of land cover fraction errors within mixed pixels is one of the most important factors affecting SRM accuracy. Studies have shown that SRM methods using deep learning techniques have significantly improved land cover mapping accuracy but have not coped well with spectral–spatial errors. This study proposes an end-to-end SRM model using a spectral–spatial generative adversarial network (SGS) with the direct input of multispectral remotely sensed imagery, which deals with spectral–spatial error. The proposed SGS comprises the following three parts: first, cube-based convolution for spectral unmixing is adopted to generate land cover fraction images. Second, a residual-in-residual dense block fully and jointly considers spectral and spatial information and reduces spectral errors. Third, a relativistic average GAN is designed as a backbone to further improve the super-resolution performance and reduce spectral–spatial errors. SGS was tested in one synthetic and two realistic experiments with multi/hyperspectral remotely sensed imagery as the input, comparing the results with those of hard classification and several classic SRM methods. The results showed that SGS performed well at reducing land cover fraction errors, reconstructing spatial details, removing unpleasant and unrealistic land cover artifacts, and eliminating false recognition. PubDate:
2023
Issue No: Vol. 16 (2023)
- Global Unsupervised Assessment of Multifrequency Vegetation Optical Depth Sensitivity to Vegetation Cover
Authors:
Claudia Olivares-Cabello;David Chaparro;Mercè Vall-llossera;Adriano Camps;Carlos López-Martínez;
Pages: 538 - 552 Abstract: Vegetation optical depth (VOD) has contributed to monitoring vegetation dynamics and carbon stocks at different microwave frequencies. Nevertheless, there is a need to determine which frequencies are appropriate for monitoring different vegetation types. Also, as only a few VOD-related studies use multifrequency approaches, their applicability needs to be evaluated. Here, we analyze the sensitivity of VOD at three frequencies (L-, C-, and X-bands) to different vegetation covers by applying a global-scale unsupervised classification of VOD. A combination of these frequencies (LCX-VOD) is also studied. Two land cover datasets are used as benchmarks and, conceptually, serve as proxies of vegetation density. Results confirm that L-VOD is appropriate for monitoring the densest canopies but, in contrast, X-, C-, and LCX-VOD are more sensitive to the vegetation cover in savannahs, shrublands, and grasslands. In particular, the multifrequency combination is the best suited to sense vegetation in savannahs. Also, our study shows a vegetation–frequency relationship that is consistent with theory: the same canopies (e.g., savannahs and some boreal forests) are classified as lighter ones at L-band due to its higher penetration (e.g., as shrublands), but labeled as denser ones at C- and X-bands due to their saturation (e.g., boreal forests are labeled as tropical forests). This study complements quantitative approaches investigating the link between VOD and vegetation, extends them to different frequencies, and provides hints on which frequencies are suitable for vegetation monitoring depending on the land cover. The conclusions are informative for upcoming multifrequency missions, such as the Copernicus Multifrequency Image Radiometer. PubDate:
2023
Issue No: Vol. 16 (2023)
- Novel Air2water Model Variant for Lake Surface Temperature Modeling With Detailed Analysis of Calibration Methods
Authors:
Adam P. Piotrowski;Jaroslaw J. Napiorkowski;Senlin Zhu;
Pages: 553 - 569 Abstract: The air2water model is a simple and efficient tool for modeling surface water temperature in lakes based solely on air temperature. In this article, we propose modifying the air2water model so that the stratification of cold waters is associated with different parameters than the stratification of warm waters. The situation of a mix of both cold and warm water is also considered. The model is tested on 22 lowland Polish lakes against two classical air2water variants. As the new air2water variant is slightly more complicated than the basic versions, we focus on the importance of the choice of calibration method. Each variant of the air2water model is calibrated with eight different optimization methods, which are also compared on various benchmark problems. We show that the proposed variant is superior to the classical air2water models on about 90% of the tested lakes, but only if the calibration approach is properly selected, which confirms the importance of the link between the model and appropriate optimization procedures. The proposed air2water variant performs well on various lowland lakes, with the exception of large but shallow ones, probably due to the weak stratification of shallow lakes. PubDate:
2023
Issue No: Vol. 16 (2023)
- Multiresolution-Based Rough Fuzzy Possibilistic C-Means Clustering Method for Land Cover Change Detection
Authors:
Tong Xiao;Yiliang Wan;Jianjun Chen;Wenzhong Shi;Jianxin Qin;Deping Li;
Pages: 570 - 580 Abstract: Object-oriented change detection (OOCD) plays an important role in remote sensing change detection. Generally, most current OOCD methods adopt the highest predicted probability to determine whether objects have changed. However, this ignores the fact that only parts of an object may have changed, which generates uncertain classification information. To reduce this classification uncertainty, an improved rough-fuzzy possibilistic $c$-means clustering algorithm combined with multiresolution scale information (MRFPCM) is proposed. First, stacked bitemporal images are segmented using the multiresolution segmentation approach from coarse to fine scales. Second, objects at the coarsest scale are classified into changed, unchanged, and uncertain categories by the proposed MRFPCM. Third, all the changed and unchanged objects from previous scales are combined as training samples to classify the uncertain objects into new changed, unchanged, and uncertain objects. Finally, segmented objects are classified layer by layer based on the MRFPCM until no uncertain objects remain. The MRFPCM method is validated on three datasets with different land change complexity and compared with five widely used change detection methods. The experimental results demonstrate the effectiveness and stability of the proposed approach. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Novel Tensor-Based Hyperspectral Image Restoration Method With Low-Rank Modeling in Gradient Domains
Authors:
Pengfei Liu;Lanlan Liu;Liang Xiao;
Pages: 581 - 597 Abstract: The hyperspectral image (HSI) is easily contaminated by various kinds of mixed noise (such as Gaussian noise, impulse noise, stripes, and deadlines) during the process of data acquisition and conversion, which significantly affects the quality and applications of HSI. As an important and effective scheme for the quality improvement of HSI, the HSI restoration problem aims to recover a clean HSI from a noisy HSI with mixed noise. Thus, based on the tensor modeling of HSI, we propose a novel tensor-based HSI restoration model with low-rank modeling in gradient domains in a unified tensor representation framework. First, for the spectral low-rank modeling of HSI in the spectral gradient domain, we exploit the low-rank property of the spectral gradient and propose a spectral gradient-based weighted nuclear norm low-rank prior term. Second, for the spatial-mode low-rank modeling of HSI in the spatial gradient domain, we exploit the low-rank property of spatial gradient tensors via the discrete Fourier transform and propose a spatial gradient-based tensor nuclear norm low-rank prior term. Then, we use the alternating direction method of multipliers to solve the proposed model. Finally, the restoration results on both simulated and real HSI datasets demonstrate that the proposed method is superior to many state-of-the-art methods in terms of visual and quantitative comparisons. PubDate:
2023
Issue No: Vol. 16 (2023)
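Nuclear-norm low-rank priors such as those in this model are typically minimized inside ADMM via singular-value soft-thresholding; a generic sketch of that proximal step (not the paper's weighted or tensor variant):

```python
import numpy as np

def svt(mat, tau):
    """Singular-value soft-thresholding: the proximal operator of tau * nuclear norm."""
    u, s, vt = np.linalg.svd(np.asarray(mat, float), full_matrices=False)
    # Shrink each singular value toward zero; small ones (noise-dominated) vanish.
    return (u * np.maximum(s - tau, 0.0)) @ vt
```

The weighted variant in the paper would shrink each singular value by its own threshold instead of a single tau.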
- Tomographic Imaging for Orbital Radar Sounding of Earth's Ice Sheets
Authors:
Min Liu;Peng Xiao;Lu Liu;Xiaohong Sui;Chunzhu Yuan;
Pages: 598 - 608 Abstract: BingSat-Tomographic Observation of Polar Ice Sheets (BingSat-TOPIS) is a spaceborne multistatic radar sounding system that can achieve high-resolution and stereoscopic observation. It is designed to penetrate ice sheets and acquire tomographic images. The satellite group flies over the polar regions and can form an approximately 6.56-km cross-track baseline, composed of one master satellite and 40 slave CubeSats. In this article, we propose a tomographic imaging algorithm for orbital radar sounding of the Earth's ice sheets. First, we give a method for calculating the two-way slant range in air and ice; the phase errors of the method are smaller than π/4 rad. Second, we express the radar data cube for a point target in the ice sheet, made up of 40 bistatic synthetic aperture radar echo signals. Finally, echoes are simulated for the TOPIS system, a matched filter is used for pulse compression in range, and the back-projection algorithm is used for along-track and cross-track imaging. The experimental results demonstrate that our tomographic imaging algorithm is effective and reliable. PubDate:
2023
Issue No: Vol. 16 (2023)
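At nadir, the two-way delay through air and ice follows from the slower in-ice wave speed v = c/sqrt(eps). A simplified sketch that ignores off-nadir refraction geometry; eps of about 3.15 is the commonly used real permittivity of glacier ice, and the function itself is illustrative, not the paper's slant-range method:

```python
def two_way_delay(h_air_m, d_ice_m, eps_ice=3.15, c=299792458.0):
    """Two-way nadir delay: free-space leg at c plus in-ice leg at c / sqrt(eps_ice)."""
    return 2.0 * (h_air_m + d_ice_m * eps_ice ** 0.5) / c
```

The paper's method additionally bounds the phase error of the slant-range approximation below π/4 rad for off-nadir targets.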
- High-Precision ZTD Model of Altitude-Related Correction
Authors:
Qingzhi Zhao;Jing Su;Chaoqian Xu;Yibin Yao;Xiaoya Zhang;Jifeng Wu;
Pages: 609 - 621 Abstract: Zenith tropospheric delay (ZTD) is one of the main error sources in space geodesy. The existing regional and global models, such as the Global Pressure and Temperature 3 (GPT3), Global Tropospheric, Global Hopfield, and Shanghai Astronomical Observatory tropospheric delay models, perform well. However, their precision is relatively low in regions with large height differences, which is the focus of this article. A high-precision ZTD model considering the effect of height on tropospheric delay is proposed, and China is selected as the study area due to its large height differences; the model is called the high-precision ZTD model for China (CHZ). The initial ZTD value is calculated on the basis of the GPT3 model, and the periodic terms of the ZTD residual between the global navigation satellite system (GNSS) and the GPT3 model, such as annual, semiannual, and seasonal periods, are determined by the Lomb–Scargle periodogram method in different subareas of China. The relationship between the ZTD periodic residual term and the height of the GNSS station is further analyzed for different seasons, and linear ZTD periodic residual models are obtained. A total of 164 GNSS stations from the Crustal Movement Observation Network of China and 87 radiosonde stations are selected to validate the proposed CHZ model, and hourly GNSS-derived ZTD data are used to establish it. Statistical results show that the average root-mean-square error and bias of the CHZ model are 21.12 and −2.51 mm, respectively, over the whole of China. In addition, the application of the CHZ model in precise point positioning (PPP) shows that the convergence time is improved by 34%, 15%, and 35% in the N, E, and U components, respectively, compared to GPT3-based PPP. PubDate:
2023
Issue No: Vol. 16 (2023)
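A Lomb–Scargle-style period search over an unevenly structured residual series can be emulated with a least-squares sinusoid scan; a numpy-only sketch (candidate periods and function name are illustrative, not the paper's implementation):

```python
import numpy as np

def dominant_period(t, y, periods):
    """Fit a sine+cosine pair at each candidate period; return the period with the best R^2."""
    y = np.asarray(y, float) - np.mean(y)
    best_p, best_r2 = None, -np.inf
    for p in periods:
        w = 2.0 * np.pi / p
        A = np.column_stack([np.sin(w * t), np.cos(w * t)])  # design matrix at this period
        coef = np.linalg.lstsq(A, y, rcond=None)[0]
        r2 = 1.0 - ((y - A @ coef) ** 2).sum() / (y ** 2).sum()
        if r2 > best_r2:
            best_p, best_r2 = p, r2
    return best_p
```

Scanning annual, semiannual, and seasonal candidates in this way is how the dominant residual periodicities per subarea would be identified.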
- Efficient Global Color, Luminance, and Contrast Consistency Optimization for Multiple Remote Sensing Images
Authors:
Zhonghua Hong;Changyou Xu;Xiaohua Tong;Shijie Liu;Ruyan Zhou;Haiyan Pan;Yun Zhang;Yanling Han;Jing Wang;Shuhu Yang;
Pages: 622 - 637 Abstract: Light and color uniformity is essential for the production of high-quality remote sensing image mosaics. Existing color correction methods mainly use flexible models to express the color differences between multiple images and impose specific constraints (e.g., image gradient or contrast constraints) to preserve image texture information as much as possible. Due to these constraints, it is usually difficult to correct for the texture differences between images during processing. We propose a method that can jointly optimize the luminance, contrast, and color differences of remote sensing images. The method processes the chrominance and luminance channels of the image in the YCbCr color space, which helps reduce the mutual influence of the different channels. In the luminance channel, the block-based Wallis transform is used to optimize the luminance and contrast of the image. In the chrominance channel, a spline curve is used as the model to optimize the color differences; the color differences are formulated as a cost function and solved using convex quadratic programming. Moreover, for efficiency, we parallelize the algorithm on a graphics processing unit. The proposed method has been tested on several challenging datasets covering different topographic regions. In terms of visual quality and quantitative indicators, it shows better results than state-of-the-art approaches. PubDate:
2023
Issue No: Vol. 16 (2023)
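The block-based Wallis transform pulls each block's local mean and standard deviation toward target values; a minimal sketch of the classic form (the parameter names c and b follow common usage, and the defaults are illustrative):

```python
import numpy as np

def wallis(block, target_mean, target_std, c=0.8, b=0.9):
    """Classic Wallis filter: remap a block so its mean/std approach the target values.

    c in (0, 1] controls contrast expansion; b in [0, 1] controls brightness forcing.
    """
    m, s = block.mean(), block.std()
    gain = c * target_std / (c * s + (1 - c) * target_std)
    return (block - m) * gain + b * target_mean + (1 - b) * m
```

With c = b = 1 the output mean and std match the targets exactly; smaller values blend toward the block's original statistics, which is what preserves local texture.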
- A Multiresolution Details Enhanced Attentive Dual-UNet for Hyperspectral and Multispectral Image Fusion
Authors:
Jian Fang;Jingxiang Yang;Abdolraheem Khader;Liang Xiao;
Pages: 638 - 655 Abstract: The fusion-based super-resolution of hyperspectral images (HSIs) is drawing increasing attention as a way to surpass the hardware constraints intrinsic to hyperspectral imaging systems in terms of spatial resolution. A low-resolution HSI (LR-HSI) is combined with a high-resolution multispectral image (HR-MSI) to obtain an HR-HSI. In this article, we propose a multiresolution details-enhanced attentive dual-UNet to improve the spatial resolution of HSI. The entire network contains two branches. The first branch is the wavelet detail extraction module, which performs a discrete wavelet transform on the MSI to extract spatial detail features and then passes them through an encoder-decoder. Its main purpose is to extract the spatial features of the MSI at different scales. The second branch is the spatio-spectral fusion module, which aims to inject the detail features from the wavelet detail extraction network into the HSI for better reconstruction. Moreover, the network uses an asymmetric feature-selective attention model to focus on important features at different scales. Extensive experimental results on both simulated and real data show that the proposed network architecture achieves the best performance compared with several leading HSI super-resolution methods in both qualitative and quantitative terms. PubDate:
2023
Issue No: Vol. 16 (2023)
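The wavelet detail extraction module starts from a discrete wavelet transform of the MSI. As a self-contained illustration (using a hand-rolled single-level Haar transform rather than the paper's unspecified wavelet), the detail sub-bands below are the kind of spatial features such a branch would feed into the fusion network:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT. LL is the low-pass approximation; LH, HL,
    and HH hold the horizontal, vertical, and diagonal detail."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # horizontal difference
    LL = (a[0::2, :] + a[1::2, :]) / 2.0
    LH = (a[0::2, :] - a[1::2, :]) / 2.0
    HL = (d[0::2, :] + d[1::2, :]) / 2.0
    HH = (d[0::2, :] - d[1::2, :]) / 2.0
    return LL, LH, HL, HH

# A flat image has no spatial detail: all three detail bands are zero.
LL, LH, HL, HH = haar_dwt2(np.full((8, 8), 5.0))
```

Applying the transform recursively to LL yields the multiscale decomposition the abstract describes.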
- MVCNN: A Deep Learning-Based Ocean–Land Waveform Classification Network
for Single-Wavelength LiDAR Bathymetry-
Authors:
Gang Liang;Xinglei Zhao;Jianhu Zhao;Fengnian Zhou;
Pages: 656 - 674 Abstract: Ocean–land waveform classification (OLWC) is crucial in airborne LiDAR bathymetry (ALB) data processing and can be used for ocean–land discrimination and waterline extraction. However, the accuracy of OLWC for single-wavelength ALB systems is low given the nature of the green laser waveform in complex environments. Thus, in this article, a deep learning-based OLWC method called the multichannel voting convolutional neural network (MVCNN) is proposed based on the comprehensive utilization of multichannel green laser waveforms. First, multiple green laser waveforms collected in deep and shallow channels are input into a multichannel input module. Second, a one-dimensional (1-D) convolutional neural network (CNN) structure is proposed to handle each green channel waveform. Finally, a multichannel voting module is introduced to perform majority voting on the predicted categories derived by each 1-D CNN model and output the final waveform category (i.e., ocean or land waveforms). The proposed MVCNN is evaluated using the raw green laser waveforms collected by Optech coastal zone mapping and imaging LiDAR (CZMIL). Results show that the overall accuracy, kappa coefficient, and standard deviation of the overall accuracy for the OLWC utilizing green laser waveforms based on MVCNN can reach 99.41%, 0.9800, and 0.03%, respectively. Results further show that the classification accuracy of the MVCNN improves gradually as the number of laser channels increases. The multichannel voting module can select the correct waveform category from the deep and shallow channels. The proposed MVCNN is highly accurate and robust, and it is only slightly affected by aquaculture rafts and the merging effect of green laser waveforms in very shallow waters. Thus, the use of MVCNN in OLWC for single-wavelength ALB systems is recommended.
In addition, this article explores the relationships between green deep and shallow channel waveforms based on the analysis of CZMIL waveform data. PubDate:
2023
Issue No: Vol. 16 (2023)
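The final stage of MVCNN is a majority vote over the per-channel 1-D CNN predictions. A minimal sketch of that voting module, assuming binary labels as in the paper (ocean vs. land):

```python
import numpy as np

def majority_vote(channel_preds):
    """channel_preds: (n_channels, n_waveforms) integer labels,
    0 = ocean, 1 = land. Each channel's 1-D CNN casts one vote per
    waveform; the waveform takes the majority label."""
    votes = np.asarray(channel_preds)
    land_votes = votes.sum(axis=0)   # labels are 0/1, so the sum counts land votes
    return (2 * land_votes > votes.shape[0]).astype(int)

# Three channels disagree on the first two waveforms; the majority decides.
preds = majority_vote([[1, 0, 0],
                       [1, 1, 0],
                       [0, 1, 0]])
```

With an odd number of channels there are no ties, which is one reason voting over several deep/shallow channels is robust to a single noisy channel.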
- LRAD-Net: An Improved Lightweight Network for Building Extraction From
Remote Sensing Images-
Authors:
Jiabin Liu;Huaigang Huang;Hanxiao Sun;Zhifeng Wu;Renbo Luo;
Pages: 675 - 687 Abstract: Building extraction methods for remote sensing images that use deep learning algorithms can overcome the low efficiency and poor performance of traditional feature extraction methods. Although some recently proposed semantic segmentation networks achieve good segmentation performance in extracting buildings, their huge parameter counts and large amount of calculation are great obstacles in practical application. Therefore, we propose a lightweight network (named LRAD-Net) for building extraction from remote sensing images. LRAD-Net can be divided into two stages: encoding and decoding. In the encoding stage, the lightweight RegNet network with 600 million flops (600 MF) is selected as the feature extraction backbone through extensive experimental comparisons. Then, a multiscale depthwise separable atrous spatial pyramid pooling structure is proposed to extract more comprehensive and important details of buildings. In the decoding stage, the squeeze-and-excitation attention mechanism is applied innovatively to redistribute the channel weights before fusing feature maps with low-level details and high-level semantics, thus enriching the local and global information of the buildings. Furthermore, a lightweight residual block with polarized self-attention is proposed; it can incorporate features extracted from the spatial and channel dimensions with a small number of parameters and improve the accuracy of recovering building boundaries. To verify the effectiveness and robustness of the proposed LRAD-Net, we conduct experiments on a self-annotated UAV dataset with higher resolution and three public datasets (the WHU aerial image dataset, the WHU satellite image dataset, and the Inria aerial image dataset).
Compared with several representative networks, LRAD-Net can extract more building details and has fewer parameters, faster computing speed, and stronger generalization ability, which can improve the training speed of the network without affecting the building extraction effect and accuracy. PubDate:
2023
Issue No: Vol. 16 (2023)
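The squeeze-and-excitation (SE) mechanism used in the decoding stage reweights channels via a global average pool followed by a small bottleneck. A numpy sketch with placeholder weights (in LRAD-Net these would be learned):

```python
import numpy as np

def se_block(feat, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature map: global average
    pool, bottleneck FC-ReLU-FC, sigmoid gates, channel-wise rescaling.
    w1 has shape (C//r, C) and w2 shape (C, C//r) for reduction ratio r."""
    C = feat.shape[0]
    z = feat.reshape(C, -1).mean(axis=1)        # squeeze: one scalar per channel
    h = np.maximum(w1 @ z, 0.0)                 # bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))         # sigmoid gates in (0, 1)
    return feat * s[:, None, None]              # excite: rescale each channel

rng = np.random.default_rng(1)
feat = rng.random((16, 8, 8))                   # non-negative activations
out = se_block(feat, rng.standard_normal((4, 16)), rng.standard_normal((16, 4)))
```

Because the gates lie in (0, 1), SE can only attenuate channels, which is exactly the "redistribute the channel weights before fusing" role described above.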
- Hypergraph-Enhanced Textual-Visual Matching Network for Cross-Modal Remote
Sensing Image Retrieval via Dynamic Hypergraph Learning-
Authors:
Fanglong Yao;Xian Sun;Nayu Liu;Changyuan Tian;Liangyu Xu;Leiyi Hu;Chibiao Ding;
Pages: 688 - 701 Abstract: Cross-modal remote sensing (RS) image retrieval aims to retrieve RS images using other modalities (e.g., text) and vice versa. The relationship between objects in an RS image is complex, i.e., the distribution of multiple types of objects is uneven, which makes matching with the query text inaccurate and thus restricts the performance of RS image retrieval. Previous methods generally focus on the feature matching between RS image and text and rarely model the relationships between features of the RS image. A hypergraph (in which a hyperedge connects multiple vertices) is an extended structure of a regular graph and has attracted extensive attention for its superiority in representing high-order relationships. Inspired by the advantages of the hypergraph, in this work, a hypergraph-enhanced textual-visual matching network (HyperMatch) is proposed to circumvent the inaccurate matching between the RS image and query text. Specifically, a multiscale RS image hypergraph network is designed to model the complex relationships between features of the RS image, so that valuable and redundant features are grouped into different hyperedges. In addition, a hypergraph construction and update method for an RS image is designed. When constructing a hypergraph, the features of an RS image serve as vertices, and cosine similarity is the metric used to measure the correlation between them. Vertex and hyperedge attention mechanisms are introduced for the dynamic update of a hypergraph to realize the alternating update of vertices and hyperedges. Quantitative and qualitative experiments on the RSICD and RSITMD datasets verify the effectiveness of the proposed method in cross-modal remote sensing image retrieval. PubDate:
2023
Issue No: Vol. 16 (2023)
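The hypergraph construction described above (features as vertices, cosine similarity as the correlation metric) can be sketched with a simple k-nearest-neighbor rule; the paper's actual grouping and update rules are learned, so the fixed k here is an assumption for illustration:

```python
import numpy as np

def build_hyperedges(features, k=3):
    """Vertices are feature vectors; each vertex spawns one hyperedge
    joining it with its k most cosine-similar vertices. Returns the
    (n_vertices, n_hyperedges) incidence matrix H."""
    X = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sim = X @ X.T                               # cosine similarity matrix
    n = len(X)
    H = np.zeros((n, n))
    for i in range(n):
        members = np.argsort(-sim[i])[:k + 1]   # the vertex itself plus k neighbors
        H[members, i] = 1.0
    return H

rng = np.random.default_rng(2)
H = build_hyperedges(rng.standard_normal((10, 5)), k=3)
```

The incidence matrix H is what hypergraph convolution and the vertex/hyperedge attention updates would operate on.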
- Compression Supports Spatial Deep Learning
-
Authors:
Gabriel Dax;Srilakshmi Nagarajan;Hao Li;Martin Werner;
Pages: 702 - 713 Abstract: In the last decades, the domain of spatial computing has become more and more data driven, especially when using remote sensing-based images. Furthermore, satellites provide huge amounts of images, so the number of available datasets is increasing. This leads to large storage requirements and high computational costs when estimating labels for scene classification problems using deep learning, which consumes and blocks important hardware resources, energy, and time. In this article, the use of aggressive compression algorithms is discussed to cut the wasted transmission and resources for selected land cover classification problems. To compare the different compression methods and the classification performance, the satellite image patches are compressed by two methods. The first method is quantization of the data to reduce the bit depth. The second is lossy and lossless compression of images with image file formats, such as JPEG and TIFF. The classification performance is evaluated with convolutional neural networks (CNNs) like VGG16. The experiments indicate that not all remote sensing image classification problems improve their performance when taking the full available information into account. Moreover, compression can set the focus on specific image features, leading to lower storage needs and a reduction in computing time with comparably small costs in terms of quality and accuracy. All in all, quantization and embedding into file formats do support CNNs in estimating the labels of images by strengthening the features. PubDate:
2023
Issue No: Vol. 16 (2023)
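The first compression method above, bit-depth quantization, amounts to discarding low-order bits. A minimal sketch for 8-bit imagery:

```python
import numpy as np

def quantize_bit_depth(img, bits):
    """Aggressive quantization: keep only the top `bits` bits of an
    8-bit image, leaving 2**bits distinct gray levels."""
    shift = 8 - bits
    return (img.astype(np.uint8) >> shift) << shift

ramp = np.arange(256, dtype=np.uint8)
q3 = quantize_bit_depth(ramp, bits=3)   # only 8 gray levels survive
```

Quantizing to 3 bits cuts the information per pixel while preserving coarse structure, which is the regime in which the article reports classification accuracy can remain comparable.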
- DARN: Distance Attention Residual Network for Lightweight Remote-Sensing
Image Superresolution-
Authors:
Qingjian Wang;Sen Wang;Mingfang Chen;Yang Zhu;
Pages: 714 - 724 Abstract: The application of single-image superresolution (SISR) in remote sensing is of great significance. Although the state-of-the-art convolutional neural network (CNN)-based SISR methods have achieved excellent results, the large model and slow speed make it difficult to deploy in real remote sensing tasks. In this article, we propose a compact and efficient distance attention residual network (DARN) to achieve a better compromise between model accuracy and complexity. The distance attention residual connection block (DARCB), the core component of the DARN, uses multistage feature aggregation to learn more accurate feature representations. The main branch of the DARCB adopts a shallow residual block (SRB) to flexibly learn residual information to ensure the robustness of the model. We also propose a distance attention block (DAB) as a bridge between the main branch and the side branch of the DARCB; the DAB can effectively alleviate the loss of detail features in the deep CNN extraction process. Experimental results on two remote sensing and five super-resolution benchmark datasets demonstrate that the DARN achieves a better compromise than existing methods in terms of performance and model complexity. In addition, the DARN achieves the optimal solution compared with the state-of-the-art lightweight remote sensing SISR method in terms of parameter amount, computation amount, and inference speed. Our code will be available at https://github.com/candygogogogo/DARN. PubDate:
2023
Issue No: Vol. 16 (2023)
- Hyperspectral Compressive Image Reconstruction With Deep Tucker
Decomposition and Spatial–Spectral Learning Network-
Authors:
Hao Xiang;Baozhu Li;Le Sun;Yuhui Zheng;Zebin Wu;Jianwei Zhang;Byeungwoo Jeon;
Pages: 725 - 737 Abstract: Hyperspectral compressive imaging has taken advantage of compressive sensing theory to capture spectral information of the dynamic world in recent decades, where an optical encoder is employed to compress high dimensional signals into a single 2-D measurement. The core issue is how to reconstruct the underlying hyperspectral image (HSI). Although deep neural network methods have achieved much success in compressed sensing image reconstruction in recent years, they still have some unsolved issues, such as the tradeoff between performance and efficiency and the accurate exploitation of cubic structure information. In this article, we propose a deep Tucker decomposition and spatial–spectral learning network (DS-net) to learn the tensor low-rank structure features and spatial–spectral correlation of HSI to promote reconstruction quality. Inspired by tensor decomposition, we first construct a deep Tucker decomposition module to learn the principal components from different modes of the image features. Then, we cascade a series of decomposition modules to learn multihierarchical features. Furthermore, to jointly capture the spatial–spectral correlation of HSI, we propose a spatial–spectral correlation learning module in a U-net structure for more robust reconstruction performance. Finally, experimental results on both synthetic and real datasets demonstrate the superiority of the proposed method compared to several state-of-the-art methods in quantitative assessment and visual effects. PubDate:
2023
Issue No: Vol. 16 (2023)
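The classical (non-deep) Tucker decomposition that inspires the DS-net module can be computed by higher-order SVD: the factor matrix of each mode is the truncated left singular basis of that mode's unfolding. A minimal numpy sketch:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` first and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Classical HOSVD (a Tucker decomposition): per-mode truncated SVD
    of the unfoldings gives orthonormal factor matrices; the core is T
    contracted with the factor transposes along each mode."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, factors

rng = np.random.default_rng(3)
T = rng.standard_normal((4, 5, 6))          # e.g., a tiny HSI cube (H, W, bands)
core, factors = hosvd(T, ranks=(2, 3, 3))
```

Truncating the multilinear ranks is what captures the "tensor low-rank structure" of an HSI cube; DS-net replaces these fixed SVD projections with learned ones.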
- Vision Transformer With Contrastive Learning for Remote Sensing Image
Scene Classification-
Authors:
Meiqiao Bi;Minghua Wang;Zhi Li;Danfeng Hong;
Pages: 738 - 749 Abstract: Remote sensing images (RSIs) are characterized by complex spatial layouts and ground object structures. The vision transformer (ViT) can be a good choice for scene classification owing to its ability to capture long-range interactive information between patches of input images. However, due to the lack of some inductive biases inherent to CNNs, such as locality and translation equivariance, ViT cannot generalize well when trained on insufficient amounts of data. Compared with training ViT from scratch, transferring a large-scale pretrained one is more cost-efficient with better performance even when the target data are small scale. In addition, the cross-entropy (CE) loss is frequently utilized in scene classification yet has low robustness to noise labels and poor generalization performance for different scenes. In this article, a ViT-based model in combination with supervised contrastive learning (CL) is proposed, named ViT-CL. For CL, supervised contrastive (SupCon) loss, which is developed by extending the self-supervised contrastive approach to the fully supervised setting, can explore the label information of RSIs in embedding space and improve the robustness to common image corruption. In ViT-CL, a joint loss function that combines CE loss and SupCon loss is developed to prompt the model to learn more discriminative features. Also, a two-stage optimization framework is introduced to enhance the controllability of the optimization process of the ViT-CL model. Extensive experiments on the AID, NWPU-RESISC45, and UCM datasets verified the superior performance of ViT-CL, with the highest accuracies of 97.42%, 94.54%, and 99.76% among all competing methods, respectively. PubDate:
2023
Issue No: Vol. 16 (2023)
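The joint objective above combines CE loss with the SupCon loss. A numpy sketch of both terms, with the mixing weight `lam` an assumed hyperparameter (the paper's weighting is not given here):

```python
import numpy as np

def ce_loss(logits, labels):
    """Standard cross-entropy over a batch (log-sum-exp stabilized)."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def supcon_loss(emb, labels, tau=0.1):
    """Supervised contrastive loss: for each anchor, average the
    log-probability of its same-label positives against all others."""
    z = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    sim = sim - sim.max(axis=1, keepdims=True)
    logp = sim - np.log((np.exp(sim) * not_self).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & not_self
    per_anchor = (logp * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return -per_anchor.mean()

def joint_loss(logits, emb, labels, lam=0.5):
    """CE plus lam * SupCon, as in a joint training objective."""
    return ce_loss(logits, labels) + lam * supcon_loss(emb, labels)

labels = np.array([0, 0, 1, 1])
clustered = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
scrambled = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
```

Embeddings clustered by label incur a much lower SupCon loss than scrambled ones, which is the pull-together/push-apart behavior that makes the learned features more discriminative.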
- A Building Shape Vectorization Hierarchy From VHR Remote Sensing Imagery
Combined DCNNs-Based Edge Detection and PCA-Based Corner Extraction-
Authors:
Xiang Wen;Xing Li;Wenquan Han;Erzhu Li;Wei Liu;Lianpeng Zhang;Yihu Zhu;Shengli Wang;Sibao Hao;
Pages: 750 - 761 Abstract: The automatic vectorization of building shape from very high resolution remote sensing imagery is fundamental in many fields, such as urban management and geodatabase updating. Recently, deep convolutional neural networks (DCNNs) have been successfully used for building edge detection, but the results are raster images rather than vectorized maps and do not meet the requirements of many applications. Although there are some algorithms for converting raster images into vector maps, such vector maps often have too many vector points and irregular shapes. This article proposed a building shape vectorization hierarchy, which combined DCNNs-based building edge detection and a corner extraction algorithm based on principal component analysis for rapidly extracting building corners from the building edges. Experiments on the Jiangbei New Area Buildings and Massachusetts Buildings datasets showed that compared with the state-of-the-art corner detectors, the building vector corners extracted using our proposed algorithm had fewer breakpoints and isolated points, and our building vector boundaries were more complete and regular. In addition, the building shapes extracted using our hierarchy achieved a relaxed overall accuracy 7.94% higher than the nonmaximum suppression method on the Massachusetts dataset. Overall, our proposed hierarchy is effective for building shape vectorization. PubDate:
2023
Issue No: Vol. 16 (2023)
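One way PCA can flag corners on a detected building edge (a sketch of the principle, not the paper's exact algorithm): in a local window of edge pixels, the ratio of the PCA eigenvalues is near zero on a straight segment (one dominant direction) and rises where two edge directions meet.

```python
import numpy as np

def pca_corner_score(points):
    """points: (N, 2) edge-pixel coordinates in a local window.
    Returns the ratio of the smaller to the larger PCA eigenvalue:
    ~0 for a straight edge, larger where two directions meet (corner)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    w = np.linalg.eigvalsh(cov)          # eigenvalues in ascending order
    return w[0] / (w[1] + 1e-12)

line = np.array([[i, 0] for i in range(10)], dtype=float)
corner = np.array([[i, 0] for i in range(10)] +
                  [[0, j] for j in range(1, 10)], dtype=float)
```

Thresholding this score along the edge chain keeps only corner candidates, which is why the resulting vector boundaries have far fewer points than a naive raster-to-vector conversion.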
- Neural Network Emulation of Synthetic Hyperspectral Sentinel-2-Like
Imagery With Uncertainty-
Authors:
Miguel Morata;Bastian Siegmann;Adrián Pérez-Suay;José Luis García-Soria;Juan Pablo Rivera-Caicedo;Jochem Verrelst;
Pages: 762 - 772 Abstract: Hyperspectral satellite imagery provides highly resolved spectral information for large areas and can provide vital information. However, only a few imaging spectrometer missions are currently in operation. Aiming to generate synthetic satellite-based hyperspectral imagery potentially covering any region, we explored the possibility of applying statistical learning, i.e., emulation. Based on the relationship of a Sentinel-2 (S2) scene and a hyperspectral HyPlant airborne image, this work demonstrates the possibility of emulating a hyperspectral S2-like image. We tested the role of different machine learning regression algorithms and varied the image-extracted training dataset size. We found superior performance of neural networks as opposed to the other algorithms when trained with large datasets (up to 100 000 samples). The developed emulator was then applied to the L2A (bottom-of-atmosphere reflectance) S2 subset, and the obtained S2-like hyperspectral reflectance scene was evaluated. The validation of emulated against reference spectra demonstrated the potential of the technique: $R^{2}$ values between 0.75 and 0.9 and NRMSE between 2% and 5% across the full 402–2356 nm range were obtained. Moreover, epistemic uncertainty is obtained using the dropout technique, revealing the spatial fidelity of the emulated scene. We obtained the highest SD values of 0.05 (CV of 8%) in clouds and values below 0.01 (CV of 7%) in vegetation land covers. Finally, the emulator was applied to an entire S2 tile (5490 × 5490 pixels) to generate a hyperspectral reflectance datacube with the texture of S2 (60 Gb, at a speed of 0.14 s/10000 pixels). As the emulator can convert any S2 tile into a hyperspectral image, such scenes offer a perspective on how future satellite imaging spectroscopy will look. PubDate:
2023
Issue No: Vol. 16 (2023)
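The dropout-based epistemic uncertainty mentioned above works by keeping dropout active at prediction time and reading the spread of repeated stochastic forward passes. A sketch with a single linear layer standing in for the trained emulator (the weights and band counts are placeholders, not the paper's model):

```python
import numpy as np

def mc_dropout_predict(x, W, passes=200, p=0.2, seed=0):
    """Monte Carlo dropout: run several stochastic forward passes with
    dropout on, and return the per-output mean prediction and std.
    The std serves as an epistemic-uncertainty estimate."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(passes):
        mask = rng.random(x.shape) >= p            # drop inputs with prob p
        preds.append((x * mask / (1.0 - p)) @ W)   # inverted-dropout scaling
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(4)
x = rng.standard_normal(12)          # one multispectral spectrum (12 bands, say)
W = rng.standard_normal((12, 50))    # placeholder map to a 50-band hyperspectral output
mean, std = mc_dropout_predict(x, W)
```

Applied per pixel, the std map is what reveals where the emulated scene is trustworthy (low std over vegetation) and where it is not (high std over clouds).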
- PCL–PTD Net: Parallel Cross-Learning-Based Pixel Transferred
Deconvolutional Network for Building Extraction in Dense Building Areas With Shadow-
Authors:
Wuttichai Boonpook;Yumin Tan;Kritanai Torsri;Patcharin Kamsing;Peerapong Torteeka;Attawut Nardkulpat;
Pages: 773 - 786 Abstract: Urban building segmentation from remotely sensed imagery is challenging because there usually exists a variety of building features. Furthermore, very high spatial resolution imagery can capture many details of urban buildings, such as styles, small gaps among buildings, and building shadows. Hence, achieving satisfactory accuracy in detecting and extracting urban features from highly detailed images remains difficult. Deep learning semantic segmentation using baseline networks works well on building extraction; however, their ability to extract buildings in shadowed areas, with unclear building features, and across narrow gaps among buildings in dense building zones is still limited. In this article, we propose the parallel cross-learning-based pixel transferred deconvolutional network (PCL–PTD net), which is then used to segment urban buildings from aerial photographs. The proposed method is evaluated and intercompared with traditional baseline networks. PCL–PTD net is composed of a parallel network, cross-learning functions, a residual unit in the encoder part, and PTD in the decoder part. It is applied to three datasets (the Inria aerial dataset, the international society for photogrammetry and remote sensing Potsdam dataset, and a UAV building dataset) to evaluate its accuracy and robustness. As a result, we found that PCL–PTD net can improve the learning capacity of supervised learning models in differentiating buildings in dense areas and extracting buildings covered by shadows. Compared to the baselines, the proposed network shows superior performance to all eight networks (SegNet, U-net, pyramid scene parsing network, PixelDCL, DeeplabV3+, U-Net++, context feature enhancement network, and improved ResU-Net). The experiments on the three datasets also demonstrate the ability of the proposed framework and indicate its performance. PubDate:
2023
Issue No: Vol. 16 (2023)
- Joint Radio Frequency Interference and Deceptive Jamming Suppression
Method for Single-Channel SAR via Subpulse Coding-
Authors:
Guoli Nie;Guisheng Liao;Cao Zeng;Xuepan Zhang;Dongchen Li;
Pages: 787 - 798 Abstract: The radio frequency interference (RFI) and deceptive jamming (DJ), as two major external threats to synthetic aperture radar (SAR) systems, can greatly reduce the readability and veracity of the obtained SAR images. Current interference suppression methods have no capability to suppress both of them. In this article, a subpulse coding (SPC)-based joint RFI and DJ suppression method for single-channel SAR systems is proposed. By making full use of the elaborate coding scheme and subpulse transmitting mode, SPC can effectively suppress RFIs in the Doppler domain. On the other hand, after the decoding process, by utilizing the subpulse digital beamforming (DBF) technology with the well-designed DBF weight vectors, DJs can also be suppressed greatly. Numerical experiments verify the effectiveness of the proposed method. PubDate:
2023
Issue No: Vol. 16 (2023)
- Pansharpening Based on Adaptive High-Frequency Fusion and Injection
Coefficients Optimization-
Authors:
Yong Yang;Chenxu Wan;Shuying Huang;Hangyuan Lu;Weiguo Wan;
Pages: 799 - 811 Abstract: The purpose of pansharpening is to fuse a multispectral (MS) image with a panchromatic (PAN) image to generate a high spatial-resolution multispectral (HRMS) image. However, traditional pansharpening methods do not adequately take into consideration the information of MS images, resulting in inaccurate detail injection and spectral distortion in the pansharpened results. To solve this problem, a new pansharpening approach based on adaptive high-frequency fusion and injection coefficients optimization is proposed, which can obtain an accurate injected high-frequency component (HFC) and injection coefficients. First, we propose a multilevel sharpening model to enhance the spatial information of the MS image, and then extract the HFCs from the sharpened MS image and the PAN image. Next, an adaptive fusion strategy is designed to obtain the accurate injected HFC by calculating the similarity and difference of the extracted HFCs. Regarding the injection coefficients, we propose an injection coefficients optimization scheme based on the spatial and spectral relationship between the MS image and the PAN image. Finally, the HRMS image is obtained by injecting the fused HFC into the upsampled MS image with the injection coefficients. Experiments with simulated and real data are performed on the IKONOS and Pléiades datasets. Both subjective and objective results indicate that our method performs better than state-of-the-art pansharpening approaches. PubDate:
2023
Issue No: Vol. 16 (2023)
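The final injection step has a simple generic form: extract a high-frequency component (HFC) from the PAN image and add it, scaled by per-band coefficients, to the upsampled MS image. The sketch below uses a fixed mean-filter HFC and fixed gains; the paper's adaptive fusion and optimized coefficients are replaced by these assumptions for illustration:

```python
import numpy as np

def inject_details(ms_up, pan, g, k=5):
    """Generic detail-injection pansharpening: low-pass the PAN image
    with a k x k mean filter, take the residual as the HFC, and inject
    it into each upsampled MS band with per-band gains g."""
    pad = np.pad(pan.astype(float), k // 2, mode='edge')
    H, W = pan.shape
    low = np.empty((H, W))
    for i in range(H):                       # naive sliding mean filter
        for j in range(W):
            low[i, j] = pad[i:i + k, j:j + k].mean()
    hfc = pan - low
    return ms_up + g[:, None, None] * hfc[None, :, :]

# A constant PAN image carries no high-frequency detail, so MS is unchanged.
ms = np.ones((4, 16, 16))
pan_flat = np.full((16, 16), 7.0)
fused = inject_details(ms, pan_flat, g=np.array([0.5, 0.5, 0.5, 0.5]))
```

Choosing the gains g per band is exactly where the proposed injection coefficients optimization departs from fixed schemes such as this one.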
- Text-Image Matching for Cross-Modal Remote Sensing Image Retrieval via
Graph Neural Network-
Authors:
Hongfeng Yu;Fanglong Yao;Wanxuan Lu;Nayu Liu;Peiguang Li;Hongjian You;Xian Sun;
Pages: 812 - 824 Abstract: The rapid development of remote sensing (RS) technology has produced massive numbers of images, which makes it difficult to obtain interpretation results by manual screening. Therefore, researchers began to develop automatic retrieval methods for RS images. In recent years, cross-modal RS image retrieval based on query text has attracted many researchers because of its flexibility and has become a new research trend. However, the primary problem faced is that the information of the query text and the RS image is not aligned. For example, RS images often have multiscale and multiobject attributes and are rich in information, while the query text contains only a few words, and its information is scarce. Recently, the graph neural network (GNN) has shown its potential in many tasks with its powerful feature representation ability. Therefore, based on the GNN, this article proposes a new cross-modal RS feature matching network, which can avoid the degradation of retrieval performance caused by information misalignment by learning the feature interactions within the query text and the RS image, respectively, and modeling the feature association between the two modalities. Specifically, to fuse the within-modal features, text and RS image graph modules are designed based on the GNN. In addition, to effectively match the query text and RS image, an image-text association module is constructed, combined with the multihead attention mechanism, to focus on the parts of the text related to the RS image. The experiments on two public standard datasets verify the competitive performance of the proposed model. PubDate:
2023
Issue No: Vol. 16 (2023)
- An Analysis of Environmental Effect on VIIRS Nighttime Light Monthly
Composite Data at Multiple Scales in China-
Authors:
Mengxin Yuan;Xi Li;Deren Li;Ji Wu;
Pages: 825 - 840 Abstract: Nighttime light (NTL) can provide valuable information about human activities. The temporal NTL variation has been previously explored, but the effect of environmental factors has not been fully considered. This article focused on the environmental effect on NTL time series in China, using the visible infrared imaging radiometer suite (VIIRS) monthly products, the Earth Observations Group (EOG) product, and the Black Marble product, from January 2014 to December 2020. It was found that the NTL variations were statistically correlated with aerosols, vegetation, and surface albedo: negatively with aerosols and vegetation, and positively with surface albedo. Among the environmental factors, aerosol optical depth was important in explaining the NTL variation. In 79% of urban areas in China, the adjusted R-squared of NTL and the three factors surpassed that of NTL and the two factors (vegetation and surface albedo) based on the EOG product; based on the Black Marble product, the same held in 60% of urban areas. Both the EOG and Black Marble monthly products were affected by aerosols, surface albedo, and vegetation at multiple scales. However, the Black Marble product was less affected by aerosols than the EOG product. This article suggests that the environmental effect is crucial in the NTL variation. Understanding NTL temporal variation can improve the accuracy of time series VIIRS imagery for socioeconomic applications. PubDate:
2023
Issue No: Vol. 16 (2023)
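The two- vs. three-factor comparison above rests on the adjusted R-squared, which penalizes predictors that do not genuinely improve the fit. A sketch with synthetic stand-ins for the three environmental factors (variable names and the data-generating model are illustrative only):

```python
import numpy as np

def adjusted_r2(y, X):
    """OLS fit of y on X (with intercept) and the adjusted R-squared:
    1 - (1 - R^2) * (n - 1) / (n - p - 1) for n samples, p predictors."""
    n, p = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    ss_res = ((y - A @ beta) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Synthetic NTL series driven mainly by aerosol optical depth (aod).
rng = np.random.default_rng(5)
aod, ndvi, albedo = rng.random((3, 60))
ntl = 2.0 * aod + 1.0 * ndvi - 0.5 * albedo + 0.05 * rng.standard_normal(60)
three = adjusted_r2(ntl, np.column_stack([aod, ndvi, albedo]))
two = adjusted_r2(ntl, np.column_stack([ndvi, albedo]))
```

When the third factor carries real signal, the three-factor adjusted R-squared exceeds the two-factor one despite the added-predictor penalty, which is the comparison the article runs per urban area.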
- Global Sea Surface Height Measurement From CYGNSS Based on Machine
Learning-
Authors:
Yun Zhang;Qi Lu;Qin Jin;Wanting Meng;Shuhu Yang;Shen Huang;Yanling Han;Zhonghua Hong;Zhansheng Chen;Weiliang Liu;
Pages: 841 - 852 Abstract: The Cyclone Global Navigation Satellite System (CYGNSS), launched in recent years, provides a large amount of spaceborne GNSS Reflectometry data with all-weather, global coverage, high space-time resolution, and multiple signal sources, which provides new opportunities for machine learning (ML) studies of sea surface height (SSH) inversion. This article proposes for the first time two different CYGNSS SSH inversion models based on two widely used ML methods: the back propagation (BP) neural network and the convolutional neural network (CNN). The SSH calculated by using the Danmarks Tekniske Universitet (DTU) 18 ocean wide mean SSH (MSSH) model (DTU18) with the DTU global ocean tide model is used for verification. Under the strategy of independent analysis of data from different signal sources, the mean absolute errors (MAEs) of the BP and CNN models' inversion results at specular points during 7 days are 1.04 m and 0.63 m, respectively. The CLS 2015 product and Jason-3 data were also used for further validation. In addition, the generalization ability of the model was evaluated for 6-day and 13-day training sets. For the 6-day training set, the prediction MAE of the BP model is 11.59 m and 5.90 m for PRN2 and PRN4, and the MAE of the CNN model is 1.37 m and 0.97 m for PRN2 and PRN4, respectively. The results show that the BP and CNN inversions are in high agreement with each product, and the CNN model has relatively higher accuracy and better generalization ability. PubDate:
2023
Issue No: Vol. 16 (2023)
- Sensitivity Analysis of Microwave Spectrometer for Atmospheric Temperature
and Humidity Sounding on the New Generation Fengyun Satellite-
Authors:
Wenming He;Zhenzhan Wang;Wenyu Wang;Zhou Zhang;
Pages: 853 - 865 Abstract: The vertical profiles, spatiotemporal distribution, and trends of temperature and humidity in the middle atmosphere are significant for numerical weather prediction and the analysis of global climate change. To better design and apply spaceborne microwave spectrometers on the new-generation Fengyun satellites, the sensitivities of spectrometers at 22.235, 50–60, 118.75, 183.31, 325.153, 380.33, 424.77, 448.0, and 556.94 GHz are analyzed. Qpack2, included in the Atmospheric Radiative Transfer Simulator, is used. The results show that the retrieval accuracy of the 50–60 GHz spectrometer is clearly better than that of the other spectrometers, and its effective detection height (EDH) is the highest, reaching 0.233 hPa. Humidity profiles at pressures greater than 500 hPa are well detected by the seven channels of the Humidity and Temperature Profiler or by a 22–32 GHz spectrometer. The retrieval accuracy is better than 6%, which greatly improves the retrieval of humidity profiles in the lower troposphere over the sea. In addition, this frequency band is not affected by errors in sea surface temperature or wind speed. Humidity profiles over sea and land in the middle atmosphere, under both clear-sky and cloudy-sky conditions, are well detected by the 183.31 and 556.94 GHz spectrometers. The EDH can be raised to 1.14 hPa by the 556.94 GHz spectrometer. In the design and application of future spaceborne microwave spectrometers for temperature and humidity profile detection, the spectrometers above are good candidates, and their parameter configurations can serve as a reference. PubDate:
2023
Issue No: Vol. 16 (2023)
- Foreword
-
Authors:
Jón Atli Benediktsson;Melba Crawford;John Kerekes;Jie Shan;
Pages: 866 - 867 Abstract: The eleven papers in this special section serve as a tribute to Professor David A. Landgrebe, who is known for his work on the fundamentals of multispectral image processing and analysis. The papers are grouped into three categories: historical and future developments, methodological advancements, and survey and review. PubDate:
2023
Issue No: Vol. 16 (2023)
- Assessing the Effects of Fuel Moisture Content on the 2018 Megafires in
California-
Authors:
Zhenyu Kang;Xingwen Quan;Gengke Lai;
Pages: 868 - 877 Abstract: In 2018, record megafire episodes occurred in California, causing numerous civilian deaths and extensive damage. As an important part of the "fire environment triangle," the fuel moisture content (FMC) of both live (LFMC) and dead (DFMC) vegetation is widely accepted as an important driver of wildfire ignition and spread, but its effect on the 2018 California megafires was less explored. Here, we explore and compare the effects of LFMC and DFMC on the 2018 California megafires, highlighting the role of different types of FMC in megafire risk assessment. The LFMC was collected from the global LFMC product. We used three indices from the Canadian Forest Fire Weather Index System as proxies for DFMC: the fine fuel moisture code, the duff moisture code (DMC), and the drought code. We analyzed the long-term series (2001–2018) of these four indices in California to test whether they were indicative of megafire occurrence, and which index was the most powerful driver of the 2018 megafires. The results show that all these indices were correlated with fires in California. The LFMC showed the highest correlation with fire occurrence between 2001 and 2018, whereas the DMC played the major role in driving the 2018 megafires. This study suggests that LFMC and DMC should be carefully considered in future operational fire risk assessments for megafire prescription, suppression, and response. PubDate:
2023
Issue No: Vol. 16 (2023)
- Tropical Cyclone Wind Direction Retrieval From Dual-Polarized SAR Imagery
Using Histogram of Oriented Gradients and Hann Window Function-
Authors:
Weicheng Ni;Ad Stoffelen;Kaijun Ren;
Pages: 878 - 888 Abstract: Accurate knowledge of wind direction plays a critical role in ocean surface wind retrieval and tropical cyclone (TC) research. Under TC conditions, apparent wind streaks induced by marine atmospheric boundary layer rolls can be detected in VV- and VH-polarized synthetic aperture radar (SAR) images. This suggests that, though relatively noisy, VH signals may help enhance the wind-streak orientation information contained in VV signals and thus yield a more accurate wind direction estimate. This study proposes a new method for wind direction retrieval from TC SAR images. Unlike conventional approaches, which calculate wind directions from single-polarization imagery, the method combines VV and VH signals to obtain continuous wind direction maps across moderate and extreme wind speed regimes. The technique is built on the histogram of oriented gradients descriptor and the Hann window function, accounting for the contribution of neighboring wind-streak information (weighted by separation distance). As a case study, the wind directions over four TCs (Karl, Maria, Douglas, and Larry) are derived and verified against estimates from simultaneous dropsonde, ASCAT, and ECMWF winds, showing promising consistency. Furthermore, a more comprehensive statistical analysis over 14 SAR images reveals that the obtained wind directions have a correlation coefficient of 0.98, a bias of −6.07$^{\circ}$, and an RMSD of 20.24$^{\circ}$, superior to estimates from VV (0.97, −7.84$^{\circ}$, and 24.23$^{\circ}$, resp.) and VH signals (0.96, −10.46$^{\circ}$, and 29.53$^{\circ}$, resp.). These encouraging results demonstrate the feasibility of the technique for SAR wind direction retrieval. PubDate:
2023
Issue No: Vol. 16 (2023)
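The core of the above technique can be sketched as follows: build a histogram of gradient orientations over an image patch, with each pixel's vote weighted by its gradient magnitude times a 2-D Hann window so the patch centre dominates. This is an illustrative sketch, not the authors' exact pipeline; the bin count, window shape, and synthetic streak pattern are assumptions.

```python
import numpy as np

def dominant_orientation(img, n_bins=36):
    """Dominant gradient orientation (degrees in [0, 180)) via a Hann-weighted HOG."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # Fold orientation to [0, 180): wind streaks are 180-degree ambiguous.
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hann2d = np.outer(np.hanning(img.shape[0]), np.hanning(img.shape[1]))
    w = mag * hann2d                      # centre-weighted gradient magnitudes
    hist, edges = np.histogram(ang, bins=n_bins, range=(0, 180), weights=w)
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])  # centre of the strongest bin

# Synthetic "wind streaks": sinusoidal bands whose gradient points at 30 degrees.
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
theta = np.radians(30.0)
img = np.sin(0.3 * (xx * np.cos(theta) + yy * np.sin(theta)))
est = dominant_orientation(img)
```

In the paper the VV and VH contributions are combined before histogramming; here a single synthetic channel stands in for both.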
- A Discriminative Feature Learning Approach With Distinguishable Distance
Metrics for Remote Sensing Image Classification and Retrieval-
Authors:
Zhiqi Zhang;Wen Lu;Xiaoxiao Feng;Jinshan Cao;Guangqi Xie;
Pages: 889 - 901 Abstract: The fast data acquisition rate, due to the shorter revisit periods and wider observation coverage of satellites, produces large amounts of remote sensing images every day. This raises the challenge of accurately searching for images with visual content similar to a query image. Content-based image retrieval (CBIR) is a solution to this challenge; its performance heavily depends on the effectiveness of the image representation features and the similarity evaluation metrics. Ideal image feature representations have dispersed interclass and compact intraclass distributions. However, the neural networks employed by many CBIR methods are trained with cross-entropy loss, which does not directly optimize metrics that evaluate interclass variance over intraclass variance; hence, their feature representations are suboptimal. Meanwhile, the traditional distance metrics used by many CBIR methods cannot index the similarity of feature representations well in high-dimensional space. For better CBIR performance, we propose a discriminative feature learning approach with distinguishable distance metrics for remote sensing image classification and retrieval. By balancing the diagonal and nondiagonal elements of the within-class scatter matrix of deep linear discriminant analysis, our proposed loss function, balanced deep linear discriminant analysis, better optimizes the Rayleigh–Ritz quotient, which measures interclass variance over intraclass variance. In addition, the proposed distance metric, reciprocal exponential distance (RED), is more capable of maintaining distance contrast in high dimensionality and therefore better indexes similarity for high-dimensional feature representations. Both visual interpretation and quantitative metrics from extensive experiments demonstrate the effectiveness of our approach. PubDate:
2023
Issue No: Vol. 16 (2023)
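The Rayleigh–Ritz quotient the abstract refers to can be made concrete with the classical scatter matrices: it is large along directions where classes are dispersed from each other yet internally compact. The sketch below computes that quotient directly (illustrative only; the paper's balanced deep LDA loss and RED metric are not reproduced here, and the toy data are assumptions).

```python
import numpy as np

def scatter_matrices(X, labels):
    """Within-class (S_w) and between-class (S_b) scatter matrices."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)            # spread inside each class
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)          # spread between class means
    return Sw, Sb

def rayleigh_quotient(w, Sw, Sb):
    """w^T S_b w / w^T S_w w: interclass over intraclass variance along w."""
    return float(w @ Sb @ w) / float(w @ Sw @ w)

rng = np.random.default_rng(1)
# Two compact, well-separated classes in 2-D.
X = np.vstack([rng.normal([0, 0], 0.1, (100, 2)),
               rng.normal([5, 0], 0.1, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)
Sw, Sb = scatter_matrices(X, labels)
q_good = rayleigh_quotient(np.array([1.0, 0.0]), Sw, Sb)  # separating axis
q_bad = rayleigh_quotient(np.array([0.0, 1.0]), Sw, Sb)   # uninformative axis
```

A loss that pushes this quotient up, rather than cross-entropy, is what yields the dispersed-interclass, compact-intraclass embeddings the abstract describes.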
- Lightweight Reconstruction of Urban Buildings: Data Structures,
Algorithms, and Future Directions-
Authors:
Vivek Kamra;Prachi Kudeshia;Somaye ArabiNaree;Dong Chen;Yasushi Akiyama;Jiju Peethambaran;
Pages: 902 - 917 Abstract: Commercial buildings as well as residential houses are core structures of any modern-day urban or semiurban area. Consequently, 3-D models of urban buildings are of paramount importance to a majority of digital urban applications, such as city planning, 3-D mapping and navigation, video games and movies, and construction progress tracking, among others. However, current studies suggest that existing 3-D modeling approaches often involve high computational cost and large storage volumes for processing the geometric details of buildings. Therefore, it is essential to generate concise digital representations of urban buildings from 3-D measurements or images so that the acquired information can be efficiently utilized for various urban applications. Such concise representations, often referred to as "lightweight" models, strive to capture the details of physical objects with less computation and storage. Furthermore, lightweight models consume less bandwidth for online applications and facilitate accelerated visualization. With many emerging digital urban infrastructure applications, lightweight reconstruction is poised to become a new area of research in the urban remote sensing community. We aim to provide a thorough review of data structures, representations, and state-of-the-art algorithms for lightweight 3-D urban reconstruction. We discuss the strengths and weaknesses of key lightweight urban reconstruction techniques, ultimately providing guidance on future research prospects to fulfill the pressing needs of urban applications. PubDate:
2023
Issue No: Vol. 16 (2023)
- Generalized Fine-Resolution FPAR Estimation Using Google Earth Engine:
Random Forest or Multiple Linear Regression-
Authors:
Yiting Wang;Yinggang Zhan;Guangjian Yan;Donghui Xie;
Pages: 918 - 929 Abstract: Accurate estimation of the fine-resolution fraction of absorbed photosynthetically active radiation (FPAR) is urgently needed for modeling land surface processes at finer scales. While traditional methods can hardly balance universality, efficiency, and accuracy, methods using coarse-resolution products as a reference are promising for operational fine-resolution FPAR estimation. However, current methods suffer from underrepresentation of the FPAR–reflectance relations within coarse-resolution FPAR products, particularly over densely vegetated areas. To overcome this limitation, this article develops an enhanced scaling method that introduces an outlier removal procedure, weights the selected samples, and models FPAR through weighted multiple linear regression (MLR) between the coarse-resolution FPAR product and the aggregated fine-resolution surface reflectance. A random forest regression (RFR) method was also implemented for comparison. Both methods were applied to Landsat 8 OLI and Moderate Resolution Imaging Spectroradiometer (MODIS) FPAR data on Google Earth Engine, and their performance was tested at a regional scale over an entire year. The results of the enhanced scaling method were closer to in situ measurements (RMSE = 0.058 and R2 = 0.768) and more consistent with the MODIS FPAR (RMSE = 0.091 and R2 = 0.894) than those of the RFR, particularly over densely vegetated pixels. This indicates that a well-designed, simple MLR-based method can outperform the more sophisticated RFR method. The enhanced scaling method is also less sensitive to the number of training samples than the RFR method. Moreover, both methods are insensitive to land cover maps, and their computational efficiency depends on the number of images to be estimated. PubDate:
2023
Issue No: Vol. 16 (2023)
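The weighted MLR step at the heart of the scaling method above reduces to solving the weighted normal equations (XᵀWX)b = XᵀWy, so that high-weight samples dominate the fit. A minimal sketch follows (illustrative; the paper's sample weighting and outlier screening are more involved, and the synthetic bands and coefficients are assumptions).

```python
import numpy as np

def weighted_mlr(X, y, w):
    """Solve the weighted least-squares normal equations with an intercept term."""
    Xa = np.column_stack([np.ones(len(X)), X])     # prepend intercept column
    W = np.diag(w)
    return np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)

rng = np.random.default_rng(3)
# Synthetic "aggregated reflectance" bands and an FPAR-like linear response.
X = rng.uniform(0, 1, (300, 3))
b_true = np.array([0.1, 0.5, -0.3, 0.8])           # intercept + 3 band coefficients
y = np.column_stack([np.ones(300), X]) @ b_true + 0.01 * rng.normal(size=300)
w = np.ones(300)                                    # uniform weights in this toy
b_hat = weighted_mlr(X, y, w)
```

In the paper, the weights come from sample quality after outlier removal; with uniform weights this collapses to ordinary least squares.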
- Dual-Resolution and Deformable Multihead Network for Oriented Object
Detection in Remote Sensing Images-
Authors:
Donghang Yu;Qing Xu;Xiangyun Liu;Haitao Guo;Jun Lu;Yuzhun Lin;Liang Lv;
Pages: 930 - 945 Abstract: Compared with general object detection, the scale variations, arbitrary orientations, and complex backgrounds of objects in remote sensing images make it more challenging to detect oriented objects. It is especially difficult to accurately detect the boundaries of oriented objects with large aspect ratios. Many methods show excellent performance on oriented object detection, most of them anchor-based. To close the performance gap between anchor-free and anchor-based algorithms, this article proposes an anchor-free algorithm called the dual-resolution and deformable multihead network (DDMNet) for oriented object detection. Specifically, a dual-resolution network with bilateral fusion is adopted to extract high-resolution feature maps that contain both spatial details and multiscale contextual information. Deformable convolution is then incorporated into the network to alleviate the misalignment problem of oriented object detection, and a dilated feature fusion module is applied to the deformable feature maps to expand their receptive fields. Finally, box boundary-aware vectors, instead of an angle, are used to represent the oriented bounding box, and a multihead network is designed to obtain robust predictions. DDMNet is a single-stage, anchor-free oriented object detection method and exhibits promising performance on challenging public benchmarks, obtaining 90.49%, 93.25%, and 78.66% mean average precision on the HRSC2016, FGSD2021, and DOTA datasets, respectively. In particular, DDMNet achieves 79.86% at mAP75 and 53.85% at mAP85 on the HRSC2016 dataset, outperforming the current state-of-the-art methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- Hyperspectral Anomaly Detection via Sparse Representation and
Collaborative Representation-
Authors:
Sheng Lin;Min Zhang;Xi Cheng;Kexue Zhou;Shaobo Zhao;Hai Wang;
Pages: 946 - 961 Abstract: Sparse representation (SR)-based and collaborative representation (CR)-based methods have proved effective for detecting anomalies in a hyperspectral image (HSI). Nevertheless, existing hyperspectral anomaly detection (HAD) methods generally consider only one of them, failing to exploit both to further improve detection performance. To address this issue, a novel HAD method integrating both SR and CR is proposed in this article. Specifically, an SR model, whose overcomplete dictionary is generated by means of a density-based clustering algorithm and a superpixel segmentation method, is first constructed for each pixel in the HSI. Then, for each pixel, the atoms used in the SR model are sifted to form the background dictionary of the corresponding CR model. To fully exploit both SR and CR information, the residual features obtained from the SR and CR models are combined through a nonlinear transformation function to generate a response map. Finally, to preserve the contour information of objects, a postprocessing step with a guided filter is applied to the response map to obtain the detection result. Experiments on simulated and real datasets demonstrate that the proposed SRCR outperforms state-of-the-art methods. PubDate:
2023
Issue No: Vol. 16 (2023)
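The CR half of such a detector admits a closed form: represent a pixel's spectrum over a background dictionary with a ridge-regularized code and flag pixels with a large reconstruction residual as anomalies. The sketch below shows that residual test (illustrative only; the dictionary construction via clustering and superpixels, and the SR branch, are not reproduced, and all sizes are assumptions).

```python
import numpy as np

def cr_residual(y, D, lam=0.01):
    """||y - D a||_2 with a = argmin ||y - D a||^2 + lam ||a||^2 (closed form)."""
    k = D.shape[1]
    a = np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ y)
    return np.linalg.norm(y - D @ a)

rng = np.random.default_rng(4)
D = rng.normal(size=(50, 8))                 # 50-band background dictionary, 8 atoms
y_bg = D @ rng.normal(size=8)                # background pixel: in the span of D
y_anom = rng.normal(size=50)                 # anomalous pixel: unrelated spectrum
r_bg, r_anom = cr_residual(y_bg, D), cr_residual(y_anom, D)
```

Background pixels reconstruct almost exactly, while anomalies leave a large residual; the paper fuses this residual with the SR residual before thresholding.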
- Super-Resolution-Aided Sea Ice Concentration Estimation From AMSR2 Images
by Encoder–Decoder Networks With Atrous Convolution-
Authors:
Tiantian Feng;Xiaomin Liu;Rongxing Li;
Pages: 962 - 973 Abstract: Passive microwave data are an important source for the continuous monitoring of Arctic-wide sea ice concentration (SIC). However, their coarse spatial resolution leads to blurring at ice–water divides, making fine-scale and accurate SIC estimation challenging, especially in regions with low SIC. Moreover, the SIC derived by operational algorithms from high-frequency passive microwave observations carries great uncertainty in open water and marginal ice zones due to atmospheric effects. In this article, a novel framework is proposed to achieve accurate SIC estimation with improved spatial detail from original low-resolution Advanced Microwave Scanning Radiometer 2 (AMSR2) images by jointly applying a super-resolution (SR) network and an SIC estimation network. The SR network improves the spatial resolution of the original AMSR2 images by a factor of four, helping construct AMSR2 SR features with more high-frequency information for SIC estimation. The SIC network, with an encoder–decoder structure and atrous convolution, performs the SIC retrieval by accounting for the characteristics of passive microwave images over the Arctic sea ice region. Experimental results show that the proposed SR-aided SIC estimation approach generates accurate SIC with more detailed sea ice textures and much sharper sea ice edges. With respect to Arctic-scale MODIS SIC products, the proposed model achieves a root-mean-square error (RMSE) of 5.94% and a mean absolute error (MAE) of 3.04%, whereas the Arctic Radiation and Turbulence Interaction Study (ARTIST) Sea Ice (ASI) SIC results have RMSE and MAE values three and two times greater, respectively. PubDate:
2023
Issue No: Vol. 16 (2023)
- Determination and Sensitivity Analysis of the Specular Reflection Point in
GNSS Reflectometry-
Authors:
Jyh-Ching Juang;
Pages: 974 - 982 Abstract: In applying Global Navigation Satellite System Reflectometry (GNSS-R) techniques for the remote sensing of surface properties of the Earth, it is imperative to determine the specular reflection point and provide a quantitative characterization of the associated errors. In this article, a rigorous formulation of the problem with respect to the ellipsoidal Earth is provided for the determination and error analysis of the specular reflection point. A polynomial equation approach is developed to characterize the specular reflection point; this explicit characterization is beneficial for GNSS-R receiver operation. A sensitivity analysis is further performed to assess the errors in the presence of uncertainties. PubDate:
2023
Issue No: Vol. 16 (2023)
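The geometry behind the specular point can be illustrated with a simplified 2-D spherical-Earth section: the specular point S minimizes the reflected path length |T−S| + |S−R|, which is equivalent to equal angles of incidence and reflection. A coarse grid search below stands in for the paper's polynomial-equation approach on the ellipsoid; the radii and angles are illustrative assumptions.

```python
import numpy as np

R_E = 6371.0  # mean Earth radius, km (spherical simplification)

def specular_point(tx, rx, n_grid=20001):
    """Brute-force the surface angle minimizing the path tx -> surface -> rx."""
    phis = np.linspace(-np.pi / 4, np.pi / 4, n_grid)
    surf = R_E * np.stack([np.cos(phis), np.sin(phis)], axis=1)
    path = (np.linalg.norm(surf - tx, axis=1) +
            np.linalg.norm(surf - rx, axis=1))
    i = np.argmin(path)
    return phis[i], surf[i]

# Symmetric geometry: transmitter and receiver at equal heights, at equal and
# opposite angles from the vertical -> the specular point sits at angle 0.
a = np.radians(10.0)
tx = (R_E + 700.0) * np.array([np.cos(a), np.sin(a)])
rx = (R_E + 700.0) * np.array([np.cos(-a), np.sin(-a)])
phi_s, s = specular_point(tx, rx)
```

The paper replaces this search with an explicit polynomial characterization on the ellipsoid, which is what makes onboard receiver operation practical.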
- Capturing Small Objects and Edges Information for Cross-Sensor and
Cross-Region Land Cover Semantic Segmentation in Arid Areas-
Authors:
Panli Yuan;Qingzhan Zhao;Yuchen Zheng;Xuewen Wang;Bin Hu;
Pages: 983 - 997 Abstract: In oasis areas adjacent to the desert, land cover information is complex, with rich details, multiple scales of objects of interest, and blurred edge information, which poses challenges to the semantic segmentation of remote sensing images (RSIs). In traditional semantic segmentation methods, detailed spatial information is easily lost in the feature extraction stage, and global context information is not effectively integrated into the segmentation results. To overcome these problems, a land cover semantic segmentation model, the FPN_PSA_DLV3+ network, is proposed in an encoder–decoder manner, capturing finer edge and small-object information in RSIs. In the encoder stage, an improved atrous spatial pyramid pooling module extracts multiscale features, especially small-scale feature details; a feature pyramid network (FPN) module better integrates detailed and semantic information; and spatial context information at both global and local levels is enhanced by a polarized self-attention (PSA) module. In the decoder stage, the FPN_PSA_DLV3+ network further adds a feature fusion branch to concatenate more low-level features. We selected Landsat 5/7/8 satellite RSIs from areas in northern and southern Xinjiang and constructed three self-annotated time-series datasets with more small objects and fine edge information through data augmentation. The experimental results show that the proposed method improves the segmentation of small targets and edges: using only red–green–blue bands, the F1 score increases from 81.55% to 83.10% and the mean intersection over union from 72.65% to 74.82%. Meanwhile, the FPN_PSA_DLV3+ network shows strong generalization across regions and sensors. PubDate:
2023
Issue No: Vol. 16 (2023)
- Susceptibility-Guided Landslide Detection Using Fully Convolutional Neural
Network-
Authors:
Yangyang Chen;Dongping Ming;Junchuan Yu;Lu Xu;Yanni Ma;Yan Li;Xiao Ling;Yueqin Zhu;
Pages: 998 - 1018 Abstract: Automatic landslide detection from very high spatial resolution remote sensing images is crucial for disaster prevention and mitigation. With the rapid development of deep learning, state-of-the-art semantic segmentation methods based on the fully convolutional neural network (FCNN) have achieved outstanding performance on the landslide detection task. However, most existing work uses only visual features, and even with advanced FCNN models a certain number of landslides are still falsely detected or missed. In this article, we introduce landslide susceptibility as prior knowledge and propose a susceptibility-guided landslide detection method based on an FCNN (SG-FCNN) to detect landslides from single-temporal images. In addition, an unsupervised change detection method based on the mean changing magnitude of objects (MCMO) is proposed and integrated with the SG-FCNN to detect newly occurred landslides from bitemporal images. The effectiveness of the proposed SG-FCNN and MCMO has been tested on Lantau Island, Hong Kong. The experimental results show that the SG-FCNN significantly reduces the number of falsely detected and missed landslides compared with the plain FCNN. We conclude that applying landslide susceptibility as prior knowledge is much more effective than using visual features alone, which introduces a new methodology for landslide detection and lifts detection performance to a new level. PubDate:
2023
Issue No: Vol. 16 (2023)
- High-Resolution Planetscope Imagery and Machine Learning for Estimating
Suspended Particulate Matter in the Ebinur Lake, Xinjiang, China-
Authors:
Pan Duan;Fei Zhang;Changjiang Liu;Mou Leong Tan;Jingchao Shi;Weiwei Wang;Yunfei Cai;Hsiang-Te Kung;Shengtian Yang;
Pages: 1019 - 1032 Abstract: Ebinur Lake is a shallow lake vulnerable to strong winds, which can cause drastic changes in suspended particulate matter (SPM). Images with high spatial and temporal resolution are therefore urgently needed for SPM monitoring over the lake, and an efficient machine learning inversion model for estimating SPM from such images, with band combinations refined through quadratic optimization, is essential. This article evaluates the capability of PlanetScope images and machine learning approaches for estimating SPM in Ebinur Lake. The specific objectives are: to obtain the sensitive bands and band combinations for SPM using correlation analysis; to quadratically optimize the combination pattern of sensitive bands using a linear model; and to compare the accuracy of a traditional linear model and machine learning models in estimating SPM. The results confirm that, after quadratic optimization with a linear model, the band combinations B3*B4, (B2+B3)/(B2−B3), (B3+B4)*(B3+B4), and (B3−B2)/(B2/B3) achieve higher accuracy than single-band models. Feeding the preferred four band combinations into partial least squares, random forest, extreme gradient boosting, gradient boosting decision tree, and categorical boosting (CatBoost) models, SPM inversion from PlanetScope images outperforms the traditional linear model. Validation of the inversion maps against observations further indicates that the CatBoost model performs best. PubDate:
2023
Issue No: Vol. 16 (2023)
- A New Algorithm for Measuring Vegetation Growth Using GNSS Interferometric
Reflectometry-
Authors:
Jie Li;Dongkai Yang;Feng Wang;Xuebao Hong;
Pages: 1033 - 1041 Abstract: The use of global navigation satellite system interferometric reflectometry (GNSS-IR) to measure vegetation growth status has become a rapidly growing technique in remote sensing. GNSS signals reflected by the soil surface affect the accuracy of vegetation growth status (vegetation cover density) measurement, and the influence of soil moisture (SM) varies. This study establishes a calibration model that reduces the influence of the SM and snow layer on reflectivity. We used the direct-to-reflected signal amplitude ratio and a GNSS-IR altimeter based on the Lomb–Scargle periodogram to calculate the reflectivity of vegetation and the snow depth. GNSS data from the Plate Boundary Observatory were used to verify the validity of our model. The results show that reflectivity correlates better with vegetation growth status after calibrating for the influence of the SM and snow layer; the correlation increased by nearly 0.14. The analysis also found that the snow layer has a noticeable effect on vegetation growth measurement when the snow depth exceeds 30 cm. Furthermore, a fusion method is proposed to improve measurement accuracy by combining the reflectivity with the normalized microwave reflection index (NMRI). The experimental results show better performance than single observations of reflectivity or NMRI: the best correlation between the measured and in situ normalized difference vegetation index exceeds 0.91, and the root-mean-square error decreases to 0.1893. PubDate:
2023
Issue No: Vol. 16 (2023)
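The altimetry step mentioned above exploits the fact that SNR interference fringes oscillate in sin(elevation) with frequency 2h/λ, so scanning candidate reflector heights for maximum spectral power recovers h. The sketch below uses a least-squares periodogram as a stand-in for the Lomb–Scargle implementation; the wavelength is GPS L1, and all data values are synthetic assumptions.

```python
import numpy as np

LAMBDA = 0.1903  # GPS L1 wavelength, m

def height_periodogram(sin_e, snr, heights):
    """Spectral power at each candidate reflector height via sin/cos regression."""
    power = []
    for h in heights:
        omega = 4.0 * np.pi * h / LAMBDA          # angular freq in the sin(e) domain
        A = np.column_stack([np.sin(omega * sin_e), np.cos(omega * sin_e)])
        coef, *_ = np.linalg.lstsq(A, snr, rcond=None)
        power.append(coef @ coef)                  # squared amplitude of the fit
    return np.array(power)

rng = np.random.default_rng(2)
sin_e = np.sort(rng.uniform(np.sin(np.radians(5)), np.sin(np.radians(25)), 400))
h_true = 1.8                                       # antenna height above surface, m
snr = np.cos(4 * np.pi * h_true / LAMBDA * sin_e + 0.3) + 0.1 * rng.normal(size=400)
heights = np.arange(0.5, 5.0, 0.01)
h_est = heights[np.argmax(height_periodogram(sin_e, snr, heights))]
```

Because the elevation samples are irregular, a periodogram of this least-squares form (rather than a plain FFT) is the natural tool, which is why Lomb–Scargle is the standard choice in GNSS-IR.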
- Feature Enhancement Pyramid and Shallow Feature Reconstruction Network for
SAR Ship Detection-
Authors:
Lin Bai;Cheng Yao;Zhen Ye;Dongling Xue;Xiangyuan Lin;Meng Hui;
Pages: 1042 - 1056 Abstract: Recently, convolutional neural network-based methods have been studied for ship detection in optical remote sensing images. However, applying them to microwave synthetic aperture radar (SAR) images is challenging. First, most regions in inshore scenes contain scattered spots and noise, which dramatically interfere with ship detection. In addition, SAR ship images contain ship targets of different sizes, especially densely distributed small ships, which have fewer distinguishing features and are difficult to detect. In this article, we propose a novel SAR ship detection network called the feature enhancement pyramid and shallow feature reconstruction network (FEPS-Net) to solve these problems. We design a feature enhancement pyramid that includes a spatial enhancement module, which enhances spatial position information and suppresses background noise, and a feature alignment module, which addresses feature misalignment during feature fusion. Additionally, to address small-ship detection in SAR images, we design a shallow feature reconstruction module to extract semantic information from small ships. The effectiveness of the proposed network for SAR ship detection is demonstrated by experiments on two publicly available datasets: the SAR ship detection dataset and the high-resolution SAR images dataset. The experimental results show that the proposed FEPS-Net outperforms the current state-of-the-art methods in SAR ship detection. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Lightweight Multitask Learning Model With Adaptive Loss Balance for
Tropical Cyclone Intensity and Size Estimation-
Authors:
Wei Tian;Xinxin Zhou;Xianhua Niu;Linhong Lai;Yonghong Zhang;Kenny Thiam Choy Lim Kam Sian;
Pages: 1057 - 1071 Abstract: Accurate tropical cyclone (TC) intensity and size estimation is key in disaster management and prevention. While great breakthroughs have been made in TC intensity estimation research, there is currently a lack of research on TC size, which reflects the TC influence radius. Therefore, we propose a lightweight multitask learning model (TC-MTLNet) with adaptive loss balance to simultaneously estimate TC intensity and size. Adaptive loss balance is utilized to solve the problem of inconsistent convergence speeds of the TC intensity and size estimation tasks. The model, based on four 2-D convolutions, four 3-D convolutions, and three fully connected layers, takes up less computational and storage space and improves the accuracy of TC intensity and size estimation by sharing knowledge among multiple tasks. In addition, the distribution of TC samples is imbalanced, with significantly fewer low-intensity and high-intensity TC satellite data, which poses a great challenge to TC intensity and size estimation. We therefore utilize the influence of nearby samples to calibrate the sample density and weight the loss function, enabling the model to generalize to all samples. The results show that the root-mean-square error (RMSE) of TC intensity estimation is 8.40 kts, which is 33.5% lower than that of the Advanced Dvorak Technique (ADT) and 11.4% lower than that of the deep learning method (3DAttentionTCNet). The mean absolute error (MAE) of the TC size estimation is 20.89 nmi, which is a 16% reduction compared to the Multi-Platform Tropical Cyclone Surface Winds Analysis (MTCSWA). PubDate:
2023
Issue No: Vol. 16 (2023)
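The error metrics quoted in the TC-MTLNet abstract above (RMSE, MAE, and percent reductions against baselines) reduce to standard arithmetic. A minimal sketch, with the baseline value back-computed from the abstract's reported 33.5% reduction rather than taken from the paper:

```python
import math

def rmse(pred, obs):
    # Root-mean-square error between predictions and observations.
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

def mae(pred, obs):
    # Mean absolute error.
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(pred)

def percent_reduction(new, baseline):
    # How much lower the new error is, relative to the baseline.
    return 100.0 * (baseline - new) / baseline

# An RMSE of 8.40 kts that is 33.5% lower than the ADT baseline implies
# a baseline of roughly 8.40 / (1 - 0.335) ≈ 12.63 kts.
print(round(percent_reduction(8.40, 12.63), 1))
```
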
- Hyperspectral Image Classification Based on 3-D Multihead Self-Attention Spectral–Spatial Feature Fusion Network
Authors:
Qigao Zhou;Shuai Zhou;Feng Shen;Juan Yin;Dingjie Xu;
Pages: 1072 - 1084 Abstract: Convolutional neural networks are a popular method in hyperspectral image classification. However, the accuracy of these models is closely related to the number and spatial size of training samples. To relieve the performance decline caused by the limited number and spatial size of training samples, we designed a 3-D multihead self-attention spectral–spatial feature fusion network (3DMHSA-SSFFN) that contains step-by-step feature extraction blocks (SBSFE) and a 3-D multihead self-attention module (3DMHSA). The step-by-step feature extraction blocks relieve the accuracy decline under a limited number of training samples, with multiscale convolution kernels extracting richer spatial–spectral features. In hyperspectral image classification, the 3DMHSA module enhances the stability of classification by correlating disparate features. Experimental results show that 3DMHSA-SSFFN achieves better classification performance than other advanced models under limited balanced and imbalanced training data on three datasets. PubDate:
2023
Issue No: Vol. 16 (2023)
- Bag-of-Features-Driven Spectral-Spatial Siamese Neural Network for Hyperspectral Image Classification
Authors:
Zhaohui Xue;Tianzhi Zhu;Yiyang Zhou;Mengxue Zhang;
Pages: 1085 - 1099 Abstract: Deep learning (DL) exhibits commendable performance in hyperspectral image (HSI) classification because of its powerful feature expression ability. Siamese neural networks further improve the performance of DL models by learning within-class similarities and between-class differences from sample pairs. However, there are still some limitations in siamese neural networks. On the one hand, a siamese neural network usually needs a large number of negative sample pairs in the training process, leading to computing overhead. On the other hand, current models may lack interpretability because of their complex network structure. To overcome these limitations, we propose a spectral-spatial siamese neural network with bag-of-features (S3BoF) for HSI classification. First, we use a siamese neural network with 3-D and 2-D convolutions to extract the spectral-spatial features. Second, we introduce a stop-gradient operation and a prediction head structure to make the siamese neural network work without negative sample pairs, thus reducing the computational burden. Third, a bag-of-features (BoF) learning module is introduced to enhance the model interpretability and feature representation. Finally, a symmetric loss and a cross-entropy loss are used for contrastive learning and classification, respectively. Experimental results on four common hyperspectral datasets indicated that S3BoF performs better than other traditional and state-of-the-art deep learning HSI classification methods in terms of classification accuracy and generalization performance, with OA improvements of around 1.40%–30.01%, 0.27%–8.65%, 0.37%–6.27%, and 0.22%–6.64% for the Indian Pines, University of Pavia, Salinas, and Yellow River Delta datasets, respectively, under 5% labeled samples per class. PubDate:
2023
Issue No: Vol. 16 (2023)
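The stop-gradient plus prediction-head idea described in the S3BoF abstract above (training a siamese network without negative pairs) is the same mechanism popularized by SimSiam. A generic sketch of the symmetric negative-cosine loss, not the authors' code; `p1`/`p2` stand for predictor outputs and `z1`/`z2` for projector outputs of the two views:

```python
import math

def cosine(u, v):
    # Cosine similarity between two vectors.
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def stop_gradient(z):
    # In a real framework this detaches z from the graph (e.g. z.detach());
    # here it just marks that no gradient flows through this branch.
    return list(z)

def symmetric_loss(p1, z1, p2, z2):
    # Negative cosine between each predictor output and the stop-gradient
    # projection of the other view; no negative pairs are needed.
    d1 = -cosine(p1, stop_gradient(z2))
    d2 = -cosine(p2, stop_gradient(z1))
    return 0.5 * d1 + 0.5 * d2

# Perfectly aligned views give the minimum loss of -1.
print(symmetric_loss([1.0, 0.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]))
```
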
- Land Surface Temperature Retrieval From Landsat 9 TIRS-2 Data Using Radiance-Based Split-Window Algorithm
Authors:
Mengmeng Wang;Miao Li;Zhengjia Zhang;Tian Hu;Guojin He;Zhaoming Zhang;Guizhou Wang;Hua Li;Junlei Tan;Xiuguo Liu;
Pages: 1100 - 1112 Abstract: The thermal infrared sensor-2 (TIRS-2) carried on Landsat 9 is the newest thermal infrared (TIR) sensor of the Landsat project and provides two adjacent TIR bands, which greatly benefits land surface temperature (LST) retrieval at high spatial resolution. In this article, a radiance-based split-window (RBSW) algorithm for retrieving LST from Landsat 9 TIRS-2 data was proposed. In addition, the split-window covariance-variance ratio (SWCVR) algorithm was improved and applied to Landsat 9 TIRS-2 data for estimating the atmospheric water vapor (AWV) that is required for accurate LST retrieval. The performance of the proposed method was assessed using simulation data and satellite observations. Results reveal that the LST retrieved using the RBSW algorithm has a bias of 0.06 K and a root-mean-square error (RMSE) of 0.51 K based on validation with the simulation data. The sensitivity analysis exhibited an LST error of PubDate:
2023
Issue No: Vol. 16 (2023)
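Split-window algorithms like the RBSW variant above generally express LST as a combination of the two TIR brightness temperatures, band emissivity terms, and atmospheric water vapor. A sketch of the generic split-window form only; the coefficients and inputs below are made up for illustration, not the paper's values:

```python
def split_window_lst(t10, t11, emis_mean, emis_diff, wv, c):
    # Generic split-window form: LST from the brightness temperatures of the
    # two adjacent TIR bands (t10, t11, in kelvin), mean and difference of the
    # band emissivities, and atmospheric water vapor wv (g/cm^2).
    # c[0..6] are placeholder coefficients, NOT the paper's fitted values.
    dt = t10 - t11
    return (t10 + c[0] * dt + c[1] * dt ** 2 + c[2]
            + (c[3] + c[4] * wv) * (1.0 - emis_mean)
            + (c[5] + c[6] * wv) * emis_diff)

# Illustrative call with toy coefficients and inputs:
c = [2.0, 0.5, 0.1, 50.0, -5.0, -100.0, 15.0]
print(round(split_window_lst(300.0, 298.5, 0.98, 0.005, 2.0, c), 3))
```
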
- Self-Filtered Learning for Semantic Segmentation of Buildings in Remote Sensing Imagery With Noisy Labels
Authors:
Hunsoo Song;Lexie Yang;Jinha Jung;
Pages: 1113 - 1129 Abstract: Not all building labels used for training improve the performance of a deep learning model. Some labels can be falsely labeled or too ambiguous to represent their ground truths, resulting in poor performance of the model. For example, building labels in OpenStreetMap (OSM) and Microsoft Building Footprints (MBF) are publicly available training sources with great potential for training deep models, but directly using those labels for training can limit the model's performance, as they are incomplete and inaccurate, so-called noisy labels. This article presents self-filtered learning (SFL), which helps a deep model learn well with noisy labels for building extraction in remote sensing images. SFL iteratively filters out noisy labels during the training process based on the loss of samples. In a multiround manner, SFL makes a deep model learn progressively more from refined samples from which the noisy labels have been removed. Extensive experiments with a simulated noisy map as well as real-world noisy maps, OSM and MBF, showed that SFL can improve the deep model's performance across diverse error types and noise levels. PubDate:
2023
Issue No: Vol. 16 (2023)
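The loss-based, multiround filtering that the SFL abstract describes follows the familiar "small-loss" heuristic: samples whose loss stays high are treated as likely mislabeled and dropped before the next round. A sketch of the idea, with a hypothetical `loss_fn` and `keep_ratio`; the actual training step between rounds is omitted:

```python
def filter_noisy(samples, losses, keep_ratio):
    # Keep the fraction of samples with the smallest loss; high-loss samples
    # are treated as likely noisy labels and removed for the next round.
    order = sorted(range(len(samples)), key=lambda i: losses[i])
    keep = order[: max(1, int(len(samples) * keep_ratio))]
    return [samples[i] for i in sorted(keep)]

def self_filtered_rounds(samples, loss_fn, rounds, keep_ratio):
    # Multiround scheme: train (not shown), score each sample, filter, and
    # repeat on the progressively refined subset.
    for _ in range(rounds):
        losses = [loss_fn(s) for s in samples]
        samples = filter_noisy(samples, losses, keep_ratio)
    return samples

# Toy example: the loss is just the value itself, so large values get filtered.
print(self_filtered_rounds([0.1, 0.2, 5.0, 0.3, 9.0], lambda s: s, 2, 0.8))
```
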
- Local Information Interaction Transformer for Hyperspectral and LiDAR Data Classification
Authors:
Yuwen Zhang;Yishu Peng;Bing Tu;Yaru Liu;
Pages: 1130 - 1143 Abstract: The multisource remote sensing classification task has two main challenges. 1) How to capture hyperspectral image (HSI) and light detection and ranging (LiDAR) features cooperatively to fully mine the complementary information between the data. 2) How to adaptively fuse multisource features, which should not only overcome the imbalance between HSI and LiDAR data but also avoid generating redundant information. The local information interaction transformer (LIIT) model proposed herein can effectively address these issues. Specifically, multibranch feature embedding is first performed to help in the fine-grained serialization of multisource features; subsequently, a local-based multisource feature interactor (L-MSFI) is designed to explore HSI and LiDAR features together. This structure provides an information transmission environment for multibranch features and further alleviates the homogenization of the self-attention process. More importantly, a multisource feature selection module (MSTSM) is developed to dynamically fuse HSI and LiDAR features to solve the problem of insufficient fusion. Experiments were carried out on three multisource remote-sensing classification datasets, the results of which show that LIIT offers clear performance advantages over state-of-the-art CNN and transformer methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- Optimized Views Photogrammetry: Precision Analysis and a Large-Scale Case Study in Qingdao
Authors:
Qingquan Li;Hui Huang;Wenshuai Yu;San Jiang;
Pages: 1144 - 1159 Abstract: Unmanned aerial vehicles (UAVs) have become one of the most widely used remote sensing platforms and play a critical role in the construction of smart cities. However, due to the complex environment of urban scenes, secure and accurate data acquisition brings great challenges to 3-D modeling and scene updating. Optimal trajectory planning of UAVs and accurate data collection by onboard cameras are nontrivial issues in urban modeling. This study presents the principle of optimized views photogrammetry and verifies its precision and potential in large-scale 3-D modeling. Different from oblique photogrammetry, optimized views photogrammetry uses rough models to generate and optimize UAV trajectories, which is achieved by considering model point reconstructability and view point redundancy. Based on this principle, the study first conducts a precision analysis of 3-D models built from UAV images of optimized views photogrammetry and then executes a large-scale case study in the urban region of Qingdao City, China, to verify its engineering potential. Using GCPs for image orientation precision analysis and terrestrial laser scanning (TLS) point clouds for model quality analysis, experimental results show that optimized views photogrammetry can construct stable image connection networks and achieve comparable image orientation accuracy. Benefiting from the accurate image acquisition strategy, the quality of mesh models improves significantly, especially for urban areas with serious occlusions, where 3 to 5 times higher accuracy has been achieved. Besides, the case study in Qingdao City verifies that optimized views photogrammetry can be a reliable and powerful solution for large-scale 3-D modeling in complex urban scenes. PubDate:
2023
Issue No: Vol. 16 (2023)
- Dynamic Soft Label Assignment for Arbitrary-Oriented Ship Detection
Authors:
Yangfan Li;Chunjiang Bian;Hongzhen Chen;
Pages: 1160 - 1170 Abstract: Ship detection, with several military and civilian applications, has drawn considerable attention in recent years. In remote sensing images, ships have the characteristic of arbitrary orientation, and many arbitrary-oriented ship detectors have been proposed based on it. Most of these detectors preset many horizontal or rotated anchors and determine the positive and negative samples based on the intersection over union (IoU) between the anchor and the ground-truth bounding box, in what is called the label assignment process. However, IoU performance is limited, as it can only reflect the quality of the anchor to a certain extent. In addition, the manually fixed IoU threshold used to separate positive and negative samples limits the flexibility of the method, as different ships may have different optimal thresholds. Moreover, equally weighted training samples cause a misalignment between the classification and regression heads. Therefore, we propose a dynamic soft label assignment method for arbitrary-oriented ship detection. First, we design a novel anchor quality score function that takes into account both prior and prediction information of the anchor and enables the model to participate in the label assignment process. Second, we propose a dynamic anchor quality score threshold instead of a fixed IoU threshold for dividing positive and negative samples. Third, in contrast to assigning equal weights, we propose a soft label assignment strategy to weigh the training samples in the loss function. The proposed method offers superior detection performance for arbitrary-oriented ships with only one horizontal preset anchor. Experimental results on the HRSC2016, FGSD, and ShipRSImageNet datasets demonstrate the effectiveness of our proposed dynamic soft label assignment for arbitrary-oriented ship detection. PubDate:
2023
Issue No: Vol. 16 (2023)
- Attention-Aware Deep Feature Embedding for Remote Sensing Image Scene Classification
Authors:
Xiaoning Chen;Zonghao Han;Yong Li;Mingyang Ma;Shaohui Mei;Wei Cheng;
Pages: 1171 - 1184 Abstract: Due to the wide application of remote sensing (RS) image scene classification, it has attracted increasing attention from scholars. With the development of the convolutional neural network (CNN), CNN-based methods for RS image scene classification have made impressive progress. In existing works, most of the architectures consider only the global information of the RS images. However, the global information contains a large number of redundant areas that diminish the classification performance, and it ignores the local information that reflects finer spatial details of local objects. Furthermore, most CNN-based methods assign the same weight to each feature vector, causing the model to fail to discriminate the crucial features. In this article, a novel method using Two-branch Deep Feature Embedding (TDFE) with a dual attention-aware (DAA) module for RS image scene classification is proposed. In order to mine more complementary information, we extract high-level global semantic-based features and low-level local object-based features with the TDFE module. Then, to focus selectively on the key global-semantics feature maps as well as the key local regions, we propose a DAA module to attain this key information. We conduct extensive experiments to verify the superiority of our proposed method, and the experimental results obtained on two widely used RS scene classification benchmarks demonstrate its effectiveness. PubDate:
2023
Issue No: Vol. 16 (2023)
- Filtering Specialized Change in a Few-Shot Setting
Authors:
Martin Hermann;Sudipan Saha;Xiao Xiang Zhu;
Pages: 1185 - 1196 Abstract: The aim of change detection in remote sensing usually is not to find all differences between the observations, but rather only specific types of change, such as urban development, deforestation, or even more specialized categories like roadwork. However, often there are no large public datasets available for very fine-grained tasks, and to collect the amount of training data needed for most supervised learning methods is very costly and often prohibitive. For this reason, we formulate the problem of few-shot filtering, where we are provided with a relatively large change detection dataset and, at test time, a few instances of one particular change type that we try to “filter out” of the learned changes. For example, we might train on data of general urban change, and, given some samples of building construction, aim to only predict instances of these on the test set, all without any explicit labels for buildings in the training data. We further investigate a fine-tuning approach to this problem and assess its performance on a public dataset that we adapt to be used in this novel setting. PubDate:
2023
Issue No: Vol. 16 (2023)
- Query by Example in Remote Sensing Image Archive Using Enhanced Deep Support Vector Data Description
Authors:
Omid Ghozatlou;Miguel Heredia Conde;Mihai Datcu;
Pages: 1197 - 1210 Abstract: This article studies remote sensing image retrieval using kernel-based support vector data description (SVDD). We exploit deep SVDD, a well-known method for one-class classification, to recover the most relevant samples from the archive. To this end, a deep neural network (DNN) is jointly trained to map the data into a hypersphere of minimum volume in the latent space. It is expected that samples similar to the query are compressed inside the hypersphere, and the embedding closest to the center of the hypersphere corresponds to the sample most relevant to the query. We enhance deep SVDD by injecting the statistical information of the data into the DNN by means of additional terms in the cost function. The first enhancement method takes advantage of covariance regularization over batches of the training set to penalize unnecessary redundancy and minimize the correlation between the different dimensions of the embedding. The second method involves unlocking the hypersphere's predefined center while preventing network divergence during training. Therefore, two parameters are designed to control the importance of the drifting of the center and the importance of a fixed predefined center (convergence), respectively. This has been implemented by considering the average of batches of embeddings in each iteration as the updated center. This pushes irrelevant samples away from query samples, making data clustering easier for the DNN. The performance of the proposed methods is evaluated on benchmark datasets. PubDate:
2023
Issue No: Vol. 16 (2023)
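The two enhancements in the deep SVDD abstract above can be sketched numerically: a distance-to-center objective with an off-diagonal covariance penalty, and a center that drifts toward the batch mean. A toy sketch with plain lists standing in for embeddings; `lam_cov` and `alpha` are hypothetical trade-off parameters, not the paper's:

```python
def batch_mean(batch):
    # Mean embedding of a batch (list of equal-length vectors).
    d = len(batch[0])
    return [sum(z[j] for z in batch) / len(batch) for j in range(d)]

def svdd_loss(batch, center, lam_cov):
    # Mean squared distance of embeddings to the hypersphere center, plus a
    # penalty on squared off-diagonal covariance terms to discourage
    # redundant, correlated embedding dimensions (first enhancement).
    mu = batch_mean(batch)
    dist = sum(sum((zj - cj) ** 2 for zj, cj in zip(z, center))
               for z in batch) / len(batch)
    d = len(mu)
    cov_pen = 0.0
    for j in range(d):
        for k in range(d):
            if j != k:
                cjk = sum((z[j] - mu[j]) * (z[k] - mu[k]) for z in batch) / len(batch)
                cov_pen += cjk ** 2
    return dist + lam_cov * cov_pen

def drift_center(center, batch, alpha):
    # Second enhancement: let the center drift toward the batch mean; alpha
    # trades off drifting against staying near the predefined center.
    mu = batch_mean(batch)
    return [(1 - alpha) * c + alpha * m for c, m in zip(center, mu)]

batch = [[1.0, 0.0], [0.0, 1.0]]
print(round(svdd_loss(batch, [0.0, 0.0], 0.1), 4))
print(drift_center([0.0, 0.0], batch, 0.5))
```
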
- A CNN-Transformer Hybrid Model Based on CSWin Transformer for UAV Image Object Detection
Authors:
Wanjie Lu;Chaozhen Lan;Chaoyang Niu;Wei Liu;Liang Lyu;Qunshan Shi;Shiju Wang;
Pages: 1211 - 1231 Abstract: The object detection of unmanned aerial vehicle (UAV) images has widespread applications in numerous fields; however, the complex background, diverse scales, and uneven distribution of objects in UAV images make object detection a challenging task. This study proposes a convolution neural network transformer hybrid model to achieve efficient object detection in UAV images, which has three advantages that contribute to improving object detection performance. First, the efficient and effective cross-shaped window (CSWin) transformer can be used as a backbone to obtain image features at different levels, and the obtained features can be input into the feature pyramid network to achieve multiscale representation, which will contribute to multiscale object detection. Second, a hybrid patch embedding module is constructed to extract and utilize low-level information such as the edges and corners of the image. Finally, a slicing-based inference method is constructed to fuse the inference results of the original image and sliced images, which will improve the small object detection accuracy without modifying the original network. Experimental results on public datasets illustrate that the proposed method can improve performance more effectively than several popular and state-of-the-art object detection methods. PubDate:
2023
Issue No: Vol. 16 (2023)
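The slicing-based inference step in the abstract above (running the detector on overlapping tiles and fusing the tile results with the full-image pass) can be sketched generically. This is a sketch of the idea only, not the paper's implementation; the final duplicate-removing NMS step is omitted, and `detector` is a hypothetical callable returning `(x1, y1, x2, y2, score)` boxes in window coordinates:

```python
def tile_coords(width, height, tile, overlap):
    # Top-left corners of overlapping square tiles covering the full image.
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - tile, 0) + 1, step)) or [0]
    if xs[-1] + tile < width:
        xs.append(width - tile)   # make sure the right edge is covered
    if ys[-1] + tile < height:
        ys.append(height - tile)  # make sure the bottom edge is covered
    return [(x, y) for y in ys for x in xs]

def sliced_inference(image_size, detector, tile, overlap):
    # Run the detector on the whole image and on each slice, shift slice
    # boxes back to image coordinates, and pool all detections.
    w, h = image_size
    boxes = list(detector(0, 0, w, h))  # full-image pass
    for x, y in tile_coords(w, h, tile, overlap):
        for bx1, by1, bx2, by2, score in detector(x, y, tile, tile):
            boxes.append((bx1 + x, by1 + y, bx2 + x, by2 + y, score))
    return boxes

# Hypothetical detector returning one box per call, in window coordinates.
dummy = lambda x, y, w, h: [(1, 1, 5, 5, 0.9)]
print(len(sliced_inference((100, 100), dummy, 60, 20)))
```
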
- A High-Efficiency Spectral Element Method Based on CFS-PML for GPR Numerical Simulation and Reverse Time Migration
Authors:
Xun Wang;Tianxiao Yu;Deshan Feng;Siyuan Ding;Bingchao Li;Yuxin Liu;Zheng Feng;
Pages: 1232 - 1243 Abstract: Improving the accuracy and efficiency of the numerical simulation of ground penetrating radar (GPR) becomes a pressing need with the rapidly increasing amount of inversion data and the growing demand for migration imaging quality. In this article, we present a numerical spectral element time-domain (SETD) simulation procedure for GPR forward modeling and further apply it to reverse time migration (RTM) with complex geoelectric models. This approach combines the flexibility of finite element methods with the high precision of spectral methods. Meanwhile, in this procedure, the complex frequency shifted perfectly matched layer (CFS-PML) is loaded to effectively suppress the echo at the truncated boundary, and the per-element GPU parallel framework can achieve up to 5.7788 times the efficiency of the CPU calculation. Experiments on SETD spatial convergence and CFS-PML optimal parameter selection showed that, under the same degrees of freedom, the SETD offered substantially better accuracy than the traditional finite element time-domain (FETD) method. Experiments on RTM of different profiles with different orders of SETD on a complex geoelectric model verify the universality of the algorithm. The results indicate that the RTM imaging quality improves significantly as the SETD order increases. This demonstrates the great potential of the efficient and high-precision SETD simulation algorithm for RTM imaging and provides guidance for underground target structure exploration. PubDate:
2023
Issue No: Vol. 16 (2023)
- Estimation of European Terrestrial Ecosystem NEP Based on an Improved CASA Model
Authors:
Siyi Qiu;Liang Liang;Qianjie Wang;Di Geng;Junjun Wu;Shuguo Wang;Bingqian Chen;
Pages: 1244 - 1255 Abstract: Net ecosystem productivity (NEP) is a key indicator to describe terrestrial ecosystem functions and carbon sinks. The CASA model was improved by optimizing the parameters of optimum temperature and maximum light use efficiency (ε_max), and the NEP of the European terrestrial ecosystem was calculated by combining it with a soil respiration model. The results showed that when using vegetation classification data to optimize the parameter ε_max, the R2 of NEP between estimates and observations increased from 0.252 to 0.403, and the RMSE decreased from 84.557 to 64.466 gC·m−2·month−1. After further optimizing the optimum temperature, R2 increased to 0.428, and the RMSE decreased to 63.720 gC·m−2·month−1. This indicated that the CASA model could be improved by optimizing ε_max as well as the optimum temperature, which is a good approach to improve NEP estimates. On this basis, the NEP spatiotemporal changes in various regions of Europe were analyzed using the optimization results. The NEP values of the European terrestrial ecosystem show regional differences, following the pattern western region > southern region > central region > eastern region > northern region. The monthly change of NEP in each region is a single-peak curve, high in summer and low in winter, and the annual overall value is positive (i.e., a carbon sink). This research enables us to obtain carbon source/sink distribution information for Europe more accurately and provides a scientific reference for carbon balance policy formulation in the region. PubDate:
2023
Issue No: Vol. 16 (2023)
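The quantities in the NEP abstract above fit together as: CASA estimates productivity via a light-use-efficiency form scaled by temperature and water stress, and NEP is what remains after subtracting soil respiration (positive NEP meaning a carbon sink). A minimal sketch of that structure; all numeric inputs below are made up for illustration:

```python
def casa_npp(apar, eps_max, t_scalar, w_scalar):
    # CASA light-use-efficiency form: NPP = APAR * eps_max * f(T) * f(W),
    # where eps_max is the maximum light use efficiency that the abstract's
    # improvement optimizes per vegetation class, and t_scalar/w_scalar are
    # temperature and water stress scalars in [0, 1].
    return apar * eps_max * t_scalar * w_scalar

def nep(npp, soil_respiration):
    # Net ecosystem productivity: carbon fixed minus heterotrophic release;
    # a positive value indicates a carbon sink.
    return npp - soil_respiration

# Illustrative monthly values (gC·m−2·month−1); all inputs are made up.
npp = casa_npp(apar=200.0, eps_max=0.55, t_scalar=0.9, w_scalar=0.8)
print(round(nep(npp, soil_respiration=60.0), 2))
```
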
- The Spatio-Temporal Patterns of Glacier Activities in the Eastern Pamir Plateau Investigated by Time Series Sub-Pixel Offsets From Sentinel-2 Optical Images
Authors:
Jue Zhang;Ping He;Xiaoping Hu;Zhumei Liu;
Pages: 1256 - 1268 Abstract: The eastern Pamir Plateau, with a mean altitude of 5000 m, hosts a large-scale glacier region of 2054 km2 that serves as an important environmental climate condition for Central Asia. In this article, we estimated the spatio-temporal patterns of glacier activities in this region using time series sub-pixel offsets derived from 63 Sentinel-2 optical images acquired between December 31, 2020 and December 31, 2021. Our results indicated that the mean glacier flow velocity in this region was 0.531 m/d ± 0.007 m/d in 2021, and that the Kongur Tagh Glacier was much more active than the Kingata Glacier and Muztagh Ata Glacier. The time series observations revealed that the glacier motion involves a pre-melting period (January–April) with a velocity of 0.600 m/d ± 0.012 m/d, a melting period (May–August) with a velocity of 0.608 m/d ± 0.003 m/d, and a post-melting period (September–December) with a velocity of 0.659 m/d ± 0.006 m/d. To gain insight into the characteristics of these glacier activities, we carried out a correlation analysis between the glacier flow velocity change and its potential causes (i.e., topography, temperature, precipitation, glacier surges, debris cover, and glacier thickness), and the current results suggest that the glacier flow velocity is influenced by a combination of these factors. PubDate:
2023
Issue No: Vol. 16 (2023)
- Optical and SAR Image Dense Registration Using a Robust Deep Optical Flow Framework
Authors:
Han Zhang;Lin Lei;Weiping Ni;Xiaoliang Yang;Tao Tang;Kenan Cheng;Deliang Xiang;Gangyao Kuang;
Pages: 1269 - 1294 Abstract: The coregistration of optical and synthetic aperture radar (SAR) imagery is the bottleneck in exploiting the complementary information from the two multimodal datasets. The difficulties lie not only in the complex radiometric relationship between them, but also in the distinct geometrical models of the optical and SAR imaging systems, which make it nontrivial to explicitly depict the spatial relationship between corresponding image regions when elevation fluctuations exist. This article investigates the optical flow technique for pixelwise dense registration of high-resolution optical and SAR images, so as to dispense with the outlier removal and geometric mapping procedures that have to be conducted in classical image registration approaches based on sparse feature point matching. Herein, a deep optical flow framework is designed. First, a dilated feature concatenation method is proposed to enhance the discriminability of the pixelwise features for similarity measurement. An effective network training strategy is used, based on a smoothed flow loss and a training dataset that contains simulated elevation fluctuations. Second, we propose a self-supervised optical flow fine-tuning strategy. It incorporates the strength of the blockwise matching approach, which produces better matching precision, into the proposed pixelwise matching procedure. In this way, the accuracy of the optical-SAR dense registration is substantially improved. Extensive experiments conducted on 1-m resolution optical-SAR image pairs of different land-cover types and distinct topographic conditions indicate that the proposed fine-tuned optical-SAR optical flow network framework is quite robust and has the potential to perform optical-SAR image dense registration in practical applications. The Python code of the proposed deep optical flow network will be made available. PubDate:
2023
Issue No: Vol. 16 (2023)
- Squint Spotlight SAR Imaging by Two-Step Scaling Transform-Based Extended PFA and 2-D Autofocus
Authors:
Shengliang Han;Daiyin Zhu;Xinhua Mao;
Pages: 1295 - 1307 Abstract: In this article, a novel imaging algorithm by combining two-step scaling transform (TSST) with structure-aided 2-D autofocus is proposed for the squint spotlight synthetic aperture radar (SAR). First, on the basis of planar wavefront assumption, a modified range-frequency linear scaling transform (MRFLST) and an azimuth-time nonlinear scaling transform (ATNST) are proposed to eliminate the coupling between range-frequency and azimuth-time of the received echo. Furthermore, to improve the efficiency, the MRFLST is implemented by using the principle of chirp scaling (PCS), which involves only complex multiplications and fast Fourier transforms (FFTs) without any interpolation, meanwhile, a constant scaling factor (CSF) selecting criteria is defined to avoid range spectrum aliasing. Then, to correct the phase error caused by the range measurement error and atmospheric propagation effects, the prior 2-D phase error structure implied in the TSST is analyzed. Finally, by integrating the derived 2-D phase error structure and range frequency fragmentation technique, a new 2-D autofocus algorithm is presented to improve the image quality. Simulated and real data experiments are carried out to verify the proposed algorithm. PubDate:
2023
Issue No: Vol. 16 (2023)
- Spectral–Temporal Fusion of Satellite Images via an End-to-End
Two-Stream Attention With an Effective Reconstruction Network-
Authors:
Tayeb Benzenati;Yousri Kessentini;Abdelaziz Kallel;
Pages: 1308 - 1320 Abstract: Due to technical and budget constraints on current optical satellites, the acquisition of satellite images with the best resolutions is not practicable. In this article, aiming to produce products with high spectral (HS) and temporal resolutions, we introduce a two-stream spectral–temporal fusion technique based on an attention mechanism, called STA-Net. STA-Net aims to combine high spectral and low temporal (HSLT) resolution images with low spectral and high temporal (LSHT) resolution images to generate products with the best characteristics. The proposed technique involves two stages. In the first, two fused images are generated by a two-stream architecture based on residual attention blocks. The temporal difference estimator stream estimates the temporal difference between HS images at the desired and neighboring dates. The reflectance difference estimator is the second stream; it predicts the reflectance difference between the input images (HS–LS) to map LS images into HS products. In the second stage, a reconstruction network combines the two streams' outputs via an effective learnable weighted-sum strategy. The two-stage model is trained in an end-to-end fashion using an effective loss function to ensure the best fusion quality. To the best of our knowledge, this work represents the first attempt to address spectral–temporal fusion using an end-to-end deep neural network model. Experimental results on two actual datasets of Sentinel-2 (HSLT: 10 spectral bands and a long revisit period) and PlanetScope (LSHT: four spectral bands and daily images) images prove the effectiveness of the proposed technique with respect to the baseline technique. PubDate:
2023
Issue No: Vol. 16 (2023)
- Simulation Framework and Case Studies for the Design of Sea Surface
Salinity Remote Sensing Missions-
Authors:
Alexander Akins;Shannon Brown;Tong Lee;Sidharth Misra;Simon Yueh;
Pages: 1321 - 1334 Abstract: L-band microwave radiometers have now been used to measure sea surface salinity (SSS) from space for over a decade with the SMOS, Aquarius, and SMAP missions, and it is expected that the launch of the CIMR mission in the latter half of this decade will ensure measurement continuity in the near future. Beyond these missions, it is useful to consider how future missions can be designed to meet different scientific objectives and performance requirements, as well as to fit within different cost spaces. In this article, we present a software simulator for remote sensing measurements of ocean state capable of generating L1- and L2-equivalent data products for an arbitrary spacecraft mission, including multifrequency fixed-pointing or scanning microwave radiometers. This simulator is then applied to case studies of SSS measurement over selected areas of interest, including the Gulf Stream, Southern Ocean, and Pacific tropical instability wave regions. These simulations illustrate how different design choices concerning receiver bandwidth and revisit time can improve the detection of SSS features in these regions from the mesoscale to the seasonal scale. PubDate:
2023
Issue No: Vol. 16 (2023)
- Convolutional Transformer-Based Few-Shot Learning for Cross-Domain
Hyperspectral Image Classification-
Authors:
Yishu Peng;Yaru Liu;Bing Tu;Yuwen Zhang;
Pages: 1335 - 1349 Abstract: In cross-domain hyperspectral image (HSI) classification, the labeled samples of the target domain are very limited, and it is worthwhile to obtain sufficient class information from the source domain to categorize the target domain classes (both the same and new unseen classes). This article investigates this problem by employing few-shot learning (FSL) in a meta-learning paradigm. However, most existing cross-domain FSL methods extract statistical features based on convolutional neural networks (CNNs), which typically only consider the local spatial information among features while ignoring the global information. To make up for these shortcomings, this article proposes a novel convolutional transformer-based few-shot learning (CTFSL) method. Specifically, FSL is first performed on the classes of the source and target domains simultaneously to build a consistent scenario. Then, a domain aligner is set up to map the source and target domains to the same dimensions. In addition, a convolutional transformer (CT) network is utilized to extract local-global features. Finally, a domain discriminator is applied that can not only reduce domain shift but also distinguish the domain from which a feature originates. Experiments on three widely used hyperspectral image datasets indicate that the proposed CTFSL method is superior to state-of-the-art cross-domain FSL methods and several typical HSI classification methods in terms of classification accuracy. PubDate:
2023
Issue No: Vol. 16 (2023)
- Filling Then Spatio-Temporal Fusion for All-Sky MODIS Land Surface
Temperature Generation-
Authors:
Yijie Tang;Qunming Wang;Peter M. Atkinson;
Pages: 1350 - 1364 Abstract: The thermal infrared band of the moderate resolution imaging spectroradiometer (MODIS) onboard the Terra/Aqua satellite can provide daily, 1 km land surface temperature (LST) observations. However, due to the influence of cloud contamination, spatial gaps are common in the LST product, restricting its application greatly at the regional scale. In this article, to deal with the challenge of large gaps (especially complete data loss) in MODIS LST for local monitoring, a filling then spatio-temporal fusion (FSTF) method is proposed, which utilizes another type of product with all-sky coverage, but coarser spatial resolution (i.e., the 7 km China Land Data Assimilation System (CLDAS) LST product). Due to the great temporal heterogeneity of LST, temporally closer auxiliary MODIS LST images are considered to be preferable choices for spatio-temporal fusion of CLDAS and MODIS LST time-series. However, such data are always abandoned inappropriately in conventional spatio-temporal fusion if they contain gaps. Accordingly, pregap filling is performed in FSTF to make fuller use of the valid information in temporally close MODIS LST images with small gaps. Through evaluation in both the spatial and temporal domains for three regions in China, FSTF was found to be more accurate in reconstructing MODIS LST images than the original spatio-temporal fusion methods. FSTF, thus, has great potential for updating the current MODIS LST product at the global scale. PubDate:
2023
Issue No: Vol. 16 (2023)
- Analyzing Gradual Vegetation Changes in the Athabasca Oil Sands Region
Using Landsat Data-
Authors:
Moritz Lucas;Antara Dasgupta;Björn Waske;
Pages: 1365 - 1377 Abstract: Oil sands mining in the Athabasca region of northern Alberta, Canada, is a major intrusion into the otherwise pristine natural environment. The various types of oil sands mining, transport, and processing are causing large-scale discharge of pollutants. Accordingly, this study examined the gradual changes in the physically undisturbed vegetation that occurred from 1984 to 2021 in the Athabasca oil sands monitoring region. First, abrupt changes were masked out with the help of auxiliary and Landsat data. Subsequently, the LandTrendr algorithm was applied to a normalized burn ratio Landsat time series on Google Earth Engine. In order to interpret the gradual changes, measurement criteria were used to describe vegetation development, vulnerability, and variability. In addition, the spatial and temporal relationships of these to oil sands opencast mines, processing facilities, and steam-assisted gravity drainage (SAGD) mines were examined. The results showed that a major part of the vegetation in the Athabasca oil sands monitoring region underwent positive development (65.9%). However, around the opencast mines, negative vegetation development and reduced stability could be observed within a radius of 10 km. In the surroundings of processing facilities, the development and stability of vegetation were disturbed within a radius of 2 km. The analysis of land cover classes showed that deciduous, coniferous, and mixed forests were disproportionately affected. Conversely, no negative influences on neighboring vegetation could be detected around SAGD mines. The temporal analysis showed that vegetation disturbance was most pronounced between 1990 and 2000, but recovered in recent years. PubDate:
2023
Issue No: Vol. 16 (2023)
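The normalized burn ratio used above as the LandTrendr input is the standard Landsat band ratio of near-infrared and shortwave-infrared reflectance; a minimal sketch (the band values below are illustrative, not from the study):

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    nir = np.asarray(nir, dtype=float)
    swir2 = np.asarray(swir2, dtype=float)
    return (nir - swir2) / (nir + swir2)

# Healthy vegetation reflects strongly in NIR and weakly in SWIR2, so NBR
# is high; disturbed or burned surfaces drive NBR toward negative values.
healthy = nbr(0.40, 0.10)    # high NBR
disturbed = nbr(0.15, 0.25)  # negative NBR
```

LandTrendr then fits piecewise-linear segments to the per-pixel NBR trajectory to separate abrupt from gradual change.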
- Blockchain-Assisted Verifiable and Secure Remote Sensing Image Retrieval
in Cloud Environment-
Authors:
Xue Ouyang;Yanyan Xu;Yangsu Mao;Yunqi Liu;Zhiheng Wang;Yuejing Yan;
Pages: 1378 - 1389 Abstract: Secure retrieval of remote sensing images in an outsourced cloud environment garners considerable attention. Since the cloud service provider (CSP) is considered a semitrusted third party that may return incorrect retrieval results to save computational resources or defraud retrieval fees for profit, it becomes a critical challenge to achieve secure and verifiable remote sensing image retrieval. This article presents a secure retrieval and blockchain-assisted verifiable scheme for encrypted remote sensing images in the cloud environment. In response to the characteristic that geographical objects in remote sensing images have clear category attributes, we design a remote sensing image retrieval method to facilitate secure and efficient retrieval. In addition, we propose a verifiable method combining blockchain and Merkle trees for checking the integrity and correctness of the storage and retrieval services provided by the CSP, which can replace the traditional third-party auditor. The security analysis and experimental evaluation demonstrate the security, verifiability, and feasibility of the proposed scheme, achieving secure remote sensing image retrieval while preventing malicious behavior of the CSP. PubDate:
2023
Issue No: Vol. 16 (2023)
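The Merkle-tree ingredient of such verification schemes can be sketched generically; this is a textbook construction under assumed conventions (SHA-256 hashing, duplicating the last node at odd levels), not the paper's exact design:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute a Merkle root over leaf data, hashing pairs level by level;
    the last node is duplicated when a level has an odd number of nodes."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Any tampering with a stored image chunk changes the root, so a root
# anchored on-chain lets a client detect a dishonest CSP.
root = merkle_root([b"tile-0", b"tile-1", b"tile-2"])
tampered = merkle_root([b"tile-0", b"tile-X", b"tile-2"])
```

In practice the client would verify a retrieved chunk against the anchored root using a logarithmic-size inclusion proof rather than recomputing the whole tree.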
- Towards Scalable Within-Season Crop Mapping With Phenology Normalization
and Deep Learning-
Authors:
Zijun Yang;Chunyuan Diao;Feng Gao;
Pages: 1390 - 1402 Abstract: Crop-type mapping using time-series remote sensing data is crucial for a wide range of agricultural applications. Crop mapping during the growing season is particularly critical for timely monitoring of the agricultural system. Most existing studies focusing on within-season crop mapping leverage historical remote sensing and crop type reference data for model building, due to the difficulty of obtaining timely crop type samples for the current growing season. Yet crop type samples from previous years may not be usable directly, considering the diverse patterns of crop phenology across years and locations, which hampers the scalability and transferability of the model to the current season for timely crop mapping. This article proposes an innovative within-season emergence (WISE) phenology normalized deep learning model towards scalable within-season crop mapping. The crop time-series remote sensing data are first normalized by the WISE crop emergence dates before being fed into an attention-based one-dimensional convolutional neural network classifier. Compared to conventional calendar-based approaches, the WISE-phenology normalization approach substantially helps the deep learning crop mapping model accommodate the spatiotemporal variations in crop phenological dynamics. Results in Illinois from 2017 to 2020 indicate that the proposed model outperforms calendar-based approaches and yields over 90% overall accuracy for classifying corn and soybeans at the end of the season. During the growing season, the proposed model can give satisfactory performance (85% overall accuracy) one to four weeks earlier than calendar-based approaches. With WISE-phenology normalization, the proposed model exhibits more stable performance across Illinois and can be transferred to different years with enhanced scalability and robustness. PubDate:
2023
Issue No: Vol. 16 (2023)
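The core idea of phenology normalization, re-indexing each pixel's time series to time-since-emergence rather than calendar dates, can be sketched as follows (the function name and edge-padding policy are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def normalize_by_emergence(series, emergence_idx, out_len):
    """Re-index a per-pixel time series so position 0 is the crop emergence
    date; samples from different years/fields then align on time since
    emergence instead of calendar dates. Short tails are edge-padded."""
    shifted = series[emergence_idx:emergence_idx + out_len]
    if len(shifted) < out_len:
        shifted = np.pad(shifted, (0, out_len - len(shifted)), mode="edge")
    return shifted

# Two fields with identical phenology but a 3-step planting offset
# become identical after normalization.
base = np.array([0.1, 0.2, 0.5, 0.8, 0.7, 0.4, 0.2, 0.1])
late = np.concatenate([[0.1, 0.1, 0.1], base])
a = normalize_by_emergence(base, 0, 6)
b = normalize_by_emergence(late, 3, 6)
```

The normalized sequences would then be fed to the 1-D convolutional classifier, which no longer has to absorb planting-date shifts.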
- Mapping Surface Organic Soil Properties in Arctic Tundra Using C-Band SAR
Data-
Authors:
Yonghong Yi;Kazem Bakian-Dogaheh;Mahta Moghaddam;Umakant Mishra;John S. Kimball;
Pages: 1403 - 1413 Abstract: Surface soil organic carbon (SOC) content is among the first-order controls on the rate and extent of Arctic permafrost thaw. There is a large discrepancy in current SOC estimates in Arctic tundra, where sparse measurements are unable to capture SOC complexity over the vast tundra region. Synthetic aperture radar (SAR) data are sensitive to surface vegetation, roughness, and moisture conditions, and may provide useful information on surface SOC properties. Here, we investigated the potential of multitemporal Sentinel-1 C-band SAR data for regional SOC mapping in the Arctic tundra through principal component analysis (PCA). Multiple in situ SOC datasets in the Alaska North Slope were assembled to generate a consistent surface (0–10 cm) SOC and bulk density dataset (n = 97). The radar VV backscatter shows a strong correlation with surface SOC, but the correlation varies greatly with surface snow, moisture, and freeze/thaw conditions. However, the first principal component (PC1) of radar backscatter time series from different years shows spatial consistency representing dominant and persistent surface backscatter behavior. The PC1 also shows a strong linear correlation with surface SOC concentration (R = 0.65, p PubDate:
2023
Issue No: Vol. 16 (2023)
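The PCA step described above, extracting the first principal component of a multitemporal VV backscatter stack, can be sketched with synthetic data (the pixel count, acquisition count, and backscatter statistics below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a Sentinel-1 VV stack: 500 pixels x 24 acquisitions
# (dB), built as a persistent per-pixel level plus a seasonal term and noise.
persistent = rng.normal(-12.0, 2.0, size=(500, 1))
season = 1.5 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))
vv = persistent + season + rng.normal(0.0, 0.5, size=(500, 24))

# PCA via SVD of the mean-centred stack; centring removes the common
# seasonal signal, so PC1 captures the dominant, temporally persistent
# backscatter behaviour of each pixel.
x = vv - vv.mean(axis=0)
_, s, vt = np.linalg.svd(x, full_matrices=False)
pc1 = x @ vt[0]
explained = s[0] ** 2 / np.sum(s ** 2)
```

A PC1 map computed this way would then be related to in situ surface SOC measurements, as in the article's regression analysis.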
- Anomaly Detection of Hyperspectral Images Based on Transformer With
Spatial–Spectral Dual-Window Mask-
Authors:
Song Xiao;Tian Zhang;Zhangchun Xu;Jiahui Qu;Shaoxiong Hou;Wenqian Dong;
Pages: 1414 - 1426 Abstract: Anomaly detection has become one of the crucial tasks in hyperspectral image processing. However, most deep learning-based anomaly detection methods suffer from an inability to utilize spatial–spectral information, which decreases detection accuracy. To address this problem, we propose a novel hyperspectral anomaly detection method with a spatial–spectral dual-window mask transformer, termed S2DWMTrans, which can fully extract features from global and local perspectives and adaptively suppress the reconstruction of anomaly targets. Specifically, the dual-window mask transformer aggregates background information of the entire image from a global perspective to neutralize anomalies, and uses neighboring pixels in a dual window to suppress anomaly reconstruction. An adaptive-weighted loss function is designed to further suppress anomaly reconstruction adaptively during the network training process. According to our investigation, this is the first work to apply a transformer to hyperspectral anomaly detection. Comparative experiments and ablation studies demonstrate that the proposed S2DWMTrans achieves competitive performance. PubDate:
2023
Issue No: Vol. 16 (2023)
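The reconstruction-error scoring that underlies this family of detectors (not the S2DWMTrans network itself) can be sketched as follows; the toy cube and error model are illustrative:

```python
import numpy as np

def anomaly_score(image, reconstruction):
    """Per-pixel anomaly score as the spectral reconstruction error,
    min-max scaled to [0, 1]. A reconstructor trained to reproduce
    background well leaves large errors at anomalous pixels."""
    err = np.linalg.norm(image - reconstruction, axis=-1)
    return (err - err.min()) / (err.max() - err.min() + 1e-12)

# Toy cube: background reconstructs almost exactly; one pixel does not.
rng = np.random.default_rng(1)
cube = rng.normal(0.5, 0.01, size=(8, 8, 20))
recon = cube + rng.normal(0.0, 0.001, size=cube.shape)
recon[4, 4] = 0.0  # the anomaly is poorly reconstructed
score = anomaly_score(cube, recon)
```

Thresholding the score map yields the final detection mask; the article's contribution lies in how the masked transformer keeps anomalies out of the reconstruction.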
- Compensation for High-Frequency Vibration of SAR Imaging in the Terahertz
Band Based on Linear Chirplet Transform and Empirical Mode Decomposition-
Authors:
Siyu Chen;Yong Wang;Yun Zhang;
Pages: 1427 - 1446 Abstract: SAR in the THz band is important and valuable in the field of radar signal processing, and it is highly sensitive to the high-frequency vibration of the platform due to the short wavelength. In this article, the high-frequency vibration is characterized as a multicomponent SFM signal, and a parameter estimation method based on the linear chirplet transform and empirical mode decomposition is proposed to compensate for the high-frequency vibration errors. This method can extract the instantaneous frequency of the received signal with high precision, and a focused SAR image can consequently be obtained. Results on simulated and real measured data are provided to illustrate the effectiveness of the novel algorithm proposed in this article. PubDate:
2023
Issue No: Vol. 16 (2023)
- Remote Sensing Image Retrieval in the Past Decade: Achievements,
Challenges, and Future Directions-
Authors:
Weixun Zhou;Haiyan Guan;Ziyu Li;Zhenfeng Shao;Mahmoud R. Delavar;
Pages: 1447 - 1473 Abstract: Remote sensing image retrieval (RSIR) aims to search and retrieve images of interest from a large remote sensing image archive, and it has remained a hot topic over the past decade. Benefiting from the advent and progress of deep learning, RSIR has been advanced by developing novel approaches, constructing new datasets, and exploring potential applications. To the best of our knowledge, a comprehensive review of RSIR achievements is lacking, including a systematic and hierarchical categorization of RSIR methods and benchmark datasets over the past decade. This article, therefore, provides a systematic survey of recently published RSIR methods and benchmarks by reviewing more than 200 papers. To be specific, in terms of image source, label, and modality, we first group the RSIR methods into hierarchical categories, each of which is reviewed in detail. Following the categorization of the RSIR methods, we list the benchmark datasets publicly available for performance evaluation and present our newly collected RSIR dataset. Moreover, some of the existing RSIR methods are selected and evaluated on the representative benchmark datasets. The results demonstrate that deep learning-based methods are currently the dominant RSIR approaches and outperform handcrafted feature-based methods by a significant margin. Finally, we discuss the main challenges of RSIR and point out some potential directions for future RSIR research. PubDate:
2023
Issue No: Vol. 16 (2023)
- 3D-CNN and Autoencoder-Based Gas Detection in Hyperspectral Images-
Authors:
Okan Bilge Özdemir;Alper Koz;
Pages: 1474 - 1482 Abstract: The detection of gas emission levels is a crucial problem for ecology and human health. Hyperspectral image analysis offers many advantages over traditional gas detection systems with its detection capability from safe distances. Observing that the existing hyperspectral gas detection methods in the thermal range neglect the fact that the captured radiance in the longwave infrared (LWIR) spectrum is better modeled as a mixture of the radiance of background and target gases, we propose a deep learning-based hyperspectral gas detection method in this article, which combines unmixing and classification. The proposed method first converts the radiance data to luminance-temperature data. Then, a 3-D convolutional neural network (CNN) and autoencoder-based network, which is specially designed for unmixing, is applied to the resulting data to acquire abundances and endmembers for each pixel. Finally, the detection is achieved by a three-layer fully connected network to detect the target gases at each pixel based on the extracted endmember spectra and abundance values. The superior performance of the proposed method with respect to the conventional hyperspectral gas detection methods using spectral angle mapper and adaptive cosine estimator is verified with LWIR hyperspectral images including methane and sulfur dioxide gases. In addition, the ablation study with respect to different combinations of the proposed structure including direct classification and unmixing methods has revealed the contribution of the proposed system. PubDate:
2023
Issue No: Vol. 16 (2023)
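The spectral angle mapper used here as a conventional baseline computes the angle between a pixel spectrum and a target gas signature; a minimal sketch (the spectra below are illustrative):

```python
import numpy as np

def spectral_angle(pixel, target):
    """Spectral Angle Mapper: angle (radians) between two spectra;
    a smaller angle means a better match to the target signature."""
    pixel = np.asarray(pixel, dtype=float)
    target = np.asarray(target, dtype=float)
    cos = np.dot(pixel, target) / (np.linalg.norm(pixel) * np.linalg.norm(target))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# A scaled copy of the target spectrum has angle ~0; an unrelated spectrum
# has a larger angle, so thresholding the angle yields a simple detector.
target = np.array([0.1, 0.4, 0.9, 0.4, 0.1])
angle_match = spectral_angle(3.0 * target, target)
angle_other = spectral_angle(np.array([0.9, 0.4, 0.1, 0.4, 0.9]), target)
```

Because the angle is invariant to spectral scaling, SAM ignores mixing proportions, which is exactly the limitation the article's unmixing-based network addresses.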
- Assessment of Spatiotemporal Characteristic of Droughts Using In Situ and
Remote Sensing-Based Drought Indices-
Authors:
Sepideh Jalayer;Alireza Sharifi;Dariush Abbasi-Moghadam;Aqil Tariq;Shujing Qin;
Pages: 1483 - 1502 Abstract: Drought has been identified as one of the most significant and complicated natural disasters, exacerbated by land degradation and climate change. Hence, monitoring drought and evaluating its spatiotemporal dynamics are essential to managing regional drought conditions and protecting the natural environment. In this study, various single remote sensing-based drought indices, including the soil moisture condition index (SMCI), precipitation condition index (PCI), temperature condition index (TCI), and vegetation condition index (VCI), and combined RS-based drought indices, including the optimized meteorological drought index (OMDI) and synthesized drought index (SDI), were used to investigate the spatiotemporal variations of meteorological and agricultural droughts between 2000 and 2021 in Iran. The in situ drought indices, including the standardized precipitation index (SPI) and standardized precipitation evapotranspiration index (SPEI) series of 1, 3, 6, 12, and 24 months, were utilized to verify the remote sensing-based drought indices and evaluate their applicability for analyzing drought conditions. The results indicated that the correlation coefficients of the in situ drought indices with the combined drought indices are higher than with the RS-based single drought indices. Generally, single-factor drought indices, including VCI, TCI, PCI, and SMCI, have specific characteristics. The PCI and SMCI have an acceptable correlation with the short-term SPI and SPEI and are more applicable to monitoring short-term drought conditions. Further, the TCI has better performance in monitoring long-term drought conditions in Iran. This research concluded that the central, eastern, and southeastern parts of Iran mainly experienced exceptional and extreme drought conditions, with the worst agricultural and meteorological droughts of the last 20 years observed in 2008 and 2021.
The results also showed that, in 2019 and 2020, most areas of Iran had higher OMDI and SDI values and the severity of the drought decreased in these years. In particular, this research provides an essential reference for reasonably choosing RS-based drought indices for monitoring meteorological and agricultural droughts from local to global scales. PubDate:
2023
Issue No: Vol. 16 (2023)
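The single-factor condition indices follow the standard Kogan-style scaling of an observable against its historical extremes; a minimal sketch of VCI and TCI (the pixel values below are illustrative, and the exact formulations used in the study may differ):

```python
def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index: 0 = worst, 100 = best observed NDVI."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst, lst_min, lst_max):
    """Temperature Condition Index: high LST (thermal stress) lowers the score."""
    return 100.0 * (lst_max - lst) / (lst_max - lst_min)

# Illustrative pixel: NDVI near its historical minimum and LST (in kelvin)
# near its historical maximum both indicate drought stress (low scores).
vci_value = vci(0.25, ndvi_min=0.20, ndvi_max=0.70)
tci_value = tci(318.0, lst_min=290.0, lst_max=320.0)
```

Combined indices such as OMDI and SDI then blend several of these scaled factors, which is why they track the in situ SPI/SPEI more closely than any single index.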
- Pyramidal Dilation Attention Convolutional Network With Active and
Self-Paced Learning for Hyperspectral Image Classification-
Authors:
Wenhui Hou;Na Chen;Jiangtao Peng;Weiwei Sun;Qian Du;
Pages: 1503 - 1518 Abstract: In recent years, deep neural networks have been widely used for hyperspectral image (HSI) classification and have shown excellent performance using numerous labeled samples. The acquisition of HSI labels is usually based on field investigation, which is expensive and time consuming. Hence, the available labels are usually limited, which affects the efficiency of deep HSI classification methods. To improve classification performance while reducing the labeling cost, this article proposes a semisupervised deep learning (DL) method for HSI classification, named pyramidal dilation attention convolutional network with active and self-paced learning (PDAC-ASPL), which integrates active learning (AL), self-paced learning (SPL), and DL into a unified framework. First, a densely connected pyramidal dilation attention convolutional network is trained with a limited number of labeled samples. Then, the most informative samples from the unlabeled set are selected by AL and their real labels are queried, while the highest confidence samples with corresponding pseudo labels are extracted by SPL. Finally, the samples from AL and SPL are added to the training set to retrain the network. Compared with some DL- and AL-based HSI classification methods, our PDAC-ASPL achieves better performance on four HSI datasets. PubDate:
2023
Issue No: Vol. 16 (2023)
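The complementary AL/SPL split can be sketched generically: AL queries the least confident unlabeled samples for real labels, while SPL keeps the most confident ones with pseudo labels (the confidence rule and toy probabilities below are illustrative assumptions, not the authors' exact criteria):

```python
import numpy as np

def select_samples(probs, n_active, n_selfpaced):
    """Split an unlabeled pool by predicted confidence (max class probability):
    the least confident samples go to active learning for human labeling;
    the most confident ones receive pseudo labels via self-paced learning."""
    confidence = probs.max(axis=1)
    order = np.argsort(confidence)
    active_idx = order[:n_active]          # query real labels for these
    sp_idx = order[::-1][:n_selfpaced]     # keep these with pseudo labels
    pseudo = probs[sp_idx].argmax(axis=1)
    return active_idx, sp_idx, pseudo

# Toy class-probability predictions for four unlabeled pixels.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90],
                  [0.51, 0.49]])
active_idx, sp_idx, pseudo = select_samples(probs, n_active=1, n_selfpaced=2)
```

Both selected subsets are then merged into the training set for the retraining round described in the abstract.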
- Research on SAR Imaging Simulation Based on Time-Domain Shooting and
Bouncing Ray Algorithm-
Authors:
Chun-lei Dong;Xiao Meng;Li-xin Guo;
Pages: 1519 - 1530 Abstract: The signal source of synthetic aperture radar (SAR) usually adopts the linear frequency modulation (LFM) signal, which exhibits the characteristics of a wide pulse. Hence, when the LFM signal is used with the time-domain shooting and bouncing ray (TDSBR) method to simulate the SAR echo signal, numerous time sampling points are generated, resulting in a huge computational effort; this makes it hard to exploit the TDSBR algorithm for simulating the SAR echo. To resolve this dilemma, a hybrid approach, the transfer function in conjunction with range frequency-domain pulse coherence, is developed, in which the transfer function is stated by the radar cross-section. The proposed methodology enhances computational efficiency by avoiding massive time sampling, so that the suggested TDSBR approach can be more conveniently applied to SAR echo simulations. Furthermore, because of the efficiency advantage of the TDSBR in the calculation of the wideband scattering field, the proposed methodology exhibits higher computational efficiency than the frequency-domain shooting and bouncing ray in SAR image simulation. Finally, the key equation of the TDSBR for the transient scattering field of dielectric targets, based on a closed-form integration formula, is analytically derived, which is the basis of SAR imaging simulation for the composite scene of ships above the sea surface. PubDate:
2023
Issue No: Vol. 16 (2023)
- BIBED-Seg: Block-in-Block Edge Detection Network for Guiding Semantic
Segmentation Task of High-Resolution Remote Sensing Images-
Authors:
Baikai Sui;Yungang Cao;Xueqin Bai;Shuang Zhang;Renzhe Wu;
Pages: 1531 - 1549 Abstract: Edge optimization of semantic segmentation results is a challenging issue in remote sensing image processing. This article proposes a semantic segmentation model guided by a block-in-block edge detection network, named BIBED-Seg. This is a two-stage semantic segmentation model, where edges are extracted first and then segmented. We make two key contributions. The first is edge detection: we present BIBED, a block-in-block edge detection network, to extract accurate boundary features. Here, edge detection with multiscale feature fusion is first realized by creating the block-in-block residual network structure and devising a multilevel loss function. Second, we add channel and spatial attention modules to the residual structure to improve the boundary positioning and detection accuracy of high-resolution remote sensing images by focusing on their channel and spatial dimensions. Finally, we evaluate our method on the International Society for Photogrammetry and Remote Sensing (ISPRS) Potsdam and Vaihingen datasets and obtain ODS F-measures of 0.6671 and 0.7432, higher than those of other excellent edge detection methods. The second contribution is two-stage segmentation. First, the proposed BIBED is individually pretrained, and subsequently, the pretrained model is introduced into the entire segmentation network to extract boundary features. In the second segmentation stage, the edge detection network is used to constrain semantic segmentation results via loss cycles and feature bootstrapping. Our best model obtains OAs of 90.2%, 87.7%, and 81.5%, and IOUs of 76.0%, 69.6%, and 61.3% on the ISPRS and WHDLD datasets, respectively. PubDate:
2023
Issue No: Vol. 16 (2023)
- Adversarial Spectral Super-Resolution for Multispectral Imagery Using
Spatial Spectral Feature Attention Module-
Authors:
Ziyu Liu;Han Zhu;Zhenzhong Chen;
Pages: 1550 - 1562 Abstract: Acquiring high-quality hyperspectral imagery with high spatial and spectral resolution plays an important role in remote sensing. Due to the limited capacity of sensors, providing high spatial and spectral resolution is still a challenging issue. Spectral super-resolution (SSR) increases the spectral dimensionality of multispectral images to achieve resolution enhancement. In this article, we propose a spectral resolution enhancement method based on the generative adversarial network framework without introducing an additional spectral response prior. In order to adaptively rescale informative features for capturing interdependencies in the spectral and spatial dimensions, a spatial spectral feature attention module is introduced. The proposed method jointly exploits the spatio-spectral distribution in the hyperspectral manifold to increase spectral resolution while maintaining spatial content consistency. Experiments are conducted on both synthetic Landsat 8 and Sentinel-2 radiance data and real coregistered Advanced Land Imager and Hyperion (MS and HS) images, which indicate the superiority of the proposed method compared to other state-of-the-art methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- Performance Analysis of Wavenumber Domain Algorithms for Highly Squinted
SAR-
Authors:
Xing Chen;Zhenyu Hou;Zhen Dong;Zhihua He;
Pages: 1563 - 1575 Abstract: Wavenumber domain algorithms have unique advantages in processing highly squinted synthetic aperture radar data. This article studies the performance of three commonly used wavenumber domain algorithms: the classical wavenumber domain (CWD) algorithm, the extended wavenumber domain (EWD) algorithm, and the squint wavenumber domain (SWD) algorithm. First, the wavenumber domain signal expressions under the zero-Doppler and acquisition-Doppler reference geometries are both derived. Second, the internal relationship between the three wavenumber domain algorithms is analyzed, and a new interpretation of that relationship, together with an interpolation strategy, is given. The analysis not only provides a deeper understanding of the three algorithms but also provides a basis for comparing them. Then, the performance of the three wavenumber domain algorithms is evaluated from the perspectives of computational complexity, image quality, and geometric position through theoretical analysis and simulation experiments. To address the problem that the range and azimuth profiles are not orthogonal, a method to calculate resolution and extract profiles is proposed. The results show that all three algorithms can obtain well-focused images if full-resolution interpolation is performed, and that the computational complexities of CWD and SWD are less than that of EWD. PubDate:
2023
Issue No: Vol. 16 (2023)
- Crop Type Classification by DESIS Hyperspectral Imagery and Machine
Learning Algorithms-
Authors:
Nizom Farmonov;Khilola Amankulova;József Szatmári;Alireza Sharifi;Dariush Abbasi-Moghadam;Seyed Mahdi Mirhoseini Nejad;László Mucsi;
Pages: 1576 - 1588 Abstract: Developments in space-based hyperspectral sensors, advanced remote sensing, and machine learning can help crop yield measurement, modelling, prediction, and crop monitoring for loss prevention and global food security. However, the precise and continuous spectral signatures important for large-area crop growth monitoring and early yield prediction with cutting-edge algorithms can only be provided via hyperspectral imaging. Therefore, this article used new-generation Deutsches Zentrum für Luft- und Raumfahrt Earth Sensing Imaging Spectrometer (DESIS) images to classify the main crop types (hybrid corn, soybean, sunflower, and winter wheat) in Mezőhegyes (southeastern Hungary). A wavelet-attention convolutional neural network (WA-CNN), random forest, and support vector machine (SVM) algorithms were utilized to automatically map the crops over the agricultural lands. The best accuracy was achieved with the WA-CNN, a feature-based deep learning algorithm, on a combination of two images, with an overall accuracy (OA) of 97.89% and user's and producer's accuracies ranging from 97% to 99%. To obtain this, factor analysis was first introduced to decrease the size of the hyperspectral image data cube, then a wavelet transform was applied to extract important features and combined with the spectral attention mechanism of the CNN to gain higher accuracy in mapping crop types. The SVM algorithm followed with an OA of 87.79%, with producer's and user's accuracies of its classes ranging from 79.62% to 96.48% and from 79.63% to 95.73%, respectively. These results demonstrate the potential of DESIS data to observe the growth of different crop types and predict the harvest volume, which is crucial for farmers, smallholders, and decision-makers. PubDate:
2023
Issue No: Vol. 16 (2023)
- Combining Luojia1-01 Nighttime Light and Points-of-Interest Data for Fine
Mapping of Population Spatialization Based on the Zonal Classification Method-
Authors:
Wei Guo;Jinyu Zhang;Xuesheng Zhao;Yongxing Li;Jinke Liu;Wenbin Sun;Deqin Fan;
Pages: 1589 - 1600 Abstract: Fine-scale population spatial distribution plays an important role in urban microcosmic research, influencing infrastructure placement, emergency evacuation management, business decisions, and urban planning. In the past, nighttime light (NTL) data were generally used to map the spatial distribution of the population at a large scale because of their low spatial resolution. The new generation of Luojia1-01 NTL data can be used for fine-scale social and economic analysis owing to its high spatial resolution and quantitative range. However, due to geometry and background noise in the data themselves, the accuracy of the original NTL data is still low. Points-of-interest (POI) data can also be used to map population spatialization, but the indicative relationship between POI and population is not clear, especially in rural and urban areas with different landscape structures. To solve the above-mentioned problems, this study proposes an improved nighttime light (INTL) index to better use the Luojia1-01 NTL data. Meanwhile, a zonal classification model based on the INTL index and impervious surface area is proposed to distinguish urban and rural areas. Compared with previous research and existing datasets, our result had the highest accuracy (R² = 0.86). This study shows that the INTL index is applicable to population spatialization research with the emergence of high-resolution and multispectral NTL satellite data. Moreover, the zonal classification model in this research can significantly improve the accuracy of population spatialization in rural areas. This study provides a possible way to use NTL and POI data in other social and economic spatialization research. PubDate:
2023
Issue No: Vol. 16 (2023)
- Fusing Ultra-Hyperspectral and High Spatial Resolution Information for
Land Cover Classification Based on AISAIBIS Sensor and Phase Camera-
Authors:
Fangfang Qu;Shuo Shi;Zhongqiu Sun;Wei Gong;Biwu Chen;Lu Xu;Bowen Chen;Xingtao Tang;
Pages: 1601 - 1612 Abstract: Hyperspectral imaging technology is widely used in vegetation, agriculture, and other fields, especially in land cover classification of complex scenes. Higher spectral resolution has become the focus of the development of hyperspectral imaging technology for classification. The airborne AISAIBIS sensor reaches an ultrahyperspectral resolution of 0.11 nm, and such ultrahyperspectral imagery shows great advantages in classification owing to its increased spectral resolution. However, its spatial resolution is limited by the imaging mechanism, which makes the accurate extraction of fine and regular objects difficult. Therefore, we propose an optimal fusion and classification strategy based on the complementary information of ultrahyperspectral and high spatial resolution images. The fusion feasibility and effectiveness were verified with various fusion methods, and a quality evaluation system was developed to assess the fusion results. Besides, a multiresolution segmentation optimization and classification evaluation scheme was proposed to comparatively analyze the effect of the optimal fusion result on improving classification accuracy. Results show that the classification accuracy of the optimally fused image reaches 88.10%, 7.11%–19.03% higher than that of the original images, which fully validates the effectiveness of the strategy proposed in this article. PubDate:
2023
Issue No: Vol. 16 (2023)
- Ionospheric Disturbances Triggered by China's Long March 2D Rocket
-
Authors:
Youkun Wang;Yibin Yao;Jian Kong;Lulu Shan;
Pages: 1613 - 1623 Abstract: Using data from 382 ground-based global navigation satellite system (GNSS) network stations in Western China, we studied and analyzed the ionospheric disturbances triggered by the “Long March” 2D rocket launch in Jiuquan, China on December 3, 2017. Compared with previous research, a higher sampling resolution for GNSS data (with a frequency of 1 Hz) was used to obtain more accurate occurrence times and propagation velocities of ionospheric disturbances. By using a method based on a quadratic function of time to fit the raw total electron content (TEC) series, a filtered TEC series was calculated using carrier observations, and a two-dimensional disturbance map was drawn. A new method, which accounts for the flight time of the rocket, was used to calculate the velocity of the shock wave. Ionospheric depletions and a shock wave were observed after the launch of the rocket. The depletion was observed within 100 to 1000 km south of the launch site along the rocket trajectory, with a maximum amplitude of ∼3.8 TEC units (TECU), reaching ∼56% of the background TEC. A shock wave of V-shaped disturbances with amplitudes of ∼0.67 TECU was detected on both sides of the rocket trajectory. The shock wave moved southeast at an average velocity of ∼1861 m/s at a location 2200 km away from the launch site. Ionospheric disturbances at distances of more than ∼3000 km from the launch site were also observed. PubDate:
2023
Issue No: Vol. 16 (2023)
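The TEC filtering step described in the abstract above, fitting a quadratic function of time to the raw TEC series and keeping the residual, can be sketched as follows. This is an illustrative reconstruction on synthetic numbers, not the authors' code; the function and variable names are assumptions.

```python
import numpy as np

def detrend_tec(t, tec, order=2):
    """Remove a low-order polynomial trend (quadratic in time, as in the
    abstract) from a raw TEC series, leaving the disturbance residual."""
    coeffs = np.polyfit(t, tec, order)   # least-squares quadratic fit
    trend = np.polyval(coeffs, t)
    return tec - trend                   # filtered TEC series

# Synthetic example: smooth quadratic background plus a wave-like disturbance
t = np.linspace(0.0, 1800.0, 1801)                  # 30 min at 1 Hz sampling
background = 20.0 + 0.002 * t - 1.0e-6 * t**2       # background TEC (TECU)
wave = 0.5 * np.sin(2.0 * np.pi * t / 300.0)        # 5-min oscillation
residual = detrend_tec(t, background + wave)        # the wave survives the fit
```

Because the background varies slowly relative to the disturbance, the quadratic fit absorbs it while the shorter-period oscillation remains in the residual.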
- Subpixel Mapping for Remote Sensing Imagery Based on Spatial Adaptive
Attraction Model and Conditional Random Fields-
Authors:
Yujia Chen;Cheng Huang;Cheng Yang;Junhuan Peng;Jun Zhang;Yuxian Wang;Zhengxue Yao;Guang Chen;Wenhua Yu;Qinghao Liu;
Pages: 1624 - 1640 Abstract: For subpixel mapping (SPM), ensuring the operational efficiency of the algorithm and mitigating the effect of abundance errors often cannot be achieved simultaneously. To solve this problem, we propose a new SPM method based on a spatial adaptive attraction model (SAAM) and conditional random fields (CRFs). First, the proposed SAAM obtains a spatial adaptive attraction value by adaptively adjusting the attraction value computed by the traditional spatial attraction model, thereby expressing the abundance constraints in SPM implicitly: the physical meaning of the abundance constraints is carried by the relative magnitude of each subpixel's attraction value. Second, the spatial adaptive attraction values of the implicitly represented abundance constraints and a local spatial smoothing prior are modeled in the CRFs, and the model makes full use of the spatial information in the label field while respecting the abundance constraints. Third, graph cuts are used to optimize the model, so the proposed SPM not only guarantees operational efficiency, but also mitigates the influence of abundance errors and reduces noise artifacts in the SPM results. Experiments on three remote sensing images show that the proposed SPM is considerably more accurate than previously available SPM methods and is the least time-consuming. This study provides a new solution for the SPM of remote-sensing images. PubDate:
2023
Issue No: Vol. 16 (2023)
- How Well Do EO-Based Food Security Warning Systems Agree? Comparison of
NDVI-Based Vegetation Anomaly Maps in West Africa-
Authors:
Agnès Bégué;Simon Madec;Louise Lemettais;Louise Leroux;Roberto Interdonato;Inbal Becker-Reshef;Brian Barker;Christina Justice;Hervé Kerdilés;Michele Meroni;
Pages: 1641 - 1653 Abstract: The GEOGLAM crop monitor for early warning is based on the integration of crop condition assessments produced by regional systems. Discrepancies between these assessments can occur and are generally attributed to the interpretation of the vegetation and climate data. The premise of this article is that other sources of discrepancy related to the data themselves must also be considered. We conducted a comparative experiment on the vegetation growth anomalies routinely produced by four operational crop monitoring systems in West Africa [FEWSNET, GIEWS, ASAP, VAM] for the 2010–2020 period. We collected a set of normalized difference vegetation index-based indicators (% mean, % median, and Z-score) and proposed original methods to analyze and compare the spatio-temporal variations of these indices using Hovmöller representations, statistics, and spatial analysis. To facilitate comparison between systems, a classification scheme based on the percentile rank values of the anomaly indicators was applied to produce 3-class alarm maps (negative, absence, and positive anomalies). Results show that, on an annual basis, the per-pixel similarity between the four systems is relatively low [24.5%–34.1%], and that VAM and ASAP are the most similar (70%). The discrepancies between products stem mainly from different preprocessing methods, especially the choice of the reference period used to calculate the anomaly. The negative alarm agreement classes show no eco-climatic zoning influence, but negative-alarm hot-spots were observed locally. The negative alarm agreement maps can be a useful tool for early warning as they synthesize the information provided by the different systems, with a confidence level. PubDate:
2023
Issue No: Vol. 16 (2023)
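The percentile-rank classification into 3-class alarm maps described in the abstract above can be sketched as below. This is a minimal illustration with hypothetical percentile cut-offs (the article's exact thresholds are not given here), not the authors' implementation.

```python
import numpy as np

def zscore_anomaly(reference_ndvi, current_ndvi):
    """Z-score anomaly of the current NDVI against a reference period."""
    mu = reference_ndvi.mean()
    sigma = reference_ndvi.std(ddof=1)
    return (current_ndvi - mu) / sigma

def classify_alarm(anomaly, reference_anomalies, low_pct=10, high_pct=90):
    """Map a continuous anomaly to a 3-class alarm (negative / absence /
    positive) using percentile ranks of a reference anomaly distribution.
    The 10th/90th percentile cut-offs are illustrative placeholders."""
    lo = np.percentile(reference_anomalies, low_pct)
    hi = np.percentile(reference_anomalies, high_pct)
    if anomaly < lo:
        return "negative"
    if anomaly > hi:
        return "positive"
    return "absence"

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, 1000)       # synthetic reference anomaly sample
label = classify_alarm(-3.0, ref)      # a strongly negative anomaly
```

Per-pixel labels from several systems can then be compared directly, which is how per-pixel similarity percentages like those quoted above are obtained.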
- Advection-Free Convolutional Neural Network for Convective Rainfall
Nowcasting-
Authors:
Jenna Ritvanen;Bent Harnist;Miguel Aldana;Terhi Mäkinen;Seppo Pulkkinen;
Pages: 1654 - 1667 Abstract: Nowcasts (i.e., short-term forecasts from 5 min to 6 h) of heavy rainfall are important for applications such as flash flood prediction. However, current precipitation nowcasting methods based on the extrapolation of radar echoes have a limited ability to predict the growth and decay of rainfall. While deep learning applications have recently shown improvement over extrapolation-based methods, they still struggle to correctly nowcast small-scale high-intensity rainfall. To address this issue, we present a novel model called the Lagrangian convolutional neural network (L-CNN) that separates the growth and decay of rainfall from motion using the advection equation. In the model, differences between consecutive rain rate fields in Lagrangian coordinates are fed into a U-Net-based CNN, known as RainNet, that was trained with the root-mean-square error loss function. This results in a better representation of rainfall temporal evolution compared to RainNet and the extrapolation-based LINDA model, which were used as reference models. On Finnish weather radar data, the L-CNN underestimates rainfall less than RainNet, as demonstrated by a greater POD (29% at 30 min at the 1 mm·h$^{-1}$ threshold) and a smaller bias (98% at 15 min). The increased ETS values over LINDA for lead times under 15 min, with maximum increases of 7% (5 mm·h$^{-1}$ threshold) and 10% (10 mm·h$^{-1}$), show that the L-CNN represents the growth and decay of heavy rainfall more accurately than LINDA. This implies that nowcasting of heavy rainfall is improved when growth and decay are predicted using a deep learning model. PubDate:
2023
Issue No: Vol. 16 (2023)
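The POD and ETS scores quoted in the abstract above are standard contingency-table verification metrics for threshold exceedance. A minimal sketch of their computation on toy data (an illustration, not the authors' evaluation code):

```python
import numpy as np

def contingency(obs, fcst, threshold):
    """2x2 contingency counts for exceedance of a rain-rate threshold."""
    o = obs >= threshold
    f = fcst >= threshold
    hits = int(np.sum(o & f))
    misses = int(np.sum(o & ~f))
    false_alarms = int(np.sum(~o & f))
    correct_neg = int(np.sum(~o & ~f))
    return hits, misses, false_alarms, correct_neg

def pod(hits, misses):
    """Probability of detection: fraction of observed events that were forecast."""
    return hits / (hits + misses)

def ets(hits, misses, false_alarms, correct_neg):
    """Equitable threat score: hits corrected for those expected by chance."""
    n = hits + misses + false_alarms + correct_neg
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

obs = np.array([0.2, 1.5, 3.0, 0.0, 2.2, 0.1])    # observed rain rates (mm/h)
fcst = np.array([0.1, 1.2, 0.5, 0.0, 2.5, 1.4])   # nowcast rain rates (mm/h)
h, m, fa, cn = contingency(obs, fcst, threshold=1.0)
```

Scores near 1 indicate skilled nowcasts; ETS discounts hits that a random forecast with the same event frequency would score.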
- Overview of the D3R Observations During the ICE-POP Field Campaign With
Emphasis on Snow Studies-
Authors:
Shashank S. Joshil;V. Chandrasekar;David B. Wolff;
Pages: 1668 - 1677 Abstract: The International Collaborative Experiment during the PyeongChang Olympic and Paralympic Winter Games 2018 took place in the PyeongChang region of South Korea. The main goal of this field campaign was to study winter precipitation in an environment with complex terrain. The NASA dual-frequency, dual-polarization, Doppler radar (D3R) was calibrated and deployed in this field campaign. The positioning error of the radar was calibrated to be within 0.1°. The D3R was deployed for more than four months and was able to capture many interesting snowfall events along with a few rain events. In this article, the deployment and performance of the D3R during the campaign are discussed. The snowfall events captured by the D3R are discussed in detail to interpret the microphysics from a radar's perspective. The reflectivity–snowfall rate relationship is derived at the Ku band, and the computed snow accumulation is in good agreement with a precipitation gauge that was deployed near the radar. The benefit of the dual-frequency ratio for identifying precipitation particle types is briefly introduced using data from a large snow event on 28 February 2018. The vertical-profile D3R data for this snow event are studied to detect the presence of pristine-oriented ice crystals in mixed hydrometeor phase conditions. Various other instruments, such as an X-band radar and disdrometers, were deployed in the campaign. The D3R data are compared with the MxPOL X-band radar, and the reflectivity values match within a couple of dB in the common volume region. PubDate:
2023
Issue No: Vol. 16 (2023)
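The reflectivity–snowfall rate relationship mentioned in the abstract above is typically a power law, Z = a·S^b, inverted to retrieve snowfall rate from radar reflectivity. A minimal sketch with placeholder coefficients (the article derives its own Ku-band coefficients, which are not reproduced here; `a` and `b` below are purely illustrative):

```python
import numpy as np

def snow_rate_from_reflectivity(z_dbz, a=56.0, b=1.2):
    """Invert a Z = a * S**b power law to get snowfall rate S (mm/h) from
    reflectivity in dBZ. Coefficients a and b are illustrative placeholders."""
    z_linear = 10.0 ** (z_dbz / 10.0)        # dBZ -> linear units (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)

def accumulate(rates_mm_h, dt_s):
    """Integrate a series of instantaneous rates into an accumulation (mm)."""
    return float(np.sum(rates_mm_h) * dt_s / 3600.0)

scan_rates = snow_rate_from_reflectivity(np.array([20.0, 25.0, 22.0]))
total = accumulate(scan_rates, dt_s=300)     # one radar scan every 5 min
```

Accumulations computed this way are what the abstract compares against the nearby precipitation gauge.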
- Statistical Downscaling of Temperature Distributions in Southwest China by
Using Terrain-Guided Attention Network-
Authors:
Guangyu Liu;Rui Zhang;Renlong Hang;Lingling Ge;Chunxiang Shi;Qingshan Liu;
Pages: 1678 - 1690 Abstract: Deep learning techniques, especially convolutional neural networks (CNNs), have dramatically boosted the performance of statistical downscaling. In this study, we propose a CNN-based 2 m air temperature downscaling model named Terrain-Guided Attention Network (TGAN), which aims at rebuilding the 2 m air temperature distribution from $0.0625^{\circ}$ to $0.01^{\circ}$ over Southwest China. More concretely, TGAN utilizes two upsampling modules to progressively reconstruct the high-resolution temperature data from the low-resolution one. Then, to better recover the spatial detail of the low-resolution temperature data, an attentive-terrain block is proposed to introduce digital terrain model (DEM) information. It aggregates the temperature data and the corresponding-scale DEM information via the attention mechanism in a multiscale manner. Ultimately, the reconstruction module is employed to obtain the high-resolution temperature data. We use the 2019 data for training, and utilize the 2018 data to verify the effectiveness of the proposed TGAN. The experimental results showed that TGAN achieved the lowest root-mean-square error (1.12$\,^{\circ}$C) when incorporating DEM data by attentive-terrain blocks in a multiscale manner, followed by incorporating DEM data in a multiscale manner (TGAN-land, 1.31$\,^{\circ}$C) and only incorporating DEM data (SRCNN-land, 1.36$\,^{\circ}$C). Meanwhile, TGAN showed a competitive performance when compared with several advanced deep-learning-based super-resolution algorithms and reconstructed the texture details of 2 m air temperature fields more clearly. In general, among various deep learning approaches, TGAN achieves better downscaling results for 2 m air temperature reconstruction and provides a practical method and guidance for the back-calculation of high-resolution historical meteorological grid data. PubDate:
2023
Issue No: Vol. 16 (2023)
- Fast and Structured Block-Term Tensor Decomposition for Hyperspectral
Unmixing-
Authors:
Meng Ding;Xiao Fu;Xi-Le Zhao;
Pages: 1691 - 1709 Abstract: The block-term tensor decomposition model with multilinear rank-$(L_{r},L_{r},1)$ terms (or the “$\mathsf{LL1}$ tensor decomposition” in short) offers a valuable alternative formulation for hyperspectral unmixing (HU), which ensures the identifiability of the endmembers/abundances in cases where classic matrix factorization (MF) approaches cannot provide such guarantees. However, the existing $\mathsf{LL1}$-tensor-decomposition-based HU algorithms use a three-factor parameterization of the tensor (i.e., the hyperspectral image cube), which causes difficulties in incorporating structural prior information arising in HU. Consequently, their algorithms often exhibit high per-iteration complexity and slow convergence. This article focuses on $\mathsf{LL1}$ tensor decomposition under structural constraints and regularization terms in HU. Our algorithm uses a two-factor reparameterization of the tensor model. Like in the MF-based approaches, the factors correspond to the endmembers and abundances in the context of HU. Thus, the proposed framework is natural to incorporate physics-motivated priors in HU. To tackle the formulated optimization problem, a two-block alternating gradient projection (GP)-based algorithm is proposed. Carefully designed projection solvers are proposed to implement the GP algorithm with a relatively low per-iteration complexity. An extrapolation-based acceleration strategy is proposed to expedite the GP algorithm. Such an extrapolated multiblock algorithm only had asymptotic convergence assurances in the literature. Our analysis shows that the algorithm converges to the vicinity of a stationary point within finite iterations, under reasonable conditions. Empirical study shows that the proposed algorithm often attains orders-of-magnitude speedup and substantial HU performance gains compared with the existing $\mathsf{LL1}$-decomposition-based HU algorithms. PubDate:
2023
Issue No: Vol. 16 (2023)
- Self-FuseNet: Data Free Unsupervised Remote Sensing Image Super-Resolution
-
Authors:
Divya Mishra;Ofer Hadar;
Pages: 1710 - 1727 Abstract: Real-world degradations deviate from ideal degradations, as most deep learning-based scenarios involve the ideal synthesis of low-resolution (LR) counterpart images by the popularly used bicubic interpolation. Moreover, supervised learning approaches rely on many high-resolution (HR) and LR image pairings to reconstruct missing information based on their association, developed through long hours of complex deep neural network training. Additionally, the trained model's generalizability on image datasets with various distributions is not guaranteed. To overcome this challenge, we propose our novel Self-FuseNet, particularly for extremely poor-resolution satellite images. The network also exhibits strong generalization performance on additional datasets (both “ideal” and “nonideal” scenarios). The network is especially suited to image datasets suffering from the following two significant limitations: 1) nonavailability of ground truth HR images; and 2) limited availability of a large unpaired dataset for deep neural network training. The benefit of the proposed model is threefold: 1) it does not require any extensive training data, either paired or unpaired, but only a single LR image without prior knowledge of its distribution; 2) it is a simple and effective model for super-resolving very poor-resolution images, saving computational resources and time; 3) using UNet, the processing of data is accelerated by the network's wide skip connections, allowing image reconstruction with fewer parameters. Rather than using an inverse approach, as is common in most deep learning scenarios, we introduce a forward approach to super-resolve exceptionally LR remote sensing images. This demonstrates its supremacy over recently proposed state-of-the-art methods for unsupervised single real-world image blind super-resolution. PubDate:
2023
Issue No: Vol. 16 (2023)
- Detection of Detached Ice-fragments at Martian Polar Scarps Using a
Convolutional Neural Network-
Authors:
Shu Su;Lida Fanara;Haifeng Xiao;Ernst Hauber;Jürgen Oberst;
Pages: 1728 - 1739 Abstract: Repeated high-resolution imaging has revealed current mass wasting in the form of ice block falls at steep scarps of Mars. However, both the accuracy and efficiency of ice-fragment detection are limited when using conventional computer vision methods, and existing deep learning methods suffer from shadow interference and indistinguishability between classes. To address these issues, we proposed a deep learning-driven change detection model that focuses on regions of interest. A convolutional neural network simultaneously analyzed bitemporal images, i.e., pre- and postdetach images. An augmented attention module was integrated in order to suppress irrelevant regions such as shadows while highlighting the detached ice-fragments. A combination of dice loss and focal loss was introduced to deal with the issue of imbalanced classes and hard, misclassified samples. Our method showed a true positive rate of 84.2% and a false discovery rate of 16.9%. Regarding the shape of the detections, the pixel-based evaluation showed a balanced accuracy of 85% and an F1 score of 73.2% for the detached ice-fragments. This last score reflects the difficulty of delineating the exact boundaries of some events, both for a human and for the machine. Compared with five state-of-the-art change detection methods, our method achieves a higher F1 score and surpasses the other methods in excluding the interference of changed shadows. Assessing the detections of the detached ice-fragments with the help of previously detected corresponding shadow changes demonstrated the capability and robustness of our proposed model. Furthermore, the good performance and quick processing speed of our model allow us to efficiently study large-scale areas, which is an important step in estimating the ongoing mass wasting and studying the evolution of the Martian polar scarps. PubDate:
2023
Issue No: Vol. 16 (2023)
- Deep Learning Approach for Classifying the Built Year and Structure of
Individual Buildings by Automatically Linking Street View Images and GIS Building Data-
Authors:
Yoshiki Ogawa;Chenbo Zhao;Takuya Oki;Shenglong Chen;Yoshihide Sekimoto;
Pages: 1740 - 1755 Abstract: The built year and structure of individual buildings are crucial factors for estimating and assessing potential earthquake and tsunami damage. Recent advances in sensing and analysis technologies allow the acquisition of high-resolution street view images (SVIs) that present new possibilities for research and development. In this study, we developed a model to estimate the built year and structure of a building using omnidirectional SVIs captured using an onboard camera. We used geographic information system (GIS) building data and SVIs to generate an annotated built-year and structure dataset by developing a method to automatically combine the GIS data with images of individual buildings cropped through object detection. Furthermore, we trained a deep learning model to classify the built year and structure of buildings using the annotated image dataset based on a deep convolutional neural network (DCNN) and a vision transformer (ViT). The results showed that SVI accurately predicts the built year and structure of individual buildings using ViT (overall accuracies for structure = 0.94 [three classes] and 0.96 [two classes] and for age = 0.68 [six classes] and 0.90 [three classes]). Compared with DCNN-based networks, the proposed Swin transformer based on ViT architectures effectively improves prediction accuracy. The results indicate that multiple high-resolution images can be obtained for individual buildings using SVI, and the proposed method is an effective approach for classifying structures and determining building age. The automatic, accurate, and large-scale mapping of the built year and structure of individual buildings can help develop specific disaster prevention measures. PubDate:
2023
Issue No: Vol. 16 (2023)
- IVIU-Net: Implicit Variable Iterative Unrolling Network for Hyperspectral
Sparse Unmixing-
Authors:
Yuantian Shao;Qichao Liu;Liang Xiao;
Pages: 1756 - 1770 Abstract: At present, an emerging technique called algorithm unrolling has attracted wide attention because it is capable of developing efficient and interpretable layers that eliminate the black-box nature of deep learning (DL). In this article, inspired by the sparse unmixing model, we propose a model-driven DL approach, namely, an implicit variable iterative unrolling network (IVIU-Net). First of all, the unmixing performance and adaptive ability of the model are enhanced by introducing learnable parameters into the sparse unmixing algorithm. Then, a dedicated spatial convolution module is integrated into the network to promote the smoothness of the latent abundance map. Finally, a comprehensive loss function with three terms, namely, average spectral angle distance, hyperspectral image reconstruction error, and spectral information divergence, is presented to train the IVIU-Net in an unsupervised way. Compared to the unmixing results of most existing data-driven DL algorithms, our network has two significant advantages: it achieves better stability instead of relying heavily on the endmember initialization results, and it has better interpretability and robustness in the unmixing procedure. Experimental results on synthetic and real data show that the proposed network outperforms the state-of-the-art in terms of better convergence, faster unmixing speed, and better accuracy. PubDate:
2023
Issue No: Vol. 16 (2023)
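The average spectral angle distance used in the IVIU-Net loss above is a standard spectral similarity measure. The sketch below is a generic numpy illustration of that term, not the authors' code:

```python
import numpy as np

def spectral_angle_distance(x, y):
    # Angle (radians) between two spectra; 0 means identical shape,
    # pi/2 means orthogonal. Insensitive to per-pixel brightness scaling.
    cos_sim = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos_sim, -1.0, 1.0)))
```

In an unmixing loss, this distance is typically averaged over all pixels of the reconstructed image.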
- Superpixel-Based Multiscale CNN Approach Toward Multiclass Object
Segmentation From UAV-Captured Aerial Images-
Authors:
Tanmay Kumar Behera;Sambit Bakshi;Michele Nappi;Pankaj Kumar Sa;
Pages: 1771 - 1784 Abstract: Unmanned aerial vehicles (UAVs) are promising remote sensors capable of reforming remote sensing applications. However, for artificial-intelligence-guided tasks, such as land cover mapping and ground-object mapping, most deep-learning-based architectures fail to extract scale-invariant features, resulting in poor accuracy. In this context, the article proposes a superpixel-aided multiscale convolutional neural network (CNN) architecture to avoid misclassification in complex urban aerial images. The proposed framework is a two-tier deep-learning-based segmentation architecture. In the first stage, a superpixel-based simple linear iterative clustering algorithm produces superpixel images with crucial contextual information. The second stage comprises a multiscale CNN architecture that uses these information-rich superpixel images to extract scale-invariant features for predicting the object class of each pixel. Two UAV-captured aerial image datasets, 1) the NITRDrone dataset and 2) the urban drone dataset (UDD), are used in the experiments. The proposed model outperforms the considered state-of-the-art methods with an intersection over union of 76.39% and 86.85% on the UDD and NITRDrone datasets, respectively. The experimental results show that the proposed architecture achieves superior accuracy in complex and challenging scenarios. PubDate:
2023
Issue No: Vol. 16 (2023)
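The intersection over union reported above (and the mIoU figures in segmentation papers generally) is computed per class and then averaged. A minimal evaluation sketch, illustrative rather than the authors' code:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    # Per-class intersection over union, averaged over classes that
    # actually appear in either the prediction or the ground truth.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

Classes absent from both maps are skipped so they do not distort the average.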
- Attribute-Guided Generative Adversarial Network With Improved Episode
Training Strategy for Few-Shot SAR Image Generation-
Authors:
Yuanshuang Sun;Yinghua Wang;Liping Hu;Yuanyuan Huang;Hongwei Liu;Siyuan Wang;Chen Zhang;
Pages: 1785 - 1801 Abstract: Deep-learning-based models usually require a large amount of data for training, which guarantees the effectiveness of the trained model. Generative models are no exception, and sufficient training data are necessary for the diversity of generated images. However, for synthetic aperture radar (SAR) images, data acquisition is expensive. Therefore, SAR image generation under a few training samples is still a challenging problem to be solved. In this article, we propose an attribute-guided generative adversarial network (AGGAN) with an improved episode training strategy for few-shot SAR image generation. First, we design the AGGAN structure, and spectral normalization is used to stabilize the training in the few-shot situation. The attribute labels of AGGAN are designed to be the category and aspect angle labels, which are essential information for SAR images. Second, an improved episode training strategy is proposed according to the characteristics of the few-shot generative task, and it can improve the quality of generated images in the few-shot situation. In addition, we explore the effectiveness of the proposed method when using different auxiliary data for training and use the Moving and Stationary Target Acquisition and Recognition benchmark dataset and a simulated SAR dataset for verification. The experimental results show that AGGAN and the proposed improved episode training strategy can generate images of better quality when compared with some existing methods, which have been verified through visual observation, image similarity measures, and recognition experiments. When applying the generated images to the 5-shot SAR image recognition problem, the average recognition accuracy can be improved by at least 4%. PubDate:
2023
Issue No: Vol. 16 (2023)
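Spectral normalization, used above to stabilize few-shot GAN training, divides each weight matrix by its largest singular value, usually estimated by power iteration. A numpy sketch of that estimator (deep learning frameworks ship this as a built-in layer wrapper; the code below is only illustrative):

```python
import numpy as np

def spectral_norm(W, n_iter=100):
    # Power iteration to estimate the largest singular value of W,
    # the quantity a spectrally normalized layer divides its weights by.
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ W @ v)
```

Dividing W by this value bounds the layer's Lipschitz constant, which is what tames the discriminator when training data are scarce.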
- Soil Moisture Retrieval From Sentinel-1 and Sentinel-2 Data Using Ensemble
Learning Over Vegetated Fields-
Authors:
Liguo Wang;Ya Gao;
Pages: 1802 - 1814 Abstract: Soil moisture (SM) is valuable basic data in climate and hydrological models and in agricultural applications. The rapid development of remote sensing technology makes it possible to monitor changes in SM at multiple spatial and temporal scales. In this article, we developed an SM retrieval method using ensemble learning combined with the Water Cloud Model (WCM), based on Sentinel-1 and Sentinel-2 with multisource datasets. First, using the WCM, the influence of vegetation cover on the backscattering coefficient was removed, where three vegetation indices (the enhanced vegetation index (EVI), the normalized difference vegetation index, and the normalized difference water index) were used for analysis and comparison. Then, combined with other multisource datasets, an SM retrieval model was established based on an ensemble learning algorithm. Here, we chose two familiar ensemble learning algorithms, random forest (RF) and adaptive boosting (AdaBoost), for analysis and comparison, using Pearson correlation significance analysis. The results revealed that the RF model performed slightly better than the AdaBoost model: its optimal mean absolute error, root-mean-square error (RMSE), and unbiased RMSE are 2.289 vol%, 2.934 vol%, and 2.934 vol%, respectively. EVI is suitable for the WCM to remove the vegetation scattering effect. This shows that it is attainable to retrieve SM from radar data with ensemble learning methods. The proposed framework maximizes the potential of the WCM, the RF model, and multisource datasets in deriving spatiotemporally continuous SM estimates, which should be valuable for the development of SM inversion. PubDate:
2023
Issue No: Vol. 16 (2023)
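The Water Cloud Model step described above subtracts the canopy's scattering contribution from the total backscatter and divides by the two-way canopy attenuation. The sketch below follows the standard WCM formulation in linear power units; the coefficients A and B are illustrative placeholders, not values from the article:

```python
import numpy as np

def wcm_remove_vegetation(sigma_total, V, theta, A, B):
    # Water Cloud Model: sigma_total = sigma_veg + t2 * sigma_soil, with
    # vegetation descriptor V (e.g., EVI) and incidence angle theta (rad).
    t2 = np.exp(-2.0 * B * V / np.cos(theta))        # two-way attenuation
    sigma_veg = A * V * np.cos(theta) * (1.0 - t2)   # canopy scattering
    return (sigma_total - sigma_veg) / t2            # bare-soil backscatter
```

The recovered bare-soil backscatter is then what an ensemble regressor such as RF maps to soil moisture.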
- Rigorous Sensor Model of Gaofen-7 Satellite Laser Altimeter Based on
Coupled Footprint Camera-
Authors:
Shaoning Li;Guo Zhang;Xiufang Fan;
Pages: 1815 - 1826 Abstract: The Gaofen-7 (GF-7) satellite system adds a footprint camera that shares the same optical path as its laser altimeter to ensure consistent spatial referencing between the laser footprint point and the obtained optical images. However, while this ensures the geometric relation between the laser altimeter and the footprint camera, it introduces additional errors between the two payloads. In this study, accuracy and error analyses of the laser altimeter and footprint camera are first carried out based on the working mode of the GF-7 satellite laser altimeter and footprint camera. A rigorous sensor model of laser geometric positioning is then proposed based on the coupled footprint camera, which establishes the geometric correlation between the laser spot on the ground and the focal plane of the footprint camera. The satellite laser altimeter simulation platform was used to analyze the various error sources in the geometric positioning of the laser altimeter, and GF-7 satellite data were used to verify the proposed geometric positioning model of the laser altimeter and footprint camera. The results show that the positioning error of the GF-7 footprint camera is less than 5 m (root-mean-square error) relative to the dual-line array image, which can provide ground control points for stereo mapping. PubDate:
2023
Issue No: Vol. 16 (2023)
- Small Maritime Target Detection Using Gradient Vector Field
Characterization of Infrared Image-
Authors:
Ping Yang;Lili Dong;Wenhai Xu;
Pages: 1827 - 1841 Abstract: Infrared small maritime target detection under strong ocean waves, a challenging task, plays a key role in maritime distress target search and rescue applications. Many methods based on directionality or gradient properties have proven to perform well for infrared images with heterogeneous scenarios. However, they tend to perform poorly when facing strong ocean wave backgrounds, mainly for the following reasons: 1) infrared images have a low signal-to-clutter ratio, with low intensity for small targets; and 2) some waves have high local contrast that may be similar to or higher than that of targets. To solve these issues, a new method based on gradient vector field characterization (GVFC) of infrared images is proposed. First, we construct the gradient vector field and coarsely extract suspected targets. Then, a gradient vector distribution measure (GVDM) is presented, which comprehensively integrates a synergistic homogeneity test, based on the Kolmogorov–Smirnov test with the absolute difference standard deviation for the gradient direction angle, and regression analysis for the gradient modulus. The proposed GVDM takes advantage of the pixel-level gradient distribution property to further filter the suspected targets. Moreover, gradient modulus horizontal local dissimilarity is proposed to measure the diversity of gradient modulus in the horizontal direction between targets and waves, so as to enhance target saliency and suppress residual clutter simultaneously, achieving preferable performance. Finally, a simple adaptive threshold is applied to confirm targets. Extensive experiments implemented on infrared maritime images with strong ocean waves demonstrate that the proposed method is superior to the state-of-the-art methods with respect to robustness and detection accuracy. PubDate:
2023
Issue No: Vol. 16 (2023)
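The Kolmogorov–Smirnov test inside the GVDM above compares two samples through the maximum gap between their empirical CDFs. A minimal two-sample statistic as a sketch (the paper's full homogeneity test additionally uses the absolute difference standard deviation of gradient direction angles):

```python
import numpy as np

def ks_statistic(a, b):
    # Two-sample KS statistic: the largest vertical distance between
    # the empirical CDFs of samples a and b.
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

A value near 0 indicates the two gradient-angle samples are distributionally homogeneous; a value near 1 indicates they are disjoint.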
- Multilayer Ionospheric Model Constrained by Physical Prior Based on GNSS
Stations-
Authors:
Yun Sui;Haiyang Fu;Denghui Wang;Feng Xu;Nan Zhi;Shaojun Feng;Jin Cheng;Ya-Qiu Jin;
Pages: 1842 - 1857 Abstract: Accurate modeling of the ionosphere plays an important role in global navigation satellite system (GNSS) positioning. The traditional multilayer VTEC model without a prior has been used for modeling the ionospheric delay error. However, it assumes that the electron density of the ionosphere is compressed into multiple thin layers at fixed heights, failing to capture the physics of the ionosphere. In this article, a data enhancement method using virtual observations is proposed to build a constrained multilayer VTEC model that captures physical features from empirical ionospheric models. Methods for extracting physical knowledge have been developed from the prior VTEC based on principal component analysis and from the model coefficients based on the EBF. The constrained multilayer modeling has been verified on simulated and real GNSS data in Yunnan, China, collected from ground-based GNSS stations by Qianxun on November 3, 2021. The receiver DCB error estimated by the multilayer model with the prior constraint is significantly lower than that of the single-layer model and the traditional multilayer model. The experimental test shows that the constrained multilayer model achieves an accuracy of 0.5 TECU for the independent reference station. The dSTEC values of the proposed two multilayer models are significantly lower than those of the single-layer model for low elevation angles, and the RMSE of dSTEC is reduced by 63% with a cutoff elevation angle of 10°. The spatial distribution of the multilayer VTEC model is consistent with the tomography model, verifying its vertical feature-capturing capability. Compared with the undifferenced and uncombined precise point positioning without an ionospheric constraint, the multilayer-constrained model based on the test data improves the convergence time by approximately 36.55% and 18.78% in the horizontal (H) and up (U) directions, respectively. 
These results demonstrate that the proposed multilayer models not only improve ionospheric delay estimation precision but also can obtain the VTEC distribution capturing the physical characteristics of the ionosphere. The proposed multilayer models may be valuable for the ionospheric delay modeling of satellite navigation systems under harsh variable ionospheric conditions. PubDate:
2023
Issue No: Vol. 16 (2023)
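Thin-layer VTEC models such as those discussed above relate slant and vertical TEC through a mapping function evaluated at the ionospheric pierce point. A sketch of the common single-shell form; the 350 km shell height and the Earth radius below are illustrative defaults, not values from the article:

```python
import numpy as np

def stec_from_vtec(vtec, elev_deg, h_km=350.0, R_km=6371.0):
    # Single-shell mapping: project vertical TEC to the slant path using
    # the zenith angle at the ionospheric pierce point.
    z = np.radians(90.0 - elev_deg)                  # zenith angle at receiver
    sin_zp = R_km / (R_km + h_km) * np.sin(z)        # zenith angle at shell
    return vtec / np.sqrt(1.0 - sin_zp ** 2)
```

At low elevations the mapping factor grows quickly, which is why the dSTEC comparison above focuses on low elevation angles.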
- National Scale Land Cover Classification Using the Semiautomatic
High-Quality Reference Sample Generation (HRSG) Method and an Adaptive Supervised Classification Scheme-
Authors:
Amin Naboureh;Ainong Li;Jinhu Bian;Guangbin Lei;
Pages: 1858 - 1870 Abstract: The advent of new high-performance cloud computing platforms [e.g., Google Earth Engine (GEE)] and freely available satellite data provides a great opportunity for land cover (LC) mapping over large-scale areas. However, the shortage of reliable and sufficient reference samples still hinders large-scale LC classification. Here, selecting Turkey as the case study, we presented a semiautomatic high-quality reference sample generation (HRSG) method using the publicly available scientific LC products and the linear spectral unmixing analysis to generate high-quality ground samples for the years 1995 and 2020 within the GEE platform. Furthermore, we developed an adaptive random forest classification scheme based on Köppen–Geiger climate zone classification system. Our rationale was related to the fact that large-scale study areas often contain multiple climate zones where the spectral signature of the same LC class may vary within different climate zones that can lead to a poor LC classification accuracy. To have a robust assessment, the generated LC maps were evaluated against independent test datasets. In regard to the proposed sample generation method, it was observed that HRSG can generate high-quality samples independent of the characteristics of scientific LC products. The high overall accuracy of 92% for 2020 and 90% for 1995 and satisfactory results for producer's accuracy (ranging between 83.4% and 99.3%) and user's accuracy (ranging between 86.1% and 99.7%) of nine LC classes demonstrated the effectiveness of the proposed framework. The presented methodologies can be incorporated into future studies related to large-scale LC mapping and LC change monitoring studies. PubDate:
2023
Issue No: Vol. 16 (2023)
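The linear spectral unmixing analysis used above for reference-sample screening models each pixel as a mixture of endmember spectra. A minimal least-squares sketch with a nonnegativity clip and sum-to-one renormalization; the authors' exact formulation may differ:

```python
import numpy as np

def unmix(pixel, endmembers):
    # Solve pixel ≈ endmembers @ a in the least-squares sense
    # (endmembers: bands x n_endmembers), then clip negatives and
    # renormalize so abundances sum to one.
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    a = np.clip(a, 0.0, None)
    return a / a.sum()
```

A pixel whose largest abundance is close to 1 is nearly pure, which is the kind of criterion a sample-quality screen can use.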
- Hyperspectral Image Classification Based on Unsupervised Regularization
-
Authors:
Jian Ji;Shuiqiao Liu;Fangrong Zhang;Xianfu Liao;Shuzhen Wang;Junru Liao;
Pages: 1871 - 1882 Abstract: Due to the powerful feature expression ability of deep learning and its end-to-end nonlinear mapping relationship, deep-learning-based methods have become the mainstream for hyperspectral image (HSI) classification tasks. However, the accuracy of deep learning methods greatly depends on the use of a large number of labeled samples to train the model. HSIs have few labeled samples and unbalanced categories, which make deep models prone to overfitting and seriously affect the classification accuracy. Therefore, how to alleviate the overfitting caused by small samples in deep-learning-based classification remains an open problem. Considering that it is relatively easy to obtain a large number of unlabeled samples in the field of remote sensing, making full use of the unsupervised information learned from unlabeled data can regularize the supervised classification model, which can effectively alleviate the overfitting caused by the small-sample problem. In the supervised training process, unsupervised information from the overall distribution of the samples is introduced to guide the regularization of the model, so as to realize effective classification when only a small number of labeled samples is available. Experimental results demonstrate the effectiveness of the proposed method for HSI classification with few training samples. PubDate:
2023
Issue No: Vol. 16 (2023)
- Triple Collocation Analysis and In Situ Validation of the CYGNSS Soil
Moisture Product-
Authors:
Xiaodong Deng;Luyao Zhu;Hongquan Wang;XianYun Zhang;Cheng Tong;Sinan Li;Ke Wang;
Pages: 1883 - 1899 Abstract: The Cyclone Global Navigation Satellite System (CYGNSS) soil moisture (SM) product is characterized by high temporal resolution, but the relative strengths and weaknesses of this new product are unknown. In this article, we analyze the performance of the CYGNSS SM product across varied land covers and climates, using triple collocation (TC) analysis and in situ validation. The Soil Moisture Active Passive, Advanced Microwave Scanning Radiometer 2 Land Parameter Retrieval Model, and European Space Agency Climate Change Initiative Active SM products were used as references as well as data alternatives to calculate the TC-based standard deviation (SDTC) and correlation (RTC), and the in situ validation Pearson's correlation coefficient (R) and unbiased root-mean-square error (ubRMSE). The TC analysis indicated that CYGNSS had a relatively low median SDTC of 0.024 m3/m3 and an RTC of 0.419. Validation based on 251 in situ SM stations showed that CYGNSS obtained a relatively low median ubRMSE of 0.057 m3/m3 along with a low median R of 0.414. Both the interproduct TC comparisons and the in situ validations revealed that the CYGNSS product is characterized by a small SDTC and ubRMSE but performs poorly in capturing SM temporal variability. Additionally, the degradation in capturing SM temporal variability occurred over barren areas with arid/semiarid climates, including Northern Africa, the Arabian Peninsula, and Central Australia, and over forested regions with temperate/tropical climates, including eastern South America, the Indo-China Peninsula, and Southeastern China. This suggests that capturing SM temporal variations over barren and forested regions is a key priority for improving CYGNSS SM algorithms. PubDate:
2023
Issue No: Vol. 16 (2023)
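Triple collocation, as used above, estimates each product's random error from the covariances of three collocated series whose errors are assumed mutually independent. A sketch of the covariance-notation estimator for the first product (one common TC formulation; variable names are illustrative):

```python
import numpy as np

def tc_error_std(x, y, z):
    # Error SD of x: Var(x) minus the common signal variance estimated
    # from the cross-covariances, under independent zero-mean errors.
    C = np.cov(np.vstack([x, y, z]))
    err_var = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    return float(np.sqrt(max(err_var, 0.0)))
```

Cycling the roles of x, y, and z yields an error estimate for each of the three products without needing ground truth.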
- A Wavelet-Driven Subspace Basis Learning Network for High-Resolution
Synthetic Aperture Radar Image Classification-
Authors:
Kang Ni;Mingliang Zhai;Qianqian Wu;Minrui Zou;Peng Wang;
Pages: 1900 - 1913 Abstract: The feature learning strategy of convolutional neural networks learns deep spatial features from high-resolution (HR) synthetic aperture radar (SAR) images while ignoring the speckle noise inherent in the SAR imaging mechanism. In the feature learning module, noise reduction by feature-adaptive projection, guided by a powerful embedded wavelet feature reconstruction mechanism, can effectively learn the deep feature statistics. In this article, we present a wavelet-driven subspace basis learning network (WDSBLN), following an encoder–decoder architecture, for HR SAR image classification. The powerful wavelet module, including wavelet decomposition and reconstruction, is employed to preserve the structures of learned features under speckle noise. Specifically, a compact second-order feature enhancement mechanism is designed to improve the contour and edge information of low-frequency components in the feature decomposition stage, and a local feature attention module based on the point-wise convolutional layer is adopted to aggregate the contextual information of the local channel and preserve detail information in the high-frequency components. Then, the reconstructed feature map is employed as a guided standard in the subspace basis learning (SBL) module. The SBL module, including basis generation (generating the subspace basis vectors) and subspace projection (transforming deep feature maps into a signal subspace), maintains the local structure of HR SAR image patches and acquires robust feature statistics. We conduct evaluations on three real HR SAR image classification datasets, achieving superior performance compared to other related networks. PubDate:
2023
Issue No: Vol. 16 (2023)
- Fine-Grained Ship Detection in High-Resolution Satellite Images With
Shape-Aware Feature Learning-
Authors:
Bo Guo;Ruixiang Zhang;Haowen Guo;Wen Yang;Huai Yu;Peng Zhang;Tongyuan Zou;
Pages: 1914 - 1926 Abstract: Fine-grained ship detection is an important task in high-resolution satellite remote sensing applications. However, large aspect ratios and severe category imbalance make fine-grained ship detection a challenging problem. Current methods usually extract square-like features that do not work well to detect ships with large aspect ratios, and the misalignments in feature representation will severely degrade the performance of ship localization and classification. To tackle this, we propose a shape-aware feature learning method to mitigate the misalignments during feature extraction. Furthermore, for the issue of category imbalance, we design a shape-aware instance switching to balance the quantity distribution of ships in different categories, which can greatly improve the network's learning ability for rare instances. To verify the effectiveness of the proposed method, we contribute a multicategory ship detection dataset (MCSD) that contains 4000 images carefully labeled with oriented bounding boxes, including 16 types of ship objects and nearly 18 000 instances. We conduct experiments on our MCSD and ShipRSImageNet, and extensive experimental results demonstrate the superiority of the proposed method over several state-of-the-art methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- Hyperspectral Image Band Selection Based on CNN Embedded GA (CNNeGA)
-
Authors:
Mohammad Esmaeili;Dariush Abbasi-Moghadam;Alireza Sharifi;Aqil Tariq;Qingting Li;
Pages: 1927 - 1950 Abstract: Hyperspectral images (HSIs) are a powerful source of reliable data in various remote sensing applications. However, due to the large number of bands, HSIs contain redundant information, and methods are often used to reduce the number of spectral bands. Band selection (BS) is used as a preprocessing solution to reduce data volume, increase processing speed, and improve methodological accuracy. However, most conventional BS approaches are unable to fully explain the interaction between spectral bands and to evaluate the representativeness and redundancy of the selected band subset. This study examines a supervised BS method that allows the selection of the required number of bands: a deep network with 3D-convolutional layers embedded in a genetic algorithm (GA), where the GA uses the embedded 3D-CNN (CNNeGA) as its fitness function. The GA also considers a parent check box; the parent check box (parent subbands) is designed to make the genetic operators more effective. In addition, the effectiveness of adding an attention layer to the 3D-CNN and of converting this model to a spiking neural network has been investigated in terms of accuracy and complexity over time. The evaluation of the proposed method and the obtained results are satisfactory: accuracy improved by between 6% and 21%, and accuracy between 90% and 99% has been obtained in each evaluation mode. PubDate:
2023
Issue No: Vol. 16 (2023)
- TCIANet: Transformer-Based Context Information Aggregation Network for
Remote Sensing Image Change Detection-
Authors:
Xintao Xu;Jinjiang Li;Zheng Chen;
Pages: 1951 - 1971 Abstract: Change detection based on remote sensing data is an important means of detecting Earth surface changes. With the development of deep learning, convolutional neural networks have excelled in the field of change detection. However, existing neural network models are susceptible to external factors in the change detection process, leading to pseudo changes and missed detections in the results. In order to better achieve the change detection effect and improve the ability to discriminate pseudo changes, this article proposes a new method, namely, a transformer-based context information aggregation network for remote sensing image change detection. First, we use a filter-based visual tokenizer to segment each temporal feature map into multiple visual semantic tokens. Second, the addition of the progressive sampling vision transformer not only effectively excludes the interference of irrelevant changes, but also uses the transformer encoder to obtain compact spatiotemporal context information in the token set. Then, the tokens containing rich semantic information are fed into the pixel space, and the transformer decoder is used to acquire pixel-level features. In addition, we use the feature fusion module to fuse low-level semantic feature information to complete the extraction of coarse contour information of the changed region. Then, the semantic relationships between object regions and contours are captured by the contour-graph reasoning module to obtain feature maps with complete edge information. Finally, the prediction model is used to discriminate the change of feature information and generate the final change map. Numerous experimental results show that our method has clear advantages in visual effect and quantitative evaluation over other methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Feature-Map-Based Method for Explaining the Performance Degradation of
Ship Detection Networks-
Authors:
Peng Jia;Xiaowei He;Bo Wang;Jun Li;Qinghong Sheng;Guo Zhang;
Pages: 1972 - 1984 Abstract: The unknowability of their inner workings limits the magnitude of performance improvement of ship target detection networks in synthetic aperture radar (SAR) images under Gaussian noise. However, none of the existing interpretation methods explain how networks change under noise. The feature map can visually reflect the changes in image delivery in the network, and some metrics can quantitatively characterize the degree of network performance degradation in a noise environment. In this article, we therefore propose a comprehensive analysis method that integrates texture and brightness features of the network's internal feature maps to clarify how target features change under Gaussian noise. First, we analyzed the degradation of three target detection networks under different levels of Gaussian noise; then, the feature maps of four convolution layers were sampled and visualized for qualitative analysis; finally, texture and brightness features were extracted to quantitatively characterize the feature changes. We experimentally validated the method on the publicly available SSDD radar dataset. The networks were extremely sensitive to Gaussian noise, and the mean Average Precision decreased by up to 96.3%. The angular second moment of the feature map could drop by 59.10% and its entropy texture feature value could rise by 97.81%, while the brightness value could increase by up to 100.92%. This indicates that noise changes the structure of feature maps and reduces the amount of effective information. PubDate:
2023
Issue No: Vol. 16 (2023)
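The angular second moment and entropy quoted above are classical Haralick statistics of a normalized gray-level co-occurrence matrix; a minimal sketch of the two measures (the authors additionally extract brightness features):

```python
import numpy as np

def asm_and_entropy(glcm):
    # Normalize co-occurrence counts to probabilities, then compute the
    # angular second moment (uniformity) and Shannon entropy in bits.
    p = glcm / glcm.sum()
    asm = float(np.sum(p ** 2))
    nz = p[p > 0]
    entropy = float(-np.sum(nz * np.log2(nz)))
    return asm, entropy
```

A noisy feature map spreads co-occurrence mass across more cells, which lowers the ASM and raises the entropy, matching the directions of change reported above.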
- Anomaly Detection Based on Tree Topology for Hyperspectral Images
-
Authors:
Xiaotong Sun;Bing Zhang;Lina Zhuang;Hongmin Gao;Xu Sun;Li Ni;
Pages: 1985 - 2008 Abstract: As one of the most important research and application directions in hyperspectral remote sensing, anomaly detection (AD) aims to locate objects of interest within a specific scene by exploiting spectral feature differences between different types of land cover without any prior information. Most traditional AD algorithms are model-driven and describe hyperspectral data with specific assumptions, which cannot combat the distributional complexity of land covers in real scenes, resulting in a decrease in detection performance. To overcome the limitations of traditional algorithms, a novel tree topology based anomaly detection (TTAD) method for hyperspectral images (HSIs) is proposed in this article. TTAD departs from the single analytical mode based on specific assumptions and directly parses the HSI data itself. It makes full use of the “few and different” characteristics of anomalous data points, which are sparsely distributed and far away from high-density populations. On this basis, topology, a powerful mathematical tool that successfully handles multiple types of data mining tasks, is applied to AD to ensure sufficient feature extraction of land covers. First, the redistribution of the HSI data is realized by constructing a tree-type topological space to improve the separability between anomalies and backgrounds. Then, topologically related subsets in this space are utilized to evaluate the abnormality degree of each sample in a dataset, and detection results for the HSI are output accordingly. Abandoning traditional modeling in favor of mining the data characteristics of the HSI itself enables TTAD to better adapt to different complex scenes and locate anomalies with high precision. Experimental results on a large number of benchmark datasets demonstrate that TTAD achieves excellent detection results with considerable computational efficiency. 
The proposed method exhibits superior comprehensive performance and is promising t- be popularized in practical applications. PubDate:
2023
Issue No: Vol. 16 (2023)
- Dual Collaborative Constraints Regularized Low-Rank and Sparse Representation via Robust Dictionaries Construction for Hyperspectral Anomaly Detection
Authors:
Sheng Lin;Min Zhang;Xi Cheng;Kexue Zhou;Shaobo Zhao;Hai Wang;
Pages: 2009 - 2024 Abstract: The low rank and sparse representation (LRSR) technique has attracted increasing attention for hyperspectral anomaly detection (HAD). Although a large quantity of research based on LRSR for HAD has been proposed, the detection performance is still limited, due to unsatisfactory dictionary construction and insufficient consideration of global and local characteristics. To tackle the above-mentioned concerns, a novel HAD method, termed dual collaborative constraints regularized low-rank and sparse representation via robust dictionaries construction, is proposed in this article. Concretely, a robust dictionary construction strategy, which thoroughly excavates the potential of the density estimation model and local outlier factor, is proposed to yield pure and representative dictionary atoms. To fully exploit the global and local characteristics of hyperspectral images, dual collaborative constraints corresponding to the background and anomaly components are imposed on the LRSR model. Notably, two weighted matrices are further exerted on the representation coefficients to improve the effect of the collaborative constraints, considering the fact that surrounding pixels similar to the testing pixel should be given a large weight, and otherwise the weight is expected to be small. In this way, the background and anomaly components can be well modeled. Additionally, a nonlinear transformation operation, which combines the output of the density estimation model and local outlier factor with the detection result derived from the LRSR model, is developed to suppress the background. The experiments conducted on one simulated dataset and three real datasets demonstrate the superiority of the proposed method compared with four typical methods and four state-of-the-art methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- MISNet: Multiscale Cross-Layer Interactive and Similarity Refinement Network for Scene Parsing of Aerial Images
Authors:
Wujie Zhou;Xiaomin Fan;Lu Yu;Jingsheng Lei;
Pages: 2025 - 2034 Abstract: Although progress has been made in multisource data scene parsing of natural scene images, extracting complex backgrounds from aerial images of various types and presenting the image at different scales remain challenging. Various factors in high-resolution aerial images (HRAIs), such as imaging blur, background clutter, object shadow, and high resolution, substantially reduce the integrity and accuracy of object segmentation. By applying multisource data fusion, as in scene parsing of natural scene images, we can solve the aforementioned problems through the integration of auxiliary data into HRAIs. To this end, we propose a multiscale cross-layer interactive and similarity refinement network (MISNet) for scene parsing of HRAIs. First, in a feature fusion optimization module, we extract, filter, and optimize multisource features and further guide and optimize the features using a feature guidance module. Second, a multiscale context aggregation module increases the receptive field, captures semantic information, and extracts rich multiscale background features. Third, a dense decoding module fuses the global guidance information and high-level fused features. We also propose a joint learning method based on feature similarity and a joint learning module to obtain deep multilevel information, enhance feature generation, and fuse multiscale and global features to enhance network representation for accurate scene parsing of HRAIs. Comprehensive experiments on two benchmark HRAIs datasets indicate that our proposed MISNet is qualitatively and quantitatively superior to similar state-of-the-art models. PubDate:
2023
Issue No: Vol. 16 (2023)
- Glacier Retreating Analysis on the Southeastern Tibetan Plateau via Multisource Remote Sensing Data
Authors:
Yao Xiao;Chang-Qing Ke;Yu Cai;Xiaoyi Shen;Zifei Wang;Vahid Nourani;Drolma Lhakpa;
Pages: 2035 - 2049 Abstract: Accurate multitemporal glacier change investigations and analyses are lacking on the southeastern Tibetan Plateau (SETP). A combination of photogrammetry, optical remote sensing, and synthetic aperture radar datasets can accurately identify large-scale glaciers. In this article, glaciers in three periods on the SETP (1970s, 2000, and 2020) were identified from multisource remote sensing data based on a deep learning method and manual visual interpretation, yielding multitemporal glacial inventory data from relatively high-frequency source imagery. Totals of 11648, 12993, and 11875 glaciers were identified in the 1970s, 2000, and 2020, with total areas of 13372.08 km², 11692.31 km², and 10612.94 km², respectively. The general distribution of SETP glaciers was identified to be typical of alpine glaciers dominated by small-sized glaciers. The average elevation of the glaciers was approximately 5000 m; the slopes were mostly lower than 40°, and the main aspect was southeast, followed by south and southwest. The glaciers retreated from the 1970s to 2020, and a total glacier area of approximately 2759.14 km² was lost during this time, with an average annual melting rate of 0.45% yr⁻¹. Rising summer temperatures may be the driving force behind the continuous decline in the glacier area. Overall, the results obtained in this article showed relatively low uncertainty in the identification of glaciers compared to some previous studies. The results can provide accurate glacier information for glacier monitoring and modeling studies on the SETP. PubDate:
2023
Issue No: Vol. 16 (2023)
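As a quick arithmetic check on the glacier abstract above, the reported average annual melting rate of 0.45% yr⁻¹ is consistent with the reported total areas if the span is taken as roughly 46 years; the mid-1970s start year is an assumption, since the abstract only says "1970s".

```python
# Sanity check of the glacier-retreat figures from the abstract above.
# The 46-year span (mid-1970s to 2020) is an assumption.
area_1970s = 13372.08   # total glacier area in the 1970s, km^2
area_2020 = 10612.94    # total glacier area in 2020, km^2
years = 46

area_lost = area_1970s - area_2020           # ~2759.14 km^2, as reported
annual_rate = area_lost / area_1970s / years # fraction of initial area lost per year

print(f"area lost: {area_lost:.2f} km^2")
print(f"annual melting rate: {annual_rate * 100:.2f}% per year")
```

With these inputs the computed rate rounds to the 0.45% yr⁻¹ quoted in the abstract, which supports the assumed time span.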
- Infrared Small Target Detection Based on Singularity Analysis and Constrained Random Walker
Authors:
Xinpeng Zhang;Zhixia Yang;Fan Shi;Yanhong Yang;Meng Zhao;
Pages: 2050 - 2064 Abstract: Effective infrared small target detection is still challenging due to small target sizes and clutter in the background. Unfortunately, many advanced methods do not perform well in preserving and detecting multiscale objects in complex scenes. We propose an infrared small target detection method that suppresses the background and adapts to small targets of different sizes. Based on singularity analysis in the facet model, we propose a multiderivative descriptor to enhance the targets and suppress various clutter in the dual derivative channels. In the first-order derivative channel, we design four facet kernels with different directions to enhance and preserve the isotropic small targets and suppress the block clutter. In the second-order derivative channel, we use the facet kernel to enhance the center pixels of targets and suppress the band clutter. To adapt to targets of various sizes, we propose a constrained random walker technique, including an adaptive matching algorithm that extracts the local regions of each candidate adaptively based on constraints of size and shape. The experimental results demonstrate that the proposed method can accurately detect multiscale small targets in complex scenes, resulting in better detection performance than the state-of-the-art methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- Performances of Atmospheric Correction Processors for Sentinel-2 MSI Imagery Over Typical Lakes Across China
Authors:
Sijia Li;Kaishan Song;Yong Li;Ge Liu;Zhidan Wen;Yingxin Shang;Lili Lyu;Chong Fang;
Pages: 2065 - 2078 Abstract: The MultiSpectral Instrument (MSI) on board the Sentinel-2 satellites offers a powerful tool for observing the biogeochemical parameters of inland waters on a large scale. The proper use of atmospheric correction processors is essential for acquiring accurate satellite remote-sensing reflectance and downstream products. Therefore, we compared the performances of typical atmospheric correction processors, namely the Sen2Cor, C2RCC-nets, C2RCC-C2X, Acolite, iCOR, Polymer, SeaDas/l2gen, and 6S processors, for MSI imagery over lake groups (N = 296) across China collected from 2016 to 2020. Linear fitting between corrected reflectance and in situ spectral measurements was used to assess performance; for a single lake, we additionally evaluated the performance of the atmospheric correction processors on the typical Chagan Lake in 2021. For large-scale lake groups with different water quality backgrounds, the SeaDas/l2gen and C2RCC processors performed best for all band match-ups, and the C2RCC processor had the smallest errors. The SeaDas/l2gen processor works well for the signal bands (490, 560, 665, 704, and 740 nm), followed by the signal bands (560, 665, and 704 nm) of the C2RCC processor. For large-scale observations, this study revealed that Sentinel-2 MSI imagery processed with the C2RCC processor can be used to monitor aquatic systems with high-frequency investigations. For the signal band, the SeaDas/l2gen processor was used to select potential match-ups for the availability of MSI data related to the empirical models of processors. Our results may help satellite users select appropriate atmospheric correction processors for large-scale lake observations. PubDate:
2023
Issue No: Vol. 16 (2023)
- AMIO-Net: An Attention-Based Multiscale Input–Output Network for Building Change Detection in High-Resolution Remote Sensing Images
Authors:
Wei Gao;Yu Sun;Xianwei Han;Yimin Zhang;Lei Zhang;Yunliang Hu;
Pages: 2079 - 2093 Abstract: Building change detection (CD) from remote sensing images (RSI) has great significance in exploring the utilization of land resources and determining building damage after a disaster. This article proposes an attention-based multiscale input–output network, named AMIO-Net, for building CD in high-resolution RSI. It is able to overcome some drawbacks of existing CD methods, such as insufficient utilization of the information (details of building edges) of original images and poor detection of small targets (small-scale buildings or small-area changed buildings that are disturbed by other buildings). In AMIO-Net, the input image is scaled down to different sizes, and convolutions are performed to extract features. Then, the feature maps are fed into the encoding stage so that the network can fully utilize the feature information (FI) of the original image. More importantly, we design two attention mechanism modules: the pyramid pooling attention module (PPAM) and the Siamese attention mechanism module (SAMM). PPAM combines a pyramid pooling module and an attention mechanism to fully consider the global information and focus on the FI of changed pixels in the image. The input of SAMM is the parallel multiscale output diagram of the decoding portion and deep feature maps of the network, so that AMIO-Net can utilize the global contextual semantic FI and strengthen its detection ability for small targets. Experiments on three datasets show that the proposed method achieves higher detection accuracy and F1 score compared with the state-of-the-art methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- ResAt-UNet: A U-Shaped Network Using ResNet and Attention Module for Image Segmentation of Urban Buildings
Authors:
Zhiyong Fan;Yu Liu;Min Xia;Jianmin Hou;Fei Yan;Qiang Zang;
Pages: 2094 - 2111 Abstract: Architectural image segmentation refers to the extraction of architectural objects from remote sensing images. At present, most neural networks ignore the relationship between feature information, and there are problems such as model overfitting and gradient explosion. Thus, this article proposes an improved UNet based on ResNet34 and an attention module (ResAt-UNet) to solve these problems. The algorithm adds a two-layer residual structure (BasicBlock) and a regional enhancement attention mechanism (Space Enhancement Area Enhancement, SEAE) to the original framework of UNet, which enhances the network depth, improves the fitting performance, and extracts small objects more accurately. The experimental results show that the network achieves an MIoU of 78.81% on the Massachusetts dataset, and the newly developed model outperforms UNet in both quantitative and qualitative aspects. PubDate:
2023
Issue No: Vol. 16 (2023)
- OBBStacking: An Ensemble Method for Remote Sensing Object Detection
Authors:
Haoning Lin;Changhao Sun;Yunpeng Liu;
Pages: 2112 - 2120 Abstract: Ensemble methods are a reliable way to combine several models to achieve superior performance. However, the application of ensemble methods in the remote sensing object detection scenario has been mostly overlooked. Two problems arise. First, one unique characteristic of remote sensing object detection is the oriented bounding boxes (OBB) of the objects, and the fusion of multiple OBBs requires further research attention. Second, the widely used deep learning object detectors provide a score for each detected object as an indicator of confidence, but how to use these indicators effectively in an ensemble method remains a problem. To address these problems, this article proposes OBBStacking, an ensemble method that is compatible with OBBs and combines the detection results in a learned fashion. This ensemble method helped take first place in the Fine-Grained Object Recognition in High-Resolution Optical Images challenge track of the 2021 Gaofen Challenge on Automated High-Resolution Earth Observation Image Interpretation. Experiments on the DOTA and FAIR1M datasets demonstrate the improved performance of OBBStacking, and the features of OBBStacking are analyzed. PubDate:
2023
Issue No: Vol. 16 (2023)
- Improving GNSS-R Ocean Wind Speed Retrieval for the BF-1 Mission Using Satellite Platform Attitude Measurements
Authors:
Chenxin Chen;Xiaoyu Wang;Zhao Bian;Haoyun Wei;Dongdong Fan;Zhaoguang Bai;
Pages: 2121 - 2133 Abstract: The receive antenna gain is needed to accurately calibrate the normalized bistatic radar cross section measured by the BF-1 mission, which is a global navigation satellite system reflectometry (GNSS-R) constellation of two microsatellites and the first Chinese GNSS-R satellite mission. The instability of the satellite platform is the main cause of receive antenna gain errors. To obtain a high-precision gain value, a calibration method that remaps the ocean surface detection location to the receive antenna pattern using satellite platform attitude measurements is proposed in this article. Thirty-two orbits of delay Doppler map data, which were greatly disturbed by the attitude, are selected to test the effectiveness of the proposed algorithm. The accuracy of wind speed retrieval is analyzed, and the results show that the calibration algorithm is effective in reducing the wind speed retrieval error. Compared with the uncalibrated data, the data subjected to the calibration algorithm show a significant improvement of 19.33% in correlation coefficient and average decreases of 30.91% and 42.57% in root-mean-square error and mean bias error, respectively. Moreover, the comparison highlights that the influence of satellite platform attitude disturbance on wind speed retrieval is abated significantly. The proposed approach can effectively improve the quality of GNSS-R measurements, allowing for a better understanding of global weather abnormalities and generally improving weather forecasting. PubDate:
2023
Issue No: Vol. 16 (2023)
- Roof Segmentation From Airborne LiDAR Using Octree-Based Hybrid Region Growing and Boundary Neighborhood Verification Voting
Authors:
Ke Liu;Hongchao Ma;Liang Zhang;Xiaoli Liang;Dachang Chen;Yihang Liu;
Pages: 2134 - 2146 Abstract: Building roof segmentation is a key step in the process of 3-D building reconstruction using airborne light detection and ranging point cloud data. Voxel-based region growing is one of the most widely used methods to segment planes because of its high efficiency and easy implementation, but it is easy to omit roof planes due to an unreasonable voxel size and complex roof structures. In addition, boundaries between adjacent roof planes are inaccurate. To solve these issues, a roof segmentation method using octree-based hybrid region growing and boundary neighborhood verification voting is proposed. First, octree-based voxelization is performed on the raw building points to generate two basic units: planar voxels and individual points (i.e., points that are not in the planar voxels). Then, hybrid region growing is conducted on these two basic units to segment coarse roof planes. A parameter-free boundary neighborhood verification voting strategy is used to assign the boundary points to the correct roof planes by verifying the neighborhoods of the boundary points and using reliable neighborhood information. Experimental results on four datasets, including two datasets provided by the International Society for Photogrammetry and Remote Sensing and two high-density datasets provided by OpenTopography, verify that roof planes can be successfully segmented by the proposed method with over 96.8% completeness and a minimum of 93.2% correctness. In addition, boundary points are assigned to the correct roof planes by the neighborhood verification voting strategy. Thus, the segmented roof planes can be used in various applications. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Dual-Branch Deep Learning Architecture for Multisensor and Multitemporal Remote Sensing Semantic Segmentation
Authors:
Luca Bergamasco;Francesca Bovolo;Lorenzo Bruzzone;
Pages: 2147 - 2162 Abstract: Multisensor data analysis allows exploiting heterogeneous data regularly acquired by the many available remote sensing (RS) systems. Machine- and deep-learning methods use the information of heterogeneous sources to improve the results obtained by using single-source data. However, the state-of-the-art methods analyze either the multiscale information of multisensor multiresolution images or the time component of image time series. We propose a supervised deep-learning classification method that jointly performs a multiscale and multitemporal analysis of RS multitemporal images acquired by different sensors. The proposed method processes very-high-resolution (VHR) images using a residual network with a wide receptive field that handles geometrical details, and multitemporal high-resolution (HR) images using a 3-D convolutional neural network that analyzes both the spatial and temporal information. The multiscale and multitemporal features are processed together in a decoder to retrieve a land-cover map. We tested the proposed method on two multisensor and multitemporal datasets. One is composed of VHR orthophotos and Sentinel-2 multitemporal images for pasture classification, and the other is composed of VHR orthophotos and Sentinel-1 multitemporal images. The results proved the effectiveness of the proposed classification method. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Subpixel Mapping Method for Urban Land Use by Reducing Shadow Effects
Authors:
Ming Hao;Guimiao Dou;Xiaotong Zhang;Huijing Lin;Wenqi Huo;
Pages: 2163 - 2177 Abstract: Urban land use classification is significant for urban development planning. Considering the complex environments of urban surface features, traditional semantic segmentation methods struggle to solve the problems of mixed pixels and the limited spatial resolution of images. Subpixel mapping technology is an effective method to solve these problems in urban land use classification. However, traditional subpixel mapping methods are sensitive to mountain shadow, high-rise building shadow, and impermeable surface heterogeneity, resulting in false classification. Therefore, we propose a subpixel mapping method that can reduce the shadow effect. This method uses a multi-index feature fusion strategy to optimize the abundance of the shadow errors in the abundance image, and uses a super-resolution reconstruction neural network model to reconstruct the optimized abundance image for the subpixel mapping of urban land use. Experiments were conducted on Sentinel-2 images obtained over the Yuelu District of Changsha City, Hunan Province, China. The experimental results show that the method proposed in this article can effectively overcome the influence of building shadows and mountain shadows in urban land cover classification and is superior to the traditional subpixel/pixel spatial attraction model, radial basis function, super-resolution subpixel mapping, and other methods in the effect and accuracy of urban land use subpixel mapping. PubDate:
2023
Issue No: Vol. 16 (2023)
- Comparison of Doppler-Derived Sea Ice Radial Surface Velocity Measurement Methods From Sentinel-1A IW Data
Authors:
Ruifu Wang;Wenshuo Zhu;Xi Zhang;Ya Zhang;Junhui Zhu;
Pages: 2178 - 2191 Abstract: The near-instantaneous radial velocity of a target can be obtained using the Doppler effect of synthetic aperture radar (SAR), which is widely used in ocean current retrieval. However, in sea ice drift velocity measurements, only a Doppler centroid estimation algorithm in the frequency domain has been studied, so whether there is a better algorithm is worth exploring. In this article, based on Sentinel-1A interferometric wide data, three Doppler centroid estimation algorithms applied to ocean current retrieval are selected. Considering the characteristics of the Terrain Observation by Progressive Scans mode, we made two applicability adjustments to each algorithm and finally applied the three algorithms to sea ice radial surface velocity measurements. The first adjustment is to explore and determine the optimal parameters. The second adjustment is to use parallel computing to improve the efficiency, which is improved by an average of 43.55%. In addition, the deviation of the Doppler centroid estimation bias correction is verified using rainforest data, and the deviation is controlled at approximately 3 Hz. Based on the three algorithms, five sets of experiments are carried out in this article. By analyzing and comparing the results of each algorithm, it is found that the results of the three algorithms are relatively consistent; among them, the correlation Doppler estimation algorithm has the advantages of high efficiency and high precision and is the most suitable method for sea ice drift measurement among the three. However, for SAR images with abnormal speckles caused by human activities, the sign Doppler estimation algorithm can effectively remove abnormal speckles and ensure the smoothness of the image, with better adaptability. PubDate:
2023
Issue No: Vol. 16 (2023)
- SPANet: Spatial Adaptive Convolution Based Content-Aware Network for Aerial Image Semantic Segmentation
Authors:
Jianlong Hou;Zhi Guo;Yingchao Feng;Youming Wu;Wenhui Diao;
Pages: 2192 - 2204 Abstract: Semantic segmentation of remote sensing images encounters four significant difficulties: 1) complex backgrounds, 2) large-scale differences, 3) numerous small objects, and 4) extreme foreground–background imbalance. However, the existing generic semantic segmentation models mainly focus on modeling context information and rarely focus on these four issues. This article presents an enhanced remote sensing image semantic segmentation framework to solve these problems through the hierarchical atrous pyramid (HASP) module and a spatial-adaptive-convolution-based FPN decoder framework. On the one hand, HASP solves the problems of complex backgrounds and large-scale differences by further enlarging the receptive field of the network through the cascade of atrous convolutions with various rates. On the other hand, spatial adaptive convolution is embedded in the FPN decoder framework step by step to solve the problems of numerous small objects and extreme foreground–background imbalance. Besides, a boundary-based loss function is constructed to help the network optimize the relevant segmentation results. Extensive experiments on the iSAID and ISPRS Vaihingen datasets reflect the superiority of the presented structure over conventional state-of-the-art semantic segmentation approaches. PubDate:
2023
Issue No: Vol. 16 (2023)
- Deep Adversarial Cascaded Hashing for Cross-Modal Vessel Image Retrieval
Authors:
Jiaen Guo;Xin Guan;
Pages: 2205 - 2220 Abstract: In recent years, cross-modal remote sensing image retrieval has attracted a lot of attention in remote sensing (RS) information processing. It is worth mentioning that land cover scenes, whether unimodal or cross-modal, are the primary research content of remote sensing image retrieval, and there are few studies on vessel images captured by RS satellites, let alone cross-modal retrieval tasks. Vessel images have smaller scale, lower resolution, and less detailed information than land cover images, so it is difficult to retrieve the exact images we want. In this article, a hashing method called deep adversarial cascaded hashing (DACH) is proposed to address these problems. To accurately extract the subtle and discriminative features contained in RS vessel images, we build a deep cascaded network that fuses multilevel features boosted both in depth and width, and a self-attention mechanism further enhances the fused features. Combined with hash learning, we also design a weighted quintuplet loss to supervise the transition of discrimination and similarity between different metric spaces and reduce the cross-modal discrepancy at the same time. In addition, we apply a deep adversarial constraint to both feature learning and hash learning, trying to bridge the modality gap and achieve cross-modal retrieval as precise as unimodal retrieval. Comprehensive experiments on two public bimodal vessel image datasets, comparing against several excellent cross-modal retrieval methods, demonstrate the effectiveness of our DACH, and the results show that the proposed method is effective and competitive on cross-modal vessel image retrieval tasks, outperforming state-of-the-art methods. PubDate:
2023
Issue No: Vol. 16 (2023)
- Analysis and Compensation for Systematical Errors in Airborne Microwave Photonic SAR Imaging by 2-D Autofocus
Authors:
Min Chen;Xiaolan Qiu;Ruoming Li;Wangzhe Li;Kun Fu;
Pages: 2221 - 2236 Abstract: In the area of synthetic aperture radar (SAR), ultrahigh resolution has always been a continuous pursuit for researchers. To realize ultrahigh resolution, microwave photonics technology is an excellent solution since it has the advantages of low transmission loss, ultrawide bandwidth (UWB), etc. However, with the improvement of resolution, new problems arise as the impact of more systematical errors becomes non-negligible, for example, the synchronization error between the transmit channel and the mixing channel, the unknown system delay during signal reception, and the fluctuation of frequency for UWB signals. These factors degrade the focusing quality of SAR images together with the motion errors and the trajectory measurement errors, causing not only residual range cell migration and azimuth phase error but also higher order phase error along the range frequency dimension. In this article, we discuss the adverse influence of the above factors and give a detailed analysis of the 2-D phase error of the coarsely focused image processed by the range migration algorithm. Then, to compensate for the above unknown systematical errors, a 2-D autofocus method is proposed, and its effectiveness is validated by experiments on both simulated and real data. PubDate:
2023
Issue No: Vol. 16 (2023)
- Aircraft Wake Recognition and Strength Classification Based on Deep Learning
Free pre-print version: Loading...
Rate this result:
What is this?
Please help us test our new pre-print finding feature by giving the pre-print link a rating. A 5 star rating indicates the linked pre-print has the exact same content as the published article.
Authors:
Chun Shen;Weiwei Tang;Hang Gao;Xuesong Wang;Pak-Wai Chan;Kai-Kwong Hon;Jianbing Li;
Pages: 2237 - 2249 Abstract: An aircraft wake is a pair of counter-rotating vortices generated behind an aircraft; it can greatly affect the safety of rapid takeoff and landing operations and limits the improvement of airport capacity. Current wake parameter retrieval methods cannot locate the wake vortex's position and estimate its strength level in real time. To deal with this issue, a novel algorithm based on the YOLOv5s deep learning network is proposed. The new algorithm establishes a single-vortex locating concept to adapt to the wake vortex's evolution under complicated background wind-field conditions, and proposes a strength-based classification standard that represents the real-time hazard of the wake vortex, allowing takeoff and landing intervals to be shortened. Meanwhile, the EIOU loss function is introduced to improve the precision of the YOLOv5s network. Compared with state-of-the-art object detection approaches, such as Cascade R-CNN, FCOS, and YOLOv5l, the superiority of the new method in accuracy and robustness is demonstrated using field detection data from Hong Kong International Airport. PubDate:
2023
Issue No: Vol. 16 (2023)
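The EIOU loss mentioned above builds on plain intersection over union between a predicted and a ground-truth box, adding penalty terms for center distance and side-length mismatch. A minimal sketch of the underlying IoU computation only (boxes as corner coordinates; not the paper's loss code):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1 / union 7 = 1/7
```

An IoU-only loss gives no gradient for non-overlapping boxes, which is precisely the shortcoming the extra EIOU penalty terms address.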
- Coastal Aquaculture Area Extraction Based on Self-Attention Mechanism and
Auxiliary Loss
Authors:
Bo Ai;Heng Xiao;Hanwen Xu;Feng Yuan;Mengyun Ling;
Pages: 2250 - 2261 Abstract: With the development of deep learning in satellite remote sensing image segmentation, convolutional neural networks have achieved better results than traditional methods. In some fully convolutional networks, the number of layers is increased to obtain deep features, but the vanishing-gradient problem appears as the network deepens. Many scholars have obtained multiscale features by using different convolutional computations. We aim to obtain multiscale features within the network structure while capturing contextual information by other means. This article employs a self-attention mechanism and auxiliary loss network (SAMALNet) structure to solve the above problems. We adopt a self-attention strategy in the atrous spatial pyramid pooling module to extract multiscale features while considering contextual information, and we add an auxiliary loss to overcome the vanishing-gradient problem. The experimental results of extracting aquaculture areas in the Jiaozhou Bay area of Qingdao from high-resolution GF-2 satellite images show that, in general, SAMALNet achieves better results than the UPS-Net, SegNet, DeepLabv3, UNet, DeepLabv3+, and PSPNet network structures, including 96.34% recall, 95.91% precision, 96.12% F1 score, and 92.60% MIoU. SAMALNet also extracted aquaculture area boundaries better than the other networks listed above. Highly accurate mapping of aquaculture areas can provide data support for rational planning and environmental protection of coastal aquaculture and promote more rational use of these areas. PubDate:
2023
Issue No: Vol. 16 (2023)
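The recall, precision, F1, and MIoU figures quoted above all derive from pixel-level confusion counts; mIoU simply averages per-class IoU. A small sketch of the definitions (toy counts, not the paper's evaluation code):

```python
def seg_scores(tp, fp, fn):
    """Precision, recall, F1, and IoU from pixel-level confusion counts
    (true positives, false positives, false negatives) for one class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return precision, recall, f1, iou

# Toy counts: 90 correctly labeled pixels, 10 false alarms, 10 misses.
p, r, f1, iou = seg_scores(tp=90, fp=10, fn=10)
```

Note that IoU is always the strictest of the four (it penalizes both false alarms and misses in one denominator), which is why a 92.60% MIoU is a stronger statement than the comparable F1 score.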
- Subaperture Keystone Transform Matched Filtering Algorithm and Its
Application for Air Moving Target Detection in an SBEWR System
Authors:
Muyang Zhan;Chanjuan Zhao;Kun Qin;Penghui Huang;Ming Fang;Chunlei Zhao;
Pages: 2262 - 2274 Abstract: Long-time coherent integration is an effective approach to improve detection performance for weak air moving targets (AMTs). However, over a long observation time, the range position variation and azimuth Doppler variation easily exceed the range gate and the Doppler resolution, resulting in severe performance degradation, especially for highly maneuverable targets. Besides, with existing long-time coherent integration algorithms, high detection performance and low computational complexity are usually contradictory requirements. To overcome these constraints, a novel subaperture keystone transform matched filtering (SAKTMF) method is developed in this article based on the conventional hybrid integration (HI) algorithm; it realizes coherent integration both within each subaperture and among subapertures, effectively improving the detection performance for a weak moving target. Furthermore, the proposed SAKTMF is applied to weak AMT detection in spaceborne early warning radar, which involves serious extended clutter, range migration, and Doppler migration problems simultaneously. Simulation results show that the proposed method provides improved detection performance compared with conventional HI methods. PubDate:
2023
Issue No: Vol. 16 (2023)
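A toy illustration of why the migration and phase-error compensation above matters: coherent integration only delivers its full gain when the pulses are phase-aligned; an uncompensated residual phase (here a made-up quadratic error, not the SAKTMF signal model) erodes the integrated amplitude:

```python
import numpy as np

def integration_gain(phase_error):
    """Normalized amplitude after coherently summing 64 unit pulses with a
    residual quadratic phase error (radians at the aperture edge).
    1.0 means the full coherent-integration gain is achieved."""
    n = np.linspace(-1.0, 1.0, 64)          # normalized slow time
    pulses = np.exp(1j * phase_error * n**2)
    return abs(pulses.sum()) / len(n)

perfect = integration_gain(0.0)    # fully compensated: ratio 1.0
degraded = integration_gain(np.pi)  # pi radians of uncompensated error
```

With zero residual phase the 64 pulses add perfectly; with a pi-radian quadratic error a substantial fraction of the gain (and hence detectability of a weak target) is lost.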
- A Review of Spatial Enhancement of Hyperspectral Remote Sensing Imaging
Techniques
Authors:
Nour Aburaed;Mohammed Q. Alkhatib;Stephen Marshall;Jaime Zabalza;Hussain Al Ahmad;
Pages: 2275 - 2300 Abstract: Remote sensing technology has undeniable importance in various industrial applications, such as mineral exploration, plant detection, defect detection in aerospace and shipbuilding, and optical gas imaging, to name a few. Remote sensing technology has been continuously evolving, offering a range of image modalities that can facilitate the aforementioned applications. One such modality is hyperspectral imaging (HSI). Unlike multispectral images (MSI) and natural images, HSI consist of hundreds of bands. Despite their high spectral resolution, HSI suffer from low spatial resolution in comparison with their MSI counterparts, which hinders the utilization of their full potential. Therefore, spatial enhancement, or super resolution (SR), of HSI is a classical problem that has been gaining rapid attention over the past two decades. The literature is rich with SR algorithms that enhance the spatial resolution of HSI while preserving their spectral fidelity. This article reviews and discusses the most important algorithms in this area of research between 2002 and 2022, along with the most frequently used datasets, HSI sensors, and quality metrics. Meta-analyses are drawn from this information and used as a foundation to summarize the state of the field in a way that bridges the past and the present, identifies current gaps, and recommends possible future directions. PubDate:
2023
Issue No: Vol. 16 (2023)
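Among the quality metrics such a review covers, the spectral angle mapper (SAM) is the standard check that SR has preserved spectral fidelity: it measures the angle between a reconstructed and a reference spectrum, ignoring brightness scaling. A minimal sketch (toy three-band spectra):

```python
import math

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra of equal length;
    0 means identical spectral shape regardless of overall brightness."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp for floating-point safety before acos.
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

same_shape = spectral_angle([1, 2, 3], [2, 4, 6])   # scaled copy: angle ~0
orthogonal = spectral_angle([1, 0], [0, 1])          # worst case: pi/2
```

Because SAM is insensitive to a uniform gain, it is usually reported alongside an intensity-sensitive metric such as PSNR.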
- Faster and Lighter Meteorological Satellite Image Classification by a
Lightweight Channel-Dilation-Concatenation Net
Authors:
Shuyao Shang;Jinglin Zhang;Xing Wang;Xinghua Wang;Yuanjun Li;Yuanjiang Li;
Pages: 2301 - 2317 Abstract: With the development of satellite photography, meteorologists increasingly rely on methods for the automatic and efficient classification of weather images. However, many popular networks require numerous parameters and a lengthy inference time, making them unsuitable for real-time classification tasks. To solve these problems, a lightweight convolutional network termed the channel-dilation-concatenation network (CDC-net) is constructed for meteorological satellite image classification. When extracting features, CDC-net utilizes depth-wise convolution rather than standard convolution, and a FeatureCopy operation is employed instead of a half-convolution operation. CDC-net extracts high-dimensional features and contains a local importance-based pooling layer, reducing the network's depth, number of parameters, and inference time. Based on these techniques, CDC-net achieves an accuracy of 93.56% on the large-scale satellite cloud image database for meteorological research, with a graphics processing unit (GPU) inference time of 3.261 ms and 1.12 million parameters. Because many weather images contain multiple weather patterns, multiple labels are necessary; we therefore propose a prediction method and conduct experiments on multilabel data. Experiments on single-label and multilabel meteorological satellite image datasets demonstrate the superiority of CDC-net over other structures. Thus, the proposed CDC-net provides a faster and lighter solution for meteorological satellite image classification. PubDate:
2023
Issue No: Vol. 16 (2023)
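The parameter savings behind depth-wise convolution (the key lightweight ingredient named above) can be seen with simple counting. A sketch of the standard textbook comparison, with arbitrary illustrative channel counts rather than CDC-net's actual layer sizes:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depth-wise k x k filter per input channel, then a 1 x 1 pointwise
    convolution to mix channels (the usual lightweight substitute)."""
    return c_in * k * k + c_in * c_out

std = conv_params(128, 128, 3)                 # 147456 weights
dws = depthwise_separable_params(128, 128, 3)  # 17536 weights
savings = std / dws                             # roughly 8.4x fewer
```

The ratio approaches k^2 as the channel count grows, which is why stacks of such layers can stay near a million parameters instead of tens of millions.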
- Evaluation of Sources of Artificial Light at Night With an Autonomous
Payload in a Sounding Balloon Flight
Authors:
Carlo Bettanini;Mirco Bartolomei;Pietro Fiorentin;Alessio Aboudan;Stefano Cavazzani;
Pages: 2318 - 2326 Abstract: Artificial light at night not only limits astronomical observations but has also been linked to negative effects on human health and wildlife behavior. New measurement systems are therefore needed to monitor artificial light emissions and their evolution over time. The Misurazione dell'INquinamento LUminoso autonomous payload has been designed and tested at the University of Padova to provide complete aerial observations of artificial light sources over extended areas, with the capability to be integrated on either stratospheric balloons or drones. The implemented architecture is based on commercial components and is controlled by a Raspberry Pi single-board computer capable of uninterrupted operation for up to 5 h. The payload was successfully launched with a stratospheric sounding balloon on July 8, 2021 from Lajatico (Tuscany) and performed continuous analysis of emission sources up to the burst altitude of 34 km. The article describes the calibration of the imaging unit, which combines commercial cameras with dedicated filters used as a luminance measuring device and a raw spectrometer, and presents the production of georeferenced images after reconstructing the unit's inertial pointing along the flight trajectory from combined GPS and IMU data. PubDate:
2023
Issue No: Vol. 16 (2023)
- A High-Resolution Airborne SAR Autofocusing Approach Based on SR-PGA and
Subimage Resampling With Precise Hyperbolic Model
Authors:
Yuhui Deng;Hui Wang;Guang-Cai Sun;Yong Wang;Wenkang Liu;Jun Yang;Mengdao Xing;
Pages: 2327 - 2338 Abstract: High-resolution airborne synthetic aperture radar imaging requires a long synthetic aperture time (LSAT). However, the LSAT invalidates the Fresnel approximation in traditional autofocusing methods, leaving a residual signal phase in the motion error. To solve this problem, a signal-reconstruction-based phase gradient autofocus (SR-PGA) method and a subimage resampling (SIR) method are proposed, both developed on the precise hyperbolic model. First, a subaperture division strategy divides the full-aperture high-order error into multiple low-order subaperture phase errors (SPEs). Then, SR-PGA is developed to estimate each SPE, in which precise deramping is reconstructed to eliminate the residual signal phase in the SPE. Third, the SIR is proposed to eliminate the residual Doppler-variant shift between adjacent subimages, improving the accuracy of the SPE combination. Finally, simulation and real-data processing verify the effectiveness and validity of the algorithm. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Novel Dense-Attention Network for Thick Cloud Removal by Reconstructing
Semantic Information
Authors:
Yuyun Chen;Zhanchuan Cai;Jieyu Yuan;Lianghai Wu;
Pages: 2339 - 2351 Abstract: Thick clouds in single optical images contaminate objects of interest, and the difficulty of thick cloud removal lies mainly in restoring weak boundary information in cloud-contaminated areas. Recently, many deep-learning-based frameworks have been applied to cloud removal by obtaining the related semantic information from that weak boundary information; however, large cloud-contaminated areas lead to artificial textures in the resulting images. Obtaining the optimal semantic information from finite boundary information is thus the key to solving this problem. In this work, we design a deep-learning framework for cloud removal, especially the removal of large clouds (i.e., more than 30% coverage of the whole image). First, we design a cloud location model (CLM), which adopts a fully convolutional network to locate the clouds. Second, inspired by the theory of coarse-to-fine restoration, we build a dense-attention network (termed DANet) for restoring cloud-contaminated areas. In DANet, a dense block in the coarse network learns, from the weak boundary information, the restoration direction of each pixel. Furthermore, a contextual attention module in the refinement network restores contaminated areas by relying on the semantic relationship between background and foreground information. Compared with state-of-the-art methods, the proposed DANet achieves better removal performance and reconstructs more natural image textures. PubDate:
2023
Issue No: Vol. 16 (2023)
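The "more than 30% coverage" criterion above is a simple statistic on the binary cloud mask that a model like the CLM would output. A minimal sketch with a hypothetical 4 x 4 mask (1 = cloud):

```python
def cloud_coverage(mask):
    """Fraction of pixels flagged as cloud in a binary mask (list of rows)."""
    total = sum(len(row) for row in mask)
    cloudy = sum(sum(row) for row in mask)
    return cloudy / total

mask = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [1, 0, 0, 0]]
large = cloud_coverage(mask) > 0.30  # the paper's "large cloud" regime
```

Entering this regime matters because the larger the contaminated fraction, the less boundary information remains for the restoration network to condition on.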
- Joint Sparse Representation-based Single Image Super-Resolution for Remote
Sensing Applications
Authors:
Bhabesh Deka;Helal Uddin Mullah;Trishna Barman;Sumit Datta;
Pages: 2352 - 2365 Abstract: Sparse representation-based single image super-resolution (SISR) methods use a coupled overcomplete dictionary trained from high-resolution images or image patches. Since remote sensing (RS) satellites capture images of large areas, these images usually have poor spatial resolution, and obtaining an effective dictionary from them would be very challenging. Moreover, traditional patch-based sparse representation models tend to give unstable sparse solutions and produce visual artefacts in the recovered images. To mitigate these problems, in this article we propose an adaptive joint sparse representation-based SISR method that depends only on the input low-resolution image for dictionary training and sparse reconstruction. The new model combines patch-based local sparsity and group sparse representation-based nonlocal sparsity in a single framework, which helps stabilize the sparse solution and improves the SISR results. The experimental results are evaluated both visually and quantitatively on several RGB and multispectral RS datasets, where the proposed method improves peak signal-to-noise ratio by 1–4 dB and 2–3 dB over state-of-the-art sparse representation- and deep learning-based SR methods, respectively. Land cover classification applied to the super-resolved images further validates the advantages of the proposed method. Finally, for practical RS applications, we have implemented the method in parallel on general-purpose graphics processing units and achieved significant speedups (30–40×) in execution time. PubDate:
2023
Issue No: Vol. 16 (2023)
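The 1–4 dB gains quoted above are in peak signal-to-noise ratio, which is a log transform of mean squared error against the reference image. A minimal sketch of the definition (toy 8-bit pixel values, not the paper's evaluation pipeline):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel lists."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak**2 / mse)

quality = psnr([100, 150, 200], [102, 148, 201])  # small errors, high PSNR
```

Because the scale is logarithmic, a 3 dB improvement corresponds to halving the mean squared error, so gains of 1–4 dB are substantial.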
- Effects of Urban Redevelopment on Surface Urban Heat Island
Authors:
Dan Li;Shaofeng Yan;Guangzhao Chen;
Pages: 2366 - 2373 Abstract: Urban expansion and urban redevelopment can affect the surface urban heat island (SUHI) phenomenon, a major topic in the study of urban climates. The effects of urban expansion on SUHI have been studied by numerous researchers, while the effects of urban redevelopment remain unclear. We aimed to understand the effects of urban redevelopment on SUHI. Using the thermal bands of Landsat-5 TM and Landsat-8 TIRS, we retrieved the land surface temperature and calculated the SUHI intensity of redevelopment areas in Guangzhou (China) during 2000–2019. Based on high-spatial-resolution images from Google Earth, 253 redevelopment areas were identified and classified as village low residential areas, low industrial areas, middle residential areas, high residential areas, and commercial areas. Furthermore, the change in SUHI intensity in the redevelopment areas was analyzed. Results showed that urban redevelopment, including the transitions from urban village to high-rise commercial land, from low-rise industrial land to high-rise residential land, from industrial land to parking lot, from bare land to mid-rise buildings, and from parking lot to high-rise commercial land, can considerably slow the increase in local SUHI intensity. These findings have theoretical and practical implications for urban planning and redevelopment. PubDate:
2023
Issue No: Vol. 16 (2023)
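SUHI intensity is conventionally the difference between the mean land surface temperature (LST) of an urban area and that of a rural reference. A minimal sketch with made-up LST samples in kelvin (the study's exact zoning and averaging scheme may differ):

```python
def suhi_intensity(lst_urban, lst_rural):
    """Surface urban heat island intensity: mean urban LST minus mean
    rural reference LST, both in the same units (e.g. kelvin)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(lst_urban) - mean(lst_rural)

# Hypothetical per-pixel LST retrievals from a thermal band.
delta = suhi_intensity([305.2, 306.1, 304.7], [301.0, 300.5, 301.5])
```

Tracking this difference over time, rather than the raw urban LST, removes regional weather variation and isolates the heat-island signal.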
- Large-Scale Forest Height Mapping by Combining TanDEM-X and GEDI Data
Authors:
Changhyun Choi;Victor Cazcarra-Bes;Roman Guliaev;Matteo Pardini;Konstantinos P. Papathanassiou;Wenlu Qi;John Armston;Ralph O. Dubayah;
Pages: 2374 - 2385 Abstract: The present study addresses the development, implementation, and validation of a forest height mapping scheme based on the combination of TanDEM-X interferometric coherence and GEDI waveform measurements. The very general case where only a single polarisation TanDEM-X interferogram, a set of spatially discrete GEDI waveform measurements, and no DTM are available is assumed. The use of GEDI waveforms to invert the TanDEM-X interferometric measurements is described together with a set of performance criteria implemented to ensure a certain performance quality. The emphasis is set on developing a methodology able to invert forest height at large scales. Combining 595 TanDEM-X scenes and about 15 million GEDI waveforms, a spatially continuous 25-m resolution forest height map covering the whole of Tasmania Island is achieved. The derived forest height map is validated against an airborne lidar-derived canopy height map available across the whole island. PubDate:
2023
Issue No: Vol. 16 (2023)
- Understanding Volume Estimation Uncertainty of Lakes and Wetlands Using
Satellites and Citizen Science
Authors:
Shahzaib Khan;Faisal Hossain;Tamlin Pavelsky;Grant M. Parkins;Megan Rodgers Lane;Angélica M. Gómez;Sanchit Minocha;Pritam Das;Sheikh Ghafoor;Md. Arifuzzaman Bhuyan;Md. Nazmul Haque;Preetom Kumar Sarker;Partho Protim Borua;Jean-Francois Cretaux;Nicolas Picot;Vivek Balakrishnan;Shakeel Ahmad;Nirakar Thapa;Rajan Bhattarai;Faizan-ul Hasan;Bareerah Fatima;Muhammad Ashraf;Shahryar Khalique Ahmad;Arthur Compin;
Pages: 2386 - 2401 Abstract: We studied variations in the volume of water stored in small lakes and wetlands using satellite remote sensing and lake water height data contributed by citizen scientists. A total of 94 water bodies across the globe were studied using satellite data at optical and microwave wavelengths from Landsat 8, Sentinel-1, and Sentinel-2. The uncertainty in volume estimation was studied as a function of geography and geophysical factors, such as cloud cover, precipitation, and water surface temperature. The key finding from this global study is that uncertainty is highest in regions with a distinct precipitation season, such as monsoon-dominated South Asia or the Pacific Northwest of the USA. This uncertainty is further compounded when small lakes and wetlands are seasonal, alternating between use as a water body and as agricultural land, as in the wetlands of northeastern Bangladesh. On average, in South Asia the volume change of 45% of the studied lakes could be estimated with a statistically significant uncertainty smaller than the expected volume. In North America, this statistically significant uncertainty in volume estimation was achieved for around 50% of lakes east of the 108th meridian, with the lowest uncertainty found in lakes along the East Coast of the USA. The article provides a baseline for understanding the current state of the art in estimating volumetric change of lakes and wetlands using citizen science, in anticipation of the recently launched Surface Water and Ocean Topography mission. PubDate:
2023
Issue No: Vol. 16 (2023)
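A common first-order way to combine the two data sources described above is the trapezoid rule: satellite-derived water-surface areas before and after a citizen-reported water-level change give the storage change. A sketch under that assumption, with invented numbers (the study's estimator and its uncertainty treatment are more elaborate):

```python
def volume_change(area_before, area_after, dh):
    """First-order storage-change estimate (m^3) from two water-surface
    areas (m^2) and the water-level change dh (m) between observations."""
    return 0.5 * (area_before + area_after) * dh

# Hypothetical example: lake grows from 1.2 to 1.5 km^2 as level rises 0.8 m.
dv = volume_change(1.2e6, 1.5e6, 0.8)
```

The averaging of the two areas assumes roughly linear bank slopes between the two water levels, which is exactly where much of the uncertainty for small, seasonal water bodies comes from.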
- Improving Tourism Analytics From Climate Data Using Knowledge Graphs
Authors:
Jiantao Wu;Jarrett Pierse;Fabrizio Orlandi;Declan O'Sullivan;Soumyabrata Dev;
Pages: 2402 - 2412 Abstract: Climate change has been deemed to be one of the greatest challenges facing humans in the 21st century, with extreme weather events taking place more regularly than before. While the impact of climate change has been well documented in recent years across industries, the impact of climate change on the tourism economy is yet to be fully realized. This article aims to apply a range of knowledge graph techniques to naturalistic data. Among these, weather data will be explored as one prospective way to enhance people's understanding of how climate and a country's tourism economy are related and how they interact. According to our exploration with the knowledge graph approach in organizing the climate and tourism data, the insights and knowledge gained from the knowledge graph are able to ultimately help improve the quality of life for people and the tourism industry of a country. PubDate:
2023
Issue No: Vol. 16 (2023)
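At their core, the knowledge graph techniques discussed above organize heterogeneous climate and tourism observations as subject-predicate-object triples that can then be queried uniformly. A toy sketch using plain Python tuples (real systems would use RDF and SPARQL; all place names and figures here are invented for illustration):

```python
# A toy climate/tourism knowledge graph as subject-predicate-object triples.
triples = {
    ("Dublin", "hadRainfallMM", 128),
    ("Dublin", "hadTouristArrivals", 540_000),
    ("Dublin", "inMonth", "2019-07"),
    ("Galway", "hadRainfallMM", 160),
}

def query(triples, predicate):
    """All (subject, object) pairs linked by a given predicate."""
    return sorted((s, o) for s, p, o in triples if p == predicate)

rain = query(triples, "hadRainfallMM")
```

Because every fact shares one shape, linking a weather observation to a tourism statistic about the same entity becomes a join on the subject rather than a bespoke data-integration step.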
- An Ensemble Learning Approach for Land Use/Land Cover Classification of
Arid Regions for Climate Simulation: A Case Study of Xinjiang, Northwest China
Authors:
Haoyang Du;Manchun Li;Yunyun Xu;Chen Zhou;
Pages: 2413 - 2426 Abstract: Accurate classifications of land use/land cover (LULC) in arid regions are vital for analyzing changes in climate. We propose an ensemble learning approach for improving LULC classification accuracy in Xinjiang, northwest China. First, multisource geographical datasets were applied, and the study area was divided into Northern Xinjiang, Tianshan, and Southern Xinjiang. Second, five machine learning algorithms—k-nearest neighbor, support vector machine (SVM), random forest (RF), artificial neural network (ANN), and C4.5—were chosen to develop different ensemble learning strategies according to the climatic and topographic characteristics of each subregion. Third, stratified random sampling was used to obtain training samples and optimal parameters for each machine learning algorithm. Lastly, each derived approach was applied across Xinjiang, and subregion performance was evaluated. The results showed that the LULC classification accuracy achieved across Xinjiang via the proposed ensemble learning approach was improved by ≥6.85% compared with individual machine learning algorithms. By specific subregion, the accuracies for Northern Xinjiang, Tianshan, and Southern Xinjiang increased by ≥6.70%, 5.87%, and 6.86%, respectively. Moreover, the ensemble learning strategy combining four machine learning algorithms (i.e., SVM, RF, ANN, and C4.5) was superior across Xinjiang and Tianshan; whereas, the three-algorithm (i.e., SVM, RF, and ANN) strategy worked best for the Northern and Southern Xinjiang. The innovation of this study is to develop a novel ensemble learning approach to divide Xinjiang into different subregions, accurately classify land cover, and generate a new land cover product for simulating climate change in Xinjiang. PubDate:
2023
Issue No: Vol. 16 (2023)
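The simplest form of the ensemble idea above is plurality voting over the per-classifier land-cover labels. A minimal sketch with hypothetical labels (the paper's subregion-specific strategies may weight or select classifiers differently):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier land-cover labels by plurality vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical votes for one pixel from SVM, RF, ANN, and C4.5.
votes = ["grassland", "bare", "grassland", "grassland"]
label = majority_vote(votes)
```

Voting helps because the member classifiers make partly uncorrelated errors, so the combined label is right more often than any single model, matching the reported accuracy gains over individual algorithms.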
- Satellite-Detected Contrasting Responses of Canopy Structure and Leaf
Physiology to Drought
Authors:
Hongfan Gu;Gaofei Yin;Yajie Yang;Aleixandre Verger;Adrià Descals;Iolanda Filella;Yelu Zeng;Dalei Hao;Qiaoyun Xie;Xing Li;Jingfeng Xiao;Josep Peñuelas;
Pages: 2427 - 2436 Abstract: Disentangling drought impacts on plant photosynthesis is crucial for projecting future terrestrial carbon dynamics. We examined the separate responses of canopy structure and leaf physiology to an extreme summer drought that occurred in 2011 over Southwest China, where the weather is humid and radiation is the main growth-limiting factor. Canopy structure and leaf physiology were represented, respectively, by the near-infrared reflectance of vegetation (NIRv) derived from MODIS data and by leaf-scale fluorescence yield (Φf) derived from both continuous SIF (CSIF) and global OCO-2 SIF (GOSIF). We detected contrasting responses of canopy structure and leaf physiology to drought, with a 14.0% increase in NIRv against 12.6% or 19.3% decreases in Φf from CSIF and GOSIF, respectively. The increase in structure resulted in only a slight carbon change, owing to water-deficit-induced physiological constraints. The net ecosystem effect was a 7.5% (CSIF), 1.2% (GOSIF), and −2.96% (EC-LUE GPP) change in photosynthesis. Our study improves understanding of the complex responses of plant photosynthesis to drought and may help reconcile the contrasting directions observed in plant responses to drought in cloudy regions via remote sensing. PubDate:
2023
Issue No: Vol. 16 (2023)
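The structural proxy NIRv used above is defined as NDVI multiplied by near-infrared reflectance, both computable from the red and NIR bands of a sensor like MODIS. A minimal sketch with illustrative reflectance values:

```python
def nirv(nir, red):
    """Near-infrared reflectance of vegetation: NDVI times NIR reflectance."""
    ndvi = (nir - red) / (nir + red)
    return ndvi * nir

# Hypothetical surface reflectances for a vegetated pixel.
v = nirv(nir=0.45, red=0.05)  # NDVI 0.8 times NIR 0.45
```

Multiplying NDVI by NIR reflectance suppresses the soil-background signal that contaminates NDVI alone, which is why NIRv tracks canopy structure rather than leaf physiology.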
- Hyperspectral Image Superresolution via Subspace-Based Deep Prior
Regularization
Authors:
Jianwei Zheng;Pengfei Li;Honghui Xu;Jiawei Jiang;Yuchao Feng;Zhi Liu;
Pages: 2437 - 2449 Abstract: Hyperspectral imaging can deliver material properties far more finely than conventional imaging systems. Yet in reality, at video rates an optical system can only generate data with high spatial but low spectral resolution, or vice versa. As a result, fusing low-resolution hyperspectral and high-resolution multispectral images has gained great attention. However, most fusion approaches depend purely on hand-crafted regularizers or data-driven priors, leading to tricky parameter selection or poor interpretability. In this work, a subspace-based deep prior regularization is proposed to tackle these problems, taking both a hand-crafted regularizer and a data-driven prior into account. Specifically, we leverage the spectral correlation of the images and transfer them from the original space to the subspace domain, within which a modified U-net-based deep prior learning network (SDPL-net) is designed for the fusion task. Moreover, instead of taking the output of SDPL-net directly as the result, we feed the output back into the model-based optimization. Under such prior regularization, the recovered high-resolution hyperspectral image remains highly consistent with its inherent structure and hence tends to offer enhanced reliability and accuracy. Experimental results on simulated and real data reveal that the proposed method outperforms other state-of-the-art methods in both quantitative and qualitative metrics. PubDate:
2023
Issue No: Vol. 16 (2023)
- An Improved Lightweight Yolo-Fastest V2 for Engineering Vehicle
Recognition Fusing Location Enhancement and Adaptive Label Assignment
Authors:
Hairong Zhang;Dongsheng Xu;Dayu Cheng;Xiaoliang Meng;Geng Xu;Wei Liu;Teng Wang;
Pages: 2450 - 2461 Abstract: Engineering vehicle recognition based on video surveillance is one of the key technologies for monitoring illegal land use. At present, engineering vehicle recognition mainly adopts traditional deep learning models with large numbers of floating-point operations, so it cannot run in real time on edge devices with limited computing power and storage. In addition, some lightweight models suffer from inaccurate bounding-box localization, low recognition rates, and unreasonable selection of positive training samples for small objects. To solve these problems, this article proposes an improved lightweight Yolo-Fastest V2 for engineering vehicle recognition that fuses location enhancement and adaptive label assignment. The location-enhanced feature pyramid network structure combines deep and shallow feature maps to localize bounding boxes accurately. A grouping k-means clustering strategy and an adaptive label assignment algorithm select an appropriate anchor for each object based on its shape and intersection over union. The study was conducted on a Raspberry Pi 4B (2018) using two datasets and different models. Experiments show that our method achieves the best combination of speed and accuracy; specifically, mAP50 is increased by 7.02% at 11.24 fps on engineering vehicle data obtained by video surveillance in a rural area of China. PubDate:
2023
Issue No: Vol. 16 (2023)
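The anchor selection "based on its shape and Intersection over Union" can be illustrated with the standard shape-IoU matching used in YOLO-style anchor assignment; this is a sketch of that generic mechanism, not the paper's exact algorithm, and the anchor sizes are invented:

```python
def shape_iou(anchor, box):
    """IoU of two (w, h) shapes aligned at a common corner -- the
    overlap measure typically used to match anchors to objects."""
    inter = min(anchor[0], box[0]) * min(anchor[1], box[1])
    union = anchor[0] * anchor[1] + box[0] * box[1] - inter
    return inter / union

def best_anchor(anchors, box):
    """Index of the anchor whose shape best matches the object's."""
    return max(range(len(anchors)), key=lambda i: shape_iou(anchors[i], box))

anchors = [(10, 14), (28, 22), (60, 45)]  # hypothetical clustered anchors
print(best_anchor(anchors, (26, 20)))     # → 1 (the mid-size anchor)
```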
- Remote Sensing of Surface Melt on Antarctica: Opportunities and Challenges
-
Authors:
Sophie de Roda Husman;Zhongyang Hu;Bert Wouters;Peter Kuipers Munneke;Sanne Veldhuijsen;Stef Lhermitte;
Pages: 2462 - 2480 Abstract: Surface melt is an important driver of ice shelf disintegration and its consequent mass loss over the Antarctic Ice Sheet. Monitoring surface melt using satellite remote sensing can enhance our understanding of ice shelf stability. However, the sensors do not measure the actual physical process of surface melt, but rather observe the presence of liquid water. Moreover, the sensor observations are influenced by the sensor characteristics and surface properties. Therefore, large inconsistencies can exist in the derived melt estimates from different sensors. In this study, we apply state-of-the-art melt detection algorithms to four frequently used remote sensing sensors, i.e., two active microwave sensors, which are Advanced Scatterometer (ASCAT) and Sentinel-1, a passive microwave sensor, i.e., Special Sensor Microwave Imager/Sounder (SSMIS), and an optical sensor, i.e., Moderate Resolution Imaging Spectroradiometer (MODIS). We intercompare the melt detection results over the entire Antarctic Ice Sheet and four selected study regions for the melt seasons 2015–2020. Our results show large spatiotemporal differences in detected melt between the sensors, with particular disagreement in blue ice areas, in aquifer regions, and during wintertime surface melt. We discuss that discrepancies between sensors are mainly due to cloud obstruction and polar darkness, frequency-dependent penetration of satellite signals, temporal resolution, and spatial resolution, as well as the applied melt detection methods. Nevertheless, we argue that different sensors can complement each other, enabling improved detection of surface melt over the Antarctic Ice Sheet. PubDate:
2023
Issue No: Vol. 16 (2023)
- CFNet: An Eigenvalue Preserved Approach to Multiscale Building
Segmentation in High-Resolution Remote Sensing Images-
Authors:
Qi Liu;Yang Li;Muhammad Bilal;Xiaodong Liu;Yonghong Zhang;Huihui Wang;Xiaolong Xu;Hui Lu;
Pages: 2481 - 2491 Abstract: In recent years, AI and deep learning (DL) methods have been widely used for object classification, recognition, and segmentation of high-resolution multispectral remote sensing images. These DL-based solutions perform better compared with the traditional spectral algorithms but still suffer from insufficient optimization of global and local features of object context. In addition, failure of code-data isolation and/or disclosure of detailed eigenvalues cause serious privacy and even secret leakage due to the sensitivity of high-resolution remote sensing data and their processing mechanisms. In this article, class feature modules have been presented in the decoder part of an attention-based CNN network to distinguish between building and nonbuilding (background) area. In this way, context features of a focused object can be extracted with more details being processed while the resolution of images is maintained. The reconstructed local and global feature values and dependencies in the proposed model are maintained by reconfiguring multiple effective attention modules with contextual dependencies to achieve better results for the eigenvalue. According to quantitative results and their visualization, the proposed model has depicted better performance over others' work using two large-scale building remote sensing datasets. The F1-score of this model reached 87.91 and 89.58 on WHU Buildings Dataset and Massachusetts Buildings Dataset, respectively, which exceeded the other semantic segmentation models. PubDate:
2023
Issue No: Vol. 16 (2023)
- Simultaneous Update of High-Resolution Land-Cover Mapping Attempt: Wuhan
and the Surrounding Satellite Cities Cartography Using L2HNet-
Authors:
Yan Huang;Yuqing Wang;Zhanbo Li;Zhuohong Li;Guangyi Yang;
Pages: 2492 - 2503 Abstract: Land-cover mapping is important for urban planning and management, and current land-cover mapping products are unable to meet the needs of cities due to frequent land surface changes. In this study, based on the low-to-high network (L2HNet), we generate a high-resolution land-cover mapping product for Wuhan and its surrounding areas. We adopt a simplified L2HNet, removing the confident area selection and the L2H loss module to shorten the cycle time of the entire mapping process. The mapping process used ESA LandCover (2021) as low-resolution labels and Google Maps as high-resolution remote sensing images. In the course of the experiment, we calculate four indicators: mean intersection over union (MIoU), overall accuracy (OA), frequency-weighted intersection over union (FWIoU), and Kappa; evaluate the accuracy of our product in predicting fine feature structure using a point-based test method; and compare it with six mainstream land-cover mapping products. The product achieves 1-m resolution in the study areas while maintaining an MIoU of over 75.21%. OA, FWIoU, and Kappa all remain above 85.00%, showing excellent prediction results. In quantitative analysis, compared to ESA LandCover (2021), the L2HNet product shows a significant improvement in mapping accuracy for built-up areas and permanent water, including a 21.08% improvement in permanent-water accuracy and a substantial improvement for built-up areas. The comparison with mainstream products also shows the credibility and practicality of the product. The end result of this research fills a gap in 1-m-resolution land-cover mapping for Wuhan and its surrounding areas. While significantly improving the product's resolution, L2HNet makes time- and labor-saving periodic mapping a reality. PubDate:
2023
Issue No: Vol. 16 (2023)
- Construction of Improved Semantic Segmentation Model and Application to
Extraction of Anthropogenically Disturbed Parcels With Soil Erosion From Remote Sensing Images-
Authors:
Jialin Li;Li Hua;Lu Li;Zijing Zhang;Chongfa Cai;
Pages: 2504 - 2516 Abstract: With the rapid socioeconomic development in China, increasing soil erosion caused by anthropogenic production and construction activities is taking place, which is characterized by short duration, high frequency, and great damage to the surrounding environment. Therefore, regulating and controlling soil erosion on anthropogenically disturbed parcels is an urgent task. This study proposes an improved model that combines the boundary constraint and jagged hybrid dilated convolution channel shuffling (BCJHDC) module with the polarized self-attention (PSA) module for extracting anthropogenically disturbed parcels with soil erosion from high-resolution remote sensing images in Hubei Province. First, the PSA module is added to the encoder to better extract the feature information of the target object. Second, the BCJHDC module is used to extract multiscale semantic information from images and improve the boundary segmentation quality. Precision, recall, intersection over union (IOU), and F1 score (F1) are calculated to evaluate the model accuracy. The results indicate that our improved model performs well on the anthropogenically disturbed parcel extraction task, with an F1 of 87.92% and an IOU of 78.44%. Ablation experiments and application experiments demonstrate the validity and the portability of our proposed improved model, respectively. Compared with seven other advanced semantic segmentation models, our improved model has significant advantages. Overall, this study provides a valuable reference for policy formulation in water and soil conservation. PubDate:
2023
Issue No: Vol. 16 (2023)
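The four reported metrics are standard pixel-level quantities derived from confusion counts; for a single class, IoU and F1 are linked by IoU = F1/(2 − F1), which the reported 87.92% F1 and 78.44% IOU satisfy. A small illustration (the counts below are invented, chosen only to land near those scores):

```python
def seg_metrics(tp, fp, fn):
    """Precision, recall, IoU, and F1 from pixel-level confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    iou = tp / (tp + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, iou, f1

# hypothetical counts; note how the F1/IoU pair mirrors the paper's
p, r, iou, f1 = seg_metrics(tp=8000, fp=1000, fn=1200)
print(round(f1, 4), round(iou, 4))  # → 0.8791 0.7843
```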
- Comparison of Methane Detection Using Shortwave and Longwave Infrared
Hyperspectral Sensors Under Varying Environmental Conditions-
Authors:
Lucy A. Zimmerman;John P. Kerekes;
Pages: 2517 - 2531 Abstract: Methane is a prevalent greenhouse gas with potent heat trapping capabilities, but methane emissions can be difficult to detect. Hyperspectral imagery is an effective method of detection which can be used to locate methane emission sources, as well as provide accountability for reaching emissions reduction goals. Because of methane's absorption features, both shortwave infrared (SWIR) and longwave infrared (LWIR) hyperspectral sensors have been used to accurately detect methane plumes. However, surface, environmental, and atmospheric background conditions can cause methane detectability to vary, and there have not been previous studies which evaluate this variability over a wide range of conditions. To assess this variation, this trade study compared methane detectability for two airborne hyperspectral sensors: AVIRIS-NG in the SWIR and HyTES in the LWIR. We modeled methane plume detection under a wide range of precisely known conditions by making use of synthetic images which were comprised of MODTRAN-generated radiance curves. We applied a spectral matched filter to these images to assess detection accuracy, and used these results to identify the conditions which have the most significant impact on detectability in the SWIR and LWIR. We then computed the specific boundaries on these conditions which make methane most detectable for each instrument; these novel results explore methane detectability over a broader range of conditions and sensors than previous studies. This trade study and methodology can aid decision-making about which sensors are most useful for various types of methane emission analysis, such as leak detection and emission rate quantification. PubDate:
2023
Issue No: Vol. 16 (2023)
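The spectral matched filter the authors apply is the classic background-whitened detector, scoring each pixel spectrum x as (x − μ)ᵀΣ⁻¹t / √(tᵀΣ⁻¹t) against a target signature t. A self-contained numpy sketch on synthetic data; the "signature" here is a random vector standing in for a real methane spectrum, and the scene is a toy flat background:

```python
import numpy as np

def matched_filter(cube, target):
    """Spectral matched filter: score each pixel spectrum x with
    (x - mu)^T S^-1 t / sqrt(t^T S^-1 t), where mu and S are the
    background mean and covariance estimated from the scene."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b)
    mu = x.mean(axis=0)
    s = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)  # light regularization
    sinv_t = np.linalg.solve(s, target)
    scores = (x - mu) @ sinv_t / np.sqrt(target @ sinv_t)
    return scores.reshape(h, w)

rng = np.random.default_rng(1)
scene = rng.normal(0.3, 0.02, size=(32, 32, 20))  # flat noisy background
t = rng.random(20)              # hypothetical absorption signature
scene[10, 10] += 0.1 * t        # inject one weak plume pixel
scores = matched_filter(scene, t)
peak = tuple(int(v) for v in np.unravel_index(scores.argmax(), scores.shape))
print(peak)                     # → (10, 10)
```

The whitening by Σ⁻¹ is what makes detectability depend so strongly on background conditions, which is the variability the trade study quantifies.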
- Multitask GANs for Oil Spill Classification and Semantic Segmentation
Based on SAR Images-
Authors:
Jianchao Fan;Chuan Liu;
Pages: 2532 - 2546 Abstract: Increasingly frequent marine oil spill disasters cause great harm to the marine ecosystem. As an essential means of remote sensing monitoring, synthetic aperture radar (SAR) images can detect oil spills in time and reduce marine pollution. Many look-alike oil spill regions are difficult to distinguish in SAR images, and the scarcity of real oil spill data makes it difficult to train deep learning networks effectively. To solve these problems, this article designs a multitask generative adversarial networks (MTGANs) oil spill detection model that distinguishes oil spills from look-alikes and segments oil spill areas in one framework. The discriminator of the first generative adversarial network (GAN) is transformed into a classifier, which can effectively distinguish between real and look-alike oil spills. The generator of the second GAN model integrates a fully convolutional symmetric structure and multiple convolution blocks. The multiple convolution blocks extract shallow oil spill information, and the fully convolutional symmetric structure extracts deeper features of the oil spill information. The algorithm needs only a small number of oil spill images as the training set, alleviating the limitation of the scarce oil spill dataset. Validation evaluations are conducted on three datasets from the Sentinel-1, ERS-1/2, and GF-3 satellites, and the experimental results demonstrate that the proposed MTGANs oil spill detection framework outperforms other models in oil spill classification and semantic segmentation. The classification accuracy for oil spills and look-alikes reaches 97.22%, the average OA for semantic segmentation of the oil spill area reaches 97.47%, and the average precision reaches 86.69%. PubDate:
2023
Issue No: Vol. 16 (2023)
- Bathymetry Retrieval From Spaceborne Multispectral Subsurface Reflectance
-
Authors:
Guoqing Zhou;Sikai Su;Jiasheng Xu;Zhou Tian;Qiaobo Cao;
Pages: 2547 - 2558 Abstract: A few scholars have developed models for retrieving water depth from the subsurface reflectance of multispectral images to avoid the influence of sun glitter. However, these models are only suitable for case I water. For this reason, this study proposes a bathymetry retrieval model using subsurface reflectance for both case I and case II water. The model first corrects the water surface reflectance image and then converts it into a subsurface reflectance image, which is used as the water depth retrieval image. Landsat 8 images covering both case I and case II water were used for the experiments, with two water areas, Weizhou Island, Guangxi, China, and Molokai Island, Hawaii, USA, used to verify the proposed model. The experimental results showed that the proposed model reduced the root-mean-squared error of the retrieved water depth in the Weizhou and Molokai areas from 3.113 to 2.903 m and from 4.239 to 3.653 m, respectively, i.e., an improvement in water-depth accuracy of 6.75% and 13.82% for the Weizhou and Molokai areas, respectively. Therefore, the results demonstrate that the proposed model using subsurface reflectance can significantly improve the accuracy of bathymetry retrieval from spaceborne multispectral images. PubDate:
2023
Issue No: Vol. 16 (2023)
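The quoted accuracy gains are simply the relative RMSE reductions; a two-line check reproduces the 6.75% and 13.82% figures from the stated RMSE values:

```python
def rel_improvement(rmse_before, rmse_after):
    """Percentage reduction in RMSE."""
    return 100 * (rmse_before - rmse_after) / rmse_before

print(round(rel_improvement(3.113, 2.903), 2))  # → 6.75  (Weizhou)
print(round(rel_improvement(4.239, 3.653), 2))  # → 13.82 (Molokai)
```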
- Spectral Token Guidance Transformer for Multisource Images Change
Detection-
Authors:
Bangyong Sun;Qinsen Liu;Nianzeng Yuan;Jiahai Tan;Xiaomei Gao;Tao Yu;
Pages: 2559 - 2572 Abstract: With the development of Earth observation technology, more multisource remote sensing images are obtained from various satellite sensors, significantly enriching the data sources for change detection (CD). However, the use of multisource bitemporal images frequently introduces challenges in featuring or representing the various physical mechanisms of the observed landscapes and makes it more difficult to develop a general model that adapts to both homogeneous and heterogeneous CD. In this article, we propose an adaptive spatial-spectral transformer CD network based on spectral token guidance, named STCD-Former. Specifically, a dual-branch spectral transformer first encodes the diverse spectral sequences spectral-wise to generate a corresponding spectral token. Then, the spectral token is used as guidance to interact with the patch token to learn the change rules. More significantly, to optimize the learning of difference information, we design a difference amplification module to highlight discriminative features by adaptively integrating the difference information into the feature embedding. Finally, the binary CD result is obtained by a multilayer perceptron. The experimental results on three homogeneous datasets and one heterogeneous dataset demonstrate that the proposed STCD-Former outperforms other state-of-the-art methods both quantitatively and visually. PubDate:
2023
Issue No: Vol. 16 (2023)
- CroFuseNet: A Semantic Segmentation Network for Urban Impervious Surface
Extraction Based on Cross Fusion of Optical and SAR Images-
Authors:
Wenfu Wu;Songjing Guo;Zhenfeng Shao;Deren Li;
Pages: 2573 - 2588 Abstract: The fusion of optical and synthetic aperture radar (SAR) images is a promising way to extract urban impervious surface (IS) accurately. Previous studies have shown that the feature-level fusion of optical and SAR images can significantly improve IS extraction. However, they generally use simple layer stacking for feature fusion, ignoring the interaction between optical and SAR images. Besides, most of the features they use are manually extracted shallow features, such as texture and geometric features, lacking the high-level semantic features of the images. The lack of publicly available IS datasets is considered an obstacle that prevents the extensive use of deep learning models in IS extraction. Therefore, this study first creates an open and accurate IS dataset based on optical and SAR images, and then proposes a semantic segmentation network based on the cross fusion of optical and SAR image features, namely CroFuseNet, for IS extraction. In CroFuseNet, we design a cross fusion module to fuse features of optical and SAR images for better complementarity between the two types of images, and we propose a multimodal feature aggregation module to aggregate specific high-level features from optical and SAR images. To validate the proposed CroFuseNet, we compare it with two classical machine learning algorithms and four state-of-the-art deep learning models. The proposed model has the highest accuracy, with OA, MIoU, and F1-score of 97.77%, 0.9495, and 0.9770, respectively. The quantitative and qualitative experimental results demonstrate that the proposed model is superior to these comparative algorithms. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Framework to Assess Remote Sensing Algorithms for Satellite-Based Flood
Index Insurance-
Authors:
Mitchell Thomas;Elizabeth Tellman;Daniel E. Osgood;Ben DeVries;Akm Saiful Islam;Michael S. Steckler;Maxwell Goodman;Maruf Billah;
Pages: 2589 - 2604 Abstract: Remotely sensed data have the potential to monitor natural hazards and their consequences on socioeconomic systems. However, in much of the world, inadequate validation data of disaster damage make reliable use of satellite data difficult. We attempt to strengthen the use of satellite data for one application—flood index insurance—which has the potential to manage the largely uninsured losses from floods. Flood index insurance is a particularly challenging application of remote sensing due to floods’ speed, unpredictability, and the significant data validation required. We propose a set of criteria for assessing remote sensing flood index insurance algorithm performance and provide a framework for remote sensing application validation in data-poor environments. Within these criteria, we assess several validation metrics—spatial accuracy compared to high-resolution PlanetScope imagery (F1), temporal consistency as compared to river water levels (Spearman's ρ), and correlation to government damage data (R2)—that measure index performance. With these criteria, we develop a Sentinel-1 flood inundation time series in Bangladesh at high spatial (10 m) and temporal (∼weekly) resolution and compare it to a previous Sentinel-1 algorithm and a Moderate Resolution Imaging Spectroradiometer (MODIS) time series used in flood index insurance. Results show that the adapted Sentinel-1 algorithm (F1avg = 0.925, ρavg = 0.752, R2 = 0.43) significantly outperforms previous Sentinel-1 and MODIS algorithms on the validation criteria. Beyond Bangladesh, our proposed validation criteria can be used to develop and validate better remote sensing products for index insurance and other flood applications in places with inadequate ground truth damage data. PubDate:
2023
Issue No: Vol. 16 (2023)
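The temporal-consistency metric, Spearman's ρ, is just the Pearson correlation of the ranks of the two series. A numpy-only sketch (assuming no ties; the gauge and flood-area numbers below are hypothetical, not from the Bangladesh study):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks
    (assumes no ties, which keeps the ranking step trivial)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

water_level = np.array([2.1, 3.4, 5.0, 4.2, 2.8])  # hypothetical gauge (m)
flood_area = np.array([120, 300, 710, 520, 150])   # hypothetical extent (km^2)
print(round(spearman_rho(water_level, flood_area), 3))  # → 1.0
```

A ρ near 1 says the detected flood extent rises and falls with the river, even if the two series are on different scales.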
- Linear Feature-Based Image/LiDAR Integration for a Stockpile Monitoring
and Reporting Technology-
Authors:
Seyyed Meghdad Hasheminasab;Tian Zhou;Ayman Habib;
Pages: 2605 - 2623 Abstract: Stockpile monitoring has been recently conducted with the help of modern remote sensing techniques—e.g., terrestrial/aerial photogrammetry/LiDAR—that can efficiently produce accurate 3-D models for the area of interest. However, monitoring of indoor stockpiles still requires more investigation due to unfavorable conditions in these environments such as a lack of global navigation satellite system signals and/or homogeneous texture. This article develops a fully automated image/LiDAR integration framework that is capable of generating accurate 3-D models with color information for stockpiles under challenging environmental conditions. The derived colorized 3-D point cloud can be subsequently used for volume estimation and visual inspection of stockpiles. The main contribution of the developed strategy is using automatically derived conjugate image/LiDAR linear features for simultaneous registration and camera/LiDAR system calibration. Data for this article are acquired using a camera-assisted LiDAR mapping platform—denoted as stockpile monitoring and reporting technology—which was recently designed as a time-efficient and cost-effective bulk material tracking solution. Experimental results on three datasets show that the developed framework outperforms a classical planar feature-based registration technique in terms of the alignment of the acquired point cloud. Results also indicate that the proposed approach can lead to a high relative accuracy between image lines and their corresponding back-projected LiDAR features in the range of 4–7 pixels. PubDate:
2023
Issue No: Vol. 16 (2023)
- Optimized Nonlinear PRI Variation Strategy Using Knowledge-Guided Genetic
Algorithm for Staggered SAR Imaging-
Authors:
Xin Qi;Yun Zhang;Yicheng Jiang;Elisa Giusti;Marco Martorella;
Pages: 2624 - 2643 Abstract: Staggered synthetic aperture radar (SAR), which operates with variable pulse repetition interval (PRI), staggers blind areas to solve the blind range problem caused by constant PRI in conventional high-resolution wide-swath SAR imaging. The PRI variation strategy determines the blind area distribution, and thus has a significant influence on the imaging performance in staggered mode. Generally, the existing strategies based on linear PRI variation can control the blind areas in a straightforward way, which has achieved impressive results. However, the linearity of the PRI variation imposes regularity or even periodicity on the locations of the blind areas, which limits the distribution of the blind areas. The imaging performance has the potential to be further improved by introducing much more irregularity into the PRI sequences. To this end, this article proposes an optimized nonlinear PRI variation strategy for staggered SAR mode. First, a novel objective function is defined that quantitatively measures the uniformity of the blind area distribution along the slant range and the discontinuity of the blind area distribution along the azimuth. Subsequently, the optimum nonlinear PRI variation strategy is found using an optimization problem and the proposed objective function. A knowledge-guided genetic algorithm is proposed to solve the optimization problem. Comparisons with the existing linear variation strategies show that the proposed strategy can provide a superior imaging performance after reconstruction with a lower objective function value. Simulations and experiments on raw data generated in staggered SAR mode are performed to verify the effectiveness of the optimized nonlinear PRI variation strategy. PubDate:
2023
Issue No: Vol. 16 (2023)
- Composite Analysis-Based Machine Learning for Prediction of Tropical
Cyclone-Induced Sea Surface Height Anomaly-
Authors:
Hongxing Cui;Danling Tang;Huizeng Liu;Yi Sui;Xiaowei Gu;
Pages: 2644 - 2653 Abstract: Sea surface height anomaly (SSHA) induced by tropical cyclones (TCs) is closely associated with oscillations and is a crucial proxy for thermocline structure and ocean heat content in the upper ocean. The prediction of TC-induced SSHA, however, has rarely been investigated. This study presents a new composite analysis-based random forest (RF) approach to predict daily TC wind-pump-induced SSHA. The proposed method utilizes the TC's characteristics and prestorm upper-ocean parameters as input features to predict TC-induced SSHA up to 30 days after TC passage. Simulation results suggest that the proposed method is skillful at inferring both the amplitude and temporal evolution of SSHA induced by TCs of different intensity groups. Using a TC-centered 5° × 5° box, the proposed method achieves highly accurate prediction of TC-induced SSHA over the Western North Pacific with a root mean square error of 0.024 m, outperforming alternative machine learning methods and the numerical model. Moreover, the proposed method also demonstrates good prediction performance in different geographical regions, i.e., the South China Sea and the Western North Pacific subtropical ocean. The study provides insight into the application of machine learning to improving the prediction of SSHA influenced by extreme weather conditions. Accurate prediction of TC-induced SSHA allows for better preparedness and response, reducing the impact of extreme events (e.g., storm surge) on people and property. PubDate:
2023
Issue No: Vol. 16 (2023)
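The "composite analysis" part of such an approach usually means averaging the day-relative SSHA evolution over all storms within an intensity group before any regression is fit. A sketch of that step only, with invented grouping and SSHA values (the actual feature construction in the paper may differ):

```python
import numpy as np

def composite(series_by_storm, groups):
    """Average day-relative SSHA series over the storms in each group."""
    return {g: np.mean([series_by_storm[s] for s in ids], axis=0)
            for g, ids in groups.items()}

# hypothetical daily SSHA (m) for days 0-4 after passage
series = {
    "tc1": np.array([-0.06, -0.05, -0.04, -0.03, -0.02]),
    "tc2": np.array([-0.08, -0.07, -0.05, -0.04, -0.03]),
    "tc3": np.array([-0.02, -0.02, -0.01, -0.01, 0.00]),
    "tc4": np.array([-0.04, -0.03, -0.02, -0.02, -0.01]),
}
groups = {"intense": ["tc1", "tc2"], "weak": ["tc3", "tc4"]}
comp = composite(series, groups)
print(round(float(comp["intense"][0]), 3))   # mean day-0 drawdown → -0.07
```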
- A Fuzzy-Boundary Enhanced Trident Network for Parcel Extraction in the
Urban–Rural Area-
Authors:
Jiajun Zhu;Zhuoqun Chai;Zhaopu Song;Zhanpeng Chen;Qian Shi;
Pages: 2654 - 2667 Abstract: As the basic unit of farmland, the parcel is crucial for remote sensing tasks, such as urban management. Previous studies of farmland parcel extraction are based on boundary detection and instance segmentation methods. However, these methods perform poorly on parcels with complex shapes and fuzzy boundaries due to insufficient feature extraction capability. Moreover, for lack of multiscale feature extraction and fusion, they struggle to accurately extract farmland parcels at different scales. To address these issues, we propose a fuzzy-boundary enhanced trident network, named FBETNet, to enhance the features of fuzzy boundaries and generate multiscale parcels. First, a semantic-guided multitask strategy is introduced to enhance the features of fuzzy boundaries. Second, we design a multiscale trident module to further improve multiscale feature extraction. Finally, an adversarial data augmentation strategy is employed in the training phase to strengthen the robustness and stability of our proposed method. Experiments show that our proposed method improves significantly in both accuracy and visualization, especially for parcels with fuzzy boundaries and complex shapes. PubDate:
2023
Issue No: Vol. 16 (2023)
- Uncertainty Support in the Spectral Information System SPECCHIO
-
Authors:
Andreas Hueni;Kimberley Mason;Simon Trim;
Pages: 2668 - 2680 Abstract: The spectral information system SPECCHIO was updated to support the generic handling of uncertainty information in the form of uncertainty tree diagrams. The updates involve changes to the relational database model as well as dedicated methods provided by the SPECCHIO application programming interface. A case study selected from classic field spectroscopy demonstrates the use of the functionality. In conclusion, database-centric automated uncertainty propagation in combination with measurement protocol standardization will provide a crucial step toward spectroscopy data accompanied by propagated, traceable uncertainty information. PubDate:
2023
Issue No: Vol. 16 (2023)
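Propagating an uncertainty tree ultimately collapses the leaf components with the standard GUM root-sum-of-squares rule for uncorrelated inputs, u_c = sqrt(Σᵢ (cᵢ·uᵢ)²). A sketch of that rule with a hypothetical field-spectroscopy budget; this illustrates the general principle, not SPECCHIO's actual API:

```python
import math

def combined_uncertainty(sensitivities, uncertainties):
    """GUM root-sum-of-squares rule for uncorrelated inputs:
    u_c = sqrt(sum_i (c_i * u_i)^2)."""
    return math.sqrt(sum((c * u) ** 2
                         for c, u in zip(sensitivities, uncertainties)))

# hypothetical budget: radiometric noise, reference-panel calibration,
# and stray light, each entering with unit sensitivity
u_c = combined_uncertainty([1.0, 1.0, 1.0], [0.004, 0.003, 0.0012])
print(round(u_c, 4))   # → 0.0051
```

Because the rule composes, each internal node of a tree diagram can be evaluated this way from its children, which is what makes database-driven automation of the propagation feasible.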
- ShapeFormer: A Shape-Enhanced Vision Transformer Model for Optical Remote
Sensing Image Landslide Detection-
Authors:
Pengyuan Lv;Lusha Ma;Qiaomin Li;Fang Du;
Pages: 2681 - 2689 Abstract: Landslides pose a serious threat to human life, safety, and natural resources. Remote sensing images can be used to effectively monitor landslides at a large scale, which is of great significance for pre-disaster warning and post-disaster assistance. In recent years, deep learning-based methods have made great progress in the field of remote sensing image landslide detection. In remote sensing images, landslides display a variety of scales and shapes. In this article, to better extract and keep the multiscale shape information of landslides, a shape-enhanced vision transformer (ShapeFormer) model is proposed. For the feature extraction, a pyramid vision transformer (PVT) model is introduced, which directly models the global information of local elements at different scales. To learn the shape information of different landslides, a shape feature extraction branch is designed, which uses the adjacent feature maps at different scales in the PVT model to improve the boundary information. After the feature extraction step, a decoder with deconvolutional layers follows, which combines the multiple features and gradually recovers the original resolution of the combined features. A softmax layer is connected with the combined features to acquire the final pixel-wise result. The proposed ShapeFormer model was tested on two public datasets—the Bijie dataset and the Nepal dataset—which have different spectral and spatial characteristics. The results, when compared with those of some of the state-of-the-art methods, show the potential of the proposed method for use with multisource optical remote sensing data for landslide detection. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Light-Weighted Hypergraph Neural Network for Multimodal Remote Sensing
Image Retrieval-
Authors:
Hongfeng Yu;Chubo Deng;Liangjin Zhao;Lingxiang Hao;Xiaoyu Liu;Wanxuan Lu;Hongjian You;
Pages: 2690 - 2702 Abstract: With the continuous maturation of remote sensing technology, the quality and quantity of available remote sensing images have surpassed any previous period. In this context, the content-based remote sensing image retrieval (CBRSIR) task has attracted considerable attention and research interest. Existing CBRSIR works mainly face the following problems. First, few works realize one-to-many cross-modal image retrieval (such as using an optical image to retrieve SAR and optical images at the same time); second, research mainly focuses on small-area, target-level retrieval, with little work on semantic-level retrieval of whole images; last but not least, most existing networks are characterized by massive parameters and heavy computing needs, and cannot be applied to resource-constrained edge devices with power and storage limits. To alleviate these bottlenecks, this article introduces a novel lightweight nonlocal semantic fusion network based on a hypergraph structure for CBRSIR (abbreviated as HGNLSF-Net). Specifically, using the topological characteristics of the hypergraph, the framework models the relationships among multiple nodes so as to better capture the global features of remote sensing images with fewer parameters and less computation. In addition, since nonlocal semantics often involves considerable noise, a hard-link module is constructed to filter it. Experimental results on a typical CBRSIR dataset, the Multi-modal Multi-temporal Remote Sensing Image Retrieval Dataset (MMRSIRD), show that, with fewer parameters, the proposed HGNLSF-Net outperforms other methods and achieves optimal retrieval performance. PubDate:
2023
Issue No: Vol. 16 (2023)
- Automated Detection of Hydrothermal Emission Signatures From Multibeam
Echo Sounder Images Using Deep Learning-
Authors:
Kazuhide Mimura;Kentaro Nakamura;Kazuhiro Takao;Kazutaka Yasukawa;Yasuhiro Kato;
Pages: 2703 - 2710 Abstract: Seafloor massive sulfide deposits have attracted attention as a mineral resource, as they contain a wide variety of base, precious, and other valuable critical metals. Previous studies have shown that signatures of hydrothermal activity can be detected by a multibeam echo sounder (MBES), which would be beneficial for exploring sulfide deposits. Although detecting such signatures from acoustic images is currently performed by skilled humans, automating this process could improve the efficiency and cost effectiveness of exploration for seafloor deposits. Herein, we attempted to establish a method for automated detection of MBES water column anomalies using deep learning models. First, we compared the “Mask R-CNN” and “YOLO-v5” detection model architectures, wherein YOLO-v5 yielded higher F1 scores. We then compared the number of training classes and found that models trained with two classes (signal and noise) exhibited superior performance compared with models trained with only one class (signal). Finally, we examined the number of trainable parameters and obtained the best model performance when the YOLO-v5l model, with a large number of trainable parameters, was used in the two-class training process. The best model had a precision of 0.928, a recall of 0.881, and an F1 score of 0.904. Moreover, this model achieved a low false alarm rate (less than 0.7%) and a high detection speed (20−25 ms per frame), indicating that it can be applied in the field for automatic and real-time exploration of seafloor hydrothermal deposits. PubDate:
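As a quick sanity check, the F1 score reported above follows directly from the stated precision and recall via the standard harmonic-mean formula (this snippet is illustrative, not the authors' code):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported for the best YOLO-v5l model
print(round(f1_score(0.928, 0.881), 3))  # → 0.904
```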
2023
Issue No: Vol. 16 (2023)
- 57-Year Ice Velocity Dynamics in Byrd Glacier Based on Multisource Remote
Sensing Data-
Authors:
Xiaohan Yuan;Gang Qiao;Yanjun Li;
Pages: 2711 - 2727 Abstract: Long time-series glacier ice velocity reflects local climate changes and can be used to estimate mass balance (MB) changes, a critical parameter for understanding glacier–climate interactions and predicting sea level rise. However, due to the difficulty of image matching caused by the poor quality of historical satellite images, glacier ice velocity data before 1999 were insufficient. Here, we propose a multiple-constraint dense image matching approach for mapping historical ice velocity based on early poor-quality images from ARGON, Landsat-1, and Landsat-4/5. We successfully applied this method to Byrd Glacier to generate its historical ice velocity maps from 1963 to 1999. Additionally, ice velocity maps of Byrd Glacier from 2000 to 2014 were generated by the IMCORR software using Landsat-7 and Landsat-8 images. Combined with the ice velocity maps from the Global Land Ice Velocity Extraction from Landsat-8 dataset since 2014, we obtained the ice velocity of Byrd Glacier over 57 years. Our results showed that the glacier experienced slight fluctuations in ice velocity, which may be driven not by calving events in the studied portion of the Ross Ice Shelf or by air temperature changes, but by the activity of subglacial drainage systems. Furthermore, Byrd Glacier showed a positive MB (average rate of 2.6 ± 2.0 Gt/year) from 1963 to 2020, indicating that global climate change may have a limited impact on it. PubDate:
2023
Issue No: Vol. 16 (2023)
- Multi-Scale Fast Fourier Transform Based Attention Network for
Remote-Sensing Image Super-Resolution-
Authors:
Zheng Wang;Yanwei Zhao;Jiacheng Chen;
Pages: 2728 - 2740 Abstract: Recently, with the rise and progress of convolutional neural networks (CNNs), CNN-based remote-sensing image super-resolution (RSSR) methods have gained considerable advancement and shown great power for image reconstruction tasks. However, most of these methods cannot handle well the enormous number of objects at different scales contained in remote-sensing images, which limits super-resolution performance. To address these issues, we propose a multiscale fast Fourier transform (FFT) based attention network (MSFFTAN), which employs a multi-input U-shaped structure as its backbone for accurate RSSR. Specifically, we carefully design an FFT-based residual block consisting of an image-domain branch and a Fourier-domain branch to extract local details and global structures simultaneously. In addition, a local–global channel attention block is developed to further enhance the reconstruction of small targets. Finally, we present a branch-gated selective block to adaptively explore and aggregate features from multiple scales and depths. Extensive experiments on two public datasets have demonstrated the superiority of MSFFTAN over state-of-the-art (SOTA) approaches in terms of both quantitative metrics and visual quality. The peak signal-to-noise ratio of our network is 1.5 dB higher than that of the SOTA method on UCMerced LandUse with downscaling factor 2. PubDate:
2023
Issue No: Vol. 16 (2023)
- Space Geodetic Views on the 2021 Central Greece Earthquake Sequence: 2D
Deformation Maps Decomposed From Multi-Track and Multi-Temporal Sentinel-1 InSAR Data-
Authors:
Zhen Li;Shan-Shan Xu;Zhang-Feng Ma;
Pages: 2741 - 2752 Abstract: Pioneering efforts have thoroughly studied the deformation decomposition of a single earthquake using a pair of ascending (ASC) and descending (DES) track interferometric synthetic aperture radar (InSAR) data. However, deformation decomposition of sequential events is rarely discussed and hard to implement, because it is difficult to ensure that the deformation related to each earthquake is recorded by a pair of ASC and DES track data. Three sequential earthquakes (Mw>5.5) hit Central Greece in March 2021, and this earthquake sequence provides a perfect case for studying 2-D (east-west and up-down) deformation decomposition when the above premise cannot be satisfied. In this context, we propose a Multi-track and Multi-temporal 2-D (MTMT2-D) method. Its novelty and underlying rationale are to decompose the 2-D deformations of each event by fusing multitrack and multitemporal interferograms. Based on the decomposed deformations, we invert the slip distribution of each of the three earthquakes. We found that the decomposed deformations constrain the fault geometry better than a single InSAR interferogram. Furthermore, our geodetic inversion results suggest a domino-like triggering rupture process for this earthquake sequence, indicating that the MTMT2-D method can potentially reveal more details about earthquake sequences. PubDate:
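For readers unfamiliar with the decomposition step, the standard single-event 2-D decomposition from one ASC/DES pair reduces to a small linear solve. The sketch below is a generic illustration with hypothetical look-vector components, not the authors' MTMT2-D implementation (which additionally fuses multitrack and multitemporal interferograms):

```python
import numpy as np

def decompose_2d(d_asc, d_desc, look_asc, look_desc):
    """Solve for (east, up) displacement from two LOS observations:
    [[e_a, u_a], [e_d, u_d]] @ [d_east, d_up] = [d_asc, d_desc]."""
    A = np.array([look_asc, look_desc], dtype=float)
    return np.linalg.solve(A, np.array([d_asc, d_desc], dtype=float))

# Hypothetical (east, up) components of the unit look vectors
look_asc, look_desc = (-0.6, 0.7), (0.6, 0.7)

# Forward-model a displacement of 10 mm east + 5 mm up, then recover it
d_asc = -0.6 * 10.0 + 0.7 * 5.0   # -2.5 mm along ascending LOS
d_desc = 0.6 * 10.0 + 0.7 * 5.0   #  9.5 mm along descending LOS
print(decompose_2d(d_asc, d_desc, look_asc, look_desc))  # ≈ [10.  5.]
```

The north component is omitted because near-polar SAR orbits are nearly insensitive to it, which is why the article restricts itself to east-west and up-down.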
2023
Issue No: Vol. 16 (2023)
- MashFormer: A Novel Multiscale Aware Hybrid Detector for Remote Sensing
Object Detection-
Authors:
Keyan Wang;Feiyu Bai;Jiaojiao Li;Yajing Liu;Yunsong Li;
Pages: 2753 - 2763 Abstract: Object detection is a critical and demanding topic in the processing of satellite and airborne images. Targets in remote sensing imagery appear at various sizes against complicated backgrounds, which makes object detection extremely challenging. We address these issues in this article by introducing MashFormer, an innovative multiscale-aware hybrid detector integrating a convolutional neural network (CNN) and a transformer. Specifically, MashFormer employs transformer blocks to complement the CNN-based feature extraction backbone, capturing relationships between long-range features and enhancing representative ability in complex background scenarios. Because object sizes vary greatly in remote sensing scenarios, a multilevel feature aggregation component, incorporating a cross-level feature alignment module, is designed to alleviate the semantic discrepancy between features from shallow and deep layers and to improve detection performance for objects with multiscale characteristics. To verify the effectiveness of the proposed MashFormer, comparative experiments are carried out against other cutting-edge methodologies on the publicly available high-resolution remote sensing detection and Northwestern Polytechnical University VHR-10 datasets. The experimental findings confirm the effectiveness and superiority of the proposed model, which achieves higher mean average precision than the other methodologies. PubDate:
2023
Issue No: Vol. 16 (2023)
- Moon-Based Ground Penetrating Radar Derivation of the Helium-3 Reservoir
in the Regolith at the Chang'E-3 Landing Site-
Authors:
Chunyu Ding;Qingquan Li;Jiangwan Xu;Zhonghan Lei;Jiawei Li;Yan Su;Shaopeng Huang;
Pages: 2764 - 2776 Abstract: The Moon-based ground penetrating radar (GPR) carried by the Yutu rover performed in-situ radar measurements to explore extraterrestrial objects, providing an unprecedented opportunity to study the shallow subsurface structure of the Moon and its internal resources. Exploiting lunar resources might be one of the solutions to the Earth's future energy shortage. In this article, first, the thickness distribution of the lunar regolith at the Chang'E-3 landing site is derived using the high-frequency Yutu radar observation data. Second, the surface concentration of helium-3 is determined based on the surface TiO₂ content of the lunar regolith. Finally, the reservoir of helium-3 resources in the lunar regolith is estimated. Our results suggest that the helium-3 reservoir along the Yutu rover's route from navigation point N105 to N208 (~445 m²) is ~37–51 g, and that its helium-3 content per unit area is ~0.083–0.114 g/m², at least five times higher than the global average. Currently, nuclear fusion experiments face a severe shortage of tritium fuel; we discuss the possibility of replacing it with lunar helium-3 as the fuel for nuclear fusion. We also suggest that the Chang'E-3 landing area is a potential site for future exploitation of lunar helium-3. Our results provide a valuable reference for evaluating the economics and feasibility of mining in-situ helium-3 resources on the Moon. PubDate:
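The per-unit-area figures quoted above follow from dividing the estimated reservoir mass by the surveyed area; a quick arithmetic check:

```python
# Values quoted in the abstract: ~37-51 g of helium-3 over ~445 m^2
area_m2 = 445.0
mass_low_g, mass_high_g = 37.0, 51.0

per_area_low = mass_low_g / area_m2    # ~0.083 g/m^2
per_area_high = mass_high_g / area_m2  # ~0.115 g/m^2 (quoted as 0.114)
print(round(per_area_low, 3), round(per_area_high, 3))
```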
2023
Issue No: Vol. 16 (2023)
- An Individual Tree Segmentation Method From Mobile Mapping Point Clouds
Based on Improved 3-D Morphological Analysis-
Authors:
Weixi Wang;Yuhang Fan;You Li;Xiaoming Li;Shengjun Tang;
Pages: 2777 - 2790 Abstract: Street tree extraction based on 3-D mobile mapping point clouds plays an important role in building smart cities and creating highly accurate urban street maps. Existing methods often over- or under-segment when separating overlapping street tree canopies and extracting geometrically complex trees. To address this problem, we propose a method based on improved 3-D morphological analysis for extracting street trees from mobile laser scanner (MLS) point clouds. First, a deep-learning-based 3-D semantic point cloud segmentation framework is used to preclassify the original point cloud and obtain the vegetation points in the scene. Considering the influence of terrain unevenness, the terrain relief is removed from the vegetation point cloud, and a slice point cloud containing tree trunks is obtained through spatial filtering on height. On this basis, a voxel-based region growing method constrained by the changing rate of convex area is used to locate the street trees. We then propose a progressive tree crown segmentation method, which first completes a preliminary individual segmentation of the tree crown point cloud using voxel-based region growth constrained by the minimum increment rule, and then optimizes the crown edges by “valley” structure-based clustering. In this article, the proposed method is validated and its accuracy evaluated using three MLS datasets collected from different scenarios. The experimental results show that the method can effectively identify and localize street trees with different geometries and has a good segmentation effect for street trees with large adhesion between canopies. The accuracy and recall of tree localization are higher than 96.08% and 95.83%, respectively, and the average precision and recall of instance segmentation on the three datasets are higher than 93.23% and 95.41%, respectively. PubDate:
2023
Issue No: Vol. 16 (2023)
- Characterizing Topographic Influences of Bushfire Severity Using Machine
Learning Models: A Case Study in a Hilly Terrain of Victoria, Australia-
Authors:
Saroj Kumar Sharma;Jagannath Aryal;Quanxi Shao;Abbas Rajabifard;
Pages: 2791 - 2807 Abstract: Topography plays a significant role in determining bushfire severity over a hilly landscape. However, the complex interrelationships between topographic variables and bushfire severity are difficult to quantify using traditional statistical methods. More recently, different machine learning (ML) models have become popular for characterizing complex relationships between environmental variables. Yet, few studies have specifically evaluated the suitability of ML models for predictive bushfire severity analysis. Hence, the aim of this research is twofold: first, to determine suitable ML models by assessing their performance in bushfire severity prediction using remote sensing data analytics, and second, to identify and investigate the topographic variables influencing bushfire severity. The results showed that random forest (RF) and gradient boosting (GB) models each had distinct advantages in predictive modeling of bushfire severity. The RF model showed higher precision (86% to 100%) than GB (59% to 72%) when predicting the low, moderate, and high severity classes, whereas the GB model demonstrated better recall, i.e., completeness of positive predictions (56% to 75%), than RF (49% to 61%) for those classes. Closer investigation of topographic characteristics showed a varying relationship of severity patterns across different morphological landform classes. Landforms with lower slope curvatures or unchanging slopes were more prone to severe burning than landforms with higher slope curvatures. Our results provide insights into how topography influences potential bushfire severity risks and recommend a purpose-specific choice of ML models. PubDate:
2023
Issue No: Vol. 16 (2023)
- Synchronous Chlorophyll-a and Sea Surface Salinity Variability in the
Equatorial Pacific Ocean-
Authors:
Wei Shi;Menghua Wang;
Pages: 2808 - 2818 Abstract: Using chlorophyll-a (Chl-a) concentration data derived from the Visible Infrared Imaging Radiometer Suite onboard the Suomi National Polar-orbiting Partnership, the in situ measurements from the tropical ocean atmosphere moorings, and the sea surface salinity (SSS) data from the Soil Moisture Active Passive mission and Aquarius satellite, we report synchronous Chl-a and SSS variability in the Equatorial Pacific Ocean on the daily and monthly bases. During the El Niño event in 2015, a decrease in Chl-a and SSS occurred and developed within the same timeframe, and possessed similar spatial patterns across the Equatorial Pacific Ocean. Enhanced Chl-a and SSS coincided and colocated (in timing, location, spatial coverage, and extent) during the La Niña event in 2020. In contrast, sea surface temperature variability did not relate to Chl-a and SSS variability across the Equatorial Pacific Ocean. Chl-a and SSS were found to covary on the daily basis driven by the tropical instability waves. The mechanism that caused the synchronous Chl-a and SSS variability in the Equatorial Pacific Ocean on both the daily and monthly bases is addressed and discussed. PubDate:
2023
Issue No: Vol. 16 (2023)
- Neural Network Fusion Processing and Inverse Mapping to Combine
Multisensor Satellite Data and Analyze the Prominent Features-
Authors:
Gunjan Joshi;Ryo Natsuaki;Akira Hirose;
Pages: 2819 - 2840 Abstract: In the last decade, the increase of active and passive earth observation satellites has provided us with more remote sensing data. This fact has led to enhanced interest in the fusion of different satellite data since some of the satellites have properties complementary to others. Fusion techniques can improve the estimation in areas of interest by using the complementary information and inferring unknown parameters. They also have the potential to provide high-resolution detailed classification maps. Thus, we propose a neural network, which combines and analyzes the data obtained from synthetic aperture radar (SAR) and optical sensors to provide high-resolution classification maps. The neural network employs a novel activation function to construct a neural network explainability method termed as inverse mapping for prominent feature analysis. By applying inverse mapping to the data fusion neural network, we can understand which input features are the prominent contributors for which classification outputs. Inverse mapping realizes backward signal flow based on teacher-signal backpropagation dynamics, which is consistent with its forward processing. It performs the contribution analysis of the data pixel by pixel and class by class. In this article, we focus on earthquake damage detection by dealing with SAR and optical sensor data of the 2018 Sulawesi earthquake in Indonesia. The fusion-based results show increased classification accuracy compared to the results of independent sensors. Moreover, we observe that inverse mapping shows reasonable explanations in a consistent manner. It also indicates the contributions of features different from straightforward counterparts, namely, pre- and post-seismic features, in the detection of particular classes. PubDate:
2023
Issue No: Vol. 16 (2023)
- Extracting Deciduous Forests Spring Phenology From Sentinel-1 Cross Ratio
Index-
Authors:
Huinan Yu;Yajie Yang;Changjing Wang;Rui Chen;Qiaoyun Xie;Guoxiang Liu;Gaofei Yin;
Pages: 2841 - 2850 Abstract: Deciduous forests spring phenology plays a major role in balancing the carbon cycle. The cloud cover affects images acquired from optical sensors and reduces their performance in monitoring phenology. Synthetic aperture radar (SAR) can regularly acquire images day and night independent of weather conditions, which offers more frequent observations of vegetation phenology compared to optical sensors. However, it remains unclear how SAR data-derived indices vary across different growth stages of forests. Here, we explored the relationship between the cross ratio (CR) index derived from Sentinel-1 data and the deciduous forest growth process. We proposed a deciduous forests spring phenology extraction method using CR and compared the extracted start of growing season (SOS) with those extracted using normalized difference vegetation index (NDVI) derived from Sentinel-2 optical satellite data and green chromatic coordinate (GCC) derived from ground PhenoCam data. We extracted the SOS of 41 PhenoCam sites over the Continental United States in 2018 using the dynamic threshold method. Our results showed that the variations of CR time series are closely related to the phenological processes of deciduous forests. The SOS extracted using CR data showed high consistency with those extracted using GCC (R2 = 0.46), with slightly lower accuracy compared with NDVI-derived results (R2 = 0.62). Our study illustrates the value and mechanism of deciduous forests spring phenology extraction using SAR data and provides a reference for using SAR data to improve forest phenology extraction in addition to using optical remote sensing data, especially in rainy and cloudy regions. PubDate:
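The dynamic threshold method mentioned above is commonly implemented as the first day of year at which the index crosses a fixed fraction of its seasonal amplitude. The following is a minimal generic sketch (the 50% fraction and the synthetic logistic green-up curve are illustrative assumptions, not the article's settings):

```python
import numpy as np

def sos_dynamic_threshold(doy, index, fraction=0.5):
    """Return the first day-of-year at which the index reaches
    min + fraction * (max - min) of its seasonal amplitude."""
    index = np.asarray(index, dtype=float)
    threshold = index.min() + fraction * (index.max() - index.min())
    crossings = np.nonzero(index >= threshold)[0]
    return int(doy[crossings[0]]) if crossings.size else None

# Synthetic logistic green-up curve with its midpoint at DOY 120
doy = np.arange(1, 181)
index = 1.0 / (1.0 + np.exp(-0.1 * (doy - 120)))
print(sos_dynamic_threshold(doy, index))  # → 120
```

In practice the time series (CR, NDVI, or GCC) would first be smoothed or fitted before thresholding; that step is omitted here.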
2023
Issue No: Vol. 16 (2023)
- A Novel Unsupervised Evaluation Metric Based on Heterogeneity Features for
SAR Image Segmentation-
Authors:
Hang Yu;Xiangjie Yin;Zhiheng Liu;Suiping Zhou;Chenyang Li;Haoran Jiang;
Pages: 2851 - 2867 Abstract: The segmentation of synthetic aperture radar (SAR) images is vital and fundamental in SAR image processing, so evaluating segmentation results without ground truth (GT) is an essential part of segmentation algorithm comparison, parameter selection, and optimization. In this study, we first extracted the heterogeneity features (HF) of SAR images to adequately describe the SAR image targets; these were extracted by the proposed intensity feature extractor (IFEE) based on edge-hold and two fruitful methods. We then proposed a novel and effective unsupervised evaluation (UE) metric G to evaluate SAR image segmentation results, which is based on HF and uses the global intrasegment homogeneity (GHO), global intersegment heterogeneity (GHE), and edge validity index (EVI) as local segmentation measures. The effectiveness of GHO, GHE, EVI, and G was revealed by visual interpretation as qualitative analysis and supervised evaluation (SE) as quantitative analysis. In the experiments, four segmentation algorithms are used to segment numerous synthetic and real SAR images as the evaluation objects, and four widely used metrics are utilized for comparison. The results show the effectiveness and superiority of the proposed metric. Moreover, the mean correlation between the proposed UE metric and the SE metric is more than 0.67 and 0.99, which indicates that the proposed metric helps in choosing parameters of segmentation algorithms without GT. PubDate:
2023
Issue No: Vol. 16 (2023)
- Using Artificial Neural Networks to Couple Satellite C-Band Synthetic
Aperture Radar Interferometry and Alpine3D Numerical Model for the Estimation of Snow Cover Extent, Height, and Density-
Authors:
Gianluca Palermo;Edoardo Raparelli;Paolo Tuccella;Massimo Orlandi;Frank Silvio Marzano;
Pages: 2868 - 2888 Abstract: This work presents a new approach for the estimation of snow extent, height, and density in regions of complex orography, which combines differential interferometric synthetic aperture radar (DInSAR) data and snowpack numerical model data through artificial neural networks (ANNs). The estimation method, subdivided into classification and estimation stages, is based on two ANNs trained by a DInSAR response model coupled with Alpine3D snow cover numerical model outputs. Auxiliary satellite training data from the visible-infrared MODIS imager as well as digital elevation and land cover models are used to discriminate wet and dry snow areas. For snow cover classification, the ANN-based estimation methodology is combined with fuzzy logic and compared with a consolidated decision-threshold approach using C-band SAR backscattering information. For snow height (SH) and density estimation, the proposed methodology is compared with an analytical inverse method and two model-based statistical techniques (linear regression and maximum likelihood). The validation is carried out in the Central Apennines, a mountainous area in Italy with an extension of about 10⁴ km² and peaks up to 2912 m, using in situ data collected between December 2018 and February 2019. Results show that the ANN-based technique has a snow cover area classification accuracy of more than 80% when compared with MODIS maps. Estimation bias and root mean square error are equal to about 0.5 cm and 20 cm for SH, and to 5 kg/m³ and 80 kg/m³ for snow density. As expected, worse results are associated with low DInSAR coherence between the two repeat passes and with snow melting periods. PubDate:
2023
Issue No: Vol. 16 (2023)
- Mapping Invasive Aquatic Plants in Sentinel-2 Images Using Convolutional
Neural Networks Trained With Spectral Indices-
Authors:
Elena Cristina Rodríguez-Garlito;Abel Paz-Gallardo;Antonio Plaza;
Pages: 2889 - 2899 Abstract: Multispectral images collected by the European Space Agency's Sentinel-2 satellite offer a powerful resource for accurately and efficiently mapping areas affected by the distribution of invasive aquatic plants. In this work, we use different spectral indices to detect invasive aquatic plants in the Guadiana river, Spain. Our methodology uses a convolutional neural network (CNN) as the baseline classifier and trains it using spectral indices calculated using different Sentinel-2 band combinations. Specifically, we consider the following spectral indices: With two bands, we calculate the normalized difference vegetation index, normalized difference water index, and normalized difference infrared index. With three bands, we calculate the red–green–blue composite and the floating algae index. Finally, we also use four bands to calculate the bare soil index. In our results, we observed that CNNs can better map invasive aquatic plants in the considered case study when trained intelligently (using spectral indices) as compared to using all spectral bands provided by the Sentinel-2 instrument. PubDate:
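The two-band indices listed above all share the same normalized-difference form; a minimal sketch using Sentinel-2 band conventions (B3 = green, B4 = red, B8 = NIR; the reflectance values are illustrative, not from the article's data):

```python
import numpy as np

def normalized_difference(a, b):
    """Generic normalized-difference index: (a - b) / (a + b).
    NDVI = ND(NIR, red); NDWI = ND(green, NIR); NDII = ND(NIR, SWIR)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return (a - b) / (a + b)

# Illustrative Sentinel-2 reflectances for a densely vegetated pixel
b3_green, b4_red, b8_nir = 0.05, 0.05, 0.45

ndvi = normalized_difference(b8_nir, b4_red)    # high for dense vegetation
ndwi = normalized_difference(b3_green, b8_nir)  # high for open water
print(float(ndvi), float(ndwi))  # → 0.8 -0.8
```

Stacking a few such index maps as CNN input channels, as the article does, replaces the full set of Sentinel-2 bands with a compact, physically motivated representation.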
2023
Issue No: Vol. 16 (2023)
- Semisupervised Semantic Segmentation With Certainty-Aware Consistency
Training for Remote Sensing Imagery-
Authors:
Yongjie Guo;Feng Wang;Yuming Xiang;Hongjian You;
Pages: 2900 - 2914 Abstract: Semisupervised learning is a powerful method to lessen the cost of annotation for remote sensing semantic segmentation tasks. Recent research indicates that consistency training is one of the most effective strategies in semisupervised learning. The core of consistency training is keeping model outputs consistent under various perturbations. However, current consistency-training-based semisupervised semantic segmentation frameworks lack an analysis of model uncertainty, which increases the generation of semantic ambiguity on remote sensing images. Therefore, we propose the certainty-aware consistency training (CACT) strategy to mitigate the influence of semantic ambiguity caused by model uncertainty. The CACT strategy consists of two novel parts: certainty-aware consistency correction (CACC) and a class-balanced adaptive threshold (CBAT) strategy. CACC starts by generating a high-quality prediction target, then models the importance of the consistent output target and corrects the output predictions according to the certainty map, increasing the focus on reliable predictions. The CBAT strategy uses a dynamic class-balanced adaptive threshold to filter out unreliable predictions, further reducing the impact of semantic ambiguity. Finally, extensive experimental results on the DLRSD, WHDLD, and Potsdam datasets demonstrate that our framework performs excellently in semisupervised remote sensing semantic segmentation scenarios. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Novel SAR Image Despeckling Method Based on Local Filter With Nonlocal
Preprocessing-
Authors:
Chao Wang;Baolong Guo;Fangliang He;
Pages: 2915 - 2930 Abstract: Owing to its long-distance and strong-penetration characteristics, a synthetic aperture radar (SAR) imaging system can provide high-resolution ground information under poor climate conditions. Nevertheless, speckle is still a common interference in the output that deteriorates the content of SAR images and further affects the recognition of real objects. In this article, a new speckle suppression method is proposed from the perspective of exploring nonlocal and local SAR image features. Considering the statistical distribution of SAR images, a novel local filter termed the SAR-oriented guided bilateral filter is proposed to characterize the range and spatial similarity of SAR images. Meanwhile, an optimized nonlocal filter based on the weighted Schatten-$p$ norm is introduced to characterize the nonlocal self-similarity of SAR images by a low-rank model. As a preprocessing step, it yields nonlocal filtering features as the guidance image of the proposed SAR-oriented guided bilateral filter. By incorporating the nonlocal filtering feature into the local filter, the structured method achieves desirable despeckling results. Extensive experiments on real SAR images demonstrate that the proposed method outperforms several state-of-the-art methods in terms of both visual satisfaction and quantitative metrics. PubDate:
2023
Issue No: Vol. 16 (2023)
- WCDL: A Weighted Cloud Dictionary Learning Method for Fusing
Cloud-Contaminated Optical and SAR Images-
Authors:
Jing Ling;Hongsheng Zhang;
Pages: 2931 - 2941 Abstract: Cloud cover hinders accurate and timely monitoring of urban land cover (ULC). The combination of synthetic aperture radar (SAR) and optical data without cloud contamination has demonstrated promising performance in previous research. However, ULC studies on cloud-prone areas are scarce despite the inevitability of cloud cover, especially in the tropics and subtropics. This study proposes a novel weighted cloud dictionary learning (WCDL) method for fusing optical and SAR data for ULC classification in cloud-prone areas. We propose a cloud probability weighting model and a pixelwise cloud dictionary learning method that take the interference disparities at various cloud probability levels into account to mitigate cloud interference. Experiments reveal that the overall accuracy (OA) of fused data rises by more than 6% and 20% compared to single SAR and optical data, respectively. The method improves OA by 3% compared with other methods that directly stitch optical and SAR data together regardless of cloud interference, and it improves the producer's accuracy (PA) and user's accuracy (UA) of almost all land covers by up to 9%. Ablation studies further show that the cloud probability weighting model improves the OA of all classifiers by up to 5%, and the pixelwise cloud dictionary learning model improves OA by more than 2% for all cloud conditions, with UA and PA enhanced by up to 9% and 10%. The proposed WCDL method will serve as a reference for fusing cloud-contaminated optical and SAR data and for timely, continuous, and accurate land surface monitoring in cloudy areas. PubDate:
2023
Issue No: Vol. 16 (2023)
- UAVStereo: A Multiple Resolution Dataset for Stereo Matching in UAV
Scenarios-
Authors:
Xiaoyi Zhang;Xuefeng Cao;Anzhu Yu;Wenshuai Yu;Zhenqi Li;Yujun Quan;
Pages: 2942 - 2953 Abstract: Stereo matching is a fundamental task in 3-D scene reconstruction. Recently, deep learning-based methods have proven effective on some benchmark datasets, such as KITTI and SceneFlow. Unmanned aerial vehicles (UAVs) are commonly used for surface observation, and the images captured are frequently used for detailed 3-D reconstruction because of their high resolution and low-altitude acquisition. Currently, mainstream supervised learning networks require a significant amount of training data with ground-truth labels to learn model parameters. However, owing to the scarcity of UAV stereo-matching datasets, learning-based stereo matching methods in UAV scenarios are not fully investigated yet. To facilitate further research, this study proposes a pipeline for generating accurate and dense disparity maps using detailed meshes reconstructed based on UAV images and LiDAR point clouds. Through the proposed pipeline, we constructed a multiresolution UAV scenario dataset called UAVStereo, with over 34 000 stereo image pairs covering three typical scenes. To the best of our knowledge, UAVStereo is the first stereo matching dataset for UAV low-altitude scenarios. The dataset includes synthetic and real stereo pairs to enable generalization from the synthetic domain to the real domain. Furthermore, our UAVStereo dataset provides multiresolution and multiscene image pairs to accommodate various sensors and environments. In this article, we evaluate traditional and state-of-the-art deep learning methods, highlighting their limitations in addressing challenges in UAV scenarios and offering suggestions for future research. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Spatial PWV Retrieval Model Over Land for GCOM-W/AMSR2 Using Neural
Network Method: A Case in the Western United States-
Authors:
Zhaorui Gao;Nan Jiang;Yan Xu;Tianhe Xu;Rubing Zeng;Ao Guo;Yuhao Wu;
Pages: 2954 - 2962 Abstract: Precipitable water vapor (PWV) is an important and active part of the atmosphere. Microwave PWV retrieval is well established over the ocean but challenging over land. In this article, we establish a spatial microwave PWV retrieval model over land using a backpropagation neural network to combine high-precision ground-based global navigation satellite system (GNSS) data and satellite-borne data with high spatial continuity. Three years of data from 167 GNSS stations located in the western United States were used to train the network, and 40 untrained sites were selected as the test set. The root-mean-square error (RMSE) of the test set reaches 3.90 and 3.88 mm in the ascending (As) and descending (De) orbits, respectively. We then analyzed the influence of land cover types on the model over land. We found that stations located in areas with a single large-scale continuous land cover type had higher retrieval accuracy, whereas stations with diversified land cover types had lower precision. Furthermore, after fully considering the impact of land cover type, we built an improved model with 61 stations on single large-scale continuous grasslands; the results on 8 test stations showed that the RMSE reaches 3.41 and 3.31 mm in the As and De orbits, respectively. Compared with the previously established spatial model, the accuracy improved by about 13%. We attribute this to the stable physical properties (such as microwave emissivity) of the single large-scale continuous land cover type. PubDate:
2023
Issue No: Vol. 16 (2023)
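The RMSE figures quoted in the abstract above (e.g., 3.90 mm) follow the standard root-mean-square error definition. A minimal sketch of the metric; the sample values are illustrative, not from the paper:

```python
import numpy as np

def rmse(predicted, observed):
    """Root-mean-square error: sqrt of the mean squared residual.
    For PWV retrieval the inputs would be in mm."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Toy example: residuals of -1 and +1 mm give an RMSE of 1 mm
print(rmse([10.0, 12.0], [11.0, 11.0]))  # 1.0
```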
- Remote Sensing Image Recovery and Enhancement by Joint Blind Denoising and
Dehazing-
Authors:
Yan Cao;Jianchong Wei;Sifan Chen;Baihe Chen;Zhensheng Wang;Zhaohui Liu;Chengbin Chen;
Pages: 2963 - 2976 Abstract: Due to the hazy weather and the long-distance imaging path, the captured remote sensing image (RSI) may suffer from detail loss and noise pollution. However, simply applying dehazing operation on a noisy hazy image may result in noise amplification. Therefore, in this article, we propose joint blind denoising and dehazing for RSI recovery and enhancement to address this problem. First, we propose an efficient and effective noise level estimation method based on quad-tree subdivision and integrate it into fast and flexible denoising convolutional neural network for blind denoising. Second, a multiscale guided filter decomposes the denoised hazy image into base and detailed layers, separating the initial details. Then, the dehazing procedure using the corrected boundary constraint is implemented in the base layer, while a nonlinear sigmoid mapping function enhances the detailed layers. The last step is to fuse the enhanced detailed layers and the dehazed base layer to get the final result. Using both synthetic remote sensing hazy image (RSHI) datasets and real-world RSHI, we perform comprehensive experiments to evaluate the proposed method. Results show that our method is superior to well-known methods in both dehazing and joint denoising and dehazing tasks. PubDate:
2023
Issue No: Vol. 16 (2023)
- Multiscale Fusion Network Based on Global Weighting for Hyperspectral
Feature Selection-
Authors:
Jinjin Wang;Jiahang Liu;Jian Cui;Ji Luan;Yangyu Fu;
Pages: 2977 - 2991 Abstract: Feature selection (FS) is an important way to achieve high-precision and efficient classification of hyperspectral remote sensing images. However, most existing FS methods use a fixed scale to extract features, and the relationship between spatial and spectral dimensions is ignored. In fact, this correlation is useful for classification. In this article, a multiscale feature fusion network based on global weighting (MSFGW) is proposed, in which a global weighting mechanism is explored to capture spatial–spectral information at multiple scales. First, a multiscale feature extraction module composed of group convolution and dilated convolution is utilized to extract multiscale features. As the dilation rate increases, the module captures spatial differences at varying scales. Second, a 3-D weighting mechanism is used to combine the spatially and spectrally correlated information to reduce the interference of homologous and heterologous features and boost the feature discrimination ability. Then, the multiscale weighted features are fused to integrate the internal information of all bands at different scales. Finally, a band reconstruction network is used to select representative bands according to their entropy. Experimental results with state-of-the-art FS algorithms on four widely used hyperspectral datasets demonstrate that the features selected by MSFGW have obvious advantages in classification with only a few training samples. PubDate:
2023
Issue No: Vol. 16 (2023)
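The final step of the abstract above selects representative bands by entropy. A toy illustration of entropy-based band ranking; the histogram binning and top-k selection here are assumptions for illustration only, and the paper's band reconstruction network is not reproduced:

```python
import numpy as np

def band_entropy(band, bins=64):
    """Shannon entropy (in bits) of one band's value histogram."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def select_bands(cube, k):
    """Rank the B bands of an (H, W, B) cube by entropy; return top-k band indices."""
    scores = [band_entropy(cube[..., i]) for i in range(cube.shape[-1])]
    return sorted(np.argsort(scores)[-k:].tolist())
```

A constant band carries no information (entropy 0), so it is ranked below any band with spread in its values.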
- Brain-Inspired Remote Sensing Interpretation: A Comprehensive Survey
-
Authors:
Licheng Jiao;Zhongjian Huang;Xu Liu;Yuting Yang;Mengru Ma;Jiaxuan Zhao;Chao You;Biao Hou;Shuyuan Yang;Fang Liu;Wenping Ma;Lingling Li;Puhua Chen;Zhixi Feng;Xu Tang;Yuwei Guo;Xiangrong Zhang;Dou Quan;Shuang Wang;Weibin Li;Jing Bai;Yangyang Li;Ronghua Shang;Jie Feng;
Pages: 2992 - 3033 Abstract: Brain-inspired algorithms have become a new trend in next-generation artificial intelligence. Through research on brain science, the intelligence of remote sensing algorithms can be effectively improved. This article summarizes and analyzes the essential properties of brain cognitive learning and recent advances in remote sensing interpretation. First, this article introduces the structural composition and properties of the brain. Then, five representative brain-inspired algorithms are studied, including multiscale geometry analysis, compressed sensing, attention mechanisms, reinforcement learning, and transfer learning. Next, this article summarizes the data types of remote sensing, the development of typical applications of remote sensing interpretation, and the implementations of remote sensing, including datasets, software, and hardware. Finally, the top ten open problems and future directions of brain-inspired remote sensing interpretation are discussed. This work aims to comprehensively review brain mechanisms and the development of remote sensing and to motivate future research on brain-inspired remote sensing interpretation. PubDate:
2023
Issue No: Vol. 16 (2023)
- Interferometric SAR Coherence Magnitude Estimation by Machine Learning
-
Authors:
Nico Adam;
Pages: 3034 - 3044 Abstract: Current interferometric wide-area ground motion services require the estimation of the coherence magnitude as accurately and computationally efficiently as possible. However, a precise and at the same time computationally efficient method is missing. Therefore, the objective of this article is to improve empirical Bayesian coherence magnitude estimation in terms of accuracy and computational cost. Specifically, this article proposes interferometric coherence magnitude estimation by machine learning (ML), which results in nonparametric and automated statistical inference. However, applying ML in this estimation context is not straightforward: the number and the domain of possible input processes are infinite, and it is not possible to train on all possible input signals. It is shown that the expected channel amplitudes and the expected interferometric phase cause redundancies in the input signals, which allows this issue to be solved. Similar to the empirical Bayesian methods, a single parameter for the maximum underlying coherence is used to model the prior; however, neither a flat prior nor any other shape of prior probability is easy to implement within the ML framework. The article reports the bias, standard deviation, and RMSE of the developed estimators. It was found that the ML estimators improve the coherence estimation RMSE for small samples ($2 \leq N < 30$) and for small underlying coherence compared to the conventional and empirical Bayes estimators. The developed ML coherence magnitude estimators are suitable and recommended for operational InSAR systems. The ML model is evaluated extremely fast because no iteration, numeric integration, or bootstrapping is needed. PubDate:
2023
Issue No: Vol. 16 (2023)
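For context, the conventional sample estimator that the abstract above benchmarks against computes the coherence magnitude over an $N$-pixel window as $|\sum s_1 s_2^*| / \sqrt{\sum |s_1|^2 \sum |s_2|^2}$. A minimal sketch of that baseline (the article's ML estimator itself is not reproduced here):

```python
import numpy as np

def coherence_magnitude(s1, s2):
    """Conventional sample estimator of InSAR coherence magnitude
    from two co-registered complex SAR channels over an N-pixel window."""
    s1 = np.asarray(s1, dtype=complex)
    s2 = np.asarray(s2, dtype=complex)
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return float(num / den)

# Identical (or merely rescaled) channels are fully coherent: magnitude 1
x = np.array([1 + 1j, 2 - 1j, 0.5 + 0.2j])
print(coherence_magnitude(x, x))
```

This estimator is known to be biased upward for small windows, which is precisely the small-sample regime ($2 \leq N < 30$) where the abstract reports the ML estimators improving RMSE.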
- Multiscale Adaptive Fusion Network for Hyperspectral Image Denoising
-
Authors:
Haodong Pan;Feng Gao;Junyu Dong;Qian Du;
Pages: 3045 - 3059 Abstract: Removing the noise and improving the visual quality of hyperspectral images (HSIs) is challenging in academia and industry. Great efforts have been made to leverage local, global, or spectral context information for HSI denoising. However, existing methods still have limitations in feature interaction exploitation among multiple scales and rich spectral structure preservation. In view of this, we propose a novel solution to investigate the HSI denoising using a multiscale adaptive fusion network (MAFNet), which can learn the complex nonlinear mapping between clean and noisy HSI. Two key components contribute to improving the HSI denoising: A progressively multiscale information aggregation network and a coattention fusion module. Specifically, we first generate a set of multiscale images and feed them into a coarse-fusion network to exploit the contextual texture correlation. Thereafter, a fine fusion network is followed to exchange the information across the parallel multiscale subnetworks. Furthermore, we design a coattention fusion module to adaptively emphasize informative features from different scales, and thereby enhance the discriminative learning capability for denoising. Extensive experiments on synthetic and real HSI datasets demonstrate that the proposed MAFNet has achieved a better denoising performance than other state-of-the-art techniques. PubDate:
2023
Issue No: Vol. 16 (2023)
- Statistical Texture Learning Method for Monitoring Abandoned Suburban
Cropland Based on High-Resolution Remote Sensing and Deep Learning-
Authors:
Qianhui Shen;Haojun Deng;Xinjian Wen;Zhanpeng Chen;Hongfei Xu;
Pages: 3060 - 3069 Abstract: Cropland abandonment is crucial in agricultural management and has a profound impact on crop yield and food security. In recent years, many cropland abandonment identification methods based on remote sensing observation data have been proposed, but most of them rely on coarse-resolution images and use traditional machine learning methods for simple identification. To this end, we perform abandonment recognition on high-resolution remote sensing images. Based on the texture features of abandoned land, we combine statistical texture learning and propose a new deep learning framework called pyramid scene parsing network-statistical texture learning (PSPNet-STL). The model integrates high-level semantic feature extraction and deep mining of low-level texture features to identify cropland abandonment. First, we labeled abandoned cropland areas and built the high-resolution abandoned cropland (HRAC) dataset. Second, we improved PSPNet by fusing statistical texture learning modules to learn multiple kinds of texture information on low-level feature maps, combined with high-level semantic features, for cropland abandonment recognition. Experiments on the HRAC dataset show that, compared with other methods, the proposed model achieves the best performance in terms of both accuracy and visualization, proving that deep mining of low-level statistical texture features is beneficial for cropland abandonment recognition. PubDate:
2023
Issue No: Vol. 16 (2023)
- Machine Learning Approaches for Road Condition Monitoring Using Synthetic
Aperture Radar-
Authors:
Lucas Germano Rischioni;Arun Babu;Stefan V. Baumgartner;Gerhard Krieger;
Pages: 3070 - 3082 Abstract: Airborne synthetic aperture radar (SAR) has the potential to remotely monitor road traffic infrastructure on a large scale. Of particular interest is road surface roughness, an important road safety parameter. For this task, novel algorithms need to be developed. Machine learning approaches that can perform nonlinear regression, such as artificial neural networks and random forest regression, can achieve this goal. This work considers fully polarimetric airborne radar datasets captured with the German Aerospace Center's (DLR) airborne F-SAR radar system. Several machine learning-based approaches were tested on the datasets to estimate road surface roughness. The resulting models were then compared with ground-truth surface roughness values and with the semiempirical surface roughness model studied in previous work. PubDate:
2023
Issue No: Vol. 16 (2023)
- DDMA-MIMO Observations With the MU Radar: Validation by Measuring a Beam
Broadening Effect-
Authors:
Tomoya Matsuda;Hiroyuki Hashiguchi;
Pages: 3083 - 3091 Abstract: The phased-array radar, originally developed for defense systems, has mainly been utilized for atmospheric radars and wind profiling radars in the field of meteorological remote sensing, and it has recently also been applied to weather radars for research purposes. As a further development of phased-array technology, the "multiple-input–multiple-output (MIMO) technique," developed in the field of communication systems, has been applied to radars. With MIMO radar, it is possible to create a virtual antenna aperture plane beyond the actual antenna and to reduce the actual antenna size compared to a conventional antenna while maintaining the angular resolution. This effect is expected to reduce costs, one of the major hurdles in deploying phased-array radars instead of parabolic antenna systems. To confirm the effect, an experimental observation was performed using the MU radar, a VHF-band phased-array atmospheric radar with multichannel receivers. The MIMO technique requires orthogonal waveforms on each transmitter to identify the transmitted signals with multiple receivers, and various methods are known to realize orthogonality. In this article, we focus on the "Doppler division multiple access (DDMA)" MIMO technique, in which slightly different frequencies are selected as transmit waveforms so that they can be separated in each receiver in the Doppler frequency domain. The observation results, obtained by measuring a beam broadening effect with the MU radar, indicate that DDMA-MIMO will be a key technique for atmospheric radars in the near future. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Novel Spectral Indices-Driven Spectral-Spatial-Context Attention Network
for Automatic Cloud Detection-
Authors:
Yang Chen;Luliang Tang;Wumeng Huang;Jianhua Guo;Guang Yang;
Pages: 3092 - 3103 Abstract: Cloud detection is a fundamental step for optical satellite image applications. Existing deep learning methods can provide accurate cloud detection results. However, their performance relies on a large number of labeled samples, whose collection is time-consuming and costly. In addition, cloud detection is challenging in high-brightness scenes because clouds and high-brightness objects have similar spectral features. In this study, we propose a cloud-index-driven spectral-spatial-context attention network (SSCA-net) for cloud detection, which requires no effort to manually collect labeled samples and can improve the accuracy of cloud detection in high-brightness scenes. The label samples are automatically generated from the cloud index by using dual thresholds and are then expanded to improve the completeness of the cloud mask labels. We designed SSCA-net with a spectral-spatial-context aware module and a spectral-spatial-context information aggregation module, aiming to improve the accuracy of cloud detection in high-brightness scenes. The results show that the proposed SSCA-net achieved good performance, with an average overall accuracy of 97.69% and an average kappa coefficient of 92.71% on the Sentinel-2 and Landsat-8 datasets. This article provides fresh insight into how advanced deep attention networks and cloud indexes can be integrated to obtain high cloud detection accuracy in high-brightness scenes. PubDate:
2023
Issue No: Vol. 16 (2023)
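The abstract above generates labels from a cloud index with dual thresholds. A minimal sketch of that labeling step; the threshold values are illustrative assumptions, and the paper's specific cloud index and label-expansion procedure are not reproduced:

```python
import numpy as np

def dual_threshold_labels(cloud_index, t_low, t_high):
    """Generate training labels from a per-pixel cloud index:
    index >= t_high -> cloud (1), index <= t_low -> clear (0),
    values in between -> unlabeled (-1, excluded from training)."""
    labels = np.full(cloud_index.shape, -1, dtype=int)
    labels[cloud_index >= t_high] = 1
    labels[cloud_index <= t_low] = 0
    return labels

ci = np.array([0.1, 0.45, 0.9])
print(dual_threshold_labels(ci, 0.2, 0.8))  # [ 0 -1  1]
```

Leaving the ambiguous middle range unlabeled is what lets such schemes produce reliable pseudo-labels without manual annotation.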
- MDFENet: A Multiscale Difference Feature Enhancement Network for Remote
Sensing Change Detection-
Authors:
Hao Li;Xiaoyong Liu;Huihui Li;Ziyang Dong;Xiangling Xiao;
Pages: 3104 - 3115 Abstract: The main task of remote sensing change detection (CD) is to identify object differences in bitemporal remote sensing images. In recent years, methods based on deep convolutional neural networks have made great progress in remote sensing CD. However, due to illumination changes and seasonal changes in the images acquired by the same sensor, the problem of “pseudo change” in the change map is still difficult to solve. In this article, in order to reduce “pseudo changes,” we propose a multiscale difference feature enhancement network (MDFENet) to extract the most discriminative features from bitemporal remote sensing images. MDFENet contains three procedures: first, multiscale bitemporal features are generated by a shared weighted Siamese encoder. Then features of each scale are fed into a difference enhancement module to generate refined difference features. Finally, they are combined and reconstructed by a decoder to generate change map. The difference enhancement module includes multiple layers of difference enhancement encoder and transformer decoder. They are applied to features of different scales to establish long-range relationships of pixels semantic changes, while high-level difference features participate in the generation of low-level difference features to enhance information transmission among features of different scales, reducing “pseudo changes.” Compared with state-of-the-art methods, the proposed method achieved the best performance on two datasets, with F1 of 81.15% on the SYSU-CD dataset and 90.85% on the LEVIR-CD dataset. PubDate:
2023
Issue No: Vol. 16 (2023)
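The F1 figures reported in the change detection abstract above combine precision and recall as their harmonic mean. A minimal sketch of the metric from confusion counts; the counts here are illustrative, not the paper's results:

```python
def f1_score(tp, fp, fn):
    """F1 from change-detection confusion counts: tp = changed pixels
    correctly detected, fp = false alarms, fn = missed changes."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts: precision = 0.9, recall = 0.75
print(round(f1_score(90, 10, 30), 4))  # 0.8182
```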
- Integration and Comparison of Multiple Two-Leaf Light Use Efficiency
Models Across Global Flux Sites-
Authors:
Haoqiang Zhou;Gang Bao;Fei Li;Jiquan Chen;Siqin Tong;Xiaojun Huang;Enliang Guo;Yuhai Bao;Wendu Rina;
Pages: 3116 - 3130 Abstract: Accurate estimation of gross primary productivity (GPP) from the regional to global scale is essential in modeling carbon cycle processes. The recently developed two-leaf light use efficiency (TL-LUE) model and its revised versions based on different concepts have significantly improved the underlying mechanisms between model assumptions and photosynthetic processing. Yet few studies have compared the advantages of the various two-leaf LUE models for their practical applications. Here, an integrated model referred to as a three-parameter radiation-constrained mountain TL-LUE (RMTL3-LUE) is proposed by combining the radiation scalar of the radiation-constrained TL-LUE model and the topographic parameters of the mountainous TL-LUE model. In this way, the importance of light intensity and topography on vegetation photosynthesis is integrated. Our calibration and validation of RMTL3-LUE were carried out for 11 ecosystems with in situ eddy covariance measurements around the globe. The results indicate that the model can effectively improve GPP estimates compared with its predecessors. At the landscape scale, RMTL3-LUE can also realistically quantify topographic effects on photosynthesis, with topographic sensitivity decreasing (increasing) with slope on unshaded (shaded) terrain. Furthermore, RMTL3-LUE displays an asymmetric sensitivity to PAR variability, with a low sensitivity to PAR compared with other models under high-PAR conditions and a similar sensitivity under low-PAR conditions. Altogether, integrating the merits of multiple TL-LUE models can further improve the modeling of photosynthetic processes under various conditions amid the challenges of constructing more complex models. PubDate:
2023
Issue No: Vol. 16 (2023)
- Double Prior Network for Multidegradation Remote Sensing Image
Super-Resolution-
Authors:
Mengyang Shi;Yesheng Gao;Lin Chen;Xingzhao Liu;
Pages: 3131 - 3147 Abstract: Image super-resolution (SR) is widely used in remote sensing because it can effectively increase image details. Neural networks have shown remarkable performance in recent years, benefiting from their end-to-end training. However, remote sensing images contain a variety of degradation factors, and neural networks lack flexibility in dealing with these complex issues compared with reconstruction-based approaches. Traditional neural network methods cannot take advantage of prior knowledge and lack interpretability. To develop a flexible, accurate, and interpretable algorithm for remote sensing SR, we propose an effective SR network called YSRNet, obtained by unfolding a traditional optimization process into a learnable network. Combining conventional reconstruction-based methods and neural networks can significantly improve the algorithm's performance. Since the gradient features of remote sensing images contain valuable information, total variation constraints and deep prior constraints are introduced into the objective function for image SR. Furthermore, we propose an enhanced version called YSRNet+, which applies attention weights to different prior terms and channels. Compared with YSRNet, YSRNet+ enables the network to focus more on useful prior information and improves interpretability. Experiments on three remote sensing datasets were performed to evaluate the algorithm's effectiveness. The experimental results demonstrate that the proposed algorithm performs better than several state-of-the-art neural network algorithms, especially in scenarios with multiple degradation factors. PubDate:
2023
Issue No: Vol. 16 (2023)
- Dense Temperature Mapping and Heat Wave Risk Analysis Based on Multisource
Remote Sensing Data-
Authors:
Mengxi Liu;Xuezhang Li;Zhuoqun Chai;Anqi Chen;Yuanyuan Zhang;Qingnian Zhang;
Pages: 3148 - 3157 Abstract: As high temperatures and heat waves have become great threats to human survival, social stability, and ecological safety, it is of great significance to master the spatial and temporal dynamics of temperature to prevent high-temperature and heat wave risks. Meteorological stations can provide accurate near-ground temperatures, but only within a specific space and time. To meet the needs of large-scale research, spatial interpolation methods have been widely used to obtain spatially continuous temperature maps. However, these methods often ignore the influence of external factors on temperature, such as land cover and height, and neglect temporal information. To deal with these issues, a joint spatio-temporal method is proposed to obtain dense temperature maps from multisource remote sensing data, combining a geographically weighted regression model and a polynomial fitting model. Besides, a heat wave risk model is built based on the dense temperature maps and population data to evaluate the heat wave risk of different areas. Accuracy evaluations and experiments have verified the effectiveness of the proposed methods. A case study on four cities of Zhejiang Province, China demonstrated that areas with a higher degree of urbanization are often accompanied by higher heat wave risks, such as the northern part of the study area. The study also found that the heat wave risks present a centralized distribution and spatial autocorrelation characteristics in the study area. PubDate:
2023
Issue No: Vol. 16 (2023)
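The combination described above (a geographically weighted regression model supplemented by polynomial temporal fitting) can be illustrated with a minimal sketch of the GWR prediction step. This is not the authors' code: the station coordinates, the single elevation covariate, the Gaussian kernel, and the bandwidth are all illustrative assumptions.

```python
import numpy as np

def gwr_predict(stations_xy, X, y, target_xy, x_target, bandwidth):
    """Locally weighted least-squares prediction at one target location.

    stations_xy: (n, 2) station coordinates
    X:           (n, p) covariates at the stations (here: elevation)
    y:           (n,)   observed near-ground temperature
    x_target:    (p,)   covariates at the prediction location
    """
    d = np.linalg.norm(stations_xy - target_xy, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)            # Gaussian distance-decay weights
    Xb = np.column_stack([np.ones(len(X)), X])   # add intercept column
    A = Xb.T @ (w[:, None] * Xb)                 # weighted normal equations
    b = Xb.T @ (w * y)
    beta = np.linalg.solve(A, b)                 # local regression coefficients
    return np.concatenate([[1.0], x_target]) @ beta

stations_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
elev = np.array([[100.0], [300.0], [250.0], [700.0], [400.0]])  # covariate: elevation (m)
temp = 20.0 - 0.0065 * elev[:, 0]                # synthetic lapse-rate temperatures

pred = gwr_predict(stations_xy, elev, temp, np.array([0.5, 0.5]),
                   np.array([500.0]), bandwidth=1.0)
print(round(pred, 2))  # recovers 20 - 0.0065*500 = 16.75 on this exactly linear data
```

Because the synthetic temperatures are exactly linear in elevation, the weighted fit recovers the true coefficients regardless of the weights; on real station data the weights make the fit local.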
- Detecting Historical Terrain Anomalies With UAV-LiDAR Data Using
Spline-Approximation and Support Vector Machines-
Authors:
Marcel Storch;Norbert de Lange;Thomas Jarmer;Björn Waske;
Pages: 3158 - 3173 Abstract: The documentation of historical remains and cultural heritage is of great importance to preserve historical knowledge. Many studies use low-resolution airplane-based laser scanning and manual interpretation for this purpose. In this study, a concept was developed to automatically detect terrain anomalies in a historical conflict landscape using high-resolution UAV-LiDAR data. We applied different ground filter algorithms and included a spline-based approximation step in order to improve the removal of low vegetation. Due to the absence of comprehensive labeled training data, a one-class support vector machine algorithm was used in an unsupervised manner to automatically detect the terrain anomalies. We applied our approach in a study site with different densities of low vegetation. The morphological ground filter proved the most suitable when dense near-ground vegetation is present. However, with the spline-based processing step, all filters could be significantly improved in terms of the F1-score of the classification results: it increased by up to 42 percentage points in the area with dense low vegetation and by up to 14 percentage points in the area with sparse low vegetation. The completeness (recall) reached maximum values of 0.8 and 1.0, respectively, when considering the results leading to the highest F1-score for each filter. Therefore, our concept can support on-site field prospection. PubDate:
2023
Issue No: Vol. 16 (2023)
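As a rough illustration of the unsupervised detection step (not the authors' pipeline), a one-class SVM can be fit on feature vectors from ordinary terrain cells and then queried on candidate cells. The two features and all values below are invented.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Invented per-cell features: [local relief (m), residual from a spline-fitted surface (m)]
normal_cells = rng.normal(loc=[0.2, 0.0], scale=[0.05, 0.02], size=(200, 2))
candidates = np.array([[1.5, 0.8],      # trench-like cell, far from normal terrain
                       [1.2, -0.9],     # crater-like cell
                       [0.21, 0.01]])   # ordinary cell

# nu bounds the fraction of training points treated as outliers
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal_cells)
labels = model.predict(candidates)      # -1 = anomaly, +1 = normal terrain
print(labels)
```

Training on "normal" terrain only matches the paper's setting of having no comprehensive labeled anomaly data; the boundary is learned from one class alone.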
- Cloud Image Retrieval for Sea Fog Recognition (CIR-SFR) Using Double
Branch Residual Neural Network-
Authors:
Tianjiao Hu;Zhuzhang Jin;Wanxin Yao;Jiezhi Lv;Wei Jin;
Pages: 3174 - 3186 Abstract: Sea fog is a common weather phenomenon at sea, which reduces visibility and causes tremendous hazards to marine transportation, marine fishing, and other maritime operations. Traditional sea fog monitoring methods have enormous difficulties in characterizing the diversity of sea fog and distinguishing sea fog from low-level clouds. Thus, we propose a cloud image retrieval method for sea fog recognition (CIR-SFR) in a deep learning (DL) framework by combining the advantages of metric learning. CIR-SFR includes the feature extraction module and the retrieval-based SFR module. The feature extraction module adopts the double branch residual neural network (DBRNN) to comprehensively extract the global and local features of cloud images. By introducing local branches and using activation masks, DBRNN can focus on regions of interest in cloud images. Moreover, cloud image features are projected into the semantic space by introducing multisimilarity loss, which effectively improves the discrimination ability of sea fog and low-level clouds. For the retrieval-based SFR module, similar cloud images are retrieved from the cloud image dataset according to the distance in the feature space, and accurate SFR results are obtained by counting the percentage of various cloud image types in the retrieval results. To evaluate the SFR system, we establish a dataset of 2544 cloud images including clear sky, low-level cloud, medium high cloud, and sea fog. Experimental results show that the proposed method outperforms the traditional methods in SFR, which provides a new way for SFR. PubDate:
2023
Issue No: Vol. 16 (2023)
- Retrieval of Rain Rates for Tropical Cyclones From Sentinel-1 Synthetic
Aperture Radar Images-
Authors:
Xianbin Zhao;Weizeng Shao;Zhengzhong Lai;Xingwei Jiang;
Pages: 3187 - 3197 Abstract: The purpose of this study was to develop a method for retrieving the rain rate from C-band (∼5.3 GHz) synthetic aperture radar (SAR) images during tropical cyclones (TCs). Seven dual-polarized (vertical–vertical [VV] and vertical–horizontal [VH]) Sentinel-1 (S-1) SAR images were acquired in the interferometric-wide (IW) swath mode during the Satellite Hurricane Observation Campaign. These images were collocated with rain rates measured by the Stepped-Frequency Microwave Radiometers onboard National Oceanic and Atmospheric Administration aircraft. Wind speeds were retrieved from the VH-polarized SAR images using the geophysical model function (GMF) S1IW.NR. We determined the difference between the measured normalized radar cross section (NRCS) based on VV-polarized SAR and the predicted NRCS derived using the GMF CMOD5.N forced with wind speeds retrieved from VH-polarized SAR images. Rain cells were identified as regions in the images where the NRCS difference was greater than 0.5 dB or smaller than −0.5 dB. We found that the difference in the NRCS decreased and the VH-polarized wind speed increased with increasing rain rate. Based on these findings, we developed an empirical function for S-1 SAR rain retrieval in a TC, naming it CRAIN_S1. The validation of the CRAIN_S1 results with Tropical Rainfall Measuring Mission data resulted in a root mean square error of 0.58 mm/h and a correlation of 0.89. This study provides an alternate method for rain monitoring utilizing SAR data with a fine spatial resolution. PubDate:
2023
Issue No: Vol. 16 (2023)
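The rain-cell identification rule quoted above (flag pixels where the measured VV NRCS departs from the GMF-predicted NRCS by more than ±0.5 dB) reduces to a simple mask. The tiny 2×2 arrays here are made-up values, not SAR data.

```python
import numpy as np

nrcs_measured = np.array([[-8.0, -7.2],
                          [-6.9, -9.1]])   # dB, VV-polarized SAR (invented)
nrcs_predicted = np.array([[-7.8, -7.3],
                           [-6.2, -8.4]])  # dB, CMOD5.N driven by VH winds (invented)

diff = nrcs_measured - nrcs_predicted       # NRCS difference in dB
rain_cells = np.abs(diff) > 0.5             # the paper's +/-0.5 dB threshold
print(rain_cells)
```

On these toy values the bottom row exceeds the threshold and would be flagged as rain cells.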
- Remote-Sensing-Based Change Detection Using Change Vector Analysis in
Posterior Probability Space: A Context-Sensitive Bayesian Network Approach -
Authors:
Yikun Li;Xiaojun Li;Jiaxin Song;Zihao Wang;Yi He;Shuwen Yang;
Pages: 3198 - 3217 Abstract: Change vector analysis (CVA) and post-classification change detection (PCC) have been the most widely used change detection methods. However, CVA requires sound radiometric correction to achieve optimal performance, and PCC is susceptible to accumulated classification errors. Although change vector analysis in the posterior probability space (CVAPS) was developed to resolve the limitations of PCC and CVA, the uncertainty of remote sensing imagery limits the performance of CVAPS owing to three major problems: 1) mixed pixels; 2) identical ground cover types with different spectra; and 3) different ground cover types with the same spectrum. To address these problems, this article proposes the FCM-CSBN-CVAPS approach under the CVAPS framework. The proposed approach decomposes the mixed pixels into multiple signal classes using the fuzzy C-means (FCM) algorithm. Although the mixed-pixel problem is less severe in high-resolution images, the change detection performance is still enhanced because, as a soft clustering algorithm, FCM is less susceptible to cumulative clustering error. Then, a context-sensitive Bayesian network (CSBN) is constructed to establish many-to-many stochastic linkages between signal pairs and ground cover types by incorporating spatial information, resolving problems 2) and 3) above. Finally, change detection is performed using CVAPS in the posterior probability space. The effectiveness of the proposed approach is evaluated on three bitemporal remote sensing datasets with different spatial sizes and resolutions. The experimental results confirm the effectiveness of FCM-CSBN-CVAPS in addressing the uncertainty problems of change detection and its superiority over other relevant change detection techniques. PubDate:
2023
Issue No: Vol. 16 (2023)
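The core quantity in CVAPS is the magnitude of the change vector between the two dates' posterior class-probability vectors for a pixel. A minimal sketch follows; the probabilities and the threshold are illustrative, and the paper's FCM and CSBN stages are omitted.

```python
import numpy as np

# Posterior class probabilities for one pixel at two dates (each sums to 1)
p_t1 = np.array([0.7, 0.2, 0.1])
p_t2 = np.array([0.1, 0.1, 0.8])

change_vector = p_t2 - p_t1
magnitude = np.linalg.norm(change_vector)   # CVAPS change intensity
changed = magnitude > 0.5                   # illustrative threshold, not the paper's
print(round(float(magnitude), 3), changed)
```

Working in probability space rather than raw radiance is what frees CVAPS from the strict radiometric-correction requirement of classical CVA.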
- A Survey on Deep-Learning-Based Real-Time SAR Ship Detection-
Authors:
Jianwei Li;Jie Chen;Pu Cheng;Zhentao Yu;Lu Yu;Cheng Chi;
Pages: 3218 - 3247 Abstract: Recently, deep learning has greatly promoted the development of synthetic aperture radar (SAR) ship detection. However, the detectors are usually heavy and computation-intensive, which hinders their use on edge devices. To solve this problem, many lightweight networks and acceleration ideas have been proposed. In this survey, we review the literature on real-time SAR ship detection. We first introduce model compression and acceleration methods: pruning, quantization, knowledge distillation, low-rank factorization, lightweight networks, and model deployment. These are the source of innovation in real-time SAR ship detection. We then summarize real-time object detection methods: two-stage, single-stage, anchor-free, trained-from-scratch, model compression, and acceleration approaches, from which researchers in SAR ship detection usually draw. The bulk of the survey then reviews 70 real-time SAR ship detection papers. The years, datasets, journals, deep-learning frameworks, and hardware platforms are introduced first. After that, 10 public datasets and the evaluation metrics are presented. We then survey the 70 papers according to anchor-free designs, training from scratch, the YOLO series, constant false alarm rate + convolutional neural network methods, lightweight backbones, pruning, quantization, knowledge distillation, and hardware deployment. The experimental results show that the algorithms have greatly advanced in both speed and accuracy. Finally, we point out the open problems in these 70 papers and directions for future study. This article enables researchers to quickly understand the research status of this field. PubDate:
2023
Issue No: Vol. 16 (2023)
- High-Confidence Sample Generation Technology and Application for Global
Land-Cover Classification-
Authors:
Xinyuan Xi;Zhimin Liu;Lin Sun;Shuai Xie;Zhihui Wang;
Pages: 3248 - 3263 Abstract: Deep learning has become one of the most important technologies in remote sensing land classification applications. Its powerful sample-learning and information-mining abilities promote the continuous improvement of classification accuracy. A large volume of high-quality and representative sample data is the premise for the successful application of deep learning technology. Conventional methods of obtaining samples through manual delineation or surface surveys require a great deal of manpower and material resources. Therefore, the inability to obtain sufficient and widely representative high-quality samples is one of the key factors limiting the application of deep learning technology. In this study, the proposed sample-generation method extracts high-confidence classification results from a variety of existing high-quality classification products and uses them as deep learning samples to support land-cover classification. When the three global land-cover classification products FROM-GLC-2015, GLC_FCS30-2015, and GlobeLand30 assign the same class to a pixel, that pixel is considered a high-confidence sample. On this basis, a large volume of sample data widely distributed around the world was obtained. Using the extracted samples, a random forest classifier was trained on multiple types of information from Landsat data to achieve land-cover classification. Application experiments were conducted in several typical regions, and the classification results were verified. The results showed that the classification accuracy of random forests supported by samples generated with the proposed sample extraction method was considerably higher than that of the three land-cover classification products. PubDate:
2023
Issue No: Vol. 16 (2023)
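The high-confidence rule described above (keep a pixel only when all three products agree on its class) is, at its core, an elementwise agreement mask. The class codes below are hypothetical.

```python
import numpy as np

# Hypothetical per-pixel class codes from the three land-cover products
from_glc  = np.array([1, 2, 3, 4])
glc_fcs30 = np.array([1, 2, 5, 4])
globeland = np.array([1, 3, 3, 4])

agree = (from_glc == glc_fcs30) & (from_glc == globeland)
samples = np.where(agree, from_glc, -1)   # -1 marks pixels excluded from training
print(samples)
```

Only the first and last pixels survive as training samples here; the disagreeing pixels are discarded rather than arbitrated.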
- Land Use Classification of High-Resolution Multispectral Satellite Images
With Fine-Grained Multiscale Networks and Superpixel Postprocessing-
Authors:
Yaobin Ma;Xiaohua Deng;Jingbo Wei;
Pages: 3264 - 3278 Abstract: Land use recognition from multispectral satellite images is fundamentally critical for geological applications, but current results are not yet satisfactory. The scale dimension of current multiscale learning is too coarse to account for the rich scales in multispectral images, and pixel-wise classification tends to produce “salt-and-pepper” labels due to misclassification in heterogeneous regions. In this article, these issues are addressed by proposing a new pixel-wise classification model with finer scales for convolutional neural networks. The model is designed to extract multiscale contextual information using multiscale networks at a fine-grained level, addressing the issue of insufficient multiscale learning for classification. Furthermore, a small-scale segmentation-combination method is introduced as a postprocessing step to smooth fragmented classification results. The proposed method is tested on GF-1, GF-2, DEIMOS-2, GeoEye-1, and Sentinel-2 satellite images and compared with six neural-network-based algorithms. The results demonstrate the effectiveness of the proposed model in finding objects of large scale difference, improving classification accuracy, and reducing classification fragments. The discussion also illustrates that convolutional neural networks and pixel-wise inference are more practical than transformers and patch-wise recognition. PubDate:
2023
Issue No: Vol. 16 (2023)
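Segmentation-based postprocessing of this kind amounts to replacing each pixel's label with the dominant label of its segment. A minimal majority-vote sketch on a flattened toy image follows; the labels and segment ids are invented, and the paper's actual segmentation-combination scheme may differ.

```python
import numpy as np

labels = np.array([1, 1, 2, 1, 3, 3, 3, 2])    # per-pixel CNN predictions (invented)
segments = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # per-pixel superpixel ids (invented)

smoothed = labels.copy()
for s in np.unique(segments):
    member = segments == s
    values, counts = np.unique(labels[member], return_counts=True)
    smoothed[member] = values[np.argmax(counts)]  # majority label wins in the segment
print(smoothed)
```

The isolated "2" labels in each segment are absorbed by the segment majority, which is exactly the salt-and-pepper suppression the abstract describes.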
- Visual Question Generation From Remote Sensing Images-
Authors:
Laila Bashmal;Yakoub Bazi;Farid Melgani;Riccardo Ricci;Mohamad M. Al Rahhal;Mansour Zuair;
Pages: 3279 - 3293 Abstract: Visual question generation (VQG) is a fundamental task in vision-language understanding that aims to generate relevant questions about a given input image. In this article, we propose a paragraph-based VQG approach for generating intelligent questions in natural language about remote sensing (RS) images. Specifically, our proposed framework consists of two transformer-based vision and language models. First, we employ a Swin-transformer encoder to generate a multiscale representative visual feature from the image. Then, this feature is used as a prefix to guide a generative pretrained transformer-2 (GPT-2) decoder in generating multiple questions in the form of a paragraph to cover the abundant visual information contained in the RS scene. To train the model, the language decoder is fine-tuned on an RS dataset to generate a set of relevant questions from the RS image. We evaluate our model on two visual question-answering (VQA) datasets in RS. In addition, we construct a new dataset, termed TextRS-VQA, for better evaluation of our VQG model. This dataset consists of questions fully annotated by humans, which addresses the high redundancy of questions in prior VQA datasets. Extensive experiments using several accuracy and diversity metrics demonstrate the effectiveness of our proposed VQG model in generating meaningful, valid, and diverse questions from RS images. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Priori Land Surface Reflectance Synergized With Multiscale Features
Convolution Neural Network for MODIS Imagery Cloud Detection-
Authors:
Nan Ma;Lin Sun;Chenghu Zhou;Yawen He;Chuanxiang Dong;Yu Qu;Huiyong Yu;
Pages: 3294 - 3308 Abstract: Moderate Resolution Imaging Spectroradiometer (MODIS) images are widely used in land, ocean, and atmospheric monitoring due to their wide spectral coverage, high temporal resolution, and convenient data acquisition. Accurate cloud detection is critical to the fine processing and application of MODIS images. Owing to spatial resolution limitations and the influence of mixed pixels, most MODIS cloud detection algorithms struggle to effectively distinguish clouds from ground objects. Here, we propose a novel cloud detection method based on land surface reflectance and a multiscale feature convolutional neural network to achieve high-precision cloud detection, particularly for thin clouds and clouds over bright surfaces. A monthly surface reflectance dataset was constructed from MODIS products (MOD09A1) and employed to provide background information for cloud detection. Difference-based samples were obtained by applying difference operations between the surface reflectance and MODIS images from different dates. The multiscale feature network (MFCD-Net), using atrous spatial pyramid pooling and a channel and spatial attention module, integrates low-level spatial features and high-level semantic information to capture multiscale features and generate a high-precision cloud mask. For the cloud detection experiments and quantitative analysis, 61 MODIS images acquired at different times over various underlying surface types were used. Cloud detection results were compared to those of UNet, Deeplabv3+, UNet++, PSPNet, and a top-of-atmosphere-based (MFCD-TOA) method. The proposed method performed well, with the highest overall accuracy (96.55%), precision (92.13%), and recall (88.90%). It improved cloud detection accuracy in various scenarios, reducing thin-cloud omission and bright-surface misidentification. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Generic Cryptographic Deep-Learning Inference Platform for Remote
Sensing Scenes-
Authors:
Qian Chen;Yulin Wu;Xuan Wang;Zoe L. Jiang;Weizhe Zhang;Yang Liu;Mamoun Alazab;
Pages: 3309 - 3321 Abstract: Deep learning plays an essential role in multidisciplinary remote sensing research, and security problems arise during the data acquisition, processing, and result generation stages. Therefore, secure deep-learning inference services are one of the most important links. Some theoretical progress has been made in cryptographic deep-learning inference, but the field lacks a general platform that can be deployed in practice, and constantly modifying models to approximate the plaintext results reveals model information to a certain extent. This article proposes a generic post-quantum platform named PyHENet, which combines cryptography with plaintext deep learning libraries. Furthermore, we optimize the convolution, activation, and pooling functions and complete ciphertext operation under floating-point numbers for the first time. Moreover, the computation process is accelerated by single instruction, multiple data streams and GPU parallel computing. The experimental results show that PyHENet is closer to the plaintext inference platform than any other cryptographic model and has satisfactory robustness. The optimized PyHENet obtained an accuracy of 95.05% on the high-resolution NaSC-TG2 database, which was acquired by the Tiangong-2 space station. PubDate:
2023
Issue No: Vol. 16 (2023)
- A Novel Global Grid Model for Atmospheric Weighted Mean Temperature in
Real-Time GNSS Precipitable Water Vapor Sounding-
Authors:
Liangke Huang;Zhedong Liu;Hua Peng;Si Xiong;Ge Zhu;Fade Chen;Lilong Liu;Hongchang He;
Pages: 3322 - 3335 Abstract: The atmospheric weighted mean temperature (Tm) is an important parameter in calculating precipitable water vapor from Global Navigation Satellite System (GNSS) signals. As both GNSS positioning and GNSS precipitable water vapor detection require high spatial and temporal resolutions for calculating Tm, high-precision modeling of Tm has gained widespread attention in recent years. Previous models for calculating Tm are limited by either too many model parameters or reliance on single-grid data. Therefore, this study presents a global high-precision Tm model (GGTm-H) developed from the latest Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2) atmospheric reanalysis data provided by the United States National Aeronautics and Space Administration. The accuracy of the GGTm-H model was verified against MERRA-2 surface Tm data and 319 radiosonde stations. The results highlighted that: 1) when the MERRA-2 Tm data were used as a reference, the mean annual RMSE of the GGTm-H model was 2.72 K; compared with the Bevis, GPT2w-5, and GPT2w-1 models, the GGTm-H model showed an improvement of 1.5, 0.33, and 0.21 K, respectively; 2) when the radiosonde data were used as a reference, the mean bias and RMSE of the GGTm-H model were −0.41 K and 3.82 K, respectively, the lowest mean annual bias and RMSE among the compared models. The developed model does not require any meteorological parameters when calculating Tm. Therefore, it has important applications in the real-time, high-precision monitoring of precipitable water vapor from GNSS signals. PubDate:
2023
Issue No: Vol. 16 (2023)
- An Efficient Polarimetric Persistent Scatterer Interferometry Algorithm
for Dual-Pol Sentinel-1 Data-
Authors:
Feng Zhao;Leixin Zhang;Teng Wang;Yuxuan Zhang;Shiyong Yan;Yunjia Wang;
Pages: 3336 - 3352 Abstract: With time-series PolSAR images, polarimetric persistent scatterer interferometry (PolPSI) algorithms can obtain optimized interferograms with overall better phase quality than any single-pol channel for ground deformation monitoring. Moreover, the open access of Sentinel-1 PolSAR data makes applications of PolPSI possible over large regions worldwide. However, PolPSI techniques usually have to search a high-dimensional solution space for the optimum scattering mechanism of each pixel, which incurs very high, or even unacceptable, computational costs if satisfactory results are expected. To this end, an efficient and effective PolPSI algorithm named TP-ESM is proposed for Sentinel-1 data in this study, which optimizes pixels' interferograms by a weighted sum of their corresponding VV and VH channel interferograms. The effectiveness of TP-ESM is tested together with two other PolPSI techniques [i.e., TP-MSM and exhaustive search polarimetric optimization (ESPO)] over Beijing with 46 Sentinel-1 PolSAR images. The results show that TP-ESM obtains optimized interferometric phases similar to ESPO over high-quality pixels, and the ground deformation monitoring pixel density improvement achieved by TP-ESM and ESPO w.r.t. the conventional PSI approach (with VV data) is 35% and 43%, respectively. On the other hand, the TP-MSM approach is found not applicable to Sentinel-1 data. Considering the negligible computational cost of TP-ESM w.r.t. ESPO, it presents quite good performance on both interferogram optimization and ground deformation monitoring. Moreover, the proposed TP-ESM outperforms ESPO on the optimization of distributed scatterer pixels, and it is anticipated to perform well on PolSAR images acquired by other sensors. PubDate:
2023
Issue No: Vol. 16 (2023)
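The weighted-sum optimization at the heart of TP-ESM can be sketched on toy data: combine the complex VV and VH interferograms with a scalar weight and take the phase of the result. The phases and the weight below are invented, and the paper's actual weight-selection procedure is not reproduced here.

```python
import numpy as np

# Toy complex interferogram samples for one pixel stack (phases in radians, invented)
ifg_vv = np.exp(1j * np.array([0.30, 0.32, 0.29]))   # higher-quality VV channel
ifg_vh = np.exp(1j * np.array([0.50, 0.10, 0.90]))   # noisier VH channel

w = 0.8                                               # illustrative channel weight
ifg_opt = w * ifg_vv + (1.0 - w) * ifg_vh             # weighted channel combination
phase_opt = np.angle(ifg_opt)                         # optimized interferometric phase
print(np.round(phase_opt, 3))
```

Because the combination is a convex sum of unit phasors, each optimized phase lies between the corresponding VV and VH phases, pulled toward the more heavily weighted channel.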
- Image-to-Image Training for Spatially Seamless Air Temperature Estimation
With Satellite Images and Station Data-
Authors:
Peifeng Su;Temesgen Abera;Yanlong Guan;Petri Pellikka;
Pages: 3353 - 3363 Abstract: Air temperature at approximately 2 m above the ground (Ta) is one of the most important environmental and biophysical parameters for studying various earth surface processes. Ta measured at meteorological stations is inadequate for studying its spatio-temporal patterns since the stations are unevenly and sparsely distributed. Satellite-derived land surface temperature (LST) provides global coverage and is generally utilized to estimate Ta due to the close relationship between LST and Ta. However, LST products are sensitive to cloud contamination, resulting in missing values in LST and leading to spatially incomplete Ta estimates. To solve the missing data problem, we propose a deep learning method to estimate spatially seamless Ta from LST that contains missing values. Experimental results on five years of data over mainland China illustrate that the image-to-image training strategy alleviates the missing data problem and fills the gaps in LST implicitly. In addition, the strong linear relationships between observed daily mean (Tmean), daily minimum (Tmin), and daily maximum (Tmax) air temperatures make it possible to estimate Tmean, Tmin, and Tmax simultaneously. For mainland China, the proposed method achieves R² of 0.962, 0.953, and 0.944, mean absolute error (MAE) of 1.793 °C, 2.143 °C, and 2.125 °C, and root-mean-square error (RMSE) of 2.376 °C, 2.808 °C, and 2.823 °C for Tmean, Tmin, and Tmax, respectively. Our study provides a new paradigm for estimating spatially seamless ground-level parameters from satellite products. Code and more results are available at https://github.com/cvvsu/LSTa. PubDate:
2023
Issue No: Vol. 16 (2023)
- Building Detection From Panchromatic and Multispectral Images With
Dual-Stream Asymmetric Fusion Networks-
Authors:
Ziyue Huang;Qingjie Liu;Huanyu Zhou;Guangshuai Gao;Tao Xu;Qi Wen;Yunhong Wang;
Pages: 3364 - 3377 Abstract: Building detection from panchromatic (PAN) and multispectral (MS) images is an essential task for many practical applications. In this article, a dual-stream asymmetric fusion network is proposed, named DAFNet. DAFNet can achieve effective information fusion at the feature level. It obtains better building detection performance from the following three perspectives: a two-stream network structure is designed to guarantee the ability to extract information from PAN and MS images; an asymmetric feature fusion module is proposed to fuse features efficiently and concisely; and two consistency regularization losses, i.e., PAN information preservation loss and cross-modal semantic consistency loss are applied to further explore the consistency between features for better fusion. The experiments are conducted on a challenging building detection dataset collected from GaoFen-2 satellite images. Comprehensive evaluations on 12 popular detection methods demonstrate the superiority of our DAFNet compared with the existing state-of-the-art fusion methods. We reveal that feature-level fusion is more suitable for building detection from PAN-MS images. PubDate:
2023
Issue No: Vol. 16 (2023)
- GPU Implementation of Graph-Regularized Sparse Unmixing With Superpixel
Structures-
Authors:
Zeng Li;Jie Chen;Muhammad Mobeen Movania;Susanto Rahardja;
Pages: 3378 - 3389 Abstract: To enhance spectral unmixing performance, a large number of algorithms have jointly investigated spatial and spectral information in hyperspectral images. However, sophisticated algorithms with high computational complexity can be very time-consuming when a large amount of data is involved in processing hyperspectral images. In this article, we first introduce a group sparse graph-regularized unmixing method with superpixel structure to promote piecewise consistency of abundances and reduce the computational burden. Segmenting the image into several nonoverlapping superpixels also makes it possible to decompose the unmixing problem into uncoupled subproblems that can be processed in parallel. An implementation of the proposed algorithm on graphics processing units (GPUs) is then developed based on the NVIDIA compute unified device architecture (CUDA) framework. The proposed scheme achieves parallelism at both the intrasuperpixel and intersuperpixel levels, where multiple concurrent streams enable multiple kernels to execute on the device simultaneously. Simulation results from a series of experiments demonstrate the advantages of the proposed algorithm. The performance of the GPU implementation also illustrates that the parallel scheme greatly accelerates the computation. PubDate:
2023
Issue No: Vol. 16 (2023)
- Validation of the Effective Resolution of SMAP Enhanced Resolution
Backscatter Products-
Authors:
David G. Long;Julie Z. Miller;
Pages: 3390 - 3404 Abstract: NASA's Soil Moisture Active Passive (SMAP) mission originally included both passive and active L-band measurement capabilities. It was the first satellite instrument to provide global L-band radar observations of normalized radar cross section (σ0) at multiple resolutions. The SMAP radar collected high-resolution (~1–3 km) synthetic aperture radar (SAR) measurements over most of the earth's land mass. It simultaneously collected low-resolution 6 × 30 km “slice” and full-footprint 29 × 35 km measurements. The SMAP radar operated for 83 days, from day of the year 103 to 186 in 2015, before the transmitter failed. The SMAP radar was designed to make vegetation roughness measurements in support of the SMAP primary mission to measure soil moisture, but the radar data are useful for a variety of applications, particularly in the polar regions. Unfortunately, limitations in the data download volume precluded the downlink of high-resolution data over Antarctica, sea ice in the polar regions, and various islands. Nonetheless, low-resolution slice and footprint data were collected and downlinked over these areas. To better exploit these low-resolution data, this article employs image reconstruction techniques to create twice-daily enhanced resolution SMAP radar images from the slice and footprint measurements. To validate the resolution enhancement, the enhanced resolution data are compared to SAR results over Greenland and South America. The new dataset is provided to the science community to support cryosphere and climate studies. PubDate:
2023
Issue No: Vol. 16 (2023)
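The enhanced-resolution products combine many overlapping low-resolution measurements on a fine grid. The paper uses dedicated image reconstruction algorithms; the sketch below shows only the simplest form of the idea, a weighted drop-in-the-bucket gridding in which overlapping footprints are averaged per fine-grid cell. The function name and the rectangular footprints are illustrative assumptions, not the paper's method.

```python
import numpy as np

def grid_average(fine_shape, footprints, values):
    """Weighted drop-in-the-bucket gridding: each low-resolution
    measurement contributes to every fine-grid cell its footprint
    covers; cells hit by several footprints average them."""
    acc = np.zeros(fine_shape)
    wts = np.zeros(fine_shape)
    for (r0, r1, c0, c1), v in zip(footprints, values):
        acc[r0:r1, c0:c1] += v
        wts[r0:r1, c0:c1] += 1.0
    out = np.full(fine_shape, np.nan)       # NaN where nothing was measured
    mask = wts > 0
    out[mask] = acc[mask] / wts[mask]
    return out

# Two overlapping 4x4 footprints on an 8x8 grid: the overlap cells
# average the two measurements, recovering finer spatial detail.
img = grid_average((8, 8), [(0, 4, 0, 4), (2, 6, 2, 6)], [1.0, 3.0])
```

Real reconstruction schemes iterate on this picture with the true antenna gain pattern as the weighting, which is where the resolution enhancement beyond simple averaging comes from.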
- A Polarimetric Decomposition and Copula Quantile Regression Approach for
Soil Moisture Estimation From Radarsat-2 Data Over Vegetated Areas-
Authors:
Li Zhang;Rui Wang;Huiming Chai;Xiaolei Lv;
Pages: 3405 - 3417 Abstract: This article proposes a novel framework for probabilistic estimation of surface soil moisture (SSM) based on polarimetric decomposition and copula quantile regression, mainly focusing on solving the low correlation between synthetic aperture radar (SAR) backscattering coefficients and SSM in corn-covered areas. Cloude–Pottier decomposition and adaptive nonnegative eigenvalue decomposition can extract more polarization parameters, explaining the implicit information in polarization data from different theoretical levels. The polarization parameters and the backscattering coefficients for different polarizations constitute the predictor variables for estimating the SSM. The dimensionality of the predictor variables is reduced by supervised principal component analysis (SPCA) to derive the first principal component. SPCA ensures a high correlation between the first principal component and the SSM. Finally, the Archimedean copula function simply and effectively constructs the nonlinear relationship between SSM and the first principal component to complete the quantile regression estimation of SSM. Results show that the root-mean-square error of SSM estimation ranges from 0.039 to 0.078 cm$^{3}$/cm$^{3}$ and the correlation coefficient (R) from 0.401 to 0.761. In addition, copula quantile regression constructs an uncertainty range for the SSM estimate, which can be used to judge the reliability of the estimate. PubDate:
2023
Issue No: Vol. 16 (2023)
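The supervised dimensionality-reduction step can be illustrated with a minimal sketch: standardize the predictors, weight each by the magnitude of its correlation with the target, and project onto the leading principal direction of the weighted matrix, so the first component stays correlated with the target. This is one common SPCA variant, assumed here for illustration; the paper's exact procedure, and the copula quantile regression that follows it, are not reproduced.

```python
import numpy as np

def supervised_pc1(X, y):
    """First supervised principal component: standardize the predictors,
    weight each by |corr(feature, target)|, then project onto the leading
    right-singular vector of the weighted matrix (one simple SPCA variant)."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    w = np.abs(Xs.T @ ys) / len(y)           # |corr(feature, target)|
    _, _, Vt = np.linalg.svd(Xs * w, full_matrices=False)
    return (Xs * w) @ Vt[0]                  # scores on the first component

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                # 5 synthetic predictors
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)  # target driven by feature 0
pc1 = supervised_pc1(X, y)
r = np.corrcoef(pc1, y)[0, 1]                # |r| stays close to 1
```

The correlation weighting is what distinguishes this from plain PCA: uninformative predictors are shrunk before the decomposition, so the first component cannot be dominated by high-variance noise.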
- New Composite Nighttime Light Index (NCNTL): A New Index for Urbanization
Evaluation Research-
Authors:
Haofan Ran;Fei Zhang;Ngai Weng Chan;Mou Leong Tan;Hsiang-Te Kung;Jingchao Shi;
Pages: 3418 - 3434 Abstract: This article employs the 2018 National Polar-Orbiting Partnership/Visible Infrared Imaging Radiometer Suite (NPP/VIIRS) and Luojia_01 nighttime light imagery to construct a New Composite Nighttime Light Index (NCNTL). The reliability of NCNTL is verified based on the analysis of urban road network, population, and Landsat normalized difference vegetation index auxiliary data. By comparing and analyzing the urban area factor of the Xinjiang region, the research found differences between the NPP/VIIRS and Luojia_01 nighttime light imagery in detecting urban areas; NCNTL was developed to resolve this issue. First, NCNTL shows a significantly stronger correlation with city-related factors than either single nighttime light imagery source. Second, NCNTL integrates, to a certain extent, the characteristics of multisource nighttime light when NPP/VIIRS and Luojia_01 are used to extract urban area features. Third, NCNTL outperformed the single-source data in extracting small- and medium-sized cities in southern Xinjiang. With the application of the new nighttime index, researchers can now fuse nighttime light imagery easily to perform urban analysis. Although the quality of NCNTL is similar to nighttime light imagery processed using multisource auxiliary data, it can greatly reduce the workload in urban analysis and decrease the complex task of collecting data from multiple sources. PubDate:
2023
Issue No: Vol. 16 (2023)
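The abstract does not give the NCNTL formula. As a generic illustration of fusing two nighttime-light sources, the sketch below min-max normalizes each image and combines them with a geometric mean, so a pixel scores high only where both sensors detect light; the combination rule and function name are purely assumptions, not the index defined in the paper.

```python
import numpy as np

def composite_ntl(viirs, luojia, eps=1e-9):
    """Illustrative two-source nighttime-light composite: min-max
    normalize each image to [0, 1], then take a geometric mean so a
    pixel scores high only where both sensors detect light."""
    def norm(a):
        return (a - a.min()) / (a.max() - a.min() + eps)
    return np.sqrt(norm(viirs) * norm(luojia))
```

The geometric mean suppresses pixels lit in only one source, which is one simple way to reconcile sensors that disagree on urban extents.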
- An Extension of Multiquadric Method Based on Trend Analysis for Surface
Construction-
Authors:
HaiFei Liu;Peng Guo;JianXin Liu;Rong Liu;TieGang Tong;
Pages: 3435 - 3441 Abstract: The representation of spatial discrete data is crucial for data analysis in meteorology, agriculture, geological exploration, and other fields. Among the various methods for scattered data interpolation, the multiquadric (MQ) method is the most favorable for surface construction. However, the classical MQ method has accuracy concerns near the boundary and is time consuming for large datasets. This study proposes an algorithm that integrates the MQ method with trend surface analysis. A low-order polynomial trend surface equation is first used to model the overall trend. Then, the MQ equation is applied to fit the residual surface after removing the trend from the data. Our implementation can eliminate the distortion that the classical MQ method produces in areas with missing data, and the modeling efficiency is improved significantly since the local MQ method divides the residual surface into a group of subsurfaces. The accuracy and efficiency of the proposed algorithm are validated on a synthetic model. The performance of the developed algorithm is further examined on elevation data collected in Tibet and on the seabed of a strait in Norway. The results show that, at an equivalent resolution, the developed algorithm can be much more efficient than the classical MQ method and the well-developed Kriging method. PubDate:
2023
Issue No: Vol. 16 (2023)
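The trend-plus-residual idea can be sketched in a few lines: fit a first-order trend surface by least squares, then interpolate the residuals with Hardy's multiquadric basis sqrt(r^2 + c^2). This is a minimal global-MQ sketch under an assumed shape parameter c; the paper's local subdivision of the residual surface into subsurfaces, which provides the speedup, is omitted.

```python
import numpy as np

def mq_trend_fit(xy, z, c=1.0):
    """Fit a first-order trend surface by least squares, then interpolate
    the residuals with Hardy's multiquadric basis sqrt(r^2 + c^2)."""
    A = np.column_stack([np.ones(len(xy)), xy])         # columns: 1, x, y
    beta, *_ = np.linalg.lstsq(A, z, rcond=None)        # trend coefficients
    resid = z - A @ beta
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    lam = np.linalg.solve(np.sqrt(d**2 + c**2), resid)  # MQ weights
    def predict(q):
        dq = np.linalg.norm(q[:, None, :] - xy[None, :, :], axis=-1)
        trend = np.column_stack([np.ones(len(q)), q]) @ beta
        return trend + np.sqrt(dq**2 + c**2) @ lam
    return predict

# The combined surface reproduces the data exactly at the sample points.
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
vals = np.array([0., 1., 1., 2., 1.2])
f = mq_trend_fit(pts, vals)
```

Removing the trend first means the radial basis only has to model local departures, which is what tames the boundary distortion the abstract mentions.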
- Automated Machine Learning Driven Stacked Ensemble Modeling for Forest
Aboveground Biomass Prediction Using Multitemporal Sentinel-2 Data-
Authors:
Parth Naik;Michele Dalponte;Lorenzo Bruzzone;
Pages: 3442 - 3454 Abstract: Modeling and large-scale mapping of forest aboveground biomass (AGB) is a complicated, challenging, and expensive task. Considerable variations in forest characteristics create functional disparity among different models and need comprehensive evaluation. Moreover, the human bias involved in the process of modeling and evaluation affects the generalization of models at larger scales. In this article, we present an automated machine learning framework for modeling, evaluation, and stacking of multiple base models for AGB prediction. We incorporate a hyperparameter optimization procedure for automatic extraction of targeted features from multitemporal Sentinel-2 data that minimizes human bias in the proposed modeling pipeline. We integrate two independent frameworks: one for automatic feature extraction and one for automatic model ensembling and evaluation. The results suggest that the extracted target-oriented features draw heavily on the red-edge and shortwave infrared spectrum. The feature importance scale indicates a dominant role of summer-based features compared to other seasons. The automated ensembling and evaluation framework produced a stacked ensemble of base models that outperformed the individual base models in accurately predicting forest AGB. The stacked ensemble model delivered the best scores of R$^{2}_{cv}$ = 0.71 and RMSE = 74.44 Mg ha$^{-1}$. The other base models delivered R$^{2}_{cv}$ and RMSE ranging between 0.38–0.66 and 81.27–109.44 Mg ha$^{-1}$, respectively. The model evaluation metrics indicated that the stacked ensemble model was more resistant to outliers and achieved better generalization. Thus, the proposed study demonstrates an effective automated modeling pipeline for predicting AGB that minimizes human bias and is deployable over large and diverse forest areas. PubDate:
2023
Issue No: Vol. 16 (2023)
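Stacked ensembling with out-of-fold base predictions can be sketched with numpy alone: each base model predicts the folds it was not trained on, and a linear meta-model is fit on those out-of-fold predictions, avoiding the leakage of fitting the meta-model on in-sample outputs. The two ridge regressions below stand in for the paper's AutoML-selected learners; all model choices here are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression (no intercept, for brevity)."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def stack_predict(X, y, Xnew, alphas=(0.1, 100.0), k=5):
    """Out-of-fold stacking: each base model predicts only the folds it
    was not trained on; a linear meta-model is fit on those predictions."""
    n = len(y)
    folds = np.array_split(np.arange(n), k)
    oof = np.zeros((n, len(alphas)))
    for j, a in enumerate(alphas):
        for idx in folds:
            tr = np.setdiff1d(np.arange(n), idx)
            oof[idx, j] = X[idx] @ ridge_fit(X[tr], y[tr], a)
    meta, *_ = np.linalg.lstsq(oof, y, rcond=None)      # meta-learner weights
    base = np.column_stack([Xnew @ ridge_fit(X, y, a) for a in alphas])
    return base @ meta

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))                           # synthetic predictors
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.05 * rng.normal(size=120)
rmse = np.sqrt(np.mean((stack_predict(X, y, X) - y) ** 2))
```

Because the meta-model only ever sees held-out predictions, it learns how much to trust each base learner rather than memorizing their training-set fit.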
- Toward a Deep-Learning-Network-Based Convective Weather Initiation
Algorithm From the Joint Observations of Fengyun-4A Geostationary Satellite and Radar for 0–1h Nowcasting-
Authors:
Fenglin Sun;Bo Li;Min Min;Danyu Qin;
Pages: 3455 - 3468 Abstract: Nowcasting of convective weather is a challenging and significant task in operational weather forecasting systems. In this article, a new convolutional recurrent neural network based regression model for convective weather prediction is proposed, named the convective weather nowcasting net (CWNNet). The CWNNet takes the joint observations of the Fengyun-4A geostationary (GEO) satellite and ground-based Doppler weather radar from the previous hour as inputs to predict the radar reflectivity factor maps of the next hour. The statistical validation results clearly demonstrate that the mean values of the probability of detection, false alarm ratio, threat score, root-mean-square error, and mean absolute error evaluating the performance of CWNNet for 1-h nowcasting reach 0.87, 0.137, 0.71, 3.365 dBZ, and 1.038 dBZ, respectively. Because the GEO meteorological satellite is capable of capturing the features of convective initiation (CI), the CWNNet shows good performance in CI nowcasting. In addition, several case studies further indicate that the CWNNet can predict CI more than 30 min in advance by monitoring convective clouds. The CWNNet based on joint satellite and radar data shows better nowcasting performance than one employing a single data source. Thus, it can effectively produce more reliable nowcasting for convective weather events. PubDate:
2023
Issue No: Vol. 16 (2023)
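The categorical skill scores quoted above (probability of detection, false alarm ratio, threat score) come from a standard 2x2 contingency table between forecast and observed reflectivity exceedances. A minimal sketch, where the 35 dBZ convective threshold is an illustrative assumption:

```python
import numpy as np

def verify(pred, obs, thresh=35.0):
    """Contingency-table skill scores for a nowcast vs. observed
    reflectivity: POD, FAR, and threat score (CSI) at a dBZ threshold."""
    p, o = pred >= thresh, obs >= thresh
    hits = np.sum(p & o)              # forecast yes, observed yes
    misses = np.sum(~p & o)           # forecast no,  observed yes
    false_alarms = np.sum(p & ~o)     # forecast yes, observed no
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    ts = hits / (hits + misses + false_alarms)
    return pod, far, ts

obs  = np.array([40, 40, 40, 40, 10, 10, 10, 10.])
pred = np.array([40, 40, 40, 10, 40, 10, 10, 10.])
pod, far, ts = verify(pred, obs)  # hits=3, misses=1, false alarms=1
```

Note that correct rejections (forecast no, observed no) enter none of the three scores, which is why they suit rare-event verification like convection.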
- SSB-Based Signal Processing for Passive Radar Using a 5G Network
-
Authors:
Karol Abratkiewicz;Adam Księżyk;Marek Płotka;Piotr Samczyński;Jacek Wszołek;Tomasz Piotr Zieliński;
Pages: 3469 - 3484 Abstract: This article presents an alternative processing chain for passive radar using fifth-generation (5G) standard technology for broadband cellular networks as illuminators of opportunity. The proposal is to use the periodically transmitted modulated pulse of the 5G synchronization signal block (SSB) in 5G-based passive coherent location (PCL) system processing. Although the SSB periodicity limits the velocity ambiguity, the article describes a solution to this problem in a single-target scenario. The method is advantageous when there is no transmission in the telecommunication channel and the 5G SSB is the only existing signal. The article proposes a signal processing pipeline for a 5G-based PCL that is inspired by passive radars using a noncooperative pulse radar as an illumination source. The method has been validated using simulated and real-life 5G data measurements. The results presented in the article show the possibility of detecting a moving target in the absence of data transmission in the 5G network, using only the SSB, when classical passive radar signal processing fails. The presented results demonstrate the potential for a significant increase in the utilization of 5G network-based PCL in short-range applications. PubDate:
2023
Issue No: Vol. 16 (2023)
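Pulse-radar-style processing of a known waveform like the SSB amounts to evaluating a cross-ambiguity surface: the reference pulse is correlated against the surveillance channel over a grid of trial Doppler shifts, and the peak gives the target's bistatic delay and Doppler. The toy chirp (standing in for the SSB), sample rate, and Doppler grid below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def caf(ref, surv, fs, dopplers):
    """Cross-ambiguity function: correlate the known pulse against the
    surveillance channel over a grid of trial Doppler shifts; the peak
    gives the target's bistatic delay (lag) and Doppler."""
    n = np.arange(len(surv))
    out = np.empty((len(dopplers), len(surv) - len(ref) + 1))
    for i, fd in enumerate(dopplers):
        shifted = surv * np.exp(-2j * np.pi * fd * n / fs)   # remove trial Doppler
        out[i] = np.abs(np.correlate(shifted, ref, mode="valid"))
    return out

fs = 1000.0                                          # sample rate, Hz
m = np.arange(64.0)
ref = np.exp(1j * np.pi * 0.002 * m**2)              # toy chirp as the known pulse
surv = np.zeros(256, dtype=complex)                  # echo: delay 100 samples, 50 Hz Doppler
surv[100:164] = ref * np.exp(2j * np.pi * 50.0 * np.arange(100, 164) / fs)
dopplers = np.arange(-100.0, 101.0, 25.0)            # trial Doppler grid, Hz
amb = caf(ref, surv, fs, dopplers)
i, j = np.unravel_index(np.argmax(amb), amb.shape)   # peak -> (Doppler bin, delay)
```

The long gap between SSB pulses is exactly what limits the unambiguous Doppler span the abstract refers to: the Doppler grid can only be sampled as finely as the pulse repetition interval allows.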
- Geological Mapping via Convolutional Neural Network Based on Remote
Sensing and Geochemical Survey Data in Vegetation Coverage Areas-