IEEE Transactions on Geoscience and Remote Sensing
Journal Prestige (SJR): 2.649
Citation Impact (citeScore): 6
Number of Followers: 208  
  Hybrid journal (can contain Open Access articles)
ISSN (Print) 0196-2892
Published by IEEE  [191 journals]
  • [Front cover]
    • Abstract: Presents the front cover for this issue of the publication.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • IEEE Transactions on Geoscience and Remote Sensing publication information
    • Abstract: Provides a listing of current staff, committee members and society officers.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • IEEE Transactions on Geoscience and Remote Sensing information for authors
    • Abstract: Provides instructions and guidelines to prospective authors who wish to submit manuscripts.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • IEEE Transactions on Geoscience and Remote Sensing institutional listings
    • Abstract: Advertisement.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • DRBox-v2: An Improved Detector With Rotatable Boxes for Target Detection
           in SAR Images
    • Authors: Quanzhi An;Zongxu Pan;Lei Liu;Hongjian You;
      Pages: 8333 - 8349
      Abstract: Convolutional neural network (CNN)-based methods have been successfully applied to SAR target detection. Unlike the prevalent detection approaches based on rectangular bounding boxes, rotatable bounding box (RBox)-based methods, such as DRBox-v1, can effectively reduce the interference of background pixels and locate targets more precisely for geospatial object detection. Although DRBox-v1 has achieved impressive detection performance, some problems remain and there is room for improvement. In this paper, an improved RBox-based target detection framework is proposed to boost the precision and recall rates of detection; we refer to the method as DRBox-v2 and apply it to target detection in SAR images. The main improvements of DRBox-v2, and the contributions of this paper, are fourfold. First, a multi-layer prior box generation strategy is designed for detecting small-scale targets. Since shallow layers lack strong semantic information, the feature pyramid network (FPN) module is applied. Second, a modified encoding scheme for the RBox is proposed to estimate the position of the RBox and the orientation of targets more precisely. Third, a focal loss (FL) combined with a hard negative mining (HNM) technique is proposed to mitigate the imbalance between positive and negative samples, which produces better results than employing either one alone. Fourth, comprehensive ablation studies are conducted to reveal the effect of each improvement on the detection results. The results of target detection on three data sets are illustrated, and our method obtains gains of 0.135, 0.081, and 0.115 in average precision over three state-of-the-art methods, respectively.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
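The focal loss plus hard negative mining combination mentioned in the DRBox-v2 abstract can be illustrated with a minimal numpy sketch of the two generic techniques. This is not the paper's exact loss: the function names, the clipping constant, and the 3:1 negative-to-positive ratio are assumptions for illustration.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Per-sample binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    Easy examples (p_t near 1) are down-weighted by the (1 - p_t)^gamma factor."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)          # avoid log(0)
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

def hard_negative_mining(losses, labels, neg_pos_ratio=3):
    """Keep every positive plus the highest-loss negatives at a fixed ratio."""
    pos = labels == 1
    n_keep = neg_pos_ratio * max(int(pos.sum()), 1)
    neg_losses = np.where(pos, -np.inf, losses)   # exclude positives
    keep_neg = np.argsort(neg_losses)[::-1][:n_keep]
    mask = pos.copy()
    mask[keep_neg] = True
    return mask
```

In the combined scheme the abstract describes, the focal loss reshapes every sample's contribution while mining restricts which negatives enter the loss at all.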
  • A Temporal Phase Coherence Estimation Algorithm and Its Application on
           DInSAR Pixel Selection
    • Authors: Feng Zhao;Jordi J. Mallorqui;
      Pages: 8350 - 8361
      Abstract: Pixel selection is a crucial step in all advanced differential interferometric synthetic aperture radar (DInSAR) techniques and has a direct impact on the quality of the final DInSAR products. In this paper, a full-resolution phase quality estimator, the temporal phase coherence (TPC), is proposed for DInSAR pixel selection. The method is able to work with both distributed scatterers (DSs) and permanent scatterers (PSs). The influence on TPC of different neighboring window sizes and types of interferogram combinations [both single-master (SM) and multi-master (MM)] has been studied. The relationship between TPC and the phase standard deviation (STD) of the selected pixels has also been derived. Together with the classical coherence and amplitude dispersion methods, the TPC pixel selection algorithm has been tested on 37 VV-polarization Radarsat-2 images of Barcelona Airport. Results show the feasibility and effectiveness of the TPC pixel selection algorithm. Besides obvious improvements in the number of selected pixels, the new method shows other advantages compared with the two classical methods. The proposed pixel selection algorithm has an affordable computational cost and is easy to implement and incorporate into any advanced DInSAR processing chain for identifying high-quality pixels.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
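The core of a temporal phase coherence estimator can be sketched as the magnitude of the mean unit phasor of residual interferometric phases over time. This is a simplified reading of the abstract: the paper's full estimator also involves neighborhood-based phase estimation, which is omitted here.

```python
import numpy as np

def temporal_phase_coherence(residual_phases):
    """TPC = |mean of unit phasors of residual interferometric phases|.
    residual_phases: 1-D array of residual phases (radians), one per
    interferogram. TPC -> 1 for a phase-stable pixel, -> 0 for pure noise."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(residual_phases)))))
```

A pixel is then selected when its TPC exceeds a threshold tied to the desired phase STD, as the abstract's derived TPC-to-STD relationship suggests.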
  • On Quad-Polarized SAR Measurements of the Ocean Surface
    • Authors: Vladimir N. Kudryavtsev;Shengren Fan;Biao Zhang;Alexis A. Mouche;Bertrand Chapron;
      Pages: 8362 - 8370
      Abstract: This paper provides improved quantitative estimates of the wind-ruffled roughness contributions to dual co- and cross-polarized radar signals. Expanding previous approaches, 1696 RADARSAT-2 quad-polarized synthetic aperture radar (SAR) measurements, co-located with 65 in situ National Data Buoy Center (NDBC) buoy observations, are analyzed. Considering all wind conditions, the impact of breaking and near-breaking waves on dual co- and cross-polarized radar signals is robustly documented. For VV polarized measurements, the contribution of breaking waves decreased from 60% to 20% with increasing incidence angle, whereas for HH polarization and cross-polarization measurements, it can amount to about 60%–70% for all incidence angles. Building on the large analyzed data set, robust empirical dependencies between breaking waves and their impact on co- and cross-pol signals are then derived, as functions of wind speeds, incidence angles, and azimuth directions.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Seven-Component Scattering Power Decomposition of POLSAR Coherency Matrix
    • Authors: Gulab Singh;Rashmi Malik;Shradha Mohanty;Virendra Singh Rathore;Kanta Yamada;Maito Umemura;Yoshio Yamaguchi;
      Pages: 8371 - 8382
      Abstract: Applications of fully polarimetric synthetic aperture radar (POLSAR) have increased in the past few decades. The potential of model-based decompositions is coupled with polarimetric information extraction from the POLSAR data for target identification and classification. The coherency matrix $[T]$, with nine independent parameters and associated with physical scattering models, serves as input to these decompositions. This paper attempts to assign one such physical scattering model to the real part of $T_{23}$ ($\text{Re}\{T_{23}\}$) and develop a new scattering power decomposition model, called the seven-component scattering decomposition (7SD). Previously developed scattering power models have eliminated $\text{Re}\{T_{23}\}$, assuming the orientation angle compensation condition, to reduce the number of independent $[T]$ parameters. The proposed 7SD model has been tested on fully polarimetric SAR data sets acquired by the spaceborne Advanced Land Observing Satellite-2/Phased Array type L-band Synthetic Aperture Radar-2 (ALOS-2/PALSAR-2) and the airborne F-SAR, and the results are compared with existing scattering power decompositions. The physical scattering model for $\text{Re}\{T_{23}\}$ is derived from a particular configuration of dipoles (referred to as the “mixed dipole” configuration), which gives rise to compound scattering. Mixed-dipole scattering occurs in urban areas that are highly oriented with respect to the radar illumination direction as well as in vegetated areas. 7SD also delivers an additional mixed-dipole scattering power compared with the previous six-component scattering model. The mixed-dipole scattering model reduces the contribution of volume scattering power in double-bounce-predominant areas (such as oriented urban blocks), thereby imparting an improved understanding of the polarimetric information contained in the coherency matrix.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Backprojection Subimage Autofocus of Moving Ships for Synthetic Aperture
           Radar
    • Authors: Aron Sommer;Jörn Ostermann;
      Pages: 8383 - 8393
      Abstract: We propose a new autofocus approach for the backprojection reconstruction algorithm to compute high-quality synthetic aperture radar images of non-linearly moving and maneuvering ships. In contrast to the state-of-the-art autofocus techniques, our approach allows a long coherent processing interval even in the case of a rough sea, which improves the image quality. An improved image quality enables the classification of ships in airborne synthetic aperture radar (SAR) images. For this purpose, we decompose the image into subimages and estimate pulse-by-pulse a phase error for each subimage by maximizing subimage sharpness. A regularized Levenberg–Marquardt algorithm guarantees a smooth phase correction on subimage level. By correcting the subsequent range distances from the flight path to all pixels using the currently estimated phase errors, sharp images of maneuvering ships with arbitrary velocities can now be reconstructed. The evaluation of our proposed ship autofocus technique on the basis of real airborne X-band data shows that our approach leads to a visible improvement of image quality in comparison with the state-of-the-art techniques. Given these results, even an automatic ship classification based on radar images might be possible in the future.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
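The subimage-sharpness objective that the autofocus abstract maximizes can be illustrated with a common sharpness metric, normalized intensity squared. The paper's exact objective and its Levenberg–Marquardt phase-smoothing step are not reproduced here; this is only the kind of quantity such an optimization would drive upward.

```python
import numpy as np

def image_sharpness(img):
    """Normalized intensity-squared sharpness of a complex SAR (sub)image.
    Energy concentrated in few pixels (well focused) scores higher than the
    same energy smeared across many pixels (defocused)."""
    I = np.abs(np.asarray(img)) ** 2          # pixel intensities
    return float(np.sum(I ** 2) / np.sum(I) ** 2)
```

Pulse-by-pulse phase corrections would then be chosen, per subimage, to maximize this value.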
  • Class Information-Based Band Selection for Hyperspectral Image
           Classification
    • Authors: Meiping Song;Xiaodi Shang;Yulei Wang;Chunyan Yu;Chein-I Chang;
      Pages: 8394 - 8416
      Abstract: This paper presents a class information (CI)-based band selection (BS) approach to hyperspectral image classification (HSIC). It introduces a new concept from an information theory point of view, CI, which can be used to determine an appropriate weight imposed on each class of interest. Specifically, two types of criteria, the intraclass information criterion (IC) and the interclass IC, are derived as CI probabilities that can be used to determine the number of training samples to select for each class. With such CI-calculated probabilities, another new concept called class self-information (CSI) is defined for each class, which can be further used to define the class entropy (CE), so that CSI and CE can be used to determine the number of bands required for BS, $n_{\text{BS}}$. In order to find the desired $n_{\text{BS}}$ bands, two types of BS methods based on CSI and CE are custom-designed: single class signature-constrained BS (SCSC-BS), which utilizes constrained energy minimization (CEM) to constrain each individual class signature and select bands for a particular class according to its CSI-determined $n_{\text{BS}}$, and multiple class signature-constrained BS (MCSC-BS), which takes advantage of linearly constrained minimum variance (LCMV) to constrain all class signatures and select the CE-determined $n_{\text{BS}}$ bands for all classes. The SCSC-BS and MCSC-BS selected bands are then used to perform classification and are evaluated by CI-weighted classification measures in real image experiments. The results show that HSIC using judiciously selected partial bands, together with CI-weighted measures, can improve on HSIC using the full set of bands.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
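The self-information and entropy quantities the band-selection abstract builds on are the standard information-theoretic definitions, sketched below. How the paper derives the class probabilities themselves (via the intraclass/interclass criteria) is not reproduced; the probabilities are taken as given inputs.

```python
import numpy as np

def class_self_information(p):
    """Self-information of a class with probability p, in bits: -log2(p)."""
    return float(-np.log2(p))

def class_entropy(class_probs):
    """Entropy of a class probability distribution, in bits: -sum p log2 p."""
    ps = np.asarray(class_probs, float)
    return float(-np.sum(ps * np.log2(ps)))
```

Rare classes (small p) carry high self-information, which is what lets CSI allocate more bands, $n_{\text{BS}}$, to harder-to-separate classes.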
  • Soil and Vegetation Scattering Contributions in L-Band and P-Band
           Polarimetric SAR Observations
    • Authors: S. Hamed Alemohammad;Thomas Jagdhuber;Mahta Moghaddam;Dara Entekhabi;
      Pages: 8417 - 8429
      Abstract: Active microwave-based retrieval of soil moisture in vegetated areas has uncertainties due to the sensitivity of the signal to both soil (dielectric constant and roughness) and vegetation (dielectric constant and structure) properties. A multi-frequency acquisition system would increase the number of observations that may constrain soil and/or vegetation parameter retrievals. In order to realize this constraint, an understanding of microwave interaction with the surface and vegetation across frequencies is necessary. Different microwave frequencies interact differently with the soil–vegetation medium, with penetration into the soil and canopy increasing as frequency decreases. In this study, we examine the contributions of different scattering mechanisms to coincident observations at two microwave frequencies (L- and P-band) from airborne synthetic aperture radar instruments. We quantify the contributions of the surface, vegetation volume, and double-bounce scattering components. Results are analyzed and discussed to guide future multi-frequency retrieval algorithm designs.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • A Novel Approach to Doppler Centroid and Channel Errors Estimation in
           Azimuth Multi-Channel SAR
    • Authors: Yashi Zhou;Robert Wang;Yunkai Deng;Weidong Yu;Huaitao Fan;Da Liang;Qingchao Zhao;
      Pages: 8430 - 8444
      Abstract: Multi-channel synthetic aperture radar (SAR) in azimuth can overcome the minimum-antenna-area constraint of conventional SAR in high-resolution and wide-swath (HRWS) imaging. However, the SAR system suffers from amplitude and phase mismatch among channels and a nonideal antenna pattern, which result in azimuth ambiguity and ghost targets in the final image. Therefore, taking the nonbandlimited signal and channel errors into account, a practical azimuth ambiguity-to-signal ratio (AASR) model of the multi-channel SAR system is established. Meanwhile, the baseband Doppler centroid (DC) frequency related to the channel errors also influences image quality. An effective method is then proposed to calculate the baseband DC frequency from the jumping points of the channel phase error estimates. Subsequently, considering the effect of the azimuth antenna pattern (AAP), a correspondence between the ideal steering vectors and the signal subspace obtained from decomposing the covariance matrix is established. Based on the uniqueness of the signal subspace and this correspondence, an accurate method is proposed to estimate the channel phase errors via a minimum mean square error (MMSE) criterion on the signal subspace. Finally, an accurate multi-channel SAR imaging diagram is shown to effectively mitigate the azimuth ambiguous energy caused by channel errors. Simulation and real data experiments, including four-channel airborne SAR data with a bandwidth of 210 MHz and Chinese Gaofen-3 dual receiving channel (DRC) spaceborne SAR data, validate the effectiveness of the proposed calibration method, particularly at low signal-to-noise ratio (SNR).
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Detecting Small Objects in Urban Settings Using SlimNet Model
    • Authors: Zheng Yang;Yaolin Liu;Lirong Liu;Xinming Tang;Junfeng Xie;Xiaoming Gao;
      Pages: 8445 - 8457
      Abstract: The automatic extraction of small objects such as roadside milestones, small traffic signs, and other urban furniture remains a technical challenge. This study focuses on deep learning methods to detect small urban elements in mobile mapping system (MMS) images. Based on images obtained by an MMS in urban areas, we create an urban element detection (UED) data set containing several kinds of small objects found in a city. A simple feature extraction convolutional neural network (CNN) called SlimNet is proposed and combined with an optimized Faster R-CNN framework. The resulting deep learning method can automatically extract small objects commonly found in cities, including manhole covers, milestones, and license plates. Experiments on the UED data set show that SlimNet has the highest accuracy compared with other popular networks, including VGG, MobileNet, ResNet, and YOLOv3. The SlimNet model achieves a mean average precision (AP) up to 12.3% higher than that of the lowest-performing network, ResNet-152, and accelerates both training and detection owing to its relative simplicity. Moreover, $k$-means clustering is used to choose the dimensions of the anchor boxes for detection. We ran $k$-means clustering for different numbers of clusters, and the results show that at least four clusters are needed for detection on a small data set such as the UED. We also propose a method that uses templates of different scales for anchors to further improve small object detection; this approach improved the AP by 3%–4% in our experiments.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
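Choosing anchor-box dimensions by $k$-means clustering, as the abstract above describes, can be sketched with plain Lloyd iterations on (width, height) pairs. This uses Euclidean distance for simplicity; detection papers often substitute an IoU-based distance, and the paper's exact variant is not stated here.

```python
import numpy as np

def kmeans_anchors(whs, k, iters=50, seed=0):
    """Lloyd's k-means over (width, height) pairs of ground-truth boxes.
    The k cluster centers become the anchor-box dimensions."""
    whs = np.asarray(whs, float)
    rng = np.random.default_rng(seed)
    centers = whs[rng.choice(len(whs), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each box to its nearest center, then recompute centers
        d = np.linalg.norm(whs[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = whs[assign == j].mean(axis=0)
    return centers
```

Running this for several values of k and inspecting the within-cluster distances is how one would arrive at a "needs at least four clusters" conclusion like the abstract's.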
  • Contour Refinement and EG-GHT-Based Inshore Ship Detection in Optical
           Remote Sensing Image
    • Authors: Hao Chen;Tong Gao;Wen Chen;Ye Zhang;Jing Zhao;
      Pages: 8458 - 8478
      Abstract: Inshore ship detection is challenging in high-resolution optical remote sensing images (RSIs) because inshore ships are often incomplete and deformed due to poor imaging conditions and the shadow of the ship superstructure, and there are various interferences in the harbor. A contour refinement and improved generalized Hough transform (GHT)-based inshore ship detection scheme is proposed for RSIs with complex harbor scenes. First, the suspected region of ships (SRS) is located in the entire RSI according to the line segments of the ship body and docks. The contours in each SRS are then refined to repair the damaged ship head contour (SHC) using the convex-set characteristics of the ship head and subsequently to reduce non-SHCs by curvature filtering. In each refined SRS, equal-frequency quantization instead of equal-width quantization for R-Table construction and a Gini coefficient-based decision criterion combining the number and distribution of votes are proposed to improve the GHT (i.e., EG-GHT) and to extract SHCs as candidate targets. False candidates are removed according to the pixel proportion described by the structured binarization feature. Applying a border scoring strategy, the best candidates with the largest score among all overlapping bounding boxes are selected as the final detection targets. Using public RSIs covering various cases, including turbid water, cloud occlusion, ships moored together, and ships of different sizes, experimental results demonstrate that the proposed scheme outperforms state-of-the-art contour-based methods and deep learning-based methods in terms of precision–recall rate and average precision, respectively.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
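A Gini-coefficient criterion on a Hough vote distribution, as used in the EG-GHT decision step above, can be sketched with the standard Gini inequality coefficient: near 0 when votes are spread evenly (likely clutter), approaching 1 when they pile onto one reference point (likely a true ship head). The paper's exact criterion also weighs the vote count; only the distribution part is shown, and this standard definition is an assumption.

```python
import numpy as np

def gini_coefficient(votes):
    """Gini inequality coefficient of a vote histogram.
    0 = perfectly even votes across cells; -> 1 = all votes in one cell."""
    x = np.sort(np.asarray(votes, float))    # ascending
    n = len(x)
    cum = np.cumsum(x)
    return float((n + 1 - 2.0 * np.sum(cum) / cum[-1]) / n)
```

A detection would then be accepted when the vote concentration (and total count) exceeds a threshold.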
  • Validation of SMAP Soil Moisture Products Using Ground-Based Observations
           for the Paddy Dominated Tropical Region of India
    • Authors: Gurjeet Singh;Narendra N. Das;Rabindra K. Panda;Andreas Colliander;Thomas J. Jackson;Binayak P. Mohanty;Dara Entekhabi;Simon H. Yueh;
      Pages: 8479 - 8491
      Abstract: The Soil Moisture Active Passive (SMAP) mission currently provides three surface soil moisture products based solely on instrument measurements: 1) a radiometer-only product gridded at 36 km; 2) a radiometer-only enhanced product gridded at 9 km; and 3) a high-resolution (3 km) SMAP-Sentinel active–passive product. It is important to validate these released SMAP soil moisture products over various land covers and hydroclimatic domains before they are routinely used in scientific research and applications. This paper evaluates SMAP-based soil moisture products for typical Indian conditions of extreme seasonal variability that leads to changes from very wet to dry soil, especially for the paddy dominated region. The assessment metrics indicate that the enhanced passive-only soil moisture product meets the SMAP accuracy requirement of 0.04 m3/m3 during the nongrowing season (NGS), with unbiased root-mean-square error (ubRMSE) values ranging between 0.025 and 0.036 m3/m3. However, this product underperformed during the paddy growing season (GS), with ubRMSE values ranging between 0.063 and 0.097 m3/m3. In addition, the SMAP-Sentinel active–passive soil moisture product shows satisfactory performance during the NGS (ubRMSE, 0.017–0.051 m3/m3), but during the GS, ubRMSE ranged between 0.089 and 0.104 m3/m3. The use of vegetation water content climatology and a low clay fraction in the SMAP baseline algorithm (auxiliary database), which mismatch the actual values, may be a source of errors and biases in the SMAP soil moisture products. The reported study provides guidelines for the application of enhanced SMAP soil moisture products in India, especially for the tropical region, and provides information that can be used to improve the retrieval algorithm.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
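The ubRMSE metric reported throughout the SMAP validation abstract is the RMSE computed after removing each series' mean, i.e. the RMSE with the constant bias taken out. A minimal sketch:

```python
import numpy as np

def ubrmse(est, obs):
    """Unbiased RMSE between estimated and observed soil moisture series:
    RMSE of the mean-removed difference, so a constant bias does not count."""
    est = np.asarray(est, float)
    obs = np.asarray(obs, float)
    d = (est - est.mean()) - (obs - obs.mean())
    return float(np.sqrt(np.mean(d ** 2)))
```

A retrieval that is systematically 0.05 m3/m3 too wet but tracks every fluctuation therefore scores a ubRMSE of zero, which is why SMAP's 0.04 m3/m3 requirement is stated in ubRMSE terms.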
  • Radiometric Sensitivity and Signal Detectability of Ocean Color Satellite
           Sensor Under High Solar Zenith Angles
    • Authors: Hao Li;Xianqiang He;Palanisamy Shanmugam;Yan Bai;Difeng Wang;Haiqing Huang;Qiankun Zhu;Fang Gong;
      Pages: 8492 - 8505
      Abstract: New-generation ocean color imagers on geostationary orbits are designed to provide a much higher temporal resolution along with enhanced spatial and spectral resolutions, opening up obvious opportunities for improving the sampling frequency and resolving the diurnal variability of phytoplankton and other biogeochemical properties in dynamic coastal waters. Despite the capabilities of such new-generation sensors to detect the diurnal cycles of various ocean phenomena, there is a lack of knowledge of their radiometric sensitivity and signal detectability for observing ocean color in the morning or evening hours. This paper aims to explore the capability of a geostationary satellite ocean color sensor for detecting ocean biogeochemical properties [chlorophyll (CHL); total suspended matter (TSM); colored dissolved organic matter (CDOM)] under high solar zenith angles (SZAs). The analysis is based upon simulations from the vector radiative transfer model for the coupled ocean–atmosphere system (PCOART-SA), which considers earth curvature effects. The unitless differential signal-to-noise ratio ($\Delta$SNR) is used as a discriminant parameter to indicate the radiometric sensitivity to variations of different biogeochemical properties. The results showed that the SZA has a significant impact on the signal detectability of CHL variation. For typical shelf water (CHL $=1~\mu\text{g/L}$, TSM = 1 mg/L, CDOM = 0.15 m$^{-1}$), with a typical observation zenith angle (OZA) of 30°, changes on the order of $\Delta\text{CHL} = 0.024~\mu\text{g/L}$ (2.4% of the background CHL) were detectable when SZA = 30°; when SZA > 75°, the minimal detectable $\Delta\text{CHL}$ increased to $0.77~\mu\text{g/L}$ (77%), indicating the difficulty of detecting CHL under high SZA. For CDOM, the detectability of changes ($\Delta\text{CDOM}$) was also found to be closely related to the SZA, varying by about an order of magnitude depending on the SZA conditions. However, even under extremely high SZA conditions (SZA = 80°, OZA = 30°), $\Delta\text{CDOM} = 0.007$ m$^{-1}$, about 4.7% of the background CDOM, was still detectable at 412 nm. On the other hand, under high SZA conditions (SZA = 80°, OZA = 30°), $\Delta\text{TSM} = 0.211$ mg/L (2.1% of the background TSM) was also detectable. Overall, our results indicate that under high-SZA conditions, a geostationary satellite ocean color sensor may experience difficulty in detecting slight changes in CHL in productive waters, but it can still detect small changes in TSM and CDOM contents despite reduced sensitivity at steeper SZAs.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
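One plausible reading of the unitless $\Delta$SNR discriminant above is the change in top-of-atmosphere radiance caused by a geophysical perturbation, expressed in units of the sensor's noise-equivalent delta radiance (NEdL); a change is resolvable when the ratio reaches 1. The exact definition in the paper may differ, so this is only an interpretive sketch.

```python
def delta_snr(radiance_perturbed, radiance_base, nedl):
    """Differential SNR: radiance change from a perturbation of a
    biogeochemical property, divided by the noise-equivalent radiance.
    |delta_snr| >= 1 marks a change the sensor can resolve."""
    return (radiance_perturbed - radiance_base) / nedl
```

Sweeping the perturbation size until $\Delta$SNR reaches 1 is how a "minimal detectable $\Delta$CHL" figure like those quoted above would be obtained.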
  • Adaptive Multiscale Deep Fusion Residual Network for Remote Sensing Image
           Classification
    • Authors: Ge Li;Lingling Li;Hao Zhu;Xu Liu;Licheng Jiao;
      Pages: 8506 - 8521
      Abstract: With the development of remote sensing imaging technology, remote sensing images with high resolution and complex structure can be acquired easily. The classification of remote sensing images remains a hot and challenging problem. In order to improve the performance of remote sensing image classification, we propose an adaptive multiscale deep fusion residual network (AMDF-ResNet). The AMDF-ResNet consists of a backbone network and a fusion network. The backbone network, comprising several residual blocks, generates multiscale hierarchical features that contain semantic information from low to high levels. In the fusion network, the proposed adaptive feature fusion module can emphasize useful information and suppress useless information by learning weights that represent the importance of the features. The AMDF-ResNet can make full use of the multiscale hierarchical features, and the extracted feature is discriminative. In addition, we propose a sample selection method named the important samples selection strategy (ISSS). Based on superpixel segmentation results, gradient information and spatial distribution are used as two references to determine the selection numbers and select samples. Compared with a random selection strategy, training samples selected by ISSS are more representative and diverse. The experimental results on four data sets demonstrate that the AMDF-ResNet and ISSS are effective.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
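The weighted-fusion idea in the AMDF-ResNet abstract, emphasizing useful feature maps and suppressing useless ones via learned importance weights, can be sketched as a softmax-weighted sum. In the real network the logits are learned end to end; here they are simply given, and the function name is illustrative.

```python
import numpy as np

def adaptive_fusion(features, logits):
    """Fuse same-shape multiscale feature maps with softmax weights derived
    from per-scale importance logits (learned in the actual network)."""
    logits = np.asarray(logits, float)
    w = np.exp(logits - np.max(logits))   # numerically stable softmax
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, features))
```

With equal logits this reduces to plain averaging; a large logit on one scale lets that scale dominate the fused feature.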
  • Exploration of Machine Learning Techniques in Emulating a Coupled
           Soil–Canopy–Atmosphere Radiative Transfer Model for Multi-Parameter
           Estimation From Satellite Observations
    • Authors: Hanyu Shi;Zhiqiang Xiao;Xiaodan Tian;
      Pages: 8522 - 8533
      Abstract: The time-consuming modeling of physical remote sensing models restricts their application to parameter estimation from satellite observations. Machine learning techniques have become highly developed in recent years and show good capacity for model fitting. Based on our previously developed coupled soil–canopy–atmosphere radiative transfer model (RTM) and a multiple parameters estimation scheme, this paper evaluates the performance of four machine learning algorithms [Gaussian process regression (GPR), back-propagation neural networks (NNs), random forest regression, and general regression NN] on emulating the coupled RTM, where the traditional lookup table (LUT) algorithm is also compared. The results show that the GPR algorithm can emulate complex RTMs with excellent accuracy and efficiency. GPR emulators of photosynthetically active radiation (PAR), fraction of absorbed PAR, and incident shortwave radiation were applied to the multi-parameter estimation scheme to replace the traditional LUT algorithm, which avoids the need to integrate over the spectra while achieving an acceleration ratio of 16. A test of the updated multi-parameter estimation scheme at the Bondville site using 18 years of clear-sky observations demonstrates that replacing the computationally expensive integration processes with GPR emulators is practical. The emulators can also be used to simulate the corresponding parameters independently, and this GPR acceleration method for complex models is universal and can be easily applied to other time-consuming models.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
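The GPR-emulation idea above, replacing an expensive radiative transfer model with a Gaussian process fitted to a set of model runs, can be sketched with a minimal zero-mean GP using an RBF kernel. The coupled RTM itself is stood in for by a smooth toy function; kernel length scale and jitter are illustrative choices, not the paper's settings.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    """Squared-exponential kernel between two 1-D coordinate arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gpr_predict(x_train, y_train, x_test, length=0.3, noise=1e-6):
    """Posterior mean of a zero-mean GP: k(x*, X) @ (K + noise*I)^-1 @ y."""
    K = rbf_kernel(x_train, x_train, length) + noise * np.eye(len(x_train))
    return rbf_kernel(x_test, x_train, length) @ np.linalg.solve(K, y_train)

# Toy stand-in for an expensive RTM: train on 10 runs, query in between.
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.exp(-3.0 * x_train)
pred = gpr_predict(x_train, y_train, np.array([0.5]))[0]
```

Once fitted, each query is a cheap matrix-vector product, which is the source of the acceleration the abstract reports relative to integrating over spectra.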
  • Sig-NMS-Based Faster R-CNN Combining Transfer Learning for Small Target
           Detection in VHR Optical Remote Sensing Imagery
    • Authors: Ruchan Dong;Dazhuan Xu;Jin Zhao;Licheng Jiao;Jungang An;
      Pages: 8534 - 8545
      Abstract: Small target detection is a challenging task in very-high-resolution (VHR) optical remote sensing imagery, because small targets occupy a minuscule number of pixels and are easily disturbed by backgrounds or occluded by other objects. Although current convolutional neural network (CNN)-based approaches perform well when detecting normal-sized objects, they are barely suitable for detecting small ones. Two practical problems stand in their way. First, current CNN-based approaches are not specifically designed for the minuscule size of small targets (~15 or ~10 pixels in extent). Second, no well-established data sets include labeled small targets, and establishing one from scratch is labor-intensive and time-consuming. To address these two issues, we propose an approach that combines a Sig-NMS-based Faster R-CNN with transfer learning. Sig-NMS replaces traditional non-maximum suppression (NMS) in the region proposal network stage and decreases the possibility of missing small targets. Transfer learning can effectively label remote sensing images by automatically annotating both object classes and object locations. We conduct experiments on three data sets of VHR optical remote sensing images, RSOD, LEVIR, and NWPU VHR-10, to validate our approach. The results demonstrate that the proposed approach can effectively detect small targets of about $10\times10$ pixels in VHR optical remote sensing images and automatically label them as well. In addition, our method achieves better mean average precision than other state-of-the-art methods: 1.5% higher on the RSOD data set, 17.8% higher on the LEVIR data set, and 3.8% higher on NWPU VHR-10.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
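A sigmoid-based alternative to hard NMS, in the spirit of the Sig-NMS step above, can be sketched as Soft-NMS-style score decay: instead of deleting every proposal whose IoU with a kept box exceeds a threshold (which easily wipes out small adjacent targets), scores are decayed smoothly by a sigmoid of the IoU. The paper's exact Sig-NMS rule is not given here; the decay function and its parameters are assumptions.

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box [x1, y1, x2, y2] against an (n, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (box[2] - box[0]) * (box[3] - box[1])
    b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (a + b - inter)

def sigmoid_nms(boxes, scores, steep=10.0, thresh=0.5):
    """Greedy NMS with sigmoid score decay instead of hard deletion."""
    boxes = np.asarray(boxes, float)
    scores = np.asarray(scores, float).copy()
    keep, idx = [], list(range(len(scores)))
    while idx:
        best = max(idx, key=lambda i: scores[i])
        keep.append(best)
        idx.remove(best)
        if idx:
            ious = iou(boxes[best], boxes[idx])
            scores[idx] *= 1.0 / (1.0 + np.exp(steep * (ious - thresh)))
    return keep, scores
```

Proposals far from any kept box retain almost their full score, so a small target next to a dominant one survives with a reduced, but nonzero, score.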
  • Intermittent Clutter Suppression Method Based on Adaptive Harmonic Wavelet
           Transform for L-Band Radar Wind Profiler
    • Authors: S. Allabakash;Sanghun Lim;P. Yasodha;Hyunjung Kim;Gyuwon Lee;
      Pages: 8546 - 8556
      Abstract: Boundary layer radar (L-band) wind profilers frequently encounter a significant problem arising from the contamination of intermittent clutter, produced by seasonal and nocturnal migrating birds, which often yields erroneous wind velocity and boundary layer information. Classical harmonic wavelet transforms (HWTs) are inadequate in removing the transient clutter contamination under certain conditions, particularly when the clutter is significant. We implemented an adaptive complex harmonic discrete wavelet transform with an advanced statistical method to overcome the shortcomings of the classical wavelet method. This algorithm effectively eliminates the bird contamination even where the classical method fails. Finally, a multiple peak-picking (MPP) algorithm was added to select true atmospheric signals and estimate accurate moments. The obtained wind velocity measurements were compared with those derived using the conventional method and validated with global positioning system radiosonde winds. The comparison shows that the proposed method is more effective than the conventional one.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • An Improved Single-Channel Polar Region Ice Surface Temperature Retrieval
           Algorithm Using Landsat-8 Data
    • Authors: Yachao Li;Tingting Liu;Mohammed E. Shokr;Zemin Wang;Liangpei Zhang;
      Pages: 8557 - 8569
      Abstract: Ice surface temperature (IST) is a key parameter for the study of polar ice sheets and ice shelves. In this study, an improved single-channel (ISC) algorithm based on the radiative transfer equation is proposed for IST retrieval from Landsat-8 band 10 data. The main steps in the proposed ISC algorithm are: 1) simulation of the atmospheric radiative parameters by regression against the atmospheric water vapor content and the effective mean atmospheric temperature; 2) calculation of IST using Planck's equation instead of Taylor's approximation; and 3) implementation of an iterative scheme for the IST calculation. The errors from using Taylor's approximation and the atmospheric radiative parameter simulation were quantitatively estimated. A sensitivity analysis of ISC to possible errors in atmospheric water vapor content, brightness temperature, and satellite observations was also conducted. The results of the sensitivity analysis showed that the proposed algorithm is robust to the atmospheric water vapor content but sensitive to the calibration precision of the thermal infrared sensor. Verification using a simulated approach showed better IST retrieval accuracy from ISC than from the original SC algorithm [root-mean-square errors (RMSEs) of 0.3252 and 0.7176 K, respectively]. When compared with near-surface air temperatures from 68 automatic weather stations in Greenland and 25 in the Antarctic, the bias and RMSE from the ISC algorithm were again better than those from the SC algorithm. The IST from the Moderate Resolution Imaging Spectroradiometer (MODIS) was found to be underestimated with respect to the results of both the SC and ISC algorithms. Maps of the spatial distributions of IST derived from sample Landsat-8 images are presented. The rationale for each step in the proposed ISC algorithm is also given to further support the authenticity of the results.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
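The ISC retrieval step 2) above amounts to inverting the band-effective Planck function instead of linearizing it with Taylor's approximation. A minimal sketch, assuming the single-channel radiative transfer equation L = [εB(Ts) + (1−ε)L↓]τ + L↑; the calibration constants K1/K2 are the Landsat-8 TIRS band-10 thermal conversion constants as distributed in product metadata, and all atmospheric values below are invented placeholders, not numbers from the paper:

```python
import math

# Landsat-8 TIRS band-10 thermal conversion constants (as distributed in
# Landsat-8 product metadata; quoted here as assumptions for illustration).
K1 = 774.8853   # W / (m^2 sr um)
K2 = 1321.0789  # K

def planck_radiance(T):
    """Band-effective Planck radiance for a surface at temperature T (K)."""
    return K1 / (math.exp(K2 / T) - 1.0)

def surface_temperature(L_sensor, tau, L_up, L_down, emissivity):
    """Invert L_sensor = [eps*B(Ts) + (1-eps)*L_down]*tau + L_up for Ts,
    using the exact inverse Planck function (no Taylor approximation)."""
    B_Ts = (L_sensor - L_up - tau * (1.0 - emissivity) * L_down) / (tau * emissivity)
    return K2 / math.log(K1 / B_Ts + 1.0)

# Round-trip check: forward-model a -20 C ice surface, then invert.
Ts_true = 253.15                          # K
tau, L_up, L_down, eps = 0.95, 0.3, 0.5, 0.97   # invented atmospheric values
L = (eps * planck_radiance(Ts_true) + (1 - eps) * L_down) * tau + L_up
Ts_est = surface_temperature(L, tau, L_up, L_down, eps)
```

Because the inverse Planck function is applied exactly, the forward model and inversion round-trip to machine precision, which is the advantage the abstract claims over a Taylor-series linearization.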
  • Novel Polarimetric Contrast Enhancement Method Based on Minimal Clutter to
           Signal Ratio Subspace
    • Authors: Dongwen Yang;Lan Du;Hongwei Liu;Wei Ni;
      Pages: 8570 - 8583
      Abstract: Enhancing the contrast between target and clutter is a crucial issue in synthetic aperture radar (SAR) image target detection. In this paper, we define a novel subspace, called the minimal clutter-to-signal ratio (MCSR) subspace, which minimizes the clutter-to-signal ratio (CSR) when the feature vector is projected onto it. Based on the MCSR, a novel polarimetric contrast enhancement method is proposed. The MCSR subspace is learned from the commonly used polarimetric feature vectors extracted from labeled training SAR image pixels. The feature vectors extracted from candidate SAR image pixels are projected onto the MCSR subspace. By calculating the squared norm of each transformed feature vector, an enhanced image can be obtained. It is demonstrated that the existing optimization of polarimetric contrast enhancement (OPCE) is, to some extent, a special case of the proposed method. Experimental results show that our method outperforms the traditional OPCE method on the RadarSat-2 SAR data.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
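The abstract does not give the MCSR construction in closed form; one plausible realization of a subspace that minimizes a clutter-to-signal ratio of quadratic forms is a generalized eigendecomposition of the clutter and signal feature covariance matrices. A toy sketch under that assumption (all feature data, dimensions, and the 1-D subspace choice are invented for illustration):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Synthetic polarimetric feature vectors (rows): "signal" pixels are strong
# along the first axis, "clutter" pixels along the second -- a toy stand-in
# for features extracted from labeled training SAR image pixels.
signal = rng.normal(0, 1, (500, 4)) * np.array([3.0, 0.5, 0.5, 0.5])
clutter = rng.normal(0, 1, (500, 4)) * np.array([0.5, 3.0, 0.5, 0.5])

Cs = signal.T @ signal / len(signal)     # signal covariance
Cc = clutter.T @ clutter / len(clutter)  # clutter covariance

# Generalized eigenproblem Cc v = lambda Cs v: eigenvectors with the
# smallest eigenvalues span a subspace with minimal clutter-to-signal ratio.
w, V = eigh(Cc, Cs)
P = V[:, :1]  # keep a 1-D subspace, purely for illustration

def csr(X_clutter, X_signal, proj):
    """Clutter-to-signal ratio of squared norms after projection."""
    return np.sum((X_clutter @ proj) ** 2) / np.sum((X_signal @ proj) ** 2)

csr_raw = np.sum(clutter ** 2) / np.sum(signal ** 2)   # before projection
csr_proj = csr(clutter, signal, P)                      # after projection
```

Projecting onto the minimizing eigenvectors drives the CSR well below its value in the raw feature space, which is the enhancement effect the method exploits.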
  • Measurements of Sea Surface Currents in the Baltic Sea Region Using
           Spaceborne Along-Track InSAR
    • Authors: Anis Elyouncha;Leif E. B. Eriksson;Roland Romeiser;Lars M. H. Ulander;
      Pages: 8584 - 8599
      Abstract: The main challenging problems in ocean current retrieval from along-track interferometric (ATI) synthetic aperture radar (SAR) are phase calibration and wave bias removal. In this paper, a method based on the differential InSAR (DInSAR) technique is proposed for correcting the phase offset and its variation. The wave bias removal is assessed using two different Doppler models and two different wind sources. In addition to the wind provided by an atmospheric model, the wind speed used for wave correction in this work is extracted from the calibrated SAR backscatter. This demonstrates that current retrieval from ATI-SAR can be completed independently of atmospheric models. The retrieved currents, from four TanDEM-X (TDX) acquisitions over the Öresund channel in the Baltic Sea, are compared to a regional ocean circulation model. It is shown that by applying the proposed phase correction and wave bias removal, a good agreement in spatial variation and current direction is achieved. The residual bias, between the ocean model and the current retrievals, varies between 0.013 and 0.3 m/s depending on the Doppler model and wind source used for wave correction. This paper shows that using SAR as a source of wind speed reduces the bias and root-mean-square error (RMSE) of the retrieved currents by 20% and 15%, respectively. Finally, the sensitivity of the sea current retrieval to Doppler model and wind errors is discussed.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Introducing Spatial Regularization in SAR Tomography Reconstruction
    • Authors: Clément Rambour;Loïc Denis;Florence Tupin;Hélène M. Oriot;
      Pages: 8600 - 8617
      Abstract: The resolution achieved by current synthetic aperture radar (SAR) sensors provides a detailed visualization of urban areas. Spaceborne sensors such as TerraSAR-X can be used to analyze large areas at a very high resolution. In addition, repeated passes of the satellite give access to temporal and interferometric information on the scene. Because of the complex 3-D structure of urban surfaces, scatterers located at different heights (ground, building facade, and roof) produce radar echoes that often get mixed within the same radar cells. These echoes must be numerically unmixed in order to get a fine understanding of the radar images. This unmixing is at the core of SAR tomography. SAR tomography reconstruction is generally performed in two steps: 1) reconstruction of the so-called tomogram by vertical focusing, at each radar resolution cell, to extract the complex amplitudes (a 1-D processing) and 2) transformation from radar geometry to ground geometry and extraction of significant scatterers. We propose to perform the tomographic inversion directly in ground geometry in order to enforce spatial regularity in 3-D space. This inversion requires solving a large-scale nonconvex optimization problem. We describe an iterative method based on variable splitting and the augmented Lagrangian technique. Spatial regularizations can easily be included in this generic scheme. We illustrate, on simulated data and a TerraSAR-X tomographic data set, the potential of this approach to produce 3-D reconstructions of urban surfaces.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
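The variable-splitting and augmented-Lagrangian scheme mentioned above is, in spirit, an ADMM iteration: a quadratic data-fit subproblem, a proximal (regularization) subproblem, and a dual update. A minimal real-valued sketch on a toy sparse inversion (the actual tomographic operator, complex reflectivities, ground-geometry transform, and 3-D spatial regularizers are omitted; the operator, sizes, and penalty values here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inversion: y = A x with x sparse along "height" -- a stand-in for
# recovering a few dominant scatterers per ground-geometry cell.
m, n = 20, 40
A = rng.normal(0, 1, (m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 23]] = [1.0, -0.7]      # two scatterers (e.g., ground + roof)
y = A @ x_true

# ADMM with variable splitting: min ||Ax - y||^2 + lam*||z||_1  s.t. x = z.
lam, rho = 0.01, 1.0
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
AtA, Aty = A.T @ A, A.T @ y
M = np.linalg.inv(2 * AtA + rho * np.eye(n))   # factor once, reuse each iter
for _ in range(300):
    x = M @ (2 * Aty + rho * (z - u))                              # data-fit step
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft-threshold
    u = u + x - z                                                  # dual update
```

In this generic scheme, swapping the soft-threshold step for another proximal operator is how different spatial regularizations can be plugged in, which is the flexibility the abstract highlights.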
  • Short-Term Prediction of Electricity Outages Caused by Convective Storms
    • Authors: Roope Tervo;Joonas Karjalainen;Alexander Jung;
      Pages: 8618 - 8626
      Abstract: Prediction of power outages caused by convective storms, which are highly localized in space and time, is of crucial importance to power grid operators. We propose a new machine learning approach to predict the damage caused by such storms. The approach hinges on identifying and tracking storm cells in weather radar images and applying machine learning techniques to them. The overall prediction process consists of identifying storm cells from CAPPI weather radar images by contouring them with a solid 35-dBZ threshold, predicting the track of each storm cell, and classifying the cells based on their damage potential to the power grid. Tracked storm cells are classified by combining data obtained from weather radar, ground weather observations, and lightning detectors. We compare random forest classifiers and deep neural networks as alternative methods to classify storm cells. The main challenge is that the training data are heavily imbalanced, as extreme weather events are rare.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
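The cell-identification step above, contouring CAPPI reflectivity with a solid 35-dBZ threshold, can be sketched as thresholding followed by connected-component labeling; the reflectivity field and the per-cell attributes below are invented for illustration and are only a minimal stand-in for the paper's feature set:

```python
import numpy as np
from scipy import ndimage

# Toy CAPPI reflectivity field (dBZ) with two synthetic "storm cells".
refl = np.full((50, 50), 10.0)
refl[10:15, 10:16] = 45.0   # cell 1
refl[30:38, 25:30] = 40.0   # cell 2

# Contour storm cells with a solid 35-dBZ threshold, then label the
# connected components as individual cells.
mask = refl >= 35.0
labels, n_cells = ndimage.label(mask)

# Per-cell attributes a downstream classifier could consume:
# area in pixels and maximum reflectivity of each labeled cell.
idx = range(1, n_cells + 1)
areas = ndimage.sum(mask, labels, index=idx)
max_dbz = ndimage.maximum(refl, labels, index=idx)
```

Tracking then reduces to matching labeled cells between consecutive radar frames, and the per-cell attributes feed the random forest or neural network classifiers compared in the paper.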
  • A Fast Time-Domain SAR Imaging and Corresponding Autofocus Method Based on
           Hybrid Coordinate System
    • Authors: Yi Liang;Guofei Li;Jun Wen;Gang Zhang;Yanfeng Dang;Mengdao Xing;
      Pages: 8627 - 8640
      Abstract: Compared with frequency-domain algorithms, time-domain algorithms (TDAs) can achieve image focusing under the conditions of arbitrary trajectories and large integration angles. However, fast TDAs require interpolations in both the range and bearing-angle directions during coordinate transformation, which inevitably increases the computational load and introduces interpolation errors. In this paper, a fast time-domain imaging and corresponding autofocus method based on the hybrid coordinate (HC) system is proposed. First, the interpolation operation in fast TDAs is optimized: the 2-D interpolation is transformed into a 1-D interpolation in the bearing-angle direction only, which improves the execution efficiency and reduces the interpolation error. Next, a 3-D trajectory deviation estimation method based on Gauss–Newton iteration is investigated for motion compensation in the HC system. By iterative optimization, the 3-D motion errors during the flight are estimated accurately, and the space-variant phase error is compensated precisely. This method has good robustness and universality. Simulation results and real data processing demonstrate the effectiveness and practicability of the presented method.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
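The Gauss–Newton estimation of a 3-D trajectory deviation can be illustrated on a toy problem: recovering a constant antenna-position error from range residuals to known reference points. The geometry and values are invented, and the paper's space-variant phase-error model is not reproduced; this only shows the iteration itself (residual, Jacobian of the range equation, normal-equations step):

```python
import numpy as np

# Known reference scatterers and the nominal antenna position (invented).
anchors = np.array([[0., 0., 0.], [100., 0., 0.], [0., 100., 0.], [0., 0., 100.]])
dev_true = np.array([1.5, -2.0, 0.8])     # unknown 3-D trajectory deviation
pos_nominal = np.array([50., 50., 500.])

def ranges(dev):
    """Slant ranges from the (deviated) antenna position to the anchors."""
    return np.linalg.norm(anchors - (pos_nominal + dev), axis=1)

r_meas = ranges(dev_true)                 # "measured" ranges (noise-free toy)

dev = np.zeros(3)                         # initial guess: no deviation
for _ in range(20):
    res = ranges(dev) - r_meas
    # Jacobian of range w.r.t. deviation: unit line-of-sight vectors.
    p = pos_nominal + dev
    J = (p - anchors) / np.linalg.norm(anchors - p, axis=1)[:, None]
    dev -= np.linalg.solve(J.T @ J, J.T @ res)   # Gauss-Newton step
```

For this zero-residual problem the iteration converges to the true deviation in a handful of steps; in the paper, the same mechanism drives the estimate that feeds the space-variant phase compensation.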
  • A Polarimetric Coherence Method to Determine Ice Crystal Orientation
           Fabric From Radar Sounding: Application to the NEEM Ice Core Region
    • Authors: Thomas M. Jordan;Dustin M. Schroeder;Davide Castelletti;Jilu Li;Jørgen Dall;
      Pages: 8641 - 8657
      Abstract: Ice crystal orientation fabric (COF) records information about past ice-sheet deformation and influences the present-day flow of ice. Polarimetric radar sounding provides a means to infer anisotropic COF patterns due to the associated birefringence of polar ice. Here, we develop a polarimetric coherence (phase-based) method to determine horizontal properties of the COF. The method utilizes the azimuth and depth dependence of the vertical gradient of the hhvv coherence phase to infer the dielectric principal axes and birefringence, which are then related to the second-order fabric orientation tensor. Specifically, under the assumption that one of the orientational eigenvectors is vertical, we can determine the horizontal eigenvectors and the difference between the horizontal eigenvalues (a measure of horizontal fabric asymmetry). The method exploits single-polarized data acquired with varying antenna orientation. It applies to ground-based “multi-polarization” surveys and is demonstrated using data acquired by Center for Remote Sensing of Ice Sheets (CReSIS) using Multi-Channel Coherent Radar Depth Sounder (MCRDS) from the North Greenland Eemian Ice Drilling (NEEM) ice core region in Greenland. The analysis is validated using a combination of polarimetric matrix backscatter simulations and comparison with COF data from the NEEM ice core. The results are consistent with a conventional model of ice deformation at an ice divide where a lateral tension component is present, with minor horizontal COF asymmetry and the greatest horizontal concentration of crystallographic axes orientated near parallel to the ice divide.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Surface Roughness-Induced Spectral Degradation of Multi-Spaceborne Solar
           Diffusers Due to Space Radiation Exposure
    • Authors: Xi Shao;Tung-Chang Liu;Xiaoxiong Xiong;Changyong Cao;Taeyoung Choi;Amit Angal;
      Pages: 8658 - 8671
      Abstract: Solar diffusers (SDs) have often been used as the onboard calibrators for the radiometric calibration of reflective solar band imaging sensors. After being spaceborne, the reflectance of SDs is observed to degrade with spectral dependence due to exposure to solar UV and energetic particle radiation. Long-term spectral reflectance data of SDs onboard multiple LEO imaging sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) on SNPP, are analyzed. The reflectance of SDs on these three instruments degrades faster for the shorter wavelength (0.4–0.6 µm) bands than the longer wavelength bands. The Surface Roughness-induced Rayleigh Scattering (SRRS) model is applied to simulate the SD degradation on these instruments, and the growth of the surface roughness parameter of the SDs is derived. It is determined that the change of surface roughness scale length is ~tens of nanometers. To show the consistency of roughness growth rates among the SDs on Terra/Aqua MODIS and SNPP VIIRS instruments, the functional dependences of the growth rates are characterized according to the SD exposure time and the stage of surface roughness. It is also found that the flattening or reverse in the growth trend of the surface roughness for these three SDs occurred around the same interval between October 2013 and October 2015. The confirmation of the applicability of the SRRS model with the long-term spectral reflectance data from three independent spaceborne SDs facilitates a better understanding of the origin and physical processes of the SD degradation.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Remote Sensing of Sea Ice Thickness and Salinity With 0.5–2 GHz
           Microwave Radiometry
    • Authors: Kenneth C. Jezek;Joel T. Johnson;Oguz Demir;Mark J. Andrews;Giovanni Macelloni;Marco Brogioni;Marion Leduc-Leballeur;Shurun Tan;Leung Tsang;Ronald Kwok;Lars Kaleschke;Domenic J. Belgiovane;Chi-Chih Chen;Alexandra Bringer;
      Pages: 8672 - 8684
      Abstract: An ultrawideband radiometer was used to measure microwave brightness temperature spectra over Arctic sea ice in the Lincoln Sea near the north coast of Greenland. Spectra over the range of 0.5–2 GHz were compared to thermal infrared images collected during the airborne campaign and also to nearly concurrent Sentinel-1 C-band synthetic aperture radar (SAR) data. Based on those comparisons, spectral signatures were associated with thick multiyear ice and thin ice. A radiative transfer (RT) model consisting of a homogeneous slab of sea ice bounded by sea water and air was then used to invert the spectra for sea ice thickness and salinity. Inferred thicknesses were consistent with ice thickness climatology for ice floes in the Lincoln Sea. Salinities are higher than expected, which may be a consequence of neglecting surface and volume scattering contributions in the models.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Least-Squares Gaussian Beam Transform for Seismic Noise Attenuation
    • Authors: Min Bai;Juan Wu;Mi Zhang;Yangkang Chen;
      Pages: 8685 - 8694
      Abstract: We propose a novel seismic noise attenuation approach based on the least-squares Gaussian beam transform (LSGBT). The Gaussian beam transform uses time-domain Gaussian beams (TGBs), each characterized by a particular location, arrival time, amplitude, slope, curvature, and width. We implement the local attributes such as beam center, spacing, and width to perform Gaussian beam decomposition. In this approach, we first introduce the plane-wave decomposition (PWD) theory to implement TGB decomposition of noisy seismic data and then apply data reconstruction. Unlike most state-of-the-art algorithms, random noise is attenuated in the process of Gaussian beam reconstruction. In the reconstructed records, the useful events are well preserved while random noise is removed. Comparisons of experimental results on field data using traditional $f$-$x$ deconvolution (FX Decon) and median filter (MF) methods are also provided, which suggest that our method achieves better denoising performance than the FX Decon and MF methods. Since signal loss is sometimes unavoidable in almost all existing denoising methods, in addition to the signal-to-noise ratio (SNR) measurement, we also use local similarity as an efficient tool to evaluate denoising performance. A group of synthetic and field examples demonstrates the effectiveness of the proposed approach.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • An Optimized Choice of UCPML to Truncate Lattices With Rotated Staggered
           Grid Scheme for Ground Penetrating Radar Simulation
    • Authors: Bin Zhang;Qianwei Dai;Xiaobo Yin;Zhiwei Li;Deshan Feng;Ya Sun;Xun Wang;
      Pages: 8695 - 8706
      Abstract: Efficient and accurate simulation of ground penetrating radar (GPR) in the open region helps immensely in both grasping the features of echoes and facilitating the interpretation of real GPR data. Due to the limitations of the computer model, however, the strong artificial boundary reflections, especially the low-frequency propagating waves encountered at the late stage of simulation, greatly affect the simulation accuracy of GPR. This paper presents an innovative optimized unsplit-field convolutional perfectly matched layer (UCPML) based on the rotated staggered grid (RSG) scheme to truncate the finite-difference time-domain (FDTD) lattices. Rather than obeying the sharp variation of an $m$th-order polynomial, the optimized approach employs a novel optimized term and an adjustment factor to seek a gentle variation in the optimal constitutive coefficients. This guarantees that the determination of the optimal constitutive coefficients is less influenced by the order of the polynomial and, especially, improves the absorptive performance on low-frequency propagating waves. The calculating efficiency and accuracy of the RSG-FDTD scheme, as well as the absorbing performance of the optimized UCPML, are verified by two numerical examples. In particular, an analysis of the amplitude-frequency features of low-frequency clutters at the steady state of the electromagnetic (EM) field and the corresponding global reflection error in the time–frequency domain is also presented.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Improved Ocean Surface Velocity Precision Using Multi-Channel SAR
    • Authors: Mark A. Sletten;Jakov V. Toporkov;
      Pages: 8707 - 8718
      Abstract: This paper investigates new approaches to estimating the motion of the dynamic ocean surface using a multi-channel synthetic aperture radar (MSAR) with $M$ phase centers arranged in an along-track configuration. The objective of this paper is to determine the processing methods that produce the finest velocity resolution, an issue that arises due to the finite coherence time of radar backscatter produced by the sea surface. The investigation is carried out both theoretically, using synthesized data produced from a modeled MSAR covariance matrix, as well as experimentally, using images collected with a 16-channel system. Three processing methods are considered: linear regression along with a multi-baseline phase progression, estimation of the velocity centroid, and coherent averaging of the shortest baseline interferograms. Both the theoretical and experimental results indicate that simple averaging of the shortest-baseline interferograms often produces the best velocity precision.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
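The best-performing method above, coherent averaging of the shortest-baseline interferograms, reduces per pixel to averaging the products of adjacent-channel signals and reading the velocity off the mean interferometric phase. A toy 1-D sketch (the phase-to-velocity scale factor, noise levels, and channel count here are assumptions, not the experimental system's parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy along-track InSAR: M phase centers observe the same scene with a
# surface-velocity-induced phase progression plus decorrelation noise.
M, npix = 16, 2000
k_v = 0.4                 # rad per (m/s) per channel spacing (assumed scale)
v_true = 1.2              # m/s
phase_step = k_v * v_true

signals = np.empty((M, npix), dtype=complex)
base = np.exp(1j * rng.uniform(0, 2 * np.pi, npix))   # random scene phase
for m in range(M):
    noise = rng.normal(0, 0.3, npix) + 1j * rng.normal(0, 0.3, npix)
    signals[m] = base * np.exp(1j * m * phase_step) + noise

# Coherently average the shortest-baseline (adjacent-channel) interferograms
# over channels and pixels, then read the velocity off the mean phase.
interf = signals[1:] * np.conj(signals[:-1])
v_est = np.angle(interf.mean()) / k_v
```

Averaging the complex interferograms before taking the phase (rather than averaging phases) is what makes the estimate robust to low-coherence pixels, consistent with the finding that this simple scheme often gives the best velocity precision.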
  • Can We Track Targets From Space? A Hybrid Kernel Correlation Filter
           Tracker for Satellite Video
    • Authors: Jia Shao;Bo Du;Chen Wu;Lefei Zhang;
      Pages: 8719 - 8731
      Abstract: Despite the great success of correlation filter-based trackers in visual tracking, it is questionable whether they can still perform well on satellite video data, acquired by a satellite or space station very high above the earth. The difficulty lies in that the targets usually occupy only a few pixels compared with an image size of over one million pixels and almost melt into a similar background. Since correlation filter models strongly depend on the quality of features and the spatial layout of the tracked object, they would probably fail on satellite video tracking tasks. In this paper, we propose a hybrid kernel correlation filter (HKCF) tracker employing two complementary features adaptively in a ridge regression framework. One feature is the optical flow, which can detect variation pixels of the target. The other is the histogram of oriented gradients, which can capture the contour and texture information of the target, and an adaptive fusion strategy is proposed to employ the strengths of both features in different satellite videos. Quantitative evaluations are performed on six real satellite video data sets. The results show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames/s.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
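The ridge-regression core of a correlation-filter tracker has a per-frequency closed form in the Fourier domain, which is what makes such trackers fast enough for 100+ frames/s. A 1-D linear-kernel sketch of that core (the HOG/optical-flow features, the kernel trick, and the paper's adaptive fusion strategy are omitted; the template, desired response, and regularizer are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Correlation-filter training: ridge regression over all cyclic shifts of a
# template, solved independently per frequency in the Fourier domain.
n = 64
x = rng.normal(0, 1, n)        # 1-D "template" (an image row analogue)
y = np.zeros(n); y[0] = 1.0    # desired response: peak at zero shift

X, Y = np.fft.fft(x), np.fft.fft(y)
lam = 1e-2                                     # ridge regularizer (assumed)
W = np.conj(X) * Y / (np.abs(X) ** 2 + lam)    # per-frequency closed form

def respond(z):
    """Correlation response of the learned filter on a search signal z."""
    return np.real(np.fft.ifft(np.fft.fft(z) * W))

# The response to a cyclically shifted copy of the template peaks at the shift,
# which is how the tracker localizes the target in the next frame.
shift = 10
resp = respond(np.roll(x, shift))
peak = int(np.argmax(resp))
```

In the HKCF setting, two such filters (one per feature) would produce two response maps, and the adaptive fusion strategy weights them before taking the peak.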
  • Unsupervised Classification of Multispectral Images Embedded With a
           Segmentation of Panchromatic Images Using Localized Clusters
    • Authors: Ting Mao;Hong Tang;Wei Huang;
      Pages: 8732 - 8744
      Abstract: There are many approaches to fusing panchromatic (PAN) and multispectral (MS) images for classification, mainly including sharpening-then-classification, classification-then-sharpening, and segmentation-then-classification methods. The generalized Chinese restaurant franchise (gCRF) is a segmentation-then-classification-like method for fusing very high resolution (VHR) PAN and MS images for classification, and it shares the limitation of general segmentation-then-classification methods that segmentation errors propagate into the subsequent classification. Specifically, during the segmentation step of the gCRF, spatial coherence in the image plane is deficient and global clusters without spatial position information are used for segmentation, which may lead to undersegmented and disconnected regions in the segmentation results and decrease classification accuracy. In this paper, we propose an improved model that overcomes these problems during the segmentation step and increases the classification accuracy in the following two ways: 1) building spatial coherence in the image plane by introducing neighborhood information of superpixels to construct the subimages and 2) using localized clusters with spatial location information, instead of global clusters, to measure the similarity between superpixels and segments. The experimental results show that the problems of undersegmentation and disconnected segments are both alleviated, resulting in better classification results in both visual and quantitative terms.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Technical Framework for Shallow-Water Bathymetry With High Reliability and
           No Missing Data Based on Time-Series Sentinel-2 Images
    • Authors: Sensen Chu;Liang Cheng;Xiaoguang Ruan;Qizhi Zhuang;Xiao Zhou;Manchun Li;Yongzhong Shi;
      Pages: 8745 - 8763
      Abstract: Shallow-water bathymetry based on multispectral satellite imagery (MSI) is an important technology for depth measurement, but it is difficult to obtain a bathymetric map with high reliability and no missing data because of the ubiquitous image noise. Here, we propose a time-series-based bathymetry framework (TSBF). First, a pixel-level time series is constructed using remote sensing images collected at multiple points in time. Then, a new time-domain denoising method, the maximum outlier removal method, is used to create an optimal image from this time series. Finally, bathymetric inversion is performed on this optimal image to obtain a bathymetric map. Anda Reef and northeastern Jiuzhang Atoll, which have complex noise features, were selected as test cases to validate the proposed framework. Results show that the proposed TSBF can obtain bathymetric maps with high accuracy and reliability and no missing data, outperforming the conventional bathymetry framework based on a single image.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
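The time-domain denoising idea, screening each pixel's time series for noisy dates before compositing an optimal image, can be sketched with a generic median/MAD outlier rule. This is a stand-in under stated assumptions, not the paper's maximum outlier removal method, and all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Pixel-level time series from 8 acquisition dates over a 32x32 tile.
# One date carries a bright noisy patch (a cloud/sun-glint analogue).
clean = np.full((8, 32, 32), 0.10)            # "true" shallow-water signal
series = clean + rng.normal(0, 0.005, clean.shape)
series[2, 5:15, 5:15] += 0.4                  # contaminated patch on date 2

# Per-pixel temporal screening: flag dates deviating strongly from the
# temporal median (MAD scaled to a normal-equivalent sigma), then composite.
med = np.median(series, axis=0)
mad = np.median(np.abs(series - med), axis=0)
outlier = np.abs(series - med) > 5 * 1.4826 * mad
masked = np.where(outlier, np.nan, series)
optimal = np.nanmean(masked, axis=0)          # gap-free composite image
```

Because every pixel retains the majority of its dates, the composite has no missing data, which is the property the TSBF targets before bathymetric inversion.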
  • Improving Object Imaging With Sea Glinted Background Using Polarization
           Method: Analysis and Operator Survey
    • Authors: Roy Avrahamy;Benny Milgrom;Moshe Zohar;Mark Auslender;Shlomo Hava;
      Pages: 8764 - 8774
      Abstract: When observing sea water, a specular reflection of a light source may appear in the form of bright points of light that come and go. These bright points of light, called glints, blend together to form a smooth path of glittering light when viewed from a distance. For detection and observation systems, glints may produce severe saturation in different parts of the image, generating blinding glares and increased fatigue for the observer, which causes hardships in marine remote sensing and target detection. In our work, we have advanced the state-of-the-art analysis of the polarization-based approach to glint reduction and target imaging for a modern remote sensing system by adding external linear polarizers to an observation system on the Red Sea shore. The results of our experiments are presented, analyzed, and discussed, qualitatively and quantitatively, using image processing tools. We performed: 1) an analysis of the RGB histograms of the overall image, the sea and the background; 2) an auto segmentation using the MATLAB image processing toolbox on colored and grayscale images; and 3) saturated frame pixel analysis. An operator survey was added to validate the proposed method. The results show that a polarizer at the optimal angle can help reduce the glints, and, as a result, leads to image enhancement for oceanic applications in general, and for oceanic detection and remote sensing systems in particular.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • A Multi-Level Semantic Scene Interpretation Strategy for Change
           Interpretation in Remote Sensing Imagery
    • Authors: Fethi Ghazouani;Imed Riadh Farah;Basel Solaiman;
      Pages: 8775 - 8795
      Abstract: Remotely sensed images represent an important source of information for monitoring land changes. There is, therefore, a need to analyze and interpret such information in order to extract useful semantic change interpretations. However, extracting such semantics from satellite images is a complex task that requires prior and contextual knowledge. In this paper, we focus on the issue of semantic scene interpretation for change interpretation. Consequently, a strategy for semantic remote-sensing imagery scene interpretation is proposed. This strategy is based on a representative framework that is structured around several levels of interpretation: the pixel level, the visual primitive level, the object level, the scene level, and the change interpretation level. Each level integrates a logical mechanism to extract useful knowledge for interpretation. The proposed model has been evaluated using two Landsat scene images acquired in 2000 [Landsat Enhanced Thematic Mapper plus (ETM+)] and 2017 (Landsat 8) in order to check its relevance for semantic scene and change interpretation. Precision, recall, and F-measure metrics were used to show the capacity of the proposed methodology for semantic classification. A visual evaluation was also performed to assess the presented interpretation strategy, and the query results for each level show a promising capability for semantic object classification, extraction of spatial and temporal relations, and change interpretation.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • CNN-Based Polarimetric Decomposition Feature Selection for PolSAR Image
    • Authors: Chen Yang;Biao Hou;Bo Ren;Yue Hu;Licheng Jiao;
      Pages: 8796 - 8812
      Abstract: In order to better interpret polarimetric synthetic aperture radar (PolSAR) images, many scholars perform target decomposition on PolSAR images and utilize the obtained features for subsequent classification. These target decomposition features play an important role in terrain classification, but utilizing all of them incurs a high computational complexity. Furthermore, some features have a negative impact on the classification task. Therefore, selecting an appropriate number of high-quality features is of great significance to the classification task. In this paper, we propose a convolutional neural network (CNN)-based feature selection algorithm for PolSAR image classification. First, we design a 1-D CNN for feature selection and train the designed network with all the decomposition features to obtain a trained model. Second, the Kullback–Leibler distance (KLD) between different features is utilized as the standard for selecting feature subsets. Third, the feature subsets with excellent performance form the final results. Due to the special structure of the 1-D CNN, repeated retraining of the model is avoided when the input changes. Different from traditional feature selection methods, our method considers the performance of feature combinations rather than the contributions of individual features. To this end, the feature subsets selected by the proposed method are more useful to the classification task. Innovatively introducing the KLD in the selection stage avoids random selection and improves the selection efficiency. Finally, we validate the performance of the selected feature subsets in traditional and deep learning classification frameworks. Experiments demonstrate that the features selected by the proposed method perform well compared with others on three real PolSAR data sets.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
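The KLD selection standard can be illustrated by computing the Kullback–Leibler distance between histograms of candidate features: a near-redundant pair has small pairwise KLD, a complementary pair a large one. A toy sketch (the features, bin settings, and smoothing are invented; the paper's exact KLD computation over CNN inputs is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)

def kld(p, q, eps=1e-12):
    """Discrete Kullback-Leibler distance between two histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def feature_hist(f, bins=32, lo=-5.0, hi=5.0):
    """Binned distribution of a feature, with Laplace smoothing."""
    h, _ = np.histogram(f, bins=bins, range=(lo, hi))
    return h.astype(float) + 1.0

# Three toy decomposition features over the same pixels:
# f0 and f1 are nearly redundant; f2 carries different information.
f0 = rng.normal(0, 1, 5000)
f1 = rng.normal(0, 1, 5000)
f2 = rng.normal(2, 0.5, 5000)

d01 = kld(feature_hist(f0), feature_hist(f1))   # small: redundant pair
d02 = kld(feature_hist(f0), feature_hist(f2))   # large: complementary pair
```

A selection rule built on this quantity can drop one member of a small-KLD pair and keep features whose pairwise KLD is large, avoiding the random subset enumeration the abstract mentions.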
  • 3-D Gaussian–Gabor Feature Extraction and Selection for
           Hyperspectral Imagery Classification
    • Authors: Sen Jia;Jiayue Zhuang;Lin Deng;Jiasong Zhu;Meng Xu;Jun Zhou;Xiuping Jia;
      Pages: 8813 - 8826
      Abstract: Hyperspectral remote sensing imagery provides valuable and rich information to distinguish the characteristics of materials. However, this advantage of hyperspectral imagery often encounters the problem of a limited amount of training samples, which is caused by the difficulty of manually labeling. Fortunately, the spatial distribution of surface objects can be integrated with the spectral signature to improve the discriminative ability. In this paper, a 3-D Gaussian–Gabor feature extraction and selection framework has been proposed for hyperspectral image classification. First, a bank of 3-D Gaussian–Gabor filters are convolved with the concatenated data of both extended multi-attribute profile (EMAP) features and raw hyperspectral data. Second, an improved fast density peak clustering (IFDPC) method is introduced to select the most representative features from each extracted 3-D Gaussian–Gabor feature cube. Finally, the retained features are combined together to accomplish the classification task. The proposed method is thus named as GG-IFDPC. Three real hyperspectral imagery data sets have been utilized, and the experiments demonstrate the advantages of the proposed GG-IFDPC approach over the compared ones.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Intercomparisons of Cloud Mask Products Among Fengyun-4A, Himawari-8, and
    • Authors: Xi Wang;Min Min;Fu Wang;Jianping Guo;Bo Li;Shihao Tang;
      Pages: 8827 - 8839
      Abstract: In this paper, we developed a unified and operational cloud mask algorithm for the new-generation geostationary (GEO) meteorological satellite imagers: the Advanced Geostationary Radiation Imager (AGRI) aboard Fengyun-4A (FY-4A) and the Advanced Himawari Imager (AHI) aboard Himawari-8 (H08). We investigated the all-round performance of the cloud mask algorithm. Its output was spatiotemporally matched against the official Collection-6 cloud mask products of the Moderate Resolution Imaging Spectroradiometer (MODIS) from both the Terra and Aqua platforms, which we employed as the benchmark for the intercomparisons and validations. The robust cloud mask algorithm shows high consistency between FY-4A/AGRI and H08/AHI. The MODIS-based validation results suggest that cloudy scenes are identified better than clear skies for both FY-4A/AGRI and H08/AHI; there is also a relatively low false-alarm ratio (FAR). Moreover, the algorithm is more reliable during daytime hours, with a hit rate (HR) of approximately 92% for both FY-4A/AGRI and H08/AHI. We found slightly higher accuracy in cloud-masking results over water than over land. Furthermore, more than 67% of the matched pixels for both advanced GEO imagers had no bias when taking MODIS as the benchmark. Overall, HR values were approximately 91.04% and 91.82% for FY-4A/AGRI and H08/AHI, respectively. These results confirm the high quality of the algorithm for retrieving real-time cloud mask products.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
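The hit rate (HR) and false-alarm ratio (FAR) scores quoted above come from a pixel-level confusion table against the MODIS benchmark. A minimal sketch, assuming the common verification definitions (HR as the fraction of pixels where the two masks agree, FAR as false alarms over all positive detections); the two toy masks are invented:

```python
import numpy as np

# Toy pixel-level comparison of a GEO cloud mask against a MODIS benchmark
# (1 = cloudy, 0 = clear), after spatiotemporal matching.
modis = np.array([1, 1, 1, 0, 0, 1, 0, 1, 1, 0])
geo   = np.array([1, 1, 0, 0, 1, 1, 0, 1, 1, 0])

hits         = int(np.sum((geo == 1) & (modis == 1)))   # both cloudy
misses       = int(np.sum((geo == 0) & (modis == 1)))   # cloud missed
false_alarms = int(np.sum((geo == 1) & (modis == 0)))   # cloud falsely flagged

HR  = np.sum(geo == modis) / geo.size          # hit rate: overall agreement
FAR = false_alarms / (hits + false_alarms)     # false-alarm ratio
```

On this toy pair the masks agree on 8 of 10 pixels, so HR = 0.8 with one false alarm among six positive detections; exact definitions of HR/FAR vary between studies, so the formulas here are one common convention, not necessarily the paper's.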
  • A Contextual and Multitemporal Active-Fire Detection Algorithm Based on
           FengYun-2G S-VISSR Data
    • Authors: Zhengyang Lin;Fang Chen;Bin Li;Bo Yu;Huicong Jia;Meimei Zhang;Dong Liang;
      Pages: 8840 - 8852
      Abstract: Wildfires are one of the most destructive disasters on the planet, and they significantly impact the land surface. Satellite data have been widely used to detect the outbreak and monitor the expansion of fire incidents for damage assessment and disaster management. Polar-orbiting satellite data have been used for several decades, but data from geostationary satellites, which can provide observations with a high temporal resolution, have received much less attention. This paper utilizes data from FengYun-2G, a Chinese geostationary satellite, to detect wildfires in two selected research regions in January 2016. The detection algorithm systemizes image-based analysis to filter out obvious nonfire pixels and temporal analysis to confirm the true detections. Fire detection is based on comparisons between predicted and observed values. The results show that the proposed method has some advantages compared with the use of polar-orbiting satellite data, including early detection and continuous observation. The validation is conducted against the Collection 6.1 Global Monthly Fire Location Product generated from fire detections by Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. The average accuracy within the target time is 56%, while the omission error rate is over 78%. In detail, the algorithm has a lower omission error rate in Australia, while it fails to detect most of the fire pixels in India. The dominance of small fire incidents, as well as the low spatial resolution, greatly limits the detection ability. Many small fires were beyond the ability of Stretched Visible and Infrared Spin Scan Radiometer (S-VISSR) data when no significant fire characteristics could be captured. Future development of the algorithm will focus on improving the results by enhancing the adaptation to different regions, as well as including multisource data sets.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Improving Forest Height Retrieval by Reducing the Ambiguity of Volume-Only
           Coherence Using Multi-Baseline PolInSAR Data
    • Authors: Zhanmang Liao;Binbin He;Xiaojing Bai;Xingwen Quan;
      Pages: 8853 - 8866
      Abstract: Nonvolume decorrelation (γ_Nonvol), together with the unknown ground contribution, introduces a 2-D ambiguity into volume-only coherence, making the inversion underdetermined even when multiple baselines are available. In the context of the random volume over ground (RVoG) model and the three-stage algorithm, this paper theoretically presents the varied response of different baselines to both γ_Nonvol and the ground contribution, and then proposes a new multi-baseline inversion method to reduce the 2-D ambiguity. The proposed method includes two steps, calculating the common overlapped ambiguity from different baselines and fixing the extinction coefficient, to more accurately retrieve the volume-only coherence and forest height. It makes no assumptions about γ_Nonvol or the ground contribution. The method was validated and compared with three single-baseline inversions and two published multi-baseline inversions using airborne P-band polarimetric SAR interferometry (PolInSAR) data and a LiDAR canopy height model (CHM) as reference over a dense rainforest site. Results showed that the developed multi-baseline method successfully reduced the combined influence of γ_Nonvol and the ground contribution and performed better than any single baseline, improving R² from 0.60 to 0.77 and the unbiased root-mean-square error (RMSE) from 1.32 to 1.04 m at a scale of ca. 100 × 140 m². Moreover, the multi-baseline scheme is relatively robust among different baseline combinations.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Learn to Detect: Improving the Accuracy of Earthquake Detection
    • Authors: Tai-Lin Chin;Chin-Ya Huang;Shan-Hsiang Shen;You-Cheng Tsai;Yu Hen Hu;Yih-Min Wu;
      Pages: 8867 - 8878
      Abstract: Earthquake early warning systems use high-speed computer networks to transmit earthquake information to population centers ahead of the arrival of destructive earthquake waves. This short lead time (tens of seconds) allows emergency responses, such as turning off gas pipeline valves, to be activated to mitigate potential disasters and casualties. However, the excessive false alarm rate of such a system imposes heavy costs in terms of loss of services, undue panic, and the diminishing credibility of the warning system. At present, the decision algorithm used to issue an early warning of the onset of an earthquake is often based on empirically chosen features and heuristically set thresholds, and it suffers from an excessive false alarm rate. In this paper, we experimented with three advanced machine learning algorithms, namely, K-nearest neighbor (KNN), classification tree, and support vector machine (SVM), and compared their performance against a traditional criterion-based method. Using the seismic data collected by an experimental strong motion detection network in Taiwan for these experiments, we observed that the machine learning algorithms exhibit higher detection accuracy with a much reduced false alarm rate.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
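As a rough illustration of the comparison described in the abstract above, the sketch below trains a KNN, a classification tree, and an SVM on synthetic two-feature data and scores them against a fixed-threshold criterion. The two features (peak amplitude and predominant period) and all data are illustrative assumptions, not the paper's actual feature set or seismic records.

```python
# Minimal sketch of learned classifiers vs. a criterion-based baseline.
# Features and data are synthetic stand-ins, not the paper's dataset.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Synthetic "earthquake" windows: larger amplitude, longer predominant period.
quakes = rng.normal(loc=[2.0, 1.0], scale=0.6, size=(n, 2))
noise = rng.normal(loc=[0.5, 0.3], scale=0.6, size=(n, 2))
X = np.vstack([quakes, noise])
y = np.r_[np.ones(n), np.zeros(n)]
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Criterion-based baseline: a fixed amplitude threshold.
baseline_acc = ((Xte[:, 0] > 1.25) == yte).mean()

for clf in (KNeighborsClassifier(5), DecisionTreeClassifier(max_depth=4), SVC()):
    clf.fit(Xtr, ytr)
    print(type(clf).__name__, round(clf.score(Xte, yte), 3))
print("threshold baseline", round(baseline_acc, 3))
```

On well-separated synthetic data both approaches do well; the paper's point is the difference on real seismic records, which this toy comparison cannot show.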
  • Method for 3-D Scene Reconstruction Using Fused LiDAR and Imagery From a
           Texel Camera
    • Authors: Taylor C. Bybee;Scott E. Budge;
      Pages: 8879 - 8889
      Abstract: Reconstructing a 3-D scene from aerial sensor data to create a textured digital surface model (TDSM), consisting of a LiDAR point cloud and an overlaid image, is valuable in many applications, including agriculture, military, surveying, and natural disaster response. When collecting LiDAR from an aircraft, the navigation system accuracy must exceed the LiDAR accuracy to properly reference returns in 3-D space. Precision navigation systems can be expensive and often require full-scale aircraft to house them. Synchronizing the LiDAR sensor and a camera, using a texel camera calibration, provides additional information that reduces the need for precision navigation equipment. This paper describes a bundle adjustment technique for aerial texel images that allows relatively low-accuracy navigation systems to be used with low-cost LiDAR and camera data to form higher fidelity terrain models. The bundle adjustment objective function utilizes matching image points, measured LiDAR distances, and the texel camera calibration and does not require overlapping LiDAR scans or ground control points. The utility of this method is demonstrated using a simulated texel camera and unmanned aerial system (UAS) flight data created from aerial photographs and elevation data. A small UAS is chosen as the target vehicle due to its relatively inexpensive hardware and operating costs, illustrating the power of this method in accurately referencing the LiDAR and camera data. In the 3-D reconstruction, the 1-σ accuracy between LiDAR measurements across the scene is on the order of the digital camera pixel size.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Unsupervised Change Detection Based on a Unified Framework for Weighted
           Collaborative Representation With RDDL and Fuzzy Clustering
    • Authors: Gang Yang;Heng-Chao Li;Wei-Ye Wang;Wen Yang;William J. Emery;
      Pages: 8890 - 8903
      Abstract: In this paper, we propose a novel unsupervised change detection method for remote sensing (RS) images based on a unified framework for weighted collaborative representation (WCR) with robust deep dictionary learning (RDDL) and fuzzy clustering. Specifically, WCR is employed to collaboratively represent neighborhood features with lower computational complexity, for which the RDDL model is built to learn a more effective and representative overcomplete dictionary and to enhance robustness against noise and outliers. Meanwhile, in order to make the resulting collaborative coefficients more beneficial for clustering, the unified framework for WCR with RDDL and fuzzy clustering is designed. By doing so, our framework not only precludes the use of a third-party clustering algorithm but also achieves better detection performance. Subsequently, a spatial constraint is enforced on the membership matrix to yield an updated one that further improves the accuracy of change detection. Finally, a binary change mask (CM) is obtained by assigning the pixels to the changed and unchanged classes. Experiments are performed on five pairs of RS images, and the experimental results demonstrate the effectiveness of the proposed method.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
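The coding step underlying collaborative representation has a simple closed form, sketched below. This is plain, unweighted l2-regularized coding over a random dictionary; the paper's weighting scheme and its RDDL-learned dictionary are not reproduced here.

```python
# Hedged sketch of (unweighted) collaborative representation coding:
# a feature vector y is coded over a dictionary D with an l2 penalty,
# which admits the ridge-regression closed-form solution below.
import numpy as np

def collaborative_code(D, y, lam=0.1):
    """Solve min_a ||y - D a||^2 + lam ||a||^2 in closed form."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ y)

rng = np.random.default_rng(1)
D = rng.normal(size=(20, 8))          # 8 atoms, 20-dimensional features
a_true = rng.normal(size=8)
y = D @ a_true + 0.01 * rng.normal(size=20)

a = collaborative_code(D, y, lam=1e-3)
residual = np.linalg.norm(y - D @ a)  # small residual => well represented
print(residual)
```

The closed form is what gives collaborative representation its lower computational cost relative to sparse (l1) coding, which requires iterative solvers.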
  • Cyclic Shift Matrix—A New Tool for the Translation Matching Problem
    • Authors: Xiurui Geng;Weitun Yang;
      Pages: 8904 - 8913
      Abstract: For numerous applications in image registration, sub-pixel translation estimation is a fundamental task, and increasing attention has been given to methods based on image phase information. However, we have found that none of these methods is universal; in other words, for any one of these methods, we can always find image pairs that will not be well matched. In this paper, by introducing the cyclic shift matrix (CSM), we present a new model for the translation matching problem and derive a least squares solution for the model. In addition, by repeatedly applying the CSM to the matching image, an iterative CSM method is proposed to further improve the matching accuracy. Furthermore, we show that the traditional phase-based matching algorithms can only achieve an exact solution when there is a cyclic shift relationship between the images to be matched. The proposed method is evaluated using simulated and real images and demonstrates better accuracy and robustness than the state-of-the-art methods.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
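For reference, the classical phase-only matcher that the abstract critiques can be sketched in a few lines of NumPy. As the abstract notes, it is exact precisely when the two images are related by a cyclic shift, which this toy example constructs; the paper's CSM model is not reproduced here.

```python
# Classical phase correlation: estimate a translation from the phase of
# the cross-power spectrum. Exact only for cyclic (wrap-around) shifts.
import numpy as np

def phase_correlation(ref, moved):
    """Estimate the integer cyclic shift s such that moved == np.roll(ref, s)."""
    F = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    F /= np.abs(F) + 1e-12              # discard magnitude, keep phase only
    corr = np.fft.ifft2(F).real
    return tuple(int(i) for i in np.unravel_index(np.argmax(corr), corr.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
moved = np.roll(img, shift=(5, 11), axis=(0, 1))
print(phase_correlation(img, moved))    # -> (5, 11)
```

Replacing `np.roll` with a non-cyclic crop-and-shift is the easy way to see the failure mode the paper addresses: the phase-correlation peak degrades once the shift is no longer cyclic.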
  • Polar-Spatial Feature Fusion Learning With Variational
           Generative-Discriminative Network for PolSAR Classification
    • Authors: Zaidao Wen;Qian Wu;Zhunga Liu;Quan Pan;
      Pages: 8914 - 8927
      Abstract: Feature learning-based polarimetric synthetic aperture radar (PolSAR) classification models generally suffer from a deficiency of labeled pixels. In this paper, we propose a novel generative-discriminative network for PolSAR polar-spatial feature fusion learning and classification, which comprises a deep generative network and a discriminative network with their bottom layers shared. This architecture makes it possible to use both labeled and unlabeled pixels in a PolSAR image for model learning in a semisupervised way. Moreover, the proposed network imposes a Gaussian random field prior and a conditional random field posterior on the learned fusion features and the output label configuration, respectively. Without the need for complicated recurrent iterations, our network can still efficiently produce the structured fusion feature as well as a smoothed classification map by involving some auxiliary variables, and it is optimized via variational inference within an alternating direction method of multipliers iteration scheme. Extensive experiments on different benchmark PolSAR imageries demonstrate the effectiveness and superiority of the proposed network. Compared with other state-of-the-art algorithms for PolSAR feature learning and classification, our model achieves much better performance in terms of the visual quality of the label map and overall classification accuracy while requiring far fewer labeled pixels.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Evaluation of Red-Peak Algorithms for Chlorophyll Measurement in the Pearl
           River Estuary
    • Authors: Fenfen Liu;Shilin Tang;
      Pages: 8928 - 8936
      Abstract: An algorithm based on the red-peak envelope area (PEA) near 700 nm was evaluated using in situ data from nine cruises in the Pearl River estuary and compared with other algorithms that use the reflectance peak (RP) near 700 nm, including the fluorescence line height (FLH), maximum chlorophyll index (MCI), and MCI2 algorithms. Of all the algorithms, the PEA algorithm presented the most accurate performance (R² = 0.74, root-mean-square error (RMSE) = 0.12) and provided a more rational spatial distribution of phytoplankton blooms when both Sentinel-3 Ocean and Land Color Instrument (OLCI) and Hyperion data were used, because the PEA integrates information from both the moving peak and the asymmetric curve on each side of the peak, owing to the high correlation (R² = 0.7) between chlorophyll and the ratio of the peak area between the left and right halves. Moreover, compared with the other algorithms, the PEA algorithm developed using the Hyperion (higher spectral resolution) and OLCI band settings presented similar retrieval accuracies. These results demonstrate that the PEA algorithm is less dependent on the band settings, and that the OLCI spectral band settings from 650 to 750 nm are reasonable and can be used to detect phytoplankton blooms if the PEA algorithm is applied. The OLCI PEA algorithm was applied to determine the variations in phytoplankton blooms under the influence of strong precipitation events. The most obvious increases in chlorophyll concentration (from 20 to 30 mg m⁻³) were observed in the middle river channel upstream of the Pearl River estuary after strong precipitation events.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
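A "peak envelope area" style index can be sketched as the area between the reflectance curve and a straight baseline drawn under the red peak. The band range, the baseline endpoints (665 and 754 nm), and the synthetic spectra below are assumptions for illustration, not the paper's exact definition.

```python
# Illustrative peak-envelope-area index around the ~700 nm red peak.
# Baseline endpoints and spectra are assumed, not the paper's values.
import numpy as np

def peak_envelope_area(wavelengths, reflectance, left=665.0, right=754.0):
    """Area between the reflectance curve and a straight under-peak baseline."""
    m = (wavelengths >= left) & (wavelengths <= right)
    w, r = wavelengths[m], reflectance[m]
    baseline = np.interp(w, [w[0], w[-1]], [r[0], r[-1]])
    d = r - baseline                                  # height above the baseline
    return float(np.sum((d[1:] + d[:-1]) * np.diff(w) / 2.0))  # trapezoid rule

w = np.arange(650.0, 760.0, 5.0)
flat = np.full_like(w, 0.02)                          # featureless water spectrum
bloom = flat + 0.03 * np.exp(-((w - 700.0) ** 2) / (2 * 10.0 ** 2))
print(peak_envelope_area(w, bloom) > peak_envelope_area(w, flat))  # -> True
```

Because the area integrates the whole envelope rather than sampling a single band height (as FLH/MCI do), it is less sensitive to exactly where the bands fall, which is the band-setting robustness the abstract reports.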
  • Low-Velocity Small Target Detection With Doppler-Guided Retrospective
           Filter in High-Resolution Radar at Fast Scan Mode
    • Authors: Sai-Nan Shi;Xiang Liang;Peng-Lang Shui;Jian-Kang Zhang;Shuai Zhang;
      Pages: 8937 - 8953
      Abstract: It is a difficult task for high-resolution maritime radars operating in a fast scan mode to find sea-surface floating and low-velocity small targets, due to ubiquitous sea clutter, sporadic sea spikes that resemble target returns, and a shortage of shared databases. In this paper, a simulation method is presented to generate high-resolution radar returns of a local sea surface with a structural trend in texture and sea spikes, by integrating existing results on large-scale sea surface generation, sea surface reflectivity, Doppler characteristics of sea clutter, and properties of sea spikes. Generally, sea-surface small target detection in a fast scan mode is composed of intrascan integration to suppress sea clutter and interscan integration to exclude false alarms and sea spikes. Based on the Doppler difference between targets and sea clutter at the two time scales of a coherent processing interval (CPI) of tens of milliseconds and a scan period of several seconds, a Doppler-guided retrospective filter (DGRF) detector is proposed, which uses optimum coherent detection in intrascan integration and a DGRF in interscan binary and test statistic integrations. The two integrations and the Doppler consistency of integrated plots are fused for the final decision. Owing to the Doppler guidance, the proposed detector effectively impedes the integration of false alarms from the intrascan processing and provides significant detection performance improvement, which is verified with simulated data and real radar data containing test small targets.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Microwave Radiometer Data Superresolution Using Image Degradation and
           Residual Network
    • Authors: Ting Hu;Feng Zhang;Wei Li;Weidong Hu;Ran Tao;
      Pages: 8954 - 8967
      Abstract: Microwave radiometers are key sensors for globally monitoring environmental parameters; however, they suffer from low and nonuniform spatial resolution. In this paper, a superresolution (SR) technique based on image degradation and a residual network is proposed to enhance the spatial resolution of microwave radiometer data. Specifically, an improved degradation model is proposed to construct pairs of high-resolution (HR) and low-resolution (LR) data for training and testing. In addition, a new residual network, connecting the SR main branch and a gradient auxiliary branch in parallel, is designed to achieve SR reconstruction, where eight-channel gradient maps extracted from the LR data are input into the auxiliary branch to aid reconstruction. SR results are eventually generated by the trained SR network. Experiments executed on both simulated and actual data demonstrate the soundness and superiority of the proposed SR technique.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • A Line-by-Line Fast Anomaly Detector for Hyperspectral Imagery
    • Authors: María Díaz;Raúl Guerra;Pablo Horstrand;Sebastián López;Roberto Sarmiento;
      Pages: 8968 - 8982
      Abstract: In recent years, anomaly detection (AD) has enjoyed growing interest in hyperspectral data analysis. However, most state-of-the-art detectors need to work with the entire hyperspectral cube, which prevents their use in applications under real-time constraints, especially when the hyperspectral data are collected by push-broom scanners that acquire hyperspectral images (HSIs) in a line-by-line fashion. In this paper, a Line-by-Line Fast Anomaly Detector for Hyperspectral Imagery (LbL-FAD) is proposed, which is capable of processing each sensed line as soon as it is captured. The LbL-FAD works under the assumption that anomalous pixels cannot be well represented by the background distribution. It uses an orthogonal projection strategy to extract, from the first captured hyperspectral frames (i.e., lines of pixels), a set of pixels that represent the background distribution. Using these pixels, the LbL-FAD provides a hardware-friendly alternative for computing the subspace orthogonal to that spanned by the selected background samples, making anomalous pixels more detectable. In addition, the LbL-FAD incorporates an automatic thresholding method that provides line-by-line, real-time binary maps in which anomalous targets are segmented from the background. This clearly differentiates the proposed LbL-FAD from conventional anomaly detectors, which are usually unable to automatically discriminate anomalous pixels from background pixels until the entire image has been processed. Several experiments have been carried out using real HSIs collected by different sensors. The obtained results clearly support the benefits of our proposal, both in terms of detection accuracy and computational complexity.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
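The general idea behind the detector above, projecting each incoming pixel onto the orthogonal complement of a background subspace, can be sketched as follows. The random background selection here is a placeholder for the paper's orthogonal-projection selection strategy, and the pseudoinverse projector stands in for its hardware-friendly alternative.

```python
# Hedged sketch of orthogonal-subspace-projection anomaly scoring:
# pixels well explained by the background subspace leave a tiny residual,
# anomalies leave a large one. Background selection here is random.
import numpy as np

rng = np.random.default_rng(0)
bands, n_bg = 30, 12
B = rng.normal(size=(bands, n_bg))         # columns: selected background pixels
P = np.eye(bands) - B @ np.linalg.pinv(B)  # projector onto the orthogonal subspace

def anomaly_score(pixel):
    """Norm of the pixel component lying outside the background subspace."""
    return float(np.linalg.norm(P @ pixel))

bg_pixel = B @ rng.normal(size=n_bg)            # lies in the background subspace
anom_pixel = bg_pixel + rng.normal(size=bands)  # adds an off-subspace component
print(anomaly_score(bg_pixel), anomaly_score(anom_pixel))
```

Since the projector P is fixed once the background set is chosen, scoring each new line is a single matrix-vector product per pixel, which is what makes line-by-line operation feasible.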
  • Dense Attention Pyramid Networks for Multi-Scale Ship Detection in SAR
           Images
    • Authors: Zongyong Cui;Qi Li;Zongjie Cao;Nengyuan Liu;
      Pages: 8983 - 8997
      Abstract: Synthetic aperture radar (SAR) is an active microwave imaging sensor capable of working all day, in all weather, to provide high-resolution SAR images. Recently, SAR images have been widely used in civilian and military fields, such as ship detection. The scales of different ships vary in SAR images; small-scale ships in particular occupy only a few pixels and have lower contrast. Compared with their performance on large-scale ships, current ship detection methods are insensitive to small-scale ships and therefore face difficulties with multi-scale ship detection in SAR images. A novel multi-scale ship detection method based on a dense attention pyramid network (DAPN) for SAR images is proposed in this paper. The DAPN adopts a pyramid structure, which densely connects a convolutional block attention module (CBAM) to each concatenated feature map from the top to the bottom of the pyramid network. In this way, abundant features containing resolution and semantic information are extracted for multi-scale ship detection, while the CBAM refines the concatenated feature maps to highlight salient features for specific scales. Then, the salient features are integrated with global unblurred features to effectively improve accuracy in SAR images. Finally, the fused feature maps are fed to the detection network to obtain the final detection results. Experiments on the SAR ship detection data set (SSDD), which includes multi-scale ships in various SAR images, show that the proposed method can detect multi-scale ships in different scenes of SAR images with extremely high accuracy and outperforms other ship detection methods implemented on the SSDD.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Remote Sensing Image Reconstruction Using Tensor Ring Completion and Total
           Variation
    • Authors: Wei He;Naoto Yokoya;Longhao Yuan;Qibin Zhao;
      Pages: 8998 - 9009
      Abstract: Time-series remote sensing (RS) images are often corrupted by various types of missing information such as dead pixels, clouds, and cloud shadows that significantly influence the subsequent applications. In this paper, we introduce a new low-rank tensor decomposition model, termed tensor ring (TR) decomposition, to the analysis of RS data sets and propose a TR completion method for the missing information reconstruction. The proposed TR completion model has the ability to utilize the low-rank property of time-series RS images from different dimensions. To further explore the smoothness of the RS image spatial information, total-variation regularization is also incorporated into the TR completion model. The proposed model is efficiently solved using two algorithms, the augmented Lagrange multiplier (ALM) and the alternating least square (ALS) methods. The simulated and real-data experiments show superior performance compared to other state-of-the-art low-rank related algorithms.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Spectral Super-Resolution for Multispectral Image Based on Spectral
           Improvement Strategy and Spatial Preservation Strategy
    • Authors: Chen Yi;Yong-Qiang Zhao;Jonathan Cheung-Wai Chan;
      Pages: 9010 - 9024
      Abstract: While hyperspectral (HS) images play a significant role in many applications, they often suffer from issues such as low spatial resolution, low temporal resolution, and acquired spectral bands that either have a low signal-to-noise ratio (SNR) or are invalid because of very high noise levels. To address this issue, a spectral super-resolution method is proposed in this paper to recover a high-spectral-resolution HS image from multispectral (MS) images. The reconstructed HS image has the same spatial resolution and coverage as the input MS image. The proposed method involves a spectral improvement strategy and a spatial preservation strategy. In the spectral improvement strategy, auxiliary MS/HS image pairs of different landscapes are exploited to estimate the spectral response relationship, so that an HS image is obtained as an intermediate result; spectral dictionary learning is then exploited to recover a more accurate spectral reconstruction. The spatial preservation strategy is used as a spatial constraint to ensure spatial consistency. In addition, the low-rank property of the HS image is also introduced to make use of the global spectral coherence among HS bands. Experiments are conducted on both simulated and real datasets, including spectral enhancement of RGB images and MS images generated from AVIRIS data, and real MS/HS data (ALI and Hyperion) captured by the Earth Observing-1 (EO-1) satellite. The experimental results demonstrate the superiority of our proposed method over other state-of-the-art methods.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • MIMA: MAPPER-Induced Manifold Alignment for Semi-Supervised Fusion of
           Optical Image and Polarimetric SAR Data
    • Authors: Jingliang Hu;Danfeng Hong;Xiao Xiang Zhu;
      Pages: 9025 - 9040
      Abstract: Multi-modal data fusion has recently shown promise in classification tasks in remote sensing. Optical data and radar data, two important yet intrinsically different data sources, are attracting more and more attention for potential data fusion. It is widely known that machine learning-based methodologies often yield excellent performance; however, such methodologies rely on a large training set, which is very expensive to obtain in remote sensing. The semi-supervised manifold alignment (SSMA), a multi-modal data fusion algorithm, was designed to amplify the impact of an existing training set by linking labeled data to unlabeled data via unsupervised techniques. In this paper, we explore the potential of SSMA for fusing optical data and polarimetric synthetic aperture radar (SAR) data, which are multi-sensory data sources. Furthermore, we propose a MAPPER-induced manifold alignment (MIMA) for the semi-supervised fusion of multi-sensory data sources. Our proposed method unites SSMA with MAPPER, which is developed from the emerging field of topological data analysis (TDA). To the best of our knowledge, this is the first time that SSMA has been applied to fusing optical data and SAR data, and also the first time that TDA has been applied in remote sensing. The conventional SSMA derives a topological structure using k-nearest neighbors (kNN), while MIMA employs MAPPER, which takes field knowledge into account and derives a novel topological structure through spectral clustering in a data-driven fashion. Experimental results on data fusion for land cover land use classification and local climate zone classification suggest the superior performance of MIMA.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Knowledge-Aided 2-D Autofocus for Spotlight SAR Filtered Backprojection
    • Authors: Xinhua Mao;Lan Ding;Yudong Zhang;Ronghui Zhan;Shan Li;
      Pages: 9041 - 9058
      Abstract: The filtered backprojection (FBP) algorithm is a popular choice for complicated-trajectory synthetic aperture radar (SAR) image formation processing due to its inherent nonlinear motion compensation capability. However, how to efficiently refocus defocused FBP imagery when the motion measurement is not accurate enough is still a challenging problem. In this paper, a new interpretation of the FBP derivation is presented from the Fourier transform point of view. Based on this new viewpoint, the properties of the residual 2-D phase error in FBP imagery are analyzed in detail. Then, by incorporating the derived a priori knowledge of the 2-D phase error, an accurate and efficient 2-D autofocus approach is proposed. This new approach performs parameter estimation in a dimension-reduced parameter subspace by exploiting the a priori analytical structure of the 2-D phase error, and therefore possesses much higher accuracy and efficiency than conventional blind methods. Finally, experimental results clearly demonstrate the effectiveness and robustness of the proposed method.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Fast and Robust Matching for Multimodal Remote Sensing Image Registration
    • Authors: Yuanxin Ye;Lorenzo Bruzzone;Jie Shan;Francesca Bovolo;Qing Zhu;
      Pages: 9059 - 9070
      Abstract: While image matching has been studied in the remote sensing community for decades, matching multimodal data [e.g., optical, light detection and ranging (LiDAR), synthetic aperture radar (SAR), and map data] remains a challenging problem because of the significant nonlinear intensity differences between such data. To address this problem, we present a novel fast and robust template matching framework integrating local descriptors for multimodal images. First, a local descriptor [such as the histogram of oriented gradients (HOG), local self-similarity (LSS), or speeded-up robust features (SURF)] is extracted at each pixel to form a pixelwise feature representation of an image. Then, we define a fast similarity measure based on this feature representation using the fast Fourier transform (FFT) in the frequency domain. A template matching strategy is employed to detect correspondences between images. In this procedure, we also propose a novel pixelwise feature representation using oriented gradients of images, named channel features of oriented gradients (CFOG). This novel feature is an extension of the pixelwise HOG descriptor with superior performance in image matching and computational efficiency. The major advantages of the proposed matching framework are: 1) structural similarity representation using the pixelwise feature description and 2) high computational efficiency due to the use of the FFT. The proposed matching framework has been evaluated using many different types of multimodal images, and the results demonstrate its superior matching performance with respect to the state-of-the-art methods.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
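The FFT speed-up behind this kind of framework, evaluating a template's similarity at every position of a search image in the frequency domain rather than by sliding-window sums, can be sketched as below. Plain image intensities stand in for the pixelwise CFOG/HOG descriptors, and the raw (unnormalized, cyclic) correlation stands in for the paper's similarity measure.

```python
# Dense template correlation via the FFT: one forward/inverse transform
# pair replaces an O(search_area * template_area) sliding-window loop.
import numpy as np

def fft_match(image, template):
    """Cyclic correlation scores of `template` at every position of `image`."""
    F = np.fft.fft2(image) * np.conj(np.fft.fft2(template, s=image.shape))
    return np.fft.ifft2(F).real

rng = np.random.default_rng(3)
image = rng.normal(size=(128, 128))
r0, c0 = 40, 77
template = image[r0:r0 + 16, c0:c0 + 16].copy()   # cut the template from the image
scores = fft_match(image, template)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(scores), scores.shape))
print(peak)   # -> (40, 77)
```

In practice the same trick is applied per descriptor channel and the channel scores are summed, which preserves the FFT cost advantage while matching structure rather than raw intensity.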
  • Differential SAR Tomography Reconstruction Robust to Temporal
           Decorrelation Effects
    • Authors: Hossein Aghababaee;Giampaolo Ferraioli;Gilda Schirinzi;
      Pages: 9071 - 9080
      Abstract: Temporal decorrelation is one of the major problems in synthetic aperture radar (SAR) tomography (TomoSAR) of natural environments, leading to blurring and spreading in the focused image space. In the context of spatiotemporal focusing using multi-temporal multi-baseline (MB) SAR data, a model-based differential TomoSAR is employed. To this end, and with the aim of temporal-decorrelation-robust focusing, a differential tomography framework based on the generalized Capon estimator is investigated. The method can cope with the temporal decorrelation of a distributed environment through spatiotemporal focusing with the optimal bandwidth of the distributed signal. In addition, the method employs an additional parameter for coherence channel balancing in the generalized Capon model, which benefits the characterization of the spatiotemporal backscattering by mitigating the inconsistency between channels. The analysis is performed with a realistic simulation of temporal decorrelation in the presence of different decorrelation sources, taking into account the dependence on the vertical structure of the forested area. The effectiveness of the proposed framework has been assessed on both simulated and real data sets by evaluating and characterizing the canopy and the under-foliage ground in terms of the deviation between the estimated covariance matrix and that of the generalized TomoSAR model.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • An Efficient and Accurate GB-SAR Imaging Algorithm Based on the Fractional
           Fourier Transform
    • Authors: Lilong Zou;Motoyuki Sato;
      Pages: 9081 - 9089
      Abstract: In this paper, an efficient and accurate imaging algorithm is presented for ground-based synthetic aperture radar (GB-SAR) and other radar systems formed by a physical or synthetic linear aperture. The imaging algorithm is based on the fractional Fourier transform (FrFT) for the azimuth compression. A mathematical framework is derived from the projection of a sample reflectivity image onto pseudopolar coordinates, and its implementation is presented. With the data acquisition geometry and the pseudopolar imaging coordinates, the phase of a point target can be expressed as a quadratic phase exponential, so that only a 1-D FrFT is needed for azimuth compression of the time-domain backscatter data in the GB-SAR imaging problem. The optimal transformation order, which represents the spatial frequency change under the FrFT, is then derived. Taking advantage of this optimal representation, the proposed approach avoids the heavy computation of time-domain back projection (TDBP). Compared to the far-field pseudopolar format algorithm (FPFA), the accuracy of the proposed algorithm is much improved, while its computational cost and complexity remain almost the same as those of the FPFA. The proposed approach thus combines the imaging quality of TDBP with the computational cost of the FPFA, two important aspects of GB-SAR applications. Both numerical simulation and a field GB-SAR experiment show that the algorithm is well suited for high-precision GB-SAR imaging, especially in the near field.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Remote Sensor Design for Visual Recognition With Convolutional Neural
           Networks
    • Authors: Lucas Jaffe;Michael Zelinski;Wesam Sakla;
      Pages: 9090 - 9108
      Abstract: While deep learning technologies for computer vision have developed rapidly since 2012, modeling of remote sensing systems has remained focused around human vision. In particular, remote sensing systems are usually constructed to optimize sensing cost-quality tradeoffs with respect to human image interpretability. While some recent studies have explored remote sensing system design as a function of simple computer vision algorithm performance, there has been little work relating this design to the state of the art in computer vision: deep learning with convolutional neural networks. We develop experimental systems to conduct this analysis, showing results with modern deep learning algorithms and recent overhead image data. Our results are compared to standard image quality measurements based on human visual perception, and we conclude not only that machine and human interpretability differ significantly but also that computer vision performance is largely self-consistent across a range of disparate conditions. This paper is presented as a cornerstone for a new generation of sensor design systems that focus on computer algorithm performance instead of human visual perception.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Detection of First-Year and Multi-Year Sea Ice from Dual-Polarization SAR
           Images Under Cold Conditions
    • Authors: Alexander S. Komarov;Mark Buehner;
      Pages: 9109 - 9123
      Abstract: This paper presents a new technique for automated detection of multi-year (MY) and first-year (FY) sea ice from RADARSAT-2 dual-polarization HH–HV ScanSAR Wide images under cold environmental conditions. The approach is applied to a 2.05 km $\times$ 2.05 km ($41 \times 41$ pixels) spatial window wherever the area is labeled as ice by our recently introduced ice and open water detection approach. The probability of the presence of MY ice is modeled as a function of two predictor parameters computed over each spatial window: the HV/HH polarization ratio and the standard deviation of the HV signal. The MY ice probability model was built from thousands of synthetic aperture radar (SAR) images and corresponding Canadian Ice Service (CIS) Image Analysis products covering the 2010–2016 period, excluding 2013. Verification against the CIS Image Analysis products on the independent 2013 testing subset shows that approximately 50% of pure MY and FY ice samples were classified, with an accuracy of 98.2%. Incidence angle correction of the HH and HV backscatter does not improve MY and FY ice detection in the space of the selected predictor parameters. The proposed technique will be used as part of the Environment and Climate Change Canada Regional Ice-Ocean Prediction System in support of assimilation of ice thickness retrievals from CryoSat-2 and Soil Moisture and Ocean Salinity mission data.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
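The detection scheme above reduces each window to two statistics (HV/HH ratio, HV standard deviation) feeding an MY-ice probability model. A toy logistic sketch of that structure follows; the coefficients `w0..w2` and the simulated backscatter values are hypothetical, not the published fit.

```python
import numpy as np

def my_ice_probability(hh_db, hv_db, w0=-2.0, w1=0.35, w2=0.8):
    """Toy logistic model of multi-year ice probability over a 41x41 window.

    Predictors follow the paper's choice: the HV/HH polarization ratio
    (difference in dB) and the standard deviation of the HV signal.
    The coefficients w0..w2 are hypothetical, not the published model.
    """
    ratio = np.mean(hv_db - hh_db)   # HV/HH ratio in dB
    hv_std = np.std(hv_db)           # texture of the HV channel
    z = w0 + w1 * ratio + w2 * hv_std
    return 1.0 / (1.0 + np.exp(-z))

# Windows with a higher HV/HH ratio and rougher HV texture score higher
rng = np.random.default_rng(0)
smooth = my_ice_probability(rng.normal(-20, 0.5, (41, 41)),
                            rng.normal(-28, 0.5, (41, 41)))
rough = my_ice_probability(rng.normal(-18, 0.5, (41, 41)),
                           rng.normal(-22, 3.0, (41, 41)))
print(rough > smooth)  # True under these toy coefficients
```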
  • Haze and Thin Cloud Removal Using Elliptical Boundary Prior for Remote
           Sensing Image
    • Authors: Qiang Guo;Hai-Miao Hu;Bo Li;
      Pages: 9124 - 9137
      Abstract: Remote sensing images play important roles in various earth surface observation applications. However, a hazy atmosphere can visually decrease the contrast and usability of remote sensing images. In this paper, we propose a haze and thin cloud removal method for single visible remote sensing images, which robustly estimates the haze thickness, atmospheric light, and transmission from an image with dense haze or thin cloud, and finally recovers a haze-free image. An elliptical boundary prior (EBP) is proposed to estimate the haze thickness in each local patch from the pixel cluster in spectral space, which is enclosed by an ellipse. To prevent the influence of highlighted objects, an atmospheric light estimation approach is presented. The correlation between transmission and haze thickness is reconstructed to adapt the scattering model to remote sensing images. The experimental results demonstrate that the proposed method not only significantly improves the contrast and restores the textures of various kinds of hazy remote sensing images but also preserves the spectral information of the visible bands well.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
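Once the atmospheric light and transmission are estimated (by the EBP pipeline above or any other prior), the haze-free image follows from inverting the standard atmospheric scattering model I = J·t + A·(1 − t). A minimal sketch of that final recovery step, with illustrative values:

```python
import numpy as np

def recover_scene(hazy, atmospheric_light, transmission, t_min=0.1):
    """Invert the scattering model I = J*t + A*(1 - t) per pixel.

    Assumes atmospheric light A and transmission t were already estimated
    upstream; t is clipped to t_min to avoid amplifying noise where the
    haze is thickest.
    """
    t = np.maximum(transmission, t_min)
    return (hazy - atmospheric_light) / t + atmospheric_light

# Round trip: synthesize haze from a known scene, then invert it
scene = np.array([[0.2, 0.8], [0.5, 0.3]])
A, t = 0.9, np.full((2, 2), 0.6)
hazy = scene * t + A * (1 - t)
print(np.allclose(recover_scene(hazy, A, t), scene))  # True
```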
  • FaultNet3D: Predicting Fault Probabilities, Strikes, and Dips With a
           Single Convolutional Neural Network
    • Authors: Xinming Wu;Yunzhi Shi;Sergey Fomel;Luming Liang;Qie Zhang;Anar Z. Yusifov;
      Pages: 9138 - 9155
      Abstract: We simultaneously estimate fault probabilities, strikes, and dips directly from a seismic image by using a single convolutional neural network (CNN). In this method, we assume that a local 3-D fault is a plane defined by a single combination of strike and dip angles. We assume the fault strikes and dips are in the ranges of [0°, 360°) and [64°, 85°], respectively, which are divided into 577 classes corresponding to the no-fault case and 576 different combinations of strikes and dips. We construct a 7-layer CNN to classify the fault strike and dip in a local seismic cube and obtain the classification probability at the same time. With the fault probability, strike, and dip estimated at a seismic pixel, we further compute a fault cube (centered at the pixel) with fault features elongated along the fault plane. By sliding the classification window within a full seismic image, we obtain many overlapping fault cubes, which are stacked to compute three full images of enhanced and continuous fault probabilities, strikes, and dips. To train the CNN model, we propose an effective and efficient workflow to automatically create 900 000 synthetic seismic cubes and the corresponding fault class labels. Although trained with only synthetic data sets, our CNN model can accurately estimate fault probabilities, strikes, and dips within field seismic images acquired in entirely different surveys. From the three estimated fault images, we further construct fault cells represented as small 3-D squares, where each square is colored by fault probability and oriented by fault strike and dip. We recursively link the fault cells by following the fault strikes and dips to finally construct fault skins, which are simple linked data structures representing fault surfaces.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
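The 577-class labeling above (one no-fault class plus 576 strike/dip combinations) can be sketched as an encode/decode pair. The abstract gives the angle ranges but not the binning, so the 36 strike bins × 16 dip bins split below (36 × 16 = 576) is an assumption for illustration only.

```python
# Class 0 is "no fault"; classes 1..576 are strike/dip bin combinations.
# N_STRIKE x N_DIP = 576 is an assumed split, not from the paper.
N_STRIKE, N_DIP = 36, 16
DIP_MIN, DIP_MAX = 64.0, 85.0

def encode(strike, dip):
    """Map a (strike, dip) pair in [0, 360) x [64, 85] to a class label."""
    s = int(strike // (360.0 / N_STRIKE))
    d = min(int((dip - DIP_MIN) / (DIP_MAX - DIP_MIN) * N_DIP), N_DIP - 1)
    return 1 + s * N_DIP + d

def decode(label):
    """Recover the bin-center (strike, dip) from a class label."""
    if label == 0:
        return None  # no fault
    s, d = divmod(label - 1, N_DIP)
    strike = (s + 0.5) * (360.0 / N_STRIKE)
    dip = DIP_MIN + (d + 0.5) * (DIP_MAX - DIP_MIN) / N_DIP
    return strike, dip

print(encode(0.0, 64.0), encode(359.9, 85.0))  # 1 576
```

Decoding a predicted class back to bin-center angles is what lets the classifier output orient each fault cell.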
  • Nested Network With Two-Stream Pyramid for Salient Object Detection in
           Optical Remote Sensing Images
    • Authors: Chongyi Li;Runmin Cong;Junhui Hou;Sanyi Zhang;Yue Qian;Sam Kwong;
      Pages: 9156 - 9166
      Abstract: Owing to the various object types and scales, diverse imaging orientations, and cluttered backgrounds in optical remote sensing images (RSIs), it is difficult to directly extend the success of salient object detection for natural scene images to optical RSIs. In this paper, we propose an end-to-end deep network called LV-Net, named after the shape of its architecture, which detects salient objects from optical RSIs in a purely data-driven fashion. The proposed LV-Net consists of two key modules, i.e., a two-stream pyramid module (the L-shaped module) and an encoder–decoder module with nested connections (the V-shaped module). Specifically, the L-shaped module extracts a set of complementary information hierarchically using a two-stream pyramid structure, which helps perceive the diverse scales and local details of salient objects. The V-shaped module gradually integrates encoder detail features with decoder semantic features through nested connections, which aims at suppressing the cluttered backgrounds and highlighting the salient objects. In addition, we construct the first publicly available optical RSI data set for salient object detection, including 800 images with varying spatial resolutions, diverse saliency types, and pixel-wise ground truth. Experiments on this benchmark data set demonstrate that the proposed method outperforms state-of-the-art salient object detection methods both qualitatively and quantitatively.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Minimizing Height Effects in MTInSAR for Deformation Detection Over Built
    • Authors: Lei Zhang;Hongguo Jia;Zhong Lu;Hongyu Liang;Xiaoli Ding;Xin Li;
      Pages: 9167 - 9176
      Abstract: Removing the topographic component of the interferometric synthetic aperture radar (InSAR) phase is conventionally done with an external digital elevation model (DEM). However, with the increasing spatial resolution of SAR data, external DEMs are becoming less adequate for this purpose, resulting in notable phase residuals and even decorrelation in differential interferograms. Although topographic residuals can be parameterized and estimated by multi-temporal InSAR (MTInSAR) techniques, the accuracy of such estimates is limited by several factors. Instead of providing accurate height information, shortening the baselines is an alternative way to mitigate the DEM phase. We propose an MTInSAR processing framework that retrieves the deformation time series without estimating topographic residuals. Within the framework, we generate a set of pseudo interferograms with near-zero baselines by integer combination and take these pseudo interferograms as the observations of the MTInSAR model, in which deformation becomes the only signal that needs to be parameterized. The deformation time series is then retrieved directly from the wrapped phases by ridge estimation with an integer ambiguity detector. Although atmospheric artifacts might be magnified during the combination, their differential components at arcs constructed from neighboring points are not significantly enlarged. The proposed method is particularly suitable for infrastructure deformation monitoring in urban areas where no accurate external DEM is available. It also has promising potential for retrieving deformation from SAR data stacks with short acquisition intervals, since the combination can enlarge the signals of interest in the pseudo-observations. Semisynthetic and real data tests indicate that the proposed method performs satisfactorily in DEM error mitigation and deformation time series estimation.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
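The key trick above, integer combination of wrapped interferograms to cancel the baseline-dependent topographic phase, can be demonstrated on synthetic data. The baselines, integer coefficients, and deformation phases below are toy values chosen so the pseudo-baseline is exactly zero; the paper's framework selects such combinations from a real stack.

```python
import numpy as np

def wrap(phase):
    """Wrap phase into (-pi, pi]."""
    return np.angle(np.exp(1j * phase))

# Two wrapped interferograms sharing a topographic term scaled by their
# baselines B1, B2, plus per-interferogram deformation phases d1, d2.
rng = np.random.default_rng(1)
topo = rng.uniform(-20, 20, 100)   # topographic phase per unit of B/100
B1, B2 = 150.0, 100.0              # toy baselines (m)
d1, d2 = 0.3, 0.5                  # toy deformation phases (rad)
ifg1 = wrap(B1 * topo / 100 + d1)
ifg2 = wrap(B2 * topo / 100 + d2)

# Integer combination m*ifg1 - n*ifg2 with m*B1 - n*B2 = 0 cancels the
# topographic term; here m, n = 2, 3 gives a zero pseudo-baseline.
m, n = 2, 3
pseudo = wrap(m * ifg1 - n * ifg2)

# What survives is only the (wrapped) combined deformation signal
expected = wrap(m * d1 - n * d2)
print(np.allclose(pseudo, expected))  # True: topography is removed
```

Because m and n are integers, wrapping commutes with the combination, so the pseudo interferogram stays consistent with the wrapped inputs.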
  • Iterative Double Laplacian-Scaled Low-Rank Optimization for Under-Sampled
           and Noisy Signal Recovery
    • Authors: Qiang Zhao;Qizhen Du;Wenhan Sun;Yangkang Chen;
      Pages: 9177 - 9187
      Abstract: Recovering signal from under-sampled, erratic-noise-corrupted seismic data is a challenging task because it requires simultaneously modeling the erratic noise and the missing signal. Assuming that the recorded data are the superposition of low-rank and sparse components, many related works have used a hybrid rank-sparsity constraint. Those published works typically detect the rank and erratic noise using empirical, global thresholds, which often fail to characterize the varying sparsity well and easily cause biased estimation when the distribution is nonstationary. We propose an iterative double Laplacian-scaled low-rank optimization that adaptively selects the sparsity and rank regularizer parameters for robust signal recovery. In contrast to published approaches with a global threshold, a Laplacian-scaled mixture, obtained by multiplying a Laplacian variable by a Gamma variable, is used to locally model the sparsity of the erratic noise and the low-rank feature of the signal. The expectation–maximization (EM) algorithm then transforms the Laplacian-scaled mixture problem into a localized reweighted $\ell_{1}$ minimization scheme. The weighting coefficient appearing in the EM solver provides a variable constraint that locally addresses the rank and erratic noise, so the regularizer parameters can dynamically reflect the different importance of those coefficients. We tested the effectiveness of the proposed method on under-sampled synthetic and field data corrupted by erratic noise, using other state-of-the-art methods as comparisons. The results show that more accurate estimates of the signal and erratic noise can be obtained with the proposed method.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Highly Squinted MEO SAR Focusing Based on Extended Omega-K Algorithm and
           Modified Joint Time and Doppler Resampling
    • Authors: Wenkang Liu;Guang-Cai Sun;Xiang-Gen Xia;Dong You;Mengdao Xing;Zheng Bao;
      Pages: 9188 - 9200
      Abstract: A squinted observation geometry combined with a long integration time significantly aggravates the range walk and spatial variation of a medium-earth-orbit (MEO) synthetic aperture radar (SAR) signal. A variable pulse repetition frequency (PRF) is recommended to avoid blockage in echo recording and to save storage space. Existing wavenumber-domain algorithms cannot handle the nonlinear and range–azimuth-coupled spatial variation (RACSP) over a large scene. In this paper, we propose a modified Stolt mapping method along with a modified joint time and Doppler resampling (JTDR) for highly squinted MEO SAR data processing. An azimuth timescale transformation is used to deal with the nonlinear spatial variation of the azimuth frequency-modulation (FM) rate. An extended Omega-K is used to linearize the range frequency and achieve range cell migration correction (RCMC). To address the RACSP, the Doppler is linearized in the range-Doppler domain using a range-dependent Doppler scale transformation. The computational complexity and geometry distortion correction (GDC) are also discussed. Simulation results verify the effectiveness of the developed focusing approaches.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Multi-Scale Dense Networks for Hyperspectral Remote Sensing Image Classification
    • Authors: Chunju Zhang;Guandong Li;Shihong Du;
      Pages: 9201 - 9222
      Abstract: For hyperspectral remote sensing image (HSI) classification, deep neural networks have grown progressively deeper, but fine features are often largely lost, or even disappear, during deep feature transfer. As feature aggregation and connectivity increase, the complexity of the network and the number of training parameters grow greatly, requiring more training time. This paper proposes a multi-scale dense network (MSDN) for HSI classification that makes full use of different scale information in the network structure and combines scale information throughout the network. It extracts HSI features in two dimensions, at both fine and coarse levels. In the horizontal direction, it performs deep extraction of HSI features, with a 3-D dense connection structure aggregating features at different levels. In the vertical direction, scale information is considered, and three-scale feature maps at low, middle, and high levels are generated from the first layer of the network. The MSDN uses strided convolution for downsampling, combines feature information at different scale levels, and extracts features along the diagonal of this depth-scale grid. The network implements the reconstruction of deep feature extraction and multi-scale fusion for HSI classification. The MSDN model performs well on representative HSI datasets, namely, the Indian Pines, Pavia University, Salinas, Botswana, and Kennedy Space Center datasets. It improves the training speed and accuracy of HSI classification and, in particular, the convergence speed, which effectively saves computing resources while maintaining high stability.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Sequence SAR Image Classification Based on Bidirectional
           Convolution-Recurrent Network
    • Authors: Xueru Bai;Ruihang Xue;Li Wang;Feng Zhou;
      Pages: 9223 - 9235
      Abstract: Although the deep convolutional neural network (DCNN) has been successfully applied to target classification of military vehicles based on synthetic aperture radar (SAR), most available methods do not fully exploit the characteristics of continuous SAR imaging and utilize only a single image for recognition. To extract the significant identification features contained in an image sequence, this paper proposes a sequence SAR target classification method based on a bidirectional convolution-recurrent network. In this network, we extract the spatial features of each image through DCNNs without fully connected layers and then learn sequence features with bidirectional long short-term memory networks. Finally, we design an average softmax classifier to obtain the classification results. Compared with the available methods, the proposed network takes advantage of the significant information in the image sequence and achieves higher classification accuracy on the moving and stationary target acquisition and recognition data set. In addition, it is robust to large depression angle variants, configuration variants, and version variants.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Compressed Imaging to Reduce Storage in Adjoint-State Calculations
    • Authors: Toktam Zand;Alison Malcolm;Ali Gholami;Alan Richardson;
      Pages: 9236 - 9241
      Abstract: To generate seismic images of the subsurface with adjoint-state methods such as reverse-time migration (RTM) and full-waveform inversion (FWI), the gradient of a misfit function is computed efficiently by applying what is referred to as an imaging condition to the forward-propagated source wavefield and the backward-propagated adjoint wavefield. To reduce the storage in adjoint-state calculations, we evaluate the imaging condition only at a randomly selected subset of spatial grid points (compressed imaging) and then efficiently reconstruct the full image from the imaged points via compressed sensing theory, which combines the compressibility of seismic images with convex optimization tools for the reconstruction. We use second-order total variation regularization for the reconstruction and, using different numerical tests from RTM and FWI, we show that the new method allows a significant reduction in wavefield storage while still recovering the full image accurately. Furthermore, regularization applied to the gradient during the reconstruction stage improves the convergence of the FWI algorithm.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Early Calibration and Performance Assessments of NOAA-20 VIIRS Thermal
           Emissive Bands
    • Authors: Yonghong Li;Xiaoxiong Xiong;Jeff McIntire;Amit Angal;Sergey Gusev;Kwofu Chiang;
      Pages: 9242 - 9251
      Abstract: The Visible Infrared Imaging Radiometer Suite (VIIRS) sensor aboard the NOAA-20 (previously JPSS-1) spacecraft has operated successfully since its launch in November 2017. As with the first VIIRS instrument on the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, data are collected in 22 spectral bands that are calibrated by a set of onboard calibrators. This paper provides an overview of the NOAA-20 VIIRS on-orbit operation and calibration, with a particular focus on the thermal emissive bands (TEBs). The results presented include the on-orbit changes in the TEB spectral band responses, detector noise characterization, and key calibration parameters, such as the nonlinear coefficients derived from the blackbody warm-up cool-down cycles. Other issues, such as the early-mission long-wave infrared (LWIR) response degradation due to icing on the dewar window, and their impact on sensor calibration are also discussed. Since launch, the VIIRS instrument temperature has been stable to within ±0.8 K, and the cold focal plane temperatures are well controlled, with variations of less than 40 mK. With the exception of the early degradation observed in the LWIR bands, the TEB gains have been stable to within 0.04% (except band I5, at 0.07%). Based on the current performance, VIIRS is expected to meet its calibration requirements throughout its design lifetime.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Bundle Adjustment of a Time-Sequential Spectral Camera Using Polynomial Models
    • Authors: Adilson Berveglieri;Antonio Maria Garcia Tommaselli;Lucas Dias Santos;Eija Honkavaara;
      Pages: 9252 - 9263
      Abstract: Lightweight hyperspectral cameras based on frame geometry have been used for several applications on unmanned aerial vehicles (UAVs). The camera used in this investigation is based on a tunable Fabry–Pérot interferometer (FPI) and works on the time-sequential principle for band acquisition. As a consequence, when images are collected in motion, hypercubes are generated with unregistered bands, so the individual bands in each hypercube have different exterior orientation parameters (EOPs), which must be estimated by an image orientation procedure. The objective of this paper is to develop an approach for bundle block adjustment (BBA) using time-dependent polynomial models for the simultaneous orientation of all bands. The procedure uses a minimum number of bands to estimate the polynomial parameters, from which the EOPs (position and attitude) of all bands can be determined. In tests backprojecting ground points to interpolated bands, the average error was smaller than 1 pixel, which indicates excellent potential for orthomosaic generation. The polynomial technique was also compared with conventional BBA. The discrepancies assessed at checkpoints indicated similar errors for both techniques: less than the pixel size in planimetry and less than 2.8 times the pixel size in height. The results therefore show that spectral band orientation can be performed with the proposed technique, assuming that the trajectory during cube acquisition can be modeled with the polynomial model, which reduces the workload while achieving the same accuracy as conventional BBA for all bands.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
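The interpolation step above, fitting time polynomials to the EOPs of a few reference bands and evaluating them at every band's acquisition time, can be sketched directly. The times, positions, and attitude values below are illustrative, and the fit is done outside any adjustment (the paper estimates the coefficients inside the BBA itself).

```python
import numpy as np

# Band acquisition times (s within one hypercube) and EOPs of a few
# reference bands; all values are illustrative only.
ref_t = np.array([0.00, 0.25, 0.50, 0.75])
ref_x = np.array([10.0, 10.6, 11.1, 11.5])   # platform X position (m)
ref_omega = np.array([1.0, 1.2, 1.5, 1.9])   # one attitude angle (deg)

# Fit low-order time polynomials to the reference EOPs, then evaluate
# the polynomial at every band's acquisition time to get its EOPs.
px = np.polynomial.Polynomial.fit(ref_t, ref_x, deg=2)
pw = np.polynomial.Polynomial.fit(ref_t, ref_omega, deg=2)

band_t = np.linspace(0.0, 0.75, 16)          # 16 bands in the cube
band_x, band_omega = px(band_t), pw(band_t)

# The polynomial reproduces the reference bands it was fitted from
print(np.allclose(px(ref_t), ref_x))  # True
```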
  • Image-Guided Registration of Unordered Terrestrial Laser Scanning Point
           Clouds for Urban Scenes
    • Authors: Xuming Ge;Han Hu;Bo Wu;
      Pages: 9264 - 9276
      Abstract: This paper presents an image-guided end-to-end registration approach for globally consistent 3-D registration of unordered terrestrial laser scanning (TLS) point clouds. The proposed method can handle arbitrary point clouds with reasonable pairwise overlap without knowledge of their initial position and orientation, without requiring artificial targets, and without needing to record the scanning order. One of the novel contributions of the proposed approach lies in the optimization of the scanning network: we retrieve the similarities of all scans with a vocabulary tree, using both the geometrically rectified panorama images and the corresponding 3-D point clouds. The approach also highlights integral optimization in both the coarse and fine registration. A pose graph is introduced to realize global optimization at the end of the coarse step without primitives. The results then serve as inputs to the pairwise fine registration, which is followed by minimum loop expansion (MLE) refinement. Comprehensive experiments demonstrated network optimization rates of over 60% using the image-guided strategy. Using the pose-graph optimization method, successful registration rates (SRRs) increased to 100% for all tested cases. The MLE not only accelerates convergence but also improves registration accuracy, which reached 0.1 m in translation and 0.1° in rotation.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Remote Sensing Image Superresolution Using Deep Residual Channel Attention Networks
    • Authors: Juan Mario Haut;Ruben Fernandez-Beltran;Mercedes E. Paoletti;Javier Plaza;Antonio Plaza;
      Pages: 9277 - 9289
      Abstract: The current trend in remote sensing image superresolution (SR) is to use supervised deep learning models to enhance the spatial resolution of airborne and satellite-based optical imagery. Nonetheless, the inherent complexity of these architectures and data often makes such methods very difficult to train. Despite recent advances, the huge number of network parameters that must be fine-tuned and the lack of suitable high-resolution remotely sensed imagery in actual operational scenarios still raise important challenges that may become relevant limitations in existing earth observation data production environments. To address these problems, we propose a new remote sensing SR approach that integrates a visual attention mechanism within a residual-based network design, allowing the SR process to focus on those features extracted from land-cover components that require more computation to be superresolved. As a result, the network training process is significantly improved, because it aims at learning the most relevant high-frequency information, while the proposed architecture allows the low-frequency features extracted from spatially uninformative earth surface areas to be neglected through several levels of skip connections. Our experimental assessment, conducted on the University of California at Merced and GaoFen-2 remote sensing image collections with three scaling factors and eight different SR methods, demonstrates that the proposed approach exhibits competitive performance in superresolving remotely sensed imagery.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
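The channel-attention idea in the abstract above, rescaling feature channels by gates computed from their global statistics so informative channels dominate, can be sketched in a few lines. This is a generic squeeze-and-excitation style gate with random weights, not the paper's residual block.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Minimal channel-attention gate (squeeze-and-excitation style).

    features: (C, H, W) feature maps; w1: (C//r, C) and w2: (C, C//r)
    form a bottleneck. Channels are rescaled by sigmoid gates computed
    from their global averages, emphasizing informative channels.
    """
    squeeze = features.mean(axis=(1, 2))            # (C,) global pooling
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gates, (C,)
    return features * gates[:, None, None]          # per-channel rescale

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.normal(size=(C, H, W))
y = channel_attention(x,
                      rng.normal(size=(C // r, C)),
                      rng.normal(size=(C, C // r)))
print(y.shape)  # (8, 4, 4)
```

Since the gates lie in (0, 1), the operation can only attenuate channels, never amplify them; learning decides which channels keep their magnitude.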
  • Tensor Alignment Based Domain Adaptation for Hyperspectral Image Classification
    • Authors: Yao Qin;Lorenzo Bruzzone;Biao Li;
      Pages: 9290 - 9307
      Abstract: This paper presents a tensor alignment (TA) based domain adaptation (DA) method for hyperspectral image (HSI) classification. Specifically, HSIs in both domains are first segmented into superpixels, and tensors of both domains are constructed to include the neighboring samples from each superpixel. The subspace alignment (SA) between the two domains is then achieved through alignment matrices, and the original tensors are projected as lower-dimensional core tensors into the invariant tensor subspace by applying projection matrices. To preserve the geometric information of the original tensors, a manifold regularization term for the core tensors is included in the optimization. The alignment matrices, projection matrices, and core tensors are solved in the framework of Tucker decomposition with an alternating optimization strategy. In addition, a postprocessing strategy based on pure-sample extraction for each superpixel is defined to further improve classification performance. Experimental results on four real HSIs demonstrate that the proposed method achieves better performance than state-of-the-art subspace learning methods when only a limited number of labeled source samples is available.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • Large-Scale Multibaseline Phase Unwrapping: Interferogram Segmentation
           Based on Multibaseline Envelope-Sparsity Theorem
    • Authors: Hanwen Yu;Yuan Zhou;Stephanie S. Ivey;Yang Lan;
      Pages: 9308 - 9322
      Abstract: Multibaseline (MB) phase unwrapping (PU) is a critical processing step for MB synthetic aperture radar interferometry (InSAR). Compared with traditional single-baseline (SB) PU, MB PU is applicable to a wider range of study areas with strong phase variation, because it can overcome the limitation of the Itoh condition. Since most MB PU methods need to process multiple interferograms simultaneously, the size of the input interferograms poses unique challenges when it exceeds the limits of the available computational capability. Until now, research on large-scale (LS) MB PU has been quite limited. To deal with such cases, this paper proposes a technique for applying the two-stage programming-based MB PU method (TSPA), proposed by Yu and Lan, to LS MB InSAR data sets. Specifically, the MB $L^{\kappa}$-norm envelope-sparsity theorem is first stated and proved, giving a sufficient condition that exactly guarantees the consistency between local and global TSPA solutions. Based on this theorem, we put forward an interferogram tiling strategy whereby each LS interferogram in the input MB InSAR data set is partitioned into a set of smaller sub-interferograms that can be unwrapped individually by TSPA, in parallel or in series. Both theoretical analysis and experimental results show that the proposed tiling strategy is effective for the LS MB PU problem.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
  • A Data-Driven Approach for Accurate Rainfall Prediction
    • Authors: Shilpa Manandhar;Soumyabrata Dev;Yee Hui Lee;Yu Song Meng;Stefan Winkler;
      Pages: 9323 - 9331
      Abstract: In recent years, there has been growing interest in using precipitable water vapor (PWV) derived from global positioning system (GPS) signal delays to predict rainfall. However, the occurrence of rainfall depends on a myriad of atmospheric parameters. This paper proposes a systematic approach to analyzing the various parameters that affect precipitation in the atmosphere. Ground-based weather features such as Temperature, Relative Humidity, Dew Point, Solar Radiation, and PWV, along with Seasonal and Diurnal variables, are identified, and a detailed feature correlation study is presented. While all features play a significant role in rainfall classification, only a few of them, such as the PWV, Solar Radiation, Seasonal, and Diurnal features, stand out for rainfall prediction. Based on these findings, an optimum set of features is used in a data-driven machine learning algorithm for rainfall prediction. The experimental evaluation using a 4-year (2012–2015) database shows a true detection rate of 80.4%, a false alarm rate of 20.3%, and an overall accuracy of 79.6%. Compared with the existing literature, our method significantly reduces the false alarm rate.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
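The three figures quoted above (true detection rate, false alarm rate, overall accuracy) are standard confusion-matrix ratios for a binary rain / no-rain classifier. A small sketch of how they are computed, on toy labels rather than the paper's data:

```python
def rain_metrics(y_true, y_pred):
    """True detection rate, false alarm rate, and accuracy for a
    binary rain (1) / no-rain (0) classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    tdr = tp / (tp + fn)           # fraction of rain events detected
    far = fp / (fp + tn)           # fraction of dry cases flagged as rain
    acc = (tp + tn) / len(y_true)
    return tdr, far, acc

# Toy labels: 4 rain events (3 detected), 6 dry cases (1 false alarm)
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
print(rain_metrics(truth, pred))  # (0.75, 0.1666..., 0.8)
```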
  • Efficient Rock-Mass Point Cloud Registration Using $n$-Point Complete Graphs
    • Authors: Feipeng Wang;Jun Xiao;Ying Wang;
      Pages: 9332 - 9343
      Abstract: The surfaces of rock masses are arbitrary and complex. Moreover, the point clouds of rock-mass surfaces acquired via terrestrial laser scanning typically span large distances and have high resolutions. These characteristics make registration between scans difficult. To address these difficulties, an efficient method using $n$-point complete graphs is proposed. To handle massive point clouds, a step-by-step strategy is adopted to reduce the number of points involved in the computation. First, the Gaussian curvature of each point of the initial data is estimated, and points with low Gaussian curvature are filtered out so that only the interesting points are preserved. Second, these interesting points are clustered, and the centroid of each cluster is calculated. Finally, a descriptor is built from the $n$-point complete graph formed by each centroid and its $n-1$ nearest neighbors. By matching the descriptors generated from two point clouds, corresponding point pairs can be obtained, thus achieving alignment. In addition, this strategy inherently incorporates denoising, outlier handling, and filtration, endowing the method with strong adaptability to various conditions without incurring any additional cost. Experiments on data sets with varying degrees of outliers, noise, and overlap were conducted to demonstrate the robustness of the proposed method. The results show that, with a point span of $r \approx 1$ cm, the output root-mean-square error is around 0.5 cm, which is comparable with that of the iterative closest point (ICP) algorithm. A runtime analysis shows that the total processing time of the proposed method grows nearly linearly with increasing data size.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
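The descriptor step above can be sketched with sorted edge lengths: the complete graph on a centroid and its $n-1$ nearest neighbors has pairwise distances that are invariant to rigid motion, which is what makes descriptors matchable across scans. This is a simplification of the idea; the paper's descriptor may carry additional structure.

```python
import numpy as np

def complete_graph_descriptor(points, center_idx, n=4):
    """Sorted edge lengths of the complete graph on a point and its
    n-1 nearest neighbors: invariant to rotation and translation."""
    d = np.linalg.norm(points - points[center_idx], axis=1)
    group = points[np.argsort(d)[:n]]    # the point plus n-1 neighbors
    edges = [np.linalg.norm(group[i] - group[j])
             for i in range(n) for j in range(i + 1, n)]
    return np.sort(edges)

rng = np.random.default_rng(2)
cloud = rng.uniform(0, 10, (30, 3))

# A rigidly rotated and translated copy yields the same descriptor
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
moved = cloud @ R.T + np.array([5.0, -2.0, 1.0])
print(np.allclose(complete_graph_descriptor(cloud, 0),
                  complete_graph_descriptor(moved, 0)))  # True
```

Matching such descriptors between two scans yields candidate point correspondences from which the rigid alignment can be solved.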
  • A Data Assimilation Method for Simultaneously Estimating the Multiscale
           Leaf Area Index From Time-Series Multi-Resolution Satellite Observations
    • Authors: Xuchen Zhan;Zhiqiang Xiao;Jingyi Jiang;Hanyu Shi;
      Pages: 9344 - 9361
      Abstract: Current global leaf area index (LAI) products are generally produced from single-temporal satellite observations acquired by a single sensor. These LAI products are usually spatiotemporally discontinuous and inaccurate for some vegetation types in many areas, which limit the applications of these LAI products to the understanding of land dynamics. In this paper, a new data assimilation method was proposed to estimate multiscale and temporally continuous LAI values from multi-sensor time-series satellite observations with different spatial resolutions. An ensemble multiscale tree (EnMsT) was used to establish the conversion relationships between different spatial resolution LAI values, and dynamic models of the LAI at different spatial scales were constructed to evolve LAI at the corresponding spatial scales over time. At each time step, a multiscale Kalman filter (MKF) was introduced to fuse the predicted LAI values from the dynamic models at different spatial scales and to construct a forecasted EnMsT. When satellite observations were available, an ensemble multiscale filter (EnMsF) technique was applied to update the LAI values at each node of the EnMsT. The method was applied to estimate temporally continuous multiscale LAI values from the time series of Thematic Mapper (TM) or Enhanced Thematic Mapper Plus (ETM+) surface reflectance data and Moderate Resolution Imaging Spectroradiometer (MODIS) surface reflectance data at several sites with different vegetation types. The estimated multiscale LAI values were compared with the MODIS and GEOV2 LAI products, and the reference LAI values at the corresponding scales aggregated from the high-resolution LAI surface images. The estimated LAI values with the finest spatial resolution were also validated by ground measurements from the selected sites. 
The results show that the new method is able to simultaneously estimate temporally continuous multiscale LAI values by assimilating satellite observations with different spatial resolutions, and the estimated multiscale LAI values agree well with the reference LAI values at the corresponding scales over the selected sites. The root-mean-square error (RMSE) and coefficient of determination of the retrieved LAI values at the finest spatial scale against the ground measurements over the selected sites are 0.539 and 0.788, respectively.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
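At each time step, the method above fuses a model-predicted LAI with a newly available observation. The scalar Kalman update underlying that fusion step can be sketched as follows; this is illustrative only (the actual EnMsF operates on ensembles over a multiscale tree), and all numbers are made up.

```python
def kalman_fuse(lai_pred, var_pred, lai_obs, var_obs):
    """Scalar Kalman update: fuse a model-predicted LAI with an observed
    LAI, each carrying a variance; the result is the inverse-variance
    weighted combination of the two."""
    gain = var_pred / (var_pred + var_obs)       # Kalman gain
    lai_fused = lai_pred + gain * (lai_obs - lai_pred)
    var_fused = (1.0 - gain) * var_pred          # uncertainty shrinks
    return lai_fused, var_fused

# Hypothetical numbers: the dynamic model predicts LAI = 3.0 (variance 0.4)
# and a MODIS retrieval gives LAI = 3.6 (variance 0.2); the fused estimate
# leans toward the more certain observation.
lai, var = kalman_fuse(3.0, 0.4, 3.6, 0.2)
```

Note how the fused variance is smaller than either input variance, which is what lets the filter accumulate information over the time series.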
  • Multi-Scale and Multi-Task Deep Learning Framework for Automatic Road
           Extraction
    • Authors: Xiaoyan Lu;Yanfei Zhong;Zhuo Zheng;Yanfei Liu;Ji Zhao;Ailong Ma;Jie Yang;
      Pages: 9362 - 9377
      Abstract: Road detection and centerline extraction from very high-resolution (VHR) remote sensing imagery are of great significance in various practical applications. Road detection and centerline extraction depend on each other, to a certain extent: the road detection constrains the appearance of the centerline, and the centerline enhances the linear features of the road detection. However, most of the previous works have addressed these two tasks separately, without considering the symbiotic relationship between them, making it difficult to obtain smooth and complete roads. In this paper, a novel multi-scale and multi-task deep learning framework for automatic road extraction (MSMT-RE) is proposed to build the relationship between them and simultaneously complete the road detection and centerline extraction tasks. U-Net is selected as the basic network for multi-task learning due to its strong ability to preserve spatial details. Multi-scale feature integration is also applied in the framework to increase the robustness of the feature extraction. Meanwhile, an adaptive loss function is introduced to address two imbalance problems: roads occupy only a small percentage of the pixels in each training sample, and the positive samples of the two tasks are unbalanced. Finally, experiments were conducted on two public road data sets and two large images from Google Earth, and the proposed framework was compared with other state-of-the-art deep learning-based road extraction methods, both quantitatively and qualitatively. The proposed approach outperformed all the compared methods, confirming its advantages in automatic road extraction.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
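One common way to realize an adaptive loss for heavily imbalanced road pixels is to weight the binary cross-entropy by inverse class frequency. A hedged NumPy sketch of that idea follows; the specific weighting scheme is an assumption, not the paper's implementation.

```python
import numpy as np

def weighted_bce(y_true, y_prob, eps=1e-7):
    """Binary cross-entropy with inverse-frequency class weights, so the
    rare road pixels contribute as much to the loss as the abundant
    background pixels."""
    y_prob = np.clip(y_prob, eps, 1.0 - eps)
    pos_frac = max(float(y_true.mean()), eps)    # fraction of road pixels
    w_pos = 1.0 / pos_frac                       # up-weight the rare class
    w_neg = 1.0 / max(1.0 - pos_frac, eps)
    loss = -(w_pos * y_true * np.log(y_prob)
             + w_neg * (1.0 - y_true) * np.log(1.0 - y_prob))
    return float(loss.mean())

labels = np.array([0.0, 0.0, 0.0, 1.0])          # 25% road pixels
good = weighted_bce(labels, np.array([0.1, 0.1, 0.1, 0.9]))
bad = weighted_bce(labels, np.array([0.5, 0.5, 0.5, 0.5]))
```

Confident, correct predictions yield a lower weighted loss than uninformative ones, while the road pixel carries four times the weight of each background pixel.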
  • An Active Deep Learning Approach for Minimally Supervised PolSAR Image
           Classification
    • Authors: Haixia Bi;Feng Xu;Zhiqiang Wei;Yong Xue;Zongben Xu;
      Pages: 9378 - 9395
      Abstract: Recently, deep neural networks have received intense interest in polarimetric synthetic aperture radar (PolSAR) image classification. However, their success depends on the availability of large amounts of annotated data, which require great effort from experienced human annotators. Aiming to improve classification performance at a greatly reduced annotation cost, this paper presents an active deep learning approach for minimally supervised PolSAR image classification, which integrates active learning and a fine-tuned convolutional neural network (CNN) into a principled framework. Starting from a CNN trained using a very limited number of labeled pixels, we iteratively and actively select the most informative candidates for annotation, and incrementally fine-tune the CNN by incorporating the newly annotated pixels. Moreover, to boost the performance and robustness of the proposed method, we employ a Markov random field (MRF) to enforce class label smoothness, and a data augmentation technique to enlarge the training set. We conducted extensive experiments on four real benchmark PolSAR images, which demonstrated that our approach achieves state-of-the-art classification results with significantly reduced annotation cost.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
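The "most informative candidates" in an active learning loop are often taken to be the pixels with maximum predictive entropy. A minimal sketch of that selection step is below; entropy-based sampling is an illustrative choice, not necessarily the criterion used in the paper.

```python
import numpy as np

def select_most_informative(probs, k):
    """Pick the k pixels whose predicted class distribution has the
    highest entropy, i.e. where the current classifier is least certain.

    probs: (n_pixels, n_classes) softmax outputs.
    Returns indices of the k candidates to send to the annotator."""
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(entropy)[-k:][::-1]        # most uncertain first

preds = np.array([[0.98, 0.01, 0.01],            # confident pixel
                  [0.34, 0.33, 0.33],            # very uncertain pixel
                  [0.70, 0.20, 0.10]])
picked = select_most_informative(preds, 1)
```

The near-uniform row has the highest entropy, so it is the one selected for annotation.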
  • Efficient Narrowband RFI Mitigation Algorithms for SAR Systems With
           Reweighted Tensor Structures
    • Authors: Yan Huang;Guisheng Liao;Lei Zhang;Yijian Xiang;Jie Li;Arye Nehorai;
      Pages: 9396 - 9409
      Abstract: Radio-frequency systems, such as TV and cellular networks, severely interfere with synthetic aperture radar (SAR) systems. Narrowband radio-frequency interference (RFI) has a special low-rank property in the received signal matrix, because it behaves like a sinusoid with nearly invariant frequency as the slow time proceeds. Exploiting this property, in this paper, we divide the received signal matrix into several small matrices, in each of which the RFI is also low rank. Without losing the connection between these small matrices, we stack them into a three-mode tensor to separate the low-rank RFI tensor and recover the informative signal tensor. Previous studies employed the nuclear norm to regularize the low-rank RFI, but this norm only loosely approximates the rank function. Hence, we propose two reweighted algorithms, the reweighted tensor nuclear norm (RTNN) and the reweighted tensor Frobenius norm (RTFN) algorithms, to better approximate the rank function of a tensor and accurately extract the low-rank RFI tensor from the received signal tensor. As a result, the introduction of the tensor structure dramatically decreases the computational cost. Furthermore, the reweighted scheme helps suppress the RFI and recover the useful signal with excellent performance. Finally, real SAR data with measured RFI are employed to demonstrate the effectiveness of the proposed methods for RFI mitigation.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
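The reweighting idea can be sketched on a plain matrix: soft-threshold the singular values, then shrink the penalty on the large ones so the dominant (RFI-like) component survives thresholding while small components are suppressed. This is a simplified matrix analogue of RTNN, not the paper's tensor algorithm, and all constants are illustrative.

```python
import numpy as np

def reweighted_svt(Y, n_iter=10, eps=1e-3, tau=None):
    """Reweighted singular-value thresholding: extract a low-rank
    component from Y. On each pass, singular values that survived the
    previous threshold get a small weight (cheap to keep) and suppressed
    ones get a large weight, approximating the rank function better than
    a single nuclear-norm shrinkage."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    if tau is None:
        tau = 0.5 * s[0]                         # threshold level
    w = np.ones_like(s)
    for _ in range(n_iter):
        s_thr = np.maximum(s - tau * w, 0.0)     # weighted soft threshold
        w = 1.0 / (s_thr + eps)                  # reweight: big values cheap
        w /= w.min()                             # smallest weight -> 1
    return (U * s_thr) @ Vt                      # low-rank (RFI-like) part

# A rank-1 "RFI" pattern plus small noise: the recovered component
# collapses back to (numerically) rank 1.
rng = np.random.default_rng(0)
rfi = np.outer(np.sin(np.arange(50)), np.ones(40))
L = reweighted_svt(rfi + 0.01 * rng.standard_normal((50, 40)))
```

Subtracting the recovered low-rank part from the received data is what leaves the informative signal behind.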
  • Global Cloud Detection for CERES Edition 4 Using Terra and Aqua MODIS Data
    • Authors: Qing Z. Trepte;Patrick Minnis;Szedung Sun-Mack;Christopher R. Yost;Yan Chen;Zhonghai Jin;Gang Hong;Fu-Lung Chang;William L. Smith;Kristopher M. Bedka;Thad L. Chee;
      Pages: 9410 - 9449
      Abstract: The Clouds and Earth’s Radiant Energy System (CERES) has been monitoring clouds and radiation since 2000 using algorithms developed before 2002 for CERES Edition 2 (Ed2) products. To improve cloud amount accuracy, CERES Edition 4 (Ed4) applies revised algorithms and input data to Terra and Aqua MODerate-resolution Imaging Spectroradiometer (MODIS) radiances. The Ed4 cloud mask uses 5–7 additional channels, new models for clear-sky ocean and snow/ice-surface radiances, and revised Terra MODIS calibrations. Mean Ed4 daytime and nighttime cloud amounts exceed their Ed2 counterparts by 0.035 and 0.068. Excellent consistency between average Aqua and Terra cloud fraction is found over nonpolar regions. Differences over polar regions are likely due to unresolved calibration discrepancies. Relative to Ed2, Ed4 cloud amounts agree better with those from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO). CALIPSO comparisons indicate that Ed4 cloud amounts are as accurate as, or more accurate than, those of other available cloud mask systems. The Ed4 mask correctly identifies cloudy or clear areas 90%–96% of the time during daytime over nonpolar areas depending on the CALIPSO–MODIS averaging criteria. At night, the range is 88%–95%. Accuracy decreases over land. The polar day and night accuracy ranges are 90%–91% and 80%–81%, respectively. The mean Ed4 cloud fractions slightly exceed the average for seven other imager cloud masks. Remaining biases and uncertainties are mainly attributed to errors in Ed4 predicted clear-sky radiances. The resulting cloud fractions should help CERES produce a more accurate radiation budget and serve as part of a cloud property climate data record.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
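The quoted accuracies are agreement fractions between collocated binary masks. A trivial helper makes the metric concrete; this is illustrative only, not CERES code, and the sample footprints are made up.

```python
import numpy as np

def mask_agreement(mask_a, mask_b):
    """Fraction of collocated footprints where two binary cloud masks
    (1 = cloudy, 0 = clear) give the same answer."""
    return float(np.mean(np.asarray(mask_a) == np.asarray(mask_b)))

# 9 of 10 hypothetical footprints agree -> 0.9 (i.e. 90% accuracy).
score = mask_agreement([1, 1, 1, 0, 0, 0, 1, 0, 1, 1],
                       [1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
```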
  • Multiscale Full-Waveform Dual-Parameter Inversion Based on Total Variation
           Regularization to On-Ground GPR Data
    • Authors: Deshan Feng;Cen Cao;Xun Wang;
      Pages: 9450 - 9465
      Abstract: Full-waveform inversion (FWI) in the time domain of ground-penetrating radar (GPR) data involves a vast number of calculations; thus, it requires a large amount of memory and is difficult to run on a personal computer (PC). In this paper, GPR data are analyzed with multiscale FWI using two parameters (permittivity and conductivity) based on total variation (TV) regularization, which is implemented on a PC using a graphics processing unit (GPU) parallel acceleration strategy. The inverse problem is treated as a nonlinear optimization problem and is solved with the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) method, a quasi-Newton approach. The gradient of the objective function is calculated using the adjoint-state method, and the finite-difference method is required to solve the forward problem many times. A multiscale serial inversion strategy is applied to optimize the inversion algorithm and to decompose the inversion problem into 2–3 frequency bands, so that the search proceeds toward the global minimum instead of local minima. Taking a complex model as an example, experiments are carried out to assess the parameter adjustment factor and regularization parameter. Appropriate choices of these two parameters effectively guarantee the convergence speed and stability of the dual-parameter inversion method and improve the accuracy of GPR data inversion. Finally, FWI of the noise-free and 25-dB signal-to-noise ratio (SNR) noise data of the overthrust model is performed. The results show that the multiscale and dual-parameter inversion method proposed in this paper can provide reliable constraints, has better adaptability to noisy data, and can reliably and accurately reconstruct the distribution of the dielectric properties of the subsurface.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
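The TV-regularized objective and its gradient, the two ingredients a quasi-Newton solver such as L-BFGS needs, can be sketched in 1-D. Plain gradient descent stands in for L-BFGS below, the "model" is a toy stand-in for a permittivity profile, and all constants are illustrative.

```python
import numpy as np

def tv_objective_grad(m, d, lam, eps=1e-2):
    """J(m) = 0.5*||m - d||^2 + lam * sum_i sqrt((m[i+1]-m[i])^2 + eps):
    data misfit plus smoothed total variation, returned together with the
    gradient. The TV term damps oscillatory noise while preserving sharp
    jumps in the reconstructed parameters."""
    diff = np.diff(m)
    smooth = np.sqrt(diff**2 + eps)              # smoothed |m[i+1]-m[i]|
    J = 0.5 * np.sum((m - d)**2) + lam * smooth.sum()
    g = m - d
    t = diff / smooth                            # d(TV)/d(diff)
    g[:-1] -= lam * t
    g[1:] += lam * t
    return J, g

# Noisy step profile; gradient descent stands in for L-BFGS.
rng = np.random.default_rng(1)
true_model = np.r_[np.zeros(20), np.ones(20)]
data = true_model + 0.2 * rng.standard_normal(40)
m = data.copy()
for _ in range(300):
    _, g = tv_objective_grad(m, data, lam=0.2)
    m -= 0.05 * g
```

The smoothing constant eps keeps the TV term differentiable at zero jumps, which a gradient-based solver requires.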
  • Lifetime Absolute Calibration of the EO-1 Hyperion Sensor and its
           Validation
    • Authors: Xin Jing;Larry Leigh;Dennis Helder;Cibele Teixeira Pinto;David Aaron;
      Pages: 9466 - 9475
      Abstract: The Earth-Observing One (EO-1) Hyperion sensor was decommissioned on March 20, 2017. Analysis of Libya 4 Pseudo-Invariant Calibration Site (PICS) image data acquired from 2004 until final decommissioning indicated statistically significant drifts in sensor response in bands 8 to 16 (426.82, 436.99, 447.17, 457.34, 467.52, 477.69, 487.87, 498.04, and 508.22 nm) and bands 206 (2213.93 nm), 209 (2244.22 nm), and 210 (2254.22 nm). The estimated yearly drift in these bands ranges between −0.136% and −0.049%. After correcting for the estimated drift, the absolute radiometric calibration of the sensor was evaluated through vicarious reflectance-based calibrations performed at the South Dakota State University (SDSU) test site and the Radiometric Calibration Network (RadCalNet) Railroad Valley site using data from 2002 to 2015. Calibration correction coefficients, with gains as large as 1.40 and biases as large as 0.132, were found for all bands except the absorption bands (890–980, 1090–1180, 1305–1520, and 1750–2050 nm). Finally, the yearly drift and calibration correction coefficients were validated by comparing the drift- and calibration-corrected Hyperion data, aggregated to multispectral bands, with Landsat 7 data. The validation showed that, after drift and calibration coefficient correction, no significant gain or bias remains across different test sites at different signal levels. The drift and calibration correction coefficients of each band are provided in this paper.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
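Drift estimation over a pseudo-invariant site reduces to fitting a linear trend to the time series and normalizing it out: the target is assumed stable, so any trend is attributed to the sensor. A hedged sketch with synthetic numbers follows; the actual PICS processing involves BRDF and atmospheric corrections not shown here.

```python
import numpy as np

def fit_yearly_drift(t_years, reflectance):
    """Fit a linear trend to a time series over a pseudo-invariant site,
    report the drift in %/year relative to the epoch-0 response, and
    return the series with the trend normalized out."""
    slope, intercept = np.polyfit(t_years, reflectance, 1)
    trend = slope * t_years + intercept
    drift_pct_per_year = 100.0 * slope / intercept
    corrected = reflectance / (trend / trend[0])   # divide out the drift
    return drift_pct_per_year, corrected

# Synthetic series: a stable 0.50 target viewed by a sensor whose
# response decays by 0.1% per year (all numbers made up).
t = np.arange(0.0, 13.0)                           # years since first epoch
measured = 0.50 * (1.0 - 0.001 * t)
drift, corrected = fit_yearly_drift(t, measured)
```

After the correction, the series is flat again, which is the condition a pseudo-invariant site is expected to satisfy.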
  • Seismic Signal Denoising and Decomposition Using Deep Neural Networks
    • Authors: Weiqiang Zhu;S. Mostafa Mousavi;Gregory C. Beroza;
      Pages: 9476 - 9488
      Abstract: Frequency filtering is widely used in routine processing of seismic data to improve the signal-to-noise ratio (SNR) of recorded signals and, by doing so, to improve subsequent analyses. In this paper, we develop a new denoising/decomposition method, DeepDenoiser, based on a deep neural network. This network is able to simultaneously learn a sparse representation of data in the time–frequency domain and a non-linear function that maps this representation into masks that decompose input data into a signal of interest and noise (defined as any non-seismic signal). We show that DeepDenoiser achieves impressive denoising of seismic signals even when the signal and noise share a common frequency band. Because the noise statistics are automatically learned from data and require no assumptions, our method properly handles white noise, a variety of colored noise, and non-earthquake signals. DeepDenoiser can significantly improve the SNR with minimal changes in the waveform shape of interest, even in the presence of high noise levels. We demonstrate the effect of our method on improving earthquake detection. There are clear applications of DeepDenoiser to seismic imaging, micro-seismic monitoring, and the preprocessing of ambient noise data. Its potential uses are not limited to these cases, or even to earthquake data; the approach can be adapted to diverse signals in other settings.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
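DeepDenoiser learns its masks from data, but the masking mechanism itself can be illustrated with a hand-crafted soft mask on a plain FFT. This is a crude linear stand-in for the learned time-frequency masks, with all parameters made up.

```python
import numpy as np

def mask_denoise(noisy, noise_floor):
    """Soft spectral masking: attenuate every frequency bin by the share
    of its magnitude that an assumed noise floor can explain; bins at or
    below the floor are zeroed entirely."""
    spec = np.fft.rfft(noisy)
    mag = np.abs(spec)
    mask = np.clip(1.0 - noise_floor / (mag + 1e-12), 0.0, 1.0)
    return np.fft.irfft(mask * spec, n=len(noisy))

# A 12-Hz tone buried in white noise.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
clean = np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.3 * rng.standard_normal(512)
denoised = mask_denoise(noisy, noise_floor=0.3 * np.sqrt(512))
```

Here the noise floor is assumed known; the point of the learned approach is precisely that such statistics are inferred from data instead.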
  • Lidar Remote Sensing of Seawater Optical Properties: Experiment and Monte
           Carlo Simulation
    • Authors: Dong Liu;Peituo Xu;Yudi Zhou;Weibiao Chen;Bing Han;Xiaolei Zhu;Yan He;Zhihua Mao;Chengfeng Le;Peng Chen;Haochi Che;Zhipeng Liu;Qun Liu;Qingjun Song;Sijie Chen;
      Pages: 9489 - 9498
      Abstract: Detecting the vertical profile of optical properties is an important task in the remote sensing of the upper ocean, especially for 3-D reconstruction. Ocean color remote sensing can only provide surface information, while the light detection and ranging (lidar) technique can provide depth-resolved data. Lidar can provide global-scale observations of the upper ocean for days and nights with minimal atmospheric correction errors. Unfortunately, due to the strong multiple scattering effects that occur when light propagates in seawater, the simple lidar equation may cause some deviations between the actual measurements and the simulation of the lidar signals. In this paper, we present a shipborne oceanic lidar, which was developed to detect the optical properties of seawater. For evaluating the performance of the lidar system, a Monte Carlo (MC) model was established to simulate lidar signals based on the simultaneous in situ inherent optical properties of seawater. The lidar measurements and the MC simulation can provide both the lidar signals and the retrieved lidar attenuation coefficient α. The results of the comparison indicate that the lidar-measured signals correspond well with the MC-simulated signals at different experiment stations in the Yellow Sea and at various receiving fields of view (FOVs). We also observed strong correlations between the lidar-measured α and MC-simulated α at different stations (r = 0.95) and at various FOVs (r = 0.96). The results indicate the reliability of the developed lidar system.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
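Under the simple (single-scattering) lidar equation mentioned above, the attenuation coefficient follows from the slope of the log of the range-corrected signal. The minimal sketch below ignores exactly the multiple-scattering effects that the paper's Monte Carlo model is built to capture; the water-column numbers are synthetic.

```python
import numpy as np

def retrieve_alpha(depth, signal):
    """Slope-method retrieval: under S(z) ~ exp(-2*alpha*z), the lidar
    attenuation coefficient is -0.5 times the slope of ln S versus z
    (the factor 2 accounts for the two-way path)."""
    slope = np.polyfit(depth, np.log(signal), 1)[0]
    return -0.5 * slope

# Synthetic water column with alpha = 0.12 per meter.
z = np.linspace(1.0, 20.0, 40)
S = 5.0 * np.exp(-2.0 * 0.12 * z)
alpha = retrieve_alpha(z, S)
```

In real seawater, multiple scattering flattens the measured decay, which is why the retrieved α depends on the receiving field of view.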
  • Portability Study of an OpenCL Algorithm for Automatic Target Detection in
           Hyperspectral Images
    • Authors: Sergio Bernabé;Carlos García;Francisco D. Igual;Guillermo Botella;Manuel Prieto-Matias;Antonio Plaza;
      Pages: 9499 - 9511
      Abstract: In the last decades, the problem of target detection has received considerable attention in remote sensing applications. When this problem is tackled using hyperspectral images with hundreds of bands, the use of high-performance computing (HPC) is essential. One of the most popular algorithms in the hyperspectral image analysis community for this purpose is the automatic target detection and classification algorithm (ATDCA). Previous research has already investigated the mapping of ATDCA on HPC platforms such as multicore processors, graphics processing units (GPUs), and field-programmable gate arrays (FPGAs), showing impressive speedup factors (after careful fine-tuning) that allow for its exploitation in time-critical scenarios. However, the lack of standardization resulted in most implementations being too specific to a given architecture, eliminating (or at least making extremely difficult) code reusability across different platforms. In order to address this issue, we present a portability study of an implementation of ATDCA developed using the open computing language (OpenCL). We focus on cross-platform parameters such as performance, energy consumption, and code design complexity, as compared to previously developed (hand-tuned) implementations. Our portability study analyzes different strategies to expose data parallelism as well as enable the efficient exploitation of complex memory hierarchies in heterogeneous devices. We also conduct an assessment of energy consumption and discuss metrics to analyze the quality of our code. The conducted experiments, using synthetic and real hyperspectral data sets collected by the Hyperspectral Digital Imagery Collection Experiment (HYDICE) and NASA’s Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS), demonstrate, for the first time in the literature, that portability across different HPC platforms can be achieved for real-time target detection in hyperspectral missions.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)
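The core of ATDCA is an orthogonal-projection loop: repeatedly pick the pixel with the largest energy in the subspace orthogonal to the targets already found. A serial NumPy sketch of that loop follows (the paper's contribution is the OpenCL parallelization, which is not shown); the toy scene is made up.

```python
import numpy as np

def atdca_targets(pixels, n_targets):
    """Greedy target extraction: pick the pixel with the largest energy,
    deflate its spectral direction from every pixel, and repeat, so each
    new target is maximal in the orthogonal complement of the previous
    ones."""
    residual = np.array(pixels, dtype=float)
    found = []
    for _ in range(n_targets):
        idx = int(np.argmax(np.sum(residual**2, axis=1)))
        found.append(idx)
        u = residual[idx] / np.linalg.norm(residual[idx])
        residual = residual - np.outer(residual @ u, u)  # project out u
    return found

# Toy scene: 8 dim background pixels plus two spectrally distinct targets.
background = 0.1 * np.ones((8, 4))
scene = np.vstack([background,
                   [[5.0, 0.0, 0.0, 0.0]],
                   [[0.0, 4.0, 0.0, 0.0]]])
targets = atdca_targets(scene, 2)
```

The projection in each iteration is a dense matrix-vector workload over all pixels, which is what makes the algorithm attractive for GPUs and other OpenCL devices.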
  • Corrections to “An Efficient Preconditioner for 3D Finite Difference
           Modeling of the Electromagnetic Diffusion Process in the Frequency
           Domain” [DOI: 10.1109/TGRS.2019.2937742]
    • Authors: Jian Li;Jianxin Liu;Gary D. Egbert;Rong Liu;Rongwen Guo;Kejia Pan;
      Pages: 9512 - 9512
      Abstract: An equation reference in the Four-Color Cellblock Gauss-Seidel Preconditioner section of the above article is incorrect. We correct it by: 1) changing “violation of (8)” to “violation of (6)” and 2) changing “free condition in (8)” to “free condition in (6).” This error does not affect the text or results presented in the article.
      PubDate: Nov. 2019
      Issue No: Vol. 57, No. 11 (2019)