Geoscience and Remote Sensing, IEEE Transactions on
Journal Prestige (SJR): 2.649
Citation Impact (CiteScore): 6
Number of Followers: 205  
 
  Hybrid journal (it can contain Open Access articles)
ISSN (Print) 0196-2892
Published by IEEE
  • IEEE Transactions on Geoscience and Remote Sensing publication information
    • Abstract: Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • IEEE Transactions on Geoscience and Remote Sensing information for authors
    • Abstract: These instructions give guidelines for preparing papers for this publication. Presents information for authors publishing in this journal.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • IEEE Transactions on Geoscience and Remote Sensing institutional listings
    • Abstract: Presents a listing of institutions relevant for this issue of the publication.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • The Parallel SBAS Approach for Sentinel-1 Interferometric Wide Swath
           Deformation Time-Series Generation: Algorithm Description and Products
           Quality Assessment
    • Authors: Michele Manunta;Claudio De Luca;Ivana Zinno;Francesco Casu;Mariarosaria Manzo;Manuela Bonano;Adele Fusco;Antonio Pepe;Giovanni Onorato;Paolo Berardino;Prospero De Martino;Riccardo Lanari;
      Pages: 6259 - 6281
      Abstract: We present an advanced differential synthetic aperture radar (SAR) interferometry (DInSAR) processing chain, based on the Parallel Small BAseline Subset (P-SBAS) technique, for the efficient generation of deformation time series from Sentinel-1 (S-1) interferometric wide (IW) swath SAR data sets. We first discuss an effective solution for the generation of high-quality interferograms, which properly accounts for the peculiarities of the terrain observation with progressive scans (TOPS) acquisition mode used to collect S-1 IW SAR data. These data characteristics are also properly accounted for within the developed processing chain, taking full advantage of the burst partitioning. Indeed, such a data structure represents a key element in the proposed P-SBAS implementation of the S-1 IW processing chain, whose migration into a cloud computing (CC) environment is also envisaged. An extensive experimental analysis, which allows us to assess the quality of the obtained interferometric products, is presented. To do this, we apply the developed S-1 IW P-SBAS processing chain to the overall archive acquired from descending orbits during the March 2015–April 2017 time span over the whole Italian territory, consisting of 2740 S-1 slices. In particular, the quality of the final results is assessed through a large-scale comparison with the GPS measurements relevant to nearly 500 stations. The mean standard deviation value of the differences between the DInSAR and the GPS time series (projected in the radar line of sight) is less than 0.5 cm, thus confirming the effectiveness of the implemented solution. Finally, a discussion about the performance achieved by migrating the developed processing chain within the Amazon Web Services CC environment is addressed, highlighting that a two-year data set relevant to a standard S-1 IW slice can be reliably processed in about 30 h. The presented results demonstrate the capability of the implemented P-SBAS approach to efficiently and effectively process large S-1 IW data sets relevant to extended portions of the Earth's surface, paving the way to the systematic generation of advanced DInSAR products to monitor ground displacements at a very wide spatial scale.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
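For context on the validation step described in the entry above, the following minimal sketch shows one way a GPS east-north-up displacement series could be projected onto the radar line of sight and compared with a DInSAR series through the standard deviation of their differences. The LOS unit-vector convention, the angles, and all array values are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

def project_gps_to_los(d_east, d_north, d_up, inc_deg, heading_deg):
    """Project GPS ENU displacements onto the radar line of sight (LOS).

    inc_deg: incidence angle; heading_deg: satellite heading (both degrees).
    Sign conventions vary between processors; a common right-looking one is assumed here.
    """
    inc = np.deg2rad(inc_deg)
    head = np.deg2rad(heading_deg)
    los = np.array([-np.sin(inc) * np.cos(head),   # assumed LOS unit-vector convention
                     np.sin(inc) * np.sin(head),
                     np.cos(inc)])
    enu = np.vstack([d_east, d_north, d_up])       # 3 x N time samples
    return los @ enu                               # LOS-projected series, length N

# Toy comparison of a synthetic DInSAR series against a GPS series projected in LOS.
rng = np.random.default_rng(0)
t = np.arange(100)
gps_e, gps_n, gps_u = 0.001 * t, 0.0005 * t, -0.002 * t          # metres (hypothetical)
gps_los = project_gps_to_los(gps_e, gps_n, gps_u, inc_deg=39.0, heading_deg=-168.0)
dinsar_los = gps_los + rng.normal(0, 0.004, size=t.size)          # synthetic DInSAR series

diff = dinsar_los - gps_los
print(f"std of DInSAR-GPS differences: {100 * diff.std():.2f} cm")
```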
       
  • Estimation of Source Wavelet From Seismic Traces Using Groebner Bases
    • Authors: Karthikeyan Elumalai;Brejesh Lall;R. K. Patney;
      Pages: 6282 - 6291
      Abstract: An accurate and effective seismic wavelet estimation technique is of great significance in seismic data processing for analyzing the earth’s subsurface layer information. The seismic wavelet to be determined is modeled as a moving average (MA) process and assumed to be driven by a zero mean, non-Gaussian, statistically independent, and identically distributed (IID) process. In order to estimate the MA model parameters from the observed noisy seismic signal, we pose this as a blind system identification (BSI) problem. In the BSI, a set of multivariate polynomial equations is obtained by matching higher order cumulants of the observed noisy data with higher order moments of the blind system’s impulse response. The Groebner bases that form the solution to this set of equations are obtained using the proposed algorithm. Numerical results demonstrate that the proposed method has a lower estimation error as compared to the previously reported methods.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
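As a toy illustration of the solution machinery named in the entry above, the sketch below builds a small multivariate polynomial system of the kind produced by cumulant matching and solves it through a Groebner basis with SymPy. The two polynomials are hypothetical placeholders, not the paper's actual cumulant equations.

```python
import sympy as sp

# Toy stand-in for a cumulant-matching system in two unknown MA coefficients h1, h2.
# In the paper, the equations come from matching higher-order cumulants of the
# observed trace with higher-order moments of the MA impulse response.
h1, h2 = sp.symbols('h1 h2', real=True)
eqs = [h1**2 + h2**2 - sp.Rational(5, 4),   # hypothetical constraint
       h1 * h2 - sp.Rational(1, 2)]         # hypothetical constraint

G = sp.groebner(eqs, h1, h2, order='lex')
print("Groebner basis:", list(G))

# Under lex order the last basis polynomial is univariate in h2, so the system
# can be solved by back-substitution; sympy's solve does this directly.
for sol in sp.solve(list(G), [h1, h2], dict=True):
    print(sol)
```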
       
  • A Shadowing Mitigation Approach for Sea State Parameters Estimation Using
           X-Band Remotely Sensing Radar Data in Coastal Areas
    • Authors: Wendy Navarro;Juan C. Velez;Alejandro Orfila;Serguei Lonin;
      Pages: 6292 - 6310
      Abstract: A novel procedure based on filtering and interpolation approaches is proposed to estimate the sea state parameters, including significant wave height, peak wave direction, peak period, peak wavenumber, and peak wavelength, in shallow waters using X-band marine radars. The method compensates for the distortions introduced by the radar acquisition process and the power decay of the radar signal with range by applying image-enhancement techniques instead of empirical and semiempirical calibration methods that use the signal-to-noise ratio and in situ measurements as external references. To determine the threshold value for the interpolation approach, the influence of the antenna height on shadowing modulation effects is examined through an analysis of variance (ANOVA) that uses data from two X-band radars deployed at 10 and 20 m above MSL. The ANOVA results reveal that it is possible to explain the increase of the intensities affected by shadowing with range using an adaptive threshold retrieved from a third-order polynomial function of the mean radar cross section (RCS). Finally, an X-band radar is installed at 13 m above MSL to test the proposed technique. During the measurements, the wind and wave conditions varied, and the antenna-look direction remained constant. Errors for $H_{s}$, $\theta_{p}$, and $T_{p}$, calculated as the difference between estimated and true data, show a mean bias and a relative value of 0.05 m (2.72%), 1.52° (5.94%), and 0.15 s (1.67%), respectively. The directional and wave energy spectra derived from the radar estimates, acoustic wave and current (AWAC) and ADV records, as well as the JONSWAP formulation are presented to illustrate the improvement resulting from the proposed method over the frequency domain.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
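The adaptive threshold described above is a third-order polynomial function of the mean radar cross section; the short sketch below illustrates such a fit with NumPy on synthetic (mean RCS, threshold) pairs. All numbers are invented for illustration.

```python
import numpy as np

# Synthetic stand-in for (mean RCS, empirically chosen shadowing threshold) pairs per range bin.
rng = np.random.default_rng(1)
mean_rcs = np.linspace(-30.0, -5.0, 50)                      # dB, hypothetical range
true_threshold = 0.002 * mean_rcs**3 + 0.1 * mean_rcs**2 + 1.5 * mean_rcs + 40.0
observed = true_threshold + rng.normal(0, 0.5, mean_rcs.size)

# Third-order polynomial fit: threshold(RCS) = c3*RCS^3 + c2*RCS^2 + c1*RCS + c0.
coeffs = np.polyfit(mean_rcs, observed, deg=3)
adaptive_threshold = np.polyval(coeffs, mean_rcs)

print("fitted coefficients:", np.round(coeffs, 4))
print("rms fit error:", np.sqrt(np.mean((adaptive_threshold - observed) ** 2)))
```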
       
  • A Transverse Spectrum Deconvolution Technique for MIMO Short-Range Fourier
           Imaging
    • Authors: Thomas Fromenteze;Okan Yurduseven;Fabien Berland;Cyril Decroze;David R. Smith;Alexander G. Yarovoy;
      Pages: 6311 - 6324
      Abstract: The growing need for high-performance imaging tools for terrorist threat detection and medical diagnosis has led to the development of new active architectures in the microwave and millimeter range. Notably, multiple-input multiple-output systems can meet the resolution constraints imposed by these applications by creating large, synthetic radiating apertures with a limited number of antennas used independently in transmitting and receiving signals. However, the implementation of such systems is coupled with strong constraints in the software layer, requiring the development of reconstruction techniques capable of interrogating the observed scene by optimizing both the resolution of images reconstructed in two or three dimensions and the associated computation times. In this paper, we first review the formalisms and constraints associated with each application by taking stock of efficient processing techniques based on spectral decompositions, and then, we present a new technique called the transverse spectrum deconvolution range migration algorithm allowing us to carry out reconstructions that are both faster and more accurate than with conventional Fourier domain processing techniques. This paper is particularly relevant to the development of new computational imaging tools that require, even more pronouncedly than in the case of conventional architectures, fast image computing techniques despite a very large number of radiating elements interrogating the scene to be imaged.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Effects of Wind Wave Spectra on Radar Backscatter From Sea Surface at
           Different Microwave Bands: A Numerical Study
    • Authors: Dengfeng Xie;Kun-Shan Chen;Xiaofeng Yang;
      Pages: 6325 - 6334
      Abstract: The wind wave spectrum describes the quasi-periodic nature of the ocean surface oscillations and plays an indispensable role in the study of microwave electromagnetic scattering from the sea surface. A reliable spectrum model suitable for radar cross section (RCS) predictions at different radar frequencies is desired. This paper evaluates the performance of five common spectrum models (i.e., the Fung spectrum, Durden–Vesecky spectrum, Apel spectrum, Elfouhaily spectrum, and the newest version of the Hwang spectrum, H18) in normalized radar backscattering cross section (NRBCS) simulations based on the advanced integral equation model (AIEM) at L-, C-, X-, and Ku-bands versus incidence angle, wind direction, and wind speed, using model and measured data for validation. The results indicate that no single wave spectrum is satisfactory at all four radar frequencies; e.g., the Apel and H18 spectra are better for L- and C-bands, the Apel spectrum for X-band, and the Elfouhaily and H18 spectra for Ku-band. Given this, three average composite spectrum models are constructed using different spectral models (i.e., all five spectra, Apel + Elfouhaily + H18, and Apel + H18) to simulate NRBCSs, in the same manner as for the individual spectrum models. It is concluded that the combination of the Apel and H18 spectra overall performs best among the individual and other composite spectra in like-polarized NRBCSs versus incidence angles, wind directions, and wind speeds; for wind speeds greater than 30 m/s, the combination of the five spectra works well at Ku-band.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
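The composite spectra mentioned above are built by averaging individual spectrum models; the sketch below shows the averaging step on placeholder omnidirectional spectra. The placeholder functions are not the actual Apel, Elfouhaily, or H18 formulations.

```python
import numpy as np

# Placeholder omnidirectional spectra S(k); the real models (Apel, Elfouhaily, H18)
# are far more involved and depend on wind speed and direction.
def spectrum_a(k):
    return 0.005 * k**-3

def spectrum_b(k):
    return 0.004 * k**-3 * np.exp(-0.1 / k)

def spectrum_c(k):
    return 0.006 * k**-2.9

k = np.logspace(-2, 3, 200)            # wavenumber grid, rad/m
models = [spectrum_a(k), spectrum_b(k), spectrum_c(k)]

# Average composite spectrum, analogous to combining e.g. Apel + H18.
composite = np.mean(models, axis=0)
print("composite spectrum at k = 100 rad/m:", composite[np.argmin(np.abs(k - 100))])
```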
       
  • OS-Flow: A Robust Algorithm for Dense Optical and SAR Image Registration
    • Authors: Yuming Xiang;Feng Wang;Ling Wan;Niangang Jiao;Hongjian You;
      Pages: 6335 - 6354
      Abstract: Coregistration of high-resolution optical and synthetic aperture radar (SAR) images is still an ongoing problem due to different imaging mechanisms of two kinds of remote sensing images. In this paper, we propose an optical flow-based algorithm to solve the dense registration problem [optical-to-SAR (OS)-flow]. Unlike parametric registration methods that estimate a transformation model, OS-flow aims to find pixelwise correspondences between optical and SAR images. Specifically, two frameworks of OS-flow, a global method and a local method, are proposed. Due to the drastic differences between SAR and optical images, two dense feature descriptors, rather than the raw intensities, are utilized to retain the constancy assumption in optical flow estimation. Considering the inherent properties of the two images, two dense descriptors are constructed using consistent gradient computation. After satisfying the constancy assumption, the global method estimates the flow map by optimizing an objective function, and the local method iteratively estimates the flow vector in a local neighborhood. Both methods use the coarse-to-fine matching strategy to address large displacements and reduce the computational cost. Experiments on several optical-to-SAR image pairs in various scenarios show that the proposed methods have a strong ability to match across optical and SAR images and outperform other state-of-the-art methods in terms of registration accuracy.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • MODIS Reflective Solar Bands On-Orbit Calibration and Performance
    • Authors: Xiaoxiong Xiong;Amit Angal;Kevin A. Twedt;Hongda Chen;Daniel Link;Xu Geng;Emily Aldoretta;Qiaozhen Mu;
      Pages: 6355 - 6371
      Abstract: The design of the Moderate-Resolution Imaging Spectroradiometer (MODIS) instrument was driven by the scientific community’s desire to have near-daily global coverage at moderate resolution (~1 km) with comprehensive spectral coverage from visible to long-wave infrared wavelengths. Since their launches in 1999 and 2002, respectively, the Terra and Aqua MODIS instruments have made continuous global observations and generated numerous data products to help users worldwide with their studies of the Earth’s system and its short- and long-term changes. The 20 reflective solar bands (RSBs) with wavelengths from 0.41 to $2.2~\mu\text{m}$ collect data at three nadir spatial resolutions: 250 m, 500 m, and 1 km. The solar diffuser (SD) coupled with the SD stability monitor (SDSM) provides a reflectance-based calibration on-orbit. In addition, lunar observations and response trends from pseudoinvariant desert sites are used to characterize the response versus scan-angle changes on-orbit. This paper provides a brief overview of MODIS RSB calibration algorithms, as implemented in the latest Level 1B version 6.1, operational activities, on-orbit performance, remaining challenges, and potential improvements. Results from the SD and SDSM measurements show a wavelength and mirror-side-dependent degradation in RSB responses, with the largest degradation at the shortest wavelengths, particularly for Terra MODIS. Aqua MODIS has experienced far less degradation of its optics and on-board calibrators compared with Terra MODIS, resulting in an overall better performance. With the exception of Aqua band 6, there have been no new noisy or inoperable detectors in the RSB of either instrument during postlaunch operations. As the instruments age and continue to endure the space environment, the detectors and the optical systems degrade. The challenges associated with incorporating these on-orbit changes to ensure a production of high-quality calibrated L1B data products are also discussed in this paper.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • High-Frequency Ionospheric Monitoring System for Over-the-Horizon Radar in
           Canada
    • Authors: Thayananthan Thayaparan;Dale Dupont;Yousef Ibrahim;Ryan Riddolls;
      Pages: 6372 - 6384
      Abstract: The Canadian Department of National Defence (DND) is developing an experimental over-the-horizon radar (OTHR) with the potential for surveillance of Canada. Because of dynamically changing ionospheric conditions in the Earth’s high-latitude and polar regions, the operating OTHR transmission frequency and elevation angle need to be adjusted regularly to maintain constant illumination of downrange targets. In this paper, the feasible operating frequency and elevation angle radar parameters are determined for short- and long-range OTHR operation using 3-D ionosphere ray-tracing simulations. Together, the collection of all feasible radar configurations forms a characteristic profile which shifts and deforms as factors such as the time of day, season, and solar activity are varied. The range of operating frequencies and elevation angles obtained from this paper will aid developing the transmitter and receiver antenna layouts for experimental OTHR configurations in the poorly understood high-latitude and polar regions. These methods will also help to form the basis of the frequency monitoring systems (FMS) that will control the configuration of these polar OTHR systems in real time.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • A Ship ISAR Imaging Algorithm Based on Generalized Radon-Fourier Transform
           With Low SNR
    • Authors: Zegang Ding;Tianyi Zhang;Yong Li;Gen Li;Xichao Dong;Tao Zeng;Meng Ke;
      Pages: 6385 - 6396
      Abstract: Existing ship inverse synthetic aperture radar (ISAR) imaging algorithms are not applicable when the signal-to-noise ratio (SNR) is low, because the translational motion cannot then be well compensated by existing algorithms. To achieve ship ISAR imaging with low SNR, a ship ISAR imaging algorithm based on the generalized Radon-Fourier transform (GRFT) is proposed in this paper. Considering not only the rotational motion but also the translational motion between the radar and the ship, the proposed algorithm uses the GRFT to simultaneously compensate the time-variant range envelopes and the Doppler phase. Thus, the signal coherence is fully utilized, and the coherent integration of the ship’s multicomponent echo signal is realized. Subsequently, to overcome the problem of the heavy computational load and improve the efficiency of the proposed algorithm, a scheme of cascaded GRFTs that consists of a coarse GRFT and a subsequent fine GRFT is adopted. The coarse GRFT, with large search ranges and intervals, is aimed at obtaining the real ranges of the ship scatter points’ motion parameters. Based on the coarse GRFT result, the fine GRFT, with small search ranges and intervals, is performed to efficiently obtain the coherent integration result. Then, based on the coherent integration result, constant false alarm rate (CFAR) detection is performed to obtain the desired scatter points and their amplitudes and motion parameters, and the multicomponent signal is reconstructed. Finally, based on the reconstructed multicomponent signal, the high-quality instantaneous ship ISAR image can be obtained. Computer simulations and experiment results validate the effectiveness of the proposed algorithm.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
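The cascaded coarse/fine search scheme described above can be pictured with a generic two-stage grid search that maximizes a coherent-integration gain over a single motion parameter, as in the sketch below. The signal model, metric, and search intervals are simplified assumptions and do not reproduce the actual GRFT.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 256)
wavelength = 0.03                    # m (X-band-like, hypothetical)
true_v = 7.3                         # m/s, hypothetical radial velocity of a scatterer
echo = np.exp(4j * np.pi * true_v * t / wavelength) + \
       0.5 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))

def integration_gain(v):
    """Magnitude of the coherent sum after compensating a candidate velocity v."""
    return np.abs(np.sum(echo * np.exp(-4j * np.pi * v * t / wavelength)))

def grid_search(center, half_width, step):
    grid = np.arange(center - half_width, center + half_width + step, step)
    return grid[int(np.argmax([integration_gain(v) for v in grid]))]

# Coarse stage: wide search range, larger interval; fine stage: narrow range, small interval.
# (In the actual cascaded GRFT the coarse stage tolerates much larger intervals because it
# works on range-envelope alignment rather than on the full phase history.)
coarse = grid_search(center=0.0, half_width=20.0, step=0.01)
fine = grid_search(center=coarse, half_width=0.02, step=0.001)
print(f"coarse estimate: {coarse:.2f} m/s, fine estimate: {fine:.3f} m/s")
```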
       
  • CoSMIR Performance During the GPM OLYMPEX Campaign
    • Authors: Rachael A. Kroodsma;Matthew A. Fritts;Jared F. Lucey;Mathew R. Schwaller;Troy J. Ames;Caitlyn M. Cooke;Lawrence M. Hilliard;
      Pages: 6397 - 6407
      Abstract: The airborne Conical Scanning Millimeter-wave Imaging Radiometer (CoSMIR) participated in the Global Precipitation Measurement (GPM) Olympic Mountains Experiment (OLYMPEX) from November to December 2015 with great success. With channels similar to those of the GPM Microwave Imager (GMI) at 89–183 GHz, CoSMIR served as a proxy for GMI by flying onboard the DC-8 aircraft for a total of 17 science flights, collecting over 72 h of observations. The high-quality, calibrated brightness temperature data set is the result of several improvements made to CoSMIR prior to OLYMPEX to make the instrument more reliable. This paper describes these improvements and gives a detailed summary of the CoSMIR measurements obtained from OLYMPEX. CoSMIR experienced minor performance issues during the campaign; these were not excessive and only resulted in a loss of approximately 4 h of data for the entire campaign. The performance issues are discussed, along with how they were mitigated to achieve a quality data set. Comparisons of CoSMIR and GMI observations are presented to show that the CoSMIR measurements agree well with GMI. The CoSMIR data set is publicly available as a part of the OLYMPEX data suite and can reliably be used in GPM algorithm development and related studies.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Sentinel-2 Sharpening Using a Reduced-Rank Method
    • Authors: Magnus O. Ulfarsson;Frosti Palsson;Mauro Dalla Mura;Johannes R. Sveinsson;
      Pages: 6408 - 6420
      Abstract: Recently, the Sentinel-2 (S2) satellite constellation was deployed for mapping and monitoring the Earth environment. Images acquired by the sensors mounted on the S2 platforms have three levels of spatial resolution: 10, 20, and 60 m. In many remote sensing applications, the availability of images at the highest spatial resolution (i.e., 10 m for S2) is often desirable. This can be achieved by generating a synthetic high-resolution image through data fusion. To this end, researchers have proposed techniques exploiting the spectral/spatial correlation inherent in multispectral data to sharpen the lower resolution S2 bands to 10 m. In this paper, we propose a novel method that formulates the sharpening process as a solution to an inverse problem. We develop a cyclic descent algorithm called S2Sharp and an associated tuning parameter selection algorithm based on generalized cross validation and Bayesian optimization. The tuning parameter selection method is evaluated on a simulated data set. The effectiveness of S2Sharp is assessed experimentally by comparisons to state-of-the-art methods using both simulated and real data sets.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Robust Band-Dependent Spatial-Detail Approaches for Panchromatic
           Sharpening
    • Authors: Gemine Vivone;
      Pages: 6421 - 6433
      Abstract: Pansharpening refers to the fusion of a multispectral (MS) image, which has a finer spectral resolution but coarser spatial resolution, with a panchromatic (PAN) image. The classical pansharpening problem can be dealt with using component substitution or multiresolution analysis techniques. One of the most notable approaches in the former class is the band-dependent spatial-detail (BDSD) method. It has shown state-of-the-art performance, in particular, when the fusion of four-band data sets is addressed. However, new sensors, such as the WorldView-2/-3 ones, usually acquire MS images with more than four spectral bands to be fused with the PAN image. The BDSD method has shown limitations in performance in these cases. Thus, in this paper, several BDSD-based approaches are proposed to solve this issue, making the BDSD robust with respect to the spectral bands to be fused. The experimental results, conducted both at reduced and at full resolution on four real data sets acquired by the IKONOS, QuickBird, WorldView-2, and WorldView-3 sensors, demonstrate the validity of the proposed approaches against the benchmark.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
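To illustrate the band-dependent detail-injection idea underlying BDSD (not Vivone's robust variants themselves), the sketch below upsamples the MS bands, extracts PAN details, and injects them with per-band least-squares gains. The function names, the Gaussian low-pass choice, and the toy data are assumptions.

```python
import numpy as np
from scipy import ndimage

def bdsd_like_sharpen(ms_lr, pan, ratio=4, sigma=1.0):
    """Band-dependent detail injection (simplified BDSD-style sketch).

    ms_lr: (bands, h, w) low-resolution multispectral image.
    pan:   (h*ratio, w*ratio) panchromatic image.
    """
    bands, h, w = ms_lr.shape
    # Upsample MS bands to the PAN grid (bilinear zoom for simplicity).
    ms_up = np.stack([ndimage.zoom(b, ratio, order=1) for b in ms_lr])
    # Low-pass PAN approximates what PAN would look like at the MS resolution.
    pan_lp = ndimage.gaussian_filter(pan, sigma * ratio)
    detail = pan - pan_lp
    sharpened = np.empty_like(ms_up)
    for k in range(bands):
        # Per-band gain by least squares between band details and PAN details
        # (a crude stand-in for the full BDSD regression over all bands).
        band_lp = ndimage.gaussian_filter(ms_up[k], sigma * ratio)
        band_detail = ms_up[k] - band_lp
        g = np.sum(band_detail * detail) / (np.sum(detail * detail) + 1e-12)
        sharpened[k] = ms_up[k] + g * detail
    return sharpened

# Toy usage with random data standing in for a real acquisition.
rng = np.random.default_rng(3)
ms = rng.random((4, 32, 32))
pan_img = ndimage.zoom(ms.mean(axis=0), 4, order=1) + 0.05 * rng.random((128, 128))
print(bdsd_like_sharpen(ms, pan_img).shape)
```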
       
  • Full-Polarization Bistatic Scattering From an Inhomogeneous Rough Surface
    • Authors: Ying Yang;Kun-Shan Chen;
      Pages: 6434 - 6446
      Abstract: This paper examines the properties of bistatic scattering from an inhomogeneous rough surface, which, in this paper, is modeled by the transitional layer as a function of depth. The lower medium of the rough surface is horizontally uniform but vertically inhomogeneous. Both linear and circular polarizations are investigated in light of the dependences of transition rate, background dielectric constant, and surface roughness. The presence of dielectric inhomogeneity generally leads to several features that do not appear in the homogeneous surface, such as the scattering coefficient on the whole scattering plane is enhanced; the dynamic range of HH and VV over the azimuth plane is reduced; HV can be greater than VH; and the difference of LR and RR is decreased. With the increasing transition rate, the scattering coefficients for both the linear and circular polarizations are enhanced. As the background dielectric constant increases, the scattering responses of the linear and circular polarizations are quite different. For the linear polarization, HH exhibits a stronger angular dependence; VV reduces in the forward region and enhances notably in the backward region; and HV decreases but VH increases. For circular polarizations, the cross-polarized LR increases in the backward region but decreases in the forward region, and the copolarized RR enhances on the whole scattering plane. With the increasing surface roughness, the scattering coefficient becomes more evenly distributed over the entire scattering plane.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Ship Detection Based on Complex Signal Kurtosis in Single-Channel SAR
           Imagery
    • Authors: Xiangguang Leng;Kefeng Ji;Shilin Zhou;Xiangwei Xing;
      Pages: 6447 - 6461
      Abstract: Recent studies have shown that complex information in single-channel synthetic aperture radar (SAR) imagery has practically always been underrated. This improves the perception of its potential for ocean monitoring. Based on an in-depth interpretation of complex signal kurtosis (CSK), this paper proposes a new ship detection method based on CSK in single-channel SAR imagery. The proposed method consists of two main parts, i.e., region proposal and target identification. The basic idea is to first detect potential ship locations based on the region proposal. Then, the final ship target is acquired based on the target identification. Compared to conventional methods based on detected products, e.g., the constant false alarm rate (CFAR), the proposed method has three advantages. First, CSK can take advantage of both non-Gaussianity and noncircularity, which is the fundamental concept distinguishing complex signal analysis from the real case. Second, the proposed method can be intrinsically free of false alarms caused by radio frequency interference (RFI). Finally, the proposed method can avoid missed detections in dense target situations. This methodology has been demonstrated over significant data sets acquired from Sentinel-1, TerraSAR-X, and Gaofen-3. These results validate that CSK is a vital indicator for ship detection. Complex information is expected to play a more important role in single-channel SAR imagery.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
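One common definition of a fourth-order complex-signal statistic of the kind exploited above can be sketched as follows; the exact CSK normalization used in the paper may differ, and the clutter and ship patches are synthetic.

```python
import numpy as np

def complex_kurtosis(z):
    """Fourth-order statistic of a zero-mean complex sample (one common definition).

    For circular complex Gaussian clutter this tends toward ~2; strong, dominant
    ship returns push it well above that value.
    """
    z = z - z.mean()
    p2 = np.mean(np.abs(z) ** 2)
    return np.mean(np.abs(z) ** 4) / (p2 ** 2)

rng = np.random.default_rng(4)
# Sea clutter patch: circular complex Gaussian speckle.
clutter = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
# Ship patch: clutter plus a few dominant, coherent scatterers.
ship = clutter.copy()
ship[:8] += 20.0 * np.exp(1j * 0.3)

print(f"kurtosis (clutter): {complex_kurtosis(clutter):.2f}")   # near 2
print(f"kurtosis (ship):    {complex_kurtosis(ship):.2f}")      # much larger
```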
       
  • Multiscale Locality and Rank Preservation for Robust Feature Matching of
           Remote Sensing Images
    • Authors: Xingyu Jiang;Junjun Jiang;Aoxiang Fan;Zhongyuan Wang;Jiayi Ma;
      Pages: 6462 - 6472
      Abstract: As a fundamental and important task in many applications of remote sensing and photogrammetry, feature matching tries to seek correspondences between the two feature sets extracted from an image pair of the same object or scene. This paper focuses on eliminating mismatches from a set of putative feature correspondences constructed according to the similarity of existing well-designed feature descriptors. Considering the stable local topological relationship of the potential true correspondences, we propose a simple yet efficient method named multiscale Top $K$ Rank Preservation (mTopKRP) for robust feature matching. To this end, we first search the $K$-nearest neighbors of each feature point and generate a ranking list accordingly. Then we design a metric based on the weighted Spearman’s footrule distance to describe the similarity of two ranking lists specifically for the matching problem. We build a mathematical optimization model and derive its closed-form solution, enabling our method to establish reliable correspondences in linearithmic time complexity, which requires only tens of milliseconds to handle over 1000 putative matches. We also introduce a multiscale strategy for neighborhood construction, which increases the robustness of our method and can deal with different types of degradation, even when the image pair suffers from a large scale change, rotation, nonrigid deformation, or a large number of mismatches. Extensive experiments on several representative remote sensing image data sets demonstrate the superiority of our method over the state of the art.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
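The ranking-list construction and the weighted Spearman's footrule comparison described above can be sketched as below; the neighbor weighting and the data are illustrative assumptions rather than the paper's exact metric.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_rank_list(points, idx, k):
    """Indices of the k nearest neighbors of points[idx], ordered by distance."""
    tree = cKDTree(points)
    _, nn = tree.query(points[idx], k=k + 1)    # +1 because the point itself is returned
    return [j for j in nn if j != idx][:k]

def weighted_footrule(rank_a, rank_b, weights):
    """Weighted Spearman's footrule between two ranking lists of the same items.

    Items missing from the other list are assigned the worst possible rank.
    """
    k = len(rank_a)
    pos_b = {item: r for r, item in enumerate(rank_b)}
    total = 0.0
    for r, item in enumerate(rank_a):
        total += weights[r] * abs(r - pos_b.get(item, k))
    return total

rng = np.random.default_rng(5)
pts1 = rng.random((200, 2))
pts2 = pts1 + rng.normal(0, 0.01, pts1.shape)   # slightly deformed copy (true matches)

K = 6
weights = 1.0 / (np.arange(K) + 1.0)            # hypothetical: nearer neighbors weigh more
r1 = knn_rank_list(pts1, idx=0, k=K)
r2 = knn_rank_list(pts2, idx=0, k=K)
print("footrule distance for a true correspondence:", weighted_footrule(r1, r2, weights))
```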
       
  • Identification of Sun Glint Contamination in GMI Measurements Over the
           Global Ocean
    • Authors: Qiumeng Xue;Li Guan;
      Pages: 6473 - 6483
      Abstract: This paper utilizes the model regression difference method to identify sun glint contamination in Global Precipitation Measurement Microwave Imager (GMI) data over the ocean based on observations from 2015 to 2016. The spatial distribution characteristics and the critical angles of the sun glint flags are analyzed in depth. It is found that the GMI measurements with horizontal and vertical polarizations at 10.65 GHz over the ocean are sometimes contaminated by the solar radiation reflected by the sea surface. Sun glint contamination has also been detected over highly reflective land surfaces. The intensity and locations of the contamination are related to the sun glint angle. Only those GMI fields of view with smaller sun glint angles are easily contaminated. The closer the sun glint angle is to 0°, the stronger the magnitude of the contamination. The GMI observations at other channels are not contaminated, mainly because sun glint is most pronounced at 10 GHz. Current GMI sun glint algorithms impose overly strong constraints and discard too much useful data. The suggested critical angle of the sun glint flag at 10.65 GHz is 20°, which reduces false flagging. By applying the model regression difference method, the error in brightness temperature caused by sun glint can be corrected. The Tropical Rainfall Measuring Mission Microwave Imager (TMI) observations at 10.65 GHz are also contaminated by the reflected solar radiation from the ocean, and the intensity and locations of the contamination are similar to those of the GMI.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
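A minimal sketch of the flagging rule suggested above is shown below: a sun glint angle computed from the standard specular-reflection geometry and compared against the 20° critical angle. The relative-azimuth sign convention and the sample geometries are assumptions.

```python
import numpy as np

def sun_glint_angle(sza, vza, saz, vaz):
    """Angle between the view direction and the specular reflection of the sun.

    All angles in degrees; the azimuth convention (relative azimuth = vaz - saz)
    is an assumption and may differ between processors.
    """
    sza, vza = np.deg2rad(sza), np.deg2rad(vza)
    rel_az = np.deg2rad(vaz - saz)
    cos_g = np.cos(sza) * np.cos(vza) - np.sin(sza) * np.sin(vza) * np.cos(rel_az)
    return np.rad2deg(np.arccos(np.clip(cos_g, -1.0, 1.0)))

# Flag observations whose glint angle falls below the suggested 20 deg critical angle.
sza = np.array([30.0, 45.0, 50.0])          # solar zenith angles (hypothetical)
vza = np.array([52.8, 52.8, 52.8])          # GMI earth-incidence angle is ~52.8 deg
saz = np.array([120.0, 90.0, 120.0])
vaz = np.array([300.0, 180.0, 295.0])
glint = sun_glint_angle(sza, vza, saz, vaz)
print(np.round(glint, 1), glint < 20.0)
```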
       
  • Guided Patchwise Nonlocal SAR Despeckling
    • Authors: Sergio Vitale;Davide Cozzolino;Giuseppe Scarpa;Luisa Verdoliva;Giovanni Poggi;
      Pages: 6484 - 6498
      Abstract: We propose a new method for synthetic aperture radar (SAR) image despeckling, which leverages information drawn from coregistered optical imagery. Filtering is performed by patchwise nonlocal means, working exclusively on SAR data. However, the filtering weights are computed by taking into account also the optical guide, which is much cleaner than the SAR image, and hence more discriminative. To avoid injecting optical-domain information into the filtered image, an SAR-domain statistical test is preliminarily performed to reject right away any risky predictor. Experiments on two SAR-optical data sets show that the proposed method suppresses speckle very effectively, preserves structural details, and does not introduce significant filtering artifacts. Overall, the proposed method compares favorably with all the state-of-the-art despeckling filters, and also with our own previous optical-guided filter.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
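The guided weighting idea described above (similarity computed on the optical guide, averaging applied to SAR intensities only) can be sketched for a single pixel as follows; the Gaussian weighting kernel is an assumption, and the paper's SAR-domain rejection test is omitted.

```python
import numpy as np

def guided_nlm_pixel(sar, optical, r, c, patch=3, search=7, h=0.1):
    """Filter one SAR pixel with nonlocal means whose weights come from the optical guide.

    sar, optical: coregistered 2-D float arrays; patch/search are half-sizes.
    The SAR-domain statistical test used in the paper to reject risky predictors
    is omitted here for brevity.
    """
    ref = optical[r - patch:r + patch + 1, c - patch:c + patch + 1]
    num, den = 0.0, 0.0
    for i in range(r - search, r + search + 1):
        for j in range(c - search, c + search + 1):
            cand = optical[i - patch:i + patch + 1, j - patch:j + patch + 1]
            w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)   # similarity on the guide
            num += w * sar[i, j]                               # averaging on SAR data only
            den += w
    return num / den

rng = np.random.default_rng(6)
clean = np.tile(np.linspace(0.2, 1.0, 64), (64, 1))
optical_guide = clean + 0.01 * rng.normal(size=clean.shape)
speckled_sar = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # multilook speckle
print(f"filtered value: {guided_nlm_pixel(speckled_sar, optical_guide, 32, 32):.3f}, "
      f"clean value: {clean[32, 32]:.3f}")
```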
       
  • An Analytic Expression for the Phase Noise of the Goldstein–Werner
           Filter
    • Authors: Scott Hensley;
      Pages: 6499 - 6516
      Abstract: Interferogram filtering for noise reduction is key to many radar interferometric applications. Repeat pass radar interferometry often uses data with less than ideal correlation levels resulting from either long spatial or temporal baselines or changes between observations leading to high levels of temporal decorrelation. To maximize the utility of such pairs, filtering the interferogram to get maximal noise reduction is often needed. One technique that has proved quite useful in the geophysical community is power spectral or Goldstein–Werner filtering of the interferogram, whereby a power-weighted version of the Fourier transform is used to enhance fringe visibility. Although the paper defining the filter briefly touched upon the spatial resolution and noise reduction induced by the filter, it did not provide a useful formula for predicting the phase noise after filtering. This paper derives a formula for the phase noise obtained from power spectral filtering, albeit under the restriction of several simplifying assumptions to make the problem analytically tractable. In particular, it is assumed that the interferometric phase is locally well approximated by a linear phase ramp with nonlinear phase perturbations small in a spectral energy sense compared to the linear term.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
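The power-spectral (Goldstein-Werner) filter analyzed above can be sketched in a few lines for a single interferogram patch: weight the patch spectrum by a power of its own magnitude and transform back. The toy fringe pattern and the omission of spectrum smoothing and patch overlapping are simplifications.

```python
import numpy as np

def goldstein_filter_patch(ifg_patch, alpha=0.5):
    """Power-spectral (Goldstein-Werner) filtering of a complex interferogram patch.

    The patch spectrum Z is re-weighted by |Z|**alpha; alpha = 0 leaves the patch
    unchanged, while larger alpha sharpens fringes and lowers the phase noise.
    """
    Z = np.fft.fft2(ifg_patch)
    filtered = np.fft.ifft2(np.abs(Z) ** alpha * Z)
    return np.exp(1j * np.angle(filtered))       # keep unit amplitude, filtered phase

# Toy interferogram: a linear fringe ramp plus phase noise (decorrelation).
rng = np.random.default_rng(7)
y, x = np.mgrid[0:64, 0:64]
phase = 2 * np.pi * x / 16.0
noisy = np.exp(1j * (phase + rng.normal(0, 0.8, phase.shape)))
filt = goldstein_filter_patch(noisy, alpha=0.8)

resid_before = np.angle(noisy * np.exp(-1j * phase)).std()
resid_after = np.angle(filt * np.exp(-1j * phase)).std()
print(f"phase noise before: {resid_before:.2f} rad, after: {resid_after:.2f} rad")
```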
       
  • Learning and Adapting Robust Features for Satellite Image Segmentation on
           Heterogeneous Data Sets
    • Authors: Sina Ghassemi;Attilio Fiandrotti;Gianluca Francini;Enrico Magli;
      Pages: 6517 - 6529
      Abstract: This paper addresses the problem of training a deep neural network for satellite image segmentation so that it can be deployed over images whose statistics differ from those used for training. For example, in postdisaster damage assessment, the tight time constraints make it impractical to train a network from scratch for each image to be segmented. We propose a convolutional encoder–decoder network able to learn visual representations of increasing semantic level as its depth increases, allowing it to generalize over a wider range of satellite images. Then, we propose two additional methods to improve the network performance over each specific image to be segmented. First, we observe that updating the batch normalization layers’ statistics over the target image improves the network performance without human intervention. Second, we show that refining a trained network over a few samples of the image boosts the network performance with minimal human intervention. We evaluate our architecture over three data sets of satellite images, showing the state-of-the-art performance in binary segmentation of previously unseen images and competitive performance with respect to more complex techniques in a multiclass segmentation task.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
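The first adaptation step described above, refreshing batch-normalization statistics on the target image without labels or weight updates, might be sketched in PyTorch as follows; the tiny placeholder network and the tiling are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Placeholder encoder-decoder; any segmentation net with BatchNorm layers would do.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def adapt_batchnorm(model, target_tiles, momentum=0.1):
    """Update BN running mean/var using tiles of the target image (no gradient steps)."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.momentum = momentum
            m.reset_running_stats()          # start the estimates from scratch
    model.train()                            # BN uses batch stats and updates running stats
    with torch.no_grad():                    # no parameter updates, statistics only
        for tile in target_tiles:
            model(tile)
    model.eval()                             # inference now uses the adapted statistics

# Toy target image split into 4 tile batches of shape (N, C, H, W).
tiles = [torch.rand(8, 3, 64, 64) for _ in range(4)]
adapt_batchnorm(model, tiles)
print("adapted running mean of the first BN layer:", model[1].running_mean[:4])
```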
       
  • Hydra: An Ensemble of Convolutional Neural Networks for Geospatial Land
           Classification
    • Authors: Rodrigo Minetto;Maurício Pamplona Segundo;Sudeep Sarkar;
      Pages: 6530 - 6541
      Abstract: In this paper, we describe Hydra, an ensemble of convolutional neural networks (CNNs) for geospatial land classification. The idea behind Hydra is to create an initial CNN that is coarsely optimized but provides a good starting point for further optimization, which will serve as the Hydra’s body. Then, the obtained weights are fine-tuned multiple times with different augmentation techniques, crop styles, and class weights to form an ensemble of CNNs that represent the Hydra’s heads. By doing so, we prompt convergence to different endpoints, which is a desirable aspect for ensembles. With this framework, we were able to reduce the training time while maintaining the classification performance of the ensemble. We created ensembles for our experiments using two state-of-the-art CNN architectures, residual networks (ResNet) and dense convolutional networks (DenseNet). We have demonstrated the application of our Hydra framework on two data sets, the functional map of the world (FMOW) and NWPU-RESISC45, achieving results comparable to the state of the art for the former and the best-reported performance so far for the latter. Code and CNN models are available at https://github.com/maups/hydra-fmow.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Characterizing the System Impulse Response Function From Photon-Counting
           LiDAR Data
    • Authors: Adam P. Greeley;Thomas A. Neumann;Nathan T. Kurtz;Thorsten Markus;Anthony J. Martino;
      Pages: 6542 - 6551
      Abstract: NASA’s Multiple Altimeter Beam Experimental LiDAR (MABEL) is an aircraft-based photon-counting laser altimeter designed as a simulator to test measurement techniques and algorithms for Advanced Topographic Laser Altimeter System (ATLAS), the sole instrument on NASA’s Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) mission. By measuring the time of flight, pointing angle, and absolute position for individual photons, ICESat-2 provides detailed elevation measurements of earth’s surface. Calculating accurate and precise elevations requires an understanding of how photons interact with surfaces, and characterization of the photon distribution after returning from surfaces. Neither MABEL nor ATLAS records the transmitted laser pulse shape, relying instead on aggregating several pulses worth of photons, often using histograms, to characterize the pulse shape. In this paper, we assess the limitations of using histograms and propose a more robust method to describe MABEL’s system impulse-response function using an exponentially modified Gaussian distribution. We also provide standard error estimates for the arithmetic mean and standard deviation calculations, and for exponentially modified Gaussian parameters using a Monte Carlo sensitivity analysis. We apply this method to photon returns from a sea ice lead and from a dry salt lake bed as case studies for estimating the standard error associated with sample size for the arithmetic mean and standard deviation, and for the exponentially modified Gaussian parameters. We use these standard errors to calculate the minimum number of photons required to find both Gaussian and exponentially modified Gaussian distribution parameters within 3 cm of their parent population values.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
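The exponentially modified Gaussian description proposed above can be illustrated with SciPy's exponnorm distribution, as in the sketch below. The synthetic photon times are invented; the conversion tau = K * scale follows SciPy's exponnorm parameterization.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Synthetic photon arrival times (ns): a Gaussian transmit/receive response convolved
# with an exponential tail, mimicking a system impulse response.
mu, sigma, tau = 0.0, 0.4, 0.9
photons = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# Fit an exponentially modified Gaussian; in scipy's exponnorm, K = tau / sigma.
K_hat, loc_hat, scale_hat = stats.exponnorm.fit(photons)
tau_hat = K_hat * scale_hat
print(f"fitted sigma = {scale_hat:.2f} ns, tau = {tau_hat:.2f} ns (true 0.40, 0.90)")

# Compare with a plain Gaussian description (arithmetic mean / standard deviation),
# which cannot capture the asymmetric tail.
print(f"sample mean = {photons.mean():.2f} ns, sample std = {photons.std():.2f} ns")
```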
       
  • StfNet: A Two-Stream Convolutional Neural Network for
           Spatiotemporal Image Fusion
    • Authors: Xun Liu;Chenwei Deng;Jocelyn Chanussot;Danfeng Hong;Baojun Zhao;
      Pages: 6552 - 6564
      Abstract: Spatiotemporal image fusion is considered a promising way to provide Earth observations with both high spatial resolution and frequent coverage, and recently, learning-based solutions have been receiving broad attention. However, these algorithms, which treat spatiotemporal fusion as a single-image super-resolution problem, generally suffer from significant spatial information loss in coarse images, due to the large upscaling factors in real applications. To address this issue, in this paper, we exploit temporal information in fine image sequences and solve the spatiotemporal fusion problem with a two-stream convolutional neural network called StfNet. The novelty of this paper is twofold. First, considering the temporal dependence among image sequences, we incorporate the fine image acquired at the neighboring date to super-resolve the coarse image at the prediction date. In this way, our network predicts a fine image not only from the structural similarity between coarse and fine image pairs but also by exploiting abundant texture information in the available neighboring fine images. Second, instead of estimating each output fine image independently, we consider the temporal relations among time-series images and formulate a temporal constraint. This temporal constraint aims to guarantee the uniqueness of the fusion result and encourages temporally consistent predictions in learning, thus leading to more realistic final results. We evaluate the performance of the StfNet using two actual data sets of Landsat-Moderate Resolution Imaging Spectroradiometer (MODIS) acquisitions, and both visual and quantitative evaluations demonstrate that our algorithm achieves state-of-the-art performance.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Learning Temporal Features for Detection on Maritime Airborne Video
           Sequences Using Convolutional LSTM
    • Authors: Gonçalo Cruz;Alexandre Bernardino;
      Pages: 6565 - 6576
      Abstract: In this paper, we study the effectiveness of learning temporal features to improve detection performance in videos captured by small aircraft. To implement this learning process, we use a convolutional long short-term memory (LSTM) associated with a pretrained convolutional neural network (CNN). To improve the training process, we incorporate domain-specific knowledge about the expected size and number of boats. We carry out three tests. The first searches the best sequence length and subsampling rate for training and the second compares the proposed method with a traditional CNN, a traditional LSTM, and a gated recurrent unit (GRU). The final test evaluates our method with the already published detectors in two data sets. Results show that in favorable conditions, our method’s performance is comparable to other detectors but, on more challenging environments, it stands out from other techniques.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • A CIE Color Purity Algorithm to Detect Black and Odorous Water in Urban
           Rivers Using High-Resolution Multispectral Remote Sensing Images
    • Authors: Qian Shen;Yue Yao;Junsheng Li;Fangfang Zhang;Shenglei Wang;Yanhong Wu;Huping Ye;Bing Zhang;
      Pages: 6577 - 6590
      Abstract: Urban black and odorous water (BOW) is a serious global environmental problem. Since these waters are often narrow rivers or small ponds, the detection of BOW using traditional satellite data and algorithms is limited both by a lack of spatial resolution and by imperfect retrieval algorithms. In this paper, we used the Chinese high-resolution remote sensing satellite Gaofen-2 (GF-2, 0.8 m). The atmospheric correction showed that the mean absolute percentage error of the derived remote sensing reflectance ($R_{\mathrm{rs}}$) in the visible bands is 25.19%. We first measured $R_{\mathrm{rs}}$ spectra of two classes of BOW [BOW with high concentrations of iron (II) sulfide, i.e., BOW1, and BOW with high concentrations of total suspended matter, i.e., BOW2] and ordinary water in Shenyang. Then, the in situ $R_{\mathrm{rs}}$ data were converted into $R_{\mathrm{rs}}$ corresponding to the wide GF-2 bands using the spectral response functions. We used the converted $R_{\mathrm{rs}}$ data to calculate several band combinations, including the baseline height, [$R_{\mathrm{rs}}$(green) $-$ $R_{\mathrm{rs}}$(red)]/[$R_{\mathrm{rs}}$(green) $+$ $R_{\mathrm{rs}}$(red)], and the color purity on a Commission Internationale de L’Eclairage (CIE) chromaticity diagram. The color purity was found to be the best index to extract BOW from ordinary water. Then, $R_{\mathrm{rs}}$(645) was applied to categorize BOW into BOW1 and BOW2. We applied the algorithm to two synchronous GF-2 images. The recognition accuracies of BOW2 and ordinary water are both 100%. The extracted river water type near Weishanhu Road was BOW1, which agreed well with ground truth. The algorithm was further applied to other GF-2 data for Shenyang and Beijing.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Geosynchronous SAR Tomography: Theory and First Experimental Verification
           Using Beidou IGSO Satellite
    • Authors: Cheng Hu;Bin Zhang;Xichao Dong;Yuanhao Li;
      Pages: 6591 - 6607
      Abstract: Synthetic aperture radar (SAR) tomography (TomoSAR) techniques exploit multipass acquisitions of the same scene with slightly different view angles, and allow generating fully 3-D images, providing an estimation of the scatterers’ distribution along the range, azimuth, and elevation directions. This paper extends TomoSAR to geosynchronous SAR (GEO TomoSAR). First, the potential and performance of GEO TomoSAR were analyzed from the perspective of orbital perturbation and the resulting large spatial baseline. Then, the rotation-induced decorrelation problems induced by the along-track baseline component were analyzed. In addition, the optimized acquisition geometry and tomographic processing flow were given, and the computer simulation verification was also completed. Finally, an equivalent validation experiment based on the Beidou inclined geosynchronous orbit (IGSO) navigation satellite was carried out to demonstrate the feasibility and effectiveness of GEO TomoSAR. The experimental system employs the Beidou IGSO satellite as an illuminator of opportunity and a ground system collecting and processing the reflected echoes. This is the first time that data from repeat-track Beidou IGSO satellites have been employed for tomographic processing. The 3-D imaging of an urban area using this experimental system was presented and then verified using LiDAR point cloud data as reference. The results show that GEO TomoSAR can form baselines of the order of hundreds of kilometers in elevation, which makes it possible to achieve a resolution of 5 m in elevation.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Polarized Backscattering From Spatially Anisotropic Rough Surface
    • Authors: Ying Yang;Kun-Shan Chen;
      Pages: 6608 - 6618
      Abstract: This paper examines the polarized backscattering of spatially anisotropic rough surfaces. To better explore the physical mechanisms that control the azimuthal dependence of the backscattering from anisotropic surfaces, the effects of surface roughness [correlation length and root-mean-square (rms) height], dielectric constant, and radar parameters are studied. The advanced integral equation model (AIEM) is used to simulate both co- and cross-polarized backscattering coefficients, including the single and multiple scattering. Numerical results suggest that the multiple scattering exhibits a stronger azimuthal dependence for HH than VV polarization, especially at larger incidence angles. For a weakly anisotropic surface, the azimuthal variation of backscattering tends to be a sinusoidal-like pattern. However, with the enhancement of anisotropy, such a scattering pattern is distorted, and a sharp dip appears in the up/down direction. As the rms height and dielectric constant increase, the scattering is enhanced on the whole. The HH/VV ratio at a lower dielectric constant is greater than that at a higher one. In comparison, the scattering shows a stronger dependence on anisotropy at a lower dielectric constant, especially at a larger incidence angle. As an application example, we compare the model predictions with reported measurements from two different sites. Preliminary results are quite encouraging, and thus, the analysis presented in this paper is potentially useful to predict and interpret backscattering from crop field surfaces, where strongly anisotropic surfaces are commonly present due to plowing or raking practices.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • First Demonstration of Joint Wireless Communication and High-Resolution
           SAR Imaging Using Airborne MIMO Radar System
    • Authors: Jie Wang;Xing-Dong Liang;Long-Yong Chen;Li-Na Wang;Kun Li;
      Pages: 6619 - 6632
      Abstract: Special attention has been devoted to joint wireless communication and radar sensing systems in recent years. However, since communication and radar have conflicting requirements in terms of waveforms, transceiver developments, and signal processing algorithms, realization of this system concept is still a great challenge. In this paper, we introduce an airborne multi-input multi-output (MIMO) radar, along with modified orthogonal frequency-division multiplexing (OFDM) and space–time coding (STC) waveform schemes, for the implementation of joint wireless communication and synthetic aperture radar (SAR) imaging. The proposed system, which simultaneously transmits multidimensional waveforms with reconfigurable channels, can acquire adequate degrees of freedom. Thereby, it becomes a feasible method to perform both data transmission and high-resolution SAR imaging at the same time without intramodal interference. The theoretical analysis is validated by laboratory and flight experiments. Through our analysis, we aim to open up a new perspective of using MIMO radar to realize joint wireless communication and SAR imaging.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Blind Hyperspectral Unmixing Considering the Adjacency Effect
    • Authors: Xinyu Wang;Yanfei Zhong;Liangpei Zhang;Yanyan Xu;
      Pages: 6633 - 6649
      Abstract: This paper focuses on the blind unmixing technique for analyzing hyperspectral images (HSIs). A joint deconvolution and blind hyperspectral unmixing (DBHU) algorithm is proposed, which is aimed at eliminating the impact of the adjacency effect (AE) on unmixing. In remote sensing imagery, the AE occurs in the presence of atmospheric scattering over a heterogeneous surface. The AE leads to blurring and additional mixing of HSIs and makes it difficult to estimate endmembers and abundances accurately. In this paper, we first model the blurred HSIs by the use of a bilinear mixing model (BMM), where a blurring kernel is used to model the mixing caused by the AE. Based on the BMM, the DBHU problem is formulated as a constrained and biconvex optimization problem. Specifically, the minimum-volume simplex (MVS) is incorporated to deal with the additional mixing caused by the AE, and 3-D total variation (TV) priors are adopted to model the spectral–spatial correlation of the data. In DBHU, the biconvex problem is efficiently solved by a nonstandard application of the alternating direction method of multipliers (ADMM) algorithm, where a block coordinate descent scheme is applied by splitting the original problem into two saddle point subproblems, and then minimizing the subproblems alternately via the ADMM until convergence. The experimental results obtained with both simulated and real data confirm the viability of the proposed algorithm, and DBHU works well, even where both blurring and noise are present in the scene.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Spectrum Recovery for Clutter Removal in Penetrating Radar Imaging
    • Authors: Yinchuan Li;Xiaodong Wang;Zegang Ding;Xu Zhang;Yin Xiang;Xiaopeng Yang;
      Pages: 6650 - 6665
      Abstract: Penetrating radar systems are widely employed to scan objects that are placed behind or buried inside media (such as walls, the ground, and so on). As the clutter is much stronger than the target echo, clutter removal must be performed before imaging. The moving average subtraction, spatial notch filtering, and singular value decomposition methods are commonly used to remove clutter. However, the drawback is that these methods eliminate some of the target spectrum information, which causes target energy losses and generates side lobes. To solve this problem, two spectrum recovery methods are proposed in this paper. The first method recovers the spectrum magnitude and phase via sinc interpolation and linear fitting, respectively, which is fast and suitable for real-time processing. The second method recovers the spectrum based on matrix completion with prior information, which is more accurate but more computationally expensive. Extensive simulations and experiments are presented to validate the proposed methods. The results show that the proposed methods can improve various traditional clutter removal methods: the side lobes are clearly suppressed, and the signal-to-clutter ratio is significantly improved.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Performance of POLYMER Atmospheric Correction of Ocean Color Imagery in
           the Presence of Absorbing Aerosols
    • Authors: Minwei Zhang;Chuanmin Hu;Brian B. Barnes;
      Pages: 6666 - 6674
      Abstract: The atmospheric correction approach currently being used operationally by NASA [termed the NASA standard atmospheric correction (NSAC) approach] to process ocean color data relies on the traditional “black pixel” approach, with additional modifications to account for nonnegligible water-leaving radiance in the near-infrared (NIR) bands. The NSAC approach underestimates remote-sensing reflectance ($R_{\mathrm{rs}}$, sr$^{-1}$) in blue wavelengths in the presence of absorbing aerosols. Addressing this issue requires realistic absorbing-aerosol models and knowledge of the vertical distribution of aerosols, which are currently difficult to achieve. An alternative atmospheric correction approach has been evaluated in this paper for Moderate Resolution Imaging Spectroradiometer (MODIS) data. The approach is based on a previously developed spectra-matching optimization [the POLYnomial-based approach established for the atmospheric correction of MERIS data (POLYMER)], where polynomial functions are used to express the atmospheric contribution to the measured radiance and where a bio-optical model is used to estimate the water contribution. Evaluation against in situ data measured over regions frequently affected by absorbing aerosols indicates that, compared with the NSAC approach, the POLYMER approach improves the $R_{\mathrm{rs}}$ retrievals in blue wavelengths while having a slightly worse performance at other wavelengths. Evaluation using NSAC-retrieved $R_{\mathrm{rs}}$ in adjacent days free of absorbing aerosols suggests that the POLYMER approach could improve the spectral shape and increase valid spatial coverage. When applied to time-series MODIS data, the POLYMER approach could generate more temporally coherent daily and monthly $R_{\mathrm{rs}}$ patterns than the NSAC approach. These results suggest that the POLYMER approach could be an alternative approach to partly correct for absorbing aerosols in the absence of explicit information on the aerosol type and the vertical distribution.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Multiple-Feature Kernel-Based Probabilistic Clustering for Unsupervised
           Band Selection
    • Authors: Marco Bevilacqua;Yannick Berthoumieu;
      Pages: 6675 - 6689
      Abstract: This paper presents a new method to perform unsupervised band selection (UBS) with hyperspectral data. The method provides a probabilistic clustering approach. The band images are clustered in the image space by computing their posterior class probability. Then, for each cluster, the band exhibiting the highest probability of belonging to it is selected as cluster exemplar. More particularly, the proposed method falls into information-maximization clustering methods, where the posterior class probability is modeled and the parameters of the models are derived by maximizing the information between the data and the unknown cluster labels. In this context, we propose a new image representation for hyperspectral images, based on the first- and second-order statistics of multiple image features. We refer to this representation as multiple-feature local statistical descriptors (MLSD). The descriptors are computed with respect to regular grids, and a special pixel selection procedure reduces the number of samples within each block of the grid. A kernel-based model that embeds the MLSD is then proposed for the posterior class probability. The model is finally optimized according to an information-maximization criterion. We conduct several experiments to determine the best parameters for the proposed approach and compare the latter with other state-of-the-art UBS methods. Quantitative evaluations show that, by employing our band selection method, higher performance in terms of classification accuracy and endmember extraction can be achieved in comparison with the state of the art.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Deep Learning for Hyperspectral Image Classification: An Overview
    • Authors: Shutao Li;Weiwei Song;Leyuan Fang;Yushi Chen;Pedram Ghamisi;Jón Atli Benediktsson;
      Pages: 6690 - 6709
      Abstract: Hyperspectral image (HSI) classification has become a hot topic in the field of remote sensing. In general, the complex characteristics of hyperspectral data make the accurate classification of such data challenging for traditional machine learning methods. In addition, hyperspectral imaging often deals with an inherently nonlinear relation between the captured spectral information and the corresponding materials. In recent years, deep learning has been recognized as a powerful feature-extraction tool to effectively address nonlinear problems and has been widely used in a number of image processing tasks. Motivated by those successful applications, deep learning has also been introduced to classify HSIs and has demonstrated good performance. This survey paper presents a systematic review of the deep learning-based HSI classification literature and compares several strategies for this topic. Specifically, we first summarize the main challenges of HSI classification which cannot be effectively overcome by traditional machine learning methods, and also introduce the advantages of deep learning in handling these problems. Then, we build a framework that divides the corresponding works into spectral-feature networks, spatial-feature networks, and spectral–spatial-feature networks to systematically review the recent achievements in deep learning-based HSI classification. In addition, considering the fact that available training samples in the remote sensing field are usually very limited and training deep networks requires a large number of samples, we include some strategies to improve classification performance, which can provide some guidelines for future studies on this topic. Finally, several representative deep learning-based classification methods are evaluated on real HSIs in our experiments.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Phase Correlation Decomposition: The Impact of Illumination Variation for
           Robust Subpixel Remotely Sensed Image Matching
    • Authors: Xue Wan;Jian Guo Liu;Shengyang Li;Hongshi Yan;
      Pages: 6710 - 6725
      Abstract: Illumination variation is one of the major problems in multitemporal earth observation (EO) image matching. Although much research has focused on illumination-invariant image matching, subpixel image matching under large illumination variation without prior knowledge remains a challenge. This paper proposes a phase correlation decomposition (PCD) theoretical model to analyze the joint effects of the zenith and azimuth angles of the lighting source. A novel stepwise least-squares fitting-based PC (SLSF-PC) is proposed to accurately calculate the subpixel image shift with a stepwise function in the frequency domain. Our mathematical investigation is validated by image alignment and stereo dense matching experiments using simulated terrain shading images representing four different landscapes and a multi-illumination remotely sensed image data set containing eight different scenes under seasonal and daily illumination variation. The image matching experiments demonstrate the superior performance of the proposed SLSF-PC compared with state-of-the-art image matching algorithms, such as speeded up robust features (SURF), mutual information (MI), and normalized cross correlation (NCC). Even under large illumination angle changes, the proposed SLSF-PC achieves 0.1 subpixel matching accuracy on average, while the other methods fail to find the correspondence. (A minimal phase-correlation sketch follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
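The entry above builds on phase correlation. The following minimal sketch recovers only the integer-pixel shift from the cross-power spectrum; it does not reproduce the paper's SLSF-PC subpixel refinement or its illumination analysis, and the test images are synthetic.

```python
# Minimal phase-correlation shift estimator (integer-pixel peak only).
import numpy as np

def phase_correlation_shift(img_a, img_b, eps=1e-12):
    """Estimate the translation of img_b relative to img_a from the cross-power spectrum."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + eps       # keep the phase only
    correlation = np.fft.ifft2(cross_power).real   # correlation surface with a single peak
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap peak positions beyond half the image size to negative shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape)]
    return tuple(shifts)

# A synthetic 5-pixel shift along rows is recovered (as -5 under this sign convention).
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, shift=5, axis=0)
print(phase_correlation_shift(a, b))   # (-5, 0)
```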
       
  • Observed Relationship Between BRF Spectral-Continuum Variance and
           Macroscopic Roughness of Clay Sediments
    • Authors: Gregory Badura;Charles M. Bachmann;Justin Harms;Andrei Abelev;
      Pages: 6726 - 6740
      Abstract: Spectral data offer a means of estimating the critical parameters of sediments, including sediment composition, moisture content, surface roughness, density, and grain-size distribution. Macroscopic surface roughness in particular has a substantial impact on the structure of the bidirectional reflectance factor (BRF) and the angular distribution of scattered light. In developing the models to invert the properties of the surface beyond just surface composition, roughness must also be accounted for in order to achieve reliable and repeatable results. This paper outlines laboratory studies in which the BRF and surface digital elevation measurements were performed on dry clay sediments. The results were used to explore the suitability of various roughness metrics to account for the radiometric effect of surface roughness. The metrics that are specifically addressed in this paper include random roughness and sill variance. Relative accuracy and tradeoffs between these metrics are described. We find that spectral variability, especially near spectral absorption features, correlates strongly with the quantified measures of surface roughness. We also find that spectral variability is sensitive to the sensor fore-optic size. The results suggest that roughness parameters might be directly determined from the spectrum itself. The relationship between spectral variability and macroscopic surface roughness was particularly strong in some broad spectral ranges of the visible, near infrared, and shortwave infrared, including the near-infrared region between 600 and 850 nm.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • A Novel Inpainting Algorithm for Recovering Landsat-7 ETM+ SLC-OFF Images
           Based on the Low-Rank Approximate Regularization Method of Dictionary
           Learning With Nonlocal and Nonconvex Models
    • Authors: Jiaqing Miao;Xiaobing Zhou;Ting-Zhu Huang;Tingbing Zhang;Zhaoming Zhou;
      Pages: 6741 - 6754
      Abstract: On May 31, 2003, the scan line corrector (SLC) of the Enhanced Thematic Mapper Plus (ETM+) onboard the Landsat-7 satellite failed, resulting in strips of lost data in all ETM+ images acquired since then. In this paper, we propose a novel inpainting algorithm for recovering ETM+ SLC-off images. The two slopes of the boundaries of each missing strip are extracted through the Hough transform, ignoring the slope of any strip edge that overlaps the edge of the image. An adaptive dictionary is then developed and trained using ETM+ SLC-on images acquired before May 31, 2003, so that the physical characteristics and geometric features of the ground coverage of the data-missing strips can be considered during recovery. To make the algorithm computationally efficient, the data-missing strips are repaired along their slope directions by using the $\log\det(\cdot)$ low-rank nonconvex model together with the dictionary. The algorithm was tested using simulated ETM+ SLC-off images created from a multiband ETM+ SLC-on image file and compared with the high-accuracy low-rank tensor completion (HaLRTC), logDet, and tensor nuclear norm (TNN) algorithms. The results show that the ETM+ images restored using the new algorithm have lower RMSE, higher PSNR and structural similarity (SSIM) values, and better visual quality. These results indicate that the new algorithm performs better than the other three algorithms and can efficiently and accurately restore the data-missing strips. (A simplified low-rank completion sketch follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
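As a loose illustration of the low-rank idea behind gap filling (the paper instead combines a trained dictionary with a nonconvex logdet surrogate), here is a generic singular-value-thresholding completion with toy parameters; the stripe mask and image are synthetic.

```python
# Generic singular-value-thresholding (SVT) matrix completion, shown only to illustrate
# the low-rank principle; NOT the paper's dictionary-plus-logdet algorithm.
import numpy as np

def svt_complete(M, mask, tau=2.0, delta=1.2, n_iter=300):
    """M: observed image with arbitrary values at missing pixels; mask: True where observed."""
    Z = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # shrink singular values
        Z = Z + delta * mask * (M - X)               # gradient step on observed entries only
    return X

# Toy SLC-off-like stripes: every 8th column of a smooth rank-1 image is missing.
x = np.outer(np.linspace(0.0, 1.0, 64), np.linspace(1.0, 2.0, 64))
mask = np.ones_like(x, dtype=bool)
mask[:, ::8] = False
filled = svt_complete(x * mask, mask)
```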
       
  • Ionospheric Correction of InSAR Time Series Analysis of C-band Sentinel-1
           TOPS Data
    • Authors: Cunren Liang;Piyush Agram;Mark Simons;Eric J. Fielding;
      Pages: 6755 - 6773
      Abstract: The Copernicus Sentinel-1A/B satellites operating at C-band in terrain observation by progressive scans (TOPS) mode bring unprecedented opportunities for measuring large-scale tectonic motions using interferometric synthetic aperture radar (InSAR). Although the ionospheric effects are only about one-sixteenth of those at L-band, the measurement accuracy might still be degraded by long-wavelength signals due to the ionosphere. We implement the range split-spectrum method for correcting ionospheric effects in InSAR with C-band Sentinel-1 TOPS data. We perform InSAR time series analysis and evaluate these ionospheric effects using data acquired on both ascending (dusk-side of the Sentinel-1 dawn–dusk orbit) and descending (dawn-side) tracks over representative midlatitude and low-latitude (geomagnetic latitude) areas. We find that the ionospheric effects are very strong for data acquired at low latitudes on ascending tracks. For other cases, ionospheric effects are not strong or even negligible. The application of the range split-spectrum method, despite some implementation challenges, largely removes ionospheric effects, and thus improves the InSAR time series analysis results.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
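The range split-spectrum combination used in the entry above has a standard closed form: with unwrapped sub-band interferogram phases phi_L and phi_H at sub-band center frequencies f_L and f_H, the dispersive (ionospheric) phase at the full-band center f_0 is f_L f_H (phi_L f_H - phi_H f_L) / (f_0 (f_H^2 - f_L^2)). A minimal sketch with illustrative Sentinel-1-like numbers; the phases below are toy arrays standing in for real unwrapped interferograms.

```python
# Standard range split-spectrum separation of dispersive and non-dispersive phase.
import numpy as np

def split_spectrum_ionosphere(phi_low, phi_high, f_low, f_high, f0):
    """Return the dispersive (ionospheric) and non-dispersive phase at center frequency f0."""
    denom = f_high**2 - f_low**2
    phi_iono = (f_low * f_high) / (f0 * denom) * (phi_low * f_high - phi_high * f_low)
    phi_nondisp = f0 / denom * (phi_high * f_high - phi_low * f_low)
    return phi_iono, phi_nondisp

# Illustrative C-band numbers (Hz) and toy sub-band phases (rad).
f0, bw = 5.405e9, 56e6
f_low, f_high = f0 - bw / 3, f0 + bw / 3
phi_low = np.full((3, 3), 1.00)
phi_high = np.full((3, 3), 0.98)
iono, nondisp = split_spectrum_ionosphere(phi_low, phi_high, f_low, f_high, f0)
```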
       
  • Digital Terrain Model Retrieval in Tropical Forests Through P-Band SAR
           Tomography
    • Authors: Mauro Mariotti D’Alessandro;Stefano Tebaldini;
      Pages: 6774 - 6781
      Abstract: This paper focuses on the retrieval of terrain topography below dense tropical forests by means of synthetic aperture radar (SAR) systems. Low-frequency signals are needed to penetrate such a thick vegetation layer; however, this expedient alone does not guarantee proper retrieval. It is demonstrated here that the phase center of P-band backscatter may lie several meters above the ground, depending on the slope and incidence angle. SAR tomography is shown to overcome this problem and to retrieve the actual topography even in the presence of dense trees up to 50 m tall. Digital terrain models returned by SAR tomography are compared here with light detection and ranging (LiDAR) terrain models: the accuracy of the radar-derived maps is found to be at least comparable with the one offered by LiDAR systems. Moreover, the discrepancy between tomography and LiDAR is larger when large-footprint LiDAR is considered, suggesting that, in this case, the tomographic maps should be considered the reference height. The analyses are carried out by processing three data sets gathered over different tropical forests in western Africa. The robustness of the radar estimates is assessed with respect to both ground slope and treetop height.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • An Adaptive $L^{p}$ -Penalization Method to Enhance the Spatial Resolution
           of Microwave Radiometer Measurements
    • Authors: Matteo Alparone;Ferdinando Nunziata;Claudio Estatico;Flavia Lenti;Maurizio Migliaccio;
      Pages: 6782 - 6791
      Abstract: In this paper, we introduce a novel approach to enhance the spatial resolution of single-pass microwave data collected by mesoscale sensors. The proposed rationale is based on an $L^{p}$ -minimization approach with a variable $p$ exponent. The algorithm automatically adapts the $p$ exponent to the region of the image to be reconstructed. This approach combines the advantages of regularization in Hilbert ( $p = 2$ ) and Banach ( $1 < p < 2$ ) spaces. Experiments are undertaken for the microwave radiometer case and refer to both actual and simulated data collected by the special sensor microwave imager (SSM/I). The results demonstrate the benefits of the proposed method in reconstructing abrupt discontinuities and smooth gradients with respect to conventional approaches in Hilbert or Banach spaces. (A fixed-$p$ toy solver follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
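A rough fixed-p stand-in for the L^p-penalized reconstruction idea above (the paper's contribution is the spatially adaptive choice of p, which this toy omits): iteratively reweighted least squares on a small simulated antenna-smearing problem. The kernel width, penalty weight, and sparsity pattern are illustrative assumptions.

```python
# Illustrative IRLS solver for min ||Ax - y||^2 + lam * ||x||_p^p with a fixed exponent p.
import numpy as np

def lp_irls(A, y, p=1.3, lam=1e-2, n_iter=50, eps=1e-6):
    x = np.linalg.lstsq(A, y, rcond=None)[0]        # L2 (Hilbert-space) starting point
    for _ in range(n_iter):
        w = (np.abs(x) + eps) ** (p - 2)            # reweighting toward the L^p penalty
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

# Toy antenna-pattern smearing: a blurred sparse brightness profile is sharpened.
rng = np.random.default_rng(1)
n = 60
x_true = np.zeros(n)
x_true[[15, 40]] = [1.0, -0.7]
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
y = A @ x_true + 0.01 * rng.standard_normal(n)
x_hat = lp_irls(A, y)
```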
       
  • Feature Fusion With Predictive Weighting for Spectral Image Classification
           and Segmentation
    • Authors: Yu Zhang;Cong Phuoc Huynh;King Ngi Ngan;
      Pages: 6792 - 6807
      Abstract: In this paper, we propose a spatial–spectral feature fusion model with a predictive feature weighting mechanism and demonstrate its applications to the problems of hyperspectral image classification and segmentation. To address these problems, we learn a set of 1-D convolutional local spectral filters and 2-D spatial–spectral filters that feed features into a fusion module, in an end-to-end fashion. We propose a lightweight predictive feature weighting component embedded in the fusion model and consider four fusion design options, i.e., adding or concatenating features with equal or predicted weights. For the pixel classification task, the training input consists of image patches with labeled central pixels, whereas for the spatial segmentation task, it includes the label maps of image regions. The proposed networks have been evaluated on the Indian Pines, Pavia University, and Houston University data sets for the classification problem and on the SpaceNet data set for the spatial segmentation problem. The quantitative results favor the proposed approach over the state-of-the-art methods across all four data sets.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Unsupervised Spatial–Spectral Feature Learning by 3D Convolutional
           Autoencoder for Hyperspectral Classification
    • Authors: Shaohui Mei;Jingyu Ji;Yunhao Geng;Zhi Zhang;Xu Li;Qian Du;
      Pages: 6808 - 6820
      Abstract: Feature learning technologies using convolutional neural networks (CNNs) have shown superior performance over traditional hand-crafted feature extraction algorithms. However, a large number of labeled samples are generally required for a CNN to learn effective features under a classification task, and such samples are hard to obtain for hyperspectral remote sensing images. Therefore, in this paper, an unsupervised spatial–spectral feature learning strategy is proposed for hyperspectral images using a 3-D convolutional autoencoder (3D-CAE). The proposed 3D-CAE consists of 3-D or elementwise operations only, such as 3-D convolution, 3-D pooling, and 3-D batch normalization, to maximally exploit spatial–spectral structure information for feature extraction. A companion 3-D convolutional decoder network is also designed to reconstruct the input patterns of the proposed 3D-CAE, so that all the parameters involved in the network can be trained without labeled training samples. As a result, effective features are learned in an unsupervised manner, in which label information of pixels is not required. Experimental results on several benchmark hyperspectral data sets demonstrate that the proposed 3D-CAE is very effective in extracting spatial–spectral features and outperforms not only traditional unsupervised feature extraction algorithms but also many supervised feature extraction algorithms in classification applications. (A toy encoder–decoder sketch follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
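A toy 3-D convolutional autoencoder in PyTorch, illustrating the unsupervised reconstruction objective described above; the layer widths, patch size, and absence of pooling are illustrative assumptions and not the authors' architecture.

```python
# Minimal 3-D convolutional autoencoder sketch trained only with a reconstruction loss.
import torch
import torch.nn as nn

class TinyCAE3D(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 3-D convolutions over (bands, height, width) patches.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm3d(8),
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # Decoder mirrors the encoder to reconstruct the input patch.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, kernel_size=3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose3d(8, 1, kernel_size=3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)              # unsupervised spatial-spectral features
        return self.decoder(z), z

# Usage: reconstruct 16-band 8x8 patches; no labels are involved in the loss.
x = torch.randn(4, 1, 16, 8, 8)
recon, feats = TinyCAE3D()(x)
loss = nn.functional.mse_loss(recon, x)
```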
       
  • Continuous Human Motion Recognition With a Dynamic Range-Doppler
           Trajectory Method Based on FMCW Radar
    • Authors: Chuanwei Ding;Hong Hong;Yu Zou;Hui Chu;Xiaohua Zhu;Francesco Fioranelli;Julien Le Kernec;Changzhi Li;
      Pages: 6821 - 6831
      Abstract: Radar-based human motion recognition is crucial for many applications, such as surveillance, search and rescue operations, smart homes, and assisted living. Continuous human motion recognition in a real-living environment is necessary for practical deployment, i.e., classification of a sequence of activities transitioning one into another, rather than of individual activities. In this paper, a novel dynamic range-Doppler trajectory (DRDT) method based on a frequency-modulated continuous-wave (FMCW) radar system is proposed to recognize continuous human motions under various conditions emulating a real-living environment. This method can separate continuous motions and process them as single events. First, range-Doppler frames consisting of a series of range-Doppler maps are obtained from the backscattered signals. Next, the DRDT is extracted from these frames to monitor human motions in the time, range, and Doppler domains in real time. Then, a peak search method is applied to locate and separate each human motion from the DRDT map. Finally, range, Doppler, radar cross section (RCS), and dispersion features are extracted and combined in a multidomain fusion approach as inputs to a machine learning classifier. This achieves accurate and robust recognition even under varying conditions of distance, view angle, direction, and individual diversity. Extensive experiments have been conducted to show its feasibility and superiority, obtaining an average accuracy of 91.9% on continuous classification. (A minimal range-Doppler map sketch follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
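The range-Doppler frames that the DRDT method builds on come from a standard two-dimensional FFT of the de-chirped FMCW beat signal; a minimal sketch follows (the peak search, feature extraction, and classification stages are not reproduced, and the simulated target is a toy).

```python
# Minimal range-Doppler map: FFT along fast time gives range bins, FFT across chirps
# (slow time) gives Doppler bins.
import numpy as np

def range_doppler_map(beat, window=True):
    """beat: (n_chirps, n_samples) de-chirped beat signal, one row per chirp."""
    n_chirps, n_samples = beat.shape
    if window:
        beat = beat * np.hanning(n_samples)[None, :]
    rng_fft = np.fft.fft(beat, axis=1)                         # range dimension
    rd = np.fft.fftshift(np.fft.fft(rng_fft, axis=0), axes=0)  # Doppler dimension
    return 20 * np.log10(np.abs(rd) + 1e-12)

# Toy target: one beat frequency (range) plus a slow phase ramp across chirps (Doppler).
n_chirps, n_samples = 64, 256
t = np.arange(n_samples) / n_samples
k = np.arange(n_chirps)
beat = np.exp(2j * np.pi * (40 * t[None, :] + 0.1 * k[:, None]))
rd_map = range_doppler_map(beat)
```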
       
  • A Global Adjustment Method for Photogrammetric Processing of
           Chang’E-2 Stereo Images
    • Authors: Xin Ren;Jianjun Liu;Chunlai Li;Haihong Li;Wei Yan;Fengfei Wang;Wenrui Wang;Xiaoxia Zhang;Xingye Gao;Wangli Chen;
      Pages: 6832 - 6843
      Abstract: The Chang’E-2 (CE2) lunar orbiter was the second robotic orbiter of the Chinese Lunar Exploration Program, as well as the pioneer robotic orbiter for the soft landing project in the second phase of the program. It used a two-line stereo camera to acquire stereo images with global coverage at a resolution of 7 m. These stereo images have large potential for producing the best lunar topographic map. However, errors and uncertainties in the interior orientation (IO) and exterior orientation (EO) parameters of the camera seriously affected the accuracy of the global topographic mapping. In this paper, a global adjustment method is proposed to eliminate the effects of these errors. The error models are represented by a Chebyshev polynomial, and the polynomial coefficients were estimated as unknowns using five lunar laser ranging retroreflector (LRRR) points as ground control points in the adjustment. The experimental results show that the planimetric and height deviations between neighboring strips were 5 and 2 m (less than 1 pixel), respectively, a decrease by factors of 32.6 and 31.5 relative to those derived from the original EO parameters. The large inconsistencies in the CE2 trajectory data were significantly reduced after the adjustment. In comparison with the LRRR positions, the planimetric and height errors ranged from 21 to 97 m and from −19 to 10 m, respectively. A new seamless mosaic and high-precision, absolutely positioned topographic map has been generated using this method. (A small Chebyshev-fitting sketch follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
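The error model above is a Chebyshev polynomial in normalized along-track time; the toy fit below, using numpy's Chebyshev class, only illustrates that representation. The simulated drift values are arbitrary, and the actual method estimates the coefficients inside a photogrammetric adjustment with LRRR control points rather than by a direct fit.

```python
# Least-squares Chebyshev fit to a slowly varying (simulated) orientation error.
import numpy as np
from numpy.polynomial import Chebyshev

t = np.linspace(-1.0, 1.0, 200)                  # normalized along-track time
rng = np.random.default_rng(2)
true_error = 5e-4 * t - 3e-4 * (2 * t**2 - 1)    # toy degree-2 Chebyshev-shaped drift (rad)
observed = true_error + 2e-5 * rng.standard_normal(t.size)

model = Chebyshev.fit(t, observed, deg=3)        # estimated Chebyshev coefficients
corrected = observed - model(t)                  # residual after removing the modeled drift
```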
       
  • Simultaneous Mapping of Coastal Topography and Bathymetry From a
           Lightweight Multicamera UAS
    • Authors: Katherine L. Brodie;Brittany L. Bruder;Richard K. Slocum;Nicholas J. Spore;
      Pages: 6844 - 6864
      Abstract: A low-cost multicamera Unmanned Aircraft System (UAS) is used to simultaneously estimate open-coast topography and bathymetry from a single longitudinal coastal flight. The UAS combines nadir and oblique imagery to create a wide field of view (FOV), which enables the collection of mobile, long-dwell time series of the littoral zone suitable for structure-from-motion (SfM) and wave speed inversion algorithms. The resultant digital surface models (DSMs) compare well with terrestrial topographic lidar and bathymetric survey data at Duck, NC, USA, with root-mean-square error (RMSE)/bias of 0.26/−0.05 and 0.34/−0.05 m, respectively. Bathymetric data from another flight at Virginia Beach, VA, USA, demonstrate a successful comparison (RMSE/bias of 0.17/0.06 m) in a secondary environment. UAS-derived engineering data products (total volume profiles and shoreline position) were congruent with those calculated from traditional topo-bathymetric surveys at Duck. By capturing both topography and bathymetry within a single flight, the presented multicamera system is more efficient than data acquisition with a single-camera UAS; this advantage grows for longer stretches of coastline (10 km). Efficiency increases further with an on-board Global Navigation Satellite System–Inertial Navigation System (GNSS-INS) that eliminates ground control point (GCP) placement. The Appendix reprocesses the Virginia Beach flight with the GNSS-INS input and no GCPs. The resultant DSM products are comparable [root-mean-square difference (RMSD)/bias of 0.62/−0.09 m], and the processing time is significantly reduced.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Prediction of Sea Ice Motion With Convolutional Long Short-Term Memory
           Networks
    • Authors: Zisis I. Petrou;Yingli Tian;
      Pages: 6865 - 6876
      Abstract: Prediction of sea ice motion is important for safeguarding human activities in polar regions, such as ship navigation, fisheries, and oil and gas exploration, as well as for climate and ocean-atmosphere interaction models. Numerical prediction models used for sea ice motion prediction often require a large number of data from diverse sources with varying uncertainties. In this paper, a deep learning approach is proposed to predict sea ice motion for several days in the future, given only a series of past motion observations. The proposed approach consists of an encoder–decoder network with convolutional long short-term memory (LSTM) units. Optical flow is calculated from satellite passive microwave and scatterometer daily images covering the entire Arctic and used in the network. The network proves able to learn long-time dependencies within the motion time series, whereas its convolutional structure effectively captures spatial correlations among neighboring motion vectors. The approach is unsupervised and end-to-end trainable, requiring no manual annotation. Experiments demonstrate that the proposed approach is effective in predicting sea ice motion of up to 10 days in the future, outperforming previous deep learning networks and being a promising alternative or complementary approach to resource-demanding numerical prediction methods.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Subdictionary-Based Joint Sparse Representation for SAR Target Recognition
           Using Multilevel Reconstruction
    • Authors: Zhi Zhou;Zongjie Cao;Yiming Pi;
      Pages: 6877 - 6887
      Abstract: Template-matching-based approaches have been developed for many years in the field of synthetic aperture radar (SAR) automatic target recognition (ATR). However, the performance of template-matching-based approaches is strongly affected by two factors: background clutter and noise, and the size of the data set. To address these problems, a multilevel reconstruction-based multitask joint sparse representation method is proposed in this paper. According to the theory of the attributed scattering center (ASC) model, a SAR image exhibits strong point-scatter-like behavior, which can be modeled by scattering centers on the target. As a result, the ASCs can be extracted from SAR images based on the ASC model. Then, the ASCs extracted from SAR images are used to reconstruct the SAR target at multiple levels based on the energy ratio (ER). The multilevel reconstruction is a process of data augmentation, which can not only suppress the background clutter and noise but also enlarge the data set. Several subdictionaries are designed after the multilevel reconstruction according to the labels of the training samples. Meanwhile, a test image chip is reconstructed into multiple test images. The random projection coefficients associated with the multiple reconstructed test images are fed into a multitask joint sparse representation classification framework. The final decision is made in terms of the accumulated reconstruction error. Experiments on the moving and stationary target acquisition and recognition (MSTAR) data set prove the effectiveness of our method.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Effect of Anisotropy on Ionospheric Scintillations Observed by SAR
    • Authors: Shradha Mohanty;Charles S. Carrano;Gulab Singh;
      Pages: 6888 - 6899
      Abstract: Studies pertaining to the small-scale structures that produce scintillations, observed using synthetic aperture radars (SARs), have predominantly been conducted in low-latitude regions. The high-latitude region (auroral belt and polar caps) is highly dynamic and varies in response to stimuli from solar winds and the magnetosphere in complex ways. In this paper, the authors demonstrate the capability of SAR for scintillation observation in the auroral region. An attempt is made to fit an irregularity anisotropy model to SAR measurements in order to characterize the ionospheric irregularities in the auroral regions. The dependency of the anisotropy irregularity model on parameters such as the irregularity structure (axial ratio), the orientation with respect to the magnetic field lines, and the ionospheric plasma drift is studied closely using Advanced Land Observing Satellite (ALOS)-2 data sets acquired over Alaska. Geomagnetic indices and total electron content data are consistent with the occurrence of the scintillation event under study. Drift velocity measurements from high-frequency radars in the Super Dual Auroral Radar Network (SuperDARN) show that the anisotropy is independent of the magnitude and the azimuth angle of the plasma drift. The typical range of orientation angles suitable for the high-latitude regions probed by ALOS-2 is demonstrated to be between 120° and 135°. This paper explores the idea of inferring irregularity anisotropy by comparing the amplitude scintillation ( $S_{4}$ ) index measured in SAR data pairs using two well-established techniques. The image contrast technique relies heavily on accurate modeling of the anisotropy, whereas the radar cross-sectional enhancement method is independent of it. This feature has been exploited in the $S_{4}$ comparison to fit the choice of irregularity axial ratio and conclude that sheet-like structures best describe the ionospheric irregularity structure in the region under observation. (A basic $S_{4}$ computation follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
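The amplitude scintillation index used above has the standard definition S4 = sqrt((<I^2> - <I>^2) / <I>^2), i.e., the normalized standard deviation of detrended intensity; a minimal computation on synthetic samples follows (the SAR-pair estimation techniques of the paper are not reproduced).

```python
# Standard S4 amplitude scintillation index from intensity samples.
import numpy as np

def s4_index(intensity):
    intensity = np.asarray(intensity, dtype=float)
    return np.sqrt(max(intensity.var(), 0.0)) / intensity.mean()

# Weak scintillation example: 5% intensity fluctuations give S4 of roughly 0.05.
rng = np.random.default_rng(3)
I = 1.0 + 0.05 * rng.standard_normal(10_000)
print(round(s4_index(I), 3))
```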
       
  • Preregistration Classification of Mobile LIDAR Data Using Spatial
           Correlations
    • Authors: Ville V. Lehtola;Matti Lehtomäki;Heikki Hyyti;Risto Kaijaluoto;Antero Kukko;Harri Kaartinen;Juha Hyyppä;
      Pages: 6900 - 6915
      Abstract: We explore a novel paradigm for light detection and ranging (LIDAR) point classification in mobile laser scanning (MLS). In contrast to the traditional scheme of performing classification on a 3-D point cloud after registration, our algorithm operates on the raw data stream, classifying the points on-the-fly before registration. Hence, we call it preregistration classification (PRC). Specifically, this technique is based on spatial correlations, i.e., local range measurements supporting each other. The proposed method is general, since exact scanner pose information is not required, nor is any radiometric calibration needed. We also show that the method can be applied in different environments by adjusting two control parameters, without the results being overly sensitive to this adjustment. As results, we present the classification of points from an urban environment, where noise, ground, buildings, and vegetation are distinguished from each other, and of points from a forest, where tree stems and ground are separated from the other points. As the computations are efficient and performed with a minimal cache, the proposed methods enable new on-chip deployable algorithmic solutions. Broader benefits from the spatial correlations and the computational efficiency of the PRC scheme are likely to be gained in several online and offline applications. These range from single robotic platform operations, including simultaneous localization and mapping (SLAM) algorithms, to wall-clock time savings in the geoinformation industry. Finally, PRC is especially attractive for continuous-beam and solid-state LIDARs, which are prone to output noisy data.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Scale-Free Convolutional Neural Network for Remote Sensing Scene
           Classification
    • Authors: Jie Xie;Nanjun He;Leyuan Fang;Antonio Plaza;
      Pages: 6916 - 6928
      Abstract: Fine-tuning of pretrained convolutional neural networks (CNNs) has been proven to be an effective strategy for remote sensing image scene classification, particularly when only a limited number of labeled samples are available for training. However, such a fine-tuning process often requires that the input images be resized to a fixed size to generate input vectors of the size required by the fully connected layers (FCLs) in the pretrained CNN model. This resizing process often discards key information in the scenes and thus deteriorates the classification performance. To address this issue, in this paper, we introduce a scale-free CNN (SF-CNN) for remote sensing scene classification. Specifically, the FCLs in the CNN model are first converted into convolutional layers, which not only allows the input images to be of arbitrary sizes but also retains the ability to extract discriminative features using a traditional sliding-window-based strategy. Then, a global average pooling (GAP) layer is added after the final convolutional layer so that input images of arbitrary size can be mapped to feature maps of uniform size. Finally, we utilize the resulting feature maps to create a new FCL that is fed to a softmax layer for the final classification. Our experimental results on several real data sets demonstrate the superiority of the proposed SF-CNN method over several well-known classification methods, including pretrained CNN-based ones. (A toy sketch of the conversion-plus-GAP idea follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
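A toy PyTorch sketch of the two ideas named above: replacing fully connected layers with convolutions and adding global average pooling so inputs of arbitrary size are accepted. The backbone, channel counts, and use of a 1x1 classification convolution are illustrative assumptions, not the authors' pretrained network.

```python
# Scale-free classification head: 1x1 convolution in place of an FC layer, then GAP.
import torch
import torch.nn as nn

class ToySFCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution plays the role of the former fully connected layer,
        # so images of arbitrary spatial size are accepted without resizing.
        self.classifier_conv = nn.Conv2d(64, num_classes, kernel_size=1)
        self.gap = nn.AdaptiveAvgPool2d(1)      # maps any feature-map size to 1x1

    def forward(self, x):
        x = self.features(x)
        x = self.classifier_conv(x)
        return self.gap(x).flatten(1)           # (batch, num_classes)

# Scenes of different sizes pass through the same network without resizing.
for size in (64, 100):
    logits = ToySFCNN()(torch.randn(2, 3, size, size))
    print(logits.shape)                          # torch.Size([2, 10]) in both cases
```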
       
  • A Large-Scale Multi-Institutional Evaluation of Advanced Discrimination
           Algorithms for Buried Threat Detection in Ground Penetrating Radar
    • Authors: Jordan M. Malof;Daniël Reichman;Andrew Karem;Hichem Frigui;K. C. Ho;Joseph N. Wilson;Wen-Hsiung Lee;William J. Cummings;Leslie M. Collins;
      Pages: 6929 - 6945
      Abstract: In this paper, we consider the development of algorithms for the automatic detection of buried threats using ground penetrating radar (GPR) measurements. GPR is one of the most studied and successful modalities for automatic buried threat detection (BTD), and a large variety of BTD algorithms have been proposed for it. Despite this, large-scale comparisons of GPR-based BTD algorithms are rare in the literature. In this paper, we report the results of a multi-institutional effort to develop advanced BTD algorithms for a real-world GPR BTD system. The effort involved five institutions with substantial experience in the development of GPR-based BTD algorithms. We report the technical details of the advanced algorithms submitted by each institution, representing their latest technical advances and many state-of-the-art GPR-based BTD algorithms. We also report the results of evaluating the algorithms from each institution on the large experimental data set used for development. The experimental data set comprised 120 000 m² of GPR data by surface area, collected from 13 different lanes across two U.S. test sites. The data were collected using a vehicle-mounted GPR system, variants of which have supplied data for numerous publications. Using these results, we identify the most successful and common processing strategies among the submitted algorithms and make recommendations for GPR-based BTD algorithm design.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • A Terrestrial Validation of ICESat Elevation Measurements and Implications
           for Global Reanalyses
    • Authors: Adrian A. Borsa;Helen Amanda Fricker;Kelly M. Brunt;
      Pages: 6946 - 6959
      Abstract: The primary goal of NASA’s Ice, Cloud, and land Elevation Satellite (ICESat) mission was to detect centimeter-level changes in global ice sheet elevations at the spatial scale of individual ice streams. Confidence in detecting these small signals requires careful validation over time to characterize the uncertainty and stability of measured elevations. A common validation approach compares altimeter elevations to an independently characterized and stable reference surface. Using a digital elevation model (DEM) from geodetic surveys of one such surface, the salar de Uyuni in Bolivia, we show that ICESat elevations at this location have a 0.0-cm bias relative to the WGS84 ellipsoid, 4.0-cm (1-sigma) uncertainty overall, and 1.8-cm uncertainty under ideal conditions over short (50 km) profiles. We observe no elevation bias between ascending and descending orbits, but we do find that elevations measured immediately after transitions from low to high surface albedo may be negatively biased. Previous studies have reported intercampaign biases (ICBs) between various ICESat observation campaigns, but we find no statistically significant ICBs or ICB trends in our data. We do find a previously unreported 3.1-cm bias between ICESat’s Laser 2 and Laser 3, and we find even larger interlaser biases in reanalyzed data from other studies. For an altimeter with an exact repeat orbit like ICESat, we also demonstrate that validation results with respect to averaged elevation profiles along a single ground track are comparable to results obtained using reference elevations from an in situ survey.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Transferred Deep Learning-Based Change Detection in Remote Sensing Images
    • Authors: Meijuan Yang;Licheng Jiao;Fang Liu;Biao Hou;Shuyuan Yang;
      Pages: 6960 - 6973
      Abstract: Supervised deep neural networks (DNNs) have been extensively used in diverse tasks. Generally, training such DNNs to superior performance requires a large amount of labeled data. However, it is time-consuming and expensive to manually label the data, especially for tasks in remote sensing such as change detection. This situation motivates us to resort to existing related images with labels, from which the concept of change can be adapted to new images. However, the distributions of the related labeled images (source domain) and the unlabeled new images (target domain) are similar but not identical, which impedes a change detection model learned from the source domain from being applied effectively to the target domain. In this paper, we propose a transferred deep learning-based change detection framework to solve this problem. It consists of pretraining and fine-tuning stages. In the pretraining stage, we propose two tasks to be learned simultaneously, namely, change detection for the source domain with labels and reconstruction of the unlabeled target data. The auxiliary task aims to reconstruct the difference image (DI) for the target domain. The DI is an effective feature, so the auxiliary task is highly relevant to change detection. The lower layers are shared between these two tasks during training, which mitigates the distribution discrepancy between the source and target domains and lets the concept of change learned in the source domain adapt to the target domain. In addition, we evaluate three modes of the U-net architecture for merging the information of a pair of patches. To fine-tune the change detection network (CDN) for the target domain, two strategies are exploited to select the pixels that have a high probability of being correctly classified by an unsupervised approach. The proposed method demonstrates an excellent capacity for adapting the concept of change from the source domain to the target domain and outperforms state-of-the-art change detection methods in experiments on real remote sensing data sets.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • High-Speed Maneuvering Platforms Squint Beam-Steering SAR Imaging Without
           Subaperture
    • Authors: Bowen Bie;Guang-Cai Sun;Xiang-Gen Xia;Mengdao Xing;Liang Guo;Zheng Bao;
      Pages: 6974 - 6985
      Abstract: This paper investigates the imaging problems of squint beam-steering synthetic aperture radar (SBS-SAR) mounted on high-speed platforms with constant acceleration. The cross-range-dependent range cell migration (RCM) is compensated by the keystone transform (KT) and a time-domain RCM correction (RCMC). Through derotation and phase compensation, the KT of the Doppler-folded signal is achieved without zero-padding. For azimuth processing, the signal is reconstructed by a nonlinear-phase and range-dependent derotation. Then, the space-variant (SV) Doppler chirp rate is corrected by time-domain azimuth nonlinear chirp scaling (ANCS). After frequency-domain matched filtering, the full-aperture signal is focused in the 2-D time domain. The algorithm is validated with simulated SAR data, including the evaluation of the RCMC with KT, the geometric correction, and the focusing performance.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • A Novel Approach to SAR Ocean Wind Retrieval
    • Authors: Vegard Nilsen;Geir Engen;Harald Johnsen;
      Pages: 6986 - 6995
      Abstract: A novel approach to Bayesian ocean wind retrieval from high-range-bandwidth synthetic aperture radar (SAR) data is demonstrated and validated using global Sentinel-1 (S1) A and B WV data acquired in October 2016 and January 2017. The retrieval is based on spectral parameters defined from the full-resolution image cross spectra. The first parameter is the integral spectral value (ISV), defined as the signed spectral energy at high range wavenumbers. Two other parameters, the azimuth phase plane slope (APPS) and the range phase plane slope (RPPS), are the slopes of the phase plane of the image cross spectra. Together with the normalized radar cross section (NRCS), these parameters form the input to our data-driven model for ocean wind retrieval. The model is trained on S1B data from October 2016 and validated on S1A and S1B data from January 2017 collocated with the European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric wind model as “ground” truth. The APPS proves to be the combination of two sinusoidal functions, one symmetric and one antisymmetric; the antisymmetric part is directly related to the azimuth wind direction. Our Bayesian model achieves standard deviations of 1.73 m/s and 49.26° for the January 2017 S1B data set, with biases of 0.03 m/s and −1.55°; the corresponding results for the January 2017 S1A data are 1.79 m/s and 49.95°, with biases of 0.41 m/s and −1.89°. Considering only data with ECMWF wind speed above 7 m/s, we achieve standard deviations of 1.81 m/s and 33.16°, with biases of 0.1 m/s and −1.31°, for the January 2017 S1B data set.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Airborne Circular W-Band SAR for Multiple Aspect Urban Site Monitoring
    • Authors: Stephan Palm;Rainer Sommer;Daniel Janssen;Axel Tessmann;Uwe Stilla;
      Pages: 6996 - 7016
      Abstract: This paper presents a strategy for urban site monitoring by very high-resolution circular synthetic aperture radar (CSAR) imaging from multiple aspects. We analytically derive the limits of coherent azimuth processing for nonplanar objects in CSAR when no digital surface model (DSM) is available. The result indicates the maximum achievable resolution for such objects in this geometry. The difficulty of constantly illuminating a specific scene in full-aspect mode (360°) at such small wavelengths is solved by a hardware- and software-side integration of the radar in a mechanical tracking mode. This results in the first demonstration of full-aspect airborne subaperture CSAR images collected with an active frequency-modulated continuous-wave (FMCW) radar at W-band. We describe the geometry and the implementation of the real-time beam-steering mode and evaluate the resulting effects in the CSAR processing chain. The physical properties of W-band allow the use of extremely short subapertures while still generating high azimuthal bandwidths. We use this feature to generate full-aspect image stacks for CSAR video monitoring at very high frame rates. This technique offers the capability of detecting and observing moving objects in single-channel data by shadow tracking. Due to the relatively strong echo of roads, the shadows of moving cars are rich in contrast. The image stack is further evaluated to present the wide-angular anisotropic properties of targets and first results on multiple-aspect image fusion. Both topics show huge potential for further investigation in terms of image analysis and scene classification.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • One-Bit SAR Imaging Based on Single-Frequency Thresholds
    • Authors: Bo Zhao;Lei Huang;Weimin Bao;
      Pages: 7017 - 7032
      Abstract: This paper presents a novel SAR imaging scheme based on 1-bit sampling assisted by a single-frequency threshold. The 1-bit sampling approach can considerably reduce the quantization cost. However, when the sampling technique simplifies the SAR system by reducing the word length of each sample to only 1 bit, the amplitude information of the SAR echo is lost and high-order harmonics are introduced, degrading the SAR imaging quality. The single-frequency threshold strategy linearly preserves the amplitude information and shifts the spectra of the harmonics away from the imaging component caused by intermodulation. Hence, the imaging quality obtained with 1-bit sampled data can be guaranteed. By selecting different SAR parameter groups according to a comprehensive consideration of spectrum aliasing, filter mismatch, and radio frequency interference, a good tradeoff can be achieved between imaging quality and system simplification. Examples with ideal scatterers and a real measured scene are provided for quantitative analysis. Real measured RADARSAT-1 data are also imaged using the proposed scheme to validate its effectiveness. (A toy 1-bit quantization sketch follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
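A toy illustration of 1-bit sampling against a single-frequency threshold signal, keeping only the sign of (echo - threshold) in each quadrature channel; the threshold frequency, amplitudes, and noise level are illustrative assumptions, and the SAR focusing stage described above is not reproduced.

```python
# One-bit quantization of a complex echo against a single-frequency threshold tone.
import numpy as np

def one_bit_sample(echo, threshold):
    """Return +/-1 samples of the real and imaginary parts relative to the threshold signal."""
    re = np.sign(echo.real - threshold.real)
    im = np.sign(echo.imag - threshold.imag)
    return re + 1j * im

rng = np.random.default_rng(4)
n = 1024
t = np.arange(n)
echo = np.exp(2j * np.pi * 0.05 * t) + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
threshold = 0.8 * np.exp(2j * np.pi * 0.21 * t)   # single-frequency threshold tone
quantized = one_bit_sample(echo, threshold)       # each sample now carries 1 bit per channel
```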
       
  • Fast 3-D Imaging Algorithm Based on Unitary Transformation and Real-Valued
           Sparse Representation for MIMO Array SAR
    • Authors: Chunxiao Wu;Zenghui Zhang;Xingdong Liang;Longyong Chen;Wenxian Yu;Trieu-Kien Truong;
      Pages: 7033 - 7047
      Abstract: Multiple-input multiple-output (MIMO) array synthetic aperture radar (SAR) with array antennas distributed along the cross-track direction can obtain 3-D scene information of the surveillance region. However, the cross-track resolution is unacceptable due to the length limitation of the MIMO antenna array. Superresolution algorithms within the framework of compressive sensing (CS) have been introduced to recover the cross-track signal because of its inherent spatial sparsity. The existing sparse recovery algorithms for 3-D SAR attempt to find the sparse solution directly in the complex domain, which requires a very high computational complexity. To overcome this problem, a new fast 3-D imaging algorithm based on real-valued sparse representation is proposed in this paper. In this new algorithm, a unitary transformation is employed to transform the sparse signal recovery model of uniform/nonuniform MIMO array SAR from the complex domain to the real domain. Thus, a real-valued reweighted $\ell_{2,1}$ -norm minimization model is established. In addition, a modification of the fast iterative shrinkage-thresholding algorithm (FISTA) is used to reconstruct the 3-D image to further improve the computational efficiency. Moreover, a theoretical analysis of the computational complexity of the proposed algorithm is derived in comparison with an existing complex-domain algorithm. Finally, numerical simulations and real MIMO array SAR experimental results validate that the proposed algorithm can significantly reduce the computational complexity in terms of CPU time while maintaining the inherent advantages of superresolution and robustness against noise. (A textbook FISTA sketch follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
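Textbook FISTA for an l1-regularized least-squares problem, shown only as a simplified stand-in for the modified FISTA the authors apply to their real-valued reweighted l2,1 model; the measurement matrix and sparsity pattern below are synthetic.

```python
# Standard FISTA for min 0.5*||Ax - y||^2 + lam*||x||_1 with soft thresholding.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, y, lam=0.05, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy cross-track recovery: a sparse reflectivity profile observed through a random matrix.
rng = np.random.default_rng(6)
A = rng.standard_normal((32, 128)) / np.sqrt(32)
x_true = np.zeros(128)
x_true[[10, 70, 100]] = [1.0, -0.5, 0.8]
y = A @ x_true
x_hat = fista(A, y)
```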
       
  • Automatic Design of Convolutional Neural Network for Hyperspectral Image
           Classification
    • Authors: Yushi Chen;Kaiqiang Zhu;Lin Zhu;Xin He;Pedram Ghamisi;Jón Atli Benediktsson;
      Pages: 7048 - 7066
      Abstract: Hyperspectral image (HSI) classification is a core task in the remote sensing community, and recently, deep learning-based methods have shown their capability for accurate classification of HSIs. Among the deep learning-based methods, deep convolutional neural networks (CNNs) have been widely used for HSI classification. In order to obtain good classification performance, substantial effort is required to design a proper deep learning architecture; furthermore, a manually designed architecture may not fit a specific data set very well. In this paper, the idea of automatic CNN design for HSI classification is proposed for the first time. First, a number of operations, including convolution, pooling, identity, and batch normalization, are selected. Then, a gradient descent-based search algorithm is used to effectively find the optimal deep architecture, which is evaluated on the validation data set. After that, the best CNN architecture is selected as the model for HSI classification. Specifically, the automatic 1-D Auto-CNN and 3-D Auto-CNN are used as spectral and spectral–spatial HSI classifiers, respectively. Furthermore, cutout is introduced as a regularization technique for the HSI spectral–spatial classification to further improve the classification accuracy. Experiments on four widely used hyperspectral data sets (i.e., Salinas, Pavia University, Kennedy Space Center, and Indian Pines) show that the automatically designed data-dependent CNNs obtain competitive classification accuracy compared with the state-of-the-art methods. In addition, the automatic design of the deep learning architecture opens a new window for future research, showing the huge potential of using neural architecture optimization for accurate HSI classification.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Detection of Radio Frequency Interference in Microwave Radiometers
           Operating in Shared Spectrum
    • Authors: Priscilla N. Mohammed;Adam J. Schoenwald;Randeep Pannu;Jeffrey R. Piepmeier;Damon Bradley;Soon Chye Ho;Rashmi Shah;James L. Garrison;
      Pages: 7067 - 7074
      Abstract: Microwave radiometers measure weak thermal emission from the Earth, which is broadband in nature. Radio frequency interference (RFI) originates from active transmitters and is typically narrowband, directional, and continuous or intermittent. The Global Precipitation Measurement (GPM) Microwave Imager (GMI) has seen RFI caused by ocean reflections from direct broadcast and communication satellites in the shared 18.7-GHz allocated band. This paper focuses on the use of a complex signal kurtosis algorithm to detect direct broadcast satellite (DBS) signals at 18.7 GHz. An experiment was conducted in August 2017 at the Harvest oil platform, located about 10 km off the coast of central California. Data were collected for direct and ocean-reflected DBS transmissions in the K- and Ku-bands from a commercial geostationary satellite. Results are presented for the complex kurtosis performance with a five-channel quadrature phase-shift keying (QPSK) signal versus the seven-channel case. As the spectrum becomes more occupied, detector performance decreases. Filtering of RFI in the fully occupied spectrum is very difficult, and detection using the complex kurtosis detector is only possible for very large interference-to-noise ratio (INR) values, at −5 dB and higher. This corresponds to over 100 K in a real system such as GMI; therefore, other detection approaches might be more appropriate. (A basic kurtosis statistic follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
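The complex-kurtosis statistic behind the detector above is E[|z|^4]/E[|z|^2]^2, which equals 2 for circular Gaussian (thermal) noise and is pulled away from 2 by a continuous-wave interferer; a minimal computation follows (the multichannel, GMI-specific implementation details are not reproduced, and the interferer amplitude is illustrative).

```python
# Complex-kurtosis test statistic as a simple RFI indicator.
import numpy as np

def complex_kurtosis(z):
    p = np.abs(z) ** 2
    return (p ** 2).mean() / p.mean() ** 2

rng = np.random.default_rng(7)
n = 100_000
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)  # unit-power thermal noise
rfi = np.exp(2j * np.pi * 0.123 * np.arange(n))                              # 0-dB INR sinusoidal interferer

print(round(complex_kurtosis(noise), 3))        # about 2.0 for noise only
print(round(complex_kurtosis(noise + rfi), 3))  # about 1.75, i.e., noticeably below 2
```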
       
  • Assessments of Ocean Wind Retrieval Schemes Used for Chinese Gaofen-3
           Synthetic Aperture Radar Co-Polarized Data
    • Authors: Lin Ren;Jingsong Yang;Alexis A. Mouche;He Wang;Gang Zheng;Juan Wang;Huaguo Zhang;Xiulin Lou;Peng Chen;
      Pages: 7075 - 7085
      Abstract: This paper assesses different retrieval schemes used for the Chinese Gaofen-3 Synthetic Aperture Radar (GF-3 SAR) co-polarized data. The data consist of 4186 GF-3 data points and collocated wind information from sources including the ASCAT scatterometer, HY2A-SCAT scatterometer, and National Data Buoy Center (NDBC) buoy wind data set. The VV-polarized geophysical model function (GMF) is a CMOD7 model while the HH-polarized GMF is a hybrid of the CMOD7 and PR model. Assessments involve comparisons between SAR-derived and collocated winds in terms of the root-mean-square difference (RMSD) and bias. First, a comparison between the two retrieval schemes for the VV-polarized data clearly shows that the optimal scheme performs better than the classical scheme for wind speed retrieval. Comparisons for HH-polarized data show similar results. These experiments indicate that the wind speed RMSDs for the GF-3 co-polarized data are within 2 m/s when using the optimal scheme. Moreover, the wind direction RMSDs from the two schemes have no significant difference, with values near 20°. Overall, these assessments indicate that the GF-3 co-polarized data are sufficient for operational wind speed retrieval using the optimal scheme. However, wind direction retrieval requires further improvement.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Seismic Phase Picking Using Convolutional Networks
    • Authors: Esteban Pardo;Carmen Garfias;Norberto Malpica;
      Pages: 7086 - 7092
      Abstract: When a seismometer network records an earthquake, operators will manually review the waveforms and identify the wave phases, a task known as phase picking. Manual phase picking is a time-consuming process that can be automated using machine learning; however, automatic methods have not yet achieved human-level performance, and open-source implementations of state-of-the-art algorithms are not always available. Convolutional networks have revolutionized the field of image processing, where the large amounts of readily available data make possible near-human performance in tasks such as classification and segmentation. Fortunately, phase picking is also an area where thousands of phases are manually picked, which makes convolutional networks a good fit for the processing of this type of data. In this paper, we describe Cospy, an open-source convolutional phase picker that uses a two-stage analysis in which the first stage segments a rough area around the phase, and the second stage regresses the precise location. Our approach was evaluated on the Northern California Earthquake Data Center (NCEDC) data set and, when targeting picks closer than 0.1 s, it achieved an $F_{1}$ -score of 93.13% for P phases and 91.07% for S phases. Our results show that convolutional networks are on track to achieve human-level performance on the task of seismic phase picking and can contribute to decreasing the need for manual analysis. An open-source implementation of the proposed approach, pretrained on the NCEDC data set, can be downloaded at https://github.com/stbnps/cospy.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Robust Target Detection Within Sea Clutter Based on Graphs
    • Authors: Kun Yan;Yu Bai;Hsiao-Chun Wu;Xiangli Zhang;
      Pages: 7093 - 7103
      Abstract: In this paper, a novel, robust graph-based paradigm for adequate and concise information representation is explored. This new signal representation framework can provide a promising alternative for manifesting the essential structure of random signals. A typical application, namely, target detection within sea clutter, can thus be carried out using our proposed graph-based signal characterization. According to Monte Carlo simulation results, the proposed graph-based signal (target) detection method leads to outstanding performance compared with other existing techniques, especially when the signal-to-noise ratio is rather small (0–6 dB). This new graph-based target detector can be expected to become a future backbone technique for identifying and tracking marine vessels using high-resolution radars.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Infrared Small Target Detection Based on Facet Kernel and Random Walker
    • Authors: Yao Qin;Lorenzo Bruzzone;Chengqiang Gao;Biao Li;
      Pages: 7104 - 7118
      Abstract: Efficient detection of targets immersed in a complex background with a low signal-to-clutter ratio (SCR) is very important in infrared search and tracking (IRST) applications. In this paper, we address the target detection problem in terms of local image segmentation and propose a novel small target detection algorithm derived from the facet kernel and the random walker (RW) algorithm, which comprises four main stages. First, since the RW algorithm is suitable for images with little noise, local order-statistic and mean filtering are applied to remove pixel-sized noises with high brightness (PNHB) and to smooth the infrared images. Second, the infrared image is filtered by the facet kernel to enhance the target pixels, and candidate target pixels are extracted by an adaptive threshold operation. Third, inspired by the properties of infrared targets, a novel local contrast descriptor (NLCD) based on the RW algorithm is proposed to achieve clutter suppression and target enhancement. The candidate target pixels are then selected as central pixels to construct local regions, and the NLCD map of all local regions is computed. The obtained NLCD map is weighted by the facet kernel filtered map to further enhance the target. Finally, the target is detected by a thresholding operation on the weighted map. Experimental results on three data sets show that the proposed method outperforms conventional baseline methods in terms of target detection accuracy.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • A Super-Resolution Sparse Aperture ISAR Sensors Imaging Algorithm via the
           MUSIC Technique
    • Authors: Qiuchen Liu;Aijun Liu;Yong Wang;Hongzhi Li;
      Pages: 7119 - 7134
      Abstract: The conventional range-Doppler (RD) technique uses the fast Fourier transform (FFT) to generate focused images. However, the FFT spectrum has high sidelobes and wide main lobes, and the resolution of RD is limited by the radar parameters, so the RD method cannot generate super-resolution images, especially with sparse aperture (SA) data. A super-resolution SA inverse synthetic aperture radar (SA-ISAR) imaging algorithm based on the multiple signal classification (MUSIC) technique is proposed in this paper. The proposed method uses the MUSIC algorithm to estimate the location of each scatterer and employs the least-squares technique to calculate its intensity. The scatterers can then be precisely depicted in the images without the interference of sidelobes and wide main lobes. The resolution of the proposed method depends less on the radar parameters and can be further improved by using a smaller search step. Experiments on simulated and raw data demonstrate that the proposed method can efficiently generate super-resolution images with full aperture (FA) or SA data. (A 1-D MUSIC sketch follows this entry.)
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
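A one-dimensional MUSIC pseudo-spectrum sketch showing the noise-subspace scan that underlies the method above; the array size, snapshot count, and frequency grid are illustrative, and the paper's 2-D ISAR formulation, sparse-aperture handling, and least-squares amplitude step are not reproduced.

```python
# 1-D MUSIC pseudo-spectrum: scan candidate frequencies against the noise subspace.
import numpy as np

def music_spectrum(snapshots, n_sources, freqs):
    """snapshots: (n_sensors, n_snapshots) complex data matrix; freqs: normalized scan grid."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]      # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                          # ascending eigenvalues
    En = eigvecs[:, : snapshots.shape[0] - n_sources]             # noise subspace
    n = np.arange(snapshots.shape[0])[:, None]
    A = np.exp(2j * np.pi * n * freqs[None, :])                   # steering vectors of the grid
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return 1.0 / denom                                            # peaks at source frequencies

# Two closely spaced tones are resolved even though a 16-point FFT would merge them.
rng = np.random.default_rng(8)
n_sensors, n_snap = 16, 200
n = np.arange(n_sensors)[:, None]
true_freqs = np.array([0.20, 0.23])
signals = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
data = np.exp(2j * np.pi * n * true_freqs[None, :]) @ signals
data += 0.1 * (rng.standard_normal((n_sensors, n_snap)) + 1j * rng.standard_normal((n_sensors, n_snap)))
scan = np.linspace(0.0, 0.5, 501)
spectrum = music_spectrum(data, n_sources=2, freqs=scan)
```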
       
  • SMF-POLOPT: An Adaptive Multitemporal Pol(DIn)SAR Filtering and Phase
           Optimization Algorithm for PSI Applications
    • Authors: Feng Zhao;Jordi J. Mallorqui;
      Pages: 7135 - 7147
      Abstract: Speckle noise and decorrelation can hamper the application and interpretation of PolSAR images. In this paper, a new adaptive multitemporal Pol(DIn)SAR filtering and phase optimization algorithm is proposed to address these limitations. This algorithm first categorizes and adaptively filters permanent scatterer (PS) and distributed scatterer (DS) pixels according to their polarimetric scattering mechanisms [i.e., scattering-mechanism-based filtering (SMF)]. Then, two different polarimetric DInSAR (POLDInSAR) phase optimization methods are applied separately to the filtered PS and DS pixels (i.e., POLOPT). Finally, an inclusive pixel selection approach is used to identify high-quality pixels for ground deformation estimation. Thirty-one full-polarization Radarsat-2 SAR images over Barcelona (Spain) and 31 dual-polarization TerraSAR-X images over Murcia (Spain) have been used to evaluate the performance of the proposed algorithm. The PolSAR filtering results show that the speckle of the PolSAR images is well reduced while details are preserved by the proposed SMF. The ground deformation monitoring results show significant improvements in valid pixel density, of about $\times 7.2$ (the full-polarization case) and $\times 3.8$ (the dual-polarization case), with respect to the classical full-resolution single-pol amplitude dispersion method. The excellent PolSAR filtering and ground deformation monitoring results achieved by the adaptive Pol(DIn)SAR filtering and phase optimization algorithm (i.e., SMF-POLOPT) validate the effectiveness of the proposed scheme.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Two-Step Accuracy Improvement of Motion Compensation for Airborne SAR With
           Ultrahigh Resolution and Wide Swath
    • Authors: Jianlai Chen;Buge Liang;De-Gui Yang;Dang-Jun Zhao;Mengdao Xing;Guang-Cai Sun;
      Pages: 7148 - 7160
      Abstract: Motion compensation (MOCO) for airborne SAR with ultrahigh resolution and wide swath must account for the range-dependent (RD) phase error. The RD phase error may cause an RD residual range cell migration (RCM) after the correction of RCM, which can degrade the performance of phase gradient autofocus (PGA) when estimating the phase error. In addition, because PGA estimation is based on strong scattering points, it may wrongly estimate the phase error for observation scenes without strong scattering points. To take both problems into account, we study a MOCO algorithm based on a two-step accuracy improvement. In the algorithm, the first step estimates and corrects the RD residual RCM and thus improves the accuracy of PGA. The second step develops a prior-information-based weighted least squares (PI-WLS) method to further improve the accuracy of the RD phase error estimation. Processing of airborne real data validates the effectiveness of the proposed algorithm.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
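      Illustrative sketch (Python; not from the paper): a generic weighted least-squares fit showing the flavor of the PI-WLS step, i.e., fitting a smooth range-dependent phase-error model from noisy per-range estimates with prior-based weights. The polynomial model, weights, and variable names are assumptions; the paper's actual estimator is not reproduced here.

      import numpy as np

      def wls_poly_fit(r, phase_err, weights, order=2):
          """Solve min_c || W^(1/2) (V c - phase_err) ||^2 with a Vandermonde basis in range r."""
          V = np.vander(r, N=order + 1, increasing=True)
          w_sqrt = np.sqrt(weights)
          coef, *_ = np.linalg.lstsq(w_sqrt[:, None] * V, w_sqrt * phase_err, rcond=None)
          return V @ coef, coef

      r = np.linspace(-1, 1, 200)                  # normalized range positions
      truth = 0.3 + 1.2 * r - 0.8 * r**2           # assumed RD phase error (rad)
      noisy = truth + 0.1 * np.random.randn(200)
      w = np.full(200, 1.0 / 0.1**2)               # prior-information weights (uniform here, assumed)
      fit, coef = wls_poly_fit(r, noisy, w)
      print(coef)                                  # recovered polynomial coefficients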
       
  • Improvements in the Beam-Mismatch Correction of Precipitation Radar Data
           After the TRMM Orbit Boost
    • Authors: Kaya Kanemaru;Takuji Kubota;Toshio Iguchi;
      Pages: 7161 - 7169
      Abstract: The orbit of the Tropical Rainfall Measuring Mission (TRMM) satellite was boosted from 350 to 402.5 km in August 2001 to extend its lifetime by conserving fuel. Since the timing between transmission and reception by the precipitation radar (PR) onboard the TRMM satellite was fixed for measurement from the original altitude of 350 km, the PR encountered a mismatch between the transmitting and receiving beams after the TRMM orbit boost. Although the PR algorithm in TRMM Version 7 employs a correction for the beam mismatch, a residual error remains as an underestimation of the near-surface precipitation estimates in the second half of the swath. This paper aims to mitigate the beam-mismatch correction error in Version 7. The beam-mismatch correction needs to estimate the radar echo that would be measured in the virtual intermediate beam between the beam in question and the previous adjacent beam. The correction developed in this paper assumes that the surface and precipitation echoes change linearly in the horizontal direction, parallel to the surface, between the two beams. The new correction is tested with observational data, indicating that the method improves the accuracy of the correction at off-nadir angles. Statistics of the surface normalized radar cross sections and the bright band peak intensities at off-nadir angles are improved using the method. The asymmetric bias of the precipitation estimates with respect to the scan angle in Version 7 is mitigated by 95.9% over ocean and 72.5% over land with the new correction.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
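      Illustrative sketch (Python; not from the paper): the linear-change assumption described in the abstract, i.e., estimating the echo of the virtual intermediate beam as the point-by-point midpoint of the two adjacent beams once their profiles have been resampled so that equal indices correspond to equal heights above the local surface. The alignment step, units, and variable names are assumptions.

      import numpy as np

      def virtual_intermediate_beam(z_prev, z_curr):
          """z_prev, z_curr: echo profiles of two adjacent beams, already resampled so
          that the same index corresponds to the same height above the local surface."""
          # linear horizontal change between the beams -> midpoint profile
          return 0.5 * (np.asarray(z_prev) + np.asarray(z_curr))

      z_mid = virtual_intermediate_beam(np.array([20.0, 35.0, 48.0]),
                                        np.array([18.0, 33.0, 50.0]))
      print(z_mid)    # echo expected in the virtual intermediate beam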
       
  • A CNN-Based Spatial Feature Fusion Algorithm for Hyperspectral Imagery
           Classification
    • Authors: Alan J. X. Guo;Fei Zhu;
      Pages: 7170 - 7181
      Abstract: The shortage of training samples remains one of the main obstacles in applying neural networks to hyperspectral image classification. To fuse the spatial and spectral information, pixel patches are often utilized to train a model, which may further aggravate this problem. In earlier work, an artificial neural network (ANN) model supervised by center loss (ANNC) was introduced. Trained merely with spectral information, the ANNC yields discriminative spectral features suitable for subsequent classification tasks. In this paper, we propose a novel convolutional neural network (CNN)-based spatial feature fusion (CSFF) algorithm, which allows a smart integration of spatial information into the spectral features extracted by the ANNC. As a critical part of CSFF, a CNN-based discriminant model is introduced to estimate whether two pixels belong to the same class. At the testing stage, by applying the discriminant model to the pixel pairs generated by a test pixel and each of its neighbors, the local structure is estimated and represented as a customized convolutional kernel. The spectral–spatial feature is generated by a convolutional operation between the estimated kernel and the corresponding spectral features within a local region. The final label is determined by classifying the resulting spectral–spatial feature. Without increasing the number of training samples or involving pixel patches at the training stage, the CSFF framework achieves state-of-the-art performance, reducing classification failures by 20%–50% in experiments on three well-known hyperspectral images.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
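      Illustrative sketch (Python; not from the paper): the fusion step outlined in the abstract, where same-class probabilities from a pairwise discriminant form a customized local kernel and the spectral-spatial feature is the kernel-weighted combination of the spectral features in the window. The discriminant output is mocked with random values; window size, feature dimension, and names are assumptions.

      import numpy as np

      def fuse_spatial(spectral_feats, same_class_prob):
          """spectral_feats: (k, k, d) per-pixel spectral features in a k x k window.
          same_class_prob: (k, k) probability that each neighbor shares the center's class."""
          kernel = same_class_prob / same_class_prob.sum()            # customized convolutional kernel
          return np.tensordot(kernel, spectral_feats, axes=([0, 1], [0, 1]))  # fused (d,) feature

      window = np.random.rand(5, 5, 16)                # mocked spectral features
      prob = np.random.rand(5, 5)
      prob[2, 2] = 1.0                                 # the center pixel certainly matches itself
      print(fuse_spatial(window, prob).shape)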
       
  • Sparse Recovery on Intrinsic Mode Functions for the Micro-Doppler
           Parameters Estimation of Small UAVs
    • Authors: Yichao Zhao;Yi Su;
      Pages: 7182 - 7193
      Abstract: The micro-Doppler (m-D) effect, induced by the rotation of rotor blades, provides an important signature for discriminating small unmanned aerial vehicles (UAVs) from other aircraft or birds in remote surveillance. Compared with the Doppler signal induced by translation, however, the m-D signal is rather weak and consists of multiple frequency components. In this paper, the empirical mode decomposition (EMD) algorithm is applied to address the mode-mixing problem in the returned signal. Theoretically, the Doppler features are allocated to the first few intrinsic mode functions (IMFs), whereas partial components of the subsequent IMFs exhibit properties similar to the rotation signal. Those components are selected as the input data for the sparse recovery. With the sinusoidal frequency-modulated basis (SFMB), the recovery problem reduces to a 1-D parameter optimization. A phase orthogonal matching pursuit (POMP) method is then developed for the sparse solution. The proposed method is contrasted with the prevailing approach to solving the mode-mixing problem. Simulation results confirm the theoretical analysis, showing the feasibility of estimating the m-D frequency. Preliminary findings from measured data suggest that the proposed method has potential for the identification of small UAVs.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
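      Illustrative sketch (Python; not from the paper): a 1-D search over a sinusoidal frequency-modulated (SFM) dictionary, in the spirit of converting the sparse recovery into a one-parameter optimization. The atom model, modulation index, noise level, and grid are assumptions, and the phase handling of the paper's POMP is omitted.

      import numpy as np

      fs, T = 1000.0, 1.0
      t = np.arange(0, T, 1 / fs)
      beta = 8.0                                       # assumed modulation index
      f_true = 37.0                                    # assumed rotation rate (Hz)
      sig = np.exp(1j * beta * np.sin(2 * np.pi * f_true * t))
      sig += 0.3 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

      def sfm_atom(f):
          return np.exp(1j * beta * np.sin(2 * np.pi * f * t))

      grid = np.arange(10.0, 80.0, 0.1)                # 1-D rotation-rate search grid
      score = np.array([np.abs(np.vdot(sfm_atom(f), sig)) for f in grid])
      print("estimated rotation rate:", grid[np.argmax(score)])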
       
  • SAR Speckle Nonlocal Filtering With Statistical Modeling of Haar Wavelet
           Coefficients and Stochastic Distances
    • Authors: Pedro A. A. Penna;Nelson D. A. Mascarenhas;
      Pages: 7194 - 7208
      Abstract: Due to the coherent processing of synthetic aperture radar (SAR) systems, multiplicative speckle noise arises, giving SAR images a granular appearance. This kind of noise makes it difficult to analyze and interpret images of the earth's surface. Therefore, studying alternatives to attenuate the speckle is a constant task in the image processing literature. Current state-of-the-art filters in the remote sensing area exploit the similarity between patches. This paper aims to extend the traditional nonlocal means (NLM) algorithm, originally proposed for additive white Gaussian noise (AWGN), to deal with speckle. In our research, we consider the worst scenario, i.e., single-look speckle noise, and apply the NLM to filter intensity SAR images in the Haar wavelet domain. To accomplish this task, the Haar coefficients are described by exponential-polynomial (EP) and gamma distributions. Furthermore, stochastic distances based on these two distributions are derived and embedded in the NLM filter by replacing the Euclidean distance of the original method, which represents the main contribution of this research. Finally, the proposed method is compared with some recent filters from the literature in synthetic and real experiments, demonstrating its competitive performance.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
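      Illustrative sketch (Python; not from the paper): how a stochastic distance between per-patch gamma fits can replace the Euclidean distance in the nonlocal means weight. A symmetrized Kullback-Leibler divergence is used here purely to illustrate the mechanism; the paper derives its own stochastic distances for the gamma and exponential-polynomial models of the Haar coefficients, and all names below are assumptions.

      import numpy as np
      from scipy.special import digamma, gammaln

      def gamma_fit(patch):
          """Method-of-moments fit of a gamma (shape, scale) to a positive-valued patch."""
          m, v = patch.mean(), patch.var() + 1e-12
          return m * m / v, v / m

      def kl_gamma(a1, t1, a2, t2):
          """KL divergence between Gamma(a1, t1) and Gamma(a2, t2) (shape-scale form)."""
          return ((a1 - a2) * digamma(a1) - gammaln(a1) + gammaln(a2)
                  + a2 * np.log(t2 / t1) + a1 * (t1 - t2) / t2)

      def nlm_weight(patch_p, patch_q, h=1.0):
          a1, t1 = gamma_fit(patch_p)
          a2, t2 = gamma_fit(patch_q)
          d = kl_gamma(a1, t1, a2, t2) + kl_gamma(a2, t2, a1, t1)   # symmetrized stochastic distance
          return np.exp(-d / h)                                      # replaces exp(-||p - q||^2 / h^2)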
       
  • Road Detection and Centerline Extraction Via Deep Recurrent Convolutional
           Neural Network U-Net
    • Authors: Xiaofei Yang;Xutao Li;Yunming Ye;Raymond Y. K. Lau;Xiaofeng Zhang;Xiaohui Huang;
      Pages: 7209 - 7220
      Abstract: Road information extraction based on aerial images is a critical task for many applications, and it has attracted considerable attention from researchers in the field of remote sensing. The problem is mainly composed of two subtasks, namely, road detection and centerline extraction. Most of the previous studies rely on multistage learning methods to solve the problem. However, these approaches may suffer from the well-known problem of propagation errors. In this paper, we propose a novel deep learning model, the recurrent convolutional neural network U-Net (RCNN-UNet), to tackle the aforementioned problem. Our proposed RCNN-UNet has three distinct advantages. First, the end-to-end deep learning scheme eliminates the propagation errors. Second, a carefully designed RCNN unit is leveraged to build our deep learning architecture, which can better exploit the spatial context and the rich low-level visual features, thereby alleviating the detection problems caused by noise, occlusions, and complex road backgrounds. Third, as the tasks of road detection and centerline extraction are strongly correlated, a multitask learning scheme is designed so that two predictors can be trained simultaneously to improve both effectiveness and efficiency. Extensive experiments were carried out on two publicly available benchmark data sets, and nine state-of-the-art baselines were used in a comparative evaluation. Our experimental results demonstrate the superiority of the proposed RCNN-UNet model for both the road detection and the centerline extraction tasks.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
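      Illustrative sketch (Python/PyTorch; not from the paper): the structural idea of multitask learning with a shared backbone and two sibling heads, one for the road mask and one for the centerline mask, trained jointly. This is not the paper's RCNN-UNet; layer sizes, losses, and names are assumptions.

      import torch
      import torch.nn as nn

      class TwoHeadRoadNet(nn.Module):
          def __init__(self, in_ch=3, width=32):
              super().__init__()
              self.backbone = nn.Sequential(                    # shared feature extractor
                  nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
              )
              self.road_head = nn.Conv2d(width, 1, 1)           # road-surface logits
              self.center_head = nn.Conv2d(width, 1, 1)         # centerline logits

          def forward(self, x):
              f = self.backbone(x)
              return self.road_head(f), self.center_head(f)

      model = TwoHeadRoadNet()
      img = torch.randn(2, 3, 128, 128)
      road_logit, center_logit = model(img)
      bce = nn.functional.binary_cross_entropy_with_logits
      loss = bce(road_logit, torch.rand_like(road_logit)) + bce(center_logit, torch.rand_like(center_logit))
      loss.backward()                                           # joint multitask update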
       
  • Energy Flow Domain Reverse-Time Migration for Borehole Radar
    • Authors: Jianjian Huo;Qing Zhao;Binzhong Zhou;Lanbo Liu;Chunguang Ma;Jiyu Guo;Longhao Xie;
      Pages: 7221 - 7231
      Abstract: A modified 2-D reverse-time migration algorithm in the energy flow domain, called EF-RTM, is proposed for short-impulse borehole radar (BHR) imaging. The key to the approach is Poynting's theorem, which allows the energy flux density (EFD) derived from the source and receiver wave fields to be decomposed into different wave-propagation directions. Then, imaging conditions, for example, zero-lag cross correlation (prestack migration) and the zero-time imaging principle (poststack migration), are applied to the decomposed EFD-field components to obtain the migrated sections. The resulting images can be combined or used separately for better BHR data imaging, interpretation, and analysis. In this paper, the EF-RTM algorithm is validated by numerical modeling and real field data and compared with the conventional RTM algorithm. All the results show that the new EF-RTM method is superior to the conventional RTM method: it inherits the high precision of RTM with additional imaging advantages, such as natural wave-field decomposition in different directions, improved migrated cross-range resolution, better focusing of the target's shape, and migration noise reduction.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
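      Illustrative sketch (Python; not from the paper): the Poynting-based direction decomposition the method builds on. For a 2-D TM field (Ez, Hx, Hy), the energy flux density is S = E x H, and its components can be used to split a wavefield by propagation direction before the imaging condition is applied. Array names and the simple sign-based split are assumptions.

      import numpy as np

      def poynting_2d(Ez, Hx, Hy):
          """Energy flux density of a 2-D TM field: E = (0, 0, Ez), H = (Hx, Hy, 0)."""
          Sx = -Ez * Hy                       # (E x H)_x
          Sy = Ez * Hx                        # (E x H)_y
          return Sx, Sy

      def split_by_direction(field, Sx):
          """Keep the parts of the field whose energy flows toward +x or -x, respectively."""
          right = np.where(Sx > 0, field, 0.0)
          left = np.where(Sx < 0, field, 0.0)
          return right, left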
       
  • Caps-TripleGAN: GAN-Assisted CapsNet for Hyperspectral Image
           Classification
    • Authors: Xue Wang;Kun Tan;Qian Du;Yu Chen;Peijun Du;
      Pages: 7232 - 7245
      Abstract: The increase in the spectral and spatial information of hyperspectral imagery poses challenges for classification, because the spectral bands are highly correlated, training samples may be limited, and high resolution may increase intraclass difference and interclass similarity. In this paper, to better handle these problems, a Caps-TripleGAN framework is proposed, which employs a 1-D-structure triple generative adversarial network (TripleGAN) for sample generation and integrates a capsule network (CapsNet) for hyperspectral image classification. Moreover, spatial information is utilized to verify the learning capacity and discriminative ability of the Caps-TripleGAN framework. The experimental results obtained with three real hyperspectral data sets confirm that the proposed method outperforms most of the state-of-the-art methods.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Structure-Aware Collaborative Representation for Hyperspectral Image
           Classification
    • Authors: Wei Li;Yuxiang Zhang;Na Liu;Qian Du;Ran Tao;
      Pages: 7246 - 7261
      Abstract: Recently, collaborative representation (CR) has drawn increasing attention in hyperspectral image classification due to its simplicity and effectiveness. However, existing representation-based classifiers do not explicitly utilize class label information of training samples in estimating representation coefficients. To solve this issue, a structure-aware CR with Tikhonov regularization (SaCRT) method is proposed to consider both class label information of training samples and spectral signatures of testing pixels to estimate more discriminative representation coefficients. In the proposed framework, marginal regression is employed; furthermore, an interclass row-sparsity structure is designed to preserve the compact relationship among intraclass pixels and more separable interclass pixels, thereby enhancing class separability. The experimental results evaluated using three hyperspectral data sets demonstrate that the proposed method significantly outperforms some state-of-the-art classifiers.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
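      Illustrative sketch (Python; not from the paper): the collaborative representation with Tikhonov regularization baseline that SaCRT builds on, with the usual closed-form coefficients and per-class residual rule. The structure-aware interclass row-sparsity term of the paper is not reproduced; the biasing matrix and names follow the standard CRT formulation.

      import numpy as np

      def crt_classify(y, X, labels, lam=1e-2):
          """y: (d,) test spectrum. X: (d, n) training spectra. labels: (n,) class ids."""
          gamma = np.diag(np.linalg.norm(X - y[:, None], axis=0))       # Tikhonov biasing matrix
          alpha = np.linalg.solve(X.T @ X + lam * gamma.T @ gamma, X.T @ y)
          residuals = {c: np.linalg.norm(y - X[:, labels == c] @ alpha[labels == c])
                       for c in np.unique(labels)}
          return min(residuals, key=residuals.get)                      # class of minimal residual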
       
  • High-Resolution Topography of Titan Adapting the Delay/Doppler Algorithm
           to the Cassini RADAR Altimeter Data
    • Authors: Valerio Poggiali;Marco Mastrogiuseppe;Alexander Gerard Hayes;Roberto Seu;Joseph Peter Mullen;Samuel Patrick Dennis Birch;Maria Carmela Raguso;
      Pages: 7262 - 7268
      Abstract: The Cassini RADAR altimeter has provided broad-scale surface topography data for Saturn’s largest moon Titan. Herein, we adapt the delay/Doppler algorithm to take into account Cassini geometries and antenna mispointing usually occurring during hyperbolic Titan flybys. The proposed algorithm allows up to tenfold improvement in the along-track resolution. Preliminary results are provided that show how the improved topography presented herein can advance our understanding of Titan’s surface characteristics.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • An Atmospheric Phase Screen Estimation Strategy Based on Multichromatic
           Analysis for Differential Interferometric Synthetic Aperture Radar
    • Authors: Filippo Biondi;Carmine Clemente;Danilo Orlando;
      Pages: 7269 - 7280
      Abstract: In synthetic aperture radar (SAR), separating the topographic height, the ground subsidence, and the atmospheric phase delay components mixed in the overall SAR interferometry (InSAR) phase is an issue of primary concern in the remote sensing community. This paper describes a complete procedure to estimate the atmospheric phase screen and to separate the three phase components by exploiting only one InSAR image couple. This solution has the capability to produce persistent scatterer subsidence maps potentially using only two multitemporal InSAR couples observed in any atmospheric condition. The solution is obtained by emulating the atmospheric compensation technique widely used by the Global Positioning System, where two frequencies are used to estimate and compensate for positioning errors due to variations in atmospheric parameters. A sub-chirping and sub-Doppler algorithm for atmospheric compensation is proposed, which allows the height, the subsidence, and the atmospheric parameters to be separated from the interferometric phase observed on one InSAR couple. Results are given for images of two InSAR couples acquired by the COSMO-SkyMed satellite system.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Unmixing $K$-Gaussians With Application to Hyperspectral Imaging
    • Authors: Yonatan Woodbridge;Uri Okun;Gal Elidan;Ami Wiesel;
      Pages: 7281 - 7293
      Abstract: In this paper, we consider the parameter estimation of $K$-Gaussians, given convex combinations of their realizations. In the remote sensing literature, this setting is known as the normal compositional model (NCM) and has shown promising gains in modeling hyperspectral images. Current NCM parameter estimation techniques are based on Bayesian methodology and are computationally slow and sensitive to their prior assumptions. Here, we introduce a deterministic variant of the NCM, named DNCM, which assumes that the unknown mixing coefficients are nonrandom. This leads to a standard Gaussian model with a simple estimation procedure, which we denote by $K$-Gaussians. Its iterations are provided in closed form and do not require any sampling schemes or simplifying structural assumptions. We illustrate the performance advantages of $K$-Gaussians on synthetic and real images, in terms of accuracy and computational cost, in comparison to the state of the art. We also demonstrate the use of our algorithm in hyperspectral target detection on a real image with known targets.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
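      Illustrative sketch (Python; not from the paper): the normal compositional model structure referred to in the abstract. Given the mixing vector, a pixel built from independent Gaussian endmember realizations is itself Gaussian with mean sum_k a_k mu_k and covariance sum_k a_k^2 Sigma_k; the deterministic variant exploits this standard Gaussian form. Sizes and values are toy assumptions, and the paper's estimation iterations are not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)
      d, K = 4, 3                                   # bands, endmembers (toy sizes)
      mu = rng.random((K, d))                       # endmember means
      Sigma = np.stack([0.01 * np.eye(d) for _ in range(K)])
      a = np.array([0.5, 0.3, 0.2])                 # nonnegative mixing vector, sums to one

      # simulate one pixel from the model: convex combination of endmember realizations
      x = np.stack([rng.multivariate_normal(mu[k], Sigma[k]) for k in range(K)])
      y = a @ x

      # the pixel's Gaussian distribution under the (D)NCM, conditioned on a
      mean_y = a @ mu
      cov_y = np.tensordot(a**2, Sigma, axes=1)
      print(mean_y, np.diag(cov_y))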
       
  • Effect of Microgeometry on Modeling Accuracy of Fluid-Saturated Rock Using
           Dielectric Permittivity
    • Authors: Chen Guo;Bowen Ling;Gary Mavko;Richard Liu;
      Pages: 7294 - 7299
      Abstract: A common practice for estimating subsurface constituents from remote sensing methods is to use analytical effective medium models relating effective dielectric permittivity to properties of the targeted region. These models suggest that the effective permittivity depends on the volumetric fraction and phase property of each constituent of the composite. Most effective medium models are based on idealized approximations of the composite’s geometry. Some studies have shown that the analytical mixing rule may underestimate or overestimate the effective property when there are geometrical variations. In this paper, we use numerical experiments to compute the effective dielectric permittivity of composites with different microgeometries having varying amounts of internal interfaces. By comparing the numerical results with the classic analytical mixing rules, we quantify the discrepancy with a diagram that indicates the high deviation region. The study is carried out by using various fluid–solid permittivity contrasts to suit a wide range of applications.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
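      Illustrative sketch (Python; not from the paper): one of the classic analytical mixing rules that such numerical experiments are compared against, the Maxwell Garnett formula for spherical inclusions of permittivity eps_i at volume fraction f in a host of permittivity eps_m. The permittivity contrast below is an assumption chosen only for illustration.

      import numpy as np

      def maxwell_garnett(eps_m, eps_i, f):
          """Effective permittivity of spherical inclusions (eps_i, fraction f) in a host eps_m."""
          num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
          den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
          return eps_m * num / den

      f = np.linspace(0.0, 0.4, 5)
      print(maxwell_garnett(eps_m=4.6, eps_i=80.0, f=f))   # e.g., water-filled pores in a dry-rock host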
       
  • Comments on “The Influence of Equatorial Scintillation on L-Band SAR
           Image Quality and Phase”
    • Authors: Yifei Ji;Yongsheng Zhang;Qilei Zhang;Zhen Dong;
      Pages: 7300 - 7301
      Abstract: As indicated in the aforementioned paper, the ionospheric stripes generally align well with the orientation of the projected ambient geomagnetic field vector. However, our study shows that the calculated stripe heading depends not only on the orientation of the projected ambient geomagnetic field vector, namely, the geomagnetic heading, but also on the geomagnetic inclination and the system incidence and squint angles. It also confirms that the changing direction of the visible stripes in the mentioned Phased Array-type L-band Synthetic Aperture Radar (PALSAR) data is mainly due to the variation of the geomagnetic inclination, while the geomagnetic heading is nearly constant along the orbit. Therefore, the statement in that work that the projected geomagnetic field vector elongates in the direction of the stripe orientation might not hold in general.
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
  • Corrections to “FengYun-3 B Satellite Medium Resolution Spectral Imager
           Visible On-Board Calibrator Radiometric Output Degradation Analysis”
    • Authors: Dandan Zhi;Wei Wei;Yanna Zhang;Tanqi Yu;Yan Pan;Ling Sun;Xin Li;Xiaobing Zheng;
      Pages: 7302 - 7302
      Abstract: In the above paper [1], the institutional affiliation of the authors Dandan Zhi, Tanqi Yu, and Yan Pan was listed incorrectly. They are with the Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Key Laboratory of Optical Calibration and Characterization, Hefei 230031, China, and also with the University of Science and Technology of China, Hefei 230026, China (e-mail: 1416652331@qq.com).
      PubDate: Sept. 2019
      Issue No: Vol. 57, No. 9 (2019)
       
 
 