IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Journal Prestige (SJR): 1.547
Citation Impact (citeScore): 4
Number of Followers: 52  
  Hybrid journal (may contain Open Access articles)
ISSN (Print) 1939-1404
Published by IEEE
  • Frontcover
    • Abstract: Presents the front cover for this issue of the publication.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • IEEE Geoscience and Remote Sensing Society
    • Abstract: Presents a listing of the editorial board, board of governors, current staff, committee members, and/or society editors for this issue of the publication.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Institutional Listings
    • Abstract: Presents a listing of institutions relevant to this issue of the publication.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Preliminary Analysis of the Potential and Limitations of MICAP for the
           Retrieval of Sea Surface Salinity
    • Authors: Lanjie Zhang;Xiaobin Yin;Zhenzhan Wang;Hao Liu;Mingsen Lin;
      Pages: 2979 - 2990
      Abstract: A new payload concept has been proposed: the microwave imager combined active/passive (MICAP). MICAP combines a one-dimensional microwave interferometric radiometer operating at 1.4, 6.9, 18.7, and 23.8 GHz with an L-band (1.26 GHz) scatterometer. It has the capability to simultaneously remotely sense sea surface salinity (SSS), sea surface temperature (SST), and wind speed. MICAP will be a candidate payload onboard the Ocean Salinity Satellite led by the State Oceanic Administration of China to monitor SSS and reduce geophysical errors caused by surface roughness and SST. To provide an “all-weather” estimation of SSS with high accuracy from space, the errors of the simultaneous retrieval of multiple parameters using MICAP are analyzed, and the noise levels and stability requirements of the instruments are estimated. Preliminary analysis shows that MICAP can provide SSS with an accuracy of 1 psu for a single measurement, and of 0.1 psu over the global ocean for 200 × 200 km resolution pixels and one month at middle and low latitudes, with default instrument noises (0.1 K, 0.3 K, and 0.3 K for the L-, C-, and K-band radiometers, respectively, and 0.1 dB for the L-band scatterometer), provided that the uncertainties of the drift corrections are less than the radiometer sensitivities.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Shallow Water Depth Retrieval From Multitemporal Sentinel-1 SAR Data
    • Authors: Xiaolin Bian;Yun Shao;Shiang Wang;Wei Tian;Xiaochen Wang;Chunyan Zhang;
      Pages: 2991 - 3000
      Abstract: The Sentinel-1 constellation provides abundant high-resolution C-band synthetic aperture radar (SAR) data freely and with long-term continuity, offering a cost-effective solution for coastal monitoring at high or moderate spatial resolutions. The major goal of this study is to improve estimates of shallow water depth from SAR. We present an algorithm, based on the linear dispersion relation between water depth and swell parameters (wavelength, direction, and period), that estimates shallow water depth from multitemporal SAR data with a short repeat cycle. This is accomplished via circular convolution and a Kalman filter that provides both the estimates and a measure of their uncertainty at each location. The algorithm is tested on four Sentinel-1 interferometric wide swath (IW) mode SAR images over the coastal region of Fujian Province, China. The water depths retrieved both from multitemporal SAR images and from different single SAR images show general agreement with depths from an official electronic navigational chart. All comparisons indicate that the proposed method is feasible and that multitemporal SAR data have great potential for bathymetric surveying.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
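The linear dispersion relation underlying this retrieval can be inverted in closed form for depth. A minimal sketch, not the authors' implementation (the function name is illustrative), assuming the swell wavelength and period have already been measured from the SAR imagery:

```python
import math

def depth_from_swell(wavelength_m, period_s, g=9.81):
    """Invert the linear dispersion relation w^2 = g*k*tanh(k*h) for depth h."""
    k = 2.0 * math.pi / wavelength_m   # wavenumber from swell wavelength
    w = 2.0 * math.pi / period_s       # angular frequency from swell period
    ratio = w * w / (g * k)            # equals tanh(k*h); must be < 1 for finite depth
    if ratio >= 1.0:
        return float("inf")            # deep-water limit: depth not resolvable from swell
    return math.atanh(ratio) / k
```

For example, a 100 m swell with a 10 s period yields roughly 12 m of water, the regime in which swell-based bathymetry is informative.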
  • Assessment of Paddy Rice Height: Sequential Inversion of Coherent and
           Incoherent Models
    • Authors: Onur Yuzugullu;Esra Erten;Irena Hajnsek;
      Pages: 3001 - 3013
      Abstract: This paper investigates the evolution of canopy height of rice fields over a complete growth cycle. For this purpose, copolar interferometric synthetic aperture radar (Pol-InSAR) time series data were acquired during the large across-track baseline (>1 km) science phase of the TanDEM-X mission. The height of rice canopies is estimated by three different model-based approaches. The first evaluates the inversion of the Random Volume over Ground (RVoG) model. The second evaluates the inversion of a metamodel-driven electromagnetic (EM) backscattering model that includes a priori morphological information. The third combines the previous two. The validation analysis was carried out using Pol-InSAR and ground measurement data acquired between May and September 2015 over rice fields located in the Ipsala district of Edirne, Turkey. The results of the presented height estimation algorithms demonstrate the advantage of Pol-InSAR data. The combined RVoG-model and EM-metamodel approach provided rice canopy heights with errors of less than 20 cm over the complete growth cycle.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Sensitivity of SAR Tomography to the Phenological Cycle of Agricultural
           Crops at X-, C-, and L-band
    • Authors: Hannah Joerg;Matteo Pardini;Irena Hajnsek;Konstantinos P. Papathanassiou;
      Pages: 3014 - 3029
      Abstract: Understanding the impact of soil and plant parameter changes in agriculture on synthetic aperture radar (SAR) measurements is of great interest when monitoring the temporal evolution of agricultural crops by means of SAR. In this regard, specific transitions between phenological stages in corn, barley, and wheat have been identified and associated with certain dielectric and geometric changes, based on a time series of fully polarimetric multibaseline SAR data and in situ measurements. The data were acquired in the frame of DLR's CROPEX campaign on six dates between May and July 2014. The experiments reported in this paper address the sensitivity of X-, C-, and L-band to phenological transitions, exploiting the availability of multiple baselines on each acquisition date. The application of tomographic techniques enables the estimation of the three-dimensional (3-D) backscatter distribution and the separation of ground and volume scattering components. Tomographic parameters have been derived at the different frequencies, namely the center of mass of the profiles of the total and volume-only 3-D backscatter, and the ground and volume powers. Their sensitivity and ability to detect changes occurring on the ground and in the vegetation volume have been evaluated, focusing on the added value provided by the 3-D resolution at the different frequencies and polarizations available.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Deep Convolutional Neural Network for Complex Wetland Classification Using
           Optical Remote Sensing Imagery
    • Authors: Mohammad Rezaee;Masoud Mahdianpari;Yun Zhang;Bahram Salehi;
      Pages: 3030 - 3039
      Abstract: The synergistic use of spatial features with the spectral properties of satellite images enhances thematic land cover information, which is of great significance for complex land cover mapping. Incorporating spatial features within the classification scheme has mainly been carried out using low-level features, which have shown improvement in classification results. By contrast, the application of high-level spatial features for classification of satellite imagery has been underrepresented. This study aims to address this lack of high-level features by proposing a classification framework based on a convolutional neural network (CNN) to learn deep spatial features for wetland mapping using optical remote sensing data. Designing a fully trained new convolutional network is infeasible due to the limited amount of training data in most remote sensing studies; thus, we applied fine-tuning of a pre-existing CNN, specifically AlexNet. The classification results obtained by the deep CNN were compared with those of a well-known ensemble classifier, random forest (RF), to evaluate the efficiency of the CNN. Experimental results demonstrated that the CNN was superior to RF for complex wetland mapping, even though the CNN used a smaller number of input features (three) than RF (eight). The proposed classification scheme is a first attempt at investigating the potential of fine-tuning a pre-existing CNN for land cover mapping. It also serves as a baseline framework to facilitate further research using the latest state-of-the-art machine learning tools for processing remote sensing data.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Insights Into Polarimetric Processing for Wetlands From Backscatter
           Modeling and Multi-Incidence Radarsat-2 Data
    • Authors: Frank Ahern;Brian Brisco;Kevin Murnaghan;Philip Lancaster;Donald K. Atwood;
      Pages: 3040 - 3050
      Abstract: We have observed unexpected results using the Freeman–Durden (FD) and other polarimetric decompositions in Radarsat-2 quad-pol data from many swamps in Eastern Ontario. In particular, the decompositions reported minimal backscatter from the double-bounce mechanism in situations where there was compelling evidence that double-bounce backscatter contributed substantially to the return. This led to the hypothesis that the FD and similar models give erroneous results because of the physics of Fresnel reflection from wood, the lossy dielectric material that makes up the vertical reflecting surfaces in swamps. We found some support for this hypothesis in the literature, and now report on an extensive theoretical and observational investigation. This work has shown that the Freeman–Durden decomposition, and other decompositions that use the same logic, will often mistake double-bounce backscatter for single-bounce backscatter in wetlands. This is a consequence of the fundamental physics of Fresnel reflection, and it is important for users to be aware of this pitfall. Double-bounce backscatter from natural surfaces can, however, be identified without recourse to polarimetric decomposition. The simplest and most reliable indicator of double-bounce backscatter is a high return in HH polarization, which double-bounce scattering generally produces more strongly than any other mechanism. If both HH and VV polarizations are available, a high HH/VV intensity ratio is also a strong indicator of double-bounce backscatter. Additional modeling efforts are expected to provide further insights that can lead to improved applications of polarimetric data.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
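The HH/VV indicator described above is straightforward to apply to calibrated intensities. A hedged sketch (the 3 dB threshold is illustrative, not taken from the paper):

```python
import math

def likely_double_bounce(hh_power, vv_power, thresh_db=3.0):
    """Flag a pixel as likely double-bounce when HH exceeds VV by thresh_db."""
    ratio_db = 10.0 * math.log10(hh_power / vv_power)
    return ratio_db >= thresh_db

# e.g. flooded trunks returning 4x more power in HH than VV (~6 dB) are flagged
```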
  • Testing the Efficiency of Using High-Resolution Data From GF-1 in Land
           Cover Classifications
    • Authors: Xiaofeng Wang;Chaowei Zhou;Xiaoming Feng;Changwu Cheng;Bojie Fu;
      Pages: 3051 - 3061
      Abstract: High-resolution remote sensing plays an important role in the study of subtle changes on the Earth's surface. The newly orbiting Chinese GF-1 satellites are designed to observe the Earth's surface on a regional scale; however, their efficiency requires further investigation. In this paper, the efficiency of using GF-1 01 satellite images to monitor a complex surface is tested by considering supplementary information and different land cover classification methods. Our work revealed that GF-1 satellite observations can efficiently detect land cover fragments. When the support vector machine method is applied, the overall classification accuracy based on multisource data reaches 90.5%, and the “salt and pepper” phenomenon is effectively reduced in the classification images. These results also indicate that the accuracy of the GF-1 image classification is superior to that obtained using the same method with Landsat 8 and Sentinel-2A images, with the overall classification accuracy increasing by 23.6% and 13.6%, respectively. Our study suggests that GF-1 satellite observations are suitable for land cover studies on complex land surfaces. This approach can benefit related fields such as land resource surveys, ecological assessments, and environmental evaluations.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
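Overall classification accuracies of the kind reported above are computed from the confusion matrix. A minimal sketch (illustrative only, not the authors' code):

```python
def overall_accuracy(confusion):
    """Overall accuracy = correctly classified pixels / all pixels,
    for a square confusion matrix (rows = reference, columns = predicted)."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total
```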
  • Evaluation of Optical and Radar Images Integration Methods for LULC
           Classification in Amazon Region
    • Authors: Luciana O. Pereira;Corina C. Freitas;Sidnei J. S. Sant'Anna;Mariane S. Reis;
      Pages: 3062 - 3074
      Abstract: The main objective of this study is to evaluate different methods of integrating (by fusion and combination) synthetic aperture radar (SAR) Advanced Land Observing Satellite (ALOS) Phased Array L-band SAR (PALSAR-1) (Fine Beam Dual mode, FBD) and LANDSAT images, in order to identify those that lead to higher accuracy of land-use and land-cover (LULC) mapping in an agricultural frontier region in the Amazon. A method of integrating the multipolarized information in the SAR images before the fusion process was also evaluated; in this method, the first principal component (PC1) of the SAR data was used. Color compositions of the fused data that gave better LULC classifications were visually analyzed. Considering the proposed objective, the following fusion methods should be highlighted: Ehlers; wavelet à trous; intensity, hue, and saturation (IHS); and selective principal component analysis (SPC). The latter three methods presented good results when processed using PC1 of the ALOS/PALSAR-1 FBD filtered backscatter image or three extracted and selected SAR features. These results corroborate the applicability of the proposed method for integrating SAR data information. Distinct methods better discriminate different LULC classes. In general, densely forested classes were best characterized by the Ehlers_TM6 fusion method when at least the HV polarization was used. Intermediate and initial regeneration classes were better discriminated using SPC-fused data with PC1 of the ALOS/PALSAR-1 FBD data. Bare soil and pasture classes were better discriminated using optical features and the PC1 of the ALOS/PALSAR-1 FBD data fused by the IHS method. Soybean approximately 40 days after seeding was better discriminated in the image classification obtained from the ALOS/PALSAR-1 FBD image.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Geological Mapping in Western Tasmania Using Radar and Random Forests
    • Authors: Declan D. G. Radford;Matthew J. Cracknell;Michael J. Roach;Grace V. Cumming;
      Pages: 3075 - 3087
      Abstract: Mineral exploration and geological mapping of highly prospective areas in western Tasmania, southern Australia, is challenging due to steep topography, dense vegetation, and limited outcrop. Synthetic aperture radar (SAR) can potentially penetrate vegetation canopies and assist geological mapping in this environment. This study applies manual and automated lithological classification methods to airborne polarimetric TopSAR and geophysical data in the Heazlewood region, western Tasmania. Major discrepancies between classification results and the existing geological map generated fieldwork targets that led to the discovery of previously unmapped rock units. Manual analysis of radar image texture was essential for the identification of lithological boundaries. Automated pixel-based classification of radar data using Random Forests achieved poor results despite the inclusion of textural information derived from gray level co-occurrence matrices. This is because the majority of manually identified features within the radar imagery result from geobotanical and geomorphological relationships, rather than direct imaging of surficial lithological variations. Inconsistent relationships between geology and vegetation or geology and topography limit the reliability of TopSAR interpretations for geological mapping in this environment. However, Random Forest classifications, based on geophysical data and validated against manual interpretations, were accurate (∼90%) even when using limited training data (∼0.15% of total data). These classifications identified a previously unmapped region of mafic–ultramafic rocks, the presence of which was verified through fieldwork. This study validates the application of machine learning for geological mapping in remote and inaccessible localities but also highlights the limitations of SAR data in thickly vegetated terrain.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Extraction of Wall Cracks on Earthquake-Damaged Buildings Based on TLS
           Point Clouds
    • Authors: Hongbo Jiang;Qiang Li;Qisong Jiao;Xin Wang;Lie Wu;
      Pages: 3088 - 3096
      Abstract: Earthquakes often induce collapse or cause extreme damage to large areas of buildings. One of the most important requirements for earthquake emergency operations is staying up-to-date on the extent of structural damage in earthquake-stricken areas. Terrestrial laser scanning (TLS) technology can directly obtain the coordinates of mass points while maintaining high measurement accuracy, thereby providing the means to directly extract quantitative information from surface cracks on damaged buildings. In this paper, we present a framework for extracting wall cracks from high-density TLS point clouds. We first differentiate wall points from nonwall points in the TLS data. Then, a planar triangulation modeling method is used to construct a triangular irregular network (TIN) dataset, after which a raster surface is generated using an inverse distance weighting point cloud rasterization method based on the crack width. Cracks are then extracted based on their shape features. As an example of the above-mentioned method, we extract six sets of wall cracks from a building wall in Beichuan County damaged by the Wenchuan earthquake. Quantitative calculations reveal that the extraction accuracy of the proposed method is greater than 91% and that the rate of leakage detection is less than 10%. The main limiting factor of the extraction accuracy is the crack width; that is, a wider crack results in a higher extraction accuracy. Moreover, the crack connectivity and leakage rate are negatively correlated; that is, higher connectivity corresponds to a lower rate of missed extractions.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
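The inverse-distance-weighting rasterization step can be sketched as follows. This is a generic IDW interpolator, not the paper's crack-width-adaptive variant:

```python
def idw_estimate(samples, qx, qy, power=2.0):
    """Inverse-distance-weighted value at (qx, qy) from (x, y, value) samples."""
    num = den = 0.0
    for x, y, v in samples:
        d2 = (x - qx) ** 2 + (y - qy) ** 2
        if d2 == 0.0:
            return v                   # query point coincides with a sample
        w = d2 ** (-power / 2.0)       # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den
```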
  • Passive Microwave Probing Mare Basalts in Mare Imbrium Using CE-2 CELMS
    • Authors: Zhiguo Meng;Shuo Hu;Tianxing Wang;Cui Li;Zhanchuan Cai;Jinsong Ping;
      Pages: 3097 - 3104
      Abstract: To evaluate their utility for volcanism studies, microwave sounder (CELMS) data from the Chinese Chang'E-2 lunar satellite were employed in this study and compared with geologic results derived from optical and radar data of Mare Imbrium, which represents the long-lasting, final extensive phase of lunar volcanism. First, the normalized brightness temperature (T_B) is generated to eliminate its strong latitude-dependent effect; it shows a good correlation with titanium abundance. Moreover, the difference between the T_B at noon and that at midnight at the same frequency (dT_B) is derived to evaluate the absorption features of the lunar regolith. According to the change of dT_B with frequency, a new geologic perspective on Mare Imbrium is given. The new interpretation map largely agrees with the optical results in regions with high (FeO + TiO2) abundance (FTA), and with the interpretation maps from radar data in regions with low FTA. The statistical results validate the three-phase volcanism in Mare Imbrium. This study also hints at the special importance of the CELMS data for understanding the basaltic volcanism of the Moon.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Persistent Scatterer Analysis Using Dual-Polarization Sentinel-1 Data:
           Contribution From VH Channel
    • Authors: Roghayeh Shamshiri;Hossein Nahavandchi;Mahdi Motagh;
      Pages: 3105 - 3112
      Abstract: The regular acquisitions and relatively short revisit time of the Sentinel-1 satellite make persistent scatterer interferometric synthetic aperture radar (PS-InSAR) a suitable geodetic method for measuring ground surface deformation in space and time. The SAR instrument aboard Sentinel-1 supports operation in dual polarization (HH–HV, VV–VH), which can be used to increase the spatial density of measurement points through polarimetric optimization. This study evaluates the improvement in displacement mapping obtained by incorporating information from the VH channel of Sentinel-1 data into the PS-InSAR analysis. The method, which has shown great success with different polarimetric data, searches the available polarimetric space for a linear combination of polarization states that optimizes the PS selection criterion, here the amplitude dispersion index (ADI). We applied the method to a dataset of 50 dual-polarized (VV–VH) Sentinel-1 images over the city of Trondheim, Norway. The results show an overall increase of about 186% and 78% in the number of PS points with respect to the conventional VH and VV channels, respectively. The study concludes that, using ADI optimization, we can incorporate information from the VH channel into the PS-InSAR analysis that would otherwise be lost due to its low amplitude.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
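The amplitude dispersion index (ADI) used as the PS selection criterion is simply the ratio of the temporal standard deviation to the temporal mean of a pixel's amplitude. A sketch (the 0.25 cutoff is a common convention in the PS-InSAR literature, not a value taken from this paper):

```python
from statistics import mean, stdev

def amplitude_dispersion(amplitudes):
    """ADI = sigma_A / mu_A over one pixel's SAR amplitude time series."""
    return stdev(amplitudes) / mean(amplitudes)

def is_ps_candidate(amplitudes, threshold=0.25):
    """Phase-stable (PS) candidates have low amplitude dispersion."""
    return amplitude_dispersion(amplitudes) < threshold
```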
  • Patch-Sorted Deep Feature Learning for High Resolution SAR Image
    • Authors: Zhongle Ren;Biao Hou;Zaidao Wen;Licheng Jiao;
      Pages: 3113 - 3126
      Abstract: Synthetic aperture radar (SAR) image classification is a fundamental process for SAR image understanding and interpretation. Traditional SAR classification methods extract shallow, handcrafted features, which cannot subtly depict the abundant modal information in high resolution SAR images. Inspired by deep learning, an effective feature learning tool, a novel method called the patch-sorted deep neural network (PSDNN) is proposed to implement unsupervised discriminative feature learning. First, randomly selected patches are measured and sorted by a carefully designed patch-sorting strategy that adopts instance-based prototype learning. Then the sorted patches are delivered to a well-designed dual-sparse autoencoder to obtain the desired weights in each layer. A convolutional neural network follows to extract high-level spatial and structural features. Finally, the features are fed to a linear support vector machine to generate predicted labels. Experimental results on three broad SAR images from different satellites confirm the effectiveness and generalization of our method. Compared with three traditional feature descriptors and four unsupervised deep feature descriptors, the features learned by the PSDNN exhibit powerful discrimination, and the PSDNN achieves the desired classification accuracy and a good visual appearance.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Superpixel-Level Target Discrimination for High-Resolution SAR Images in
           Complex Scenes
    • Authors: Zhaocheng Wang;Lan Du;Hongtao Su;
      Pages: 3127 - 3143
      Abstract: Traditional synthetic aperture radar (SAR) target discrimination methods are implemented at the chip level; they may perform well in simple scenes but lose effectiveness in complex scenes. To improve discrimination performance in complex scenes, this paper proposes a superpixel-level target discrimination method that operates directly on high-resolution SAR images. The proposed method contains three main stages. First, based on the superpixel-level target detection results, we describe each superpixel via a multilevel and multidomain feature descriptor, which can comprehensively reflect the differences between targets and clutter. Second, we employ a support vector machine as the discriminator to obtain the discriminated target superpixels. Finally, we cluster the discriminated target superpixels and extract the target chips from the original SAR image based on the clustering results. Experimental results on the miniSAR real SAR data show that the proposed method achieves an F1-score about 25% higher than that of traditional discrimination methods.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
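The F1-score used above to compare discriminators is the harmonic mean of precision and recall, computed from true positives, false positives, and false negatives:

```python
def f1_score(tp, fp, fn):
    """F1 = 2PR/(P+R), with precision P = tp/(tp+fp) and recall R = tp/(tp+fn)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)
```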
  • A Novel Approach Based on the Instrumental Variable Method With
           Application to Airborne Synthetic Aperture Radar Imagery
    • Authors: Shasha Mo;Jianwei Niu;Yanfei Wang;
      Pages: 3144 - 3154
      Abstract: Airborne synthetic aperture radar (SAR) systems are essential tools for modern remote sensing applications. The aircraft is easily affected by atmospheric turbulence, leading to deviations from the ideal track. To enable high-resolution imagery, a navigation system is usually mounted on the aircraft; due to the limits of its accuracy, motion errors must additionally be estimated from the SAR raw data. In this paper, a novel motion compensation algorithm based on the instrumental variables (IV) method, called the IVA algorithm, is proposed. In this algorithm, double-derivative motion errors are estimated without modeling the random disturbances as zero-mean Gaussian or as mutually independent noise, which makes the algorithm more robust and accurate in focusing SAR images. Before the motion error estimation, a Savitzky–Golay filter is applied to reduce the phase estimation errors, where the phase is obtained by the phase gradient autofocus algorithm. Finally, the estimated motion errors are used to compensate the received signal with a range-dependent model. The IVA algorithm is validated using real airborne SAR data, and experimental results show that the proposed algorithm achieves excellent performance in airborne SAR systems.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
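The Savitzky–Golay smoothing step fits a low-order polynomial over a sliding window; for a 5-point quadratic fit the filter reduces to fixed convolution coefficients. A sketch with window length and order chosen for illustration (endpoints left unsmoothed for brevity):

```python
def savgol5(y):
    """5-point quadratic Savitzky-Golay smoother, coefficients (-3,12,17,12,-3)/35."""
    c = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(c[j] * y[i - 2 + j] for j in range(5)) / 35.0
    return out
```

Because the fit is quadratic, the filter passes second-degree polynomials through unchanged while attenuating high-frequency noise.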
  • N-SAR: A New Multichannel Multimode Polarimetric Airborne SAR
    • Authors: Aifang Liu;Fan Wang;Hui Xu;Liechen Li;
      Pages: 3155 - 3166
      Abstract: Recent years have seen surging interest in several novel SAR techniques, such as multichannel high-resolution and wide-swath (HRWS) SAR, multibaseline interferometric SAR (InSAR), multisubband SAR, polarimetric SAR (PolSAR), and polarimetric SAR interferometry (PolInSAR). We believe that these new approaches to SAR have valuable scientific applications. Here, we present a new experimental airborne SAR system named “N-SAR” (the SAR of the Nanjing Research Institute of Electronic Technology) that can fulfill new requirements and is scalable to allow rapid development of modern SARs. As a dual-antenna airborne SAR system, N-SAR will be used to test new technologies and signal processing algorithms such as PolSAR and PolInSAR, multibaseline SAR interferometry, multichannel HRWS SAR/InSAR, and SAR ground moving target indication. It will play a key role in evaluating the performance of current engineering-oriented SAR systems by using several new operational modes for scientific purposes. In this paper, we provide a conceptual description of the general system design features, instrument design, and capabilities of the N-SAR system. To meet the requirements of different experiments, a novel operational mode based on the N-SAR system, named the alternating bistatic multipolarized mode, is presented. A series of flight tests that started in April 2017 and will continue over the next few years is described. Several preliminary experimental results pertaining to PolSAR calibration, multichannel SAR imaging, and interferometry are presented as an early validation of the capabilities of the N-SAR system.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • A Multifeature Autosegmentation-Based Approach for Inshore Ambiguity
           Identification and Suppression With Azimuth Multichannel SAR Systems
    • Authors: Huajian Xu;Zhiwei Yang;Pengyuan He;Guisheng Liao;Min Tian;Penghui Huang;
      Pages: 3167 - 3178
      Abstract: To address the problems of locating and suppressing inshore azimuth-ambiguous clutter in azimuth multichannel synthetic aperture radar (SAR) systems, a multifeature autosegmentation-based approach is developed in this paper. The proposed method can segment a SAR image automatically according to the distinctions among main land clutter, ambiguous land clutter, and sea clutter in the interferogram phase and magnitude features. First, a finite mixture clutter model for the multilook covariance matrix (MLCM) is built, in which the off-diagonal elements of the MLCM contain the magnitude and interferogram-phase information between azimuth channels. Then, SAR image autosegmentation is carried out using the expectation-maximization algorithm in combination with the aforementioned mixture model, and isolated points that are segmented incorrectly are eliminated by exploiting a Markov random field smoothing technique. Finally, azimuth-ambiguous clutter is suppressed by means of a clutter covariance matrix constructed from the training samples of the segmented ambiguities. Experiments on simulated data and on real data measured by TerraSAR-X demonstrate that the proposed approach obtains more accurate position information and good cancellation performance for azimuth-ambiguous clutter, without requiring accurate system parameters or information about the sources of the azimuth ambiguities.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
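The expectation-maximization step can be illustrated on a toy 1-D two-component Gaussian mixture. This is a stand-in for the paper's multilook-covariance mixture model; the initialization, component count, and scalar data are all simplifications:

```python
import math

def em_two_gaussians(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = [min(data), max(data)]            # crude initialization at the extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append((p[0] / s, p[1] / s))
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)     # guard against variance collapse
    return mu, var, pi
```

In the real method the component densities are matrix-variate (built on the MLCM), but the E/M alternation is the same.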
  • Classification of VHR Multispectral Images Using ExtraTrees and Maximally
           Stable Extremal Region-Guided Morphological Profile
    • Authors: Alim Samat;Claudio Persello;Sicong Liu;Erzhu Li;Zelang Miao;Jilili Abuduwaili;
      Pages: 3179 - 3195
      Abstract: Pixel-based contextual classification methods, including morphological profiles (MPs), extended MPs, attribute profiles (APs), and MPs with partial reconstruction (MPPR), have shown the benefits of using geometrical features extracted from very-high-resolution (VHR) images. However, the structuring element sequences or attribute filters adopted in these solutions always result in computationally inefficient and redundant high-dimensional features. To address the latter problem, we introduce maximally stable extremal region (MSER) guided MPs (MSER_MPs) and MSER_MPs(M), which contain the mean pixel values within regions, to foster effective and efficient spatial feature extraction. In addition, the extremely randomized decision tree (ERDT) and its ensemble version, ExtraTrees, are introduced and investigated, and an extremely randomized rotation forest (ERRF) is proposed by simply replacing the conventional C4.5 decision tree in a rotation forest (RoF) with an ERDT. Finally, the proposed spatial feature extractors, together with ERDT, ExtraTrees, and ERRF, are evaluated on the classification of three VHR multispectral images acquired over urban areas, and compared against C4.5, bagging (C4.5), random forest, support vector machine, and RoF in terms of classification accuracy and computational efficiency. The experimental results confirm the superior performance of MSER_MPs(M) and MSER_MPs over MPPR and MPs, respectively, and ExtraTrees performs best for spectral-spatial classification of VHR multispectral images using the original spectra stacked with MSER_MPs(M) features.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
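The extremely randomized trees ensemble at the core of the entry above is available off the shelf. A minimal sketch, assuming scikit-learn, with synthetic arrays standing in for the spectra stacked with MSER_MPs(M) features:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
# Two well-separated synthetic "pixel" classes, 10 features each,
# standing in for the stacked spectral-spatial feature vectors.
X = np.vstack([rng.normal(0.0, 1.0, (100, 10)),
               rng.normal(5.0, 1.0, (100, 10))])
y = np.repeat([0, 1], 100)

# ExtraTrees draws split thresholds at random, which is what makes the
# extremely randomized decision trees cheap to build compared with C4.5.
clf = ExtraTreesClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)
```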
  • Pansharpening for Multiband Images With Adaptive Spectral–Intensity
    • Authors: Yong Yang;Lei Wu;Shuying Huang;Yingjun Tang;Weiguo Wan;
      Pages: 3196 - 3208
      Abstract: The pansharpening algorithm often faces an imbalance between spatial sharpness and spectral preservation, resulting in spectral and intensity inhomogeneities in the fused image. In this paper, to overcome this problem, we present a robust pansharpening method for multiband images with adaptive spectral–intensity modulation. In this method, we propose an adaptive spectral modulation coefficient (ASMC) and an adaptive intensity modulation coefficient (AIMC) to modulate the spectral and spatial information in the fused image, respectively. Among these coefficients, the ASMC is constructed based on two aspects: first, the details extracted from the panchromatic (PAN) and multispectral (MS) images; and second, the spectral relationship between each MS band. The AIMC is calculated by assessing the correlation and standard deviation between the PAN image and each MS band. Finally, we propose a mathematically linear model to combine ASMC and AIMC to achieve the fused image. Various remote-sensing satellite images were used in the evaluations. Experimental results indicate that the proposed method achieves outstanding performance in balancing spatial and spectral information and outperforms several state-of-the-art fusion methods in terms of both full-reference and no-reference metrics, and on visual inspection.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
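The adaptive intensity modulation described above can be illustrated with a toy detail-injection step; the gain below (correlation times a standard-deviation ratio) is a hypothetical stand-in for the paper's AIMC, not its exact formula:

```python
import numpy as np

def fuse_band(ms_band, pan, detail):
    """Inject spatial detail into one MS band, with a gain set by the band's
    correlation with PAN and their standard-deviation ratio (a hedged sketch
    of adaptive intensity modulation, not the published coefficient)."""
    corr = np.corrcoef(ms_band.ravel(), pan.ravel())[0, 1]
    gain = corr * ms_band.std() / (pan.std() + 1e-12)
    return ms_band + gain * detail

rng = np.random.default_rng(1)
pan = rng.random((32, 32))
detail = pan - pan.mean()                      # crude high-frequency proxy
ms = 0.8 * pan + 0.05 * rng.random((32, 32))   # one MS band correlated with PAN
fused = fuse_band(ms, pan, detail)
```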
  • Structural-Correlated Self-Examples Based Superresolution of Single Remote
           Sensing Image
    • Authors: Huifang Shen;Biao Hou;Zaidao Wen;Licheng Jiao;
      Pages: 3209 - 3223
      Abstract: Image superresolution methods are of great importance to image analysis and interpretation and have been intensively studied and widely applied. The main research problems in single-image superresolution are how to construct the training image database and how to learn the mapping relationship between low- and high-resolution images. In this paper, a novel superresolution method based on self-example learning is proposed that uses only the single input image, without depending on any external training images. The training self-examples are extracted from gradually degraded versions of the test image and their corresponding interpolated counterparts to build internal high- and low-resolution training databases. Inspired by the coarse-to-fine concept, the upscaling process is performed gradually as well. The algorithm includes two steps during each upscaling procedure. For each low-resolution patch, the first step finds structural-correlated patches by sparse representation throughout the training database to learn a global linear mapping function between low- and high-resolution image patches without any assumption on the data, and the second step takes advantage of sparse representation as a local constraint on the superresolution result. At each upscaling procedure, iterative back projection is applied to guarantee the consistency of the estimated image. Moreover, the internal training database is updated according to the newly generated upscaled image. Experiments show that the proposed algorithm achieves good performance on peak signal-to-noise ratio and the structural similarity index and produces excellent visual effects compared with other superresolution methods.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
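The iterative back projection step mentioned above is a standard consistency-enforcing loop. A minimal sketch with bilinear resampling (the paper's degradation model may differ):

```python
import numpy as np
from scipy.ndimage import zoom

def back_project(hr_est, lr, scale, n_iter=10):
    """Iterative back projection: repeatedly push the HR estimate toward
    consistency with the observed LR image under downsampling."""
    for _ in range(n_iter):
        simulated = zoom(hr_est, 1.0 / scale, order=1)       # re-degrade estimate
        hr_est = hr_est + zoom(lr - simulated, scale, order=1)  # add back the error
    return hr_est

# Smooth ramp as a toy ground truth; bilinear resampling reproduces it exactly.
truth = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
lr = zoom(truth, 0.5, order=1)
init = zoom(lr, 2.0, order=1)          # plain interpolated starting estimate
refined = back_project(init, lr, 2.0)

err_before = np.abs(zoom(init, 0.5, order=1) - lr).mean()
err_after = np.abs(zoom(refined, 0.5, order=1) - lr).mean()
```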
  • Supervised and Adaptive Feature Weighting for Object-Based Classification
           on Satellite Images
    • Authors: Ya'nan Zhou;Yuehong Chen;Li Feng;Xin Zhang;Zhanfeng Shen;Xiaocheng Zhou;
      Pages: 3224 - 3234
      Abstract: The object-based image analysis (OBIA) technique represents an evolving paradigm in remote sensing applications as more high-resolution satellite images become available. However, the many features derived from segmented objects also present a new challenge to OBIA applications. In this paper, we present a supervised and adaptive method for ranking and weighting features for object-based classification. The core of this method is the feature weight map for each land type, derived from prior thematic maps and the corresponding satellite images of the study areas. Specifically, first, the satellite images to be classified are segmented using an adaptive multiscale algorithm, and multiple (spectral, shape, and texture) features of the segmented objects are calculated. Second, we extract distance maps and feature weight vectors for each land type from the prior thematic maps and corresponding satellite images to generate feature weight maps. Third, a feature-weighted classifier with the feature weight maps is applied to the segmented objects to generate classification maps. Finally, the classification result is evaluated. This approach is applied to a Sentinel-2 multispectral satellite image and a Google Map image to produce object-based classification maps and is compared with traditional feature selection algorithms. The experimental results illustrate that the proposed method efficiently selects important features and improves classification performance.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
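Supervised feature weighting of the kind described above can be illustrated with a simple class-separability score; the Fisher-style ratio below is an illustrative stand-in, since the paper derives its weights from prior thematic maps rather than this statistic:

```python
import numpy as np

def fisher_weights(X, y):
    """Weight each feature by between-class over within-class scatter,
    a simple supervised ranking in the spirit of feature weighting."""
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    w = between / (within + 1e-12)
    return w / w.sum()      # normalize so the weights sum to one

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 3))
X[:, 0] += 3.0 * y          # only feature 0 separates the two classes
w = fisher_weights(X, y)
```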
  • TrAdaBoost Based on Improved Particle Swarm Optimization for Cross-Domain
           Scene Classification With Limited Samples
    • Authors: Li Yan;Ruixi Zhu;Yi Liu;Nan Mo;
      Pages: 3235 - 3251
      Abstract: Scene classification is usually based on supervised learning, but collecting instances is expensive and time-consuming. TrAdaBoost has achieved great success in transferring source instances to target images, but it has problems, such as the excessive focus on instances harder to classify, the rapid convergence speed of the source instances, and the weight mismatch caused by the big gap between the number of source and target instances, leading to decreased classification accuracy. In this paper, in order to address these problems, classical particle swarm optimization (PSO) is modified to select the optimal feature subspace for classifying the “harder” and “easier” instances by reducing unimportant dimensions. A modified correction factor is proposed by considering the classification accuracy of the instances from both domains, to decrease the convergence speed. Iterative selective TrAdaBoost is also proposed to reduce the weight mismatch by removing the indiscriminate source instances. The experimental results obtained with three benchmark data sets confirm that the proposed method outperforms most of the previous methods of scene classification with limited target samples. It is also proved that modified PSO for optimal feature subspace selection, the modified correction factor, and iterative selective TrAdaBoost are all effective in improving the classification accuracy, giving improvements of 3.6%, 4.3%, and 2.7%, and these three contributions together increase the classification accuracy by about 8% in total.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
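The weight dynamics the entry above sets out to fix can be seen in the classical TrAdaBoost update: misclassified source instances are down-weighted (they look unlike the target domain) while misclassified target instances are up-weighted, AdaBoost-style. This sketches the baseline update the paper modifies, not its proposed correction factor:

```python
import numpy as np

def tradaboost_reweight(w_src, w_tgt, miss_src, miss_tgt, eps_t, n_rounds):
    """One TrAdaBoost reweighting round. miss_* is 1 where an instance was
    misclassified by the current weak learner, 0 otherwise; eps_t is the
    weighted error on the target domain (must be < 0.5)."""
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(len(w_src)) / n_rounds))
    beta_tgt = eps_t / (1.0 - eps_t)
    new_src = w_src * beta_src ** miss_src      # shrink misclassified source
    new_tgt = w_tgt * beta_tgt ** (-miss_tgt)   # grow misclassified target
    return new_src, new_tgt

w_src = np.full(4, 0.25)
w_tgt = np.full(2, 0.5)
miss_src = np.array([1, 0, 0, 0])
miss_tgt = np.array([1, 0])
new_src, new_tgt = tradaboost_reweight(w_src, w_tgt, miss_src, miss_tgt, 0.25, 10)
```

The rapid shrinkage of `beta_src ** miss_src` over many rounds is exactly the "rapid convergence speed of the source instances" the abstract identifies as a problem.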
  • Semantic Segmentation for High Spatial Resolution Remote Sensing Images
           Based on Convolution Neural Network and Pyramid Pooling Module
    • Authors: Bo Yu;Lu Yang;Fang Chen;
      Pages: 3252 - 3261
      Abstract: Semantic segmentation provides a practical way to segment remotely sensed images into multiple ground objects simultaneously and can potentially be applied to many remote sensing tasks. Current classification algorithms for remotely sensed images are mostly limited by varying imaging conditions; the multiple ground objects are difficult to separate from each other because of high intraclass spectral variances and interclass spectral similarities. In this study, we propose an end-to-end framework to semantically segment high-resolution aerial images without postprocessing to refine the segmentation results. The framework provides a pixel-wise segmentation result, comprising a convolutional neural network structure and a pyramid pooling module, which aims to extract feature maps at multiple scales. The proposed model is applied to the ISPRS Vaihingen benchmark dataset from the ISPRS 2D Semantic Labeling Challenge. Its segmentation results are compared with the previous state-of-the-art methods UZ_1 and UPB and three other methods that segment images into objects of all the classes (including clutter/background) based on true orthophoto tiles, and it achieves the highest overall accuracy of 87.8% over the published performances, to the best of our knowledge. The results validate the efficiency of the proposed model in segmenting multiple ground objects from remotely sensed images simultaneously.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
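The pyramid pooling module mentioned above aggregates context at several grid scales. A numpy sketch of the idea on a single-channel map (the actual module operates on CNN feature tensors, with convolutions after each pooled branch):

```python
import numpy as np

def pyramid_pool(feat, bins=(1, 2, 4)):
    """Pyramid pooling on one 2-D feature map: average-pool into a b x b grid
    at each scale, upsample back to full size by repetition, and stack the
    results with the input. Assumes each b divides both dimensions."""
    h, w = feat.shape
    maps = [feat]
    for b in bins:
        grid = feat.reshape(b, h // b, b, w // b).mean(axis=(1, 3))
        maps.append(np.repeat(np.repeat(grid, h // b, axis=0), w // b, axis=1))
    return np.stack(maps)

feat = np.arange(64, dtype=float).reshape(8, 8)
pooled = pyramid_pool(feat)   # shape (1 + len(bins), 8, 8)
```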
  • Dimensionality Reduction of Hyperspectral Image Using Spatial Regularized
           Local Graph Discriminant Embedding
    • Authors: Renlong Hang;Qingshan Liu;
      Pages: 3262 - 3271
      Abstract: Dimensionality reduction (DR) is an important preprocessing step for hyperspectral image (HSI) classification. Recently, graph-based DR methods have been widely used. Among various graph-based models, the local graph discriminant embedding (LGDE) model has shown its effectiveness due to the complete use of label information. Besides spectral information, an HSI also contains rich spatial information. In this paper, we propose a regularization method to incorporate the spatial information into the LGDE model. Specifically, an oversegmentation method is first employed to divide the original HSI into nonoverlapping superpixels. Then, based on the observation that pixels in a superpixel often belong to the same class, intraclass graphs are constructed to describe such spatial information. Finally, the constructed superpixel-level intraclass graphs are used as a regularization term, which can be naturally incorporated into the LGDE model. Besides, to sufficiently capture the nonlinear property of an HSI, the linear LGDE model is further extended into its kernel counterpart. To demonstrate the effectiveness of the proposed method, experiments have been established on three widely used HSIs acquired by different hyperspectral sensors. The obtained results show that the proposed method can achieve higher classification performance than many state-of-the-art graph embedding models, and the kernel extension model can further improve the classification performance.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Tridiagonal Folmat Enhanced Multivariance Products Representation Based
           Hyperspectral Data Compression
    • Authors: Zeynep Gündoğar;Behçet Uğur Töreyin;Metin Demiralp;
      Pages: 3272 - 3278
      Abstract: Hyperspectral imaging is an important topic in remote sensing and its applications. The requirement to collect high volumes of hyperspectral data in remote sensing algorithms poses a compression problem. To this end, many techniques and algorithms have been developed and continue to be improved in the scientific literature. In this paper, we propose a recently developed lossy compression method called tridiagonal folded matrix enhanced multivariance products representation (TFEMPR). This is a specific multidimensional array decomposition method using a new mathematical concept called the “folded matrix,” and it provides a binary decomposition for multidimensional arrays. Besides the method, a comparative analysis of compression algorithms in terms of compression performance is presented in this paper. The compression performance of TFEMPR is compared with state-of-the-art methods such as compressive-projection principal component analysis, matching pursuit, and block compressed sensing algorithms via average peak signal-to-noise ratio. Experiments with the AVIRIS data set indicate superior reconstructed image quality for the proposed technique in comparison to state-of-the-art hyperspectral data compression methods.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
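The comparison metric named above, peak signal-to-noise ratio, is straightforward to compute per reconstructed band:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and its
    reconstruction, the quality metric used to compare compression methods."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(rec, float)) ** 2)
    return np.inf if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 128.0)
rec = ref - 1.0                  # every pixel off by one gray level -> MSE = 1
quality = psnr(ref, rec)         # 10 * log10(255**2) ~ 48.13 dB
```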
  • An Investigation Into the Impact of Band Error Variance Estimation on
           Intrinsic Dimension Estimation in Hyperspectral Images
    • Authors: Mark Berman;Zhipeng Hao;Glenn Stone;Yi Guo;
      Pages: 3279 - 3296
      Abstract: There have been a significant number of recent papers about hyperspectral imaging, which propose various methods for estimating the number of materials/endmembers in hyperspectral images. This is sometimes called the “intrinsic” dimension (ID) of the image. Estimation of the error variance in each spectral band is a critical first step in ID estimation. The estimated error variances can then be used to preprocess (e.g., whiten) the data, prior to ID estimation. A range of variance estimation methods have been advocated in the literature. We investigate the impact of five variance estimation methods (three using spatial information and two using spectral information) on five ID estimation methods, with the aid of four different, but semirealistic, sets of simulated hyperspectral images. Our findings are as follows: first, for all four sets, the two spectral variance estimation methods significantly outperform the three spatial methods; second, when used with the spectral variance estimation methods, two of the ID estimation methods (called random matrix theory and NWHFC) consistently outperform the other three ID estimation methods; third, the better spectral variance estimation method sometimes gives negative variance estimates; fourth, we introduce a simple correction that guarantees positivity; and fifth, we give a fast algorithm for its computation.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
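A simple way to see why spectral variance estimators work is to regress each band on the remaining bands and treat the residual variance as the band's noise variance. The sketch below follows that spectral-decorrelation idea generically; it is not a reimplementation of any of the five specific estimators the study compares:

```python
import numpy as np

def spectral_residual_variance(X):
    """Per-band noise variance estimate: regress band j on all other bands
    (plus an intercept) and take the unbiased residual variance."""
    n, p = X.shape
    var = np.empty(p)
    for j in range(p):
        A = np.column_stack([np.delete(X, j, axis=1), np.ones(n)])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        var[j] = resid @ resid / (n - A.shape[1])
    return var

rng = np.random.default_rng(3)
abund = rng.random((500, 2))                       # low-rank "material" signal
endmembers = rng.random((2, 5))
sigma = np.array([0.05, 0.05, 0.05, 0.05, 0.5])    # band 4 is much noisier
X = abund @ endmembers + rng.normal(0.0, sigma, (500, 5))
est = spectral_residual_variance(X)
```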
  • Marginal Stacked Autoencoder With Adaptively-Spatial Regularization for
           Hyperspectral Image Classification
    • Authors: Jie Feng;Liguo Liu;Xianghai Cao;Licheng Jiao;Tao Sun;Xiangrong Zhang;
      Pages: 3297 - 3311
      Abstract: Stacked autoencoder (SAE) provides excellent performance for image processing under sufficient training samples. However, the collection of training samples is difficult in hyperspectral images. Insufficient training samples easily make the SAE overfit and limit its application to hyperspectral images. To address this problem, a novel marginal SAE with adaptively-spatial regularization (ARMSAE) is proposed for hyperspectral image classification. First, a superpixel segmentation method is used to divide the image into many homogeneous regions. Then, at the pretraining stage, an adaptively-shaped spatial regularization is introduced to extract contextual information of samples in the homogeneous regions. It sufficiently utilizes unlabeled adjacent samples to alleviate the lack of training samples. At the fine-tuning stage, marginal samples selected on the basis of geometrical properties are used to tune the ARMSAE network. The fine-tuning exploits a margin strategy to alleviate the inaccurate statistical estimation caused by insufficient training samples. Finally, the label of each test sample is determined by all the samples located in the same homogeneous region. Experimental results on hyperspectral images demonstrate that the proposed method provides encouraging classification performance compared with several related state-of-the-art methods.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Gaussian Pyramid Based Multiscale Feature Fusion for Hyperspectral Image
    • Authors: Shutao Li;Qiaobo Hao;Xudong Kang;Jón Atli Benediktsson;
      Pages: 3312 - 3324
      Abstract: In this paper, we propose a segmented principal component analysis (SPCA) and Gaussian pyramid decomposition based multiscale feature fusion method for the classification of hyperspectral images. First, considering the band-to-band cross correlations of objects, the SPCA method is utilized for the spectral dimension reduction of the hyperspectral image. Then, the dimension-reduced image is decomposed into several Gaussian pyramids to extract the multiscale features. Next, the SPCA method is performed again to compute the fused SPCA based Gaussian pyramid features (SPCA-GPs). Finally, the performance of the SPCA-GPs is evaluated using the support vector machine classifier. Experiments performed on three widely used hyperspectral images show that the proposed SPCA-GPs method outperforms several compared classification methods in terms of classification accuracies and computational cost.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
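The Gaussian pyramid decomposition used above for multiscale features follows the usual blur-then-downsample recursion; a sketch on a single dimension-reduced band (the SPCA steps before and after are omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(band, levels=3, sigma=1.0):
    """Gaussian pyramid of one band: smooth with a Gaussian kernel, then
    halve the resolution, repeating for each level."""
    pyr = [band]
    for _ in range(levels - 1):
        band = zoom(gaussian_filter(band, sigma), 0.5, order=1)
        pyr.append(band)
    return pyr

band = np.add.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
pyr = gaussian_pyramid(band)   # 16x16 -> 8x8 -> 4x4
```

Upsampling the coarse levels back to full size and stacking them with the original gives the multiscale feature cube the classifier consumes.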
  • Unsupervised Bayesian Classification of a Hyperspectral Image Based on the
           Spectral Mixture Model and Markov Random Field
    • Authors: Yuan Fang;Linlin Xu;Junhuan Peng;Honglei Yang;Alexander Wong;David A. Clausi;
      Pages: 3325 - 3337
      Abstract: Typical unsupervised classification of hyperspectral imagery (HSI) uses a Gaussian mixture model to determine intensity similarity of pixels. However, the existence of mixed pixels in HSI tends to reduce the effectiveness of the similarity measure and leads to large classification errors. Since a semantic class is always dominated by a particular endmember, a mixed pixel can be better classified by identifying the dominant endmember. By exploiting the spectral mixture model (SMM) that describes the endmember-abundance pattern of mixed pixels, the discriminative ability of HSI can be enhanced. A Bayesian classification approach is presented for spatial–spectral HSI classification, where the data likelihood is built upon the SMM, and the label prior is based on a Markov random field (MRF). The new approach has three key characteristics. First, instead of using intensity similarity, the new approach uses the abundance-endmember pattern of each pixel and classifies a pixel by its dominant endmember. Second, to integrate the SMM into a Bayesian framework, a data likelihood is designed based on the SMM to reflect the influence of the dominant endmember on the conditional distribution of the mixed pixel given the class label. Third, the resulting maximum a posteriori problem is solved by the expectation–maximization (EM) algorithm, in which the E-step adopts a graph-cut approach to estimate the class labels, and the M-step adopts a purified-means approach to estimate the endmembers. Experiments on both simulated and real HSIs demonstrate that the proposed method can exploit the spatial–spectral information of HSI to achieve high accuracy in unsupervised classification of HSI.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
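The key idea above, classifying a mixed pixel by its dominant endmember, can be sketched by unmixing each pixel with nonnegative least squares and taking the argmax abundance (the paper embeds this in a full Bayesian/MRF framework; this shows only the per-pixel unmixing step):

```python
import numpy as np
from scipy.optimize import nnls

def dominant_endmember(pixel, E):
    """Unmix one pixel against endmember matrix E (bands x endmembers) with
    nonnegative least squares; label it by the largest abundance."""
    abundances, _ = nnls(E, pixel)
    return int(np.argmax(abundances)), abundances

E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])                 # two toy endmembers over 3 bands
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]      # mixed pixel dominated by endmember 0
label, abund = dominant_endmember(pixel, E)
```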
  • Toward Ultralightweight Remote Sensing With Harmonic Lenses and
           Convolutional Neural Networks
    • Authors: Artem V. Nikonorov;Maksim V. Petrov;Sergei A. Bibikov;Pavel Y. Yakimov;Viktoriya V. Kutikova;Yuriy V. Yuzifovich;Andrey A. Morozov;Roman V. Skidanov;Nikolay L. Kazanskiy;
      Pages: 3338 - 3348
      Abstract: In this paper, we describe our advances in manufacturing a 256-layer 7-μm thick harmonic lens with 150 and 300 mm focal distances combined with color correction, deconvolution, and a feedforwarding deep learning neural network capable of producing images approaching photographic visual quality. While reconstruction of images taken with diffractive optics was presented in previous works, this paper is the first to use deep neural networks during the restoration step. The level of imaging quality we achieved with our imaging system can facilitate the emergence of ultralightweight remote sensing cameras for nano- and pico-satellites, and for aerial remote sensing systems onboard small UAVs and solar-powered airplanes.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • A Volumetric Fusing Method for TLS and SFM Point Clouds
    • Authors: Wei Li;Cheng Wang;Dawei Zai;Pengdi Huang;Weiquan Liu;Chenglu Wen;Jonathan Li;
      Pages: 3349 - 3357
      Abstract: A terrestrial laser scanning (TLS) point cloud acquired from a given ground view is incomplete because of severe occlusion and self-occlusion. The models reconstructed by aligning the cross-source point clouds [TLS and structure-from-motion (SFM) point clouds] provide a more complete large-scale outdoor scene. However, because of differences in nonrigid deformation, stratified redundancy of alignment is inevitable and ubiquitous. Therefore, this paper presents a volumetric fusing method for cross-source three-dimensional reconstructions. To eliminate the stratification of aligned cross-source point clouds, we propose a graph-cuts method with boundary constraints for blending the two cross-source point clouds. Then, to reduce the gaps that exist in the blending results, we develop a progressive migration method combined with the local average direction of normal vectors to smooth the unconnected boundary. Finally, experimental results demonstrate the effectiveness of eliminating stratification with the proposed blending algorithm, and the progressive migration method achieves a smooth connection in the boundary of the blended point clouds.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
  • Quantifying the Carbon Storage in Urban Trees Using Multispectral ALS Data
    • Authors: Xinqu Chen;Chengming YE;Jonathan Li;Michael A. Chapman;
      Pages: 3358 - 3365
      Abstract: This paper presents a new method for quantifying the carbon storage in urban trees using multispectral airborne laser scanning (ALS) data. The method takes full advantage of multispectral ALS range and intensity data and shows the feasibility of quantifying the carbon storage in urban trees. Our method consists of four steps: multispectral ALS data processing, vegetation isolation, dendrometric parameter estimation, and carbon storage modeling. Our results suggest that ALS-based dendrometric parameter estimation and allometric models can yield consistent performance and accurate estimates. A citywide carbon storage estimate is derived in this paper for the Town of Whitchurch–Stouffville, Ontario, Canada, by extrapolating the values within the study area to the entire city based on the specific proportion of each land cover type. The proposed method reveals the potential of multispectral ALS data for land cover mapping and carbon storage estimation at the individual-tree level.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
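Allometric carbon modeling of the kind used above typically maps ALS-derived dendrometric parameters (diameter at breast height, tree height) to biomass by a power law, then takes carbon as roughly half of dry biomass. The coefficients below are illustrative placeholders, not the species-specific values used in the study:

```python
def tree_carbon_kg(dbh_cm, height_m, a=0.05, b=2.0, c=1.0, carbon_fraction=0.5):
    """Generic allometric model: dry biomass = a * DBH^b * H^c (kg), with
    carbon taken as carbon_fraction of biomass. All coefficients are
    hypothetical placeholders for illustration."""
    biomass_kg = a * dbh_cm ** b * height_m ** c
    return carbon_fraction * biomass_kg

# A 30 cm DBH, 15 m tall tree under these placeholder coefficients:
carbon = tree_carbon_kg(30.0, 15.0)
```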
  • A Fuzzy Shape-Based Anomaly Detection and Its Application to
           Electromagnetic Data
    • Authors: Vyron Christodoulou;Yaxin Bi;George Wilkie;
      Pages: 3366 - 3379
      Abstract: The problem of data analytics in real-world electromagnetic (EM) applications poses many algorithmic constraints. The processing of big datasets, the requirement of prior knowledge, unknown locations of anomalies, and variable-length patterns are all issues that need to be addressed. In this application, we address those issues by proposing a fuzzy shape-based anomaly detection method. This method is evaluated against 12 benchmark datasets containing different kinds of anomalies and provides promising results based on a new performance metric that takes into account the distance between the predicted and actual anomalies. Real-world EM data from the Earth's magnetic field are provided by the SWARM satellite constellation for regions in China, Greece, and Peru. The seismic events that occurred in those regions are compared against the SWARM data. Moreover, three other methods, GrammarViz, HOT-SAX, and CUSUM-EWMA, are also applied to further investigate possible linkages of EM anomalies with seismic events. The findings further our understanding of real-world data analytics in EM data and seismicity. Some proposals regarding the limitations of the available real-world datasets are also presented.
      PubDate: Sept. 2018
      Issue No: Vol. 11, No. 9 (2018)
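A distance-aware performance metric of the kind described above can be sketched as a tolerance-based hit rate; this is one simple reading of the idea, not the paper's exact metric:

```python
import numpy as np

def detection_hit_rate(pred_idx, true_idx, tol):
    """Score predicted anomaly positions by their distance to the nearest
    actual anomaly: a prediction within `tol` samples counts as a hit."""
    pred = np.asarray(pred_idx, float)[:, None]
    true = np.asarray(true_idx, float)[None, :]
    nearest = np.abs(pred - true).min(axis=1)   # distance to closest true anomaly
    return float((nearest <= tol).mean())

# Predictions at 10 and 55 land within 5 samples of true anomalies (12, 60);
# the prediction at 200 is a false alarm.
score = detection_hit_rate([10, 55, 200], [12, 60], tol=5)
```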