PFG : Journal of Photogrammetry, Remote Sensing and Geoinformation Science
Hybrid journal (may contain Open Access articles)
ISSN (Print): 2512-2789; ISSN (Online): 2512-2819
Published by Springer-Verlag
• Reports
• PubDate: 2019-10-16

• Quality of Height Models Covering Large Areas
• Abstract: Digital height models (DHM) are a basic requirement for several applications. The generation of DHMs is time-consuming and expensive, but several height models with nearly worldwide coverage are available free of charge or commercially, and their number and quality are growing. For practical use it is important to have information about the quality: the accuracy and its characteristics, areas with problems, the height definition (digital surface model with the height of the visible surface, or digital terrain model with heights of the bare ground), the resolution (point spacing and correlation of neighbouring height values), homogeneity and availability. Morphologic details are also important, depending on the point spacing and relative height accuracy. An overview of the freely available and commercial height models with nearly worldwide coverage and satisfactory point spacing and accuracy is given. Changes in this area are fast, so only the current status can be described. A truly worldwide DHM arrived with the commercial WorldDEM, based on TanDEM-X interferometric synthetic aperture radar (InSAR), from which a reduced, free version with 3 arcsec point spacing is now also available as TDM90. In addition to offering the highest accuracy in this field, with 10 m point spacing, WorldDEM also shows very good morphologic detail, with the exception of city areas. Quality files belonging to WorldDEM include information about problem areas. Some height models have been analysed by comparison with satisfactory reference height models. Their characteristics are described, as well as the problems of accuracy specification with different accuracy figures and the dependency on terrain inclination and other parameters, to allow a selection corresponding to individual requirements. This also includes horizontal shifts or even rotations and higher-degree systematic differences between the reference and the analysed DHM.
PubDate: 2019-10-14
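The dependency of height accuracy on terrain inclination mentioned in the abstract is commonly specified in the form of the Koppe equation; the coefficients a and b below are model-specific constants determined empirically, not values taken from this abstract:

```latex
\sigma_Z = a + b \cdot \tan\alpha
```

where \(\sigma_Z\) is the vertical standard deviation and \(\alpha\) the terrain slope, so accuracy figures quoted for a DHM are only comparable when the slope distribution of the test area is known.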

• Biomass Assessment of Agricultural Crops Using Multi-temporal
Dual-Polarimetric TerraSAR-X Data
• Abstract: The biomass of three agricultural crops, winter wheat (Triticum aestivum L.), barley (Hordeum vulgare L.), and canola (Brassica napus L.), was studied using multi-temporal dual-polarimetric TerraSAR-X data. The radar backscattering coefficient sigma nought of the two polarization channels HH and VV was extracted from the satellite images. Subsequently, combinations of HH and VV polarizations were calculated (e.g. HH/VV, HH + VV, HH × VV) to establish relationships between SAR data and the fresh and dry biomass of each crop type using multiple stepwise regression. Additionally, the semi-empirical water cloud model (WCM) was used to account for the effect of crop biomass on radar backscatter data. The potential of the Random Forest (RF) machine learning approach was also explored. A split-sampling approach (70% training, 30% testing) was carried out to validate the stepwise models, the WCM and RF. The multiple stepwise regression method using dual-polarimetric data was capable of retrieving the biomass of the three crops, particularly the dry biomass, with R2 > 0.7, without any external input variable such as information on the (actual) soil moisture. A comparison of the RF technique with the WCM reveals that RF remarkably outperformed the WCM in biomass estimation, especially for the fresh biomass: R2 > 0.68 for the fresh biomass estimation of the different crop types using RF, whereas the WCM showed R2 < 0.35. For the dry biomass, however, the results of the two approaches resembled each other.
PubDate: 2019-10-01
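As a rough illustration of the regression setup described above: the backscatter values and the biomass relationship below are synthetic, and a plain least-squares fit over all candidate polarization combinations stands in for the paper's stepwise procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-scale backscatter for the two polarization channels.
hh = rng.uniform(0.05, 0.4, size=100)
vv = rng.uniform(0.05, 0.4, size=100)

# Candidate predictors built from channel combinations, as in the abstract.
X = np.column_stack([hh, vv, hh / vv, hh + vv, hh * vv])

# Invented "dry biomass" driven mostly by the ratio channel, plus noise.
biomass = 2.0 + 1.5 * (hh / vv) + rng.normal(0.0, 0.05, size=100)

# Ordinary least squares over all candidate predictors (a stepwise procedure
# would add/remove predictors one at a time based on significance tests).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, biomass, rcond=None)
pred = A @ coef

# Coefficient of determination R^2 of the fit.
ss_res = np.sum((biomass - pred) ** 2)
ss_tot = np.sum((biomass - biomass.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))
```

With the synthetic data the fit is nearly perfect; on real SAR data the abstract reports R2 > 0.7 for dry biomass.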

• Analyzing the Supply and Detecting Spatial Patterns of Urban Green Spaces
via Optimization
• Abstract: Green spaces in urban areas offer great possibilities for recreation, provided that they are easily accessible. An ideal city should therefore offer large green spaces close to where its residents live. Although there are several measures for the assessment of urban green spaces, the existing measures usually focus either on the total size of all green spaces or on their accessibility. Hence, in this paper, we present a new methodology for assessing green-space provision and accessibility in an integrated way. The core of our methodology is an algorithm based on linear programming that computes an optimal assignment between residential areas and green spaces. In a basic setting, it assigns green spaces of a prescribed size exclusively to each resident, such that an objective function that, in particular, considers the average distance between residents and assigned green spaces is optimized. We contribute a detailed presentation of how to engineer an assignment-based method such that it yields plausible results (e.g., by considering distances in the road network) and becomes efficient enough for the analysis of large metropolitan areas (e.g., we were able to process an instance of Berlin with about 130,000 polygons representing green spaces, 18,000 polygons representing residential areas, and 6 million road segments). Furthermore, we show that the optimal assignments resulting from our method enable a subsequent analysis that reveals both interesting global properties of a city and spatial patterns. For example, our method allows us to identify neighbourhoods with a shortage of green spaces, which will help spatial planners in their decision-making.
PubDate: 2019-10-01
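The core assignment problem described above can be illustrated on a toy instance. The distances below are invented, and brute-force enumeration stands in for the paper's linear program, which is what makes the method scale to metropolitan areas; enumeration does not.

```python
from itertools import permutations

# Invented distances (e.g. metres in a road network) from 3 residential
# blocks (rows) to 3 green spaces (columns).
dist = [
    [400, 900, 700],
    [650, 300, 800],
    [500, 850, 250],
]

# Brute-force the one-to-one assignment minimising the total distance.
best = min(permutations(range(3)),
           key=lambda p: sum(dist[i][p[i]] for i in range(3)))
total = sum(dist[i][best[i]] for i in range(3))
print(best, total)  # optimal block -> green-space mapping and total distance
```

Here each block is simply matched to its nearest space; in general the optimum must trade off blocks competing for the same green space, which is exactly what the LP formulation handles.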

• A Novel Method for Digitalisation of Test Fields by Laser Scanning
• Abstract: In this article, a novel method without media disruption for the measurement of photogrammetric test fields using a laser tracker is presented. The new approach is precise and versatile in its application. It relies on image processing of the quasi-continuous measurements of a hand-held laser scanner and laser tracker combination. The field of useful applications is large; in this article, we show the benefit in the field of camera calibration. Essential for highly accurate photogrammetric measurements is a careful calibration, since all cameras have optical distortions due to the manufacturing processes of the lens. The calibration can be done, e.g., using a test field. In some cases, 3D coordinates of the control points are necessary. These coordinates are often determined in advance by photogrammetry itself and tacheometric angle measurements. A scale, e.g. a subtense bar, usually needs to be included, which increases the measuring effort. The method is based on the measured 3D point cloud of a test field. With this technique, not only the centres of all control points are accessible; other geometric features can be chosen too. Since the point cloud consists of many single point measurements, every control point determination already has high statistical redundancy. The 3D coordinates of every single control point are extracted from the point cloud, making an additional scale obsolete. Presently, the position accuracy is $$\le 50\,{\upmu }{\text {m}}$$ (MPE), which is mainly limited by the laser scanner used in this article. The technique presented here can be applied to all kinds of shapes, dimensions, materials, numbers and arrangements of control points. Furthermore, it is much faster and easier to handle than the angle measurements of the tacheometer.
PubDate: 2019-09-19
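A minimal sketch of the statistical-redundancy point made above: many scanned surface points give a redundant estimate of a control point's centre. The target geometry and noise level below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate ~2000 laser-scanner hits on a flat circular target of radius
# 10 mm centred at (100, 50) mm, with 0.05 mm noise per coordinate.
n = 2000
r = 10.0 * np.sqrt(rng.uniform(0.0, 1.0, n))   # uniform over the disk
phi = rng.uniform(0.0, 2.0 * np.pi, n)
pts = np.column_stack([100.0 + r * np.cos(phi), 50.0 + r * np.sin(phi)])
pts += rng.normal(0.0, 0.05, pts.shape)

# The centre estimate is simply the centroid; its standard error shrinks
# with sqrt(n), which is the redundancy the abstract refers to.
centre = pts.mean(axis=0)
err = np.hypot(centre[0] - 100.0, centre[1] - 50.0)
print(centre, err)
```

The recovered centre is far more precise than any single scanned point, since averaging thousands of hits suppresses the per-point noise.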

• Bilateral Kernel Extraction from PCA for Classification of Hyperspectral
Images
• Abstract: The improved spatial and spectral resolution of advanced hyperspectral (HS) sensors results in images with rich information per pixel. Hence, the development of efficient spatial–spectral feature extraction (FE) techniques is crucial for a proper characterization of objects on the ground. In this paper, an attempt has been made to develop a simple yet effective spatial–spectral FE algorithm. In the proposed approach, the following steps are performed. First, Principal Component Analysis (PCA) is applied to the original hyperspectral image (HSI) and the most significant principal component is extracted. Then, the Bilateral Filter (BF), which acts as an edge-preserving filter, is applied to the selected principal component to extract a kernel for each pixel of the HSI. The extracted kernel bank is then applied to the original HSI. Since, in general, the principal component image is edge-informative and the BF is an edge-preserving filter, the extracted kernel bank can be applied to the original HSI to extract spatial–spectral features. Finally, with the help of these features, the performance of a Support Vector Machine (SVM) classifier is evaluated. The proposed approach is validated on three popular hyperspectral data sets, namely Indian Pines, Pavia University, and Botswana. The experimental results reveal that learning the edge information from a reference image (in the present context, the principal component image) is quite essential, rather than applying the edge-preserving filter directly to the HSI. Theoretically this holds true, as unique edge (ground) information exists for an HSI, while in reality the edges vary due to variation in reflectance over the bands.
PubDate: 2019-09-06
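A compact sketch of the kernel-extraction idea described above, on an invented 8x8x20 cube with a vertical edge between two materials: PCA yields the first principal-component image, and a bilateral kernel (spatial Gaussian times range Gaussian on the PC values) is built for one pixel. The cube, patch size and filter parameters are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny synthetic hyperspectral cube: 8x8 pixels, 20 bands, with a vertical
# edge between two materials.
cube = (np.where(np.arange(8)[None, :, None] < 4, 0.2, 0.8)
        + rng.normal(0.0, 0.01, (8, 8, 20)))

# PCA on the spectra: first principal-component image.
X = cube.reshape(-1, 20)
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = (Xc @ vt[0]).reshape(8, 8)

# Bilateral kernel for the centre pixel of a 5x5 patch of PC1:
# spatial Gaussian times range Gaussian on the PC1 values.
patch = pc1[1:6, 1:6]
yy, xx = np.mgrid[-2:3, -2:3]
sigma_s, sigma_r = 1.5, 0.5 * pc1.std()
w = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
w *= np.exp(-(patch - patch[2, 2])**2 / (2 * sigma_r**2))
kernel = w / w.sum()  # normalised edge-aware kernel, reusable on every band
print(kernel.shape, round(kernel.sum(), 6))
```

Because the range term is computed on PC1 rather than per band, the same edge-aware kernel can then be applied to all bands of the original cube, which is the point of the method.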

• An Investigation into the Location of the Crashed Aircraft Through the Use
of Free Satellite Images
• Abstract: Remote sensing data and techniques are utilized for various purposes, including natural disasters such as earthquakes as well as flood mapping and detection. This research aims to use freely available Landsat 8 images for investigating crashed airplanes such as MH370. Overall, approximately 300 Landsat images with less than 10% cloud cover within a defined period were processed and utilized through the Google Earth Engine platform. Because the materials and the colour of an airplane body differ from the area in which a plane crash occurs, a template of a plane's shape should differ from its surroundings in albedo, temperature and vegetation-index values. The research demonstrates the potential of Landsat 8 data: in particular, the NDVI, the albedo and the reflectance of band 4 are capable of distinguishing between a plane and the surrounding green area. Our result confirms that during the research period there was no plane at the location, and further that there is no remote-sensing evidence to justify the presence of the crashed MH370 at the site as earlier reported.
PubDate: 2019-09-01
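The NDVI contrast exploited above is easy to reproduce. The reflectance values below are invented; for Landsat 8 OLI the red and NIR channels are bands 4 and 5.

```python
import numpy as np

# Invented red (band 4) and NIR (band 5) surface reflectances for three
# pixels: vegetation, a bright non-vegetated target, vegetation.
red = np.array([0.05, 0.30, 0.06])
nir = np.array([0.45, 0.35, 0.50])

ndvi = (nir - red) / (nir + red)

# Dense green vegetation gives high NDVI; bare or artificial surfaces such
# as metal debris give values near zero, which is the contrast the study
# uses to screen candidate pixels.
candidates = ndvi < 0.2
print(np.round(ndvi, 2), candidates)
```

Only the middle pixel would be flagged for closer inspection; the study applies the same logic, together with albedo and band 4 reflectance, over roughly 300 scenes.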

• Reports
• PubDate: 2019-09-01

• An Object-Based Shadow Detection Method for Building Delineation in
High-Resolution Satellite Images
PubDate: 2019-09-01

• Classification of ALS Point Clouds Using End-to-End Deep Learning
• Abstract: Deep learning, referring to artificial neural networks with multiple layers, is widely used for classification tasks in many disciplines, including computer vision. The most popular type is the Convolutional Neural Network (CNN), commonly applied to 2D image data. However, CNNs are difficult to adapt to irregular data like point clouds. PointNet, on the other hand, has enabled the derivation of features based on the geometric distribution of a set of points in nD-space using a neural network. We use PointNet on multiple scales to automatically learn a representation of local neighbourhoods in an end-to-end fashion, optimised for semantic labelling of 3D point clouds acquired by Airborne Laser Scanning (ALS). The results are comparable to those using manually crafted features, suggesting a successful representation of these neighbourhoods. On the ISPRS 3D Semantic Labelling benchmark, we achieve 80.6% overall accuracy, a mid-field result. Investigation of a bigger dataset, namely the 2011 ALS point cloud of the federal state of Vorarlberg, shows overall accuracies of up to 95.8% over large-scale built-up areas. Lower accuracy is achieved for the separation of low vegetation and ground points, presumably because of invalid assumptions about the distribution of classes in space, especially in high alpine regions. We conclude that the end-to-end approach, which allows training on a large variety of classification problems without the need for expert knowledge about neighbourhood features, can also successfully be applied to single-point-based classification of ALS point clouds.
PubDate: 2019-09-01
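The core PointNet operation referred to above, a shared per-point MLP followed by symmetric max-pooling over the point set, can be sketched in a few lines. The weights here are random stand-ins, not a trained network, and the layer sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# A local neighbourhood of 64 ALS points with (x, y, z) coordinates.
points = rng.normal(size=(64, 3))

# Shared MLP: the SAME weights are applied to every point independently.
W1, b1 = rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 128)), np.zeros(128)
h = np.maximum(points @ W1 + b1, 0.0)      # ReLU
h = np.maximum(h @ W2 + b2, 0.0)

# Symmetric aggregation: taking the max over points makes the descriptor
# invariant to point ordering, which is what lets PointNet handle
# irregular point clouds without a grid structure.
descriptor = h.max(axis=0)
print(descriptor.shape)
```

Running this block at several neighbourhood radii and concatenating the descriptors gives the multi-scale representation the abstract describes.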

• A Combined PCA-SIs Classification Approach for Delineating Built-up Area
from Remote Sensing Data
• Abstract: The aim of this study is to develop a method for delineating built-up areas based on remote sensing data. The proposed method evaluated 13 spectral indices (SIs) commonly used in assessing land use and land cover (LULC) and selected meaningful indices through a principal component analysis (PCA) and a spectral separability analysis. These indices are combined into a built-up delineation index set (BDIS). The development was demonstrated using the example of the built-up area of Qena city, Egypt. The method was evaluated against ground-truth data and one recently developed global product using confusion-matrix statistics. The BDIS was computed from indices showing both a high loading on the most relevant principal components and high separability. Subsequently, the selected indices, i.e., the transformed difference vegetation index (TDVI), the band ratio for built-up area (BRBA), and the new built-up area index (NBI), were used as input variables for the supervised classification procedures. The results show an increase in the accuracy of the built-up area delineation using the BDIS: the overall, producer's and user's accuracies and the Kappa coefficient were 96.3%, 96%, 93%, and 0.946, respectively. The results and a comparison with the global human settlement layer provided by the European Joint Research Centre also verified the usefulness of the proposed method for utilizing Landsat 8 OLI imagery in delineating built-up areas, providing a comprehensive view of built-up areas at the local scale.
PubDate: 2019-09-01
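Two of the selected indices can be computed directly from reflectance values. The definitions below follow commonly published forms (TDVI after Bannari et al., NBI as red times SWIR1 over NIR), which may differ in detail from the paper's implementation, and the reflectances are invented.

```python
import numpy as np

# Invented Landsat 8 OLI surface reflectances for two pixels:
# index 0 vegetated, index 1 built-up.
red   = np.array([0.06, 0.22])
nir   = np.array([0.45, 0.25])
swir1 = np.array([0.20, 0.30])

# Transformed difference vegetation index (TDVI), as commonly defined.
tdvi = 1.5 * (nir - red) / np.sqrt(nir**2 + red + 0.5)

# New built-up index (NBI), as commonly defined: red * SWIR1 / NIR.
nbi = red * swir1 / nir

print(np.round(tdvi, 3), np.round(nbi, 3))
```

The vegetated pixel scores high on TDVI and low on NBI, and the built-up pixel the reverse, which is the separability the BDIS selection exploits.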

• A Delaunay Triangulation Algorithm Based on Dual-Spatial Data Organization
• Abstract: Existing Delaunay triangulation algorithms for LiDAR data can only guarantee the efficiency of a certain reconstruction step, but cannot guarantee the overall efficiency. This paper presents a Delaunay triangulation algorithm which integrates two existing approaches to improve the overall efficiency of LiDAR data triangulation. The proposed algorithm consists of four steps: (1) dividing the point cloud into grid cells, (2) sorting the point cloud using a KD-tree, (3) triangulating the point cloud and exporting inactive triangles from main memory, and (4) scheduling the above steps. The proposed algorithm was tested using three LiDAR data sets, which were used to compare it with the Streaming Delaunay algorithm with respect to both time efficiency and memory usage. Results from the experiments suggest that the proposed algorithm is three to four times faster than Streaming Delaunay while using nearly the same memory space.
PubDate: 2019-06-01
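Step (1) above, dividing the point cloud into grid cells, amounts to hashing each point by its cell index. A stdlib-only sketch with an invented cell size and points:

```python
from collections import defaultdict

def bin_points(points, cell):
    """Group 2D points into square grid cells of side length `cell`."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell), int(y // cell))].append((x, y))
    return cells

pts = [(0.5, 0.5), (0.9, 0.1), (2.5, 0.5), (2.1, 2.9)]
cells = bin_points(pts, cell=1.0)
# Triangulation can then proceed cell by cell, finalising ("exporting")
# triangles once no unprocessed cell can still violate their Delaunay
# property, which keeps the in-memory triangle set small.
print(sorted(cells))
```

This spatial ordering is what lets steps (3) and (4) bound memory usage while streaming through large point clouds.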

• Duisburg 1566: Transferring a Historic 3D City Model from Google Earth
into a Virtual Reality Application
• Abstract: Physical and digital city models are idealized and simplified representations of the spatial, social, economic and cultural structures of a city in a certain region for a specific (historical) timeframe. The presentation of historical city models as physical models in museums gives an overview of the urban situation at a given time at a specified scale, while visualisation on the Internet permits a playful immersion into the past of a city. Historic city/town models are ideally suited both for thorough multi-dimensional geometric documentation and for realistic interactive visualisation in immersive virtual reality (VR) applications. VR is increasingly used for visiting (historic) virtual places to enhance a visitor's experience by providing access to additional materials for review and deeper knowledge. Using today's 3D technologies, a virtual place is no longer just a presentation of geometric environments on the Internet: with features provided by game-industry tools, an interactive visualisation of objects can be achieved. In this paper, the conversion and adaptation of an existing virtual 3D model for a VR application is presented. The model of the city of Duisburg, Germany, represents the year 1566; it exists as a physical model and was digitised for a Google Earth representation in 2007. The workflow from data acquisition using laser scanning in 2007 to the visualisation in 2018 using the HTC Vive VR system, including the necessary programming for user navigation and interaction, is described. Furthermore, the use of such a VR visualisation for historic city models, including simultaneous use by multiple end-users, is discussed.
PubDate: 2019-06-01

• Finite-Element Approach to Camera Modelling and Calibration
• Abstract: This paper focuses on the finite-element (FE) method of camera calibration. The FE method enables the modelling of systematic error effects, including those which cannot be recovered by standard modelling procedures, e.g., those based on Brown's distortion model. The FE approach to camera modelling has been published a number of times previously; however, some important aspects were not sufficiently addressed in this earlier research, and the available computing power was too low to test the finite-element method with a high-resolution FE grid. The proposed FE implementation is fully independent of any polynomial model and includes correction of the distance-dependent distortion effect. Besides modelling effects such as lens distortion and sensor unflatness, the approach also accommodates the calibration of non-perspective lenses such as fisheye lenses. In addition to introducing the proposed FE calibration method, this paper addresses the related issues of sufficient target density, correction-pattern smoothness and FE grid size. It also reports on experimental testing of the new FE implementation using the acceptance test procedure of the German VDI guideline 2634. Two different cameras were calibrated within the acceptance tests to analyse the impact of the sensor size and the field of view of the lens. For comparison with the FE method, both data sets were also processed using standard photogrammetric software (AICON 3D Studio). The results have proven the ability of the proposed FE modification to recover systematic effects and to model ultra-wide field-of-view lenses while achieving highly accurate measurements. The method is able to model the distance-dependent distortion effect, but requires a very large number of observations, which may be expensive and difficult to establish in practice. The proposed method, characterised by a high-resolution grid, is mostly intended for laboratory calibration of highly stable camera systems and not for on-the-job calibration, where the target density would likely not be sufficiently large.
PubDate: 2019-06-01
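The finite-element idea, storing corrections at regular grid nodes and interpolating between them instead of evaluating a polynomial distortion model, can be sketched with bilinear interpolation. The 2x2 correction grid and the query points below are invented.

```python
import numpy as np

def grid_correction(grid, x, y):
    """Bilinear interpolation of a per-node correction grid at (x, y),
    where integer coordinates coincide with grid nodes."""
    i, j = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - i, x - j
    return ((1 - fy) * (1 - fx) * grid[i, j]
            + (1 - fy) * fx * grid[i, j + 1]
            + fy * (1 - fx) * grid[i + 1, j]
            + fy * fx * grid[i + 1, j + 1])

# Corrections (e.g. in pixels) at four neighbouring FE nodes.
corr = np.array([[0.10, 0.30],
                 [0.20, 0.40]])

# At the cell centre the bilinear value is the mean of the four nodes.
print(grid_correction(corr, 0.5, 0.5))
```

Because each grid node carries its own free correction value, the model can represent local effects such as sensor unflatness that a global polynomial cannot; the price, as the abstract notes, is the large number of observations needed to estimate all nodes.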

• Pixel-Based Classification of Hyperspectral Images Using Convolutional
Neural Networks
• Abstract: Recent progress in geographical information systems, remote sensing (RS) and data analytics enables us to acquire and process large amounts of Earth observation data. Convolutional neural networks (CNN) are frequently used for the classification of multi-dimensional images with high accuracy. In this paper, we test CNNs for the classification of hyperspectral RS data. Our proposed CNN is a multi-layered neural network architecture tailored to classify objects based on pixel-wise spatial information using the spectral bands of hyperspectral imagery (HSI). We use benchmark satellite imagery from four different HSI datasets for classification with the proposed architecture. Our results are compared with support vector machine (SVM) and extreme learning machine (ELM) algorithms, which are frequently used machine learning techniques in RS data classification. Moreover, we also provide a comparison with state-of-the-art CNN approaches that have been used for HSI classification. Our results show improvements in classification accuracy of up to 6% on average over SVM and ELM, while up to 4% improvement is observed in comparison with two recently proposed CNN architectures. At the same time, the processing time of our proposed CNN is significantly lower.
PubDate: 2019-06-01
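Pixel-wise CNN classification of HSI boils down to convolutions along the spectral axis. A minimal numpy sketch with random stand-in filters (not the paper's trained architecture; the 103-band spectrum matches the Pavia University dataset, one of the common HSI benchmarks):

```python
import numpy as np

rng = np.random.default_rng(4)

def conv1d_valid(signal, kernels):
    """Valid-mode 1D convolution of one spectrum with a bank of kernels."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(signal, k)
    return windows @ kernels.T  # shape: (bands - k + 1, n_kernels)

spectrum = rng.uniform(0.0, 1.0, 103)   # one pixel's 103-band spectrum
filters = rng.normal(size=(8, 5))       # 8 learnable filters of width 5

features = np.maximum(conv1d_valid(spectrum, filters), 0.0)  # ReLU
print(features.shape)                   # per-pixel spectral feature map
```

In a full network, several such layers followed by pooling and a dense softmax layer produce the per-pixel class label.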

• Report
• PubDate: 2019-06-01

• Modelling End-of-Season Soil Salinity in Irrigated Agriculture Through
Multi-temporal Optical Remote Sensing, Environmental Parameters, and In
Situ Information
• Abstract: Accurate information on soil salinity levels enables remediation actions in long-term operating irrigation systems with malfunctioning drainage and shallow groundwater (GW), as they are widespread throughout the Aral Sea Basin (ASB). Multi-temporal Landsat 5 data combined with GW levels and potentials, elevation and relative topographic position, and soil (clay content) parameters were used for modelling bulk electromagnetic induction (EMI) at the end of the irrigation season. Random forest (RF) regression was applied to predict in situ observations of 2008–2011 which originated from a cotton research station in Uzbekistan. Validation, i.e. median statistics from 100 RF runs each with a holdout of 20% of the samples, revealed that mono-temporal RS data (R2: 0.1–0.18, RMSE: 16.7–24.9 mS m−1) underperformed compared with multi-temporal RS data (R2: 0.29–0.45; RMSE: 15.1–20.9 mS m−1). Combinations of multi-temporal RS data with environmental parameters achieved the highest accuracies (R2: 0.36–0.50, RMSE: 13.2–19.9 mS m−1). Besides RS data recorded at the initial peaks of the major irrigation phases, terrain and GW parameters turned out to be important variables for the model. RF preferred neither raw data nor spectral indices known to be suitable for detecting soil salinity. Unexplained variance components result from missing environmental variables, but also from processes not considered in the data. A calibration of the EMI for electrical conductivity and the standard soil salinity classification returned an overall accuracy of 76–83% for the period 2008–2011. The presented indirect approach, together with the in situ calibration of the EMI data, can support accurate mapping of soil salinity at the end of the season, at least in the type of irrigation systems found in the ASB.
PubDate: 2018-12-01
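The validation scheme described above, median statistics over 100 random 20% holdouts, can be sketched with a stand-in linear model. The data are synthetic, not the Uzbek field observations, and ordinary least squares replaces the random forest purely to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic predictors (e.g. RS bands, GW level, terrain) and EMI response.
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) + rng.normal(0, 0.5, 200)

scores = []
for _ in range(100):
    idx = rng.permutation(200)
    train, test = idx[:160], idx[160:]              # 80% / 20% split
    A = np.column_stack([np.ones(160), X[train]])
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    pred = np.column_stack([np.ones(40), X[test]]) @ coef
    ss_res = np.sum((y[test] - pred) ** 2)
    ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
    scores.append(1.0 - ss_res / ss_tot)

print(round(float(np.median(scores)), 3))  # median R^2 over 100 holdouts
```

Reporting the median over many random splits, as the study does, damps the influence of any single lucky or unlucky holdout.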

• Model-Based Selection of Hyperspectral EnMAP Channels for Optimal
Inversion of Radiative Transfer Models for Agricultural Crops
• Abstract: Model-based Selection of Hyperspectral EnMAP Channels for Optimal Inversion of Radiative Transfer Models in Agriculture. Satellite-based hyperspectral Earth observation data combined with physically based radiative transfer models have strong potential to support sustainable agriculture by providing accurate spatial and temporal information on important vegetation biophysical and biochemical variables such as leaf chlorophyll content. To meet this goal, possible error sources in the modelling should be minimized; thus, the capability of a model to reproduce the measured spectral signals has to be tested before applying any retrieval algorithm. For an exemplary demonstration, the PROSAIL model was employed to emulate the setup of the future EnMAP hyperspectral sensor in the visible and near-infrared (VNIR) spectral region with a 6.5 nm spectral sampling distance. Model uncertainties were determined in order to subsequently exclude those wavelengths with the highest mean absolute error (MAE) between model simulation and spectral measurement. For this purpose, data from two campaigns were exploited: (1) from Nebraska–Lincoln (maize and soybean) and (2) from Munich–North-Isar (maize and winter wheat). A significant increase in accuracy for leaf chlorophyll content (LCC, µg cm−2) estimation was obtained, with relative RMSE decreasing from 26% (full VNIR range) to 15% (optimized VNIR) for maize and from 77% to 29% for soybean. We therefore recommend applying a specific model-error threshold (MAE ~ 0.01) to stabilize the retrieval of crop biochemical variables.
PubDate: 2018-12-01
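The band-exclusion rule described above, dropping channels whose model-vs-measurement mean absolute error exceeds roughly 0.01, reduces to a simple mask. The spectra below are invented, not PROSAIL output, and the failure pattern is placed in arbitrary bands for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

n_bands = 88                                  # e.g. a VNIR channel set
measured = rng.uniform(0.05, 0.5, size=(30, n_bands))  # 30 field spectra

# Simulated spectra: close to the measurements except in a few bands where
# the model systematically fails (an invented failure pattern).
simulated = measured + rng.normal(0.0, 0.002, measured.shape)
simulated[:, 40:45] += 0.03                   # poorly modelled bands

mae = np.mean(np.abs(simulated - measured), axis=0)
keep = mae <= 0.01                            # model-error threshold
print(int(keep.sum()), np.where(~keep)[0])
```

Only the well-modelled channels survive the mask and enter the inversion, which is what stabilizes the chlorophyll retrieval in the study.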

• Noise Filtering in High-Resolution Satellite Images Using Composite
Multiresolution Transforms
• Abstract: This contribution proposes a multiresolution analysis (MRA)-based composite technique for image restoration by noise filtering in satellite images. Multiresolution techniques provide a coarse-to-fine and scale-invariant decomposition of images for analysis and interpretation, and MRA methods handle noise effectively because of their multiscale nature. This study presents a scheme based on the combination of wavelet-, contourlet- and curvelet-based transforms as an effective tool for noise filtering in satellite images. The proposed method is applied to the problem of restoring an image from noisy data, and the effects of denoising are compared. Several comparison experiments with state-of-the-art noise-filtering schemes are conducted. The composite approach of curvelet and wavelet is found to be more effective than the others based on a set of evaluation measures such as peak signal-to-noise ratio, mean-squared error, edge-enhancing index and the mean-to-standard-deviation ratio across edges. The results are illustrated using high-resolution satellite data, such as QuickBird and WorldView-2 images. Such high-resolution images are more likely to be noisy due to the short observation time over the target, in contrast to images from low-resolution sensors.
PubDate: 2018-12-01
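Peak signal-to-noise ratio, one of the evaluation measures listed above, is straightforward to compute; the two tiny images below are invented.

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = np.mean((reference - restored) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

reference = np.full((4, 4), 0.5)
restored = reference + 0.1          # uniform 0.1 error -> MSE = 0.01
print(psnr(reference, restored))    # 10 * log10(1 / 0.01) = 20 dB
```

Higher PSNR means the restored image is closer to the reference, which is how the composite curvelet-wavelet scheme is ranked against the other filters.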

• Reports
• PubDate: 2018-12-01
