Abstract: In recent years, targeted spraying technology, which was proposed to solve the problems of pesticide waste and environmental pollution caused by traditional spraying methods, has been successfully applied in orchards. In street scenes with a variety of object classes, it is challenging to detect tree crowns, which limits the application of targeted spraying for street trees. Two-dimensional (2D) light detection and ranging (LiDAR) sensors have been widely used in targeted spraying to monitor the presence of tree crowns. Considering a mobile laser scanning (MLS) system with a single 2D LiDAR sensor in push-broom mode, this paper proposes a pointwise method for street tree crown detection from MLS point clouds by using a grid index and local features. First, an efficient two-level neighbourhood search method is proposed to obtain the spherical neighbourhood of a single point by using the grid index of the MLS point clouds. Subsequently, a set of local statistical features, including width features, depth features, elevation features, intensity features, echo number features, dimensionality features and a density feature, is extracted from the spherical neighbourhood. Finally, a supervised learning algorithm called boosting is used to automatically fuse these features and generate a pointwise tree crown detector from a labelled training set. An MLS point cloud with 15,134,000 points is captured from both sides of a 136.5 m street, and the cloud contains buildings, lanes, sidewalks, benches, street lights, bicycles, traffic signs, grids, trees, bushes, turf areas, parterres, and pedestrians. The estimated Bayesian errors of single-feature approaches range from 6.23 to 36.09%, and the error rate of the tree crown detector composed of all features is less than 0.73%, with a recall rate of over 98.30% and a precision of over 99.13%. The experimental results show that the proposed method can provide an online, fine and accurate protocol for targeted spraying. PubDate: 2022-06-23
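The abstract does not detail the two-level neighbourhood search, but a grid-index spherical neighbourhood query of the general kind it describes can be sketched as follows (a minimal Python illustration; the function names, the uniform cell size, and the single-level candidate lookup are assumptions, not the authors' implementation).

```python
# Minimal sketch of a grid-index spherical neighbourhood query (illustrative only;
# the paper's two-level search is not fully specified in the abstract).
import numpy as np
from collections import defaultdict

def build_grid_index(points, cell_size):
    """Map each voxel cell (i, j, k) to the indices of the points it contains."""
    index = defaultdict(list)
    cells = np.floor(points / cell_size).astype(int)
    for i, cell in enumerate(map(tuple, cells)):
        index[cell].append(i)
    return index

def spherical_neighbourhood(points, index, cell_size, query, radius):
    """Return indices of points within `radius` of `query`, using the grid index
    to restrict the distance test to candidate cells overlapping the sphere."""
    lo = np.floor((query - radius) / cell_size).astype(int)
    hi = np.floor((query + radius) / cell_size).astype(int)
    candidates = []
    for i in range(lo[0], hi[0] + 1):
        for j in range(lo[1], hi[1] + 1):
            for k in range(lo[2], hi[2] + 1):
                candidates.extend(index.get((i, j, k), []))
    candidates = np.array(candidates, dtype=int)
    if candidates.size == 0:
        return candidates
    d = np.linalg.norm(points[candidates] - query, axis=1)
    return candidates[d <= radius]

# Example usage with random points:
# pts = np.random.rand(10000, 3)
# idx = build_grid_index(pts, cell_size=0.5)
# nbrs = spherical_neighbourhood(pts, idx, 0.5, pts[0], radius=0.3)
```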
Abstract: A pair of binocular cameras for navigation is mounted onboard the Chang’e-4 lunar rover. Although the cameras were calibrated before launch, it is necessary to calibrate them again after landing on the moon, especially the external parameters. In this article, an image of a solar panel containing parallel-line features is used for recalibration. According to the collinearity equations relating an image point, object point and projection centre, the algebraic relationship between the slope k and intercept t parameters of a line in the image and the external parameters of the cameras is deduced. Through this algebraic relationship, we propose an external parameter recalibration method based on an image that includes parallel lines on a plane in the object space. Three experiments were carried out to evaluate the effectiveness and reliability of the proposed method. PubDate: 2022-06-02
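For readers unfamiliar with the collinearity condition mentioned above, the standard textbook form is reproduced below (a generic formulation; the sign and rotation-matrix conventions, and hence the exact derivation of the line parameters k and t, may differ from the paper).

```latex
x - x_0 = -f\,\frac{r_{11}(X - X_S) + r_{21}(Y - Y_S) + r_{31}(Z - Z_S)}
                   {r_{13}(X - X_S) + r_{23}(Y - Y_S) + r_{33}(Z - Z_S)}, \qquad
y - y_0 = -f\,\frac{r_{12}(X - X_S) + r_{22}(Y - Y_S) + r_{32}(Z - Z_S)}
                   {r_{13}(X - X_S) + r_{23}(Y - Y_S) + r_{33}(Z - Z_S)}
```

Here (x_0, y_0, f) are the interior orientation parameters, (X_S, Y_S, Z_S) is the projection centre, and r_ij are the elements of the rotation matrix defined by the exterior orientation angles; the slope and intercept of an image line follow from inserting the points of an object-space line into these equations.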
Abstract: Visual systems are receiving increasing attention in underwater applications. While the photogrammetric and computer vision literature has so far largely targeted shallow water applications, deep sea mapping research has recently also come into focus. The majority of the seafloor, and of Earth’s surface, is located in the deep ocean below 200 m depth and is still largely uncharted. Here, on top of the general image quality degradation caused by water absorption and scattering, additional artificial illumination of the survey areas is mandatory, as they otherwise reside in permanent darkness where no sunlight reaches. This creates unintended non-uniform lighting patterns in the images and non-isotropic scattering effects close to the camera. If not compensated properly, such effects dominate seafloor mosaics and can obscure the actual seafloor structures. Moreover, cameras must be protected from the high water pressure, e.g. by housings with thick glass ports, which can lead to refractive distortions in images. Additionally, no satellite navigation is available to support localization. All these issues render deep sea visual mapping a challenging task, and most of the methods and strategies developed so far cannot be directly transferred to the seafloor at several kilometers depth. In this survey we provide a state-of-the-art review of deep ocean mapping, starting from existing systems and challenges, and discussing shallow and deep water models and corresponding solutions. Finally, we identify open issues for future lines of research. PubDate: 2022-04-20
Abstract: The idea of the wisdom of the crowd is that integrating multiple estimates of a group of individuals provides an outcome that is often better than most of the underlying estimates or even better than the best individual estimate. In this paper, we examine the wisdom of the crowd principle on the example of spatial data collection by paid crowdworkers. We developed a web-based user interface for the collection of vehicles from rasterized shadings derived from 3D point clouds and executed different data collection campaigns on the crowdsourcing marketplace microWorkers. Our main question is: how large must the crowd be so that the quality of the outcome fulfils the quality requirements of a specific application? To answer this question, we computed precision, recall, F1 score, and geometric quality measures for different crowd sizes. We found that increasing the crowd size improves the quality of the outcome. This improvement is quite large at the beginning and gradually decreases with larger crowd sizes. These findings confirm the wisdom of the crowd principle and help to find an optimal crowd size that is, in the end, a compromise between data quality and the cost and time required to perform the data collection. PubDate: 2022-04-06 DOI: 10.1007/s41064-022-00202-2
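As a reminder of the quality measures compared across crowd sizes, a minimal computation is sketched below (standard definitions; the counts and variable names are illustrative, not taken from the paper).

```python
# Minimal sketch of precision, recall, and F1 score for an aggregated crowd result
# compared against a reference dataset (illustrative counts only).
def precision_recall_f1(true_positives, false_positives, false_negatives):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: hypothetical detection counts for one crowd size
# p, r, f1 = precision_recall_f1(true_positives=92, false_positives=5, false_negatives=8)
```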
Abstract: In this study, Urmia lake and its basin, which are vital regions in the northwest of Iran, were monitored using satellite data and modeling methods. Monthly precipitation was computed using the TRMM satellite dataset. Terrestrial Water Storage (TWS), evaporation, temperature, and the TWS Anomaly (TWSA) were estimated from the GLDAS dataset and the GRACE missions. Moreover, the Jason satellite altimetry series and MODIS were used to assess the lake Water Level (WL) and area variations. These seven parameters were estimated from April 2002 to June 2019. This study adopted and evaluated four deep-learning methods based on feed-forward and recurrent architectures for data modeling and, subsequently, predicting the water area variations. According to the obtained results, the Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) models performed poorly in predicting the lake area, while the Multi-Layer Perceptron (MLP) and Long Short-Term Memory (LSTM) models produced results close to the real variations of the Urmia lake area. Taking Mean Absolute Error, Mean Relative Error, Root Mean Squared Error (RMSE), and the correlation coefficient (r) as evaluation parameters, LSTM achieved the best values of 175.07 km², 18.87%, 231.7 km², and 0.83, respectively. The results also indicate that LSTM is more accurate when predicting variations in critical situations. PubDate: 2022-04-04 DOI: 10.1007/s41064-022-00203-1
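The four evaluation parameters reported above follow common definitions; a minimal sketch is given below (illustrative only, with assumed variable names; the relative-error variant used in the paper may differ).

```python
# Minimal sketch of the reported evaluation metrics for predicted vs. observed lake area.
import numpy as np

def evaluate(predicted, observed):
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    mae = np.mean(np.abs(predicted - observed))                    # km^2
    mre = np.mean(np.abs(predicted - observed) / observed) * 100   # %, one common definition
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))           # km^2
    r = np.corrcoef(predicted, observed)[0, 1]                     # correlation coefficient
    return mae, mre, rmse, r
```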
Abstract: A correction to this paper has been published: https://doi.org/10.1007/s41064-021-00156-x PubDate: 2022-04-01
Abstract: Assessing the architecture of Norway spruce (Picea abies (L.) Karst.) with terrestrial remote sensing is challenging due to its typically dense, evergreen canopy. On the other hand, this species is among the most vulnerable to the effects of climate change. Advanced crown variables may serve as an early warning of tree drought and wind risk, but they are still scarce and hardly available for the vast majority of forests because they are costly and time-consuming to measure in the field. In this study, we used single-image photogrammetry (SIP) based on a high-resolution smartphone camera. Our method was used to assess the architecture of spruce trees growing under two contrasting forest settings: a low-density urban forest (190–668 trees per ha) and an intermediate-density managed forest (636–1018 trees per ha), outside of the native range of Norway spruce, in North-Western Poland. For trees ranging from 18 to 50 cm in diameter at breast height (DBH), we obtained a mean error of ca. 1 cm (3%) in SIP DBH measurements, with little bias (− 0.3%). A principal component analysis based on the relative tree- and crown-level variables revealed two independent trait dimensions explaining 83.9% of the total variance. The axes were driven by tree slenderness and by crown proportions; the former provides a key to disentangling the spruce architectures of the two stands. Overall, our results show that spruce architecture may be quickly and reliably measured with SIP using a smartphone application. PubDate: 2022-04-01 DOI: 10.1007/s41064-022-00201-3
Abstract: Knowledge about tree species distribution is important for forest management and for modeling and protecting biodiversity in forests. Methods based on images are inherently limited to the forest canopy. Airborne lidar data provide information about the trees’ geometric structure, as well as about trees beneath the upper canopy layer. In this paper, the potential of two deep learning architectures (PointCNN, 3DmFV-Net) for the classification of four different tree classes is evaluated using a lidar dataset acquired at the Bavarian Forest National Park (BFNP) in a leaf-on situation with a maximum point density of about 80 pts/m². Especially in the case of the BFNP, dead wood plays a key role in forest biodiversity. Thus, the presented approaches are applied to the combined classification of living and dead trees. A total of 2721 single trees were delineated in advance using a normalized cut segmentation. The trees were manually labeled into four tree classes (coniferous, deciduous, standing dead tree with crown, and snag). Moreover, a multispectral orthophoto provided additional features, namely the Normalized Difference Vegetation Index. PointCNN with 3D points, laser intensity, and multispectral features resulted in a test accuracy of up to 87.0%. This highlights the potential of deep learning on point clouds in forestry. In contrast, 3DmFV-Net achieved a test accuracy of 73.2% for the same dataset using only the 3D coordinates of the laser points. The results show that the fusion of lidar and multispectral data is invaluable for the differentiation of the tree classes. Classification accuracy increases by up to 16.3 percentage points when adding features generated from the multispectral orthophoto. PubDate: 2022-03-30 DOI: 10.1007/s41064-022-00200-4
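The Normalized Difference Vegetation Index used as an additional feature follows its standard definition; a minimal per-pixel sketch is given below (the band layout and the per-tree aggregation are assumptions, not the authors' pipeline).

```python
# Minimal sketch of the NDVI feature derived from a multispectral orthophoto.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index per pixel, NDVI = (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Each segmented tree could then, for instance, be attributed with the mean NDVI of the
# orthophoto pixels it covers and this value used as an extra per-point feature.
```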
Abstract: A novel concept of camera modelling for underwater 3D measurements based on stereo camera utilisation is introduced. The geometrical description of the ray course subject to refraction in underwater cameras is presented under the assumption of conditions that are typically satisfied or can be achieved approximately. Possibilities of simplification are shown, which allow an approximation of the ray course by classical pinhole modelling. It is shown how the expected measurement errors can be estimated, as well as their influence on the expected 3D measurement result. Final processing of the 3D measurement data according to the accuracy requirements is performed using several kinds of refinement. For example, calibration parameters can be refined, or systematic errors can be decreased by subsequent compensation with suitable error correction functions. Experimental data from simulations and real measurements obtained by two different underwater 3D scanners are presented and discussed. If the inverse image magnification is larger than about one hundred, the remaining errors caused by refraction effects can usually be neglected and the classical pinhole model can be used for stereo camera-based underwater 3D measurement systems. PubDate: 2022-02-22 DOI: 10.1007/s41064-022-00195-y
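The classical pinhole model that, under the stated conditions, approximates the refracted ray course can be sketched as follows (a generic formulation with assumed symbol names; it omits the refraction refinements discussed in the paper).

```python
# Minimal sketch of an ideal pinhole projection (no refraction or distortion terms).
import numpy as np

def project_pinhole(X_world, R, t, f, principal_point):
    """Project a 3D object point into the image plane of an ideal pinhole camera."""
    X_cam = R @ X_world + t                          # transform into the camera frame
    x = f * X_cam[0] / X_cam[2] + principal_point[0]  # perspective division
    y = f * X_cam[1] / X_cam[2] + principal_point[1]
    return np.array([x, y])
```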
Abstract: Road information plays a fundamental role in application fields such as map updating, traffic management, and road monitoring. Extracting road features from remote sensing images is a frontier issue in the remote sensing field and one of its most challenging research topics. This research therefore systematically reviews the deep learning technology applied to road extraction from remote sensing images and summarizes the existing theories and methods. According to the annotation types and learning methods used, these approaches can be divided into three categories: fully supervised, weakly supervised and unsupervised learning. Then, the datasets and performance evaluation metrics related to road extraction from remote sensing images are summarized, and on this basis, the effects of common road extraction methods are analysed. Finally, suggestions and prospects for the development of road extraction are proposed. PubDate: 2022-02-15 DOI: 10.1007/s41064-022-00194-z
Abstract: Water transparency measured using a Secchi disk is an important water quality indicator influenced by various biotic and abiotic processes in coastal and marine ecosystems. Understanding the role of this important indicator over large coastal environments requires synoptic measurements through ocean color satellites, such as the Moderate-Resolution Imaging Spectroradiometer (MODIS) and the Medium-Resolution Imaging Spectrometer (MERIS). In this study, we evaluated the performance of different atmospheric correction algorithms and the suitability of different pixel extraction methods for modeling Secchi disk depth (ZSD) over the North Arabian Gulf (NAG) waters using MODIS and MERIS imagery. This evaluation yielded various ZSD models of different accuracy. The most accurate MODIS and MERIS ZSD models had R² values of 0.75 (RMSE = 80 cm) and 0.78 (RMSE = 74 cm), respectively. These models can be used to accurately map the ZSD of NAG waters, which would provide a better understanding of NAG water quality dynamics. Although these models were designed for NAG waters, they can be applied to the entire Arabian Gulf and probably to other similar waters, given the availability of training data. The key factor limiting the efficiency of these and previous models is the success of atmospheric correction algorithms in retrieving reliable remote sensing reflectance over different water bodies. PubDate: 2022-01-31 DOI: 10.1007/s41064-021-00189-2
Abstract: High-resolution hyperspectral remote sensing can provide large-scale mapping of the pure spectra along with the perturbed/mixed spectra of minerals within a scene. Among the high-computational “per-pixel” methods, machine learning is a well-known automated data science technique and is the most flexible for mapping new spectra or perturbed/mixed spectra of minerals as individual categories. Since limited mineral samples often only partly represent the complex mineralogy of a large site, distributed mapping needs to be conducted using a scalable method that works even with a small number of training samples. In this regard, we introduce an integrated extreme learning machine (IELM) method that qualitatively maps the pure spectra and perturbed/mixed spectra of every surface type. This mapping is further integrated into a quantitative analysis of the perturbation/mixing nature of the pure spectra. The large-scale mapping of the Jahazpur mineralised belt was conducted with a MapReduce model and the IELM method using AVIRIS-NG (Airborne Visible-Infrared Imaging Spectrometer-Next Generation) observations. In the validation process, the IELM method achieves 98.08% accuracy with high signal-to-noise ratio (SNR) AVIRIS-NG data and 96.54% with low-SNR synthetic data in the presence of 269 training samples. The IELM method shows better efficacy than a spectral feature fitting approach in the assessment. The analyses of perturbed and mixed spectra indicate that an additive spectral variability model and a linear mixing model fit the data of our investigation. These analytical findings can be further extended to a “sub-pixel” method (e.g. spectral unmixing) to reach applications such as lithology or host-rock mapping. PubDate: 2022-01-28 DOI: 10.1007/s41064-021-00188-3
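A basic (non-integrated) extreme learning machine, on which the IELM builds, can be sketched as follows (the standard single-hidden-layer formulation with random input weights and analytic output weights; the integration and MapReduce aspects of the paper are not reproduced here).

```python
# Minimal extreme learning machine sketch: random, fixed hidden-layer weights and
# output weights solved in closed form by least squares.
import numpy as np

def train_elm(X, Y, n_hidden, seed=0):
    """X: (n_samples, n_features) spectra; Y: (n_samples, n_classes) one-hot labels."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                  # random biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                   # analytic output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    scores = np.tanh(X @ W + b) @ beta
    return np.argmax(scores, axis=1)               # predicted class index per pixel
```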
Abstract: Remote sensing scene classification deals with the problem of classifying the land use/cover of a region from images. To predict the development and socioeconomic structure of cities, the status of land use in regions is tracked by national mapping agencies. Many of these agencies use land-use types that are arranged in multiple levels. In this paper, we examined the efficiency of a hierarchically designed convolutional neural network (CNN)-based framework that is suitable for such arrangements. We use the NWPU-RESISC45 dataset for our experiments and arrange it in a two-level nested hierarchy. Each node in the designed hierarchy is trained using a DenseNet-121 architecture. We provide a detailed empirical analysis to compare the performance of this hierarchical scheme and its non-hierarchical counterpart, together with the individual model performances. We also evaluated the performance of the hierarchical structure statistically to validate the presented empirical results. The results of our experiments show that although the individual classifiers for different sub-categories in the hierarchical scheme perform considerably well, the accumulation of classification errors in the cascaded structure prevents its classification performance from exceeding that of the non-hierarchical deep model. PubDate: 2022-01-27 DOI: 10.1007/s41064-022-00193-0
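The cascaded two-level scheme, and the way level-1 errors propagate, can be illustrated with the following minimal sketch (the classifier objects are placeholders, not the paper's DenseNet-121 models).

```python
# Minimal sketch of two-level cascaded classification: the coarse prediction
# selects which fine-level classifier is consulted.
def classify_hierarchical(image, coarse_model, fine_models):
    coarse_label = coarse_model.predict(image)              # level-1 class
    fine_label = fine_models[coarse_label].predict(image)   # level-2 class within that branch
    return coarse_label, fine_label

# Note how a level-1 error propagates: the wrong fine-level classifier is consulted,
# which is the error accumulation discussed in the abstract.
```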
Abstract: Airborne remote sensing with optical sensor systems is an essential tool for a variety of environmental monitoring applications. Depending on the size of the area to be monitored, either unmanned aerial vehicles (UAVs) or manned aircraft are more suitable. For survey areas starting at several square kilometers, piloted aircraft remain the preferred carrier platform. However, a specific class of manned aircraft is often not considered: the gyrocopter-type ultralight aircraft. These aircraft are less expensive to operate than conventional fixed-wing aircraft. Additionally, they are highly maneuverable, offer a high payload and long endurance, and thus perfectly fill the niche between UAVs and conventional aircraft. Therefore, the authors have developed a modular and easy-to-use sensor carrier system, the FlugKit, to temporarily convert an AutoGyro MTOsport gyrocopter into a full-fledged aerial remote sensing platform, mainly for vegetation monitoring. Accordingly, various suitable optical sensor systems in the visible (VIS), near-infrared (NIR), and longwave infrared (LWIR) ranges were developed explicitly for this carrier system. This report provides a deeper insight into the individual components of this remote sensing solution based on a gyrocopter, as well as into application scenarios already carried out with the system. PubDate: 2022-01-17 DOI: 10.1007/s41064-021-00187-4
Abstract: This paper proposes a multiple-CNN architecture with multiple input features, combined with multiple LSTM layers and densely connected convolutional layers, for temporal wind analyses. The designed architecture is called the Multiple features, Multiple Densely Connected Convolutional Neural Network with Multiple LSTM Architecture, i.e. MCLT. A total of 58 features in the input layers of the MCLT are designed using wind speed and direction values. These empirical features are based on percentage difference, standard deviation, correlation coefficient, eigenvalues, and entropy, and efficiently describe the wind trend. Two successive LSTM layers are used after the four densely connected convolutional layers of the MCLT. Moreover, the LSTM has memory units that utilise learnt features from the current as well as previous outputs of the neurons, thereby enhancing the learning of patterns in the temporal wind dataset. The densely connected convolutional layers also allow features of other convolutional layers to be learnt. The MCLT is used to predict future dominant speed and direction classes for the wind datasets of Stuttgart and the Netherlands. Using the MCLT with 58 features, the maximum and minimum overall accuracies for dominant speed prediction are 99.1% and 94.9% (for Stuttgart) and 99.9% and 97.5% (for the Netherlands), and for dominant direction prediction they are 99.9% and 94.4% (for Stuttgart) and 99.6% and 96.4% (for the Netherlands), respectively. The MCLT, with multiple features at different levels, i.e. the input layers, the convolutional layers, and the LSTM layers, therefore shows promising results for the prediction of dominant speed and direction. Thus, this work is useful for proper wind utilisation and improved environmental planning. These analyses would also help in performing Computational Fluid Dynamics (CFD) simulations using wind speed and direction measured at a nearby meteorological station, for devising a new set of appropriate inflow boundary conditions. PubDate: 2021-12-19 DOI: 10.1007/s41064-021-00185-6
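A few statistical descriptors of the kind listed above (standard deviation, correlation coefficient, entropy) can be computed per time window as sketched below (illustrative assumptions; the exact formulation of the 58 MCLT features is not specified in the abstract).

```python
# Minimal sketch of window-wise statistical wind descriptors (not the MCLT feature set).
import numpy as np

def wind_window_features(speed, direction, bins=16):
    """speed in m/s, direction in degrees [0, 360), both as 1D arrays for one time window."""
    std_speed = np.std(speed)                                   # variability of wind speed
    corr = np.corrcoef(speed, direction)[0, 1]                  # speed-direction correlation
    hist, _ = np.histogram(direction, bins=bins, range=(0.0, 360.0))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))             # spread of wind directions
    return std_speed, corr, entropy
```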