Please help us test our new pre-print finding feature by giving the pre-print link a rating. A 5 star rating indicates the linked pre-print has the exact same content as the published article.
Abstract: Geographic Information Systems (GIS) are available as stand-alone desktop applications and as web platforms for vector- and raster-based geospatial data processing and visualization. While each approach offers certain advantages, both have limitations that motivate the development of hybrid systems to increase user productivity for interactive analytics on multidimensional gridded data. Web-based applications are platform-independent; however, they require the internet to communicate with servers for data management and processing, which raises performance, data-integrity, handling, and transfer issues for massive multidimensional raster data. Stand-alone desktop applications, on the other hand, can usually function without relying on the internet, but they are platform-dependent, making their distribution and maintenance difficult. This paper presents RasterJS, a hybrid client-side web library for geospatial data processing that is built on the Progressive Web Application (PWA) architecture to operate seamlessly in both Online and Offline modes. A packaged version of the system, built with the Web Bundles API, is also presented for offline access and distribution. RasterJS uses the latest web technologies supported by modern web browsers, including the Service Workers API, Cache API, IndexedDB API, Notifications API, Push API, and Web Workers API, to bring client-side geospatial analytics to large-scale raster data. Each of these technologies acts as a component of RasterJS, collectively providing users with a similar experience in both Online and Offline modes for geospatial analyses such as flow direction calculation with hydro-conditioning, raindrop flow tracking, and watershed delineation.
A large-scale watershed-analysis case study demonstrates the capabilities and limitations of the library. The framework also has the potential to serve other use cases that rely on raster processing, including land use, agriculture, soil erosion, transportation, and population studies. PubDate: 2023-03-20
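One of the raster analyses named above, flow direction calculation, can be illustrated with a minimal D8 sketch. This is a generic textbook D8 implementation, not RasterJS code; the function name and the small test grid are illustrative.

```python
# Minimal D8 flow-direction sketch on a tiny elevation grid.
# Each cell drains to its steepest-downslope neighbour (8 directions);
# diagonal drops are divided by sqrt(2) to account for the longer distance.

# D8 neighbour offsets: E, SE, S, SW, W, NW, N, NE
D8_OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1),
              (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def d8_flow_direction(dem):
    """Return, per cell, the index (0-7) of the steepest downslope
    neighbour, or None for pits and cells with no lower neighbour."""
    rows, cols = len(dem), len(dem[0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best_drop, best_dir = 0.0, None
            for i, (dr, dc) in enumerate(D8_OFFSETS):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    dist = 2 ** 0.5 if dr and dc else 1.0
                    drop = (dem[r][c] - dem[nr][nc]) / dist
                    if drop > best_drop:
                        best_drop, best_dir = drop, i
            out[r][c] = best_dir
    return out
```

Raindrop flow tracking and watershed delineation can then follow these per-cell pointers downstream or upstream, respectively.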
Abstract: The geophysical properties of snow are essential to the study of the mountain snow/glacier system and can serve as indicators for related hazards. In this study, an attempt has been made to model the geophysical properties of snow, such as dielectric constant, density, and wetness, using the Sentinel–1 dual-polarized SLC product. A state-of-the-art inversion model has been developed using Sentinel–1-derived Stokes parameters to estimate the snow dielectric constant, which is subsequently used to model density and wetness employing Looyenga's and Denoth's equations. The proposed inclusion of Stokes parameters in the inversion model significantly improves the predictions. The modeled and in-situ snow dielectric, density, and wetness show a good coefficient of determination (R2 > 0.7) with 95% confidence. Against field-measured values, the estimated root mean squared errors (RMSE) of snow dielectric, density, and wetness are 0.26, 0.08 g/cm3, and 0.84, respectively. Comparison of the proposed model with several existing models reflects its good efficiency in predicting snow geophysical parameters. PubDate: 2023-03-20
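The dielectric-to-density step can be sketched by inverting a common form of Looyenga's mixing formula for dry snow. The formula variant and the constants below are standard textbook values, assumed here rather than taken from the paper.

```python
# Hedged sketch: dry-snow density from the retrieved dielectric constant
# via Looyenga's mixing rule, eps_s^(1/3) = 1 + (rho/rho_ice)*(eps_ice^(1/3) - 1).

EPS_ICE = 3.17   # relative permittivity of pure ice (assumed value)
RHO_ICE = 0.917  # density of pure ice, g/cm^3 (assumed value)

def looyenga_density(eps_snow):
    """Dry-snow density (g/cm^3) for a given relative permittivity."""
    return RHO_ICE * (eps_snow ** (1 / 3) - 1.0) / (EPS_ICE ** (1 / 3) - 1.0)
```

As a sanity check, a permittivity of 1 (air) gives density 0, and the permittivity of pure ice recovers the ice density.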
Abstract: For decades, earthquake prediction has been the focus of research using various methods and techniques. It is difficult to predict the size and location of the next earthquake after one has occurred. However, machine learning (ML)-based approaches have shown promising results in earthquake prediction over the past few years. We therefore compiled 31 studies on earthquake prediction using ML algorithms published from 2017 to 2021, with the aim of providing a comprehensive review of previous research. This study covered different geographical regions globally. Most of the models analysed focus on predicting earthquake magnitude, trend, and occurrence. A comparison of different types of seismic indicators and of algorithm performance is summarized to identify the best seismic indicators paired with a high-performance ML algorithm. Towards this end, we discuss the best-performing ML algorithm for earthquake magnitude prediction and suggest a potential algorithm for future studies. PubDate: 2023-03-17
Abstract: Radar satellite imagery has been widely used to obtain soil moisture (SM) estimates of high accuracy. However, obtaining the best accuracy of SM estimates requires investigating the contribution of the vegetation canopy to the accuracy of retrieved SM. We used the Integral Equation Model (IEM) coupled with the Water Cloud Model (WCM) (herein referred to as the IWCM) to estimate surface SM using radar and multi-spectral images. Accordingly, Sentinel-1 and Sentinel-2 images corresponding to calibration (2017) and validation (2016) periods were used to obtain VV-polarized radar data (where the radar transmits and receives vertical polarization), Leaf Area Index (LAI), and Normalized Difference Vegetation Index (NDVI) at the SM measurement stations. SM measurements from eleven stations in the Walnut Gulch watershed, USA, were used as in situ data. Investigating the relationship between the simulation error and various variables revealed that the error depends on the precipitation received on the day before the soil moisture measurement. Next, two data-driven models (DDMs), i.e., Support Vector Machine (SVM) and Regression Tree (RT), were used to estimate SM at the stations using the radar signal and vegetation indices as input features. The RT model showed the best performance, with validation errors of 0.071 m3/m3 and 0.074 m3/m3 for the LAI- and NDVI-based models, respectively. Based on the RT results, precipitation of the previous day, followed by the Julian date, had the highest importance in predicting soil moisture. The RT model was consequently used to calculate regionalized estimates for the watershed due to its higher accuracy in estimating SM at the measurement stations. The results indicated the feasibility of using DDMs to obtain regionalized surface SM estimates at the watershed scale. PubDate: 2023-03-17
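The lagged-precipitation feature the abstract highlights (previous-day rainfall as a predictor of soil moisture) can be sketched as a simple join on dates. The record layout and function name here are hypothetical, not from the paper.

```python
# Illustrative sketch: pair each soil-moisture observation with the
# precipitation recorded on the previous day, defaulting to 0.0 when
# no record exists for that day.

def add_previous_day_precip(records):
    """records: list of dicts with 'date' (ordinal day number) and
    'precip'. Returns one previous-day precipitation value per record."""
    by_day = {r["date"]: r["precip"] for r in records}
    return [by_day.get(r["date"] - 1, 0.0) for r in records]
```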
Abstract: Phytoliths are microscopic SiO2-rich biominerals formed in the cellular system of many living plants and are often preserved in soils, sediments and artefacts. Their analysis contributes significantly to the identification and study of botanical remains in (paleo)ecological and archaeological contexts. Traditional identification and classification of phytoliths rely on human experience, and as such, an emerging challenge is to classify them automatically, to enhance data homogeneity among researchers worldwide and facilitate reliable comparisons. In the present study, a deep artificial neural network (NN) is implemented with the objective of detecting and classifying phytoliths extracted from modern wheat (Triticum spp.). The proposed methodology recognises four phytolith morphotypes: (a) Stoma, (b) Rondel, (c) Papillate, and (d) Elongate dendritic. For the learning process, a dataset of phytolith photomicrographs was created and allocated to training, validation and testing groups. Due to the limited size and low diversity of the dataset, an end-to-end encoder-decoder NN architecture is proposed, based on a pre-trained MobileNetV2 for the encoder part and a U-net for the segmentation stage. After parameterisation, training and fine-tuning, the proposed architecture is capable of classifying and localising the four classes of phytoliths in unknown images with high unbiased accuracy, exceeding 90%. The proposed methodology and corresponding dataset are quite promising for building up the capacity of phytolith classification within unfamiliar (geo)archaeological datasets, demonstrating remarkable potential towards automatic phytolith analysis. PubDate: 2023-03-14
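Segmentation quality of the kind reported here is commonly scored with per-class Intersection-over-Union; the sketch below is a generic reference implementation, not the authors' evaluation code.

```python
# Per-class Intersection-over-Union (IoU) over flattened label maps:
# |prediction ∩ truth| / |prediction ∪ truth| for one class.

def class_iou(pred, truth, cls):
    """IoU for one class over flat label lists; returns 1.0 when the
    class is absent from both prediction and ground truth."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 1.0
```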
Abstract: Weather prediction is a central topic in remote sensing for understanding natural disasters and their intensity at an early stage. In many cases, however, typical imaging models have yielded low forecasting rates. To overcome this problem, a novel buffalo-based Generalized Adversarial Cyclone Intensity Prediction System (BGACIPS) was designed for cyclone intensity prediction using space satellite images. The processed satellite images contained features such as rain, snow, tropical depression (T.Depression), thunderstorm (T.Storm), and cyclone. Initially, noise features were removed in the pre-processing module, and the refined data were passed to the classification layer. The features were then analysed, and the intensity of each feature and the cyclone stages were identified. The proposed design is implemented in the Python environment, and the improvement is analysed in terms of prediction exactness, mean errors, and error rate. The proposed BGACIPS achieves a lower error rate and higher prediction accuracy than the compared models. PubDate: 2023-03-14
Abstract: An authentic water consumption forecast is an auxiliary tool to support the management of water supply and demand in urban areas. Providing a highly accurate forecasting model depends heavily on the quality of the input data. Despite the advancement of technology, water consumption in some places is still recorded by operators, so such databases usually contain approximate and incomplete data. For this reason, the methods used to predict water demand should be able to handle the drawbacks caused by uncertainty in the dataset. In this regard, a structured hybrid approach was designed to cluster the customers and predict their water demand according to the uncertainty in the dataset. First, a fuzzy-based algorithm consisting of Forward-Filling, Backward-Filling, and Mean methods was proposed to impute the missing data. Then, a multi-dimensional time series k-means clustering technique was developed to group the consumers based on their consumption behavior, with the missing data estimated as fuzzy numbers. Finally, one forecasting model inspired by Long Short-Term Memory (LSTM) networks was adjusted for each cluster to predict the monthly water demand using the lagged demand and the temperature. This approach was applied to the water time series of residential consumers in Yazd, Iran, from January 2011 to November 2020. Based on performance evaluation in terms of the Root Mean Squared Error (RMSE), the proposed approach predicted the water demand of all clusters with an acceptable level of confidence. PubDate: 2023-03-10
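The three imputation ingredients the abstract names (forward fill, backward fill, series mean) can be sketched as follows; how the paper combines them into a fuzzy number is its contribution and is only hinted at by returning all three candidates.

```python
# For each missing value (None), produce the three candidate estimates
# the abstract mentions: forward fill, backward fill, and the series mean.
# Observed values pass through unchanged.

def impute_candidates(series):
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    out = []
    for i, v in enumerate(series):
        if v is not None:
            out.append(v)
            continue
        # nearest observed value before / after the gap, else the mean
        ff = next((series[j] for j in range(i - 1, -1, -1)
                   if series[j] is not None), mean)
        bf = next((series[j] for j in range(i + 1, len(series))
                   if series[j] is not None), mean)
        out.append((ff, bf, mean))
    return out
```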
Abstract: Enhancing the textural details in a satellite image plays a significant part in satellite image processing. Satellite images are rich in textural and very fine spatial detail. Failing to retrieve and enhance these critical details can lead to loss of information and hence poor results in the succeeding stages of satellite image processing. Correctly identifying the spatial and textural data in a satellite image is an effective way to preserve image information for a better-quality image. To this end, the textural details must first be distinguished so that effective image processing can be performed. This paper introduces Gabor filter-based parameter optimization for enhancing the textural and spatial information in the image. Manta Ray Foraging Optimization is adopted to modify the control parameters of the filter, addressing the algorithm's inadequacy in balancing local and global search. A self-adaptable Manta Ray optimization is proposed, which is shown to outperform traditional enhancement techniques such as the Bilateral filter and the Gabor filter optimized with Particle Swarm Optimization (PSO), Differential Evolution (DE), and Manta Ray Foraging Optimization (MRFO). The proposed method is compared with the traditional methods in terms of Peak Signal to Noise Ratio (PSNR), Feature Similarity Index (FSIM), Entropy, and computation time. The proposed method improved PSNR by 17.61%, FSIM by 7.47%, and entropy by 6.7%, and required the least CPU time. PubDate: 2023-03-08
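The first comparison metric, PSNR, has a standard definition worth stating explicitly; the sketch below assumes 8-bit images (peak value 255) and flat pixel lists.

```python
# Peak Signal-to-Noise Ratio: 10*log10(MAX^2 / MSE), in decibels.
# Identical images have zero MSE and, by convention here, infinite PSNR.
import math

def psnr(img_a, img_b, max_val=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```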
Abstract: Lithofacies identification is critical to energy exploration and reservoir evaluation. Machine learning provides a way to use logging data for intelligent lithofacies identification. However, labeled logging data are usually scarce, which makes the currently used supervised algorithms less effective, so semi-supervised methods have received attention from researchers. In this paper, we propose applying Tri-Training to lithofacies recognition. The framework uses Random Forest (RF), Gradient-Boosted Decision Trees (GBDT), and Support Vector Machine (SVM) as the baseline supervised classifiers and builds on the ideas of inductive semi-supervised learning and ensemble learning: the baseline classifiers are trained and iterated on unlabeled data to improve performance, and the final results are output in an ensemble paradigm. We used seven logging parameters from two wells as input and randomly divided the data 10 times for training and testing. With only five samples of each lithology, prediction accuracy improved by an average of 2.1% and 14.5% in the two wells compared to the baseline methods. In addition, we compared two commonly used semi-supervised methods, the label propagation algorithm (LPA) and Co-Training; the experimental results confirm that Tri-Training has better and more stable performance. The Tri-Training method in this paper can be effectively applied to lithofacies identification when labeled logging data are scarce. PubDate: 2023-03-07
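The core Tri-Training rule can be sketched independently of the base learners: a sample is pseudo-labelled for one classifier when the other two agree on it. Classifiers are plain callables here; the paper's RF/GBDT/SVM training and iteration logic is omitted.

```python
# One Tri-Training labelling round: for each unlabeled sample, if two
# classifiers agree, their shared label becomes a pseudo-label for the
# training set of the third classifier.

def tri_training_round(classifiers, unlabeled):
    """classifiers: three callables x -> label. Returns, per classifier,
    the (sample, label) pairs the other two classifiers agree on."""
    new_sets = [[], [], []]
    for x in unlabeled:
        preds = [clf(x) for clf in classifiers]
        for i in range(3):
            j, k = (i + 1) % 3, (i + 2) % 3
            if preds[j] == preds[k]:
                new_sets[i].append((x, preds[j]))
    return new_sets
```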
Abstract: The Uniaxial Compressive Strength (UCS) is an essential parameter in various fields (e.g., civil engineering, geotechnical engineering, mechanical engineering, and material sciences). Indeed, determining the UCS of carbonate rocks allows evaluation of their economic value. The relationship between UCS and numerous physical and mechanical parameters has been extensively investigated. However, these models lack accuracy, and regional, small samples negatively impact their reliability. The novelty of this work is the use of state-of-the-art machine learning techniques to predict the UCS of carbonate rocks using data collected from scientific studies conducted in 16 countries. The data reflect rock properties including Ultrasonic Pulse Velocity, density and effective porosity. Machine learning models including Random Forest, Multi Layer Perceptron, Support Vector Regressor and Extreme Gradient Boosting (XGBoost) are trained and evaluated in terms of prediction performance. Furthermore, hyperparameter optimization is conducted to ensure maximum prediction performance. The results showed that XGBoost performed best, with the lowest Mean Absolute Error (ranging from 17.22 to 18.79), the lowest Root Mean Square Error (ranging from 438.95 to 590.46), and coefficients of determination (R2) ranging from 0.91 to 0.94. The aim of this study was to improve the accuracy and reliability of models for predicting the UCS of carbonate rocks. PubDate: 2023-03-07
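For reference, the two regression metrics reported here (RMSE and R2) are defined as follows; these are the standard textbook formulas, not the authors' evaluation code.

```python
# RMSE: root of the mean squared residual.
# R^2:  1 - SS_res / SS_tot, i.e. variance explained by the prediction.

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

def r_squared(pred, obs):
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```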
Abstract: In this paper, an automatic aftershock forecasting system for China is presented. Based on a parameter-free historical analogy method, this system can produce a short-term aftershock forecast, including the seismic sequence type and the magnitude of the largest aftershock, within a few minutes after a major earthquake, and can further provide scientists and government agencies with a set of background information for consultation purposes. First, the system construction concept and operation framework are described; an evaluation of the forecast performance of the system is then conducted on earthquakes from 2019 to 2021 in mainland China. The results indicate that the sequence type classification precision reaches 83.5%, and the magnitude of more than 90% of the aftershocks is smaller than the upper-range forecast. This system is fast and easy to operate, and all the reports and maps can be produced approximately 5 min after earthquake occurrence. Practical use verifies that the application of this system has greatly improved the efficiency of post-earthquake consultation in mainland China. PubDate: 2023-03-07
Abstract: Accurate prediction of water inflows in mines is of great significance for safe mining production. To improve prediction accuracy, the main factors affecting mine water inflows were determined based on an analysis of the hydrological and geological conditions of the mine. Using the entropy method, the weight values of the factors affecting the water inflows were calculated, and non-linear regression fitting between the water inflows and the various factors was carried out using multiple regression theory and MATLAB function programming. Combined with the factor weights determined by the entropy method, a weighted non-linear regression prediction model for mine water inflows was established. The model not only accounts for the fact that mine water inflows are affected by multiple factors, but also reflects that the factors differ in importance. Comparison with a multiple linear regression prediction model and with measured water inflows shows that the weighted non-linear regression model can overcome the defects of existing methods, minimize the prediction error caused by a low degree of hydrological and geological exploration, and improve prediction accuracy. PubDate: 2023-03-06
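The entropy weight method used above can be sketched in a few lines: factors whose values are nearly uniform across observations carry high entropy and therefore little discriminating power, so they receive low weight. The toy data matrix in the test is illustrative only.

```python
# Entropy weight method: per-factor Shannon entropy of the normalized
# column, converted to a weight (1 - entropy) and renormalized.
import math

def entropy_weights(matrix):
    """matrix[i][j]: value of factor j for observation i (all > 0).
    Returns one weight per factor; higher entropy -> lower weight."""
    n, m = len(matrix), len(matrix[0])
    weights = []
    for j in range(m):
        col = [matrix[i][j] for i in range(n)]
        total = sum(col)
        probs = [v / total for v in col]
        entropy = -sum(p * math.log(p) for p in probs if p > 0) / math.log(n)
        weights.append(1.0 - entropy)
    total_w = sum(weights)
    return [w / total_w for w in weights]
```

A perfectly uniform factor gets weight 0; all weight flows to the factor that varies across observations.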
Abstract: Deep Learning (DL)-based downscaling has recently become a popular tool in the earth sciences. Multiple DL methods are routinely used to downscale coarse-scale precipitation data to produce more accurate and reliable estimates at local scales. Several studies have used dynamical or statistical downscaling of precipitation, but the availability of ground truth still hinders accuracy assessment. A key challenge in measuring such a method's accuracy is comparing the downscaled data to point-scale observations, which are often unavailable at such small scales. In this work, we carry out DL-based downscaling to estimate local precipitation using gridded data from the India Meteorological Department (IMD). To test the efficacy of different DL approaches, we apply SR-GAN and three other contemporary approaches (viz., DeepSD, ConvLSTM, and UNET) for downscaling and evaluate their performance. The downscaled data are validated against precipitation values at IMD ground stations. We find that the SR-GAN approach reproduces the original data reasonably well, as noted through MSE, variance statistics and the correlation coefficient (CC). The SR-GAN method outperforms the three other methods documented in this work (CC(SR-GAN) = 0.8806; CC(UNET) = 0.8399; CC(ConvLSTM) = 0.8311; CC(DeepSD) = 0.8037). A custom VGG network, used in the SR-GAN, is developed in this work using precipitation data. This DL method offers a promising alternative to existing statistical downscaling approaches. The superiority of the SR-GAN approach stems from its perceptual-loss concept, which overcomes the issue of overly smooth reconstruction and is consequently able to capture finer-scale details of the data. PubDate: 2023-03-03
Abstract: GNSS tomography is a method for the three-dimensional reconstruction of wet refractivity ( \(N_{w}\) ) in a set of voxels, each covering a specific part of the troposphere. The substantial assumption is the homogeneity of the atmosphere in each voxel over given time intervals, known as the time response of the model. Determining the optimal time resolution is one of the existing challenges in tomography of the Earth's atmosphere. We apply Empirical Orthogonal Functions (EOFs) to find an optimal time response for our tomographic model. To investigate our method, we compute the EOFs using the numerical atmospheric model available in our test area as the reference field on an already designed tomographic model. Using time resolutions of 30, 45, 60, 75, 90, 105 and 120 min, our EOF-based method suggests time periods of 60 to 75 min and 75 to 90 min as the time response on the two days (a dry day and a wet day) of our experiments, respectively. According to our analysis, because of the quality of our reference field, it is not possible to expect similarities better than 85% for the wet day and 93% for the dry day in the scattering of the \(N_{w}\) field between the reconstructed images and our reference model. PubDate: 2023-03-03
Abstract: In the geological field, recognizing rock thin section images under a microscope is of great significance to geological research and mineral resources exploration. Compared with other images, rock thin section microscope images have complex features and rich information. When classifying them, redundant information and features that are useless for the classification task degrade the model's performance. This paper therefore proposes a rock image classification algorithm based on a deep residual shrinkage network and an attention mechanism to suppress the useless information. The subnetwork for obtaining the threshold value is improved: global maximum pooling of features is added as an information representation, and the soft threshold function is improved by adding an attention-based weight coefficient to distinguish the importance of different features. Moreover, three rock thin section microscopic image classification algorithms fusing multidimensional information are designed, with the orthogonal polarized and single polarized images of the rock thin section as base data, making full use of the multidimensional information in rock thin section microscopic images. This study used 20,242 images of 12 kinds of rock thin sections as samples to train and verify the above method. The results show that the proposed method can effectively improve the recognition accuracy of rock thin section images under a microscope. PubDate: 2023-03-02
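The soft-threshold function at the heart of a residual shrinkage network can be sketched as below; scaling the threshold by a per-feature attention weight is a simplified stand-in for the paper's weighting scheme, not its exact formulation.

```python
# Soft thresholding shrinks a value towards zero by tau and zeroes
# anything inside [-tau, tau] -- the denoising step of a shrinkage block.

def soft_threshold(x, tau):
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0

def shrink_features(features, attention, base_tau):
    """Apply soft thresholding with an attention-scaled threshold:
    a weight near 0 shrinks the feature less (treated as important)."""
    return [soft_threshold(f, base_tau * a)
            for f, a in zip(features, attention)]
```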
Abstract: Hyperspectral image classification (HSIC) is a hot topic among researchers. In recent years, deep learning, and especially CNNs, have provided very good results in HSIC. However, there is still a need to develop new deep learning-based methods. In this study, a new CNN-based method is proposed to reduce the number of trainable parameters and increase HSIC accuracy. The proposed method consists of three branches: a Squeeze-and-Excitation network (SENet) in the first branch, a hybrid method combining 3D CNN and 2D depthwise separable convolution (DSC) in the second branch, and 2D DSC in the third branch. The main purpose of the multi-branch network structure is to further enrich the features extracted from the HSI. The SENet in the first branch is integrated because it increases classification performance while only minimally increasing the total number of parameters. In the second and third branches, hybrid CNN methods consisting of 3D CNN and 2D depthwise separable convolution are used; with the hybrid CNN, the number of trainable parameters is reduced and classification performance is increased. To analyze the classification performance of the proposed method, experiments were carried out on the WHU-Hi-HanChuan, WHU-Hi-LongKou and Indian Pines datasets, yielding overall accuracy values of 97.45%, 99.84% and 96.31%, respectively. In addition, the proposed method was compared with nine recent methods from the literature and obtained the best classification result. PubDate: 2023-03-02
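The parameter saving that motivates using 2D depthwise separable convolution is simple arithmetic, sketched below (bias terms omitted; the layer sizes in the test are illustrative, not from the paper).

```python
# Trainable weights of a standard k x k convolution versus a depthwise
# separable one (per-channel k x k depthwise filter + 1x1 pointwise mix).

def standard_conv_params(c_in, c_out, k):
    return k * k * c_in * c_out

def dsc_params(c_in, c_out, k):
    return k * k * c_in + c_in * c_out
```

For a 3x3 layer mapping 64 to 128 channels, the separable form needs roughly an eighth of the weights of the standard form.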