Abstract: Three supervised machine learning (ML) classification algorithms, the Support Vector Classifier (SVC), K-Nearest Neighbour (K-NN), and Linear Discriminant Analysis (LDA), are combined with seventy-six (76) data points from nine (9) core sample datasets retrieved from five (5) selected wells in oilfields of the Subei Basin to delineate bioturbation. Feature selection via p-scores and F-scores reduced the number of relevant features to 7 of the 12 considered. Each classifier underwent model training and testing, allocating 80% of the data for training and the remaining 20% for testing. During model training, the hyperparameters of the SVC (C, gamma and kernel) and K-NN (K value) were optimized via grid search to identify the decision boundaries that provide optimal accuracy in predicting bioturbation. The optimized SVC hyperparameters (a linear kernel, C = 1000 and gamma = 0.10) provided a training accuracy of 96.17%. The optimized K-NN classifier, based on K = 5 nearest neighbours, obtained a training accuracy of 73.28%. The training accuracy of the LDA classifier was 67.36%, making it the worst-performing classifier in this work. Further cross-validation based on a fivefold stratification was performed on each classifier to ascertain model generalization and stability for the prediction of unseen test data. Test results indicated that the SVC was the best predictor of the bioturbation index at 92.86% accuracy, followed by the K-NN model at 90.48%, and then the LDA classifier, which gave the lowest test accuracy at 76.2%. These results indicate that bioturbation can be predicted via ML methods, which offer a more efficient and effective means of rock characterization than the conventional methods used in the oil and gas industry. PubDate: 2024-08-28
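A minimal scikit-learn sketch of the workflow the abstract describes (80/20 split, grid search over C, gamma, kernel and K, stratified five-fold cross-validation); the file name, column names and grid values are illustrative assumptions, not the authors' actual data or settings.

```python
# Hedged sketch: grid-searching SVC (C, gamma, kernel) and K-NN (n_neighbors)
# with an 80/20 split and stratified 5-fold CV. Placeholder data, not the study's.
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("core_samples.csv")                       # hypothetical file
X, y = df.drop(columns=["bioturbation_index"]), df["bioturbation_index"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

svc = GridSearchCV(
    make_pipeline(StandardScaler(), SVC()),
    {"svc__C": [1, 10, 100, 1000], "svc__gamma": [0.01, 0.1, 1.0],
     "svc__kernel": ["linear", "rbf"]},
    cv=cv,
)
knn = GridSearchCV(
    make_pipeline(StandardScaler(), KNeighborsClassifier()),
    {"kneighborsclassifier__n_neighbors": [3, 5, 7, 9]},
    cv=cv,
)

for name, model in [("SVC", svc), ("K-NN", knn)]:
    model.fit(X_tr, y_tr)
    print(name, model.best_params_, "test accuracy:", model.score(X_te, y_te))
```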
Abstract: Susceptibility mapping has been an effective approach to managing the threat of debris flows. However, the sample heterogeneity problem has rarely been considered in previous studies. This paper explores the effect of sample heterogeneity on susceptibility mapping and proposes corresponding solutions. Two unsupervised clustering approaches, K-means clustering and fuzzy C-means clustering, were introduced to divide the study area into several homogeneous regions, and each region was processed independently to solve the sample heterogeneity problem. The information gain ratio method was used to evaluate the predictive ability of the conditioning factors in the total dataset before clustering and in the homogeneous datasets after clustering. The total dataset and the homogeneous datasets were then used in random forest modeling. Receiver operating characteristic curves and related statistical results were employed to evaluate model performance. The results showed that there was a significant sample heterogeneity problem in the study area and that the fuzzy C-means algorithm can play an important role in solving it. By dividing the study area into several homogeneous regions processed independently, conditioning factors with better predictive ability, models with better performance and debris flow susceptibility maps of higher quality could be obtained. PubDate: 2024-08-28
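A hedged sketch of the clustering-then-modelling idea: K-means splits the samples into homogeneous regions and an independent random forest is evaluated per region. The data here are synthetic placeholders; the study's actual conditioning factors and the fuzzy C-means variant are not reproduced.

```python
# Hedged sketch: cluster samples into homogeneous regions, then fit and evaluate
# an independent random forest in each region (dummy data, placeholder features).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 8))                  # conditioning factors (simulated values)
y = rng.integers(0, 2, 500)               # debris-flow / non-debris-flow labels

regions = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for k in np.unique(regions):
    mask = regions == k
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    auc = cross_val_score(rf, X[mask], y[mask], cv=5, scoring="roc_auc")
    print(f"region {k}: mean AUC = {auc.mean():.3f}")
```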
Abstract: Automated methods for building function classification are essential due to restricted access to official building use data. Existing approaches utilize traditional Natural Language Processing (NLP) techniques to analyze textual data representing human activities, but they struggle with the ambiguity of semantic contexts. In contrast, Large Language Models (LLMs) excel at capturing the broader context of language. This study presents a method that uses LLMs to interpret OpenStreetMap (OSM) tags, combining them with physical and spatial metrics to classify urban building functions. We employed an XGBoost model trained on 32 features from six city datasets to classify urban building functions, with F1 scores ranging from 67.80% in Madrid to 91.59% in Liberec. Integrating LLM embeddings enhanced the model's performance by an average of 12.5% across all cities compared to models using only physical and spatial metrics. Moreover, integrating LLM embeddings improved the model's performance by 6.2% over models that incorporate OSM tags as one-hot encodings, and when predicting based solely on OSM tags, the LLM approach outperforms traditional NLP methods in 5 out of 6 cities. These results suggest that deep contextual understanding, captured by LLM embeddings more effectively than by traditional NLP approaches, is beneficial for classification. Finally, a Pearson correlation coefficient of approximately -0.858 between population density and F1 scores suggests that denser areas present greater classification challenges. Moving forward, we recommend investigating discrepancies in model performance across and within cities, aiming to identify generalized models. PubDate: 2024-08-27
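A hedged sketch of combining LLM embeddings of OSM tag text with physical and spatial metrics in an XGBoost classifier; the embedding dimension, feature counts and class labels are assumptions for illustration only.

```python
# Hedged sketch: concatenate (assumed) LLM embeddings of OSM tag text with
# physical/spatial building metrics and train an XGBoost classifier.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

n = 1000
rng = np.random.default_rng(0)
physical = rng.random((n, 12))             # e.g. footprint area, height, compactness
tag_embeddings = rng.random((n, 384))      # sentence embedding of tag text (assumed dim)
X = np.hstack([physical, tag_embeddings])
y = rng.integers(0, 3, n)                  # e.g. residential / commercial / industrial

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```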
Abstract: Hyperspectral Images (HSIs) have extensive applications in remote sensing, especially material discrimination and earth observation monitoring. However, constraints in spatial resolution increase sensitivity to spectral noise, limiting the ability to adjust Receptive Fields (RFs). Convolutional Neural Networks (CNNs) with fixed RFs are a common choice for HSI classification tasks, but their potential for leveraging the appropriate RF remains under-exploited, which limits feature discriminative capabilities. This study introduces an Enhanced Adaptive Source-Selection Kernel with Attention Mechanism (EAS²KAM) for HSI classification. The model incorporates a Three Dimensional Enhanced Function Mixture (3D-EFM) with a distinct RF for local low-rank contextual exploitation. Furthermore, it incorporates diverse global RF branches enriched with spectral attention and an additional spectral-spatial mixing branch to adjust RFs, enhancing multiscale feature discrimination. The 3D-EFM is integrated with a 3D Residual Network (3D ResNet) that includes a Channel-Pixel Attention Module (CPAM) in each segment, improving spectral-spatial feature utilization. Comprehensive experiments on four benchmark datasets show marked advancements, including a maximum rise of 0.67% in Overall Accuracy (OA), 0.87% in Average Accuracy (AA), and 1.33% in the Kappa Coefficient (κ), outperforming the top two HSI classifiers from a list of eleven state-of-the-art deep learning models. A detailed ablation study evaluates model complexity and runtime, confirming the superior performance of the proposed model. PubDate: 2024-08-27
Abstract: Many landslides occur every year in China, causing extensive property losses and casualties. Landslide susceptibility mapping is crucial for disaster prevention by the government or related organizations to protect people's lives and property. This study compared the performance of random forest (RF), classification and regression trees (CART), Bayesian network (BN), and logistic model trees (LMT) methods in generating landslide susceptibility maps in Yanchuan County using an optimization strategy. A field survey was conducted to map 311 landslides. The dataset was divided into a training dataset and a validation dataset with a ratio of 7:3. Sixteen factors influencing landslides were identified based on a geological survey of the study area: elevation, plan curvature, profile curvature, slope aspect, slope angle, slope length, topographic position index (TPI), terrain ruggedness index (TRI), convergence index, normalized difference vegetation index (NDVI), distance to roads, distance to rivers, rainfall, soil type, lithology, and land use. The training dataset was used to train the models in Weka software, and landslide susceptibility maps were generated in GIS software. The performance of the four models was evaluated by receiver operating characteristic (ROC) curves, confusion matrices, the chi-square test, and other statistical analysis methods. The comparison results show that all four machine learning models are suitable for evaluating landslide susceptibility in the study area. The performance of the RF and LMT methods is more stable than that of the other two models; thus, they are suitable for landslide susceptibility mapping. PubDate: 2024-08-27
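A hedged sketch of the 7:3 train/validation split and ROC-based evaluation for one of the compared models (random forest); the sixteen factor columns are simulated placeholders, not the mapped inventory.

```python
# Hedged sketch: 7:3 split and ROC evaluation of a random forest susceptibility model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
X = rng.random((622, 16))                  # 311 landslide + 311 non-landslide cells (dummy)
y = np.r_[np.ones(311), np.zeros(311)]

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
prob = rf.predict_proba(X_va)[:, 1]
fpr, tpr, _ = roc_curve(y_va, prob)        # points of the ROC curve
print("validation AUC:", roc_auc_score(y_va, prob))
```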
Abstract: With the continuous development of power systems and the growth of load demand, accurate short-term load forecasting (STLF) provides reliable guidance for power system operation and scheduling. Therefore, this paper proposes a two-stage short-term load forecasting method. In the first stage, the original load is processed by improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN). The time series features of the load are extracted by a temporal convolutional network (TCN) and used as input for initial load prediction based on a gated recurrent unit (GRU). At the same time, to overcome the problem that a prediction model established on the original subsequences has insufficient adaptability to newly decomposed subsequences, a real-time decomposition strategy is adopted to improve the generalization ability of the model. To further improve prediction accuracy, an error compensation strategy is constructed in the second stage. The strategy uses adaptive variational mode decomposition (AVMD) to reduce the unpredictability of the error sequence and corrects the initial prediction results with a temporal convolutional network-gated recurrent unit (TCN-GRU) error compensator. The proposed two-stage forecasting method was evaluated using load data from Queensland, Australia. The analysis results show that the proposed method can better capture the nonlinearity and non-stationarity in the load data. The mean absolute percentage error of its predictions is 0.819%, which is lower than that of the other compared models, indicating its high applicability in STLF. PubDate: 2024-08-26
Abstract: Understanding the spatial distribution of forest properties can help improve our knowledge of carbon storage and the impacts of climate change. Despite the active use of remote sensing and machine learning (ML) methods in forest mapping, the associated uncertainty predictions are relatively uncommon. The objectives of this study were: (1) to evaluate the effect of spatial resolution on growing stock volume (GSV) mapping using Sentinel-2A and Landsat 8 satellite images, (2) to identify the key predictors, and (3) to quantify the uncertainty of GSV predictions. The study was conducted in heterogeneous landscapes covering anthropogenic areas, logging, young plantings and mature trees. We employed an ML approach and evaluated our models by root mean squared error (RMSE) and coefficient of determination (R2) through 10-fold cross-validation. Our results indicated that Sentinel-2A provided the best prediction performance (RMSE = 56.6 m3/ha, R2 = 0.53) compared with Landsat 8 (RMSE = 71.2 m3/ha, R2 = 0.23), where NDVI, LSWI and the B08 band (near-infrared spectrum) were identified as key variables with the highest contribution to the model. Moreover, the uncertainty of GSV predictions using Sentinel-2A was much smaller than with Landsat 8. The combined assessment of accuracy and uncertainty reinforces the suitability of Sentinel-2A for applications in heterogeneous landscapes. The higher accuracy and lower uncertainty observed with Sentinel-2A underscore its effectiveness in providing more reliable and precise information for decision-makers. This research is important for further digital mapping endeavours with accompanying uncertainty, as uncertainty assessment plays a pivotal role in decision-making processes related to spatial assessment and forest management. PubDate: 2024-08-26
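A hedged sketch of the 10-fold cross-validated RMSE and R2 evaluation described above, using a random forest regressor on simulated spectral predictors; the variable names and values are illustrative only.

```python
# Hedged sketch: 10-fold cross-validated RMSE and R2 for a GSV regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_validate

rng = np.random.default_rng(0)
X = rng.random((400, 10))                              # spectral bands/indices per plot (dummy)
y = 300 * X[:, 0] + 40 * rng.standard_normal(400)      # GSV in m3/ha (synthetic)

cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(RandomForestRegressor(n_estimators=300, random_state=0),
                        X, y, cv=cv,
                        scoring=("neg_root_mean_squared_error", "r2"))
print("RMSE:", -scores["test_neg_root_mean_squared_error"].mean(),
      "R2:", scores["test_r2"].mean())
```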
Abstract: Underwater images (UWIs) are one of the most effective sources of information about the underwater environment. Due to the irregular optical properties of different water types, captured UWIs suffer from color cast, low visibility and distortion. Moreover, each water type exhibits different optical absorption, scattering, and attenuation of the red, green and blue bands, which makes restoration of UWIs a challenging task. The revised underwater image formation model (RUIFM) considers only the peak values of the corresponding attenuation coefficient of each water type to restore UWIs, so its performance suffers from the inter-class variations of UWIs within a water type. In this paper, we propose an improved version of RUIFM, the Diverse Underwater Image Formation Model (DUIFM). The DUIFM increases the diversity of RUIFM by more fully accounting for the optical properties of each water type. We investigate the inter-class variations of Jerlov-based classes of UWIs in terms of light attenuation and statistical features and further classify each image into low, medium and high bands, which in turn provides a precise inherent optical attenuation coefficient of the water and increases the generality of the DUIFM in color restoration and enhancement. Qualitative and quantitative evaluation results on the publicly available real-world underwater enhancement (RUIE), underwater image enhancement benchmark (UIEB) and enhanced underwater visual perception (EUVP) datasets demonstrate the effectiveness of our proposed DUIFM. PubDate: 2024-08-26
Abstract: Rapid and uncontrolled development in the urban environment leads to significant problems, negatively affecting the quality of life in many areas. The Smart Sustainable City concept has emerged to solve these problems and enhance the quality of life of citizens. A smart city integrates the physical, digital and social systems in order to provide a sustainable and comfortable future with the help of Information and Communication Technologies (ICT) and Spatial Data Infrastructures (SDI). However, the integrated management of urban data requires an ICT-enabled SDI that can serve as a decision support element for different urban problems by giving a comprehensive understanding of city dynamics, together with interoperable and integrative conceptual data modelling, which is essential for smart sustainable cities and the successful management of big urban data. The main purpose of this study is to propose an integrated data management approach, in accordance with international standards, for the sustainable management of smart cities. A thematic data model designed within the scope of quality of life, one of the main purposes of smart cities, offers an exemplary approach to overcoming the problems arising from the inability to manage and analyse big and complex urban data for sustainability. In this respect, the aim is to provide a conceptual methodology for the successful implementation of smart sustainable city applications within international and national SDIs under an environmental quality of life theme. To this end, the literature on smart sustainable cities was first examined together with the quality and sustainability of the urban environment and all related components. Secondly, big data and its management were examined within the concept of the urban SDI. In this perspective, new trends and standards related to sensors, the Internet of Things (IoT), real-time data, online services and application programming interfaces (API) were investigated. Finally, thematic conceptual models for the integrated management of sensor-based data were proposed, and a real-time Air Quality Index (AQI) dashboard was designed for Istanbul, Türkiye as the thematic case application of the proposed models. PubDate: 2024-08-24
Abstract: Categorizing galaxies according to their morphologies offers crucial details on the formation and evolution of the universe. Conventional visual inspection techniques are subjective and time-consuming, but it is now possible to classify galaxies with greater accuracy owing to advancements in deep learning techniques. Deep learning has demonstrated considerable potential in galaxy classification research and offers fresh perspectives on the genesis and evolution of galaxies. The suggested methodology employs Residual Networks within a transfer learning-based approach. To improve the accuracy of ResNet, an attention mechanism has been included. In our investigation, we used two relatively shallow ResNet models, ResNet18 and ResNet50, by incorporating a soft attention mechanism into them. The presented approach is validated on the Galaxy Zoo dataset from Kaggle. The accuracy increases from 60.15% to 80.20% for ResNet18 and from 78.21% to 80.55% for ResNet50, demonstrating that the proposed work is now on a level with the accuracy of the far more complex ResNet152 model. We have found that the attention mechanism can successfully improve the accuracy of even shallow models, which has implications for future studies in image recognition tasks. PubDate: 2024-08-24
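A hedged PyTorch sketch of a pretrained ResNet18 with a simple soft spatial-attention block inserted before pooling; this illustrates the general idea of attention-augmented transfer learning, not the authors' exact architecture or training setup.

```python
# Hedged sketch: ResNet18 backbone + soft spatial attention + new classifier head.
import torch
import torch.nn as nn
from torchvision import models

class SoftAttention(nn.Module):
    """Weights each spatial location by a softmax over a 1x1-conv score map."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        attn = torch.softmax(self.score(x).view(b, 1, h * w), dim=-1).view(b, 1, h, w)
        return x * attn * (h * w)                          # rescale so magnitudes stay comparable

class GalaxyResNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained weights
        self.features = nn.Sequential(*list(backbone.children())[:-2])       # up to last conv block
        self.attention = SoftAttention(512)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, n_classes)

    def forward(self, x):
        x = self.attention(self.features(x))
        return self.fc(self.pool(x).flatten(1))

model = GalaxyResNet(n_classes=5)                          # assumed number of morphology classes
print(model(torch.randn(2, 3, 224, 224)).shape)            # torch.Size([2, 5])
```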
Abstract: Soil salinity is one of the significant environmental issues that can reduce crop growth and productivity, ultimately leading to land degradation. Therefore, accurate monitoring and mapping of soil salinity are essential for implementing effective measures to combat increasing salinity. This study aims to estimate the spatial distribution of soil salinity using machine learning methods in Huludao City, located in northeastern China. Through meticulous data collection, soil salinity was measured in 310 soil samples. Subsequently, environmental parameters were calculated using remote sensing data. In the next step, soil salinity was modeled using machine learning methods, including random forest (RF), support vector machine (SVM), and artificial neural network (ANN). Additionally, to estimate uncertainty, the lower limit (5%) and upper limit (95%) prediction intervals were used. The results indicated that accurate maps for predicting soil salinity could be obtained using machine learning methods. By comparing the methods employed, it was determined that the RF model is the most accurate approach for estimating soil salinity (RMSE = 0.03, AIC = -919, BIC = -891, and R2 = 0.84). Furthermore, the results of the prediction interval coverage probability (PICP) index, computed from the uncertainty maps, demonstrated the high predictive accuracy of the methods employed in this study. Moreover, it was revealed that the environmental parameters NDVI, GNDVI, standh, and BI are the main controllers of the spatial patterns of soil salinity in the study area. However, there remains a need to explore more precise methods for estimating soil salinity and identifying salinity patterns, as soil salinity has intensified with increased human activities, necessitating more detailed investigations. PubDate: 2024-08-24
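A small sketch of the prediction interval coverage probability (PICP) used above: the fraction of observations falling between the 5% and 95% prediction bounds. The values are toy numbers, not the study's measurements.

```python
# Hedged sketch: PICP = share of observations inside the [5%, 95%] prediction interval.
import numpy as np

def picp(y_obs, lower, upper):
    y_obs, lower, upper = map(np.asarray, (y_obs, lower, upper))
    return np.mean((y_obs >= lower) & (y_obs <= upper))

# toy example: a well-calibrated 90% interval should give PICP close to 0.90
y  = np.array([1.2, 0.8, 1.0, 1.5, 0.6])
lo = np.array([0.9, 0.5, 0.7, 1.0, 0.7])
hi = np.array([1.6, 1.1, 1.3, 1.8, 1.2])
print(picp(y, lo, hi))   # 0.8
```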
Abstract: Hyperspectral Imaging (HSI) has revolutionized earth observation through advanced remote sensing technology, providing rich spectral and spatial information across multiple bands. However, this wealth of data introduces significant challenges for classification, including high spectral correlation, the curse of dimensionality due to limited labeled data, the need to model long-term dependencies, and the impact of sample input on deep learning performance. These challenges are further exacerbated by the costly and complex acquisition of HSI data, resulting in limited availability of labeled samples and class imbalances. To address these critical issues, our study proposes a novel approach for generating high-quality synthetic hyperspectral data cubes using an advanced Generative Adversarial Network (GAN) with the Wasserstein loss and gradient penalty (WGAN-GP). This approach aims to augment real-world data, mitigating the scarcity of labeled samples that has long been a bottleneck in hyperspectral image analysis and classification. To fully leverage both the synthetic and real data, we introduce a novel Convolutional LSTM classifier designed to process the intricate spatial and spectral correlations inherent in hyperspectral data. This classifier excels at modeling multi-dimensional relationships within the data, effectively capturing long-term dependencies and improving feature extraction and classification accuracy. The performance of our proposed model, termed 3D-ACWGAN-ConvLSTM, is rigorously validated using benchmark hyperspectral datasets, demonstrating its effectiveness in augmenting real-world data and enhancing classification performance. This research contributes to addressing the critical need for robust data augmentation techniques in hyperspectral imaging, potentially opening new avenues for applications in areas constrained by limited data availability and complex spectral-spatial relationships. PubDate: 2024-08-23
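A hedged PyTorch sketch of the WGAN-GP gradient penalty term, computed on random interpolates between real and generated samples; the critic and tensor shapes are assumed, not taken from the 3D-ACWGAN-ConvLSTM implementation.

```python
# Hedged sketch: WGAN-GP gradient penalty on interpolates between real and fake samples.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True, retain_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# usage inside the critic update (sketch):
# d_loss = fake_scores.mean() - real_scores.mean() + gradient_penalty(critic, real, fake)
```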
Abstract: Accurate hourly streamflow prediction is crucial for managing water resources, particularly in smaller basins with short response times. This study evaluates six deep learning (DL) models, including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Neural Network (CNN), and their hybrids (CNN-LSTM, CNN-GRU, CNN-Recurrent Neural Network (RNN)), across two basins in Northwest Spain over a ten-year period. Findings reveal that GRU models excel, achieving Nash-Sutcliffe Efficiency (NSE) scores of approximately 0.96 and 0.98 for the Groba and Anllóns catchments, respectively, at 1-hour lead times. Hybrid models did not enhance performance, which declines at longer lead times due to basin-specific characteristics such as area and slope, particularly in smaller basins, where NSE dropped from 0.969 to 0.24. Including future rainfall data in the input sequences improved the results, especially at longer lead times: for a 12-hour lead time, NSE rose from 0.24 to 0.70 in the Groba basin and from 0.81 to 0.92 in the Anllóns basin. This research provides a foundation for future exploration of DL in streamflow forecasting, in which other data sources and model structures can be utilized. PubDate: 2024-08-23
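The Nash-Sutcliffe Efficiency used to score the forecasts can be computed as below (1 is a perfect fit, 0 matches simply predicting the mean of observations); the streamflow values are toy numbers.

```python
# Hedged sketch: Nash-Sutcliffe Efficiency for a streamflow forecast.
import numpy as np

def nse(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

print(nse([3.0, 4.5, 5.0, 4.0], [2.8, 4.6, 5.2, 3.9]))   # close to 1 for a good forecast
```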
Abstract: Geological domaining is an essential aspect of mineral resource evaluation. Various explicit and implicit modeling approaches have been developed for this purpose, but most of them are computationally expensive and complex, particularly when dealing with intricate mineralization systems and large datasets. Additionally, most of them require a time-consuming process for hyperparameter tuning. In this research, the application of the Learning Vector Quantization (LVQ) classification algorithm is proposed to address these challenges. The LVQ algorithm exhibits lower complexity and computational cost than other machine learning algorithms. Various versions of LVQ, including LVQ1, LVQ2, and LVQ3, have been implemented for geological domaining in the Darehzar porphyry copper deposit in southeastern Iran. Their performance in geological domaining has been thoroughly investigated and compared with the Support Vector Machine (SVM), a widely accepted classification method in implicit domaining. The overall classification accuracy of LVQ1, LVQ2, LVQ3, and SVM is 90%, 90%, 91%, and 98%, respectively. Furthermore, the calculation times of these algorithms have been compared. Although the overall accuracy of the SVM method is ∼ 7% higher, its calculation time is ∼ 1000 times longer than that of the LVQ methods. Therefore, LVQ emerges as a suitable alternative for geological domaining, especially when dealing with large datasets. PubDate: 2024-08-23
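A plain NumPy sketch of the LVQ1 update rule (prototypes are pulled toward samples of their own class and pushed away from others); the prototype count, learning rate and data are illustrative assumptions, and the LVQ2/LVQ3 window rules are not shown.

```python
# Hedged sketch: LVQ1 training and nearest-prototype prediction.
import numpy as np

def train_lvq1(X, y, prototypes_per_class=2, lr=0.05, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos, proto_labels = [], []
    for c in classes:                                   # initialise prototypes from class samples
        idx = rng.choice(np.where(y == c)[0], prototypes_per_class, replace=False)
        protos.append(X[idx].astype(float))
        proto_labels.append(np.full(prototypes_per_class, c))
    protos, proto_labels = np.vstack(protos), np.concatenate(proto_labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))   # best-matching prototype
            sign = 1.0 if proto_labels[j] == y[i] else -1.0        # attract if same class, repel if not
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos, proto_labels

def predict_lvq(X, protos, proto_labels):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]
```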
Abstract: Floods, among the most destructive climate-induced natural disasters, necessitate effective prediction models for early warning systems. The proposed Multi-Attention Encoder-Decoder-based Temporal Convolutional Network (MA-TCN-ED) prediction model combines the strengths of the Temporal Convolutional Network (TCN), Multi-Attention (MA) mechanism, and Encoder-Decoder (ED) architecture along with filter-wrapper feature selection for optimal feature selection. This framework aims to improve flood prediction accuracy by effectively capturing temporal dependencies and intricate patterns in atmospheric and hydro-meteorological data. The proposed framework was extensively assessed on real-world flood-related data of the river Meenachil, Kerala, and the results showed that MA-TCN-ED with the filter-wrapper feature selection approach achieved higher accuracy in flood prediction. The model was further validated on a dataset of the river Pamba, Kerala. The proposed model exhibits better performance, with about 32% reduced MAE, 39% reduced RMSE, 12% increased NSE, 14% enhanced R2, and 17% enhanced accuracy relative to the average performance of all the compared baseline models. The proposed work holds promise for enhancing early warning systems and mitigating the impact of floods, and contributes to the broader understanding of leveraging deep learning models for effective climate-related risk mitigation. PubDate: 2024-08-22
Abstract: Complex changes in coastlines are increasing with climate, sea level, and human impacts. Remote Sensing (RS) and Geographic Information Systems (GIS) provide critical information to rapidly and precisely monitor environmental changes in coastal areas and to understand and respond to environmental, economic, and social impacts. This study aimed to determine the temporal changes in the coastline of the Seyhan Basin, Türkiye, using Landsat satellite images from 1985 to 2023 on the Google Earth Engine (GEE) platform. The approximately 50 km of coastline was divided into three regions and analyzed using various statistical techniques with the Digital Shoreline Analysis System (DSAS) tool. In Zone 1, the maximum coastal accretion was 1382.39 m (Net Shoreline Movement, NSM) and 1430.63 m (Shoreline Change Envelope, SCE), while the maximum retreat was -76.43 m (NSM). Zone 2 showed low retreat and accretion rates, with maximum retreat at -2.39 m/year (End Point Rate, EPR) and -2.45 m/year (Linear Regression Rate, LRR), and maximum accretion at 0.99 m/year (EPR) and 0.89 m/year (LRR). Significant changes were observed at the mouth of the Seyhan delta in Zone 3. According to the NSM method, the maximum accretion was 1337.72 m and the maximum retreat was 1301.4 m; the SCE method showed a maximum retreat of 1453.65 m. The EPR and LRR methods also indicated high retreat and accretion rates. Statistical differences between the methods were assessed using the Kruskal–Wallis H test and the ANOVA test. Generally, the NSM and EPR methods provided similar results, while the other methods varied by region. Additionally, the Kalman filtering model was used to predict the coastline for 2033 and 2043, identifying areas vulnerable to future changes, and comparisons were made to assess its performance. In the 10-year and 20-year forecasts of the 2033 and 2043 coastlines, the long prediction horizon was found to degrade the Kalman filtering model's ability to capture coastal boundary changes. PubDate: 2024-08-21
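A small sketch of the DSAS-style transect statistics named above (NSM, SCE, EPR, LRR) for a single transect; the survey years and shoreline positions are invented for illustration.

```python
# Hedged sketch: shoreline change statistics for one transect.
import numpy as np

years     = np.array([1985, 1995, 2005, 2015, 2023], dtype=float)
positions = np.array([0.0, 110.0, 240.0, 320.0, 410.0])   # distance along transect, metres (toy values)

nsm = positions[-1] - positions[0]                 # Net Shoreline Movement (m)
epr = nsm / (years[-1] - years[0])                 # End Point Rate (m/year)
lrr = np.polyfit(years, positions, 1)[0]           # Linear Regression Rate (m/year, slope of the fit)
sce = positions.max() - positions.min()            # Shoreline Change Envelope (m)
print(f"NSM={nsm:.1f} m, EPR={epr:.2f} m/yr, LRR={lrr:.2f} m/yr, SCE={sce:.1f} m")
```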
Abstract: Cargo security is one of the most critical issues in modern logistics. For high-value theft-targeted (HVTT) cargo, the driving phase of transportation accounts for a major part of thefts. Dozens of fleet management solutions based on GNSS positioning were introduced in recent years, yet existing tracking solutions barely meet the requirements of TAPA 2020. Map-matching algorithms present valuable ideas on handling GNSS inaccuracy; however, universal map-matching methods are overcomplicated. Commercial map data providers require additional fees for the use of real-time map-matching functionality. In addition, at the map-matching stage, information on the actual distance from which the raw data was captured is lost. In HVTT security, the distance between the raw GNSS position and the map-matched position can be used as a quantitative security factor. The goal of this research was to provide empirical data for TAPA TSR 2020 Level 1 certification in terms of tracking vehicles during typical operating conditions (cargo loading, routing, transportation, stopover, unloading) as well as detecting any geofencing violations. The Dynamic Geofencing Algorithm (DGA) presented in this article was developed for this specific purpose, and this is the first known publication to examine TAPA standardization in terms of cargo positioning and fleet monitoring. The DGA is an adaptive geometric-based matching algorithm (alternately curve-to-curve, point-to-curve, point-to-point). The idea behind the algorithm is to detect and eliminate atypical matching circumstances, namely cases in which the raw position is registered at one of the exceptions described in the paper. The problem of dynamic/adaptive cartographic projection is also addressed so that robust Euclidean calculations can be used at global scale. PubDate: 2024-08-21
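A hedged sketch of the point-to-curve step of geometric matching: the planar distance from a raw GNSS fix to the nearest point of a route polyline, which could serve as the kind of quantitative security factor mentioned above. Coordinates are assumed to be already projected to a metric plane; this is not the DGA itself.

```python
# Hedged sketch: distance from a raw GNSS fix to the nearest point of a route polyline.
import numpy as np

def point_to_segment_distance(p, a, b):
    p, a, b = map(np.asarray, (p, a, b))
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)   # clamp projection onto the segment
    return np.linalg.norm(p - (a + t * ab))

def distance_to_route(p, route):
    """Minimum distance from point p to a polyline given as a list of vertices."""
    return min(point_to_segment_distance(p, route[i], route[i + 1])
               for i in range(len(route) - 1))

route = [(0, 0), (100, 0), (100, 80)]                          # toy route, metres
print(distance_to_route((60, 25), route))                      # 25.0 m from the first leg
```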
Abstract: Detecting the influence of temperature on urban vegetation is useful for planning urban biodiversity conservation efforts, since temperature affects several ecosystem processes. In this study, the relationships between land surface temperature (LST) and vegetation phenology events (start of growing season, SOS; end of growing season, EOS; peak phenology) were examined in native savannah woodland and grass parcels of a hot-climate town. For comparison, similar woodland and grass parcels on the town's periphery, and a wetland, were used. The vegetation parcel LST values (°C) in one calendar year (2023) were obtained from Landsat-8 (L8) and Landsat-9 (L9) thermal imagery, whose combination yielded an 8-day image frequency. Phenology changes relative to seasonal air temperature and LST were determined using vegetation index (VI) values computed from the accompanying 30 m resolution L8-L9 non-thermal bands: the Normalised Difference Vegetation Index (NDVI) and one improved VI, the Soil Adjusted Vegetation Index (SAVI). Higher-frequency, 250 m resolution NDVI and Enhanced Vegetation Index (EVI) MOD13Q1 layers supplemented the L8-L9 VIs. LST correlated highly with air temperature (p < 0.001). On nearly all L8-L9 image dates, the urban vegetation parcel's mean LST was higher (p < 0.001) than that of its peri-urban equivalent. The improved VIs (SAVI, EVI) detected some phenology events as occurring slightly earlier than detected by the NDVI. Associated with the higher LST, the SOS was earlier in the urban than in the peri-urban woodland. This association has scarcely been demonstrated in savannah vegetation, necessitating proactive efforts to reduce potential biodiversity effects. PubDate: 2024-08-20
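The two Landsat-based indices named above can be computed from the red and near-infrared surface-reflectance bands as below; L = 0.5 is the commonly used soil-adjustment factor, and the reflectance values are toy numbers.

```python
# Hedged sketch: NDVI and SAVI from red (Landsat B4) and near-infrared (B5) reflectance.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    return (nir - red) * (1 + L) / (nir + red + L)

red = np.array([[0.08, 0.12], [0.10, 0.20]])
nir = np.array([[0.35, 0.30], [0.40, 0.25]])
print(ndvi(nir, red))
print(savi(nir, red))
```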
Abstract: Accurate digital elevation models (DEMs) derived from airborne light detection and ranging (LiDAR) data are crucial for terrain analysis applications. As established in the literature, higher point density improves terrain representation but requires greater data storage and processing capacity. Therefore, point cloud sampling is necessary to reduce densities while preserving DEM accuracy as much as possible. However, there has been limited examination directly comparing the effects of various sampling algorithms on DEM accuracy. This study aimed to help fill this gap by evaluating and comparing the performance of three common point cloud sampling methods (octree, spatial, and random sampling) in mountainous terrain. DEMs were then generated from the sampled point clouds using three different interpolation algorithms: inverse distance weighting (IDW), natural neighbor (NN), and ordinary kriging (OK). The results showed that octree sampling consistently produced the most accurate DEMs across all metrics and terrain slopes compared to the other methods. Spatial sampling also produced more accurate DEMs than random sampling but was less accurate than octree sampling. These results can be attributed to differences in how the sampling methods represent terrain geometry and retain microtopographic detail. Octree sampling recursively subdivides the point cloud based on density distributions, closely conforming to complex microtopography. In contrast, random sampling disregards the underlying densities, reducing accuracy in rough terrain. The findings guide the choice of sampling and interpolation methods for airborne LiDAR point clouds when generating DEMs for similarly complex mountainous terrain. PubDate: 2024-08-19
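A hedged sketch of the IDW interpolation step used for DEM generation, with an assumed power of 2 and a fixed number of nearest neighbours; the octree sampling and kriging steps are not reproduced, and the points are toy values.

```python
# Hedged sketch: inverse distance weighting (IDW) of ground-point elevations onto query cells.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2, k=8):
    xy_known, z_known, xy_query = map(np.asarray, (xy_known, z_known, xy_query))
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)                       # avoid division by zero at coincident points
    idx = np.argsort(d, axis=1)[:, :k]             # k nearest sampled points per query cell
    rows = np.arange(len(xy_query))[:, None]
    w = 1.0 / d[rows, idx] ** power
    return np.sum(w * z_known[idx], axis=1) / np.sum(w, axis=1)

pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10]])
z = np.array([100.0, 102.0, 101.0, 99.0])
print(idw(pts, z, np.array([[5.0, 5.0]]), k=4))    # weighted mean of the four corner elevations
```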
Abstract: Lake temperature forecasting is crucial for understanding and mitigating climate change impacts on aquatic ecosystems. Meteorological time series data and their relationships have a high degree of complexity and uncertainty, making it difficult to predict lake temperatures. In this study, we propose a novel approach, the Probabilistic Quantile Multiple Fourier Feature Network (QMFFNet), for accurate lake temperature prediction in Qinghai Lake. Utilizing only time series data, our model offers practical and efficient forecasting without the need for additional variables. Our approach integrates quantile loss instead of the L2-norm, enabling probabilistic temperature forecasts as probability distributions. This unique feature quantifies uncertainty, aiding decision-making and risk assessment. Extensive experiments demonstrate the method's superiority over conventional models, enhancing predictive accuracy and providing reliable uncertainty estimates. This makes our approach a powerful tool for climate research and ecological management in lake temperature forecasting. Innovations in probabilistic forecasting and uncertainty estimation contribute to better climate impact understanding and adaptation in Qinghai Lake and global aquatic systems. PubDate: 2024-08-17
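A small sketch of the pinball (quantile) loss that replaces the L2 norm: predicting a high quantile penalises under-prediction more heavily than over-prediction, which is what turns point forecasts into calibrated quantile forecasts. The temperature values are toy numbers.

```python
# Hedged sketch: pinball (quantile) loss for a predicted quantile q.
import numpy as np

def pinball_loss(y_true, y_pred, q):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

y_true = np.array([12.0, 13.5, 11.8])
print(pinball_loss(y_true, np.array([11.0, 13.0, 12.5]), q=0.9))  # under-prediction penalised more
print(pinball_loss(y_true, np.array([11.0, 13.0, 12.5]), q=0.1))  # over-prediction penalised more
```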