- VOX2BIM+ - A Fast and Robust Approach for Automated Indoor Point Cloud Segmentation and Building Model Generation
Abstract: Building Information Modeling (BIM) plays a key role in digital design and construction and also promises great potential for facility management. In practice, however, for existing buildings there are often either no digital models, or the existing planning data is not up-to-date enough for use as as-is models in operation. While reality-capturing methods like laser scanning have become faster and more affordable in recent years, the digital reconstruction of existing buildings from 3D point cloud data is still characterized by much manual work, giving partially or fully automated reconstruction methods a key role. This article presents a combination of methods that subdivides point clouds into separate building storeys and rooms, while additionally generating a BIM representation of the building’s wall geometries for use in CAFM applications. The implemented storey-wise segmentation relies on planar cuts, with candidate planes estimated from a voxelized point cloud representation before refining them using the underlying point data. Similarly, the presented room segmentation uses morphological operators on the voxelized point cloud to extract room boundaries. Unlike the aforementioned spatial segmentation methods, the presented parametric reconstruction step estimates volumetric walls. Reconstructed objects and spatial relations are modelled BIM-ready as IFC in one final step. The presented methods use voxel grids to provide relatively high speed and refine their results using the original point cloud data for increased accuracy. Robustness has proven to be high, with occlusions, noise and point density variations being well tolerated, meaning that each method can be applied to data acquired with a variety of capturing methods. All approaches work on unordered point clouds, with no additional data required. In combination, these methods comprise a complete workflow, with each component also suitable for standalone use in numerous scenarios.
PubDate: 2023-05-30
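The morphological room segmentation described above can be illustrated with standard operators on a voxel occupancy grid. The following is a minimal 2D sketch under assumed names and a fixed kernel size, not the paper's implementation: opening the free space removes narrow passages such as door openings, so rooms fall apart into separate connected components.

```python
import numpy as np
from scipy import ndimage

def segment_rooms(occupied, kernel=3):
    """Toy sketch of morphological room segmentation on one horizontal
    slice of a voxelised point cloud: open the free space so narrow
    passages (doors) disappear, then label connected components."""
    free = ~occupied
    structure = np.ones((kernel, kernel), dtype=bool)
    opened = ndimage.binary_opening(free, structure=structure)
    labels, n_rooms = ndimage.label(opened)
    return labels, n_rooms
```

For example, a floor plan with two rooms joined by a one-voxel door yields two labelled regions, since the opening erases the door passage.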
- Automation Strategies for the Photogrammetric Reconstruction of Pipelines
Abstract: A responsible use of energy resources is currently more important than ever. For the effective insulation of industrial plants, a three-camera measurement system was, therefore, developed. With this system, the as-built geometry of pipelines can be captured, which is the basis for the production of a precisely fitting and effective insulation. In addition, the digital twin can also be used for Building Information Modelling, e.g. for planning purposes or maintenance work. In contrast to the classical approach of processing the images by calculating a point cloud, the reconstruction is performed directly on the basis of the object edges in the image. For the optimisation of the initially purely geometrically calculated components, an adjustment approach is used. In addition to the image information, this approach takes into account standardised parameters (such as the diameter) as well as the positional relationships between the components, and thus eliminates discontinuities at the transitions. Furthermore, different automation approaches were developed to support the user in evaluating the images and manually recognising objects in them. For straight pipes, the selection of the object edges in one image is sufficient in most cases to calculate the 3D cylinder. Based on the normalised diameter, the missing depth can be derived approximately. Elbows can be localised on the basis of coplanar neighbouring elements. The other elbow parameters can be determined by matching the back projection with the image edges. The same applies to flanges. For merging multiple viewpoints, a transformation approach is used which works with homologous components instead of control points and minimises the orthogonal distances between the component axes in the datasets. PubDate: 2023-05-22
- Editorial
PubDate: 2023-05-11
- Semantic Real-Time Mapping with UAVs
Abstract: Whilst mapping with UAVs has become an established tool for geodata acquisition in many domains, certain time-critical applications, such as crisis and disaster response, demand fast geodata processing pipelines rather than photogrammetric post-processing approaches. Based on our 3D-capable real-time mapping pipeline, this contribution presents not only an array of optimisations of the original implementation but also an extension towards understanding the image content with respect to land cover and object detection using machine learning. This paper (1) describes the pipeline in its entirety, (2) compares the performance of the semantic labelling and object detection models quantitatively and (3) showcases real-world experiments with qualitative evaluations. PubDate: 2023-05-11
- Reports
PubDate: 2023-04-14
- A Globally Applicable Method for NDVI Estimation from Sentinel-1 SAR Backscatter Using a Deep Neural Network and the SEN12TP Dataset
Abstract: Vegetation monitoring is important for many applications, e.g., agriculture, food security, or forestry. Optical data from space-borne sensors and spectral indices derived from their data like the normalised difference vegetation index (NDVI) are frequently used in this context because of their simple derivation and interpretation. However, optical sensors have one major drawback: cloud coverage hinders data acquisition, which is especially troublesome for moderate and tropical regions. One solution to this problem is the use of cloud-penetrating synthetic aperture radar (SAR) sensors. Yet, with very different image characteristics of optical and SAR data, an optical sensor cannot be easily replaced by SAR sensors. This paper presents a globally applicable model for the estimation of NDVI values from Sentinel-1 C-band SAR backscatter data. First, the newly created dataset SEN12TP consisting of Sentinel-1 and -2 images is introduced. Its main features are the sophisticated global sampling strategy and that the images of the two sensors are time-paired. Using this dataset, a deep learning model is trained to regress SAR backscatter data to NDVI values. The benefit of auxiliary input information, e.g., digital elevation models, or land-cover maps is evaluated experimentally. After selection of the best model configuration, another experimental evaluation on a carefully selected hold-out test set confirms that high performance, low error, and good level of spatial detail are achieved. Finally, the potential of our approach to create dense NDVI time series of frequently clouded areas is shown. One limit of our approach is the neglect of the temporal characteristics of the SAR and NDVI data, since only data from a single date are used for prediction. PubDate: 2023-04-13
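For reference, the regression target in the work above is the standard NDVI, computed from red and near-infrared reflectance. A minimal sketch (the epsilon guard is an implementation detail added here, not from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalised difference vegetation index: (NIR - Red) / (NIR + Red).
    eps guards against division by zero over very dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Values range from -1 to 1, with dense green vegetation typically well above 0.5.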
- Uncovering Early Traces of Bark Beetle Induced Forest Stress via Semantically Enriched Sentinel-2 Data and Spectral Indices
Abstract: Forest ecosystems are shaped by both abiotic and biotic disturbances. Unlike sudden disturbance agents, such as wind, avalanches and fire, bark beetle infestation progresses gradually. By the time infestation is observable by the human eye, trees are already in the final stages of infestation—the red- and grey-attack. In the relevant phase—the green-attack—biochemical and biophysical processes take place which, however, are not or hardly visible. In this study, we applied a time series analysis based on semantically enriched Sentinel-2 data and spectral vegetation indices (SVIs) to detect early traces of bark beetle infestation in the Berchtesgaden National Park, Germany. Our approach used a stratified and hierarchical hybrid remote sensing image understanding system for pre-selecting candidate pixels, followed by the use of SVIs to confirm or refute the initial selection, heading towards a ‘convergence of evidence’ approach. Our results revealed that the near-infrared (NIR) and short-wave-infrared (SWIR) parts of the electromagnetic spectrum provided the best separability between pixels classified as healthy and early infested. Among the vegetation indices, we found those related to water stress to be most sensitive. Compared to an SVI-only model that did not incorporate the concept of candidate pixels, our approach achieved distinctly higher producer’s accuracy (76% vs. 63%) and user’s accuracy (61% vs. 42%). The temporal accuracy of our method depends on the availability of satellite data and varies up to 3 weeks before or after the first ground-based detection in the field. Nonetheless, our method offers valuable early detection capabilities that can aid in implementing timely interventions to address bark beetle infestations at an early stage. PubDate: 2023-04-13
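The water-stress sensitivity reported above is typically captured by NIR/SWIR indices such as the normalised difference moisture index (NDMI). The abstract does not name the exact index set, so the following is only a generic illustration of that family:

```python
import numpy as np

def ndmi(nir, swir, eps=1e-8):
    """Normalised difference moisture index: (NIR - SWIR) / (NIR + SWIR).
    Declining values indicate canopy water stress, one of the early
    green-attack signals discussed above."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir + eps)
```

As SWIR reflectance rises with drying foliage, NDMI drops, which is the direction of change a green-attack detector would monitor over the time series.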
- Automatic Detection of Specific Constructions on a Large Scale Using Deep Learning in Very High Resolution Airborne Imagery
Abstract: In the High Modernism period, from around 1914 to 1970, many system halls in steel construction were manufactured to meet the increasing demand in industry, commerce, and agriculture, among other areas. However, these types of buildings have not been the focus of any research in the field of construction history, generating a lack of knowledge regarding their construction types, distribution, and related context to enable statements on the suitability and worthiness of historical monument listings. This paper proposes a methodology for the automatic detection of these buildings using aerial imagery. For this purpose, Deep Learning techniques for two tasks are evaluated: semantic segmentation and object detection. Different state-of-the-art network architectures are extensively reviewed and assessed through a series of experiments to determine which features and hyper-parameters produce the best results. Based on our experiments, the height information from the nDSM improved the results by refining the detections and reducing the number of false negatives and false positives. Moreover, the focal loss helped boost the detections by tuning its hyper-parameter γ, to which the object detection algorithms showed high sensitivity. Semantic segmentation models outperformed their object detection counterparts, with U-Net using an EfficientNet B3 backbone achieving the best results, a detection rate of up to 93%. PubDate: 2023-04-06
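The focal loss mentioned above down-weights well-classified examples through the exponent γ. A binary sketch of the standard formulation (the α and γ values here are illustrative defaults, not the paper's tuned settings):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-12):
    """Binary focal loss: -alpha_t * (1 - p_t)**gamma * log(p_t).
    With gamma = 0 it reduces to (alpha-weighted) cross-entropy;
    larger gamma focuses training on hard examples."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

The modulating factor (1 - p_t)^γ is what makes detectors sensitive to γ: it decides how strongly the abundant easy negatives are suppressed during training.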
- Multi-temporal UAV Imaging-Based Mapping of Chlorophyll Content in Potato Crop
Abstract: Spectral indices based on unmanned aerial vehicle (UAV) multispectral images combined with machine learning algorithms can more effectively assess chlorophyll content in plants, which plays a crucial role in plant nutrition diagnosis, yield estimation and a better understanding of plant and environment interactions. Therefore, the aim of this study was to use spectral indices derived from UAV-based multispectral images as inputs to different machine learning models to predict canopy chlorophyll content of potato crops. The relative chlorophyll content was obtained using a SPAD chlorophyll meter. Random Forest (RF), support vector regression (SVR), partial least squares regression (PLSR) and ridge regression (RR) were employed to predict the chlorophyll content. The results showed that the RF model was the best performing algorithm, with an R2 of 0.76 and a Root Mean Square Error (RMSE) of 1.97. Both the RF and SVR models showed much better accuracy than the PLSR and RR models. This study suggests that the best-performing model, the RF model, can be used to map the spatial variation in canopy chlorophyll content from UAV multispectral images at different growth stages. PubDate: 2023-04-01
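The two scores reported above (R2 = 0.76, RMSE = 1.97) are standard regression metrics; for reference, a minimal sketch of how they are computed:

```python
import numpy as np

def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R2) and root mean square error,
    the two scores used above to compare the chlorophyll regressors."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residual = y_true - y_pred
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, np.sqrt(np.mean(residual ** 2))
```

R2 compares the residual error against a mean-only baseline, while RMSE is expressed in the units of the target (here, SPAD readings).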
- Assessment of TanDEM-X DEM 2020 Data in Temperate and Boreal Forests and Their Application to Canopy Height Change
Abstract: Space-borne digital elevation models (DEMs) are considered an important proxy for canopy surface height and its changes in forests. Interferometric TanDEM-X DEMs were assessed regarding their accuracy in forests of Germany and Estonia. The interferometric synthetic aperture radar (InSAR) data for the new global TanDEM-X DEM 2020 coverage were acquired between 2017 and 2020. Each data acquisition was processed using the delta-phase approach for phase unwrapping and comprised an absolute height calibration. The results of the individual InSAR heights confirmed a substantial bias in forests. This was indicated by a mean error (ME) between −5.74 and −6.14 m, associated with a root mean square error (RMSE) between 6.99 m and 7.40 m, using airborne light detection and ranging (LiDAR) data as a reference. The bias was attributed to signal penetration, which we attempted to compensate. After compensation, the ME and RMSE improved substantially, to the ranges of −0.54 to 0.84 m and 3.55 m to 4.52 m. Higher errors of the penetration-compensated DEMs compared to the original DEMs were found in non-forested areas, which suggests applying the penetration compensation only in forests. The potential of the DEMs for estimating height changes was further assessed in a case study in Estonia. Comparing TanDEM-X and LiDAR height changes at pixel level, the canopy height change analysis in Estonia indicated an overall accuracy of 4.17 m RMSE and −0.93 m ME. The accuracy improved substantially at forest stand level, to an RMSE of 2.84 m and an ME of −1.48 m. Selective penetration compensation further improved the height change estimates to an RMSE of 2.14 m and an ME of −0.83 m. Height loss induced by clearcutting was estimated with an ME of −0.85 m and an RMSE of 3.3 m. Substantial regrowth resulted in an ME of −0.46 m and an RMSE of 1.9 m. These results are relevant for exploiting multiple global acquisitions of TanDEM-X, in particular for estimating canopy height and its changes in European forests. PubDate: 2023-03-01 DOI: 10.1007/s41064-023-00235-1
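The error measures above, and the constant-offset penetration compensation they motivate, can be sketched as follows. A single constant bias per area is an assumption made here for illustration; the paper's compensation may be more elaborate:

```python
import numpy as np

def me_rmse(dem, ref):
    """Mean (signed) error and RMSE of an InSAR DEM against a LiDAR
    reference; a negative ME indicates radar penetration into the canopy."""
    diff = np.asarray(dem, dtype=float) - np.asarray(ref, dtype=float)
    return diff.mean(), np.sqrt(np.mean(diff ** 2))

def compensate(dem, bias):
    """Remove an estimated penetration bias from the DEM heights."""
    return np.asarray(dem, dtype=float) - bias
```

Note that ME captures systematic bias while RMSE also includes random scatter; a pure constant bias can be removed entirely, but the residual scatter remains.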
- Optimised U-Net for Land Use–Land Cover Classification Using Aerial Photography
Abstract: Convolutional Neural Networks (CNNs) have various hyper-parameters which need to be specified or can be altered when defining a deep learning architecture. There are numerous studies which have tested different types of networks (e.g. U-Net, DeepLabv3+) or created new architectures, benchmarked against well-known test datasets. However, there is a lack of real-world mapping applications demonstrating the effects of changing network hyper-parameters on model performance for land use and land cover (LULC) semantic segmentation. In this paper, we analysed the effects on training time and classification accuracy of altering parameters such as the number of initial convolutional filters, kernel size, network depth, kernel initialiser and activation functions, loss and loss optimiser functions, and learning rate. We achieved this using a well-known top-performing architecture, the U-Net, in conjunction with LULC training data and two multispectral aerial images from North Queensland, Australia. A 2018 image was used to train and test CNN models with different parameters, and a 2015 image was used for assessing the optimised parameters. We found that more complex models with a larger number of filters and a larger kernel size produce classifications of higher accuracy but take longer to train. Using an accuracy-time ranking formula, we found that using 56 initial filters with a kernel size of 5 × 5 provides the best compromise between training time and accuracy. When fully training a model using these parameters and testing on the 2015 image, we achieved a kappa score of 0.84. This compares to the original U-Net parameters, which achieved a kappa score of 0.73. PubDate: 2023-02-13 DOI: 10.1007/s41064-023-00233-3
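The kappa scores quoted above (0.84 vs. 0.73) are Cohen's kappa, i.e. classification agreement corrected for chance; a minimal sketch:

```python
import numpy as np

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from the marginal
    class frequencies."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    p_o = np.mean(y_true == y_pred)
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return (p_o - p_e) / (1.0 - p_e)
```

Unlike overall accuracy, kappa discounts the agreement a classifier would achieve by guessing according to class frequencies, which matters for imbalanced LULC maps.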
- Report
PubDate: 2023-02-06 DOI: 10.1007/s41064-023-00234-2
- An Efficient U-Net Model for Improved Landslide Detection from Satellite Images
Abstract: Landslides are a dangerous hazard that can have devastating results. Thus, detecting landslides from satellite images can be significant for various governing authorities. In the past, different deep-learning models have produced remarkable results for landslide detection. Here, an enhanced U-Net model is suggested for detecting landslides in the newly introduced open-source Bijie landslide data set. The satellite images of the data set are obtained from TripleSat with a spatial resolution of 0.8 m. Further, the proposed study uses ResNet-50, ResNet-101, VGG-19, and DenseNet-121 as backbone models. The model is evaluated qualitatively, and five metrics, i.e. precision, recall, F1-score, Matthews correlation coefficient (MCC), and overall accuracy (OA), are computed for quantitative evaluation. The obtained results of each model are compared with earlier studies to demonstrate the potential and novelty of the research work. The performance of U-Net + ResNet-50 is found to be the best in terms of precision (0.98), F1-score (0.98), and OA (1.0). PubDate: 2023-01-26 DOI: 10.1007/s41064-023-00232-4
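The five scores listed above all follow directly from the binary confusion matrix; a reference sketch:

```python
import numpy as np

def detection_scores(tp, fp, fn, tn):
    """Precision, recall, F1, Matthews correlation coefficient (MCC)
    and overall accuracy (OA) from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    oa = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, mcc, oa
```

MCC is the most conservative of the five: it stays low unless all four confusion-matrix cells are favourable, which is why it is often reported alongside OA for imbalanced detection tasks like landslide mapping.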
- Monocular In-flight Measurement of Airfoil Deflections
Abstract: The knowledge of the actual shape of an aeroplane’s wing in midair is crucial to perform realistic flow simulations. Based on these analyses, the shape of a wing can be optimized by constructive measures and by the selective emptying of the fuel tanks installed inside the wings. As a result, the fuel consumption is reduced and fewer emissions occur. Furthermore, monitoring wing deflections allows for conclusions about the mechanical load and thus the service limit of an airfoil. To determine the wing deflection, we present the concept of a deployed measuring system consisting of measuring marks attached to the wing’s surface and a single camera. A basic model for the bending of a wing is explained and utilized, which assumes the preservation of arc lengths on the wing’s upper surface during bending. The measuring system was successfully applied during several long-distance flights with wide-body aircraft. The design of the measurement system, its setup and calibration, as well as obtained results are presented and discussed. PubDate: 2023-01-20 DOI: 10.1007/s41064-022-00230-y
- Application of UAS-Based Remote Sensing in Estimating Winter Wheat Phenotypic Traits and Yield During the Growing Season
Abstract: Phenotyping approaches have been considered as a vital component in crop breeding programs to improve crops and develop new high-yielding cultivars. However, traditional field-based monitoring methods are expensive, invasive, and time-intensive. Moreover, data collected using satellite and airborne platforms are either costly or limited by their spatial and temporal resolution. Here, we investigated whether low-cost unmanned/unoccupied aerial systems (UASs) data can be used to estimate winter wheat (Triticum aestivum L.) nitrogen (N) content, structural traits including plant height, fresh and dry biomass, and leaf area index (LAI) as well as yield during different winter wheat growing stages. To achieve this objective, UAS-based red–green–blue (RGB) and multispectral data were collected from winter wheat experimental plots during the winter wheat growing season. In addition, for each UAS flight mission, winter wheat traits and total yield (only at harvest) were measured through field sampling for model development and validation. We then used a set of vegetation indices (VIs), machine learning algorithms (MLAs), and structure-from-motion (SfM) to estimate winter wheat traits and yield. We found that using linear regression and MLAs, instead of using VIs, improved the capability of UAS-derived data in estimating winter wheat traits and yield. Further, considering the costly and time-intensive process of collecting in-situ data for developing MLAs, using SfM-derived elevation models and red-edge-based VIs, such as CIre and NDRE, are reliable alternatives for estimating key winter wheat traits. Our findings can potentially aid breeders through providing rapid and non-destructive proxies of winter wheat phenotypic traits. PubDate: 2023-01-20 DOI: 10.1007/s41064-022-00229-5
- Calibration and Validation from Ground to Airborne and Satellite Level: Joint Application of Time-Synchronous Field Spectroscopy, Drone, Aircraft and Sentinel-2 Imaging
Abstract: Non-invasive investigation of surfaces from drones and manned aircraft used as camera platforms is a well-established remote-sensing practice. However, cross-comparison of multispectral reflectance from different camera systems across different platforms, locations, and times can be challenging. We investigate reflectance retrieved from Sentinel-2 and two airborne camera systems with respect to the mobile, radiometrically calibrated, two-channel hemispherical-conical field-spectrometer system RoX. In combination with a nine-panel grey scale, this spectrometer system serves as ground reference and transfer instrument. In the first step, the ground reference was validated against Sentinel-2 reflectance, including atmospheric compensation. Our results suggest significant differences in the uncorrected reflectance from the two airborne sensors with respect to instantaneous calibration across 22 mixed targets. In the second step, those differences were reduced to a median discrepancy below 10% using the proposed in-field empirical line correction method (ELC). Continuous irradiance correction further improved the agreement across the validation targets and achieved a coherent reflectance dataset from all four sensor systems, from the satellite level to the ground and airborne level, within the limitations of the instruments and in-field handling. NDVI maps created from drone and manned aircraft achieved agreements of around 89% and 95% with the satellite after calibration and correction. We consider in-field calibration with additional continuous down-welling radiance correction to be promising for supporting the fusion of information across the four sensors and platforms. Thus, field-spectrometer systems serve as transfer instruments and bridge the information gap from the satellite down to the ground and airborne scale in future airborne mapping and classification efforts. PubDate: 2023-01-17 DOI: 10.1007/s41064-022-00231-x
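The empirical line correction (ELC) mentioned above fits a per-band gain and offset between raw sensor values and the known reflectances of the reference panels. A one-band least-squares sketch (variable names are assumptions for illustration):

```python
import numpy as np

def empirical_line(sensor_vals, panel_refl):
    """Fit gain/offset from grey-scale panel measurements by least
    squares; return a function mapping raw sensor values to reflectance."""
    gain, offset = np.polyfit(sensor_vals, panel_refl, deg=1)
    return lambda x: gain * np.asarray(x, dtype=float) + offset
```

With a nine-panel grey scale as in the study above, the fit is well over-determined per band, so outliers in individual panel readings are averaged out.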
- UAV LiDAR Metrics for Monitoring Crop Height, Biomass and Nitrogen Uptake: A Case Study on a Winter Wheat Field Trial
Abstract: Efficient monitoring of crop traits such as biomass and nitrogen uptake is essential for an optimal application of nitrogen fertilisers. However, currently available remote sensing approaches suffer from technical shortcomings, such as poor area efficiency, long postprocessing requirements and the inability to capture ground and canopy from a single acquisition. To overcome such shortcomings, LiDAR scanners mounted on unmanned aerial vehicles (UAV LiDAR) represent a promising sensor technology. To test the potential of this technology for crop monitoring, we used a RIEGL Mini-VUX-1 LiDAR scanner mounted on a DJI Matrice 600 pro UAV to acquire a point cloud from a winter wheat field trial. To analyse the UAV-derived LiDAR point cloud, we adopted LiDAR metrics, widely used for monitoring forests based on LiDAR data acquisition approaches. Of the 57 investigated UAV LiDAR metrics, the 95th percentile of the height of normalised LiDAR points was strongly correlated with manually measured crop heights (R2 = 0.88) and with crop heights derived by monitoring using a UAV system with optical imaging (R2 = 0.92). In addition, we applied existing models that employ crop height to approximate dry biomass (DBM) and nitrogen uptake. Analysis of 18 destructively sampled areas further demonstrated the high potential of the UAV LiDAR metrics for estimating crop traits. We found that the bincentile 60 and the 90th percentile of the reflectance best revealed the relevant characteristics of the vertical structure of the winter wheat plants to be used as proxies for nitrogen uptake and DBM. We conclude that UAV LiDAR metrics provide relevant characteristics not only of the vertical structure of winter wheat plants, but also of crops in general and are, therefore, promising proxies for monitoring crop traits, with potential use in the context of Precision Agriculture. PubDate: 2022-12-14 DOI: 10.1007/s41064-022-00228-6
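The strongest metric above, the 95th percentile of ground-normalised point heights, is straightforward to compute; a sketch with assumed inputs (a flat ground elevation per plot is a simplification made here):

```python
import numpy as np

def canopy_height_p95(point_z, ground_z, q=95):
    """q-th percentile of LiDAR point heights after subtracting the
    local ground elevation; used above as a crop-height proxy
    (R2 = 0.88 against manual measurements)."""
    normalised = np.asarray(point_z, dtype=float) - ground_z
    return np.percentile(normalised, q)
```

Using a high percentile rather than the maximum makes the metric robust to spurious high returns (e.g. insects or noise) while still tracking the top of the canopy.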
- The Influence of Noise Intensity in the Nonlinear Spectral Unmixing of Hyperspectral Data
Abstract: Noise, an unwanted and troublesome component, was investigated in this study. It is generally present to some degree in remote sensing data because of device errors and natural effects, so its correct estimation leads to better analysis. This paper aims to examine the effect of noise on selecting the spectral mixing model. A set of synthetic data was first designed based on one linear and five nonlinear models. Then, noise was added to the data at different signal-to-noise ratio (SNR) levels. After designing the models, the noise intensity was estimated using two noise estimation methods (a multiple linear regression (MLR) based method and L1HyMixDe), assuming that each synthetic dataset followed the linear model. A comparison was made between the noise values obtained from the linear model and each of the nonlinear models using one-way Analysis of Variance (ANOVA) and Wilcoxon statistical tests. Based on the significant differences between the noise values of linear and nonlinear data at different SNR levels, an SNR limit was determined for each model; below this value, the noise overwhelms the nonlinear portion of the data. As a result, the Polynomial Post Nonlinear Mixing Model (PPNMM) shows the best performance in the nonlinear unmixing of data in the presence of noise. The approach was also tested on real Hyperion data, and the obtained results agreed with our assessments. PubDate: 2022-11-25 DOI: 10.1007/s41064-022-00223-x
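Adding noise at a prescribed SNR, as in the synthetic experiments above, can be sketched as follows. White Gaussian noise is an assumption made here for illustration; the paper's noise model may differ:

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, seed=0):
    """Add zero-mean Gaussian noise scaled so that
    10*log10(P_signal / P_noise) equals snr_db."""
    signal = np.asarray(signal, dtype=float)
    rng = np.random.default_rng(seed)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10.0 ** (snr_db / 10.0)
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
```

At 20 dB SNR, the noise power is 1% of the signal power; halving the SNR in dB terms rapidly inflates the noise until, as the study finds, it masks the nonlinear mixing contribution.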
- Geometric Feedback System for Robotic Spraying
Abstract: In this paper, we tackle the task of replacing labor intensive and repetitive manual inspection of sprayed concrete elements with a sensor-based and automated alternative. We present a geometric feedback system that is integrated within a robotic setup and includes a set of depth cameras used for acquiring data on sprayed concrete structures, during and after fabrication. The acquired data are analyzed in terms of thickness and surface quality, with both sets of information then used within the adaptive fabrication process. The thickness evaluation is based on the comparison of the as-built state to a previous as-built state or to the design model. The surface quality evaluation is based on the local analysis of 3D geometric and intensity features. These features are used by a random forest classifier trained using data manually labelled by a skilled professional. With this approach, we are able to achieve a prediction accuracy of 87 % or better when distinguishing different surface quality types on flat specimens, and 75 % when applied in a full production setting with wet and non-planar surfaces. The presented approach is a contribution towards in-line material thickness and surface quality inspection within digital fabrication. PubDate: 2022-10-18 DOI: 10.1007/s41064-022-00219-7
- Publisher Correction: Self-Calibration and Crosshair Tracking with Modular Digital Imaging Total Station
PubDate: 2022-10-10 DOI: 10.1007/s41064-022-00222-y