Abstract: Recently, statistical distributions have been explored to provide estimates of the mineralogical diversity of Earth and of Earth-like planets. In this paper, a Bayesian approach is introduced to estimate Earth’s undiscovered mineralogical diversity. Samples are generated from a posterior distribution of the model parameters using Markov chain Monte Carlo simulations, so that estimates and inferences are obtained directly. It was previously shown that the mineral species frequency distribution conforms to a generalized inverse Gauss–Poisson (GIGP) large-number-of-rare-events model. Even though the model fit was good, the population size estimate obtained with this model was judged unreasonably low by mineralogists. In this paper, several zero-truncated, mixed Poisson distributions are fitted and compared, and the Poisson-lognormal distribution is found to provide the best fit. Subsequently, the population size estimates obtained by Bayesian methods are compared to the empirical Bayes estimates. Species accumulation curves are constructed and employed to estimate the population size as a function of sampling size. Finally, the relative abundances, and hence the occurrence probabilities of species in a random sample, are calculated numerically for all mineral species in Earth’s crust using the Poisson-lognormal distribution. These calculations are connected and compared to those obtained in a previous paper using the GIGP model, for which mineralogical criteria of an Earth-like planet were given. PubDate: 2019-03-18 DOI: 10.1007/s11004-019-09795-8
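The zero-truncated fitting step can be sketched in a few lines. This is a hedged illustration with made-up frequency-of-frequencies data, and it uses a plain zero-truncated Poisson rather than the Poisson-lognormal mixture for brevity; the numbers and the estimator are not the paper's.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Hypothetical frequency-of-frequencies data: counts[k-1] = number of
# mineral species observed exactly k times in the sample.
counts = np.array([100, 40, 20, 10, 5])
ks = np.arange(1, len(counts) + 1)

def neg_loglik(lam):
    # Zero-truncated Poisson log-pmf: log P(K = k | K > 0)
    logp = poisson.logpmf(ks, lam) - np.log1p(-np.exp(-lam))
    return -np.sum(counts * logp)

lam = minimize_scalar(neg_loglik, bounds=(1e-6, 10.0), method="bounded").x

# Population size estimate: observed species divided by the estimated
# probability of a species being observed at least once.
n_obs = counts.sum()
N_hat = n_obs / (1.0 - np.exp(-lam))
```

The same scheme carries over to mixed Poisson models by replacing the pmf with the mixture's marginal pmf.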

Abstract: Customary geostatistical modeling practice assumes that inter-point distances follow a Euclidean metric (i.e., as the crow flies) when characterizing spatial variation. There are many real-world settings, however, in which a non-Euclidean distance is more appropriate, for example, in complex bodies of water. Yet if such a distance is used with current semivariogram functions, the resulting spatial covariance matrices are no longer guaranteed to be positive-definite. Previous attempts to address this issue for geostatistical prediction (i.e., kriging) transform the non-Euclidean space into a Euclidean one, for instance through multi-dimensional scaling (MDS); however, these attempts estimate spatial covariances only after distances are scaled. An alternative method is proposed that re-estimates a spatial covariance structure originally based on a non-Euclidean distance metric to ensure validity. This method is compared to the standard use of Euclidean distance, as well as to a previously utilized MDS method. All methods are evaluated using cross-validation on both simulated and real-world experiments. Results show a high level of bias in prediction variance for the previously developed MDS method that has not been highlighted before. Conversely, the proposed method offers a better tradeoff between prediction accuracy and prediction variance and at times outperforms the existing methods on both sets of metrics. Overall, the results indicate that the proposed method can provide improved geostatistical predictions while ensuring valid results when the use of non-Euclidean distances is warranted. PubDate: 2019-03-14 DOI: 10.1007/s11004-019-09791-y
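The MDS step that the abstract refers to can be sketched with classical (Torgerson) scaling: a distance matrix is double-centered and eigendecomposed to obtain Euclidean coordinates. The 3-point distance matrix below is a toy stand-in for in-water distances, not data from the paper.

```python
import numpy as np

# Classical MDS: embed a (possibly non-Euclidean) distance matrix into
# Euclidean coordinates so standard semivariogram models remain valid.
D = np.array([[0.0, 2.0, 5.0],
              [2.0, 0.0, 4.0],
              [5.0, 4.0, 0.0]])   # symmetric, zero diagonal

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
w, V = np.linalg.eigh(B)
order = np.argsort(w)[::-1]              # sort eigenvalues descending
w, V = w[order], V[:, order]
k = int(np.sum(w > 1e-10))               # keep positive eigenvalues only
X = V[:, :k] * np.sqrt(w[:k])            # embedded coordinates

# Pairwise Euclidean distances in the embedding approximate D
D_hat = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

Discarding negative eigenvalues is exactly where a genuinely non-Euclidean metric gets distorted, which is why covariances estimated before scaling can become biased.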

Abstract: Identifying stratigraphic units within a sedimentary succession is of prime importance for reservoir studies, because it allows splitting the reservoir into several units with specific parameters, thus reducing the vertical nonstationarity in simulations. A new method is proposed for semi-automatic determination of sedimentary units from well logging, using a customized geostatistical hierarchical clustering algorithm. A new linkage criterion derived from the Ward criterion (cluster minimum variance) is proposed to enforce the monotonic increase of dissimilarities. The discretized proportion of sand lithofacies calculated from the vertical proportion curve of the well is taken as input data. At each step of the procedure, the algorithm merges the two most similar consecutive units of sand lithofacies, ensuring stratigraphic consistency. Finally, the number of units is deduced from the first major jump in dissimilarity. The user can investigate a larger number of units by considering the clusters with lower levels of dissimilarity. The method is validated using two synthetic cases built for a fluvial meandering reservoir analog, containing three and five units, respectively. The results from the synthetic cases show that the units are identified when the sand proportion contrast between units is larger than the internal variability within the units. For low sand contrasts between units or for a small number of wells, sedimentary unit limits may be found at lower clustering dissimilarities. Finally, the method is successfully applied to a field study, where the resulting cluster units are found to be comparable to the field interpretation, suggesting a limit between units defined by paleosols rather than by close overlying lacustrine levels. PubDate: 2019-03-14 DOI: 10.1007/s11004-019-09793-w
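The constrained-merging idea can be illustrated with a minimal sketch: only vertically adjacent units may merge, with a Ward-like cost, and the number of units is read off the largest jump in merge dissimilarity. The sand-proportion log below is invented, and this toy cost is not the paper's customized linkage criterion.

```python
import numpy as np

# Stratigraphically constrained agglomerative clustering: only adjacent
# units may merge, preserving stratigraphic order.
sand = np.array([0.8, 0.75, 0.7, 0.2, 0.25, 0.3, 0.9, 0.85])
units = [[v] for v in sand]   # start with one unit per sample
history = []                  # merge dissimilarity at each step

def sse(x):
    x = np.asarray(x)
    return float(np.sum((x - x.mean()) ** 2))

def merge_cost(a, b):
    # Increase in within-cluster sum of squares if a and b merge (Ward-like)
    return sse(a + b) - sse(a) - sse(b)

while len(units) > 1:
    costs = [merge_cost(units[i], units[i + 1]) for i in range(len(units) - 1)]
    i = int(np.argmin(costs))
    history.append(costs[i])
    units[i] = units[i] + units.pop(i + 1)

# Number of clusters just before the costliest merge: the "major jump"
n_units = len(sand) - int(np.argmax(history))
```

On this toy log the three high/low/high sand packages are recovered because the contrast between packages exceeds the variability within them, mirroring the validation result quoted above.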

Abstract: Soil screening levels (SSLs) are reference threshold values required by environmental laws, established from soil geochemical background data gathered over often-extensive sampling areas. Such areas are frequently inappropriate for interpreting the true risk of pollution in small areas, since they overlook local factors (e.g., geology, industry, and traffic) that are infeasible to encompass in large-scale samplings. To solve this issue, the calculation of local SSLs is proposed herein, performed at a larger scale closer to the area of interest. To exemplify this proposal, a soil sampling campaign was performed in the Municipality of Langreo, one of the most industrialized areas in the Principality of Asturias (northwestern Spain). Sampling allowed the measurement of local soil screening levels for several inorganic contaminants. Afterwards, a soil pollution index, referred to both regional and local thresholds, was calculated to assess the degree of contamination. Both pollution indicators were subjected to a methodology based on Bayesian network analysis, followed by a stochastic sequential Gaussian simulation approach. The methodologies used showed differences in the identification of potentially polluted areas depending on the soil screening levels (regional or local) used. It was concluded that, in urban/industrial cores, local soil screening levels facilitate the identification of polluted areas and also reduce the uncertainty associated with sampling density and diffuse contamination. Thus, use of local levels circumvents false-positive areas that would be classified as polluted were regional soil screening levels to be used. PubDate: 2019-03-13 DOI: 10.1007/s11004-019-09792-x

Abstract: The development of deep geothermal energy may be economically at risk if the estimated deep thermal field is far from its real state. The strong heterogeneity of geological units makes it challenging to estimate the deep thermal field reliably over a large region. Additionally, the thermal properties of rocks, such as thermal conductivity and radiogenic element concentration, whether obtained from laboratory measurements or inverted from well logs, may strongly control the deep thermal field at a local scale. In this paper, the thermal conductivities of rocks from the Trois-Rivières region in the Saint Lawrence Lowlands sedimentary basin in eastern Canada are obtained by two methods: (i) direct experimental measurement and (ii) indirect inversion using well logs, including gamma ray, neutron porosity, density, and photoelectric absorption factor. The spatial distribution of subsurface temperature in the study area is numerically investigated with the Underworld simulator, considering four case studies that use different values (minimum, average, and maximum) of the thermal properties. The results show that thermal properties play a large role in controlling the subsurface temperature distribution and heat flux. The temperature difference in the basement can reach 15 °C, caused by differences in thermal properties across the Trois-Rivières region. The highest heat flux is found in the Trenton–Black River–Chazy groups, and the lowest in the Potsdam group, which also has the highest thermal conductivity. Vertical heat flux does not change linearly with depth but is closely related to the thermal properties of specific geological formations; furthermore, it does not correlate positively with vertical temperature changes.
This demonstrates that assessing deep geothermal potential merely from surface heat flux may greatly overestimate or underestimate the geothermal capacity. Construction of thermal models based on thermal properties integrated from both experimental measurements and well logs, as done in this paper, is useful in reducing the exploration risk associated with the utilization of deep geothermal energy. PubDate: 2019-03-11 DOI: 10.1007/s11004-019-09790-z
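The control of layer conductivity and radiogenic heat production on the geotherm can be shown with the standard 1-D steady-state conduction solution, T(z) = T_top + q z/k − A z²/(2k) with q decreasing by A·Δz across each layer. The two-layer column below uses hypothetical values, not the Trois-Rivières data.

```python
# 1-D steady-state conductive geotherm through a layered crust.
# Each tuple: (thickness m, conductivity W/m/K, heat production W/m^3);
# all values illustrative.
layers = [
    (2000.0, 2.5, 1.0e-6),
    (3000.0, 3.2, 0.5e-6),
]
T = 10.0    # surface temperature, deg C
q = 0.060   # surface heat flux, W/m^2

for dz, k, A in layers:
    # Temperature at the base of the layer: conduction + radiogenic term
    T += q * dz / k - A * dz ** 2 / (2.0 * k)
    # Heat flux decreases downward by the heat produced within the layer
    q -= A * dz
```

Swapping in the minimum versus maximum conductivities of a layer shifts the basement temperature by tens of degrees, which is the sensitivity the case studies above explore.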

Abstract: In this paper, an implicit structural modeling method using locally defined moving least squares shape functions is proposed. A continuous bending energy is minimized to interpolate between data points and approximate geological structures. The method solves a sparse problem without relying on a complex mesh. Discontinuities such as faults and unconformities are handled with minor modifications of the method, using meshless optic principles. The method is illustrated on a two-dimensional model with folds, faults and an unconformity. This model is then modified to show the ability of the method to handle sparsity, noise and different reliabilities in the data. Key parameters of the shape functions and the pertinence of the bending energy for structural modeling applications are discussed. The parameter values deduced from these studies can also be used to construct other models. PubDate: 2019-03-07 DOI: 10.1007/s11004-019-09789-6
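A minimal 1-D moving least squares evaluation shows the building block behind locally defined MLS shape functions: at each evaluation point, a low-order polynomial is fitted to nearby data under a compactly decaying weight. Data, bandwidth, and the Gaussian weight are illustrative choices, not the paper's.

```python
import numpy as np

# 1-D moving least squares with a linear basis and Gaussian weight.
x_data = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
f_data = np.array([0.0, 0.8, 0.9, 0.1, -0.8])

def mls_eval(x, h=1.0):
    w = np.exp(-((x - x_data) / h) ** 2)             # local weights
    P = np.vstack([np.ones_like(x_data), x_data]).T  # linear basis [1, x]
    A = P.T @ (w[:, None] * P)                       # weighted normal equations
    b = P.T @ (w * f_data)
    c = np.linalg.solve(A, b)
    return c[0] + c[1] * x                           # local fit evaluated at x
```

Because the fit is recomputed at every evaluation point, the resulting interpolant is smooth without any mesh, which is the property the method above exploits.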

Abstract: To speed up the development and utilization of hydrothermal energy, it is essential to assess the potential of geothermal resources in petroliferous basins. In this paper, the distribution of reservoirs (aquifers) and the characteristics of geothermal fields are studied systematically based on geological, geophysical, well drilling, temperature, and sample test data obtained from the major petroliferous basins of China. It is found that some of the porous sandstone formations in these basins are major geothermal reservoirs, extensively thick and widely distributed. In general, the geothermal gradient in China is higher in the eastern basins and lower in the western basins; on average, it is above 30 °C/km in the Bohai Bay, Songliao, and Subei basins. Geothermal resource abundance is also higher in eastern China and in the Beibuwan basin in southern China, where geothermal source-forming conditions are better, followed by the Ordos, Qaidam, and Sichuan basins in central China. Other potential basins include the Tarim and Junggar basins in western China, where the geothermal gradient ranges between 21 and 22 °C/km on average. In this paper, three methods (stochastic simulation, unit volumetric, and analogy) were used for the assessment of geothermal resources. Using the stochastic simulation and unit volumetric methods, the geothermal resources, annual recoverable geothermal resources, geothermal water resources, and thermal energy of water in 11 basins or blocks down to 4000 m depth were calculated. Grading evaluation criteria were established by considering the heterogeneity of geothermal reservoirs. The results showed that the petroliferous basins are very rich in geothermal resources.
The annual recoverable resources reach 1626.8 × 10⁶ tons of standard coal, of which grade I, grade II, and grade III resources are 641.9 × 10⁶, 298.6 × 10⁶, and 686.3 × 10⁶ tons of standard coal, respectively. The results demonstrate that the development and utilization of geothermal energy in oilfields has huge potential for industrial production and household use, and great significance for the development of green oilfields. Given the high demand for heat, the eastern oilfields with high geothermal resource abundance should be considered first for the production and utilization of geothermal energy, followed by the central and western oilfields. PubDate: 2019-03-06 DOI: 10.1007/s11004-019-09786-9

Abstract: Geothermal energy is a clean energy source that can potentially mitigate greenhouse gas emissions, as its use can lead to a lower mitigation cost. However, research on the economic impacts of the geothermal industry is scarce. This paper describes the economic input and output effects of the geothermal industry, using Beijing as a case study and adopting the input–output model. The results show that the demand for and input use of the geothermal sector vary greatly across industrial sectors: the electricity and heat production and supply industry and general equipment manufacturing have the greatest direct consumption coefficients for the geothermal industry. When direct and indirect demand are both considered, it is clear that the geothermal industry affects different industrial sectors in diverse ways. Its influence coefficient and sensitivity coefficient are 1.2167 (ranked 11th) and 1.2293 (ranked 8th), respectively, revealing that it exerts clear demand-pulling and supply-pushing effects on the regional economy. PubDate: 2019-02-25 DOI: 10.1007/s11004-019-09787-8
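The input–output mechanics behind coefficients like these can be sketched with the Leontief model, x = (I − A)⁻¹ d: total output x needed to satisfy final demand d given technical coefficients A. The 3-sector matrix below is hypothetical, not Beijing's actual table.

```python
import numpy as np

# Leontief input-output model with a toy 3-sector coefficient matrix.
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.1, 0.3, 0.2]])   # A[i, j] = input from sector i per unit output of j
d = np.array([100.0, 50.0, 80.0]) # final demand by sector

L = np.linalg.inv(np.eye(3) - A)  # Leontief inverse: direct + indirect requirements
x = L @ d                          # total output needed to meet final demand

# Column sums of L measure each sector's backward linkage, the basis of
# influence coefficients like the 1.2167 figure quoted above.
backward = L.sum(axis=0)
```

Normalizing `backward` by its mean yields the influence coefficient; the analogous row-sum normalization yields the sensitivity coefficient.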

Abstract: High-order sequential simulation methods have been developed as an alternative to existing frameworks to facilitate the modeling of the spatial complexity of non-Gaussian spatially distributed variables of interest. These high-order approaches address the modeling of the curvilinear features and spatial connectivity of extreme values that are common in mineral deposits, petroleum reservoirs, water aquifers, and other geological phenomena. This paper presents a new high-order simulation method that generates realizations directly at the block support scale, conditioned to the available data at point support scale. In the context of sequential high-order simulation, the method estimates, at each block location, the cross-support joint probability density function using Legendre-like splines as basis functions. The proposed method adds previously simulated blocks to the set of conditioning data, which initially contains the available data at point support scale. A spatial template, defined by the configuration of the block to be simulated and the related conditioning values at both support scales, is used to infer additional high-order statistics from a training image. Testing of the proposed method with an exhaustive dataset shows that simulated realizations reproduce the major structures and high-order relations of the data. The practical intricacies of the proposed method are demonstrated in an application at a gold deposit. PubDate: 2019-02-20 DOI: 10.1007/s11004-019-09784-x

Abstract: The shape of Earth’s surface topography is determined by numerous competing processes that act to either roughen or smoothen the surface. Hence, calculating topographic roughness is a useful technique for understanding the relative importance of these processes. This study analyzes the relative surface roughness of the Greenland Ice Sheet by calculating the fractal dimension of surface elevation isolines. It is shown that the fractal dimension of isolines decreases at higher elevations for nearly all the ice sheet catchments. However, the magnitude of fractality, which represents the relative complexity or roughness of the surface, is spatially variable. Catchments in the central-east of the ice sheet have the highest fractal dimension, and the north catchment has the lowest. Multi-fractality is observed at lower elevations for several catchments, including the southeast catchment, indicating that these catchments have variable dominant forcings at different length scales. Exploring the local variation of fractal dimensions shows that the majority of isolines with high fractal dimension are clustered in the central-east region and persist in contours up to 2500 m elevation. It is further shown that local fractal dimensions are related to surface elevation, bed elevation, and ice thickness, and that they are correlated with the ruggedness of basal topography (defined as the difference between the highest and lowest elevation in a window of \(3\times 3\) pixels on a 150 m grid). This analysis serves as a qualitative approach for investigating the processes that control the geometry of ice caps on other terrestrial planets. PubDate: 2019-02-18 DOI: 10.1007/s11004-019-09788-7
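The fractal dimension of an isoline can be estimated by box counting: cover the curve with grids of shrinking cell size s and fit the slope of log N(s) against log(1/s). The sketch below uses a straight line, whose dimension should come out near 1; a rough isoline would yield a value between 1 and 2. This is a generic estimator, not the study's exact procedure.

```python
import numpy as np

# Box-counting dimension of a sampled curve (here, a straight line).
t = np.linspace(0.0, 1.0, 20000)
pts = np.column_stack([t, 0.5 * t])   # densely sampled points on the curve

sizes = [1 / 4, 1 / 8, 1 / 16, 1 / 32, 1 / 64]
counts = []
for s in sizes:
    # Count distinct grid cells of side s that contain at least one point
    boxes = set(map(tuple, np.floor(pts / s).astype(int)))
    counts.append(len(boxes))

# Slope of log N(s) versus log(1/s) estimates the fractal dimension
slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
```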

Abstract: A fast upscaling procedure for determining the equivalent hydraulic conductivity of a three-dimensional fractured rock is presented in this paper. A modified semi-analytical superposition method is developed to account simultaneously for the hydraulic conductivity of the porous matrix (K_M) and of the fractures (K_F), as well as for the connectivity of the conductive fracture network. The upscaling approach is validated by comparison with the hydraulic conductivity of synthetic samples calculated with full numerical procedures (flow simulations and averaging). The extended superposition approach is in good agreement with numerical results for infinite-size fractures. For finite-size fractures, an improved model is proposed that accounts for the connectivity of the fracture network through multiplicative connectivity indexes determined empirically. This improved model is also in good agreement with the numerical results obtained for different configurations of fracture networks. PubDate: 2019-02-08 DOI: 10.1007/s11004-019-09785-w
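The superposition idea, in its simplest flow-parallel form, adds each through-going fracture's transmissivity contribution to the matrix conductivity. The sketch below assumes parallel, fully connected fractures with hypothetical apertures and conductivities; it omits the connectivity indexes that the improved model introduces for finite-size fractures.

```python
# Flow-parallel superposition sketch for a fractured block.
K_M = 1e-8        # matrix hydraulic conductivity, m/s (illustrative)
L = 10.0          # block size perpendicular to the fractures, m
fractures = [     # (aperture m, fracture conductivity m/s), illustrative
    (1e-3, 1e-2),
    (5e-4, 5e-3),
]

# Each through-going fracture adds b * K_F / L to the block conductivity
K_eq = K_M + sum(b * K_F / L for b, K_F in fractures)
```

For finite-size fractures, each term would be scaled down by an empirical connectivity index, as described above.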

Abstract: In the original version of this article, Figure 9 was unfortunately wrong due to a typesetting mistake. The original article has been corrected. PubDate: 2019-02-01 DOI: 10.1007/s11004-018-9767-5

Abstract: An accurate prediction of benefit in ore deposits with heterogeneous spatial variations requires the definition of geological domains that differentiate the types of mineralogy, alteration, and lithology, as well as the prediction of full mineral and geochemical compositions within each modeled domain and across boundaries between different domains. This paper proposes and compares various approaches (different combinations of log-ratio transformation, Gaussian and flow anamorphosis, and deterministic or probabilistic geological models) for geostatistical simulation of geochemical compositions in the presence of several geological domains. The approaches are illustrated through an application to a nickel–cobalt laterite deposit located in Western Australia. Four rock types (ferruginous, smectite, saprolite, and ultramafic) are considered to define compositionally homogeneous domains. Geochemical compositions comprise six components of interest (Fe, Al, Mg, Ni, Co, and Filler). The results suggest that flow anamorphosis is a vital element for geostatistical modeling of geochemical compositions, owing to its invariance properties and its capability for reproducing complex patterns in the input data, including outliers, multiple populations (arising from the several geological domains), nonlinearity, and heteroscedasticity. PubDate: 2019-02-01 DOI: 10.1007/s11004-018-9763-9
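The log-ratio step mentioned above can be illustrated with the centered log-ratio (clr) transform, a standard way to move compositional (parts-of-a-whole) data into an unconstrained space before Gaussian-based geostatistics. The six-part composition below is invented, not deposit data.

```python
import numpy as np

# Centered log-ratio (clr) transform of a six-part composition.
comp = np.array([0.55, 0.20, 0.10, 0.05, 0.03, 0.07])  # sums to 1

g = np.exp(np.mean(np.log(comp)))   # geometric mean of the parts
clr = np.log(comp / g)              # clr coordinates sum to zero by construction
```

Simulation is then carried out on the clr (or another log-ratio) coordinates, and results are back-transformed with the inverse (softmax-like) mapping so each simulated composition again sums to one.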

Abstract: Models used for reservoir prediction are subject to various types of uncertainty, and interpretational uncertainty is one of the most difficult to quantify, due to the subjective nature of creating different scenarios of the geology and the difficulty of propagating these scenarios into uncertainty quantification workflows. Non-uniqueness in geological interpretation often leads to different ways to define the model. Uncertainty in the model definition is related to the equations used to describe the modelled reality; it is therefore quite challenging to quantify uncertainty between different model definitions, because they may include completely different model parameters. This paper continues earlier work on capturing geological uncertainties in history matching and presents a workflow to handle uncertainty in the geological scenario (i.e. the conceptual geological model) and quantify its impact on reservoir forecasting and uncertainty quantification. The workflow is based on inferring uncertainty from multiple calibrated models, which are solutions of an inverse problem, using adaptive stochastic sampling and Bayesian inference. The inverse problem is solved by sampling a combined space of geological model parameters and a space of reservoir model descriptions, which represents uncertainty across different modelling concepts based on multiple geological interpretations. The workflow includes building a metric space for reservoir model descriptions using multi-dimensional scaling and classifying the metric space with support vector machines. The proposed workflow is applied to a synthetic reservoir model example, history matched to the known truth-case reservoir response. The reservoir model was designed using a multi-point statistics algorithm with multiple training images as alternative geological interpretations.
A comparison was made between predictions based on multiple reservoir descriptions and those of a single one, revealing improved performance in uncertainty quantification when using multiple training images. PubDate: 2019-02-01 DOI: 10.1007/s11004-018-9755-9

Abstract: Bayesian uncertainty quantification of reservoir prediction is a significant area of ongoing research, with the major effort focussed on estimating the likelihood. However, the prior definition, which is equally important in the Bayesian context and is related to the uncertainty in the reservoir model description, has received less attention. This paper discusses methods for incorporating the prior definition into assisted history-matching workflows and demonstrates the impact of non-geologically plausible prior definitions on the posterior inference. This is the first of two papers to deal with the importance of an appropriate prior definition of the model parameter space, and it covers the key issue in updating the geological model: how to preserve geological realism in models that are produced by a geostatistical algorithm rather than manually by a geologist. To preserve realism, geologically consistent priors need to be included in the history-matching workflows; the technical challenge therefore lies in defining the space of all possibilities according to the current state of knowledge. This paper describes several workflows for Bayesian uncertainty quantification that build realistic prior descriptions of geological parameters for history matching using support vector regression and support vector classification. In the examples presented, this approach is used to build a prior description of channel dimensions, which is then used to history-match the parameters of both fluvial and deep-water reservoir geostatistical models. This paper also demonstrates how to handle modelling approaches where geological parameters and geostatistical reservoir model parameters are not the same, such as measured channel dimensions versus affinity parameter ranges of a multi-point statistics model. This can be solved using a multilayer perceptron technique to move from one parameter space to another and maintain realism.
The overall workflow was implemented on three case studies, which refer to different depositional environments and geological modelling techniques, and demonstrated the ability to reduce the volume of parameter space, thereby increasing the history-matching efficiency and robustness of the quantified uncertainty. PubDate: 2019-02-01 DOI: 10.1007/s11004-018-9774-6

Abstract: During a conventional multiple-point statistics (MPS) simulation, the algorithm may not find a matched neighborhood in the training image for some unsimulated pixels. These pixels are referred to as dead-end pixels; their existence means that multiple-point statistics simulation is not a simple sequential simulation. In this paper, the multiple-point statistics simulation is cast as a combinatorial optimization problem, and an efficient backtracking algorithm is developed to solve it. The efficient backtracking consists of backtracking, forward checking, and conflict-directed backjumping algorithms, which are introduced and discussed in this paper. The algorithm is applied to simulate multiple-point statistics properties of some synthetic training images; the results show that no anomalies occur in any of the produced realizations, as opposed to previously published methods for handling dead-end pixels. In particular, in simulating a channel system, all the channels generated by this method are continuous, which is of paramount importance in fluid flow simulation applications. The results also show that the presence of hard data does not degrade the quality of the generated realizations. The presented method provides a robust algorithmic framework for performing MPS simulation. PubDate: 2019-02-01 DOI: 10.1007/s11004-018-9761-y
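The dead-end problem and the backtracking remedy can be shown on a 1-D toy: fill cells left to right so every adjacent pair of facies occurs in a "training" transition set, with a hard datum fixing the last cell. A greedy sequential pass (always choosing "sand") dead-ends at the hard datum; backtracking revises earlier cells instead of leaving an anomaly. The transition set and facies names are toy data, and this plain backtracking omits the forward checking and backjumping refinements of the paper.

```python
# Toy backtracking search for a 1-D facies sequence.
allowed = {("sand", "sand"), ("sand", "shale"), ("shale", "channel")}
facies = ["sand", "shale", "channel"]
n, hard_last = 6, "channel"   # hard datum: last cell must be "channel"

def solve(path=()):
    if len(path) == n:
        return list(path) if path[-1] == hard_last else None
    for f in facies:
        if not path or (path[-1], f) in allowed:
            result = solve(path + (f,))
            if result is not None:
                return result   # consistent completion found
    return None                 # dead end: caller backtracks and retries

seq = solve()
```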

Abstract: An important property of variational-based data assimilation is the ability to define a functional formulation such that the minimum of that functional can be any state that is desired. Thus, it is possible to define cost functions such that the minimum of the background error component is the mean, median or mode of a multivariate lognormal distribution, where, unlike for multivariate Gaussian distributions, these statistics are not equivalent. For lognormal distributions it is shown here that there are regions where each one of these three statistics is optimal at minimizing the errors, given estimates of an a priori state. As part of this work, a chaotic sensitivity to the first guess supplied to the Newton–Raphson solver was also detected, which affects the accuracy of the solution to several decimal places. PubDate: 2019-02-01 DOI: 10.1007/s11004-018-9765-7
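The inequivalence of the three statistics is easy to verify from the standard closed forms for a (univariate) lognormal with log-mean mu and log-standard-deviation sigma; the parameter values below are illustrative.

```python
import numpy as np

# Mode, median, and mean of a lognormal distribution: three distinct
# values whenever sigma > 0 (they coincide only in the Gaussian limit).
mu, sigma = 0.0, 0.8
mode = np.exp(mu - sigma ** 2)
median = np.exp(mu)
mean = np.exp(mu + sigma ** 2 / 2.0)
# mode < median < mean for any sigma > 0
```

A cost function whose background term is minimized at one of these three values therefore encodes a genuinely different analysis state, which is why the choice matters in the lognormal setting.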

Abstract: The Cascadia subduction zone fault lies just off the Pacific coast of the USA and Canada. Although this fault has been seismically inactive over the written history of the Cascadia region, it has the potential to produce catastrophic earthquakes and tsunamis. A variety of dating methods have been used to show that the most recent Cascadia earthquake occurred in 1700. Among these methods is an informal analysis of oral traditions handed down by Native American peoples that appear to refer to a major earthquake in this region. A central difficulty in analyzing these narratives quantitatively is their use of a generation and other qualitative measures of time that have no fixed lengths. Here, these narratives are analyzed under an explicit statistical model of the lengths of these measures. The results raise a question about the previous conclusion that these narratives all refer to the most recent Cascadia earthquake. PubDate: 2019-01-24 DOI: 10.1007/s11004-019-09783-y

Abstract: The task of optimal sampling for the statistical simulation of a discrete random field is addressed from the perspective of minimizing the posterior uncertainty of non-sensed positions given the information at the sensed positions. In particular, information-theoretic measures are adopted to formalize the problem of optimal sampling design for field characterization, introducing concepts such as the information of the measurements, the average posterior uncertainty, and the resolvability of the field. The use of entropy and related information measures is justified by connecting the task of simulation with a source coding problem, where it is well known that entropy offers a fundamental performance limit. On the application side, a one-dimensional Markov chain model is first explored, where the statistics of the random object are known; then the more relevant case of multiple-point simulation of channelized facies fields is studied, adopting a training image to infer the statistics of a non-parametric model. In both contexts, the superiority of information-driven sampling strategies over random or regular sampling is demonstrated in different settings and conditions. PubDate: 2019-01-04 DOI: 10.1007/s11004-018-09777-2
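The quantity being minimized can be sketched for the two-state Markov chain case: the entropy of the posterior over an unsensed site given an observed neighbor. An information-driven design would place sensors so that the expected remaining entropy is smallest. The transition matrix below is illustrative.

```python
import numpy as np

# Posterior entropy of the next site in a two-state Markov chain.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # P[i, j] = P(next = j | current = i)

def entropy(p):
    # Shannon entropy in bits, ignoring zero-probability outcomes
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Uncertainty about the next site when the current site is state 0
H_next_given_0 = entropy(P[0])
```

The more deterministic the transition row, the lower this entropy, so sites whose neighbors pin them down tightly are the least informative places to sense.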

Abstract: Multiple categorical variables such as mineralization zones, alteration zones, and lithology are often available for geostatistical modeling. Each categorical variable has a number of possible categorical outcomes. The current approach for numerical modeling of categorical variables is to either combine the categorical variables or to model them independently. The collapse of multiple categorical variables into a single variable with all combinations is impractical due to the large number of combinations. In some cases, lumping categorical variables is justified in terms of stationary domains; however, this decision is often due to the limitations of existing techniques. The independent modeling of each categorical variable will fail to reproduce the collocated joint categorical relationships. A methodology for the multivariate modeling of categorical variables utilizing the hierarchical truncated pluri-Gaussian approach is developed and illustrated with the Swiss Jura data set. The multivariate approach allows for improved reproduction of multivariate relationships between categorical variables. PubDate: 2019-01-02 DOI: 10.1007/s11004-018-09782-5
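The truncation rule at the heart of (pluri-)Gaussian categorical modeling can be sketched in a few lines: a continuous Gaussian value maps to a category via thresholds chosen from the target category proportions. The proportions below are illustrative, and this univariate sketch omits the hierarchical, multivariate structure of the proposed approach.

```python
import numpy as np
from scipy.stats import norm

# Truncating a standard Gaussian into three categories with target
# proportions 0.5 / 0.3 / 0.2.
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)          # stand-in for a Gaussian field

props = [0.5, 0.3, 0.2]                   # target category proportions
thresholds = norm.ppf(np.cumsum(props)[:-1])  # Gaussian quantile thresholds
cats = np.digitize(z, thresholds)         # category index 0, 1, or 2
observed = np.bincount(cats) / z.size     # should closely match props
```

In the full method, correlated Gaussian fields and a hierarchical truncation rule let the simulated categories reproduce the collocated joint relationships between several categorical variables.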