Abstract: In the late stages of terrestrial planet formation, pairwise collisions between planetary-sized bodies act as the fundamental agent of planet growth. These collisions can lead to either growth or disruption of the bodies involved and are largely responsible for shaping the final characteristics of the planets. Despite their critical role in planet formation, an accurate treatment of collisions has yet to be realized. While semi-analytic methods have been proposed, they remain limited to a narrow set of post-impact properties and have only achieved relatively low accuracies. However, the rise of machine learning and access to increased computing power have enabled novel data-driven approaches. In this work, we show that data-driven emulation techniques are capable of classifying and predicting the outcome of collisions with high accuracy and are generalizable to any quantifiable post-impact quantity. In particular, we focus on the dataset requirements, training pipeline, and classification and regression performance for four distinct data-driven techniques from machine learning (ensemble methods and neural networks) and uncertainty quantification (Gaussian processes and polynomial chaos expansion). We compare these methods to existing analytic and semi-analytic methods. Such data-driven emulators are poised to replace the methods currently used in N-body simulations, while avoiding the cost of direct simulation. This work is based on a new set of 14,856 SPH simulations of pairwise collisions between rotating, differentiated bodies at all possible mutual orientations. PubDate: 2020-12-02
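
The emulation idea above can be made concrete with a deliberately minimal sketch: given a table of simulated collisions, predict a post-impact quantity at a new parameter point. The k-nearest-neighbour regressor below is a toy stand-in for the ensemble, neural-network, Gaussian-process, and polynomial-chaos emulators the paper actually compares; the training set and parameter names are hypothetical.

```python
def knn_emulate(train_x, train_y, query, k=3):
    """Toy emulator: predict a post-impact quantity at `query` as the mean
    over the k nearest training simulations (squared Euclidean distance)."""
    ranked = sorted(range(len(train_x)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(train_x[i], query)))
    return sum(train_y[i] for i in ranked[:k]) / k

# Hypothetical training set: (impact velocity, impact angle) -> accreted mass fraction
sims_x = [(1.0, 0.0), (1.0, 45.0), (2.0, 0.0), (2.0, 45.0)]
sims_y = [0.95, 0.80, 0.60, 0.30]
prediction = knn_emulate(sims_x, sims_y, (1.0, 10.0), k=2)
```

The real emulators interpolate far more smoothly, but the interface is the same: impact parameters in, post-impact quantity out, at a cost of microseconds rather than a full SPH run.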

Abstract: Many important problems in astrophysics, space physics, and geophysics involve flows of (possibly ionized) gases in the vicinity of a spherical object, such as a star or planet. The geometry of such a system naturally favors numerical schemes based on a spherical mesh. Despite its orthogonality property, the polar (latitude-longitude) mesh is ill suited for computation because of the singularity on the polar axis, leading to a highly non-uniform distribution of zone sizes. The consequences are (a) loss of accuracy due to large variations in zone aspect ratios, and (b) poor computational efficiency from severe limitations on the time stepping. Geodesic meshes, based on a central projection using a Platonic solid as a template, solve the anisotropy problem, but increase the complexity of the resulting computer code. We describe a new finite volume implementation of Euler and MHD systems of equations on a triangular geodesic mesh (TGM) that is accurate up to fourth order in space and time and conserves the divergence of magnetic field to machine precision. The paper discusses in detail the generation of a TGM, the domain decomposition techniques, three-dimensional conservative reconstruction, and time stepping. PubDate: 2020-03-27
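
The TGM construction described above starts from a Platonic solid and refines it by central projection. A minimal, illustrative sketch (not the authors' implementation) using an icosahedron as the template:

```python
import math

def _normalize(v):
    """Project a point radially onto the unit sphere (central projection)."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def icosahedron():
    """Unit icosahedron: 12 vertices, 20 triangular faces."""
    t = (1.0 + math.sqrt(5.0)) / 2.0
    raw = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
           (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
           (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    verts = [_normalize(v) for v in raw]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return verts, faces

def subdivide(verts, faces):
    """One refinement level: split each triangle into four, projecting the
    new edge midpoints back onto the sphere.  Midpoints are cached so that
    shared edges are split only once."""
    verts = list(verts)
    cache = {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = tuple((a + b) / 2.0 for a, b in zip(verts[i], verts[j]))
            verts.append(_normalize(m))
            cache[key] = len(verts) - 1
        return cache[key]
    out = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, out
```

Each level multiplies the face count by four (20, 80, 320, ...), and, unlike a latitude-longitude mesh, the resulting zones stay nearly uniform in size with no polar singularity.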

Abstract: Deep generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), have been demonstrated to produce images of high visual quality. However, the existing hardware on which these models are trained severely limits the size of the images that can be generated. The rapid growth of high dimensional data in many fields of science therefore poses a significant challenge for generative models. In cosmology, the large-scale, three-dimensional matter distribution, modeled with N-body simulations, plays a crucial role in understanding the evolution of structures in the universe. As these simulations are computationally very expensive, GANs have recently generated interest as a possible method to emulate these datasets, but they have been, so far, mostly limited to two-dimensional data. In this work, we introduce a new benchmark for the generation of three-dimensional N-body simulations, in order to stimulate new ideas in the machine learning community and move closer to the practical use of generative models in cosmology. As a first benchmark result, we propose a scalable GAN approach for training a generator of N-body three-dimensional cubes. Our technique relies on two key building blocks, (i) splitting the generation of the high-dimensional data into smaller parts, and (ii) using a multi-scale approach that efficiently captures global image features that might otherwise be lost in the splitting process. We evaluate the performance of our model for the generation of N-body samples using various statistical measures commonly used in cosmology. Our results show that the proposed model produces samples of high visual quality, although the statistical analysis reveals that capturing rare features in the data poses significant problems for the generative models. We make the data, quality evaluation routines, and the proposed GAN architecture publicly available at https://github.com/nperraud/3DcosmoGAN. PubDate: 2019-12-19
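
The two building blocks can be shown schematically. The sketch below is illustrative only (pure Python rather than a deep-learning framework): (i) tiling a cube into sub-cubes for piecewise generation, and (ii) average-pooling a field down to the coarse global view used for multi-scale conditioning.

```python
def split_cube(n, patch):
    """Corner offsets of the non-overlapping sub-cubes of side `patch` that
    tile an n^3 volume -- building block (i): generate the volume piecewise."""
    assert n % patch == 0
    steps = range(0, n, patch)
    return [(i, j, k) for i in steps for j in steps for k in steps]

def downsample(field, factor):
    """Average-pool a cubic field (nested lists) by `factor` per axis --
    the coarse global view that building block (ii) conditions on, so the
    patch generator still sees large-scale structure."""
    n = len(field)
    m = n // factor
    out = [[[0.0] * m for _ in range(m)] for _ in range(m)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                out[i // factor][j // factor][k // factor] += (
                    field[i][j][k] / factor ** 3)
    return out
```

In the actual architecture each patch generator receives both a local noise vector and the downsampled neighbourhood, which is what prevents seams and lost global features at patch boundaries.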

Abstract: We present the construction of a novel time-domain signature extraction methodology and the development of a supporting supervised pattern detection algorithm. We focus on the targeted identification of eclipsing binaries that demonstrate a feature known as the O’Connell effect. Our proposed methodology maps stellar variable observations to a new representation known as distribution fields (DFs). Given this novel representation, we develop a metric learning technique directly on the DF space that is capable of specifically identifying our stars of interest. The metric is tuned on a set of labeled eclipsing binary data from the Kepler survey, targeting particular systems exhibiting the O’Connell effect. The result is a conservative selection of 124 potential targets of interest out of the Villanova Eclipsing Binary Catalog. Our framework demonstrates favorable performance on Kepler eclipsing binary data, taking a crucial step in preparing the way for large-scale data volumes from next-generation telescopes such as LSST and SKA. PubDate: 2019-11-08

Abstract: Most stars form as part of a stellar group. These young stars are typically surrounded by a disk from which a planetary system may eventually form. Both the disk and, later on, the planetary system may be affected by the cluster environment through close fly-bys. The database presented here can be used to determine the gravitational effect of such fly-bys on non-viscous disks and planetary systems. It contains data for fly-by scenarios spanning perturber-to-host mass ratios from 0.3 to 50.0, periastron distances from 30 au to 1000 au, orbital inclinations from 0∘ to 180∘, and angles of periastron of 0∘, 45∘ and 90∘, thus covering a wide parameter space relevant for fly-bys in stellar clusters. The data can either be downloaded to perform one’s own diagnostics, such as determining disk sizes and masses after specific encounters or obtaining parameter dependencies, or the different particle properties can be visualized interactively. Currently the database is restricted to fly-bys on parabolic orbits, but it will be extended to hyperbolic orbits in the future. All of the data from this extensive parameter study is now publicly available as DESTINY. PubDate: 2019-09-09

Abstract: We present the full public release of all data from the TNG100 and TNG300 simulations of the IllustrisTNG project. IllustrisTNG is a suite of large volume, cosmological, gravo-magnetohydrodynamical simulations run with the moving-mesh code Arepo. TNG includes a comprehensive model for galaxy formation physics, and each TNG simulation self-consistently solves for the coupled evolution of dark matter, cosmic gas, luminous stars, and supermassive black holes from early times to the present day, \(z=0\). Each of the flagship runs—TNG50, TNG100, and TNG300—is accompanied by halo/subhalo catalogs, merger trees, lower-resolution and dark-matter only counterparts, all available with 100 snapshots. We discuss scientific and numerical cautions and caveats relevant when using TNG. The data volume now directly accessible online is ∼750 TB, including 1200 full volume snapshots and ∼80,000 high time-resolution subbox snapshots. This will increase to ∼1.1 PB with the future release of TNG50. Data access and analysis examples are available in IDL, Python, and Matlab. We describe improvements and new functionality in the web-based API, including on-demand visualization and analysis of galaxies and halos, exploratory plotting of scaling relations and other relationships between galactic and halo properties, and a new JupyterLab interface. This provides an online, browser-based, near-native data analysis platform enabling user computation with local access to TNG data, alleviating the need to download large datasets. PubDate: 2019-05-14

Abstract: Inferring model parameters from experimental data is a grand challenge in many sciences, including cosmology. This often relies critically on high fidelity numerical simulations, which are prohibitively computationally expensive. The application of deep learning techniques to generative modeling is renewing interest in using high dimensional density estimators as computationally inexpensive emulators of fully-fledged simulations. These generative models have the potential to make a dramatic shift in the field of scientific simulations, but for that shift to happen we need to study the performance of such generators in the precision regime needed for science applications. To this end, in this work we apply Generative Adversarial Networks to the problem of generating weak lensing convergence maps. We show that our generator network produces maps that are described by, with high statistical confidence, the same summary statistics as the fully simulated maps. PubDate: 2019-05-06

Abstract: The “gravitational million-body problem,” to model the dynamical evolution of a self-gravitating, collisional N-body system with \(\sim 10^{6}\) particles over many relaxation times, remains a major challenge in computational astrophysics. Unfortunately, current techniques to model such systems suffer from severe limitations. A direct N-body simulation with more than \(10^{5}\) particles can require months or even years to complete, while an orbit-sampling Monte Carlo approach cannot adequately model the dynamics in a dense cluster core, particularly in the presence of many black holes. We have developed a new technique combining the precision of a direct N-body integration with the speed of a Monte Carlo approach. Our Rapid And Precisely Integrated Dynamics code, the RAPID code, statistically models interactions between neighboring stars and stellar binaries while integrating directly the orbits of stars or black holes in the cluster core. This allows us to accurately simulate the dynamics of the black holes in a realistic globular cluster environment without the burdensome \(N^{2}\) scaling of a full N-body integration. We compare RAPID models of idealized globular clusters to identical models from the direct N-body and Monte Carlo methods. Our tests show that RAPID can reproduce the half-mass radii, core radii, black hole ejection rates, and binary properties of the direct N-body models far more accurately than a standard Monte Carlo integration while remaining significantly faster than a full N-body integration. With this technique, it will be possible to create more realistic models of Milky Way globular clusters with sufficient rapidity to explore the full parameter space of dense stellar clusters. PubDate: 2018-11-28

Abstract: Dark matter in the universe evolves through gravity to form a complex network of halos, filaments, sheets, and voids known as the cosmic web. Computational models of the underlying physical processes, such as classical N-body simulations, are extremely resource intensive, as they track the action of gravity in an expanding universe using billions of particles as tracers of the cosmic matter distribution. Therefore, upcoming cosmology experiments will face a computational bottleneck that may limit the exploitation of their full scientific potential. To address this challenge, we demonstrate the application of a machine learning technique called Generative Adversarial Networks (GAN) to learn models that can efficiently generate new, physically realistic realizations of the cosmic web. Our training set is a small, representative sample of 2D image snapshots from N-body simulations of size 500 and 100 Mpc. We show that the GAN-generated samples are qualitatively and quantitatively very similar to the originals. For the larger boxes of size 500 Mpc, it is very difficult to distinguish them visually. The agreement of the power spectrum \(P_{k}\) is 1–2% for most of the range, between \(k=0.06\) and \(k=0.4\). For the remaining values of k, the agreement is within 15%, with the error rate increasing for \(k>0.8\). For smaller boxes of size 100 Mpc, we find the visual agreement to be good, but some differences are noticeable. The error on the power spectrum is of the order of 20%. We attribute this loss of performance to the fact that the matter distribution in 100 Mpc cutouts was very inhomogeneous between images, a situation in which the performance of GANs is known to deteriorate. We find a good match for the correlation matrix over the full \(P_{k}\) range for the 100 Mpc data and on small scales for the 500 Mpc data, with ∼20% disagreement on large scales.
An important advantage of generating cosmic web realizations with a GAN is the considerable gains in terms of computation time. Each new sample generated by a GAN takes a fraction of a second, compared to the many hours needed by traditional N-body techniques. We anticipate that the use of generative models such as GANs will therefore play an important role in providing extremely fast and precise simulations of cosmic web in the era of large cosmological surveys, such as Euclid and Large Synoptic Survey Telescope (LSST). PubDate: 2018-11-23
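
The power-spectrum comparison behind the percentages quoted above can be sketched in one dimension. The toy code below (a stand-in for the 2D \(P_{k}\) analysis, using a direct DFT instead of an FFT for self-containedness) measures the per-mode fractional disagreement between a "real" and a "generated" field:

```python
import cmath

def power_spectrum(signal):
    """Power spectrum |F_k|^2 of a real 1D field via a direct DFT."""
    n = len(signal)
    spec = []
    for k in range(n // 2 + 1):
        fk = sum(x * cmath.exp(-2j * cmath.pi * k * m / n)
                 for m, x in enumerate(signal))
        spec.append(abs(fk) ** 2 / n)
    return spec

def fractional_disagreement(real, generated):
    """Per-mode |P_gen - P_real| / P_real, skipping modes with negligible
    power -- the kind of 1-2% / 15% / 20% figures quoted above."""
    pr = power_spectrum(real)
    pg = power_spectrum(generated)
    return [abs(g - r) / r for r, g in zip(pr, pg) if r > 1e-12]
```

The real evaluation additionally averages over many GAN samples and bins modes by |k|, but the pass/fail logic is this same per-mode ratio.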

Abstract: We present 360∘ (i.e., 4π steradian) general-relativistic ray-tracing and radiative transfer calculations of accreting supermassive black holes. We perform state-of-the-art three-dimensional general-relativistic magnetohydrodynamical simulations using the BHAC code, subsequently post-processing these data with the radiative transfer code RAPTOR. All relativistic and general-relativistic effects, such as Doppler boosting and gravitational redshift, as well as geometrical effects due to the local gravitational field and the observer’s changing position and state of motion, are therefore calculated self-consistently. Synthetic images at four astronomically relevant observing frequencies are generated from the perspective of an observer with a full 360∘ view inside the accretion flow, who is advected with the flow as it evolves. As an example, we calculate images based on recent best-fit models of observations of Sagittarius A*. These images are combined to generate a complete 360∘ Virtual Reality movie of the surrounding environment of the black hole and its event horizon. Our approach also enables the calculation of the local luminosity received at a given fluid element in the accretion flow, providing important applications in, e.g., radiation feedback calculations onto black hole accretion flows. In addition to scientific applications, the 360∘ Virtual Reality movies we present also represent a new medium through which to interactively communicate black hole physics to a wider audience, serving as a powerful educational tool. PubDate: 2018-11-19
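
A 360∘ (4π steradian) camera can be sketched by mapping each pixel of an equirectangular image to a unit viewing direction along which a ray is then traced. This is a generic sketch, not RAPTOR's actual camera construction:

```python
import math

def pixel_to_direction(i, j, width, height):
    """Map an equirectangular pixel (i, j) to a unit viewing direction:
    longitude sweeps [-pi, pi) across the width, latitude [pi/2, -pi/2]
    down the height, so the image covers the full 4*pi steradians."""
    lon = 2.0 * math.pi * (i + 0.5) / width - math.pi
    lat = 0.5 * math.pi - math.pi * (j + 0.5) / height
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))
```

In the relativistic setting each such direction seeds a null geodesic integrated backward from the (comoving) observer, rather than a straight ray.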

Abstract: Multidimensional nucleosynthesis studies with hundreds of nuclei linked through thousands of nuclear processes are still computationally prohibitive. To date, most nucleosynthesis studies rely either on hydrostatic/hydrodynamic simulations in spherical symmetry, or on post-processing simulations using temperature and density versus time profiles directly linked to huge nuclear reaction networks. Parallel computing has been regarded as the main enabling factor of computationally intensive simulations. This paper explores the different pros and cons in the parallelization of stellar codes, providing recommendations on when and how parallelization may help in improving the performance of a code for astrophysical applications. We report on different parallelization strategies successfully applied to the spherically symmetric, Lagrangian, implicit hydrodynamic code SHIVA, extensively used in the modeling of classical novae and type I X-ray bursts. When only matrix build-up and inversion processes in the nucleosynthesis subroutines are parallelized (a suitable approach for post-processing calculations), the huge amount of time spent on communications between cores, together with the small problem size (limited by the number of isotopes of the nuclear network), results in a much worse performance of the parallel application compared to the 1-core, sequential version of the code. Parallelization of the matrix build-up and inversion processes in the nucleosynthesis subroutines is not recommended unless the number of isotopes adopted largely exceeds 10,000. In sharp contrast, speed-up factors of 26 and 35 have been obtained with a parallelized version of SHIVA, in a 200-shell simulation of a type I X-ray burst carried out with two nuclear reaction networks: a reduced one, consisting of 324 isotopes and 1392 reactions, and a more extended network with 606 nuclides and 3551 nuclear interactions.
Maximum speed-ups of ∼41 (324-isotope network) and ∼85 (606-isotope network), are also predicted for 200 cores, stressing that the number of shells of the computational domain constitutes an effective upper limit for the maximum number of cores that could be used in a parallel application. PubDate: 2018-11-16
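
The scaling behaviour described above, with speed-up saturating once cores outnumber shells, follows the familiar Amdahl's-law pattern. A small illustrative model (not the authors' performance analysis; the parallel fraction `p` is a free parameter here, not a measured value):

```python
def amdahl_speedup(p, cores):
    """Amdahl's-law speedup for a code in which a fraction p of the runtime
    parallelizes perfectly over `cores` and the rest stays serial."""
    return 1.0 / ((1.0 - p) + p / cores)

def shell_limited_speedup(p, cores, n_shells=200):
    """With domain decomposition over mass shells, at most one shell can be
    assigned per core, so the shell count caps the usable core count."""
    return amdahl_speedup(p, min(cores, n_shells))
```

Whatever `p` is, `shell_limited_speedup` is flat beyond `n_shells` cores, which is exactly the effective upper limit the abstract stresses for the 200-shell simulation.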

Abstract: We present an account of the state of the art in the fields explored by the research community invested in “Modeling and Observing DEnse STellar systems”. For this purpose, we take as a basis the activities of the MODEST-17 conference, which was held at Charles University, Prague, in September 2017. Reviewed topics include recent advances in fundamental stellar dynamics, numerical methods for the solution of the gravitational N-body problem, formation and evolution of young and old star clusters and galactic nuclei, their elusive stellar populations, planetary systems, and exotic compact objects, with timely attention to black holes of different classes of mass and their role as sources of gravitational waves. Such a breadth of topics reflects the growing role played by collisional stellar dynamics in numerous areas of modern astrophysics. Indeed, in the next decade many revolutionary instruments will enable the derivation of positions and velocities of individual stars in the Milky Way and its satellites, and will detect signals from a range of astrophysical sources in different portions of the electromagnetic and gravitational spectrum, with an unprecedented sensitivity. On the one hand, this wealth of data will allow us to address a number of long-standing open questions in star cluster studies; on the other hand, many unexpected properties of these systems will come to light, stimulating further progress of our understanding of their formation and evolution. PubDate: 2018-11-06

Abstract: We present entropy-limited hydrodynamics (ELH): a new approach for the computation of numerical fluxes arising in the discretization of hyperbolic equations in conservation form. ELH is based on the hybridisation of an unfiltered high-order scheme with the first-order Lax-Friedrichs method. The activation of the low-order part of the scheme is driven by a measure of the locally generated entropy inspired by the artificial-viscosity method proposed by Guermond et al. (J. Comput. Phys. 230(11):4248-4267, 2011, doi:10.1016/j.jcp.2010.11.043). Here, we present ELH in the context of high-order finite-differencing methods and of the equations of general-relativistic hydrodynamics. We study the performance of ELH in a series of classical astrophysical tests in general relativity involving isolated, rotating and nonrotating neutron stars, and including a case of gravitational collapse to a black hole. We present a detailed comparison of ELH with the fifth-order monotonicity preserving method MP5 (Suresh and Huynh in J. Comput. Phys. 136(1):83-99, 1997, doi:10.1006/jcph.1997.5745), one of the most common high-order schemes currently employed in numerical-relativity simulations. We find that ELH achieves comparable and, in many of the cases studied here, better accuracy than more traditional methods at a fraction of the computational cost (up to \({\sim}50\%\) speedup). Given its accuracy and its simplicity of implementation, ELH is a promising framework for the development of new special- and general-relativistic hydrodynamics codes well adapted for massively parallel supercomputers. PubDate: 2017-07-04
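
The hybridisation at the heart of ELH can be written down compactly: at each cell interface the flux is a convex blend of a high-order flux and the Lax-Friedrichs flux, weighted by an entropy-driven limiter theta. A one-dimensional sketch for linear advection (illustrative only; the paper applies this idea to general-relativistic hydrodynamics with finite-difference stencils):

```python
def lax_friedrichs_flux(uL, uR, a):
    """First-order Lax-Friedrichs flux for linear advection f(u) = a*u,
    playing the role of ELH's diffusive low-order fallback."""
    return 0.5 * (a * uL + a * uR) - 0.5 * abs(a) * (uR - uL)

def elh_flux(f_high, uL, uR, a, theta):
    """Entropy-limited hybrid interface flux: theta = 0 keeps the unfiltered
    high-order flux (smooth flow), theta = 1 falls back fully to
    Lax-Friedrichs where the entropy-production indicator fires.
    `f_high` stands in for any high-order flux (e.g. MP5-reconstructed)."""
    theta = max(0.0, min(1.0, theta))
    return (1.0 - theta) * f_high + theta * lax_friedrichs_flux(uL, uR, a)
```

Because theta is driven by locally generated entropy, the diffusive term is paid only near shocks, which is where the quoted speedup over uniformly high-order limiting comes from.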

Abstract: We report on the successful completion of a 2 trillion particle cosmological simulation to \(z=0\) run on the Piz Daint supercomputer (CSCS, Switzerland), using 4000+ GPU nodes for a little less than 80 h of wall-clock time or 350,000 node hours. Using multiple benchmarks and performance measurements on the US Oak Ridge National Laboratory Titan supercomputer, we demonstrate that our code PKDGRAV3 delivers, to our knowledge, the fastest time-to-solution for large-scale cosmological N-body simulations. This was made possible by using the Fast Multipole Method in conjunction with individual and adaptive particle time steps, both deployed efficiently (and for the first time) on supercomputers with GPU-accelerated nodes. The very low memory footprint of PKDGRAV3 allowed us to run the first ever benchmark with 8 trillion particles on Titan, and to achieve perfect scaling up to 18,000 nodes and a peak performance of 10 Pflops. PubDate: 2017-05-18

Abstract: We present the black hole accretion code (BHAC), a new multidimensional general-relativistic magnetohydrodynamics module for the MPI-AMRVAC framework. BHAC has been designed to solve the equations of ideal general-relativistic magnetohydrodynamics in arbitrary spacetimes and exploits adaptive mesh refinement techniques with an efficient block-based approach. Several spacetimes have already been implemented and tested. We demonstrate the validity of BHAC by means of various one-, two-, and three-dimensional test problems, as well as through a close comparison with the HARM3D code in the case of a torus accreting onto a black hole. The convergence of a turbulent accretion scenario is investigated with several diagnostics and we find accretion rates and horizon-penetrating fluxes to be convergent to within a few percent when the problem is run in three dimensions. Our analysis also involves the study of the corresponding thermal synchrotron emission, which is performed by means of a new general-relativistic radiative transfer code, BHOSS. The resulting synthetic intensity maps of accretion onto black holes are found to be convergent with increasing resolution and are anticipated to play a crucial role in the interpretation of horizon-scale images resulting from upcoming radio observations of the source at the Galactic Center. PubDate: 2017-05-03