Abstract: Advances in Geophysics, Volume 54 (2013)

Fluids play an integral role in the geodynamical system, from consumption through serpentinization at mid-ocean ridges and outer rises to release through dehydration and decarbonation within subduction zones and beyond. Fluids affect a number of critical elements of the tectonic cycle, including weakening plate boundaries and catalyzing the mantle-wedge melting that feeds volcanic arcs. This review summarizes the vast topic of the hydrogeological cycle of the solid earth and how fluids affect, and are affected by, tectonic processes. Ultimately these fluids must either remain trapped in the mantle or return to the surface at high pressure via ductile processes or fracture networks. High-pressure fluids returning to the surface may become trapped at the base of the brittle crust, where they can contribute to earthquake nucleation and genesis. Evidence suggests that high-pressure fluids are active participants in tectonic earthquakes, and the relatively recent discoveries of slow slip earthquakes and non-volcanic tremor both point to trapped, over-pressured fluids as an underlying mechanical cause. Because fluids play such an integral role in lithospheric geodynamics, some general speculations about fluids and earthquakes are offered. One such speculation is that spatial aftershock patterns reflect the fluid pathways taken by high-pressure fluids released by the earthquake mainshock. Some of these patterns are shown, and I introduce the term “Zen Trees” to describe them because of their aesthetic form and their resemblance to Eastern calligraphy. I hypothesize that earthquakes that do not spawn significant aftershock sequences indicate little if any trapped high-pressure fluid at depth, while earthquakes producing long-lived aftershock sequences point to large reservoirs of trapped high-pressure fluids.
Although the viscous mantle is the ultimate geophysical fluid, the focus of this paper is limited to fluids in the lithosphere, because this layer, typically treated as a simple thermal boundary layer, is in fact controlled by complex dynamical interactions among fracture, deformation, dissolution/precipitation, and fluid flow.

Abstract: Advances in Geophysics, Volume 54 (2013)

The aim of this review is to characterize the role of pressure solution creep in the ductility of the Earth’s upper crust and to describe how this creep mechanism competes and interacts with other deformation mechanisms. Pressure solution creep is a major mechanism of ductile deformation of the upper crust, accommodating basin compaction, folding, shear zone development, fault creep, and interseismic healing. However, its kinetics depend strongly on the composition of the rocks (mainly the presence of phyllosilicate minerals, which activate pressure solution) and on its interaction with fracturing and healing processes (which activate and slow down pressure solution, respectively). The present review combines three approaches: natural observations, theoretical developments, and laboratory experiments. Natural observations identify the pressure solution markers needed to evaluate creep law parameters, such as the nature of the material, the temperature and stress conditions, and the geometry of mass transfer domains. Theoretical developments help to investigate the thermodynamics and kinetics of the processes and to build theoretical creep laws. Laboratory experiments test the models and measure creep law parameters such as driving forces and kinetic coefficients. Finally, applications are discussed for the modeling of sedimentary basin compaction and fault creep. The sensitivity of the models to time is given particular attention: viscous versus plastic rheology during sediment compaction, and steady-state versus non-steady-state behavior of faults and shear zones. The conclusions discuss recent advances in modeling pressure solution creep and the main questions that remain to be solved.
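The review's creep laws are not reproduced in the abstract, but the commonly cited diffusion-controlled form, in which strain rate scales linearly with stress and inversely with the cube of grain size, can be sketched as follows. All numerical constants here are illustrative placeholders, not calibrated values from the review.

```python
import math

def pressure_solution_strain_rate(stress_pa, temp_k, grain_size_m,
                                  k0=1e-18, activation_energy_j_mol=4.0e4):
    """Illustrative diffusion-controlled pressure-solution creep law:
    strain rate ~ k(T) * stress / d^3, with an Arrhenius temperature
    dependence in the kinetic coefficient k(T).
    k0 and activation_energy_j_mol are hypothetical placeholder values."""
    R = 8.314  # gas constant, J/(mol K)
    k_t = k0 * math.exp(-activation_energy_j_mol / (R * temp_k))
    return k_t * stress_pa / grain_size_m ** 3

# The d^-3 dependence means halving the grain size speeds creep up 8x:
r_coarse = pressure_solution_strain_rate(50e6, 450.0, 100e-6)
r_fine = pressure_solution_strain_rate(50e6, 450.0, 50e-6)
print(round(r_fine / r_coarse))  # -> 8
```

The strong grain-size sensitivity illustrated here is one reason fine-grained lithologies are so much more prone to deformation by pressure solution.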

Abstract: Advances in Geophysics, Volume 54 (2013)

Carbonates are major sedimentary materials found in many upper layers of the Earth’s crust. Understanding their compaction behavior is important for porosity prediction in sedimentary basins and for improving knowledge about the sealing of active faults at shallow depths, where faults cross-cut limestone formations. In carbonates, as opposed to siliciclastic sediments, diagenesis starts at shallow depths.

Abstract: Advances in Geophysics, Volume 53 (2012)

Microseisms seen on seismograms worldwide were once viewed as “noise” contaminating records of earthquakes. However, these low-amplitude oscillations generated by storms over the oceans are now recognized as carriers of an important meteorological “signal”. Decades-long archives of analog seismograms may thus represent a high-resolution record of climate change significantly longer than those based on traditional meteorological observations. Microseisms were among the first phenomena investigated by the then-new field of seismology, beginning with their identification around 1870. Improved characterization came from subsequent investigations in Europe, Japan, and North America, which sought out their sources and source regions. Two generation mechanisms were identified in the mid-twentieth century. In both, microseisms originate with atmospheric energy in the form of storms over the oceans. That energy is coupled into the water column via the generation of ocean swell, transmitted to the seafloor, and then travels as elastic waves along the seafloor. Analysis of secondary microseisms recorded in eastern North America during Hurricane Andrew (Saffir–Simpson category 5, August 1992) shows the feasibility of using these signals to identify North Atlantic Ocean hurricanes. The shift in dominant microseism frequency as Andrew intensified demonstrates that these microseisms were generated over the deep waters of the North Atlantic Ocean at or near the hurricane and are thus a near real-time record of hurricane changes. Variations in secondary microseism frequency and amplitude allow detection of the hurricane while over the ocean and up to ∼2000 km from the recording station. Analog seismograms from seismic stations in North America may thus document unobserved North Atlantic hurricanes. However, uncertainties remain.
The relative contributions of deep- and shallow-water sources remain uncertain, and the generation of microseisms with transverse wave components lacks a satisfactory explanation. Better understanding of the various controls on microseism properties is necessary before information in these waveforms can be used to infer storm characteristics, especially for less-energetic storms.
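The two generation mechanisms are not detailed in the abstract; for the secondary microseisms discussed here, the classical result (due to Longuet-Higgins) is that opposing ocean wave trains excite seafloor pressure oscillations at roughly twice the swell frequency, which is one reason the dominant microseism frequency tracks changes in a storm's wave field. A trivial numeric sketch:

```python
def secondary_microseism_frequency_hz(swell_period_s):
    """Double-frequency mechanism: secondary microseisms appear at
    roughly twice the ocean-swell frequency (half the swell period)."""
    return 2.0 / swell_period_s

# A 10 s hurricane-generated swell maps to ~0.2 Hz secondary microseisms
print(secondary_microseism_frequency_hz(10.0))  # -> 0.2
```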

Abstract: Advances in Geophysics, Volume 53 (2012)

Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R² ≈ 0.4–0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distribution of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources.
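The abstract's modified Pareto fits are not reproduced here, but the standard maximum-likelihood (Hill) estimator for a simple power-law tail shows how a best-fit exponent of the kind reported for the tide-gauge stations can be obtained; the synthetic sample below stands in for a real tsunami-amplitude catalog.

```python
import math
import random

def mle_powerlaw_exponent(sizes, x_min):
    """Hill (maximum-likelihood) estimate of alpha for a power-law
    tail p(x) ~ x^(-alpha), restricted to observations x >= x_min."""
    tail = [x for x in sizes if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Synthetic Pareto sample (alpha = 2.5) drawn by inverse-transform
# sampling, standing in for a catalog of tsunami amplitudes.
random.seed(0)
alpha_true = 2.5
sample = [(1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
          for _ in range(20000)]
alpha_hat = mle_powerlaw_exponent(sample, 1.0)
print(round(alpha_hat, 2))  # recovers a value close to 2.5
```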

Abstract: Advances in Geophysics, Volume 53 (2012)

Lessons learnt from the destructive earthquakes that occurred during the new millennium provide new opportunities to take action, revise, and improve the procedures for seismic hazard assessment (SHA). A single hazard map cannot meet the requirements of different end-users; the mapping of expected earthquake ground motion that accounts for event recurrence may be suitable for the insurance industry. When dealing with cultural heritage and critical structures (e.g., nuclear power plants), where extremely long time intervals must be considered, standard hazard estimates are unsuitable because of their basic heuristic limitations. While time-dependent SHA may be suitable for increasing earthquake preparedness by planning adequate mitigation actions, for critical structures (i.e., those for which the consequences of failure are intolerable) the maximum possible seismic input is what matters. Therefore the need for an appropriate estimate of the seismic hazard, aimed not only at the seismic classification of the national territory but also at properly accounting for local amplification of ground shaking and for fault properties, is a pressing concern for seismic engineers. A viable alternative to traditional SHA is the use of scenario earthquakes, characterized at least in terms of magnitude, distance, and faulting style, together with the treatment of complex source processes. The relevance of realistic modeling, which permits the generalization of empirical observations by means of physically sound theoretical considerations, is evident, as it allows the optimization of structural design with respect to the site of interest. The time information associated with the scenarios of ground motion, given by intermediate-term middle-range earthquake predictions, can help public authorities assign priorities for timely mitigation actions.
Therefore, the approach we have developed naturally supplies realistic time series of ground motion useful to preserve urban settings, historical monuments, and relevant man-made structures.

Abstract: Advances in Geophysics, Volume 52 (2010)

Geophysical investigations, which commenced thousands of years ago in China with observations of the Earth shaking caused by large earthquakes (Lee et al., 2003), have come a long way in their development from an initial, intuitive stage to a modern science employing the newest technological and theoretical achievements. In spite of this enormous development, geophysical research still faces the same basic limitation: the only available information about the Earth comes from measurements at its surface or from space, and only very limited information can be acquired by direct measurement. It is not surprising, therefore, that geophysicists have contributed significantly to the development of inverse theory—the theory of inference about sought parameters from indirect measurements. For a long time this inference was understood as the task of estimating parameters used to describe the Earth's structure or processes within it, like earthquake ruptures. The problem was traditionally solved using optimization techniques following the least absolute value and least squares criteria formulated by Laplace and Gauss. Today inverse theory faces a new challenge in its development. In many geophysical and related applications, obtaining the model “best fitting” a given set of data according to a selected optimization criterion is no longer sufficient; we need to know how plausible the obtained model is or, in other words, how large the uncertainties are in the final solutions. This task can hardly be addressed in the framework of the classical optimization approach. The probabilistic inverse theory incorporates a statistical point of view, according to which all available information, including observational data, theoretical predictions, and a priori knowledge, can be represented by probability distributions.
According to this reasoning, the solution of the inverse problem is not a single, optimum model, but rather the a posteriori probability distribution over the model space which describes the probability of a given model being the true one. This path of development of the inverse theory follows a pragmatic need for a reliable and efficient method of interpreting observational data. The aim of this chapter is to bring together two elements of the probabilistic inverse theory. The first one is a presentation of the theoretical background of the theory enhanced by basic elements of the Monte Carlo computational technique. The second part provides a review of the solid earth applications of the probabilistic inverse theory.
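As a concrete, minimal illustration of the probabilistic viewpoint described above, the sketch below uses a Metropolis (Monte Carlo) sampler to draw models from the a posteriori distribution of a toy one-parameter linear inverse problem, rather than returning a single optimum model. The data, noise level, and forward model are invented for illustration.

```python
import math
import random

def metropolis(log_posterior, m0, step, n_samples, seed=1):
    """Minimal Metropolis sampler: returns a chain of models drawn
    from the a posteriori probability distribution over model space."""
    rng = random.Random(seed)
    m, lp = m0, log_posterior(m0)
    chain = []
    for _ in range(n_samples):
        m_new = m + rng.gauss(0.0, step)
        lp_new = log_posterior(m_new)
        # accept with probability min(1, exp(lp_new - lp))
        if math.log(rng.random()) < lp_new - lp:
            m, lp = m_new, lp_new
        chain.append(m)
    return chain

# Toy forward model d = 2*m with Gaussian noise (sigma = 0.1), flat prior.
observed = [1.90, 2.05, 2.02, 1.97]  # invented data, true m near 1.0

def log_post(m):
    return -0.5 * sum((d - 2.0 * m) ** 2 for d in observed) / 0.1 ** 2

chain = metropolis(log_post, m0=0.0, step=0.05, n_samples=5000)
post_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
print(round(post_mean, 2))  # posterior mean near the true m = 1.0
```

The spread of the post-burn-in chain, not just its mean, is the point: it quantifies how plausible each model is given the data.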

Abstract: Advances in Geophysics, Volume 52 (2010)

We report on recent advances in experiments and modeling of particle–fluid flows that are relevant to an understanding of debris flows. We first describe laboratory experiments on steady, inclined flows of mixtures of water and a single idealized granular phase that focus on the differences in the depths and velocities of the two phases and that provide evidence for the importance of collisional exchange of momentum and energy between the particles. We then indicate how a relatively simple rate-dependent rheological model for the particles that incorporates yield may be used in the context of a two-phase mixture theory that distinguishes between the depths of the fluid and particle phases to reproduce what is seen in the experiments on both uniform and non-uniform flows. Finally, because a phenomenological extension of kinetic theory for dense, inclined flows of identical particles has recently been developed, we outline a kinetic theory for dense, inclined flows of two types of particles and water as a possible alternative to existing phenomenological theories.

Abstract: Advances in Geophysics, Volume 52 (2010)

The seismically active region from Tunisia to the Azores Islands constitutes the westernmost part of the plate boundary between Eurasia and Africa. From the point of view of tectonics, this is a complex structure involving volcanism and rifting at the Azores, strike-slip motion at the center of the Atlantic, and horizontal N-S compression at its eastern part, with complex interaction between Iberia and northern Africa and E-W extension at the Alboran Sea, involving some kind of subduction or delamination process. This chapter is divided into four parts: (1) Atlantic region, Azores–Gibraltar; (2) Azores Islands triple junction; (3) southern Iberia, Betics, and Alboran Sea; and (4) North Africa, Morocco, Algeria, and Tunisia. Plate motion shows counterclockwise rotation of Africa with respect to Eurasia around a pole near the Canary Islands. The Azores region forms a triple junction with ridge structure and oblique spreading in its three branches. The Atlantic region from the Azores to Gibraltar is separated into two parts, west and east of 20°W. The first has E-W strike-slip motion, and the second is under horizontal N-S compression producing underthrusting of Africa. The Betics–Alboran area is dominated by the collisional movement between Iberia and northern Africa and the E-W extension at the Alboran Basin. Intermediate and deep earthquake activity and tomography data show an anomalous deep structure, interpreted as produced by a subduction or lithospheric delamination process. The Betic Cordillera, which links with the Rif through the Gibraltar Arc, is formed by overthrusting toward the north, bounded by stable Iberia, and crossed by several fracture systems. The Rif, High Atlas, and Tell mountains are under NW-SE horizontal compression and dominated by structures trending NE-SW. Several interpretations have been given of the tectonic development of these regions, and some aspects are not yet completely explained.

Abstract: Advances in Geophysics, Volume 51 (2009)

The present review of seismicity induced by mining is a continuation of the previous reviews, published in 1990 and 2001 in Advances in Geophysics, which described the problems involved and the state of the art of relevant research in this field at the ends of the 1980s and 1990s, respectively. During the last decade, seismic monitoring has been expanded in several mining districts, a number of new techniques have been introduced, and significant new results have been obtained in studies of seismic events induced by mining. This review is organized, to some extent, similarly to the previous ones. New techniques in seismic monitoring in mines, mining factors affecting seismicity, source mechanisms and source time functions, and source parameters and their scaling relations are briefly discussed. Precursory phenomena observed in mines and some attempts at predicting larger events are reviewed. The new results obtained so far by the Japanese research group in South African gold mines, the concepts of stress diffusion and of “critical earthquakes” applied to seismicity in mines, and numerical modeling of rock mass response to mining are also briefly discussed.
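The source-parameter scaling relations discussed in the review are not given in the abstract; as one fixed point, the standard Hanks–Kanamori moment magnitude converts the scalar seismic moment routinely estimated for mine tremors into a magnitude. The example event size below is hypothetical.

```python
import math

def moment_magnitude(m0_nm):
    """Hanks-Kanamori moment magnitude from scalar seismic moment
    M0 in newton-meters: Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# A hypothetical larger mining-induced event with M0 = 1e13 N*m:
print(round(moment_magnitude(1e13), 2))  # -> 2.6
```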

Abstract: Advances in Geophysics, Volume 51 (2009)

Over the last few decades, it has become clear that various human activities have the potential to generate seismic activity. Examples include subsurface waste injection, reservoir impoundment in the vicinity of large dams, and the development of mining, geothermal, or hydrocarbon resources. Recently, induced seismicity has also become a concern in connection with geologic carbon sequestration projects. This study focuses on seismicity induced by hydrocarbon production, summarizing the published case studies and describing current theoretical approaches to modeling them. It is important to understand the conditions under which hydrocarbon production may lead to seismic activity in order to ensure that such operations are performed safely. Our knowledge of induced seismicity in hydrocarbon fields has progressed substantially over the last few years owing to more intensive high-quality instrumentation of oil fields and a continuous effort to understand the phenomenon theoretically. However, much of the available literature is dispersed over a variety of journals and specialized reports. This review aims to provide a first step toward making the current knowledge about induced seismicity in hydrocarbon fields more accessible to a broad audience of scientists.

Abstract: Advances in Geophysics, Volume 51 (2009)

Observations related to tsunami generation, propagation, and runup are reviewed and described in a phenomenological framework. In the three coastal regimes considered (near-field broadside, near-field oblique, and far field), the observed maximum wave amplitude is associated with different parts of the tsunami wavefield. The maximum amplitude in the near-field broadside regime is most often associated with the direct arrival from the source, whereas in the near-field oblique regime, the maximum amplitude is most often associated with the propagation of edge waves. In the far field, the maximum amplitude is most often caused by the interaction of the tsunami coda that develops during basin-wide propagation and the nearshore response, including the excitation of edge waves, shelf modes, and resonance. Statistical distributions that describe tsunami observations are also reviewed, both in terms of spatial distributions, such as coseismic slip on the fault plane and near-field runup, and temporal distributions, such as wave amplitudes in the far field. In each case, fundamental theories of tsunami physics are heuristically used to explain the observations.

Abstract: Advances in Geophysics, Volume 50 (2008)

I present a review of the weak localization effect in seismology. To understand this multiple scattering phenomenon, I begin with an intuitive approach illustrated by experiments performed in the laboratory. The importance of reciprocity and interference in scattering media is emphasized. I then consider the role of the source mechanism, again starting with experimental evidence. Important theoretical results, which take into account the full vectorial character of elastic waves, are summarized. Applications to the characterization of heterogeneous elastic media are discussed.

Abstract: Advances in Geophysics, Volume 50 (2008)

An extended theory on the coherence function of log amplitude and phase for waves passing through random media is developed for a depth‐dependent background medium using the WKBJ‐approximated Green's function, the Rytov approximation, and the stochastic theory of the random velocity field. The new theory overcomes the limitation of the existing theory that can only deal with constant background media. Our extended coherence functions depend jointly on the angle separation between two incident plane waves and the spatial lag between receivers. The theory is verified through numerical simulations using the iasp91 background velocity model with two layers of random media. The current theory has the potential to be used to invert for the depth‐dependent spectrum of heterogeneities in the Earth.

Abstract: Advances in Geophysics, Volume 50 (2008)

High-frequency seismograms of earthquakes are complex, mainly because of scattering by lithospheric inhomogeneity. Disregarding phase information, seismologists have often focused on the characteristics of seismogram envelopes. The delay time of the maximum-amplitude arrival from the onset and the apparent duration are good measures of scattering caused by random velocity inhomogeneities. There is a stochastic method to directly simulate wave envelopes in random media: the Markov approximation for the parabolic equation is known to be powerful for the direct synthesis of scalar-wave envelopes when the wavelength is shorter than the correlation length of the random media. It leads to the master equation for the two-frequency mutual coherence function (TFMCF) of waves, whose Fourier transform gives the time trace of the wave intensity. It predicts well the peak delay and the broadening of wave envelopes with increasing travel distance for an impulsive source. In this chapter, we extend this approximation to vector waves in random elastic media. When the medium inhomogeneity is weak and the wavelength is shorter than the correlation distance, P- and S-waves can be treated separately using potentials, since conversion scattering between them is weak. Applying the Markov approximation to the TFMCF of the potential field, we are able to synthesize vector-wave envelopes. Vector-wave envelopes are derived analytically for plane-wavelet incidence onto random media and for wavelet radiation from a point source in random media characterized by a Gaussian autocorrelation function. For P-waves, this approximation predicts not only the peak delay and envelope broadening in the longitudinal component but also the excitation of wave amplitude in the transverse component due to ray bending. The ratio of the mean square (MS) fractional velocity fluctuation to the correlation distance, ε²/a, is the key parameter characterizing these vector-wave envelopes.
The time integral of the transverse-component MS amplitude as a function of travel distance gives this ratio. S-wave envelopes can be synthesized with an analogous mathematical approach. For the same randomness, the envelope broadening of an S-wavelet is larger than that of a P-wavelet by a factor of the ratio of their wave velocities. The validity of the direct envelope synthesis with the Markov approximation is confirmed by comparison with vector-wave envelopes calculated from finite-difference simulations in two dimensions. The direct syntheses of vector-wave envelopes developed here could serve the mathematical interpretation of observed seismograms in terms of lithospheric inhomogeneity.