Authors: I. Gultepe; R. Rabin; R. Ware; M. Pavolonis
Publication date: Available online 22 October 2016 Source: Advances in Geophysics

The objective of this work is to better understand light snow (LSN) precipitation measurements (precipitation rate (PR) < 0.5 mm/h) collected by optical present weather sensors (OPWS), weighing gauges, and spectral probes, which are important for meteorological and hydrometeorological applications. Observations collected during the Satellite Applications for Arctic Weather and Search and Rescue (SAR) Operations (SAAWSO) project, which took place over Goose Bay, Newfoundland (NFL), Canada, were studied to assess LSN characteristics and instrument sensitivities. Two case studies from the SAAWSO project, representing extreme environmental conditions (temperatures between 0 and −35°C) and snow occurrence, are presented. The ice crystal size and shape of LSN were obtained using a new platform called the Ground Cloud Imaging Probe (GCIP), covering sizes between 7.5 and 930 μm over 60 channels at 15 μm intervals. The measurements from the GCIP, Laser Precipitation Monitor (LPM), weighing gauges, and OPWS were used in the analysis. The results suggested the following: (1) LSN occurs about 80% of the time over the Arctic regions; (2) LSN can play a significant role in surface cooling and dehydration of the upper levels; and (3) OPWS respond to LSN conditions better than weighing gauges. It is concluded that OPWS and spectral probes can improve measurement of LSN, including snow particle shape and size distribution at sizes <0.5 mm. Further research on the impact of LSN on weather and climate simulations is needed.

Authors: Y.M. Leroy; B. Maillot
Publication date: Available online 22 October 2016 Source: Advances in Geophysics

The first of the two objectives is to present the theory of limit analysis (LA) and its application to determining the collapse mechanism (CM) characterizing a complex, active fold within a fold-and-thrust belt or an accretionary wedge. The theory applies to fluid-saturated materials corresponding typically to upper crustal conditions. The potential of this approach is illustrated by comparing the various CMs that could occur at the front of an accreting wedge, including friction, delamination, and compaction deformation mechanisms resulting in either straight or curvy ramps, for thrusting oriented toward either the hinterland or the foreland. Such hinterland thrusting, combined with a change in depth of the active decollement, would be typical of triangular zones. In the process, it is shown that LA reproduces and naturally extends the classical critical Coulomb wedge theory. The second objective is to document how the predictive potential of the kinematic approach of LA can be combined sequentially with the geometrical constructions of fault-related folds. The resulting method, called sequential limit analysis, provides the means to determine, for example, the complete sequence of thousands of thrusts, each thrust being captured from its onset to its arrest caused by the onset of a mechanically more favorable event.

Authors: A. Malehmir; L.V. Socco; M. Bastani; C.M. Krawczyk; A.A. Pfaffhuber; R.D. Miller; H. Maurer; R. Frauenfelder; K. Suto; S. Bazin; K. Merz; T. Dahlin
Publication date: Available online 4 October 2016 Source: Advances in Geophysics

Natural hazards such as landslides, floods, rockfalls, earthquakes, volcanic eruptions, sinkholes, and snow avalanches represent potential risks to our infrastructure, property, and lives. That potential will continue to escalate with continued human encroachment into risk areas. With the help of geophysical techniques, many of those risks can be better understood and quantified, and thereby minimized and at least partly mitigated through accurate, site-specific planning and engineering. On occasion these hazards simply cannot be avoided, but better characterization, and therefore understanding, of the subsurface geology and natural processes responsible for the threats is possible through integration of various cost-effective geophysical methods with relevant geotechnical, geomechanical, and hydrogeological methods. With the enhanced characterization possible when geophysics is incorporated into natural hazard analysis, potential risks can be better quantified and remediation plans tuned to minimize the threat most natural hazards present. In this article we first review common geophysical methods that can be, and have been, utilized in studying areas prone to natural hazards; we then provide selected case studies and approaches, drawing predominantly on our own examples; and finally we look to the future, detailing how these methods and technologies can be implemented to be more time- and cost-effective and to provide improved results.

Publication year: 2012 Source: Advances in Geophysics, Volume 53

Microseisms seen on seismograms worldwide were once viewed as “noise” contaminating records of earthquakes. However, these low-amplitude oscillations, generated by storms over the oceans, are now recognized as carriers of an important meteorological “signal”. Decades-long archives of analog seismograms may thus represent a high-resolution record of climate change significantly longer than those based on traditional meteorological observations. Microseisms were among the first phenomena investigated by the then-new field of seismology; research began with their identification around 1870. Improved characterization came from subsequent investigations in Europe, Japan, and North America, which sought out their sources and source regions. Two generation mechanisms were identified in the mid-twentieth century. In both, microseisms originate with atmospheric energy in the form of storms over the oceans. This energy is coupled into the water column via the generation of ocean swell, transmitted to the seafloor, and then travels as elastic waves along the seafloor. Analysis of secondary microseisms recorded in eastern North America during the August 1992 Saffir-Simpson category 5 hurricane Andrew shows the feasibility of using these signals to identify North Atlantic Ocean hurricanes. The shift in dominant microseism frequency as Andrew intensified demonstrates that these microseisms were generated over the deep waters of the North Atlantic Ocean at or near the hurricane and are thus a near real-time record of hurricane changes. Variations in secondary microseism frequency and amplitude allow detection of a hurricane while over the ocean and up to ∼2000 km from the recording station. Analog seismograms from seismic stations in North America may thus document unobserved North Atlantic hurricanes. However, uncertainties remain.
The relative contributions of deep- and shallow-water sources remain uncertain, and the generation of microseisms with transverse wave components lacks a satisfactory explanation. Better understanding of the various controls on microseism properties is necessary before information in these waveforms can be used to infer storm characteristics, especially for less-energetic storms.
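The hurricane-tracking result above rests on watching the dominant microseism frequency shift in time. A minimal sketch of that measurement is a windowed spectral-peak estimate; the sampling rate, storm frequencies, and record lengths below are invented for illustration and are not the processing chain used in the chapter.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest spectral peak in `signal`."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic stand-in for a secondary-microseism record: the dominant
# frequency rises as the (hypothetical) storm intensifies.
fs = 10.0                                      # samples per second
t = np.arange(0.0, 600.0, 1.0 / fs)
early = np.sin(2 * np.pi * 0.15 * t[:3000])    # ~6.7 s period segment
late = np.sin(2 * np.pi * 0.25 * t[3000:])     # ~4.0 s period segment

print(f"dominant frequency shifts from {dominant_frequency(early, fs):.2f} Hz "
      f"to {dominant_frequency(late, fs):.2f} Hz")
```

Applied to consecutive windows of a real record, the same peak estimate yields a time series of dominant frequency from which an intensification trend could be read.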

Publication year: 2012 Source: Advances in Geophysics, Volume 53

Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R² ≈ 0.4–0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distribution of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources.
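The power-law exponent fitting mentioned above can be sketched with a maximum-likelihood (Hill-type) estimate for a classical Pareto tail. The threshold, exponent, and synthetic amplitude catalog below are hypothetical, chosen only to check the estimator; the chapter's modified Pareto form and tide-gauge data are not reproduced here.

```python
import numpy as np

def pareto_mle_exponent(amplitudes, x_min):
    """Maximum-likelihood (Hill) estimate of the Pareto power-law
    exponent alpha for samples at or above the threshold x_min."""
    x = np.asarray(amplitudes, dtype=float)
    x = x[x >= x_min]
    return len(x) / np.sum(np.log(x / x_min))

# Synthetic "catalog" drawn from a classical Pareto distribution with a
# known exponent (hypothetical values, not from the chapter).
rng = np.random.default_rng(0)
alpha_true, x_min = 1.5, 0.1
# numpy's pareto draws Lomax samples; shifting by 1 and scaling by x_min
# gives the classical Pareto with scale x_min and shape alpha_true.
amps = x_min * (rng.pareto(alpha_true, size=5000) + 1.0)

print(f"true alpha = {alpha_true}, "
      f"estimated alpha = {pareto_mle_exponent(amps, x_min):.2f}")
```

Fitting the same estimator station by station is one simple way the variation in best-fit exponents across recording sites could be quantified.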

Publication year: 2012 Source: Advances in Geophysics, Volume 53

Lessons learnt from the destructive earthquakes that occurred during the new millennium provide new opportunities to take action and to revise and improve the procedures for seismic hazard assessment (SHA). A single hazard map cannot meet the requirements of different end-users: mapping of the expected earthquake ground motion that accounts for event recurrence may be suitable for the insurance industry. When dealing with cultural heritage and critical structures (e.g., nuclear power plants), where extremely long time intervals must be considered, the standard hazard estimates are by far unsuitable because of their basic heuristic limitations. While time-dependent SHA may be suitable for increasing earthquake preparedness by enabling the planning of adequate mitigation actions, for critical structures (i.e., those for which the consequences of failure are intolerable) it is the maximum possible seismic input that is relevant. Seismic engineers therefore urgently need hazard estimates aimed not only at the seismic classification of the national territory but also at properly accounting for local amplification of ground shaking and for fault properties. A viable alternative to traditional SHA is the use of scenario earthquakes, characterized at least in terms of magnitude, distance, and faulting style, together with the treatment of complex source processes. The relevance of realistic modeling, which permits the generalization of empirical observations by means of physically sound theoretical considerations, is evident, as it allows the optimization of structural design with respect to the site of interest. The time information associated with the scenarios of ground motion, given by intermediate-term middle-range earthquake predictions, can help public authorities assign priorities for timely mitigation actions.
The approach we have developed therefore naturally supplies realistic time series of ground motion, useful for preserving urban settings, historical monuments, and relevant man-made structures.

Publication year: 2010 Source: Advances in Geophysics, Volume 52

Geophysical investigations, which commenced thousands of years ago in China with observations of Earth shaking caused by large earthquakes (Lee et al., 2003), have come a long way from an initial, intuitive stage to a modern science employing the newest technological and theoretical achievements. In spite of this enormous development, geophysical research still faces the same basic limitation: the only available information about the Earth comes from measurements at its surface or from space, and only very limited information can be acquired by direct measurements. It is not surprising, therefore, that geophysicists have contributed significantly to the development of inverse theory, the theory of inference about sought parameters from indirect measurements. For a long time this inference was understood as the task of estimating parameters used to describe the Earth's structure or processes within it, like earthquake ruptures. The problem was traditionally solved using optimization techniques following the least absolute value and least squares criteria formulated by Laplace and Gauss. Today inverse theory faces a new challenge in its development. In many geophysical and related applications, obtaining the model "best fitting" a given set of data according to a selected optimization criterion is no longer sufficient. We need to know how plausible the obtained model is or, in other words, how large the uncertainties are in the final solutions. This task can hardly be addressed in the framework of the classical optimization approach. The probabilistic inverse theory incorporates a statistical point of view, according to which all available information, including observational data, theoretical predictions, and a priori knowledge, can be represented by probability distributions.
According to this reasoning, the solution of the inverse problem is not a single, optimum model, but rather the a posteriori probability distribution over the model space, which describes the probability of a given model being the true one. This path of development of inverse theory follows a pragmatic need for a reliable and efficient method of interpreting observational data. The aim of this chapter is to bring together two elements of the probabilistic inverse theory. The first is a presentation of the theoretical background of the theory, enhanced by basic elements of the Monte Carlo computational technique. The second is a review of the solid-earth applications of the probabilistic inverse theory.
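The probabilistic viewpoint described above can be made concrete with a toy example: a Metropolis random walk sampling the a posteriori distribution of a single model parameter. The one-parameter travel-time problem, prior range, noise level, and proposal width below are all invented for illustration; this is a minimal sketch of Monte Carlo posterior sampling, not the chapter's methodology.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inverse problem: infer a single medium velocity v from
# noisy travel times t_i = d_i / v + noise.
v_true, sigma = 3.0, 0.2
dists = np.linspace(10.0, 50.0, 5)
t_obs = dists / v_true + rng.normal(0.0, sigma, dists.size)

def log_posterior(v):
    """Log of (uniform prior) x (Gaussian likelihood), up to a constant."""
    if not (1.0 < v < 6.0):          # uniform prior on a plausible range
        return -np.inf
    resid = t_obs - dists / v
    return -0.5 * np.sum((resid / sigma) ** 2)

# Metropolis sampler: the ensemble of retained models approximates the
# a posteriori probability distribution over the model space.
samples, v = [], 2.0
for _ in range(20000):
    v_prop = v + rng.normal(0.0, 0.05)
    if np.log(rng.uniform()) < log_posterior(v_prop) - log_posterior(v):
        v = v_prop
    samples.append(v)

post = np.array(samples[5000:])      # discard burn-in
print(f"posterior mean {post.mean():.2f}, posterior std {post.std():.3f}")
```

The posterior standard deviation is exactly the kind of uncertainty measure that the optimization ("best-fitting model") approach cannot supply.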

Publication year: 2010 Source: Advances in Geophysics, Volume 52

We report on recent advances in experiments and modeling of particle–fluid flows that are relevant to an understanding of debris flows. We first describe laboratory experiments on steady, inclined flows of mixtures of water and a single idealized granular phase; these focus on the differences in the depths and velocities of the two phases and provide evidence for the importance of collisional exchange of momentum and energy between the particles. We then indicate how a relatively simple rate-dependent rheological model for the particles that incorporates yield may be used, in the context of a two-phase mixture theory that distinguishes between the depths of the fluid and particle phases, to reproduce what is seen in the experiments on both uniform and non-uniform flows. Finally, because a phenomenological extension of kinetic theory for dense, inclined flows of identical particles has recently been developed, we outline a kinetic theory for dense, inclined flows of two types of particles and water as a possible alternative to existing phenomenological theories.

Publication year: 2010 Source: Advances in Geophysics, Volume 52

The seismically active region from Tunisia to the Azores Islands constitutes the westernmost part of the plate boundary between Eurasia and Africa. From a tectonic point of view, this is a complex structure involving volcanism and rifting at the Azores, strike-slip motion at the center of the Atlantic, and horizontal N-S compression in its eastern part, with complex interaction between Iberia and northern Africa and E-W extension at the Alboran Sea involving some kind of subduction or delamination process. This chapter is divided into four parts: (1) the Atlantic region, Azores–Gibraltar; (2) the Azores Islands triple junction; (3) southern Iberia, the Betics, and the Alboran Sea; and (4) North Africa: Morocco, Algeria, and Tunisia. Plate motion shows counterclockwise rotation of Africa with respect to Eurasia about a pole near the Canary Islands. The Azores region forms a triple junction with ridge structure and oblique spreading in its three branches. The Atlantic region from the Azores to Gibraltar is separated into two parts, west and east of 20°W. The first has E-W strike-slip motion, and the second is under horizontal N-S compression producing underthrusting of Africa. The Betics–Alboran area is dominated by the collisional movement between Iberia and northern Africa and by E-W extension at the Alboran Basin. Intermediate and deep earthquake activity and tomography data show an anomalous deep structure, interpreted as produced by subduction or a lithospheric delamination process. The Betic Cordillera, which links with the Rif through the Gibraltar Arc, is formed by overthrusting toward the north, bounded by stable Iberia, and crossed by several fracture systems. The Rif, High Atlas, and Tell mountains are under NW-SE horizontal compression and dominated by structures trending NE-SW. Several interpretations have been given for the tectonic development of these regions, and some aspects are not yet completely explained.

Publication year: 2009 Source: Advances in Geophysics, Volume 51

The present review of seismicity induced by mining is a continuation of the previous reviews, published in 1990 and 2001 in Advances in Geophysics, which described the problems involved and the state of the art of relevant research in this field at the end of the 1980s and 1990s. During the last decade, seismic monitoring has been expanded in several mining districts, a number of new techniques have been introduced, and significant new results have been obtained in studies of seismic events induced by mining. This review is organized, to some extent, similarly to the previous ones. New techniques in seismic monitoring in mines, mining factors affecting seismicity, source mechanisms and source time functions, and source parameters and their scaling relations are briefly discussed. Precursory phenomena observed in mines and some attempts at prediction of larger events are reviewed. The new results obtained so far by the Japanese research group in South African gold mines, the concepts of stress diffusion and of "critical earthquakes" applied to seismicity in mines, and numerical modeling of rock mass response to mining are also briefly discussed.

Publication year: 2009 Source: Advances in Geophysics, Volume 51

Over the last few decades, it has become clear that various human activities have the potential to generate seismic activity. Examples include subsurface waste injection, reservoir impoundment in the vicinity of large dams, and development of mining, geothermal, or hydrocarbon resources. Recently, induced seismicity has also become a concern in connection with geologic carbon sequestration projects. This study focuses on seismicity induced by hydrocarbon production, summarizing the published case studies and describing current theoretical approaches to modeling them. It is important to understand the conditions under which hydrocarbon production may lead to seismic activity in order to ensure that production operations are performed safely. Our knowledge of induced seismicity in hydrocarbon fields has progressed substantially over the last few years, owing to more intensive high-quality instrumentation of oil fields and a continuous effort to understand the phenomenon theoretically. However, much of the available literature is dispersed over a variety of journals and specialized reports. This review aims to provide a first step toward making the current knowledge about induced seismicity in hydrocarbon fields more accessible to a broad audience of scientists.

Publication year: 2009 Source: Advances in Geophysics, Volume 51

Observations related to tsunami generation, propagation, and runup are reviewed and described in a phenomenological framework. In the three coastal regimes considered (near-field broadside, near-field oblique, and far field), the observed maximum wave amplitude is associated with different parts of the tsunami wavefield. The maximum amplitude in the near-field broadside regime is most often associated with the direct arrival from the source, whereas in the near-field oblique regime, the maximum amplitude is most often associated with the propagation of edge waves. In the far field, the maximum amplitude is most often caused by the interaction of the tsunami coda that develops during basin-wide propagation with the nearshore response, including the excitation of edge waves, shelf modes, and resonance. Statistical distributions that describe tsunami observations are also reviewed, in terms of both spatial distributions, such as coseismic slip on the fault plane and near-field runup, and temporal distributions, such as wave amplitudes in the far field. In each case, fundamental theories of tsunami physics are used heuristically to explain the observations.

Publication year: 2008 Source: Advances in Geophysics, Volume 50

I present a review of the weak localization effect in seismology. To understand this multiple scattering phenomenon, I begin with an intuitive approach illustrated by experiments performed in the laboratory. The importance of reciprocity and interference in scattering media is emphasized. I then consider the role of the source mechanism, again starting with experimental evidence. Important theoretical results for elastic waves, which take into account their full vectorial character, are summarized. Applications to the characterization of heterogeneous elastic media are discussed.

Publication year: 2008 Source: Advances in Geophysics, Volume 50

An extended theory of the coherence function of log-amplitude and phase for waves passing through random media is developed for a depth-dependent background medium, using the WKBJ-approximated Green's function, the Rytov approximation, and the stochastic theory of the random velocity field. The new theory overcomes the limitation of the existing theory, which can deal only with constant background media. Our extended coherence functions depend jointly on the angular separation between two incident plane waves and the spatial lag between receivers. The theory is verified through numerical simulations using the iasp91 background velocity model with two layers of random media. The current theory has the potential to be used to invert for the depth-dependent spectrum of heterogeneities in the Earth.

Publication year: 2008 Source: Advances in Geophysics, Volume 50

High-frequency seismograms of earthquakes are complex, mainly because of scattering due to lithospheric inhomogeneity. Disregarding phase information, seismologists have often focused on the characteristics of seismogram envelopes. The delay time of the maximum-amplitude arrival from the onset and the apparent duration are good measures of scattering caused by random velocity inhomogeneities. There is a stochastic method to directly simulate wave envelopes in random media: the Markov approximation for the parabolic equation is known to be powerful for the direct synthesis of scalar-wave envelopes when the wavelength is shorter than the correlation length of the random media. It leads to the master equation for the two-frequency mutual coherence function (TFMCF) of waves, whose Fourier transform gives the time trace of the wave intensity. It predicts well the peak delay and the broadening of wave envelopes with increasing travel distance for an impulsive source. In this chapter, we extend this approximation to vector waves in random elastic media. When the medium inhomogeneity is weak and the wavelength is shorter than the correlation distance, P- and S-waves can be treated separately using potentials, since conversion scattering between them is weak. Applying the Markov approximation to the TFMCF of the potential field, we are able to synthesize vector-wave envelopes. Vector-wave envelopes are derived analytically for plane-wavelet incidence onto random media and for wavelet radiation from a point source in random media characterized by a Gaussian autocorrelation function. For P-waves, this approximation predicts not only the peak delay and envelope broadening in the longitudinal component but also the excitation of wave amplitude in the transverse component due to ray bending. The ratio of the mean square (MS) fractional velocity fluctuation to the correlation distance, ε²/a, is the key parameter characterizing these vector-wave envelopes.
This ratio can be estimated from the time integral of the transverse-component MS amplitude as a function of travel distance. S-wave envelopes can be synthesized with an analogous mathematical approach. For the same randomness, the envelope broadening of an S-wavelet is larger than that of a P-wavelet by a factor of the ratio of their wave velocities. The validity of the direct envelope synthesis with the Markov approximation is confirmed by comparison with vector-wave envelopes calculated from finite-difference simulations in two dimensions. The direct syntheses of vector-wave envelopes developed here could serve in the mathematical interpretation of observed seismograms in terms of lithospheric inhomogeneity.
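The envelope measures discussed above (peak delay from onset, apparent duration) can be illustrated on a synthetic wavelet. The sketch below extracts an envelope via an FFT-based analytic signal; it is a generic illustration of envelope measurement, not the Markov-approximation synthesis developed in the chapter, and the wavelet shape and parameters are invented.

```python
import numpy as np

def envelope(x):
    """Envelope of a real signal via the analytic signal, using an
    FFT-based Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0      # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0          # keep the Nyquist bin for even n
    return np.abs(np.fft.ifft(X * h))

# Synthetic "scattered" wavelet: a 5 Hz carrier under an asymmetric
# envelope whose peak is delayed from the onset, mimicking the envelope
# broadening that scattering produces.
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
shape = t * np.exp(-t / 1.5)               # envelope peaks at t = 1.5 s
x = shape * np.sin(2 * np.pi * 5.0 * t)

env = envelope(x)
peak_delay = t[np.argmax(env)]
print(f"peak delay from onset: {peak_delay:.2f} s")
```

Measuring such peak delays and durations on observed seismograms is what the chapter's direct envelope synthesis is ultimately compared against.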