Dynamics and Statistics of the Climate System

  This is an Open Access journal
ISSN (Online) 2059-6987
Published by Oxford University Press
  • Use of generalized additive modelling techniques to create synthetic daily
           temperature networks for benchmarking homogenization algorithms

    • Authors: Killick R; Jolliffe I, Willett K.
      First page: dzz001
      Abstract:
      Background: The removal of non-climatic artefacts (inhomogeneities) from observed station data for the purpose of climate research is an ongoing task. Progress on homogenization algorithms is limited by a lack of suitable test data that are sufficiently realistic and where the ‘truth’ is known a priori.
      Objectives: This article describes a new method to create realistic, synthetic, daily temperature data, where the truth is known completely, thus allowing them to be used as a benchmark against which to test the performance of homogenization algorithms.
      Methods: Observations, reanalysis data and statistical modelling, specifically gamma generalized additive models (GAMs), were combined to produce clean, synthetic, daily mean temperature time series. The created clean data were corrupted with realistic inhomogeneities using both constant offsets and time-varying offsets produced by perturbing some of the GAM’s input variables. When assessing the created clean and corrupted data, particular focus was given to variability, inter-station correlations, temporal autocorrelations and extreme values.
      Results: This is the first daily benchmarking study at this scale, bringing improvements over monthly or annual studies because it is at the daily level that the extremes of climate are felt. The created inhomogeneities mimic real-world features, as identified from real-world data by the homogenization community. They take the form of both steps and trends and explore changes in the mean and variance. Clean and corrupted data are created for four regions in the USA: Wyoming, the North East, the South East and the South West. These four regions encompass a diverse range of climates, from a snow climate in the North East to desert climates in the South West. Four test scenarios were created to allow the assessment of algorithm ability for different inhomogeneity and data structures. Scenario 1 was a best guess for the real world. The other three scenarios were constructed in ways that allowed the effects of station density, step versus trend inhomogeneities and varying temporal autocorrelation to be investigated. The closeness to reality of the created clean and corrupted data was assessed by comparison with the observed data, noting that the observed data contain some level of both systematic and random error and are therefore not perfect themselves. Generally, the created clean data had higher inter-station correlations in deseasonalized series (~0.9 versus ~0.7) and lower temporal autocorrelations in deseasonalized difference series (~0.01 versus ~0.10) than their real-world counterparts. The addition of inhomogeneities to create the corrupted data resulted in higher temporal autocorrelations in the deseasonalized difference series (~0.20 versus ~0.10) than those seen in the observed data. Despite these differences, the created levels of correlation are able to address issues of signal-to-noise ratio for detection algorithms, as in real-world data.
      Conclusions: These created clean and corrupted data provide a valuable first daily surface temperature data set that can be used in homogenization benchmarking studies. It is anticipated that they will serve as a baseline to be built upon in the future.
      PubDate: Tue, 25 Jun 2019 00:00:00 GMT
      DOI: 10.1093/climsys/dzz001
      Issue No: Vol. 3, No. 1 (2019)
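    A minimal Python sketch of the kind of corruption step the abstract describes: a clean daily temperature series is degraded with a constant step offset and a slow trend offset. The clean series, breakpoints and offset sizes below are invented for illustration; this is not the authors' GAM-based pipeline.

      import numpy as np

      rng = np.random.default_rng(42)
      n_days = 10 * 365
      day = np.arange(n_days)

      # Stand-in "clean" series: seasonal cycle plus AR(1) weather noise.
      clean = 10.0 + 12.0 * np.sin(2 * np.pi * day / 365.25)
      noise = np.zeros(n_days)
      for t in range(1, n_days):
          noise[t] = 0.7 * noise[t - 1] + rng.normal(0.0, 1.5)
      clean = clean + noise

      # Corrupt with a constant step offset (e.g. a station move) and a
      # time-varying trend offset; breakpoints and sizes are hypothetical.
      corrupted = clean.copy()
      step_at, trend_from = 1200, 2500
      corrupted[step_at:] += 0.8                                             # step, degC
      corrupted[trend_from:] += np.linspace(0.0, 1.5, n_days - trend_from)   # creeping bias

      # The benchmarking question: can a homogenization algorithm recover
      # `clean` from `corrupted` without knowing the breakpoints a priori?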
       
  • Theoretical foundations of emergent constraints: relationships between
           climate sensitivity and global temperature variability in conceptual
           models

    • Authors: Williamson M; Cox P, Nijsse F.
      First page: dzy006
      Abstract:
      Background: The emergent constraint approach has received interest recently as a way of utilizing multi-model General Circulation Model (GCM) ensembles to identify relationships between observable variations of climate and future projections of climate change. These relationships, in combination with observations of the real climate system, can be used to infer an emergent constraint on the strength of that future projection in the real system. However, there is as yet no theoretical framework to guide the search for emergent constraints. As a result, there are significant risks that indiscriminate data-mining of the multidimensional outputs from GCMs could lead to spurious correlations and less than robust constraints on future changes. To mitigate this risk, Cox et al. (‘Emergent constraint on equilibrium climate sensitivity from global temperature variability’. Nature 2018a; 553: 319; hereafter CHW18) proposed a theory-motivated emergent constraint, using the one-box Hasselmann model to identify a linear relationship between equilibrium climate sensitivity (ECS) and a metric of global temperature variability involving both temperature standard deviation and autocorrelation (Ψ). A number of doubts have been raised about this approach, some concerning the application of the one-box model to understand relationships in complex GCMs, which are known to have more than a single characteristic timescale.
      Objectives: To study whether the linear Ψ–ECS proportionality in CHW18 is an artefact of the one-box model. More precisely, we ask ‘Does the linear Ψ–ECS relationship feature in the more complex and realistic two-box and diffusion models?’
      Methods: We solve the two-box and diffusion models to find relationships between ECS and Ψ. These models are forced continually with white noise parameterizing internal variability. The resulting analytical relations are essentially fluctuation–dissipation theorems.
      Results: We show that the linear Ψ–ECS proportionality in the one-box model is not generally true in the two-box and diffusion models. However, the linear proportionality is a very good approximation for parameter ranges applicable to the current state-of-the-art CMIP5 climate models. This is not obvious: because of structural differences between the conceptual models, their predictions also differ. For example, the two-box and diffusion models, unlike the one-box model, can reproduce the long-term transient behaviour of the CMIP5 abrupt4xCO2 and 1pctCO2 simulations. Each of the conceptual models also predicts a different power spectrum, with only the diffusion model’s pink 1/f spectrum being compatible with observations and GCMs. We also show that the theoretically predicted Ψ–ECS relationship exists in the piControl as well as historical CMIP5 experiments and that the differing gradients of the proportionality are inversely related to the effective forcing in that experiment.
      Conclusions: We argue that emergent constraints should ideally be derived by such theory-driven hypothesis testing, in part to protect against spurious correlations from blind data-mining but mainly to aid understanding. In this approach, an underlying model is proposed, the model is used to predict a potential emergent relationship between an observable and an unknown future projection, and the hypothesized emergent relationship is tested against an ensemble of GCMs.
      PubDate: Mon, 18 Mar 2019 00:00:00 GMT
      DOI: 10.1093/climsys/dzy006
      Issue No: Vol. 3, No. 1 (2019)
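    The CHW18 metric referenced above is Ψ = σ_T / √(−ln α_1), with σ_T the standard deviation and α_1 the lag-1 autocorrelation of global temperature variability. A minimal sketch with assumed one-box (Hasselmann) parameters shows how Ψ and ECS = F_2x/λ are both proportional to 1/λ in that model, which is the linear relationship the article re-examines in the two-box and diffusion models.

      import numpy as np

      rng = np.random.default_rng(0)
      C = 8.0         # heat capacity (assumed), W yr m^-2 K^-1
      lam = 1.0       # climate feedback parameter (assumed), W m^-2 K^-1
      F2x = 3.7       # forcing from doubled CO2, W m^-2
      sigma_f = 0.5   # white-noise forcing amplitude (assumed), W m^-2
      dt, n = 1.0, 50_000   # annual steps

      T = np.zeros(n)
      for t in range(1, n):
          T[t] = T[t - 1] + dt * (-lam * T[t - 1] + rng.normal(0.0, sigma_f)) / C

      sigma_T = T.std()
      alpha_1 = np.corrcoef(T[:-1], T[1:])[0, 1]
      psi = sigma_T / np.sqrt(-np.log(alpha_1))   # CHW18 variability metric
      ecs = F2x / lam                             # one-box equilibrium climate sensitivity
      # In the one-box model both psi and ecs scale as 1/lam, giving the
      # linear Psi-ECS relationship that the article re-examines.
      print(f"Psi = {psi:.2f} K, ECS = {ecs:.2f} K")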
       
  • Spatio-temporal patterns of daily Indian summer monsoon rainfall

    • Authors: Mitra A; Apte A, Govindarajan R, et al.
      First page: dzy010
      Abstract: The primary objective of this article is to analyse a set of canonical spatial patterns that approximate the daily rainfall across the Indian region, as identified in the companion article where we developed a discrete representation of the Indian summer monsoon rainfall using state variables with spatio-temporal coherence maintained using a Markov random field prior. In particular, we use these spatio-temporal patterns to study the variation of rainfall during the monsoon season. First, the 10 patterns are divided into three families of patterns distinguished by their total rainfall amount and geographic spread. These families are then used to establish ‘active’ and ‘break’ spells of the Indian monsoon at the all-India level. Subsequently, we characterize the behaviour of these patterns in time by estimating probabilities of transition from one pattern to another across days in a season. Patterns tend to be ‘sticky’: the self-transition is the most common. We also identify most commonly occurring sequences of patterns. This leads to a simple seasonal evolution model for the summer monsoon rainfall. The discrete representation introduced in the companion article also identifies typical temporal rainfall patterns for individual locations. This enables us to determine wet and dry spells at local and regional scales. Last, we specify sets of locations that tend to have such spells simultaneously and thus come up with a new regionalization of the landmass.
      PubDate: Mon, 11 Feb 2019 00:00:00 GMT
      DOI: 10.1093/climsys/dzy010
      Issue No: Vol. 3, No. 1 (2019)
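    A minimal sketch of the transition-probability estimate described above, using a hypothetical stand-in label sequence (one of 10 pattern labels per monsoon day); with the real labels the diagonal ('sticky' self-transitions) dominates each row.

      import numpy as np

      rng = np.random.default_rng(1)
      n_patterns = 10
      labels = rng.integers(0, n_patterns, size=122)   # hypothetical: one label per JJAS day

      counts = np.zeros((n_patterns, n_patterns))
      for a, b in zip(labels[:-1], labels[1:]):        # consecutive days within the season
          counts[a, b] += 1
      row_sums = counts.sum(axis=1, keepdims=True)
      trans_prob = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

      np.set_printoptions(precision=2, suppress=True)
      print(trans_prob)   # with real labels the diagonal (self-transitions) is largest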
       
  • Is the glacial climate scale invariant?

    • Authors: Mitsui T; Lenoir G, Crucifix M.
      First page: dzy011
      Abstract:
      Background: Previous estimates of the power spectrum and of the scaling exponent of the detrended fluctuation analysis of palaeoclimate time series yielded the suggestion that climate fluctuations are scale invariant over a wide range of time scales. Specifically, the last glacial period is characterised by Dansgaard-Oeschger events, with rapid and frequent transitions between stadial and interstadial regimes, and it was suggested that climate changes were then exhibiting multifractal dynamics.
      Objectives: The present contribution clarifies the interpretation of detrended fluctuation analysis and of power spectra during the last glacial period.
      Methods: We use two simple models exhibiting regime behaviour reminiscent of Dansgaard-Oeschger dynamics: the random telegraph process with additive white Gaussian noise, and Stommel-Cessi’s box model of the thermohaline circulation (a study of the Lorenz model is also proposed in the appendix). This analysis then provides support for interpreting the generalized Hurst exponent h(q) and power-spectrum estimates of two NGRIP (Greenland) ice core records: the oxygen isotope ratio and the calcium ion concentration. We also analyse simulations with a stochastic dynamical system model with ice and astronomical forcings, which we recently proposed to simulate Dansgaard-Oeschger events.
      Results: Multifractal detrended fluctuation analyses of time series generated by the toy models reveal that their generalized fluctuation functions have a local scaling regime. The generalized Hurst exponent h(q) is lower for q ≪ 0 than for q > 0. This dependence of h(q) is named here “apparent multifractality”. It occurs because the behaviour of the autocorrelation function of small fluctuations (within a regime) differs from that of large fluctuations (regime shifts). The generalized Hurst exponents of the NGRIP records exhibit a form of apparent multifractality similar to that described in the toy models. The stochastic dynamical system model also captures both the power spectrum of the observations and the behaviour of h(q).
      Conclusions: The apparent multifractality of the Dansgaard-Oeschger event records is a consequence of regime switching between stadial and interstadial climates. Neither the local scaling in the power spectrum nor the output of the multifractal detrended fluctuation analysis implies that the underlying process is scale invariant.
      PubDate: Sat, 05 Jan 2019 00:00:00 GMT
      DOI: 10.1093/climsys/dzy011
      Issue No: Vol. 3, No. 1 (2019)
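    A compact multifractal DFA sketch (not the authors' code) applied to a random telegraph process with additive Gaussian noise, the first toy model named in the Methods; the switching probability and noise level are assumed. It estimates the generalized Hurst exponent h(q) for a few q values from the slopes of log F_q(s) versus log s.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 20_000
      # Random telegraph process (regime switching) plus additive Gaussian noise.
      x, state = np.empty(n), 1.0
      for t in range(n):
          if rng.random() < 0.002:       # assumed switching probability per step
              state = -state
          x[t] = state + rng.normal(0.0, 0.3)

      def mfdfa_h(series, q_values, scales, order=1):
          """Generalized Hurst exponent h(q): slope of log F_q(s) versus log s."""
          y = np.cumsum(series - series.mean())            # profile
          log_fq = np.zeros((len(q_values), len(scales)))
          for j, s in enumerate(scales):
              n_seg = len(y) // s
              f2 = np.empty(n_seg)
              for v in range(n_seg):
                  seg = y[v * s:(v + 1) * s]
                  t = np.arange(s)
                  fit = np.polyval(np.polyfit(t, seg, order), t)
                  f2[v] = np.mean((seg - fit) ** 2)        # detrended variance per segment
              for i, q in enumerate(q_values):
                  if q == 0:
                      log_fq[i, j] = 0.5 * np.mean(np.log(f2))
                  else:
                      log_fq[i, j] = np.log(np.mean(f2 ** (q / 2.0))) / q
          return [np.polyfit(np.log(scales), log_fq[i], 1)[0] for i in range(len(q_values))]

      scales = np.array([16, 32, 64, 128, 256, 512])
      q_values = [-5, -2, 2, 5]
      print(dict(zip(q_values, np.round(mfdfa_h(x, q_values, scales), 2))))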
       
  • Bayesian estimation of englacial radar chronology in Central West
           Antarctica

    • Authors: Muldoon G; Jackson C, Young D, et al.
      First page: dzy004
      Abstract: Englacial radar reflectors in the central West Antarctic Ice Sheet contain information about past dynamics and ice properties. Due to significant data coverage in this area, these isochronous reflectors can be traced over large portions of the ice sheet, but assigning ages to the reflectors for the purpose of studying dynamics requires incorporation of chronologic data from ice cores. To date reflectors in the Marie Byrd Land region, we consider the Byrd ice core, strategically located between the catchments of Thwaites Glacier and the Siple Coast ice streams. We determine ages with uncertainty for four englacial radar reflectors spanning the ice thickness using Bayesian approaches to combine radar observations, an existing Byrd ice core chronology, and a simplified ice flow model. This method returns the marginal probability distribution of depth and age for each of the observed radar reflectors. The results also include inferences of accumulation rate at the Byrd ice core site during the last 30 ka that show a minimum accumulation rate during the Last Glacial Maximum at half the modern rate. The deepest continuous radar reflector is 25.67 ± 1.45 ka, <30% of the estimated age of the oldest ice at the Byrd ice core site despite being located at 70% of the ice depth, limiting the age of radar-interpretable ice in this region. The inferred reflector age profiles at the Byrd ice core site derived here compare favourably with the more recent WAIS Divide ice core record. However, uncertainty in reflector depth due to radar range precision contributes considerably to uncertainty in reflector age in a way that is not readily reducible using currently available ice-penetrating radar systems.
      PubDate: Thu, 20 Dec 2018 00:00:00 GMT
      DOI: 10.1093/climsys/dzy004
      Issue No: Vol. 3, No. 1 (2018)
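    A hedged illustration, not the article's Bayesian framework: propagate reflector-depth uncertainty and a simple accumulation-rate prior through the Nye age-depth relation age(z) = (H/a)·ln(H/(H−z)) to obtain an age distribution for one reflector. Thickness, depth and prior ranges below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(3)
      H = 2200.0                       # ice thickness, m (hypothetical)
      z_obs, z_sigma = 1540.0, 15.0    # reflector depth and range precision, m (hypothetical)
      n_samples = 100_000

      depth = rng.normal(z_obs, z_sigma, n_samples)
      accum = rng.uniform(0.08, 0.16, n_samples)   # accumulation-rate prior, m ice eq. / yr

      age_kyr = (H / accum) * np.log(H / (H - depth)) / 1000.0
      lo, mid, hi = np.percentile(age_kyr, [2.5, 50, 97.5])
      print(f"reflector age ~ {mid:.1f} kyr (95% interval {lo:.1f}-{hi:.1f} kyr)")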
       
  • A discrete view of the Indian monsoon to identify spatial patterns of
           rainfall

    • Authors: Mitra A; Apte A, Govindarajan R, et al.
      First page: dzy009
      Abstract: We propose a representation of the Indian summer monsoon rainfall in terms of a probabilistic model based on a Markov random field, consisting of discrete state variables representing low and high rainfall at grid-scale and daily rainfall patterns across space and in time. These discrete states are conditioned on observed daily gridded rainfall data from the period 2000 to 2007. The model gives us a set of 10 spatial patterns of daily monsoon rainfall over India, which are robust over a range of user-chosen parameters and coherent in space and time. Each day in the monsoon season is assigned precisely one of the spatial patterns, which approximates the spatial distribution of rainfall on that day. Such approximations are quite accurate for nearly 95% of the days. Remarkably, these patterns are representative (with similar accuracy) of the monsoon seasons from 1901 to 2000 as well. Finally, we compare the proposed model with alternative approaches to extract spatial patterns of rainfall, using empirical orthogonal functions and clustering algorithms such as K-means and spectral clustering.
      PubDate: Wed, 17 Oct 2018 00:00:00 GMT
      DOI: 10.1093/climsys/dzy009
      Issue No: Vol. 3, No. 1 (2018)
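    A minimal sketch of the K-means baseline that the article compares against (not the Markov-random-field model itself): cluster daily gridded rainfall vectors into 10 spatial patterns and assign each day exactly one pattern label. The rainfall array is synthetic stand-in data with an invented grid size.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(4)
      n_days, n_gridcells = 976, 357   # hypothetical: 8 seasons x 122 days, flattened grid
      rain = rng.gamma(shape=0.8, scale=5.0, size=(n_days, n_gridcells))  # stand-in rainfall

      km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(rain)
      pattern_of_day = km.labels_        # one of 10 pattern labels per day
      patterns = km.cluster_centers_     # 10 canonical spatial rainfall patterns

      print(np.bincount(pattern_of_day, minlength=10))   # how often each pattern occurs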
       
  • A statistical significance test for sea-level variability

    • Authors: Castellana D; Dijkstra H, Wubs F.
      First page: dzy008
      Abstract: A statistical test is presented to address the null hypothesis that sea-level fluctuations in the open ocean are solely due to additive noise in the wind stress. The effect of high-frequency wind-stress variations can be represented as a correlated additive and multiplicative noise (CAM) stochastic model of sea-level variations. The main novel aspect of this article is the estimation of parameters in the discrete CAM model from time series of sea surface height observations. This leads to a statistical test, similar to the red noise [or AR(1)] test for sea surface temperature, which can be used to attribute specific sea-level variability to other effects than wind-stress noise. We demonstrate the performance of this test using altimeter data at several locations in the open ocean.
      PubDate: Wed, 17 Oct 2018 00:00:00 GMT
      DOI: 10.1093/climsys/dzy008
      Issue No: Vol. 3, No. 1 (2018)
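    A sketch of the simpler red-noise [AR(1)] analogue mentioned in the abstract, not the CAM-based test itself: fit an AR(1) null to a sea-surface-height series and use surrogate series to ask whether a chosen statistic exceeds what noise alone would produce. The input series and the test statistic are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(5)
      # Stand-in monthly sea-surface-height anomalies (assumed input).
      ssh = np.cumsum(rng.normal(size=2000)) * 0.01 + rng.normal(0.0, 1.0, 2000)

      # Fit the AR(1) null: lag-1 autocorrelation and innovation variance.
      phi = np.corrcoef(ssh[:-1], ssh[1:])[0, 1]
      sigma_e = np.sqrt(np.var(ssh) * (1.0 - phi ** 2))

      def stat(x, block=60):
          """Assumed test statistic: variance of non-overlapping 5-yr (60-month) means."""
          nb = len(x) // block
          return np.var(x[:nb * block].reshape(nb, block).mean(axis=1))

      surrogate = np.empty(500)
      for k in range(500):
          s = np.zeros(len(ssh))
          for t in range(1, len(ssh)):
              s[t] = phi * s[t - 1] + rng.normal(0.0, sigma_e)
          surrogate[k] = stat(s)

      p_value = np.mean(surrogate >= stat(ssh))
      print(f"lag-1 = {phi:.2f}, p-value against the AR(1) null = {p_value:.3f}")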
       
  • Extreme value laws and mean squared error growth in dynamical systems

    • Authors: Sterk A; Holland M.
      First page: dzy007
      Abstract:
      Background: Extreme value theory for chaotic deterministic dynamical systems is a rapidly expanding area of research. Given a system and a scalar observable defined on its state space, extreme value theory studies the asymptotic probability distributions of large values attained by the observable along evolutions of the system.
      Objective: The aim of this paper is to study the relation between the statistics and predictability of extremes.
      Methods: Predictability is measured by the mean squared error (MSE), which is estimated from the difference of pairs of forecasts conditional on one of the forecasts exceeding a threshold.
      Results: Under the assumption that pairs of forecast variables satisfy a linear regression model, we show that the MSE can be decomposed into the sum of three terms: a threshold-independent constant, a mean term that always increases with threshold, and a variance term that can either increase, decrease, or stay constant with threshold. Using the generalised Pareto distribution to model excesses over a threshold, we show that the MSE always increases with threshold at sufficiently high thresholds. However, when the forecasts have a negative tail index the MSE can be a decreasing function of threshold at lower thresholds.
      Conclusions: Our method is illustrated by means of four examples: the tent map, the cusp map, and two low-order models for atmospheric regime transitions and the El Niño-Southern Oscillation phenomenon. These examples clearly show that predictability depends on the observable and the invariant measure of the system.
      PubDate: Fri, 05 Oct 2018 00:00:00 GMT
      DOI: 10.1093/climsys/dzy007
      Issue No: Vol. 3, No. 1 (2018)
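    A minimal sketch of the threshold-conditional MSE diagnostic described above, on synthetic forecast pairs satisfying a linear regression model rather than the paper's dynamical systems: compute the MSE of forecast differences conditional on one forecast exceeding a threshold, and fit a generalized Pareto distribution to the excesses.

      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(6)
      n = 50_000
      x = rng.standard_normal(n)                    # "verifying" forecast / observable
      y = 0.8 * x + 0.6 * rng.standard_normal(n)    # paired forecast (linear-regression pair)

      for u in [0.5, 1.0, 1.5, 2.0]:
          mask = x > u
          mse = np.mean((y[mask] - x[mask]) ** 2)   # MSE conditional on exceeding u
          xi, _, scale = genpareto.fit(x[mask] - u, floc=0.0)   # GPD fit to the excesses
          print(f"u={u:.1f}: n={mask.sum():5d}  MSE={mse:.3f}  GPD shape={xi:+.2f}")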
       
  • The Mid-Pleistocene Transition induced by delayed feedback and bistability

    • Authors: Quinn C; Sieber J, von der Heydt A, et al.
      First page: dzy005
      Abstract: The Mid-Pleistocene Transition (MPT), the shift from 41- to 100-kyr glacial–interglacial cycles that occurred roughly 1 Myr ago, is often considered as a change in internal climate dynamics. Here, we revisit the model of Quaternary climate dynamics that was proposed by Saltzman and Maasch (‘Carbon cycle instability as a cause of the late Pleistocene ice age oscillations: modelling the asymmetric response’. Glob Biogeochem Cycle 1988; 2: 177–185; from this point referred to as SM88). We show that it is quantitatively similar to a scalar equation for the ice dynamics alone when the remaining components are combined into a single delayed feedback term. The delay is the sum of the internal time scales of ocean transport and ice sheet dynamics, which is on the order of 10 kyr.
      We find that, in the absence of astronomical forcing, the delayed feedback leads to bistable behaviour, where stable large-amplitude oscillations of ice volume and an equilibrium coexist over a large range of values for the delay. We then apply astronomical forcing using the forcing data for integrated summer insolation at 65 degrees North from Huybers and Eisenman (Integrated Summer Insolation Calculations. NOAA/NCDC Paleoclimatology Program Data Contribution #2006-079, 2006). Since the precise scaling of the forcing amplitude is not known, we perform a systematic study to show how the system response depends on the forcing amplitude. We find that over a wide range of forcing amplitudes, the forcing leads to a switch from small-scale oscillations of 41 kyr to large-amplitude oscillations of roughly 100 kyr without any change of other parameters. The transition in the forced model consistently occurs at about the same time as the MPT (between 1200 and 800 kyr BP) as observed in the data records from Lisiecki and Raymo (‘A Pliocene-Pleistocene stack of 57 globally distributed benthic δ18O records’. Paleoceanography 2005; 20: 1–17). This provides evidence that the MPT could have been primarily forcing-induced. Small additional random disturbances make the forcing-induced transition near 800 kyr BP even more robust.
      We also find that the forced system forgets its initial history during the small-scale oscillations; in particular, nearby initial conditions converge prior to transitioning. In contrast, in the regime of large-amplitude oscillations, the oscillation phase is very sensitive to random perturbations, which has a strong effect on the timing of the deglaciation events.
      PubDate: Sat, 01 Sep 2018 00:00:00 GMT
      DOI: 10.1093/climsys/dzy005
      Issue No: Vol. 3, No. 1 (2018)
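    A hedged numerical sketch of integrating a scalar delayed-feedback equation of the generic form dx/dt = f(x(t)) − β·x(t−τ) by Euler stepping with a history buffer; the specific SM88 reduction and the parameter values used in the article are not reproduced here, only the numerical pattern.

      import numpy as np

      dt = 0.01        # time step, kyr
      tau = 10.0       # delay, kyr (order of the combined ocean/ice-sheet time scale)
      beta = 1.2       # delayed-feedback strength (assumed)
      n = int(2000.0 / dt)
      lag = int(tau / dt)

      x = np.zeros(n)
      x[:lag] = 0.1    # constant history segment as the initial condition (assumed)
      for t in range(lag, n - 1):
          local = x[t] - x[t] ** 3                       # illustrative self-limiting local term
          x[t + 1] = x[t] + dt * (local - beta * x[t - lag])

      # With suitable parameters such delayed-feedback systems exhibit coexistence of an
      # equilibrium and large-amplitude oscillations, the bistability the article exploits.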
       
  • Statistical analysis of coverage error in simple global temperature
           estimators

    • Authors: Cowtan K; Jacobs P, Thorne P, et al.
      First page: dzy003
      Abstract:
      Background: Global mean surface temperature is widely used in the climate literature as a measure of the impact of human activity on the climate system. While the concept of a spatial average is simple, the estimation of that average from spatially incomplete data is not. Correlation between nearby map grid cells means that missing data cannot simply be ignored. Estimators that (often implicitly) assume uncorrelated observations can be biased when naively applied to the observed data; in particular, the commonly used area-weighted average is a biased estimator under these circumstances. Some surface temperature products use different forms of infilling or imputation to estimate temperatures for regions distant from the nearest observation; however, the impacts of such methods on estimation of the global mean are not necessarily obvious or themselves unbiased. This issue was addressed in the 1970s by Ruvim Kagan; however, his work has not been widely adopted, possibly because of its complexity and its dependence on subjective choices in estimating the dependence between geographically proximate observations.
      Objectives: The aim of this work is to present a simple estimator for global mean surface temperature from spatially incomplete data which retains many of the benefits of the work of Kagan, while being fully specified by two equations and a single parameter. The main purpose of the simplified estimator is to better explain to users of temperature data the problems associated with estimating an unbiased global mean from spatially incomplete data; however, the estimator may also be useful for problems with specific requirements for reproducibility and performance.
      Methods: The new estimator is based on generalized least squares and uses the correlation matrix of the observations to weight each observation in accordance with the independent information it contributes. It can be implemented in fewer than 20 lines of computer code. The performance of the estimator is evaluated for different levels of observational coverage using reanalysis data with artificial noise.
      Results: For recent decades the generalized least squares estimator mitigates most of the error associated with the use of a naive area-weighted average. The improvement arises from the fact that coverage bias in the historical temperature record does not arise from an absolute shortage of observations (at least for recent decades), but rather from spatial heterogeneity in the distribution of observations, with some regions being relatively undersampled and others oversampled. The estimator addresses this problem by reducing the weight of the oversampled regions, in contrast to some existing global temperature datasets which extrapolate temperatures into the unobserved regions. The results are almost identical to the use of kriging (Gaussian process interpolation) to impute missing data to global coverage, followed by an area-weighted average of the resulting field. However, the new formulation allows direct diagnosis of the contribution of individual observations and sources of error.
      Conclusions: More sophisticated solutions to the problem of missing data in global temperature estimation already exist. However, the simple estimator presented here, and the error analysis that it enables, demonstrate why such solutions are necessary. The 2013 Fifth Assessment Report of the Intergovernmental Panel on Climate Change discussed a slowdown in warming for the period 1998-2012, quoting the trend in the HadCRUT4 historical temperature dataset from the United Kingdom Meteorological Office in collaboration with the Climatic Research Unit of the University of East Anglia, along with other records. Use of the new estimator for global mean surface temperature would have reduced the apparent slowdown in warming of the early 21st century by one third in the spatially incomplete HadCRUT4 product.
      PubDate: Fri, 20 Jul 2018 00:00:00 GMT
      DOI: 10.1093/climsys/dzy003
      Issue No: Vol. 3, No. 1 (2018)
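    A sketch of the generalized-least-squares mean the abstract describes: each observation is weighted by the independent information it carries, with the correlation matrix built from an assumed exponential decay of correlation with distance (the single free parameter). Station locations and anomalies below are synthetic stand-ins, not the paper's data.

      import numpy as np

      rng = np.random.default_rng(7)
      n_obs = 200
      lat = rng.uniform(-90.0, 90.0, n_obs)
      lon = rng.uniform(-180.0, 180.0, n_obs)
      temps = rng.normal(0.5, 1.0, n_obs)      # synthetic temperature anomalies, degC

      def great_circle_km(lat1, lon1, lat2, lon2):
          p1, p2 = np.radians(lat1), np.radians(lat2)
          dlon = np.radians(lon2 - lon1)
          cosang = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
          return 6371.0 * np.arccos(np.clip(cosang, -1.0, 1.0))

      length_scale = 1000.0                    # the single parameter, km (assumed value)
      d = great_circle_km(lat[:, None], lon[:, None], lat[None, :], lon[None, :])
      C = np.exp(-d / length_scale)            # correlation matrix of the observations

      # GLS estimate of a common mean: w = C^{-1} 1 / (1' C^{-1} 1).
      w = np.linalg.solve(C, np.ones(n_obs))
      w = w / w.sum()
      print(f"GLS mean = {w @ temps:.3f} degC")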
       
  • Chaotic and non-chaotic response to quasiperiodic forcing: limits to
           predictability of ice ages paced by Milankovitch forcing

    • Authors: Ashwin P; David Camp C, von der Heydt A.
      First page: dzy002
      Abstract: It is well known that periodic forcing of a nonlinear system, even of a 2D autonomous system, can produce chaotic responses with sensitive dependence on initial conditions if the forcing induces sufficient stretching and folding of the phase space. Quasiperiodic forcing can similarly produce chaotic responses, where the transition to chaos on changing a parameter can bring the system into regions of strange non-chaotic behaviour. Although it is generally acknowledged that the timings of Pleistocene ice ages are at least partly due to Milankovitch forcing (which may be approximated as quasiperiodic, with energy concentrated near a small number of frequencies), the precise details of what can be inferred about the timings of glaciations and deglaciations from the forcing are still unclear. In this article, we perform a quantitative comparison of the response of several low-order nonlinear conceptual models for these ice ages to various types of quasiperiodic forcing. By computing largest Lyapunov exponents and mean periods, we demonstrate that many models can have a chaotic response to quasiperiodic forcing for a range of forcing amplitudes, even though some of the simplest conceptual models do not. These results suggest that pacing of ice ages to forcing may have only limited determinism.
      PubDate: Tue, 08 May 2018 00:00:00 GMT
      DOI: 10.1093/climsys/dzy002
      Issue No: Vol. 3, No. 1 (2018)
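    A minimal sketch of estimating the largest Lyapunov exponent of a quasiperiodically forced map. It uses the classic GOPY model x_{n+1} = 2σ·tanh(x_n)·cos(2πθ_n), θ_{n+1} = θ_n + ω (mod 1), a standard textbook example of a strange non-chaotic attractor, rather than the conceptual ice-age models studied in the article.

      import numpy as np

      omega = (np.sqrt(5.0) - 1.0) / 2.0     # golden-mean rotation number (quasiperiodic forcing)
      sigma = 1.5
      n_steps, n_transient = 200_000, 10_000

      x, theta = 0.5, 0.1
      log_derivs = []
      for n in range(n_steps):
          dfdx = 2.0 * sigma * np.cos(2.0 * np.pi * theta) / np.cosh(x) ** 2   # d(map)/dx along the orbit
          if n >= n_transient:
              log_derivs.append(np.log(abs(dfdx)))
          x = 2.0 * sigma * np.tanh(x) * np.cos(2.0 * np.pi * theta)
          theta = (theta + omega) % 1.0

      lyap = float(np.mean(log_derivs))       # largest Lyapunov exponent in the x direction
      print(f"largest Lyapunov exponent ~ {lyap:.3f} (negative => non-chaotic response)")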
       
  • Data-adaptive harmonic decomposition and prediction of Arctic sea ice
           extent

    • Authors: Kondrashov D; Chekroun M, Ghil M.
      First page: dzy001
      Abstract: Decline in the Arctic sea ice extent (SIE) is an area of active scientific research with profound socio-economic implications. Of particular interest are reliable methods for SIE forecasting on subseasonal time scales, in particular from early summer into fall, when sea ice coverage in the Arctic reaches its minimum. Here, we apply the recent data-adaptive harmonic (DAH) technique of Chekroun and Kondrashov (2017, Chaos, 27) for the description, modeling and prediction of the Multisensor Analyzed Sea Ice Extent (MASIE, 2006–2016) data set. The DAH decomposition of MASIE identifies narrowband, spatio-temporal data-adaptive modes over four key Arctic regions. The time evolution of the DAH coefficients of these modes can be modelled and predicted by using a set of coupled Stuart–Landau stochastic differential equations that capture the modes’ frequencies and amplitude modulation in time. Retrospective forecasts show that our resulting multilayer Stuart–Landau model (MSLM) is quite skilful in predicting September SIE compared to year-to-year persistence; moreover, the DAH–MSLM approach provided accurate real-time prediction that was highly competitive for the 2016–2017 Sea Ice Outlook.
      PubDate: Tue, 27 Mar 2018 00:00:00 GMT
      DOI: 10.1093/climsys/dzy001
      Issue No: Vol. 3, No. 1 (2018)
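    A hedged sketch of a single stochastic Stuart-Landau oscillator integrated with Euler-Maruyama, the kind of building block a multilayer Stuart-Landau model couples to evolve each DAH mode's amplitude and phase; the parameters are illustrative, not those fitted to MASIE.

      import numpy as np

      rng = np.random.default_rng(8)
      mu, omega, gamma = 0.3, 2.0 * np.pi / 12.0, 0.2   # growth rate, frequency, nonlinearity (assumed)
      sigma = 0.05                                      # stochastic forcing amplitude (assumed)
      dt, n = 0.1, 5000

      z = np.empty(n, dtype=complex)
      z[0] = 0.1 + 0.0j
      for t in range(n - 1):
          drift = (mu + 1j * omega) * z[t] - gamma * abs(z[t]) ** 2 * z[t]
          noise = sigma * (rng.normal() + 1j * rng.normal()) * np.sqrt(dt)
          z[t + 1] = z[t] + drift * dt + noise

      # Re(z) behaves like one narrowband DAH coefficient: an oscillation whose
      # amplitude and phase are stochastically modulated in time.
      print(z.real[:5])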
       
 