Articles

We list the CORSSA articles below.

CORSSA articles in Theme I

Author: David Vere-Jones

Abstract: This article originated as a lecture at the Statistical Seismology V meeting, held in Erice, Italy, in 2007. The lecture sought to define the role of statistics and stochastic models in furthering our understanding of earthquake processes and in solving practical problems related to earthquake occurrence. Given the importance of such tasks in our field, the lecture concluded with comments on how to include statistics in the education of seismologists and with some perspectives on the future of this field.

Download full article

Authors: Andrew J. Michael & Stefan Wiemer

Abstract: Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazard assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools makes it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that readers can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA, as well as its structure and contents.

Download full article

CORSSA articles in Theme III

Authors: Mark Naylor, Katerina Orfanogiannaki, and David Harte

Abstract: This article will take you through an exploratory analysis of data contained in earthquake catalogues. The aim is to provide the reader with ideas about how to start investigating the properties of a new dataset in a straightforward and rigorous way. We start to introduce more advanced concepts, such as how to determine catalogue completeness, but reserve detailed descriptions of such advanced methodologies to other articles.

The target audience is undergraduate and graduate students who would like to use SSLib (Harte and Brownrigg 2010) and the R language (R Development Core Team 2010) to explore earthquake data. We have chosen R because it is freely available on all platforms, which we hope makes the tutorial as accessible as possible. This article focuses on data exploration rather than being a comprehensive guide to the software.

You will learn about basic plotting tools that can be used to explore the properties of earthquake data and to visually identify difficulties in choosing a subset of the total catalogue for subsequent analysis. This section provides an introductory overview but does not provide technical solutions to those problems.
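The article works in R with SSLib; as a rough illustration of the kind of first-look plots it describes, here is a minimal Python/matplotlib sketch (the file name and column layout are assumptions, and this is not the article's own code):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical catalog file with columns: decimal_year, longitude, latitude, depth_km, magnitude
t, lon, lat, depth, mag = np.loadtxt("catalog.csv", delimiter=",", skiprows=1, unpack=True)

fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))

# Magnitude versus time: reveals changes in detection level and gaps in reporting.
axes[0].plot(t, mag, ".", markersize=2)
axes[0].set_xlabel("time (yr)"); axes[0].set_ylabel("magnitude")

# Cumulative number of events: kinks often indicate network or procedural changes.
axes[1].plot(np.sort(t), np.arange(1, len(t) + 1))
axes[1].set_xlabel("time (yr)"); axes[1].set_ylabel("cumulative number")

# Cumulative frequency-magnitude distribution: the roll-off at low magnitudes
# hints at incompleteness (treated in detail in other articles).
bins = np.arange(mag.min(), mag.max() + 0.1, 0.1)
counts, _ = np.histogram(mag, bins=bins)
n_cum = counts[::-1].cumsum()[::-1]
axes[2].semilogy(bins[:-1], n_cum, "s", markersize=3)
axes[2].set_xlabel("magnitude"); axes[2].set_ylabel("N(>=M)")

plt.tight_layout()
plt.show()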

Download full article

CORSSA articles in Theme IV

Authors: Stephan Husen & Jeanne Hardebeck

Abstract: Earthquake location catalogs are not an exact representation of the true earthquake locations. They contain random errors, for example from errors in the arrival time picks, as well as systematic biases. The most important source of systematic error in earthquake locations is the inherent coupling of earthquake locations to the seismic velocity structure of the Earth. Random errors may be accounted for in formal uncertainty estimates, but systematic biases are not, and they must be considered based on knowledge of how the earthquakes were located. In this chapter we discuss earthquake location methods, methods for estimating formal uncertainties, and systematic biases in earthquake location catalogs, and we give readers guidance on how to identify good-quality earthquake locations.
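As a rough illustration of where formal uncertainties come from, the sketch below performs a linearized least-squares epicenter estimate in a homogeneous velocity model and derives the solution covariance from an assumed picking error. The station geometry, velocity, and error level are invented for illustration; real location codes also solve for depth and origin time and use 3-D velocity models, and this is not the procedure of any particular agency.

import numpy as np

v = 6.0                                        # assumed P-wave velocity (km/s)
stations = np.array([[0., 0.], [30., 5.], [10., 40.], [-20., 25.]])   # station x, y (km)
true_epicenter = np.array([12., 18.])
sigma_pick = 0.05                              # assumed arrival-time picking error (s)

rng = np.random.default_rng(0)
tt_obs = (np.linalg.norm(stations - true_epicenter, axis=1) / v
          + rng.normal(0.0, sigma_pick, len(stations)))

x = np.array([0., 0.])                         # starting location
for _ in range(10):                            # Gauss-Newton iterations
    dist = np.linalg.norm(stations - x, axis=1)
    residuals = tt_obs - dist / v              # travel-time residuals
    G = (x - stations) / (v * dist[:, None])   # partial derivatives of travel time w.r.t. x, y
    dx, *_ = np.linalg.lstsq(G, residuals, rcond=None)
    x = x + dx

# Formal covariance of the estimate: sigma^2 * (G^T G)^(-1). Note that it reflects only
# the random picking error, not any bias from the velocity model.
cov = sigma_pick**2 * np.linalg.inv(G.T @ G)
print("estimated epicenter:", x)
print("1-sigma formal errors (km):", np.sqrt(np.diag(cov)))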

Download full article

Authors: Arnaud Mignan and Jochen Woessner

Abstract: Assessing the magnitude of completeness Mc of instrumental earthquake catalogs is an essential and compulsory step for any seismicity analysis. Mc is defined as the lowest magnitude at which all earthquakes in a space-time volume are detected. A correct estimate of Mc is crucial: a value that is too high leads to under-sampling by discarding usable data, while a value that is too low leads to erroneous seismicity parameter values, and thus to a biased analysis, by using incomplete data. In this article, we describe peer-reviewed techniques to estimate and map Mc. We provide examples with real and synthetic earthquake catalogs to illustrate features of the various methods and give the pros and cons of each method. With this article at hand, the reader will get an overview of approaches to assess Mc, understand why Mc evaluation is essential and a non-trivial task, and hopefully be able to select the most appropriate Mc method to include in their seismicity studies.
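One of the simplest techniques in this family is the maximum-curvature method, which takes Mc as the magnitude bin with the highest non-cumulative event count, often with a small empirical correction added. The sketch below is a minimal illustration of that idea only, not of the full set of methods and corrections covered in the article.

import numpy as np

def mc_maxc(magnitudes, bin_width=0.1, correction=0.2):
    """Maximum-curvature estimate of Mc: the most populated magnitude bin, plus an
    empirical correction, since the raw value tends to underestimate Mc."""
    mags = np.asarray(magnitudes)
    edges = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=edges)
    return edges[np.argmax(counts)] + correction

# Synthetic Gutenberg-Richter catalog (b = 1) with detection falling off below M 2.5;
# the estimate should land near 2.5 plus the correction.
rng = np.random.default_rng(1)
mags = rng.exponential(scale=1.0 / np.log(10), size=20000)
detected = (mags >= 2.5) | (rng.random(mags.size) < 10.0 ** (-2.0 * (2.5 - mags)))
print("Mc (maxc):", mc_maxc(mags[detected]))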

Download full article

Authors: Laura Gulia, Stefan Wiemer, and Max Wyss

Abstract: Man-made contamination and heterogeneity of reporting are present in all earthquake catalogs. Often they are quite strong and introduce errors into statistical analyses of the seismicity. We discuss three types of artifacts in this chapter: the presence of reported events that are not earthquakes but explosions; heterogeneity in the resolution of small events as a function of space and time; and inadvertent changes of the magnitude scale. These problems must be identified, mapped, and excluded from the catalog before any meaningful statistical analysis can be performed. Explosions can be identified by comparing the rate of daytime to nighttime events, because quarries and road construction operate only during the day and often at specific hours.
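A minimal sketch of the day-to-night comparison mentioned above: bin the events by local hour of day and compare the hourly rate during working hours with the rate outside them. The hour ranges and synthetic data are assumptions for illustration; a full analysis would also map the ratio spatially and check weekday/weekend patterns.

import numpy as np

def day_night_ratio(hours_local, day=(7, 18)):
    """Ratio of the hourly event rate during assumed working hours to the rate outside
    them; values well above 1 point to quarry or construction blasts, since natural
    seismicity should be roughly uniform over the day."""
    h = np.asarray(hours_local) % 24
    is_day = (h >= day[0]) & (h < day[1])
    n_day_hours = day[1] - day[0]
    return (is_day.sum() / n_day_hours) / ((~is_day).sum() / (24 - n_day_hours))

rng = np.random.default_rng(2)
natural = rng.uniform(0, 24, 1000)                 # hour of day, uniform for natural events
blasts = rng.uniform(9, 16, 300)                   # synthetic daytime-only explosions
print(day_night_ratio(natural))
print(day_night_ratio(np.concatenate([natural, blasts])))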

Spatial heterogeneity in the reporting of small events comes about because many stations record small earthquakes that occur near the center of a seismograph network, but only relatively large events can be located outside the network, for example offshore. To deal with this problem, the minimum magnitude of complete reporting, Mc, has to be mapped. Based on the map of Mc, one needs to define the area and the corresponding Mc whose choice leads to a homogeneous catalog. There are two approaches to selecting an Mc and its corresponding area of validity: if one wishes to work with the maximum number of earthquakes per area for statistical power, one needs to eliminate from consideration areas of inferior reporting and use a small Mc(inside), appropriate for the inside of the network. However, if one wishes to include areas outside the network, such as offshore areas, then one has to cull the catalog by deleting all small events from the core of the network and accept only earthquakes with magnitude larger than Mc(outside). In this case, one pays with a loss of statistical power for the advantage of covering a larger area.

As a function of time, changes in hardware, software, and reporting procedures bring about two types of changes in the catalog. (1) The reporting of small earthquakes improves with time because seismograph stations are added or detection procedures are improved. (2) The magnitude scale is inadvertently changed due to changes in hardware, software, or analysis routines. The first problem is dealt with by calculating the mean Mc as a function of time in the area chosen for analysis. This will usually identify downward steps of Mc (better resolution with time) at fairly discrete times. Once these steps are identified, one is faced with a choice: a homogeneous catalog that covers a long period requires a relatively large Mc(long time); this way one gains coverage in time, but pays with a loss of statistical power, because small events, which are completely reported in recent times, have to be eliminated. On the other hand, if one wishes to work with a small Mc(recent), then one must exclude the older parts of the catalog in which Mc(old) is high.

To define the magnitude scale in a local or regional area in such a way that it corresponds to an international standard is not trivial, nor is it trivial to keep the scale constant as a function of time when hardware, software, and reporting procedures keep changing. Resulting changes are more prominent in societies characterized by high intellectual mobility, and may not be found in totalitarian societies, where observatory procedures are adhered to with military precision. There are two types of changes: simple magnitude shifts, and stretches (or compressions) of the scale. Here we show how to identify changes of the magnitude scale and how to correct for them, such that the catalog approaches better homogeneity, a necessity for statistical analysis.
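As a rough illustration of checking for a simple magnitude shift at a suspected change time, the sketch below estimates the shift as the offset that best aligns the cumulative frequency-magnitude curves (annual rates above magnitude) of the two periods. It illustrates the idea only; by itself it cannot separate a genuine rate change from a shift of the scale, and stretches or compressions of the scale require the fuller comparison described in the article.

import numpy as np

def cumulative_rate(mags, duration_yr, m_grid):
    """Annual rate of events with magnitude >= each value in m_grid."""
    return np.array([(mags >= m).sum() for m in m_grid]) / duration_yr

def estimate_shift(m1, dur1, m2, dur2, mc, dm_grid=np.arange(-0.5, 0.51, 0.01)):
    """Offset dm that best aligns the log cumulative rates of period 2 with period 1
    above a common completeness magnitude mc."""
    m_grid = np.arange(mc, min(m1.max(), m2.max()) - 0.5, 0.1)
    log_r1 = np.log10(cumulative_rate(m1, dur1, m_grid))
    misfit = [np.mean((log_r1 - np.log10(cumulative_rate(m2 - dm, dur2, m_grid))) ** 2)
              for dm in dm_grid]
    return dm_grid[np.argmin(misfit)]

# Synthetic check: the second period's magnitudes are shifted up by 0.2
rng = np.random.default_rng(4)
m1 = rng.exponential(1.0 / np.log(10), 5000) + 1.5
m2 = rng.exponential(1.0 / np.log(10), 5000) + 1.5 + 0.2
print(estimate_shift(m1, 10.0, m2, 10.0, mc=2.0))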

Download full article

Authors: Jochen Woessner, Jeanne L. Hardebeck, and Egill Hauksson

Abstract: Seismicity catalogs are one of the basic products that an agency running a seismic network provides, and they are the starting point for most studies related to seismicity. A seismicity catalog is a parametric description of earthquakes, with each entry describing one earthquake: for example, each earthquake has a location, origin time, and magnitude. At first glance, this seems to be an easy data set to understand and use. In reality, each seismicity catalog is the product of complex procedures that start with the configuration of the seismic network, the selection of sensors and software to process data, and the selection of a location procedure and a magnitude scale. The human-selected computational tools and defined processing steps, combined with the spatial and temporal heterogeneity of the seismic network and the seismicity, make seismicity catalogs a heterogeneous data set with as many natural as human-induced obstacles. This article is intended to provide essential background on how instrumental seismicity catalogs are generated, and it focuses on providing insights into the high value as well as the limitations of such data sets.
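As a toy illustration of the parametric description mentioned above, a single catalog entry can be represented as follows; the fields and values are chosen for illustration only, and real catalogs carry many more attributes (uncertainties, quality indicators, agency and network identifiers, phase data, and so on).

from dataclasses import dataclass
from datetime import datetime

@dataclass
class CatalogEntry:
    origin_time: datetime
    latitude: float
    longitude: float
    depth_km: float
    magnitude: float
    magnitude_type: str = "ML"     # the magnitude scale matters when combining catalogs

# Purely illustrative values, not a real catalog entry
event = CatalogEntry(datetime(2020, 1, 1, 0, 0, 0), 46.0, 8.0, 10.0, 3.2, "ML")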

Download full article

CORSSA articles in Theme V

Authors: Jiancang Zhuang, David Harte, Maximilian J. Werner, Sebastian Hainzl, and Shiyong Zhou

Abstract: In this and subsequent articles, we present an overview of some models of seismicity that have been developed to describe, analyze and forecast the probabilities of earthquake occurrences. The models that we focus on are not only instrumental in the understanding of seismicity patterns, but also important tools for time-independent and time-dependent seismic hazard analysis. We intend to provide a general and probabilistic framework for the occurrence of earthquakes. In this article, we begin with a survey of simple, one-dimensional temporal models such as the Poisson and renewal models. Despite their simplicity, they remain highly relevant to studies of the recurrence of large earthquakes on individual faults, to the debate about the existence of seismic gaps, and also to probabilistic seismic hazard analysis. We then continue with more general temporal occurrence models such as the stress-release model, the Omori-Utsu formula, and the ETAS (Epidemic Type Aftershock Sequence) model.
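As a sketch of the temporal ETAS model named above, the function below evaluates a conditional intensity of the form mu plus a sum over past events of an Omori-Utsu response whose productivity grows exponentially with magnitude; the parameter values a user would pass in are illustrative, not fitted, and the article gives the full formulation and estimation details.

import numpy as np

def etas_intensity(t, event_times, event_mags, mu, K, alpha, c, p, m0):
    """lambda(t | history) = mu + sum_{t_i < t} K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p"""
    event_times = np.asarray(event_times)
    event_mags = np.asarray(event_mags)
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m0)) / (dt + c) ** p)

# With a single past event and alpha = 0 this reduces to the Omori-Utsu aftershock
# rate K / (t - t_mainshock + c)**p on top of the background mu.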

Download full article

Authors: Jiancang Zhuang, Maximilian J. Werner, Sebastian Hainzl, David Harte, and Shiyong Zhou

Abstract: In this article, we present a review of spatiotemporal point-process models, including the epidemic type aftershock sequence (ETAS) model, the EEPAS (Every Earthquake is Precursor According to Scale) model, the double branching model, and related techniques. Here we emphasize the ETAS model, because it has been well studied and is currently a standard model for testing hypotheses related to seismic activity.
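In the notation commonly used for such space-time branching models (the article gives the specific parameterizations), the conditional intensity can be written as

\lambda(t, x, y \mid \mathcal{H}_t) = \mu(x, y) + \sum_{i:\, t_i < t} \kappa(m_i)\, g(t - t_i)\, f(x - x_i, y - y_i; m_i),

where \mu(x, y) is the background rate, \kappa(m) is the expected number of direct offspring of an event of magnitude m, g is a normalized Omori-Utsu-type time density, and f is a (possibly magnitude-dependent) spatial kernel.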

Download full article

Authors: Sebastian Hainzl, Sandy Steacy, and David Marsan

Abstract: Our fundamental physical understanding of earthquake generation is that stress build-up leads to earthquakes within the brittle crust, rupturing mainly pre-existing crustal faults. While absolute stresses are difficult to estimate, the stress changes induced by earthquakes can be calculated, and these have been shown to affect the location and timing of subsequent events. Furthermore, constitutive laws derived from laboratory experiments can be used to model earthquake nucleation on faults and rupture propagation. Exploiting this physical knowledge, quantitative seismicity models have been built. In this article, we discuss the spatiotemporal seismicity model based on the rate-and-state dependent frictional response of fault populations introduced by Dieterich (1994). This model has been shown to explain a variety of observations, e.g., the Omori-Utsu law for aftershocks. We focus on the following issues: (i) necessary input information; (ii) model implementation; (iii) data-driven parameter estimation; and (iv) consideration of the involved epistemic and aleatoric uncertainties.
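One widely quoted special case of this model is the seismicity-rate response to a sudden Coulomb stress step; the sketch below evaluates that closed-form expression with purely illustrative parameter values, and is not a substitute for the full implementation and uncertainty treatment discussed in the article.

import numpy as np

def dieterich_rate(t, dtau, r_background, A_sigma, tau_dot):
    """Seismicity rate at time t after a stress step dtau (same units as A_sigma), for a
    fault population driven at constant stressing rate tau_dot; r_background is the
    steady-state background rate and t_a = A_sigma / tau_dot the relaxation time."""
    t_a = A_sigma / tau_dot
    return r_background / ((np.exp(-dtau / A_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)

# For dtau > 0 the rate jumps by a factor exp(dtau / A_sigma) at t = 0 and then decays
# in an Omori-Utsu-like fashion back to the background rate over a time of order t_a.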

Download full article

Author: Takaki Iwata

Abstract: We often observe that earthquakes are triggered by the external oscillation of stress/strain, and typical causes of the oscillation are the earth tides and seismic waves of a large earthquake. As no clear physical models of these types of earthquake-triggering events have been developed, statistical approaches are used for detection and discussion of the triggering effects. This article presents a review of suggestive physical processes, common statistical techniques, and recent developments related to this issue.
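One classical technique in this context is the Schuster test: each earthquake is assigned a phase of the periodic stressing (for example, the tidal cycle), and the test asks whether the phases are uniformly distributed. The sketch below shows only this test; assigning the phases requires a tidal or seismic-wave stress calculation that is not shown here.

import numpy as np

def schuster_p_value(phases_rad):
    """p-value for the null hypothesis that event phases are uniform (no triggering);
    small values indicate clustering of events at a preferred phase."""
    phases = np.asarray(phases_rad)
    n = phases.size
    d2 = np.sum(np.cos(phases)) ** 2 + np.sum(np.sin(phases)) ** 2
    return np.exp(-d2 / n)

rng = np.random.default_rng(3)
print(schuster_p_value(rng.uniform(0.0, 2.0 * np.pi, 500)))   # uniform phases: large p
print(schuster_p_value(rng.vonmises(0.0, 0.5, 500)))          # clustered phases: small p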

Download full article

Authors: David Marsan and Max Wyss

Abstract: Earthquake time series can be characterized by the rate of occurrence, which gives the number of earthquakes per unit time. Occurrence rates generally evolve through time; they strongly increase immediately after a large shock, for example. Understanding and modeling this time evolution is a fundamental issue in seismology, particularly for prediction purposes.

Seismicity rate changes can be subtle, with a slow time evolution or a gradual onset long after the cause. Therefore, it has proved problematic in many instances to assess whether a change in rate is real, i.e., whether or not it is statistically significant. Here we review and describe existing methods for measuring seismicity rate changes and for testing the significance of these changes. Null hypotheses of 'no change', which depend on the context, are formulated. Statistics are then defined to quantify the departure from this null hypothesis. We illustrate these methods with several examples.
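One commonly used measure in this spirit is the beta statistic, which compares the number of events observed in a target window with the number expected if the average rate of the whole period also held in that window, normalized by the corresponding binomial standard deviation. The sketch below is a minimal illustration; as the article discusses, the appropriate null model depends on the context (for example, aftershock clustering violates the simple Poisson assumption behind this normalization).

import numpy as np

def beta_statistic(event_times, window_start, window_end, t_start, t_end):
    """Beta statistic for the rate in [window_start, window_end) relative to the average
    rate over [t_start, t_end); |beta| of roughly 2 or more is often read as significant."""
    t = np.asarray(event_times)
    n_total = np.sum((t >= t_start) & (t < t_end))
    n_window = np.sum((t >= window_start) & (t < window_end))
    frac = (window_end - window_start) / (t_end - t_start)
    expected = n_total * frac
    return (n_window - expected) / np.sqrt(n_total * frac * (1.0 - frac))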

Download full article

Authors: Thomas van Stiphout, Jiancang Zhuang, and David Marsan

Abstract: Seismicity declustering, the process of separating a seismicity catalog into foreshocks, mainshocks, and aftershocks, is widely used in seismology, in particular for seismic hazard assessment and in earthquake prediction models. Several declustering algorithms have been proposed over the years. Up to now, most users have applied either the algorithm of Gardner and Knopoff (1974) or that of Reasenberg (1985), mainly because of the availability of the source codes and the simplicity of the algorithms. However, declustering algorithms are often applied blindly, without scrutinizing parameter values or results. In this article we present a broad range of algorithms, and we highlight the fundamentals of seismicity declustering and possible pitfalls. For most algorithms, the source code, or information on how to access it, is available on the CORSSA website.
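As a sketch of the window-method idea behind algorithms like that of Gardner and Knopoff (1974), the function below flags events falling inside a magnitude-dependent space-time window after a larger event. The window function here is a placeholder; the actual published window sizes, and the other algorithms discussed in the article, should be taken from the original papers or the code on the CORSSA website.

import numpy as np

def decluster_windows(t_days, x_km, y_km, mags, window):
    """Return a boolean mask that is True for events kept as mainshocks/background."""
    t_days, x_km, y_km, mags = map(np.asarray, (t_days, x_km, y_km, mags))
    keep = np.ones(len(mags), dtype=bool)
    for i in np.argsort(mags)[::-1]:               # process the largest events first
        if not keep[i]:
            continue
        t_win, r_win = window(mags[i])             # time (days) and distance (km) window
        dt = t_days - t_days[i]
        r = np.hypot(x_km - x_km[i], y_km - y_km[i])
        keep[(dt > 0) & (dt <= t_win) & (r <= r_win) & (mags < mags[i])] = False
    return keep

# Placeholder window growing with magnitude -- illustrative only, not the published values.
example_window = lambda m: (10.0 ** (0.5 * m - 1.0), 10.0 ** (0.3 * m + 0.5))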

Download full article

Authors: Jiancang Zhuang and Sarah Touati

Abstract: Starting from basic simulation procedures for random variables, this article presents the theories and techniques related to the simulation of general point process models that are specified by the conditional intensity. In particular, we focus on the simulation of point process models for quantifying the characteristics of the occurrence processes of earthquakes, including the Poisson model (homogeneous or nonhomogeneous), the recurrence (renewal) models, the stress release model and the ETAS models.
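As a minimal illustration of the thinning idea that underlies many of these simulation procedures, the sketch below simulates a nonhomogeneous Poisson process whose intensity is bounded above by a constant; for history-dependent models such as ETAS, the bound has to be recomputed from the current history after each candidate point, as described in the article.

import numpy as np

def simulate_by_thinning(intensity, t_end, lambda_max, rng=None):
    """Simulate event times on [0, t_end) for an intensity function bounded by lambda_max."""
    rng = np.random.default_rng() if rng is None else rng
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lambda_max)     # candidate from the bounding Poisson process
        if t >= t_end:
            return np.array(times)
        if rng.random() < intensity(t) / lambda_max:
            times.append(t)                        # accept with probability intensity / bound

# Example: a background rate plus an Omori-Utsu-like decaying rate (illustrative values).
rate = lambda t: 0.5 + 20.0 / (t + 1.0)
print(len(simulate_by_thinning(rate, 100.0, lambda_max=20.5)))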

Download full article

CORSSA articles in Theme VI

Author: J. Douglas Zechar

Motivation: One of the cornerstones of science is the ability to accurately and reliably forecast natural phenomena. Unfortunately, earthquake prediction research has been plagued by controversy, and it remains an outstanding problem; for a review of some of the historical challenges, see Sue Hough's book Predicting the Unpredictable. The motivation for the work that I describe in this article is fairly self-evident: we want to know if an earthquake forecast or a set of earthquake predictions is particularly "good." Therefore, our fundamental objectives are to define and to quantify "good."

In this article, I emphasize the analysis of statements regarding future earthquake occurrence (i.e., characteristics such as origin time, epicenter, and magnitude), but many of the concepts discussed are applicable to other earthquake studies (e.g., probabilistic loss estimates, earthquake early warning). A broader motivation of this article is to encourage you to exercise rigorous hypothesis testing methods whenever the research problem allows.

Ending point: The techniques described in this article will allow you to quantify the predictive skill of an earthquake forecast or of a set of earthquake predictions. You will be able to check if an observed set of earthquakes is consistent with a forecast, and you will have some tools to compare two forecasts. Using the accompanying code and example data, you can execute each of the test methods described in this article.
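As one concrete flavor of the consistency checks mentioned here, the sketch below implements a simple number test under a Poisson assumption: it asks how probable it would be to observe at most, or at least, the observed number of target earthquakes if the forecast's total expected number were correct. It illustrates the idea only; the article and its accompanying code define the full suite of tests and their exact conventions.

from scipy.stats import poisson

def n_test(n_observed, n_forecast):
    """Probabilities of observing at most / at least n_observed events under a Poisson
    distribution with mean n_forecast; very small values flag inconsistency."""
    p_at_most = poisson.cdf(n_observed, n_forecast)
    p_at_least = 1.0 - poisson.cdf(n_observed - 1, n_forecast)
    return p_at_most, p_at_least

print(n_test(n_observed=12, n_forecast=20.0))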

Read the article