
Seismic tomography

Seismic tomography or seismotomography is a technique for imaging the subsurface of the Earth with seismic waves produced by earthquakes or explosions. P-, S-, and surface waves can be used for tomographic models of different resolutions based on seismic wavelength, wave source distance, and the seismograph array coverage.[1] The data received at seismometers are used to solve an inverse problem, wherein the locations of reflection and refraction of the wave paths are determined. This solution can be used to create 3D images of velocity anomalies which may be interpreted as structural, thermal, or compositional variations. Geoscientists use these images to better understand core, mantle, and plate tectonic processes.

Theory

Tomography is solved as an inverse problem. Seismic travel time data are compared to an initial Earth model, and the model is modified until the best possible fit between the model predictions and the observed data is found. Seismic waves would travel in straight lines if Earth were of uniform composition, but compositional layering, tectonic structure, and thermal variations reflect and refract seismic waves. The location and magnitude of these variations can be calculated by the inversion process, although solutions to tomographic inversions are non-unique.
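As a minimal sketch of what one such inversion step can look like (assuming a simple gridded model, straight ray paths, and a damped least-squares solver; the matrix sizes, values, and damping below are invented for illustration), a linearized travel-time inversion can be written as:

```python
import numpy as np

# Hypothetical linearized travel-time inversion: d = G m, where
#   G[i, j] = length of ray i inside model cell j (km),
#   m[j]    = slowness perturbation in cell j (s/km),
#   d[i]    = observed-minus-predicted travel time for ray i (s).
rng = np.random.default_rng(0)
n_rays, n_cells = 200, 50
G = rng.uniform(0, 10, size=(n_rays, n_cells))    # synthetic ray-path lengths

m_true = np.zeros(n_cells)
m_true[20:25] = 0.01                              # a slow anomaly in a few cells
d = G @ m_true + rng.normal(0, 0.01, n_rays)      # residuals with observational noise

# Damped least squares: minimize ||G m - d||^2 + eps^2 ||m||^2.
# The damping term regularizes the otherwise non-unique inversion.
eps = 1.0
m_est = np.linalg.solve(G.T @ G + eps**2 * np.eye(n_cells), G.T @ d)
```

In practice the ray paths, and hence the matrix relating model cells to travel times, depend on the current model, so such a solve is repeated with updated ray tracing until the fit stops improving.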


Seismic tomography is similar to medical x-ray computed tomography (CT scan) in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of traveltime differences. Seismic tomography must deal with curved ray paths that are reflected and refracted within the Earth, and with uncertainty in the location of the earthquake hypocenter, whereas CT scans use straight-line x-ray paths from a known source.[2]

History

Seismic tomography requires large datasets of seismograms and well-located earthquake or explosion sources. These became more widely available in the 1960s with the expansion of global seismic networks, and in the 1970s when digital seismograph data archives were established. These developments occurred concurrently with advancements in computing power that were required to solve inverse problems and generate theoretical seismograms for model testing.[3]


In 1977, P-wave delay times were used to create the first seismic array-scale 2D map of seismic velocity.[4] In the same year, P-wave data were used to determine 150 spherical harmonic coefficients for velocity anomalies in the mantle.[1] The first model using iterative techniques, which are required when there is a large number of unknowns, was produced in 1984. It built upon the first radially anisotropic model of the Earth, which provided the initial reference frame against which tomographic models could be compared during iteration.[5] Initial models had a resolution of ~3000 to 5000 km, compared with the few-hundred-kilometer resolution of current models.[6][7][8]


Seismic tomographic models improve with advancements in computing and expansion of seismic networks. Recent models of global body waves used over 10⁷ traveltimes to model 10⁵ to 10⁶ unknowns.[9][6]

Diffraction and wave-equation tomography use the full waveform rather than just the first arrival times. Inverting the amplitudes and phases of all arrivals provides more detailed density information than transmission traveltimes alone. Despite their theoretical appeal, these methods are not widely employed because of their computational expense and the difficulty of the inversions.

Reflection tomography originated with exploration geophysics. It uses an artificial source to resolve small-scale features at crustal depths.[10] Wide-angle tomography is similar, but with a wide source-to-receiver offset. This allows for the detection of seismic waves refracted from sub-crustal depths and can determine continental architecture and details of plate margins. These two methods are often used together.

Local earthquake tomography is used in seismically active regions with sufficient seismometer coverage. Given the proximity between sources and receivers, precise earthquake focus locations must be known. This requires the simultaneous iteration of both structure and focus locations in model calculations.[9]
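A toy sketch of that coupled iteration is shown below (Python/NumPy; the station layout, single unknown velocity, and step size are all invented for illustration, whereas real local earthquake tomography inverts for a 3-D velocity model and 3-D hypocenters). It alternates between updating the medium for a fixed source location and relocating the source for a fixed medium:

```python
import numpy as np

# Toy alternating inversion: refine an epicenter and one medium velocity
# from straight-ray travel times. Station layout and values are invented.
rng = np.random.default_rng(1)
stations = rng.uniform(-50, 50, size=(8, 2))        # station x, y (km)
true_xy, true_v = np.array([5.0, -3.0]), 6.0        # "true" epicenter (km) and velocity (km/s)
t_obs = np.linalg.norm(stations - true_xy, axis=1) / true_v

xy, v = np.zeros(2), 5.0                            # starting guesses
for _ in range(50):
    dist = np.linalg.norm(stations - xy, axis=1)
    v = np.mean(dist / t_obs)                       # "structure" update for a fixed location
    resid = dist / v - t_obs                        # travel-time misfit for a fixed velocity
    grad = ((xy[:, None] - stations.T) / (dist * v)) @ resid
    xy = xy - 2.0 * grad                            # gradient-descent relocation step

print(xy, v)  # should approach the true epicenter and velocity
```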

Teleseismic tomography uses waves from distant earthquakes that deflect upwards to a local seismic array. The models can reach depths comparable to the array aperture, typically a few hundred kilometers, allowing imaging of the crust and lithosphere. Because the waves travel within about 30° of vertical, compact features are distorted (stretched) vertically in the images.[11]

Limitations

Global seismic networks have expanded steadily since the 1960s, but are still concentrated on continents and in seismically active regions. Oceans, particularly in the southern hemisphere, are under-covered.[11] Tomographic models in these areas will improve when more data becomes available. The uneven distribution of earthquakes naturally biases models to better resolution in seismically active regions.


The type of wave used in a model limits the resolution it can achieve. Longer wavelengths are able to penetrate deeper into the Earth, but can only be used to resolve large features. Finer resolution can be achieved with surface waves, with the trade-off that they cannot be used in models of the deep mantle. The disparity between wavelength and feature scale causes anomalies to appear reduced in magnitude and size in images. P- and S-wave models respond differently to the types of anomalies depending on the driving material property. Models based on first arrival times naturally prefer faster pathways, which gives them lower resolution of slow (often hot) features.[9] Shallow models must also account for the significant lateral velocity variations in continental crust.


Seismic tomography images only the present-day velocity anomalies. Earlier structures are unknowable, and the slow rates of movement in the subsurface (mm to cm per year) prohibit resolution of changes over modern timescales.[16]


Tomographic solutions are non-unique. Although statistical methods can be used to analyze the validity of a model, unresolvable uncertainty remains.[9] This makes it difficult to compare the validity of different model results.


Computing power limits the amount of seismic data, number of unknowns, mesh size, and iterations in tomographic models. This is of particular importance in ocean basins, which, due to limited network coverage and earthquake density, require more complex processing of distant data. Shallow oceanic models also require smaller model mesh sizes due to the thinner crust.[5]


Tomographic images are typically presented with a color ramp representing the strength of the anomalies. This can make equal changes appear to differ in magnitude because of how colors are perceived: for example, the change from orange to red looks more subtle than the change from blue to yellow. The degree of color saturation can also visually skew interpretations. These factors should be considered when analyzing images.[2]
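One common mitigation, sketched below (Python with Matplotlib; the anomaly grid is synthetic and the colormap is just one reasonable choice), is to use a diverging colormap with color limits symmetric about zero so that positive and negative anomalies of equal size receive comparable visual weight:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic velocity-anomaly slice plotted with a diverging colormap and
# symmetric color limits, so +x% and -x% anomalies look equally strong.
anomaly = np.random.default_rng(2).normal(0.0, 0.5, size=(60, 60))  # percent dV, placeholder data

vmax = np.abs(anomaly).max()
plt.imshow(anomaly, cmap="RdBu", vmin=-vmax, vmax=vmax)  # red = slow, blue = fast (a common convention)
plt.colorbar(label="velocity anomaly (%)")
plt.title("Synthetic anomaly slice with symmetric color limits")
plt.show()
```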

See also

Banana Doughnut theory
EarthScope
SubMachine – a collection of web-based tools for the interactive visualisation, analysis, and quantitative comparison of global-scale, volumetric (3-D) data sets of the subsurface, with supporting tools for interacting with other, complementary models and data sets.

External links

EarthScope Education and Outreach: Seismic Tomography Background. Incorporated Research Institutions for Seismology (IRIS). Retrieved 17 January 2013.
Tomography Animation. Incorporated Research Institutions for Seismology (IRIS). Retrieved 17 January 2013.