Results 1 - 10 of 99
Tomographic inversion of local earthquake data from the Hengill–Grensdalur central volcano complex
, 1989
"... We have determined the three-dimensional P wave velocity structure within the area of the Hengill-Grensdalur central volcano complex, southwest Iceland, from the tomographic inversion of 2409 P wave arrival times recorded by a local earthquake xperiment. The aperture of the 20-element seismic networ ..."
Abstract
-
Cited by 35 (2 self)
- Add to MetaCart
We have determined the three-dimensional P wave velocity structure within the area of the Hengill-Grensdalur central volcano complex, southwest Iceland, from the tomographic inversion of 2409 P wave arrival times recorded by a local earthquake experiment. The aperture of the 20-element seismic network utilized in the inversion permitted imaging of a 5-km-thick crustal volume underlying a 15 x 14 km² area. Within this localized volume are located the underpinnings of the active Hengill volcano and fissure swarm, the extinct Grensdalur volcano, and an active high-temperature geothermal field. It was thus expected that the characteristic length scale of heterogeneity would be of the order of a kilometer. In order to image heterogeneous seismic velocity structure at this scale we paid particular attention to the fidelity of the assumed model parameterization, defined as the degree to which the parameterization can reproduce expected structural heterogeneity. We also discuss the trade-off between the resolution of model parameters and image fidelity, compare results obtained from different parameterizations to illustrate this trade-off, and present a synoptic means of assessing image resolution that utilizes the off-diagonal information contained within the resolution matrix. The final tomographic image presented here was determined for a parameterization with fidelity that closely matches the geologic heterogeneity observed on the surface. For this parameterization, the resolution of individual parameters is generally low; however, a quantitative analysis of resolution provides an unambiguous assessment ...
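The resolution-matrix discussion above lends itself to a compact numerical illustration. The following is a minimal sketch, not the authors' code: it assumes a linearized travel-time problem d = Gm solved by damped least squares, uses a made-up ray matrix G, and adopts one common "spread" definition for summarizing off-diagonal smearing; the study's own parameterization and resolution measures differ.

```python
import numpy as np

def damped_ls_resolution(G, lam):
    """Model resolution matrix of the damped least-squares estimate.

    For a linearized travel-time problem d = G m the damped solution is
    m_est = (G.T G + lam**2 I)^-1 G.T d, so R = (G.T G + lam**2 I)^-1 G.T G,
    and m_est = R m_true for noise-free data.
    """
    GtG = G.T @ G
    n = GtG.shape[0]
    return np.linalg.solve(GtG + lam**2 * np.eye(n), GtG)

def row_spread(R, nodes):
    """A simple 'spread' of each row of R: distance-weighted off-diagonal energy.

    nodes is an (n, 3) array of node coordinates in km.  A large spread means
    the parameter averages structure over a wide region even when its diagonal
    resolution value looks acceptable.
    """
    d2 = np.sum((nodes[:, None, :] - nodes[None, :, :]) ** 2, axis=-1)
    return np.sum(R**2 * d2, axis=1) / np.sum(R**2, axis=1)

# toy example: 50 ray paths sampling 30 slowness nodes in a 15 x 15 x 5 km box
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 30))
nodes = rng.uniform([0, 0, 0], [15, 15, 5], size=(30, 3))
R = damped_ls_resolution(G, lam=1.0)
print(np.diag(R)[:5])
print(row_spread(R, nodes)[:5])
```

The point of a spread-type measure is that a parameter with a modest diagonal value can still be interpretable if its off-diagonal leakage is confined to nearby nodes, which is exactly the off-diagonal information a diagonal-only assessment discards.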
The effect of S-wave arrival times on the accuracy of hypocenter estimation
- Bull. Seismol. Soc. Am
, 1990
"... Well-constrained hypocenters (latitude, longitude, depth, and origin time) are required for nearly all studies that use earthquake data. We have examined the theoretical basis behind some of the widely accepted "rules of thumb " for obtaining accurate hypocenter estimates that pertain to t ..."
Abstract
-
Cited by 27 (0 self)
- Add to MetaCart
Well-constrained hypocenters (latitude, longitude, depth, and origin time) are required for nearly all studies that use earthquake data. We have examined the theoretical basis behind some of the widely accepted "rules of thumb" for obtaining accurate hypocenter estimates that pertain to the use of S phases and illustrate, in a variety of ways, why and when these "rules" are applicable. Results of experiments done for this study show that epicentral estimates (latitude and longitude) are typically far more robust with respect to data inadequacies; therefore, only examples illustrating the relationship between S phase arrival time data and focal depth and origin time estimates are presented. Most methods used to determine earthquake hypocenters are based on iterative, linearized, least-squares algorithms. Standard errors associated with hypocenter parameters are calculated assuming the data errors may be correctly described by a Gaussian distribution. We examine the influence of S-phase arrival time data on such algorithms by using the program HYPOINVERSE with synthetic datasets. Least-squares hypocenter determination algorithms have several shortcomings: solutions may be highly dependent on starting hypocenters, linearization and the assumption that data errors follow a Gaussian distribution may not be appropriate, and depth/origin time trade-offs are not readily apparent. These shortcomings can lead to biased hypocenter estimates and standard errors that do not always represent the true error. To illustrate the constraint provided by S-phase data on hypocenters determined without some of these potential problems, we also show examples of hypocenter estimates derived using a probabilistic approach that does not require linearization. We conclude that a correctly timed S phase recorded within about 1.4 focal depths' distance from the epicenter can be a powerful constraint on focal depth. Furthermore, we demonstrate that even a single incorrectly timed S phase can result in depth estimates and associated measures of uncertainty that are significantly incorrect.
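To make the linearized least-squares machinery concrete, here is a hedged sketch of one Geiger-type relocation step with mixed P and S picks. It assumes a uniform half-space with a fixed Vp/Vs of 1.73 and surface stations; it is not HYPOINVERSE, and the velocities, station geometry, and synthetic picks are placeholders.

```python
import numpy as np

VP, VS = 6.0, 6.0 / 1.73  # km/s, assumed uniform half-space (not a layered model)

def predicted_times(hypo, stations, phases):
    """Predicted arrival times t0 + distance / v for each pick."""
    x, y, z, t0 = hypo
    d = np.sqrt((stations[:, 0] - x) ** 2 + (stations[:, 1] - y) ** 2 + z ** 2)
    v = np.where(phases == "P", VP, VS)
    return t0 + d / v, d, v

def geiger_step(hypo, stations, phases, t_obs):
    """One iteration of linearized least-squares relocation (Geiger's method)."""
    t_pred, d, v = predicted_times(hypo, stations, phases)
    r = t_obs - t_pred                        # travel-time residuals
    # Jacobian of predicted time with respect to (x, y, z, t0)
    A = np.column_stack([
        -(stations[:, 0] - hypo[0]) / (v * d),
        -(stations[:, 1] - hypo[1]) / (v * d),
        hypo[2] / (v * d),
        np.ones(len(t_obs)),
    ])
    dm, *_ = np.linalg.lstsq(A, r, rcond=None)
    return hypo + dm

# synthetic test: event at (3, 4, 8) km depth, origin time 0, five stations
stations = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, -8]], float)
phases = np.array(["P", "P", "P", "S", "S"])
true = np.array([3.0, 4.0, 8.0, 0.0])
t_obs, _, _ = predicted_times(true, stations, phases)

hypo = np.array([5.0, 5.0, 5.0, 0.0])         # starting guess
for _ in range(8):
    hypo = geiger_step(hypo, stations, phases, t_obs)
print(hypo)  # approaches the true hypocenter; try removing the S picks to see depth degrade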
The Somali plate and the East African rift system: present-day kinematics
- Geophysical Journal International
, 1994
"... The motion of the Somalia plate relative to the Nubia (Africa), Arabia and Antarctica platcs is re-evaluated using a new inversion method based on a Monte Carlo technique and a least absolute value misfit criterion. A subset of the N U V E L 1 data set, with additional data along the Levant Fault an ..."
Abstract
-
Cited by 24 (1 self)
- Add to MetaCart
(Show Context)
The motion of the Somalia plate relative to the Nubia (Africa), Arabia and Antarctica plates is re-evaluated using a new inversion method based on a Monte Carlo technique and a least absolute value misfit criterion. A subset of the NUVEL-1 data set, with additional data along the Levant Fault and in the Red Sea, is used. The results confirm that the motion of Arabia with respect to Africa is significantly different from the motion relative to Somalia. It is further shown that the data along the SW Indian Ridge are compatible with a pole of relative motion between Africa and Somalia located close to the hypothetical diffuse triple junction between the ridge and the East African Rift. The resulting Africa-Somalia motion is then compatible with the geological structures and seismological data along the East African Rift system. Assuming a separate Somalia plate thus solves kinematic and geological problems around the Afar triple junction and along the East African Rift.
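As an illustration of the Monte Carlo, least-absolute-value style of inversion described above, the sketch below samples candidate Euler poles at random and scores them with an L1 misfit to transform-fault azimuths. The sites, azimuths, and search ranges are invented for the example; real inversions of this kind also use spreading rates and earthquake slip vectors, which are needed to constrain the rotation rate.

```python
import numpy as np

R_EARTH = 6371.0  # km

def unit(lat, lon):
    """Unit vector from geographic latitude and longitude in degrees."""
    lat, lon = np.radians(lat), np.radians(lon)
    return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

def predicted_azimuth(pole_lat, pole_lon, omega_deg_myr, site_lat, site_lon):
    """Azimuth (deg clockwise from north) of the plate velocity v = omega x r at a site."""
    w = np.radians(omega_deg_myr) * unit(pole_lat, pole_lon)
    r = R_EARTH * unit(site_lat, site_lon)
    v = np.cross(w, r)
    lam, phi = np.radians(site_lon), np.radians(site_lat)
    east = np.array([-np.sin(lam), np.cos(lam), 0.0])
    north = np.array([-np.sin(phi) * np.cos(lam), -np.sin(phi) * np.sin(lam), np.cos(phi)])
    return np.degrees(np.arctan2(v @ east, v @ north))

def l1_misfit(pole, sites, az_obs):
    """Least-absolute-value misfit between predicted and observed azimuths."""
    pred = np.array([predicted_azimuth(*pole, la, lo) for la, lo in sites])
    diff = (pred - az_obs + 180.0) % 360.0 - 180.0     # wrap to [-180, 180)
    return np.sum(np.abs(diff))

# crude Monte Carlo search over pole position (azimuths alone cannot fix the rate,
# so the angular velocity is held at a nominal 1 deg/Myr)
rng = np.random.default_rng(1)
sites = [(-30.0, 60.0), (-40.0, 45.0), (-25.0, 70.0)]      # made-up ridge sites
az_obs = np.array([40.0, 35.0, 50.0])                      # made-up azimuths, degrees
best = min(
    ((rng.uniform(-90, 90), rng.uniform(-180, 180)) for _ in range(20000)),
    key=lambda p: l1_misfit((p[0], p[1], 1.0), sites, az_obs),
)
print("best pole (lat, lon):", best)
print("L1 misfit (deg):", l1_misfit((*best, 1.0), sites, az_obs))
```

The L1 criterion is what makes the search tolerant of a few inconsistent data, which is the motivation for preferring it over least squares in this setting.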
Comparison of various inversion techniques as applied to the determination of a geophysical deformation model for the 1983 Borah Peak earthquake
- Bull. Seism. Soc. Am
, 1992
"... A number of techniques are employed to overcome nonuniqueness and instability inherent in linear inverse problems. To test the factors that enter into the selection of an inversion technique for fault slip distribution, we used a penalty function with smoothness (PF + S), a damped least-squares meth ..."
Abstract
-
Cited by 22 (0 self)
- Add to MetaCart
(Show Context)
A number of techniques are employed to overcome nonuniqueness and instability inherent in linear inverse problems. To test the factors that enter into the selection of an inversion technique for fault slip distribution, we used a penalty function with smoothness (PF + S), a damped least-squares method (DLS), a damped least-squares method with a positivity constraint (DLS + P), and a penalty function with smoothness and a positivity constraint (PF + S + P) for inverting the elevation changes for slip associated with the 1983 Borah Peak earthquake. Unlike solving an ill-posed inverse problem using a gradient technique (Ward and Barrientos, 1986), we have restored the well-posed character between the elevation changes and normal slip distribution. Studies showed that constraints based on a sound understanding of the physical nature of the problem are crucial in the derivation of a meaningful solution and primarily dictate the selection of a particular inversion technique. All available geological and geophysical information was used to determine a geophysical ...
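The schemes compared in the abstract all reduce to variants of regularized linear least squares. Below is a small sketch, with a made-up one-dimensional kernel standing in for the dislocation Green's functions, contrasting plain damped least squares with a smoothness penalty plus positivity (solved by non-negative least squares). It is illustrative only and not the paper's actual Borah Peak setup.

```python
import numpy as np
from scipy.optimize import nnls

def damped_ls(G, d, lam):
    """Damped least squares (DLS): minimize ||G m - d||^2 + lam^2 ||m||^2."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ d)

def smooth_positive(G, d, beta):
    """Penalty function with smoothness and positivity (PF + S + P):
    minimize ||G m - d||^2 + beta^2 ||L m||^2 subject to m >= 0,
    with L a first-difference roughening operator along the fault."""
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)                 # (n-1, n) first-difference matrix
    A = np.vstack([G, beta * L])
    b = np.concatenate([d, np.zeros(n - 1)])
    m, _ = nnls(A, b)
    return m

# toy 'elevation change from slip' problem with a smooth stand-in kernel
rng = np.random.default_rng(2)
n_obs, n_patch = 40, 20
x_obs = np.linspace(0.0, 10.0, n_obs)
x_pat = np.linspace(0.0, 10.0, n_patch)
G = 1.0 / (1.0 + (x_obs[:, None] - x_pat[None, :]) ** 2)   # stand-in for dislocation Green's functions
m_true = np.exp(-0.5 * ((x_pat - 5.0) / 1.5) ** 2)          # smooth, non-negative slip
d = G @ m_true + 0.01 * rng.normal(size=n_obs)

print(damped_ls(G, d, lam=0.1).min())         # DLS can produce negative (non-physical) slip
print(smooth_positive(G, d, beta=0.5).min())  # PF + S + P stays non-negative
```

The contrast between the two minima is the abstract's point in miniature: physically motivated constraints, not the algebra, determine which answer is meaningful.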
Nonlinear arrival time inversion: Constraining velocity anomalies by seeking smooth models in 3-D
- Geophys. J. Int
, 1990
"... The problem of constraining 3-D seismic anomalies using arrival times from a regional network is examined. The non-linear dependence of arrival times on the hypocentral parameters of the earthquakes and the 3-D velocity field leads to a multiparameter-type non-linear inverse problem, and the distrib ..."
Abstract
-
Cited by 20 (3 self)
- Add to MetaCart
The problem of constraining 3-D seismic anomalies using arrival times from a regional network is examined. The non-linear dependence of arrival times on the hypocentral parameters of the earthquakes and the 3-D velocity field leads to a multiparameter-type non-linear inverse problem, and the distribution of sources and receivers from a typical regional network results in an enormous 3-D variation in data constraint. To ensure computational feasibility, authors have tended to neglect the non-linearity of the problem by linearizing about some best-guess discretized earth model. One must be careful in interpreting 3-D structure from linearized inversions because the inadequacy of the data window may combine with non-linear effects to produce artificial or phantom ‘structure’. To avoid the generation of artificial velocity gradients we must determine only those velocity variations which are necessary to fit the data rather than merely estimating local velocities in different parts of the model, which is the more common practice. We present a series of inversion algorithms which seek to inhibit the generation of unnecessary structure while performing efficiently within the framework ...
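A compact way to state the strategy described above (suppressing structure the data do not demand) is as a penalized misfit; the notation here is generic, not necessarily the author's:

```latex
% Generic penalized-misfit objective for a smoothing arrival-time inversion
\min_{\mathbf{m},\,\mathbf{h}}\;
\Phi(\mathbf{m},\mathbf{h})
  = \bigl\lVert \mathbf{d}-\mathbf{g}(\mathbf{m},\mathbf{h}) \bigr\rVert^{2}_{C_d^{-1}}
  + \mu \int_{V} \bigl\lvert \nabla^{2} v(\mathbf{x};\mathbf{m}) \bigr\rvert^{2}\,\mathrm{d}V
```

Here d are the observed arrival times, g(m, h) is the nonlinear forward calculation through the velocity model m with hypocenters h, and μ is the trade-off parameter; pushing μ as high as the data misfit allows leaves only those velocity gradients the arrival times actually require.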
The longer it has been since the last earthquake, the longer the expected time till the next
- Bull. Seism. Soc. Am
, 1989
"... We adopt a Iognormal distribution for earthquake interval times, and we use a locally determined rather than a generic coefficient of variation, to estimate the probability of occurrence of characteristic earthquakes. We extend previous methods in two ways. First, we account for the aseismic period ..."
Abstract
-
Cited by 20 (0 self)
- Add to MetaCart
(Show Context)
We adopt a lognormal distribution for earthquake interval times, and we use a locally determined rather than a generic coefficient of variation, to estimate the probability of occurrence of characteristic earthquakes. We extend previous methods in two ways. First, we account for the aseismic period since the last event (the "seismic drought") in updating the parameter estimates. Second, in calculating the earthquake probability we allow for uncertainties in the mean recurrence time and its variance by averaging over their likelihood. Both extensions can strongly influence the calculated earthquake probabilities, especially for long droughts in regions with few documented earthquakes. As time passes, the recurrence time and variance estimates increase if no additional events occur, leading eventually to an affirmative answer to the question in the title. The earthquake risk estimate begins to drop when the drought exceeds the estimated recurrence time. For the Parkfield area of California, the probability of a magnitude 6 event in the next 5 years is about 34 per cent, much lower than previous estimates. Furthermore, the estimated 5-year probability will decrease with every uneventful year after 1988. For the Coachella Valley segment of the San Andreas Fault, the uncertainties are large, and we estimate the probability of a large event in the next 30 years to be 9 per cent, again much smaller than previous estimates. On the Mojave (Pallett Creek) segment the catalog includes 10 events, and the present drought is just approaching the recurrence interval, so the estimated risk is revised very little by our methods.
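The core calculation, the conditional probability of an event in the next few years given the current "drought", is straightforward for a lognormal renewal model. The sketch below uses point estimates of the mean recurrence time and coefficient of variation and purely illustrative numbers, whereas the paper additionally averages over the likelihood of these parameters.

```python
import numpy as np
from scipy.stats import lognorm

def conditional_probability(t_elapsed, window, mean_rt, cov):
    """P(event in the next `window` years | quiet for `t_elapsed` years)
    for lognormally distributed recurrence intervals with mean `mean_rt`
    and coefficient of variation `cov` (point estimates only)."""
    # convert the mean and COV of the lognormal to its underlying normal mu, sigma
    sigma2 = np.log(1.0 + cov**2)
    mu = np.log(mean_rt) - 0.5 * sigma2
    dist = lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))
    return (dist.cdf(t_elapsed + window) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

# illustrative numbers only (not the paper's Parkfield parameter estimates)
print(conditional_probability(t_elapsed=22.0, window=5.0, mean_rt=25.0, cov=0.4))
```

Because the lognormal has a heavy right tail, this conditional probability eventually declines as the elapsed time grows well past the estimated recurrence time, which is the behaviour the title refers to.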
A New Look at the Entropy for Solving Linear Inverse Problems
- IEEE Transactions on Information Theory
, 1994
"... Entropy-based methods are widely used for solving inverse problems, especially when the solution is known to be positive. We address here the linear ill-posed and noisy inverse problems y = Ax + n with a more general convex constraint x 2 C, where C is a convex set. Although projective methods ar ..."
Abstract
-
Cited by 17 (4 self)
- Add to MetaCart
(Show Context)
Entropy-based methods are widely used for solving inverse problems, especially when the solution is known to be positive. We address here the linear ill-posed and noisy inverse problems y = Ax + n with a more general convex constraint x ∈ C, where C is a convex set. Although projective methods are well adapted to this context, we study here alternative methods which rely highly on some "information-based" criteria. Our goal is to highlight the role played by entropy in this framework, and to present a new and deeper point of view on the entropy, using general tools and results of convex analysis and large deviations theory. Then, we present a new and large scheme of entropy-based inversion of linear noisy inverse problems. This scheme was introduced by Navaza in 1985 [48] in connection with a physical modeling for crystallographic applications, and further studied by Dacunha-Castelle and Gamboa [13]. Important features of this paper are (i) a unified presentation of many well known ...
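For the special case C = {x ≥ 0}, the familiar "x log x" entropy functional gives a concrete instance of the entropy-regularized inversion discussed above. The sketch below is that standard special case with an invented deconvolution example, not the paper's more general convex-analysis construction.

```python
import numpy as np
from scipy.optimize import minimize

def max_entropy_solve(A, y, prior, alpha, sigma):
    """Entropy-regularized solution of y = A x + n with x > 0:
    minimize ||A x - y||^2 / (2 sigma^2) + alpha * sum(x*log(x/prior) - x + prior).
    This is the classical positive-entropy regularizer, used here as a generic
    stand-in for the paper's broader framework."""
    def objective(x):
        r = A @ x - y
        ent = np.sum(x * np.log(x / prior) - x + prior)
        return 0.5 * np.dot(r, r) / sigma**2 + alpha * ent
    bounds = [(1e-10, None)] * len(prior)
    res = minimize(objective, prior.copy(), bounds=bounds, method="L-BFGS-B")
    return res.x

# toy deconvolution: a positive spike train blurred by a Gaussian kernel
rng = np.random.default_rng(3)
n = 40
x_true = np.zeros(n); x_true[[10, 25]] = [1.0, 0.6]
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 2.0) ** 2)
y = A @ x_true + 0.01 * rng.normal(size=n)
x_hat = max_entropy_solve(A, y, prior=np.full(n, 0.05), alpha=1.0, sigma=0.01)
print(x_hat.round(2))
```

The entropy term pulls the solution toward the prior level wherever the data say nothing, while the positivity of the functional's domain enforces the constraint automatically.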
A reanalysis of the hypocentral location and related observations for the great 1906 California earthquake
- Bull. Seismol. Soc. Am. 95
, 2005
"... Abstract We determine probabilistic hypocentral locations for the foreshock and mainshock of the Great 1906 California earthquake through reanalysis of arrival-time observations in conjunction with modern velocity models and advanced event-location techniques. We obtain two additional observations f ..."
Abstract
-
Cited by 16 (1 self)
- Add to MetaCart
(Show Context)
We determine probabilistic hypocentral locations for the foreshock and mainshock of the Great 1906 California earthquake through reanalysis of arrival-time observations in conjunction with modern velocity models and advanced event-location techniques. We obtain two additional observations for the mainshock and one for the foreshock that were not used in previous location studies. Using a robust likelihood function for event location, we generate a usable subset of the predominantly unreliable teleseismic readings and determine new wave-type identifications for some of the local and teleseismic readings. Our locations are much better constrained than those of earlier studies, even though we do not assume that the epicenter lies on the San Andreas fault, as did previous authors. We confirm the conclusions of earlier studies that the local and teleseismic arrival-time observations can be explained by a single foreshock focus and a single mainshock focus on the San Andreas fault, and that there is no single, unique hypocenter that satisfies all available local observations. The maximum-likelihood point (Latitude, 37.78° N; Longitude, 122.51° W) for our “preferred” mainshock location indicates a hypocenter to the west of San Francisco, close to the San Andreas fault zone. This hypocenter has a 68% confidence error of about 8 km parallel to the San Andreas fault and about 24 km perpendicular to the fault, and a depth in the midcrust of about 12 ± 7 km. The closest point on the San Andreas fault to this hypocenter lies about 10 km to the northwest of the widely accepted mainshock epicenter of Bolt (1968). Our mainshock location is consistent with the association of initial rupture of the 1906 mainshock with a dilatational right-bend or step-over in the submerged San Andreas fault system offshore of the Golden Gate. Our foreshock location is less well constrained than our mainshock location but is consistent with the foreshock hypocenter being at the same location as the mainshock hypocenter. Online material: Visualization of 3D probabilistic hypocentral locations associated with the 1906 earthquake.
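The phrase "robust likelihood function" can be illustrated with a toy probabilistic grid-search location: a heavy-tailed (Laplace) residual model keeps a single badly read historical arrival from dragging the maximum-likelihood point. The uniform velocity, station layout, and Laplace choice below are assumptions made for the sketch, not the method actually used for the 1906 relocation.

```python
import numpy as np

VP = 5.5  # km/s, assumed uniform crust (illustrative only)

def log_likelihood(xyz, t0, stations, t_obs, scale=0.5):
    """Laplace (L1) likelihood of the arrival times: heavy-tailed, so a few bad
    readings do not dominate the solution."""
    d = np.linalg.norm(stations - xyz, axis=1)
    r = t_obs - (t0 + d / VP)
    return -np.sum(np.abs(r)) / scale

def grid_locate(stations, t_obs, extent=50.0, nz=15, nxy=41):
    """Evaluate the likelihood on a coarse 3-D grid; origin time by 1-D L1 fit."""
    xs = np.linspace(-extent, extent, nxy)
    zs = np.linspace(1.0, 30.0, nz)
    best, best_ll = None, -np.inf
    for x in xs:
        for y in xs:
            for z in zs:
                d = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
                t0 = np.median(t_obs - d / VP)       # median is the L1-optimal origin time
                ll = log_likelihood(np.array([x, y, z]), t0, stations, t_obs)
                if ll > best_ll:
                    best, best_ll = (x, y, z, t0), ll
    return best, best_ll

# synthetic event with one badly mis-read arrival
stations = np.array([[30, 0, 0], [-20, 25, 0], [0, -35, 0], [15, 15, 0]], float)
src = np.array([5.0, -3.0, 12.0])
t_obs = np.linalg.norm(stations - src, axis=1) / VP
t_obs[3] += 2.0                                      # the outlier pick
print(grid_locate(stations, t_obs))
```

Mapping the full likelihood over the grid, rather than keeping only the best point, is what turns the result into a probabilistic location with confidence regions like those quoted in the abstract.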
An inquiry into the lunar interior: A nonlinear inversion of the Apollo lunar seismic data
- J. Geophys. Res.
, 2002
"... [1] This study discusses in detail the inversion of the Apollo lunar seismic data and the question of how to analyze the results. The well-known problem of estimating structural parameters (seismic velocities) and other parameters crucial to an understanding of a planetary body from a set of arrival ..."
Abstract
-
Cited by 14 (2 self)
- Add to MetaCart
(Show Context)
This study discusses in detail the inversion of the Apollo lunar seismic data and the question of how to analyze the results. The well-known problem of estimating structural parameters (seismic velocities) and other parameters crucial to an understanding of a planetary body from a set of arrival times is strongly nonlinear. Here we consider this problem from the point of view of Bayesian statistics using a Markov chain Monte Carlo method. Generally, the results seem to indicate a somewhat thinner crust with a thickness around 45 km as well as a more detailed lunar velocity structure, especially in the middle mantle, than obtained in earlier studies. Concerning the moonquake locations, the shallow moonquakes are found in the depth range 50–220 km, and the majority of deep moonquakes are concentrated in the depth range 850–1000 km, with what appears to be a rather sharp lower boundary. To further analyze the outcome of the inversion for specific features in a statistical fashion, we have used credible intervals, two-dimensional marginals, and Bayesian hypothesis testing. Using this form of hypothesis testing, we are able to decide between the relative importance of any two hypotheses given data, prior information, and the physical laws that govern the relationship between model and data, such as having to decide between a thin crust of 45 km and a thick crust as implied by the generally assumed value of 60 km. We obtain a Bayes factor of 4.2, implying that a thinner crust is strongly favored.
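The Bayesian workflow described above (sample the posterior with a Markov chain, then compare hypotheses with a Bayes factor) can be shown on a one-parameter toy problem. Everything in the sketch below (the delay-time forward model, the uniform prior on crustal thickness, the 52.5 km dividing line, the noise level) is invented for illustration; only the logic of Metropolis sampling and of converting posterior-to-prior odds into a Bayes factor mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy forward model: crustal P delay (s) as a function of crustal thickness h (km)
VP_CRUST, VP_MANTLE = 7.0, 8.0
def predicted_delay(h):
    return h / VP_CRUST - h / VP_MANTLE

# synthetic data generated with a 45 km crust and 0.2 s Gaussian picking error
h_true, sigma = 45.0, 0.2
data = predicted_delay(h_true) + sigma * rng.normal(size=8)

def log_posterior(h):
    if not 20.0 <= h <= 80.0:                  # uniform prior on thickness
        return -np.inf
    r = data - predicted_delay(h)
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis sampler (random-walk proposals)
samples, h = [], 60.0
for _ in range(50000):
    prop = h + 2.0 * rng.normal()
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(h):
        h = prop
    samples.append(h)
samples = np.array(samples[5000:])             # discard burn-in

# Bayes factor for 'thin crust' (h < 52.5 km) vs 'thick crust' (h >= 52.5 km):
# the ratio of posterior odds to prior odds (prior odds from the uniform prior on [20, 80])
post_odds = np.mean(samples < 52.5) / np.mean(samples >= 52.5)
prior_odds = (52.5 - 20.0) / (80.0 - 52.5)
print(post_odds / prior_odds)
```

Because both hypotheses are regions of the same parameter space, the Bayes factor follows directly from the sampled posterior mass in each region, which is why MCMC output is enough to evaluate it.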
Use of fault striations and dislocation models to infer tectonic shear stress during the 1995 Hyogo-ken Nanbu (Kobe) earthquake
, 1998
"... derived by Yoshida et al. (1996) show substantial changes in direction of slip with time at specific points on the Nojima and Rokko fault systems, as do striations we observed on exposures of the Nojima fault surface on Awaji Island. Spudich (1992) showed that the initial stress, that is, the shear ..."
Abstract
-
Cited by 13 (2 self)
- Add to MetaCart
derived by Yoshida et al. (1996) show substantial changes in direction of slip with time at specific points on the Nojima and Rokko fault systems, as do striations we observed on exposures of the Nojima fault surface on Awaji Island. Spudich (1992) showed that the initial stress, that is, the shear traction on the fault before the earthquake origin time, can be derived at points on the fault where the slip rake rotates with time if slip velocity and stress change are known at these points. From Yoshida's slip model, we calculated dynamic stress changes on the ruptured fault surfaces. To estimate errors, we compared the slip velocities and dynamic stress changes of several published models of the earthquake. The differences between these models had an exponential distribution, not Gaussian. We developed a Bayesian method to estimate the probability density function (PDF) of initial stress from the striations and from Yoshida's slip model. Striations near Toshima and Hirabayashi give initial stresses of about 13 and 7 MPa, respectively. We obtained initial stresses of about 7 to 17 MPa at depths of 2 to 10 km on a subset of points on the Nojima and Rokko fault systems. Our initial stresses and coseismic stress changes agree well with postearthquake stresses measured by hydrofracturing in deep boreholes near Hirabayashi and Ogura on Awaji Island. Our results indicate that the Nojima fault slipped at very low shear stress, and fractional stress drop was complete near the surface and about 32% below depths of 2 km. Our results at depth depend on the accuracy of the rake rotations in Yoshida's model, which are probably correct on the Nojima fault but debatable on the Rokko fault. Our results imply that curved or cross-cutting fault striations can be formed in a single earthquake, contradicting a common assumption of structural geology.
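The exponential (Laplace) error model mentioned above changes the Bayesian estimate in a simple way: the log-likelihood sums absolute rather than squared deviations. The sketch below evaluates such a posterior for a single scalar initial stress on a grid, using invented stress estimates and error scale; the paper's actual estimate is built from striation rakes and the slip model, not from pre-digested stress values.

```python
import numpy as np

def laplace_posterior(tau_grid, estimates, scale):
    """Posterior PDF of a scalar initial shear stress tau0 (MPa) on a grid,
    assuming a uniform prior and independent Laplace (exponential-tailed)
    errors of the given scale on each input estimate."""
    est = np.asarray(estimates, float)
    loglike = -np.sum(np.abs(tau_grid[:, None] - est[None, :]), axis=1) / scale
    post = np.exp(loglike - loglike.max())
    post /= post.sum() * (tau_grid[1] - tau_grid[0])        # normalize to unit area
    return post

tau = np.linspace(0.0, 30.0, 601)
# invented stand-ins for independent initial-stress estimates (MPa) and their error scale
pdf = laplace_posterior(tau, estimates=[13.0, 7.0, 10.0], scale=3.0)
dtau = tau[1] - tau[0]
print("posterior mode (MPa):", tau[np.argmax(pdf)])
print("posterior mean (MPa):", np.sum(tau * pdf) * dtau)
```

With the Laplace likelihood the posterior mode tracks a median-like combination of the inputs, so one discordant model perturbs the initial-stress estimate far less than it would under a Gaussian error assumption.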