W4_inclass_loma_prieta_guerrero_anthony
October 2, 2023
1 Loma Prieta Earthquake, Earthquake Occurrence Statistics: Omori's Law
Last week we:
- Used pandas DataFrames, indexing, and data cleaning.
- Loaded marine geophysical data (bathymetry and marine magnetic anomalies) from two oceanic ridges.
- Selected data and dropped rows with NaNs.
- Plotted bathymetry data and evaluated spreading rate.
- Declared functions to detrend data, calculate distance, and estimate spreading rate.
- Plotted marine magnetic anomaly data and compared spreading rates.
Our goals for today:
- Load a Bay Area seismic catalog spanning January 01, 1989 to December 31, 1995.
- Compute the distance and time interval between the Loma Prieta quake and subsequent earthquakes to identify aftershocks.
- Filter the aftershocks from the catalog and look at their distribution.
- Examine the Gutenberg-Richter statistic.
- Make an earthquake forecast.
1.1 Setup
Run this cell as it is to set up your environment.
[1]:
import numpy as np
import pandas as pd
import datetime
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import cartopy.crs as ccrs
import cartopy.feature as cfeature
On October 17 at 5:15pm PDT (October 18, 1989 at 04:15am UTC) the M6.9 Loma Prieta earthquake occurred in the Santa Cruz Mountains, approximately 80 km southwest of the Berkeley campus. We will use this earthquake sequence to investigate the performance of a catalog declustering algorithm.
https://en.wikipedia.org/wiki/1989_Loma_Prieta_earthquake
1.2 Load the Earthquake Catalog
Load the .csv data file of all the earthquakes from January 01, 1989 to December 31, 1995 in the ANSS (Advanced National Seismic System) catalog from between latitudes 36.0-38.5° and longitudes -123.0 to -121.0° (http://ncedc.org/anss/catalog-search.html).
[2]:
# read data
#pd.to_datetime?
LP_catalog = pd.read_csv('data/bay_area_anss_1989_1995.csv')
# the following function just reformats to a DateTime object format
LP_catalog['DateTime'] = pd.to_datetime(LP_catalog['DateTime'])
LP_catalog.head()
[2]:
   Unnamed: 0                DateTime  Latitude  Longitude  Depth  Magnitude  \
0           0 1989-01-01 03:51:37.170   36.5928  -121.1027   0.00       1.26
1           1 1989-01-01 11:39:34.870   36.9583  -121.5762   5.70       1.07
2           2 1989-01-01 21:05:18.030   37.5293  -121.6905   7.03       0.99
3           3 1989-01-01 21:54:27.470   37.5590  -121.6760   7.38       0.85
4           4 1989-01-01 21:57:12.240   36.6493  -121.2678   4.66       1.17

  MagType  NbStations    Gap  Distance   RMS Source    EventID
0      Md           4  150.0       6.0  0.13     NC  1160762.0
1      Md          34   36.0       1.0  0.08     NC   129467.0
2      Md          11   80.0       4.0  0.02     NC   129474.0
3      Md          12   66.0       1.0  0.04     NC   129475.0
4      Md          21   65.0       1.0  0.04     NC   129476.0
DateTime objects are very useful. There are functions to parse the DateTime data into individual
arrays, or functions to process the DateTime data in different ways.
[8]:
LP_catalog.iloc[3]['DateTime'].minute

[8]:
54
[10]:
# create data arrays, it will speed up our loops later
# Note .dt provides functional resources to work on the DateTime field.
year = LP_catalog['DateTime'].dt.year
month = LP_catalog['DateTime'].dt.month
day = LP_catalog['DateTime'].dt.day
lat = LP_catalog['Latitude'].values
lon = LP_catalog['Longitude'].values
mag = LP_catalog['Magnitude'].values
nevt = len(year)   # number of events
mag

[10]:
array([1.26, 1.07, 0.99, …, 0.89, 1.06, 1.1 ])
1.3 Map the Raw Earthquake Catalog
On a map of the Bay Area plot the location of events in the raw catalog. Scale the marker color and size by magnitude.
Code for you to write
[12]:
# Make a Map of the earthquake catalog
# Set Corners of Map
lat0 = 36.0
lat1 = 38.5
lon0 = -123.0
lon1 = -121.0
tickstep = 0.5   # for axes
latticks = np.arange(lat0, lat1 + tickstep, tickstep)
lonticks = np.arange(lon0, lon1 + tickstep, tickstep)

# coordinates for UC Berkeley
Berk_lat = 37.8716
Berk_lon = -122.2727

plt.figure(1, (10, 10))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([lon0, lon1, lat0, lat1], crs=ccrs.PlateCarree())
ax.coastlines(resolution='10m', linewidth=1)
ax.set_xticks(lonticks)
ax.set_yticks(latticks)
ax.set(xlabel='Longitude', ylabel='Latitude', title='Raw Catalog')

# Sort by magnitude to plot largest events on top
LP_catalog_sorted = LP_catalog.sort_values('Magnitude')
# exponent to scale marker size
z = np.exp(LP_catalog_sorted['Magnitude'])
plt.scatter(LP_catalog_sorted['Longitude'], LP_catalog_sorted['Latitude'], s=z,
            c=LP_catalog_sorted['Magnitude'], cmap='plasma', alpha=0.4, marker='o')   # plot circles on EQs
plt.plot(Berk_lon, Berk_lat, 's', color='limegreen', markersize=8)   # plot green square on Berkeley Campus
plt.colorbar(label='Magnitude')
plt.show()
Plot Magnitude vs. Time for the Raw Catalog
Plot magnitude vs. time for the raw catalog and print out the number of events as we did in class. Use the alpha=0.2 argument in plot to make the markers transparent.
We have an array of magnitudes, but what about time? We need to construct an array of fractions of years from the first entry in the catalog.
Code for you to write
[14]:
d0 = datetime.date(year[0], month[0], day[0])
d0

[14]:
datetime.date(1989, 1, 1)
[21]:
# Let's create a time array and then use the plot() function to plot the earthquakes
# as points.
d0 = datetime.date(year[0], month[0], day[0])
time = np.zeros(nevt)   # initialize the size of the array
for i in range(0, nevt, 1):
    d1 = datetime.date(year[i], month[i], day[i])
    time[i] = ((d1 - d0).days)/365.25 + year[0]

# Now the plot
fig, ax = plt.subplots(figsize=(6, 6))
ax.plot(time, mag, 'o', alpha=0.2, markersize=5)
ax.set(xlabel='Time', ylabel='Magnitude',
       title='Raw Event Catalog')
ax.grid()
plt.show()
[ ]:
# Here is another way to do it using some built-in functions. It is not as clear, but
# it is powerful using the datetime attributes, and you may want to use it when
# comparing other datetime ranges.
fig, ax = plt.subplots(figsize=(6, 6))
ax.plot(mdates.date2num(LP_catalog['DateTime']), mag, 'o', alpha=0.2, markersize=5)
locator = mdates.AutoDateLocator()
formatter = mdates.ConciseDateFormatter(locator)
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(formatter)
ax.set(xlabel='Time', ylabel='Magnitude',
       title='Raw Event Catalog')
ax.grid()
plt.show()
1.4 Earthquake Catalog Statistics and Aftershocks
There are two primary statistics that describe earthquake catalog data. The first is Omori's Law, which describes the distribution of aftershocks following a primary earthquake or mainshock. It can be used to estimate the rate of aftershock production over time after the earthquake.
The second is Gutenberg-Richter, which is used to characterize the rates of primary earthquakes over some period of time to determine rates of occurrence, which can then be used to assess hazard and make earthquake forecasts.
Both statistics require a means of identifying earthquakes as aftershocks. Since aftershocks are related to the redistribution of mainshock stress, including them in a statistic that assumes primary (or mainshock) event occurrence would lead to biased results. Aftershocks are therefore identified and removed from analysis. This is done based on spatial and temporal proximity of a given earthquake to another earlier, larger earthquake. There are several empirical models developed for this identification, and the following cell describes one of these.
1.4.1 We will start by examining the Loma Prieta aftershocks, which we need to isolate from the catalog.
1.4.2 Gardner and Knopoff (1974) Aftershock Model
Gardner and Knopoff (1974) studied many hundreds of earthquake sequences and empirically determined distance and time filters, as a function of mainshock magnitude, to help identify aftershocks. Their model is given as distance to the event in km and time after the event in days.
Both the distance from the mainshock and the time afterward increase dramatically with increasing magnitude. Note that aftershocks following M5.5+ earthquakes can last for years.
These windows are based on the following equations, the same ones applied in the cells below:

log10(d) = 0.1238*M + 0.983, with d in km
log10(t) = 0.032*M + 2.7389 for M >= 6.5, and log10(t) = 0.5409*M - 0.547 for M < 6.5, with t in days

To build our algorithm to identify aftershocks using these windows we need to convert the year-month-day format of dates to a timeline in number of days. We'll do this using the function datetime.date, which for a given year, month, and day returns a datetime class object that can be used to compute the time interval in days. We will also use the np.power(base, exponent) function to compute the distance and time interval bounds.
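A minimal sketch of these two building blocks, with the dates and magnitude invented purely for illustration:

import datetime
import numpy as np

# datetime.date subtraction yields a timedelta; its .days attribute is the interval in days
d_main = datetime.date(1989, 10, 18)     # hypothetical mainshock date
d_later = datetime.date(1990, 1, 1)      # hypothetical later event
print((d_later - d_main).days)           # 75

# np.power evaluates the Gardner & Knopoff window bounds for a given magnitude
M = 5.0                                  # assumed magnitude
print(np.power(10, 0.1238*M + 0.983))    # distance bound, ~40 km
print(np.power(10, 0.5409*M - 0.547))    # time bound for M < 6.5, ~144 days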
1.4.3 First, Determine the index value and the number of days from the beginning of the catalog to the Loma Prieta Quake
The Loma Prieta earthquake is the largest in the catalog.

[29]:
LP_ind = LP_catalog.index[LP_catalog['Magnitude'] == max(LP_catalog['Magnitude'])]
lp_mag = LP_catalog['Magnitude'][LP_ind].values
print(f'Loma Prieta Earthquake M{lp_mag[0]:.1f} Index={LP_ind[0]:d}')

Loma Prieta Earthquake M6.9 Index=3140
[26]:
# Some information about using datetime

To calculate the days since the Loma Prieta earthquake for many entries we can use a loop.
Code for you to write
[36]:
# Determine the number of days from the Loma Prieta Earthquake
days = np.zeros(nevt)   # initialize the size of the array days
d0 = datetime.date(year[LP_ind[0]], month[LP_ind[0]], day[LP_ind[0]])
time = np.zeros(nevt)   # initialize the size of the array time
for i in range(0, nevt, 1):
    d1 = datetime.date(year[i], month[i], day[i])
    days[i] = ((d1 - d0).days)

days

[36]:
array([-290., -290., -290., …, 2265., 2265., 2265.])
Why are some of the days[…] values negative?
Some of the days are negative because those events occurred before the Loma Prieta earthquake.
1.4.4 Second, Define a function to compute the distance between two geographic points on a sphere - we will use the haversine function from last week
Great-circle distance (shown in red) between two points on a sphere, P and Q. Source: https://en.wikipedia.org/wiki/Great-circle_distance
[37]:
# This function computes the spherical earth distance between two geographic points
# and is used in the declustering algorithm below
def haversine_np(lon1, lat1, lon2, lat2):
    """
    Calculate the great circle distance between two points
    on the earth (specified in decimal degrees)

    All args must be of equal length.
    The first pair can be singular and the second an array
    """
    lon1, lat1, lon2, lat2 = map(np.radians, [lon1, lat1, lon2, lat2])   # convert degrees lat, lon to radians
    dlon = lon2 - lon1
    dlat = lat2 - lat1
    a = np.sin(dlat/2.0)**2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon/2.0)**2   # great circle inside sqrt
    c = 2 * np.arcsin(np.sqrt(a))   # great circle angular separation
    km = 6371.0 * c   # great circle distance in km, earth radius = 6371.0 km
    return km
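As a quick sanity check of the function (an added example, not part of the assignment): one degree of latitude along a meridian on a 6371 km sphere is 6371 * pi/180, about 111.2 km.

print(haversine_np(0.0, 0.0, 0.0, 1.0))   # ~111.19 km for one degree of latitude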
1.4.5 Next we need to define the code for applying the Gardner & Knopoff (1974) Aftershock Model
Code for you to write
[42]:
# code here
i = LP_ind[0]
Dtest = np.power(10, 0.1238*mag[i] + 0.983)
if mag[i] >= 6.5:
    Ttest = np.power(10, 0.032*mag[i] + 2.7389)
else:
    Ttest = np.power(10, 0.5409*mag[i] - 0.547)

print(f'Magnitude={mag[i]:.1f} Dtest={Dtest:0.2f} km Ttest={Ttest:.2f} days')

Magnitude=6.9 Dtest=68.74 km Ttest=911.38 days
1.4.6 Put the pieces together to build an aftershock detection algorithm
Code for us to examine and write
Note that you need to be careful about array dimensionality (column vs row ordered). The dimensionality when performing certain (most) operations on arrays needs to be the same for both. We will also use np.array(), which defaults to row-major (C) ordering.
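Here is a tiny made-up example of that shape handling: np.nonzero returns a tuple of index arrays, which the cell below wraps in np.array() and transposes into a column of indices.

a = np.array([3., 12., 400., 7.])    # hypothetical day intervals
hits = np.nonzero(a <= 10.0)         # a tuple of index arrays: (array([0, 3]),)
itime = np.array(hits).transpose()   # a column of indices, shape (2, 1)
print(itime.ravel())                 # [0 3]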
[45]:
# Declustering Algorithm Catalog
cnt = LP_ind[0]   # initialize a counting variable at the LP index
save = np.zeros((1000000), dtype=int)   # initialize a storage array
i = LP_ind[0]

# logical if statements to incorporate definitions of Dtest and Ttest aftershock window bounds
a = days[i+1:nevt] - days[i]   # time interval in days to subsequent earthquakes in catalog
m = mag[i+1:nevt]   # magnitudes of subsequent earthquakes in catalog
b = haversine_np(lon[i], lat[i], lon[i+1:nevt], lat[i+1:nevt])   # distance in km to subsequent EQs in catalog

# First Test: find number of events that are within the aftershock time period
icnt = np.count_nonzero(a <= Ttest)   # counts the number of potential aftershocks
if icnt > 0:   # if there are potential aftershocks
    itime = np.array(np.nonzero(a <= Ttest)).transpose() + (i+1)   # indices of potential aftershocks <= Ttest bound
    for j in range(0, icnt, 1):   # loops over the aftershocks
        # test if the event is inside the distance window
        # and that the event is smaller than the current main EQ
        if b[j] <= Dtest and m[j] < mag[i]:
            save[cnt] = itime[j]   # index value of the aftershock
            cnt += 1   # increment the counting variable

af_ind = np.delete(np.unique(save), 0)   # This is an array of indexes of aftershocks

Create a set of arrays for the aftershock catalog. Use af_ind to select the aftershock events from the raw catalog.
[47]:
# select the aftershock events
aftershock_df = LP_catalog.iloc[af_ind]
aftershock_days = days[af_ind]   # The aftershocks are selected from the days array
aftershock_mag = mag[af_ind]     # The aftershocks are selected from the mag array
aftershock_lon = lon[af_ind]     # The aftershocks are selected from the lon array
aftershock_lat = lat[af_ind]     # The aftershocks are selected from the lat array
n2 = len(aftershock_days)
n2

[47]:
15269
Concept question: How many aftershock events were there?
15269
1.5 Plot Magnitude vs. Time for the Aftershock Catalog
Plot magnitude vs. time for the aftershock events and print out the number of events. Use the alpha = 0.2 argument in plot to make the markers transparent. Plot the mainshock in a different color.
[49]:
# Plot DeClustered Catalog
fig, ax = plt.subplots(figsize=(6, 6))
ax.plot(mdates.date2num(LP_catalog.iloc[LP_ind]['DateTime']), mag[LP_ind[0]],
        'o', color='red', markersize=10, label='Loma Prieta')
ax.plot(mdates.date2num(aftershock_df['DateTime']), aftershock_mag,
        'o', alpha=0.2, markersize=5, label='Aftershocks')
locator = mdates.AutoDateLocator()
formatter = mdates.ConciseDateFormatter(locator)
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(formatter)
ax.set(xlabel='Time', ylabel='Magnitude',
       title='Aftershock Event Catalog')
ax.grid()
ax.legend()
plt.show()
1.6 Plot a histogram of aftershock_days

[56]:
plt.hist(aftershock_days[aftershock_days < 31], 30)
plt.xlabel('Days after main shock')
plt.ylabel('Number of Events')
plt.title('Aftershocks of the Loma Prieta 1989 Quake')
plt.show()
Discussion question: How would you describe the distribution of the number of aftershocks with time after the main quake? By how much is the rate of aftershock occurrence reduced after 7 days? After 30 days?
Most aftershocks come within the first 3 days of the Loma Prieta earthquake. The daily count fell to about 211 events by day 7, and was reduced to 53 by day 30.
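This rapid falloff is the Omori's Law decay named in the notebook title. A minimal sketch of the modified Omori rate, with constants assumed purely for illustration rather than fit to this catalog:

# Modified Omori law: n(t) = K / (c + t)**p aftershocks per day at t days after the mainshock
K, c, p = 800.0, 0.5, 1.1   # assumed constants, for illustration only
for t in (1, 7, 30):
    print(f'day {t:2d}: ~{K/(c + t)**p:.0f} events/day')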
[70]:
cnts, bins = np.histogram(aftershock_days[aftershock_days <= 30], bins=range(0, 33))
cnts[30]

[70]:
53
1.7 Map the Aftershock Events
On a map of the Bay Area plot the location of events in the aftershock catalog. Again scale the marker color and size by magnitude, and set vmax=6.9 so the colorscale matches our original map. Add the locations of UC Berkeley campus and the Loma Prieta event epicenter.
[71]:
# Make a Map of Aftershock events
# Set Corners of Map
lat0 = 36.0
lat1 = 38.5
lon0 = -123.0
lon1 = -121.0
tickstep = 0.5   # for axes
latticks = np.arange(lat0, lat1 + tickstep, tickstep)
lonticks = np.arange(lon0, lon1 + tickstep, tickstep)

plt.figure(1, (10, 10))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([lon0, lon1, lat0, lat1], crs=ccrs.PlateCarree())
ax.set_aspect('auto')
ax.coastlines(resolution='10m', linewidth=1)   # downloaded 10m, 50m
ax.set_xticks(lonticks)
ax.set_yticks(latticks, crs=ccrs.PlateCarree())
ax.set(xlabel='Longitude', ylabel='Latitude',
       title='Aftershock Catalog')

# Sort Descending to plot largest events on top
indx = np.argsort(aftershock_mag)   # determine sort index
x = aftershock_lon[indx]   # apply sort index
y = aftershock_lat[indx]
z = np.exp(aftershock_mag[indx])   # exponent to scale size

plt.scatter(x, y, s=z, c=aftershock_mag[indx], vmax=6.9, cmap='plasma',
            alpha=0.4, marker='o', label='aftershocks')
plt.plot(lon[LP_ind[0]], lat[LP_ind[0]], '*', color='yellow', markersize=20,
         markeredgecolor='black', label='Loma Prieta Quake')
plt.plot(-122.2727, 37.8716, 'rs', markersize=8, label='UC Berkeley')
plt.legend()
plt.colorbar(label='Magnitude')
plt.show()
Map of Bay Area faults. Source: https://pubs.er.usgs.gov/publication/fs20163020
Discussion questions:
- Were aftershocks only triggered on the same fault as the main quake?
- What faults were active?
Write your answers here.
- No, they were also triggered on many other faults, such as the Calaveras Fault.
- The San Andreas, San Gregorio, Hayward, Calaveras, Green Valley-Concord, and Maacama faults were active.
1.8 Let's decluster the whole catalog and examine the Gutenberg-Richter Statistic
Gutenberg devised a statistic that counts, for a given time period, the number of events with magnitude equal to or greater than a given magnitude. It is computed by defining a range of test magnitudes (sometimes called bins) and then determining the number of events of that magnitude and larger. Therefore it is a statistic on the rate of earthquakes above a given magnitude.
When plotted as log10(number(mag)) vs. mag there is a linear trend with an approximate -1 slope. That means that for each unit increase in magnitude there is a 10-fold decrease in the number of earthquakes. Gutenberg and Richter fit a line to the data compiled in this way to determine the intercept and slope (b-value, or rate of earthquake occurrence).

log10(N(m)) = A + B * M
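A toy counting example of the statistic, with made-up magnitudes: for each test magnitude we count the events of that magnitude and larger.

import numpy as np

mags = np.array([1.2, 2.3, 2.8, 3.1, 4.0])   # made-up event magnitudes
for m_test in (1.0, 2.0, 3.0, 4.0):
    print(m_test, np.sum(mags >= m_test))    # counts: 5, 4, 2, 1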
[73]:
# Write Algorithm to Decluster the entire catalog
# Reset variables
year = np.array(LP_catalog['DateTime'].dt.year)
month = np.array(LP_catalog['DateTime'].dt.month)
day = np.array(LP_catalog['DateTime'].dt.day)
lat = LP_catalog['Latitude'].values
lon = LP_catalog['Longitude'].values
mag = LP_catalog['Magnitude'].values
nevt = len(year)   # number of events
For the Loma Prieta earthquake we only investigated the catalog from the time of the event to identify the aftershocks to plot the Omori curve. More typically the declustering algorithm is used to remove the aftershocks from a catalog. To apply the declustering algorithm to the entire catalog we need to pass through the entire catalog from beginning to end. To do this we need to add a loop.
[75]:
# We need to add a loop over time from what we had before
cnt = 0
save = np.zeros(10000000, dtype=int)   # initialize a storage array
for i in range(nevt):   # You only need this outside loop with proper indentation
    Dtest = np.power(10, 0.1238*mag[i] + 0.983)   # distance bounds
    if mag[i] >= 6.5:
        Ttest = np.power(10, 0.032*mag[i] + 2.7389)   # aftershock time bounds for M >= 6.5
    else:
        Ttest = np.power(10, 0.5409*mag[i] - 0.547)   # aftershock time bounds for M < 6.5

    a = days[i+1:nevt] - days[i]   # time interval in days to subsequent earthquakes in catalog
    m = mag[i+1:nevt]   # magnitudes of subsequent earthquakes in catalog
    b = haversine_np(lon[i], lat[i], lon[i+1:nevt], lat[i+1:nevt])   # distance in km to subsequent EQs in catalog

    icnt = np.count_nonzero(a <= Ttest)   # counts the number of potential aftershocks,
                                          # the number of intervals <= Ttest bound
    if icnt > 0:   # if there are potential aftershocks
        itime = np.array(np.nonzero(a <= Ttest)).transpose() + (i+1)   # indices of potential aftershocks <= Ttest bound
        for j in range(0, icnt, 1):   # loops over the aftershocks
            # test if the event is inside the distance window
            # and that the event is smaller than the current main EQ
            if b[j] <= Dtest and m[j] < mag[i]:
                save[cnt] = itime[j]   # index value of the aftershock
                cnt += 1   # increment the counting variable

af_ind = np.delete(np.unique(save), 0)   # This is an array of indexes of aftershocks
The np.delete() function is used to remove aftershocks from the lat, lon, etc. arrays. It is good to rename the receiving arrays (plat, plon, etc. - p for 'primary').
[76]:
# This time delete the aftershocks from the catalog and create new arrays
plat = np.delete(lat, af_ind)
plon = np.delete(lon, af_ind)
pmag = np.delete(mag, af_ind)
pyear = np.delete(year, af_ind)
pmonth = np.delete(month, af_ind)
pday = np.delete(day, af_ind)
1.8.1 How many events were removed from the catalog? What percentage of the events are considered primary, or mainshocks?
28824 events were removed. About 15% of the events are considered primary.
[77]:
len(mag) - len(pmag)

[77]:
28824

[79]:
len(pmag) / len(mag) * 100

[79]:
14.945852636548732
[80]:
# RePlot Mag vs Time and compare with what was obtained before
d0 = datetime.date(pyear[0], 1, 1)
time = np.zeros(len(pday))   # initialize the size of the array
for i in range(0, len(pday), 1):
    d1 = datetime.date(pyear[i], pmonth[i], pday[i])
    time[i] = ((d1 - d0).days)/365.25 + year[0]
# This loop created a time array in units of years.

fig, ax = plt.subplots(figsize=(6, 6))
ax.plot(time, pmag, 'o', alpha=0.2, markersize=5)
ax.set(xlabel='Time', ylabel='Magnitude',
       title='Declustered Event Catalog')
ax.grid()
plt.show()
1.8.2 Make a map of the declustered earthquake catalog and compare with the complete catalog
[81]:
# Make a Map of the earthquake catalog
# Set Corners of Map
lat0 = 36.0
lat1 = 38.5
lon0 = -123.0
lon1 = -121.0
tickstep = 0.5   # for axes
latticks = np.arange(lat0, lat1 + tickstep, tickstep)
lonticks = np.arange(lon0, lon1 + tickstep, tickstep)

# coordinates for UC Berkeley
Berk_lat = 37.8716
Berk_lon = -122.2727

plt.figure(1, (10, 10))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_extent([lon0, lon1, lat0, lat1], crs=ccrs.PlateCarree())
ax.coastlines(resolution='10m', linewidth=1)
ax.set_xticks(lonticks)
ax.set_yticks(latticks)
ax.set(xlabel='Longitude', ylabel='Latitude', title='Declustered Catalog')

# Sort by magnitude to plot largest events on top
#LP_catalog_sorted = LP_catalog.sort_values(by=['Magnitude'])
# exponent to scale marker size
z = np.exp(pmag)
plt.scatter(plon, plat, s=z,
            c=pmag, cmap='plasma', alpha=0.4, marker='o')   # plot circles on EQs
plt.plot(Berk_lon, Berk_lat, 's', color='limegreen', markersize=8)   # plot green square on Berkeley Campus
plt.colorbar(label='Magnitude')
plt.show()
1.8.3 Compute the Gutenberg Richter Statistic
• Compile the annual number of earthquakes above a given magnitude
• Examine the distribution
• Fit the Gutenberg Richter model to the data
• Use the Gutenberg Richter model to forecast recurrence of earthquakes

log10(N(m)) = A + B * M

Let's try it.
Code for us to examine and write
[86]:
# Compute G&R Statistics
min_mag = 0.0
max_mag = max(pmag)
m = np.arange(min_mag, max_mag, 0.1)   # The magnitude "bins"
N = np.zeros(len(m))   # N is the number of events
numyr = 1995 - 1989 + 1   # number of years in the catalog - note 7 years is too low for this analysis
for i in range(0, len(m), 1):
    N[i] = np.log10(sum(pmag >= m[i]) / numyr)

# plot it
plt.figure()
plt.plot(m, N, 'o')
plt.xlabel('Magnitude')
plt.ylabel('log10(N)')
plt.title('Gutenberg Richter')
plt.show()
[94]:
# Next Fit the G&R model to the data and plot
y = np.polyfit(m[(m <= 5.5) & (m >= 1.50)], N[(m <= 5.5) & (m >= 1.50)], 1)
intercept = y[1]
slope = y[0]
print(f'The declustered data B value: {slope:0.3f}')

# plot it
plt.figure()
plt.plot(m, N, 'o')
plt.plot(m, intercept + slope*m, '-', color='orange')
plt.xlabel('Magnitude')
plt.ylabel('log10(N)')
plt.title('Gutenberg Richter')
plt.show()
The declustered data B value: -0.836
What is the annual rate of M4, M5 and M6+ earthquakes? What is the average recurrence interval? Note that this result is significantly biased because here we used only a 7-year catalog. In the takehome assignment you will explore a more expansive catalog.
The annual rate, read as log10(N) from the fitted line, falls from about 1 to -1 over this magnitude range.
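One way to read annual rates and recurrence intervals off the model is a sketch like the following, which reuses the intercept and slope fitted above (the magnitudes are those in the question):

for M in (4.0, 5.0, 6.0):
    N_per_yr = 10**(intercept + slope*M)   # fitted annual rate of events with magnitude >= M
    print(f'M{M:.0f}+: {N_per_yr:.3f} per year, recurrence ~ {1/N_per_yr:.1f} years')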
1.8.4 Turn in this notebook
Save your completed notebook to a pdf file and upload to bcourses.
[ ]: