Please see the attached mathematical statistics question below. Part (b): how do I find the distribution of Y = −ln X?

Question


(using section 1.7 which is attached)

1.7 Continuous Random Variables
In the last section, we discussed discrete random variables. Another class of random
variables important in statistical applications is the class of continuous random
variables, which we define next.
Definition 1.7.1 (Continuous Random Variables). We say a random variable X is a continuous random variable if its cumulative distribution function $F_X(x)$ is a continuous function for all $x \in \mathbb{R}$.

Recall from Theorem 1.5.3 that $P(X = x) = F_X(x) - F_X(x-)$ for any random variable X. Hence, for a continuous random variable X, there are no points of discrete mass; i.e., if X is continuous, then $P(X = x) = 0$ for all $x \in \mathbb{R}$. Most continuous random variables are absolutely continuous; that is,
$$F_X(x) = \int_{-\infty}^{x} f_X(t)\, dt, \qquad (1.7.1)$$
for some function $f_X(t)$. The function $f_X(t)$ is called a probability density function (pdf) of X. If $f_X(x)$ is also continuous, then the Fundamental Theorem of Calculus implies that
$$F_X'(x) = f_X(x). \qquad (1.7.2)$$
The support of a continuous random variable X consists of all points $x$ such that $f_X(x) > 0$. As in the discrete case, we often denote the support of X by $\mathcal{S}$.

If X is a continuous random variable, then probabilities can be obtained by integration; i.e.,
$$P(a < X \le b) = F_X(b) - F_X(a) = \int_a^b f_X(t)\, dt.$$
Also, for continuous random variables,
$$P(a < X \le b) = P(a \le X \le b) = P(a \le X < b) = P(a < X < b).$$
From the definition (1.7.2), note that pdfs satisfy the two properties
$$\text{(i) } f_X(x) \ge 0 \quad \text{and} \quad \text{(ii) } \int_{-\infty}^{\infty} f_X(t)\, dt = 1. \qquad (1.7.3)$$
The second property, of course, follows from $F_X(\infty) = 1$. In an advanced course in probability, it is shown that if a function satisfies the above two properties, then it is a pdf for a continuous random variable; see, for example, Tucker (1967).
Recall from Example 1.5.2 the simple experiment where a number was chosen at random from the interval (0, 1). The number chosen, X, is an example of a continuous random variable. Recall that the cdf of X is $F_X(x) = x$ for $0 < x < 1$. Hence, the pdf of X is given by
$$f_X(x) = \begin{cases} 1 & 0 < x < 1 \\ 0 & \text{elsewhere.} \end{cases} \qquad (1.7.4)$$
Any continuous or discrete random variable X whose pdf or pmf is constant on the support of X is said to have a uniform distribution; see Chapter 3 for a more formal definition.
Example 1.7.1 (Point Chosen at Random Within the Unit Circle). Suppose we select a point at random in the interior of a circle of radius 1. Let X be the distance of the selected point from the origin. The sample space for the experiment is $\mathcal{C} = \{(w, y) : w^2 + y^2 < 1\}$. Because the point is chosen at random, it seems reasonable that subsets of $\mathcal{C}$ which have equal area are equilikely. Hence, the probability that the selected point lies in a set $A \subset \mathcal{C}$ is proportional to the area of A; i.e.,
$$P(A) = \frac{\text{area of } A}{\pi}.$$
For $0 < x < 1$, the event $\{X \le x\}$ is equivalent to the point lying in a circle of radius $x$. By this probability rule, $P(X \le x) = \pi x^2 / \pi = x^2$; hence, the cdf of X is
$$F_X(x) = \begin{cases} 0 & x < 0 \\ x^2 & 0 \le x < 1 \\ 1 & 1 \le x. \end{cases} \qquad (1.7.5)$$
Taking the derivative of $F_X(x)$, we obtain the pdf of X:
$$f_X(x) = \begin{cases} 2x & 0 < x < 1 \\ 0 & \text{elsewhere.} \end{cases} \qquad (1.7.6)$$
For illustration, the probability that the selected point falls in the ring with radii 1/4 and 1/2 is given by
$$P\!\left(\tfrac{1}{4} < X \le \tfrac{1}{2}\right) = \int_{1/4}^{1/2} 2w\, dw = \left. w^2 \right|_{1/4}^{1/2} = \frac{1}{4} - \frac{1}{16} = \frac{3}{16}.$$
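As a sanity check on Example 1.7.1, a short Monte Carlo simulation can compare the empirical probability of the ring event with the exact value 3/16. This is an illustrative Python sketch (the book's own code examples are in R; sample size and seed are our choices):

```python
import math
import random

random.seed(0)

# Sample points uniformly in the unit disk by rejection, record the
# distance X to the origin, and estimate P(1/4 < X <= 1/2).
n = 200_000
hits = 0
count = 0
while count < n:
    w, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if w * w + y * y < 1:          # accept: point lies inside the disk
        count += 1
        x = math.hypot(w, y)       # distance from the origin
        if 0.25 < x <= 0.5:
            hits += 1

est = hits / n
print(est, 3 / 16)                 # empirical vs. exact probability
```

With this many samples the empirical frequency should sit within about a tenth of a percent of 3/16 = 0.1875.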
Example 1.7.2. Let the random variable X be the time in seconds between incoming telephone calls at a busy switchboard. Suppose that a reasonable probability model for X is given by the pdf
$$f_X(x) = \begin{cases} \frac{1}{4} e^{-x/4} & 0 < x < \infty \\ 0 & \text{elsewhere.} \end{cases}$$
Note that $f_X$ satisfies the two properties of a pdf, namely, (i) $f_X(x) \ge 0$ and (ii)
$$\int_0^\infty \frac{1}{4} e^{-x/4}\, dx = \left. -e^{-x/4} \right|_0^\infty = 1.$$
For illustration, the probability that the time between successive phone calls exceeds 4 seconds is given by
$$P(X > 4) = \int_4^\infty \frac{1}{4} e^{-x/4}\, dx = e^{-1} \approx 0.3679.$$
The pdf and the probability of interest are depicted in Figure 1.7.1. From the figure, the pdf has a long right tail and no left tail. We say that this distribution is skewed right or positively skewed. This is an example of a gamma distribution, which is discussed in detail in Chapter 3.
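The two pdf properties and the tail probability in Example 1.7.2 can also be checked numerically. This is an illustrative Python sketch using a simple midpoint rule; the grid size and the cutoff 200 (beyond which the tail mass is about $e^{-50}$) are our choices:

```python
import math

# pdf of Example 1.7.2: f_X(x) = (1/4) e^{-x/4} on (0, inf)
def pdf(x):
    return 0.25 * math.exp(-x / 4)

# Midpoint-rule integrals over (0, 200); 4 falls exactly on a cell edge.
n, b = 200_000, 200.0
h = b / n
total = sum(pdf((i + 0.5) * h) for i in range(n)) * h
tail = sum(pdf((i + 0.5) * h) for i in range(n) if (i + 0.5) * h > 4) * h

print(round(total, 6))   # total mass, should be ~1
print(round(tail, 4))    # P(X > 4), should be ~e^{-1} ~ 0.3679
```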
1.7.1 Quantiles

Quantiles (percentiles) are easily interpretable characteristics of a distribution.

Definition 1.7.2 (Quantile). Let $0 < p < 1$. The quantile of order p of the distribution of a random variable X is a value $\xi_p$ such that $P(X < \xi_p) \le p$ and $P(X \le \xi_p) \ge p$. It is also known as the $(100p)$th percentile of X.

Examples include the median, which is the quantile $\xi_{1/2}$. The median is also called the second quartile. It is a point in the domain of X that divides the mass of the pdf into its lower and upper halves. The first and third quartiles divide each of these halves into quarters. They are, respectively, $\xi_{1/4}$ and $\xi_{3/4}$. We label these quartiles as $q_1$, $q_2$, and $q_3$, respectively. The difference $\mathrm{iqr} = q_3 - q_1$ is called the interquartile range of X. The median is often used as a measure of center of the distribution of X, while the interquartile range is used as a measure of spread or dispersion of the distribution of X.

Figure 1.7.1: In Example 1.7.2, the area under the pdf to the right of 4 is $P(X > 4)$.
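For the switchboard model of Example 1.7.2 the quantiles have a closed form, which makes the quartiles and the interquartile range easy to compute. This Python sketch uses the inverse cdf $\xi_p = -4\ln(1-p)$, obtained by solving $F_X(\xi_p) = 1 - e^{-\xi_p/4} = p$:

```python
import math

# Quantile function of Example 1.7.2: invert F_X(x) = 1 - e^{-x/4}.
def quantile(p):
    return -4.0 * math.log(1.0 - p)

q1, q2, q3 = quantile(0.25), quantile(0.50), quantile(0.75)
iqr = q3 - q1                      # interquartile range q3 - q1
print(round(q2, 4), round(iqr, 4)) # median 4 ln 2, iqr 4 ln 3
```

The median is $4\ln 2 \approx 2.7726$ and the interquartile range is $4\ln 3 \approx 4.3944$.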
Quantiles need not be unique even for continuous random variables with pdfs. For example, any point in the interval (2, 3) serves as a median for the following pdf:
$$f(x) = \begin{cases} 3(1 - x)(x - 2) & 1 < x < 2 \\ 3(3 - x)(x - 4) & 3 < x < 4 \\ 0 & \text{elsewhere.} \end{cases} \qquad (1.7.7)$$
If, however, a quantile, say $\xi_p$, is in the support of an absolutely continuous random variable X with cdf $F_X(x)$, then $\xi_p$ is the unique solution to the equation
$$\xi_p = F_X^{-1}(p), \qquad (1.7.8)$$
where $F_X^{-1}(u)$ is the inverse function of $F_X(x)$. The next example serves as an illustration.
Example 1.7.3. Let X be a continuous random variable with pdf
$$f(x) = \frac{e^x}{(1 + 5e^x)^{1.2}}, \quad -\infty < x < \infty. \qquad (1.7.9)$$
This pdf is a member of the log F-family of distributions, which is often used in the modeling of the log of lifetime data. Note that X has the support space $(-\infty, \infty)$. The cdf of X is
$$F(x) = 1 - (1 + 5e^x)^{-0.2}, \quad -\infty < x < \infty,$$
which is confirmed immediately by showing that $F'(x) = f(x)$. For the inverse of the cdf, set $u = F(x)$ and solve for $x$. A few steps of algebra lead to
$$F^{-1}(u) = \log\left\{0.2\left[(1 - u)^{-5} - 1\right]\right\}, \quad 0 < u < 1.$$
Thus, $\xi_p = F^{-1}(p) = \log\{0.2[(1 - p)^{-5} - 1]\}$. The following three R functions can be used to compute the pdf, cdf, and inverse cdf of F, respectively. These can be downloaded at the site listed in the Preface.

dlogF <- function(x){exp(x)/(1+5*exp(x))^(1.2)}
plogF <- function(x){1 - (1+5*exp(x))^(-.2)}
qlogF <- function(x){log(.2*((1-x)^(-5) - 1))}

Once the R function qlogF is sourced, it can be used to compute quantiles. The following R script computes the three quartiles of X:

qlogF(.25); qlogF(.50); qlogF(.75)
-0.4419242; 1.824549; 5.321057
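The same quartiles can be reproduced outside R. Here is a hypothetical Python port of qlogF (same inverse-cdf formula; the naming mirrors the book's R function but is ours):

```python
import math

# Inverse cdf of the log F pdf (1.7.9):
#   F^{-1}(u) = log(0.2 * ((1 - u)^{-5} - 1)), 0 < u < 1
def qlogF(u):
    return math.log(0.2 * ((1.0 - u) ** (-5) - 1.0))

print([round(qlogF(p), 6) for p in (0.25, 0.50, 0.75)])
# agrees with the R output: -0.441924, 1.824549, 5.321057 (to 6 places)
```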
Figure 1.7.2 displays a plot of this pdf and its quartiles. Notice that this is another example of a skewed-right distribution; i.e., the right tail is much longer than the left tail. In terms of the log-lifetime of mechanical parts having this distribution, it follows that 50% of the parts survive beyond 1.82 log-units and 25% of the parts live longer than 5.32 log-units. With the long right tail, some parts attain a long life.
1.7.2 Transformations

Let X be a continuous random variable with a known pdf $f_X$. As in the discrete case, we are often interested in the distribution of a random variable Y which is some transformation of X, say, $Y = g(X)$. Often we can obtain the pdf of Y by first obtaining its cdf. We illustrate this with two examples.
Example 1.7.4. Let X be the random variable in Example 1.7.1. Recall that X was the distance from the origin to the random point selected in the unit circle. Suppose instead that we are interested in the square of the distance; that is, let $Y = X^2$. The support of Y is the same as that of X, namely, $\mathcal{S}_Y = (0, 1)$. What is the cdf of Y? By expression (1.7.5), the cdf of X is
$$F_X(x) = \begin{cases} 0 & x < 0 \\ x^2 & 0 \le x < 1 \\ 1 & 1 \le x. \end{cases} \qquad (1.7.10)$$
Let y be in the support of Y; i.e., $0 < y < 1$. Then, using expression (1.7.10) and the fact that the support of X contains only positive numbers, the cdf of Y is
$$F_Y(y) = P(Y \le y) = P(X^2 \le y) = P(X \le \sqrt{y}) = F_X(\sqrt{y}) = (\sqrt{y})^2 = y.$$
It follows that the pdf of Y is
$$f_Y(y) = \begin{cases} 1 & 0 < y < 1 \\ 0 & \text{elsewhere.} \end{cases}$$

Figure 1.7.2: A graph of the pdf (1.7.9) showing the three quartiles, $q_1$, $q_2$, and $q_3$, of the distribution. The probability mass in each of the four sections is 1/4.
Example 1.7.5. Let $f_X(x) = \frac{1}{2}$, $-1 < x < 1$, zero elsewhere, be the pdf of a random variable X. Note that X has a uniform distribution on the interval of support $(-1, 1)$. Define the random variable Y by $Y = X^2$. We wish to find the pdf of Y. If $y \ge 0$, the probability $P(Y \le y)$ is equivalent to
$$P(X^2 \le y) = P(-\sqrt{y} \le X \le \sqrt{y}).$$
Accordingly, the cdf of Y, $F_Y(y) = P(Y \le y)$, is given by
$$F_Y(y) = \begin{cases} 0 & y < 0 \\ \sqrt{y} & 0 \le y < 1 \\ 1 & 1 \le y. \end{cases}$$
Hence, the pdf of Y is given by
$$f_Y(y) = \begin{cases} \frac{1}{2\sqrt{y}} & 0 < y < 1 \\ 0 & \text{elsewhere.} \end{cases}$$
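A quick simulation is a useful cross-check on Example 1.7.5. This Python sketch compares the empirical cdf of $Y = X^2$ with $\sqrt{y}$ at a few points (sample size, seed, and check points are our choices):

```python
import bisect
import random

random.seed(1)

# If X is uniform on (-1, 1) and Y = X^2, then F_Y(y) = sqrt(y) on (0, 1).
n = 200_000
ys = sorted(random.uniform(-1, 1) ** 2 for _ in range(n))

def ecdf(y):
    # empirical cdf: fraction of simulated Y values <= y
    return bisect.bisect_right(ys, y) / n

for y in (0.09, 0.25, 0.64):
    print(round(ecdf(y), 3), y ** 0.5)   # empirical vs. sqrt(y)
```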
These examples illustrate the cumulative distribution function technique. The transformation in Example 1.7.4 is one-to-one, and in such cases we can obtain a simple formula for the pdf of Y in terms of the pdf of X, which we record in the next theorem.

Theorem 1.7.1. Let X be a continuous random variable with pdf $f_X(x)$ and support $\mathcal{S}_X$. Let $Y = g(X)$, where $g(x)$ is a one-to-one differentiable function on the support of X, $\mathcal{S}_X$. Denote the inverse of g by $x = g^{-1}(y)$ and let $dx/dy = d[g^{-1}(y)]/dy$. Then the pdf of Y is given by
$$f_Y(y) = f_X(g^{-1}(y)) \left| \frac{dx}{dy} \right| \quad \text{for } y \in \mathcal{S}_Y, \qquad (1.7.11)$$
where the support of Y is the set $\mathcal{S}_Y = \{y = g(x) : x \in \mathcal{S}_X\}$.
Proof: Since $g(x)$ is one-to-one and continuous, it is either strictly monotonically increasing or decreasing. Assume, for now, that it is strictly monotonically increasing. The cdf of Y is given by
$$F_Y(y) = P[Y \le y] = P[g(X) \le y] = P[X \le g^{-1}(y)] = F_X(g^{-1}(y)). \qquad (1.7.12)$$
Hence, the pdf of Y is
$$f_Y(y) = \frac{d}{dy} F_X(g^{-1}(y)) = f_X(g^{-1}(y)) \frac{dx}{dy}, \qquad (1.7.13)$$
where $dx/dy$ is the derivative of the function $x = g^{-1}(y)$. In this case, because g is increasing, $dx/dy > 0$. Hence, we can write $dx/dy = |dx/dy|$.

Suppose $g(x)$ is strictly monotonically decreasing. Then (1.7.12) becomes $F_Y(y) = 1 - F_X(g^{-1}(y))$. Hence, the pdf of Y is $f_Y(y) = f_X(g^{-1}(y))(-dx/dy)$. But since g is decreasing, $dx/dy < 0$ and, hence, $-dx/dy = |dx/dy|$. Thus Equation (1.7.11) is true in both cases.
Henceforth, we refer to $dx/dy = (d/dy)g^{-1}(y)$ as the Jacobian (denoted by J) of the transformation. In most mathematical areas, $J = dx/dy$ is referred to as the Jacobian of the inverse transformation $x = g^{-1}(y)$, but in this book it is called the Jacobian of the transformation, simply for convenience.

We summarize Theorem 1.7.1 in a simple algorithm, which we illustrate in the next example. Assuming that the transformation $Y = g(X)$ is one-to-one, the following steps lead to the pdf of Y:

1. Find the support of Y.
2. Solve for the inverse of the transformation; i.e., solve for x in terms of y in $y = g(x)$, thereby obtaining $x = g^{-1}(y)$.
3. Obtain $\frac{dx}{dy}$.
4. The pdf of Y is $f_Y(y) = f_X(g^{-1}(y)) \left|\frac{dx}{dy}\right|$.

(The proof of Theorem 1.7.1 can also be obtained by using the change-of-variable technique as discussed in Chapter 4 of Mathematical Comments.)
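The four-step algorithm can be phrased as a tiny higher-order function. This is a sketch with our own (not the book's) function names, checked against Example 1.7.4, where $f_X(x) = 2x$ on (0, 1), $g^{-1}(y) = \sqrt{y}$, and $dx/dy = 1/(2\sqrt{y})$, so $f_Y(y) = 1$:

```python
# Given the pdf f_X, the inverse transformation g^{-1}, and its
# derivative dx/dy, return the pdf of Y = g(X) per Theorem 1.7.1:
#   f_Y(y) = f_X(g^{-1}(y)) * |dx/dy|
def transformed_pdf(f_x, g_inv, dg_inv_dy):
    def f_y(y):
        return f_x(g_inv(y)) * abs(dg_inv_dy(y))
    return f_y

# Example 1.7.4: X has pdf 2x on (0, 1) and Y = X^2, so f_Y should be 1.
f_y = transformed_pdf(lambda x: 2 * x,
                      lambda y: y ** 0.5,
                      lambda y: 0.5 * y ** (-0.5))
print(round(f_y(0.3), 6), round(f_y(0.8), 6))
```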
Example 1.7.6. Let X have the pdf
$$f(x) = \begin{cases} 4x^3 & 0 < x < 1 \\ 0 & \text{elsewhere.} \end{cases}$$
Consider the random variable $Y = -\log X$. Here are the steps of the above algorithm:

1. The support of $Y = -\log X$ is $(0, \infty)$.
2. If $y = -\log x$, then $x = e^{-y}$.
3. $\frac{dx}{dy} = -e^{-y}$.
4. Thus the pdf of Y is
$$f_Y(y) = f_X(e^{-y})\left|-e^{-y}\right| = 4(e^{-y})^3 e^{-y} = 4e^{-4y}, \quad 0 < y < \infty.$$
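Example 1.7.6 can be confirmed by simulation: since $F_X(x) = x^4$ on (0, 1), X can be generated as $U^{1/4}$ for U uniform on (0, 1), and then $P(Y > t)$ for $Y = -\log X$ should match $e^{-4t}$. A Python sketch (sample size, seed, and check points are our choices):

```python
import math
import random

random.seed(2)

# Sample X with cdf x^4 via inverse transform (X = U^{1/4}),
# then form Y = -log X, which should have pdf 4 e^{-4y}.
n = 200_000
ys = [-math.log((1.0 - random.random()) ** 0.25) for _ in range(n)]

# Compare empirical and exact tail probabilities P(Y > t) = e^{-4t}.
tail_probs = {t: sum(y > t for y in ys) / n for t in (0.1, 0.25, 0.5)}
for t, emp in tail_probs.items():
    print(t, round(emp, 3), round(math.exp(-4 * t), 3))
```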
1.7.3 Mixtures of Discrete and Continuous Type Distributions

We close this section with two examples of distributions that are neither of the discrete nor of the continuous type.

Example 1.7.7. Let a distribution function be given by
$$F(x) = \begin{cases} 0 & x < 0 \\ \frac{x + 1}{2} & 0 \le x < 1 \\ 1 & 1 \le x. \end{cases}$$
Then, for instance,
$$P\!\left(-3 < X \le \tfrac{1}{2}\right) = F\!\left(\tfrac{1}{2}\right) - F(-3) = \tfrac{3}{4} - 0 = \tfrac{3}{4}$$
and
$$P(X = 0) = F(0) - F(0-) = \tfrac{1}{2} - 0 = \tfrac{1}{2}.$$
The graph of $F(x)$ is shown in Figure 1.7.3. We see that $F(x)$ is not always continuous, nor is it a step function. Accordingly, the corresponding distribution is neither of the continuous type nor of the discrete type. It may be described as a mixture of those types.
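The discrete mass at 0 in Example 1.7.7 is just the jump of the cdf, which a few lines of Python make concrete (the function name F is ours):

```python
# The cdf of Example 1.7.7: 0 for x < 0, (x + 1)/2 on [0, 1), 1 for x >= 1.
def F(x):
    if x < 0:
        return 0.0
    if x < 1:
        return (x + 1) / 2
    return 1.0

# P(X = 0) = F(0) - F(0-): approximate F(0-) by F at a point just below 0.
jump = F(0) - F(-1e-12)
p_interval = F(0.5) - F(-3)   # P(-3 < X <= 1/2)
print(jump, p_interval)        # 0.5 0.75
```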
Distributions that are mixtures of the continuous and discrete type do, in fact,
occur frequently in practice. For illustration, in life testing, suppose we know that
the length of life, say X, exceeds the number b, but the exact value of X is unknown.
This is called censoring. For instance, this can happen when a subject in a cancer
study simply disappears; the investigator knows that the subject has lived a certain
number of months, but the exact length of life is unknown. Or it might happen
when an investigator does not have enough time in an investigation to observe the
moments of deaths of all the animals, say rats, in some study. Censoring can also
occur in the insurance industry; in particular, consider a loss with a limited-pay
policy in which the top amount is exceeded but it is not known by how much.
Figure 1.7.3: Graph of the cdf of Example 1.7.7.
Example 1.7.8. Reinsurance companies are concerned with large losses because they might agree, for illustration, to cover losses due to wind damage that are between $2,000,000 and $10,000,000. Say that X equals the size of a wind loss in millions of dollars, and suppose it has the cdf
$$F_X(x) = 0, \quad -\infty < x < 0, \quad \ldots$$
[the remainder of this example is cut off in the attachment].
Let $X_1, \ldots, X_n$ be a random sample from a Beta$(\theta, 1)$ distribution, where $\theta > 0$.

(a) Show that $\hat{\theta} = -n \big/ \sum_{i=1}^n \ln X_i$ is the mle of $\theta$.
(b) Find the distribution of $Y = -\ln X$ (using Section 1.7).
(c) Use part (b) to show that $W = -\sum_{i=1}^n \ln X_i$ has a gamma distribution $\Gamma(n, 1/\theta)$.
(d) Show that $2\theta W$ has a $\chi^2(2n)$ distribution.
(e) Using part (d), find $c_1$, $c_2$ so that $P(c_1 < 2\theta W < c_2) = 1 - \alpha$, for $\alpha \in (0, 1)$. Next obtain a $(1 - \alpha)100\%$ CI for $\theta$.
(f) For $\alpha = 0.02$ and $n = 15$, compare the length of this interval with the length of the interval found in Example 6.2.6.
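For part (b), the cdf technique of Section 1.7.2 gives $P(Y \le y) = P(X \ge e^{-y}) = 1 - (e^{-y})^\theta = 1 - e^{-\theta y}$, i.e. Y is exponential with rate $\theta$, which is $\Gamma(1, 1/\theta)$. A Python simulation sketch consistent with that claim (the choice $\theta = 2$, the sample size, and the seed are ours):

```python
import math
import random

random.seed(3)

# If X ~ Beta(theta, 1), i.e. pdf theta * x^(theta-1) on (0, 1), then
# Y = -ln X should be exponential with rate theta, so E[Y] = 1/theta.
theta = 2.0
n = 200_000
ys = [-math.log(random.betavariate(theta, 1.0)) for _ in range(n)]

mean = sum(ys) / n
print(round(mean, 3))   # should be near 1/theta = 0.5
```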