

**Exercise 9.29**

Let \( Y_1, Y_2, \ldots, Y_n \) denote a random sample of size \( n \) from a power family distribution (see Exercise 6.17). Then the methods of Section 6.7 imply that \( Y_{(n)} = \max(Y_1, Y_2, \ldots, Y_n) \) has the distribution function given by

\[
F_{(n)}(y) = 
\begin{cases} 
0, & y < 0, \\
\left(\frac{y}{\theta}\right)^{\alpha n}, & 0 \leq y \leq \theta, \\
1, & y > \theta.
\end{cases}
\]

Use the method described in Exercise 9.26 to show that \( Y_{(n)} \) is a consistent estimator of \( \theta \).

---

**Explanation:**

- The notation \( Y_{(n)} \) refers to the maximum value in the sample \( Y_1, Y_2, \ldots, Y_n \).
- The distribution function \( F_{(n)}(y) \) provides probabilities for different ranges of \( y \):
  - \( F_{(n)}(y) = 0 \) when \( y < 0 \), indicating no probability below zero.
  - \( F_{(n)}(y) = \left(\frac{y}{\theta}\right)^{\alpha n} \) for \( 0 \leq y \leq \theta \), showing the probability distribution within this range.
  - \( F_{(n)}(y) = 1 \) when \( y > \theta \), indicating certainty above \( \theta \).
- The exercise asks for demonstration of the consistency of \( Y_{(n)} \) as an estimator for \( \theta \), using techniques from a previous exercise (9.26).
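The consistency claim can be checked numerically. Below is a minimal Monte Carlo sketch (not part of the exercise's formal proof), assuming illustrative values \( \theta = 2 \), \( \alpha = 3 \), \( \epsilon = 0.1 \). It samples from the power family by inverse transform, since \( F(y) = (y/\theta)^{\alpha} \) gives \( Y = \theta U^{1/\alpha} \) for \( U \sim \text{Uniform}(0, 1) \), and compares the simulated \( P(|Y_{(n)} - \theta| \leq \epsilon) \) with the exact value \( 1 - [(\theta - \epsilon)/\theta]^{\alpha n} \) implied by \( F_{(n)} \).

```python
import numpy as np

def coverage(n, theta=2.0, alpha=3.0, eps=0.1, reps=20_000, seed=0):
    """Monte Carlo estimate of P(|Y_(n) - theta| <= eps) for the power family."""
    rng = np.random.default_rng(seed)
    # Inverse-transform sampling: F(y) = (y/theta)**alpha  =>  Y = theta * U**(1/alpha)
    y = theta * rng.uniform(size=(reps, n)) ** (1.0 / alpha)
    y_max = y.max(axis=1)  # Y_(n) for each replicate
    return np.mean(np.abs(y_max - theta) <= eps)

for n in (5, 50, 500):
    # Exact probability from F_(n)(y) = (y/theta)**(alpha*n)
    exact = 1 - ((2.0 - 0.1) / 2.0) ** (3.0 * n)
    print(f"n={n:4d}  MC={coverage(n):.4f}  exact={exact:.4f}")
```

As \( n \) grows, both the simulated and exact probabilities approach 1, which is precisely the consistency condition of Definition 9.2.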
**Exercise 9.26**

It is sometimes relatively easy to establish consistency or lack of consistency by appealing directly to Definition 9.2, evaluating \( P(|\hat{\theta}_n - \theta| \leq \epsilon) \) directly, and then showing that 

\[
\lim_{n \to \infty} P(|\hat{\theta}_n - \theta| \leq \epsilon) = 1
\]

Let \( Y_1, Y_2, \ldots, Y_n \) denote a random sample of size \( n \) from a uniform distribution on the interval \( (0, \theta) \). If \( Y_{(n)} = \max(Y_1, Y_2, \ldots, Y_n) \), we showed in Exercise 6.74 that the probability distribution function of \( Y_{(n)} \) is given by

\[
F_{(n)}(y) = 
\begin{cases} 
0, & y < 0, \\
(y/\theta)^n, & 0 \leq y \leq \theta, \\
1, & y > \theta.
\end{cases}
\]

**a** For each \( n \geq 1 \) and every \( \epsilon > 0 \), it follows that 

\[
P(|Y_{(n)} - \theta| \leq \epsilon) = P(\theta - \epsilon \leq Y_{(n)} \leq \theta + \epsilon).
\]

If \( \epsilon > \theta \), verify that \( P(\theta - \epsilon \leq Y_{(n)} \leq \theta + \epsilon) = 1 \) and that, for every positive \( \epsilon < \theta \), we obtain 

\[
P(\theta - \epsilon \leq Y_{(n)} \leq \theta + \epsilon) = 1 - [(\theta - \epsilon)/\theta]^n.
\]
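A quick numerical check of this identity for the uniform case (a sketch, with illustrative values \( \theta = 1 \), \( \epsilon = 0.05 \), \( n = 100 \) that are not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, eps, n, reps = 1.0, 0.05, 100, 50_000  # illustrative values
# Y_(n) = max of n Uniform(0, theta) draws, for each of `reps` replicates
y_max = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
mc = np.mean((theta - eps <= y_max) & (y_max <= theta + eps))
exact = 1 - ((theta - eps) / theta) ** n  # part (a) formula
print(f"MC={mc:.4f}  exact={exact:.4f}")
```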

**b** Using the result from part (a), show that \( Y_{(n)} \) is a consistent estimator for \( \theta \) by showing that, for every \( \epsilon > 0 \), 

\[
\lim_{n \to \infty} P(|Y_{(n)} - \theta| \leq \epsilon) = 1.
\]
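A brief sketch of the limiting argument (an assumption about the intended solution, following directly from part (a)): for any fixed \( 0 < \epsilon < \theta \),

\[
\lim_{n \to \infty} P(\theta - \epsilon \leq Y_{(n)} \leq \theta + \epsilon)
= \lim_{n \to \infty} \left( 1 - \left[ \frac{\theta - \epsilon}{\theta} \right]^{n} \right) = 1,
\quad \text{since } 0 \leq \frac{\theta - \epsilon}{\theta} < 1.
\]

The same reasoning applies to Exercise 9.29 with the exponent \( n \) replaced by \( \alpha n \).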