**Exercise 9.29**
Let \( Y_1, Y_2, \ldots, Y_n \) denote a random sample of size \( n \) from a power family distribution (see Exercise 6.17). Then the methods of Section 6.7 imply that \( Y_{(n)} = \max(Y_1, Y_2, \ldots, Y_n) \) has the distribution function given by
\[
F_{(n)}(y) =
\begin{cases}
0, & y < 0, \\
\left(\frac{y}{\theta}\right)^{\alpha n}, & 0 \leq y \leq \theta, \\
1, & y > \theta.
\end{cases}
\]
Use the method described in Exercise 9.26 to show that \( Y_{(n)} \) is a consistent estimator of \( \theta \).
---
**Explanation:**
- The notation \( Y_{(n)} \) refers to the maximum value in the sample \( Y_1, Y_2, \ldots, Y_n \).
- The distribution function \( F_{(n)}(y) \) provides probabilities for different ranges of \( y \):
- \( F_{(n)}(y) = 0 \) when \( y < 0 \), indicating no probability below zero.
- \( F_{(n)}(y) = \left(\frac{y}{\theta}\right)^{\alpha n} \) for \( 0 \leq y \leq \theta \), showing the probability distribution within this range.
- \( F_{(n)}(y) = 1 \) when \( y > \theta \), indicating certainty above \( \theta \).
- The exercise asks for a demonstration of the consistency of \( Y_{(n)} \) as an estimator for \( \theta \), using techniques from a previous exercise (9.26).
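As a numerical check (an illustrative simulation, not the analytic argument the exercise requests), one can sample from the power family by the inverse-CDF transformation \( Y = \theta U^{1/\alpha} \) with \( U \sim \text{Uniform}(0, 1) \), and estimate \( P(|Y_{(n)} - \theta| \leq \epsilon) \) for increasing \( n \). The values of `theta`, `alpha`, and `eps` below are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative Monte Carlo check of consistency; not the analytic proof.
# For the power family, F(y) = (y/theta)**alpha on (0, theta), so inverse-CDF
# sampling gives Y = theta * U**(1/alpha) with U ~ Uniform(0, 1).
rng = np.random.default_rng(0)
theta, alpha, eps = 2.0, 3.0, 0.05  # example values chosen for illustration

def coverage(n, reps=20_000):
    """Estimate P(|Y_(n) - theta| <= eps) for sample size n by simulation."""
    u = rng.uniform(size=(reps, n))
    # max of theta * u**(1/alpha) equals theta * (max u)**(1/alpha),
    # since the transformation is monotone increasing.
    y_max = theta * u.max(axis=1) ** (1.0 / alpha)
    return np.mean(np.abs(y_max - theta) <= eps)

# The estimates should approach the exact value 1 - ((theta-eps)/theta)**(alpha*n),
# which tends to 1 as n grows.
for n in (5, 50, 500):
    print(n, coverage(n))
```

Increasing `n` drives the estimated probability toward 1, which is exactly the behavior the consistency argument establishes analytically.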
**Exercise 9.26**
It is sometimes relatively easy to establish consistency or lack of consistency by appealing directly to Definition 9.2, evaluating \( P(|\hat{\theta}_n - \theta| \leq \epsilon) \) directly, and then showing that
\[
\lim_{n \to \infty} P(|\hat{\theta}_n - \theta| \leq \epsilon) = 1
\]
Let \( Y_1, Y_2, \ldots, Y_n \) denote a random sample of size \( n \) from a uniform distribution on the interval \( (0, \theta) \). If \( Y_{(n)} = \max(Y_1, Y_2, \ldots, Y_n) \), we showed in Exercise 6.74 that the probability distribution function of \( Y_{(n)} \) is given by
\[
F_{(n)}(y) =
\begin{cases}
0, & y < 0, \\
(y/\theta)^n, & 0 \leq y \leq \theta, \\
1, & y > \theta.
\end{cases}
\]
**a** For each \( n \geq 1 \) and every \( \epsilon > 0 \), it follows that
\[
P(|Y_{(n)} - \theta| \leq \epsilon) = P(\theta - \epsilon \leq Y_{(n)} \leq \theta + \epsilon).
\]
If \( \epsilon > \theta \), verify that \( P(\theta - \epsilon \leq Y_{(n)} \leq \theta + \epsilon) = 1 \) and that, for every positive \( \epsilon < \theta \), we obtain
\[
P(\theta - \epsilon \leq Y_{(n)} \leq \theta + \epsilon) = 1 - [(\theta - \epsilon)/\theta]^n.
\]
**b** Using the result from part (a), show that \( Y_{(n)} \) is a consistent estimator for \( \theta \) by showing that, for every \( \epsilon > 0 \),
\[
\lim_{n \to \infty} P(|Y_{(n)} - \theta| \leq \epsilon) = 1.
\]
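Applying the same method to the power family gives a short sketch of the argument Exercise 9.29 asks for (the only change from Exercise 9.26 is the exponent \( \alpha n \) in place of \( n \)):

```latex
% Sketch: the method of Exercise 9.26 applied to the power family CDF.
% For any fixed 0 < \epsilon < \theta,
\begin{align*}
P(|Y_{(n)} - \theta| \leq \epsilon)
  &= P(\theta - \epsilon \leq Y_{(n)} \leq \theta + \epsilon) \\
  &= F_{(n)}(\theta + \epsilon) - F_{(n)}(\theta - \epsilon)
   = 1 - \left(\frac{\theta - \epsilon}{\theta}\right)^{\alpha n}.
\end{align*}
% Since 0 < (\theta - \epsilon)/\theta < 1, the final term tends to 0 as
% n -> infinity, so the probability tends to 1; the case \epsilon > \theta
% gives probability 1 for every n. Hence Y_{(n)} is consistent for \theta.
```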