Question
Explain the example mathematically and write the complete solution. (From "Mathematical Statistics: A Decision Theoretic Approach.")
7. Let Θ = (0, ∞), let A be the real line, and let L(θ, a) = (θ − a)².
Let the distribution of X be Poisson with parameter θ > 0,

f_X(x | θ) = e^(−θ) θ^x / x!,  x = 0, 1, 2, ….

Take as the prior distribution of θ the gamma distribution G(α, β)
(see Section 3.1) with density

g(θ) = θ^(α−1) e^(−θ/β) / (Γ(α) β^α)  for θ > 0,

where α > 0 and β > 0.
(a) Show that the posterior distribution of θ given X = x is the
gamma distribution G(α + x, β/(β + 1)).
(b) The first two moments of the gamma distribution G(α, β) are
αβ and α(α + 1)β². Show that the Bayes rule with respect to G(α, β)
is d_{α,β}(x) = β(α + x)/(β + 1).
(c) Show that the usual (maximum-likelihood) estimate d(x) = x
is not a Bayes rule. Note that if Θ = [0, ∞), d(x) = x will be a
Bayes rule.
(d) Show that the rule d(x) = x is a limit of Bayes rules.
(e) Show that the rule d(x) = x is a generalized Bayes rule with
respect to the measure τ(θ) = log θ [or dτ(θ) = (1/θ) dθ].
(f) Show that the rule d(x) = x is an extended Bayes rule.

1.8.7. (a) The joint density of θ and X is

h(θ, x) = f_X(x | θ) g(θ) = [e^(−θ) θ^x / x!] · [θ^(α−1) e^(−θ/β) / (Γ(α) β^α)]

for x = 0, 1, … and θ > 0. Hence g(θ | x) is proportional to

e^(−θ(β+1)/β) θ^(α+x−1)

for θ > 0, which makes g(θ | x) the gamma distribution G(α + x, β/(β + 1)).
(b) Since the loss is squared error, the Bayes rule is the posterior mean: d_{α,β}(x) = E(θ | x) = (α + x)β/(β + 1).
(c) If d(x) = x were Bayes with respect to a prior τ, then r(τ, d) = 0, since d is an unbiased estimate of θ and an unbiased estimate cannot be Bayes under squared-error loss unless its Bayes risk is zero. On the other hand, r(τ, d) = E(θ − X)² = E(E((θ − X)² | θ)) = E(Var(X | θ)) = E(θ). If Θ = (0, ∞), then E(θ) > 0, since θ > 0. But if Θ = [0, ∞), then E(θ) can be zero, and in fact d, or any rule d such that d(0) = 0, is Bayes with respect to the distribution degenerate at 0. (For θ = 0, P(· | θ) is defined to be degenerate at 0.)
(d) d_{α,β}(x) = (α + x)β/(β + 1) → d(x) = x as α → 0 and β → ∞.
(e) We want to find d to minimize

∫₀^∞ (θ − d)² e^(−θ) θ^x (1/θ) dθ.

If x = 0, then this integral is +∞ unless d = 0; hence d(0) = 0. If x > 0, then minimizing this integral is equivalent to finding d to minimize E(θ − d)² when θ has the distribution G(x, 1), and so d(x) = x.
(f) Given ε > 0, let τ_ε be the gamma distribution G(1, ε). Then

r(τ_ε, d) = E(θ − d(X))² = E(E{(θ − X)² | θ}) = E(Var(X | θ)) = E(θ) = ε.

Since the minimum Bayes risk cannot be negative, d certainly comes within ε of minimizing the Bayes risk. Since ε is arbitrary, d is extended Bayes. (The same is true of any rule d such that d(0) = 0.)
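The extended-Bayes claim in part (f) can also be verified numerically: under the G(1, ε) prior (an exponential distribution with mean ε), the Bayes risk of d(x) = x is E(Var(X | θ)) = E(θ) = ε. The sketch below computes E[(θ − X)²] for fixed θ by direct Poisson summation and then integrates it against the prior; ε and the truncation/grid parameters are my illustrative choices, not from the text.

```python
import math

def poisson_mse(theta, x_max=200):
    """E[(theta - X)^2] for X ~ Poisson(theta), theta > 0, by direct summation.

    Analytically this equals Var(X | theta) = theta.
    """
    total = 0.0
    log_fact = 0.0  # running log(x!)
    for x in range(x_max + 1):
        if x > 0:
            log_fact += math.log(x)
        p = math.exp(-theta + x * math.log(theta) - log_fact)
        total += (theta - x) ** 2 * p
    return total

def bayes_risk_of_identity(eps, n=5_000):
    """Bayes risk r(tau_eps, d) of d(x) = x under the G(1, eps) prior."""
    theta_max = 40.0 * eps  # exponential tail is negligible beyond this
    h = theta_max / n
    risk = 0.0
    for i in range(1, n + 1):
        theta = i * h
        prior = math.exp(-theta / eps) / eps  # G(1, eps) density
        risk += poisson_mse(theta) * prior * h
    return risk

eps = 0.5
print(bayes_risk_of_identity(eps))  # close to eps, as part (f) asserts
```

Since the computed risk is ε and ε can be taken arbitrarily small, d(x) = x comes within ε of the minimum Bayes risk for every ε > 0, which is exactly the extended-Bayes property.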