Question
Please give a step-by-step solution to parts 4, 5, and 6.
This problem is on Maximum Likelihood Estimation.
[Figure: coin-toss table with columns "1st toss" through "7th toss". Each row records the outcomes (H or T) of one coin's seven tosses; the individual entries are garbled in the image transcription and are not reliably recoverable.]
1. What is your guess on the value of $p$?

2. In Maximum Likelihood Estimation, we want to find a parameter $p$ that maximizes the probability of all the observations in the dataset. If the dataset is a matrix $A$ whose rows $a_1, a_2, \dots, a_m$ are individual observations, we want to maximize $P(A) = P(a_1)P(a_2)\cdots P(a_m)$, because the individual experiments are independent. Maximizing this is equivalent to maximizing $\log P(A) = \log P(a_1) + \log P(a_2) + \dots + \log P(a_m)$, which in turn is equivalent to minimizing $-\log P(A) = -\log P(a_1) - \log P(a_2) - \dots - \log P(a_m)$.

3. Here you need to work out $P(a_i)$ for yourself.

4. If you do that properly, you will find an equation of the form

   $$-\frac{\log P(A)}{mn} = -\frac{\sum_{i=1}^{m} y_i}{mn}\,\log p - \frac{\sum_{i=1}^{m}(n - y_i)}{mn}\,\log(1-p),$$

   where $n$ is the number of tosses per coin and $y_i$ is the number of heads in row $i$. Now define

   $$q = \frac{\sum_{i=1}^{m} y_i}{mn}.$$

   Then the equation becomes

   $$-\frac{\log P(A)}{mn} = -q\,\log p - (1-q)\,\log(1-p).$$

   Use Pinsker's Inequality or calculus to show that $p = q$.

5. What is the value of $p$ for the dataset given in the table above?

6. If you toss 20 coins now, how many coins are most likely to yield a head?
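For parts 3 and 4, here is a worked sketch of the derivation, assuming each observation $a_i$ is an ordered sequence of $n$ independent tosses containing $y_i$ heads, so that $P(a_i) = p^{y_i}(1-p)^{n-y_i}$ (if you instead model the head count with a Binomial coefficient, the extra constant does not involve $p$ and leaves the minimizer unchanged):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
With $P(a_i) = p^{y_i}(1-p)^{n-y_i}$,
\begin{align*}
-\frac{\log P(A)}{mn}
  &= -\frac{1}{mn}\sum_{i=1}^{m}\log P(a_i)
   = -\frac{\sum_{i=1}^{m} y_i}{mn}\log p
     -\frac{\sum_{i=1}^{m}(n-y_i)}{mn}\log(1-p) \\
  &= -q\log p - (1-q)\log(1-p),
     \qquad q = \frac{\sum_{i=1}^{m} y_i}{mn}.
\end{align*}
Setting the derivative with respect to $p$ to zero,
\[
-\frac{q}{p} + \frac{1-q}{1-p} = 0
\;\Longrightarrow\; q(1-p) = p(1-q)
\;\Longrightarrow\; p = q,
\]
and the second derivative $q/p^2 + (1-q)/(1-p)^2 > 0$ confirms a minimum.
Alternatively, $-q\log p - (1-q)\log(1-p) = H(q) + D_{\mathrm{KL}}(q\,\|\,p)$,
where $H(q)$ is the binary entropy, and Pinsker's Inequality gives
$D_{\mathrm{KL}}(q\,\|\,p) \ge 2(p-q)^2$, so the objective exceeds $H(q)$
whenever $p \ne q$ and is minimized exactly at $p = q$.
\end{document}
```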
Expert Solution
Step 1: Write the given information.
Step 2: Determine the guess for the value of p and determine the maximum likelihood estimator.
Step 3: Determine the value of P(a_i) using the Binomial distribution.
Step 4: Use calculus to show that p = q by minimising the obtained function.
Step 5: Determine the value of p for the given dataset in the table.
Solved in 6 steps with 13 images. A short numerical sketch of these steps follows below.
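The full worked solution is paywalled, but the recipe in these steps is easy to check numerically. Below is a minimal Python sketch; the `rows` table is hypothetical stand-in data (the real table's entries are not recoverable from the garbled transcription above), and `scipy` is used only to cross-check the closed-form answer $p = q$:

```python
# Minimal numerical sketch of the MLE recipe above.
# NOTE: the H/T table below is HYPOTHETICAL stand-in data; the actual
# table's entries are garbled in the transcription, so substitute yours.
import numpy as np
from scipy.optimize import minimize_scalar

rows = [
    "THHHHTH",
    "THHTTHH",
    "THTHTHH",
    "TTTTHHT",
    "TTTHHTT",
    "TTHTHTT",
    "TTHHTTH",
]

# Encode heads as 1, tails as 0: A is the m x n observation matrix.
A = np.array([[1 if c == "H" else 0 for c in row] for row in rows])
m, n = A.shape

# Part 5: the closed-form MLE is q, the overall fraction of heads.
q = A.sum() / (m * n)

# Part 4 cross-check: minimize -log P(A) / (mn) = -q log p - (1-q) log(1-p)
# numerically and confirm the minimizer agrees with q.
def nll(p):
    return -(q * np.log(p) + (1 - q) * np.log(1 - p))

p_hat = minimize_scalar(nll, bounds=(1e-6, 1 - 1e-6), method="bounded").x
assert abs(p_hat - q) < 1e-4

# Part 6: among 20 fresh tosses, the most likely number of heads is the
# mode of Binomial(20, p), i.e. floor((20 + 1) * p); if (20 + 1) * p is an
# integer, that value and the one below it are both modes.
k_star = int(np.floor(21 * q))
print(f"q = {q:.4f}, numerical MLE = {p_hat:.4f}, "
      f"most likely heads in 20 tosses = {k_star}")
```

With this stand-in data the script prints q = 22/49 ≈ 0.449 and a most likely head count of 9; with the real table, q is simply (total heads) / (total tosses).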