Darya is working on an online learning application in which false negatives are costlier than false positives. Therefore, she proposes the following variant of the Halving algorithm for learning a concept from class C. As in the Halving algorithm, her algorithm maintains a version space. Let VS_t denote the version space at time-step t. Instead of taking the label chosen by a majority of the functions in VS_t (as the Halving algorithm does), Darya's algorithm first checks whether a strong majority of the functions in VS_t give the same label, where a strong majority means at least 70 percent. More precisely, if at least ⌈0.7|VS_t|⌉ functions in VS_t give a '+' label (respectively, a '-' label), Darya's algorithm predicts '+' (respectively, '-'). Otherwise, when there is no strong majority, Darya's algorithm always predicts '+'. For example, if there are |VS_t| = 100 functions in VS_t, of which 75 give a '-', then the algorithm predicts '-'; but if only 67 functions give a '-', the algorithm predicts '+'.

Use an analysis similar to the one we saw in class for the Halving algorithm to derive an upper bound on the expected number of mistakes made by Darya's algorithm. You need to give an asymptotic upper bound on the number of mistakes made by Darya's algorithm. Show your work.
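To make the prediction rule concrete, here is a minimal Python sketch of one round of Darya's variant. The function names and the representation of hypotheses as a list of callables are my own illustration, not part of the question; the update step is the standard version-space filtering used by the Halving algorithm.

```python
import math

def predict(version_space, x):
    """Darya's prediction rule on example x.

    version_space: list of hypotheses, each mapping x to '+' or '-'.
    Predicts '-' only when at least 70% of the surviving hypotheses
    (a "strong majority") vote '-'; in every other case -- a strong
    '+' majority or no strong majority at all -- it predicts '+'.
    """
    votes = [h(x) for h in version_space]
    threshold = math.ceil(0.7 * len(version_space))
    if votes.count('-') >= threshold:
        return '-'
    # Either '+' has a strong majority or no label does; both cases
    # yield '+', so checking the '-' count alone suffices.
    return '+'

def update(version_space, x, true_label):
    """Standard version-space update: keep only consistent hypotheses."""
    return [h for h in version_space if h(x) == true_label]
```

Running the two examples from the question: with 100 hypotheses of which 75 vote '-', the threshold is ⌈70⌉ = 70, so the algorithm predicts '-'; with only 67 voting '-', the threshold is not met and it predicts '+'.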
