KNN, decision tree: (Machine learning)
Please answer questions (b) and (c) specifically.
6. Consider a binary classification problem using 1-nearest neighbors with the Euclidean distance metric. We have N 1-dimensional training points x^(1), x^(2), ..., x^(N) and corresponding labels y^(1), y^(2), ..., y^(N), with x^(i) ∈ R and y^(i) ∈ {0, 1}. Assume the points x^(1), x^(2), ..., x^(N) are in ascending order by value. If there are ties during the 1-NN algorithm, we break ties by choosing the label corresponding to the x^(i) with the lower value.
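For concreteness, here is a minimal Python sketch of the 1-NN rule described above, including the stated tie-breaking toward the lower-valued point. The function name one_nn_1d and the toy training set are illustrative choices, not part of the original problem.

```python
import numpy as np

def one_nn_1d(train_x, train_y, query):
    """1-nearest-neighbor prediction for a 1-D query point.

    train_x is assumed sorted in ascending order, and distance ties are
    broken in favor of the training point with the lower value, as stated
    in the problem.
    """
    train_x = np.asarray(train_x, dtype=float)
    train_y = np.asarray(train_y)
    dists = np.abs(train_x - query)  # Euclidean distance in 1-D
    # np.argmin returns the first (lowest-index, hence lowest-value) minimizer,
    # which matches the stated tie-breaking rule because train_x is sorted.
    return train_y[np.argmin(dists)]

# Toy example: three sorted training points with binary labels.
x_train = [1.0, 2.0, 4.0]
y_train = [0, 1, 0]
print(one_nn_1d(x_train, y_train, 1.5))  # equidistant to 1.0 and 2.0 -> lower value wins -> 0
print(one_nn_1d(x_train, y_train, 3.5))  # closest to 4.0 -> 0
```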
(a) Is it possible to build a decision tree that behaves exactly the same as the 1-nearest-neighbor classifier? Assume that the decision at each node takes the form of "x ≤ t or x > t," where t ∈ R.
◯ Yes
◯ No
If your answer is yes, please explain how you will construct the decision tree. If your answer is no,
explain why it’s not possible.
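For part (a), one construction worth sketching places a threshold at the midpoint between each pair of consecutive (sorted) training points. The chain of "x ≤ t_i" tests below is a minimal illustration of that idea, assuming the setup above; the helper name midpoint_tree_predict and the searchsorted-based encoding of the chain are my own, not part of the problem.

```python
import numpy as np

def midpoint_tree_predict(train_x, train_y, query):
    """Prediction of a decision tree whose internal nodes test "x <= t_i",
    with thresholds t_i at the midpoints of consecutive sorted training points.

    This is a sketch of one candidate construction for part (a); the leaf
    reached between t_(i-1) and t_i is labeled with y^(i).
    """
    train_x = np.asarray(train_x, dtype=float)
    train_y = np.asarray(train_y)
    thresholds = (train_x[:-1] + train_x[1:]) / 2.0  # t_i = (x^(i) + x^(i+1)) / 2
    # Walking the chain of "x <= t_i" tests amounts to finding the first
    # threshold the query does not exceed; side='left' makes a query exactly
    # at t_i take the "<=" branch, i.e. the lower point's label.
    leaf = np.searchsorted(thresholds, query, side='left')
    return train_y[leaf]

# Same toy data as above: a query exactly at the midpoint 1.5 takes the "<="
# branch and gets label 0, mirroring the 1-NN tie-breaking rule.
print(midpoint_tree_predict([1.0, 2.0, 4.0], [0, 1, 0], 1.5))  # -> 0
```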
(b) Let's add a dimension! Now assume the training points are 2-dimensional, where x^(i) = (x_1^(i), x_2^(i)) ∈ R^2, and the decision at each node takes the form of "x_j ≤ t or x_j > t," where t ∈ R and j ∈ {1, 2}. Give an example with at most 3 training points for which it isn't possible to build a decision tree that behaves exactly the same as a 1-nearest-neighbor classifier.
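To probe part (b), it can help to print the 1-NN label over a grid for a small candidate configuration and ask whether the resulting boundary could be carved out by axis-aligned splits. The two points and labels below are a hypothetical configuration for exploration, not presented as the required answer.

```python
import numpy as np

# Hypothetical 2-D configuration: two differently-labeled points whose
# perpendicular-bisector boundary is diagonal rather than axis-aligned.
pts = np.array([[0.0, 0.0],
                [1.0, 1.0]])
labels = np.array([0, 1])

def one_nn_2d(query):
    """1-NN label for a 2-D query under Euclidean distance.
    np.argmin breaks any distance tie toward the earlier-listed point."""
    d = np.linalg.norm(pts - query, axis=1)
    return labels[np.argmin(d)]

# Print the label map on a coarse grid.  An axis-aligned tree can only carve
# the plane into rectangles, so a staircase of labels along the diagonal is a
# hint that the 1-NN boundary here is not axis-aligned.
xs = np.linspace(-0.5, 1.5, 9)
for x2 in xs[::-1]:
    print(''.join(str(one_nn_2d(np.array([x1, x2]))) for x1 in xs))
```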
(c) Assuming we have 2-dimensional training points x^(i) = (x_1^(i), x_2^(i)) ∈ R^2 and the decision at each node takes the form of "x_j ≤ t or x_j > t," where t ∈ R and j ∈ {1, 2}, under what conditions is it possible to build a decision tree that behaves exactly the same as a 1-nearest-neighbor classifier? Explain why it is possible to build the decision tree as stated in part (a) but, in general, not possible in part (b).
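As a rough empirical aid for part (c), one can fit scikit-learn's 1-NN and decision-tree classifiers on a candidate training set and measure how often they agree on a dense grid. This is only suggestive: grid agreement does not prove agreement on all of R^2, and scikit-learn's internal tie-breaking need not match the rule stated in the problem. The helper name grid_agreement and the example set X, y are hypothetical choices for exploration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def grid_agreement(X, y, lo=-1.0, hi=2.0, n=201):
    """Fraction of points on an n-by-n grid where a 1-NN classifier and an
    unrestricted-depth axis-aligned decision tree make the same prediction."""
    knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
    tree = DecisionTreeClassifier().fit(X, y)
    g = np.linspace(lo, hi, n)
    grid = np.array([[a, b] for a in g for b in g])
    return np.mean(knn.predict(grid) == tree.predict(grid))

# Hypothetical candidate set whose 1-NN regions happen to be axis-aligned.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
y = np.array([0, 1, 1])
print(grid_agreement(X, y))  # fraction of grid points where the two agree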