Question
3. In the basic decision tree algorithm (assuming we always create binary splits), we choose the feature/value pair with the maximum information gain as the test to use at each internal node of the decision tree. Suppose we modified the algorithm to choose at random from among those feature/value combinations that had non-zero mutual information, and we kept all other parts of the algorithm unchanged.
(a) What is the maximum number of leaf nodes that such a decision tree could contain if it were
trained on m training examples?
(b) What is the maximum number of leaf nodes that a decision tree could contain if it were trained on m training examples using the original maximum mutual information version of the algorithm? Is it bigger, smaller, or the same as your answer to (a)?
(c) How do you think this change (using random splits vs. maximum mutual information splits) would affect the accuracy of the decision trees produced on average? Why?