CYBR7240 Assignment 3_whudso21

School: Kennesaw State University
Course: 7240
Subject: Computer Science
Date: Jan 9, 2024
Type: docx
Pages: 10
Uploaded by BaronKnowledge10070
William Hudson CYBR7240 Assignment 3

(5 Points) When presented with a dataset, it is usually a good idea to visualise it first. Go to the Visualise tab. Click on any of the scatter plots to open a new window which shows the scatter plot for two selected attributes. Try visualising a scatter plot of age and duration. Do you notice anything unusual? You can click on any data point to display all of its values.

There is one outlier in the bottom left of the graph. Information about the outlier is shown in the screenshot.
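The same check can be done programmatically. Below is a minimal sketch in Python; the records and their values are illustrative stand-ins, not rows from the actual credit dataset.

```python
# Hedged sketch: Weka's Visualise tab finds this point interactively;
# the records below are invented for illustration only.
records = [
    {"age": 35, "duration": 24},
    {"age": 49, "duration": 12},
    {"age": -5, "duration": -2},   # corrupted point in the bottom left
    {"age": 28, "duration": 36},
]

def find_corrupted(rows):
    """Flag instances whose age or duration is physically impossible (< 0)."""
    return [r for r in rows if r["age"] < 0 or r["duration"] < 0]

print(find_corrupted(records))  # only the corrupted instance is returned
```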
(5 Points) In the previous point you should have found a data point which seems to be corrupted, as some of its values are nonsensical. Even a single point like this can significantly affect the performance of a classifier. How do you think it would affect decision trees? A good way to check this is to test the performance of each classifier before and after removing this data point.

The corrupted point skews the visualisation: because several of its values are far lower than those of the "normal" data, most of the dataset is pushed toward the right side of the graph.
(10 Points) To remove this instance from the dataset we will use a filter. We want to remove all instances where the age of an applicant is lower than 0 years, as this suggests that the instance is corrupted. In the Preprocess tab click on Choose in the Filter pane. Select filters > unsupervised > instance > RemoveWithValues. Click on the text of this filter to change the parameters. Set the attribute index to 13 (Age) and set the split point at 0. Click OK to set the parameters and Apply to apply the filter to the data. Visualise the data again to verify that the invalid data point was removed.
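Outside the GUI, the effect of RemoveWithValues on a numeric attribute can be approximated in a few lines of Python. This is a simplification: the real filter also handles nominal attributes, inversion, and missing values, none of which are modeled here.

```python
def remove_with_values(rows, attribute, split_point=0.0):
    """Rough analogue of Weka's RemoveWithValues on a numeric attribute:
    keep only instances whose value is at or above the split point."""
    return [r for r in rows if r[attribute] >= split_point]

# Illustrative data, not the real dataset
data = [{"age": 35}, {"age": -5}, {"age": 49}]
cleaned = remove_with_values(data, "age", split_point=0.0)
print(len(cleaned))  # the instance with age < 0 is gone
```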
(20 Points) On the Classify tab, select the Percentage split test option and change its value to 90%. This way, we will train the classifiers using 90% of the training data and evaluate their performance on the remaining 10%. First, train a decision tree classifier with default options. Select classifiers > trees > J48 and click Start. J48 is the Weka implementation of the C4.5 algorithm, which uses the normalized information gain criterion to build a decision tree for classification.
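For readers without Weka, the same experiment can be sketched with scikit-learn. This is an analogy, not a port: scikit-learn's DecisionTreeClassifier implements CART, and criterion="entropy" only approximates C4.5's information-gain criterion; the dataset below is synthetic, not the credit data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the credit dataset
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# 90% percentage split, as set on the Classify tab
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10, random_state=0)

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X_tr, y_tr)
accuracy = tree.score(X_te, y_te)
print(round(accuracy, 2))
```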
(20 Points) After training the classifier, the full decision tree is output for your perusal; you may need to scroll up for this. The tree may also be viewed in graphical form by right-clicking in the Result list and selecting Visualize tree; unfortunately this format is very cluttered for large trees. Such a tree accentuates one of the strengths of decision tree algorithms: they produce classifiers which are understandable to humans. This can be an important asset in real-life applications (people are seldom prepared to do what a computer program tells them if there is no clear explanation). Observe the output of the classifier and try to answer the following questions:

o How would you assess the performance of the classifier? Is the percentage of Correctly Classified Instances a sufficient measure in this case? Why? Hint: check the number of good and bad cases in the test sample, using the confusion matrix. Each column of the matrix represents the instances in a predicted class, while each row represents the instances in an actual class. For example, let us define an experiment with P positive instances and N negative instances. The four outcomes can be formulated in a 2-by-2 contingency table, or confusion matrix. One benefit of a confusion matrix is that it is easy to see whether the system is confusing two classes (i.e. commonly mislabeling one as another).

The J48 output shows 63 true positives (correctly approved for financing), 11 false negatives (incorrectly denied financing), 15 false positives (incorrectly approved for financing) and 11 true negatives. This means that 15 people were given loans who should not have been, and 11 were rejected for loans although they should have qualified.
The 63 true positives and 11 true negatives drive the 74% correctly-classified rate, and I would say this is not sufficient: 15 loan defaults per period would not be good business practice, especially when 11 people who would have been good candidates were denied loans.

o Looking at the decision tree itself, are the rules it applies sensible? Are there any branches which appear absurd? At what depth of the tree? What does this suggest? Hint: Check the rules applied after following the paths: (a) CheckingAccount = <0, Foreign = yes, Duration >11, Job = skilled, OtherDebtors = none, Duration <= 30 and (b) CheckingAccount = <0, Foreign = yes, Duration >11, Job = unskilled.

The rules seem fairly sensible, but looking at the branch beginning with CheckingAccount = <0, it appears that if you are a foreign applicant, significantly more factors go into the decision, whereas if you are not foreign, the application is rated "good" immediately.
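The figures above can be verified with a little arithmetic; the counts are the ones read off the J48 confusion matrix in the answer.

```python
# Counts from the J48 confusion matrix discussed above
TP, FN, FP, TN = 63, 11, 15, 11  # good/good, good/bad, bad/good, bad/bad

accuracy = (TP + TN) / (TP + FN + FP + TN)   # (63 + 11) / 100
recall = TP / (TP + FN)                      # good applicants correctly approved
false_positive_rate = FP / (FP + TN)         # bad applicants marked good

print(accuracy)                       # 0.74 -- the 74% figure
print(round(false_positive_rate, 2))  # 0.58 -- over half the bad cases slip through
```

The false-positive rate makes the answer's point concrete: a 74% accuracy hides the fact that more than half of the truly bad applicants are approved.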
o How does the decision tree deal with classification in the case where there are zero instances in the training set corresponding to that particular path in the tree (e.g. those leaf nodes that have (0:0))?

For paths with zero training instances, the leaf was classified as "good".
(20 Points) Now, explore the effect of the confidenceFactor option. You can find this by clicking on the classifier name (to the right of the Choose button on the Classify tab). On the Classifier options window, click on the More button to find out what the confidence factor controls. Try the values 0.1, 0.2, 0.3 and 0.5. What is the performance of the classifier in each case? Did you expect this given your observations in the previous questions? Why do you think this happens?

The performance in each case was as follows:
confidenceFactor 0.1 – 69% correct
confidenceFactor 0.2 – 71% correct
confidenceFactor 0.3 – 77% correct
confidenceFactor 0.5 – 77% correct

This seems expected: lower confidence factors prune the branches more aggressively, and the heavier pruning produced more errors. As the confidenceFactor rose and the trees grew, more instances were correctly classified. I believe this happens because the more factors that are used in the decision-making process, the more likely the tree is to reach the right decision, hence the higher correctly-classified rate as the confidenceFactor goes up.
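scikit-learn has no confidenceFactor, but its cost-complexity pruning parameter ccp_alpha plays a comparable role, with the direction reversed: a larger ccp_alpha means more pruning, whereas a larger confidenceFactor means less. A hedged sketch of the same sweep on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10, random_state=0)

# Sweep the pruning strength, analogous to trying several confidenceFactor values
scores = {}
for alpha in (0.0, 0.005, 0.01, 0.02):
    clf = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
    clf.fit(X_tr, y_tr)
    scores[alpha] = clf.score(X_te, y_te)

for alpha, acc in scores.items():
    print(alpha, round(acc, 2))
```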
(20 Points) Suppose that it is worse to classify a customer as good when they are bad than it is to classify a customer as bad when they are good. Which value would you pick for the confidence factor? Which performance measure would you base your decision on? I would choose confidenceFactor 0.5, because its false positive count of 9 is much lower than that of any other tested confidenceFactor value. That would mean only 9 people were approved for financing when they should not have been (marked good, actually bad).
(Bonus: 20 Points) Finally we will create a random decision forest and compare the performance of this classifier to that of the decision tree and the decision stump. The random decision forest is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. Again set the test option Percentage split to 90%. Select classifiers > trees > RandomForest and hit Start. Again, observe the output. How high can you get the performance of the classifier by changing the number of trees (numTrees) parameter? How does the random decision forest compare, performance-wise, to the decision tree and decision stump?

When I ran the RandomForest with a numTrees value above 200, the classification remained stable at between 76% and 79% correctly classified. I lowered numTrees to 1 and used a random seed number and got 69% on the first run, 62% on the 2nd run, and 67% on run 3 – highly varied.
With fewer trees there is less averaging over the randomized trees, so the performance is lower and less consistent. As the numTrees count rises, the performance level rises and becomes more consistent. I did attempt to run this test with numTrees set to 100000, but Weka crashed.
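The same sweep over the number of trees can be sketched with scikit-learn's RandomForestClassifier, whose n_estimators parameter corresponds to Weka's numTrees. As before, the data here is synthetic, so the exact percentages will not match the write-up.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10, random_state=0)

# Sweep the forest size, analogous to changing numTrees in Weka
scores = {}
for n in (1, 10, 100, 200):
    rf = RandomForestClassifier(n_estimators=n, random_state=0)
    rf.fit(X_tr, y_tr)
    scores[n] = rf.score(X_te, y_te)

for n, acc in scores.items():
    print(n, round(acc, 2))
```

With a fixed random_state the single-tree forest is deterministic; varying the seed, as the write-up did, reproduces the run-to-run variance it reports.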