You do not need to implement the support vector machine yourself. Linear classifiers are, however, intrinsically binary, while this is a 10-way classification problem (the library handles the multi-class extension for you). You will train 10 binary one-vs-all SVMs to decide which of the ten categories a test instance belongs to: each classifier is trained to distinguish 'bird' vs. 'non-bird', 'cat' vs. 'non-cat', and so on.

On each test case, all 10 classifiers are evaluated and the most confident classifier "wins." For example, if the 'cat' classifier returns a score of -0.2 (where 0 lies on the decision boundary) and the 'bird' classifier returns a score of -0.3, with every other classifier even more negative, the test case is classified as 'cat', even though no classifier placed it on the positive side of its decision boundary. When training an SVM you have a free parameter C that controls how strongly the model is regularized. Accuracy is quite sensitive to C, so experiment with a range of values.

Hints:
To train and predict, use the SVM implementations in scikit-learn or OpenCV.

CODE:

from sklearn import neighbors
import numpy as np

np.random.seed(56)

##########--WRITE YOUR CODE HERE--##########
# The following steps are just for your reference;
# you can write it in your own way.
#
# # densely sample keypoints
# def sample_kp(shape, stride, size):
#     return kp
#
# # extract vocabulary of SIFT features
# def extract_vocabulary(raw_data, key_point):
#     return vocabulary
#
# # extract Bag of SIFT representation of images
# def extract_feat(raw_data, vocabulary, key_point):
#     return feat
#
# # sample dense keypoints
# skp = sample_kp((train_data[0].shape[0], train_data[0].shape[1]), (64, 64), 8)
# vocabulary = extract_vocabulary(train_data, skp)
# train_feat = extract_feat(train_data, vocabulary, skp)
# test_feat = extract_feat(test_data, vocabulary, skp)

train_feat =
test_feat =
##########-------END OF CODE-------##########
# This block should generate train_feat and test_feat
# corresponding to train_data and test_data.
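As a concrete reference for the template above, here is a minimal sketch of the three helper functions. It assumes the images are uint8 NumPy arrays, OpenCV 4.4+ (where SIFT is available as cv2.SIFT_create()), and a scikit-learn MiniBatchKMeans vocabulary; the vocabulary size k=64, the L1-normalized histograms, and the _sift_desc helper are illustrative choices, not requirements of the assignment.

import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def sample_kp(shape, stride, size):
    # Place a keypoint of diameter `size` every `stride` pixels on a regular grid.
    h, w = shape
    sy, sx = stride
    return [cv2.KeyPoint(float(x), float(y), float(size))
            for y in range(size, h - size, sy)
            for x in range(size, w - size, sx)]

def _sift_desc(img, key_point, sift):
    # SIFT expects an 8-bit grayscale image; convert color images first.
    gray = img if img.ndim == 2 else cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, desc = sift.compute(gray.astype(np.uint8), key_point)
    return desc

def extract_vocabulary(raw_data, key_point, k=64):
    # Cluster SIFT descriptors from all training images into k visual words.
    sift = cv2.SIFT_create()
    desc = np.vstack([_sift_desc(img, key_point, sift) for img in raw_data])
    return MiniBatchKMeans(n_clusters=k, random_state=56).fit(desc)

def extract_feat(raw_data, vocabulary, key_point):
    # Represent each image as an L1-normalized histogram of visual words.
    sift = cv2.SIFT_create()
    k = vocabulary.n_clusters
    feat = np.zeros((len(raw_data), k), dtype=np.float32)
    for i, img in enumerate(raw_data):
        words = vocabulary.predict(_sift_desc(img, key_point, sift))
        hist = np.bincount(words, minlength=k).astype(np.float32)
        feat[i] = hist / (hist.sum() + 1e-8)  # guard against empty histograms
    return feat

With these definitions, the commented lines in the template run as written and fill in train_feat and test_feat.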
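For the classification stage, the sketch below uses scikit-learn's LinearSVC, which trains one-vs-rest binary SVMs internally; its decision_function returns one score per class, so taking the argmax reproduces the "most confident classifier wins" rule described above. The variable names train_label and test_label and the grid of C values are assumptions for illustration.

import numpy as np
from sklearn.svm import LinearSVC

# Accuracy is sensitive to C, so sweep several orders of magnitude.
for C in [1e-3, 1e-2, 1e-1, 1, 10, 100]:
    clf = LinearSVC(C=C)                       # one-vs-rest for multiclass labels
    clf.fit(train_feat, train_label)
    scores = clf.decision_function(test_feat)  # shape (n_test, 10): one score per class
    pred = clf.classes_[np.argmax(scores, axis=1)]  # equivalent to clf.predict(test_feat)
    acc = np.mean(pred == test_label)
    print(f"C={C:g}  accuracy={acc:.3f}")

In practice, choose C on a held-out validation split rather than on the test set, and refit on the full training data once C is fixed.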