Computer Vision Assignment 4 - Classification, Fall 2023
Meet Sakariya - 14473322

1 (Theory Questions)

1. Given the following image of pixel intensity values

I = [ 1 2 0
      0 1 3 ]

(a) Perform k-means clustering with k = 2. Your initial reference vectors will be a1 = [1] and a2 = [2]. You will be "running" the k-means algorithm manually. Until the cluster assignments don't change:
i. Assign each observation (pixel) to the reference vector it is closest to, using the Euclidean distance on the pixel value.
ii. Update the reference vectors to be the means of their members.
Show the value of the reference vectors, and the cluster assignments, for each iteration you must perform (10 pts).

Solution: Given the image of pixel intensity values

I = [ 1 2 0
      0 1 3 ]

we start with the initial reference vectors a1 = [1] and a2 = [2]. We iteratively assign each pixel to the closest reference vector using the Euclidean distance on pixel values, then update each reference vector to be the mean of its members, repeating until the cluster assignments no longer change.

Iteration 1:
Reference vectors: a1 = [1], a2 = [2]
Cluster assignments:
For pixel I(1,1) = 1: Distance to a1: |1 − 1| = 0. Distance to a2: |1 − 2| = 1. Assign this pixel to cluster 1 (closest to a1).
For pixel I(1,2) = 2: Distance to a1: |2 − 1| = 1. Distance to a2: |2 − 2| = 0. Assign this pixel to cluster 2 (closest to a2).
For pixel I(1,3) = 0: Distance to a1: |0 − 1| = 1. Distance to a2: |0 − 2| = 2. Assign this pixel to cluster 1 (closest to a1).
For pixel I(2,1) = 0: Distance to a1: |0 − 1| = 1. Distance to a2: |0 − 2| = 2. Assign this pixel to cluster 1 (closest to a1).
For pixel I(2,2) = 1: Distance to a1: |1 − 1| = 0. Distance to a2: |1 − 2| = 1. Assign this pixel to cluster 1 (closest to a1).
For pixel I(2,3) = 3: Distance to a1: |3 − 1| = 2. Distance to a2: |3 − 2| = 1. Assign this pixel to cluster 2 (closest to a2).
Updated reference vectors: a1 = [(1 + 0 + 0 + 1)/4] = [0.5] (mean of cluster 1), a2 = [(2 + 3)/2] = [2.5] (mean of cluster 2).
The assignments were just made for the first time, so we continue.

Iteration 2:
Reference vectors: a1 = [0.5], a2 = [2.5]
Cluster assignments:
For pixel I(1,1) = 1: Distance to a1: |1 − 0.5| = 0.5. Distance to a2: |1 − 2.5| = 1.5. Assign this pixel to cluster 1 (closest to a1).
For pixel I(1,2) = 2: Distance to a1: |2 − 0.5| = 1.5. Distance to a2: |2 − 2.5| = 0.5. Assign this pixel to cluster 2 (closest to a2).
For pixel I(1,3) = 0: Distance to a1: |0 − 0.5| = 0.5. Distance to a2: |0 − 2.5| = 2.5. Assign this pixel to cluster 1 (closest to a1).
For pixel I(2,1) = 0: Distance to a1: |0 − 0.5| = 0.5. Distance to a2: |0 − 2.5| = 2.5. Assign this pixel to cluster 1 (closest to a1).
For pixel I(2,2) = 1: Distance to a1: |1 − 0.5| = 0.5. Distance to a2: |1 − 2.5| = 1.5. Assign this pixel to cluster 1 (closest to a1).
For pixel I(2,3) = 3: Distance to a1: |3 − 0.5| = 2.5. Distance to a2: |3 − 2.5| = 0.5. Assign this pixel to cluster 2 (closest to a2).
Updated reference vectors: a1 = [(1 + 0 + 0 + 1)/4] = [0.5], a2 = [(2 + 3)/2] = [2.5].
The cluster assignments are identical to those of Iteration 1 (and the reference vectors are unchanged), so the k-means algorithm stops here.
Final reference vectors: a1 = [0.5], a2 = [2.5]
Final cluster assignments:
Pixel I(1,1) = 1 is in cluster 1
Pixel I(1,2) = 2 is in cluster 2
Pixel I(1,3) = 0 is in cluster 1
Pixel I(2,1) = 0 is in cluster 1
Pixel I(2,2) = 1 is in cluster 1
Pixel I(2,3) = 3 is in cluster 2
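This manual run can be double-checked with a short script. The following is an illustrative sketch (it assumes NumPy is available; it is not part of the original submission):

import numpy as np

# Pixel intensities of the 2x3 image, flattened in row-major order.
pixels = np.array([1, 2, 0, 0, 1, 3], dtype=float)
refs = np.array([1.0, 2.0])            # initial reference vectors a1, a2

while True:
    # Step i: assign each pixel to the nearest reference vector (1-D Euclidean distance).
    assign = np.argmin(np.abs(pixels[:, None] - refs[None, :]), axis=1)
    # Step ii: update each reference vector to the mean of its members.
    new_refs = np.array([pixels[assign == k].mean() for k in range(2)])
    if np.allclose(new_refs, refs):    # nothing changed, so the assignments are stable
        break
    refs = new_refs

print(refs)    # [0.5 2.5]
print(assign)  # [0 1 0 0 0 1], i.e. cluster 1 / cluster 2 as above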
(b) Using the same grayscale image above, compute the weights between the pixels for a fully-connected, undirected graph representation. You can either show this as a drawing of the graph (in which case a hand-drawn visualization, inserted into your PDF, with the weights on the edges is fine) or provide the weight matrix (the upper diagonal of it is fine, since it will be symmetric). For the weights of the edges connecting pixels, we'll use a combination of their value and location. For pixel a, let a_i be its value/intensity and (a_x, a_y) be its location. We can then compute the similarity/weight between pixels a and b as:

w(a, b) = e^{-((a_i - b_i)^2 + (a_x - b_x)^2 + (a_y - b_y)^2)}

In addition, consider a pixel not to be connected to itself (weight = 0). You should leave your weights in terms of e (10 pts).

Solution: Given the image of pixel intensity values

I = [ 1 2 0
      0 1 3 ]

label the pixels in row-major order: a1 = I(1,1) = 1, a2 = I(1,2) = 2, a3 = I(1,3) = 0, a4 = I(2,1) = 0, a5 = I(2,2) = 1, a6 = I(2,3) = 3. The weight between pixels a and b is

w(a, b) = e^{-((a_i - b_i)^2 + (a_x - b_x)^2 + (a_y - b_y)^2)}

where a_i, b_i are the intensities and (a_x, a_y), (b_x, b_y) are the (row, column) locations. For example, w(a1, a2) = e^{-((1-2)^2 + (1-1)^2 + (1-2)^2)} = e^{-2}. The matrix is symmetric with a zero diagonal, so only the upper triangle is strictly needed. In terms of e, the full weight matrix is

        a1      a2      a3      a4      a5      a6
  a1 [  0      e^-2    e^-5    e^-2    e^-2    e^-9  ]
  a2 [ e^-2    0       e^-5    e^-6    e^-2    e^-3  ]
  a3 [ e^-5    e^-5    0       e^-5    e^-3    e^-10 ]
  a4 [ e^-2    e^-6    e^-5    0       e^-2    e^-13 ]
  a5 [ e^-2    e^-2    e^-3    e^-2    0       e^-5  ]
  a6 [ e^-9    e^-3    e^-10   e^-13   e^-5    0     ]

or, numerically,

        a1      a2      a3      a4      a5      a6
  a1 [ 0       0.1353  0.0067  0.1353  0.1353  0.0001 ]
  a2 [ 0.1353  0       0.0067  0.0025  0.1353  0.0498 ]
  a3 [ 0.0067  0.0067  0       0.0067  0.0498  0.0000 ]
  a4 [ 0.1353  0.0025  0.0067  0       0.1353  0.0000 ]
  a5 [ 0.1353  0.1353  0.0498  0.1353  0       0.0067 ]
  a6 [ 0.0001  0.0498  0.0000  0.0000  0.0067  0      ]
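The same matrix can be generated programmatically as a sanity check. A minimal sketch (NumPy assumed; the variable names are illustrative):

import numpy as np

I = np.array([[1, 2, 0],
              [0, 1, 3]], dtype=float)

# One graph node per pixel: (intensity, row, column), in row-major order a1..a6.
nodes = [(I[r, c], r, c) for r in range(I.shape[0]) for c in range(I.shape[1])]

n = len(nodes)
W = np.zeros((n, n))
for p in range(n):
    for q in range(n):
        if p == q:
            continue                     # a pixel is not connected to itself
        (ai, ax, ay), (bi, bx, by) = nodes[p], nodes[q]
        # squared differences in intensity and in location
        W[p, q] = np.exp(-((ai - bi) ** 2 + (ax - bx) ** 2 + (ay - by) ** 2))

print(np.round(W, 4))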
(c) Find the minimum non-trivial graph cut using the matrix formulation shown in class. You may (and should) use a function like svd to do the eigen-decomposition for you. You'll likely need to read its documentation to understand how the inputs and outputs work. Show the intermediate matrices needed for your eigen-decomposition, namely D and W, and what the chosen eigenvalue/vector pair is. Finally, draw your new (cut) graph (and include that image) and/or just tell us which pixels belong to which groups (10 pts).

Solution: Using the weight matrix W from part (b), with pixels a1..a6 labelled in row-major order, the degree matrix D is diagonal, with each diagonal entry equal to the sum of the corresponding row of W:

D = diag(0.4127, 0.3296, 0.0699, 0.2798, 0.4624, 0.0567)

The Laplacian matrix is L = D − W:

      [  0.4127  -0.1353  -0.0067  -0.1353  -0.1353  -0.0001 ]
      [ -0.1353   0.3296  -0.0067  -0.0025  -0.1353  -0.0498 ]
  L = [ -0.0067  -0.0067   0.0699  -0.0067  -0.0498  -0.0000 ]
      [ -0.1353  -0.0025  -0.0067   0.2798  -0.1353  -0.0000 ]
      [ -0.1353  -0.1353  -0.0498  -0.1353   0.4624  -0.0067 ]
      [ -0.0001  -0.0498  -0.0000  -0.0000  -0.0067   0.0567 ]

The eigen-decomposition of L gives eigenvalues of approximately 0 (1.33e-16 numerically), 0.0557, 0.09, 0.31, 0.55, and 0.59.
Chosen eigenvalue/vector pair: the second-smallest (first non-trivial) eigenvalue λ2 ≈ 0.0557, whose eigenvector is, up to scale,

v ≈ [ -0.12, 0.04, -0.61, -0.17, -0.14, 1.00 ]

for pixels a1..a6. Splitting the graph by the sign of this eigenvector gives the cut: pixels with positive components form group 2 and the rest form group 1.

Image of group 1 (v < 0):
I = [ 1 0 1
      1 1 0 ]

Image of group 2 (v > 0):
I = [ 0 1 0
      0 0 1 ]

That is, pixels I(1,2) = 2 and I(2,3) = 3 belong to group 2, while pixels I(1,1), I(1,3), I(2,1), and I(2,2) belong to group 1, which matches the k-means clustering from part (a).
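A short script reproduces D, L, and the cut. This is an illustrative sketch (NumPy assumed; numpy.linalg.eigh is used here rather than svd because L is symmetric, which the assignment's wording allows):

import numpy as np

# Rebuild W exactly as in part (b), pixels a1..a6 in row-major order.
I = np.array([[1, 2, 0],
              [0, 1, 3]], dtype=float)
nodes = [(I[r, c], r, c) for r in range(2) for c in range(3)]
W = np.zeros((6, 6))
for p in range(6):
    for q in range(6):
        if p != q:
            (ai, ax, ay), (bi, bx, by) = nodes[p], nodes[q]
            W[p, q] = np.exp(-((ai - bi) ** 2 + (ax - bx) ** 2 + (ay - by) ** 2))

D = np.diag(W.sum(axis=1))        # degree matrix
L = D - W                         # unnormalized graph Laplacian

vals, vecs = np.linalg.eigh(L)    # eigenvalues returned in ascending order
fiedler_val = vals[1]             # second-smallest eigenvalue (~0.0557)
fiedler_vec = vecs[:, 1]          # its eigenvector (sign is arbitrary up to a global flip)

groups = (fiedler_vec > 0).astype(int)
print(round(float(fiedler_val), 4))
print(groups.reshape(2, 3))       # binary group mask over the 2x3 image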
2 Classifying an Image using Grayscale Histograms

For each image in your dataset, compute a grayscale histogram with 256 bins and extract a class label from the first three characters of the file name (neg, pos). For simplicity you may want to enumerate these class labels.

Now we want to make our training and validation sets. First set the random number generator's seed to zero so that you can have reproducible results. Next shuffle the data and select 2/3 of it for training and the remaining for validation.

Now that we have our datasets created, let's classify! Go through each validation sample's histogram and classify it as car or not-car using a k-nearest neighbors approach, where k = 5. For our similarity metric, we'll use the histogram intersection (where D is the number of bins):

sim(a, b) = Σ_{j=1}^{D} min(a_j, b_j)

Note that histogram intersection is a similarity measurement, so when implementing KNN you'll want to find the K most similar observations (as opposed to the K nearest).

For your report, compute the accuracy as the percentage of the validation images classified correctly. In addition show:
1. One image correctly labeled as a car
2. One image correctly labeled as not a car
3. One image incorrectly labeled as a car
4. One image incorrectly labeled as not a car

Note: It may take a while to traverse this directory and make your data matrices. Therefore, you may consider saving your representations and labels in some file and reading them in as needed.

Solution: Accuracy: 65.43%
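The pipeline behind this result can be sketched as follows. This is an illustrative reimplementation, not the submitted code: the folder name car_data, the .png extension, and the use of Pillow for image loading are assumptions.

import glob
import os
import numpy as np
from PIL import Image

def load_dataset(folder):
    # 256-bin grayscale histogram per image; label 1 = car ("pos"), 0 = not-car ("neg").
    feats, labels = [], []
    for path in sorted(glob.glob(os.path.join(folder, "*.png"))):
        img = np.array(Image.open(path).convert("L"))
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        feats.append(hist.astype(float))
        labels.append(1 if os.path.basename(path)[:3] == "pos" else 0)
    return np.array(feats), np.array(labels)

def knn_predict(train_X, train_y, x, k=5):
    # Histogram intersection: sum of bin-wise minima. It is a similarity,
    # so the k *largest* scores are the k most similar training samples.
    sims = np.minimum(train_X, x).sum(axis=1)
    top_k = np.argsort(sims)[-k:]
    return np.bincount(train_y[top_k]).argmax()

X, y = load_dataset("car_data")
np.random.seed(0)                        # reproducible split
idx = np.random.permutation(len(y))
split = int(2 * len(y) / 3)
tr, va = idx[:split], idx[split:]

preds = np.array([knn_predict(X[tr], y[tr], X[v]) for v in va])
print("Accuracy: %.2f%%" % (100.0 * np.mean(preds == y[va])))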
3 Classifying an Image using Gists

Next, let's attempt classification using local gradient information. Divide your grayscale images into 10 non-overlapping 20 × 20 sub-images, computing an 8-bin HOG for each sub-image. Concatenate these ten 8-bin HOGs to form an 80-feature representation of the image.

Now repeat your classification (kNN with k = 5) and evaluation as in the previous part, but using this representation. Note that now your histogram intersection similarity metric will just sum over 80 bins.

Solution: Accuracy: 90.29%
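Only the feature extractor changes relative to the previous part. Below is a sketch of the 80-dimensional gist descriptor; it assumes 40 × 100 images (so a 2 × 5 grid of 20 × 20 blocks yields ten sub-images) and uses a simple magnitude-weighted, unsigned 8-bin orientation histogram as a stand-in for whatever HOG routine was actually used.

import numpy as np

def gist_descriptor(img, block=20, bins=8):
    # Concatenate an 8-bin gradient-orientation histogram from each 20x20 block.
    img = img.astype(float)
    gy, gx = np.gradient(img)                   # image gradients (rows, columns)
    mag = np.hypot(gx, gy)                      # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # unsigned orientation in [0, pi)

    feats = []
    rows, cols = img.shape
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            m = mag[r:r + block, c:c + block].ravel()
            a = ang[r:r + block, c:c + block].ravel()
            # Magnitude-weighted orientation histogram: 8 bins per sub-image.
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist)
    return np.concatenate(feats)                # 10 blocks x 8 bins = 80 features

# Swap this descriptor in for the 256-bin grayscale histogram and rerun the same
# k = 5 nearest-neighbor classification with histogram intersection.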