PS#10
California Lutheran University, IDS575 (Computer Science)
Apr 3, 2024, 8 pages (pdf)

Q1 Mixture of Gaussians (45 Points)

Q1.1 (8 Points) Select all statements about GMM that are correct.
- The Gaussian mixture model is supervised.
- The number of Gaussian distribution functions used is equal to the number of clusters.
- The Gaussian mixture model is equivalent to Quadratic Discriminant Analysis.
- The Σ of a GMM can be decomposed similarly to PCA.

Q1.2 (8 Points) Choose all of the following that correctly describe the differences between GMM and k-means.
- k-means often gets stuck in a local minimum, while GMM with EM tends not to.
- GMM is better at capturing clusters of different sizes and orientations.
- GMM is better at capturing clusters with overlaps.
- GMM is less prone to overfitting.
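As a side note on the comparison in Q1.2, here is a minimal sketch, assuming scikit-learn is available, that fits both models to two anisotropic clusters of different sizes; the data generation and names are illustrative only:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Two elongated (anisotropic) clusters of different sizes and orientations.
a = rng.normal(size=(300, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
b = rng.normal(size=(100, 2)) @ np.array([[0.3, 0.0], [0.0, 3.0]]) + [4, 4]
X = np.vstack([a, b])
y = np.array([0] * 300 + [1] * 100)  # true cluster labels

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X).predict(X)

# GMM's full covariances can capture orientation and size differences
# that k-means' implicit spherical-cluster assumption cannot.
print("k-means ARI:", adjusted_rand_score(y, km))
print("GMM ARI:   ", adjusted_rand_score(y, gmm))
```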
Q1.3 (15 Points) Suppose we have a Gaussian mixture model for 3-dimensional data, and we use full covariance matrices. How many parameters are in the model if we set the number of clusters to 4?
Answer: 39
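The count in Q1.3 works out as follows; a quick arithmetic sketch (d = 3 dimensions, K = 4 clusters, full covariances):

```python
d, K = 3, 4
means = K * d                  # 4 mean vectors of dimension 3     -> 12
covs = K * d * (d + 1) // 2    # 4 symmetric 3x3 covariance mats   -> 24
weights = K - 1                # mixing weights constrained to sum to 1 -> 3
print(means + covs + weights)  # 39
```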
Q1.4 (7 Points) Which of the following contour plots describes a Gaussian distribution with diagonal covariance? Select all that apply. [Four contour plots, A through D, not included in the preview.]

Q1.5 (7 Points) The following figure shows four clusterings of points into 2 clusters. For which of them does applying GMM work better than k-means? Select all that are correct. [Panels A through D not included in the preview.]
Q2 Expectation and Maximization (55 Points)

Q2.1 (7 Points) The EM algorithm optimizes a lower bound on its objective function, which is the marginal likelihood P(x_i) of the observed data points x_1, x_2, ..., x_N.
- True
- False
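For context on Q2.1: the lower bound in question is the standard evidence lower bound, obtained via Jensen's inequality for any distribution q(z) over the latent assignments:

```latex
\log p(x) \;=\; \log \sum_{z} p(x, z)
          \;\ge\; \sum_{z} q(z)\,\log \frac{p(x, z)}{q(z)}
```

The E step tightens this bound by setting q(z) = p(z | x; θ), and the M step maximizes the bound over θ.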
Q2.2 (7 Points) In the EM algorithm, what are the E step and M step?
A. Estimate cluster responsibilities, Maximize likelihood over parameters
B. Estimate likelihood over parameters, Maximize cluster responsibilities
C. Estimate number of parameters, Maximize likelihood over parameters
D. Estimate likelihood over parameters, Maximize number of parameters

Q2.3 (7 Points) The EM algorithm is guaranteed to never decrease the value of its objective function on any iteration.
- True
- False

Q2.4 (7 Points) Suppose we have data that come from a mixture of 6 Gaussians. Which model would we expect to have the highest log-likelihood after fitting via the EM algorithm?
- a mixture of Gaussians with 2 components
- a mixture of Gaussians with 4 components
- a mixture of Gaussians with 6 components
- a mixture of Gaussians with 8 components
- a mixture of Gaussians with 10 components
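To see the intuition behind Q2.4 empirically, a minimal sketch, again assuming scikit-learn (the synthetic data here is illustrative): the training log-likelihood typically keeps rising as components are added, since extra components can only improve the fit to the training data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Draw data from a true mixture of 6 well-separated 1-D Gaussians.
centers = np.array([0, 10, 20, 30, 40, 50])
X = (rng.choice(centers, size=600)[:, None]
     + rng.normal(scale=1.0, size=(600, 1)))

for k in [2, 4, 6, 8, 10]:
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    print(k, gm.score(X))  # average per-sample log-likelihood on training data
```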
Q2.5 (7 Points) The EM algorithm can only be used for parameter estimation of mixture models.
- True
- False

Q2.6 (20 Points) Suppose that we are fitting a Gaussian mixture model for data consisting of a single real value x, using K = 2 components. We have m = 5 training cases, in which the values of x(1) through x(5) are 5, 15, 25, 30, 40. We use the EM algorithm to find the maximum likelihood estimates for the model parameters ϕ_1, ϕ_2, μ_1, and μ_2. The standard deviations for the two components are fixed at 10. Suppose that at some point in the EM algorithm, the E step found that the weights of the two components for the five data samples were as follows: [weight table not included in the preview]. What values for the parameters ϕ_1, ϕ_2, μ_1, and μ_2 will be found in the next M step of the algorithm? (Submitted solution: Q2.6.pdf)
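For reference on Q2.6: writing w_k^{(i)} for the E-step weight (responsibility) of component k on sample i, the standard M-step updates for this model, with the standard deviations held fixed, are:

```latex
\phi_k \;=\; \frac{1}{m}\sum_{i=1}^{m} w_k^{(i)},
\qquad
\mu_k \;=\; \frac{\sum_{i=1}^{m} w_k^{(i)}\, x^{(i)}}{\sum_{i=1}^{m} w_k^{(i)}}
```

Plugging the omitted weight table and the data values 5, 15, 25, 30, 40 into these formulas yields the values submitted in Q2.6.pdf.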
GRADED Problem Set (PS) #10
Student: Urvashiben Patel
Total points: 100 / 100 pts

Question 1, Mixture of Gaussians: 45 / 45 pts (1.1: 8/8, 1.2: 8/8, 1.3: 15/15, 1.4: 7/7, 1.5: 7/7)
Question 2, Expectation and Maximization: 55 / 55 pts (2.1: 7/7, 2.2: 7/7, 2.3: 7/7, 2.4: 7/7, 2.5: 7/7, 2.6: 20/20)