CS 188 Spring 2023 Final Review: HMMs

Q1. HMMs

Consider a process with transitions among a finite set of states s_1, ..., s_k over time steps i = 1, ..., N. Let the random variables X_1, ..., X_N represent the state of the system at each time step, generated as follows:

Sample the initial state s from an initial distribution P_1(X_1), set i = 1, and repeat the following:

1. Sample a duration d from a duration distribution P_D over the integers {1, ..., M}, where M is the maximum duration.
2. Remain in the current state s for the next d time steps, i.e., set

       x_i = x_{i+1} = ... = x_{i+d-1} = s    (1)

3. Sample a successor state s' from a transition distribution P_T(X_t | X_{t-1} = s) over the other states s' ≠ s (so there are no self-transitions).
4. Assign i = i + d and s = s'.

This process continues indefinitely, but we only observe the first N time steps.

(a) Assuming that all three states s_1, s_2, s_3 are different, what is the probability of the sample sequence s_1, s_1, s_2, s_2, s_2, s_3, s_3? Write an algebraic expression. Assume M ≥ 3.

At each time step i we observe a noisy version of the state X_i, denoted Y_i, produced via a conditional distribution P_E(Y_i | X_i).

(b) Only in this subquestion, assume that N > M. Let X_1, ..., X_N and Y_1, ..., Y_N be random variables defined as above. What is the maximum index i ≤ N − 1 such that X_1 ⊥⊥ X_N | X_i, X_{i+1}, ..., X_{N−1} is guaranteed?

(c) Only in this subquestion, assume the max duration M = 2, P_D uniform over {1, 2}, and each x_i in an alphabet {a, b}. For (X_1, X_2, X_3, X_4, X_5, Y_1, Y_2, Y_3, Y_4, Y_5), draw a Bayes net over these 10 random variables with the property that removing any edge would yield a Bayes net inconsistent with the given distribution.
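To build intuition for the generative process above (and for why the last segment of an observed prefix may be truncated, which matters in part (a)), here is a minimal simulation sketch. The uniform choices for P_1, P_D, and P_T are hypothetical placeholders, not given in the problem; the loop structure mirrors steps 1-4.

```python
import random

# Minimal simulation of the variable-duration process described above.
# P_1, P_D, and P_T are hypothetical uniform placeholders; the problem
# leaves them abstract.
STATES = ["s1", "s2", "s3"]
M = 3  # maximum duration

def sample_sequence(n):
    """Generate the first n states X_1..X_n of the process."""
    seq = []
    s = random.choice(STATES)                 # sample initial state from P_1
    while len(seq) < n:
        d = random.randint(1, M)              # sample a duration from P_D
        seq.extend([s] * d)                   # stay in s for d time steps
        s = random.choice([t for t in STATES if t != s])  # P_T: no self-transitions
    return seq[:n]                            # observe only the first n steps

print(sample_sequence(7))
```

Note how the final `seq[:n]` truncation means the last observed run may be shorter than its sampled duration, which is exactly the subtlety in the trailing s_3, s_3 of part (a).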
(d) In this part we will explore how to write the described process as an HMM with an extended state space. Write the states z = (s, t), where s is a state of the original system and t represents the time elapsed in that state. For example, the state sequence s_1, s_1, s_1, s_2, s_3, s_3 would be represented as (s_1, 1), (s_1, 2), (s_1, 3), (s_2, 1), (s_3, 1), (s_3, 2).

Answer all of the following in terms of the parameters P_1(X_1), P_D(d), P_T(X_{j+1} | X_j), P_E(Y_i | X_i), k (total number of possible states), N, and M (max duration).

(i) What is P(Z_1)?

    P(x_1, t_1) =

(ii) What is P(Z_{i+1} | Z_i)? Hint: you will need to break this into cases where the transition function behaves differently.

    P(X_{i+1}, t_{i+1} | X_i, t_i) =

(iii) What is P(Y_i | Z_i)?

    P(Y_i | X_i, t_i) =

(e) In this question we explore how to write an algorithm to compute P(X_N | y_1, ..., y_N) using the particular structure of this process. Write P(X_t | y_1, ..., y_{t−1}) in terms of other factors. Construct an answer by checking the correct boxes below:

P(X_t | y_1, ..., y_{t−1}) = (i) (ii) (iii)

(i)
( ) ∑_{i=1}^{k} ∑_{d=1}^{M} ∑_{d=1}^{M}
( ) ∑_{i=1}^{k} ∑_{d=1}^{M}
( ) ∑_{i=1}^{k}
( ) ∑_{d=1}^{M}

(ii)
( ) P(Z_t = (X_t, d) | Z_{t−1} = (s_i, d))
( ) P(X_t | X_{t−1} = s_i)
( ) P(X_t | X_{t−1} = s_d)
( ) P(Z_t = (X_t, d) | Z_{t−1} = (s_i, d))

(iii)
( ) P(Z_{t−1} = (s_d, i) | y_1, ..., y_{t−1})
( ) P(X_{t−1} = s_d | y_1, ..., y_{t−1})
( ) P(Z_{t−1} = (s_i, d) | y_1, ..., y_{t−1})
( ) P(X_{t−1} = s_i | y_1, ..., y_{t−1})
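A useful way to sanity-check any candidate case split for (d)(ii) is to code the extended-state transition and verify that the outgoing probabilities from a state z = (s, t) sum to 1. The sketch below uses hypothetical values for P_D and P_T and one consistent parameterization (stay with probability P(d > t | d ≥ t), otherwise leave to a fresh (s', 1)); it is an illustration of the structure, not the official solution.

```python
# Sketch of an extended-state transition P(Z_{i+1} | Z_i) in the spirit of (d)(ii).
# P_D and P_T below are hypothetical placeholders. From z = (s, t) we either
# stay (move to (s, t+1)) or leave (move to (s', 1) for some s' != s).
M = 3
P_D = {1: 0.5, 2: 0.3, 3: 0.2}           # duration distribution over {1..M}
STATES = ["s1", "s2", "s3"]

def P_T(s_next, s):                       # hypothetical: uniform over other states
    return 0.0 if s_next == s else 1.0 / (len(STATES) - 1)

def trans(z_next, z):
    (s2, t2), (s1, t1) = z_next, z
    tail = sum(P_D[d] for d in range(t1, M + 1))              # P(d >= t1)
    if tail == 0.0:
        return 0.0
    stay = sum(P_D[d] for d in range(t1 + 1, M + 1)) / tail   # P(d > t1 | d >= t1)
    if s2 == s1 and t2 == t1 + 1:
        return stay
    if s2 != s1 and t2 == 1:
        return (1.0 - stay) * P_T(s2, s1)
    return 0.0

# Outgoing probabilities from any z should sum to 1:
z = ("s1", 2)
total = sum(trans((s, t), z) for s in STATES for t in range(1, M + 2))
print(round(total, 6))
```

Running the check from ("s1", 2): stay = 0.2/0.5 = 0.4, and the remaining 0.6 splits evenly over the other two states, so the total is 1 as required.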
Q2. Planning ahead with HMMs

Pacman is tired of using HMMs to estimate the location of ghosts. He wants to use HMMs to plan what actions to take in order to maximize his utility. Pacman uses the HMM (drawn to the right) of length T to model the planning problem. In the HMM, X_{1:T} is the sequence of hidden states of Pacman's world, A_{1:T} are actions Pacman can take, and U_t is the utility Pacman receives at the particular hidden state X_t. Notice that there are no evidence variables, and utilities are not discounted.

[Figure: HMM chain ... X_{t−1} → X_t → X_{t+1} ..., with actions ... A_{t−1}, A_t, A_{t+1} ... feeding into the transitions and utilities U_{t−1}, U_t, U_{t+1} attached to the corresponding hidden states.]

(a) The belief at time t is defined as B_t(X_t) = p(X_t | a_{1:t}). The forward algorithm update has the following form:

    B_t(X_t) = (i) (ii) B_{t−1}(x_{t−1}).

Complete the expression by choosing the option that fills in each blank.

(i)
( ) max_{x_{t−1}}
( ) ∑_{x_{t−1}}
( ) max_{x_t}
( ) ∑_{x_t}
( ) 1

(ii)
( ) p(X_t | x_{t−1})
( ) p(X_t | x_{t−1}) p(X_t | a_t)
( ) p(X_t)
( ) p(X_t | x_{t−1}, a_t)
( ) 1
( ) None of the above combinations is correct

(b) Pacman would like to take actions A_{1:T} that maximize the expected sum of utilities, which has the following form:

    MEU_{1:T} = (i) (ii) (iii) (iv) (v)

Complete the expression by choosing the option that fills in each blank.

(i)
( ) max_{a_{1:T}}
( ) max_{a_T}
( ) ∑_{a_{1:T}}
( ) ∑_{a_T}
( ) 1

(ii)
( ) max_t
( ) ∏_{t=1}^{T}
( ) ∑_{t=1}^{T}
( ) min_t
( ) 1

(iii)
( ) ∑_{x_t, a_t}
( ) ∑_{x_t}
( ) ∑_{a_t}
( ) ∑_{x_T}
( ) 1

(iv)
( ) p(x_t | x_{t−1}, a_t)
( ) p(x_t)
( ) B_t(x_t)
( ) B_T(x_T)
( ) 1

(v)
( ) U_T
( ) 1/U_t
( ) 1/U_T
( ) U_t
( ) 1
( ) None of the above combinations is correct

(c) A greedy ghost now offers to tell Pacman the values of some of the hidden states. Pacman needs your help to figure out if the ghost's information is useful. Assume that the transition function p(x_t | x_{t−1}, a_t) is not deterministic.
With respect to the utility U_t, mark all that can be True:

( ) VPI(X_{t−1} | X_{t−2}) > 0
( ) VPI(X_{t−2} | X_{t−1}) > 0
( ) VPI(X_{t−1} | X_{t−2}) = 0
( ) VPI(X_{t−2} | X_{t−1}) = 0
( ) None of the above

(d) Pacman notices that calculating the beliefs under this model is very slow using exact inference. He therefore decides to try out various particle filter methods to speed up inference. Order the following methods by how accurate their estimate of B_T(X_T) is. If different methods give an equivalently accurate estimate, mark them as the same number.
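For reference when working through Q2(a), here is what an action-conditioned time-elapse update looks like in code: push the previous belief through the transition model, summing over the previous state. The two-state transition table is a hypothetical placeholder; the update's shape, not the numbers, is the point.

```python
# Generic action-conditioned time-elapse update, of the shape asked for in Q2(a).
# The two-state dynamics below are a hypothetical placeholder.
STATES = ["x0", "x1"]

def p_trans(x_next, x_prev, a):
    # p(x_next | x_prev, a): flip state with prob 0.8 under action "go", else 0.2
    flip = 0.8 if a == "go" else 0.2
    return flip if x_next != x_prev else 1.0 - flip

def elapse(belief, a):
    """B_t(x) = sum over x' of p(x | x', a) * B_{t-1}(x')."""
    return {x: sum(p_trans(x, xp, a) * belief[xp] for xp in STATES)
            for x in STATES}

b = {"x0": 1.0, "x1": 0.0}
b = elapse(b, "go")
print({x: round(v, 3) for x, v in b.items()})
```

Starting from certainty in x0, one "go" step moves 0.8 of the mass to x1, and the belief still sums to 1, as any valid update must.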
Mark each method from most accurate (1) to least accurate (4):

Exact inference: ( ) 1 ( ) 2 ( ) 3 ( ) 4
Particle filtering with no resampling: ( ) 1 ( ) 2 ( ) 3 ( ) 4
Particle filtering with resampling before every time elapse: ( ) 1 ( ) 2 ( ) 3 ( ) 4
Particle filtering with resampling before every other time elapse: ( ) 1 ( ) 2 ( ) 3 ( ) 4
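To make the options in (d) concrete, here is a minimal particle-filter sketch with a time-elapse step and an optional resampling step. The two-state dynamics are a hypothetical placeholder. Note that because this HMM has no evidence variables, all particles carry equal weight, so resampling here only re-draws particles uniformly from themselves.

```python
import random

# Minimal particle-filter sketch in the spirit of Q2(d): move every particle
# through a (hypothetical) transition model, optionally resampling each step.
STATES = [0, 1]

def step(x, a):
    # hypothetical p(x' | x, a): flip state with prob 0.8 under action "go"
    flip = 0.8 if a == "go" else 0.2
    return 1 - x if random.random() < flip else x

def elapse(particles, a):
    return [step(x, a) for x in particles]

def resample(particles):
    # uniform weights here, since there is no evidence to weight by
    return random.choices(particles, k=len(particles))

random.seed(0)
particles = [0] * 1000
for _ in range(5):
    particles = elapse(particles, "go")
    particles = resample(particles)   # omit this line for "no resampling"

belief_1 = sum(particles) / len(particles)  # estimated B_T(X_T = 1)
print(belief_1)
```

Comparing runs with the `resample` line kept, removed, or applied every other iteration is one way to probe the accuracy ordering the question asks about.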