ECON 100 - Spring 2023 - PS6_solutions

ECON 10000: Principles of Microeconomics, Spring 2023
Problem Set 6
Due Friday May 19, 11:59pm (NB: again, one more day than usual)

Please follow the graders' instructions regarding submission via Canvas/Gradescope. Please type your problem set! (You can insert hand-drawn figures as images.) Note: you can form groups of 5 people maximum, and you should turn in one problem set per group.
(Jeff) 1. Craig and Min are the only two econ tutors in Hyde Park. Regarding advertising their services, they each have two options: advertise (A) or not advertise (NA). When they both advertise, they each get a monthly revenue of $1,000. When neither of them advertises, they each get a monthly revenue of $500. When only one of them advertises, the person who advertises gets a revenue of $1,700 per month while the non-advertiser gets a revenue of $50. However, the cost of advertising is $900 per month. [18 TOTAL POINTS]

a. Write down the payoff (i.e., profit) matrix of the game that describes the above market. What will the equilibrium/a be? Is/Are this/these equilibrium/a Pareto efficient? Explain your answer. [6 points]

Solution: Note that advertising costs $900 per month, so we subtract $900 from the revenue of any player who advertises. The normal form (i.e., payoff matrix) of this game, with payoffs written as (Craig, Min), is:

                   Min: A          Min: NA
    Craig: A       (100, 100)      (800, 50)
    Craig: NA      (50, 800)       (500, 500)

(In the original figure the "arrow" methodology from the textbook is used to indicate the direction of preference.) The Nash Equilibrium is (A, A). If the other player is advertising, neither player can make themselves better off by deviating from (A, A). Advertising is a dominant strategy, so there is no other NE. This equilibrium is not Pareto efficient, because both players would be better off at (NA, NA), but unfortunately that is not a NE.

b. You learn that there was an error in the revenue numbers: when only one of them advertises, the non-advertiser actually has a monthly revenue of $150 (instead of $50). All the other numbers are correct. Write down the new payoff matrix and find the new equilibrium/a. [6 points]

Solution: The new normal form (i.e., payoff matrix) of the game is:

                   Min: A          Min: NA
    Craig: A       (100, 100)      (800, 150)
    Craig: NA      (150, 800)      (500, 500)

There are now two (pure-strategy) Nash Equilibria: (A, NA) and (NA, A). Switching is not beneficial for either player at either of these profiles. (Note that there is also a mixed-strategy NE.)
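These equilibria can also be verified mechanically. The following is a minimal Python sketch (not part of the course materials; the function and variable names are ours) that checks every cell of a payoff matrix for profitable unilateral deviations:

    # Brute-force search for pure-strategy Nash equilibria in a small game,
    # applied to the advertising game. Payoffs are monthly profits in dollars
    # (revenue minus the $900 advertising cost where applicable).

    def pure_nash_equilibria(payoffs):
        """payoffs maps (row_strategy, col_strategy) -> (row_payoff, col_payoff)."""
        rows = sorted({r for r, _ in payoffs})
        cols = sorted({c for _, c in payoffs})
        equilibria = []
        for r in rows:
            for c in cols:
                u_row, u_col = payoffs[(r, c)]
                # A cell is a NE if neither player gains from a unilateral switch.
                row_ok = all(payoffs[(r2, c)][0] <= u_row for r2 in rows)
                col_ok = all(payoffs[(r, c2)][1] <= u_col for c2 in cols)
                if row_ok and col_ok:
                    equilibria.append((r, c))
        return equilibria

    # Part a: profits after subtracting the $900 advertising cost.
    game_a = {("A", "A"): (100, 100), ("A", "NA"): (800, 50),
              ("NA", "A"): (50, 800), ("NA", "NA"): (500, 500)}
    print(pure_nash_equilibria(game_a))   # [('A', 'A')]

    # Part b: the lone non-advertiser now earns $150 instead of $50.
    game_b = {("A", "A"): (100, 100), ("A", "NA"): (800, 150),
              ("NA", "A"): (150, 800), ("NA", "NA"): (500, 500)}
    print(pure_nash_equilibria(game_b))   # [('A', 'NA'), ('NA', 'A')]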
c. Go back to the situation in part a. Min sends Craig a message via Snapchat: "We both have 36 months before we graduate and leave Hyde Park. So, let's agree to not advertise, and we will make more profit than otherwise. Next month, I will not advertise, hoping that you will not either. However, if you do advertise, then I will advertise until we leave Hyde Park." The message gets deleted after Craig reads it, leaving no trace. Will this agreement be successfully implemented? Why or why not? If not, provide changes/options that would make it successful. [6 points]

Solution: This arrangement probably will not work. Both players will use backward induction to find their optimal strategies. In the last period, Advertise is the dominant strategy for both players, so they will both Advertise. Rolling this back period by period all the way to the first period, (A, A) ends up as the NE in every period. You may have read about a "grim strategy": punishing the other player by always choosing Advertise if they pick Advertise just once. But note that the length of the game is very important here. There is a defined endpoint to our game, so players will defect quickly because of backward induction. To get the Snapchat arrangement to work, we would need to change the payoff matrices. For example, Min might threaten a smear campaign against Craig if Craig advertises. This would reduce Craig's payoff from choosing Advertise, which might get the players back to (NA, NA). But without a change in the payoff matrices, the strategy would not work.

(Jeff) 2. Suppose we create the following system for problem sets the next time we teach ECON 100. We randomly create teams of 2 students. Throughout the course, the students in the same team need to work together on all problem sets. At the end of the course, the two students must play a simultaneous game, and they each have two choices: C (cooperate) or D (defect). Let S be the average score of the problem sets. If they both choose C, then they each get (1.2*S). In other words, this is a situation where both students in the group work hard. If they both choose D, they each get (0.5*S). In other words, neither student puts any effort into the problem set, hoping that the other one will. If one chooses C while the other chooses D, the student who chose D gets (2*S) and the student who chose C gets 0. In other words, this is where one student completely free rides on the work of the other student, and even drops her name from the problem set. [25 TOTAL POINTS]

a. Represent the above game by its payoff matrix (i.e., the normal form of the game). In this way, you are showing all the elements of a game: players, strategies and payoffs. [5 points]

Solution: With payoffs written as (Player 1, Player 2), the normal form is:

                     Player 2: C        Player 2: D
    Player 1: C      (1.2S, 1.2S)       (0, 2S)
    Player 1: D      (2S, 0)            (0.5S, 0.5S)
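For readers who want to experiment with the numbers, here is a minimal Python sketch (again, not part of the original solution; the sample score S = 80 is an arbitrary illustration and the function name is ours) that builds this payoff matrix for a given S and confirms that D is a best response to either choice by the partner:

    # Build the Problem 2 payoff matrix as a function of the average score S,
    # and check that Defect is a best response to either choice of the partner.

    def ps_game(S):
        """Payoffs (player 1, player 2) for each strategy profile, given score S."""
        return {("C", "C"): (1.2 * S, 1.2 * S),
                ("C", "D"): (0.0, 2.0 * S),
                ("D", "C"): (2.0 * S, 0.0),
                ("D", "D"): (0.5 * S, 0.5 * S)}

    S = 80  # an arbitrary example score, purely for illustration
    game = ps_game(S)
    for profile, payoff in game.items():
        print(profile, payoff)

    # For any S > 0, player 1 prefers D whatever player 2 does:
    assert game[("D", "C")][0] > game[("C", "C")][0]   # 2S   > 1.2S
    assert game[("D", "D")][0] > game[("C", "D")][0]   # 0.5S > 0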
b. Find the Nash equilibrium of the game in part a. [4 points]

Solution: The Nash Equilibrium is (D, D). Given that Player 2 defects, if Player 1 cooperated instead of defecting, his payoff would fall from 0.5S to 0. Given that Player 1 defects, if Player 2 cooperates his payoff also falls to 0. Therefore (D, D) is a NE. Since Defect is the dominant strategy for both players, there is no other NE.

It is likely that you are not too comfortable with the Nash equilibrium in part b. In fact, you probably think that if you actually played that game with a fellow student you would not end up in that Nash equilibrium.

c. One possibility is that the above payoffs do not fully capture all costs and benefits of the environment. Suppose that if anyone chooses D, the student is flagged as a cheater and his/her name is made public. Also suppose that this public punishment can be represented in "score units" as p. (I) Write down the new payoff matrix. (II) Let's consider an environment where public punishment is light. If p = (0.1*S), what's the Nash equilibrium? [6 points]

Solution: (I) Using p to denote the punishment, which is subtracted from the payoff of any student who chooses D, the new payoff matrix is:

                     Player 2: C             Player 2: D
    Player 1: C      (1.2S, 1.2S)            (0, 2S − p)
    Player 1: D      (2S − p, 0)             (0.5S − p, 0.5S − p)

(II) With p = 0.1*S, the normal form of the game becomes:

                     Player 2: C           Player 2: D
    Player 1: C      (1.2S, 1.2S)          (0, 1.9S)
    Player 1: D      (1.9S, 0)             (0.4S, 0.4S)

Again, the NE is (D, D). Though the payoffs have changed, it is still true that either player loses from switching to Cooperate from (D, D).

d. Forget the public punishment from part c. Suppose that the above described system for problem-set scores is implemented across all university courses. For the following
situations, and in three sentences or less, explain whether or not you expect the Nash equilibrium of the game to remain the same, and why. If you are making any assumptions, state them clearly. Of course, your assumptions cannot contradict the situation described. (I) At the beginning of each course, the teams are redone with the matching being random. (II) You are matched to the same student in all courses until you graduate. (III) Imagine situation (II) and you are taking the last course at the University of Chicago. (IV) Does your answer have any implications for the equilibrium in the course just before the last one? If so, what happens if you take this reasoning all the way back to the first course? [10 points]

Solution: We have graded this part generously, as the goal of this question is to emphasize the power of backward induction. Each part, from (I) to (IV), builds the intuition.

(I) We would still expect (D, D), as the payoff matrix is the same as above. In other words, the game has not changed at all.

(II) Now we might expect trust to build up between partners. This might result in (C, C) being played, but only if this trust affects the payoff matrices. Because rational players use backward induction, if the payoff matrices do not change each period they cannot simply trust that their behavior this period will affect the other player next period.

(III) Given that this is the last course, we might expect that the final payoff matrix will be less affected by trust or social pressure. So, we will likely end up in the (D, D) equilibrium.

(IV) If both players know the last period is going to end up in (D, D), then in the second-to-last period the social trust might be quite low, which means they'd again end up in (D, D). We can carry this logic all the way back to the start, so we again expect the players to end up in (D, D) every period.

(Amanda) 3. Consider the typical game of 'rock (r), paper (p), scissors (s)' for two players (1 and 2). When both players choose the same strategy, they both get 0 points. If one player chooses a strategy that beats the other strategy, the winner gets 1 point while the loser gets -1 point. [27 TOTAL POINTS]

a. Write down all the elements of the game by representing it in its payoff matrix form. [6 points]

Solution: With payoffs written as (Player 1, Player 2), the normal form is:

                     Player 2: r      Player 2: p      Player 2: s
    Player 1: r      (0, 0)           (-1, 1)          (1, -1)
    Player 1: p      (1, -1)          (0, 0)           (-1, 1)
    Player 1: s      (-1, 1)          (1, -1)          (0, 0)

(In the original figure the "arrow method" from the textbook is used; you can also use the method we covered in class, which is much simpler when the payoff matrix is bigger than 2x2.)
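As a quick check of the matrix above, here is a minimal Python sketch (not part of the original solution; the helper name and the r/p/s codes are ours) that scans every cell for profitable unilateral deviations and confirms the observation made in part b below, namely that no cell is a pure-strategy NE:

    # Scan every cell of the rock-paper-scissors matrix for pure-strategy NE.

    STRATEGIES = ["r", "p", "s"]

    def rps_payoff(a, b):
        """Payoff to the player choosing a against an opponent choosing b."""
        beats = {("r", "s"), ("s", "p"), ("p", "r")}
        if a == b:
            return 0
        return 1 if (a, b) in beats else -1

    pure_ne = []
    for a in STRATEGIES:        # player 1's strategy
        for b in STRATEGIES:    # player 2's strategy
            p1_ok = all(rps_payoff(a2, b) <= rps_payoff(a, b) for a2 in STRATEGIES)
            p2_ok = all(rps_payoff(b2, a) <= rps_payoff(b, a) for b2 in STRATEGIES)
            if p1_ok and p2_ok:
                pure_ne.append((a, b))

    print(pure_ne)  # [] -- in every cell, someone gains by switching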
b. Find the (pure strategy) Nash Equilibrium. [5 points]

Solution: There is no pure-strategy NE. In every cell, at least one player can make themselves better off by switching their strategy. For example, if Player 1 plays Rock and Player 2 plays Scissors, then Player 2 can make themselves better off by switching to Paper. Please note that a similar reasoning applies to every cell!

In class, we superficially covered the concept of mixed strategies. When you mix strategies, you randomly choose across pure strategies with particular probabilities. For example, in the above game a mixed-strategy Nash equilibrium is when both players choose each strategy with probability 1/3, randomly.

c. In three sentences or less, explain the economic intuition of the importance of the word randomly in the above NE. [6 points]

Solution: If the players are mixing in a way that is predictable to the opponent rather than randomly, then their opponent can always predict what they are going to play and play the move that beats it. For example, if I always play Rock, then Paper, then Scissors (i.e., always in this sequence) and you know this, then you can beat me in all three rounds by playing Paper, Scissors, Rock. In other words, playing Rock, Paper, Scissors with probability 1/3 is not enough. You need to do it randomly!

d. In this NE, what can you tell us about the expected payoffs of the pure strategies for player 1? In other words, assume that player 2 is following the mixed strategy of playing Rock, Paper, Scissors with probability 1/3 each. Then, calculate the expected payoff for player 1 of
playing Rock. Then do the same for Paper, and then for Scissors. What's the economic intuition? [10 points]

Solution: If the other player is playing Rock, Paper, Scissors with probability (1/3, 1/3, 1/3), then the expected payoff of each pure strategy is 0. For example, if Player 1 only plays Rock and Player 2 mixes, then:

    E(payoff) = (1/3) × 0 + (1/3) × (−1) + (1/3) × 1 = 0

But this will also be true for the expected payoff of playing Paper or Scissors when Player 2 is mixing with the probabilities described in the NE. In other words, the expected payoff for Player 1 of each pure strategy will be the same across all three options. Since the payoffs are equal across all three strategies, Player 1 is indifferent between all three options, and so an optimal strategy is the mixed strategy (Rock, Paper, Scissors) = (1/3, 1/3, 1/3). A player will only mix between different pure strategies if his expected payoffs from those choices are equal. Since Player 1 is indifferent here, he is willing to mix. If he did not mix, then Player 2 would no longer be indifferent between the three options and would switch his strategy. It makes sense that Player 1 is indifferent: we know this is a NE in which he mixes. If he were not indifferent, he would just switch to his best option.

(Amanda) 4. Consider two firms: an incumbent (I) and a potential competitor (C). First, the potential competitor has to decide whether to enter the market (E) or not enter the market (N), and then the incumbent has to decide whether to produce a high quantity (H) or low quantity (L). This game has the following extensive form: [30 TOTAL POINTS]

a. Find the Nash Equilibrium of this game. [5 points]

Solution: We solve this sequential game by backward induction. We start by solving the "subgames" in the last stage of this game (i.e., the second-stage branches of the game). The incumbent (I) moves at the second stage given some decision by the competitor (C). If C chooses E (top branch in the game tree), I will get a payoff of 4 if he chooses H or 5 if he chooses L. Hence, he will choose L.
If C chooses N instead (bottom branch in the game tree), I will get a payoff of 10 if he chooses H or 7 if he chooses L. Hence, he will choose H.

Given the above analysis, when C makes the first move at the first stage of the game, she will take the optimal behavior of I into account: if C chooses E, she will get 5 (because I will choose L); if instead C chooses N, she will get 0 (because I will choose H). Hence, C will choose E. Thus, the unique NE is (C, I) = (E, L). The payoffs are (5, 5).

Consider the following change to the above game. Before the potential competitor makes a move, the incumbent can decide to build a big factory (B) or a small factory (S). The extensive form of the new game is:

b. Using the above extensive form, find the Nash Equilibrium of the game. Don't get confused with the payoffs. The payoff belonging to each player is clearly denoted by its color. [5 points]

Solution: Again, we rely on backward induction to solve this game. We start at the last stage of the game (i.e., when player I makes a choice for the second time). Please note that at the last stage of the game, there are four decision nodes where player I has to make a choice:

1. Top node (i.e., after I has chosen B and C has chosen E): I will choose H and not L, because 3 is better than 2.
2. Second node (i.e., after I has chosen B and C has chosen N): I will choose H and not L, because 9 is better than 6.
3. Third node (i.e., after I has chosen S and C has chosen E): I will choose L and not H, because 5 is better than 4.
4. Last node (i.e., after I has chosen S and C has chosen N): I will choose H and not L, because 3 is better than 2.

Now, in the second stage of the game, as player C makes the decision, she will take into account what player I will do in the last stage of the game. Also, player C has two decision nodes:
Top node (i.e., after I has chosen B): C will choose N and not E, because 0 is better than −2.
Bottom node (i.e., after I has chosen S): C will choose E and not N, because 5 is better than 0.

Finally, we can solve the first stage of the game (i.e., when I makes the decision that begins the game), taking into account the optimal strategies in the second and last stages of the game: I will choose B and not S, because 9 is better than 5. Thus, in equilibrium I builds the big factory, C stays out, and I then produces a high quantity; the payoffs are 9 for I and 0 for C.

c. In solving the above two subquestions, what concept have you used? Describe its economic intuition. [5 points]

Solution: We have used the concept of backward induction. The economic intuition is as follows: a player will act knowing which actions his opponent will take in response to his own actions. Therefore, the player takes actions to channel the outcome of the game toward his most-preferred outcome.

d. In one sentence, explain what a strictly dominated strategy is. Using that concept, find the Nash equilibrium of the following game. [6 points]

Solution: You have to use the reverse of the definition we used in class for a (strictly) dominant strategy. A strictly dominated strategy is a strategy that it never makes sense to play. In other words, there is another strategy for the player that is always better than the strictly dominated strategy, independent of what the other player does. In this example, B is strictly dominated by strategy T for player 1, because 3 > 0 and 5 > −2. Therefore, player 1 will never choose B. Given that 1 will play T, 2 should play R, as 6 > 4. Therefore, the NE is (T, R).

e. (i) Find the Nash Equilibrium of the game. (ii) This game does not have a mixed strategy Nash equilibrium. In a couple of sentences explain why that has to be the case. [9 points]
Solution: (i) The NE is (T, L). This is an interesting case. If player 2 chooses L, player 1 is indifferent between choosing T and B. However, if player 2 chooses R, player 1 will prefer T, as 3 > 0. In such a case, we say that B is weakly dominated by T, as opposed to strictly dominated, as we saw in the previous part.

(ii) This part relies on the intuition from Q3 part d. For there to be a mixed-strategy NE, each player's expected payoffs from the strategies he mixes over must be equal in equilibrium (i.e., given the probabilities with which the other player plays each strategy). Otherwise, it would not be optimal to mix. But note that this cannot be the case for Player 1. If Player 2 places any positive probability on playing R, then Player 1 should never play B (because Player 1 would get a strictly higher expected value from playing T for sure). So Player 1 would play T for sure, and Player 2's best response to T is L, not a mix. Thus Player 2 should not mix strategies. Given that Player 2 is not mixing, Player 1 would also not mix strategies.
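The key step of part (ii) — that Player 1 strictly prefers T whenever Player 2 puts any weight on R — can be checked numerically. Below is a minimal Python sketch (not part of the original solution); only Player 1's payoffs are needed, and since T and B pay the same amount against L, that common value is an arbitrary placeholder here because only the difference between T and B matters:

    # Player 1's expected payoff from T vs. B when Player 2 plays R with probability q.

    a = 1.0  # placeholder for Player 1's (identical) payoff from T or B against L

    def expected_payoff(strategy, q):
        """Player 1's expected payoff when Player 2 plays R with probability q."""
        vs_L, vs_R = (a, 3.0) if strategy == "T" else (a, 0.0)
        return (1 - q) * vs_L + q * vs_R

    for q in [0.0, 0.1, 0.5, 1.0]:
        diff = expected_payoff("T", q) - expected_payoff("B", q)
        print(f"q = {q:.1f}: E[T] - E[B] = {diff:.1f}")   # equals 3*q

    # The difference is 3*q, which is zero only when q = 0; so whenever Player 2
    # puts positive probability on R, Player 1 plays T for sure rather than mixing.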