Exercise Book
Introduction to Time Series and Dynamic Econometrics
Minor Applied Econometrics
Period 1, 2023
Karim Moussa (coordinator, teacher)
Mariia Artemova (teacher)
Francisco Blasques and Siem Jan Koopman (course developers)
Instructions: This exercise book contains exercises for the course Introduction to Time Series and Dynamic Econometrics. Exercises with solutions are marked with an asterisk (*); exercises that may be considered particularly challenging are marked with two asterisks (**).
Contents

1 Exercises of Week One
2 Exercises of Week Two
3 Exercises of Week Three
4 Exercises of Week Four
5 Exercises of Week Five
6 Exercises of Week Six
7 Solutions
1 Exercises of Week One
1.1 Give the definition of:
(a) Strictly stationary process
(b) Weakly stationary process
(c) iid sequence
1.2 Make use of the definitions given in the previous question to argue that:
• Every iid process is strictly stationary, but it may not be weakly stationary.
• Not every strictly stationary process is weakly stationary and vice versa.
1.3 * The following Venn diagram specifies the relation between iid processes, weakly stationary processes and strictly stationary processes.

[Venn diagram with five numbered regions, each labeled by which of the properties WS (weakly stationary), SS (strictly stationary) and IID hold there; matching the examples in the solution to this exercise: (1) WS only, (2) WS, SS and IID, (3) WS and SS but not IID, (4) SS and IID but not WS, (5) SS only.]

Give an example time series for every set (i.e. region 1 to 5) in the above Venn diagram.
1.4 Consider the following statement: There is no strictly stationary process which is iid and weakly stationary at the same time. Is this statement true or false? Justify your answer.
1.5 Let {X_t} be weakly stationary. Show that E(X_t²) = E(X²_{t-1}).
1.6 * Show that ρ_X(t + h, t) = ρ_X(h) if the time series is weakly stationary.
1.7 * Give the definition of white noise and random walk.
1.8 Give three examples of white noise processes.
1.9 * Complete the Venn diagram elaborated before with:
(a) white noise process
(b) random walk
1.10 * Give examples of time series that characterize each set (including each intersection and
union) in the Venn diagram.
1.11 * Derive the autocorrelation function of the random walk starting at t = 1.
1.12 * Derive the conditional mean and conditional variance of a random walk with iid innovations. How do they differ from the unconditional mean and variance derived in the previous questions?
1.13 Which of the following statements is correct?
(a) 2L⁴X_t − 5L(L⁻¹X_t + X_{t-3}) + X_{t-4} = −2X_{t-4} − 5X_t
(b) L⁴X_{t+3} − 5L²(L⁻¹X_t + X_{t+1}) + 10X_{t-1} = X_{t-1}
1.14 Let {X_t} be a random walk process and {ΔX_t} denote its first difference. Consider the following statement: The first difference of a random walk may be iid. Is this statement true or false? Justify your answer.
1.15 Let the time-series {X_t} be given by

Time   t=1     t=2     t=3     t=4     t=5     t=6     t=7     t=8     t=9
X_t    103.4   105.2   104.1   104.7   105.3   105.9   106.1   105.8   105.5

Hint: use a programming language or Excel to perform the following subquestions (a Python sketch follows this exercise).

(a) Apply the difference operator Δ and state the values of the time series ΔX_t.
(b) Produce the time series of growth rates {100 × (X_t − X_{t-1})/X_{t-1}}.
(c) Produce the time series of natural logarithms {log X_t}.
(d) Apply the difference operator Δ and state the values of the time series of logarithms Δ log X_t.
(e) How does the time series of growth rates compare to the time series of differences in logarithms?
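A minimal sketch of subquestions (a)-(d) in Python, assuming NumPy is available (the array x simply holds the table values above):

    import numpy as np

    x = np.array([103.4, 105.2, 104.1, 104.7, 105.3, 105.9, 106.1, 105.8, 105.5])

    dx = np.diff(x)              # (a) first differences X_t - X_{t-1}
    growth = 100 * dx / x[:-1]   # (b) growth rates in percent
    logx = np.log(x)             # (c) natural logarithms
    dlogx = np.diff(logx)        # (d) first differences of the logarithms

    # (e) side-by-side comparison of growth rates and 100 * dlog
    print(np.round(np.column_stack((growth, 100 * dlogx)), 4))

The printout for (e) shows that 100·Δlog X_t tracks the growth rate to within a few hundredths of a percentage point, since log(X_t/X_{t-1}) ≈ X_t/X_{t-1} − 1 for ratios close to one.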
1.16 ** (Brockwell and Davies) Show that a strictly stationary process with E|X_t|² < ∞ is weakly stationary.
Hint: To show that the autocovariance is finite, use the Cauchy–Schwarz inequality¹ with inner product ⟨Z_t, Z_{t−h}⟩ = E(Z_t Z_{t−h}) and Z_t = X_t − E(X_t).

¹ Recall that the Cauchy–Schwarz inequality states that for any inner product ⟨·, ·⟩ we have ⟨u, v⟩² ≤ ⟨u, u⟩ · ⟨v, v⟩.

1.17 * (Brockwell and Davies) Let {Z_t} be a sequence of independent normal random variables, each with mean 0 and variance σ², and let a, b and c be constants. Which of the following processes are weakly stationary? For each stationary process derive the mean and the autocovariance function:
(a) X_t = a + bZ_t + cZ_{t-2}
(b) X_t = a + bZ₀
(c) X_t = Z_t Z_{t-1}
1.18 * (Brockwell and Davies) Suppose {X_t} and {Y_t} are uncorrelated weakly stationary sequences. Show that {X_t + Y_t} is weakly stationary.
2 Exercises of Week Two
2.1 ** Suppose that {X_t} is generated by the AR(1) below

X_t = 0.9 X_{t-1} + ε_t

Use the sequence of innovations {ε_t} below in order to complete the values of the time-series {X_t} from time t = 4 to t = 9. (A Python sketch of the recursion follows the table.)

Time   t=1    t=2    t=3    t=4     t=5     t=6    t=7     t=8    t=9
X_t    0.31   1.25   0.79   ·       ·       ·      ·       ·      ·
ε_t    ·      ·      ·      −0.10   −0.04   0.01   −0.03   0.12   0.02
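A minimal sketch of the forward recursion in Python (the values are taken from the table above; the dictionaries are keyed by the time index):

    # Iterate X_t = 0.9 * X_{t-1} + eps_t forward from the given X_3 = 0.79.
    eps = {4: -0.10, 5: -0.04, 6: 0.01, 7: -0.03, 8: 0.12, 9: 0.02}
    x = {3: 0.79}
    for t in range(4, 10):
        x[t] = 0.9 * x[t - 1] + eps[t]
    print({t: round(v, 3) for t, v in x.items()})
    # agrees with the solution table to 2.1 up to rounding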
2.2 Repeat the exercise above assuming that:
(a) X_t = 0.7 X_{t-1} + ε_t
(b) X_t = X_{t-1} + ε_t
(c) X_t = −0.9 X_{t-1} + ε_t
2.3 Suppose that {X_t} is generated by the AR(1) below

X_t = 0.1 + 0.9 X_{t-1} + ε_t

Use the sequence of innovations {ε_t} below in order to complete the values of the time-series {X_t} from time t = 4 to t = 9.

Time   t=1    t=2    t=3    t=4     t=5     t=6    t=7     t=8    t=9
X_t    0.31   1.25   0.79   ·       ·       ·      ·       ·      ·
ε_t    ·      ·      ·      −0.10   −0.04   0.01   −0.03   0.12   0.02
2.4 Repeat the exercise above assuming that:
(a) X_t = 0.1 + 0.5 X_{t-1} + ε_t
(b) X_t = 0.5 + 0.5 X_{t-1} + ε_t
(c) X_t = 0.1 − 0.5 X_{t-1} + ε_t
2.5 ** Consider a time-series {X_t} generated by a stable AR(1) with intercept. Verify that the variance, covariance and autocorrelation functions do not depend on the intercept parameter α.
2.6 Calculate the mean of the time-series {X_t} generated by the following AR(1) models:

X_t = 0.1 + 0.5 X_{t-1} + ε_t
X_t = 0.1 + 0.9 X_{t-1} + ε_t
X_t = 0.5 + 0.9 X_{t-1} + ε_t
X_t = 0.5 − 0.9 X_{t-1} + ε_t
2.7 ** Derive the mean of a time series generated by a stable AR(1) model with intercept.
2.8 ** Derive the autocovariance and autocorrelation function for a stable AR(2) process.
2.9 Comment on the following statement: The stability of the AR(2) with intercept does not depend on α.
2.10 Derive the mean, variance, autocovariance and autocorrelation function of a time series
generated by a stable AR(2) with intercept.
2.11 * Comment on the following statement: The variance, covariance and autocorrelation functions of the stable AR(2) with intercept do not depend on α.
2.12 ** Derive the autocovariance and autocorrelation function for a stable ARMA(1,1) model.
2.13 ARMA(p, q) models can also feature an 'intercept' or 'constant' parameter α that determines the unconditional mean of the time-series:

X_t = α + φ₁ X_{t-1} + ... + φ_p X_{t-p} + ε_t + θ₁ ε_{t-1} + ... + θ_q ε_{t-q}.

Derive the unconditional mean for the time-series generated by the following models:
(a) stable AR(p) with intercept
(b) MA(q) with intercept
(c) stable ARMA(2,1) with intercept
2.14 Suppose that {X_t} is generated by the ARMA(1,1) below

X_t = 0.9 X_{t-1} + ε_t + 2 ε_{t-1}

Use the sequence of innovations {ε_t} below in order to complete the values of the time-series {X_t} from time t = 4 to t = 9.

Time   t=1    t=2    t=3    t=4     t=5     t=6    t=7     t=8    t=9
X_t    0.31   1.25   0.79   ·       ·       ·      ·       ·      ·
ε_t    ·      ·      0.02   −0.10   −0.04   0.01   −0.03   0.12   0.02
2.15 Repeat the exercise above assuming that:
(a) X_t = 0.9 X_{t-1} + ε_t + ε_{t-1}
(b) X_t = 0.7 X_{t-1} − 0.2 X_{t-2} + ε_t
(c) X_t = 0.7 X_{t-1} − 0.4 X_{t-2} + ε_t
(d) X_t = 0.5 X_{t-1} + 0.3 X_{t-3} + ε_t − 0.4 ε_{t-1}
2.16 Derive the conditional expectation E(X_t | x_{t-1}, x_{t-2}, ...) when {X_t} is generated by:
(a) * iid sequence with E(X_t) = 0
(b) * Random Walk with iid innovations
(c) * AR(1) and AR(2) with intercept and iid innovations
(d) ** AR(p) with intercept and iid innovations
For the following parts related to the (AR)MA model, assume that the initialization ε₀, ..., ε_{1−q} = 0 is used, with q the order of the MA component:
(e) * MA(1), MA(2), MA(q) with intercept and iid innovations
(f) ** ARMA(1,1) with intercept and iid innovations
(g) * ARMA(p,q) with intercept and iid innovations
2.17 * (Brockwell and Davies, 1.5(a)) Derive the autocovariance and autocorrelation function of the process {X_t} given by the moving-average process of order 2

X_t = Z_t + 0.8 Z_{t-2},   {Z_t} ∼ WN(0, 1)
2.18 (Brockwell and Davies, 3.1) Let {Z_t} be a white noise sequence. Determine which of the following ARMA processes are causal and which of them are invertible. (A numerical root check follows the list.)
(a) ** X_t + 0.2 X_{t-1} − 0.48 X_{t-2} = Z_t
(b) ** X_t + 1.9 X_{t-1} + 0.88 X_{t-2} = Z_t + 0.2 Z_{t-1} + 0.7 Z_{t-2}
(c) * X_t + 0.6 X_{t-1} = Z_t + 1.2 Z_{t-1}
(d) * X_t + 1.8 X_{t-1} + 0.81 X_{t-2} = Z_t
(e) ** X_t + 1.6 X_{t-1} = Z_t − 0.4 Z_{t-1} + 0.04 Z_{t-2}
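One way to check causality and invertibility numerically is to compute the roots of the AR and MA polynomials and compare their moduli to 1. A minimal sketch in Python for part (b), assuming NumPy (np.roots takes coefficients from the highest power down):

    import numpy as np

    # Part (b): X_t + 1.9 X_{t-1} + 0.88 X_{t-2} = Z_t + 0.2 Z_{t-1} + 0.7 Z_{t-2}
    phi = [1.0, 1.9, 0.88]    # phi(z) = 1 + 1.9 z + 0.88 z^2
    theta = [1.0, 0.2, 0.7]   # theta(z) = 1 + 0.2 z + 0.7 z^2

    print(np.abs(np.roots(phi[::-1])))    # causal iff all moduli > 1; here 1.25 and 0.909
    print(np.abs(np.roots(theta[::-1])))  # invertible iff all moduli > 1; here both about 1.195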
2.19 ** (Brockwell and Davies, 3.6) Show that the two MA(1) processes

X_t = Z_t + θ Z_{t-1},   {Z_t} ∼ WN(0, σ²)
Y_t = Z̃_t + (1/θ) Z̃_{t-1},   {Z̃_t} ∼ WN(0, σ²θ²)

where 0 < |θ| < 1, have the same autocovariance functions.
2.20 * Suppose that the dynamics of the growth rate of GDP in the Netherlands are given by the following AR(3):

X_t = 0.008 + 0.92 X_{t-1} − 0.14 X_{t-3} + ε_t,   {ε_t} ∼ NID(0, 0.011)

Suppose that the last observed values of GDP growth were given by x_{t-3} = 0.019, x_{t-2} = 0.008 and x_{t-1} = 0.012. As a result, the conditional probability of an economic recession at time t is given by

P(X_t < 0 | x_{t-1}, x_{t-2}, ...) ≈ 6.8%

Similarly, the conditional probability of having economic growth above 3% at time t is given by

P(X_t > 0.03 | x_{t-1}, x_{t-2}, ...) ≈ 10.7%

This occurs naturally because

X_t | x_{t-1}, x_{t-2}, ... ∼ N(0.008 + 0.92 x_{t-1} − 0.14 x_{t-3}, 0.011) ∼ N(0.0164, 0.011)

Can you explain why the probability that X_t > 0.03 is larger than the probability of X_t < 0 (note that the last observed value x_{t-1} is closer to zero than 0.03)?
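The quoted probabilities can be checked numerically; a minimal sketch assuming SciPy is installed (note that the stated numbers come out when 0.011 is read as the standard deviation of the conditional distribution):

    from scipy.stats import norm

    mu = 0.008 + 0.92 * 0.012 - 0.14 * 0.019  # conditional mean, about 0.0164
    sd = 0.011                                 # reproduces the stated probabilities
    print(norm.cdf(0.0, mu, sd))   # P(X_t < 0    | past), about 0.068
    print(norm.sf(0.03, mu, sd))   # P(X_t > 0.03 | past), about 0.108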
2.21 Let {X_t} be generated according to the following AR(1) model:

X_t = 10.5 + 0.7 X_{t-1} + ε_t,   {ε_t} ∼ NID(0, 2).

State the conditional distribution of X_{T+h} for h = 1, 2, 3, given that X_T = 16.4. In other words give the distribution of X_{T+h} | X_T for h = 1, 2, 3.
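A minimal sketch of the forecast recursion in Python: for a stable AR(1), the conditional mean and variance satisfy m_h = α + φ m_{h-1} and v_h = φ² v_{h-1} + σ², starting from m_0 = X_T and v_0 = 0:

    # X_{T+h} | X_T is normal; iterate mean and variance from X_T = 16.4.
    mean, var = 16.4, 0.0
    for h in (1, 2, 3):
        mean = 10.5 + 0.7 * mean   # E(X_{T+h} | X_T)
        var = 0.7**2 * var + 2.0   # Var(X_{T+h} | X_T)
        print(f"h={h}: N({mean:.3f}, {var:.3f})")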
2.22 Let {X_t} be generated according to the following MA(1) model:

X_t = ε_t + 0.7 ε_{t-1},   {ε_t} ∼ NID(0, 2).

State the conditional distribution of X_{T+h} for h = 1, 2, 3, given that X_T = −0.5 and ε_{T-1} = −0.7. In other words give the distribution of X_{T+h} | X_T, ε_{T-1} for h = 1, 2, 3.
2.23 ** Suppose that the dynamics of the IBM stock price are well approximated by the following AR(1) model

X_t = 1.42 + 0.99 X_{t-1} + ε_t,   {ε_t} ∼ NID(0, 0.1²)

Suppose that you invested in a single IBM stock at time t − 1 for a price of x_{t-1} = $112.60. If there are no transaction costs,
(a) What is the probability of selling with a profit at time t?
(b) What is the probability of selling with a loss at time t?
3 Exercises of Week Three
3.1 Consider the AR(1) model

X_t = β X_{t-1} + ε_t,   {ε_t} ∼ NID(0, 1).

(a) State the conditional distribution of X₂ | X₁, X₃ | X₂, and X₄ | X₃.
(b) Suppose that β = 1 and you observe a sample X₁, ..., X₄ with the following values:

Time   t=1    t=2    t=3    t=4
X_t    0.71   2.25   1.59   −0.13

Calculate the following conditional densities: f(X₂ | X₁), f(X₃ | X₂), and f(X₄ | X₃).
(c) Calculate the above conditional densities under the assumption that β = 0.5.
(d) Use the conditional density values computed above to answer the following questions (a numerical sketch follows this exercise):
i. What is the value of the likelihood function at β = 1?
ii. What is the value of the likelihood function at β = 0.5?
iii. What is the log likelihood value at β = 1?
iv. What is the log likelihood value at β = 0.5?
v. Which value for β do you think is most likely given the data? What does that tell you about the parameter estimate from a maximum likelihood perspective?
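A numerical sketch of (b)-(d) in Python, assuming NumPy; each conditional density is the N(βx_{t-1}, 1) density evaluated at x_t:

    import numpy as np

    x = [0.71, 2.25, 1.59, -0.13]

    def cond_loglik(beta):
        # sum over t = 2..4 of log f(x_t | x_{t-1}) under X_t = beta*X_{t-1} + eps_t
        ll = 0.0
        for t in range(1, 4):
            e = x[t] - beta * x[t - 1]
            ll += -0.5 * np.log(2 * np.pi) - 0.5 * e**2
        return ll

    for beta in (1.0, 0.5):
        print(beta, np.exp(cond_loglik(beta)), cond_loglik(beta))  # likelihood, loglikelihood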
3.2 * Derive the ML estimator of σ² for the stationary Gaussian AR(1) when φ is known.
3.3 * Are the estimates of φ and σ² related?
3.4 * Let X₁, ..., X_T be a subset of a time-series generated by a stationary Gaussian MA(1) model,

X_t = ε_t + θ ε_{t-1},   {ε_t} ∼ NID(0, σ_ε²).

Use the method of prediction error decomposition to derive the conditional likelihood for the MA(1) model above.
3.5 * Let X₁, ..., X_T be a subset of a time-series generated by the following stationary AR(2) model,

X_t = φ₁ X_{t-1} + φ₂ X_{t-2} + ε_t,   {ε_t} ∼ NID(0, σ_ε²).

(a) Give an expression for the likelihood function using the joint Gaussianity of X₁, ..., X_T.
(b) Give an expression for the conditional loglikelihood function and define the ML estimator of the ARMA parameters.
(c) Give the first-order optimality conditions for the conditional loglikelihood function with respect to φ₁ and φ₂.
3.6 * Show that the LS and ML estimators are the same for the Gaussian AR(1) model.
3.7 * Derive the LS estimator of α and φ for the AR(1) model with intercept.
3.8 * How would you estimate the innovation variance σ² in the least squares setting?
3.9 Produce 1, 2 and h-step ahead forecasts for the AR(2) with iid innovations and derive the corresponding forecast errors and their respective variances.
3.10 * Produce 1, 2 and h-step ahead forecasts for the MA(2) with iid innovations ε_t, assuming ε₀ and ε₋₁ are given, and derive the corresponding forecast errors and their respective variances.
3.11 * Produce 1, 2 and h-step ahead forecasts for the ARMA(1,1) with iid innovations ε_t, assuming ε₀ is given, and derive the corresponding forecast errors and their respective variances.
3.12 * Show that the variance of the h-step ahead forecast error of the stable ARMA(1,1) with iid innovations converges to the unconditional variance of the time-series:

Var(e_{T+h}) → σ² (1 + 2φθ + θ²)/(1 − φ²),   h → ∞
3.13 * Explain the intuition behind the limit results derived in the previous question.
3.14 Consider an AR(1) with Gaussian iid innovations. Derive the point forecasts and respective 95% confidence bounds for time T + 1, T + 2, T + 3 and T + h, for the following AR(1) processes (a numerical sketch follows the list):
(a) * X_T = 1.3, φ = 0.9 and σ² = 0.1
(b) * X_T = 1.3, φ = 0.5 and σ² = 0.1
(c) X_T = 1.3, φ = 1 and σ² = 0.1
(d) X_T = 1.3, φ = 0.9 and σ² = 1
(e) X_T = −0.5, α = −0.2, φ = 0.8, σ² = 0.01
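A minimal sketch for part (a) in Python: for a zero-intercept AR(1), the h-step point forecast is φ^h X_T and the forecast error variance is σ²(1 + φ² + ... + φ^{2(h-1)}):

    import numpy as np

    X_T, phi, sigma2 = 1.3, 0.9, 0.1   # part (a); change these for (b)-(d)
    for h in (1, 2, 3, 10):            # (e) additionally has an intercept alpha
        point = phi**h * X_T
        se = np.sqrt(sigma2 * sum(phi**(2 * j) for j in range(h)))
        print(f"h={h}: {point:.3f} +/- {1.96 * se:.3f}")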
3.15 Redo the previous exercise with 90% and 99% confidence bounds.
3.16 For each of the above models describe how the bounds behave as the forecasting horizon diverges to infinity (i.e. as h → ∞).
3.17 Assume that the last three observed values of the sample were X_T = 1.3, X_{T-1} = 1.3 and X_{T-2} = 1.3. Suppose that the innovations are Gaussian. Derive the point forecasts and respective 95% confidence bounds for time T + 1 and T + 2, for the following AR(3) processes with iid innovations:
(a) φ₁ = 0.7, φ₂ = 0, φ₃ = −0.3 and σ² = 0.1
(b) φ₁ = 0.7, φ₂ = 0, φ₃ = −0.7 and σ² = 0.1
(c) α = 0.5, φ₁ = 0.7, φ₂ = 0, φ₃ = −0.3 and σ² = 0.1
3.18 * Derive the IRF with origin at x = E(X_t) = 0 generated by an impulse of magnitude ε at time t = s for the AR(2) model with iid innovations.
3.19 Derive the IRF with origin x generated by an impulse of magnitude ε at time t = s for the AR(3) model with iid innovations.
3.20 * Suppose that the dynamics of the quarterly growth rate of the aggregate industrial production (AIP) in the Netherlands are well described by the following AR(4) model:

X_t = 0.008 + 0.92 X_{t-1} − 0.14 X_{t-4} + ε_t

where {ε_t} ∼ NID(0, 0.1).
(a) Calculate the average growth rate E(X_t) = μ_X of the AIP in the Netherlands.
(b) Calculate the IRF with origin at μ_X generated by a negative shock of −5% occurring at time t = s.
4 Exercises of Week Four
4.1 * Consider an ADL(1,1) model for Y_t, with exogenous variable X_t.
(a) Show that the long run equilibrium Ȳ of {Y_t} is just the expected value of Y_t given that X_t is fixed at X_t = X̄ for all t.
(b) Show that when X̄ = E(X_t), then the long run equilibrium Ȳ is just the unconditional expectation of Y_t.
4.2 * Let Y_t and X_t be related according to

Y_t = 1.2 + 0.95 Y_{t-1} + 1.5 X_t − 0.3 X_{t-1} + ε_t

Calculate the long-run equilibrium of {Y_t} for X̄ = 2.3.
4.3 * (Okun's Law) Let the unemployment rate Y_t and the quarterly GDP growth rate X_t in the Netherlands be related according to

Y_t = 0.025 + 0.87 Y_{t-1} − 0.5 X_t − 0.1 X_{t-1} + ε_t

and that GDP dynamics are well described by the following AR(2)

X_t = 0.0047 + 0.71 X_{t-1} + 0.13 X_{t-2} + u_t

Suppose the last observed values of Y_t and X_t were given by

Time   T−2      T−1      T
Y_t    0.064    0.059    0.087
X_t    0.021    0.019    −0.018

Assume iid innovations. Produce forecasts for the unemployment rate for the next two years T + 1, T + 2, T + 3, ..., T + 8. (A forecasting sketch follows.)
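A minimal forecasting sketch in Python, iterating both equations forward with future innovations set to zero (their conditional mean):

    # Forecast Y_{T+h} and X_{T+h} from the last observed values.
    y = {0: 0.087}               # Y_T
    x = {-1: 0.019, 0: -0.018}   # X_{T-1}, X_T
    for h in range(1, 9):
        x[h] = 0.0047 + 0.71 * x[h - 1] + 0.13 * x[h - 2]
        y[h] = 0.025 + 0.87 * y[h - 1] - 0.5 * x[h] - 0.1 * x[h - 1]
        print(f"T+{h}: X = {x[h]:.4f}, Y = {y[h]:.4f}")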
4.4 * Consider an ADL(1,1). Show that the long-run multiplier is the sum of all h-step ahead multipliers.
4.5 Derive the long-run relation, long-run and short-run multipliers of the ADL(2,1), ADL(1,2) and ADL(2,2).
4.6 Consider again the ADL model

Y_t = 1.2 + 0.95 Y_{t-1} + 1.5 X_t − 0.3 X_{t-1} + ε_t

(a) Calculate the short-run multiplier and explain its meaning.
(b) Calculate the h-step ahead multipliers for h = 1, 2, 3, 4, 5 and explain their meaning.
(c) Give the long-run equilibrium between {Y_t} and {X_t} and explain its meaning.
(d) Calculate the long-run multiplier and explain its meaning.
4.7 Consider the following ADL-AR triangular system with iid innovations

Y_t = 0.9 Y_{t-1} + X_t + ε_t
X_t = 0.9 X_{t-1} + u_t

Determine the IRF from t = s − 2 to t = s + 3 for both {Y_t} and {X_t} of a unit rise in u_t at time t = s. Can you derive an expression for the IRF for t = s + h for general h ∈ ℤ?
Hint: first determine the IRF from t = s − 2 to t = s + 3 in terms of general coefficients φ, β, and γ and then fill in the specific values above. Next, see if you can find a pattern as t increases.
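A simulation sketch in Python: with all innovations set to zero the baseline path is identically zero, so the IRF equals the path generated by a single unit shock u_s = 1:

    # IRFs of X and Y to a unit rise in u at time s.
    H = 5
    x, y = [0.0], [0.0]   # values before s; the IRF is 0 for t < s
    for h in range(0, H + 1):
        u = 1.0 if h == 0 else 0.0
        x.append(0.9 * x[-1] + u)
        y.append(0.9 * y[-1] + x[-1])
    print(x[1:])  # IRF of X at s, s+1, ...: 0.9^h
    print(y[1:])  # IRF of Y at s, s+1, ...: (h+1) * 0.9^h (both roots equal 0.9)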
4.8 Consider the following ADL-AR triangular system with iid innovations

Y_t = 2.4 − 0.1 Y_{t-1} + 0.91 Y_{t-2} + 2.1 X_t + 0.82 X_{t-4} + ε_t
X_t = 0.1 + 0.36 X_{t-1} + 0.52 X_{t-3} + u_t

Determine the IRF from t = s − 2 to t = s + 3 for both {Y_t} and {X_t} of a unit rise in u_t at time t = s. From which time point does the term X_{t-4} in the ADL model start to impact the IRF?
4.9 * (Okun's Law Revisited) Consider again the relation between the unemployment rate Y_t and the GDP growth rate X_t in the Netherlands

Y_t = 0.02 + 0.87 Y_{t-1} − 0.5 X_t − 0.1 X_{t-1} + ε_t

(a) Please re-write the ADL(1,1) model in Error Correction form and interpret the coefficients from an economic perspective.
(b) Please comment on the following statement of the ministry of social affairs: The new economic policies implemented by this government are expected to raise the growth rate of GDP from 2% to 3% and reduce the unemployment from 9% to 5%.
4.10 Let {Y_t} be generated by an error correction model with error correction strength given by γ = −0.5. Show that {Y_t} is weakly stationary.
4.11 * Suppose that {X_t} is generated by an MA(2),

X_t = ε_t + θ₁ ε_{t-1} + θ₂ ε_{t-2}.

Suppose further that you observe the true error term {ε_t}. Show that attempting to estimate an MA(1) by regressing X_t on ε_{t-1} will result in autocorrelation in the regression's residuals.
4.12 * Let the dynamics of {Y_t} be given by an ADL(1,0) model

Y_t = α + φ Y_{t-1} + β X_t + ε_t

Suppose that we regress Y_t on X_t,

Y_t = α + β X_t + u_t

The static regression model is misspecified and u_t = φ Y_{t-1} + ε_t is correlated with its lag u_{t-1} = φ Y_{t-2} + ε_{t-1}.
(a) * Show that cov(u_t, u_{t-1}) ≠ 0.
(b) * Show that the same problem occurs when {Y_t} is generated by an ADL(0,1). What about an ADL(p, q)?
4.13 Let {Y_t} be generated by an ADL(1,1) with invertible autoregressive polynomial.
(a) Calculate the mean of the process and relate your result to the long-run equilibrium of the process.
(b) What is the contemporaneous effect on y_t of a unit impulse on x_t at time t?
(c) What is the long-run effect on y_t of a unit impulse on x_t at time t?
(d) Would you say that a change in X causes a change in Y?
5 Exercises of Week Five
5.1 * Consider a process {X_t} generated according to

X_t = α + φ X_{t-1} + δt + ε_t

where {ε_t} ∼ WN(0, σ²). Impose restrictions on the parameters α, φ and δ in order to generate a process {X_t} that is:
(a) White noise
(b) Stationary with mean zero
(c) Stationary with nonzero mean
(d) Trend-stationary
(e) A random walk
(f) A random-walk with drift (positive and negative)
5.2 * Consider the following realized paths of different time series generated by models (a)-(d). Which path corresponds to which model? (A simulation sketch follows the list.)

[Figure with four realized sample paths.]

(a) x_t = x_{t-1} + ε_t
(b) x_t = 0.25 + x_{t-1} + ε_t
(c) x_t = 0.25 + 0.85 x_{t-1} + ε_t
(d) x_t = 0.25 + 0.75 x_{t-1} − 0.25 x_{t-10} + ε_t
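Since the paths are best inspected visually, here is a minimal Python simulation of the four models, assuming NumPy (plotting the four arrays, e.g. with matplotlib, reproduces the qualitative shapes):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 200
    eps = rng.standard_normal(T)
    a = np.zeros(T); b = np.zeros(T); c = np.zeros(T); d = np.zeros(T)
    for t in range(1, T):
        a[t] = a[t-1] + eps[t]                # (a) random walk
        b[t] = 0.25 + b[t-1] + eps[t]         # (b) random walk with drift
        c[t] = 0.25 + 0.85 * c[t-1] + eps[t]  # (c) stable AR(1)
        d[t] = 0.25 + 0.75 * d[t-1] - 0.25 * (d[t-10] if t >= 10 else 0.0) + eps[t]  # (d)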
5.3 * Consider the following regression estimation results obtained for the traded price of Shell stocks:

ΔX_t = −0.0011 X_{t-1} + u_t
        (0.0013)

(standard error in parentheses). Can you reject the unit root hypothesis? Show that under the unit root hypothesis the forecast for the Shell stock price is 'flat' with linearly increasing forecast error variance.
5.4 * Consider the following processes

Y_t = 0.27 Y_{t-1} − 0.13 Y_{t-3} + 0.86 Y_{t-7} + ε_t
Y_t = 0.31 + 0.97 Y_{t-1} + 0.03 Y_{t-2} + ε_t
Y_t = 0.51 + 1.97t + 0.86 Y_{t-1} + ε_t

Which of these processes are:
(a) Weakly stationary?
(b) Trend stationary?
(c) Unit root non-stationary?
(d) Unit root non-stationary with drift?
5.5 Consider the following ADF regression estimation results obtained from two different time-series:

ΔX_t = 0.68 − 0.083 X_{t-1} + 0.35 ΔX_{t-1} + u_t
      (0.23)  (0.035)        (0.092)

ΔX_t = −0.083 X_{t-1} + 0.35 ΔX_{t-1} + u_t
       (0.035)          (0.092)

(standard errors in parentheses).
(a) Which of these time-series seem to be stationary? Are you sure?
(b) How sensitive are your conclusions to the choice of confidence level?
5.6 Suppose that X₁ = 0. Present the h-step ahead point forecast for X_{1+h}, h = 1, ..., 10, assuming that {X_t} is generated according to:
(a) X_t = X_{t-1} + ε_t (random walk)
(b) X_t = t + 0.7 X_{t-1} + ε_t (trend stationary)
(c) X_t = 1 + X_{t-1} + ε_t (random walk with drift)
5.7 Consider the random-walk with drift defined in the previous question. Can you show that this process is I(1)? How about the trend stationary process: is it I(1)?
5.8 * Think about the mean-reverting properties of I(0) and I(1) processes. What would be
the practical implications of the following statements:
(a) Global temperatures on planet earth are I(0)
(b) The growth rate of GDP in the Netherlands is I(1)
(c) The price series of Microsoft stocks is I(0)
(d) Sea level is I(0)
5.9 Consider the following statement: We should never regress two I(1) variables since the result will be spurious. Is this statement true or false? Justify your answer.
6 Exercises of Week Six
6.1 * Show that any linear combination of independent I(0) sequences is I(0).
6.2 * Show that any linear combination of independent I(1) and I(0) sequences is not I(0).
6.3 * Show that any linear combination of independent I(1) sequences is I(1).
6.4 Consider the following statement: I(0) variables are always cointegrated because a linear combination of I(0) variables is I(0). Is this statement true or false? Justify your answer.
6.5 * Consider the following processes

Z_t = 1.83 Z_{t-1} − 0.92 Z_{t-2} + ε_t,   {ε_t} ∼ WN(0, σ²)
W_t = 0.97 Z_{t-1} + 0.03 W_{t-2} + v_t,   {v_t} ∼ WN(0, σ_v²)
R_t = R_{t-1} + u_t,   {u_t} ∼ WN(0, σ_u²)
Y_t = 0.2 Z_t + 9.1 W_{t-1}
X_t = −Y_t + 2 R_t
H_t = 3 Z_t − v_t + u_t
Q_t = 3 Z_t − v_t u_t

Which of the above sequences is I(0)? Which time-series are cointegrated?
6.6 * Think about the properties of cointegrated time series. What would be the practical implications of the following statements:
(a) Aggregate consumption and GDP are not cointegrated
(b) Global temperatures and CO2 emissions are not cointegrated
(c) The GDP of the Netherlands and Belgium are not cointegrated
(d) Oil prices and electricity prices are not cointegrated
6.7 * Consider the following regression results

Y_t = 0.68 − 0.083 Y_{t-1} + 0.35 X_t + u_t
     (0.23) (0.035)         (0.012)

Y_t = 0.68 − 0.083 Y_{t-1} + 0.35 X_{t-1} + u_t
     (0.23) (0.001)         (0.492)

Y_t = 0.68 − 0.083 Y_{t-1} + 0.35 X_{t-1} + u_t
     (0.23) (0.035)         (0.17)

Y_t = 0.68 − 0.783 X_{t-1} + 0.35 X_{t-3} + u_t
     (0.23) (0.981)         (0.002)

(standard errors in parentheses). In which cases does {X_t} Granger cause {Y_t}? Are you sure?
7 Solutions
Solutions Week 1
1.3 [Venn diagram as in the exercise. Matching the examples below: region (1) is weakly stationary only, (2) weakly and strictly stationary and iid, (3) weakly and strictly stationary but not iid, (4) strictly stationary and iid but not weakly stationary, (5) strictly stationary only.]
(1) We must make sure that the unconditional mean, variance, and covariance stay constant over time, but the rest of the distribution changes. For example, for some t* ∈ ℤ we can let {X_t} be an independent sequence such that X_t ∼ N(μ, σ²) for t ≤ t*, and X_t ∼ Bernoulli(μ, σ², p) for t > t*, that is, X_t is a Bernoulli variable with mean μ, variance σ² and probability parameter p for every t > t*.
Remark: Recall that the Bernoulli distribution has mean p and variance p(1 − p). Therefore, if Z ∼ Bernoulli(p) is a standard Bernoulli variable, we can define

X = μ + σ/√(p(1 − p)) · (Z − p),

so that E(X) = μ and Var(X) = σ². Of course we can do the same by replacing the Bernoulli distribution with another distribution. Can you do this for the Student's t distribution?
(2) {X_t} IID with X_t ∼ N(μ, σ²) for every t
(2)
{
X
t
}
IID with
X
t
∼
N(
μ, σ
2
) for every
t
(3) Define the two-dimensional vector
Z
t
= (
X
t
, X
t
-
1
)
0
∼
N(
μ
,
Σ
) for every
t
, where
μ
= [0
,
0]
and
Σ
=
"
1
0
.
5
0
.
5
1
#
In this case, the process
{
Z
t
}
is not independent because given
Z
t
, the second element
of
Z
t
+1
is known.
(4)
{
X
t
}
is iid with
X
t
∼
Cauchy for every
t
(note that the mean does not exist for the
Cauchy distribution!)
19
(5) (X_t, X_{t-1}) ∼ t(μ, Σ, λ), which is the multivariate t distribution with mean μ, variance matrix Σ, and λ the degrees of freedom, for every t, and with μ = [0, 0]′, λ < 2, and

Σ = [ 1    0.5 ]
    [ 0.5  1   ]

Remark: Recall that a variable Z following the Student's t distribution with λ degrees of freedom only has finite moments E|Z|ⁿ < ∞ for n < λ, so for λ < 2 the variance does not exist!
1.6 The ACF is defined by

ρ_X(t + h, t) = Cov(X_{t+h}, X_t) / √(Var(X_{t+h}) Var(X_t))

If {X_t} is weakly/strictly stationary, then, for every (t, h),

Cov(X_{t+h}, X_t) = γ_X(h)   and   Var(X_{t+h}) = Var(X_t) = σ_X²

Therefore, we can re-write

ρ_X(t + h, t) = γ_X(h)/√(σ_X² σ_X²) = γ_X(h)/σ_X² = γ_X(h)/γ_X(0) = ρ_X(h)
1.7 A white noise (WN) process, denoted {X_t} ∼ WN(0, σ_X²), is a sequence of uncorrelated random variables with zero mean and constant variance σ_X².
A random walk (RW) process {X_t} is a time-series defined as X_t = ε₁ + ε₂ + ... + ε_t for all t ∈ ℕ, where {ε_t} is a white noise process, {ε_t} ∼ WN(0, σ_ε²).
[Extended Venn diagram with nine regions (1)-(9) over the five sets: weakly stationary, strictly stationary, IID, white noise, random walk.]
1.9 Reading the diagram region by region, each region satisfies exactly the following properties (the examples in 1.10 below match this numbering):
(1) WS
(2) WS, WN
(3) WS, SS, WN
(4) WS, SS, IID, WN
(5) WS, SS
(6) WS, SS, IID
(7) SS, IID
(8) SS
(9) RW
Legend:
WS: weakly stationary
SS: strictly stationary
IID: identically independently distributed
WN: white noise
RW: random walk
1.10
(1) {X_t} iid with X_t ∼ N(μ, σ²) for every t ≤ t* and X_t ∼ t(μ, σ²) for every t > t*, with μ ≠ 0.
(2) {X_t} iid with X_t ∼ N(μ, σ²) for every t ≤ t* and X_t ∼ t(μ, σ²) for every t > t*, with μ = 0.
(3) {X_t} is defined as X_t = P_t Y − (1 − P_t)Y, where {P_t} is an iid sequence of Bernoulli random variables such that P_t = 1 with probability 0.5 and P_t = 0 with probability 0.5, and Y ∼ N(0, 1). Note that Y does not have time index t.
(4) {X_t} iid with X_t ∼ N(0, σ²) for every t
(5) (X_t, X_{t-1}) ∼ N(μ, Σ) for every t, where μ = [1, 1]′ and

Σ = [ 1    0.5 ]
    [ 0.5  1   ]
(6) {X_t} iid with X_t ∼ N(μ, σ²) for every t with μ ≠ 0
(7) {X_t} iid with X_t ∼ t(μ, σ², λ) for every t with λ < 2.
(8) (X_t, X_{t-1}) ∼ t(μ, Σ, λ) for every t, where λ < 2, μ = [1, 1]′ and

Σ = [ 1    0.5 ]
    [ 0.5  1   ]

(9) {X_t} is generated according to X_{t+1} = X_t + ε_t where {ε_t} ∼ WN(0, σ²).
1.11 The random walk starting at time t = 1 is given by

X_t = ε₁ + ε₂ + ... + ε_t,   {ε_t} ∼ WN(0, σ_ε²).

Recall that Var(X_t) = t σ_ε² and Cov(X_t, X_{t-h}) = (t − h) σ_ε². Therefore,

ρ_X(t, t − h) = γ_X(t, t − h)/√(Var(X_t) Var(X_{t-h})) = (t − h) σ_ε²/√((t σ_ε²)((t − h) σ_ε²)) = √(t − h)/√t

The correlations between elements of {X_t} change over time!
1.12 The conditional mean of the random walk is given by

E(X_t | X_{t-1}) = E(X_{t-1} + ε_t | X_{t-1})   (using definition of random walk)
= E(X_{t-1} | X_{t-1}) + E(ε_t | X_{t-1})   (linearity of conditional expectation)
= X_{t-1} + E(ε_t)   (definition of RW implies ε_t ⊥ X_{t-1})
= X_{t-1}.   (ε_t is white noise so E(ε_t) = 0)

The conditional mean of X_t is X_{t-1}. This is very different from the unconditional mean of X_t, which is always zero.
The conditional variance of the random walk is given by

Var(X_t | X_{t-1}) = Var(X_{t-1} + ε_t | X_{t-1})   (using definition of random walk)
= Var(X_{t-1} | X_{t-1}) + Var(ε_t | X_{t-1})   (definition of RW implies ε_t ⊥ X_{t-1})
= Var(X_{t-1} | X_{t-1}) + Var(ε_t)   (definition of RW implies ε_t ⊥ X_{t-1})
= 0 + Var(ε_t)   (X_{t-1} is a constant given the conditioning)
= σ_ε².   ({ε_t} is white noise with variance σ_ε²)

The conditional variance of X_t is fixed at σ_ε². This is very different from the unconditional variance of X_t, which grows over time and is given by t σ_ε².
1.16 Since |X_t| ≤ max{1, X_t²} ≤ 1 + X_t², we have E(X_t²) < ∞ ⇒ E(|X_t|) < ∞.
Recall that strict stationarity implies identically distributed. Since

Var(X_t) = E(X_t²) − E(X_t)² < ∞,

we can therefore conclude that expectation and variance are finite and constant.
Finally, we use the Cauchy–Schwarz inequality with inner product ⟨Z_t, Z_{t-h}⟩ = E(Z_t Z_{t-h}) and Z_t = X_t − E(X_t) to show that

Cov(X_t, X_{t-h}) = E(Z_t Z_{t-h}) ≤ √(E(Z_t²) E(Z²_{t-h})) = √(Var(X_t) Var(X_{t-h})) < ∞,

where the first and second equalities follow from the definition of the covariance and variance, respectively. Lastly, by strict stationarity this covariance holds for all times t.
1.17 (Brockwell and Davies) Let {Z_t} be a sequence of independent normal random variables, each with mean 0 and variance σ², and let a, b and c be constants. Which of the following processes are weakly stationary? For each stationary process derive the mean and the autocovariance function:
(a) X_t = a + bZ_t + cZ_{t-2}

E(X_t) = E(a + bZ_t + cZ_{t-2}) = E(a) + E(bZ_t) + E(cZ_{t-2}) = a + b·0 + c·0 = a.

Var(X_t) = Var(a + bZ_t + cZ_{t-2}) = Var(a) + Var(bZ_t) + Var(cZ_{t-2}) = 0 + b² Var(Z_t) + c² Var(Z_{t-2}) = (b² + c²) σ².

Cov(X_t, X_{t-h}) = Cov(a + bZ_t + cZ_{t-2}, a + bZ_{t-h} + cZ_{t-h-2})
= b² Cov(Z_t, Z_{t-h}) + bc Cov(Z_t, Z_{t-h-2}) + cb Cov(Z_{t-2}, Z_{t-h}) + c² Cov(Z_{t-2}, Z_{t-h-2})

Now since Cov(Z_t, Z_{t-h}) = 0 ∀ h ≠ 0, we have

γ(0) = (b² + c²) σ²
γ(1) = 0
γ(2) = bc σ²
γ(h) = 0 ∀ h > 2.

Since expectation, variance and covariance are all finite and constant over time we conclude that {X_t} is weakly stationary.
(b) X_t = a + bZ₀

E(X_t) = E(a + bZ₀) = a + b E(Z₀) = a.
Var(X_t) = Var(a + bZ₀) = b² σ².
Cov(X_t, X_{t-h}) = Cov(a + bZ₀, a + bZ₀) = b² σ²

Since expectation, variance and covariance are all finite and constant over time we conclude that {X_t} is weakly stationary.
(c) X_t = Z_t Z_{t-1}

E(X_t) = E(Z_t Z_{t-1}) = 0.
Var(Z_t Z_{t-1}) = E(Z_t² Z²_{t-1}) − E(Z_t Z_{t-1})² = Var(Z_t) Var(Z_{t-1}) = σ⁴
Cov(X_t, X_{t-h}) = Cov(Z_t Z_{t-1}, Z_{t-h} Z_{t-1-h}) = 0 if h ≠ 0.

Since expectation, variance and covariance are all finite and constant over time we conclude that {X_t} is weakly stationary.
1.18 Let {W_t} := {X_t + Y_t}. Then,

E(W_t) = E(X_t + Y_t) = E(X_t) + E(Y_t) = μ_X + μ_Y.
Var(W_t) = Var(X_t + Y_t) = Var(X_t) + Var(Y_t) = σ_X² + σ_Y²
Cov(W_t, W_{t-h}) = Cov(X_t + Y_t, X_{t-h} + Y_{t-h})
= Cov(X_t, X_{t-h}) + Cov(X_t, Y_{t-h}) + Cov(Y_t, X_{t-h}) + Cov(Y_t, Y_{t-h})
= γ_X(h) + 0 + 0 + γ_Y(h) = γ_X(h) + γ_Y(h).

Since the mean, variance, and covariance are all finite and constant over time we conclude that {W_t} is weakly stationary.
Solutions Week 2
2.1
Time   t=1    t=2    t=3    t=4      t=5      t=6     t=7      t=8     t=9
X_t    0.31   1.25   0.79   0.611    0.509    0.468   0.392    0.472   0.445
ε_t    ·      ·      ·      −0.10    −0.04    0.01    −0.03    0.12    0.02
2.5 Since X_t = α + φ X_{t-1} + ε_t, we have:

Var(X_t) = Var(α + φ X_{t-1} + ε_t) = φ² Var(X_{t-1}) + σ²  ⇒  Var(X_t) = σ²/(1 − φ²)

Cov(X_t, X_{t-h}) = Cov(α + φ X_{t-1} + ε_t, X_{t-h}) = φ Cov(X_{t-1}, X_{t-h})  ⇒  Cov(X_t, X_{t-h}) = φ^h σ²/(1 − φ²)

ρ(h) = γ(h)/γ(0) = φ^h.
2.7 We have

X_t = α + φ X_{t-1} + ε_t
(1 − φL) X_t = α + ε_t
X_t = (1 − φL)⁻¹ (α + ε_t) = Σ_{j=0}^∞ (φL)^j (α + ε_t) = Σ_{j=0}^∞ φ^j α + Σ_{j=0}^∞ φ^j ε_{t-j}

E(X_t) = E[Σ_{j=0}^∞ φ^j α + Σ_{j=0}^∞ φ^j ε_{t-j}] = Σ_{j=0}^∞ φ^j α = α/(1 − φ).
2.8 The autocovariance function is given by

γ(1) = Cov(X_t, X_{t-1}) = Cov(φ₁ X_{t-1} + φ₂ X_{t-2} + ε_t, X_{t-1}) = φ₁ γ(0) + φ₂ γ(1)

and hence,

γ(1) = φ₁/(1 − φ₂) γ(0)

γ(2) = Cov(X_t, X_{t-2}) = Cov(φ₁ X_{t-1} + φ₂ X_{t-2} + ε_t, X_{t-2}) = φ₁ γ(1) + φ₂ γ(0)

and hence,

γ(2) = (φ₁²/(1 − φ₂) + φ₂) γ(0).

γ(h) = Cov(X_t, X_{t-h}) = Cov(φ₁ X_{t-1} + φ₂ X_{t-2} + ε_t, X_{t-h}) = φ₁ γ(h − 1) + φ₂ γ(h − 2),   h ≥ 2

Since we have that,

γ(1) = φ₁/(1 − φ₂) γ(0)
γ(2) = φ₁ γ(1) + φ₂ γ(0)
...
The autocorrelation function is given by

ρ(1) = γ(1)/γ(0) = φ₁/(1 − φ₂)
ρ(2) = (φ₁ γ(1) + φ₂ γ(0))/γ(0) = φ₁²/(1 − φ₂) + φ₂
...
2.11 This statement is true. The solution to the exercise above reveals that the expressions for the variance and autocorrelation do not depend on α.
2.12 The autocovariance function is given by:

γ(1) = Cov(X_t, X_{t-1}) = Cov(φ₁ X_{t-1} + ε_t + θ₁ ε_{t-1}, X_{t-1})
= Cov(φ₁ X_{t-1}, X_{t-1}) + Cov(ε_t, X_{t-1}) + Cov(θ₁ ε_{t-1}, X_{t-1})
= φ₁ γ(0) + θ₁ σ² = φ₁ (1 + θ₁² + 2φ₁θ₁)/(1 − φ₁²) σ² + θ₁ σ²

γ(2) = Cov(X_t, X_{t-2}) = Cov(φ₁ X_{t-1} + ε_t + θ₁ ε_{t-1}, X_{t-2})
= Cov(φ₁ X_{t-1}, X_{t-2}) + Cov(ε_t, X_{t-2}) + Cov(θ₁ ε_{t-1}, X_{t-2})
= φ₁ γ(1).

γ(h) = Cov(X_t, X_{t-h}) = φ₁ γ(h − 1),   h ≥ 2.

From the above it follows that

γ(1) = φ₁ γ(0) + θ₁ σ² = φ₁ (1 + θ₁² + 2φ₁θ₁)/(1 − φ₁²) σ² + θ₁ σ²
γ(h) = φ₁ γ(h − 1),   h ≥ 2.

The autocorrelation function is given by

ρ(1) = φ₁ + θ₁ (1 − φ₁²)/(1 + θ₁² + 2φ₁θ₁) = (φ₁ + θ₁)(1 + φ₁θ₁)/(1 + θ₁² + 2φ₁θ₁)
ρ(h) = φ₁ ρ(h − 1),   h ≥ 2.
2.16
(a) Since the sequence is iid, we know that all the random variables of the sequence {X_t} are independent and have the same distribution. We also know that E(X_t) = 0 ∀ t. Hence, the conditional expectation is given by

E(X_t | x_{t-1}, x_{t-2}, ...) = E(X_t) = 0.

Note: The conditioning drops in the first equality because the X_t's are all independent.
(b) Our random walk satisfies X_t = X_{t-1} + ε_t with {ε_t} ∼ iid(0, σ²) (note: the mean of the innovations is zero and the variance finite because the innovations must also be white noise). Hence, the conditional expectation is given by

E(X_t | x_{t-1}, x_{t-2}, ...) = E(X_{t-1} + ε_t | x_{t-1})
= E(X_{t-1} | x_{t-1}) + E(ε_t | x_{t-1})
= x_{t-1} + E(ε_t)
= x_{t-1}.
Note: The first equality holds by the definition of the random walk. The second equality holds by linearity of the expectation. The conditioning in the third equality drops because ε_t is independent of past X_t's in the random walk with iid innovations. The expectation of ε_t is zero because the innovations are white noise.
(c) An AR(1) process with intercept satisfies X_t = α + φ X_{t-1} + ε_t with {ε_t} ∼ iid(0, σ²) (note: the mean of the innovations is zero and the variance finite because the innovations must also be white noise). Hence, the conditional expectation is given by

E(X_t | x_{t-1}, x_{t-2}, ...) = E(α + φ X_{t-1} + ε_t | x_{t-1})
= α + φ E(X_{t-1} | x_{t-1}) + E(ε_t | x_{t-1})
= α + φ x_{t-1} + E(ε_t)
= α + φ x_{t-1}

Note: The first equality holds by the definition of the AR(1). The second equality holds by linearity of the expectation. The conditioning in the third equality drops because ε_t is independent of past X_t's in the AR(1) process with iid innovations. The expectation of ε_t is zero because the innovations are white noise.

An AR(2) process with intercept satisfies X_t = α + φ₁ X_{t-1} + φ₂ X_{t-2} + ε_t with {ε_t} ∼ iid(0, σ²) (note: the mean of the innovations is zero and the variance finite because the innovations must also be white noise). Hence, the conditional expectation is given by

E(X_t | x_{t-1}, x_{t-2}, ...) = E(α + φ₁ X_{t-1} + φ₂ X_{t-2} + ε_t | x_{t-1}, x_{t-2}, ...)
= α + φ₁ E(X_{t-1} | x_{t-1}, x_{t-2}, ...) + φ₂ E(X_{t-2} | x_{t-1}, x_{t-2}, ...) + E(ε_t | x_{t-1}, x_{t-2}, ...)
= α + φ₁ x_{t-1} + φ₂ x_{t-2} + E(ε_t)
= α + φ₁ x_{t-1} + φ₂ x_{t-2}

Note: The first equality holds by the definition of the AR(2). The second equality holds by linearity of the expectation. The conditioning in the third equality drops because ε_t is independent of past X_t's in the AR(2) process with iid innovations. The expectation of ε_t is zero because the innovations are white noise.
(d) Note that the mean of the innovations is zero and the variance is finite because the innovations must also be white noise. We therefore have

E(X_t | x_{t-1}, x_{t-2}, ...) = E(α + φ₁ X_{t-1} + ... + φ_p X_{t-p} + ε_t | x_{t-1}, x_{t-2}, ...)
= α + φ₁ E(X_{t-1} | x_{t-1}, ..., x_{t-p}) + ... + φ_p E(X_{t-p} | x_{t-1}, ..., x_{t-p}) + E(ε_t)
= α + φ₁ x_{t-1} + ... + φ_p x_{t-p}
(e) An MA(1) with intercept satisfies X_t = α + ε_t + θ₁ ε_{t-1} with {ε_t} ∼ iid(0, σ²) (note: the mean of the innovations is zero and the variance finite because the innovations
must also be white noise). Hence, the conditional expectation is given by

E(X_t | x_{t-1}, ...) = E(α + ε_t + θ₁ ε_{t-1} | x_{t-1}, ...)
= α + E(ε_t | x_{t-1}, ...) + θ₁ E(ε_{t-1} | x_{t-1}, ...)
= α + E(ε_t) + θ₁ ε_{t-1}
= α + θ₁ ε_{t-1}

Note: The first equality holds by the definition of the MA(1). The second equality holds by linearity of the expectation. In the third equality, the conditioning on the first expectation drops because ε_t is independent of past X_t's in the MA(1) with iid innovations. Note also that ε_{t-1} is known when we condition on past X_t's. Finally, the expectation of ε_t is zero because the innovations are white noise.

An MA(2) with intercept satisfies X_t = α + ε_t + θ₁ ε_{t-1} + θ₂ ε_{t-2} with {ε_t} ∼ iid(0, σ²) (note: the mean of the innovations is zero and the variance finite because the innovations must also be white noise). Hence, the conditional expectation is given by

E(X_t | x_{t-1}, ...) = E(α + ε_t + θ₁ ε_{t-1} + θ₂ ε_{t-2} | x_{t-1}, ...)
= α + E(ε_t | x_{t-1}, ...) + θ₁ E(ε_{t-1} | x_{t-1}, ...) + θ₂ E(ε_{t-2} | x_{t-1}, ...)
= α + E(ε_t) + θ₁ ε_{t-1} + θ₂ ε_{t-2}
= α + θ₁ ε_{t-1} + θ₂ ε_{t-2}.

Note: The first equality holds by the definition of the MA(2). The second equality holds by linearity of the expectation. In the third equality, the conditioning on the first expectation drops because ε_t is independent of past X_t's in the MA(2) with iid innovations. Note also that both ε_{t-1} and ε_{t-2} are known when we condition on past X_t's. Finally, the expectation of ε_t is zero because the innovations are white noise.

An MA(q) with intercept satisfies X_t = α + ε_t + θ₁ ε_{t-1} + ... + θ_q ε_{t-q} with {ε_t} ∼ iid(0, σ²) (note: the mean of the innovations is zero and the variance finite because the innovations must also be white noise). Hence, the conditional expectation is given by

E(X_t | x_{t-1}, ...) = E(α + ε_t + θ₁ ε_{t-1} + ... + θ_q ε_{t-q} | x_{t-1}, ...)
= α + E(ε_t | x_{t-1}, ...) + θ₁ E(ε_{t-1} | x_{t-1}, ...) + ... + θ_q E(ε_{t-q} | x_{t-1}, ...)
= α + E(ε_t) + θ₁ ε_{t-1} + ... + θ_q ε_{t-q}
= α + θ₁ ε_{t-1} + ... + θ_q ε_{t-q}.

Note: The first equality holds by the definition of the MA(q). The second equality holds by linearity of the expectation. In the third equality, the conditioning on the first expectation drops because ε_t is independent of past X_t's in the MA(q) with iid innovations. Note also that the innovations ε_{t-1}, ..., ε_{t-q} are known when we condition on past X_t's. Finally, the expectation of ε_t is zero because the innovations are white noise.
(f) For the ARMA(1,1) model, we have

E(X_t | x_{t-1}, ...) = E(α + φ X_{t-1} + ε_t + θ₁ ε_{t-1} | x_{t-1}, ...)
= α + φ E(X_{t-1} | x_{t-1}, ...) + E(ε_t | x_{t-1}, ...) + θ₁ E(ε_{t-1} | x_{t-1}, ...)
= α + φ x_{t-1} + θ₁ ε_{t-1}.
(g) An ARMA(p,q) with intercept satisfies X_t = α + φ₁ X_{t-1} + ... + φ_p X_{t-p} + ε_t + θ₁ ε_{t-1} + ... + θ_q ε_{t-q} with {ε_t} ∼ WN(0, σ²). Hence, the conditional expectation is given by

E(X_t | x_{t-1}, ...) = E(α + φ₁ X_{t-1} + ... + φ_p X_{t-p} + ε_t + θ₁ ε_{t-1} + ... + θ_q ε_{t-q} | x_{t-1}, ...)
= α + φ₁ E(X_{t-1} | x_{t-1}, ...) + ... + φ_p E(X_{t-p} | x_{t-1}, ...) + E(ε_t | x_{t-1}, ...) + θ₁ E(ε_{t-1} | x_{t-1}, ...) + ... + θ_q E(ε_{t-q} | x_{t-1}, ...)
= α + φ₁ x_{t-1} + ... + φ_p x_{t-p} + E(ε_t) + θ₁ ε_{t-1} + ... + θ_q ε_{t-q}
= α + φ₁ x_{t-1} + ... + φ_p x_{t-p} + θ₁ ε_{t-1} + ... + θ_q ε_{t-q}

Note: The first equality holds by the definition of the ARMA(p,q). The second equality holds by linearity of the expectation. In the third equality, the conditioning on the expectation of ε_t drops because ε_t is independent of past X_t's in the ARMA(p,q) with iid innovations. Note also that the lagged data X_{t-1}, ..., X_{t-p} and the lagged innovations ε_{t-1}, ..., ε_{t-q} are known when we condition on past X_t's. Finally, the expectation of ε_t is zero because the innovations are white noise.
2.17 Since the process is given by X_t = Z_t + θ Z_{t-2}, {Z_t} ∼ WN(0, 1), we have

γ_X(0) = Var(X_t) = Var(Z_t + θ Z_{t-2}) = Var(Z_t) + θ² Var(Z_{t-2}) = 1 + θ²

Note: The second equality holds by the definition of the process. The third equality holds because the Z_t's are white noise, hence they are all uncorrelated, which implies that the variance of the sum is equal to the sum of the variances. Finally, the last equality holds since the Z_t's are white noise with unit variance and hence they all have the same variance of 1.

γ_X(1) = Cov(X_t, X_{t-1}) = Cov(Z_t + θ Z_{t-2}, Z_{t-1} + θ Z_{t-3})
= Cov(Z_t, Z_{t-1}) + θ Cov(Z_t, Z_{t-3}) + θ Cov(Z_{t-2}, Z_{t-1}) + θ² Cov(Z_{t-2}, Z_{t-3})
= 0 + 0 + 0 + 0 = 0

Note: The second equality holds by the definition of the process. The third equality holds by linearity of the covariance. Finally, the last equality holds since the Z_t's are white noise, and hence, they are all uncorrelated.

γ_X(2) = Cov(X_t, X_{t-2}) = Cov(Z_t + θ Z_{t-2}, Z_{t-2} + θ Z_{t-4})
= Cov(Z_t, Z_{t-2}) + θ Cov(Z_t, Z_{t-4}) + θ Cov(Z_{t-2}, Z_{t-2}) + θ² Cov(Z_{t-2}, Z_{t-4})
= 0 + 0 + θ + 0 = θ = 0.8
Note: The second equality holds by the definition of the process. The third equality holds by linearity of the covariance. The last equality holds since the Z_t's are white noise, and hence, the covariance between any lag is zero. Finally, note that the covariance Cov(Z_{t-2}, Z_{t-2}) is equal to the variance Var(Z_{t-2}) = 1.

γ_X(3) = Cov(X_t, X_{t-3}) = Cov(Z_t + θ Z_{t-2}, Z_{t-3} + θ Z_{t-5})
= Cov(Z_t, Z_{t-3}) + θ Cov(Z_t, Z_{t-5}) + θ Cov(Z_{t-2}, Z_{t-3}) + θ² Cov(Z_{t-2}, Z_{t-5})
= 0

Note: The second equality holds by the definition of the process. The third equality holds by linearity of the covariance. Finally, the last equality holds since the Z_t's are white noise, and hence, they are all uncorrelated.

γ_X(h) = 0 ∀ h ≥ 3

Since ρ_X(h) = γ_X(h)/γ_X(0) we have the ACF given by

ρ_X(0) = γ_X(0)/γ_X(0) = 1
ρ_X(1) = 0/γ_X(0) = 0
ρ_X(2) = θ/(1 + θ²) = 0.8/1.64 ≈ 0.49
ρ_X(3) = 0/γ_X(0) = 0
ρ_X(h) = 0 ∀ h ≥ 3
2.18 Note: An ARMA(p,q) φ(L)X_t = θ(L)ε_t is called:
• 'Causal' when the AR polynomial φ(L) is invertible and we can re-write the ARMA(p,q) as an MA(∞): X_t = [θ(L)/φ(L)] ε_t
• 'Invertible' when the MA polynomial θ(L) is invertible and the ARMA(p,q) can be written as an AR(∞): [φ(L)/θ(L)] X_t = ε_t.
(a) For X_t = −0.2 X_{t-1} + 0.48 X_{t-2} + Z_t,
• {X_t} is invertible because θ(z) = 1
• {X_t} is causal because φ(z) = 1 + 0.2z − 0.48z² = 0 has solutions z₁ = 5/3 and z₂ = −5/4. Just use the quadratic formula!
(b) For X_t = −1.9 X_{t-1} − 0.88 X_{t-2} + Z_t + 0.2 Z_{t-1} + 0.7 Z_{t-2},
• {X_t} is invertible because θ(z) = 1 + 0.2z + 0.7z² = 0 has solutions z₁ = −(1 − i√69)/7 and z₂ = −(1 + i√69)/7, with |z₁| = |z₂| = √70/7 > 1.
• {X_t} is not causal because φ(z) = 1 + 1.9z + 0.88z² = 0 has solutions z₁ = −10/11 and z₂ = −5/4.
(c) Since X_t = −0.6 X_{t-1} + Z_t + 1.2 Z_{t-1}, it follows that {X_t} is not invertible because θ(z) = 1 + 1.2z = 0 has solution z = −5/6. Furthermore, {X_t} is causal because φ(z) = 1 + 0.6z = 0 has solution z = −5/3.
(d) Since X_t = −1.8 X_{t-1} − 0.81 X_{t-2} + Z_t, it follows that {X_t} is invertible because θ(z) = 1. Furthermore, {X_t} is causal because φ(z) = 1 + 1.8z + 0.81z² = 0 has solution z₁ = z₂ = −10/9.
(e) For X_t = −1.6 X_{t-1} + Z_t − 0.4 Z_{t-1} + 0.04 Z_{t-2},
• {X_t} is invertible because θ(z) = 1 − 0.4z + 0.04z² has solutions z₁ = z₂ = 5.
• {X_t} is not causal because φ(z) = 1 + 1.6z = 0 has solution z = −5/8.
2.19

X_t = Z_t + θ Z_{t-1},   {Z_t} ∼ WN(0, σ²)   and   Y_t = Z̃_t + (1/θ) Z̃_{t-1},   {Z̃_t} ∼ WN(0, σ²θ²)

γ_X(h) = Cov(X_t, X_{t-h}) = Cov(Z_t + θ Z_{t-1}, Z_{t-h} + θ Z_{t-h-1})
= Cov(Z_t, Z_{t-h}) + θ Cov(Z_t, Z_{t-h-1}) + θ Cov(Z_{t-1}, Z_{t-h}) + θ² Cov(Z_{t-1}, Z_{t-h-1})

γ_X(0) = σ² + θ²σ² = σ²(1 + θ²),   γ_X(1) = σ²θ,   and γ_X(h) = 0 ∀ h ≥ 2

γ_Y(h) = Cov(Y_t, Y_{t-h}) = Cov(Z̃_t + (1/θ) Z̃_{t-1}, Z̃_{t-h} + (1/θ) Z̃_{t-h-1})
= Cov(Z̃_t, Z̃_{t-h}) + (1/θ) Cov(Z̃_t, Z̃_{t-h-1}) + (1/θ) Cov(Z̃_{t-1}, Z̃_{t-h}) + (1/θ²) Cov(Z̃_{t-1}, Z̃_{t-h-1})

γ_Y(0) = σ²θ² + (1/θ²) σ²θ² = σ²(1 + θ²),   γ_Y(1) = (1/θ) σ²θ² = σ²θ,   and γ_Y(h) = 0 ∀ h ≥ 2
2.20 If this AR(3) model describes the dynamics of the growth rate of GDP in the Netherlands, then this means that the average growth rate of GDP in the Netherlands is approximately 3.6%:

E(X_t) = 0.008/(1 − 0.92 + 0.14) ≈ 0.036.

Since the GDP growth rate is stable, the process is mean reverting, and hence, on average, the process X_t moves towards the mean rate of 3.6%. This is the reason why the conditional
probability of having a growth rate above 3% is larger than the conditional probability of entering a recession (X_t < 0), despite the fact that the last observed growth rate x_{t-1} was actually closer to zero than to 3%.
2.23
(a) Since X_t | X_{t-1} ∼ N(1.42 + 0.99 x_{t-1}, 0.1²) and x_{t-1} = 112.6, we have X_t | X_{t-1} ∼ N(112.894, 0.1²), hence

P(X_t > 112.60 | X_{t-1}) = 0.99836.

Remark: Note that you can compute the above probability numerically using the CDF, as P(Z > z) = 1 − P(Z ≤ z) for any random variable Z and real number z.
(b) P(X_t < 112.60 | X_{t-1}) = 1 − 0.99836 = 0.00164.
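A numerical check of (a) and (b) under the model above, assuming SciPy is installed:

    from scipy.stats import norm

    mu = 1.42 + 0.99 * 112.60          # conditional mean: 112.894
    print(norm.sf(112.60, mu, 0.1))    # (a) P(profit), about 0.99836
    print(norm.cdf(112.60, mu, 0.1))   # (b) P(loss),   about 0.00164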
Solutions Week 3
3.2 Since X_t = φ X_{t-1} + ε_t with {ε_t} ∼ NID(0, σ²), we have that X_t | D_{t-1} = X_t | X_{t-1} ∼ N(φ X_{t-1}, σ²), and hence,

σ̂²_T = arg max_{σ²} f(x_T; σ²) = arg max_{σ²} Π_{t=2}^T f(x_t | D_{t-1}; σ²)
= arg max_{σ²} Π_{t=2}^T (1/√(2πσ²)) exp[−(x_t − φ x_{t-1})²/(2σ²)]
= arg max_{σ²} Σ_{t=2}^T [−log √(2πσ²) − (x_t − φ x_{t-1})²/(2σ²)].

Since the likelihood is differentiable w.r.t. σ² we can find the maximum by setting the derivative to zero. Since

log L(σ²) = Σ_{t=2}^T [−log √(2πσ²) − (x_t − φ x_{t-1})²/(2σ²)] = Σ_{t=2}^T [−log √(2π) − (1/2) log σ² − (1/2)(σ²)⁻¹ ε_t²],

the derivative is

∂ log L(σ²)/∂σ² = Σ_{t=2}^T [−(1/2)(1/σ²) + (1/2)(σ²)⁻² ε_t²],

and by construction, the ML estimator σ̂²_T satisfies

∂ log L(σ̂²_T)/∂σ² = 0 ⇔ −(1/2) Σ_{t=2}^T 1/σ̂²_T + (1/2)(σ̂²_T)⁻² Σ_{t=2}^T ε_t² = 0
⇔ Σ_{t=2}^T 1/σ̂²_T = (σ̂²_T)⁻² Σ_{t=2}^T ε_t² ⇔ σ̂²_T = Σ_{t=2}^T ε_t²/(T − 1).
3.3 It depends on the nature of the model and the estimation problem! In general, ML estimates are dependent on each other (e.g. nonlinear problems, incorrect specification, no analytical estimator form). Here we can typically solve the system

∂L(φ̂_T)/∂φ = 0
∂L(σ̂²_T)/∂σ² = 0

and obtain expressions for φ̂_T and σ̂²_T that are functions only of the data X₁, ..., X_T. In some sense, it might thus be said that φ̂_T and σ̂²_T are analytically unrelated, but they are not independent random variables.
3.4 The conditional likelihood for an MA(1) can be based on prediction error decomposition. First, we notice that

X₂ | X₁ ∼ N(θε₁, σ_ε²),   X₃ | X₂, X₁ = X₃ | X₂ ∼ N(θε₂, σ_ε²),

and in general X_t | X_{t-1} ∼ N(θε_{t-1}, σ_ε²). Therefore,

f(x_T; θ, σ_ε²) = Π_{t=2}^T f(x_t | D_{t-1}; θ, σ_ε²) = Π_{t=2}^T (1/√(2πσ_ε²)) exp[−(x_t − θε_{t-1})²/(2σ_ε²)]
3.5 Using the joint Gaussianity of X₁, ..., X_T generated by the AR(2)

AR(2): X_t = φ₁ X_{t-1} + φ₂ X_{t-2} + ε_t,   {ε_t} ∼ NID(0, σ_ε²),

the log likelihood function takes the form

L(φ₁, φ₂, σ_ε²) = −(T/2) log 2π − (1/2) log |Γ(φ₁, φ₂, σ_ε²)| − (1/2) X_T′ Γ⁻¹(φ₁, φ₂, σ_ε²) X_T

where

Γ(φ₁, φ₂, σ_ε²) =
[ γ(0)     γ(1)     γ(2)     ...   γ(T−1) ]
[ γ(1)     γ(0)     γ(1)     ...   γ(T−2) ]
[ γ(2)     γ(1)     γ(0)     ...   γ(T−3) ]
[ ...      ...      ...      ...   ...    ]
[ γ(T−1)   γ(T−2)   γ(T−3)   ...   γ(0)   ]

with γ(0), γ(1), ..., derived in the lecture slides of week 2!
Since

X_t = φ₁ X_{t-1} + φ₂ X_{t-2} + ε_t,   {ε_t} ∼ NID(0, σ_ε²),

we have that X_t | D_{t-1} = X_t | X_{t-1}, X_{t-2} ∼ N(φ₁ X_{t-1} + φ₂ X_{t-2}, σ_ε²). Hence, for ψ = (φ₁, φ₂, σ_ε²), the conditional loglikelihood estimator of the parameters is given by

ψ̂_T = arg max_ψ f(x_T; ψ) = arg max_ψ Π_{t=3}^T f(x_t | D_{t-1}; ψ)
= arg max_ψ Π_{t=3}^T (1/√(2πσ_ε²)) exp[−(x_t − φ₁ x_{t-1} − φ₂ x_{t-2})²/(2σ_ε²)]
= arg max_ψ Σ_{t=3}^T [−log √(2πσ_ε²) − (x_t − φ₁ x_{t-1} − φ₂ x_{t-2})²/(2σ_ε²)].
Finally, since

log L(ψ) = Σ_{t=3}^T [−log √(2πσ_ε²) − (x_t − φ₁ x_{t-1} − φ₂ x_{t-2})²/(2σ_ε²)],

the first order optimality conditions (FOCs) are given by

∂ log L(ψ̂)/∂φ₁ = Σ_{t=3}^T (1/σ_ε²)(x_t − φ̂₁ x_{t-1} − φ̂₂ x_{t-2}) x_{t-1} = 0,
∂ log L(ψ̂)/∂φ₂ = Σ_{t=3}^T (1/σ_ε²)(x_t − φ̂₁ x_{t-1} − φ̂₂ x_{t-2}) x_{t-2} = 0.

Given these FOCs we could easily add the equation for estimating σ² in our system!
3.6 Since X_t = φ X_{t-1} + ε_t, the LS estimator of φ is defined as

φ̂_T = arg min_φ Σ_{t=2}^T (x_t − φ x_{t-1})²
and satisfies the first order condition (FOC),

∂ Σ_{t=2}^T (x_t − φ̂_T x_{t-1})²/∂φ = 0 ⇔ −2 Σ_{t=2}^T (x_t − φ̂_T x_{t-1}) x_{t-1} = 0 ⇔ φ̂_T = Σ_{t=2}^T x_{t-1} x_t / Σ_{t=2}^T x²_{t-1}.

This is the same as the ML estimator derived in the lecture slides!
3.7 By the definition of OLS we have

(α̂, φ̂) = arg min_{α,φ} Σ_{t=2}^T (x_t − α − φ x_{t-1})²

Therefore, the FOCs yield

∂ Σ_{t=2}^T (x_t − α − φ x_{t-1})²/∂α = −2 Σ_{t=2}^T (x_t − α̂ − φ̂ x_{t-1}) = 0,
∂ Σ_{t=2}^T (x_t − α − φ x_{t-1})²/∂φ = −2 Σ_{t=2}^T (x_t − α̂ − φ̂ x_{t-1}) x_{t-1} = 0.

From the first equation it follows that

α̂ = (1/(T−1)) Σ_{t=2}^T x_t − φ̂ (1/(T−1)) Σ_{t=2}^T x_{t-1}

From the second we get

Σ_{t=2}^T (x_t − α̂ − φ̂ x_{t-1}) x_{t-1} = 0
⇔ Σ_{t=2}^T (x_t − α̂ − φ̂ x_{t-1}) (x_{t-1} − (1/(T−1)) Σ_{k=2}^T x_{k-1}) = 0
⇔ Σ_{t=2}^T (x_t − (1/(T−1)) Σ_{k=2}^T x_k + φ̂ (1/(T−1)) Σ_{k=2}^T x_{k-1} − φ̂ x_{t-1}) (x_{t-1} − (1/(T−1)) Σ_{k=2}^T x_{k-1}) = 0

We conclude

φ̂ = Σ_{t=2}^T (x_{t-1} − (1/(T−1)) Σ_{k=2}^T x_{k-1}) (x_t − (1/(T−1)) Σ_{k=2}^T x_k) / Σ_{t=2}^T (x_{t-1} − (1/(T−1)) Σ_{k=2}^T x_{k-1})²   (1)
Alternative derivation.
Alternatively, we can derive the estimate of
ˆ
φ
by plugging in
the expression for ˆ
α
and solving for
ˆ
φ
(which, in turn, would also provide a solution for
ˆ
α
). From the FOC for
ˆ
φ
, we have
T
X
t
=2
(
x
t
-
ˆ
α
-
ˆ
φx
t
-
1
)
x
t
-
1
= 0
⇐⇒
T
X
t
=2
(
x
t
-
"
1
T
-
1
T
X
k
=2
x
k
-
ˆ
φ
1
T
-
1
T
X
k
=2
x
k
-
1
#
-
ˆ
φx
t
-
1
)
x
t
-
1
= 0
⇐⇒
T
X
t
=2
x
t
x
t
-
1
-
1
T
-
1
T
X
k
=2
x
k
T
X
t
=2
x
t
-
1
+
ˆ
φ
1
T
-
1
T
X
k
=2
x
k
-
1
T
X
t
=2
x
t
-
1
-
ˆ
φ
T
X
t
=2
x
t
-
1
x
t
-
1
= 0
$$\Longleftrightarrow\; \hat\phi\Big(\sum_{t=2}^{T} x_{t-1} x_{t-1} - \frac{1}{T-1}\sum_{k=2}^{T} x_{k-1} \sum_{t=2}^{T} x_{t-1}\Big) = \sum_{t=2}^{T} x_t x_{t-1} - \frac{1}{T-1}\sum_{k=2}^{T} x_k \sum_{t=2}^{T} x_{t-1}$$
$$\Longleftrightarrow\; \hat\phi = \frac{\sum_{t=2}^{T} x_t x_{t-1} - \frac{1}{T-1}\sum_{k=2}^{T} x_k \sum_{t=2}^{T} x_{t-1}}{\sum_{t=2}^{T} x_{t-1} x_{t-1} - \frac{1}{T-1}\sum_{k=2}^{T} x_{k-1} \sum_{t=2}^{T} x_{t-1}}.$$
The above expression can be simplified further by grouping the $x_{t-1}$ terms. Note that the expression is a different representation of (1), which is easier to interpret.
3.8 We use the parameters we have already estimated and examine the residuals
$$\hat\varepsilon_t = x_t - \hat\alpha - \hat\phi x_{t-1}.$$
We can then estimate $\sigma^2$ by using the sample variance of these residuals, i.e.
$$\hat\sigma^2 = \frac{1}{T-1}\sum_{t=2}^{T} \hat\varepsilon^2_t.$$
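Putting 3.7 and 3.8 together, a minimal sketch (the helper is our own; the means are taken over the $T-1$ usable observations, matching the $1/(T-1)$ factors above):

```python
import numpy as np

def ar1_ols(x):
    """OLS for x_t = alpha + phi x_{t-1} + eps_t, implementing the demeaned
    closed form (1) and the residual-variance estimator of 3.8."""
    x = np.asarray(x, dtype=float)
    y, ylag = x[1:], x[:-1]
    phi = np.sum((ylag - ylag.mean()) * (y - y.mean())) \
          / np.sum((ylag - ylag.mean()) ** 2)
    alpha = y.mean() - phi * ylag.mean()
    resid = y - alpha - phi * ylag
    sigma2 = np.sum(resid ** 2) / len(resid)   # 1/(T-1) sum of squared residuals
    return alpha, phi, sigma2
```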
3.10 Let $X_1, \dots, X_T$ be a subset of a time series generated by a stationary MA(2) model,
$$X_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}, \qquad \varepsilon_t \sim \text{WN}(0, \sigma^2).$$
Let $D_T \equiv X_1, \dots, X_T$; then the one-step-ahead forecast is
$$\hat X_{T+1} = E(X_{T+1} \mid D_T) = E(\varepsilon_{T+1} + \theta_1 \varepsilon_T + \theta_2 \varepsilon_{T-1} \mid D_T) \qquad \text{(definition of MA(2))}$$
$$= E(\varepsilon_{T+1} \mid D_T) + \theta_1 E(\varepsilon_T \mid D_T) + \theta_2 E(\varepsilon_{T-1} \mid D_T) \qquad \text{(linearity of expectation)}$$
$$= E(\varepsilon_{T+1}) + \theta_1 \varepsilon_T + \theta_2 \varepsilon_{T-1} \qquad (\varepsilon_{T+1} \perp X_1, \dots, X_T \text{ in MA(2); } \varepsilon_T, \varepsilon_{T-1} \text{ known given } D_T)$$
$$= \theta_1 \varepsilon_T + \theta_2 \varepsilon_{T-1}. \qquad (E(\varepsilon_{T+1}) = 0)$$
The two-step-ahead forecast is
$$\hat X_{T+2} = E(X_{T+2} \mid D_T) = E(\varepsilon_{T+2} + \theta_1 \varepsilon_{T+1} + \theta_2 \varepsilon_T \mid D_T) \qquad \text{(definition of MA(2))}$$
$$= E(\varepsilon_{T+2} \mid D_T) + \theta_1 E(\varepsilon_{T+1} \mid D_T) + \theta_2 E(\varepsilon_T \mid D_T) \qquad \text{(linearity of expectation)}$$
$$= E(\varepsilon_{T+2}) + \theta_1 E(\varepsilon_{T+1}) + \theta_2 \varepsilon_T \qquad (\varepsilon_{T+2}, \varepsilon_{T+1} \perp D_T \text{ in MA(2); } \varepsilon_T \text{ known given } D_T)$$
$$= \theta_2 \varepsilon_T. \qquad (E(\varepsilon_{T+2}) = E(\varepsilon_{T+1}) = 0)$$
Then, for the $h$-step-ahead forecast we have
$$\hat X_{T+h} = E(X_{T+h} \mid X_1, \dots, X_T) = 0 \qquad \forall\, h > 2.$$
The forecast errors are thus given by
$$e_{T+1} \equiv X_{T+1} - \hat X_{T+1} = \varepsilon_{T+1} + \theta_1 \varepsilon_T + \theta_2 \varepsilon_{T-1} - \theta_1 \varepsilon_T - \theta_2 \varepsilon_{T-1} = \varepsilon_{T+1},$$
$$e_{T+2} \equiv X_{T+2} - \hat X_{T+2} = \varepsilon_{T+2} + \theta_1 \varepsilon_{T+1} + \theta_2 \varepsilon_T - \theta_2 \varepsilon_T = \varepsilon_{T+2} + \theta_1 \varepsilon_{T+1},$$
$$e_{T+h} \equiv X_{T+h} - \hat X_{T+h} = \varepsilon_{T+h} + \theta_1 \varepsilon_{T+h-1} + \theta_2 \varepsilon_{T+h-2}, \qquad \forall\, h > 2.$$
Finally, the forecast error variances are given by
$$\text{Var}(e_{T+1}) = \text{Var}(\varepsilon_{T+1}) = \sigma^2,$$
$$\text{Var}(e_{T+2}) = \text{Var}(\varepsilon_{T+2} + \theta_1 \varepsilon_{T+1}) = \text{Var}(\varepsilon_{T+2}) + \theta_1^2 \text{Var}(\varepsilon_{T+1}) = \sigma^2(1 + \theta_1^2),$$
where the innovations are uncorrelated with common variance $\sigma^2$ since $\{\varepsilon_t\} \sim \text{WN}$, and
$$\text{Var}(e_{T+h}) = \text{Var}(\varepsilon_{T+h} + \theta_1 \varepsilon_{T+h-1} + \theta_2 \varepsilon_{T+h-2}) = \text{Var}(\varepsilon_{T+h}) + \theta_1^2 \text{Var}(\varepsilon_{T+h-1}) + \theta_2^2 \text{Var}(\varepsilon_{T+h-2}) = \sigma^2(1 + \theta_1^2 + \theta_2^2), \qquad \forall\, h > 2.$$
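A sketch of these formulas in Python, assuming the last two innovations have already been recovered from the data (all names are ours, not part of the exercise):

```python
import numpy as np

def ma2_forecasts(theta1, theta2, sigma2, eps_T, eps_Tm1, H=5):
    """Point forecasts and forecast-error variances for a stationary MA(2)."""
    fc = [theta1 * eps_T + theta2 * eps_Tm1,       # h = 1
          theta2 * eps_T]                          # h = 2
    fc += [0.0] * (H - 2)                          # h > 2: unconditional mean
    var = [sigma2,                                 # h = 1
           sigma2 * (1 + theta1 ** 2)]             # h = 2
    var += [sigma2 * (1 + theta1 ** 2 + theta2 ** 2)] * (H - 2)  # h > 2
    return np.array(fc), np.array(var)
```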
3.11 The one-step ahead forecast is
$$\hat X_{T+1} = E(X_{T+1} \mid X_1, \dots, X_T) = E(\phi X_T + \varepsilon_{T+1} + \theta \varepsilon_T \mid X_1, \dots, X_T)$$
$$= \phi E(X_T \mid X_1, \dots, X_T) + E(\varepsilon_{T+1} \mid X_1, \dots, X_T) + \theta E(\varepsilon_T \mid X_1, \dots, X_T)$$
$$= \phi X_T + E(\varepsilon_{T+1}) + \theta \varepsilon_T = \phi X_T + \theta \varepsilon_T.$$
Note: The second equality holds by definition of the ARMA(1,1) process. The third equality holds by linearity of the expectation. In the fourth equality, the conditioning in the second expectation drops since $\varepsilon_{T+1}$ is independent of past data that spans until time $T$. Note also that $\varepsilon_T$ is known conditional on the data until time $T$. Finally, the fifth equality holds because the unconditional expectation of $\varepsilon_{T+1}$ is zero since the innovations are white noise.
The one-step ahead forecast error is
$$e_{T+1} = X_{T+1} - \hat X_{T+1} = \varepsilon_{T+1},$$
and hence $E(e_{T+1}) = 0$ and $\text{Var}(e_{T+1}) = \sigma^2$.
The 2-step ahead forecast is
$$\hat X_{T+2} = E(X_{T+2} \mid X_1, \dots, X_T) = E(\phi X_{T+1} + \varepsilon_{T+2} + \theta \varepsilon_{T+1} \mid X_1, \dots, X_T)$$
$$= \phi E(X_{T+1} \mid X_1, \dots, X_T) + E(\varepsilon_{T+2} \mid X_1, \dots, X_T) + \theta E(\varepsilon_{T+1} \mid X_1, \dots, X_T)$$
$$= \phi \hat X_{T+1} + E(\varepsilon_{T+2}) + \theta E(\varepsilon_{T+1}) = \phi^2 X_T + \phi\theta \varepsilon_T.$$
Note: The second equality holds by definition of the ARMA(1,1) process. The third equality holds by linearity of the expectation. In the fourth equality, the conditioning in the second and third expectations drops since both $\varepsilon_{T+2}$ and $\varepsilon_{T+1}$ are independent of past data that spans until time $T$. Note also that $E(X_{T+1} \mid X_1, \dots, X_T)$ is just the one-step-ahead forecast $\hat X_{T+1}$ that we calculated above. Finally, the fifth equality holds because the unconditional expectation of both $\varepsilon_{T+2}$ and $\varepsilon_{T+1}$ is zero since the innovations are white noise.
The 2-step ahead forecast error is
$$e_{T+2} = \varepsilon_{T+2} + (\theta + \phi)\varepsilon_{T+1},$$
hence we have $E(e_{T+2}) = 0$ and
$$\text{Var}(e_{T+2}) = \text{Var}(\varepsilon_{T+2} + (\theta + \phi)\varepsilon_{T+1}) = \text{Var}(\varepsilon_{T+2}) + (\theta + \phi)^2 \text{Var}(\varepsilon_{T+1}) = \sigma^2_\varepsilon\big[1 + (\phi + \theta)^2\big].$$
Note: The first equality holds by definition of the forecast error. The second equality holds since the innovations are white noise, hence they are uncorrelated, and as a result, the variance of the sum is equal to the sum of the variances. The last equality holds because the innovations are white noise and hence they all have the same variance $\sigma^2_\varepsilon$.
For the $h$-step ahead forecast, a sketch of the derivations (with some missing steps and justifications) is
$$\hat X_{T+h} = E(X_{T+h} \mid X_1, \dots, X_T)$$
$$= E\big(\phi^h X_T + \varepsilon_{T+h} + (\phi + \theta)\varepsilon_{T+h-1} + \dots + \phi^{h-2}(\phi + \theta)\varepsilon_{T+1} + \phi^{h-1}\theta \varepsilon_T \mid X_1, \dots, X_T\big)$$
$$= \phi^h X_T + \phi^{h-1}\theta \varepsilon_T.$$
The $h$-step forecast error is given by
$$e_{T+h} = \varepsilon_{T+h} + (\phi + \theta)\varepsilon_{T+h-1} + \dots + \phi^{h-2}(\phi + \theta)\varepsilon_{T+1}.$$
As a result, the variance of the $h$-step forecast error is given by
$$\text{Var}(e_{T+h}) = \text{Var}(\varepsilon_{T+h}) + (\phi + \theta)^2 \text{Var}(\varepsilon_{T+h-1}) + \dots + \phi^{2(h-2)}(\phi + \theta)^2 \text{Var}(\varepsilon_{T+1})$$
$$= \sigma^2\Big[1 + (\phi + \theta)^2 + \phi^2(\phi + \theta)^2 + \dots + \phi^{2(h-2)}(\phi + \theta)^2\Big] = \sigma^2\Big[1 + (\phi + \theta)^2 \sum_{j=0}^{h-2} \phi^{2j}\Big].$$
Note that we have re-written the variance of the sum as the sum of the variances since the innovations are white noise. The white noise property also allows us to set the variance of each innovation (at any lag) equal to $\sigma^2$.
3.12 The result follows easily by noting that we are dealing with a geometric series, and hence,
$$\text{Var}(e_{T+h}) = \sigma^2\Big[1 + (\phi + \theta)^2 \sum_{j=0}^{h-2} \phi^{2j}\Big] \to \sigma^2\Big[1 + \frac{(\phi + \theta)^2}{1 - \phi^2}\Big] = \sigma^2\,\frac{(1 - \phi^2) + (\phi + \theta)^2}{1 - \phi^2} = \sigma^2\,\frac{1 + \theta^2 + 2\phi\theta}{1 - \phi^2}.$$
3.13 Since a stable process has fading memory, the relevance of the last observation decreases
as the forecast horizon
h
increases. In the limit as
h
goes to infinity, the last observation
is irrelevant for the forecast. As a result, in the limit, the forecast error variance matches
the unconditional variance of the process.
3.14 Recall that for the AR(1) we have $\hat X_{T+h} = \phi^h X_T$ (see the slides!). Hence, given $X_T = 1.3$, we have for (a),
$$\hat X_{T+1} = E(X_{T+1} \mid X_T) = 0.9 \times 1.3 = 1.17,$$
$$\hat X_{T+2} = E(X_{T+2} \mid X_T) = 0.9^2 \times 1.3 = 1.053,$$
$$\hat X_{T+3} = E(X_{T+3} \mid X_T) = 0.9^3 \times 1.3 = 0.9477.$$
Recall that for the AR(1) we have $\text{Var}(e_{T+h}) = \sigma^2(1 + \phi^2 + \dots + \phi^{2(h-1)})$. Hence, the 95% confidence bounds are given by
$$CB_{T+1} = \hat X_{T+1} \pm 1.96 \times \text{Sdev}(e_{T+1}) = \hat X_{T+1} \pm 1.96 \times \sqrt{0.1} \approx \hat X_{T+1} \pm 1.96 \times 0.316,$$
$$CB_{T+2} = \hat X_{T+2} \pm 1.96 \times \text{Sdev}(e_{T+2}) = \hat X_{T+2} \pm 1.96 \times \sqrt{0.1 \times (1 + 0.9^2)} \approx \hat X_{T+2} \pm 1.96 \times 0.425,$$
$$CB_{T+3} = \hat X_{T+3} \pm 1.96 \times \text{Sdev}(e_{T+3}) = \hat X_{T+3} \pm 1.96 \times \sqrt{0.1 \times (1 + 0.9^2 + 0.9^4)} \approx \hat X_{T+3} \pm 1.96 \times 0.497.$$
Using the formulae above, we can compute the forecasts and confidence bounds for the case $\phi = 0.5$. We can also plot the forecasts together with the confidence bounds using a programming language or Excel. Below are the results for different values of $\phi$.
[Figure: $h$-step ahead forecasts with 95% confidence bounds for $\phi = 0.9$ (top panel) and $\phi = 0.5$ (bottom panel).]
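A minimal sketch reproducing these numbers (and the ingredients of the plots) for $\phi = 0.9$; changing `phi` to 0.5 gives the second panel:

```python
import numpy as np

phi, sigma2, xT, H = 0.9, 0.1, 1.3, 5      # values from exercise 3.14
fc = phi ** np.arange(1, H + 1) * xT        # point forecasts phi^h * X_T
# Var(e_{T+h}) = sigma2 * (1 + phi^2 + ... + phi^{2(h-1)})
var = sigma2 * np.cumsum(phi ** (2 * np.arange(H)))
lo, hi = fc - 1.96 * np.sqrt(var), fc + 1.96 * np.sqrt(var)
print(fc[:3])                               # 1.17, 1.053, 0.9477
```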
3.18 An AR(2) model is
$$X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + \varepsilon_t, \qquad \varepsilon_t \sim \text{WN}(0, \sigma^2).$$
So, using the recursive formulation of the model,
$$\frac{\partial X_s}{\partial \varepsilon_s} = 1, \qquad \frac{\partial X_{s+1}}{\partial \varepsilon_s} = \phi_1 \frac{\partial X_s}{\partial \varepsilon_s} = \phi_1,$$
$$\frac{\partial X_{s+2}}{\partial \varepsilon_s} = \phi_1 \frac{\partial X_{s+1}}{\partial \varepsilon_s} + \phi_2 \frac{\partial X_s}{\partial \varepsilon_s} = \phi_1^2 + \phi_2,$$
$$\frac{\partial X_{s+3}}{\partial \varepsilon_s} = \phi_1 \frac{\partial X_{s+2}}{\partial \varepsilon_s} + \phi_2 \frac{\partial X_{s+1}}{\partial \varepsilon_s} = \phi_1^3 + 2\phi_1\phi_2, \qquad \dots$$
Then, the IRF with origin at $x = E(X_t) = 0$ generated by an $\varepsilon$-impulse is given by
$$x_{s-2} = x = E(X_t) = 0, \qquad x_{s-1} = x = E(X_t) = 0, \qquad x_s = x + \frac{\partial X_s}{\partial \varepsilon_s}\cdot\varepsilon = \varepsilon,$$
$$x_{s+1} = \phi_1\,\varepsilon, \qquad x_{s+2} = (\phi_1^2 + \phi_2)\,\varepsilon, \qquad x_{s+3} = (\phi_1^3 + 2\phi_1\phi_2)\,\varepsilon, \qquad \dots$$
Note: Using the recursive formulation can sometimes be easier for obtaining the IRF compared to using the MA($\infty$) representation.
3.20 Since the AR(4) is stable: $E(X_t) = E(X_{t-1}) = \dots = E(X_{t-4}) = \mu_X$. Then, from the definition of the AR(4) model we have
$$\mu_X = 0.008 + 0.92\,\mu_X - 0.14\,\mu_X.$$
Solving for $\mu_X$ yields
$$\mu_X = 0.008/(1 - 0.92 + 0.14) \approx 0.036.$$
Using the recursive formulation of the model,
$$\frac{\partial X_s}{\partial \varepsilon_s} = 1, \quad \frac{\partial X_{s+1}}{\partial \varepsilon_s} = 0.92, \quad \frac{\partial X_{s+2}}{\partial \varepsilon_s} = 0.92^2, \quad \frac{\partial X_{s+3}}{\partial \varepsilon_s} = 0.92^3,$$
$$\frac{\partial X_{s+4}}{\partial \varepsilon_s} = 0.92^4 - 0.14, \quad \frac{\partial X_{s+5}}{\partial \varepsilon_s} = 0.92^5 - 2 \times 0.14 \times 0.92, \quad \dots$$
Then, the IRF with origin at $\mu_X = E(X_t) = 0.036$ generated by an $\varepsilon = -0.05$ impulse is given by
$$x_{s-2} = x = E(X_t) = 0.036, \qquad x_{s-1} = x = E(X_t) = 0.036,$$
$$x_s = x + \frac{\partial X_s}{\partial \varepsilon_s}\cdot\varepsilon = 0.036 - 0.05 = -0.014,$$
$$x_{s+1} = 0.036 + 0.92 \times \varepsilon = 0.036 + 0.92 \times (-0.05) = -0.01,$$
$$x_{s+2} = 0.036 + 0.92^2 \times \varepsilon = 0.036 + 0.92^2 \times (-0.05) \approx -0.006,$$
$$x_{s+3} = 0.036 + 0.92^3 \times \varepsilon = 0.036 + 0.92^3 \times (-0.05) \approx -0.003,$$
$$x_{s+4} = 0.036 + (0.92^4 - 0.14) \times \varepsilon \approx 0.036 + 0.58 \times (-0.05) \approx 0.007,$$
$$x_{s+5} = 0.036 + (0.92^5 - 2 \times 0.14 \times 0.92) \times \varepsilon \approx 0.036 + 0.4 \times (-0.05) \approx 0.016, \qquad \dots$$
Or we can demonstrate the IRF graphically using a programming language or
Excel
.
Note:
Once you figure out the recursive structure of the IRF, it is easier to program
the recursion rather
than doing all the calculations by hand, especially for complex models and long horizons.
[Figure: IRF of the AR(4), plotted over horizons $t = s, \dots, s+20$.]
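In that spirit, here is a sketch of the recursion for a general AR($p$), applied to the AR(4) above (the function is our own; the coefficient vector [0.92, 0, 0, -0.14] is read off the model):

```python
import numpy as np

def irf_ar(phis, impulse, mu=0.0, H=20):
    """IRF of an AR(p) via the recursion for dX_{s+j}/d eps_s."""
    p = len(phis)
    d = np.zeros(H + 1)
    d[0] = 1.0                                        # dX_s / d eps_s = 1
    for j in range(1, H + 1):                         # dX_{s+j}/d eps_s
        d[j] = sum(phis[i] * d[j - 1 - i] for i in range(min(p, j)))
    return mu + impulse * d                           # IRF path around the mean

path = irf_ar([0.92, 0.0, 0.0, -0.14], impulse=-0.05, mu=0.036)
print(path[:3])   # -0.014, -0.01, about -0.006
```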
Solutions Week 4
4.1
(a) For $Y_t$ given $X_t = \bar X$ (or, using a math notation, $Y_t \mid (\{X_t\} = \bar X)$), we have
$$Y_t \mid (\{X_t\} = \bar X) = \alpha + \phi Y_{t-1} + (\beta_0 + \beta_1)\bar X + \varepsilon_t = \alpha^* + \phi Y_{t-1} + \varepsilon_t,$$
so $\{Y_t \mid (\{X_t\} = \bar X)\} \sim \text{AR(1)}$ with intercept $\alpha^* = \alpha + (\beta_0 + \beta_1)\bar X$. If $|\phi| < 1$ then $\{Y_t \mid (\{X_t\} = \bar X)\}$ is stationary, and hence
$$E(Y_t \mid \{X_t\} = \bar X) = E(\alpha^* + \phi Y_{t-1} + \varepsilon_t \mid \{X_t\} = \bar X)$$
$$= \alpha^* + \phi E(Y_{t-1} \mid \{X_t\} = \bar X) + E(\varepsilon_t) \qquad (\{X_t\} \text{ exogenous} \Rightarrow \varepsilon_t \perp \{X_t\})$$
and, by stationarity, the conditional means at $t$ and $t-1$ coincide, so solving gives
$$E(Y_t \mid \{X_t\} = \bar X) = \alpha^*/(1 - \phi) = \alpha/(1 - \phi) + (\beta_0 + \beta_1)\bar X/(1 - \phi). \qquad (\{\varepsilon_t\} \sim \text{WN} \Rightarrow E(\varepsilon_t) = 0)$$
(b) If $|\phi| < 1$ then the unconditional mean of $Y_t$ is
$$E(Y_t) = \alpha + \phi E(Y_{t-1}) + \beta_0 E(X_t) + \beta_1 E(X_{t-1}) + E(\varepsilon_t)$$
$$= \alpha + \phi E(Y_{t-1}) + (\beta_0 + \beta_1)E(X_t) + E(\varepsilon_t) \qquad (\{X_t\} \text{ stationary} \Rightarrow E(X_t) = E(X_{t-1}))$$
$$= \frac{\alpha}{1 - \phi} + \frac{\beta_0 + \beta_1}{1 - \phi}\,E(X_t). \qquad (\{\varepsilon_t\} \sim \text{WN} \Rightarrow E(\varepsilon_t) = 0)$$
Given the result in (a), if we set $\bar X = E(X_t)$ we get
$$E(Y_t \mid \{X_t\} = \bar X) = \frac{\alpha}{1 - \phi} + \frac{\beta_0 + \beta_1}{1 - \phi}\,E(X_t).$$
4.2 The long run equilibrium $\bar Y$ for a given fixed value $\bar X$ is trivially given by
$$\bar Y = 1.2 + 0.95\bar Y + 1.5\bar X - 0.3\bar X \;\Leftrightarrow\; \bar Y = \frac{1.2 + 1.5\bar X - 0.3\bar X}{1 - 0.95}.$$
For $\bar X = 2.3$ we have $\bar Y = 79.2$.
4.3 We present the solution to this exercise by plotting the forecasts using a programming
language.
You can also solve this exercise analytically, although for large horizons it is
recommended to do it numerically.
4.4
• Short-run multiplier: $\beta_0$.
• $h$-step ahead multiplier: $\phi^{h-1}(\phi\beta_0 + \beta_1)$.
• Long-run multiplier: $(\beta_0 + \beta_1)/(1 - \phi)$. Hence:
$$\beta_0 + \sum_{j=1}^{\infty} \phi^{j-1}(\phi\beta_0 + \beta_1) = \beta_0 + \frac{\phi\beta_0 + \beta_1}{1 - \phi} = \frac{\beta_0 + \beta_1}{1 - \phi}.$$
Note: The long-run multiplier is not the limit of the $h$-step ahead multipliers; instead, it is the sum of the $h$-step ahead multipliers! It is the cumulative effect of all multipliers!
[Figure II – Exercise 4.3: Forecasts for the unemployment rate and quarterly GDP growth rate in the Netherlands, horizons $T$ to $T+24$.]
4.9
(a) Re-writing the ADL(1,1) in ECM form yields:
$$\Delta Y_t = -(1 - \phi)\Big(Y_{t-1} - \frac{\alpha}{1 - \phi} - \frac{\beta_0 + \beta_1}{1 - \phi}X_{t-1}\Big) + \beta_0 \Delta X_t + \varepsilon_t = -0.13(Y_{t-1} - 0.15 - 4.6 X_{t-1}) - 0.5\Delta X_t + \varepsilon_t.$$
i. The negative error correction coefficient ($-0.13$) indicates that unemployment is indeed related to GDP in the long run. The correction towards equilibrium values is however quite slow (13% per quarter).
ii. Changes in the GDP growth rate have an immediate short-run negative impact on the unemployment rate. In particular, the unemployment rate will drop immediately by half a percentage point given a rise in the GDP growth rate of 1%.
(b) To check this statement, we note that $\Delta X_t = 3 - 2 = 1$, and compute the following conditional expectation:
$$E[\Delta Y_t \mid Y_{t-1} = 9, X_{t-1} = 2, \Delta X_t = 1]$$
$$= E[-0.13(Y_{t-1} - 0.15 - 4.6 X_{t-1}) - 0.5\Delta X_t \mid Y_{t-1} = 9, X_{t-1} = 2, \Delta X_t = 1]$$
$$= -0.13(9 - 0.15 - 4.6 \cdot 2) - 0.5 \cdot 1 = -0.4545,$$
hence the unemployment rate is expected to go down to $9 - 0.4545 = 8.5455$ percent, instead of 5 percent. The statement is therefore false.
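A one-line numerical check of (b), plugging in the values from the exercise:

```python
# Numerical check of 4.9(b): expected one-step change in unemployment.
y_lag, x_lag, dx = 9.0, 2.0, 3.0 - 2.0
dy = -0.13 * (y_lag - 0.15 - 4.6 * x_lag) - 0.5 * dx
print(dy, y_lag + dy)   # -0.4545 and 8.5455
```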
4.11 Suppose the DGP is
$$X_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2},$$
but we consider the MA(1) model
$$X_t = u_t + \phi \varepsilon_{t-1}.$$
Then $u_t = \varepsilon_t + (\theta_1 - \phi)\varepsilon_{t-1} + \theta_2 \varepsilon_{t-2}$. Therefore
$$\text{Cov}(u_t, u_{t-1}) = \text{Cov}\big(\varepsilon_t + (\theta_1 - \phi)\varepsilon_{t-1} + \theta_2 \varepsilon_{t-2},\; \varepsilon_{t-1} + (\theta_1 - \phi)\varepsilon_{t-2} + \theta_2 \varepsilon_{t-3}\big)$$
$$= \text{Cov}(\varepsilon_t, \varepsilon_{t-1}) + (\theta_1 - \phi)\text{Cov}(\varepsilon_t, \varepsilon_{t-2}) + \theta_2 \text{Cov}(\varepsilon_t, \varepsilon_{t-3})$$
$$\quad + (\theta_1 - \phi)\text{Cov}(\varepsilon_{t-1}, \varepsilon_{t-1}) + (\theta_1 - \phi)(\theta_1 - \phi)\text{Cov}(\varepsilon_{t-1}, \varepsilon_{t-2}) + (\theta_1 - \phi)\theta_2 \text{Cov}(\varepsilon_{t-1}, \varepsilon_{t-3})$$
$$\quad + \theta_2 \text{Cov}(\varepsilon_{t-2}, \varepsilon_{t-1}) + \theta_2(\theta_1 - \phi)\text{Cov}(\varepsilon_{t-2}, \varepsilon_{t-2}) + \theta_2 \theta_2 \text{Cov}(\varepsilon_{t-2}, \varepsilon_{t-3})$$
$$= \big((\theta_1 - \phi) + \theta_2(\theta_1 - \phi)\big)\sigma^2 = (\theta_2 + 1)(\theta_1 - \phi)\sigma^2,$$
where the first equality holds by definition of the residual process, the second equality holds by linearity of the covariance, and the third equality holds by the fact that the innovations are white noise and hence are uncorrelated with fixed variance $\sigma^2$.
4.12
(a) Given that $u_t = \phi Y_{t-1} + \varepsilon_t$ and $u_{t-1} = \phi Y_{t-2} + \varepsilon_{t-1}$, we have
$$\text{Cov}(u_t, u_{t-1}) = \text{Cov}(\phi Y_{t-1} + \varepsilon_t,\; \phi Y_{t-2} + \varepsilon_{t-1}) \qquad \text{(definition of residual process)}$$
$$= \phi^2 \text{Cov}(Y_{t-1}, Y_{t-2}) + \phi \text{Cov}(Y_{t-1}, \varepsilon_{t-1}) + \phi \text{Cov}(\varepsilon_t, Y_{t-2}) + \text{Cov}(\varepsilon_t, \varepsilon_{t-1}) \qquad \text{(linearity of covariance)}$$
$$= \phi^2 \gamma_Y(1) + \phi\sigma^2_\varepsilon + 0 + 0 \qquad (\text{ADL: } \varepsilon_t \perp Y_{t-2};\; \{\varepsilon_t\} \sim \text{WN: } \varepsilon_t \perp \varepsilon_{t-1} \text{ and } \text{Var}(\varepsilon_t) = \sigma^2_\varepsilon)$$
$$= \phi^2 \gamma_Y(1) + \phi\sigma^2_\varepsilon,$$
which is not equal to 0, unless $\phi = 0$. But the case $\phi = 0$ corresponds to the correct specification case.
(b) Let the DGP be an ADL(0,1)
$$Y_t = \alpha + \beta_0 X_t + \beta_1 X_{t-1} + \varepsilon_t.$$
Suppose that we fit the static regression
$$Y_t = \alpha + \beta_0 X_t + u_t.$$
Then, the residuals are given by $u_t = \beta_1 X_{t-1} + \varepsilon_t$. We thus have
$$\text{Cov}(u_t, u_{t-1}) = \text{Cov}(\beta_1 X_{t-1} + \varepsilon_t,\; \beta_1 X_{t-2} + \varepsilon_{t-1})$$
$$= \beta_1^2 \text{Cov}(X_{t-1}, X_{t-2}) + \beta_1 \text{Cov}(X_{t-1}, \varepsilon_{t-1}) + \beta_1 \text{Cov}(\varepsilon_t, X_{t-2}) + \text{Cov}(\varepsilon_t, \varepsilon_{t-1})$$
$$= \beta_1^2 \gamma_X(1) + 0 + 0 + 0 = \beta_1^2 \gamma_X(1),$$
where the first equality holds by definition of the residuals, the second equality holds by linearity of the covariance, and the third equality holds since $X_t$ is exogenous, which implies $\text{Cov}(X_{t-1}, \varepsilon_{t-1}) = \text{Cov}(\varepsilon_t, X_{t-2}) = 0$, and because the innovations are white noise, and hence uncorrelated over time, which implies that $\text{Cov}(\varepsilon_t, \varepsilon_{t-1}) = 0$. We thus conclude that the residuals have autocorrelation if the exogenous process $\{X_t\}$ has autocorrelation, i.e. if $\gamma_X(1) \neq 0$.
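A small simulation sketch illustrating (b). The DGP parameters and the AR(1) law for the exogenous regressor are illustrative choices of ours, not part of the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta0, beta1 = 10_000, 1.0, 0.5           # illustrative values
x = np.zeros(T)
for t in range(1, T):                        # autocorrelated exogenous regressor
    x[t] = 0.8 * x[t - 1] + rng.normal()
eps = rng.normal(size=T)
xlag = np.r_[0.0, x[:-1]]
y = beta0 * x + beta1 * xlag + eps           # ADL(0,1) DGP with alpha = 0
b, a = np.polyfit(x, y, 1)                   # misspecified static regression
u = y - a - b * x
print(np.corrcoef(u[1:], u[:-1])[0, 1])      # noticeably nonzero, as derived
```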
Solutions Week 5
5.1
(a) White noise: $\alpha = \phi = \delta = 0$, $\sigma^2 > 0$.
(b) Stationary with mean zero: $\alpha = \delta = 0$, $|\phi| < 1$, $\sigma^2 > 0$.
(c) Stationary with nonzero mean: $\delta = 0$, $\alpha \neq 0$, $|\phi| < 1$, $\sigma^2 > 0$.
(d) Trend-stationary: $\delta \neq 0$, $|\phi| < 1$, $\sigma^2 > 0$.
(e) A random walk: $\alpha = \delta = 0$, $\phi = 1$, $\sigma^2 > 0$.
(f) A random walk with drift (positive and negative): $\delta = 0$, $\alpha \neq 0$, $\phi = 1$, $\sigma^2 > 0$ ($\alpha > 0$, $\alpha < 0$ respectively).
5.2 (a) top left;
(b) top right;
(c) bottom left;
(d) bottom right.
5.3 Suppose that the estimation results are valid (asymptotically). The ADF test statistic is
$$\text{ADF} = -0.0011/0.0013 = -0.846.$$
The value $-0.846$ is smaller than $-1.95$ in absolute value, where the latter is the DF critical value with no intercept for a confidence level of 95%. Hence, at a 5% significance level, we cannot reject the null hypothesis $H_0$ of a unit root.
Note: Sometimes, the conclusion might depend on the pre-specified confidence level. Also, it is important that the estimation results are valid, that the DF statistic has the correct distribution, that the residuals are uncorrelated, etc.
5.4 (a) None of the processes is stationary.
(b) The third process is trend stationary because it consists of a linear trend plus a stationary process.
(c) The first process is unit root non-stationary because $0.27 - 0.13 + 0.86 = 1$. Also, the second process is unit root non-stationary because $0.97 + 0.03 = 1$.
(d) The second process is unit root non-stationary with drift because the constant 0.31 introduces a drift in the process.
5.8
(a) Global temperatures on planet earth are I(0)
• Average global temperature is fixed over time.
• In the long run, temperatures must return to an average level.
• The current increase in temperatures is only temporary.
(b) The growth rate of GDP in the Netherlands is I(1)
• The average growth rate may change over time.
• After a crisis there is no reason to suppose that the economy returns to its average growth path.
• A recession may be permanent!
• Volatility may change over time. Growth rates may be increasingly volatile. The Great Moderation could stay forever.
(c) The price series of Microsoft stocks is I(0)
• The average price is fixed over time.
• An increase or decrease in prices can only be temporary.
• When stock prices are low, you can make a profit by buying Microsoft stocks and waiting until they converge back to higher values.
• When prices are high, you should bet against Microsoft stocks.
(d) Sea level is I(0)
• Sea levels cannot rise permanently.
• In the short run, fluctuations may be potentially dangerous for some countries (like the Netherlands!).
• In the long run, the sea level will always return to its long-run average level!
Solutions Week 6
6.1 Let $X_t \sim I(0)$ and $Y_t \sim I(0)$. Define the linear combination of $X_t$ and $Y_t$ as $Z_t = a + bX_t + cY_t$. Then
$$E(Z_t) = E(a + bX_t + cY_t) \qquad \text{(definition of } Z_t\text{)}$$
$$= a + bE(X_t) + cE(Y_t) \qquad \text{(linearity of expectation)}$$
$$= a + b\mu_X(t) + c\mu_Y(t) = a + b\mu_X + c\mu_Y \qquad (X_t \text{ and } Y_t \text{ are I(0))}$$
$$= \mu_Z.$$
Hence, the mean is finite and constant in time. Now we check the variance and autocovariance:
$$\text{Var}(Z_t) = \text{Var}(a + bX_t + cY_t) \qquad \text{(definition of } Z_t\text{)}$$
$$= b^2 \text{Var}(X_t) + c^2 \text{Var}(Y_t) \qquad (X_t \perp Y_t)$$
$$= b^2 \sigma^2_X(t) + c^2 \sigma^2_Y(t) = b^2 \sigma^2_X + c^2 \sigma^2_Y \qquad (X_t \text{ and } Y_t \text{ are I(0))}$$
$$= \sigma^2_Z.$$
The variance is also finite and constant in time. Finally, we check that the autocovariance depends only on $h$, but not on $t$:
$$\text{Cov}(Z_t, Z_{t-h}) = \text{Cov}(a + bX_t + cY_t,\; a + bX_{t-h} + cY_{t-h}) \qquad \text{(definition of } Z_t\text{)}$$
$$= b^2 \text{Cov}(X_t, X_{t-h}) + bc\,\text{Cov}(X_t, Y_{t-h}) + cb\,\text{Cov}(Y_t, X_{t-h}) + c^2 \text{Cov}(Y_t, Y_{t-h}) \qquad \text{(linearity of covariance)}$$
$$= b^2 \gamma_X(t, h) + c^2 \gamma_Y(t, h) \qquad (X_t \perp Y_t)$$
$$= b^2 \gamma_X(h) + c^2 \gamma_Y(h) \qquad (X_t \text{ and } Y_t \text{ are I(0))}$$
$$= \gamma_Z(h).$$
Conclusion: $Z_t$ is I(0)!
6.2 Let $X_t \sim I(0)$ and $Y_t \sim I(1)$. Since $Y_t \sim I(1)$, either $\mu_Y(t)$ varies in time, or $\sigma^2_Y(t)$ varies in time, or $\gamma_Y(t, h)$ varies in time for some $h$.
Define the linear combination of $X_t$ and $Y_t$ as $Z_t = a + bX_t + cY_t$. We know from exercise 6.1 that
$$\mu_Z(t) = E(Z_t) = a + b\mu_X + c\mu_Y(t),$$
$$\sigma^2_Z(t) = \text{Var}(Z_t) = b^2 \sigma^2_X + c^2 \sigma^2_Y(t),$$
$$\gamma_Z(t, h) = \text{Cov}(Z_t, Z_{t-h}) = b^2 \gamma_X(h) + c^2 \gamma_Y(t, h).$$
Hence, either $\mu_Z(t)$ varies in time, or $\sigma^2_Z(t)$ varies in time, or $\gamma_Z(t, h)$ varies in time for some $h$.
Conclusion: $Z_t$ is not I(0)!
6.3 Let $X_t \sim I(1)$ and $Y_t \sim I(1)$. Define $Z_t = a + bX_t + cY_t$. We must argue that $Z_t$ is not I(0), but $\Delta Z_t$ is I(0).
If $Z_t$ is I(0), then this means that $X_t$ and $Y_t$ are cointegrated... but this cannot be true since they are independent. Therefore, $Z_t$ cannot be I(0)!
On the other hand, $\Delta Z_t$ is a linear combination of I(0) processes:
$$\Delta Z_t = a + bX_t + cY_t - a - bX_{t-1} - cY_{t-1} = b\Delta X_t + c\Delta Y_t.$$
Conclusion: $\Delta Z_t$ is I(0) (see Exercise 6.1).
6.5 $Z_t \sim I(0)$, $W_t \sim I(0)$, $R_t \sim I(1)$. The rest is left without solution.
6.6
(a) Aggregate consumption and GDP are not cointegrated
– This means that there is no long-run relation between consumption and GDP.
– Consumption and GDP are related in the short run: a temporary shock to GDP, such as a recession, affects consumption.
– However, if there is a permanent change in GDP, consumption will in the long run not react to this shift.
(b) Global temperatures and CO2 emissions are not cointegrated
– This would imply that global warming due to CO2 emissions is temporary.
– There is no long-run equilibrium between global temperatures and CO2 emissions; hence, CO2 emissions only affect temperatures in the short run.
(c) The GDP of the Netherlands and Belgium are not cointegrated
– Increasing prosperity in the Netherlands doesn't have long-run effects on the GDP of Belgium.
– Benefits from trade are only temporary?
(d) Oil prices and electricity prices are not cointegrated
– If oil prices are permanently increased due to increased scarcity, electricity prices will probably increase in the short run.
– However, in the long run electricity prices are not related to oil prices.
6.7 The immediate answer should be:
it depends!
Are the point estimates and standard errors
valid?
Is the estimator Gaussian?
At least asymptotically?
At which confidence level
should the answer be given?
Suppose that the estimation results are valid, that the estimator is indeed asymptotically
Gaussian, and that we decide to work with standard confidence levels.
The first regression does not give evidence that $X_t$ Granger causes $Y_t$ because it does not give any information on whether past values of $X_t$ contain information to predict $Y_t$.
The second regression does not provide evidence that $X_t$ Granger causes $Y_t$ even at the 90% confidence level since
$$\tau = 0.35/0.492 \approx 0.71 < 1.645. \tag{2}$$
The third regression provides evidence that $X_t$ Granger causes $Y_t$ at the 95% confidence level, but not at the 99% confidence level,
$$1.960 < \tau = 0.35/0.17 \approx 2.06 < 2.576. \tag{3}$$
Finally, the fourth regression suggests that $X_t$ Granger causes $Y_t$ at any reasonable confidence level since
$$\tau = 0.35/0.002 = 175 > 2.576. \tag{4}$$