ISyE 6669 HW11
Georgia Institute of Technology
November 2023
Problem 1: Dantzig-Wolfe Decomposition
1.
When the given easy constraints are represented as a polyhedron, it is as follows:

P = { x ∈ R^6 : Fx = b, x ≥ 0 }

    [ 1 1 1 0 0 0 ]        [ 20 ]
    [ 0 0 0 1 1 1 ]        [ 20 ]
F = [ 1 0 0 1 0 0 ],   b = [ 10 ]
    [ 0 1 0 0 1 0 ]        [ 20 ]
    [ 0 0 1 0 0 1 ]        [ 10 ]
The polyhedron P is bounded if and only if the system Fd = 0, d ≥ 0 has only the trivial solution d = 0. (If a nonzero d ≥ 0 with Fd = 0 existed, then for any x ∈ P and any θ > 0 we would have F(x + θd) = Fx + θFd = Fx = b and x + θd ≥ 0, so x + θd ∈ P, and P would not be bounded.) Each of the 3rd, 4th, and 5th rows of F requires a sum of two entries of d to be zero, and for the sum of two non-negative numbers to be zero, both numbers must be zero. Therefore the only solution of Fd = 0, d ≥ 0 is (d11, d12, d13, d21, d22, d23) = (0, 0, 0, 0, 0, 0), the trivial solution. This confirms that the given P is a bounded polyhedron.
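The boundedness argument above can also be checked numerically. The sketch below (assuming SciPy is available; F is transcribed from part 1) searches for a nonzero recession direction d ≥ 0 with Fd = 0, normalized by 1ᵀd = 1; infeasibility of that LP confirms the only direction is d = 0, i.e. P is bounded.

```python
import numpy as np
from scipy.optimize import linprog

# Constraint matrix F from part 1 (transportation structure).
F = np.array([
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1],
], dtype=float)

# P is bounded iff its recession cone {d >= 0 : Fd = 0} is {0}.
# Normalize with sum(d) = 1 to exclude d = 0; if this LP is
# infeasible, no nonzero recession direction exists.
A_eq = np.vstack([F, np.ones((1, 6))])
b_eq = np.concatenate([np.zeros(5), [1.0]])
res = linprog(c=np.zeros(6), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print("nonzero recession direction exists:", res.success)  # False -> P bounded
```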
2.
The c, D, and b of the Dantzig-Wolfe master problem are as follows:

c = [0, 1, 0, 0, 1, 1]^T,   D = [1, 0, 0, 0, 0, 1],   b = 12
3.
To construct the (RMP), the following quantities are computed for the two extreme points: c^T x^1, c^T x^2, Dx^1, and Dx^2.

c^T x^1 = [0, 1, 0, 0, 1, 1] (10, 10, 0, 0, 10, 10)^T = 30
c^T x^2 = [0, 1, 0, 0, 1, 1] (0, 10, 10, 10, 10, 0)^T = 20
Dx^1 = [1, 0, 0, 0, 0, 1] (10, 10, 0, 0, 10, 10)^T = 20
Dx^2 = [1, 0, 0, 0, 0, 1] (0, 10, 10, 10, 10, 0)^T = 0

The (RMP) is given as:

min_{λ1, λ2}  30λ1 + 20λ2
s.t.  20λ1 ≤ 12
      λ1 + λ2 = 1
      λ1, λ2 ≥ 0
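The four inner products above are quick to verify with NumPy (a short sketch; x1 and x2 are the extreme points used in the calculation):

```python
import numpy as np

# Data from parts 2-3: master-problem cost c, linking row D, extreme points.
c = np.array([0, 1, 0, 0, 1, 1])
D = np.array([1, 0, 0, 0, 0, 1])
x1 = np.array([10, 10, 0, 0, 10, 10])
x2 = np.array([0, 10, 10, 10, 10, 0])

print(c @ x1, c @ x2)  # 30 20  (objective coefficients of lambda1, lambda2)
print(D @ x1, D @ x2)  # 20 0   (linking-constraint coefficients)
```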
4.
The feasible region of the (RMP) obtained above is shown as a black line segment in Figure 1. The point where the objective function is minimized is marked in orange, and (λ1*, λ2*) = (0, 1).

Figure 1: Feasible region of RMP
5.
1) To find a basis for the Simplex method, the constraints of the (RMP) obtained above are put in standard form with a slack variable:

min  30λ1 + 20λ2
s.t.  20λ1 + s1 = 12
      λ1 + λ2 = 1
      λ1, λ2, s1 ≥ 0

As the starting BFS, λ1 = 0, λ2 = 1, s1 = 12 were chosen. The basis and basic solution at this point are as follows; since the basic solution is non-negative, the current basic solution is a basic feasible solution.

B = [0 1; 1 0],   B^{-1} = [0 1; 1 0],   x_B = (λ2, s1)^T = B^{-1} b = (1, 12)^T,   x_N = λ1 = 0

The reduced cost of the nonbasic variable λ1 (whose column is A1 = (20, 1)^T) is

c̄1 = c1 − c_B^T B^{-1} A1 = 30 − [20, 0] [0 1; 1 0] (20, 1)^T = 30 − 20 = 10

Since the reduced cost is positive, the current solution is optimal. Furthermore, the dual variables can be calculated as

[ŷ^T, r̂] = c_B^T B^{-1} = [20, 0] [0 1; 1 0] = [0, 20]
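The basic solution, reduced cost, and dual variables above can be reproduced with a few lines of NumPy (a sketch of the same computation, with columns ordered λ1, λ2, s1):

```python
import numpy as np

# Standard-form RMP: 20*l1 + s1 = 12, l1 + l2 = 1; basis {l2, s1}.
A = np.array([[20.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])      # columns: l1, l2, s1
c = np.array([30.0, 20.0, 0.0])
rhs = np.array([12.0, 1.0])

basis = [1, 2]                       # indices of l2 and s1
B = A[:, basis]
Binv = np.linalg.inv(B)
xB = Binv @ rhs                      # basic solution (l2, s1) = (1, 12)
y = c[basis] @ Binv                  # dual row [y_hat, r_hat] = [0, 20]
rc1 = c[0] - y @ A[:, 0]             # reduced cost of l1 = 10
print(xB, y, rc1)
```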
2) The dual of the (RMP) obtained in question 3, the (DRMP), can be derived through the following process.
Step 1: Relax the objective function.

30λ1 + 20λ2 + y1(12 − 20λ1) + y2(1 − λ1 − λ2),   y1 ≤ 0, y2 free

Step 2: Relax the feasible region.

Z(y) = min_{λ1, λ2 ≥ 0}  30λ1 + 20λ2 + y1(12 − 20λ1) + y2(1 − λ1 − λ2)

Z(y) = min_{λ1 ≥ 0} (30 − 20y1 − y2)λ1 + min_{λ2 ≥ 0} (20 − y2)λ2 + (12y1 + y2)

Z(y) = 12y1 + y2   for (30 − 20y1 − y2) ≥ 0 and (20 − y2) ≥ 0

Step 3: Find the best lower bound (DRMP).

max_{y1, y2}  12y1 + y2
s.t.  30 − 20y1 − y2 ≥ 0
      20 − y2 ≥ 0
      y1 ≤ 0, y2 free
The (RMP) obtained in question 3 has optimal solution (λ1*, λ2*) = (0, 1), as found in question 4. Applying dual complementary slackness in the (DRMP), λj*(cj − y^T Aj) = 0 for all j. Hence, since λ2* is nonzero, we have λ2*(20 − y2*) = 0, so y2* = 20. Additionally, applying primal complementary slackness, yi(ai^T λ − bi) = 0 for all i. The inequality 20λ1* ≤ 12 is slack (since 20λ1* = 0 < 12), so y1*(20λ1* − 12) = 0 forces y1* = 0. Therefore, the optimal dual variables are [0, 20], which confirms that they are the same as the values obtained in part 5-1.
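As a cross-check, the (DRMP) can also be solved directly. The sketch below assumes SciPy is available and negates the maximization into the minimization form `linprog` expects:

```python
from scipy.optimize import linprog

# DRMP: max 12*y1 + y2  <=>  min -12*y1 - y2
res = linprog(
    c=[-12.0, -1.0],
    A_ub=[[20.0, 1.0],               # 20*y1 + y2 <= 30
          [0.0, 1.0]],               # y2 <= 20
    b_ub=[30.0, 20.0],
    bounds=[(None, 0), (None, None)],  # y1 <= 0, y2 free
)
print(res.x)     # optimal duals (0, 20)
print(-res.fun)  # optimal value 20, matching the RMP optimum 30*0 + 20*1
```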
6.
The reduced-cost (pricing) problem of the (RMP) can be expressed in maximization form as follows:

Z = max_x  (−c^T + ŷ^T D) x + r̂
s.t.  Fx = b, x ≥ 0

Using the dual variables obtained earlier (ŷ = 0, r̂ = 20), the objective row is

−c^T + ŷ^T D = [0, −1, 0, 0, −1, −1] + 0 × D = [0, −1, 0, 0, −1, −1]

so the pricing problem becomes

Z = max_x  −x12 − x22 − x23 + 20
s.t.
[ 1 1 1 0 0 0 ]                                 [ 20 ]
[ 0 0 0 1 1 1 ]                                 [ 20 ]
[ 1 0 0 1 0 0 ] (x11, x12, x13, x21, x22, x23)^T = [ 10 ],   x ≥ 0
[ 0 1 0 0 1 0 ]                                 [ 20 ]
[ 0 0 1 0 0 1 ]                                 [ 10 ]

In minimization form, the Dantzig-Wolfe decomposition algorithm terminates when the smallest reduced cost is non-negative. Equivalently, in the maximization form above, it terminates when the largest negated reduced cost is non-positive, i.e. when Z ≤ 0.
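The pricing problem can be solved numerically to confirm that the algorithm terminates here. A sketch assuming SciPy's `linprog`, with F and b transcribed from part 1:

```python
import numpy as np
from scipy.optimize import linprog

F = np.array([
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1],
], dtype=float)
b = np.array([20, 20, 10, 20, 10], dtype=float)

# Pricing: max -x12 - x22 - x23 + 20  <=>  min x12 + x22 + x23, then negate.
obj = np.array([0, 1, 0, 0, 1, 1], dtype=float)
res = linprog(c=obj, A_eq=F, b_eq=b, bounds=[(0, None)] * 6)
Z = -res.fun + 20
print(Z)  # 0.0 -> Z <= 0, so Dantzig-Wolfe terminates
```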
7.
The five equality constraints in the given problem are identical to the flow conservation constraints of a transportation problem. The first equality constraint represents the amount of product transported from warehouse 1 to cities 1, 2, and 3, signifying the total supply from warehouse 1. The second equality constraint represents the amount transported from warehouse 2 to cities 1, 2, and 3, signifying the total supply from warehouse 2. The third equality constraint represents the amount transported from warehouses 1 and 2 to city 1, signifying the demand of city 1. The fourth and fifth equality constraints, for the same reason, represent the demands of city 2 and city 3, respectively. The total supply is 20 + 20 = 40 and the total demand is 10 + 20 + 10 = 40; by flow conservation, the two values are equal.
Problem 2: Image compression and SVD

As shown in the attached Jupyter notebook, an SVD decomposition was carried out and used to perform image compression at k = 5, 15, 25.
HW11-2

In [1]: import numpy as np
        import matplotlib.pyplot as plt

In [2]: X = np.loadtxt("clownImage.txt")

In [3]: plt.gray()
        plt.imshow(X)
        plt.show()

In [4]: def image(X, k):
            # Full SVD; keep only the first k singular values/vectors.
            U, s, Vt = np.linalg.svd(X)
            Sigma = np.zeros((U.shape[1], Vt.shape[0]))
            np.fill_diagonal(Sigma, s)
            output = U @ Sigma[:, :k] @ Vt[:k, :]
            plt.imshow(output)
            plt.show()

In [5]: image(X, 5)

In [6]: image(X, 15)

In [7]: image(X, 25)