498DS_E4.jl — Pluto.jl
http://localhost:1234/edit?id=8c8c6da4-5a8c-11ec-0ee6-a1aa10df7925
12/11/21, 10:05 AM

School: University of Illinois, Urbana-Champaign
Course: 498
Subject: Industrial Engineering
Date: Dec 6, 2023
Pages: 14
Exam 4 (final exam): Autonomous vehicles (part 2!)
Imagine (again!) that you recently started work at a self-driving vehicle company, and your job is to help create a navigation system for the company's vehicles.

There are a lot of things that need to happen to make a working navigation system, but one of them is that there needs to be an algorithm that can read road signs, and that's what you're working on.

As a start, you're given the dataset below, which consists of pictures of arrows on road signs (the column `sign` in the dataframe `df` below), and labels of the direction (in radians; the column `direction` in the dataframe below) that each arrow is pointing.
```julia
begin
    using Plots
    using StatsPlots
    using Images
    using DataFrames
    using MultivariateStats
    using Distributions
    using Statistics
    using Zygote
    using BSON
    using LinearAlgebra
end
```
(Preview of `df`: 1000 rows with columns `sign` and `direction`; direction values such as 4.92173, 1.52973, 3.15105, … are shown in the notebook.)

So, for example, the `sign` in the first row of the dataset below looks like this: (image of an arrow sign), and it is pointing in the `direction` 0.9258 (or 0.2947π) radians, which is equivalent to 53°.

You can work with this data just like any other dataframe:
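As a quick sanity check on the units, Base Julia can do the radians-to-degrees conversion directly (no packages needed):

```julia
θ = 0.925807   # the first direction label, in radians
rad2deg(θ)     # ≈ 53.0 degrees
θ / π          # ≈ 0.2947, i.e. the label as a multiple of π
```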
```julia
df.sign[1:5]        # (a vector displayed as a row to save space)

df.direction[1:5]   # [0.925807, 4.34847, 1.97531, 5.42164, 1.23705]

S = df.sign[:]

G = [Gray.(df.sign[i]) for i in 1:1000]
```
(Notebook output: `X` is a 2450×1000 Matrix{Float64}; the visible entries are all 1.0. `size(X)` returns `(2450, 1000)`.)
```julia
begin
    X = zeros(2450, 1000)
    for i = 1:1000
        si = Float64.(channelview(G[i]))
        si2 = reshape(si, (50*49, 1))
        X[:, i] = si2
    end
end

X

size(X)   # (2450, 1000)

function normalize(x, y)
    x_norm = zero(x)
    for i in 1:size(x, 2)
        x_norm[:, i] .= (x[:, i] .- mean(x[:, i])) ./ std(x[:, i])
    end
    y_norm = (y .- mean(y)) ./ std(y)
    return x_norm, y_norm
end
```
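A quick way to convince yourself what `normalize` does: each column of the returned matrix (and the returned label vector) is standardized to mean ≈ 0 and standard deviation ≈ 1. A self-contained sketch on toy data (restating the function so the cell runs on its own):

```julia
using Statistics  # mean, std

function normalize(x, y)
    x_norm = zero(x)
    for i in 1:size(x, 2)
        x_norm[:, i] .= (x[:, i] .- mean(x[:, i])) ./ std(x[:, i])
    end
    y_norm = (y .- mean(y)) ./ std(y)
    return x_norm, y_norm
end

x = [1.0 10.0; 2.0 20.0; 3.0 30.0]   # two columns with very different scales
y = [0.5, 1.5, 2.5]
xn, yn = normalize(x, y)
mean(xn, dims=1)   # ≈ [0.0 0.0]
std(xn, dims=1)    # ≈ [1.0 1.0]
```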
(Notebook output of `normalize(X, labels)`: a 2450×1000 Matrix{Float64} of standardized pixel values together with the normalized label vector. The PCA projection `X2` computed below is a 20×1000 Matrix{Float64}.)
```julia
X_norm, labels_norm = normalize(X, labels)

begin
    pca_model = fit(PCA, X_norm, maxoutdim=20)
    X_pca = Matrix(MultivariateStats.transform(pca_model, X_norm)')
    X2 = Float64.(X_pca')
end

function pad(x, n::Int)
    vcat(zeros(n, size(x, 2)), x, zeros(n, size(x, 2)))
end
```
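`pad` just stacks `n` rows of zeros above and below its input, so a matrix with `r` rows comes back with `r + 2n` rows (the column count is unchanged). For example:

```julia
pad(x, n::Int) = vcat(zeros(n, size(x, 2)), x, zeros(n, size(x, 2)))  # as defined above

x = ones(3, 2)
size(pad(x, 2))   # (7, 2)
pad(x, 1)         # zeros in rows 1 and 5, ones in rows 2–4
```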
```julia
function conv_1d(x, filter)
    n = length(filter) ÷ 2
    pads_x = pad(x, n)
    [sum(pads_x[i-n:i+n, j] .* filter) for i in 1+n:size(pads_x, 1)-n, j in 1:size(pads_x, 2)]
end
```
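To see what `conv_1d` computes, here is a check on a toy one-column input (the `pad` and `conv_1d` definitions are restated so this cell runs on its own). With the width-3 "identity" filter `[0, 1, 0]`, the zero-padded convolution returns the input unchanged, and because of the padding the output always has the same number of rows as the input:

```julia
pad(x, n::Int) = vcat(zeros(n, size(x, 2)), x, zeros(n, size(x, 2)))

function conv_1d(x, filter)
    n = length(filter) ÷ 2
    pads_x = pad(x, n)
    [sum(pads_x[i-n:i+n, j] .* filter) for i in 1+n:size(pads_x, 1)-n, j in 1:size(pads_x, 2)]
end

x = reshape([1.0, 2.0, 3.0, 4.0], (4, 1))   # one length-4 signal, as a column
conv_1d(x, [0.0, 1.0, 0.0])                 # the identity filter: returns x
```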
```julia
function maxpool_1d(x::AbstractMatrix{<:Real}, poolsize::Int)
    n = poolsize ÷ 2
    matrix = pad(x, n)
    A = [maximum(matrix[i-n:i+n, j]) for i in 1+n:size(matrix, 1)-n, j in 1:size(matrix, 2)]
end
```
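One detail worth noticing: as written, `maxpool_1d` is a stride-1 sliding maximum over a window of width `2*(poolsize ÷ 2) + 1`, so (thanks to the zero padding) the output has the same number of rows as the input rather than being downsampled. A self-contained check:

```julia
pad(x, n::Int) = vcat(zeros(n, size(x, 2)), x, zeros(n, size(x, 2)))

function maxpool_1d(x::AbstractMatrix{<:Real}, poolsize::Int)
    n = poolsize ÷ 2
    matrix = pad(x, n)
    [maximum(matrix[i-n:i+n, j]) for i in 1+n:size(matrix, 1)-n, j in 1:size(matrix, 2)]
end

x = reshape([1.0, 3.0, 2.0, 5.0], (4, 1))
maxpool_1d(x, 2)   # each entry becomes the max of its 3-row neighborhood: [3, 3, 5, 5]
```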
```julia
relu(x) = max(x, 0)

function dense_layer(x::Matrix{T}, w::Matrix{T}, b::Vector{T}) where T <: AbstractFloat
    return w * x .+ b
end

function twolayer(x::Matrix{T}, p::Vector) where T <: AbstractFloat
    w1, b1, w2, b2 = p
    output1 = relu.(dense_layer(x, w1, b1))
    output2 = dense_layer(output1, w2, b2)
    return output2
end

function multilayer(x::Matrix{T}, p::Vector) where T <: AbstractFloat
    w1, b1, w2, b2, w3, b3 = p
    output1 = relu.(dense_layer(x, w1, b1))
    output2 = relu.(dense_layer(output1, w2, b2))
    output3 = dense_layer(output2, w3, b3)
    return output3
end
```
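To make the dense layers concrete, here is a tiny hand-checkable forward pass through `twolayer` (definitions restated, with the type annotations dropped for brevity, so the cell is self-contained). The first layer is the identity, ReLU zeroes out the negative feature, and the second layer sums what is left and adds a bias:

```julia
relu(x) = max(x, 0)
dense_layer(x, w, b) = w * x .+ b

function twolayer(x, p)
    w1, b1, w2, b2 = p
    output1 = relu.(dense_layer(x, w1, b1))
    dense_layer(output1, w2, b2)
end

x  = reshape([1.0, -1.0], (2, 1))          # one sample with 2 features
w1 = [1.0 0.0; 0.0 1.0]; b1 = [0.0, 0.0]   # identity first layer
w2 = [1.0 1.0];          b2 = [0.5]        # sum the rectified features, add 0.5
twolayer(x, [w1, b1, w2, b2])              # relu([1, -1]) = [1, 0] → 1 + 0.5 = 1.5
```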
```julia
function softmax(x::AbstractMatrix{<:AbstractFloat})
    goal = sum(exp.(x), dims=1)
    return exp.(x) ./ goal
end
```
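`softmax` turns each column of a score matrix into a probability distribution (positive entries summing to 1). Note that the `model` function below predicts a single real-valued direction and does not call `softmax`. A self-contained check:

```julia
function softmax(x::AbstractMatrix{<:AbstractFloat})
    goal = sum(exp.(x), dims=1)
    return exp.(x) ./ goal
end

scores = [1.0 0.0; 2.0 0.0]
probs = softmax(scores)
sum(probs, dims=1)   # each column sums to 1
```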
```julia
function mse(ŷ, y)
    sum((ŷ .- y).^2) / length(y)
end
```
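`mse` is the usual mean squared error. A tiny hand check (definition restated so the cell is self-contained):

```julia
mse(ŷ, y) = sum((ŷ .- y) .^ 2) / length(y)

mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])   # (0 + 0 + 4) / 3 ≈ 1.333
```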
```julia
function model(x, p)
    conv1, conv2, w1, b1, w2, b2, w3, b3 = p
    inputsize = size(x, 1)
    # poolsize is taken from the hyperparameter cell below
    o1 = maxpool_1d(relu.(conv_1d(x, conv1)), poolsize)
    o2 = maxpool_1d(relu.(conv_1d(o1, conv2)), poolsize)
    #o3 = maxpool_1d(relu.(conv_1d(o2, conv3)), poolsize)
    o4 = multilayer(o2, [w1, b1, w2, b2, w3, b3])
    #o4'
end

size(labels)   # (1000,)
```
(Notebook outputs: the hyperparameter cell below returns its last value, η = 0.01; `o1` is a 20×1000 Matrix{Float64} of pooled convolution activations.)
```julia
begin
    inputsize = size(X2, 1)
    hiddensize1 = 10
    hiddensize2 = 10
    nlabels = 1

    conv1 = rand(5) .- 0.5
    conv2 = rand(5) .- 0.5
    #conv3 = rand(5) .- 0.5
    w1 = rand(hiddensize1, inputsize) .- 0.5
    b1 = rand(hiddensize1) .- 0.5
    w2 = rand(hiddensize2, hiddensize1) .- 0.5
    b2 = rand(hiddensize2) .- 0.5
    w3 = rand(nlabels, hiddensize2) .- 0.5
    b3 = rand(nlabels) .- 0.5
    p = [conv1, conv2, w1, b1, w2, b2, w3, b3]

    poolsize = 4
    batchsize = 25
    epochs = 50
    steps_per_epoch = size(X, 2) ÷ batchsize
    nsteps = steps_per_epoch * epochs
    η = 0.01
end
```
```julia
o1 = maxpool_1d(relu.(conv_1d(X2, conv1)), poolsize)

#o2 = maxpool_1d(relu.(conv_1d(o1, conv2)), poolsize)

#o3 = maxpool_1d(relu.(conv_1d(o2, conv3)), poolsize)
```
(Notebook outputs: `output1` is a 10×1000 Matrix{Float64}; `labels` is a Vector{Float64} beginning [0.925807, 4.34847, 1.97531, 5.42164, 1.23705, 0.703343, 3.23173, 2.84555, 4.92173, 1.52…]; `out` is a 1×1000 Matrix{Float64}; `out2` is its vectorized form; the initial `mse(out2, labels_norm)` is 2.20582059552703.)
```julia
output1 = dense_layer(X2, w1, b1)

#output2 = dense_layer(output1, w2, b2)

#size(output2')

labels = df.direction[:]

typeof(labels)   # Vector{Float64} (alias for Array{Float64, 1})

out = Float64.(model(X2, p))

out2 = vec(out)

mse(out2, labels_norm)

#mse(vec(Float64.(model(X2[:, sample], p))), labels[sample])
```
(Notebook output: the learned parameter vector `p`, beginning [[0.293149, 0.13947, -0.755168, -0.547618, 0.170855], [0.486456, 0.545739, 0.040114, -0…].)
Your task is to create an algorithm that, when given a list of pictures of road signs like the ones above, returns the direction (in radians) that each arrow is pointing.

Specifically, you will submit to PrairieLearn a version of the function `predict_directions` below, which takes as its arguments a vector of images of road signs `signs` and a vector of model parameters `p`, and returns a vector of the directions that the signs are pointing.
```julia
begin
    function error(p)
        sample = rand(1:size(X2, 2), batchsize)
        mse(vec(Float64.(model(X2[:, sample], p))), labels_norm[sample])
    end
    for i in 1:nsteps
        g = error'(p)
        p .-= η .* g
    end
    p
end

function predict_directions(signs::Vector{Matrix{RGB{N0f8}}}, p::AbstractVector)::Vector{Float64}
    # This function currently randomly guesses the direction of each sign.
    # It doesn't even use the model parameters that it is being given.
    # Replace this code with something better!
    return 2π .* rand(length(signs))
end
```
Similar to exam three, you may end up creating a model that has some data or parameters that are not easy to copy and paste into the PrairieLearn code window. If this is the case, you can save them to a file named `e4_parameters.bson` using the line of code below. (Note that it is expected that your vector of parameters will be named `p`.) You can then upload your `e4_parameters.bson` file (which should now be in the same directory as this notebook) to PrairieLearn for grading.

In addition to your model parameters, you can also save additional information that may be needed for running your model, such as the `pca_model` below.

The goal of this is obviously to help a vehicle navigate on some sort of route, so that is how your algorithm will be graded. The code below creates a random route (which is just a randomly generated vector of arrow signs), uses your `predict_directions` function to make a prediction of the route the vehicle should follow based on those signs, and then compares that prediction to the actual route.

You can re-run the cell below to see how your function performs on different routes. Your navigation error for each route is shown by the black line, with the magnitude of the error shown in the figure legend.

(You shouldn't need to edit the code below. It just demonstrates how your function will be graded.)
```julia
# This is an example parameter vector.
# Replace it with the actual learned parameters for your model.
#p = [1.0, 2, 3, 4, 5, 6, 7, 8, 9, 10]

bson("e4_parameters.bson", p = p, pca_model = fit(PCA, [1 2; 3 4]))
```
Your grade for this exam will be based on the error of your algorithm averaged across 10 randomly-generated routes such as the one above. The code below calculates an approximate value for this error. (However, the routes that PrairieLearn will use to grade you will be different, so your error value may not be exactly the same as what is shown here, but it should be close.)
```julia
begin
    eval_df = create_test_dataset(20)
    eval_x, eval_y = ipc(ones(size(eval_df, 1)), eval_df.direction)

    plts = scatter([0], [0], lab="Start", c=:green)
    plot!(cumsum(vcat(0, eval_x...)), cumsum(vcat(0, eval_y...)),
          lab="Actual route", lw=3, c=:magenta)
    scatter!([sum(eval_x)], [sum(eval_y)], lab="Actual finish", c=:magenta)

    pred_directions = predict_directions(eval_df.sign, p)
    pred_x, pred_y = ipc(ones(size(eval_df, 1)), pred_directions)
    plot!(cumsum(vcat(0, pred_x...)), cumsum(vcat(0, pred_y...)),
          c=:aquamarine, lw=3, lab="Your predicted route")
    scatter!([sum(pred_x)], [sum(pred_y)], c=:aquamarine, lab="Your predicted finish")

    err(x̂, ŷ, x, y) = norm([sum(x̂) .- sum(x), sum(ŷ) .- sum(y)])
    e = err(pred_x, pred_y, eval_x, eval_y)
    plot!([sum(eval_x), sum(pred_x)], [sum(eval_y), sum(pred_y)],
          lw=2, ls=:dashdotdot, color=:black, arrow=true,
          lab="Error=$(round(e, sigdigits=2))")
    plts
end
```
(Notebook output: 13.718285485066165)

So, your mean error in this example grading calculation is 13.7. The exam will be graded using the following scale:

error ≤ 25: 25%
error ≤ 15: 50%
error ≤ 10: 60%
error ≤ 8: 70%
error ≤ 6: 75%
error ≤ 4: 80%
error ≤ 3: 85%
error ≤ 2: 90%
error ≤ 1.5: 95%
error ≤ 1.0: 100%

Good luck!
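For reference, the scale above can be encoded as a small helper. This is hypothetical — `grade` is not part of the exam code, and returning 0% for errors above 25 is an assumption, since the scale doesn't say what happens there:

```julia
# Hypothetical helper encoding the grading scale above.
# Returning 0 for error > 25 is an assumption not stated in the exam.
function grade(error)
    scale = [(1.0, 100), (1.5, 95), (2, 90), (3, 85), (4, 80),
             (6, 75), (8, 70), (10, 60), (15, 50), (25, 25)]
    for (cutoff, pct) in scale
        error ≤ cutoff && return pct
    end
    return 0
end

grade(13.7)   # 50, matching the example error above
```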
Hint: Recall that you can convert an image into a data array using the `channelview` function. For example, to convert the first sign image in the training dataset `df` into an array:
```julia
begin
    errs = []
    for i ∈ 1:10
        eval_df = create_test_dataset(20)
        eval_x, eval_y = ipc(ones(size(eval_df, 1)), eval_df.direction)
        pred_directions = predict_directions(eval_df.sign, p)
        pred_x, pred_y = ipc(ones(size(eval_df, 1)), pred_directions)
        e = err(pred_x, pred_y, eval_x, eval_y)
        push!(errs, e)
    end
    avg_err = mean(errs)
end
```
```julia
s1 = Float64.(channelview(df.sign[1]))
```