Project Assignment 3 Report
Project GitHub
I. Introduction:

In previous works on Alzheimer’s detection stage classification, many models such as Convolutional Neural Networks (CNN), Random Forests, Support Vector Machines (SVM), Naive Bayes, and K-Nearest Neighbors were used as MRI image classifiers. We would like to create a new model or improve upon an existing model. We will evaluate the results with performance measurements and compare the accuracy of our model with previous traditional models. The goal of our paper is to propose a new or modified model (a modified CNN) that will improve the classification accuracy of Alzheimer’s dementia level using the same MRI image dataset.
II. Methodology (ideas we tried):

We chose Python as our programming language to develop and execute our machine learning models. Based on previous works, Python was the most popular and easiest programming language to use for building and running models, and many machine learning libraries/packages are available for it. Three of the most common machine learning libraries used in various literature reviews were PyTorch, TensorFlow, and Scikit-learn. We will primarily use the TensorFlow library for most of our implementation, since TensorFlow offers better visualization and easier deployment of our training models. TensorFlow is a good machine learning framework to support the building of neural networks like CNNs. We will also incorporate other libraries: Matplotlib for creating 2D plots and Pandas for analyzing the data.
The original dataset also used an Inception model (transfer learning), which may be placing an artificial ceiling on accuracy. We plan on removing the Inception model and instead adding a layer of normalization and fine-tuning for the model.
Transfer learning within our dataset:

import tensorflow as tf
from tensorflow.keras.models import Sequential

inception_model = Sequential()

# Load InceptionV3 pretrained on ImageNet, without its top classifier
# (the classes argument is ignored when include_top=False).
pretrained_model = tf.keras.applications.InceptionV3(
    include_top=False,
    input_shape=(224, 224, 3),
    pooling='avg',
    classes=4,
    weights='imagenet')

# Freeze the pretrained layers so their ImageNet weights are reused as-is.
for layer in pretrained_model.layers:
    layer.trainable = False

inception_model.add(pretrained_model)
Example of fine-tuning in a similar dataset about pneumonia:

# Save the best model seen so far during training.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    "xray_model.h5", save_best_only=True)

# Stop training once validation performance stops improving and
# restore the weights from the best epoch.
early_stopping_cb = tf.keras.callbacks.EarlyStopping(
    patience=180, restore_best_weights=True)
Another methodology we applied was with the InceptionV3-trained CNN. For this purpose we had to rescale the images to 176 x 208 pixel resolution and convert them to RGB format, so that the pretrained model could work effectively. The transfer learning method we are using requires certain layers of the CNN to be frozen in order to reuse the pretrained model weights. The key steps in this transfer learning method are to freeze the base model weights, swap in a single dense layer for classification, implement early stopping based on validation AUC, and train the model with InceptionV3. We then utilize the kerastuner package to optimize the hyperparameters.
Dropout rates were set to 0.50 and 0.60, and the learning rate was set to 0.0001. The size of the convolution layer was set to 1024, as was the dense layer. The final model layers are as follows: InceptionV3, dropout, batch normalization, Conv2D twice, MaxPooling2D, dropout, batch normalization, flatten, and two dense layers.
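
A minimal sketch of that final stack follows, assuming the tuned values above (0.50 and 0.60 dropout, 1024-unit convolution and dense layers, learning rate 0.0001); details such as padding and activations are our assumptions, not the exact tuned configuration:

import tensorflow as tf
from tensorflow.keras import layers

# Frozen InceptionV3 base (weights reused from ImageNet).
base = tf.keras.applications.InceptionV3(
    include_top=False, weights='imagenet', input_shape=(176, 208, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    layers.Dropout(0.5),                 # first tuned dropout rate
    layers.BatchNormalization(),
    layers.Conv2D(1024, 3, padding='same', activation='relu'),
    layers.Conv2D(1024, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(2),
    layers.Dropout(0.6),                 # second tuned dropout rate
    layers.BatchNormalization(),
    layers.Flatten(),
    layers.Dense(1024, activation='relu'),
    layers.Dense(4, activation='softmax'),  # four dementia levels
])

# Training is monitored on AUC, matching the early stopping criterion above.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=[tf.keras.metrics.AUC(name='auc')])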
The final iteration produced these loss statistics.
Model AUC on Test Data: 87.0%

              precision    recall  f1-score   support

        mild       0.69      0.39      0.50       179
    moderate       1.00      0.25      0.40        12
      normal       0.72      0.77      0.74       640
   very-mild       0.59      0.64      0.62       448

    accuracy                           0.67      1279
   macro avg       0.75      0.51      0.57      1279
weighted avg       0.67      0.67      0.66      1279
[Figures: Validation Loss by Epochs, Validation AUC by Epochs (training, validation, and test AUC curves; test AUC 87.0%), and Confusion Matrix over the four classes.]

Total Time: 84.57 mins
Another methodology we tried was an EfficientNet B3 model. EfficientNet B3 is an advanced computer vision model used for understanding and working with images. It's part of a series of models created by Google Brain researchers. What's special about EfficientNet B3 is that it's designed to be very good at recognizing and making sense of pictures while also being efficient with computing resources: it can quickly understand what's in an image without needing as much computing power as some other models.
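
A minimal sketch of how we wire EfficientNet B3 in (the classification head and optimizer settings here are our assumptions, mirroring the transfer-learning setup above):

import tensorflow as tf

# Pretrained EfficientNetB3 base with a small head for the four classes.
base = tf.keras.applications.EfficientNetB3(
    include_top=False, weights='imagenet',
    input_shape=(176, 208, 3), pooling='avg')
base.trainable = False   # freeze the pretrained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(4, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])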
The following were the training and test loss and the training and test accuracy we received using the given model.
[Figures: Training and Validation Loss and Training and Validation Accuracy by epoch; best epoch = 17.]

Fig: Training and validation loss and accuracy line graphs.
During the evaluation of the model, we received an accuracy of 0.75 during validation and a test accuracy of 0.69.

Train Loss: 0.89649557620286942
Test Loss: 0.9822117853164673
Test Accuracy: 0.6953125

The following is the graphical representation of the confusion matrix.
[Figure: Confusion matrix for the EfficientNet B3 model across the four classes (MildDemented, ModerateDemented, NonDemented, VeryMildDemented).]
Lastly, for the current approach we'll be using the VGG-18 model to train the data. The VGG-18 model is a type of deep neural network specifically designed for image recognition tasks. It's part of the VGG (Visual Geometry Group) series of models developed by researchers at Oxford University. What's notable about VGG-18 is its architecture: it consists of 18 layers, including convolutional layers, max-pooling layers, and fully connected layers. These layers work together to process and understand features within images.
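
Since tf.keras.applications ships VGG16 and VGG19 but no 18-layer variant, the sketch below is our assumption of a small VGG-style stack (repeated 3x3 convolutions followed by max pooling, then fully connected layers), not the exact network we train:

import tensorflow as tf
from tensorflow.keras import layers

def vgg_block(x, filters, convs):
    # A VGG block: `convs` 3x3 convolutions, then 2x2 max pooling.
    for _ in range(convs):
        x = layers.Conv2D(filters, 3, padding='same', activation='relu')(x)
    return layers.MaxPooling2D(2)(x)

inputs = tf.keras.Input(shape=(176, 208, 3))
x = vgg_block(inputs, 64, 2)
x = vgg_block(x, 128, 2)
x = vgg_block(x, 256, 2)
x = layers.Flatten()(x)
x = layers.Dense(512, activation='relu')(x)     # fully connected layer
outputs = layers.Dense(4, activation='softmax')(x)
vgg_model = tf.keras.Model(inputs, outputs)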
The following were the training and test loss results we received using the given model.
[Figure: Training and validation loss over batches processed.]
Currently we are testing the model for benchmarks as well as working on augmenting the dataset.
III. Data Loading and Processing:

Our data source is a collection of MRI images downloaded from the Kaggle Alzheimer’s dataset to a local directory. We used TensorFlow’s tf.keras utilities to preprocess and load the images. The image set is split into 4 classes of dementia levels, and the image size is set to 176 x 208. To visualize and verify the loaded images, we used Matplotlib to display the images in our model. We want to convert our model’s integer labels into one-hot encoded labels, since we are working with categorical data instead of continuous data. The raw data does not come to us as feature vectors, so using the one-hot function will encode each class label as a binary vector for easier processing and evaluation of the data.
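
A sketch of this loading step is below; the directory path is a hypothetical local path, and the batch size is an assumption:

import tensorflow as tf

# Load the MRI images from a local copy of the Kaggle dataset,
# resized to 176 x 208 and labeled by their class subdirectory.
train_ds = tf.keras.utils.image_dataset_from_directory(
    'alzheimers_dataset/train',    # hypothetical local path
    label_mode='int',
    image_size=(176, 208),
    batch_size=32)

# Convert the integer labels into one-hot encoded labels.
num_classes = 4
train_ds = train_ds.map(
    lambda x, y: (x, tf.one_hot(y, depth=num_classes)))

(Passing label_mode='categorical' directly would also yield one-hot labels; the explicit tf.one_hot call just makes the encoding step visible.)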
IV. Building Our ML Model:

A CNN-based approach was the most popular architecture in many literature reviews for Alzheimer’s detection. This is due to CNNs’ optimal performance in image classification, detection, and segmentation. We will be implementing a modified CNN model using TensorFlow’s Keras Sequential API to create and train our model. We will define a convolutional layer in Keras, which takes input arguments such as the input shape; the CNN model takes tensors of a given shape as input. We will also need to define other layers such as the Rectified Linear Unit (ReLU) activation, pooling, dropout, and fully connected layers. The TensorFlow library offers functions to support the building of these layers, as sketched below.
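
A minimal sketch of such a Sequential stack follows; the filter counts and layer sizes are placeholders we still need to experiment with, not final choices:

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation='relu',          # convolution + ReLU
                  input_shape=(176, 208, 3)),
    layers.MaxPooling2D(2),                          # pooling layer
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(2),
    layers.Dropout(0.5),                             # dropout layer
    layers.Flatten(),
    layers.Dense(128, activation='relu'),            # fully connected layer
    layers.Dense(4, activation='softmax'),           # four dementia levels
])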
The number of filters in each convolutional layer will be something we need to experiment with and see the results. Adding many filters will allow for better learning capacity, but may lead to overfitting, which will cause issues with the training model.
With our CNN-based model, we will modify the multi-layer convolutional network and try to improve upon the traditional approach using TensorFlow's machine learning library, by using a much larger and more nuanced dataset. Additionally, the original dataset for Alzheimer's classification was imbalanced: the majority of the images (72%) were classified as demented, and only 28% were classified as non-demented. Our modified approach will apply a layer of data balancing by accounting for the ratio of demented to non-demented images through the use of class weights, as sketched below.
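
A sketch of the class-weighting step, with hypothetical per-class image counts (the weights are set inversely proportional to class frequency):

# Hypothetical per-class training counts for the four dementia levels.
counts = {0: 717, 1: 52, 2: 2560, 3: 1792}
total = sum(counts.values())

# Weight each class inversely to its frequency so under-represented
# classes contribute proportionally more to the loss.
class_weight = {c: total / (len(counts) * n) for c, n in counts.items()}

# model.fit(train_ds, epochs=20, class_weight=class_weight)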
We also will be applying a layer of fine-tuning. We want to fine-tune both our model and our learning rate: too high a learning rate will cause our model to diverge, and too low a rate will cause our model to train slowly.
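
A sketch of how that fine-tuning pass could look, continuing the transfer-learning sketches above (the learning rate value here is our assumption, chosen small for the reason just stated):

# Unfreeze the pretrained base and recompile with a small learning rate,
# so the pretrained weights shift gently instead of diverging.
base.trainable = True
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss='categorical_crossentropy',
    metrics=['accuracy'])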
V. Model Evaluation:

Model accuracy and model loss will be evaluated after the model training. We will compare performance on both the training and testing datasets by plotting the metrics in a visual graph. To achieve a more accurate assessment of our model, we will use different numbers of epoch cycles and compare the results.
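
A sketch of the evaluation plot, continuing the sketches above (val_ds is an assumed validation split; the epoch count is a placeholder):

import matplotlib.pyplot as plt

history = model.fit(train_ds, validation_data=val_ds, epochs=20)

# Plot training vs. validation accuracy per epoch.
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='val')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.title('model accuracy')
plt.legend()
plt.show()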
First we wanted to view the model loss and accuracy of the original dataset consisting of ~5,000 images. Then we want to view the same graphs for the augmented Alzheimer's dataset consisting of 40,000 images, before and after removing the Inception model.
Original model accuracy & loss:

[Figures: model accuracy (train vs. validation by epoch) and model loss (train vs. validation by epoch).]
Paper | Accuracy

Brain MRI Analysis for Alzheimer’s Disease Diagnosis Using CNN-Based Feature Extraction and Machine Learning | CNN softmax, accuracy 96%

Deep learning-based classification of healthy aging controls, mild cognitive impairment and Alzheimer’s disease using fusion of MRI-PET imaging | CNN with Adam optimizer, accuracy 95.06%

An Alzheimer’s disease classification method using fusion of features from brain Magnetic Resonance Image transforms and deep convolutional networks | CNN with SGDM optimizer, accuracy 85.4%

Classification of cognitively normal controls, mild cognitive impairment and Alzheimer’s disease using transfer learning approach | CNN with structural transfer with thresholding and BRF classifier, accuracy 79%

Integrating convolutional neural networks, kNN, and Bayesian optimization for efficient diagnosis of Alzheimer's disease in magnetic resonance images | CNN model with KNN classifier and Bayesian optimizer, accuracy 94.96%

Alzheimer's classification (Project), Approach 1 | EfficientNet B3 CNN model with 75% accuracy

Alzheimer's classification (Project), Approach 2 | InceptionV3 CNN model with 91% accuracy

Alzheimer's classification (Project), Traditional | Sequential CNN model with 75% accuracy (dropped to 60% with higher epochs)

Alzheimer's classification (Project), Current Approach | VGG-18 CNN model with 95% accuracy