Train the network using cross-entropy loss
See https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html for details. It expects an input of size Nsamples x Nclasses containing un-normalized logits, and a target. The target can be y_train_l, the integer class labels (note that CrossEntropyLoss wants a 1-D tensor of length Nsamples, so a column vector of size Nsamples x 1 must be squeezed first), or y_train_T of size Nsamples x Nclasses holding one-hot/probability targets (supported in PyTorch 1.10 and later). Please see the documentation.
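As a quick check (a minimal standalone sketch, not part of the assignment code), both target formats can be fed to CrossEntropyLoss, and with one-hot probability targets the loss matches the index-based form:

import torch

loss_fn = torch.nn.CrossEntropyLoss()

logits = torch.randn(4, 3)              # Nsamples x Nclasses raw scores
y_idx = torch.tensor([0, 2, 1, 2])      # class indices, shape (Nsamples,)
y_hot = torch.nn.functional.one_hot(y_idx, num_classes=3).float()  # Nsamples x Nclasses

print(loss_fn(logits, y_idx).item())    # index targets
print(loss_fn(logits, y_hot).item())    # probability targets -- same value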
Cross-entropy loss expects raw unnormalized scores. Softmax converts raw unnormalized scores to probabilities, which are used here only to plot the predicted labels.
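To see why the raw logits (not softmax outputs) go into the loss: CrossEntropyLoss applies log-softmax internally, so it is equivalent to an explicit log-softmax followed by negative log-likelihood. A small sketch illustrating this:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)
y = torch.tensor([0, 2, 1, 2])

ce = F.cross_entropy(logits, y)                    # takes raw logits directly
nll = F.nll_loss(F.log_softmax(logits, dim=1), y)  # explicit log-softmax + NLL
print(torch.allclose(ce, nll))                     # True

probs = F.softmax(logits, dim=1)  # probabilities are only for plotting/reporting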
Use SGD and run 20,000 epochs with a learning rate of 1e-2 to train the neural network.
import torch
import torch.optim as optim
import matplotlib.pyplot as plt

# NeuralNet, Nfeatures, Nclasses, device, y_train_T, y_train_l, and the
# training features are assumed to be defined earlier in the notebook.
net = NeuralNet(Nfeatures, Nclasses, 20, 20).to(device)  # two hidden layers of width 20
sm = torch.nn.Softmax(dim=1)  # logits -> probabilities (used only for plotting)
# Weight the cross-entropy loss to balance the classes: column sums of the
# one-hot targets give the per-class counts, and Weight = total/count makes
# rarer classes count more (e.g. counts [30, 10] -> weights [40/30, 40/10]).
Nsamples_per_class = y_train_T.sum(dim=0)
Weight = Nsamples_per_class.sum() / Nsamples_per_class
loss = torch.nn.CrossEntropyLoss(weight=Weight)
learning_rate = 0.01
optimizer = optim.SGD(net.parameters(), lr=learning_rate)  # plain SGD optimizer
for epoch in range(20000):
    predNN = net(X_train_T)  # forward pass: raw logits, Nsamples x Nclasses (X_train_T assumed to hold the training features)
    error = loss(predNN, y_train_l.squeeze())  # CrossEntropyLoss wants 1-D class indices
    optimizer.zero_grad()  # clear the accumulated gradients
    error.backward()       # backpropagate the loss
    optimizer.step()       # update the weights
    if epoch % 5000 == 0:
        print("Error =", error.detach().cpu().item())
fig, ax = plt.subplots(1, 2, figsize=(12, 4))
ax[0].plot(y_train_T[0:40].detach().cpu())   # true one-hot labels
ax[1].plot(sm(predNN[0:40]).detach().cpu())  # predicted class probabilities
plt.show()
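Once training finishes, a natural follow-up (a hedged sketch; X_train_T and y_train_l are the assumed feature and integer-label tensors used above) is to turn the logits into hard class predictions and report training accuracy:

with torch.no_grad():
    logits = net(X_train_T)             # X_train_T: assumed training features
    pred_labels = logits.argmax(dim=1)  # most probable class per sample
    acc = (pred_labels == y_train_l.squeeze()).float().mean()
print("Training accuracy =", acc.item())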