Please code this in Python. I really need to know this works, so kindly help me out.
You can use any sample dataset for the demo.
Build a Grid_Search_NN_model that has the same architecture as the MLP model from Question 4. Use grid search to tune two hyperparameters:
- The number of neurons in the hidden layers of your MLP model (find the best number among 8, 16, and 32). Each hidden layer should have the same number of neurons/nodes, so only one hyperparameter is needed to tune the width.
- Learning rate of the SGD optimizer (find the best value between 0.01 and 0.1).
Implement grid search to identify the optimal hyperparameter values, and print out the best hyperparameter values and the best cross-validation accuracy. You can use 3-fold GridSearchCV and the KerasClassifier wrapper on the standardized training set to do this. Build the optimized MLP model on the training set by passing the detected best hyperparameter values to Grid_Search_NN_model. Print out the precision, recall, and F1-score of the optimized MLP model on the test set.
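For reference, the two search ranges above translate into a small parameter grid. The model__ key prefixes are an assumption tied to the scikeras KerasClassifier wrapper used in the full sketch after the given code; they route each grid value to the matching keyword argument of the model-building function:

param_grid = {
    "model__n_neurons": [8, 16, 32],      # same width for every hidden layer
    "model__learning_rate": [0.01, 0.1],  # SGD learning rate
}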
MLP code is given below:
import time
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
# Load the data
data = pd.read_csv('sample_data.csv')
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
# Split the data into train and test sets
train_data, test_data, train_labels, test_labels = train_test_split(X, y, test_size=0.2, random_state=0)
# Standardize the data
sc = StandardScaler()
train_datascaled = sc.fit_transform(train_data)
test_datascaled = sc.transform(test_data)
# Train the MLP model
mlp = MLPClassifier(hidden_layer_sizes=(8,8), activation='tanh', solver='sgd', learning_rate_init=0.1, max_iter=10)
start_time = time.time()
mlp.fit(train_datascaled, train_labels)
end_time = time.time()
exec_time = (end_time - start_time) * 1000
print("Execution time: {:.2f} ms".format(exec_time))
# Predict the test set labels
predictions = mlp.predict(test_datascaled)
# Print classification report and confusion matrix
print(confusion_matrix(test_labels, predictions))
print(classification_report(test_labels, predictions))
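Below is a minimal end-to-end sketch of one possible answer. It assumes the scikeras package (pip install scikeras), the maintained replacement for the removed keras.wrappers.scikit_learn.KerasClassifier, and it uses the Iris dataset as a stand-in for sample_data.csv; swap in your own CSV loading as in the code above. The function name Grid_Search_NN_model and the 8/16/32 and 0.01/0.1 grids follow the question, while epochs=10 and batch_size=16 are illustrative assumptions, not requirements.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report
from scikeras.wrappers import KerasClassifier
from tensorflow import keras

# Load a sample dataset (Iris, purely for demonstration)
X, y = load_iris(return_X_y=True)
train_data, test_data, train_labels, test_labels = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Standardize the features, fitting the scaler on the training set only
sc = StandardScaler()
train_data_scaled = sc.fit_transform(train_data)
test_data_scaled = sc.transform(test_data)

n_features = train_data_scaled.shape[1]
n_classes = len(np.unique(y))

def Grid_Search_NN_model(n_neurons=8, learning_rate=0.1):
    # Two hidden tanh layers of equal width, mirroring the architecture of
    # MLPClassifier(hidden_layer_sizes=(8, 8), activation='tanh', solver='sgd')
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        keras.layers.Dense(n_neurons, activation="tanh"),
        keras.layers.Dense(n_neurons, activation="tanh"),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Wrap the builder as a scikit-learn estimator; the model__ prefix routes
# grid values to Grid_Search_NN_model's keyword arguments
clf = KerasClassifier(model=Grid_Search_NN_model, epochs=10, batch_size=16, verbose=0)
param_grid = {"model__n_neurons": [8, 16, 32],
              "model__learning_rate": [0.01, 0.1]}

# 3-fold grid search on the standardized training set
grid = GridSearchCV(clf, param_grid, cv=3, scoring="accuracy")
grid.fit(train_data_scaled, train_labels)
print("Best hyperparameters:", grid.best_params_)
print("Best 3-fold CV accuracy: {:.4f}".format(grid.best_score_))

# Rebuild the optimized model on the training set with the best values
best_clf = KerasClassifier(model=Grid_Search_NN_model,
                           model__n_neurons=grid.best_params_["model__n_neurons"],
                           model__learning_rate=grid.best_params_["model__learning_rate"],
                           epochs=10, batch_size=16, verbose=0)
best_clf.fit(train_data_scaled, train_labels)

# Precision, recall, and F1-score of the optimized model on the test set
predictions = best_clf.predict(test_data_scaled)
print(classification_report(test_labels, predictions))

Note that GridSearchCV refits the best estimator on the whole training set by default (refit=True), so grid.best_estimator_ could be used directly; the explicit rebuild above just mirrors the question's wording about passing the best values back to Grid_Search_NN_model.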