Assignment6_MariaCGonzalez1

Name: Maria C Gonzalez    Date: 11/15/2023    PID: 6344822

Assignment 6: Nonlinear Equations and Optimization

General Instructions: Complete each question to the best of your abilities. Include comments in your code to explain your thought process while answering.

Submission Instructions: Assignment files should be titled Assignment_X_FirstNameLastName, Ex: Assignment_7_AsadMirza. To convert the MLX file into a PDF, go to the Export icon under the Live Editor and select "Export to PDF".

Question 1 (15%): Consider the equation

f(x) = e^(-x^2) - (x - 1)^4 + 10

a) (5%) Graph the equation and identify the number of zeros.

b) (15%) Use Newton's method to find the zeros of f(x) from -2 to 4 and the x-values at which this occurs. Hint: Choose a proper x0 based on the graph to find one zero and run again to find the other.

% Define the function
f = @(x) exp(-x.^2) - (x-1).^4 + 10;

% Plot the function over a range of x values
x = linspace(-2, 4, 1000);
y = f(x);
plot(x, y)
grid on
% Find the zeros of the function
zero1 = fzero(f, -1);
zero2 = fzero(f, 2);
fprintf('The equation has %d zeros.\n', length([zero1 zero2]));

The equation has 2 zeros.

% Define the function and its derivative
myFunction = @(x) exp(-x.^2) - (x-1).^4 + 10;
myDerivative = @(x) -2*x.*exp(-x.^2) - 4*(x-1).^3;

% Define the Newton's method function
myNewton = @(f, df, x0, tol) myNewtonHelper(f, df, x0, tol, 0);

% Iterate through Newton's method for each x0 value
x0Values = linspace(-2, 4, 100);
zeros = [];
for i = 1:length(x0Values)
    x0 = x0Values(i);
    [zero, iter] = myNewton(myFunction, myDerivative, x0, 1e-6);
    % Check if the zero is unique and within the desired range
    if ~isnan(zero) && zero > -2 && zero < 4 && ~ismember(zero, zeros)
        zeros(end+1) = zero;
        fprintf('Zero found at x = %.6f after %d iterations.\n', zero, iter);
    end
end

Zero found at x = -0.801227 after 5 iterations.
Zero found at x = -0.801227 after 5 iterations.
Zero found at x = -0.801227 after 5 iterations.
Zero found at x = -0.801227 after 5 iterations.
Zero found at x = -0.801227 after 4 iterations.
Zero found at x = -0.801227 after 4 iterations.
Zero found at x = -0.801227 after 4 iterations.
Zero found at x = -0.801227 after 4 iterations.
Zero found at x = -0.801227 after 3 iterations.
Zero found at x = -0.801227 after 3 iterations.
Zero found at x = -0.801227 after 3 iterations.
Zero found at x = -0.801227 after 3 iterations.
Zero found at x = -0.801227 after 4 iterations.
Zero found at x = -0.801227 after 5 iterations.
Zero found at x = -0.801227 after 6 iterations.
Zero found at x = -0.801227 after 7 iterations.
Zero found at x = 2.778299 after 16 iterations.
Zero found at x = 2.778299 after 11 iterations.
Zero found at x = 2.778299 after 11 iterations.
Zero found at x = 2.778299 after 11 iterations.
Zero found at x = 2.778299 after 12 iterations.
Zero found at x = 2.778299 after 12 iterations.
Zero found at x = 2.778299 after 12 iterations.
Zero found at x = 2.778299 after 12 iterations.
Zero found at x = 2.778299 after 11 iterations.
Zero found at x = 2.778299 after 9 iterations.
Zero found at x = 2.778299 after 6 iterations.
Zero found at x = 2.778299 after 5 iterations.
Zero found at x = 2.778299 after 4 iterations.
Zero found at x = 2.778299 after 4 iterations.
Zero found at x = 2.778299 after 3 iterations.
Zero found at x = 2.778299 after 3 iterations.
Zero found at x = 2.778299 after 4 iterations.
Zero found at x = 2.778299 after 4 iterations.
Zero found at x = 2.778299 after 5 iterations.
Zero found at x = 2.778299 after 5 iterations.

% Define the helper function for Newton's method
function [zero, iter] = myNewtonHelper(f, df, x, tol, iter)
    % Calculate the next guess using Newton's method
    xNext = x - f(x) / df(x);
    % Check if the tolerance has been reached
    if abs(xNext - x) < tol
        zero = xNext;
        return
    end
    % Recursively call the function with the new guess
    [zero, iter] = myNewtonHelper(f, df, xNext, tol, iter+1);
end
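Note that the ismember check compares each new root to the stored roots with exact floating-point equality, so roots reached from different starting points that differ only in the last digits are all kept, which is why the same zero is printed many times above. A minimal sketch of one way to collapse them, assuming MATLAB's uniquetol (R2015a or later) is available; it would go right after the for-loop, before the local function at the end of the script, and the name uniqueZeros is introduced here only for illustration:

% Collapse near-duplicate roots returned by the multi-start Newton loop
uniqueZeros = uniquetol(zeros, 1e-6);
fprintf('Distinct zeros on [-2, 4]:\n');
fprintf('  x = %.6f\n', uniqueZeros);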
Question 2 (15%): Let's say we have a function

f(x) = 0.2x^2 - 6x - sin(x - 2)

a) (10%) Graph the equation from 0 to 30.

% % Define the function using an anonymous function
% f = @(x) 0.2*x.^2 - 6*x - sin(x-2);
%
% % Generate x values with a specified step size
% x = 0:0.1:30;
%
% % Calculate corresponding y values
% y = f(x);
%
% % Plot the graph of the function
% plot(x, y);
% xlabel('x');
% ylabel('f(x)');
% title('Graph of the Function');
%
% % Grid for better visualization
% grid on;

b) (10%) Find the maximum (or minimum) near 15 with a tolerance of 0.01.

% f = @(x) 0.2*x.^2 - 6*x - sin(x-2);
% x = 0:0.1:30; % Change the step size to your preference
% y = f(x);
%
% plot(x, y);
% xlabel('x');
% ylabel('f(x)');
% title('Graph of the Function');
%
% % Add a grid for better visualization
% grid on;
%
% % Find the minimum near x = 15 with a tolerance of 0.01
% [x_min, y_min] = fminsearch(f, 15, optimset('TolX', 0.01));
%
% % Display the result
% disp(['The minimum value is ', num2str(y_min), ' at x = ', num2str(x_min)]);
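fminsearch runs an unconstrained simplex search from the single start x = 15. A bracketed search over an interval around 15 is another way to pin down the extremum with the same tolerance; a minimal sketch, where the bracket [10, 20] is an assumption rather than something given in the prompt:

% Bracketed minimization near x = 15 with the same 0.01 tolerance on x
f = @(x) 0.2*x.^2 - 6*x - sin(x-2);
[x_min, y_min] = fminbnd(f, 10, 20, optimset('TolX', 0.01));
fprintf('Minimum of %.4f at x = %.4f\n', y_min, x_min);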
Question 3 (15%): Consider the function

f(x) = sin(x^2)

a) (5%) Graph the equation and estimate the minimum values.

% % Graph the function
% x = linspace(-4, 4, 1000);
% y = sin(x.^2);
% plot(x, y);
% xlabel('x');
% ylabel('f(x)');
% title('f(x) = sin(x^2)');
% grid on;

b) (25%) Find the minimum values between -4 and 4 for the function using Newton's method, with an initial estimate of -1.5 and a tolerance of 0.01.

% % Define the function and its derivatives
% x = linspace(-4, 4, 1000);
% myFunction = @(x) sin(x^2);
% myDerivative = @(x) 2*x*cos(x^2);
% mySecondDerivative = @(x) 2*cos(x^2) - 4*x^2*sin(x^2);
%
% % Set initial guess, tolerance, and maximum iterations
% initialGuess = -1.5;
% tolerance = 0.01;
% maxIterations = 100;
%
% % Newton's method
% currentX = initialGuess;
% iterationCount = 0;
% while iterationCount < maxIterations
%     previousX = currentX;
%     currentX = previousX - myDerivative(previousX) / mySecondDerivative(previousX);
%     if abs(currentX - previousX) < tolerance
%         break;
%     end
%     iterationCount = iterationCount + 1;
% end
%
% minimumValue = myFunction(currentX);
% fprintf('Minimum value found: %.6f\n', minimumValue);
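Newton's method applied to the first derivative converges to the nearest stationary point, which is not guaranteed to be a minimum; from x0 = -1.5 the iteration lands on a stationary point of sin(x^2) that is in fact a maximum. A minimal self-contained sketch that repeats the iteration and then checks the sign of the second derivative to classify the result; the 100-iteration cap and the names xk and step are illustrative assumptions:

% Classify the stationary point that Newton's method reaches from x0 = -1.5
myDerivative = @(x) 2*x.*cos(x.^2);
mySecondDerivative = @(x) 2*cos(x.^2) - 4*x.^2.*sin(x.^2);
xk = -1.5;
for k = 1:100
    step = myDerivative(xk) / mySecondDerivative(xk);
    xk = xk - step;
    if abs(step) < 0.01, break; end
end
if mySecondDerivative(xk) > 0
    fprintf('x = %.4f is a local minimum of sin(x^2): f = %.6f\n', xk, sin(xk^2));
else
    fprintf('x = %.4f is a stationary point but not a minimum; try a different x0.\n', xk);
end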
Question 4 (20%):

a) (20%) Use SGD to find the global minimum, over a span from -5 to 5, of the function

f(x) = cos(x)(sin(x) - x)

Use a tolerance of 0.01 and a step of 0.1.

b) (10%) Find the maximum on the same span. Hint: Changing the sign switches the direction. Choose a proper x0 to avoid local minima/maxima.

% % Define the function to be minimized
% objectiveFunction = @(x) cos(x) * (sin(x) - x);
%
% % Set the initial guess and learning rate
% initialGuess = -5 + 10 * rand(); % randomly choose initialGuess between -5 and 5
% learningRate = 0.1;
%
% % Set the tolerance for convergence
% tolerance = 0.01;
%
% % Initialize the iteration counter
% iterationCount = 0;
%
% % Loop until the convergence criterion is met
% while true
%     % Calculate the gradient of the function at the current point
%     gradient = (objectiveFunction(initialGuess + 0.001) - objectiveFunction(initialGuess - 0.001)) / 0.002;
%     % Update the current point using SGD
%     updatedGuess = initialGuess - learningRate * gradient;
%     % Check the convergence criterion
%     if abs(objectiveFunction(updatedGuess) - objectiveFunction(initialGuess)) < tolerance
%         break;
%     end
%     % Update the iteration counter and current point
%     iterationCount = iterationCount + 1;
%     initialGuess = updatedGuess;
% end
%
% % Display the result
% fprintf('Global minimum found after %d iterations: x = %f, f(x) = %f\n', iterationCount, updatedGuess, objectiveFunction(updatedGuess));

% B)
% % Define the function to be maximized (by changing the sign)
% objectiveFunction = @(x) -cos(x) * (sin(x) - x);
%
% % Set the initial guess and learning rate
% initialGuess = 3;
% learningRate = 0.1;
%
% % Set the tolerance for convergence
% tolerance = 0.01;
%
% % Initialize the iteration counter
% iterationCount = 0;
%
% % Loop until the convergence criterion is met
% while true
%     % Calculate the gradient of the function at the current point
%     gradient = (objectiveFunction(initialGuess + 0.001) - objectiveFunction(initialGuess - 0.001)) / 0.002;
%     % Update the current point using SGD
%     updatedGuess = initialGuess - learningRate * gradient;
%     % Check the convergence criterion
%     if abs(objectiveFunction(updatedGuess) - objectiveFunction(initialGuess)) < tolerance
%         break;
%     end
%     % Update the iteration counter and current point
%     iterationCount = iterationCount + 1;
%     initialGuess = updatedGuess;
% end
%
% % Display the result (by changing the sign back)
% fprintf('Global maximum found after %d iterations: x = %f, f(x) = %f\n', iterationCount, updatedGuess, -objectiveFunction(updatedGuess));
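The gradient above is estimated with a central difference. Since f(x) = cos(x)(sin(x) - x) has the closed-form derivative f'(x) = cos(2x) - cos(x) + x*sin(x), the search can also be run as plain gradient descent with the analytic derivative, restarted from several starting points and keeping the best result inside the span, which is one way to follow the hint about avoiding local minima. A minimal sketch; the starting points -4:2:4 and the 1000-iteration cap are assumptions for illustration:

% Multi-start gradient descent with the analytic derivative of f
f  = @(x) cos(x).*(sin(x) - x);
df = @(x) cos(2*x) - cos(x) + x.*sin(x);
bestX = NaN; bestF = Inf;
for x0 = -4:2:4                          % several starting points across the span
    x = x0;
    for k = 1:1000
        xNew = x - 0.1*df(x);            % step of 0.1
        if abs(f(xNew) - f(x)) < 0.01    % tolerance of 0.01
            x = xNew;
            break
        end
        x = xNew;
    end
    if x >= -5 && x <= 5 && f(x) < bestF % keep the lowest minimum inside [-5, 5]
        bestF = f(x);
        bestX = x;
    end
end
fprintf('Best minimum found in [-5, 5]: f(%.4f) = %.4f\n', bestX, bestF);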
Question 5 (15%): Consider the equation

y(t) = c1*exp(-t^2) - c2*t^2 + c3*t

Based on the data provided, find the coefficients that best fit the data (minimize the residual). Plot the solution from t = 10^-2 to t = 10^2. Use initial coefficients [5, 0.03, 4].

% clear
% clc
%
% % Initial estimate
% c0 = [5, 0.03, 4];
%
% % Provided data set for initial set of y values
% t = [0.01, 0.01, 0.02, 0.03, 0.04, 0.05, 0.07, 0.09, 0.13, 0.17, 0.24, 0.33, 0.45, 0.62, 0.85, 1.17, 1.61, 2.21, 3.04, 4.18, 5.74, 7.88, 10.83, 14.87, 20.43, 28.07, 38.57, 52.98, 72.79, 100.00];
% y = [11.20, 16.22, 2.76, 7.28, 0.33, 12.39, 10.48, 7.40, 13.76, 17.96, 3.24, 15.59, 7.03, 6.42, 3.69, 3.40, 16.17, 20.96, 23.92, 14.28, 28.45, 23.94, 37.28, 53.94, 89.98, 105.51, 116.34, 138.43, 112.86, 31.00];
%
% % Defining the function to fit
% fun = @(c, t) c(1) * exp(-t.^2) - c(2) * t.^2 + c(3) * t;
%
% % Setting up the curve fitting
% [c_fit, ~] = lsqcurvefit(fun, c0, t, y, [], [], optimset('Display', 'off'));
%
% % Optimal coefficients
% c1_optimal = c_fit(1);
% c2_optimal = c_fit(2);
% c3_optimal = c_fit(3);
%
% % Determining values for plotting
% t_plot = 10.^linspace(-2, 2, 1000);
% y_plot = fun([c1_optimal, c2_optimal, c3_optimal], t_plot);
%
% % Plotting the solution
% figure;
% semilogx(t_plot, y_plot);
% xlabel('t');
% ylabel('f(t)');
% title('Fitted Solution');

Question 6 (15%): For the system: What is a solution to these simultaneous equations using the Newton-Raphson root-finding algorithm? Use a span from -2 to 2, an initial tolerance of 0.01, and x0 = [10; 1].

%

Question 7 (15%): Using Newton's Method, write the MATLAB code to find the minimum of the function. Use a span from -2 to 2, an initial tolerance of 0.01, and x0 = [0 1].
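The equation images for Questions 6 and 7 did not survive in this copy, so no specific system or objective function can be coded from the text alone. The sketch below only illustrates the general pattern asked for: a multivariate Newton-Raphson root finder and a Newton minimization loop. The system g, its Jacobian J, the objective h, its gradient, and its Hessian are all hypothetical placeholders; only the tolerance of 0.01 and the starting points x0 = [10; 1] and x0 = [0 1] come from the prompts.

% Question 6 sketch: Newton-Raphson for a system of two equations.
% g and J are HYPOTHETICAL stand-ins for the missing system.
g = @(v) [v(1)^2 + v(2)^2 - 2; v(1) - v(2)];   % hypothetical system g(v) = 0
J = @(v) [2*v(1), 2*v(2); 1, -1];              % its Jacobian
v = [10; 1];                                   % x0 from the prompt
for k = 1:100
    step = J(v) \ g(v);                        % solve J*step = g for the Newton step
    v = v - step;
    if norm(step) < 0.01                       % tolerance from the prompt
        break
    end
end
fprintf('Question 6 (hypothetical system): root at (%.4f, %.4f)\n', v(1), v(2));

% Question 7 sketch: Newton's method for minimizing a function of two variables.
% h, gradh, and hessh are HYPOTHETICAL stand-ins for the missing objective.
h     = @(w) (w(1) - 1)^2 + 2*(w(2) + 1)^2;    % hypothetical objective
gradh = @(w) [2*(w(1) - 1); 4*(w(2) + 1)];     % its gradient
hessh = @(w) [2, 0; 0, 4];                     % its (constant) Hessian
w = [0; 1];                                    % x0 from the prompt, as a column
for k = 1:100
    step = hessh(w) \ gradh(w);                % Newton step for optimization
    w = w - step;
    if norm(step) < 0.01
        break
    end
end
fprintf('Question 7 (hypothetical function): minimum %.4f at (%.4f, %.4f)\n', h(w), w(1), w(2));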