Question

III. The steepest descent method is simple, but it is slow at finding the solution. Fortunately, there is another gradient-based method that modelers can use for optimization problems. This method, called Newton's method, is based on the quadratic approximation of a function. Newton's method is preferred by many modelers because it converges quickly to the solution. Unfortunately, the method has a drawback: because Newton's method uses the inverse of the Hessian matrix, it requires more computation, especially when there are many unknowns.

Your task is to solve the problem in Item (I) using Newton's method. Follow the algorithm below.

1. As in Item (II), convert the problem in Item (I) into an unconstrained problem using the penalty method. Consider the same objective function as in Item (II - 1):
   $$F(x, y) = -x^{1/2} y^{1/2} + M(x + y - 5)^2,$$
   where $M = 100$.
2. Compute the Hessian matrix $\nabla^2 F(x, y)$.
3. Set the initial candidate solution at $\begin{bmatrix} x_0 \\ y_0 \end{bmatrix}$.
4. Using the computed Hessian matrix $\nabla^2 F(x, y)$, find its inverse at the point $\begin{bmatrix} x_0 \\ y_0 \end{bmatrix}$; that is, compute $[\nabla^2 F(x_0, y_0)]^{-1}$. (Substitute $x_0$ and $y_0$ first, then compute the inverse.)
5. Compute the next solution using the formula
   $$\begin{bmatrix} x_1 \\ y_1 \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} - \alpha \, [\nabla^2 F(x_0, y_0)]^{-1} \nabla F(x_0, y_0),$$
   where $\alpha = 1$.
6. Repeat the same process to get the next candidate solution $\begin{bmatrix} x_2 \\ y_2 \end{bmatrix}$.
7. Does your solution get close to the solution you obtained in Item (I)? In how many iterations?
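As a cross-check for steps 2 and 4, the sketch below shows the gradient and Hessian of the penalized objective, assuming the reconstructed form $F(x, y) = -x^{1/2} y^{1/2} + M(x + y - 5)^2$ (the exponents $1/2$ are inferred from the damaged transcription and should be checked against Item (II - 1)):

```latex
\nabla F(x, y) =
\begin{bmatrix}
-\tfrac{1}{2} x^{-1/2} y^{1/2} + 2M(x + y - 5) \\[4pt]
-\tfrac{1}{2} x^{1/2} y^{-1/2} + 2M(x + y - 5)
\end{bmatrix},
\qquad
\nabla^2 F(x, y) =
\begin{bmatrix}
\tfrac{1}{4} x^{-3/2} y^{1/2} + 2M & -\tfrac{1}{4} x^{-1/2} y^{-1/2} + 2M \\[4pt]
-\tfrac{1}{4} x^{-1/2} y^{-1/2} + 2M & \tfrac{1}{4} x^{1/2} y^{-3/2} + 2M
\end{bmatrix}
```

Both matrices are evaluated at $(x_0, y_0)$ before the inverse in step 4 is taken.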
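A minimal Python sketch of the Newton iteration described in steps 3-6 follows, assuming the reconstructed objective above; `M = 100` and `alpha = 1` come from the problem, while the starting point `(1, 1)` and the iteration count are placeholders, since the problem's initial values did not survive transcription:

```python
# Newton's method on the penalized objective, following steps 2-6.
# Assumption: F(x, y) = -x**(1/2) * y**(1/2) + M*(x + y - 5)**2 (reconstructed form).
import sympy as sp

x, y = sp.symbols("x y", positive=True)
M = 100                                            # penalty weight from the problem
alpha = 1                                          # step size alpha = 1 from the problem
F = -sp.sqrt(x) * sp.sqrt(y) + M * (x + y - 5)**2  # penalized objective (assumed form)

grad = sp.Matrix([sp.diff(F, x), sp.diff(F, y)])   # gradient, needed for the step-5 update
hess = sp.hessian(F, (x, y))                       # Hessian matrix, step 2

xk = sp.Matrix([1, 1])                             # placeholder starting point (x0, y0)
for k in range(5):                                 # repeat the update, steps 5-6
    point = {x: xk[0], y: xk[1]}
    g = grad.subs(point)                           # gradient evaluated at the current point
    H = hess.subs(point)                           # Hessian evaluated at the current point
    xk = xk - alpha * H.inv() * g                  # x_{k+1} = x_k - alpha * [Hessian]^-1 * gradient
    print(f"iteration {k + 1}:", [float(v) for v in xk])
```

Each printed iterate can be compared against the solution from Item (I) to answer step 7.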
Expert Solution

This question has been solved. Step-by-step solution outline:

Step 1: Modifying the cost function to make the problem unconstrained
Step 2: Computing the Hessian matrix
Step 3: Setting the initial point for the iteration
Step 4: Finding the inverse of the Hessian matrix at the starting point
Step 5: Computing the next feasible point using Newton's iteration formula

Solved in 6 steps with 4 images.