Consider the evaluation of the product of n matrices M = M₁x M₂ × · · · × M„ where each M, is a matrix with 1-1 rows and r; columns. The order in which the matrices are multiplied together can have a significant effect on the total number of operations required to evaluate M, no matter what matrix multi- plication algorithm is used. Example 2.7. Assume that the multiplication of a p× q matrix by a q× r matrix requires pqr operations, as it does in the "usual" algorithm. and con- sider the product M = M₁ [10 × 20] M2 [20 x 50] M3 [501] MA (2.8) [1 × 100]

Question

My algorithms professor is asking us to go through these notes and understand dynamic programming for matrix chain multiplication. I need help understanding the formula in the red box and what each of the letters represents.

68 DESIGN OF EFFICIENT ALGORITHMS 2.8

where the dimensions of each Mᵢ are shown in the brackets. Evaluating M in the order

M₁ × (M₂ × (M₃ × M₄))

requires 125,000 operations, while evaluating M in the order

(M₁ × (M₂ × M₃)) × M₄

requires only 2200 operations. □
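The two operation counts above can be checked with a short sketch. Using the dimensions from (2.8) (M₁: 10 × 20, M₂: 20 × 50, M₃: 50 × 1, M₄: 1 × 100) and the pqr cost model; the helper name `mult_cost` is mine, not from the text:

```python
def mult_cost(a, b):
    """Cost of A x B and the resulting shape, for shapes a = (p, q), b = (q, r)."""
    (p, q), (q2, r) = a, b
    assert q == q2, "inner dimensions must match"
    return p * q * r, (p, r)  # pqr operations, result is p x r

# Dimensions from (2.8): M1 10x20, M2 20x50, M3 50x1, M4 1x100.
m1, m2, m3, m4 = (10, 20), (20, 50), (50, 1), (1, 100)

# Order M1 x (M2 x (M3 x M4))
c1, s34 = mult_cost(m3, m4)    # 50*1*100  = 5000
c2, s234 = mult_cost(m2, s34)  # 20*50*100 = 100000
c3, _ = mult_cost(m1, s234)    # 10*20*100 = 20000
print(c1 + c2 + c3)            # -> 125000

# Order (M1 x (M2 x M3)) x M4
d1, s23 = mult_cost(m2, m3)    # 20*50*1 = 1000
d2, s123 = mult_cost(m1, s23)  # 10*20*1 = 200
d3, _ = mult_cost(s123, m4)    # 10*1*100 = 1000
print(d1 + d2 + d3)            # -> 2200
```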
Trying all possible orderings in which to evaluate the product of n matrices so as to minimize the number of operations is an exponential process (see Exercise 2.31), which is impractical when n is large. However, dynamic programming provides an O(n³) algorithm. Let m_{i,j} be the minimum cost of computing Mᵢ × M_{i+1} × ··· × Mⱼ for 1 ≤ i ≤ j ≤ n. Clearly,

    m_{i,j} = 0,                                                        if i = j
    m_{i,j} = MIN over i ≤ k < j of ( m_{i,k} + m_{k+1,j} + r_{i-1} rₖ rⱼ ),   if j > i     (2.9)
The term m_{i,k} is the minimum cost of evaluating M′ = Mᵢ × M_{i+1} × ··· × Mₖ. The second term, m_{k+1,j}, is the minimum cost of evaluating M″ = M_{k+1} × M_{k+2} × ··· × Mⱼ. The third term is the cost of multiplying M′ by M″. Note that M′ is an r_{i-1} × rₖ matrix and M″ is an rₖ × rⱼ matrix. Equation (2.9) states that m_{i,j}, j > i, is the minimum, taken over all possible values of k between i and j − 1, of the sum of these three terms.
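Recurrence (2.9) can be transcribed almost literally as a memoized recursive function. This is a sketch of my own, not the book's Algorithm 2.5 (which fills a table bottom-up); it shows what each symbol in the formula does:

```python
from functools import lru_cache

def min_cost_recursive(r):
    """Top-down evaluation of recurrence (2.9) with memoization.

    r = [r0, r1, ..., rn]; matrix M_i is r[i-1] x r[i].
    """
    n = len(r) - 1

    @lru_cache(maxsize=None)
    def m(i, j):
        if i == j:          # a single matrix needs no multiplications
            return 0
        # MIN over i <= k < j of m(i,k) + m(k+1,j) + r_{i-1} * r_k * r_j
        return min(m(i, k) + m(k + 1, j) + r[i - 1] * r[k] * r[j]
                   for k in range(i, j))

    return m(1, n)

# Dimensions from (2.8): expected minimum cost is 2200.
print(min_cost_recursive([10, 20, 50, 1, 100]))  # -> 2200
```

The memoization (`lru_cache`) is what keeps this polynomial: each of the O(n²) pairs (i, j) is solved once and then reused.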
The dynamic programming approach calculates the m_{i,j}'s in order of increasing difference in the subscripts. We begin by calculating m_{i,i} for all i, then m_{i,i+1} for all i, next m_{i,i+2}, and so on. In this way, the terms m_{i,k} and m_{k+1,j} in (2.9) will be available when we calculate m_{i,j}. This follows since j − i must be strictly greater than both k − i and j − (k + 1) if k is in the range i ≤ k < j. The algorithm is given below.
Algorithm 2.5. Dynamic programming algorithm for computing the minimum cost order of multiplying a string of n matrices, M₁ × M₂ × ··· × Mₙ.
Input. r₀, r₁, ..., rₙ, where r_{i-1} and rᵢ are the dimensions of matrix Mᵢ.
Output. The minimum cost of multiplying the Mᵢ's, assuming pqr operations are required to multiply a p × q matrix by a q × r matrix.
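A minimal Python sketch of Algorithm 2.5 as described above (the function name and table layout are mine): it takes the dimension list r₀, ..., rₙ as input and fills the table in order of increasing j − i, exactly the order the text prescribes.

```python
def matrix_chain_min_cost(r):
    """Bottom-up DP for the minimum cost of multiplying n matrices.

    r = [r0, r1, ..., rn], where matrix M_i is r[i-1] x r[i].
    m[i][j] holds the minimum cost of computing M_i x ... x M_j,
    following recurrence (2.9).
    """
    n = len(r) - 1
    # m[i][j] for 1 <= i <= j <= n; row/column 0 unused to match the text.
    m = [[0] * (n + 1) for _ in range(n + 1)]
    # Fill in order of increasing difference j - i, so m[i][k] and
    # m[k+1][j] are already available when m[i][j] is computed.
    for diff in range(1, n):
        for i in range(1, n - diff + 1):
            j = i + diff
            m[i][j] = min(m[i][k] + m[k + 1][j] + r[i - 1] * r[k] * r[j]
                          for k in range(i, j))
    return m[1][n]

# Dimensions from (2.8): M1 is 10x20, M2 is 20x50, M3 is 50x1, M4 is 1x100.
print(matrix_chain_min_cost([10, 20, 50, 1, 100]))  # -> 2200
```

The two nested loops over (diff, i) visit O(n²) table entries, and the inner `min` over k costs O(n) each, giving the O(n³) bound claimed in the text.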
2.8 DYNAMIC PROGRAMMING

Recursive techniques are useful if a problem can be divided into subproblems with reasonable effort and the sum of the sizes of the subproblems can be kept small. Recall from Theorem 2.1 that if the sum of the sizes of the subproblems is an, for some constant a < 1, the recursive algorithm is likely to be polynomial in time complexity. However, if the obvious division of a problem of size n results in n problems of size n − 1, then a recursive algorithm is likely to have exponential growth. In this case a tabular technique called dynamic programming often results in a more efficient algorithm.

In essence, dynamic programming calculates the solution to all subproblems. The computation proceeds from the small subproblems to the larger subproblems, storing the answers in a table. The advantage of the method lies in the fact that once a subproblem is solved, the answer is stored and never recalculated. The technique is easily understood from a simple example.
Consider the evaluation of the product of n matrices

M = M₁ × M₂ × ··· × Mₙ

where each Mᵢ is a matrix with r_{i-1} rows and rᵢ columns. The order in which the matrices are multiplied together can have a significant effect on the total number of operations required to evaluate M, no matter what matrix multiplication algorithm is used.
Example 2.7. Assume that the multiplication of a p × q matrix by a q × r matrix requires pqr operations, as it does in the "usual" algorithm, and consider the product

M =    M₁     ×     M₂     ×    M₃     ×     M₄            (2.8)
    [10 × 20]   [20 × 50]   [50 × 1]    [1 × 100]