When making changes to optimize part of a processor, speeding up one type of instruction often comes at the cost of slowing down something else. For example, a complicated fast floating point unit takes space, and another unit might have to be moved farther away to accommodate it, adding an extra cycle of delay to reach that unit. The basic Amdahl's law equation does not take this trade-off into account.
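For reference, the basic Amdahl's law equation mentioned above (a standard formulation, not part of the original problem statement) is

Speedup_overall = 1 / ((1 - f) + f / s)

where f is the fraction of the original execution time affected by the enhancement and s is the speedup applied to that fraction.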
a. If the new fast floating point unit speeds up floating point operations by, on average, 2x, and floating point operations take 20% of the original program's execution time, what is the overall speedup (ignoring the penalty to any other instructions)?
b. Now assume that speeding up the floating point unit slowed down data cache accesses, resulting in a 1.5x slowdown (or a 2/3 speedup). Data cache accesses consume 10% of the original execution time. What is the overall speedup now?
c. After implementing the new floating point operations, what percentage of execution time is spent on floating point operations, and what percentage is spent on data cache accesses?
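Below is a minimal Python sketch of the Amdahl's law arithmetic for parts a through c; the helper name amdahl and the printed values are illustrative, not part of the original problem, and it assumes the stated fractions of 20% (floating point) and 10% (data cache).

def amdahl(parts):
    """Overall speedup given a list of (fraction, speedup) pairs."""
    unaffected = 1.0 - sum(f for f, _ in parts)
    new_time = unaffected + sum(f / s for f, s in parts)
    return 1.0 / new_time

# Part a: only floating point is affected (2x speedup on 20% of the time).
print(amdahl([(0.20, 2.0)]))                 # ~1.11x

# Part b: floating point 2x faster, data cache 1.5x slower (2/3 speedup).
print(amdahl([(0.20, 2.0), (0.10, 2 / 3)]))  # ~1.05x

# Part c: fractions of the *new* execution time (0.70 + 0.10 + 0.15 = 0.95).
new_time = 0.70 + 0.20 / 2.0 + 0.10 / (2 / 3)
print(0.20 / 2.0 / new_time)                 # floating point: ~10.5%
print(0.10 / (2 / 3) / new_time)             # data cache:     ~15.8%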