Imagine we have a small computer which contains a simplified version of a CPU and RAM. The CPU can execute a single program that is stored in the RAM, and while running the program it can access 3 memory locations: its internal register AL, and two RAM locations that we'll call address 1 and address 2. Since this is a simplified computer, it is able to skip some of the steps to access memory that are in our textbook, but accessing RAM still takes longer than accessing AL. The only values this computer can store in memory are unsigned bytes (8-bit binary numbers), and when it starts running a program all 3 of its memory locations initially contain the value 00000000.
Here is a list of all of the instructions our imaginary computer can perform:
- Write [number] to AL
- This instruction takes the operand [number], which must be an 8-bit binary number, and puts it into AL. This overwrites whatever was in AL previously. It takes 3 cycles to complete: 1 cycle each to fetch, decode, and execute the instruction.
- Move from [memory location] to [different memory location]
- This instruction takes the value stored inside the first memory location operand and copies it into the second memory location operand. This overwrites whatever was in the second memory location (the first memory location is unchanged). If the first memory location is AL, then it takes 3 cycles to complete: 1 cycle each to fetch, decode, and execute the instruction. However, if the first memory location is one of the two RAM locations (address 1 or address 2), then it takes 4 cycles to complete: fetch, decode, get value from RAM, execute.
- Add [memory location] to AL
- This instruction takes the value inside the memory operand (either address 1 or address 2) and adds it to the value inside AL. The result is stored in AL, overwriting whatever was there previously. It takes 4 cycles to complete: 1 cycle each to fetch, decode, get value from RAM, and execute the addition. (A small simulator sketch of these three instructions follows this list.)
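This isn't part of the original question, but a minimal Python sketch of the instruction set can make the cycle accounting concrete. The class and method names (TinyMachine, write_al, move, add_to_al) are placeholders I made up; the cycle costs are taken directly from the descriptions above.

```python
class TinyMachine:
    """Hypothetical model of the imaginary computer described above."""

    def __init__(self):
        # AL and both RAM locations start out holding 00000000.
        self.al = 0
        self.ram = {1: 0, 2: 0}
        self.cycles = 0

    def write_al(self, value):
        """Write [number] to AL: 3 cycles (fetch, decode, execute)."""
        self.al = value & 0xFF            # only unsigned bytes can be stored
        self.cycles += 3

    def move(self, src, dst):
        """Move from [src] to [dst]; each is "AL" or a RAM address (1 or 2)."""
        if src == "AL":
            value, cost = self.al, 3      # fetch, decode, execute
        else:
            value, cost = self.ram[src], 4  # extra cycle to get value from RAM
        if dst == "AL":
            self.al = value
        else:
            self.ram[dst] = value
        self.cycles += cost

    def add_to_al(self, addr):
        """Add [memory location] to AL: 4 cycles (includes the RAM read)."""
        self.al = (self.al + self.ram[addr]) & 0xFF  # wraps as an unsigned byte
        self.cycles += 4
```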
For this discussion, look at the following program, which I wrote for our imaginary computer to add 8 + 12:
Write 00001000 to AL
Move from AL to address 1
Write 00001100 to AL
Move from AL to address 2
Move from address 1 to AL
Add address 2 to AL
Move from AL to address 1
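Keyed into the sketch above (again, an assumed model rather than anything from the assignment), the program looks like this; running it is one way to double-check the hand counts asked for below:

```python
m = TinyMachine()
m.write_al(0b00001000)   # Write 00001000 to AL
m.move("AL", 1)          # Move from AL to address 1
m.write_al(0b00001100)   # Write 00001100 to AL
m.move("AL", 2)          # Move from AL to address 2
m.move(1, "AL")          # Move from address 1 to AL
m.add_to_al(2)           # Add address 2 to AL
m.move("AL", 1)          # Move from AL to address 1

# Total cycles plus the final contents of AL, address 1, and address 2.
print(m.cycles, format(m.al, "08b"), format(m.ram[1], "08b"), format(m.ram[2], "08b"))
```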
- Using the description and length of each instruction listed above, count up how many clock cycles this program will take to run on our imaginary computer. How many clock cycles does it take?
- What are the values stored in AL, address 1, and address 2 once the program completes?
- The clock speed of modern computers is measured in GHz (gigahertz), which is billions of cycles per second. Let's say our imaginary computer has a modest clock speed of 2.3 GHz (2,300,000,000 cycles per second), which is comparable to some very small laptops. In what fraction of a second will our simple program complete? (See the small conversion sketch after this list.)
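For the last part, the conversion is simply the cycle total divided by the number of cycles per second. A rough sketch, with a placeholder cycle count rather than the real total from the first question:

```python
total_cycles = 20                 # placeholder: substitute the total from the first question
clock_hz = 2_300_000_000          # 2.3 GHz = 2.3 billion cycles per second
seconds = total_cycles / clock_hz
print(f"{seconds:.2e} seconds")   # e.g. 20 cycles -> 8.70e-09 seconds
```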