Module 4: Assignment 2 - Analysis and Optimization of CPU Architectures and Memory Systems for Enhanced Performance

Krupal Parmar
Information Technology, Arizona State University
IFT 510 Principles of Computer Information and Technology
Dinesh Sthapit
09/24/2023
CPU Architecture:

Q1: Explain current CPU architecture designs, including traditional and modern architectures such as CISC and RISC.

Ans: A CISC (Complex Instruction Set Computer) is a computer in which a single instruction can perform several low-level operations, such as a load from memory, an arithmetic operation, and a store back to memory, or can carry out a multi-step task through complex addressing modes within one instruction. CISC architecture is characterized by small program size and a large number of available instructions; an instruction set may contain over 300 instructions, and individual instructions may take between 2 and 10 cycles to execute. Instruction pipelining is not easy to implement in CISC. Compilers have relatively little work to do, because a single compound instruction can replace what would otherwise be a sequence of simple, low-level instructions. This also allows large addressing modes and additional data types to be built into the machine's hardware. However, CISC is not as efficient as RISC because it cannot eliminate unnecessary work, wasting cycles, and the complexity of the microprocessor chip makes it harder to understand and program.

Instruction Set Architecture:
The instruction set architecture (ISA) is the interface through which programmers communicate with the hardware. Its core elements are the commands the microprocessor understands for operating on data: executing, copying, deleting, or modifying it.

Reduced Instruction Set Computer (RISC)

Reduced Instruction Set Computing (RISC) is a CPU design approach that uses simple instructions that execute quickly. It provides a small number of instructions, each intended to perform a very small task. Because the instructions are simple and of similar length, the hardware stays simple, and complex operations are built by chaining several simple instructions together. Most instructions complete in one machine cycle. Pipelining is an important technique used to speed up RISC machines.
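To make the pipelining claim concrete, the toy model below is only a hedged sketch, not a description of any real CPU. It assumes a hypothetical 5-stage pipeline, one instruction entering per cycle, and no stalls or hazards: with N instructions and S stages, unpipelined execution costs roughly N * S cycles, while pipelined execution costs roughly S + (N - 1).

// Toy cycle-count model of instruction pipelining (illustrative only).
// Assumed: an idealized S-stage pipeline, one instruction issued per
// cycle, and no stalls or hazards.
#include <cstdio>

int main() {
    const long stages = 5;           // hypothetical 5-stage RISC pipeline
    const long instructions = 1000;  // hypothetical instruction count

    long non_pipelined = instructions * stages;   // each instruction runs alone
    long pipelined = stages + (instructions - 1); // stages overlap across instructions

    std::printf("non-pipelined cycles: %ld\n", non_pipelined);
    std::printf("pipelined cycles:     %ld\n", pipelined);
    std::printf("approx. speedup:      %.2fx\n",
                static_cast<double>(non_pipelined) / pipelined);
    return 0;
}

Under these idealized assumptions the speedup approaches the number of pipeline stages, which is why pipelining is so central to RISC performance.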
Reduced Instruction Set Computers (RISC):

Reduced instruction set computers are microprocessors designed to execute a small set of simple instructions. This allows fewer transistors to be used, which reduces the cost of designing and manufacturing RISC chips. RISC machines are characterized by fixed-length instructions, single-cycle execution for most instructions, a load/store approach to memory access, and heavy use of pipelining.

Q2: Provide an overview of prominent CPU architectures, such as the IBM Mainframe series, Intel x86 family, IBM POWER/PowerPC family, ARM architecture, and Oracle SPARC family.

Ans: The IBM Mainframe series processors are designed to be highly efficient, dependable, and scalable. They are used in large-scale computing applications such as big data processing, financial operations, and other mission-critical workloads. They offer a wide range of features, including massive memory and I/O (input/output) capacity, hardware-based virtualization, redundancy, fault-tolerance mechanisms, and more.

The Intel x86 family covers PCs, servers, and workstations based on the x86 architecture, which dates back to the Intel 8086 processor and remains the dominant architecture in the current PC market. The x86 family features a complex instruction set, backward compatibility, a long history, and strong single-thread performance.

The IBM POWER/PowerPC family focuses on high-performance computing (HPC). It is characterized by power efficiency, reliability, multiprocessing capabilities, and the ability to be customized by manufacturers for specific applications. This family is well suited for
server, supercomputing, and other specialized computing applications. It supports SMT (simultaneous multithreading) and advanced memory technologies to maximize the performance of POWER systems.

ARM is a family of RISC processors characterized by energy-efficient design and scalability. It is used in a wide range of devices, including mobile phones, tablets, Internet of Things (IoT) devices, and servers. Its energy-efficient design extends the battery life of mobile devices, and it can be customized for various applications. It also supports parallelism and multi-core designs.

Q3: Compare and contrast the features and characteristics of these CPU architectures.

Ans: CPU architectures can be difficult to compare because of their wide range of features and characteristics, but some key traits distinguish them. For instance, x86 is a long-standing, backward-compatible architecture widely used in desktop computers, servers, workstations, and many other applications. It has a large instruction set with many different instructions and offers strong single-thread performance. ARM is an energy-efficient RISC architecture used in mobile devices, IoT (Internet of Things) devices, embedded systems, and, increasingly, servers. It supports multiple cores as well as SIMD instructions and is common in networking equipment and embedded systems.
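As a small, hedged illustration of how instruction-set extensions such as SIMD become visible to software, the sketch below probes x86 feature flags at runtime. It relies on the GCC/Clang builtin __builtin_cpu_supports, so it compiles only with those compilers for x86 targets; the particular feature names checked here are just examples, and an ARM system would expose comparable information through different mechanisms.

// Runtime check for x86 SIMD instruction-set extensions (illustrative).
// GCC/Clang only, x86 targets only; feature names are examples.
#include <cstdio>

int main() {
    __builtin_cpu_init();  // initialize the compiler's CPU feature probe

    std::printf("SSE2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
    std::printf("AVX:  %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
    std::printf("AVX2: %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
    return 0;
}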
MIPS is another RISC architecture. It offers good performance with low power consumption and has been used in embedded systems, networking equipment, and gaming consoles such as the Nintendo 64 and PlayStation 2. It can also be customized by the manufacturer.

SPARC is a versatile, openly specified, and customizable architecture originally developed by Sun Microsystems (now Oracle) for workstations and servers. Because its specification is open, it has also been adopted in research, education, embedded systems, and Internet of Things (IoT) projects, and it is characterized by multiprocessing capabilities and high reliability.

Multiprocessing:

Q1: Explain the reasons for implementing multiprocessing in computer systems.

Ans: Multiprocessors are computer systems with two or more central processing units (CPUs) that share a common main memory and peripherals. This allows multiple programs to run at the same time. The main purpose of multiprocessing is to increase the system's execution speed, but it also supports fault tolerance and workloads that map naturally onto several processors. A simple example is a single server chassis containing two CPUs that share one pool of main memory, rather than two separate computers. Multiprocessing is seen as a way to improve computing speed, performance, cost-effectiveness, availability, and reliability.

Benefits:
Enhanced performance
Multiple applications
Multiple users
Multi-tasking inside an application
High throughput and/or responsiveness
Hardware sharing among CPUs

Q2: Describe the concept of parallel processing through threads and its benefits.

Ans: Concept of parallel processing through threads: In a multithreaded program, it is possible to create multiple threads within the same process. Each thread has access to the parent process's memory and resources. Threads allow concurrent execution, meaning that different parts of the program can run at the same time and make use of multiple CPU cores. Because threads share the memory space of the parent process, they can communicate and exchange data directly, without interprocess communication mechanisms such as pipes or sockets. Locks, mutexes, semaphores, condition variables, and similar mechanisms let threads synchronize their actions so that a shared resource is accessed by only one thread at a time; a short sketch follows the benefits list below.

Benefits:
Improved overall performance of a program
Increased responsiveness of the program
Faster context switching
Faster communication
Concurrency within a process
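The following minimal sketch (assuming C++11 or later, a worker count taken from std::thread::hardware_concurrency, and an arbitrary made-up workload) shows several threads sharing the process's memory and using a std::mutex so that only one thread updates the shared total at a time.

// Minimal sketch of parallel processing with threads sharing memory.
// Worker threads add partial sums into one shared total; a std::mutex
// serializes access so only one thread updates it at a time.
#include <algorithm>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    const unsigned workers = std::max(2u, std::thread::hardware_concurrency());
    const long per_worker = 1000000;  // made-up amount of work per thread

    long total = 0;          // shared data in the process's memory space
    std::mutex total_lock;   // synchronization primitive guarding 'total'

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&] {
            long local = 0;
            for (long i = 0; i < per_worker; ++i) local += 1;  // private work
            std::lock_guard<std::mutex> guard(total_lock);     // one thread at a time
            total += local;
        });
    }
    for (auto& t : pool) t.join();  // wait for all threads to finish

    std::printf("threads: %u, total: %ld\n", workers, total);
    return 0;
}

Each thread does most of its counting in a private local variable and takes the lock only once, which keeps contention on the mutex low.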
Q3: Explain the advantages and disadvantages of primary-replica multiprocessing and symmetrical multiprocessing.

Ans: Primary-replica multiprocessing: An asymmetric multiprocessing (AMP) system is a computer system in which the interconnected CPUs (central processing units) are not all treated in the same way. Only one processor, the primary (also called the master or supervisor), runs the operating system tasks; the remaining processors act as replicas (slaves) and are assigned work by the primary. For instance, an AMP system can assign tasks to a given CPU based on the priority and importance of the work.

Advantages: Primary-replica systems are often easier to design and deploy because the primary processor does most of the heavy lifting while the replica processors focus on specific tasks. Because the primary processor controls the system's essential functions, the design can offer redundancy and failover: if the primary processor fails, one of the replica processors can take over, increasing system stability. And because replica processors can be dedicated to specific tasks, critical system functions do not conflict with specialized workloads, resulting in predictable performance.

Disadvantages: A primary-replica system may not have enough processor cores available for parallel execution, because the primary processor manages system tasks alone, which limits overall system performance.
Distributing workloads between the primary and replica processors can be difficult and may require complex load balancing. Increasing the number of replica processors does not always lead to linear performance improvements; the primary processor becomes a bottleneck when coordinating a large number of replicas.

Symmetrical Multiprocessing: Symmetric multiprocessing (SMP) is a multiprocessor architecture in which two or more identical processors are connected to a single shared main memory and have full access to all input and output devices. Each processor is a peer and can schedule and run work on its own; there is no dedicated supervisor. For instance, SMP lets multiple processors work on a single problem at the same time, an approach known as parallel programming.

Advantages: SMP systems offer high parallelism because all cores can perform work at the same time, which makes good use of CPU resources and improves performance. SMP can scale up easily by adding processor cores, and performance tends to increase close to linearly as cores are added. Load balancing is often easier in SMP systems because tasks can be dispatched to any available core without complicated routing decisions.

Disadvantages: SMP systems are more complicated to design and maintain, especially with respect to shared memory and inter-core synchronization. All cores in an SMP system share system resources, which can lead to contention for memory, cache, and I/O.
When multiple identical cores run at full capacity, the system generates more heat and consumes more power, in contrast to a primary-replica system where some cores may sit in a lower-power state. Software development for SMP systems is also difficult, since it involves addressing issues related to thread synchronization and parallelism.
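To illustrate the SMP-style load balancing described above, the hedged sketch below treats every thread as a peer: each worker simply grabs the next task index from a shared atomic counter, so work runs on whichever core happens to be free, with no central dispatcher. The task count, the sleep-based stand-in for work, and the use of hardware_concurrency for the thread count are all invented for this example.

// Sketch of SMP-style load balancing: all worker threads are peers and
// pull the next unit of work from a shared atomic counter.
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    const unsigned cores = std::max(2u, std::thread::hardware_concurrency());
    const int tasks = 64;            // made-up number of tasks
    std::atomic<int> next_task{0};   // shared work index, no central dispatcher

    auto worker = [&] {
        for (int t = next_task.fetch_add(1); t < tasks; t = next_task.fetch_add(1)) {
            // stand-in for real work; uneven sleep mimics uneven task sizes
            std::this_thread::sleep_for(std::chrono::milliseconds(t % 5));
        }
    };

    std::vector<std::thread> pool;
    for (unsigned c = 0; c < cores; ++c) pool.emplace_back(worker);
    for (auto& t : pool) t.join();

    std::printf("ran %d tasks on %u symmetric worker threads\n", tasks, cores);
    return 0;
}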