Lecture Notes on Computer Organization and Architecture

Dr. Davis Hardy
Published: 22-07-2017
CONTENTS

Preface

1. Introduction to Computer Systems
   1.1. Historical Background
   1.2. Architectural Development and Styles
   1.3. Technological Development
   1.4. Performance Measures
   1.5. Summary
   Exercises
   References and Further Reading

2. Instruction Set Architecture and Design
   2.1. Memory Locations and Operations
   2.2. Addressing Modes
   2.3. Instruction Types
   2.4. Programming Examples
   2.5. Summary
   Exercises
   References and Further Reading

3. Assembly Language Programming
   3.1. A Simple Machine
   3.2. Instruction Mnemonics and Syntax
   3.3. Assembler Directives and Commands
   3.4. Assembly and Execution of Programs
   3.5. Example: The X86 Family
   3.6. Summary
   Exercises
   References and Further Reading

4. Computer Arithmetic
   4.1. Number Systems
   4.2. Integer Arithmetic
   4.3. Floating-Point Arithmetic
   4.4. Summary
   Exercises
   References and Further Reading

5. Processing Unit Design
   5.1. CPU Basics
   5.2. Register Set
   5.3. Datapath
   5.4. CPU Instruction Cycle
   5.5. Control Unit
   5.6. Summary
   Exercises
   References

6. Memory System Design I
   6.1. Basic Concepts
   6.2. Cache Memory
   6.3. Summary
   Exercises
   References and Further Reading

7. Memory System Design II
   7.1. Main Memory
   7.2. Virtual Memory
   7.3. Read-Only Memory
   7.4. Summary
   Exercises
   References and Further Reading

8. Input–Output Design and Organization
   8.1. Basic Concepts
   8.2. Programmed I/O
   8.3. Interrupt-Driven I/O
   8.4. Direct Memory Access (DMA)
   8.5. Buses
   8.6. Input–Output Interfaces
   8.7. Summary
   Exercises
   References and Further Reading

9. Pipelining Design Techniques
   9.1. General Concepts
   9.2. Instruction Pipeline
   9.3. Example Pipeline Processors
   9.4. Instruction-Level Parallelism
   9.5. Arithmetic Pipeline
   9.6. Summary
   Exercises
   References and Further Reading

10. Reduced Instruction Set Computers (RISCs)
   10.1. RISC/CISC Evolution Cycle
   10.2. RISC Design Principles
   10.3. Overlapped Register Windows
   10.4. RISCs Versus CISCs
   10.5. Pioneer (University) RISC Machines
   10.6. Example of Advanced RISC Machines
   10.7. Summary
   Exercises
   References and Further Reading

11. Introduction to Multiprocessors
   11.1. Introduction
   11.2. Classification of Computer Architectures
   11.3. SIMD Schemes
   11.4. MIMD Schemes
   11.5. Interconnection Networks
   11.6. Analysis and Performance Metrics
   11.7. Summary
   Exercises
   References and Further Reading

Index

PREFACE

This book is intended for students in computer engineering, computer science, and electrical engineering. The material covered in the book is suitable for a one-semester course on "Computer Organization & Assembly Language" and a one-semester course on "Computer Architecture." The book assumes that students studying computer organization and/or computer architecture have had exposure to a basic course on digital logic design and an introductory course on a high-level computer language.

This book reflects the authors' experience in teaching courses on computer organization and computer architecture for more than fifteen years. Most of the material used in the book has been used in our undergraduate classes. The coverage in the book takes basically two viewpoints of computers. The first is the programmer's viewpoint and the second is the overall structure and function of a computer.
The first viewpoint covers what is normally taught in a junior-level course on Computer Organization and Assembly Language, while the second viewpoint covers what is normally taught in a senior-level course on Computer Architecture. In what follows, we provide a chapter-by-chapter review of the material covered in the book. In doing so, we aim at providing course instructors, students, and practicing engineers/scientists with enough information to help them select the appropriate chapter or sequence of chapters to cover/review.

Chapter 1 sets the stage for the material presented in the remaining chapters. Our coverage in this chapter starts with a brief historical review of the development of computer systems. The objective is to understand the factors affecting computing as we know it today and hopefully to forecast the future of computation. We also introduce the general issues related to general-purpose and special-purpose machines. Computer systems can be defined through their interfaces at a number of levels of abstraction, each providing functional support to its predecessor. The interface between the application programs and the high-level language is referred to as the language architecture. The instruction set architecture defines the interface between the basic machine instruction set and the runtime and I/O control. A different definition of computer architecture is built on four basic viewpoints. These are the structure, the organization, the implementation, and the performance. The structure defines the interconnection of various hardware components, the organization defines the dynamic interplay and management of the various components, the implementation defines the detailed design of hardware components, and the performance specifies the behavior of the computer system. Architectural development and styles are covered in Chapter 1. We devote the last part of our coverage in this chapter to a discussion on the different CPU performance measures used.
The sequence consisting of Chapters 2 and 3 introduces the basic issues related to instruction set architecture and assembly language programming. Chapter 2 covers the basic principles involved in instruction set architecture and design. We start by addressing the issue of storing and retrieving information into and from memory, followed by a discussion on a number of different addressing modes. We also explain instruction execution and sequencing in some detail. We show the application of the presented addressing modes and instruction characteristics in writing sample code segments for performing a number of simple programming tasks. Building on the material presented in Chapter 2, Chapter 3 considers the issues related to assembly language programming. We introduce a programmer's view of a hypothetical machine. The mnemonics and syntax used in representing the different instructions for the machine model are then introduced. We follow that with a discussion on the execution of assembly programs and an assembly language example of the X86 Intel CISC family.

The sequence of Chapters 4 and 5 covers the design and analysis of arithmetic circuits and the design of the central processing unit (CPU). Chapter 4 introduces the reader to the fundamental issues related to the arithmetic operations and circuits used to support computation in computers. We first introduce issues such as number representations, base conversion, and integer arithmetic. In particular, we introduce a number of algorithms together with hardware schemes that are used in performing integer addition, subtraction, multiplication, and division. As far as floating-point arithmetic is concerned, we introduce issues such as floating-point representation, floating-point operations, and floating-point hardware schemes. Chapter 5 covers the main issues related to the organization and design of the CPU.
The primary function of the CPU is to execute a set of instructions stored in the computer's memory. A simple CPU consists of a set of registers, an arithmetic logic unit (ALU), and a control unit (CU). The basic principles needed for understanding the instruction fetch-execute cycle and CPU register set design are first introduced. The use of these basic principles in the design of real machines such as the 8086 and the MIPS is shown. A detailed discussion on a typical CPU datapath and control unit design is also provided.

Chapters 6 and 7 combined are dedicated to memory system design. A typical memory hierarchy starts with a small, expensive, and relatively fast unit, called the cache. The cache is followed in the hierarchy by a larger, less expensive, and relatively slow main memory unit. Cache and main memory are built using solid-state semiconductor material. They are followed in the hierarchy by far larger, less expensive, and much slower magnetic memories that consist typically of the (hard) disk and the tape. We start our discussion in Chapter 6 by analyzing the factors influencing the success of a memory hierarchy of a computer. The remaining part of Chapter 6 is devoted to the design and analysis of cache memories. The issues related to the design and analysis of the main and the virtual memory are covered in Chapter 7. A brief coverage of the different read-only memory (ROM) implementations is also provided in Chapter 7.

I/O plays a crucial role in any modern computer system. A clear understanding and appreciation of the fundamentals of I/O operations, devices, and interfaces are of great importance. The focus of Chapter 8 is a study of input–output (I/O) design and organization. We cover the basic issues related to programmed and interrupt-driven I/O. The interrupt architecture in real machines such as the 8086 and the MC9328MX1/MXL AITC is explained. This is followed by a detailed discussion on direct memory access (DMA), buses (synchronous and asynchronous), and arbitration schemes.
Our coverage in Chapter 8 concludes with a discussion on I/O interfaces.

There exist two basic techniques to increase the instruction execution rate of a processor: to increase the clock rate, thus decreasing the instruction execution time, or alternatively to increase the number of instructions that can be executed simultaneously. Pipelining and instruction-level parallelism are examples of the latter technique. Pipelining is the focus of the discussion provided in Chapter 9. The idea is to have more than one instruction being processed by the processor at the same time. This can be achieved by dividing the execution of an instruction among a number of sub-units (stages), each performing part of the required operations, i.e., instruction fetch, instruction decode, operand fetch, instruction execution, and store of results. Performance measures of a pipeline processor are introduced. The main issues contributing to instruction pipeline hazards are discussed and some possible solutions are introduced. In addition, we present the concept of arithmetic pipelining together with the problems involved in designing such a pipeline. Our coverage concludes with a review of two pipeline processors, i.e., the ARM 1026EJ-S and the UltraSPARC-III.

Chapter 10 is dedicated to a study of reduced instruction set computers (RISCs). These machines represent a noticeable shift in the computer architecture paradigm. The RISC paradigm emphasizes the enhancement of computer architectures with the resources needed to make the execution of the most frequent and the most time-consuming operations most efficient. RISC-based machines are characterized by a number of common features, such as a simple and reduced instruction set, fixed instruction format, one instruction per machine cycle, pipelined instruction fetch/execute units, an ample number of general-purpose registers (or alternatively optimized compiler code generation), load/store memory operations, and hardwired control unit design.
Our coverage in this chapter starts with a discussion on the evolution of RISC architectures and the studies that led to their introduction. Overlapped register windows, an essential concept in the RISC development, are also discussed. We show the application of the basic RISC principles in machines such as the Berkeley RISC, the Stanford MIPS, the Compaq Alpha, and the Sun UltraSPARC.

Having covered the essential issues in the design and analysis of uniprocessors and pointed out the main limitations of a single-stream machine, we provide an introduction to the basic concepts related to multiprocessors in Chapter 11. Here a number of processors (two or more) are connected in a manner that allows them to share the simultaneous execution of a single task. The main advantage of using multiprocessors is the creation of powerful computers by connecting many existing smaller ones. In addition, a multiprocessor consisting of a number of single uniprocessors is expected to be more cost-effective than building a high-performance single processor. We present a number of different topologies used for interconnecting multiple processors, different classification schemes, and a topology-based taxonomy for interconnection networks. Two memory-organization schemes for MIMD (multiple instruction, multiple data) multiprocessors, i.e., shared memory and message passing, are also introduced. Our coverage in this chapter ends with a touch on the analysis and performance metrics for multiprocessors. Interested readers are referred to more elaborate discussions on multiprocessors in our book entitled Advanced Computer Architectures and Parallel Processing, John Wiley and Sons, Inc., 2005.

From the above chapter-by-chapter review of the topics covered in the book, it should be clear that the chapters of the book are, to a great extent, self-contained and inclusive.
We believe that such an approach should help course instructors to selectively choose the set of chapters suitable for the targeted curriculum. Our experience indicates that the group of chapters consisting of Chapters 1 to 5 and 8 is typically suitable for a junior-level course on Computer Organization and Assembly Language for computer science, computer engineering, and electrical engineering students. The group of chapters consisting of Chapters 1, 6, 7, and 9–11 is typically suitable for a senior-level course on Computer Architecture. Practicing engineers and scientists will find it feasible to selectively consult the material covered in individual chapters and/or groups of chapters as indicated in the chapter-by-chapter review. For example, to find more about memory system design, interested readers may consult the sequence consisting of Chapters 6 and 7.

ACKNOWLEDGMENTS

We would like to express our thanks and appreciation to a number of people who have helped in the preparation of this book. Students in our Computer Organization and Computer Architecture courses at the University of Saskatchewan (UofS), SMU, KFUPM, and Kuwait University have used drafts of different chapters and provided us with useful feedback and comments that led to the improvement of the presentation of the material in the book; to them we are thankful. Our colleagues Donald Evan, Fatih Kocan, Peter Seidel, Mitch Thornton, A. Naseer, Habib Ammari, and Hakki Cankaya offered constructive comments and excellent suggestions that led to noticeable improvement in the style and presentation of the book material. We are indebted to the anonymous reviewers arranged by John Wiley for their suggestions and corrections. Special thanks to Albert Y. Zomaya, the series editor, and to Val Moliere, Kirsten Rohstedt, and Christine Punzo of John Wiley for their help in making this book a reality. Of course, responsibility for errors and inconsistencies rests with us.
Finally, and most of all, we want to thank our families for their patience and support during the writing of this book.

MOSTAFA ABD-EL-BARR
HESHAM EL-REWINI

CHAPTER 1

Introduction to Computer Systems

The technological advances witnessed in the computer industry are the result of a long chain of immense and successful efforts made by two major forces. These are the academia, represented by university research centers, and the industry, represented by computer companies. It is, however, fair to say that the current technological advances in the computer industry owe their inception to university research centers. In order to appreciate the current technological advances in the computer industry, one has to trace back through the history of computers and their development. The objective of such a historical review is to understand the factors affecting computing as we know it today and hopefully to forecast the future of computation.

A great majority of the computers of our daily use are known as general-purpose machines. These are machines that are built with no specific application in mind, but rather are capable of performing computation needed by a diversity of applications. These machines are to be distinguished from those built to serve (tailored to) specific applications. The latter are known as special-purpose machines. A brief historical background is given in Section 1.1.

Computer systems have conventionally been defined through their interfaces at a number of layered abstraction levels, each providing functional support to its predecessor. Included among the levels are the application programs, the high-level languages, and the set of machine instructions. Based on the interface between different levels of the system, a number of computer architectures can be defined. The interface between the application programs and a high-level language is referred to as a language architecture.
The instruction set architecture defines the interface between the basic machine instruction set and the runtime and I/O control. A different definition of computer architecture is built on four basic viewpoints. These are the structure, the organization, the implementation, and the performance. In this definition, the structure defines the interconnection of various hardware components, the organization defines the dynamic interplay and management of the various components, the implementation defines the detailed design of hardware components, and the performance specifies the behavior of the computer system. Architectural development and styles are covered in Section 1.2.

Fundamentals of Computer Organization and Architecture, by M. Abd-El-Barr and H. El-Rewini. ISBN 0-471-46741-3. Copyright 2005 John Wiley & Sons, Inc.

A number of technological developments are presented in Section 1.3. Our discussion in this chapter concludes with a detailed coverage of CPU performance measures.

1.1. HISTORICAL BACKGROUND

In this section, we would like to provide a historical background on the evolution of cornerstone ideas in the computing industry. We should emphasize at the outset that the effort to build computers has not originated at one single place. There is every reason for us to believe that attempts to build the first computer existed in different geographically distributed places. We also firmly believe that building a computer requires teamwork. Therefore, when some people attribute a machine to the name of a single researcher, what they actually mean is that such a researcher may have led the team that introduced the machine. We, therefore, see it more appropriate to mention the machine and the place it was first introduced without linking that to a specific name. We believe that such an approach is fair and should eliminate any controversy about researchers and their names.
It is probably fair to say that the first program-controlled (mechanical) computer ever built was the Z1 (1938). This was followed in 1939 by the Z2 as the first operational program-controlled computer with fixed-point arithmetic. However, the first recorded university-based attempt to build a computer originated on the Iowa State University campus in the early 1940s. Researchers on that campus were able to build a small-scale special-purpose electronic computer. However, that computer was never completely operational. Just about the same time, a complete design of a fully functional programmable special-purpose machine, the Z3, was reported in Germany in 1941. It appears that the lack of funding prevented such a design from being implemented. History recorded that while these two attempts were in progress, researchers from different parts of the world had opportunities to gain first-hand experience through their visits to the laboratories and institutes carrying out the work. It is assumed that such first-hand visits and interchange of ideas enabled the visitors to embark on similar projects in their own laboratories back home.

As far as general-purpose machines are concerned, the University of Pennsylvania is recorded to have hosted the building of the Electronic Numerical Integrator and Calculator (ENIAC) machine in 1944. It was the first operational general-purpose machine built using vacuum tubes. The machine was primarily built to help compute artillery firing tables during World War II. It was programmable through manual setting of switches and plugging of cables. The machine was slow by today's standard, with a limited amount of storage and primitive programmability. An improved version of the ENIAC was proposed on the same campus. The improved version of the ENIAC, called the Electronic Discrete Variable Automatic Computer (EDVAC), was an attempt to improve the way programs are entered and to explore the concept of stored programs. It was not until 1952 that the EDVAC project was completed.
Inspired by the ideas implemented in the ENIAC, researchers at the Institute for Advanced Study (IAS) at Princeton built (in 1946) the IAS machine, which was about 10 times faster than the ENIAC.

In 1946, while the EDVAC project was in progress, a similar project was initiated at Cambridge University. The project was to build a stored-program computer, known as the Electronic Delay Storage Automatic Calculator (EDSAC). It was in 1949 that the EDSAC became the world's first full-scale, stored-program, fully operational computer. A spin-off of the EDSAC resulted in a series of machines introduced at Harvard. The series consisted of MARK I, II, III, and IV. The latter two machines introduced the concept of separate memories for instructions and data. The term Harvard Architecture was given to such machines to indicate the use of separate memories. It should be noted that the term Harvard Architecture is used today to describe machines with separate caches for instructions and data.

The first general-purpose commercial computer, the UNIVersal Automatic Computer (UNIVAC I), was on the market by the middle of 1951. It represented an improvement over the BINAC, which was built in 1949. IBM announced its first computer, the IBM 701, in 1952. The early 1950s witnessed a slowdown in the computer industry. In 1964 IBM announced a line of products under the name IBM 360 series. The series included a number of models that varied in price and performance. This led Digital Equipment Corporation (DEC) to introduce the first minicomputer, the PDP-8. It was considered a remarkably low-cost machine. Intel introduced the first microprocessor, the Intel 4004, in 1971. The world witnessed the birth of the first personal computer (PC) in 1977 when the Apple computer series was first introduced. In 1977 the world also witnessed the introduction of the VAX-11/780 by DEC. Intel followed suit by introducing the first of the most popular microprocessors, the 80x86 series.
Personal computers, which were introduced in 1977 by Altair, Processor Technology, North Star, Tandy, Commodore, Apple, and many others, enhanced the productivity of end-users in numerous departments. Personal computers from Compaq, Apple, IBM, Dell, and many others soon became pervasive, and changed the face of computing.

In parallel with small-scale machines, supercomputers were coming into play. The first such supercomputer, the CDC 6600, was introduced in 1961 by Control Data Corporation. Cray Research Corporation introduced the best cost/performance supercomputer, the Cray-1, in 1976.

The 1980s and 1990s witnessed the introduction of many commercial parallel computers with multiple processors. They can generally be classified into two main categories: (1) shared memory and (2) distributed memory systems. The number of processors in a single machine ranged from several in a shared memory computer to hundreds of thousands in a massively parallel system. Examples of parallel computers during this era include the Sequent Symmetry, Intel iPSC, nCUBE, Intel Paragon, Thinking Machines (CM-2, CM-5), MasPar (MP), Fujitsu (VPP500), and others.

One of the clear trends in computing is the substitution of centralized servers by networks of computers. These networks connect inexpensive, powerful desktop machines to form unequaled computing power. Local area networks (LANs) of powerful personal computers and workstations began to replace mainframes and minis by 1990.
These individual desktop computers were soon to be connected into larger complexes of computing by wide area networks (WANs).

TABLE 1.1 Four Decades of Computing

Feature        Batch               Time-sharing        Desktop               Network
Decade         1960s               1970s               1980s                 1990s
Location       Computer room       Terminal room       Desktop               Mobile
Users          Experts             Specialists         Individuals           Groups
Data           Alphanumeric        Text, numbers       Fonts, graphs         Multimedia
Objective      Calculate           Access              Present               Communicate
Interface      Punched card        Keyboard & CRT      See & point           Ask & tell
Operation      Process             Edit                Layout                Orchestrate
Connectivity   None                Peripheral cable    LAN                   Internet
Owners         Corporate           Divisional          Departmental          Everyone
               computer centers    IS shops            end-users

CRT, cathode ray tube; LAN, local area network.

The pervasiveness of the Internet created interest in network computing and more recently in grid computing. Grids are geographically distributed platforms of computation. They should provide dependable, consistent, pervasive, and inexpensive access to high-end computational facilities.

Table 1.1 is modified from a table proposed by Lawrence Tesler (1995). In this table, major characteristics of the different computing paradigms are associated with each decade of computing, starting from 1960.

1.2. ARCHITECTURAL DEVELOPMENT AND STYLES

Computer architects have always been striving to increase the performance of their architectures. This has taken a number of forms. Among these is the philosophy that by doing more in a single instruction, one can use a smaller number of instructions to perform the same job. The immediate consequence of this is the need for fewer memory read/write operations and an eventual speedup of operations. It was also argued that increasing the complexity of instructions and the number of addressing modes has the theoretical advantage of reducing the "semantic gap" between the instructions in a high-level language and those in the low-level (machine) language.
A single (machine) instruction to convert several binary coded decimal (BCD) numbers to binary is an example of how complex some instructions were intended to be. The huge number of addressing modes considered (more than 20 in the VAX machine) further adds to the complexity of instructions. Machines following this philosophy have been referred to as complex instruction set computers (CISCs). Examples of CISC machines include the Intel Pentium, the Motorola MC68000, and the IBM & Macintosh PowerPC.

It should be noted that as more capabilities were added to their processors, manufacturers realized that it was increasingly difficult to support higher clock rates that would have been possible otherwise. This is because of the increased complexity of computations within a single clock period. A number of studies from the mid-1970s and early 1980s also identified that in typical programs more than 80% of the instructions executed are those using assignment statements, conditional branching, and procedure calls. It was also surprising to find out that simple assignment statements constitute almost 50% of those operations. These findings caused a different philosophy to emerge. This philosophy promotes the optimization of architectures by speeding up those operations that are most frequently used while reducing the instruction complexities and the number of addressing modes. Machines following this philosophy have been referred to as reduced instruction set computers (RISCs). Examples of RISCs include the Sun SPARC and MIPS machines.

The above two philosophies in architecture design have led to the unresolved controversy as to which architecture style is "best." It should, however, be mentioned that studies have indicated that RISC architectures would indeed lead to faster execution of programs.
The majority of contemporary microprocessor chips seem to follow the RISC paradigm. In this book we will present the salient features and examples for both CISC and RISC machines.

1.3. TECHNOLOGICAL DEVELOPMENT

Computer technology has shown an unprecedented rate of improvement. This includes the development of processors and memories. Indeed, it is the advances in technology that have fueled the computer industry. The integration of numbers of transistors (a transistor is a controlled on/off switch) into a single chip has increased from a few hundred to millions. This impressive increase has been made possible by the advances in the fabrication technology of transistors.

The scale of integration has grown from small-scale (SSI) to medium-scale (MSI) to large-scale (LSI) to very large-scale integration (VLSI), and currently to wafer-scale integration (WSI). Table 1.2 shows the typical numbers of devices per chip in each of these technologies.

TABLE 1.2 Numbers of Devices per Chip

Integration   Technology      Typical number of devices   Typical functions
SSI           Bipolar         10–20                       Gates and flip-flops
MSI           Bipolar & MOS   50–100                      Adders & counters
LSI           Bipolar & MOS   100–10,000                  ROM & RAM
VLSI          CMOS (mostly)   10,000–5,000,000            Processors
WSI           CMOS            >5,000,000                  DSP & special purposes

SSI, small-scale integration; MSI, medium-scale integration; LSI, large-scale integration; VLSI, very large-scale integration; WSI, wafer-scale integration.

It should be mentioned that the continuous decrease in the minimum device feature size has led to a continuous increase in the number of devices per chip, which in turn has led to a number of developments. Among these is the increase in the number of devices in RAM memories, which in turn helps designers to trade off memory size for speed. The improvement in the feature size provides golden opportunities for introducing improved design styles.

1.4. PERFORMANCE MEASURES

In this section, we consider the important issue of assessing the performance of a computer.
In particular, we focus our discussion on a number of performance measures that are used to assess computers. Let us admit at the outset that there are various facets to the performance of a computer. For example, a user of a computer measures its performance based on the time taken to execute a given job (program). On the other hand, a laboratory engineer measures the performance of his system by the total amount of work done in a given time. While the user considers the program execution time a measure of performance, the laboratory engineer considers the throughput a more important measure of performance. A metric for assessing the performance of a computer helps in comparing alternative designs.

Performance analysis should help in answering questions such as how fast a program can be executed using a given computer. In order to answer such a question, we need to determine the time taken by a computer to execute a given job. We define the clock cycle time as the time between two consecutive rising (trailing) edges of a periodic clock signal (Fig. 1.1). Clock cycles allow counting unit computations, because the storage of computation results is synchronized with rising (trailing) clock edges. The time required to execute a job by a computer is often expressed in terms of clock cycles.

[Figure 1.1: Clock signal]

We denote the number of CPU clock cycles for executing a job to be the cycle count (CC), the cycle time by CT, and the clock frequency by f = 1/CT. The time taken by the CPU to execute a job can be expressed as

CPU time = CC × CT = CC / f

It may be easier to count the number of instructions executed in a given program as compared to counting the number of CPU clock cycles needed for executing that program. Therefore, the average number of clock cycles per instruction (CPI) has been used as an alternate performance measure. The following equation shows how to compute the CPI.
    CPI = CPU clock cycles for the program / Instruction count

    CPU time = Instruction count x CPI x Clock cycle time
             = (Instruction count x CPI) / Clock rate

It is known that the instruction set of a given machine consists of a number of instruction categories: ALU (simple assignment and arithmetic and logic instructions), load, store, branch, and so on. In the case that the CPI for each instruction category is known, the overall CPI can be computed as

    CPI = ( sum_{i=1..n} CPI_i x I_i ) / Instruction count

where I_i is the number of times an instruction of type i is executed in the program and CPI_i is the average number of clock cycles needed to execute such an instruction.

Example: Consider computing the overall CPI for a machine A for which the following performance measures were recorded when executing a set of benchmark programs. Assume that the clock rate of the CPU is 200 MHz.

    Instruction     Percentage of    No. of cycles
    category        occurrence       per instruction
    ALU             38               1
    Load & store    15               3
    Branch          42               4
    Others           5               5

Assuming the execution of 100 instructions, the overall CPI can be computed as

    CPI_a = ( sum_{i=1..n} CPI_i x I_i ) / Instruction count
          = (38 x 1 + 15 x 3 + 42 x 4 + 5 x 5) / 100 = 2.76

It should be noted that the CPI reflects the organization and the instruction set architecture of the processor, while the instruction count reflects the instruction set architecture and compiler technology used. This shows the degree of interdependence between the two performance parameters. Therefore, it is imperative that both the CPI and the instruction count are considered in assessing the merits of a given computer, or equivalently, in comparing the performance of two machines.
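The CPI and CPU-time formulas above can be checked with a few lines of code. The following is a minimal sketch (the function names are ours, not from the text), using machine A's instruction mix from the example:

```python
# Minimal sketch of the CPI and CPU-time formulas; function names are illustrative.

def overall_cpi(mix):
    """mix: list of (I_i, CPI_i) pairs, i.e., the count and the
    cycles-per-instruction of each instruction category."""
    instruction_count = sum(count for count, _ in mix)
    return sum(count * cpi for count, cpi in mix) / instruction_count

def cpu_time(instruction_count, cpi, clock_rate_hz):
    """CPU time = Instruction count x CPI / Clock rate."""
    return instruction_count * cpi / clock_rate_hz

# Machine A's mix per 100 instructions: ALU 38 @ 1 cycle, load/store 15 @ 3,
# branch 42 @ 4, others 5 @ 5 (see the example above).
mix_a = [(38, 1), (15, 3), (42, 4), (5, 5)]
cpi_a = overall_cpi(mix_a)
print(cpi_a)                        # 2.76
print(cpu_time(100, cpi_a, 200e6))  # seconds to execute the 100 instructions
```

This simply evaluates CPI_a = 276/100 = 2.76 and then applies the CPU-time equation at the 200 MHz clock rate assumed in the example.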
A different performance measure that has been given a lot of attention in recent years is MIPS (million instructions per second, the rate of instruction execution per unit time), which is defined as

    MIPS = Instruction count / (Execution time x 10^6) = Clock rate / (CPI x 10^6)

Example: Suppose that the same set of benchmark programs considered above were executed on another machine, call it machine B, for which the following measures were recorded.

    Instruction     Percentage of    No. of cycles
    category        occurrence       per instruction
    ALU             35               1
    Load & store    30               2
    Branch          15               3
    Others          20               5

What is the MIPS rating for the machine considered in the previous example (machine A) and machine B, assuming a clock rate of 200 MHz?

    CPI_a = (38 x 1 + 15 x 3 + 42 x 4 + 5 x 5) / 100 = 2.76
    MIPS_a = Clock rate / (CPI_a x 10^6) = (200 x 10^6) / (2.76 x 10^6) = 72.46

    CPI_b = (35 x 1 + 30 x 2 + 15 x 3 + 20 x 5) / 100 = 2.4
    MIPS_b = Clock rate / (CPI_b x 10^6) = (200 x 10^6) / (2.4 x 10^6) = 83.33

Thus MIPS_b > MIPS_a.

It is interesting to note here that although MIPS has been used as a performance measure for machines, one has to be careful in using it to compare machines having different instruction sets. This is because MIPS does not track execution time. Consider, for example, the following measurements made on two different machines running a given set of benchmark programs.

                 Instruction     No. of instructions    No. of cycles
                 category        (in millions)          per instruction
    Machine A    ALU              8                     1
                 Load & store     4                     3
                 Branch           2                     4
                 Others           4                     3
    Machine B    ALU             10                     1
                 Load & store     8                     2
                 Branch           2                     4
                 Others           4                     3

    CPI_a = ((8 x 1 + 4 x 3 + 2 x 4 + 4 x 3) x 10^6) / ((8 + 4 + 2 + 4) x 10^6) = 40/18 ~ 2.2
    MIPS_a = (200 x 10^6) / (2.2 x 10^6) ~ 90.9
    CPU time_a = Instruction count x CPI_a / Clock rate = (18 x 10^6 x 2.2) / (200 x 10^6) = 0.198 s

    CPI_b = ((10 x 1 + 8 x 2 + 2 x 4 + 4 x 3) x 10^6) / ((10 + 8 + 2 + 4) x 10^6) = 46/24 ~ 1.92
    MIPS_b = (200 x 10^6) / (1.92 x 10^6) ~ 104.2
    CPU time_b = (24 x 10^6 x 1.92) / (200 x 10^6) ~ 0.23 s

    MIPS_b > MIPS_a and CPU time_b > CPU time_a

The example shows that although machine B has a higher MIPS rating compared to machine A, it requires a longer CPU time to execute the same set of benchmark programs.

Million floating-point instructions per second (MFLOPS, the rate of floating-point instruction execution per unit time) has also been used as a measure of machine performance. It is defined as

    MFLOPS = Number of floating-point operations in a program / (Execution time x 10^6)

While MIPS measures the rate of average instructions, MFLOPS is only defined for the subset of floating-point instructions. An argument against MFLOPS is the fact that the set of floating-point operations may not be consistent across machines, and therefore the actual floating-point operations will vary from machine to machine. Yet another argument is the fact that the performance of a machine for a given program as measured by MFLOPS cannot be generalized to provide a single performance metric for that machine.

The performance of a machine regarding one particular program might not be interesting to a broad audience. The use of arithmetic and geometric means are the most popular ways to summarize performance regarding larger sets of programs (e.g., benchmark suites). These are defined below.
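The pitfall illustrated by this example is easy to reproduce numerically. The sketch below (helper names are ours; the clock rate and instruction mixes are taken from the tables above) shows machine B winning on MIPS while losing on CPU time:

```python
# Sketch of the MIPS vs. CPU-time comparison; helper names are illustrative.

def overall_cpi(mix):
    """mix: list of (instruction_count, cycles_per_instruction) pairs."""
    return sum(n * c for n, c in mix) / sum(n for n, _ in mix)

def mips(clock_rate_hz, cpi):
    """MIPS = Clock rate / (CPI x 10^6)."""
    return clock_rate_hz / (cpi * 1e6)

def cpu_time(instruction_count, cpi, clock_rate_hz):
    """CPU time = Instruction count x CPI / Clock rate."""
    return instruction_count * cpi / clock_rate_hz

CLOCK = 200e6  # 200 MHz, as in the example

# Instruction counts (in millions) and cycles per category, from the table above.
mix_a = [(8e6, 1), (4e6, 3), (2e6, 4), (4e6, 3)]   # 18 M instructions total
mix_b = [(10e6, 1), (8e6, 2), (2e6, 4), (4e6, 3)]  # 24 M instructions total

cpi_a, cpi_b = overall_cpi(mix_a), overall_cpi(mix_b)
print(mips(CLOCK, cpi_a), mips(CLOCK, cpi_b))                      # B has the higher MIPS rating
print(cpu_time(18e6, cpi_a, CLOCK), cpu_time(24e6, cpi_b, CLOCK))  # ...yet the longer CPU time
```

Because machine B executes more instructions overall, its lower CPI (and hence higher MIPS) does not translate into a shorter execution time.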
    Arithmetic mean = (1/n) x sum_{i=1..n} Execution time_i

    Geometric mean = ( prod_{i=1..n} Execution time_i )^(1/n)

where Execution time_i is the execution time for the ith program and n is the total number of programs in the set of benchmarks. The following table shows an example of computing these metrics.

    Item               CPU time on       CPU time on
                       computer A (s)    computer B (s)
    Program 1            50                10
    Program 2           500               100
    Program 3          5000              1000
    Arithmetic mean    1850               370
    Geometric mean      500               100

We conclude our coverage in this section with a discussion on what is known as Amdahl's law for the speedup (SU_o) due to an enhancement. In this case, we consider speedup as a measure of how a machine performs after some enhancement relative to its original performance. The following relationships formulate Amdahl's law:

    SU_o = Performance after enhancement / Performance before enhancement

    Speedup = Execution time before enhancement / Execution time after enhancement

Consider, for example, a possible enhancement to a machine that will reduce the execution time for some benchmarks from 25 s to 15 s. We say that the speedup resulting from such a reduction is SU_o = 25/15 = 1.67.

In its given form, Amdahl's law accounts for cases whereby improvement can be applied to the instruction execution time. However, sometimes it may be possible to achieve performance enhancement for only a fraction of the time, D. In this case a new formula has to be developed in order to relate the speedup SU_D, due to an enhancement for a fraction of time D, to the speedup due to an overall enhancement, SU_o. This relationship can be expressed as

    SU_o = 1 / ( (1 - D) + (D / SU_D) )

It should be noted that when D = 1, that is, when enhancement is possible at all times, then SU_o = SU_D, as expected.

Consider, for example, a machine for which a speedup of 30 is possible after applying an enhancement.
If under certain conditions the enhancement was only possible for 30% of the time, what is the speedup due to this partial application of the enhancement?

    SU_o = 1 / ( (1 - D) + (D / SU_D) )
         = 1 / ( (1 - 0.3) + (0.3 / 30) )
         = 1 / (0.7 + 0.01) = 1.4

It is interesting to note that the above formula can be generalized, as shown below, to account for the case whereby a number of different independent enhancements can be applied separately and for different fractions of the time, D_1, D_2, ..., D_n, thus leading respectively to the speedup enhancements SU_{D_1}, SU_{D_2}, ..., SU_{D_n}:

    SU_o = 1 / ( [1 - (D_1 + D_2 + ... + D_n)] + (D_1 + D_2 + ... + D_n) / (SU_{D_1} + SU_{D_2} + ... + SU_{D_n}) )

1.5. SUMMARY

In this chapter, we provided a brief historical background for the development of computer systems, starting from the first recorded attempt to build a computer, the Z1, in 1938, passing through the CDC 6600 and the Cray supercomputers, and ending up with today's modern high-performance machines. We then provided a discussion on the RISC versus CISC architectural styles and their impact on machine performance. This was followed by a brief discussion on the technological development and its impact on computing performance. Our coverage in this chapter was concluded with a detailed treatment of the issues involved in assessing the performance of computers. In particular, we have introduced a number of performance measures such as CPI, MIPS, MFLOPS, and arithmetic/geometric performance means, none of them defining the performance of a machine consistently. Possible
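The benchmark means and Amdahl's-law formulas above can be verified with a short script (a sketch only; the function names are ours, not from the text):

```python
import math

def arithmetic_mean(times):
    """Arithmetic mean of a set of program execution times."""
    return sum(times) / len(times)

def geometric_mean(times):
    """Geometric mean: the nth root of the product of n execution times."""
    return math.prod(times) ** (1.0 / len(times))

def amdahl_speedup(d, su_d):
    """Amdahl's law, SU_o = 1 / ((1 - D) + D / SU_D), where an enhancement
    of speedup SU_D applies only to a fraction D of the execution time."""
    return 1.0 / ((1.0 - d) + d / su_d)

# Benchmark table above: computer A vs. computer B.
print(arithmetic_mean([50, 500, 5000]), arithmetic_mean([10, 100, 1000]))  # 1850.0 370.0
print(round(geometric_mean([50, 500, 5000])),
      round(geometric_mean([10, 100, 1000])))                              # 500 100
print(amdahl_speedup(0.3, 30))  # ~1.41, the worked example above
print(amdahl_speedup(1.0, 30))  # ~30: with D = 1, SU_o equals SU_D
```

Note how the geometric mean preserves the 5:1 ratio between the two computers across all three programs, whereas the arithmetic mean is dominated by the longest-running program.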
