How Microprocessors Work

Dr. Naveen Bansal
Published: 25-10-2017
1 Evolution and History of Microprocessors

To understand the working and programming of microprocessors, it is necessary to know the evolution and history of microprocessors, which are discussed in this chapter. The succeeding chapters of this book discuss the basics of microprocessors in detail. For beginners it is difficult to understand the operation of a digital computer in one go, as it involves many details, so an effort has been made here to describe the operation of a digital computer step by step.

1.1 INTRODUCTION

A microprocessor is a digital device on a chip which can fetch instructions from a memory, decode and execute them (i.e. perform certain arithmetic and logical operations), accept data from input devices, and send results to output devices. A microprocessor interfaced with memory and input/output devices therefore forms a microcomputer. Basically, there are five building blocks of a digital computer:

Input Unit: Through this unit, data and instructions are fed to the memory of the computer. The basic purpose of this unit is to read data into the machine. The program is read into memory along with the input data required to solve or compute the problem. Typical input devices are keyboards, paper tape readers, toggle switches, etc.

Memory Unit: The memory unit of a digital computer consists of devices capable of storing information. The memory of a computer stores two distinct types of information: the data to be processed by the computer, and the program through which the result of the desired problem is obtained. It usually consists of chips of both ROMs (Read Only Memories) and RAMs (Random Access Memories), either bipolar or MOS.
Arithmetic and Logical Unit (ALU): This unit performs arithmetic operations such as addition, subtraction, multiplication and division, as well as logical operations, on the data. The control unit tells the ALU which of the operations are to be performed; the sequence of instructions is controlled by the control unit.

Control Unit: The control unit performs the most important function in a computer. It controls all other units and also controls the flow of data from one unit to another for performing computations. It also sequences the operations, instructing all the units to perform their tasks in a particular order with the help of clock pulses.

Output Unit: After the data has been processed in the Arithmetic and Logical Unit, the results are presented to the outside world through this unit. CRTs (Cathode Ray Tubes), LEDs (Light Emitting Diodes), printers, etc. form the output unit.

In a computer system the ALU and Control Unit are combined in one unit called the Central Processing Unit (CPU). The block diagram of a computer is shown in figure 1.1.

Fig. 1.1

The Central Processing Unit is analogous to the human brain, as all decisions prescribed by the instructions are made by the CPU, and all other parts are controlled by it. A microprocessor is an integrated circuit designed for use as the Central Processing Unit of a computer. The CPU is the primary and central player in communicating with devices such as memory, input and output; the timing of the communication process is controlled by the group of circuits called the control unit. The term 'microprocessor' came into existence in 1971, when the Intel Corporation of America developed the first microprocessor (the INTEL 4004), a 4-bit microprocessor (µp). A microprocessor is a programmable digital electronic component that incorporates the functions of a Central Processing Unit (CPU) on a single semiconducting Integrated Circuit (IC).
As such, a system with a microprocessor as an integral part is termed a microprocessor-based system. When a computer is microprocessor based, it is called a microcomputer (µc). A microprocessor is specified by its 'word size', e.g. 4-bit, 8-bit, 16-bit, etc. The term 'word size' means the number of bits of data that is processed by the microprocessor as a unit. For example, an 8-bit microprocessor performs its various operations on 8-bit data. The word size also specifies the width of the data bus. As discussed above, a microcomputer consists of input/output devices and memory, in addition to the microprocessor which acts as its CPU. In fact, the CPU of such a system is commonly referred to as the microprocessor (µp).

Microprocessors made possible the advent of the microcomputer in the mid-1970s. Before this period, electronic CPUs were typically made from bulky discrete switching devices; later, small-scale integrated circuits were used to design CPUs. By integrating the processor onto one or very few large-scale integrated circuit packages (containing the equivalent of thousands or millions of discrete transistors), the cost of the processor was greatly reduced. The evolution of microprocessors has been known to follow Moore's law in its steadily increasing performance over the years. This law suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles roughly every 18 months, a dictum that has generally proven true since the early 1970s. This led to the dominance of microprocessors over every other form of computer; every system from the largest mainframes to the smallest handheld computers now uses a microprocessor at its core. Microprocessor-based systems play a significant role in the functioning of industrialized societies. The microprocessor can be viewed as a programmable logic device that can be used to control processes or to turn devices on and off.
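As an illustration of word size (an example added here, not part of the original text), the short Python sketch below models how an 8-bit processor handles data as 8-bit units, wrapping around and signalling a carry on overflow; the function name add8 is a hypothetical choice.

```python
def add8(a, b):
    """Add two 8-bit values the way an 8-bit ALU would:
    keep only the low 8 bits of the sum and report the carry-out."""
    total = a + b
    result = total & 0xFF    # word size limits the result to 8 bits
    carry = total > 0xFF     # carry flag: overflow out of bit 7
    return result, carry

# 250 + 10 = 260, which does not fit in 8 bits (maximum 255):
print(add8(250, 10))   # (4, True): the result wraps to 4 with a carry
```

This is why an 8-bit microprocessor needs multi-byte routines (using the carry flag) to work on numbers larger than 255.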
So the microprocessor can be viewed as the data processing unit or computing unit of a computer. The microprocessor is a programmable integrated device that has computing and decision-making capability similar to that of the central processing unit (CPU) of a computer. Nowadays, the microprocessor is used in a wide range of products called microprocessor-based products or systems. The microprocessor communicates and operates in the binary digits 0 and 1, called bits. Each microprocessor has a fixed set of instructions in the form of binary patterns, called its machine language. However, it is difficult for human beings to communicate in the language of 0s and 1s. Therefore, the binary instructions are given abbreviated names, called mnemonics, which form the assembly language for a given microprocessor.

1.2 EVOLUTION/HISTORY OF MICROPROCESSORS

The English mathematician Charles Babbage (1791-1871) was the first to propose the basic principle of modern computers. He gave the concept of a programmable machine similar to modern digital computers, and he is therefore known as the Father of Modern Computers. Before successful general-purpose mechanical computers were developed in the 1930s, mechanical calculators had been built to perform simple mathematical operations such as addition, subtraction, multiplication and division. Improvement continued, and in 1944 Prof. H. Aiken, in collaboration with IBM, developed the first practical electro-mechanical digital computer, known as the Harvard Mark I. It was large (51 feet long and 8 feet high), weighed about 2 tons, and used punched cards to input data into the computer. During the development of the Harvard Mark I, Konrad Zuse of Germany was busy developing a computer based on 0s and 1s rather than decimal numbers. During 1936-44 he built computers making use of relays (on-off for 1s and 0s), and he also developed a programming language for his computer.
The giant machines of 1940-50 were thus designed using relays and vacuum tubes. In 1945, John W. Mauchly and J. Presper Eckert of the University of Pennsylvania developed the first electronic computer, ENIAC (Electronic Numerical Integrator And Calculator). It was huge, weighing 30 tons, occupying an area of about 30' × 50', and making use of 18,000 vacuum tubes, more than 30,000 resistors, 10,000 capacitors and 6,000 switches. It took 200 µs for an addition and 3 ms for a 10-digit multiplication. It had separate memories for program and data, and used 20 electronic accumulators as memory, each storing a signed 10-digit decimal number. A number of computers using vacuum tubes were developed during 1940-55. The main drawback of the ENIAC was the short life of its vacuum tube components, which required frequent maintenance.

The invention of the semiconductor transistor in 1948 at Bell Laboratories led to further development of computers. The use of semiconductor transistors not only reduced the size of computers but also greatly increased their capability, which in turn reduced cost. Further, the invention of the integrated circuit in 1958 by Jack Kilby of Texas Instruments made a revolution in electronic circuitry. The use of ICs made computers very small and more versatile in function. Finally, the advent of IC technology led to the development of the first microprocessor (the INTEL 4004) in 1971 at Intel Corporation, by the engineer Marcian E. Hoff. It was a 4-bit microprocessor, a programmable controller on a chip. This was called the first-generation microprocessor. It was fabricated using P-channel MOSFET technology and had an instruction set of 45 different instructions. It addressed 4096 four-bit-wide memory locations. The P-channel MOSFET technology gave low cost but low speed, and was not compatible with TTL (Transistor-Transistor Logic) technology. At least 30 ICs were needed to form a complete system around it.
As the INTEL 4004 had a very small number of instructions, it could be used only in limited applications such as early video games and small microprocessor-based controllers. Seeing the microprocessor as a viable product, Intel Corporation released the 8008 microprocessor, an extended 8-bit version of the 4004, in 1972. Soon a variety of microprocessors was released by different manufacturers. A few first-generation microprocessors are listed in table 1.1.

Table 1.1

4-bit Microprocessors      8-bit Microprocessors
INTEL 4004                 INTEL 8008
INTEL 4040                 NATIONAL IMP-8
FAIRCHILD PPS-25           ROCKWELL PPS-8
ROCKWELL PPS-4             AMI 7200
NATIONAL IMP-4             MOSTEK 5065

Second-generation microprocessors appeared in 1973 and used NMOS technology, which offered faster speed, higher density and still better reliability. In 1974, the 8-bit INTEL 8080 microprocessor was developed using NMOS technology. It required only two additional devices to form a functional CPU. It was much faster than the 8008 and had more instructions, which facilitated programming. The 8080 was compatible with TTL, whereas the 8008 was not directly compatible. The 8080 could also address four times more memory (64K bytes) than the 8008 (16K bytes). In 1977 Intel Corporation developed another 8-bit microprocessor, the 8085, which proved to be a better version than the 8080: the execution time for the addition of two 8-bit numbers is 2.0 µs for the 8080 whereas it is 1.3 µs for the 8085. The main advantages of the 8085 were its internal clock generator, internal system controller, and higher clock frequency. Some of the important second-generation microprocessors are given in table 1.2.
Table 1.2

8-bit Microprocessors      12-bit Microprocessors
INTEL 8080                 INTERSIL 6100
INTEL 8085                 TOSHIBA TLCS-12
FAIRCHILD F8
MOTOROLA M6800
ZILOG Z-80
SIGNETICS 2650

The advantages of second-generation microprocessors are given below:
(i) Larger chip size (170 × 200 mils),
(ii) 40 pins,
(iii) More on-chip decoded timing signals,
(iv) Ability to address larger memory space,
(v) Ability to address more I/O ports,
(vi) More powerful instruction set,
(vii) Faster operation,
(viii) Better interrupt handling capabilities.

Third-generation microprocessors were introduced in 1978. These were 16-bit microprocessors, designed using HMOS (High-density MOS) technology, which offered better speed and higher packing density than NMOS. Some important third-generation microprocessors are given in table 1.3.

Table 1.3

16-bit Microprocessors
INTEL 8086                 MOTOROLA 68000
INTEL 8088                 MOTOROLA 68010
INTEL 80186                NATIONAL NS-16016
INTEL 80286                TEXAS INSTRUMENTS TMS-99000
ZILOG Z-8000

In 1978 the 16-bit INTEL 8086 microprocessor, in a 40-pin package, was introduced, and in 1979 another 16-bit microprocessor, the 8088, was developed. In addition to their other capabilities, these µPs contain multiply/divide arithmetic hardware. The memory addressing capability was increased greatly, to 1 MB-16 MB, through a variety of flexible and powerful addressing modes. Other characteristics of the third generation are given below:
(i) These microprocessors had 40/48/64 pins,
(ii) High speed and very strong processing capability,
(iii) Easier to program,
(iv) Allow for dynamically relocatable programs,
(v) Internal registers of 8/16/32 bits,
(vi) Multiply/divide arithmetic hardware,
(vii) Physical memory space from 1 to 16 megabytes (MB),
(viii) Flexible I/O port addressing,
(ix) More powerful interrupt and hardware capabilities,
(x) Segmented address and virtual memory features.
Fourth-generation microprocessors, of 32 bits, were introduced in the form of the 80386 in 1985 and the 80486 in 1989. The instruction set of the 80386 was upward compatible with the earlier 8086, 8088 and 80286 microprocessors. The 80486 is an improved version of the 80386: the 80386 executes many instructions in 2 clock cycles, while the 80486 executes them in one clock cycle. These microprocessors were fabricated in a low-power version of HMOS technology. Some important fourth-generation microprocessors are given in table 1.4.

Table 1.4

32-bit Microprocessors
INTEL 80386                MOTOROLA M-68020
INTEL 80486                MOTOROLA M-68030
NATIONAL NS16022           BELLMAC-32
MOTOROLA MC 88100

The fifth-generation microprocessor was introduced by Intel Corporation in 1993 in the form of the PENTIUM, with a 64-bit data bus. The Pentium was architecturally similar to the 80386 and 80486 microprocessors. The two introductory versions of the Pentium operated at clock frequencies of 60 MHz and 66 MHz, with a speed of 110 MIPS (Million Instructions Per Second). With better and more advanced technologies, the speed of µPs has increased tremendously: the old 8085 of 1977 executed 0.5 million instructions/sec (0.5 MIPS), while the 80486 executes 54 million instructions per second. The Pentium Pro processor is the sixth-generation microprocessor, introduced in 1995, with a better architecture but a larger size. The Pentium Pro contains 21 million transistors, and 3 integer units as well as a floating-point unit to increase the performance of most software. Its basic clock frequencies are 150 MHz and 166 MHz.

1.3 BASIC MICROPROCESSOR SYSTEM

The microprocessor alone does not serve any useful purpose unless it is supported by memory and I/O ports. The combination of memory and I/O ports with a microprocessor is known as a microprocessor-based system. As discussed above, the microprocessor, as the central processing unit, executes the program stored in memory and transfers data to and from the outside world through the I/O ports.
The microprocessor is interconnected with memory and I/O ports by the data bus, the address bus and the control bus. A bus is basically a communication link between the processing unit and the peripheral devices, as shown in figure 1.2.

1.3.1 Address Bus

The address bus is unidirectional and is used by the CPU to send out the address of the memory location to be accessed. It is also used by the CPU to select a particular input or output port. It may consist of 8, 12, 16, 20 or even more parallel lines. The number of bits in the address bus determines the maximum number of memory locations that can be accessed: a 16-bit address bus, for instance, can access 2^16 = 65,536 bytes of memory. It is labeled A0 ... A(n-1), where n is the width of the address bus in bits.

Fig. 1.2

1.3.2 Data Bus

The data bus is bidirectional, that is, data flow occurs both to and from the CPU and peripherals. There is also an internal data bus, which may not be of the same width as the external data bus that connects the CPU to I/O and memory. A microprocessor is characterized by the width of its data bus; microprocessors having internal and external data buses of different widths are characterized by either their internal or their external data bus. The size of the internal data bus determines the largest number that can be processed as a unit: a 16-bit internal data bus, for instance, can represent 65,536 (64K) distinct values. The bus is labeled D0 ... D(n-1), where n is the data bus width in bits.

1.3.3 Control Bus

The control bus contains a number of individual lines carrying synchronizing signals. It sends out control signals to memory, I/O ports and other peripheral devices to ensure proper operation. It carries control signals like MEMORY READ, MEMORY WRITE, READ INPUT PORT, WRITE OUTPUT PORT, HOLD, INTERRUPT, etc.
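To make the relationship between address bus width and addressable memory concrete, here is a small Python sketch (an illustration added to the text; the function name is a hypothetical choice):

```python
def addressable_locations(address_lines):
    """Each address line doubles the number of distinct addresses,
    so n lines can select 2**n memory locations."""
    return 2 ** address_lines

for n in (8, 12, 16, 20):
    print(f"{n}-bit address bus -> {addressable_locations(n):,} locations")
# A 16-line address bus gives 65,536 locations (64 KB);
# 20 lines give 1,048,576 locations (1 MB, as on the 8086/8088).
```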
For instance, if it is desired to read the contents of a particular memory location, the CPU first sends out the address of that location on the address bus, together with a MEMORY READ control signal on the control bus. The memory responds by placing the data stored in the addressed location on the data bus.

This book confines its detailed study to the 8085 microprocessor, because it is the most commonly used microprocessor; the evolution of microprocessors has been discussed in this chapter in order to give some knowledge of the microprocessors introduced so far. Before discussing the details of the 8085, a brief description of the 8-bit microprocessor 8080A is given here.

1.4 BRIEF DESCRIPTION OF 8-BIT MICROPROCESSOR 8080A

The Intel 8080 microprocessor is a successor to the Intel 8008 CPU. The 8080/8080A was not object-code compatible with the 8008, but it was source-code compatible with it. The 8080 CPU had the same interrupt processing logic as the 8008, which made porting of old applications easier. The maximum memory size was increased from 16 KB on the 8008 to 64 KB, and the number of I/O ports was increased to 256. In addition to all the 8008 instructions and addressing modes, the 8080 included many new instructions and a direct addressing mode. The 8080 also included a new Stack Pointer (SP) register. The SP specifies the position of an external stack in memory, and the stack can grow as large as the size of memory; thus the CPU was no longer limited to a 7-level internal stack, as the 8008 was. The Intel 8080, an 8-bit microprocessor, was very popular. Fig. 1.2 shows the package of the microprocessor and Fig. 1.3 shows the pin-out configuration of the 8080A. Its salient features include:
a) A two-phase clock input, φ1 and φ2,
b) A 16-bit address bus,
c) An 8-bit data bus,
d) Power supply inputs of +5 V, -5 V and +12 V required.
e) The 8080A places the status of the operation on the data bus during the earlier part of the machine cycle and places data on the bus during the later part of the cycle.

Fig. 1.2

Fig. 1.3

Program memory: The program can be located anywhere in memory. Jump, branch and call instructions use 16-bit addresses, i.e. they can be used to jump/branch anywhere within 64 KB. All jump/branch instructions use absolute addressing.

Data memory: The processor always uses 16-bit addresses, so data can be placed anywhere in memory.

Stack memory: The stack is limited only by the size of memory. It grows downward.

Interrupts: The processor supports maskable interrupts. When an interrupt occurs, the processor fetches one instruction from the bus, usually one of the following:
• One of the 8 RST instructions (RST0 - RST7). The processor saves the current program counter on the stack and branches to memory location N × 8 (where N is a 3-bit number from 0 to 7 supplied with the RST instruction).
• A CALL instruction (a 3-byte instruction). The processor calls the subroutine whose address is specified in the second and third bytes of the instruction.
Interrupts can be enabled or disabled using the EI (Enable Interrupts) and DI (Disable Interrupts) instructions.

I/O ports: 256 input ports and 256 output ports.

Registers: The Accumulator, or A register, is an 8-bit register used for arithmetic, logic, I/O and load/store operations. The Flag register is an 8-bit register containing the following five 1-bit flags:
• Sign flag - set if the most significant bit of the result is set.
• Zero flag - set if the result is zero.
• Auxiliary carry flag - set if there was a carry out from bit 3 to bit 4 of the result.
• Parity flag - set if the parity (the number of set bits in the result) is even.
• Carry flag - set if there was a carry during addition, or a borrow during subtraction/comparison.
General registers:
• The 8-bit B and C registers can be used as one 16-bit BC register pair. When used as a pair, the C register contains the low-order byte.
Some instructions may use the BC register pair as a data pointer.
• The 8-bit D and E registers can be used as one 16-bit DE register pair. When used as a pair, the E register contains the low-order byte. Some instructions may use the DE register pair as a data pointer.
• The 8-bit H and L registers can be used as one 16-bit HL register pair. When used as a pair, the L register contains the low-order byte. The HL register pair usually contains a data pointer used to reference memory addresses.

Stack pointer: A 16-bit register. It is always incremented/decremented by 2.

Program counter: Also a 16-bit register.

Instruction set: The 8080 instruction set consists of the following groups of instructions:
• Data moving instructions.
• Arithmetic - add, subtract, increment and decrement.
• Logic - AND, OR, XOR and rotate.
• Control transfer - conditional and unconditional jumps, call subroutine, return from subroutine, and restarts.
• Input/output instructions.
• Other - setting/clearing flag bits, enabling/disabling interrupts, stack operations, etc.

1.5 COMPUTER PROGRAMMING LANGUAGES

In order for computers to accept commands from humans and perform tasks vital to productivity, a means of communication must exist. Programming languages provide this necessary link between man and machine. Because they are quite simple compared with human languages, rarely containing more than a few hundred distinct words, programming languages must consist of very specific instructions. There are more than 2,000 different programming languages in existence, although most programs are written in one of a few popular languages, like BASIC, COBOL, C++, or Java. Programming languages have different strengths and weaknesses. Depending on the kind of program being written, the computer it will run on, the experience of the programmer, and the way in which the program will be used, the suitability of one programming language over another will vary.
One can categorize computer languages as low-level and high-level programming languages, which are discussed in the following subsections.

1.5.1 Low-Level Programming Languages

A low-level programming language is a language that provides little or no abstraction from a computer's instruction set architecture. The word "low" refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being "close to the hardware." A low-level language does not need a compiler or interpreter to run; the processor for which the language was written is able to run the code without using either of these. By comparison, a high-level programming language isolates the execution semantics of the computer architecture from the specification of the program, making the process of developing a program simpler and more understandable. Low-level programming languages are sometimes divided into two categories: first-generation and second-generation programming languages.

First-Generation Programming Language

The first-generation programming language (1GL) is machine code or machine language. Machine language is the language directly understood by a computer. It is also called binary language, as it is based on 0s and 1s. Any instruction in machine language is represented in terms of 0s and 1s, and even the memory addresses are given in binary. Programs in machine language are very difficult to read and understand, as the binary code of each command cannot easily be remembered, so it is very difficult and complicated to write computer programs in machine language. Only an experienced programmer can work in machine language, and then only after gaining a good knowledge of the machine hardware. Programs written in machine language cannot easily be understood by other programmers.
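To illustrate the gap between raw machine code and its readable form, the sketch below decodes a few genuine 8080/8085 opcode bytes back into mnemonics using a simple lookup table (the Python helper and its name are illustrative additions, not part of the original text):

```python
# A few real 8080/8085 opcodes: opcode byte -> (mnemonic, total instruction length)
OPCODES = {
    0x3E: ("MVI A,", 2),   # load an immediate byte into the accumulator
    0x21: ("LXI H,", 3),   # load a 16-bit immediate into the HL pair
    0x76: ("HLT",    1),   # halt the processor
}

def disassemble(code):
    """Turn a list of machine-code bytes into mnemonic strings."""
    out, i = [], 0
    while i < len(code):
        mnemonic, length = OPCODES[code[i]]
        operand = code[i + 1 : i + length]      # immediate bytes, if any
        if operand:
            # 16-bit operands are stored low byte first (little-endian)
            value = "".join(f"{b:02X}" for b in reversed(operand))
            mnemonic += value + "H"
        out.append(mnemonic)
        i += length
    return out

print(disassemble([0x3E, 0x05, 0x76]))
# ['MVI A,05H', 'HLT']
```

The raw bytes 3EH 05H 76H mean nothing at a glance, while the decoded mnemonics are immediately readable; this is exactly the burden assembly language was invented to remove.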
Nowadays, programmers almost never write programs directly in machine code, because, as discussed above, it not only requires attention to numerous details which a high-level language would handle automatically, but also requires memorizing or looking up the numerical code for every instruction used.

Second-Generation Programming Language

The second-generation programming language (2GL) is assembly language. Assembly language was first developed in the 1950s, and it is different for different microprocessors. It was the first step in improving computer programming. For writing programs in assembly language it is still necessary that the programmer have knowledge of the machine hardware. Assembly language eliminated much of the error-prone and time-consuming first-generation programming needed with the earliest computers, freeing the programmer from tedious jobs such as remembering numeric codes and calculating memory addresses. Assembly language was once widely used for all sorts of programming. However, by the 1980s (the 1990s on small computers), the use of assembly languages had largely been supplanted by high-level languages, in the search for improved programming productivity. Assembly languages are basically a family of low-level languages for programming computers, microprocessors, microcontrollers, etc. They implement a symbolic representation of the numeric machine codes and other constants needed to program a particular CPU architecture. This representation is usually defined by the hardware manufacturer and is based on abbreviations (called mnemonics) that help the programmer remember individual instructions, registers, etc. An assembly language is thus specific to a certain physical or virtual computer architecture. Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages.
Generally, an opcode is a symbolic name for a single executable machine language instruction, and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation, or opcode, plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can either be immediate (typically one-byte values, coded in the instruction itself) or the addresses of data located elsewhere in storage. A typical assembly language statement of the 8080A or 8085 microprocessor, as written by the programmer, is given below. It is divided into four fields, namely Label, Mnemonic or Operation code (Opcode), Operand, and Comments:

Label     Mnemonic    Operand      Comments
START:    LXI         H, 2500H     ; Initialize H-L register pair

A label for an instruction is optional, but it is essential for specifying jump locations. Similarly, comments are optional but are needed for good documentation. The four fields of an assembly language statement are separated by the following delimiters:

Delimiter        Placement
Colon (:)        A colon is placed after the label. The label is optional.
Space ( )        A space is left between an opcode and its operand.
Comma (,)        A comma is placed between two operands.
Semicolon (;)    A semicolon is placed before the comments.

A program written in assembly language can be converted to machine language manually. For writing the program in machine language, the starting address where the program is to be stored must be known. The opcode of the instruction is written into the first location (the starting address), and in the consecutive memory locations the data/address of the operand is written. While storing an address in memory, the lower byte of the address is stored first, then the upper byte. A utility program called an assembler is used to translate assembly language statements into the target computer's machine code.
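The manual conversion described above can be sketched in a few lines of Python: the opcode of LXI H (21H on the 8080/8085) goes into the starting address, and the 16-bit operand follows with its lower byte first (the helper function and its name are illustrative additions):

```python
def hand_assemble_lxi_h(operand):
    """Hand-assemble 'LXI H, <16-bit value>' into its three machine-code
    bytes: the opcode 21H, then the operand with the lower byte first."""
    opcode = 0x21                   # 8080/8085 opcode for LXI H
    low = operand & 0xFF            # lower byte is stored first...
    high = (operand >> 8) & 0xFF    # ...then the upper byte
    return [opcode, low, high]

# LXI H, 2500H occupies three consecutive memory locations: 21H 00H 25H
print([f"{b:02X}H" for b in hand_assemble_lxi_h(0x2500)])
# ['21H', '00H', '25H']
```

Note how the operand 2500H appears in memory as 00H followed by 25H, which is exactly the low-byte-first ordering the text describes.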
The assembler performs a more or less isomorphic translation (a one-to-one mapping) from mnemonic statements into machine instructions and data. The reverse process, the conversion of machine language into assembly language, is done by a disassembler. For software development for a microprocessor/microcomputer (written in assembly language with a large number of instructions), it is absolutely essential to use an assembler. The assembler translates mnemonics into binary code with speed and accuracy, thus eliminating the human error of looking up opcodes. Other advantages of using an assembler for software development are as follows:
• It assigns appropriate values to the symbols used in a program. This facilitates specifying jump locations.
• It checks for syntax errors, such as wrong labels and expressions, and provides error messages. However, it cannot check logic errors in a program.
• It is easy to insert or delete instructions in a program; the assembler can reassemble the entire program quickly with new memory locations and modified addresses for jump locations. This avoids rewriting the program manually.

1.5.2 High-Level Programming Languages

The machine language and assembly languages discussed above are the first- and second-generation programming languages, which fall into the category of low-level languages. These languages require deep knowledge of the computer hardware. The high-level computer languages, developed around the 1960s, are machine independent, i.e. it is not necessary for the programmer to know the computer hardware. In a high-level language one has to know only the instructions, expressed in English words, and the logic of the problem, irrespective of the type of computer being used. Programs in a high-level language are written using only English letters and mathematical symbols such as +, -, /, etc. In fact, a high-level language is closer to the user and is easy to read and understand.
High-level languages are also called procedural languages and are designed to solve general and specific problems. The term "high-level language" does not imply that the language is superior to low-level programming languages; rather, "high level" refers to the higher level of abstraction from machine language. High-level languages have no opcodes that translate directly into machine code, unlike low-level assembly language. In high-level languages, the English-like words are converted into the binary language of a particular microprocessor with the help of a program called an interpreter or compiler. The compiler or interpreter accepts the English-like statements, called the source code, as input. The source code is translated into machine language compatible with the microprocessor being used in the machine; this translation of the source code into machine language is called the object code. Figure 1.4 shows the block diagram for the translation of a high-level language program into machine code. Each microprocessor needs its own compiler or interpreter for each high-level language.

Fig. 1.4

The difference between a compiler and an interpreter is that the compiler reads the entire program first and then generates the object code, whereas the interpreter reads one instruction at a time and produces its object code, which is executed before the next instruction is read. The high-level programming languages developed so far may be categorized into third-, fourth- and fifth-generation programming languages, which are briefly discussed below.

Third-Generation Programming Language

A third-generation programming language (3GL) is a refinement of a second-generation programming language. Whereas a second-generation language is aimed more at fixing the logical structure of the language, a third-generation language aims to refine the usability of the language so as to make it more user friendly.
A third-generation language improves over a second-generation language by refining the usability of the language itself from the perspective of the user. Languages like ALGOL, COBOL, FORTRAN IV, etc. are examples of this generation and were considered high-level languages. Most of these languages had compilers, and the advantage of this was speed. Independence was another factor, as these languages were machine independent and could run on different machines.

FORTRAN (FORmula TRANslation) and COBOL (COmmon Business-Oriented Language) were the first high-level programming languages to make an impact on the field of computer science. Along with assembly language, these two high-level languages have influenced the development of many modern programming languages, including Java, C++, and BASIC. FORTRAN is well suited for math, science, and engineering programs because of its ability to perform numeric computations. The language was developed at IBM in New York under John Backus. FORTRAN was regarded as user friendly, because it was easy to learn in a short period of time and required no previous computer knowledge. It eliminated the need for engineers, scientists, and other users to rely on assembly programmers in order to communicate with computers. Although FORTRAN is often referred to as a language of the past, computer science students were still taught the language in the early 2000s for historical reasons, and because FORTRAN code still exists in some applications.

COBOL was another high-level, third-generation programming language, well suited for creating business applications. COBOL's strength lies in processing data and in its simplicity. Other languages like BASIC, C, C++, Pascal, and Java are also third-generation languages.
Fourth-Generation Programming Language

A fourth-generation programming language (4GL) (1970s-1990s) is a programming language designed with a specific purpose in mind, such as the development of commercial business software. In the evolution of computing, the fourth-generation language followed the third-generation language in an upward trend toward higher abstraction and statement power. The fourth-generation language was in turn followed by efforts to define and use a fifth-generation language (5GL). Basically, fourth-generation languages consist of statements similar to statements in a human language. Fourth-generation languages are commonly used in database programming and scripts. Commonly used fourth-generation languages are FoxPro, SQL, MATLAB, etc.

Fifth-Generation Programming Language

A fifth-generation programming language (5GL) is a programming language based around solving problems using constraints given to the program, rather than using an algorithm written by a programmer. Most constraint-based and logic programming languages, and some declarative languages, are fifth-generation languages. While fourth-generation programming languages are designed to build specific programs, fifth-generation languages are designed to make the computer solve a given problem without the programmer specifying how. This way, the programmer only needs to state what problem needs to be solved and what conditions need to be met, without worrying about how to implement a routine or algorithm to solve it. Fifth-generation languages are used mainly in artificial intelligence research. Prolog, OPS5, and Mercury are examples of fifth-generation languages.

Problems

1.1 Draw the block diagram of a general computer and discuss in detail the five blocks of a digital computer.
1.2 Discuss the history and evolution of the microprocessor.
1.3 What is a microprocessor? What is the difference between a microprocessor and a microcomputer?
1.4 What is the difference between a 4-bit microprocessor and an 8-bit microprocessor?
1.5 Name the main 8-bit microprocessors. Give a brief description of the 8-bit microprocessor 8080A.
1.6 Discuss the basic microprocessor system with the help of a block diagram.
1.7 What are low-level computer programming languages? Discuss them.
1.8 Discuss first-generation computer programming languages.
1.9 Discuss second-generation computer programming languages.
1.10 Write a short note on high-level computer programming languages.
1.11 What is the difference between assembly language and machine language?
1.12 Give a brief description of assembly language. What are the advantages of an assembler?

________

2 SAP – I

For beginners it is very difficult to understand the operation of a digital computer, as it involves a great amount of detail. To understand the step-by-step operation of a digital computer, the concept of the Simple-As-Possible (SAP) computer has been introduced. The Simple-As-Possible computer, a conceptual computer, will be discussed in three stages, namely the SAP – I, SAP – II and SAP – III computers. All the necessary details for the operation of digital computers will be covered across these three stages. After the study of these three stages of SAP computers, we will be in a position to understand clearly the fundamentals of the microprocessor 8085, including its architecture, programming and interfacing devices. In this chapter the organization, programming and circuits of the SAP – I computer will be discussed; the succeeding chapters will give the details of the other stages of SAP computers.

2.1 ARCHITECTURE OF SAP – I

SAP – I, a conceptual computer, is an 8-bit computer, as it can process data of 8 bits. Further, since it is a simple computer considered only for a basic understanding of the operation of digital computers, it is assumed that it can store only 16 words (each word being 8 bits long).
The length of the memory address register will be 4 bits, since 2⁴ = 16 (16 being the total capacity of the memory unit). The addresses of the 16 memory locations held in the memory address register (MAR) will run from 0000 to 1111, i.e.:

Memory location (Hex)   Address     Memory location (Hex)   Address
0 H                     0000        8 H                     1000
1 H                     0001        9 H                     1001
2 H                     0010        A H                     1010
3 H                     0011        B H                     1011
4 H                     0100        C H                     1100
5 H                     0101        D H                     1101
6 H                     0110        E H                     1110
7 H                     0111        F H                     1111

The basic architecture of this computer is shown in figure 2.1. It contains an 8-bit W-Bus (Wire Bus), which is used for data transfer to the various 8-bit registers. A bus is a group of conducting wires. In this figure, all register outputs connected to the W-Bus are three-state, which allows orderly transfer of data. All other register outputs continuously drive the boxes they are connected to. A brief discussion of each block is given below:

Program Counter

The first block of the SAP – I computer is the Program Counter (PC). It is basically a part of the control unit; its function is to send to the memory the address of the next instruction to be fetched and executed. The program counter is also called a pointer, as it is like someone pointing a finger at a list of instructions, saying do this first, do this next, and so on. In the beginning, the program and data are stored in the memory through the input unit. A 4-bit binary address (0000 to 1111) is sufficient to address any word in the memory. The first instruction of the program is stored at memory location 0000, the second instruction at location 0001, the third instruction at location 0010, and so on. When the computer starts executing the program, the program counter is reset to 0000. The PC sends this address to the memory, and the first instruction is fetched and executed; the PC is then incremented by one. The PC now sends address 0001 to the memory; after the second instruction is fetched and executed, the PC is again incremented by one, and so it continues, sending the address of each successive instruction to the memory.
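The reset-fetch-increment cycle of the program counter can be sketched in a few lines. The 16-word memory contents below are placeholders, not SAP – I's real instruction encoding; the point is only the 4-bit addressing and the PC's behaviour:

```python
# Minimal sketch of the program counter's role in SAP-I-style fetching.
# The memory contents are illustrative placeholders, not real SAP-I opcodes.
memory = ["instr-" + format(i, "04b") for i in range(16)]  # 2**4 = 16 words

pc = 0b0000                        # PC is reset to 0000 when the run starts
fetched = []
for _ in range(3):                 # fetch the first three instructions
    address = pc & 0b1111          # a 4-bit address selects one of 16 words
    fetched.append(memory[address])
    pc += 1                        # PC is incremented after each fetch

print(fetched)                     # ['instr-0000', 'instr-0001', 'instr-0010']
```

The masking with `0b1111` mirrors the fact that the PC is only 4 bits wide: incrementing past 1111 would wrap back to 0000, just as a 4-bit hardware counter does.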
Fig. 2.1

Input and MAR

The second block of the SAP – I computer is the Input and Memory Address Register (MAR). It includes a 4-bit address register and an 8-bit data register. These registers are basically parts of the input unit. The Input and MAR unit sends a 4-bit address and 8-bit data to the memory unit; it is used for storing the instructions and data into the memory before the computer run starts. This unit includes a matrix of switches (micro-switches) for address
