Lecture Notes for Advanced Digital Signal Processing

An Introduction to Digital Signal Processors
Bruno Paillard, Ph.D., P.Eng., Professor
Génie électrique et informatique, Université de Sherbrooke
January 27, 2002
© Bruno Paillard

Introduction

• 1. Computers and microprocessors
• 2. Application areas of the microprocessor
• 3. Examples of embedded systems

1. COMPUTERS AND MICROPROCESSORS

Charles Babbage invented the concept of the computer in the mid-19th century, but the concept had to wait for the development of vacuum-tube electronics in the 1930s to achieve its full potential. John W. Mauchly and J. Presper Eckert at the University of Pennsylvania's Moore School of Electrical Engineering developed one of the first electronic computers between 1942 and 1946. Called ENIAC (Electronic Numerical Integrator and Computer), it was originally used to calculate ballistic tables for the military. With 17,468 vacuum tubes and 100 feet of front panel, this mighty 30-ton machine was capable of doing 5000 additions and 300 multiplications a second. Although that is less than 1/10,000th of the computational speed found in a modern cellular phone, its "all electronic" design made it the fastest computer in use at the time. Later models were used for nuclear physics and aerodynamics research, two fields where the super-computer is still a tool of the trade today.

Although not publicised at the time, the first electronic computer was actually built by Tommy Flowers, an electronics engineer in the British secret service, during the Second World War. This computer, called Colossus, was used by the British secret service to decipher German military codes. Because of the secrecy that surrounded these operations, it was not recognized as the first electronic computer until recently.

Initially computers were used to carry out numerical computations, but nowadays they are used in many applications, from music players to flight control systems. A computer performs its task by the sequential execution of elementary binary operations called "instructions". These elementary instructions may represent the addition of two numbers, for instance, or a comparison between two numbers, or the transfer of a binary word from one memory location to another. They are assembled into a complete "program" that defines the global task carried out by the computer. The part of the computer that executes the instructions is called the Central Processing Unit (CPU).

A microprocessor is a silicon chip that implements the complete Central Processing Unit of a computer. Silicon photolithographic and etching processes are used to mass-produce microprocessors.

The development of the microprocessor in the 1970s represents a major milestone in the history of electronics and computer systems. It enabled the development of low-cost computers, which in time became "personal computers". It also spawned the field of "embedded systems", in which a microprocessor is used to control an electronic device or subsystem. Nowadays, nearly all consumer, scientific and industrial electronics incorporate microprocessors.

The paternity of the microprocessor is still debated today. In 1971, Intel introduced the 4004, which included all the elements of a 4-bit CPU. The same year Texas Instruments introduced the TMS1802NC. Both microprocessors were originally intended to support the functions of a desk calculator. The TMS1802NC was not very flexible. Its program, in particular, was stored in a Read Only Memory whose contents were permanently etched on the silicon chip.
Modifying the program required the development of new lithography masks. Texas Instruments received a patent in 1973 for the development of the microprocessor. Intel continued its development effort and produced the 8008 in 1972, the 8080 in 1974, and the 8086 in 1978. These microprocessors were the precursors of today's Pentiums. Several companies followed in Intel and Texas Instruments' footsteps, Motorola with the 6800 and Rockwell with the 6502 being two examples.

In the first years, microprocessors were not very differentiated, and these first machines were used equally in computer systems and in embedded systems. For instance, Rockwell's 6502 was used in a number of embedded system development boards, such as the KIM and the AIM65. It was also the CPU of one of the first personal computers: the PET, produced by the Commodore company.

Later, looking for increased performance as well as new markets, microprocessor manufacturers specialized their designs. The first microcontroller, the TMS1000 from Texas Instruments, was introduced in 1974. Microcontrollers not only possess a CPU on the silicon chip, but also integrate a number of peripherals (memory, parallel ports, analog-to-digital converters, etc.). In essence they constitute complete microcomputers integrated on the same chip of silicon. The addition of peripherals to the core CPU makes microcontrollers particularly efficient in embedded systems applications where cost, size and power consumption must be kept low. For instance, a microwave oven control unit was one of the first applications targeted by the TMS1000 microcontroller. In the 1980s Intel introduced the 8748 microcontroller family. This family integrated many peripherals, including a program memory that was erasable and reprogrammable by the developer. These characteristics lowered the development cost of microcontroller systems, and enabled the use of microcontrollers in low-volume embedded applications.

The 1980s also saw the introduction of the first Digital Signal Processors. Introduced in 1983 by Texas Instruments, the TMS320C10 was a microprocessor specifically designed to solve digital signal processing problems. Prior to its release, signal processing was mostly the domain of analog electronics. Digital signal processing applications were few and usually required high-cost, complex machines that were only viable in aerospace or military applications. The introduction of the DSP ushered in the establishment of digital signal processing as one of the core disciplines of Electrical Engineering. Digital signal processing progressively replaced analog signal processing in applications that range from control to telecommunications. This "digital migration" is still in progress today, and affects applications of ever-decreasing cost. Digital signal processing implements techniques and technologies that are much more advanced and entail a lot more complexity than their analog counterparts. However, the flexibility allowed by programming, the precision and inherent stability of the processing parameters, and the possibility of very complex and adaptive processes that are nearly impossible to implement in analog form, combined with the very low cost of today's microprocessors, make this approach unavoidable.

The differentiation brought about by the integration of specialized peripherals onto the microprocessor's silicon chip produced devices that are extremely specialized.
Some microcontrollers, for instance, are specifically designed for applications such as communications protocols (Ethernet, USB, etc.). Others are specifically designed for use in electric motor drives, and so on. The benefit of such specialization is the production of very efficient designs, in terms of cost, size and power consumption. On the other hand, it forces the developer to learn and master an ever-increasing variety of CPUs. This difficulty must not be under-estimated. The time investment needed to get familiar with a new microprocessor and its software development tools is so great that it is often a larger obstacle to the adoption of this microprocessor by the developer than the technical limitations of the microprocessor itself. Many developers will go through untold contortions to keep working with a microprocessor with which they are already familiar. On the manufacturer's side, the introduction of a new family of microprocessors is a very complex and costly operation, and the adoption of the microprocessor by the market is far from guaranteed.

To circumvent this problem, some manufacturers are introducing new types of microcontrollers that incorporate programmable logic and programmable analog subsystems. This is the case of the PSoC (Programmable System-on-Chip) family from Cypress. The integration of programmable sub-systems gives tremendous flexibility to the product. The developers can, to a great extent, specialize the microcontroller themselves by designing their own set of peripherals tailored to the application.

2. APPLICATION AREAS OF THE MICROPROCESSOR

The most obvious application of the microprocessor is the computer. From the super-computer used for scientific computation to the pocket personal computer used for word processing or Internet browsing, computers have become an essential tool in our everyday life. The world of computer systems, however, accounts for only a small fraction of the total number of microprocessors deployed in applications around the world. The vast majority of microprocessors are used in embedded systems. Embedded systems are electronic devices or sub-systems that use microprocessors for their operation. The low cost of microprocessors, combined with the flexibility offered by programming, makes them omnipresent in areas ranging from consumer electronics to telecommunication systems. Today, it is actually very difficult to find electronic devices or sub-systems that do not incorporate at least one microprocessor. Needless to say, the vast majority of the engineering and design activity related to microprocessor systems is in the area of embedded systems.

By contrast to computer systems, the program of an embedded system is fixed, usually unique, and designed during the development of the device. It is often permanently stored in a Read Only Memory and begins its execution from the moment the device is powered on. Because it is fixed and usually resides in a Read Only Memory, the software of an embedded system is often called firmware.

3. EXAMPLES OF EMBEDDED SYSTEMS

At home, the microprocessor is used to control many appliances and electronic devices: the microwave oven, television set, compact-disc player and alarm system are only a few examples. In the automobile, microprocessors are used to control the combustion of the engine, the door locks, the brake system (ABS brakes) and so on. Microprocessors are used in most measuring instruments (oscilloscopes, multimeters, signal generators, etc.).
In telecommunication systems, microprocessors are used in systems ranging from telephone sets to telephone switches. In the aerospace industry they are used in in-flight navigation systems and flight control systems. Microprocessors are even used in very low-cost applications such as wristwatches, medical thermometers and musical cards. Even in computer systems, embedded microprocessors are used in pointing devices, keyboards, displays, hard disk drives, modems, and even in the battery packs of laptop computers.

Microprocessors

• 1. Applications and types of microprocessors
• 2. Evolution of tools and software development methodologies
• 3. Present frontier
• 4. Criteria for choosing a microprocessor
• 5. Building blocks of an embedded system
• 6. Program execution

1. APPLICATIONS AND TYPES OF MICROPROCESSORS

Today, microprocessors are found in two major application areas:

• Computer system applications
• Embedded system applications

Embedded systems are often high-volume applications for which manufacturing cost is a key factor. More and more embedded systems are mobile, battery-operated systems. For such systems, power consumption (battery time) and size are also critical factors. Because they are specifically designed to support a single application, embedded systems only integrate the hardware required to support this application. They often have simpler architectures than computer systems. On the other hand, they often have to perform operations with timings that are much more critical than in computer systems. A cellular phone, for instance, must compress the speech in real time; otherwise the process will produce audible noise. They must also perform with very high reliability. A software crash would be unacceptable in an ABS brake application.

Digital signal processing applications are often viewed as a third category of microprocessor applications because they use specialized CPUs called DSPs. However, in reality they qualify as specialized embedded applications. Today, there are three different types of microprocessors optimized to be used in each application area:

• Computer systems: general-purpose microprocessors
• Embedded applications: microcontrollers
• Signal processing applications: Digital Signal Processors (DSPs)

In reality the boundaries of application areas are not as well defined as they seem. For instance, DSPs can be used in applications requiring a high computational speed, but not necessarily related to signal processing. Such applications include computer video boards and specialized co-processor boards designed for intensive scientific computation. On the other hand, powerful general-purpose microprocessors such as Intel's i860 or Digital's Alpha chip are used in high-end digital signal processing equipment designed for algorithm development and rapid prototyping.

The following sections list the typical features and application areas for the three types of microprocessors.

1.1. General purpose microprocessors

1.1.1. Applications

• Computer systems

1.1.2. Manufacturers and models

• Intel: Pentium
• Motorola: PowerPC
• Digital: Alpha Chip
• LSI Logic: SPARC family (SUN)
• ...etc.

1.1.3. Typical features

• Wide address bus allowing the management of large memory spaces
• Integrated hardware memory management unit
• Wide data formats (32 bits or more)
• Integrated co-processor, or Arithmetic Logic Unit supporting complex numerical operations, such as floating-point multiplications
• Sophisticated addressing modes to efficiently support high-level language functions
• Large silicon area
• High cost
• High power consumption

1.2. Embedded systems: Microcontrollers

1.2.1. Application examples

• Television set
• Wristwatches
• TV/VCR remote control
• Home appliances
• Musical cards
• Electronic fuel injection
• ABS brakes
• Hard disk drive
• Computer mouse / keyboard
• USB controller
• Computer printer
• Photocopy machine
• ...etc.

1.2.2. Manufacturers and models

• Motorola: 68HC11
• Intel: 8751
• Microchip: PIC16/17 family
• Cypress: PSoC family
• ...etc.

1.2.3. Typical features

• Memory and peripherals integrated on the chip
• Narrow address bus allowing only limited amounts of memory
• Narrow data formats (8 bits or 16 bits typical)
• No coprocessor, limited Arithmetic Logic Unit
• Limited addressing modes (high-level language programming is often inefficient)
• Small silicon area
• Low cost
• Low power consumption

1.3. Signal processing: DSPs

1.3.1. Application examples

• Telecommunication systems
• Control systems
• Attitude and flight control systems in aerospace applications
• Audio/video recording and playback (compact-disc/MP3 players, video cameras, etc.)
• High-performance hard-disk drives
• Modems
• Video boards
• Noise cancellation systems
• ...etc.

1.3.2. Manufacturers and models

• Texas Instruments: TMS320C6000, TMS320C5000...
• Motorola: 56000, 96000...
• Analog Devices: ADSP2100, ADSP21000...
• ...etc.

1.3.3. Typical features

• Fixed-point processor (TMS320C5000, 56000...) or floating-point processor (TMS320C67, 96000...)
• Architecture optimized for intensive computation. For instance, the TMS320C67 can do 1000 million floating-point operations a second (1 GFLOPS)
• Narrow address bus supporting only limited amounts of memory
• Specialized addressing modes to efficiently support signal processing operations (circular addressing for filters, bit-reverse addressing for Fast Fourier Transforms, etc.); a sketch of circular addressing follows this list
• Narrow data formats (16 bits or 32 bits typical)
• Many specialized peripherals integrated on the chip (serial ports, memory, timers, etc.)
• Low power consumption
• Low cost
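To make the circular-addressing bullet above concrete, here is a minimal C sketch of an FIR filter delay line. It is not DSP code from the text: plain C has no hardware circular addressing, so the wrap-around is emulated with a modulo operation, and the filter length, coefficients and function name are invented for illustration.

    /*
     * Illustrative sketch only: what "circular addressing" buys an FIR filter.
     * On a DSP the address-generation unit wraps the delay-line pointer in
     * hardware; in portable C we emulate the wrap with a modulo operation.
     * The filter length, coefficients and function name are hypothetical.
     */
    #include <stddef.h>

    #define NTAPS 4

    static const float coeff[NTAPS] = { 0.25f, 0.25f, 0.25f, 0.25f }; /* example: moving average */
    static float delay[NTAPS];      /* circular delay line */
    static size_t newest = 0;       /* index of the most recent sample */

    /* Process one input sample and return one output sample. */
    float fir_step(float x)
    {
        float acc = 0.0f;

        delay[newest] = x;                          /* overwrite the oldest sample */

        for (size_t k = 0; k < NTAPS; k++) {
            /* walk backwards through the delay line, wrapping at the ends */
            size_t idx = (newest + NTAPS - k) % NTAPS;
            acc += coeff[k] * delay[idx];
        }

        newest = (newest + 1) % NTAPS;              /* advance the circular pointer */
        return acc;
    }

On a DSP with hardware circular addressing, the modulo operation costs nothing because the address generator wraps the pointer automatically, which is exactly why the feature matters for filter code.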
1.4. Cost of a microprocessor

Like many other electronic components, microprocessors are fabricated from large disks of mono-crystalline silicon called "wafers". Using photolithographic processes, hundreds of individual microprocessor chips are typically fabricated on a single wafer. The cost of processing a wafer is in general fixed, irrespective of the complexity of the microprocessors that are etched onto it. The silicon surface occupied by a microprocessor on the wafer depends on several factors, the most important being its complexity (number of gates, or transistors) and the lithographic scale, which indicates how small a gate can be.

At first glance, for a given fabrication process and a given lithographic scale, it would seem that the cost of a microprocessor is roughly proportional to its surface area, and therefore to its complexity. However, things are a little more complex. In practice the wafers are not perfect and have a number of defects that are statistically distributed over their surface. A microprocessor whose area includes a defect is generally not functional. There are therefore always some defective microprocessors on the wafer at the end of the process. The ratio of good microprocessors to the total number fabricated on the wafer is called the "fabrication yield". For a fixed number of defects per square inch on the wafer, which is dependent on the quality of the wafer production, the probability of a microprocessor containing a defect increases non-linearly with the microprocessor area. Beyond a certain surface area (beyond a certain complexity) the fabrication yield decreases considerably. Of course, the selling price of the good microprocessors is increased to offset the cost of having to process and test the defective ones. In other words, the cost of a microprocessor increases much faster than its complexity and surface area.
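The text gives no yield formula, but the non-linear effect it describes can be illustrated with the common textbook assumption of a Poisson defect model, yield = exp(−D·A). The sketch below uses that assumed model and invented numbers (wafer cost, usable area, defect density) purely to show that the cost per good die grows much faster than the die area.

    /*
     * Rough illustration of why die cost grows faster than die area.
     * The text gives no formula; a common textbook assumption (used here,
     * not taken from the source) is a Poisson defect model: yield = exp(-D*A),
     * with D the defect density and A the die area. All numbers are invented.
     */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double wafer_cost = 3000.0;    /* $ per processed wafer (assumed) */
        const double wafer_area = 70000.0;   /* usable area, mm^2 (assumed)     */
        const double defect_density = 0.001; /* defects per mm^2 (assumed)      */

        for (double die_area = 25.0; die_area <= 400.0; die_area *= 2.0) {
            double dies_per_wafer = wafer_area / die_area;
            double yield = exp(-defect_density * die_area);     /* Poisson model */
            double good_dies = dies_per_wafer * yield;
            double cost_per_good_die = wafer_cost / good_dies;
            printf("area %6.0f mm^2  yield %5.1f%%  cost/good die $%6.2f\n",
                   die_area, 100.0 * yield, cost_per_good_die);
        }
        return 0;
    }

Under these assumptions, doubling the die area roughly doubles the raw silicon cost, and the shrinking yield term makes the cost per good die grow faster still.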
1.5. Power consumption of a microprocessor

As we discussed earlier, some microprocessors are optimized to have a low power consumption. Today almost all microprocessors are implemented in CMOS technology. For this technology, the electric current drawn by the microprocessor is almost entirely attributable to the electric charges used to charge and discharge the parasitic input capacitance of its gates during transitions from 0 to 1 and 1 to 0. This charge transfer is proportional to the following factors:

• The voltage swing (normally the supply voltage).
• The input gate capacitance.
• The number of gates in the microprocessor.
• The average number of transitions per second per gate.

The input gate capacitance is fairly constant. As manufacturing processes improve, the lateral dimensions of transistor gates get smaller, but so does the thickness of the oxide layers used for the gates. Typical values are on the order of a few pF per gate. With thinner oxide layers, lower supply voltages can be used to achieve the same electric field. A lower supply voltage means that less charge is transferred during each transition, which leads to a lower current consumption. This is the main factor behind the push for decreasing supply voltages in digital electronics.

The current consumption is also proportional to the number of gates of the microprocessor (its complexity), and to the average number of transitions per second (its clock frequency). For identical fabrication technology and lithographic scale, a microprocessor optimized for power consumption is simpler (fewer gates) and operates at a lower clock frequency than a high-performance microprocessor. For a given processor, the developer can directly adjust the trade-off between computational speed and power consumption by appropriately choosing the clock frequency. Some microprocessors even offer the possibility of dynamically adjusting the clock frequency through the use of a Phase-Locked Loop, which allows the clock frequency to be reduced during less intensive computation periods to save energy. Such systems are especially useful in battery-operated devices.
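The four factors listed above are usually summarized by the first-order CMOS dynamic-power estimate P ≈ N·α·C·V²·f. Neither this formula nor the numbers below appear in the text; they are generic assumptions used only to show how the supply voltage and clock frequency enter the trade-off.

    /*
     * First-order CMOS dynamic-power estimate  P ~= N * alpha * C * V^2 * f.
     * Neither the formula nor the numbers below come from the text; they are
     * generic assumptions meant only to show how supply voltage and clock
     * frequency enter the power budget.
     */
    #include <stdio.h>

    static double dynamic_power(double n_gates, double activity,
                                double cap_per_gate, double vdd, double fclk)
    {
        return n_gates * activity * cap_per_gate * vdd * vdd * fclk; /* watts */
    }

    int main(void)
    {
        const double n_gates  = 1e6;     /* gate count (assumed)                    */
        const double activity = 0.1;     /* average transitions per clock (assumed) */
        const double cap      = 10e-15;  /* effective switched capacitance per gate (assumed) */

        /* Halving the clock halves the power; lowering Vdd helps quadratically. */
        printf("3.3 V, 100 MHz: %.3f W\n", dynamic_power(n_gates, activity, cap, 3.3, 100e6));
        printf("3.3 V,  50 MHz: %.3f W\n", dynamic_power(n_gates, activity, cap, 3.3, 50e6));
        printf("1.8 V, 100 MHz: %.3f W\n", dynamic_power(n_gates, activity, cap, 1.8, 100e6));
        return 0;
    }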
2. EVOLUTION OF THE SOFTWARE DEVELOPMENT TOOLS AND METHODOLOGIES

In the field of embedded systems, software development tools and methodologies can often appear to be lacking in sophistication compared to the tools used in the field of computer systems. For instance, many embedded applications are developed entirely in assembly language. Software that is developed in high-level languages is often developed in C, and rarely in C++, even today. When schedulers and multi-tasking kernels are used, they are much simpler than their counterparts found in the operating systems of modern-day computers. This apparent lack of sophistication may be attributed to several factors:

• An embedded system is usually designed to run a unique application. Providing a uniform set of standardized "operating system" services to software applications is therefore much less critical than it is for a computer system designed to run multiple applications that all have similar needs. In fact, in many embedded systems, "operating system" functions like peripheral drivers and the user interface are custom-designed, along with the core functionality of the device.

• The software of an embedded system is generally much less complex than the software that runs on a computer system. The difficulties of embedded system development usually lie in the interaction between hardware and software, rather than in the interaction between software and software (software complexity).

• Most embedded systems are very specific devices. Their hardware resources have been optimized to provide the required function, and are often very specific and very limited. This makes the use of complex operating systems difficult. Indeed, to be efficient, a computer's operating system often relies on considerable resources (memory, computational speed, etc.). Furthermore, it expects a certain level of uniformity and standardization in the hardware resources present in the system.

• For embedded systems, real-time execution constraints are often critical. This does not necessarily mean that the computational speed must be high, but rather that specific processes must execute within fixed and guaranteed time limits. For instance, for a microcontroller driving an electric motor, the power bridge protection process (the process leading to the opening of all transistors) must execute within 1 microsecond following the reception of a short-circuit signal, otherwise the bridge may be damaged. In such a situation, the execution of code within a complex and computationally hungry operating system is obviously not a good choice.

• The development of code in a high-level language is often less efficient than development in assembly language. For instance, even with an optimizing C compiler designed specifically for the TMS320VC5402 DSP for which it generates code, a filter function can run 10 to 40 times slower when developed in C than when it is optimized in assembly language. For one thing, a high-level language does not (by definition) give the developer access to low-level features of the microprocessor, which are often essential in optimizing specific computationally intensive problems. Furthermore, the developer usually possesses a much more complete description of the process that must be achieved than what is directly specifiable to a compiler. Using this more complete information, the developer can arrive at a solution that is often more efficient than the generic one synthesized by the compiler.

• In the field of computer systems, software development tools and operating systems are designed so that the developer does not have to be concerned with the low-level software and hardware details of the machine. In the field of embedded systems, the developer usually needs to have fine control over low-level software and the way it interacts with the hardware.

• Finally, for an embedded system, the execution of the code must be perfectly understood and controlled. For instance, a rollover instead of a saturation during an overflow can be disastrous if the result of the calculation is the attitude control signal of a satellite launch vehicle. This exact problem led to the destruction of the ESA's Ariane 5 rocket during its first launch in June 1996. In situations such as this one, the use of a high-level language such as C, for which the behaviour during overflow (rollover or saturation) is not defined in the standard, may be a bad choice. The sketch below contrasts the two overflow behaviours.
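The following sketch contrasts the two overflow behaviours on a 16-bit addition. It is illustrative C, not code from the text: the wrapping version is routed through unsigned arithmetic because signed overflow is itself undefined in standard C (the very portability problem mentioned in the last bullet), and the saturating version clamps the result the way many DSPs do in hardware.

    /*
     * Contrast between "rollover" (wrap-around) and "saturation" on a 16-bit
     * addition. Purely illustrative; the function names are invented. The
     * wrapping version goes through unsigned arithmetic on purpose, because
     * signed overflow itself is undefined behaviour in standard C. Many DSPs
     * instead provide a saturating add directly in hardware.
     */
    #include <stdint.h>
    #include <stdio.h>

    /* Wrap-around (rollover) add, as a plain binary adder would behave. */
    static int16_t add_wrap(int16_t a, int16_t b)
    {
        return (int16_t)((uint16_t)a + (uint16_t)b);
    }

    /* Saturating add: clamp the result to the 16-bit range instead of wrapping. */
    static int16_t add_sat(int16_t a, int16_t b)
    {
        int32_t s = (int32_t)a + (int32_t)b;
        if (s > INT16_MAX) return INT16_MAX;
        if (s < INT16_MIN) return INT16_MIN;
        return (int16_t)s;
    }

    int main(void)
    {
        int16_t big = 30000, inc = 10000;
        printf("wrap:     %d\n", add_wrap(big, inc)); /* rolls over to -25536 */
        printf("saturate: %d\n", add_sat(big, inc));  /* clamps at 32767      */
        return 0;
    }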
3. PRESENT FRONTIERS

Today, the traditional frontiers between embedded systems and computer systems are getting increasingly blurred. More and more applications and devices have some attributes of an embedded system and some attributes of a computer system. A cellular telephone, for instance, may also provide typical "computer system" functions such as an Internet connection. On the other hand, a PC may provide typical "embedded system" functions such as DVD playback. In many cases these devices are designed using a hybrid architecture in which a general-purpose microprocessor executes software under the control of a complex operating system, and in which special-purpose microprocessors (DSPs for instance) are used as peripherals and execute function-specific software (or firmware) typical of an embedded system. These architectures may even be implemented at the integrated-circuit level, where the general-purpose microprocessor and the DSP reside on the same silicon chip. In other cases, microcontroller-based systems are enhanced by specialized peripherals to support traditional computer-system functions in a limited, fixed way. For instance, the use of specialized peripherals to support a TCP/IP modem connection can provide Internet access to a microwave oven controller. More than 30 years after the development of the first microprocessors, the field is still in profound evolution.

4. CRITERIA FOR CHOOSING A MICROPROCESSOR

The choice of a microprocessor for a given application is probably one of the most difficult tasks facing the systems engineer today. To make this choice correctly, the engineer must know the whole range of microprocessors that could be used for the application, well enough to precisely measure the pros and cons of each choice. For instance, the engineer may be required to evaluate whether a particular time-critical task can execute within its time limits for each potential choice of microprocessor. This evaluation may require the development of optimized software, which obviously entails a very good knowledge of the microprocessor and its software development tools, and a lot of work. In practice, the investment in time and experience necessary to know a microprocessor well is very high. Most engineers try to make their choice within the set of devices with which they are already familiar. This approach is sub-optimal, but reflects the reality of systems design. In this field, even seasoned designers do not have in-depth experience of the very wide variety of microprocessors that are on the market today.

To help in the choice, most manufacturers offer "development kits" or "evaluation kits". These kits allow the engineer to design and test software without having to design the entire custom hardware required by the application. Manufacturers often also offer software examples and libraries of functions that allow the engineer to evaluate typical solutions without having to design the software. The availability of such tools and information biases the choice toward one manufacturer, or one family of microprocessors. On the other hand, it is often seen by design engineers as a legitimate criterion on which to base their choice.
The following general criteria may be applied to the choice of a microprocessor:

1. Capacity of the microprocessor (it is advisable to keep some margin, because the problem often evolves in the course of the design).
   Logical criteria:
   • Instruction set functionality.
   • Architecture, addressing modes.
   • Execution speed (not just clock frequency).
   • Arithmetic and logic capabilities.
   • Addressing capacity.
   Physical criteria:
   • Power consumption.
   • Size.
   • Presence of on-chip peripherals – necessity of support circuitry.
2. Software tools and support: development environment, assembler, compiler, evaluation kit, function libraries and other software solutions. These may come from the manufacturer or from third-party companies.
3. Cost.
4. Market availability.
5. Processor maturity.

When evaluating the capacity of a microprocessor for a given application, the clock frequency is a good criterion to compare microprocessors in the same family, but should not be used to compare microprocessors from separate manufacturers, or even from separate families by the same manufacturer. For one thing, some microprocessors process complete instructions at each clock cycle, while others may only process part of an instruction. Furthermore, the number and complexity of the basic operations that are contained in single instructions vary significantly from one microprocessor family to the next.

The instruction set and addressing modes are important criteria. Some microprocessors have very specialized instructions and addressing modes, designed to handle certain types of operations very efficiently. If these operations are critical for the application, such microprocessors may be a good choice.

The capability of the Arithmetic and Logic Unit (ALU) is usually a critical factor. In particular, the number of bits (or resolution) of the numbers that can be handled by the ALU is important. If the application requires operations to be carried out in 32-bit precision, a 32-bit ALU may be able to do the operation in a single clock cycle. Using a 16-bit ALU, the operation has to be decomposed into several lower-resolution operations by software, and it may take as much as 8 times longer to execute; the sketch below shows such a decomposition for a simple addition.
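As an illustration of that decomposition, the sketch below adds two 32-bit numbers using only 16-bit operations, propagating a carry from the low half to the high half. This is the kind of code a compiler (or programmer) must generate for a 16-bit ALU; the function name is invented.

    /*
     * Illustration of the decomposition mentioned above: adding two 32-bit
     * numbers with only 16-bit operations, by propagating a carry from the low
     * half to the high half. This mirrors what a compiler (or the programmer)
     * must generate for a 16-bit ALU; the function name is invented.
     */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t add32_on_16bit_alu(uint32_t a, uint32_t b)
    {
        uint16_t a_lo = (uint16_t)a, a_hi = (uint16_t)(a >> 16);
        uint16_t b_lo = (uint16_t)b, b_hi = (uint16_t)(b >> 16);

        uint16_t sum_lo = (uint16_t)(a_lo + b_lo);         /* first 16-bit add       */
        uint16_t carry  = (sum_lo < a_lo) ? 1 : 0;         /* did the low half wrap? */
        uint16_t sum_hi = (uint16_t)(a_hi + b_hi + carry); /* second add plus carry  */

        return ((uint32_t)sum_hi << 16) | sum_lo;
    }

    int main(void)
    {
        uint32_t a = 0x0001FFFFu, b = 0x00000001u;
        printf("0x%08X\n", add32_on_16bit_alu(a, b));      /* prints 0x00020000 */
        return 0;
    }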
Another important factor is the ability of the ALU to carry out operations in fixed-point representation only (fixed-point processors), or in both fixed-point and floating-point representations (floating-point processors). Fixed-point processors can perform floating-point calculations in software, but the time penalty is very high. Of course there is a trade-off between the capability of the ALU and other factors. A very wide (32- or even 64-bit) floating-point ALU accounts for a significant portion of the microprocessor's silicon area. It has a considerable impact on the cost and power consumption of the CPU, and may prohibit or limit the integration of on-chip peripherals.

Most embedded applications only require limited amounts of memory. This is why most microcontrollers and DSPs have limited address busses. For those few embedded applications that need to manage very large amounts of memory, the choice of a general-purpose microprocessor designed for computer systems and having a very wide address bus may be appropriate.

As mentioned earlier, by carefully choosing the clock frequency, the designer can directly adjust the power consumption. If power consumption is an important factor, one avenue may be to choose a microprocessor designed for its low power. Another, often competitive, solution may be to choose a more complex and more powerful microprocessor, and have it work at a lower frequency. In fact the latter solution provides more flexibility, because the system will be able to cope should unanticipated computational needs appear during the development.

The presence of on-chip peripherals is an important factor for embedded designs. It can contribute to lowering the power consumption and the size of the system, because external peripherals can be avoided. These factors are especially important in portable battery-operated devices.

The availability of good development and support tools is a critical factor. The availability of relevant software functions and solutions in particular can cut down the development time significantly and should be considered in the choice.

Cost is a function of the microprocessor's complexity, but it is also a function of the microprocessor's intended market. Microprocessors that are produced for mass markets are always less expensive (at equal complexity) than microprocessors produced in smaller quantities. Even if the application is not intended for a large volume, it may be a good idea to choose a microprocessor that is, because the design will benefit from the lower cost of the microprocessor. On the other hand, if the application is intended for large volume, it is highly advisable to make sure that the processor will be available in large enough quantities to support the production. Some microprocessors are not intended for very high-volume markets and, as with any other electronic component, availability can become a problem.

Finally, the processor's maturity may be a factor in the choice. Newer processors are often not completely functional in their first release. It can take as long as a year or two to discover and iron out the early silicon bugs. In this case the designer has to balance the need for performance with the risk of working with a newer microprocessor.

5. BUILDING BLOCKS OF AN EMBEDDED SYSTEM

Figure 2-1 describes the architecture of a typical microprocessor system. It is worth noting that this generic architecture fits computer systems such as PCs, as well as embedded systems such as a microwave oven controller or a musical card.

Figure 2-1: Architecture of an embedded system – a CPU, clock, memory (program/data) and peripherals connected to the outside world through a bus system (address, data and control busses).

Microprocessor (CPU): The CPU is a sequential logic machine that reads instructions in memory and executes them one after the other. A clock sequences the various operations performed by the CPU. The CPU:

• Executes the instructions found in memory.
• Performs the calculations and data processing operations specified by the instructions.
• Initiates data exchanges with peripherals (memory, parallel ports, etc.).

Before the development of the first microprocessor in 1971, the multi-circuit structure that used to perform this role in early computers was called the Central Processing Unit (CPU). Today the words CPU and microprocessor are often used interchangeably. However, the term microprocessor is generally reserved for CPUs that are completely integrated on a single silicon chip. For microcontrollers, which also include peripherals on the same silicon chip, the term CPU describes exclusively the part of the device that executes the program. It excludes the peripherals.
Clock: The clock circuit is usually implemented by a quartz oscillator. The oscillator may be part of the microprocessor, or may be external. The quartz crystal itself is always external. In low-cost designs where frequency stability is not an issue, a low-cost ceramic resonator may replace the quartz. The clock sequences all the operations that are performed by the CPU. The signal from the oscillator may be divided down by programmable dividers, or it may be multiplied by an adjustable factor using a Phase-Locked Loop (PLL). Dividing or multiplying the clock frequency provides a way to dynamically adjust the computational speed and the power consumption of the microprocessor. Multiplying the oscillator frequency is also essential in modern designs, because it is very difficult (and therefore expensive) to produce stable quartz resonators at frequencies above 100 MHz. The PLL therefore enables the use of a lower-frequency quartz resonator to produce the high clock frequencies required by today's microprocessors.

Memory: Memory circuits are used to:

• Store program instructions.
• Store data words (constants and variables) that are used by the program.
• Exchange these data words with the CPU.

Peripherals: Peripherals provide services to the CPU, or provide a connection to the outside world. Memory circuits are a special case of peripherals. Any electronic circuit connected to the CPU by its bus system is considered to be a peripheral.

Bus system: The bus system is the network of connections between the CPU and the peripherals. It allows instructions and data words to be exchanged between the CPU and its various peripherals.

The following example describes the use of a microprocessor system in a balance control application (Figures 2-2 and 2-3).

Figure 2-2: "Bikeman at DSPWorld 1997" (photo: Cory Roy)

The microprocessor used in this application is a TMS320C50 from Texas Instruments. The system has 3 inputs that are brought to the CPU via 3 peripherals:

• A Pulse Width Modulation (PWM) signal follows the position of a radio-control joystick. The width of the pulses transmitted by the radio-control system represents the trajectory input of the pilot. The signal is transmitted to a peripheral called a timer, which measures the width of its pulses and provides this information to the CPU.

• A vibrating gyrometer is placed in the head of the cyclist. It provides an analog signal proportional to the rate of roll of the bicycle around the axis passing through the contact points of the wheels. This analog signal is measured by a peripheral called an Analog-to-Digital Converter, which provides this information to the CPU.

• An optical sensor placed in the gear train of the bicycle provides a square wave with a frequency proportional to the speed of the bicycle. This frequency is measured and provided to the CPU by another timer.

The system has one output that is generated by the CPU:

• A PWM signal is generated by a third timer, and sent to the servomotor that controls the position of the handlebars.

The system works as follows (a sketch of the corresponding control loop follows this description):

• Thirty times a second the gyrometer output is measured by the Analog-to-Digital Converter and is sent to the CPU.

• The CPU integrates this rate-of-roll signal to produce the absolute angle formed by the bicycle frame and the road.

• The CPU calculates the position that the handlebars should take to counteract any unwanted variation of this angle (to stabilize the angle), and to properly achieve the pilot's trajectory input. If the pilot wants to go in a straight line, for instance, and the bicycle is falling to the right, the CPU temporarily turns the handlebars to the right to put the bicycle into a right trajectory. The centrifugal force due to this trajectory then stops the fall to the right and swings the bicycle back to a straight position. When the bicycle is straight, the handlebars are repositioned in the middle to maintain this trajectory. If the pilot wants to turn left, the CPU temporarily turns the handlebars to the right. The centrifugal force due to the right trajectory puts the bicycle in a left fall. Once the bicycle has attained the correct left trajectory, the CPU places the handlebars in an intermediate left position to maintain this trajectory. Finally, when the pilot wants to get out of the left turn, the CPU temporarily increases the angle of the handlebars to the left. The centrifugal force swings the bicycle back to the right. When the bicycle reaches a vertical angle, the handlebars are put back in the center to provide a stable straight trajectory. Because the system is naturally unstable, very little force has to be exerted on the handlebars to control balance and trajectory. By perfectly controlling the position of the handlebars, the system uses the centrifugal force to do the hard work.

• The position of the handlebars that is calculated by the CPU is sent to a timer that generates the PWM signal that drives the servomotor.

• The speed of the bicycle is measured to adapt the reaction of the handlebars to the speed of the bicycle. When the bicycle is going fast, the reaction on the handlebars is small. When it is going slowly, the amplitude of the reaction increases.

For such a system, an automatic balance and trajectory control system is essential, because the bicycle is small and its reaction time is so short that no human pilot could react in time.

The simplified architecture of the system is shown in Figure 2-3.

Figure 2-3: Simplified architecture of the balance-control system – the radio receiver, speed sensor, gyrometer and servomotor are connected to the CPU through timers and an ADC over the address, data and control busses.
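The loop just described can be sketched in C as follows. This is only a schematic reconstruction: every hardware-access function (ADC, pulse-width timers, servo PWM) and every gain value is a hypothetical placeholder, since the text does not give the TMS320C50 register interface or the actual control law.

    /*
     * A highly simplified sketch of the 30-Hz balance loop described above.
     * Every hardware access (read_adc, read_pulse_width_us, read_speed_hz,
     * set_servo_pulse_us) and every gain value is hypothetical -- placeholders
     * standing in for the timers and ADC of the real TMS320C50 system, whose
     * actual control law is not given in the text. The externs are expected to
     * be supplied by board-support code.
     */
    #include <stdint.h>

    extern int16_t  read_adc(void);               /* gyrometer, signed counts      */
    extern uint16_t read_pulse_width_us(void);    /* pilot joystick PWM, 1000-2000 */
    extern uint16_t read_speed_hz(void);          /* wheel-sensor frequency        */
    extern void     set_servo_pulse_us(uint16_t); /* handlebar servo PWM output    */
    extern void     wait_for_30hz_tick(void);     /* paces the loop at 30 Hz       */

    #define DT       (1.0f / 30.0f)  /* loop period, s                             */
    #define K_ANGLE  8.0f            /* handlebar reaction to roll angle (assumed) */
    #define K_RATE   1.0f            /* damping on roll rate (assumed)             */
    #define K_PILOT  0.5f            /* weight of the pilot's trajectory input     */

    void balance_loop(void)
    {
        float roll_angle = 0.0f;     /* integrated from the gyro rate */

        for (;;) {
            wait_for_30hz_tick();

            float roll_rate = (float)read_adc();                  /* rate of roll          */
            roll_angle += roll_rate * DT;                          /* integrate to an angle */

            float pilot = (float)read_pulse_width_us() - 1500.0f; /* centered command      */
            float speed = (float)read_speed_hz();

            /* React against the roll, follow the pilot; react less at high speed. */
            float steer = K_ANGLE * roll_angle + K_RATE * roll_rate - K_PILOT * pilot;
            steer /= (1.0f + 0.01f * speed);

            set_servo_pulse_us((uint16_t)(1500.0f + steer));       /* new handlebar PWM */
        }
    }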
6. PROGRAM EXECUTION

The CPU executes instructions that are stored in memory. These instructions describe elementary operations, such as reading a peripheral, adding two values, or writing a result back into memory. The sequence of many instructions constitutes a program. The clock sequences the CPU's operations. At each clock cycle, the CPU carries out specific elementary operations that lead to the execution of the instructions. Two phases can be identified in the operation of the CPU:

1. The fetch cycle: During the fetch cycle the CPU reads the instruction word (or words) from memory. The fetch cycle is always a read from memory.

2. The execute cycle: During the execute cycle the CPU carries out the operation indicated by the instruction. This operation can be a read from memory or from a peripheral, a write to memory or to a peripheral, or an internal arithmetic or logic operation that does not require any exchange with a peripheral.

These two phases do not always take the same time. The length of the fetch cycle varies with the number of instruction words that must be fetched from memory. For many microprocessors, the length of the execute cycle varies with the complexity of the operation represented by the instruction. However, from the time of its reset on, the CPU always alternates between fetch cycles and execute cycles. Some microprocessors are able to do both cycles at once: they fetch the next instruction while they execute the present one.
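The fetch/execute alternation can be made concrete with a toy simulator. The three-instruction machine below (LOAD, ADD, HALT), its encoding and its memory layout are invented for illustration only and do not describe any real CPU.

    /*
     * A toy simulator illustrating the fetch/execute alternation. The
     * three-instruction machine (LOAD, ADD, HALT), its encoding and its memory
     * layout are invented for illustration only; they do not describe any real CPU.
     */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };   /* invented opcodes */

    int main(void)
    {
        /* Program: LOAD 7; ADD 5; HALT  (opcode followed by one operand word) */
        uint16_t memory[16] = { OP_LOAD, 7, OP_ADD, 5, OP_HALT };
        uint16_t pc  = 0;     /* program counter */
        uint16_t acc = 0;     /* accumulator     */

        for (;;) {
            /* Fetch cycle: read the instruction word(s) from memory. */
            uint16_t opcode  = memory[pc++];
            uint16_t operand = (opcode != OP_HALT) ? memory[pc++] : 0;

            /* Execute cycle: carry out the operation the instruction describes. */
            if (opcode == OP_LOAD)      acc = operand;
            else if (opcode == OP_ADD)  acc = (uint16_t)(acc + operand);
            else break;                 /* OP_HALT stops the machine */
        }

        printf("accumulator = %u\n", acc);   /* prints 12 */
        return 0;
    }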
System architecture

• 1. Von Neumann architecture
• 2. Harvard architecture
• 3. Pros and cons of each architecture
• 4. Busses, address decoding and three-state logic
• 5. Memories: technology and operation

The system's architecture represents the way that the elements of the system are interconnected and exchange data. It has a significant influence on the way the CPU accesses its peripherals. Every type of architecture requires special hardware features from the CPU (in particular from its bus system). Every microprocessor is therefore designed to work in a specific type of architecture and cannot usually work in other types. Every type of architecture includes:

• At least one CPU
• At least one peripheral
• At least one bus system connecting the CPU to the peripherals

1. VON NEUMANN ARCHITECTURE

In 1946, well before the development of the first microprocessor, John Von Neumann developed the first computer architecture that allowed the computer to be programmed by codes residing in memory. This development took place at the University of Pennsylvania's Moore School of Electrical Engineering, while he was working on the ENIAC – one of the first electronic computers. Prior to this development, modifying the program of a computer required making modifications to its wiring. In the ENIAC, program instructions were stored using an array of switches, which represented an early form of Read Only Memory.

The Von Neumann architecture is the most widely used today, and is implemented by the majority of microprocessors on the market. It is described in Figure 3-1. In this architecture, all the elements of the computer are interconnected by a single system of three busses:

• The Data Bus: transports data between the CPU and its peripherals. It is bi-directional; the CPU can read or write data in the peripherals.

• The Address Bus: the CPU uses the address bus to indicate which peripheral it wants to access and, within each peripheral, which specific register. The address bus is unidirectional; the CPU always writes the address, which is read by the peripherals.
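From a C program's point of view, the address the CPU drives on the address bus is simply a pointer value, and a peripheral register is accessed as a volatile object at a fixed, memory-mapped address. The sketch below illustrates the idea; the addresses and register layout are invented, since real values come from the memory map of the specific device.

    /*
     * How the address and data busses look from C: a peripheral register is a
     * volatile object at a fixed, memory-mapped address. The address 0x40001000
     * and the register layout are invented for illustration; real addresses come
     * from the memory map of the specific microprocessor or microcontroller.
     */
    #include <stdint.h>

    #define TIMER_COUNT_REG  ((volatile uint16_t *)0x40001000u)  /* hypothetical */
    #define TIMER_CTRL_REG   ((volatile uint16_t *)0x40001002u)  /* hypothetical */

    uint16_t read_timer(void)
    {
        /* The CPU drives 0x40001000 on the address bus and reads the data bus. */
        return *TIMER_COUNT_REG;
    }

    void start_timer(void)
    {
        /* The CPU drives the address, then writes the value on the data bus. */
        *TIMER_CTRL_REG = 0x0001u;
    }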
