Lecture Notes for Digital Electronics

Raymond E. Frey
Physics Department, University of Oregon
Eugene, OR 97403, USA
rayfrey@uoregon.edu
March, 2000

1 Basic Digital Concepts

By converting continuous analog signals into a finite number of discrete states, a process called digitization, then to the extent that the states are sufficiently well separated so that noise does not create errors, the resulting digital signals allow the following (slightly idealized):

- storage over arbitrary periods of time
- flawless retrieval and reproduction of the stored information
- flawless transmission of the information

Some information is intrinsically digital, so it is natural to process and manipulate it using purely digital techniques. Examples are numbers and words.

The drawback to digitization is that a single analog signal (e.g. a voltage which is a function of time, like a stereo signal) needs many discrete states, or bits, in order to give a satisfactory reproduction. For example, it requires a minimum of 10 bits to determine a voltage at any given time to an accuracy of 0.1%. For transmission, one now requires 10 lines instead of the one original analog line.

The explosion in digital techniques and technology has been made possible by the incredible increase in the density of digital circuitry, its robust performance, its relatively low cost, and its speed. The requirement of using many bits in reproduction is no longer an issue: the more the better. This circuitry is based upon the transistor, which can be operated as a switch with two states. Hence, the digital information is intrinsically binary, and in practice the terms digital and binary are used interchangeably. In the following sections we summarize some conventions for defining the binary states and for doing binary arithmetic.

1.1 Binary Logic States

The following table makes correspondences between conventions for defining binary logic states.
    Boolean Logic   Boolean Algebra   Voltage State     Voltage State
                                      (positive true)   (negative true)
    True (T)        1                 High (H)          Low (L)
    False (F)       0                 L                 H

In the case of the TTL logic gates we will be using in the lab, the Low voltage state is roughly 0-1 Volts and the High state is roughly 2.5-5 Volts. See page 475 of the text for the exact conventions for TTL as well as other hardware gate technologies.

The convention for naming these states is illustrated in Fig. 1, which shows the "positive true" case. The relationship between the logic state and the label (in this case "switch open") at some point in the circuit can be summarized with the following: the labelled voltage is High (Low) when the label's stated function is True (False). In the figure, the stated function is certainly true (switch open), and this does correspond to a high voltage at the labelled point. (Recall that with the switch open, Ohm's Law implies that with zero current, the voltage difference across the "pull up" resistor is zero, so that the labelled point is at +5 Volts. With a closed switch, the labelled point is connected to ground, with a 5 Volt drop across the resistor and a current of I = V/R = 5 mA through it.)

Figure 1: Illustration for labelling logic states ("positive true"). [A +5 V supply feeds a 1 kΩ pull-up resistor; the labelled point, marked "switch open", sits between the resistor and a switch to ground.]

With the convention known as "negative true", the label would be changed to "switch closed" with a bar over it: switch closed. Our statement becomes: the labelled voltage is Low (High) when the label's stated function is True (False). So in the figure, the stated function (switch closed) is true when the voltage is low. The bar is meant to evoke the Boolean inversion operation: T̅ = F, F̅ = T, and a double inversion returns the original state.

1.2 Binary Arithmetic

Each digit in binary is a 0 or a 1 and is called a bit, which is an abbreviation of binary digit. There are several common conventions for representation of numbers in binary. The most familiar is unsigned binary.
An example of an 8-bit number in this case is

    01001111_2 = 0·2^7 + 1·2^6 + 0·2^5 + 0·2^4 + 1·2^3 + 1·2^2 + 1·2^1 + 1·2^0
               = 64 + 8 + 4 + 2 + 1 = 79_10

(Generally the subscripts will be omitted, since the base will be clear from the context.) To convert from base 10 to binary, one can use a decomposition like the one above, or the following algorithm, illustrated with 79: 79/2 = 39 remainder 1, then 39/2 = 19 remainder 1, and so forth. Then assemble all the remainders in reverse order.

The largest number which can be represented by n bits is 2^n - 1. For example, with 4 bits the largest number is 1111_2 = 15.

The most significant bit (MSB) is the bit representing the highest power of 2, and the LSB represents the lowest power of 2.

Arithmetic with unsigned binary is analogous to decimal. For example, 1-bit addition and multiplication are as follows: 0 + 0 = 0, 0 + 1 = 1, 1 + 1 = 10 (that is, 0 with a carry of 1), 0·0 = 0, 0·1 = 0, and 1·1 = 1. Note that this is different from Boolean algebra, as we shall see shortly, where 1 + 1 = 1.

Another convention is called BCD ("binary coded decimal"). In this case each decimal digit is separately converted to binary. Therefore, since 7 = 0111_2 and 9 = 1001_2, then 79 = 01111001 (BCD). Note that this is different from our previous result. We will use BCD quite often in this course. It is quite convenient, for example, when decimal numerical displays are used.

Yet another convention is Gray code. You have a homework problem to practice this. It is less commonly used.

1.2.1 Representation of Negative Numbers

There are two commonly used conventions for representing negative numbers. With sign magnitude, the MSB is used to flag a negative number. So for example with 4-bit numbers we would have 0011 = 3 and 1011 = -3. This is simple to see, but is not good for doing arithmetic.

With 2's complement, negative numbers are designed so that the sum of a number and its 2's complement is zero. Using the 4-bit example again, we have 0101 = 5 and its 2's complement -5 = 1011. Adding (remember to carry) gives 10000 = 0.
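The division-by-2 conversion algorithm and the 4-bit 2's-complement example above can be sketched in code. This is a minimal illustration, not part of the original notes; the function names are my own.

```python
def to_binary(n):
    """Convert a non-negative decimal integer to an unsigned binary string
    by repeated division by 2, collecting remainders in reverse order."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # remainder of each division by 2
        n //= 2
    return "".join(reversed(bits))  # assemble the remainders in reverse

def twos_complement(n, width=4):
    """2's complement of n in a fixed word width: invert the bits, add 1.
    The mask discards any carry out of the MSB, as in the 4-bit example."""
    mask = (1 << width) - 1
    return (~n + 1) & mask

print(to_binary(79))                        # the 79 -> 1001111 example
print(format(twos_complement(5), "04b"))    # -5 as a 4-bit pattern: 1011
print((5 + twos_complement(5)) & 0b1111)    # the sum wraps to zero within 4 bits
```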
(The fifth bit produced by the final carry falls outside the 4-bit word and doesn't count, so within 4 bits the sum reads 0000 = 0.) Both addition and multiplication work as you would expect using 2's complement.

There are two methods for forming the 2's complement:

1. Make the transformation 0 → 1 and 1 → 0, then add 1.
2. Add some number to -2^MSB to get the number you want. For 4-bit numbers, an example of finding the 2's complement of 5 is -5 = -8 + 3 = 1000 + 0011 = 1011.

1.2.2 Hexadecimal Representation

It is very often quite useful to represent blocks of 4 bits by a single digit. Thus in base 16 there is a convention for using one digit for the numbers 0, 1, 2, ..., 15 which is called hexadecimal. It follows decimal for 0-9, then uses the letters A-F.

    Decimal   Binary   Hex
       0       0000     0
       1       0001     1
       2       0010     2
       3       0011     3
       4       0100     4
       5       0101     5
       6       0110     6
       7       0111     7
       8       1000     8
       9       1001     9
      10       1010     A
      11       1011     B
      12       1100     C
      13       1101     D
      14       1110     E
      15       1111     F

2 Logic Gates and Combinational Logic

2.1 Gate Types and Truth Tables

The basic logic gates are AND, OR, NAND, NOR, XOR, INV, and BUF. The last two are not standard terms; they stand for "inverter" and "buffer", respectively. The symbols for these gates and their corresponding Boolean expressions are given in Table 8.2 of the text which, for convenience, is reproduced (in part) in Fig. 2.

Figure 2: Table 8.2 from the text.

All of the logic gate functions, as well as the Boolean relations discussed in the next section, follow from the truth tables for the AND and OR gates. We reproduce these below. We also show the XOR truth table, because it comes up quite often, although, as we shall see, it is not elemental.

    A  B  Q
    0  0  0
    1  0  0
    0  1  0
    1  1  1

Figure 3: AND gate.

    A  B  Q
    0  0  0
    1  0  1
    0  1  1
    1  1  1

Figure 4: OR gate.

    A  B  Q
    0  0  0
    1  0  1
    0  1  1
    1  1  0

Figure 5: XOR (exclusive OR) gate.

2.2 Boolean Algebra and DeMorgan's Theorems

Boolean algebra can be used to formalize the combinations of binary logic states. The fundamental relations are given in Table 8.3 of the text.
In these relations, A and B are binary quantities; that is, they can be either logical true (T or 1) or logical false (F or 0). Most of these relations are obvious. Here are a few of them:

    AA = A ;   A + A = A ;   A + A̅ = 1 ;   A·A̅ = 0 ;   double inversion: \overline{\bar{A}} = A

Recall that the text sometimes uses an apostrophe for inversion (A'). We use the standard overbar notation (A̅).

We can use algebraic expressions to complete our definitions of the basic logic gates we began above. Note that the Boolean operations of "multiplication" and "addition" are defined by the truth tables for the AND and OR gates given above in Figs. 3 and 4. Using these definitions, we can define all of the logic gates algebraically. The truth tables can also be constructed from these relations, if necessary. See Fig. 2 for the gate symbols.

- AND: Q = AB (see Fig. 3)
- OR: Q = A + B (see Fig. 4)
- NAND: Q = \overline{AB}
- NOR: Q = \overline{A + B}
- XOR: Q = A ⊕ B (defined by the truth table of Fig. 5)
- INV: Q = A̅
- BUF: Q = A

2.2.1 Example: Combining Gates

Let's re-express the XOR operation in terms of standard Boolean operations. The following truth table evaluates the expression Q = A̅B + AB̅.

    A  B  A̅B  AB̅  Q
    0  0   0    0   0
    1  0   0    1   1
    0  1   1    0   1
    1  1   0    0   0

We see that this truth table is identical to the one for the XOR operation. Therefore, we can write

    A ⊕ B = A̅B + AB̅        (1)

A schematic of this expression in terms of gates is given in Fig. 6 (as well as Fig. 8.25 of the text). Recall that the open circles at the output or input of a gate represent inversion.

Figure 6: Realization of the XOR gate in terms of AND and OR gates.

2.2.2 Gate Interchangeability

In an example from the homework, we can make an INV gate from a 2-input NOR gate: simply connect the two inputs of the NOR gate together. Algebraically, if the two original NOR gate inputs are labelled B and C, and they are combined to form A, then we have Q = \overline{B + C} = \overline{A + A} = A̅, which is the INV operation.

Note that an INV gate cannot be made from OR or AND gates. For this reason the OR and AND gates are not universal.
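Both results just derived, the XOR identity of Eqn. 1 and the INV-from-NOR trick, can be verified by brute force over all input combinations. A minimal sketch (not from the notes; 0/1 stand in for False/True):

```python
def NOT(a):
    return 1 - a

def NOR(a, b):
    return 1 - (a | b)

for A in (0, 1):
    # INV from a 2-input NOR with its inputs tied together: Q = NOR(A, A) = not A
    assert NOR(A, A) == NOT(A)
    for B in (0, 1):
        # Eqn. 1:  A xor B  =  (not A)·B + A·(not B)
        assert (A ^ B) == ((NOT(A) & B) | (A & NOT(B)))

print("XOR identity and INV-from-NOR verified for all inputs")
```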
So, for example, no combination of AND gates can be combined to substitute for a NOR gate. However, the NAND and NOR gates are universal.

2.2.3 DeMorgan

Perhaps the most interesting of the Boolean identities are the two known as DeMorgan's Theorems:

    \overline{A + B} = \bar{A}\,\bar{B}      (or, A + B = \overline{\bar{A}\,\bar{B}})        (2)
    \overline{AB} = \bar{A} + \bar{B}        (or, AB = \overline{\bar{A} + \bar{B}})          (3)

These expressions turn out to be quite useful, and we shall use them often.

An example of algebraic logic manipulation follows. It is the one mentioned at the end of Lab 1: show that an XOR gate can be composed of 4 NAND gates. From the section above we know A ⊕ B = A̅B + AB̅. Since AA̅ = 0 and BB̅ = 0, we can add these terms, rearrange, and apply the two DeMorgan relations to give

    A ⊕ B = A\bar{A} + \bar{A}B + A\bar{B} + B\bar{B}
          = A(\bar{A} + \bar{B}) + B(\bar{A} + \bar{B})
          = A\,\overline{AB} + B\,\overline{AB}
          = \overline{\overline{A\,\overline{AB}}\cdot\overline{B\,\overline{AB}}}

The final expression uses exactly four NAND operations.

2.3 Symbolic Logic

The two DeMorgan expressions above can be invoked using gate symbols by following this prescription: change the gate shape (AND ↔ OR) and invert all inputs and outputs. By examining the two rightmost columns of Fig. 2, one sees that the transformation between the 3rd and 4th columns for the gates involving AND/OR shapes works exactly in this way. For example, the DeMorgan expression \overline{AB} = \bar{A} + \bar{B} is represented symbolically by the equivalence between the 3rd and 4th columns of the 2nd row ("NAND") of Fig. 2. We will go over how this works, and some more examples, in class.

2.4 Logic Minimization and Karnaugh Maps

As we found above, given a truth table, it is always possible to write down a correct logic expression simply by forming an OR of the ANDs of all input variables for which the output is true (Q = 1). However, for an arbitrary truth table such a procedure could produce a very lengthy and cumbersome expression which might be needlessly inefficient to implement with gates.

There are several methods for simplification of Boolean logic expressions. The process is usually called "logic minimization", and the goal is to form a result which is efficient. Two methods we will discuss are algebraic minimization and Karnaugh maps.
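The four-NAND construction of XOR derived above can be checked exhaustively. A minimal sketch (my own function names, not from the notes): the shared inner NAND is \overline{AB}, which feeds both branches.

```python
def NAND(a, b):
    """2-input NAND on 0/1 values."""
    return 1 - (a & b)

def xor_from_nands(a, b):
    """A xor B built from exactly 4 NAND gates, as derived via DeMorgan."""
    m = NAND(a, b)                       # gate 1: shared NAND(A, B)
    return NAND(NAND(a, m), NAND(b, m))  # gates 2, 3, and the outer gate 4

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == a ^ b

print("XOR realized with 4 NAND gates")
```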
For very complicated problems the former method can be done using special software analysis programs. Karnaugh maps are limited to problems with up to 4 binary inputs.

Let's start with a simple example. The table below gives an arbitrary truth table involving 2 logic inputs.

Table 1: Example of a simple arbitrary truth table.

    A  B  Q
    0  0  1
    0  1  1
    1  0  0
    1  1  1

There are two overall strategies:

1. Write down an expression directly from the truth table. Use Boolean algebra, if desired, to simplify.
2. Use Karnaugh mapping ("K-map"). This is only applicable if there are no more than 4 inputs.

In our example above, we can use two different ways of writing down a result directly from the truth table. We can write down all TRUE terms and OR the result. This gives

    Q = \bar{A}\bar{B} + \bar{A}B + AB

While correct, without further simplification this expression would involve 3 2-input AND gates, 2 inverters, and 1 3-input OR gate.

Alternatively, one can write down an expression for all of the FALSE states of the truth table. This is simpler in this case:

    \bar{Q} = A\bar{B}   ⟹   Q = \overline{A\bar{B}} = \bar{A} + B

where the last step results from Eqn. 3. Presumably, the two expressions can be shown to be equivalent with some algebra. Certainly, the 2nd is simpler, involving only an inverter and one 2-input OR gate.

Finally, one can try a K-map solution. The first step is to write out the truth table in the form below, with the input states as the headings of the rows and columns of a table, and the corresponding outputs within, as shown below.

Table 2: K-map of the truth table.

    A\B   0   1
     0    1   1
     1    0   1

The steps/rules are as follows:

1. Form the 2-dimensional table as above. Combine 2 inputs in a "gray code" way — see the 2nd example below.
2. Form groups of 1's and circle them; the groups are rectangular and must have sides of length 2^n × 2^m, where n and m are integers 0, 1, 2, ....
3. The groups can overlap.
4. Write down an expression of the inputs for each group.
5. OR together these expressions. That's it.
6. Groups can wrap across table edges.
7.
As before, one can alternatively form groups of 0's to give a solution for Q̅.
8. The bigger the groups one can form, the better (simpler) the result.
9. There are usually many alternative solutions, all equivalent, some better than others depending upon what one is trying to optimize.

Here is one way of doing it:

    A\B   0   1
     0    1   1
     1    0   1

The two groups we have drawn are the top row (A̅) and the right column (B). So the solution (as before) is:

    Q = \bar{A} + B

2.4.1 K-map Example 2

Let's use this to determine which 3-bit numbers are prime. (This is a homework problem.) We assume that 0, 1, 2 are not prime. We will let our input number have digits a_2 a_1 a_0. Here is the truth table:

Table 3: 3-digit prime finder.

    Decimal  a_2  a_1  a_0  Q
       0      0    0    0   0
       1      0    0    1   0
       2      0    1    0   0
       3      0    1    1   1
       4      1    0    0   0
       5      1    0    1   1
       6      1    1    0   0
       7      1    1    1   1

Here is the corresponding K-map and a solution. Note that where two inputs are combined in a row or column, their progression follows gray code; that is, only one bit changes at a time.

Table 4: K-map of the truth table.

    a_2\a_1a_0   00  01  11  10
        0         0   0   1   0
        1         0   1   1   0

The solution shown above is:

    Q = a_1 a_0 + a_2 a_0 = a_0 (a_1 + a_2)

2.4.2 K-map Example 3: Full Adder

In this example we will outline how to build a digital full adder. It is called "full" because it will include a "carry-in" bit and a "carry-out" bit. The carry bits will allow a succession of 1-bit full adders to be used to add binary numbers of arbitrary length. (A half adder includes only one carry bit.)

Figure 7: Block schematic of the full adder. (We name our adder the "Σ chip".) [Inputs a_i, b_i, and Cin_i; outputs S_i and Cout_i.]

The scheme for the full adder is outlined in Fig. 7. Imagine that we are adding two n-bit binary numbers. Let the inputs a_i and b_i be the i-th bits of the two numbers. The carry-in bit Cin_i represents any carry from the sum of the neighboring less significant bits at position i-1. That is, Cin_i = 1 if a_{i-1} = b_{i-1} = 1, and is 0 otherwise. The sum S_i at position i is therefore the sum of a_i, b_i, and Cin_i.
(Note that this is an arithmetic sum, not a Boolean OR.) A carry for this sum sets the carry-out bit, Cout_i = 1, which then can be applied to the sum of the i+1 bits. The truth table is given below.

    Cin_i  a_i  b_i  S_i  Cout_i
      0     0    0    0     0
      0     0    1    1     0
      0     1    0    1     0
      0     1    1    0     1
      1     0    0    1     0
      1     0    1    0     1
      1     1    0    0     1
      1     1    1    1     1

With Cin_i = 0, we see that the output sum S_i is just given by the XOR operation, a_i ⊕ b_i. And with Cin_i = 1, then S_i = \overline{a_i ⊕ b_i}. Perhaps the simplest way to express this relationship is the following:

    S_i = Cin_i ⊕ (a_i ⊕ b_i)

To determine a relatively simple expression for Cout_i, we will use a K-map:

    Cin_i\a_i b_i   00  01  11  10
         0           0   0   1   0
         1           0   1   1   1

This yields

    Cout_i = a_i b_i + Cin_i a_i + Cin_i b_i = a_i b_i + Cin_i (a_i + b_i)

which in hardware would be 2 2-input OR gates and 2 2-input AND gates.

As stated above, the carry bits allow our adder to be expanded to add any number of bits. As an example, a 4-bit adder circuit is depicted in Fig. 8. The sum can be 5 bits, where the MSB is formed by the final carry out. (Sometimes this is referred to as an "overflow" bit.)

Figure 8: Expansion of the 1-bit full adder to make a 4-bit adder.

2.4.3 Making a Multiplier from an Adder

In class we will discuss how to use our full adder (the "Σ chip") to make a multiplier.

2.5 Multiplexing

A multiplexer (MUX) is a device which selects one of many inputs to a single output. The selection is done by using an input address. Hence, a MUX can take many data bits and put them, one at a time, on a single output data line in a particular sequence. This is an example of transforming parallel data to serial data. A demultiplexer (DEMUX) performs the inverse operation, taking one input and sending it to one of many possible outputs. Again the output line is selected using an address.
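The full-adder equations of Sec. 2.4.2, S_i = Cin_i ⊕ (a_i ⊕ b_i) and Cout_i = a_i b_i + Cin_i (a_i + b_i), together with the ripple-carry chaining of Fig. 8, can be checked in code. A minimal sketch (my own function names, not from the notes):

```python
def full_adder(a, b, cin):
    """1-bit full adder: returns (sum bit, carry out)."""
    s = cin ^ (a ^ b)                 # S = Cin xor (a xor b)
    cout = (a & b) | (cin & (a | b))  # Cout = ab + Cin(a + b)
    return s, cout

def add4(x, y):
    """4-bit ripple-carry adder as in Fig. 8; the 5-bit result's MSB
    is the final carry out ("overflow" bit)."""
    carry, total = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total | (carry << 4)

# The chained adders reproduce ordinary integer addition for all 4-bit inputs:
assert all(add4(x, y) == x + y for x in range(16) for y in range(16))
print(format(add4(0b1011, 0b0110), "05b"))  # 11 + 6 = 17 -> 10001
```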
A MUX-DEMUX pair can be used to convert data to serial form for transmission, thus reducing the number of required transmission lines. The address bits are shared by the MUX and DEMUX at each end. If n data bits are to be transmitted, then after multiplexing, the number of separate lines required is log_2(n) + 1, compared to n without the conversion to serial. Hence for large n the saving can be substantial. In Lab 2, you will build such a system.

Multiplexers consist of two functionally separate components: a decoder and some switches or gates. The decoder interprets the input address to select a single data bit. We use the example of a 4-bit MUX in the following section to illustrate how this works.

2.5.1 A 4-bit MUX Design

We wish to design a 4-bit multiplexer. The block diagram is given in Fig. 9. There are 4 input data bits D_0-D_3, 2 input address bits A_0 and A_1, one serial output data bit Q, and an (optional) enable bit E which is used for expansion (discussed later). First we will design the decoder.

Figure 9: Block diagram of the 4-bit MUX. [The decoder converts A_1 A_0 into the four select lines C_0-C_3, which control the gates/switches routing D_0-D_3 to Q.]

We need m address bits to specify 2^m data bits. So in our example, we have 2 address bits. The truth table for our decoder is straightforward:

    A_1  A_0   C_0  C_1  C_2  C_3
     0    0     1    0    0    0
     0    1     0    1    0    0
     1    0     0    0    1    0
     1    1     0    0    0    1

The implementation of the truth table with standard gates is also straightforward, as given in Fig. 10.

Figure 10: Decoder for the 4-bit MUX.

For the "gates/switches" part of the MUX, the design depends upon whether the input data lines carry digital or analog signals. We will discuss the analog possibility later; the digital case is the usual and simplest case. Here, the data routing can be accomplished simply by forming 2-input ANDs of the decoder outputs with the corresponding data inputs, and then forming an OR of these terms.
Explicitly,

    Q = C_0 D_0 + C_1 D_1 + C_2 D_2 + C_3 D_3

Finally, if an ENABLE line E is included, it is simply ANDed with the right-hand side of this expression. This can be used to switch the entire MUX IC off/on, and is useful for expansion to more bits, as we shall see.

3 Flip-Flops and Introductory Sequential Logic

We now turn to digital circuits which have states which change in time, usually according to an external clock. The flip-flop is an important element of such circuits. It has the interesting property of memory: it can be set to a state which is retained until explicitly reset.

3.1 Simple Latches

The following 3 figures are equivalent representations of a simple circuit. In general these are called flip-flops. Specifically, these examples are called SR ("set-reset") flip-flops, or SR latches.

Figure 11: Two equivalent versions of an SR flip-flop (or "SR latch").

Figure 12: Yet another equivalent SR flip-flop, as used in Lab 3.

The truth table for the SR latch is given below.

    S  S̅  R  R̅     Q  Q̅
    1  0   0  1      1  0
    0  1   1  0      0  1
    0  1   0  1      retains previous
    1  0   1  0      0  0

The state described by the last row is clearly problematic, since Q and Q̅ should not be the same value. Thus, the S = R = 1 inputs should be avoided. From the truth table, we can develop a sequence such as the following:

1. R = 0, S = 1 ⟹ Q = 1 (set)
2. R = 0, S = 0 ⟹ Q = 1 (Q = 1 state retained: "memory")
3. R = 1, S = 0 ⟹ Q = 0 (reset)
4. R = 0, S = 0 ⟹ Q = 0 (Q = 0 state retained)

In alternative language, the first operation "writes" a true state into one bit of memory. It can subsequently be "read" until it is erased by the reset operation of the third line.

3.1.1 Latch Example: Debounced Switch

A useful example of the simple SR flip-flop is the debounced switch, like the ones on the lab prototyping boards. The point is that any simple mechanical switch will bounce as it makes contact.
Hence, an attempt to provide a simple transition from digital HIGH to LOW with a mechanical switch may result in an unintended series of transitions between the two states as the switch damps to its final position. So, for example, a digital counter connected to Q would count every bounce, rather than the single push of the button which was intended. The debounced configuration and corresponding truth table are given below. When the switch is moved from A to B, for example, the output Q goes LOW. A bounce would result in A = B = 1, which is the "retain previous" state of the flip-flop. Hence, the bounces do not appear at the output Q.

Figure 13: A debounced switch. [Two 1 kΩ pull-up resistors to +5 V feed the latch inputs A and B; the switch grounds one input or the other.]

    A  B  Q
    1  0  0
    0  1  1
    1  1  retains previous
    0  0  not allowed

3.2 Clocked Flip-flops

We will soon get used to the idea of a clock as an essential element of digital circuitry. When we speak of a clock signal, we mean a sequence of evenly spaced digital HIGH and LOW signals proceeding at a fixed frequency; that is, the clock is a continuous sequence of square wave pulses. There are a number of reasons for the importance of the clock. Clearly it is essential for doing any kind of counting or timing operation. But its most important role is in providing synchronization to the digital circuit. Each clock pulse may represent the transition to a new digital state of a so-called "state machine" (simple processor) we will soon encounter. Or a clock pulse may correspond to the movement of a bit of data from one location in memory to another. A digital circuit coordinates these various functions through the synchronization provided by a single clock signal which is shared throughout the circuit. A more sophisticated example of this concept is the clock of a computer, which we have come to associate with processing speed (e.g. 330 MHz for typical current-generation commercial processors).

We can include a clock signal to our simple SR flip-flop, as shown in Fig. 14.
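Before adding the clock, the set/reset/hold behaviour of the basic SR latch summarized in Sec. 3.1 can be simulated. A minimal sketch (my own function names, not from the notes), modelling the cross-coupled NOR form and iterating the feedback loop until it settles:

```python
def sr_latch(s, r, q_prev):
    """NOR-gate SR latch: returns the settled Q for inputs S, R given the
    previous output. S = R = 0 holds the previous state ("memory")."""
    q, qbar = q_prev, 1 - q_prev
    for _ in range(4):               # iterate the feedback loop to stability
        q_new = 1 - (r | qbar)       # Q  = NOR(R, Qbar)
        qbar_new = 1 - (s | q)       # Qbar = NOR(S, Q)
        q, qbar = q_new, qbar_new
    return q

# The set / hold / reset / hold sequence from Sec. 3.1:
q = sr_latch(1, 0, 0)   # set   -> Q = 1
q = sr_latch(0, 0, q)   # hold  -> Q = 1 (memory)
print(q)
q = sr_latch(0, 1, q)   # reset -> Q = 0
q = sr_latch(0, 0, q)   # hold  -> Q = 0
print(q)
```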
The truth table, given below, follows directly from our previous SR flip-flop, except that now we include a label for the n-th clock pulse for the inputs and the output. This is because the inputs have no effect unless they coincide with a clock pulse. (Note that a specified clock pulse conventionally refers to a HIGH level.) As indicated in the truth table, the inputs S_n = R_n = 0 represent the flip-flop memory state. Significantly, one notes that the interval between clock pulses also corresponds to the "retain previous state" of the flip-flop. Hence the information encoded by the one bit of flip-flop memory can only be modified in synchronization with the clock.

Figure 14: A clocked SR flip-flop.

    S_n  R_n  Q_n
     1    0    1
     0    1    0
     0    0    Q_{n-1}
     1    1    avoid

We are now set to make a subtle transition to our next version of the clocked flip-flop. The flip-flop memory is being used to retain the state between clock pulses. In fact, the state set up by the S and R inputs can be represented by a single input we call "data", or D. This is shown in Fig. 15. Note that we have explicitly eliminated the bad S = R = 1 state with this configuration. We can override this data input and clock synchronization scheme by including the "jam set" (S̅) and "jam reset" (R̅) inputs shown in Fig. 15. These function just as before with the unclocked SR flip-flop. Note that these "jam" inputs go by various names: sometimes the set is called "preset" and the reset is called "clear", for example.

Figure 15: A "D-type transparent" flip-flop with jam set and reset.

A typical timing diagram for this flip-flop is given in Fig. 16. Note that the jam reset signal R̅ overrides any action of the data or clock inputs.

Figure 16: Example of a timing diagram for the transparent D flip-flop. (It is assumed that S̅ is held HIGH throughout.)

3.2.1 Edge-Triggered Flip-Flops

We need to make one final modification to our clocked flip-flop. Note in the timing diagram of Fig.
16 that there is quite a bit of apparent ambiguity regarding exactly when the D input gets latched into Q. If a transition in D occurs sometime during a clock HIGH, for example, what will occur? The answer will depend upon the characteristics of the particular electronics being used. This lack of clarity is often unacceptable. As a point of terminology, the clocked flip-flop of Fig. 15 is called a transparent D-type flip-flop or latch. (An example in TTL is the 7475 IC.)

The solution to this is the edge-triggered flip-flop. We will discuss how this works for one example in class; it is also discussed some in the text. Triggering on a clock rising or falling edge is similar in all respects to what we have discussed, except that it requires 2-3 coupled SR-type flip-flops, rather than just one clocked SR flip-flop.

The most common type is the positive-edge triggered D-type flip-flop. This latches the D input upon the clock transition from LOW to HIGH. An example of this in TTL is the 7474 IC. It is also common to employ a negative-edge triggered D-type flip-flop, which latches the D input upon the clock transition from HIGH to LOW.

The symbols used for these three D-type flip-flops are depicted in Fig. 17. Note that the small triangle at the clock input denotes positive-edge triggering, and with an inversion symbol it denotes negative-edge triggering. The JK type of flip-flop is a slightly fancier version of the D-type which we will discuss briefly later. Not shown in the figure are the jam set and reset inputs, which are typically included in the flip-flop IC packages. In timing diagrams, the clocks for edge-triggered devices are indicated by arrows, as shown in Fig. 18.

Figure 17: Symbols for D-type and JK flip-flops. Left to right: transparent D-type, positive-edge triggered D-type, negative-edge triggered D-type, and positive-edge triggered JK-type.
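The behavioural difference between the transparent latch and the edge-triggered flip-flop can be made concrete with a small model. A minimal sketch (my own class, not from the notes) of a positive-edge triggered D-type: Q changes only on a LOW → HIGH clock transition, latching D at that instant.

```python
class DFlipFlop:
    """Behavioural model of a positive-edge triggered D-type flip-flop."""

    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def tick(self, clk, d):
        """Present clock level and data; Q updates only on a rising edge."""
        if self.prev_clk == 0 and clk == 1:   # LOW -> HIGH transition detected
            self.q = d
        self.prev_clk = clk
        return self.q

ff = DFlipFlop()
# D changing while the clock sits HIGH is ignored until the next rising edge,
# unlike the transparent latch of Fig. 15:
outputs = [ff.tick(clk, d) for clk, d in
           [(0, 1), (1, 1), (1, 0), (0, 0), (1, 0)]]
print(outputs)   # -> [0, 1, 1, 1, 0]
```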
Figure 18: Clocks in timing diagrams for positive-edge triggered (left) and negative-edge triggered (right) devices.

For edge-triggered devices, the ambiguity regarding latch timing is reduced significantly, although at high clock frequency it becomes an issue again. Typically, the requirements are as follows:

- The data input must be held for a time t_setup before the clock edge. Typically, t_setup ≈ 20 ns or less.
- For some ICs, the data must be held for a short time t_hold after the clock edge. Typically t_hold ≈ 3 ns, but it is zero for most newer ICs.
- The output Q appears after a short propagation delay t_prop of the signal through the gates of the IC. Typically, t_prop ≈ 10 ns.
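The setup and hold windows above define when a data transition is safe relative to a clock edge. A minimal sketch of such a check (the function and the use of the typical values quoted above are my own illustration, not from the notes):

```python
# Typical values quoted above (hypothetical device, seconds):
T_SETUP = 20e-9   # data must be stable at least 20 ns before the edge
T_HOLD = 3e-9     # data must be stable at least 3 ns after the edge

def timing_ok(data_change_time, clock_edge_time):
    """True if a data transition falls safely outside the setup/hold
    window surrounding the given clock edge."""
    dt = data_change_time - clock_edge_time
    return dt <= -T_SETUP or dt >= T_HOLD

print(timing_ok(-25e-9, 0.0))   # D changes 25 ns before the edge: safe
print(timing_ok(-5e-9, 0.0))    # D changes inside the setup window: violation
```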
