
2011

SREE NARAYANA GURUKULAM COLLEGE OF ENGINEERING JISHA P.S

SWITCHING THEORY AND LOGIC DESIGN

MODULE 1
Number Systems and Codes: Decimal, Binary, Octal and Hexadecimal number systems. Codes: BCD, Gray code, Excess-3 code, ASCII, EBCDIC, conversion between various codes. Switching Theory: Boolean algebra, postulates and theorems, De Morgan's theorem. Switching functions, canonical forms, simplification of switching functions: Karnaugh map and Quine-McCluskey methods.

Introduction
Switching theory is the abstract mathematical formalization used in the logic design of digital networks. It is so called because, when it was first developed by Claude Shannon in 1938, most logic networks were implemented using switches and electromechanical devices such as relays. Modern logic networks are usually constructed using electronic integrated circuits comprising networks of logical elements such as inverters, AND gates, and OR gates. These elements operate on binary signals; they are constrained to take on only two different voltage values (such as 0 or 5 volts). Switching theory uses a two-valued Boolean algebra (sometimes called switching algebra) as a notation to represent the operation of such logic networks. The two algebraic values are most often represented as "0" and "1", although "T" and "F" are sometimes used to emphasize the relation to propositional logic. The correspondence between the algebraic symbol used to represent a signal and the voltage present is arbitrary, although the positive-logic convention, in which the algebraic 1 represents the more positive voltage, is now most common. Each input or output signal of a logic network is represented by a Boolean variable. Boolean algebra has three basic operations: inversion, logical addition, and logical multiplication; these operations are implemented directly by logic gates called inverters, OR gates, and AND gates. Application: the basic digital unit is the digital gate; the devices that realize digital logic signals include the transistor (the basic one) and, built from it, the computer.

Number System
A number system of base (also called radix) r is a system which has r distinct symbols for the r digits. A number is represented by a string of these symbolic digits. To determine the quantity that the number represents, we multiply each digit by an integer power of r depending on the place where it is located and then find the sum of the weighted digits. A number can be represented with different base values. We are familiar with numbers in base 10 (known as decimal numbers), with digits taking values 0, 1, 2, ..., 8, 9. A computer uses a binary number system, which has base 2 and whose digits can have only two values: 0 and 1.

A decimal number with a few digits can be expressed in binary form using a larger number of digits. Thus the number 65 can be expressed in binary form as 1000001. The binary form can be written more compactly by grouping 3 binary digits together to form an octal number. An octal number with base 8 makes use of the eight digits 0, 1, 2, 3, 4, 5, 6 and 7. A still more compact representation is the hexadecimal representation, which groups 4 binary digits together. It makes use of 16 digits, but since we have only 10 numeric digits, the remaining 6 are the first 6 letters of the alphabet. Thus the hexadecimal base uses 0, 1, 2, ..., 8, 9, A, B, C, D, E, F as digits. To summarize:

Decimal: base 10. Binary: base 2. Octal: base 8. Hexadecimal: base 16.

Decimal, Binary, Octal, and Hex Numbers

Hex Dec Oct Binary | Hex Dec Oct Binary | Hex Dec Oct Binary | Hex Dec Oct Binary
0   0   0   0000   | 4   4   4   0100   | 8   8   10  1000   | C   12  14  1100
1   1   1   0001   | 5   5   5   0101   | 9   9   11  1001   | D   13  15  1101
2   2   2   0010   | 6   6   6   0110   | A   10  12  1010   | E   14  16  1110
3   3   3   0011   | 7   7   7   0111   | B   11  13  1011   | F   15  17  1111

Decimal System: It is the most commonly used number system. Our present system of numbers has 10 separate symbols, namely 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9, which are called Arabic numerals. A digit of a number system is a symbol which represents an integral quantity. The base or radix of a number system is defined as the number of different digits which can occur in each position in the number system. The decimal system has a base or radix of 10. The actual meaning of 125 can be seen more clearly if we notice that it is spoken as one hundred and twenty-five. Basically, the number is a contraction of 1*100 + 2*10 + 5. The important point is that the value of each digit is determined by its position. The decimal system uses powers of 10: 125 actually means 1*10^2 + 2*10^1 + 5*10^0, where the powers of 10 are termed the weights.

The weight of 1 is 10^2, the weight of 2 is 10^1 and the weight of 5 is 10^0.

Binary Number System: Digital computers use the binary number system, which has only two symbols: 0 and 1. Numbers in the binary system are represented as combinations of these two symbols. The binary system uses powers of 2. A binary digit is also referred to as a bit (from Binary digIT). A string of 4 bits is called a nibble and a string of 8 bits is called a byte. A byte is the basic unit of data in computers. In the binary system, the number (125)10 is represented as 1111101, meaning 1*2^6 + 1*2^5 + 1*2^4 + 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0.

Conversion of decimal to binary
Here we keep on dividing the number by 2 recursively till it reduces to zero, then read the remainders in reverse order.
Example: convert (68)10 to binary.
68 / 2 = 34, remainder 0
34 / 2 = 17, remainder 0
17 / 2 = 8, remainder 1
8 / 2 = 4, remainder 0
4 / 2 = 2, remainder 0
2 / 2 = 1, remainder 0
1 / 2 = 0, remainder 1
We stop here as the number has been reduced to zero and collect the remainders in reverse order. Note: the answer is read from bottom (MSB, most significant bit) to top (LSB, least significant bit) as (1000100)2. You should be able to write a recursive function to carry out this conversion.

Conversion of decimal fraction to binary fraction
To convert a decimal fraction to its binary fraction, multiplication by 2 is carried out repetitively; the integer part of each result is saved and placed after the binary point, and the fractional part is carried into the next multiplication. The process can be stopped any time after the desired accuracy has been achieved.
Example: convert (0.68)10 to a binary fraction.
0.68 * 2 = 1.36, integer part 1. Take the fractional part and continue the process:
0.36 * 2 = 0.72, integer part 0
0.72 * 2 = 1.44, integer part 1
0.44 * 2 = 0.88, integer part 0
The digits are placed in the order in which they are generated, not in reverse order. If we need accuracy up to 4 binary places, the result is 0.1010...
Example: convert (70.68)10 to its binary equivalent. First convert 70 into its binary form, which is 1000110. Then convert 0.68 into binary

form up to 4 binary places to get 0.1010. Now put the two parts together: the answer is 1000110.1010.

Conversion of Binary to Decimal
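The two conversion procedures above, together with the reverse binary-to-decimal direction, can be sketched in Python (the function names are illustrative, not from the text):

```python
def dec_to_bin(n):
    """Whole number: repeated division by 2, remainders read in reverse."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))
        n //= 2
    return "".join(reversed(bits))

def frac_to_bin(f, places=4):
    """Fraction: repeated multiplication by 2, integer parts taken in order."""
    bits = []
    for _ in range(places):
        f *= 2
        bits.append(str(int(f)))
        f -= int(f)
    return "".join(bits)

def bin_to_dec(s):
    """Multiply each bit by its weight (a power of 2) and sum the products."""
    return sum(int(b) * 2 ** i for i, b in enumerate(reversed(s)))

print(dec_to_bin(68))         # 1000100
print(frac_to_bin(0.68))      # 1010
print(bin_to_dec("1111101"))  # 125
```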

Octal Number System
A base or radix 8 number system. One octal digit is equivalent to 3 bits, and the octal digits are 0 to 7. Binary is easily converted to the octal numeral system, since octal uses a radix of 8, which is a power of two (namely 2^3, so it takes exactly three binary digits to represent an octal digit). The correspondence between octal and binary numerals is the same as for the first eight digits of hexadecimal in the table above: binary 000 is equivalent to the octal digit 0, binary 111 is equivalent to octal 7, and so forth.

Octal:  0   1   2   3   4   5   6   7
Binary: 000 001 010 011 100 101 110 111

Octal to Binary Conversion
(65)8 = (110 101)2
(17)8 = (001 111)2
Binary to Octal Conversion
(101100)2 = 101 100 grouped = (54)8
(10011)2 = 010 011 grouped with padding = (23)8
Octal to Decimal Conversion
(65)8 = (6 x 8^1) + (5 x 8^0) = (6 x 8) + (5 x 1) = (53)10
(127)8 = (1 x 8^2) + (2 x 8^1) + (7 x 8^0) = (1 x 64) + (2 x 8) + (7 x 1) = (87)10

Decimal to Octal Conversion (repeated division by 8 for whole numbers, repeated multiplication by 8 for fractions)
Example: convert (177)10 to its octal equivalent.
177 / 8 = 22, remainder 1
22 / 8 = 2, remainder 6
2 / 8 = 0, remainder 2
Note: the answer is read from bottom to top as (261)8, the same as in the binary case.
Conversion of a decimal fraction to an octal fraction is carried out in the same manner as decimal to binary, except that the multiplication is carried out by 8.
Example: convert (0.523)10 to its octal equivalent up to 3 octal places.
0.523 x 8 = 4.184, integer part 4
0.184 x 8 = 1.472, integer part 1
0.472 x 8 = 3.776, integer part 3
So the answer is (0.413...)8.

Hexadecimal Number System
A base or radix 16 number system. One hex digit is equivalent to 4 bits. The digits are 0, 1, 2, ..., 8, 9, A, B, C, D, E, F (so B is 11 and E is 14). Numbers are expressed as powers of 16: 16^0 = 1, 16^1 = 16, 16^2 = 256, 16^3 = 4096, 16^4 = 65536, ...

Conversion of Hex to Decimal
Example: convert (F4C)16 to decimal.
= (F x 16^2) + (4 x 16^1) + (C x 16^0)
= (15 x 256) + (4 x 16) + (12 x 1)
= 3840 + 64 + 12 = (3916)10

Conversion of Decimal to Hex
Example: convert (4768)10 to hex.
4768 / 16 = 298, remainder 0
298 / 16 = 18, remainder 10 (A)
18 / 16 = 1, remainder 2
1 / 16 = 0, remainder 1
Note: the answer is read from bottom to top as (12A0)16, the same as in the binary case.

Conversion of Binary to Hex
Conversion of binary numbers to hex simply requires grouping the bits into groups of four, beginning with the LSB and progressing toward the MSB.
(1110 0111)2 = (E7)16
(1 1000 1010 1000 0111)2 = 0001 1000 1010 1000 0111 = (18A87)16

Conversion of Hex to Binary
The conversion from an integer hexadecimal number to binary is accomplished by:
1. Converting each hexadecimal digit to its 4-bit binary equivalent.
2. Combining the 4-bit sections by removing the spaces.
Convert the hexadecimal value 0xAFB2 to binary notation:
A = 1010, F = 1111, B = 1011, 2 = 0010
This yields the binary number (1010111110110010)2, or 1010 1111 1011 0010 in a more readable format.

Conversion of Octal to Hex
The conversion is made in two steps using binary as an auxiliary base. Octal is converted to binary, and the bits are then regrouped 4 by 4, each group corresponding to a hexadecimal digit. For instance, convert octal 1057 to hexadecimal:
To binary: 1 -> 001, 0 -> 000, 5 -> 101, 7 -> 111, giving 001000101111
To hexadecimal: 0010 0010 1111 -> 2 2 F
Thus (1057)8 = (22F)16.
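The bit-grouping conversions above can be sketched as follows (an illustration; the helper names are my own):

```python
def oct_to_bin(o):
    """Each octal digit maps to exactly 3 bits."""
    return "".join(format(int(d, 8), "03b") for d in o)

def bin_to_hex(b):
    """Pad on the left to a multiple of 4 bits, then convert each group."""
    b = b.zfill(-(-len(b) // 4) * 4)
    return "".join(format(int(b[i:i + 4], 2), "X") for i in range(0, len(b), 4))

def oct_to_hex(o):
    """Two steps via binary: 3-bit groups in, 4-bit groups out."""
    return bin_to_hex(oct_to_bin(o))

print(oct_to_bin("65"))        # 110101
print(bin_to_hex("11100111"))  # E7
print(oct_to_hex("1057"))      # 22F
```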

Summary of conversion methods (from -> to):

- Binary -> Octal: whole numbers, from right to left convert each 3-bit group to the equivalent octal digit; fractional part, from left to right convert each 3-bit group.
- Binary -> Decimal: multiply each bit by its weight and sum the products.
- Binary -> Hexadecimal: whole numbers, from right to left convert each 4-bit group to the equivalent hexadecimal digit; fractional part, from left to right convert each 4-bit group.
- Octal -> Binary: replace each octal digit with the appropriate 3 bits.
- Octal -> Decimal: multiply each digit by its weight and sum the products.
- Octal -> Hexadecimal: convert to binary, then regroup the bits 4 at a time.
- Decimal -> Binary: whole numbers, repeated division by 2; fractional part, repeated multiplication by 2.
- Decimal -> Octal: whole numbers, repeated division by 8; fractional part, repeated multiplication by 8.
- Decimal -> Hexadecimal: whole numbers, repeated division by 16; fractional part, repeated multiplication by 16.
- Hexadecimal -> Binary: replace each hexadecimal digit with the appropriate 4 bits.
- Hexadecimal -> Decimal: multiply each digit by its weight and sum the products.
- Hexadecimal -> Octal: convert to binary, then regroup the bits 3 at a time.
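The "repeated division by r" and "multiply each digit by its weight" rules above can be captured by one generic pair of routines (illustrative names, assuming bases up to 16):

```python
DIGITS = "0123456789ABCDEF"

def to_base(n, r):
    """Whole number: repeated division by r, remainders read in reverse."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        out.append(DIGITS[n % r])
        n //= r
    return "".join(reversed(out))

def from_base(s, r):
    """Multiply each digit by its weight (a power of r) and sum the products."""
    return sum(DIGITS.index(d) * r ** i for i, d in enumerate(reversed(s)))

print(to_base(177, 8))        # 261
print(to_base(4768, 16))      # 12A0
print(from_base("12A0", 16))  # 4768
```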

BINARY CODES
BINARY CODED DECIMAL
In computing and electronic systems, binary-coded decimal (BCD) is an encoding for decimal numbers in which each digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for printing or display and faster decimal calculations. Its drawbacks are the increased complexity of the circuits needed to implement mathematical operations and a relatively inefficient encoding: it occupies more space than a pure binary representation. Though BCD is not as widely used as it once was, decimal fixed-point and floating-point formats are still important and still used in financial, commercial, and industrial computing; modern decimal floating-point representations use base-10 exponents, but not BCD encodings. In BCD, a digit is usually represented by four bits which, in general, represent the values/digits/characters 0-9. Other bit combinations are sometimes used for a sign or other indications. To BCD-encode a decimal number using the common encoding, each decimal digit is stored in a four-bit nibble.
Decimal: 0    1    2    3    4    5    6    7    8    9
BCD:     0000 0001 0010 0011 0100 0101 0110 0111 1000 1001

Thus, the BCD encoding for the number 127 would be 0001 0010 0111.

Addition with BCD
It is possible to perform addition in BCD by first adding in binary and then converting to BCD afterwards. Conversion of the simple sum of two digits can be done by adding 6 (that is, 16 - 10) when the result has a value greater than 9. For example:

9 + 8 = 17 = [1001] + [1000] = [0001 0001] in binary.

However, in BCD, there cannot exist a value greater than 9 (1001) per nibble. To correct this, 6 (0110) is added to that sum to get the correct first two digits:

[0001 0001] + [0000 0110] = [0001 0111]

which gives two nibbles, [0001] and [0111], which correspond to "1" and "7" respectively. This gives 17 in BCD, which is the correct result. This technique can be extended to adding multiple digits, by adding in groups from right to left, propagating the second digit as a carry, always comparing the 5-bit result of a digit-pair sum to 9.
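A minimal sketch of BCD encoding and the add-6 correction described above (the helper names are hypothetical):

```python
def to_bcd(n):
    """One 4-bit nibble per decimal digit."""
    return " ".join(format(int(d), "04b") for d in str(n))

def bcd_add_digits(a, b):
    """Add two BCD digits; add 6 to skip the unused codes when the sum > 9."""
    s = a + b
    carry = 0
    if s > 9:
        s = (s + 6) & 0b1111  # keep only the low nibble
        carry = 1
    return carry, s

print(to_bcd(127))           # 0001 0010 0111
print(bcd_add_digits(9, 8))  # (1, 7)
```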

GRAY CODE
The reflected binary code, also known as Gray code after Frank Gray, is a binary numeral system where two successive values differ in only one digit. The reflected binary code was originally designed to prevent spurious output from electromechanical switches. Today, Gray codes are widely used to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems.
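A standard construction for the Gray code (not spelled out in the text above) XORs a number with itself shifted right by one bit; a sketch:

```python
def binary_to_gray(n):
    """Successive integers get codes that differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the transform by XOR-folding the shifted value back in."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print([format(binary_to_gray(i), "03b") for i in range(8)])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```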

EXCESS 3 CODE
Excess-3 binary coded decimal (XS-3), also called biased representation or Excess-N, is a numeral system used on some older computers that uses a pre-specified number N as a biasing value. It is a way to represent values with a balanced number of positive and negative numbers. In XS-3, numbers are represented as decimal digits, and each digit is represented by four bits as the BCD value plus 3 (the "excess" amount):

The smallest binary number represents the smallest value (i.e. 0 - excess value). The greatest binary number represents the largest value (i.e. 2^N - excess value - 1, where N is the number of bits).

Decimal  XS-3   Decimal  XS-3   Decimal  XS-3
0        0011   4        0111   8        1011
1        0100   5        1000   9        1100
2        0101   6        1001   10       1101
3        0110   7        1010   11       1110

To encode a number such as 127, then, one simply encodes each of the decimal digits as above, giving (0100, 0101, 1010). The primary advantage of XS-3 coding over BCD coding is that a decimal number can be nines'-complemented (for subtraction) as easily as a binary number can be ones'-complemented: just invert all bits. Addition in Excess-3 works on a different algorithm than BCD coding or regular binary numbers. When you add two XS-3 numbers together, the result is not an XS-3 number; for instance, when you add 1 and 0 in XS-3 the answer seems to be 4 instead of 1. To correct this, after adding each digit you have to subtract 3 (binary 0011) if the digit sum is less than decimal 10, and add 3 if it is greater than or equal to decimal 10.
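The XS-3 encoding of each digit and the invert-all-bits nines' complement can be sketched as (names illustrative):

```python
def to_xs3(n):
    """Each decimal digit is encoded as its BCD value plus 3."""
    return " ".join(format(int(d) + 3, "04b") for d in str(n))

def nines_complement_xs3(code):
    """In XS-3 the nines' complement of a digit is plain bit inversion."""
    return (~code) & 0b1111

print(to_xs3(127))  # 0100 0101 1010
# Digit 2 is coded 0101; its nines' complement 7 is coded 1010:
print(format(nines_complement_xs3(0b0101), "04b"))  # 1010
```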

ASCII
American Standard Code for Information Interchange (ASCII) is a coding standard that can be used for interchanging information, if the information is

expressed mainly by the written form of English words. It is implemented as a character-encoding scheme based on the ordering of the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that work with text. Most modern character-encoding schemes, which support many more characters than the original did, have a historical basis in ASCII. Historically, ASCII developed from telegraphic codes. Its first commercial use was as a seven-bit teleprinter code promoted by Bell data services. Work on ASCII formally began October 6, 1960, with the first meeting of the American Standards Association's (ASA) X3.2 subcommittee. The first edition of the standard was published in 1963, a major revision in 1967, and the most recent update in 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists, and added features for devices other than teleprinters. ASCII includes definitions for 128 characters: 33 are non-printing, mostly obsolete control characters that affect how text is processed; 94 are printable characters; and the space is considered an invisible graphic. The most commonly used character encoding on the World Wide Web was US-ASCII until 2008, when it was surpassed by UTF-8. Apart from numeric data, a computer should also be able to handle and recognize codes which express the letters of the alphabet, certain special characters and punctuation marks. Such codes are called alphanumeric codes. ASCII is a 7-bit code: the letter A is coded as 1000001 (decimal 65), B as 1000010 (decimal 66), C as 1000011 (decimal 67), and so on. It has 2^7 = 128 possible code groups and can represent all the standard keyboard characters. It is also used in printers.
     0    1    2    3    4    5    6    7    8    9    A    B    C    D    E    F
0x   NUL  SOH  STX  ETX  EOT  ENQ  ACK  BEL  BS   TAB  LF   VT   FF   CR   SO   SI
1x   DLE  DC1  DC2  DC3  DC4  NAK  SYN  ETB  CAN  EM   SUB  ESC  FS   GS   RS   US
2x   SP   !    "    #    $    %    &    '    (    )    *    +    ,    -    .    /
3x   0    1    2    3    4    5    6    7    8    9    :    ;    <    =    >    ?
4x   @    A    B    C    D    E    F    G    H    I    J    K    L    M    N    O
5x   P    Q    R    S    T    U    V    W    X    Y    Z    [    \    ]    ^    _
6x   `    a    b    c    d    e    f    g    h    i    j    k    l    m    n    o
7x   p    q    r    s    t    u    v    w    x    y    z    {    |    }    ~    DEL
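The 7-bit codes quoted above can be checked directly, for example in Python:

```python
# ASCII is a 7-bit code: 'A' -> 65 -> 1000001, 'B' -> 66 -> 1000010, ...
for ch in "ABC":
    print(ch, ord(ch), format(ord(ch), "07b"))
# A 65 1000001
# B 66 1000010
# C 67 1000011
```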

The EBCDIC Code


Extended Binary Coded Decimal Interchange Code (EBCDIC) is an 8-bit character encoding (code page) used on IBM mainframe operating systems such as z/OS, OS/390, VM and VSE, as well as IBM midrange computer operating systems such as OS/400 and i5/OS. It is also employed on various non-IBM platforms such as Fujitsu-Siemens' BS2000/OSD, HP MPE/iX, and Unisys MCP. It descended from punched cards and the corresponding six-bit binary-coded decimal code that most of IBM's computer peripherals of the late 1950s and early 1960s used.

[The full EBCDIC code table (Dec/Hex/Code columns for code points 00-FF) did not survive extraction. The key code-point ranges it listed are:]

Characters      Hex codes
control codes   00-3F (NUL, SOH, STX, ETX, HT, LF, VT, FF, CR, SO, SI, DLE, ...)
space           40
punctuation     4A-7F (. < ( + | & ! $ * ) ; , % _ > ? : # @ ' = ")
a-i             81-89
j-r             91-99
s-z             A2-A9
A-I             C1-C9
J-R             D1-D9
S-Z             E2-E9
0-9             F0-F9

Note that, unlike ASCII, the EBCDIC alphabet is not contiguous: there are gaps between i/j and r/s in both the lowercase and uppercase ranges.

BOOLEAN ALGEBRA
Due to historical reasons, digital circuits are called switching circuits, digital circuit functions are called switching functions, and the algebra is called switching algebra. The algebraic system known as Boolean algebra is named after the mathematician George Boole, who invented this two-valued discrete algebra (1854); E. V. Huntington developed its postulates and theorems (1904). Historically, the theory of switching networks (or systems) is credited to Claude Shannon, who applied mathematical logic to describe relay circuits (1938). Relays are electromechanically controlled switches; they have since been replaced by electronically controlled switches called logic gates. A special case of Boolean algebra known as switching algebra is a useful mathematical model for describing combinational circuits. In this section we will briefly discuss how Boolean algebra is applied to the design of digital systems.

POSTULATES
Identity:   a . 1 = a        a + 0 = a

Distributive:   a . (b + c) = a.b + a.c        a + (b . c) = (a + b) . (a + c)

Complement:   a . a' = 0        a + a' = 1

Note that for each property, one form is the dual of the other; (zeros to ones, ones to zeros, '.' operations to '+' operations, '+' operations to '.' operations).

From the above postulates the following theorems can be derived.
Associative:   (a . b) . c = a . (b . c)        (a + b) + c = a + (b + c)

Idempotence:   a . a = a        a + a = a

Absorption:   a + a.b = a        a . (a + b) = a

Simplification:   a + a'.b = a + b        a . (a' + b) = a.b

Consensus:   a.b + a'.c + b.c = a.b + a'.c        (a + b) . (a' + c) . (b + c) = (a + b) . (a' + c)

Adjacency:   a.b + a.b' = a        (a + b) . (a + b') = a

De Morgan's Laws:   (a . b)' = a' + b'        (a + b)' = a' . b'

In general form:   (x1 . x2 . ... . xn)' = x1' + x2' + ... + xn'        (x1 + x2 + ... + xn)' = x1' . x2' . ... . xn'

These laws are very useful for complementing function expressions; for example, [a . (b + c')]' = a' + b'.c.

Duality principle (a property of Boolean algebra): Every algebraic expression deducible from the postulates of Boolean algebra remains valid if the operators and identity elements are interchanged. That is, if the dual of an algebraic expression is desired, we simply interchange the + and . operators and replace 1's by 0's and 0's by 1's.
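De Morgan's laws and their duals can be verified by brute force over all input combinations; a small sketch (the function name is my own):

```python
from itertools import product

def demorgan_holds(n=3):
    """Check (x1.x2...xn)' == x1' + x2' + ... + xn' for every 0/1 input."""
    for bits in product([0, 1], repeat=n):
        lhs = 1 - min(bits)             # complement of the AND of all inputs
        rhs = max(1 - b for b in bits)  # OR of the complemented inputs
        if lhs != rhs:
            return False
    return True

print(demorgan_holds())  # True
```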

SUM OF PRODUCTS AND PRODUCT OF SUMS


In Boolean algebra, any Boolean function can be expressed in a canonical form using the dual concepts of minterms and maxterms. All logical functions are expressible in canonical form, both as a "sum of minterms" and as a "product of maxterms". This allows for greater analysis into the simplification of these functions, which is of great importance in the minimization of digital circuits. A Boolean function expressed as a disjunction (OR) of minterms is commonly known as a "sum of products" or "SOP". Thus it is a disjunctive normal form in which only minterms are allowed as summands. Its De Morgan dual is a "product of sums" or "POS" , which is a function expressed as a conjunction (AND) of maxterms. A product of sums is a special conjunctive normal form.

Minterms
For a Boolean function of n variables, a product term in which each of the n variables appears once (either complemented or uncomplemented) is called a minterm. Thus, a minterm is a logical expression of n variables consisting of only the logical conjunction operator and the complement operator. For example, abc, ab'c and abc' are minterms for a Boolean function of the three variables a, b and c. There are 2^n minterms of n variables; this is true since each variable in the minterm expression can appear either as itself or as its complement, two choices per variable.
Indexing minterms
In general, one assigns each minterm (ensuring the variables are written in the same order, usually alphabetic) an index based on the binary value of the minterm. A complemented term, like a', is considered a binary 0 and a non-complemented term like a

is considered a binary 1. For example, one would associate the number 6 with abc' (binary 110), and write the minterm expression as m6. So m0 of three variables is a'b'c' (binary 000) and m7 would be abc (binary 111).
Functional equivalence
Minterm n gives a true value for the (n+1)th unique function input of that logical function. For example, minterm 5, ab'c, is true only when a and c are both true and b is false: the input a = 1, b = 0, c = 1 results in 1. If one is given the truth table of a logical function, it is possible to write the function as a "sum of products"; this is a special form of disjunctive normal form. For example, given the truth table

a b | f(a, b)
0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 1

we observe that the rows that have an output of 1 are the first and fourth, so we can write f as a sum of the minterms m0 and m3. If we wish to verify this, take f(a,b) = m0 + m3 = (a'b') + (ab); the truth table for this function, by direct computation, is the same.
Maxterms
A maxterm is a logical expression of n variables consisting of only the logical disjunction operator and the complement operator. Maxterms are the dual of the minterm idea: instead of using ANDs and complements, we use ORs and complements, and proceed similarly. For example, the following are maxterms: a + b' + c and a' + b + c. There are again 2^n maxterms of n variables, since each variable in the maxterm expression can also appear either as itself or as its complement, two choices per variable.
Dualization
The complement of a minterm is the respective maxterm. This can easily be verified by using De Morgan's law. For example (with two variables), m1' = M1: (a'b)' = a + b'.

Indexing maxterms
Indexing maxterms is done in the opposite way to minterms: one assigns each maxterm an index based on the positions of its complements (again ensuring the variables are written in the same order, usually alphabetic). For example, one assigns M6 (maxterm 6) to the maxterm a' + b' + c. Similarly, M0 of these three variables is a + b + c and M7 is a' + b' + c'.
Functional equivalence
Maxterm n gives a false value for the (n+1)th unique function input of that logical function. For example, maxterm 5, a' + b + c', is false only when a and c are both true and b is false: the input a = 1, b = 0, c = 1 results in 0. If one is given the truth table of a logical function, it is possible to write the function as a "product of sums"; this is a special form of conjunctive normal form. For example, given the truth table

a b | f(a, b)
0 0 | 1
0 1 | 0
1 0 | 1
1 1 | 0

we observe that the rows that have an output of 0 are the second and fourth, so we can write f as a product of the maxterms M1 and M3. If we wish to verify this, take f(a,b) = M1 . M3 = (a + b')(a' + b'); the truth table for this function, by direct computation, is the same.
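Reading a sum-of-products expression off a truth table, as done above, can be sketched as (illustrative helper):

```python
def sop_from_truth_table(outputs, names="ab"):
    """Join with OR the minterms whose truth-table row outputs 1."""
    n = len(names)
    terms = []
    for idx, out in enumerate(outputs):
        if out:
            bits = format(idx, "0{}b".format(n))
            terms.append("".join(v + ("" if b == "1" else "'")
                                 for v, b in zip(names, bits)))
    return " + ".join(terms)

# Outputs 1,0,0,1 for rows 00,01,10,11 of f(a,b): minterms m0 and m3.
print(sop_from_truth_table([1, 0, 0, 1]))  # a'b' + ab
```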

LOGIC EXPRESSION MINIMIZATION


The goal of logic expression minimization is to find an equivalent of an original logic expression that has fewer variables per term, has fewer terms, and needs less logic to implement. There are three main manual methods for logic expression minimization: algebraic minimization, Karnaugh map minimization, and Quine-McCluskey (tabular) minimization.

Algebraic minimization

The algebraic minimization process is the application of the switching algebra postulates, laws, and theorems to transform the original expression. It is hard to recognize when a particular law can be applied, and difficult to know whether the resulting expression is truly minimal; an incorrect manipulation or a dropped variable can easily lead to a mistake. The following are two examples of the algebraic minimization process that exploit the adjacency theorem, a.b + a.b' = a. (The worked expressions of the original figures were lost; these are illustrative replacements.)

Look for two terms that are identical except for one variable, for example

f = a.b'.c + a.b.c

Applying adjacency removes one term and the differing variable from the remaining term:

f = a.c

In a longer expression one can look for several adjacencies, for example

f = a'.b'.c + a'.b.c + a.b.c

The first and second terms differ only in b, and the second and third terms differ only in a. Since a term may be reused (idempotence: x + x = x), duplicate the second term and rearrange, then apply adjacency on the two term pairs:

f = (a'.b'.c + a'.b.c) + (a'.b.c + a.b.c) = a'.c + b.c

KARNAUGH MAP (K-MAP)


The Karnaugh map provides a systematic method for simplifying a Boolean expression or a truth table function. The K-map can produce the simplest SOP or POS expression possible. The K-map procedure is actually an application of adjacency and, applied carefully, guarantees a minimal expression. It is easy to use, visual, and fast, and familiarity with all the Boolean laws is not required.

The K-map is a table consisting of N = 2^n cells, where n is the number of input variables. Assuming the input variables are A and B, the K-map illustrating the four possible variable combinations is shown.

Figure: Two variable K map Similarly three variable and four variable K-maps can be constructed as shown below

Figure: Three variable and four variable K maps Numerical Assignment of Karnaugh Map Cells Each cell of a Karnaugh map has a unique binary value. With reference to the Forms and Definitions of Boolean Expressions it is obvious that each value can be converted to the equivalent decimal value. This can be useful when entering functions expressed in numerical form, in the map.

So far we have seen that applying Boolean algebra to simplify expressions can be awkward. Apart from being laborious (and requiring all the laws to be remembered), the method can lead to solutions which, though they appear minimal, are not. The Karnaugh map provides a simple and straightforward method of minimizing Boolean expressions. With the Karnaugh map, Boolean expressions having up to four and even six variables can be simplified. A Karnaugh map provides a pictorial method of grouping together expressions with common factors, thereby eliminating unwanted variables. The Karnaugh map can also be described as a special arrangement of a truth table. The diagram below illustrates the correspondence between the Karnaugh map and the truth table for the general case of a two-variable problem.

The values inside the squares are copied from the output column of the truth table, so there is one square in the map for every row in the truth table. Around the edge of the Karnaugh map are the values of the two input variables: A is along the top and B is down the left-hand side. The diagram below explains this:

The values around the edge of the map can be thought of as coordinates. So as an example, the square on the top right hand corner of the map in the above diagram has coordinates A=1 and B=0. This square corresponds to the row in the truth table where A=1 and B=0 and F=1. Note that the value in the F column represents a particular function to which the Karnaugh map corresponds.

Example:

By using the rules of simplification and ringing of adjacent cells in order to make as many variables redundant, the minimized result obtained is B.

By using the rules of simplification and ringing adjacent cells in order to make as many variables as possible redundant, the minimised result obtained is B + AC. In the case of a 3-input Karnaugh map, any two horizontally or vertically adjacent minterms, each composed of three variables, can be combined to form a new product term composed of only two variables. Similarly, in the case of a 4-input map, any two adjacent minterms, each composed of four variables, can be combined to form a new product term composed of only three variables. Additionally, the 1s associated with the minterms can be used to form multiple groups. For example, consider a new 3-input function.

Figure: Karnaugh map minterms can be used to form multiple groups. Groupings can also be formed from four adjacent minterms, in which case two redundant variables can be discarded; consider some 4-input Karnaugh map examples. In fact, any group of 2^n adjacent minterms can be gathered together, where n is a positive integer: 2^1 = two minterms, 2^2 = four minterms, 2^3 = eight minterms, and so forth.

Figure: Some example Karnaugh map groupings of four adjacent minterms. As was noted above, Karnaugh map input values are ordered so that the values associated with adjacent rows and columns differ by only a single bit. One result of this ordering is that the top and bottom rows are also only separated by a single bit; similarly, the left and right columns are only separated by a single bit. It may help to visualize the map rolled into a horizontal cylinder such that the top and bottom edges are touching, or into a vertical cylinder such that the left and right edges are touching. This leads to some additional grouping possibilities.

Figure: Some additional Karnaugh map grouping possibilities. Note especially the last example. Diagonally adjacent minterms generally cannot be used to form a group; however, remembering that the left-right columns and the top-bottom rows are logically adjacent, the four corner minterms can be used to form a single group. When a Karnaugh map is populated using the 1s assigned to the truth table's output, the resulting Boolean expression is extracted from the map in sum-of-products form. As an alternative, the Karnaugh map can be populated using the 0s assigned to the truth table's output. In this case, groupings of 0s are used to generate expressions in product-of-sums format.

Although the sum-of-products and product-of-sums expressions appear to be somewhat different, they do produce identical results. The expressions can be shown to be equivalent using algebraic means, or by constructing truth tables for each expression and comparing the outputs.
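This equivalence can be checked mechanically by comparing truth tables. The sketch below (illustrative Python, not part of the original text) uses the minimized function B + A·C from the example above, together with a product-of-sums form (A + B)(B + C) derived for it algebraically; the function names are our own.

```python
from itertools import product

def sop(a, b, c):
    # Minimized sum-of-products form from the example: F = B + A.C
    return b | (a & c)

def pos(a, b, c):
    # An algebraically equivalent product-of-sums form: F = (A + B).(B + C)
    return (a | b) & (b | c)

# The two forms agree on every row of the 8-row truth table.
for a, b, c in product((0, 1), repeat=3):
    assert sop(a, b, c) == pos(a, b, c)
print("SOP and POS forms are equivalent on all 8 rows")
```

Comparing truth tables in this way is exactly the "constructing truth tables for each expression" check described above, just automated.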

MODULE 2
Combinational Logic Circuits:- Review of Basic Gates, Adders, Subtractors, Serial Adder, Parallel Adder, Carry Propagate Adder, Carry Lookahead Adder, Carry Save Adder, Comparators, Parity Generators, Decoder and Encoder, Multiplexer and Demultiplexer, PLA and PAL

LOGIC GATES
A logic gate performs a logical operation on one or more logic inputs and produces a single logic output. Because the output is also a logic-level value, the output of one logic gate can connect to the input of one or more other logic gates. The logic normally performed is Boolean logic and is most commonly found in digital circuits. Logic gates are primarily implemented electronically using diodes or transistors, but can also be constructed using electromagnetic relays, fluidics, optics, molecules, or even mechanical elements. In electronic logic, a logic level is represented by a voltage or current (which depends on the type of electronic logic in use). Each logic gate requires power so that it can source and sink currents to achieve the correct output voltage. In logic circuit diagrams the power is not shown, but in a full electronic schematic, power connections are required.

Truth table
A truth table is a table that describes the behaviour of a logic gate. It lists the value of the output for every possible combination of the inputs and can be used to simplify the number of logic gates and level of nesting in an electronic circuit. In general the truth table does not lead to an efficient implementation; a minimization procedure, using Karnaugh maps, the Quine-McCluskey algorithm or a heuristic algorithm, is required for reducing the circuit complexity. NAND and NOR logic gates are the two pillars of logic, in that all other types of Boolean logic gates (i.e., AND, OR, NOT, XOR, XNOR) can be created from a suitable network of just NAND or just NOR gate(s). They can be built from relays or transistors, or any other technology that can create an inverter and a two-input AND or OR gate. Hence the NAND and NOR gates are called the universal gates. For an input of 2 variables, there are 16 possible Boolean algebraic functions. These 16 functions are enumerated below, together with their outputs for each combination of input variables.
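The count of 16 functions can be verified by enumeration: each 2-input Boolean function is completely determined by its 4-row output column, giving 2^4 = 16 possibilities. A small illustrative sketch (the variable names are our own):

```python
from itertools import product

inputs = list(product((0, 1), repeat=2))   # rows in the order 00, 01, 10, 11

# Each 2-input function is fixed by its 4-bit output column, so
# enumerating the integers 0..15 enumerates all 16 functions.
columns = [tuple((n >> i) & 1 for i in range(4)) for n in range(16)]

print(len(columns))                  # 16 distinct functions
print(columns.index((0, 0, 0, 1)))  # the AND column (output 1 only for 11) -> 8
```

With this bit ordering the AND function's output column (0, 0, 0, 1) appears exactly once, as do all the other familiar gates.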

NOT gate (inverter)


The output Q is true when the input A is NOT true; the output is the inverse of the input: Q = NOT A. A NOT gate can have only one input. A NOT gate is also called an inverter.

Input A   Output Q
   0         1
   1         0

AND gate
The output Q is true if input A AND input B are both true: Q = A AND B. An AND gate can have two or more inputs; its output is true if all inputs are true.

Input A   Input B   Output Q
   0         0         0
   0         1         0
   1         0         0
   1         1         1

NAND gate (NAND = Not AND)


This is an AND gate with the output inverted, as shown by the 'o' on the output. The output is true if inputs A AND B are NOT both true: Q = NOT (A AND B). A NAND gate can have two or more inputs; its output is true if NOT all inputs are true.

Input A   Input B   Output Q
   0         0         1
   0         1         1
   1         0         1
   1         1         0

OR gate
The output Q is true if input A OR input B is true (or both of them are true): Q = A OR B. An OR gate can have two or more inputs; its output is true if at least one input is true.

Input A   Input B   Output Q
   0         0         0
   0         1         1
   1         0         1
   1         1         1

NOR gate (NOR = Not OR)


This is an OR gate with the output inverted, as shown by the 'o' on the output. The output Q is true if neither input A nor input B is true: Q = NOT (A OR B). A NOR gate can have two or more inputs; its output is true if no inputs are true.

Input A   Input B   Output Q
   0         0         1
   0         1         0
   1         0         0
   1         1         0

EX-OR (EXclusive-OR) gate


The output Q is true if either input A is true OR input B is true, but not when both of them are true: Q = (A AND NOT B) OR (B AND NOT A). This is like an OR gate but excludes the case where both inputs are true. The output is true if inputs A and B are DIFFERENT.

EX-OR gates can only have 2 inputs.

Input A   Input B   Output Q
   0         0         0
   0         1         1
   1         0         1
   1         1         0

EX-NOR (EXclusive-NOR) gate


This is an EX-OR gate with the output inverted, as shown by the 'o' on the output. The output Q is true if inputs A and B are the SAME (both true or both false): Q = (A AND B) OR (NOT A AND NOT B). EX-NOR gates can only have 2 inputs.

Input A   Input B   Output Q
   0         0         1
   0         1         0
   1         0         0
   1         1         1
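The gate descriptions above translate directly into two-valued functions. The following sketch (illustrative Python; the dictionary layout and function names are our own) reproduces each truth table, with rows in the order 00, 01, 10, 11:

```python
# One-input NOT gate.
def NOT(a):
    return 1 - a

# Two-input gates, modeled on 0/1 values.
gates = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: NOT(a & b),
    "NOR":  lambda a, b: NOT(a | b),
    "XOR":  lambda a, b: a ^ b,
    "XNOR": lambda a, b: NOT(a ^ b),
}

def truth_table(gate):
    # Rows in the order 00, 01, 10, 11, matching the tables above.
    return [gate(a, b) for a in (0, 1) for b in (0, 1)]

for name, gate in gates.items():
    print(name, truth_table(gate))
```

Each printed list is exactly the Output Q column of the corresponding table above.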

UNIVERSAL GATES
Why are NAND and NOR gates called universal gates? They are called universal gates because all of the other gates may be constructed using only those two gates. That is important because, in practice, it is much cheaper to make many copies of one kind of gate than a mixture of different kinds. All other gates/functions can be implemented by NOR or NAND gates alone, so they are called universal gates. In fact, in chips, entire logic blocks may be built using only NAND (or NOR) gates.

Implementing OR using NOR gates

Implementing AND using NOR gates

Implementing an inverter using NOR gate

Realization of logic gates using NAND gates

Any logic function can be implemented using NAND gates. To achieve this, the logic function first has to be written in Sum of Products (SOP) form. Once the logic function is converted to SOP form, it is very easy to implement using NAND gates.

Implementing an inverter using NAND gate

Implementing AND using NAND gates

Implementing OR using NAND gates

Consider a logic circuit with the expression: F = W.X.Y + X.Y.Z + Y.Z.W. The above expression can be implemented with three AND gates in the first stage and one OR gate in the second stage, as shown in the figure.

If bubbles are introduced at the AND gates' outputs and at the OR gate's inputs (the same technique applies for NOR gates), the above circuit becomes as shown in the figure.

Now replace the OR gate with bubbled inputs by a NAND gate. We now have a circuit that is implemented entirely with NAND gates.
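The bubble-pushing argument can be confirmed by brute force. The sketch below (illustrative Python; function names are our own) compares the two-level AND-OR form of F = W.X.Y + X.Y.Z + Y.Z.W against the NAND-only network over all sixteen input combinations:

```python
from itertools import product

def nand(*xs):
    # A NAND gate outputs 0 only when every input is 1.
    return 0 if all(xs) else 1

def f_sop(w, x, y, z):
    # Two-level AND-OR form: F = W.X.Y + X.Y.Z + Y.Z.W
    return (w & x & y) | (x & y & z) | (y & z & w)

def f_nand(w, x, y, z):
    # The same function using only NAND gates: first-level NANDs replace
    # the AND gates, and a final NAND replaces the OR gate (the bubbles
    # introduced on each internal wire cancel in pairs).
    return nand(nand(w, x, y), nand(x, y, z), nand(y, z, w))

for v in product((0, 1), repeat=4):
    assert f_sop(*v) == f_nand(*v)
print("NAND-only network matches the AND-OR network on all 16 inputs")
```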

COMBINATIONAL LOGIC CIRCUITS
ADDERS


In electronics, an adder or summer is a digital circuit that performs addition of numbers. In modern computers adders reside in the arithmetic logic unit (ALU), where other operations are performed. Although adders can be constructed for many numerical representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or ones' complement is being used to represent negative numbers, it is trivial to modify an adder into an adder-subtractor. Other signed number representations require a more complex adder.

Types of adders: for single-bit adders, there are two general types, the half adder and the full adder.
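A ripple-carry adder chains full adders, each of which can be built from two half adders. The sketch below (illustrative Python; function names are our own) shows both building blocks:

```python
def half_adder(a, b):
    # Sum is XOR of the inputs, carry is AND.
    return a ^ b, a & b

def full_adder(a, b, cin):
    # Built from two half adders plus an OR for the two carry signals.
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2

def ripple_add(x, y, bits=4):
    # Chain full adders; each stage's carry-out feeds the next stage.
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry

print(ripple_add(0b1001, 0b0011))  # 9 + 3 = 12 -> (12, 0)
```

The final carry-out doubles as an overflow indicator for unsigned 4-bit addition.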

MODULE 4
Counters and Shift Registers:- Design of Synchronous and Asynchronous Counters: Binary, BCD, Decade and Up/Down Counters, Shift Registers, Types of Shift Registers, Counters using Shift Registers, Ring Counter and Johnson Counter

Shift Register
Shift registers are a type of sequential logic circuit, used mainly for storage of digital data. They are a group of flip-flops connected in a chain so that the output from one flip-flop becomes the input of the next flip-flop. Most registers possess no characteristic internal sequence of states. All the flip-flops are driven by a common clock, and all are set or reset simultaneously. In this chapter, the basic types of shift registers are studied: Serial In - Serial Out, Serial In - Parallel Out, Parallel In - Serial Out, Parallel In - Parallel Out, and bidirectional shift registers. A special form of counter, the shift register counter, is also introduced.

Serial In - Serial Out Shift Registers
A basic four-bit shift register can be constructed using four D flip-flops, as shown below. The operation of the circuit is as follows. The register is first cleared, forcing all four outputs to zero. The input data is then applied sequentially to the D input of the first flip-flop on the left (FF0). During each clock pulse, one bit is transmitted from left to right. Assume the data word to be 1001. The least significant bit of the data has to be shifted through the register from FF0 to FF3.

In order to get the data out of the register, they must be shifted out serially. This can be done destructively or non-destructively. For destructive readout, the original data is lost and at the end of the read cycle, all flip-flops are reset to zero.
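The shifting behavior described above can be sketched in software (illustrative Python; names are our own): the register is a list of flip-flop outputs, and each clock edge moves every bit one stage to the right while the serial input enters at FF0.

```python
def shift_register(bits_in, stages=4):
    # Model a serial in - serial out register as a list of D flip-flop
    # outputs [FF0, FF1, FF2, FF3]; on each clock edge, every bit moves
    # one stage to the right and the new serial bit enters at FF0.
    reg = [0] * stages          # register is first cleared
    history = []
    for bit in bits_in:
        reg = [bit] + reg[:-1]  # simultaneous shift on the clock edge
        history.append(list(reg))
    return history

# Clock the word 1001 in, least significant bit first, as in the text.
for state in shift_register([1, 0, 0, 1]):
    print(state)
```

After four clock pulses the whole word is held in the register; four more pulses would shift it out serially (destructive readout).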

To avoid the loss of data, an arrangement for a non-destructive reading can be done by adding two AND gates, an OR gate and an inverter to the system. The construction of this circuit is shown below.

The data is loaded into the register when the control line is HIGH (i.e. WRITE). The data can be shifted out of the register when the control line is LOW (i.e. READ).

Serial In - Parallel Out Shift Registers
For this kind of register, data bits are entered serially in the same manner as discussed in the last section. The difference is the way in which the data bits are taken out of the register. Once the data are stored, each bit appears on its respective output line, and all bits are available simultaneously. A construction of a four-bit serial in - parallel out register is shown below.

Parallel In - Serial Out Shift Registers
A four-bit parallel in - serial out shift register is shown below. The circuit uses D flip-flops and NAND gates for entering data (i.e. writing) to the register.

D0, D1, D2 and D3 are the parallel inputs, where D0 is the most significant bit and D3 is the least significant bit. To write data in, the mode control line is taken LOW and the data is clocked in. The data can be shifted when the mode control line is HIGH, as SHIFT is active high.

Parallel In - Parallel Out Shift Registers
For parallel in - parallel out shift registers, all data bits appear on the parallel outputs immediately following the simultaneous entry of the data bits. The following circuit is a four-bit parallel in - parallel out shift register constructed from D flip-flops.

The D's are the parallel inputs and the Q's are the parallel outputs. Once the register is clocked, all the data at the D inputs appear at the corresponding Q outputs simultaneously.

Bidirectional Shift Registers
The registers discussed so far involve only right-shift operations. Each right-shift operation has the effect of successively dividing the binary number by two. If the operation is reversed (left shift), this has the effect of multiplying the number by two. With a suitable gating arrangement a serial shift register can perform both operations. A bidirectional, or reversible, shift register is one in which the data can be shifted either left or right. A four-bit bidirectional shift register using D flip-flops is shown below.

Here a set of NAND gates is configured as OR gates to select data inputs from the right or left adjacent bistables, as selected by the LEFT/RIGHT control line.

Shift Register Counters
Two of the most common types of shift register counters are introduced here: the ring counter and the Johnson counter. They are basically shift registers with the serial outputs connected back to the serial inputs in order to produce particular sequences. These registers are classified as counters because they exhibit a specified sequence of states.

Ring Counters
A ring counter is basically a circulating shift register in which the output of the most significant stage is fed back to the input of the least significant stage. The following is a 4-bit ring counter constructed from D flip-flops. The output of each stage is shifted into the next stage on the positive edge of a clock pulse. If the CLEAR signal is high, all the flip-flops except the first one, FF0, are reset to 0; FF0 is preset to 1 instead.

Since the count sequence has 4 distinct states, the counter can be considered a mod-4 counter. Only 4 of the maximum 16 states are used, making ring counters very inefficient in terms of state usage. But the major advantage of a ring counter over a binary counter is that it is self-decoding: no extra decoding circuit is needed to determine what state the counter is in.
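A quick simulation makes the mod-4 behavior concrete (illustrative Python; names are our own): starting from the preset state 1000, the single 1 circulates and the pattern repeats every four clock pulses.

```python
def ring_counter(cycles, stages=4):
    # Initialize to 1000 (FF0 preset to 1, the rest cleared), then feed
    # the last stage's output back to the first on each clock pulse.
    state = [1] + [0] * (stages - 1)
    sequence = [list(state)]
    for _ in range(cycles):
        state = [state[-1]] + state[:-1]
        sequence.append(list(state))
    return sequence

for s in ring_counter(4):
    print(s)   # 1000 -> 0100 -> 0010 -> 0001 -> 1000
```

Note that the state itself is the decoded count: exactly one output line is high per state, which is the self-decoding property mentioned above.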

Johnson Counters

Johnson counters are a variation of standard ring counters, with the inverted output of the last stage fed back to the input of the first stage. They are also known as twisted ring counters. An n-stage Johnson counter yields a count sequence of length 2n, so it may be considered to be a mod-2n counter. The circuit above shows a 4-bit Johnson counter. The state sequence for the counter is given in the accompanying table.
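The 2n-state sequence can be confirmed with a short simulation (illustrative Python; names are our own): starting cleared, a 4-stage Johnson counter walks through exactly 8 distinct states before repeating.

```python
def johnson_counter(stages=4):
    # Start cleared; feed the INVERTED last-stage output back to stage 0.
    state = (0,) * stages
    sequence = []
    while state not in sequence:          # stop when the cycle closes
        sequence.append(state)
        state = (1 - state[-1],) + state[:-1]
    return sequence

seq = johnson_counter(4)
print(len(seq))   # 8 states: a mod-2n counter for n = 4
for s in seq:
    print(s)      # 0000, 1000, 1100, 1110, 1111, 0111, 0011, 0001
```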

Again, the apparent disadvantage of this counter is that the maximum available states are not fully utilized: only eight of the sixteen states are being used. Beware that both the ring counter and the Johnson counter must initially be forced into a valid state in the count sequence, because they operate on a subset of the available number of states. Otherwise, the ideal sequence will not be followed.

Applications
Shift registers can be found in many applications. Here is a list of a few.

To produce a time delay: The serial in - serial out shift register can be used as a time delay device. The amount of delay can be controlled by:
1. the number of stages in the register
2. the clock frequency

To simplify combinational logic: The ring counter technique can be effectively utilized to implement synchronous sequential circuits. A major problem in the realization of sequential circuits is the assignment of binary codes to the internal states of the circuit in order to reduce the complexity of the circuits required. By assigning one flip-flop to one internal state, it is possible to simplify the combinational logic required to realize the complete sequential circuit. When the circuit is in a particular state, the flip-flop corresponding to that state is set HIGH and all other flip-flops remain LOW.

To convert serial data to parallel data: A computer or microprocessor-based system commonly requires incoming data to be in parallel format. But frequently, these systems must communicate with external devices that send or receive serial data, so serial-to-parallel conversion is required. As shown in the previous sections, a serial in - parallel out register can achieve this.

Synchronous counter
A synchronous counter, in contrast to an asynchronous counter, is one whose output bits change state simultaneously, with no ripple. The only way we can build such a counter circuit from J-K flip-flops is to connect all the clock inputs together, so that each and every flip-flop receives the exact same clock pulse at the exact same time.

A 4-bit synchronous counter using J-K flip-flops. Where a stable count value is important across several bits, which is the case in most counter systems, synchronous counters are used. These also use flip-flops, either the D-type or the more complex J-K type, but here, each stage is clocked simultaneously by a common clock signal. Logic gates between each stage of the circuit control data flow from stage to stage so that the desired count behavior is realized. Synchronous counters can be designed to count up or down, or both according to a direction input, and may be presettable via a set of parallel "jam" inputs. Most types of hardware-based counter are of this type. A simple way of implementing the logic for each bit of an ascending counter is for each bit to toggle when all of the less significant bits are at a logic high state. For example, bit 1 toggles when bit 0 is logic high; bit 2 toggles when both bit 1 and bit 0 are logic high; bit 3 toggles when bit 2, bit 1 and bit 0 are all high; and so on.

Now, the question is: what do we do with the J and K inputs? We know that we still have to maintain the same divide-by-two frequency pattern in order to count in a binary sequence, and that this pattern is best achieved using the "toggle" mode of the flip-flop, so the fact that the J and K inputs must both (at times) be "high" is clear. However, if we simply connected all the J and K inputs to the positive rail of the power supply as we did in the asynchronous circuit, this would clearly not work, because all the flip-flops would toggle at the same time: with each and every clock pulse!

Examining the four-bit binary count sequence, another predictive pattern can be seen. Notice that just before a bit toggles, all preceding bits are "high:"

This pattern is also something we can exploit in designing a counter circuit. If we enable each J-K flip-flop to toggle based on whether or not all preceding flip-flop outputs (Q) are "high," we can obtain the same counting sequence as the asynchronous circuit without the ripple effect, since each flip-flop in this circuit will be clocked at exactly the same time:

The result is a four-bit synchronous "up" counter. Each of the higher-order flip-flops is made ready to toggle (both J and K inputs "high") if the Q outputs of all previous flip-flops are "high." Otherwise, the J and K inputs for that flip-flop will both be "low," placing it into the "latch" mode where it will maintain its present output state at the next clock pulse. Since the first (LSB) flip-flop needs to toggle at every clock pulse, its J and K inputs are connected to Vcc or Vdd, where they will be "high" all the time. The next flip-flop need only "recognize" that the first flip-flop's Q output is high to be made ready to toggle, so no AND gate is needed. However, the remaining flip-flops should be made ready to toggle only when all lower-order output bits are "high," thus the need for AND gates. To make a synchronous "down" counter, we need to build the circuit to recognize the appropriate bit patterns predicting each toggle state while counting down. Not surprisingly, when we examine the four-bit binary count sequence, we see that all preceding bits are "low" prior to a toggle (following the sequence from bottom to top). Since each J-K flip-flop comes equipped with a Q' output as well as a Q output, we can use the Q' outputs to enable the toggle mode on each succeeding flip-flop, each Q' being "high" every time that the respective Q is "low:"

Taking this idea one step further, we can build a counter circuit selectable between "up" and "down" count modes by having dual lines of AND gates detecting the appropriate bit conditions for an "up" and a "down" counting sequence, respectively, then using OR gates to combine the AND gate outputs to the J and K inputs of each succeeding flip-flop:

This circuit isn't as complex as it might first appear. The Up/Down control input line simply enables either the upper string or lower string of AND gates to pass the Q/Q' outputs to the succeeding stages of flip-flops. If the Up/Down control line is "high," the top AND gates become enabled, and the circuit functions exactly the same as the first ("up") synchronous counter circuit shown in this section. If the Up/Down control line is made "low," the bottom AND gates become enabled, and the circuit functions identically to the second ("down" counter) circuit shown in this section.
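The toggle-enable rule used by the synchronous "up" counter — each stage toggles exactly when all lower-order Q outputs are high — can be sketched as follows (illustrative Python; names are our own). Because every toggle decision is computed from the state *before* the clock edge, all bits change together, mirroring the ripple-free behavior described above.

```python
def sync_up_counter(bits=4, pulses=16):
    # q[0] is the LSB. On each clock edge, stage i toggles iff all
    # lower-order outputs q[0..i-1] are 1 (for the LSB, the condition
    # is vacuously true, so it toggles on every pulse).
    q = [0] * bits
    counts = []
    for _ in range(pulses):
        toggle = [all(q[:i]) for i in range(bits)]  # decided before the edge
        q = [qi ^ int(t) for qi, t in zip(q, toggle)]
        counts.append(sum(qi << i for i, qi in enumerate(q)))
    return counts

print(sync_up_counter(pulses=6))  # [1, 2, 3, 4, 5, 6]
```

A "down" counter is the same sketch with the enable condition tested on the complemented outputs, matching the Q'-based gating in the text.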

MODULE 3
Sequential Logic Circuits:- Latches and Flip-flops - SR, JK, D, T and MS Flip-flops, Asynchronous Inputs. Clocked Sequential Circuits:- State Tables, State Equations and State Diagrams, State Reduction and State Assignment, Design of Clocked Sequential Circuits using State Equations

The Clocked RS Flip-Flop
The clocked RS flip-flop is like an SR flip-flop but with an extra, third input: a standard clock pulse CLK. The logic symbol for this flip-flop is shown below

and one equivalent implementation using NAND gates is illustrated here

Bearing in mind that the NAND implementation of an SRFF is driven by 0s, it can be seen that the extra two NAND gates in front of the standard SRFF circuitry mean that the circuit will function as a usual SRFF when S or R is 1 and the clock pulse is also 1 ("high"). Therefore this flip-flop is synchronous. Specifically, a 0 to 1 transition on either of the inputs S or R will only be seen at the output if the clock is 1. An example timing diagram is given below.

The D Flip-Flop
The D flip-flop tracks the input, making transitions which match those of the input D. The D stands for "data"; this flip-flop stores the value that is on the data line. It can be thought of as a basic memory cell. A D flip-flop can be made from a set-reset flip-flop by tying the set input to the reset input through an inverter. The result may be clocked.

The J-K Flip-Flop
The J-K flip-flop is the most versatile of the basic flip-flops. It has the input-following character of the clocked D flip-flop but has two inputs, traditionally labeled J and K. If J and K are different, then the output Q takes the value of J at the next clock edge. If J and K are both low, then no change occurs. If J and K are both high at the clock edge, then the output will toggle from one state to the other. It can perform the functions of the set/reset flip-flop and has the advantage that there are no ambiguous states. It can also act as a T flip-flop to accomplish toggling action if J and K are tied together. This toggle application finds extensive use in binary counters.

The JK flip-flop augments the behavior of the SR flip-flop (J = Set, K = Reset) by interpreting the S = R = 1 condition as a "flip" or toggle command. Specifically, the combination J = 1, K = 0 is a command to set the flip-flop; the combination J = 0, K = 1 is a command to reset the flip-flop; and the combination J = K = 1 is a command to toggle the flip-flop, i.e., change its output to the logical complement of its current value. Setting J = K = 0 does NOT result in a D flip-flop, but rather will hold the current state. To synthesize a D flip-flop, simply set K equal to the complement of J. The JK flip-flop is therefore a universal flip-flop, because it can be

configured to work as an SR flip-flop, a D flip-flop, or a T flip-flop. NOTE: the flip-flop is positive-edge triggered (on the clock pulse), as seen in the timing diagram. In the circuit symbol for a JK flip-flop, > is the clock input, J and K are the data inputs, Q is the stored data output, and Q' is the inverse of Q. The corresponding characteristic table is:

J   K   Qnext
0   0   hold state
0   1   reset
1   0   set
1   1   toggle

The origin of the name for the JK flip-flop is detailed by P. L. Lindley, a JPL engineer, in a letter to EDN, an electronics design magazine. The letter is dated June 13, 1968, and was published in the August edition of the newsletter. In the letter, Mr. Lindley explains that he heard the story of the JK flip-flop from Dr. Eldred Nelson, who is responsible for coining the term while working at Hughes Aircraft. Flip-flops in use at Hughes at the time were all of the type that came to be known as J-K. In designing a logical system, Dr. Nelson assigned letters to flip-flop inputs as follows: #1: A & B, #2: C & D, #3: E & F, #4: G & H, #5: J & K. Another theory holds that the set and reset inputs were given the symbols "J" and "K" after one of the engineers who helped design the J-K flip-flop, Jack Kilby.

D flip-flop
The Q output always takes on the state of the D input at the moment of a rising clock edge (or falling edge if the clock input is active low). It is called the D flip-flop for this reason, since the output takes the value of the D input, or Data input, and Delays it by one clock count. The D flip-flop can be interpreted as a primitive memory cell, zero-order hold, or delay line. Truth table:

Clock         D   Q   Qprev
Rising edge   0   0   X
Rising edge   1   1   X

('X' denotes a don't care condition, meaning the signal is irrelevant.)

Figure: 3-bit shift register. These flip-flops are very useful, as they form the basis for shift registers, which are an essential part of many electronic devices. The advantage of the D flip-flop over the D-type latch is that it "captures" the signal at the moment the clock goes high, and subsequent changes of the data line do not influence Q until the next rising clock edge. An exception is that some flip-flops have a 'reset' signal input, which will reset Q (to zero), and may be either asynchronous or synchronous with the clock.

The above circuit shifts the contents of the register to the right, one bit position on each active transition of the clock. The input X is shifted into the leftmost bit position.

T flip-flop
If the T input is high, the T flip-flop changes state ("toggles") whenever the clock input is strobed. If the T input is low, the flip-flop holds the previous value. This behavior is described by the characteristic equation Qnext = T XOR Q (or, without benefit of the XOR operator, the equivalent Qnext = T.Q' + T'.Q). When T is held high, the toggle flip-flop divides the clock frequency by two; that is, if the clock frequency is 4 MHz, the output frequency obtained from the flip-flop will be 2 MHz. This 'divide by' feature has application in various types of digital counters. A T flip-flop can also be built using a JK flip-flop (the J and K pins are connected together and act as T) or a D flip-flop (the T input and Qprevious are connected to the D input through an XOR gate).

JK Flip-Flop
The JKFF simplifies the RSFF truth table but keeps two inputs (figure 7.22). The toggle state is useful in counting circuits. If the C pulse is too long this state is undefined, and hence the JKFF can only be used with rigidly defined short clock pulses.
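The characteristic equation Qnext = T XOR Q and the divide-by-two property can be sketched directly (illustrative Python; names are our own):

```python
def t_flipflop(t_inputs, q=0):
    # Characteristic equation of the T flip-flop: Qnext = T XOR Q.
    outputs = []
    for t in t_inputs:
        q = t ^ q
        outputs.append(q)
    return outputs

# With T held high, the output toggles on every clock pulse,
# so the output waveform runs at half the clock frequency:
print(t_flipflop([1] * 8))  # [1, 0, 1, 0, 1, 0, 1, 0]
```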

The basic JK flip-flop constructed from an RS flip-flop and gates, and its schematic symbol and truth table.

Master/Slave or Pulse Triggering
We can simulate a dynamic clock input by putting two flip-flops in tandem, one driving the other in a master/slave arrangement as shown in figure 7.23. The slave is clocked in a complementary fashion to the master.

Figure 7.23: An implementation of the master/slave flip-flop. This arrangement is still pulse triggered. The data inputs are written onto the master flip-flop while the clock is true and transferred to the slave when the clock becomes false. The arrangement guarantees that the outputs of the slave can never be connected to the slave's own RS inputs. The design overcomes signal racing (i.e. the input signals never catch up with the signals already in the flip-flop). There are, however, a few special states when a transition can occur in the master and be transferred to the slave when the clock is high. These are known as ones catching and are common in master/slave designs.


Finite State Machine Design


The finite state machine design process consists of:
1. Constructing an initial state machine that realizes the design.
2. Minimizing the number of states.
3. Encoding the states.
4. Choosing the memory device to implement the state memory.
5. Implementing the finite state machine's next-state and output functions.
In the age of very large scale integrated circuits, why should we bother to minimize a state machine implementation? After all, as long as the input/output behavior of the machine is correct, it doesn't matter how it is implemented. Or does it?

Advantages of minimum states


In general, you will find it is worthwhile to implement the finite state machine in as few states as possible. This usually reduces the number of gates and flip-flops you need for the machine implementation. For example, suppose you are given a finite state machine with 18 states, thus requiring five state flip-flops. If you can reduce the number of states to 16 or fewer, you save a flip-flop. Even if reducing the number of states is not enough to eliminate a flip-flop, it still has advantages. With fewer states, you introduce more don't-care conditions into the next-state and output functions, making their implementation simpler. The state reduction technique also allows you to be less meticulous in obtaining the initial finite state machine description. If you have introduced a few redundant states, you will find and eliminate them by using the state reduction technique introduced next.

State Minimization
State reduction identifies and combines states that have equivalent behavior. Two states have equivalent behavior if, for all input combinations, their outputs are the same and they change to the same or equivalent next states. Algorithms for state reduction begin with the symbolic state transition table. First, we group together states that have the same state outputs (Moore machine) or transition outputs (Mealy machine). These are potentially equivalent, since states cannot be equivalent if their outputs differ. Next, we examine the transitions to see if they go to the same next state for every input combination. If they do, the states are equivalent and we can combine them into a renamed new state. We then update all transitions to the newly combined states. We repeat this process until no additional states can be combined. There are two common methods by which states can be minimized:
1. the row-matching method, and
2. the chart method.

Row Matching Method


Let's begin our investigation of the row-matching method with a detailed example. We will see how to transform an initial state diagram for a simple sequence detector into a minimized, equivalent state diagram.

Four-Bit Sequence Detector: Specification and Initial State Diagram
Let's consider a sequence-detecting finite state machine with the following specification. The machine has a single input X and a single output Z. The output is asserted after each four-bit input sequence if the sequence is one of the binary strings 0110 or 1010. The machine returns to the reset state after each four-bit sequence; we will assume a Mealy implementation. The state diagram of the 4-bit sequence detector is as shown below:

Figure: Initial state diagram of the 4-bit sequence detector (a binary tree of states rooted at Reset; each transition is labeled input/output, e.g. 0/0 or 1/0, and the two accepting transitions are labeled 0/1).

There are 16 unique paths through the state diagram, one for each possible 4-bit pattern. This requires 15 states and 30 transitions. Only two of the transitions have a one output, representing the accepted strings.

The state transition table for the 4-bit sequence detector is shown below.

Input Sequence   Present State   Next State      Output
                                 X=0    X=1      X=0   X=1
Reset            S0              S1     S2       0     0
0                S1              S3     S4       0     0
1                S2              S5     S6       0     0
00               S3              S7     S8       0     0
01               S4              S9     S10      0     0
10               S5              S11    S12      0     0
11               S6              S13    S14      0     0
000              S7              S0     S0       0     0
001              S8              S0     S0       0     0
010              S9              S0     S0       0     0
011              S10             S0     S0       1     0
100              S11             S0     S0       0     0
101              S12             S0     S0       1     0
110              S13             S0     S0       0     0
111              S14             S0     S0       0     0
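As a sanity check, the 15-state table can be transcribed and driven with all sixteen 4-bit strings to confirm that only 0110 and 1010 assert the output. The dictionary encoding below is an assumed one, not part of the original notes.

```python
from itertools import product

# next_state[state] = (next on X=0, next on X=1), transcribed from the table.
next_state = {
    'S0': ('S1', 'S2'),  'S1': ('S3', 'S4'),   'S2': ('S5', 'S6'),
    'S3': ('S7', 'S8'),  'S4': ('S9', 'S10'),  'S5': ('S11', 'S12'),
    'S6': ('S13', 'S14'),
}
# States S7..S14 all return to S0 on either input.
next_state.update({f'S{i}': ('S0', 'S0') for i in range(7, 15)})

# Output is 1 only on the transitions S10 --0--> S0 and S12 --0--> S0.
output = {(s, x): 0 for s in next_state for x in (0, 1)}
output[('S10', 0)] = output[('S12', 0)] = 1

accepted = []
for bits in product((0, 1), repeat=4):
    state, z = 'S0', 0
    for x in bits:
        z = output[(state, x)]          # Mealy output: depends on state and input
        state = next_state[state][x]
    if z == 1:                          # output asserted on the final transition
        accepted.append(''.join(map(str, bits)))

print(accepted)  # the detected strings
```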

The table above contains one row per state, with multiple next-state and output columns based on the input combinations. It gives the same information as a table with a separate row for each state and input combination. Next we examine the rows of the state transition table to find any with identical next-state and output values (hence the term row matching). For this finite state machine we can combine S10 and S12. Let's call the new state S10` and redirect all transitions to S10 and S12 to it. The revised state table is shown below.

Input Sequence   Present State   Next State      Output
                                 X=0    X=1      X=0   X=1
Reset            S0              S1     S2       0     0
0                S1              S3     S4       0     0
1                S2              S5     S6       0     0
00               S3              S7     S8       0     0
01               S4              S9     S10`     0     0
10               S5              S11    S10`     0     0
11               S6              S13    S14      0     0
000              S7              S0     S0       0     0
001              S8              S0     S0       0     0
010              S9              S0     S0       0     0
011 or 101       S10`            S0     S0       1     0
100              S11             S0     S0       0     0
110              S13             S0     S0       0     0
111              S14             S0     S0       0     0

We continue matching rows until no more can be combined. In the table above, S7, S8, S9, S11, S13, and S14 all have the same next states and outputs. We combine them into a renamed state S7`. The table with renamed transitions is shown below.

Input Sequence      Present State   Next State      Output
                                    X=0    X=1      X=0   X=1
Reset               S0              S1     S2       0     0
0                   S1              S3     S4       0     0
1                   S2              S5     S6       0     0
00                  S3              S7`    S7`      0     0
01                  S4              S7`    S10`     0     0
10                  S5              S7`    S10`     0     0
11                  S6              S7`    S7`      0     0
Not(011 or 101)     S7`             S0     S0       0     0
011 or 101          S10`            S0     S0       1     0

Now states S3 and S6 can be combined, as can S4 and S5. We call the combined states S3` and S4` respectively. The final reduced state transition table is shown below.

Input Sequence      Present State   Next State      Output
                                    X=0    X=1      X=0   X=1
Reset               S0              S1     S2       0     0
0                   S1              S3`    S4`      0     0
1                   S2              S4`    S3`      0     0
00 or 11            S3`             S7`    S7`      0     0
01 or 10            S4`             S7`    S10`     0     0
Not(011 or 101)     S7`             S0     S0       0     0
011 or 101          S10`            S0     S0       1     0

In the process we have reduced 15 states to just 7 states. The reduced state diagram is shown below.
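A similar check (again an assumed dictionary encoding, not from the text) confirms that the reduced 7-state machine still detects exactly 0110 and 1010.

```python
from itertools import product

# (next state on X=0, next state on X=1), per the final reduced table.
# The backtick-primed states of the text are written with an ASCII apostrophe.
nxt = {
    'S0':   ('S1',   'S2'),
    'S1':   ("S3'",  "S4'"),
    'S2':   ("S4'",  "S3'"),
    "S3'":  ("S7'",  "S7'"),
    "S4'":  ("S7'",  "S10'"),
    "S7'":  ('S0',   'S0'),
    "S10'": ('S0',   'S0'),
}

def detect(bits):
    """Run one 4-bit string through the reduced machine; return the last output."""
    state, z = 'S0', 0
    for x in bits:
        z = 1 if (state, x) == ("S10'", 0) else 0   # the only 1-output transition
        state = nxt[state][x]
    return z

hits = [''.join(map(str, b)) for b in product((0, 1), repeat=4) if detect(b)]
print(hits)
```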

[Reduced state diagram: seven states S0, S1, S2, S3`, S4`, S7`, S10`; every transition outputs 0 except S10` → S0 on input 0, which outputs 1.]

Chart Method

The implication chart method is a more systematic approach to finding the states that can be combined into a single reduced state. Consider a three-bit sequence detector: the goal is to design a binary sequence detector that outputs a 1 whenever the machine has observed the serial sequence 010 or 110 at its input. The initial state transition table is shown below.

Input Sequence   Present State   Next State      Output
                                 X=0    X=1      X=0   X=1
Reset            S0              S1     S2       0     0
0                S1              S3     S4       0     0
1                S2              S5     S6       0     0
00               S3              S0     S0       0     0
01               S4              S0     S0       1     0
10               S5              S0     S0       0     0
11               S6              S0     S0       1     0

The method operates on a data structure that enumerates all possible combinations of states taken two at a time, called an implication chart.

[Implication chart: a 7 × 7 grid with one row and one column for each of the states S0 through S6.]

The chart shown above is more complicated than it needs to be. For example, the diagonal entries are not needed, since we do not need to compare a state with itself. Also note that the upper and lower triangles of cells are symmetric: the chart cell for row Si and column Sj contains the same information as that for row Sj and column Si. Therefore, we work with a reduced triangular structure whose rows are labeled S1 through S6 and whose columns are labeled S0 through S5.

We fill in the implication chart as follows. Let Xij be the cell whose row is labeled by state Si and whose column is labeled by state Sj. If we were to combine states Si and Sj, their next-state transitions for each input combination would also have to be equivalent. The cell therefore contains the next-state pairs that must be equivalent for the row and column states to be equivalent. If Si and Sj have different outputs or next-state behavior, an X is placed in the cell, indicating that the two states can never be equivalent. The implication chart for the three-bit sequence detector is shown below. S0, S1, S2, S3, and S5 have the same outputs and are candidates for being combined. Similarly, states S4 and S6 might also be combined. Any pairing of states across the two groups, such as S1 and S4, is labeled with an X in the chart: since their outputs differ, they can never be equivalent. To fill the chart entry for row S1 and column S0, we look at the next-state transitions. S0 goes to S1 on 0 and S2 on 1, while S1 goes to S3 and S4 respectively. We fill the cell with S1-S3 (the transitions on 0) and S2-S4 (the transitions on 1). We call these groupings implied state pairs. The entry means that S0 and S1 cannot be equivalent unless S1 is equivalent to S3 and S2 is equivalent to S4. The rest of the entries are filled in similarly. At this point the chart contains enough information to prove that many states are NOT equivalent. For example, we already know that S2 and S4 cannot be equivalent, since they have different output behavior; thus there is no way that S0 can be equivalent to S1.

Initial entries:

      S0         S1         S2         S3        S4    S5
S1   S1-S3
     S2-S4
S2   S1-S5     S3-S5
     S2-S6     S4-S6
S3   S1-S0     S3-S0     S5-S0
     S2-S0     S4-S0     S6-S0
S4     X         X         X          X
S5   S1-S0     S3-S0     S5-S0     S0-S0       X
     S2-S0     S4-S0     S6-S0     S0-S0
S6     X         X         X          X      S0-S0     X
                                             S0-S0

First pass:

      S0    S1       S2    S3       S4    S5
S1    X
S2    X   S3-S5
          S4-S6
S3    X     X        X
S4    X     X        X     X
S5    X     X        X   S0-S0      X
                         S0-S0
S6    X     X        X     X      S0-S0    X
                                  S0-S0

The chart above shows the result of the first marking pass. Entry S2-S0 is marked with an X because the chart entry for its implied state pair S2-S6 is already marked with an X. Entry S3-S0 is also marked, because S1-S0 has just been marked; the same is true for S5-S0. By the end of the pass, the only entries not marked are S2-S1, S5-S3, and S6-S4. We now make a second pass through the chart to see if we can add any new markings. Entry S2-S1 remains unmarked: nothing in the chart refutes that S3 is equivalent to S5, and the same is true of S4 and S6. Indeed, S3-S5 and S4-S6 are obviously equivalent, since they have identical outputs and transfer to the same next state (S0) for all input combinations. Since no markings have been added, the algorithm stops. The unmarked entries represent equivalences between their row and column states. The final reduced state table is shown below.

Input Sequence   Present State   Next State      Output
                                 X=0    X=1      X=0   X=1
Reset            S0              S1`    S1`      0     0
0 or 1           S1`             S3`    S4`      0     0
00 or 10         S3`             S0     S0       0     0
01 or 11         S4`             S0     S0       1     0
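The marking procedure just described can be sketched as code applied to the three-bit detector. The encoding and variable names are assumptions of this sketch, not from the text: pairs are kept in a set of marked (non-equivalent) entries, and passes repeat until no new marks are added.

```python
from itertools import combinations

states = ['S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6']
# Transition and output tables for the three-bit detector, per the text.
nxt = {'S0': ('S1', 'S2'), 'S1': ('S3', 'S4'), 'S2': ('S5', 'S6'),
       'S3': ('S0', 'S0'), 'S4': ('S0', 'S0'), 'S5': ('S0', 'S0'),
       'S6': ('S0', 'S0')}
out = {'S0': (0, 0), 'S1': (0, 0), 'S2': (0, 0),
       'S3': (0, 0), 'S4': (1, 0), 'S5': (0, 0), 'S6': (1, 0)}

# Initial entries: mark an X for every pair with different outputs.
marked = {frozenset(p) for p in combinations(states, 2)
          if out[p[0]] != out[p[1]]}

# Repeated passes: mark a pair if any of its implied state pairs is marked.
changed = True
while changed:
    changed = False
    for a, b in combinations(states, 2):
        pair = frozenset((a, b))
        if pair in marked:
            continue
        implied = [frozenset((nxt[a][x], nxt[b][x])) for x in (0, 1)]
        if any(len(q) == 2 and q in marked for q in implied):
            marked.add(pair)
            changed = True

# Unmarked entries are the equivalent state pairs.
equiv = sorted(tuple(sorted(p))
               for p in {frozenset(c) for c in combinations(states, 2)} - marked)
print(equiv)
```

Running this reproduces the conclusion of the walkthrough: the surviving pairs are S1-S2, S3-S5, and S4-S6, which merge into S1`, S3`, and S4`.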
