
UNIT-3

Short questions
1. Define Micro instruction format?
A. The microinstruction format is divided into four functional parts. The three fields F1, F2, and
F3 specify microoperations for the computer. The CD field specifies a condition for branching, and the BR
field specifies the type of branch.

2. Enumerate the different address sequencing steps in a microprogrammed control unit?
A.1. Incrementing of the control address register.
2. Unconditional branch or conditional branch, depending on status bit conditions.
3. A mapping process from the bits of the instruction to an address for control memory.
4. A facility for subroutine call and return.

3. What is a Control word?


A. The control unit initiates a series of sequential steps of microoperations. During any given time,
certain microoperations are to be initiated, while others remain idle. The string of 1's and 0's
representing the control variables at any given time is called a control word.

4. Define micro programmed unit?


A. In the micro programmed organization, the control information is stored in a control memory. The
control memory is programmed to initiate the required sequence of micro operations.
5. Define the SBR register?
A.Microprograms that use subroutines must have a provision for storing the return address during a
subroutine call and restoring the address during a subroutine return. This may be accomplished by
placing the incremented output from the control address register into a subroutine register and
branching to the beginning of the subroutine. The subroutine register can then become the source for
transferring the address for the return to the main routine.

6. Enumerate different fields in micro instruction format?


A.The three fields F1, F2, and F3 specify micro operations for the computer. The CD field selects status
bit conditions. The BR field specifies the type of branch to be used. The AD field contains a branch
address.

7. What is conditional branching in the microinstruction format?
A. The CD (condition) field consists of two bits which are encoded to specify four status bit conditions as
listed in Table 7-1. The first condition is always a 1, so that a reference to CD = 00 (or the symbol U) will
always find the condition to be true. When this condition is used in conjunction with the BR (branch)
field, it provides an unconditional branch operation. The indirect bit I is available from bit 15 of DR after
an instruction is read from memory. The sign bit of AC provides the next status bit. The zero value,
symbolized by Z, is a binary variable whose value is equal to 1 if all the bits in AC are equal to zero. We
will use the symbols U, I, S, and Z for the four status bits when we write microprograms in symbolic
form.

8. What are the inputs to the control unit?


A. Instruction code, status bits, subroutine register, and incrementer.
9. List the advantages of Micro programmed control unit?
A.Advantages:-
1. The decoders and sequencing logic unit of a micro-programmed control unit are very
simple pieces of logic, compared to the hardwired control unit, which contains complex logic
for sequencing through the many micro-operations of the instruction cycle.
2. It simplifies the design of the control unit. Simpler design means the control unit
is cheaper and less error-prone to implement.
3. It is also flexible as changes could be easily made to the design.

10. Define Micro operation?


A. A micro operation is an elementary operation performed on the information stored in one or more
registers.

LONG ANSWERS

1. A) Define the following terms: I) control word II) control function III) control memory
B) Explain about microprogrammed control organization?
C) Discuss about address sequencing in a microprogrammed control unit?
I) CONTROL WORD: The control variables at any given time can be represented by a string of 1's
and 0's called a control word. As such, control words can be programmed to perform various
operations on the components of the system.
II) CONTROL FUNCTION:

The control variables are separated from the register transfer operation by specifying a control
function. A control function is a Boolean variable that is equal to 1 or 0. The control function is
included in the statement as follows:

P: R2 <-- R1

III) CONTROL MEMORY: A memory that is part of a control unit is referred to as a control
memory. The control memory is programmed to initiate the required sequence of micro
operations. Control units that use dynamic microprogramming employ a writable control
memory. This type of memory can be used for writing (to change the microprogram) but is
used mostly for reading.

B) Micro-programmed Control organization –


• The control signals associated with operations are stored in special memory units, inaccessible
to the programmer, as control words.
• Control signals are generated by a program, similar to machine language programs.
• A microprogrammed control unit is slower in speed because of the time it takes to fetch
microinstructions from the control memory.

C) Microinstructions are stored in control memory in groups, with each group specifying a
routine. Each computer instruction has its own microprogram routine in control memory to
generate the microoperations that execute the instruction. The hardware that controls the
address sequencing of the control memory must be capable of sequencing the
microinstructions within a routine and be able to branch from one routine to another.
The microoperation steps to be generated in processor registers depend on the operation
code part of the instruction. Each instruction has its own microprogram routine stored in a
given location of control memory. The transformation from the instruction code bits to an
address in control memory where the routine is located is referred to as a mapping process.

The address sequencing capabilities required in a control memory are:

1. Incrementing of the control address register.

2. Unconditional branch or conditional branch, depending on status bit conditions.

3. A mapping process from the bits of the instruction to an address for control memory

4. A facility for subroutine call and return.
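The four sequencing capabilities above can be sketched as a next-address selector. This is a minimal Python sketch, not the actual hardware; the branch-type names and example addresses are illustrative assumptions.

```python
# Minimal sketch of next-address selection in a microprogrammed sequencer.
# Branch-type names and the example addresses are illustrative assumptions.

def next_address(car, sbr, branch_type, condition_true, map_addr, branch_addr):
    """Return (next CAR, next SBR) for one microinstruction step."""
    if branch_type == "NEXT":                 # 1. increment the control address register
        return car + 1, sbr
    if branch_type == "JMP":                  # 2. conditional/unconditional branch
        return (branch_addr if condition_true else car + 1), sbr
    if branch_type == "MAP":                  # 3. map instruction bits to a routine address
        return map_addr, sbr
    if branch_type == "CALL":                 # 4a. subroutine call: save return address in SBR
        return (branch_addr if condition_true else car + 1), car + 1
    if branch_type == "RET":                  # 4b. subroutine return: restore from SBR
        return sbr, sbr
    raise ValueError(branch_type)

car, sbr = next_address(0x40, 0, "CALL", True, 0, 0x60)  # CAR = 0x60, SBR = 0x41
car, sbr = next_address(car, sbr, "RET", True, 0, 0)     # CAR back to 0x41
```

The CALL/RET pair shows how the incremented CAR saved in SBR becomes the source of the return address, as described in question 5 above.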

2. A) Explain the various registers of the control unit?
B) Draw the diagram for the microprogram example and explain?
C) Explain the design of the control unit?

ANSWER:

A) CAR (CONTROL ADDRESS REGISTER): The control memory address register specifies the address of
the microinstruction, and the control data register holds the microinstruction read from memory.
The microinstruction contains a control word that specifies one or more microoperations for the
data processor. Once these operations are executed, the control must determine the next address.
The location of the next microinstruction may be the one next in sequence, or it may be located
somewhere else in the control memory. For this reason it is necessary to use some bits of the present
microinstruction to control the generation of the address of the next microinstruction. The next
address may also be a function of external input conditions.

SBR (SUBROUTINE REGISTER): In a given program, it is often necessary to perform a particular
subtask many times on different data values. Such a subtask is usually called a subroutine.

It is possible to include the block of instructions that constitute a subroutine at every place where it is
needed in the program. However, to save space, only one copy of the instructions that constitute the
subroutine is placed in the memory, and any program that requires the use of the subroutine simply
branches to its starting location. When a program branches to a subroutine we say that it is calling the
subroutine. The instruction that performs this branch operation is named a Call instruction.

B)

The block diagram of the computer is shown in the figure. It consists of two memory units: a main memory for
storing instructions and data, and a control memory for storing the microprogram. Four
registers are associated with the processor unit and two with the control unit.

The processor registers are program counter PC, address register AR, data register DR, and
accumulator register AC. The function of these registers is similar to the basic computer. The
control unit has a control address register CAR and a subroutine register SBR. The control memory and
its registers are organized as a microprogrammed control unit.

C)

This figure shows the three decoders and some of the connections that must be made from their
outputs. Each of the three fields of the microinstruction presently available in the output of control
memory is decoded with a 3 x 8 decoder to provide eight outputs. Each of these outputs must be
connected to the proper circuit to initiate the corresponding microoperation. For example, when F1 =
101 (binary 5), the next clock pulse transition transfers the content of DR(0-10) to AR. Similarly, when
F1 = 110 (binary 6) there is a transfer from PC to AR (symbolized by PCTAR). As shown in the figure, outputs 5
and 6 of decoder F1 are connected to the load input of AR so that when either one of these outputs is
active, information from the multiplexers is transferred to AR. The multiplexers select the
information from DR when output 5 is active and from PC when output 5 is inactive. The transfer into
AR occurs with a clock pulse transition only when output 5 or output 6 of the decoder is active. The
other outputs of the decoders that initiate transfers between registers must be connected in a similar
fashion.
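As an illustration of the decoding just described, here is a minimal Python sketch. Registers are modeled as plain integers (a simplification), and only the two F1 codes named in the text are implemented.

```python
# Sketch of decoding the F1 field (3 bits -> one of 8 microoperations).
# Registers are modeled as plain integers; this mirrors the text's
# examples DRTAR (F1 = 101) and PCTAR (F1 = 110).

regs = {"AR": 0, "PC": 0x123, "DR": 0x7FF}

def execute_f1(f1, regs):
    if f1 == 0b101:                       # DRTAR: AR <- DR(0-10)
        regs["AR"] = regs["DR"] & 0x7FF   # keep the low 11 bits
    elif f1 == 0b110:                     # PCTAR: AR <- PC
        regs["AR"] = regs["PC"]
    # the remaining F1 codes select other microoperations (000 = NOP)

execute_f1(0b101, regs)   # AR now holds DR(0-10)
```

In hardware, the two decoder outputs drive the load input of AR while a multiplexer picks DR or PC; here the `if`/`elif` plays both roles.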
UNIT-4
Short Answers
1. What is the purpose of valid bit in cache?
A. When data is loaded into a particular cache block, the corresponding valid bit is set to 1. So
the cache contains more than just copies of the data in memory; it also has bits to help us find
data within the cache and verify its validity.

2. What is a page?
A. The term page refers to groups of address space of the same size.

3. What are the two policies of cache memory?


A. Two policies of cache memory are write back and write through.

4. Define TLB?
A. A translation lookaside buffer (TLB) is a memory cache that is used to reduce the time taken to
access a user memory location. It is a part of the chip's memory-management unit (MMU).
The TLB stores the recent translations of virtual memory to physical memory and can be called
an address-translation cache.

5. What is locality of reference?


A. In computer science, locality of reference, also known as the principle of locality, is a term for
the phenomenon in which the same values, or related storage locations, are frequently
accessed, depending on the memory access pattern.

6. What is the difference between logical address space and physical address space?
A: An address used by a programmer will be called a virtual address or logical address, and the set
of such addresses is called the address space.
An address in main memory is called a location or physical address.

7. Define segment?
A. A segment is a set of logically related instructions or data elements associated with a given name.
Segments may be generated by the programmer or by the operating system. Examples of
segments are a subroutine, an array of data, a table of symbols, or a user's program.

8. Suppose a computer has a main memory capacity of 32K words. How many bits are needed to specify a physical address in memory?
A. 15 bits are needed to specify a physical address in memory.
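A quick check: 32K = 32 x 1024 = 2^15, so 15 address bits are needed.

```python
# 32K words = 32 * 1024 = 2**15 locations, so 15 address bits are needed.
import math

words = 32 * 1024
address_bits = int(math.log2(words))
print(address_bits)  # 15
```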
9. What is a page frame?
A. The physical memory is broken down into groups of equal size called blocks (also known as page
frames), which may range from 64 to 4096 words each.

10. List different paging techniques?
A. Paging relies on page replacement techniques such as:
1. FIFO (first-in, first-out)
2. LRU (least recently used)
A page fault (a reference to a page not present in main memory) is the event that triggers a replacement.
11. Define memory unit?
A. A memory unit's capacity is the amount of data that can be stored in the storage unit. This storage
capacity is expressed in terms of bytes.
12. What is the use of cache memory?
A. 1. A cache is a smaller, faster memory, closer to a processor core, which stores copies of the
data from frequently used main memory locations.
2. A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to
reduce the average cost (time or energy) to access data from the main memory.
13. Define Static RAM (SRAM)?
A. Static Random Access Memory (Static RAM or SRAM) is a type of RAM that holds data in a
static form, that is, as long as the memory has power. Unlike dynamic RAM, it does not need to be
refreshed.
14. Define memory latency, seek time and memory access time?
A. MEMORY LATENCY: In computing, memory latency is the time (the latency) between initiating a
request for a byte or word in memory and its retrieval by a processor. Memory latency is also
the time between initiating a request for data and the beginning of the actual data transfer.
SEEK TIME: Seek time is the time taken for a hard disk controller to locate a specific piece of
stored data. Other delays include transfer time (data rate) and rotational delay (latency).
MEMORY ACCESS TIME: Memory access time is how long it takes for a character in memory to be
transferred to or from the CPU.

15. List different types of memories?
A. Main memory (RAM and ROM), cache memory, auxiliary memory (magnetic disks and tapes), and
associative memory.
16. Define MMU?
A. A memory management unit (MMU) is a computer hardware component that handles
all memory and caching operations associated with the processor. Hardware
memory management oversees and regulates the processor's use of RAM
(random access memory) and cache memory.
17. What are different memory access methods?
A. Associative memory method, set associative method.
18. Define hit ratio?
A. The hit ratio is the fraction of accesses which are a hit. The miss ratio is the fraction of accesses
which are a miss.
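The hit and miss ratios feed the standard average-access-time model. The numbers below are illustrative assumptions, and the formula is the usual weighted average, not something stated in these notes:

```python
# If 970 of 1000 accesses hit the cache, the hit ratio h = 0.97.
# Average access time = h * t_cache + (1 - h) * t_main (standard model;
# the timing values below are illustrative assumptions).

hits, accesses = 970, 1000
h = hits / accesses                # hit ratio
miss_ratio = 1 - h                 # miss ratio

t_cache, t_main = 10, 100          # access times in ns, assumed
t_avg = h * t_cache + (1 - h) * t_main
print(h, round(t_avg, 1))          # 0.97 12.7
```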

Long answers:

1. A. ENUMERATE VARIOUS MEMORY AND STORAGE DEVICES AS PER HIERARCHY?
• The memory hierarchy system consists of all storage devices employed in a computer system,
from the slow but high-capacity auxiliary memory to a relatively faster main memory, to an
even smaller and faster cache memory accessible to the high-speed processing logic.
• The figure illustrates the components in a typical memory hierarchy.
• At the bottom of the hierarchy are the relatively slow magnetic tapes used to store removable
files.
• Next are the magnetic disks used as backup storage.
• The main memory occupies a central position by being able to communicate directly with the
CPU and with auxiliary memory devices through an I/O processor.
• When programs not residing in main memory are needed by the CPU, they are brought in
from auxiliary memory.
• Programs not currently needed in main memory are transferred into auxiliary memory to
provide space for currently used programs and data.
B. Explain Auxiliary Memory with an example
Auxiliary memory
• The most common auxiliary memory devices used in computer systems are magnetic disks and
tapes. Other components used, but not as frequently, are magnetic drums, magnetic bubble
memory, and optical disks.
• The important characteristics of any device are its access mode, access time, transfer rate,
capacity, and cost.
• The average time required to reach a storage location in memory and obtain its contents is
called the access time.
• In electromechanical devices with moving parts such as disks and tapes, the access time
consists of a seek time required to position the read-write head at a location.
• Transfer time is required to transfer data to or from the device.
• Because the seek time is usually much longer than the transfer time, auxiliary storage is
organized in records or blocks.
• A record is a specified number of characters or words.
• Reading or writing is always done on entire records.
• The transfer rate is the number of characters or words that the device can transfer per second,
after it has been positioned at the beginning of the record.

Magnetic Disks A magnetic disk is a circular plate constructed of metal or plastic coated with
magnetized material. Often both sides of the disk are used and several disks may be stacked on one
spindle with read/write heads available on each surface. All disks rotate together at high speed and are
not stopped or started for access purposes. Bits are stored in the magnetized surface in spots along
concentric circles called tracks. The tracks are commonly divided into sections called sectors. In most
systems, the minimum quantity of information which can be transferred is a sector. The subdivision of
one disk surface into tracks and sectors is shown in Fig. 12-5.

C. A virtual memory system has an address space of 8K words, a memory space of 4K words, and a
page size of 1K words. The following page references occurred: 4 2 0 1 2 6 1 4 0 1 0 2 3 5 7. Solve to
determine the four pages that are resident in the memory after each page reference change if the
replacement algorithm used is FIFO.
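The FIFO trace requested above can be generated with a short simulation. The 4K memory space with 1K pages gives four page frames:

```python
from collections import deque

# FIFO simulation for problem C: 4 frames (4K words / 1K-word pages),
# reference string 4 2 0 1 2 6 1 4 0 1 0 2 3 5 7.
refs = [4, 2, 0, 1, 2, 6, 1, 4, 0, 1, 0, 2, 3, 5, 7]
frames = deque()          # oldest page at the left
faults = 0

for page in refs:
    if page not in frames:
        faults += 1
        if len(frames) == 4:
            frames.popleft()          # evict the oldest page (FIFO)
        frames.append(page)
    print(page, list(frames))         # resident pages after each reference

print("faults:", faults)  # 10 faults; final resident pages: 2, 3, 5, 7
```

References 4, 2, 0, 1, 6, 4, 2, 3, 5, 7 fault (10 faults); the other five references hit, and the final resident set is {2, 3, 5, 7}.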

2. A. What is a DRAM? Explain the types of DRAMs with suitable diagrams
A. Dynamic RAM, or DRAM, is a form of random access memory (RAM) which is used in
many processor systems to provide the working memory.

The different types of DRAM are used for different applications as a result of their slightly varying
properties. The different types are summarised below:

Asynchronous DRAM: Asynchronous DRAM is the basic type of DRAM on which all other
types are based. Asynchronous DRAMs have connections for power, address inputs, and
bidirectional data lines.
Synchronous DRAM (SDRAM): In SDRAM, all signals are tied to the clock, so timing is
much tighter and better controlled.
B. What are the different secondary storage devices? Elaborate on any one of the devices
The most common forms of secondary storage devices are:
• Floppy disks
• Hard disks
• Solid state disks

Hard Drives and SSDs

Some of the most common secondary storage devices are magnetic hard drives – long used
in laptop and desktop computers.
They use magnetic heads to store and read data on spinning metal disks known as platters.
They're generally the first place used to store data on a computer, starting with the operating system.
More recently, computer manufacturers have started to ship more devices with what are
called solid state drives, or SSDs.
SSDs don't have moving parts like spinning platters.
Instead, they use flash memory, similar to USB flash drives.
They're usually faster and less noisy than hard drives, but they can be more expensive for the
same amount of data storage, so both devices are still currently in use for different
applications.
C. Explain Demand paging technique in detail with an
example?
Demand Paging: Demand paging is a technique of virtual memory
management. A virtual memory system is a combination of hardware and
software techniques. The memory management software system handles all
the software operations for the efficient utilization of memory space. It must
decide
(1) which page in main memory ought to be removed to make room for a new
page,
(2) when a new page is to be transferred from auxiliary memory to main
memory, and
(3) where the page is to be placed in main memory.
The hardware mapping mechanism and the memory management software
together constitute the architecture of a virtual memory.
When a program starts execution, one or more pages are transferred into
main memory and the page table is set to indicate their position. The program
is executed from main memory until it attempts to reference a page that is
still in auxiliary memory. This condition is called a page fault. When a page fault
occurs, the execution of the present program is suspended until the required
page is brought into main memory. In demand paging, a page is copied into
main memory from auxiliary memory only when it is required.

3. A) WHAT IS THE BASIC CONCEPT OF VIRTUAL MEMORY?

Virtual memory is a concept used in some large computer systems that permit the user to construct
programs as though a large memory space were available, equal to the totality of auxiliary memory.
Each address that is referenced by the CPU goes through an address mapping from the so-called virtual
address to a physical address in main memory. Virtual memory is used to give programmers the illusion
that they have a very large memory at their disposal, even though the computer actually has a
relatively small main memory. A virtual memory system provides a mechanism for translating program-
generated addresses into correct main memory locations. This is done dynamically, while programs are
being executed in the CPU. The translation or mapping is handled automatically by the hardware by
means of a mapping table.

An address used by a programmer will be called a virtual address, and the set of such addresses the
address space. An address in main memory is called a location or physical address. The set of such
locations is called the memory space. Thus the address space is the set of addresses generated by
programs as they reference instructions and data; the memory space consists of the actual main
memory locations directly addressable for processing. In most computers the address and memory
spaces are identical. The address space is allowed to be larger than the memory space in computers
with virtual memory. A table is then needed, as shown in Fig. 12-17, to map a virtual address of 20 bits
to a physical address of 15 bits. The mapping is a dynamic operation, which means that every address is
translated immediately as a word is referenced by the CPU.
B) EXPLAIN ABOUT SEGMENTED PAGE MAPPING?
• Segmentation and paging can be combined together to get paged segmentation or segmented
paging system

• The following technique is used to map a logical address to physical address in this case
• Two mapping tables may be stored in separate small memories or in main memory

• 3 memory accesses are required which reduces the speed

• To avoid this, a fast associative memory is used to hold the most recently
referenced table entries – the TLB (translation lookaside buffer)
• Mapping process is first attempted by associative search with the given segment
and page no.

• When a block is referenced its value along with corresponding page and segment
no. are stored in the TLB

• If no match occurs, then the table mapping is used and the result is stored also in
the TLB

Hexadecimal address   Page no.   Segment   Page   Block

60000                 0          6         00     012

60100                 1          6         01     000

60200                 2          6         02     019

60300                 3          6         03     053

60400                 4          6         04     A61
MEMORY PROTECTION

• Can be assigned to the physical address or the logical address

• Physical address – assign a no. of protection bits to each block

• Logical address – include protection bits in the segment table

3. C) A two-way set associative cache memory uses blocks of 4 words.
The cache can accommodate a total of 2048 words from main memory.
The main memory size is 128K x 32. What is the size of the cache memory?
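The notes give no worked answer here. The intended answer is most likely the data capacity, 2048 words of 32 bits; the address breakdown below is the standard derivation, added as a hedged sketch:

```python
import math

# Two-way set associative cache: 4-word blocks, 2048 data words in total,
# main memory 128K x 32 (so a 17-bit word address).
cache_words  = 2048
block_words  = 4
ways         = 2
address_bits = int(math.log2(128 * 1024))       # 17-bit address

blocks = cache_words // block_words             # 512 blocks
sets   = blocks // ways                         # 256 sets
offset_bits = int(math.log2(block_words))       # 2 bits select the word in a block
index_bits  = int(math.log2(sets))              # 8 bits select the set
tag_bits    = address_bits - index_bits - offset_bits   # 7 tag bits

data_bits = cache_words * 32                    # 65,536 bits of data storage
tag_store = blocks * tag_bits                   # plus 3,584 bits of tag storage
print(sets, tag_bits, data_bits)                # 256 7 65536
```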
4. A) Discuss various page replacement algorithms?
Page Replacement Algorithms

There are a variety of page replacement algorithms. Some of the most common and important
page replacement algorithms are:

1) FIFO Page Replacement Algorithm


2) Least Recently Used (LRU) Page Replacement Algorithm

1. FIFO Page Replacement Algorithm

It is the simplest page-replacement algorithm. As the name suggests, when a page has to
be replaced, the oldest page is chosen: the page at the head of the queue is replaced, and
newly loaded pages are inserted at the tail of the queue. The first-in, first-out (FIFO) page
replacement algorithm is a low-overhead algorithm and requires very little book-keeping
on the part of the operating system.

Advantages of FIFO Page Replacement Algorithm:-

i) Easy to understand and execute.

Disadvantages of FIFO Page Replacement Algorithm:-


i) It is not very effective.
ii) System needs to keep track of each frame
iii) Sometimes it behaves abnormally. This behaviour is called Belady’s anomaly.
iv) A bad replacement choice increases the page fault rate and slows process execution.

2. Least Recently Used (LRU) Page Replacement Algorithm

In this algorithm, the page that has not been used for the longest period of time is replaced.

Advantages of LRU Page Replacement Algorithm :-

i) It is amenable to full statistical analysis.


ii) Never suffers from Belady’s anomaly.
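A minimal LRU sketch, illustrative rather than taken from these notes. It relies on Python dicts preserving insertion order, so re-inserting a page on every access keeps the least recently used page first:

```python
# Minimal LRU page replacement sketch. Re-inserting a page on each access
# keeps the dict ordered from least to most recently used.

def lru_faults(refs, num_frames):
    frames = {}               # page -> None, ordered least recently used first
    faults = 0
    for page in refs:
        if page in frames:
            frames.pop(page)              # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == num_frames:
                oldest = next(iter(frames))   # least recently used page
                frames.pop(oldest)
        frames[page] = None               # (re)insert as most recently used
    return faults

# Same reference string as the FIFO example in problem C, with 4 frames:
print(lru_faults([4, 2, 0, 1, 2, 6, 1, 4, 0, 1, 0, 2, 3, 5, 7], 4))
```

Note that "never suffers from Belady's anomaly" means adding frames can never increase LRU's fault count; it does not mean LRU always beats FIFO on a given string.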

4. B) Demonstrate how logical address space is mapped to physical address
The table implementation of the address mapping is simplified if the information in the address space
and the memory space are each divided into groups of fixed size. The physical memory is broken down
into groups of equal size called blocks. The term page refers to groups of address space of the same size.
Although both a page and a block are split into groups of 1K words, a page refers to the organization of
address space, while a block refers to the organization of memory space. The programs are also
considered to be split into pages. Consider a computer with an address space of 8K and a memory space
of 4K. If we split each into groups of 1K words we obtain eight pages and four blocks as shown in Fig. 12-
18.
The mapping from address space to memory space is facilitated if each virtual address is
considered to be represented by two numbers: a page number address and a line within the
page. In a computer with 2^p words per page, p bits are used to specify a line address and the
remaining high-order bits of the virtual address specify the page number.

Note that the line address in address space and memory space is the same; the only mapping
required is from a page number to a block number. The organization of the memory mapping
table in a paged system is shown in Fig. 12-19. The memory-page table consists of eight words,
one for each page. The address in the page table denotes the page number and the content of
the word gives the block number where that page is stored in main memory. A
presence bit in each location indicates whether the page has been transferred from auxiliary
memory into main memory. A 0 in the presence bit indicates that this page is not available in
main memory.

The content of the word in the memory page table at the page number address is read out into the
memory table buffer register.
If the presence bit in the word read from the page table is 0, it signifies that the content of the
word referenced by the virtual address does not reside in main memory. A call to the operating
system is then generated to fetch the required page from auxiliary memory and place it into
main memory before resuming computation.
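The mapping described above (8 pages, 4 blocks, 1K-word pages) can be sketched with a memory-page table. The particular page-to-block assignments below are illustrative assumptions, not values from the notes:

```python
# Paged address mapping sketch: address space 8K (13-bit virtual address),
# memory space 4K (12-bit physical address), 1K-word pages.
# page_table[page] = (presence_bit, block); the assignments are assumed.
PAGE_SIZE = 1024

page_table = {
    0: (1, 3), 1: (0, None), 2: (1, 0), 3: (0, None),
    4: (1, 1), 5: (0, None), 6: (1, 2), 7: (0, None),
}

def map_address(virtual_addr):
    page = virtual_addr // PAGE_SIZE       # high-order bits: page number
    line = virtual_addr % PAGE_SIZE        # low-order bits: line address
    presence, block = page_table[page]
    if not presence:                       # presence bit 0 -> page fault
        raise RuntimeError("page fault: page %d not in main memory" % page)
    return block * PAGE_SIZE + line        # same line address, remapped page

print(hex(map_address(0x0005)))   # page 0 maps to block 3
```

A reference to a page whose presence bit is 0 raises the page-fault condition, which in a real system invokes the operating system as described above.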

4. C) Explain the match logic of an associative memory?


The match logic for each word can be derived from the comparison algorithm for two binary
numbers. First, we neglect the key bits and compare the argument in A with the bits stored in
the cells of the words. Word i is equal to the argument in A if A_j = F_ij for j = 1, 2, ..., n. Two bits
are equal if they are both 1 or both 0. The equality of two bits can be expressed logically by the
Boolean function

x_j = A_j F_ij + A_j' F_ij'

where x_j = 1 if the pair of bits in position j are equal; otherwise, x_j = 0. For a word i to be equal
to the argument in A we must have all x_j variables equal to 1. This is the condition for setting
the corresponding match bit M_i to 1. The Boolean function for this condition is

M_i = x_1 x_2 x_3 ... x_n

and constitutes the AND operation of all pairs of matched bits in a word.

We now include the key bit K_j in the comparison logic. The requirement is that if K_j = 0, the
corresponding bits of A_j and F_ij need no comparison. Only when K_j = 1 must they be compared.
This requirement is achieved by ORing each term with K_j', thus:

x_j + K_j'

When K_j = 1, we have K_j' = 0 and x_j + 0 = x_j. When K_j = 0, then K_j' = 1 and x_j + 1 = 1. A term (x_j +
K_j') will be in the 1 state if its pair of bits is not compared. This is necessary because each term is
ANDed with all other terms so that an output of 1 will have no effect. The comparison of the
bits has an effect only when K_j = 1.

The match logic for word i in an associative memory can now be expressed by the following
Boolean function:

M_i = (x_1 + K_1')(x_2 + K_2') ... (x_n + K_n')

Each term in the expression will be equal to 1 if its corresponding K_j = 0. If K_j = 1, the term will
be either 0 or 1 depending on the value of x_j. A match will occur and M_i will be equal to 1 if all
terms are equal to 1.

If we substitute the original definition of x_j, the Boolean function above can be expressed as
follows:

M_i = Π (A_j F_ij + A_j' F_ij' + K_j')   for j = 1, 2, ..., n

where Π is a product symbol designating the AND operation of all n terms. We need m such
functions, one for each word i = 1, 2, 3, ..., m. The circuit for matching one word is shown in Fig.
12-9. Each cell requires two AND gates and one OR gate. The inverters for A_j and K_j are needed
once for each column and are used for all bits in the column. The outputs of all OR gates in the
cells of the same word go to the input of a common AND gate to generate the match signal for
M_i. M_i will be logic 1 if a match occurs and 0 if no match occurs. Note that if the key register
contains all 0's, output M_i will be 1 irrespective of the value of A or the word. This occurrence
must be avoided during normal operation.
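The match logic described above can be checked with a short Python simulation. The word contents, argument, and key values below are illustrative assumptions:

```python
# Match logic simulation: M_i = product over j of (A_j F_ij + A_j' F_ij' + K_j').
# Bit positions where K_j = 0 are masked out of the comparison.

def match(argument, key, word, n_bits):
    m = 1
    for j in range(n_bits):
        a = (argument >> j) & 1
        k = (key >> j) & 1
        f = (word >> j) & 1
        x = 1 if a == f else 0            # x_j = A_j F_ij + A_j' F_ij'
        m &= x | (1 - k)                  # term (x_j + K_j')
    return m

words = [0b1010, 0b1001, 0b0011]           # assumed word contents
A, K = 0b1010, 0b1111                      # compare all four bit positions
print([match(A, K, w, 4) for w in words])  # [1, 0, 0]: only word 0 matches

K = 0b1100                                 # key masks out the two low-order bits
print([match(A, K, w, 4) for w in words])  # [1, 1, 0]: word 1 now matches too
```

With the key register all 0's, every word matches regardless of A, which is exactly the degenerate case the text warns must be avoided.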
UNIT-5
Short questions

1) Define I/O module?


A: The input-output subsystem of a computer, referred to as I/O, provides an efficient mode of
communication between the central system and the outside environment. Programs and data must
be entered into computer memory for processing and results obtained from computations must be
recorded or displayed for the user.

2) What is DMA?
A: In direct memory access (DMA), the interface transfers data into and out of the memory unit
through the memory bus. The CPU initiates the transfer by supplying the interface with the
starting address and the number of words needed to be transferred and then proceeds to
execute other tasks. When the transfer is made, the DMA requests memory cycles through the
memory bus. When the request is granted by the memory controller, the DMA transfers the
data directly into memory. The CPU merely delays its memory access operation to allow the
direct memory I/O transfer. Since peripheral speed is usually slower than processor speed, I/O-
to-memory transfers are infrequent compared to processor accesses to memory.

3) List different data transfer modes?


A: Data transfer to and from peripherals may be handled in one of three possible modes:
1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct memory access (DMA)

4) What are the two methods of asynchronous data transfer in I/O?


A: Asynchronous data transfer between two independent units requires that control signals be
transmitted between the communicating units to indicate the time at which data is being
transmitted.
Types:
Strobe signal
Handshake

5) What is the strobe signal?


A: A strobe signal is supplied by one of the units to indicate to the other unit when the transfer
has to occur.

6) Define polling?
A: A polling procedure is used to identify the highest-priority source by software means. In this
method there is one common branch address for all interrupts. The program that takes care of
interrupts begins at the branch address and polls the interrupt sources in sequence.

7) What is the use of VAD?


A: In a daisy-chaining priority interrupt, the vector address (VAD) is placed on the data bus
for the CPU to use during the interrupt cycle.
8) What is cycle stealing?
A: In computing, cycle stealing is traditionally a method of
accessing computer memory (RAM) or a bus without interfering with the CPU. It is
similar to direct memory access (DMA) in allowing I/O controllers to read or write RAM
without CPU intervention.

9) List different multiprocessors?


A: Tightly coupled multiprocessors
Loosely coupled multiprocessors

10) What is the use of the bus grant and bus request signals in DMA?
A:
• Bus Request (BR): This signal indicates to the processor that some other device
needs to become the bus master.
• Bus Grant (BG): This output signal indicates to all bus master devices that the
processor will relinquish bus control at the end of the current bus cycle.

11) What are the CPU signals for DMA transfer?


A: Bus request
Bus grant

12) What is bus arbitration?


A: Bus arbitration refers to the process by which the current bus master accesses and
then leaves control of the bus and passes it to another requesting bus master
(processor unit).

13) What is Priority interrupt?


A: A priority interrupt is a system that establishes a priority over the various sources to
determine which condition is to be serviced first when two or more requests arrive
simultaneously.

14) Define Programmed I/O


A: Programmed I/O operations are the result of I/O instructions written in the computer
program. Each data item transfer is initiated by an instruction in the program.

15) Define I/O-mapped I/O


A: I/O-mapped input/output uses special instructions to transfer data between the computer
system and the outside world.

16) Define Interrupt-Driven I/O


A: Interrupt-driven I/O uses special commands to inform the interface to issue an interrupt
request signal when data are available from the device. In the meantime the CPU can
proceed to execute another program.

17) Draw the block diagram for strobe initiated source?


A:
18) List the sequence of steps for handshaking?
A: For a source-initiated transfer:
1. The source unit places data on the data bus and enables its data valid signal.
2. The destination unit accepts the data and enables its data accepted signal.
3. The source unit disables data valid and removes the data from the bus.
4. The destination unit disables data accepted, and the system returns to its initial state.

19) What is interconnection structure?


A: The components that form a multiprocessor system are CPUs, IOPs connected to input-output
devices, and a memory unit that may be partitioned into a number of separate modules. The
interconnection between these components can have different physical configurations; the
structure used to transfer data between them is called an interconnection structure.

20) List different interconnection structures?


A: 1. Time-shared common bus
2. Multiport memory
3. Crossbar switch
4. Multistage switching network
5. Hypercube system

Long Answers
1. A)Explain why DMA has more priority than CPU when both
request a memory transfer?
 We can transfer data directly to and from memory without the need of the CPU.
 The transfer of data between a fast storage device such as a magnetic disk and memory is often
limited by the speed of the CPU.
 Removing the CPU from the path and letting the peripheral device manage the memory
buses directly improves the speed of transfer.
 This transfer technique is called direct memory access (DMA).
 During a DMA transfer, the CPU is idle and has no control of the memory buses.
 A DMA controller takes over the buses to manage the transfer directly between the I/O device
and memory.
 Because the device transfer is time-critical (data from a disk arrives at a fixed rate and would
otherwise be lost), the DMA request is granted higher priority than the CPU.
B. Demonstrate cycle stealing in detail?
(1) Cycle stealing is a method of accessing computer memory (RAM) or a bus without interfering with
the CPU. It is similar to direct memory access (DMA) in allowing I/O controllers to read or write
RAM without CPU intervention.
(2) This is similar to burst transfer mode, but instead of the data being transferred all at once, it is
transferred one byte at a time.
(3) In this method, the transfer rate is slower, but it prevents the CPU from staying idle for a long
period of time.
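A toy simulation can make the difference concrete. This is an illustrative sketch, not real hardware behavior: in burst mode the DMA holds the bus for consecutive memory cycles, while in cycle-stealing mode it takes one cycle and then returns the bus to the CPU.

```python
# Toy timeline of memory-bus ownership (illustrative only): burst mode
# lets the DMA hold the bus for consecutive cycles, while cycle stealing
# interleaves one CPU cycle between successive DMA cycles.
def bus_timeline(n_words, mode):
    timeline = []
    remaining = n_words
    while remaining > 0:
        timeline.append("DMA")            # DMA steals this memory cycle
        remaining -= 1
        if mode == "cycle_steal" and remaining > 0:
            timeline.append("CPU")        # bus returned for one CPU cycle
    return timeline
```

For three words, burst mode yields three consecutive DMA cycles, while cycle stealing alternates DMA and CPU cycles, which is why the transfer is slower but the CPU never idles for long.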

c. Explain the operation of Programmed I/O and also discuss its demerits

 Programmed I/O operations are the result of I/O instructions written in the computer
program.
 Each data item transfer is initiated by an instruction in the program.
 Usually, the transfer is to and from a CPU register and a peripheral.
 Other instructions are needed to transfer the data between the CPU and memory.
 Transferring data under program control requires constant monitoring of the peripheral by
the CPU.
 Once a data transfer is initiated, the CPU is required to monitor the interface to see when a
transfer can again be made.
 In the programmed I/O method, the CPU stays in a program loop until the I/O unit indicates
that it is ready for data transfer.
 This is a time-consuming process since it keeps the processor busy needlessly.
 Transfer of data under programmed I/O is between the CPU and peripherals.
Advantages - simple to implement
- requires very little hardware support
Disadvantages - busy waiting
- ties up the CPU for long periods with no useful work
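The busy-wait loop described above can be sketched as a minimal Python model. The `Device` class is hypothetical; real programmed I/O would read device status and data registers instead of Python attributes:

```python
# Minimal model of programmed I/O: the CPU polls a ready flag and moves
# one data item per iteration, doing no useful work while it waits.
class Device:
    def __init__(self, values):
        self._values = list(values)

    @property
    def ready(self):
        return bool(self._values)     # "ready" while data remains

    def read(self):
        return self._values.pop(0)

def programmed_io_read(device, count):
    received = []
    for _ in range(count):
        while not device.ready:       # busy wait: the demerit in question
            pass
        received.append(device.read())
    return received
```

The inner `while` loop is the busy waiting listed under the disadvantages: the processor is fully occupied yet accomplishes nothing until the device becomes ready.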

2. A. Demonstrate Serial communication in I/O transfers


B. Draw the block diagram of DMA and demonstrate how
peripheral devices communicate with DMA.
 The DMA requests control of the buses from the CPU using the bus request (BR) signal.
 The CPU grants control of the buses to the DMA using the bus grant (BG) signal after placing the
address bus, data bus, and read and write lines into a high-impedance state (which behaves like
an open circuit).
 The CPU initializes the DMA by sending the following information through the data bus:
1. Starting address of the memory block for the read or write operation.
2. The word count, which is the number of words in the memory block.
3. Control to specify the mode of transfer, such as read or write.
4. A control to start the DMA transfer.
 The DMA takes control of the buses, interacts directly with the memory and I/O units, and
transfers the data without CPU intervention. When the transfer completes, the DMA disables the
BR line. The CPU then disables the BG line, takes back control of the buses, and returns to its
normal operation.
 The DMA transfer operation is illustrated below:
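The initialization sequence above can be sketched in Python. Register and field names here are illustrative, and memory is modeled as a plain list; a real controller would be programmed through device registers:

```python
# Sketch of a DMA controller: the CPU programs the address register,
# word-count register, and mode, then the controller moves words on its
# own. Names are hypothetical, not a real device's register map.
class DMAController:
    def __init__(self, memory):
        self.memory = memory
        self.address = 0          # starting address register
        self.word_count = 0       # number of words to transfer
        self.mode = "read"        # 'read' = memory -> device

    def start(self, device_buffer=None):
        out = []
        while self.word_count > 0:
            if self.mode == "read":
                out.append(self.memory[self.address])
            else:                 # 'write': device -> memory
                self.memory[self.address] = device_buffer.pop(0)
            self.address += 1
            self.word_count -= 1
        return out
```

The CPU's role ends once `address`, `word_count`, and `mode` are set; the loop in `start` stands in for the transfers the controller then performs without CPU intervention.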
C. Explain the role of software in the priority interrupt system
 A polling procedure is used to identify the highest-priority source by software means.
 In this method there is one common branch address for all interrupts.
 The program that takes care of interrupts begins at the branch address and polls the interrupt
sources in sequence. The order in which they are tested determines the priority of each
interrupt.
 The highest-priority source is tested first, and if its interrupt signal is on, control branches to a
service routine for this source.
 Otherwise, the next-lower-priority source is tested, and so on.
 Thus the initial service routine for all interrupts consists of a program that tests the interrupt
sources in sequence and branches to one of many possible service routines.
 The particular service routine reached belongs to the highest-priority device among all devices
that interrupted the computer.
 The disadvantage of the software method is that if there are many interrupts, the time
required to poll them can exceed the time available to service the I/O device.
 In this situation a hardware priority-interrupt unit can be used to speed up the operation.
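The polling routine described above reduces to a short loop. This is a sketch, with hypothetical source names; a real routine would read interrupt flags from device interfaces:

```python
# Sketch of software polling: interrupt sources are tested in priority
# order, and the first pending source's service routine is selected.
def poll_interrupts(sources):
    """sources: list of (name, pending) tuples, highest priority first."""
    for name, pending in sources:
        if pending:
            return name       # branch to this source's service routine
    return None               # no source pending (spurious interrupt)
```

The order of the list is the priority: even if several sources are pending at once, the one tested first wins, which is exactly how the software scheme establishes priority.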
3.A. Illustrate various priority interrupt schemes in I/O transfer.
1. Polling:
Polling is the software method of establishing priority among simultaneous interrupts. In this method,
when the processor detects an interrupt, it branches to an interrupt service routine whose job is to
poll each I/O module to determine which module caused the interrupt.
2. Daisy-Chaining Priority:
The daisy-chaining method of establishing priority among interrupt sources uses hardware, i.e., it is
the hardware means of establishing priority.
In this method, all the devices, whether they are interrupt sources or not, are connected in a serial
manner: the device with the highest priority is placed in the first position, followed by devices of
successively lower priority.
All devices share a common interrupt request line, and the interrupt acknowledge line is daisy-
chained through the modules.
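The daisy-chain logic can be modeled in a few lines. This is a software sketch of the hardware behavior, with made-up vector addresses: each device forwards the acknowledge only if it is not requesting, so the first requesting device in the chain claims it.

```python
# Sketch of daisy-chain acknowledge propagation: each device computes
# PO = PI AND (NOT request); the first requesting device that sees PI = 1
# claims the acknowledge and places its vector address (VAD) on the bus.
def daisy_chain(devices):
    """devices: list of (request, vad) pairs in chain order, highest first."""
    pi = True                        # CPU asserts interrupt acknowledge
    for request, vad in devices:
        if pi and request:
            return vad               # chain blocked here; VAD driven
        pi = pi and not request      # pass acknowledge to next device
    return None
```

Physical position in the chain is the priority: a requesting device never sees the acknowledge if a device ahead of it is also requesting.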
B . Explain Cache coherence and also demonstrate various methods
of solving of cache coherence problem
Cache coherence is the uniformity of shared resource data that ends up stored in multiple local
caches. When clients in a system maintain caches of a common memory resource, problems may arise
with incoherent data, which is particularly the case with CPUs in a multiprocessing system.
Solutions to the Cache Coherence Problem
 Various schemes have been proposed to solve the cache coherence problem in shared
memory multiprocessors.
 A simple scheme is to disallow private caches for each processor and have a shared cache
memory associated with main memory.
 Every data access is made to the shared cache.
 This method violates the principle of closeness of CPU to cache and increases the average
memory access time.
 In effect, this scheme solves the problem by avoiding it.
 For performance considerations it is desirable to attach a private cache to each processor.
 One scheme that has been used allows only nonshared and read-only data to be stored in
caches. Such items are called cachable. Shared writable data are noncachable.
 The compiler must tag data as either cachable or noncachable, and the system hardware
makes sure that only cachable data are stored in caches. The noncachable data remain in main
memory.
C. Demonstrate Interprocessor synchronization by applying
semaphores
 A semaphore can be initialized by means of a test and set instruction in conjunction with a
hardware lock mechanism.
 A hardware lock is a processor generated signal that serves to prevent other processors
from using the system bus as long as the signal is active.
 The test-and-set instruction tests and sets a semaphore and activates the lock mechanism
during the time that the instruction is being executed.
 This prevents other processors from changing the semaphore between the time that the
processor is testing it and the time that it is setting it.
 The semaphore is tested by transferring its value to a processor register R and then it is
set to 1. The value in R determines what to do next.
 If the processor finds that R = 1, it knows that the semaphore was originally set and the
shared resource is not available; if R = 0, the resource is available.
 The semaphore is set to 1 to prevent other processors from accessing memory.
 The processor can now execute the critical section. The last instruction in the program
must clear location SEM to zero to release the shared resource to other processors.
4. A. Differentiate between asynchronous and synchronous data
transfer?
Data transfers over the system bus may be synchronous or asynchronous.
In a synchronous bus, each data item is transferred during a time slice known in advance to both
source and destination units.
Synchronization is achieved by driving both units from a common clock source. An alternative
procedure is to have separate clocks of approximately the same frequency in each unit.
Synchronization signals are transmitted periodically in order to keep all clocks in the system in step
with each other.
In an asynchronous bus, each data item being transferred is accompanied by handshaking control
signals to indicate when the data are transferred by the source and received by the destination.

B. Distinguish between I/O mapped I/O and memory mapped I/O?


 Memory Mapped I/O
1. A memory address is allotted to an I/O device.
2. Any memory-related instruction will be able to access this I/O device, e.g., MOV BX,[3500H].
3. The data from I/O device can also be given to ALU and result given back to I/O device using
single instruction.
 I/O Mapped I/O
1. An I/O address is given to an I/O device.
2. Only IN and OUT instructions can access such devices.
3. ALU operations are not possible directly on I/O data. They are to be first brought into
accumulator.
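A toy address decoder can illustrate the memory-mapped case: device registers sit inside the ordinary address space, so a single decode step routes a load or store either to memory or to a device. The `IO_BASE` boundary here is hypothetical:

```python
# Toy decoder for memory-mapped I/O: one address space serves both
# memory and devices, so ordinary load/store instructions reach either.
IO_BASE = 0xFF00                       # hypothetical device-register region

def decode(address):
    return "device" if address >= IO_BASE else "memory"
```

In an I/O-mapped scheme there would be no such boundary in the memory address space; instead, IN and OUT instructions assert a separate I/O select signal, which is why only those instructions can reach the devices.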
C. Explain different interconnection structures?
Time shared common bus: A time-shared common bus for five processors is shown
in Fig. 13-1 . Only one processor can communicate with the memory or another
processor at any given time. Transfer operations are conducted by the processor
that is in control of the bus at the time. Any other processor wishing to initiate a
transfer must first determine the availability status of the bus, and only after the
bus becomes available can the processor address the destination unit to initiate the
transfer. A command is issued to inform the destination unit what operation is to
be performed. The receiving unit recognizes its address in the bus and responds to
the control signals from the sender, after which the transfer is initiated. The system
may exhibit transfer conflicts since one common bus is shared by all processors.
These conflicts must be resolved by incorporating a bus controller that establishes
priorities among the requesting units.
Multiport Memory:
A multiport memory system employs separate buses between each memory module and each
CPU. This is shown in Fig. 13-3 for four CPUs and four memory modules (MMs). Each processor
bus is connected to each memory module. A processor bus consists of the address, data, and
control lines required to communicate with memory. The memory module is said to have four
ports, and each port accommodates one of the buses. The module must have internal control
logic to determine which port will have access to memory at any given time. Memory access
conflicts are resolved by assigning fixed priorities to each memory port. The priority for
memory access associated with each processor may be established by the physical port
position that its bus occupies in each module. Thus CPU 1 will have priority over CPU 2, CPU 2
will have priority over CPU 3, CPU 3 over CPU 4, and CPU 4 will have the lowest priority.

Cross-Bar Switch:
The crossbar switch organization consists of a number of crosspoints that are placed at intersections
between processor buses and memory module paths. Figure 13-4 shows a crossbar switch
interconnection between four CPUs and four memory modules. The small square in each crosspoint is a
switch that determines the path from a processor to a memory module. Each switch point has control
logic to set up the transfer path between a processor and memory. It examines the address that is
placed on the bus to determine whether its particular module is being addressed. It also resolves
multiple requests for access to the same memory module on a predetermined priority basis. Figure 13-5
shows the functional design of a crossbar switch connected to one memory module. The circuit consists
of multiplexers that select the data, address, and control from one CPU for communication with the
memory module. Priority levels are established by the arbitration logic to select one CPU when two or
more CPUs attempt to access the same memory. The multiplexers are controlled by the binary code
that is generated by a priority encoder within the arbitration logic.
Multistage switching network:

The basic component of a multistage network is a two-input, two-output interchange switch. As shown
in Fig. 13-6, the 2 x 2 switch has two inputs, labeled A and B, and two outputs, labeled 0 and 1. There
are control signals (not shown) associated with the switch that establish the interconnection between
the input and output terminals. The switch has the capability of connecting input A to either of the
outputs. Terminal B of the switch behaves in a similar fashion. The switch also has the capability to
arbitrate between conflicting requests. If inputs A and B both request the same output terminal, only
one of them will be connected; the other will be blocked.
Many different topologies have been proposed for multistage switching networks to control processor-
memory communication in a tightly coupled multiprocessor system or to control the communication
between the processing elements in a loosely coupled system. One such topology is the omega
switching network shown in Fig. 13-8. In this configuration, there is exactly one path from each source
to any particular destination. Some request patterns, however, cannot be connected simultaneously.
For example, any two sources cannot be connected simultaneously to destinations 000 and 001. A
particular request is initiated in the switching network by the source, which sends a 3-bit pattern
representing the destination number. As the binary pattern moves through the network, each level
examines a different bit to determine the 2 x 2 switch setting. Level 1 inspects the most significant bit,
level 2 inspects the middle bit, and level 3 inspects the least significant bit. When the request arrives on
either input of the 2 x 2 switch, it is routed to the upper output if the specified bit is 0 or to the lower
output if the bit is 1.
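The routing rule above is easy to check in code. This sketch computes, for a 3-bit destination, which output ("upper" for a 0 bit, "lower" for a 1 bit) each level's switch selects:

```python
# Sketch of omega-network routing: level 1 reads the most significant
# bit of the destination, level 3 the least significant; a 0 bit routes
# the request to the upper switch output, a 1 bit to the lower output.
def omega_route(destination):
    settings = []
    for level in range(3):                 # level 0 here = "level 1" above
        bit = (destination >> (2 - level)) & 1
        settings.append("lower" if bit else "upper")
    return settings
```

Because each level consumes one destination bit, there is exactly one path from any source to any destination, as the text notes.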
Hypercube system:
The hypercube or binary n-cube multiprocessor structure is a loosely coupled system composed of
N = 2^n processors interconnected in an n-dimensional binary cube. Each processor forms a node of the
cube. Figure 13-9 shows the hypercube structure for n = 1, 2, and 3. A one-cube structure has n = 1
and 2^1 = 2. It contains two processors interconnected by a single path. A two-cube structure has n = 2
and 2^2 = 4. It contains four nodes interconnected as a square. A three-cube structure has eight nodes
interconnected as a cube. An n-cube structure has 2^n nodes with a processor residing in each node.
Each node is assigned a binary address in such a way that the addresses of two neighbors differ in
exactly one bit position. For example, the three neighbors of the node with address 100 in a three-
cube structure are 000, 110, and 101. Each of these binary numbers differs from address 100 by one
bit value.
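The neighbor rule above (addresses differing in exactly one bit) is a one-line XOR computation per dimension:

```python
# Neighbors of a hypercube node: flip each of the n address bits in turn.
def hypercube_neighbors(node, n):
    return sorted(node ^ (1 << bit) for bit in range(n))
```

For node 100 in a three-cube this reproduces the example in the text: flipping each bit of `0b100` gives 101, 110, and 000.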
