Short questions
1. Define microinstruction format?
A. The microinstruction format is divided into four functional parts. The three fields F1, F2, and
F3 specify microoperations for the computer. The CD field selects the condition for branching,
the BR field specifies the type of branch, and the AD field holds the branch address.
LONG ANSWERS
The control variables are separated from the register transfer operation by specifying a control
function. A control function is a Boolean variable that is equal to 1 or 0. The control function is
included in the statement as follows:
P: R2 <-- R1
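A minimal sketch of how such a control function gates a transfer (the register names follow the statement above; the clock model is an illustrative assumption):

```python
# Sketch: the transfer R2 <- R1 takes place only while control function P is 1.
def clock_edge(registers, P):
    """Apply the statement  P: R2 <- R1  on one clock transition."""
    if P == 1:                      # control function must be active
        registers["R2"] = registers["R1"]
    return registers

regs = {"R1": 7, "R2": 0}
clock_edge(regs, P=0)   # P = 0: no transfer, R2 stays 0
clock_edge(regs, P=1)   # P = 1: R2 receives the content of R1
print(regs["R2"])       # -> 7
```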
C) Microinstructions are stored in control memory in groups, with each group specifying a
routine. Each computer instruction has its own microprogram routine in control memory to
generate the microoperations that execute the instruction. The hardware that controls the
address sequencing of the control memory must be capable of sequencing the
microinstructions within a routine and be able to branch from one routine to another.
The microoperation steps to be generated in processor registers depend on the operation
code part of the instruction. Each instruction has its own microprogram routine stored in a
given location of control memory. The transformation from the instruction code bits to an
address in control memory where the routine is located is referred to as a mapping process.
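One simple mapping scheme (the one used in Mano's example computer) places the 4-bit operation code between a leading 0 and two trailing 0s to form a 7-bit control memory address, so each instruction's routine starts at a multiple of four words. A sketch:

```python
def map_opcode_to_address(opcode):
    """Map a 4-bit opcode to a 7-bit control memory address: 0 | opcode | 00.
    Each instruction's routine therefore starts at opcode * 4."""
    assert 0 <= opcode < 16
    return opcode << 2          # same as inserting the opcode between 0 and 00

# Opcode 1011 (decimal 11) maps to address 0101100 (decimal 44).
print(format(map_opcode_to_address(0b1011), "07b"))   # -> 0101100
```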
3. A mapping process from the bits of the instruction to an address for control memory
ANSWER:
A) CAR (Control Address Register): The control address register specifies the address of
the microinstruction, and the control data register holds the microinstruction read from memory.
The microinstruction contains a control word that specifies one or more microoperations for the
data processor. Once these operations are executed, the control must determine the next address.
The location of the next microinstruction may be the one next in sequence, or it may be located
somewhere else in the control memory. For this reason it is necessary to use some bits of the present
microinstruction to control the generation of the address of the next microinstruction. The next
address may also be a function of external input conditions.
It is possible to include the block of instructions that constitute a subroutine at every place where it is
needed in the program. However, to save space, only one copy of the instructions that constitute the
subroutine is placed in the memory, and any program that requires the use of the subroutine simply
branches to its starting location. When a program branches to a subroutine we say that it is calling the
subroutine. The instruction that performs this branch operation is named a Call instruction.
B)
The block diagram of the computer consists of two memory units: a main memory for
storing instructions and data, and a control memory for storing the microprogram. Four
registers are associated with the processor unit and two with the control unit.
The processor registers are program counter PC, address register AR, data register DR, and
accumulator register AC. The function of these registers is similar to those in the basic computer. The
control unit has a control address register CAR and a subroutine register SBR. The control memory and
its registers are organized as a microprogrammed control unit.
C)
This figure shows the three decoders and some of the connections that must be made from their
outputs. Each of the three fields of the microinstruction presently available at the output of control
memory is decoded with a 3 x 8 decoder to provide eight outputs. Each of these outputs must be
connected to the proper circuit to initiate the corresponding microoperation. For example, when F1 =
101 (binary 5), the next clock pulse transition transfers the content of DR(0-10) to AR. Similarly, when
F1 = 110 (binary 6) there is a transfer from PC to AR (symbolized by PCTAR). As shown in the figure,
outputs 5 and 6 of decoder F1 are connected to the load input of AR so that when either one of these
outputs is active, information from the multiplexers is transferred to AR. The multiplexers select the
information from DR when output 5 is active and from PC when output 5 is inactive. The transfer into
AR occurs with a clock pulse transition only when output 5 or output 6 of the decoder is active. The
other outputs of the decoders that initiate transfers between registers must be connected in a similar
fashion.
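The decoding and multiplexer selection described above can be sketched as a simplified software model (not the actual hardware; the bit widths follow the text):

```python
def f1_effect(F1, DR, PC, AR):
    """Model outputs 5 and 6 of the F1 decoder feeding the AR multiplexer.
    Output 5 (F1 = 101) selects DR(0-10); output 6 (F1 = 110) selects PC."""
    load_AR = F1 in (0b101, 0b110)      # either decoder output activates AR's load input
    if load_AR:
        # Multiplexer: DR(0-10) when output 5 is active, PC otherwise.
        AR = DR & 0x7FF if F1 == 0b101 else PC
    return AR

print(f1_effect(0b101, DR=0xFFFF, PC=0, AR=0))   # DRTAR: low 11 bits of DR -> 2047
print(f1_effect(0b110, DR=0, PC=2048, AR=0))     # PCTAR -> 2048
```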
Unit:4
Short Answers
1. What is the purpose of valid bit in cache?
A. When data is loaded into a particular cache block, the corresponding valid bit is set to 1. So
the cache contains more than just copies of the data in memory; it also has bits to help us find
data within the cache and verify its validity.
2. What is a page?
A. The term page refers to groups of address space of the same size.
4. Define TLB?
A. A translation lookaside buffer (TLB) is a memory cache that is used to reduce the time taken to
access a user memory location. It is a part of the chip's memory-management unit (MMU).
The TLB stores the recent translations of virtual memory to physical memory and can be called
an address-translation cache.
7. Define segment?
A. A segment is a set of logically related instructions or data elements associated with a given name.
Segments may be generated by the programmer or by the operating system. Examples of
segments are a subroutine, an array of data, a table of symbols, or a user's program.
Long answers:
Magnetic Disks A magnetic disk is a circular plate constructed of metal or plastic coated with
magnetized material. Often both sides of the disk are used and several disks may be stacked on one
spindle with read/write heads available on each surface. All disks rotate together at high speed and are
not stopped or started for access purposes. Bits are stored in the magnetized surface in spots along
concentric circles called tracks. The tracks are commonly divided into sections called sectors. In most
systems, the minimum quantity of information which can be transferred is a sector. The subdivision of
one disk surface into tracks and sectors is shown in Fig. 12-5.
The different types of DRAM are used for different applications as a result of their slightly varying
properties. The different types are summarised below:
Asynchronous DRAM: Asynchronous DRAM is the basic type of DRAM on which all other
types are based. Asynchronous DRAMs have connections for power, address inputs, and
bidirectional data lines.
Synchronous DRAM (SDRAM): In SDRAM, all signals are tied to the clock, so timing is
much tighter and better controlled.
B. What are the different secondary storage devices? Elaborate
on any one of the devices
The most common forms of secondary storage devices are:
Floppy disks
Hard disks
Solid state disks
Some of the most common secondary storage devices are magnetic hard drives, long used
in laptop and desktop computers.
They use magnetic heads to store and read data on spinning metal disks known as platters.
They're generally the first place used to store data on a computer, starting with the operating
system.
More recently, computer manufacturers have started to ship more devices with what are
called solid state drives, or SSDs.
SSDs don't have moving parts like spinning platters.
Instead, they use flash memory, similar to USB flash drives.
They're usually faster and less noisy than hard drives, but they can be more expensive for the
same amount of data storage, so both devices are still currently in use for different
applications.
C. Explain Demand paging technique in detail with an
example?
Demand Paging: Demand paging is a technique of virtual memory
management. A virtual memory system is a combination of hardware and
software techniques. The memory management software handles all
the software operations for the efficient utilization of memory space. It must
decide
(1) which page in main memory ought to be removed to make room for a new
page,
(2) when a new page is to be transferred from auxiliary memory to main
memory, and
(3) where the page is to be placed in main memory. The hardware
mapping mechanism and the memory management software together
constitute the architecture of a virtual memory.
When a program starts execution, one or more pages are transferred into
main memory and the page table is set to indicate their position. The program
is executed from main memory until it attempts to reference a page that is
still in auxiliary memory. This condition is called a page fault. When a page
fault occurs, the execution of the present program is suspended until the
required page is brought into main memory. In demand paging, a page is
copied into physical memory from auxiliary memory only when it is required.
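The sequence above can be sketched as follows (the page-table layout and data values are illustrative assumptions; page replacement is omitted):

```python
# Sketch: demand paging with a page fault. A page is brought into main
# memory from auxiliary memory only when it is first referenced.
page_table = {}          # page number -> block number (present pages only)
auxiliary = {0: "page0", 1: "page1", 2: "page2"}
main_memory = {}
next_free_block = 0

def reference(page):
    global next_free_block
    if page not in page_table:               # presence check fails: page fault
        block = next_free_block              # (no replacement needed in this sketch)
        next_free_block += 1
        main_memory[block] = auxiliary[page] # transfer page from auxiliary memory
        page_table[page] = block             # update the page table
    return main_memory[page_table[page]]

reference(2)       # page fault: page 2 loaded into block 0
reference(2)       # hit: page already present, no transfer
print(page_table)  # -> {2: 0}
```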
Virtual memory is a concept used in some large computer systems that permits the user to construct
programs as though a large memory space were available, equal to the totality of auxiliary memory.
Each address that is referenced by the CPU goes through an address mapping from the so-called virtual
address to a physical address in main memory. Virtual memory is used to give programmers the illusion
that they have a very large memory at their disposal, even though the computer actually has a
relatively small main memory. A virtual memory system provides a mechanism for translating program-
generated addresses into correct main memory locations. This is done dynamically, while programs are
being executed in the CPU. The translation or mapping is handled automatically by the hardware by
means of a mapping table.
An address used by a programmer will be called a virtual address, and the set of such addresses the
address space. An address in main memory is called a location or physical address. The set of such
locations is called the memory space. Thus the address space is the set of addresses generated by
programs as they reference instructions and data; the memory space consists of the actual main
memory locations directly addressable for processing. In most computers the address and memory
spaces are identical. The address space is allowed to be larger than the memory space in computers
with virtual memory. A table is then needed, as shown in Fig. 12-17, to map a virtual address of 20 bits
to a physical address of 15 bits. The mapping is a dynamic operation, which means that every address is
translated immediately as a word is referenced by the CPU.
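A sketch of this 20-bit-to-15-bit mapping, assuming 1K-word pages (so a 10-bit page number maps to a 5-bit block number while the 10-bit line address passes through unchanged; the page size is an assumption matching the usual textbook example):

```python
PAGE_BITS = 10                 # 1K words per page -> low 10 bits are the line address

def translate(virtual_addr, page_table):
    """Map a 20-bit virtual address to a 15-bit physical address.
    The page table maps a 10-bit page number to a 5-bit block number;
    the line address within the page is unchanged."""
    page = virtual_addr >> PAGE_BITS
    line = virtual_addr & ((1 << PAGE_BITS) - 1)
    block = page_table[page]            # page-fault handling omitted in this sketch
    return (block << PAGE_BITS) | line

table = {3: 1}                            # example entry: page 3 resides in block 1
print(translate((3 << 10) | 25, table))   # line 25 of page 3 -> line 25 of block 1 -> 1049
```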
B) EXPLAIN ABOUT SEGMENTED PAGE MAPPING?
• Segmentation and paging can be combined together to get paged segmentation or segmented
paging system
• The following technique is used to map a logical address to a physical address in this case
• The two mapping tables (segment and page) may be stored in separate small memories or in main memory
• To avoid the extra memory references this requires, a fast associative memory is used to hold the most
recently referenced table entries: the TLB (translation lookaside buffer)
• The mapping process is first attempted by associative search with the given segment
and page number
• When a block is referenced, its value along with the corresponding page and segment
number is stored in the TLB
• If no match occurs, the table mapping is used and the result is also stored in
the TLB
Hexadecimal address   Page no.   Segment   Page   Block
60000                 0          6         00     012
60100                 1          6         01     000
60200                 2          6         02     019
60300                 3          6         03     053
60400                 4          6         04     A61
MEMORY PROTECTION
3.c) A two-way set-associative cache memory uses blocks of 4 words.
The cache can accommodate a total of 2048 words from main memory.
The main memory size is 128K x 32. What is the size of the cache memory?
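A worked sketch of one common way to solve this, following the standard set-associative address breakdown (tag | set index | word offset); the arithmetic below is my derivation, not given in the source:

```python
# Worked sketch for problem 3.c.
main_memory_words = 128 * 1024                       # 128K x 32 -> 17-bit word address
address_bits = main_memory_words.bit_length() - 1    # 17
words_per_block = 4                                  # -> 2-bit word offset
ways = 2
cache_words = 2048

blocks = cache_words // words_per_block              # 512 blocks
sets = blocks // ways                                # 256 sets
offset_bits = 2
index_bits = (sets - 1).bit_length()                 # 8-bit set index
tag_bits = address_bits - index_bits - offset_bits   # 17 - 8 - 2 = 7

# Each block stores a 7-bit tag plus 4 data words of 32 bits each.
bits_total = sets * ways * (tag_bits + words_per_block * 32)
print(tag_bits, bits_total)   # -> 7 69120
```

So under this organization the cache needs a 7-bit tag per block and 69,120 bits of storage in total.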
4.A)Discuss about various page replacement algorithms?
Page Replacement Algorithms
There are variety of Page replacement algorithms. Some of the most common and important
Page Replacement Algorithms are :-
FIFO (First In First Out): It is the simplest page-replacement algorithm. As the name suggests,
when a page has to be replaced, the oldest page is chosen: the page at the head of the queue is
replaced, and the new page is inserted at the tail of the queue. The first-in, first-out (FIFO) page
replacement algorithm is a low-overhead algorithm and requires very little bookkeeping on the
part of the operating system.
LRU (Least Recently Used): In this algorithm, the page that has not been used for the longest
period of time is replaced.
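Both policies can be sketched in one simulation (a simplified model assuming a fixed number of frames; the reference string is illustrative):

```python
from collections import OrderedDict

def count_faults(refs, frames, policy="FIFO"):
    """Count page faults for a reference string under FIFO or LRU."""
    mem = OrderedDict()                 # insertion order tracks age/recency
    faults = 0
    for page in refs:
        if page in mem:
            if policy == "LRU":
                mem.move_to_end(page)   # LRU: a hit refreshes the page's recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False) # evict oldest (FIFO) / least recently used (LRU)
            mem[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4]
print(count_faults(refs, 3, "FIFO"), count_faults(refs, 3, "LRU"))   # -> 7 6
```

With three frames, LRU suffers one fewer fault on this string because the hit on page 0 keeps it resident.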
Note that the line address in address space and memory space is the same; the only mapping
required is from a page number to a block number. The organization of the memory mapping
table in a paged system is shown in Fig. 12-19. The memory-page table consists of eight words,
one for each page. The address in the page table denotes the page number and the content of
the word gives the block number where that page is stored in main memory. A
presence bit in each location indicates whether the page has been transferred from auxiliary
memory into main memory. A 0 in the presence bit indicates that this page is not available in
main memory.
The content of word in the memory page table at the page number address is read out into the
memory table buffer register.
If the presence bit in the word read from the page table is 0, it signifies that the content of the
word referenced by the virtual address does not reside in main memory. A call to the operating
system is then generated to fetch the required page from auxiliary memory and place it into
main memory before resuming computation.
The match of bit j of the argument with bit j of word i is

    x_j = A_j F_ij + A_j' F_ij'

and the match signal M_i constitutes the AND operation of all pairs of matched bits in a word:

    M_i = x_1 x_2 x_3 ... x_n

We now include the key bit K_j in the comparison logic. The requirement is that if K_j = 0, the
corresponding bits of A_j and F_ij need no comparison. Only when K_j = 1 must they be compared.
This requirement is achieved by ORing each term with K_j'.
The match logic for word i in an associative memory can now be expressed by the following
Boolean function:

    M_i = (x_1 + K_1')(x_2 + K_2')(x_3 + K_3') ... (x_n + K_n')

Each term in the expression will be equal to 1 if its corresponding K_j = 0. If K_j = 1, the term will
be either 0 or 1 depending on the value of x_j. A match will occur and M_i will be equal to 1 if all
terms are equal to 1.
If we substitute the original definition of x_j, the Boolean function above can be expressed as
follows:

    M_i = PROD(j = 1 to n) (A_j F_ij + A_j' F_ij' + K_j')

where PROD is a product symbol designating the AND operation of all n terms. We need m such
functions, one for each word i = 1, 2, 3, ..., m. The circuit for matching one word is shown in Fig.
12-9. Each cell requires two AND gates and one OR gate. The inverters for A_j and K_j are needed
once for each column and are used for all bits in the column. The outputs of all OR gates in the
cells of the same word go to the input of a common AND gate to generate the match signal
M_i. M_i will be logic 1 if a match occurs and 0 if no match occurs. Note that if the key register
contains all 0's, output M_i will be 1 irrespective of the value of A or the word. This occurrence
must be avoided during normal operation.
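The match function above can be sketched bit by bit (argument A, key K, stored word F, all modeled as n-bit integers):

```python
def match(A, K, F, n):
    """M_i = product over j of (A_j F_j + A_j' F_j' + K_j').
    Bit j is compared only when key bit K_j = 1."""
    for j in range(n):
        a, k, f = (A >> j) & 1, (K >> j) & 1, (F >> j) & 1
        term = (a & f) | ((1 - a) & (1 - f)) | (1 - k)   # x_j + K_j'
        if term == 0:          # one mismatched, unmasked bit kills the match
            return 0
    return 1

# 4-bit example: compare only the two low bits (K = 0011).
print(match(A=0b1010, K=0b0011, F=0b0110, n=4))  # low bits match -> 1
print(match(A=0b1010, K=0b0011, F=0b0101, n=4))  # low bits differ -> 0
```

Note that an all-zero key makes every term 1, reproducing the degenerate always-match case warned about above.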
UNIT 5
Short questions
2) What is DMA?
A: In direct memory access (DMA), the interface transfers data into and out of the memory unit
through the memory bus. The CPU initiates the transfer by supplying the interface with the
starting address and the number of words needed to be transferred and then proceeds to
execute other tasks. When the transfer is made, the DMA requests memory cycles through the
memory bus. When the request is granted by the memory controller, the DMA transfers the
data directly into memory. The CPU merely delays its memory access operation to allow the
direct memory I/O transfer. Since peripheral speed is usually slower than processor speed, I/O-
memory transfers are infrequent compared to processor access to memory.
6) Define polling?
A: A polling procedure is used to identify the highest-priority source by software means. In this
method there is one common branch address for all interrupts. The program that takes care of
interrupts begins at the branch address and polls the interrupt sources in sequence.
10) What is the use of bus grant and bus request signals in DMA?
A:
Bus Request (BR): This signal indicates to the processor that some other device
needs to become the bus master.
Bus Grant (BG): This output signal indicates to all bus master devices that the
processor will relinquish bus control at the end of the current bus cycle.
Long Answers
1. A)Explain why DMA has more priority than CPU when both
request a memory transfer?
We can transfer data directly to and from memory without the need of the CPU.
The transfer of data between a fast storage device such as a magnetic disk and memory is often
limited by the speed of the CPU.
Removing the CPU from the path and letting the peripheral device manage the memory
buses directly improves the speed of transfer.
This transfer technique is called direct memory access(DMA).
During DMA transfer, the CPU is idle and has no control of the memory buses
A DMA controller takes over the buses to manage the transfer directly between the I/O device
and memory.
B. Demonstrate cycle stealing in detail?
(1) Cycle stealing is a method of accessing computer memory (RAM) or the bus without interfering with
the CPU. It is similar to direct memory access (DMA) in allowing I/O controllers to read or write
RAM without CPU intervention.
(2) This is similar to burst transfer mode, but instead of the data being transferred all at once, it is
transferred one byte at a time.
(3) In this method, the transfer rate is slower, but it prevents the CPU from staying idle for a long
period of time.
c. Explain the operation of programmed I/O and also discuss its demerits
Programmed I/O operations are the result of I/O instructions written in the computer
program.
Each data item transfer is initiated by an instruction in the program.
Usually, the transfer is between a CPU register and a peripheral.
Other instructions are needed to transfer the data between the CPU and memory.
Transferring data under program control requires constant monitoring of the peripheral by
the CPU.
Once a data transfer is initiated, the CPU is required to monitor the interface to see when a
transfer can again be made.
In the programmed I/O method, the CPU stays in a program loop until the I/O unit indicates
that it is ready for data transfer.
This is a time-consuming process since it keeps the processor busy needlessly.
Transfer of data under programmed I/O is between the CPU and peripherals.
Advantages: simple to implement; very little hardware support needed.
Disadvantages: busy waiting; ties up the CPU for long periods with no useful work.
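The busy-wait loop described above can be sketched as follows (the device model and its ready-every-third-check behavior are illustrative assumptions):

```python
# Sketch of programmed I/O: the CPU polls a status flag until the device is
# ready, then transfers one data item.
class Device:
    def __init__(self, data):
        self.data = list(data)
        self.ticks = 0
    @property
    def flag(self):                 # "ready" status bit in the interface
        self.ticks += 1
        return self.ticks % 3 == 0  # device is ready only every third check
    def read(self):
        return self.data.pop(0)

def programmed_input(device, count):
    received = []
    for _ in range(count):
        while not device.flag:      # busy waiting: CPU does no useful work here
            pass
        received.append(device.read())
    return received

print(programmed_input(Device([10, 20, 30]), 3))   # -> [10, 20, 30]
```

Every item costs several wasted flag checks, which is exactly the "busy waiting" demerit listed above.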
Cross-Bar switch:
The crossbar switch organization consists of a number of crosspoints that are placed at intersections
between processor buses and memory module paths. Figure 13-4 shows a crossbar switch
interconnection between four CPUs and four memory modules. The small square in each crosspoint is a
switch that determines the path from a processor to a memory module. Each switch point has control
logic to set up the transfer path between a processor and memory. It examines the address that is
placed in the bus to determine whether its particular module is being addressed. It also resolves
multiple requests for access to the same memory module on a predetermined priority basis. Figure 13-5
shows the functional design of a crossbar switch connected to one memory module. The circuit consists
of multiplexers that select the data, address, and control from one CPU for communication with the
memory module. Priority levels are established by the arbitration logic to select one CPU when two or
more CPUs attempt to access the same memory. The multiplexers are controlled with the binary code
that is generated by a priority encoder within the arbitration logic.
Multistage switching network:
The basic component of a multistage network is a two-input, two-output interchange switch. As shown
in Fig. 13-6, the 2 x 2 switch has two inputs, labeled A and B, and two outputs, labeled 0 and 1. There
are control signals (not shown) associated with the switch that establish the interconnection between
the input and output terminals. The switch has the capability of connecting input A to either of the
outputs. Terminal B of the switch behaves in a similar fashion. The switch also has the capability to
arbitrate between conflicting requests. If inputs A and B both request the same output terminal, only
one of them will be connected; the other will be blocked.
Many different topologies have been proposed for multistage switching networks to control processor-
memory communication in a tightly coupled multiprocessor system or to control the communication
between the processing elements in a loosely coupled system. One such topology is the omega
switching network shown in Fig. 13-8. In this configuration, there is exactly one path from each source
to any particular destination. Some request patterns, however, cannot be connected simultaneously.
For example, any two sources cannot be connected simultaneously to destinations 000 and 001. A
particular request is initiated in the switching network by the source, which sends a 3-bit pattern
representing the destination number. As the binary pattern moves through the network, each level
examines a different bit to determine the 2 x 2 switch setting. Level 1 inspects the most significant bit,
level 2 inspects the middle bit, and level 3 inspects the least significant bit. When the request arrives at
either input of the 2 x 2 switch, it is routed to the upper output if the specified bit is 0 or to the lower
output if the bit is 1.
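The destination-bit routing described above can be sketched as follows (for the 8 x 8, three-level omega network of the example):

```python
def omega_route(dest, levels=3):
    """At each level of an 8x8 omega network a request is routed to the
    upper output (0) or lower output (1) according to one destination bit:
    level 1 uses the most significant bit, level 3 the least significant."""
    return [(dest >> (levels - 1 - level)) & 1 for level in range(levels)]

# Routing to destination 110: lower output, lower output, upper output.
print(omega_route(0b110))   # -> [1, 1, 0]
```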
Hypercube system:
The hypercube or binary n-cube multiprocessor structure is a loosely coupled system composed of
N = 2^n processors interconnected in an n-dimensional binary cube. Each processor forms a node of the
cube. Figure 13-9 shows the hypercube structure for n = 1, 2, and 3. A one-cube structure has n = 1
and 2^n = 2. It contains two processors interconnected by a single path. A two-cube structure has n = 2
and 2^n = 4. It contains four nodes interconnected as a square. A three-cube structure has eight nodes
interconnected as a cube. An n-cube structure has 2^n nodes with a processor residing in each node.
Each node is assigned a binary address in such a way that the addresses of two neighbors differ in
exactly one bit position. For example, the three neighbors of the node with address 100 in a three-
cube structure are 000, 110, and 101. Each of these binary numbers differs from address 100 by one
bit value.
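The neighbor rule can be sketched directly: flipping each of the n address bits in turn yields the n neighbors.

```python
def neighbors(node, n):
    """In an n-cube, the neighbors of a node differ from it in exactly one bit."""
    return sorted(node ^ (1 << b) for b in range(n))

# Node 100 in a three-cube: neighbors 000, 101, 110 (as in the text).
print([format(x, "03b") for x in neighbors(0b100, 3)])   # -> ['000', '101', '110']
```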