
Question 1
a) A pipelined architecture is a microprocessor architecture that uses pipelining techniques in instruction execution.

An instruction pipeline is a technique used in the design of microprocessors to increase the number of instructions that can be executed in a unit of time (the instruction throughput). Pipelining increases the number of instructions that can be processed at once, thus reducing the delay between completed instructions. In a pipelined architecture, computer instructions are split into a series of independent steps, with storage at the end of each step. Instruction execution is complex and involves several operations that are executed successively. The term pipeline refers to the fact that each step carries data at once, like water, and each step is connected to the next like the links of a pipe. This means multiple instructions can be overlapped in execution.

b) The 8086 microprocessor is organized as two separate main components, called the Bus Interface Unit (BIU) and the Execution Unit (EU). The BIU performs all bus operations such as fetching instructions, reading and writing memory operands, and calculating the addresses of the memory operands. The instruction bytes are transferred to the instruction queue, and the EU executes instructions from this instruction byte queue. Both units operate asynchronously, giving the 8086 an overlapping instruction fetch and execution mechanism (pipelining). This results in efficient use of the system bus and improved system performance. The BIU contains the instruction queue, segment registers, instruction pointer, and address adder. The EU contains the control circuitry, instruction decoder, ALU, pointer and index registers, and flag register. The BIU is the component where most pipelining work is done: it uses a mechanism known as an instruction stream queue to implement a pipelined architecture. This queue permits pre-fetching of up to 6 bytes of instruction code. Whenever the queue is not full (that is, it has room for at least two more bytes) and the EU is not requesting the BIU to read or write operands from memory, the BIU is free to look ahead in the program by pre-fetching the next sequential instruction.
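A rough sketch of this BIU/EU overlap is shown below. It is a simplified model, not real 8086 timing: it assumes a flat list of instruction bytes, that the EU consumes one byte per step, and that the BIU pre-fetches two bytes per step whenever the 6-byte queue has room for at least two more bytes; names such as run and QUEUE_DEPTH are invented for the example.

```python
from collections import deque

QUEUE_DEPTH = 6   # the 8086 BIU queue holds up to 6 bytes of pre-fetched code

def run(program_bytes, steps):
    """Simulate the BIU pre-fetch queue and the EU consuming bytes from it."""
    queue = deque()
    next_fetch = 0     # address of the next byte the BIU will pre-fetch
    executed = []      # bytes the EU has consumed, in order

    for _ in range(steps):
        # BIU: look ahead whenever the queue has room for at least two bytes
        if QUEUE_DEPTH - len(queue) >= 2 and next_fetch < len(program_bytes):
            queue.extend(program_bytes[next_fetch:next_fetch + 2])
            next_fetch += 2
        # EU: in parallel, take the next instruction byte from the queue front
        if queue:
            executed.append(queue.popleft())

    return executed, list(queue)

executed, pending = run(list(range(20)), steps=8)
print("executed:", executed)   # bytes already handed to the EU
print("queued:  ", pending)    # bytes pre-fetched but not yet executed
```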

Question 2
a) ALE (Address Latch Enable) is a pulse to logic 1 that signals external circuitry when a valid address word is on the bus. This address must be latched in external circuitry on the 1-to-0 edge of the pulse at ALE.
b) TEST is an input that is also related to the external interrupt interface. Execution of a WAIT instruction causes the 8086 to check the logic level at the TEST input. If logic 1 is found, the microprocessor suspends operation and goes into the idle state. The 8086 no longer executes instructions; instead, it repeatedly checks the logic level of the TEST input, waiting for its transition back to logic 0. When TEST switches to 0, execution resumes with the next instruction in the program. This feature can be used to synchronize the operation of the 8086 to an event in external hardware.
c) INTR means interrupt request; it is an input to the 8086 that can be used by an external device to signal that it needs to be serviced. Logic 1 at INTR represents an active interrupt request. When an interrupt request has been recognized by the 8086, it indicates this fact to external circuitry with a pulse to logic 0 at the INTA output.
d) DT/R is a logic-level output that signals the direction of data transfer over the bus. When this line is at logic 1 during the data transfer part of a bus cycle, the bus is in transmit mode; data are either written into memory or output to an I/O device. On the other hand, logic 0 at DT/R signals that the bus is in receive mode, which corresponds to reading data from memory or inputting data from an input port.

Question 3
i) There are six main stages in an instruction cycle:
- Fetching the required instruction from memory
- Decoding and interpreting the instruction
- Execution of the decoded instruction by the Arithmetic Logic Unit
- Transfer of control instructions
- Performing arithmetic/logical functions
- Loading and storing the results in memory
ii) Instruction pipelining can be considered as a multiple assembly line, wherein similar processes can be carried out simultaneously. Thus, instruction pipelining is an instruction execution method in which instructions are divided into a number of stages and multiple instructions at different stages are executed at the same time. Furthermore, while one instruction is being executed, another can simultaneously be fetched from memory, which would otherwise sit idle. In instruction pipelining, different stages of different instructions are therefore executed at the same time by making use of different resources of the CPU; this results in efficient processing by the computer.
iii) Pipeline hazards are problems with the instruction pipeline in central processing unit (CPU) microarchitectures that potentially result in incorrect computation. A hazard is a situation in which a correct program ceases to work correctly as a result of implementing the processor with a pipeline. There are three fundamental types of hazard: data hazards, branch hazards, and structural hazards. Data hazards can be further divided into Read After Write, Write After Read, and Write After Write hazards.

Data hazards
Data hazards occur when instructions that exhibit data dependence modify data in different stages of the pipeline. Ignoring potential data hazards can result in race conditions (also known as race hazards), in which reads and writes of data occur in a different order in the pipeline than in the program code. There are three different types of data hazards, defined by the situations in which they occur:
RAW - A Read After Write hazard occurs when, in the code as written, one instruction reads a location after an earlier instruction writes new data to it, but in the pipeline the write occurs after the read, so the instruction doing the read gets stale data.
WAR - A Write After Read hazard is the reverse of a RAW: in the code a write occurs after a read, but in the pipeline the write happens first.
WAW - A Write After Write hazard is a situation in which two writes occur out of order. It is normally only considered a WAW hazard when there is no read in between.
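Below is a minimal sketch (not 8086-specific) of how a RAW hazard produces stale data. It assumes a toy pipeline in which an instruction reads its source registers in its execute stage but only commits its result one cycle later in write-back, with no forwarding or stalling; the register names and values are made up for illustration.

```python
# Toy RAW hazard: I2 reads R1 before I1 has written its result back.
# Program order:  I1: R1 = R0 + 5    I2: R2 = R1 + 1
registers = {"R0": 10, "R1": 0, "R2": 0}

i1_result = registers["R0"] + 5      # cycle 2: I1 executes, result 15 not yet visible
i2_source = registers["R1"]          # cycle 3: I2 reads R1 and gets the stale value 0
registers["R1"] = i1_result          # cycle 3: I1 writes back R1 = 15 (too late for I2)
registers["R2"] = i2_source + 1      # cycle 4: I2 writes back R2 = 1

print(registers)   # {'R0': 10, 'R1': 15, 'R2': 1} -- program order expected R2 == 16
```

In a real pipeline this is avoided by stalling the dependent instruction or by forwarding the result from a later stage back to the execute stage.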

Structural hazards
These occur when a single piece of hardware is used in more than one stage of the pipeline, so it can be needed by two or more instructions at the same time. For example, a single memory unit may be accessed both in the fetch stage, where an instruction is retrieved from memory, and in the memory stage, where data is written and/or read from memory.

Control hazards
Also called branch hazards, these occur when a decision needs to be made, but the information needed to make the decision is not available yet.
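The branch-hazard stall can be pictured with a small chart. The sketch below assumes a classic five-stage pipeline (not the 8086's queue-based scheme) in which the branch outcome is only known after its EX stage, so the two instructions fetched behind a taken branch are squashed into bubbles; the function and instruction names are invented for the example.

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def chart(instructions, branch_index, branch_taken):
    """Print a cycle-by-cycle stage chart; '--' marks squashed (bubble) slots."""
    rows = []
    for i, name in enumerate(instructions):
        # instruction i enters IF at cycle i+1 and advances one stage per cycle
        squashed = branch_taken and branch_index < i <= branch_index + 2
        cells = ["--"] * len(STAGES) if squashed else list(STAGES)
        rows.append([name] + ["  "] * i + cells)
    width = max(len(r) for r in rows) - 1
    print("cycle:     " + " ".join(f"{c:>3}" for c in range(1, width + 1)))
    for row in rows:
        print(f"{row[0]:<10} " + " ".join(f"{s:>3}" for s in row[1:]))

chart(["ADD", "JNZ L1", "MOV", "SUB"], branch_index=1, branch_taken=True)
# MOV and SUB were fetched down the fall-through path; once JNZ is known to be
# taken they become bubbles and fetch restarts at the branch target.
```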

Question 4
a) Advantages of using segment registers in the 8086:
- Segment registers hold the base address of where a particular segment begins in memory. There are the code segment (CS), data segment (DS), stack segment (SS), and extra segment (ES) registers.
- They allow the memory capacity to be 1 megabyte even though the addresses associated with individual instructions are only 16 bits wide.
- They allow the instruction, data, or stack portion of a program to be more than 64K bytes long by using more than one code, data, or stack segment.
- They facilitate the use of separate memory areas for a program, its data, and the stack.
- They permit a program and/or its data to be put into different areas of memory each time the program is executed.

b) i) To calculate a physical address, you take the offset (called the logical address) and add it to the segment base address. For example, to calculate the physical address that corresponds to logical address 1356H in the stack segment, you also need to know what value is in the SS register; let us assume it is 2345H:
Append a zero to the end of the segment address (shift it left by one hex digit): 23450H
Then add the two addresses together: 23450H + 1356H = 247A6H
ii) Advantages and disadvantages
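As a small check on the calculation above, the 20-bit physical address can be formed in code by shifting the 16-bit segment value left by 4 bits (one hex digit) and adding the 16-bit offset. This is a minimal sketch; the function name is invented for the example.

```python
def physical_address(segment, offset):
    """Return the 20-bit 8086 physical address for a segment:offset pair."""
    return ((segment << 4) + offset) & 0xFFFFF   # keep only 20 bits

# Worked example from above: SS = 2345H, logical (offset) address = 1356H
print(f"{physical_address(0x2345, 0x1356):05X}")   # prints 247A6
```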
