
Acknowledgement


I express my sincere gratitude to the Department of Electrical and Electronics, Nepal
Engineering College, for providing me with the opportunity to undertake this research
project. I wholeheartedly thank my subject teacher, Er. Prayag Khadka, for his
constant guidance in shaping the idea for the research topic and for patiently clearing
our doubts. His support has been of immense help in preparing this research project
report. Last but not least, I wish to avail myself of this opportunity to express my
gratitude and love to all our friends and well-wishers for their unconditional support
and encouragement.























Chapter 1 Introduction

1.1 Definition:
Instruction pipelining is one of the common organizational approaches for
improving processor performance. More specifically, an instruction pipeline
can be defined as a technique used in the design of computers and other
digital electronic devices to enhance their instruction throughput, i.e., to
increase the number of instructions that can be executed in a unit of time.
The fundamental idea is to split the processing of a computer instruction
into a series of independent steps, with storage at the end of each step.
This allows the computer's control circuitry to issue instructions at the
processing rate of the slowest step, which is much faster than the time
needed to perform all the steps at once. The term pipeline refers to the fact
that each step carries data at once (like water), and each step is connected
to the next (like the links of a pipe).
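The idea of independent steps with a storage register after each one can be sketched in a few lines of Python. This is an illustrative model, not part of the report; the three stage names and string-tagging are hypothetical, chosen only to make the data movement visible.

```python
# Model a pipeline as a chain of stages with a register ("storage at the
# end of each step") between consecutive stages. Each clock tick, every
# stage works on its input register and the results shift one place right.

def stage_fetch(instr):
    return f"fetched({instr})"

def stage_decode(x):
    return f"decoded({x})"

def stage_execute(x):
    return f"executed({x})"

STAGES = [stage_fetch, stage_decode, stage_execute]

def run_pipeline(instructions):
    regs = [None] * len(STAGES)          # one register per stage
    results = []
    # Feed the instructions, then enough empty ticks to drain the pipe.
    stream = list(instructions) + [None] * len(STAGES)
    for item in stream:
        out = regs[-1]                   # value leaving the last stage
        # Shift from the back so each stage reads its predecessor's
        # old output (all stages operate "in parallel" on one tick).
        for i in range(len(STAGES) - 1, 0, -1):
            regs[i] = STAGES[i](regs[i - 1]) if regs[i - 1] is not None else None
        regs[0] = STAGES[0](item) if item is not None else None
        if out is not None:
            results.append(out)
    return results

print(run_pipeline(["i1", "i2", "i3"]))
```

Once the pipe is full, one finished instruction emerges per tick even though each individual instruction still passes through all three stages.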
1.2 Pipelining strategy:
Instruction processing involves a number of stages, and the pipeline can be
subdivided into stages according to the speedup we require.
The simplest approach is two-stage instruction pipelining.
1.2.1 Two-stage instruction pipelining:
The instruction processing is subdivided into two stages, namely fetch
instruction and execute instruction. There are times during the
execution of an instruction when main memory is not being
accessed. This time can be used to fetch the next instruction in
parallel with the execution of the current one.
The block diagram below clarifies the two-stage instruction pipelining.


Fig. 1 Expanded view of two-stage instruction pipeline
As seen in the block diagram, the pipeline consists of two independent
stages. The first stage fetches an instruction and buffers it. When the
second stage is free, the first stage passes it the buffered instruction.
While the second stage is executing the instruction, the first stage takes
advantage of any unused memory cycles to fetch and buffer the next
instruction. This is called instruction prefetch or fetch overlap. If
the fetch and execute stages were of equal duration, the instruction
cycle time would be halved.
This strategy is supposed to speed up instruction execution, but on closer
inspection the doubling of the execution rate is unlikely, for two reasons:
1. Execution time will generally be longer than fetch time, so the fetch
stage may have to wait for some time before it can empty its buffer.
2. A conditional branch instruction makes the address of the next
instruction to be fetched unknown. Thus, the fetch stage must
wait until it receives the next instruction address from the
execute stage, and the execute stage may in turn have to wait
while the next instruction is fetched.
Though these factors reduce the effectiveness of the two-stage
pipeline, some speedup still occurs. To gain more speedup, the
pipeline needs to be subdivided into more stages.
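The effect of the first limitation can be checked with a back-of-the-envelope model. The cycle counts below (fetch = 1, execute = 2) are assumed values, used only to illustrate why the speedup falls short of doubling when execution takes longer than fetching.

```python
# Two-stage fetch/execute overlap vs. strictly sequential processing.

FETCH_TIME = 1   # cycles to fetch one instruction (assumed)
EXEC_TIME = 2    # cycles to execute one instruction (assumed, > fetch)

def unpipelined_time(n):
    # Fetch and execute happen strictly one after another.
    return n * (FETCH_TIME + EXEC_TIME)

def two_stage_time(n):
    # Fetch of instruction i+1 overlaps execution of instruction i.
    # After the first fetch, the (slower) execute stage is the bottleneck,
    # so the fetch stage regularly waits with a full buffer.
    return FETCH_TIME + n * EXEC_TIME

n = 100
t_seq, t_pipe = unpipelined_time(n), two_stage_time(n)
print(t_seq, t_pipe, round(t_seq / t_pipe, 2))  # speedup well below 2
```

With these numbers the speedup is about 1.5x rather than 2x, matching the argument above: the pipeline runs at the rate of its slowest stage.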

1.2.2 Four-stage instruction pipeline:
Figure 2 below shows how the instruction cycle in the CPU can be
processed with a four-segment pipeline. While an instruction is being
executed in segment four, the next instruction in sequence is busy
fetching an operand from memory in segment three. The effective
address (EA) can be calculated in a separate ALU circuit for the third
instruction, and whenever the memory is available, the fourth and all
subsequent instructions can be fetched and placed in an instruction
FIFO. Thus, up to four sub-operations in the instruction cycle can
overlap, and up to four different instructions can be in progress
at the same time.

Fig. 2 Four-stage CPU instruction pipeline

1.2.3 Six-stage instruction pipeline:
In general, the six-stage instruction pipeline can be divided into the
following stages [3]:
Fetch Instruction (FI): Reads the next instruction into a buffer.
Decode Instruction (DI): Determines the opcode and the operand
specifiers.
Calculate Operands (CO): Calculates the effective address of each
source operand. This may involve displacement, register indirect,
indirect or other forms of address calculation.
Fetch Operands (FO): Fetches each operand from memory.
Operands in registers need not be fetched.
Execute Instruction (EI): Performs the indicated operation and stores
the result, if any, in the specified destination operand location.
Write Operand (WO): Stores the result in memory.
With this decomposition, the various stages will be of more nearly
equal duration. The following timing diagram shows how a six-stage
pipeline can reduce the execution time for 9 instructions from 54 time
units to 14 time units.

        1  2  3  4  5  6  7  8  9  10 11 12 13 14
Instr 1 FI DI CO FO EI WO
Instr 2    FI DI CO FO EI WO
Instr 3       FI DI CO FO EI WO
Instr 4          FI DI CO FO EI WO
Instr 5             FI DI CO FO EI WO
Instr 6                FI DI CO FO EI WO
Instr 7                   FI DI CO FO EI WO
Instr 8                      FI DI CO FO EI WO
Instr 9                         FI DI CO FO EI WO
Fig. 3 Timing diagram for instruction pipeline operation
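The figures in the diagram follow from a simple relation: with k stages and one instruction entering the pipeline per time unit (and no hazards), n instructions complete in k + (n - 1) time units instead of n * k. A quick sketch, not from the report, checks the numbers:

```python
# Ideal pipeline timing: n instructions through a k-stage pipeline.

def unpipelined_cycles(n, k):
    # Each instruction passes through all k stages before the next starts.
    return n * k

def pipelined_cycles(n, k):
    # The first instruction takes k cycles to fill the pipe; after that,
    # one instruction completes per cycle.
    return k + (n - 1)

n, k = 9, 6
print(unpipelined_cycles(n, k))  # 54 time units, as in the text
print(pipelined_cycles(n, k))    # 14 time units, as in Fig. 3
```

The ratio n*k / (k + n - 1) approaches k for large n, which is why finer subdivision into stages promises more speedup, at least in the absence of hazards.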


Chapter 2 Current trends/status
2.1 Brief history
Math processing (super-computing) began in earnest in the late 1970s with
vector processors and array processors. These were usually very large, bulky
super-computing machines that needed special environments and super-cooling
of the cores. The origin of pipelining is thought to be either the ILLIAC II
project or the IBM Stretch project, though a simple version was used earlier
in the Z1 in 1939 and the Z3 in 1941. [1] One of the early supercomputers was
the Cyber series built by Control Data Corporation. Cray Research developed
the X-MP line of supercomputers, using pipelining for both multiply and
add/subtract functions. Later, Star Technologies took pipelining to another
level by adding parallelism (several pipelined functions working in parallel),
developed by their engineer, Roger Chen. In 1984, Star Technologies made
another breakthrough with the pipelined divide circuit, developed by James
Bradley. By the mid-1980s, super-computing had taken off with offerings from
many different companies around the world.
2.2 Current status
Today, pipelined circuits are embedded inside most microprocessors.
Pipelining has helped achieve the following advantages:
The cycle time of the processor is reduced, thus increasing the instruction
issue rate in most cases.
Some combinational circuits, such as adders or multipliers, can be made
faster by adding more circuitry. If pipelining is used instead, it can save
circuitry compared with a more complex combinational circuit.
Many designs include pipelines as long as 7, 10 and even 20 stages (as in the
Intel Pentium 4). The later "Prescott" and "Cedar Mill" Pentium 4 cores (and
their Pentium D derivatives) had a 31-stage pipeline, the longest in
mainstream consumer computing. The Xelerated X10q network processor has a
pipeline more than a thousand stages long. [2] The downside of a long
pipeline is that when a program branches, the processor cannot know where to
fetch the next instruction from and must wait until the branch instruction
finishes, leaving the pipeline behind it empty. In the extreme case, the
performance of a pipelined processor could theoretically approach that of an
un-pipelined processor, or even be slightly worse, if all but one pipeline
stage are idle and a small overhead is present between stages.
In certain applications, such as supercomputing, programs are specially
written to branch rarely, so very long pipelines can speed up computation
by reducing cycle time. If branching happens constantly, reordering
instructions such that those more likely to be needed are placed into the
pipeline can significantly reduce the speed losses associated with having
to flush failed branches.
Hence we can see that pipelining brings about some complications too. The
advantages of a non-pipelined processor over a pipelined processor can be
summed up in the following points:
A non-pipelined processor executes only a single instruction at a time. This
prevents branch delays (in effect, every branch is delayed) and problems with
serial instructions being executed concurrently. Consequently, the design is
simpler and cheaper to manufacture.
The instruction latency in a non-pipelined processor is slightly lower than
in a pipelined equivalent, because extra flip-flops must be added to the data
path of a pipelined processor.
A non-pipelined processor has a stable instruction bandwidth. The
performance of a pipelined processor is much harder to predict and may vary
more widely between different programs.





Chapter 3 Future enhancements
As discussed in the earlier section, instruction pipelining has some disadvantages too,
which need to be properly reviewed so that they do not bring about complications.
3.1 Pipeline hazards
1. Resource conflict: Caused by access to memory by two segments at the
same time. Most of these conflicts can be resolved by using separate
instruction and data memories.
2. Data dependency: Arises when an instruction depends on the result of a
previous instruction, but this result is not yet available.
3. Branch difficulties: Arise from branch and other instructions that change
the value of the program counter.
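The data dependency hazard can be made concrete with a small sketch, not from the report. The instruction encoding (destination register plus source registers) and the stall distance of 2 cycles are assumptions chosen only for illustration of how a simple pipeline would detect a read-after-write dependency and insert "bubble" cycles.

```python
# Count the stall cycles a simple in-order pipeline would insert for
# read-after-write dependencies between adjacent instructions.

# Each instruction: (destination register, tuple of source registers)
program = [
    ("r1", ("r2", "r3")),   # r1 = r2 op r3
    ("r4", ("r1", "r5")),   # reads r1 -> depends on the instruction above
    ("r6", ("r7", "r8")),   # independent
]

STALL_CYCLES = 2  # assumed gap between producing EI and consuming FO

def count_stalls(prog):
    stalls = 0
    for prev, curr in zip(prog, prog[1:]):
        dest, _ = prev
        _, srcs = curr
        if dest in srcs:          # read-after-write dependency detected
            stalls += STALL_CYCLES
    return stalls

print(count_stalls(program))  # 2 stall cycles for this program
```

Real pipelines reduce such stalls with forwarding paths, but the detection logic follows the same pattern: compare each instruction's sources against the destinations of instructions still in flight.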
3.2 Handling of branch instructions
One of the major problems in designing an instruction pipeline is assuring a
steady flow of instructions to the initial stages of the pipeline. The primary
impediment is the conditional branch instruction: until the instruction is
actually executed, it is impossible to determine whether the branch will be
taken or not.
Various approaches can be used to deal with branch instructions, but the
effective ones are discussed below:
1. Prefetch target instruction: One way of handling a conditional branch
is to prefetch the target instruction in addition to the instruction
following the branch. Both are saved until the branch is executed. If the
branch is taken, the pipeline continues from the branch target
instruction.
2. Branch prediction: A pipeline with branch prediction uses some
additional logic to guess the outcome of a conditional branch
instruction before it is executed. The pipeline then begins prefetching
the instruction stream from the predicted path.

3. Delayed branch: In this procedure, the compiler detects the branch
instruction and rearranges the machine-language code sequence by
inserting useful instructions that keep the pipeline operating without
interruption.
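The branch prediction approach above can be sketched with a classic 2-bit saturating counter, a common scheme, though the report does not name a specific one; the initial counter state is an assumption. Counter values 0-1 predict not-taken, 2-3 predict taken, so a single surprise does not flip a well-established prediction.

```python
# Minimal 2-bit saturating-counter branch predictor.

class TwoBitPredictor:
    def __init__(self):
        self.counter = 2  # start at "weakly taken" (assumed initial state)

    def predict(self):
        # 0,1 -> predict not taken; 2,3 -> predict taken
        return self.counter >= 2

    def update(self, taken):
        # Nudge the counter toward the actual outcome, saturating at 0 and 3.
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

# A typical loop branch: taken eight times, then falls through once.
outcomes = [True] * 8 + [False]
p = TwoBitPredictor()
correct = 0
for taken in outcomes:
    if p.predict() == taken:
        correct += 1
    p.update(taken)
print(f"{correct}/{len(outcomes)} predictions correct")  # 8/9
```

Only the final fall-through is mispredicted; a correct prediction lets the pipeline keep prefetching down the right path, while a misprediction forces a flush like the untaken-path cases discussed above.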





























Chapter 4 Conclusion

Instruction pipelining is a powerful technique for enhancing performance, but it
requires careful design to achieve optimum results with reasonable complexity. With
appropriate use of instruction pipelining, processor performance can be enhanced
significantly; with haphazard use, however, unnecessary complications arise that
subsequently limit processor performance.



























References

[1] Raul Rojas (1997). "Konrad Zuse's Legacy: The Architecture of the Z1 and Z3".
IEEE Annals of the History of Computing.
[2] http://www.mdronline.com/watch/watch_Issue.asp?Volname=Issue+%23171&on=1#item13
[3] William Stallings (2010). Computer Organization and Architecture. Pearson
Education.
