
CS-421 Parallel Processing BE (CIS) Batch 2004-05

Handout_12
Parallelism in Algorithms – Bernstein’s Conditions
One way to detect parallelism in a sequential algorithm is to look for operations that can be
carried out independently of each other.
Let I(s) = input set of statement¹ s, i.e., the set of all variables read by s, and O(s) = output set of
statement s, i.e., the set of all variables written by s.
Two statements Si and Sj are (data) independent if ALL of the following conditions hold:
I(Sj) ∩ O(Si) = ∅ (flow independence)
O(Si) ∩ O(Sj) = ∅ (output independence)
I(Si) ∩ O(Sj) = ∅ (anti independence)
That is, two statements Si and Sj can be executed in parallel (denoted Si || Sj) if
[I(Sj) ∩ O(Si)] ∪ [O(Si) ∩ O(Sj)] ∪ [I(Si) ∩ O(Sj)] = ∅
These are called Bernstein's conditions and may be applied at different levels of granularity. For
example, the statements S may be individual machine instructions, blocks of code, or entire
procedures.
In general, a set of statements (processes) S1, S2, …, Sk can execute in parallel if Bernstein's
conditions are satisfied on a pair-wise basis, i.e., S1 || S2 || … || Sk iff Si || Sj ∀ i ≠ j.
Properties of the Parallelism Operator ||
• || is commutative, i.e., Si || Sj ⇒ Sj || Si
• || is NOT transitive, i.e., Si || Sj and Sj || Sk do not imply Si || Sk
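The conditions above can be checked mechanically from the read/write sets. The following is a minimal sketch (the function name `independent` and the example read/write sets are ours, not from the handout); the third call also illustrates the non-transitivity noted above:

```python
def independent(I_i, O_i, I_j, O_j):
    """Bernstein's conditions: Si and Sj may run in parallel
    iff all three intersections below are empty."""
    flow   = I_j & O_i   # Sj reads what Si writes
    output = O_i & O_j   # both statements write the same variable
    anti   = I_i & O_j   # Si reads what Sj writes
    return not (flow | output | anti)

# S1: A := B + C   reads {B, C}, writes {A}
# S2: D := E * 2   reads {E},    writes {D}
# S3: B := E + 1   reads {E},    writes {B}
print(independent({'B', 'C'}, {'A'}, {'E'}, {'D'}))  # S1 || S2 -> True
print(independent({'E'}, {'D'}, {'E'}, {'B'}))       # S2 || S3 -> True
print(independent({'B', 'C'}, {'A'}, {'E'}, {'B'}))  # S1 || S3 -> False (anti: S1 reads B, S3 writes B)
```

Here S1 || S2 and S2 || S3 hold, yet S1 || S3 fails, which is exactly the non-transitivity of ||.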

These dependences are usually portrayed in the form of a data-dependence graph. This is a
directed graph having as many nodes as there are procedures (or statements, depending upon the
granularity of the data dependence analysis).
e.g. 1
s1 : A := x + B;
s2 : C := A * 3;
s2 is flow dependent on s1, indicated by an arrow directed from s1 to s2.

e.g. 2
s1 : A := x + B;
s2 : A := 3;
s2 is output dependent on s1, indicated by an arrow directed from s1 to s2 with a small circle
anywhere on the arrow.

e.g. 3
s1 : B := A + 3;
s2 : A := 3;
s2 is anti dependent on s1, indicated by an arrow directed from s1 to s2 with a small line across
the arrow.
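The edge labels of such a graph can be computed directly from the read/write sets. A minimal sketch (the function name `classify` is ours), applied to the three examples above:

```python
def classify(I_i, O_i, I_j, O_j):
    """Return the dependence type(s) of Sj on Si from their read/write sets."""
    deps = []
    if I_j & O_i: deps.append('flow')    # plain arrow s1 -> s2
    if O_i & O_j: deps.append('output')  # arrow with a small circle on it
    if I_i & O_j: deps.append('anti')    # arrow with a small line across it
    return deps

# e.g. 1: s1: A := x + B;  s2: C := A * 3;
print(classify({'x', 'B'}, {'A'}, {'A'}, {'C'}))  # ['flow']
# e.g. 2: s1: A := x + B;  s2: A := 3;
print(classify({'x', 'B'}, {'A'}, set(), {'A'}))  # ['output']
# e.g. 3: s1: B := A + 3;  s2: A := 3;
print(classify({'A'}, {'B'}, set(), {'A'}))       # ['anti']
```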

¹ Here statement is used in quite a general sense: it can stand for a procedure, process,
instruction, etc., whatever the level of granularity of the data-dependence analysis, as
explained in class.

Compilers can use Bernstein's conditions to generate parallel code from sequential programs.
However, this approach alone is not very effective because it cannot expose the hidden parallelism
that is usually uncovered by restructuring the operations. As an example of code restructuring,
recall how loop unrolling did the trick while scheduling code for a VLIW machine.
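To recall the idea, here is a small sketch (ours, not from the handout) of how restructuring exposes parallelism that Bernstein's conditions alone cannot find: naive accumulation makes every iteration flow dependent on the previous one, while splitting the accumulator yields partial sums that touch disjoint variables and are therefore mutually independent:

```python
data = list(range(8))

# Sequential form: each iteration reads and writes `total`,
# so iteration i is flow dependent on iteration i-1.
total = 0
for x in data:
    total += x

# Restructured form: the two partial sums use disjoint variables,
# so by Bernstein's conditions the two halves could run in parallel.
even = sum(data[0::2])  # elements at even indices
odd  = sum(data[1::2])  # elements at odd indices
assert even + odd == total
```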

In addition to the data dependence discussed above, we must also examine resource and control
dependences as explained below:
Resource Independence
There’s a resource dependence between two statements (or instructions) if they need the same
resource for execution. For instance, if there’s only one adder in a machine, then two statements
requiring addition cannot be executed in parallel, though they may be data independent.
Control Independence
If the execution of statement S2 depends on some condition tested in statement S1 (for example, S1
is a branch), then S2 is said to be control dependent on S1.
For parallel execution, two (or more) statements must be independent in every regard, i.e., they
must be control independent, data independent, and resource independent.
******
