
Parallel Programming with MPI

Dr. Jie Song


Mr. Deepak Jeevan Kumar
Sun APSTC – Asia Pacific Science & Technology Center
Agenda
• A Set of 8 Questions
• History and Goals
• Dissecting an MPI Program
• Sending and Receiving Messages

A Set of Eight Questions
• Q1: What?
• Q2: Why?
• Q3: What are the alternatives to MPI?
• Q4: Is MPI really better?
• Q5: Which languages are “MPI-compatible”?
• Q6: What do you need to run MPI programs?
• Q7: How do you compile and run MPI programs?
• Q8: What happens when you run MPI programs?

Q1: What is MPI?
• Message Passing Interface
• What is the message? DATA
• Allows data to be passed between processes in a distributed memory environment

Q2: Why / What problem does MPI solve?

INTER-PROCESS/JOB COMMUNICATION

• Single machine, single SMP, multiple machines, multiple SMPs
• Best in a distributed memory environment
• Very popular in cluster/grid environments

Q3: What are the alternatives to MPI?

• PVM
• IPC on Unix
• Threads

Q4: Is MPI really better?

TINA: There Is No Alternative

• "BEST OUT THERE"
• Threads/IPC don't work in a distributed environment
• PVM is too old and not heavily supported
• Not really "Write Once, Run Anywhere"

Q5: Which languages are MPI-compatible?

FORTRAN & C

Q6: What do you need?

Any vendor's implementation of MPI:

Implementation              Platform
MPICH (open source)         Multiple platforms
Sun HPC ClusterTools 5      Solaris
IBM PE for AIX              IBM AIX

http://www.cs.kent.edu/~farrell/dist/ref/implementations.html

Q7: How do you compile and run?

MPICH (4 processors):

myhome@myhost> $MPIR_HOME/bin/mpicc -o helloworld helloworld.c
myhome@myhost> $MPIR_HOME/util/mpirun -np 4 helloworld

Sun HPC ClusterTools 5 (8 processors):

myhome@myhost> cc -o helloworld helloworld.c -lmpi
myhome@myhost> mprun -np 8 helloworld

One process per physical processor is the norm.

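For reference, here is a minimal helloworld.c that these commands could build; the slides do not show this file, so treat it as an illustrative sketch:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                /* start up MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's unique id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello world from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down MPI */
    return 0;
}
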
Q8: What happens when you run MPI programs?

[Diagram: a hypothetical cluster consisting of a 2-processor Linux box, a 2-processor SunFire v240, and a 12-processor SunFire 4800]

Q8: What happens when you run MPI programs?

• Each processor executes its own copy of the MPI executable
• Exactly the same executable is executed on each processor
• Each processor is given a unique identifier called the "rank"
• Use the rank to differentiate between processors (see the sketch below)

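A minimal sketch of rank-based differentiation (the master/worker split here is illustrative, not from the slides):

/* inside an MPI program, after MPI_Comm_rank(MPI_COMM_WORLD, &rank) */
if (rank == 0) {
    /* code executed only by the process with rank 0 */
} else {
    /* code executed by every other process */
}
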
A Brief History of MPI
• First message passing interface standard
  – Successor to PVM
• 60 people from 40 different organizations
• MPI 1.1: 1992-94
• MPI 2.0: 1995-97
• Standards documents
  – http://www.mcs.anl.gov/mpi/index.html
  – http://www.mpi-forum.org/docs/docs.html

Goals and Scope of MPI
• MPI's prime goals are:
  – To provide source code portability
  – To allow efficient implementation
• It also offers:
  – A great deal of functionality
  – Support for heterogeneous parallel architectures

Dissecting an MPI Program
• MPI Program Structure
• Dissecting an MPI Program
• MPI Communicator
• Handles and MPI data types
• MPI function format

MPI Program Structure
• Header file
• Initializing MPI
• Finding size (number of processes)
• Finding rank (unique id of each process)
• Main program
  – Message passing
• Finalizing MPI

Dissecting an MPI Program

/* Header file */
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    /* Initializing MPI: must be the first MPI routine called (only once) */
    MPI_Init(&argc, &argv);

    /* Process rank: uniquely identifies a process, 0 to n-1; used to allow
       different processes to execute different code simultaneously */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Communicator size: the number of processes contained within a
       communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* your code here */

    /* Finalizing MPI: must be called last by all processes */
    MPI_Finalize();
    return 0;
}

MPI Communicator
• Programmer's view: a group of processes that are allowed to communicate with each other
• All MPI communication calls have a communicator argument
• Most often you will use MPI_COMM_WORLD
  – Defined when you call MPI_Init
  – It comprises all of your processes

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);

Handles / MPI Data Types
• MPI has its own internal data structures called handles
• MPI releases handles to allow programmers to access them
• In C, handles are predefined typedefs

MPI Data Type        C Data Type
MPI_CHAR             signed char
MPI_SHORT            signed short int
MPI_INT              signed int
MPI_LONG             signed long int
MPI_UNSIGNED_CHAR    unsigned char
MPI_UNSIGNED_SHORT   unsigned short int
MPI_UNSIGNED         unsigned int
MPI_UNSIGNED_LONG    unsigned long int
MPI_FLOAT            float
MPI_DOUBLE           double
MPI_LONG_DOUBLE      long double
MPI_BYTE             (no C equivalent)
MPI_PACKED           (no C equivalent)

MPI Function Format

error = MPI_Xxxxx(parameter1, parameter2, ...);
MPI_Xxxxx(parameter1, parameter2, ...);

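Every MPI routine returns an error code that can be tested against MPI_SUCCESS. A minimal sketch (the abort-on-failure handling is illustrative; by default MPI's error handler aborts on error anyway):

/* fragment from inside an MPI program */
int rank;
if (MPI_Comm_rank(MPI_COMM_WORLD, &rank) != MPI_SUCCESS) {
    MPI_Abort(MPI_COMM_WORLD, 1);  /* report failure and abort all processes */
}
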
Sending & Receiving Messages
• Messages
• Types of Communication
• Point-to-Point Communication
• Definitions
• Communication Modes
• Routine Names (Blocking)
• MPI_Send & MPI_Recv
• Wildcarding & Message Count
• Example Program

Messages
• A message contains an array of elements of some particular data type
• MPI datatypes:
  – Basic
  – Derived
• The programmer declares variables to have "normal" C/Fortran types but uses matching MPI data types as arguments in MPI routines (see the sketch below)
• This is the mechanism for handling type conversion in a heterogeneous collection of machines
• WARNING: the MPI data type specified in the receive must match the MPI data type specified in the send

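A sketch of this convention (the buffer name, destination rank, and tag are illustrative):

int data[10];                  /* declared with the normal C type */
MPI_Send(data, 10, MPI_INT,    /* described to MPI with the matching MPI datatype */
         1, 0, MPI_COMM_WORLD);
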
Types of Communication
• Point-to-point vs. broadcast
• Communication modes: blocking vs. non-blocking

Point-to-point Communication
• Communication is between two processes only
• Source process sends a message to the destination process
• Destination process receives the message
• Communication takes place within a communicator
• Destination process is identified by its rank in the communicator

Definitions
• "Completion" of the communication means that memory locations used in the message transfer can be safely accessed
  – Send: variable sent can be reused after completion
  – Receive: variable received can now be used
• Communication modes can be blocking or non-blocking
  – Blocking: return from routine implies completion
  – Non-blocking: routine returns immediately, user must test for completion

Communication Modes

Mode               Completion Condition
Synchronous send   Only completes when the receive has completed
Buffered send      Always completes (unless an error occurs), irrespective of the receiver
Standard send      Message sent (receiver state unknown)
Ready send         Always completes (unless an error occurs), irrespective of whether the receive has completed
Receive            Completes when a message has arrived

Routine Names (Blocking)

Mode               MPI Call
Synchronous send   MPI_Ssend
Buffered send      MPI_Bsend
Standard send      MPI_Send
Ready send         MPI_Rsend
Receive            MPI_Recv

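All of the blocking sends take the same argument list as MPI_Send. For example, a synchronous send (buffer, destination rank 1, and tag 99 are illustrative):

double values[100];
MPI_Ssend(values, 100, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
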
MPI_Send

int MPI_Send(
    void *buf,             /* starting address of the data to be sent */
    int count,             /* number of elements to be sent */
    MPI_Datatype datatype, /* MPI datatype of the elements */
    int dest,              /* rank of the destination process */
    int tag,               /* message marker (set by user) */
    MPI_Comm comm          /* MPI communicator of the processes involved */
);

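A usage sketch (buffer name, count, destination rank, and tag are illustrative):

double values[100];
/* send 100 doubles to the process with rank 1, tagged 99 */
MPI_Send(values, 100, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
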
MPI_Recv

int MPI_Recv(
    void *buf,             /* starting address of the receive buffer */
    int count,             /* maximum number of elements to receive */
    MPI_Datatype datatype, /* MPI datatype of the elements */
    int source,            /* rank of the source process */
    int tag,               /* message marker (set by user) */
    MPI_Comm comm,         /* MPI communicator of the processes involved */
    MPI_Status *status     /* status handle (used in case of wildcarding) */
);

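The matching receive for the send sketched above, executed on rank 1 (names again illustrative):

double values[100];
MPI_Status status;
/* receive up to 100 doubles from the process with rank 0, tagged 99 */
MPI_Recv(values, 100, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &status);
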
For a communication to succeed
• Sender must specify a valid destination rank
• Receiver must specify a valid source rank
• The communicator must be the same
• Tags and data types must match
• Receiver's buffer must be large enough

Wildcarding
• Only the receiver can use wildcards
• To receive from any source: MPI_ANY_SOURCE
• To receive with any tag: MPI_ANY_TAG
• The actual source and tag are returned in the receiver's status handle
  – status.MPI_SOURCE
  – status.MPI_TAG

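A sketch of a wildcard receive (variable names are illustrative):

int value;
MPI_Status status;
MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
printf("received from rank %d with tag %d\n",
       status.MPI_SOURCE, status.MPI_TAG);
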
Received Message Count
• The message received may not fill the receive buffer
• count is the number of elements actually received

int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count);

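A usage sketch, continuing the wildcard receive above (the buffer size of 100 is illustrative):

int buffer[100], count;
MPI_Status status;
MPI_Recv(buffer, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_INT, &count);  /* elements actually received */
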
Timers
• Time is measured in seconds
• The time taken to perform a task is obtained by calling the function below before and after the task and subtracting the two values

double MPI_Wtime(void);

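For example (a minimal sketch):

double t_start = MPI_Wtime();
/* ... task to be timed ... */
double elapsed = MPI_Wtime() - t_start;  /* elapsed wall-clock time in seconds */
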
Example – to pass the integer 10000

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int i_send, i_recv;
    int rank, size;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        i_send = 10000;
        MPI_Send(&i_send, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&i_recv, 1, MPI_INT, MPI_ANY_SOURCE,
                 MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        printf("My rank is %d and I received %d\n", rank, i_recv);
        /* output: My rank is 1 and I received 10000 */
    }

    MPI_Finalize();
    return 0;
}

Execute this program with two processes.

Note the following points
• How the rank is used to make different processes execute different parts of the code
• Every process, however, runs the same executable
• Every send and receive should match

Summary
• What, Why, How?
• Compiling and Running
• Structure of an MPI Program
• Handles, functions
• Types of communication (blocking/non-blocking)
• MPI_Send & MPI_Recv
• Wildcards, Timers

What has not been covered?
• Synchronous / buffered / ready send
• Non-blocking communication
• Broadcast communication (MPI_Bcast)
• Derived data types
• Virtual topologies

http://alliance.osc.edu/mpi/

Thank You

Contact Us
jie.song@sun.com
deepak.jeevankumar@sun.com
info@apstc.sun.com.sg
http://apstc.sun.com.sg
