17 November 2003 © Dr. Jie Song & Mr. Deepak, Sun APSTC 2
Agenda
• A Set of 8 Questions
• History and Goals
• Dissecting an MPI Program
• Sending and Receiving Messages
A Set of Eight Questions
• Q1: What is MPI?
• Q2: Why? What problem does MPI solve?
• Q3: What are the alternatives to MPI?
• Q4: Is MPI really better?
• Q5: Which languages are “MPI-compatible”?
• Q6: What do you need to run MPI programs?
• Q7: How do you compile and run MPI programs?
• Q8: What happens when you run MPI programs?
Q1: What is MPI?
• Message Passing Interface
• What is the message?
DATA
• Allows data to be passed between processes in a
distributed memory environment
Q2: Why / What problem does MPI solve?
INTER-PROCESS/JOB COMMUNICATION
Q3: What are the alternatives to MPI?
• PVM
• IPC on Unix
• Threads
Q4: Is MPI really better?
TINA
There Is No Alternative
• “BEST OUT THERE”
• Threads/IPC don't work in a distributed environment
• PVM is too old and not heavily supported
• Not really “Write Once, Run Anywhere”
Q5: Which languages are MPI-compatible?
C & FORTRAN
Q6: What do you need?
Implementation: MPICH (Open Source)
Platform: multiple platforms
http://www.cs.kent.edu/~farrell/dist/ref/implementations.html
Q7: How do you compile and run?
MPICH (4 processors)
myhome@myhost> $MPIR_HOME/bin/mpicc -o helloworld helloworld.c
A HYPOTHETICAL CLUSTER
• Linux box (2 processors)
• SunFire v240 (2 processors)
• SunFire 4800 (12 processors)
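The run step does not survive on this slide; with MPICH it would look roughly like the sketch below (the `machines` host file name and the process count of 4 are illustrative assumptions):

```shell
# Compile with the MPICH wrapper compiler, as shown above
$MPIR_HOME/bin/mpicc -o helloworld helloworld.c

# Launch 4 processes across the cluster hosts listed in a machine file
$MPIR_HOME/bin/mpirun -np 4 -machinefile machines helloworld
```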
Q8: What happens when you run MPI programs?
Agenda
• A Set of 8 Questions
• History and Goals
• Dissecting an MPI Program
• Sending and Receiving Messages
A Brief History of MPI
• First message passing interface standard
– Successor to PVM
• 60 people from 40 different organizations
• MPI 1.1: 1992-94
• MPI 2.0: 1995-97
• Standards documents
– http://www.mcs.anl.gov/mpi/index.html
– http://www.mpi-forum.org/docs/docs.html
Goals and Scope of MPI
• MPI's prime goals are:
– To provide source code portability
– To allow efficient implementation
• It also offers:
– A great deal of functionality
– Support for heterogeneous parallel architectures
Agenda
• A Set of 8 Questions
• History and Goals
• Dissecting an MPI Program
• Sending and Receiving Messages
Dissecting an MPI Program
• MPI Program Structure
• Dissecting an MPI Program
• MPI Communicator
• Handles and MPI data types
• MPI function format
MPI Program Structure
• Header file
• Initializing MPI
• Finding size (number of processes)
• Finding rank (unique id of each process)
• Main Program
– Message Passing
• Finalizing MPI
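The structure above can be sketched as a minimal C program (the canonical MPI calls, with error checking omitted for brevity):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int size, rank;

    MPI_Init(&argc, &argv);               /* initializing MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* size: number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank: unique id of this process */

    /* main program: message passing goes here */
    printf("Process %d of %d\n", rank, size);

    MPI_Finalize();                       /* finalizing MPI */
    return 0;
}
```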
Dissecting an MPI Program
Header file
#include <mpi.h>
MPI Communicator
• Programmer's view: a group of processes that are allowed to communicate with each other
• MPI_COMM_WORLD is the predefined communicator containing all processes
Handles / MPI Data Types
• MPI has its own internal data structures called handles
• MPI releases handles so that programs can refer to these internal structures

MPI Data Type        C Data Type
MPI_CHAR             signed char
MPI_SHORT            signed short int
MPI_INT              signed int
MPI_LONG             signed long
MPI_UNSIGNED_CHAR    unsigned char
MPI_UNSIGNED_SHORT   unsigned short int
MPI_UNSIGNED         unsigned int
MPI_UNSIGNED_LONG    unsigned long int
MPI Function Format
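The body of this slide appears to be lost; the general C binding pattern (as defined by the MPI standard) is that every routine returns an int error code, equal to MPI_SUCCESS on success:

```
error = MPI_Xxxxx(parameter, ...);
```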
Agenda
• A Set of 8 Questions
• History and Goals
• Dissecting an MPI Program
• Sending and Receiving Messages
Sending & Receiving Messages
• Messages
• Types of Communication
• Point-to-Point Communication
• Definitions
• Communication Modes
• Routine Names (Blocking)
• MPI_Send & MPI_Recv
• Wildcarding & Message Count
• Example Program
Messages
• A message contains an array of elements of some particular data type
• MPI datatypes:
– Basic
– Derived
• The programmer declares variables to have “normal” C/Fortran types but uses matching MPI data types as arguments in MPI routines
• This is the mechanism that handles type conversion in a heterogeneous collection of machines
• WARNING: the MPI data type specified in the receive must match the MPI data type specified in the send
Types of Communication
• Point-to-point vs. broadcast
• Communication modes: blocking and non-blocking
Point-to-point Communication
Definitions
• “Completion” of the communication means that
memory locations used in the message transfer can
be safely accessed
– Send: the variable sent can be reused after completion
– Receive: the variable received can now be used
Routine Names (Blocking)
Send       MPI_Send
Receive    MPI_Recv
MPI_Send
int MPI_Send(
    void *buf,             /* starting address of the data to be sent */
    int count,             /* number of elements to be sent */
    MPI_Datatype datatype, /* MPI datatype of the elements */
    int dest,              /* rank of destination process */
    int tag,               /* message marker (set by user) */
    MPI_Comm comm          /* communicator */
);
MPI_Recv
int MPI_Recv(
    void *buf,             /* starting address of the receive buffer */
    int count,             /* maximum number of elements to receive */
    MPI_Datatype datatype, /* MPI datatype of the elements */
    int source,            /* rank of source process */
    int tag,               /* message marker (set by user) */
    MPI_Comm comm,         /* communicator */
    MPI_Status *status     /* information about the received message */
);
For a communication to succeed
• Sender must specify a valid destination rank
• Receiver must specify a valid source rank
• The communicator must be the same
Wildcarding
• Only the receiver can use wild cards
• To receive from any source
MPI_ANY_SOURCE
• To receive with any tag
MPI_ANY_TAG
• Actual source and tag are returned in the receiver's
status handle
– status.MPI_SOURCE
– status.MPI_TAG
Received Message Count
• The message received may not fill the receive buffer
• count is the number of elements actually received
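The actual count is obtained from the receive status with MPI_Get_count, a standard MPI routine; a small sketch (the helper function name is ours):

```c
#include <mpi.h>

/* After a receive, query how many elements actually arrived. */
int received_count(MPI_Status *status, MPI_Datatype datatype) {
    int count;
    MPI_Get_count(status, datatype, &count); /* elements actually received */
    return count;
}
```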
Timers
• Time is measured in seconds
• Time taken to perform a task is obtained by
calling the below function before & after and
subtracting the two values
double MPI_Wtime(void);
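A minimal timing sketch using MPI_Wtime as described above:

```c
#include <mpi.h>
#include <stdio.h>

void time_task(void) {
    double t_start = MPI_Wtime(); /* wall-clock time in seconds */
    /* ... task to be measured ... */
    double t_end = MPI_Wtime();
    printf("Task took %f seconds\n", t_end - t_start);
}
```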
Example – to pass the integer 10000
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int i_send, i_recv;
    int rank, size;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        i_send = 10000;
        MPI_Send(&i_send, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&i_recv, 1, MPI_INT, MPI_ANY_SOURCE,
                 MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        printf("My rank is %d and I received %d\n", rank, i_recv);
        /* output: My rank is 1 and I received 10000 */
    }

    MPI_Finalize();
    return 0;
}
Summary
• What, Why, How?
• Compiling and Running
• Structure of an MPI Program
• Handles, functions
• Types of communication (blocking/non-
blocking)
• MPI_Send & MPI_Recv
• Wildcards, Timers
What has not been covered?
• Synchronous / buffered / ready send
• Non-blocking communication
• Broadcast communication (MPI_Bcast)
• Derived data types
• Virtual topologies
http://alliance.osc.edu/mpi/
Thank You
Contact Us
jie.song@sun.com
deepak.jeevankumar@sun.com
info@apstc.sun.com.sg
http://apstc.sun.com.sg