
[SX-4 SERIES TECHNICAL HIGHLIGHTS]

Outline of Message Passing Interface MPI/SX


Shoichi Sakon, Manager
Masanori Tamura, Assistant Manager
Language Processor Development Department, 1st Computers Software Division, Computers Software Operations Unit,
NEC Corporation

INTRODUCTION

Computers are handling more and more complex problems as they attain higher processing speeds, but the demand for even greater processing power knows no limits. To satisfy this demand, the Supercomputer SX-4 Series supports parallel processing with High Performance Fortran (HPF) and a Message Passing Interface (MPI), not only for the conventional shared memory but also for distributed memory. This article focuses on parallel processing for distributed memory with a look at MPI/SX, a product for the SX Series.

PARALLEL PROCESSING FOR DISTRIBUTED MEMORY

Message Passing Interface (MPI)
Message passing has long been studied as a method of implementing parallel processing for distributed memory, and a number of interfaces of this type have been proposed. In 1994, these proposals were combined to form a new interface specification called MPI. Programming with a message passing interface imposes a considerable burden on the user, because everything from data placement to data transfer must be coded. Despite this disadvantage, a message passing interface brings out the best from a system if tuned appropriately, and MPI has become popular with a large number of users as a de facto standard.

High Performance Fortran (HPF)
HPF is based on the data parallel concept, in which a program is divided for parallel execution according to data placement specified by the user. This data parallel language was developed from Fortran to provide parallel processing functions for distributed memory. The user is only required to specify the placement of the distributed data; an HPF compiler then automatically handles data transfer and other operations. However, the current HPF is not yet widely used, because its functions are still too limited to create application software easily. (HPF will be discussed in more detail at a later date.)

MPI Programming
In MPI programming, a unit of processing is described in the MPI specifications as an MPI process. However, MPI process generation is not defined in the MPI specifications but depends on the processing system. (Most MPI implementations fix the number of MPI processes at program activation.) Within the program, the user obtains the number of processes and the identity of each process (called its rank), and data is sent and received using the following two main methods:
(1) Point-to-point communication, where a process transmitting data and a process receiving data are paired
(2) Collective operation (communication or synchronous processing), where several processes share common data
MPI can be used with the Fortran or C language (Fig. 1).

Fig. 1 Simple MPI Program
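
Fig. 1 itself is not reproduced in this text version. As an illustrative sketch (ours, not the original figure), a minimal C program of this kind obtains the number of processes and its own rank, then exercises both communication methods; it assumes it is run with at least two MPI processes:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs, value = 0;
        MPI_Status status;

        MPI_Init(&argc, &argv);                 /* join the set of MPI processes */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank           */
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* number of MPI processes       */

        /* (1) Point-to-point communication: ranks 0 and 1 are paired. */
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        }

        /* (2) Collective operation: every process receives rank 0's value. */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("rank %d of %d has value %d\n", rank, nprocs, value);
        MPI_Finalize();
        return 0;
    }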
Features of MPI/SX
MPI/SX is an MPI product for the Supercomputer SX-4 Series designed to support a variety of SX system configurations to satisfy user requirements. In a multinode system, MPI/SX allows distributed parallel processing using MPI between nodes with shared-memory parallel processing within a node, or distributed parallel processing both between and within nodes (Fig. 2). The product also allows distributed parallel processing using MPI in a single-node system. In addition, MPI/SX has the following features:

Fig. 2 Uses of MPI/SX (nodes linked by the IXS: either MPI both inter-node and intra-node, or MPI inter-node combined with microtasking/automatic parallelization intra-node)

(1) Full support for MPI-1
MPI/SX supports all the functions of the currently popular MPI-1. This means that MPI programs can be ported easily from other systems.

(2) High performance
The global memory function (SX-GM) enables several processes to share memory, and the internode crossbar switch (IXS) links nodes at high speed. By making effective use of these functions, MPI/SX enhances the communication performance that is critical for MPI. MPI communication performance is expressed by the fixed start-up time (latency), which does not depend on the volume of data communicated, and the sustained speed (throughput), which is proportional to that volume. MPI/SX provides high performance in terms of both latency and throughput (Table 1).

Table 1 MPI/SX (SUPER-UX R8.1) Performance

                 Latency      Throughput
    Intra-node    9.2 µsec    7.2 Gbytes/s
    Inter-node   19.8 µsec    4.5 Gbytes/s
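
To put these figures together (a rough model of our own, not from the article): the time to transfer n bytes can be estimated as T(n) ≈ latency + n / throughput. For a 1-Mbyte intra-node message this gives about 9.2 µsec + 10^6 bytes / 7.2 Gbytes/s ≈ 9.2 µsec + 139 µsec ≈ 148 µsec, so the fixed start-up cost dominates only for short messages.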



(3) Tool linkage
MPI/SX programs can be compiled, linked, and executed from PSUITE, an integrated program development environment popular for its ease of use (Fig. 3). PSUITE can also be used to activate the VAMPIR(*) tool to display the MPI communication status graphically over time (Fig. 4).

Fig. 3 PSUITE

Fig. 4 VAMPIR
Outlook for MPI Standardization
The MPI Forum, whose role is to define and maintain the MPI standard, has announced a new version of the MPI specifications, MPI-2. In addition to the conventional functions, this new version has a dynamic process generation function to create an MPI process from a program, and a one-sided communication function to read data from or write data to the memory space of a remote process.
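
As an illustrative sketch of the one-sided style (ours, not from the article; MPI/SX's own MPI-2 support was still planned at the time of writing), rank 0 below writes an integer directly into a memory window exposed by rank 1; again at least two processes are assumed:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, buf = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each process exposes one integer to remote reads and writes. */
        MPI_Win_create(&buf, (MPI_Aint)sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);            /* open the access epoch          */
        if (rank == 0) {
            int value = 99;
            /* Write into rank 1's window; rank 1 makes no matching call. */
            MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);            /* complete all pending transfers */

        if (rank == 1)
            printf("rank 1 received %d via MPI_Put\n", buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }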

CONCLUSION
NEC will improve MPI/SX further to attain even higher communication performance. To provide a more efficient system with less programming load on the user, MPI/SX will fully support MPI-2 and have stronger linkage with the PSUITE tools.

(*) VAMPIR is a registered trademark of Forschungszentrum Jülich GmbH and applies to a product developed by Pallas of Germany.

Contact:

Language Processor Development Department, 1st
Computers Software Division, Computers Software
Operations Unit, NEC Corporation
10, Nisshincho 1-chome, Fuchu-shi, Tokyo 183-8501
Japan
Tel : +81-423-33-5370
Fax: +81-423-33-1962
E-mail: sakon@lang1.bsl.tc.nec.co.jp
tmr@lang1.bsl.tc.nec.co.jp

C&C Research Laboratories, NEC Europe Ltd.
Rathausallee 10, 53757 Sankt Augustin, Germany
Tel : +49-2241-9252-95
Fax: +49-2241-9252-99
E-mail: hempel@ccrl-nece.technopark.gmd.de
ritzdorf@ccrl-nece.technopark.gmd.de
falk@ccrl-nece.technopark.gmd.de
