InfiniBand-over-Distance Transport using Low-Latency WDM Transponders & IB Credit Buffering
Christian Illmer
ADVA Optical Networking
October 2008
[Chart: Connectivity performance, 1980-2010. Line rates roughly double every 18 months, growing from 100 Mbit/s towards 10 Tbit/s; WDM leads, followed by InfiniBand (4x DDR, 12x QDR), Ethernet and Fibre Channel.]
InfiniBand link signalling rates:

        IBx1          IBx4          IBx12
SDR     2.5 Gbit/s    10 Gbit/s     30 Gbit/s
DDR     5 Gbit/s      20 Gbit/s     60 Gbit/s
QDR     10 Gbit/s     40 Gbit/s     120 Gbit/s
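The per-lane signalling rates behind this table are 2.5 Gbit/s (SDR), 5 Gbit/s (DDR) and 10 Gbit/s (QDR); with 8b/10b line coding the usable data rate is 80% of the signalling figure. A minimal sketch of how the entries are derived (the 8b/10b factor is general InfiniBand background, not stated on this slide):

```c
#include <stdio.h>

/* Per-lane signalling rates in Gbit/s for the three InfiniBand generations. */
static const double lane_rate[] = { 2.5, 5.0, 10.0 };      /* SDR, DDR, QDR   */
static const char  *gen_name[]  = { "SDR", "DDR", "QDR" };
static const int    widths[]    = { 1, 4, 12 };             /* x1, x4, x12     */

int main(void)
{
    for (int g = 0; g < 3; g++) {
        for (int w = 0; w < 3; w++) {
            double signalling = lane_rate[g] * widths[w];
            double data       = signalling * 0.8;   /* 8b/10b coding overhead */
            printf("IBx%-2d %s: %6.1f Gbit/s signalling, %6.1f Gbit/s data\n",
                   widths[w], gen_name[g], signalling, data);
        }
    }
    return 0;
}
```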
[Chart: Evolution of interconnect standards from 100 Mbit/s to 100 Gbit/s: Ethernet (10bT, FE, GbE, 10GbE, 40GbE, 100GbE), SDH/OTN (STM-1/4/16/64, OTU3), storage and mainframe links (ESCON, FICON, ISC, 1G- to 10G-FC, 8G-FC, Ultra160/Ultra320 SCSI, HDD, ETR/CLO) and InfiniBand (IBx1 SDR up to IBx12 QDR).]
CPU connectivity market
Market penetration of different CPU interconnect technologies
InfiniBand is clearly dominating new high-end data center implementations
[Chart: Interconnect share of the TOP 100 supercomputers, 2006 vs. 2007, across GbE, InfiniBand, Myrinet, SP Switch, Quadrics, Cray Interconnect, NUMAlink, proprietary, mixed and crossbar fabrics; InfiniBand reaches 37% in '07 and 50% in '08.]
Unified Fabric
[Diagram: Today's server cluster carries separate IB, FC and Ethernet adapters per server, attached to the FC SAN and the Ethernet LAN; relevant parameters are compared per fabric.]
[Diagram: InfiniBand software stack: BSD Sockets over TCP/IP or IPoIB/SDP, MPI/uDAPL/DAT/TS API, NFS-RDMA, file system and SCSI over SRP/FCP, all on VAPI and the InfiniBand HCA; Ethernet and FC gateways connect the IB switch fabric to the LAN/WAN and the SAN.]
[Diagram: Unified fabric: the server cluster carries IB HCAs only, with FC gateways bridging the IB fabric to the SAN; annotated as unlikely to be deployed on a broad scale.]
[Diagram: Two IB switch fabrics, each serving an IB server cluster, interconnected by IB-over-DWDM across more than 50 km of dark fiber.]
Why is it relevant?
Data centers are dispersing geographically (GRID computing, virtualization, disaster recovery, ...)
Native, low-latency IB-over-distance transport was still the missing piece
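InfiniBand link-level flow control is credit based: a port may only transmit as long as the far end has advertised free buffer space. Over a long fiber span the credit round trip grows with distance, so without extra buffering the transmitter stalls waiting for credit returns; this is the gap the credit buffering in the WDM transponder is meant to close. A rough sketch of the bandwidth-delay product involved, assuming a 4x SDR data rate of 8 Gbit/s and about 5 µs of propagation delay per km (illustrative figures, not taken from the demonstrator):

```c
#include <stdio.h>

int main(void)
{
    const double data_rate_gbps = 8.0;   /* 4x SDR: 10 Gbit/s signalling, 8 Gbit/s data */
    const double fiber_delay_us = 5.0;   /* ~5 us of propagation delay per km of fiber  */

    for (int km = 0; km <= 100; km += 25) {
        /* Credits are only replenished after a full round trip. */
        double rtt_us          = 2.0 * km * fiber_delay_us;
        double in_flight_bytes = data_rate_gbps * 1e9 * rtt_us * 1e-6 / 8.0;

        printf("%3d km: RTT %6.0f us -> %8.0f bytes must be buffered to keep the link full\n",
               km, rtt_us, in_flight_bytes);
    }
    return 0;
}
```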
[Diagram: Demonstrator setup: an IBM cluster and a Cell cluster, each behind a DWDM terminal, connected over 0.4 to 100.4 km of G.652 SSMF.]
Demonstrator results
The Intel MPI Benchmarks SendRecv test was used (a minimal sketch of the underlying MPI call is shown below)
Constant performance up to 50 km
Decreasing performance beyond 50 km
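The Intel MPI Benchmarks themselves are not reproduced here; the sketch below only shows the MPI_Sendrecv exchange that the SendRecv test exercises between neighbouring ranks, with an illustrative message size and iteration count (not the benchmark's actual parameters):

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int count = 512 * 1024;               /* 512 kB messages, illustrative value */
    const int iters = 100;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *sendbuf = malloc(count);
    char *recvbuf = malloc(count);
    int   peer    = (rank + 1) % size;          /* exchange with the neighbouring rank */

    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        /* Simultaneous send and receive, as exercised by the SendRecv benchmark. */
        MPI_Sendrecv(sendbuf, count, MPI_CHAR, peer, 0,
                     recvbuf, count, MPI_CHAR, peer, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg throughput: %.3f GB/s\n",
               2.0 * count * iters / (t1 - t0) / 1e9);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```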
[Charts: SendRecv throughput vs. distance. Left: throughput [GB/s] vs. message size up to 4096 kB for 0.4, 25.4, 50.4, 75.4 and 100.4 km. Right: throughput [GB/s] vs. distance [km] for 32, 128, 512 and 4096 kB messages; throughput is constant up to about 50 km and decreases beyond.]
Full InfiniBand throughput over more than 50 km
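The knee near 50 km is what a fixed credit buffer predicts: once the round-trip delay exceeds what the buffered credits can cover, throughput becomes credit-limited and falls off roughly as 1/distance. A sketch under assumed numbers (a 0.8 GB/s link plateau and a buffer sized for a ~50 km round trip; the demonstrator's actual buffer size is not given here):

```c
#include <stdio.h>

int main(void)
{
    const double link_limit_gBps = 0.8;                    /* observed plateau, GB/s      */
    const double fiber_delay_s   = 5e-6;                   /* ~5 us per km, one way       */
    const double buffer_bytes    = 0.8e9 * 2 * 50 * 5e-6;  /* credits covering ~50 km RTT */

    for (int km = 0; km <= 100; km += 10) {
        double rtt_s = 2.0 * km * fiber_delay_s;
        /* Credit-limited throughput: at most one buffer's worth of data per round trip. */
        double credit_limit = (rtt_s > 0.0) ? buffer_bytes / rtt_s / 1e9 : link_limit_gBps;
        double throughput   = credit_limit < link_limit_gBps ? credit_limit : link_limit_gBps;

        printf("%3d km: %.2f GB/s\n", km, throughput);
    }
    return 0;
}
```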
Thank you
public-relations@advaoptical.com