
Slide 1

Hi everyone, I will be giving you a presentation on the evolution of SDN at Google. The
reason I chose this topic is that there are still open questions about SDN: its
scalability, whether it is real, what its use cases are, and many more. By the end of
this presentation you should be able to appreciate SDN.

Slide 2
Moving on: over the past 15+ years, Google has built one of the fastest, baddest, and
most capable network infrastructures on Earth.

S3
So again, some bragging, and the reason for the brag is to show that SDN exists and
is used to a great extent in Google's network.

S4
This picture shows how a packet travels from one data center to another data
center (probably on the other side of the globe). I will come back to this slide
towards the end; we will build up on it.

S5
Networking at Google is pretty much SDN, and Google is quite mature in terms
of SDN deployment. SDN does not mean just cheap hardware; it is much more than that.
It basically allows a decomposition of the control, data, and management planes. So
what does this lead to?
It allows independent evolution of these planes: you can optimize each one
independently and add new features to it.
Separation of these planes definitely leads to higher uptime. It is unlike a box
where these planes are meshed together: with this separation, bugs in one plane do
not propagate to the others.
It allows for NFV, where you decouple your routing application from hardware
appliances so that it can run in software on VMs. This helps roll out new
functions in days rather than months, or sometimes years. The toy sketch below
illustrates the control/data plane split.
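
To make the plane split concrete, here is a minimal, hypothetical Python sketch (not Google's code; all class and function names are made up): the control plane computes forwarding state and pushes it down, while the data plane only matches packets against the table it was given, so either side can be upgraded independently.

    class DataPlaneSwitch:
        """Data plane: only matches packets against the table it was given."""
        def __init__(self, name):
            self.name = name
            self.flow_table = {}          # dst -> out_port

        def install_rule(self, dst, out_port):
            self.flow_table[dst] = out_port

        def forward(self, dst):
            # Pure forwarding decision; no routing logic lives here.
            return self.flow_table.get(dst, "drop")

    class Controller:
        """Control plane: decides routes, then programs every switch."""
        def __init__(self, switches):
            self.switches = switches

        def push_routes(self, routes):
            # routes: {switch_name: {dst: out_port}}
            for sw in self.switches:
                for dst, port in routes.get(sw.name, {}).items():
                    sw.install_rule(dst, port)

    s1 = DataPlaneSwitch("s1")
    ctl = Controller([s1])
    ctl.push_routes({"s1": {"10.0.0.2": 3}})
    print(s1.forward("10.0.0.2"))   # -> 3

Upgrading or replacing Controller never touches the switches, which is the uptime and independent-evolution argument in miniature.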

S6
These are some of the innovations Google has made over the past few years.

S7
Our talk will focus primarily on these four, and we are going to build on that
server-to-server communication. Starting from the data center side: Jupiter, which
is Google's building-scale data center network, and Andromeda, which is virtualization
on top of that network. In later slides we will talk about the WAN pieces.

S8
We will start with the data center side. What makes Google's data centers special?
When we talk about data centers, we want to be really, really cheap when it comes
to hardware, because there are tons of it: tons of fabric, cables, and switches.
With this cheap hardware come shallow buffers in the switches, which contributes to
latency.
The scale is really large, so you have aggregate bandwidth comparable to the entire
Internet in a single data center.
It is humanly impossible to manage these buildings and rectify faults by hand.

S9
All of these are compounded by scale.

S10
This shows a snapshot of how traffic has grown in Google's data centers: almost
50x within a span of six years, which works out to roughly doubling every year
(50^(1/6) ≈ 1.9). Because of this, Google's data center topology had to evolve. It
has gone from Four-Post CRs, which handled 2 Tb/s, to Jupiter, deployed in 2012,
which can handle up to 1.3 Pb/s of bisection bandwidth.

S11
Firepath is a logically centralized set of controllers that runs the Firepath
protocol (which is pretty much just Dijkstra's shortest-path algorithm). A sketch of
that computation follows.
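
As a rough illustration of "pretty much just Dijkstra's", here is a textbook shortest-path computation over link costs; this is not Firepath's actual code, and the toy fabric topology is invented.

    import heapq

    def dijkstra(graph, src):
        """Shortest-path distances from src.
        graph: {node: [(neighbor, link_cost), ...]}"""
        dist = {src: 0}
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry, skip it
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    fabric = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
    print(dijkstra(fabric, "A"))   # {'A': 0, 'B': 1, 'C': 2}

The centralization is the point: one controller runs this over a global view of the fabric instead of every switch running a distributed protocol.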

S12
So we have built our data center and deployed our physical fabric, our logical set
of controllers, and a bunch of other things. Now we will take this and build a more
agile system on top, and that is where Andromeda comes in. Andromeda is a network
virtualization system: it provides on-demand access to compute and the underlying
infrastructure.

S13
This is what the services provided by Andromeda look like. It provides
virtualization on top of the data center fabric. It also provides NFV with a whole
bunch of functionality, such as load balancing, routing, DoS protection, VPNs,
etc.; the sketch below shows the service-chaining idea.
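
As a hypothetical sketch of the NFV idea on this slide — network functions as chained software rather than hardware appliances — the following toy chain runs a packet through a DoS filter and a load balancer. Function names, the threshold, addresses, and the packet format are all made up for illustration.

    _request_counts = {}   # crude per-source state for the DoS filter

    def dos_filter(pkt):
        # Drop a source once it exceeds a rough request budget.
        n = _request_counts.get(pkt["src"], 0) + 1
        _request_counts[pkt["src"]] = n
        return None if n > 100 else pkt

    def load_balancer(pkt, backends=("10.0.0.10", "10.0.0.11")):
        # Pick a backend by hashing the flow's source address.
        pkt["dst"] = backends[hash(pkt["src"]) % len(backends)]
        return pkt

    def apply_chain(pkt, chain):
        # Run the packet through each virtual function in order.
        for fn in chain:
            pkt = fn(pkt)
            if pkt is None:
                return None            # dropped somewhere in the chain
        return pkt

    print(apply_chain({"src": "1.2.3.4", "dst": "vip"}, [dos_filter, load_balancer]))

Because each function is just software on a VM, adding or replacing one is a deploy, not a hardware install, which is the "days rather than months" claim from earlier.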

S14
Now you have built your physical hardware (the fabrics) and deployed Andromeda on
top of it to provide agility. These deployments are scattered around the globe, so
how do you connect them?

S15
That is where the WAN pieces come into the picture. There are two pieces to that:
B4 and BwE.

S16
This is how the B4 WAN looked back in 2013. Today it probably looks much busier
than this.

S17
Some history about the B4 WAN: it started as an opt-in network in 2010, and SDN was
fully deployed by late 2011. Google built its own hardware. And we can see that
traffic has been ramping up exponentially on this WAN.

S18
This is how it looks when deployed in the network. Consider the grey box as one B4
node: a whole bunch of data centers in a geographic region connect to that B4 node,
and WAN links connect these nodes across the globe.

S19
A big chunk of the cost of data center networking comes from hardware: switches,
fabrics, and so on. But the dominant cost comes from the WAN links. Those are really
expensive, and since Google wanted to be cost-effective, it had to utilize those
links very well; that is where BwE comes into the picture.

S20
So what does BwE do?
It enables efficient usage of the WAN, especially given the shallow buffers.
It manages bandwidth efficiently across many applications; if you are a Google Cloud
user, it gives you bandwidth based on what you have paid for.
It gives you visibility into the thousands of users sharing those links, and based
on that you can do a whole bunch of network optimizations.
It provides a platform for running all kinds of network-optimization experiments.

S21
BwE is a centralized bandwidth-allocation algorithm for complex networks.
It enforces the allocations at the hosts, based on several policies.
It optimizes the network. A sketch of the core fairness idea follows.
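
As a sketch of the core idea behind centralized allocation, here is the classic max-min fair ("water-filling") division of a single bottleneck link among competing flows. BwE's real allocator is hierarchical and policy-driven; this example, with invented flow names and numbers, only shows the fairness step.

    def max_min_fair(capacity, demands):
        """demands: {flow: requested_bandwidth}. Returns {flow: allocation}."""
        alloc = {}
        remaining = dict(demands)
        cap = capacity
        while remaining:
            share = cap / len(remaining)
            # Flows asking for less than the fair share get exactly what they asked.
            satisfied = {f: d for f, d in remaining.items() if d <= share}
            if not satisfied:
                for f in remaining:
                    alloc[f] = share      # everyone left splits the rest equally
                break
            for f, d in satisfied.items():
                alloc[f] = d
                cap -= d
                del remaining[f]
        return alloc

    print(max_min_fair(10.0, {"copy": 8.0, "web": 2.0, "logs": 4.0}))
    # -> {'web': 2.0, 'logs': 4.0, 'copy': 4.0}

Flows demanding less than their fair share are fully satisfied, and the leftover capacity is re-split among the rest, which is why "copy" ends up capped at 4.0 despite asking for 8.0. The resulting rates are then enforced at the sending hosts rather than in the switches.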

S22
