Introduction to Docker

Contents
• Introduction to Docker, Containers, and the Matrix from Hell
• What is Docker?
• What are containers?
• Why use containers?
• Prerequisites
• Docker Futures
• Advanced topics: Networking, Data
What is Docker?

• Wikipedia defines Docker as "an open-source project that automates the
deployment of software applications inside containers by providing an
additional layer of abstraction and automation of OS-level virtualization
on Linux."

• In plainer terms, Docker is a tool that allows developers, sys-admins, etc. to
easily deploy their applications in a sandbox (called a container) to run on
the host operating system, i.e. Linux. The key benefit of Docker is that it
allows users to package an application with all of its dependencies into a
standardized unit for software development. Unlike virtual machines,
containers do not carry that high overhead and hence enable more
efficient usage of the underlying system and its resources.
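• To make that "standardized unit" concrete, here is a minimal, hedged sketch of packaging an app with
Docker. The file names, base image, and image tag (hello-app) are illustrative assumptions, not part of
these slides.

  # Dockerfile: the application, its dependencies, and its start command in one unit
  FROM ubuntu:latest                # assumed base image
  COPY app.sh /app/app.sh           # the application's files
  CMD ["/bin/bash", "/app/app.sh"]  # what runs when the container starts

  # Build an image from that Dockerfile, then run it as a container
  $ docker build -t hello-app .
  $ docker run --rm hello-app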
What are containers?

• The industry standard today is to use Virtual Machines (VMs) to run
software applications. VMs run applications inside a guest operating
system, which runs on virtual hardware powered by the server's host OS.

• VMs are great at providing full process isolation for applications: there are
very few ways a problem in the host operating system can affect the
software running in the guest operating system, and vice versa. But this
isolation comes at a great cost: the computational overhead spent
virtualizing hardware for a guest OS to use is substantial.

• Containers take a different approach: by leveraging the low-level mechanics
of the host operating system (Linux namespaces and cgroups), containers
provide most of the isolation of virtual machines at a fraction of the
computing power.
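• A quick, hedged illustration of that isolation (assuming Docker is installed; the small alpine image used
below is an assumption, not part of these slides): a container gets its own process space, so ps inside it
sees only the container's own processes, not the host's.

  $ docker run --rm alpine ps
  PID   USER     TIME  COMMAND
    1   root     0:00  ps        # PID 1 is the container's own command; host processes are not visible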
Why use containers?
• Containers offer a logical packaging mechanism in which applications can be abstracted from the
environment in which they actually run. This decoupling allows container-based applications to be
deployed easily and consistently, regardless of whether the target environment is a private data center,
the public cloud, or even a developer's personal laptop. This gives developers the ability to create
predictable environments that are isolated from the rest of the applications and can be run anywhere.

• From an operations standpoint, apart from portability, containers also give more granular control over
resources, which improves the efficiency of your infrastructure and can result in better utilization of
your compute resources.
Prerequisites

• There are no specific skills needed for this tutorial beyond a basic
comfort with the command line and using a text editor. Prior
experience in developing web applications will be helpful but is not
required. As we proceed further along the tutorial, we'll make use of
a few cloud services. If you're interested in following along, please
create an account on each of these websites:

• Amazon Web Services
• Docker Hub
• Azure
Why it works: separation of concerns
• Dan the Developer
  • Worries about what's "inside" the container
  • His code
  • His libraries
  • His package manager
  • His apps
  • His data
• Oscar the Ops Guy
  • Worries about what's "outside" the container
  • Logging
  • Remote access
  • Monitoring
  • Network config
  • All containers start, stop, copy, attach, migrate, etc. the same way
  • All Linux servers look the same
More technical explanation

WHY
• Run everywhere
  • Regardless of kernel version (2.6.32+)
  • Regardless of host distro
  • Physical or virtual, cloud or not
  • Container and host architecture must match
• Run anything
  • If it can run on the host, it can run in the container
  • i.e. if it can run on a Linux kernel, it can run

WHAT
• High level: it's a lightweight VM
  • Own process space
  • Own network interface
  • Can run stuff as root
  • Can have its own /sbin/init (different from the host)
  • «machine container»
• Low level: it's chroot on steroids
  • Can also not have its own /sbin/init
  • Container = isolated processes
  • Share kernel with host
  • No device emulation (neither HVM nor PV)
  • «application container»
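• A small, hedged check of "share kernel with host" (the alpine image below is an assumption): the kernel
version reported inside a container matches the host's, because there is no guest kernel.

  $ uname -r                         # on the host; example output, yours will differ
  5.15.0-91-generic
  $ docker run --rm alpine uname -r  # inside a container: same kernel, shared with the host
  5.15.0-91-generic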
Containers vs. VMs
[Diagram: a VM stack (Server → Host OS → Hypervisor (Type 2) → Guest OS → Bins/Libs → App) beside a
container stack (Server → Host OS → Docker → Bins/Libs → App), each running apps A, A', B, and B'.]

• Containers are isolated, but share the host OS and, where appropriate, bins/libraries.
• The result is significantly faster deployment, much less overhead, easier migration, and faster restart.
Why are Docker containers lightweight?
[Diagram: on the VM side, the original app, a copy of the app, and a modified app (App Δ) each require
their own guest OS and bins/libs; on the container side, the same apps run with no guest OS, shared
bins/libs, and only a small diff layer for the modified app.]

• VMs: every app, every copy of an app, and every slight modification of the app requires a new virtual server.
• Containers:
  • Original app: no guest OS to take up space or resources, or to require a restart.
  • Copy of the app: no OS, and it can share bins/libs with the original.
  • Modified app: copy-on-write capabilities let Docker save only the diffs between container A and container A'.
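• A hedged way to see that copy-on-write behaviour (the image name is an assumption): docker history lists
the read-only layers an image is built from, and docker ps -s shows how small a container's writable layer
is compared with the shared "virtual" image size.

  $ docker history hello-app   # each Dockerfile step adds only a diff layer
  $ docker ps -s               # SIZE is the container's writable layer; image layers are shared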
What are the basics of the Docker system?

[Diagram: a Dockerfile for A plus a source code repository feed "docker build" on Host 1; the resulting
Docker container image is pushed to a Docker container image registry; Host 2 searches and pulls the
image, and "docker run" starts containers A, B, and C on the Docker Engine of each Linux host.]

• Build: a Dockerfile plus your source code produces a container image.
• Push: the image is uploaded to a Docker container image registry.
• Search / Pull: other hosts find the image in the registry and download it.
• Run: the Docker Engine on each Linux host starts containers from the image.
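• The same cycle expressed as CLI commands, as a hedged sketch (the repository name myrepo/web-a and the
container name are assumptions, not from these slides):

  $ docker build -t myrepo/web-a .                   # build an image from the Dockerfile in this directory
  $ docker push myrepo/web-a                         # push it to a registry so other hosts can use it
  $ docker search myrepo                             # on another host: find the image...
  $ docker pull myrepo/web-a                         # ...download it...
  $ docker run -d --name container-a myrepo/web-a    # ...and run it as a container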
Changes and Updates

[Diagram: a base container image plus an application change (App Δ) produce an updated image A''; pushing
the update to the Docker container image registry uploads only the changed layer. A host still running A
requests the update, pulls only the diffs, and is then running A''.]

• Push: only the changed layers (the app Δ) are uploaded on top of the base container image.
• Update: a host running A that wants to upgrade to A'' requests the update and gets only the diffs; the
host is then running A''.
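• A hedged illustration (the image name and tags are assumptions): when a host that already has an older
image pulls a newer one, layers it already holds are skipped and only the changed layers are downloaded.

  $ docker pull myrepo/web-a:v1             # host is running version 1
  $ docker pull myrepo/web-a:v2             # upgrade: unchanged layers are skipped
  v2: Pulling from myrepo/web-a
  a1b2c3d4e5f6: Already exists              # shared base layer, not re-downloaded (example output)
  0f9e8d7c6b5a: Pull complete               # only the changed application layer is fetched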
Ecosystem Support
• Operating systems
  • Virtually any distribution with a 2.6.32+ kernel
  • Red Hat/Docker collaboration to make Docker work across RHEL 6.4+, Fedora, and other members of the family (2.6.32+)
  • CoreOS: a small core OS purpose-built around Docker
• OpenStack
  • Docker integration into Nova (and compatibility with Glance, Horizon, etc.) accepted for the Havana release
• Private PaaS
  • OpenShift
  • Solum (Rackspace, OpenStack)
  • Others TBA
• Public PaaS
  • Deis, Voxoz, Cocaine (Yandex), Baidu PaaS
• Public IaaS
  • Native support in Rackspace, Digital Ocean, and more
  • AMI (or equivalent) available for AWS and others
• DevOps tools
  • Integrations with Chef, Puppet, Jenkins, Travis, Salt, Ansible, and more
• Orchestration tools
  • Mesos, Heat, and more
  • Shipyard and others purpose-built for Docker
• Applications
  • Thousands of Dockerized applications available at index.docker.io
Advanced topics
• Data
  • Today: externally mounted volumes
    • Share volumes between containers
    • Share a volume between a container and the underlying host
      • e.g. a high-performance storage backend for your production database
      • e.g. making live development changes available to a container
  • Optional: specify a memory limit and CPU priority for containers (see the sketch after this list)
  • Device mapper / LVM snapshots in 0.7
  • Futures:
    • I/O limits
    • Container resource monitoring (CPU & memory usage)
    • Orchestration (linking & synchronization between containers)
    • Cluster orchestration (multi-host environments)
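• A hedged sketch of those volume and resource options (the paths, names, and limits below are
assumptions): -v mounts a host directory or creates a volume, --volumes-from shares one container's
volumes with another, and -m / -c set the memory limit and relative CPU priority.

  $ docker run -v /home/dev/src:/app my-image           # mount a host directory (live dev changes)
  $ docker run -v /data --name datastore busybox true   # container with a data volume
  $ docker run --volumes-from datastore my-image        # share that volume with another container
  $ docker run -m 256m -c 512 my-image                  # memory limit and CPU shares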
• Networking
  • Supported today:
    • UDP/TCP port allocation to containers
      • You specify which public port to redirect; if you don't specify one, Docker allocates a random public port (see the sketch after this list)
      • Docker uses iptables/netfilter
    • IP allocation to containers
      • Docker uses virtual interfaces and a network bridge
  • Futures:
    • See Pipework (upstream): software-defined networking for Linux containers (https://github.com/jpetazzo/pipework)
    • Certain Pipework concepts will move from upstream into core Docker
    • Additional capabilities come with libvirt support in the 0.8-0.9 timeframe
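• The port allocation behaviour as a hedged sketch (the nginx image and the port numbers are illustrative
assumptions):

  $ docker run -d -p 8080:80 nginx   # redirect a chosen public port (8080) to port 80 in the container
  $ docker run -d -p 80 nginx        # no public port given: Docker allocates a random one
  $ docker ps                        # the PORTS column shows which host port was assigned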
www.docker.com