
https://labs.play-with-docker.com

evolution: physical servers -> VMs -> containers
the old model: one server, one app

cloud delivery models: public/private/hybrid


cloud service models: IaaS/PaaS/SaaS

git clone https://github.com/kubernetes-up-and-running/examples.git


git clone https://github.com/schoolofdevops/k8s-code.git
https://sookocheff.com/post/kubernetes/understanding-kubernetes-networking-model/
https://github.com/dockersamples/example-voting-app

VM
==========
Hypervisor
=========
bare metal

Containers
=========
Container engine
=========
OS

DOCKER

the host for containers can be bare metal, a VM, or a cloud instance


lightweight
Docker the company -> Docker Engine; similar runtimes: rkt, CRI-O
client/server -- can be colocated, or the client can be remote via the DOCKER_HOST variable
Docker Engine / Docker daemon -- EE [subscription] / CE [free]
docker image
Docker Hub -- official images, user images, public
local registry
container -- operating-system-level virtualization
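For the client/server note above, a minimal sketch of pointing the local CLI at a remote daemon (the address is a placeholder, and the daemon must be listening on TCP):

export DOCKER_HOST=tcp://203.0.113.10:2375   # hypothetical remote engine address
docker ps                                    # now lists containers on the remote host
unset DOCKER_HOST                            # back to the local daemon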

ericuser4/ubuntu:54.193.71.169
Docker Toolbox (with VirtualBox), Docker for Windows 10, Play with Docker
RHEL, CentOS, SUSE -- RPM based
Debian, Ubuntu -- DEB based
Ctrl-P Q to detach from a container without exiting it
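A quick illustration of the detach keys, using a throwaway ubuntu container:

docker run -dit --name demo ubuntu bash   # interactive container, started detached
docker attach demo                        # attach to its shell
# press Ctrl-P then Ctrl-Q inside to detach without stopping it
docker ps                                 # demo still shows as running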

a Docker image contains an OS filesystem as its base image


overlay2 is the default storage driver
copy-on-write: a file is searched for in the lower layers and copied up to the current writable layer for editing
image name:tag
docker image inspect ubuntu
docker commit f29 centos:httpd
rpm -qa|grep -i httpd
Docker Hub or a separate registry
docker private registry
Harbor as a local registry, or the registry:2 image, to create a private registry server
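A sketch of the registry:2 route, assuming a registry on localhost:5000:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker tag ubuntu:latest localhost:5000/ubuntu:latest   # retag for the private registry
docker push localhost:5000/ubuntu:latest
docker pull localhost:5000/ubuntu:latest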
port mapping
docker volume
bind mount
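One example of each of the three notes above (image names and host paths are illustrative):

docker run -d -p 8080:80 --name web nginx                  # port mapping: host 8080 -> container 80
docker volume create appdata                               # docker-managed named volume
docker run -d -v appdata:/usr/share/nginx/html nginx       # mount the named volume
docker run -d -v /host/www:/usr/share/nginx/html:ro nginx  # bind mount a host directory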
https://training.play-with-docker.com/ops-stage1/
https://labs.play-with-docker.com/

initially Docker used LXC (system containers, not portable); Docker then developed libcontainer as the background engine, later containerd
linking containers
apt-get update
apt-get install iputils-ping
Dockerfile -- FROM, RUN, COPY, ENV, WORKDIR, CMD, ENTRYPOINT, EXPOSE
docker build/docker image build
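A minimal Dockerfile sketch exercising the instructions listed above; app.sh is a hypothetical script assumed to exist in the build context:

cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
ENV APP_HOME=/app
WORKDIR $APP_HOME
COPY app.sh .
RUN chmod +x app.sh
EXPOSE 8080
ENTRYPOINT ["./app.sh"]
CMD ["8080"]
EOF
docker build -t myapp:1.0 .

CMD here only supplies a default argument to ENTRYPOINT; overriding it at docker run time replaces the argument, not the entrypoint.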
docker compose - multi-container orchestration ONLY on a single host; docker-compose.yaml, or another file name via -f
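A minimal single-host compose file, with nginx and redis as stand-in services:

cat > docker-compose.yaml <<'EOF'
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF
docker-compose up -d        # or: docker-compose -f some-other-name.yaml up -d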
docker swarm orchestration platform -- like Mesos, K8s
swarm architecture -- managers (minimum 3 for HA) and workers; cluster info kept in a Raft-replicated key-value store (the role etcd plays in K8s)
the cluster store lives only on the managers
managers/workers can be bare metal, VMs, or cloud instances
tomcat -> replicas=4
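A sketch of that tomcat example as a swarm service (image tag assumed):

docker swarm init                                                       # makes this node a manager
docker service create --name tomcat --replicas 4 -p 8080:8080 tomcat:9
docker service ls                                                       # shows 4/4 replicas
docker service scale tomcat=6                                           # change the desired count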
layer 7 routing mesh; in DNS, *.xyz.com -> LB IP, so app.xyz.com:8080 -> LB; max ~2k nodes with 9 managers
K8s: max 5k nodes in a single cluster, with at most 5 or 7 masters
1 leader, 2 followers, quorum; manager-to-manager latency should stay below 100ms
CNM and CNI are the two networking models -- CNM is used by Docker, CNI by K8s
swarm rolling upgrade
Compose with Swarm -- a stack deploys multi-container apps across multiple hosts
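Deploying the compose file above as a swarm stack (the stack name is arbitrary; the file must use compose format version 3):

docker stack deploy -c docker-compose.yaml myapp
docker stack services myapp
docker stack rm myapp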
docker network create --driver overlay uber-net
Docker ships a default CNM network plugin driver
K8s relies on third-party CNI network plugins
docker networking - bridge: single-host container networking
NAT happens in bridge mode: host IP/port mapped to container IP/port, each container in a separate namespace
host: container ports map directly onto the host network, on all interfaces
macvlan -- gives the container an interface directly on the physical network:
docker network create --driver macvlan --subnet 10.0.0.0/16 --gateway=10.0.0.1 -o parent=eth0 macvlan
multus
two swarm nodes can be on different subnets... overlay works across them
overlay = VXLAN, L2 over L3
overlay is the default in swarm, with VTEPs connecting to the docker0 bridge on each node... like br-tun in OpenStack
all SDN players ship a container network plugin
in kubernetes only L2...
books: Docker Deep Dive / Kubernetes: Up and Running

docker stats, docker logs, docker system prune, docker image prune


docker logs -- redirect everything to stdout and use docker logs, or use a shared volume, or centralized logging (ELK)

Kubernetes - CNCF open source


enterprise versions: Red Hat OpenShift, IBM ICP, VMware PKS, SUSE CaaSP, Docker EE
control plane - master
worker plane - workers
control plane - etcd, API server, controller manager and scheduler
worker plane - kubelet, kube-proxy, container runtime

optional - kube-dns, CoreDNS


the runtime may be Docker Engine -> CRI-O
all requests go as REST API calls to the API server
all object data lives in etcd, a distributed database
only the API server talks to etcd
the controller maintains the desired replica count of applications
the scheduler decides the worker node
all these components can run as system processes or containers -- since 1.13 as pods
control plane HA with 3, 5, or 7 master nodes

worker: kubelet -- everything running on a worker goes through the kubelet; it registers the worker with the API server and continuously polls the API server to check whether anything needs to run on the worker
kube-proxy - to access services running on pods from outside
container runtime -- the kubelet starts containers with its help; the kubelet on the master likewise runs all the master services as pods
CoreDNS -- for service/pod discovery
metrics-server replaces Heapster in 1.13 to monitor pod metrics for autoscaling
scheduler - updates the manifest via the API server with the node chosen for a pod; it can also push to the kubelet
pod - the smallest atomic unit deployed in Kubernetes; a logical host
example: two application modules accessing the same volume
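A sketch of that example: two containers in one pod sharing an emptyDir volume (names and images are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-vol-pod
spec:
  volumes:
  - name: shared
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "tail -f /data/out.log"]
    volumeMounts:
    - name: shared
      mountPath: /data
EOF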
service exposure - ClusterIP, NodePort, ...
no HA for a pod if its node goes down; for HA use a ReplicaSet instead of a bare Pod object
kind is the object type in the manifest YAML or JSON
a rolling upgrade works by creating a new ReplicaSet and migrating pods over
for rolling upgrades use a Deployment (kind: Deployment)
the above is for stateless apps only...
stateful apps need persistent storage; kind will be StatefulSet
CSI - Container Storage Interface, for NFS, Cinder, ...
Portworx -- a storage vendor popular with containers
configure a StorageClass for dynamic provisioning, or a persistent volume (PV) manually
for the specific requirement of a pod running on all nodes -- a DaemonSet, e.g., a monitoring agent (sketched below)
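A minimal DaemonSet sketch standing in for a monitoring agent (busybox as a placeholder image):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox
        command: ["sh", "-c", "while true; do sleep 3600; done"]
EOF
kubectl get pods -o wide   # one agent pod per node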

for external access, expose via a service


Kubernetes installation -- kubeadm targets a single master node (3-master support exists but is in beta)
for HA masters use Kubespray; another method is kops

namespace - logical boundary of resources


cgroup for cpu and memory
3 default namespaces - default, kube-public, kube-system; all control plane pods run in kube-system
kube-public -- visible to all
pods - labels and selectors (tomcat/httpd/nginx)
example - 4 pods for tomcat, 2 for httpd, 2 for nginx
when we expose these pods as a service, how will Kubernetes identify the pods of that service? through a label, like app=tomcat; multiple labels can be defined as key-value pairs, e.g., env=test
a label at the pod level is mandatory: the service's pod selector uses it to connect; labels on nodes are likewise used for scheduling pods, as sketched below
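A sketch tying labels to a service selector (names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: tomcat-1
  labels:
    app: tomcat
    env: test
spec:
  containers:
  - name: tomcat
    image: tomcat:9
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
spec:
  selector:
    app: tomcat          # matches every pod carrying this label
  ports:
  - port: 8080
    targetPort: 8080
EOF
kubectl get pods -l app=tomcat   # filter pods by the same label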
restrict resource limits per pod
request sets the minimum, limit the maximum boundary of resource usage
memory in bytes, KB, MB
CPU as whole units or fractions in millicores: 500m means half a core, 1000m a full core (the m is millicores, not milliseconds)
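A pod with requests and limits as described above (the values are arbitrary examples):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "64Mi"   # minimum guaranteed
        cpu: "250m"      # a quarter of a core
      limits:
        memory: "128Mi"  # hard ceiling
        cpu: "500m"      # half a core
EOF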
CPU manager policy
network policy, e.g., via Calico
a pod alone has no HA; workloads [pods, ReplicaSets, etc. fall under this term]
ReplicaSet - ensures the desired number of pods is running
a ReplicaSet uses selectors and labels to track pods against the desired count
kubectl get pods -l app=nginx with label
kubectl scale rs vote --replicas=10
an RS does not support rolling upgrades; it only allows metadata changes such as replicas
one ReplicaSet supports a single pod template (which may contain multiple containers); for multiple pod templates, use multiple ReplicaSets
rolling upgrade to avoid downtime
Deployment
strategy - maxUnavailable: 20% (a percentage) or an absolute value; how many pods at most may be unavailable while the rolling upgrade runs
maxSurge - if the app's capacity must always be maintained, set maxUnavailable=0 and, e.g., maxSurge=2
kubectl edit deployment
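A Deployment sketch using the strategy fields above (image versions are placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # full capacity kept during the upgrade
      maxSurge: 2         # up to 2 extra pods while rolling
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
EOF
kubectl set image deployment/web web=nginx:1.26   # triggers the rolling upgrade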
services - give access to exposed workloads
the service name becomes an FQDN
4 service types - ClusterIP, NodePort, LoadBalancer, ExternalName
on bare metal or VMs -- only ClusterIP or NodePort
in a public cloud, the above two plus service type LoadBalancer
ClusterIP is reachable only within K8s -- east-west traffic
NodePort -- picks a port from the 30000-32767 range and opens it on all nodes/workers; use any node IP plus that port to reach the app
a LoadBalancer service creates a VIP at the LB and uses the NodePort mechanism behind the VIP
for on-premise -- MetalLB
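A NodePort sketch for the notes above (the explicit nodePort is optional; omit it and K8s picks one from 30000-32767):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-np
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF
# reachable at http://<any-node-ip>:30080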
storage -- two volume options inside a pod => emptyDir or hostPath, both carved out of host space
for persistent storage beyond the local disk:
PV (PersistentVolume), PVC (PersistentVolumeClaim), StorageClass
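A manual PV plus matching PVC, as a minimal sketch (hostPath is demo-only; the "manual" class name just pairs the two and avoids dynamic provisioning):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /data/pv-demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  storageClassName: manual
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pv,pvc   # pvc-demo should show Bound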
kubectl create vs apply -- apply immediately applies any change in the YAML; create doesn't

runtime credentials -- keeping them in manifest YAML is not secure, and baking them into the Docker image is very static
instead use ConfigMaps and Secrets, or a third-party vault (see the sketch below)
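The ConfigMap/Secret route, sketched with made-up values:

kubectl create configmap app-config --from-literal=DB_HOST=db.example.local
kubectl create secret generic db-pass --from-literal=password='S3cret!'
# consume in a pod spec via env valueFrom: configMapKeyRef / secretKeyRef,
# or mount them as files with volumes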
pod-to-service and service-to-pod traffic -- kube-proxy
Calico
components: etcd, BIRD, confd, Felix
pod CIDR: 192.168.0.0/16
Felix programs routes across nodes; each node gets a /26 block of the pod CIDR
RBAC
authentication, authorization, admission control
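A minimal RBAC sketch: a Role allowing read access to pods, bound to a hypothetical user:

cat <<'EOF' | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: alice              # hypothetical user from the authentication layer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF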

kubeadm join 10.0.82.137:6443 --token aoql1x.0m3w0yvzn6a76ai7 --discovery-token-ca-cert-hash sha256:4d6ea648491f442f60bac41ef03f02612cfbcbd0db3e9a81578209220f21a89e

kubectl get pods -o wide

https://training.play-with-docker.com/ops-s1-hello/

Lab 1
https://github.com/docker/labs/blob/master/beginner/chapters/alpine.md

Lab 2
https://github.com/docker/labs/blob/master/beginner/chapters/webapps.md
Install Harbor
https://github.com/goharbor/harbor/blob/master/docs/installation_guide.md

Network Lab
https://github.com/docker/labs/tree/master/networking

Lab Docker (Play with Docker)


https://training.play-with-docker.com/ops-s1-hello/

ssh ip172-18-0-44-bj0ivr0j98i000d5j8ag@direct.labs.play-with-docker.com
