Kubernetes Day 5 Practicals


Deployment

A Deployment is a higher-order abstraction that controls deploying and maintaining a set of Pods. Behind the scenes, it uses a ReplicaSet to keep the Pods running, but it offers sophisticated logic for deploying, updating, and scaling a set of Pods within a cluster. Deployments support rolling updates and rollbacks. Rollouts can even be paused.

kube-proxy can be deployed as a DaemonSet.
Networking solutions such as Weave Net also require an agent deployed on each node.
We can also set the nodeName property on a pod to bypass the scheduler, so that the pod lands directly on the desired node.

https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

https://www.bmc.com/blogs/kubernetes-daemonset/

A Deployment provides declarative updates for Pods and ReplicaSets. To see the ReplicaSet (rs) created by the Deployment, run kubectl get rs.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

# kubectl get deployments

Deployment strategy

Rolling update
Recreate
Blue/Green deployment

The default strategy is RollingUpdate.
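
The strategy can be tuned in the Deployment spec. A minimal sketch (the maxSurge/maxUnavailable values are illustrative; type: Recreate would instead kill all old Pods before creating new ones):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 Pod above the desired replica count during the update
      maxUnavailable: 1    # at most 1 Pod may be unavailable during the update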

To update the deployment with a new image:

# kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record

To check the rollout status:

# kubectl rollout status deployment/nginx-deployment

To check the newly created ReplicaSet:

# kubectl get rs

To roll back to the previous version:

# kubectl rollout undo deployment.v1.apps/nginx-deployment
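
To inspect past revisions before rolling back:

# kubectl rollout history deployment/nginx-deployment
# kubectl rollout history deployment/nginx-deployment --revision=2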

To scale a deployment:

# kubectl scale deployment.v1.apps/nginx-deployment --replicas=10

Create a manifest file with kind: Deployment and use kubectl to create the object through the k8s API server.

Daemonset spec

https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/controllers/daemonset.yaml

Create the DaemonSet:

# kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

StatefulSets Example
SQL master-to-slave replication, where each replica needs a stable identity.
StatefulSets create Pods in order (pod-0, then pod-1, ...); set podManagementPolicy: Parallel if you do not want the ordered approach.
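
A minimal StatefulSet sketch (the mysql image, password handling, and replica count are illustrative assumptions; a real master-to-slave replication setup needs extra configuration):

apiVersion: v1
kind: Service
metadata:
  name: mysql              # headless Service required by the StatefulSet
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql       # gives Pods stable DNS names: mysql-0.mysql, mysql-1.mysql
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "example"     # illustrative only; use a Secret in practice
        ports:
        - containerPort: 3306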

Running Pods on select Nodes

If you specify a .spec.template.spec.nodeSelector, then the DaemonSet controller will create Pods on nodes which match that node selector. Likewise, if you specify a .spec.template.spec.affinity, then the DaemonSet controller will create Pods on nodes which match that node affinity. If you do not specify either, then the DaemonSet controller will create Pods on all nodes.
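
For example, to restrict the fluentd DaemonSet above to labelled nodes, add a nodeSelector to its Pod template (the logging=true label is an illustrative assumption):

spec:
  template:
    spec:
      nodeSelector:
        logging: "true"    # DaemonSet Pods run only on nodes labelled logging=true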

Link https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

Job
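
A Job runs Pods until a specified number of them complete successfully. A minimal sketch (the image and command are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  backoffLimit: 4            # retry a failed Pod up to 4 times
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never   # Jobs require Never or OnFailure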

CronJob
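
A CronJob runs a Job on a schedule (batch/v1 on Kubernetes v1.21+; the schedule and image are illustrative):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"    # standard cron syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo Hello"]
          restartPolicy: OnFailure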

Scheduling

The K8s default scheduler takes the below settings into account while scheduling a pod.

nodeName

Link : https://raw.githubusercontent.com/bhaaskara/educka/master/pods/placement/nodeName.yml
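
The linked manifest pins a Pod to a specific node by name; a minimal sketch (node1 is an assumed node name):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-nodename
spec:
  nodeName: node1          # bypasses the scheduler; the kubelet on node1 runs the Pod
  containers:
  - name: nginx
    image: nginx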

nodeSelector

Link : https://raw.githubusercontent.com/bhaaskara/educka/master/pods/placement/nodeSelector-pod.yml

Assign a label to a node:
# kubectl label node node1 color=green
# kubectl get nodes --show-labels
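
A Pod that targets the labelled node via nodeSelector (matching the color=green label applied above):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-nodeselector
spec:
  nodeSelector:
    color: green           # the scheduler only considers nodes carrying this label
  containers:
  - name: nginx
    image: nginx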

Affinity

preferredDuringSchedulingIgnoredDuringExecution

requiredDuringSchedulingIgnoredDuringExecution

requiredDuringSchedulingRequiredDuringExecution (planned, not yet available)

kubectl get nodes --show-labels

kubectl label nodes <node-name> disktype=ssd

kubectl get nodes --show-labels

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent

Taint

A taint blocks Pods that do not tolerate it from being scheduled on the node:

# kubectl taint node node1 zone=red:NoSchedule
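
A Pod can opt back in to the tainted node with a matching toleration (sketch):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-toleration
spec:
  tolerations:
  - key: zone
    operator: Equal
    value: red
    effect: NoSchedule     # matches the zone=red:NoSchedule taint above
  containers:
  - name: nginx
    image: nginx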

ConfigMaps

A ConfigMap is not designed to hold large chunks of data: the data stored in a ConfigMap cannot exceed 1 MiB. If you need to store settings larger than this limit, consider mounting a volume or using a separate database or file service.
NOTE: The Pod and the ConfigMap must be in the same namespace.

# kubectl run color --image=kodekloud/webapp-color

# kubectl create configmap special-config --from-literal=special.how=very

# kubectl expose pod color --type=NodePort --name=example-service
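
The same ConfigMap can also be written declaratively (equivalent to the --from-literal command above); the Pod manifest below then injects its value as an environment variable:

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  special.how: very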

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    # Define the environment variable
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
          name: special-config
          # Specify the key associated with the value
          key: special.how
  restartPolicy: Never
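
Since the container just runs env and exits, the injected variable can be verified from the Pod logs:

# kubectl logs dapi-test-pod | grep SPECIAL_LEVEL_KEY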