3-Tier Application Deployment in a K8s Cluster

Three-tier Application Deployment
We have divided it into different phases. Each phase plays a very important role, and I will make sure you understand it in an easy way.
First Phase:-
i) First of all, we need to create an EC2 instance.
ii) Go to the AWS dashboard and click on the EC2 service.
iii) Click on “Launch Instance.”
iv) Select an AMI.
v) Choose an instance type and set the key pair name.
vi) Create the security group.
vii) Click on “Launch Instance.”
If you prefer the CLI, a rough equivalent is sketched after this list.
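A minimal sketch of the same launch via the AWS CLI, assuming a hypothetical Ubuntu AMI ID, key pair, and security group that you would replace with your own:
# Launch a single Ubuntu instance (replace the AMI ID, key name, and security group with your own)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.medium \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --count 1 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=three-tier-host}]'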





Phase 2:-
After creating the EC2 instance, we need to connect to it and install Docker.
Step 1:- First of all, run the update command to refresh the server's package index:
sudo apt-get update -y
Step 2:- Copy and paste the given commands into a script (for example, docker.sh) and run it.
#!/bin/bash

# Update the apt package index
sudo apt update

# Install necessary dependencies
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Determine the Ubuntu version
ubuntu_version=$(lsb_release -cs)

# Add the Docker repository to the system based on the Ubuntu version
if [ "$ubuntu_version" = "focal" ] || [ "$ubuntu_version" = "jammy" ]; then
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
elif [ "$ubuntu_version" = "bionic" ]; then
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
else
    echo "Unsupported Ubuntu version: $ubuntu_version"
    exit 1
fi

# Update the apt package index again
sudo apt update

# Install Docker CE
sudo apt install -y docker-ce

# Verify Docker installation
docker --version

# Optionally, add the current user to the docker group
sudo usermod -aG docker $USER

echo "Docker has been successfully installed."
echo "You may need to log out and log back in for group changes to take effect."
Step 3:- After creating the script, you need to give the file executable permission:
sudo chmod +x docker.sh
sudo mv docker.sh /usr/local/bin
Now the question arises: why are we moving the .sh file to /usr/local/bin? Because /usr/local/bin is on the PATH, so from now on we can run the script by name without prefixing it with sh or ./.
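After the move, the script can be invoked by name from any directory:
docker.sh   # runs the script directly, no ./ or sh prefix needed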
Step 4:- After installing Docker, we need to clone the GitHub repo where our code is located.
Step 5:- We are working on a project where the frontend is written in React.js and the backend in Node.js. We are also using a MongoDB database.
Step 6:- First of all, we need to create a Dockerfile for the frontend application. Let's create one:
# Use the official Node.js version 14 image from Docker Hub as the base image
FROM node:14

# Set the working directory inside the container to /usr/src/app
WORKDIR /usr/src/app

# Copy package.json and package-lock.json from the host machine into the container's working directory
COPY package*.json ./

# Install npm dependencies inside the container based on the package.json files
RUN npm install

# Copy all application files from the host machine into the container's working directory
COPY . .

# Specify the command to run when the container starts
CMD [ "npm", "start" ]
Step 7:- After creating the Dockerfile, we build an image from it, which is then used to create a container.
# command to create a docker image
docker build -t frontend .
# command to create a container from the image
docker run -d -p 3000:3000 --name frontend frontend
# here we are exposing it on port 3000, the port the React app listens on (matching the frontend manifests later)
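To verify the container started and the app responds (assuming it listens on port 3000 as above):
docker ps --filter name=frontend
curl -I http://localhost:3000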
Step 8:- We are going to push the image to ECR, so we need to create an ECR repository in AWS. Before doing that, we need to create an IAM user and give it permission to push and pull ECR images.
We also need to install the AWS CLI on our terminal.
Step 9:- Commands to install the AWS CLI on the terminal:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin --update
aws configure
Step 10:- Now we need to push the image to ECR. Just create a repository and click on the “View push commands” option in the ECR console to get the exact commands for your account; a typical sequence is sketched below.
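A minimal sketch, assuming a repository named frontend in us-east-1 and substituting your own account ID:
# Authenticate Docker with your ECR registry (replace <aws_account_id> with your account ID)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the ECR repository URI and push it
docker tag frontend:latest <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/frontend:latest
docker push <aws_account_id>.dkr.ecr.us-east-1.amazonaws.com/frontend:latest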


Phase 3:- After pushing the image to ECR, we need a Kubernetes cluster; here we are using EKS. The commands to install the tooling and connect to EKS from the terminal are given below.
Step 1:- First of all, we need to install kubectl; the commands are given below:-
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --short --client
Now we need to install eksctl:-
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
Step 2:- Now we need to set up the EKS cluster:-
eksctl create cluster --name three-tier-cluster --region us-west-2 --node-type t2.medium --nodes-min 2 --nodes-max 2
aws eks update-kubeconfig --region us-west-2 --name three-tier-cluster
kubectl get nodes
These commands create the EKS cluster on AWS, point kubectl at it, and verify that the worker nodes are ready.
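All of the manifests below are deployed into a three-tier namespace, which does not exist by default. Assuming you keep that name, create it once before applying anything:
kubectl create namespace three-tier
kubectl get namespaces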
Step 3:- Now we need to create deployments for the frontend, backend, and database.
Let's start with the database first.
Create deployment_mongo.yaml (a manifest file) for MongoDB (because we are using MongoDB):
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: three-tier
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mon
          image: mongo:4.4.6
          command:
            - "numactl"
            - "--interleave=all"
            - "mongod"
            - "--wiredTigerCacheSizeGB"
            - "0.1"
            - "--bind_ip"
            - "0.0.0.0"
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-sec
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-sec
                  key: password
          volumeMounts:
            - name: mongo-volume
              mountPath: /data/db
      volumes:
        - name: mongo-volume
          persistentVolumeClaim:
            claimName: mongo-volume-claim
By default, MongoDB works on port 27017.
Step 4:- Now we need to create service.yaml and secrets.yaml.
service.yaml:- The service.yaml file in Kubernetes defines a Service object, which directs network traffic to a group of pods. It includes the service name, selectors to identify pods, the ports to listen on, and the type of service. This file is required to configure network access to applications running in a Kubernetes cluster.
apiVersion: v1
kind: Service
metadata:
  namespace: three-tier
  name: mongodb-svc
spec:
  selector:
    app: mongodb
  ports:
  - name: mongodb-svc
    protocol: TCP
    port: 27017
    targetPort: 27017
Secrets.yaml:- A `secrets.yaml` file in Kubernetes is used to define secret objects, which securely store sensitive information such as passwords, API keys, or certificates within the cluster. These secrets can be securely accessed by applications running in Kubernetes pods as files or environment variables, enabling secure configuration, communication, and authentication.
apiVersion: v1
kind: Secret
metadata:
  namespace: three-tier
  name: mongo-sec
type: Opaque
data:
  password: cGFzc3dvcmQxMjM= # password123
  username: YWRtaW4= # admin
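The data values are just base64-encoded strings, so if you want different credentials you can generate your own values and paste them into secrets.yaml (the credentials shown here match the manifest above):
echo -n 'admin' | base64        # YWRtaW4=
echo -n 'password123' | base64  # cGFzc3dvcmQxMjM=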
Step 5:- We also need to create pv.yaml and pvc.yaml.
i) pv.yaml:- It defines a PersistentVolume (PV) object, which represents storage in the cluster provisioned by an administrator or dynamically.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  namespace: three-tier
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/db
ii) pvc.yaml:- It defines a PersistentVolumeClaim (PVC) object, which is a request for storage by pods and separates storage details from applications.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-volume-claim
  namespace: three-tier
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
PV and PVC enable dynamic provisioning and consumption of persistent storage in a Kubernetes cluster.
Step 6:- Now we need to apply these manifests to the cluster.
Use the commands given below:-
## Apply the configuration defined in the deployment_mongo.yaml file to the Kubernetes cluster
kubectl apply -f deployment_mongo.yaml
## Apply the configuration defined in the service.yaml file to the Kubernetes cluster
kubectl apply -f service.yaml
## Apply the configuration defined in the secrets.yaml file to the Kubernetes cluster
kubectl apply -f secrets.yaml
## Apply the configuration defined in the pv.yaml file to the Kubernetes cluster
kubectl apply -f pv.yaml
## Apply the configuration defined in the pvc.yaml file to the Kubernetes cluster
kubectl apply -f pvc.yaml
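To confirm the database tier came up (assuming the names and namespace used above), check the resources in the namespace:
kubectl get pods,svc,pvc -n three-tier
kubectl logs deployment/mongodb -n three-tier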
Step 7:- Now create a deployment.yaml file for the backend.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: three-tier
  labels:
    role: api
    env: demo
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 25%
  selector:
    matchLabels:
      role: api
  template:
    metadata:
      labels:
        role: api
    spec:
      imagePullSecrets:
      - name: ecr-registry-secret
      containers:
      - name: api
        image: 407622020962.dkr.ecr.us-east-1.amazonaws.com/backend:latest
        imagePullPolicy: Always
        env:
          - name: MONGO_CONN_STR
            value: mongodb://mongodb-svc:27017/todo?directConnection=true
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongo-sec
                key: username
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongo-sec
                key: password
        ports:
        - containerPort: 3500
        livenessProbe:
          httpGet:
            path: /ok
            port: 3500
          initialDelaySeconds: 2
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ok
            port: 3500
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1
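Note that this manifest pulls the image through an imagePullSecrets entry named ecr-registry-secret, which is not created anywhere above. A minimal sketch for creating it, assuming the us-east-1 registry used in the image URI and substituting your own account ID:
kubectl create secret docker-registry ecr-registry-secret \
  --docker-server=<aws_account_id>.dkr.ecr.us-east-1.amazonaws.com \
  --docker-username=AWS \
  --docker-password="$(aws ecr get-login-password --region us-east-1)" \
  --namespace three-tier
Keep in mind the ECR token expires after roughly 12 hours, so this secret needs refreshing periodically; alternatively, the node IAM role can be given ECR read permissions so no pull secret is needed.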
We also need to create a service.yaml file for the backend.
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: three-tier
spec:
  ports:
  - port: 3500
    protocol: TCP
  type: ClusterIP
  selector:
    role: api
Now we need to create deployment.yaml and service.yaml for the frontend too.
The deployment.yaml file code is given below:-
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: three-tier
  labels:
    role: frontend
    env: demo
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 25%
  selector:
    matchLabels:
      role: frontend
  template:
    metadata:
      labels:
        role: frontend
    spec:
      imagePullSecrets:
      - name: ecr-registry-secret
      containers:
      - name: frontend
        image: 407622020962.dkr.ecr.us-east-1.amazonaws.com/frontend:latest
        imagePullPolicy: Always
        env:
          - name: REACT_APP_BACKEND_URL
            value: "http://backend.amanpathakdevops.study/api/tasks"
        ports:
        - containerPort: 3000
The service.yaml file is given below:-
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: three-tier
spec:
  ports:
  - port: 3000
    protocol: TCP
  type: ClusterIP
  selector:
    role: frontend
After creating these YAML files, we need to apply them, just as we did in the database section (see the sketch below).
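A minimal sketch, assuming the backend and frontend manifests are saved under the hypothetical file names used here:
# Backend
kubectl apply -f backend-deployment.yaml
kubectl apply -f backend-service.yaml

# Frontend
kubectl apply -f frontend-deployment.yaml
kubectl apply -f frontend-service.yaml

# Verify the rollout
kubectl get deployments,pods,svc -n three-tier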
Step 8:-
We will use an Ingress to route incoming traffic to the right service inside the cluster.
# What is an Ingress controller?
The Ingress controller in Kubernetes is a component responsible for managing inbound HTTP(S) traffic to services within the cluster. It acts as a Layer 7 load balancer, routing traffic based on rules defined in Ingress resources. Ingress controllers support features such as HTTP/HTTPS routing, TLS termination, and dynamic configuration updates based on Kubernetes service discovery. Popular implementations include the NGINX Ingress Controller, Traefik, and HAProxy Ingress.
Create an ingress.yaml file with the YAML code given below.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mainlb
  namespace: three-tier
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
spec:
  ingressClassName: alb
  rules:
    - host: backend.amanpathakdevops.study
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 3500
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 3000
Step 9:- Install the AWS Load Balancer Controller prerequisites with the given commands.
#!/bin/bash

# Download IAM Policy Document
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json

# Create IAM Policy
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

# Associate OIDC Provider with EKS Cluster
eksctl utils associate-iam-oidc-provider --region=us-west-2 --cluster=three-tier-cluster --approve

# Create IAM Service Account
eksctl create iamserviceaccount \
  --cluster=three-tier-cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn=arn:aws:iam::626072240565:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve \
  --region=us-west-2
Step 10:- Deploy the AWS Load Balancer Controller.
#!/bin/bash

# Install Helm
sudo snap install helm --classic

# Add EKS Helm repository
helm repo add eks https://aws.github.io/eks-charts

# Update Helm repositories
helm repo update eks

# Install AWS Load Balancer Controller using Helm (the cluster name must match the cluster created earlier)
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=three-tier-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller

# Check the deployment status of AWS Load Balancer Controller
kubectl get deployment -n kube-system aws-load-balancer-controller

# Apply the Kubernetes manifest file for the full stack LB (full_stack_lb.yaml)
kubectl apply -f full_stack_lb.yaml
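Once the controller is running, it provisions an ALB for the Ingress. A quick way to check progress and grab the load balancer's DNS name (assuming the mainlb Ingress in the three-tier namespace from above):
kubectl get ingress mainlb -n three-tier
# The ADDRESS column shows the ALB DNS name once provisioning completes
kubectl describe ingress mainlb -n three-tier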
So here we have deployed a three-tier application using Docker, Kubernetes, AWS EC2, ECR, and EKS.
If you want to integrate it with Jenkins, I am sharing the Jenkins pipeline scripts below so you can use them.
For the backend:- Backend Jenkins script
pipeline {
    agent any
    tools {
        jdk 'jdk'
        nodejs 'nodejs'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
        AWS_ACCOUNT_ID = credentials('ACCOUNT_ID')
        AWS_ECR_REPO_NAME = credentials('ECR_REPO2')
        AWS_DEFAULT_REGION = 'us-east-1'
        REPOSITORY_URI = "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/"
    }
    stages {
        stage('Cleaning Workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Checkout from Git') {
            steps {
                git credentialsId: 'GITHUB', url: 'https://github.com/thakurrajeshPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project.git'
            }
        }
        stage('Sonarqube Analysis') {
            steps {
                dir('Application-Code/backend') {
                    withSonarQubeEnv('sonar-server') {
                        sh ''' $SCANNER_HOME/bin/sonar-scanner \
                            -Dsonar.projectName=three-tier-backend \
                            -Dsonar.projectKey=three-tier-backend '''
                    }
                }
            }
        }
        stage('Quality Check') {
            steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'
                }
            }
        }
        stage('OWASP Dependency-Check Scan') {
            steps {
                dir('Application-Code/backend') {
                    dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                    dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
                }
            }
        }
        stage('Trivy File Scan') {
            steps {
                dir('Application-Code/backend') {
                    sh 'trivy fs . > trivyfs.txt'
                }
            }
        }
        stage("Docker Image Build") {
            steps {
                script {
                    dir('Application-Code/backend') {
                        sh 'docker system prune -f'
                        sh 'docker container prune -f'
                        sh 'docker build -t ${AWS_ECR_REPO_NAME} .'
                    }
                }
            }
        }
        stage("ECR Image Pushing") {
            steps {
                script {
                    sh 'aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${REPOSITORY_URI}'
                    sh 'docker tag ${AWS_ECR_REPO_NAME} ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'
                    sh 'docker push ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'
                }
            }
        }
        stage("TRIVY Image Scan") {
            steps {
                sh 'trivy image ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER} > trivyimage.txt'
            }
        }
        stage('Checkout Code') {
            steps {
                git credentialsId: 'GITHUB', url: 'https://github.com/thakurrajeshPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project.git'
            }
        }
        stage('Update Deployment file') {
            environment {
                GIT_REPO_NAME = "End-to-End-Kubernetes-Three-Tier-DevSecOps-Project"
                GIT_USER_NAME = "thakurrajesh-DevOps"
            }
            steps {
                dir('Kubernetes-Manifests-file/Backend') {
                    withCredentials([string(credentialsId: 'github', variable: 'GITHUB_TOKEN')]) {
                        sh '''
                            git config user.email "thakurrajesh@gmail.com"
                            git config user.name "thakurrajesh-DevOps"
                            BUILD_NUMBER=${BUILD_NUMBER}
                            echo $BUILD_NUMBER
                            imageTag=$(grep -oP '(?<=backend:)[^ ]+' deployment.yaml)
                            echo $imageTag
                            sed -i "s/${AWS_ECR_REPO_NAME}:${imageTag}/${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}/" deployment.yaml
                            git add deployment.yaml
                            git commit -m "Update deployment Image to version ${BUILD_NUMBER}"
                            git push https://${GITHUB_TOKEN}@github.com/${GIT_USER_NAME}/${GIT_REPO_NAME} HEAD:master
                        '''
                    }
                }
            }
        }
    }
}
For the frontend, the Jenkins file is given below:
pipeline {
    agent any
    tools {
        jdk 'jdk'
        nodejs 'nodejs'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
        AWS_ACCOUNT_ID = credentials('ACCOUNT_ID')
        AWS_ECR_REPO_NAME = credentials('ECR_REPO1')
        AWS_DEFAULT_REGION = 'us-east-1'
        REPOSITORY_URI = "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/"
    }
    stages {
        stage('Cleaning Workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Checkout from Git') {
            steps {
                git credentialsId: 'GITHUB', url: 'https://github.com/thakurrajeshPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project.git'
            }
        }
        stage('Sonarqube Analysis') {
            steps {
                dir('Application-Code/frontend') {
                    withSonarQubeEnv('sonar-server') {
                        sh ''' $SCANNER_HOME/bin/sonar-scanner \
                            -Dsonar.projectName=three-tier-frontend \
                            -Dsonar.projectKey=three-tier-frontend '''
                    }
                }
            }
        }
        stage('Quality Check') {
            steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'sonar-token'
                }
            }
        }
        stage('OWASP Dependency-Check Scan') {
            steps {
                dir('Application-Code/frontend') {
                    dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'DP-Check'
                    dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
                }
            }
        }
        stage('Trivy File Scan') {
            steps {
                dir('Application-Code/frontend') {
                    sh 'trivy fs . > trivyfs.txt'
                }
            }
        }
        stage("Docker Image Build") {
            steps {
                script {
                    dir('Application-Code/frontend') {
                        sh 'docker system prune -f'
                        sh 'docker container prune -f'
                        sh 'docker build -t ${AWS_ECR_REPO_NAME} .'
                    }
                }
            }
        }
        stage("ECR Image Pushing") {
            steps {
                script {
                    sh 'aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${REPOSITORY_URI}'
                    sh 'docker tag ${AWS_ECR_REPO_NAME} ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'
                    sh 'docker push ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'
                }
            }
        }
        stage("TRIVY Image Scan") {
            steps {
                sh 'trivy image ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER} > trivyimage.txt'
            }
        }
        stage('Checkout Code') {
            steps {
                git credentialsId: 'GITHUB', url: 'https://github.com/thakurrajeshPathak-DevOps/End-to-End-Kubernetes-Three-Tier-DevSecOps-Project.git'
            }
        }
        stage('Update Deployment file') {
            environment {
                GIT_REPO_NAME = "End-to-End-Kubernetes-Three-Tier-DevSecOps-Project"
                GIT_USER_NAME = "thakurrajeshPathak-DevOps"
            }
            steps {
                dir('Kubernetes-Manifests-file/Frontend') {
                    withCredentials([string(credentialsId: 'github', variable: 'GITHUB_TOKEN')]) {
                        sh '''
                            git config user.email "thakurrajesh07@gmail.com"
                            git config user.name "thakurrajesh-DevOps"
                            BUILD_NUMBER=${BUILD_NUMBER}
                            echo $BUILD_NUMBER
                            imageTag=$(grep -oP '(?<=frontend:)[^ ]+' deployment.yaml)
                            echo $imageTag
                            sed -i "s/${AWS_ECR_REPO_NAME}:${imageTag}/${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}/" deployment.yaml
                            git add deployment.yaml
                            git commit -m "Update deployment Image to version ${BUILD_NUMBER}"
                            git push https://${GITHUB_TOKEN}@github.com/${GIT_USER_NAME}/${GIT_REPO_NAME} HEAD:master
                        '''
                    }
                }
            }
        }
    }
}
To delete the EKS cluster (delete the Ingress first, as noted below):-
eksctl delete cluster --name three-tier-cluster --region us-west-2
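A cleanup note: the ALB provisioned for the Ingress is managed by the AWS Load Balancer Controller, so deleting the Ingress before the cluster lets the controller remove the load balancer; otherwise it may be left running in your account. Assuming the manifest name used earlier:
kubectl delete -f full_stack_lb.yaml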
Key Points:-
Q1:- What is the difference between package.json and package-lock.json?
package.json: A manifest file containing project metadata and the list of dependencies. Developers modify it directly to manage project configuration, metadata, and dependencies.
package-lock.json: An automatically generated file that records the exact version of every installed dependency (the full dependency tree), so that installs are reproducible across machines.