Building a CI/CD Pipeline for a Retail Company

Architecture Diagram:

Generate a Token from GitHub

To generate a token in GitHub, you can follow these steps:

  1. Log in to your GitHub account.
  2. Click on your profile picture in the top-right corner of the page and select “Settings” from the dropdown menu.
  3. In the left sidebar, click on “Developer settings.”
  4. On the Developer settings page, click on “Personal access tokens.”
  5. Click on the “Generate new token” button.
  6. Enter a descriptive note for the token to help you remember its purpose.
  7. Select the scopes or permissions you want to grant to the token. Scopes control the actions the token can perform.
  8. Optionally, set an expiration date for the token by choosing one from the “Expiration” dropdown; shorter lifetimes are safer.
  9. Once you have configured the token, click on the “Generate token” button at the bottom of the page.
  10. GitHub will generate the token for you and display it on the screen. Make sure to copy the token and save it in a secure place.
  11. After closing the page, the token will not be visible again, so ensure you have copied it before leaving the page.
  12. You can use this token in your applications or scripts to authenticate with the GitHub API. Be cautious with the token and treat it like a password, as it provides access to your GitHub account.

Remember to keep your generated token private and secure. If you suspect it has been compromised, you can always revoke the token and generate a new one.
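As a minimal sketch of step 12, the token can be used directly from the command line. This assumes the token you copied is exported in the `GITHUB_TOKEN` environment variable; the API endpoint shown is GitHub's standard REST API.

```shell
# Assumes the token copied in step 10 is exported as GITHUB_TOKEN.
# Query the GitHub REST API for the authenticated user's profile:
curl -s \
  -H "Authorization: Bearer ${GITHUB_TOKEN}" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/user
```

The same token also works as the HTTPS password when Jenkins clones a private repository.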

Installing Jenkins and Configuring the GitHub Webhook

To configure a GitHub webhook for Jenkins, you can follow these steps:

  1. Set up Jenkins: Make sure you have Jenkins installed and configured on your server. Ensure that you have the necessary plugins installed, such as the GitHub plugin and the Generic Webhook Trigger plugin.
  2. Create a Jenkins job: Create a Jenkins job that you want to trigger when a webhook event occurs. Configure the job to perform the desired actions, such as building and deploying your code.
  3. Configure GitHub webhook: Go to your GitHub repository’s settings.
  4. Webhooks: Click on “Webhooks” in the left sidebar (older GitHub UIs label this “Webhooks & services”).
  5. Add webhook: Click on the “Add webhook” button to create a new webhook.
  6. Payload URL: In the “Payload URL” field, enter the URL of your Jenkins server, followed by the path to the webhook endpoint. The endpoint is typically http://<jenkins_server>/github-webhook/.
  7. Content type: Select the appropriate content type for the webhook payload. Usually, “application/json” is the recommended option.
  8. Secret (optional): If you want to secure the communication between GitHub and Jenkins, you can specify a secret token. This token can be used to verify the authenticity of the webhook request.
  9. Events: Select the events that should trigger the webhook. For example, you can choose to trigger the webhook on push events, pull request events, or specific types of actions.
  10. Active: Ensure the webhook is active by leaving the checkbox checked.
  11. Save the webhook: Click on the “Add webhook” or “Save” button to create the webhook.
  12. Test the webhook: To verify that the webhook is set up correctly, you can trigger a test payload by clicking on the “Recent Deliveries” link on the webhook page. Then, click on the most recent delivery and select “Redeliver” to send a test payload to your Jenkins job.
  13. Jenkins job configuration: In your Jenkins job configuration, add a build trigger for “Generic Webhook Trigger.” Configure the trigger to match the expected payload from the GitHub webhook. You can specify conditions, filters, and actions to be performed when the webhook is triggered.
  14. Save the job configuration: Click on the “Save” button to save your Jenkins job configuration.

Now, whenever a configured event occurs in your GitHub repository, the corresponding webhook will be sent to your Jenkins server, triggering the associated job to run.
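The optional secret from step 8 works like this: GitHub signs the raw request body with HMAC-SHA256 using the shared secret and sends the result in the `X-Hub-Signature-256` header, and the receiver recomputes the signature and compares. A runnable sketch of that check, with a made-up secret and payload:

```shell
# Made-up examples; in production the secret comes from the webhook config
# and the payload is the raw HTTP request body.
SECRET='my-webhook-secret'
PAYLOAD='{"ref":"refs/heads/main"}'

# What GitHub would send in the X-Hub-Signature-256 header:
EXPECTED="sha256=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')"

# What the receiver recomputes from the raw body and the shared secret:
ACTUAL="sha256=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')"

if [ "$EXPECTED" = "$ACTUAL" ]; then
  echo "signature valid - process the webhook"
else
  echo "signature mismatch - reject the request"
fi
```

If the secret is wrong or the body was tampered with, the signatures differ and the request should be rejected.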

Java Project Used:

Jenkins Script to Build Docker Image and Push It to Docker Hub:

Prerequisite: Jenkins must already be configured and a Jenkins user created. Please go through the session to understand the steps in depth.

  • Use Source Code Management to pull the code from GitHub by adding the repository URL and credentials (make sure the Dockerfile is in the same repository).
  • Make sure the Docker Build and Publish plugin is installed.
  • Add an additional build step for Docker Build and Publish.
  • Give the repository name, tag, and registry credentials.
  • Save the pipeline.
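For reference, the Docker Build and Publish step runs roughly the equivalent of the plain docker commands below. The username, repository, and tag are placeholders (`uiv89170/java` is the image used later in this post); the real values come from the job configuration.

```shell
DOCKER_USER="uiv89170"          # Docker Hub username (placeholder)
IMAGE="${DOCKER_USER}/java"     # repository name
TAG="latest"

docker build -t "${IMAGE}:${TAG}" .   # expects a Dockerfile in the repo root
docker login -u "${DOCKER_USER}"      # prompts for the registry password/token
docker push "${IMAGE}:${TAG}"
```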

Jenkins Declarative Pipeline to Create an EKS Cluster

Jenkins script for the same:


  • Add the AWS access key and secret key to Jenkins credentials, with IDs matching those referenced in the script (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
pipeline {
    agent any
    environment {
        AWS_ACCESS_KEY_ID     = credentials('AWS_ACCESS_KEY_ID')
        AWS_SECRET_ACCESS_KEY = credentials('AWS_SECRET_ACCESS_KEY')
        clusterName = "my-eks-cluster1"
        region = "ap-south-1"
        vpcID = "vpc-02f76b72e650a6cbc"
        subnetIDs = "subnet-069ab53353888ca38,subnet-0167591b59b1328b3,subnet-0c2083f06d3b7fd4e"
        instanceType = "t2.micro"
        minSize = 1
        maxSize = 3
        securityGroupIDs = "sg-01548190c082d7b89"
        rolearn = "arn:aws:iam::462857891680:role/eksClusterRole"
    }
    stages {
        stage('Clean Workspace') {
            steps {
                cleanWs()
            }
        }
        stage('Install AWS CLI') {
            steps {
                sh 'curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"'
                sh 'unzip -o awscliv2.zip'
                sh 'sudo ./aws/install --update'
                sh 'aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID'
                sh 'aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY'
                sh 'aws configure set default.region $region'
            }
        }
        stage('Check if Cluster Exists') {
            steps {
                script {
                    def clusterExists = sh(script: "aws eks describe-cluster --name ${clusterName} --region ${region}", returnStatus: true)
                    if (clusterExists == 0) {
                        error("EKS cluster ${clusterName} already exists. Skipping cluster creation.")
                    }
                }
            }
        }
        stage('Create EKS Cluster') {
            steps {
                script {
                    // Create EKS cluster (publicAccessCidrs=0.0.0.0/0 leaves the API endpoint open; restrict it in production)
                    sh "aws eks create-cluster --name ${clusterName} --role-arn ${rolearn} --region ${region} --resources-vpc-config subnetIds=${subnetIDs},securityGroupIds=${securityGroupIDs},publicAccessCidrs=0.0.0.0/0"

                    // Wait for the cluster to become ACTIVE
                    sh "aws eks wait cluster-active --name ${clusterName} --region ${region}"

                    // Create worker node group
                    sh "aws eks create-nodegroup --cluster-name ${clusterName} --nodegroup-name my-node-group --node-role arn:aws:iam::462857891680:role/AmazonEKSNodeRole --subnets ${subnetIDs.replace(',', ' ')} --instance-types ${instanceType} --scaling-config desiredSize=${minSize},minSize=${minSize},maxSize=${maxSize} --disk-size 20 --region ${region}"

                    // Wait for the node group to become ACTIVE
                    sh "aws eks wait nodegroup-active --cluster-name ${clusterName} --nodegroup-name my-node-group --region ${region}"

                    // Get cluster details
                    sh "aws eks describe-cluster --name ${clusterName} --region ${region}"
                }
            }
        }
    }
}
This script is a Jenkins pipeline that performs the following steps:

  1. Cleans the workspace by deleting the existing files.
  2. Installs the AWS Command Line Interface (CLI) by downloading the AWS CLI package, unzipping it, and installing it.
  3. Checks if an Amazon Elastic Kubernetes Service (EKS) cluster with the specified name already exists. If it exists, it throws an error and skips cluster creation.
  4. Creates an EKS cluster using the AWS CLI. It specifies the cluster name, role ARN, region, VPC configuration (subnets and security groups), and public access CIDR.
  5. Waits for the EKS cluster to become active.
  6. Creates a worker node group for the EKS cluster. It specifies the cluster name, node group name, node role ARN, subnets, instance type, scaling configuration (min and max size), and disk size.
  7. Waits for the node group to become active.
  8. Retrieves the details of the created EKS cluster.

Overall, the script sets up and provisions an EKS cluster using the AWS CLI and performs necessary checks and waits for the cluster and node group to become active.
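The “Check if Cluster Exists” stage hinges on the aws CLI’s exit code: describe-cluster exits 0 when the cluster exists and non-zero otherwise. The same pattern can be sketched in plain shell, with a stand-in function in place of the real aws call so it runs anywhere:

```shell
# Stand-in for: aws eks describe-cluster --name "$1" --region "$region"
# The real command exits 0 if the cluster exists, non-zero if it does not.
check_cluster() {
  [ "$1" = "existing-cluster" ]
}

if check_cluster "my-eks-cluster1"; then
  echo "cluster exists - skipping creation"
else
  echo "cluster not found - creating"
fi
```

Since the stand-in only recognizes "existing-cluster", this prints "cluster not found - creating".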

Connect to Kubernetes Cluster

  • Make sure the AWS CLI is configured before proceeding, and install kubectl.
aws eks update-kubeconfig --name my-eks-cluster1 --region ap-south-1

This command updates the kubeconfig file on your local machine so that it can access the EKS cluster named “my-eks-cluster1” created above in the ap-south-1 (Mumbai) region.

When you run this command, it authenticates with the AWS credentials configured on your machine and retrieves the necessary cluster information from the specified EKS cluster. It then updates your kubeconfig file with the relevant authentication details, cluster endpoint, and other configuration settings.

After running this command, you can use the kubectl command-line tool to interact with the EKS cluster. For example, you can run commands like kubectl get pods to view the running pods in the cluster, kubectl apply to deploy a Kubernetes manifest, or kubectl exec to execute commands inside a container running in the cluster.

By updating the kubeconfig file, you ensure that your local machine has the necessary credentials and configuration to securely connect to and manage the specified EKS cluster.
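A few quick checks to confirm the kubeconfig update worked, assuming kubectl is installed and the cluster from the pipeline above is running:

```shell
kubectl config current-context      # should point at the EKS cluster ARN
kubectl get nodes                   # worker nodes from the node group
kubectl get pods --all-namespaces   # system pods (coredns, kube-proxy, aws-node)
```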

Passing the Docker Credentials to Kubernetes Secrets

  • Run docker login so that the credentials are stored in ~/.docker/config.json.
  • Run the command below to create a secret from that file (the --type flag is required so that Kubernetes treats it as registry credentials):
kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/ec2-user/.docker/config.json --type=kubernetes.io/dockerconfigjson
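To confirm the secret was created correctly, it can be inspected and decoded with standard kubectl commands (these assume access to the cluster):

```shell
# Show the secret; the registry credentials are stored base64-encoded
# under the .dockerconfigjson key:
kubectl get secret regcred --output=yaml

# Decode the stored config.json to verify its contents:
kubectl get secret regcred \
  --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
```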

Creating a Pod from the Image in Our Private Registry

  • Use the YAML file below to create the pod:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
  labels:
    app: private-reg
spec:
  containers:
  - name: private-reg-container
    image: uiv89170/java
    ports:
    - containerPort: 8123
    resources:
      requests:
        cpu: "200m"
        memory: "524Mi"
      limits:
        cpu: "200m"
        memory: "524Mi"
  imagePullSecrets:
  - name: regcred
  • Use kubectl apply -f <filename>.yaml

Expose the Pod Using a Service:

  • Below is the YAML file for creating the service:
apiVersion: v1
kind: Service
metadata:
  name: private-reg-service
spec:
  selector:
    app: private-reg
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8123
  • After this, when you get the service, you will see only the cluster IP and no external IP.
  • Use the command below to patch the service to type LoadBalancer and get an external IP:
kubectl patch svc private-reg-service -p '{"spec": {"type": "LoadBalancer"}}'
  • Now when you get the service, you will be able to see the external IP.
  • Your application will be exposed on that external IP.
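Once the load balancer is provisioned (this can take a minute or two on AWS), the application can be reached through the service’s address, for example:

```shell
# Fetch the load balancer address from the service status
# (on AWS this is a DNS hostname rather than a raw IP):
EXTERNAL=$(kubectl get svc private-reg-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Hit the app on port 80, which forwards to containerPort 8123:
curl "http://${EXTERNAL}/"
```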

MFH IT Solutions (Regd No -LIN : AP-03-46-003-03147775)

Consultation & project support organization.

