DevOps Project 19

Continuous Deployment with Jenkins, Elastic Container Registry and Elastic Container Service

What is Continuous Deployment?

Continuous Deployment (CD) is the natural evolution of Continuous Integration (CI) in DevOps, extending the automation principles from code integration to the automated release of software changes into production environments. Where CI focuses on regularly integrating code changes into a shared repository, CD takes this a step further by automating the deployment process. This seamless transition from CI to CD is pivotal in the DevOps culture, ensuring that software updates are not only integrated continuously but also deployed rapidly and reliably. By automating the entire pipeline, CD minimizes manual interventions, accelerates release cycles, and enhances collaboration between development and operations teams, contributing to a more agile and efficient software development lifecycle.

In this project, we will explore the integration of Jenkins, Amazon Elastic Container Registry (ECR), and Amazon Elastic Container Service (ECS) to establish a Continuous Deployment pipeline for a containerized application, showcasing the implementation of these concepts in a real-world scenario.

This project is an extension of our Continuous Integration Projects, so we’ll continue with the already provisioned resources (Jenkins, SonarQube and Nexus).
If you’re new, you can access the previous projects using the links below:

https://medium.com/@samuelnnanna71/continuous-integration-using-jenkins-nexus-sonarqube-and-slack-f1d43379dda9

https://medium.com/devops-dev/continuous-integration-using-jenkins-nexus-sonarqube-and-slack-89a257cb73a8

Project Architecture

Flow of Execution

  1. Update GitHub Webhook with the new Jenkins IP
  2. Copy Docker files from vprofile repo to our repo
  3. Prepare Two Separate Jenkinsfile for Staging & Prod in Source code
  4. AWS steps
    . IAM, ECR Repo setup
  5. Jenkins Steps
    . Install Plugins (Amazon ECR, Docker Pipeline, Docker Build and Publish, Pipeline: AWS Steps)
  6. Install Docker Engine & AWSCLI on Jenkins
  7. Write Jenkinsfile for build & publish image to ECR
  8. ECS setup
    . Cluster, Task Definition, Service
  9. Code to Deploy Docker image to ECS
  10. Repeat the steps for prod ECS cluster
  11. Promoting Docker Image for prod

Ladies and Gentlemen, shall we?

Step One: Updating GitHub Webhook and Preparing Branches

When an instance undergoes a stop and start operation, its public IP changes, disrupting the webhook that relies on this IP. To address this, we need to update the Payload URL of the webhook with the new Jenkins IP. Follow these steps:

  1. Ensure you are on the “ci-jenkins” branch in your GitHub repository. Navigate to the “Settings” tab and find the “Webhooks” section.
  2. Access the AWS EC2 Management Console and copy the Public IP of the Jenkins server.
  3. Return to the webhook page on GitHub, click on “Edit,” and paste the IP address in the designated field, following the usual format.
  4. Scroll down and click on “Update webhook” to save the changes. This ensures that the webhook is synchronized with the updated Jenkins IP, allowing it to function seamlessly after instance restarts.
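For clarity, the Payload URL follows the Jenkins GitHub plugin convention: the server’s public IP on port 8080 with the “/github-webhook/” path. A minimal sketch (the IP below is a hypothetical placeholder for your new Jenkins IP):

```shell
# Hypothetical placeholder — substitute your Jenkins server's current public IP.
JENKINS_IP="54.210.167.204"

# Payload URL format expected by the Jenkins GitHub plugin.
WEBHOOK_URL="http://${JENKINS_IP}:8080/github-webhook/"
echo "$WEBHOOK_URL"
```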

To organize Dockerfiles and Jenkinsfiles for staging and production environments in your source code, follow these steps:

  1. Open the Vprofile-project repository and switch to the “docker” branch. Download the content as a zip file from the “Code” dropdown.
  2. Extract the zip file and copy the “Docker-files” folder (located in the “vprofile-project-docker” directory) into your source code.
  3. In your terminal or Git Bash, navigate to the repository for the Continuous Integration project and switch to the “ci-jenkins” branch.
  4. Now create and switch into a new branch using the command:
git checkout -b cicd-jenkins

5. Using the file explorer, copy the “Docker-files” folder into the repository folder. Ensure that you are already in the new branch (“cicd-jenkins”).

6. In the “cicd-jenkins” branch, create two directories, “StagePipeline” and “ProdPipeline.” Copy the Jenkinsfile from the branch into both directories. Optionally, you can delete the Jenkinsfile from the branch as it is no longer needed.

mkdir StagePipeline ProdPipeline #creates two directories
cp Jenkinsfile StagePipeline/
cp Jenkinsfile ProdPipeline/
git rm Jenkinsfile #removes Jenkinsfile

7. Now that the structure for staging and production branches is set, push the changes to the Git repository.

git add .
git commit -m "<commit message>"
git push origin cicd-jenkins

Step Two: Setup Elastic Container Registry

Next, we need to create an ECR repository where our Docker images will be stored. But before we do that, we need to set up an IAM user that will have access to the ECR repository and, subsequently, the Elastic Container Service (ECS). To do this:

  1. Navigate to the IAM console in AWS.
  2. Create a new IAM user named “cicdjenkins.”
  3. Attach the following policies to the “cicdjenkins” user:
  • AmazonEC2ContainerRegistryFullAccess: Provides full access to the Elastic Container Registry (ECR) for managing Docker images.
  • AmazonECS_FullAccess: Grants full access to the Elastic Container Service (ECS) for managing containerized applications.

4. After attaching the policies, generate access key credentials for the “cicdjenkins” user. Note down the Access Key ID and Secret Access Key.

  • Click on the “Security credentials” tab for the newly created user.
  • Under the “Access keys” section, click on “Create access key.”
  • Download the access key details or copy them to a secure location, as you will need these credentials in Jenkins.
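For reference, the same IAM setup could be scripted with the AWS CLI. This is only a sketch: it composes the commands (which require admin credentials to actually run) rather than executing them.

```shell
# Compose the CLI equivalents of the console steps above.
USER="cicdjenkins"
CREATE_USER="aws iam create-user --user-name ${USER}"
ATTACH_ECR="aws iam attach-user-policy --user-name ${USER} --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess"
ATTACH_ECS="aws iam attach-user-policy --user-name ${USER} --policy-arn arn:aws:iam::aws:policy/AmazonECS_FullAccess"
CREATE_KEY="aws iam create-access-key --user-name ${USER}"
printf '%s\n' "$CREATE_USER" "$ATTACH_ECR" "$ATTACH_ECS" "$CREATE_KEY"
```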

Now, to set up the ECR repository, follow these steps:

  1. Navigate to the AWS Management Console and go to the Amazon ECR service.
  2. Click on “Create repository.”
  3. Enter the repository name as “appimg.”
  4. Optionally, you can add tags, configure lifecycle policies, or adjust settings based on your project requirements. For now, the basic settings are sufficient.
  5. Click on “Create repository” to finalize the creation of the private ECR repository named “appimg.”

We now have a private ECR repository named “appimg” where our Docker images will be stored. Make note of the repository URI, as it will be used in Jenkins to push Docker images during the CI/CD pipeline.
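The repository URI is derived from your account ID, region, and repository name, and is what the Jenkinsfile later refers to. A sketch (the account ID is the one used in this article’s Jenkinsfile; treat it as a placeholder for your own):

```shell
ACCOUNT_ID="197383505749"   # placeholder — use your own AWS account ID
REGION="us-east-1"
REPO_NAME="appimg"

# CLI equivalent of the console creation, for reference (needs AWS credentials):
# aws ecr create-repository --repository-name "$REPO_NAME" --region "$REGION"

# The URI Jenkins will push images to:
REPO_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_NAME}"
echo "$REPO_URI"
```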

Step Three: Jenkins Configurations

In this phase, we’ll do three things: install the necessary plugins in Jenkins, store the IAM user credentials in Jenkins, and install Docker Engine on the Jenkins server.

  1. Install Jenkins Plugins:
  • Navigate to the Jenkins dashboard and go to “Manage Jenkins.”
  • Access the “Plugin Manager” and go to the “Available” tab.
  • Search for and install the following plugins:
  • Docker Pipeline
  • Docker Build and Publish
  • Amazon ECR
  • Pipeline: AWS Steps

2. Add IAM User Credentials to Jenkins:

  • Navigate to “Manage Jenkins” and then “Manage Credentials.”
  • Under the “Stores scoped to Jenkins” section, click on “(global)”.
  • Click on “Add Credentials” and select “AWS Credentials” in the “Kind” field.
  • Provide the “Access Key ID” and “Secret Access Key” for the “cicdjenkins” IAM user created earlier.
  • Set the “ID” field to “awscreds” (the Jenkinsfile will reference this ID) and add a meaningful description.
  • Click “OK” to save the credentials.

3. Install Docker Engine on Jenkins Server:

  • SSH into the Jenkins server using the appropriate credentials and run the following commands:
# Install awscli
sudo apt update && sudo apt install -y awscli
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install the latest version
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add Jenkins user to the docker group
sudo usermod -aG docker jenkins

# confirm the jenkins user is in the docker group
id jenkins

# restart jenkins service
sudo systemctl restart jenkins
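As a sanity check, the jenkins user must be able to authenticate Docker against ECR. The Amazon ECR plugin handles this inside the pipeline, but the equivalent manual login looks like this (a sketch that composes the command rather than executing it, since it needs AWS credentials; the account ID and region match the Jenkinsfile used in this article):

```shell
ACCOUNT_ID="197383505749"   # placeholder — use your own AWS account ID
REGION="us-east-1"
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Manual ECR login (the Amazon ECR plugin does this for the pipeline):
LOGIN_CMD="aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${REGISTRY}"
echo "$LOGIN_CMD"
```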

Step Four: Docker Build

In this phase, we’ll write the code for our image to be built and pushed to the ECR repository.
Using VScode, navigate to the StagePipeline Directory in the cicd-jenkins branch. Open the Jenkinsfile and add the new stages and environment variables, then commit and push.

pipeline {
agent any
tools {
maven "MAVEN3"
jdk "OracleJDK17"
}

environment {
SNAP_REPO = 'vprofile-snapshot'
NEXUS_USER = 'admin'
NEXUS_PASS = 'admin123'
RELEASE_REPO = 'vprofile-release'
CENTRAL_REPO = 'vpro-maven-central'
NEXUS_IP = '172.31.82.244'
NEXUS_PORT = '8081'
NEXUS_GRP_REPO = 'vpro-maven-group'
NEXUS_LOGIN = 'nexuslogin'
SONARSERVER = 'sonarserver'
SONARSCANNER = 'sonarscanner'
registryCredential = 'ecr:us-east-1:awscreds'
appRegistry = '197383505749.dkr.ecr.us-east-1.amazonaws.com/appimg'
vprofileRegistry = "https://197383505749.dkr.ecr.us-east-1.amazonaws.com"
}

stages {
stage('Build'){
steps {
sh 'mvn -s settings.xml -DskipTests install'
}
post {
success {
echo 'Archiving'
archiveArtifacts artifacts: '**/*.war'
}
}

}

stage('Test') {
steps {
sh 'mvn -s settings.xml test'
}
}

stage('Checkstyle Analysis') {
steps {
sh 'mvn -s settings.xml checkstyle:checkstyle'
}
}

stage ('Sonar Analysis') {
environment {
scannerHome = tool "${SONARSCANNER}"
}
steps {
withSonarQubeEnv("${SONARSERVER}") {
sh '''${scannerHome}/bin/sonar-scanner -Dsonar.projectKey=vprofile \
-Dsonar.projectName=vprofile \
-Dsonar.projectVersion=1.0 \
-Dsonar.sources=src/ \
-Dsonar.java.binaries=target/test-classes/com/visualpathit/account/controllerTest/ \
-Dsonar.junit.reportsPath=target/surefire-reports/ \
-Dsonar.jacoco.reportsPath=target/jacoco.exec \
-Dsonar.java.checkstyle.reportPaths=target/checkstyle-result.xml'''
}
}
}

stage ("Upload Artifact") {
steps {
nexusArtifactUploader(
nexusVersion: 'nexus3',
protocol: 'http',
nexusUrl: "${NEXUS_IP}:${NEXUS_PORT}",
groupId: 'QA',
version: "${env.BUILD_ID}-${env.BUILD_TIMESTAMP}",
repository: "${RELEASE_REPO}",
credentialsId: "${NEXUS_LOGIN}",
artifacts: [
[artifactId: 'vproapp',
classifier: '',
file: 'target/vprofile-v2.war',
type: 'war']
]
)
}
}

stage ('Build App Image') {
steps {
script {
dockerImage = docker.build( appRegistry + ":$BUILD_NUMBER", "./Docker-files/app/multistage")
}
}
}

stage ('Upload App Image') {
steps {
script {
docker.withRegistry( vprofileRegistry, registryCredential) {
dockerImage.push("$BUILD_NUMBER")
dockerImage.push('latest')
}
}
}
}

}
}

On Jenkins, create a new pipeline job, give it a name, and set the necessary configurations. Make sure the new branch “cicd-jenkins” is selected and the script path points to “StagePipeline/Jenkinsfile”, then manually trigger the build process.

Step Five: ECS Setup

Now that we have successfully built the application image and pushed it to the ECR repository, we need to set up the Elastic Container Service (ECS) where our application will be deployed. To do this:

  1. Create ECS Cluster:
  • Go to the ECS Management Console on AWS.
  • Click on “Clusters” in the left sidebar and then click “Create Cluster.”
  • Give the cluster a name “vproappstaging.”
  • Under Infrastructure, select “AWS Fargate.” For Monitoring, enable “Use Container Insights.”
  • Click “Create” to create the ECS cluster.

2. Create ECS Task Definition:

  • In the ECS Management Console, click on “Task Definitions” in the left sidebar.
  • Click “Create new Task Definition.”
  • Provide a name, e.g., “vproappstagetask.”
  • Under Container, specify a container name and the Image URI from your ECR repository.
  • Set the container port to “8080” and click “Next.”
  • Optionally adjust resource settings, then click “Next.”
  • Review your configurations and click “Create.”

3. Deploy Service on ECS Cluster:

  • Click on the newly created cluster, go to the “Services” tab, and click “Create.”
  • Under “Compute Options,” select “Fargate” as the launch type.
  • In the “Family” dropdown, select the task definition created earlier (e.g., “vproappstagetask”).
  • Provide a service name (e.g., “vproappstagesvc”) and leave the desired tasks count at “1.”
  • Under Networking, choose “Create new security group,” provide a name, and add an inbound rule for HTTP from anywhere. After creating the security group, edit the inbound rules to also allow port “8080.”
  • Under Load Balancing, create an application load balancer. Provide a name and create a target group with HTTP as the protocol. Set the health check path to “/login” and the health check grace period to 30 seconds. Ensure that the naming convention differentiates it from the production environment.
  • Review your configurations and click “Create.”

4. Adjust Security Group Rules:

  • After successful deployment, edit the security group associated with your ECS service to add port “8080” allowed from anywhere.
  • Repeat the same process for the target group associated with your load balancer.
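The same inbound rule can also be added with the AWS CLI. A sketch (the security group ID is a hypothetical placeholder, and the command is composed rather than executed since it needs AWS credentials):

```shell
SG_ID="sg-0123456789abcdef0"   # hypothetical placeholder — use your group ID
PORT=8080

# Allow port 8080 from anywhere on the service's security group:
CMD="aws ec2 authorize-security-group-ingress --group-id ${SG_ID} --protocol tcp --port ${PORT} --cidr 0.0.0.0/0"
echo "$CMD"
```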

Step Six: Deploy Image to ECS

In this phase, we’re going to write the code for Jenkins to automate the cluster deployment. But first, we need to confirm that our application is running. To do this:

  1. Check ECS Service:
  • In the ECS Management Console, click on the cluster you created (e.g., “vproappstaging”).
  • Under the “Services” tab, click on the service that was created with the task definition (e.g., “vproappstagesvc”).
  • Here, you’ll find details about the service. Click on “Configuration and Networking.”

2. Access the Application:

  • Scroll down to the “Network Configuration” section.
  • Click on the link under “DNS names” to access the deployed application.

Now that we have successfully deployed our application, we have to automate the process, which is the sole purpose of this project.

  1. Update the Jenkinsfile
pipeline {
agent any
options {
buildDiscarder(logRotator(numToKeepStr: "1"))
}

tools {
maven "MAVEN3"
jdk "OracleJDK17"
}

environment {
SNAP_REPO = 'vprofile-snapshot'
NEXUS_USER = 'admin'
NEXUS_PASS = 'admin123'
RELEASE_REPO = 'vprofile-release'
CENTRAL_REPO = 'vpro-maven-central'
NEXUS_IP = '172.31.82.244'
NEXUS_PORT = '8081'
NEXUS_GRP_REPO = 'vpro-maven-group'
NEXUS_LOGIN = 'nexuslogin'
SONARSERVER = 'sonarserver'
SONARSCANNER = 'sonarscanner'
registryCredential = 'ecr:us-east-1:awscreds'
appRegistry = '197383505749.dkr.ecr.us-east-1.amazonaws.com/appimg'
vprofileRegistry = "https://197383505749.dkr.ecr.us-east-1.amazonaws.com"
cluster = "vproappstaging" // must match the ECS cluster name created in Step Five
service = "vproappstagesvc"
}

stages {
stage('Build'){
steps {
sh 'mvn -s settings.xml -DskipTests install'
}
post {
success {
echo 'Archiving'
archiveArtifacts artifacts: '**/*.war'
}
}

}

stage('Test') {
steps {
sh 'mvn -s settings.xml test'
}
}

stage('Checkstyle Analysis') {
steps {
sh 'mvn -s settings.xml checkstyle:checkstyle'
}
}

stage ('Sonar Analysis') {
environment {
scannerHome = tool "${SONARSCANNER}"
}
steps {
withSonarQubeEnv("${SONARSERVER}") {
sh '''${scannerHome}/bin/sonar-scanner -Dsonar.projectKey=vprofile \
-Dsonar.projectName=vprofile \
-Dsonar.projectVersion=1.0 \
-Dsonar.sources=src/ \
-Dsonar.java.binaries=target/test-classes/com/visualpathit/account/controllerTest/ \
-Dsonar.junit.reportsPath=target/surefire-reports/ \
-Dsonar.jacoco.reportsPath=target/jacoco.exec \
-Dsonar.java.checkstyle.reportPaths=target/checkstyle-result.xml'''
}
}
}

stage ("Upload Artifact") {
steps {
nexusArtifactUploader(
nexusVersion: 'nexus3',
protocol: 'http',
nexusUrl: "${NEXUS_IP}:${NEXUS_PORT}",
groupId: 'QA',
version: "${env.BUILD_ID}-${env.BUILD_TIMESTAMP}",
repository: "${RELEASE_REPO}",
credentialsId: "${NEXUS_LOGIN}",
artifacts: [
[artifactId: 'vproapp',
classifier: '',
file: 'target/vprofile-v2.war',
type: 'war']
]
)
}
}

stage ('Build App Image') {
steps {
script {
dockerImage = docker.build( appRegistry + ":$BUILD_NUMBER", "./Docker-files/app/multistage")
}
}
}

stage ('Upload App Image') {
steps {
script {
docker.withRegistry( vprofileRegistry, registryCredential) {
dockerImage.push("$BUILD_NUMBER")
dockerImage.push('latest')
}
}
}
}

stage ('Deploy to ECS') {
steps {
withAWS(credentials: 'awscreds', region: 'us-east-1') {
sh 'aws ecs update-service --cluster ${cluster} --service ${service} --force-new-deployment'
}
}
}

}
}

Save, commit and push to trigger the pipeline.

Now, when a developer makes a commit to the source code, the pipeline will be triggered and the deployment will be automated. But all of this happens in the staging environment, which is for testing purposes. When the developers are done with testing and are satisfied, the application will be promoted to the production environment.
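Under the hood, the deploy stage works because the task definition points at the “latest” tag: forcing a new deployment makes ECS start fresh tasks that pull the newest image. A sketch of the redeploy plus a wait for the rollout to stabilize (commands composed rather than executed, since they need AWS credentials; the names match the staging setup above):

```shell
CLUSTER="vproappstaging"
SERVICE="vproappstagesvc"

# Redeploy, then block until the service reaches a steady state:
DEPLOY="aws ecs update-service --cluster ${CLUSTER} --service ${SERVICE} --force-new-deployment"
WAIT="aws ecs wait services-stable --cluster ${CLUSTER} --services ${SERVICE}"
echo "$DEPLOY && $WAIT"
```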

Step Seven: Promote to Production

Let’s assume the developers are done with the testing and the application is ready to be pushed to production. To do this, a new cluster will be created. We’ll follow the previous steps taken to create the staging environment cluster, but with a few edits. So in this case, use “prod” instead of “stage” in the naming convention.

Now we have to create a new branch for the production environment. In your terminal, ensure you’re in the cicd-jenkins branch then run the command;

git checkout -b prod

Now we’ll be using the ProdPipeline directory, but first we need to publish the branch, which can be done through VS Code. In the terminal, while in the new branch, run the command:

code .

This opens up VScode and prompts you to publish the new branch in the source control section.

Now, in the ProdPipeline directory, open the Jenkinsfile and write the following:

pipeline {
agent any
options {
buildDiscarder(logRotator(numToKeepStr: "1"))
}

environment {
registryCredential = 'ecr:us-east-1:awscreds'
appRegistry = '197383505749.dkr.ecr.us-east-1.amazonaws.com/appimg'
vprofileRegistry = "https://197383505749.dkr.ecr.us-east-1.amazonaws.com"
cluster = "vproappprod" // must match your prod ECS cluster name
service = "vproappprodsvc" // must match your prod ECS service name
}

stages {
stage ('Deploy to Prod ECS') {
steps {
withAWS(credentials: 'awscreds', region: 'us-east-1') {
sh 'aws ecs update-service --cluster ${cluster} --service ${service} --force-new-deployment'
}
}
}
}
}

Now go to Jenkins and create a new job, “Vprofile-cicd-prod-pipeline”; select “Pipeline” and copy the configuration from the previous pipeline job, but change the branch to “prod” and the script path to “ProdPipeline/Jenkinsfile”.
Before you manually trigger the build, ensure you adjust the ports of the target group and security group of the prod cluster, just as we did for the staging cluster.
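As an aside, this article’s prod pipeline redeploys whatever “latest” points to. A common variation is to promote a specific tested build by retagging it, so prod always runs a known image. A sketch (the build number is a hypothetical example; the docker commands are composed rather than executed):

```shell
REPO="197383505749.dkr.ecr.us-east-1.amazonaws.com/appimg"
BUILD_NUMBER="14"   # hypothetical — the staging build that passed testing

SRC="${REPO}:${BUILD_NUMBER}"
DST="${REPO}:prod"

# Pull the tested image, retag it for prod, and push the prod tag:
echo "docker pull ${SRC} && docker tag ${SRC} ${DST} && docker push ${DST}"
```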

Congratulations! Your cloud-native CI/CD pipeline is now set up to transition seamlessly from staging to production. Developers can focus on testing in the staging environment, and when satisfied, promote the application to production effortlessly. This automated process, managed by Jenkins, ensures consistency, reliability, and efficiency in your cloud computing workflows.

Happy DevOps Engineering!!!!🧑🏾‍💻👩🏾‍💻

 
