Deploying a Java Full Stack App into a K8s Cluster | Jenkins CI/CD Pipeline
Streamlining Continuous Integration and Deployment with Jenkins for a Java Full Stack Application in Kubernetes
The typical workflow maintained in companies:
1. **Client Request for Change:**
- Client raises a ticket specifying the change required (e.g., background color change).
2. **Development:**
- Developer receives the ticket.
- Develops and tests the code locally.
- Pushes the code to the GitHub repository.
- Informs relevant stakeholders and seeks approval.
3. **DevOps Ticket Creation:**
- Developer raises a DevOps ticket for deployment.
4. **Jenkins Pipeline:**
- Ensures Jenkins pipeline is configured (may already be present).
- Makes necessary adjustments.
5. **Pipeline Stages:**
- **Stage 1 - Compile and Unit Test:**
- Compiles the source code.
- Runs unit tests.
- **Stage 2 - Code Quality and Vulnerability Checks:**
- Uses SonarQube for code quality and coverage checks.
- Utilizes OWASP Dependency Check for vulnerabilities.
- Performs a vulnerability scan on dependencies.
- **Stage 3 - Artifact Packaging and Nexus Repository:**
- Packages the application with dependencies.
- Pushes the artifact to Nexus repository.
- **Stage 4 - Docker Image Build and Scan:**
- Builds a Docker image.
- Tags the Docker image.
- Scans the Docker image using Aqua Trivy.
- Pushes the Docker image to a Docker registry (private registry, ECR, ACR, or self-hosted).
- **Stage 5 - Manifest File Creation and Deployment:**
- Creates a YAML manifest file for Kubernetes deployment.
- Deploys the application to the Kubernetes cluster.
6. **Deployment:**
- Kubernetes deploys the application based on the manifest file.
- Monitors the deployment.
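As a rough sketch of how these stages map onto a declarative Jenkinsfile (the stage names here are illustrative; the real pipeline is built step by step in the Practical Implementation section below):
pipeline {
    agent any
    stages {
        stage('Compile & Unit Test')       { steps { echo 'mvn compile, mvn test' } }
        stage('Quality & Vulnerabilities') { steps { echo 'SonarQube analysis, OWASP Dependency Check' } }
        stage('Package & Push to Nexus')   { steps { echo 'mvn deploy' } }
        stage('Docker Build, Scan, Push')  { steps { echo 'docker build/tag, trivy scan, docker push' } }
        stage('Deploy to Kubernetes')      { steps { echo 'kubectl apply -f deploymentservice.yml' } }
    }
}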
Practical Implementation
We will need three servers:
Jenkins
SonarQube
Nexus
Use the Ubuntu AMI, instance type t2.medium, and 20 GiB of storage.
These are the inbound rules of the security group attached to our servers (typically SSH on 22, Jenkins on 8080, SonarQube on 9000, Nexus on 8081, and the Kubernetes NodePort range 30000-32767).
Jenkins server
sudo apt update
sudo apt install openjdk-17-jre-headless -y
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins -y
sudo systemctl status jenkins
Open your browser and paste:
<PublicIPv4>:8080
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
This gives you the initial admin password.
Next, click on Install suggested plugins.
Plugins are being installed.
Fill in the details and click on Save and Continue.
Save and Finish
SonarQube
sudo apt update
docker # typing this before installation shows the apt install hint
sudo apt install docker.io -y
sudo su
docker run -d -p 9000:9000 sonarqube:lts-community
docker ps
-p 9000:9000
Here the first 9000 is the host port → the port opened on the VM where Docker is running.
The second 9000 is the container port → the port opened inside the container.
Browse to <PublicIPv4OfSonarQubeServer>:<HostPort>.
The default username and password are both admin.
Nexus
sudo apt update
sudo su
docker # to get the install command
apt install docker.io -y
systemctl status docker
docker run -d -p 8081:8081 sonatype/nexus3
docker ps
docker exec -it 985031e648cc /bin/bash # 985031e648cc is the ID of the container
The username is admin and the password is in the /nexus-data/admin.password file inside the container:
cat /nexus-data/admin.password
Here are the default repositories available.
Install Docker on the Jenkins server too:
sudo apt update
sudo apt install docker.io -y
The ‘jenkins’ user should be able to run Docker without any root or sudo power.
Change the permissions on the Docker socket:
sudo chmod 666 /var/run/docker.sock
This is not good practice, though; it is just for testing purposes (a safer alternative is sketched below).
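A safer alternative, as a minimal sketch: add the jenkins user to the docker group instead of opening the socket to everyone (the docker group is created by the docker.io package):
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins # restart so the new group membership takes effect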
Now Jenkins, SonarQube, and Nexus are configured.
Go to the Jenkins Dashboard → Manage Jenkins
Click on Plugins
Install the below shown plugins
These are the plugins that we need.
Configure the plugins
Go to JDK, check Install automatically, click on Add Installer, click on Install from adoptium.net → choose the version.
Configure jdk11 the same way.
Next, configure SonarQube Scanner
Next Configure Maven
Next Configure Dependency-Check
Configure Docker
Apply or Save
Jenkins is not yet connected to SonarQube and Nexus.
To connect to the servers, we need credentials → a token.
Go to the Jenkins Dashboard → Manage Jenkins → Credentials.
Click on Create
Next, add the Docker Hub credential too.
You can go with a token too, but I am using username and password.
Now Configure the Servers
Go to Manage Jenkins → System
Scroll down to SonarQube servers
It comes from SonarQube Scanner Plugin that we installed previously.
Now configure Nexus.
Install the necessary plugin.
You can see a new option is available → Managed files.
Click on it.
Here, provide the credentials for the Nexus server.
Copy this template and paste it just like I did; you have to fill in the necessary details:
<server>
  <id>deploymentRepo</id>
  <username>repouser</username>
  <password>repopwd</password>
</server>
The name is going to be used as the ID.
Here the password will be visible to others, so there is another way to provide it.
Scroll up just above the content and you will see this.
Click on Add
For credentials go to Manage jenkins → Credentials.
Give the password that you set for Nexus.
maven-releases is the actual repository that we will deploy to production, and maven-snapshots is for lower environments like development.
Configure the snapshots repository the same way.
Click on Submit.
Jenkins is now configured for Nexus.
We need some more changes →
in the pom.xml file in the GitHub repo.
Normally you would provide the repository URL, user, and password here.
The username and password are already added in our Jenkins,
so we just configure the URLs in the pom.xml file.
These are the parts that need changes → the URLs.
Change the respective URLs and commit the changes; a sketch of what this section typically looks like is shown below.
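For reference, a minimal sketch of the <distributionManagement> section of the pom.xml, assuming the default Nexus repository paths; <NexusPublicIP> is a placeholder, and the <id> values must match the server IDs configured in the managed settings file:
<distributionManagement>
  <repository>
    <id>maven-releases</id>
    <url>http://<NexusPublicIP>:8081/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>maven-snapshots</id>
    <url>http://<NexusPublicIP>:8081/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>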
Let’s create our pipeline.
Next, use Pipeline Syntax → to get the exact syntax.
Stage 1:
Git Checkout → creates a local copy of the source code on your Jenkins server.
In this case the repo is public → so no credentials are needed.
Stage 2: Compile the code.
First specify the versions of Java and Maven like the following →
Use the names that you configured the tools with.
pipeline {
    agent any
    tools {
        maven 'maven3'
        jdk 'jdk17'
    }
    stages {
        stage('Compile') {
            steps {
                sh "mvn compile"
            }
        }
    }
}
Recheck the names.
Next
Stage 3: Run the unit test cases.
stage('Unit Tests') {
    steps {
        sh "mvn test"
    }
}
Here the test cases will fail, and so the pipeline will fail, because there are failing test cases.
Troubleshoot →
-DskipTests=true
stage('Unit Tests') {
    steps {
        sh "mvn test -DskipTests=true"
    }
}
Stage 4: SonarQube Analysis.
Define the SonarQube scanner in the pipeline.
Use the tool in the pipeline this way:
Define it inside an environment variable.
environment {
    SCANNER_HOME = tool 'sonar-scanner'
}
Using Pipeline Syntax, you may get this type of snippet.
But we already have access to the server, and its credentials are already defined in Jenkins.
So there is no need to give credentials again separately.
Directly use the server as shown below.
stage('SonarQube Analysis') {
    steps {
        withSonarQubeEnv('sonar') {
            sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectKey=EKART -Dsonar.projectName=EKART \
                   -Dsonar.java.binaries=. '''
        }
    }
}
''' ''' → used for multiple lines of code in a single shell command.
- `$SCANNER_HOME`: This variable holds the path to the SonarQube Scanner installation directory (set in the environment block above). The `bin/sonar-scanner` part refers to the executable script for running the SonarQube Scanner.
- `-Dsonar.projectKey=EKART`: Specifies the project key for the SonarQube analysis. The project key is a unique identifier for your project in SonarQube.
- `-Dsonar.projectName=EKART`: Specifies the project name for the SonarQube analysis. This is a human-readable name for your project in SonarQube.
- `-Dsonar.java.binaries=.`: Specifies the location of compiled Java binaries. In this case, it's set to the current directory (`.`), indicating that the Java binaries are in the current working directory. SonarQube uses this information to analyze the code and provide code quality metrics.
The pipeline so far:
pipeline {
    agent any
    tools {
        maven 'maven3'
        jdk 'jdk17'
    }
    environment {
        SCANNER_HOME = tool 'sonar-scanner'
    }
    stages {
        stage('Git Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/Sayantan2k24/java-maven-eks-Ekart.git'
            }
        }
        stage('Compile') {
            steps {
                sh "mvn compile"
            }
        }
        stage('Unit Tests') {
            steps {
                sh "mvn test -DskipTests=true"
            }
        }
        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('sonar') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectKey=EKART -Dsonar.projectName=EKART \
                           -Dsonar.java.binaries=. '''
                }
            }
        }
    }
}
Unpacking → all the tools are installed, but only on the first run.
Maven installs the dependencies listed in the pom.xml file from the official Maven repositories.
It skips the test cases.
Adding Stage 5: the OWASP Dependency Check stage.
Check the tool; we had configured it with the name ‘DC’.
Check Pipeline Syntax.
We are using a third-party tool, so we have to define it in the pipeline:
odcInstallation: 'DC'
dependencyCheck additionalArguments: ' --scan ./', odcInstallation: 'DC'
' --scan ./' → tells the tool what to scan and where it is (including the pom.xml file).
./ means the current directory.
One more thing:
We need to specify the format in which the report should be generated.
Provide a sample format:
stage('OWASP Dependency Check') {
    steps {
        dependencyCheck additionalArguments: ' --scan ./', odcInstallation: 'DC'
        dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
    }
}
By default the report it generates is XML, which is what the pattern above matches.
Stage 6: Build our application.
Skip the test cases → -DskipTests=true (a sketch of this stage follows).
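A minimal sketch of this stage, assuming mvn package is sufficient for this project:
stage('Build') {
    steps {
        sh "mvn package -DskipTests=true"
    }
}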
Stage 7: Deploy to Nexus.
We are deploying the artifact of the application, not the actual application.
For the Nexus stage we need a plugin.
Go to Pipeline Syntax.
Provide the names of the tools that we configured previously.
stage('Deploy To Nexus') {
    steps {
        withMaven(globalMavenSettingsConfig: 'global-maven', jdk: 'jdk17', maven: 'maven3', mavenSettingsConfig: '', traceability: true) {
            sh "mvn deploy -DskipTests=true"
        }
    }
}
Add this part to the pipeline.
Skipping the test cases is only for dev environments, not for prod.
Build now.
The OWASP Dependency Check stage will take a long time the first time we run it, because it needs to download the database that contains the vulnerability information.
Uploaded to maven-snapshots:
Go to Nexus and browse maven-snapshots; we will see our .jar file uploaded in Nexus.
Now a specific version of the artifact of our application is available inside Nexus, which can be used later.
The jar file was uploaded into maven-snapshots, but why not into releases? →
Check the pom.xml file: the project version ends with -SNAPSHOT, and Maven routes such versions to the snapshot repository.
This is the reason the .jar file was uploaded to snapshots; see the sketch below.
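A sketch of the relevant pom.xml fragment (the version number is illustrative):
<!-- the -SNAPSHOT suffix routes mvn deploy to the snapshot repository -->
<version>0.0.1-SNAPSHOT</version>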
Rest of the stages.
Let’s configure our Kubernetes cluster:
2 worker nodes + 1 master node.
Master Node:
Worker Node 1
Worker Node 2
On Master and Worker Nodes
sudo apt update
sudo su # become root
sudo apt-get install docker.io -y
sudo service docker restart
Add the Kubernetes repository (on both master and workers):
sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >/etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
Install kubeadm (run on all nodes):
sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
On Master Node Only
- Initialize Kubernetes with a pod network CIDR:
kubeadm init --pod-network-cidr=192.168.0.0/16
It will print the command for the worker nodes to use to join the master node and form a cluster.
You may get this kind of output at the end:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.31.15.138:6443 --token uw31vp.da3oyzfr4arj06jz \
--discovery-token-ca-cert-hash sha256:7b56bc86f52cef150fd045d6ae5ced2a8e5e53a8b8f74e4f5efa9af3234bc7c4
Run the following command on any VM that you want to join the master node as a worker node:
kubeadm join 172.31.15.138:6443 --token uw31vp.da3oyzfr4arj06jz \
--discovery-token-ca-cert-hash sha256:7b56bc86f52cef150fd045d6ae5ced2a8e5e53a8b8f74e4f5efa9af3234bc7c4
Follow these instructions; run them on the master node (you can run them as the root user or a regular user, both will work):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Again on the master node →
Apply the network plugin:
Install Calico for the networking part.
kubectl apply -f https://docs.projectcalico.org/v3.20/manifests/calico.yaml
Next, run this for the ingress controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.49.0/deploy/static/provider/baremetal/deploy.yaml
On the Master Node
Create a service account, create a role, bind the role, create a secret for the service account, and generate a token for the service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
The manifest references the webapps namespace, so create the namespace first:
kubectl create namespace webapps
Now create a yml file for the service account, paste the manifest above into it, and apply it:
kubectl apply -f sa.yml
Next, create a Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: webapps
rules:
  - apiGroups:
      - ""
      - apps
      - autoscaling
      - batch
      - extensions
      - policy
      - rbac.authorization.k8s.io
    resources:
      - pods
      - componentstatuses
      - configmaps
      - daemonsets
      - deployments
      - events
      - endpoints
      - horizontalpodautoscalers
      - ingress
      - jobs
      - limitranges
      - namespaces
      - nodes
      - persistentvolumes
      - persistentvolumeclaims
      - resourcequotas
      - replicasets
      - replicationcontrollers
      - serviceaccounts
      - services
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
kubectl apply -f role.yml
Assign the role that we created to our service account, jenkins:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-rolebinding
  namespace: webapps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-role
subjects:
  - namespace: webapps
    kind: ServiceAccount
    name: jenkins
Now our service account jenkins has access to perform all of those operations on all the resources that we defined in role.yml.
Create a secret inside the specified namespace → webapps:
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: mysecretname
  annotations:
    kubernetes.io/service-account.name: jenkins
kubectl apply -f sec.yml -n webapps
To view the token that exists inside the secret:
kubectl -n webapps describe secret mysecretname
Keep this token pasted somewhere; we will need it in future steps for authentication.
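Alternatively, a one-liner sketch that prints just the token (assuming the secret is named mysecretname):
kubectl -n webapps get secret mysecretname -o jsonpath='{.data.token}' | base64 --decode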
Get the kubeconfig file.
Get back to the Jenkins Dashboard and create the remaining pipeline stages.
Stage 8: Create a Docker image.
stage('Build & Tag Docker Image') {
    steps {
        script {
            withDockerRegistry(credentialsId: 'docker-hub-cred') {
                sh "docker build -t shopping-cart -f docker/Dockerfile ."
                sh "docker tag shopping-cart sayantan2k21/shopping-cart:latest"
            }
        }
    }
}
First, install Trivy on the Jenkins server.
Use these commands directly:
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y
Now add the Trivy stage to our pipeline.
Stage 9: Scan the Docker image using Trivy.
stage('Trivy Scan') {
    steps {
        sh "trivy image sayantan2k21/shopping-cart:latest > trivy-report.txt"
    }
}
Stage 10: Push the container image to Docker Hub.
stage('Push The Docker Image') {
    steps {
        withDockerRegistry(credentialsId: 'docker-hub-cred') {
            sh "docker push sayantan2k21/shopping-cart:latest"
        }
    }
}
Deploying the application to the Kubernetes cluster.
Install the necessary plugins.
Go to pipeline syntax
To provide the credential → we have to use the secret token that we obtained earlier.
Click on Add.
Next provide the server endpoint
Provide the namespace.
One thing to take note of:
In the Dockerfile, we are exposing port 8070, which means the application will run on port 8070.
Open this port inside our deploymentservice.yml file.
port: 8070 → the port the service listens on inside the cluster; it is used for communication within the cluster.
targetPort: 8070 → the port on which our application is actually running.
type: NodePort → a LoadBalancer would be better, since it creates a new external IP address that can be used to access the application directly.
But in our case we are setting up self-hosted Kubernetes on our own VMs, so a LoadBalancer may not always work.
It is better to use NodePort in this use case (see the Service sketch after the next paragraph).
In Amazon Elastic Kubernetes Service (EKS), when you deploy a service of type LoadBalancer, AWS automatically provisions an Elastic Load Balancer (ELB) to handle load balancing for your application. Therefore, you don’t have to manually set up and manage the load balancer; it’s taken care of by the EKS service.
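For reference, a minimal sketch of the Service part of deploymentservice.yml under these assumptions (the metadata name and selector label are illustrative; the nodePort must fall in the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: ekart-service
  namespace: webapps
spec:
  type: NodePort
  selector:
    app: ekart # must match the Deployment's pod labels
  ports:
    - port: 8070 # service port inside the cluster
      targetPort: 8070 # container port the app listens on
      nodePort: 32338 # external port opened on every node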
Add this part to the pipeline.
stage('Kubernetes Deploy') {
    steps {
        withKubeConfig(caCertificate: '', clusterName: '', contextName: '', credentialsId: 'k8-token', namespace: 'webapps', restrictKubeConfigAccess: false, serverUrl: 'https://172.31.15.138:6443') {
            sh "kubectl apply -f deploymentservice.yml -n webapps"
            sh "kubectl get svc -n webapps"
        }
    }
}
Build Now.
The deploy stage failed.
We have to install kubectl on the Jenkins server first.
sudo snap install kubectl --classic
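To verify the installation:
kubectl version --client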
Now build the job again.
Browse the public IP of one of the worker nodes on port 32338.
The username and password are both admin.
Then go to /home.
Thank you for reading.😊