Installation Procedure for All DevOps Tools


Install and Configure Git on Linux and Ubuntu EC2 Instances:

Installation of GIT on RHEL

Install Server Updates

yum update -y

Install git package

yum install git -y

Verify git Package

which git

Check Version of git Package

git --version

Set Username Configuration

git config --global user.name "Ram"

Set Email Configuration

git config --global user.email ""

Verify Username and Email Configurations

git config --list

Set the Date

date +%T -s "21:43:00"

Verify Date

date


Installation of GIT on UBUNTU 18.04
Step 1 — Update Default Packages

Logged into your Ubuntu 18.04 server as a non-root user with sudo privileges, first update your default packages.

sudo apt update
Step 2 — Install Git
sudo apt install git
Step 3 — Confirm Successful Installation

You can confirm that you have installed Git correctly by running this command and receiving output similar to the following:

git --version

Output
git version 2.17.1
Step 4 — Set Up Git

Now that you have Git installed and to prevent warnings, you should configure it with your information.

git config --global user.name "Your Name"

git config --global user.email ""

If you need to edit this file, you can use a text editor such as vi:

vi ~/.gitconfig

~/.gitconfig contents

[user]
    name = Your Name
    email =
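The same identity file can be written non-interactively. A minimal sketch using a throwaway file; the target path, name, and email here are illustrative placeholders, not values from this article:

```shell
# Write a git identity file in the same format as ~/.gitconfig.
# GITCONFIG, NAME and EMAIL are placeholders for illustration only.
GITCONFIG="$(mktemp)"
NAME="Your Name"
EMAIL="you@example.com"

cat > "$GITCONFIG" <<EOF
[user]
    name = $NAME
    email = $EMAIL
EOF

# Confirm both values landed in the file.
grep "name = $NAME" "$GITCONFIG"
grep "email = $EMAIL" "$GITCONFIG"
```

In real use you would point GITCONFIG at ~/.gitconfig, or simply run the two git config --global commands shown above, which write the same section for you.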


How To Install Docker on Ubuntu 18.04:


Docker is an application that makes it simple and easy to run application processes in a container, which are like virtual machines, only more portable, more resource-friendly, and more dependent on the host operating system.

In this tutorial, you’ll learn how to install and use it on an existing installation of Ubuntu 18.04.

Note: Docker requires a 64-bit version of Ubuntu as well as a kernel version equal to or greater than 3.10. The default 64-bit Ubuntu 18.04 server meets these requirements.

Installing Docker:

Note: All the commands in this tutorial should be run as a non-root user. If root access is required for the command, it will be preceded by sudo.

The Docker installation package available in the official Ubuntu 18.04 repository may not be the latest version. To get the latest and greatest version, install Docker from the official Docker repository. This section shows you how to do just that.

First, add the GPG key for the official Docker repository to the system:

curl -fsSL | sudo apt-key add -

Add the Docker repository to APT sources:

#  To add the edge or test repository, add the word edge or test (or both) after the word stable in the commands below.

sudo add-apt-repository "deb [arch=amd64] $(lsb_release -cs) stable edge"

Note: If you get an E: Package 'docker-ce' has no installation candidate error when using only the stable APT source, this is because the stable version of Docker for Ubuntu 18.04 doesn't exist yet.

In the meantime, you have to use the edge/test version.

Stable releases are done quarterly, so .03, .06, .09 and .12 are stable releases.

Starting with Docker 17.06, stable releases are also pushed to the edge and test repositories.

Next, update the package database with the Docker packages from the newly added repo:

Make sure you are about to install from the Docker repo instead of the default Ubuntu 18.04 repo:

apt-cache policy docker-ce

You should see output similar to the following:

  Installed: (none)
  Candidate: 18.06.0~ce~3-0~ubuntu
  Version table:
     18.06.0~ce~3-0~ubuntu 500
        500 bionic/stable amd64 Packages
        500 bionic/edge amd64 Packages
     18.05.0~ce~3-0~ubuntu 500
        500 bionic/edge amd64 Packages

Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 18.04. The docker-ce version number might be different.

Finally, install Docker:

sudo apt-get install -y docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:

sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2018-05-12 12:26:01 UTC; 4s ago
 Main PID: 6062 (dockerd)
    Tasks: 42
   CGroup: /system.slice/docker.service
           ├─6062 /usr/bin/dockerd -H fd://
           └─6089 docker-containerd --config /var/run/docker/containerd/containerd.toml

May 12 12:26:00 ubuntu18 dockerd[6062]: time="2018-05-12T12:26:00.511573048Z" level=info msg="Loading containers: start."
May 12 12:26:01 ubuntu18 dockerd[6062]: time="2018-05-12T12:26:01.390677887Z" level=info msg="Default bridge (docker0) is assigned with an IP address Daemon option --bip can be used to set a preferred IP address"
May 12 12:26:01 ubuntu18 dockerd[6062]: time="2018-05-12T12:26:01.494850688Z" level=info msg="Loading containers: done."
May 12 12:26:01 ubuntu18 dockerd[6062]: time="2018-05-12T12:26:01.789682259Z" level=info msg="Docker daemon" commit=f150324 graphdriver(s)=overlay2 version=18.05.0-ce
May 12 12:26:01 ubuntu18 dockerd[6062]: time="2018-05-12T12:26:01.789950360Z" level=info msg="Daemon has completed initialization"
May 12 12:26:01 ubuntu18 dockerd[6062]: time="2018-05-12T12:26:01.830063696Z" level=info msg="API listen on /var/run/docker.sock"
May 12 12:26:01 ubuntu18 systemd[1]: Started Docker Application Container Engine.

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We’ll explore how to use the docker command later in this tutorial.

Executing the Docker Command Without Sudo (Optional):

By default, running the docker command requires root privileges, meaning you have to prefix the command with sudo. It can also be run by a user in the docker group, which is automatically created during the installation of Docker. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get output like this:

docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

sudo usermod -aG docker ${USER}

To apply the new group membership, you can log out of the server and back in, or you can type the following:

su ${USER}

You will be prompted to enter your user’s password to continue. Afterwards, you can confirm that your user is now added to the docker group by typing:

id -nG


username sudo docker

If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:

sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker user group. If you choose not to, please prepend the commands with sudo.
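A script can check for docker group membership the same way `id -nG` does above. A sketch; the `in_group` helper is illustrative shell, not part of Docker:

```shell
# Return success if a group name appears in a space-separated group list.
in_group() {
    groups_list="$1"
    wanted="$2"
    case " $groups_list " in
        *" $wanted "*) return 0 ;;
        *) return 1 ;;
    esac
}

# In a real script, pass the current user's groups from `id -nG`:
if in_group "$(id -nG)" docker; then
    echo "user is in the docker group"
else
    echo "user is NOT in the docker group; prepend sudo to docker commands"
fi
```

This mirrors the manual check: the output "username sudo docker" shown earlier would make `in_group` succeed for docker.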

Run Docker Containers (Optional)

Run a docker container using the docker run command to download and start the container.

docker run hello-world

Output (this confirms that Docker is installed correctly):

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9bb5a5d4561a: Pull complete
Digest: sha256:f5233545e43561214ca4891fd1157e1c3c563316ed8e237750d59bde73361e77
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:

For more examples and ideas, visit:


As per Apache Maven’s home page, Maven is a software project management and comprehension tool. Yes, you probably thought so too!

You will probably need to work on a Java project at some point, and it will require a lot of binaries in the form of jar files. This is exactly where Maven is helpful! You wouldn't want to spend (read: waste) your precious coding time downloading each of these jars separately. Maven will do that for you!

At the start…

Upgrade Ubuntu’s package listings

$ sudo apt-get update -y
$ sudo apt-get upgrade -y

Downloading and Installing Maven

Visit this link to download the latest Maven for Ubuntu 16.04; it will be a binary file ending in .tar.gz

Once this is done, extract it using the following command on terminal:

$ tar -zxvf apache-maven-3.x.x-bin.tar.gz

NOTE: The x in 3.x.x represents the latest version of Maven 3

After this, a folder will be created by the name apache-maven-3.x.x. Rename this as maven:

$ mv apache-maven-3.x.x maven

Copy it to /opt:

$ sudo mv maven /opt/

Configuring Maven path

Now that Maven is installed, it’s time to configure it so as to get it reflected in the path variables of Ubuntu.

Run the following command:

$ sudo gedit /etc/profile.d/

On opening the file, paste the following code in the bash script:

export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export M2_HOME=/opt/maven
export MAVEN_HOME=/opt/maven
export PATH=${M2_HOME}/bin:${PATH}

Here we are setting the paths.

NOTE: You can check your Java path in the terminal by running the command echo $JAVA_HOME. This will give you your actual Java path, which is what goes in the file.

Close the file, then run the following commands:

$ sudo chmod +x /etc/profile.d/
$ source /etc/profile.d/

The first command changes permissions on the file, while the second command reloads the path variables.
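The effect of sourcing the profile script can be sketched with a throwaway file. The export lines mirror the ones above; the temp-file location is only for illustration (a real setup sources the script under /etc/profile.d/):

```shell
# Write the Maven environment exports to a temporary profile script.
MAVEN_PROFILE="$(mktemp)"
cat > "$MAVEN_PROFILE" <<'EOF'
export M2_HOME=/opt/maven
export MAVEN_HOME=/opt/maven
export PATH=${M2_HOME}/bin:${PATH}
EOF

# Source it, then confirm the Maven bin directory is now on PATH.
. "$MAVEN_PROFILE"
case ":$PATH:" in
    *":/opt/maven/bin:"*) echo "Maven is on PATH" ;;
    *) echo "Maven is NOT on PATH" ;;
esac
```

If the check fails after a reboot, the usual cause is that the profile script was not executable or was saved outside /etc/profile.d/.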

Now check if Maven is installed and configured properly by running the command:

$ mvn --version

Something like this should get displayed if it was installed correctly:

Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-25T01:19:05+05:30)
Maven home: /opt/maven
Java version: 1.8.0_151, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-8-oracle/jre
Default locale: en_IN, platform encoding: UTF-8
OS name: "linux", version: "4.13.0-38-generic", arch: "amd64", family: "unix"

Yay, you are done installing Maven! Now you can go ahead with your Java/Java EE projects that use a Maven setup.



Install Jenkins on Ubuntu:

Prerequisites:

  1. Ubuntu 20.04
  2. Oracle JDK 11.0.11

1. Add the repository key to the system
$ wget -q -O - | sudo apt-key add -

2. Append the Debian package repository address to the system sources.list

$ sudo sh -c 'echo deb binary/ > /etc/apt/sources.list.d/jenkins.list'

3. Update apt

$ sudo apt update

4. Install Jenkins and its dependencies

$ sudo apt install jenkins

5. Start Jenkins and verify that Jenkins has been started successfully

#start Jenkins
$ sudo systemctl start jenkins
#verify Jenkins status
$ sudo systemctl status jenkins

Jenkins has been started successfully

6. Opening the firewall

#allow port 8080 in firewall
$ sudo ufw allow 8080
#verify if port 8080 is already allowed
$ sudo ufw status

NOTE: If you have another application running on port 8080, you can change the HTTP_PORT variable in the /etc/default/jenkins file. In the following example, I change HTTP_PORT to 3030.

#in /etc/default/jenkins file
# port for HTTP connector (default 8080; disable with -1)
HTTP_PORT=3030
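The port change can also be scripted with sed. A sketch against a throwaway copy of the file; on a real server you would run the sed command with sudo against /etc/default/jenkins and then restart Jenkins:

```shell
# Simulate /etc/default/jenkins in a temp file (illustrative only).
JENKINS_DEFAULTS="$(mktemp)"
echo 'HTTP_PORT=8080' > "$JENKINS_DEFAULTS"

# Rewrite the HTTP_PORT line to the new port.
sed -i 's/^HTTP_PORT=.*/HTTP_PORT=3030/' "$JENKINS_DEFAULTS"

cat "$JENKINS_DEFAULTS"
```

After editing the real file, `sudo systemctl restart jenkins` is needed for the new port to take effect.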

7. Setting up Jenkins

Open Jenkins in a browser using your server IP address or domain name: http://your_server_ip_or_domain:8080. The browser should show the Unlock Jenkins screen:

Unlock Jenkins screen

Show the content of the /var/lib/jenkins/secrets/initialAdminPassword file with the sudo cat command, then copy and paste it as the Administrator password.

#show the initial admin password
$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword

The next screen presents the option of installing suggested plugins or selecting plugins to install. We'll click Install suggested plugins, which will immediately begin the installation process.

Option of Jenkins Installation
Installation with suggested plugins

The next step is to create the first admin user

Create first admin user page

8. Set the instance configuration

Set the instance configuration

Click Start using Jenkins

Your Jenkins is ready

After that, you should be able to access the Jenkins dashboard.

Jenkins dashboard

Kubernetes Cluster:

How To Install Kubernetes Cluster On Ubuntu

This blog is a step by step guide to install Kubernetes on top of Ubuntu VMs (Virtual Machines). Here, one VM will act as the master and the other VM will be the node. You can then replicate the same steps to deploy the Kubernetes cluster onto your prod.

Note: For this installation, we recommend a fresh Ubuntu 16.04 image since Kubernetes can take up a lot of resources. If your installation fails at any time, then execute all the steps mentioned from the very beginning in a fresh VM, because debugging would take longer.

To install Kubernetes, you have to diligently follow these phases that come as part of the installation process:

  1. Pre-requisites to install Kubernetes
  2. Setting up Kubernetes environment
  3. Installing Kubeadm, Kubelet, Kubectl
  4. Starting the Kubernetes cluster from master
  5. Getting the nodes to join the cluster
Pre-requisites To Install Kubernetes

Since we are dealing with VMs, we recommend the following settings for the VMs:

Master:

  • 2 GB RAM
  • 2 Cores of CPU

Slave/ Node:

  • 1 GB RAM
  • 1 Core of CPU

By this point, I assume you have 2 plain Ubuntu VMs imported into your Oracle VirtualBox. So, I'll just get along with the installation process.

Pre-Installation Steps On Both Master & Slave (To Install Kubernetes)

The following steps have to be executed on both the master and node machines. Let's call the master 'kmaster' and the node 'knode'.

First, log in as ‘sudo’ user because the following set of commands need to be executed with ‘sudo’ permissions. Then, update your ‘apt-get’ repository.

$ sudo su 
# apt-get update

Note: After logging in as the 'sudo' user, note that your shell prompt will change from '$' to '#'.

Turn Off Swap Space

Next, we have to turn off swap space because Kubernetes will start throwing random errors otherwise. Then, open the 'fstab' file and comment out the line which mentions the swap partition.

# swapoff -a 
# nano /etc/fstab

To save the file press ‘Ctrl+X’ >> press ‘Y’ >>‘Enter’.
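The same edit can be made non-interactively with sed. A sketch on a throwaway copy of fstab (the sample entries are illustrative); on the real machine you would run the sed command with root privileges against /etc/fstab:

```shell
# Simulate an fstab containing a swap entry (sample lines, for illustration).
FSTAB="$(mktemp)"
cat > "$FSTAB" <<'EOF'
/dev/sda1 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Comment out any line whose filesystem type is swap.
sed -i '/ swap /s/^/#/' "$FSTAB"

cat "$FSTAB"
```

Combined with `swapoff -a`, this keeps swap disabled across reboots, which is what kubeadm expects.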

Update The Hostnames

To change the hostname of both machines, run the below command to open the file and subsequently rename the master machine to ‘kmaster’ and your node machine to ‘knode’.

# nano /etc/hostname

To save the file press ‘Ctrl+X’ >> press ‘Y’ >>‘Enter’.

Update The Hosts File With IPs Of Master & Node

Run the following command on both machines to note the IP addresses of each.

# ifconfig

Make a note of the IP address from the output of the above command. The IP address which has to be copied should be under “enp0s8”, as shown in the screenshot below.

Now go to the ‘hosts’ file on both the master and node and add an entry specifying their respective IP addresses along with their names ‘kmaster’ and ‘knode’. This is used for referencing them in the cluster. It should look like the below screenshot on both the machines.

# nano /etc/hosts

To save the file press ‘Ctrl+X’ >> press ‘Y’ >>‘Enter’.
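Adding the two entries can also be scripted idempotently. A sketch using a throwaway hosts file; the IP addresses are placeholders for the enp0s8 addresses you noted from ifconfig, and on a real machine the target would be /etc/hosts:

```shell
# Placeholder IPs -- substitute the enp0s8 addresses you recorded.
KMASTER_IP="192.168.56.10"
KNODE_IP="192.168.56.11"
HOSTS_FILE="$(mktemp)"   # stand-in for /etc/hosts

# Append each entry only if its hostname is not already present.
grep -q "kmaster" "$HOSTS_FILE" || echo "$KMASTER_IP kmaster" >> "$HOSTS_FILE"
grep -q "knode" "$HOSTS_FILE" || echo "$KNODE_IP knode" >> "$HOSTS_FILE"

cat "$HOSTS_FILE"
```

Running the snippet twice leaves the file unchanged, which makes it safe to include in a provisioning script for both VMs.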

Setting Static IP Addresses

Next, we will make the IP addresses used above, static for the VMs. We can do that by modifying the network interfaces file. Run the following command to open the file:

# nano /etc/network/interfaces

Now enter the following lines in the file.

auto enp0s8 
iface enp0s8 inet static
address <IP-Address-Of-VM>

It will look something like the below screenshot.

To save the file press ‘Ctrl+X’ >> press ‘Y’ >>‘Enter’.

After this, restart your machine(s).

Install OpenSSH-Server

Now we have to install openssh-server. Run the following command:

# sudo apt-get install openssh-server
Install Docker

Now we have to install Docker because Docker images will be used for managing the containers in the cluster. Run the following commands:

# sudo su 
# apt-get update
# apt-get install -y

Next, we have to install these 3 essential components for setting up the Kubernetes environment: kubeadm, kubectl, and kubelet.

Run the following commands before installing the Kubernetes environment.

# apt-get update && apt-get install -y apt-transport-https curl 
# curl -s | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb kubernetes-xenial main
EOF
# apt-get update
Install kubeadm, Kubelet And Kubectl

Now it's time to install the 3 essential components. Kubelet is the lowest-level component in Kubernetes; it's responsible for what's running on an individual machine. Kubeadm is used for administering the Kubernetes cluster. Kubectl is used for controlling the configurations on various nodes inside the cluster.

# apt-get install -y kubelet kubeadm kubectl
Updating Kubernetes Configuration

Next, we will change the configuration file of Kubernetes. Run the following command:

# nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

This will open a text editor. Enter the following line after the last "Environment" variable:


Now, to save the file press ‘Ctrl+X’ >> press ‘Y’ >>‘Enter’.

Voila! You have successfully installed Kubernetes on both the machines now!

As of now, only the Kubernetes environment has been set up. But now, it is time to install Kubernetes completely, by moving onto the next 2 phases, where we will individually set the configurations in both machines.

Steps Only For Kubernetes Master VM (kmaster)

Note: These steps will only be executed on the master node (kmaster VM).

Step 1: We will now start our Kubernetes cluster from the master’s machine. Run the following command:

# kubeadm init --apiserver-advertise-address=<ip-address-of-kmaster-vm> --pod-network-cidr=
  1. You will get the below output. Execute the commands marked as (1) as a non-root user. This will enable you to use kubectl from the CLI.
  2. The command marked as (2) should also be saved for future use. It will be used to join nodes to your cluster.

Step 2: As mentioned before, run the commands from the above output as a non-root user

$ mkdir -p $HOME/.kube 
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

It should look like this:

To verify whether kubectl is working, run the following command:

$ kubectl get pods -o wide --all-namespaces

Step 3: You will notice from the previous command that all the pods are running except one: 'kube-dns'. To resolve this, we will install a pod network. To install the CALICO pod network, run the following command:

$ kubectl apply -f

After some time, you will notice that all pods shift to the running state

Step 4: Next, we will install the dashboard. To install the Dashboard, run the following command:

$ kubectl create -f

It will look something like this:

Step 5: Your dashboard is now ready, with its pod in the running state.

Step 6: By default, the dashboard will not be visible on the Master VM. Run the following command in the command line:

$ kubectl proxy

Then you will get something like this:

To view the dashboard in the browser, navigate to the following address in the browser of your Master VM: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

You will then be prompted with this page, to enter the credentials:

Step 7: In this step, we will create a service account for the dashboard and get its credentials.
Note: Run all these commands in a new terminal, or your kubectl proxy command will stop.

Run the following commands:

1. This command will create a service account for the dashboard in the default namespace

$ kubectl create serviceaccount dashboard -n default

2. This command will add the cluster binding rules to your dashboard account

$ kubectl create clusterrolebinding dashboard-admin -n default \ 
--clusterrole=cluster-admin \

3. This command will give you the token required for your dashboard login

$ kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

You should get the token like this:

4. Copy this token and paste it in Dashboard Login Page, by selecting the token option

5. You have successfully logged into your dashboard!

Steps For Only Kubernetes Node VM (knode)

It is time to get your node to join the cluster! This is probably the only step that you will be doing on the node after installing Kubernetes on it.

Run the join command that you saved when you ran the 'kubeadm init' command on the master.

Note: Run this command with “sudo”.

sudo kubeadm join --apiserver-advertise-address=<ip-address-of-the master> --pod-network-cidr=

Bingo! Your Kubernetes Cluster is ready if you get something similar to the above screenshot.
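Since the join command printed by 'kubeadm init' is easy to lose, one option is to capture it into a file on the master as soon as init finishes. A sketch; the file path and the command text below are illustrative placeholders, not the real output of your cluster:

```shell
# Store the join command printed by 'kubeadm init' for later use.
# The command text here is a stand-in for whatever your init output printed.
JOIN_CMD_FILE="$(mktemp)"   # e.g. /root/kubeadm-join.sh in real use
echo "sudo kubeadm join <master-ip>:<port> --token <token>" > "$JOIN_CMD_FILE"

# Later, copy this file to the node and run its contents there.
cat "$JOIN_CMD_FILE"
```

Keeping the command in a file means new nodes can be joined later without re-running init or digging through terminal history.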


Ansible is a configuration management tool that helps in controlling a large number of servers from one location in an automated manner.

Ansible communicates over normal SSH to connect to and control remote machines/servers. Because of this, Ansible only needs to be installed on the main controller machine, not on the client machines.

Therefore, any server with an SSH port open can be configured by an Ansible machine.

Steps for installing and configuring Ansible on Ubuntu are as follows:

  1. Make sure your Ubuntu machine is up to date with latest packages.

$ sudo apt-get update

2. Now we will add the Ansible PPA repository to the system using the below command

$ sudo apt-add-repository ppa:ansible/ansible

3. Install ansible after successfully adding the ansible ppa repository

$ sudo apt-get install ansible

4. Check for ansible version after installation is done.

$ ansible --version

5. Generate ssh key in the ansible machine, which we have to copy to all the remote hosts for doing deployments or configurations on them.

$ ssh-keygen -t rsa -b 4096 -C "tushar@tushar-VirtualBox"

6. Copy the ssh key generated to the remote host using the below command

Note: Before copying the ssh key make sure that you are able to ssh the remote host where you want to copy the key

$ ssh-copy-id root@

Here, root is the remote host username, and it is followed by the remote host's IP address.

7. Now we need to edit the "hosts" file of Ansible, specifying the group of servers/remote hosts that we need to connect to and perform operations on.

Open the hosts file for editing; I am using nano.

$ sudo nano /etc/ansible/hosts

Add the servers as highlighted below

Note: [test-servers] is the group name I have given; it refers to all the servers listed under it (here I have listed only one server).
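The inventory entry can also be written from a script. A sketch using a throwaway file; the group name matches the note above, while the IP address is a placeholder, and on a real controller the target would be /etc/ansible/hosts:

```shell
# Stand-in for /etc/ansible/hosts; the IP is a placeholder.
INVENTORY="$(mktemp)"

cat > "$INVENTORY" <<'EOF'
[test-servers]
192.168.1.50
EOF

cat "$INVENTORY"
```

Any host listed under the [test-servers] header can then be targeted with `ansible test-servers -m ping` instead of `all`.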

8. Now that we have set up our "hosts" file and the other configuration is done, let's try a very simple Ansible command, which will ping all the servers listed in the "hosts" file.

$ ansible all -m ping

“all” means all hosts

“-m” stands for module

"ping" is one of Ansible's modules

Oops! An error came up; we need to install "sshpass" on our Ubuntu machine.

Note: This error may not appear for you. To resolve this issue, just install sshpass using the below command.

$ sudo apt-get install sshpass

9. Again try running the ansible command for pinging all the servers.

$ ansible all -m ping

We are successfully able to ping the server using ansible.

Voila! You have successfully installed and configured Ansible on your Ubuntu machine.


Ensure that your system is up to date, and you have the gnupg, software-properties-common, and curl packages installed. You will use these packages to verify HashiCorp’s GPG signature, and install HashiCorp’s Debian package repository.

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common

Install the HashiCorp GPG key.

wget -O- | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg

Verify the key’s fingerprint.

gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint

The gpg command will report the key fingerprint:

pub   rsa4096 2020-05-07 [SC]
      E8A0 32E0 94D8 EB4E A189 D270 DA41 8C88 A321 9F7B
uid           [ unknown] HashiCorp Security (HashiCorp Package Signing) <>
sub   rsa4096 2020-05-07 [E]

The fingerprint must match E8A0 32E0 94D8 EB4E A189 D270 DA41 8C88 A321 9F7B. You can also verify the key on Security at HashiCorp under Linux Package Checksum Verification.

Add the official HashiCorp repository to your system. The lsb_release -cs command finds the distribution release codename for your current system, such as buster, groovy, or sid.

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
$(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
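The repository line can be assembled in a variable first, which makes the codename substitution easy to inspect before writing anything to /etc/apt. A sketch; the codename is hard-coded here for illustration instead of calling lsb_release, and the repository URL is assumed to be HashiCorp's public apt endpoint:

```shell
# Normally CODENAME="$(lsb_release -cs)"; hard-coded here for illustration.
CODENAME="focal"
KEYRING="/usr/share/keyrings/hashicorp-archive-keyring.gpg"

# Assumed HashiCorp apt endpoint; verify against HashiCorp's install docs.
REPO_LINE="deb [signed-by=${KEYRING}] https://apt.releases.hashicorp.com ${CODENAME} main"
echo "$REPO_LINE"
```

Echoing the line before piping it to `sudo tee` is a cheap way to catch a wrong codename (e.g. on a derivative distribution where lsb_release reports something apt.releases.hashicorp.com does not serve).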

Download the package information from HashiCorp.

sudo apt update

Install Terraform from the new repository.

sudo apt-get install terraform

Verify the installation

Verify that the installation worked by opening a new terminal session and listing Terraform’s available subcommands.

terraform -help

Usage: terraform [-version] [-help] <command> [args]

The available commands for execution are listed below.
The most common, useful commands are shown first, followed by
less common or more advanced commands. If you’re just getting
started with Terraform, stick with the common commands. For the
other commands, please read the help and docs before usage.

AWS Command Line Interface (CLI)

Install AWS CLI and configure credentials.


In this tutorial, we are going to see how to install the AWS CLI on Ubuntu and how to configure credentials. Configuring AWS credentials is needed whenever we do any activity using the AWS CLI. In particular, if you want to launch an EKS instance or push a Docker image to ECR, you must have AWS CLI credentials. Please pay attention here, because we are enabling some security permissions; enable only the permissions you need. For demonstration purposes, I enabled the Admin Access policy.

In this tutorial, we will do the following activity.

1. Install AWS CLI on Ubuntu.

2. Create IAM credentials

3. Configure IAM credentials on Ubuntu (local machine).

Let’s see them one by one.

Install AWS CLI on Ubuntu:

The latest AWS CLI version is 2. So download the AWS CLI.

curl "" -o ""

Unzip the file using the following command.


Install the AWS CLI using the following command.

sudo ./aws/install

That’s all. AWS CLI is installed successfully on Ubuntu.

We can get the AWS CLI version using the below command.

aws --version

The above command will return CLI version 2 if the installation is successful.

Create IAM Credentials:

To configure the AWS CLI, we need an access key and a secret access key. We will create a new IAM user and get the access key. If you have an existing user, create a new access key under the security credentials section.

AWS Security Credentials

If you don’t have any users, then log in to the AWS web console. And go to the IAM section.

Search AWS IAM

Click the users and click the Add User button on the next page.

List of IAM Users
Add IAM User

Give a username and tick the Programmatic Access checkbox. And click the Permissions button.

User Creation With Programmatic Access

Here you can attach policies directly to the user. Here I chose the admin access policy. If you already created any group, you can attach the group. Click the Tags button.

Give Security Permission

Add a tag if you want or skip to the next section by clicking the Preview button.

Add Tags

Review the given information and click the Create User button.

Create User

Now the user is created and it will show the access key and secret. Download the CSV file for later use.

Access Key and Secret Access of IAM User
Configure IAM Credentials:

Go to the terminal on Ubuntu and type the below command to configure the access key and secret. Use the access key and secret access from the downloaded CSV file.

aws configure
  1. Enter AWS Access Key ID.
  2. Enter AWS Secret Access Key.
  3. Enter Default region name (like eu-central-1, us-east-1, etc.).
  4. Enter Default output format. Allowed formats are json, yaml, text, and table.
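Behind the scenes, `aws configure` writes these answers to ~/.aws/credentials and ~/.aws/config. A sketch of the resulting file formats, written to temp files for illustration; the key values below are fake placeholders, not real credentials:

```shell
# Stand-in for ~/.aws/credentials; the key values are fake placeholders.
CREDS_FILE="$(mktemp)"
cat > "$CREDS_FILE" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = examplesecretkey
EOF

# Stand-in for ~/.aws/config.
CONFIG_FILE="$(mktemp)"
cat > "$CONFIG_FILE" <<'EOF'
[default]
region = us-east-1
output = json
EOF

cat "$CREDS_FILE" "$CONFIG_FILE"
```

Knowing this layout is handy for debugging: if a command fails with a credentials error, checking these two files shows exactly which profile, region, and keys the CLI is using.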
Check Credentials Working or Not:

Use the below command to get the list of EC2 instances on your account.

aws ec2 describe-instances

Get the list of Lightsail server details.

aws lightsail get-bundles

Get the S3 bucket list.

aws s3 ls

If the credentials are configured correctly then the above commands will return corresponding details.

Delete Credentials:

Use the following command to delete the credentials.

aws iam delete-access-key --access-key-id your_key --user-name your_username

#Example
aws iam delete-access-key --access-key-id AKI8900IN --user-name bob

In this tutorial, we learned how to install and configure the AWS CLI. In a forthcoming article, I plan to write about how to create an EKS cluster on AWS; for that, we need these AWS credentials. This article will help beginners set up the AWS CLI and configure credentials.


MFH IT Solutions (Regd No -LIN : AP-03-46-003-03147775)

Consultation & project support organization.


MFH IT Solutions (Regd)
NAD Kotha Road, Opp Bashyam School, Butchurajupalem, Jaya Prakash Nagar Visakhapatnam, Andhra Pradesh – 530027