<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Praduman's blog]]></title><description><![CDATA[Hi, I'm learning DevOps tools and concepts. I enjoy sharing my learning experiences and projects through blogs to help others who are on a similar journey.]]></description><link>https://blogs.praduman.site</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 11:00:58 GMT</lastBuildDate><atom:link href="https://blogs.praduman.site/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Hosting GitLab on Your Private Server with CI/CD: A How-To Guide]]></title><description><![CDATA[Self-hosting GitLab on a private server gives you full control over your code, repositories, and CI/CD pipelines. It’s a great solution for teams needing privacy and customization. In this guide, I’ll walk you through setting up GitLab with CI/CD usi...]]></description><link>https://blogs.praduman.site/self-hosting-gitlab-with-cicd</link><guid isPermaLink="true">https://blogs.praduman.site/self-hosting-gitlab-with-cicd</guid><category><![CDATA[GitLab]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[CI/CD]]></category><category><![CDATA[self hosting]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Tue, 20 May 2025 12:38:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747745601405/1c1d7219-859e-48d3-86df-0e2b68f3f4ed.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Self-hosting GitLab on a private server gives you full control over your code, repositories, and CI/CD pipelines. It’s a great solution for teams needing privacy and customization. 
In this guide, I’ll walk you through setting up GitLab with CI/CD using Docker, based on a project I recently completed.</p>
<h2 id="heading-why-self-host-gitlab">Why Self-Host GitLab?</h2>
<ul>
<li><p><strong>Privacy</strong>: Keep your code and data on your own infrastructure.</p>
</li>
<li><p><strong>Customization</strong>: Tailor GitLab to your team’s needs.</p>
</li>
<li><p><strong>CI/CD</strong>: Automate builds, tests, and deployments with GitLab Runners.</p>
</li>
</ul>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p>A server with 8 vCPUs, at least 8 GB of RAM, and 50 GB+ of disk space (e.g., Ubuntu 22.04 or 24.04).</p>
</li>
<li><p>Root or sudo access.</p>
</li>
<li><p>A domain name (e.g., <code>git.example.com</code>) or static IP.</p>
</li>
</ul>
<h2 id="heading-creating-an-ec2-server">Creating an EC2 server</h2>
<ol>
<li><p>Go to <code>AWS console</code> → <code>EC2</code> → <code>Instances</code></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747728129391/b17fe7e5-ca70-4dab-85f6-ef974d1c504b.png" alt class="image--center mx-auto" /></p>
<p> Click on <code>Launch Instance</code></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747728205046/ea4b7475-2e4c-4ffd-9486-c586cfc9ef9c.png" alt class="image--center mx-auto" /></p>
<p> Fill the requirements</p>
<ul>
<li><p>Name: <code>GitLab Server</code></p>
</li>
<li><p>AMI: <code>Ubuntu Server 24.04</code></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747728415453/ca23ddd4-5e98-46fd-9dc4-4f3615067ce4.png" alt class="image--center mx-auto" /></p>
<p>  Instance type: <code>t2.2xlarge</code></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747728443918/4707095e-0806-475e-a5c3-33f34d5042e1.png" alt class="image--center mx-auto" /></p>
<p>  Create a new <code>key pair</code> and save it</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747728655297/b9f796aa-ae74-4675-88b6-e0a6d9cab347.png" alt class="image--center mx-auto" /></p>
<p>  Configure the <code>Network settings</code></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747728999845/bdb23179-c1f1-4d3b-b247-b1d79f79af8b.png" alt class="image--center mx-auto" /></p>
<p>  Configure storage (allocate <code>50+ GiB</code>)</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747729071511/2328fc28-42d9-40c3-9099-3b507599f60a.png" alt class="image--center mx-auto" /></p>
<p>  Launch the Instance</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747737846046/a8dbe028-16d9-4089-a297-a104091c952f.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747738050410/67168a97-d63b-470b-9c9c-09620a27c09f.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ol>
<h2 id="heading-connect-to-the-gitlab-server-using-ssh">Connect to the <code>GitLab Server</code> using <code>SSH</code></h2>
<ol>
<li><p>Copy the <code>public IP</code> of the Server</p>
</li>
<li><p>Open Your <code>local terminal</code></p>
</li>
<li><p>Connect to the Instance via <code>ssh</code> command:</p>
<pre><code class="lang-bash"> ssh -i &lt;path_of_key_pair_file&gt; ubuntu@&lt;public_ip&gt;
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747738692549/9a86116f-7ab0-4569-8a0e-5b413c0141d0.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Congrats! 🎉 You’re now connected to the GitLab server from your local machine via SSH</p>
</li>
</ol>
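<p>If <code>ssh</code> rejects the key with an <code>UNPROTECTED PRIVATE KEY FILE</code> warning, tighten the key file’s permissions first. A quick sketch (the filename is a stand-in for your downloaded key pair):</p>
<pre><code class="lang-bash">touch gitlab-key.pem          # stands in for the key pair you downloaded
chmod 400 gitlab-key.pem      # owner read-only; ssh refuses more permissive keys
stat -c '%a' gitlab-key.pem   # prints 400
</code></pre>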
<h2 id="heading-step-1-prepare-the-server">Step 1: Prepare the Server</h2>
<p>Start by updating your server and installing dependencies:</p>
<pre><code class="lang-bash">sudo apt-get update &amp;&amp; sudo apt-get upgrade -y
sudo apt-get install -y curl openssh-server ca-certificates tzdata perl
</code></pre>
<p>Optionally, install Postfix for email notifications:</p>
<pre><code class="lang-bash">sudo apt-get install -y postfix
</code></pre>
<blockquote>
<p>Select <code>Internet Site</code> and enter your <code>server’s domain or IP</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747739322871/f3bbdf0b-8a7d-496c-9968-894cc9de628a.png" alt class="image--center mx-auto" /></p>
</blockquote>
<h2 id="heading-step-2-install-docker">Step 2: Install Docker</h2>
<p>Install Docker to run GitLab in a container:</p>
<pre><code class="lang-bash">curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker <span class="hljs-variable">$USER</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747739415055/d5c189f8-1bbc-4f24-97c0-c79e8fb17e81.png" alt class="image--center mx-auto" /></p>
<p>Log out and back in so the group change takes effect, then verify Docker:</p>
<pre><code class="lang-bash">docker --version
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747739457132/41024130-6bff-49f5-a81c-f811303a1d6e.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-3-deploy-gitlab">Step 3: Deploy GitLab</h2>
<p>Create directories for persistent storage:</p>
<pre><code class="lang-bash">sudo mkdir -p /srv/gitlab/config /srv/gitlab/logs /srv/gitlab/data
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747739584577/dbeb042f-6219-4bf2-9e8e-cbb6d21af847.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747739700301/659c4604-9d55-4ee9-96f3-48b5a702bfaa.png" alt class="image--center mx-auto" /></p>
<p>Run the GitLab container:</p>
<pre><code class="lang-bash">sudo docker run --detach \
  --hostname git.example.com \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/<span class="hljs-built_in">log</span>/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747741034912/afbc448f-d7c4-4bdd-8a9b-beb9f2b26ab5.png" alt class="image--center mx-auto" /></p>
<blockquote>
<ul>
<li><p>Replace <code>git.example.com</code> with your <code>domain or server IP</code>.</p>
</li>
<li><p>Wait 5–10 minutes for GitLab to initialize.</p>
</li>
<li><p>Publishing <code>22:22</code> clashes with the host’s own SSH daemon; if <code>sshd</code> is listening on port 22, map a different host port instead (e.g., <code>--publish 2222:22</code>).</p>
</li>
</ul>
</blockquote>
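<p>If you prefer Docker Compose, the same container can be described declaratively. A sketch equivalent to the <code>docker run</code> command above (same host paths and hostname assumed):</p>
<pre><code class="lang-yaml">services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    restart: always
    hostname: 'git.example.com'
    ports:
      - '80:80'
      - '443:443'
      - '22:22'       # clashes with the host's sshd; use '2222:22' if needed
    volumes:
      - '/srv/gitlab/config:/etc/gitlab'
      - '/srv/gitlab/logs:/var/log/gitlab'
      - '/srv/gitlab/data:/var/opt/gitlab'
</code></pre>
<p>Save it as <code>docker-compose.yml</code> and start it with <code>docker compose up -d</code>.</p>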
<p><strong>Check container status</strong>:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747741104634/28e198de-a424-48ce-b7fd-7b44212f72bb.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-4-secure-the-instance">Step 4: Secure the Instance</h2>
<ul>
<li><p><strong>Access GitLab</strong>: Navigate to <code>http://&lt;server-ip&gt;</code> or <code>https://git.example.com</code>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747741249280/dc489693-81c8-4bc4-9591-607fa6e79855.png" alt class="image--center mx-auto" /></p>
<p>  Set a password for the root user when prompted. On recent GitLab versions there is no prompt; instead, fetch the auto-generated password with <code>sudo docker exec -it gitlab cat /etc/gitlab/initial_root_password</code> (the file is deleted after 24 hours).</p>
</li>
<li><p><strong>Log in</strong>: Use username <code>root</code> and the password you set.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747742010800/dc886bcc-0847-4ced-a564-886bcb5b5dd2.png" alt class="image--center mx-auto" /></p>
<p>Enable HTTPS by editing <code>/srv/gitlab/config/gitlab.rb</code>:</p>
<pre><code class="lang-ruby">external_url 'https://git.example.com'
letsencrypt['enable'] = true
letsencrypt['contact_emails'] = ['your-email@example.com']
</code></pre>
<p>Reconfigure GitLab:</p>
<pre><code class="lang-bash">sudo docker <span class="hljs-built_in">exec</span> -it gitlab gitlab-ctl reconfigure
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747742583960/37225a3a-4406-4f3b-809d-2718c2a82cd1.png" alt class="image--center mx-auto" /></p>
<p><strong>Restrict access</strong> (optional): In the GitLab UI, go to <strong>Admin Area &gt; Settings &gt; General &gt; Sign-up restrictions</strong> and disable sign-ups for a private setup.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747742818318/0e170951-e794-489a-a896-89f4fd5b8e6e.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-5-set-up-cicd-with-gitlab-runner">Step 5: Set Up CI/CD with GitLab Runner</h2>
<p><strong>Install GitLab Runner</strong>:</p>
<pre><code class="lang-bash">curl -L <span class="hljs-string">"https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh"</span> | sudo bash
sudo apt-get install -y gitlab-runner
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747743540726/62f4058f-88c1-4803-94cc-507b02143f27.png" alt class="image--center mx-auto" /></p>
<p><strong>Register the Runner</strong>:</p>
<ul>
<li><p>In GitLab UI, go to <strong>Settings &gt; CI/CD &gt; Runners</strong> to get the registration token and URL.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747743221237/f03dfab9-fd39-43ec-a201-33067f3d869b.png" alt class="image--center mx-auto" /></p>
<p>  Register the runner:</p>
</li>
</ul>
<pre><code class="lang-bash">sudo gitlab-runner register \
  --url https://git.example.com \
  --token &lt;your-registration-token&gt; \
  --description my-runner \
  --tag-list docker-runner \
  --executor docker \
  --docker-image alpine:latest
</code></pre>
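<p>After registration, the runner’s settings are written to <code>/etc/gitlab-runner/config.toml</code> on the server. It should look roughly like this (the token and names below are illustrative):</p>
<pre><code class="lang-toml">concurrent = 1

[[runners]]
  name = "my-runner"
  url = "https://git.example.com"
  token = "glrt-REDACTED"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    privileged = false
    volumes = ["/cache"]
</code></pre>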
<p><strong>Verify Runner</strong>:</p>
<pre><code class="lang-bash">sudo gitlab-runner status
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747744329931/2409143a-e7d2-41be-86bb-b394d0f8597f.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-test-cicd-with-a-pipeline">Test CI/CD with a Pipeline</h2>
<p>Create a <code>.gitlab-ci.yml</code> file in your project to test the pipeline:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">stages:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">build</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">test</span>

<span class="hljs-attr">build:</span>
  <span class="hljs-attr">stage:</span> <span class="hljs-string">build</span>
  <span class="hljs-attr">tags:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker-runner</span>
  <span class="hljs-attr">script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">"Building the project..."</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">mkdir</span> <span class="hljs-string">-p</span> <span class="hljs-string">builds</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">touch</span> <span class="hljs-string">builds/app.txt</span>
  <span class="hljs-attr">artifacts:</span>
    <span class="hljs-attr">paths:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">builds/</span>

<span class="hljs-attr">test:</span>
  <span class="hljs-attr">stage:</span> <span class="hljs-string">test</span>
  <span class="hljs-attr">tags:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">docker-runner</span>
  <span class="hljs-attr">script:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">echo</span> <span class="hljs-string">"Running tests..."</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">test</span> <span class="hljs-string">-f</span> <span class="hljs-string">builds/app.txt</span>
</code></pre>
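<p>Since the <code>script</code> steps are plain shell, you can sanity-check them locally before pushing (run from an empty scratch directory):</p>
<pre><code class="lang-bash">echo "Building the project..."
mkdir -p builds
touch builds/app.txt          # the artifact the test stage expects
echo "Running tests..."
test -f builds/app.txt &amp;&amp; echo "artifact present"
</code></pre>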
<p>Push the file and check the pipeline under <code>CI/CD &gt; Pipelines</code>.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>You now have a self-hosted GitLab instance with CI/CD, ready for private development and automation. Secure it with a firewall (ufw) and regular backups:</p>
<pre><code class="lang-bash">sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw <span class="hljs-built_in">enable</span>
</code></pre>
<pre><code class="lang-bash">sudo docker exec -it gitlab gitlab-backup create
</code></pre>
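<p>To take that backup automatically, a root crontab entry along these lines works (the schedule and backup path are examples; <code>CRON=1</code> quiets the output). Note that <code>gitlab-backup</code> does not include <code>gitlab.rb</code> or <code>gitlab-secrets.json</code>, so copy <code>/srv/gitlab/config</code> separately:</p>
<pre><code class="lang-bash"># m h dom mon dow command                 (assumes /srv/backups exists)
0 2 * * * /usr/bin/docker exec gitlab gitlab-backup create CRON=1
30 2 * * * tar -czf /srv/backups/gitlab-config-$(date +\%F).tar.gz /srv/gitlab/config
</code></pre>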
]]></content:encoded></item><item><title><![CDATA[Capstone DevOps Project: Enterprise-Grade CI/CD Pipeline with Kubernetes on AWS, Jenkins, Helm, Ingress, and Monitoring]]></title><description><![CDATA[Introduction 🚀
In today’s fast-paced software development world, automating the process of building, testing, and deploying applications is essential for delivering features quickly and reliably. That’s where CI/CD pipelines come into play.
In this ...]]></description><link>https://blogs.praduman.site/capstone-devops-project-enterprise-grade-cicd-pipeline-with-kubernetes-on-aws-jenkins-helm-ingress-and-monitoring</link><guid isPermaLink="true">https://blogs.praduman.site/capstone-devops-project-enterprise-grade-cicd-pipeline-with-kubernetes-on-aws-jenkins-helm-ingress-and-monitoring</guid><category><![CDATA[Devops]]></category><category><![CDATA[CI/CD]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[AWS]]></category><category><![CDATA[EKS]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[Grafana]]></category><category><![CDATA[#prometheus]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[Docker]]></category><category><![CDATA[sonarqube]]></category><category><![CDATA[trivy]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Fri, 18 Apr 2025 22:18:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745014375931/83f11306-ca27-4db2-bcb1-f892c24fbf3d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction"><strong>Introduction</strong> 🚀</h1>
<p>In today’s fast-paced software development world, automating the process of building, testing, and deploying applications is essential for delivering features quickly and reliably. That’s where CI/CD pipelines come into play.</p>
<p>In this project, I’ve built a <strong>complete, production-grade CI/CD pipeline using Jenkins, Kubernetes (EKS), and various open-source DevOps tools</strong> — all deployed on AWS. This setup not only automates the deployment of applications but also integrates <strong>monitoring (with Prometheus &amp; Grafana)</strong> and <strong>secure ingress access with HTTPS</strong> using <strong>Nginx Ingress Controller and Cert-Manager</strong>.</p>
<p>Whether you're a DevOps beginner looking to understand real-world pipelines or a professional aiming to implement enterprise-grade CI/CD systems, this blog will walk you through every step — from infrastructure setup to continuous delivery and monitoring.</p>
<p>By the end of this guide, you'll have a fully functional, cloud-native CI/CD pipeline with all the essential components configured, deployed, and running on Kubernetes.</p>
<h1 id="heading-architecture-diagram-of-the-project">Architecture Diagram of the Project</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744981009073/bb34d8f8-2b26-459f-bb9c-8bf106189826.jpeg" alt class="image--center mx-auto" /></p>
<h1 id="heading-source-code-amp-project-repositories">📌 Source Code &amp; Project Repositories</h1>
<p>To keep things simple and organized, I’ve divided the entire project into <strong>three separate repositories</strong>, each focusing on a different part of the DevSecOps workflow:</p>
<h2 id="heading-project-repository-ci-repo">🔧 <strong>Project Repository (CI Repo)</strong></h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/praduman8435/Capstone-Mega-DevOps-Project">https://github.com/praduman8435/Capstone-Mega-DevOps-Project</a></div>
<p> </p>
<hr />
<h2 id="heading-cd-repository">🚀 <strong>CD Repository</strong></h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/praduman8435/Capstone-Mega-CD-Pipeline.git">https://github.com/praduman8435/Capstone-Mega-CD-Pipeline.git</a></div>
<p> </p>
<hr />
<h2 id="heading-infrastructure-as-code-iac-terraform-for-eks">☁️ Infrastructure as Code (IaC) — Terraform for EKS</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/praduman8435/EKS-Terraform.git">https://github.com/praduman8435/EKS-Terraform.git</a></div>
<p> </p>
<hr />
<h1 id="heading-configure-aws-security-group">🔒 Configure AWS Security Group</h1>
<p>A <strong>Security Group</strong> in AWS acts like a virtual firewall. It controls what kind of traffic can come into or go out of your EC2 instances or services—keeping your infrastructure secure.</p>
<p>For this project, we'll either <strong>create a new security group</strong> or <strong>update an existing one</strong> with the required rules.</p>
<h3 id="heading-essential-security-group-rules-for-kubernetes-cluster">📌 Essential Security Group Rules for Kubernetes Cluster</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Port(s)</strong></td><td><strong>Purpose</strong></td><td><strong>Why It’s Needed</strong></td></tr>
</thead>
<tbody>
<tr>
<td><code>587</code></td><td>SMTP (Email Notifications)</td><td>To allow tools like Jenkins to send email notifications</td></tr>
<tr>
<td><code>22</code></td><td>SSH Access</td><td>For secure shell access to EC2 instances (use with caution)</td></tr>
<tr>
<td><code>80</code> and <code>443</code></td><td>HTTP &amp; HTTPS</td><td>For serving web traffic (Ingress, Jenkins, ArgoCD UI, etc.)</td></tr>
<tr>
<td><code>3000 - 11000</code></td><td>App-Specific Ports</td><td>For apps like Grafana (3000), Prometheus, and others</td></tr>
</tbody>
</table>
</div><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744144367615/903147da-530d-4447-9e2b-121d11ff4113.png" alt class="image--center mx-auto" /></p>
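<p>If you’d rather codify these rules than click through the console, the table above translates into a Terraform security group along these lines (a sketch, not part of the linked repo; the wide-open CIDRs are for demo only):</p>
<pre><code class="lang-hcl">resource "aws_security_group" "devops_tools" {
  name        = "devops-tools-sg"
  description = "Ports for Jenkins, SonarQube, Nexus, and monitoring"

  dynamic "ingress" {
    for_each = [22, 80, 443, 587]
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]   # demo only; restrict to your IP in practice
    }
  }

  ingress {
    from_port   = 3000
    to_port     = 11000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]     # app-specific ports (Grafana, Prometheus, etc.)
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
</code></pre>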
<h3 id="heading-best-practices-for-security-group-setup">✅ Best Practices for Security Group Setup</h3>
<ul>
<li><p>🔐 <strong>Follow Least Privilege</strong><br />  Only open the ports that your application <strong>actually needs</strong>. Avoid exposing everything “just in case.”</p>
</li>
<li><p>🛑 <strong>Restrict SSH Access (Port 22)</strong><br />  Limit SSH access to your own IP or admin IPs only. Never leave it open to the entire internet (<code>0.0.0.0/0</code>); that’s a serious security risk. (I’ve left it open here for demo purposes only.)</p>
</li>
</ul>
<hr />
<h1 id="heading-create-ec2-instances-for-required-tools">Create EC2 Instances for Required Tools</h1>
<p>To run essential DevOps tools like <strong>Nexus</strong>, <strong>SonarQube</strong>, <strong>Jenkins</strong>, and manage infrastructure, you'll need to create <strong>four separate EC2 instances</strong> on AWS.</p>
<h3 id="heading-what-youll-be-creating">📋 What You’ll Be Creating:</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Instance Name</strong></td><td><strong>Purpose</strong></td></tr>
</thead>
<tbody>
<tr>
<td><code>Nexus</code></td><td>Artifact repository to store JAR files, and other build artifacts</td></tr>
<tr>
<td><code>SonarQube</code></td><td>Static code analysis and code quality scanning</td></tr>
<tr>
<td><code>Jenkins</code></td><td>CI/CD automation server for building, testing, and triggering deployments</td></tr>
<tr>
<td><code>InfraServer</code></td><td>Used to provision the EKS cluster and manage infrastructure via Terraform</td></tr>
</tbody>
</table>
</div><h2 id="heading-step-1-launch-ec2-instances">🔧 Step 1: Launch EC2 Instances</h2>
<ol>
<li><p>Go to the AWS EC2 Dashboard</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742063352982/d0607c70-0ab1-494b-996d-d9b2432ce915.png?auto=compress,format&amp;format=webp" alt /></p>
</li>
<li><p>Click <strong>“Launch Instance”</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742186530563/ed9a05ec-effd-476c-ba2e-6a3adc07adce.png" alt /></p>
</li>
</ol>
<h2 id="heading-step-2-configure-instance">⚙️ Step 2: Configure Instance</h2>
<ol>
<li><p>Set the number of instances to <strong>4</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744144742847/b3561d9d-ee8f-41bd-bd83-3137fca0e486.png" alt /></p>
</li>
<li><p><strong>AMI (Amazon Machine Image):</strong> Select the latest <strong>Ubuntu</strong> (e.g., Ubuntu 22.04 LTS)</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744143068367/907dc0d0-c4ee-4a86-8e02-422907fab3c8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Instance Type:</strong> Choose <code>t2.large</code> (2 vCPU, 8 GB RAM)</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744143113849/9177fdc5-434c-4785-bf21-734e062dee4e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Key Pair:</strong> Select an existing key pair or create a new one to access your instances via SSH</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744143265582/3d4d9e93-fff3-4082-8f78-75b201d9a239.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Storage:</strong> Set root volume to <strong>at least 25 GB</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744144637222/795b3f99-e996-4375-ace0-6643bc55a68a.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Security Group:</strong> Use the <strong>security group</strong> you configured earlier (with necessary ports open)</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744144548798/827d9237-d5c0-4fef-837b-b31cb17cba85.png" alt /></p>
</li>
<li><p>Click <strong>Launch Instance</strong></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744145379774/6245f123-25fc-4f22-9c5a-a2cf2fa00ed2.png" alt class="image--center mx-auto" /></p>
<p> <strong>Tags:</strong> Add a <strong>Name</strong> tag to each instance to identify them easily</p>
</li>
</ol>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Instance</strong></td><td><strong>Name Tag</strong></td></tr>
</thead>
<tbody>
<tr>
<td>1</td><td><code>Nexus</code></td></tr>
<tr>
<td>2</td><td><code>SonarQube</code></td></tr>
<tr>
<td>3</td><td><code>Jenkins</code></td></tr>
<tr>
<td>4</td><td><code>InfraServer</code></td></tr>
</tbody>
</table>
</div><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744145591733/ceaba6ee-810c-46c4-b24e-006544578ebc.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-connecting-to-ec2-instances-via-ssh">🔗 Connecting to EC2 Instances via SSH</h1>
<p>Once your EC2 instances are up and running, you can connect to them securely using <strong>SSH (Secure Shell)</strong> from your local terminal.</p>
<h3 id="heading-what-you-need">🧩 What You Need:</h3>
<ul>
<li><p>The <code>.pem</code> file (private key) you downloaded or created while launching the EC2 instances</p>
</li>
<li><p>The <strong>public IP address</strong> of each EC2 instance (you’ll find it in the EC2 dashboard)</p>
</li>
</ul>
<h3 id="heading-ssh-command">💻 SSH Command</h3>
<pre><code class="lang-bash">ssh -i &lt;path-to-pem-file&gt; ubuntu@&lt;public-ip&gt;
</code></pre>
<ul>
<li><p>Replace <code>&lt;path-to-pem-file&gt;</code> with the path to your <code>.pem</code> file</p>
</li>
<li><p>Replace <code>&lt;public-ip&gt;</code> with the public IP of the instance you want to access</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744193701168/eec6ee55-65e7-4c6e-b58a-5c8974d351aa.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Repeat this for each instance:</p>
<ul>
<li><p>Nexus</p>
</li>
<li><p>SonarQube</p>
</li>
<li><p>Jenkins</p>
</li>
<li><p>InfraServer</p>
</li>
</ul>
<pre><code class="lang-bash">ssh -i &lt;path-to-pem-file&gt; ubuntu@&lt;public-ip-of-SonarQube&gt;
ssh -i &lt;path-to-pem-file&gt; ubuntu@&lt;public-ip-of-Nexus&gt;
ssh -i &lt;path-to-pem-file&gt; ubuntu@&lt;public-ip-of-Jenkins&gt;
ssh -i &lt;path-to-pem-file&gt; ubuntu@&lt;public-ip-of-InfraServer&gt;
</code></pre>
</blockquote>
<hr />
<h1 id="heading-configure-each-server">Configure each server</h1>
<p>To ensure your server is up to date, run the following command:</p>
<pre><code class="lang-bash">sudo apt update
</code></pre>
<blockquote>
<p>This will refresh the package list and update any outdated software.</p>
</blockquote>
<h2 id="heading-configure-the-infrastructure-server">Configure the Infrastructure Server</h2>
<p>Now, we need to make sure that the server has the necessary permissions to create resources on AWS.</p>
<ol>
<li><p><strong>Create an IAM Role in AWS</strong>:</p>
<ul>
<li><p>Go to the AWS Management Console.</p>
</li>
<li><p>In the navigation bar, search for <strong>IAM</strong> and select <strong>Roles</strong>.</p>
</li>
<li><p>Click on <strong>Create role</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>Set the Trusted Entity</strong>:</p>
<ul>
<li><p><strong>Trusted Entity Type</strong>: Select <strong>AWS service</strong>.</p>
</li>
<li><p><strong>Use Case</strong>: Select <strong>EC2</strong> (this allows EC2 instances to assume this role).</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744216848760/fd092552-ead9-4aa4-b131-2fdf5b33c031.png" alt class="image--center mx-auto" /></p>
<ol start="3">
<li><p><strong>Attach Policies</strong>:</p>
<ul>
<li><p>Click <strong>Next: Permissions</strong>.</p>
</li>
<li><p>In the search bar, search for <strong>AdministratorAccess</strong>.</p>
</li>
<li><p>Check the box next to <strong>AdministratorAccess</strong> to give the EC2 instance full permissions. (This is convenient for a demo; in production, prefer a narrowly scoped policy.)</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744217632364/ff24d635-d7de-4ed2-9e82-0c59b9192627.png" alt class="image--center mx-auto" /></p>
<ol start="4">
<li><p><strong>Assign a Role Name</strong>:</p>
<ul>
<li><p>Choose a role name</p>
</li>
<li><p>Click <strong>Create role</strong> to finish creating the IAM role.</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744217760205/6b1ad954-4a61-4baf-a7b5-b945040f75f5.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-attach-the-iam-role-to-the-ec2-instance">Attach the IAM Role to the EC2 Instance</h3>
<p>Now that your IAM role is created, it’s time to attach it to your EC2 instance.</p>
<ol>
<li><p>Go to the <strong>EC2 Dashboard</strong> in AWS.</p>
</li>
<li><p>Find the InfraServer instance and click on it to open the instance details.</p>
</li>
<li><p>Click <strong>Actions</strong> → <strong>Security</strong> → <strong>Modify IAM Role</strong>.</p>
</li>
<li><p>Under the <strong>IAM role</strong> section, select the role you just created.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744218016331/f14b7d95-70ff-4f90-88ea-0b7b451baf22.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Update IAM role</strong> to apply the changes.</p>
</li>
</ol>
<blockquote>
<p>InfraServer now has the necessary permissions to create AWS resources. 🚀</p>
</blockquote>
<h3 id="heading-install-aws-cli-on-the-infra-server">Install AWS CLI on the Infra Server</h3>
<p>To manage AWS resources from your server, you need to install the AWS Command Line Interface (CLI).</p>
<ol>
<li><p><strong>Download AWS CLI</strong>: Run the following command to download the AWS CLI installation package:</p>
<pre><code class="lang-bash"> curl <span class="hljs-string">"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"</span> -o <span class="hljs-string">"awscliv2.zip"</span>
</code></pre>
</li>
<li><p><strong>Install</strong> <code>unzip</code> (if not already installed):</p>
<pre><code class="lang-bash"> sudo apt install unzip -y
</code></pre>
</li>
<li><p><strong>Unzip the AWS CLI package</strong>:</p>
<pre><code class="lang-bash"> unzip awscliv2.zip
</code></pre>
</li>
<li><p><strong>Install AWS CLI</strong>:</p>
<pre><code class="lang-bash"> sudo ./aws/install
</code></pre>
</li>
</ol>
<h3 id="heading-verify-aws-cli-installation">Verify AWS CLI Installation</h3>
<p>To ensure AWS CLI is installed correctly, run:</p>
<pre><code class="lang-bash">aws --version
</code></pre>
<p>This should display the installed version of AWS CLI.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744219707290/e29d6916-2396-46b0-957c-a09915dab8df.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-configure-aws-cli">Configure AWS CLI</h3>
<p>Set the AWS region globally so that the AWS CLI knows where to create resources. Use the following command:</p>
<pre><code class="lang-bash">aws configure <span class="hljs-built_in">set</span> region us-east-1
</code></pre>
<blockquote>
<p>Replace <code>us-east-1</code> with your preferred AWS region if needed.</p>
</blockquote>
<h3 id="heading-install-terraform">Install Terraform</h3>
<p>To use Terraform, follow these steps to install it on your InfraServer:</p>
<ol>
<li><p><strong>Update the server</strong> and install required dependencies:</p>
<pre><code class="lang-bash"> sudo apt-get update &amp;&amp; sudo apt-get install -y gnupg software-properties-common
</code></pre>
</li>
<li><p><strong>Download the HashiCorp GPG key</strong>:</p>
<pre><code class="lang-bash"> wget -O- https://apt.releases.hashicorp.com/gpg | \
 gpg --dearmor | \
 sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg &gt; /dev/null
</code></pre>
</li>
<li><p><strong>Verify the GPG key</strong>:</p>
<pre><code class="lang-bash"> gpg --no-default-keyring \
 --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
 --fingerprint
</code></pre>
</li>
<li><p><strong>Add the HashiCorp repository</strong>:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
 https://apt.releases.hashicorp.com <span class="hljs-subst">$(lsb_release -cs)</span> main"</span> | \
 sudo tee /etc/apt/sources.list.d/hashicorp.list
</code></pre>
</li>
<li><p><strong>Update the package list</strong>:</p>
<pre><code class="lang-bash"> sudo apt update
</code></pre>
</li>
<li><p><strong>Install Terraform</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get install terraform
</code></pre>
</li>
</ol>
<h3 id="heading-verify-terraform-installation">Verify Terraform Installation</h3>
<p>To confirm that Terraform is installed, run:</p>
<pre><code class="lang-bash">terraform -version
</code></pre>
<blockquote>
<p>This will display the installed version of Terraform.</p>
</blockquote>
<h3 id="heading-clone-the-infrastructure-as-code-iac-repository">Clone the Infrastructure as Code (IaC) Repository</h3>
<p>Clone the GitHub repository that contains the Terraform configuration files onto the InfraServer.</p>
<ol>
<li><p><strong>Clone the repository</strong>:</p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/praduman8435/EKS-Terraform.git
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744231984531/8a44e055-4377-4d92-a5c2-49ed65854eef.png" alt /></p>
</li>
<li><p><strong>Navigate into the repository directory</strong>:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">cd</span> EKS-Terraform
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744232155157/aef37bb1-a14c-4302-97d2-c74c46cf94a9.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-create-resources-on-aws-using-terraform">Create Resources on AWS Using Terraform</h3>
<ol>
<li><p><strong>Initialize Terraform</strong>:</p>
<p> Before applying the configuration, you need to initialize the Terraform working directory:</p>
<pre><code class="lang-bash"> terraform init
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744232792563/a1945816-ea01-4407-bf53-40d310c95280.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Check the Resources Terraform Will Create</strong>:</p>
<p> Run the following command to see a preview of the resources Terraform will create:</p>
<pre><code class="lang-bash"> terraform plan
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744232880165/7180b838-8059-4e09-9b8a-317ce79a7781.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744232952689/970cf801-5bc5-4bc1-a1e2-c883a4748e92.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Apply the Terraform Configuration</strong>:</p>
<p> Once you’re ready to create the resources, apply the configuration:</p>
<pre><code class="lang-bash"> terraform apply --auto-approve
</code></pre>
<p> This command will automatically approve the changes without prompting for confirmation.</p>
</li>
</ol>
<blockquote>
<p>Now, sit back and relax for approximately 10 minutes as Terraform creates the resources in AWS. You can monitor the progress in the terminal.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744233612356/9bf19898-3a4b-4112-aa14-cd9bfcf20918.png" alt /></p>
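<p>Once the apply finishes, it’s worth a quick sanity check from the InfraServer before moving on. The helper below is a sketch: the cluster name is a placeholder (use the one from your Terraform outputs, e.g. via <code>terraform output</code>), and it assumes <code>kubectl</code> is installed:</p>
<pre><code class="lang-bash"># Point kubectl at the new EKS cluster and list its nodes
verify_eks() {
  # $1 = AWS region, $2 = EKS cluster name (see your Terraform outputs)
  aws eks update-kubeconfig --region "$1" --name "$2" &amp;&amp; kubectl get nodes
}
# Example (cluster name is a placeholder):
# verify_eks us-east-1 my-eks-cluster
</code></pre>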
<hr />
<h2 id="heading-set-up-the-jenkins-server">Set Up the Jenkins Server</h2>
<p>Now that the infrastructure is ready, let’s set up the Jenkins server. Jenkins will be the core tool for automating our CI/CD pipeline.</p>
<h3 id="heading-step-1-install-java">Step 1: Install Java</h3>
<p>Jenkins requires Java to run. We’ll install OpenJDK 17 (a stable, widely-used version):</p>
<pre><code class="lang-bash">sudo apt install openjdk-17-jre-headless -y
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744234465411/5da72ac7-348d-4581-9442-ccf0909dfaa4.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-install-jenkins">Step 2: Install Jenkins</h3>
<ol>
<li><p><strong>Add the Jenkins repository key</strong>:</p>
<pre><code class="lang-bash"> sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
 https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
</code></pre>
</li>
<li><p><strong>Add the Jenkins repository to your system</strong>:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
 https://pkg.jenkins.io/debian-stable binary/"</span> | sudo tee \
 /etc/apt/sources.list.d/jenkins.list &gt; /dev/null
</code></pre>
</li>
<li><p><strong>Update your package list</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get update
</code></pre>
</li>
<li><p><strong>Install Jenkins</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get install jenkins -y
</code></pre>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744234521183/1b304c32-579d-46e2-81f1-be2f4b6a700f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-3-access-jenkins-web-ui">Step 3: Access Jenkins Web UI</h3>
<ol>
<li><p><strong>Get the Jenkins Server’s Public IP</strong> from the AWS EC2 dashboard.</p>
</li>
<li><p>In your browser, go to:</p>
<pre><code class="lang-bash"> http://&lt;public-ip-of-your-Jenkins-server&gt;:8080
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744234655670/a5c8f392-d14e-4ee9-99d5-61828a5ed017.png" alt /></p>
</li>
<li><p><strong>Important</strong>: Make sure <strong>port 8080</strong> is open in the security group attached to your Jenkins EC2 instance.</p>
<ul>
<li><p>Go to EC2 → Security Groups → Edit inbound rules.</p>
</li>
<li><p>Add a rule to allow TCP traffic on port <strong>8080</strong> from your IP or anywhere (0.0.0.0/0) for testing.</p>
</li>
</ul>
</li>
</ol>
<h3 id="heading-unlock-jenkins">Step 4: Unlock Jenkins</h3>
<ol>
<li><p>On the Jenkins setup page, it will ask for the <strong>initial admin password</strong>.</p>
</li>
<li><p>Run this command on your server to get it:</p>
<pre><code class="lang-bash"> sudo cat /var/lib/jenkins/secrets/initialAdminPassword
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744234761264/55cc1553-5a36-4da9-92ed-1406cadd5ccc.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Copy the password and paste it into the browser.</p>
</li>
</ol>
<h3 id="heading-step-5-install-plugins-amp-create-admin-user">Step 5: Install Plugins &amp; Create Admin User</h3>
<ol>
<li><p>Click <strong>"Install Suggested Plugins"</strong> when prompted.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744234804664/a2a2e096-e146-477a-9a4f-f6a8547f3295.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744234858408/e774fd6a-087c-4cf0-a158-6e5133f665d5.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Once the plugins are installed, create your <strong>admin user</strong> (username, password, email).</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744234914065/a20672ba-2a64-49fc-92a9-927a4a10412a.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>"Save and Continue"</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744235012397/15874622-0b0e-484f-8da7-1e4986e8017b.png" alt /></p>
</li>
<li><p>Then click <strong>"Save and Finish"</strong>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744235110299/4955cea9-f363-479a-93d5-3632f1b4500d.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-jenkins-is-ready">🎉 Jenkins is Ready!</h3>
<p>You’ve successfully installed and configured Jenkins! You can now start creating jobs and automating your CI/CD pipeline.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744235184750/5d01a839-3abf-4cc1-bc60-cf55d1fe3fad.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-set-up-the-sonarqube-server">Set Up the SonarQube Server</h2>
<p>SonarQube is a powerful tool for continuously inspecting code quality and security. In this step, we’ll install Docker and run SonarQube as a container on our server.</p>
<h3 id="heading-step-1-update-the-server">Step 1: Update the Server</h3>
<p>Let’s start by updating the system packages:</p>
<pre><code class="lang-bash">sudo apt update
</code></pre>
<h3 id="heading-step-2-install-docker-on-the-server">Step 2: Install Docker on the Server</h3>
<p>To run SonarQube as a container, we first need Docker installed.</p>
<ol>
<li><p><strong>Install required dependencies</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get install ca-certificates curl -y
</code></pre>
</li>
<li><p><strong>Add Docker’s official GPG key</strong>:</p>
<pre><code class="lang-bash"> sudo install -m 0755 -d /etc/apt/keyrings
 sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
 sudo chmod a+r /etc/apt/keyrings/docker.asc
</code></pre>
</li>
<li><p><strong>Add the Docker repository</strong>:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">echo</span> \
 <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.asc] \
 https://download.docker.com/linux/ubuntu \
 <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">${UBUNTU_CODENAME:-<span class="hljs-variable">$VERSION_CODENAME</span>}</span>"</span>)</span> stable"</span> | \
 sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>
</li>
<li><p><strong>Update your package index again</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get update
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744279993966/c84bd365-37ce-4139-b745-fd269f8dce82.png" alt /></p>
</li>
<li><p><strong>Install Docker Engine and tools</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
</code></pre>
</li>
</ol>
<h3 id="heading-step-3-enable-docker-for-your-user-optional-but-recommended">Step 3: Enable Docker for Your User (Optional, but recommended)</h3>
<p>To run Docker without <code>sudo</code> every time:</p>
<pre><code class="lang-bash">sudo usermod -aG docker <span class="hljs-variable">$USER</span>
</code></pre>
<blockquote>
<p>⚠️ <strong>Important</strong>: You need to <strong>log out and log back in</strong> after running this command for the changes to take effect.</p>
</blockquote>
<h3 id="heading-step-4-run-sonarqube-in-a-docker-container">Step 4: Run SonarQube in a Docker Container</h3>
<p>Now that Docker is ready, let’s launch SonarQube:</p>
<pre><code class="lang-bash">docker run -d --name sonarqube -p 9000:9000 sonarqube:lts
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744288768306/94cb0724-8450-42fb-8a81-8ee824131152.png" alt /></p>
<p>This command will:</p>
<ul>
<li><p>Download the latest LTS (Long-Term Support) version of SonarQube.</p>
</li>
<li><p>Start it in a detached container.</p>
</li>
<li><p>Expose it on port 9000.</p>
</li>
</ul>
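<p>On first boot, SonarQube can take a couple of minutes to initialize, so the UI may briefly show a loading page. If you’d rather script the wait, a small helper like this (a sketch; adjust the URL and retry budget to taste) polls SonarQube’s <code>/api/system/status</code> endpoint until it reports <code>UP</code>:</p>
<pre><code class="lang-bash">wait_for_sonarqube() {
  # $1 = base URL, e.g. http://localhost:9000
  local tries=0
  while [ "$tries" -lt 30 ]; do
    # The status endpoint returns JSON like {"status":"UP"} once ready
    if curl -fsS "$1/api/system/status" 2&gt;/dev/null | grep -q '"UP"'; then
      echo "SonarQube is up"
      return 0
    fi
    tries=$((tries + 1))
    sleep 5
  done
  echo "SonarQube did not start in time" &gt;&amp;2
  return 1
}
# wait_for_sonarqube http://localhost:9000
</code></pre>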
<h3 id="heading-step-5-access-sonarqube-web-ui">Step 5: Access SonarQube Web UI</h3>
<p>Open your browser and go to:</p>
<pre><code class="lang-bash">http://&lt;public-ip-of-sonarqube&gt;:9000
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744288888642/cc9de884-9505-4aca-bf97-1a5b1c9118f6.png" alt /></p>
<p>Make sure <strong>port 9000</strong> is allowed in the EC2 security group.</p>
<h3 id="heading-step-6-login-and-change-default-password">Step 6: Login and Change Default Password</h3>
<p>Use the default credentials to log in:</p>
<ul>
<li><p><strong>Username</strong>: <code>admin</code></p>
</li>
<li><p><strong>Password</strong>: <code>admin</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744288943501/cc0128c7-efff-42ec-afdf-09d26d015dc4.png" alt /></p>
<p>You’ll be prompted to change the password on your first login.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744289011320/62974949-4d9c-4a43-a273-c7ca2eaeb993.png" alt /></p>
<h3 id="heading-sonarqube-is-now-up-and-running-on-your-server">🎉 SonarQube is now up and running on your server!</h3>
<hr />
<h2 id="heading-set-up-the-nexus-server">Set Up the Nexus Server</h2>
<p>Nexus is a repository manager where we can store and manage build artifacts like Docker images, Maven packages, and more. We’ll install and run Nexus in a Docker container.</p>
<h3 id="heading-step-1-update-the-system">Step 1: Update the System</h3>
<p>Start by updating all the packages:</p>
<pre><code class="lang-bash">sudo apt update
</code></pre>
<h3 id="heading-step-2-install-docker">Step 2: Install Docker</h3>
<p>If Docker isn’t already installed on this server, follow these steps:</p>
<ol>
<li><p><strong>Install required packages</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get install ca-certificates curl -y
</code></pre>
</li>
<li><p><strong>Add Docker’s GPG key</strong>:</p>
<pre><code class="lang-bash"> sudo install -m 0755 -d /etc/apt/keyrings
 sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
 sudo chmod a+r /etc/apt/keyrings/docker.asc
</code></pre>
</li>
<li><p><strong>Add the Docker repository</strong>:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">echo</span> \
 <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.asc] \
 https://download.docker.com/linux/ubuntu \
 <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">${UBUNTU_CODENAME:-<span class="hljs-variable">$VERSION_CODENAME</span>}</span>"</span>)</span> stable"</span> | \
 sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>
</li>
<li><p><strong>Update package index</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get update
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744289103877/a901c5d2-13bf-4c6e-b6fe-9246f14fbcd5.png" alt /></p>
</li>
<li><p><strong>Install Docker</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
</code></pre>
</li>
</ol>
<h3 id="heading-step-3-run-docker-without-sudo-optional">Step 3: Run Docker Without <code>sudo</code> (Optional)</h3>
<p>To avoid typing <code>sudo</code> before every Docker command:</p>
<pre><code class="lang-bash">sudo usermod -aG docker <span class="hljs-variable">$USER</span>
</code></pre>
<blockquote>
<p>🔁 Log out and log back in to apply this change.</p>
</blockquote>
<h3 id="heading-step-4-run-nexus-in-a-docker-container">Step 4: Run Nexus in a Docker Container</h3>
<p>Now that Docker is ready, let’s launch the Nexus container:</p>
<pre><code class="lang-bash">docker run -d --name nexus -p 8081:8081 sonatype/nexus3:latest
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744289420427/83cb6d03-0716-4c87-a611-45625b317670.png" alt /></p>
<ul>
<li><p>This runs Nexus in the background.</p>
</li>
<li><p>The web interface will be available on <strong>port 8081</strong>.</p>
</li>
</ul>
<h3 id="heading-step-5-access-nexus-web-interface">Step 5: Access Nexus Web Interface</h3>
<ol>
<li><p>In your browser, go to:</p>
<pre><code class="lang-bash"> http://&lt;public-ip-of-nexus&gt;:8081
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744289543982/68954596-3aed-4b58-963b-91144f55ca4c.png" alt /></p>
</li>
<li><p>Make sure <strong>port 8081</strong> is allowed in your EC2 instance’s <strong>Security Group</strong>.</p>
</li>
</ol>
<h3 id="heading-step-6-retrieve-the-admin-password">Step 6: Retrieve the Admin Password</h3>
<p>To sign in, you need the initial admin password, which is stored <strong>inside the container</strong>.</p>
<p>Here’s how to get it:</p>
<ol>
<li><p><strong>Find the container ID</strong>:</p>
<pre><code class="lang-bash"> docker ps
</code></pre>
</li>
<li><p><strong>Access the container shell</strong>:</p>
<pre><code class="lang-bash"> docker <span class="hljs-built_in">exec</span> -it &lt;container-id&gt; /bin/bash
</code></pre>
</li>
<li><p><strong>Print the password</strong>:</p>
<pre><code class="lang-bash"> cat /nexus-data/admin.password
</code></pre>
</li>
<li><p>Copy the password and go back to the Nexus UI.</p>
</li>
</ol>
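<p>Alternatively, the three steps above collapse into a single non-interactive command, with no shell session inside the container needed (wrapped in a small function here for easy reuse):</p>
<pre><code class="lang-bash"># Print the initial Nexus admin password without entering the container
nexus_admin_password() {
  docker exec nexus cat /nexus-data/admin.password
}
# nexus_admin_password
</code></pre>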
<h3 id="heading-step-7-login-amp-set-a-new-password">Step 7: Login &amp; Set a New Password</h3>
<ul>
<li><p>Username: <code>admin</code></p>
</li>
<li><p>Password: (paste the password you retrieved)</p>
</li>
</ul>
<p>After login, it will ask you to <strong>set a new admin password</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744290075687/a6069e4c-e7dc-44db-ad8e-dc461cb1da94.png" alt /></p>
<h3 id="heading-thats-it-your-nexus-repository-manager-is-now-ready-to-use">🎉 That’s it! Your Nexus repository manager is now ready to use.</h3>
<hr />
<h1 id="heading-configure-jenkins-plugins-and-docker">Configure Jenkins Plugins and Docker</h1>
<p>Now that Jenkins is up and running, let’s install the required plugins and set up Docker on the Jenkins server to enable full CI/CD functionality.</p>
<h3 id="heading-step-1-install-required-jenkins-plugins">Step 1: Install Required Jenkins Plugins</h3>
<ol>
<li><p><strong>Go to Jenkins Dashboard</strong><br /> → Click <strong>Manage Jenkins</strong><br /> → Click <strong>Manage Plugins</strong><br /> → Go to the <strong>Available</strong> tab</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744290525601/7350a774-e32e-4638-8410-b9f4df5eea83.png" alt /></p>
</li>
<li><p><strong>Search and install the following plugins</strong> (you can select multiple at once):</p>
<ul>
<li><p><code>Pipeline Stage View</code></p>
</li>
<li><p><code>Docker Pipeline</code></p>
</li>
<li><p><code>SonarQube Scanner</code></p>
</li>
<li><p><code>Config File Provider</code></p>
</li>
<li><p><code>Maven Integration</code></p>
</li>
<li><p><code>Pipeline Maven Integration</code></p>
</li>
<li><p><code>Kubernetes</code></p>
</li>
<li><p><code>Kubernetes CLI</code></p>
</li>
<li><p><code>Kubernetes Client API</code></p>
</li>
<li><p><code>Kubernetes Credentials</code></p>
</li>
<li><p><code>Kubernetes Credentials Provider</code></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744291036361/195a79a8-3d42-4d60-8ff8-84619ef2ceb7.png" alt /></p>
<ol start="3">
<li><p>Click <strong>Install without restart</strong> and wait for all plugins to be installed.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744291267838/d5df93f6-40e6-474a-835b-69e1314691aa.png" alt /></p>
</li>
<li><p>Once done, <strong>restart Jenkins</strong> to apply all changes</p>
</li>
</ol>
<h3 id="heading-step-2-install-docker-on-jenkins-server">Step 2: Install Docker on Jenkins Server</h3>
<p>We’ll now install Docker so Jenkins jobs can build Docker images directly.</p>
<ol>
<li><p><strong>Update and install required packages</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get update
 sudo apt-get install ca-certificates curl -y
</code></pre>
</li>
<li><p><strong>Add Docker's GPG key</strong>:</p>
<pre><code class="lang-bash"> sudo install -m 0755 -d /etc/apt/keyrings
 sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
 sudo chmod a+r /etc/apt/keyrings/docker.asc
</code></pre>
</li>
<li><p><strong>Add the Docker repository</strong>:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">echo</span> \
 <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.asc] \
 https://download.docker.com/linux/ubuntu \
 <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">${UBUNTU_CODENAME:-<span class="hljs-variable">$VERSION_CODENAME</span>}</span>"</span>)</span> stable"</span> | \
 sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744289103877/a901c5d2-13bf-4c6e-b6fe-9246f14fbcd5.png" alt /></p>
</li>
<li><p><strong>Install Docker</strong>:</p>
<pre><code class="lang-bash"> sudo apt-get update
 sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
</code></pre>
</li>
</ol>
<h3 id="heading-step-3-allow-jenkins-user-to-run-docker">Step 3: Allow Jenkins User to Run Docker</h3>
<p>Pipeline jobs run as the <code>jenkins</code> system user, not as your login user, so add both to the <code>docker</code> group and restart Jenkins:</p>
<pre><code class="lang-bash">sudo usermod -aG docker jenkins
sudo usermod -aG docker <span class="hljs-variable">$USER</span>
sudo systemctl restart jenkins
</code></pre>
<blockquote>
<p>🔁 <strong>Log out and log back in</strong> (or reboot) for your own group change to take effect.</p>
</blockquote>
<h3 id="heading-step-4-configure-maven-and-sonar-scanner-in-jenkins">Step 4: Configure Maven and Sonar Scanner in Jenkins</h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard</strong> → <strong>Manage Jenkins</strong> → <strong>Global Tool Configuration</strong></p>
</li>
<li><p>Scroll down to the <strong>Maven</strong> section:</p>
<ul>
<li><p>Click <strong>Add Maven</strong></p>
</li>
<li><p>Name it: <code>maven3</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744292428512/9316de03-3be2-4f4b-9936-73c2a16b2140.png" alt /></p>
</li>
<li><p>Choose “Install automatically” (Jenkins will download it)</p>
</li>
</ul>
</li>
<li><p>Scroll to the <strong>SonarQube Scanner</strong> section:</p>
<ul>
<li><p>Click <strong>Add SonarQube Scanner</strong></p>
</li>
<li><p>Name it: <code>sonar-scanner</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744292484705/c855ed86-12d3-452b-b47c-6ec8efe44d88.png" alt /></p>
</li>
<li><p>Enable “Install automatically”</p>
</li>
</ul>
</li>
<li><p>Click <strong>Save</strong> or <strong>Apply</strong> to finish.</p>
</li>
</ol>
<blockquote>
<p>🎉 Done! Jenkins is now fully equipped with all the tools you need to build, analyze, and deploy your applications in a modern DevOps workflow.</p>
</blockquote>
<hr />
<h1 id="heading-create-and-configure-jenkins-pipeline">Create and Configure Jenkins Pipeline</h1>
<h3 id="heading-step-1-create-a-new-pipeline">Step 1: Create a New Pipeline</h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard</strong></p>
</li>
<li><p>Click <strong>New Item</strong></p>
</li>
<li><p>Enter a <strong>name</strong></p>
</li>
<li><p>Choose <strong>Pipeline</strong> as the item type</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744292755111/0d329c16-e62c-4069-88d1-e5c935c87210.png" alt /></p>
</li>
<li><p>Click <strong>OK</strong></p>
</li>
<li><p>Under <strong>Build Discarder</strong>:</p>
<ul>
<li><p>Check <strong>Discard Old Builds</strong></p>
</li>
<li><p>Set <strong>Max # of builds to keep</strong> = <code>3</code><br />  <em>(Keeps Jenkins light and fast)</em></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744292916775/1e7581b4-d07d-4df3-9b05-f7c3c89cded6.png" alt /></p>
<h3 id="heading-step-2-install-trivy-on-jenkins-server">Step 2: Install Trivy on Jenkins Server</h3>
<p>Trivy is used for vulnerability scanning of both filesystems and container images.</p>
<p>Run the following commands on your Jenkins server:</p>
<pre><code class="lang-bash">sudo apt-get install wget apt-transport-https gnupg lsb-release -y

wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | \
gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg &gt; /dev/null

<span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/trivy.gpg] \
https://aquasecurity.github.io/trivy-repo/deb <span class="hljs-subst">$(lsb_release -sc)</span> main"</span> | \
sudo tee -a /etc/apt/sources.list.d/trivy.list

sudo apt-get update
sudo apt-get install trivy -y
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744313544599/f0d17052-2b7d-46a1-8186-5864e1eb3d2f.png" alt /></p>
<p>Check if Trivy is working:</p>
<pre><code class="lang-bash">trivy --version
</code></pre>
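<p>Beyond generating reports, Trivy can also gate a build: with <code>--exit-code</code>, the scan command itself fails when findings at or above a chosen severity are present. Here’s a sketch you could later wire into a pipeline stage (the flags are standard Trivy options; the severity threshold is your call):</p>
<pre><code class="lang-bash">trivy_gate() {
  # Exit non-zero when HIGH or CRITICAL vulnerabilities exist under path $1
  trivy fs --exit-code 1 --severity HIGH,CRITICAL "$1"
}
# trivy_gate .
</code></pre>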
<h3 id="heading-step-3-add-sonarqube-credentials-in-jenkins">Step 3: Add SonarQube Credentials in Jenkins</h3>
<ol>
<li><p>Go to <strong>SonarQube UI</strong> → Click on <strong>Administration</strong> → <strong>Security</strong> → <strong>Users</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744314655968/d6a2f9ce-8e02-4059-9247-0c2295cfa287.png" alt /></p>
</li>
<li><p>Generate a <strong>new token</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744314799983/f7b3694a-7069-4aeb-b184-acf7d5e9258e.png" alt /></p>
</li>
</ol>
<p>Now, add the token in Jenkins:</p>
<ol>
<li><p>Go to <strong>Jenkins Dashboard</strong> → <strong>Manage Jenkins</strong> → <strong>Credentials</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744314439520/e8c89305-523a-404e-b793-bbd1e753a191.png" alt /></p>
</li>
<li><p>Click on <strong>(global)</strong> → <strong>Add Credentials</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744314461326/88ad32fe-dc44-4c9a-9e07-b09b72878df9.png" alt /></p>
</li>
<li><p>Fill the form:</p>
<ul>
<li><p><strong>Kind</strong>: <code>Secret Text</code></p>
</li>
<li><p><strong>Secret</strong>: <em>(Paste the token copied from SonarQube)</em></p>
</li>
<li><p><strong>ID</strong>: <code>sonar-token</code> <em>(We'll refer to this in the pipeline)</em></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744314842545/0b3133bd-2913-4c20-a76f-7577eaef5bbb.png" alt /></p>
<ol start="4">
<li><p>Click <strong>Create</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744314901552/9e8f96bf-7cf3-48ea-8c5f-cc6e53cd23cc.png" alt /></p>
</li>
</ol>
<h3 id="heading-step-4-configure-sonarqube-server-in-jenkins">Step 4: Configure SonarQube Server in Jenkins</h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard</strong> → <strong>Manage Jenkins</strong> → <strong>Configure System</strong></p>
</li>
<li><p>Scroll to <strong>SonarQube servers</strong> section</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744314994180/47c27ad7-1113-4931-a3e4-6b91d2c5b921.png" alt /></p>
</li>
<li><p>Click <strong>Add SonarQube</strong></p>
</li>
<li><p>Fill the details:</p>
<ul>
<li><p><strong>Name</strong>: <code>sonar</code></p>
</li>
<li><p><strong>Server Authentication Token</strong>: Choose <code>sonar-token</code></p>
</li>
<li><p><strong>Server URL</strong>: <code>http://&lt;public-ip-of-sonarqube&gt;:9000</code></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744315161987/ebdac9b2-04f2-4260-8b1c-10f07b263520.png" alt /></p>
<ol start="5">
<li>Click <strong>Save</strong></li>
</ol>
<h2 id="heading-write-your-pipeline-script">Write Your Pipeline Script</h2>
<p>You’re now ready to write your pipeline script under <strong>Pipeline → Pipeline Script</strong> section of the job.</p>
<pre><code class="lang-groovy">pipeline {
    agent any

    tools {
        maven <span class="hljs-string">'maven3'</span>
    }
    environment {
        SCANNER_HOME = tool <span class="hljs-string">'sonar-scanner'</span>
    }

    stages {
        stage(<span class="hljs-string">'Git Checkout'</span>) {
            steps {
                git branch: <span class="hljs-string">'main'</span>, url: <span class="hljs-string">'https://github.com/praduman8435/Capstone-Mega-DevOps-Project.git'</span>
            }
        }
        stage(<span class="hljs-string">'Compilation'</span>) {
            steps {
                sh <span class="hljs-string">'mvn compile'</span>
            }
        }
        stage(<span class="hljs-string">'Testing'</span>) {
            steps {
                sh <span class="hljs-string">'mvn test'</span>
            }
        }
        stage(<span class="hljs-string">'Trivy FS Scan'</span>) {
            steps {
                sh <span class="hljs-string">'trivy fs --format table -o fs-report.html .'</span>
            }
        }
        stage(<span class="hljs-string">'Code Quality Analysis'</span>) {
            steps {
                withSonarQubeEnv(<span class="hljs-string">'sonar'</span>) {
                sh <span class="hljs-string">''</span><span class="hljs-string">' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=GCBank -Dsonar.projectKey=GCBank \
                            -Dsonar.java.binaries=target '</span><span class="hljs-string">''</span>
                }
            }
        }
    }
}
</code></pre>
<h3 id="heading-now-try-to-build-pipeline-and-check-till-now-everything-works-fine">Now, run the pipeline to check that everything works so far</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744701195508/f2ecbe39-1e21-48eb-9322-c88430fead7a.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744701653054/7aaf0183-b4e9-49de-91df-e247eb9b7ec4.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Everything works as expected. Next, let’s extend the pipeline with a few more stages.</p>
</blockquote>
<h2 id="heading-implement-quality-gate-check"><strong>Implement Quality Gate Check</strong></h2>
<p><strong>In SonarQube:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744716522198/65c95970-fb93-4155-9d5b-dd84dcb3e28e.png" alt /></p>
<ol>
<li><p><strong>Go to:</strong> <code>Administration → Configuration → Webhooks</code></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744716609135/e5a85fb8-fb77-486c-9426-3f66ac67684e.png" alt /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744716688318/e9db6045-4e06-43d9-868b-2354be81207a.png" alt /></p>
</li>
<li><p><strong>Create New Webhook</strong></p>
<ul>
<li><p><strong>Name:</strong> <code>sonarqube-webhook</code></p>
</li>
<li><p><strong>URL:</strong> <code>http://&lt;jenkins-public-ip&gt;:8080/sonarqube-webhook/</code></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744716871930/43be0de4-3105-4042-85ba-0e70048ff6ba.png" alt /></p>
<ul>
<li>Click <strong>Create</strong></li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744716960656/f7e78e1a-9782-492b-abe2-261513b5a3d0.png" alt /></p>
<p>This webhook will notify Jenkins once the SonarQube analysis is complete and the Quality Gate status is available.</p>
<blockquote>
<p>Your Jenkins server should be publicly accessible (or at least reachable by the SonarQube server) on that webhook URL.</p>
</blockquote>
<h2 id="heading-update-pomxml-with-nexus-repositories"><strong>Update</strong> <code>pom.xml</code> with Nexus Repositories</h2>
<p><strong>In Nexus UI:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744717793489/12d0b5c5-8921-4069-a652-72b23bf2f226.png" alt /></p>
<ul>
<li><p><strong>Browse</strong> → Select <code>maven-releases</code> and <code>maven-snapshots</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744717859897/10028d45-a680-4883-be69-9ca1a21f10ce.png" alt /></p>
</li>
<li><p>Copy both URLs (you’ll use them in <code>pom.xml</code>)</p>
</li>
</ul>
<p><strong>In your</strong> <code>pom.xml</code>: Search for <code>&lt;distributionManagement&gt;</code> block and update it like this:</p>
<pre><code class="lang-bash">&lt;distributionManagement&gt;
    &lt;repository&gt;
        &lt;id&gt;maven-releases&lt;/id&gt;
        &lt;url&gt;http://&lt;nexus-ip&gt;:8081/repository/maven-releases/&lt;/url&gt;
    &lt;/repository&gt;
    &lt;snapshotRepository&gt;
        &lt;id&gt;maven-snapshots&lt;/id&gt;
        &lt;url&gt;http://&lt;nexus-ip&gt;:8081/repository/maven-snapshots/&lt;/url&gt;
    &lt;/snapshotRepository&gt;
&lt;/distributionManagement&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744718393886/55916ecf-59f4-4834-9759-956fc2d553d9.png" alt /></p>
<p>💡 Don’t forget to:</p>
<ul>
<li><p>Replace <code>&lt;nexus-ip&gt;</code> with your actual Nexus IP</p>
</li>
<li><p>Commit and push the change to GitHub</p>
</li>
</ul>
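<p>💡 If you’d rather not edit the URLs by hand, a quick <code>sed</code> substitution does the same job. The snippet below is a sketch that demonstrates the idea on a stand-in fragment; <code>NEXUS_IP</code> is a placeholder you’d set to your server’s address, and in the real project you’d point the <code>sed</code> at <code>pom.xml</code>:</p>

```shell
# Placeholder value; use your actual Nexus server address here
NEXUS_IP="203.0.113.10"

# Stand-in fragment containing the same placeholder as pom.xml
cat > distribution-demo.xml <<'EOF'
<url>http://<nexus-ip>:8081/repository/maven-releases/</url>
<url>http://<nexus-ip>:8081/repository/maven-snapshots/</url>
EOF

# Replace every <nexus-ip> placeholder in place (GNU sed)
sed -i "s/<nexus-ip>/${NEXUS_IP}/g" distribution-demo.xml
cat distribution-demo.xml
```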
<h2 id="heading-configure-nexus-credentials-in-jenkins-via-settingsxml"><strong>Configure Nexus Credentials in Jenkins via</strong> <code>settings.xml</code></h2>
<p><strong>In Jenkins:</strong></p>
<ul>
<li><p>Go to: <code>Manage Jenkins → Managed Files → Add a new Config File</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744718723257/430d5df7-cb46-4b88-8009-d40e674b33b2.png" alt /></p>
</li>
<li><p><strong>Type:</strong> <code>Global Maven settings.xml</code></p>
</li>
<li><p><strong>ID:</strong> <code>Capstone</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744718856345/8f8b6e3c-6b57-4344-b8e8-6aefec9ff765.png" alt /></p>
</li>
<li><p>Click <strong>Next</strong></p>
</li>
<li><p>Now, in the generated <code>Content</code>, find the <code>&lt;servers&gt;</code> section</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744719089869/f6376ddb-713e-4e6f-9e13-513f77b9bb07.png" alt /></p>
</li>
</ul>
<p>📄 In the <code>&lt;servers&gt;</code> section, add:</p>
<pre><code class="lang-bash">&lt;servers&gt;
  &lt;server&gt;
    &lt;id&gt;maven-releases&lt;/id&gt;
    &lt;username&gt;admin&lt;/username&gt;
    &lt;password&gt;heyitsme&lt;/password&gt;
  &lt;/server&gt;

  &lt;server&gt;
    &lt;id&gt;maven-snapshots&lt;/id&gt;
    &lt;username&gt;admin&lt;/username&gt;
    &lt;password&gt;heyitsme&lt;/password&gt;
  &lt;/server&gt;
&lt;/servers&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744719794578/d49f75a7-8e0c-4d63-afd8-90b76166261e.png" alt /></p>
<ul>
<li>Submit the changes</li>
</ul>
<p>🔐 This ensures your Maven builds can <strong>authenticate with Nexus</strong> to deploy artifacts.</p>
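<p>🔒 A side note on the plaintext password above: Maven can interpolate environment variables inside <code>settings.xml</code>, so one alternative sketch (assuming you export <code>NEXUS_USER</code> and <code>NEXUS_PASS</code> on the Jenkins node yourself; both names are placeholders of my choosing) keeps the credentials out of the managed file:</p>

```xml
<!-- Sketch: credentials come from environment variables exported on the
     Jenkins node, so they never live in the managed config file itself -->
<servers>
  <server>
    <id>maven-releases</id>
    <username>${env.NEXUS_USER}</username>
    <password>${env.NEXUS_PASS}</password>
  </server>

  <server>
    <id>maven-snapshots</id>
    <username>${env.NEXUS_USER}</username>
    <password>${env.NEXUS_PASS}</password>
  </server>
</servers>
```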
<h2 id="heading-add-dockerhub-credentials-in-jenkins"><strong>Add DockerHub Credentials in Jenkins</strong></h2>
<ul>
<li><p>Go to: <code>Manage Jenkins → Credentials → Global → Add Credentials</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744749314899/82a64750-899b-43db-b8fc-d39580b519f7.png" alt /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744749411655/4b1d181a-3949-41c8-833b-96b2fcc85462.png" alt /></p>
</li>
<li><p><strong>Kind:</strong> <code>Username and Password</code></p>
</li>
<li><p><strong>ID:</strong> <code>docker-cred</code></p>
</li>
<li><p><strong>Username:</strong> Your DockerHub username</p>
</li>
<li><p><strong>Password:</strong> Your DockerHub password</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744749962004/bdc69108-4a87-4a54-9df5-f81294dad6e7.png" alt /></p>
</li>
</ul>
<p>💡 You’ll reference this ID in your pipeline when logging in to DockerHub.</p>
<h2 id="heading-add-jenkins-user-to-docker-group"><strong>Add Jenkins User to Docker Group</strong></h2>
<p>Run this on the Jenkins server:</p>
<pre><code class="lang-bash">sudo usermod -aG docker jenkins
</code></pre>
<p>Then restart Jenkins so the new group membership takes effect:</p>
<pre><code class="lang-bash">sudo systemctl restart jenkins
</code></pre>
<blockquote>
<p>This lets the Jenkins user run Docker commands without <code>sudo</code>.</p>
</blockquote>
<h2 id="heading-update-the-pipeline-script">Update the Pipeline Script</h2>
<pre><code class="lang-bash">pipeline {
    agent any

    tools{
        maven <span class="hljs-string">'maven3'</span>
    }
    environment {
        SCANNER_HOME= tool <span class="hljs-string">'sonar-scanner'</span>
        IMAGE_TAG= <span class="hljs-string">"v<span class="hljs-variable">${BUILD_NUMBER}</span>"</span>
    }

    stages {
        stage(<span class="hljs-string">'Git Checkout'</span>) {
            steps {
                git branch: <span class="hljs-string">'main'</span>, url: <span class="hljs-string">'https://github.com/praduman8435/Capstone-Mega-DevOps-Project.git'</span>
            }
        }
        stage(<span class="hljs-string">'Compilation'</span>) {
            steps {
                sh <span class="hljs-string">'mvn compile'</span>
            }
        }
        stage(<span class="hljs-string">'Testing'</span>) {
            steps {
                sh <span class="hljs-string">'mvn test'</span>
            }
        }
        stage(<span class="hljs-string">'Trivy FS Scan'</span>) {
            steps {
                sh <span class="hljs-string">'trivy fs --format table -o fs-report.html .'</span>
            }
        }
        stage(<span class="hljs-string">'Code Quality Analysis'</span>) {
            steps {
                withSonarQubeEnv(<span class="hljs-string">'sonar'</span>) {
                    sh <span class="hljs-string">''</span><span class="hljs-string">' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=GCBank -Dsonar.projectKey=GCBank \
                            -Dsonar.java.binaries=target '</span><span class="hljs-string">''</span>
                }
            }
        }
        stage(<span class="hljs-string">'Quality Gate Check'</span>){
            steps{
                waitForQualityGate abortPipeline: <span class="hljs-literal">false</span>, credentialsId: <span class="hljs-string">'sonar-token'</span>
            }
        }
        stage(<span class="hljs-string">'Build the Application'</span>){
            steps{
                sh <span class="hljs-string">'mvn package -DskipTests'</span>
            }
        }
        stage(<span class="hljs-string">'Push Artifacts to Nexus'</span>){
            steps{
                withMaven(globalMavenSettingsConfig: <span class="hljs-string">'Capstone'</span>, jdk: <span class="hljs-string">''</span>, maven: <span class="hljs-string">'maven3'</span>, mavenSettingsConfig: <span class="hljs-string">''</span>, traceability: <span class="hljs-literal">true</span>) {
                    sh <span class="hljs-string">'mvn clean deploy -DskipTests'</span>
                }
            }
        }
        stage(<span class="hljs-string">'Build &amp; Tag Docker Image'</span>){
            steps{
                script{
                    withDockerRegistry(credentialsId: <span class="hljs-string">'docker-cred'</span>) {
                        sh <span class="hljs-string">'docker build -t thepraduman/bankapp:$IMAGE_TAG .'</span>
                    }
                }
            }
        }
        stage(<span class="hljs-string">'Docker Image Scan'</span>) {
            steps {
                sh <span class="hljs-string">'trivy image --format table -o image-report.html thepraduman/bankapp:$IMAGE_TAG'</span>
            }
        }
        stage(<span class="hljs-string">'Push Docker Image'</span>) {
            steps {
                script{
                    withDockerRegistry(credentialsId: <span class="hljs-string">'docker-cred'</span>) {
                        sh <span class="hljs-string">'docker push thepraduman/bankapp:$IMAGE_TAG'</span>
                    }
                }
            }
        }
    }
}
</code></pre>
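<p>Before triggering the build, it may help to see how the Docker image reference is assembled from the <code>environment</code> block. A minimal local sketch of the same logic:</p>

```shell
# Jenkins injects BUILD_NUMBER automatically; 42 is a stand-in here
BUILD_NUMBER=42

# Mirrors the environment block of the pipeline above
IMAGE_TAG="v${BUILD_NUMBER}"
IMAGE="thepraduman/bankapp:${IMAGE_TAG}"

echo "$IMAGE"   # prints thepraduman/bankapp:v42
```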
<p>Click <code>Build Now</code> to check whether the pipeline triggers and completes successfully.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744782810458/c6597325-db3f-4bc2-8235-3b0f780da81f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744783621105/2980f74b-17f9-459d-aa74-07a84a3cd660.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-automating-jenkins-pipeline-trigger-with-github-webhook">Automating Jenkins Pipeline Trigger with GitHub Webhook</h2>
<p>Now that the CI pipeline is ready, let’s automate it so that it runs <strong>automatically every time new code is pushed to the GitHub repository</strong>. We’ll use the <strong>Generic Webhook Trigger plugin</strong> in Jenkins for this.</p>
<h3 id="heading-install-generic-webhook-trigger-plugin">Install Generic Webhook Trigger Plugin</h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Plugins</strong></p>
</li>
<li><p>Under the <strong>Available plugins</strong> tab, search for <strong>Generic Webhook Trigger</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744783993933/99785093-7c59-4784-8329-8a137d82c1f8.png" alt /></p>
</li>
<li><p>Select it and click on <strong>Install</strong></p>
</li>
<li><p>Restart Jenkins once the installation is complete</p>
</li>
</ol>
<h3 id="heading-configure-webhook-trigger-in-your-pipeline">Configure Webhook Trigger in Your Pipeline</h3>
<ol>
<li><p>Go back to the Jenkins dashboard and open your pipeline job (<code>capstone_CI</code>)</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744784376012/637d7511-ca3b-480a-a4dd-b3990782eeff.png" alt /></p>
</li>
<li><p>Click on <strong>Configure</strong></p>
</li>
<li><p>Scroll down to the <strong>Build Triggers</strong> section and check the box for <strong>Generic Webhook Trigger</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744784485406/81705771-0dd8-405e-9a16-ee84f362a383.png" alt /></p>
</li>
<li><p>Under <strong>Post content parameters</strong>, add:</p>
<ul>
<li><p><strong>Variable</strong>: <code>ref</code></p>
</li>
<li><p><strong>Expression</strong>: <code>$.ref</code></p>
</li>
<li><p><strong>Content-Type</strong>: <code>JSONPath</code></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744784837867/7ae84460-fb7a-40d3-91e6-80071d51239e.png" alt /></p>
<ol start="5">
<li><p>Add a token:</p>
<ul>
<li><strong>Token Name</strong>: <code>capstone</code></li>
</ul>
</li>
<li><p>(Optional) Add a filter to trigger the pipeline only for changes on the <code>main</code> branch:</p>
<ul>
<li><p><strong>Expression</strong>: <code>refs/heads/main</code></p>
</li>
<li><p><strong>Text</strong>: <code>$ref</code></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744785124803/ad5a9e71-692c-43b7-85b9-b984559af237.png" alt /></p>
<ol start="7">
<li>Click <strong>Save</strong></li>
</ol>
<p>Once saved, you’ll see a webhook URL under the token section, something like:</p>
<pre><code class="lang-bash">http://&lt;your-jenkins-ip&gt;:8080/generic-webhook-trigger/invoke?token=capstone
</code></pre>
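<p>To see what the branch filter actually does, here is a sketch that simulates the trigger logic locally. The payload is a truncated stand-in for what GitHub sends on a push, and the <code>sed</code> expression plays the role of the plugin’s <code>$.ref</code> JSONPath:</p>

```shell
# Truncated stand-in for a GitHub push payload
payload='{"ref":"refs/heads/main","before":"0000000","after":"abc1234"}'

# Roughly what the $.ref JSONPath expression extracts
ref=$(printf '%s' "$payload" | sed -n 's/.*"ref":"\([^"]*\)".*/\1/p')

# The optional filter: only pushes to main should trigger the build
if [ "$ref" = "refs/heads/main" ]; then
  echo "trigger"
else
  echo "skip"
fi
```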
<h3 id="heading-configure-github-webhook">Configure GitHub Webhook</h3>
<ol>
<li><p>Go to your <strong>GitHub repository</strong> (the one used in the pipeline)</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744785862082/71a5c793-6d22-43d5-9ad5-0e6ff24d622d.png" alt /></p>
</li>
<li><p>Click on <strong>Settings → Webhooks → Add Webhook</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744785980138/458d8d43-c066-4b0e-9115-1e1214c72505.png" alt /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744786036645/b60dccd3-33e9-45b5-98fd-6192b4e30ca1.png" alt /></p>
</li>
<li><p>Fill out the form as follows:</p>
<ul>
<li><p><strong>Payload URL</strong>: Paste the webhook URL from Jenkins</p>
</li>
<li><p><strong>Content Type</strong>: <code>application/json</code></p>
</li>
<li><p>Leave the secret field blank (or add one and configure Jenkins accordingly)</p>
</li>
<li><p>Choose <strong>Just the push event</strong></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744794356597/e4c8e1bb-f1ed-4415-b993-2e1409c02430.png" alt /></p>
<ol start="4">
<li><p>Click on <strong>Add Webhook</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744794427152/20c22b83-00a7-4daa-aa5f-9127aec51152.png" alt /></p>
</li>
</ol>
<blockquote>
<p>That’s it! 🎉 Now, every time a new commit is pushed to the <code>main</code> branch, <strong>Jenkins will automatically trigger the pipeline</strong>.</p>
</blockquote>
<h1 id="heading-setting-up-cd-pipeline">Setting Up CD Pipeline</h1>
<p>With our CI pipeline automated, it’s time to set up the CD pipeline. The first step is ensuring we can <strong>update the Docker image tag in the Kubernetes deployment</strong> every time a new image is built by the CI pipeline.</p>
<p>We’ll start by granting Jenkins access to the CD GitHub repository and setting up <strong>email notifications</strong> so that you receive updates when your pipeline fails or succeeds.</p>
<h2 id="heading-add-github-credentials-to-jenkins">Add GitHub Credentials to Jenkins</h2>
<p>This will allow Jenkins to <strong>clone or push</strong> changes to your GitHub CD repository.</p>
<ol>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Credentials</strong></p>
</li>
<li><p>Click on <strong>(global)</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744825721490/bf469b9d-9830-4b60-9351-17ba0c223c92.png" alt /></p>
</li>
<li><p>Click on <strong>Add Credentials</strong></p>
</li>
<li><p>Fill in the following:</p>
<ul>
<li><p><strong>Kind</strong>: Username with password</p>
</li>
<li><p><strong>Scope</strong>: Global</p>
</li>
<li><p><strong>Username</strong>: Your GitHub username</p>
</li>
<li><p><strong>Password</strong>: Your GitHub password or personal access token (recommended)</p>
</li>
<li><p><strong>ID</strong>: <code>github-cred</code></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744826099101/ced94b0f-ef61-48b9-8aec-a37c01db5049.png" alt /></p>
<ol start="5">
<li><p>Click on <strong>Create</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744826165992/c0b8c66f-e938-4823-962e-f817a95ea744.png" alt /></p>
</li>
</ol>
<h2 id="heading-configure-email-notifications-in-jenkins">Configure Email Notifications in Jenkins</h2>
<h3 id="heading-generate-gmail-app-password">Generate Gmail App Password</h3>
<p>To securely send emails from Jenkins, we’ll use a Gmail App Password instead of your actual Gmail password.</p>
<ol>
<li><p>Log in to your <strong>Google account</strong></p>
</li>
<li><p>Navigate to <strong>Security</strong></p>
</li>
<li><p>Enable <strong>2-Step Verification</strong> if it’s not already enabled</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742585033059/b8a7e99a-e3df-45f6-8014-62b856e09fc9.png?auto=compress,format&amp;format=webp" alt /></p>
</li>
<li><p>Scroll down to <strong>App Passwords</strong></p>
</li>
<li><p>Generate a new app password:</p>
<ul>
<li>App name: <code>capstone</code></li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744832585772/faf9d24d-65ad-4fd0-a477-ab0e011c2f2f.png" alt /></p>
<ul>
<li>Copy the generated token (you’ll need it in the next step)</li>
</ul>
<h3 id="heading-add-gmail-credentials-to-jenkins">Add Gmail Credentials to Jenkins</h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Credentials</strong></p>
</li>
<li><p>Click on <strong>(global)</strong> and then <strong>Add Credentials</strong></p>
</li>
<li><p>Fill in the following:</p>
<ul>
<li><p><strong>Kind</strong>: Username with password</p>
</li>
<li><p><strong>Scope</strong>: Global</p>
</li>
<li><p><strong>Username</strong>: Your Gmail address</p>
</li>
<li><p><strong>Password</strong>: The generated Gmail app password</p>
</li>
<li><p><strong>ID</strong>: <code>mail-cred</code></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742586006242/fd011391-d083-4629-8663-9c489e560fb8.png?auto=compress,format&amp;format=webp" alt /></p>
<ol start="4">
<li>Click <strong>Create</strong></li>
</ol>
<h3 id="heading-configure-jenkins-mail-server">Configure Jenkins Mail Server</h3>
<ol>
<li><p>Go to <strong>Manage Jenkins → Configure System</strong></p>
</li>
<li><p>Scroll to <strong>Extended E-mail Notification</strong> and fill out:</p>
<ul>
<li><p><strong>SMTP Server</strong>: <code>smtp.gmail.com</code></p>
</li>
<li><p><strong>SMTP Port</strong>: <code>465</code></p>
</li>
<li><p><strong>Credentials</strong>: Select <code>mail-cred</code></p>
</li>
<li><p>Check <strong>Use SSL</strong></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742585462304/90c56d46-0a67-44c0-9ebf-fec27a2fa23d.png?auto=compress,format&amp;format=webp" alt /></p>
<ol start="3">
<li><p>Scroll to <strong>E-mail Notification</strong> section:</p>
<ul>
<li><p><strong>SMTP Server</strong>: <code>smtp.gmail.com</code></p>
</li>
<li><p><strong>Use SMTP Authentication</strong>: ✅</p>
</li>
<li><p><strong>Username</strong>: Your Gmail address</p>
</li>
<li><p><strong>Password</strong>: Your Gmail App Password (recently generated)</p>
</li>
<li><p><strong>SMTP Port</strong>: <code>465</code></p>
</li>
<li><p><strong>Use SSL</strong>: ✅</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742586528509/0ae116cf-c9a9-47e6-b92c-d2b1012d0fe5.png?auto=compress,format&amp;format=webp" alt /></p>
<ol start="4">
<li>Click <strong>Save</strong></li>
</ol>
<blockquote>
<p>🛡️ Make sure ports <strong>465</strong> and <strong>587</strong> are open in the <strong>Jenkins server’s security group</strong> to allow email traffic.</p>
</blockquote>
<h4 id="heading-test-the-email-setup">✅ Test the Email Setup</h4>
<ol>
<li><p>In the <strong>Extended E-mail Notification</strong> section, click on <strong>Test configuration by sending a test e-mail</strong></p>
</li>
<li><p>Enter a recipient email address</p>
</li>
<li><p>Click <strong>Test Configuration</strong></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744833818363/7819fb4c-b800-42e7-afc5-3cdaae6b184a.png" alt /></p>
<blockquote>
<p>If everything is set up correctly, you should receive an email confirming that the Jenkins email notification system is working! 🎉</p>
</blockquote>
<h2 id="heading-add-the-email-notification-script-in-the-ci-pipeline">Add the Email Notification Script to the CI Pipeline</h2>
<blockquote>
<p>Add this inside the <code>pipeline</code> but outside the <code>stages</code> block</p>
</blockquote>
<pre><code class="lang-bash">post {
    always {
        script {
            def jobName = env.JOB_NAME
            def buildNumber = env.BUILD_NUMBER
            def pipelineStatus = currentBuild.result ?: <span class="hljs-string">'UNKNOWN'</span>
            def bannerColor = pipelineStatus.toUpperCase() == <span class="hljs-string">'SUCCESS'</span> ? <span class="hljs-string">'green'</span> : <span class="hljs-string">'red'</span>

            def body = <span class="hljs-string">""</span><span class="hljs-string">"
                &lt;html&gt;
                    &lt;body&gt;
                        &lt;div style="</span>border: 4px solid <span class="hljs-variable">${bannerColor}</span>; padding: 10px;<span class="hljs-string">"&gt;
                            &lt;h2&gt;<span class="hljs-variable">${jobName}</span> - Build #<span class="hljs-variable">${buildNumber}</span>&lt;/h2&gt;
                            &lt;div style="</span>background-color: <span class="hljs-variable">${bannerColor}</span>; padding: 10px;<span class="hljs-string">"&gt;
                                &lt;h3 style="</span>color: white;<span class="hljs-string">"&gt;Pipeline Status: <span class="hljs-variable">${pipelineStatus.toUpperCase()}</span>&lt;/h3&gt;
                            &lt;/div&gt;
                            &lt;p&gt;Check the &lt;a href="</span><span class="hljs-variable">${env.BUILD_URL}</span><span class="hljs-string">"&gt;Console Output&lt;/a&gt; for more details.&lt;/p&gt;
                        &lt;/div&gt;
                    &lt;/body&gt;
                &lt;/html&gt;
            "</span><span class="hljs-string">""</span>

            emailext(
                subject: <span class="hljs-string">"<span class="hljs-variable">${jobName}</span> - Build #<span class="hljs-variable">${buildNumber}</span> - <span class="hljs-variable">${pipelineStatus.toUpperCase()}</span>"</span>,
                body: body,
                to: <span class="hljs-string">'praduman.cnd@gmail.com'</span>,
                from: <span class="hljs-string">'praduman.8435@gmail.com'</span>,
                replyTo: <span class="hljs-string">'praduman.8435@gmail.com'</span>,
                mimeType: <span class="hljs-string">'text/html'</span>,
                attachmentsPattern: <span class="hljs-string">'fs-report.html'</span>
            )
        }
    }
}
</code></pre>
<blockquote>
<p>Email notifications are now configured and ready to use.</p>
</blockquote>
<h2 id="heading-configure-the-infraserver">🚀 Configure the InfraServer</h2>
<h3 id="heading-install-kubectl">Install Kubectl</h3>
<pre><code class="lang-bash">curl -LO <span class="hljs-string">"https://dl.k8s.io/release/<span class="hljs-subst">$(curl -L -s https://dl.k8s.io/release/stable.txt)</span>/bin/linux/amd64/kubectl"</span>

curl -LO <span class="hljs-string">"https://dl.k8s.io/release/<span class="hljs-subst">$(curl -L -s https://dl.k8s.io/release/stable.txt)</span>/bin/linux/amd64/kubectl.sha256"</span>

sudo install -o root -g root -m 0755 kubectl /usr/<span class="hljs-built_in">local</span>/bin/kubectl

kubectl version --client
</code></pre>
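<p>The commands above download <code>kubectl.sha256</code> but never use it; before installing, you can verify the binary against it. A sketch of the check, demonstrated on a stand-in file since the real download isn’t reproduced here:</p>

```shell
# Stand-in for the downloaded binary and its published checksum
printf 'fake-kubectl-binary' > kubectl-demo
sha256sum kubectl-demo | awk '{print $1}' > kubectl-demo.sha256

# The real check would be:
#   echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
echo "$(cat kubectl-demo.sha256)  kubectl-demo" | sha256sum --check
```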
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744910471584/580da3c4-5dac-4be6-91b7-fd33ad25dc85.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-update-the-kubeconfig-file">Update the Kubeconfig File</h3>
<p>Connect the infra server to the EKS cluster:</p>
<pre><code class="lang-bash">aws eks update-kubeconfig \
  --region us-east-1 \
  --name capstone-cluster
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744910523334/c5373c38-cba5-4700-88ff-c7192d123daf.png" alt /></p>
<h3 id="heading-install-eksctl">Install <code>eksctl</code></h3>
<p><code>eksctl</code> is a CLI tool that simplifies EKS cluster operations.</p>
<pre><code class="lang-bash">curl -sLO <span class="hljs-string">"https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_<span class="hljs-subst">$(uname -s)</span>_amd64.tar.gz"</span>
tar -xzf eksctl_$(uname -s)_amd64.tar.gz
sudo mv eksctl /usr/<span class="hljs-built_in">local</span>/bin

<span class="hljs-comment"># Verify the installation</span>
eksctl version
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744911358399/a2da69e3-3f4d-4d65-9393-0f638dee463d.png" alt /></p>
<h3 id="heading-install-helm">Install Helm</h3>
<p>Helm is used for managing Kubernetes applications using Helm charts.</p>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt upgrade -y
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
</code></pre>
<h3 id="heading-associate-iam-oidc-provider-with-the-cluster">Associate IAM OIDC Provider with the Cluster</h3>
<p>This step is needed to create service accounts with IAM roles.</p>
<pre><code class="lang-bash">eksctl utils associate-iam-oidc-provider \
  --cluster capstone-cluster \
  --region us-east-1 \
  --approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744911885306/31e9dd51-0c72-4565-9c11-070aeda2cf6f.png" alt /></p>
<h3 id="heading-create-iam-service-account-for-ebs-csi-driver">Create IAM Service Account for EBS CSI Driver</h3>
<p>This enables your cluster to dynamically provision EBS volumes.</p>
<pre><code class="lang-bash">eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster capstone-cluster \
  --region us-east-1 \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --override-existing-serviceaccounts
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744912383410/6a5eeb04-49bb-4bc0-ac99-0d6bd926047f.png" alt /></p>
<h3 id="heading-deploy-ebs-csi-driver">Deploy EBS CSI Driver</h3>
<pre><code class="lang-bash">kubectl apply -k <span class="hljs-string">"github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.30"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744913442031/658d2b6d-0f67-401a-92c6-589b7b07b212.png" alt /></p>
<h3 id="heading-install-nginx-ingress-controller">Install NGINX Ingress Controller</h3>
<p>This is required for routing external traffic to your services inside Kubernetes:</p>
<pre><code class="lang-bash">kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744913512195/17869eea-859e-4c39-993e-6b4116332e9c.png" alt /></p>
<h3 id="heading-install-cert-manager-for-tls-certificates">Install Cert-Manager (for TLS Certificates)</h3>
<p>Cert-manager helps you manage SSL certificates inside Kubernetes:</p>
<pre><code class="lang-bash">kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744913589036/acefde67-f584-4eaa-b135-64ded8b2e09b.png" alt /></p>
<h2 id="heading-configure-rbac-role-based-access-control">🔐 Configure RBAC (Role-Based Access Control)</h2>
<p>To manage access control and permissions properly in your Kubernetes cluster, we’ll start by creating a dedicated namespace and then apply RBAC policies.</p>
<ul>
<li><p>Create a namespace with name <code>webapps</code></p>
<pre><code class="lang-bash">  kubectl create ns webapps
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744913855969/3066b4bb-cdc2-4570-9102-1ea4325b9c50.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create a Service Account</p>
<pre><code class="lang-bash">  vim service-account.yaml
</code></pre>
<blockquote>
<p>Add the following YAML and save the file:</p>
</blockquote>
<pre><code class="lang-bash">  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: jenkins
    namespace: webapps
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f service-account.yaml
</code></pre>
</li>
<li><p>Create a Role</p>
<pre><code class="lang-bash">  vim role.yaml
</code></pre>
<blockquote>
<p>Add the following YAML and save the file:</p>
</blockquote>
<pre><code class="lang-bash">  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: jenkins-role
    namespace: webapps
  rules:
    - apiGroups:
          - <span class="hljs-string">""</span>
          - apps
          - networking.k8s.io
          - autoscaling
      resources:
        - secrets
        - configmaps
        - persistentvolumeclaims
        - services
        - pods
        - deployments
        - replicasets
        - ingresses
        - horizontalpodautoscalers
      verbs: [<span class="hljs-string">"get"</span>, <span class="hljs-string">"list"</span>, <span class="hljs-string">"watch"</span>, <span class="hljs-string">"create"</span>, <span class="hljs-string">"update"</span>, <span class="hljs-string">"patch"</span>, <span class="hljs-string">"delete"</span>]
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f role.yaml
</code></pre>
</li>
<li><p><strong>Bind the role to service account</strong></p>
<pre><code class="lang-bash">  vim role-binding.yaml
</code></pre>
<blockquote>
<p>Add the following YAML and save the file:</p>
</blockquote>
<pre><code class="lang-bash">  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: jenkins-rolebinding
    namespace: webapps 
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: jenkins-role 
  subjects:
  - namespace: webapps 
    kind: ServiceAccount
    name: jenkins
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f role-binding.yaml
</code></pre>
</li>
<li><p><strong>Create Cluster role</strong></p>
<pre><code class="lang-bash">  vim cluster-role.yaml
</code></pre>
<blockquote>
<p>Add the following YAML and save the file:</p>
</blockquote>
<pre><code class="lang-bash">  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: jenkins-cluster-role
  rules:
  - apiGroups: [<span class="hljs-string">""</span>]
    resources: 
       - persistentvolumes
    verbs: [<span class="hljs-string">"get"</span>, <span class="hljs-string">"list"</span>, <span class="hljs-string">"watch"</span>, <span class="hljs-string">"create"</span>, <span class="hljs-string">"update"</span>, <span class="hljs-string">"patch"</span>, <span class="hljs-string">"delete"</span>]
  - apiGroups: [<span class="hljs-string">"storage.k8s.io"</span>]
    resources: 
       - storageclasses
    verbs: [<span class="hljs-string">"get"</span>, <span class="hljs-string">"list"</span>, <span class="hljs-string">"watch"</span>, <span class="hljs-string">"create"</span>, <span class="hljs-string">"update"</span>, <span class="hljs-string">"patch"</span>, <span class="hljs-string">"delete"</span>]
  - apiGroups: [<span class="hljs-string">"cert-manager.io"</span>]
    resources: 
       - clusterissuers
    verbs: [<span class="hljs-string">"get"</span>, <span class="hljs-string">"list"</span>, <span class="hljs-string">"watch"</span>, <span class="hljs-string">"create"</span>, <span class="hljs-string">"update"</span>, <span class="hljs-string">"patch"</span>, <span class="hljs-string">"delete"</span>]
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f cluster-role.yaml
</code></pre>
</li>
<li><p><strong>Bind cluster role to Service Account</strong></p>
<pre><code class="lang-bash">  vim cluster-role-binding.yaml
</code></pre>
<blockquote>
<p>Add the following YAML and save the file:</p>
</blockquote>
<pre><code class="lang-bash">  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: jenkins-cluster-rolebinding
  subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: webapps
  roleRef:
    kind: ClusterRole
    name: jenkins-cluster-role
    apiGroup: rbac.authorization.k8s.io
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f cluster-role-binding.yaml
</code></pre>
</li>
</ul>
<hr />
<h2 id="heading-grant-jenkins-access-to-kubernetes-for-deployments">Grant Jenkins Access to Kubernetes for Deployments</h2>
<p>To allow Jenkins to deploy applications to your EKS cluster, we need to create a service account token and configure it in Jenkins.</p>
<h3 id="heading-create-a-kubernetes-token-for-jenkins">Create a Kubernetes Token for Jenkins</h3>
<p>Create a secret token that Jenkins will use to authenticate with your Kubernetes cluster.</p>
<p>Run the following command to create and open a token manifest:</p>
<pre><code class="lang-bash">vim token.yaml
</code></pre>
<p>Paste the following YAML content into the file:</p>
<pre><code class="lang-bash">apiVersion: v1
kind: Secret
<span class="hljs-built_in">type</span>: kubernetes.io/service-account-token
metadata:
  name: jenkins-secret
  annotations:
    kubernetes.io/service-account.name: jenkins
</code></pre>
<p>Apply the secret to the <code>webapps</code> namespace:</p>
<pre><code class="lang-bash">kubectl apply -f token.yaml -n webapps
</code></pre>
<p>Now, retrieve the token using:</p>
<pre><code class="lang-bash">kubectl describe secret jenkins-secret -n webapps | grep token
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744916754003/ec1dd809-e1d8-4e1f-ab69-4ca1e2686230.png" alt /></p>
<blockquote>
<p>Copy the generated token.</p>
</blockquote>
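<p>If you’d rather grab the raw token without sifting through the <code>describe</code> output, the same value can be pulled with <code>jsonpath</code> (the token is stored base64-encoded in the secret):</p>
<pre><code class="lang-bash">kubectl get secret jenkins-secret -n webapps -o jsonpath='{.data.token}' | base64 -d
</code></pre>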
<h3 id="heading-add-kubernetes-token-to-jenkins">Add Kubernetes Token to Jenkins</h3>
<ol>
<li><p>Go to your <strong>Jenkins Dashboard → Manage Jenkins → Credentials</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744924174844/265a0cc1-aa2f-4c04-890e-14974f9c4721.png" alt /></p>
</li>
<li><p>Select the <strong>(global)</strong> domain and click <strong>Add Credentials</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744924229626/243137cb-eefc-4048-bc8a-48b59c628884.png" alt /></p>
</li>
<li><p>Fill in the fields as follows:</p>
<ul>
<li><p><strong>Kind</strong>: Secret text</p>
</li>
<li><p><strong>Secret</strong>: <em>Paste the copied Kubernetes token</em></p>
</li>
<li><p><strong>ID</strong>: <code>k8s-cred</code></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744924468086/6e3e8fdf-c83d-411d-b977-afaa27d53510.png" alt /></p>
<ol start="4">
<li><p>Click <strong>Create</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744924504795/c1c71591-b71f-4128-a244-f97fc3c9b21f.png" alt /></p>
</li>
</ol>
<h3 id="heading-install-kubectl-on-jenkins-server">Install <code>kubectl</code> on Jenkins Server</h3>
<p>To let Jenkins execute Kubernetes commands, install <code>kubectl</code> on the Jenkins machine:</p>
<pre><code class="lang-bash">curl -LO <span class="hljs-string">"https://dl.k8s.io/release/<span class="hljs-subst">$(curl -L -s https://dl.k8s.io/release/stable.txt)</span>/bin/linux/amd64/kubectl"</span>

curl -LO <span class="hljs-string">"https://dl.k8s.io/release/<span class="hljs-subst">$(curl -L -s https://dl.k8s.io/release/stable.txt)</span>/bin/linux/amd64/kubectl.sha256"</span>

sudo install -o root -g root -m 0755 kubectl /usr/<span class="hljs-built_in">local</span>/bin/kubectl
</code></pre>
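<p>The commands above download a <code>kubectl.sha256</code> checksum file but never use it. Verifying the downloaded binary is a one-liner (this mirrors the check suggested in the official Kubernetes install docs):</p>
<pre><code class="lang-bash">echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
# expected output: kubectl: OK
</code></pre>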
<p>Verify installation:</p>
<pre><code class="lang-bash">kubectl version --client
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744950570912/7a556995-cd39-42e2-a63e-ad27dbe38a05.png" alt /></p>
<h2 id="heading-setting-up-the-cd-pipeline-in-jenkins">Setting Up the CD Pipeline in Jenkins</h2>
<p>Now that CI is complete, let’s configure the <strong>Capstone Continuous Deployment (CD) Pipeline</strong> to automatically deploy your application to the Kubernetes cluster.</p>
<h3 id="heading-step-1-create-a-new-pipeline-job">Step 1: Create a New Pipeline Job</h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard → New Item</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744917026877/a75bbe03-4c00-4112-94b1-c726f70614fb.png" alt /></p>
</li>
<li><p>Provide the following details:</p>
<ul>
<li><p><strong>Name</strong>: <code>capstone_CD</code></p>
</li>
<li><p><strong>Item Type</strong>: Select <strong>Pipeline</strong></p>
</li>
</ul>
</li>
<li><p>Click <strong>OK</strong></p>
</li>
</ol>
<h3 id="heading-step-2-configure-build-retention">Step 2: Configure Build Retention</h3>
<ol>
<li><p>Check the box <strong>"Discard old builds"</strong></p>
</li>
<li><p>Set <strong>Max # of builds to keep</strong> to <code>3</code></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744917280351/388f116c-a23c-40f7-88d9-8f68b778c6f3.png" alt /></p>
<blockquote>
<p>This helps conserve resources by keeping only the latest builds.</p>
</blockquote>
</li>
</ol>
<h3 id="heading-step-3-add-the-deployment-pipeline-script">Step 3: Add the Deployment Pipeline Script</h3>
<p>Scroll down to the <strong>Pipeline</strong> section and paste the following script:</p>
<pre><code class="lang-groovy">pipeline {
    agent any

    stages {
        stage(<span class="hljs-string">'Git Checkout'</span>) {
            steps {
                git branch: <span class="hljs-string">'main'</span>, url: <span class="hljs-string">'https://github.com/praduman8435/Capstone-Mega-CD-Pipeline.git'</span>
            }  
        }

        stage(<span class="hljs-string">'Deploy to Kubernetes'</span>) {
            steps {
                withKubeConfig(
                    credentialsId: <span class="hljs-string">'k8s-cred'</span>,
                    clusterName: <span class="hljs-string">'capstone-cluster'</span>,
                    namespace: <span class="hljs-string">'webapps'</span>,
                    restrictKubeConfigAccess: <span class="hljs-literal">false</span>,
                    serverUrl: <span class="hljs-string">'https://D133D06C5103AE18A950F2047A8EB7DE.gr7.us-east-1.eks.amazonaws.com'</span>
                ) {
                    sh <span class="hljs-string">'kubectl apply -f kubernetes/Manifest.yaml -n webapps'</span>
                    sh <span class="hljs-string">'kubectl apply -f kubernetes/HPA.yaml'</span>
                    sleep 30
                    sh <span class="hljs-string">'kubectl get pods -n webapps'</span>
                    sh <span class="hljs-string">'kubectl get svc -n webapps'</span>
                }
            }  
        }
    }

    post {
        always {
            <span class="hljs-built_in">echo</span> <span class="hljs-string">"Pipeline execution completed."</span>
        }
    }
}
</code></pre>
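<p>One refinement worth considering: a fixed <code>sleep 30</code> doesn’t guarantee the rollout has finished. <code>kubectl rollout status</code> blocks until the Deployment is actually ready. A sketch of the replacement step — substitute the real Deployment name from your <code>Manifest.yaml</code>; <code>bankapp</code> here is a placeholder:</p>
<pre><code class="lang-groovy">sh 'kubectl rollout status deployment/bankapp -n webapps --timeout=120s'
</code></pre>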
<h3 id="heading-step-4-save-amp-run">Step 4: Save &amp; Run</h3>
<ol>
<li><p>Click <strong>Save</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744952075752/11444311-6548-4ba1-86f6-52376e4d9f46.png" alt /></p>
</li>
<li><p>Click <strong>Build Now</strong> to trigger the pipeline</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744952143513/fd3fa197-5e2a-4bb6-a3e9-b07f8c26966c.png" alt /></p>
</li>
</ol>
<p>If everything is set up correctly, Jenkins will pull the deployment manifests from the CD GitHub repository and deploy your app to the <code>webapps</code> namespace in the EKS cluster.</p>
<h2 id="heading-verifying-kubernetes-resources-amp-enabling-https-with-custom-domain">Verifying Kubernetes Resources &amp; Enabling HTTPS with Custom Domain</h2>
<p>After setting up the CI/CD pipeline, it’s time to make sure everything is working perfectly and your application is accessible securely over HTTPS with a custom domain.</p>
<h3 id="heading-step-1-verify-all-resources-in-the-cluster">Step 1: Verify All Resources in the Cluster</h3>
<p>On your <strong>Infra Server</strong>, run:</p>
<pre><code class="lang-bash">kubectl get all -n webapps
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744952443217/35c9017d-e2e0-4027-a3c1-bb5ef9a42465.png" alt /></p>
<blockquote>
<p>You should see all your application pods, services, and other resources running successfully. If everything looks good, proceed to the next step.</p>
</blockquote>
<h3 id="heading-step-2-create-a-clusterissuer-resource-for-lets-encrypt">Step 2: Create a ClusterIssuer Resource for Let’s Encrypt</h3>
<p>We'll use Cert-Manager to automatically provision SSL certificates from Let’s Encrypt.</p>
<ol>
<li>Create a file called <code>cluster-issuer.yaml</code>:</li>
</ol>
<pre><code class="lang-bash">vim cluster-issuer.yaml
</code></pre>
<ol start="2">
<li>Paste the following configuration:</li>
</ol>
<pre><code class="lang-yaml">apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: praduman.cnd@gmail.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
</code></pre>
<ol start="3">
<li>Apply the ClusterIssuer:</li>
</ol>
<pre><code class="lang-bash">kubectl apply -f cluster-issuer.yaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744952724486/ea4fc5cf-4108-4df7-b9fd-92496079c00a.png" alt /></p>
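<p>You can confirm the issuer is ready to serve certificate requests before moving on (the <code>READY</code> column should show <code>True</code>):</p>
<pre><code class="lang-bash">kubectl get clusterissuer letsencrypt-prod
kubectl describe clusterissuer letsencrypt-prod
</code></pre>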
<h3 id="heading-step-3-create-an-ingress-resource-for-your-application">Step 3: Create an Ingress Resource for Your Application</h3>
<ol>
<li>Create an Ingress configuration file:</li>
</ol>
<pre><code class="lang-bash">vim ingress.yaml
</code></pre>
<ol start="2">
<li>Paste the following content:</li>
</ol>
<pre><code class="lang-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bankapp-ingress
  namespace: webapps
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/force-ssl-redirect: <span class="hljs-string">"true"</span>
    nginx.ingress.kubernetes.io/ssl-redirect: <span class="hljs-string">"true"</span>
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - www.capstonebankapp.in
    secretName: bankapp-tls-secret
  rules:
  - host: www.capstonebankapp.in
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: bankapp-service
            port:
              number: 80
</code></pre>
<ol start="3">
<li>Apply the ingress resource:</li>
</ol>
<pre><code class="lang-bash">kubectl apply -f ingress.yaml
</code></pre>
<ol start="4">
<li>Check the status:</li>
</ol>
<pre><code class="lang-bash">kubectl get ing -n webapps
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744960369982/e07abddb-1114-4758-b049-1ef3189b4598.png" alt /></p>
<p>Wait for a few moments, then run the command again. You’ll notice an <strong>external load balancer address</strong> (usually an AWS ELB) under the <code>ADDRESS</code> column.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744960526326/d319d2ae-3f25-4bc6-8e68-bef581e40d1c.png" alt /></p>
<blockquote>
<p>Copy the load balancer address of the bankapp ingress.</p>
</blockquote>
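<p>Behind the scenes, Cert-Manager still has to complete the HTTP-01 challenge before TLS will work. You can watch the certificate being issued (when created via the Ingress annotation, cert-manager typically names the Certificate after the <code>secretName</code>, so it should appear as <code>bankapp-tls-secret</code>):</p>
<pre><code class="lang-bash">kubectl get certificate -n webapps
kubectl describe certificate bankapp-tls-secret -n webapps
</code></pre>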
<h3 id="heading-step-4-configure-your-custom-domain-on-godaddy">Step 4: Configure Your Custom Domain on GoDaddy</h3>
<ol>
<li><p>Log in to your GoDaddy account.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744960762981/860c479c-9718-405f-8e1a-445b94d2955f.png" alt /></p>
</li>
<li><p>Navigate to <strong>My Products → DNS Settings</strong> for your domain (<code>capstonebankapp.in</code>).</p>
</li>
<li><p>Look for a CNAME record with name <code>www</code> and edit it.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744960835376/c333a035-0f5c-45be-95d0-d7e123f1f34a.png" alt /></p>
</li>
<li><p>In the <strong>Value</strong> field, paste the <strong>Load Balancer URL</strong> you got from the Ingress.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744960911729/75936c37-4ecf-403a-a13f-95b7bf47b79a.png" alt /></p>
</li>
<li><p>Save the changes.</p>
</li>
</ol>
<p>⏳ Wait a few minutes for the DNS changes to propagate.</p>
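<p>You can confirm propagation from your terminal instead of refreshing the browser; the CNAME should resolve to the ELB hostname you pasted into GoDaddy:</p>
<pre><code class="lang-bash">dig +short www.capstonebankapp.in CNAME
nslookup www.capstonebankapp.in
</code></pre>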
<h3 id="heading-step-5-access-your-application">Step 5: Access Your Application</h3>
<p>Open your browser and visit:</p>
<pre><code class="lang-bash">https://www.capstonebankapp.in/login
</code></pre>
<p>If everything was configured correctly, your application will now load securely over HTTPS with a valid Let’s Encrypt certificate.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744963549435/29390c9b-125f-43a7-8619-a34833ab2015.png" alt /></p>
<hr />
<h1 id="heading-setup-monitoring-with-prometheus-amp-grafana-on-eks">Setup Monitoring with Prometheus &amp; Grafana on EKS</h1>
<p>After deploying your application, it's essential to monitor its health, performance, and resource usage. Let’s integrate <strong>Prometheus</strong> and <strong>Grafana</strong> into your Kubernetes cluster using Helm.</p>
<h2 id="heading-add-prometheus-helm-repo">Add Prometheus Helm Repo</h2>
<p>On your <strong>Infra Server</strong>, add the official Prometheus Community Helm chart repository:</p>
<pre><code class="lang-bash">helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744973704718/1909044a-9f3b-4618-83c8-ab67aa2f56d7.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744973734881/8fab6876-bb21-4b1a-be34-5923bdad7c4d.png" alt /></p>
<h2 id="heading-create-valuesyaml-for-custom-configuration">Create <code>values.yaml</code> for Custom Configuration</h2>
<p>We'll define how Prometheus and Grafana should be deployed, what metrics to scrape, and how to expose the services.</p>
<ol>
<li>Create a file called <code>values.yaml</code>:</li>
</ol>
<pre><code class="lang-bash">vi values.yaml
</code></pre>
<ol start="2">
<li>Paste the following configuration:</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-comment"># values.yaml for kube-prometheus-stack</span>

alertmanager:
  enabled: <span class="hljs-literal">false</span>

prometheus:
  prometheusSpec:
    service:
      <span class="hljs-built_in">type</span>: LoadBalancer
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
  additionalScrapeConfigs:
    - job_name: node-exporter
      static_configs:
        - targets:
            - node-exporter:9100
    - job_name: kube-state-metrics
      static_configs:
        - targets:
            - kube-state-metrics:8080

grafana:
  enabled: <span class="hljs-literal">true</span>
  service:
    <span class="hljs-built_in">type</span>: LoadBalancer
  adminUser: admin
  adminPassword: admin123

prometheus-node-exporter:
  service:
    <span class="hljs-built_in">type</span>: LoadBalancer

kube-state-metrics:
  enabled: <span class="hljs-literal">true</span>
  service:
    <span class="hljs-built_in">type</span>: LoadBalancer
</code></pre>
<blockquote>
<p>Save and exit the file.</p>
</blockquote>
<h2 id="heading-install-monitoring-stack-with-helm">Install Monitoring Stack with Helm</h2>
<pre><code class="lang-bash">helm upgrade --install monitoring prometheus-community/kube-prometheus-stack -f values.yaml -n monitoring --create-namespace
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744975356275/ef8dc7b7-2407-4a6c-81d2-cf4dc7f1a0c3.png" alt /></p>
<h2 id="heading-patch-services-to-use-loadbalancer">Patch Services to Use LoadBalancer</h2>
<p>(Optional if already configured in <code>values.yaml</code>, but ensures services are exposed)</p>
<pre><code class="lang-bash">kubectl patch svc monitoring-kube-prometheus-prometheus -n monitoring -p <span class="hljs-string">'{"spec": {"type": "LoadBalancer"}}'</span>
kubectl patch svc monitoring-kube-state-metrics -n monitoring -p <span class="hljs-string">'{"spec": {"type": "LoadBalancer"}}'</span>
kubectl patch svc monitoring-prometheus-node-exporter -n monitoring -p <span class="hljs-string">'{"spec": {"type": "LoadBalancer"}}'</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744975557472/dede09c6-7816-4217-b1a3-6265a97f0344.png" alt /></p>
<h2 id="heading-check-services-amp-access-grafana">Check Services &amp; Access Grafana</h2>
<p>Get all resources in the monitoring namespace:</p>
<pre><code class="lang-bash">kubectl get all -n monitoring
kubectl get svc -n monitoring
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744975708752/11305005-7dc0-4a33-a084-2e5b2bdf9f01.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744975836639/4f397c49-cf49-4b8a-9d52-6064ad34e1b7.png" alt /></p>
<p>You’ll find External IPs assigned to services like Grafana and Prometheus.</p>
<h3 id="heading-access-grafana">➤ Access Grafana</h3>
<ul>
<li><p>URL: <code>http://&lt;grafana-external-ip&gt;</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744975976485/c1b4fb6c-382a-47d3-8442-35c57ce4a247.png" alt /></p>
</li>
<li><p>Username: <code>admin</code></p>
</li>
<li><p>Password: <code>admin123</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744976059132/211b2c1b-07e5-4d8e-86ff-e1db6f3e4b64.png" alt /></p>
<h3 id="heading-access-prometheus">➤ Access Prometheus</h3>
<ul>
<li><p>URL: <code>http://&lt;prometheus-external-ip&gt;:9090</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744978076598/240b7529-b6f7-402d-b010-039a7ae45ced.png" alt /></p>
</li>
<li><p>Go to <strong>Status → Targets</strong> to see what’s being monitored.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744978214058/6f29b042-cae7-44a1-9aac-740509dd0909.png" alt /></p>
<h2 id="heading-configure-grafana-dashboard">Configure Grafana Dashboard</h2>
<ol>
<li><p>Open the <strong>Grafana dashboard</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744978329635/e5eb4bfb-6009-429a-9aa8-4cfbb292ea3b.png" alt /></p>
</li>
<li><p>Go to <strong>Connections → Data Sources → Add new</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744978389105/3891056b-f228-44fc-a865-ff13d6447c53.png" alt /></p>
</li>
<li><p>Search for <code>Prometheus</code> and select it.</p>
</li>
<li><p>In the <strong>URL</strong>, enter your Prometheus service URL (e.g., <code>http://&lt;prometheus-external-ip&gt;:9090</code>).</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744978859408/515f84ec-6505-4810-8a48-0bba6e09c723.png" alt /></p>
</li>
<li><p>Click <strong>Save &amp; Test</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744978958889/75c6b03b-401a-4194-b66b-f35f0ef23ab1.png" alt /></p>
</li>
</ol>
<h3 id="heading-view-dashboards">🎯 View Dashboards</h3>
<ul>
<li><p>Go to <strong>Dashboards → Browse</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744978574602/c03660c7-cd40-480e-974e-f76cbd7b09d5.png" alt /></p>
</li>
<li><p>Explore default dashboards for Node Exporter, Kubernetes metrics, and more.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744979215711/4b95046a-79f5-4835-b694-cb73cc349367.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744979265865/882dcf32-8743-4099-9750-a2a007881441.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744979573728/07283cc2-4153-4295-9ec3-5308b23c2751.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744979741225/a5b60ff3-9efd-42df-8d6d-7e98f8f261f6.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744979856690/8be88e51-f593-49a5-8039-16e085e8db7b.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Now you have real-time observability into your EKS cluster!</p>
</blockquote>
<hr />
<h1 id="heading-conclusion">✅ Conclusion</h1>
<p>In this project, I've successfully built an <strong>enterprise-grade CI/CD pipeline</strong> from scratch using <strong>Jenkins, Kubernetes (EKS), Docker, GitHub, and other DevOps tools</strong>, all running on AWS.</p>
<p>I automated the entire workflow:</p>
<ul>
<li><p>From building and pushing Docker images in the CI pipeline</p>
</li>
<li><p>To continuously deploying them into a secure, production-ready Kubernetes cluster via CD</p>
</li>
<li><p>With proper ingress routing, TLS certificates, and monitoring in place using <strong>Prometheus</strong> and <strong>Grafana</strong></p>
</li>
</ul>
<p>This setup demonstrates how modern DevOps practices can streamline software delivery and infrastructure management. It not only improves deployment speed but also ensures reliability, scalability, and observability of applications.</p>
<blockquote>
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Ultimate Corporate-Grade DevSecOps Pipeline: Automated Kubernetes Deployment with Security, CI/CD, and Monitoring on AWS]]></title><description><![CDATA[Introduction 🚀
In today’s fast-paced software development world, automation is the key to delivering high-quality applications efficiently. That’s why enterprises need a robust, scalable, and secure CI/CD pipeline to streamline development, testing,...]]></description><link>https://blogs.praduman.site/ultimate-corporate-grade-devsecops-pipeline</link><guid isPermaLink="true">https://blogs.praduman.site/ultimate-corporate-grade-devsecops-pipeline</guid><category><![CDATA[DevSecOps]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[trivy]]></category><category><![CDATA[sonarqube]]></category><category><![CDATA[#prometheus]]></category><category><![CDATA[Grafana]]></category><category><![CDATA[observability]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Tue, 25 Mar 2025 07:02:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742906695811/4ac844cc-6ad2-4c42-ba01-5baf17868e2d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction"><strong>Introduction</strong> 🚀</h1>
<p>In today’s fast-paced software development world, <strong>automation is the key</strong> to delivering high-quality applications efficiently. That’s why enterprises need a <strong>robust, scalable, and secure CI/CD pipeline</strong> to streamline development, testing, and deployment.</p>
<p>This project is all about building the <strong>Ultimate Corporate-Grade CI/CD Pipeline</strong>—a fully automated, production-ready pipeline that follows <strong>DevSecOps best practices</strong> and is designed to work with <strong>Kubernetes on AWS</strong>.</p>
<h3 id="heading-what-this-pipeline-covers"><strong>What Does This Pipeline Cover?</strong></h3>
<p>✅ <strong>Kubernetes Cluster Setup</strong> – Using <strong>kubeadm</strong> to configure a <strong>highly available Kubernetes cluster</strong> on AWS EC2.<br />✅ <strong>Code Integration &amp; Testing</strong> – Automating builds, tests, and security scans.<br />✅ <strong>Containerization &amp; Orchestration</strong> – Using <strong>Docker</strong> and <strong>Kubernetes (EKS)</strong> for scalability.<br />✅ <strong>Automated Deployments</strong> – Implementing GitOps with <strong>ArgoCD</strong> for zero-downtime releases.<br />✅ <strong>Security &amp; Compliance</strong> – Integrating <strong>Trivy, SonarQube, and Prometheus</strong> for DevSecOps.<br />✅ <strong>Observability &amp; Monitoring</strong> – Setting up <strong>Grafana &amp; Prometheus</strong> for real-time insights.</p>
<h3 id="heading-why-this-matters"><strong>Why Does This Matter?</strong></h3>
<p>🔹 <strong>Enterprise-Grade Reliability</strong> – Built for scalability, high availability, and security.<br />🔹 <strong>Faster Releases</strong> – Automates everything from code commit to production deployment.<br />🔹 <strong>Cost-Effective</strong> – Optimized for AWS, reducing operational overhead.</p>
<p>By the end of this project, we’ll have a <strong>bulletproof CI/CD pipeline</strong>, complete with a <strong>Kubernetes cluster set up using kubeadm</strong>, ensuring a <strong>fully automated and secure software delivery process</strong>. Let’s dive in! 🚀</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742906591520/4d64da50-667a-4c42-a074-d7a0970ea3cb.jpeg" alt class="image--center mx-auto" /></p>
<h1 id="heading-source-code-and-project-repository"><strong>Source Code and Project Repository</strong> 📌</h1>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/praduman8435/Boardgame">https://github.com/praduman8435/Boardgame</a></div>
<p> </p>
<hr />
<h1 id="heading-setting-up-your-kubernetes-cluster-with-kubeadm">🚀 Setting Up Your Kubernetes Cluster with Kubeadm</h1>
<h2 id="heading-configure-aws-security-group">🔒 Configure AWS Security Group</h2>
<p>Before deploying Kubernetes on AWS, you need to configure <strong>security groups</strong> to control inbound and outbound traffic. <strong>Security groups act as a firewall</strong>, ensuring that only necessary connections are allowed while keeping your cluster protected.</p>
<p>You can either create a new security group or modify an existing one with the <strong>essential rules</strong> listed below:</p>
<h3 id="heading-essential-security-group-rules-for-kubernetes">📌 Essential Security Group Rules for Kubernetes</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Port(s)</strong></td><td><strong>Purpose</strong></td><td><strong>Use Case</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>30000-32767</strong></td><td>NodePort Access</td><td>Expose apps without Ingress</td></tr>
<tr>
<td><strong>465, 25</strong></td><td>SMTP (Secure &amp; Standard)</td><td>Email notifications</td></tr>
<tr>
<td><strong>22</strong> (Use with caution)</td><td>SSH Access</td><td>Remote troubleshooting</td></tr>
<tr>
<td><strong>443, 80</strong></td><td>HTTPS &amp; HTTP</td><td>Secure and standard web traffic</td></tr>
<tr>
<td><strong>6443</strong></td><td>Kubernetes API Server</td><td>kubectl, CI/CD tools, ArgoCD</td></tr>
<tr>
<td><strong>10250-10259</strong></td><td>Internal Kubernetes Communication</td><td>Control plane ↔ Worker nodes</td></tr>
<tr>
<td><strong>2379-2380</strong></td><td>etcd Communication</td><td>Stores cluster data</td></tr>
<tr>
<td><strong>3000-10000</strong></td><td>App-Specific Traffic</td><td>Databases, Prometheus, Grafana</td></tr>
<tr>
<td><strong>53</strong></td><td>Cluster DNS (CoreDNS)</td><td>Internal service discovery</td></tr>
<tr>
<td><strong>8285, 8472</strong></td><td>Flannel (UDP/VXLAN)</td><td>Pod overlay networking</td></tr>
<tr>
<td><strong>179, 4789, 5473</strong></td><td>Calico (BGP/VXLAN/Typha)</td><td>Pod overlay networking</td></tr>
<tr>
<td><strong>6783-6784</strong></td><td>Weave Net</td><td>Enables pod communication</td></tr>
</tbody>
</table>
</div><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740396455475/6a3ae29c-f8bb-4706-81be-0c336d5a5787.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-best-practices">✅ Best Practices</h4>
<ul>
<li><p><strong>Follow Least Privilege:</strong> Only open the ports your cluster needs.</p>
</li>
<li><p><strong>Restrict SSH Access:</strong> Limit port <code>22</code> to specific IPs to enhance security.</p>
</li>
<li><p><strong>Secure API Server:</strong> Only allow trusted IPs for port <code>6443</code> to prevent unauthorized access.</p>
</li>
</ul>
<p>Once your <strong>security groups are configured</strong>, you're now ready to <strong>initialize your Kubernetes cluster using Kubeadm!</strong> 🎯</p>
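<p>If you prefer the CLI over the console, the same rules can be added with <code>aws ec2 authorize-security-group-ingress</code>. A sketch — <code>sg-0123456789abcdef0</code> and <code>203.0.113.10/32</code> are placeholders for your own security group ID and admin IP:</p>
<pre><code class="lang-bash"># Kubernetes API server, restricted to a single trusted IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 6443 \
  --cidr 203.0.113.10/32

# NodePort range, open to the world (tighten this in production)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 30000-32767 \
  --cidr 0.0.0.0/0
</code></pre>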
<h2 id="heading-create-virtual-machines-ec2-instances-for-the-cluster">🖥️ Create Virtual Machines (EC2 Instances) for the Cluster</h2>
<p>To set up a <strong>Kubernetes cluster on AWS</strong>, you need to create <strong>three EC2 instances</strong>:</p>
<ul>
<li><p><strong>One Master Node (Control Plane):</strong> Manages the cluster, schedules workloads, and maintains the desired state.</p>
</li>
<li><p><strong>Two Worker Nodes:</strong> Run application workloads and handle deployments.</p>
</li>
</ul>
<h3 id="heading-step-1-launch-ec2-instances">🔧 Step 1: Launch EC2 Instances</h3>
<ol>
<li><p>Open the <strong>AWS Console</strong> and navigate to the <strong>EC2 Dashboard</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742063352982/d0607c70-0ab1-494b-996d-d9b2432ce915.png" alt /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742186530563/ed9a05ec-effd-476c-ba2e-6a3adc07adce.png" alt /></p>
</li>
<li><p>Click <strong>Launch Instance</strong> to create virtual machines.</p>
</li>
</ol>
<h3 id="heading-step-2-configure-the-instances">⚙️ Step 2: Configure the Instances</h3>
<ol>
<li><p><strong>Set the number of instances</strong> → <code>3</code> (1 Master + 2 Workers).</p>
</li>
<li><p><strong>Choose Amazon Machine Image (AMI)</strong> → <strong>Latest Ubuntu AMI</strong> (recommended for Kubernetes).</p>
</li>
<li><p><strong>Select an instance type →</strong> <code>t2.xlarge</code></p>
</li>
<li><p><strong>Assign Security Group</strong> → Use the <strong>previously configured security group</strong> to allow necessary traffic.</p>
</li>
<li><p><strong>Set Storage</strong> → <strong>Minimum 30GB</strong> (for Kubernetes components, logs, and workloads).</p>
</li>
<li><p>Click <strong>Launch Instance</strong> to provision the virtual machines.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742186871770/a61c9994-a5a5-4b63-95a0-186c4dcbd41e.png" alt /></p>
</li>
</ol>
<h3 id="heading-step-3-naming-the-instances">🏷️ Step 3: Naming the Instances</h3>
<p>For easy identification, <strong>name your instances</strong> as follows:</p>
<ul>
<li><p><strong>🖥️ ControlPlane</strong> → Master Node</p>
</li>
<li><p><strong>📦 WorkerNode1</strong> → First Worker Node</p>
</li>
<li><p><strong>📦 WorkerNode2</strong> → Second Worker Node</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742186988982/0d7b395f-c8f4-462e-8814-c22b26882f3a.png" alt /></p>
<h2 id="heading-connecting-to-ec2-instances-via-ssh">Connecting to EC2 Instances via SSH</h2>
<p>Once your EC2 instances are running, connect to them using <strong>SSH</strong>:</p>
<blockquote>
<h4 id="heading-connect-to-the-control-plane-node">Connect to the Control Plane Node</h4>
</blockquote>
<pre><code class="lang-bash">ssh -i &lt;path-to-pem-file&gt; ubuntu@&lt;public-ip-of-controlplane&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742189219214/3f71a1e8-794b-42a6-9f29-a71ffb66d31f.png" alt /></p>
<blockquote>
<h4 id="heading-connect-to-worker-nodes">Connect to Worker Nodes</h4>
</blockquote>
<pre><code class="lang-bash">ssh -i &lt;path-to-pem-file&gt; ubuntu@&lt;public-ip-of-workernode1&gt;
ssh -i &lt;path-to-pem-file&gt; ubuntu@&lt;public-ip-of-workernode2&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742189419482/280cba40-5a87-46b3-b27d-efd76ba9a6bc.png" alt /></p>
<p><strong>🔹 Replace:</strong></p>
<ul>
<li><p><code>&lt;path-to-pem-file&gt;</code> → <strong>Your private key file path</strong></p>
</li>
<li><p><code>&lt;public-ip&gt;</code> → <strong>The public IP address</strong> of each instance</p>
</li>
</ul>
<blockquote>
<p>Once your instances are up and running, you're ready to <strong>initialize the Kubernetes cluster with Kubeadm!</strong> 🚀</p>
</blockquote>
<h2 id="heading-updating-and-installing-kubernetes-on-all-nodes">🔧 Updating and Installing Kubernetes on All Nodes</h2>
<p>Before initializing your Kubernetes cluster, it's crucial to update your virtual machines, configure system settings, and install essential dependencies. This ensures a <strong>stable, secure, and well-optimized</strong> Kubernetes environment across all nodes.</p>
<h3 id="heading-step-1-disable-swap"><strong>Step 1: Disable Swap</strong></h3>
<p>Kubernetes requires <strong>swap to be disabled</strong>: by default, the kubelet refuses to start while swap is on, since swap interferes with its memory accounting. Run the following commands <strong>on each node</strong>:</p>
<pre><code class="lang-bash">sudo swapoff -a
sudo sed -i <span class="hljs-string">'/ swap / s/^\(.*\)$/#\1/g'</span> /etc/fstab
</code></pre>
<blockquote>
<p><strong>This disables swap immediately and ensures it remains disabled after a reboot.</strong></p>
</blockquote>
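<p>If the <code>sed</code> rule looks cryptic, you can preview exactly what it does on a scratch copy instead of the real <code>/etc/fstab</code> (the file contents below are just illustrative):</p>

```shell
# Scratch copy with one swap entry and one normal mount.
printf '/swapfile none swap sw 0 0\nUUID=abcd / ext4 defaults 0 1\n' > /tmp/fstab.demo
# The rule comments out any line containing " swap " and leaves the rest alone.
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo
```

The swap line comes back prefixed with <code>#</code>, while the root filesystem line is untouched, which is why swap stays off after a reboot.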
<h3 id="heading-step-2-enable-ip-forwarding-and-allow-bridged-traffic"><strong>Step 2: Enable IP Forwarding and Allow Bridged Traffic</strong></h3>
<p>To enable proper <strong>Kubernetes networking</strong>, configure IP forwarding and iptables:</p>
<pre><code class="lang-bash">cat &lt;&lt;EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
</code></pre>
<p>Now, set up required <strong>system parameters</strong> and apply them:</p>
<pre><code class="lang-bash">cat &lt;&lt;EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
</code></pre>
<h3 id="heading-verification">✅ <strong>Verification</strong></h3>
<p>Ensure the necessary modules are loaded</p>
<pre><code class="lang-bash">lsmod | grep br_netfilter
lsmod | grep overlay
</code></pre>
<p>Check if system variables are correctly set</p>
<pre><code class="lang-bash">sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
</code></pre>
<h3 id="heading-step-3-update-system-packages"><strong>Step 3: Update System Packages</strong></h3>
<p>Keeping your system updated ensures compatibility with the latest Kubernetes components. Run the following command <strong>on all nodes</strong></p>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt upgrade -y
</code></pre>
<h3 id="heading-step-4-install-container-runtime-containerd"><strong>Step 4: Install Container Runtime (containerd)</strong></h3>
<p>Kubernetes requires a <strong>container runtime</strong> to manage containers. We'll use <code>containerd</code></p>
<ol>
<li><h4 id="heading-install-containerd"><strong>Install containerd</strong></h4>
</li>
</ol>
<pre><code class="lang-bash">curl -LO https://github.com/containerd/containerd/releases/download/v1.7.14/containerd-1.7.14-linux-amd64.tar.gz
sudo tar Cxzvf /usr/<span class="hljs-built_in">local</span> containerd-1.7.14-linux-amd64.tar.gz
</code></pre>
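<p>Since this tarball is fetched over the network, it's good practice to verify it against the SHA-256 sum published on the containerd release page. The snippet below shows the general pattern with a scratch file standing in for the real tarball (substitute the published checksum and real filename):</p>

```shell
# Stand-in for the downloaded tarball so the pattern is runnable as-is.
echo 'demo artifact' > /tmp/artifact.tar.gz
# In practice, EXPECTED comes from the project's release page, not from the file itself.
EXPECTED=$(sha256sum /tmp/artifact.tar.gz | awk '{print $1}')
# sha256sum -c reads "HASH  FILENAME" lines and reports OK or FAILED.
echo "${EXPECTED}  /tmp/artifact.tar.gz" | sha256sum -c -
```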
<ol start="2">
<li><h4 id="heading-set-up-containerd-as-a-service"><strong>Set up containerd as a service</strong></h4>
</li>
</ol>
<pre><code class="lang-bash">curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mkdir -p /usr/<span class="hljs-built_in">local</span>/lib/systemd/system/
sudo mv containerd.service /usr/<span class="hljs-built_in">local</span>/lib/systemd/system/
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
</code></pre>
<ol start="3">
<li><h4 id="heading-enable-systemd-as-the-cgroup-driver"><strong>Enable systemd as the cgroup driver</strong></h4>
</li>
</ol>
<pre><code class="lang-bash">sudo sed -i <span class="hljs-string">'s/SystemdCgroup \= false/SystemdCgroup \= true/g'</span> /etc/containerd/config.toml
</code></pre>
<ol start="4">
<li><h4 id="heading-restart-containerd-and-verify-its-status"><strong>Restart containerd and verify its status</strong></h4>
</li>
</ol>
<pre><code class="lang-bash">sudo systemctl daemon-reload
sudo systemctl <span class="hljs-built_in">enable</span> --now containerd
systemctl status containerd
</code></pre>
<h3 id="heading-step-5-install-runc-container-runtime-interface-cri"><strong>Step 5: Install runc (Low-Level OCI Runtime)</strong></h3>
<p><code>runc</code> is the low-level OCI runtime that <code>containerd</code> invokes to actually create and run containers. Install it using</p>
<pre><code class="lang-bash">curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/<span class="hljs-built_in">local</span>/sbin/runc
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742400060997/bf138485-69d4-44e7-8f66-3a4bee59176c.png" alt /></p>
<h3 id="heading-step-6-install-cni-container-network-interface-plugins"><strong>Step 6: Install CNI (Container Network Interface) Plugins</strong></h3>
<p>CNI plugins <strong>allow pod-to-pod communication</strong> within the cluster. Install them with</p>
<pre><code class="lang-bash">curl -LO https://github.com/containernetworking/plugins/releases/download/v1.5.0/cni-plugins-linux-amd64-v1.5.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.5.0.tgz
</code></pre>
<h3 id="heading-step-7-install-required-dependencies"><strong>Step 7: Install Required Dependencies</strong></h3>
<p>Install essential tools for <strong>secure communication and package management</strong></p>
<pre><code class="lang-bash">sudo apt install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742189916954/e8877709-f0e0-4fec-b137-a6c4fc283820.png" alt /></p>
<h3 id="heading-step-8-install-kubernetes-components"><strong>Step 8: Install Kubernetes Components</strong></h3>
<ol>
<li><h4 id="heading-add-the-kubernetes-repository-key-and-repository"><strong>Add the Kubernetes repository key and repository</strong></h4>
</li>
</ol>
<pre><code class="lang-bash">curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

<span class="hljs-built_in">echo</span> <span class="hljs-string">'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /'</span> | sudo tee /etc/apt/sources.list.d/kubernetes.list
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742190091396/4d9fbe78-b908-4d6d-9b52-f2342b765fd9.png" alt /></p>
<ol start="2">
<li><h4 id="heading-update-the-package-list"><strong>Update the package list</strong></h4>
</li>
</ol>
<pre><code class="lang-bash">sudo apt update
</code></pre>
<ol start="3">
<li><h4 id="heading-install-kubernetes-tools-kubeadm-kubelet-kubectl"><strong>Install Kubernetes Tools (</strong><code>kubeadm</code>, <code>kubelet</code>, <code>kubectl</code>)</h4>
</li>
</ol>
<pre><code class="lang-bash">sudo apt-get install -y kubelet=1.29.6-1.1 kubeadm=1.29.6-1.1 kubectl=1.29.6-1.1 --allow-downgrades --allow-change-held-packages
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742190188560/e680ac13-745d-409f-9ce0-739cc1ef08ff.png" alt /></p>
<ol start="4">
<li><h4 id="heading-prevent-accidental-updates-that-could-break-the-cluster"><strong>Prevent accidental updates that could break the cluster</strong></h4>
</li>
</ol>
<pre><code class="lang-bash">sudo apt-mark hold kubeadm kubelet kubectl
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742190329162/837e3e76-29dd-44dd-8f81-aa47699de1d6.png" alt /></p>
<h3 id="heading-step-9-configure-crictl-to-work-with-containerd"><strong>Step 9: Configure</strong> <code>crictl</code> to Work with <code>containerd</code></h3>
<p>Set up <code>crictl</code> (a tool for interacting with container runtimes) to use <code>containerd</code></p>
<pre><code class="lang-bash">sudo crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
</code></pre>
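<p>Under the hood, that command simply writes <code>/etc/crictl.yaml</code>; the resulting file looks roughly like this (exact defaults vary by crictl version):</p>

```yaml
# /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
```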
<h3 id="heading-your-kubernetes-cluster-is-ready-for-initialization">🎯 <strong>Your Kubernetes Cluster is Ready for Initialization! 🚀</strong></h3>
<blockquote>
<p>With <strong>all nodes updated and configured</strong>, your Kubernetes cluster is now ready for initialization using <code>kubeadm</code> in the next step!</p>
</blockquote>
<h2 id="heading-initializing-the-kubernetes-cluster-master-node-only">🚀 <strong>Initializing the Kubernetes Cluster (Master Node Only)</strong></h2>
<p>Now that Kubernetes is installed on all nodes, it's time to <strong>initialize the cluster</strong> on the <strong>master node (Control Plane)</strong>. This process sets up the <strong>control plane</strong> and defines the <strong>pod network</strong>, which is essential for communication between pods.</p>
<h3 id="heading-step-1-initialize-the-kubernetes-control-plane"><strong>Step 1: Initialize the Kubernetes Control Plane</strong></h3>
<p>On the <strong>master node</strong>, run the following command to start the Kubernetes cluster:</p>
<pre><code class="lang-bash">sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=&lt;private-ip-of-controlplane&gt; --node-name controlplane
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742190556370/7ca79d6c-3207-4395-b9a3-e57ef10f08a0.png" alt /></p>
<p>🔹 Replace <code>&lt;private-ip-of-controlplane&gt;</code> with the <strong>actual private IP</strong> of your Control Plane instance.</p>
<p>🔹 The <code>--pod-network-cidr=192.168.0.0/16</code> flag specifies the pod network range.</p>
<blockquote>
<p><strong>Network CNI Options:</strong></p>
<ul>
<li><p><strong>Calico:</strong> <code>192.168.0.0/16</code> (default in this example)</p>
</li>
<li><p><strong>Flannel:</strong> <code>10.244.0.0/16</code></p>
</li>
<li><p><strong>Cilium:</strong> <code>10.217.0.0/16</code></p>
</li>
</ul>
</blockquote>
<h4 id="heading-this-command-does-the-following"><strong>✅ This command does the following:</strong></h4>
<ul>
<li><p>Sets up the <strong>Kubernetes control plane</strong></p>
</li>
<li><p>Defines the <strong>pod network</strong> for inter-pod communication</p>
</li>
<li><p>Generates a <strong>join command</strong> to add worker nodes to the cluster</p>
</li>
</ul>
<h3 id="heading-step-2-save-the-worker-node-join-command"><strong>Step 2: Save the Worker Node Join Command</strong></h3>
<p>Once initialization is complete, Kubernetes will generate a command similar to this</p>
<pre><code class="lang-bash">sudo kubeadm join &lt;master-node-ip&gt;:6443 --token &lt;token&gt; --discovery-token-ca-cert-hash sha256:&lt;<span class="hljs-built_in">hash</span>&gt;
</code></pre>
<blockquote>
<p><strong>Copy and save this command</strong>—you will need it to <strong>connect worker nodes</strong> later.</p>
</blockquote>
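<p>If the join command scrolls away, or the token expires (kubeadm tokens are valid for 24 hours by default), you can generate a fresh one on the control plane at any time:</p>

```shell
# Prints a ready-to-run "kubeadm join ..." line with a newly created token.
kubeadm token create --print-join-command
```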
<h3 id="heading-step-3-configure-kubectl-on-the-master-node"><strong>Step 3: Configure</strong> <code>kubectl</code> on the Master Node</h3>
<p>To start using Kubernetes, configure <code>kubectl</code> with the <strong>admin credentials</strong></p>
<pre><code class="lang-bash">mkdir -p <span class="hljs-variable">$HOME</span>/.kube
sudo cp -i /etc/kubernetes/admin.conf <span class="hljs-variable">$HOME</span>/.kube/config
sudo chown $(id -u):$(id -g) <span class="hljs-variable">$HOME</span>/.kube/config
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742190840684/9822693b-63cf-4596-bbd7-86b3da2036bf.png" alt /></p>
<blockquote>
<p>This allows you to <strong>run</strong> <code>kubectl</code> commands without root privileges.</p>
</blockquote>
<h4 id="heading-verify-that-the-control-plane-is-running"><strong>Verify that the Control Plane is Running</strong></h4>
<pre><code class="lang-bash">kubectl get nodes
</code></pre>
<blockquote>
<p>The <strong>master node should appear as</strong> <code>NotReady</code>—this is expected until a <strong>network plugin</strong> is installed.</p>
</blockquote>
<h3 id="heading-step-4-deploy-a-network-plugin-cni"><strong>Step 4: Deploy a Network Plugin (CNI)</strong></h3>
<p>Kubernetes requires a <strong>Container Network Interface (CNI)</strong> to enable pod-to-pod communication. Choose a networking solution and apply the corresponding configuration.</p>
<h4 id="heading-install-calico-recommended-for-production"><strong>Install Calico (Recommended for Production)</strong></h4>
<p>Calico provides <strong>robust networking</strong> and <strong>security policies</strong> for Kubernetes clusters.</p>
<p>Run the following commands <strong>on the master node</strong></p>
<pre><code class="lang-bash">kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
kubectl apply -f custom-resources.yaml
</code></pre>
<blockquote>
<p><strong>This enables networking between pods across all nodes.</strong></p>
</blockquote>
<h4 id="heading-verify-calico-installation"><strong>Verify Calico Installation</strong></h4>
<pre><code class="lang-bash">kubectl get pods -n calico-system
</code></pre>
<h3 id="heading-step-5-deploy-the-nginx-ingress-controller"><strong>Step 5: Deploy the NGINX Ingress Controller</strong></h3>
<p>To manage <strong>external access</strong> to services inside the cluster, deploy the <strong>NGINX Ingress Controller</strong>. Pinning to a release tag (rather than <code>main</code>) keeps the manifest from changing underneath you:</p>
<pre><code class="lang-bash">kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/baremetal/deploy.yaml
</code></pre>
<h3 id="heading-your-kubernetes-control-plane-is-ready">🎯 <strong>Your Kubernetes Control Plane is Ready! 🚀</strong></h3>
<blockquote>
<p>At this point:</p>
<ul>
<li><p><strong>The control plane is up and running.</strong></p>
</li>
<li><p><strong>Networking is configured, allowing pods to communicate.</strong></p>
</li>
<li><p><strong>Ingress controller is deployed for external access.</strong></p>
</li>
</ul>
</blockquote>
<h2 id="heading-joining-worker-nodes-to-the-kubernetes-cluster">🚀 <strong>Joining Worker Nodes to the Kubernetes Cluster</strong></h2>
<p>After setting up the <strong>control plane</strong>, the next step is to add <strong>worker nodes</strong> to the cluster. This allows them to <strong>host workloads</strong> and be managed by the <strong>master node</strong>.</p>
<h3 id="heading-step-1-join-the-worker-nodes"><strong>Step 1: Join the Worker Nodes</strong></h3>
<p>Run the following command <strong>on each worker node</strong> to connect them to the Kubernetes cluster</p>
<pre><code class="lang-bash">sudo kubeadm join 172.31.38.39:6443 --token &lt;your-token&gt; \
    --discovery-token-ca-cert-hash sha256:&lt;your-ca-hash&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742190734985/619a7530-cefe-473e-9f8c-01047c8d7f8b.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742190769120/e2483072-0400-4b29-8654-17fa7c96e918.png" alt /></p>
<blockquote>
<p>Replace <code>&lt;your-token&gt;</code> and <code>&lt;your-ca-hash&gt;</code> with the values <strong>generated during</strong> <code>kubeadm init</code>.</p>
</blockquote>
<h4 id="heading-this-command-does-the-following-1">✅ <strong>This command does the following:</strong></h4>
<ul>
<li><p>Connects <strong>worker nodes</strong> to the Kubernetes <strong>control plane</strong>.</p>
</li>
<li><p>Allows the <strong>master node</strong> to <strong>schedule</strong> and <strong>manage workloads</strong> on worker nodes.</p>
</li>
</ul>
<p>Once executed <strong>on all worker nodes</strong>, your <strong>Kubernetes cluster will be fully formed!</strong> 🎉</p>
<h3 id="heading-step-2-verify-the-cluster-setup"><strong>Step 2: Verify the Cluster Setup</strong></h3>
<p>To ensure that all nodes are properly connected, run the following command <strong>on the master node</strong></p>
<pre><code class="lang-bash">kubectl get nodes
</code></pre>
<blockquote>
<p>If everything is working correctly, you should see <strong>all three nodes (one master, two workers) in a</strong> <code>Ready</code> state.</p>
</blockquote>
<h3 id="heading-step-3-verify-all-pods-are-running"><strong>Step 3: Verify All Pods Are Running</strong></h3>
<p>Check if all cluster components and networking pods are operational</p>
<pre><code class="lang-bash">kubectl get pods -A
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742402740674/2aa2a84d-8798-4d4d-9cb0-2a58308c4ad8.png" alt /></p>
<p>✔ If all pods are running correctly, your cluster is now stable.</p>
<blockquote>
<p>If <strong>some Calico pods</strong> are <strong>not running</strong>, follow the <strong>troubleshooting steps</strong> below.</p>
</blockquote>
<h3 id="heading-step-4-disable-sourcedestination-checks-aws-only"><strong>Step 4: Disable Source/Destination Checks (AWS Only)</strong></h3>
<p>AWS enforces <strong>Source/Destination Checks</strong>, which can interfere with Kubernetes networking, especially when using <strong>Calico, Flannel, or Cilium CNIs</strong>.</p>
<h4 id="heading-to-disable-this-check-on-all-nodes-master-workers"><strong>To disable this check on all nodes (master + workers):</strong></h4>
<ol>
<li><p>Go to <strong>AWS EC2 Dashboard → Instances</strong></p>
</li>
<li><p>Select <strong>all Kubernetes nodes (master &amp; workers)</strong></p>
</li>
<li><p>Click <strong>Actions → Networking → Change Source/Destination Check</strong></p>
</li>
<li><p>Select <strong>Disable</strong> and confirm ✅</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742402355181/7e8db45a-195c-4d1f-a9b1-e5d1a960e74a.png" alt /></p>
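<p>If you prefer the CLI over the console, the same change can be scripted per instance. The instance ID below is a placeholder, and this assumes the AWS CLI is installed and configured with suitable credentials:</p>

```shell
# Disable source/destination checking for one node; repeat for each instance.
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
```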
<h4 id="heading-why-disable-sourcedestination-checks"><strong>🔍 Why Disable Source/Destination Checks?</strong></h4>
<ul>
<li><p><strong>Kubernetes networking</strong> often involves <strong>asymmetric routing</strong> (packets leave from one interface but return through another).</p>
</li>
<li><p><strong>AWS blocks asymmetric routing</strong> by default, which <strong>breaks pod communication</strong>.</p>
</li>
<li><p>Disabling this <strong>allows proper routing</strong> across nodes.</p>
</li>
</ul>
<h3 id="heading-step-5-allow-bidirectional-traffic-on-tcp-port-179-calico-cni-only"><strong>Step 5: Allow Bidirectional Traffic on TCP Port 179 (Calico CNI Only)</strong></h3>
<p>If using <strong>Calico CNI</strong>, you need to allow <strong>BGP (Border Gateway Protocol)</strong> traffic between nodes <strong>on TCP port 179</strong>.</p>
<h4 id="heading-configure-security-group-rules"><strong>Configure Security Group Rules</strong></h4>
<ol>
<li><p><strong>Go to AWS EC2 Dashboard → Security Groups</strong></p>
</li>
<li><p>Find the <strong>security group attached</strong> to your <strong>Kubernetes nodes</strong></p>
</li>
<li><p>Click on <strong>Inbound Rules → Edit Inbound Rules → Add Rule</strong></p>
</li>
</ol>
<ul>
<li><p><strong>Type:</strong> Custom TCP Rule</p>
</li>
<li><p><strong>Protocol:</strong> TCP</p>
</li>
<li><p><strong>Port Range:</strong> 179</p>
</li>
<li><p><strong>Source:</strong></p>
<ul>
<li><p><code>0.0.0.0/0</code> (<strong>not recommended for production</strong>)</p>
</li>
<li><p><strong>OR</strong> Your <strong>VPC CIDR</strong> (e.g., <code>10.0.0.0/16</code>)</p>
</li>
</ul>
</li>
</ul>
<ol start="4">
<li><p>Click <strong>Save Rules</strong> ✅</p>
</li>
<li><p><strong>Now, edit Outbound Rules:</strong></p>
</li>
</ol>
<ul>
<li><p>Click <strong>Outbound Rules → Edit Outbound Rules → Add Rule</strong></p>
</li>
<li><p><strong>Type:</strong> Custom TCP Rule</p>
</li>
<li><p><strong>Protocol:</strong> TCP</p>
</li>
<li><p><strong>Port Range:</strong> 179</p>
</li>
<li><p><strong>Destination:</strong></p>
<ul>
<li><p><code>0.0.0.0/0</code> (<strong>for all hosts</strong>)</p>
</li>
<li><p><strong>OR</strong> Your <strong>VPC CIDR</strong> (<strong>for internal traffic</strong>)</p>
</li>
</ul>
</li>
</ul>
<ol start="6">
<li><p>Click <strong>Save Rules</strong> ✅</p>
</li>
</ol>
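<p>The same pair of rules can also be added from the CLI. The security-group ID and CIDR below are placeholders for your own values:</p>

```shell
# Allow BGP (TCP 179) in and out within the VPC so Calico nodes can peer.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 179 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-egress  --group-id sg-0123456789abcdef0 --protocol tcp --port 179 --cidr 10.0.0.0/16
```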
<h3 id="heading-final-step-verify-everything-is-running">🎯 <strong>Final Step: Verify Everything is Running</strong></h3>
<p>Run the following command again to ensure that <strong>all pods are running successfully</strong>:</p>
<pre><code class="lang-bash">kubectl get pods -A
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742403690689/f7fcf2bd-2853-4e55-8521-9d07b9f8faf5.png" alt /></p>
<h3 id="heading-congratulations-your-kubernetes-cluster-is-now-fully-functional-and-ready-for-workloads">🎉 <strong>Congratulations! Your Kubernetes Cluster is Now Fully Functional and Ready for Workloads! 🚀</strong></h3>
<blockquote>
<p>Your <strong>Kubernetes cluster is now complete</strong> with a fully operational <strong>control plane and worker nodes</strong>. You’re now ready to <strong>deploy applications</strong> and <strong>manage workloads</strong> effectively.</p>
</blockquote>
<h2 id="heading-perform-a-security-audit-using-kubeaudit">🔒 <strong>Perform a Security Audit Using Kubeaudit</strong></h2>
<p>To ensure the <strong>security</strong> of your Kubernetes cluster, use <strong>Kubeaudit</strong>—a security auditing tool developed by Shopify. It helps identify <strong>misconfigurations, security vulnerabilities, and best practice violations</strong> in your cluster.</p>
<h3 id="heading-step-1-install-kubeaudit-on-ubuntu"><strong>Step 1: Install Kubeaudit on Ubuntu</strong></h3>
<p>Run the following commands on your <strong>control plane (master node)</strong></p>
<pre><code class="lang-bash">wget https://github.com/Shopify/kubeaudit/releases/download/v0.22.2/kubeaudit_0.22.2_linux_amd64.tar.gz
tar -xzf kubeaudit_0.22.2_linux_amd64.tar.gz  <span class="hljs-comment"># Extract the tar.gz file</span>
sudo mv kubeaudit /usr/<span class="hljs-built_in">local</span>/bin/             <span class="hljs-comment"># Move the binary to a system path</span>
rm kubeaudit_0.22.2_linux_amd64.tar.gz        <span class="hljs-comment"># Cleanup the tar.gz file</span>
kubeaudit version                             <span class="hljs-comment"># Verify installation</span>
</code></pre>
<blockquote>
<p>This will install <strong>Kubeaudit</strong> and verify that it's correctly set up.</p>
</blockquote>
<h3 id="heading-step-2-scan-the-cluster-for-security-issues"><strong>Step 2: Scan the Cluster for Security Issues</strong></h3>
<p>Run the following command to <strong>audit the cluster</strong> and check for security misconfigurations</p>
<pre><code class="lang-bash">kubeaudit all
</code></pre>
<h3 id="heading-what-does-this-command-do">📌 <strong>What does this command do?</strong></h3>
<ul>
<li><p>Scans all <strong>Kubernetes objects</strong> for <strong>security vulnerabilities</strong></p>
</li>
<li><p>Detects <strong>misconfigurations</strong> in <strong>RBAC, PodSecurity, and container settings</strong></p>
</li>
<li><p>Provides <strong>remediation steps</strong> to fix issues</p>
</li>
</ul>
<hr />
<h1 id="heading-setup-sonarqube-and-nexus-server">🛠 <strong>Setup SonarQube and Nexus Server</strong></h1>
<p>To implement <strong>code quality analysis (SonarQube)</strong> and <strong>artifact management (Nexus Repository Manager)</strong> in your <strong>CI/CD pipeline</strong>, follow these steps.</p>
<h2 id="heading-launch-ec2-instances-for-sonarqube-and-nexus"><strong>Launch EC2 Instances for SonarQube and Nexus</strong></h2>
<ol>
<li><p><strong>Go to AWS Console</strong> → Click <strong>Launch an Instance</strong></p>
</li>
<li><p>Configure <strong>two EC2 instances</strong>:</p>
<ul>
<li><p><strong>AMI:</strong> Ubuntu</p>
</li>
<li><p><strong>Instance Type:</strong> <code>t2.medium</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742408049380/079fad2d-60d6-4a72-b436-c5026e831131.png" alt /></p>
</li>
<li><p><strong>Storage:</strong> <code>20 GiB</code> or more</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742408226501/3914ff15-ab4f-46d9-b38e-95cbb303ba77.png" alt /></p>
</li>
<li><p><strong>Number of Instances:</strong> <code>2</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742408375907/11291271-3568-44fd-bd35-7af3402ea042.png" alt /></p>
</li>
</ul>
</li>
<li><p>Click <strong>Launch Instance</strong></p>
</li>
<li><p>Rename instances to:</p>
<ul>
<li><p><code>SonarQube</code></p>
</li>
<li><p><code>Nexus</code></p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742408474945/cc06f960-700e-45d5-b985-2b8011cd567c.png" alt /></p>
<h2 id="heading-connect-to-instances-via-ssh"><strong>Connect to Instances via SSH</strong></h2>
<p>Connect to each instance from your terminal:</p>
<pre><code class="lang-bash">ssh -i &lt;path-to-pem-file&gt; ubuntu@&lt;public-ip&gt;
</code></pre>
<p>Once connected, <strong>update the system</strong>:</p>
<pre><code class="lang-bash">sudo apt update
</code></pre>
<h2 id="heading-install-docker-on-both-servers"><strong>Install Docker on Both Servers</strong></h2>
<h3 id="heading-step-1-install-dependencies"><strong>Step 1: Install Dependencies</strong></h3>
<pre><code class="lang-bash">sudo apt-get update
sudo apt-get install -y ca-certificates curl
</code></pre>
<h3 id="heading-step-2-add-dockers-official-gpg-key"><strong>Step 2: Add Docker's Official GPG Key</strong></h3>
<pre><code class="lang-bash">sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
</code></pre>
<h3 id="heading-step-3-add-the-docker-repository"><strong>Step 3: Add the Docker Repository</strong></h3>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> \
  <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">${UBUNTU_CODENAME:-<span class="hljs-variable">$VERSION_CODENAME</span>}</span>"</span>)</span> stable"</span> | \
  sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>
<pre><code class="lang-bash">sudo apt-get update
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742409559198/5ccba0d2-aa58-4a20-b559-8be2de7a575c.png" alt /></p>
<h3 id="heading-step-4-install-docker"><strong>Step 4: Install Docker</strong></h3>
<pre><code class="lang-bash">sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742409666085/67e3a5ca-a732-4eb0-9e68-c731ae31048f.png" alt /></p>
<h3 id="heading-step-5-allow-docker-without-sudo"><strong>Step 5: Allow Docker Without</strong> <code>sudo</code></h3>
<pre><code class="lang-bash">sudo chmod 666 /var/run/docker.sock
</code></pre>
<blockquote>
<p>⚠️ This makes the Docker socket world-writable, and the change resets whenever Docker restarts. A safer, more durable option is <code>sudo usermod -aG docker $USER</code>, followed by logging out and back in.</p>
</blockquote>
<h2 id="heading-deploy-sonarqube-on-the-sonarqube-server"><strong>Deploy SonarQube on the SonarQube Server</strong></h2>
<p>Run the following command on the <strong>SonarQube EC2 instance</strong></p>
<pre><code class="lang-bash">docker run -d --name sonar -p 9000:9000 sonarqube:lts-community
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742409999801/13ce992b-8b66-41d0-8155-fef4ff5087d2.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742410095675/c508e48c-e552-4e76-b145-b4e9ea31eb65.png" alt /></p>
<h3 id="heading-access-sonarqube-dashboard"><strong>Access SonarQube Dashboard</strong></h3>
<p>📌 Open your browser and go to:<br /><code>http://&lt;SonarQube-server-public-IP&gt;:9000</code></p>
<h3 id="heading-default-credentials"><strong>Default Credentials</strong></h3>
<ul>
<li><p><strong>Username:</strong> <code>admin</code></p>
</li>
<li><p><strong>Password:</strong> <code>admin</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742410737474/fa5b3682-7253-4c59-8610-0985840fd5c9.png" alt /></p>
<blockquote>
<p>After logging in, <strong>update the default password</strong> for security</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742410703860/f024fb56-bfc0-4dab-ac95-9905d2352944.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742410814873/51b645fb-23df-4c5f-8834-f34056c3f359.png" alt /></p>
<h2 id="heading-deploy-nexus-on-the-nexus-server"><strong>Deploy Nexus on the Nexus Server</strong></h2>
<p>Run this command on the <strong>Nexus EC2 instance</strong>:</p>
<pre><code class="lang-bash">docker run -d --name nexus -p 8081:8081 sonatype/nexus3
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742410246832/96d88a80-4d3c-4bbf-9262-02ec54c1dbb3.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742410288133/4e008ce7-0a25-4ca2-938f-cdd299116da8.png" alt /></p>
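<p>As written, the container keeps all repository data in its writable layer. If you want artifacts to survive the container being recreated, a common variation is to mount a named volume at <code>/nexus-data</code>, the image's data directory:</p>

```shell
# Optional variation: persist Nexus data in a named Docker volume.
docker run -d --name nexus -p 8081:8081 -v nexus-data:/nexus-data sonatype/nexus3
```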
<h3 id="heading-access-nexus-dashboard"><strong>Access Nexus Dashboard</strong></h3>
<p>📌 Open your browser and go to: <code>http://&lt;Nexus-server-public-IP&gt;:8081</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742410854058/c1190563-cb5c-4429-811b-48ec4cbb65e1.png" alt /></p>
<h3 id="heading-retrieve-admin-password"><strong>Retrieve Admin Password</strong></h3>
<p>To log in, first <strong>find the default admin password</strong>:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it nexus /bin/bash
cat sonatype-work/nexus3/admin.password
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742411300129/3d09b5f7-280e-4f7c-86b7-a7b9c0d7aa1d.png" alt /></p>
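<p>The same value can be read in one step, without an interactive shell, since the official image keeps its data under <code>/nexus-data</code>:</p>

```shell
docker exec nexus cat /nexus-data/admin.password
```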
<p>Use this <strong>admin password</strong> to log in.</p>
<ul>
<li><p><strong>Username:</strong> <code>admin</code></p>
</li>
<li><p><strong>Password:</strong> <em>(from the above command)</em></p>
</li>
</ul>
<blockquote>
<p>Set a <strong>new password</strong> and enable <strong>anonymous access</strong> if needed.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742411562050/039d338e-df6b-4770-a563-855bc519ba3f.png" alt /></p>
<h2 id="heading-configure-nexus-repository-in-pomxml"><strong>Configure Nexus Repository in</strong> <code>pom.xml</code></h2>
<ol>
<li><p>Go to <strong>Nexus Web UI</strong></p>
</li>
<li><p>Click <strong>Browse</strong> → Look for:</p>
<ul>
<li><p><strong>Maven Releases</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742529847917/f579832e-88fc-4319-a2d3-5dd7aa8f982e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Maven Snapshots</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742529997777/791d02ed-759b-476b-8d40-550dd4ddcdf6.png" alt /></p>
</li>
</ul>
</li>
<li><p>Copy the repository <strong>URL</strong></p>
</li>
<li><p>Add it to your project's <code>pom.xml</code> file:</p>
</li>
</ol>
<pre><code class="lang-bash">&lt;distributionManagement&gt;
    &lt;repository&gt;
        &lt;id&gt;nexus-releases&lt;/id&gt;
        &lt;url&gt;http://&lt;Nexus-server-public-IP&gt;:8081/repository/maven-releases/&lt;/url&gt;
    &lt;/repository&gt;
    &lt;snapshotRepository&gt;
        &lt;id&gt;nexus-snapshots&lt;/id&gt;
        &lt;url&gt;http://&lt;Nexus-server-public-IP&gt;:8081/repository/maven-snapshots/&lt;/url&gt;
    &lt;/snapshotRepository&gt;
&lt;/distributionManagement&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742530154119/96e0483b-588d-4908-afdd-13b28d595098.png" alt class="image--center mx-auto" /></p>
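<p>For <code>mvn deploy</code> to authenticate against these repositories, Maven also needs credentials: it matches the repository <code>&lt;id&gt;</code> values above against <code>&lt;server&gt;</code> entries in <code>~/.m2/settings.xml</code> (or the CI server's managed settings). A minimal sketch, with the password as a placeholder for your own Nexus credentials:</p>

```xml
<settings>
  <servers>
    <server>
      <id>nexus-releases</id>
      <username>admin</username>
      <password>your-nexus-password</password>
    </server>
    <server>
      <id>nexus-snapshots</id>
      <username>admin</username>
      <password>your-nexus-password</password>
    </server>
  </servers>
</settings>
```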
<h3 id="heading-final-verification">🎯 <strong>Final Verification</strong></h3>
<ol>
<li><p><strong>Check if SonarQube is Running:</strong></p>
<pre><code class="lang-bash"> docker ps | grep sonar
</code></pre>
<blockquote>
<p>You should see the <code>sonar</code> container running.</p>
</blockquote>
</li>
<li><p><strong>Check if Nexus is Running:</strong></p>
<pre><code class="lang-bash"> docker ps | grep nexus
</code></pre>
<blockquote>
<p>You should see the <code>nexus</code> container running.</p>
</blockquote>
</li>
<li><p><strong>Test SonarQube Web Access:</strong><br /> Open: <code>http://&lt;SonarQube-server-public-IP&gt;:9000</code></p>
</li>
<li><p><strong>Test Nexus Web Access:</strong><br /> Open: <code>http://&lt;Nexus-server-public-IP&gt;:8081</code></p>
</li>
</ol>
<h3 id="heading-sonarqube-and-nexus-are-now-set-up">🎉 <strong>SonarQube and Nexus Are Now Set Up!</strong> 🚀</h3>
<ul>
<li><p><strong>SonarQube</strong> is ready for <strong>code quality analysis</strong></p>
</li>
<li><p><strong>Nexus</strong> is ready for <strong>artifact storage</strong></p>
</li>
</ul>
<hr />
<h1 id="heading-set-up-jenkins-server-on-aws">🚀 <strong>Set Up Jenkins Server on AWS</strong></h1>
<p>Jenkins is a powerful automation server widely used for <strong>CI/CD pipelines</strong>. This guide will help you set up Jenkins on an <strong>AWS EC2 instance</strong>, install necessary dependencies, and configure it for seamless use.</p>
<h2 id="heading-launch-an-ec2-instance-for-jenkins"><strong>Launch an EC2 Instance for Jenkins</strong></h2>
<ol>
<li><p><strong>Go to AWS Console</strong> → Click <strong>Launch an Instance</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742408723612/f1e06adb-c690-4db3-a70a-7c11bb9e08a0.png" alt /></p>
</li>
<li><p>Configure the instance:</p>
<ul>
<li><p><strong>Name:</strong> <code>Jenkins Server</code></p>
</li>
<li><p><strong>AMI:</strong> Ubuntu (latest LTS recommended)</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742408813822/f900d3ba-b0ba-46bd-b101-12838483209b.png" alt /></p>
</li>
<li><p><strong>Instance Type:</strong> <code>t2.large</code> (At least <strong>2 vCPUs</strong> and <strong>8 GB RAM</strong> for smooth builds)</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742408912574/426deadd-00af-4a7a-a2d0-b3f1900d5a24.png" alt /></p>
</li>
<li><p><strong>Storage:</strong> <code>30 GiB</code> (Jenkins jobs &amp; artifacts require storage)</p>
</li>
<li><p><strong>Number of Instances:</strong> <code>1</code></p>
</li>
</ul>
</li>
<li><p>Click <strong>Launch Instance</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742409000144/b935d32d-b136-4c8f-bf43-0dea4e636f8c.png" alt /></p>
</li>
<li><p>Once running, <strong>copy the public IP</strong> of the Jenkins instance from the AWS Console.</p>
</li>
</ol>
<h2 id="heading-connect-to-the-jenkins-server-via-ssh"><strong>Connect to the Jenkins Server via SSH</strong></h2>
<p>Use SSH to connect from your local machine:</p>
<pre><code class="lang-bash">ssh -i &lt;path-of-pem-file&gt; ubuntu@&lt;jenkins-server-IP&gt;
</code></pre>
<p>Replace <code>&lt;path-of-pem-file&gt;</code> with your <strong>private key path</strong> and <code>&lt;jenkins-server-IP&gt;</code> with your <strong>instance's public IP</strong>.</p>
<h2 id="heading-update-the-system"><strong>Update the System</strong></h2>
<p>To ensure the latest security patches and package versions, run:</p>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt upgrade -y
</code></pre>
<h2 id="heading-install-jenkins"><strong>Install Jenkins</strong></h2>
<h3 id="heading-step-1-install-java-required-for-jenkins"><strong>Step 1: Install Java (Required for Jenkins)</strong></h3>
<pre><code class="lang-bash">sudo apt install -y openjdk-17-jdk-headless
</code></pre>
<h3 id="heading-step-2-install-jenkins-using-official-repository"><strong>Step 2: Install Jenkins Using Official Repository</strong></h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Add Jenkins repository key</span>
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key

<span class="hljs-comment"># Add Jenkins repository to sources list</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian-stable binary/"</span> | sudo tee \
  /etc/apt/sources.list.d/jenkins.list &gt; /dev/null

<span class="hljs-comment"># Update package lists and install Jenkins</span>
sudo apt-get update
sudo apt-get install -y jenkins
</code></pre>
<h3 id="heading-step-3-start-amp-verify-jenkins-service"><strong>Step 3: Start &amp; Verify Jenkins Service</strong></h3>
<pre><code class="lang-bash">sudo systemctl start jenkins
sudo systemctl <span class="hljs-built_in">enable</span> jenkins
sudo systemctl status jenkins
</code></pre>
<h2 id="heading-install-docker-for-running-jenkins-builds-in-containers"><strong>Install Docker (for Running Jenkins Builds in Containers)</strong></h2>
<h3 id="heading-step-1-install-dependencies-1"><strong>Step 1: Install Dependencies</strong></h3>
<pre><code class="lang-bash">sudo apt-get install -y ca-certificates curl
</code></pre>
<h3 id="heading-step-2-add-dockers-official-gpg-key-1"><strong>Step 2: Add Docker’s Official GPG Key</strong></h3>
<pre><code class="lang-bash">sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
</code></pre>
<h3 id="heading-step-3-add-docker-repository"><strong>Step 3: Add Docker Repository</strong></h3>
<pre><code class="lang-bash"><span class="hljs-built_in">echo</span> \
  <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">${UBUNTU_CODENAME:-<span class="hljs-variable">$VERSION_CODENAME</span>}</span>"</span>)</span> stable"</span> | \
  sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
</code></pre>
<h3 id="heading-step-4-install-docker-1"><strong>Step 4: Install Docker</strong></h3>
<pre><code class="lang-bash">sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
</code></pre>
<h3 id="heading-step-5-allow-jenkins-to-use-docker-without-sudo"><strong>Step 5: Allow Jenkins to Use Docker Without</strong> <code>sudo</code></h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Quick fix for lab setups; a safer alternative is: sudo usermod -aG docker jenkins</span>
sudo chmod 666 /var/run/docker.sock
</code></pre>
<h2 id="heading-install-kubectl-for-kubernetes-integration"><strong>Install</strong> <code>kubectl</code> (for Kubernetes Integration)</h2>
<p>Run the following command on the <strong>Jenkins server</strong>:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Use the kubectl version that matches your EKS cluster; 1.19.6 is the version used here</span>
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/<span class="hljs-built_in">local</span>/bin
kubectl version --short --client
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742582208348/df8d62fa-1d20-4776-80bf-bdb92a49a5e7.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-access-jenkins-web-interface"><strong>Access Jenkins Web Interface</strong></h2>
<p>Jenkins runs on <strong>port 8080</strong> by default.</p>
<p>📌 Open your browser and visit: <code>http://&lt;Jenkins-server-public-IP&gt;:8080</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742413689663/74c098f7-f471-4cd4-89ef-6f4bbef7f53b.png" alt /></p>
<h3 id="heading-retrieve-initial-admin-password"><strong>Retrieve Initial Admin Password</strong></h3>
<p>Run this command:</p>
<pre><code class="lang-bash">sudo cat /var/lib/jenkins/secrets/initialAdminPassword
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742413784814/63d6ab5a-e21b-4357-bae8-8f1bbe8d33c9.png" alt /></p>
<p>Copy the <strong>admin password</strong> and paste it into the Jenkins web interface.</p>
<h2 id="heading-jenkins-setup-wizard"><strong>Jenkins Setup Wizard</strong></h2>
<ol>
<li><p><strong>Install Suggested Plugins</strong> (Recommended for most setups).</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742413906921/ae59eb1d-5114-46a2-9e76-0df6655e4580.png" alt /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742413962671/d25160bf-b122-4fe3-8969-51240c474932.png" alt /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742414305358/9a14eeb6-8cf7-499f-ae46-c22ba0c95367.png" alt /></p>
</li>
<li><p><strong>Create an Admin User</strong> (Set username, password, and email).</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742414335032/978c4eed-903a-44a0-9f57-29fc72698272.png" alt /></p>
</li>
<li><p><strong>Start Using Jenkins 🎉</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742414687916/673b9ce0-e6ce-46e3-a4de-f733aa398b94.png" alt /></p>
</li>
</ol>
<h2 id="heading-install-required-jenkins-plugins"><strong>Install Required Jenkins Plugins</strong></h2>
<p>To enhance <strong>CI/CD functionality</strong>, install these plugins:</p>
<h3 id="heading-step-1-navigate-to-plugin-manager"><strong>Step 1: Navigate to Plugin Manager</strong></h3>
<ul>
<li><p><strong>Manage Jenkins</strong> → <strong>Manage Plugins</strong> → <strong>Available Plugins</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742414821584/b7463d06-291c-48d0-a875-da37b6dc344b.png" alt /></p>
</li>
</ul>
<h3 id="heading-step-2-install-the-following-plugins"><strong>Step 2: Install the Following Plugins</strong></h3>
<ol>
<li><p><strong>Eclipse Temurin Installer</strong> <em>(Installs required Java versions)</em></p>
</li>
<li><p><strong>Config File Provider</strong> <em>(Manages configuration files for builds)</em></p>
</li>
<li><p><strong>Pipeline Maven Integration</strong> <em>(For running Maven builds inside pipelines)</em></p>
</li>
<li><p><strong>SonarQube Scanner</strong> <em>(For static code analysis integration)</em></p>
</li>
<li><p><strong>Docker</strong> <em>(Integrates Docker with Jenkins)</em></p>
</li>
<li><p><strong>Docker Pipeline</strong> <em>(Allows defining Docker containers in pipelines)</em></p>
</li>
<li><p><strong>Kubernetes</strong> <em>(For running Jenkins on Kubernetes)</em></p>
</li>
<li><p><strong>Kubernetes CLI</strong> <em>(Provides</em> <code>kubectl</code> inside Jenkins)</p>
</li>
<li><p><strong>Kubernetes Credentials</strong> <em>(Manages Kubernetes authentication)</em></p>
</li>
<li><p><strong>Kubernetes Client API</strong> <em>(Allows Jenkins to interact with Kubernetes clusters)</em></p>
</li>
<li><p><strong>Maven Integration</strong> <em>(For building Java projects)</em></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742415537577/05bf3ec3-6165-472c-9567-d19515792dbb.png" alt /></p>
<blockquote>
<p>Click <strong>Install</strong> and wait for the installation to complete.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742415568843/3f019b38-7a73-453d-8871-a45dbc64ce64.png" alt /></p>
<h3 id="heading-final-verification-1">🎯 <strong>Final Verification</strong></h3>
<ol>
<li><strong>Check Jenkins Service</strong></li>
</ol>
<pre><code class="lang-bash">sudo systemctl status jenkins
</code></pre>
<blockquote>
<p>You should see <strong>"active (running)"</strong> status.</p>
</blockquote>
<ol start="2">
<li><strong>Check Docker Installation</strong></li>
</ol>
<pre><code class="lang-bash">docker --version
</code></pre>
<blockquote>
<p>It should display the installed Docker version.</p>
</blockquote>
<ol start="3">
<li><strong>Check</strong> <code>kubectl</code> Installation</li>
</ol>
<pre><code class="lang-bash">kubectl version --short --client
</code></pre>
<blockquote>
<p>It should show the <strong>Kubernetes client version</strong>.</p>
</blockquote>
<ol start="4">
<li><strong>Test Jenkins Web Access</strong><br /> Open: <code>http://&lt;Jenkins-server-public-IP&gt;:8080</code></li>
</ol>
<h2 id="heading-jenkins-is-now-fully-set-up-on-aws">🎉 <strong>Jenkins is Now Fully Set Up on AWS! 🚀</strong></h2>
<ul>
<li><p><strong>Jenkins</strong> is installed &amp; running</p>
</li>
<li><p><strong>Docker</strong> is set up for containerized builds</p>
</li>
<li><p><code>kubectl</code> is installed for Kubernetes integration</p>
</li>
<li><p><strong>Essential plugins</strong> are installed</p>
</li>
</ul>
<hr />
<h1 id="heading-start-creating-cicd-pipelines">Start Creating CI/CD Pipelines! 🚀</h1>
<p>Jenkins is now fully set up! You can start creating Jenkins Pipelines to automate builds, tests, and deployments for your applications.</p>
<h2 id="heading-step-1-configure-essential-tools-in-jenkins">Step 1: Configure Essential Tools in Jenkins</h2>
<h3 id="heading-1-install-required-plugins">1. Configure Tool Installations</h3>
<h4 id="heading-install-jdk">Install JDK</h4>
<ul>
<li><p>Navigate to <strong>Manage Jenkins → Tools</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742415773288/eb362b5d-ddca-43b7-8115-b3b76e48158e.png" alt /></p>
</li>
<li><p>Under <strong>JDK</strong>, click <strong>Add JDK</strong></p>
</li>
<li><p>Name: <code>jdk17</code></p>
</li>
<li><p>Check <strong>Install automatically</strong></p>
</li>
<li><p>Installer: <code>adoptium.net</code></p>
</li>
<li><p>Version: <code>jdk-17</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742416103137/ddb7e887-3058-46dd-9c78-279f7526b2c2.png" alt /></p>
<h4 id="heading-install-sonarqube-scanner">Install SonarQube Scanner</h4>
<ul>
<li><p>Under <strong>SonarQube Scanner</strong>, click <strong>Add SonarQube Scanner</strong></p>
</li>
<li><p>Name: <code>sonar-scanner</code></p>
</li>
<li><p>Check <strong>Install automatically</strong></p>
</li>
<li><p>Choose the latest version</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742416302003/e223b197-70ef-46c2-af56-fd9643f7bb2a.png" alt /></p>
<h4 id="heading-install-maven">Install Maven</h4>
<ul>
<li><p>Under <strong>Maven</strong>, click <strong>Add Maven</strong></p>
</li>
<li><p>Name: <code>maven3</code></p>
</li>
<li><p>Check <strong>Install automatically</strong></p>
</li>
<li><p>Choose the latest version</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742416409977/7df028d4-c233-4710-91f6-0ba3a361ac7b.png" alt /></p>
<h4 id="heading-install-docker">Install Docker</h4>
<ul>
<li><p>Under <strong>Docker</strong>, click <strong>Add Docker</strong></p>
</li>
<li><p>Name: <code>docker</code></p>
</li>
<li><p>Check <strong>Install automatically</strong></p>
</li>
<li><p>Installer: Download from <code>docker.com</code></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742416561534/69ba46ab-3565-45be-8180-a8c2b94b7939.png" alt /></p>
<blockquote>
<p>Once these tools are set up, you are ready to create a pipeline.</p>
</blockquote>
<h2 id="heading-step-2-create-a-jenkins-pipeline">Step 2: Create a Jenkins Pipeline</h2>
<p>A Jenkins pipeline automates the software development lifecycle, including building, testing, and deploying applications.</p>
<h3 id="heading-1-access-jenkins-dashboard">1. Access Jenkins Dashboard</h3>
<ul>
<li><p>Log in to Jenkins using your browser.</p>
</li>
<li><p>On the <strong>Jenkins Dashboard</strong>, click <strong>New Item</strong> to create a new job.</p>
</li>
</ul>
<h3 id="heading-2-define-pipeline-details">2. Define Pipeline Details</h3>
<ul>
<li><p><strong>Enter an Item Name</strong> (Example: <code>BoardGame</code>).</p>
</li>
<li><p><strong>Select Item Type</strong> → Choose <strong>Pipeline</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742416787511/358cbedd-0ffa-4f4c-a74d-c071ea6edb19.png" alt /></p>
</li>
<li><p>Click <strong>OK</strong> to proceed.</p>
</li>
</ul>
<h3 id="heading-3-add-github-token-in-jenkins-credentials">3. Add GitHub Token in Jenkins Credentials</h3>
<ul>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Credentials</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742468479308/d6a00460-3365-458b-8583-2b247cec6e5b.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click on <strong>global</strong> → <strong>Add Credentials</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742468525670/f8c1fb52-0de0-494e-a2b8-5501685bf086.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742468559072/a97b39c6-fc7d-41fb-8dc0-324921b3744c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Fill the details:</p>
<ul>
<li><p>Kind: <code>Username with password</code></p>
</li>
<li><p>Scope: <code>Global</code></p>
</li>
<li><p>Username: <code>GitHub username</code></p>
</li>
<li><p>Password: <code>Personal Access Token from GitHub</code></p>
</li>
<li><p>ID: <code>git-cred</code></p>
</li>
</ul>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742469114406/6afbffb0-cfe6-48e7-8528-41299c32d4a2.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Click <strong>Create</strong>.</li>
</ul>
<h3 id="heading-4-install-trivy-on-the-jenkins-instance">4. Install Trivy on the Jenkins Instance</h3>
<pre><code class="lang-bash">sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg &gt; /dev/null
<span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb <span class="hljs-subst">$(lsb_release -sc)</span> main"</span> | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742470025335/c46a6388-b422-4afe-a842-cd6fc688b202.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-5-add-sonarqube-credentials-in-jenkins">5. Add SonarQube Credentials in Jenkins</h3>
<ul>
<li><p>Go to <strong>SonarQube Server → Administration → Security → Users</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742490813414/47f17cd1-f4b8-48ed-8761-d1a02cbdf5ea.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742491102592/310f0339-1356-49fa-822b-b17b4ba9cd5e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Open the <strong>Tokens</strong> tab for the user to generate a new token.</p>
</li>
<li><p>Set a name and expiry date for the token and click <strong>Generate</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742491268933/ebe19ad2-9e3d-452d-9bb8-ebaf9a2fa217.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Credentials</strong>.</p>
</li>
<li><p>Click <strong>global</strong> → <strong>Add new Credential</strong>.</p>
</li>
<li><p>Kind: <code>Secret text</code></p>
</li>
<li><p>Scope: <code>Global</code></p>
</li>
<li><p>Secret: <code>&lt;token-generated&gt;</code></p>
</li>
<li><p>ID: <code>sonar-token</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742491436760/261b239b-e187-4cff-9554-aeba6cb06f58.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
</ul>
<h3 id="heading-6-set-up-sonarqube-server-in-jenkins">6. Set Up SonarQube Server in Jenkins</h3>
<ul>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → System</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742490551135/1e7ecc6d-7a10-4dde-9c52-20656fb6b8be.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Add SonarQube</strong>:</p>
<ul>
<li><p>Name: <code>sonar</code></p>
</li>
<li><p>Server URL: <code>http://&lt;SonarQube-public-IP&gt;:9000</code></p>
</li>
<li><p>Authentication Token: <code>sonar-token</code></p>
</li>
</ul>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742491736301/0d06c95b-aec3-4fea-814a-1fdce8326f38.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Apply and Save.</li>
</ul>
<h3 id="heading-7-configure-sonarqube-webhook">7. Configure SonarQube Webhook</h3>
<ul>
<li><p>Go to <strong>SonarQube Server → Administration → Configuration → Webhook</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742492710951/2aa08e43-3e9e-49d6-a6b9-8575efad35a3.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create a new webhook:</p>
<ul>
<li><p>URL: <code>http://&lt;Jenkins-public-IP&gt;:8080/sonarqube-webhook</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742493028358/24267e28-3767-438d-8f56-1bf3fb34b3a7.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
</ul>
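<p>This webhook is what lets a pipeline pause until SonarQube reports the analysis result back to Jenkins. A sketch of the corresponding stage, using the <code>waitForQualityGate</code> step from the SonarQube Scanner plugin:</p>
<pre><code class="lang-groovy">stage('Quality Gate') {
    steps {
        timeout(time: 10, unit: 'MINUTES') {
            // Fails the build if the SonarQube quality gate does not pass
            waitForQualityGate abortPipeline: true
        }
    }
}
</code></pre>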
<h3 id="heading-8-add-nexus-repository-credentials">8. Add Nexus Repository Credentials</h3>
<ul>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Managed files</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742530535884/17d053f7-b14f-48a2-90da-9b02e878e265.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Add a new config: <strong>Global Maven settings.xml</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742530613712/b0d549a2-7704-49a6-9f06-983b970f2fd0.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>ID: <code>global-settings</code>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742530754947/7189dbaf-efa5-4f9f-973b-8d4263bc2789.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>In the content section, update <code>&lt;servers&gt;&lt;/servers&gt;</code>:</p>
<pre><code class="lang-xml">  &lt;!-- The &lt;id&gt; values must match the repository ids in pom.xml's distributionManagement --&gt;
  &lt;server&gt;
      &lt;id&gt;nexus-releases&lt;/id&gt;
      &lt;username&gt;nexus-username&lt;/username&gt;
      &lt;password&gt;nexus-password&lt;/password&gt;
  &lt;/server&gt;
  &lt;server&gt;
      &lt;id&gt;nexus-snapshots&lt;/id&gt;
      &lt;username&gt;nexus-username&lt;/username&gt;
      &lt;password&gt;nexus-password&lt;/password&gt;
  &lt;/server&gt;
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742530858838/ebeed262-1ba6-40fb-a59e-55739af1dae7.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742531252939/50895fbb-750e-4998-be86-1c78129ad748.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Submit</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742531400375/b5bd39d1-b92c-492f-880a-5917661b87d1.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
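<p>The managed file's ID (<code>global-settings</code>) can then be referenced in a pipeline stage to publish artifacts to Nexus. A minimal sketch using the <code>withMaven</code> step from the Pipeline Maven Integration plugin; the tool names assume the <code>jdk17</code>/<code>maven3</code> configuration from earlier:</p>
<pre><code class="lang-groovy">stage('Publish to Nexus') {
    steps {
        // Uses the managed Global Maven settings.xml for Nexus credentials
        withMaven(globalMavenSettingsConfig: 'global-settings', jdk: 'jdk17', maven: 'maven3') {
            sh "mvn deploy"
        }
    }
}
</code></pre>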
<h3 id="heading-9-add-dockerhub-credentials">9. Add DockerHub Credentials</h3>
<ul>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Credentials</strong>.</p>
</li>
<li><p>Click <strong>global</strong> → <strong>Add Credential</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742531926748/8500922e-e793-4a90-8ee6-88d26259de34.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742531970790/c1923e50-957a-47ba-81c1-0202f5df5c2e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Kind: <code>Username with Password</code></p>
</li>
<li><p>Scope: <code>Global</code></p>
</li>
<li><p>Username: <code>DockerHub-username</code></p>
</li>
<li><p>Password: <code>DockerHub-password</code></p>
</li>
<li><p>ID: <code>docker-cred</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742533358920/cd45c610-a03c-4a58-bcba-ccba6b20ece7.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
</ul>
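<p>These credentials can then drive a build-and-push stage. A sketch based on the <code>withDockerRegistry</code> step from the Docker Pipeline plugin installed earlier; the image name is a placeholder to replace with your own DockerHub repository:</p>
<pre><code class="lang-groovy">stage('Build &amp; Push Docker Image') {
    steps {
        script {
            // Logs in to DockerHub using the docker-cred credential
            withDockerRegistry(credentialsId: 'docker-cred', toolName: 'docker') {
                sh "docker build -t &lt;dockerhub-username&gt;/boardgame:latest ."
                sh "docker push &lt;dockerhub-username&gt;/boardgame:latest"
            }
        }
    }
}
</code></pre>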
<h3 id="heading-10-create-a-kubernetes-service-account">10. Create a Kubernetes Service Account</h3>
<h4 id="heading-create-namespace">Create Namespace</h4>
<pre><code class="lang-bash">kubectl create ns webapps
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742577856691/59601848-0378-4ac4-ac7e-b547f8ac9bf6.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-create-service-account">Create Service Account</h4>
<pre><code class="lang-yaml">apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: webapps
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742577733920/ea90c522-de74-4e66-90e3-b390e1133b06.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">kubectl apply -f svcacc.yaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742577915978/0f4beb28-7242-4df3-a02e-4e9e84db6793.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-11-create-a-role-using-in-kubernetes-cluster">11. Create a <code>Role</code> in the Kubernetes Cluster</h3>
<ul>
<li><p><strong>Create a</strong> <code>role.yaml</code> <strong>configuration file</strong></p>
<pre><code class="lang-yaml">  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: app-role
    namespace: webapps
  rules:
    - apiGroups:
        - ""
        - apps
        - autoscaling
        - batch
        - extensions
        - policy
        - rbac.authorization.k8s.io
      resources:
        - pods
        - secrets
        - componentstatuses
        - configmaps
        - daemonsets
        - deployments
        - events
        - endpoints
        - horizontalpodautoscalers
        - ingresses
        - jobs
        - limitranges
        - namespaces
        - nodes
        - persistentvolumes
        - persistentvolumeclaims
        - resourcequotas
        - replicasets
        - replicationcontrollers
        - serviceaccounts
        - services
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
</li>
<li><p><strong>Create the role using the command</strong></p>
<pre><code class="lang-bash">  kubectl apply -f role.yaml
</code></pre>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742578550277/3a5dc54b-08c6-40eb-800b-a616d25c08b6.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-12-bind-role-to-service-account">12. Bind Role to Service Account</h3>
<ul>
<li><p><strong>Create a</strong> <code>bind.yaml</code> <strong>configuration file</strong></p>
<pre><code class="lang-yaml">  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: app-rolebinding
    namespace: webapps
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: app-role
  subjects:
    - kind: ServiceAccount
      name: jenkins
      namespace: webapps
</code></pre>
</li>
<li><p><strong>Create the</strong> <code>rolebinding</code> <strong>using the command</strong></p>
<pre><code class="lang-bash">  kubectl apply -f bind.yaml
</code></pre>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742578784522/4fc494f6-0ec3-4b9f-971e-77a36dd90c21.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-13-create-kubernetes-secret-for-jenkins">13. Create Kubernetes Secret for Jenkins</h3>
<ul>
<li><p>Create a <code>secret</code> resource configuration file: <code>secrets.yaml</code></p>
<pre><code class="lang-yaml">  apiVersion: v1
  kind: Secret
  metadata:
    name: mysecretname
    annotations:
      kubernetes.io/service-account.name: jenkins
  type: kubernetes.io/service-account-token
</code></pre>
</li>
<li><p>Create secret using the kubectl command</p>
<pre><code class="lang-bash">  kubectl apply -f secrets.yaml -n webapps
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742579427249/1be43379-1fd5-4c15-88b7-0a8b6b5d46a0.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Retrieve the token that Jenkins will use to authenticate to the cluster:</p>
<pre><code class="lang-bash">  kubectl describe secret mysecretname -n webapps
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742579599694/ee6de7ef-667e-4ebc-9bc7-d2c0604f7e07.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Add the copied token to Jenkins <strong>Credentials</strong>:</p>
<ul>
<li><p>Go to Jenkins Dashboard → Manage Jenkins → Credentials</p>
</li>
<li><p>Click on Global and Add a new credential</p>
</li>
<li><p>Kind: <code>Secret text</code></p>
</li>
<li><p>Scope: <code>Global</code></p>
</li>
<li><p>secret: <code>token-that-you-copied</code></p>
</li>
<li><p>ID: <code>k8s-cred</code></p>
</li>
</ul>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742580186023/ea8e8ad8-b074-4cb3-989e-c2801d6d4ad4.png" alt class="image--center mx-auto" /></p>
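<p>With the <code>k8s-cred</code> secret in place, a deployment stage can authenticate to the cluster through the <code>withKubeConfig</code> step from the Kubernetes CLI plugin. A hedged sketch; the server URL and manifest name are placeholders for your cluster's API endpoint and your own deployment file:</p>
<pre><code class="lang-groovy">stage('Deploy to Kubernetes') {
    steps {
        // Authenticates with the service-account token stored as k8s-cred
        withKubeConfig(credentialsId: 'k8s-cred', namespace: 'webapps', serverUrl: 'https://&lt;k8s-API-server-endpoint&gt;') {
            sh "kubectl apply -f deployment-service.yaml"
            sh "kubectl get pods -n webapps"
        }
    }
}
</code></pre>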
<h2 id="heading-step-3-configure-mail-notification">Step 3: Configure Mail Notification</h2>
<h3 id="heading-1-generate-gmail-app-password">1. Generate Gmail App Password</h3>
<ul>
<li><p>In your Google Account settings, search for <strong>App Passwords</strong> (2-Step Verification must be enabled).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742585033059/b8a7e99a-e3df-45f6-8014-62b856e09fc9.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create a new App password with the name: <code>Jenkins</code>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742585085514/ad7915f8-c80e-4680-8969-cfb61e5a04f9.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Copy the generated token.</p>
</li>
</ul>
<h3 id="heading-2-add-gmail-credentials-to-jenkins">2. Add Gmail Credentials to Jenkins</h3>
<ul>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Credentials</strong>.</p>
</li>
<li><p>Click on <strong>Global</strong> and add a new credential:</p>
<ul>
<li><p><strong>Kind</strong>: Username with password</p>
</li>
<li><p><strong>Scope</strong>: Global</p>
</li>
<li><p><strong>Username</strong>: your-gmail</p>
</li>
<li><p><strong>Password</strong>: generated token</p>
</li>
<li><p><strong>ID</strong>: <code>mail-cred</code></p>
</li>
</ul>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742586006242/fd011391-d083-4629-8663-9c489e560fb8.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-3-configure-mail-server-in-jenkins">3. Configure Mail Server in Jenkins</h3>
<ul>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → System</strong>.</p>
</li>
<li><p>Search for <strong>Extended E-mail Notification</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742585462304/90c56d46-0a67-44c0-9ebf-fec27a2fa23d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Fill in the details:</p>
<ul>
<li><p><strong>SMTP server</strong>: <code>smtp.gmail.com</code></p>
</li>
<li><p><strong>SMTP port</strong>: <code>465</code></p>
</li>
<li><p><strong>Credentials</strong>: Choose <code>mail-cred</code></p>
</li>
<li><p>Check <strong>Use SSL</strong></p>
</li>
</ul>
</li>
<li><p>Find the <strong>E-mail Notification</strong> section and configure</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742586238607/edf696ce-0456-40cb-a780-ef7fb909f8c7.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>SMTP server</strong>: <code>smtp.gmail.com</code></p>
</li>
<li><p><strong>SMTP port</strong>: <code>465</code></p>
</li>
<li><p>Check <strong>Use SSL</strong></p>
</li>
<li><p>Check <strong>Use SMTP Authentication</strong></p>
</li>
<li><p><strong>Username</strong>: email ID</p>
</li>
<li><p><strong>Password</strong>: copied app token</p>
</li>
</ul>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742586528509/0ae116cf-c9a9-47e6-b92c-d2b1012d0fe5.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Save the changes.</li>
</ul>
<h2 id="heading-step-4-create-a-jenkins-pipeline">Step 4: Configure the Jenkins Pipeline</h2>
<p>With tools, credentials, and notifications in place, open the <code>BoardGame</code> pipeline job created in Step 2 and configure it.</p>
<h3 id="heading-1-general-settings">1. General Settings</h3>
<ul>
<li><p>Check <strong>Discard Old Builds</strong> to limit stored build history.</p>
</li>
<li><p>Configure:</p>
<ul>
<li><p><strong>Max builds to keep:</strong> 10</p>
</li>
<li><p><strong>Max days to keep builds:</strong> 30</p>
</li>
</ul>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742417102185/b1f02db0-9fd9-422b-be91-6306ec67cfdc.png" alt /></p>
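<p>The same retention policy can also be declared in the pipeline script itself, so it survives job re-creation. A minimal sketch using the standard <code>options</code> directive:</p>
<pre><code class="lang-groovy">pipeline {
    agent any
    options {
        // mirrors the "Discard Old Builds" UI settings above
        buildDiscarder(logRotator(numToKeepStr: '10', daysToKeepStr: '30'))
    }
    // ... stages as in the pipeline script
}
</code></pre>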
<h3 id="heading-2-define-the-pipeline-script">2. Define the Pipeline Script</h3>
<pre><code class="lang-groovy">pipeline {
    agent any

    tools{
        jdk <span class="hljs-string">'jdk17'</span>
        maven <span class="hljs-string">'maven3'</span>
    }
    environment {
        SCANNER_HOME= tool <span class="hljs-string">'sonar-scanner'</span>
    }

    stages {
        stage(<span class="hljs-string">'Git Checkout'</span>) {
            steps {
                git credentialsId: <span class="hljs-string">'git-cred'</span>, url: <span class="hljs-string">'https://github.com/praduman8435/Boardgame.git'</span>
            }
        }
        stage(<span class="hljs-string">'Compile'</span>) {
            steps {
                sh <span class="hljs-string">"mvn compile"</span>
            }
        }
        stage(<span class="hljs-string">'Test'</span>) {
            steps {
                sh <span class="hljs-string">"mvn test"</span>
            }
        }
        stage(<span class="hljs-string">'File System Scan'</span>) {
            steps {
                sh <span class="hljs-string">"trivy fs --format table -o trivy-fs-report.html ."</span>
            }
        }
        stage(<span class="hljs-string">'Code Quality Analysis'</span>) {
            steps {
                withSonarQubeEnv(<span class="hljs-string">'sonar'</span>) {
                    sh <span class="hljs-string">''</span><span class="hljs-string">' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=BoardGame -Dsonar.projectKey=BoardGame \
                            -Dsonar.java.binaries=. '</span><span class="hljs-string">''</span>
                }
            }
        }
        stage(<span class="hljs-string">'Quality Gate'</span>) {
            steps {
                script {
                    waitForQualityGate abortPipeline: <span class="hljs-literal">false</span>, credentialsId: <span class="hljs-string">'sonar-token'</span>
                }
            }
        }
        stage(<span class="hljs-string">'Build'</span>) {
            steps {
                sh <span class="hljs-string">"mvn package"</span>
            }
        }
        stage(<span class="hljs-string">'Publish Artifacts to Nexus'</span>) {
            steps {
                withMaven(globalMavenSettingsConfig: <span class="hljs-string">'global-settings'</span>, jdk: <span class="hljs-string">'jdk17'</span>, maven: <span class="hljs-string">'maven3'</span>, mavenSettingsConfig: <span class="hljs-string">''</span>, traceability: <span class="hljs-literal">true</span>) {
                    sh <span class="hljs-string">"mvn deploy"</span>
                }
            }
        }
        stage(<span class="hljs-string">'Build &amp; Tag Docker Image'</span>) {
            steps {
                script {
                    withDockerRegistry(credentialsId: <span class="hljs-string">'docker-cred'</span>, toolName: <span class="hljs-string">'docker'</span>) {
                        sh <span class="hljs-string">"docker build -t thepraduman/boardgame:latest ."</span>
                    }
                }
            }
        }
        stage(<span class="hljs-string">'Docker Image Scan'</span>) {
            steps {
                sh <span class="hljs-string">"trivy image --format table -o trivy-image-report.html thepraduman/boardgame:latest"</span>
            }
        }
        stage(<span class="hljs-string">'Push Docker Image'</span>) {
            steps {
                script {
                    withDockerRegistry(credentialsId: <span class="hljs-string">'docker-cred'</span>, toolName: <span class="hljs-string">'docker'</span>) {
                        sh <span class="hljs-string">"docker push thepraduman/boardgame:latest"</span>
                    }
                }
            }
        }
        stage(<span class="hljs-string">'Deploy to k8s'</span>) {
            steps {
                withKubeConfig(caCertificate: <span class="hljs-string">''</span>, clusterName: <span class="hljs-string">'kubernetes'</span>, contextName: <span class="hljs-string">''</span>, credentialsId: <span class="hljs-string">'k8s-cred'</span>, namespace: <span class="hljs-string">'webapps'</span>, restrictKubeConfigAccess: <span class="hljs-literal">false</span>, serverUrl: <span class="hljs-string">'https://172.31.39.125:6443'</span>) {
                    sh <span class="hljs-string">"kubectl apply -f k8s-manifest"</span>
                }
            }
        }
        stage(<span class="hljs-string">'Verify the deployment'</span>) {
            steps {
                withKubeConfig(caCertificate: <span class="hljs-string">''</span>, clusterName: <span class="hljs-string">'kubernetes'</span>, contextName: <span class="hljs-string">''</span>, credentialsId: <span class="hljs-string">'k8s-cred'</span>, namespace: <span class="hljs-string">'webapps'</span>, restrictKubeConfigAccess: <span class="hljs-literal">false</span>, serverUrl: <span class="hljs-string">'https://172.31.39.125:6443'</span>) {
                    sh <span class="hljs-string">"kubectl get pods -n webapps"</span>
                    sh <span class="hljs-string">"kubectl get svc -n webapps"</span>
                }
            }
        }
    }
}
</code></pre>
<h3 id="heading-3-build-the-pipeline">3. Build the Pipeline</h3>
<blockquote>
<p>Click <strong>Build Now</strong> to run the pipeline</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742882449933/946c8db6-299f-4f3f-a87b-ab90923d7aab.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Check the SonarQube Dashboard</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742882629761/392b1a4a-7f9f-4a09-975d-3f2889218f69.png" alt class="image--center mx-auto" /></p>
<hr />
<h1 id="heading-setup-monitoring-prometheus-amp-grafana">Setup Monitoring (Prometheus &amp; Grafana)</h1>
<h3 id="heading-create-an-ec2-instance-for-monitoring">Create an EC2 Instance for Monitoring</h3>
<ol>
<li><p>Go to AWS console and launch an EC2 instance.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742664679698/f5763f61-df66-435f-a5a7-2b50c6ec4d46.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Instance Name:</strong> Monitoring</p>
</li>
<li><p><strong>AMI:</strong> Ubuntu</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742664797548/174a8359-bca7-4a0f-b484-289ecf28a8e4.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Instance Type:</strong> t2.large</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742664869942/96e964f7-9cb7-4fbe-8b83-2d21872aea6d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Key-Pair:</strong> Create a new key-pair or use an existing one.</p>
</li>
<li><p><strong>Security Group:</strong> Use the previously created one.</p>
</li>
<li><p><strong>Storage:</strong> 20 GiB</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742665087748/1e79c19b-226f-4c43-be31-97cd5f093236.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Launch Instance</strong>.</p>
</li>
</ol>
<h3 id="heading-ssh-into-the-monitoring-server">SSH into the Monitoring Server</h3>
<ol>
<li><p>Copy the <strong>public IP</strong> of the monitoring server.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742665243164/4d81c20d-b3fb-4782-9ab6-679abb77c76d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Connect via SSH from your local machine:</p>
<pre><code class="lang-sh"> ssh -i &lt;path-to-pem-file&gt; ubuntu@&lt;public-IP&gt;
</code></pre>
</li>
<li><p>Update the server:</p>
<pre><code class="lang-sh"> sudo apt update
</code></pre>
</li>
</ol>
<h3 id="heading-install-prometheus">Install Prometheus</h3>
<ol>
<li><p>Download Prometheus:</p>
<pre><code class="lang-sh"> wget https://github.com/prometheus/prometheus/releases/download/v3.2.1/prometheus-3.2.1.linux-amd64.tar.gz
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742666051863/192dc5b1-8cd5-4719-8832-b71cdec7fded.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Extract the tar file:</p>
<pre><code class="lang-sh"> tar -xvf prometheus-3.2.1.linux-amd64.tar.gz
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742666199807/0ea9c08c-b555-49f4-9bac-33ccf4e593b4.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Navigate to the Prometheus directory:</p>
<pre><code class="lang-sh"> <span class="hljs-built_in">cd</span> prometheus-3.2.1.linux-amd64
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742666422669/85ce311c-391c-49af-acbf-085aacf53286.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Run Prometheus in the background:</p>
<pre><code class="lang-sh"> ./prometheus &amp;
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742667050594/5ae10b10-24a8-4e0f-9c24-6b15bc24df99.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Access Prometheus at:</p>
<pre><code class="lang-bash"> http://&lt;public-IP-of-monitoring&gt;:9090
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742667378300/dfcc0b89-bc49-432f-a021-c20c4816a9f1.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
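<p>Running Prometheus with <code>&amp;</code> is fine for a quick test, but the process may be killed when your SSH session ends and will not survive a reboot. A more durable option is a systemd unit — a sketch, assuming you move the extracted directory to <code>/opt/prometheus</code>:</p>
<pre><code class="lang-ini">[Unit]
Description=Prometheus
After=network-online.target

[Service]
User=ubuntu
ExecStart=/opt/prometheus/prometheus --config.file=/opt/prometheus/prometheus.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
</code></pre>
<p>Save it as <code>/etc/systemd/system/prometheus.service</code>, then run <code>sudo systemctl daemon-reload &amp;&amp; sudo systemctl enable --now prometheus</code>. The same pattern works for the Blackbox and Node exporters installed later in this guide.</p>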
<h3 id="heading-install-grafana">Install Grafana</h3>
<ol>
<li><p>Install dependencies:</p>
<pre><code class="lang-sh"> sudo apt-get install -y adduser libfontconfig1 musl
</code></pre>
</li>
<li><p>Download and install Grafana:</p>
<pre><code class="lang-sh"> wget https://dl.grafana.com/enterprise/release/grafana-enterprise_11.5.2_amd64.deb
 sudo dpkg -i grafana-enterprise_11.5.2_amd64.deb
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742667658786/61078215-c00c-4c8c-a045-b0277bce6d8c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Start Grafana:</p>
<pre><code class="lang-sh"> sudo systemctl start grafana-server
</code></pre>
</li>
<li><p>Access Grafana at:</p>
<pre><code class="lang-bash"> http://&lt;public-IP-of-monitoring&gt;:3000
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742726150834/720174b0-4944-4172-8269-0777ce346257.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Default credentials:</p>
<ul>
<li><p><strong>Username:</strong> admin</p>
</li>
<li><p><strong>Password:</strong> admin (Change it after logging in)</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742726424698/8f2be481-b688-452f-9e32-fdd8b975e06f.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-install-blackbox-exporter">Install Blackbox Exporter</h3>
<ol>
<li><p>Download Blackbox Exporter:</p>
<pre><code class="lang-sh"> wget https://github.com/prometheus/blackbox_exporter/releases/download/v0.26.0/blackbox_exporter-0.26.0.linux-amd64.tar.gz
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742726710033/49b0bca0-85eb-47b8-9c22-e6850f9412f4.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Extract the tar file:</p>
<pre><code class="lang-sh"> tar -xvf blackbox_exporter-0.26.0.linux-amd64.tar.gz
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742726833827/c4c985f6-c827-4432-a78a-a3c91e7f137f.png" alt /></p>
</li>
<li><p>Navigate to the directory:</p>
<pre><code class="lang-sh"> <span class="hljs-built_in">cd</span> blackbox_exporter-0.26.0.linux-amd64
</code></pre>
</li>
<li><p>Run Blackbox Exporter:</p>
<pre><code class="lang-sh"> ./blackbox_exporter &amp;
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742726992776/db9deafa-6968-4442-bd50-bbdaa13d39a9.png" alt /></p>
</li>
<li><p>Access at:</p>
<pre><code class="lang-bash"> http://&lt;public-IP-of-monitoring&gt;:9115
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742727129092/4bb31477-e855-48d4-ac05-fc3b09fea67f.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-configure-prometheus">Configure Prometheus</h3>
<ol>
<li><p>Edit <code>prometheus.yml</code>:</p>
<pre><code class="lang-sh"> vim prometheus.yml
</code></pre>
</li>
<li><p>Add Blackbox Exporter job:</p>
<pre><code class="lang-yaml"> <span class="hljs-bullet">-</span> <span class="hljs-attr">job_name:</span> <span class="hljs-string">'blackbox'</span>
   <span class="hljs-attr">metrics_path:</span> <span class="hljs-string">/probe</span>
   <span class="hljs-attr">params:</span>
     <span class="hljs-attr">module:</span> [<span class="hljs-string">http_2xx</span>]  <span class="hljs-comment"># Look for an HTTP 200 response.</span>
   <span class="hljs-attr">static_configs:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">targets:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">http://prometheus.io</span>    <span class="hljs-comment"># Target to probe with http.</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">https://prometheus.io</span>   <span class="hljs-comment"># Target to probe with https.</span>
       <span class="hljs-bullet">-</span> <span class="hljs-string">http://example.com:8080</span> <span class="hljs-comment"># Target to probe with http on port 8080.</span>
   <span class="hljs-attr">relabel_configs:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">source_labels:</span> [<span class="hljs-string">__address__</span>]
       <span class="hljs-attr">target_label:</span> <span class="hljs-string">__param_target</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">source_labels:</span> [<span class="hljs-string">__param_target</span>]
       <span class="hljs-attr">target_label:</span> <span class="hljs-string">instance</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">target_label:</span> <span class="hljs-string">__address__</span>
       <span class="hljs-attr">replacement:</span> <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span><span class="hljs-string">:9115</span>  <span class="hljs-comment"># The blackbox exporter's real hostname:port.</span>
</code></pre>
</li>
<li><p>Restart Prometheus:</p>
<pre><code class="lang-sh"> pgrep prometheus
</code></pre>
<pre><code class="lang-bash"> <span class="hljs-built_in">kill</span> &lt;PID&gt;
</code></pre>
<pre><code class="lang-bash"> ./prometheus &amp;
</code></pre>
</li>
<li><p>Check <strong>Prometheus Dashboard → Status → Target</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742761732697/f5c206cc-c372-441e-b78d-f2550dcb1f7d.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
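<p>Once the targets show as <strong>UP</strong>, you can sanity-check the probe data from the Prometheus query page. A few example queries, using the standard Blackbox Exporter metric names:</p>
<pre><code class="lang-bash"># 1 if the last probe of each target succeeded, 0 otherwise
probe_success

# HTTP status code returned by each probed endpoint
probe_http_status_code

# End-to-end probe latency in seconds
probe_duration_seconds
</code></pre>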
<h3 id="heading-add-prometheus-as-datasource-in-grafana">Add Prometheus as DataSource in Grafana</h3>
<ol>
<li><p>Go to <strong>Grafana Dashboard</strong>.</p>
</li>
<li><p>Click <strong>Data Sources</strong> under <strong>Connections</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742761945767/9ba28078-693b-4274-8eee-ab15013240c2.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Search and select <strong>Prometheus</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742762018587/d1500587-8cbd-4001-a99a-b0e64d98e568.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Add a new data source and enter the <strong>Prometheus server URL</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742762119387/a8cc8990-1c87-4c3d-b7d9-f7a65f3adec4.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Save &amp; Test</strong>.</p>
</li>
</ol>
<h3 id="heading-import-grafana-dashboard">Import Grafana Dashboard</h3>
<ol>
<li><p>Go to <strong>Grafana Home Page</strong>.</p>
</li>
<li><p>Click <strong>+ (Add)</strong> → <strong>Import Dashboard</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742762290098/b73f1a43-de5c-4e36-8046-be95f426aa76.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Enter <strong>7587</strong> as the dashboard ID and click <strong>Load</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742762490075/5ccc54fe-d9f2-46d4-b860-b2958b08d979.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Select <strong>Prometheus</strong> as the data source and click <strong>Import</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742788183216/a6f7cb27-2be0-463e-9294-e4be5b9a177b.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-install-prometheus-plugin-on-jenkins">Install Prometheus Plugin on Jenkins</h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → Plugins</strong>.</p>
</li>
<li><p>Search for <strong>Prometheus</strong> and install it.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742788442407/fc6d120a-1264-46e3-9d63-aed3bae23104.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Restart Jenkins.</p>
</li>
</ol>
<h3 id="heading-install-node-exporter-on-monitoring-server">Install Node Exporter on Monitoring Server</h3>
<ol>
<li><p>Download Node Exporter:</p>
<pre><code class="lang-sh"> wget https://github.com/prometheus/node_exporter/releases/download/v1.9.0/node_exporter-1.9.0.linux-amd64.tar.gz
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742788796214/f83f28e7-398e-46d8-a48e-23db090a4f3f.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Extract the tar file:</p>
<pre><code class="lang-sh"> tar -xvf node_exporter-1.9.0.linux-amd64.tar.gz
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742788852662/eb460dc7-e9db-474e-8e8f-988fc10e558d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Navigate to the directory:</p>
<pre><code class="lang-sh"> <span class="hljs-built_in">cd</span> node_exporter-1.9.0.linux-amd64/
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742788953749/74679084-7b9c-4c61-9af5-97ae109f1aee.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Run Node Exporter:</p>
<pre><code class="lang-sh"> ./node_exporter &amp;
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742789041674/8be57d4d-8e8a-4768-a6a3-0d9292eb7a93.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Access at:</p>
<pre><code class="lang-bash"> http://&lt;public-IP-of-monitoring-server&gt;:9100
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742789147305/60351336-b2fa-40fc-b2db-3f0c6458fea3.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-configure-prometheus-for-jenkins">Configure Prometheus for Jenkins</h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard → Manage Jenkins → System</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742789400541/ed4e5d86-1551-4fd8-9076-0cb69afd36c7.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Search for the <strong>Prometheus</strong> section.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742789480397/9d7213ea-c0cf-4951-8b01-d41e303d4b13.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Review the remaining options, keep the defaults unless your setup needs changes, and save.</p>
</li>
</ol>
<h3 id="heading-edit-the-prometheusyml-file-in-monitoring-server">Edit the <code>prometheus.yml</code> file on the monitoring server</h3>
<ol>
<li><p>Edit <code>prometheus.yml</code>:</p>
<pre><code class="lang-sh"> vim prometheus.yml
</code></pre>
</li>
<li><p>Add the following jobs:</p>
<pre><code class="lang-yaml"> <span class="hljs-bullet">-</span> <span class="hljs-attr">job_name:</span> <span class="hljs-string">node</span>
   <span class="hljs-attr">static_configs:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">targets:</span> [<span class="hljs-string">'&lt;jenkins-public-IP&gt;:9100'</span>]
 <span class="hljs-bullet">-</span> <span class="hljs-attr">job_name:</span> <span class="hljs-string">jenkins</span>
   <span class="hljs-attr">metrics_path:</span> <span class="hljs-string">'/prometheus'</span>
   <span class="hljs-attr">static_configs:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">targets:</span> [<span class="hljs-string">'&lt;jenkins-public-IP&gt;:8080'</span>]
</code></pre>
</li>
<li><p>Restart Prometheus:</p>
<pre><code class="lang-sh"> pgrep prometheus
</code></pre>
<pre><code class="lang-bash"> <span class="hljs-built_in">kill</span> &lt;PID&gt;
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742790391988/86df0107-69bf-4b9e-ad6d-9837f14f2d97.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash"> ./prometheus &amp;
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742797773853/0e457a4a-0f0d-47a6-8847-b071cbfa0efc.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Verify in <strong>Prometheus → Status → Target Health</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742797986110/3d776f9c-910d-4b7f-b880-d157d3c6ba07.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
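<p>With both jobs healthy, the new metrics are queryable right away. A couple of examples against the Node Exporter data (standard metric names):</p>
<pre><code class="lang-bash"># CPU utilisation per instance, averaged over 5 minutes
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Available memory in bytes
node_memory_MemAvailable_bytes
</code></pre>
<p>The metric names exposed by the Jenkins Prometheus plugin depend on its configured namespace, so browse <code>http://&lt;jenkins-public-IP&gt;:8080/prometheus</code> to see exactly what is available.</p>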
<h3 id="heading-import-node-exporter-dashboard-in-grafana">Import Node Exporter Dashboard in Grafana</h3>
<ol>
<li><p>Go to <strong>Grafana Dashboard → Import</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742798465161/6f2ca796-a7a5-46fb-a4dc-d9893b749dc3.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Enter ID: <strong>1860</strong> and click <strong>Load</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742798530868/801f1714-9818-461c-a768-d5c6fde1506f.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Select <strong>Prometheus</strong> as the data source and import.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1742798725367/0e6b729b-6092-4f86-894f-2ab0fff38bf3.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<blockquote>
<p>Monitoring setup with Prometheus, Grafana, and Jenkins is now complete! 🚀</p>
</blockquote>
<p><strong>Enjoyed the post? Buy me a coffee to support my writing!</strong></p>
<p><a target="_blank" href="https://buymeacoffee.com/praduman"><img src="https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&amp;logo=buy-me-a-coffee&amp;logoColor=black" alt="Buy Me A Coffee" /></a></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>This project covered the complete setup of a <strong>DevSecOps pipeline</strong>, from <strong>infrastructure provisioning</strong> to <strong>CI/CD automation, security, and monitoring</strong>. We deployed a <strong>Kubernetes cluster on AWS</strong>, automated deployments using <strong>Jenkins, Terraform, and ArgoCD</strong>, and integrated security tools like <strong>Trivy and SonarQube</strong>. For monitoring, we set up <strong>Prometheus and Grafana</strong> along with <strong>Node Exporter and Blackbox Exporter</strong>. By combining these tools, we built a <strong>secure, automated, and scalable environment</strong> that ensures smooth deployments, security compliance, and real-time monitoring.</p>
<hr />
<blockquote>
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Ultimate DevSecOps Project: End-to-End Kubernetes Three-Tier Deployment on AWS EKS with ArgoCD, Prometheus, Grafana & Jenkins]]></title><description><![CDATA[Introduction: Why DevSecOps?
In today’s fast-paced tech world, speed and security go hand in hand. You can’t just build and deploy apps quickly—you need to keep them secure from day one. That’s where DevSecOps comes in! It blends development, securit...]]></description><link>https://blogs.praduman.site/ultimate-devsecops-project-end-to-end-kubernetes-three-tier-deployment-on-aws-eks-with-argocd-prometheus-grafana-and-jenkins</link><guid isPermaLink="true">https://blogs.praduman.site/ultimate-devsecops-project-end-to-end-kubernetes-three-tier-deployment-on-aws-eks-with-argocd-prometheus-grafana-and-jenkins</guid><category><![CDATA[DevSecOps]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[sonarqube]]></category><category><![CDATA[#prometheus]]></category><category><![CDATA[Grafana]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[CI/CD]]></category><category><![CDATA[gitops]]></category><category><![CDATA[cloud security]]></category><category><![CDATA[Docker]]></category><category><![CDATA[automation]]></category><category><![CDATA[trivy]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Thu, 06 Mar 2025 16:57:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1741724598614/cf86907e-fb02-44b8-b119-876161bab7d8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction-why-devsecops"><strong>Introduction: Why DevSecOps?</strong></h2>
<p>In today’s fast-paced tech world, speed and security go hand in hand. You can’t just build and deploy apps quickly—you need to <strong>keep them secure</strong> from day one. That’s where <strong>DevSecOps</strong> comes in! It blends <strong>development, security, and operations</strong> into one seamless process, ensuring that security is baked into every stage of the pipeline instead of being an afterthought.</p>
<p>This <strong>Ultimate DevSecOps Project</strong> is all about deploying a <strong>three-tier application</strong> on <strong>AWS EKS</strong> with a fully automated <strong>CI/CD pipeline</strong>. The goal? To make sure every piece of code is <strong>secure, high-quality, and production-ready</strong> before it even goes live.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741698758056/91258b41-3a89-4900-8afa-031fb2fbcc77.webp" alt class="image--center mx-auto" /></p>
<h3 id="heading-whats-inside-this-project"><strong>What’s Inside This Project?</strong></h3>
<p>We’ll be using some of the <strong>best DevSecOps tools</strong> out there to make this happen:</p>
<p>✅ <strong>Jenkins</strong> – Automates the entire CI/CD pipeline.<br />✅ <strong>SonarQube &amp; OWASP Dependency-Check</strong> – Keep the code clean, secure, and compliant.<br />✅ <strong>Trivy</strong> – Scans container images for security vulnerabilities before deployment.<br />✅ <strong>Terraform</strong> – Automates infrastructure setup on AWS.<br />✅ <strong>ArgoCD</strong> – Ensures Kubernetes deployments stay in sync with Git (GitOps).<br />✅ <strong>Prometheus &amp; Grafana</strong> – Provide real-time monitoring and insights.</p>
<p>By the end of this project, you’ll have a <strong>fully functional, security-first DevSecOps pipeline</strong> that not only deploys applications but also keeps them <strong>safe, scalable, and efficient</strong>.</p>
<p>🚀 <strong>Ready to dive in? Let’s build something amazing!</strong></p>
<hr />
<h2 id="heading-source-code-amp-repository"><strong>Source Code &amp; Repository</strong> 👇</h2>
<p>You can find the source code for this project here:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/praduman8435/DevSecOps-in-Action/">https://github.com/praduman8435/DevSecOps-in-Action/</a></div>
<hr />
<h2 id="heading-step-1-set-up-a-jenkins-server-on-aws-ec2"><strong>Step 1: Set Up a Jenkins Server on AWS EC2</strong></h2>
<h4 id="heading-1-log-in-to-aws-and-launch-an-ec2-instance"><strong>1. Log in to AWS and Launch an EC2 Instance</strong></h4>
<ol>
<li><p>Go to the <strong>AWS Console</strong>.</p>
</li>
<li><p>Navigate to <strong>EC2</strong> (Elastic Compute Cloud).</p>
</li>
<li><p>Click <strong>Launch Instance</strong> to create a new virtual machine.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741292059477/784e1068-1aea-4399-b999-83d1a0bafb8f.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-2-configure-the-ec2-instance"><strong>2. Configure the EC2 Instance</strong></h4>
<ul>
<li><p><strong>AMI (Amazon Machine Image):</strong> Choose <strong>Ubuntu Server</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741292194265/b0f3b2c7-084b-4a03-a1a5-becb4af20c3b.png" alt class="image--center mx-auto" /></p>
<p>  <strong>Instance Type:</strong> Select <strong>t2.2xlarge</strong> (8 vCPUs, 32GB RAM) for better performance.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741292259900/ee5e29eb-0610-4cb0-a91f-eb48eef384c9.png" alt class="image--center mx-auto" /></p>
<p>  <strong>Key Pair:</strong> <strong>No need to create a key pair</strong> (proceed without key pair).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741292295866/66776b0e-ac60-4f13-ba85-0632c9fd4f35.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h4 id="heading-3-configure-security-group-firewall-rules"><strong>3. Configure Security Group (Firewall Rules)</strong></h4>
<p>Set up inbound rules to allow required network traffic:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Port</strong></td><td><strong>Protocol</strong></td><td><strong>Purpose</strong></td></tr>
</thead>
<tbody>
<tr>
<td>8080</td><td>TCP</td><td>Jenkins Web UI (Restrict access to trusted IPs or internal network).</td></tr>
<tr>
<td>50000</td><td>TCP</td><td>Communication between Jenkins Controller and Agents (for distributed builds).</td></tr>
<tr>
<td>443</td><td>TCP</td><td>HTTPS access (if Jenkins is secured with SSL).</td></tr>
<tr>
<td>80</td><td>TCP</td><td>HTTP access (if using an Nginx reverse proxy for Jenkins).</td></tr>
<tr>
<td>9000</td><td>TCP</td><td>SonarQube Access (for code analysis).</td></tr>
</tbody>
</table>
</div><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741292610377/9547ca61-0423-44d1-ad79-eb09e4a10d7d.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p><strong>Note:</strong> For security, avoid opening all these ports to the public. Instead, restrict access to trusted IPs or internal networks.</p>
</blockquote>
<ul>
<li><strong>Choose the created</strong> <code>security group</code> <strong>in the Network settings</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741292807123/55c5a524-3441-463e-bd3d-f64df1b65455.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-4-configure-storage-and-iam-role"><strong>4. Configure Storage and IAM Role</strong></h4>
<ul>
<li><p><strong>Storage:</strong> Set at least <strong>30 GiB</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741293161695/8f46ae7e-c591-407b-b8c6-748be37d1837.png" alt class="image--center mx-auto" /></p>
<p>  <strong>IAM Role:</strong> Attach an <strong>IAM profile with administrative access</strong> to allow Jenkins to manage AWS resources.</p>
<blockquote>
<p>Create an IAM profile with Administrator Access and attach it to EC2 instance</p>
</blockquote>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741293299421/fbdcfb68-26ae-4247-ab9c-a65826835a50.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741293467284/08898a1a-152b-40b4-8de5-547875fd01cd.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h4 id="heading-5-automate-installation-with-user-data"><strong>5. Automate Installation with User Data</strong></h4>
<p>Instead of manually installing required tools, you can automate it using a <strong>User Data script</strong>. This script will automatically install:</p>
<ul>
<li><p>Jenkins</p>
</li>
<li><p>Docker</p>
</li>
<li><p>Terraform</p>
</li>
<li><p>AWS CLI</p>
</li>
<li><p>SonarQube (running in a container)</p>
</li>
<li><p>Trivy (for security scanning)</p>
</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
<span class="hljs-comment"># For Ubuntu 22.04</span>
<span class="hljs-comment"># Installing Java</span>
sudo apt update -y
sudo apt install openjdk-17-jre -y
sudo apt install openjdk-17-jdk -y
java --version

<span class="hljs-comment"># Installing Jenkins</span>
curl -fsSL https://pkg.jenkins.io/debian/jenkins.io-2023.key | sudo tee \
  /usr/share/keyrings/jenkins-keyring.asc &gt; /dev/null
<span class="hljs-built_in">echo</span> deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list &gt; /dev/null
sudo apt-get update -y
sudo apt-get install jenkins -y

<span class="hljs-comment"># Installing Docker</span>
sudo apt update
sudo apt install docker.io -y
sudo usermod -aG docker jenkins
sudo usermod -aG docker ubuntu
sudo systemctl restart docker
sudo chmod 777 /var/run/docker.sock <span class="hljs-comment"># lab shortcut; the usermod lines above are the safer option</span>

<span class="hljs-comment"># If you don't want to install Jenkins, you can create a container of Jenkins</span>
<span class="hljs-comment"># docker run -d -p 8080:8080 -p 50000:50000 --name jenkins-container jenkins/jenkins:lts</span>

<span class="hljs-comment"># Run SonarQube in a Docker container</span>
docker run -d  --name sonar -p 9000:9000 sonarqube:lts-community


<span class="hljs-comment"># Installing AWS CLI</span>
curl <span class="hljs-string">"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"</span> -o <span class="hljs-string">"awscliv2.zip"</span>
sudo apt install unzip -y
unzip awscliv2.zip
sudo ./aws/install

<span class="hljs-comment"># Installing Kubectl</span>
sudo apt update
sudo apt install curl -y
sudo curl -LO <span class="hljs-string">"https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"</span>
sudo chmod +x kubectl
sudo mv kubectl /usr/<span class="hljs-built_in">local</span>/bin/
kubectl version --client


<span class="hljs-comment"># Installing eksctl</span>
curl --silent --location <span class="hljs-string">"https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_<span class="hljs-subst">$(uname -s)</span>_amd64.tar.gz"</span> | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/<span class="hljs-built_in">local</span>/bin
eksctl version

<span class="hljs-comment"># Installing Terraform</span>
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
<span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com <span class="hljs-subst">$(lsb_release -cs)</span> main"</span> | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt install terraform -y

<span class="hljs-comment"># Installing Trivy</span>
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
<span class="hljs-built_in">echo</span> deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt update
sudo apt install trivy -y


<span class="hljs-comment"># Installing Helm</span>
sudo snap install helm --classic
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741293562191/ca3dc12d-b66a-4296-9d4d-ce3aa9d5d3fc.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p><strong>Why use User Data?</strong></p>
<ul>
<li><p>Automates setup.</p>
</li>
<li><p>Ensures all required tools are installed before first login.</p>
</li>
<li><p>Saves time compared to manual installation.</p>
</li>
</ul>
</blockquote>
<h4 id="heading-6-launch-the-instance"><strong>6. Launch the Instance</strong></h4>
<ul>
<li>Click <strong>Launch Instance</strong> to start your Jenkins server.</li>
</ul>
<h4 id="heading-7-connect-to-the-instance"><strong>7. Connect to the Instance</strong></h4>
<p>Since SSH is disabled for security reasons, use the <strong>EC2 Instance Connect</strong> feature:</p>
<ul>
<li><p>Go to the <strong>AWS EC2 Console</strong>.</p>
</li>
<li><p>Select your Jenkins instance.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741293679169/40de1163-3fb8-4b77-a17d-2c4511faaba7.png" alt class="image--center mx-auto" /></p>
<p>  Click the <strong>"Connect"</strong> button at the top.</p>
</li>
<li><p>Choose the <strong>"EC2 Instance Connect"</strong> tab.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741293744180/7b851c53-7f58-41fc-80c5-0e508a6d3496.png" alt class="image--center mx-auto" /></p>
<p>  Click <strong>"Connect"</strong> to open a web-based terminal directly in your browser.</p>
</li>
</ul>
<h4 id="heading-8-monitor-running-processes"><strong>8. Monitor Running Processes</strong></h4>
<p>To check which processes are running inside the instance, use:</p>
<pre><code class="lang-bash">htop
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741293861459/ac005bc4-b040-4c06-9bdf-c41dc7a7f6cf.png" alt class="image--center mx-auto" /></p>
<p>This command provides a real-time view of system performance and running processes.</p>
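<p>Beyond watching processes, you can confirm that the User Data script finished its job. A small sketch that reports whether each expected tool landed on the <code>PATH</code> (tool names taken from the install script above):</p>

```shell
# Report each expected tool as installed or MISSING.
REPORT=""
for TOOL in java docker terraform aws kubectl eksctl trivy helm; do
  if command -v "$TOOL" >/dev/null 2>&1; then
    REPORT="${REPORT}${TOOL}: installed
"
  else
    REPORT="${REPORT}${TOOL}: MISSING
"
  fi
done
printf '%s' "$REPORT"
```

<p>Any <code>MISSING</code> line points you straight at the part of the User Data script that needs rerunning.</p>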
<hr />
<h2 id="heading-step-2-set-up-a-jenkins-pipeline-to-deploy-eks-and-networking-services"><strong>Step 2: Set Up a Jenkins Pipeline to Deploy EKS and Networking Services</strong></h2>
<h4 id="heading-1-access-jenkins"><strong>1. Access Jenkins</strong></h4>
<ol>
<li><p>Open your browser and go to:</p>
<pre><code class="lang-bash"> http://&lt;public-ip-of-jenkins-server&gt;:8080
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741294434215/b9c6c2dc-6c2b-494c-a273-09f79a5bd77d.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Retrieve the initial Jenkins admin password by running the following command in the <strong>Jenkins instance</strong>:</p>
<pre><code class="lang-bash"> sudo cat /var/lib/jenkins/secrets/initialAdminPassword
</code></pre>
</li>
<li><p>Follow the setup wizard:</p>
<ul>
<li><p>Install <strong>Suggested Plugins</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741294679400/dc21cab8-5540-4831-a9e3-c329abda050a.png" alt class="image--center mx-auto" /></p>
<p>  Create an <strong>Admin Username &amp; Password</strong>.</p>
</li>
<li><p>Complete the basic configuration.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741294906516/f9efff05-a7f4-49ac-9c24-7151cf978adc.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ol>
<blockquote>
<p>Now, <strong>Jenkins is ready to use</strong>.</p>
</blockquote>
<h4 id="heading-2-install-required-plugins"><strong>2. Install Required Plugins</strong></h4>
<p>To enable Jenkins to work with AWS and Terraform, install the following plugins manually:</p>
<ol>
<li><p>Click on <strong>"Manage Jenkins"</strong> &gt; <strong>"Plugins"</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741295083525/5e02f6ce-0f48-485a-8f6e-ab06c873b1b8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Search for and install these plugins:</p>
<ul>
<li><p><strong>AWS Credentials</strong> → Securely stores AWS access keys.</p>
</li>
<li><p><strong>Pipeline: AWS Steps</strong> → Adds built-in AWS-specific pipeline steps.</p>
</li>
<li><p><strong>Terraform</strong> → Enables Terraform automation in Jenkins.</p>
</li>
<li><p><strong>Pipeline: Stage View</strong> → Provides a visual representation of pipeline stages.</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741295264252/2beb8213-17ed-4720-b46b-b03a575f10fd.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-3-configure-aws-credentials-in-jenkins"><strong>3. Configure AWS Credentials in Jenkins</strong></h4>
<ol>
<li><p>Go to <strong>"Manage Jenkins"</strong> &gt; <strong>"Credentials"</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741295411239/6e5f0a6e-323b-430c-a8f5-cf1bdfd36d73.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>"Global"</strong> &gt; <strong>"Add Credentials"</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741295457576/9180c439-d5c1-4cf1-960d-ed83d794bb71.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Fill in the details:</p>
<ul>
<li><p><strong>Kind:</strong> AWS Credentials</p>
</li>
<li><p><strong>Scope:</strong> Global</p>
</li>
<li><p><strong>ID:</strong> <code>aws-creds</code></p>
</li>
<li><p><strong>Access Key ID:</strong> <code>From your AWS IAM user</code></p>
</li>
<li><p><strong>Secret Access Key:</strong> <code>From your AWS IAM user</code></p>
</li>
</ul>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741295930076/17d4c666-df84-406e-b68d-094ad5623a60.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>"Create"</strong>.</p>
</li>
</ol>
<blockquote>
<p><strong>Note:</strong> Create an <strong>IAM User</strong> in AWS with <strong>Administrator Access</strong> and use its credentials here.</p>
</blockquote>
<h4 id="heading-4-configure-terraform-in-jenkins"><strong>4. Configure Terraform in Jenkins</strong></h4>
<ol>
<li><p>Go to <strong>"Manage Jenkins"</strong> &gt; <strong>"Tools"</strong>.</p>
</li>
<li><p>Scroll to <strong>Terraform Installation</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741296144218/23525410-81af-4563-b85d-e364d87ee90f.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Provide:</p>
<ul>
<li><p><strong>Name:</strong> <code>terraform</code></p>
</li>
<li><p><strong>Installation Directory:</strong> Find Terraform’s installation path by running the command below on the Jenkins instance:</p>
<pre><code class="lang-bash">  whereis terraform
</code></pre>
</li>
</ul>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741296359742/94569b47-fd79-4dde-a4fa-a7059e81f220.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741296416592/0e9a2f0c-6ef8-4b15-9f66-802082ab658e.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Save</strong>.</p>
</li>
</ol>
<h4 id="heading-5-create-a-new-jenkins-pipeline"><strong>5. Create a New Jenkins Pipeline</strong></h4>
<blockquote>
<p>In this pipeline, I will use the repository below, which contains all the Terraform source code needed to create production-grade infrastructure.</p>
</blockquote>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/praduman8435/Production-ready-EKS-with-automation">https://github.com/praduman8435/Production-ready-EKS-with-automation</a></div>
<p> </p>
<ol>
<li><p>In Jenkins, go to <strong>Dashboard</strong> &gt; <strong>New Item</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741296577029/43590b03-97a2-4ac5-8635-5b1af97e5b1f.png" alt class="image--center mx-auto" /></p>
<p> Enter an <strong>item name</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741296664743/a6d5d08b-c877-464b-99fb-f5a57ece067b.png" alt class="image--center mx-auto" /></p>
<p> Select <strong>Pipeline</strong> and click <strong>OK</strong>.</p>
</li>
<li><p>Scroll to the <strong>Pipeline</strong> section:</p>
<ul>
<li><p>Under <strong>Definition</strong>, select <strong>Pipeline script</strong>.</p>
</li>
<li><p>Copy and paste the following pipeline script:</p>
</li>
</ul>
</li>
</ol>
<pre><code class="lang-groovy">    properties([
        parameters([
            string(
                defaultValue: <span class="hljs-string">'dev'</span>,
                name: <span class="hljs-string">'Environment'</span>
            ),
            choice(
                choices: [<span class="hljs-string">'plan'</span>, <span class="hljs-string">'apply'</span>, <span class="hljs-string">'destroy'</span>], 
                name: <span class="hljs-string">'Terraform_Action'</span>
            )
        ])
    ])

    pipeline {
        agent any
        stages {
            stage(<span class="hljs-string">'Preparing'</span>) {
                steps {
                    sh <span class="hljs-string">'echo Preparing'</span>
                }
            }
            stage(<span class="hljs-string">'Git Pulling'</span>) {
                steps {
                    git branch: <span class="hljs-string">'main'</span>, url: <span class="hljs-string">'https://github.com/praduman8435/Production-ready-EKS-with-automation.git'</span>
                }
            }
            stage(<span class="hljs-string">'Init'</span>) {
                steps {
                    withAWS(credentials: <span class="hljs-string">'aws-creds'</span>, region: <span class="hljs-string">'ap-south-1'</span>) {
                        sh <span class="hljs-string">'terraform -chdir=eks/ init'</span>
                    }
                }
            }
            stage(<span class="hljs-string">'Validate'</span>) {
                steps {
                    withAWS(credentials: <span class="hljs-string">'aws-creds'</span>, region: <span class="hljs-string">'ap-south-1'</span>) {
                        sh <span class="hljs-string">'terraform -chdir=eks/ validate'</span>
                    }
                }
            }
            stage(<span class="hljs-string">'Action'</span>) {
                steps {
                    withAWS(credentials: <span class="hljs-string">'aws-creds'</span>, region: <span class="hljs-string">'ap-south-1'</span>) {
                        script {    
                            <span class="hljs-keyword">if</span> (params.Terraform_Action == <span class="hljs-string">'plan'</span>) {
                                sh <span class="hljs-string">"terraform -chdir=eks/ plan -var-file=<span class="hljs-variable">${params.Environment}</span>.tfvars"</span>
                            } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (params.Terraform_Action == <span class="hljs-string">'apply'</span>) {
                                sh <span class="hljs-string">"terraform -chdir=eks/ apply -var-file=<span class="hljs-variable">${params.Environment}</span>.tfvars -auto-approve"</span>
                            } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (params.Terraform_Action == <span class="hljs-string">'destroy'</span>) {
                                sh <span class="hljs-string">"terraform -chdir=eks/ destroy -var-file=<span class="hljs-variable">${params.Environment}</span>.tfvars -auto-approve"</span>
                            } <span class="hljs-keyword">else</span> {
                                error <span class="hljs-string">"Invalid value for Terraform_Action: <span class="hljs-variable">${params.Terraform_Action}</span>"</span>
                            }
                        }
                    }
                }
            }
        }
    }
</code></pre>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741296777138/1d14929d-0327-43a5-8f69-de27a5392673.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-6-finalize-pipeline-setup"><strong>6. Finalize Pipeline Setup</strong></h4>
<ol>
<li><p><strong>Enable Groovy Sandbox:</strong> Check the box for <strong>"Use Groovy Sandbox"</strong>.</p>
</li>
<li><p>Click <strong>"Save"</strong>.</p>
</li>
</ol>
<h4 id="heading-7-run-the-pipeline"><strong>7. Run the Pipeline</strong></h4>
<ol>
<li><p>Wait for a minute, then click <strong>"Build with Parameters"</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741297477046/1323bbd7-1f49-42d0-8765-55d05921759f.png" alt class="image--center mx-auto" /></p>
<p> Select a <strong>Terraform action</strong> (<code>plan</code>, <code>apply</code>, or <code>destroy</code>).</p>
</li>
<li><p>Click <strong>"Build"</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741410767616/eb6fe041-5fde-47a1-9e2a-0f1b79940766.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741411925695/520fa628-1ecf-4f6c-8b98-106d22daef72.png" alt class="image--center mx-auto" /></p>
<p> Navigate to the <strong>Console Output</strong> to track progress.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741411958533/aed6e5a2-39cd-467a-ab0d-f916847afac1.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<blockquote>
<h3 id="heading-what-this-pipeline-does">What Does This Pipeline Do?</h3>
<ul>
<li><p><strong>Connects Jenkins to AWS</strong> using stored credentials.</p>
</li>
<li><p><strong>Fetches Terraform code</strong> from GitHub.</p>
</li>
<li><p><strong>Initializes Terraform</strong> for EKS and networking setup.</p>
</li>
<li><p><strong>Validates Terraform code</strong> before deployment.</p>
</li>
<li><p><strong>Executes Terraform actions</strong> based on user selection (<code>plan</code>, <code>apply</code>, or <code>destroy</code>)</p>
</li>
</ul>
</blockquote>
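<p>For reference, the pipeline’s stages boil down to a handful of commands you could also run by hand on the Jenkins instance. A sketch for the <code>plan</code> action, with the commands collected and printed for review rather than executed (<code>dev</code> matches the pipeline’s default <code>Environment</code> parameter):</p>

```shell
ENVIRONMENT="dev"  # matches the pipeline's default Environment parameter
REPO_URL="https://github.com/praduman8435/Production-ready-EKS-with-automation.git"

# The Git Pulling, Init, Validate, and Action (plan) stages, condensed.
PLAN_CMDS=$(cat <<EOF
git clone ${REPO_URL}
terraform -chdir=Production-ready-EKS-with-automation/eks/ init
terraform -chdir=Production-ready-EKS-with-automation/eks/ validate
terraform -chdir=Production-ready-EKS-with-automation/eks/ plan -var-file=${ENVIRONMENT}.tfvars
EOF
)
echo "$PLAN_CMDS"
```

<p>Running them manually once is a good way to debug a failing stage before blaming Jenkins.</p>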
<hr />
<h2 id="heading-step-3-set-up-the-jump-server"><strong>Step 3: Set Up the Jump Server</strong></h2>
<h4 id="heading-why-do-you-need-a-jump-server"><strong>Why Do You Need a Jump Server?</strong></h4>
<p>Since your <strong>EKS cluster is inside a VPC</strong>, it cannot be accessed directly from the internet. A <strong>Jump Server (Bastion Host)</strong> acts as a secure gateway, allowing access to private resources within your VPC.</p>
<h4 id="heading-how-it-works"><strong>How It Works:</strong></h4>
<ul>
<li><p>Your <strong>EKS cluster</strong> and other private resources don’t have public IPs, so they can't be accessed directly.</p>
</li>
<li><p>Instead of exposing these private resources, you connect to a <strong>Jump Server</strong> first.</p>
</li>
<li><p>The Jump Server has a <strong>public IP</strong> and is placed in a <strong>public subnet</strong>, acting as an intermediary to access the private cluster securely.</p>
</li>
</ul>
<h3 id="heading-1-create-a-jump-server-in-aws">1. Create a Jump Server in AWS</h3>
<ol>
<li><p><strong>Go to AWS EC2 Console</strong> and click <strong>"Launch Instance"</strong>.</p>
</li>
<li><p><strong>Configure the Instance:</strong></p>
<ul>
<li><p><strong>Instance Name:</strong> <code>jump-server</code></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741413617948/7077e8a2-2498-4916-80ac-6776735ac98d.png" alt class="image--center mx-auto" /></p>
<p>  <strong>AMI:</strong> Ubuntu</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741413676683/8bcb1a0c-3641-4dfe-90a1-b2f90bf68f4b.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Instance Type:</strong> <code>t2.medium</code></p>
</li>
<li><p><strong>Key Pair:</strong> <em>No need to attach a key pair (SSH disabled for security).</em></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741413746424/c1aa733e-4fd1-47b3-a7f8-ce4c584f4119.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Network Settings:</strong></p>
<ul>
<li><p><strong>VPC:</strong> Select the VPC created by the <strong>Jenkins Terraform pipeline</strong>.</p>
</li>
<li><p><strong>Subnet:</strong> Choose <strong>any public subnet</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>Storage:</strong> At least <strong>30 GiB</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741414061185/43d4cf74-da34-423d-a9a9-e05c3699c3be.png" alt class="image--center mx-auto" /></p>
<p>  <strong>IAM Role:</strong> Attach an <strong>IAM profile with administrative access</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>Install</strong> the required tools on the <strong>Jump Server</strong> automatically by adding the following script to the <strong>User Data</strong> field:</p>
<pre><code class="lang-bash"> sudo apt update -y

 <span class="hljs-comment"># Installing AWS CLI</span>
 curl <span class="hljs-string">"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"</span> -o <span class="hljs-string">"awscliv2.zip"</span>
 sudo apt install unzip -y
 unzip awscliv2.zip
 sudo ./aws/install

 <span class="hljs-comment"># Installing Kubectl</span>
 sudo apt update
 sudo apt install curl -y
 sudo curl -LO <span class="hljs-string">"https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"</span>
 sudo chmod +x kubectl
 sudo mv kubectl /usr/<span class="hljs-built_in">local</span>/bin/
 kubectl version --client

 <span class="hljs-comment"># Installing Helm</span>
 sudo snap install helm --classic

 <span class="hljs-comment"># Installing eksctl</span>
 curl --silent --location <span class="hljs-string">"https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_<span class="hljs-subst">$(uname -s)</span>_amd64.tar.gz"</span> | tar xz -C /tmp
 sudo mv /tmp/eksctl /usr/<span class="hljs-built_in">local</span>/bin
 eksctl version
</code></pre>
</li>
<li><p><strong>Launch the Instance</strong> by clicking <strong>"Launch Instance"</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741414254891/71281416-52f5-4948-9df7-f5d4f1958756.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-2-connect-to-the-jump-server-and-verify-access">2. Connect to the Jump Server and Verify Access</h3>
<p>Once the instance is running, access it using <strong>EC2 Instance Connect</strong>:</p>
<ol>
<li><p>Go to <strong>AWS EC2 Console</strong>.</p>
</li>
<li><p>Select your <strong>Jump Server</strong> instance.</p>
</li>
<li><p>Click <strong>"Connect"</strong> &gt; <strong>"EC2 Instance Connect"</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741414558786/c70eb5cb-7b03-4381-a9d1-c058ff6e5f1b.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>"Connect"</strong> to open a web terminal.</p>
</li>
</ol>
<h3 id="heading-3-configure-aws-credentials-on-the-jump-server">3. Configure AWS Credentials on the Jump Server</h3>
<p>To allow the Jump Server to interact with AWS services, configure AWS CLI:</p>
<pre><code class="lang-bash">aws configure
</code></pre>
<ul>
<li><p><strong>Enter AWS Access Key ID</strong> (from IAM user).</p>
</li>
<li><p><strong>Enter AWS Secret Access Key</strong> (from IAM user).</p>
</li>
<li><p><strong>Default region:</strong> Set your AWS region (e.g., <code>us-east-1</code>).</p>
</li>
<li><p><strong>Output format:</strong> Press Enter (default is <code>json</code>).</p>
</li>
</ul>
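<p>Behind the scenes, <code>aws configure</code> simply writes two small files under <code>~/.aws/</code>. A sketch of what they end up containing, so you can verify or edit them directly (all values are placeholders):</p>

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[default]
region = us-east-1
output = json
```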
<h3 id="heading-4-update-kubeconfig-to-access-the-eks-cluster">4. Update kubeconfig to Access the EKS Cluster</h3>
<p>Run the following command to configure <code>kubectl</code> for your EKS cluster:</p>
<pre><code class="lang-bash">aws eks update-kubeconfig --name &lt;your-eks-cluster-name&gt; --region &lt;your-region&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741437752403/45b020f4-9bae-4abd-9493-62a0272a8da3.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-5-verify-the-cluster-connection">5. Verify the Cluster Connection</h3>
<p>Check if the Jump Server can access the EKS cluster by listing the worker nodes:</p>
<pre><code class="lang-bash">kubectl get nodes
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741437762798/e61dd85d-5ec6-491c-bf1f-3007eb1f8093.png" alt class="image--center mx-auto" /></p>
<p>If you see the nodes, <strong>your Jump Server setup is successful!</strong> 🎉</p>
<hr />
<h2 id="heading-step-4-set-up-an-aws-load-balancer-for-eks"><strong>Step 4: Set Up an AWS Load Balancer for EKS</strong></h2>
<p>To configure an AWS <strong>Load Balancer</strong> in our <strong>EKS cluster</strong>, we need a <strong>Service Account</strong> that allows Kubernetes to create and manage the load balancer automatically.</p>
<h3 id="heading-1-create-an-iam-backed-service-account">1. Create an IAM-Backed Service Account</h3>
<p>The AWS Load Balancer Controller requires an IAM role with the necessary permissions to create and manage Elastic Load Balancers (ELB) in AWS.</p>
<p>Run the following command to create a <strong>service account</strong> with the required IAM role inside the EKS cluster:</p>
<pre><code class="lang-bash">eksctl create iamserviceaccount \
  --cluster=&lt;eks-cluster-name&gt; \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerRole \
  --attach-policy-arn=arn:aws:iam::&lt;AWS_ACCOUNT_ID&gt;:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve \
  --region=ap-south-1
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741437810310/e6353e33-cc05-43a5-bf33-29b63e1b8336.png" alt class="image--center mx-auto" /></p>
<p>📌 <strong>Explanation:</strong></p>
<ul>
<li><p><code>--cluster=&lt;eks-cluster-name&gt;</code> → Name of your EKS cluster.</p>
</li>
<li><p><code>--namespace=kube-system</code> → Deploys the service account in the <code>kube-system</code> namespace.</p>
</li>
<li><p><code>--name=aws-load-balancer-controller</code> → Creates a service account with this name.</p>
</li>
<li><p><code>--role-name AmazonEKSLoadBalancerRole</code> → Assigns an IAM role to the service account.</p>
</li>
<li><p><code>--attach-policy-arn=arn:aws:iam::&lt;AWS_ACCOUNT_ID&gt;:policy/AWSLoadBalancerControllerIAMPolicy</code> → Attaches the AWS Load Balancer Controller IAM policy.</p>
</li>
<li><p><code>--approve</code> → Automatically applies the changes.</p>
</li>
</ul>
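<p>Note that the policy referenced by <code>--attach-policy-arn</code> must already exist in your account. A hedged sketch of creating it from the controller project’s published policy document; the version tag is an assumption, so pin the tag matching your controller release. The commands are collected and printed for review rather than executed:</p>

```shell
POLICY_VERSION="v2.7.2"  # assumption: pin the tag matching your controller release

# Download the controller's IAM policy document and create the policy from it.
SETUP_CMDS=$(cat <<EOF
curl -fsSL -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/${POLICY_VERSION}/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
EOF
)
echo "$SETUP_CMDS"
```

<p>If the policy already exists, <code>create-policy</code> fails harmlessly with an <code>EntityAlreadyExists</code> error and you can proceed.</p>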
<h3 id="heading-2-add-the-aws-eks-helm-repository">2. Add the AWS EKS Helm Repository</h3>
<p>To deploy the <strong>AWS Load Balancer Controller</strong>, we use <strong>Helm</strong>, a package manager for Kubernetes.</p>
<p>Add the official AWS <strong>EKS Helm repository</strong>:</p>
<pre><code class="lang-bash">helm repo add eks https://aws.github.io/eks-charts
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741437839956/94e69cd0-5a42-4d35-8c50-07879f0ccd8f.png" alt class="image--center mx-auto" /></p>
<p>This repository contains pre-packaged Helm charts for essential AWS EKS components such as:<br />✅ <strong>AWS Load Balancer Controller</strong><br />✅ <strong>EBS CSI Driver</strong> (for dynamic volume provisioning)<br />✅ <strong>VPC CNI Plugin</strong> (for networking enhancements)<br />✅ <strong>Cluster Autoscaler</strong> (for automatic scaling)</p>
<p><strong>Update the repository to get the latest charts:</strong></p>
<pre><code class="lang-bash">helm repo update
</code></pre>
<h3 id="heading-3-install-the-aws-load-balancer-controller">3. Install the AWS Load Balancer Controller</h3>
<p>Now, install the <strong>AWS Load Balancer Controller</strong> using <strong>Helm</strong>:</p>
<pre><code class="lang-bash">helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --<span class="hljs-built_in">set</span> clusterName=&lt;eks-cluster-name&gt; \
  --<span class="hljs-built_in">set</span> serviceAccount.create=<span class="hljs-literal">false</span> \
  --<span class="hljs-built_in">set</span> serviceAccount.name=aws-load-balancer-controller
</code></pre>
<p>📌 <strong>Explanation:</strong></p>
<ul>
<li><p><code>--namespace kube-system</code> → Deploys the controller in the <code>kube-system</code> namespace.</p>
</li>
<li><p><code>--set clusterName=&lt;eks-cluster-name&gt;</code> → Associates it with your EKS cluster.</p>
</li>
<li><p><code>--set serviceAccount.create=false</code> → Uses the existing IAM-backed service account.</p>
</li>
<li><p><code>--set serviceAccount.name=aws-load-balancer-controller</code> → Specifies the service account created earlier.</p>
</li>
</ul>
<h3 id="heading-4-verify-the-aws-load-balancer-controller">4. Verify the AWS Load Balancer Controller</h3>
<p>Check if the <strong>Load Balancer Controller</strong> is running correctly:</p>
<pre><code class="lang-bash">kubectl get pods -n kube-system | grep aws-load-balancer-controller
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741438471377/b51b2dd3-a22c-471c-a331-e42bfa9a5025.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-4-fixing-pods-in-error-or-crashloopbackoff">5. Fixing Pods in Error or CrashLoopBackOff</h3>
<p>If your AWS Load Balancer Controller pods are in <strong>Error</strong> or <strong>CrashLoopBackOff</strong>, it’s likely due to misconfiguration. To fix this, upgrade the Helm release with the correct settings:</p>
<pre><code class="lang-bash">helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller \
  --<span class="hljs-built_in">set</span> clusterName=&lt;cluster-name&gt; \
  --<span class="hljs-built_in">set</span> serviceAccount.create=<span class="hljs-literal">false</span> \
  --<span class="hljs-built_in">set</span> serviceAccount.name=aws-load-balancer-controller \
  --<span class="hljs-built_in">set</span> region=&lt;your-region&gt; \
  --<span class="hljs-built_in">set</span> vpcId=&lt;your-vpc-id&gt; \
  -n kube-system
</code></pre>
<ul>
<li><p><code>&lt;your-vpc-id&gt;</code>: VPC ID where your EKS cluster runs (e.g., <code>vpc-0123456789abcdef0</code>).</p>
</li>
<li><p><code>&lt;cluster-name&gt;</code>: Your EKS cluster name (e.g., <code>dev-medium-eks-cluster</code>).</p>
</li>
<li><p><code>&lt;your-region&gt;</code>: AWS region of your cluster (e.g., <code>us-west-1</code>).</p>
</li>
</ul>
<blockquote>
<h4 id="heading-what-this-does"><strong>What This Does</strong></h4>
<ul>
<li><p><strong>Upgrades/Installs</strong>: Updates the Helm release or installs it if missing.</p>
</li>
<li><p><strong>Configures Correctly</strong>: Ensures the controller uses the right cluster, service account, region, and VPC.</p>
</li>
</ul>
</blockquote>
<p><strong>Check if the pods are running:</strong></p>
<pre><code class="lang-bash">kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741438829300/9c190c92-9313-490c-865b-76980b11b797.png" alt class="image--center mx-auto" /></p>
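<p>Optionally, you can block until the controller deployment reports ready before moving on. A sketch (adjust the timeout to your environment):</p>
<pre><code class="lang-bash"># Wait until the controller deployment becomes Available
kubectl wait deployment/aws-load-balancer-controller \
  -n kube-system --for=condition=Available --timeout=120s
</code></pre>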
<p>🚀 <strong>Your AWS Load Balancer Controller is now ready to manage Kubernetes services!</strong> 🎉</p>
<hr />
<h2 id="heading-step-5-set-up-and-configure-argocd-on-eks"><strong>Step 5: Set Up and Configure ArgoCD on EKS</strong></h2>
<p>ArgoCD is a <strong>GitOps</strong> continuous delivery tool that automates the deployment of applications to Kubernetes. We will install ArgoCD in our <strong>EKS cluster</strong> and expose its UI for external access.</p>
<h3 id="heading-1-create-a-separate-namespace-for-argocd">1. Create a Separate Namespace for ArgoCD</h3>
<p>To keep ArgoCD components organized, create a dedicated <strong>namespace</strong>:</p>
<pre><code class="lang-bash">kubectl create namespace argocd
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741439351489/a185aa2d-b571-4d90-a295-814f033521aa.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-2-install-argocd-using-manifests">2. Install ArgoCD Using Manifests</h3>
<p>Apply the official <strong>ArgoCD installation YAML</strong> to deploy its components:</p>
<pre><code class="lang-bash">kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741439449977/9b3f488b-cfe9-48e5-b1c0-92b1f320c716.png" alt class="image--center mx-auto" /></p>
<p>This will install all necessary ArgoCD components inside the <code>argocd</code> namespace.</p>
<h3 id="heading-3-verify-argocd-installation">3. Verify ArgoCD Installation</h3>
<p>Check if all ArgoCD <strong>pods</strong> are running:</p>
<pre><code class="lang-bash">kubectl get pods -n argocd
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741439486327/2f4c8c30-7c14-4d64-be54-078824843f5b.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-4-expose-argocd-server"><strong>4. Expose ArgoCD Server</strong></h3>
<p>By default, ArgoCD runs as a <strong>ClusterIP service</strong>, meaning it is only accessible inside the cluster. To access the UI externally, change it to a <strong>LoadBalancer</strong> service:</p>
<pre><code class="lang-bash">kubectl patch svc argocd-server -n argocd -p <span class="hljs-string">'{"spec": {"type": "LoadBalancer"}}'</span>
</code></pre>
<p>This provisions an AWS <strong>Elastic Load Balancer (ELB)</strong> for the ArgoCD server and exposes it externally via the load balancer's public DNS name.</p>
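<p>If you would rather not expose ArgoCD publicly at all, a port-forward is a quick local alternative (a sketch; run it from a machine with cluster access and open <code>https://localhost:8080</code>):</p>
<pre><code class="lang-bash"># Forward local port 8080 to the argocd-server service's HTTPS port
kubectl port-forward svc/argocd-server -n argocd 8080:443
</code></pre>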
<h3 id="heading-5-retrieve-the-external-ip-of-argocd">5. Retrieve the External IP of ArgoCD</h3>
<p>Run the following command to get the external URL:</p>
<pre><code class="lang-bash">kubectl get svc -n argocd argocd-server
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741439573002/3ace2789-147d-4e85-8802-4ecb013c8104.png" alt class="image--center mx-auto" /></p>
<p>Look for the <strong>EXTERNAL-IP</strong> in the output. This is the URL you’ll use to access the ArgoCD UI.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741439768269/e971887f-7d2a-41c4-8db7-bec16fb15f2c.png" alt class="image--center mx-auto" /></p>
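<p>To grab just the load balancer address (handy in scripts), a jsonpath query works; on AWS the ELB address is reported under <code>hostname</code> rather than <code>ip</code>:</p>
<pre><code class="lang-bash"># Print only the external DNS name of the ArgoCD server service
kubectl get svc argocd-server -n argocd \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
</code></pre>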
<h3 id="heading-6-get-the-argocd-admin-password">6. Get the ArgoCD Admin Password</h3>
<p>By default, ArgoCD generates an <strong>admin password</strong> stored as a Kubernetes secret. Retrieve it using:</p>
<pre><code class="lang-bash">kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath=<span class="hljs-string">"{.data.password}"</span> | base64 --decode
</code></pre>
<p>Use this password to log in as the <code>admin</code> user.</p>
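<p>If you have the <code>argocd</code> CLI installed, you can also log in from the terminal in one step (a sketch; <code>&lt;EXTERNAL-IP&gt;</code> is the ELB address retrieved above, and <code>--insecure</code> skips TLS verification for the default self-signed certificate):</p>
<pre><code class="lang-bash">argocd login &lt;EXTERNAL-IP&gt; --username admin --insecure \
  --password "$(kubectl get secret argocd-initial-admin-secret -n argocd \
    -o jsonpath='{.data.password}' | base64 --decode)"
</code></pre>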
<h3 id="heading-7-access-the-argocd-ui">7. Access the ArgoCD UI</h3>
<p>You can now access the ArgoCD UI through the Elastic Load Balancer URL created in AWS.</p>
<p>Login using:</p>
<ul>
<li><p><strong>Username:</strong> <code>admin</code></p>
</li>
<li><p><strong>Password:</strong> <em>(retrieved from the secret above)</em></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741439804593/2056b752-4073-4cee-9ede-564f76f193e8.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p><strong>ArgoCD is now ready!</strong> You can start managing your Kubernetes deployments using GitOps. 🚀</p>
</blockquote>
<hr />
<h2 id="heading-step-6-configure-sonarqube-for-devsecops-pipeline"><strong>Step 6: Configure SonarQube for DevSecOps Pipeline</strong></h2>
<p>SonarQube is a crucial tool for <strong>static code analysis</strong>, ensuring <strong>code quality and security</strong> in your DevSecOps pipeline. We will configure it within Jenkins for automated code scanning.</p>
<h3 id="heading-1-verify-if-sonarqube-is-running">1. Verify if SonarQube is Running</h3>
<p>Since SonarQube is running as a <strong>Docker container</strong> on the Jenkins server, check its status with:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741439908245/ccf2d8c4-e366-4e4d-8d31-c338c0278c9e.png" alt class="image--center mx-auto" /></p>
<p>You should see a running SonarQube container <strong>exposed on port 9000</strong>.</p>
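<p>If the container is not running, you can start one yourself; a minimal sketch (the image tag and container name here are assumptions, so adjust them to match your setup):</p>
<pre><code class="lang-bash"># Run SonarQube in the background, exposed on port 9000
docker run -d --name sonarqube -p 9000:9000 sonarqube:lts-community
</code></pre>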
<h3 id="heading-2-access-sonarqube-ui">2. Access SonarQube UI</h3>
<p>Open any web browser and visit:</p>
<pre><code class="lang-bash">http://&lt;public-ip-of-jenkins-server&gt;:9000
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741439966061/2da33e4c-a391-460e-8341-c802c580ceb9.png" alt class="image--center mx-auto" /></p>
<p>Log in using the default credentials:</p>
<ul>
<li><p><strong>Username:</strong> <code>admin</code></p>
</li>
<li><p><strong>Password:</strong> <code>admin</code></p>
</li>
</ul>
<p>Once logged in, <strong>set a new password</strong> for security.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741440050172/dfbf8897-cc8d-4972-8f8d-3ee275516d7b.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-3-generate-an-authentication-token">3. Generate an Authentication Token</h3>
<p>Jenkins needs a <strong>token</strong> to authenticate with SonarQube for automated scans.</p>
<ol>
<li><p>Go to <strong>Administration</strong> → <strong>Security</strong> → <strong>Users</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741440120078/c7d2deac-510e-4e4b-b72e-5ba3e1d2143a.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741440353109/8c32814d-561f-4123-8c3a-b66337004e57.png" alt class="image--center mx-auto" /></p>
<p> Click on <strong>Update Token</strong>.</p>
</li>
<li><p>Provide a <strong>name</strong> and set an <strong>expiration date</strong> (or leave it as "No Expiration").</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741440489989/ee5fe744-8bfd-4624-976d-f29b45867663.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Generate Token</strong>.</p>
</li>
<li><p><strong>Copy and save</strong> the token securely (you will need it for Jenkins).</p>
</li>
</ol>
<h3 id="heading-4-create-a-webhook-for-jenkins-notifications">4. Create a Webhook for Jenkins Notifications</h3>
<p>A webhook will notify Jenkins once SonarQube completes an analysis.</p>
<ol>
<li><p>Navigate to <strong>Administration</strong> → <strong>Configuration</strong> → <strong>Webhooks</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741440664143/69b7eb5b-ee81-4683-8c7e-7373df3fc454.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Create Webhook</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741440702895/a9a62e5f-e128-4d39-a3b2-e2240abaef79.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Enter the details:</p>
<ul>
<li><p><strong>Name:</strong> <code>Jenkins Webhook</code></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741440860439/75c3c487-a15c-4203-b8c9-bb783e37ade6.png" alt class="image--center mx-auto" /></p>
<p>  <strong>URL:</strong> <code>http://&lt;public-ip-of-jenkins-server&gt;:8080/sonarqube-webhook/</code> (note the trailing slash)</p>
</li>
<li><p><strong>Secret:</strong> (Leave blank)</p>
</li>
</ul>
</li>
<li><p>Click <strong>Create</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741487965585/b3bc2554-674a-4921-a496-4c20e9f263a4.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<blockquote>
<p>Now, the webhook will trigger Jenkins when a project analysis is complete.</p>
</blockquote>
<h3 id="heading-5-create-a-sonarqube-project-for-code-analysis">5. Create a SonarQube Project for Code Analysis</h3>
<p>SonarQube will analyze the <strong>frontend and backend</strong> code separately.</p>
<h4 id="heading-frontend-analysis-configuration"><strong>Frontend Analysis Configuration</strong></h4>
<ol>
<li><p>Go to <strong>Projects</strong> → <strong>Manually → Create a New Project</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741441258669/81ccfc7c-1d6e-4e94-a5ef-3ec53f318b4b.png" alt class="image--center mx-auto" /></p>
<p> Fill in the required details (Project Name, Key, etc.).</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741441418605/aa78604f-da77-47e3-a230-22e2c66b0bd5.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Setup</strong>.</p>
</li>
<li><p>Choose <strong>Analyze Locally</strong>.</p>
</li>
<li><p>Select <strong>Use an Existing Token</strong> and <strong>paste the token</strong> generated earlier.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741441582221/166bedc7-2765-438c-a1a8-7073cc4f9d3a.png" alt class="image--center mx-auto" /></p>
<p> Choose <strong>Other</strong> if your build type is not listed.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741441621232/65fb0958-141a-4aa4-82b9-2d039822166c.png" alt class="image--center mx-auto" /></p>
<p> Select <strong>OS: Linux</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741441672905/b5162b3a-e481-4d86-be98-141387ef29c0.png" alt class="image--center mx-auto" /></p>
<p> SonarQube will generate a command for analysis—<strong>copy and save it</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741441699085/e31e9567-a89e-45b3-8370-4f13957ae9a8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Add the command to your <strong>Jenkins pipeline</strong>.</p>
</li>
</ol>
<h4 id="heading-backend-analysis-configuration"><strong>Backend Analysis Configuration</strong></h4>
<p>Repeat the <strong>same steps</strong> for the backend project:</p>
<ol>
<li><p>Go to <strong>Projects</strong> → <strong>Create a New Project</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741441258669/81ccfc7c-1d6e-4e94-a5ef-3ec53f318b4b.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Fill in the required details.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741441891639/8c0388dc-3ca1-4151-bc34-af7098e0e53f.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Setup</strong> → <strong>Analyze Locally</strong>.</p>
</li>
<li><p>Use the <strong>previously generated token</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741441967533/88db0b7d-43a7-4c29-a4a2-2f8f1df8b9b2.png" alt class="image--center mx-auto" /></p>
<p> Choose <strong>Other</strong> as the build type if needed.</p>
</li>
<li><p>Select <strong>OS: Linux</strong>.</p>
</li>
<li><p><strong>Copy the generated analysis command</strong> and save it.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741442008154/019dc47a-6987-4f33-981c-d61b8cef3887.png" alt class="image--center mx-auto" /></p>
<p> Add the command to your <strong>Jenkins pipeline</strong>.</p>
</li>
</ol>
<h3 id="heading-6-final-verification">6. Final Verification</h3>
<p>At this point:<br />✅ <strong>SonarQube is running</strong> and accessible.<br />✅ <strong>Jenkins has an authentication token</strong> to interact with SonarQube.<br />✅ <strong>A webhook is set up</strong> to notify Jenkins about completed scans.<br />✅ <strong>Projects are created</strong>, and <strong>analysis commands</strong> are ready for Jenkins execution.</p>
<p>Now, whenever Jenkins runs the pipeline, <strong>SonarQube will analyze the code and report quality &amp; security issues</strong>. 🎯 ✅</p>
<hr />
<h2 id="heading-step-6-create-an-ecr-repository-for-docker-images"><strong>Step 6: Create an ECR Repository for Docker Images</strong></h2>
<p>Amazon <strong>Elastic Container Registry (ECR)</strong> will store the <strong>frontend and backend Docker images</strong> used for deployment. Let's configure it and store necessary credentials in Jenkins.</p>
<h3 id="heading-1-create-private-ecr-repositories">1. Create Private ECR Repositories</h3>
<ol>
<li><p><strong>Open AWS Console</strong> and navigate to the <strong>ECR service</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741442220525/01fc1f1c-87ba-481f-bc52-37717f214dec.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Create Repository</strong>.</p>
</li>
<li><p>Choose <strong>Private Repository</strong>.</p>
</li>
<li><p><strong>Repository Name:</strong> <code>frontend</code>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741443658443/3719eea2-68ba-4d84-a437-39e024f0875b.png" alt class="image--center mx-auto" /></p>
<p> Repeat the same steps to create a <code>backend</code> repository.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741443722005/889f0766-ee66-4c40-8247-240a2fc1414e.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741443777588/da0bef4d-f75a-4e3f-8503-77c4e04d0345.png" alt class="image--center mx-auto" /></p>
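<p>The same repositories can also be created from the AWS CLI instead of the console (a sketch; assumes your CLI credentials and default region are already configured):</p>
<pre><code class="lang-bash"># Create both private repositories used by the pipelines
aws ecr create-repository --repository-name frontend
aws ecr create-repository --repository-name backend
</code></pre>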
<h3 id="heading-2-store-credentials-in-jenkins">2. Store Credentials in Jenkins</h3>
<p>To integrate Jenkins with SonarQube, AWS ECR, and GitHub, we need to store various credentials securely.</p>
<h4 id="heading-a-store-sonarqube-token-in-jenkins"><strong>a) Store SonarQube Token in Jenkins</strong></h4>
<ol>
<li><p>Go to <strong>Jenkins Dashboard</strong> → <strong>Manage Jenkins</strong> → <strong>Credentials</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741442565891/30a3a863-56b1-4ba7-bae6-802cfc51b33c.png" alt class="image--center mx-auto" /></p>
<p> Select the appropriate <strong>Global credentials domain</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741442659623/d4bb19fc-b819-4bb5-8ab4-f9ef07dc8dfb.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741442716197/ce323ecc-98db-4565-8960-f3206d918614.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Add Credentials</strong> and fill in the details:</p>
<ul>
<li><p><strong>Kind:</strong> Secret Text</p>
</li>
<li><p><strong>Scope:</strong> Global</p>
</li>
<li><p><strong>Secret:</strong> <code>&lt;sonar-qube-token&gt;</code></p>
</li>
<li><p><strong>ID:</strong> <code>sonar-token</code></p>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741442811289/49f8c587-6987-41e4-a94b-7aaa306dbfb5.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-b-store-aws-account-id-in-jenkins"><strong>b) Store AWS Account ID in Jenkins</strong></h4>
<ol>
<li><p>Go to <strong>Credentials</strong> → <strong>Add Credentials</strong>.</p>
</li>
<li><p>Enter the details:</p>
<ul>
<li><p><strong>Kind:</strong> Secret Text</p>
</li>
<li><p><strong>Scope:</strong> Global</p>
</li>
<li><p><strong>Secret:</strong> <code>&lt;AWS-Account-ID&gt;</code></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741442935298/8ccf9024-a7d8-4c90-be74-f0ffc99c050b.png" alt class="image--center mx-auto" /></p>
<p>  <strong>ID:</strong> <code>Account_ID</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741442974476/8dd3ecb2-c9df-496d-b91d-ad1722a9300f.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-c-store-ecr-repository-names-in-jenkins"><strong>c) Store ECR Repository Names in Jenkins</strong></h4>
<p>For <strong>Frontend Repository</strong>:</p>
<ol>
<li><p><strong>Add New Credential</strong>:</p>
<ul>
<li><p><strong>Kind:</strong> Secret Text</p>
</li>
<li><p><strong>Scope:</strong> Global</p>
</li>
<li><p><strong>Secret:</strong> <code>frontend</code></p>
</li>
<li><p><strong>ID:</strong> <code>ECR_REPO1</code></p>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
</ul>
</li>
</ol>
<p>For <strong>Backend Repository</strong>:</p>
<ol>
<li><p><strong>Add New Credential</strong>:</p>
<ul>
<li><p><strong>Kind:</strong> Secret Text</p>
</li>
<li><p><strong>Scope:</strong> Global</p>
</li>
<li><p><strong>Secret:</strong> <code>backend</code></p>
</li>
<li><p><strong>ID:</strong> <code>ECR_REPO2</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741443151975/365596a2-0c1c-479e-ac8d-3133e227739e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-d-store-github-credentials-in-jenkins"><strong>d) Store GitHub Credentials in Jenkins</strong></h4>
<ol>
<li><p><strong>Add New Credential</strong>:</p>
<ul>
<li><p><strong>Kind:</strong> Username with Password</p>
</li>
<li><p><strong>Scope:</strong> Global</p>
</li>
<li><p><strong>Username:</strong> <code>&lt;GitHub-Username&gt;</code></p>
</li>
<li><p><strong>Password:</strong> <code>&lt;Personal-Access-Token&gt;</code></p>
</li>
<li><p><strong>ID:</strong> <code>GITHUB-APP</code></p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-e-store-github-personal-access-token-in-jenkins"><strong>e) Store GitHub Personal Access Token in Jenkins</strong></h4>
<ol>
<li><p><strong>Add New Credential</strong>:</p>
<ul>
<li><p><strong>Kind:</strong> Secret Text</p>
</li>
<li><p><strong>Scope:</strong> Global</p>
</li>
<li><p><strong>Secret:</strong> <code>&lt;Personal-Access-Token&gt;</code></p>
</li>
<li><p><strong>ID:</strong> <code>github</code></p>
</li>
</ul>
</li>
</ol>
<blockquote>
<p>here all required credentials are get added</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741444543186/f9e07190-22d1-4b76-a67e-9a96183e840b.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-final-confirmation"><strong>Final Confirmation</strong></h3>
<p>✅ <strong>ECR Repositories Created</strong><br />✅ <strong>SonarQube Token Stored in Jenkins</strong><br />✅ <strong>AWS Account ID Saved</strong><br />✅ <strong>ECR Repository Names Stored</strong><br />✅ <strong>GitHub Credentials &amp; Token Added</strong></p>
<p>With these credentials configured, <strong>Jenkins can authenticate and push Docker images to AWS ECR</strong> seamlessly. 🎯</p>
<h2 id="heading-install-and-configure-essential-plugins-amp-tools-in-jenkins">Install and Configure Essential Plugins &amp; Tools in Jenkins</h2>
<p>To ensure seamless <strong>containerized builds, security analysis, and CI/CD automation</strong>, install and configure the necessary <strong>Jenkins plugins and tools</strong>.</p>
<h3 id="heading-1-install-required-plugins">1. Install Required Plugins</h3>
<p>Navigate to <strong>Jenkins Dashboard</strong> → <strong>Manage Jenkins</strong> → <strong>Plugins</strong> → <strong>Available Plugins</strong>, then search and install the following:</p>
<p>✅ <strong>Docker</strong> – Enables Docker integration.<br />✅ <strong>Docker Pipeline</strong> – Provides Docker support in Jenkins pipelines.<br />✅ <strong>Docker Commons</strong> – Manages shared Docker images.<br />✅ <strong>Docker API</strong> – Allows interaction with Docker daemon.<br />✅ <strong>NodeJS</strong> – Supports Node.js builds.<br />✅ <strong>OWASP Dependency-Check</strong> – Detects security vulnerabilities in dependencies.<br />✅ <strong>SonarQube Scanner</strong> – Enables code quality analysis with SonarQube.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741444870296/d0b8da8d-096d-44cd-a8ee-84b080bbb2ab.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741444907427/8a66ee60-47ba-4106-9c3c-e2fb62d3966f.png" alt class="image--center mx-auto" /></p>
<p>Once installed, <strong>restart Jenkins</strong> to apply changes.</p>
<h3 id="heading-2-configure-essential-tools-in-jenkins">2. Configure Essential Tools in Jenkins</h3>
<h4 id="heading-a-nodejs-installation"><strong>a) NodeJS Installation</strong></h4>
<ol>
<li><p>Go to <strong>Manage Jenkins</strong> → <strong>Tools</strong></p>
</li>
<li><p>Under <strong>NodeJS</strong>, click <strong>Add NodeJS</strong>.</p>
</li>
<li><p>Fill in the required details.</p>
</li>
<li><p>Check the box <strong>Install Automatically</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741445221523/b0af3629-63dc-4aa2-a342-b66df19acb0a.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Save</strong>.</p>
</li>
</ol>
<h4 id="heading-b-sonarqube-scanner-installation"><strong>b) SonarQube Scanner Installation</strong></h4>
<ol>
<li><p>Under <strong>Tools Configuration</strong>, go to <strong>SonarQube Scanner</strong>.</p>
</li>
<li><p>Click <strong>Add SonarQube Scanner</strong>.</p>
</li>
<li><p>Fill in the required details.</p>
</li>
<li><p>Check <strong>Install Automatically</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741445339243/557241fb-587f-4235-b5c2-063b83875f16.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Save</strong>.</p>
</li>
</ol>
<h4 id="heading-c-owasp-dependency-check-installation"><strong>c) OWASP Dependency Check Installation</strong></h4>
<ol>
<li><p>Under <strong>Tools Configuration</strong>, go to <strong>Dependency Check</strong>.</p>
</li>
<li><p>Click <strong>Add Dependency Check</strong>.</p>
</li>
<li><p>Check <strong>Install Automatically from GitHub</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741445517523/5686bd62-1514-4a3f-bbe7-469a0e69f84d.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Save</strong>.</p>
</li>
</ol>
<h4 id="heading-d-docker-installation"><strong>d) Docker Installation</strong></h4>
<ol>
<li><p>Under <strong>Tools Configuration</strong>, go to <strong>Docker</strong>.</p>
</li>
<li><p>Click <strong>Add Docker</strong>.</p>
</li>
<li><p>Fill in the required details.</p>
</li>
<li><p>Check <strong>Install Automatically from</strong> <strong>Docker.com</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741445599671/c0845040-5c52-499e-9664-fb1e8d6bb39a.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Save &amp; Apply</strong>.</p>
</li>
</ol>
<h3 id="heading-3-configure-sonarqube-webhook-in-jenkins">3. Configure SonarQube Webhook in Jenkins</h3>
<p>To enable <strong>SonarQube notifications in Jenkins</strong>, configure the webhook.</p>
<h4 id="heading-add-sonarqube-server-in-jenkins"><strong>Add SonarQube Server in Jenkins</strong></h4>
<ol>
<li><p>Navigate to <strong>Manage Jenkins</strong> → <strong>Configure System</strong>.</p>
</li>
<li><p>Scroll to the <strong>SonarQube installation</strong> section.</p>
</li>
<li><p>Click <strong>Add SonarQube</strong> and enter:</p>
<ul>
<li><p><strong>Name:</strong> <code>sonar-server</code></p>
</li>
<li><p><strong>Server URL:</strong> <code>http://&lt;public-ip-of-jenkins-server&gt;:9000</code></p>
</li>
<li><p><strong>Server Authentication Token:</strong> <code>&lt;sonar-qube-token-credential-name&gt;</code></p>
</li>
</ul>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741445857812/239f98f1-91ed-4f26-86e8-92e751f460e4.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Apply &amp; Save</strong>.</p>
</li>
</ol>
<blockquote>
<p>Jenkins is now <strong>fully equipped</strong> to handle <strong>Docker builds, security analysis, and SonarQube scanning</strong> in the DevSecOps pipeline. 🚀</p>
</blockquote>
<hr />
<h2 id="heading-step-7-create-a-jenkins-pipeline-for-frontend">Step 7: Create a Jenkins Pipeline for Frontend</h2>
<p>This pipeline automates the <strong>frontend build, security analysis, Docker image creation, and deployment updates</strong> for the <strong>DevSecOps pipeline</strong>.</p>
<h3 id="heading-1-create-a-new-pipeline-in-jenkins">1. Create a New Pipeline in Jenkins</h3>
<ol>
<li><p>Navigate to <strong>Jenkins Dashboard</strong> → <strong>New Item</strong>.</p>
</li>
<li><p>Enter a <strong>Pipeline Name</strong> (e.g., <code>frontend-pipeline</code>).</p>
</li>
<li><p>Select <strong>Pipeline</strong> as the item type.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741456092289/fcb015a5-9fb4-401c-801d-0ab19218063b.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>OK</strong> to proceed.</p>
</li>
<li><p>Scroll down to the <strong>Pipeline</strong> section and choose <strong>Pipeline script</strong>.</p>
</li>
</ol>
<h3 id="heading-2-add-the-pipeline-script">2. Add the Pipeline Script</h3>
<p>Copy and paste the following <strong>Jenkinsfile</strong>:</p>
<pre><code class="lang-bash">pipeline {
    agent any 
    tools {
        nodejs <span class="hljs-string">'nodejs'</span>
    }
    environment {
        SCANNER_HOME = tool <span class="hljs-string">'sonar-scanner'</span>
        AWS_ACCOUNT_ID = credentials(<span class="hljs-string">'Account_ID'</span>)
        AWS_ECR_REPO_NAME = credentials(<span class="hljs-string">'ECR_REPO1'</span>)
        AWS_DEFAULT_REGION = <span class="hljs-string">'ap-south-1'</span>
        REPOSITORY_URI = <span class="hljs-string">"<span class="hljs-variable">${AWS_ACCOUNT_ID}</span>.dkr.ecr.<span class="hljs-variable">${AWS_DEFAULT_REGION}</span>.amazonaws.com/"</span>
    }
    stages {
        stage(<span class="hljs-string">'Cleaning Workspace'</span>) {
            steps {
                cleanWs()
            }
        }
        stage(<span class="hljs-string">'Checkout from Git'</span>) {
            steps {
                git credentialsId: <span class="hljs-string">'GITHUB-APP'</span>, url: <span class="hljs-string">'https://github.com/praduman8435/DevSecOps-in-Action.git'</span>, branch: <span class="hljs-string">'main'</span>
            }
        }
        stage(<span class="hljs-string">'SonarQube Analysis'</span>) {
            steps {
                dir(<span class="hljs-string">'frontend'</span>) {
                    withSonarQubeEnv(<span class="hljs-string">'sonar-server'</span>) { // Use withSonarQubeEnv wrapper
                        sh <span class="hljs-string">''</span><span class="hljs-string">'
                        $SCANNER_HOME/bin/sonar-scanner \
                        -Dsonar.projectName=frontend \
                        -Dsonar.projectKey=frontend \
                        -Dsonar.sources=.
                        '</span><span class="hljs-string">''</span>
                    }
                }
            }
        }
        stage(<span class="hljs-string">'Quality Check'</span>) {
            steps {
                script {
                    waitForQualityGate abortPipeline: <span class="hljs-literal">false</span>, credentialsId: <span class="hljs-string">'sonar-token'</span>
                }
            }
        }
        stage(<span class="hljs-string">'OWASP Dependency-Check Scan'</span>) {
            steps {
                dir(<span class="hljs-string">'frontend'</span>) {
                    dependencyCheck additionalArguments: <span class="hljs-string">'--scan ./ --disableYarnAudit --disableNodeAudit'</span>, odcInstallation: <span class="hljs-string">'DP-Check'</span>
                    dependencyCheckPublisher pattern: <span class="hljs-string">'**/dependency-check-report.xml'</span>
                }
            }
        }
        stage(<span class="hljs-string">'Trivy File Scan'</span>) {
            steps {
                dir(<span class="hljs-string">'frontend'</span>) {
                    sh <span class="hljs-string">'trivy fs . &gt; trivyfs.txt'</span>
                }
            }
        }
        stage(<span class="hljs-string">'Docker Image Build'</span>) {
            steps {
                script {
                    dir(<span class="hljs-string">'frontend'</span>) {
                        sh <span class="hljs-string">'docker system prune -f'</span>
                        sh <span class="hljs-string">'docker container prune -f'</span>
                        sh <span class="hljs-string">'docker build -t ${AWS_ECR_REPO_NAME} .'</span>
                    }
                }
            }
        }
        stage(<span class="hljs-string">'ECR Image Pushing'</span>) {
            steps {
                script {
                    sh <span class="hljs-string">'aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${REPOSITORY_URI}'</span>
                    sh <span class="hljs-string">'docker tag ${AWS_ECR_REPO_NAME} ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'</span>
                    sh <span class="hljs-string">'docker push ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'</span>
                }
            }
        }
        stage(<span class="hljs-string">'Trivy Image Scan'</span>) {
            steps {
                sh <span class="hljs-string">'trivy image ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER} &gt; trivyimage.txt'</span>
            }
        }
        stage(<span class="hljs-string">'Update Deployment File'</span>) {
            environment {
                GIT_REPO_NAME = <span class="hljs-string">"DevSecOps-in-Action"</span>
                GIT_USER_NAME = <span class="hljs-string">"praduman8435"</span>
            }
            steps {
                dir(<span class="hljs-string">'k8s-manifests/frontend'</span>) {
                    withCredentials([string(credentialsId: <span class="hljs-string">'github'</span>, variable: <span class="hljs-string">'GITHUB_TOKEN'</span>)]) {
                        sh <span class="hljs-string">''</span><span class="hljs-string">'
                        git config user.email "praduman.cnd@gmail.com"
                        git config user.name "praduman"
                        BUILD_NUMBER=${BUILD_NUMBER}
                        imageTag=$(grep -oP '</span>(?&lt;=frontend:)[^ ]+<span class="hljs-string">' deployment.yaml)
                        sed -i "s/${AWS_ECR_REPO_NAME}:${imageTag}/${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}/" deployment.yaml
                        git add deployment.yaml
                        git commit -m "Update deployment image to version ${BUILD_NUMBER}"
                        git push https://${GITHUB_TOKEN}@github.com/${GIT_USER_NAME}/${GIT_REPO_NAME} HEAD:main
                        '</span><span class="hljs-string">''</span>
                    }
                }
            }
        }
    }
}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741457839893/15fd831a-2071-4a69-894f-b36de7ce4c24.png" alt class="image--center mx-auto" /></p>
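<p>The <strong>Update Deployment File</strong> stage relies on a <code>grep</code>/<code>sed</code> pair to swap the image tag in <code>deployment.yaml</code>. The snippet below reproduces that logic against a throwaway manifest so you can sanity-check it locally; the registry path and tags are illustrative only, and a portable <code>grep -o</code>/<code>cut</code> combination stands in for the pipeline's lookbehind pattern:</p>
<pre><code class="lang-bash"># Write a sample manifest containing an old image tag (tee used in place of redirection)
printf '        image: 123456789012.dkr.ecr.ap-south-1.amazonaws.com/frontend:41\n' | tee /tmp/deployment.yaml
AWS_ECR_REPO_NAME=frontend
BUILD_NUMBER=42
# Extract whatever tag currently follows "frontend:"
imageTag=$(grep -o 'frontend:[^ ]*' /tmp/deployment.yaml | cut -d: -f2)
# Replace the old tag with the new build number, exactly as the pipeline does
sed -i "s/${AWS_ECR_REPO_NAME}:${imageTag}/${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}/" /tmp/deployment.yaml
cat /tmp/deployment.yaml
</code></pre>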
<h3 id="heading-3-build-the-pipeline">3. Build the Pipeline</h3>
<ol>
<li><p>Click <strong>Save &amp; Apply</strong>.</p>
</li>
<li><p>Click <strong>Build Now</strong> to start the pipeline.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741458808440/e91d9923-d0da-4dfb-a11a-6dcc8430992c.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741514552441/eae55dca-acc9-4bc8-b15c-44b3e89ea063.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-4-verify-sonarqube-analysis">4. Verify SonarQube Analysis</h3>
<ol>
<li><p>Open SonarQube UI:</p>
<pre><code class="lang-bash"> http://&lt;public-ip-of-jenkins-server&gt;:9000
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741514588913/1d911ec7-71a7-4880-b4b4-c4c0ec6bff94.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Check if the <strong>SonarQube scan results</strong> appear in the UI under the <strong>frontend project</strong>.</p>
</li>
</ol>
<h3 id="heading-pipeline-workflow-summary"><strong>Pipeline Workflow Summary</strong></h3>
<p>✅ <strong>Code Checkout</strong> from GitHub.<br />✅ <strong>SonarQube Scan</strong> for code quality analysis.<br />✅ <strong>Security Scans</strong> using OWASP Dependency-Check and Trivy.<br />✅ <strong>Docker Build &amp; Push</strong> to Amazon ECR.<br />✅ <strong>Deployment Update</strong> in Kubernetes manifests.</p>
<p>The frontend pipeline is now <strong>fully automated and integrated into the DevSecOps workflow! 🚀</strong></p>
<hr />
<h2 id="heading-step-8-create-a-jenkins-pipeline-for-backend"><strong>Step 8: Create a Jenkins Pipeline for Backend</strong></h2>
<p>This pipeline automates the <strong>backend build, security scanning, Docker image creation, and Kubernetes deployment updates</strong> for the <strong>DevSecOps pipeline</strong>.</p>
<h3 id="heading-1-create-a-new-pipeline-in-jenkins-1">1. Create a New Pipeline in Jenkins</h3>
<ol>
<li><p>Go to <strong>Jenkins Dashboard</strong> → <strong>New Item</strong>.</p>
</li>
<li><p>Enter a <strong>Pipeline Name</strong> (e.g., <code>backend-pipeline</code>).</p>
</li>
<li><p>Select <strong>Pipeline</strong> as the item type.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741518531141/eb049dd3-8936-4810-8f66-028378f8f12c.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>OK</strong> to proceed.</p>
</li>
<li><p>Scroll to the <strong>Pipeline</strong> section and choose <strong>Pipeline script</strong>.</p>
</li>
</ol>
<h3 id="heading-2-add-the-pipeline-script-1">2. Add the Pipeline Script</h3>
<p>Copy and paste the following <strong>Jenkinsfile</strong>:</p>
<pre><code class="lang-bash">pipeline {
    agent any 
    tools {
        nodejs <span class="hljs-string">'nodejs'</span>
    }
    environment {
        SCANNER_HOME = tool <span class="hljs-string">'sonar-scanner'</span>
        AWS_ACCOUNT_ID = credentials(<span class="hljs-string">'Account_ID'</span>)
        AWS_ECR_REPO_NAME = credentials(<span class="hljs-string">'ECR_REPO2'</span>)
        AWS_DEFAULT_REGION = <span class="hljs-string">'ap-south-1'</span>
        REPOSITORY_URI = <span class="hljs-string">"<span class="hljs-variable">${AWS_ACCOUNT_ID}</span>.dkr.ecr.<span class="hljs-variable">${AWS_DEFAULT_REGION}</span>.amazonaws.com/"</span>
    }
    stages {
        stage(<span class="hljs-string">'Cleaning Workspace'</span>) {
            steps {
                cleanWs()
            }
        }
        stage(<span class="hljs-string">'Checkout from Git'</span>) {
            steps {
                git credentialsId: <span class="hljs-string">'GITHUB-APP'</span>, url: <span class="hljs-string">'https://github.com/praduman8435/DevSecOps-in-Action.git'</span>, branch: <span class="hljs-string">'main'</span>
            }
        }
        stage(<span class="hljs-string">'SonarQube Analysis'</span>) {
            steps {
                dir(<span class="hljs-string">'backend'</span>) {
                    withSonarQubeEnv(<span class="hljs-string">'sonar-server'</span>) { // Use withSonarQubeEnv wrapper
                        sh <span class="hljs-string">''</span><span class="hljs-string">'
                        $SCANNER_HOME/bin/sonar-scanner \
                        -Dsonar.projectName=backend \
                        -Dsonar.projectKey=backend \
                        -Dsonar.sources=.
                        '</span><span class="hljs-string">''</span>
                    }
                }
            }
        }
        stage(<span class="hljs-string">'Quality Check'</span>) {
            steps {
                script {
                    waitForQualityGate abortPipeline: <span class="hljs-literal">false</span>, credentialsId: <span class="hljs-string">'sonar-token'</span>
                }
            }
        }
        stage(<span class="hljs-string">'OWASP Dependency-Check Scan'</span>) {
            steps {
                dir(<span class="hljs-string">'backend'</span>) {
                    dependencyCheck additionalArguments: <span class="hljs-string">'--scan ./ --disableYarnAudit --disableNodeAudit'</span>, odcInstallation: <span class="hljs-string">'DP-Check'</span>
                    dependencyCheckPublisher pattern: <span class="hljs-string">'**/dependency-check-report.xml'</span>
                }
            }
        }
        stage(<span class="hljs-string">'Trivy File Scan'</span>) {
            steps {
                dir(<span class="hljs-string">'backend'</span>) {
                    sh <span class="hljs-string">'trivy fs . &gt; trivyfs.txt'</span>
                }
            }
        }
        stage(<span class="hljs-string">'Docker Image Build'</span>) {
            steps {
                script {
                    dir(<span class="hljs-string">'backend'</span>) {
                        sh <span class="hljs-string">'docker system prune -f'</span>
                        sh <span class="hljs-string">'docker container prune -f'</span>
                        sh <span class="hljs-string">'docker build -t ${AWS_ECR_REPO_NAME} .'</span>
                    }
                }
            }
        }
        stage(<span class="hljs-string">'ECR Image Pushing'</span>) {
            steps {
                script {
                    sh <span class="hljs-string">'aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${REPOSITORY_URI}'</span>
                    sh <span class="hljs-string">'docker tag ${AWS_ECR_REPO_NAME} ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'</span>
                    sh <span class="hljs-string">'docker push ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}'</span>
                }
            }
        }
        stage(<span class="hljs-string">'Trivy Image Scan'</span>) {
            steps {
                sh <span class="hljs-string">'trivy image ${REPOSITORY_URI}${AWS_ECR_REPO_NAME}:${BUILD_NUMBER} &gt; trivyimage.txt'</span>
            }
        }
        stage(<span class="hljs-string">'Update Deployment File'</span>) {
            environment {
                GIT_REPO_NAME = <span class="hljs-string">"DevSecOps-in-Action"</span>
                GIT_USER_NAME = <span class="hljs-string">"praduman8435"</span>
            }
            steps {
                dir(<span class="hljs-string">'k8s-manifests/backend'</span>) {
                    withCredentials([string(credentialsId: <span class="hljs-string">'github'</span>, variable: <span class="hljs-string">'GITHUB_TOKEN'</span>)]) {
                        sh <span class="hljs-string">''</span><span class="hljs-string">'
                        git config user.email "praduman.cnd@gmail.com"
                        git config user.name "praduman"
                        BUILD_NUMBER=${BUILD_NUMBER}
                        imageTag=$(grep -oP '</span>(?&lt;=backend:)[^ ]+<span class="hljs-string">' deployment.yaml)
                        sed -i "s/${AWS_ECR_REPO_NAME}:${imageTag}/${AWS_ECR_REPO_NAME}:${BUILD_NUMBER}/" deployment.yaml
                        git add deployment.yaml
                        git commit -m "Update deployment image to version ${BUILD_NUMBER}"
                        git push https://${GITHUB_TOKEN}@github.com/${GIT_USER_NAME}/${GIT_REPO_NAME} HEAD:main
                        '</span><span class="hljs-string">''</span>
                    }
                }
            }
        }
    }
}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741519197297/4015381c-9824-4414-985e-a85be33e690c.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-3-build-the-pipeline-1">3. Build the Pipeline</h3>
<ol>
<li><p>Click <strong>Save &amp; Apply</strong>.</p>
</li>
<li><p>Click <strong>Build Now</strong> to trigger the pipeline.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741523884768/3cc037bb-99a5-4435-b006-c6e9e343f254.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-4-verify-sonarqube-analysis-1">4. Verify SonarQube Analysis</h3>
<ol>
<li><p>Open SonarQube UI:</p>
<pre><code class="lang-bash"> http://&lt;public-ip-of-jenkins-server&gt;:9000
</code></pre>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741519441252/9599b465-e09d-45ad-a104-a407ea1cb14c.png" alt class="image--center mx-auto" /></p>
<p> Check the <strong>SonarQube scan results</strong> under the <strong>backend project</strong>.</p>
</li>
</ol>
<h3 id="heading-pipeline-workflow-summary-1"><strong>Pipeline Workflow Summary</strong></h3>
<p>✅ <strong>Code Checkout</strong> from GitHub.<br />✅ <strong>SonarQube Scan</strong> for code quality analysis.<br />✅ <strong>Security Scans</strong> using OWASP Dependency-Check and Trivy.<br />✅ <strong>Docker Build &amp; Push</strong> to Amazon ECR.<br />✅ <strong>Deployment Update</strong> in Kubernetes manifests.</p>
<p>The <strong>backend pipeline</strong> is now fully automated and integrated into the <strong>DevSecOps workflow! 🚀</strong></p>
<hr />
<h2 id="heading-step-9-setup-application-in-argocd"><strong>Step 9: Setup Application in ArgoCD</strong></h2>
<p>In this step, we will <strong>deploy the application (frontend, backend, database, and ingress) to the EKS cluster</strong> using <strong>ArgoCD</strong>.</p>
<h3 id="heading-1-open-argocd-ui">1. Open ArgoCD UI</h3>
<ol>
<li><p>Get the ArgoCD <strong>server External-IP</strong>:</p>
<pre><code class="lang-bash"> kubectl get svc -n argocd argocd-server
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741520697491/d405f4f6-cb99-448f-94e5-8f6779bd3fc1.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Open the ArgoCD UI in your browser using the <strong>EXTERNAL-IP</strong> from the output.</p>
</li>
<li><p>Log in with the <code>username</code> and <code>password</code> you created earlier.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741524032969/bf38e285-4f43-41c4-811c-f583c35a4b0b.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-2-connect-github-repository-to-argocd">2. Connect GitHub Repository to ArgoCD</h3>
<ol>
<li><p>Go to <strong>Settings</strong> → <strong>Repositories</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741524080333/523cc402-8152-4c35-a966-7e7a478060e9.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>"Connect Repository using HTTPS"</strong>.</p>
</li>
<li><p>Enter:</p>
<ul>
<li><p><strong>Project:</strong> <code>default</code></p>
</li>
<li><p><strong>Repository URL:</strong> <a target="_blank" href="https://github.com/praduman8435/DevSecOps-in-Action.git"><code>https://github.com/praduman8435/DevSecOps-in-Action.git</code></a></p>
</li>
<li><p><strong>Authentication:</strong> None (if public repo)</p>
</li>
</ul>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741524351951/0745cb43-eb83-460d-8e02-bc5bfea1a9a9.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>"Connect"</strong>.</p>
</li>
</ol>
<h3 id="heading-3-create-kubernetes-namespace-for-deployment">3. Create Kubernetes Namespace for Deployment</h3>
<ol>
<li><p>Open <strong>terminal</strong> and run:</p>
<pre><code class="lang-bash"> kubectl create namespace three-tier
</code></pre>
</li>
<li><p>Verify the namespace:</p>
<pre><code class="lang-bash"> kubectl get namespaces
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741524473226/95f4a026-bc5b-45b1-88dc-e7dbd5bec1e5.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<h3 id="heading-4-deploy-database-in-argocd">4. Deploy Database in ArgoCD</h3>
<ol>
<li><p>In ArgoCD UI, go to <strong>Applications</strong> → Click <strong>New Application</strong>.</p>
</li>
<li><p>Fill in the following details:</p>
<ul>
<li><p><strong>Application Name:</strong> <code>three-tier-database</code></p>
</li>
<li><p><strong>Project Name:</strong> <code>default</code></p>
</li>
<li><p><strong>Sync Policy:</strong> <code>Automatic</code></p>
</li>
<li><p><strong>Repository URL:</strong> <a target="_blank" href="https://github.com/praduman8435/DevSecOps-in-Action.git"><code>https://github.com/praduman8435/DevSecOps-in-Action.git</code></a></p>
</li>
<li><p><strong>Path:</strong> <code>k8s-manifests/database</code></p>
</li>
<li><p><strong>Cluster URL:</strong> <code>https://kubernetes.default.svc</code></p>
</li>
<li><p><strong>Namespace:</strong> <code>three-tier</code></p>
</li>
</ul>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741525023900/7b80d291-5161-419d-8e73-dfec2028f5af.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Create</strong>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741525162975/5aa54702-f575-4ba2-9323-7e9fa53ab19b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741525188668/f927c4f4-1cdc-42a4-ba2b-f25c99a0cc35.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741525425034/c790456f-eb2f-4990-aa21-b6fa0c7db513.png" alt class="image--center mx-auto" /></p>
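<p>The same application can also be defined declaratively instead of through the UI, which makes the setup easy to recreate later. A sketch of the equivalent <code>Application</code> manifest — field values mirror the form above, and <code>targetRevision: main</code> is an assumption:</p>
<pre><code class="lang-bash">kubectl apply -n argocd -f - &lt;&lt;'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: three-tier-database
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/praduman8435/DevSecOps-in-Action.git
    targetRevision: main
    path: k8s-manifests/database
  destination:
    server: https://kubernetes.default.svc
    namespace: three-tier
  syncPolicy:
    automated: {}
EOF
</code></pre>
<p>The backend, frontend, and ingress applications that follow use the same shape with only <code>name</code> and <code>path</code> changed.</p>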
<h3 id="heading-5-deploy-backend-in-argocd">5. Deploy Backend in ArgoCD</h3>
<ol>
<li><p>Go to <strong>Applications</strong> → Click <strong>New Application</strong>.</p>
</li>
<li><p>Fill in:</p>
<ul>
<li><p><strong>Application Name:</strong> <code>three-tier-backend</code></p>
</li>
<li><p><strong>Project Name:</strong> <code>default</code></p>
</li>
<li><p><strong>Sync Policy:</strong> <code>Automatic</code></p>
</li>
<li><p><strong>Repository URL:</strong> <a target="_blank" href="https://github.com/praduman8435/DevSecOps-in-Action.git"><code>https://github.com/praduman8435/DevSecOps-in-Action.git</code></a></p>
</li>
<li><p><strong>Path:</strong> <code>k8s-manifests/backend</code></p>
</li>
<li><p><strong>Cluster URL:</strong> <code>https://kubernetes.default.svc</code></p>
</li>
<li><p><strong>Namespace:</strong> <code>three-tier</code></p>
</li>
</ul>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741525596484/fab7203e-ff2b-47fd-876c-708fa6efddb2.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Create</strong>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741525631628/3daaeb8d-27ec-4be0-bf13-841029f078c4.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-6-deploy-frontend-in-argocd">6. Deploy Frontend in ArgoCD</h3>
<ol>
<li><p>Go to <strong>Applications</strong> → Click <strong>New Application</strong>.</p>
</li>
<li><p>Fill in:</p>
<ul>
<li><p><strong>Application Name:</strong> <code>three-tier-frontend</code></p>
</li>
<li><p><strong>Project Name:</strong> <code>default</code></p>
</li>
<li><p><strong>Sync Policy:</strong> <code>Automatic</code></p>
</li>
<li><p><strong>Repository URL:</strong> <a target="_blank" href="https://github.com/praduman8435/DevSecOps-in-Action.git"><code>https://github.com/praduman8435/DevSecOps-in-Action.git</code></a></p>
</li>
<li><p><strong>Path:</strong> <code>k8s-manifests/frontend</code></p>
</li>
<li><p><strong>Cluster URL:</strong> <a target="_blank" href="https://kubernetes.default.svc"><code>https://kubernetes.default.svc</code></a></p>
</li>
<li><p><strong>Namespace:</strong> <code>three-tier</code></p>
</li>
</ul>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741607070003/87b23be1-15eb-4b78-a1a1-cd4105982a2c.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-7-deploy-ingress-in-argocd">7. Deploy Ingress in ArgoCD</h3>
<ol>
<li><p>Go to <strong>Applications</strong> → Click <strong>New Application</strong>.</p>
</li>
<li><p>Fill in:</p>
<ul>
<li><p><strong>Application Name:</strong> <code>three-tier-ingress</code></p>
</li>
<li><p><strong>Project Name:</strong> <code>default</code></p>
</li>
<li><p><strong>Sync Policy:</strong> <code>Automatic</code></p>
</li>
<li><p><strong>Repository URL:</strong> <a target="_blank" href="https://github.com/praduman8435/DevSecOps-in-Action.git"><code>https://github.com/praduman8435/DevSecOps-in-Action.git</code></a></p>
</li>
<li><p><strong>Path:</strong> <code>k8s-manifests</code></p>
</li>
<li><p><strong>Cluster URL:</strong> <code>https://kubernetes.default.svc</code></p>
</li>
<li><p><strong>Namespace:</strong> <code>three-tier</code></p>
</li>
</ul>
</li>
<li><p>Click <strong>Create</strong>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741607042824/e0df1c07-151f-4fa6-992e-1d4227732cb1.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741607128084/8a7fb643-b1d7-4999-9e49-d0539ce1fb27.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-8-verify-deployment-in-argocd">8. Verify Deployment in ArgoCD</h3>
<ol>
<li><p>Go to <strong>Applications</strong> in ArgoCD UI.</p>
</li>
<li><p>Check if all applications are <strong>Synced and Healthy</strong>.</p>
</li>
<li><p>If needed, manually <strong>Sync</strong> any pending application.</p>
</li>
</ol>
<blockquote>
<p>🎉 <strong>Congratulations! Your application is now fully deployed using ArgoCD</strong> and can be accessed at <a target="_blank" href="http://3-111-158-0.nip.io/"><code>http://3-111-158-0.nip.io/</code></a> 🚀</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741607218351/449d87b3-f39f-481b-a957-e79966c78210.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-10-configure-monitoring-using-prometheus-and-grafana"><strong>Step 10: Configure Monitoring using Prometheus and Grafana</strong></h2>
<p>In this step, we will <strong>install and configure Prometheus and Grafana</strong> using <strong>Helm charts</strong> to monitor the Kubernetes cluster.</p>
<h3 id="heading-1-add-helm-repositories-for-prometheus-amp-grafana">1. Add Helm Repositories for Prometheus &amp; Grafana</h3>
<p>Run the following commands to add and update the Helm repositories:</p>
<pre><code class="lang-bash">helm repo add stable https://charts.helm.sh/stable
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741610771165/040d9108-e8da-488f-915b-e33384af1489.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-2-install-prometheus-and-grafana-using-helm">2. Install Prometheus and Grafana using Helm</h3>
<pre><code class="lang-bash">helm install prometheus prometheus-community/kube-prometheus-stack \
  --<span class="hljs-built_in">set</span> prometheus.server.persistentVolume.storageClass=gp2 \
  --<span class="hljs-built_in">set</span> alertmanager.alertmanagerSpec.persistentVolume.storageClass=gp2
</code></pre>
<h3 id="heading-3-access-prometheus-ui">3. Access Prometheus UI</h3>
<ul>
<li><p>Get the <strong>Prometheus service details</strong>:</p>
<pre><code class="lang-bash">  kubectl get svc 
  <span class="hljs-comment">#look for prometheus-kube-prometheus-prometheus svc</span>
</code></pre>
</li>
<li><p>Change the service type from <code>ClusterIP</code> to <code>LoadBalancer</code>:</p>
<pre><code class="lang-bash">  kubectl edit svc prometheus-kube-prometheus-prometheus
</code></pre>
<ul>
<li>Find the line <code>type: ClusterIP</code> and change it to <code>type: LoadBalancer</code>.</li>
</ul>
</li>
<li><p>You can now access the Prometheus server using the external IP of the <code>prometheus-kube-prometheus-prometheus</code> service:</p>
<pre><code class="lang-bash">  kubectl get svc prometheus-kube-prometheus-prometheus
</code></pre>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741627422238/9e1a26e9-3e30-40d9-b6c6-3cd72e148a32.png" alt class="image--center mx-auto" /></p>
<p>  Open <code>&lt;External-IP&gt;:9090</code> in your browser.</p>
</li>
</ul>
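<p>If you prefer a non-interactive change over <code>kubectl edit</code>, <code>kubectl patch</code> can set the service type directly; the same pattern works for the Grafana service later:</p>
<pre><code class="lang-bash"># Expose Prometheus through a cloud load balancer (one-shot, no editor)
kubectl patch svc prometheus-kube-prometheus-prometheus \
  -p '{"spec":{"type":"LoadBalancer"}}'
</code></pre>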
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741628374258/3c4d69c4-d8ec-46cc-8fed-7f3fc4cd0e9e.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Click on <strong>Status</strong> and select <strong>Targets</strong>. You'll see the list of scrape targets. We'll use this Prometheus server as the data source in Grafana.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741630034240/7a33de79-caf9-4cef-8ad7-582efb0d30ba.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h3 id="heading-4-access-grafana-ui">4. Access Grafana UI</h3>
<ul>
<li><p>Get the <strong>Grafana service details</strong></p>
<pre><code class="lang-bash">  kubectl get svc
  <span class="hljs-comment">#look for the prometheus-grafana svc</span>
</code></pre>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741628868815/1a4c02b0-036d-4b32-8036-c62d55e03ca6.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>By default, it uses <code>ClusterIP</code>. Change it to <code>LoadBalancer</code>:</p>
<pre><code class="lang-bash">  kubectl edit svc prometheus-grafana
</code></pre>
<ul>
<li>Find the line <code>type: ClusterIP</code> and change it to <code>type: LoadBalancer</code>.</li>
</ul>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741629034772/e0f43fd3-15f2-4056-aaaf-b9f7c16cc5e2.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Get the <strong>external IP</strong> of Grafana:</p>
<pre><code class="lang-bash">  kubectl get svc prometheus-grafana
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741629165298/fc3afc55-7cfe-4f7b-a359-8f88153cf603.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Open <code>&lt;EXTERNAL-IP&gt;</code> in your browser.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741629578205/00861a5b-b4d5-4951-94f0-8acdc3db0739.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-5-get-grafana-admin-password">5. Get Grafana Admin Password</h3>
<pre><code class="lang-bash">kubectl get secret prometheus-grafana -n default -o jsonpath=<span class="hljs-string">"{.data.admin-password}"</span> | base64 --decode
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741629668671/dc7e1e63-f33c-4a5b-9d19-dfb15311f8da.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Username:</strong> <code>admin</code></p>
</li>
<li><p><strong>Password:</strong> (output from the above command)</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741629732834/b9b95613-41e7-4c98-8860-a2a354019aa6.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-6-configure-prometheus-as-a-data-source-in-grafana"><strong>6. Configure Prometheus as a Data Source in Grafana</strong></h3>
<ol>
<li><p>Login to <strong>Grafana UI</strong>.</p>
</li>
<li><p>Go to <strong>Connections</strong> → <strong>Data Sources</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741630661899/20d56a67-d85c-4208-bef6-69e9fa3cf162.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741630724688/e305815a-4d60-4696-982e-b392dce4314d.png" alt class="image--center mx-auto" /></p>
<p> Click <strong>Data source</strong> → Select <strong>Prometheus</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741630757616/4d990978-9a0d-4c0c-bbb0-198364c40ea5.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Provide the <strong>Prometheus URL</strong> (<code>http://&lt;prometheus-loadbalancer-dns&gt;:9090</code>) if it is not pre-filled.</p>
</li>
<li><p>Click <strong>Save &amp; Test</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741630997721/a5105f26-a119-4d4a-bc96-69cb99e34e2f.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
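<p>If you need the exact URL to paste, it can be assembled from the service's load-balancer hostname — this assumes the Prometheus service was already switched to <code>LoadBalancer</code> as described above:</p>
<pre><code class="lang-bash"># Read the ELB hostname straight from the service status
PROM_HOST=$(kubectl get svc prometheus-kube-prometheus-prometheus \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://${PROM_HOST}:9090"
</code></pre>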
<blockquote>
<p>🎉 <strong>Congratulations! Your Kubernetes cluster is now being monitored using Prometheus &amp; Grafana!</strong></p>
</blockquote>
<h3 id="heading-7-setting-up-dashboards-in-grafana-for-kubernetes-monitoring"><strong>7. Setting Up Dashboards in Grafana for Kubernetes Monitoring</strong></h3>
<p>Grafana allows us to visualize Kubernetes cluster and resource metrics effectively. We’ll set up two essential dashboards to monitor our cluster using Prometheus as the data source.</p>
<h4 id="heading-dashboard-1-kubernetes-cluster-monitoring"><strong>Dashboard 1: Kubernetes Cluster Monitoring</strong></h4>
<p>This dashboard provides an overview of the Kubernetes cluster, including node health, resource usage, and workload performance.</p>
<p><strong>Steps to Import the Dashboard:</strong></p>
<ul>
<li><p>Open the <strong>Grafana UI</strong> and navigate to <strong>Dashboards</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741631187960/c29b5cde-c389-45da-b79c-56cb8dccfaa6.png" alt class="image--right mx-auto mr-0" /></p>
<p>  Click on <strong>New</strong> → <strong>Import</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741631232457/f4d81a20-7b49-4a3a-8ba7-1def38368e4b.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741631372426/938341f1-9559-4014-8ea9-391776f58245.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>In the <strong>Import via</strong> <a target="_blank" href="http://Grafana.com"><strong>Grafana.com</strong></a> field, enter <strong>6417</strong> (Prometheus Kubernetes Cluster Monitoring Dashboard).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741631528244/d4e7e50a-dce5-4c54-84e1-a29aa143b0d5.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Load</strong>.</p>
</li>
<li><p>Select <strong>Prometheus</strong> as the data source.</p>
</li>
<li><p>Click <strong>Import</strong>.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741631806997/c4c07457-de58-4c21-acd1-ed86ec1085d8.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<blockquote>
<p>You should now see a comprehensive dashboard displaying Kubernetes cluster metrics.</p>
</blockquote>
<h4 id="heading-dashboard-2-kubernetes-resource-monitoring"><strong>Dashboard 2: Kubernetes Resource Monitoring</strong></h4>
<p>This dashboard provides insights into individual Kubernetes resources such as pods, deployments, and namespaces.</p>
<p><strong>Steps to Import the Dashboard:</strong></p>
<ol>
<li><p>Open the <strong>Grafana UI</strong> and navigate to <strong>Dashboards</strong>.</p>
</li>
<li><p>Click on <strong>New</strong> → <strong>Import</strong>.</p>
</li>
<li><p>Enter <strong>17375</strong> (Kubernetes Resources Monitoring Dashboard).</p>
</li>
<li><p>Click <strong>Load</strong>.</p>
</li>
<li><p>Select <strong>Prometheus</strong> as the data source.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741632190806/881b8686-0d75-41f9-9253-635ba9f9a1fa.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click <strong>Import</strong>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741632323000/e67a6044-8017-4be3-b0ee-f2249be47c6e.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<blockquote>
<p>Now, you have two powerful dashboards to monitor both the overall cluster health and specific Kubernetes resources in real-time.</p>
</blockquote>
<hr />
<p><strong>Enjoyed the post? Buy me a coffee to support my writing!</strong></p>
<p><a target="_blank" href="https://buymeacoffee.com/praduman"><img src="https://img.shields.io/badge/Buy%20Me%20A%20Coffee-FFDD00?style=for-the-badge&amp;logo=buy-me-a-coffee&amp;logoColor=black" alt="Buy Me A Coffee" /></a></p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>This <strong>Ultimate DevSecOps Project</strong> is all about bringing security into the DevOps pipeline while deploying a <strong>scalable, secure, and fully automated three-tier application</strong> on <strong>AWS EKS</strong>. By combining the power of <strong>Jenkins, SonarQube, Trivy, OWASP Dependency-Check, Terraform, ArgoCD, Prometheus, and Grafana</strong>, we've built a <strong>robust CI/CD pipeline</strong> that ensures <strong>code quality, security, and smooth deployments</strong>—without any manual headaches!</p>
<p>With <strong>SonarQube and OWASP Dependency-Check</strong>, we keep our code secure and compliant. <strong>Trivy</strong> scans our Docker images before they even reach <strong>AWS ECR</strong>, blocking vulnerabilities before they hit production. <strong>Jenkins</strong> takes care of automation, while <strong>ArgoCD</strong> ensures our Kubernetes deployments stay in perfect sync. And of course, <strong>Prometheus and Grafana</strong> give us full visibility into system health and performance, so we're always on top of things.</p>
<p>This project isn't just a <strong>DevSecOps tutorial</strong>—it's a <strong>real-world playbook</strong> for modern software delivery. Whether you're a <strong>DevOps pro, security enthusiast, or just diving into cloud automation</strong>, this guide sets you up with the tools and best practices to <strong>master DevSecOps in Kubernetes</strong>.</p>
<p>🚀 <strong>Ready to take your DevSecOps game to the next level? Let’s build, secure, and deploy—without limits!</strong> 🔐🎯</p>
<blockquote>
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Deploying an Application on AWS EKS with Ingress: A Step-by-Step Guide]]></title><description><![CDATA[In this guide, we'll walk through the steps to set up an Amazon Elastic Kubernetes Service (EKS) cluster using Fargate (a serverless compute engine for containers). We'll also deploy a sample application and configure an Application Load Balancer (AL...]]></description><link>https://blogs.praduman.site/deploying-an-application-on-aws-eks-with-ingress-a-step-by-step-guide</link><guid isPermaLink="true">https://blogs.praduman.site/deploying-an-application-on-aws-eks-with-ingress-a-step-by-step-guide</guid><category><![CDATA[EKS]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Load Balancer]]></category><category><![CDATA[Ingress Controllers]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Fri, 21 Feb 2025 19:12:49 GMT</pubDate><content:encoded><![CDATA[<p>In this guide, we'll walk through the steps to set up an Amazon Elastic Kubernetes Service (EKS) cluster using <strong>Fargate</strong> (a serverless compute engine for containers). We'll also deploy a sample application and configure an Application Load Balancer (ALB) to make the app accessible. Let’s get started!</p>
<h2 id="heading-step-1-install-required-tools"><strong>Step 1: Install Required Tools</strong></h2>
<p>Before we begin, you’ll need to install three essential tools:</p>
<ol>
<li><p><strong>kubectl</strong>: A command-line tool for managing Kubernetes clusters. It allows you to deploy applications, inspect resources, and manage cluster operations.</p>
</li>
<li><p><strong>eksctl</strong>: A tool specifically designed for Amazon EKS. It simplifies the process of creating, managing, and scaling EKS clusters.</p>
</li>
<li><p><strong>AWS CLI</strong>: A command-line interface for interacting with AWS services. It’s used to configure and manage AWS resources.</p>
</li>
</ol>
<p>Run the following command to install these tools on an Arch Linux-based system (or use the appropriate package manager for your OS):</p>
<pre><code class="lang-bash">sudo pacman -Sy kubectl aws-cli eksctl
</code></pre>
<h2 id="heading-step-2-configure-aws-cli"><strong>Step 2: Configure AWS CLI</strong></h2>
<p>Once the tools are installed, you need to configure the AWS CLI with your credentials. This allows you to interact with your AWS account.</p>
<p>Run the following command and provide your <strong>Access Key ID</strong>, <strong>Secret Access Key</strong>, <strong>AWS Region</strong>, and preferred output format when prompted:</p>
<pre><code class="lang-bash">aws configure
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740129826671/5a7d5f87-12b9-457b-9493-69d756b783cf.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-3-create-an-eks-cluster"><strong>Step 3: Create an EKS Cluster</strong></h2>
<p>Now, let’s create an EKS cluster using <strong>eksctl</strong>. We’ll use <strong>Fargate</strong> to run our workloads, which means we don’t need to manage EC2 instances—AWS handles the underlying infrastructure for us.</p>
<p>Run the following command to create a cluster named <strong>alpha</strong> in the <strong>ap-south-1</strong> region:</p>
<pre><code class="lang-bash">eksctl create cluster --name alpha --region ap-south-1 --fargate
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740137107965/5339de73-ab94-45b3-bbcd-d63c3bd13ce8.png" alt class="image--center mx-auto" /></p>
<p>This process may take 10–15 minutes. Once completed, your EKS cluster will be up and running.</p>
<h2 id="heading-step-4-update-kubeconfig"><strong>Step 4: Update kubeconfig</strong></h2>
<p>To interact with your EKS cluster using <strong>kubectl</strong>, you need to update your kubeconfig file. This file contains the necessary information to connect to your cluster.</p>
<p>Run the following command:</p>
<pre><code class="lang-bash">aws eks update-kubeconfig --name alpha --region ap-south-1
</code></pre>
<h2 id="heading-step-5-deploy-the-application"><strong>Step 5: Deploy the Application</strong></h2>
<p>We’ll deploy a simple game called <strong>2048</strong> as our sample application. To do this, we’ll use a pre-configured YAML file that defines the deployment, service, and ingress resources.</p>
<p>Run the following command to deploy the application:</p>
<pre><code class="lang-bash">kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/examples/2048/2048_full.yaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740157592641/cdcfbd98-b098-4f19-ac6c-cde65aec8513.png" alt class="image--center mx-auto" /></p>
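<p>For context, the important piece of that manifest is an <code>Ingress</code> in the <code>game-2048</code> namespace whose annotations tell the AWS Load Balancer Controller (installed in Step 8) to provision an internet-facing ALB. A trimmed sketch of that excerpt (field values as in the upstream example; verify against the file you applied):</p>
<pre><code class="lang-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    # these annotations are what the AWS Load Balancer Controller acts on
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80
</code></pre>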
<h2 id="heading-step-6-create-a-fargate-profile"><strong>Step 6: Create a Fargate Profile</strong></h2>
<p>Fargate profiles determine which pods run on Fargate, based on namespace (and optional label) selectors. Pods in the <code>game-2048</code> namespace will stay <code>Pending</code> until a matching profile exists, so let’s create one for our application.</p>
<p>Run the following command:</p>
<pre><code class="lang-bash">eksctl create fargateprofile --cluster alpha --region ap-south-1 --name alb-sample-app --namespace game-2048
</code></pre>
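<p>If you prefer declarative configuration, the same profile can be expressed in an <code>eksctl</code> ClusterConfig file (a sketch using the names from this guide):</p>
<pre><code class="lang-yaml">apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: alpha
  region: ap-south-1
fargateProfiles:
  - name: alb-sample-app
    selectors:
      # every pod created in this namespace is scheduled onto Fargate
      - namespace: game-2048
</code></pre>
<p>Apply it with <code>eksctl create fargateprofile -f cluster-config.yaml</code>.</p>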
<h2 id="heading-step-7-set-up-iam-oidc-provider"><strong>Step 7: Set Up IAM OIDC Provider</strong></h2>
<p>To allow the AWS Load Balancer Controller to assume an IAM role and call AWS APIs, we need to associate an IAM OIDC provider with the cluster.</p>
<p>Run the following command:</p>
<pre><code class="lang-bash">eksctl utils associate-iam-oidc-provider --cluster alpha --approve
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740159534312/8a9a6455-9643-4da6-87a1-0bf9416d4083.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-8-install-the-alb-controller"><strong>Step 8: Install the ALB Controller</strong></h2>
<p>The ALB Controller is responsible for creating and managing Application Load Balancers for your Kubernetes applications. To install it, follow these steps:</p>
<ol>
<li><p><strong>Download the IAM Policy</strong>:</p>
<pre><code class="lang-bash"> curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json
</code></pre>
</li>
<li><p><strong>Create the IAM Policy</strong>:</p>
<pre><code class="lang-bash"> aws iam create-policy \
     --policy-name AWSLoadBalancerControllerIAMPolicy \
     --policy-document file://iam_policy.json
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740160184554/30194f12-a95d-4a9f-95f6-dd0ce6383e55.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Create an IAM Role for the ALB Controller</strong>: Replace <code>&lt;your-aws-account-id&gt;</code> with your AWS account ID (the cluster name <code>alpha</code> is already filled in below).</p>
<pre><code class="lang-bash"> eksctl create iamserviceaccount \
   --cluster=alpha \
   --namespace=kube-system \
   --name=aws-load-balancer-controller \
   --role-name AmazonEKSLoadBalancerControllerRole \
   --attach-policy-arn=arn:aws:iam::&lt;your-aws-account-id&gt;:policy/AWSLoadBalancerControllerIAMPolicy \
   --approve
</code></pre>
</li>
<li><p><strong>Install the ALB Controller using Helm</strong>: Helm is a package manager for Kubernetes. Replace <code>&lt;your-vpc-id&gt;</code> with your cluster’s VPC ID (shown by <code>aws eks describe-cluster --name alpha --query "cluster.resourcesVpcConfig.vpcId" --output text</code>), then run the following commands to add the Helm repository and install the controller:</p>
<pre><code class="lang-bash"> helm repo add eks https://aws.github.io/eks-charts
 helm repo update eks
 helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
   -n kube-system \
   --<span class="hljs-built_in">set</span> clusterName=alpha \
   --<span class="hljs-built_in">set</span> serviceAccount.create=<span class="hljs-literal">false</span> \
   --<span class="hljs-built_in">set</span> serviceAccount.name=aws-load-balancer-controller \
   --<span class="hljs-built_in">set</span> region=ap-south-1 \
   --<span class="hljs-built_in">set</span> vpcId=&lt;your-vpc-id&gt;
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740162770997/e1187de7-7c4b-4fee-bf8c-e7ae7af4449d.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
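<p>Behind the scenes, the <code>eksctl create iamserviceaccount</code> command above creates a Kubernetes service account annotated with the IAM role; this is how the controller pods obtain AWS credentials (IAM Roles for Service Accounts). Roughly, the resulting object looks like this (the account ID is a placeholder):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    # the role created above; pods using this service account can assume it
    eks.amazonaws.com/role-arn: arn:aws:iam::&lt;your-aws-account-id&gt;:role/AmazonEKSLoadBalancerControllerRole
</code></pre>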
<h2 id="heading-step-9-verify-the-deployment"><strong>Step 9: Verify the Deployment</strong></h2>
<p>To ensure everything is working correctly, check the status of the ALB Controller deployment:</p>
<pre><code class="lang-bash">kubectl get deployment -n kube-system aws-load-balancer-controller
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740162859985/e30f3e6d-2d97-4a5f-9923-bf2d334c932b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740163902020/9bf65ddd-04ed-400a-afd2-8a928484ba1d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-step-10-access-the-application"><strong>Step 10: Access the Application</strong></h2>
<p>Once everything is set up, the ALB Controller provisions an Application Load Balancer for the ingress. Find its DNS name with <code>kubectl get ingress -n game-2048</code> (the <code>ADDRESS</code> column) and open it in a browser. For example:</p>
<pre><code class="lang-bash">http://k8s-game2048-ingress2-067ca5a9c2-1851598133.ap-south-1.elb.amazonaws.com
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1740163379462/a411b0ca-0449-4efd-b562-8d0bb6dc48b7.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Congratulations! You’ve successfully set up an EKS cluster on Fargate, deployed a sample application, and exposed it through an Application Load Balancer. This setup is ideal for running serverless Kubernetes workloads without managing the underlying infrastructure. When you’re done experimenting, remember to clean up with <code>eksctl delete cluster --name alpha --region ap-south-1</code> to avoid ongoing charges.</p>
<blockquote>
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Mastering DevOps: Transforming a Go Web App with End-to-End Automation]]></title><description><![CDATA[In this article, we’ll implement end-to-end DevOps practices for a Go web application that currently lacks any DevOps methodologies. We’ll cover everything from containerizing the app with Docker to deploying it on a Kubernetes cluster using EKS, Hel...]]></description><link>https://blogs.praduman.site/mastering-devops-transforming-a-go-web-app-with-end-to-end-automation</link><guid isPermaLink="true">https://blogs.praduman.site/mastering-devops-transforming-a-go-web-app-with-end-to-end-automation</guid><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Continuous Integration]]></category><category><![CDATA[continuous deployment]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[EKS]]></category><category><![CDATA[Helm]]></category><category><![CDATA[automation]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Mon, 17 Feb 2025 20:01:19 GMT</pubDate><content:encoded><![CDATA[<p>In this article, we’ll implement <strong>end-to-end DevOps practices</strong> for a Go web application that currently lacks any DevOps methodologies. We’ll cover everything from containerizing the app with Docker to deploying it on a Kubernetes cluster using <strong>EKS</strong>, <strong>Helm</strong>, <strong>GitHub Actions</strong>, and <strong>ArgoCD</strong>. Let’s dive in!</p>
<h2 id="heading-source-code-amp-repository"><strong>Source Code &amp; Repository</strong> 👇</h2>
<p>You can find the source code for this project here:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/praduman8435/go-web-app">https://github.com/praduman8435/go-web-app</a></div>
<p> </p>
<h2 id="heading-steps-well-follow"><strong>Steps We’ll Follow</strong></h2>
<ol>
<li><p><strong>Containerize the application</strong> using a multistage Dockerfile.</p>
</li>
<li><p><strong>Create Kubernetes manifests</strong> for deployment, service, and ingress.</p>
</li>
<li><p><strong>Set up Continuous Integration (CI)</strong> using GitHub Actions.</p>
</li>
<li><p><strong>Implement Continuous Deployment (CD)</strong> using GitOps with ArgoCD.</p>
</li>
<li><p><strong>Deploy the application</strong> on an <strong>AWS EKS cluster</strong>.</p>
</li>
<li><p><strong>Use Helm charts</strong> for environment-specific deployments.</p>
</li>
<li><p><strong>Configure an ingress controller</strong> to make the app accessible via a load balancer.</p>
</li>
</ol>
<hr />
<h2 id="heading-step-1-containerize-the-go-web-app-with-docker"><strong>Step 1: Containerize the Go Web App with Docker</strong></h2>
<p>We’ll start by creating a <strong>multistage Dockerfile</strong> to containerize the application. This ensures a lightweight and secure final image.</p>
<h3 id="heading-create-a-dockerfile"><strong>Create a Dockerfile</strong></h3>
<pre><code class="lang-dockerfile"><span class="hljs-comment"># Build stage</span>
<span class="hljs-keyword">FROM</span> golang:<span class="hljs-number">1.22</span> as base
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>
<span class="hljs-keyword">COPY</span><span class="bash"> go.mod .</span>
<span class="hljs-keyword">RUN</span><span class="bash"> go mod download</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>
<span class="hljs-keyword">RUN</span><span class="bash"> go build -o /app/main .</span>

<span class="hljs-comment"># Final stage - distroless image</span>
<span class="hljs-keyword">FROM</span> gcr.io/distroless/base
<span class="hljs-keyword">COPY</span><span class="bash"> --from=base /app/main .</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=base /app/static ./static</span>
<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">8080</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"./main"</span>]</span>
</code></pre>
<h3 id="heading-build-and-run-the-docker-image"><strong>Build and Run the Docker Image</strong></h3>
<pre><code class="lang-bash">docker build -t thepraduman/go-web-app:v1 .
docker run -p 8080:8080 -it thepraduman/go-web-app:v1
</code></pre>
<p>Once the container is running, you can access the app at <a target="_blank" href="http://localhost:8080/home"><code>http://localhost:8080/home</code></a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739608632053/a6672a8f-c924-4504-8479-6c4b8c2a47e7.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-2-deploy-on-kubernetes-with-yaml-manifests"><strong>Step 2: Deploy on Kubernetes with YAML Manifests</strong></h2>
<p>Next, we’ll create Kubernetes manifests to deploy the app on a Kubernetes cluster.</p>
<h3 id="heading-push-the-docker-image"><strong>Push the Docker Image</strong></h3>
<p>Before deploying, push the Docker image to a container registry:</p>
<pre><code class="lang-bash">docker push thepraduman/go-web-app:v1
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739613714645/4a8a464a-a2c1-4ef9-b9b1-85f12e4f7497.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-create-kubernetes-manifests"><strong>Create Kubernetes Manifests</strong></h3>
<p>Create a folder <code>k8s/manifest</code> and add the following files:</p>
<ol>
<li><p><code>deployment.yaml</code>:</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
 <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
 <span class="hljs-attr">metadata:</span>
   <span class="hljs-attr">name:</span> <span class="hljs-string">go-web-app</span>
   <span class="hljs-attr">labels:</span>
     <span class="hljs-attr">app:</span> <span class="hljs-string">go-web-app</span>
 <span class="hljs-attr">spec:</span>
   <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
   <span class="hljs-attr">selector:</span>
     <span class="hljs-attr">matchLabels:</span>
       <span class="hljs-attr">app:</span> <span class="hljs-string">go-web-app</span>
   <span class="hljs-attr">template:</span>
     <span class="hljs-attr">metadata:</span>
       <span class="hljs-attr">labels:</span>
         <span class="hljs-attr">app:</span> <span class="hljs-string">go-web-app</span>
     <span class="hljs-attr">spec:</span>
       <span class="hljs-attr">containers:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">go-web-app</span>
         <span class="hljs-attr">image:</span> <span class="hljs-string">thepraduman/go-web-app:v1</span>
         <span class="hljs-attr">ports:</span>
         <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">8080</span>
</code></pre>
</li>
<li><p><code>service.yaml</code>:</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
 <span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
 <span class="hljs-attr">metadata:</span>
   <span class="hljs-attr">name:</span> <span class="hljs-string">go-web-app</span>
   <span class="hljs-attr">labels:</span>
     <span class="hljs-attr">app:</span> <span class="hljs-string">go-web-app</span>
 <span class="hljs-attr">spec:</span>
   <span class="hljs-attr">type:</span> <span class="hljs-string">ClusterIP</span>
   <span class="hljs-attr">selector:</span>
     <span class="hljs-attr">app:</span> <span class="hljs-string">go-web-app</span>
   <span class="hljs-attr">ports:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
       <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
       <span class="hljs-attr">targetPort:</span> <span class="hljs-number">8080</span>
</code></pre>
</li>
<li><p><code>ingress.yaml</code>:</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
 <span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
 <span class="hljs-attr">metadata:</span>
   <span class="hljs-attr">name:</span> <span class="hljs-string">go-web-app</span>
   <span class="hljs-attr">annotations:</span>
     <span class="hljs-attr">nginx.ingress.kubernetes.io/rewrite-target:</span> <span class="hljs-string">/</span>
 <span class="hljs-attr">spec:</span>
   <span class="hljs-attr">ingressClassName:</span> <span class="hljs-string">nginx</span>
   <span class="hljs-attr">rules:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">"go-web-app.local"</span>
     <span class="hljs-attr">http:</span>
       <span class="hljs-attr">paths:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
         <span class="hljs-attr">path:</span> <span class="hljs-string">"/"</span>
         <span class="hljs-attr">backend:</span>
           <span class="hljs-attr">service:</span>
             <span class="hljs-attr">name:</span> <span class="hljs-string">go-web-app</span>
             <span class="hljs-attr">port:</span>
               <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
</code></pre>
</li>
</ol>
<h3 id="heading-apply-the-manifests"><strong>Apply the Manifests</strong></h3>
<pre><code class="lang-bash">kubectl apply -f k8s/manifest
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739644993905/7eb30e53-1ca5-4615-bc6e-ef5cb0648d0a.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-step-3-set-up-an-eks-cluster"><strong>Step 3: Set Up an EKS Cluster</strong></h2>
<p>To deploy the app on Kubernetes, we’ll use <strong>Amazon EKS</strong>. Ensure you have <code>awscli</code>, <code>eksctl</code>, and <code>kubectl</code> installed.</p>
<h3 id="heading-create-an-eks-cluster"><strong>Create an EKS Cluster</strong></h3>
<pre><code class="lang-bash">eksctl create cluster --name demo-cluster --region ap-south-1
</code></pre>
<hr />
<h2 id="heading-step-4-configure-the-ingress-controller"><strong>Step 4: Configure the Ingress Controller</strong></h2>
<p>At this point, the ingress resource alone doesn’t make the app reachable: an ingress <em>controller</em> is needed to watch ingress resources and assign them an address (by provisioning a load balancer). To make the app accessible, we’ll install the <strong>NGINX Ingress Controller.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739645179356/3e84e590-2c73-4983-965c-d8c0ab1b3585.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>First, let’s confirm the service itself works without ingress. To check this, change the service type from <code>ClusterIP</code> to <code>NodePort</code> (for example with <code>kubectl edit svc go-web-app</code>).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739645790867/ed138999-de0f-4150-b8ba-4865d44a2c20.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Run this command after you've changed the service type to find out the NodePort where your application is running.</p>
<pre><code class="lang-bash">  kubectl get svc
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739727252627/243525bf-70f7-4bcb-8338-d99b635e39b1.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Take a look at the external IPs of the nodes in the Kubernetes cluster:</p>
<pre><code class="lang-bash">  kubectl get nodes -o wide
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739727308022/56d53633-f234-4bb0-a481-fbe6aa600068.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>You can now open the application at <code>http://13.126.11.218:31296/home</code> (substitute your own node’s external IP and NodePort).</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739727201667/08f91cc4-d9fc-4674-9d33-70fee91ace2d.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h3 id="heading-now-install-the-ingress-controller"><strong>Now, Install the Ingress Controller</strong></h3>
<pre><code class="lang-bash">kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.1/deploy/static/provider/aws/deploy.yaml
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739728011875/731e8407-d272-439d-bb2d-4cf3eae24bc3.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-lets-check-if-the-ingress-controller-is-up-and-running"><strong>Let's check if the ingress controller is up and running.</strong></h3>
<pre><code class="lang-bash">kubectl get pod -n ingress-nginx
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739728747685/4c18bf2a-1722-4711-9aed-d32f30ea5608.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-verify-the-ingress"><strong>Verify the Ingress</strong></h3>
<pre><code class="lang-bash">kubectl get ing
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739729101193/c702341c-3cc0-437c-94f8-fe85b9721ac4.png" alt class="image--center mx-auto" /></p>
<p>Here, we can see that the ingress controller is managing our ingress resource. It has assigned a domain name: <code>adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com</code>.</p>
<blockquote>
<p>Wait a minute, what happens if we try to access the load balancer at <code>adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com</code>? Will we be able to reach our application?</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739729690884/269e6023-14ea-4e4b-9869-5aa51d4daab4.png" alt class="image--center mx-auto" /></p>
<p>In this scenario, the application is not accessible through <code>adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com</code>. Why? Because the ingress rule only accepts requests whose <code>Host</code> header is <code>go-web-app.local</code>, so requests addressed to the raw load-balancer DNS name don’t match any rule.</p>
<blockquote>
<p>Map the load balancer’s DNS to <code>go-web-app.local</code> in your <code>/etc/hosts</code> file to access the app.</p>
</blockquote>
<ul>
<li><p>To obtain the IP address of <code>adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com</code>, execute the following command</p>
<pre><code class="lang-bash">  nslookup adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739730596248/b9639861-15ff-40ab-888c-bf8dca6e076c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Now, let’s map the IP of the load balancer <code>adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com</code> (from the <code>nslookup</code> output above) to the host <code>go-web-app.local</code> by editing <code>/etc/hosts</code>:</p>
<pre><code class="lang-bash">  sudo vim /etc/hosts
  # then add a line like (using the IP from nslookup):
  # &lt;elb-ip&gt;   go-web-app.local
</code></pre>
</li>
<li><p>You can now access the application at go-web-app.local! 🎉</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739732765664/77bcc621-3416-4574-b45f-98a8f93ceb86.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<hr />
<h2 id="heading-step-5-simplify-deployments-with-helm"><strong>Step 5: Simplify Deployments with Helm</strong></h2>
<p>When deploying an application across multiple environments, Helm becomes essential. So far, we’ve used hard-coded configuration files for the deployment, service, and ingress. Imagine we need the image <code>go-web-app:dev</code> for development, <code>go-web-app:qa</code> for QA, and <code>go-web-app:prod</code> for production. Does that mean maintaining separate folders like <code>k8s/dev</code>, <code>k8s/qa</code>, and <code>k8s/prod</code>? Fortunately, no: Helm lets us turn these values into variables, simplifying the process.</p>
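<p>For example, each environment could get its own small values file that overrides only what differs (the file name <code>values-dev.yaml</code> here is hypothetical, not part of the repository):</p>
<pre><code class="lang-yaml"># values-dev.yaml: override only what differs per environment
image:
  tag: "dev"
</code></pre>
<p>Installing with <code>helm install go-web-app ./go-web-app-chart -f values-dev.yaml</code> would then deploy the <code>dev</code> image while everything else comes from the default <code>values.yaml</code>.</p>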
<h3 id="heading-create-a-helm-chart"><strong>Create a Helm Chart</strong></h3>
<blockquote>
<p>Make sure you have Helm installed on your computer.</p>
</blockquote>
<pre><code class="lang-bash">helm create go-web-app-chart
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739734549917/9512a81d-9f40-4b8f-95e5-2479a7171d97.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739734575694/a26de7e0-8419-4a6e-a60c-d4534aa27ad2.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-update-the-helm-chart"><strong>Update the Helm Chart</strong></h3>
<ol>
<li><p>First delete everything inside the chart’s <code>templates</code> folder, then copy the Kubernetes manifests into it:</p>
<pre><code class="lang-bash"> rm -rf helm/go-web-app-chart/templates/*
 cp k8s/manifest/ingress.yaml helm/go-web-app-chart/templates
 cp k8s/manifest/service.yaml helm/go-web-app-chart/templates
 cp k8s/manifest/deployment.yaml helm/go-web-app-chart/templates
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739735496041/ebfd2d1c-d868-4f54-888c-b9e597841e76.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Replace the image tag in <code>deployment.yaml</code> (inside the <code>templates</code> folder) with <code>{{ .Values.image.tag }}</code>:</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
 <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
 <span class="hljs-attr">metadata:</span>
   <span class="hljs-attr">name:</span> <span class="hljs-string">go-web-app</span>
   <span class="hljs-attr">labels:</span>
     <span class="hljs-attr">app:</span> <span class="hljs-string">go-web-app</span>
 <span class="hljs-attr">spec:</span>
   <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
   <span class="hljs-attr">selector:</span>
     <span class="hljs-attr">matchLabels:</span>
       <span class="hljs-attr">app:</span> <span class="hljs-string">go-web-app</span>
   <span class="hljs-attr">template:</span>
     <span class="hljs-attr">metadata:</span>
       <span class="hljs-attr">labels:</span>
         <span class="hljs-attr">app:</span> <span class="hljs-string">go-web-app</span>
     <span class="hljs-attr">spec:</span>
       <span class="hljs-attr">containers:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">go-web-app</span>
         <span class="hljs-attr">image:</span> <span class="hljs-string">thepraduman/go-web-app:{{</span> <span class="hljs-string">.Values.image.tag</span> <span class="hljs-string">}}</span>
         <span class="hljs-attr">ports:</span>
         <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">8080</span>
</code></pre>
<blockquote>
<p>Now, whenever Helm runs, it checks the <code>values.yaml</code> file for the image tag.</p>
</blockquote>
</li>
<li><p>Update <code>values.yaml</code>:</p>
<pre><code class="lang-yaml"> <span class="hljs-comment"># Default values for go-web-app-chart.</span>
 <span class="hljs-comment"># This is a YAML-formatted file.</span>
 <span class="hljs-comment"># Declare variables to be passed into your templates.</span>

 <span class="hljs-attr">replicaCount:</span> <span class="hljs-number">1</span>

 <span class="hljs-attr">image:</span>
   <span class="hljs-attr">repository:</span> <span class="hljs-string">thepraduman/go-web-app</span>
   <span class="hljs-attr">pullPolicy:</span> <span class="hljs-string">IfNotPresent</span>
   <span class="hljs-comment"># Overrides the image tag whose default is the chart appVersion.</span>
   <span class="hljs-attr">tag:</span> <span class="hljs-string">"v1"</span>
   <span class="hljs-comment"># When we set up CI/CD, </span>
   <span class="hljs-comment"># we'll make the Helm values.yaml update automatically. </span>
   <span class="hljs-comment"># Every time the CI/CD runs, </span>
   <span class="hljs-comment"># it will refresh the Helm values.yaml with the newest image created in the CI. </span>
   <span class="hljs-comment"># Then, using ArgoCD, that latest image with the newest tag will be deployed automatically.</span>

 <span class="hljs-attr">ingress:</span>
   <span class="hljs-attr">enabled:</span> <span class="hljs-literal">false</span>
   <span class="hljs-attr">className:</span> <span class="hljs-string">""</span>
   <span class="hljs-attr">annotations:</span> {}
     <span class="hljs-comment"># kubernetes.io/ingress.class: nginx</span>
     <span class="hljs-comment"># kubernetes.io/tls-acme: "true"</span>
   <span class="hljs-attr">hosts:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">chart-example.local</span>
       <span class="hljs-attr">paths:</span>
         <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
           <span class="hljs-attr">pathType:</span> <span class="hljs-string">ImplementationSpecific</span>
</code></pre>
</li>
<li><p>To have <code>ingress-nginx</code> installed automatically as a dependency of your chart, add the following to <code>./go-web-app-chart/Chart.yaml</code>, then run <code>helm dependency update ./go-web-app-chart</code> to fetch it:</p>
<pre><code class="lang-yaml">  dependencies:
    - name: ingress-nginx
      version: "4.10.0"  # use the latest stable version
      repository: "https://kubernetes.github.io/ingress-nginx"
</code></pre>
</li>
<li><p>Let’s verify that Helm works as expected.</p>
<pre><code class="lang-bash"> <span class="hljs-comment"># delete the existing resources so we can recreate them from the Helm chart</span>
 kubectl delete deploy/go-web-app svc/go-web-app ing/go-web-app
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739769118399/9f5e9c34-fc52-453d-8d5e-f102c9d4ad9b.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739769171125/035324b9-a679-4e17-932c-89b3be071204.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Now, run the following command to create all the resources again with the Helm chart and watch the magic happen! 🚀</p>
</blockquote>
</li>
</ol>
<h3 id="heading-deploy-with-helm"><strong>Deploy with Helm</strong></h3>
<pre><code class="lang-bash">helm install go-web-app ./go-web-app-chart
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739769454743/4ba657dd-7234-4ee5-b4d7-2f6a30d33cbe.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739945262159/1d7bfe5c-f1e5-494d-bb7d-0ff428b1b6af.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739769423707/7984f1f9-5093-4391-b0f8-34317fbd45fd.png" alt class="image--center mx-auto" /></p>
<p>You can now access the application at <code>go-web-app.local</code>! 🎉</p>
<h3 id="heading-to-uninstall-everything-run-the-command"><strong>To uninstall everything, run the command</strong></h3>
<pre><code class="lang-bash">helm uninstall go-web-app
</code></pre>
<blockquote>
<p>Here, we can say that Helm is working perfectly!</p>
</blockquote>
<hr />
<h2 id="heading-step-6-set-up-ci-with-github-actions"><strong>Step 6: Set Up CI with GitHub Actions</strong></h2>
<blockquote>
<p><strong>In CI, we will set up several stages:</strong></p>
</blockquote>
<ul>
<li><p>Build and run unit tests.</p>
</li>
<li><p>Perform static code analysis.</p>
</li>
<li><p>Create a Docker image and push it.</p>
</li>
<li><p>Update Helm with the new Docker image.</p>
</li>
</ul>
<blockquote>
<p><strong>Once this is complete, CD will take over:</strong></p>
</blockquote>
<ul>
<li>When the Helm tag is updated, ArgoCD will pull the Helm chart and deploy it to the Kubernetes cluster.</li>
</ul>
<p>To implement Continuous Integration (CI) using GitHub Actions, Add a file <code>.github/workflows/ci.yaml</code> :</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">CI/CD</span>
<span class="hljs-comment"># Exclude the workflow to run on changes to the helm chart</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">branches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">main</span>
    <span class="hljs-attr">paths-ignore:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'helm/**'</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'k8s/**'</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'README.md'</span>
<span class="hljs-attr">jobs:</span>
<span class="hljs-comment">## stage 1</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">repository</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Set</span> <span class="hljs-string">up</span> <span class="hljs-string">Go</span> <span class="hljs-number">1.22</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/setup-go@v2</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">go-version:</span> <span class="hljs-number">1.22</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">go</span> <span class="hljs-string">build</span> <span class="hljs-string">-o</span> <span class="hljs-string">go-web-app</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Test</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">go</span> <span class="hljs-string">test</span> <span class="hljs-string">./...</span>
<span class="hljs-comment">## stage 2</span>
  <span class="hljs-attr">code-quality:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">repository</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Run</span> <span class="hljs-string">golangci-lint</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">golangci/golangci-lint-action@v6</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">latest</span>
<span class="hljs-comment">## stage 3</span>
  <span class="hljs-attr">push:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">needs:</span> <span class="hljs-string">build</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">repository</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Set</span> <span class="hljs-string">up</span> <span class="hljs-string">Docker</span> <span class="hljs-string">Buildx</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/setup-buildx-action@v1</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Login</span> <span class="hljs-string">to</span> <span class="hljs-string">DockerHub</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/login-action@v3</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">username:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKERHUB_USERNAME</span> <span class="hljs-string">}}</span>
        <span class="hljs-attr">password:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKERHUB_TOKEN</span> <span class="hljs-string">}}</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Build</span> <span class="hljs-string">and</span> <span class="hljs-string">Push</span> <span class="hljs-string">action</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">docker/build-push-action@v6</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">context:</span> <span class="hljs-string">.</span>
        <span class="hljs-attr">file:</span> <span class="hljs-string">./Dockerfile</span>
        <span class="hljs-attr">push:</span> <span class="hljs-literal">true</span>
        <span class="hljs-attr">tags:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.DOCKERHUB_USERNAME</span> <span class="hljs-string">}}/go-web-app:${{github.run_id}}</span>
<span class="hljs-comment">## stage 4</span>
  <span class="hljs-attr">update-newtag-in-helm-chart:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">needs:</span> <span class="hljs-string">push</span>
    <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Checkout</span> <span class="hljs-string">repository</span>
      <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v4</span>
      <span class="hljs-attr">with:</span>
        <span class="hljs-attr">token:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.TOKEN</span> <span class="hljs-string">}}</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Update</span> <span class="hljs-string">tag</span> <span class="hljs-string">in</span> <span class="hljs-string">Helm</span> <span class="hljs-string">chart</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        sed -i 's/tag: .*/tag: "${{github.run_id}}"/' helm/go-web-app-chart/values.yaml
</span>    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Commit</span> <span class="hljs-string">and</span> <span class="hljs-string">push</span> <span class="hljs-string">changes</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">|
        git config --global user.email "abhishek@gmail.com"
        git config --global user.name "Abhishek Veeramalla"
        git add helm/go-web-app-chart/values.yaml
        git commit -m "Update tag in Helm chart"
        git push</span>
</code></pre>
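<p>The <code>sed</code> command in the last stage simply rewrites whatever follows <code>tag:</code> in <code>values.yaml</code> with the current <code>github.run_id</code>. Here is a standalone sketch of that substitution, using a throwaway file and a made-up run id:</p>
<pre><code class="lang-bash"># create a sample values.yaml (contents are illustrative)
printf 'image:\n  repository: user/go-web-app\n  tag: "old"\n' | tee /tmp/values.yaml
# replace the tag line, exactly as the workflow step does
sed -i 's/tag: .*/tag: "12345"/' /tmp/values.yaml
grep 'tag:' /tmp/values.yaml   # prints:   tag: "12345"
</code></pre>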
<ul>
<li><p>Now that we've finished setting up the CI pipeline, let's check to make sure everything is working smoothly.</p>
<pre><code class="lang-bash">  git push
</code></pre>
<blockquote>
<p>Head over to the GitHub repository and take a look at the Actions tab.</p>
</blockquote>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739783631459/8de09d98-4f8b-4b41-9a7c-028ae647970f.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<blockquote>
<p>Here, the GitHub Action is working perfectly, and we've successfully completed the implementation of continuous integration! 🎉</p>
</blockquote>
<hr />
<h2 id="heading-step-7-implement-cd-with-argocd"><strong>Step 7: Implement CD with ArgoCD</strong></h2>
<p><strong>Now, let's turn our attention to the ArgoCD component. Every time the CI pipeline runs, ArgoCD should spot the changes and deploy them to the Kubernetes cluster</strong></p>
<blockquote>
<p>Create a namespace called argocd and install ArgoCD there using a manifest file.</p>
</blockquote>
<ul>
<li><pre><code class="lang-bash">        kubectl create namespace argocd
        kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
</li>
<li><p>Access the Argo CD UI (Loadbalancer service).</p>
<pre><code class="lang-bash">  kubectl patch svc argocd-server -n argocd -p <span class="hljs-string">'{"spec": {"type": "LoadBalancer"}}'</span>
</code></pre>
</li>
<li><p>Access the Argo CD UI (Loadbalancer service) -For Windows</p>
<pre><code class="lang-bash">  kubectl patch svc argocd-server -n argocd -p <span class="hljs-string">"{\"spec\": {\"type\": \"LoadBalancer\"}}"</span>
</code></pre>
</li>
<li><p>Get the Loadbalancer service IP</p>
<pre><code class="lang-bash">  kubectl get svc argocd-server -n argocd
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739946544759/c1d0af40-7397-4c84-b4a1-e577c0e18f87.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Now you can access the ArgoCD UI at <code>ae9f68624737e4e49a0bea0a56d6dce4-1141447750.ap-south-1.elb.amazonaws.com</code>. Enjoy exploring!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739787215102/8050f275-8fe7-45d6-ac39-d52e3aad8537.png" alt class="image--center mx-auto" /></p>
</blockquote>
</li>
<li><p>To log in to ArgoCD, simply use <code>admin</code> as the username, and you can get the password by running the following commands.</p>
<pre><code class="lang-bash">  kubectl get secrets -n argocd
  kubectl edit secrets argocd-initial-admin-secret -n argocd
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739818674722/e892ed76-4655-4a52-85aa-c20e1c9688d4.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>The password we got is in base64 format, so to decode it, just run this command.</p>
</blockquote>
<pre><code class="lang-bash">  <span class="hljs-built_in">echo</span> &lt;password-that-you-got&gt; | base64 --decode
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739819028710/6e5de4cb-a94f-4adb-b3e6-2370d6891eba.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
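<p>A quick standalone illustration of that decoding step (the encoded string below is a made-up example, not a real ArgoCD password):</p>
<pre><code class="lang-bash"># hypothetical base64-encoded secret value
encoded="cGFzc3dvcmQxMjM="
# piping it through base64 --decode recovers the original text
echo "$encoded" | base64 --decode   # prints: password123
</code></pre>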
<blockquote>
<p>We've got access to the ArgoCD UI! 🎉</p>
</blockquote>
<ul>
<li><p>Click the <code>"New App"</code> button and fill in the required details.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739819673988/bf514714-1987-42bb-8828-698952d67d8d.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739819718792/632100d8-598b-4c0f-90f7-74b5ecf74b68.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Once you've filled in all the details, just click on <code>"Create"</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739819912791/0b5a53bf-19bc-404f-8dd9-fdc90c5b6a29.png" alt class="image--center mx-auto" /></p>
</blockquote>
</li>
</ul>
<p>Now, ArgoCD will watch the Helm chart in the GitHub repository and render it with the latest <code>values.yaml</code> (which the CI pipeline keeps updated with each new image tag). If you click on the application tile, you will be able to see that ArgoCD has started deploying everything.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739820135260/841f5502-98ce-40a3-aade-13c8463fecca.png" alt class="image--center mx-auto" /></p>
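<p>If you prefer a declarative setup over the UI form, the same application can be described with an ArgoCD <code>Application</code> manifest — the repo URL and paths below are placeholders, so substitute your own:</p>
<pre><code class="lang-yaml">apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: go-web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/&lt;your-username&gt;/go-web-app.git
    targetRevision: HEAD
    path: helm/go-web-app-chart
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
</code></pre>
<p>Applying it with <code>kubectl apply -n argocd -f app.yaml</code> has the same effect as clicking <code>"Create"</code> in the UI.</p>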
<p>So, we're all done, and guess what? We deployed the application using CI/CD! 🎉</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>By following these steps, we’ve successfully implemented <strong>end-to-end DevOps practices</strong> for our Go web app. From containerization to automated CI/CD pipelines, we’ve streamlined the deployment process and made it scalable and efficient.</p>
<blockquote>
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Kubernetes-part-10]]></title><description><![CDATA[Volumes
volumes are used to provide storage that pods and containers can access. Unlike regular container storage, which is ephemeral (lost when the container stops), volumes allow data to persist or be shared across containers in a pod. Volumes ensu...]]></description><link>https://blogs.praduman.site/kubernetes-part-10</link><guid isPermaLink="true">https://blogs.praduman.site/kubernetes-part-10</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Thu, 26 Sep 2024 07:17:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1741280578009/b8662317-d1f8-4910-8b76-e22afa4febe7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-volumes"><strong>Volumes</strong></h3>
<p><strong>Volumes</strong> provide storage that pods and containers can access. Unlike regular container storage, which is ephemeral (lost when the container stops), volumes allow data to survive container restarts and to be shared between containers within the same pod. Any number of volumes can be attached to a pod.</p>
<h4 id="heading-types-of-volumes-in-kubernetes"><strong>Types of volumes in Kubernetes</strong></h4>
<ol>
<li><p><strong>emptyDir</strong></p>
<ul>
<li><p>An <code>emptyDir</code> volume is created when a pod is assigned to a node and exists as long as the pod is running. Once the pod is deleted, the data in the <code>emptyDir</code> is also removed.</p>
</li>
<li><p><strong>Use case:</strong> Temporary storage for data that needs to be shared between containers in the same pod.</p>
</li>
<li><p><strong>Example</strong>:</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">emptydir-pod</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">busybox</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
        <span class="hljs-attr">command:</span> [<span class="hljs-string">'sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'echo "Writing data to /data/emptydir-volume..."; echo "Hello from Kubesimplify" &gt; /data/emptydir-volume/hello.txt; sleep 3600'</span>]
        <span class="hljs-attr">volumeMounts:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">temp-storage</span>
            <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/data/emptydir-volume</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">temp-storage</span>
        <span class="hljs-attr">emptyDir:</span> {}
</code></pre>
<blockquote>
<p>The YAML configuration defines a Kubernetes pod named <code>emptydir-pod</code> that uses an <code>emptyDir</code> volume for temporary storage. A container running the <code>busybox</code> image executes a command to write <code>"Hello from Kubesimplify"</code> to a file named <code>hello.txt</code> at <code>/data/emptydir-volume</code>. This volume is created when the pod starts and exists only as long as the pod is running; all data is deleted when the pod is terminated, making it ideal for temporary storage needs.</p>
</blockquote>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">emptydir-pod</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">busybox</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
        <span class="hljs-attr">command:</span> [<span class="hljs-string">'sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'echo "Writing data to /data/emptydir-volume..."; sleep 3600'</span>]
        <span class="hljs-attr">volumeMounts:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">temp-storage</span>
            <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/data/emptydir-volume</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">temp-storage</span>
        <span class="hljs-attr">emptyDir:</span>
          <span class="hljs-attr">medium:</span> <span class="hljs-string">Memory</span>
          <span class="hljs-attr">sizeLimit:</span> <span class="hljs-string">512Mi</span>
</code></pre>
<blockquote>
<p>This pod runs a <code>busybox</code> container that writes data to a temporary, memory-based volume mounted at <code>/data/emptydir-volume</code> inside the container. The volume is limited to 512 MiB and will be deleted once the pod stops.</p>
</blockquote>
</li>
</ul>
</li>
<li><p><strong>hostPath</strong></p>
<ul>
<li><p>A <strong>HostPath</strong> volume mounts a file or directory from the node’s filesystem (the host) into the pod, allowing the container to access or write data directly on the host.</p>
</li>
<li><p><strong>Use case:</strong> Accessing specific files or logs on the host node.</p>
</li>
<li><p><strong>Example</strong>:</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">hostpath-pod</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">busybox</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
        <span class="hljs-attr">command:</span> [<span class="hljs-string">'sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'echo "Writing data to /data/hostpath-volume..."; echo "Hello from Kubesimplify" &gt; /data/hostpath-volume/hello.txt; sleep 3600'</span>]
        <span class="hljs-attr">volumeMounts:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">host-storage</span>
            <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/data/hostpath-volume</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">host-storage</span>
        <span class="hljs-attr">hostPath:</span>
          <span class="hljs-attr">path:</span> <span class="hljs-string">/tmp/hostpath</span>
          <span class="hljs-attr">type:</span> <span class="hljs-string">DirectoryOrCreate</span>
</code></pre>
<blockquote>
<p>The <strong>HostPath Volume</strong> mounts the directory <code>/tmp/hostpath</code> from the node's filesystem into the container at <code>/data/hostpath-volume</code>. The pod writes the text "Hello from Kubesimplify" to a file named <code>hello.txt</code> inside the container at <code>/data/hostpath-volume/hello.txt</code>. Since this is a HostPath volume, the file is actually created in <code>/tmp/hostpath/hello.txt</code> on the host node. Additionally, the data stored in the HostPath volume will persist even if the pod is deleted, as it is stored directly on the host node’s filesystem.</p>
</blockquote>
</li>
</ul>
</li>
<li><p><strong>Persistent Volume (PV)</strong></p>
<ul>
<li><p>It is a piece of storage in a Kubernetes cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.</p>
</li>
<li><p>PVs are cluster resources that exist independently of any individual pod and are designed for long-term data storage.</p>
</li>
<li><p>They can be backed by various types of storage systems, such as NFS, iSCSI, cloud storage (like AWS EBS, Google Persistent Disk), or local storage.</p>
</li>
<li><p>PVs have a lifecycle that is separate from the pods that use them, allowing data to persist even if pods are deleted or moved.</p>
</li>
</ul>
</li>
</ol>
<p>    <strong>Persistent Volume Claim (PVC)</strong>:</p>
<ul>
<li><p>A <strong>Persistent Volume Claim (PVC)</strong> is a request for storage by a user or a pod.</p>
</li>
<li><p>It specifies the desired storage size and access modes (e.g., ReadWriteOnce, ReadOnlyMany) and can also include specific storage class requirements.</p>
</li>
<li><p>When a PVC is created, Kubernetes looks for a matching PV that satisfies the request. If a suitable PV is found, it is bound to the PVC, allowing pods to use that storage.</p>
</li>
<li><p>If no matching PV is available, Kubernetes may dynamically provision a new PV if a Storage Class is specified in the PVC.</p>
</li>
</ul>
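<p>The bullets above can be made concrete with a small sketch: a statically provisioned <code>hostPath</code>-backed PV and a PVC that matches it (names, paths, and sizes here are illustrative):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/example-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>Once both are applied, Kubernetes binds the claim to the volume, and a pod can then mount the storage by referencing <code>example-pvc</code> in its <code>volumes</code> section.</p>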
]]></content:encoded></item><item><title><![CDATA[Kubernetes-part-9]]></title><description><![CDATA[Understanding Role-Based Access Control (RBAC) in Kubernetes
Role-Based Access Control (RBAC) is a way to manage who can do what in a system by assigning roles to users. So, instead of giving each user individual access permissions, you give them a r...]]></description><link>https://blogs.praduman.site/kubernetes-part-9</link><guid isPermaLink="true">https://blogs.praduman.site/kubernetes-part-9</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Mon, 23 Sep 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726606453386/427ee71c-fe91-4c77-9bda-dd3b59945208.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-understanding-role-based-access-control-rbac-in-kubernetes">Understanding Role-Based Access Control (RBAC) in Kubernetes</h3>
<p>Role-Based Access Control (RBAC) is a way to manage who can do what in a system by assigning roles to users. So, instead of giving each user individual access permissions, you give them a role, and the role controls what they can or can't do. This makes it easier to manage security in big systems since you only need to change the permissions for a role, not every user.</p>
<p><code>Roles</code> and <code>RoleBindings</code> are key components of the <code>RBAC</code> system in Kubernetes: they control access to resources based on user roles. There are two main types of each.</p>
<h3 id="heading-defining-roles-in-kubernetes-role-vs-clusterrole">Defining Roles in Kubernetes: Role vs. ClusterRole</h3>
<h4 id="heading-1-role">1. <strong>Role</strong></h4>
<ul>
<li><p>Limited to a specific namespace</p>
</li>
<li><p>Used to define permissions (like read, write, update, delete) on resources within a specific namespace.</p>
</li>
<li><p><strong>Example</strong>: A role that allows access to resources like Pods, Services, and ConfigMaps within the <code>dev</code> namespace.</p>
</li>
</ul>
<blockquote>
<h4 id="heading-example-role-yaml">Example Role YAML</h4>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Role</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">dev</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">pod-reader</span>
<span class="hljs-attr">rules:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">apiGroups:</span> [<span class="hljs-string">""</span>]
  <span class="hljs-attr">resources:</span> [<span class="hljs-string">"pods"</span>]
  <span class="hljs-attr">verbs:</span> [<span class="hljs-string">"get"</span>, <span class="hljs-string">"list"</span>, <span class="hljs-string">"watch"</span>]
</code></pre>
<p>In this example, the <code>pod-reader</code> role allows the user to get, list, and watch Pods within the <code>dev</code> namespace.</p>
</blockquote>
<h4 id="heading-2-clusterrole">2. <strong>ClusterRole</strong></h4>
<ul>
<li><p>It defines permissions across the entire cluster</p>
</li>
<li><p>Used for cluster-wide resources like <code>nodes</code> or <code>persistentvolumes</code>, it can also be used within specific namespaces if bound with a <code>RoleBinding</code></p>
</li>
<li><p><strong>Example</strong>: A role that gives access to cluster-level resources such as nodes, storage, or the ability to manage namespaces.</p>
</li>
</ul>
<blockquote>
<h4 id="heading-example-clusterrole-yaml">Example ClusterRole YAML</h4>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterRole</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">cluster-admin</span>
<span class="hljs-attr">rules:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">apiGroups:</span> [<span class="hljs-string">""</span>]
  <span class="hljs-attr">resources:</span> [<span class="hljs-string">"nodes"</span>, <span class="hljs-string">"namespaces"</span>]
  <span class="hljs-attr">verbs:</span> [<span class="hljs-string">"get"</span>, <span class="hljs-string">"list"</span>, <span class="hljs-string">"watch"</span>, <span class="hljs-string">"create"</span>, <span class="hljs-string">"delete"</span>]
</code></pre>
<p>In this example, the <code>cluster-admin</code> role allows access to nodes and namespaces across the entire cluster.</p>
</blockquote>
<h4 id="heading-1-rolebindings-assigning-permissions-in-kubernetes">1. <strong>RoleBindings: Assigning Permissions in Kubernetes</strong></h4>
<ul>
<li><p>Namespaced</p>
</li>
<li><p>A RoleBinding grants a Role's permissions to users, groups, or service accounts within a specific namespace. It binds a Role to a user or group in that namespace.</p>
</li>
<li><p><strong>Example</strong>: Bind the <code>pod-reader</code> role to a specific user or service account in the <code>dev</code> namespace.</p>
</li>
</ul>
<blockquote>
<h4 id="heading-example-rolebinding-yaml">Example RoleBinding YAML</h4>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">RoleBinding</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">read-pods-binding</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">dev</span>
<span class="hljs-attr">subjects:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">kind:</span> <span class="hljs-string">User</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">praduman</span>
  <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">rbac.authorization.k8s.io</span>
<span class="hljs-attr">roleRef:</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Role</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">pod-reader</span>
  <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">rbac.authorization.k8s.io</span>
</code></pre>
<p>In this example, the <code>RoleBinding</code> grants the user <code>praduman</code> the <code>pod-reader</code> role's permissions within the <code>dev</code> namespace.</p>
</blockquote>
<h4 id="heading-2-clusterrolebindings-cluster-wide-permission-management">2. <strong>ClusterRoleBindings: Cluster-Wide Permission Management</strong></h4>
<ul>
<li><p>Cluster-wide</p>
</li>
<li><p>A ClusterRoleBinding grants a ClusterRole's permissions to users, groups, or service accounts across the entire cluster or across all namespaces.</p>
</li>
<li><p><strong>Example</strong>: Bind the <code>cluster-admin</code> role to a user or group across the whole cluster.</p>
</li>
</ul>
<blockquote>
<h4 id="heading-example-clusterrolebinding-yaml">Example ClusterRoleBinding YAML</h4>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterRoleBinding</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">admin-binding</span>
<span class="hljs-attr">subjects:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">kind:</span> <span class="hljs-string">User</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">praduman</span>
  <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">rbac.authorization.k8s.io</span>
<span class="hljs-attr">roleRef:</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterRole</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">cluster-admin</span>
  <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">rbac.authorization.k8s.io</span>
</code></pre>
<p>In this example, the <code>ClusterRoleBinding</code> grants the user <code>praduman</code> the <code>cluster-admin</code> permissions across the entire cluster.</p>
</blockquote>
<hr />
<ul>
<li><p>Create a <code>service account</code> (<code>sa.yaml</code>)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ServiceAccount</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">deployment-manager</span>
</code></pre>
<pre><code class="lang-basic">  kubectl apply -f sa.yaml
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727198512467/1f9654ee-3822-4da8-b3e3-4bd005827167.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create a <code>Role</code> (<code>role.yaml</code>)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Role</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">deployment-creator</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">apiGroups:</span> [<span class="hljs-string">"apps"</span>]        <span class="hljs-comment"># the API group of the resource</span>
    <span class="hljs-attr">resources:</span> [<span class="hljs-string">"deployments"</span>]
    <span class="hljs-attr">verbs:</span> [<span class="hljs-string">"create"</span>, <span class="hljs-string">"delete"</span>, <span class="hljs-string">"get"</span>, <span class="hljs-string">"list"</span>, <span class="hljs-string">"patch"</span>, <span class="hljs-string">"update"</span>, <span class="hljs-string">"watch"</span>]
</code></pre>
<blockquote>
<p><code>verbs</code> define what actions the subject can perform on the listed resources</p>
</blockquote>
<pre><code class="lang-bash">  kubectl apply -f role.yaml
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727198661667/5ac72847-9bc6-4536-80a6-804f7525448c.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<blockquote>
<p>To list all the resource types that are available in the cluster</p>
<pre><code class="lang-basic">kubectl api-resources
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727199931834/98fb99b2-6c1c-4132-ac0a-a04d2e2ea0cc.png" alt class="image--center mx-auto" /></p>
<p>To see the <code>groups</code> &amp; <code>version</code> of a specific resource</p>
<pre><code class="lang-basic">kubectl explain &lt;resource-<span class="hljs-keyword">name</span>&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727200087998/74d94d20-9942-43c5-9080-bf6cacc9e35a.png" alt class="image--center mx-auto" /></p>
</blockquote>
<ul>
<li><p>Create <code>RoleBinding</code> (<code>rb.yaml</code>)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">RoleBinding</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">deployment-manager-binding</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">subjects:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">kind:</span> <span class="hljs-string">ServiceAccount</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">deployment-manager</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">roleRef:</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">Role</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">deployment-creator</span>
    <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">rbac.authorization.k8s.io</span>
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f rb.yaml
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727200370345/70ed227c-e3d9-4ec1-b14e-21e12e4bd315.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>To see how a specific resource is defined</p>
<pre><code class="lang-bash">kubectl explain &lt;resource-name&gt;
</code></pre>
<p>Example: to see which values are valid in the <code>kind</code> field of a RoleBinding's <code>subjects</code></p>
<pre><code class="lang-bash">kubectl explain RoleBinding.subjects.kind
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727289571881/8b763d2f-d3eb-42fd-bd32-c8b58c398399.png" alt class="image--center mx-auto" /></p>
</blockquote>
</li>
<li><p>Create a <code>deployment</code> using the <code>service account</code> that is created (<code>deploy.yaml</code>)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-deployment</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">serviceAccountName:</span> <span class="hljs-string">deployment-manager</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
          <span class="hljs-attr">ports:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f deploy.yaml
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727200966627/6b02e227-9b6b-499a-8475-f1697532572a.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Try to create the <code>service</code> or <code>pod</code> using the <code>service account</code></p>
<pre><code class="lang-bash">  kubectl expose deployment nginx-deployment --port=80 --as=system:serviceaccount:default:deployment-manager
</code></pre>
<pre><code class="lang-bash">  kubectl run nginx --image=nginx --as=system:serviceaccount:default:deployment-manager
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727201990749/31f48ee2-f110-4334-9183-107a0a5e569d.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>These requests are denied because the service account was only granted permissions on deployments, not on services or pods</p>
</blockquote>
</li>
<li><p>To check if a service account can perform a task</p>
<pre><code class="lang-bash">  kubectl auth can-i create deployments --as=system:serviceaccount:default:deployment-manager
  kubectl auth can-i create secrets --as=system:serviceaccount:default:deployment-manager
  kubectl auth can-i list services --as=system:serviceaccount:default:deployment-manager
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727202103867/20d94c36-7b77-4d08-9ccf-29afa7fc2522.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h3 id="heading-admission-controllers-ensuring-security-and-compliance-in-kubernetes">Admission Controllers: Ensuring Security and Compliance in Kubernetes</h3>
<p>Admission controllers validate and mutate requests coming to the API server. They are important for keeping security, rules, and operations in check: they act as gatekeepers, making sure that any change to the cluster follows the rules, whether those concern security, resource limits, or specific business guidelines. You can use the built-in admission controllers or create custom ones with webhooks to fit your needs.</p>
<h4 id="heading-types-of-admission-controllers-validating-vs-mutating"><strong>Types of Admission Controllers: Validating vs. Mutating</strong></h4>
<ol>
<li><p><strong>Validating Admission Controllers</strong>:</p>
<ul>
<li><p>These controllers are used to validate incoming requests. If a request fails validation, it is rejected.</p>
</li>
<li><p>Example: You might use a validating admission controller to ensure that all Pods have certain labels or annotations.</p>
</li>
</ul>
</li>
<li><p><strong>Mutating Admission Controllers</strong>:</p>
<ul>
<li><p>These controllers can modify incoming requests before they are stored. For instance, they can add default values or modify fields in the request.</p>
</li>
<li><p>Example: A mutating admission controller might automatically add resource limits to containers if they are not specified.</p>
</li>
</ul>
</li>
</ol>
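<p>Custom mutating controllers are registered with the API server through a <code>MutatingWebhookConfiguration</code>. The sketch below is a minimal, hypothetical example (the webhook name, Service name, namespace, and path are placeholders, and a real webhook also needs a <code>caBundle</code> and a server answering on that path):</p>
<pre><code class="lang-yaml">apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: add-default-limits
webhooks:
- name: defaults.example.com          # hypothetical webhook name
  clientConfig:
    service:
      name: limits-webhook            # hypothetical Service serving the webhook
      namespace: default
      path: /mutate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
</code></pre>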
<blockquote>
<p>When you create a <code>namespace</code>, a <code>service account</code> named <code>default</code> is automatically created in it</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727202912726/ce8736ec-4672-409a-9644-620351ce155c.png" alt /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727202937683/573493ba-808d-4d77-8543-ea051dbc1060.png" alt /></p>
</blockquote>
<ul>
<li><p>To create a token for a service account</p>
<blockquote>
<p>Before <code>Kubernetes v1.24</code>, a token was generated automatically when a service account was created, but now you need to create it manually. For pods, a <code>token</code> is still mounted automatically, valid for 1 hour by default</p>
</blockquote>
<pre><code class="lang-bash">  kubectl create token &lt;service-account-name&gt;
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727203131281/209e1176-2aba-4d9b-9c5f-340e13a32f1c.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>You can check the Token validity on <a target="_blank" href="https://jwt.io/"><code>jwt.io</code></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727203429651/834e9257-927a-4345-a5f4-4208f9028530.png" alt /></p>
</blockquote>
</li>
</ul>
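<p>You can also decode the token payload locally instead of pasting it into <code>jwt.io</code>. A small sketch (the token built here is a fake, JWT-shaped example, not a real service-account token):</p>
<pre><code class="lang-bash"># Build a hypothetical JWT-shaped token, then base64url-decode its
# payload (the middle segment), the same thing jwt.io shows you.
claims='{"iss":"kubernetes/serviceaccount"}'
b64url() { base64 | tr '+/' '-_' | tr -d '=\n'; }
token="$(printf '%s' '{"alg":"RS256"}' | b64url).$(printf '%s' "$claims" | b64url).fakesig"
# extract the payload segment and restore the base64 padding JWTs strip
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
printf '%s\n' "$(printf '%s' "$seg" | base64 -d)"
</code></pre>
<p>A real service-account token's payload carries claims such as the issuer, expiry, and the service account's namespace and name.</p>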
<hr />
<h4 id="heading-practical-demo-implementing-rbac-in-kubernetes"><strong>Practical Demo: Implementing RBAC in Kubernetes</strong></h4>
<ul>
<li><p>Create a namespace</p>
<pre><code class="lang-bash">  kubectl create ns praduman
</code></pre>
</li>
<li><p>Create a ServiceAccount</p>
<pre><code class="lang-bash">  kubectl create sa my-sa -n praduman
</code></pre>
</li>
<li><p>Create a Role</p>
<pre><code class="lang-bash">  kubectl create role my-role --verb=create --resource=deployments.apps -n praduman
</code></pre>
</li>
<li><p>Create a RoleBinding</p>
<pre><code class="lang-bash">  kubectl create rolebinding my-rolebinding --role=my-role --serviceaccount=praduman:my-sa -n praduman
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727291974513/872de234-65e3-42fb-acc3-9653af923ec0.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Check the permission</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727292194005/1f6d3236-5f1d-4af7-98d5-19e7ee16df58.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create a token for the ServiceAccount</p>
<pre><code class="lang-bash">  kubectl create token my-sa -n praduman
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727292417517/7fd50b36-2fd9-436c-8ae9-b064ca2e9d35.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
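<p>For reference, the imperative commands above are equivalent to applying declarative manifests like these with <code>kubectl apply -f</code>:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
  namespace: praduman
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-role
  namespace: praduman
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-rolebinding
  namespace: praduman
subjects:
- kind: ServiceAccount
  name: my-sa
  namespace: praduman
roleRef:
  kind: Role
  name: my-role
  apiGroup: rbac.authorization.k8s.io
</code></pre>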
<h3 id="heading-authenticating-with-kubernetes-a-step-by-step-demo">Authenticating with Kubernetes: A Step-by-Step Demo</h3>
<ul>
<li><p><strong>View Kubernetes Configuration</strong></p>
<pre><code class="lang-basic">  kubectl config <span class="hljs-keyword">view</span>
</code></pre>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727320386930/b881ab58-a998-4bc7-a6ff-55f467805297.png" alt class="image--center mx-auto" /></p>
<p>  <strong>Find the Cluster Name from Kubeconfig</strong></p>
<pre><code class="lang-basic">  export CLUSTER_NAME=&lt;your-cluster-<span class="hljs-keyword">name</span>&gt;
</code></pre>
</li>
<li><p><strong>Get the API Server Endpoint</strong></p>
<pre><code class="lang-basic">  export APISERVER=$(kubectl config <span class="hljs-keyword">view</span> -o jsonpath=<span class="hljs-comment">'{.clusters[0].cluster.server}')</span>
</code></pre>
<blockquote>
<p>This retrieves the URL of the Kubernetes API server from your kubeconfig file and stores it in the <code>APISERVER</code> environment variable.</p>
</blockquote>
</li>
<li><p><strong>Make a Request to the API Server</strong></p>
<pre><code class="lang-basic">  curl --cacert /etc/kubernetes/pki/ca.crt $APISERVER/version
</code></pre>
<blockquote>
<p>This <code>curl</code> command attempts to query the API server for its version, using the CA certificate to verify the server's identity. The <code>--cacert</code> option points to the CA certificate used by the Kubernetes control plane to authenticate requests.</p>
</blockquote>
</li>
<li><p><strong>Attempts to get a list of deployments</strong></p>
<pre><code class="lang-basic">  curl --cacert /etc/kubernetes/pki/ca.crt $APISERVER/apis/apps/v1/deployments
</code></pre>
<blockquote>
<p>However, <strong>this request will likely fail without proper authentication</strong>, as the Kubernetes API requires either a client certificate, a bearer token, or some other authentication method.</p>
</blockquote>
</li>
<li><p><strong>Using Client Certificates for Authentication</strong></p>
<p>  Extract and decode the client certificate and key</p>
<blockquote>
<p>These are base64-encoded, so they need to be decoded using <code>base64 -d</code></p>
</blockquote>
<pre><code class="lang-basic">  echo <span class="hljs-string">"&lt;client-certificate-data_from kubeconfig&gt;"</span> | base64 -d &gt; client
  echo <span class="hljs-string">"&lt;client-key-data_from kubeconfig&gt;"</span> | base64 -d &gt; <span class="hljs-keyword">key</span>
</code></pre>
</li>
<li><p><strong>Use the certificate and key with curl to make the request</strong></p>
<pre><code class="lang-basic">  curl --cacert /etc/kubernetes/pki/ca.crt --cert client --<span class="hljs-keyword">key</span> <span class="hljs-keyword">key</span> $APISERVER/apis/apps/v1/deployments
</code></pre>
<blockquote>
<p>This authenticates the request using the client certificate and key, allowing you to retrieve a list of deployments.</p>
</blockquote>
</li>
</ul>
<h3 id="heading-validating-admission-policies-with-cel-in-kubernetes">Validating Admission Policies with CEL in Kubernetes</h3>
<p><strong>Validation Admission Policies</strong> are like rules for your Kubernetes cluster. They decide if changes to the cluster are okay or not. <strong>CEL(</strong><code>Common Expression Language</code><strong>)</strong> is a special language used to write these rules. It's like a recipe book for the policy. You can customize these rules a lot. You can make them specific to certain things in your cluster or even change them based on different situations. You can use CEL to create rules that control what can and can't be done in your Kubernetes cluster. These rules can be very flexible and tailored to your specific needs.</p>
<ul>
<li><strong>Create a</strong> <code>validating admission policy</code> <strong>and</strong> <code>validating admission policy binding</code></li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">admissionregistration.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ValidatingAdmissionPolicy</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">"demo-policy.example.com"</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">failurePolicy:</span> <span class="hljs-string">Fail</span>
  <span class="hljs-attr">matchConstraints:</span>
    <span class="hljs-attr">resourceRules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">apiGroups:</span>   [<span class="hljs-string">"apps"</span>]
      <span class="hljs-attr">apiVersions:</span> [<span class="hljs-string">"v1"</span>]
      <span class="hljs-attr">operations:</span>  [<span class="hljs-string">"CREATE"</span>, <span class="hljs-string">"UPDATE"</span>]
      <span class="hljs-attr">resources:</span>   [<span class="hljs-string">"deployments"</span>]
  <span class="hljs-attr">validations:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">expression:</span> <span class="hljs-string">"object.spec.replicas &lt;= 5"</span>

<span class="hljs-meta">---</span>

<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">admissionregistration.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ValidatingAdmissionPolicyBinding</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">"demo-binding-test.example.com"</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">policyName:</span> <span class="hljs-string">"demo-policy.example.com"</span>
  <span class="hljs-attr">validationActions:</span> [<span class="hljs-string">Deny</span>]
  <span class="hljs-attr">matchResources:</span>
    <span class="hljs-attr">namespaceSelector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">environment:</span> <span class="hljs-string">test</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727294209164/38e4a494-eb26-4231-8668-04e11f4f54ac.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Label the namespace</p>
<pre><code class="lang-basic">  kubectl label ns default environment=test
</code></pre>
</li>
<li><p>Try to create a deployment with more than 5 replicas</p>
<pre><code class="lang-basic">  kubectl create deploy nginx --image=nginx --replicas=<span class="hljs-number">6</span>
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727294382785/e88c94d1-4098-4a27-bae5-b0d5484a2d70.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h3 id="heading-enforcing-image-policies-with-imagepolicywebhook-in-kubernetes">Enforcing Image Policies with ImagePolicyWebhook in Kubernetes</h3>
<p>The <strong>ImagePolicyWebhook</strong> is an admission controller that inspects image metadata when a pod is being created. It checks the image used in the pod, evaluates it against predefined rules, and decides whether to allow or deny it. This is useful for enforcing security policies and ensuring that only authorized images run in the cluster.</p>
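<p>For context, the <code>admission.json</code> file referenced below typically holds an <code>AdmissionConfiguration</code> like the following (a generic sketch based on the upstream format; the exact contents and paths in the demo repository may differ):</p>
<pre><code class="lang-json">{
  "apiVersion": "apiserver.config.k8s.io/v1",
  "kind": "AdmissionConfiguration",
  "plugins": [
    {
      "name": "ImagePolicyWebhook",
      "configuration": {
        "imagePolicy": {
          "kubeConfigFile": "/etc/kubernetes/demo/config",
          "allowTTL": 50,
          "denyTTL": 50,
          "retryBackoff": 500,
          "defaultAllow": false
        }
      }
    }
  ]
}
</code></pre>
<p>The <code>kubeConfigFile</code> tells the API server how to reach the webhook backend, and <code>defaultAllow: false</code> means pods are rejected if the webhook is unreachable.</p>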
<ul>
<li><p>Clone the GitHub Repository</p>
<pre><code class="lang-basic">  git clone https://github.<span class="hljs-keyword">com</span>/saiyam1814/imagepolicy.git
</code></pre>
<blockquote>
<p>This repository contains a sample implementation of the ImagePolicyWebhook</p>
</blockquote>
</li>
<li><p>Create a Directory for the Demo</p>
<pre><code class="lang-basic">  <span class="hljs-keyword">mkdir</span> /etc/kubernetes/demo
</code></pre>
<blockquote>
<p><strong>creates a directory</strong> at <code>/etc/kubernetes/demo</code> where you will store the ImagePolicyWebhook configuration and file</p>
</blockquote>
</li>
<li><p>Copy Files to the Demo Directory</p>
<pre><code class="lang-basic">  cp -r imagepolicy/ /etc/kubernetes/demo
</code></pre>
<blockquote>
<p>This command copies the <code>imagepolicy</code> directory to the <code>/etc/kubernetes/demo</code> directory. The <code>-r</code> flag ensures that everything inside the <code>imagepolicy</code> folder is copied</p>
</blockquote>
</li>
<li><p>Navigate to the Demo Directory</p>
<pre><code class="lang-basic">  cd /etc/kubernetes/demo
</code></pre>
</li>
<li><p>Move Files to Parent Directory</p>
<pre><code class="lang-basic">  cd imagepolicy
  mv * ..
  cd ..
</code></pre>
<blockquote>
<p><strong>moving</strong> all files from the <code>imagepolicy</code> folder into the <code>/etc/kubernetes/demo</code> directory, then returning to the parent directory</p>
</blockquote>
</li>
<li><p>View the <code>admission.json</code> File</p>
<pre><code class="lang-basic">  cat admission.json
</code></pre>
<blockquote>
<p>This JSON file defines how the webhook interacts with the Kubernetes API server</p>
</blockquote>
</li>
<li><p>View the Configuration File</p>
<pre><code class="lang-basic">  cat config
</code></pre>
</li>
<li><p>Edit the <code>kube-apiserver.yaml</code> File</p>
<pre><code class="lang-basic">  vi /etc/kubernetes/manifests/kube-apiserver.yaml
</code></pre>
</li>
<li><p>Add New Settings to Enable ImagePolicyWebhook</p>
<blockquote>
<p>You need to add the following lines to the <code>kube-apiserver.yaml</code> file to enable the ImagePolicyWebhook</p>
</blockquote>
<pre><code class="lang-basic">  - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
  - --admission-control-config-file=/etc/kubernetes/demo/admission.json
</code></pre>
<blockquote>
<ul>
<li><p>The first line enables the <code>ImagePolicyWebhook</code> admission plugin along with <code>NodeRestriction</code>.</p>
</li>
<li><p>The second line specifies the <strong>path</strong> to the <code>admission.json</code> file you created earlier, which contains webhook rules.</p>
</li>
</ul>
</blockquote>
</li>
<li><p>Configuring Volume Mounts</p>
<pre><code class="lang-yaml">  volumeMounts:
    - mountPath: /etc/kubernetes/demo
      name: admission
      readOnly: true
</code></pre>
</li>
<li><p>Adding the Volume</p>
<pre><code class="lang-yaml">  volumes:
  - hostPath:
      path: /etc/kubernetes/demo
    name: admission
</code></pre>
</li>
<li><p>Run an Nginx Pod to Test the Policy</p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">run</span> nginx --image=nginx
</code></pre>
<blockquote>
<p>If the ImagePolicyWebhook is working, it will inspect the <code>nginx</code> image to see if it meets the defined policies. Based on the policy logic, it will either allow or deny the creation of the pod</p>
</blockquote>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<blockquote>
<p>In this article, we explored the intricacies of Role-Based Access Control (RBAC) in Kubernetes, delving into the definitions and differences between Roles and ClusterRoles, as well as RoleBindings and ClusterRoleBindings. We also examined the importance of Admission Controllers in maintaining security and compliance within a Kubernetes cluster, highlighting the roles of Validating and Mutating Admission Controllers. Additionally, we discussed the implementation of RBAC through practical demos, including creating namespaces, service accounts, roles, and role bindings. Finally, we touched on advanced topics like Validating Admission Policies with CEL and enforcing image policies using ImagePolicyWebhook. By understanding and implementing these concepts, you can significantly enhance the security and manageability of your Kubernetes environments</p>
</blockquote>
<hr />
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes-part-8]]></title><description><![CDATA[Understanding Kubernetes Services: A Comprehensive Guide
A Kubernetes Service is an object that helps you expose an application running in one or more Pods in your cluster. Since pod IP addresses can change when pods are created or destroyed, a servi...]]></description><link>https://blogs.praduman.site/kubernetes-part-8</link><guid isPermaLink="true">https://blogs.praduman.site/kubernetes-part-8</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Sat, 21 Sep 2024 09:31:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726606427683/05c64538-b65b-4142-bfb9-185983af846f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-understanding-kubernetes-services-a-comprehensive-guide">Understanding Kubernetes Services: A Comprehensive Guide</h3>
<p>A Kubernetes Service is an object that helps you expose an application running in one or more Pods in your cluster. Since pod IP addresses can change as pods are created or destroyed, a Service provides a stable IP that doesn't change. This ensures that both internal and external users can always connect to the right application, even if the pods behind it are constantly changing. By default, the Service type is <code>ClusterIP</code>, which is useful for communication within the cluster.</p>
<ul>
<li><p><strong>To create a service (</strong><code>default: ClusterIP</code><strong>)</strong></p>
<pre><code class="lang-bash">  kubectl expose &lt;deployment/pod-name&gt; --port=80
</code></pre>
</li>
<li><p><strong>To see the services</strong></p>
<pre><code class="lang-bash">  kubectl get svc -owide
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727034767770/9198e3a6-10d1-472f-8b1f-009143034fe0.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h4 id="heading-what-are-endpoints-in-kubernetes">What Are Endpoints in Kubernetes?</h4>
<p>Endpoints are objects that list the IP addresses and ports of the pods associated with a specific service. When you create a service in Kubernetes, it uses a selector to determine which pods it should communicate with. The Endpoints object updates automatically as pods are added or removed, ensuring that the service always knows where to send traffic. Endpoints play a crucial role in connecting services to the pods they manage.</p>
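<p>As a sketch, here is a Service selecting pods labeled <code>app: web</code>, alongside the Endpoints object Kubernetes maintains for it (the pod IPs shown are hypothetical):</p>
<pre><code class="lang-yaml"># Service with a selector...
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
---
# ...and the Endpoints object Kubernetes maintains for it automatically
apiVersion: v1
kind: Endpoints
metadata:
  name: web          # same name as the Service
subsets:
- addresses:
  - ip: 10.244.1.5   # hypothetical pod IPs matching the selector
  - ip: 10.244.2.7
  ports:
  - port: 80
</code></pre>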
<blockquote>
<p><strong>Viewing Endpoints</strong></p>
<pre><code class="lang-bash">kubectl get endpoints &lt;service-name&gt;
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727034930881/8c184bd2-5f7c-4c36-80e9-53203a40394f.png" alt class="image--center mx-auto" /></p>
</blockquote>
<h4 id="heading-exploring-different-types-of-kubernetes-services">Exploring Different Types of Kubernetes Services</h4>
<ul>
<li><p>ClusterIP</p>
</li>
<li><p>NodePort</p>
</li>
<li><p>LoadBalancer</p>
</li>
<li><p>ExternalName</p>
</li>
<li><p>Headless</p>
</li>
<li><p>ExternalDNS</p>
</li>
</ul>
<h3 id="heading-networking-fundamentals-in-kubernetes-a-deep-dive">Networking Fundamentals in Kubernetes: A Deep Dive</h3>
<p>Inside each node, there's always a <strong>veth</strong>(<code>Virtual Ethernet</code>) <strong>pair</strong> for networking. When a pod runs, a special container called the <strong>pause container</strong> is also created. For example, if you create a pod with two containers, like <strong>busybox</strong> and <strong>nginx</strong>, there will actually be three containers: the two you defined and the pause container. The pod gets its own IP, and this IP is connected to an interface called <strong>eth0</strong>(<code>Ethernet interface</code>) inside the pod using <code>CNI</code>.</p>
<ul>
<li><p>Create a multi-container pod (<code>mcp.yml</code>)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">shared-namespace</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">p1</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
        <span class="hljs-attr">command:</span> [<span class="hljs-string">'/bin/sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'sleep 10000'</span>]
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">p2</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f mcp.yml
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727035422081/02c5bbe4-15c2-4229-9829-76401385f3e6.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>To see the pod’s IP</p>
<pre><code class="lang-bash">  kubectl get pod shared-namespace -owide
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727035501759/932c5c16-6dd0-4461-9f7b-ebab8a292092.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Check the node where the pod is running and SSH into it</p>
<pre><code class="lang-bash">  ssh node01
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727035555351/cb03278f-88d1-40a7-a2c7-587560689130.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>To view the network namespaces created</p>
<pre><code class="lang-bash">  ip netns list
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727035604459/eb73248a-0f9f-4c19-906f-df4b22663d57.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>To find the pause container</p>
<pre><code class="lang-bash">  lsns | grep nginx
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727035736678/89d8c4ba-3b5f-45c2-8bac-ad312301a9f4.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Get details of the pause container's namespaces (net, ipc, uts)</p>
<pre><code class="lang-bash">  lsns -p &lt;PID-from-the-previous-command&gt;
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727036619786/dd3809c1-6b17-4746-8468-ce630f4354e0.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>To check the list of all network namespaces</p>
<pre><code class="lang-bash">  ls -lt /var/run/netns
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727035836018/33ba65eb-bb9f-4eb4-8ff7-daba9e073298.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Exec into the namespace or into the pod to see the ip link</p>
<pre><code class="lang-bash">  ip netns <span class="hljs-built_in">exec</span> &lt;namespace&gt; ip link
  kubectl <span class="hljs-built_in">exec</span> -it &lt;pod-name&gt; -- ip addr
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727036968501/178cacd3-2285-4703-8e87-13c85fa98293.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Find the veth Pair</p>
<ul>
<li><p>Once you run the above command, you may see an interface with a name like <code>eth0@if9</code>. The number after <code>@if</code> is the interface index of the other end of the virtual Ethernet (veth) pair on the node.</p>
</li>
<li><p>To find the corresponding link on the Kubernetes node, you can search using this identifier. For example, if the number is <code>9</code>, run the following on the node</p>
</li>
</ul>
</li>
</ul>
<pre><code class="lang-bash">    ip link | grep -A1 "^9:"
</code></pre>
<ul>
<li>This will show the details of the veth pair on the node</li>
</ul>
<ul>
<li><p><strong>Inter Node communication</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727037359729/dcb73739-2675-44f4-bb1a-b9edafb5b36b.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<blockquote>
<p>Here, a packet leaves pod A through its <code>eth0</code>, and its <code>veth</code> pair carries it into the <code>root namespace</code>, where the <code>bridge</code> resolves the destination address using its <code>ARP table</code> and forwards the packet through <code>veth1</code> to <code>pod B</code>. This flow applies only when both pods are on the same node.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727037799205/fa47b17e-b08e-4357-bb56-fbef2aefbaac.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-statefulset-managing-stateful-applications-in-kubernetes">StatefulSet: Managing Stateful Applications in Kubernetes</h3>
<p>A StatefulSet manages stateful applications where each pod needs a unique identity and keeps its own data. It ensures pods are started, updated, and stopped one by one, in a set order. Each pod has a fixed name and keeps its data even if it is restarted or moved to another machine. This is useful for apps like databases, where data and order matter.</p>
<h4 id="heading-key-differences-between-deployments-and-statefulsets">Key Differences Between Deployments and StatefulSets</h4>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Feature</strong></td><td><strong>Deployment</strong></td><td><strong>StatefulSet</strong></td></tr>
</thead>
<tbody>
<tr>
<td><strong>Use Case</strong></td><td>For stateless applications</td><td>For stateful applications</td></tr>
<tr>
<td><strong>Pod Identity</strong></td><td>Pods are interchangeable and do not have stable identities</td><td>Pods have unique, stable identities (e.g., <code>pod-0</code>, <code>pod-1</code>)</td></tr>
<tr>
<td><strong>Storage</strong></td><td>Typically uses ephemeral storage, data is lost when a pod is deleted</td><td>Each pod can have its own persistent volume attached</td></tr>
<tr>
<td><strong>Scaling Behavior</strong></td><td>Pods are scaled simultaneously and in random order</td><td>Pods are scaled sequentially (e.g., <code>pod-0</code> before <code>pod-1</code>)</td></tr>
<tr>
<td><strong>Pod Updates</strong></td><td>All pods can be updated concurrently</td><td>Pods are updated sequentially, ensuring one pod is ready before moving to the next</td></tr>
<tr>
<td><strong>Order of Pod Creation/Deletion</strong></td><td>No specific order in pod creation or deletion</td><td>Pods are created/deleted in a specific order (e.g., <code>pod-0</code>, <code>pod-1</code>)</td></tr>
<tr>
<td><strong>Network Identity</strong></td><td>Uses a ClusterIP service, no stable network identity</td><td>Typically uses a headless service, giving each pod a stable network identity</td></tr>
<tr>
<td><strong>Examples</strong></td><td>Microservices, stateless web apps</td><td>Databases (MySQL, Cassandra), distributed systems requiring unique identity or stable storage</td></tr>
<tr>
<td><strong>Use of Persistent Volumes</strong></td><td>Persistent volumes are shared across pods (if needed)</td><td>Each pod gets a dedicated persistent volume</td></tr>
</tbody>
</table>
</div><blockquote>
<p>The <code>Service</code> used by a <code>StatefulSet</code> is created with <code>clusterIP: None</code>, i.e., a <code>headless service</code></p>
</blockquote>
<ul>
<li><p><strong>StatefulSet example for deploying a MySQL database in Kubernetes</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">StatefulSet</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">mysql</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">serviceName:</span> <span class="hljs-string">"mysql-service"</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">mysql</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">mysql</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">mysql:5.7</span>
          <span class="hljs-attr">ports:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">3306</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">mysql</span>
          <span class="hljs-attr">volumeMounts:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-storage</span>
            <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/var/lib/mysql</span>
    <span class="hljs-attr">volumeClaimTemplates:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-storage</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">accessModes:</span> [<span class="hljs-string">"ReadWriteOnce"</span>]
        <span class="hljs-attr">resources:</span>
          <span class="hljs-attr">requests:</span>
            <span class="hljs-attr">storage:</span> <span class="hljs-string">1Gi</span>
</code></pre>
<ul>
<li><p>Each pod will have its own persistent storage (<code>mysql-storage</code>).</p>
</li>
<li><p>Pods will have stable network names (<code>mysql-0</code>, <code>mysql-1</code>, etc.), and their data will persist even if they restart.</p>
</li>
</ul>
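<p>  The <code>serviceName: "mysql-service"</code> field above refers to a headless Service that must be created separately. A minimal sketch of that Service (the name and labels are taken from the StatefulSet above):</p>
<pre><code class="lang-yaml">  apiVersion: v1
  kind: Service
  metadata:
    name: mysql-service
  spec:
    clusterIP: None   # headless: gives each pod a stable DNS name instead of a single VIP
    selector:
      app: mysql
    ports:
    - port: 3306
      name: mysql
</code></pre>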
</li>
<li><p><strong>Create a StatefulSet</strong></p>
<pre><code class="lang-bash">  cat &lt;&lt;EOF | kubectl apply -f -
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: postgres
  spec:
    serviceName: "postgres"
    replicas: 3
    selector:
      matchLabels:
        app: postgres
    template:
      metadata:
        labels:
          app: postgres
      spec:
        containers:
        - name: postgres
          image: postgres:13
          ports:
          - containerPort: 5432
            name: postgres
          env:
          - name: POSTGRES_PASSWORD
            value: "example"
          volumeMounts:
          - name: postgres-storage
            mountPath: /var/lib/postgresql/data
    volumeClaimTemplates:
    - metadata:
        name: postgres-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
  EOF
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726909024059/d8691034-3b8f-4fdd-81ee-2ce4c0f3e793.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>A</strong> <code>persistent volume</code> <strong>and</strong> <code>persistent volume claim</code> <strong>are also created</strong></p>
<pre><code class="lang-bash">  kubectl get pv
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726909331546/a1451085-b5ec-4629-8235-d3303b68232d.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">  kubectl get pvc
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726909335672/57582e0d-5be9-4c19-a719-7cdb17f650ca.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Check the pods that are created</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726909533722/5aa3f5db-9348-4eb3-8fc5-88f1b48bdaa0.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Create a service (</strong><code>svc.yml</code><strong>)</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">postgres</span>
    <span class="hljs-attr">labels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">postgres</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">5432</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">postgres</span>
    <span class="hljs-attr">clusterIP:</span> <span class="hljs-string">None</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">postgres</span>
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726909682007/e8931793-58ac-4451-a44e-1d351bab5bf6.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726909735841/b07da77b-02b0-4565-bbb6-c9152bb2f547.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Increase the replicas and you will see that each pod has an ordered, fixed name</strong></p>
<pre><code class="lang-bash">  kubectl scale statefulset postgres --replicas=6
</code></pre>
<pre><code class="lang-bash">  kubectl get pod
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726910106467/347b2175-fc31-485a-be0d-9f7247065427.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>If you delete a pod, a new pod with the same name is created again</strong></p>
<pre><code class="lang-bash">  kubectl delete pod postgres-3
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726910233816/886c15f4-3136-4b00-84f6-3a39e65c5fcb.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Check whether the service is working</strong></p>
<pre><code class="lang-bash">  kubectl <span class="hljs-built_in">exec</span> -it postgres-0 -- psql -U postgres
</code></pre>
</li>
</ul>
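<p>Because the Service is headless, each StatefulSet pod also gets a stable DNS name of the form <code>&lt;pod-name&gt;.&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local</code>. A quick sketch of how that name is composed (assuming the <code>postgres</code> example above runs in the <code>default</code> namespace):</p>
<pre><code class="lang-bash">pod="postgres-0"; svc="postgres"; ns="default"
# Stable per-pod DNS name provided by the headless service
echo "${pod}.${svc}.${ns}.svc.cluster.local"
# postgres-0.postgres.default.svc.cluster.local
</code></pre>
<p>Other pods in the cluster can use this name to reach a specific replica, which is exactly what databases need for primary/replica setups.</p>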
<h3 id="heading-nodeport-exposing-your-kubernetes-application">NodePort: Exposing Your Kubernetes Application</h3>
<p>A NodePort is a way to let people from outside the cluster access your app. It opens a specific port on every node in the cluster, allowing you to reach the app using the node's IP and the assigned port. This port is a number in the range <code>30000–32767</code>. You can access your application by visiting <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>, where <code>NodeIP</code> is the IP address of any node in the cluster, and <code>NodePort</code> is the port assigned by Kubernetes.</p>
<h4 id="heading-where-its-useful">Where is it useful?</h4>
<ul>
<li><p><strong>For testing and development</strong>: You can quickly share access to apps without extra networking setup.</p>
</li>
<li><p><strong>For small or internal projects</strong>: If a company doesn't use cloud-based load balancers or advanced setups, NodePort is a simple solution to expose an app.</p>
</li>
</ul>
<blockquote>
<p>YAML configuration for a NodePort service that exposes an nginx deployment or pod</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-nodeport</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">NodePort</span>  <span class="hljs-comment"># Specify the service type as NodePort</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>  <span class="hljs-comment"># This must match the labels of the nginx pods</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>  <span class="hljs-comment"># The port the service will listen on</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>  <span class="hljs-comment"># The port the nginx container is listening on</span>
      <span class="hljs-attr">nodePort:</span> <span class="hljs-number">30008</span>  <span class="hljs-comment"># NodePort exposed (optional, Kubernetes can assign one if omitted)</span>
</code></pre>
</blockquote>
<h4 id="heading-create-a-pod-and-expose-it-using-nodeport-service">Create a Pod and expose it using NodePort service</h4>
<pre><code class="lang-bash">kubectl run nginx --image=nginx
kubectl expose pod nginx --<span class="hljs-built_in">type</span>=NodePort --port 80
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727105140410/67d640fd-661b-4317-92a9-efbf90edf1ce.png" alt class="image--center mx-auto" /></p>
<p><strong>Access the Server</strong> <code>http://192.168.1.4:32613</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727106216440/f2329250-44eb-4cd9-bfd7-fb0f112c25ad.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Check NodePort and KUBE Rules</strong></p>
<blockquote>
<p>To check the rules in the iptables firewall configuration related to Kubernetes NodePort services</p>
</blockquote>
<pre><code class="lang-bash">  sudo iptables -t nat -L -n -v | grep -e NodePort -e KUBE
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727106551462/6aff24b2-d087-4c72-bea0-7792b271cd4e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Check Specific NodePort Rules</strong></p>
<pre><code class="lang-bash">  sudo iptables -t nat -L -n -v | grep 32613
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727106679408/43bfcd1c-f643-48d9-a327-ca797995c469.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>This indicates that traffic coming to port <code>32613</code> (your NodePort) is being redirected properly to the appropriate service.</p>
</blockquote>
</li>
</ul>
<h3 id="heading-loadbalancer-making-your-application-public">LoadBalancer: Making Your Application Public</h3>
<p>A LoadBalancer service makes your application accessible to the public over the internet. When you create a service of this type, Kubernetes asks your cloud provider (like <code>AWS</code>, <code>GCP</code>, or <code>Azure</code>) to set up a load balancer automatically.</p>
<p>This service type is ideal when you want to expose an application, such as a web server, to users outside your cluster. The cloud provider assigns an <code>external IP address</code>, which users can use to access your application.</p>
<p>The load balancer automatically distributes incoming traffic to all the pods running the service. This helps spread the traffic evenly and avoid overloading any single pod.</p>
<blockquote>
<p><strong>LoadBalancer YAML file</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-loadbalancer</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">LoadBalancer</span>  <span class="hljs-comment"># This makes it a LoadBalancer service</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>  <span class="hljs-comment"># This selects the nginx pods</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>         <span class="hljs-comment"># Exposes port 80 (HTTP) to the public</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>   <span class="hljs-comment"># The port on the nginx container</span>
</code></pre>
<h4 id="heading-how-it-works">How It Works:</h4>
<ol>
<li><p>You create the <strong>LoadBalancer</strong> service.</p>
</li>
<li><p>Kubernetes communicates with the cloud provider, and a load balancer is created.</p>
</li>
<li><p>The load balancer assigns an <strong>external IP address</strong>.</p>
</li>
<li><p>Traffic from the external IP is sent to your service, which distributes it to the nginx pods.</p>
</li>
</ol>
</blockquote>
<ul>
<li><p><strong>Creating a LoadBalancer service in minikube</strong></p>
<pre><code class="lang-bash">  kubectl run nginx --image=nginx
  kubectl expose pod nginx --<span class="hljs-built_in">type</span>=LoadBalancer --port=80
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727110315954/0c508165-b196-45cb-9bbb-63878ab5d84b.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Creates a network tunnel, making your LoadBalancer service accessible</strong></p>
<pre><code class="lang-bash">  minikube tunnel
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727110572203/cd3f2ba4-6407-4359-ab56-fae09ddd2438.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727110594870/e60f4c85-1994-427f-b561-8e461773bcc8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Access the Service at</strong> <code>External-ip</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727110670807/563b2536-c0f8-4aa7-b79b-3dab581c0531.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h4 id="heading-why-avoid-loadbalancer-in-production">Why Avoid LoadBalancer in Production?</h4>
<ol>
<li><p><strong>Cost</strong>: LoadBalancers can be expensive, as cloud providers charge for them.</p>
</li>
<li><p><strong>Scalability Problems</strong>: They might struggle with sudden increases in traffic.</p>
</li>
<li><p><strong>Single Point of Failure</strong>: If the LoadBalancer fails, all services using it can go down.</p>
</li>
<li><p><strong>Limited Flexibility</strong>: They have fixed rules, making it hard to manage complex traffic needs.</p>
</li>
<li><p><strong>Performance Delays</strong>: They can add extra time (latency) to how quickly users access your services.</p>
</li>
<li><p><strong>Dependence on Cloud Services</strong>: If the cloud provider has issues, your services might become unavailable.</p>
</li>
<li><p><strong>Complex Setup</strong>: Managing LoadBalancers can complicate your system, especially with multiple clusters.</p>
</li>
</ol>
<h4 id="heading-alternatives-to-loadbalancer-for-kubernetes">Alternatives to LoadBalancer for Kubernetes</h4>
<ul>
<li><p><strong>Ingress Controllers</strong>: These are cheaper and allow more flexible traffic management.</p>
</li>
<li><p><strong>Service Mesh</strong>: They help manage traffic and improve observability without needing a LoadBalancer.</p>
</li>
</ul>
<h3 id="heading-externalname-mapping-services-to-external-dns-names">ExternalName: Mapping Services to External DNS Names</h3>
<p>The <strong>ExternalName</strong> service type is a special kind of Kubernetes service that maps a service name to an external DNS name. Instead of getting a cluster IP, the cluster DNS returns a <code>CNAME</code> record with the specified external name. Use it when your application needs to talk to an external service (like an <code>external API</code> or <code>database</code>) without changing application code. It can also be used to reach services in <code>other namespaces</code>.</p>
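<p>A minimal sketch of an <code>ExternalName</code> Service that maps a name inside the cluster to an external DNS name (the names <code>external-db</code> and <code>db.example.com</code> are illustrative):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Service
metadata:
  name: external-db        # apps inside the cluster resolve "external-db"
spec:
  type: ExternalName
  externalName: db.example.com   # DNS returns a CNAME pointing here
</code></pre>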
<blockquote>
<p>Example: Two services within two other namespaces communicate with each other</p>
<p>Use <a target="_blank" href="https://github.com/saiyam1814/Kubernetes-hindi-bootcamp/tree/main/part8/ExternalName"><code>this folder</code></a> to get the source code</p>
</blockquote>
<ul>
<li><p>Create two namespace</p>
<pre><code class="lang-bash">  kubectl create ns database-ns
  kubectl create ns application-ns
</code></pre>
</li>
<li><p>Create the database pod and service</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727115368577/3a5fc1d0-0a0b-491e-9e2e-0a64fd83ca0b.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727115377815/cfdda7e6-a74b-4410-aa5b-ae1f0ac27e63.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">  kubectl apply -f db.yaml
  kubectl apply -f db_svc.yaml
</code></pre>
</li>
<li><p>Create ExternalName service</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727115300028/65f8c022-4f09-4066-bb27-fe42011993bd.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">  kubectl apply -f externam-db_svc.yaml
</code></pre>
</li>
<li><p>Create Application to access the service Docker build</p>
<pre><code class="lang-bash">  docker build --no-cache --platform=linux/amd64 -t ttl.sh/saiyamdemo:1h .
</code></pre>
</li>
<li><p>Create the pod</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727115974019/eccd8060-429f-4b69-9f92-f894b4c205fb.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">  kubectl apply -f apppod.yaml
</code></pre>
</li>
<li><p>Check the pod logs to see if the connection was successful</p>
<pre><code class="lang-bash">  kubectl logs my-application -n application-ns
</code></pre>
</li>
</ul>
<h3 id="heading-ingress-controlling-user-access-to-kubernetes-applications">Ingress: Controlling User Access to Kubernetes Applications</h3>
<p><strong>Ingress</strong> is a way to control how users access your applications running in Kubernetes. Think of it as a gatekeeper that directs traffic to the right place. To use Ingress, you first need to deploy an <code>Ingress Controller</code> on the cluster, because Ingress resources are implemented by an <code>Ingress Controller</code>. Ingress controllers are open-source projects from many companies, such as <code>NGINX</code>, <code>Traefik</code>, <code>Istio Ingress</code>, etc.</p>
<p>Now, we create a service of type <code>ClusterIP</code> and a resource of kind <code>Ingress</code>. A user request first reaches the <code>ingress controller</code>, which matches it against the <code>ingress</code> rules, then forwards it to the <code>clusterIP</code> service we configured, and finally it reaches the pod. Here we deploy the <code>Ingress Controller</code> itself behind a <code>LoadBalancer</code> service.</p>
<h4 id="heading-why-use-ingress">Why Use Ingress?</h4>
<ol>
<li><p><strong>Single Address</strong>: Instead of having different addresses for each app, you can use one address for everything.</p>
</li>
<li><p><strong>Routing by URL</strong>: You can send users to different apps based on the URL they visit. For example, <code>/app1</code> goes to one app and <code>/app2</code> goes to another.</p>
</li>
<li><p><strong>Secure Connections</strong>: Ingress can handle secure connections (HTTPS) easily, so your apps are safe to use.</p>
</li>
<li><p><strong>Balancing Traffic</strong>: It spreads out incoming traffic evenly, so no single app gets overloaded.</p>
</li>
</ol>
<blockquote>
<h4 id="heading-example">Example</h4>
<p>Imagine you have two apps:</p>
<ul>
<li><p>A website</p>
</li>
<li><p>An API</p>
</li>
</ul>
<p>You can set up Ingress to route traffic like this:</p>
<ul>
<li><p>If someone goes to <code>/</code>, they get the website.</p>
</li>
<li><p>If they go to <code>/api</code>, they get the API.</p>
</li>
</ul>
<p>Here’s how you might set it up:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">example-ingress</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">"my-app.com"</span>
    <span class="hljs-attr">http:</span>
      <span class="hljs-attr">paths:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
        <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
        <span class="hljs-attr">backend:</span>
          <span class="hljs-attr">service:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">website-service</span>
            <span class="hljs-attr">port:</span>
              <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/api</span>
        <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
        <span class="hljs-attr">backend:</span>
          <span class="hljs-attr">service:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">api-service</span>
            <span class="hljs-attr">port:</span>
              <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
</code></pre>
</blockquote>
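<p>For the secure-connections point above, an Ingress can also terminate HTTPS by adding a <code>tls</code> section; a minimal sketch (it assumes a TLS <code>Secret</code> named <code>my-app-tls</code> with <code>tls.crt</code> and <code>tls.key</code> already exists):</p>
<pre><code class="lang-yaml">spec:
  tls:
  - hosts:
    - my-app.com
    secretName: my-app-tls   # assumed Secret of type kubernetes.io/tls
  rules:
    # ... same rules as in the example above ...
</code></pre>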
<ul>
<li><p>Create a deployment (<code>deploy.yaml</code>)</p>
<pre><code class="lang-yaml">  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:latest
          ports:
          - containerPort: 80
          volumeMounts:
          - name: config-volume
            mountPath: /etc/nginx/nginx.conf
            subPath: nginx.conf  # Ensure this matches the filename in the ConfigMap
        volumes:
        - name: config-volume
          configMap:
            name: nginx-config
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f deploy.yaml
</code></pre>
</li>
<li><p>Create a <code>clusterIP</code> service (svc.yaml)</p>
<pre><code class="lang-yaml">  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-service
  spec:
    selector:
      app: nginx
    ports:
    - protocol: TCP
      port: 80
      targetPort: 80
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f svc.yaml
</code></pre>
</li>
<li><p>Verify that the service was created</p>
<pre><code class="lang-bash">  kubectl get svc
</code></pre>
</li>
<li><p>Create a nginx.conf file</p>
<pre><code class="lang-nginx">  <span class="hljs-attribute">user</span>  nginx;
  <span class="hljs-attribute">worker_processes</span>  auto;

  <span class="hljs-attribute">error_log</span>  /var/log/nginx/error.log <span class="hljs-literal">notice</span>;
  <span class="hljs-attribute">pid</span>        /var/run/nginx.pid;

  <span class="hljs-section">events</span> {
      <span class="hljs-attribute">worker_connections</span>  <span class="hljs-number">1024</span>;
  }

  <span class="hljs-section">http</span> {
      <span class="hljs-attribute">include</span>       /etc/nginx/mime.types;
      <span class="hljs-attribute">default_type</span>  application/octet-stream;

      <span class="hljs-attribute">log_format</span>  main  <span class="hljs-string">'<span class="hljs-variable">$remote_addr</span> - <span class="hljs-variable">$remote_user</span> [<span class="hljs-variable">$time_local</span>] "<span class="hljs-variable">$request</span>" '</span>
                        <span class="hljs-string">'<span class="hljs-variable">$status</span> <span class="hljs-variable">$body_bytes_sent</span> "<span class="hljs-variable">$http_referer</span>" '</span>
                        <span class="hljs-string">'"<span class="hljs-variable">$http_user_agent</span>" "<span class="hljs-variable">$http_x_forwarded_for</span>"'</span>;

      <span class="hljs-attribute">access_log</span>  /var/log/nginx/access.log  main;

      <span class="hljs-attribute">sendfile</span>        <span class="hljs-literal">on</span>;
      <span class="hljs-comment">#tcp_nopush     on;</span>

      <span class="hljs-attribute">keepalive_timeout</span>  <span class="hljs-number">65</span>;

      <span class="hljs-comment">#gzip  on;</span>

      <span class="hljs-section">server</span> {
          <span class="hljs-attribute">listen</span>       <span class="hljs-number">80</span>;
          <span class="hljs-attribute">server_name</span>  localhost;

          <span class="hljs-attribute">location</span> / {
              <span class="hljs-attribute">root</span>   /usr/share/nginx/html;
              <span class="hljs-attribute">index</span>  index.html index.htm;
          }

          <span class="hljs-attribute">location</span> /public {
              <span class="hljs-attribute">return</span> <span class="hljs-number">200</span> <span class="hljs-string">'Access to public granted!'</span>;
          }

          <span class="hljs-attribute">error_page</span>   <span class="hljs-number">500</span> <span class="hljs-number">502</span> <span class="hljs-number">503</span> <span class="hljs-number">504</span>  /50x.html;
          <span class="hljs-attribute">location</span> = /50x.html {
              <span class="hljs-attribute">root</span>   /usr/share/nginx/html;
          }
      }
  }
</code></pre>
</li>
<li><p>Create a ConfigMap</p>
<pre><code class="lang-bash">  kubectl create configmap nginx-config --from-file=nginx.conf
</code></pre>
</li>
<li><p>Access the server</p>
<pre><code class="lang-bash">  curl &lt;cluster-IP&gt;
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727124231790/d638df6e-1f6e-4d01-9356-25a8f724fb9e.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727124365615/d06b9753-85f6-401b-b263-1fbcfa5c948c.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<blockquote>
<p>Now we need to access this application from outside the cluster without using a <code>NodePort</code> or <code>LoadBalancer</code> service</p>
</blockquote>
<ul>
<li><p>Install the <code>Ingress controller</code> first</p>
<pre><code class="lang-bash">  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.4/deploy/static/provider/cloud/deploy.yaml
</code></pre>
</li>
<li><p>Create an Ingress object (<code>ingress.yaml</code>)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">bootcamp</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">ingressClassName:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">rules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">"kubernetes.hindi.bootcamp"</span>
      <span class="hljs-attr">http:</span>
        <span class="hljs-attr">paths:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
          <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
          <span class="hljs-attr">backend:</span>
            <span class="hljs-attr">service:</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-service</span>
              <span class="hljs-attr">port:</span>
                <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/public</span> 
          <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
          <span class="hljs-attr">backend:</span>
            <span class="hljs-attr">service:</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-service</span>
              <span class="hljs-attr">port:</span>
                <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
</code></pre>
</li>
<li><p>SSH into the node where the pod is deployed and edit the <code>/etc/hosts</code> file</p>
<pre><code class="lang-basic">  # add these lines to the file
  &lt;node-internal-IP&gt; &lt;nodeName&gt;
  &lt;node-internal-IP&gt; kubernetes.hindi.bootcamp
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727125797335/802cb4be-a2a6-4caa-bb42-15cb1b0c93e2.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727125853427/5a725d7c-30cd-40cf-94a6-aac359eb74fa.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727125857934/df63eef1-53c5-40fd-8ecf-98bc74932dd9.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Now apply the <code>ingress.yaml</code> file you created</p>
<pre><code class="lang-bash">  kubectl apply -f ingress.yaml
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727126034264/5a8bdf43-49c9-43d5-a028-114ee8737bfa.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727126080194/8b92f224-6457-40d9-a7f5-bd03f0d10ed7.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>We need to connect the user to the Ingress controller</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727126383167/a47e632d-cba7-4ba2-a5f4-7e8637e1d57b.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-bash">  curl kubernetes.hindi.bootcamp:30418
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727126520617/82ac228a-368b-422b-ae8b-cb4a40924f6c.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727126576495/3011b087-b01a-4637-a274-34e5472ebabf.png" alt class="image--center mx-auto" /></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727126869944/05776b87-6b71-4798-8d8e-0e1f8d56f639.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h3 id="heading-externaldns-connecting-kubernetes-to-dns-providers">ExternalDNS: Connecting Kubernetes to DNS Providers</h3>
<p>ExternalDNS is a Kubernetes-native solution that automatically manages DNS records for services and ingress resources within Kubernetes clusters. It simplifies the process of associating a Kubernetes service with a domain name, ensuring that services deployed in Kubernetes are easily accessible from outside the cluster via friendly domain names rather than IP addresses.</p>
<h4 id="heading-how-externaldns-works">How ExternalDNS Works</h4>
<p>ExternalDNS continuously monitors Kubernetes resources such as <code>Service</code> and <code>Ingress</code> objects. It then communicates with a DNS provider to ensure that the corresponding DNS records reflect the current state of the Kubernetes cluster. The typical workflow is as follows:</p>
<ol>
<li><p><strong>Service or Ingress Resource Created</strong>: When a new service or ingress resource is deployed in Kubernetes, it usually exposes an external IP or hostname.</p>
</li>
<li><p><strong>DNS Record Creation</strong>: ExternalDNS detects the new service and automatically creates the corresponding DNS record in the external DNS provider (e.g., AWS Route 53).</p>
</li>
<li><p><strong>Dynamic Updates</strong>: If the service’s external IP changes (e.g., due to a rolling update or scaling event), ExternalDNS updates the DNS record accordingly.</p>
</li>
<li><p><strong>Resource Deletion</strong>: When a service or ingress resource is removed, ExternalDNS also deletes the associated DNS records.</p>
</li>
</ol>
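<p>The workflow above can be sketched with a single annotation. As a sketch (the hostname <code>app.example.com</code> is a placeholder), ExternalDNS watches for a well-known annotation on a <code>Service</code> and creates the matching record in the configured DNS provider:</p>
<pre><code class="lang-yaml">  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-service
    annotations:
      # ExternalDNS creates/updates a DNS record pointing at this Service's external IP
      external-dns.alpha.kubernetes.io/hostname: app.example.com
  spec:
    type: LoadBalancer
    selector:
      app: nginx
    ports:
    - port: 80
      targetPort: 80
</code></pre>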
<h4 id="heading-use-cases-for-externaldns">Use Cases for ExternalDNS</h4>
<ol>
<li><p><strong>Public Service Exposure</strong>: ExternalDNS is ideal for exposing services running inside Kubernetes clusters to the public via user-friendly domain names.</p>
</li>
<li><p><strong>Multi-Cloud Deployments</strong>: In multi-cloud environments, ExternalDNS helps manage DNS across different cloud platforms, ensuring consistent access to services regardless of the underlying infrastructure.</p>
</li>
<li><p><strong>Blue-Green Deployments</strong>: When performing blue-green or canary deployments, ExternalDNS can help dynamically switch DNS records as traffic is gradually routed to new versions of an application.</p>
</li>
<li><p><strong>Private DNS Management</strong>: For internal DNS setups, ExternalDNS can manage private DNS records for services in a non-public namespace, ensuring services are accessible within a private network.</p>
</li>
</ol>
<h3 id="heading-conclusion">Conclusion</h3>
<blockquote>
<p>In this article, we explored various aspects of Kubernetes, including services, networking fundamentals, StatefulSets, and different types of services like NodePort, LoadBalancer, ExternalName, Ingress, and ExternalDNS. We delved into the specifics of how each service type functions, their use cases, and provided practical examples to illustrate their implementation. Understanding these components is crucial for effectively managing and deploying applications in a Kubernetes environment. By mastering these concepts, you can ensure your applications are scalable, resilient, and efficiently managed within your Kubernetes clusters.</p>
</blockquote>
<hr />
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[A Complete Guide to Terraform: Automate Your Infrastructure]]></title><description><![CDATA[Why Use Terraform for Infrastructure as Code (IaC)?
Imagine you’re a DevOps engineer and you’re assigned a simple task to create an S3 bucket in AWS. Normally, you would log in to your AWS account, search for the S3 service, and manually create the b...]]></description><link>https://blogs.praduman.site/a-complete-guide-to-terraform-automate-your-infrastructure</link><guid isPermaLink="true">https://blogs.praduman.site/a-complete-guide-to-terraform-automate-your-infrastructure</guid><category><![CDATA[Terraform]]></category><category><![CDATA[hashicorp]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Thu, 19 Sep 2024 18:45:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726689494949/0e85b2dc-53a2-451a-b77b-26f7d31777ff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-why-use-terraform-for-infrastructure-as-code-iac">Why Use Terraform for Infrastructure as Code (IaC)?</h3>
<p>Imagine you’re a DevOps engineer and you’re assigned a simple task to create an S3 bucket in AWS. Normally, you would log in to your AWS account, search for the S3 service, and manually create the bucket by filling in the necessary details. This works fine if you're only creating one bucket.</p>
<p>But what if you need to create 100 or even 1,000 S3 buckets? Doing this manually would take a lot of time and effort. In situations like this, you’d want a more programmatic approach—using the AWS CLI or scripting to interact with AWS APIs. With these tools, you could create all the required buckets in seconds. However, this requires good programming knowledge, and things get complicated when you need to create multiple resources together, like a VPC, EC2 instances, and S3 buckets.</p>
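<p>As a sketch of that programmatic approach (assuming the AWS CLI is installed and configured; the <code>my-demo-bucket</code> prefix is a placeholder, since S3 bucket names must be globally unique), a simple shell loop does the job:</p>
<pre><code class="lang-bash">  # Sketch: create 100 S3 buckets in one go with the AWS CLI
  for i in $(seq 1 100); do
    bucket="my-demo-bucket-$i"   # placeholder name prefix
    echo "creating $bucket"
    aws s3 mb "s3://$bucket"
  done
</code></pre>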
<p>To address these challenges, cloud providers offer Infrastructure as Code (IaC) tools. These tools let you define your infrastructure in code, using formats like YAML or JSON, and automate the provisioning of resources. AWS, for example, provides CloudFormation, which allows you to define AWS resources in templates.</p>
<p><strong>Here are some examples of IaC tools:</strong></p>
<ul>
<li><p><strong>AWS CloudFormation</strong> (for AWS)</p>
</li>
<li><p><strong>Azure Resource Manager</strong> (for Azure)</p>
</li>
<li><p><strong>Heat Template</strong> (for OpenStack)</p>
</li>
</ul>
<h3 id="heading-why-terraform">Why Terraform?</h3>
<p>With so many IaC tools available, why use Terraform? The answer lies in its flexibility and universal approach.</p>
<p>Let’s say you’re working with AWS and using CloudFormation, but later you switch to an organization that uses Azure. You would then need to learn Azure’s IaC tool. While you can certainly learn these tools, it can be a complex and time-consuming process.</p>
<p>Terraform solves this problem by providing a cloud-agnostic IaC tool. It allows you to manage infrastructure across multiple cloud platforms using a single language—<strong>HashiCorp Configuration Language (HCL)</strong>. This means you don’t need to learn different IaC tools for different clouds. Just learn Terraform, and you can work with any cloud provider.</p>
<p>This is why Terraform has become such a popular and essential tool for DevOps and cloud engineers.</p>
<hr />
<blockquote>
<p>Install <a target="_blank" href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli"><code>Terraform</code></a> &amp; the <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html"><code>aws cli</code></a> on your system using the official documentation</p>
</blockquote>
<h3 id="heading-how-to-configure-aws-cli-with-access-keys">How to Configure AWS CLI with Access Keys</h3>
<ul>
<li><p>Go to your AWS account and find the <code>Security credentials</code> option</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726717967183/3af2f0c0-fa4c-4f65-8454-eda61fbea3b8.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Inside <code>Security Credentials</code>, create a new <code>Access Key</code>. This will provide you with an <code>Access Key ID</code> and a <code>Secret Access Key</code></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726718137454/d89a9974-3276-4ac5-bbe3-f9aea80ccd40.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Open your terminal and run the following command</p>
<pre><code class="lang-basic">  aws configure
</code></pre>
<blockquote>
<p>You will be prompted to enter the <code>Access Key ID</code> and <code>Secret Access Key</code>. Paste the keys you just created to connect your AWS account to the <code>CLI</code></p>
</blockquote>
</li>
<li><p>To ensure everything is working, run</p>
<pre><code class="lang-basic">  aws s3 ls
</code></pre>
<blockquote>
<p>This command will list all the S3 buckets in your AWS account</p>
</blockquote>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726719097143/fdee66ea-c4be-41cb-895e-9ed8b992d769.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h3 id="heading-create-an-ec2-instance-using-terraform">Create an EC2 instance using terraform</h3>
<ul>
<li><p><strong>Create a Terraform Configuration File (</strong><a target="_blank" href="http://main.tf"><code>main.tf</code></a><strong>)</strong></p>
<pre><code class="lang-bash">  provider <span class="hljs-string">"aws"</span> {
      region = <span class="hljs-string">"us-east-1"</span>  # Set your desired AWS region
  }

  resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"example"</span> {
      ami           = <span class="hljs-string">"ami-0c55b159cbfafe1f0"</span>  # Specify an appropriate AMI ID
      instance_type = <span class="hljs-string">"t2.micro"</span>
  }
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726720927401/0ac923aa-9c4f-4a0c-a87c-88a1dbf4dfb2.png" alt class="image--center mx-auto" /></p>
</li>
<li><h4 id="heading-run-the-following-command-to-initialize-your-terraform-project">Run the following command to initialize your Terraform project</h4>
<pre><code class="lang-basic">  terraform init
</code></pre>
<p>  This command prepares the working directory by downloading the necessary provider plugins (in this case, for AWS) and setting up the environment</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726721949946/6e07f111-4118-4cc0-9a12-b80e53600f40.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-preview-changes-using-terraform-plan">Preview Changes Using <code>terraform plan</code></h4>
<pre><code class="lang-basic">  terraform plan
</code></pre>
<p>  This command will show you the "execution plan"—detailing what resources Terraform will create, modify, or destroy. It helps you confirm that everything looks correct before applying changes</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726722254387/135eb462-88ba-4043-8b61-f3ebb2edc7d0.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Apply Changes Using</strong> <code>terraform apply</code></p>
<pre><code class="lang-basic">  terraform apply
</code></pre>
<p>  It is used to execute the changes described in your configuration files and actually create, modify, or delete infrastructure resources. After running <code>terraform plan</code> to preview the changes, <code>terraform apply</code> makes those changes happen</p>
<blockquote>
<p>If you encounter an error like this</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726723556072/a585c4f6-556e-4331-834f-c1ba39bc2f4f.png" alt class="image--center mx-auto" /></p>
<p>This happens because the <code>AMI ID</code> provided may not be valid in your AWS region</p>
</blockquote>
</li>
<li><p><strong>To resolve this Error</strong></p>
<ol>
<li><p>Go to your AWS account</p>
</li>
<li><p>In the <strong>EC2 Dashboard</strong>, search for a valid AMI ID</p>
</li>
<li><p>Copy the correct AMI ID</p>
</li>
<li><p>Replace the invalid AMI ID in the <code>main.tf</code> file with the correct one</p>
</li>
</ol>
</li>
</ul>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726723813149/506d4255-11e3-4f7b-9600-297c60bf504f.png" alt class="image--center mx-auto" /></p>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726724002714/9c5c6184-154a-42bc-a3f1-9e665aaa56aa.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Once you have the correct AMI ID, run the</strong> <code>terraform apply</code> <strong>command again to create the EC2 instance</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726725797681/acb97bae-aee2-426c-9178-0f8cd1f534d0.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>If the command runs successfully, an EC2 instance will be created in your AWS account</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726725944794/4c402d73-7b8c-4526-8112-639cf4cc5167.png" alt class="image--center mx-auto" /></p>
</blockquote>
</li>
<li><p><strong>Run</strong> <code>terraform destroy</code> <strong>to delete all infrastructure resources that Terraform has created and is managing. This command will remove everything that is defined in your state file, essentially tearing down your entire infrastructure</strong></p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1726727526343/90e2af06-697e-48d0-a828-c1701e6f4118.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h4 id="heading-terraform-state">Terraform State</h4>
<p>Terraform keeps a file called a "state file" that stores the current setup of your infrastructure. This file helps Terraform figure out what changes need to be made by comparing the setup you want with what already exists. It uses this information to apply updates correctly.</p>
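<p>By default, this state lives in a local <code>terraform.tfstate</code> file in your working directory. As a sketch (the bucket name and key below are placeholders), a <code>backend</code> block can instead keep the state in shared remote storage such as S3, so a whole team works against one copy:</p>
<pre><code class="lang-bash">  terraform {
    backend "s3" {
      bucket = "my-terraform-state-bucket"  # placeholder: an existing S3 bucket you own
      key    = "prod/terraform.tfstate"     # object path for the state inside the bucket
      region = "us-east-1"
    }
  }
</code></pre>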
<hr />
<h3 id="heading-terraform-providers"><strong>Terraform Providers</strong></h3>
<p>In Terraform, <code>providers</code> are plugins that allow Terraform to communicate with cloud platforms, services, or other APIs. Providers essentially tell Terraform which services to interact with, such as AWS, Azure, Google Cloud, etc.</p>
<p>When setting up infrastructure, you define <code>providers</code> in your configuration file using the <code>provider</code> block. This tells Terraform what cloud services you are going to use and sets necessary parameters such as region or authentication information.</p>
<p><strong>Some examples of providers:</strong></p>
<ul>
<li><p><code>azurerm</code> - for Azure</p>
</li>
<li><p><code>google</code> - for Google Cloud Platform</p>
</li>
<li><p><code>kubernetes</code> - for Kubernetes</p>
</li>
<li><p><code>openstack</code> - for OpenStack</p>
</li>
<li><p><code>vsphere</code> - for VMware vSphere</p>
</li>
</ul>
<blockquote>
<h4 id="heading-single-provider-example-aws"><strong>Single Provider Example (AWS)</strong></h4>
<p>If you want to use Terraform to manage infrastructure on AWS, you first define the AWS provider</p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  region = <span class="hljs-string">"us-east-1"</span>  <span class="hljs-comment"># Specify the AWS region</span>
}
</code></pre>
<p>Here, the region <code>us-east-1</code> is specified, which tells Terraform that resources should be created in the <strong>US East region</strong>.</p>
</blockquote>
<h3 id="heading-multiple-region-setup"><strong>Multiple Region Setup</strong></h3>
<p>Terraform supports setting up resources across multiple regions within the same cloud provider by using the <code>alias</code> keyword. This allows you to define <code>multiple instances</code> of the same provider, each targeting a different region.</p>
<pre><code class="lang-bash">provider <span class="hljs-string">"aws"</span> {
  <span class="hljs-built_in">alias</span>  = <span class="hljs-string">"us-east-1"</span>  <span class="hljs-comment"># Alias to identify this provider instance</span>
  region = <span class="hljs-string">"us-east-1"</span>
}

provider <span class="hljs-string">"aws"</span> {
  <span class="hljs-built_in">alias</span>  = <span class="hljs-string">"us-west-2"</span>  <span class="hljs-comment"># Alias for another region</span>
  region = <span class="hljs-string">"us-west-2"</span>
}

resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"east_instance"</span> {
  ami           = <span class="hljs-string">"ami-0123456789abcdef0"</span>
  instance_type = <span class="hljs-string">"t2.micro"</span>
  provider      = aws.us-east-1  <span class="hljs-comment"># Use the east region provider</span>
}

resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"west_instance"</span> {
  ami           = <span class="hljs-string">"ami-0123456789abcdef0"</span>
  instance_type = <span class="hljs-string">"t2.micro"</span>
  provider      = aws.us-west-2  <span class="hljs-comment"># Use the west region provider</span>
}
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727643952263/968ce77c-eefa-4a6b-9230-83b260924f12.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>You can check the <code>ec2 instance</code> is created in both <code>us-east-1</code> &amp; <code>us-west-2</code> region</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727643958302/9e199aee-5565-4bcc-add2-47c0c5e4aeab.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1727643964577/ff6b8e9a-a46c-406a-b697-3d6ea4a97927.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p><strong>In this example</strong></p>
<ul>
<li><p>Two <code>provider</code> blocks are created for AWS, one for the <code>us-east-1</code> region and one for the <code>us-west-2</code> region.</p>
</li>
<li><p>The <code>alias</code> keyword helps Terraform distinguish between different instances of the same provider.</p>
</li>
<li><p>Each resource (EC2 instance) is tied to a specific provider using the <code>provider</code> attribute.</p>
</li>
</ul>
</blockquote>
<h3 id="heading-multi-cloud-setup"><strong>Multi-Cloud Setup</strong></h3>
<p>Terraform allows you to use multiple providers in one project, enabling you to manage infrastructure across different cloud platforms simultaneously. Here’s how to set it up.</p>
<ul>
<li><h4 id="heading-create-a-providerstf-file">Create a <code>providers.tf</code> File</h4>
<p>  Begin by creating a <code>providers.tf</code> file in the root directory of your Terraform project. This file will define the cloud providers that you want to use.</p>
</li>
<li><p><strong>Define Providers in</strong> <code>providers.tf</code></p>
<p>  In the <code>providers.tf</code> file, define the providers for AWS and Azure.</p>
<pre><code class="lang-bash">  provider <span class="hljs-string">"aws"</span> {
    region = <span class="hljs-string">"us-east-1"</span>
  }

  provider <span class="hljs-string">"azurerm"</span> {
    subscription_id = <span class="hljs-string">"your-azure-subscription-id"</span>
    client_id       = <span class="hljs-string">"your-azure-client-id"</span>
    client_secret   = <span class="hljs-string">"your-azure-client-secret"</span>
    tenant_id       = <span class="hljs-string">"your-azure-tenant-id"</span>
  }
</code></pre>
<blockquote>
<p>This configuration sets up AWS and Azure as providers in your project. Replace the placeholder values with your actual credentials.</p>
</blockquote>
</li>
<li><p><strong>Use Providers in Resource Definitions</strong></p>
<p>  Once you've configured the providers, you can create resources in AWS and Azure within the same project.</p>
<pre><code class="lang-bash">  resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"example"</span> {
    ami           = <span class="hljs-string">"ami-0123456789abcdef0"</span>
    instance_type = <span class="hljs-string">"t2.micro"</span>
  }

  resource <span class="hljs-string">"azurerm_virtual_machine"</span> <span class="hljs-string">"example"</span> {
    name     = <span class="hljs-string">"example-vm"</span>
    location = <span class="hljs-string">"eastus"</span>
    size     = <span class="hljs-string">"Standard_A1"</span>
  }
</code></pre>
<blockquote>
<p>In this example</p>
<ul>
<li><p>An EC2 instance is provisioned in AWS using the <code>aws_instance</code> resource.</p>
</li>
<li><p>A virtual machine is provisioned in Azure using the <code>azurerm_virtual_machine</code> resource.</p>
</li>
</ul>
</blockquote>
</li>
</ul>
<h3 id="heading-terraform-variables"><strong>Terraform Variables</strong></h3>
<p>Terraform variables make your configuration more dynamic, reusable, and flexible. Instead of hardcoding values directly in your code, you define variables that can be reused across your configuration. This makes it easier to adapt the same infrastructure for different environments or teams.</p>
<h4 id="heading-types-of-variables"><strong>Types of Variables</strong></h4>
<ol>
<li><p><strong>Input Variables</strong></p>
<p> Input variables allow you to parameterize your Terraform configurations. This means you can <strong>pass values</strong> into your modules or configurations from the outside (for example, when running <code>terraform apply</code>). Input variables can be defined at both the module level and the root level.</p>
<blockquote>
<h4 id="heading-defining-input-variables"><strong>Defining Input Variables</strong></h4>
<p>You can define an input variable in Terraform like this</p>
<pre><code class="lang-bash">variable <span class="hljs-string">"instance_type"</span> {
  description = <span class="hljs-string">"EC2 instance type"</span>  <span class="hljs-comment"># A description for documentation</span>
  <span class="hljs-built_in">type</span>        = string               <span class="hljs-comment"># The expected type (string, number, list, etc.)</span>
  default     = <span class="hljs-string">"t2.micro"</span>           <span class="hljs-comment"># Default value if not provided</span>
}
</code></pre>
<p>This block defines an input variable <code>instance_type</code>, which can be set when running the configuration, or it will use the <strong>default value</strong> of <code>"t2.micro"</code> if not provided</p>
<h4 id="heading-using-input-variables"><strong>Using Input Variables</strong></h4>
<p>After defining a variable, you can reference it using the <code>var</code> keyword inside your configuration</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"example_instance"</span> {
  ami           = var.ami_id         <span class="hljs-comment"># Referencing the variable value</span>
  instance_type = var.instance_type  <span class="hljs-comment"># Referencing another variable</span>
}
</code></pre>
<p>When you run <code>terraform apply</code>, you can pass the value of the input variables through the command line or in a <code>.tfvars</code> file</p>
</blockquote>
</li>
<li><p><strong>Output Variables</strong></p>
<p> <strong>Output variables</strong> are used to display values after the execution of a configuration. These are useful for <strong>retrieving key information</strong> from your infrastructure, such as the IP address of a created instance or resource IDs.</p>
<blockquote>
<h4 id="heading-defining-output-variables"><strong>Defining Output Variables</strong></h4>
<p>You can define an output variable like this</p>
<pre><code class="lang-bash">output <span class="hljs-string">"public_ip"</span> {
  description = <span class="hljs-string">"Public IP address of the EC2 instance"</span>
  value       = aws_instance.example_instance.public_ip  <span class="hljs-comment"># The value to output</span>
}
</code></pre>
<p>In this case, after Terraform finishes creating the EC2 instance, it will display the <strong>public IP address</strong> of the instance as part of the output</p>
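<p>Output values can also be read back at any time after an apply with the <code>terraform output</code> command:</p>
<pre><code class="lang-bash">terraform output public_ip   # prints just this output's value
terraform output             # prints all defined outputs
</code></pre>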
</blockquote>
</li>
</ol>
<h3 id="heading-organizing-terraform-files"><strong>Organizing Terraform Files</strong></h3>
<p>When working on large projects, it’s important to organize your Terraform files for better readability and maintenance. A typical structure might look like this:</p>
<ul>
<li><p><code>providers.tf</code>: Contains provider configurations.</p>
</li>
<li><p><code>variables.tf</code>: Defines all input variables.</p>
</li>
<li><p><code>outputs.tf</code>: Defines output variables.</p>
</li>
<li><p><code>main.tf</code>: Contains the core infrastructure resources.</p>
</li>
</ul>
<p>For projects involving multiple environments (development, staging, production), keep the variable declarations in <code>variables.tf</code> and supply the actual values through environment-specific <strong>.tfvars</strong> files.</p>
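<p>A common layout (shown here as a sketch; the file names are just a convention) is one <code>.tfvars</code> file per environment, selected at apply time with the <code>-var-file</code> flag:</p>
<pre><code class="lang-bash"># dev.tfvars
instance_type = "t2.micro"

# prod.tfvars
instance_type = "t3.large"
</code></pre>
<pre><code class="lang-bash">terraform apply -var-file="prod.tfvars"
</code></pre>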
<h3 id="heading-conditional-expressions"><strong>Conditional Expressions</strong></h3>
<p>Conditional expressions in Terraform allow you to define dynamic behavior based on conditions. They work similarly to ternary operators (<code>? :</code>) in programming languages. You can use them to decide whether resources should be created or how they should be configured based on variable values or other conditions.</p>
<h4 id="heading-syntax-of-conditional-expressions"><strong>Syntax of Conditional Expressions</strong></h4>
<p>The syntax of a conditional expression in Terraform is</p>
<pre><code class="lang-bash">condition ? true_value : false_value
</code></pre>
<p>This evaluates the <code>condition</code>. If it's <code>true</code>, the expression returns <code>true_value</code>; otherwise, it returns <code>false_value</code>.</p>
<h4 id="heading-conditional-resource-creation"><strong>Conditional Resource Creation</strong></h4>
<p>You can conditionally create resources using the <code>count</code> attribute:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_instance"</span> <span class="hljs-string">"example"</span> {
  count = var.create_instance ? 1 : 0  <span class="hljs-comment"># Create instance if create_instance is true</span>

  ami           = <span class="hljs-string">"ami-0123456789abcdef0"</span>
  instance_type = <span class="hljs-string">"t2.micro"</span>
}
</code></pre>
<p>In this example, the instance will only be created if the variable <code>create_instance</code> is set to <code>true</code>. If <code>false</code>, Terraform will create <strong>0 instances</strong>.</p>
<h4 id="heading-conditional-resource-configuration"><strong>Conditional Resource Configuration</strong></h4>
<p>You can use conditional expressions within resource configuration blocks to dynamically adjust settings:</p>
<pre><code class="lang-bash">resource <span class="hljs-string">"aws_security_group"</span> <span class="hljs-string">"example"</span> {
  name        = <span class="hljs-string">"example-sg"</span>
  description = <span class="hljs-string">"Example security group"</span>

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = <span class="hljs-string">"tcp"</span>
    cidr_blocks = var.enable_ssh ? [<span class="hljs-string">"0.0.0.0/0"</span>] : []  <span class="hljs-comment"># Conditionally allow SSH</span>
  }
}
</code></pre>
<p>In this example, SSH access is only allowed if the variable <code>enable_ssh</code> is set to <code>true</code>. Otherwise, the security group won’t allow any incoming SSH connections.</p>
<h3 id="heading-terraform-built-in-functions"><strong>Terraform Built-in Functions</strong></h3>
<p>Terraform ships with a set of built-in functions that make it easier to manipulate and transform data in your configuration. These functions let you operate on lists, strings, and maps.</p>
<h4 id="heading-common-built-in-functions"><strong>Common Built-in Functions</strong></h4>
<ol>
<li><p><strong>concat(list1, list2, ...)</strong> Combines multiple lists into a single list.</p>
<pre><code class="lang-bash"> variable <span class="hljs-string">"list1"</span> {
   <span class="hljs-built_in">type</span>    = list(string)
   default = [<span class="hljs-string">"a"</span>, <span class="hljs-string">"b"</span>]
 }

 variable <span class="hljs-string">"list2"</span> {
   <span class="hljs-built_in">type</span>    = list(string)
   default = [<span class="hljs-string">"c"</span>, <span class="hljs-string">"d"</span>]
 }

 output <span class="hljs-string">"combined_list"</span> {
   value = concat(var.list1, var.list2)  <span class="hljs-comment"># Returns ["a", "b", "c", "d"]</span>
 }
</code></pre>
</li>
<li><p><strong>element(list, index)</strong> Retrieves an element from a list based on the given index.</p>
<pre><code class="lang-bash"> variable <span class="hljs-string">"my_list"</span> {
   <span class="hljs-built_in">type</span>    = list(string)
   default = [<span class="hljs-string">"apple"</span>, <span class="hljs-string">"banana"</span>, <span class="hljs-string">"cherry"</span>]
 }

 output <span class="hljs-string">"selected_element"</span> {
   value = element(var.my_list, 1)  <span class="hljs-comment"># Returns "banana"</span>
 }
</code></pre>
</li>
<li><p><strong>length(list)</strong> Returns the number of elements in a list.</p>
<pre><code class="lang-bash"> variable <span class="hljs-string">"my_list"</span> {
   <span class="hljs-built_in">type</span>    = list(string)
   default = [<span class="hljs-string">"apple"</span>, <span class="hljs-string">"banana"</span>, <span class="hljs-string">"cherry"</span>]
 }

 output <span class="hljs-string">"list_length"</span> {
   value = length(var.my_list)  <span class="hljs-comment"># Returns 3</span>
 }
</code></pre>
</li>
<li><p><strong>lookup(map, key)</strong> Retrieves the value from a map by the specified key.</p>
<pre><code class="lang-bash"> variable <span class="hljs-string">"my_map"</span> {
   <span class="hljs-built_in">type</span>    = map(string)
   default = {
     name  = <span class="hljs-string">"Alice"</span>
     age   = 25
   }
 }

 output <span class="hljs-string">"value"</span> {
   value = lookup(var.my_map, <span class="hljs-string">"name"</span>)  <span class="hljs-comment"># Returns "Alice"</span>
 }
</code></pre>
</li>
<li><p><strong>join(separator, list)</strong> Joins the elements of a list into a single string, separated by the specified separator.</p>
<pre><code class="lang-bash"> variable <span class="hljs-string">"my_list"</span> {
   <span class="hljs-built_in">type</span>    = list(string)
   default = [<span class="hljs-string">"apple"</span>, <span class="hljs-string">"banana"</span>, <span class="hljs-string">"cherry"</span>]
 }

 output <span class="hljs-string">"joined_string"</span> {
   value = join(<span class="hljs-string">", "</span>, var.my_list)  <span class="hljs-comment"># Returns "apple, banana, cherry"</span>
 }
</code></pre>
</li>
</ol>
<hr />
<h3 id="heading-module-in-terraform">Module in Terraform</h3>
<p>In Terraform, <code>modules</code> are like building blocks that help organize and reuse your infrastructure code. Instead of writing the same code over and over, you can group related resources into a module and use it multiple times. This makes your Terraform setup easier to manage, reuse, and scale without duplicating code.</p>
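<p>As a minimal sketch (the module path and names here are purely illustrative), a module is just a directory of <code>.tf</code> files that you call with a <code>module</code> block:</p>
<pre><code class="lang-bash"># modules/ec2/main.tf (the reusable module)
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

resource "aws_instance" "this" {
  ami           = "ami-0123456789abcdef0"
  instance_type = var.instance_type
}

# Root main.tf, calling the module twice with different inputs
module "web" {
  source        = "./modules/ec2"
  instance_type = "t3.small"
}

module "worker" {
  source = "./modules/ec2"   # uses the module's default instance_type
}
</code></pre>
<p>Run <code>terraform init</code> after adding a module block so Terraform can install it.</p>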
<hr />
<h3 id="heading-provisioners-in-terraform">Provisioners in Terraform</h3>
]]></content:encoded></item><item><title><![CDATA[Kubernetes-part-7]]></title><description><![CDATA[Understanding ConfigMap: Simplify Your Kubernetes Configuration Management
It is used to store non-confidential configuration data in key-value pairs. It keeps configuration separate from the application code, which makes it easier to manage and update...]]></description><link>https://blogs.praduman.site/kubernetes-part-7</link><guid isPermaLink="true">https://blogs.praduman.site/kubernetes-part-7</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Tue, 17 Sep 2024 16:05:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726606404328/e1ac102c-9ec0-4f90-b09a-fa287cb11935.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-understanding-configmap-simplify-your-kubernetes-configuration-management">Understanding ConfigMap: Simplify Your Kubernetes Configuration Management</h3>
<p>A ConfigMap is used to store non-confidential configuration data in key-value pairs. It keeps configuration separate from the application code, which makes it easier to manage and update without redeploying the entire application.</p>
<blockquote>
<h4 id="heading-example-1-passing-database-user-to-mysql-pod-using-configmap"><strong>Example 1: Passing Database User to MySQL Pod Using ConfigMap</strong></h4>
</blockquote>
<ul>
<li><p><strong>Create the ConfigMap (cm1.yaml)</strong>: The ConfigMap will contain the database user and password</p>
<pre><code class="lang-yaml">  <span class="hljs-comment"># cm1.yaml</span>
  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ConfigMap</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">bootcamp-configmap</span>
  <span class="hljs-attr">data:</span>
    <span class="hljs-attr">username:</span> <span class="hljs-string">"saiyam"</span>
    <span class="hljs-attr">database_name:</span> <span class="hljs-string">"exampledb"</span>
</code></pre>
<pre><code class="lang-basic">  kubectl apply -f cm1.yaml
</code></pre>
</li>
<li><p><strong>Create the Pod (pod1.yaml)</strong>: The Pod will run a MySQL container and use the environment variables from the ConfigMap</p>
<pre><code class="lang-yaml">  <span class="hljs-comment"># pod1.yaml</span>
  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-pod</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">containers:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">mysql:5.7</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYSQL_USER</span>
          <span class="hljs-attr">valueFrom:</span>
            <span class="hljs-attr">configMapKeyRef:</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">bootcamp-configmap</span>
              <span class="hljs-attr">key:</span> <span class="hljs-string">username</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYSQL_DATABASE</span>
          <span class="hljs-attr">valueFrom:</span>
            <span class="hljs-attr">configMapKeyRef:</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">bootcamp-configmap</span>
              <span class="hljs-attr">key:</span> <span class="hljs-string">database_name</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYSQL_PASSWORD</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">demo123</span>  <span class="hljs-comment"># Specify a strong password.</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYSQL_ROOT_PASSWORD</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">demo345</span> <span class="hljs-comment"># You should change this value.</span>
      <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">3306</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">mysql</span>
      <span class="hljs-attr">volumeMounts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-storage</span>
          <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/var/lib/mysql</span>
    <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-storage</span>
        <span class="hljs-attr">emptyDir:</span> {}
</code></pre>
<pre><code class="lang-basic">   kubectl apply -f pod1.yaml
</code></pre>
</li>
<li><p><strong>Access the MySQL Pod</strong>: Once the Pod is running, connect to the MySQL container and verify the user setup</p>
<pre><code class="lang-basic">  kubectl exec -it mysql-pod -- mysql -u root -p
</code></pre>
<blockquote>
<p>If prompted for a password, use the root password set in the Pod spec via <code>MYSQL_ROOT_PASSWORD</code> (<code>demo345</code> in this example); the ConfigMap only stores the username and database name, not passwords</p>
</blockquote>
</li>
<li><p><strong>Run MySQL Commands</strong>: Inside the MySQL prompt, run the following commands to verify the users and databases:</p>
<pre><code class="lang-sql">  <span class="hljs-keyword">SELECT</span> <span class="hljs-keyword">user</span> <span class="hljs-keyword">FROM</span> mysql.user;
  <span class="hljs-keyword">SHOW</span> <span class="hljs-keyword">DATABASES</span>;
</code></pre>
</li>
</ul>
<blockquote>
<h4 id="heading-example-2-managing-devprod-properties-with-configmap"><strong>Example 2: Managing Dev/Prod Properties with ConfigMap</strong></h4>
</blockquote>
<ul>
<li><p>Create the ConfigMap (cm2.yaml)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ConfigMap</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">app-config-dev</span>
  <span class="hljs-attr">data:</span>
    <span class="hljs-attr">settings.properties:</span> <span class="hljs-string">|
      # Development Configuration
      debug=true
      database_url=http://dev-db.example.com
      featureX_enabled=false
</span>
  <span class="hljs-string">---</span>

  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ConfigMap</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">app-config-prod</span>
  <span class="hljs-attr">data:</span>
    <span class="hljs-attr">settings.properties:</span> <span class="hljs-string">|
      # Production Configuration
      debug=false
      database_url=http://prod-db.example.com
      featureX_enabled=true</span>
</code></pre>
<pre><code class="lang-basic">  kubectl apply -f cm2.yaml
</code></pre>
</li>
<li><p>Create the Pod (pod2.yaml)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">my-web-app</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">my-web-app</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">my-web-app</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">web-app-container</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>  
          <span class="hljs-attr">ports:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
          <span class="hljs-attr">env:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ENVIRONMENT</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">"development"</span>  
          <span class="hljs-attr">volumeMounts:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">config-volume</span>
            <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/etc/config</span>
        <span class="hljs-attr">volumes:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">config-volume</span>
          <span class="hljs-attr">configMap:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">app-config-dev</span>
</code></pre>
<pre><code class="lang-basic">  kubectl apply -f pod2.yaml
</code></pre>
</li>
<li><p><strong>Verify the ConfigMap is Mounted</strong>: After the Pod is running, you can check if the <code>settings.properties</code> file has been correctly mounted and read its contents</p>
<pre><code class="lang-basic">  kubectl exec -it deploy/my-web-app -- cat /etc/config/settings.properties
</code></pre>
<p>  This should display the content of the <code>settings.properties</code> file:</p>
<pre><code class="lang-basic">  # Development Configuration
  debug=true
  database_url=http://dev-db.example.com
  featureX_enabled=false
</code></pre>
</li>
<li><p><strong>Switch to Production</strong>: To use the <strong>production</strong> configuration instead, update the Deployment to mount <code>app-config-prod</code>.</p>
<p>  Change this part in <code>pod2.yaml</code>:</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">configMap:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">app-config-prod</span>
</code></pre>
<p>  <strong>Reapply the deployment</strong></p>
<pre><code class="lang-basic">  kubectl apply -f pod2.yaml
</code></pre>
<p>  Now, if you check the configuration inside the pod, you’ll see the production settings:</p>
<pre><code class="lang-basic">  # Production Configuration
  debug=false
  database_url=http://prod-db.example.com
  featureX_enabled=true
</code></pre>
</li>
</ul>
<blockquote>
<h4 id="heading-example-3-accessing-configmap-programmatically-with-python"><strong>Example 3: Accessing ConfigMap Programmatically with Python</strong></h4>
</blockquote>
<ul>
<li><p><strong>read_config.py:</strong> This Python script reads and prints the contents of the <code>app-config</code> ConfigMap in the <code>default</code> namespace</p>
<pre><code class="lang-python">  <span class="hljs-keyword">from</span> kubernetes <span class="hljs-keyword">import</span> client, config

  <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>():</span>
      <span class="hljs-comment"># Load the Kubernetes configuration</span>
      config.load_incluster_config()

      v1 = client.CoreV1Api()
      config_map_name = <span class="hljs-string">'app-config'</span>
      namespace = <span class="hljs-string">'default'</span>

      <span class="hljs-keyword">try</span>:
          <span class="hljs-comment"># Read the ConfigMap</span>
          config_map = v1.read_namespaced_config_map(config_map_name, namespace)
          print(<span class="hljs-string">"ConfigMap data:"</span>)
          <span class="hljs-keyword">for</span> key, value <span class="hljs-keyword">in</span> config_map.data.items():
              print(<span class="hljs-string">f"<span class="hljs-subst">{key}</span>: <span class="hljs-subst">{value}</span>"</span>)
      <span class="hljs-keyword">except</span> client.exceptions.ApiException <span class="hljs-keyword">as</span> e:
          print(<span class="hljs-string">f"Exception when calling CoreV1Api-&gt;read_namespaced_config_map: <span class="hljs-subst">{e}</span>"</span>)

  <span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">'__main__'</span>:
      main()
</code></pre>
</li>
<li><p><strong>Dockerfile:</strong> This Dockerfile builds the image that runs the Python script.</p>
<pre><code class="lang-bash">  <span class="hljs-comment"># Use a lightweight Python image</span>
  FROM python:3.8-slim

  <span class="hljs-comment"># Install the Kubernetes Python client</span>
  RUN pip install kubernetes

  <span class="hljs-comment"># Copy the Python script</span>
  COPY read_config.py /read_config.py

  <span class="hljs-comment"># Set the command to run the script</span>
  CMD [<span class="hljs-string">"python"</span>, <span class="hljs-string">"/read_config.py"</span>]
</code></pre>
</li>
<li><p><strong>Build the Docker Image</strong>: Build the Docker image using the provided <code>Dockerfile</code> and push it to a public image registry (in this case, <code>ttl.sh</code>)</p>
<pre><code class="lang-bash">  docker build -t ttl.sh/hindi-boot:1h .
  docker push ttl.sh/hindi-boot:1h
</code></pre>
</li>
<li><p><strong>app.yaml</strong>: This YAML file defines all the Kubernetes resources: the ConfigMap, Deployment, Role, and RoleBinding.</p>
<pre><code class="lang-yaml">  <span class="hljs-comment"># ConfigMap storing some example properties</span>
  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ConfigMap</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">app-config</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">data:</span>
    <span class="hljs-attr">example.property:</span> <span class="hljs-string">"Hello, world!"</span>
    <span class="hljs-attr">another.property:</span> <span class="hljs-string">"Just another example."</span>
  <span class="hljs-string">---</span>
  <span class="hljs-comment"># Deployment to run the Python script in a pod</span>
  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">config-reader-deployment</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">config-reader</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">config-reader</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">config-reader</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">ttl.sh/hindi-boot:1h</span>
          <span class="hljs-attr">imagePullPolicy:</span> <span class="hljs-string">Always</span>
  <span class="hljs-string">---</span>
  <span class="hljs-comment"># Role granting read access to ConfigMaps in the default namespace</span>
  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Role</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">config-reader</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">apiGroups:</span> [<span class="hljs-string">""</span>]
    <span class="hljs-attr">resources:</span> [<span class="hljs-string">"configmaps"</span>]
    <span class="hljs-attr">verbs:</span> [<span class="hljs-string">"get"</span>, <span class="hljs-string">"list"</span>, <span class="hljs-string">"watch"</span>]
  <span class="hljs-string">---</span>
  <span class="hljs-comment"># RoleBinding to attach the Role to the default ServiceAccount</span>
  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">RoleBinding</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">read-configmaps</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">subjects:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">kind:</span> <span class="hljs-string">ServiceAccount</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">default</span> 
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
  <span class="hljs-attr">roleRef:</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">Role</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">config-reader</span>
    <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">rbac.authorization.k8s.io</span>
</code></pre>
</li>
<li><p><strong>Apply the Kubernetes Resources</strong>: Deploy the <strong>ConfigMap</strong>, <strong>Deployment</strong>, and <strong>RBAC</strong> resources to Kubernetes.</p>
<pre><code class="lang-basic">  kubectl apply -f app.yaml
</code></pre>
</li>
<li><p><strong>Check the Logs</strong>: After the pod is running, you can check the logs to see if the ConfigMap has been read successfully.</p>
<pre><code class="lang-basic">  kubectl logs -l app=config-reader
</code></pre>
<p>  The output should display the content of the <strong>ConfigMap</strong></p>
<pre><code class="lang-basic">  ConfigMap <span class="hljs-keyword">data</span>:
  example.property: Hello, world!
  another.property: Just another example.
</code></pre>
</li>
</ul>
<h3 id="heading-secrets-in-kubernetes-securely-manage-sensitive-information">Secrets in Kubernetes: Securely Manage Sensitive Information</h3>
<p>When deploying applications in Kubernetes, you often need to handle sensitive information such as passwords, API keys, or tokens. Storing these directly in your code or configuration files can be insecure. <strong>Kubernetes Secrets</strong> provide a way to securely store and manage sensitive information.</p>
<p>Unlike ConfigMaps (which store non-sensitive configuration data), Secrets are intended for sensitive data. They store values in an encoded format (Base64) and can be mounted into Pods or accessed as environment variables. Keep in mind that Base64 is an encoding, not encryption: anyone who can read the Secret object can decode its contents.</p>
<h4 id="heading-types-of-kubernetes-secrets-opaque-tls-and-docker-registry"><strong>Types of Kubernetes Secrets: Opaque, TLS, and Docker Registry</strong></h4>
<ul>
<li><p><strong>Opaque Secret</strong>: General-purpose secret, commonly used for storing credentials.</p>
</li>
<li><p><strong>TLS Secret</strong>: Specifically for storing SSL/TLS certificates.</p>
</li>
<li><p><strong>Docker Registry Secret</strong>: For storing Docker registry credentials, used for pulling private images.</p>
</li>
</ul>
<h4 id="heading-example-1-creating-a-kubernetes-secret-using-yaml"><strong>Example 1: Creating a Kubernetes Secret Using YAML</strong></h4>
<blockquote>
<ul>
<li><p><strong>Encode your values</strong> in Base64</p>
<pre><code class="lang-bash">  <span class="hljs-built_in">echo</span> -n <span class="hljs-string">'my-db-username'</span> | base64
  <span class="hljs-built_in">echo</span> -n <span class="hljs-string">'my-db-password'</span> | base64
</code></pre>
<p>  This will return</p>
<pre><code class="lang-bash">  bXktZGItdXNlcm5hbWU=
  bXktZGItcGFzc3dvcmQ=
</code></pre>
</li>
<li><p><strong>Create the Secret YAML</strong> file (<code>secret.yaml</code>)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Secret</span>
  <span class="hljs-attr">metadata:</span> 
    <span class="hljs-attr">name:</span> <span class="hljs-string">db-secret</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">Opaque</span>
  <span class="hljs-attr">data:</span> 
    <span class="hljs-attr">username:</span> <span class="hljs-string">bXktZGItdXNlcm5hbWU=</span>
    <span class="hljs-attr">password:</span> <span class="hljs-string">bXktZGItcGFzc3dvcmQ=</span>
</code></pre>
<p>  Apply the secret</p>
<pre><code class="lang-bash">  kubectl apply -f secret.yaml
</code></pre>
</li>
<li><p>To view the secret</p>
<pre><code class="lang-bash">  kubectl get secret
</code></pre>
</li>
<li><p>To decode the Base64-encoded data</p>
<pre><code class="lang-bash">  <span class="hljs-built_in">echo</span> <span class="hljs-string">"put-base64-encoded-value"</span> | base64 -d
</code></pre>
</li>
</ul>
</blockquote>
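<p>Because Base64 is a reversible encoding rather than encryption, the same values can be produced and recovered in any language. A small Python sketch (illustrative only) mirroring the shell commands above:</p>
<pre><code class="lang-python">import base64

# Encode a credential the same way `echo -n ... | base64` does
encoded = base64.b64encode(b"my-db-username").decode()
print(encoded)   # bXktZGItdXNlcm5hbWU=

# Anyone who can read the Secret can decode it just as easily
decoded = base64.b64decode(encoded).decode()
print(decoded)   # my-db-username
</code></pre>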
<h4 id="heading-example-2-creating-a-kubernetes-secret-from-the-command-line"><strong>Example 2: Creating a Kubernetes Secret from the Command Line</strong></h4>
<pre><code class="lang-bash">kubectl create secret generic db-secret \
  --from-literal=username=my-db-username \
  --from-literal=password=my-db-password
</code></pre>
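<p>You can confirm what was stored (key names follow the example above; the second command prints the decoded username):</p>
<pre><code class="lang-bash">kubectl get secret db-secret -o yaml
kubectl get secret db-secret -o jsonpath='{.data.username}' | base64 -d
</code></pre>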
<h4 id="heading-using-secrets-in-a-pod-environment-variables-and-volume-mounts"><strong>Using Secrets in a Pod: Environment Variables and Volume Mounts</strong></h4>
<p>Once the Secret is created, you can use it in a Pod as either environment variables or mounted as files</p>
<ul>
<li><h4 id="heading-example-using-a-secret-as-environment-variables">Example: Using a Secret as Environment Variables</h4>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">db-app</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">containers:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">app-container</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">my-app:latest</span>
      <span class="hljs-attr">env:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DB_USERNAME</span>
        <span class="hljs-attr">valueFrom:</span>
          <span class="hljs-attr">secretKeyRef:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">db-secret</span>
            <span class="hljs-attr">key:</span> <span class="hljs-string">username</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DB_PASSWORD</span>
        <span class="hljs-attr">valueFrom:</span>
          <span class="hljs-attr">secretKeyRef:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">db-secret</span>
            <span class="hljs-attr">key:</span> <span class="hljs-string">password</span>
</code></pre>
<blockquote>
<p>Here, the <strong>username</strong> and <strong>password</strong> from the Secret are injected into the container as environment variables</p>
</blockquote>
</li>
<li><h4 id="heading-example-mounting-a-secret-as-a-volume"><strong>Example: Mounting a Secret as a Volume</strong></h4>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">app-with-secret</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">containers:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">app-container</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">my-app:latest</span>
      <span class="hljs-attr">volumeMounts:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">secret-volume</span>
        <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/etc/secrets</span>
        <span class="hljs-attr">readOnly:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">volumes:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">secret-volume</span>
      <span class="hljs-attr">secret:</span>
        <span class="hljs-attr">secretName:</span> <span class="hljs-string">db-secret</span>
</code></pre>
<blockquote>
<p>This mounts the Secret’s data into <code>/etc/secrets/</code> inside the container, with each key appearing as a separate file (here, <code>/etc/secrets/username</code> and <code>/etc/secrets/password</code>)</p>
</blockquote>
</li>
</ul>
<h4 id="heading-storing-ssh-private-keys-in-kubernetes-secrets"><strong>Storing SSH Private Keys in Kubernetes Secrets</strong></h4>
<pre><code class="lang-bash">kubectl create secret generic my-ssh-key-secret \
--from-file=ssh-privatekey=/path/to/.ssh/id_rsa \
--type=kubernetes.io/ssh-auth
</code></pre>
<blockquote>
<p><code>--from-file=ssh-privatekey=/path/to/.ssh/id_rsa</code>:</p>
<ul>
<li><p><code>--from-file</code>: Specifies that the content for the Secret should come from a file.</p>
</li>
<li><p><code>ssh-privatekey</code>: The key under which the private key will be stored in the Secret.</p>
</li>
<li><p><code>/path/to/.ssh/id_rsa</code>: Path to the actual SSH private key file (usually located in <code>~/.ssh/id_rsa</code>).</p>
</li>
</ul>
<p><code>--type=kubernetes.io/ssh-auth</code>:</p>
<ul>
<li>This specifies that the Secret is of type <code>kubernetes.io/ssh-auth</code>, which is used for SSH authentication.</li>
</ul>
</blockquote>
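<p>If you do not have a key at <code>~/.ssh/id_rsa</code>, a throwaway key pair can be generated for testing; the file name <code>id_rsa_test</code> below is only an example:</p>

```shell
# Generate a throwaway RSA key pair with no passphrase (for testing only)
ssh-keygen -t rsa -b 2048 -N "" -f ./id_rsa_test -q

# The private key file can then be passed to kubectl, e.g.:
# kubectl create secret generic my-ssh-key-secret \
#   --from-file=ssh-privatekey=./id_rsa_test \
#   --type=kubernetes.io/ssh-auth
```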
<h4 id="heading-managing-ssltls-certificates-with-kubernetes-tls-secrets"><strong>Managing SSL/TLS Certificates with Kubernetes TLS Secrets</strong></h4>
<pre><code class="lang-bash">kubectl create secret tls my-tls-secret \
--cert=path/to/cert/file \
--key=path/to/key/file
</code></pre>
<blockquote>
<p><code>--cert=path/to/cert/file</code>:</p>
<ul>
<li>This specifies the path to the TLS certificate file. Replace <code>path/to/cert/file</code> with the actual path to your <code>.crt</code> (certificate) file.</li>
</ul>
<p><code>--key=path/to/key/file</code>:</p>
<ul>
<li>This specifies the path to the TLS private key file. Replace <code>path/to/key/file</code> with the actual path to your private key file (typically <code>.key</code>).</li>
</ul>
</blockquote>
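<p>For local testing, a self-signed certificate and key can be generated with <code>openssl</code>; the file names <code>tls.crt</code> and <code>tls.key</code> and the CN are placeholders:</p>

```shell
# Generate a self-signed certificate and key (testing only)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.crt -days 365 \
  -subj "/CN=example.local"

# Then create the TLS Secret from the generated files, e.g.:
# kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key
```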
<h4 id="heading-mysql-example-combining-configmap-and-secrets-in-kubernetes"><strong>MySQL Example: Combining ConfigMap and Secrets in Kubernetes</strong></h4>
<ul>
<li><p><strong>Create a ConfigMap</strong> <code>(myconfig.yaml)</code></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ConfigMap</span>
  <span class="hljs-attr">metadata:</span>
     <span class="hljs-attr">name:</span> <span class="hljs-string">bootcamp-configmap</span>
  <span class="hljs-attr">data:</span> 
     <span class="hljs-attr">username:</span> <span class="hljs-string">"praduman"</span>
     <span class="hljs-attr">database_name:</span> <span class="hljs-string">"my-db"</span>
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f myconfig.yaml
</code></pre>
</li>
<li><p><strong>Create Secrets</strong> <code>(mysecret.yaml)</code></p>
<pre><code class="lang-bash">  <span class="hljs-built_in">echo</span> -n <span class="hljs-string">'rootPass'</span> | base64
  <span class="hljs-built_in">echo</span> -n <span class="hljs-string">'userPass'</span> | base64
</code></pre>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Secret</span>
  <span class="hljs-attr">metadata:</span> 
     <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-root-pass</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">Opaque</span>
  <span class="hljs-attr">data:</span> 
     <span class="hljs-attr">password:</span> <span class="hljs-string">&lt;use-base64-encoded&gt;</span>
  <span class="hljs-string">---</span>
  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Secret</span>
  <span class="hljs-attr">metadata:</span> 
     <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-user-pass</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">Opaque</span>
  <span class="hljs-attr">data:</span> 
     <span class="hljs-attr">password:</span> <span class="hljs-string">&lt;use-base64-encoded&gt;</span>
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f mysecret.yaml
</code></pre>
</li>
<li><p><strong>Deployment YAML using both the</strong> <code>configMap</code> <strong>and</strong> <code>Secret</code> <strong>(deploy.yaml)</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">mysql</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">mysql</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">mysql</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">mysql:5.7</span>
          <span class="hljs-attr">env:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYSQL_ROOT_PASSWORD</span>
              <span class="hljs-attr">valueFrom:</span>
                <span class="hljs-attr">secretKeyRef:</span>
                  <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-root-pass</span>
                  <span class="hljs-attr">key:</span> <span class="hljs-string">password</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYSQL_USER</span>
              <span class="hljs-attr">valueFrom:</span>
                <span class="hljs-attr">configMapKeyRef:</span>
                  <span class="hljs-attr">name:</span> <span class="hljs-string">bootcamp-configmap</span>
                  <span class="hljs-attr">key:</span> <span class="hljs-string">username</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYSQL_PASSWORD</span>
              <span class="hljs-attr">valueFrom:</span>
                <span class="hljs-attr">secretKeyRef:</span>
                  <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-user-pass</span>
                  <span class="hljs-attr">key:</span> <span class="hljs-string">password</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYSQL_DATABASE</span>
              <span class="hljs-attr">valueFrom:</span>
                <span class="hljs-attr">configMapKeyRef:</span>
                  <span class="hljs-attr">name:</span> <span class="hljs-string">bootcamp-configmap</span>
                  <span class="hljs-attr">key:</span> <span class="hljs-string">database_name</span>
          <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">3306</span>
          <span class="hljs-attr">volumeMounts:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-storage</span>
              <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/var/lib/mysql</span>
        <span class="hljs-attr">volumes:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">mysql-storage</span>
            <span class="hljs-attr">emptyDir:</span> {}
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f deploy.yaml
</code></pre>
</li>
<li><p><strong>Access the MySQL pod</strong></p>
<pre><code class="lang-bash">  kubectl <span class="hljs-built_in">exec</span> -it &lt;pod-name&gt; -- mysql -u root -p
</code></pre>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<blockquote>
<p>In this article, we explored the essential concepts of Kubernetes ConfigMaps and Secrets, highlighting their importance in managing configuration data and sensitive information securely. ConfigMaps allow you to decouple configuration from application code, making updates seamless without redeploying the entire application. Secrets, on the other hand, provide a secure way to handle sensitive data like passwords and API keys, ensuring they are not exposed in your codebase.</p>
<p>We provided practical examples demonstrating how to use ConfigMaps and Secrets in various scenarios, such as passing database credentials to a MySQL Pod, managing environment-specific properties, and accessing configuration programmatically with Python. Additionally, we covered the different types of Secrets, including Opaque, TLS, and Docker Registry Secrets, and how to use them in Pods as environment variables or volume mounts.</p>
<p>By understanding and implementing these Kubernetes features, you can enhance the security and manageability of your applications, ensuring a more robust and flexible deployment process.</p>
</blockquote>
<hr />
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes-part-6]]></title><description><![CDATA[Ensuring Application Availability with ReplicaSets
It helps keep your application running by ensuring the correct number of pod replicas are always available, making it essential for scaling and reliability. ReplicaSets are often managed by Deploymen...]]></description><link>https://blogs.praduman.site/kubernetes-part-6</link><guid isPermaLink="true">https://blogs.praduman.site/kubernetes-part-6</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Mon, 16 Sep 2024 09:15:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726606382430/2632c03a-b786-42d0-9379-eaad92fb8b14.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-ensuring-application-availability-with-replicasets">Ensuring Application Availability with ReplicaSets</h3>
<p>A ReplicaSet keeps your application running by ensuring the correct number of pod replicas is always available, making it essential for scaling and reliability. ReplicaSets are often managed by <strong>Deployments</strong>, which add features such as rolling updates. When you create a Deployment, it automatically creates a ReplicaSet to manage the pod replicas.</p>
<blockquote>
<p><strong>Example:</strong></p>
</blockquote>
<p>If you set a ReplicaSet to have 3 pods, and one pod stops, the ReplicaSet will automatically create a new pod so that there are always 3 running.</p>
<ul>
<li><p><strong>Defining a ReplicaSet: YAML Example</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ReplicaSet</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-rs</span>
    <span class="hljs-attr">labels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:latest</span>
          <span class="hljs-attr">ports:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<p>  This YAML file tells Kubernetes to create 3 identical pods running the <code>nginx</code> web server. It will automatically make sure there are always 3 pods running, so if one stops, Kubernetes will start another. Each pod runs on port 80, and all are labeled with <code>app: nginx</code> for identification.</p>
</li>
<li><p><strong>Real-Time watch and monitor the status of your pods</strong></p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> pod -w
</code></pre>
</li>
<li><p><strong>To see the</strong> <code>ReplicaSet</code> <strong>present</strong></p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> rs
</code></pre>
</li>
<li><p><strong>To delete a ReplicaSet and all its pods</strong></p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">delete</span> rs &lt;rs-name&gt;
</code></pre>
</li>
</ul>
<h4 id="heading-understanding-propagation-policies-in-kubernetes"><strong>Understanding Propagation Policies in Kubernetes</strong></h4>
<p>A propagation policy controls how dependent resources are deleted, especially for related resources like ReplicaSets and their pods. It determines whether the pods created by a ReplicaSet are deleted immediately, deleted in the background, or left running when the ReplicaSet itself is removed.</p>
<h4 id="heading-types-of-propagation-policies">Types of Propagation Policies:</h4>
<ul>
<li><p><strong>Foreground</strong>: Pods are deleted first, then the ReplicaSet.</p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">delete</span> replicaset nginx-rs --cascade=foreground
</code></pre>
<blockquote>
<p><strong>Another way</strong>: using <code>kubectl proxy</code> with a <code>curl</code> command to delete a ReplicaSet using the Kubernetes API and specify a propagationPolicy</p>
<ul>
<li><strong>Start the Kubernetes Proxy</strong></li>
</ul>
<pre><code class="lang-basic">kubectl proxy --port=<span class="hljs-number">8080</span>
</code></pre>
<p>This command starts a local proxy that allows you to interact with the Kubernetes API at <code>localhost:8080</code> without needing to authenticate with tokens.</p>
<pre><code class="lang-basic">curl -X <span class="hljs-keyword">DELETE</span> <span class="hljs-comment">'http://localhost:8080/apis/apps/v1/namespaces/default/replicasets/nginx-rs' \</span>
     -d <span class="hljs-comment">'{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \</span>
     -H <span class="hljs-string">"Content-Type: application/json"</span>
</code></pre>
</blockquote>
</li>
<li><p><strong>Background</strong>: ReplicaSet is deleted first, pods are deleted later.</p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">delete</span> replicaset nginx-rs --cascade=background
</code></pre>
<blockquote>
<p><strong>Another way</strong>: using <code>kubectl proxy</code> with a <code>curl</code> command to delete a ReplicaSet using the Kubernetes API and specify a propagationPolicy</p>
<ul>
<li><strong>Start the Kubernetes Proxy</strong></li>
</ul>
<pre><code class="lang-basic">kubectl proxy --port=<span class="hljs-number">8080</span>
</code></pre>
<p>This command starts a local proxy that allows you to interact with the Kubernetes API at <code>localhost:8080</code> without needing to authenticate with tokens.</p>
<pre><code class="lang-basic">curl -X <span class="hljs-keyword">DELETE</span> <span class="hljs-comment">'http://localhost:8080/apis/apps/v1/namespaces/default/replicasets/nginx-rs' \</span>
     -d <span class="hljs-comment">'{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \</span>
     -H <span class="hljs-string">"Content-Type: application/json"</span>
</code></pre>
</blockquote>
</li>
<li><p><strong>Orphan</strong>: ReplicaSet is deleted, but the pods remain running.</p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">delete</span> replicaset nginx-rs --cascade=orphan
</code></pre>
<blockquote>
<p><strong>Another way</strong>: using <code>kubectl proxy</code> with a <code>curl</code> command to delete a ReplicaSet using the Kubernetes API and specify a propagationPolicy</p>
<ul>
<li><strong>Start the Kubernetes Proxy</strong></li>
</ul>
<pre><code class="lang-basic">kubectl proxy --port=<span class="hljs-number">8080</span>
</code></pre>
<p>This command starts a local proxy that allows you to interact with the Kubernetes API at <code>localhost:8080</code> without needing to authenticate with tokens.</p>
<pre><code class="lang-basic">curl -X <span class="hljs-keyword">DELETE</span> <span class="hljs-comment">'http://localhost:8080/apis/apps/v1/namespaces/default/replicasets/nginx-rs' \</span>
     -d <span class="hljs-comment">'{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \</span>
     -H <span class="hljs-string">"Content-Type: application/json"</span>
</code></pre>
</blockquote>
</li>
</ul>
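<p>The JSON body passed to <code>curl</code> in each case differs only in the <code>propagationPolicy</code> field. As a small illustration, a hypothetical shell helper (not part of kubectl) could generate it for any of the three policies:</p>

```shell
# Build the DeleteOptions JSON body for a given propagation policy
# (illustrative helper; not part of kubectl)
delete_options() {
  case "$1" in
    Foreground|Background|Orphan) ;;
    *) echo "unknown policy: $1" >&2; return 1 ;;
  esac
  printf '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"%s"}' "$1"
}

delete_options Orphan   # {"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}
```

<p>It could then be used in the curl command as <code>-d "$(delete_options Background)"</code>.</p>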
<h3 id="heading-mastering-kubernetes-deployments">Mastering Kubernetes Deployments</h3>
<p>A Deployment manages and automates the lifecycle of applications running in pods. It provides more advanced features than a ReplicaSet, such as rolling updates, rollback capabilities, and scaling.</p>
<ul>
<li><p><strong>Creating and Managing Deployments with YAML</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-deployment</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:1.16.1</span>
          <span class="hljs-attr">ports:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<pre><code class="lang-basic">  kubectl apply -f deployment.yaml
</code></pre>
</li>
<li><p><strong>Get Deployment Status</strong></p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> deployments
</code></pre>
</li>
<li><p><strong>Scaling Deployments for Optimal Performance</strong></p>
<pre><code class="lang-basic">  kubectl scale deployment nginx-deployment --replicas=<span class="hljs-number">5</span>
</code></pre>
</li>
<li><p><strong>Updating Deployment Images</strong></p>
<pre><code class="lang-basic">  kubectl set image deployment/nginx-deployment nginx=nginx:<span class="hljs-number">1.17.0</span>
</code></pre>
</li>
<li><p><strong>Checking Deployment Status</strong></p>
<pre><code class="lang-basic">  kubectl rollout status deployment/nginx-deployment
</code></pre>
</li>
<li><p><strong>Checking Rollout History</strong></p>
<pre><code class="lang-basic">  kubectl rollout history deploy/nginx-deployment
</code></pre>
</li>
<li><p><strong>View Specific Revision</strong></p>
<pre><code class="lang-basic">  kubectl rollout history deploy/nginx-deployment --revision=<span class="hljs-number">2</span>
</code></pre>
</li>
<li><p><strong>Rolling Back to Previous Deployment Revisions</strong></p>
<pre><code class="lang-basic">  kubectl rollout undo deployment/nginx-deployment --<span class="hljs-keyword">to</span>-revision=<span class="hljs-number">1</span>
</code></pre>
</li>
<li><p><strong>Pause a Rollout</strong></p>
<pre><code class="lang-basic">  kubectl rollout pause deployment/nginx-deployment
</code></pre>
</li>
<li><p><strong>Edit Deployment Directly</strong></p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">edit</span> deployment/nginx-deployment
</code></pre>
</li>
</ul>
<h4 id="heading-recreate-strategy-minimizing-downtime-during-updates"><strong>Recreate Strategy: Replacing All Pods at Once</strong></h4>
<p>With this strategy, when a new version of an application is deployed, Kubernetes first <strong>deletes all existing pods</strong> before creating the new ones. This can cause some downtime because there will be a gap between the old pods being terminated and the new ones starting.</p>
<blockquote>
<p><strong>Example YAML for Recreate Strategy</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-deployment</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">strategy:</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">Recreate</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">demo</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">demo</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">demo</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:latest</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<pre><code class="lang-basic">kubectl apply -f &lt;recreate-yml-file&gt;
</code></pre>
<pre><code class="lang-basic">kubectl set image deploy/demo-deployment demo=nginx:<span class="hljs-number">1.14.0</span>
</code></pre>
</blockquote>
<h3 id="heading-probes-ensuring-pod-health-and-readiness">Probes: Ensuring Pod Health and Readiness</h3>
<p>Probes check the health of pods. They help Kubernetes know when a pod is ready to start serving traffic and whether it is still alive and functioning correctly. If a probe fails, Kubernetes can take action, such as restarting the pod.</p>
<h4 id="heading-types-of-probes-in-kubernetes">Types of probes in Kubernetes</h4>
<ul>
<li><p><strong>Liveness Probe:</strong> Checks if the pod is alive. Restarts it if it’s stuck.</p>
</li>
<li><p><strong>Readiness Probe:</strong> Checks if the pod is ready to serve traffic. Stops sending traffic if it’s not ready.</p>
</li>
<li><p><strong>Startup Probe:</strong> Gives the pod time to fully start. Ensures it isn’t marked as failed too early.</p>
</li>
</ul>
<blockquote>
<h4 id="heading-implementing-probes-in-deployment-yaml">Implementing Probes in Deployment YAML</h4>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-deployment</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:latest</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
        <span class="hljs-attr">livenessProbe:</span>
          <span class="hljs-attr">httpGet:</span>
            <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
            <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
          <span class="hljs-attr">initialDelaySeconds:</span> <span class="hljs-number">15</span>
          <span class="hljs-attr">timeoutSeconds:</span> <span class="hljs-number">2</span>
          <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">5</span>
          <span class="hljs-attr">failureThreshold:</span> <span class="hljs-number">3</span>
        <span class="hljs-attr">readinessProbe:</span>
          <span class="hljs-attr">httpGet:</span>
            <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
            <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
          <span class="hljs-attr">initialDelaySeconds:</span> <span class="hljs-number">5</span>
          <span class="hljs-attr">timeoutSeconds:</span> <span class="hljs-number">2</span>
          <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">5</span>
          <span class="hljs-attr">successThreshold:</span> <span class="hljs-number">1</span>
          <span class="hljs-attr">failureThreshold:</span> <span class="hljs-number">3</span>
        <span class="hljs-attr">startupProbe:</span>
          <span class="hljs-attr">httpGet:</span>
            <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
            <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
          <span class="hljs-attr">initialDelaySeconds:</span> <span class="hljs-number">10</span>
          <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">5</span>
          <span class="hljs-attr">failureThreshold:</span> <span class="hljs-number">10</span>
</code></pre>
<pre><code class="lang-basic">kubectl apply -f &lt;yaml-file&gt;
</code></pre>
</blockquote>
<ul>
<li><p><strong>Liveness Probe</strong>: Checks if the container is still running. If it fails 3 times (every 5 seconds), the container is restarted.</p>
</li>
<li><p><strong>Readiness Probe</strong>: Checks if the container is ready to serve traffic. If it fails 3 times, the container won’t receive traffic until it passes.</p>
</li>
<li><p><strong>Startup Probe</strong>: Gives the container 10 chances (every 5 seconds) to start before other probes begin checking it.</p>
</li>
</ul>
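<p>The consecutive-failure counting implied by <code>failureThreshold</code> can be sketched in plain shell. This only models the counting behavior, not the kubelet's actual implementation:</p>

```shell
# Model a liveness probe: restart only after failureThreshold consecutive failures
failure_threshold=3
failures=0
for probe_failed in 1 1 0; do        # pretend: fail, fail, then pass
  if [ "$probe_failed" -eq 1 ]; then
    failures=$((failures + 1))
    if [ "$failures" -ge "$failure_threshold" ]; then
      echo "restart container"
      break
    fi
  else
    failures=0                       # a single success resets the counter
  fi
done
echo "consecutive failures: $failures"   # consecutive failures: 0
```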
<h3 id="heading-blue-green-deployment-minimizing-downtime-and-risk">Blue-Green Deployment: Minimizing Downtime and Risk</h3>
<ul>
<li><p>A strategy for deploying applications that minimizes downtime and risk.</p>
</li>
<li><p>It involves running two environments: <code>Blue</code> and <code>Green</code></p>
</li>
<li><p><code>Blue</code> represents the current version of the application that's live and handling user traffic.</p>
</li>
<li><p><code>Green</code> is a new version of the application that is being prepared for release.</p>
</li>
</ul>
<p><strong>Example:</strong> You have an application running with version 1 (Blue) and want to deploy version 2 (Green)</p>
<blockquote>
<ul>
<li><p>Current Deployment (Blue) : This is the live version</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-blue</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">blue</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
          <span class="hljs-attr">version:</span> <span class="hljs-string">blue</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">myapp</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">myapp:1.0</span>
          <span class="hljs-attr">ports:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
</li>
<li><p><strong>New Deployment (Green)</strong>: This is the new version</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-green</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">green</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
          <span class="hljs-attr">version:</span> <span class="hljs-string">green</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">myapp</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">myapp:2.0</span>
          <span class="hljs-attr">ports:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<pre><code class="lang-basic">  kubectl apply -f myapp-green.yaml
</code></pre>
</li>
<li><p><strong>Service</strong>: The service will route traffic to the live version (initially Blue). Later, it will be updated to point to the Green version.</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-service</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
      <span class="hljs-attr">version:</span> <span class="hljs-string">blue</span> <span class="hljs-comment"># Initially points to Blue</span>
    <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">LoadBalancer</span>
</code></pre>
</li>
<li><p><strong>Switch Traffic to Green</strong>: Update the service to point to the Green version</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-service</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
      <span class="hljs-attr">version:</span> <span class="hljs-string">green</span> <span class="hljs-comment"># Now points to Green</span>
    <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">LoadBalancer</span>
</code></pre>
<pre><code class="lang-basic">  kubectl apply -f myapp-service.yaml
</code></pre>
</li>
</ul>
<p>To roll back to Blue, just update the service selector back to the Blue version.</p>
</blockquote>
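<p>Instead of editing the YAML file, the switch (or rollback) can also be done in one command with <code>kubectl patch</code> (a sketch, assuming the <code>myapp-service</code> manifest above):</p>
<pre><code class="lang-basic">kubectl patch service myapp-service -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'
</code></pre>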
<h3 id="heading-canary-deployment-gradual-and-safe-application-updates">Canary Deployment: Gradual and Safe Application Updates</h3>
<p>A strategy where a new version of an application is deployed to a small portion of users (e.g., 10%), while the rest continue using the old version. This lets you test the new version in production with minimal risk. If it works well, you gradually increase its share of traffic (e.g., 50%, then 100%); if the canary version causes issues, you can quickly roll back and send all traffic back to the old version.</p>
<p><strong>Example :</strong> Let's say you have version 1 of your app running and you want to canary release version 2</p>
<blockquote>
<ul>
<li><p>Existing Deployment (Version 1)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-v1</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">4</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">v1</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
          <span class="hljs-attr">version:</span> <span class="hljs-string">v1</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">myapp</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">myapp:1.0</span>
          <span class="hljs-attr">ports:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
</li>
<li><p><strong>Canary Deployment (Version 2)</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-canary</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span> <span class="hljs-comment"># Canary version has only 1 replica</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">canary</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
          <span class="hljs-attr">version:</span> <span class="hljs-string">canary</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">myapp</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">myapp:2.0</span>
          <span class="hljs-attr">ports:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<pre><code class="lang-basic">  kubectl apply -f myapp-canary.yaml
</code></pre>
</li>
<li><p><strong>Service</strong>: The Service below selects only version 1 (<code>v1</code>), so initially all traffic goes to the stable release. It will be updated later to include the canary as well.</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-service</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
      <span class="hljs-attr">version:</span> <span class="hljs-string">v1</span>
    <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">LoadBalancer</span>
</code></pre>
</li>
<li><p><strong>Update Service</strong>: Drop the <code>version</code> label from the selector so the Service matches both deployments; traffic is then split roughly in proportion to replica counts (here about 4:1 in favor of v1)</p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">myapp-service</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">myapp</span>
    <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
      <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">type:</span> <span class="hljs-string">LoadBalancer</span>
</code></pre>
<blockquote>
<p>A plain Service splits traffic only roughly, in proportion to replica counts. For precise traffic percentages independent of replica counts, use weighted routing in an Ingress controller or manage the split through a service mesh.</p>
</blockquote>
</li>
<li><p><strong>Monitor Canary</strong>: Check the performance and stability of the Canary version. If there are no issues, you can gradually increase the traffic directed to Canary.</p>
</li>
<li><p><strong>Full Rollout</strong>: Once confident in the Canary’s stability, update the service selector to point to the new version and increase the replicas of the Canary deployment while reducing the replicas of the Stable deployment.</p>
</li>
<li><p><strong>Cleanup</strong>: Remove the old stable deployment after the Canary version is fully rolled out and stable.</p>
</li>
</ul>
</blockquote>
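<p>As a concrete illustration of weighted routing, the NGINX Ingress Controller (an assumption &mdash; your cluster may use a different controller) supports canary annotations that send a fixed percentage of traffic to a second Ingress; the host and Service names here are hypothetical:</p>
<pre><code class="lang-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"  # ~10% of traffic goes to the canary
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-canary-service  # hypothetical Service selecting version: canary
            port:
              number: 80
</code></pre>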
<p><strong>Canary deployments help ensure that new releases are stable and function correctly with minimal risk.</strong></p>
<h3 id="heading-conclusion">Conclusion</h3>
<blockquote>
<p>In this article, we explored various aspects of Kubernetes, including ReplicaSets, Deployments, Probes, and deployment strategies like Blue-Green and Canary deployments. Understanding these concepts is crucial for managing and scaling applications efficiently in a Kubernetes environment. By leveraging these tools and strategies, you can ensure high availability, reliability, and seamless updates for your applications, ultimately enhancing your overall DevOps practices.</p>
</blockquote>
<hr />
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[kubernetes-part-5]]></title><description><![CDATA[Efficient Kubernetes Scheduling with Kube-Scheduler
The kube-scheduler is Kubernetes default scheduler and part of the control plane. It selects the best node for new or unscheduled pods by filtering out nodes that don't meet the pod's requirements, ...]]></description><link>https://blogs.praduman.site/kubernetes-part-5</link><guid isPermaLink="true">https://blogs.praduman.site/kubernetes-part-5</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Sun, 15 Sep 2024 20:31:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726606356399/38005f35-6266-4c23-8e45-f03d46881336.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-efficient-kubernetes-scheduling-with-kube-scheduler">Efficient Kubernetes Scheduling with Kube-Scheduler</h3>
<p>The kube-scheduler is Kubernetes' default scheduler and part of the control plane. It selects the best node for new or unscheduled pods by filtering out nodes that don't meet the pod's requirements; the nodes that remain are called feasible nodes. If no nodes are feasible, the pod stays unscheduled until one becomes available. The scheduler then scores the feasible nodes and selects the highest-scoring one to run the pod. This decision is communicated to the API server in a process called binding. Custom schedulers can also be created if needed.</p>
<h3 id="heading-mastering-labels-and-selectors-for-resource-grouping">Mastering Labels and Selectors for Resource Grouping</h3>
<p>Labels and Selectors are the standard way to group things together. Labels are key-value properties attached to each resource; Selectors let you filter resources by those labels. Kubernetes uses labels and selectors to connect different objects to each other.</p>
<ul>
<li><p><strong>A pod definition file with labels</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
  <span class="hljs-attr">metadata:</span>
   <span class="hljs-attr">name:</span> <span class="hljs-string">simple-webapp</span>
   <span class="hljs-attr">labels:</span>
     <span class="hljs-attr">app:</span> <span class="hljs-string">App1</span>
     <span class="hljs-attr">function:</span> <span class="hljs-string">Front-end</span>
  <span class="hljs-attr">spec:</span>
   <span class="hljs-attr">containers:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">simple-webapp</span>
     <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
     <span class="hljs-attr">ports:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">8080</span>
</code></pre>
<pre><code class="lang-basic">  kubectl apply -f &lt;pod-definition-file&gt;
</code></pre>
</li>
<li><p><strong>To select the pod with labels</strong></p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> pods --selector app=App1
</code></pre>
</li>
<li><p><strong>To see the labels of a pod</strong></p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> pods &lt;pod-<span class="hljs-keyword">name</span>&gt; --show-labels
</code></pre>
</li>
<li><p><strong>To label a pod once it is created</strong></p>
<pre><code class="lang-basic">  kubectl label pod &lt;pod-<span class="hljs-keyword">name</span>&gt; <span class="hljs-keyword">key</span>=value
</code></pre>
</li>
<li><p><strong>To see pods with/without specific label</strong></p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> pod -l &lt;<span class="hljs-keyword">key</span>&gt;=&lt;value&gt;
</code></pre>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> pod -l &lt;<span class="hljs-keyword">key</span>&gt;!=&lt;value&gt;
</code></pre>
</li>
<li><p><strong>Imperative way to create a deployment</strong></p>
<pre><code class="lang-basic">  kubectl create deploy bootcamp --image=nginx --replicas <span class="hljs-number">3</span>
</code></pre>
</li>
<li><p><strong>Selecting pods using labels in a ReplicaSet</strong></p>
<pre><code class="lang-yaml">   <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
   <span class="hljs-attr">kind:</span> <span class="hljs-string">ReplicaSet</span>
   <span class="hljs-attr">metadata:</span>
     <span class="hljs-attr">name:</span> <span class="hljs-string">simple-webapp</span>
     <span class="hljs-attr">labels:</span>
       <span class="hljs-attr">app:</span> <span class="hljs-string">App1</span>
       <span class="hljs-attr">function:</span> <span class="hljs-string">Front-end</span>
   <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">replicas:</span> <span class="hljs-number">3</span>
    <span class="hljs-attr">selector:</span>
      <span class="hljs-attr">matchLabels:</span>
       <span class="hljs-attr">app:</span> <span class="hljs-string">App1</span>
    <span class="hljs-attr">template:</span>
      <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">App1</span>
          <span class="hljs-attr">function:</span> <span class="hljs-string">Front-end</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">simple-webapp</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">simple-webapp</span>
</code></pre>
</li>
<li><p><strong>Selecting pods using labels in a Service</strong></p>
<pre><code class="lang-yaml">    <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
    <span class="hljs-attr">metadata:</span>
     <span class="hljs-attr">name:</span> <span class="hljs-string">my-service</span>
    <span class="hljs-attr">spec:</span>
     <span class="hljs-attr">selector:</span>
       <span class="hljs-attr">app:</span> <span class="hljs-string">App1</span>
     <span class="hljs-attr">ports:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
       <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
       <span class="hljs-attr">targetPort:</span> <span class="hljs-number">9376</span>
</code></pre>
</li>
</ul>
<h3 id="heading-understanding-kubernetes-namespaces-for-resource-isolation">Understanding Kubernetes Namespaces for Resource Isolation</h3>
<p>Namespaces are a way to divide cluster resources between multiple users, teams, or projects. They provide a scope for names, allowing multiple users to share the same cluster without interfering with each other.</p>
<p><strong>Kubernetes starts with four initial namespaces:</strong></p>
<ol>
<li><p><strong>default</strong>: A space where you start working in Kubernetes right away</p>
</li>
<li><p><strong>kube-node-lease</strong>: Keeps track of whether each node in the system is healthy or not</p>
</li>
<li><p><strong>kube-public</strong>: A place where anyone, even without special access, can see certain information</p>
</li>
<li><p><strong>kube-system</strong>: Where Kubernetes itself stores its important stuff</p>
</li>
</ol>
<ul>
<li><p><strong>To see all namespaces</strong></p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> namespace
</code></pre>
</li>
<li><p><strong>To create a namespace</strong></p>
<pre><code class="lang-basic">  kubectl create ns &lt;<span class="hljs-keyword">name</span>-of-namespace&gt;
</code></pre>
</li>
</ul>
<blockquote>
<p>Avoid creating namespaces with the prefix <code>kube-</code> since it is reserved for Kubernetes system namespaces</p>
</blockquote>
<ul>
<li><p><strong>Create two namespaces, create a deployment in each with the nginx image, and try to access the deployment in one namespace from the other</strong></p>
<pre><code class="lang-basic">  kubectl create ns frontend
</code></pre>
<pre><code class="lang-basic">  kubectl create ns backend
</code></pre>
<pre><code class="lang-basic">  kubectl create deploy demo -n frontend --image=nginx
</code></pre>
<pre><code class="lang-basic">  kubectl create deploy demo -n backend --image=nginx
</code></pre>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> deployment -n frontend -owide
</code></pre>
</li>
<li><p><strong>To switch to a namespace</strong></p>
<pre><code class="lang-basic">  kubectl config set-context --current --namespace=backend
</code></pre>
</li>
</ul>
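<p>To actually reach a deployment in another namespace, expose it with a Service and use its cluster DNS name, which includes the namespace (a sketch; the <code>demo</code> deployment and <code>frontend</code> namespace names are assumptions):</p>
<pre><code class="lang-basic">kubectl expose deployment demo -n frontend --port=80
# from a pod in any other namespace:
curl http://demo.frontend.svc.cluster.local
</code></pre>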
<h3 id="heading-managing-resource-quotas-in-kubernetes">Managing Resource Quotas in Kubernetes</h3>
<p>A ResourceQuota is a way to manage and limit resource usage in a specific namespace. It ensures that no single team, application, or user can consume too many resources (like CPU, memory, storage, or the number of objects) in a shared cluster. This prevents any one team or application from using all the resources in the cluster and helps manage resources efficiently.</p>
<h4 id="heading-how-resource-quota-works"><strong>How Resource Quota Works</strong></h4>
<p>Administrators set quotas at the <strong>namespace</strong> level, and Kubernetes enforces them. When a Pod or object (like a Persistent Volume or Service) is created, Kubernetes checks the quota and ensures that the creation doesn't exceed the defined limits</p>
<ul>
<li><p><strong>Create a namespace</strong></p>
<pre><code class="lang-basic">  kubectl create ns example-namespace
</code></pre>
</li>
<li><p><strong>Create a resourceQuota</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ResourceQuota</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">example-quota</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">example-namespace</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">hard:</span>
      <span class="hljs-attr">requests.cpu:</span> <span class="hljs-string">"500m"</span> <span class="hljs-comment"># total amount of CPU that can be requested</span>
      <span class="hljs-attr">requests.memory:</span> <span class="hljs-string">"200Gi"</span> <span class="hljs-comment"># total amount of memory that can be requested</span>
      <span class="hljs-attr">limits.cpu:</span> <span class="hljs-string">"1"</span> <span class="hljs-comment"># total amount of CPU limit across all pods</span>
      <span class="hljs-attr">limits.memory:</span> <span class="hljs-string">400Gi</span> <span class="hljs-comment"># total amount of memory limit across all pods</span>
      <span class="hljs-attr">pods:</span> <span class="hljs-string">"10"</span> <span class="hljs-comment"># total number of pods that can be created</span>
</code></pre>
<pre><code class="lang-basic">  kubectl apply -f &lt;RQ-<span class="hljs-keyword">name</span>.yaml&gt;
</code></pre>
</li>
<li><p><strong>To check the ResourceQuota</strong></p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> resourceQuota
</code></pre>
</li>
</ul>
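<p>To see how much of the quota is already consumed versus the hard limits, describe the quota object:</p>
<pre><code class="lang-basic">kubectl describe resourcequota example-quota -n example-namespace
</code></pre>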
<h3 id="heading-ensuring-pod-scheduling-readiness">Ensuring Pod Scheduling Readiness</h3>
<p>Pod Scheduling Readiness is a feature that lets you control when a Pod is ready to be placed on a node. It adds a delay in scheduling the Pod until certain conditions are met, making sure the Pod is fully prepared. This is helpful when the Pod depends on other services or resources to be available before it can start running smoothly</p>
<p><img src="https://kubernetes.io/docs/images/podSchedulingGates.svg" alt="pod-scheduling-gates-diagram" /></p>
<p><strong>Example:</strong> Waiting for a Service to Be Ready</p>
<ul>
<li><p><strong>Pod definition file that includes custom scheduling gates</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">test-pod</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">schedulingGates:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">example.com/service-ready</span>
    <span class="hljs-attr">containers:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">pause</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">registry.k8s.io/pause:3.6</span>
</code></pre>
</li>
<li><p><strong>Implement a Custom Controller</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-comment"># Pseudo-code for the custom controller</span>
  <span class="hljs-attr">if service_is_ready:</span>
    <span class="hljs-string">remove_scheduling_gate(pod_name="test-pod",</span> <span class="hljs-string">gate_name="example.com/service-ready")</span>
</code></pre>
</li>
<li><h4 id="heading-use-the-custom-controller-to-manage-scheduling"><strong>Use the Custom Controller to Manage Scheduling</strong></h4>
<p>  The custom controller will ensure that the Pod is only scheduled when the specified service is confirmed to be ready. Until then, the Pod will remain unscheduled.</p>
</li>
</ul>
<blockquote>
<p><strong>This approach ensures that the Pod will only be scheduled when the necessary conditions are met, providing a more controlled scheduling process</strong></p>
</blockquote>
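<p>Outside of a custom controller, a scheduling gate can also be removed by hand with a JSON patch once the dependency is ready (index <code>0</code> refers to the first gate in the list):</p>
<pre><code class="lang-basic">kubectl patch pod test-pod --type=json -p='[{"op": "remove", "path": "/spec/schedulingGates/0"}]'
</code></pre>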
<h3 id="heading-enhancing-availability-with-pod-topology-spread-constraints">Enhancing Availability with Pod Topology Spread Constraints</h3>
<p>Pod Topology Spread Constraints help ensure high availability by spreading Pods across different nodes, zones, or regions. They also maintain a balanced load by distributing Pods evenly, preventing any single node or domain from being overloaded, and they improve fault tolerance by avoiding concentration in a single location.</p>
<p><strong>Example</strong></p>
<blockquote>
<p>Kubernetes Deployment using Pod Topology Spread Constraints to evenly distribute Pods across different nodes</p>
</blockquote>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-app</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">4</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">demo-app</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">demo-app</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">app-container</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
      <span class="hljs-attr">topologySpreadConstraints:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">maxSkew:</span> <span class="hljs-number">1</span>
        <span class="hljs-attr">topologyKey:</span> <span class="hljs-string">"kubernetes.io/hostname"</span>
        <span class="hljs-attr">whenUnsatisfiable:</span> <span class="hljs-string">DoNotSchedule</span>
        <span class="hljs-attr">labelSelector:</span>
          <span class="hljs-attr">matchLabels:</span>
            <span class="hljs-attr">app:</span> <span class="hljs-string">demo-app</span>
</code></pre>
<ul>
<li><p><strong>maxSkew</strong>: the maximum allowed difference in the number of matching Pods between any two topology domains</p>
</li>
<li><p><strong>topologyKey</strong>: the node label key that defines a topology domain (here, one domain per node)</p>
</li>
<li><p><strong>whenUnsatisfiable</strong>: what to do if the constraint cannot be met, <code>DoNotSchedule</code> or <code>ScheduleAnyway</code></p>
</li>
<li><p><strong>labelSelector</strong>: selects the matching Pods counted when computing the skew</p>
</li>
</ul>
<blockquote>
<p>To scale this deployment</p>
<pre><code class="lang-basic">kubectl scale deployment demo-app --replicas=<span class="hljs-number">6</span>
</code></pre>
</blockquote>
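<p>To verify the spread, list the Pods together with the node each one landed on; with <code>maxSkew: 1</code> the per-node counts should differ by at most one:</p>
<pre><code class="lang-basic">kubectl get pods -l app=demo-app -o wide
</code></pre>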
<p>When you need to prevent new workloads from being scheduled on a node due to issues with that node, you can <code>cordon</code> the node. Cordon marks the node as unschedulable, ensuring that no new pods are scheduled to it.</p>
<pre><code class="lang-basic">kubectl cordon &lt;node-<span class="hljs-keyword">name</span>&gt;
</code></pre>
<p>To make the node schedulable again, uncordon it</p>
<pre><code class="lang-basic">kubectl uncordon &lt;node-<span class="hljs-keyword">name</span>&gt;
</code></pre>
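<p>Cordoning only blocks new scheduling; Pods already on the node keep running. To also evict them (e.g., before node maintenance), drain the node:</p>
<pre><code class="lang-basic">kubectl drain &lt;node-name&gt; --ignore-daemonsets
</code></pre>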
<h3 id="heading-prioritizing-pods-with-priority-classes">Prioritizing Pods with Priority Classes</h3>
<p>A PriorityClass helps the Kubernetes scheduler decide which Pods should be scheduled first when resources are limited. Pods with higher priority values are scheduled before those with lower values, and higher-priority Pods can preempt (evict) lower-priority Pods when resources are scarce.</p>
<p><strong>Example of priorityClass</strong></p>
<blockquote>
<p>Here’s how you can create a PriorityClass and use it with a Pod</p>
</blockquote>
<ul>
<li><p><strong>Define a PriorityClass</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">scheduling.k8s.io/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">PriorityClass</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">demo-priority</span>
  <span class="hljs-attr">value:</span> <span class="hljs-number">1000000</span>
  <span class="hljs-attr">globalDefault:</span> <span class="hljs-literal">false</span>
  <span class="hljs-attr">description:</span> <span class="hljs-string">"This priority class is for critical workloads."</span>
</code></pre>
</li>
<li><p><strong>Use the PriorityClass in a Pod:</strong></p>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">high-priority-pod</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">priorityClassName:</span> <span class="hljs-string">demo-priority</span>
    <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-container</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">my-image</span>
</code></pre>
</li>
</ul>
<h3 id="heading-understanding-and-managing-pod-overhead">Understanding and Managing Pod Overhead</h3>
<p><strong>Pod Overhead</strong> is the extra amount of resources a Pod needs beyond what its containers request. It accounts for the Pod's own management and networking needs, such as the pod sandbox. Accounting for Pod Overhead ensures that enough resources are reserved for the Pod as a whole, not just for the containers inside it.</p>
<h4 id="heading-how-it-works"><strong>How It Works</strong></h4>
<ul>
<li><p><strong><em>Container Requests</em>:</strong> You specify how much CPU and memory a container needs</p>
</li>
<li><p><strong><em>Add Overhead</em>:</strong> Kubernetes adds extra resources for the Pod’s management and networking</p>
</li>
<li><p><strong><em>Total Resources</em>:</strong> The total resources needed are the sum of container requests plus Pod Overhead</p>
</li>
</ul>
<blockquote>
<h4 id="heading-example-specify-pod-overhead-in-a-pods-resource-request"><strong>Example:</strong> Specify Pod Overhead in a Pod’s resource request</h4>
</blockquote>
<p><strong>Define a</strong> <code>Pod</code> with Overhead</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-pod</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-container</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">resources:</span>
      <span class="hljs-attr">requests:</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">"512Mi"</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">"500m"</span>
  <span class="hljs-attr">overhead:</span>
    <span class="hljs-attr">cpu:</span> <span class="hljs-string">"100m"</span>
    <span class="hljs-attr">memory:</span> <span class="hljs-string">"100Mi"</span>
</code></pre>
<p>In this example:</p>
<ul>
<li><p><strong>Container Requests:</strong> The container requests 512 Mi of memory and 500 milliCPU.</p>
</li>
<li><p><strong>Pod Overhead:</strong> An additional 100 Mi of memory and 100 milliCPU are reserved for the Pod’s overhead.</p>
</li>
</ul>
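<p>In practice you usually don't set <code>overhead</code> on each Pod by hand. Instead, it is defined once on a <code>RuntimeClass</code>, and Kubernetes copies it into the <code>spec.overhead</code> of every Pod that uses that runtime class. A minimal sketch (the class name is a placeholder):</p>
<pre><code class="lang-yaml">apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: demo-runtime
handler: runc
overhead:
  podFixed:
    cpu: "100m"
    memory: "100Mi"
</code></pre>
<p>A Pod then opts in by setting <code>runtimeClassName: demo-runtime</code> in its spec.</p>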
<h3 id="heading-conclusion">Conclusion</h3>
<blockquote>
<p>In this article, we delved into several advanced Kubernetes concepts, including scheduling, labels and selectors, namespaces, resource quotas, pod scheduling readiness, topology spread constraints, priority classes, and pod overhead. Mastering these features is essential for effectively managing and optimizing Kubernetes clusters. By utilizing these tools and techniques, you can ensure high availability, balanced resource usage, and controlled scheduling processes, ultimately leading to a more robust and resilient Kubernetes environment.</p>
</blockquote>
<hr />
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[kubernetes-part-4]]></title><description><![CDATA[Init container can be converted to sidecar container using restartpolicy: always

Example of a Sidecar Container
https://github.com/thockin/kubectl-sidecar/blob/main/example.yaml
 
# This is the identity the Pods will run as.
apiVersion: v1
kind: Ser...]]></description><link>https://blogs.praduman.site/kubernetes-part-4</link><guid isPermaLink="true">https://blogs.praduman.site/kubernetes-part-4</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Sat, 14 Sep 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726606322330/d9155a93-b05f-42d9-af20-d30a7da81b2b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p><strong>An init container can be converted into a sidecar container by setting</strong> <code>restartPolicy: Always</code> <strong>on it</strong></p>
</blockquote>
<h3 id="heading-example-of-a-sidecar-container">Example of a Sidecar Container</h3>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/thockin/kubectl-sidecar/blob/main/example.yaml">https://github.com/thockin/kubectl-sidecar/blob/main/example.yaml</a></div>
<pre><code class="lang-yaml"><span class="hljs-comment"># This is the identity the Pods will run as.</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ServiceAccount</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-meta">---</span>
<span class="hljs-comment"># This defines the namespace-scope permissions granted.</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Role</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">rules:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">apiGroups:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">''</span>
  <span class="hljs-attr">resources:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">pods</span>
  <span class="hljs-attr">verbs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">get</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">watch</span>
<span class="hljs-meta">---</span>
<span class="hljs-comment"># This joins the ServiceAccount to the Role above.</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">RoleBinding</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">roleRef:</span>
  <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">rbac.authorization.k8s.io</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">Role</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
<span class="hljs-attr">subjects:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">kind:</span> <span class="hljs-string">ServiceAccount</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
<span class="hljs-meta">---</span>
<span class="hljs-comment"># This defines the cluster-scope permissions granted.</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterRole</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
<span class="hljs-attr">rules:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">apiGroups:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">''</span>
  <span class="hljs-attr">resources:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">nodes</span>
  <span class="hljs-attr">verbs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">get</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">watch</span>
<span class="hljs-meta">---</span>
<span class="hljs-comment"># This joins the ServiceAccount to the ClusterRole above.</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterRoleBinding</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
<span class="hljs-attr">roleRef:</span>
  <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">rbac.authorization.k8s.io</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterRole</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
<span class="hljs-attr">subjects:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">kind:</span> <span class="hljs-string">ServiceAccount</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-meta">---</span>
<span class="hljs-comment"># This is the actual workload.</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">serviceAccountName:</span> <span class="hljs-string">demo-kubectl-sidecar</span>
      <span class="hljs-attr">securityContext:</span>
        <span class="hljs-comment"># Set this to any valid GID, and two things happen:</span>
        <span class="hljs-comment">#   1) The volume "content" is group-owned by this GID.</span>
        <span class="hljs-comment">#   2) This GID is added to each container.</span>
        <span class="hljs-attr">fsGroup:</span> <span class="hljs-number">9376</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">server</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
        <span class="hljs-attr">volumeMounts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/usr/share/nginx/html</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">content</span>
          <span class="hljs-attr">readOnly:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">initContainers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">sidecar</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">thockin/kubectl-sidecar:v1.30.0-1</span>
        <span class="hljs-attr">restartPolicy:</span> <span class="hljs-string">Always</span>
        <span class="hljs-attr">env:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYPOD</span>
            <span class="hljs-attr">valueFrom:</span>
              <span class="hljs-attr">fieldRef:</span>
                <span class="hljs-attr">fieldPath:</span> <span class="hljs-string">metadata.name</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYNS</span>
            <span class="hljs-attr">valueFrom:</span>
              <span class="hljs-attr">fieldRef:</span>
                <span class="hljs-attr">fieldPath:</span> <span class="hljs-string">metadata.namespace</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">MYNODE</span>
            <span class="hljs-attr">valueFrom:</span>
              <span class="hljs-attr">fieldRef:</span>
                <span class="hljs-attr">fieldPath:</span> <span class="hljs-string">spec.nodeName</span>
        <span class="hljs-attr">args:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">bash</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">-c</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">|
            while true; do
              kubectl -n $MYNS get pod $MYPOD -o json | jq '.status' &gt; /data/this-pod-status.json
              kubectl get node $MYNODE -o json | jq '.status' &gt; /data/this-node-status.json
              sleep 30
            done
</span>        <span class="hljs-attr">volumeMounts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">content</span>
          <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/data</span>
        <span class="hljs-attr">securityContext:</span>
          <span class="hljs-comment"># This doesn't need to run as root.</span>
          <span class="hljs-attr">runAsUser:</span> <span class="hljs-number">9376</span>
          <span class="hljs-attr">runAsGroup:</span> <span class="hljs-number">9376</span>
      <span class="hljs-attr">volumes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">content</span>
        <span class="hljs-attr">emptyDir:</span> {}
      <span class="hljs-attr">terminationGracePeriodSeconds:</span> <span class="hljs-number">5</span>
</code></pre>
<pre><code class="lang-bash">kubectl apply -f &lt;yaml-file-name&gt;
</code></pre>
<pre><code class="lang-bash">kubectl port-forward deployment/demo-kubectl-sidecar 8080:80
</code></pre>
<pre><code class="lang-bash">curl http://localhost:8080/this-pod-status.json
curl http://localhost:8080/this-node-status.json
</code></pre>
<hr />
<h3 id="heading-understanding-the-role-of-the-pause-container-in-kubernetes">Understanding the Role of the Pause Container in Kubernetes</h3>
<p>When a standalone container is restarted, it gets a new <code>IP</code> address. But when a container inside a Pod restarts, the <code>IP</code> of the Pod stays the same. This is possible because of the <code>pause container</code> in Kubernetes.</p>
<p>It holds the network namespace and IP address for the Pod, allowing the other containers within the Pod to communicate and share networking resources.</p>
<p>The pause container is created automatically by the container runtime (for example, containerd) when you start a Pod. It is not visible through kubectl, but you can see it using the <code>ctr</code> command.</p>
<h3 id="heading-how-to-view-pause-containers-and-pod-ips">How to View Pause Containers and Pod IPs</h3>
<ul>
<li><p><strong>To see all</strong> <code>pause containers</code> <strong>on the node</strong></p>
<pre><code class="lang-bash">  ctr --namespace k8s.io containers list | grep pause
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716239025136/14d15ac5-7bcf-4886-a2ed-d6eb56be0efc.png" alt /></p>
</li>
<li><p><strong>To check the</strong> <code>IP</code> <strong>of the specific pod</strong></p>
<pre><code class="lang-bash">  kubectl get pod &lt;pod-name&gt; -owide
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716296877434/e342ac21-22a8-41bb-9887-a24d6d3c7500.png" alt /></p>
</li>
</ul>
<h3 id="heading-kubernetes-user-namespaces">Kubernetes User Namespaces</h3>
<p>User namespaces enhance security by isolating user IDs (UIDs) and group IDs (GIDs) inside containers from the host system. This means that a root user inside a container doesn't have root privileges on the host, reducing the risk if a container is compromised.</p>
<p>This feature helps limit the damage from attacks by keeping container permissions separate from host permissions. User namespaces are now more stable and widely available, supporting both stateless and stateful pods.</p>
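<p>A Pod opts in to a user namespace by setting <code>hostUsers: false</code> in its spec. A minimal sketch, assuming the feature is enabled in your cluster and supported by the container runtime:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
</code></pre>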
<h3 id="heading-pod-disruptions">Pod Disruptions</h3>
<p>Pods do not disappear until someone (a person or a controller) destroys them, or there is an unavoidable hardware or system software error.</p>
<p><strong>Types: <em>Voluntary and involuntary disruptions</em></strong></p>
<ol>
<li><p><strong>Voluntary disruption</strong></p>
<ul>
<li><p>Deleting the deployment</p>
</li>
<li><p>Updating a deployment's pod template</p>
</li>
<li><p>Directly deleting a pod</p>
</li>
<li><p>Draining a node</p>
</li>
<li><p>Removing a pod from a node to allow something else to fit on that node</p>
</li>
</ul>
</li>
<li><p><strong>Involuntary disruption</strong></p>
<ul>
<li><p>Hardware failure</p>
</li>
<li><p>Cluster administrator deletes a VM (instance) by mistake</p>
</li>
<li><p>Cloud provider or hypervisor failure makes a VM disappear</p>
</li>
<li><p>Kernel panic</p>
</li>
<li><p>Node disappears from the cluster due to network partition</p>
</li>
<li><p>Eviction of a pod due to the node running out of resources</p>
</li>
</ul>
</li>
</ol>
<p><strong>Here are some ways to reduce involuntary disruptions:</strong></p>
<ul>
<li><p>Ensure your pod requests the resources it needs.</p>
</li>
<li><p>Replicate your application if you need higher availability.</p>
</li>
<li><p>For even higher availability with replicated applications, spread them across racks (using anti-affinity) or across zones (if using a multi-zone cluster).</p>
</li>
</ul>
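<p>For instance, spreading replicas across zones with anti-affinity can be sketched like this inside a Pod template (the <code>app: nginx</code> label and zone key are typical placeholder values):</p>
<pre><code class="lang-yaml">affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: nginx
      topologyKey: topology.kubernetes.io/zone
</code></pre>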
<h3 id="heading-pod-disruption-budgets-pdb"><strong>Pod disruption budgets (PDB)</strong></h3>
<p>A PodDisruptionBudget keeps your application running smoothly by limiting how many Pods can be disrupted at once, ensuring that a minimum number of Pods remain available during maintenance or updates. You create a PDB to specify how many Pods must stay up and running, and Kubernetes enforces these rules during voluntary disruptions, but not during involuntary ones like crashes.</p>
<blockquote>
<p>A PDB protects only against voluntary disruptions; use it when your application needs high availability during maintenance</p>
</blockquote>
<p><strong>Here is an example of a PDB configuration:</strong></p>
<ul>
<li><p>Create a deployment</p>
<pre><code class="lang-yaml">  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
    labels:
      app: nginx
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
          - containerPort: 80
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716322505137/d56f1817-7f13-47b7-adaa-eeb2a2d13f31.png" alt /></p>
</li>
<li><p>Create PDB</p>
<pre><code class="lang-yaml">  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    name: nginx-pdb
  spec:
    minAvailable: 2
    selector:
      matchLabels:
        app: nginx
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716322902965/8874cc3d-a49f-4734-876f-f41d7cec8ad8.png" alt /></p>
</li>
<li><p>Check available PDB</p>
<pre><code class="lang-bash">  kubectl get pdb
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716323039495/f9d95eec-e00d-4e70-90df-b6c7c006afe0.png" alt /></p>
</li>
<li><p>Do rolling updates (update nginx image)</p>
<pre><code class="lang-bash">  kubectl <span class="hljs-built_in">set</span> image deployment/nginx-deployment nginx=nginx:1.16.1
</code></pre>
</li>
<li><p>Watch the pods now</p>
<pre><code class="lang-bash">  kubectl get pods -w
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716381297401/4654f0cf-c85e-48c8-8d9d-c27a11ee5acb.png" alt /></p>
</li>
</ul>
<h3 id="heading-setting-resource-requests-and-limits-for-efficient-resource-management">Setting Resource Requests and Limits for Efficient Resource Management</h3>
<p>In Kubernetes, applications run as pods inside nodes. A node is a physical or virtual machine with specific configurations like CPU and memory. Each application consumes a certain amount of these resources.</p>
<p><em>Kubernetes uses the concepts of resource requests and limits to manage CPU and memory usage:</em></p>
<ul>
<li><p><strong>Resource Requests</strong>: The minimum amount of CPU and memory that a pod needs to run.</p>
</li>
<li><p><strong>Resource Limits</strong>: The maximum amount of CPU and memory a pod is allowed to use.</p>
</li>
</ul>
<p>These settings are crucial for scheduling because they help the scheduler find the best node to deploy a pod. For example, if a pod requires at least 2 CPUs and 1 GB of RAM, you set these values in the resource requests and limits. The scheduler then finds a suitable node that can accommodate these requirements.</p>
<p>When creating a highly available Kubernetes cluster, it's important to properly configure resource requests and limits to ensure efficient resource utilization and stability.</p>
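<p>For the example above, the container's <code>resources</code> section would look like this (a minimal sketch; the limits are set equal to the requests here, but they may be higher):</p>
<pre><code class="lang-yaml">resources:
  requests:
    cpu: "2"
    memory: "1Gi"
  limits:
    cpu: "2"
    memory: "1Gi"
</code></pre>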
<h4 id="heading-understanding-cpu-throttling-in-kubernetes">Understanding CPU Throttling in Kubernetes</h4>
<p>CPU throttling means limiting a pod's CPU usage when it tries to use more than its allowed amount. This prevents one pod from using too much CPU and affecting other pods on the same node. If a pod reaches its CPU limit, Kubernetes slows it down to stay within the limit, which can make the pod run slower. This helps ensure that resources are shared fairly and the cluster stays stable.</p>
<h4 id="heading-overutilization-and-underutilization-of-resources"><strong>Overutilization and Underutilization of Resources</strong></h4>
<ul>
<li><p><strong>Overutilization</strong> happens when a resource is used too much, exceeding its capacity. For example, if a CPU is consistently running at 100%, it can cause performance issues, slow down processes, or even crash the system because there's not enough capacity left for other tasks.</p>
</li>
<li><p><strong>Underutilization</strong> occurs when a resource is used too little, meaning it's not being fully used even though it's available. For instance, if a server has a lot of memory but is only using a small portion of it, the rest of the memory is underutilized, leading to inefficient use of resources and potentially higher costs without any benefit.</p>
</li>
</ul>
<blockquote>
<p><strong>We can define a LimitRange for a namespace to set default, minimum, and maximum resource values</strong></p>
</blockquote>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">LimitRange</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">example-limitrange</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">limits:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">Pod</span>
    <span class="hljs-attr">max:</span>
      <span class="hljs-attr">cpu:</span> <span class="hljs-string">"2"</span>
      <span class="hljs-attr">memory:</span> <span class="hljs-string">"1Gi"</span>
    <span class="hljs-attr">min:</span>
      <span class="hljs-attr">cpu:</span> <span class="hljs-string">"200m"</span>
      <span class="hljs-attr">memory:</span> <span class="hljs-string">"100Mi"</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">Container</span>
    <span class="hljs-attr">max:</span>
      <span class="hljs-attr">cpu:</span> <span class="hljs-string">"1"</span>
      <span class="hljs-attr">memory:</span> <span class="hljs-string">"500Mi"</span>
    <span class="hljs-attr">min:</span>
      <span class="hljs-attr">cpu:</span> <span class="hljs-string">"100m"</span>
      <span class="hljs-attr">memory:</span> <span class="hljs-string">"50Mi"</span>
    <span class="hljs-attr">default:</span>
      <span class="hljs-attr">cpu:</span> <span class="hljs-string">"300m"</span>
      <span class="hljs-attr">memory:</span> <span class="hljs-string">"200Mi"</span>
    <span class="hljs-attr">defaultRequest:</span>
      <span class="hljs-attr">cpu:</span> <span class="hljs-string">"200m"</span>
      <span class="hljs-attr">memory:</span> <span class="hljs-string">"100Mi"</span>
</code></pre>
<blockquote>
<p>Now you can create Pods without specifying requests and limits; the defaults from the LimitRange are applied to each Pod automatically</p>
</blockquote>
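<p>You can verify a LimitRange and the defaults it applies with:</p>
<pre><code class="lang-bash">kubectl describe limitrange example-limitrange -n default
</code></pre>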
<h3 id="heading-quality-of-service-qos">Quality of Service (QoS)</h3>
<p><strong>Quality of Service (QoS)</strong> in Kubernetes is a way to prioritize and manage resources for Pods to ensure they get the resources they need based on their importance and requirements. Kubernetes classifies Pods into different QoS tiers based on their resource requests and limits.</p>
<ul>
<li><p>Pods that have both CPU and memory requests and limits set, with the requests equal to the limits. These Pods get the highest priority for resources and are known as the <code>Guaranteed</code> QoS class.</p>
<blockquote>
<p>Guaranteed QoS Example:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-guaranteed</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">resources:</span>
      <span class="hljs-attr">requests:</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">"256Mi"</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">"500m"</span>
      <span class="hljs-attr">limits:</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">"256Mi"</span>
        <span class="hljs-attr">cpu:</span> <span class="hljs-string">"500m"</span>
</code></pre>
</blockquote>
</li>
<li><p>Pods that have CPU and memory requests and limits set, but where the requests and limits are not equal. These Pods get some level of resource assurance but can use more resources when available, and are known as the <code>Burstable</code> QoS class.</p>
<blockquote>
<p><strong>Burstable QoS Example:</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-burstable</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">resources:</span>
      <span class="hljs-attr">requests:</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">"256Mi"</span>
      <span class="hljs-attr">limits:</span>
        <span class="hljs-attr">memory:</span> <span class="hljs-string">"512Mi"</span>
</code></pre>
</blockquote>
</li>
<li><p>Pods that do not specify any CPU or memory requests or limits. These Pods get the lowest priority for resources and can be easily evicted when the node is under resource pressure; they are known as the <code>BestEffort</code> QoS class.</p>
<blockquote>
<p><strong>BestEffort QoS Example:</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-besteffort</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
</code></pre>
</blockquote>
<p>  <strong>When a node runs out of resources, Kubernetes evicts Pods based on their QoS class:</strong> <code>BestEffort</code> <strong>Pods are evicted first, then</strong> <code>Burstable</code><strong>, and</strong> <code>Guaranteed</code> <strong>Pods last</strong></p>
</li>
</ul>
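<p>Kubernetes records the assigned class in each Pod's status, so you can check it with:</p>
<pre><code class="lang-bash">kubectl get pod nginx-guaranteed -o jsonpath='{.status.qosClass}'
</code></pre>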
<h3 id="heading-understanding-kubernetes-downward-api">Understanding Kubernetes Downward API</h3>
<p><strong>Downward API</strong> is a useful feature that allows Pods to access information about themselves and the cluster environment. It helps applications running inside Pods to get metadata and resource details, which they might need to operate correctly or adjust their behavior.</p>
<p>The Downward API provides a way for Pods to obtain information about themselves, such as:</p>
<ul>
<li><p><strong>Pod Name:</strong> The name assigned to the Pod.</p>
</li>
<li><p><strong>Namespace:</strong> The namespace in which the Pod is running.</p>
</li>
<li><p><strong>Pod Labels and Annotations:</strong> Metadata attached to the Pod.</p>
</li>
<li><p><strong>Resource Limits and Requests:</strong> Information about how much CPU and memory the Pod has been allocated.</p>
</li>
</ul>
<h4 id="heading-how-to-use-the-downward-api">How to Use the Downward API</h4>
<p>The Downward API provides this information through environment variables or files within the Pod. Here’s a simple example of how to use it:</p>
<ol>
<li><p><strong>Using Environment Variables:</strong></p>
<p> You can define environment variables in your Pod spec to expose metadata from the Downward API:</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
 <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
 <span class="hljs-attr">metadata:</span>
   <span class="hljs-attr">name:</span> <span class="hljs-string">my-pod</span>
 <span class="hljs-attr">spec:</span>
   <span class="hljs-attr">containers:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-container</span>
     <span class="hljs-attr">image:</span> <span class="hljs-string">my-image</span>
     <span class="hljs-attr">env:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">POD_NAME</span>
       <span class="hljs-attr">valueFrom:</span>
         <span class="hljs-attr">fieldRef:</span>
           <span class="hljs-attr">fieldPath:</span> <span class="hljs-string">metadata.name</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">POD_NAMESPACE</span>
       <span class="hljs-attr">valueFrom:</span>
         <span class="hljs-attr">fieldRef:</span>
           <span class="hljs-attr">fieldPath:</span> <span class="hljs-string">metadata.namespace</span>
</code></pre>
<p> In this example:</p>
<ul>
<li><p>The <code>POD_NAME</code> environment variable will be set to the Pod's name.</p>
</li>
<li><p>The <code>POD_NAMESPACE</code> environment variable will be set to the Pod's namespace.</p>
</li>
</ul>
</li>
<li><p><strong>Using Volumes:</strong></p>
<p> You can also expose metadata through files in a volume:</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
 <span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
 <span class="hljs-attr">metadata:</span>
   <span class="hljs-attr">name:</span> <span class="hljs-string">my-pod</span>
 <span class="hljs-attr">spec:</span>
   <span class="hljs-attr">containers:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-container</span>
     <span class="hljs-attr">image:</span> <span class="hljs-string">my-image</span>
     <span class="hljs-attr">volumeMounts:</span>
     <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">pod-info</span>
       <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/etc/podinfo</span>
   <span class="hljs-attr">volumes:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">pod-info</span>
     <span class="hljs-attr">downwardAPI:</span>
       <span class="hljs-attr">items:</span>
       <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">"name"</span>
         <span class="hljs-attr">fieldRef:</span>
           <span class="hljs-attr">fieldPath:</span> <span class="hljs-string">metadata.name</span>
       <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">"namespace"</span>
         <span class="hljs-attr">fieldRef:</span>
           <span class="hljs-attr">fieldPath:</span> <span class="hljs-string">metadata.namespace</span>
</code></pre>
<p> In this example:</p>
<ul>
<li>The Pod’s name and namespace will be available as files in <code>/etc/podinfo</code> within the container.</li>
</ul>
</li>
</ol>
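<p>Besides Pod metadata, resource requests and limits can also be exposed through the Downward API by using <code>resourceFieldRef</code> in place of <code>fieldRef</code>. A sketch (the container name and image are placeholders):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: my-container   # which container's resources to read
          resource: limits.cpu
    - name: MEM_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: my-container
          resource: requests.memory
</code></pre>
<p>Inside the container, <code>CPU_LIMIT</code> and <code>MEM_REQUEST</code> will hold the allocated CPU limit and memory request, which applications can use to size thread pools or caches.</p>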
<h3 id="heading-conclusion">Conclusion</h3>
<blockquote>
<p>In conclusion, this article delves into essential Kubernetes concepts, including the transformation of init containers to sidecar containers, the critical role of the pause container in maintaining network namespaces, and the security benefits of user namespaces. It also covers the different types of pod disruptions and how to manage them using Pod Disruption Budgets (PDB). Additionally, the article explains the importance of setting resource requests and limits for efficient resource utilization and stability, and it categorizes the different Quality of Service (QoS) classes in Kubernetes. Practical examples and YAML configurations are provided to help illustrate these concepts, making it a comprehensive guide for managing Kubernetes environments effectively.</p>
</blockquote>
<hr />
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[kubernetes-part-3]]></title><description><![CDATA[Imperative vs. Declarative: Two Approaches to Kubernetes Infrastructure Management

Imperative: Directly command Kubernetes to perform actions step-by-step

To create a pod in imperative way we need to run a single command kubectl run my-pod —image=n...]]></description><link>https://blogs.praduman.site/kubernetes-part-3</link><guid isPermaLink="true">https://blogs.praduman.site/kubernetes-part-3</guid><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Fri, 13 Sep 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726606299887/8b444b8c-9cbb-486d-9ddd-ae55ed6c162c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-imperative-vs-declarative-two-approaches-to-kubernetes-infrastructure-management">Imperative vs. Declarative: Two Approaches to Kubernetes Infrastructure Management</h3>
<ol>
<li><p><strong>Imperative:</strong> Directly command Kubernetes to perform actions step-by-step</p>
<blockquote>
<p>To create a pod the imperative way, we run a single command: <code>kubectl run my-pod --image=nginx</code></p>
</blockquote>
</li>
<li><p><strong>Declarative:</strong> Define the desired state in a configuration file and let Kubernetes manage it</p>
<blockquote>
<p>To create a pod the declarative way, we first write a pod definition file (<code>a YAML file</code>)</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-container</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
</code></pre>
<p>then run the command <code>kubectl apply -f filename.yaml</code></p>
</blockquote>
</li>
</ol>
<h3 id="heading-understanding-yaml-the-backbone-of-kubernetes-configuration">Understanding YAML: The Backbone of Kubernetes Configuration</h3>
<ul>
<li><p><strong>YAML</strong> stands for "YAML Ain't Markup Language"</p>
</li>
<li><p>It is simple and comes in a human-readable format</p>
</li>
<li><p>Indentation matters a lot</p>
</li>
<li><p>It is used for configuring Kubernetes resources</p>
</li>
<li><p>It allows you to define and organize settings in a structured way</p>
</li>
<li><p>It makes it easy to create and manage complex configurations</p>
</li>
</ul>
<p><strong>YAML file to create a pod</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-pod</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-container</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
</code></pre>
<blockquote>
<p><strong><em>Command used to apply a YAML configuration :</em></strong></p>
<pre><code class="lang-basic">kubectl apply -f filename.yaml
</code></pre>
</blockquote>
<p>This command tells Kubernetes to create or update resources based on the specifications defined in the YAML file</p>
<h3 id="heading-key-concepts-in-yaml-objects-lists-comments-and-multi-line-strings">Key Concepts in YAML: Objects, Lists, Comments, and Multi-line Strings</h3>
<ol>
<li><p><strong>Objects:</strong> Collections of key-value pairs, similar to dictionaries in Python. They are defined using indentation.</p>
<p> <strong>Example:</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-attr">person:</span>
   <span class="hljs-attr">name:</span> <span class="hljs-string">Alice</span>
   <span class="hljs-attr">age:</span> <span class="hljs-number">30</span>
   <span class="hljs-attr">city:</span> <span class="hljs-string">New</span> <span class="hljs-string">York</span>
</code></pre>
<p> In this example, <code>person</code> is an object with three key-value pairs: <code>name</code>, <code>age</code>, and <code>city</code>.</p>
</li>
<li><p><strong>Lists:</strong> Sequences of items, where each item is prefixed by a hyphen (<code>-</code>) and space.</p>
<p> <strong>Example:</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-attr">fruits:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-string">Apple</span>
   <span class="hljs-bullet">-</span> <span class="hljs-string">Banana</span>
   <span class="hljs-bullet">-</span> <span class="hljs-string">Cherry</span>
</code></pre>
<p> Here, <code>fruits</code> is a list containing three items: <code>Apple</code>, <code>Banana</code>, and <code>Cherry</code>.</p>
</li>
<li><p><strong>Comments:</strong> YAML allows comments, which start with a <code>#</code>. They are ignored by the parser and are used for adding explanations.</p>
<p> <strong>Example:</strong></p>
<pre><code class="lang-yaml"> <span class="hljs-comment"># This is a comment</span>
 <span class="hljs-attr">server:</span>
   <span class="hljs-attr">host:</span> <span class="hljs-string">localhost</span>
   <span class="hljs-attr">port:</span> <span class="hljs-number">8080</span>
</code></pre>
</li>
<li><p><strong>Multi-line Strings:</strong> YAML provides ways to handle multi-line strings using <code>|</code> (literal block) or <code>&gt;</code> (folded block).</p>
<p> <strong>Literal Block (</strong><code>|</code>):</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">description:</span> <span class="hljs-string">|
   This is a multi-line
   string that preserves newlines.</span>
</code></pre>
<p> <strong>Folded Block (</strong><code>&gt;</code>):</p>
<pre><code class="lang-yaml"> <span class="hljs-attr">description:</span> <span class="hljs-string">&gt;
   This is a multi-line
   string that folds into a single line.</span>
</code></pre>
</li>
</ol>
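<p>All four concepts can appear together in a single YAML document. The following is a hypothetical configuration sketch (the keys and values are illustrative, not a Kubernetes resource):</p>
<pre><code class="lang-yaml"># Application settings (this comment is ignored by the parser)
app:                  # an object with key-value pairs
  name: demo-service
  replicas: 3
environments:         # a list: each item is prefixed with "- "
  - dev
  - test
  - prod
startup_script: |     # literal block: newlines are preserved
  echo "starting"
  echo "done"
summary: &gt;            # folded block: folded into a single line
  This description spans
  multiple source lines.
</code></pre>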
<h3 id="heading-pods-in-kubernetes-the-fundamental-unit-of-deployment">Pods in Kubernetes: The Fundamental Unit of Deployment</h3>
<blockquote>
<p>Applications run as Pods in Kubernetes</p>
</blockquote>
<ul>
<li><p>A Pod is the smallest deployable unit in Kubernetes</p>
</li>
<li><p>Containers run inside Pods</p>
</li>
<li><p>A Pod holds one or more containers that work together</p>
</li>
<li><p>Each container within a Pod must have a unique name</p>
</li>
<li><p>Each Pod gets its own IP address</p>
</li>
<li><p>Containers in the same Pod can communicate with each other using <code>localhost</code></p>
</li>
<li><p>Pods can have storage that containers in the Pod can use to share files</p>
</li>
</ul>
<p><strong>Example</strong>: YAML file to create a Pod</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-pod</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">my-container</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<p>This file creates a Pod called <code>my-pod</code> with a single container that runs the <code>nginx</code> web server.</p>
<blockquote>
<p>Creating same pod in imperative way</p>
<pre><code class="lang-basic">kubectl <span class="hljs-keyword">run</span> my-pod --image=nginx --port=<span class="hljs-number">80</span>
</code></pre>
</blockquote>
<ul>
<li><p>Command to list all the pods running in the cluster is <code>kubectl get pods</code></p>
</li>
<li><p>If you want more information about a specific pod, use <code>kubectl describe pod pod-name</code></p>
</li>
</ul>
<h3 id="heading-pod-lifecycle-from-creation-to-termination">Pod Lifecycle: From Creation to Termination</h3>
<p>It represents the different phases a pod goes through from creation to termination.</p>
<ol>
<li><p><strong>Pending</strong></p>
<ul>
<li>The pod is created but not yet running. It’s waiting for the scheduler to assign it to a node or for container images to be pulled</li>
</ul>
</li>
<li><p><strong>Running</strong></p>
<ul>
<li><p>The pod is assigned to a node, and at least one container is running. However, the container might face issues like:</p>
<ul>
<li><p><strong>CrashLoopBackOff</strong>: The container repeatedly crashes and restarts</p>
</li>
<li><p><strong>Error</strong>: The container terminates with an error</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Succeeded</strong></p>
<ul>
<li>All containers in the pod have finished successfully and won't restart</li>
</ul>
</li>
<li><p><strong>Failed</strong></p>
<ul>
<li>One or more containers in the pod have terminated with an error, and the pod is not being restarted.</li>
</ul>
</li>
</ol>
<blockquote>
<p><strong>Termination:</strong></p>
<p>When a pod is deleted, it enters the Terminating state, in which Kubernetes stops and removes its containers</p>
</blockquote>
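<p>The Succeeded and Failed phases are easiest to observe with a Pod whose <code>restartPolicy</code> is <code>Never</code>: if the container's command exits with status code 0, the Pod ends in Succeeded; a non-zero exit code leaves it in Failed. A minimal sketch:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  restartPolicy: Never   # do not restart the container after it exits
  containers:
  - name: task
    image: busybox
    command: ['sh', '-c', 'echo done; exit 0']   # change to a non-zero exit code to see the Failed phase
</code></pre>
<p>After applying this, <code>kubectl get pod lifecycle-demo</code> shows the phase move from Pending to Running and finally to Succeeded.</p>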
<h3 id="heading-namespaces-in-kubernetes-organizing-and-isolating-resources">Namespaces in Kubernetes: Organizing and Isolating Resources</h3>
<p><strong>Namespaces</strong> are used to organize and isolate resources within a cluster. They allow different teams, projects, or environments (like dev, test, and prod) to run without conflicts. Each namespace has its own resources, and you can apply resource limits and access controls per namespace. This makes it easier to manage and separate resources in a large Kubernetes cluster</p>
<p><strong>Key commands:</strong></p>
<ul>
<li><p>Create a namespace</p>
<pre><code class="lang-basic">  kubectl create namespace my-namespace
</code></pre>
</li>
<li><p>List resources in a specific namespace</p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> pods -n my-namespace
</code></pre>
</li>
<li><p>view all namespaces</p>
<pre><code class="lang-basic">  kubectl <span class="hljs-keyword">get</span> ns
</code></pre>
</li>
</ul>
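<p>Namespaces can also be created declaratively, which fits the YAML-driven workflow above. The following manifest, applied with <code>kubectl apply -f</code>, is equivalent to the create command, and a Pod can be placed in the namespace via <code>metadata.namespace</code>:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-namespace   # schedule this Pod into the namespace above
spec:
  containers:
  - name: my-container
    image: nginx
</code></pre>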
<h3 id="heading-init-containers-setting-up-your-pod-for-success">Init Containers: Setting Up Your Pod for Success</h3>
<p><strong>Init Container</strong> is like a helper that runs before the main part of your app starts in a pod. It’s used to perform setup tasks or ensure certain conditions are met before the main container runs</p>
<ul>
<li><p>Init containers always run before any other container in the pod</p>
</li>
<li><p>A pod can have one or more init containers</p>
</li>
<li><p>The main container will not start until the init containers complete successfully</p>
</li>
<li><p>If a pod’s init container fails, the kubelet repeatedly restarts that init container until it succeeds</p>
</li>
<li><p>If the pod has a <code>restartPolicy</code> of <code>Never</code> and an init container fails during startup of the pod, Kubernetes treats the overall pod as failed</p>
</li>
<li><p>Regular init containers do not support the <code>lifecycle</code>, <code>livenessProbe</code>, <code>readinessProbe</code>, or <code>startupProbe</code> fields</p>
</li>
</ul>
<blockquote>
<p><strong>Init container definition file</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">bootcamp-pod</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">volumes:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-data</span>
    <span class="hljs-attr">emptyDir:</span> {}
  <span class="hljs-attr">initContainers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">bootcamp-init</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
    <span class="hljs-attr">command:</span> [<span class="hljs-string">'sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'wget -O /usr/share/data/index.html http://kubesimplify.com'</span>]
    <span class="hljs-attr">volumeMounts:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-data</span>
      <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/usr/share/data</span>
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">volumeMounts:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-data</span>
      <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/usr/share/nginx/html</span>
</code></pre>
<p><strong>Multiple init containers definition file</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">init-demo-2</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">initContainers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">check-db-service</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
    <span class="hljs-attr">command:</span> [<span class="hljs-string">'sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'until nslookup db.default.svc.cluster.local; do echo waiting for db service; sleep 2; done;'</span>]
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">check-myservice</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
    <span class="hljs-attr">command:</span> [<span class="hljs-string">'sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'until nslookup myservice.default.svc.cluster.local; do echo waiting for myservice; sleep 2; done;'</span>]
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">main-container</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
    <span class="hljs-attr">command:</span> [<span class="hljs-string">'sleep'</span>, <span class="hljs-string">'3600'</span>]
</code></pre>
<p>Definition file of services for multiple init container</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">db</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">demo1</span>
  <span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">3306</span>
    <span class="hljs-attr">targetPort:</span> <span class="hljs-number">3306</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">myservice</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">demo2</span>
  <span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">targetPort:</span> <span class="hljs-number">80</span>
</code></pre>
</blockquote>
<h3 id="heading-sidecar-containers-enhancing-your-main-application">Sidecar Containers: Enhancing Your Main Application</h3>
<p><strong>Sidecar Container</strong> is an additional container that runs alongside the main container in a pod. It helps enhance or support the main container’s functionality</p>
<ul>
<li><p>The sidecar container runs in the same pod as the main container, sharing the same network and storage resources</p>
</li>
<li><p>It can act as a proxy server to handle communication between the main container and external services</p>
</li>
</ul>
<blockquote>
<p><strong>Example of multiple container with a sidecar container</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">multi-container-pod</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">volumes:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-data</span>
    <span class="hljs-attr">emptyDir:</span> {}
  <span class="hljs-attr">initContainers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">meminfo-container</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">alpine</span>
    <span class="hljs-attr">restartPolicy:</span> <span class="hljs-string">Always</span>
    <span class="hljs-attr">command:</span> [<span class="hljs-string">'sh'</span>, <span class="hljs-string">'-c'</span>, <span class="hljs-string">'sleep 5; while true; do cat /proc/meminfo &gt; /usr/share/data/index.html; sleep 10; done;'</span>]
    <span class="hljs-attr">volumeMounts:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-data</span>
      <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/usr/share/data</span>
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-container</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">volumeMounts:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-data</span>
      <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/usr/share/nginx/html</span>
</code></pre>
<p><strong>Example of a pod with a sidecar container</strong></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">sidecar-example</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">containers:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">main-container</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">main-app-image</span>
      <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">sidecar-container</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">logging-agent-image</span>
      <span class="hljs-attr">volumeMounts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-storage</span>
          <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/logs</span>
  <span class="hljs-attr">volumes:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">shared-storage</span>
      <span class="hljs-attr">emptyDir:</span> {}
</code></pre>
</blockquote>
<ul>
<li><p><strong>Main Container</strong>: Runs the primary application</p>
</li>
<li><p><strong>Sidecar Container</strong>: Runs a logging agent that collects logs from the shared storage directory</p>
</li>
</ul>
<h3 id="heading-conclusion">Conclusion</h3>
<blockquote>
<p>In conclusion, Kubernetes offers a robust and flexible platform for managing containerized applications. By understanding the different ways to define and manage infrastructure, such as imperative and declarative approaches, users can choose the method that best suits their needs. YAML plays a crucial role in configuring Kubernetes resources, providing a human-readable format that simplifies the creation and management of complex configurations. Pods, as the smallest unit in Kubernetes, encapsulate containers and their lifecycle, ensuring efficient resource utilization and communication. Namespaces help in organizing and isolating resources, making it easier to manage large clusters. Additionally, init containers and sidecar containers extend the functionality of pods, allowing for more sophisticated application setups. By mastering these concepts, users can effectively leverage Kubernetes to deploy, scale, and manage their applications in a cloud-native environment.</p>
</blockquote>
<hr />
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes-Part-2]]></title><description><![CDATA[Kubernetes tools for creating clusters

kubeadm

kops

ksctl


when you create a cluster using these tools, it is called a self-managed Kubernetes cluster
Some managed kubernetes cluster

EKS - AWS

GKE - GCP

Redhat - OpenShifts

AKS - Azure

DKE - ...]]></description><link>https://blogs.praduman.site/kubernetes-part-2</link><guid isPermaLink="true">https://blogs.praduman.site/kubernetes-part-2</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Thu, 12 Sep 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726606268473/aa592019-3af5-4464-b3da-2c5ffc39f32a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-kubernetes-tools-for-creating-clusters"><strong>Kubernetes tools for creating clusters</strong></h3>
<ul>
<li><p>kubeadm</p>
</li>
<li><p>kops</p>
</li>
<li><p>ksctl</p>
</li>
</ul>
<p><em>when you create a cluster using these tools, it is called a self-managed Kubernetes cluster</em></p>
<h3 id="heading-some-managed-kubernetes-cluster"><strong>Some managed kubernetes cluster</strong></h3>
<ul>
<li><p>EKS - AWS</p>
</li>
<li><p>GKE - GCP</p>
</li>
<li><p>OpenShift - Red Hat</p>
</li>
<li><p>AKS - Azure</p>
</li>
<li><p>DOKS - DigitalOcean</p>
</li>
</ul>
<p><em>we generally create a self-managed k8s cluster when we have our own servers</em></p>
<h3 id="heading-tools-for-running-kubernetes-clusters-in-a-local-environment"><strong>Tools for running Kubernetes clusters in a local environment:</strong></h3>
<ul>
<li><p>kind</p>
</li>
<li><p>minikube</p>
</li>
<li><p>rancher desktop</p>
</li>
<li><p>docker desktop</p>
</li>
</ul>
<h3 id="heading-understanding-kubernetes-architecture-master-and-worker-nodes">Understanding Kubernetes Architecture: Master and Worker Nodes</h3>
<p>Kubernetes architecture is divided into master nodes and worker nodes, each with specific components that handle different tasks.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715781229351/06180e99-d0e6-4906-bc45-77f33cdd87f9.png" alt /></p>
<h4 id="heading-master-node"><strong>Master Node</strong></h4>
<p>The master node is the control center of the Kubernetes cluster. It manages the cluster and coordinates all activities.</p>
<ul>
<li><p><strong>ETCD</strong>: A key-value store that keeps all the cluster's data and configuration. It is crucial for maintaining the cluster's state.</p>
</li>
<li><p><strong>Kube-API Server</strong>: The main management point for the cluster. It handles requests from users, other components, and external agents, and updates the cluster's state.</p>
</li>
<li><p><strong>Kube-Scheduler</strong>: Assigns tasks to worker nodes based on available resources and needs, making sure resources are used efficiently.</p>
</li>
<li><p><strong>Controller Manager</strong>: Keeps the cluster in the desired state by managing various controllers, like the replication controller and endpoint controller.</p>
</li>
<li><p><strong>Cloud Controller Manager</strong>: Handles cloud-specific tasks. It allows Kubernetes to interact with cloud provider APIs for things like load balancers and storage.</p>
</li>
</ul>
<h4 id="heading-worker-node"><strong>Worker Node</strong></h4>
<p>Worker nodes run the applications. They handle the workload by running pods, which are the smallest deployable units in Kubernetes.</p>
<ul>
<li><p><strong>Kubelet</strong>: An agent on each worker node that makes sure containers are running in pods as they should.</p>
</li>
<li><p><strong>Kube-Proxy</strong>: Manages network communication for the pods, ensuring proper routing within the cluster and to external networks.</p>
</li>
<li><p><strong>Container Runtime Interface (CRI)</strong>: The software that runs the containers, like Docker or containerd, managing their lifecycle.</p>
</li>
<li><p><strong>Pod</strong>: The smallest deployable unit in Kubernetes, consisting of one or more containers that share storage, network, and configuration. Pods are scheduled on worker nodes and managed by the kubelet.</p>
</li>
</ul>
<h3 id="heading-step-by-step-guide-k8s-installation-using-kops-on-ec2">Step-by-Step Guide: K8s Installation Using KOPS on EC2</h3>
<p>Create an <a target="_blank" href="https://itspraduman.hashnode.dev/how-to-connect-to-ec2-instance-easily-without-password"><strong>EC2 instance</strong></a> and connect to it from your local machine, or simply use your personal laptop.</p>
<p>Required dependencies:</p>
<ol>
<li><p>Python3</p>
</li>
<li><p>AWS CLI</p>
</li>
<li><p>kubectl</p>
</li>
</ol>
<p><strong>KOPS Installation</strong></p>
<pre><code class="lang-bash">curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d <span class="hljs-string">'"'</span> -f 4)/kops-linux-amd64

chmod +x kops-linux-amd64

sudo mv kops-linux-amd64 /usr/<span class="hljs-built_in">local</span>/bin/kops
</code></pre>
<p><strong>Set up AWS CLI configuration on your EC2 Instance or Laptop.</strong></p>
<p><em>Run</em> <code>aws configure</code></p>
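<p>If you prefer a non-interactive setup (for example in a provisioning script), the same configuration can be done with <code>aws configure set</code>. The key values below are placeholders; substitute your own credentials and region:</p>
<pre><code class="lang-bash"># Non-interactive AWS CLI setup (placeholder values — replace with your own)
aws configure set aws_access_key_id &lt;YOUR_ACCESS_KEY_ID&gt;
aws configure set aws_secret_access_key &lt;YOUR_SECRET_ACCESS_KEY&gt;
aws configure set region us-east-1

# Verify the CLI can reach AWS with these credentials
aws sts get-caller-identity
</code></pre>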
<h3 id="heading-kubernetes-cluster-installation">Kubernetes Cluster Installation</h3>
<p>Follow these steps carefully and read each command before executing.</p>
<ol>
<li><p><strong>Create an S3 bucket</strong> to store KOPS objects:</p>
<pre><code class="lang-bash"> aws s3api create-bucket --bucket kops-abhi-storage --region us-east-1
</code></pre>
</li>
<li><p><strong>Create the cluster</strong>:</p>
<pre><code class="lang-bash"> kops create cluster --name=demok8scluster.k8s.local --state=s3://kops-abhi-storage --zones=us-east-1a --node-count=1 --node-size=t2.micro --master-size=t2.micro --master-volume-size=8 --node-volume-size=8
</code></pre>
</li>
<li><p><strong>Edit the configuration</strong>:</p>
<blockquote>
<p><strong><em>Important: Edit the cluster configuration as there are multiple resources created that won't fall into the free tier.</em></strong></p>
</blockquote>
<pre><code class="lang-bash"> kops edit cluster demok8scluster.k8s.local --state=s3://kops-abhi-storage
</code></pre>
</li>
<li><p><strong>Build the cluster</strong>:</p>
<pre><code class="lang-bash"> kops update cluster demok8scluster.k8s.local --yes --state=s3://kops-abhi-storage
</code></pre>
<blockquote>
<p><strong><em>This will take a few minutes to complete.</em></strong></p>
</blockquote>
</li>
<li><p><strong>Verify the cluster installation</strong>:</p>
<pre><code class="lang-bash"> kops validate cluster demok8scluster.k8s.local --state=s3://kops-abhi-storage
</code></pre>
</li>
</ol>
<h3 id="heading-kubectl-commands"><strong>kubectl commands</strong></h3>
<ul>
<li><p><strong>Nodes present in the cluster</strong></p>
<pre><code class="lang-bash">  kubectl get nodes
</code></pre>
</li>
<li><p><strong>Create a pod</strong></p>
<pre><code class="lang-bash">  kubectl run nginx --image=nginx
</code></pre>
</li>
</ul>
<hr />
<h3 id="heading-how-kubectl-commands-work-behind-the-scenes">How kubectl Commands Work: Behind the Scenes</h3>
<p><em>When we run a kubectl command, we are communicating with the cluster.</em></p>
<p><strong>How does that work?</strong></p>
<p><em>Through the kubeconfig file, which lives inside the <code>.kube</code> folder (by default <code>~/.kube/config</code>).</em></p>
<pre><code class="lang-bash">cat .kube/config
</code></pre>
<h3 id="heading-how-things-work"><strong>How things work</strong></h3>
<blockquote>
<p><em>Let's say we need to create a pod using a command</em></p>
<p><code>kubectl run my-pod --image=nginx</code></p>
</blockquote>
<p><em>Whenever we do any work with a kubectl command, it first makes a REST API call to the API server. The request then passes through three stages:</em></p>
<ul>
<li><p>Authentication</p>
</li>
<li><p>Authorization</p>
</li>
<li><p>Admission</p>
</li>
</ul>
<blockquote>
<p>To ensure the request is valid and the person making the request is authorized to perform the task without violating any policies.</p>
</blockquote>
<p>The scheduler then picks the best node for the workload based on taints/tolerations, affinity rules, and node selectors. The kubelet on that node talks to the container runtime (via the CRI) to pull the image and start the container, and the pod's status is updated. Kube-proxy programs the networking rules (typically via iptables), and your workload runs. Health checks pass and report to the API server that the pod is running, and the cluster state is stored in etcd.</p>
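<p>You can watch this API traffic yourself: kubectl's <code>-v</code> flag raises log verbosity, and at level 8 it prints the HTTP requests and responses exchanged with the API server (this assumes a working cluster and kubeconfig):</p>
<pre><code class="lang-bash"># Show the REST calls kubectl makes to the API server
kubectl run my-pod --image=nginx -v=8

# Look for lines like:
#   POST https://&lt;api-server&gt;/api/v1/namespaces/default/pods
# followed by the response once the request passes
# authentication, authorization, and admission.
</code></pre>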
<p><strong>Inside kubeconfig file we have :</strong></p>
<ul>
<li><p>user information</p>
</li>
<li><p>cluster information</p>
</li>
<li><p>context information</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715834899537/cda9db26-9dcd-4827-9751-413d89bbf63f.png" alt /></p>
<ul>
<li><p>Inside context, we define which user is connected to which cluster.</p>
</li>
<li><p>Inside users, we define the users present.</p>
</li>
<li><p>Inside clusters, we define the clusters present.</p>
</li>
</ul>
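<p>Put together, a minimal kubeconfig looks roughly like this (the names and paths here are illustrative):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://&lt;api-server-endpoint&gt;:6443
    certificate-authority: /path/to/ca.crt
users:
- name: praduman
  user:
    client-certificate: /path/to/praduman.crt
    client-key: /path/to/praduman.key
contexts:
- name: praduman-context
  context:
    cluster: kubernetes
    user: praduman
current-context: praduman-context
</code></pre>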
<blockquote>
<p><strong><em>Command to view config file :</em></strong></p>
</blockquote>
<pre><code class="lang-bash">kubectl config view
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715835730812/b0fba319-dab0-4643-97d5-7f502d3f6f42.png" alt /></p>
<blockquote>
<p><strong><em>Command to view contexts :</em></strong></p>
</blockquote>
<pre><code class="lang-bash">kubectl config get-contexts
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715835865681/51224f0a-98f7-4865-bf42-a45ea5663d0b.png" alt /></p>
<blockquote>
<p><strong><em>Command to view the users :</em></strong></p>
</blockquote>
<pre><code class="lang-bash">kubectl config get-users
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718804912951/89e862ef-cfad-493c-a704-29f0ea8c0303.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-creating-a-new-user-in-kubernetes-a-detailed-guide">Creating a New User in Kubernetes: A Detailed Guide</h3>
<ul>
<li><p><strong>Create a private key using openssl</strong></p>
<pre><code class="lang-bash">  openssl genrsa -out praduman.key 2048
</code></pre>
<pre><code class="lang-bash">  openssl req -new -key praduman.key -out praduman.csr -subj <span class="hljs-string">"/CN=praduman/O=group1"</span>
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718807880743/21660be5-f5d9-4ca7-8e83-9da7e5a86f90.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Encode created CSR to base64</strong></p>
<pre><code class="lang-bash">  cat praduman.csr | base64 | tr -d <span class="hljs-string">'\n'</span>
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715836303598/620fd344-b09f-450d-8e28-e8ba1d0ce201.png" alt /></p>
</li>
<li><p><strong>Create CSR</strong></p>
<pre><code class="lang-bash">  vi csr.yaml
</code></pre>
<pre><code class="lang-yaml">  <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">certificates.k8s.io/v1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">CertificateSigningRequest</span>
  <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">praduman</span>
  <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">request:</span> <span class="hljs-string">&lt;BASE64-encoded</span> <span class="hljs-string">CSR&gt;</span>
      <span class="hljs-attr">signerName:</span> <span class="hljs-string">kubernetes.io/kube-apiserver-client</span>
      <span class="hljs-attr">usages:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">client</span> <span class="hljs-string">auth</span>
</code></pre>
<blockquote>
<p>Replace <code>&lt;BASE64-encoded CSR&gt;</code> with the output of the previous command</p>
</blockquote>
</li>
<li><p><strong>Apply yaml file and approve the certificate to the user</strong></p>
<pre><code class="lang-bash">  kubectl apply -f csr.yaml
  kubectl certificate approve praduman
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715838970607/bc6f7939-00bc-4917-9339-c5f5b039080e.png" alt /></p>
</li>
<li><p><strong>Get the crt specific to the user using jsonpath</strong></p>
<pre><code class="lang-bash">  kubectl get csr praduman -o jsonpath=<span class="hljs-string">'{.status.certificate}'</span> | base64 --decode &gt; praduman.crt
</code></pre>
<blockquote>
<p>A <code>praduman.crt</code> file is created</p>
</blockquote>
</li>
<li><p><strong>Create a role and role binding</strong></p>
<pre><code class="lang-bash">  vim role.yaml
</code></pre>
<pre><code class="lang-yaml">   <span class="hljs-attr">kind:</span> <span class="hljs-string">Role</span>
   <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
   <span class="hljs-attr">metadata:</span>
     <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
     <span class="hljs-attr">name:</span> <span class="hljs-string">pod-reader</span>
   <span class="hljs-attr">rules:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">apiGroups:</span> [<span class="hljs-string">""</span>]
     <span class="hljs-attr">resources:</span> [<span class="hljs-string">"pods"</span>]
     <span class="hljs-attr">verbs:</span> [<span class="hljs-string">"get"</span>, <span class="hljs-string">"watch"</span>, <span class="hljs-string">"list"</span>]
  <span class="hljs-string">---</span>
   <span class="hljs-attr">kind:</span> <span class="hljs-string">RoleBinding</span>
   <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">rbac.authorization.k8s.io/v1</span>
   <span class="hljs-attr">metadata:</span>
     <span class="hljs-attr">name:</span> <span class="hljs-string">read-pods</span>
     <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
   <span class="hljs-attr">subjects:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">kind:</span> <span class="hljs-string">User</span>
     <span class="hljs-attr">name:</span> <span class="hljs-string">praduman</span>
     <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">rbac.authorization.k8s.io</span>
   <span class="hljs-attr">roleRef:</span>
     <span class="hljs-attr">kind:</span> <span class="hljs-string">Role</span>
     <span class="hljs-attr">name:</span> <span class="hljs-string">pod-reader</span>
     <span class="hljs-attr">apiGroup:</span> <span class="hljs-string">rbac.authorization.k8s.io</span>
</code></pre>
<pre><code class="lang-bash">  kubectl apply -f role.yaml
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715839653601/48313cf1-ae3f-424e-8359-27aaef7b4693.png" alt /></p>
</li>
<li><p><strong>Set credentials</strong></p>
<pre><code class="lang-bash">  kubectl config set-credentials praduman --client-certificate=praduman.crt --client-key=praduman.key
</code></pre>
</li>
<li><p><strong>Create context</strong></p>
<pre><code class="lang-bash">  kubectl config set-context praduman-context --cluster=kubernetes --namespace=default --user=praduman
</code></pre>
</li>
<li><p><strong>view contexts</strong></p>
<pre><code class="lang-bash">  kubectl config get-contexts
</code></pre>
</li>
</ul>
<blockquote>
<p>You will see a context named <code>praduman-context</code> that uses the <code>default</code> namespace and the <code>kubernetes</code> cluster</p>
</blockquote>
<ul>
<li><p><strong>use the created context</strong></p>
<pre><code class="lang-bash">  kubectl config use-context praduman-context
</code></pre>
</li>
</ul>
<blockquote>
<p>The new user is now created and ready to use</p>
</blockquote>
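<p>Before relying on the new user, you can verify the RBAC setup with <code>kubectl auth can-i</code>. Run this from an admin context; the <code>--as</code> flag impersonates the user:</p>
<pre><code class="lang-bash"># The pod-reader role allows listing pods, so this should print "yes"
kubectl auth can-i list pods --as praduman --namespace default

# The role grants no create permissions, so this should print "no"
kubectl auth can-i create deployments --as praduman --namespace default
</code></pre>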
<hr />
<h3 id="heading-how-kubectl-command-works"><strong>How kubectl command works</strong></h3>
<ul>
<li><p>Firstly, it searches for the kubeconfig file.</p>
</li>
<li><p>If you set the <code>KUBECONFIG</code> environment variable in the current shell, kubectl first looks at the config file you provide there.</p>
<pre><code class="lang-bash">  <span class="hljs-built_in">export</span> KUBECONFIG=path-to-your-config-file
</code></pre>
</li>
<li><p>Another way to do this is to provide the kubeconfig file path along with the command.</p>
<pre><code class="lang-bash">  kubectl get pod --kubeconfig ~/.kube/config
</code></pre>
</li>
</ul>
<blockquote>
<p>Let's say you have 3 clusters, each with its own kubeconfig file. How do you merge all the config files?</p>
</blockquote>
<ul>
<li><p>First method</p>
<pre><code class="lang-bash">  <span class="hljs-built_in">export</span> KUBECONFIG=/path/to/first/config:/path/to/second/config:/path/to/third/config
</code></pre>
</li>
<li><p>Second method (using the kubectx tool)</p>
<blockquote>
<p>We use kubectx when we have multiple clusters, users, and contexts; it is a faster way to switch between clusters (contexts) in kubectl</p>
</blockquote>
<pre><code class="lang-bash">  <span class="hljs-comment"># switch to another cluster that's in kubeconfig</span>
  $ kubectx minikube
  Switched to context <span class="hljs-string">"minikube"</span>.

  <span class="hljs-comment"># switch back to previous cluster</span>
  $ kubectx -
  Switched to context <span class="hljs-string">"oregon"</span>.

  <span class="hljs-comment"># rename context</span>
  $ kubectx dublin=gke_ahmetb_europe-west1-b_dublin
  Context <span class="hljs-string">"gke_ahmetb_europe-west1-b_dublin"</span> renamed to <span class="hljs-string">"dublin"</span>.
</code></pre>
</li>
</ul>
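<p>If you want a single merged file rather than setting the variable every time, <code>kubectl config view --flatten</code> can write the merged result out (the paths below are examples):</p>
<pre><code class="lang-bash"># Merge several kubeconfigs into one file
KUBECONFIG=/path/to/first/config:/path/to/second/config:/path/to/third/config \
  kubectl config view --flatten &gt; ~/.kube/merged-config

# Use the merged file from now on
export KUBECONFIG=~/.kube/merged-config
kubectl config get-contexts
</code></pre>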
<hr />
<h3 id="heading-gvr-group-version-resource"><strong>GVR (Group-Version-Resource)</strong></h3>
<p>Think of GVR as the address for finding a type of resource in Kubernetes.</p>
<ul>
<li><p><strong>Group</strong>: A broad category (like a section in a library)</p>
</li>
<li><p><strong>Version</strong>: A specific edition within that category (like a book edition)</p>
</li>
<li><p><strong>Resource</strong>: The actual item you want (like a specific book)</p>
</li>
</ul>
<p><strong>Example</strong>: <code>apps/v1/deployments</code></p>
<ul>
<li><p><strong>Group</strong>: <code>apps</code> (the section for applications)</p>
</li>
<li><p><strong>Version</strong>: <code>v1</code> (the first edition of this section)</p>
</li>
<li><p><strong>Resource</strong>: <code>deployments</code> (the specific item you’re looking for, like a book on deployments)</p>
</li>
</ul>
<h3 id="heading-gvk-group-version-kind"><strong>GVK (Group-Version-Kind)</strong></h3>
<p>GVK is similar, but it focuses on the type of item you’re working with.</p>
<ul>
<li><p><strong>Group</strong>: Same broad category</p>
</li>
<li><p><strong>Version</strong>: Same specific edition</p>
</li>
<li><p><strong>Kind</strong>: The specific type or class of the item</p>
</li>
</ul>
<p><strong>Example</strong>: <code>apps/v1/Deployment</code></p>
<ul>
<li><p><strong>Group</strong>: <code>apps</code></p>
</li>
<li><p><strong>Version</strong>: <code>v1</code></p>
</li>
<li><p><strong>Kind</strong>: <code>Deployment</code> (the exact type of item you’re dealing with, like a specific chapter in the book)</p>
</li>
</ul>
<blockquote>
<ul>
<li><p><strong>GVR</strong> tells you where to find the resource (<code>apps/v1/deployments</code>)</p>
</li>
<li><p><strong>GVK</strong> tells you exactly what the resource is (<code>apps/v1/Deployment</code>)</p>
</li>
</ul>
</blockquote>
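<p>The GVR is exactly what appears in the API server's URL paths, and kubectl can show you the mapping from resource to Kind (this assumes a reachable cluster):</p>
<pre><code class="lang-bash"># List resources in the "apps" group with their APIVERSION and KIND columns
# (deployments / apps/v1 / Deployment will appear among them)
kubectl api-resources --api-group=apps

# The GVR maps directly onto the REST path:
#   /apis/apps/v1/namespaces/default/deployments
# Core-group resources such as pods live under /api/v1/... instead.
</code></pre>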
<hr />
<h3 id="heading-communicate-with-the-api-server-without-using-kubectl-and-kubeconfig">Communicate with the API server without using kubectl and kubeconfig</h3>
<blockquote>
<p><em>Want to talk to the Kubernetes API server without kubectl or a kubeconfig?</em></p>
<p><em>We can talk to Kubernetes directly through its REST API.</em></p>
</blockquote>
<ul>
<li><p><strong>create a service account</strong></p>
<pre><code class="lang-bash">  kubectl create serviceaccount praduman -n default
</code></pre>
</li>
<li><p><strong>create cluster role binding</strong></p>
<pre><code class="lang-bash">  kubectl create clusterrolebinding praduman-clusteradmin-binding --clusterrole=cluster-admin --serviceaccount=default:praduman
</code></pre>
</li>
<li><p><strong>create a token</strong></p>
<pre><code class="lang-bash">  kubectl create token praduman
</code></pre>
<pre><code class="lang-bash">  Token=&lt;output-of-the-above-cmd&gt;
</code></pre>
<pre><code class="lang-bash">  APISERVER=$(kubectl config view --minify -o jsonpath=<span class="hljs-string">'{.clusters[0].cluster.server}'</span>)
</code></pre>
</li>
<li><p><strong>create deployment</strong></p>
<pre><code class="lang-bash">  curl -X POST <span class="hljs-variable">$APISERVER</span>/apis/apps/v1/namespaces/default/deployments \
    -H <span class="hljs-string">"Authorization: Bearer <span class="hljs-variable">$TOKEN</span>"</span> \
    -H <span class="hljs-string">'Content-Type: application/json'</span> \
    -d @deploy.json \
    -k
</code></pre>
</li>
<li><p><strong>list pods</strong></p>
<pre><code class="lang-bash">  curl -X GET <span class="hljs-variable">$APISERVER</span>/api/v1/namespaces/default/pods \
    -H <span class="hljs-string">"Authorization: Bearer <span class="hljs-variable">$TOKEN</span>"</span> \
    -k
</code></pre>
</li>
</ul>
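<p>The <code>deploy.json</code> file referenced above is just a Deployment manifest in JSON form. A minimal example might look like this (the name and image are illustrative):</p>
<pre><code class="lang-json">{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": { "name": "nginx-deployment" },
  "spec": {
    "replicas": 1,
    "selector": { "matchLabels": { "app": "nginx" } },
    "template": {
      "metadata": { "labels": { "app": "nginx" } },
      "spec": {
        "containers": [
          { "name": "nginx", "image": "nginx", "ports": [ { "containerPort": 80 } ] }
        ]
      }
    }
  }
}
</code></pre>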
<h3 id="heading-conclusion">Conclusion</h3>
<blockquote>
<p>In conclusion, this article provides a detailed exploration of Kubernetes, covering its architecture, tools for creating and managing clusters, and practical steps for setting up a Kubernetes cluster using KOPS on EC2. It distinguishes between self-managed and managed Kubernetes clusters, highlights tools for local development, and delves into the components of master and worker nodes. Additionally, it offers commands and procedures for cluster creation, configuration, and user management, including direct interaction with the Kubernetes API server using REST calls. This comprehensive guide serves as a valuable resource for anyone looking to understand and implement Kubernetes effectively.</p>
</blockquote>
<hr />
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes-Part-1]]></title><description><![CDATA[Understanding Kubernetes: The Container Orchestration Powerhouse
Kubernetes, also known as k8s, is an open-source system that helps automate the deployment, scaling, and management of applications in containers. It is a container orchestration platfo...]]></description><link>https://blogs.praduman.site/kubernetes-part-1</link><guid isPermaLink="true">https://blogs.praduman.site/kubernetes-part-1</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Wed, 11 Sep 2024 18:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1726606175668/136e970b-4c80-4a5b-b9d7-5567a4968693.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-understanding-kubernetes-the-container-orchestration-powerhouse">Understanding Kubernetes: The Container Orchestration Powerhouse</h3>
<p>Kubernetes, also known as k8s, is an open-source system that helps automate the deployment, scaling, and management of applications in containers. It is a container orchestration platform.</p>
<p>When you install Kubernetes, you create a cluster, which is a group of nodes. Nodes are physical or virtual machines that run your containers.</p>
<p>Virtual machines can also be orchestrated through Kubernetes by installing KubeVirt.</p>
<p><strong><em>Key facts about Kubernetes:</em></strong></p>
<ul>
<li><p><strong>CNCF Graduated Project</strong></p>
</li>
<li><p><strong>Inspired by Borg and Omega</strong></p>
</li>
<li><p><strong>Launched in 2014</strong></p>
</li>
<li><p><strong>Designed for Scale</strong></p>
</li>
<li><p><strong>Run Anywhere</strong></p>
</li>
</ul>
<h3 id="heading-why-choose-kubernetes-unleashing-the-power-of-container-management">Why Choose Kubernetes? Unleashing the Power of Container Management</h3>
<p><strong><em>Kubernetes is popular because it offers powerful features:</em></strong></p>
<ol>
<li><p><strong>Autoscaling</strong>: Automatically adjusts the number of running containers based on current demand, ensuring efficient resource usage and optimal performance.</p>
</li>
<li><p><strong>Autohealing</strong>: Detects and replaces failed containers to maintain the health and availability of applications.</p>
</li>
<li><p><strong>Scheduling</strong>: Efficiently allocates containers to nodes based on resource requirements and constraints, optimizing resource usage.</p>
</li>
<li><p><strong>Load Balancing</strong>: Distributes network traffic across multiple containers to ensure no single container is overwhelmed.</p>
</li>
<li><p><strong>Storage Orchestration</strong>: Automatically mounts the necessary storage systems, whether local, cloud-based, or network storage.</p>
</li>
<li><p><strong>Deployment Automation</strong>: Automates the deployment, scaling, and rollback of applications, ensuring smooth updates and maintaining high availability.</p>
</li>
<li><p><strong>Monitoring and Logging Integration</strong>: Provides integration with various tools for monitoring and logging, offering insights into application performance and health.</p>
</li>
</ol>
<h3 id="heading-dockers-drawbacks-why-kubernetes-is-the-superior-choice">Docker's Drawbacks: Why Kubernetes is the Superior Choice</h3>
<p>Docker is a platform for running containers, which are lightweight and ephemeral by nature.</p>
<p><strong><em>Here are some key issues with Docker:</em></strong></p>
<ul>
<li><p><strong>Ephemeral Nature</strong>: Containers have short lifespans. If a container stops or crashes, the application inside becomes inaccessible until someone restarts it.</p>
</li>
<li><p><strong>Single Host Limitation</strong>: Docker manages containers on a single machine, making it difficult to handle large-scale deployments.</p>
</li>
<li><p><strong>Manual Monitoring</strong>: Monitoring and managing a large number of containers manually is impractical. Running commands like <code>docker ps</code> to check container states isn't feasible at scale.</p>
</li>
<li><p><strong>Handling Traffic</strong>: When a container experiences a surge in traffic, Docker lacks built-in mechanisms to automatically distribute the load or scale up the number of containers, leading to potential performance issues.</p>
</li>
</ul>
<h3 id="heading-how-kubernetes-helps">How Kubernetes Helps</h3>
<p><strong>Kubernetes addresses these problems with features like:</strong></p>
<ul>
<li><p><strong>Autohealing</strong>: Automatically detects and restarts failed containers, ensuring applications remain accessible without manual intervention.</p>
</li>
<li><p><strong>Autoscaling</strong>: Automatically adjusts the number of running containers based on traffic demand, ensuring that applications can handle increased traffic without performance degradation.</p>
</li>
<li><p><strong>Multi-Host Orchestration</strong>: Manages containers across multiple machines, enabling large-scale deployments and better resource utilization.</p>
</li>
</ul>
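<p>Autohealing is easy to see in practice: run a Deployment, kill one of its pods, and watch Kubernetes replace it (this assumes a running cluster):</p>
<pre><code class="lang-bash"># Run nginx with two replicas managed by a Deployment
kubectl create deployment web --image=nginx --replicas=2

# Delete one pod (substitute a real pod name from "kubectl get pods")
kubectl delete pod &lt;one-of-the-web-pods&gt;

# The Deployment controller immediately creates a replacement
# to keep the replica count at 2
kubectl get pods
</code></pre>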
<hr />
<h3 id="heading-step-by-step-guide-creating-a-container-using-docker">Step-by-Step Guide: Creating a Container Using Docker</h3>
<p><strong>First, make sure Docker is installed on your server.</strong></p>
<ul>
<li><p><strong><em>Run an nginx container (Docker pulls the image from Docker Hub if it isn't already present locally)</em></strong></p>
<pre><code class="lang-bash">  docker run -d --name my-nginx-container --memory 512m --cpus 1 nginx
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715754053278/bb8d27ac-2bfd-4d7e-87b2-209643527e60.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong><em>Check that the container is running</em></strong></p>
<pre><code class="lang-bash">  docker ps
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715754315784/a5e8f1b9-f7a9-4c89-ac3f-0d3dd76cb564.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong><em>Print the process ID (PID) of the nginx container</em></strong></p>
<pre><code class="lang-bash">  docker inspect --format <span class="hljs-string">'{{.State.Pid}}'</span> my-nginx-container

  (or)

  ps aux | grep <span class="hljs-string">'[n]ginx'</span> | sort -n -k 2 | head -n 1 | awk <span class="hljs-string">'{print $2}'</span>
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718737283139/a64e64cc-6086-40fe-87ca-dafb1db70f6b.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong><em>List the Linux namespaces of the process</em></strong></p>
<pre><code class="lang-bash">  lsns -p &lt;PID&gt;
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1718737355110/1494f481-f5b6-4e7d-801d-a3ace6b7e108.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
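<p>Those namespaces are what isolate the container. With the PID from above, you can step inside the container's namespaces using <code>nsenter</code> (part of util-linux):</p>
<pre><code class="lang-bash"># Run a command inside the container's network namespace
sudo nsenter -t &lt;PID&gt; -n ip addr

# Or enter its mount and PID namespaces and get a shell
sudo nsenter -t &lt;PID&gt; -m -p sh
</code></pre>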
<hr />
<h3 id="heading-conclusion">Conclusion</h3>
<blockquote>
<p>Kubernetes, or k8s, revolutionizes the way applications are deployed, scaled, and managed by automating these processes within containerized environments. Its robust features, such as autoscaling, autohealing, efficient scheduling, load balancing, and storage orchestration, make it an indispensable tool for modern DevOps practices. Unlike Docker, which is limited to single-host container management and requires manual intervention for scaling and monitoring, Kubernetes excels in multi-host orchestration and automatic management of container health and scaling. By leveraging Kubernetes, organizations can achieve greater efficiency, reliability, and scalability in their application deployments.</p>
</blockquote>
<hr />
<p>💡 <em>Let’s connect and discuss DevOps, cloud automation, and cutting-edge technology</em></p>
<p>🔗 <a target="_blank" href="https://www.linkedin.com/in/praduman-prajapati/"><strong>LinkedIn</strong></a> | 💼 <a target="_blank" href="https://www.upwork.com/freelancers/~01fa3bf4d6797a9651"><strong>Upwork</strong></a> | 🐦 <a target="_blank" href="https://x.com/CndTwtprad"><strong>Twitter</strong></a> | 👨‍💻 <a target="_blank" href="https://github.com/praduman8435"><strong>GitHub</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[How to Create an IAM in AWS: A Step-by-Step Guide]]></title><description><![CDATA[AWS IAM (Identity and Access Management) is a service provided by Amazon Web Services (AWS) that helps you manage access to your AWS resources, acting like a security system for your AWS account.
IAM allows you to create and manage users, groups, and...]]></description><link>https://blogs.praduman.site/how-to-create-an-iam-in-aws-a-step-by-step-guide</link><guid isPermaLink="true">https://blogs.praduman.site/how-to-create-an-iam-in-aws-a-step-by-step-guide</guid><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[IAM]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Mon, 20 May 2024 19:14:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1718741777732/0a72454e-c42f-4b65-ad02-90d465f722da.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AWS IAM (Identity and Access Management) is a service provided by Amazon Web Services (AWS) that helps you manage access to your AWS resources, acting like a security system for your AWS account.</p>
<p><strong>IAM allows you to create and manage users, groups, and roles</strong>.</p>
<p><strong>Users:</strong> <em>IAM users are individual people or entities (like applications or services) that use your AWS resources. Each user has a unique name and security credentials (password or access keys) for authentication and access control.</em></p>
<p><strong>Groups:</strong> <em>IAM groups are collections of users with similar access requirements. Instead of managing permissions for each user individually, you can assign permissions to groups, making it easier to manage access control. Users can be added or removed from groups as needed.</em></p>
<p><strong>Roles:</strong> <em>IAM roles give temporary access to AWS resources. They are usually used by applications or services that need to access AWS resources for users or other services. Roles have policies that specify what actions and permissions are allowed.</em></p>
<h4 id="heading-with-iam-you-can-control-and-define-permissions-through-policies">With IAM, you can control and define permissions through policies.</h4>
<p><em>IAM policies are JSON documents that define permissions, specifying the actions that can be performed on AWS resources and the resources to which the actions apply. These policies can be attached to users, groups, or roles to control access. IAM provides both AWS managed policies (predefined policies maintained by AWS) and customer managed policies (policies created and managed by you).</em></p>
<p><strong>Overall, IAM is a key part of AWS security. It gives you detailed control over who can access your AWS account and resources, lowers the risk of unauthorized access, and helps keep your environment secure.</strong></p>
<hr />
<h3 id="heading-create-an-iam-role-in-aws">Create an IAM user in AWS</h3>
<ul>
<li><h3 id="heading-login-to-aws-account-using-root-user-and-search-for-iam-service">Log in to your AWS account as the root user and search for the IAM service</h3>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716230003535/0b805508-864b-4775-a580-7378c9c5e682.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Click on Users and create a new user</p>
<ul>
<li><p>Enter a user name</p>
</li>
<li><p>Select "I want to create an IAM user"</p>
</li>
<li><p>Autogenerate a password (users can set their own password when they first log in)</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716230484650/b4605266-b5c6-43aa-bd5f-ea07a64656b9.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Set permissions</li>
</ul>
<p>Attach policies directly, or add the user to a group that already has policies attached. Here, I am attaching policies directly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716230674105/9641a85e-68b3-4a8e-aef8-ad09d27e962b.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Attach the required policies and go to the next step; you will see a summary</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716230828065/d7d27e05-6d7c-4574-8df3-029434a108de.png" alt class="image--center mx-auto" /></p>
<ul>
<li>The user is created. Save the .csv file and share it with the person who will use this account</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716230923620/b3bbee7f-be7f-4581-860f-82f14e1052f8.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Log in as the IAM user</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716231236831/e100c5b6-db2f-46a8-b17e-914ed15cb24b.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Reset the password</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716231479722/9f91f62e-ae44-4daf-b3ad-6ff15a09cf00.png" alt class="image--center mx-auto" /></p>
<ul>
<li>You are now logged in as the new user and can perform the actions your permissions allow</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716231740572/87269109-90f1-43ce-a891-968a4c60330f.png" alt class="image--center mx-auto" /></p>
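<p>The same user can also be created from the AWS CLI instead of the console. A rough equivalent of the steps above looks like this (the user name and policy are examples):</p>
<pre><code class="lang-bash"># Create the user
aws iam create-user --user-name demo-user

# Give console access with a temporary password the user must change
aws iam create-login-profile --user-name demo-user \
  --password '&lt;temporary-password&gt;' --password-reset-required

# Attach a managed policy (example: read-only access to S3)
aws iam attach-user-policy --user-name demo-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
</code></pre>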
]]></content:encoded></item><item><title><![CDATA[How to Connect to EC2 Instance Easily Without Password]]></title><description><![CDATA[First, you need to Create an EC2 instance and connect it to the local terminal


Now, open a new terminal on local and run the command:

ssh-keygen -t rsa



Open the ".ssh" folder and copy the content from the "id_rsa.pub" file.

cat .ssh/id_rsa.pub...]]></description><link>https://blogs.praduman.site/how-to-connect-to-ec2-instance-easily-without-password</link><guid isPermaLink="true">https://blogs.praduman.site/how-to-connect-to-ec2-instance-easily-without-password</guid><category><![CDATA[EC2 instance]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Sat, 18 May 2024 18:39:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1714131196158/bebe95c1-c53a-4699-afa0-2ba3422219be.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>First, you need to</strong> <a target="_blank" href="https://itspraduman.hashnode.dev/step-by-step-guide-to-deploying-an-ec2-instance-on-aws-and-connecting-it-to-your-computer"><strong><em>Create an EC2 instance and connect it to the local terminal</em></strong></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714131508063/3cb9b9fc-efec-4246-9645-1ecf7d2f73c0.png" alt class="image--center mx-auto" /></p>
<ul>
<li><em>Now, open a new terminal on your local machine and run the command:</em></li>
</ul>
<pre><code class="lang-bash">ssh-keygen -t rsa
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714131720002/8c618db2-7689-46d1-83b7-9dc54c5d705d.png" alt class="image--center mx-auto" /></p>
<ul>
<li><em>Open the ".ssh" folder and copy the content from the "id_rsa.pub" file.</em></li>
</ul>
<pre><code class="lang-bash">cat .ssh/id_rsa.pub
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714131882704/3ed8fc04-f026-42de-a38e-5bbe643a9d74.png" alt class="image--center mx-auto" /></p>
<ul>
<li><em>Open the EC2 instance you want to connect to and access the .ssh folder there.</em></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714132087720/8d5a00e2-023d-4e87-9cbb-e0fd16cfc234.png" alt class="image--center mx-auto" /></p>
<ul>
<li><em>Now, open the authorized_keys file, paste the content you copied from your local terminal at the end of the file, then save it.</em></li>
</ul>
<pre><code class="lang-bash">vim authorized_keys
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714132345142/316f81a8-cecb-4e20-b544-557c020bbaca.png" alt class="image--center mx-auto" /></p>
<ul>
<li><em>You can now SSH to the EC2 instance without a password from your local terminal. Just use the private IP address of your EC2 instance.</em></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714132591000/e06df038-a378-45b2-b469-0b1756a7b956.png" alt class="image--center mx-auto" /></p>
<ul>
<li><em>Copy your EC2 instance's private IPv4 address and connect from your local terminal with the command below</em></li>
</ul>
<pre><code class="lang-bash">ssh &lt;your-private-ip&gt;
</code></pre>
<p><strong>Congrats! You can now connect to your instance without using a password</strong></p>
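<p><em>In practice, the copy-and-paste steps above are exactly what the <code>ssh-copy-id</code> tool automates for you. As a minimal local sketch of the same idea (the key and file paths below are illustrative; on the real server the target is <code>~/.ssh/authorized_keys</code>):</em></p>
<pre><code class="lang-bash"># Generate a key pair non-interactively if one does not already exist
KEY=/tmp/demo_rsa
[ -f "$KEY" ] || ssh-keygen -t rsa -N '' -f "$KEY" -q

# Append the public key to an authorized_keys file
# (illustrative path; on the server this is ~/.ssh/authorized_keys)
AUTH=/tmp/authorized_keys_demo
cat "${KEY}.pub" &gt;&gt; "$AUTH"
chmod 600 "$AUTH"

# Confirm the key landed in the file
grep -q ssh-rsa "$AUTH" &amp;&amp; echo "key installed"
</code></pre>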
]]></content:encoded></item><item><title><![CDATA[Setting Up Docker Containers as Jenkins Build Agents]]></title><description><![CDATA[Firstly, Create an EC2 instance

Then setup jenkins on it


When the above steps done, then wait for the Jenkins to be started

Docker Slave Configuration

To install Docker, run the command below on the EC2 instance terminal
  sudo apt update && upg...]]></description><link>https://blogs.praduman.site/setting-up-docker-containers-as-jenkins-build-agents</link><guid isPermaLink="true">https://blogs.praduman.site/setting-up-docker-containers-as-jenkins-build-agents</guid><category><![CDATA[Jenkins]]></category><category><![CDATA[Continuous Integration]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Praduman Prajapati]]></dc:creator><pubDate>Sat, 27 Apr 2024 12:00:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1714217918790/ff13ae3b-ba00-4c2e-9075-0581f2fe08a1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<ul>
<li><p><strong><em>Firstly,</em></strong> <a target="_blank" href="https://itspraduman.hashnode.dev/step-by-step-guide-to-deploying-an-ec2-instance-on-aws-and-connecting-it-to-your-computer"><strong><em>Create an EC2 instance</em></strong></a></p>
</li>
<li><p><strong><em>Then</em></strong> <a target="_blank" href="https://itspraduman.hashnode.dev/setting-up-jenkins-on-amazon-ec2?source=more_articles_bottom_blogs"><strong><em>setup jenkins on it</em></strong></a></p>
</li>
</ul>
<p><strong>Once the above steps are done, wait for Jenkins to start</strong></p>
<hr />
<h2 id="heading-docker-slave-configuration">Docker Slave Configuration</h2>
<ul>
<li><p><strong><em>To install Docker, run the command below on the EC2 instance terminal</em></strong></p>
<pre><code class="lang-bash">  sudo apt update &amp;&amp; sudo apt upgrade -y
  sudo apt install -y docker.io
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714169666095/6bf2850d-ae05-4ad2-bf2a-44141e29b4a4.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong><em>Grant Jenkins user and ubuntu user permission to docker daemon and restart docker</em></strong></p>
<pre><code class="lang-bash">  sudo su -
  usermod -aG docker jenkins
  usermod -aG docker ubuntu
  systemctl restart docker
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714170248463/c338ddbc-f800-469a-a5d0-f480a2cbe516.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong><em>Switch to the Jenkins user</em></strong></p>
<pre><code class="lang-bash">  su jenkins
</code></pre>
</li>
<li><p>Check if the Jenkins user can run containers</p>
<pre><code class="lang-bash">  docker run hello-world
</code></pre>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714171120371/f8390ed8-b6b2-4663-adba-58805190ce5b.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<p><strong><em>Sometimes Jenkins might not pick up these changes, so just restart Jenkins</em></strong></p>
<ul>
<li><p><strong>To restart Jenkins, simply go to your browser and type</strong></p>
<pre><code class="lang-markdown">  http://&lt;your-EC2-public-ip&gt;:8080/restart
</code></pre>
</li>
</ul>
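<p><strong><em>For convenience, the Docker slave configuration above can be collected into a single script. This is a sketch assuming an Ubuntu host with Jenkins already installed; run it as root:</em></strong></p>
<pre><code class="lang-bash">#!/bin/bash
# Sketch: the Docker setup steps from this section in one place (Ubuntu + Jenkins assumed)
set -e

apt update &amp;&amp; apt upgrade -y
apt install -y docker.io

# Allow the jenkins and ubuntu users to talk to the Docker daemon
usermod -aG docker jenkins
usermod -aG docker ubuntu

systemctl restart docker
</code></pre>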
<hr />
<h2 id="heading-install-the-docker-pipeline-plugin-in-jenkins">Install the Docker Pipeline plugin in Jenkins</h2>
<p><strong><em>We install the Docker Pipeline plugin so that Jenkins can use Docker containers as build agents. With it in place, when a Jenkinsfile asks for a job to run inside a Docker container, Jenkins knows how to do so</em></strong></p>
<ul>
<li><p>Click on <strong>Manage Jenkins</strong></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714172385282/4fb7b7ac-572f-47ef-9575-82f62412c5b6.png" alt class="image--center mx-auto" /></p>
<p>  Click on <strong>Plugins</strong> and install the Docker Pipeline plugin</p>
</li>
<li><p>Restart Jenkins after the installation is finished</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1714173111116/12ca1597-7744-4625-8046-6ef9c6919722.png" alt class="image--center mx-auto" /></p>
<p><strong><em>Congratulations! You are now ready to begin creating your pipelines.</em></strong></p>
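<p><em>As a starting point, a minimal Jenkinsfile that runs a job inside a Docker container might look like the sketch below. The image name and build step are illustrative placeholders; use whatever your project needs.</em></p>
<pre><code class="lang-groovy">// Declarative pipeline: run the whole job inside a Docker container
pipeline {
    agent {
        docker { image 'node:18-alpine' }   // any image your build needs
    }
    stages {
        stage('Build') {
            steps {
                sh 'node --version'         // illustrative build step
            }
        }
    }
}
</code></pre>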
]]></content:encoded></item></channel></rss>