<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Kaiwalya Koparkar]]></title><description><![CDATA[I am Kaiwalya Koparkar, founder of Geek Around Community, a GitHub Campus Expert, MLH Coach, Open-Source Advocate & DevRel. I write about open source, DevRel, C]]></description><link>https://blogs.kaiwalyakoparkar.com</link><generator>RSS for Node</generator><lastBuildDate>Tue, 14 Apr 2026 19:20:26 GMT</lastBuildDate><atom:link href="https://blogs.kaiwalyakoparkar.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Getting started with Observability using SigNoz 🔭]]></title><description><![CDATA[You’ve faced mysterious slowdowns and downtime in your application. Logs are scattered, metrics are basic, and you’re still left guessing what parts of your code are failing. Observability solves this by bringing together traces, metrics, and logs in...]]></description><link>https://blogs.kaiwalyakoparkar.com/getting-started-with-observability-using-signoz</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/getting-started-with-observability-using-signoz</guid><category><![CDATA[signoz]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[observability]]></category><category><![CDATA[Otel]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Mon, 30 Jun 2025 15:06:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751295553471/0a66abed-2ad6-4019-b89d-d9465a29561a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You’ve faced mysterious slowdowns and downtime in your application. Logs are scattered, metrics are basic, and you’re still left guessing what parts of your code are failing. 
Observability solves this by bringing together traces, metrics, and logs in a single view. With <strong>OpenTelemetry</strong> and <strong>SigNoz</strong>, you can set this up quickly and self-host your data for full control.</p>
<h1 id="heading-install-signoz-via-docker-compose"><strong>Install SigNoz via Docker Compose</strong> 💿</h1>
<p><a target="_blank" href="https://signoz.io">SigNoz</a> is an open-source observability platform that’s perfect for this job, and it’s easy to deploy locally using Docker. Make sure you’re on a server or local machine with <strong>Docker installed and at least 4 GB of memory allocated to Docker</strong>; this ensures all required services can run smoothly without resource errors. Begin by opening your terminal and running:</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> -b main https://github.com/SigNoz/signoz.git &amp;&amp; <span class="hljs-built_in">cd</span> signoz/deploy/
<span class="hljs-built_in">cd</span> docker
docker compose up -d --remove-orphans
</code></pre>
<p>This will pull all the required images (if not already present) and spin up multiple coordinated containers, each serving a part of the observability platform, such as the UI, collector, storage database (ClickHouse in this case), and query service. Verify that the containers are running:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p>You should see containers like <code>signoz/signoz-otel-collector</code>, <code>signoz/clickhouse-server</code>, and <code>signoz/signoz</code>. Next, open your browser and visit:</p>
<pre><code class="lang-bash">http://localhost:8080
</code></pre>
<p>You should see the SigNoz dashboard ready to receive and visualise trace data from your instrumented applications. If anything doesn’t work as expected, for example, if containers fail to start or the UI isn’t accessible, be sure to consult the <a target="_blank" href="https://signoz.io/docs/install/docker/">dedicated SigNoz documentation</a> for detailed troubleshooting and setup guidance.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751283109865/b6396dd3-dd1b-467d-91bc-24c32d1ea826.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-build-a-simple-nodejs-dice-app"><strong>Build a Simple Node.js Dice App</strong> 🎲</h1>
<p>Before you can add tracing to your application, you’ll need something to instrument, and for this purpose, a simple <em>dice roller service</em> is perfect. It’s lightweight &amp; easy to understand. The idea is to build a small Node.js server using the popular <a target="_blank" href="https://expressjs.com/">Express framework</a>, which will respond to HTTP requests by returning random dice roll results. This gives us a nice, testable endpoint that we can later trace with OpenTelemetry. Create a folder for your project:</p>
<pre><code class="lang-bash">mkdir dice-app
<span class="hljs-built_in">cd</span> dice-app
npm init -y
npm install express
</code></pre>
<p>Now that you have your project set up, you’ll want to create your actual application code. Make a new file in this folder named <code>app.js</code>, and save the following code into it:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">'express'</span>);
<span class="hljs-keyword">const</span> app = express();
<span class="hljs-keyword">const</span> PORT = <span class="hljs-number">3000</span>;

app.get(<span class="hljs-string">'/rolldice'</span>, <span class="hljs-function">(<span class="hljs-params">req, res</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> rolls = <span class="hljs-built_in">parseInt</span>(req.query.rolls) || <span class="hljs-number">3</span>;
  <span class="hljs-keyword">const</span> results = <span class="hljs-built_in">Array</span>.from({ <span class="hljs-attr">length</span>: rolls }, <span class="hljs-function">() =&gt;</span>
    <span class="hljs-built_in">Math</span>.floor(<span class="hljs-built_in">Math</span>.random() * <span class="hljs-number">6</span>) + <span class="hljs-number">1</span>
  );
  res.json(results);
});

app.listen(PORT, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Server listening at http://localhost:<span class="hljs-subst">${PORT}</span>`</span>);
});
</code></pre>
<p>Now we will run the application using the command below:</p>
<pre><code class="lang-bash">node app.js
</code></pre>
<p>You should see the message “Server listening at <a target="_blank" href="http://localhost:3000">http://localhost:3000</a>”, confirming that it’s up and running. To test it, you can use your browser or a command-line tool like curl. For example, try running:</p>
<pre><code class="lang-bash">curl <span class="hljs-string">"http://localhost:3000/rolldice?rolls=5"</span>
</code></pre>
<p>This sends a GET request to your server asking for 5 dice rolls. The server will respond with a JSON array of 5 random numbers between 1 and 6, simulating the roll of 5 dice. This is a simple but effective way to demonstrate a working service that can generate meaningful traffic.</p>
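<p>If you want to sanity-check the rolling logic on its own, outside Express, the same logic from the handler can be run standalone. This is just an illustrative snippet, not part of the app:</p>

```javascript
// Same dice-rolling logic as the /rolldice handler, runnable standalone.
function rollDice(rolls) {
  return Array.from({ length: rolls }, () => Math.floor(Math.random() * 6) + 1);
}

const results = rollDice(5);
console.log(results.length);                          // 5
console.log(results.every((n) => n >= 1 && n <= 6));  // true
```

Every value is produced by `Math.floor(Math.random() * 6) + 1`, so the results are always integers in the 1–6 range, matching what the endpoint returns as JSON.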
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751283349741/f0f3beec-c883-48a8-bea8-b8f459a39b9a.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-add-opentelemetry-packages"><strong>Add OpenTelemetry Packages</strong> 📦</h1>
<p>You now want automatic tracing. Install the needed OpenTelemetry libraries. These OpenTelemetry packages let you add tracing to your Node.js app: <code>@opentelemetry/sdk-node</code> is the core SDK to set up and manage tracing, <code>@opentelemetry/auto-instrumentations-node</code> automatically captures spans from popular Node.js libraries with minimal setup, and <code>@opentelemetry/exporter-trace-otlp-proto</code> sends collected trace data in the OTLP (protobuf) format to backends like the OpenTelemetry Collector or observability platforms. Use the following command:</p>
<pre><code class="lang-bash">npm install \
  @opentelemetry/sdk-node \
  @opentelemetry/auto-instrumentations-node \
  @opentelemetry/exporter-trace-otlp-proto
</code></pre>
<h1 id="heading-configure-tracing"><strong>Configure Tracing</strong> ⚙️</h1>
<p>This code sets up OpenTelemetry tracing in a Node.js app by creating a NodeSDK instance configured to automatically capture spans from common libraries (like HTTP and Express). Create <code>tracing.js</code> in your root folder, where the dice roll app exists, with the following content:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> { NodeSDK } = <span class="hljs-built_in">require</span>(<span class="hljs-string">'@opentelemetry/sdk-node'</span>);
<span class="hljs-keyword">const</span> { getNodeAutoInstrumentations } = <span class="hljs-built_in">require</span>(<span class="hljs-string">'@opentelemetry/auto-instrumentations-node'</span>);
<span class="hljs-keyword">const</span> { OTLPTraceExporter } = <span class="hljs-built_in">require</span>(<span class="hljs-string">'@opentelemetry/exporter-trace-otlp-proto'</span>);

<span class="hljs-keyword">const</span> exporter = <span class="hljs-keyword">new</span> OTLPTraceExporter({
  <span class="hljs-attr">url</span>: <span class="hljs-string">'http://localhost:4318/v1/traces'</span>
});

<span class="hljs-keyword">const</span> sdk = <span class="hljs-keyword">new</span> NodeSDK({
  <span class="hljs-attr">traceExporter</span>: exporter,
  <span class="hljs-attr">instrumentations</span>: [getNodeAutoInstrumentations()],
  <span class="hljs-attr">serviceName</span>: <span class="hljs-string">'signoz-demo-dice-app'</span>
});

sdk.start();
</code></pre>
<p>We capture spans from common libraries using <code>getNodeAutoInstrumentations()</code>, and export those spans to an OTLP-compatible backend (like the OpenTelemetry Collector or SigNoz) at <a target="_blank" href="http://localhost:4318/v1/traces">http://localhost:4318/v1/traces</a> using the <code>OTLPTraceExporter</code>. By calling <code>sdk.start()</code> we initialise the tracer, enabling end-to-end tracing of requests in the <code>signoz-demo-dice-app</code> service for performance monitoring and debugging. This sets the <code>service.name</code> attribute for all traces, helping SigNoz group telemetry under “<strong>signoz-demo-dice-app</strong>”.</p>
<h1 id="heading-start-the-app-with-tracing"><strong>Start the App with Tracing</strong> 🏁</h1>
<p>OpenTelemetry must be loaded before your application logic. To ensure this, run:</p>
<pre><code class="lang-bash">node --require ./tracing.js app.js
</code></pre>
<p>This command makes sure the tracing setup runs first by pre-loading <code>tracing.js</code>, so when <code>app.js</code> starts, tracing is already enabled for the whole app.</p>
<h1 id="heading-send-some-test-traffic"><strong>Send Some Test Traffic</strong> 🚦</h1>
<p>To see tracing in action, we generate a few HTTP requests to our running app using these curl commands:</p>
<pre><code class="lang-bash">curl <span class="hljs-string">"http://localhost:3000/rolldice?rolls=10"</span>
curl <span class="hljs-string">"http://localhost:3000/rolldice?rolls=2"</span>
curl <span class="hljs-string">"http://localhost:3000/rolldice"</span>
</code></pre>
<p>Each command simulates a user accessing the <code>/rolldice</code> endpoint on your server (running locally on port 3000). The optional <code>rolls</code> query parameter tells the app how many dice rolls to simulate.</p>
<h1 id="heading-view-data-in-the-signoz-dashboard"><strong>View Data in the SigNoz Dashboard</strong> 📊</h1>
<p>In SigNoz, you can now visualise these traces in the UI, analyse the duration and flow of requests, identify performance bottlenecks, or spot errors. This automatic capture ensures you don’t need to manually add tracing code for basic HTTP interactions; it just works out of the box once your instrumentation is set up. Return to:</p>
<pre><code class="lang-bash">http://localhost:8080
</code></pre>
<p>Navigate to the <strong>Traces</strong> tab. Under <strong>Traces</strong>, you can inspect details like timestamps, durations, and spans that show how each request flowed through your service.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751274719825/82bcb210-dc0c-42a3-aa0b-6fc197ba8ae5.jpeg" alt class="image--center mx-auto" /></p>
<p>You can also open the <strong>Services</strong> view, where you should now see <strong>“signoz-demo-dice-app”</strong> listed. Selecting the service from the panel shows a graphical representation of Latency, Rate, Apdex, Key Operations, and more.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751274597233/b9b6167c-50cf-46ba-855e-53a79addf03e.jpeg" alt class="image--center mx-auto" /></p>
<h1 id="heading-troubleshooting-checklist"><strong>Troubleshooting Checklist</strong> ✅</h1>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Symptom</strong></td><td><strong>What to Check</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Only see <code>unknown_service:node</code></td><td>Make sure you started the app with <code>--require ./tracing.js</code></td></tr>
<tr>
<td>No traces showing up</td><td>Ensure the SigNoz Collector is reachable at <a target="_blank" href="http://localhost:4318">localhost:4318</a> and the app logs show no errors. Also, make sure the OTel environment variables are exported correctly; if not, try setting them manually, e.g. <code>export OTEL_TRACES_EXPORTER=console</code> to print spans to the terminal for debugging</td></tr>
<tr>
<td>SigNoz containers failing to start</td><td>Check the Docker logs (e.g., <code>docker logs signoz-clickhouse</code>) for resource or startup errors</td></tr>
</tbody>
</table>
</div><h1 id="heading-whats-next"><strong>What’s Next?</strong> 🤔</h1>
<p>Once you have basic tracing set up, you can further enhance your observability by <a target="_blank" href="https://opentelemetry.io/docs/specs/otel/metrics/">adding metrics</a> using the <a target="_blank" href="https://opentelemetry.io/docs/specs/otel/metrics/">OpenTelemetry Metrics SDK</a> to monitor things like request counts and latencies; <a target="_blank" href="https://opentelemetry.io/docs/collector/logs/">capturing and correlating logs</a> alongside traces for full context during debugging; deploying in a <a target="_blank" href="https://opentelemetry.io/docs/collector/security/">secure setup</a> with authentication and encryption to <a target="_blank" href="https://opentelemetry.io/docs/collector/security/">protect your telemetry data</a>; and <a target="_blank" href="https://opentelemetry.io/docs/instrumentation/js/manual/">adding custom spans</a> in your code (for example, to trace critical workflows like a credit card payment process) to get deep, business-specific visibility into your application’s behavior.</p>
<h1 id="heading-resource-for-you">Resource for You! 📚</h1>
<ul>
<li><p><a target="_blank" href="https://signoz.io/docs/instrumentation/opentelemetry-express/">https://signoz.io/docs/instrumentation/opentelemetry-express/</a></p>
</li>
<li><p><a target="_blank" href="https://signoz.io/blog/distributed-tracing/">https://signoz.io/blog/distributed-tracing/</a></p>
</li>
<li><p><a target="_blank" href="https://signoz.io/blog/distributed-tracing-span/">https://signoz.io/blog/distributed-tracing-span/</a></p>
</li>
</ul>
<h1 id="heading-thanks-for-reading">Thanks for Reading ❤️</h1>
<p>This setup sets the anchor for robust observability. From here, you can evolve your tracing, logs, metrics, and even multi-service pipelines, all with SigNoz as your single source of truth. I hope this walkthrough made the setup process clear and approachable. If you found this helpful, consider sharing it with your team so everyone can benefit from improved observability practices.</p>
]]></content:encoded></item><item><title><![CDATA[Building pod-logger from scratch]]></title><description><![CDATA[In this blog, I’ll take you through the complete journey of building a multi-container application that displays Kubernetes pod logs using a Go backend, a web frontend, and a dashboard built with Builder.io. The idea started small: fetch and display ...]]></description><link>https://blogs.kaiwalyakoparkar.com/podlogger</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/podlogger</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[Docker]]></category><category><![CDATA[tools]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Sun, 13 Apr 2025 18:03:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747031424805/0577d9b3-5558-41e0-ba4e-1c31a1fe5e4c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this blog, I’ll take you through the complete journey of building a multi-container application that displays Kubernetes pod logs using a Go backend, a web frontend, and a dashboard built with Builder.io. The idea started small: fetch and display logs from running pods. But as I explored more, I realized the scope was larger, and it had the potential to be a full-fledged logging utility. The project taught me how to work with the Kubernetes API, create RoleBindings for access, and manage communication between containers using Nginx and proper service configurations. I faced several scope-related challenges, especially trying to balance between MVP and scalable design from day one. Instead of building everything in a monolith, I broke it into clean services: a frontend (dashboard) and a backend. Each component had its own build process and was automated using GitHub Actions and Docker Hub. Let’s explore the building blocks step by step.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://github.com/kaiwalyakoparkar/pod-logger">https://github.com/kaiwalyakoparkar/pod-logger</a></div>
<p> </p>
<h2 id="heading-understanding-multi-container-setup-and-the-role-of-volumes"><strong>Understanding Multi-Container Setup and the Role of Volumes</strong></h2>
<p>When working with Kubernetes, splitting components into multiple containers provides modularity, cleaner deployment, and fault isolation. In my case, the Go-based backend was responsible for talking to the Kubernetes API and fetching logs, while the frontend container displayed the logs in a simple browser interface. Volumes played a key role in this setup, not for sharing application state but for persisting logs between restarts. When the backend fetched logs, they could be optionally saved to a mounted volume for easy backup or future parsing. This structure also opened possibilities for adding sidecars in the future to handle tasks like log parsing or storage. Multi-container pods ensure shared networking and volume space, making communication and data exchange between containers easier. Instead of exposing services externally for container-to-container communication, they can directly interact through shared localhost or volumes. It’s one of the best practices in Kubernetes for tight coupling and internal service dependencies.</p>
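<p>As a sketch of the shape this takes (image names and paths here are illustrative, not the actual pod-logger manifest), a two-container pod sharing a log volume might be declared like this:</p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-logger
spec:
  containers:
    - name: api                           # Go backend talking to the Kubernetes API
      image: example/pod-logger-api:latest
      volumeMounts:
        - name: log-store
          mountPath: /var/log/pod-logger  # fetched logs can optionally be saved here
    - name: dashboard                     # frontend serving the browser UI
      image: example/pod-logger-dashboard:latest
      ports:
        - containerPort: 80
  volumes:
    - name: log-store
      emptyDir: {}                        # shared by both containers; survives container restarts
```

Because both containers live in the same pod, they share the network namespace (so they can reach each other over localhost) as well as the mounted volume.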
<h2 id="heading-using-the-kubernetes-api-from-inside-the-container"><strong>Using the Kubernetes API from Inside the Container</strong></h2>
<p>Accessing the Kubernetes API securely from inside a pod required some configuration. Initially, I tried to directly call the API server using curl, but I faced authentication issues and SSL errors. After reading <a target="_blank" href="https://prathapreddy-mudium.medium.com/accessing-the-kubernetes-api-from-a-pod-b38c5c775ce8">this guide</a>, I realized that I needed a ServiceAccount, Role, and RoleBinding to enable secure access. I used a Go script to call Kubernetes APIs, relying on the token mounted inside the pod at <code>/var/run/secrets/kubernetes.io/serviceaccount</code>. This helped authenticate the API requests correctly. I also had to adjust the script’s environment variables using Docker’s ENV configuration instead of relying on shell scripting within the pod. The curl commands still didn’t work initially, but I fixed them using insights from <a target="_blank" href="https://nieldw.medium.com/curling-the-kubernetes-api-server-d7675cfc398c">another blog</a>. Once these fixes were done, the Go backend was able to pull logs from pods successfully, proving that my setup and permissions were correct.</p>
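<p>For reference, the ServiceAccount, Role, and RoleBinding granting read access to pods and their logs might look like the following (names are illustrative, not the actual pod-logger manifests):</p>

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-logger-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-log-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]   # pods/log is the subresource served at .../pods/<name>/log
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-logger-binding
subjects:
  - kind: ServiceAccount
    name: pod-logger-sa
roleRef:
  kind: Role
  name: pod-log-reader
  apiGroup: rbac.authorization.k8s.io
```

With the ServiceAccount attached to the pod, the token mounted at <code>/var/run/secrets/kubernetes.io/serviceaccount</code> authenticates requests under exactly these permissions.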
<h2 id="heading-creating-the-backend-in-go-with-a-command-executor"><strong>Creating the Backend in Go with a Command Executor</strong></h2>
<p>The first backend version was a simple Go program that ran a shell command and displayed its output on the <code>/api/logs</code> endpoint. To test the basic idea, I used the ls command inside the container and displayed the results on a webpage. This gave me confidence that the API server could run shell commands and serve output dynamically. Later, I replaced the ls command with actual Kubernetes API calls using the Go client SDK to fetch pod logs. I implemented routes <code>/api/namespaces</code> and <code>/api/pods?namespace=X</code> to allow users to list available namespaces and pods dynamically. The backend followed a basic RESTful structure, returning JSON for each API call. Error handling was added to gracefully return messages when the API was unreachable or the pod log was not found. This created a reusable base that I could later expand upon for more advanced log inspection features.</p>
<h2 id="heading-frontend-in-html-bootstrap-and-javascript"><strong>Frontend in HTML, Bootstrap, and JavaScript</strong></h2>
<p>To make the logs viewable, I built a very lightweight frontend using HTML, Bootstrap, and plain JavaScript. It called the Go API endpoints and displayed the responses in a browser interface. The design was kept minimal to ensure fast load times and simplicity. Initially, I ran the frontend locally using <code>python3 -m http.server</code>, but that wasn’t scalable. So, I containerized it using Nginx, added a Dockerfile, and set up proper routing to make calls to the backend using cluster IPs. Relative URLs were updated to reflect container-to-container communication using service names defined in Kubernetes. I also added a refresh button that would fetch the latest logs without reloading the whole page. This small interaction made the frontend feel much more dynamic and usable.</p>
<h2 id="heading-automating-the-build-with-github-actions-and-docker-hub"><strong>Automating the Build with GitHub Actions and Docker Hub</strong></h2>
<p>To keep things easy, I automated the container build process using GitHub Actions. Each push to the main branch would trigger a workflow to build the Go backend and frontend containers, tag them with both latest and version numbers, and push them to Docker Hub. I configured secrets for Docker Hub credentials in GitHub to avoid leaking sensitive data. The same workflow also handled tagging using Git commands to ensure consistent version tracking. This meant I could test changes locally with Docker Compose and push them confidently knowing the images were versioned correctly in the registry. With the new images available, I could then update my Kubernetes manifests and redeploy the containers quickly. Automation reduced human error and helped maintain consistency across environments.</p>
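<p>A workflow along these lines could drive that build-and-push step (action versions, paths, and image names are assumptions for illustration, not the repository’s exact file):</p>

```yaml
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}   # stored as GitHub repository secrets
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: ./api                                # illustrative path to the backend
          push: true
          tags: |
            example/pod-logger-api:latest
            example/pod-logger-api:${{ github.sha }}    # version tag alongside latest
```

Tagging each image with both <code>latest</code> and a unique identifier keeps local testing convenient while preserving an exact version history in the registry.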
<h2 id="heading-bringing-it-all-together-with-docker-compose"><strong>Bringing It All Together with Docker Compose</strong></h2>
<p>To make testing easier before deploying to Kubernetes, I created a <code>docker-compose.yml</code> file that defined the services: the Go backend and the Nginx-served frontend. Each container had its own port, and the compose file handled network communication between them. This setup allowed me to run the entire application with a single docker-compose up command, making local development and testing significantly faster. I mapped ports to localhost so I could hit the endpoints and browser frontend from my laptop. The logs being pulled from Kubernetes were now shown in the browser in real-time. With this working setup, I could validate the entire end-to-end pipeline from Kubernetes API to user interface. Docker Compose was particularly helpful during early prototyping and debugging phases before moving everything into a cluster.</p>
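<p>A minimal compose file in this spirit might look like the following (service names, image names, and port mappings are illustrative):</p>

```yaml
services:
  api:
    image: example/pod-logger-api:latest
    ports:
      - "8080:8080"       # Go backend reachable at localhost:8080
  dashboard:
    image: example/pod-logger-dashboard:latest
    ports:
      - "3000:80"         # Nginx-served frontend reachable at localhost:3000
    depends_on:
      - api               # start the backend before the frontend
```

With this in place, <code>docker compose up</code> brings up the whole stack locally, and the frontend can reach the backend by its service name on the compose network.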
<h2 id="heading-adding-a-dashboard-with-builderio-and-design-collaboration"><strong>Adding a Dashboard with Builder.io and Design Collaboration</strong></h2>
<p>To give the app a more polished user interface, I teamed up with <a target="_blank" href="https://x.com/GuptaSanskritii">Sanskriti Gupta</a>, who helped craft the design for a new dashboard. Using <a target="_blank" href="https://www.builder.io/">Builder.io</a>, I built a visually appealing dashboard layout that could later fetch real data. The current dashboard uses dummy data, but it’s ready to be integrated with the Go API endpoints. Builder.io’s drag-and-drop interface made it easy to convert a design prototype into production-ready HTML/CSS components. The dashboard container was also dockerized and added to the <code>docker-compose.yml</code> setup. With the frontend and backend all running as separate containers, the system now felt modular and production-ready. Adding future features like log filtering, alerts, or even log storage would be much easier within this structured layout.</p>
<h2 id="heading-finalizing-kubernetes-configs-and-making-deployment-easy"><strong>Finalizing Kubernetes Configs and Making Deployment Easy</strong></h2>
<p>Once the application was running smoothly with Docker Compose, I transitioned it to Kubernetes by creating a consolidated set of configuration files. Instead of defining separate deployments for the frontend and backend, I opted for a single pod manifest that ran both containers side-by-side in a multi-container setup. I also created ConfigMaps for managing environment variables cleanly and applied Role, RoleBinding, ClusterRole, and ClusterRoleBinding to ensure the backend container could securely access the Kubernetes API. These permissions enabled the backend to list namespaces and pods and fetch logs directly from inside the cluster. After validating each component individually, I compiled everything (pods, ConfigMaps, roles, bindings, and services) into one combined YAML file. This made deployment extremely simple, allowing anyone to deploy the app using a single curl command followed by <code>kubectl apply -f -</code>, with no need to manually edit anything. The final structure is clean, portable, and ready for production-like Kubernetes environments.</p>
<h2 id="heading-final-touches-nginx-config-relative-urls-and-refresh-functionality"><strong>Final Touches: Nginx Config, Relative URLs, and Refresh Functionality</strong></h2>
<p>One last challenge was making sure that services could talk to each other inside Kubernetes without relying on localhost. I updated all frontend API calls to use the internal Kubernetes service names like <code>http://api-service:8080</code> instead of <code>http://localhost</code>. I also edited the Nginx config to route traffic appropriately and avoid 404s on refreshes. The refresh button on the frontend was tied to JavaScript logic that hit the <code>/api/logs</code> endpoint every few seconds or on demand. With this, users could see the log update in near real-time. It helped validate that logs were truly coming from running pods and not cached or static. The end product is a clean, interactive logging dashboard that’s powered by Kubernetes, built using Go, and delivered via modern frontend tools.</p>
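<p>The routing described above can be sketched in an Nginx config like this (the service name and ports are assumptions based on the text, not the project’s exact file):</p>

```nginx
server {
    listen 80;

    # Serve the static frontend; fall back to index.html to avoid 404s on refresh
    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
    }

    # Proxy API calls to the backend via its Kubernetes Service name,
    # instead of localhost, so it resolves via cluster DNS
    location /api/ {
        proxy_pass http://api-service:8080;
    }
}
```

Using the Service name in <code>proxy_pass</code> lets cluster DNS resolve the backend regardless of which node or IP the pod lands on.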
<h2 id="heading-lets-try-it">Let’s try it!</h2>
<p>Just head over to <a target="_blank" href="https://github.com/kaiwalyakoparkar/pod-logger">https://github.com/kaiwalyakoparkar/pod-logger</a> or run the following commands in a local terminal that has a connection to your Kubernetes cluster via <code>kubectl</code>:</p>
<pre><code class="lang-bash">curl -LO https://raw.githubusercontent.com/kaiwalyakoparkar/pod-logger/main/api/kubernetes/combined.yaml
</code></pre>
<p>Next, run this command to apply the Kubernetes pod configurations</p>
<pre><code class="lang-bash">kubectl apply -f combined.yaml
</code></pre>
<p>You should be able to access the podlogger dashboard at <code>http://&lt;your-cluster-ip&gt;:30080</code></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744566668736/c7852dc3-e39b-4dfd-a972-99a43ce74feb.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-thank-you-for-reading">Thank you for reading ❤️</h2>
<p>Thank you so much for reading this blog and joining me in this small side-project journey. The main aim of this project was to polish my Kubernetes knowledge and make something useful and fun with medium complexity. I’m really happy that this project realizes most of the idea I had in mind, and with all of your feedback, I will keep working on improvements to it. You can try out the project by heading over to my GitHub and following the steps I mentioned above.</p>
]]></content:encoded></item><item><title><![CDATA[How I cleared my Kubernetes and Cloud Native Associate (KCNA) exam in first attempt.]]></title><description><![CDATA[Hello everyone! A while back I passed my first cloud certification in the first attempt. To be honest I was super nervous putting all my knowledge at an official test. But in the end, it played out well. In this blog, I will introduce you to the cert...]]></description><link>https://blogs.kaiwalyakoparkar.com/what-is-kcna</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/what-is-kcna</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[KCNA Exam]]></category><category><![CDATA[CNCF]]></category><category><![CDATA[cloud native]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Sun, 23 Jul 2023 14:30:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1690041704560/7ab065f5-2b28-479e-b3a5-3fb196cef90f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello everyone! A while back I passed my first cloud certification in the first attempt. To be honest I was super nervous putting all my knowledge at an official test. But in the end, it played out well. In this blog, I will introduce you to the certification (If you are unaware of the certification or want to know more about it) and then share the tips and resources that I followed to clear this exam on the very first attempt.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690040532328/be6bed88-1b5e-418c-8437-93346df52c86.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-what-and-why-of-kcna">What and Why of KCNA?</h2>
<p>KCNA stands for the <strong>Kubernetes and Cloud Native Associate</strong> exam. This exam is facilitated by the CNCF (Cloud Native Computing Foundation) as part of Linux Foundation training. As the name suggests, this certification revolves around testing your knowledge of Kubernetes and cloud native. It confirms the candidate’s knowledge of Kubernetes as a technology and of the cloud-native ecosystem as a whole. It is categorized as a pre-professional certification and aims to provide a robust foundation for a person’s professional cloud journey.</p>
<p>This certification validates a candidate's fundamental understanding of Kubernetes and cloud-native technologies. It encompasses various areas such as deploying applications using essential Kubectl commands, comprehending the architecture of Kubernetes (including containers, pods, nodes, and clusters), grasping the cloud-native landscape and associated projects (such as storage, networking, GitOps, and service mesh), as well as understanding the principles of cloud-native security. Completing the KCNA exam demonstrates a candidate's proficiency in these key areas.</p>
<p>From this, I hope you now have a clearer picture of how this certification can help someone validate their cloud-native and Kubernetes skills and open up opportunities in the cloud-native ecosystem. Now that we know the intent of the certification, let's move on to more details.</p>
<p>The exam currently costs $250, but you can often get good deals during sales, and sometimes even free vouchers through programs like LiFT scholarships. It helps to keep an eye out for these.</p>
<h2 id="heading-uniqueness-of-kcna-exam">Uniqueness of KCNA Exam:</h2>
<p>Some factors make this exam unique. One of them is strict proctoring during the exam; you can read more about it <a target="_blank" href="https://docs.linuxfoundation.org/tc-docs/certification/important-instructions-kcna">on this page</a>. Since every exam taker is bound by the <a target="_blank" href="https://docs.linuxfoundation.org/tc-docs/certification/lf-cert-agreement">confidentiality agreement</a>, I can only share the resources and information that are publicly disclosed. The certification is valid for 3 years.</p>
<h2 id="heading-curriculum-of-the-exam">Curriculum of the Exam:</h2>
<p>The curriculum is nicely constructed, giving appropriate weight to each module and introducing almost all the core cloud-native concepts. It balances theory with practical implementation, so if you back your theoretical study with hands-on practice, this exam may become relatively easy for you. The exam is <strong>MCQ-based</strong>, with a total of <strong>60 MCQs</strong>. To pass, you must score <strong>75%</strong>, and the exam must be completed in <strong>90 minutes</strong>.</p>
<p>The curriculum consists of the following components (the detailed curriculum can be found <a target="_blank" href="https://github.com/cncf/curriculum/blob/master/KCNA_Curriculum.pdf">here</a>; kindly check for any revisions):</p>
<ol>
<li><p>Kubernetes Fundamentals (46%)</p>
</li>
<li><p>Container Orchestration (22%)</p>
</li>
<li><p>Cloud Native Architecture (16%)</p>
</li>
<li><p>Cloud Native Observability (8%)</p>
</li>
<li><p>Cloud Native Application Delivery (8%)</p>
</li>
</ol>
<h2 id="heading-resources-for-the-exam">Resources for the Exam</h2>
<p>Now that we know about the exam and the curriculum, it's time to share the resources I followed to prepare. I have also made my notes public, so you can use them for reference, study, or quick revision (recommended). The resources are:</p>
<ol>
<li><p><a target="_blank" href="https://www.exampro.co/kcna">https://www.exampro.co/kcna</a> - Free lectures and 1 free practice test</p>
</li>
<li><p><a target="_blank" href="https://github.com/moabukar/Kubernetes-and-Cloud-Native-Associate-KCNA">https://github.com/moabukar/Kubernetes-and-Cloud-Native-Associate-KCNA</a></p>
</li>
<li><p><a target="_blank" href="https://notes.kaiwalyakoparkar.com/kcna">https://notes.kaiwalyakoparkar.com/kcna</a></p>
</li>
<li><p><a target="_blank" href="https://blog.bradmccoy.io/how-to-pass-your-kcna-exam-cf98cfa7d70f">https://blog.bradmccoy.io/how-to-pass-your-kcna-exam-cf98cfa7d70f</a></p>
</li>
</ol>
<h2 id="heading-tips-for-the-exam">Tips for the Exam</h2>
<p>Here are some of my tips for preparing for the exam:</p>
<ol>
<li><p>Get as much hands-on practice as you can. Learned a new concept? Try to implement it, at least the Kubernetes ones: create pods and deployments and play around with them.</p>
</li>
<li><p>Give yourself enough time to understand the material. This is a very conceptual technology, and it takes time to understand how everything works together.</p>
</li>
<li><p>Try to dive deep. Although the resources may seem like simple overviews, try to go a bit deeper into each concept. This will help you during the exam as well as with related concepts.</p>
</li>
<li><p>Solve as many mock exams as you can. Multiple websites offer sample questions; take advantage of them.</p>
</li>
<li><p>Believe in your preparation. You got this 💪</p>
</li>
</ol>
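<p>For the first tip, hands-on practice can be as simple as spinning up a local cluster (for example with minikube or kind) and playing with the basic objects. The names below are just examples:</p>
<pre><code class="lang-bash"># Create a pod and a deployment, then inspect, scale, and clean up
kubectl run nginx-pod --image=nginx
kubectl create deployment web --image=nginx --replicas=2
kubectl get pods
kubectl describe deployment web
kubectl scale deployment web --replicas=3
kubectl delete deployment web
kubectl delete pod nginx-pod
</code></pre>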
<h2 id="heading-conclusion">Conclusion</h2>
<p>Well, that's it for this blog. In upcoming blogs, I will try to pick topics from the curriculum and make them easier for you. Do join the CNCF and Kubernetes communities to receive help during your preparation. You can also join <a target="_blank" href="https://community.kaiwalyakoparkar.com/">my Discord community</a>, and I would be happy to help you with your preparation. See you in the next blog 👋</p>
<h2 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h2>
<p>Connect: <a target="_blank" href="https://link.kaiwalyakoparkar.com/">https://link.kaiwalyakoparkar.com/</a></p>
]]></content:encoded></item><item><title><![CDATA[Application Logging with FluentD]]></title><description><![CDATA[Hey everyone! Welcome back 👋. In this blog, we will see what is FluentD and why we actually need it. To start with simple information, Fluentd comes under the "Observability and Analysis" part of the cloud-native application cycle and is very helpfu...]]></description><link>https://blogs.kaiwalyakoparkar.com/introduction-to-fluentd</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/introduction-to-fluentd</guid><category><![CDATA[Devops]]></category><category><![CDATA[logging]]></category><category><![CDATA[fluentd]]></category><category><![CDATA[CNCF]]></category><category><![CDATA[cloud native]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Fri, 12 May 2023 14:30:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1683884985156/0a7c4d5f-411e-463e-90e4-47c44ec612a6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone! Welcome back 👋. In this blog, we will see what FluentD is and why we actually need it. To start with the basics, FluentD falls under the "Observability and Analysis" part of the cloud-native application lifecycle and is very helpful for collecting logs from your applications.</p>
<h1 id="heading-what-is-fluentd">What is FluentD:</h1>
<p><img src="https://www.devopsschool.com/blog/wp-content/uploads/2022/03/headimg.png" alt="Top 50 Interview Questions and Answers of FluentD - DevOpsSchool.com" /></p>
<p>As mentioned, FluentD is an open-source logging tool for cloud-native applications. More precisely, it is an open-source data collector that also helps you unify your log data in order to make more sense of it. FluentD comes with a lot of unique features that really help when you are working with different sets of tools; we will introduce them later in this blog.</p>
<h1 id="heading-why-do-we-log-data">Why do we log data:</h1>
<p>Before jumping directly into FluentD, let's consider a simple scenario first to better understand why we need logging and appreciate what FluentD offers us.</p>
<p>Let's say you have a microservice application deployed on a Kubernetes cluster, with services written in different languages, for example some in JavaScript and some in Python, along with different databases, message brokers, and other services. As these applications communicate with each other, they generate data, also called logs. Each application will generate this data in a different format depending on the languages and plugins you are using.</p>
<p>Now, these logs can serve many purposes. You might need to log for <strong>compliance</strong>, recording specific data depending on the industry you work in or the product you are building.</p>
<p>These logs can also be for ensuring the <strong>security</strong> of your cluster and server (access logs) and can help in detecting suspicious access to your application data.</p>
<p>Logs can also be used in the traditional way: to find errors and debug your application in case anything goes wrong.</p>
<p>Now that we know why we need logging, let's see how the data is logged.</p>
<h1 id="heading-how-is-data-logged">How is data logged?</h1>
<p>Once the data is emitted by an application, it is generally stored in one of three ways:</p>
<ul>
<li><p><strong>File</strong> - The log data can be written directly to files. The issue with this method is that it is not human-friendly: there can be a lot of log files, which makes it impractical to go through them all. Also, as mentioned before, these files may be in different formats for different applications.</p>
</li>
<li><p><strong>Log into a DB</strong> - The logs can be stored in a log database like Elasticsearch so that they can be visualized easily with the help of different applications or bundled visualizers. But in this case, each application must be configured to log its data to Elasticsearch.</p>
</li>
<li><p><strong>Third-party application</strong> - We can use third-party applications to log the data generated by our application, but we can't control how they log it, making the data inconsistent when applications are written in different languages and structures.</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683889042273/8e5ad7f0-959d-4091-845a-e55bd429414c.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
<h2 id="heading-how-does-fluentd-solve-this">How does FluentD solve this?</h2>
<p>Now that we know how the data is logged and what the shortcomings of the above scenarios are, it is easier to understand how FluentD comes in as a solution.</p>
<ul>
<li><p>FluentD acts as a <strong>unified logging layer</strong>: no matter in how many formats it receives logs from source applications, it converts them all into a single unified format, which can then be distributed for analysis, alerting, etc.</p>
</li>
<li><p>FluentD collects data from different data sources, such as apps, access logs, system logs, and databases, processes it into a unified format, and then sends the logs to a destination where they are used for alerting, analysis, or archiving.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683889508728/eb29268d-c293-4e25-8655-0501efeee91e.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-how-does-it-work">How does it work?</h2>
<p>Now let's see how FluentD works. As with other tools we have seen in previous tutorials, it is deployed on the cluster you will be collecting logs from. FluentD collects the logs from every application on the server, including logs from third-party applications installed there.</p>
<p>After receiving the logs from the applications, FluentD converts them into a unified format. Conversion to a unified format makes it possible to work with logs coming from different applications and of different types. In addition to the conversion, you can also enrich the data with extra information, such as the pod, namespace, and container names. FluentD also lets you modify the data being logged. After the conversion and any necessary filtering and modification, the logs are sent to the destination, which can be any of the ones mentioned earlier. The interesting part is that you can choose where your logs go: you can define which destination a particular type of log is sent to. This is called "routing".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683890521081/b6e5d1d3-0545-456d-ad35-9d74fba74c20.png" alt class="image--center mx-auto" /></p>
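<p>As a small, hypothetical illustration of this enrichment step, a <code>record_transformer</code> filter can add extra fields to every log record (the <code>myapp.*</code> tag and the field values below are just examples):</p>
<pre><code class="lang-apache"># Add the hostname and an app label to every record tagged myapp.*
&lt;filter myapp.*&gt;
    @type record_transformer
    &lt;record&gt;
        hostname "#{Socket.gethostname}"
        app myapp
    &lt;/record&gt;
&lt;/filter&gt;
</code></pre>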
<h2 id="heading-features-of-fluentd">Features of FluentD</h2>
<ul>
<li><p>Not tied to any specific backend: this gives FluentD the flexibility to work with all types of backend services.</p>
</li>
<li><p>No vendor lock-in: because FluentD does not depend on any particular backend, there is no vendor lock-in.</p>
</li>
<li><p>Built-in reliability: FluentD buffers data on the hard drive until it has parsed and sent the data to the destination. The data will still be there if the server restarts, and FluentD will pick up execution where it was halted. Even though it provides this, it doesn't need additional storage configuration.</p>
</li>
<li><p>Automatic retries: if a destination (e.g. a database) fails, FluentD will keep trying to push the logs until the destination is available again.</p>
</li>
</ul>
<h2 id="heading-how-to-configure">How to configure</h2>
<ul>
<li><p>We have to install the FluentD DaemonSet. The installation guide can be found <a target="_blank" href="https://docs.fluentd.org/installation/">here</a>.</p>
</li>
<li><p>FluentD then runs on each Kubernetes node and receives the logs from the applications residing on those nodes.</p>
</li>
<li><p>We configure FluentD using a configuration file that states the rules and the source and destination configurations. We use FluentD plugins to configure how FluentD works on the cluster.</p>
</li>
</ul>
<p>Fluentd plugins are classified as:</p>
<ul>
<li><p><strong>Input</strong>: defines the sources and types of input you want to log, e.g. <code>http</code>, <code>tcp</code>, <code>syslog</code>, etc.</p>
</li>
<li><p><strong>Parser</strong>: defines how data is processed into key-value pairs, e.g. <code>csv</code>, <code>tsv</code>, <code>json</code>.</p>
</li>
<li><p><strong>Filter</strong>: lets you enrich the data, as discussed above, using <code>record_transformer</code>.</p>
</li>
<li><p><strong>Output</strong>: configures the destination the logs go to, e.g. <code>elasticsearch</code>, <code>mongo</code>.</p>
</li>
<li><p>We then use tags to group the logs. We essentially use the <code>source</code> block to bring the logs in as input and <code>parse</code> them, then use the <code>filter</code> block to enrich and modify the logs.</p>
<pre><code class="lang-apache">  <span class="hljs-comment"># All apps with tag myapp to be parsed as json</span>
  <span class="hljs-section">&lt;filter myapp.*&gt;</span>
  ...
      <span class="hljs-section">&lt;parse&gt;</span>
          @<span class="hljs-attribute">type</span> json
          ...
      <span class="hljs-section">&lt;/parse&gt;</span>
  <span class="hljs-section">&lt;/filter&gt;</span>
</code></pre>
<p>  Similarly, we use the <code>match</code> block to define the destination where we want to send the logs:</p>
<pre><code class="lang-apache">  <span class="hljs-comment"># All logs from service with tag myservice should go to elasticsearch </span>
  <span class="hljs-section">&lt;match myservice.*&gt;</span>
      @<span class="hljs-attribute">type</span> elasticsearch
      ...
  <span class="hljs-section">&lt;/match&gt;</span>
</code></pre>
</li>
</ul>
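<p>Putting the plugin types together, a minimal configuration file might look like the sketch below. The file path, tag, and Elasticsearch host are placeholders, and the Elasticsearch output assumes the <code>fluent-plugin-elasticsearch</code> plugin is installed:</p>
<pre><code class="lang-apache"># Input: tail a JSON-formatted container log file and tag its events
&lt;source&gt;
    @type tail
    path /var/log/containers/myapp*.log
    pos_file /var/log/fluentd-myapp.pos
    tag myapp.access
    &lt;parse&gt;
        @type json
    &lt;/parse&gt;
&lt;/source&gt;

# Output: route everything tagged myapp.* to Elasticsearch
&lt;match myapp.*&gt;
    @type elasticsearch
    host elasticsearch.logging.svc
    port 9200
&lt;/match&gt;
</code></pre>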
<h2 id="heading-difference-in-fluentd-and-fluent-bit">Difference between FluentD and Fluent Bit</h2>
<p>FluentD and Fluent Bit are similar in functionality, but Fluent Bit is lightweight and built for high efficiency at low cost. Fluent Bit is known as a "high scale, low resource" option and is preferred for containerized applications.</p>
<h2 id="heading-conclusion">Conclusion:</h2>
<p>There are many other logging tools, and we will be learning about them in the future, but that's it for this introduction to FluentD. In upcoming blogs, we will look at how to use FluentD with a demo.</p>
<h2 id="heading-resources">Resources:</h2>
<ul>
<li><p><a target="_blank" href="https://youtu.be/5ofsNyHZwWE">How Fluentd simplifies collecting and consuming logs | Fluentd simply explained</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/Gp0-7oVOtPw">Introduction to Fluentd: Collect logs and send almost anywhere</a></p>
</li>
<li><p><a target="_blank" href="https://docs.fluentd.org">FluentD Docs</a></p>
</li>
</ul>
<h2 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h2>
<p>Connect: <a target="_blank" href="https://link.kaiwalyakoparkar.com/">https://link.kaiwalyakoparkar.com/</a></p>
]]></content:encoded></item><item><title><![CDATA[Dark Side of DevRel: Moving beyond the stereotypes]]></title><description><![CDATA[Hey everyone, wonder why I chose this title? This was my talk title for DevRelCon Yokohama 2023 and this blog will serve as a recap of my session (and a summary for those who couldn't attend). I hope this blog serves as a good reference point for all...]]></description><link>https://blogs.kaiwalyakoparkar.com/dark-side-of-devrel</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/dark-side-of-devrel</guid><category><![CDATA[DevRel]]></category><category><![CDATA[student]]></category><category><![CDATA[Communities]]></category><category><![CDATA[DevRelCon Yokohama]]></category><category><![CDATA[Conference Talk]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Fri, 17 Mar 2023 14:30:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1678694126109/f780db0c-0192-495c-828a-61219317a074.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone, wonder why I chose this title? It was my talk title for <a target="_blank" href="https://yokohama-2023.devrelcon.dev/speakers/kaiwalya_/">DevRelCon Yokohama 2023</a>, and this blog will serve as a recap of my session (and a summary for those who couldn't attend). I hope it serves as a good reference point for all the students who have uncertainties about DevRel as a job role, as well as for working professionals. Interesting, right? So let's get into it.</p>
<p>This talk was targeted at:</p>
<ol>
<li><p>Students: to help them understand and self-reflect on whether it's something they want to do in the long run.</p>
</li>
<li><p>DevRel professionals: to understand how students and beginners perceive them as DevRels, which might help them see what they can do better or differently to contribute to the community.</p>
</li>
<li><p>Organisations/Companies: to understand what they can do better when building ambassadorships and other programs around students and communities.</p>
</li>
</ol>
<h2 id="heading-wrong-keywords">Wrong Keywords ⚠️</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678694502270/4b973b79-0f59-4804-bfeb-d05ca676c157.png" alt class="image--center mx-auto" /></p>
<p>What do you see in this picture? Haha, it's my friends and me during the KubeCon NA 2022 after-parties. We all love to take photos and post them on social media to share our joy and experiences, and there is nothing wrong with that. But have you ever wondered what keywords these images project when a beginner looks at them, knowing that you work in DevRel? Here are some of the keywords I could find:</p>
<ul>
<li><p>Enjoyment</p>
</li>
<li><p>Fun parties</p>
</li>
<li><p>Travel</p>
</li>
<li><p>Cool events</p>
</li>
<li><p>Amazing lifestyle</p>
</li>
<li><p>Less work</p>
</li>
</ul>
<p>So sharing images can have some negative effects too. Does that mean we shouldn't share fun photos of our memories? Absolutely not, you should; but moving ahead, we will see what else we can do to benefit the community and stop creating misconceptions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678694934618/8b1b2d9d-19f2-4a6a-9bed-491aa48c86cc.png" alt class="image--center mx-auto" /></p>
<p>It's important for me to tell my journey into DevRel, which can help students understand the dimensions DevRel can be described in. I will keep it short: I joined communities that advocated open source and started learning it. While learning, I would help out other learners when I knew certain things, or point them to resources that could help. Eventually, I started participating in community events like monthly calls, live streams, and other events the community used to host. After attending many of these, I took on responsibility and tried hosting them myself. In a similar manner, I applied all this learning and experience in other areas (e.g. CFPs, workshops, talks, etc.). All I wanted to do was <strong>learn -&gt; build -&gt; share -&gt; teach -&gt; help -&gt; explore</strong>, and in the end I realized that what I was doing is also called Developer Relations or Developer Advocacy. By the way, this applies generally: you can learn, teach, and talk about any tech or tool you like a lot (like I talk a lot about open source and GitHub :P). Now that you know my journey, it should be easier for you as a student to reflect on your own journey so far.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678696379178/a4bb2e13-08a0-47a6-9ae2-788b7ed45e8d.png" alt class="image--center mx-auto" /></p>
<p>I went around communities and social media and tried to collect some statistics around the common questions that students had for Developer Relations. Let's take a look at them.</p>
<p><img src="https://lh4.googleusercontent.com/g3mnmsFKQJQ9yPVbrp5hN6HcZDFK_91d8AVkhvQglWvPrZnXGN0KxBZX7YIEQCxLR8wAzlE-VxJUuEvMIxRLh2SnMD1-5_eXAZalsSJh8MGM6kVGkYOMmkO7DDB4rKicL6sgZiBmSlMHoLvV-AkA5Kkb7Qg-45SCUbQgrmDCuCBMoI37Wm-cAdLBNyRtvNgU=s2048" alt="Forms response chart. Question title: Do you know what DevRels do at their job?. Number of responses: 34 responses." /></p>
<p><img src="https://lh6.googleusercontent.com/vP2ReiAUWPmMKqIK_sy5zxFWXemH7Jc_tPfyu-YeA5ttw4zXM2lVtpph3FrH1jHJs-VrvU6j2ARRpFeXlKjlMnqlTZnSNsYBHUQBfEijw6xsCVggP4Dh_6ZzJ_rjyFRPuDRaIEqvY5BGtHGOHr1XrtfQ-fHUWNhjCItcnkvS7_esaeP5koP6Zix-tz4epd3Z=s2048" alt="Forms response chart. Question title: Do you think DevRel is kind of Marketing for the product?. Number of responses: 34 responses." /></p>
<p><img src="https://lh3.googleusercontent.com/8dVh3-dFLCHc6s5nLh2_BxZoHFu0g7UD8XmVqsqxZwFj63g45Vvj6gIF3te3dYZj3aeDQyZsqTWI5sHQgLqR4YFRvtUWqwJRz1XAaSST5mdJ6KhTYRN_WZrxVsf-UKyyn1ilMSAgIInVLNoF-oLDWAugfISpCyQqQarzNrDaLtsmXt0jpBXZEWckKb4EA54R=s2048" alt="Forms response chart. Question title: Do you think DevRels can be from non-tech background?. Number of responses: 34 responses." /></p>
<p><img src="https://lh4.googleusercontent.com/qVhTBOhUZE6fPOIKz6Nh0CTL1fdYNZQ3QlZNjufUvgnDeRijVCqeU9Qz0mcZWFsHlKj5GjNzBoG9KEektkD4app8U_flhU_tHS0IMoPeHzYyiRz6KLEbCt6q3P3I8s5AeNEnUcvnhpchLya6Rt3SnHxXG79n5LCQKKyW6nQjbVZl7aQ26wGSf45oOFnFDJ5E=s2048" alt="Forms response chart. Question title: Does social presence matter for being DevRel?. Number of responses: 34 responses." /></p>
<p><img src="https://lh4.googleusercontent.com/ty71OzbWrrZhGew_5LcfBg6VwHJVNgo7AAtMJjua8MlECb13DvSjkGbC8NeFYblY9gCxPH7fOXr-S2_e87zWtvx1E7tVvQ6Pk4KO9I1ycZm477PmiHKUW2WjLRc-0GBzHjR7KcKBwOnzis38353sA2mjJd34hzFhG69Ywsh_kgWD-top5isYawq033eu41E3=s2048" alt="Forms response chart. Question title: Do you feel any difference between DevRel/ DevAdvocate/ Community Manager/ Community Builder. Number of responses: 34 responses." /></p>
<p><img src="https://lh5.googleusercontent.com/MZoDBW5GMjdRkRKum-WC6fVdam0Ovj0zKWT95hFLc8OFKUJ70Yoo3jyMhf107hjm4VVMYXv815A1YwBK6zMDu-H6JmaeArz5gbFm0s3gS_OZuUr0Tzg_wlrmxRMhTvtW-ESutVGKsMtj4hZGEx2ZBIGN0B_pFUwVrN2ArH1Vb7iq6jUmfrsQylKGLTpTIQFb=s2048" alt="Forms response chart. Question title: Do you know programs that support students in becoming better at communities and eventually DevRel?. Number of responses: 34 responses." /></p>
<p>The statistics are quite shocking, right? The uncertainty and misconceptions are clearly reflected in them. Those were the stats; now let's look at some real conversations I had.</p>
<p>Following are screenshots from students I have been interacting with in Discord communities.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678888145048/cb8d8cf4-d824-4f74-943e-8342aeae09f7.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678888192511/0baedde1-deaf-4896-b25c-1464eeb80665.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678888258071/b892f003-a155-4d0f-ad13-d9514f018301.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678888276614/1599ea17-9e98-4e55-923c-07bb26d1f78f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678888300720/854a2751-e6d1-42fb-b228-4a1ba0b1e210.png" alt class="image--center mx-auto" /></p>
<p><img src="https://lh5.googleusercontent.com/hsF1xPDqWe7h-vg7miv42TUFPA2R533z5ISxqQE3e5JhrTC3cz7uOL_cLAOIa68zFTb0lXPntwU56NaNO1XVAyNtMxGphmodlIBJIJipHu3HQWUjRQWX85Yjqaa9GaRgYNmQ8u7Z5z-vELfFw1ccFGAeuehe8AikEec8LKNV3GACJ7743A246dl09fKxwDcL=s2048" alt /></p>
<p><img src="https://lh4.googleusercontent.com/4eATBHaxkHak3l59ZTE1MmJ6CTHQhxVMdkAoCZHEPK430lgvvttqnmkWGop7lCcERSGDyqMExxeC-AlKCDsgrKCE1uZqF4Xl_biWqs_ZgMBy--82zeG9Q1_7FzBlzevwe0kHyla4Gn9Mp4x-uttTqea-B2baT4ddD-m83cjBEgaPq8gSRGEjDO8t6z0k_aqc=s2048" alt /></p>
<p>These chats really show how social media and the posts DevRels share have a great impact on beginner and student communities.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678697592152/3b733f88-2926-4ee8-8f52-2250c0b04ab4.png" alt class="image--center mx-auto" /></p>
<p>This is my personal opinion, but this is the way I see DevRel: you can totally be an amazing Developer Advocate without holding that title at an organization. You can also do it under a completely different job title. For most of the crowd, the goal is to become a software engineer, so you can be a software engineer by job title and still teach and advocate for the technology and tools you like; that's how this role works. You can create your own job description in DevRel. (This is where many organizations and companies fail: in recognizing whether there is really a need for DevRel on the team and what the job description should be, which confuses the students in the community.)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678698401957/05087ac2-b160-43c4-a7dd-833b9c0166d1.png" alt class="image--center mx-auto" /></p>
<p>I know that as a student you want to be there and do what everyone around you is doing, but that might not always be a good option. Why? Because often we need to ask ourselves whether it is really the thing we want to do. Following the crowd also reduces your freedom to explore and experiment with different things. A while back, <a class="user-mention" href="https://hashnode.com/@adityaoberai">Aditya Oberai</a> wrote an amazing article on Dev.to titled "Everyone Can Do DevRel (But Should They)". You should definitely take a look at it; it is a great resource for self-reflection. [The following image has a clickable link]</p>
<p><a target="_blank" href="https://dev.to/appwrite/everyone-can-do-devrel-but-should-they-2jdl"><img src="https://lh6.googleusercontent.com/-ZDYa46SPNQmmRdkiryKve7_9nuP2jLIm8Npfnw_o7ZO34l7QztffCqe-VI4d4O59RHM_YKUdTr0Zio3GLJJvuUdMrD6I5DU4jFId-0KnYjzfp1v4hGJRXWIo9zJ1wF81N-gx62htYGbbtU1_PN-ZZJyakp1rvEQVlko0wyGQAcHssEfEClpPmbGwDniVFbP=s2048" alt /></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678699276958/4fd2b2b0-4af6-4648-a875-bd594ab531cc.png" alt class="image--center mx-auto" /></p>
<p>I think the title is self-explanatory. If we look at the current tech industry, new trends come up regularly to keep developers and engineers hooked for a while, but continuously chasing trends won't give you the chance to understand anything down to its roots. It will further confuse you about what's going on in the tech world, especially with the rise of questions like "Is there a future in XYZ?" or "Can I get a job with XYZ?". These questions are totally valid, but facing them continuously can deviate you from your goal. There is an amazing article about this written by <a class="user-mention" href="https://hashnode.com/@divyamohan">Divya Mohan</a> which you should definitely check out. Divya writes about the "DevRel influencer trend", which is a really great perspective on social media trends and tech influencers. [The following image has a clickable link]</p>
<p><a target="_blank" href="https://divya-mohan0209.medium.com/the-devrel-influencer-trend-a6e8d618683e"><img src="https://lh3.googleusercontent.com/Ekvl9u_i0Y44PbZ5hi3Gz50SAwE0fqySkq5AdeiYqSoB_sgT34wr_qs9bzSl0XbD3xqnMIfnN_-9Bavd5YnDHONK9bHvVvaid96ZsvR0b8cUsZY1cOo0cV11f6dxETJ4vujh3SbtF7MUCV-H9fnRAg_7eKc20XaoE1ABUpsuh7KuatIy1D0AMUK826VVW19B=s2048" alt /></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678700698948/d39e5a8c-5c7a-4db2-97c3-62293f738148.png" alt class="image--center mx-auto" /></p>
<p>Now, what is this? This is a simple test I came up with to help students and beginners evaluate things before taking up roles or responsibilities. It is an acronym for <strong>Goal -&gt; Learning -&gt; Growth</strong>. Always check whether the opportunity matches your career goal and whether joining the program or role moves you toward that goal. Next is learning: see what new things you will learn from the opportunity. If you are not going to learn anything, I would highly recommend waiting a little for a better opportunity. And the last one is growth: consider the factors that will change once the opportunity is completed. Can you see growth over that time period? Measuring growth is highly subjective, and I can't really tell you how to measure yours, but an important factor is keeping it aligned with your goal.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678701301407/27123d99-ad99-42bf-9077-a09e88305cc2.png" alt class="image--center mx-auto" /></p>
<p>A very good question was tweeted some time ago that got me thinking. After the economic crisis in the tech industry, many questions arose as DevRels at all seniority levels, both lesser known and well known, were let go by multiple companies. Let's take a look at the tweet and the gem replies from wonderful people in the community.</p>
<p><img src="https://lh3.googleusercontent.com/s_kGkWMwDLNWFYL-6RL7tRHiNhOd9vXm1cvxonPKS5afu-pzZRWEC4ua81TQM_YnRd4mag-ze1vW0z8mvMkQko6D-duIHe7TO5ksClLeGKbX2N3-qaSAUwSE1qcp3DNzD-p35TTHfbwLjm26Uw5Iz_r93hgvi1LHiIF_OwBRzBgstksK93tb0BRQfrfUd5GB=s2048" alt /></p>
<p><img src="https://lh3.googleusercontent.com/kb8x6uyQuqO8Yu4y-uXYrZQglg5lkFEiSxp7ZzvSZiHfi2YMuDnCQUFKYfUfLbZw8gXY5m5LUQnyoZXmZD2kht70cVnHMy_yFPKKxeTheHduHKD3XJWofa_J17TOdudmLW6bVr2KDXxk7oXZ73lXWsRhRJAFdw6-uR8FllzUwKD3LqWw1UV_RhocKcTXSRAq=s2048" alt /></p>
<p><img src="https://lh5.googleusercontent.com/q53tpE0r2pkCvxGuZ8xgythBX_bBjOejFSMwq2blPwEuZVUuLHUyFi0YJ6AQ1z0yC6KPu4aoSQBDKVib34r1zQdPSXwkpQFnUqKiRPvJEuPROkAmPomcsub-uGC5YF557t_lHUf-OdLw9Y78NlqlN5UK-D1Enh0gFYGKr8phdp4zn2H_9diVI-6zDenGq9PZ=s2048" alt /></p>
<p>Now that we have talked so much about DevRel and its misconceptions, let's talk a bit about programs students can take part in. This section is interest-dependent, so the programs I mention might not be directly helpful for your goals, but they are worth giving a shot or exploring further. These programs and communities can also be seen as a benchmark for student programs, and organisations can study their structures to improve their own programs and add value for applicants.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1678706401889/22eb6449-b6e3-4ff8-94cb-fe8cc4ae4c78.png" alt class="image--center mx-auto" /></p>
<p>And yeah, that was pretty much it. I really hope you find this blog helpful and are able to decide for yourself whether you should pursue DevRel, or at least now know the reality of it. Again, the definition and job description change from organization to organization, so you should definitely consider the points discussed above to make better decisions and find guidance. Below I have added some of the resources and tweets mentioned, which might help you as you go through other comments and impressions on the posts.</p>
<h3 id="heading-resources">Resources:</h3>
<ul>
<li><p><a target="_blank" href="https://youtu.be/RfOJVdOFIpE">My introduction to this talk</a></p>
</li>
<li><p><a target="_blank" href="https://dev.to/appwrite/everyone-can-do-devrel-but-should-they-2jdl">Everyone can do DevRel (But Should They)</a> - Aditya Oberai</p>
</li>
<li><p><a target="_blank" href="https://newsletter.oberai.dev/p/how-can-devrel-enable-engineering-1270727">How can Devrel enable engineering</a> - Aditya Oberai (Blog)</p>
</li>
<li><p><a target="_blank" href="https://youtu.be/4ta0pkN6Z8s">How can Devrel enable engineering</a> - Aditya Oberai (YouTube Video)</p>
</li>
<li><p><a target="_blank" href="https://twitter.com/JJarzebowski/status/1626566708927381510?s=20">https://twitter.com/JJarzebowski/status/1626566708927381510?s=20</a></p>
</li>
<li><p><a target="_blank" href="https://divya-mohan0209.medium.com/the-devrel-influencer-trend-a6e8d618683e">DevRel influencer trend</a> - Divya Mohan</p>
</li>
<li><p><a target="_blank" href="https://twitter.com/taylor_atx/status/1575218433632804864?s=20">https://twitter.com/taylor_atx/status/1575218433632804864?s=20</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/ghumare64/status/1617175435015774208?s=20">https://twitter.com/ghumare64/status/1617175435015774208?s=20</a></p>
</li>
</ul>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: <a target="_blank" href="https://link.kaiwalyakoparkar.com">https://link.kaiwalyakoparkar.com</a></p>
]]></content:encoded></item><item><title><![CDATA[What is ArgoCD - Introduction, Advantages & Demo]]></title><description><![CDATA[Hey everyone in this blog we will see what is ArgoCD and in what ways it helps to overcome the drawbacks of the regular CI/CD pipeline while working with Kubernetes deployment
What is ArgoCD?
ArgoCD is known as a declarative GitOps tool which is base...]]></description><link>https://blogs.kaiwalyakoparkar.com/what-is-argocd</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/what-is-argocd</guid><category><![CDATA[Devops]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[CNCF]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Thu, 16 Feb 2023 14:30:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676392050664/0490c8a2-590b-4afb-bb86-2496f0c96167.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone! In this blog, we will see what ArgoCD is and how it helps overcome the drawbacks of a regular CI/CD pipeline when working with Kubernetes deployments</p>
<h2 id="heading-what-is-argocd">What is ArgoCD?</h2>
<p>ArgoCD is a declarative GitOps tool built on Kubernetes. Let's break that down. "GitOps" is a process where the code you have written and pushed to a Git hosting service (e.g. GitHub, GitLab, Bitbucket) is taken all the way to deployment, mainly through automation. "Declarative" means the deployment has exactly the architecture you have declared: if something changes in the deployment, it is reverted back to the previous state (the one you have declared)</p>
<h2 id="heading-cd-workflow-without-argocd">CD workflow without ArgoCD:</h2>
<p>Whenever the application code is pushed to the Git repository, it goes through certain steps to reach deployment. These steps are automated and known as a CI/CD pipeline. Once the code changes are pushed, they are tested, an image is built and pushed to Docker Hub or another registry, the manifest files are updated, and finally the changes are applied to the Kubernetes cluster through <code>kubectl apply</code>.</p>
<p>But in this case, we have to face and resolve challenges like:</p>
<ul>
<li><p>Installing and setting up tools like kubectl</p>
</li>
<li><p>Configuring access to Kubernetes</p>
</li>
<li><p>Configuring access to cloud providers</p>
</li>
<li><p>Several security challenges</p>
</li>
<li><p>No visibility of deployment status</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676379180429/9d1ffd6a-e514-4caf-989b-1604a84d3a97.png" alt="Source: Tech with Nana youtube video" class="image--center mx-auto" /></p>
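<p>To make the push-style steps above concrete, here is a minimal sketch of such a pipeline as a GitHub Actions workflow. All names, tags, and secrets here are illustrative, not taken from any particular project:</p>
<pre><code class="lang-yaml">name: push-style-cd
on: push
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build and push the image (registry credentials must live in CI secrets)
      - run: docker build -t myorg/myapp:${{ github.sha }} .
      - run: docker push myorg/myapp:${{ github.sha }}
      # Apply the manifests to the cluster: kubectl and a kubeconfig must be
      # configured inside the CI runner, which is exactly the challenge listed above
      - run: kubectl apply -f k8s/
</code></pre>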
<h2 id="heading-cd-workflow-with-argocd">CD workflow with ArgoCD:</h2>
<p>In a general scenario the above-mentioned workflow is used, but while working with Kubernetes we might need to change it. It is essential to know that, according to Git repository best practices, we should have two separate repositories: one for the application code and one for the application configuration (the Kubernetes manifests).</p>
<p>The main advantage is that there may be times when we want to change only the config files without running the test and build stages, since the configs can be changed independently of the application build. With this setup, ArgoCD (which is installed in your Kubernetes cluster) keeps track of the state you have declared in the configuration repository.</p>
<p>If we were to use the earlier CI/CD pipeline for this, it would be too complex to manage and configure. Once the application is built and up, the changes are reflected in the configuration files. These changes are detected by ArgoCD and pulled into the cluster.</p>
<blockquote>
<p>Note: ArgoCD works on a pull model, not the usual push model</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676380750890/948970a3-9402-4a83-bc9c-cf44f121ee05.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-advantages-of-using-argocd">Advantages of using ArgoCD:</h2>
<ul>
<li><p><strong>Git as a single source of truth:</strong></p>
<p>  This means that if any changes occur in the Kubernetes cluster, the Git repository is the place to validate that everything matches what is declared in the manifests, and any instance that drifts from those instructions is rolled back.</p>
</li>
<li><p><strong>Easy rollback:</strong></p>
<p>  This is a very essential feature of ArgoCD as it helps us to revert to the previous state if there was any problem or misconfiguration in the cluster.</p>
</li>
<li><p><strong>Cluster disaster recovery:</strong></p>
<p>  This means that if one of my servers goes down, I can create another cluster with ArgoCD and point it to the same repository, making the recovery process much more efficient and fast.</p>
</li>
<li><p><strong>K8s access control:</strong></p>
<p>  We can manage the cluster access directly through git so we don't have to worry about giving access to the cluster to an external tool.</p>
</li>
</ul>
<h3 id="heading-lets-try-it-out">Let's try it out!</h3>
<p>ArgoCD is super easy to install and configure. We will be looking at it through an example. I will be using Minikube for my Kubernetes Cluster.</p>
<p>I will be using <a target="_blank" href="https://github.com/kaiwalyakoparkar/practical-devops/tree/main/ArgoCD">this repository</a> for the demo, and you can use the same</p>
<ol>
<li>Install ArgoCD into your cluster using the following commands</li>
</ol>
<pre><code class="lang-bash">kubectl create namespace argocd
</code></pre>
<pre><code class="lang-bash">kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<ol>
<li>Now that we have installed ArgoCD in the <code>argocd</code> namespace, we can create a YAML file with the following code. I will name the file <code>application.yaml</code>.</li>
</ol>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">argoproj.io/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Application</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">wmd-argo-application</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">argocd</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">project:</span> <span class="hljs-string">default</span>

  <span class="hljs-attr">source:</span>
    <span class="hljs-comment">#Your repo link here</span>
    <span class="hljs-attr">repoURL:</span> <span class="hljs-string">https://github.com/kaiwalyakoparkar/practical-devops.git</span>
    <span class="hljs-attr">targetRevision:</span> <span class="hljs-string">HEAD</span>
    <span class="hljs-comment">#Location where the manifests are stored</span>
    <span class="hljs-attr">path:</span> <span class="hljs-string">ArgoCD/dev</span>
  <span class="hljs-attr">destination:</span> 
    <span class="hljs-attr">server:</span> <span class="hljs-string">https://kubernetes.default.svc</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">wmd</span>

  <span class="hljs-attr">syncPolicy:</span>
    <span class="hljs-attr">syncOptions:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">CreateNamespace=true</span>

    <span class="hljs-attr">automated:</span>
      <span class="hljs-attr">selfHeal:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">prune:</span> <span class="hljs-literal">true</span>
</code></pre>
<ol>
<li>Create a <code>dev</code> folder and place your deployment and service files into it like below. (Notice that it's the same path I have mentioned in the <code>application.yaml</code> file above.)</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676384489504/0e9b4cd0-30de-4f31-a2c8-12d7203e55f1.png" alt class="image--center mx-auto" /></p>
<ol>
<li>Let's create a namespace for our application, as mentioned in the manifest files. (Again, the manifest files like <code>deployment.yaml</code> and <code>service.yaml</code> are already present in the repo, so you can check them out there.)</li>
</ol>
<pre><code class="lang-bash"><span class="hljs-comment"># You can give any name to your namespace and update respective files</span>
kubectl create namespace wmd
</code></pre>
<ol>
<li>Now we have to apply the <code>application.yaml</code> file to the cluster. We can do that using the <code>kubectl apply</code> command like below</li>
</ol>
<pre><code class="lang-bash">kubectl apply -f application.yaml
</code></pre>
<ol>
<li>Let's check it out on the ArgoCD dashboard. To see the dashboard, run the following command in your terminal</li>
</ol>
<pre><code class="lang-bash">kubectl port-forward svc/argocd-server 8080:443 -n argocd
</code></pre>
<p>and now you can go to <code>https://localhost:8080</code> to access the ArgoCD dashboard (since we are forwarding to port 443, the server uses a self-signed certificate by default, so your browser may show a warning you can accept)</p>
<ol>
<li>In the <code>Username</code> field enter <code>admin</code>, and to retrieve the password run the following command in the terminal</li>
</ol>
<pre><code class="lang-bash"> kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
</code></pre>
<p>and you will get output something like:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676385052217/39ea7218-3c12-4e8a-ae27-7e1ba2151107.png" alt class="image--center mx-auto" /></p>
<p>We are not done yet. Copy the base64-encoded password (in this case mine is <code>VktPYkNUZGZRcjlNVk1ibQ==</code>) and pass it through another command to decode it and obtain the real password. Run the following command to decode it</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Change after echo with your password</span>
<span class="hljs-built_in">echo</span> VktPYkNUZGZRcjlNVk1ibQ== | base64 --decode
</code></pre>
<p>Now copy the decoded password (remember to omit the <code>%</code> at the end; it is just the shell marking the missing trailing newline, not part of the password) and enter it into the ArgoCD dashboard</p>
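<p>If you prefer to avoid copying the encoded string by hand, the decoding step can be scripted. The encoded value below is the example one from the screenshot above; yours will differ. Using <code>printf</code> and command substitution also avoids the trailing <code>%</code> confusion entirely:</p>

```shell
# Example base64-encoded password (replace with the value from your own secret)
encoded="VktPYkNUZGZRcjlNVk1ibQ=="

# printf adds no trailing newline; base64 --decode recovers the plain text
password=$(printf '%s' "$encoded" | base64 --decode)
echo "$password"   # prints: VKObCTdfQr9MVMbm
```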
<ol>
<li>Now you will be able to view the application in the dashboard</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676385309478/b06800fa-7e3a-4450-a3d2-fa5d7d3fe633.png" alt class="image--center mx-auto" /></p>
<p>Click on the application name and you are able to see a beautiful map of your deployment</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676385374374/a388639a-8fbd-457a-876d-5c2cc555a866.png" alt class="image--center mx-auto" /></p>
<p>You can see every possible piece of information about the cluster here, including the health of the pods, the cluster, and the services. ArgoCD polls for changes every 3 minutes by default; this interval can be changed, or replaced with webhooks and other methods for faster updates.</p>
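<p>As an aside on that polling interval: ArgoCD reads it from the <code>timeout.reconciliation</code> key of its <code>argocd-cm</code> ConfigMap. A minimal sketch of that change (the 60s value is just an example, not a recommendation):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # Default is 180s (3 minutes); ArgoCD polls Git at this interval
  timeout.reconciliation: 60s
</code></pre>
<p>After applying this, you may need to restart the ArgoCD components for the new interval to take effect.</p>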
<p>And done 🎉 you have successfully configured ArgoCD for your cluster. Now, whenever you want to change anything in the deployment, you can directly change the manifest and push it to the Git repository, and the changes will be reflected by ArgoCD automatically.</p>
<h3 id="heading-resources">Resources:</h3>
<ul>
<li><p><a target="_blank" href="https://argo-cd.readthedocs.io/en/stable/">ArgoCD documentation</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/MeU5_k9ssrs">ArgoCD Tutorial for Beginners | GitOps CD for Kubernetes</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/p-kAqxuJNik">What is ArgoCD</a></p>
</li>
</ul>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: <a target="_blank" href="https://link.kaiwalyakoparkar.com">https://link.kaiwalyakoparkar.com</a></p>
]]></content:encoded></item><item><title><![CDATA[Rust is not that rusty]]></title><description><![CDATA[Hey everyone, if you follow me on my socials you already know that I started learning Rust a while back. And today after completing the basic fundamentals, I would like to share my learning about the same with you'll. This blog not might be the most ...]]></description><link>https://blogs.kaiwalyakoparkar.com/what-is-rust</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/what-is-rust</guid><category><![CDATA[Rust]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[programming languages]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Fri, 10 Feb 2023 14:30:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676030186855/b4952bd9-e97d-44a6-8920-1515ffecbea2.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone, if you follow me on my <a target="_blank" href="https://link.kaiwalyakoparkar.com/">socials</a> you already know that I started learning Rust a while back. Today, after completing the basic fundamentals, I would like to share my learnings with you all. This blog might not be the most accurate piece of information you can find on the internet about Rust, as I am writing it from my own understanding of what Rust is. So if you would like to correct or suggest something, please feel free to do so in the comments below. With that, let's get started 🚀</p>
<h2 id="heading-what-is-rust">What is Rust</h2>
<p>Rust is a programming language with a lot of potential. Rust became popular because of its similarities to C++, with some extra flexible and highly efficient features. It quickly became one of the most popular languages, as it is trusted for great performance (super fast and memory efficient), reliability, and productivity (good error messages, documentation support, an integrated package manager, etc.) while building software.</p>
<h2 id="heading-why-is-rust-becoming-popular">Why is Rust becoming popular?</h2>
<p>Apart from all its great features, Rust became popular due to its highly flexible and fitting nature across different domains. Some of them are:</p>
<ol>
<li><p>Command Line: Building and distributing command-line tools is much easier thanks to Rust's robust ecosystem, which also helps with their maintenance.</p>
</li>
<li><p>WebAssembly: This concept of taking web apps to the next level is truly revolutionary, and Rust lets us target WebAssembly to build efficient web applications.</p>
</li>
<li><p>Networking: Rust is a popular choice for building network services, as it provides the reliability and small resource footprint needed for predictable performance</p>
</li>
<li><p>Embedded: Many embedded software developers prefer low-level languages like C, but Rust lets these developers work at a low level without giving up high-level language features</p>
</li>
</ol>
<h2 id="heading-installation">Installation</h2>
<p>Installation is pretty simple for Rust. If you are using Linux or macOS, you can simply run the following curl command</p>
<pre><code class="lang-bash">curl --proto <span class="hljs-string">'=https'</span> --tlsv1.2 -sSf https://sh.rustup.rs | sh
</code></pre>
<p>You will get a prompt like the one below; press 1 and hit Enter, and that's it, Rust is installed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676029697791/d46c8600-fd31-47c3-a24d-eb2645f0a778.png" alt class="image--center mx-auto" /></p>
<p>If you want to check if it's installed correctly you can run the following commands:</p>
<pre><code class="lang-bash">rustc --version
</code></pre>
<pre><code class="lang-bash">cargo --version
</code></pre>
<h2 id="heading-what-is-cargo">What is cargo?</h2>
<p>Cargo is the package manager for Rust. If you have worked with JavaScript or Python, it is similar to npm and pip. You can use the following cargo commands to run and build your code</p>
<pre><code class="lang-bash"># This command initializes the rust project
cargo init
</code></pre>
<pre><code class="lang-bash"># This command is used to run your rust program
cargo run
</code></pre>
<pre><code class="lang-bash"># This command is used to build your rust program (creates the /target folder)
cargo build
</code></pre>
<pre><code class="lang-bash"># This command builds a release-ready executable
cargo build --release
</code></pre>
<h2 id="heading-creating-a-rust-project">Creating a rust project</h2>
<p>Now that we are done with introductions and basics, let's try to create the first hello world program with Rust.</p>
<p><strong>Step 1:</strong> Go to your preferred location in your file system, create a folder with your project name (here we will use <code>hello</code>), and open your terminal inside it</p>
<p><strong>Step 2:</strong> Run <code>cargo init</code> this will create some folders and files needed.</p>
<p><strong>Step 3:</strong> Navigate inside the <code>src/</code> folder and you should see the <code>main.rs</code> file. This is the file we will make our changes in</p>
<p><strong>Step 4:</strong> Add the following code to the <code>main.rs</code> file</p>
<pre><code class="lang-rust"><span class="hljs-function"><span class="hljs-keyword">fn</span> <span class="hljs-title">main</span></span>() {
    <span class="hljs-built_in">println!</span>(<span class="hljs-string">"Hello World"</span>);
}
</code></pre>
<p><strong>Step 5:</strong> In the terminal, navigate to the location of the <code>main.rs</code> file</p>
<p><strong>Step 6:</strong> Run <code>cargo run</code> and you should be able to see the output as <code>Hello World</code>.</p>
<p>And that's it, you have successfully created your first Rust program. In upcoming blogs, I will go deeper into basic programming with Rust, where we will cover variables, conditionals, loops, and other language-specific things one needs to remember.</p>
<h2 id="heading-resources">Resources:</h2>
<ul>
<li><p><a target="_blank" href="https://www.rust-lang.org/">Rust Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/zF34dRivLOw">Rust Crash Course | Rustlang</a></p>
</li>
<li><p>Graphics and images are not made by me and rights remain with the respective entity</p>
</li>
</ul>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: <a target="_blank" href="https://link.kaiwalyakoparkar.com">https://link.kaiwalyakoparkar.com</a></p>
]]></content:encoded></item><item><title><![CDATA[An Introduction to Prometheus: The Basics You Need to Know]]></title><description><![CDATA[Hey everyone! In this blog, we are going to see what is Prometheus and how we can use it to get meaningful analysis of the data that we are getting out of microservices and infrastructure. Although this might sound a bit advanced and not much of an a...]]></description><link>https://blogs.kaiwalyakoparkar.com/introduction-to-prometheus</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/introduction-to-prometheus</guid><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[monitoring]]></category><category><![CDATA[#prometheus]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Sun, 08 Jan 2023 15:30:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1673185562480/75f9dbb8-7e10-498a-ab23-91a06df6bc17.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone! In this blog, we are going to see what Prometheus is and how we can use it to get meaningful analysis of the data coming out of our microservices and infrastructure. Although this might sound a bit advanced and not immediately applicable for a beginner, I hope that by the end of this blog I am able to give you reasons why you should include and use Prometheus in your infrastructure. So let's get started</p>
<h2 id="heading-what-is-prometheus">What is Prometheus 🤔</h2>
<p>Prometheus is a monitoring tool for highly dynamic container environments, and even for bare-metal servers. With the day-to-day increasing complexity of applications, it is becoming difficult to handle everything manually, and automation is needed. For example, if you have many services and objects running on a server, there is no insight into what is going on at the hardware or application level. There might be cases when a service goes down and causes other services to malfunction. In such cases, you should be able to quickly find the cause and solve it.</p>
<p>In a regular scenario this won't be easy, as you will have to trace a long way back to find the root service causing the trouble and fix it. So Prometheus constantly monitors all the services and alerts the admins or individual users when something crashes. Prometheus also helps you detect these problems beforehand (e.g. resource depletion, memory shortage, CPU/RAM usage)</p>
<p><img src="https://media-cdn.squaredup.com/wp-content/uploads/2021/07/22131456/cluster-metrics.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-prometheus-architecture">Prometheus Architecture ✨</h2>
<h3 id="heading-prometheus-server">Prometheus server:</h3>
<p>This is the part that actually does the monitoring work. The Prometheus server comprises 3 components:</p>
<ul>
<li><p><strong>Data Retrieval Worker:</strong> This component pulls the metrics data from the applications and services</p>
</li>
<li><p><strong>Time Series Database:</strong> This is a database that stores all the metrics data.</p>
</li>
<li><p><strong>HTTP Server:</strong> This component accepts queries and serves the data from the Time Series Database to a web UI (e.g. Grafana)</p>
</li>
</ul>
<p><em>(You can see how these components are put in architecture from the image below)</em></p>
<p><img src="https://www.redhat.com/sysadmin/sites/default/files/styles/embed_large/public/2020-07/Picture1Arch.png?itok=TJGEDX3p" alt class="image--center mx-auto" /></p>
<h3 id="heading-what-are-targets">What are Targets 🎯:</h3>
<p><em>Targets</em> are any objects monitored by Prometheus. Targets include Linux/Windows servers, Apache servers, single applications, services, or databases. These targets have <em>units</em> to monitor; for example, units for a Linux server can be CPU state, memory usage, etc.</p>
<h3 id="heading-what-are-metrics">What are Metrics 📊:</h3>
<p>Prometheus provides a human-readable format for the metrics collected from targets. Metrics entries are annotated with TYPE and HELP attributes: the HELP attribute describes what the metric is, and the TYPE attribute is one of 3 metric types</p>
<ul>
<li><p><strong>Counter:</strong> This metric keeps track of how many times something happened, for example how many times a particular error occurred or how many alerts were sent.</p>
</li>
<li><p><strong>Gauge:</strong> This metric keeps track of the current value of a unit.</p>
</li>
<li><p><strong>Histogram:</strong> This metric keeps track of how long requests took and how big they were.</p>
</li>
</ul>
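<p>For reference, here is roughly what those HELP and TYPE attributes look like in the text format a target exposes (the metric names below are typical examples, not taken from a specific setup):</p>
<pre><code># HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027
# HELP node_memory_free_bytes Free memory in bytes.
# TYPE node_memory_free_bytes gauge
node_memory_free_bytes 223344
# HELP http_request_duration_seconds Request latency.
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.5"} 129389
http_request_duration_seconds_count 144320
http_request_duration_seconds_sum 53423
</code></pre>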
<h3 id="heading-target-endpoints-and-exporters">Target endpoints and exporters 🚛:</h3>
<p>Some services expose a default endpoint with data for Prometheus, but many services need another component, and that component is an <em>exporter</em>. An exporter is a service or script that fetches metrics from the target, converts them to a format Prometheus understands, and exposes the converted data on its own <code>/metrics</code> endpoint.</p>
<h3 id="heading-advantages-of-alert-manager">Advantages of Alert Manager 🔔:</h3>
<p>The Alertmanager checks the rules set by the admin or user and triggers when any of the given rules is breached. It then sends alerts and signals on the configured channels like email, Slack, Discord, etc.</p>
<p><img src="https://miro.medium.com/max/646/0*zShJJwUBC0ecPRkL" alt class="image--center mx-auto" /></p>
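<p>To illustrate what such rules look like, here is a minimal Prometheus alerting rule (a generic example, not tied to any specific setup): it fires when any scraped target has been unreachable for 5 minutes, and Alertmanager then routes the resulting alert to your configured channels.</p>
<pre><code class="lang-yaml">groups:
- name: example-alerts
  rules:
  - alert: InstanceDown
    # `up` is a built-in metric: 0 means the last scrape of that target failed
    expr: up == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Instance {{ $labels.instance }} has been down for 5 minutes"
</code></pre>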
<h3 id="heading-common-characteristics-of-prometheus">Common characteristics of Prometheus 📝:</h3>
<ul>
<li><p>Reliable: Prometheus is highly reliable, as you get clear insight into what is going on inside your services and applications</p>
</li>
<li><p>Stand-alone and self-contained: Prometheus doesn't need any outside service to support its functioning.</p>
</li>
<li><p>It also works even if other parts of the infrastructure are broken. It is meant and supposed to work as a separate service.</p>
</li>
<li><p>Prometheus doesn't need any extensive setup. It has its own Helm chart, which makes it easy to configure and use for monitoring your infrastructure and applications</p>
</li>
<li><p>Prometheus is less complex compared to other monitoring tools.</p>
</li>
</ul>
<h2 id="heading-setting-up-prometheus-on-k8s">Setting up Prometheus on K8s ⚙️:</h2>
<ol>
<li><p>Create a separate namespace called <code>monitoring</code></p>
<pre><code class="lang-bash"> kubectl create namespace monitoring
</code></pre>
</li>
<li><p>Add the Prometheus community repo to helm:</p>
<pre><code class="lang-bash"> helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
</code></pre>
</li>
<li><p>Update the helm repo list</p>
<pre><code class="lang-bash"> helm repo update
</code></pre>
</li>
<li><p>Install Prometheus operator on your cluster using helm:</p>
<pre><code class="lang-bash"> helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
</code></pre>
</li>
<li><p>Port-forward Grafana dashboard service to access the Prometheus metrics:</p>
<pre><code class="lang-bash"> kubectl port-forward svc/prometheus-grafana 3000:80 -n monitoring
</code></pre>
</li>
<li><p>Visit <code>http://localhost:3000</code> to access the Grafana dashboard</p>
</li>
<li><p>Once on the dashboard, you can use the following username and password to login as they are the default for everyone at the start and ofcourse you can update them if you want.</p>
<pre><code class="lang-bash"> username: admin
 password: prom-operator
</code></pre>
</li>
</ol>
<p>And that's it, you have successfully added Prometheus as the monitoring engine for your infrastructure, and you can modify the Grafana dashboard according to your preferences to suit your needs. The most essential parts of Prometheus are its time series data and database, and the types of metrics you can expect. If you dig deeper into the above topics, you will have a much better understanding of how to adapt Prometheus at each step to your specific use case.</p>
<h2 id="heading-references">References 📖</h2>
<ul>
<li><p><a target="_blank" href="https://prometheus.io/">Prometheus Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/QoDqxm7ybLc">Setup Prometheus Monitoring on Kubernetes using Helm and Prometheus Operator</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/2USCcDbbAZc">Writing a Prometheus exporter from IDE to deployed in 20 minutes</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/h4Sl21AKiDg">How Prometheus Monitoring works | Prometheus Architecture explained</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/kaiwalyakoparkar/practical-devops/tree/main/Prometheus">kaiwalyakoparkar/practical-devops</a></p>
</li>
</ul>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: <a target="_blank" href="https://link.kaiwalyakoparkar.com">https://link.kaiwalyakoparkar.com</a></p>
]]></content:encoded></item><item><title><![CDATA[What is Helm? - Introduction, features, cheatsheet, and more]]></title><description><![CDATA[Hey everyone! Welcome back. In today's blog, we will be learning about what is helm in depth. Although there are many resources and one-line definitions for helm (Which is a good thing, to be honest) having an in-depth understanding of helm can be be...]]></description><link>https://blogs.kaiwalyakoparkar.com/what-is-helm</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/what-is-helm</guid><category><![CDATA[Devops]]></category><category><![CDATA[Helm]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[CNCF]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Sat, 24 Dec 2022 15:30:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1671715437999/AoEhp1xZZ.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone! Welcome back. In today's blog, we will be learning what Helm is, in depth. Although there are many resources and one-line definitions for Helm (which is a good thing, to be honest), an in-depth understanding of Helm is beneficial when working in an organization or when you want to use Helm as more than just a package manager. So let's get started 🚀</p>
<h2 id="heading-what-is-helm">What is Helm?</h2>
<p>Helm is a tool that helps in defining, upgrading, and managing the deployments you need in your Kubernetes cluster. Helm also works as a templating engine, which makes it really helmful (intended pun - helpful 😅) when you are working across teams with lots of files and configurations. Let's look at some features to understand everything I just mentioned in detail.</p>
<h2 id="heading-features-of-helm">Features of Helm</h2>
<h3 id="heading-1-package-manager-and-charts">1. Package Manager and Charts:</h3>
<p>Helm is used to package YAML files. These YAML files are the deployment and config files for the applications you want in your Kubernetes cluster. This makes it easier to distribute these files as packages in public or private repositories.</p>
<p>For example, say you want to use the Elastic Stack. To include it in your cluster you would have to write the <code>deployment</code> YAML files and create <code>secrets</code>, <code>configmaps</code>, a <code>Kubernetes user with permissions</code>, and multiple <code>Services</code>. And anyone who wants to use the Elastic Stack would have to repeat this process again. What if someone packaged everything into a ready-made template format so that it becomes more manageable for everyone? That is indeed done: these packages are called Helm charts, and they can be shared via repositories.</p>
<p><img src="https://i.imgur.com/FX4WBz8.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-2-templating-engine">2. Templating Engine</h3>
<p>Let's say you have an application with multiple microservices running. The config YAML files for these microservices are identical, and the only things that change are the <code>name</code>, the <code>image</code> name, etc. Without Helm, you would create a separate file with the exact same code for every microservice. Helm lets you turn this repeated code into a single template file, while all the variable properties go into a values file. Let's see what these files look like:</p>
<pre><code class="lang-yaml"><span class="hljs-comment">#Template for pod.yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> {{ <span class="hljs-string">.Values.name</span> }}
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">containers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> {{ <span class="hljs-string">.Values.container.name</span> }}
    <span class="hljs-attr">image:</span> {{ <span class="hljs-string">.Values.container.image</span> }}
    <span class="hljs-attr">port:</span> {{ <span class="hljs-string">.Values.container.port</span> }}
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-comment">#values.yml</span>
<span class="hljs-attr">name:</span> <span class="hljs-string">my-app</span>
<span class="hljs-attr">container:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">my-app-container</span>
    <span class="hljs-attr">image:</span> <span class="hljs-string">my-app-image</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">9001</span>
</code></pre>
<p>The following is a logical diagram of how the template and values files work together:</p>
<p><img src="https://raw.githubusercontent.com/christianh814/kbe-guide/main/04-argocd-working-with-helm/img/helm.jpg" alt="Argo CD Working With Helm | Kube by Example" class="image--center mx-auto" /></p>
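<p>To make the idea concrete, here is a hedged sketch of what a templating engine does at its core, using plain shell substitution (illustration only; this is not how Helm's Go templating actually works, and the <code>{{name}}</code> placeholder syntax here is simplified):</p>
<pre><code class="lang-bash"># A hypothetical one-line template and a value, for illustration
template='name: {{name}}'
name='my-app'

# Substitute the placeholder, the way a template engine merges values
rendered=$(printf '%s' "$template" | sed "s/{{name}}/$name/")
echo "$rendered"
</code></pre>
<p>With a real chart you would run <code>helm template</code> to render the templates against <code>values.yaml</code> locally and inspect the resulting manifests.</p>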
<h3 id="heading-3-release-management">3. Release Management</h3>
<p>Another prominent feature of Helm is release management. Helm (up to version 2) comes in two parts: the <code>helm cli</code> and the <code>helm server</code> (also called <code>Tiller</code>). The CLI sends a request to the server (which runs inside the Kubernetes cluster), and upon that request <code>Tiller</code> creates components like <code>pods</code> and <code>services</code> inside the Kubernetes cluster. This is what enables release management.</p>
<p>When you create a deployment, <code>Tiller</code> keeps a copy for future reference, building up a release history. When you run something like <code>helm upgrade &lt;chartname&gt;</code> to upgrade a version (for example <code>version: 1.14.2</code>), the current deployments are updated in place instead of being removed and recreated. This enables seamless rollout and rollback.</p>
<p><img src="https://miro.medium.com/max/646/1*1lHh6xk05cs9AO3w-vh7iA.jpeg" alt="Simplifying App Deployment in Kubernetes with Helm Charts | by Kirill  Goltsman | Supergiant.io | Medium" class="image--center mx-auto" /></p>
<h2 id="heading-downside-of-helm">Downside of Helm:</h2>
<h3 id="heading-security-issue-caused-by-tiller">Security issue caused by Tiller:</h3>
<p>Although <code>Tiller</code> has amazing use cases, it holds too many permissions on your Kubernetes cluster, like <code>CREATE</code>, <code>UPDATE</code>, and <code>DELETE</code>, and hence poses a big security threat. Thus in Helm version 3, <code>Tiller</code> has been removed and Helm is now just the client binary.</p>
<p>This takes the server-side release management component away from Helm and makes that part a bit more difficult.</p>
<h3 id="heading-helm-commands-cheatsheet">Helm Commands - Cheatsheet</h3>
<p>You can use this cheatsheet of commonly used Helm commands:</p>
<pre><code class="lang-bash"><span class="hljs-comment">#Helm chart creation</span>
helm create &lt;name&gt;

<span class="hljs-comment">#Helm install (release name, then chart)</span>
helm install &lt;release-name&gt; &lt;chart&gt;

<span class="hljs-comment">#Listing releases</span>
helm list

<span class="hljs-comment">#Pulling helm charts</span>
helm pull &lt;chart URL | repo/chartname&gt;

<span class="hljs-comment">#Pushing helm chart</span>
helm push &lt;chart&gt; &lt;remote&gt;

<span class="hljs-comment"># add a chart repository</span>
helm repo add &lt;repo-name&gt; &lt;url&gt;

<span class="hljs-comment">#generate an index file given a directory containing packaged charts</span>
helm repo index &lt;dir&gt;

<span class="hljs-comment"># list chart repositories</span>
helm repo list

<span class="hljs-comment"># remove one or more chart repositories</span>
helm repo remove &lt;repo-name&gt;

<span class="hljs-comment"># update information of available charts locally from chart repositories</span>
helm repo update

<span class="hljs-comment"># search for a keyword in chart repositories</span>
helm search repo &lt;keyword&gt;
</code></pre>
<p>And that is all for Helm. There is surely more you can dive into, but as a starter the above knowledge is enough. Check out the following resources :)</p>
<h2 id="heading-references">References 📖</h2>
<ul>
<li><p><a target="_blank" href="https://helm.sh/docs/">Helm Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/fy8SHvNZGeE">What is Helm?</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/-ykwb1d0DXU">What is Helm in Kubernetes? Helm and Helm charts explained</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/kaiwalyakoparkar/practical-devops">kaiwalyakoparkar/practical-devops</a></p>
</li>
</ul>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: <a target="_blank" href="https://link.kaiwalyakoparkar.com">https://link.kaiwalyakoparkar.com</a></p>
]]></content:encoded></item><item><title><![CDATA[Understanding Policy Management using Kyverno]]></title><description><![CDATA[Hey everyone, welcome back. In this blog, we are going to see about Kyverno which is a policy engine which is designed for Kubernetes. This might sound like a lot at the start but it's fairly easy. There are some prerequisites though, you don't need ...]]></description><link>https://blogs.kaiwalyakoparkar.com/understanding-policy-management-using-kyverno</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/understanding-policy-management-using-kyverno</guid><category><![CDATA[kyverno]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Tue, 20 Dec 2022 13:25:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1671522325171/n66HVwQ8r.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone, welcome back! In this blog, we are going to look at Kyverno, a policy engine designed for Kubernetes. This might sound like a lot at the start, but it's fairly easy. There is a small prerequisite though: you don't need to be an expert in Kubernetes, but it would help if you have deployed 1-2 apps on Kubernetes before. So let's get started!</p>
<h2 id="heading-what-is-kyverno">➡️ What is Kyverno 🤔 ?</h2>
<p><img src="https://pbs.twimg.com/media/FjT1EhzWYAEidmw?format=png&amp;name=4096x4096" alt="Kyverno - Kubernetes Native Policy Management (@kyverno) / Twitter" /></p>
<p>The official Kyverno documentation describes it as follows:</p>
<blockquote>
<p>Kyverno is a policy engine designed for Kubernetes. With Kyverno, policies are managed as Kubernetes resources and no new language is required to write policies. This allows using familiar tools such as <code>kubectl</code>, <code>git</code>, and <code>kustomize</code> to manage policies. Kyverno policies can validate, mutate, and generate Kubernetes resources plus ensure OCI image supply chain security. The Kyverno CLI can be used to test policies and validate resources as part of a CI/CD pipeline.</p>
<p><a target="_blank" href="https://kyverno.io/docs/introduction/"><strong>Documentation</strong></a></p>
</blockquote>
<p>But what are all these "policies" we are talking about? To simplify, let's take an example:</p>
<p>We have all been to schools, colleges, or universities at some point in our lives (some might be pursuing one right now, like me :D), and there are various rules we need to keep in mind while attending them. Those rules are like policies in the "Kubernetes" school or college. If you want to restrict, validate, or mandate properties or features that your deployment should or shouldn't have, you can express that using policies, and Kyverno helps you create the policies that validate your deployment and raise errors if a requirement is not met. So, as you might have guessed, this gives you total control over what goes into your deployment and whether it is up to the guidelines you have created.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671523179320/DWmCM2q5T.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-architecture-of-kyverno">➡️ Architecture of Kyverno ⚙️</h2>
<p>Let's look into the architecture. Understanding every part of it is not essential at this point, but knowing where everything goes and how the workflow runs can help you visualize and implement things better. Take a look at the image below.</p>
<p><img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GP4ttue4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w0kz4zb9e1bkdtsf44h.png" alt="Kubernetes policy management: III - Kyverno - DEV Community 👩‍💻👨‍💻" /></p>
<p>As shown, Kyverno acts as a middleman when you apply manifest files to your cluster. It verifies each request and, if it passes, applies it to the requested section of the deployment. Enough theory, let's try it out!</p>
<h2 id="heading-hands-on-time">➡️ Hands-on time 🤩!</h2>
<p>Now we will do a simple <code>nginx</code> container deployment. But this time we don't want the <code>:latest</code> image of <code>nginx</code>: the deployment will be denied if the image has the <code>:latest</code> tag and allowed only with another tag, <code>:1.14.2</code>. So let's start.</p>
<h3 id="heading-installations"><strong>Installations 💻 :</strong></h3>
<p><strong>1)</strong> We will be using Helm in this tutorial, so it's essential that you have <a target="_blank" href="https://helm.sh/docs/intro/install/">helm installed</a>. Then go ahead and create a Kubernetes cluster on any cloud service provider of your choice; in this tutorial I will be using <a target="_blank" href="https://minikube.sigs.k8s.io/docs/start/">minikube</a>.</p>
<p><strong>2)</strong> Now let's add the Kyverno repo to Helm. Run the following command in a terminal:</p>
<pre><code class="lang-bash">helm repo add kyverno https://kyverno.github.io/kyverno/
</code></pre>
<p><strong>3)</strong> Now we have to add Kyverno to our cluster. Run the following two commands:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># This command is not mandatory; it installs the pod security standards implemented by Kyverno. I have included it because it can be valuable practice in the long run :) Run it after installing Kyverno below, since the kyverno namespace must exist first.</span>
helm install kyverno-policies kyverno/kyverno-policies -n kyverno
</code></pre>
<pre><code class="lang-bash">helm install kyverno kyverno/kyverno -n kyverno --create-namespace
</code></pre>
<pre><code class="lang-bash"><span class="hljs-comment"># Output for the above command:</span>
NAME: kyverno
LAST DEPLOYED: Tue Dec 20 14:31:48 2022
NAMESPACE: kyverno
STATUS: deployed
REVISION: 1
NOTES:
Chart version: 2.6.5
Kyverno version: v1.8.5

Thank you <span class="hljs-keyword">for</span> installing kyverno! Your release is named kyverno.
</code></pre>
<p><strong>4)</strong> You can verify that it was installed successfully by running the following <code>kubectl</code> command:</p>
<pre><code class="lang-bash">kubectl get deploy -n kyverno
</code></pre>
<p>You should get something like this 👇</p>
<pre><code class="lang-bash">NAME      READY   UP-TO-DATE   AVAILABLE   AGE
kyverno   1/1     1            1           98s
</code></pre>
<p>If you get this then you have successfully installed Kyverno and you are ready to work with it.</p>
<h3 id="heading-creating-testing-andamp-managing-policies"><strong>Creating, Testing &amp; Managing Policies ✨ :</strong></h3>
<p><strong>1)</strong> Now create a folder named <code>Kyverno</code> on your computer and add two files: <code>nginx.yml</code>, which will be our deployment file, and <code>my-policy.yml</code>, which will hold the policy we are trying to apply. Add the following <code>yml</code> configuration to them. (It can also be found <a target="_blank" href="https://github.com/kaiwalyakoparkar/practical-devops/tree/main/Kyverno">here</a>.)</p>
<p>-- This is code for <code>nginx.yml</code> (Notice here the image tag is <code>1.14.2</code>)</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-deployment</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:1.14.2</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<p>-- This is the code for <code>my-policy.yml</code> (You can browse the different policy <a target="_blank" href="https://kyverno.io/policies/">templates</a>; the one we are customizing and using here is <a target="_blank" href="https://kyverno.io/policies/best-practices/disallow_latest_tag/disallow_latest_tag/">this</a>)</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># This is original policy file from the kyverno docs</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">kyverno.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ClusterPolicy</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">disallow-latest-tag</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">policies.kyverno.io/title:</span> <span class="hljs-string">Disallow</span> <span class="hljs-string">Latest</span> <span class="hljs-string">Tag</span>
    <span class="hljs-attr">policies.kyverno.io/category:</span> <span class="hljs-string">Best</span> <span class="hljs-string">Practices</span>
    <span class="hljs-attr">policies.kyverno.io/severity:</span> <span class="hljs-string">medium</span>
    <span class="hljs-attr">policies.kyverno.io/subject:</span> <span class="hljs-string">Pod</span>
    <span class="hljs-attr">policies.kyverno.io/description:</span> <span class="hljs-string">&gt;-
      The ':latest' tag is mutable and can lead to unexpected errors if the
      image changes. A best practice is to use an immutable tag that maps to
      a specific version of an application Pod. This policy validates that the image
      specifies a tag and that it is not called `latest`.      
</span><span class="hljs-attr">spec:</span>
  <span class="hljs-attr">validationFailureAction:</span> <span class="hljs-string">audit</span>
  <span class="hljs-attr">background:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">require-image-tag</span>
    <span class="hljs-attr">match:</span>
      <span class="hljs-attr">resources:</span>
        <span class="hljs-attr">kinds:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">Pod</span>
    <span class="hljs-attr">validate:</span>
      <span class="hljs-attr">message:</span> <span class="hljs-string">"An image tag is required."</span>
      <span class="hljs-attr">pattern:</span>
        <span class="hljs-attr">spec:</span>
          <span class="hljs-attr">containers:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">"*:*"</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">validate-image-tag</span>
    <span class="hljs-attr">match:</span>
      <span class="hljs-attr">resources:</span>
        <span class="hljs-attr">kinds:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">Pod</span>
    <span class="hljs-attr">validate:</span>
      <span class="hljs-attr">message:</span> <span class="hljs-string">"Using a mutable image tag e.g. 'latest' is not allowed."</span>
      <span class="hljs-attr">pattern:</span>
        <span class="hljs-attr">spec:</span>
          <span class="hljs-attr">containers:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">"!*:latest"</span>
</code></pre>
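<p>Conceptually, these two rules boil down to simple string checks on the image reference. Here is a hedged shell sketch of the same logic (illustration only; the <code>check_image_tag</code> helper is hypothetical, and this is not how Kyverno evaluates its patterns internally):</p>
<pre><code class="lang-bash"># Hypothetical helper mirroring the policy's intent
check_image_tag() {
  case "$1" in
    *:latest) echo "denied: mutable ':latest' tag"; return 1 ;;  # second rule
    *:*)      echo "allowed: $1"; return 0 ;;                    # has a tag
    *)        echo "denied: an image tag is required"; return 1 ;;  # first rule
  esac
}

check_image_tag "nginx:1.14.2"          # allowed
check_image_tag "nginx:latest" || true  # denied by the second rule
</code></pre>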
<p><strong>2)</strong> Now edit the <code>spec</code> section of the configuration in <code>my-policy.yml</code> as below (on line <code>16</code> to be specific):</p>
<pre><code class="lang-yaml"><span class="hljs-attr">spec:</span>
  <span class="hljs-attr">validationFailureAction:</span> <span class="hljs-string">enforce</span>
  <span class="hljs-attr">background:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">failurePolicy:</span> <span class="hljs-string">Fail</span>
</code></pre>
<p><strong>3)</strong> We are done with the edits. Now let's see if Kyverno stops us when we try to deploy <code>nginx:latest</code>. For this, update <code>nginx.yml</code> so it looks as follows:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-deployment</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">nginx</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">nginx:latest</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">80</span>
</code></pre>
<p><strong>4)</strong> Now apply <code>my-policy.yml</code> using the following command:</p>
<pre><code class="lang-bash">kubectl apply -f my-policy.yml
</code></pre>
<p>You will get output similar to</p>
<pre><code class="lang-bash">clusterpolicy.kyverno.io/disallow-latest-tag created
</code></pre>
<p><strong>5)</strong> Now apply <code>nginx.yml</code> using the following command. (Remember, as we have edited it to <code>nginx:latest</code>, this should not get deployed and we should get an error.)</p>
<pre><code class="lang-bash">kubectl apply -f nginx.yml
</code></pre>
<p>You should get an error message similar to</p>
<pre><code class="lang-bash">Error from server: error when creating <span class="hljs-string">"nginx.yml"</span>: admission webhook <span class="hljs-string">"validate.kyverno.svc-fail"</span> denied the request: 

policy Deployment/default/nginx-deployment <span class="hljs-keyword">for</span> resource violation: 

disallow-latest-tag:
  autogen-validate-image-tag: <span class="hljs-string">'validation error: Using a mutable image tag e.g. '</span><span class="hljs-string">'latest'</span><span class="hljs-string">'
    is not allowed. rule autogen-validate-image-tag failed at path /spec/template/spec/containers/0/image/'</span>
</code></pre>
<p>Wohoo 🎉 You successfully created a policy that disallows the <code>:latest</code> tag on images.</p>
<p><strong>6)</strong> Now change the tag in the <code>nginx.yml</code> file to <code>:1.14.2</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">image:</span> <span class="hljs-string">nginx:1.14.2</span>
</code></pre>
<p>Now let's try to apply the <code>nginx.yml</code> file again. This time it should get deployed, as the image no longer has the <code>:latest</code> tag:</p>
<pre><code class="lang-bash">kubectl apply -f nginx.yml
</code></pre>
<p>And the output for this should be</p>
<pre><code class="lang-bash">deployment.apps/nginx-deployment created
</code></pre>
<p>That's it! That's how you enforce and manage policies using Kyverno. You can use the many <a target="_blank" href="https://kyverno.io/policies/">pre-made policies</a> or create your own custom ones. And that is all for this blog; I would definitely like to hear your feedback in the comments below. Until next time 👋</p>
<h2 id="heading-references">References 📖</h2>
<ul>
<li><p><a target="_blank" href="https://kyverno.io/docs/">Kyverno Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://youtu.be/M_-r6vUKevQ">Kyverno Overview -- Defining Kubernetes Cluster Policies</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/kaiwalyakoparkar/practical-devops">kaiwalyakoparkar/practical-devops</a></p>
</li>
<li><p>The above images are not created by me and are taken from the internet. The credit for these images goes to their respective creators :)</p>
</li>
</ul>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: <a target="_blank" href="https://link.kaiwalyakoparkar.com">https://link.kaiwalyakoparkar.com</a></p>
]]></content:encoded></item><item><title><![CDATA[7 steps to craft a perfect CFP for your next conference.]]></title><description><![CDATA[Hey everyone, if you are here then you have either submitted CFPs before or a total beginner who is looking to get more information about this process. I feel like this blog will be able to help both categories of people. So let's begin and get your ...]]></description><link>https://blogs.kaiwalyakoparkar.com/7-steps-to-craft-a-perfect-cfp</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/7-steps-to-craft-a-perfect-cfp</guid><category><![CDATA[conference]]></category><category><![CDATA[tips and tricks]]></category><category><![CDATA[cfp]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Tue, 22 Nov 2022 16:30:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/6vAjp0pscX0/upload/v1669122514077/XpJrFCITD.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone! If you are here, then you have either submitted CFPs before or you are a total beginner looking for more information about the process. I feel this blog will be able to help both categories of people. So let's begin and get your next CFP selected 🔥</p>
<h2 id="heading-what-is-cfp">What is CFP?</h2>
<p>CFP stands for "Call for Papers" or "Call for Proposals". It is an announcement by organizers that they have started accepting talk ideas. The concept of a CFP is really simple: whenever one is open, you submit your idea, write briefly about yourself and your topic, and try to pitch your talk to the organizers. If your CFP is selected, you are invited to give the talk at that particular event or conference. Now let's start with our 7 steps.</p>
<h2 id="heading-understand-the-audiences">➡️ Understand the audiences</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669122588538/UlfwFCVDg.png" alt="image.png" class="image--center mx-auto" /></p>
<h3 id="heading-attendees">Attendees :</h3>
<p>It shouldn't be about you; it should be about the audience. For example, "My 5 tips to ..." talks about you; instead, the focus should be on what attendees will gain from the session. Understand who your attendees will be and make your description specific to those people rather than generic. Understanding your audience can also help you find angles that put your ideas forward and generate curiosity in that specific segment of attendees.</p>
<h3 id="heading-organisers">Organisers :</h3>
<p>Now think about who is going to select the CFPs and what they want to see. Above all, they want their attendees to have a great experience, and they will be looking for talks that help provide it. Your talk description should demonstrate the potential to engage the audience and give them a great experience while attending your talk.</p>
<h2 id="heading-why-you">➡️ Why you?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669123708454/PG7otVHh0.png" alt="image.png" class="image--center mx-auto" /></p>
<p>This is one of the biggest factors to consider while creating your CFP. Self-validation is important, but you should also think about why "you" should be the person to give this talk, and your CFP should reflect that. You should be able to clearly demonstrate why you are the most suitable person to give this talk. Mention the points that you think differentiate you from others in your bio and description.</p>
<h2 id="heading-research-is-important">➡️ Research is important</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669124715183/TMRJPLl1S.webp" alt="research_project.webp" class="image--center mx-auto" />
Of course you should know about the topic you are presenting. But the research I am referring to here is knowing the conference or event you are submitting your talk to. You should know what the conference is and what types of talks will be there (perhaps from past talk lists). Most conferences publish the themes they are looking for, so be on the lookout for them. If the organizers have run the conference before, you can visit the past talks, read the descriptions, and see what themes and types of talks they prefer most. Even when presenting a topic you like, you can find different angles to present the same idea.</p>
<h2 id="heading-lets-write">➡️ Let's write</h2>
<p><img src="https://media.giphy.com/media/63H62LSXP60QAggS4e/giphy.gif" alt="lets write" class="image--center mx-auto" />
The biggest thing to remember is that whatever you write will likely end up on the website, in the newsletter, and in other promotional content, and it sometimes won't be easy to change. Will that put pressure on you while writing? Honestly, it might, but don't worry: it can actually boost your confidence. For example, instead of writing "If this talk gets selected I will talk about ...", be more firm: "In this talk, xyz will be talking about ...". At this point, look at yourself and your background and see if you can include it in some way. Take a look at the following description from my talk at KubeCon NA 2022:</p>
<blockquote>
<p>Are you a student trying to pursue cloud-native? Are you perplexed and interested in learning how to build a career in cloud-native? Attend the panel discussion to learn how hackathons can help you advance in cloud-native development. Hackathons are an excellent way to hone collaborative abilities, communication skills, and engineering skills all while keeping it fun, beginner-friendly, and a great experience over weekends. The panel will demonstrate how students can contribute to CNCF projects while participating in hackathons, specifically cloud-native hackathons. The panel is made up of students who have competed in over 145 global hackathons collectively in the past 12 months (winning a collective of 33 out of them and 5 together as a team) The panelists are also mentors, judges, and organizers at student hackathons and will share their experience as an organizer with those folks who are willing to bring the cloud-native culture via hackathons to their local community. The panelists will demonstrate how organizing CNCF hackathons could help attract more young people and students to contribute."</p>
</blockquote>
<h2 id="heading-it-all-starts-with-a-title">➡️ It all starts with a title</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669125155110/kwe_a3miU.png" alt="image.png" />
Whether it's a blog, a YouTube video, or a talk proposal, organizers and audiences like compelling titles. Some writers suggest spending as much as 80% of your proposal-writing effort on the title alone (sounds weird, but it comes up again and again). Iterate on your title over and over and you will feel your titles getting better and better. Some even suggest writing at least 25 titles for the same topic before you get a grip on what would actually work. While doing so, think about what impression the title will make on the two types of audiences we discussed before.</p>
<h2 id="heading-get-feedbacks">➡️ Get Feedback</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669129945651/yoKGQiykl.png" alt="image.png" />
This is self-explanatory: feedback is important for quickly testing whether your thoughts come across to another person. Get feedback on your proposal from at least one person, and ask for it to be as descriptive as possible so you can see what impression your writing makes and whether that matches your expectations. The best way to ask for feedback is to be specific about what feedback you want; that way, rather than pointing out your typos, the other person knows to focus on your ideas. You can then work back and forth on improving the write-up.</p>
<h2 id="heading-network-with-organisers">➡️ Network with Organisers</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669130297222/BNvuqP1lF.png" alt="image.png" />
Okay, this might be debatable, depending on whether you consider connecting with organizers ethical. But I feel networking doesn't hurt anyone; on the contrary, it helps you understand the organizers better and learn what they really want, which supports one of the points mentioned above: you want to help the organizers.</p>
<h2 id="heading-references">References 📖</h2>
<ul>
<li><a target="_blank" href="https://www.twitch.tv/videos/1655386493">The art of writing conference proposals</a></li>
</ul>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: https://link.kaiwalyakoparkar.com</p>
]]></content:encoded></item><item><title><![CDATA[What is K8s? - Jumpstart your Kubernetes Journey  ⎈]]></title><description><![CDATA[Hey everyone, heard a lot about kubernetes and want to know more and get started with it. Before you get started with kubernetes I would highly suggest you all to get familarized with containers and docker. I have a blog written on Docker you can sur...]]></description><link>https://blogs.kaiwalyakoparkar.com/what-is-kubernetes</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/what-is-kubernetes</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[getting started]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Tue, 08 Nov 2022 15:30:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1667909455505/DKp1BGef9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone, heard a lot about kubernetes and want to know more and get started with it. Before you get started with kubernetes I would highly suggest you all to get familarized with containers and docker. I have a blog written on <a target="_blank" href="https://kaiwalyakoparkar.hashnode.dev/docker-beginner-to-advance">Docker</a> you can surely check that out and get back to this articles. So, let's get started 🚀</p>
<h2 id="heading-history">History</h2>
<p><img src="https://media.giphy.com/media/9V71czzFLwBNE0O8EV/giphy.gif" alt /></p>
<h3 id="heading-bare-metal-servers">- Bare metal servers</h3>
<p>It is typically a single physical server used by only one consumer or tenant. Each server offered for rental is a distinct piece of hardware that is a fully functional server on its own.</p>
<h3 id="heading-virtual-machine">- Virtual Machine</h3>
<p>You might have already used virtual machines for trying out different operating systems. Virtual machines let multiple pieces of software share the same physical resources, instead of requiring separate physical hardware for each, which would increase costs drastically.</p>
<h3 id="heading-containers">- Containers</h3>
<p>Containers are conceptually similar to virtual machines, but the key difference is that instead of virtualizing the entire hardware stack, containers share the host operating system's kernel and run each program in its own isolated environment, which makes them much lighter weight.</p>
<h2 id="heading-monolithic-vs-microservices">Monolithic v/s Microservices</h2>
<p>There are multiple resources covering monolithic and microservice architectures, so I will keep this as simple and short as possible (it is not really the main scope of this blog and would need a dedicated article of its own).
Let's say you have an application with a frontend, backend, database, etc. If you containerize the entire application in one container, that is called a monolithic architecture: if that container goes down, your entire instance of the application goes down. The reverse is to containerize each part of your application and connect the parts to each other; each containerized part of the application is then called a microservice. <em>(See the image below for a better understanding)</em></p>
<p><img src="https://i.imgur.com/0wyC8JU.png" alt="Monolitic v/s microservices image" /></p>
<h2 id="heading-what-is-orchestration">What is Orchestration?</h2>
<p>Orchestration has multiple definitions. You can think of it as an automation process that helps us deploy and manage applications. The orchestrator keeps watching for anything wrong with your deployment and tries to self-heal and scale, keeping production damage to a minimum. You can picture it as the conductor of an orchestra, who is responsible for creating the melody and monitoring it continuously.</p>
<p><img src="https://i.imgur.com/3Av5QBg.png" alt="Orchestrator" /></p>
<h2 id="heading-what-is-kubernetes">What is Kubernetes?</h2>
<p>Kubernetes, also known as k8s (fun fact: it is called k8s because there are 8 letters between the <code>K</code> and the <code>s</code> in the word <code>Kubernetes</code>), is widely described as a container orchestrator. But Kubernetes is more than just a container orchestrator; it helps you orchestrate containers and provides additional features like:</p>
<ul>
<li>Deploy your application</li>
<li>Enable zero-downtime updates (also called rolling updates)</li>
<li>Scale your application dynamically as traffic increases or decreases</li>
<li>Self-heal containers and other services</li>
<li>Run on your own machine/cloud</li>
<li>Run on public cloud providers</li>
<li>Migrate from one cloud provider to another</li>
<li>Replicate services and scale those services</li>
<li>Use Volumes (much more about volumes in upcoming blogs)</li>
<li>Load balancing, one of its most prominent features</li>
</ul>
<p>There are many more such features provided by Kubernetes apart from the ones mentioned above, which separate it from being 'just a container orchestrator'.</p>
<h2 id="heading-kubernetes-cluster">Kubernetes Cluster</h2>
<p>The cluster is the outermost box that holds everything. A cluster is simply a collection of a control plane and worker nodes. <em>(Refer to the diagram below along with the explanation in the blog to build a good intuition of the architecture)</em></p>
<p><img src="https://i.imgur.com/CrV5lgs.png" alt="K8s Cluster" /></p>
<h3 id="heading-kubectl-cli">kubectl (CLI)</h3>
<p>This is the command-line tool provided for communicating with the control plane and applying changes. Optionally, if you don't want to work with a CLI, there are a few GUIs you can look into. kubectl must be installed on your local machine in order to talk to the API server in the control plane.</p>
<h3 id="heading-control-plane">Control Plane</h3>
<p>1) <strong>API Server:</strong> All communication happens via the API server. It listens for requests over <code>HTTPS</code> on port <code>443</code></p>
<p>2) <strong>Controller Manager:</strong> If the command is to create 5 pods, a controller takes care of that, while the controller manager oversees the controllers. Each controller runs a reconciliation loop with these functions:</p>
<ul>
<li><strong>Desired state:</strong> Reads the desired state of the cluster as communicated by the API server</li>
<li><strong>Current state:</strong> Constantly checks whether the running state matches the desired state</li>
<li><strong>Differences &amp; make changes:</strong> Detects differences and makes the changes requested via the API server</li>
</ul>
<p>3) <strong>Scheduler:</strong> This is responsible for physically scheduling the pods. It inspects the worker nodes and schedules workloads accordingly.</p>
<p>4) <strong>etcd:</strong> The database that contains the information about the cluster; if the API server wants information about the cluster, it communicates with etcd.</p>
<h3 id="heading-worker-node">Worker node</h3>
<p>1) <strong>Kubelet:</strong> Communicates with Container Runtime and with Control Plane API server. Whenever a new worker node is created and attached to the control plane, kubelet is installed on it.</p>
<p>2) <strong>Kube-Proxy:</strong> Communicates with the control plane API server. Kube-proxy is responsible for networking; it handles IP addressing and routing for the worker node(s)</p>
<p>3) <strong>Container Runtime:</strong></p>
<ul>
<li><strong>Pod:</strong> The smallest scheduling unit in Kubernetes. A pod wraps one or more containers; the cluster monitors pod health and re-creates a pod if it dies (via controllers, in the bigger picture). <em>Containers run inside a pod</em></li>
<li><strong>Container:</strong> It's simply a container running your application or service. A container is an isolated environment in which to build and run your application. You can read more in my <a target="_blank" href="https://kaiwalyakoparkar.hashnode.dev/docker-beginner-to-advance">Docker blog</a></li>
</ul>
<h2 id="heading-steps-to-run-applications-on-kubernetes">Steps to run applications on Kubernetes</h2>
<p>1) Create microservices</p>
<p>2) Containerize your microservices</p>
<p>3) Put containers in pods</p>
<p>4) Deploy the pods via controllers (e.g., Deployments)</p>
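<p>The four steps above can be expressed as a single manifest. Below is a minimal sketch of a Deployment that wraps a container in pods and hands them to a controller (the name <code>my-app</code> and the image <code>nginx:1.25</code> are just placeholders for illustration):</p>
<pre><code class="lang-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # hypothetical name; steps 1-2 produced the container image
spec:
  replicas: 3               # the controller keeps 3 pod copies running
  selector:
    matchLabels:
      app: my-app
  template:                 # step 3: the pod wrapping your container
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25   # placeholder image from step 2
        ports:
        - containerPort: 80
</code></pre>
<p>Applying it with <code>kubectl apply -f deployment.yml</code> performs step 4 in one shot.</p>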
<h2 id="heading-k8s-dns">K8s DNS</h2>
<p><img src="https://media.giphy.com/media/Zx1KzuQBR8wIbrm81t/giphy.gif" alt />
This concept is pretty easy to understand if you have worked with Docker Compose or other kinds of network-based communication between containers and services. If you haven't, don't worry. K8s DNS gives every pod and service a resolvable name and IP address so they can communicate with each other and share information. From the previous application example, you can imagine it as the way your frontend containers talk to your backend containers.</p>
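<p>As a small sketch of what this looks like in practice (the names <code>backend</code>, <code>app: backend</code>, and the port numbers are hypothetical), a Service gives a stable DNS name to a set of pods:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Service
metadata:
  name: backend         # reachable from other pods as http://backend:8080
spec:
  selector:
    app: backend        # routes traffic to pods carrying this label
  ports:
  - port: 8080          # port the service exposes
    targetPort: 3000    # port the backend container actually listens on
</code></pre>
<p>Within the same namespace, a frontend pod can simply call <code>http://backend:8080</code>; the fully qualified form is <code>backend.&lt;namespace&gt;.svc.cluster.local</code>.</p>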
<h2 id="heading-all-required-installations">All required installations</h2>
<p>1) <a target="_blank" href="https://www.docker.com/products/docker-desktop/">Docker Desktop</a></p>
<p>2) <a target="_blank" href="https://kubernetes.io/docs/tasks/tools/">Kubectl</a></p>
<p>3) <a target="_blank" href="https://minikube.sigs.k8s.io/docs/start/">Minikube</a></p>
<h2 id="heading-command-cheatsheets">Command Cheatsheets</h2>
<h3 id="heading-minikube-commands">Minikube commands</h3>
<pre><code class="lang-shell">$ minikube start

$ minikube status

$ minikube dashboard

$ minikube docker-env

# will switch to bash prompt of minikube
$ minikube ssh
</code></pre>
<h3 id="heading-kubectl-commands">Kubectl commands</h3>
<pre><code class="lang-shell">$ kubectl version --client

$ kubectl get pods

$ kubectl get nodes

# Shows information about the cluster
$ kubectl config view

# get everything like pods
$ kubectl get all

$ kubectl get deployments

$ kubectl create -f &lt;pod-config yaml file&gt;

# Get more info in wide format
$ kubectl get pod &lt;pod-name&gt; -o wide

# Get more info in yaml format
$ kubectl get pod &lt;pod-name&gt; -o yaml

# Port forwarding like docker
$ kubectl port-forward nginx-pod 8080:80

# Gets the replica sets
$ kubectl get rs view

# Get services
$ kubectl get services

# Apply the configurations to the k8s cluster
$ kubectl apply -f &lt;file_name&gt;.yml

# See the rollout history of the deployment
$ kubectl rollout history deploy/&lt;name-of-deployment&gt;

# Rollback to previous versions of the deployment
$ kubectl rollout undo deploy/&lt;name-of-deployment&gt; --to-revision &lt;no-of-revision-from-history-command&gt;
</code></pre>
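<p>Several of the commands above take a config file. A minimal pod manifest matching the <code>nginx-pod</code> used in the port-forward example might look like this (a sketch; the image tag is illustrative):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25   # illustrative tag
    ports:
    - containerPort: 80   # matches kubectl port-forward nginx-pod 8080:80
</code></pre>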
<p><img src="https://media.giphy.com/media/4zJa3fUd2yQA1jHARC/giphy.gif" alt /></p>
<p>Well, that's it; that is pretty much what goes on in Kubernetes. You will understand and love it more once you build and deploy your own application on it. Trust me, the best way to understand and learn Kubernetes is through hands-on practice. Want to build an application and deploy it to your own cluster? Keep an eye on my blog page; there might be something for you in the near future 😉</p>
<h2 id="heading-references">References</h2>
<ul>
<li><a target="_blank" href="https://youtu.be/KVBON1lA9N8">Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!</a></li>
<li><a target="_blank" href="https://youtu.be/d6WC5n9G_sM">Kubernetes Course - Full Beginners Tutorial (Containerize Your Apps!)</a></li>
<li><a target="_blank" href="https://spacelift.io/blog/kubernetes-tutorial">Kubernetes Tutorial for Beginners – Basic Concepts &amp; Examples</a></li>
</ul>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: https://link.kaiwalyakoparkar.com/</p>
]]></content:encoded></item><item><title><![CDATA[Docker Beginner to Advance 🐳]]></title><description><![CDATA[Hey everyone, welcome back to the new blog. In today's blog we are going to learn about Docker. So without wasting time let's get straight into it.
What is Docker?
Docker is a tool which helps you run you application in isolated environment. Docker c...]]></description><link>https://blogs.kaiwalyakoparkar.com/docker-beginner-to-advance</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/docker-beginner-to-advance</guid><category><![CDATA[Docker]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Sat, 15 Oct 2022 14:30:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1671529691603/Oj6TqEb6p.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone, welcome back to the new blog. In today's blog we are going to learn about Docker. So without wasting time let's get straight into it.</p>
<h2 id="heading-what-is-docker">What is Docker?</h2>
<p>Docker is a tool that helps you run your application in an isolated environment. Docker creates an isolated environment, often called a "container", where you run your application without interference from external factors. This solves a major problem, and an old joke: "But it works on my computer". You can easily share images to spin up containers, and the application will run exactly the same as it did on the machine where the image was built. This makes Docker a very effective tool when deploying your application to production.
<img src="https://i.imgur.com/yIvuoYU.png" alt="What is docker" /></p>
<h2 id="heading-steps-to-containerize-your-application">Steps to Containerize your application</h2>
<p>In this section we will see 3 essential steps you need to follow in order to containerize your application
<img src="https://i.imgur.com/nXkKoQ7.png" alt="steps of cont" /></p>
<h3 id="heading-dockerfile">Dockerfile</h3>
<p>This file is often referred to as the "recipe" for your container. It is a simple file that contains each and every step needed to set up and run your project, from the packages you install to the command you run to start that specific application. We will look more into how you can build your own Dockerfile in the coming sections</p>
<h3 id="heading-docker-image">Docker Image</h3>
<p>An image is what you build out of your Dockerfile. An image is easily shareable and storable, so if you want someone to try out your application, you can just send them the Docker image and they will be able to run the application in exactly the same way without setting anything up on their local machines. You can also use these images to push your application to production. You can think of an image as a "class", in object-oriented programming terms.</p>
<h3 id="heading-docker-container">Docker Container</h3>
<p>A container is a running instance of an application. Containers are built and set up from Docker images, so whenever you get an image and run it, a 'container' is created with all that information; this is the point at which your application is actually running. You can create multiple containers from a single image. You can think of a container as an "object", in object-oriented programming terms.</p>
<p><img src="https://i.imgur.com/Ro2YJ44.png" alt="Run multiple containers" /></p>
<h2 id="heading-build-your-first-dockerfile">Build your first Dockerfile</h2>
<p>Every step in the Dockerfile acts as a layer for the next step while building the image. Let's take a simple Dockerfile as an example.</p>
<pre><code class="lang-dockerfile">FROM node:alpine
COPY . ./
RUN npm install
CMD ["npm", "start"]
</code></pre>
<p>This is a Dockerfile for a simple Node.js API script.</p>
<ul>
<li><code>FROM</code>: Specifies the base image on which your application runs. This is an essential instruction in every Dockerfile</li>
<li><code>COPY</code>: A simple copy command that takes two arguments: what to copy, in this case everything, so <code>.</code>; and where to copy it, in this case <code>./</code>, the image's working directory</li>
<li><code>RUN</code>: A command to execute while building the image. In this case I ran <code>npm install</code> because, well, I need the <code>node_modules</code> folder to run my Node.js application</li>
<li><code>CMD</code>: The command to run after the container is created and set up. You can keep adding arguments to this array; if it's only one command, you don't necessarily need the array format.</li>
</ul>
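<p>Since every instruction creates a cached layer, a common refinement (a sketch, not part of the example above) is to copy <code>package.json</code> first, so the <code>npm install</code> layer only re-runs when dependencies change:</p>
<pre><code class="lang-dockerfile">FROM node:alpine
WORKDIR /app
# Copy only the dependency manifests first so this layer stays cached
COPY package.json package-lock.json ./
RUN npm install
# Now copy the rest of the source; editing code no longer re-runs npm install
COPY . .
CMD ["npm", "start"]
</code></pre>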
<h2 id="heading-basic-docker-commands">Basic Docker Commands</h2>
<h3 id="heading-status-checking-commands">Status Checking Commands</h3>
<pre><code class="lang-bash"><span class="hljs-comment">#Check docker version</span>
$ docker -v

<span class="hljs-comment">#See all the images in docker</span>
$ docker image ls <span class="hljs-comment">#[OR]</span>
$ docker images

<span class="hljs-comment">#See the running containers</span>
$ docker container ls

<span class="hljs-comment">#See all the containers</span>
$ docker container ls -a

<span class="hljs-comment">#See the running docker processes</span>
$ docker ps

<span class="hljs-comment">#See all the docker processes</span>
$ docker ps -a

<span class="hljs-comment">#Inspect a docker container</span>
$ docker inspect &lt;container-name&gt;/&lt;container-id&gt;

<span class="hljs-comment">#Inspect a docker image</span>
$ docker inspect &lt;image-name&gt;/&lt;image-id&gt;

<span class="hljs-comment">#See all the networks on host</span>
$ docker network ls
</code></pre>
<h3 id="heading-execution-commands">Execution Commands</h3>
<pre><code class="lang-bash"><span class="hljs-comment"># Pull an image from dockerhub</span>
$ docker pull &lt;image-name&gt;

<span class="hljs-comment"># Run the image (if not present on device then it will pull and run)</span>
$ docker run &lt;image-name&gt;

<span class="hljs-comment"># Run the image with specific version</span>
$ docker run &lt;image-name&gt;:&lt;version&gt;

<span class="hljs-comment"># Run the image with attached linux command (Command comes after image name)</span>
$ docker run &lt;image-name&gt; <span class="hljs-built_in">echo</span> hello

<span class="hljs-comment"># Run the image in interactive mode</span>
$ docker run -it &lt;image-name&gt;

<span class="hljs-comment"># Run the image in detached mode (in background)</span>
$ docker run -d &lt;image-name&gt;

<span class="hljs-comment"># Run the application in the container with port forwarding to your machine</span>
$ docker run -d -p &lt;local-port(eg: 8080)&gt;:&lt;service-default-port&gt; &lt;image-name&gt;
eg:
<span class="hljs-comment"># You can now access nginx on http://localhost:8080</span>
$ docker run -d -p 8080:80 nginx

<span class="hljs-comment"># Start Container</span>
$ docker start &lt;container-name&gt;/&lt;container-id&gt;

<span class="hljs-comment"># Stop Container</span>
$ docker stop &lt;container-name&gt;/&lt;container-id&gt;

<span class="hljs-comment"># Remove image</span>
$ docker rmi &lt;image-name&gt;/&lt;image-id&gt; -f

<span class="hljs-comment"># Remove container</span>
$ docker rm &lt;container-name&gt;/&lt;container-id&gt;

<span class="hljs-comment"># Get the logs of the running container</span>
$ docker logs &lt;container-id&gt;
</code></pre>
<h2 id="heading-building-and-deploying-commands">Building and Deploying Commands</h2>
<pre><code class="lang-bash"><span class="hljs-comment"># Build your own image from a Dockerfile</span>
$ docker build -t &lt;image-name&gt;:&lt;image-version&gt; &lt;dockerfile-path&gt;

<span class="hljs-comment"># Push the image to docker hub</span>
$ docker push &lt;hub-user&gt;/&lt;repo-name&gt;:&lt;tag&gt;
</code></pre>
<h2 id="heading-networking-in-docker">Networking in Docker</h2>
<p>When you install Docker, it creates 3 networks automatically</p>
<h3 id="heading-bridge">Bridge:</h3>
<ul>
<li>It is the default network that gets attached to a container.</li>
<li>It is a private, internal network created by Docker on the host. </li>
<li>Every container gets an internal IP address, and containers can reach each other using these internal IPs. </li>
<li>To access any container from outside, you need to map a port of the container to a port on the Docker host</li>
<li>You can run multiple web containers on the same Docker host using the same container port, as long as they map to different host ports</li>
</ul>
<p><img src="https://i.imgur.com/FRd7AuM.png" alt="Default Docker Network" /></p>
<h3 id="heading-none">None:</h3>
<ul>
<li>In this type the container is not connected to any network.</li>
<li>This means it has no connections with external networks nor other containers on the same docker host.</li>
<li>And the container will run in totally isolated network.</li>
<li>Use the following command to attach this network to your container<pre><code class="lang-bash">$ docker run --network=none &lt;image-name&gt;
</code></pre>
<img src="https://i.imgur.com/0qMamCQ.png" alt="None network" /></li>
</ul>
<h3 id="heading-host">Host:</h3>
<ul>
<li>To access the container externally you can attach the host network parameter to the container and it takes out any network isolation between docker host and docker container. </li>
<li>So if you have a web app container on port 5000, it can be accessed externally on the same port without any port mapping.</li>
<li>This means you can't run multiple containers on the same Docker host on the same port, as ports are now shared by all containers on the host network</li>
<li>Use the following command to attach this network to your container</li>
</ul>
<pre><code class="lang-bash">$ docker run &lt;image-name&gt; --network=host
</code></pre>
<p><strong>Example of host networking:</strong> Taking the same <code>nginx</code> example as above</p>
<pre><code class="lang-bash"><span class="hljs-comment"># with port mapping http://localhost:80</span>
$ docker run -d -p 80:80 nginx

<span class="hljs-comment"># without port mapping, host networking http://localhost:80</span>
$ docker run -d --network=host nginx
</code></pre>
<p><img src="https://i.imgur.com/008w9jq.png" alt="Host Network" /></p>
<h3 id="heading-user-define-networks">User-define networks</h3>
<p>When you create containers on the same Docker host, there is only 1 default bridge (e.g. 172.17.0.1) connecting all the containers. If you want to create a new bridge, say with IP 182.18.0.1, on the same host, you need to do it with the following command</p>
<pre><code class="lang-bash">$ docker network create \
    --driver bridge \
    --subnet 182.18.0.0/16 \
    custom-isolate-network
</code></pre>
<p><img src="https://i.imgur.com/yPy1BFx.png" alt="User Defined Network" /></p>
<h2 id="heading-docker-compose">Docker Compose</h2>
<p>Docker Compose is a feature provided by Docker to quickly connect and spin up multiple containers, usually built from different images, that together make up one application. 
<img src="https://i.imgur.com/P1LSF4m.png" alt /></p>
<p>For example, a frontend image, e.g. <code>voting</code>, and a backend database in <code>mongodb</code>; let's try creating a Docker Compose file for it</p>
<pre><code class="lang-yml"><span class="hljs-attr">version:</span> <span class="hljs-string">"2"</span>
<span class="hljs-attr">services:</span>
    <span class="hljs-attr">voting:</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">local-fe-voting</span>
        <span class="hljs-attr">networks:</span> 
             <span class="hljs-bullet">-</span> <span class="hljs-string">frontend</span>
    <span class="hljs-attr">db:</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">mongo</span>
        <span class="hljs-attr">networks:</span> 
            <span class="hljs-bullet">-</span> <span class="hljs-string">backend</span>

<span class="hljs-comment">#This will create different networks on same docker host</span>
<span class="hljs-attr">networks:</span>
    <span class="hljs-attr">frontend:</span>
    <span class="hljs-attr">backend:</span>
</code></pre>
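<p>Two practical additions you will often want on top of this file (a sketch: the port numbers are placeholders, and note that for <code>voting</code> to reach <code>db</code> the two services must share a network):</p>
<pre><code class="lang-yml">version: "2"
services:
    voting:
        image: local-fe-voting
        ports:
            - "8080:80"     # placeholder host:container port mapping
        depends_on:
            - db            # start the database before the frontend
        networks:
            - frontend
            - backend       # joins both networks so it can talk to db
    db:
        image: mongo
        networks:
            - backend

networks:
    frontend:
    backend:
</code></pre>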
<h2 id="heading-docker-engine-architecture">Docker Engine/ Architecture</h2>
<p>Let's look at the Docker architecture and how it runs applications in containers under the hood. When you install Docker on your computer, you essentially download 3 different components</p>
<p><img src="https://i.imgur.com/xOSb5JO.png" alt="Architecture" /></p>
<h3 id="heading-docker-deamon">Docker Daemon</h3>
<p>It is a background process that manages Docker objects like images, containers, volumes, etc.</p>
<h3 id="heading-rest-api">REST API</h3>
<p>It is a programming interface used to talk to the Docker daemon and give it instructions. You can use this API to create your own Docker tools</p>
<h3 id="heading-docker-cli">Docker CLI</h3>
<p>And this is the CLI that we have used until now to perform actions. It uses the REST API to talk to the Docker daemon.
The thing to note here is that the Docker CLI need not be on the same host; it can be on a different computer and still communicate with a remote Docker engine. Use the following command to do so</p>
<pre><code class="lang-bash">$ docker -H=&lt;remote-docker-engine&gt;:&lt;port-number&gt;

<span class="hljs-comment">#example with nginx remote docker engine</span>
$ docker -H=10.123.2.1:2375 run nginx
</code></pre>
<h2 id="heading-container-orchestration">Container Orchestration</h2>
<p>With a simple <code>docker run</code> command you create one instance of your application, but that's just one instance; what if you wanted multiple instances? In that case you would have to run <code>docker run</code> multiple times. Not just that, you would have to closely monitor the health of each container and spin up a new instance if a running one goes down. And what about the health of the Docker host itself? If the Docker host goes down, the containers hosted on it become inaccessible too.
Orchestration is a set of tools and scripts that helps us host containers in a real production environment. Generally, a container orchestration solution has multiple Docker hosts that can host containers, so even if one fails, the application is still accessible on the others.
The following is the command used in Docker Swarm</p>
<pre><code class="lang-bash">$ docker service create --replicas=100 nodejs
</code></pre>
<p><img src="https://i.imgur.com/T6yy5f6.png" alt="Container Orchestration" /></p>
<p>There are multiple orchestration solutions available nowadays</p>
<ul>
<li>Docker Swarm</li>
<li>Kubernetes</li>
<li>Mesos</li>
</ul>
<p>Well, we dove quite deep into the details of Docker, from simple commands to the architecture, but that's it for this blog. See you in the next one :D</p>
<h3 id="heading-references">References:</h3>
<ul>
<li><a target="_blank" href="https://youtu.be/17Bl31rlnRM">Docker Tutorial for Beginners - What is Docker? Introduction to Containers</a></li>
<li><a target="_blank" href="https://youtu.be/fqMOX6JJhGo">Docker Tutorial for Beginners - A Full DevOps Course on How to Run Applications in Containers</a></li>
</ul>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: https://bio.link/kaiwalya</p>
]]></content:encoded></item><item><title><![CDATA[YAML for Dummies 🤔]]></title><description><![CDATA[Hey everyone, In this blog I am going to take you through what is yaml and actually write your first file with yaml.
What is Serialization and Deserialization:
Let's first understand what is Serialization and Deserialization. Let's say you have a obj...]]></description><link>https://blogs.kaiwalyakoparkar.com/yaml-for-dummies</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/yaml-for-dummies</guid><category><![CDATA[YAML]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[learning]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Fri, 14 Oct 2022 15:30:45 GMT</pubDate><content:encoded><![CDATA[<p>Hey everyone, In this blog I am going to take you through what is yaml and actually write your first file with yaml.</p>
<h2 id="heading-what-is-serialization-and-deserialization">What is Serialization and Deserialization:</h2>
<p>Let's first understand what Serialization and Deserialization are. Let's say you have an object that you want to store in a file. You can't just copy-paste the object into the file, so you need to "serialize" your data. Serializing converts your object into a stream of bytes that can then be saved in files, databases, or memory as per your choice, and this is done through a "serializer" <em>(refer to fig 1)</em>. This serialized data is easily transmittable and shareable. The exact reverse of this is "deserialization".</p>
<p><img src="https://i.imgur.com/o8VdMjK.png" alt /></p>
<h2 id="heading-what-is-yaml">What is YAML:</h2>
<p>Okay, so YAML is not a programming language, nor a markup language; it's a data serialization language, meaning it defines how you store data in a code/file format. YAML is similar to XML and JSON, if you know about those. An interesting fact is that you can only store data in YAML files, not commands. Storing data in files this way is known as "data serialization".</p>
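<p>Since YAML 1.2 is a superset of JSON, the same record can be written either way. A small sketch (the field names are made up for illustration):</p>
<pre><code class="lang-yaml"># The same data in YAML...
person:
  name: Kaiwalya
  languages: [English, Hindi, Marathi]

# ...and its JSON equivalent, which is itself valid YAML:
# {"person": {"name": "Kaiwalya", "languages": ["English", "Hindi", "Marathi"]}}
</code></pre>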
<h2 id="heading-why-yaml">Why YAML?</h2>
<p>YAML is an essential and easy language for writing configuration files for your projects and for object creation. YAML can also be used for logs and caches. Here are some benefits of using YAML</p>
<ul>
<li>Human-readable language</li>
<li>Strict syntax</li>
<li>Easily convertible to JSON or XML</li>
<li>Supported by most languages</li>
<li>Helpful for representing complex data</li>
<li>You can use various tools, like parsers</li>
<li>Reading data from YAML files is easy</li>
</ul>
<h2 id="heading-enough-talk-lets-try-out">Enough talk, let's try out:</h2>
<p>What's the benefit in just learning about it if you can't try it out, right? So let's go ahead and write some basic YAML</p>
<pre><code class="lang-yaml"><span class="hljs-comment">#key value pair</span>
<span class="hljs-attr">"apple":</span> <span class="hljs-string">"I build iPhones"</span>
<span class="hljs-attr">1:</span> <span class="hljs-string">"This is Kaiwalya"</span>

<span class="hljs-comment"># or</span>

{<span class="hljs-attr">apple:</span> <span class="hljs-string">"I build iPhones"</span>, <span class="hljs-attr">1:</span> <span class="hljs-string">"This is Kaiwalya"</span>}
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-comment">#Lists in yaml</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">Apple</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">Microsoft</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">Google</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">Facebook</span>
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-comment">#Block style in yaml</span>
<span class="hljs-attr">countries:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">India</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">Bhutan</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">Bangladesh</span>

<span class="hljs-comment">#or</span>

<span class="hljs-attr">countries:</span> [<span class="hljs-string">India</span>, <span class="hljs-string">Bhutan</span>, <span class="hljs-string">Bangladesh</span>]
</code></pre>
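<p>The flow style above also hints at the "easily convertible to JSON" benefit: every YAML mapping and list corresponds directly to a JSON object or array. A small sketch using the same data:</p>
<pre><code class="lang-yaml"># This YAML document...
countries:
  - India
  - Bhutan
  - Bangladesh

# ...is equivalent to this JSON:
# {"countries": ["India", "Bhutan", "Bangladesh"]}
</code></pre>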
<p>You can tell YAML that the above three snippets are different documents by separating them with the following syntax:</p>
<pre><code class="lang-yaml"><span class="hljs-comment">#Start and seperation of document</span>
<span class="hljs-meta">---</span>
<span class="hljs-comment">#Ending the document</span>
<span class="hljs-string">...</span>
</code></pre>
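<p>For example, a single file can hold two independent documents like this (the contents are illustrative); most parsers will hand them back as separate results:</p>
<pre><code class="lang-yaml">---
# Document 1: a list of fruits
fruits:
  - Apple
  - Mango
---
# Document 2: a list of companies
companies:
  - Google
  - Microsoft
...
</code></pre>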
<p>Now let's look at the data types in YAML:</p>
<pre><code class="lang-yaml"><span class="hljs-comment">#valid String Variables</span>
<span class="hljs-attr">me:</span> <span class="hljs-string">Kaiwalya</span>
<span class="hljs-attr">lname:</span> <span class="hljs-string">"Koparkar"</span>
<span class="hljs-attr">job:</span> <span class="hljs-string">'Developer'</span>
</code></pre>
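<p>Strings are not the only scalars YAML understands. Here is a quick sketch of the other common data types (the values are illustrative):</p>
<pre><code class="lang-yaml">age: 21             # integer
pi: 3.14            # float
is_student: true    # boolean
nickname: null      # null (can also be written as ~)
bio: |              # multi-line string, newlines preserved
  I write about
  open source.
</code></pre>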
<p>But how can we use this in a real-world scenario? Let's look at a real-world example of schools.
<strong>Note that if there are going to be multiple <code>name</code> entries, a <code>-</code> is added in front of each one (like a list of school <code>name</code>s).</strong></p>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">School:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">"DPS"</span>
      <span class="hljs-attr">principal:</span> <span class="hljs-string">"Random"</span>
      <span class="hljs-attr">student:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">rollno:</span> <span class="hljs-number">1</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">"Kaiwalya"</span>
        <span class="hljs-attr">marks:</span> <span class="hljs-number">92</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">rollno:</span> <span class="hljs-number">2</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">"Ram"</span>
        <span class="hljs-attr">marks:</span> <span class="hljs-number">80</span>

    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">"ABC"</span>
      <span class="hljs-attr">principal:</span> <span class="hljs-string">"Someone"</span>
      <span class="hljs-attr">student:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">rollno:</span> <span class="hljs-number">1</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">"Stud1"</span>
        <span class="hljs-attr">marks:</span> <span class="hljs-number">79</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">rollno:</span> <span class="hljs-number">2</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">"Stud2"</span>
        <span class="hljs-attr">marks:</span> <span class="hljs-number">80</span>
<span class="hljs-string">...</span>
</code></pre>
<p>And done, honestly that's it. That's all you need to know about YAML. Just a heads-up: it is extremely crucial to check whether your YAML file is valid, and for that you can use a tool like <a target="_blank" href="http://www.yamllint.com/">YAMLlint</a> to validate your YAML code. </p>
<h2 id="heading-resources">Resources:</h2>
<p><a target="_blank" href="https://youtu.be/IA90BTozdow">Complete YAML Course - Beginner to Advanced for DevOps and more!</a></p>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: https://bio.link/kaiwalya</p>
]]></content:encoded></item><item><title><![CDATA[MLH  HackCon  India 2022 - Learnings & Experiencing Communities]]></title><description><![CDATA[Hey everyone! If you are on Twitter or any other social media then you might have seen the images of HackCon India which was organized by MLH. It was truly a great event and experience and many memories were captured in those snaps. The thing that wa...]]></description><link>https://blogs.kaiwalyakoparkar.com/mlh-hackcon-india-2022</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/mlh-hackcon-india-2022</guid><category><![CDATA[hackconIndia]]></category><category><![CDATA[github campus experts]]></category><category><![CDATA[conference]]></category><category><![CDATA[mlh]]></category><category><![CDATA[BlogsWithCC]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Fri, 30 Sep 2022 13:02:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1664542590182/9XPiDNrRf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone! If you are on Twitter or any other social media, you might have seen the photos of HackCon India, which was organized by MLH. It was truly a great event, and many memories were captured in those snaps. What those snaps could not capture, though, was the learning and experience gained during the entire HackCon. Meeting community leaders tells you a lot about their communities, and I learned a lot from these leaders, who come from different backgrounds and face different problems. So in this blog, I am going to share my learnings and experiences (and a lot of photos 😜).</p>
<p><img src="https://i.imgur.com/DEiyr2S.png" alt="Campus Experts" /></p>
<h2 id="heading-beginning-of-the-story">Beginning of the story ⏱</h2>
<p>Let's go back a bit to how I ended up attending HackCon India. When it was announced that HackCon would take place in India for the first time, everyone was very excited, and so was I. With GitHub Education sponsoring the event and me being a GitHub Campus Expert, I got an amazing opportunity to represent GitHub at HackCon. We were in charge of planning the activities, games, and some of the sessions at our booth. During these activities I noticed some common misconceptions and worries among the community leaders, organisers, and community members. So I have divided this blog into several sections, each carrying a learning that came with my experience. 
<img src="https://i.imgur.com/lysbepT.jpg" alt="Campus Experts" /></p>
<h2 id="heading-start-of-the-hackcon">Start of the HackCon 🎉</h2>
<p>Joined by around 200 hackers and organisers, HackCon started around 8 am with a keynote session from Mike Swift (CEO of MLH), followed by many other community leaders, builders, and hackathon organisers. </p>
<p><img src="https://i.imgur.com/utyE0dP.jpg" alt="Swift Keynote" /></p>
<p>During one of the sessions, <a href="https://twitter.com/sagaruprety9">Sagar Uprety</a> described how, despite Nepal not having a community or hackathon culture, he organised hackathons, inspired others to organise them, and gave hackathons a meaning way beyond just winning prizes. Two learnings come from this:</p>
<p><img src="https://i.imgur.com/FUjvBKS.png" alt="Sagar's Talk" /></p>
<h3 id="heading-learning-1-always-have-a-bigger-vision-behind-the-activity-you-conduct">Learning 1: Always have a bigger vision behind the activity you conduct 📈</h3>
<p>Your vision can simply be getting people started with hackathons and open source, and that can be the reason you organise a hackathon; that's totally okay. You don't necessarily need big, flashy prizes or swag. In a later conversation, Sagar also mentioned how he inspired one of his connections to organise a hackathon in their classroom with ~40+ students. This shows that it's the vision, and your passion to achieve it, that matters when you organise any event. This incident leads to my second learning.</p>
<h3 id="heading-learning-2-inspire-your-connections-and-help-them-leverage">Learning 2: Inspire your connections and help them grow. ✨</h3>
<p>You might have a lot of connections on social media, but are they really connections if you are not collaborating and helping each other grow? Sagar mentioned in some chats that during his earlier events he connected with a lot of people, and when he was planning to organise the first GitHub Field Day Nepal, he reached out to these highly passionate connections and brought them into the organising team, giving them exposure and a proper platform to apply their passion.</p>
<p>After Sagar's talk, I personally spoke with many other aspiring community leaders, and they mentioned that their colleges are not supportive and that they don't know where to start. This has been the exact situation for a lot of communities, so it inspired me to help them; the following learning captures the suggestions I gave them.</p>
<h3 id="heading-learning-3-network-inspire-collaborate">Learning 3: Network - Inspire - Collaborate 🤝</h3>
<p>Not being supported by your college might put you in a tough spot. But you don't need to start big with extravagant fests and events; you can simply start by delivering sessions about the things you know. Deliver sessions at your own and other colleges/schools in your region, and spread the word about your community. That is networking: spreading knowledge and awareness about your community. Next comes inspiring: wherever you visit for a session, inspire people at that college/school to do the same. Inspire them to start communities and teach others; this way you are also helping your connections grow personally (as in learning 2). Finally, collaborate: work with this bunch of highly inspired and passionate people to spread the wings of your community. Soon you will realise that you have come a long way in building your community, and that it stands on a strong foundation 'without college support'.</p>
<p>During further conversations I found some members who had misconceptions about community building; I was asked, "Then how do you earn through this?", which brings me to my next learning and the answer I gave them.</p>
<h3 id="heading-learning-4-community-building-is-not-about-making-money">Learning 4: Community Building is not about making money 🎗</h3>
<p>Community building is a process in which you represent an underrepresented or underprivileged group of people who face the same problems, and you solve these problems together. If you look at community building from a money-making point of view, I would highly recommend that you reconsider the vision you have set for your community. Of course, community building and management will help your career, but solving problems collectively should be your aim, not the other way around.</p>
<p>The next amazing community person I chatted with was <a href="https://twitter.com/carrycooldude">Kartikey Rawat</a>. We discussed some of the problems in creating the next leader for your community, and the next two learnings come from that conversation.</p>
<h3 id="heading-learning-5-your-community-aims-can-differ-they-should-differ">Learning 5: Your community aims can differ, they should differ 🦋</h3>
<p>If you are building a community, it's not mandatory to start your own from scratch. You can look for communities that share the same vision and try to solve the same problems, get directly involved, and contribute. Only if your aims differ would I recommend starting your own community.</p>
<h3 id="heading-learning-6-there-is-no-politics-in-communities">Learning 6: There is no 'Politics' in 'Communities' 🚫</h3>
<p>Imagine a scenario where there are two communities in the same college/school with the same vision, and instead of collaborating they schedule sessions at the exact same time on the same topics and try to compete with each other. That is not what I call community. Communities should collaborate to share experiences, help each other grow, and solve their shared problems.</p>
<p>While attending the talks and sessions I interacted with a lot of interesting community leaders. One of them mentioned how they were struggling to pass on their community's leadership; many people pitched in and we had a great conversation, which the following learning demonstrates.</p>
<h3 id="heading-learning-7-leaders-job-is-to-create-new-leaders">Learning 7: A leader's job is to create new leaders 💼</h3>
<p>Leadership can't be cultivated in a single night; you have to give it proper time and attention. The creation of a new leader starts from the moment a member joins the community. As the community leader, it's your role to delegate the right responsibilities and create enough opportunities for everyone to take the stage and cultivate their leadership. Being given opportunities from the start really helps people try new things, fail, and learn.</p>
<p>All the above learnings were my takeaways from HackCon. Was it all about talks and sessions, though? No. At the GitHub Education booth we conducted some interesting activities and gave away swag, including polaroid photos, stickers, pens, notepads, mugs, bags, and exclusive yoga mats ❤️</p>
<p><img src="https://i.imgur.com/QFdsdxC.jpg" alt="GitHub Education Booth" /></p>
<p>Overall the experience was great, and the organising team did a wonderful job managing the entire event. In the end, I was glad that I didn't return with just swag 😃</p>
<p><img src="https://i.imgur.com/ao8TDza.jpg" alt="With GitHub Team" /></p>
<h3 id="heading-references">References:</h3>
<ul>
<li>All the images used in this blog are openly available on social media</li>
</ul>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: https://bio.link/kaiwalya</p>
]]></content:encoded></item><item><title><![CDATA[What are GitHub Actions?]]></title><description><![CDATA[Hey everyone, If you are an open-source developer or a developer then you have probably heard and are familiar with GitHub. It offers many cool features which help you in your software development and management processes. One of those cool features ...]]></description><link>https://blogs.kaiwalyakoparkar.com/what-are-github-actions</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/what-are-github-actions</guid><category><![CDATA[GitHub]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[Programming Blogs]]></category><category><![CDATA[software development]]></category><category><![CDATA[workflow]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Mon, 21 Feb 2022 14:41:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1645454357607/q8gJGQelF.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone! If you are a developer, and especially an open-source developer, you are probably familiar with <a target="_blank" href="https://github.com/">GitHub</a>. It offers many cool features that help you with your software development and management processes. One of those cool features is 'GitHub Actions'. So let's get into it.</p>
<h2 id="heading-what-are-github-actions">🤔 What are GitHub actions?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645364088847/wsn4NQsCM.png" alt="image.png" /></p>
<p>According to GitHub Documentation:</p>
<blockquote>
<p>It helps automate, customize, and execute your software development workflows right in your repository with GitHub Actions. <a target="_blank" href="https://docs.github.com/en/actions">Read more</a></p>
</blockquote>
<p>So basically, GitHub Actions helps you create a workflow and automate it in your repository itself. A workflow means the steps you take while building and deploying software. For example: when I get a pull request on my project, I run tests on it, check whether it follows the semantics, and check for any kinds of errors that might occur while deploying. Once done, I merge it, then update the version, re-build, and deploy. That is a long process, and it takes a lot of time and energy out of me. What if there was a way to automate all of it and cut it down to: I get a PR -&gt; (it somehow gets checked against everything I want) -&gt; I merge it -&gt; (it somehow does all the re-build and deploy work)? Here the 'somehow' work is done by GitHub Actions.</p>
<p>You just have to write a <code>.yml</code> file once, and it will perform all the tasks on every pull request and issue as you prefer. Creating a <code>.yml</code> file is not as big a task as it looks; there are lots of templates for different kinds of automation already available online. Simply copying and pasting them does the trick too (unless they require additional secrets, etc.). </p>
<p>Enough theory and explanation. Let's write our first GitHub Action.</p>
<h2 id="heading-writing-your-first-github-action">✍ Writing your first GitHub action</h2>
<p>By now, I hope you have a clear understanding of what GitHub Actions are and why they are so useful while building software. In this section, we will write our first, simple action: one that welcomes people and provides resources when they create a PR or an issue. This action comes in handy because you can't possibly be online 24/7, and it helps your repo stay interactive to some extent. </p>
<h4 id="heading-creating-a-file">📂 Creating a file:</h4>
<p>You have to create a <code>.yml</code> file inside a <code>.github/workflows</code> folder (make sure the <code>.github</code> folder is in the root directory of your project). You can name your action file anything you want; in this case, I will name it <code>greetings.yml</code>. Refer to the image below for a clear understanding of the folder structure.
<img src="https://i.imgur.com/BYGJLAo.png" alt="Folder structure" /></p>
<h4 id="heading-writing-yml-code">🛠 Writing <code>yml</code> code</h4>
<p>Copy and paste the following code into the <code>greetings.yml</code> file</p>
<pre><code class="lang-yml"><span class="hljs-attr">name:</span> <span class="hljs-string">Greetings</span>

<span class="hljs-attr">on:</span> [<span class="hljs-string">pull_request</span>, <span class="hljs-string">issues</span>]

<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">welcome:</span>
      <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v1</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">EddieHubCommunity/gh-action-community/src/welcome@main</span>
          <span class="hljs-attr">with:</span>
            <span class="hljs-attr">github-token:</span> <span class="hljs-string">${{</span> <span class="hljs-string">secrets.GITHUB_TOKEN</span> <span class="hljs-string">}}</span>
            <span class="hljs-attr">issue-message:</span> <span class="hljs-string">'&lt;h3&gt;Hello 👋, Thank you very much for raising an issue 🙌. The maintainers will get back to you soon for discussion over the issue!&lt;/h3&gt;'</span>
            <span class="hljs-attr">pr-message:</span> <span class="hljs-string">'&lt;h3&gt;Yeah! You did it 🎉. Now, Relax 😉 -&gt; Grab a drink ☕ -&gt; And wait for the maintainers views on your contribution. Meanwhile you can discuss on other issues and solve them 😀&lt;/h3&gt;'</span>
            <span class="hljs-attr">footer:</span> <span class="hljs-string">'If you would like to continue contributing to open source and would like to do it with an awesome inclusive community, you should join our &lt;a href="https://discord.gg/jvdcY2NkXa"&gt;Discord Server&lt;/a&gt;- we help and encourage each other to contribute to open source little and often 🤓 . Any questions let us know.'</span>
</code></pre>
<p>The above code is pretty simple. We name the action using the <code>name</code> tag. Then we specify when the action should run; in this case, we want it to run on pull requests and issues, hence they are listed in the <code>on</code> tag. Then we move on to <code>jobs</code>. You can add multiple jobs to an action, and you can also chain jobs together to achieve a flow. We add the job name as a tag itself and provide the base OS in <code>runs-on</code> (to be honest, I generally just copy-paste this required stuff). Here we only have one job, so we add the <code>steps</code> we need GitHub to follow after every pull request and issue. We link other published actions using the <code>uses</code> property, and we add the messages for pull requests and issues as key-value pairs. And with that, you are done 🎉. Once this file is merged into the <code>main</code> or <code>default</code> branch, your action will start running.</p>
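<p>To make the anatomy described above concrete, here is a minimal, hypothetical workflow sketch with two jobs, where the second waits for the first via <code>needs</code> (the job names and commands are placeholders, not part of the greetings example):</p>
<pre><code class="lang-yml">name: CI

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Run tests
        run: echo "run your test command here"

  build:
    needs: test          # runs only after the 'test' job succeeds
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build
        run: echo "run your build command here"
</code></pre>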
<p>You can create many such actions on your own. You can learn more about creating your own custom actions, and see which actions GitHub offers as templates, <a target="_blank" href="https://docs.github.com/en/actions/creating-actions/about-custom-actions">here</a>.</p>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: https://bio.link/kaiwalya</p>
]]></content:encoded></item><item><title><![CDATA[🚩 Phases of being GitHub Campus Expert 🚩]]></title><description><![CDATA[Hey everyone, Couple of months ago I became a GitHub Campus Expert. It's been an interesting journey from getting selected and going through training. Today I am here to tell you the phases a student goes through while becoming a GitHub Campus Expert...]]></description><link>https://blogs.kaiwalyakoparkar.com/phases-of-being-github-campus-expert</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/phases-of-being-github-campus-expert</guid><category><![CDATA[GitHub]]></category><category><![CDATA[Experience ]]></category><category><![CDATA[leadership]]></category><category><![CDATA[education]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Mon, 14 Feb 2022 07:01:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1644765901908/t7EvZmpkW.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone! A couple of months ago I became a GitHub Campus Expert. The journey from getting selected through training has been an interesting one. Today I am here to tell you about the phases a student goes through while becoming a GitHub Campus Expert. I have divided this blog into two sections: </p>
<p>1) Pre-Selection phase and </p>
<p>2) Post-Selection phase. </p>
<p>This will help you get a clear picture of how GitHub makes its program different and ideal for community building and growth. Let's get to it.</p>
<h3 id="heading-what-is-github-campus-expert-program">What is GitHub Campus Expert Program?</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1644663856344/MpojqOXWy.png" alt="image.png" /></p>
<p>It is a program to help college/university student leaders build, grow, and expand their communities. GitHub provides tools, training, sponsorships, etc., to help communities overcome their struggles. GitHub Campus Experts are trained in public speaking and event management, with exercises to analyze your community. The program selects student leaders twice a year (in two cohorts): once in February and once in August.</p>
<p>Now that you have an idea of what the GitHub Campus Experts program is, we can get started with the pre-selection and post-selection phases.</p>
<h3 id="heading-pre-selection-phase">Pre-Selection Phase:</h3>
<p>GitHub opens applications twice every year; you can check the status of the application <a target="_blank" href="https://education.github.com/experts">here</a>. The pre-selection phase is divided into two parts: an essay and a video resume. These help the GitHub Education team understand more about you and your community. The essay part consists of 3-5 questions based on an analysis of your community's struggles, aims, etc. Once your essays are submitted and selected, you move on to the video resume part: 
you have to record a 3-5 minute video covering certain points such as an introduction, the aim of your community, and similar things. After submitting the video resume, you just have to wait for the results to come out.</p>
<blockquote>
<p>If your essays get selected then only you will be proceeding to the video resume stage</p>
</blockquote>
<p> <strong>Personal tips for getting through the pre-selection phase:</strong></p>
<p>1) Have a clear aim statement about your community.</p>
<p>2) Try to keep your essays focused on and around the aim of the community.</p>
<p>3) Be particular about the point you are trying to convey.</p>
<h3 id="heading-post-selection-phase">Post-Selection Phase:</h3>
<p>This is the most important phase, because in this phase you go through training to understand and develop yourself and your community. Once you are selected based on your essays and video resume, you will be invited to the GitHub Campus Experts GitHub organization and to an onboarding event, where the program and training methods will be explained and you will get to meet the other new Campus Experts. After onboarding, you will have your own private GitHub repo in the organization where you update your training essays, and you will be provided with all the templates and materials for your training. You will be trained in public speaking, Git and GitHub, community analysis, how to grow a community, how to become a good leader, community ethics, etc., and you will have to write comprehensive essays following the training with your community in focus. There are in total 6 modules, which you have to complete before a certain deadline (this deadline can be extended by communicating with the program manager). The training videos are pre-recorded, which means you can watch them whenever and wherever you want. Once you complete your essays, they will be reviewed by the Campus Experts team and you'll be sent suggested updates (if any). </p>
<p>You will submit these essays for review in the form of a pull request, so once your pull request is merged you will get the "GitHub Campus Expert" badge on your GitHub profile. What next? You are almost there: next you create your Campus Expert profile, which will be visible on the GitHub Campus Experts website. You will be provided with spreadsheets and issue templates to get support from the team for all your needs. </p>
<h3 id="heading-my-take-on-github-campus-experts-program">My take on GitHub Campus Experts program</h3>
<p>I personally really like the program. What differentiates it from other programs is that upon selection it does not just give you a tag; it provides you with the training, guidance, and support you need to develop yourself and your community. You are taught the ethics &amp; values that the term 'community' carries and what makes it stand out.</p>
<h3 id="heading-thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: https://bio.link/kaiwalya</p>
]]></content:encoded></item><item><title><![CDATA[#Week 5: Learning Cassandra [DoK intern Series]]]></title><description><![CDATA[Hey everyone 👋 welcome back, As you might have guessed from the thumbnail and the promotions that I recently completed my 5th week as an Data on Kubernetes Community Intern. And In this blog, I will be sharing my experience and learning of week 5.
O...]]></description><link>https://blogs.kaiwalyakoparkar.com/learning-cassandra-dok-intern-series</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/learning-cassandra-dok-intern-series</guid><category><![CDATA[Cassandra]]></category><category><![CDATA[Databases]]></category><category><![CDATA[learning]]></category><category><![CDATA[Learning Journey]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Mon, 30 Aug 2021 14:35:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1630324359697/b1C7xSgiS.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone 👋 welcome back! As you might have guessed from the thumbnail and the promotions, I recently completed my 5th week as a <a target="_blank" href="https://dok.community">Data on Kubernetes Community</a> intern. In this blog, I will share my experiences and learnings from week 5.</p>
<p>Okay, as I explained in my <a target="_blank" href>last blog</a>, I have been assigned a project to work on, and after going through a couple of K8ssandra docs and reference videos, I found that it would be beneficial to learn Cassandra before K8ssandra. So I started searching for resources for learning Cassandra 'the right way'. After going through a couple of them, these are my learnings: </p>
<h2 id="what-is-apache-cassandra">What is Apache Cassandra?</h2>
<p>Apache Cassandra is an open-source NoSQL distributed database. It provides scalability, elasticity, fault tolerance, hybrid deployments, and more.</p>
<p>The most interesting thing I found about it is that it doesn't have a master node holding all the data; instead it uses a distributed architecture to store it. Whenever data is added (by any medium), Cassandra locates the nearest node (selected on the basis of a few factors). The nearest node accepts the data and passes it on to the node responsible for that range. To explain it better, suppose we store student names in a Cassandra cluster across 7 nodes and distribute the names into the ranges <strong>[A-D], [E-H], [I-L], [M-P], [Q-T], [U-X], [Y-Z]</strong>. If a name starts with "M" but the nearest node located was [U-X], the data would be passed on to [M-P]. But the process does not stop there. 
Imagine a condition where the [E-H] node goes down or crashes; users whose names fall in E-H would have a really tough time. So what's the solution? The solution is really interesting and amazed me as well: every node follows a <strong>data replication</strong> approach, meaning each node's data is replicated on several other nodes. Let's understand this with a diagram.
<img src="https://i.imgur.com/w472JZS.png" />
Here, as you can see, the <strong>A</strong> data is replicated on two other nodes, so if Node 1 goes down we still have a copy of its data to serve. This makes the architecture self-healing and more reliable. You can also have multiple node rings across several cloud service providers (Azure, Google Cloud, AWS, or local infra). Whenever data is updated in any of the rings, it is propagated to all the other rings, so every ring holds the same data, which can be accessed from whichever geographical location is most convenient.</p>
<h2 id="resources-i-follow-to-learn-cassandra">Resources I follow to learn Cassandra</h2>
<h3 id="written-resources">Written resources</h3>
<ul>
<li><a target="_blank" href="https://docs.datastax.com/en/cassandra-oss/3.0/cassandra/cassandraAbout.html">Official Documentation</a></li>
<li><a target="_blank" href="https://en.wikipedia.org/wiki/Apache_Cassandra">Wikipedia Page</a></li>
<li><a target="_blank" href="https://www.tutorialspoint.com/cassandra/cassandra_introduction.htm">Tutorial's Point Tutorial</a></li>
</ul>
<h3 id="video-resources">Video resources</h3>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/uNcKWoE4mZM">https://youtu.be/uNcKWoE4mZM</a></div>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/d7o6a75sfY0">https://youtu.be/d7o6a75sfY0</a></div>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/kRYMwOl6Uo4">https://youtu.be/kRYMwOl6Uo4</a></div>
<p>I am learning a lot of new things every day, so if you think I have explained something incorrectly, please leave a comment and let me know. Also, if you know of any resources that helped you, don't forget to link them below.</p>
<h3 id="thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: https://bio.link/kaiwalya</p>
]]></content:encoded></item><item><title><![CDATA[#Week 4: Rap lyrics on Kubernetes? [DoK intern Series]]]></title><description><![CDATA[Hey everyone, Thank you very much for following along me on my journey of Community Management Intern at DoK. If you don't know I have been documenting my journey every week and this is my 4th week experience blog and learning. From last couple of bl...]]></description><link>https://blogs.kaiwalyakoparkar.com/week-4-rap-lyrics-on-kubernetes-dok-intern-series</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/week-4-rap-lyrics-on-kubernetes-dok-intern-series</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Databases]]></category><category><![CDATA[Cassandra]]></category><category><![CDATA[k8s]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Tue, 24 Aug 2021 08:28:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1629793571658/CU6uAjCK6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone, Thank you very much for following along me on my journey of <a target="_blank" href="https://kaiwalyakoparkar.hashnode.dev/week-1-joining-dok-as-a-community-management-intern">Community Management Intern at DoK</a>. If you don't know I have been documenting my journey every week and this is my <strong>4th week</strong> experience blog and learning. From last couple of blogs I have been telling you that I have been assigned a interesting project and today I will be discussing about it and tell you how can you get involved in it too. So let's get started</p>
<h2 id="what-is-kubernetes-and-why-should-you-care-about-data-on-it-in-lame-terms">What is Kubernetes and why should you care about data on it (in layman's terms)?</h2>
<p>If you are not familiar with Kubernetes, think of it as a platform for running containerised applications. When it was built, it was designed with statelessness in mind, but over time people found that stateful workloads (like databases) can run on it as well. That is exactly what our community is about: it organises sessions and talks to spread awareness and dig deep into this topic.</p>
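<p>For a flavour of what "stateful on Kubernetes" looks like, here is a minimal StatefulSet sketch. The names, image, replica count, and storage size are illustrative, not taken from any real deployment:</p>

```yaml
# A minimal StatefulSet sketch: each replica gets a stable identity
# (cassandra-0, cassandra-1, ...) and its own persistent volume,
# which is what makes running databases on Kubernetes feasible.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:4.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/cassandra
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```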
<h2 id="so-what-is-rap-lyrics-on-kubernetes">So what is Rap lyrics on Kubernetes?</h2>
<p>As you might know, the unique thing about our community is that it has a bit of an artistic/musical touch to it. Our rapper <a target="_blank" href="https://twitter.com/birthmarkbart">Bart Farrell</a> raps for every session and speaker, so we have rap lyrics hosted on a few platforms. What we are now trying to do is write these lyrics to a database (which one, we will discuss ahead) and then run that database on Kubernetes (makes sense?).</p>
<h3 id="tech-stack-we-are-using">Tech stack we are using</h3>
<p>We (the team) had a discussion about the tech stack we are going to use. If you are looking forward to contributing, we are open to suggestions and discussion. Primarily, we are going to use:</p>
<ol>
<li>Spotify API (To host and fetch the rap lyrics)</li>
<li>k8ssandra (as the database to write the raps to)</li>
</ol>
<h3 id="architecture-we-are-following">Architecture we are following:</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1629792351140/g9Ea3EkjB.png" alt="image.png" /></p>
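<p>As a rough illustration of the flow in the diagram, here is a hedged Python sketch. Everything in it is a stand-in invented for this example: <code>fetch_lyrics</code> plays the role of the lyrics-API call, and the dict-based <code>session</code> replaces a real Cassandra session, which would instead run a CQL <code>INSERT</code> per lyric line.</p>

```python
# Toy sketch of the pipeline: pull a track's lyric lines from some
# source, then persist them row-by-row keyed by (track_id, line_no).
# The dict-based "session" below stands in for a real database session.

def fetch_lyrics(track_id, source):
    """Stand-in for the lyrics-API call; 'source' is just a dict here."""
    return source[track_id]

def store_lyrics(session, track_id, lines):
    """With a real driver this would execute an INSERT per lyric line."""
    for line_no, text in enumerate(lines):
        session.setdefault(track_id, {})[line_no] = text

# Example run against in-memory stand-ins (track id and lyrics invented):
source = {"dok-42": ["Data on K8s, yeah", "Stateful and proud"]}
session = {}
store_lyrics(session, "dok-42", fetch_lyrics("dok-42", source))
```

<p>The point of keying rows by track and line number is that the lyrics can later be read back in order, one partition per track.</p>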
<p>So, if this excites you and you would like to contribute to it then you can go to our <a target="_blank" href="https://join.slack.com/t/dokcommunity/shared_invite/zt-g3ui5r0g-jDKz5dhh2W1ayElqwKYYAg">Slack</a> and then join <code>#genius-rap-to-k8s-database</code> channel so that you can discuss and contribute to the project. </p>
<p>And that's all for this blog. I got the opportunity to learn many interesting things through this project. If you liked reading this blog, make sure you like it and subscribe to the newsletter for regular updates about the project.</p>
<h3 id="thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: https://bio.link/kaiwalya</p>
]]></content:encoded></item><item><title><![CDATA[#Week 3: Learning through transcripts [DoK intern Series]]]></title><description><![CDATA[Hey everyone 👋 welcome back, As you might have guessed from the thumbnail and the promotions, I recently completed my 3rd week as a Data on Kubernetes Community Intern. In this blog, I will be sharing my experience and learning of week 3.
M...]]></description><link>https://blogs.kaiwalyakoparkar.com/week-3-learning-through-transcripts</link><guid isPermaLink="true">https://blogs.kaiwalyakoparkar.com/week-3-learning-through-transcripts</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Experience ]]></category><dc:creator><![CDATA[Kaiwalya Koparkar]]></dc:creator><pubDate>Mon, 16 Aug 2021 06:51:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1629095993259/UbhydVOxC.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey everyone 👋 welcome back! As you might have guessed from the thumbnail and the promotions, I recently completed my 3rd week as a <a target="_blank" href="https://dok.community">Data on Kubernetes Community</a> intern. In this blog, I will share my experience and learning from week 3.</p>
<h3 id="my-week">My Week:</h3>
<p>This week was, as always, interesting and full of fun. We (the team) had discussions about many things. For the weekly task (which I talked about in the last blog), I spent time with my family watching movies and taking a walk in a garden. We are going to come up with many great initiatives, so this week went entirely into discussing and planning those.</p>
<p>I was allotted a transcript to look after this week. Its topic was <a target="_blank" href="https://youtu.be/ZC1ezkqEipM">"It's just a SQL - Crash course on Synapse Serverless for T-SQL ninjas"</a>. Through it, I got an opportunity to explore Synapse Serverless SQL in more depth. I learned many things, like why it is used, how to cut costs effectively, how the choice of file format directly impacts the processing cost of the service, etc. This is a great way to learn.</p>
<p>I am fortunate that personal growth is given equal importance in this community. Every intern is allocated certain transcripts which they have to filter, and to do that we watch the recordings of the sessions. In this manner, we learn about different concepts and add value to the community at the same time.</p>
<p>I have also been assigned an interesting project aimed at running raps on Kubernetes, which I will talk about in next week's blog. So make sure you subscribe to the newsletter, and I will meet you in the next one :)</p>
<h3 id="thank-you-so-much-for-reading">Thank you so much for reading 💖</h3>
<p>Like | Follow | Subscribe to the newsletter.</p>
<p>Catch me on my socials here: https://bio.link/kaiwalya</p>
]]></content:encoded></item></channel></rss>