<?xml version="1.0" encoding="utf-8"?>
    <feed xmlns="http://www.w3.org/2005/Atom">
     <title>BigBinary Blog</title>
     <link href="https://www.bigbinary.com/feed.xml" rel="self"/>
     <link href="https://www.bigbinary.com/"/>
     <updated>2026-03-08T07:30:08+00:00</updated>
     <id>https://www.bigbinary.com/</id>
     <entry>
       <title><![CDATA[Grafana Loki and Kubernetes Event exporter]]></title>
       <author><name>Vishal Yadav</name></author>
      <link href="https://www.bigbinary.com/blog/k8s-event-exporter-and-grafana-loki-integration"/>
      <updated>2024-05-07T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/k8s-event-exporter-and-grafana-loki-integration</id>
<content type="html"><![CDATA[<p>In the previous <a href="https://www.bigbinary.com/blog/prometheus-and-grafana-integration">blog</a>, we discussed integrating <a href="https://prometheus.io/">Prometheus</a> and <a href="https://grafana.com/">Grafana</a> in the Kubernetes Cluster. In this blog, we'll explore how to integrate the <a href="https://github.com/resmoio/kubernetes-event-exporter">Kubernetes Event exporter</a> &amp; <a href="https://grafana.com/oss/loki/">Grafana Loki</a> into your Kubernetes Cluster using a Helm chart.</p><p>Additionally, you'll also learn how to add Grafana Loki as a data source to your Grafana Dashboard. This will help you visualize the Kubernetes events.</p><p>Furthermore, we'll delve into the specifics of setting up the Event exporter and Grafana Loki, ensuring you understand each step of the process. From downloading and configuring the necessary Helm charts to understanding the Grafana Loki dashboard, we'll cover it all.</p><p>By the end of this blog, you'll be able to fully utilize Grafana Loki and Kubernetes Event Exporter, gaining insights from your Kubernetes events.</p><h2>How Kubernetes event exporter can help us in monitoring health</h2><p>Objects in Kubernetes, such as Pods, Deployments, Ingresses, and Services, publish events to indicate status updates or problems. Most of the time, these events are overlooked, and their one-hour lifespan means that important updates can be missed. They are also not searchable and cannot be aggregated.</p><p>For instance, they can alert you to changes in the state of pods, errors in scheduling, and resource constraints. 
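</p><p>As a quick aside, you can inspect these short-lived events in their raw form with kubectl before they expire:</p><pre><code class="language-bash">kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp</code></pre><p>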
Therefore, exporting these events and visualizing them can be crucial for maintaining the health of your cluster.</p><p>Kubernetes event exporter allows exporting the often missed Kubernetes events to various outputs so that they can be used for observability or alerting purposes. We can have multiple receivers to export the events from the Kubernetes cluster.</p><ul><li><a href="https://www.opsgenie.com/">Opsgenie</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#webhookshttp">Webhooks/HTTP</a></li><li><a href="https://www.elastic.co/">Elasticsearch</a></li><li><a href="https://opensearch.org/">OpenSearch</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#slack">Slack</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#kinesis">Kinesis</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#firehose">Firehose</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#sns">SNS</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#sqs">SQS</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#file">File</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#stdout">Stdout</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#kafka">Kafka</a></li><li><a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/OpsCenter.html">OpsCenter</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#customizing-payload">Customize Payload</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#pubsub">Pubsub</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#teams">Teams</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#syslog">Syslog</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#bigquery">Bigquery</a></li><li><a 
href="https://github.com/resmoio/kubernetes-event-exporter#pipe">Pipe</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#aws-eventbridge">Event Bridge</a></li><li><a href="https://github.com/resmoio/kubernetes-event-exporter#loki">Grafana Loki</a></li></ul><h2>Setting up Grafana Loki &amp; Kubernetes event exporter using Helm chart</h2><p>We will once again use <a href="https://artifacthub.io/">ArtifactHub</a>, which provides a Helm chart for installing Grafana Loki onto a Kubernetes Cluster. If you need instructions on how to install Helm on your system, you can refer to this blog.</p><p>In this blog post, we will install a Helm <a href="https://artifacthub.io/packages/helm/grafana/loki">chart</a> that sets up Loki in scalable mode, with separate read and write components that can be independently scaled. Alternatively, we can install Loki in monolithic mode, where the Helm Chart installation runs the Grafana Loki <em>single binary</em> within a Kubernetes cluster. You can learn more about this <a href="https://grafana.com/docs/loki/latest/setup/install/helm/install-monolithic/#install-the-monolithic-helm-chart">here</a>.</p><h3>1. Create S3 buckets</h3><p>Create the following S3 buckets, which Loki will use for chunk, admin, and ruler storage:</p><ul><li><p>grafana-loki-chunks-bucket</p></li><li><p>grafana-loki-admin-bucket</p></li><li><p>grafana-loki-ruler-bucket</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/loki-s3-buckets.png" alt="loki-s3-buckets.png"></p></li></ul><h3>2. 
Create a policy for Grafana Loki</h3><p>Create a new policy under IAM on Amazon AWS using the below snippet.</p><pre><code>{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Sid&quot;: &quot;LokiStorage&quot;,
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Action&quot;: [
                &quot;s3:ListBucket&quot;,
                &quot;s3:PutObject&quot;,
                &quot;s3:GetObject&quot;,
                &quot;s3:DeleteObject&quot;
            ],
            &quot;Resource&quot;: [
                &quot;arn:aws:s3:::grafana-loki-chunks-bucket&quot;,
                &quot;arn:aws:s3:::grafana-loki-chunks-bucket/*&quot;,
                &quot;arn:aws:s3:::grafana-loki-admin-bucket&quot;,
                &quot;arn:aws:s3:::grafana-loki-admin-bucket/*&quot;,
                &quot;arn:aws:s3:::grafana-loki-ruler-bucket&quot;,
                &quot;arn:aws:s3:::grafana-loki-ruler-bucket/*&quot;
            ]
        }
    ]
}</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/grafana-loki-policy.png" alt="grafana-loki-policy.png"></p><h3>3. 
Create a Role with the above permission</h3><p>Create a role with a custom trust policy using the below snippet.</p><pre><code>{
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
        {
            &quot;Effect&quot;: &quot;Allow&quot;,
            &quot;Principal&quot;: {
                &quot;Federated&quot;: &quot;arn:aws:iam::account_id:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/open_id&quot;
            },
            &quot;Action&quot;: &quot;sts:AssumeRoleWithWebIdentity&quot;,
            &quot;Condition&quot;: {
                &quot;StringEquals&quot;: {
                    &quot;oidc.eks.us-east-1.amazonaws.com/id/open_id:aud&quot;: &quot;sts.amazonaws.com&quot;,
                    &quot;oidc.eks.us-east-1.amazonaws.com/id/open_id:sub&quot;: &quot;system:serviceaccount:default:grafana-loki-access-s3-role-sa&quot;
                }
            }
        }
    ]
}</code></pre><p>Note: Please update the account_id and open_id in the above snippet.</p><p><strong>grafana-loki-access-s3-role-sa</strong> is the service account name that we will mention in the Loki values.</p><h3>4. Add the Grafana Helm repository</h3><p>To get this Helm chart, run these commands:</p><pre><code class="language-bash">helm repo add grafana https://grafana.github.io/helm-charts
helm repo update</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/chart-add-output.png" alt="chart-add-output.png"></p><p>We have downloaded the latest version of the Grafana charts.</p><h3>5. 
Install grafana/loki stack using the helm chart</h3><p>Create a <strong>loki-values.yaml</strong> file with the below snippet.</p><pre><code>loki:
  readinessProbe: {}
  auth_enabled: false
  storage:
    bucketNames:
      chunks: grafana-loki-chunks-bucket
      ruler: grafana-loki-ruler-bucket
      admin: grafana-loki-admin-bucket
    type: s3
    s3:
      endpoint: null
      region: us-east-1
      secretAccessKey: null
      accessKeyId: null
      s3ForcePathStyle: false
      insecure: false
monitoring:
  lokiCanary:
    enabled: false
  selfMonitoring:
    enabled: false
test:
  enabled: false
serviceAccount:
  create: true
  name: grafana-loki-access-s3-role-sa
  imagePullSecrets: []
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::account_id:role/loki-role
  automountServiceAccountToken: true</code></pre><p>To install Loki using the Helm Chart on the Kubernetes Cluster, run this <code>helm install</code> command:</p><pre><code class="language-bash">helm install my-loki grafana/loki --values loki-values.yaml</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/chart-installation-output.png" alt="chart-installation-output.png"></p><p>We have successfully installed Loki on the Kubernetes Cluster.</p><p>Run the following command to view all the resources created by the Loki Helm Chart in your Kubernetes cluster:</p><pre><code class="language-bash">kubectl get all -l app.kubernetes.io/name=loki</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/all-resources-output.png" alt="all-resources-output.png"></p><p>The Helm chart created the following components:</p><ul><li><strong>Loki read and write:</strong> Loki is installed in scalable mode by default, which includes read and write components. 
These components can be independently scaled out.</li><li><strong>Gateway:</strong> Inspired by Grafana's <a href="https://github.com/grafana/loki/blob/main/production/ksonnet/loki">Tanka setup</a>, the chart installs a gateway component by default. This NGINX component exposes Loki's API and automatically proxies requests to the appropriate Loki components (read or write, or a single instance in the case of filesystem storage). The gateway must be enabled to provide an Ingress since the Ingress only exposes the gateway. If enabled, Grafana and log shipping agents, such as Promtail, should be configured to use the gateway. If NetworkPolicies are enabled, they become more restrictive when the gateway is active.</li><li><strong>Caching:</strong> In-memory caching is enabled by default. If this type of caching is unsuitable for your deployment, consider setting up memcache.</li></ul><p>Run this command to view all the Kubernetes Services for Loki:</p><pre><code class="language-bash">kubectl get service -l app.kubernetes.io/name=loki</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/all-services-output.png" alt="all-services-output.png"></p><p>Listed services for Loki are:</p><ul><li>loki-backend</li><li>loki-backend-headless</li><li>loki-gateway</li><li>loki-memberlist</li><li>loki-read</li><li>loki-read-headless</li><li>loki-write</li><li>loki-write-headless</li><li>query-scheduler-discovery</li></ul><p>The <code>loki-gateway</code> service will be used to add Loki as a data source into Grafana.</p><h3>6. Adding Loki data source in Grafana</h3><p>On the main page of Grafana, click on &quot;<strong>Home</strong>&quot;. 
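</p><p>Before adding the data source, you can optionally confirm that the gateway responds by forwarding it locally and hitting Loki's labels API; this sketch assumes the chart's default service name and port:</p><pre><code class="language-bash">kubectl port-forward svc/loki-gateway 3100:80 &amp;
curl http://localhost:3100/loki/api/v1/labels</code></pre><p>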
Under &quot;<strong>Connections</strong>&quot;, you will find the &quot;<strong>Data sources</strong>&quot; option.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/grafana-dashboard.png" alt="grafana-dashboard.png"></p><p>On the Data Sources page, click on the &quot;Add new data source&quot; button.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/data-sources-page.png" alt="data-sources-page.png"></p><p>In the search bar, type &quot;Loki&quot; and search for it.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/add-data-source.png" alt="add-data-source.png"></p><p>Clicking on &quot;Loki&quot; will redirect you to the dedicated page for the Loki data source.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/loki-data-source.png" alt="loki-data-source.png"></p><p>To read the metrics from Loki, we will use the <code>loki-gateway</code> service. Add the URL of the service as <code>http://loki-gateway</code>.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/loki-form.png" alt="loki-form.png"></p><p>After clicking on the &quot;Save &amp; test&quot; button, you will receive a toast message as shown in the image below. This message is received because no clients have been created for Loki yet.</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/loki-addon-output.png" alt="loki-addon-output.png"></p><h3>7. 
Install Kubernetes event exporter using the helm chart</h3><p>Create an <strong>event-exporter-values.yaml</strong> file with the below snippet.</p><pre><code class="language-yaml">config:
  leaderElection: {}
  logLevel: debug
  logFormat: pretty
  metricsNamePrefix: event_exporter_
  receivers:
    - name: &quot;dump&quot;
      file:
        path: &quot;/dev/stdout&quot;
        layout: {}
    - name: &quot;loki&quot;
      loki:
        url: &quot;http://loki-gateway/loki/api/v1/push&quot;
        streamLabels:
          source: kubernetes-event-exporter
          container: kubernetes-event-exporter
  route:
    routes:
      - match:
          - receiver: &quot;dump&quot;
          - receiver: &quot;loki&quot;</code></pre><p>With the above snippet in place, run these commands to install the Kubernetes event exporter in your Kubernetes Cluster.</p><pre><code class="language-bash">helm repo add bitnami https://charts.bitnami.com/bitnami
helm install event-exporter bitnami/kubernetes-event-exporter --values event-exporter-values.yaml</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/event-exporter-installation-output.png" alt="event-exporter-installation-output.png"></p><p>To view all the resources created by the above helm chart, run this command:</p><pre><code class="language-bash">kubectl get all -l app.kubernetes.io/name=kubernetes-event-exporter</code></pre><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/event-exporter-all-resources.png" alt="event-exporter-all-resources.png"></p><p>To view the logs of the event exporter pod, run this command:</p><pre><code class="language-bash">kubectl logs -f pod/kubernetes-event-exporter-586455bbdd-sqlqc</code></pre><p>Note: Replace <strong>kubernetes-event-exporter-586455bbdd-sqlqc</strong> with your pod name.</p><p>Output:</p><p><img 
src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/event-exporter-logs.png" alt="event-exporter-logs"></p><p>As you can see in the above image, the event exporter is up and running fine. Event logs are being sent to both the receivers that we configured in the values YAML file.</p><p>Once the pod is created &amp; running, we can go back to the Loki data source under the <strong>Connections</strong> &gt; <strong>Data Sources</strong> page.</p><p>Click on the Save &amp; test button again, and this time you'll receive a success toast message.</p><p>Output:</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/loki-data-source-added-output.png" alt="loki-data-source-added-output"></p><h3>8. Kubernetes event exporter dashboard</h3><p>We will import this <a href="https://grafana.com/grafana/dashboards/17882-kubernetes-event-exporter/">dashboard</a> into Grafana to monitor and track the events received from the Kubernetes cluster. You can go through this blog if you want to learn how to import an existing dashboard into Grafana.</p><p>After successfully importing the dashboard, you can view all the events from the cluster, as shown in the image below. Additionally, you can filter the events based on any value within any time interval.</p><p>Kubernetes Event Exporter</p><p><img src="/blog_images/2024/k8s-event-exporter-and-grafana-loki-integration/event-exporter-dashboard.png" alt="Kubernetes Event Exporter"></p><h2>Conclusion</h2><p>In this blog post, we discussed the process of setting up Grafana Loki and Kubernetes Event exporter. 
We covered various steps: creating a policy for Grafana Loki, creating a role with the necessary permissions, adding the Grafana Helm repository, installing the Loki stack, adding Loki as a data source in Grafana, installing the Kubernetes event exporter using the Helm chart, and finally, setting up the Kubernetes event exporter dashboard in Grafana.</p><p>By following the steps outlined in this blog post, you can effectively monitor and track events from your Kubernetes cluster using Grafana Loki and the Kubernetes Event exporter. This setup provides valuable insights and helps in troubleshooting and analyzing events in your cluster.</p><p>If you have any questions or feedback, please feel free to reach out. Happy monitoring!</p>]]></content>
    </entry><entry>
       <title><![CDATA[Setting up Prometheus and Grafana on Kubernetes using Helm]]></title>
       <author><name>Vishal Yadav</name></author>
      <link href="https://www.bigbinary.com/blog/prometheus-and-grafana-integration"/>
      <updated>2024-01-25T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/prometheus-and-grafana-integration</id>
<content type="html"><![CDATA[<p>In this blog, we will learn how to set up Prometheus and Grafana on Kubernetes using Helm.</p><p><a href="https://prometheus.io/">Prometheus</a>, along with <a href="https://grafana.com/">Grafana</a>, is a highly scalable open-source monitoring framework for <a href="https://devopscube.com/docker-container-clustering-tools/">container orchestration platforms</a>. Prometheus probes the application and collects various data. It stores all this data in its time series database. Grafana is a visualization tool. It uses the data from the database to show the data that is meaningful to the user.</p><p>Both Prometheus and Grafana are gaining popularity in the <a href="https://devopscube.com/what-is-observability/">observability</a> space as they help with metrics and alerts. Learning to integrate them using Helm will allow us to monitor our Kubernetes cluster and troubleshoot problems easily. Furthermore, we can deep dive into our cluster's well-being and efficiency, focusing on resource usage and performance metrics within our Kubernetes environment.</p><p>We will also learn how to create a simple <a href="https://grafana.com/grafana/dashboards/">dashboard</a> on Grafana.</p><h2><strong>Why Prometheus and Grafana are a good choice for monitoring</strong></h2><p>Using Prometheus and Grafana for monitoring has many benefits:</p><ul><li><strong>Scalability:</strong> Both tools are highly scalable and can handle the monitoring needs of small to large Kubernetes clusters.</li><li><strong>Flexibility:</strong> They allow us to create custom dashboards tailored to our specific monitoring requirements.</li><li><strong>Real-time Monitoring:</strong> Prometheus provides real-time monitoring, helping us to quickly detect and respond to issues.</li><li><strong>Alerting:</strong> Prometheus enables us to set up alerts based on specific metrics, so we can be notified when issues arise.</li><li><strong>Data Visualization:</strong> Grafana offers powerful data 
visualization capabilities, making it easier to understand complex data.</li><li><strong>Open Source:</strong> Both Prometheus and Grafana are open-source, reducing monitoring costs.</li><li><strong>Community Support:</strong> We can benefit from active communities, ensuring continuous development and support.</li><li><strong>Integration:</strong> They seamlessly integrate with other Kubernetes components and applications, simplifying setup.</li><li><strong>Historical Data:</strong> Grafana allows us to explore historical data, aiding in long-term analysis and trend identification.</li><li><strong>Extensible:</strong> Both tools are extensible, allowing us to integrate additional data sources and plugins.</li><li><strong>Efficient Resource Usage:</strong> Prometheus efficiently utilizes resources, ensuring minimal impact on our cluster's performance.</li></ul><p>Two common ways to use Prometheus and Grafana on Kubernetes:</p><ol><li><strong>Manual Kubernetes deployment</strong>: In this method, we need to write <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/">Kubernetes Deployments</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/">Services</a> for both Prometheus and Grafana. In the YAML files, we need to put all the settings for Prometheus and Grafana on Kubernetes. Then we send these files to our Kubernetes cluster. But we can end up with many YAML files, which can be hard to manage. If we make a mistake in any YAML file, Prometheus and Grafana won't work on Kubernetes.</li><li><strong>Using Helm</strong>: This is an easy way to deploy any application container to Kubernetes. <a href="https://helm.sh/">Helm</a> is the official package manager for Kubernetes. With Helm, we can make installing, deploying, and managing Kubernetes applications easier.</li></ol><p>A <a href="https://helm.sh/">Helm Chart</a> has all the YAML files:</p><ul><li>Deployments.</li><li>Services.</li><li>Secrets.</li><li>ConfigMaps manifests.</li></ul><p>We use these files to deploy the application container to Kubernetes. Instead of making individual YAML files for each application container, Helm lets us download Helm charts that already have these YAML files.</p><h2>Setting up Prometheus and Grafana using Helm chart</h2><p>We will use <a href="https://artifacthub.io/">ArtifactHub</a>, which offers public and private repositories for Helm Charts. We will use these Helm Charts to arrange the pods and services in our Kubernetes cluster.</p><p>To get Prometheus and Grafana working on Kubernetes with Helm, we will start by installing Helm.</p><h4>Installing Helm on Linux</h4><pre><code class="language-bash">sudo apt-get install helm</code></pre><h4>Installing Helm on Windows</h4><pre><code class="language-bash">choco install kubernetes-helm</code></pre><h4>Installing Helm on macOS</h4><pre><code class="language-bash">brew install helm</code></pre><p>We can check out the official <a href="https://helm.sh/docs/intro/install/">Helm documentation</a> if we run into any issues while installing Helm.</p><p>The image below represents a successful Helm installation on macOS.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-01_at_1.09.49_PM.png" alt="Screenshot 2023-11-01 at 1.09.49PM.png"></p><p>For this blog, we're going to install this Helm <a href="https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack">chart</a>, and by default, this chart also installs additional, dependent charts (including Grafana):</p><ul><li><a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics">prometheus-community/kube-state-metrics</a></li><li><a 
href="https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-node-exporter">prometheus-community/prometheus-node-exporter</a></li><li><a href="https://github.com/grafana/helm-charts/tree/main/charts/grafana">grafana/grafana</a></li></ul><p>To get this Helm chart, let's run these commands:</p><pre><code class="language-bash">helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update</code></pre><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-01_at_1.15.43_PM.png" alt="Screenshot 2023-11-01 at 1.15.43PM.png"></p><p>We have downloaded the latest version of the Prometheus &amp; Grafana charts.</p><p>To install the Prometheus Helm Chart on a Kubernetes Cluster, let's run the following command:</p><pre><code class="language-bash">helm install my-kube-prometheus-stack prometheus-community/kube-prometheus-stack</code></pre><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-01_at_1.21.46_PM.png" alt="Screenshot 2023-11-01 at 1.21.46PM.png"></p><p>We have successfully installed Prometheus &amp; Grafana on the Kubernetes Cluster. We can access the Prometheus &amp; Grafana servers via ports 9090 &amp; 80, respectively.</p><p>Now, let's run the following command to view all the resources created by the Helm Chart in our Kubernetes cluster:</p><pre><code class="language-bash">kubectl get all</code></pre><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_11.19.39_AM.png" alt="Screenshot 2023-11-02 at 11.19.39AM.png">The Helm chart created the following resources:</p><ul><li><strong>Pods</strong>: They host the deployed Prometheus Kubernetes application inside the cluster.</li><li><strong>Replica Sets</strong>: A collection of instances of the same application inside the Kubernetes cluster. 
It enhances application reliability.</li><li><strong>Deployments</strong>: The blueprint for creating the application pods.</li><li><strong>Services</strong>: They expose the pods running inside the Kubernetes cluster. We use them to access the deployed Kubernetes application.</li><li><strong>Stateful Sets</strong>: They manage the deployment of the stateful application components and ensure stable and predictable network identities for these components.</li><li><strong>Daemon Sets</strong>: They ensure that all (or a specific set of) nodes run a copy of a pod, which is useful for tasks such as logging, monitoring, and other node-specific operations.</li></ul><p>Run this command to view all the Kubernetes Services for Prometheus &amp; Grafana:</p><pre><code class="language-bash">kubectl get service</code></pre><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_11.37.39_AM.png" alt="Screenshot 2023-11-02 at 11.37.39AM.png"></p><p>Listed services for Prometheus and Grafana are:</p><ul><li>alertmanager-operated</li><li>kube-prometheus-stack-alertmanager</li><li>kube-prometheus-stack-grafana</li><li>kube-prometheus-stack-kube-state-metrics</li><li>kube-prometheus-stack-operator</li><li>kube-prometheus-stack-prometheus</li><li>kube-prometheus-stack-prometheus-node-exporter</li><li>prometheus-operated</li></ul><p><code>kube-prometheus-stack-grafana</code> and <code>kube-prometheus-stack-prometheus</code> are <code>ClusterIP</code> type services, which means we can only access them within the Kubernetes cluster.</p><p>To expose Prometheus and Grafana so that they can be accessed outside the Kubernetes cluster, we can use either a NodePort or a LoadBalancer service.</p><h2>Exposing Prometheus and Grafana using NodePort services</h2><p>Let's run the following commands to expose the <code>Prometheus</code> and <code>Grafana</code> Kubernetes services:</p><pre><code class="language-bash">kubectl expose service kube-prometheus-stack-prometheus --type=NodePort --target-port=9090 --name=prometheus-node-port-service
kubectl expose service kube-prometheus-stack-grafana --type=NodePort --target-port=3000 --name=grafana-node-port-service</code></pre><p>These commands create new services of <code>NodePort</code> type and make Prometheus and Grafana accessible outside the Kubernetes Cluster on service ports <code>9090</code> and <code>80</code>.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_11.59.15_AM.png" alt="Screenshot 2023-11-02 at 11.59.15AM.png"></p><p>As we can see, the <code>grafana-node-port-service</code> and <code>prometheus-node-port-service</code> are successfully created and are being exposed on node ports <code>32489</code> &amp; <code>30905</code>.</p><p>Now, we can run this command and get the external IP of any node to access Prometheus and Grafana:</p><pre><code class="language-bash">kubectl get nodes -o wide</code></pre><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_11.57.17_AM.png" alt="Screenshot 2023-11-02 at 11.57.17AM.png"></p><p>We can use the External-IP and the node ports to access the Prometheus and Grafana dashboards outside the cluster environment.</p><p>Prometheus Dashboard</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.04.36_PM.png" alt="Prometheus Dashboard"></p><p>Grafana Dashboard</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.04.53_PM.png" alt="Grafana Dashboard"></p><p>Run this command to get the password for the <strong>admin</strong> user of the Grafana dashboard:</p><pre><code class="language-bash">kubectl get secret --namespace default kube-prometheus-stack-grafana -o jsonpath=&quot;{.data.admin-password}&quot; | base64 --decode ; echo</code></pre><h2>Grafana Dashboard</h2><p>Upon logging in to the Grafana dashboard, use <code>admin</code> as the username along with the generated password. 
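</p><p>As a side note, the decoding step in the password command is plain Base64. For illustration, here is a hypothetical encoded value being decoded (the placeholder below is not a real password):</p><pre><code class="language-bash">printf 'c2VjcmV0LXBhc3N3b3Jk' | base64 --decode
# prints: secret-password</code></pre><p>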
We will see the &quot;Welcome to Grafana&quot; homepage as shown below.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.12.04_PM.png" alt="Screenshot 2023-11-02 at 12.12.04PM.png"></p><p>Since we used the Kube Prometheus Stack helm chart, the data sources for Prometheus and Alert Manager are added by default.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.18.14_PM.png" alt="Screenshot 2023-11-02 at 12.18.14PM.png"></p><p>We can add more data sources by clicking on the <strong>Add new data source</strong> button on the top right side.</p><p>By default, this Helm chart adds multiple dashboards to monitor the health of the Kubernetes cluster and its resources.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.22.37_PM.png" alt="Screenshot 2023-11-02 at 12.22.37PM.png"></p><p>Additionally, we also have the option of creating our own dashboards from scratch, as well as importing any of the many Grafana dashboards provided by the <a href="https://grafana.com/grafana/dashboards/">Grafana library</a>.</p><p>To import a Grafana Dashboard, let's follow these steps:</p><ul><li><p>From this <a href="https://grafana.com/grafana/dashboards/">Grafana library</a>, we can add any dashboard</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.28.44_PM.png" alt="Screenshot 2023-11-02 at 12.28.44PM.png"></p></li><li><p>Select the Dashboard and copy the Dashboard ID</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.34.54_PM.png" alt="Screenshot 2023-11-02 at 12.34.54PM.png"></p></li><li><p>Under the <strong>Dashboards</strong> page, we can find the <strong>Import</strong> option</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.32.55_PM.png" alt="Screenshot 2023-11-02 at 12.32.55PM.png"></p></li><li><p>Under the &quot;Import Dashboard&quot; 
page, we need to paste the Dashboard ID that we copied earlier &amp; click on the <strong>Load</strong> button.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.35.39_PM.png" alt="Screenshot 2023-11-02 at 12.35.39PM.png"></p></li><li><p>After clicking on the <strong>Load</strong> button, it will auto-load the dashboard from the library, after which we can import the dashboard by clicking on the <strong>Import</strong> button.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.37.50_PM.png" alt="Screenshot 2023-11-02 at 12.37.50PM.png"></p></li><li><p>Once the import is complete, we'll be redirected to the newly imported dashboard, which will also be visible under the Dashboards page.</p><p><img src="/blog_images/2024/prometheus-and-grafana-integration/Screenshot_2023-11-02_at_12.39.42_PM.png" alt="Screenshot 2023-11-02 at 12.39.42PM.png"></p><p>We can use this Node Exporter dashboard to monitor &amp; observe the health of the nodes present in our Kubernetes Cluster.</p></li></ul><h2>Conclusion</h2><p>In this blog, we learned how to integrate Prometheus and Grafana using the Helm chart. We also learned how to import dashboards into Grafana from the <a href="https://grafana.com/grafana/dashboards/">Grafana library</a>.</p><p>In the next blog, we will explore how to integrate <a href="https://grafana.com/oss/loki/">Grafana Loki</a> with Grafana and collect and store event-related metrics using the <a href="https://github.com/resmoio/kubernetes-event-exporter">Kubernetes Event Exporter</a>.</p>]]></content>
    </entry><entry>
       <title><![CDATA[Using enable-load-relative flag in building Ruby binaries]]></title>
       <author><name>Vishal Yadav</name></author>
      <link href="https://www.bigbinary.com/blog/use-of-enable-load-relative-flag-in-building-ruby-binaries"/>
      <updated>2023-06-20T12:00:00+00:00</updated>
      <id>https://www.bigbinary.com/blog/use-of-enable-load-relative-flag-in-building-ruby-binaries</id>
<content type="html"><![CDATA[<p>I'm working on building <a href="https://neeto.com/neetoci">NeetoCI</a>, which is a CI/CD solution. While building precompiled Ruby binaries, we encountered some challenges. This blog post explores the problems we faced and how we solved them.</p><h2>Pre-compiled Ruby binaries</h2><p>Pre-compiled Ruby binaries are distribution-ready versions of Ruby that include optimized features for specific systems. These Ruby binaries save time by eliminating the need to compile Ruby source code manually. Pre-compiled Ruby binaries help users quickly deploy applications that use different versions of Ruby on multiple machines.</p><p><a href="https://rvm.io/">RVM</a> (Ruby Version Manager) is widely used for managing Ruby installations on Unix-like systems. RVM provides customized pre-compiled Ruby binaries tailored for various CPU architectures. These binaries offer additional features like readline support and SSL/TLS support. You can find them at <a href="https://rvm.io/binaries/">RVM binaries</a>.</p><h2>The Need for pre-compiled Ruby Binaries</h2><p><a href="https://www.neeto.com/neetoci">NeetoCI</a> must execute user code in a containerized environment. A Ruby environment is essential for running Ruby on Rails applications. However, relying on the system's Ruby version is impractical since it may differ from the user's required version. Although rbenv or rvm can be used to install the necessary Ruby version, this approach could be slow. To save time, we chose to leverage pre-compiled Ruby binaries.</p><p>As a CI/CD system, NeetoCI must ensure that all versions of Ruby that an application requires are always available. Hence, we decided to build our own binaries instead of relying on binaries provided by RVM.
Also, this would allow us to do more system-specific optimizations to the Ruby binary at build time.</p><h2>Building pre-compiled Ruby binaries</h2><p>We built a Ruby binary following the <a href="https://github.com/ruby/ruby/blob/master/doc/contributing/building_ruby.md">official documentation</a>. We were able to execute it on our local development machines. But the same binary ran into an error in our CI/CD environment.</p><p><img src="/blog_images/2023/use-of-enable-load-relative-flag-in-building-ruby-binaries/failing-binary-error-image.png" alt="Bad Interpreter"></p><pre><code class="language-bash">$ bundle config path vendor/bundle
./ruby: bad interpreter: No such file or directory</code></pre><p>To debug the issue, we initially focused on <code>$PATH</code>. However, even after resolving the <code>$PATH</code> issues, the problem persisted. We conducted a thorough investigation to identify the root cause. Unfortunately, not much was written on the Internet about this error. There was no mention of it in the official <a href="https://github.com/ruby/ruby/blob/master/doc/contributing/building_ruby.md">Ruby documentation</a>.</p><p>As the next step, we decided to download the binary for version 3.2.2 from <a href="https://rvm.io/binaries/">RVM</a>.
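</p><p>For reference, a download-and-inspect sketch might look like the following. The exact tarball URL depends on your distro and CPU architecture (check <a href="https://rvm.io/binaries/">RVM binaries</a> for your platform; the Ubuntu path below is illustrative). Ruby records its build-time configure flags in <code>RbConfig</code>, which is one way to read them out of an unpacked binary:</p><pre><code class="language-bash"># Download and unpack RVM's pre-compiled Ruby 3.2.2 (URL pattern is illustrative)
curl -LO https://rvm.io/binaries/ubuntu/20.04/x86_64/ruby-3.2.2.tar.bz2
tar -xjf ruby-3.2.2.tar.bz2

# Print the flags this binary was configured with
./ruby-3.2.2/bin/ruby -e 'puts RbConfig::CONFIG["configure_args"]'</code></pre><p>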
While examining the configuration file, we noticed that the following arguments were used with the configure command during the Ruby binary build process:</p><pre><code class="language-bash">configure_args=&quot;'--prefix=/usr/share/rvm/rubies/ruby-3.2.2' '--enable-load-relative' '--sysconfdir=/etc' '--disable-install-doc' '--enable-shared'&quot;</code></pre><p>Here are the explanations of the configuration arguments:</p><ol><li><p><code>--prefix=/usr/share/rvm/rubies/ruby-3.2.2</code>: This specifies the directory where the Ruby binaries, libraries and other files will be kept after the installation is done.</p></li><li><p><code>--enable-load-relative</code>: This specifies that Ruby can load relative paths for dynamically linked libraries. It allows the usage of relative paths instead of absolute paths when loading shared libraries. This feature can be beneficial in specific deployment scenarios.</p></li><li><p><code>--sysconfdir=/etc</code>: This argument sets the directory where Ruby's system configuration files will be installed. In this case, it specifies the <code>/etc</code> directory as the location for these files.</p></li><li><p><code>--disable-install-doc</code>: When this option is enabled, the installation of documentation files during the build process is disabled. This can help speed up the build process and save disk space, especially if you do not require the documentation files.</p></li><li><p><code>--enable-shared</code>: Enabling this option allows the building of shared libraries for Ruby.
Shared libraries enable Ruby to dynamically link and load specific functionality at runtime, leading to potential performance improvements and reduced memory usage.</p></li></ol><p>In simpler terms, when the <code>--enable-load-relative</code> flag is enabled, the compiled Ruby binary can search for shared libraries in its own directory using the <code>$ORIGIN</code> variable.</p><p>When we built the binary in the Docker registry, the <code>--prefix</code> we passed was something like <code>/usr/share/neetoci</code>. As a result, the built binary had <code>/usr/share/neetoci</code> hard-coded in various places. When we downloaded this binary and used it in the CI environment, Ruby kept looking for <code>/usr/share/neetoci</code> to load its dependencies.</p><p>By enabling the <code>--enable-load-relative</code> flag while building the binary, Ruby will not use the hard-coded value. Instead, Ruby will use the <code>$ORIGIN</code> variable and search for its dependencies relative to the directory <code>$ORIGIN</code> points to.</p><p>This is particularly helpful when the Ruby binary is relocated to a different directory or system. By using relative paths with <code>$ORIGIN</code>, the binary can find its shared libraries regardless of its new location. Without this flag, shared libraries are loaded using absolute paths, which can cause issues if the binary is moved to a different location and cannot locate its shared libraries.</p><p>In our specific use case, where we create and download binaries in separate containers, we encountered an error due to the absolute paths. To overcome this, we enabled the <code>--enable-load-relative</code> flag. This allowed the binary to find its shared libraries successfully, and it worked as expected in our CI/CD environment.</p><p><img src="/blog_images/2023/use-of-enable-load-relative-flag-in-building-ruby-binaries/passing-ruby-binary-image.png" alt="Successful Build"></p>]]></content>
    </entry>
     </feed>