Setting Up A Grafana Loki Demo Environment on Kubernetes

I recently wanted to set up a small Loki environment on Kubernetes just to see how it worked, evaluate how it performs out of the box, and decide whether it was something I wanted to use for a small logging environment. Grafana has Helm charts available and instructions on how to use them for this exact purpose, but I (along with many others, judging by the countless GitHub issues) still had an incredibly difficult time standing up the simplest possible Loki environment. Because of that, I figured I would write up what ended up working for me.

Setting up Kubernetes

I won’t go too much into this since there are a lot of different ways to set up Kubernetes, and this should (in theory) work on any of them. In my particular case I was using microk8s. The only reason I used microk8s is that the Ubuntu installer ships with the option to install it alongside the Ubuntu installation. In my lab I usually use Ubuntu virtual machines…so you get the idea.
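
If you happen to go the same microk8s route, getting a working cluster is only a handful of commands. Roughly, on a fresh Ubuntu box (the addon names here, dns and hostpath-storage, can vary slightly between microk8s versions, so adjust as needed):

sudo snap install microk8s --classic
microk8s status --wait-ready
microk8s enable dns hostpath-storage
microk8s config > ~/.kube/config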

Here are a few other popular avenues for setting up Kubernetes: minikube, kind, k3s, or a full kubeadm install.

You’ll also need Helm installed. Some of the above come with Helm preinstalled; otherwise, take a look at Helm’s installation page, which covers all the available ways of installing it.
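
If you just want a quick way to get Helm, the official install script from that page generally does the trick:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version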

Create a Helm Values Override File (Optional)

We haven’t gotten to the Helm stuff and actually installing Loki yet, so creating the Helm overrides file might not make much sense at this point. Alas, here it is anyway. If you look at the Helm overrides Grafana recommends, they’re much more involved, and I think that’s partly why so many people have trouble getting Loki up and running. Simpler is better here, as you can see from the Helm values in my file named loki-values.yaml:

loki:
  enabled: true       # deploy the Loki server itself

promtail:
  enabled: true       # deploy Promtail to collect pod logs and send them to Loki
Now, if you’ve looked into setting up Loki in Kubernetes before, you’re probably wondering why this file even exists. The values here are already the Helm chart’s defaults, so the file doesn’t actually change anything. However, if you want to keep messing around with Loki beyond just getting an environment stood up, you’ll need a values file to build on, which is why I included this section.
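
As an example of where this file earns its keep later: the loki-stack chart can also deploy Grafana for you, and that gets toggled from this same values file. If I’m remembering the chart’s values correctly (worth double-checking against its values.yaml), enabling Grafana would look like this:

loki:
  enabled: true

promtail:
  enabled: true

grafana:
  enabled: true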

Add Grafana’s Helm Repo and Install Loki

Now it’s time to install Loki. We’ll start by adding Grafana’s Helm repo:

helm repo add grafana https://grafana.github.io/helm-charts

Next we’ll run an update:

helm repo update

Now we can actually run the Helm install. I personally like to run an upgrade with the --install flag so I can reuse the same command whenever I have changes; it’s just an easy way to have one command I always use instead of separate commands for installs and upgrades:

helm upgrade --install --create-namespace -f loki-values.yaml --version 2.10.2 --namespace=loki-stack loki grafana/loki-stack

This will set up Loki in a new namespace named loki-stack. I’m pinning the chart version to 2.10.2 since that is what I’m using and know works; I can’t vouch that future or past releases will behave the same way. We can check the status of the deployment with the command below:

kubectl get pods -n loki-stack

We’ll want to make sure all pods eventually start and stay running. This usually doesn’t take more than a minute or two.
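
You can also ask Helm how the release itself looks, and watch the pods as they come up:

helm status loki -n loki-stack
kubectl get pods -n loki-stack -w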

Expose the Loki Service

This part is optional depending on your setup, and the method I’m going to show here isn’t really best practice (hence the “demo environment” in the title of this article). We’ve set up Loki, but it’s currently only accessible from within the Kubernetes cluster. If everything that will be communicating with Loki also lives in the cluster, or you have some other elaborate networking/service mesh setup going on, you may not need this. In my case I had some outside data I wanted to send in for testing, and my Grafana instance wasn’t running in Kubernetes either. There needs to be a path across the network from these external devices to the Loki service, and I created one by giving the Loki service an external IP address.

This can be done a few different ways. If you just want a quick kubectl command, you can do something like this:

kubectl patch svc loki -n loki-stack -p '{"spec": {"externalIPs":["192.168.100.100"]}}'

This makes Loki available at that IP so I can reach it within my private network. If you’re a fan of version controlling everything like I am, you’d likely opt to put it in a Kubernetes manifest instead, like I have in a file named loki-manifest.yaml:

apiVersion: v1
kind: Service
metadata:
  name: loki
  namespace: loki-stack
spec:
  externalIPs:
  - 192.168.100.100

This can then be applied with the following command:

kubectl apply -f loki-manifest.yaml --namespace=loki-stack
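
To confirm the service actually picked up the external IP and that Loki is answering on it, you can check the service and hit Loki’s readiness endpoint (swap in your own IP, of course):

kubectl get svc loki -n loki-stack
curl http://192.168.100.100:3100/ready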

Link Loki to Grafana

Loki’s frontend is just Grafana. Go into Grafana and add a Loki datasource with the URL of your service; in my case I entered http://192.168.100.100:3100. Make sure you include the port number, which is 3100 by default.
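
If your Grafana instance is set up for file-based provisioning, you can add the datasource that way instead of clicking through the UI. A minimal sketch, assuming the default provisioning path of /etc/grafana/provisioning/datasources, could be a file like loki-datasource.yaml:

apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://192.168.100.100:3100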

One issue of note that I ran into, and saw many GitHub issues about, was an error when saving the datasource. If you receive an error, go to the Explore tab, switch your datasource to the newly created Loki datasource, and actually run a query. In my case I got the error but Loki worked fine. Based on the GitHub issues I read through, it has something to do with the storage, which in this setup isn’t persistent. Loki seems to dislike that, but works anyway.
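
You can also sanity-check Loki outside of Grafana by hitting its HTTP API directly. The labels endpoint is standard; the example query assumes Promtail is attaching a namespace label, which I believe is the chart’s default behavior:

curl -s http://192.168.100.100:3100/loki/api/v1/labels
curl -s -G http://192.168.100.100:3100/loki/api/v1/query_range --data-urlencode 'query={namespace="loki-stack"}'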

Now What?

Well, this isn’t a production-ready system (obviously), but it is a good starting point to get a feel for Loki and decide whether you like it. It’s also a good base for adding the production-ready pieces you’ll need, like persistent storage and something like a reverse proxy to handle authentication.
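
Persistent storage, for instance, should just be more Helm values on the loki sub-chart. My understanding is it would look something along these lines in loki-values.yaml, though the exact keys are worth confirming against the chart version you’re installing:

loki:
  enabled: true
  persistence:
    enabled: true
    size: 10Gi

promtail:
  enabled: true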
