This article follows our first blog post about Kraken's deployment on Kubernetes. It is a step-by-step guide explaining how to deploy the InfluxDB/Telegraf/Grafana stack used to generate load testing reports on Kraken.
More importantly, we will see:
- How to map a configuration file using a ConfigMap resource?
- How to map sensitive environment variables using the Secrets object?
- How to use Kompose to generate declarative K8s configuration?
- How to mount a data volume with a PersistentVolumeClaim?
- As well as several other tips, like connecting to a running Pod, displaying logs, or cross-container communication using kube-dns.
Here again, we rely on the declarative configuration of K8s to install the InfluxDB/Grafana stack. It's the easiest way to proceed since there are many configuration objects to create.
Prerequisites
You need Kubernetes and Minikube installed; please check my first blog post for more information.
InfluxDB is an open-source time series database (TSDB). It is optimized for fast, high-availability storage and retrieval of time series data.
In the context of load testing with Kraken, we use it to store performance metrics generated by Gatling such as:
- Number of requests (hits),
- Number of errors,
- Response times,
- etc.
Monitoring metrics can also be inserted using Telegraf (that's what we do in this blog post). Grafana displays the metrics gathered in InfluxDB in customizable dashboards.
TL;DR
Download and extract the kraken-monitoring.zip archive.
It contains several K8s configuration files for InfluxDB, Telegraf and Grafana, as well as configuration files specific to each application.
It also contains a Makefile. Here is an extract of this file:
|
|
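The Makefile itself is not reproduced here; as a rough sketch, its targets presumably wrap the commands described in the steps below (the recipes shown are assumptions, not the actual file):

```makefile
# Hypothetical sketch: target names match the steps described below,
# the recipes are assumptions.
start:
	minikube start

mount:
	minikube mount grafana/config:/grafana

all:
	kubectl apply -f influxdb -f telegraf -f grafana

watch:
	kubectl get pods --watch

list:
	minikube service list
```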
To launch the complete stack:
- Run make start to start Minikube (or copy-paste the command above in a terminal if you do not have make installed),
- Execute make mount to mount Grafana's configuration file into Minikube,
- In a new terminal, run make all to launch the complete stack,
- Wait for the various Pods to start (it may take some time to download the Docker images) using make watch,
- List the available services with make list.
|
|
Open the URL of the grafana-service and check that the stack is properly installed.
How to Deploy InfluxDB?
Before we deploy the InfluxDB container on Kubernetes, we must create several resources used by it:
- The influxdb.conf file must be mounted as a ConfigMap,
- Secured environment variables such as admin credentials must be set using Secrets,
- A volume must be created to persist the InfluxDB data using a PersistentVolumeClaim.
In this chapter we will take the time to test several options to generate declarative object configuration files.
Read more about the YAML format of K8s objects in the documentation: Describing a Kubernetes Object.
Map a Configuration File Using ConfigMap
InfluxDB is configured using an influxdb.conf file placed in the /etc/influxdb/ folder.
TL;DR: Download a sample ConfigMap file and import it with the command kubectl apply -f influxdb-config.yaml.
In Kubernetes, mapping a configuration file is done by creating a ConfigMap.
- An easy way to generate a ConfigMap is to use an existing file. You can download a sample here: influxdb.conf
- Then execute the command kubectl create configmap influxdb-config --from-file=influxdb.conf to generate the ConfigMap object.
You can export the created ConfigMap into a YAML configuration file named influxdb-config.yaml with the following command:

```shell
kubectl get configmaps influxdb-config --export -o yaml > influxdb-config.yaml
```
This allows you to see the format used in such files, here influxdb-config.yaml:
|
|
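An exported ConfigMap has roughly the following shape (a trimmed sketch: in the real file, the value of the data entry is the complete raw content of influxdb.conf):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: influxdb-config
data:
  influxdb.conf: |
    reporting-disabled = false
    [meta]
      dir = "/var/lib/influxdb/meta"
    # ... rest of the raw influxdb.conf content ...
```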
The command kubectl get <object type> --export -o yaml is an easy way to generate a declarative configuration file from existing K8s objects. Unfortunately, the --export option is deprecated.
Check the data field. It contains one or several entries, named after the configuration file used. The data.influxdb.conf field contains the raw content of the source influxdb.conf file. This field of the ConfigMap is then referenced inside the InfluxDB Deployment.
Finally, check that the ConfigMap is created (here using the apply command, as we can guess from the presence of the kubectl.kubernetes.io/last-applied-configuration annotation):
```shell
kubectl describe configmaps influxdb-config
```
You can also check the K8s API to know the format and the required fields for any kind of K8s resource.
Regarding ConfigMaps, the same format is used for the telegraf.conf file and Grafana's multiple file inputs.
Map Environment Variables Using Secrets
Let’s start with the definition of a K8s secret:
A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.
This is the perfect resource for storing sensitive InfluxDB environment variables.
Start by creating the configuration file influxdb-secrets.yaml (or download it here):
|
|
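If you prefer writing it yourself, a Secret of roughly this shape works; the variable names follow the official influxdb Docker image, and the values here are placeholders, not the actual credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: influxdb-secrets
type: Opaque
stringData:
  # Plain-text values; Kubernetes stores them base64-encoded.
  INFLUXDB_DB: gatling
  INFLUXDB_ADMIN_USER: admin
  INFLUXDB_ADMIN_PASSWORD: changeme
  INFLUXDB_HTTP_AUTH_ENABLED: "true"
```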
Import it in K8s with the apply command:

```shell
kubectl apply -f influxdb-secrets.yaml
```
Display the created Secret; the environment variable values are not visible:
```shell
kubectl describe secret influxdb-secrets
```
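The values are hidden in this output, but keep in mind that Secret values are only base64-encoded, not encrypted: anyone with read access to the Secret can decode them. For example (using a placeholder value, not a real credential):

```shell
# Encode a value the way Kubernetes stores it in a Secret
printf '%s' 'admin' | base64
# YWRtaW4=

# Decode it back
printf '%s' 'YWRtaW4=' | base64 --decode; echo
# admin
```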
Mount a Data Volume
In Kubernetes, persistence of data is done using persistent volumes.
We need to create a PersistentVolumeClaim in order to keep InfluxDB’s data even if the service is restarted. A PersistentVolumeClaim describes the type and details of the volume required. Kubernetes finds a previously created volume that fits the claim or creates one with a dynamic volume provisioner.
If you are using Minikube, a dynamic provisioner is present. It maps volumes to local folders inside the VM.
Using Kompose
Let’s try another tool to generate declarative configuration files for Kubernetes: Kompose. Kompose takes a Docker Compose file and translates it into Kubernetes resources.
Installation is straightforward:
|
|
Unfortunately, Kompose only works with major docker-compose releases (2.0, 3.0, etc.). So if you have a docker-compose file with version 3.5, you have to downgrade it to 3.0.
Also, in my case the generated resource configuration files did not work out of the box.
It still gives an idea of what must be done in K8s for a given docker-compose.yml configuration file.
Create the following docker-compose.yml file:
|
|
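The file looks roughly like this (a sketch: the image tag and exact file layout are assumptions based on the rest of this article):

```yaml
version: '3.0'

services:
  influxdb:
    image: influxdb:1.7
    ports:
      - "8086:8086"
    env_file:
      - env
    volumes:
      - ./influxdb.conf:/etc/influxdb/influxdb.conf
      - influxdb-data:/var/lib/influxdb

volumes:
  influxdb-data:
```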
Also download the influxdb.conf and env configuration files.
Run the kompose convert command to convert the created docker-compose.yml file into several K8s configuration files:
|
|
It’s far from perfect:
- Sensitive environment variables should be placed inside Secrets,
- A PersistentVolumeClaim is created for the influxdb.conf file, but there is no reference to the actual file. A ConfigMap works better in such a case.
It generates a file named influxdb-data-persistentvolumeclaim.yaml though. Rename it to influxdb-data.yaml and update the storage capacity:
|
|
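The renamed file should look something like this (the storage size here is an assumption; adjust it to your needs):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```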
Finally, create the PersistentVolumeClaim:

```shell
kubectl apply -f influxdb-data.yaml
```
And check that the created PVC is matched to a PersistentVolume:
```shell
kubectl get pvc
```
We can see the volume name pvc-xxx and the Bound status here.
All prerequisites for an InfluxDB deployment are now met: environment variables, configuration file, and data persistence.
Create an InfluxDB Deployment
Let’s apply an InfluxDB Deployment at last.
Create the influxdb-deployment.yaml file:
|
|
Volumes must be declared outside the container and referenced in the spec.template.spec.containers.volumeMounts section:
- influxdb-data references a PersistentVolumeClaim with the same name,
- influxdb-config references a ConfigMap with the same name.
The influxdb-config volume mount has a subPath: influxdb.conf property. We previously generated a ConfigMap with only one entry: its key is the name of the source configuration file and its value is the .conf content.
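As a sketch, the relevant part of the deployment could be wired like this (an excerpt, not the full file; the image tag and mount paths are assumptions consistent with the rest of the article):

```yaml
# Excerpt of influxdb-deployment.yaml (sketch)
spec:
  template:
    spec:
      containers:
        - name: influxdb
          image: influxdb:1.7
          volumeMounts:
            - name: influxdb-data
              mountPath: /var/lib/influxdb
            - name: influxdb-config
              mountPath: /etc/influxdb/influxdb.conf
              subPath: influxdb.conf
      volumes:
        - name: influxdb-data
          persistentVolumeClaim:
            claimName: influxdb-data
        - name: influxdb-config
          configMap:
            name: influxdb-config
```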
The environment variables reference the previously created Secrets.
Apply this deployment to the K8s cluster:
```shell
kubectl apply -f influxdb-deployment.yaml
```
Check the InfluxDB Deployment
Now that the InfluxDB deployment is created, let’s check if our previous configurations are taken into account.
Is the Corresponding Pod Created?
First and foremost, check that the Deployment is created and ready:
```shell
kubectl get deployments
```
This may take a while if you have low bandwidth (the Docker image must be pulled).
Check for the corresponding Pod creation:
```shell
kubectl get pods
```
You can also describe the created Pod:
|
|
The containers.influxdb.Mounts and Volumes sections show us that all volume configurations are OK from the Pod's point of view.
Let’s check this case by case.
Is the Configuration File Loaded?
Connect to the Pod and display the content of the influxdb.conf configuration file:
|
|
It should match what you set while creating the ConfigMap.
Are the Secrets Mapped?
Connect to the Pod:
|
|
Then connect to InfluxDB and display the databases:
|
|
The credentials set while creating the Secrets should give you access to the database, and the gatling DB should exist.
Is the Data Folder Mounted?
We previously created a PersistentVolumeClaim. Under the hood, K8s created a PersistentVolume of type HostPath. There are many kinds of PersistentVolumes, several of them specific to cloud providers.
List all existing PV:
```shell
kubectl get pv
```
Describe the PersistentVolume named after our PVC:
```shell
kubectl describe pv pvc-a59b241b-ca22-4ea3-a7de-ac48e3493616
```
We can see here that it points to the /tmp/hostpath-provisioner/pvc-a59b241b-ca22-4ea3-a7de-ac48e3493616 folder on the host machine.
Since we are using Minikube, this host folder is in fact inside the VM. Connect to the VM and display the content of the folder:
```shell
minikube ssh
ls /tmp/hostpath-provisioner/pvc-a59b241b-ca22-4ea3-a7de-ac48e3493616
```
You can see the sub-folders generated by InfluxDB.
Note: In case you need to have access to the data directly from your machine, this documentation page explains how to map a host folder to a Minikube one.
Expose a Deployment as a Service
Our goal here is to make InfluxDB accessible:
- To Telegraf, so it can inject data,
- To Grafana, in order to display dashboards based on this data.
Creating a Service that wraps InfluxDB will allow us to use Kubernetes DNS and expose it to other containers living in the cluster.
Start by writing an influxdb-service.yaml file:
|
|
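A minimal sketch of such a Service (the selector label is an assumption; it must match the labels of the InfluxDB deployment's Pod template):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: influxdb-service
spec:
  selector:
    app: influxdb
  ports:
    - port: 8086
      targetPort: 8086
```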
Apply this configuration to the K8s cluster:
```shell
kubectl apply -f influxdb-service.yaml
```
Check for the created service:
```shell
kubectl get services
```
It's done, and port 8086 is open.
Check also that the kube-dns service is started (otherwise you need to activate it):
```shell
kubectl get services --namespace=kube-system
```
We can finally test that our DNS setup is working with nslookup:
|
|
How to Deploy Telegraf?
It’s the same principle as for InfluxDB: we need to create several resources for configuration files and then create a deployment.
To avoid redundancy, we’ll skip the creation of the declarative configuration files used to create K8s resources.
Download and Apply Configuration Files
Download the following files:
- telegraf-config.yaml: How to map a configuration file using ConfigMap?
- telegraf-secrets.yaml: How to map env variables using Secrets?
- telegraf-deployment.yaml: How to create a Deployment that uses both ConfigMap and Secret?
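Inside telegraf-config.yaml, the key part of the embedded telegraf.conf is the output plugin pointing at InfluxDB through its Service DNS name. A sketch (the database name matches what we query later; the exact options in the real file may differ):

```toml
# Sketch of the InfluxDB output section of telegraf.conf
[[outputs.influxdb]]
  # Reachable thanks to kube-dns and the influxdb-service created earlier
  urls = ["http://influxdb-service:8086"]
  database = "telegraf"
```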
Then apply them all:
```shell
kubectl apply -f telegraf-config.yaml -f telegraf-secrets.yaml -f telegraf-deployment.yaml
```
Note: You can also place these files in a folder named telegraf and then directly run the command kubectl apply -f telegraf. In case you have sub-folders, use the -R parameter to make it recursive.
Check Data Injection Into InfluxDB
Once the Telegraf Pod is started, verify that it injects some data into InfluxDB.
|
|
We can see in InfluxDB's logs that Telegraf is periodically sending data. Another way to check this is by connecting to the pod and displaying the available measurements for the telegraf database in InfluxDB:
|
|
How to Deploy Grafana?
Grafana is free software for visualizing and formatting metric data. It allows you to create dashboards and graphs from multiple sources, including time series databases like Graphite and InfluxDB.
It requires several configuration files to set up:
- Data source configurations,
- Default dashboard configurations and JSON definitions,
- The grafana.ini file,
- As well as environment variables.
The simplest way would probably have been to create a custom Docker image that includes these files. But we are going to try to mount a folder in Minikube to access all these files in one operation.
Mount Configuration Folder in Minikube
Download and extract the set of configuration files: grafana.zip.
The config sub-folder contains the configuration files mentioned above.
One file in particular should catch your attention: config/provisioning/datasource/influxdb.yaml. It references the InfluxDB service using Kubernetes DNS, the database created by Telegraf, and the administrator login and password.
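Its content is along these lines (a sketch; the credential values are placeholders that must match the Secrets created for InfluxDB):

```yaml
apiVersion: 1

datasources:
  - name: telegraf
    type: influxdb
    access: proxy
    url: http://influxdb-service:8086
    database: telegraf
    user: admin
    secureJsonData:
      password: changeme
```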
Execute the following command to mount the grafana/config folder into the Minikube VM:
```shell
minikube mount grafana/config:/grafana
```
Leave the terminal used to run this command open for the duration of the test.
Apply Deployment and Service Configurations
The grafana.zip archive also contains the grafana-deployment.yaml configuration:
|
|
There are several differences from the InfluxDB deployment:
- The environment variables are declared directly in the deployment, not in a Secret, making the InfluxDB admin password visible in the K8s cluster,
- The volumes do not reference a PersistentVolumeClaim, making this configuration less portable (it works only with Minikube and a host folder mapping).
Note that we also use volumeMounts.subPath to reference directories (/grafana/provisioning/ and /grafana/dashboards/) as well as the grafana.ini configuration.
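A sketch of how these volumes could be wired (an excerpt, not the actual file; the hostPath target is assumed from the Minikube mount above):

```yaml
# Excerpt of grafana-deployment.yaml (sketch)
spec:
  template:
    spec:
      containers:
        - name: grafana
          image: grafana/grafana
          volumeMounts:
            - name: grafana-config
              mountPath: /etc/grafana/grafana.ini
              subPath: grafana.ini
            - name: grafana-config
              mountPath: /etc/grafana/provisioning
              subPath: provisioning
      volumes:
        - name: grafana-config
          hostPath:
            path: /grafana
```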
Apply the Deployment configuration:
```shell
kubectl apply -f grafana-deployment.yaml
```
The grafana-service.yaml file exposes the previously created deployment using a NodePort service. More information is available in my previous blog post on how to use a NodePort to expose a port lower than 30000.
|
|
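As a sketch, such a NodePort Service could look like this (the nodePort value and selector label are assumptions; the nodePort must fall within the cluster's range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30300
```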
Apply the Service configuration:
```shell
kubectl apply -f grafana-service.yaml
```
Check Grafana Installation
List all services exposed by Minikube:
```shell
minikube service list
```
Open the grafana-service URL and go to the Data Sources list (left menu > Configuration > Data Sources).
You should see two data sources pointing to InfluxDB.