In this blog, I will talk about how I load test microservices in Kubernetes using k6 and Prometheus. We will use Prometheus to collect metrics from the microservices and from k6, and display them in Grafana.

Please note that this blog is for advanced readers and assumes you already understand the technologies above; if you don’t, please read about them first. I won’t be covering their basics here. Instead, I will walk through the setup of k6 and the practices I use to run the tests.

What is K6?

k6 is an open-source load testing tool (with an optional hosted cloud service) created by Grafana Labs. It is written in Go, and its test cases are written in JavaScript. You can read more about it here.

Pre-requisites

  • Prometheus & Grafana: This blog assumes you understand how Prometheus and Grafana work. If you don’t, you can read about them here.
  • Kubernetes: This blog assumes you understand how Kubernetes works. If you don’t, you can read about it here.

Running Locally

Writing Test Cases

k6 test cases are written in JavaScript. You can refer to the documentation for help.

We will start with a very simple test case, where we will hit a single endpoint of a microservice and check the response code.

You can read more about the logic of the test cases from the documentation and examples.

  • test_CASE_NAME.js

    import http from 'k6/http';
    import { check } from 'k6';

    export const options = {
        // Ramp up to 2000 virtual users over 5 minutes, then hold for 10 minutes
        stages: [
            { duration: '5m', target: 2000 },
            { duration: '10m', target: 2000 },
        ],
    };

    export default function () {
        const url = 'http://domain/v1/home';
        const params = {
            headers: {
                'Content-Type': 'application/json',
                'Authorization': 'Bearer TOKEN',
            },
        };
        // Hit the endpoint and verify the response code
        const res = http.get(url, params);
        check(res, { 'status was 200': (r) => r.status === 200 });
    }

Running Test Cases

  • Run a simple test

    k6 run test_CASE_NAME.js
  • Run a test with options and save the results

    k6 run --vus 10 --duration 30s --out json=test_CASE_NAME.json test_CASE_NAME.js
  • Run the test and save the results to Prometheus

    k6 run -o experimental-prometheus-rw test_CASE_NAME.js

    Here, k6 streams the results to Prometheus using remote write; by default, it pushes data to http://localhost:9090/api/v1/write. You can read about it here.
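If your Prometheus is not running on localhost, you can point k6 elsewhere with the remote-write env variable; the URL below is a placeholder:

K6_PROMETHEUS_RW_SERVER_URL=http://prometheus.example.com/api/v1/write \
    k6 run -o experimental-prometheus-rw test_CASE_NAME.js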

You can then use Grafana to visualize the results. You can find the dashboard here.

Please note that this output is experimental right now and might change in the future.

Setup

Now, we will run our tests in Kubernetes. We will set up Prometheus and Grafana to collect the metrics from k6 and the microservices. We will also set up the k6-operator to run the tests in Kubernetes.

Setup Kubernetes

You can either host it on any cloud provider or use Minikube to run it locally. I will be using Minikube for this blog. Here is the blog to set up Minikube on Ubuntu 20.04: Setup Minikube on Ubuntu 20.04
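To get a local cluster going quickly, something like the following works; the resource flags are illustrative, so size them for your machine:

# Start a local cluster with enough headroom for Prometheus, Grafana, and k6
minikube start --cpus=4 --memory=8192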

Setup Prometheus & Grafana

I use the kube-prometheus-stack Helm chart to set up Prometheus and Grafana. Below are the steps to set up Prometheus and Grafana using the Helm chart:

# Add the prometheus-community repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install the kube-prometheus-stack helm chart
helm install prometheus prometheus-community/kube-prometheus-stack

You can optionally provide a values file and pin the version of the chart you want to deploy. Through the values file, you can also provide a domain name and the chart will set up the ingress for you (a sketch of such a values file follows the command below). This assumes you already have an ingress controller set up.

helm upgrade --install prometheus prometheus-community/kube-prometheus-stack -f VALUE-FILE.yaml -n monitoring --version=37.2.0
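As a sketch, a minimal values file that enables ingress for Grafana and Prometheus could look like the following; the hostnames are placeholders, and the keys follow the kube-prometheus-stack chart’s values layout:

grafana:
  ingress:
    enabled: true
    hosts:
      - grafana.example.com
prometheus:
  ingress:
    enabled: true
    hosts:
      - prometheus.example.com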

Setup K6

k6 can run locally or in a Docker container inside the Kubernetes cluster. To run the tests on the cluster, we deploy the k6-operator:

git clone https://github.com/grafana/k6-operator && cd k6-operator
make deploy

Ideally, deploy the k6-operator in a namespace separate from the ones where your tests, your microservices, or Prometheus run. By default, make deploy installs the operator into its own k6-operator-system namespace.
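You can verify that the operator came up with a quick check; the namespace name assumes the operator’s default configuration:

kubectl get pods -n k6-operator-system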

Running on K8s

Create test file configmap

# Create a configmap for the test cases
kubectl create configmap load-test --from-file=test_CASE_NAME.js -n NAMESPACE

# Check if the configmap is created
kubectl describe configmap load-test -n NAMESPACE

Note that configmap names must be lowercase, DNS-style names (no underscores), so we name it load-test, which is also the name the K6 resource below references.

Create k6 CRD

Create a k6 CRD file, which tells the k6-operator how to run the test cases in Kubernetes.

  • test_k6.yaml

    apiVersion: k6.io/v1alpha1
    kind: K6
    metadata:
      name: load-test
    spec:
      parallelism: 8
      arguments: --out experimental-prometheus-rw
      script:
        configMap:
          name: load-test
          file: test_CASE_NAME.js
      runner:
        env:
          # Point k6's remote write at the in-cluster Prometheus service
          - name: K6_PROMETHEUS_RW_SERVER_URL
            value: http://kube-prometheus-stack-prometheus.monitoring.svc.cluster.local:9090/api/v1/write

Here, we pass the experimental Prometheus output flag, --out experimental-prometheus-rw, to save the results to Prometheus.

Also note that we override an env variable, K6_PROMETHEUS_RW_SERVER_URL, from its default value of http://localhost:9090/api/v1/write to the Prometheus URL, in this case the service running inside the cluster. You can also use the domain name of Prometheus if you have set up the ingress for it.

Apply the k6 CRD

# Apply the k6 crd file
kubectl apply -f test_k6.yaml -n NAMESPACE

This will deploy k6 runner pods in NAMESPACE, which will run the test_CASE_NAME.js file. You can check the logs of the k6 pods to see the progress of the test cases and the metrics.
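For example, you can inspect the runners like this; the pod name is illustrative, as the operator derives it from the K6 resource name and the parallelism:

# List the runner pods created by the operator
kubectl get pods -n NAMESPACE

# Tail the logs of one runner pod (the exact name will vary)
kubectl logs -f load-test-1-xxxxx -n NAMESPACE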

K6 CI/CD Pipeline

This is probably the key takeaway from this blog: how do we make the test cases run automatically, make sure the whole team has access to them, and abstract away the k6 setup to keep it easy to use?

I keep all the test cases in a central place accessible to the team, in my case GitHub, and use a CI/CD pipeline to deploy them automatically. I use GitHub Actions to deploy the k6 test cases, but you can use any CI/CD tool you want.

Action File

  • .github/workflows/k6.yaml

    name: Deploy-Test
    on:
      push:
        branches:
          - 'main'
    env:
      GPID: ******
      GKE_ZONE: ******
      GKE_CLUSTER: ******
      GITHUB_SHA: ******
      # The k6 CRD file; yq reads the test file name from it below
      CA_FILE: test_config.yaml
    jobs:
      setup-build-publish-deploy:
        name: setup-build-publish-deploy
        runs-on: ubuntu-latest
        steps:
          - name: Checkout Repo
            uses: actions/checkout@v2
          - id: include-google-actions
            uses: google-github-actions/auth@v1
            with:
              credentials_json: ${{ secrets.GCP_CREDENTIALS_QA }}
          - name: setup-GCLOUD
            uses: google-github-actions/setup-gcloud@v1
            with:
              install_components: 'gke-gcloud-auth-plugin'
          - name: Install yq
            run: |
              sudo wget -qO /usr/local/bin/yq https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64
              sudo chmod +x /usr/local/bin/yq
          - name: Delete old k6s-test
            id: delete
            run: |
              export USE_GKE_GCLOUD_AUTH_PLUGIN=True
              gcloud container clusters get-credentials $GKE_CLUSTER --zone $GKE_ZONE --project $GPID
              kubectl delete k6s.k6.io load-test -n NAMESPACE
            continue-on-error: true
          - name: Deploy
            id: extract
            run: |
              export USE_GKE_GCLOUD_AUTH_PLUGIN=True
              gcloud container clusters get-credentials $GKE_CLUSTER --zone $GKE_ZONE --project $GPID
              FIELD_VALUE=$(yq -r '.spec.script.configMap.file' $CA_FILE)
              kubectl create configmap load-test --from-file=k6/$FIELD_VALUE --dry-run=client -o yaml -n NAMESPACE | kubectl apply -f -
              kubectl apply -f $CA_FILE -n NAMESPACE
Here’s what’s happening in the above file:

  • Checkout the repo.
  • Set up Google Cloud access.
  • Install yq: it’s used to read the test file name from the k6 CRD file.
  • Delete the old k6 CRD: this is optional, but I prefer to delete the old resource before deploying the new one.
  • Deploy the new k6 CRD: we first create the configmap for the test cases and then apply the k6 CRD file.

This is a basic setup to get started, but a similar setup can be built with other CI/CD tools as well. You can also configure the pipeline to run multiple test cases, as sketched below.
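For instance, a hypothetical GitHub Actions matrix could deploy one K6 custom resource per CRD file; the file names below are placeholders, and each CRD is assumed to reference its own configmap name and test file:

jobs:
  deploy-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # One k6 CRD file per test case (placeholder names)
        ca_file: [test_config.yaml, test_config_2.yaml]
    steps:
      # ... same checkout, auth, and yq setup steps as above ...
      - name: Deploy
        run: |
          FIELD_VALUE=$(yq -r '.spec.script.configMap.file' ${{ matrix.ca_file }})
          CM_NAME=$(yq -r '.spec.script.configMap.name' ${{ matrix.ca_file }})
          kubectl create configmap $CM_NAME --from-file=k6/$FIELD_VALUE --dry-run=client -o yaml -n NAMESPACE | kubectl apply -f -
          kubectl apply -f ${{ matrix.ca_file }} -n NAMESPACE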

Also note: run load testing only in the QA or staging environment.

Repo Structure

.
├── k6
│   ├── k6
│   ├── test_CASE_NAME.js
│   └── test_CASE_NAME_2.js
└── test_config.yaml

Conclusion

This is how I run load testing in Kubernetes. I hope you find this blog useful. If you have any questions, please feel free to reach out to me on Twitter or LinkedIn.

This is a very basic setup, and you can extend it to run multiple test cases, run it in multiple environments, and also run it in parallel.

Thank you for reading.
