K6 is a great tool for load testing your web services. This article reviews K6 through the lens of GRPC services.
K6 is written in Go (Golang); you can have a look at the underlying code here and contribute if you so wish. Golang is a fun language to program in, and you can find out more on the Golang website. The K6 licence is a copyleft licence, which means any changes or additions have to be made available to the community.
The K6 documentation, in my opinion, is very extensive and a breath of fresh air. Why this article then, I hear you say? This article focuses on GRPC web services and will hopefully bring all of the relevant documentation together in that respect.
From a user's perspective, you interact with K6 using JavaScript files and terminal commands. I speak about the benefits of running tests from the command line here: https://test-logic.com/automate-tests-from-command-line-terminal/.
It is highly recommended that you do not load or spike test your live system. Instead, I would create an identical service for this purpose. It will need to be identical in every way, including processor, memory, disk space and configuration. If you use Docker or Kubernetes to build your service, you can tear it down as soon as the testing is done, saving maintenance and running costs. It also makes it easier to ensure the production and test services have identical configurations.
Performance, in basic terms, is affected by code and infrastructure. It can be easy to throw money at new hardware, but you cannot outrun poor code. Instilling good coding practices within your team minimises potential issues, and your own code is the most obvious place to look when trying to improve the performance of your system. Here are a few places to look for inefficiencies:
- Code locks
- Over-reliance on try/catch blocks
- Awaiting long-running methods to complete
- Single-threading policies
- Sleep statements (yes, it happens)
- Concatenating immutable objects such as strings (see the short example after this list)
- Inefficient external resources
- Inefficient SQL queries
- Poor database indexes
- Timeout configurations
- Poor memory management
- No page caching
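To make one of these concrete, here is a small, purely illustrative JavaScript sketch of the "concatenating immutable objects" point; the function names are hypothetical, and the same idea applies to immutable strings in most languages.

// Hypothetical illustration: strings are immutable, so each += allocates a brand new string.
function buildReceiptSlow(lines) {
  let receipt = '';
  for (const line of lines) {
    receipt += line + '\n'; // creates a new string on every iteration
  }
  return receipt;
}

// Collecting the pieces and joining once avoids the repeated allocations.
function buildReceiptFast(lines) {
  return lines.join('\n') + '\n';
}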
Performance Test Variations
In this K6 review, we will cover the following:
- Smoke test – an initial verification that you can connect to a service and that it returns a given status code
- Spike test – applies a large load to the server in a short amount of time
- Load test – applies a large load to the server, ramping up slowly to start with
- Soak test – a prolonged load test over a long period of time, which helps uncover memory leaks
Interaction via commands
Interaction via commands in K6 is very intuitive. First you create your JavaScript file, which has all of the magic in it, and then you call it from your terminal of choice. The thinking behind running it like this is that it is resource light and does not take up processing power from, say, a browser. The biggest benefit of running it as a command is that you can include it as part of a daily test in your CI system (Jenkins, for example).
In its most basic form it is this:
k6 run K6TestHelloSmoke.js
K6 Definition Files
We have mentioned previously that the definition files are written in JavaScript. This allows you to edit the definition files in your favourite IDE and lint tool. You should name these files so that it is intuitive what is going on inside just from looking at the file name. For example, K6SmokeTestMakeCoffee.js tells us at a glance that it is a smoke test and will let us know whether the service is available.
Smoke Test with GRPC
Smoke testing a unary GRPC service is super simple and is a great way to see if a given service is running correctly. You can also use a K6 smoke test to debug locally if you run your web service in debug mode; this can potentially save the time of creating an expensive client.
Let’s get stuck in! I have created a simple GRPC hello coffee shop service and set it to run from inside IntelliJ. This gives us something to performance test against. Let’s start from the top of this script and work our way down.
import grpc from 'k6/net/grpc';
import { check, sleep } from 'k6';

const client = new grpc.Client();
client.load(['../proto/theBusyBean'], 'CoffeeMaker.proto');

export default () => {
  client.connect('localhost:50051', {
    plaintext: true,
  });

  const serviceParameters = { subject: 'David' };
  const response = client.invoke('CoffeeMaker.CoffeeShopService/Hello', serviceParameters);

  check(response, {
    'status is OK': (r) => r && r.status === grpc.StatusOK,
  });

  console.log(JSON.stringify(response.message));

  client.close();
  sleep(1);
};
The imports pull in the libraries required to test the system.
The client is your GRPC consumer. The first parameter of client.load defines the paths to the directories containing your proto files and is a string array; if it is not provided, the current directory is used. The second parameter defines the proto files to use in the test.
export default () => is the function that runs when the K6 script is called from the command line. The tool will execute this function as many times as is defined; for our smoke test it will only be once.
I set up my GRPC service to run on localhost:50051 and define this as the first parameter of client.connect. The next parameter is an object of connection options, which consists of:
- plaintext – set this to true if you are testing locally, but otherwise (and rightly so) it should be left at its default, which means TLS is used.
- timeout – the amount of time K6 will wait to connect to your service. The default is 60 seconds.
- reflect – a Boolean that enables the GRPC server reflection protocol.
You can see in the script above that the only value I have set is plaintext: true.
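As a sketch only, here is what the connect call might look like with all three options set explicitly; the values are illustrative rather than taken from the original service.

client.connect('localhost:50051', {
  plaintext: true,  // skip TLS because we are testing a local service
  timeout: '30s',   // wait up to 30 seconds for the connection instead of the 60 second default
  reflect: false,   // set to true to discover services via server reflection instead of client.load
});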
serviceParameters holds all of the parameters that your service expects. We know that our service has the parameter subject and that it expects a name. A royal subject, perhaps, ha ha.
Request and response are, in a sense, merged into one call whose result is stored in response. client.invoke takes two parameters: the first is the package, service and method you wish to call, and the second is the request arguments defined in the serviceParameters object.
check verifies the status of the call to ensure that it was grpc.StatusOK (the GRPC equivalent of HTTP 200). We then take the response message, turn it into a human-readable string and output it to the console. If you are expecting a different status, you can check against the other status constants.
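For example, if you were deliberately sending a request that the service should reject, a check against one of those other constants might look like this; the expectation here is hypothetical.

check(response, {
  // assumes the service answers an unknown request with the NOT_FOUND status
  'status is NOT_FOUND': (r) => r && r.status === grpc.StatusNotFound,
});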
Close the connection, sleep for one second, and we are done.
Spike Test with GRPC
Spike testing your GRPC web service is really simple once you have your smoke test set up. It is as simple as adding the duration you wish the spike to run for and how many virtual users (VUs) you would like to simulate. The example below spikes your service with 1000 users for a period of 10 minutes; this could just as easily be 600 virtual users for 1 minute. You can either test until you meet your expected output or until your service starts returning errors. Based on your expectations, this will tell you whether you need to take a look at your service's code and/or scale your infrastructure up or down. At the very least it will give you a baseline to watch.
k6 run K6TestHelloSmoke.js --duration=10m --vus=1000
This spike also gives you the ability to see how your service would handle a denial-of-service attack.
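If you would rather keep the spike profile inside the script than on the command line, a sketch using the stages mechanism described in the soak section below might look like this; the durations and targets are illustrative only.

export const options = {
  stages: [
    { duration: '10s', target: 1000 }, // ramp up to 1000 virtual users very quickly
    { duration: '1m', target: 1000 },  // hold the spike
    { duration: '10s', target: 0 },    // drop back to zero
  ],
};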
Soak Test with GRPC
Soak testing means testing your system under 80% capacity for several hours. K6 recommends that you first run a soak test for one hour, analyse your results, and then run one for several hours thereafter. I assume this is because this type of test can be expensive, especially if your service provider bills you based on bandwidth. If issues surface after an hour, you may as well resolve them then and run the one-hour test again, minimising the hours you spend running the longer test.
Some errors are time related rather than based on the number of requests. Memory leaks instantly spring to mind, as this type of error is cumulative and takes time to start to affect a service. K6 goes into more detail on how to find memory leaks for this kind of error.
Say these errors were not discovered on, say, a cloud server with auto-scaling. There is a chance that you could be billed over a long period for resources that you do not require, or your service could simply stop or limp along sub-optimally.
It will also shine a torch on resources such as bandwidth. If, for example, you are lugging around massive images that could be optimised, you will notice a difference when you optimise them.
Let’s take a look at how we achieve soak testing with K6 options. Options are a great way to define settings that you do not want to pass in via the command line. They can also be combined to run effectively as a group.
export const options = {
  stages: [
    { duration: '2m', target: 200 }, // ramp up to 200 users
    { duration: '1h', target: 200 }, // stay at 200 for 1 hour
    { duration: '2m', target: 0 },   // ramp down (optional)
  ],
};
The first soak test above ramps up to 200 users over a 2-minute period and keeps this going for 1 solid hour before ramping down. The example below follows the same pattern, except that the middle stage runs for a period of 4 hours.
export const options = {
  stages: [
    { duration: '2m', target: 200 }, // ramp up to 200 users
    { duration: '4h', target: 200 }, // stay at 200 for 4 hours
    { duration: '2m', target: 0 },   // ramp down (optional)
  ],
};
To harness these options, place the exported options constant above the default function in your K6 script.
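To make that concrete, here is a sketch of the earlier smoke-test script combined with the one-hour soak options; the proto path and service name are the same assumptions used throughout this article.

import grpc from 'k6/net/grpc';
import { check, sleep } from 'k6';

const client = new grpc.Client();
client.load(['../proto/theBusyBean'], 'CoffeeMaker.proto');

// The options constant sits above (outside) the default function.
export const options = {
  stages: [
    { duration: '2m', target: 200 }, // ramp up to 200 users
    { duration: '1h', target: 200 }, // stay at 200 for 1 hour
    { duration: '2m', target: 0 },   // ramp down (optional)
  ],
};

export default () => {
  client.connect('localhost:50051', { plaintext: true });

  const response = client.invoke('CoffeeMaker.CoffeeShopService/Hello', { subject: 'David' });
  check(response, {
    'status is OK': (r) => r && r.status === grpc.StatusOK,
  });

  client.close();
  sleep(1);
};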
Stress Test with GRPC
Stress testing is most likely the reason you have been looking at K6. Stress testing is about pushing the system beyond its capabilities. This gives you the knowledge required for:
- Setting an effective scaling strategy
- Setting an alerting strategy for when thresholds are being met
- Understanding the self-healing properties of your site
- Knowing what to expect if it is overwhelmed
Let’s get down to the meat of it.
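As a sketch, a stress test uses the same stages mechanism as the soak test and simply keeps pushing the target beyond what you believe the service can handle. The numbers below are illustrative only.

export const options = {
  stages: [
    { duration: '2m', target: 200 }, // ramp up to a load you know the service can handle
    { duration: '2m', target: 400 }, // push past the expected capacity
    { duration: '2m', target: 800 }, // keep increasing until the service starts to struggle
    { duration: '5m', target: 0 },   // ramp down and watch how (and whether) it recovers
  ],
};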
Limitations of K6 and GRPC
K6 has no support for bidirectional, server-side or client-side streaming. This means that support is only available for performance testing unary GRPC calls.
Unary calls are the basis for most web services, such as REST APIs and SOAP: a request is made with some parameters and a response is returned with the result of the method call. Streaming holds the connection open and allows multiple messages to be exchanged.
The issue with the RPC type com.google.protobuf.Any has now been resolved as of version 0.39.0. I will create a demo page on this as my next task, as it is a super useful thing to know.