
Visualising test data with ElasticSearch & Kibana

You probably already get some kind of metrics out of your tests, whether that's stats from your build, SpecFlow reports or other build artefacts. This article will show you how to send your data to ElasticSearch and visualise the stats with its sister product Kibana.

There are several alternative ways you can achieve the same thing here: you may prefer to use another visualisation tool such as Graphite or Grafana, or you may want to store your metrics in something like InfluxDB, so let's keep an open mind… I chose the ELK stack for no particular reason!

So let's get started: you're going to need ElasticSearch and Kibana running on your machine or a VM.

I'll leave the installation to you rather than detail it here; the documentation is thorough and there are several install options, so choose what's best for you.

When you have the services running, check you can access them. Assuming you kept the default ports:

  • http://192.168.xxx.xxx:9200/ (ElasticSearch)
  • http://192.168.yyy.yyy:5601/ (Kibana)

ElasticSearch

So how do we add useful test data? Let's have a think about what you want to capture, maybe things like:

  • Test Name
  • Test Result (Pass/Fail)
  • The Server it ran against
  • The elapsed time of the test
  • The Feature File the test belongs to
  • A name for the test run (I call this the test run context)

Some of this info is easier to get hold of than others, depending on whether or not you run tests in parallel, but let's start with our hooks (or setup/teardown).

As we want to know how long our test takes to run, let's create a stopwatch in the [BeforeScenario] hook:

// Start timing the scenario
var sw = new Stopwatch();
sw.Start();

Unsurprisingly, we should stop it after the test is complete.

sw.Stop();
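Worth noting: for that stop call to work, the stopwatch has to be reachable from the [AfterScenario] hook as well, which matters once scenarios run in parallel. One option, as a minimal sketch (the "ScenarioStopwatch" key name is my own choice), is to park it in the ScenarioContext:

[BeforeScenario]
public void BeforeScenario(ScenarioContext scenarioContext)
{
    // Start timing and stash the stopwatch so [AfterScenario] can find it later
    scenarioContext["ScenarioStopwatch"] = Stopwatch.StartNew();
}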

What about the rest of the info? As I'm using NUnit 3 and SpecFlow to run the tests, it's easy to get at the test goodies we want to visualise and include these in our [AfterScenario] hook:

var testRunTime = sw.ElapsedMilliseconds;
// Convert the time so it can be easily consumed later
var convTime = Convert.ToInt32(testRunTime);
// NUnit's TestContext gives us the test name and outcome
var testName = TestContext.CurrentContext.Test.Name;
var testResults = TestContext.CurrentContext.Result.Outcome.Status.ToString();
// FeatureContext here is the instance property exposed by SpecFlow's Steps base class (see the tip below)
var featureFile = FeatureContext.FeatureInfo.Title;

I tend to keep info like the server name in config so we can just pull that out of there.

var appSettings = ConfigurationManager.AppSettings;
var serverName = appSettings.Get("ServerName");

Tip: If you are running tests concurrently and hitting circular dependencies or similar, you can inherit your hook class from SpecFlow's Steps base class.
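As a tiny illustration of that tip:

[Binding]
public class Hooks : Steps
{
    // Inheriting from SpecFlow's Steps base class exposes FeatureContext and
    // ScenarioContext as instance properties, which is what lets the
    // FeatureContext.FeatureInfo.Title call above work without the static accessors.
}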

We now have our info; if you debug you can see the variables being populated. However, we are not yet sending it to our instance of ElasticSearch, so let's do that now.

Have a look at your [BeforeTestRun] hook and let's make a connection to ElasticSearch, which in essence is just an HttpClient:

public class TestRunHooks
{
    public static HttpClient EsClient;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // Setup ElasticSearch client
        EsClient = ESClient.Create();

        ...
    }

    ...
}
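While we're in this hook, it's also a convenient place to generate the test run context mentioned earlier; the timestamp-based naming below is purely an illustration, generate it however you like:

public static string TestContextId;

[BeforeTestRun]
public static void BeforeTestRun()
{
    EsClient = ESClient.Create();
    // One possible scheme: a unique, timestamp-based name for this run
    TestContextId = $"Run_{DateTime.UtcNow:yyyyMMdd_HHmmss}";
}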

Here we call a class ESClient to do the heavy lifting:

public class ESClient
{
    public static HttpClient client;

    public static HttpClient Create()
    {
        // Get the info for your ES instance from config (http://192.168.xxx.xxx:9200/)
        var appSettings = ConfigurationManager.AppSettings;
        var elasticUrl = appSettings.Get("elasticSearchUrl");
        client = new HttpClient()
        {
            BaseAddress = new Uri(elasticUrl),
            Timeout = TimeSpan.FromMilliseconds(500)
        };
        return client;
    }

    public static async Task<HttpResponseMessage> PostESData(HttpClient client, object test)
    {
        try
        {
            // Post the data as Json to an index of your choice in ElasticSearch!
            return await client.PostAsync("/qa/apitests",
                new StringContent(JsonConvert.SerializeObject(test), Encoding.UTF8, "application/json"));
        }
        catch (Exception)
        {
            return null;
        }
    }

    // The object to post in our async call to ElasticSearch
    public class TestResult
    {
        public string name;
        public int elapsedTime;
        public string result;
        public string testRunContext;
        public string featureFile;
        public string serverName;
        public string date = DateTime.UtcNow.ToString("yyyy/MM/dd HH:mm:ss");
    }

    // Call this from the [AfterScenario] hook with the data we have gathered
    public static void PostToEs(string testName, string outcome, int convTime, string featureFile, string serverName)
    {
        // The test run context id is just a unique name for the test run, generate it how you like!
        var testRunContext = TestRunHooks.TestContextId;
        var client = ESClient.client;

        var testResult = new TestResult()
        {
            name = testName,
            result = outcome,
            elapsedTime = convTime,
            testRunContext = testRunContext,
            featureFile = featureFile,
            serverName = serverName
        };
        var result = PostESData(client, testResult).Result;
    }
}
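Tying it all together, the [AfterScenario] hook ends up as little more than the earlier snippets followed by that PostToEs call. A rough sketch, assuming the hook class inherits from Steps as per the tip above and that the stopwatch was parked in the ScenarioContext as in the earlier sketch:

[AfterScenario]
public void AfterScenario(ScenarioContext scenarioContext)
{
    // Retrieve and stop the stopwatch started in [BeforeScenario]
    var sw = (Stopwatch)scenarioContext["ScenarioStopwatch"];
    sw.Stop();
    var convTime = Convert.ToInt32(sw.ElapsedMilliseconds);

    var testName = TestContext.CurrentContext.Test.Name;
    var outcome = TestContext.CurrentContext.Result.Outcome.Status.ToString();
    var featureFile = FeatureContext.FeatureInfo.Title;
    var serverName = ConfigurationManager.AppSettings.Get("ServerName");

    // Ship the result off to ElasticSearch
    ESClient.PostToEs(testName, outcome, convTime, featureFile, serverName);
}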

Don’t forget to dispose of your client after the test run:

[AfterTestRun]
public static void AfterTestRun()
{
    EsClient.Dispose();

    ...
}

All going well (no firewall issues etc.), your POST call to ElasticSearch should return a 201 status code and ElasticSearch now has data! Moving on to Kibana…

Kibana

If we click on ‘Discover’ in Kibana and have the correct date/time range selected in the upper right-hand corner, we should see the raw data sent to ElasticSearch. Alternatively, you can perform a Lucene query (don't worry, you won't need to know this syntax in depth to make good use of it!) such as…

result:"Passed"

This returns all tests that passed for the selected time period, based on the data we have pushed into ElasticSearch.
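You can also combine any of the fields we posted in the TestResult object, something like result:"Failed" AND serverName:"myserver" (the server name here is just an illustration), to narrow failures down to a particular environment.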

Now that we have data, we can create visualisations based on it! Keeping with the query above, let's visualise our passing tests in varying forms:

[Screenshot: gauge visualisation of passing tests]
When you have enough visualisations you can drop them all onto a dashboard and share with your team.

[Screenshot: dashboard of visualisations]

Note: If you want to lock down Kibana functionality and give team members their own login, you will need to install the X-Pack add-on.

Finally, and as alluded to at the start: once you have your metrics in ElasticSearch or Influx or whatever (there are a few out there), you are not limited in which tool you visualise with. I'd like to build on what is outlined here to compare results across runs, spot trends, drill down to failures etc., although I am not there yet 🙂

Happy graphing!



