Automated Tester .science

May 2018

Visualising test data with ElasticSearch & Kibana

You probably already get some kind of metrics out of your tests, whether that's stats from your build, SpecFlow reports or other build artefacts. This article will show you how to send your data to ElasticSearch and visualise the stats with its sister product, Kibana.

There are several alternative ways to achieve the same thing here: you may prefer another visualisation tool such as Graphite or Grafana, or you may want to store your metrics in something like Influx, so let's keep an open mind. I chose the ELK stack for no particular reason!

So let's get started. You're going to need ElasticSearch and Kibana running on your machine or on a VM.

I'll leave the installation to you rather than detail it here; the documentation is thorough and there are several routes, so choose whatever works best for you.

When you have the services running, check you can access them. Assuming you kept the default ports:

  • http://192.168.xxx.xxx:9200/ (ElasticSearch)
  • http://192.168.yyy.yyy:5601/ (Kibana)

ElasticSearch

So how do we add useful test data? Let's have a think about what you want to capture; maybe things like:

  • Test Name
  • Test Result (Pass/ Fail)
  • The Server it ran against
  • The elapsed time of the test
  • The Feature File the test belongs to
  • A name for the test run (I call this the test run context)

Some of this information is easier to get hold of than the rest, depending on whether or not you run tests in parallel, but let's start with our hooks (setup/teardown).

As we want to know how long our test takes to run, let's create a stopwatch in the [BeforeScenario] hook:

var sw = new Stopwatch();
sw.Reset();
sw.Start();

Unsurprisingly, we should stop it after the test is complete.

sw.Stop();
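
The stopwatch needs to survive from [BeforeScenario] to [AfterScenario]. One way to do that, which also stays safe if you later run scenarios in parallel, is to keep it as an instance field of the binding class (SpecFlow creates a fresh binding instance per scenario). A minimal sketch; the class and key names here are illustrative, not from the original hooks:

[Binding]
public class TimingHooks
{
    private readonly ScenarioContext _scenarioContext;
    private readonly Stopwatch _sw = new Stopwatch();

    // SpecFlow injects the ScenarioContext, giving us a per-scenario place to stash the result
    public TimingHooks(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [BeforeScenario]
    public void StartTimer()
    {
        _sw.Reset();
        _sw.Start();
    }

    [AfterScenario]
    public void StopTimer()
    {
        _sw.Stop();
        // Hand the elapsed time to whatever does the reporting
        _scenarioContext["elapsedMs"] = _sw.ElapsedMilliseconds;
    }
}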

What about the rest of the info? As I'm using NUnit 3 and SpecFlow to run the tests, it's easy to get at the goodies we want to visualise and include them in our [AfterScenario] hook:

var testRunTime = sw.ElapsedMilliseconds;
// Convert the time so it can be easily consumed later
var convTime = Convert.ToInt32(testRunTime);
var testName = TestContext.CurrentContext.Test.Name;
var testResults = TestContext.CurrentContext.Result.Outcome.Status.ToString();
var featureFile = FeatureContext.FeatureInfo.Title;

I tend to keep info like the server name in config, so we can just pull that out of there.

var appSettings = ConfigurationManager.AppSettings;
var serverName = appSettings.Get("ServerName");

Tip: if you are running concurrently and hitting circular dependencies or similar, you can inherit your hook class from SpecFlow's Steps class.
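
For example, a minimal sketch (the class name is illustrative):

[Binding]
public class ScenarioHooks : Steps
{
    // Inheriting from Steps exposes ScenarioContext, FeatureContext and
    // ScenarioStepContext as properties on the hook class.
}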

We now have our info; if you debug, you can see the variables being populated. However, we are not yet sending anything to our instance of ElasticSearch, so let's do that now.

Have a look in your [BeforeTestRun] hook and let's make a connection to ElasticSearch, which in essence is just an HttpClient:

public class TestRunHooks
{
    public static HttpClient EsClient;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // Setup ElasticSearch client
        EsClient = ESClient.Create();

        ...
    }

    ...
}

Here we call a class ESClient to do the heavy lifting:

public class ESClient
{
    public static HttpClient client;

    public static HttpClient Create()
    {
        // Get the info for your ES instance from config (http://192.168.xxx.xxx:9200/)
        var appSettings = ConfigurationManager.AppSettings;
        var elasticUrl = appSettings.Get("elasticSearchUrl");
        client = new HttpClient()
        {
            BaseAddress = new Uri(elasticUrl),
            Timeout = TimeSpan.FromMilliseconds(500)
        };
        return client;
    }

    public static async Task<HttpResponseMessage> PostESData(HttpClient client, object test)
    {
        try
        {
            // Post the data as JSON to an index of your choice in ElasticSearch
            return await client.PostAsync("/qa/apitests", new StringContent(JsonConvert.SerializeObject(test), Encoding.UTF8, "application/json"));
        }
        catch (Exception)
        {
            // Swallow connectivity errors so an unreachable ES instance doesn't fail the test run
            return null;
        }
    }

    // The object to post in our async call to ElasticSearch
    public class TestResult
    {
        public string name;
        public int elapsedTime;
        public string result;
        public string testRunContext;
        public string featureFile;
        public string serverName;
        public string date = DateTime.UtcNow.ToString("yyyy/MM/dd HH:mm:ss");
    }

    // Call this from your hook with the data we have gathered in our [AfterScenario] hook
    public static void PostToEs(string testName, string outcome, int convTime, string featureFile, string serverName)
    {
        // Test context Id is just a unique name for the test run, generate it how you like!
        var testRunContext = TestRunHooks.TestContextId;
        var client = ESClient.client;

        var testResult = new TestResult() { name = testName, result = outcome, elapsedTime = convTime, testRunContext = testRunContext, featureFile = featureFile, serverName = serverName };
        var result = PostESData(client, testResult).Result;
    }
}
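
Pulling the pieces together, the [AfterScenario] hook ends up looking something like this. A sketch only: sw is the stopwatch started earlier (kept as a field or pulled back out of the ScenarioContext), TestContextId is whatever unique run name you generate, and FeatureContext is available because the hook class inherits Steps or has it injected:

[AfterScenario]
public void AfterScenario()
{
    sw.Stop();
    var convTime = Convert.ToInt32(sw.ElapsedMilliseconds);
    var testName = TestContext.CurrentContext.Test.Name;
    var outcome = TestContext.CurrentContext.Result.Outcome.Status.ToString();
    var featureFile = FeatureContext.FeatureInfo.Title;
    var serverName = ConfigurationManager.AppSettings.Get("ServerName");

    // Ship the result off to ElasticSearch
    ESClient.PostToEs(testName, outcome, convTime, featureFile, serverName);
}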

Don’t forget to dispose of your client after the test run:

[AfterTestRun]
public static void AfterTestRun()
{
    EsClient.Dispose();

    ...
}
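
If you want to confirm documents are actually arriving while you debug, ElasticSearch's _count endpoint gives a quick answer. A sketch, reusing the client created above and the qa index from the POST path:

// Returns JSON like {"count":123,...} for everything in the qa index
var countResponse = ESClient.client.GetAsync("/qa/_count").Result;
Console.WriteLine(countResponse.Content.ReadAsStringAsync().Result);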

All going well (no firewall issues etc.), your POST to ElasticSearch should return a 201 status code and ElasticSearch now has data! Moving on to Kibana…

Kibana

If we click on ‘Discover’ in Kibana and have the correct date/time range selected in the upper right-hand corner, we should see the raw data sent to ElasticSearch. Alternatively, you can run a Lucene query (don't worry, you won't need to know the syntax in depth to make good use of it), such as:

result:"Passed"

This returns all tests that passed for the selected time period, based on the data we have pushed into ElasticSearch.
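
The other fields we posted can be filtered the same way; for example, failures against a particular server (the server name here is just a stand-in for whatever you put in config):

result:"Failed" AND serverName:"myserver"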

Now that we have data, we can create visualisations based on it! Keeping with the passed-tests query above, let's visualise our passing tests in varying forms:

Gauge

When you have enough visualisations you can drop them all onto a dashboard and share it with your team.

Dashboard

Note: if you want to lock down Kibana functionality and give team members their own logins, you will need to install the X-Pack add-on.

Finally, and as alluded to at the start: once you have your metrics in ElasticSearch, Influx or whatever else (there are a few out there), you are not limited in which tool you use to visualise them. I'd like to build on what is outlined here to compare results across runs, spot trends, drill down to failures and so on, although I am not there yet 🙂

Happy graphing!



Proxying UI Automation to OWASP ZAP

Quick disclaimer: I'm not a security expert, pen tester or ZAP expert, but that doesn't mean we should ignore security. A cheap way of adding a layer of security testing is to take your existing Selenium automation and proxy it through OWASP Zed Attack Proxy (ZAP). So let's get started.

This is not a guide on how to use OWASP Zap and will not go into great configuration detail.

  • Download, install and start OWASP ZAP (requires Java), either locally or on a VM
  • Install FoxyProxy, a popular browser plugin, available for Chrome or Firefox
  • Install ZAP's root CA certificate in your browser

In ZAP, go to Options > Local Proxies and set the address and port as desired. In this article I am running ZAP on a VM, so I put the VM's address in and set the port to 8999. Add an additional proxy of localhost:8999.

We can test this with FoxyProxy: set up a proxy pointing to the VM on port 8999 and you should see the ZAP API front-end interface.

Zap API Frontend

Next, go to the API options in ZAP and add the VM's IP address to the list of addresses permitted to use the API. For the purposes of this article, I have also disabled the API key required to perform commands.

Now we can hit that API front end without the use of FoxyProxy.

So let's set up a connection to our ZAP instance in code and point our tests at the proxy. Let's start by adding the OWASPZAPDotNetAPI NuGet package.

We'll connect to the ZAP service at the start of the test run, in the [BeforeTestRun] hook:

public static ClientApi Zap;

[BeforeTestRun]
public static void BeforeTestRun()
{
    // Note if you are using an API key, pass it in here instead of null
    Zap = new ClientApi("192.168.xxx.xxx", 8999, null);

    ...
}
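
If you want to prove the connection is alive before any tests run, you can ask ZAP for its version through the same client. A sketch, assuming the generated core.version() view in OWASPZAPDotNetAPI (response types can vary slightly between package versions):

// Throws if ZAP is unreachable; otherwise prints the version reported by the API
var version = (ApiResponseElement)Zap.core.version();
Console.WriteLine("Connected to ZAP " + version.Value);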

Now let's configure a proxy and pass it to the driver:

var options = new ChromeOptions();

options.AddArgument("start-maximized");
options.AddArguments("disable-infobars");

// ZAP Proxy (Passive Scan) - ZAP should already be invoked.
var proxy = new Proxy();
proxy.Kind = ProxyKind.Manual;
proxy.IsAutoDetect = false;
proxy.HttpProxy = "192.168.xxx.xxx:8999";
proxy.SslProxy = "192.168.xxx.xxx:8999";
options.Proxy = proxy;

// ZAP serves its own certificates, so stop Chrome complaining about them
options.AddArgument("ignore-certificate-errors");

// gridHub is the Selenium Grid hub URL; the timespan is the driver's command timeout
var timespan = TimeSpan.FromMinutes(3);
_driver = new RemoteWebDriver(new Uri(gridHub), options.ToCapabilities(), timespan);

If we run a test, we should see traffic being generated in the ZAP History tab:

Zap Traffic

Great, we're almost there. Now it's time to generate a report. You can do this manually via the API UI under core/other/htmlreport (http://192.168.xxx.xxx:8999/UI/core/other/htmlreport/), but as we already have a connection to the API in code, let's do it there, after the test run:

public static void WriteZapHtmlReport(string path, byte[] bytes)
{
    File.WriteAllBytes(path, bytes);
}

[AfterTestRun]
public static void AfterTestRun()
{
    HookHelper.WriteZapHtmlReport(ReportDirectory + "_PassiveScanReport.html", Zap.core.htmlreport());
    Zap.Dispose();
}

Here we take the byte array from the zap.core API and write it to an HTML file somewhere on disk, giving you a handy passive scan report.

Zap Report

Limitations

Be aware that any test that uses ZAP needs to have exclusive use of it, so to generalise, it's better to keep all the browser-based tests sequential. This includes tests that use ZAP as a proxy, not just the scanning tests. These words were robbed from elsewhere and are very true, having spent a couple of hours seeing if the above would work with concurrency.
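
If you have enabled parallel execution elsewhere in the project, one blunt but reliable way to keep the ZAP-proxied run sequential is to cap NUnit's worker threads for that test assembly. A sketch, using a standard NUnit 3 assembly-level attribute:

using NUnit.Framework;

// Limits NUnit to a single worker thread, so fixtures run one at a time
[assembly: LevelOfParallelism(1)]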



