Automated Tester .science


Performance

Visualising test data with ElasticSearch & Kibana

You probably already get some kind of metrics out of your tests, whether that's stats from your build, SpecFlow reports or other build artefacts. This article will show you how to send your data to ElasticSearch and visualise the stats with its sister product, Kibana.

There are several alternative ways to achieve the same thing here: you may prefer another visualisation tool such as Graphite or Grafana, or you may want to store your metrics in something like InfluxDB, so let's keep an open mind… I chose the ELK stack for no particular reason!

So let's get started. You're going to need ElasticSearch and Kibana running on your machine or a VM.

I'll leave the installation to you rather than detail it here; the documentation is thorough and there are several approaches, so choose what's best for you.

When you have the services running, check you can access them. Assuming you kept the default ports:

  • ElasticSearch: http://192.168.xxx.xxx:9200/
  • Kibana: http://192.168.yyy.yyy:5601/

ElasticSearch

So how do we add useful test data? Let's have a think about what we want to capture, maybe things like:

  • Test Name
  • Test Result (Pass/Fail)
  • The Server it ran against
  • The elapsed time of the test
  • The Feature File the test belongs to
  • A name for the test run (I call this the test run context)

Some of this stuff is easier to get hold of than others, depending on whether or not you run tests in parallel, but let's start with our hooks or setup/teardown.

As we want to know how long our test takes to run, let's create a stopwatch in the [BeforeScenario] hook:

var sw = new Stopwatch();
sw.Reset();
sw.Start();

Unsurprisingly, we should stop it after the test is complete.

sw.Stop();
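Note that for the [AfterScenario] hook to stop the same stopwatch, it needs to live somewhere both hooks can reach, for example a field on the binding class rather than a local variable. A minimal sketch (the class and method names here are just illustrative; it needs System.Diagnostics and TechTalk.SpecFlow, and a static field like this is not safe if you run scenarios in parallel, in which case stash the stopwatch in the ScenarioContext instead):

[Binding]
public class ScenarioTimingHooks
{
    // Shared between the Before/After hooks for the same scenario
    private static Stopwatch sw;

    [BeforeScenario]
    public static void StartTimer()
    {
        sw = new Stopwatch();
        sw.Start();
    }

    [AfterScenario]
    public static void StopTimer()
    {
        sw.Stop();
        // gather and post the test data here (covered below)
    }
}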

What about the rest of the info? As I'm using NUnit 3 and SpecFlow to run the tests, it's easy to get at the test goodies we want to visualise and include these in our [AfterScenario] hook:

var testRunTime = sw.ElapsedMilliseconds;
// Convert the time so it can be easily consumed later
var convTime = Convert.ToInt32(testRunTime);
var testName = TestContext.CurrentContext.Test.Name;
var testResults = TestContext.CurrentContext.Result.Outcome.Status.ToString();
var featureFile = FeatureContext.FeatureInfo.Title;

I tend to keep info like the server name in config so we can just pull that out of there.

var appSettings = ConfigurationManager.AppSettings;
var serverName = appSettings.Get("ServerName");

Tip: If you are running concurrently and hitting circular dependencies or similar, you can inherit your hook class from SpecFlow's "Steps" class.

We now have our info; if you debug, you can see the variables being populated. However, we are not yet sending it to our instance of ElasticSearch, so let's do that now.

Have a look in your [BeforeTestRun] hook and let's make a connection to ElasticSearch, which in essence is just an HttpClient:

public class TestRunHooks
{
    public static HttpClient EsClient;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // Setup ElasticSearch client
        EsClient = ESClient.Create();

        ...
    }

    ...
}

Here we call a class ESClient to do the heavy lifting:

public class ESClient
{
    public static HttpClient client;

    public static HttpClient Create()
    {
        // Get the info for your ES instance from config (http://192.168.xxx.xxx:9200/)
        var appSettings = ConfigurationManager.AppSettings;
        var elasticUrl = appSettings.Get("elasticSearchUrl");
        client = new HttpClient()
        {
            BaseAddress = new Uri(elasticUrl),
            Timeout = TimeSpan.FromMilliseconds(500)
        };
        return client;
    }

    public static async Task<HttpResponseMessage> PostESData(HttpClient client, object test)
    {
        try
        {
            // Post the data as Json to an index of your choice in ElasticSearch!
            return await client.PostAsync("/qa/apitests", new StringContent(JsonConvert.SerializeObject(test), Encoding.UTF8, "application/json"));
        }
        catch (Exception)
        {
            return null;
        }
    }

    // The object to post in our async call to ElasticSearch
    public class TestResult
    {
        public string name;
        public int elapsedTime;
        public string result;
        public string testRunContext;
        public string featureFile;
        public string serverName;
        public string date = DateTime.UtcNow.ToString("yyyy/MM/dd HH:mm:ss");
    }

    // Call this from your hook with the data we have gathered in our [AfterScenario] hook
    public static void PostToEs(string testName, string outcome, int convTime, string featureFile, string serverName)
    {
        // The test context Id is just a unique name for the test run, generate it how you like!
        var testRunContext = TestRunHooks.TestContextId;
        var client = ESClient.client;

        var testResult = new TestResult() { name = testName, result = outcome, elapsedTime = convTime, testRunContext = testRunContext, featureFile = featureFile, serverName = serverName };
        var result = PostESData(client, testResult).Result;
    }
}
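The final piece of wiring is to call PostToEs from the [AfterScenario] hook once the variables shown earlier have been populated:

// In the [AfterScenario] hook, after stopping the stopwatch and gathering the data above
ESClient.PostToEs(testName, testResults, convTime, featureFile, serverName);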

Don’t forget to dispose of your client after the test run:

[AfterTestRun]
public static void AfterTestRun()
{
    EsClient.Dispose();

    ...
}

All going well (no firewall issues etc.), your POST call to ElasticSearch should return a 201 status code and ElasticSearch now has data! Moving on to Kibana…

Kibana

If we click on 'Discover' in Kibana and have the correct date/time range selected in the upper right hand corner, we should see the raw data sent to ElasticSearch. Alternatively, you can perform a Lucene query (don't worry, you won't need to know this syntax in depth to make good use of it!) such as:

result:"Passed"

This returns all tests that passed for the selected time period, based on the data we have pushed into ElasticSearch.
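You can filter on any of the fields we posted in the same way; for example, to see failures against a particular server (the server name value below is just an illustration of whatever you keep in config):

result:"Failed" AND serverName:"TestServer01"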

Now that we have data, we can create visualisations based on it! Keeping with the original result:"Passed" query, let's visualise our passing tests in varying forms:

Gauge

When you have enough visualisations you can drop them all onto a dashboard and share with your team.

Dashboard

Note: If you want to lock down Kibana functionality and give team members their own login, you will need to install the X-Pack add-on.

Finally, and as alluded to at the start, once you have your metrics in ElasticSearch or InfluxDB or whatever else (there are a few out there), you are not limited in which tool you use to visualise them. I'd like to build on what is outlined here to compare results of runs, spot trends, drill down to failures and so on, although I am not there yet 🙂

Happy graphing!



Surgical Strike UI Automation Testing

A lot of people starting out with automated testing or Selenium may follow a kind of record and playback approach to writing automated tests, whether this is born out of using something like a record-and-playback plugin or just the general approach:

  • Fire up a browser
  • Go to the site
  • Login
  • Get to where you need to be
  • Perform a bunch of interactions
  • Logout (possibly)

There are a few optimisations we can do without much effort, like navigating straight to a URL rather than clicking through a bunch of menu items. This approach may also need an environment with test data already present, and that can turn into a big overhead. It can be a lot harder to 'get to where you need to be' if you have to create a whole structure first, and doing that in the UI as part of your test should be avoided.

Trimming the fat

In the past on this blog I have talked about API tests and UI tests; let's combine the two to really optimise the UI tests. You're writing a Selenium test to test a specific piece of functionality, so let's keep it that way and use Selenium only for the precise, surgical UI interactions we actually care about. This will speed up your tests and make them more stable.

This way we will:

  • Perform a bunch of internal API calls to set the test up
  • Fire up a browser
  • Login
  • Navigate
  • Do the test
  • Perform another set of internal API calls to rip out the test setup

I've found a nice way to do this is to set up a stack which we can push fixtures onto and then iteratively pop them off after the test is done.

private Stack<Action> teardownActions;

public Stack<Action> TeardownActions => teardownActions ?? (teardownActions = new Stack<Action>());

As usual I am using SpecFlow in my setup, so now I turn to the hooks to perform an action I need for each test. Let's say… make a folder.

FolderRootDto = new FolderDto
{
    FolderVisibility = 50,
    Name = $"{ScenarioInfo.Title}"
};

Folders.CreateNewFolder(FolderRootDto.Name, FolderRootDto);
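For context, FolderDto here is just the DTO the folder endpoint expects; the exact shape is whatever your API uses, but as a rough sketch it might look like:

// Hypothetical DTO - match the properties to whatever your API actually accepts/returns
public class FolderDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int FolderVisibility { get; set; }
}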

Stack it out

When the create method is called and we actually perform the API POST request, we get the Id from the DTO (or whatever we need for the delete call) and push an action onto the stack, in this case another method that calls delete.

// 'client' is your configured REST client - obtain it however you normally do
var newFolder = RestManager.Post<FolderDto>(client, BaseUrl + "/rest/endpoint/to/call", folderDto);

if (newFolder.StatusCode == HttpStatusCode.OK)
{
    var lastFolderCreatedId = newFolder.Response.Id;

    TreeState.Get(ScenarioContext).TeardownActions.Push(() =>
    {
        DeleteFolder(lastFolderCreatedId);
    });
}
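DeleteFolder is simply the matching clean-up call. As a rough sketch, assuming your RestManager wrapper exposes a delete method (the method name and route below are assumptions, adjust to your own wrapper):

// Hypothetical helper - adjust to however your REST wrapper issues DELETE requests
private void DeleteFolder(int folderId)
{
    RestManager.Delete(client, BaseUrl + "/rest/endpoint/to/call/" + folderId);
}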

Now, before our browser even fires up, I have a folder to perform the test in. If we set our teardown action stack to fire after the scenario, this newly created folder gets ripped out afterwards.

// Pop and invoke each teardown action pushed during setup
var actions = TreeState.Get(ScenarioContext).TeardownActions;

while (actions.Count > 0)
{
    var nextAction = actions.Pop();
    nextAction();
}

As these actions are performed in milliseconds, this can drastically reduce the time of a Selenium test that might otherwise be doing all the foundation work, or reduce the overhead of getting your fixture setup in place, whether that's build steps or database restores.



Measuring Speed and Performance with Sitespeed.io

I recently came across a fantastic open source speed and performance tool called Sitespeed.io and was pretty amazed by both what it can do and how it presents the results to you.

It has its foundations in some other cool stuff you may have heard of, like YSlow, PhantomJS and Bootstrap, and makes good use of WebPageTest too. It seems to support pretty much every platform as well; I'm a .NET/Windows guy so I'll be working with that. Let's have a look:

Install

Install via npm – if you don't have Node.js, get it here.

$ npm install -g sitespeed.io

Configure and Run

So to get some metrics out of something, lets run a simple test. Open a command prompt and run the following:

sitespeed.io.cmd -u http://www.google.co.uk -d 0 -b chrome -v --viewPort 1920x1080 --name Google_Report

If you're a Windows user like me, don't forget the .cmd on the end of the sitespeed.io command; it's an easy mistake to make! Let Sitespeed.io do its thing. In this particular instance we are crawling Google to a depth of 0 (so just its landing page, for brevity) with Chrome; the viewport size is set to 1920x1080, -v gives verbose stdout and --name gives the report a name.

Running sitespeed.io

Hey presto, a performance dashboard:

Sitespeed.io report

Full example report is here.

We're not really measuring against anything here, just getting something pretty to look at. Sitespeed.io recommends we create a Performance Budget to measure against, which sounds like good advice to me. If you don't know much about the idea, they even recommend some great articles to broaden your mind and follow best practice; to quote their site:

“Have you heard of a performance budget? If not, please read the excellent posts by Tim Kadlec Setting a performance budget and Fast enough. Also read Daniel Malls How to make a performance budget. After that, continue setup sitespeed.io :)”
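As a rough illustration, a budget is passed in as an extra parameter pointing at a JSON file; the exact flag and file format depend on your sitespeed.io version, so treat the line below as an assumption and check the budget documentation for your release:

sitespeed.io.cmd -u http://www.google.co.uk -d 0 -b chrome --budget myBudget.json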

With the budget in your parameters, your report will illustrate what is falling in and out of your budget. So what else can we do with sitespeed.io?

  • Add a list of URLs to test against
  • Add authentication if you need to log in somewhere (--requestHeaders cookie.json for example)
  • Drill down and get HAR files, maybe even convert to jmx for stress testing
  • Get screenshots
  • Get results in different formats (HTML/ XML and such)
  • Do multiple passes for comparison
  • Compare multiple sites
  • Throttle connections
  • Supports Selenium for running with different browsers
  • Supports Google Page Speed Insights
  • Supports Docker

In fact, there is way too much to list out here. Luckily, sitespeed.io has some very good documentation on how to configure it, so it should be easy to fine tune it to your needs. Furthermore, if you use some other cool stuff like Graphite, you can output the results there.
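For example, pointing a run at a Graphite instance is just another parameter or two (the host below is a placeholder, and it's worth double checking the option names against the docs for the version you're on):

sitespeed.io.cmd -u http://www.google.co.uk -d 0 -b chrome --graphiteHost my.graphite.server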

Summary

The benefits of this should be clear: we can hook up a command in a build step on a CI server to either output some funky looking Graphite metrics or generate performance dashboards regularly, catching performance issues before they get released.

I don't yet know if this tool can be incorporated into existing automation tests with Selenium… or really if you would even want to do that. I had been looking into using Jan Odvarko's HAR Export Trigger to compare against YSlow scores, but sitespeed.io seems to kill lots of birds with one stone.

My first impressions are that sitespeed.io is a very powerful, easy to configure, cross platform performance tool that displays data in a way that is easy to understand.

Bootnote

Keep your eyes peeled for sitespeed.io 4.0 which will have even more cool stuff going on, due to be released in a few weeks time.



