Visualising test data with ElasticSearch & Kibana

You probably already get some kind of metrics out of your tests, whether that's stats from your build, Specflow reports or other build artefacts. This article will show you how to send your data to ElasticSearch and visualise the stats with its sister product Kibana.

There are several alternative ways you can achieve the same thing here: you may prefer another visualisation tool such as Graphite or Grafana, or you may want to store your metrics in something like Influx, so let's keep an open mind… I chose the ELK stack for no particular reason!

So let's get started: you're going to need to be running ElasticSearch and Kibana on your machine or a VM.

I'll leave the installation to you rather than detail it here; the documentation is thorough and there are several routes, so choose whichever suits you best.

When you have the services running, check you can access them. Assuming you kept the default ports:

  • http://192.168.xxx.xxx:9200/ (ElasticSearch)
  • http://192.168.yyy.yyy:5601/ (Kibana)
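If you prefer a quick programmatic check rather than a browser, a few lines of C# will do. This is just a sketch; the placeholder addresses are the ones above, so substitute your own:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ConnectivityCheck
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Substitute your own ElasticSearch and Kibana addresses here
            var es = await client.GetAsync("http://192.168.xxx.xxx:9200/");
            var kibana = await client.GetAsync("http://192.168.yyy.yyy:5601/");
            Console.WriteLine($"ElasticSearch: {(int)es.StatusCode}, Kibana: {(int)kibana.StatusCode}");
        }
    }
}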

ElasticSearch

So how do we add useful test data? Let's have a think about what you want to capture, maybe things like:

  • Test Name
  • Test Result (Pass/Fail)
  • The Server it ran against
  • The elapsed time of the test
  • The Feature File the test belongs to
  • A name for the test run (I call this the test run context)

Some of this stuff is easier to get hold of than the rest, depending on whether or not you run tests in parallel, but let's start with our hooks or setup/teardown.

As we want to know how long our test takes to run, let's create a stopwatch in the [BeforeScenario] hook:

var sw = new Stopwatch();
sw.Reset();
sw.Start();

Unsurprisingly, we should stop it after the test is complete.

sw.Stop();
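One thing to watch: the stopwatch has to be visible to both hooks, so rather than a local variable it's easiest to hold it as a field on the hooks class. A minimal sketch (the class name is my own):

using System.Diagnostics;
using TechTalk.SpecFlow;

[Binding]
public class ScenarioHooks
{
    // A new instance of the binding class is created per scenario,
    // so an instance field keeps timings isolated between scenarios.
    private readonly Stopwatch sw = new Stopwatch();

    [BeforeScenario]
    public void BeforeScenario()
    {
        sw.Reset();
        sw.Start();
    }

    // ... sw.Stop() goes in the [AfterScenario] hook shown further down
}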

What about the rest of the info? As I'm using NUnit 3 and Specflow to run the tests, it's easy to get at the test goodies we want to visualise and include them in our [AfterScenario] hook:

var testRunTime = sw.ElapsedMilliseconds;
// Convert the time so it can be easily consumed later
var convTime = Convert.ToInt32(testRunTime);
// TestContext comes from NUnit.Framework
var testName = TestContext.CurrentContext.Test.Name;
var testResults = TestContext.CurrentContext.Result.Outcome.Status.ToString();
var featureFile = FeatureContext.Current.FeatureInfo.Title;

I tend to keep info like the server name in config so we can just pull that out of there.

// ConfigurationManager comes from System.Configuration
var appSettings = ConfigurationManager.AppSettings;
var serverName = appSettings.Get("ServerName");

Tip: If you are running concurrently and hitting circular dependencies or similar, you can inherit your hook class from SpecFlow's "Steps" base class.
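As a rough sketch, that just means deriving the hook class from the SpecFlow base class:

using TechTalk.SpecFlow;

[Binding]
public class Hooks : Steps
{
    // Steps exposes ScenarioContext, FeatureContext etc. as instance
    // properties, which avoids the static .Current accessors when
    // running scenarios in parallel.
}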

We now have our info; if you debug you can see the variables being populated. However, we are not yet sending anything to our instance of ElasticSearch, so let's do that now.

Have a look in your [BeforeTestRun] hook and let's make a connection to ElasticSearch, which in essence is just an HttpClient:

public class TestRunHooks
{
    public static HttpClient EsClient;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // Setup ElasticSearch client
        EsClient = ESClient.Create();

        ...
    }

    ...
}

Here we call a class ESClient to do the heavy lifting:

public class ESClient
{
    public static HttpClient client;

    public static HttpClient Create()
    {
        // Get the info for your ES instance from config (http://192.168.xxx.xxx:9200/)
        var appSettings = ConfigurationManager.AppSettings;
        var elasticUrl = appSettings.Get("elasticSearchUrl");
        client = new HttpClient()
        {
            BaseAddress = new Uri(elasticUrl),
            Timeout = TimeSpan.FromMilliseconds(500)
        };
        return client;
    }

    public static async Task<HttpResponseMessage> PostESData(HttpClient client, object test)
    {
        try
        {
            // Post the data as Json (serialised with Newtonsoft.Json) to an index of your choice in ElasticSearch!
            return await client.PostAsync("/qa/apitests", new StringContent(JsonConvert.SerializeObject(test), Encoding.UTF8, "application/json"));
        }
        catch (Exception)
        {
            return null;
        }
    }

    // The object to post in our async call to ElasticSearch
    public class TestResult
    {
        public string name;
        public int elapsedTime;
        public string result;
        public string testRunContext;
        public string featureFile;
        public string serverName;
        public string date = DateTime.UtcNow.ToString("yyyy/MM/dd HH:mm:ss");
    }

    // Call this from your [AfterScenario] hook with the data we have gathered
    public static void PostToEs(string testName, string outcome, int convTime, string featureFile, string serverName)
    {
        // The test context Id is just a unique name for the test run, generate it how you like!
        var testRunContext = TestRunHooks.TestContextId;
        var client = ESClient.client;

        var testResult = new TestResult() { name = testName, result = outcome, elapsedTime = convTime, testRunContext = testRunContext, featureFile = featureFile, serverName = serverName };
        var result = PostESData(client, testResult).Result;
    }
}
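Putting the pieces together, the [AfterScenario] hook ends up looking something like this sketch (it assumes the stopwatch field from earlier and the config key above; adjust the names to suit your project):

[AfterScenario]
public void AfterScenario()
{
    sw.Stop();

    // Gather everything we want to visualise for this scenario
    var convTime = Convert.ToInt32(sw.ElapsedMilliseconds);
    var testName = TestContext.CurrentContext.Test.Name;
    var outcome = TestContext.CurrentContext.Result.Outcome.Status.ToString();
    var featureFile = FeatureContext.Current.FeatureInfo.Title;
    var serverName = ConfigurationManager.AppSettings.Get("ServerName");

    // Ship it off to ElasticSearch
    ESClient.PostToEs(testName, outcome, convTime, featureFile, serverName);
}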

Don’t forget to dispose of your client after the test run:

[AfterTestRun]
public static void AfterTestRun()
{
    EsClient.Dispose();

    ...
}

All going well (no firewall issues etc.), your POST call to ElasticSearch should return a 201 (Created) status code and ElasticSearch now has data!
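If you want the test run to shout when a post doesn't land, one option (just a sketch, not part of the original code) is to check the response inside PostToEs:

var result = PostESData(client, testResult).Result;
if (result == null || result.StatusCode != HttpStatusCode.Created)
{
    // HttpStatusCode lives in System.Net
    Console.WriteLine("Posting to ElasticSearch failed: " + (result?.StatusCode.ToString() ?? "no response"));
}

Moving on to Kibana…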

Kibana

If we click on 'Discover' in Kibana and have the correct date/time range selected in the upper right hand corner, we should see the raw data sent to ElasticSearch. Alternatively, you can perform a Lucene query (don't worry, you won't need to know this syntax in depth to make good use of it!) such as:

result:"Passed"

This returns all tests that passed for the selected time period, based on the data we have pushed into ElasticSearch.
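If you want to sanity-check the same thing outside Kibana, the query can also be run against ElasticSearch's _count API using the HttpClient set up earlier; a sketch (the /qa/apitests index matches the one used when posting):

// Returns something like {"count":42, ...} for the tests that passed
var response = ESClient.client.GetAsync("/qa/apitests/_count?q=result:Passed").Result;
Console.WriteLine(response.Content.ReadAsStringAsync().Result);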

Now that we have data we can create visualisations based on it! Keeping with the query above let’s visualise our passing tests in varying forms:

gauge

When you have enough visualisations you can drop them all onto a dashboard and share with your team.

Dashboard

Note: If you want to lock down Kibana functionality and give team members their own logins, you will need to install the X-Pack plugin.

Finally, and as alluded to at the start: once you have your metrics in ElasticSearch, Influx or whatever else (there are a few out there), you are not limited in which tool you visualise them with. I'd like to build on what is outlined here to compare results across runs, spot trends, drill down to failures and so on, although I am not there yet 🙂

Happy graphing!



Concurrent Test Running with Specflow and NUnit 3

A little while back I wrote a post on achieving concurrent (parallel) test running with Selenium, Specflow and NUnit 2, but what about NUnit 3? Let's have a look at that; thankfully it is a bit simpler than running an external build script as done previously.

First up, according to the Specflow docs – open up your AssemblyInfo.cs file in your project and add the following line:

[assembly: Parallelizable(ParallelScope.Fixtures)]
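NUnit also lets you cap how many fixtures run at once via the LevelOfParallelism attribute; a sketch (the value of 4 is just an example, the default is based on processor count):

// AssemblyInfo.cs
[assembly: Parallelizable(ParallelScope.Fixtures)]
// Optional: limit the number of parallel worker threads
[assembly: LevelOfParallelism(4)]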

Next we need to set up and tear down our browser and config at the Scenario hook level; doing it in a Test Run hook will be problematic as it is static. Your hooks might look similar to this:

[Binding]
public class ScenarioHooks
{
    private readonly IObjectContainer objectContainer;
    private IWebDriver Driver;

    public ScenarioHooks(IObjectContainer objectContainer)
    {
        this.objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void BeforeScenario()
    {
        var customCapabilitiesIE = new DesiredCapabilities();
        customCapabilitiesIE.SetCapability(CapabilityType.BrowserName, "internet explorer");
        customCapabilitiesIE.SetCapability(CapabilityType.Platform, new Platform(PlatformType.Windows));
        customCapabilitiesIE.SetCapability("webdriver.ie.driver", @"C:\tmp\webdriver\iedriver\iedriver_3.0.0_Win32bit.exe");

        // Point this at your Selenium Grid hub (the XXX address is a placeholder)
        Driver = new RemoteWebDriver(new Uri("http://XXX.XXX.XXX.XXX:4444/wd/hub"), customCapabilitiesIE);
        objectContainer.RegisterInstanceAs<IWebDriver>(Driver);
    }

    [AfterScenario]
    public void AfterScenario()
    {
        // Quit ends the session on the grid node; Dispose then cleans up the client
        Driver.Quit();
        Driver.Dispose();
    }
}

You can see from the browser instantiation that we are sending the tests to a Selenium Grid hub, so as a precursor to running the tests you will need suitable infrastructure for a grid, or you could configure it to go off to Sauce Labs or BrowserStack.

Assuming the hub and nodes are configured correctly, when your build process runs the tests then the hub will farm them out by feature file (for other options see the parallel scope in AssemblyInfo.cs) to achieve concurrent test running, and that’s it! Much nicer.



Specflow Reports with NUnit3

This post will look at setting up Specflow reports to work with NUnit3. When you install Specflow via NuGet, there is a tools folder which contains specflow.exe. If we run a batch process on the CI server after the build, pointing it at the NUnit3 TestResult.xml, we can generate a report and reference it as an artefact in TeamCity (or similar CI software).

But there is a catch, which isn’t (at the time of writing) explained in the Specflow reporting documentation. Currently Specflow (v 2.0.0) cannot interpret the NUnit3 TestResult output, so to counter this restriction we need to configure NUnit3 to output its TestResult.xml in the NUnit2 format. A full list of NUnit3 console options can be found here.

nunit3-console.exe --where "cat==SmokeTests && cat!=ExcludeFromCI" --out=TestResult.txt "--result=TestResult.xml;format=nunit2" C:\Path\To\Acceptance.Tests.dll

With this in place, it's now just a case of pointing the Specflow command at the test result data, telling it to produce a report and specifying where to output the HTML.

specflow.exe nunitexecutionreport Your.AcceptanceTest.csproj /testResult:TestResult.xml /out:MyResult.html

Once you have the TestResult.xml and the report output generating where you want, it's easy to wrap the command up into a batch file and call it from a build step after your tests have executed. Make the output an artefact and then you have metrics in place for your nightly and/or other test builds.

Demo Specflow Report



Pickles Reports – Living Document Generation

Adding Pickles to your project is really useful for presenting your Gherkin-based tests in an easy-to-read, searchable format with some funky metrics added in. If you get your CI build to publish the output as an artifact, then on each build you will always have up-to-date documentation of what is being tested.

You can configure it numerous ways, via MSBuild, PowerShell, GUI or Command Line.

In this article I will be setting it up via the command line and running a batch file as a build step in TeamCity. First off we need to install Pickles Command Line via NuGet. NUnit 3 is my test runner.

Once built we can start making use of it by running the executable with our desired parameters:

  • --feature-directory: Where your Feature Files live, relative to the executable
  • --output-directory: Where you want your Pickles report to generate
  • --documentation-format: Documentation format (DHTML, HTML, Word, Excel or JSON)
  • --link-results-file: Path to the NUnit Results.xml (this allows graphs and metrics)
  • --test-results-format: nunit, nunit3, xunit, xunit2, mstest, cucumberjson, specrun, vstest

Together it looks like the below, which is put into a batch file and called in a closing build step.

.\packages\Pickles.CommandLine.2.5.0\tools\pickles.exe ^
  --feature-directory=..\..\Automation\Tests\Project.Acceptance.Tests\Features ^
  --output-directory=Documentation ^
  --link-results-file=..\..\..\TestResult.xml ^
  --test-results-format=nunit ^
  --documentation-format=dhtml

I use the nunit format rather than nunit3 because I have set my NUnit3 runner to output NUnit2-formatted results, so that Specflow can consume the same output and produce more metrics on the build. With the results file hooked in you can get some whizzy graphics:

pickles_report

Pickles is a great tool for BDD and should help bridge that gap between "the Business" and IT. A full sample report can be found here. You can find extensive documentation here for the various ways to set up Pickles.

 



Setting Up Allure with NUnit in TeamCity

This article is a follow-up to my previous article on setting up Allure locally with NUnit. I made a brief mention of the Allure TeamCity plugin but encountered difficulties in getting it to work and didn't have much time to look into getting around them.

Let's get stuck in with a brief recap and some initial steps to take.

  • Install NUnit 2.6.4 and the compatible NUnit adapter into your Allure bin/addins folder
  • Edit the config.xml in your bin/addins folder to write out the XML to a known location (I'm using C:\AllureXML)

Generated Allure XML

  • Run some tests in the GUI or NUnit Console and generate some XML, you should check XML is being generated before continuing
  • Generate a report with the Command Line Interface (this will verify your JAVA_HOME system variable is present and correct)

Allure TeamCity Setup and Configuration

  • Install the plugin and restart the build server
  • Copy the allure-commandline.zip from the latest release to <TeamCity Data Directory>/plugins/.tools – no server restart needed for this step
  • Your build needs to be generating XML; we tested this earlier
  • Open your build config settings, create a new build step for the Allure Report
  • Specify a relative path to your XML folder (in this example it is ../../../AllureXML)
  • Use the given example for test reports or similar

Allure Report Build Step

  • Edit your build step that runs the tests – I have not been able to get it to work with the NUnit runner type (it just won't generate any XML); however, we can run a script to spin up the NUnit Console and run the tests that way.
  • Choose PowerShell as the runner, x86 bitness
  • Script: Source code
  • The script:
& 'C:\Program Files (x86)\NUnit 2.6.4\bin\nunit-console-x86.exe' /framework=net-4.0 C:\BuildAgent\work\a0569b5caa0eb74d\Automation\Tests\Example.Automated.Selenium.Tests\Example.Automated.Selenium.Tests\bin\Release\Example.Automated.Selenium.Tests.dll

 

Test run build step for Nunit and Allure

It is a good idea to test the script on the build machine locally so you can ensure it is correct, before putting it into your TeamCity build step.

Notice a couple of things:

  • We run the x86 console, as specified by the PowerShell run mode in the build step
  • We have passed in "/framework=net-4.0"; this is important and in line with the current documentation

When run, this should kick off the tests via the console, generate the necessary XML and publish the artifacts in TeamCity, giving you a report. That is great for retrieving daily summaries of builds you run overnight and acts as a health check on your continuous integration builds.

Further Help

Please keep the Allure installation notes at the forefront of your mind if you encounter difficulties.

If you wish to add different parameters to your test runs, you can find a full list below or by running "nunit-console-x86.exe /help":

NUNIT-CONSOLE [inputfiles] [options]

Runs a set of NUnit tests from the console.

You may specify one or more assemblies or a single project file of type .nunit.

Options:

/fixture=STR               Test fixture or namespace to be loaded (Deprecated) (Short format: /load=STR)
/run=STR                   Name of the test case(s), fixture(s) or namespace(s) to run
/runlist=STR               Name of a file containing a list of the tests to run, one per line
/config=STR                Project configuration (e.g.: Debug) to load
/result=STR                Name of XML result file (Default: TestResult.xml) (Short format: /xml=STR)
/xmlConsole                Display XML to the console (Deprecated)
/noresult                  Suppress XML result output (Short format: /noxml)
/output=STR                File to receive test output (Short format: /out=STR)
/err=STR                   File to receive test error output
/work=STR                  Work directory for output files
/labels                    Label each test in stdOut
/trace=X                   Set internal trace level: Off, Error, Warning, Info, Verbose
/include=STR               List of categories to include
/exclude=STR               List of categories to exclude
/framework=STR             Framework version to be used for tests
/process=X                 Process model for tests: Single, Separate, Multiple
/domain=X                  AppDomain Usage for tests: None, Single, Multiple
/apartment=X               Apartment for running tests: MTA (Default), STA
/noshadow                  Disable shadow copy when running in separate domain
/nothread                  Disable use of a separate thread for tests
/basepath=STR              Base path to be used when loading the assemblies
/privatebinpath=STR        Additional directories to be probed when loading assemblies, separated by semicolons
/timeout=X                 Set timeout for each test case in milliseconds
/wait                      Wait for input before closing console window
/nologo                    Do not display the logo
/nodots                    Do not display progress
/stoponerror               Stop after the first test failure or error
/cleanup                   Erase any leftover cache files and exit
/help                      Display help (Short format: /?)



