Automated Tester .science


Visualising test data with ElasticSearch & Kibana

You probably already get some kind of metrics out of your tests, whether that’s stats from your build, Specflow reports or other build artefacts. This article will show you how to send your data to ElasticSearch and visualise the stats with its sister product, Kibana.

There are several alternative ways you can achieve the same thing here: you may prefer to use another visualisation tool such as Graphite or Grafana, or you may want to store your metrics in something like Influx, so let’s keep an open mind. I chose the ELK stack for no particular reason!

So let’s get started: you’re going to need ElasticSearch and Kibana running on your machine or a VM.

I’ll leave the installation to you rather than detail it here; the documentation is thorough and there are several ways to do it, so choose what’s best for you.

When you have the services running, check you can access them. Assuming you kept the default ports:

  • ElasticSearch: http://192.168.xxx.xxx:9200/
  • Kibana: http://192.168.yyy.yyy:5601/

ElasticSearch

So how do we add useful test data? Let’s have a think about what you want to capture; maybe things like:

  • Test Name
  • Test Result (Pass/ Fail)
  • The Server it ran against
  • The elapsed time of the test
  • The Feature File the test belongs to
  • A name for the test run (I call this the test run context)

Some of this stuff is easier to get hold of than others, depending on whether or not you run tests in parallel, but let’s start with our hooks (setup/teardown).

As we want to know how long our test takes to run, let’s create a stopwatch in the [BeforeScenario] hook:

var sw = new Stopwatch();
sw.Reset();
sw.Start();

Unsurprisingly, we should stop it after the test is complete.

sw.Stop();

What about the rest of the info? As I’m using NUnit 3 and Specflow to run the tests, it’s easy to get at the test goodies we want to visualise and include these in our [AfterScenario] hook:

var testRunTime = sw.ElapsedMilliseconds;
// Convert the time so it can be easily consumed later
var convTime = Convert.ToInt32(testRunTime);
var testName = TestContext.CurrentContext.Test.Name;
var testResults = TestContext.CurrentContext.Result.Outcome.Status.ToString();
var featureFile = FeatureContext.FeatureInfo.Title;

I tend to keep info like the server name in config so we can just pull that out of there.

var appSettings = ConfigurationManager.AppSettings;
var serverName = appSettings.Get("ServerName");
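
For reference, the corresponding appSettings entries in App.config might look something like this (the server name value is just an example; the elasticSearchUrl key is read a little further down):

<appSettings>
    <add key="ServerName" value="UAT-WEB-01" />
    <add key="elasticSearchUrl" value="http://192.168.xxx.xxx:9200/" />
</appSettings>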

Tip: If you are running concurrently and hitting circular dependencies or similar, you can inherit your hook class from Specflow’s Steps class.
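
A minimal sketch of what that looks like (the class name is just an example):

[Binding]
public class Hooks : Steps
{
    // Inheriting from TechTalk.SpecFlow.Steps gives this class instance access to
    // ScenarioContext, FeatureContext and friends, rather than relying on the static
    // *.Current properties that can cause trouble when running scenarios in parallel.
}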

We now have our info; if you debug you can see the variables being populated. However, we are not yet sending it to our instance of ElasticSearch, so let’s do that now.

Have a look at your [BeforeTestRun] hook and let’s make a connection to ElasticSearch, which in essence is just an HttpClient:

public class TestRunHooks
{
    public static HttpClient EsClient;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // Setup ElasticSearch client
        EsClient = ESClient.Create();

        ...
    }

    ...
}

Here we call a class ESClient to do the heavy lifting:

public class ESClient
{
    public static HttpClient client;

    public static HttpClient Create()
    {
        // Get the info for your ES instance from config (http://192.168.xxx.xxx:9200/)
        var appSettings = ConfigurationManager.AppSettings;
        var elasticUrl = appSettings.Get("elasticSearchUrl");
        client = new HttpClient()
        {
            BaseAddress = new Uri(elasticUrl),
            Timeout = TimeSpan.FromMilliseconds(500)
        };
        return client;
    }

    public static async Task<HttpResponseMessage> PostESData(HttpClient client, object test)
    {
        try
        {
            // Post the data as Json to an index of your choice in ElasticSearch!
            return await client.PostAsync("/qa/apitests", new StringContent(JsonConvert.SerializeObject(test), Encoding.UTF8, "application/json"));
        }
        catch (Exception)
        {
            return null;
        }
    }

    // The object to post in our async call to ElasticSearch
    public class TestResult
    {
        public string name;
        public int elapsedTime;
        public string result;
        public string testRunContext;
        public string featureFile;
        public string serverName;
        public string date = DateTime.UtcNow.ToString("yyyy/MM/dd HH:mm:ss");
    }

    // Call this from your hook with the data we have gathered in our [AfterScenario] hook
    public static void PostToEs(string testName, string outcome, int convTime, string featureFile, string serverName)
    {
        // Test context Id is just a unique name for the test run, generate it how you like!
        var testRunContext = TestRunHooks.TestContextId;
        var client = ESClient.client;

        var testResult = new TestResult() { name = testName, result = outcome, elapsedTime = convTime, testRunContext = testRunContext, featureFile = featureFile, serverName = serverName };
        var result = PostESData(client, testResult).Result;
    }
}
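
Putting it all together, the scenario hooks might end up looking something like the sketch below (building on the Hooks class from the earlier tip so that FeatureContext is available as an instance property; how you generate TestContextId is up to you):

[Binding]
public class Hooks : Steps
{
    private Stopwatch sw;

    [BeforeScenario]
    public void BeforeScenario()
    {
        // Equivalent to new Stopwatch() followed by Start()
        sw = Stopwatch.StartNew();
    }

    [AfterScenario]
    public void AfterScenario()
    {
        sw.Stop();

        var convTime = Convert.ToInt32(sw.ElapsedMilliseconds);
        var testName = TestContext.CurrentContext.Test.Name;
        var outcome = TestContext.CurrentContext.Result.Outcome.Status.ToString();
        var featureFile = FeatureContext.FeatureInfo.Title;
        var serverName = ConfigurationManager.AppSettings.Get("ServerName");

        // Send the gathered stats off to ElasticSearch
        ESClient.PostToEs(testName, outcome, convTime, featureFile, serverName);
    }
}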

Don’t forget to dispose of your client after the test run:

[AfterTestRun]
public static void AfterTestRun()
{
    EsClient.Dispose();

    ...
}

All going well (no firewall issues etc.), your POST call to ElasticSearch should return a 201 status code, and ElasticSearch now has data! Moving on to Kibana…

Kibana

If we click on ‘Discover’ in Kibana and have the correct date/time range selected in the upper right-hand corner, we should see the raw data sent to ElasticSearch. Alternatively, you can perform a Lucene query (don’t worry, you won’t need to know this syntax in depth to make good use of it!) such as:

result:"Passed"

This returns all tests that passed for the selected time period, based on the data we have pushed into ElasticSearch.
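
You can also combine fields from the document we posted; for example, passing tests against a particular server (the server name here is just an illustration):

result:"Passed" AND serverName:"UAT-WEB-01"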

Now that we have data we can create visualisations based on it! Keeping with the query above let’s visualise our passing tests in varying forms:

gauge

When you have enough visualisations you can drop them all onto a dashboard and share it with your team.

Dashboard

Note: If you want to lock down Kibana functionality and give team members their own login, you will need to install the X-Pack add-on.

Finally, and as alluded to at the start: once you have your metrics in ElasticSearch or Influx or whatever (there are a few out there), you are not limited in which tool you visualise them with. I’d like to build on what is outlined here to compare results across runs, spot trends, drill down to failures and so on, although I am not there yet 🙂

Happy graphing!



Concurrent Test Running with Specflow and NUnit 3

A little while back I wrote a post on achieving concurrent or parallel test running with Selenium, Specflow and NUnit 2, but what about NUnit 3? Let’s have a look at that; thankfully it is a bit simpler than running an external build script as done previously.

First up, according to the Specflow docs, open up the AssemblyInfo.cs file in your project and add the following line:

[assembly: Parallelizable(ParallelScope.Fixtures)]
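
If you also want to control how many fixtures NUnit runs at once, you can set the worker count in the same file (the number here is just an example):

[assembly: LevelOfParallelism(2)]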

Next we need to set up and tear down our browser and config at the Scenario Hook level; doing it in a Test Run Hook will be problematic as it is static. Your hooks might look similar to this:

[Binding]
public class ScenarioHooks
{
    private readonly IObjectContainer objectContainer;
    private IWebDriver Driver;

    public ScenarioHooks(IObjectContainer objectContainer)
    {
        this.objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void BeforeScenario()
    {
        var customCapabilitiesIE = new DesiredCapabilities();
        customCapabilitiesIE.SetCapability(CapabilityType.BrowserName, "internet explorer");
        customCapabilitiesIE.SetCapability(CapabilityType.Platform, new Platform(PlatformType.Windows));
        customCapabilitiesIE.SetCapability("webdriver.ie.driver", @"C:\tmp\webdriver\iedriver\iedriver_3.0.0_Win32bit.exe");

        // Point this at your Selenium Grid hub
        Driver = new RemoteWebDriver(new Uri("http://XXX.XXX.XXX.XXX:4444/wd/hub"), customCapabilitiesIE);
        objectContainer.RegisterInstanceAs<IWebDriver>(Driver);
    }

    [AfterScenario]
    public void AfterScenario()
    {
        Driver.Quit();
        Driver.Dispose();
    }
}

You can see from the browser instantiation that we are sending the tests to a Selenium Grid hub, so as a precursor to running the tests you will need suitable infrastructure to run a grid, or you could configure it to go off to Sauce Labs or BrowserStack.

Assuming the hub and nodes are configured correctly, when your build process runs the tests the hub will farm them out by feature file (for other options see the parallel scope in AssemblyInfo.cs) to achieve concurrent test running, and that’s it! Much nicer.



Gallio Test Runner

Gallio (sometimes known as MbUnit or Gallio Icarus) is a test runner which, to me, has two distinct advantages over MSTest and NUnit: parallel test execution support and built-in test reports (although NUnit does have parallel support via PNUnit).

How to set it up

  • Download and install Gallio from here.
  • In your Visual Studio project, navigate to Project > [project name] Properties from the drop-down menu.
  • Click Debug.
  • Set the start-up program to Gallio; by default it should be somewhere like: C:\Program Files\Gallio\bin\Gallio.Icarus.exe
  • In Start Options > Command Line Arguments, paste in the path of your project’s DLL housing the tests you want to execute.
  • Save.
  • Hit Start when you want to run or debug your tests.

Gallio_Setup

Gallio GUI

This looks very similar to NUnit but with some extra buttons and menus; the only ones you will really care about are start/stop/debug and the test reports.

Gallio_GUI

The reports aren’t anywhere near as nice as those you would find in Allure, but they are easily generated and easy to view.

Gallio_Report

Bootnote: In Gallio Icarus, select Tools > Options, go to the “Preferences” page and set “Test Runner Factory” to IsolatedAppDomain or Local to get the debugger to work.



Applitools Eyes: Easy Automated Visual Checking

Applitools Eyes allows us to easily integrate automated visual testing into our existing BDD solution. Having automated visual testing is a big plus because whilst our automation may be good at asserting whether elements are visible on the page or whether certain text is present, it doesn’t tell us that anything has appeared in the right place.

Automated visual testing will capture baseline images to make future checks against.

To quote another article: “For example, a single automated visual test will look at a page and assert that every element on it has rendered correctly. Effectively checking hundreds of things and telling you if any of them are out of place. This will occur every time the test is run, and it can be scaled to each browser, operating system, and screen resolution you require.”

If you run this in parallel then you have a powerful testing beast on your hands.

Put another way, one automated visual test is worth hundreds of assertions. And if done in the service of an iterative development workflow, then you’re one giant leap closer to Continuous Delivery. — Dave Haeffner

As with everything on this blog it is tailored around C# .Net but the following works in a similar way with other bindings like Java, Python, Ruby and JS.

How it’s set up in the Solution for C# .Net

First off we need the NuGet package for Eyes:

Nuget_Eyes

Then in our TestRunContext class we need to instantiate Eyes as part of our window setup:

using Applitools;

private static Eyes eyes;

public static void WindowSetup()
{
    Driver.Navigate().GoToUrl("about:blank");
    Driver.Manage().Window.Maximize();

    // This is your api key, make sure you use it in all your tests.
    eyes = new Eyes { ApiKey = "API KEY GOES HERE" };

    // Start visual testing with the browser.
    // Make sure to use the returned driver from this point on.
    Driver = eyes.Open(Driver, "Application name", "Test Name", new System.Drawing.Size(1024, 768));
}

We call on the following in our tests to perform a check of the page, passing in a string to be displayed in Applitools:

public static void EyeCheck(string pageName)
{
    // Visual validation point
    eyes.CheckWindow(pageName);
}

And then close the Eyes at the end of our checks:

public static void CloseEyes()
{
    // End visual testing. Validate visual correctness.
    eyes.Close();
    eyes.AbortIfNotClosed();
}

Rather than carrying on with the original Selenium instance, we pass the existing Driver into eyes.Open and store the WebDriver object that eyes.Open returns back in the Driver variable, so from that point on every Selenium call goes through Eyes.

This way the Eyes platform will be able to capture what our test is doing when we ask it to capture a screenshot. The Selenium actions in our test will not need to be modified.

Before calling eyes.Open we provide the API key. When calling eyes.Open, we pass it the Selenium instance, the name of the app we’re testing (e.g., “Test Suite”) and the name of the test (e.g., “Test Name”). In the above example the part that says new System.Drawing.Size(1024, 768) specifies the viewport size of the browser.

For mobile testing it is not relevant, since the device size will be used, but for web testing it is highly recommended to configure a size in order to ensure that the result of the test is consistent even when you run it on different machines with different screen resolutions/sizes. You can also parameterise it and run on several sizes in the case of responsive design, but in general it is recommended to set it explicitly.

Now we’re ready to add some visual checks to our test by calling the following in our step definitions.

Page.PerformEyeCheck(pageName);
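
In context, a step definition using it might look something like this (the step text, Page wrapper and page name are all just illustrative):

[Then(@"the dashboard is displayed correctly")]
public void ThenTheDashboardIsDisplayedCorrectly()
{
    // Captures a screenshot of the current window and compares it to the baseline
    Page.PerformEyeCheck("Dashboard");
}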

With eyes.CheckWindow() working in the background, we are specifying where in the test’s workflow we’d like Applitools Eyes to capture a screenshot (along with some descriptive text).

NOTE: These visual checks are effectively doing the same work as the pre-existing assertion (e.g. where we’re asking Selenium if a success notification is displayed and asserting on the Boolean result), in addition to reviewing other visual aspects of the page. So once we verify that our test is working correctly we can remove this assertion and still be covered.

We end the test with eyes.Close. You may feel the urge to place this in teardown, but in addition to closing the session with Eyes, it acts like an assertion. If Eyes finds a failure in the app (or if a baseline image approval is required), then eyes.Close will throw an exception, failing the test. So it’s best suited to live in the test itself.

NOTE: An exception from eyes.Close will include a URL to the Applitools Eyes job in your test output. The job will include screenshots from each test step and enable you to play back the keystrokes and mouse movements from your Selenium tests. You can also determine the level of strictness to use or exclude areas of your page from being checked.

When an exception gets thrown by eyes.Close, the Eyes session will close. But if an exception occurs before eyes.Close can fire, the session will remain open. To handle that, we’ll need to add an additional command to our teardown.

eyes.AbortIfNotClosed(); will make sure the Eyes session terminates properly regardless of what happens in the test.
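
A sketch of that safety net in an [AfterScenario] hook (where exactly your teardown lives is up to you):

[AfterScenario]
public void AfterScenario()
{
    // No-op if eyes.Close() already ran; otherwise this terminates the open Eyes session
    eyes.AbortIfNotClosed();
}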

Now when we run the test, it will execute locally while also performing visual checks in Applitools Eyes.

Applitools Eyes

Further Example:

https://eyes.applitools.com/app/tutorial.html



Parallel Test Execution with Specflow

Running Specflow scenarios in parallel is a tricky problem to solve due to the nature of how the tests are written. Achieving this is a lot simpler if we write Selenium tests in a unit-style fashion, but as this isn’t the case for Specflow with its Gherkin syntax, we will look at executing them with a PowerShell build script against a BrowserStack grid.

Thanks go to this great article from Kenneth Truyers; it really helped me to achieve this.

Set Up

In our initialisation method we need to specify our remote driver, passing in our varying desired capabilities from config.

private static void SetupCloudDriver()
{
    var capabilities = new DesiredCapabilities();

    capabilities.SetCapability(CapabilityType.Version, ConfigurationManager.AppSettings["version"]);
    capabilities.SetCapability("os", ConfigurationManager.AppSettings["os"]);
    capabilities.SetCapability("os_version", ConfigurationManager.AppSettings["os_version"]);
    capabilities.SetCapability("browserName", ConfigurationManager.AppSettings["browser"]);

    capabilities.SetCapability("browserstack.user", ConfigurationManager.AppSettings["browserstack.user"]);
    capabilities.SetCapability("browserstack.key", ConfigurationManager.AppSettings["browserstack.key"]);
    capabilities.SetCapability("browserstack.local", false);
    capabilities.SetCapability("browserstack.debug", true);

    capabilities.SetCapability("project", "Project Name");

    Driver = new RemoteWebDriver(new Uri(ConfigurationManager.AppSettings["browserstack.hub"]), capabilities);
    Driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(1));
    ScenarioContext.Current["driver"] = Driver;
}

Config for Cross Browser spin up

One of our config files might look like so:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <appSettings>
        <add key="browser" value="Safari" xdt:Transform="Insert"/>
        <add key="os" value="osx" xdt:Transform="Insert"/>
        <add key="version" value="8" xdt:Transform="Insert"/>
        <add key="os_version" value="Yosemite" xdt:Transform="Insert"/>
    </appSettings>
</configuration>

This tells BrowserStack what system to spin up and run the tests against for one parallel instance; multiple config files with other systems are needed so they can all be executed, depending on your requirements. Furthermore, we need to set these up in Configuration Manager so that each config picks up the tests when the solution is built.
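
As an illustration, a second transform for another parallel instance might look like this (the values are just examples):

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <appSettings>
        <add key="browser" value="Chrome" xdt:Transform="Insert"/>
        <add key="os" value="Windows" xdt:Transform="Insert"/>
        <add key="version" value="46" xdt:Transform="Insert"/>
        <add key="os_version" value="10" xdt:Transform="Insert"/>
    </appSettings>
</configuration>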

What This Enables

We can select which environment we wish to run against, run the test and see it appear in BrowserStack (not yet in parallel), or run it locally using our own setup.

Bstack_1

So now we have essentially got a single test running in the cloud (and our config still allows us to run locally if we choose), next we need to kick off a bunch of these against different systems.

The Build Script

We use a PowerShell build script to achieve the parallel part of this, here it is:

$solution = "Your.Testing.Solution.sln"

function Get-SolutionConfigurations($solution)
{
        Get-Content $solution |
        Where-Object {$_ -match "(?&lt;config&gt;\w+)\|"} |
        %{ $($Matches['config'])} |
        select -uniq
}

$frameworkDirs = @((Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSBuild\ToolsVersions\12.0" -Name "MSBuildToolsPath32")."MSBuildToolsPath32",
                        "$env:windir\Microsoft.NET\Framework\v4.0.30319")
    for ($i = 0; $i -lt $frameworkDirs.Count; $i++) {
        $dir = $frameworkDirs[$i]
        if ($dir -Match "\$\(Registry:HKEY_LOCAL_MACHINE(.*?)@(.*)\)") {
            $key = "HKLM:" + $matches[1]
            $name = $matches[2]
            $dir = (Get-ItemProperty -Path $key -Name $name).$name
            $frameworkDirs[$i] = $dir
        }
    }

    $env:path = ($frameworkDirs -join ";") + ";$env:path"

@(Get-SolutionConfigurations $solution) | foreach {
      Write-Host "Building for $_"
    msbuild $solution /p:Configuration=$_ /nologo /verbosity:quiet
}
 
 New-Item "$(get-location)\packages\specflow.1.9.0\tools\specflow.exe.config" -type file -force -value "&lt;?xml version=""1.0"" encoding=""utf-8"" ?&gt; &lt;configuration&gt; &lt;startup&gt; &lt;supportedRuntime version=""v4.0.30319"" /&gt; &lt;/startup&gt; &lt;/configuration&gt;" | Out-Null

@(Get-SolutionConfigurations $solution)| foreach {
    Start-Job -ScriptBlock {
        param($configuration, $basePath)

        try
        {
            &amp; $basePath\packages\NUnit.Runners.2.6.4\tools\nunit-console.exe /labels /out=$basePath\nunit_$configuration.txt /xml:$basePath\nunit_$configuration.xml /nologo /config:$configuration "$basePath/Your.Testing.Solution/bin/$configuration/Your.Testing.Solution.dll"
        }
        finally
        {
            &amp; $basePath\packages\specflow.1.9.0\tools\specflow.exe nunitexecutionreport "$basePath\Your.Testing.Solution\Your.Testing.Solution.csproj" /out:$basePath\specresult_$configuration.html /xmlTestResult:$basePath\nunit_$configuration.xml /testOutput:nunit_$configuration.txt
        }

    } -ArgumentList $_, $(get-location)
}
Get-Job | Wait-Job
Get-Job | Receive-Job

It obviously needs tweaking in areas to point to your solution, but what the hell is it actually doing? It runs msbuild for each solution configuration and then executes the test suite against each one: our Safari-based config gets built and executed against BrowserStack, as does any other configuration. A test report for each config/system is then generated to let you know the outcome of that particular run.

Running the script should allow you to see parallel test execution against your desired systems in Browserstack:

Bstack_3

Generated Feedback and Reporting

The reports generated afterwards look similar to the one below:

Bstack_2

What Next?

You will undoubtedly encounter issues with BrowserStack seeing your internal test environments; this is something you will need to consider. Consult the BrowserStack documentation online regarding tunnelling or running BrowserStack locally.

We can also throw in some automated visual checking to really get the ball rolling with Continuous Delivery. If you have some well set up Applitools Eyes base images then visual checking on many systems at once is potentially worth thousands of checks.



