Automated Tester


Visualising test data with ElasticSearch & Kibana

You probably already get some kind of metrics out of your tests, whether that's stats from your build, SpecFlow reports, or other build artefacts. This article will show you how to send your data to ElasticSearch and visualise the stats with its sister product, Kibana.

There are several alternative ways you can achieve the same thing here: you may prefer another visualisation tool such as Graphite or Grafana, or you may want to store your metrics in something like InfluxDB, so let's keep an open mind. I chose the ELK stack for no particular reason!

So let's get started. You're going to need ElasticSearch and Kibana running on your machine or a VM.

I'll leave the installation to you rather than detail it here; the documentation is thorough and there are several approaches, so choose what's best for you.

When you have the services running, check you can access them. Assuming you kept the default ports:

  • ElasticSearch: http://192.168.xxx.xxx:9200/
  • Kibana: http://192.168.yyy.yyy:5601/

ElasticSearch

So how do we add useful test data? Let's have a think about what we want to capture; maybe things like:

  • Test Name
  • Test Result (Pass/ Fail)
  • The Server it ran against
  • The elapsed time of the test
  • The Feature File the test belongs to
  • A name for the test run (I call this the test run context)

Some of this is easier to get hold of than the rest, depending on whether or not you run tests in parallel, but let's start with our hooks (setup/teardown).

As we want to know how long our test takes to run, let's create a stopwatch in the [BeforeScenario] hook:

// Start timing the scenario (Stopwatch.StartNew() is an equivalent shorthand)
var sw = new Stopwatch();
sw.Start();

Unsurprisingly, we should stop it after the test is complete.

sw.Stop();

What about the rest of the info? As I'm using NUnit 3 and SpecFlow to run the tests, it's easy to get at the test goodies we want to visualise and include them in our [AfterScenario] hook:

var testRunTime = sw.ElapsedMilliseconds;
// Convert the time so it can be easily consumed later
var convTime = Convert.ToInt32(testRunTime);
// The test name and outcome come from NUnit; the feature title comes from SpecFlow
var testName = TestContext.CurrentContext.Test.Name;
var testResults = TestContext.CurrentContext.Result.Outcome.Status.ToString();
var featureFile = FeatureContext.FeatureInfo.Title;

I tend to keep info like the server name in config, so we can just pull that out of there.

var appSettings = ConfigurationManager.AppSettings;
var serverName = appSettings.Get("ServerName");

Tip: If you are running concurrently and hitting circular dependencies or similar, you can inherit your hook class from SpecFlow's Steps class.

We now have our info; if you debug you can see the variables being populated. However, we are not yet sending it to our instance of ElasticSearch, so let's do that now.

Have a look at your [BeforeTestRun] hook and let's make a connection to ElasticSearch, which in essence is just an HttpClient:

public class TestRunHooks
{
    public static HttpClient EsClient;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // Setup ElasticSearch client
        EsClient = ESClient.Create();

        ...
    }

    ...
}
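Later we will also tag each result with a test run context, a unique name for the run. The TestContextId field below is my own illustration (generate yours however you like); one minimal option is to set it in the same [BeforeTestRun] hook:

public static string TestContextId;

// Inside [BeforeTestRun]: a unique, human-readable name for this run (illustrative)
TestContextId = $"Run_{DateTime.UtcNow:yyyyMMdd_HHmmss}";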

Here we call a class ESClient to do the heavy lifting:

public class ESClient
{
    public static HttpClient client;

    public static HttpClient Create()
    {
        // Get the info for your ES instance from config (http://192.168.xxx.xxx:9200/)
        var appSettings = ConfigurationManager.AppSettings;
        var elasticUrl = appSettings.Get("elasticSearchUrl");
        client = new HttpClient()
        {
            BaseAddress = new Uri(elasticUrl),
            Timeout = TimeSpan.FromMilliseconds(500)
        };
        return client;
    }

    public static async Task<HttpResponseMessage> PostESData(HttpClient client, object test)
    {
        try
        {
            // Post the data as Json to an index of your choice in ElasticSearch!
            return await client.PostAsync("/qa/apitests", new StringContent(JsonConvert.SerializeObject(test), Encoding.UTF8, "application/json"));
        }
        catch (Exception)
        {
            // Swallow failures so a metrics outage never fails the test run
            return null;
        }
    }

    // The object to post in our async call to ElasticSearch
    public class TestResult
    {
        public string name;
        public int elapsedTime;
        public string result;
        public string testRunContext;
        public string featureFile;
        public string serverName;
        public string date = DateTime.UtcNow.ToString("yyyy/MM/dd HH:mm:ss");
    }

    // Call this from your hook with the data we gathered in our [AfterScenario] hook
    public static void PostToEs(string testName, string outcome, int convTime, string featureFile, string serverName)
    {
        // The test context Id is just a unique name for the test run, generate it how you like!
        var testRunContext = TestRunHooks.TestContextId;
        var client = ESClient.client;

        var testResult = new TestResult() { name = testName, result = outcome, elapsedTime = convTime, testRunContext = testRunContext, featureFile = featureFile, serverName = serverName };
        var result = PostESData(client, testResult).Result;
    }
}
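To tie it all together, the [AfterScenario] hook just hands the values we gathered earlier to PostToEs. A sketch, assuming the stopwatch is held in a field shared between the hooks:

[AfterScenario]
public void AfterScenario()
{
    sw.Stop();

    var convTime = Convert.ToInt32(sw.ElapsedMilliseconds);
    var testName = TestContext.CurrentContext.Test.Name;
    var outcome = TestContext.CurrentContext.Result.Outcome.Status.ToString();
    var featureFile = FeatureContext.FeatureInfo.Title;
    var serverName = ConfigurationManager.AppSettings.Get("ServerName");

    ESClient.PostToEs(testName, outcome, convTime, featureFile, serverName);
}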

Don’t forget to dispose of your client after the test run:

[AfterTestRun]
public static void AfterTestRun()
{
    EsClient.Dispose();

    ...
}

All going well (no firewall issues etc.), your POST call to ElasticSearch should return a 201 status code and ElasticSearch now has data! Moving on to Kibana…

Kibana

If we click on 'Discover' in Kibana and have the correct date/time range selected in the upper right-hand corner, we should see the raw data sent to ElasticSearch. Alternatively, you can perform a Lucene query (don't worry, you won't need to know this syntax in depth to make good use of it!) such as:

result:"Passed"

This returns all tests that passed for the selected time period, based on the data we have pushed into ElasticSearch.
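Queries can also combine terms with the usual boolean operators and ranges. The values here are illustrative, but the field names match the TestResult object we posted:

result:"Failed" AND serverName:"staging01"
featureFile:"Login" AND elapsedTime:>5000

The first finds failures on a given server; the second finds slow tests (over 5 seconds) in a given feature.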

Now that we have data, we can create visualisations based on it! Keeping with the query above, let's visualise our passing tests in varying forms:

Gauge

When you have enough visualisations you can drop them all onto a dashboard and share it with your team.

Dashboard

Note: If you want to lock down Kibana functionality and give team members their own logins, you will need to install the X-Pack add-on.

Finally, and as alluded to at the start: once you have your metrics in ElasticSearch, Influx, or whatever else (there are a few out there), you are not limited in which tool you visualise them with. I'd like to build on what is outlined here to compare results across runs, spot trends, drill down to failures and so on, although I am not there yet 🙂

Happy graphing!



Proxying UI Automation to OWASP ZAP

Quick disclaimer: I'm not a security expert, pen tester or ZAP expert, but that doesn't mean we should ignore security. A cheap way of adding a layer of security testing is to take your existing Selenium automation and proxy it through OWASP Zed Attack Proxy (ZAP). So let's get started.

This is not a guide on how to use OWASP Zap and will not go into great configuration detail.

  • Download, install and start OWASP ZAP (requires Java), either locally or on a VM
  • Install FoxyProxy, a popular browser plugin; you can get it for Chrome or Firefox
  • Install ZAP's root-level certificate

In ZAP go to Options > Local Proxies and set the Address and Port as desired. In this article I am running ZAP on a VM, so I put the VM's address in and set the port to 8999. Add an additional proxy of localhost:8999.

We can test this with FoxyProxy: set up a proxy to point to the VM on port 8999 and you should see the ZAP API front-end interface.

Zap API Frontend

Next go to the API options in ZAP and add the VM's IP address to the list of addresses permitted to use the API. For the purposes of this article, I have also disabled the API key required to perform commands.

Now we can hit that API front end without the use of FoxyProxy.

So let's set up a connection to our ZAP instance in code and point the tests via the proxy. Let's start by adding the OWASPZAPDotNetAPI NuGet package.
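From the NuGet Package Manager Console that is:

Install-Package OWASPZAPDotNetAPI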

We’ll connect to the ZAP service at the start of our test run in the hook:

public static ClientApi Zap;

[BeforeTestRun]
public static void BeforeTestRun()
{
    // Note if you are using an API key, pass it in here instead of null
    Zap = new ClientApi("192.168.xxx.xxx", 8999, null);

    ...
}

Now let's configure a proxy and pass it to the driver:

var options = new ChromeOptions();

options.AddArgument("start-maximized");
options.AddArguments("disable-infobars");

// ZAP Proxy (Passive Scan) - ZAP should already be invoked.
var proxy = new Proxy();
proxy.Kind = ProxyKind.Manual;
proxy.IsAutoDetect = false;
proxy.HttpProxy = "192.168.xxx.xxx:8999";
proxy.SslProxy = "192.168.xxx.xxx:8999";

// ZAP serves its own certificate, so tell Chrome to ignore certificate errors
options.AddArgument("ignore-certificate-errors");
options.Proxy = proxy;

// Allow extra time for commands routed through the proxy
var timespan = TimeSpan.FromMinutes(3);
_driver = new RemoteWebDriver(new Uri(gridHub), options.ToCapabilities(), timespan);

If we run a test, we should see traffic being generated in the ZAP History tab:

Zap Traffic

Great, we're almost there. Now it's time to generate a report. You can do this manually by using the API UI under core/other/htmlreport (http://192.168.xxx.xxx:8999/UI/core/other/htmlreport/), but as we have a connection to the API in code, let's do it there, after the test run:

public static void WriteZapHtmlReport(string path, byte[] bytes)
{
    File.WriteAllBytes(path, bytes);
}

[AfterTestRun]
public static void AfterTestRun()
{
    HookHelper.WriteZapHtmlReport(ReportDirectory + "_PassiveScanReport.html", Zap.core.htmlreport());
    Zap.Dispose();
}

Here we take the byte array from the ZAP core API and write it to an HTML file somewhere on disk, giving you a handy passive scan report.

Zap Report

Limitations

Be aware that any test that uses ZAP needs to have exclusive use of it, so to generalise, it's better to keep all the browser-based tests sequential. This includes tests that use ZAP as a proxy, not just the scanning tests. These words were robbed from here and are very true, having spent a couple of hours seeing if the above would work with concurrency.



Surgical Strike UI Automation Testing

When starting out with automated testing or Selenium, a lot of people follow a kind of record-and-playback approach to writing automated tests, whether this is born out of using something like the plugin or just the general approach:

  • Fire up a browser
  • Goto the site
  • Login
  • Get to where you need to be
  • Perform a bunch of interactions
  • Logout (possibly)

There are a few optimisations we can make without much effort, like navigating with a URL rather than clicking a bunch of menu items. This approach may need an environment with test data already present, and that can turn into a big overhead. It might be a lot harder to 'get to where you need to be' if you have to create a whole structure first, and doing that in the UI as part of your test should be avoided.

Trimming the fat

In the past on this blog I have talked about API tests and UI tests; let's combine the two to really optimise the UI tests. You're writing a Selenium test to test a specific piece of functionality, so let's keep it that way and just use Selenium to perform the precise, surgical UI interactions that we care about. This will speed up your tests and make them more stable.

This way we will:

  • Perform a bunch of internal API calls to set the test up
  • Fire up a browser
  • Login
  • Navigate
  • Do the test
  • Perform another set of internal API calls to rip out the test setup

I've found a nice way to do this is to set up a stack which we can push fixtures onto and then iteratively pop them off after the test is done.

private Stack<Action> teardownActions;

public Stack<Action> TeardownActions => teardownActions ?? (teardownActions = new Stack<Action>());
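The snippets further down fetch this stack through a TreeState helper; that is just my per-scenario state holder. A minimal sketch of such a holder, assuming it is keyed off SpecFlow's ScenarioContext, might look like this:

public class TreeState
{
    private Stack<Action> teardownActions;

    public Stack<Action> TeardownActions =>
        teardownActions ?? (teardownActions = new Stack<Action>());

    // Fetch (or create) the state for the current scenario
    public static TreeState Get(ScenarioContext context)
    {
        if (!context.ContainsKey(nameof(TreeState)))
        {
            context[nameof(TreeState)] = new TreeState();
        }

        return (TreeState)context[nameof(TreeState)];
    }
}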

As usual I am using SpecFlow in my setup; now I turn to the hooks to perform an action I need for each test. Let's say… create a folder:

FolderRootDto = new FolderDto
{
    FolderVisibility = 50,
    Name = $"{ScenarioInfo.Title}"
};

Folders.CreateNewFolder(FolderRootDto.Name, FolderRootDto);

Stack it out

When the create method is called and we actually perform the API POST request, we get the Id from the DTO (or whatever we need in the delete call) and push an action onto the stack, in this case another method that calls delete.

var newFolder = RestManager.Post<FolderDto>(client, BaseUrl + "/rest/endpoint/to/call", folderDto);

if (newFolder.StatusCode == HttpStatusCode.OK)
{
    var lastFolderCreatedId = newFolder.Response.Id;

    // Queue up the matching delete so teardown can rip this folder out again
    TreeState.Get(ScenarioContext).TeardownActions.Push(() =>
    {
        DeleteFolder(lastFolderCreatedId);
    });
}

Now, before our browser even fires up, I have a folder to perform a test in. If we add our teardown action stack to fire after the scenario, then this newly created folder gets ripped out afterwards.

var nextAction = TeardownActions.Pop();

while (nextAction != null)
{
    nextAction();
    nextAction = null;

    if (TeardownActions.Count > 0)
    {
        nextAction = TreeState.Get(scenarioContext).TeardownActions.Pop();
    }
}
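An equivalent, slightly more compact way to drain the stack is to loop on the count alone:

// Pop and invoke each teardown action until the stack is empty
while (TeardownActions.Count > 0)
{
    TeardownActions.Pop()();
}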

As these actions are performed in milliseconds, this can drastically reduce the time of a Selenium test that might otherwise be doing all the foundation work, or reduce the overhead you might have in getting your fixture setup in place, whether that be build steps or database restores.



Appium Mobile Emulation

Getting started with Appium requires a few installs and some configuration, so let’s get to it.

This article assumes you already have Java installed and added to your PATH environment variable.

What you need:

Android Studio

Once installed, look under Tools > Android > SDK. There are several components we need, namely SDK Tools, SDK Build Tools and SDK Platform Tools.

Android SDK Install

Once set up, you will need to create an ANDROID_HOME environment variable pointing to your main SDK folder. Note: A reboot may be required for Windows to pick up the change.

Android HOME

Visual Studio Emulator for Android

Now let's look at Visual Studio Emulator for Android. Once installed, it should list a bunch of Android API device profiles you can run (provided Hyper-V is enabled). Start one up and you should be able to interact with the virtual device. Note: If you look in Hyper-V Manager you can see the running emulator and tweak memory usage etc.

Device Profiles

HyperV Manager

If we navigate to our SDK tools we can also see the device by running:

adb devices -l

This will list the device name for our Appium setup. In this instance the device is donatello.

Tip: If you happen to want to install an App you can do so with the command: adb install <path\to\yourapk.apk>

Appium Desktop

Next download Appium and click the Android icon to configure the server; input the relevant fields and device name, then click the start button to run the server.

Appium Config

Installing an SSL Certificate on the device if required

As we are interacting with a website rather than an app for the purposes of this article, it can be useful to store SSL certs on the device when connecting. Using Visual Studio Emulator for Android we can set up an SD card.

Emulator Options > SD Card

Pull from the SD card to a temp folder first in order to create the directory structure, then place the SSL certificate in an easy-to-remember place and push it to the SD card. Under Settings > Privacy in the Android OS we can then add a trusted certificate to the device.

When doing this, Android will force you to create a passcode for the device; do so and keep it easy to remember.

How do I find Android UI Elements for my tests?

Back over in Android SDK land, in Sdk/tools/bin there is a batch file called uiautomatorviewer.bat. Open it in a text editor and find the line:

call "%java_exe%" "-Djava.ext.dirs=%javaextdirs%" "-Dcom.android.uiautomator.bindir=%prog_dir%" -jar %jarpath% %*

Then replace it with:

call "%java_exe%" "-Djava.ext.dirs=%javaextdirs%" "-Dcom.android.uiautomator.bindir=C:\DEV\androidSDK\tools" -jar %jarpath% %*

Make sure to set the bindir value to where your SDK tools are installed.

Then run uiautomatorviewer.bat whilst your emulator is running. Clicking the Android screenshot icon (second from the left) will paint a view of what is currently on the emulator screen; you can then hover over buttons etc. in order to find locators and anything else you might need.

uiautomatorviewer

Please note, your emulator must be running API level 17 or higher.

Time to Code

We can now write automation to run via this emulator with the usual webdriver goodies. Install the Appium Webdriver NuGet package and create the driver with the desired capabilities.

DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.SetCapability("device", "Android");
capabilities.SetCapability(CapabilityType.Platform, "Windows");
capabilities.SetCapability("deviceName", "donatello");
capabilities.SetCapability("platformName", "Android");
capabilities.SetCapability("platformVersion", "6.0.0");
capabilities.SetCapability(MobileCapabilityType.BrowserName, "BROWSER");
capabilities.SetCapability(CapabilityType.AcceptSslCertificates, true);

driver = new AndroidDriver<AndroidElement>(new Uri("http://127.0.0.1:4723/wd/hub"), capabilities, TimeSpan.FromSeconds(180));
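For reference, the types above come from the Selenium and Appium .NET client libraries. A likely set of usings (namespaces can vary slightly between client versions, so treat this as a guide):

using OpenQA.Selenium;                 // CapabilityType, By, Keys
using OpenQA.Selenium.Appium.Android;  // AndroidDriver, AndroidElement
using OpenQA.Selenium.Appium.Enums;    // MobileCapabilityType
using OpenQA.Selenium.Remote;          // DesiredCapabilities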

Note that browserName "BROWSER" will start the device's native Android browser.

We can now start automating the site, beginning with some very crude Selenium here:

driver.Navigate().GoToUrl("https://mySecureUrl.com");

Thread.Sleep(1000);
var element = driver.FindElement(By.Id("Email"));

element.Click();
element.Clear();
element.SendKeys("username@mySecureUrl.com");

var element2 = driver.FindElement(By.Id("Password"));
element2.Click();
element2.Clear();
element2.SendKeys("mySecurePassword");
element2.SendKeys(Keys.Enter);

Hopefully this is an easy-enough-to-follow guide to get you up and running with Appium in your C# dev environment; these are the steps I took to begin playing with Appium in more depth. Happy testing!



Concurrent Test Running with Specflow and NUnit 3

A little while back I wrote a post on achieving concurrent or parallel test running with Selenium, SpecFlow and NUnit 2, but what about NUnit 3? Let's have a look; thankfully it is a bit simpler than running an external build script as done previously.

First up, according to the SpecFlow docs, open up your AssemblyInfo.cs file in your project and add the following line:

[assembly: Parallelizable(ParallelScope.Fixtures)]
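If you need to tune how many fixtures run at once, NUnit also honours a LevelOfParallelism attribute; the worker count of 4 here is just an example:

// Cap the number of parallel test workers
[assembly: LevelOfParallelism(4)]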

Next we need to set up and tear down our browser and config at the scenario hook level; doing it in a test run hook will be problematic as it is static. Your hooks might look similar to this:

[Binding]
public class ScenarioHooks
{
    private readonly IObjectContainer objectContainer;
    private IWebDriver Driver;

    public ScenarioHooks(IObjectContainer objectContainer)
    {
        this.objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void BeforeScenario()
    {
        var customCapabilitiesIE = new DesiredCapabilities();
        customCapabilitiesIE.SetCapability(CapabilityType.BrowserName, "internet explorer");
        customCapabilitiesIE.SetCapability(CapabilityType.Platform, new Platform(PlatformType.Windows));
        customCapabilitiesIE.SetCapability("webdriver.ie.driver", @"C:\tmp\webdriver\iedriver\iedriver_3.0.0_Win32bit.exe");

        Driver = new RemoteWebDriver(new Uri("http://XXX.XXX.XXX.XXX"), customCapabilitiesIE);
        objectContainer.RegisterInstanceAs<IWebDriver>(Driver);
    }

    [AfterScenario]
    public void AfterScenario()
    {
        // Quit ends the session on the grid; Dispose then cleans up the client
        Driver.Quit();
        Driver.Dispose();
    }
}
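Because the driver is registered with the object container, each step class can receive its scenario's own IWebDriver via constructor injection. A minimal sketch (the step class and URL are illustrative):

[Binding]
public class LoginSteps
{
    private readonly IWebDriver driver;

    // BoDi injects the IWebDriver instance registered in BeforeScenario
    public LoginSteps(IWebDriver driver)
    {
        this.driver = driver;
    }

    [Given(@"I am on the login page")]
    public void GivenIAmOnTheLoginPage()
    {
        driver.Navigate().GoToUrl("https://example.com/login");
    }
}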

You can see from the browser instantiation that we are sending the tests to a Selenium Grid hub, so as a precursor to running the tests you will need suitable infrastructure for a grid, or you could configure it to go off to Sauce Labs or BrowserStack.

Assuming the hub and nodes are configured correctly, when your build process runs the tests, the hub will farm them out by feature file (for other options see the parallel scope in AssemblyInfo.cs) to achieve concurrent test running, and that's it! Much nicer.



