Automated Tester


Visual testing with BackstopJS

There are many visual testing tools out there these days. Previously I have covered Applitools, which is without a doubt a great product with awesome people who contribute a lot to the testing community. I have also looked at Galen, which is more of a DOM layout tester. In this article we will have a look at BackstopJS, the closest open-source project I’ve come across to matching what Applitools does in the way visual regressions are presented to the user.

So, in a nutshell:

BackstopJS automates visual regression testing of your responsive web UI by comparing DOM screenshots over time.

Setup

The good thing about Applitools is that I can drop a line of code into already existing tests and see it do its thing. Here we have to start from the ground up, but fortunately the project maintainers have done a fantastic job of streamlining everything, so let’s go.

I’m a Windows user in my day to day, so let’s have a look at setting this up on Windows. Note that you will require some prerequisites in order to get started; the BackstopJS documentation covers installation.

BackstopJS has very good documentation, so I’ll try not to just repeat what it covers. Rather, I’ll give a quick flavour of what BackstopJS can do and how to overcome any issues I had during setup.

There was only one of those, actually: as I am using a local installation, I edited my PATH environment variable so that backstop.cmd could be found, adding the following entry: C:\Users\username\AppData\Roaming\npm\

Workflow

It’s important to remember the workflow here: we need to call backstop init before we run our baseline tests.

  • backstop init (run from your project directory)
  • backstop test (generate our first set of images)
  • backstop approve (promote those images to the baseline)
  • backstop test (run the tests and compare against the baseline)

BackstopJS commands in action

To hit the ground running I recommend cloning this sample project and just tweaking the settings to meet your needs.

If we edit ‘backstop.json’ we can easily drop new scenarios into the JSON array once we are familiar with it, add or modify viewport sizes, and exclude elements on the page that may be much more variable than others.
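To give a flavour, a trimmed-down backstop.json might look something like this – the project id, URL and selector are placeholders of my own, and the full set of options is in the BackstopJS docs:

{
  "id": "sample_project",
  "viewports": [
    { "label": "phone", "width": 320, "height": 480 },
    { "label": "tablet", "width": 1024, "height": 768 },
    { "label": "desktop", "width": 1920, "height": 1080 }
  ],
  "scenarios": [
    {
      "label": "Login page",
      "url": "https://www.someUrl.com/login",
      "hideSelectors": [".variable-data-panel"],
      "misMatchThreshold": 0.1
    }
  ],
  "engine": "puppeteer",
  "report": ["browser"]
}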

Output

In the output we get a cool slider in the visual diff inspector, a scrubber for checking out differences, and other useful info in a nicely styled report. Tests are also very fast to execute.

[Screenshots: visual change slider and BackstopJS report]

This was great for quickly spinning up tests against 25-odd branded login screens, generating 75 test cases covering three different resolutions. Granted, it’s not the same as emulating devices or testing against physical ones, but it’s useful nonetheless.

Digging a little deeper

What if we do want to log in, though? This requires some Selenium-esque code in the onReady.js file for the specified engine. You can use other engines, but here we’re looking at the recommended option, Puppeteer.

If you’re familiar with Selenium then Puppeteer is a breeze, and it also has very good API documentation. It’s for use with Chromium only (which makes sense, as it’s created by the Chrome DevTools team).

To perform a login for example, we add some Puppeteer code:

module.exports = async (page, scenario, vp) => {
  console.log('SCENARIO > ' + scenario.label);
  await require('./clickAndHoverHelper')(page, scenario);

  // add more ready handlers here...
  // Fill in the login form and submit it.
  await page.focus('#Email');
  await page.keyboard.type('email@someDomain.com');
  await page.focus('#Password');
  await page.keyboard.type('secretPassword');
  await page.keyboard.press('Enter');

  // Wait for the post-login navigation to finish.
  await page.waitForNavigation();

  // Visit each URL we want to capture and wait for a known element to render.
  for (const currentURL of ['https://www.someUrl.com']) {
    await page.goto(currentURL);
    await page.waitForSelector('#homeSupport > h3:nth-child(8) > a');
    console.log('Loaded: ' + currentURL);
  }
};

If you require an area to be excluded from the test, such as a dynamic panel with variable data in it, you can hide DOM elements in the config (backstop.json) – see the hideSelectors entry in the earlier sketch.

Why not give BackstopJs a try? It could save you a lot of time.



Visualising test data with ElasticSearch & Kibana

You probably already get some kind of metrics out of your tests, whether that’s stats from your build, SpecFlow reports or other build artefacts. This article will show you how to send your data to ElasticSearch and visualise the stats with its sister product, Kibana.

There are several alternative ways you can achieve the same thing here; you may prefer another visualisation tool such as Graphite or Grafana, or you may want to store your metrics in something like Influx, so let’s keep an open mind… I chose the ELK stack for no particular reason!

So let’s get started: you’re going to need to be running ElasticSearch (and Kibana) on your machine or a VM.

I’ll leave the installation to you rather than detail it here; the documentation is thorough and there are several installation methods, so choose what’s best for you.

When you have the services running, check you can access them. Assuming you kept the default ports:

  • ElasticSearch: http://192.168.xxx.xxx:9200/
  • Kibana: http://192.168.yyy.yyy:5601/

ElasticSearch

So how do we add useful test data? Let’s have a think about what we want to capture – maybe things like:

  • Test Name
  • Test Result (Pass/Fail)
  • The Server it ran against
  • The elapsed time of the test
  • The Feature File the test belongs to
  • A name for the test run (I call this the test run context)

Some of this information is easier to get hold of than the rest, depending on whether or not you run tests in parallel, but let’s start with our hooks (setup/teardown).

As we want to know how long our test takes to run, let’s create a stopwatch in the [BeforeScenario] hook:

// Stopwatch comes from System.Diagnostics
var sw = new Stopwatch();
sw.Reset();
sw.Start();

Unsurprisingly, we should stop it after the test is complete.

sw.Stop();

What about the rest of the info? As I’m using NUnit 3 and SpecFlow to run the tests, it’s easy to get at the test goodies we want to visualise and include them in our [AfterScenario] hook:

var testRunTime = sw.ElapsedMilliseconds;
// Convert the time so it can be easily consumed later
var convTime = Convert.ToInt32(testRunTime);
var testName = TestContext.CurrentContext.Test.Name;
var testResults = TestContext.CurrentContext.Result.Outcome.Status.ToString();
var featureFile = FeatureContext.FeatureInfo.Title;

I tend to keep info like the server name in config so we can just pull that out of there.

// ConfigurationManager comes from System.Configuration
var appSettings = ConfigurationManager.AppSettings;
var serverName = appSettings.Get("ServerName");

Tip: if you are running concurrently and hitting circular dependencies or similar, you can inherit your hook class from SpecFlow’s Steps class (you’ll see this in the sketch further down).

We now have our info, and if you debug you can see the variables being populated; however, we are not yet sending it to our instance of ElasticSearch, so let’s do that now.

Have a look in your [BeforeTestRun] hook and let’s make a connection to ElasticSearch, which in essence is just an HttpClient:

public class TestRunHooks
{
    public static HttpClient EsClient;

    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // Setup ElasticSearch client
        EsClient = ESClient.Create();

        ...
    }

    ...
}

Here we call a class ESClient to do the heavy lifting:

// Requires: System.Net.Http, System.Text, System.Threading.Tasks,
// System.Configuration and Newtonsoft.Json
public class ESClient
{
    public static HttpClient client;

    public static HttpClient Create()
    {
        // Get the info for your ES instance from config (http://192.168.xxx.xxx:9200/)
        var appSettings = ConfigurationManager.AppSettings;
        var elasticUrl = appSettings.Get("elasticSearchUrl");
        client = new HttpClient()
        {
            BaseAddress = new Uri(elasticUrl),
            Timeout = TimeSpan.FromMilliseconds(500)
        };
        return client;
    }

    public static async Task<HttpResponseMessage> PostESData(HttpClient client, object test)
    {
        try
        {
            // Post the data as JSON to an index of your choice in ElasticSearch!
            return await client.PostAsync("/qa/apitests", new StringContent(JsonConvert.SerializeObject(test), Encoding.UTF8, "application/json"));
        }
        catch (Exception)
        {
            return null;
        }
    }

    // The object to post in our async call to ElasticSearch
    public class TestResult
    {
        public string name;
        public int elapsedTime;
        public string result;
        public string testRunContext;
        public string featureFile;
        public string serverName;
        public string date = DateTime.UtcNow.ToString("yyyy/MM/dd HH:mm:ss");
    }

    // Call this from your hook with the data we have gathered in our [AfterScenario] hook
    public static void PostToEs(string testName, string outcome, int convTime, string featureFile, string serverName)
    {
        // The test context id is just a unique name for the test run; generate it how you like!
        var testRunContext = TestRunHooks.TestContextId;
        var client = ESClient.client;

        var testResult = new TestResult() { name = testName, result = outcome, elapsedTime = convTime, testRunContext = testRunContext, featureFile = featureFile, serverName = serverName };
        var result = PostESData(client, testResult).Result;
    }
}
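To tie the pieces together, here is a minimal sketch of a scenario hook class calling PostToEs. The class shape and names are my own (it inherits from SpecFlow’s Steps, as per the earlier tip), but the individual calls are the ones covered above:

// Requires: TechTalk.SpecFlow, NUnit.Framework, System.Diagnostics, System.Configuration
[Binding]
public class ScenarioHooks : Steps
{
    // Shared between the hooks so we can time each scenario
    private readonly Stopwatch sw = new Stopwatch();

    [BeforeScenario]
    public void BeforeScenario()
    {
        sw.Reset();
        sw.Start();
    }

    [AfterScenario]
    public void AfterScenario()
    {
        sw.Stop();

        var convTime = Convert.ToInt32(sw.ElapsedMilliseconds);
        var testName = TestContext.CurrentContext.Test.Name;
        var outcome = TestContext.CurrentContext.Result.Outcome.Status.ToString();
        var featureFile = FeatureContext.FeatureInfo.Title; // Steps exposes FeatureContext
        var serverName = ConfigurationManager.AppSettings.Get("ServerName");

        ESClient.PostToEs(testName, outcome, convTime, featureFile, serverName);
    }
}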

Don’t forget to dispose of your client after the test run:

[AfterTestRun]
public static void AfterTestRun()
{
    EsClient.Dispose();

    ...
}

All going well (no firewall issues, etc.), your POST call to ElasticSearch should return a 201 status code, and ElasticSearch now has data! Moving on to Kibana…
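As a quick sanity check, you can also query the index directly in a browser via ElasticSearch’s search API (adjust the host to match your setup; our documents went to the qa index above):

  • http://192.168.xxx.xxx:9200/qa/_search?pretty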

Kibana

If we click on ‘Discover’ in Kibana and have the correct date/time range selected in the upper right-hand corner, we should see the raw data sent to ElasticSearch. Alternatively, you can perform a Lucene query (don’t worry, you won’t need to know this syntax in depth to make good use of it!) such as…

result:"Passed"

This returns all tests that passed for the selected time period, based on the data we have pushed into ElasticSearch.
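Queries can combine any of the fields on the TestResult object we posted; for example (the values here are made up):

result:"Failed" AND serverName:"QA01"

featureFile:"Login" AND elapsedTime:[30000 TO *]

The second query uses a Lucene range to find tests in a given feature file that took over 30 seconds.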

Now that we have data, we can create visualisations based on it! Keeping with the query above, let’s visualise our passing tests in varying forms:

[Screenshot: gauge visualisation]
When you have enough visualisations you can drop them all onto a dashboard and share with your team.

[Screenshot: Kibana dashboard]

Note: if you want to lock down Kibana functionality and give team members their own login, you will need to install the X-Pack add-on.

Finally, as alluded to at the start, once you have your metrics in ElasticSearch or Influx or whatever else (there are a few out there), you are not limited in which tool you visualise them with. I’d like to build on what is outlined here to compare results across runs, spot trends, drill down to failures and so on, although I am not there yet 🙂

Happy graphing!



Proxying UI Automation to OWASP ZAP

Quick disclaimer: I’m not a security expert, pen tester or ZAP expert, but that doesn’t mean we should ignore security. A cheap way of adding a layer of security testing is to take your existing Selenium automation and proxy it through OWASP Zed Attack Proxy (ZAP). So let’s get started.

This is not a guide on how to use OWASP ZAP and will not go into great configuration detail.

  • Download, install and start OWASP ZAP (requires Java), either locally or on a VM
  • Install FoxyProxy, a popular browser plugin. You can get it for Chrome or Firefox.
  • Install ZAP’s root-level certificate (so HTTPS traffic can be intercepted)

In ZAP, go to Options > Local Proxies and set the address and port as desired. In this article I am running ZAP on a VM, so I put the address of the VM in and set the port to 8999. Add an additional proxy of localhost:8999.

We can test this with FoxyProxy: set up a proxy to point to the VM on port 8999, and you should see the ZAP API front-end interface.

[Screenshot: ZAP API front end]

Next, go to the API options in ZAP and add the VM’s IP address to the list of addresses permitted to use the API. For the purposes of this article, I have also disabled the API key required to perform commands.

Now we can hit that API front end without the use of FoxyProxy.

So let’s set up a connection to our ZAP instance in code and point the tests through the proxy. Let’s start by adding the OWASPZAPDotNetAPI NuGet package.
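If you’re using the NuGet Package Manager Console, that’s simply:

Install-Package OWASPZAPDotNetAPI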

We’ll connect to the ZAP service at the start of our test run in the hook:

public static ClientApi Zap;

[BeforeTestRun]
public static void BeforeTestRun()
{
    // Note: if you are using an API key, pass it in here instead of null
    Zap = new ClientApi("192.168.xxx.xxx", 8999, null);

    ...
}

Now let’s configure a proxy and pass it to the driver:

var options = new ChromeOptions();
options.AddArgument("start-maximized");
options.AddArguments("disable-infobars");

// ZAP proxy (passive scan) - ZAP should already be invoked.
var proxy = new Proxy
{
    Kind = ProxyKind.Manual,
    IsAutoDetect = false,
    HttpProxy = "192.168.xxx.xxx:8999",
    SslProxy = "192.168.xxx.xxx:8999"
};

// ZAP's certificate is self-signed, so tell Chrome to ignore certificate errors
options.AddArgument("ignore-certificate-errors");
options.Proxy = proxy;

// Three-minute command timeout for the remote session
var timespan = TimeSpan.FromMinutes(3);
_driver = new RemoteWebDriver(new Uri(gridHub), options.ToCapabilities(), timespan);

If we run a test, we should see traffic being generated in the ZAP History tab:

[Screenshot: ZAP History tab]

Great, we’re almost there. Now it’s time to generate a report. You can do this manually using the API UI under core/other/htmlreport (http://192.168.xxx.xxx:8999/UI/core/other/htmlreport/), but as we have a connection to the API in code, let’s do it there, after the test run:

public static void WriteZapHtmlReport(string path, byte[] bytes)
{
    File.WriteAllBytes(path, bytes);
}

[AfterTestRun]
public static void AfterTestRun()
{
    HookHelper.WriteZapHtmlReport(ReportDirectory + "_PassiveScanReport.html", Zap.core.htmlreport());
    Zap.Dispose();
}

Here we take the byte array from the Zap.core API and write it to an HTML file somewhere on disk, giving you a handy passive scan report.

[Screenshot: ZAP passive scan report]

Limitations

Be aware that any test that uses ZAP needs to have exclusive use of it, so to generalise, it’s better to keep all the browser-based tests sequential. This includes tests that use ZAP as a proxy, not just the scanning tests. These words were robbed from here and are very true, having spent a couple of hours seeing if the above would work with concurrency.



Surgical Strike UI Automation Testing

When starting out with automated testing or Selenium, a lot of people follow a kind of record-and-playback approach to writing automated tests, whether this is born out of using something like the plugin or just the general approach:

  • Fire up a browser
  • Go to the site
  • Login
  • Get to where you need to be
  • Perform a bunch of interactions
  • Logout (possibly)

There are a few optimisations we can make without much effort, like navigating to a URL directly rather than clicking through a bunch of menu items. This approach may need an environment with test data already present, and that can turn into a big overhead. It might be a lot harder to ‘get to where you need to be’ if you have to create a whole structure first, and doing that in the UI as part of your test should be avoided.

Trimming the fat

In the past on this blog I have talked about API tests and UI tests; let’s combine the two to really optimise the UI tests. You’re writing a Selenium test to test a specific piece of functionality, so let’s keep it that way and just use Selenium to perform the precise, surgical UI interactions that we care about. This will speed up your tests and make them more stable.

This way we will:

  • Perform a bunch of internal API calls to set the test up
  • Fire up a browser
  • Login
  • Navigate
  • Do the test
  • Perform another set of internal API calls to rip out the test setup

I’ve found a nice way to do this is to set up a stack which we can push fixture actions onto and then iteratively pop them off after the test is done.

private Stack<Action> teardownActions;

public Stack<Action> TeardownActions => teardownActions ?? (teardownActions = new Stack<Action>());

As usual I am using SpecFlow in my setup; now I turn to the hooks to perform an action I need for each test – let’s say… make a folder.

FolderRootDto = new FolderDto
{
    FolderVisibility = 50,
    Name = $"{ScenarioInfo.Title}"
};

Folders.CreateNewFolder(FolderRootDto.Name, FolderRootDto);

Stack it out

When the create method is called and we actually perform the API POST request, we get the Id from the DTO (or whatever we need for the delete call) and push an action onto the stack – in this case another method that calls delete.

// client here is our pre-configured REST client
var newFolder = RestManager.Post<FolderDto>(client, BaseUrl + "/rest/endpoint/to/call", folderDto);

if (newFolder.StatusCode == HttpStatusCode.OK)
{
    var lastFolderCreatedId = newFolder.Response.Id;

    // Queue up the matching delete so the folder is removed after the scenario
    TreeState.Get(ScenarioContext).TeardownActions.Push(() =>
    {
        DeleteFolder(lastFolderCreatedId);
    });
}

Now, before our browser even fires up, I have a folder to perform a test in. If we get our teardown action stack to fire after the scenario, this newly created folder gets ripped out afterwards.

var nextAction = TeardownActions.Pop();

while (nextAction != null)
{
    nextAction();

    nextAction = null;
    if (TeardownActions.Count > 0)
    {
        nextAction = TreeState.Get(scenarioContext).TeardownActions.Pop();
    }
}
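For completeness, here is a sketch of where that loop might live; the hook method name is hypothetical, and this condensed version assumes the stack is only drained here:

[AfterScenario]
public void TearDownFixtures()
{
    var teardownActions = TreeState.Get(ScenarioContext).TeardownActions;

    // Unwind everything the test pushed, most recent action first
    while (teardownActions.Count > 0)
    {
        teardownActions.Pop()();
    }
}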

As these actions are performed in milliseconds, this can drastically reduce the runtime of a Selenium test that might otherwise be doing all the foundation work, or reduce the overhead of getting your fixture setup in place, whether that involves build steps or database restores.



Automated Responsive Layout Testing with Galen Framework and Java

A short time ago Joe Colantonio wrote a good piece about the differences between Applitools and Galen. I had not heard of Galen Framework before, so I started to look at the CLI version. Seeing some value for a project, I began to ponder how I could use it to get some high-level feedback on a responsive site, and get it fast.

So off I went looking for the C# bindings, only to find there are none, so I decided to dip my toes in with Java as it is similar to C# (you can also use JavaScript).

To my surprise, I had something I could demo to my team in a couple of hours, even though I had never touched Java or the IDE I downloaded (IntelliJ Community Edition). So what happened next?

I cloned the example Java tests from the Galen repo located here. In order to build the project I had to install and reference a JRE, and that was that.

It was then just a case of hacking the example tests to suit what I was doing. Galen uses Selenium to spin up a driver of your choice and set the viewport size accordingly. If you are familiar with Selenium (I imagine most people coming here are) then it’s easy enough to manipulate it to do your logins and such; it is quite similar to C#.
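For example, a login in Java looks much like it would in C# – the element ids here are made up, and getDriver() comes from the sample project’s GalenTestBase:

getDriver().findElement(By.id("Email")).sendKeys("email@someDomain.com");
getDriver().findElement(By.id("Password")).sendKeys("secretPassword");
getDriver().findElement(By.id("Password")).submit();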

The DSL

Galen has its own DSL; it’s not quite Gherkin, but it is easy to pick up. You essentially declare the objects on the page you are interested in and write tests that say element A is 5px below element B, and that this holds on all the devices you have specified. All of which is declared in a *.spec or *.gspec file like so:

@objects
    banner-block        css     #HLHelp > header > div > h1
    search-box          id      searchBox
    side-block          css     #HLHelp > header > aside
    main-section        id      main
    header              css     #HLHelp > header
    getting-started     css     #main > article > section:nth-child(1)
    network             css     #main > article > section:nth-child(2)

= Header =
    @on mobile,tablet,laptop
        search-box:
            below banner-block width 15 to 30 px

    @on desktop
        search-box:
            below banner-block width 30 to 35 px

= Main =
    @on mobile,tablet,laptop
        getting-started:
            below header width 20px

    @on desktop
        getting-started:
            below header width 30px

    @on tablet, laptop
        getting-started:
            left-of network  width 5 to 10 px

The device names you can see are mapped from the GalenTestBase file (more on that soon). And if you get stuck with the options available or the syntax, you will be glad to know there is some very good documentation.

Devices / Viewports

Devices and their viewport sizes are easy to declare too:

@DataProvider(name = "devices")
public Object [][] devices () {
    return new Object[][] {
            {new TestDevice("mobile", new Dimension(360, 640), asList("mobile"))},
            {new TestDevice("tablet", new Dimension(768, 1024), asList("tablet"))},
            {new TestDevice("laptop", new Dimension(1366, 720), asList("laptop"))},
            {new TestDevice("desktop", new Dimension(1920, 1080), asList("desktop"))}
    };
}

Because Galen is Selenium-based, you can of course run against your existing Grid, Sauce Labs or BrowserStack configurations.

The Tests

Define a test name, load a URL, do some Selenium interactions (if required) and then check the layout from your spec file:

@Test(dataProvider = "devices")
public void popDown_shouldLookGood_onDevice(TestDevice device) throws IOException {
    load("/");
    getDriver().findElement(By.id("searchBox")).sendKeys("WiFi");
    checkLayout("/specs/loginPage.spec", device.getTags());
}

The Output

After it has run, Galen produces an HTML report, which is very handy, and also produces heat maps for the elements declared in your gspec file. The Galen website has a good example HTML report here.

So what’s left? Hook it up to your CI and away you go. Although, if you plan on using this for more than something high level, you may want to structure your solution accordingly and implement a more page-object-style approach.

Summary

Galen has its uses and may be a viable high-level alternative if you cannot get hold of an Applitools license, but as Joe states in his article there are many differences between the two. Although Galen has a built-in image comparison tool, I have no idea how to set it up yet. Either way, I got something quick (if dirty) working in no time, and I found myself quite pleased and surprised with how easy it was to use.



