Automated Tester .science

Automated Responsive Layout Testing with Galen Framework and Java

A short time ago Joe Colantonio wrote a good piece about the differences between Applitools and Galen. I had not heard of Galen Framework before, so I started to look at the CLI version. Seeing some value for a project, I began to ponder how I could use it to get high-level feedback on a responsive site, and get it fast.

So off I went looking for the C# bindings, only to find there are none. I decided to dip my toes in with Java, as it is similar to C# (though you can also use JavaScript).

To my surprise I had something I could demo to my team in a couple of hours, even though I had never touched Java or the IDE I downloaded (IntelliJ Community Edition). So what happened next?

I cloned the example Java tests from the Galen repo located here. In order to build the project I had to install and reference a JDK, and that was that.

It was then just a case of hacking the example tests to suit what I was doing. Galen uses Selenium to spin up a driver of your choice and set the viewport size accordingly. If you are familiar with Selenium (I imagine most people coming here are) then it's easy enough to manipulate it to do your logins and such; it is quite similar to C#.
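For illustration, here is a minimal sketch of spinning up a driver at a given viewport size; the class and method names are mine rather than anything from the Galen examples:

import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class DriverFactory {
    // Spin up a local Chrome driver and size the window to the viewport
    // that Galen will check the layout against.
    public static WebDriver createDriver(Dimension viewport) {
        WebDriver driver = new ChromeDriver();
        driver.manage().window().setSize(viewport);
        return driver;
    }
}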

The DSL

Galen has its own DSL. It's not quite Gherkin, but it is easy to pick up. You essentially declare the objects on the page you are interested in, then write tests that say, for example, element A is 5px below element B, and that this holds on all the devices you have specified. All of this is declared in a *.spec or *.gspec file like so:

@objects
    banner-block        css     #HLHelp > header > div > h1
    search-box          id      searchBox
    side-block          css     #HLHelp > header > aside
    main-section        id      main
    header              css     #HLHelp > header
    getting-started     css     #main > article > section:nth-child(1)
    network             css     #main > article > section:nth-child(2)

= Header =
    @on mobile,tablet,laptop
        search-box:
            below banner-block width 15 to 30 px

    @on desktop
        search-box:
            below banner-block width 30 to 35 px

= Main =
    @on mobile,tablet,laptop
        getting-started:
            below header width 20px

    @on desktop
        getting-started:
            below header width 30px

    @on tablet, laptop
        getting-started:
            left-of network  width 5 to 10 px

The device names you can see are mapped in the GalenTestBase file; more on that soon. And if you get stuck with the options available or the syntax, you will be glad to know there is some very good documentation.

Devices / Viewports

Devices and their viewport sizes are easy to declare too:

@DataProvider(name = "devices")
public Object [][] devices () {
    return new Object[][] {
            {new TestDevice("mobile", new Dimension(360, 640), asList("mobile"))},
            {new TestDevice("tablet", new Dimension(768, 1024), asList("tablet"))},
            {new TestDevice("laptop", new Dimension(1366, 720), asList("laptop"))},
            {new TestDevice("desktop", new Dimension(1920, 1080), asList("desktop"))}
    };
}
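For reference, the TestDevice wrapper used in that data provider is roughly this shape; I have reconstructed it from how it is used here and in the test further down, so treat it as a sketch:

import java.util.List;
import org.openqa.selenium.Dimension;

public class TestDevice {
    private final String name;          // labels the test run
    private final Dimension screenSize; // viewport the driver gets resized to
    private final List<String> tags;    // matched against the @on tags in the gspec

    public TestDevice(String name, Dimension screenSize, List<String> tags) {
        this.name = name;
        this.screenSize = screenSize;
        this.tags = tags;
    }

    public String getName() { return name; }
    public Dimension getScreenSize() { return screenSize; }
    public List<String> getTags() { return tags; }
}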

Because Galen is Selenium based, you can of course run against your existing Grid, SauceLabs or BrowserStack configurations.
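For example, something along these lines would send the session to a hub instead of a local browser (the hub URL is a placeholder):

import java.net.MalformedURLException;
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridDriverFactory {
    // Point this at your own grid hub, or a SauceLabs/BrowserStack endpoint.
    public static WebDriver createRemoteDriver() throws MalformedURLException {
        return new RemoteWebDriver(
                new URL("http://your-grid-hub:4444/wd/hub"),
                DesiredCapabilities.chrome());
    }
}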

The Tests

Define a test name, load a URL, do some Selenium interactions (if required) and then check the layout from your spec file:

@Test(dataProvider = "devices")
public void popDown_shouldLookGood_onDevice(TestDevice device) throws IOException {
    load("/");
    getDriver().findElement(By.id("searchBox")).sendKeys("WiFi");
    checkLayout("/specs/loginPage.spec", device.getTags());
}

The Output

After it has run, Galen produces a handy HTML report and also generates heat maps for the elements declared in your gspec file. The Galen website has a good example HTML report here.

So what's left? Hook it up to your CI and away you go. Although if you plan on using this more extensively than as something high level, you may want to structure your solution accordingly and implement a page object approach.
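As a rough sketch of that idea, the Selenium interaction from the test above could live behind a page object like this (names are mine):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class HelpPage {
    private final WebDriver driver;

    public HelpPage(WebDriver driver) {
        this.driver = driver;
    }

    // Keeps the selector in one place so the layout tests stay readable.
    public void searchFor(String term) {
        driver.findElement(By.id("searchBox")).sendKeys(term);
    }
}

The test body then becomes new HelpPage(getDriver()).searchFor("WiFi") followed by the checkLayout call.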

Summary

Galen has its uses and may be a viable high-level alternative if you cannot get hold of an Applitools license, but as Joe states in his article there are many differences between the two. Although Galen has a built-in image comparison tool, I have no idea how to set this up yet. Either way, I got something quick (if dirty) working in no time, and furthermore I found myself quite pleased and surprised with how easy it was to use.



Concurrent Test Running with Specflow and NUnit 3

A little while back I wrote a post on achieving concurrent or parallel test running with Selenium, Specflow and NUnit2, but what about NUnit3? Let's have a look at that; thankfully it is a bit simpler than running an external build script as we did previously.

First up, according to the Specflow docs, open up the AssemblyInfo.cs file in your project and add the following line:

[assembly: Parallelizable(ParallelScope.Fixtures)]

Next we need to set up and tear down our browser and config at the Scenario Hook level; doing it in a Test Run Hook will be problematic as it is static. Your Hooks might look similar to this:

[Binding]
public class ScenarioHooks
{
    private readonly IObjectContainer objectContainer;
    private IWebDriver Driver;

    public ScenarioHooks(IObjectContainer objectContainer)
    {
        this.objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void BeforeScenario()
    {
        var customCapabilitiesIE = new DesiredCapabilities();
        customCapabilitiesIE.SetCapability(CapabilityType.BrowserName, "internet explorer");
        customCapabilitiesIE.SetCapability(CapabilityType.Platform, new Platform(PlatformType.Windows));
        customCapabilitiesIE.SetCapability("webdriver.ie.driver", @"C:\tmp\webdriver\iedriver\iedriver_3.0.0_Win32bit.exe");

        // The hub address is a placeholder - point this at your own grid.
        Driver = new RemoteWebDriver(new Uri("http://XXX.XXX.XXX.XXX:4444/wd/hub"), customCapabilitiesIE);

        // Register the driver so Specflow can inject it into step definitions.
        objectContainer.RegisterInstanceAs<IWebDriver>(Driver);
    }

    [AfterScenario]
    public void AfterScenario()
    {
        // Quit first to end the browser session cleanly, then dispose.
        Driver.Quit();
        Driver.Dispose();
    }
}

You can see from the browser instantiation that we are sending the tests to a Selenium Grid hub, so as a precursor to running the tests you will need suitable infrastructure to run a grid, or you could configure it to go off to SauceLabs or BrowserStack.

Assuming the hub and nodes are configured correctly, when your build process runs the tests the hub will farm them out by feature file to achieve concurrent test running (for other options see the parallel scopes sketched below), and that's it! Much nicer.
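For reference, here is a sketch of the scope options (standard NUnit3 attributes; the commented lines are alternatives, and LevelOfParallelism optionally caps the worker count). With Specflow, check the docs before going finer-grained than fixtures:

using NUnit.Framework;

// Fixtures is what we used above - it farms tests out per feature file.
[assembly: Parallelizable(ParallelScope.Fixtures)]
// [assembly: Parallelizable(ParallelScope.Children)] // parallelise individual tests
// [assembly: Parallelizable(ParallelScope.All)]      // fixtures and their children
[assembly: LevelOfParallelism(4)] // optional: cap the number of concurrent workers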



Measuring Speed and Performance with Sitespeed.io

I recently came across a fantastic open source speed and performance tool called Sitespeed and was pretty amazed by both what it can do and how it presents the results to you.

It has its foundations in some other cool stuff you may have heard of, like YSlow, PhantomJS and Bootstrap, and makes good use of WebPageTest too. It seems to support pretty much every platform as well; I'm a .NET/Windows guy so I'll be working with that. Let's have a look:

Install

Install via npm; if you don't have Node.js, get it here.

$ npm install -g sitespeed.io

Configure and Run

So, to get some metrics out of something, let's run a simple test. Open a command prompt and run the following:

sitespeed.io.cmd -u http://www.google.co.uk -d 0 -b chrome -v --viewPort 1920x1080 --name Google_Report

If you're a Windows user like me, don't forget the .cmd on the end of the sitespeed.io initialiser; it's an easy mistake to make! Let sitespeed.io do its thing. In this particular instance we are crawling Google to a depth of 0 (so just its landing page, for brevity) with Chrome; the viewport size is set to 1920x1080, -v gives verbose stdout, and --name gives the report a name.

Running sitespeed.io

Hey presto, a performance dashboard:

Sitespeed.io report

Full example report is here.

We're not really measuring against anything here, just getting something pretty to look at. sitespeed.io recommends we create a Performance Budget to measure against, which sounds like good advice to me. If you don't know much about the topic, they even recommend some great articles to broaden your mind and follow best practice; to quote their site:

“Have you heard of a performance budget? If not, please read the excellent posts by Tim Kadlec Setting a performance budget and Fast enough. Also read Daniel Malls How to make a performance budget. After that, continue setup sitespeed.io :)”

Add this into your parameters and your report will illustrate what is falling in and out of your budget. A minimal sketch of what that might look like follows (the --budget flag takes a JSON file in the 3.x releases; the keys shown are illustrative, so check the budget documentation for your version).
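sitespeed.io.cmd -u http://www.google.co.uk -d 0 -b chrome --budget myBudget.json --name Google_Report

where myBudget.json holds your thresholds, for example:

{
  "timings": {
    "speedIndex": 1000
  }
}

So what else can we do with sitespeed.io?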

  • Add a list of URLs to test against
  • Add authentication if you need to log in somewhere (--requestHeaders cookie.json, for example)
  • Drill down and get HAR files, maybe even convert them to jmx for stress testing
  • Get screenshots
  • Get results in different formats (HTML/XML and such)
  • Do multiple passes for comparison
  • Compare multiple sites
  • Throttle connections
  • Supports Selenium for running with different browsers
  • Supports Google Page Speed Insights
  • Supports Docker

In fact, there is way too much to list out here; luckily sitespeed.io has some very good documentation on configuration, so it should be easy to fine-tune it to your needs. Furthermore, if you use some other cool stuff like Graphite, you can output the results there, along the lines of the sketch below.
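As a sketch, with the host being a placeholder (the flags are from the 3.x docs, so verify them against your version; 2003 is Graphite's default plaintext port):

sitespeed.io.cmd -u http://www.google.co.uk -d 0 -b chrome --graphiteHost graphite.example.com --graphitePort 2003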

Summary

The benefits of this should be clear: we can hook up a command in a build step on a CI server to either output some funky looking Graphite metrics or generate performance dashboards regularly, catching performance issues before they get released.

I don't yet know if this tool can be incorporated into existing automation tests with Selenium… or really if you would even want to do that. I had been looking into using Jan Odvarko's HAR Export Trigger to compare against YSlow scores, but sitespeed.io seems to kill lots of birds with one stone.

My first impressions are that sitespeed.io is a very powerful, easy to configure, cross-platform performance tool that displays data in a way that is easy to understand.

Bootnote

Keep your eyes peeled for sitespeed.io 4.0, which will have even more cool stuff going on; it is due to be released in a few weeks' time.



Specflow Reports with NUnit3

This post will look at setting up Specflow reports to work with NUnit3. When you install Specflow via NuGet, there is a tools folder which contains specflow.exe. If we run a batch process after the build on the CI server which points at the NUnit3 TestResult.xml, we can generate a report and reference it as an artefact in TeamCity (and similar CI software).

But there is a catch which isn't (at the time of writing) explained in the Specflow reporting documentation. Currently Specflow (v2.0.0) cannot interpret the NUnit3 TestResult output, so to counter this restriction we need to configure NUnit3 to output its TestResult.xml in the NUnit2 format. A full list of NUnit3 console options can be found here.

nunit3-console.exe --where "cat==SmokeTests && cat!=ExcludeFromCI" ^
  --out=TestResult.txt ^
  --result="TestResult.xml;format=nunit2" ^
  C:\Path\To\Acceptance.Tests.dll

With this in place, it's now just a case of rigging up the Specflow command to point at the test result data, telling it to produce a report, and specifying where to output the HTML.

specflow.exe nunitexecutionreport Your.AcceptanceTest.csproj /testResult:TestResult.xml /out:MyResult.html

Once you have the TestResult.xml and the report output generating where you want, it's easy to wrap the command up into a batch file and call it from a build step after your tests have executed. Make this output an artefact and then you have metrics in place for your nightly and/or other test builds.
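A minimal sketch of such a batch file (the specflow.exe path is a placeholder; yours depends on your NuGet packages folder and Specflow version):

rem GenerateSpecflowReport.bat - called in a build step after the tests have run
packages\SpecFlow.2.0.0\tools\specflow.exe nunitexecutionreport Your.AcceptanceTest.csproj /testResult:TestResult.xml /out:MyResult.html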

Demo Specflow Report



Pickles Reports – Living Document Generation

Adding Pickles to your project is really useful for presenting your Gherkin-based tests in an easy-to-read, searchable format with some funky metrics added in. If you get your CI build to publish the output as an artifact, then on each build you will always have up-to-date documentation of what is being tested.

You can configure it numerous ways, via MSBuild, PowerShell, GUI or Command Line.

In this article I will be setting up via the command line and running a batch file as a build step in TeamCity. First off we need to install Pickles Command Line via NuGet. NUnit 3 is my test runner.
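From the NuGet Package Manager Console, that is the following (the package id is taken from the tools path used below):

Install-Package Pickles.CommandLine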

Once built we can start making use of it by running the executable with our desired parameters:

  • --feature-directory: where your feature files live, relative to the executable
  • --output-directory: where you want your Pickles report to generate
  • --documentation-format: documentation format (DHTML, HTML, Word, Excel or JSON)
  • --link-results-file: path to the NUnit Results.xml (this allows graphs and metrics)
  • --test-results-format: nunit, nunit3, xunit, xunit2, mstest, cucumberjson, specrun, vstest

Together it looks like the below, which is put into a batch file and called in a closing build step.

.\packages\Pickles.CommandLine.2.5.0\tools\pickles.exe ^
  --feature-directory=..\..\Automation\Tests\Project.Acceptance.Tests\Features ^
  --output-directory=Documentation ^
  --link-results-file=..\..\..\TestResult.xml ^
  --test-results-format=nunit ^
  --documentation-format=dhtml

I use the nunit format over nunit3 because I have set my NUnit3 runner to output NUnit2-formatted results, so that Specflow can consume the same output and produce more metrics on the build. With the results file hooked in you can get some whizzy graphics:

Example Pickles report

Pickles is a great tool for BDD and should help bridge the gap between "the Business" and IT. A full sample report can be found here. You can find extensive documentation for the various ways to set up Pickles here.

 



