Automated Tester .science


Automated Responsive Layout Testing with Galen Framework and Java

A short time ago Joe Colantonio wrote a good piece about the differences between Applitools and Galen. I had not heard of Galen Framework before, so I started to look at the CLI version. Seeing some value for a project, I began to ponder how I could use it to get some high-level feedback on a responsive site, and get it fast.

So off I went looking for the C# bindings, only to find there are none, so I decided to dip my toes in with Java as it is similar to C# (though you can also use JavaScript).

To my surprise I had something I could demo to my team in a couple of hours, even though I had never touched Java or the IDE I downloaded (IntelliJ Community Edition). So what happened next?

I cloned the example Java tests from the Galen repo located here. In order to build the project I had to install and reference a JRE, and that was that.

It was then just a case of hacking the example tests to suit what I was doing. Galen uses Selenium to spin up a driver of your choice and set the viewport size accordingly. If you are familiar with Selenium (I imagine most people coming here are) then it’s easy enough to manipulate it to do your logins and such – the code is quite similar to C#.

The DSL

Galen has its own DSL; it’s not quite Gherkin, but it is easy to pick up. You essentially declare the objects on the page you are interested in and write tests that say, for example, element A is 5px below element B, and that this holds on all the devices you have specified. All of this is declared in a *.spec or *.gspec file like so:

@objects
    banner-block        css     #HLHelp > header > div > h1
    search-box          id      searchBox
    side-block          css     #HLHelp > header > aside
    main-section        id      main
    header              css     #HLHelp > header
    getting-started     css     #main > article > section:nth-child(1)
    network             css     #main > article > section:nth-child(2)

= Header =
    @on mobile,tablet,laptop
        search-box:
            below banner-block 15 to 30px

    @on desktop
        search-box:
            below banner-block 30 to 35px

= Main =
    @on mobile,tablet,laptop
        getting-started:
            below header 20px

    @on desktop
        getting-started:
            below header 30px

    @on tablet, laptop
        getting-started:
            left-of network 5 to 10px

The device names you can see are mapped in the GalenTestBase file – more on that soon. And if you get stuck with the available options or the syntax, you will be glad to know there is some very good documentation.

Devices / Viewports

Devices and their viewport sizes are easy to declare too:

@DataProvider(name = "devices")
public Object[][] devices() {
    return new Object[][] {
            {new TestDevice("mobile", new Dimension(360, 640), asList("mobile"))},
            {new TestDevice("tablet", new Dimension(768, 1024), asList("tablet"))},
            {new TestDevice("laptop", new Dimension(1366, 720), asList("laptop"))},
            {new TestDevice("desktop", new Dimension(1920, 1080), asList("desktop"))}
    };
}
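
The TestDevice objects above are instances of a small holder class from the sample project. In case you don’t have the repo to hand, here is a minimal sketch of it – the getter names are my assumption, so check them against the class in the repo you cloned:

import java.util.List;
import org.openqa.selenium.Dimension;

public class TestDevice {
    private final String name;          // human-readable device name, also used in reporting
    private final Dimension screenSize; // viewport size to set on the driver
    private final List<String> tags;    // Galen tags, matched against @on in the gspec

    public TestDevice(String name, Dimension screenSize, List<String> tags) {
        this.name = name;
        this.screenSize = screenSize;
        this.tags = tags;
    }

    public String getName() { return name; }
    public Dimension getScreenSize() { return screenSize; }
    public List<String> getTags() { return tags; }
}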

Because Galen is Selenium based, you can of course run against your existing Grid, Sauce Labs or BrowserStack configurations.

The Tests

Define a test name, load a URL, do some Selenium interactions (if required) and then check the layout from your spec file:

@Test(dataProvider = "devices")
public void popDown_shouldLookGood_onDevice(TestDevice device) throws IOException {
    // Load the page, do any Selenium interaction, then assert the layout spec
    load("/");
    getDriver().findElement(By.id("searchBox")).sendKeys("WiFi");
    checkLayout("/specs/loginPage.spec", device.getTags());
}
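
The load(), getDriver() and checkLayout() calls all come from the test base class. For completeness, here is a minimal sketch of that plumbing, assuming the TestNG base class from Galen’s Java support library (GalenTestNgTestBase) as used in the sample project – the base URL and browser choice are placeholders:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import com.galenframework.testng.GalenTestNgTestBase;

public abstract class GalenTestBase extends GalenTestNgTestBase {
    // Placeholder – point this at the site under test
    private static final String BASE_URL = "http://example.com";

    @Override
    public WebDriver createDriver(Object[] args) {
        // Swap in RemoteWebDriver here to target a Grid, Sauce Labs or BrowserStack
        WebDriver driver = new FirefoxDriver();
        if (args != null && args.length > 0 && args[0] instanceof TestDevice) {
            // Resize the window to the device under test before any checks run
            driver.manage().window().setSize(((TestDevice) args[0]).getScreenSize());
        }
        return driver;
    }

    // Convenience wrapper so tests can call load("/some/path")
    public void load(String uri) {
        getDriver().get(BASE_URL + uri);
    }
}

checkLayout() itself is inherited from the Galen base class, and is what runs your gspec file against the current state of the driver.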

The Output

After it has run, Galen produces an HTML report, which is very handy, and also produces heat maps for the elements declared in your gspec file. The Galen website has a good example HTML report here.

So what’s left? Hook it up to your CI and away you go. Although if you plan on using this more extensively than something high level, you may want to structure your solution accordingly and implement a page object approach.

Summary

Galen has its uses and may be a viable high-level alternative if you cannot get hold of an Applitools license, but as Joe states in his article there are many differences between the two. Although Galen has a built-in image comparison tool, I have no idea how to set this up yet. Either way, I got something quick (if dirty) working in no time, and I found myself quite pleased and surprised with how easy it was to use.



Measuring Speed and Performance with Sitespeed.io

I recently came across a fantastic open source speed and performance tool called sitespeed.io and was pretty amazed by both what it can do and how it presents the results to you.

It has its foundations in some other cool stuff you may have heard of, like YSlow, PhantomJS and Bootstrap, and makes good use of WebPageTest too. It seems to support pretty much every platform as well; I’m a .NET/Windows guy so I’ll be working with that. Let’s have a look:

Install

Install via npm – if you don’t have Node.js, get it here.

$ npm install -g sitespeed.io

Configure and Run

So, to get some metrics out of something, let’s run a simple test. Open a command prompt and run the following:

sitespeed.io.cmd -u http://www.google.co.uk -d 0 -b chrome -v --viewPort 1920x1080 --name Google_Report

If you’re a Windows user like me, don’t forget the .cmd on the end of the sitespeed.io initialiser – it’s an easy mistake to make! Let sitespeed.io do its thing. In this particular instance we are crawling Google to a depth of 0 (so just its landing page, for brevity) using Chrome; the viewport size is set to 1920x1080, -v gives verbose stdout, and --name gives the report a name.

[Image: running sitespeed.io]

Hey presto, a performance dashboard:

[Image: sitespeed.io report]

Full example report is here.

We’re not really measuring against anything here, just getting something pretty to look at. sitespeed.io recommends we create a Performance Budget to measure against, which sounds like good advice to me. If you don’t know much about the idea, they even recommend some great articles to broaden your mind and follow best practice; to quote their site:

“Have you heard of a performance budget? If not, please read the excellent posts by Tim Kadlec Setting a performance budget and Fast enough. Also read Daniel Malls How to make a performance budget. After that, continue setup sitespeed.io :)”
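
To illustrate, a budgeted run might look something like the sketch below – treat both the --budget flag and the JSON schema as assumptions from my reading of the docs, and verify them against the version you install:

budget.json:

{
  "timings": {
    "speedIndex": 1000,
    "domContentLoadedTime": 500
  },
  "rules": {
    "default": 90
  }
}

sitespeed.io.cmd -u http://www.google.co.uk -d 0 -b chrome --budget budget.json --name Google_Report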

Add a budget like this into your parameters and your report will illustrate what is falling in and out of it. So what else can we do with sitespeed.io?

  • Add a list of URLs to test against
  • Add authentication if you need to login somewhere (--requestHeaders cookie.json for example)
  • Drill down and get HAR files, maybe even convert to jmx for stress testing
  • Get screenshots
  • Get results in different formats (HTML/ XML and such)
  • Do multiple passes for comparison
  • Compare multiple sites
  • Throttle connections
  • Supports Selenium for running with different browsers
  • Supports Google Page Speed Insights
  • Supports Docker

In fact, there is way too much to list out here – luckily sitespeed.io has some very good documentation on how to configure it, so it should be easy to fine-tune it to your needs. Furthermore, if you use some other cool stuff like Graphite, you can output the results there.
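
For example, something like the following should ship a run’s metrics to a Graphite instance – the host and port are placeholders, and the flag names are from my reading of the docs, so double-check them against your version:

sitespeed.io.cmd -u http://www.google.co.uk -b chrome --graphiteHost my.graphite.server --graphitePort 2003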

Summary

The benefits of this should be clear: we can hook up a command in a build step on a CI server to either output some funky-looking Graphite metrics or generate performance dashboards regularly, catching performance issues before they get released.

I don’t yet know if this tool can be incorporated into existing Selenium automation tests… or really if you would even want to do that. I had been looking into using Jan Odvarko’s HAR Export Trigger to compare against YSlow scores, but sitespeed.io seems to kill lots of birds with one stone.

My first impressions are that sitespeed.io is a very powerful, easy to configure, cross-platform performance tool that displays data in a way that is easy to understand.

Bootnote

Keep your eyes peeled for sitespeed.io 4.0, which will have even more cool stuff going on and is due to be released in a few weeks’ time.



Specflow Reports with NUnit3

This post will look at setting up Specflow reports to work with NUnit3. When you install Specflow via NuGet, there is a tools folder which contains specflow.exe – if we run a batch process after the build on the CI server which points at the NUnit3 TestResult.xml, we can generate a report and reference it as an artefact in TeamCity (and similar CI software).

But there is a catch, which isn’t (at the time of writing) explained in the Specflow reporting documentation. Currently Specflow (v 2.0.0) cannot interpret the NUnit3 TestResult output, so to counter this restriction we need to configure NUnit3 to output its TestResult.xml in the NUnit2 format. A full list of NUnit3 console options can be found here.

nunit3-console.exe --where "cat==SmokeTests && cat!=ExcludeFromCI" --out=TestResult.txt --result=TestResult.xml;format=nunit2 C:\Path\To\Acceptance.Tests.dll

Note the quotes around the --where expression: && is special to the Windows command interpreter, so the filter needs quoting to survive intact.

With this in place it’s now just a case of rigging up the Specflow command to point to the Test Result data and telling it to produce a report and specifying where to output the HTML report.

specflow.exe nunitexecutionreport Your.AcceptanceTest.csproj /testResult:TestResult.xml /out:MyResult.html

Once you have the TestResult.xml and the report output generating where you want, it’s easy to wrap the commands up into a batch file – a sketch follows below – and call it from a build step after your tests have executed. Make the output an artefact and then you have metrics in place for your nightly and/or other test builds.
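
For illustration, the wrapper might look something like this – every path here is hypothetical, so point them at wherever your runners, DLLs and project actually live:

@echo off
REM 1) Run the tests, writing the result in NUnit2 format so specflow.exe can parse it
"C:\Tools\NUnit3\nunit3-console.exe" --where "cat==SmokeTests && cat!=ExcludeFromCI" ^
  --out=TestResult.txt --result=TestResult.xml;format=nunit2 ^
  C:\Path\To\Acceptance.Tests.dll

REM 2) Turn the NUnit2-formatted result into the Specflow HTML report
"C:\Path\To\packages\SpecFlow.2.0.0\tools\specflow.exe" nunitexecutionreport ^
  Your.AcceptanceTest.csproj /testResult:TestResult.xml /out:MyResult.html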

[Image: demo Specflow report]



Setting Up Allure with NUnit in TeamCity

This article is a follow-up to my previous article on setting up Allure locally with NUnit. I made a brief mention of the Allure TeamCity plugin there, but encountered difficulties in getting it to work and didn’t have much time to look into getting around them.

Let’s get stuck in with a brief recap and some initial steps to take.

  • Install NUnit 2.6.4 and the compatible Allure NUnit adapter into your NUnit bin/addins folder
  • Edit the config.xml in your bin/addins folder so it writes the XML out to a known location (I’m using C:\AllureXML)

[Image: generated Allure XML]

  • Run some tests in the GUI or NUnit Console to generate some XML – check that XML is being generated before continuing
  • Generate a report with the Command Line Interface (this will verify your JAVA_HOME system variable is present and correct) – see the example below
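
For example, from a command prompt (the exact command and flags depend on the allure-commandline version you downloaded, so check its help output before relying on this):

allure generate C:\AllureXML -o C:\AllureReport

If a report lands in C:\AllureReport and opens in a browser, your XML, your Java install and JAVA_HOME are all in good shape.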

Allure TeamCity Setup and Configuration

  • Install the plugin and restart the build server
  • Copy the allure-commandline.zip from the latest release to <TeamCity Data Directory>/plugins/.tools – no server restart is needed for this step
  • Your build needs to be generating XML, we tested this earlier
  • Open your build config settings, create a new build step for the Allure Report
  • Specify a relative path to your XML folder (in this example it is ../../../AllureXML)
  • Use the given example for test reports or similar

[Image: Allure report build step]

  • Edit the build step running your tests – I have not been able to get this to work with the NUnit runner type (it just won’t generate any XML); however, we can run a script to spin up the NUnit Console and run the tests that way
  • Choose PowerShell as the runner, with x86 bitness
  • Script: Source code
  • The script:
& 'C:\Program Files (x86)\NUnit 2.6.4\bin\nunit-console-x86.exe' /framework=net-4.0 C:\BuildAgent\work\a0569b5caa0eb74d\Automation\Tests\Example.Automated.Selenium.Tests\Example.Automated.Selenium.Tests\bin\Release\Example.Automated.Selenium.Tests.dll

[Image: test run build step for NUnit and Allure]

It is a good idea to test the script locally on the build machine to ensure it is correct before putting it into your TeamCity build step.

Notice a couple of things:

  • We run the x86 console, matching the bitness specified in the PowerShell run mode of the build step
  • We have passed in “/framework=net-4.0” – this is important and in line with the current documentation

When run, this should kick off the tests via the console, generate the necessary XML and publish the artifacts in TeamCity, giving you a report. That is great for retrieving daily summaries of the builds you run overnight, and makes a good health check on your continuous integration builds.

Further Help

Please keep the installation notes at the forefront of your mind if you encounter difficulties.

If you wish to pass different parameters to your tests, you can find a full list below or by running “nunit-console-x86.exe /help”:

NUNIT-CONSOLE [inputfiles] [options]

Runs a set of NUnit tests from the console.

You may specify one or more assemblies or a single project file of type .nunit.

Options:

/fixture=STR               Test fixture or namespace to be loaded (Deprecated) (Short format: /load=STR)
/run=STR                   Name of the test case(s), fixture(s) or namespace(s) to run
/runlist=STR               Name of a file containing a list of the tests to run, one per line
/config=STR                Project configuration (e.g.: Debug) to load
/result=STR                Name of XML result file (Default: TestResult.xml) (Short format: /xml=STR)
/xmlConsole                Display XML to the console (Deprecated)
/noresult                  Suppress XML result output (Short format: /noxml)
/output=STR                File to receive test output (Short format: /out=STR)
/err=STR                   File to receive test error output
/work=STR                  Work directory for output files
/labels                    Label each test in stdOut
/trace=X                   Set internal trace level: Off, Error, Warning, Info, Verbose
/include=STR               List of categories to include
/exclude=STR               List of categories to exclude
/framework=STR             Framework version to be used for tests
/process=X                 Process model for tests: Single, Separate, Multiple
/domain=X                  AppDomain Usage for tests: None, Single, Multiple
/apartment=X               Apartment for running tests: MTA (Default), STA
/noshadow                  Disable shadow copy when running in separate domain
/nothread                  Disable use of a separate thread for tests
/basepath=STR              Base path to be used when loading the assemblies
/privatebinpath=STR        Additional directories to be probed when loading assemblies, separated by semicolons
/timeout=X                 Set timeout for each test case in milliseconds
/wait                      Wait for input before closing console window
/nologo                    Do not display the logo
/nodots                    Do not display progress
/stoponerror               Stop after the first test failure or error
/cleanup                   Erase any leftover cache files and exit
/help                      Display help (Short format: /?)



