Automated Tester




Parallel Test Execution with Specflow

Running Specflow scenarios in parallel is a tricky problem to solve due to the nature of how the tests are written. Achieving this is much simpler if we write Selenium tests in a unit-style fashion, but as that isn't the case for Specflow with its Gherkin syntax, we will look at executing them with a PowerShell build script against a BrowserStack grid.

Thanks to Kenneth Truyers for his great article, which really helped me to achieve this. We will look at executing parallel Specflow tests against a BrowserStack grid.

Set Up

In our initialisation method we need to specify our remote driver, passing in our varying desired capabilities from config.

private static void SetupCloudDriver()
{
    var capabilities = new DesiredCapabilities();

    capabilities.SetCapability(CapabilityType.Version, ConfigurationManager.AppSettings["version"]);
    capabilities.SetCapability("os", ConfigurationManager.AppSettings["os"]);
    capabilities.SetCapability("os_version", ConfigurationManager.AppSettings["os_version"]);
    capabilities.SetCapability("browserName", ConfigurationManager.AppSettings["browser"]);

    capabilities.SetCapability("browserstack.user", ConfigurationManager.AppSettings["browserstack.user"]);
    capabilities.SetCapability("browserstack.key", ConfigurationManager.AppSettings["browserstack.key"]);
    capabilities.SetCapability("browserstack.local", false);
    capabilities.SetCapability("browserstack.debug", true);

    capabilities.SetCapability("project", "Project Name");

    Driver = new RemoteWebDriver(new Uri(ConfigurationManager.AppSettings["browserstack.hub"]), capabilities);
    ScenarioContext.Current["driver"] = Driver;
}

Config for Cross Browser spin up

One of our config files might look like so:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <appSettings>
        <add key="browser" value="Safari" xdt:Transform="Insert"/>
        <add key="os" value="osx" xdt:Transform="Insert"/>
        <add key="version" value="8" xdt:Transform="Insert"/>
        <add key="os_version" value="Yosemite" xdt:Transform="Insert"/>
    </appSettings>
</configuration>

This tells Browserstack which system to spin up and run the tests against for one parallel instance; multiple config files with other systems are needed so they can all be executed, depending on your requirements. Furthermore, we need to set these up in Configuration Manager so that each config picks up the tests when the solution is built.
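For illustration, a second transform file for another parallel instance might target Chrome on Windows. The capability values below are examples only; substitute whichever systems you need:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <appSettings>
        <add key="browser" value="Chrome" xdt:Transform="Insert"/>
        <add key="os" value="Windows" xdt:Transform="Insert"/>
        <add key="version" value="45" xdt:Transform="Insert"/>
        <add key="os_version" value="8.1" xdt:Transform="Insert"/>
    </appSettings>
</configuration>
```

Each transform corresponds to a solution configuration, so one build configuration equals one Browserstack system.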

What This Enables

We can select what environment we wish to run against, run the test and see it appear in Browserstack (not in parallel) or run it locally using our own setup.


So now we have essentially got a single test running in the cloud (and our config still allows us to run locally if we choose), next we need to kick off a bunch of these against different systems.

The Build Script

We use a PowerShell build script to achieve the parallel part of this; here it is:

$solution = "Your.Testing.Solution.sln"

function Get-SolutionConfigurations($solution) {
    Get-Content $solution |
        Where-Object { $_ -match "(?<config>\w+)\|" } |
        %{ $Matches['config'] } |
        select -uniq
}

# Put MSBuild on the path
$frameworkDirs = @((Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSBuild\ToolsVersions\12.0" -Name "MSBuildToolsPath32")."MSBuildToolsPath32")
for ($i = 0; $i -lt $frameworkDirs.Count; $i++) {
    $dir = $frameworkDirs[$i]
    if ($dir -Match "\$\(Registry:HKEY_LOCAL_MACHINE(.*?)@(.*)\)") {
        $key = "HKLM:" + $matches[1]
        $name = $matches[2]
        $dir = (Get-ItemProperty -Path $key -Name $name).$name
        $frameworkDirs[$i] = $dir
    }
}

$env:path = ($frameworkDirs -join ";") + ";$env:path"

# Build every configuration in the solution
@(Get-SolutionConfigurations $solution) | foreach {
    Write-Host "Building for $_"
    msbuild $solution /p:Configuration=$_ /nologo /verbosity:quiet
}

# Give specflow.exe a config so it runs under .NET 4
New-Item "$(get-location)\packages\specflow.1.9.0\tools\specflow.exe.config" -type file -force -value "<?xml version=""1.0"" encoding=""utf-8"" ?> <configuration> <startup> <supportedRuntime version=""v4.0.30319"" /> </startup> </configuration>" | Out-Null

# Run each configuration's tests as a background job, in parallel
@(Get-SolutionConfigurations $solution) | foreach {
    Start-Job -ScriptBlock {
        param($configuration, $basePath)

        & $basePath\packages\NUnit.Runners.2.6.4\tools\nunit-console.exe /labels /out=$basePath\nunit_$configuration.txt /xml:$basePath\nunit_$configuration.xml /nologo /config:$configuration "$basePath/Your.Testing.Solution/bin/$configuration/Your.Testing.Solution.dll"
        & $basePath\packages\specflow.1.9.0\tools\specflow.exe nunitexecutionreport "$basePath\Your.Testing.Solution\Your.Testing.Solution.csproj" /out:$basePath\specresult_$configuration.html /xmlTestResult:$basePath\nunit_$configuration.xml /testOutput:nunit_$configuration.txt

    } -ArgumentList $_, $(get-location)
}
Get-Job | Wait-Job
Get-Job | Receive-Job

It obviously needs tweaking in areas to point at your solution, but what is it actually doing? It runs msbuild against each configuration and executes the test suite: our Safari-based config gets built and executed against BrowserStack, as do any other configurations. A test report for each config/system is then generated to let you know the outcome of that particular run.

Running the script should allow you to see parallel test execution against your desired systems in Browserstack:


Generated Feedback and Reporting

Reports generated after look similar to the below:


What Next?

You will undoubtedly encounter issues with Browserstack seeing your internal test environments; this is something you will need to consider. Consult the Browserstack documentation online regarding tunnelling or running Browserstack locally.

We can also throw in some automated visual checking to really get the ball rolling with Continuous Delivery. If you have some well-set-up Applitools Eyes base images, then visual checking on many systems at once is potentially worth thousands of checks.

Automation with AutoIt

This is a tool primarily used for automating Windows applications, but it can also be used in conjunction with Selenium to deal with any Windows dialogue boxes that Selenium on its own cannot manipulate. It can also interact with ActiveX dialogues. Generally speaking it is bad practice to implement this tool in your framework, as it will only work on Windows, but if, for example, you find yourself on a project where you have little or no other option, it can be very useful. I have personally used this for a legacy application that was Windows-only and relied heavily on ActiveX, but I would not recommend it for anything else!

Selenium on its own can do a lot with its builder actions and its ability to handle (to an extent) alert boxes, so don't bring this into your solution unless you really need it. It is good for legacy applications that are not very browser friendly.

First off, download and install AutoIt.


Rather than go through the more traditional route of authoring a script in AutoIt's SciTE editor (explained below), we can import AutoItX, a DLL/COM control which features a C# assembly. You still need to install AutoIt and also add the assembly to your references in Visual Studio.

Once set up we can start authoring C# with the goodies contained in AutoItX; let's dig in with an example.

public static void HandlePrintDialogue()
{
    au3.WinWait(AiWindows.PrintWindow, "", 10);
    au3.WinActivate(AiWindows.PrintWindow);
    au3.Send("{ENTER}");
}

This code handles IE's print dialogue window after the user has clicked a print button on the page or pressed CTRL + P. To begin with, we wait up to 10 seconds for a window with the title "Print" to appear on screen. Please bear in mind the window you are looking for may be named differently in different versions of Windows.

au3.WinWait(AiWindows.PrintWindow, "", 10);

Once found, we activate that window so it can be manipulated in some way.

au3.WinActivate(AiWindows.PrintWindow);
Then we send an Enter key press, because with careful observation we know Enter defaults to "Ok" and the print job will be started.

au3.Send("{ENTER}");
You can do a lot more with AutoItX, including making assertions on the content of Windows dialogues, handling downloads and more.
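As a hedged sketch of asserting on a dialogue's content, the window title and expected text below are purely illustrative, and it assumes the AutoItX3 COM assembly is referenced as AutoItX3Lib:

```csharp
// Sketch only: window names and expected text are hypothetical.
var au3 = new AutoItX3Lib.AutoItX3Class();

// WinWait returns 1 when the window appears within the timeout, 0 otherwise
if (au3.WinWait("Save As", "", 10) == 1)
{
    // Read the visible text of the dialogue and assert on it
    var dialogueText = au3.WinGetText("Save As", "");
    Assert.IsTrue(dialogueText.Contains("File name"),
        "Expected the Save As dialogue to mention 'File name'");
}
```

The same pattern works for checking download prompts or any other native dialogue Selenium cannot see.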

Please refer to the AutoItX documentation for a full list of functions.

Scite Editor

Open the SciTE script editor and type your script to deal with the dialogue/issue in question. In this example we only send keystrokes to the active window if it is titled "Login".

If WinActive("Login") Then
    Send("Username")
    Send("{TAB}")
    Send("Password")
    Send("{ENTER}")
EndIf

We can save this and compile it; AutoIt will create an *.exe which we can execute from our step definition in the Selenium code:

public void HandleLoginPrompt()
{
    // Launch the compiled AutoIt script (System.Diagnostics)
    Process.Start(@"C:\PathToFile\AILoginScript.exe");
}

Run the test and see the results.

Load Testing with Selenium and BrowserMob Proxy

In days of yore, Selenium had the capability to capture network traffic and manipulate it. This was taken out of later versions as the mantra for the project was to mimic user actions.

We can still harness this capability and do some other cool things along the way, such as blacklisting or whitelisting certain services, simulating slow connections, writing data out and producing metrics.

How To

We can only harness this capability by spinning our tests up via a proxy, and that is where our first tool comes in: BrowserMob Proxy. There is also a write-up on it, from which coincidentally I lifted some of the code: Ada The Dev – BrowserMob Proxy, the blog of one of the .NET gurus of BrowserMob.

When we are setting up our start-up method we need to instantiate the proxy and then spin up the browser:

// Supply the path to the BrowserMob Proxy batch file
Server server = new Server(@"C:\Users\\Desktop\BMP\bin\browsermob-proxy.bat");
server.Start();
Client client = server.CreateProxy();
client.NewHar("Load Test Numbers");
var seleniumProxy = new Proxy { HttpProxy = client.SeleniumProxy };
var profile = new FirefoxProfile();
profile.SetProxyPreferences(seleniumProxy);

// Spin up the browser and navigate to the page to retrieve performance stats for
Driver = new FirefoxDriver(profile);
Driver.Navigate().GoToUrl("http://www.google.com");

After we have started the proxy, we instruct the client to create a new HAR called "Load Test Numbers". A HAR is an HTTP Archive; more information can be found here: HAR File Spec v1.2.

After navigating to Google we need to get the performance statistics and then view them in some way. This is easy to do in debug mode, looking into variables or writing the content out to the console.

// Get the performance stats
HarResult harData = client.GetHar();

// Do whatever you want with the metrics here. Easy to persist
Log log = harData.Log;
Entry[] entries = log.Entries;

using (var file = new System.IO.StreamWriter("c:\\test.txt"))
{
    foreach (var entry in entries)
    {
        Request request = entry.Request;
        Response response = entry.Response;
        var url = request.Url;
        var time = entry.Time;
        var status = response.Status;

        Console.WriteLine("Url: " + url + " - Time: " + time + " Response: " + status);
        file.WriteLine("Url: " + url + " - Time: " + time + " Response: " + status);
    }
}

In the code above we are getting select elements from the HAR (URL, time and status code) and writing them out to the console and also to a text file.

Viewing the entire HAR

We can capture the JSON generated in the HAR file and write the HAR out to disk. Using this HAR we can use another tool to view it and its metrics. We can also convert the HAR to a JMX file, which creates a JMeter test plan for us.

Once we retrieve the performance stats we need to serialise the content:

// Get the performance stats
HarResult harData = client.GetHar();

// Serialise the HAR content to JSON and write it out to disk
var json = new JavaScriptSerializer().Serialize(harData);

using (var file = new System.IO.StreamWriter("c:\\test.har"))
{
    file.Write(json);
}

This writes a serialised HAR file out to file. If we paste the content into HAR Viewer we magically get a whole bunch of useful metrics:



Converting to JMX

This is as simple to do as pasting your HAR data into HAR Viewer: you can paste the same JSON into Flood.IO's Har2JMX and download the JMX. Now simply load it into JMeter, change the thread number, ramp-up and loops, then add your desired listeners and you are ready to roll.

Unfortunately there is currently no way of programmatically converting HAR to JMX in C#. The Flood.IO code is open source, however.


Exporting a HAR straight from the Browser

To do this you need Firebug and NetExport. Once installed, you can save all network traffic logged in Firebug to HAR format, and even view it in Firebug by pressing F12 and looking under the Net tab. Selecting Export >> Save As will get you your HAR.


Test Reports with Visual Studio, NUnit and Allure

Getting decent test reporting can be painful when working with Visual Studio, particularly with MS Test as the framework, as newer versions of Visual Studio don't appear to produce a *.trx file any more.

Using NUnit we can generate some fancy graphs and stats with the Allure framework.

  • Firstly we need to download and install NUnit
  • Secondly we need the Allure NUnit adaptor; follow the instructions here: Allure Framework Adaptor for NUnit
  • The Allure Command Line Interface is needed; get it from here: Allure CLI
  • The Java JDK needs to be installed and specified correctly in the JAVA_HOME environment variable


Allure produces an XML file which it can use to generate an HTML report. There is also an adaptor for MS Test, but it's an extra hoop to jump through as it converts a *.trx into XML, from which you then have to generate the report.

The NUnit adaptor generates the XML file to a directory when your tests are run with NUnit.

To generate the report, open cmd in your allure.bat directory and run the command:

allure report generate -v 1.4.1 path/to/directory/with/xml/files

Note: It is important you DO NOT paste this command in – type it out the long way – You can paste in the path to the XML files though.

You should see the following message:

Successfully generated report to [//Path to generated report]

Go to the directory and open the HTML report in Firefox as other browsers will not work with some of the Ajax requirements locally.

This is your generated report! You might also want to publish it as an artefact in your Team City CI build, with the Allure Team City Plugin.

NOTE: If you are using Specflow with NUnit, be sure to install the NuGet package NUnit.Runners. If this breaks your build, download the binary version of NUnit and copy over both nunit.core.dll and nunit.core.interfaces.dll to be referenced in your solution. This will ensure NUnit sees your [AfterTestRun] hook and performs all necessary teardown tasks.



Collating multiple sets of results into one report

If you need to merge multiple sets of results into a single report, this can be done easily by copying the generated XML files out to another directory, running another set of tests, and then copying the previous files back over.

When you generate the report from the directory with two sets of XML files, Allure will mash it all together into one report.

You could also try Running Multiple Assemblies with NUnit but this is not recommended in my opinion.

Structuring Projects with BDD, Specflow and Selenium in C#

Typically when I create a Specflow BDD solution, BDD elements are separated into Feature Files, Hooks, Step Definitions and Pages. These elements are separate folders or subsets of folders within the solution.
Code begins with the feature files written in Gherkin. Feature files contain scenarios, which themselves execute test cases; sometimes a scenario is just one test case, or it can be many iterations of a particular action. An example is listed below:

Scenario Outline: Sitemap Links
 And I click the sitemap link '<LinkName>'
 Then I am taken to the sitemap page for '<LinkName>'
 Examples:
 | LinkName              |
 | Contact Us            |
 | Terms and Conditions  |
 | Privacy Policy        |

Generally we perform a set of actions and then assert the outcome at the end. In the scenario above we are searching for links in the sitemap section of the footer, selecting a result from a list, and then asserting that we are taken to that particular page.

It helps greatly if these features are written in collaboration. In an ideal world features should be authored in a “three amigos” style with the project Business Analyst and Product Owner.

Specflow generates step definitions from the Gherkin, and these can be used across a set of feature files. It makes sense to group common ones together in one big step definition file for actions such as navigating to the site or logging in/out of the application. More feature-specific steps are kept in separate files.

It is our aim to keep the code in step definitions short and clean; we do this by wrapping up Selenium code and putting those methods into the Pages .cs files. For example:

Given I have navigated to Test Environment

Generates the following Step Definition:

[Given(@"I have navigated to (.*)")]
public void GivenIHaveNavigatedTo(string website)


From here in one of the common Pages .cs files we have created the following method:

public void NavigateTo(string url)
{
    Driver.Navigate().GoToUrl(url);
}

We try to keep the Selenium code wrapped up in these statements to keep the Step definitions easy to write. With this method in place our step definition looks like this:

[Given(@"I have navigated to (.*)")]
public void GivenIHaveNavigatedTo(string website)
{
    NavigateTo(website);
}

Organising Tests
Tests are organised with @Tags, which make groups of tests, or individual tests, easy to find in MS Test. They also have other important uses in hooks and can be used to execute a set of tests on a CI build.

For example, all sanity tests are labelled with @_Sanity; this makes the whole suite of sanity tests easy to find and execute. If this tag is passed to a CI build then, rather handily, just the sanity tests will run on an overnight build rather than the lengthier @_Regression suite.
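In Gherkin, tags simply sit above the feature or scenario they apply to. The feature below is made up for illustration:

```gherkin
@_Sanity
Feature: Login

@_Sanity @_Regression
Scenario: User can log in with valid credentials
```

A tag at feature level applies to every scenario in that feature, so tag at the level that matches how you want to slice your runs.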

What’s with the underscore? Well that just keeps your tags at the top of the MS Test list if you happen to be using that test framework, depending on your preference of viewing them.


Hooks are used primarily for the test run set up and teardown. Before a run we want to instantiate a driver and spin a browser up before any scenarios are automated, otherwise it would go nowhere fast. Similarly at the end of a test run we want to kill the browser off.

You can be a bit more clever with hooks and execute bits of code before or after certain features, steps, scenarios or scenario blocks. In order to keep an easy-to-understand codebase these must be used sparingly: debugging becomes harder, and feature files end up with all sorts of things happening that aren't specified in the English, which is not what BDD is intended for.
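As a minimal sketch, a hooks class covering the set up and teardown described above might look like this (class and method bodies are illustrative, and assume the driver setup from earlier in this post):

```csharp
[Binding]
public class Hooks
{
    // Runs once, before any scenarios: spin the browser up
    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        // Hypothetical call to whichever driver setup your config selects,
        // e.g. the SetupCloudDriver method shown earlier
        DriverFactory.Start();
    }

    // Runs once, after the whole run: kill the browser off
    [AfterTestRun]
    public static void AfterTestRun()
    {
        Driver.Quit();
    }
}
```

`DriverFactory.Start()` is a made-up name standing in for your own initialisation; the important part is the [Binding] class and the [BeforeTestRun]/[AfterTestRun] attributes.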

For more information on hooks, please see the Specflow documentation.


This section of classes is where we keep all of the wrapped-up Selenium code, which is a bit nasty. We wrap it into helper methods to make our step definitions easier to write, as described previously. This code should be kept clean and refactored, should encapsulate common practices, and should apply sensible programming practices such as DRY (Don't Repeat Yourself).

The Selectors page is where we keep a large number of constants; this is so that if an ID changes on the system under test you will only ever need to change it in one place to fix your broken test(s).
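A sketch of such a selectors page, with made-up IDs for illustration:

```csharp
// Hypothetical selectors class: every By criteria lives here,
// so an ID change on the system under test is a one-line fix
public static class Selectors
{
    public const string SearchBoxId = "search-box";
    public const string SearchButtonId = "search-go";

    public static readonly By SearchBox = By.Id(SearchBoxId);
    public static readonly By SearchButton = By.Id(SearchButtonId);
}
```

Page methods then reference Selectors.SearchBox rather than scattering By.Id("search-box") throughout the codebase.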


We also keep in mind a set of rules when structuring the code, which are:

  • No Selenium Code in Step Definitions
  • No Project Specific Methods in PageBase, use MainPage for these
  • All By.Criteria to be defined and referenced in selectors page
  • URLs and base URLs that aren’t in Gherkin must be constants
  • Create triptych Given/When/Then step definitions where necessary
  • Always copy generated methods to clipboard
  • No Thread.Sleeps in any step definitions, wait for something instead
  • Regions, regions, regions – use regions in code to keep it nice and tidy
  • “That” is a banned word in Gherkin
  • Replace all auto generated variable names with something sensible
  • All assertions have the optional message parameter populated except for AreEqual
  • Split Step Definitions out into sensible groupings/ filenames
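The "no Thread.Sleeps" rule above means waiting on a condition instead; a sketch using Selenium's WebDriverWait (the element ID is made up):

```csharp
// Instead of Thread.Sleep(5000), wait for the thing you actually need
var wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(10));
wait.Until(driver => driver.FindElement(By.Id("results-panel")).Displayed);
```

This fails fast with a timeout if the element never appears, rather than silently padding every run with fixed sleeps.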


BDD Solution Architecture

Base Solution

You can find a base solution here.

Wrap Up

There is a lot more you can do to improve this solution, like putting in a clever environment selector, or adding Sauce Labs or BrowserStack and also Applitools Eyes. The way this is built is designed to give you scalability and organisation to cope as your test suite grows, but you have to be strict with yourself and keep the BDD rules set out above in mind, or things can get messy very quickly.