Automated Tester


Concurrent Test Running with Specflow and NUnit 3

A little while back I wrote a post on achieving concurrent or parallel test running with Selenium, Specflow and NUnit 2, but what about NUnit 3? Let's have a look. Thankfully it is a bit simpler than running an external build script, as done previously.

First up, according to the Specflow docs – open up your AssemblyInfo.cs file in your project and add the following line:

[assembly: Parallelizable(ParallelScope.Fixtures)]
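If fixture-level (feature-file-level) parallelism doesn't suit, NUnit 3 offers other assembly-level attributes. A sketch of the alternatives (standard NUnit 3 attributes; use only one Parallelizable attribute per assembly):

```csharp
// Run individual tests (scenarios) in parallel -- riskier with SpecFlow,
// as scenarios within a feature often share context:
[assembly: Parallelizable(ParallelScope.Children)]

// Optionally cap the number of NUnit worker threads:
[assembly: LevelOfParallelism(4)]
```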

Next we need to set up and tear down our browser and config at the Scenario Hook level; doing it in a Test Run Hook would be problematic because those hooks are static. Your Hooks might look similar to this:

[Binding]
public class ScenarioHooks
{
    private readonly IObjectContainer objectContainer;
    private IWebDriver Driver;

    public ScenarioHooks(IObjectContainer objectContainer)
    {
        this.objectContainer = objectContainer;
    }

    [BeforeScenario]
    public void BeforeScenario()
    {
        var customCapabilitiesIE = new DesiredCapabilities();
        customCapabilitiesIE.SetCapability(CapabilityType.BrowserName, "internet explorer");
        customCapabilitiesIE.SetCapability(CapabilityType.Platform, new Platform(PlatformType.Windows));
        customCapabilitiesIE.SetCapability("webdriver.ie.driver", @"C:\tmp\webdriver\iedriver\iedriver_3.0.0_Win32bit.exe");

        // Substitute the address of your own grid hub for the placeholder
        Driver = new RemoteWebDriver(new Uri("http://XXX.XXX.XXX.XXX:4444/wd/hub"), customCapabilitiesIE);
        objectContainer.RegisterInstanceAs<IWebDriver>(Driver);
    }

    [AfterScenario]
    public void AfterScenario()
    {
        // Quit ends the session and closes the browser; call it before disposing
        Driver.Quit();
        Driver.Dispose();
    }
}

You can see from the browser instantiation we are sending the tests to a Selenium Grid Hub, so as a precursor to running the tests you will need suitable infrastructure to run a grid, or you could configure it to go off to SauceLabs or BrowserStack.

Assuming the hub and nodes are configured correctly, when your build process runs the tests then the hub will farm them out by feature file (for other options see the parallel scope in AssemblyInfo.cs) to achieve concurrent test running, and that’s it! Much nicer.
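For reference, a grid can be stood up with the standalone server jar. A sketch only; the jar version, hub host name and browser capability here are assumptions to substitute with your own:

```shell
rem On the hub machine:
java -jar selenium-server-standalone-2.53.1.jar -role hub

rem On each node machine, register with the hub and advertise IE:
java -jar selenium-server-standalone-2.53.1.jar -role node ^
  -hub http://hub-host:4444/grid/register ^
  -browser "browserName=internet explorer,maxInstances=1"
```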



SeConf 2016 – London

If you aren’t aware of the Selenium Conference in London in November, head on over to their website and book yourself a ticket. Early bird prices are still available so get them whilst they are cheap and I’ll see you there! Looking forward to sharing ideas and experiences with other testing pros out there.

Tickets are here.

Check out the website for all the details on speakers and workshops for the three-day event.



Testing Rest Endpoints with RestSharp and Specflow

In an earlier post I set out how to use Specflow to test SOAP endpoints, but what about REST APIs?

Given the nature of building URLs to interact with them, it should be a cinch!… And it was. By leveraging the RestSharp library I was easily able to manipulate an example REST interface over at http://www.thomas-bayer.com/sqlrest/.

So let's have a look at how.

Each method will call the endpoint we initially set (the base URL), so let's wire in some tests with Gherkin and define some step definitions.

Scenario Outline: RestSharp Get Customer Record
Given I have an example endpoint http://www.thomas-bayer.com/sqlrest/
When I search for customer record <ID>
Then the result contains customer <ID>
Then the result contains customer <First Name>
Then the result contains customer <Last Name>
Then the result contains customer <Street>
Then the result contains customer <City>

Examples:
| ID | First Name | Last Name | Street | City |
| 0 | Laura | Steel | 429 Seventh Av. | Dallas |

Scenario: Post Product Price
Given I have an example endpoint http://www.thomas-bayer.com/sqlrest/
And I update the price of Product 1 to 10.00
When I search for Product 1
Then the price is 10.00
And I reset the price of product 1 to 24.8

We are passing the endpoint down to our helper here, but I would normally put it in App.config to keep things configurable.
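For example, the endpoint could be held in App.config (the key name RestBaseUrl is my own choice):

```xml
<!-- App.config -->
<configuration>
  <appSettings>
    <add key="RestBaseUrl" value="http://www.thomas-bayer.com/sqlrest/" />
  </appSettings>
</configuration>
```

It can then be read in code with `ConfigurationManager.AppSettings["RestBaseUrl"]` (from System.Configuration).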

Next up I created a Helper called RestHelper which will deal with all the setup, GET and POST requests etc. We need to define the endpoint and assign it to a commonly accessible place.

public class RestHelper
{
public RestClient endpoint = null;

public RestClient SetEndpoint(string endpointUrl)
{
endpoint = new RestClient(endpointUrl);
return endpoint;
}

Note: your endpoints will most likely require some form of authentication, which I am not covering here. However, I was easily able to add cookie authentication with a little tweaking.

Next, let's create some other useful helper methods, like GET and POST:

public string GetQuery(string query)
{
var request = new RestRequest(query, Method.GET);
IRestResponse response = endpoint.Execute(request);
var content = response.Content; // raw content as string
return content;
}

My POST method simply updates the price of a product, but you could rework it to whatever you need:

public void UpdatePrice(string query, string price)
{
var request = new RestRequest(query, Method.POST) { RequestFormat = DataFormat.Xml };
var body = ("<resource><PRICE>" + price + "</PRICE></resource>");
request.AddParameter("text/xml", body, ParameterType.RequestBody);
endpoint.Execute(request);
}

In our step definitions we just need to new up the helper and pass in the right query. We also assign the REST query result to a field that can be accessed by multiple step definitions:

private readonly RestHelper Rest = new RestHelper();

private string queryResult = null;

[Given(@"I have an example endpoint (.*)")]
public void GivenIHaveAnExampleEndpoint(string restEndpoint)
{
Rest.SetEndpoint(restEndpoint);
}

[When(@"I search for customer record (.*)")]
public void WhenISearchForCustomerRecord(string customerNo)
{
queryResult = Rest.GetQuery("CUSTOMER/" + customerNo + "/");
}

[Given(@"I update the price of Product (.*) to (.*)")]
[Then(@"I reset the price of product (.*) to (.*)")]
public void GivenIUpdateThePriceOfProductTo(int productNo, string newPrice)
{
Rest.UpdatePrice("PRODUCT/" + productNo + "/", newPrice);
}
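The remaining assertion steps from the feature file aren't shown above; a minimal sketch of how they might look with NUnit 3 assertions (the exact bindings here are my own guess at the remainder):

```csharp
[Then(@"the result contains customer (.*)")]
public void ThenTheResultContainsCustomer(string expected)
{
    // The raw XML response was stashed in queryResult by the When step
    Assert.That(queryResult, Does.Contain(expected));
}

[When(@"I search for Product (.*)")]
public void WhenISearchForProduct(int productNo)
{
    queryResult = Rest.GetQuery("PRODUCT/" + productNo + "/");
}

[Then(@"the price is (.*)")]
public void ThenThePriceIs(string expectedPrice)
{
    Assert.That(queryResult, Does.Contain(expectedPrice));
}
```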

And so on… If you want to have a play around with the code you can clone it from here and have fun testing REST APIs!



Setting up Marionette/ GeckoDriver

 

Firefox will crash if you use the legacy FirefoxDriver WebDriver implementation with Firefox v47 and beyond. It needs to be replaced with Mozilla's GeckoDriver/Marionette. At the time of writing GeckoDriver is still in a pre-release stage, meaning you may get unpredictable results against the current stable version of Firefox; you may want to use it with Firefox Nightly or Firefox Developer Edition instead.

Below are some steps to get you going with Marionette/GeckoDriver in C# .Net:

  • Download the latest release here
  • Extract the *.exe middleware and rename it to ‘wires.exe’
  • Place ‘wires.exe’ in to the given source code location
  • Include ‘wires.exe’ into your Solution and set properties to ‘Copy if Newer’

To set the Driver up in code, I have found the following works well in C# .Net:

FirefoxDriverService service = FirefoxDriverService.CreateDefaultService();
service.FirefoxBinaryPath = @"C:\Path\to\your\FF\exe.exe";

FirefoxOptions options = new FirefoxOptions();
options.AddAdditionalCapability(CapabilityType.AcceptSslCertificates, true);

// Command timeout for the driver connection
TimeSpan t = TimeSpan.FromSeconds(10);

Driver = new FirefoxDriver(service, options, t);

Note that you need to put in the path to your Firefox *.exe or add it to your system Path variable.

If you are experiencing issues I would recommend having a look at the project bug list or the project GitHub issues page before raising any issues. There is also much more information over at the Mozilla page for Marionette.



Pickles Reports – Living Document Generation

Adding Pickles to your project is really useful for presenting your Gherkin-based tests in an easy-to-read, searchable format with some funky metrics added in. If you get your CI build to publish the output as an artifact, then on each build you will always have up-to-date documentation of what is being tested.

You can configure it numerous ways, via MSBuild, PowerShell, GUI or Command Line.

In this article I will be setting it up via the command line and running a batch file as a build step in TeamCity. First off we need to install the Pickles Command Line package via NuGet. NUnit 3 is my test runner.

Once built we can start making use of it by running the executable with our desired parameters:

  • --feature-directory: where your feature files live, relative to the executable
  • --output-directory: where you want your Pickles report to be generated
  • --documentation-format: documentation format (DHTML, HTML, Word, Excel or JSON)
  • --link-results-file: path to the NUnit TestResult.xml (this allows graphs and metrics)
  • --test-results-format: nunit, nunit3, xunit, xunit2, mstest, cucumberjson, specrun, vstest

Together it looks like the below, which is put into a batch file and called in a closing build step.

.\packages\Pickles.CommandLine.2.5.0\tools\pickles.exe --feature-directory=..\..\Automation\Tests\Project.Acceptance.Tests\Features ^
  --output-directory=Documentation ^
  --link-results-file=..\..\..\TestResult.xml ^
  --test-results-format=nunit ^
  --documentation-format=dhtml

I use the nunit format rather than nunit3 because I have set my NUnit 3 runner to output NUnit 2-formatted results; this is so Specflow can consume the same output and produce more metrics on the build. With the results file hooked in you can get some whizzy graphics:

[Screenshot: Pickles report showing test-result graphs and metrics]

Pickles is a great tool for BDD and should help bridge that gap between “the Business” and IT. A full sample report can be found here. You can find extensive documentation here for the various ways to set up Pickles.
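As an aside, the NUnit 2-formatted TestResult.xml mentioned above can be produced directly by the NUnit 3 console runner via its --result option. A sketch, with an assumed assembly path:

```shell
nunit3-console.exe .\Project.Acceptance.Tests\bin\Release\Project.Acceptance.Tests.dll "--result=TestResult.xml;format=nunit2"
```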

 



