Sunday 18 October 2015

Cucumber JVM: Advanced Reporting 3. Handling FileNotFound or Invalid JSON Errors

Introduction

Since advanced Cucumber JVM reporting was initially introduced and later enhanced, the most frequent question has been about the error that appears when the input file is not found or is improperly formatted. Normally such an error is accompanied by output like:

java.io.IOException: Input is invalid JSON; does not start with '{' or '[', c=-1
 at com.cedarsoftware.util.io.JsonReader.readJsonObject(JsonReader.java:1494)
 at com.cedarsoftware.util.io.JsonReader.readObject(JsonReader.java:707)
 at com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview.readFileContent(CucumberResultsOverview.java:81)
 ...
or
java.io.FileNotFoundException
 at com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview.readFileContent(CucumberResultsOverview.java:76)
 at com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview.executeFeaturesOverviewReport(CucumberResultsOverview.java:189)
 ...
As this is one of the most frequent questions, I decided to create a separate post explaining the reasons for it and the way to fix it.
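
In most cases the root cause is that the report generator is pointed at a path where Cucumber never produced a JSON result file, either because the JSON plugin is not configured at all or because the configured path differs from the one passed to the report. Below is a minimal runner sketch (the class name and output path are just examples) which makes Cucumber-JVM write the JSON file the reports consume:

import org.junit.runner.RunWith;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// Illustrative runner: the "json:" plugin produces the machine-readable results
// file that the advanced reports read. The path is an example and must match
// the source file location configured for the report generator.
@RunWith(Cucumber.class)
@CucumberOptions(plugin = {"json:target/cucumber.json"})
public class SampleCucumberTest {
}

If the error still appears, make sure the tests actually ran (so the file exists at that location) and that the file passed to the report generator is the JSON output rather than the HTML or pretty-printed one.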

Sunday 13 September 2015

Test Automation Best Practices

Introduction

Test automation is not a brand new thing. Moreover, it's a well-known and fairly old part of the software development process. Thus, a lot of people have both good and bad experience with it. As a result, there is a common set of best practices which have been formulated in different ways. Different people concentrate on different aspects of test automation: some focus on technical details while others pay more attention to higher-level concerns. As always, the truth is somewhere in between.

So, what are the best practices in test automation?

For this post I've collected several posts/articles covering this topic (the list can be found in the References section of this post) and I'll try to combine them into one common list of practices. I'll mainly concentrate on top-level UI automation; however, many of the practices are fully applicable to any level of automated testing.

Tuesday 30 June 2015

WebDriving Test Automation

Introduction

WebDriver has shown growing popularity for many years, and for at least the past 4 years it has been the number one player on the market of web test automation solutions. The growth hasn't stopped yet. WebDriver exposes an open interface, and its server-side API is documented as a W3C standard. Thus, anyone can implement their own back-end and expand the range of supported technologies. So, WebDriver can become more than just another option in the list of web test automation tools. It may become (if it isn't already) the most widespread test automation platform, so that the term UI test automation may be replaced with another term like Web-Driving. In this post I'll describe where the WebDriver solution "web-drives" us to and what the areas of further growth are.
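
To make the "open interface" point concrete, here is a minimal illustration of the WebDriver client API in Java; the browser and URL are arbitrary, and any back-end implementing the same interface (a remote grid, a mobile driver, and so on) can be substituted behind the WebDriver reference.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class WebDrivingExample {
    public static void main(String[] args) {
        // Any WebDriver implementation can stand behind this interface
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://example.com");
            System.out.println("Page title: " + driver.getTitle());
            // Interaction goes through the same generic element API
            driver.findElement(By.tagName("a")).click();
        } finally {
            driver.quit();
        }
    }
}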

Saturday 13 June 2015

Cucumber JVM: Advanced Reporting 2. Detailed Report (HTML, PDF)

Introduction

Previously I've described basic report samples based on Cucumber-JVM. Since I use this reporting solution in practice quite frequently, the number of requirements and enhancements keeps growing, so since last time I've had to add some more features which may be useful.

Some of the new features are:

  • Cucumber extension to support failed tests re-run
  • Detailed results report which supports the following:
    • Detailed results report generation in HTML format
    • Detailed results report generation in PDF format
    • Screenshots are included in both of the above reports
All these new features are included in version 0.0.5, so we can add the Maven dependency:
<dependency>
 <groupId>com.github.mkolisnyk</groupId>
 <artifactId>cucumber-reports</artifactId>
 <version>0.0.5</version>
</dependency>
or the same thing for Gradle:
'com.github.mkolisnyk:cucumber-reports:0.0.5'

Since the Cucumber failed test re-run functionality was described before in this post, I'll concentrate more on the detailed results report features.
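
For orientation, generating the detailed report from code follows the same configure-then-execute pattern as the other report classes in the library. The class and method names below (CucumberDetailedResults, the setters, and executeDetailedResultsReport) are assumptions based on that pattern rather than a verified API, so treat this as a sketch and check the project documentation for the exact signatures:

import com.github.mkolisnyk.cucumber.reporting.CucumberDetailedResults;

public class DetailedReportSketch {
    public static void main(String[] args) throws Exception {
        // Assumed API: configure input/output locations, then generate the report
        CucumberDetailedResults results = new CucumberDetailedResults();
        results.setSourceFile("target/cucumber.json");       // assumed setter: Cucumber JSON results
        results.setOutputDirectory("target/reports/");       // assumed setter: where HTML/PDF go
        results.setOutputName("cucumber-detailed-results");  // assumed setter: base report name
        results.executeDetailedResultsReport(true);          // assumed method: true = also produce PDF
    }
}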

Tuesday 26 May 2015

Cucumber JVM + JUnit: Re-run failed tests

Automated tests should run reliably and provide predictable results. At the same time there are spontaneous, temporary errors which distort the overall picture when reviewing test results. And it's really annoying when tests fail on some temporary problem and then pass on the next run. Of course, it's one thing when we forgot to add a wait timeout for an element to appear before interacting with it. But there are cases when the reason for such a temporary problem lies beyond the test implementation and is mainly related to the environment, which may cause short delays or downtime. The normal reaction is to re-run the failed tests and confirm the functionality is fine. But this is a routine task that doesn't really require any intellectual work: it's simply additional logic which checks the test result state and triggers a repeated run in case of an error. If a test fails permanently we'll still see the error, but if it passes after a re-run then the problem doesn't require too much attention.

And generally, if you simply stepped out, it's not a reason to fail.

This problem is not new, and it has already been solved for many particular cases; e.g., here is the JUnit solution example. In this post I'll show you how to perform the same re-run for Cucumber-JVM in combination with JUnit, as I actively use this combination of engines and it is quite popular. The solution shown at the previous link doesn't really fit Cucumber, because each individual JUnit test in Cucumber-JVM corresponds to a specific step rather than an entire scenario. Thus, the re-run functionality for this combination of engines looks a bit different. So, let's see how we can re-run our Cucumber tests in JUnit.
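
For reference, the JUnit-only approach mentioned above boils down to a retry rule roughly like the sketch below (the class name and attempt count are illustrative). As noted, in Cucumber-JVM each JUnit test maps to a single step, so retrying at this level would not re-run a whole scenario, which is why a different mechanism is needed.

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Classic JUnit retry rule: re-evaluates a failing test up to maxAttempts times
// and reports only the last failure if every attempt fails.
public class RetryRule implements TestRule {
    private final int maxAttempts;

    public RetryRule(int maxAttempts) {
        this.maxAttempts = maxAttempts;
    }

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                Throwable lastFailure = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        base.evaluate();
                        return; // passed on this attempt
                    } catch (Throwable t) {
                        lastFailure = t; // remember the failure and retry
                    }
                }
                throw lastFailure; // still failing after all attempts
            }
        };
    }
}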

Monday 11 May 2015

The Future of Test Automation Frameworks

It's always interesting to know the future in order to be able to react to it properly. That's especially true in the world of technology, where we constantly get something new and where something that was just a subject of science fiction yesterday becomes observable reality today. Automated testing is not an exception here. We should be able to catch the right trend and be prepared for it; actually, our professional growth depends on it. Should we stick to the technologies we use at the moment, or should we dig more into areas which aren't well developed yet but still have big potential? In order to answer that question we need to understand how test automation has been evolving and what the promising areas are. Based on that we can identify what to expect next.

So, let's look at the evolution of test automation frameworks to see how they evolve and where we can grow.

Wednesday 6 May 2015

Problem Solved: Cucumber-JVM running actions before and after test execution with JUnit

Background

It is a frequent case that we need to perform some actions before and/or after the entire test suite execution. Mainly, such actions are needed for global initialization/cleanup, additional reporting, or any other kind of pre/post-processing. There may be many different reasons for that, and some test engines provide such an ability: e.g. TestNG has the BeforeSuite and AfterSuite annotations, and JUnit has test fixtures which may run before/after a test class (it's not really the same thing, but when we use Cucumber-JVM it's very close to what we need).

Problem

The problem appears when you want to add some actions at the very latest or very earliest stages of test execution and you use Cucumber-JVM with JUnit. In my case I wanted to add some report post-processing to generate an advanced Cucumber report. Here JUnit fixtures didn't help, as the AfterClass-annotated method runs before Cucumber generates the final reports.

At the same time, the question of adding @BeforeAll and @AfterAll hooks was raised on the Cucumber side as well, and there was even a solution proposed. Unfortunately, the authors decided to revert those changes as there were some cases where it did not work.

So, the problem is that I need something to run after the entire Cucumber-JVM suite is done, but neither Cucumber nor JUnit gives me a built-in capability for doing this.
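
One possible direction for a workaround (a sketch only, not necessarily the approach described in the Solution section) is to extend the Cucumber runner itself and wrap its run() method, so that custom code executes strictly before the suite starts and strictly after Cucumber has finished, including its own report generation. The class and hook names here are illustrative:

import java.io.IOException;

import org.junit.runner.notification.RunNotifier;
import org.junit.runners.model.InitializationError;

import cucumber.api.junit.Cucumber;

// Illustrative wrapper runner: reference it via @RunWith(ExtendedCucumberRunner.class)
// instead of the plain Cucumber runner.
public class ExtendedCucumberRunner extends Cucumber {

    public ExtendedCucumberRunner(Class<?> clazz) throws InitializationError, IOException {
        super(clazz);
    }

    @Override
    public void run(RunNotifier notifier) {
        beforeSuite();
        try {
            super.run(notifier);
        } finally {
            // Runs after all scenarios have finished and Cucumber has closed its reports
            afterSuite();
        }
    }

    private void beforeSuite() {
        // global initialization goes here
    }

    private void afterSuite() {
        // report post-processing goes here
    }
}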

Solution

Sunday 3 May 2015

Cucumber JVM: Advanced Reporting

Advanced Cucumber Reporting Introduction

Cucumber JVM contains a set of predefined reports available via the plugin option. By default we get some raw reports. Some of them are ready to be given to end users (e.g. the HTML report) while others still require post-processing, like the JSON reports (both the usage and results reports). Also, the available standard HTML report isn't really good enough. It's (partially) good for analysis, but if we want to provide overview information we don't need that much detail, and we may need more visualized statistics rather than just one simple table.

Well, there are some existing solutions for such advanced reporting, e.g. Cucumber Reports by masterthought.net. That's a really nice results reporting solution for Cucumber JVM. But when I tried to apply it I encountered several problems with Maven dependency resolution and report processing. That became a reason for me to look at something custom, as I couldn't smoothly integrate this reporting into my testing solution. Additionally, it covers only results reporting, while I'm also interested in step usage statistics to make sure we use our Cucumber steps effectively. And apparently I couldn't find a reporting solution covering this area in an appropriate way for Cucumber JVM.

To be honest, I'm not really a big fan of writing custom reporting solutions at all, as for test automation engineers the existing information is more than enough in most cases. But if you need to provide something more specific, preferably in an e-mailable form to send to other members of the project team, you need something else. That's why I created some components which generate Cucumber reports such as results and usage reports (see the sample screenshots below).

The above samples show e-mailable reports which mainly provide results overview information that can be sent via e-mail, as well as an additional HTML report summarizing usage statistics. In this post I'll describe my reporting solution with some examples and detailed explanations.
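
For a quick feel of how these components are invoked, here is a minimal sketch. The CucumberResultsOverview class and its executeFeaturesOverviewReport method come from the library itself; the setter names for the input file and output location are assumptions following a common configuration pattern, so verify them against the project README:

import com.github.mkolisnyk.cucumber.reporting.CucumberResultsOverview;

public class OverviewReportSketch {
    public static void main(String[] args) throws Exception {
        CucumberResultsOverview overview = new CucumberResultsOverview();
        overview.setSourceFile("target/cucumber.json");  // assumed setter: Cucumber JSON results
        overview.setOutputDirectory("target/reports/");  // assumed setter: report output folder
        overview.setOutputName("cucumber-overview");     // assumed setter: base report name
        overview.executeFeaturesOverviewReport();        // generates the overview report
    }
}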

Sunday 22 February 2015

Aerial: Create New Project Using Archetype

Since version 0.0.5, Aerial has had the ability to generate Java projects from archetypes so you can get a working sample project quickly. In this post I'll describe the steps for doing this.

Wednesday 18 February 2015

Problem Solved: run TestNG test from JUnit using Maven

Background

Recently I've been developing an application component which was supposed to run with TestNG. Actually, it was another extension of TestNG. So, in order to test that functionality I had to emulate a TestNG suite run within the test body. Well, performing a TestNG run programmatically wasn't a problem. It can be done using code like:

import org.testng.TestListenerAdapter;
import org.testng.TestNG;

.......
        TestListenerAdapter tla = new TestListenerAdapter();
        TestNG testng = new TestNG();
        testng.setTestClasses(new Class[] {SomeTestClass.class});
        testng.addListener(tla);
        testng.run();
.......
where SomeTestClass is an existing TestNG class. This can even be used within JUnit (which was my case, as I mainly used JUnit for the test suite). So, technically, TestNG can be executed from a JUnit test and vice versa.
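
As a concrete illustration of that last point, the same programmatic run can be wrapped in a JUnit test (the wrapper class name is hypothetical), with the listener used to fail the JUnit test if the embedded TestNG run reports failures:

import org.junit.Assert;
import org.junit.Test;
import org.testng.TestListenerAdapter;
import org.testng.TestNG;

public class TestNgFromJUnitTest {

    @Test
    public void runsTestNgSuiteInsideJUnit() {
        TestListenerAdapter tla = new TestListenerAdapter();
        TestNG testng = new TestNG();
        testng.setTestClasses(new Class[] {SomeTestClass.class});
        testng.addListener(tla);
        testng.run();
        // Propagate TestNG failures to the surrounding JUnit test
        Assert.assertTrue("TestNG run reported failures", tla.getFailedTests().isEmpty());
    }
}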

Problem

The problem appeared when I tried to run a JUnit test performing a TestNG run via Maven. Normally tests are picked up by the Surefire plugin, which can be included in the Maven pom.xml file with an entry like this:

 <plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.18.1</version>
 </plugin>
If you use JUnit only, it picks up JUnit tests. In my case I also used JUnit for running test suites, but one test required a TestNG test class (actually, a class with the TestNG @Test annotation), so I also had to add the TestNG Maven dependency. In this case only that TestNG class was executed during the test run. So, how do I let Maven know that I want to run exactly the JUnit tests and not the TestNG ones, while both of them are present within the same Maven project?

Monday 9 February 2015

NBehave vs SpecFlow Comparison

It's always good when you use some technology and you have a choice between various tools/engines. In some cases this creates a problem, as sometimes happens with BDD engines, especially when we have to choose between similar engines which are widely used and at first glance seem to be identical. Some time ago I made a JBehave vs Cucumber-JVM comparison to spot the differences and comparative characteristics of the most evolved engines in the Java world. And as I can see from the page view statistics, it's quite an interesting topic. At the same time, there is the .NET world, which has another set of BDD engines, and they are quite popular as well. So, there we may encounter a question like: what is better, NBehave or SpecFlow?

The answer isn't so trivial. When I did a cross-platform BDD engine comparison almost 3 years ago, some of the engines weren't good enough, or at least their documentation was at a low level. At that time NBehave didn't look so good. But since then a lot of things have changed, and now both NBehave and SpecFlow have turned into full-featured Gherkin interpreter engines. So, the choice of the better tool among them isn't so trivial anymore. That means we'll start a new comparison between NBehave and SpecFlow.

So, let's find out who's the winner in this battle!

Sunday 18 January 2015

Aerial: Introduction to Test Scenarios Generation Based on Requirements

In one of my previous articles, dedicated to moving to executable requirements, I described some high-level ideas about how such requirements should look. The main idea is that a document containing requirements/specifications can be the source for generating test cases. But in the previous article we only made a brief overview of how it should work in theory. Now it's time to turn theory into practice. In this article I'll give a brief introduction to a new engine I've started working on. So, let me introduce Aerial, the magic box which makes our requirements executable.

In this article I'll describe the main principles of this engine and provide some basic examples to give an idea of what it is and where it is used.