
Sunday 18 January 2015

Aerial: Introduction to Test Scenarios Generation Based on Requirements

In one of my previous articles on moving to executable requirements I described some high-level ideas about what such requirements should look like. The main idea is that the document containing the requirements/specifications can be the source for generating test cases. But that article gave only a brief overview of how it should work in theory. Now it's time to turn theory into practice. In this article I'll give a brief introduction to a new engine I've started working on. So, let me introduce Aerial, the magic box which makes our requirements executable.

In this article I'll describe the main principles of this engine and provide some basic examples to give an idea of what it is and where it can be used.

Aerial Main Purpose and Benefits

Aerial is an engine implementing the Executable Requirements approach. It is targeted at generating test scenarios from a requirements description; thus, the requirements document becomes the source for tests. It is mainly designed as an extension of Cucumber and provides the following possibilities:

  • More compact and structured representation of requirements and scenarios - with Aerial we don't need to expand all the data and scenarios by hand. We just define rules for expanding a behaviour description into an actual set of scenarios. This form is more compact and more declarative.
  • Built-in mechanism for generating test scenarios - since test scenarios are generated, we don't need to spend time on test design itself. In other words, the Requirements Definition and Test Design stages are collapsed into one.
  • Generalised approach for getting data from external resources - requirements are usually defined as text and can be stored anywhere in any form. Aerial provides an extensible mechanism for processing various input sources.
  • Ability to perform static checks on requirements - since requirements are used as the source for test scenario generation, various validations can be applied to them. In other words, Aerial partially automates static validation of requirements.
  • Tests and their automated implementation react to any requirement change - in any testing project we should always keep our tests and automated tests in sync with the requirements. Since tests are generated from the requirements description, any change to the requirements updates the underlying tests automatically.
  • Simplified requirements and test coverage calculation - coverage is always important. Since test scenarios are generated from the initial requirements, requirements coverage is 100% by design. This is described in particular in the Measure Automated Tests Quality post, where we looked at a unified test coverage metric.

How Is It Even Possible?

The first question that comes to mind is: how can we generate test scenarios based on requirements at all? After all, requirements are free-form text which can vary widely and describe anything. So what exactly should we describe, and how?

Common Elements For Scenario Description

Every test scenario has specific components which are present almost everywhere, regardless of what application we test. First of all, every test performs some Action which drives the system to a specific state where we perform verifications. In other words, every action leads to the point where we verify results.

Depending on the action result we can end up in a different state. For the generator we should branch the further actions depending on the Action's success or failure.

In some cases we have to make sure that our test drives the application to the correct initial state before performing any actions. This is normally done by some pre-conditions, or Pre-requisites.

And finally, all scenarios are driven by the input data we use. Depending on the input characteristics we can identify whether our scenario is positive or negative.

So, eventually, every functional case can be represented with the following items:

  • Action - a set of instructions explaining what should be done in order to reach the application state where the verifications are performed
  • Actions on Success/Failure - the set of instructions to be performed when the Action is expected to finish successfully or with an error, respectively
  • Pre-requisites - the set of instructions to be performed before the test scenario starts executing
  • Test Data - the set of descriptions defining the data to be used within the scenarios as well as the data format

Their dependency and interaction can be represented with the following diagram:

Depending on the level of detail, we can represent any test scenario in this form. This is pretty convenient for generation: we can describe each specific part separately and then build the final scenarios just by combining those parts.
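
For readers already familiar with Gherkin, the mapping of these parts onto a scenario roughly looks as follows (the exact templates Aerial uses are shown later in this article):

    Scenario Outline: < Case Name >
        Given < Pre-requisites >
        When < Action >
        Then < Actions on Success or Actions on Failure >
    Examples:
        < Test Data table >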

Typical Data Is Tested In Typical Way

The main essential part left in the generation algorithm is test data. We should declare the data in some specific way so that the actual input can be generated based on a set of rules.

This can be achieved by defining the test data format. E.g. if some input is very specific, we can use a regular expression to describe the format a valid value should have. Based on that, we can also derive potential values which violate the format.

In real life this corresponds to typical patterns of entering a value into some specific field. E.g. you have probably heard a joke like this:

QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv

So, in this example we have:
  • A valid number (1 beer)
  • The lower boundary value (0 beers)
  • A big number which is most likely greater than the maximum acceptable one (999999999)
  • Something which is not a number at all (a lizard & a sfdeljknesv)
  • A number below the acceptable range (-1)
However, in general the number of beers could be described as a range from 0 to 1000 (imagine you are paying for all the beers ordered during the day). So, the input data in this example can be described as a numeric value in the range of 0 to 1000.

Now let's switch from this example to another one using a similar numeric entry. E.g. we have some payment system where we can make payments from $0 to $1000. Let's imagine the test inputs we could use to check this system. Well, they would be pretty similar to the ones we had for the previous example.

So, even abstracting away from the context, we can define some rules for input data generation, where the input is described in a declarative form. Based on some algorithm we can expand those rules into actual test data. Indeed, if a value is defined within a range, we definitely try something inside the range (positive testing), something outside the range (negative testing), and values on or near the borders (boundary analysis). We shouldn't take all possible data here, just some instances of data belonging to each specific group (equivalence classes).
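
To make that idea concrete, here is a small sketch in Java of how a closed range like 0 to 1000 could be expanded into typical samples using boundary values and one representative from each equivalence class. This is just an illustration of the principle, not Aerial's actual generation algorithm:

    import java.util.Arrays;
    import java.util.List;

    /** Sketch of range-based test data expansion; not Aerial's real implementation. */
    public class RangeSamples {

        /** Representative values expected to be accepted for a closed [min; max] range. */
        public static List<Integer> positive(int min, int max) {
            // boundaries plus one value from inside the range (equivalence class representative)
            return Arrays.asList(min, max, min + (max - min) / 2);
        }

        /** Representative values expected to be rejected for a closed [min; max] range. */
        public static List<Integer> negative(int min, int max) {
            // values just outside each boundary
            return Arrays.asList(min - 1, max + 1);
        }

        public static void main(String[] args) {
            System.out.println("Positive: " + positive(0, 1000)); // [0, 1000, 500]
            System.out.println("Negative: " + negative(0, 1000)); // [-1, 1001]
        }
    }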

The same applies to Aerial. The initial document contains an input section where we define just the input name and the acceptable format. The engine itself then generates typical data based on some rules.
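
For example, using the input format shown in the full sample later in this article, the payment amount from the previous section could be declared roughly like this (the Amount name and the closed-range notation are my own illustration; the sample later in the article uses a half-open range):

    Input:
        | Name   | Type | Value    | Unique |
        | Amount | Int  | [0;1000] | false  |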

The Set of Scenarios is Typical

Mainly, when we have some input data and some pre-defined flow, we can define the following groups of scenarios:

  • Positive - operate with valid data with the expectation of a valid result
  • Negative - operate with invalid data with the expectation of an error
  • Unique values - perform the same scenario on a similar set of data more than once. If some value is unique, it is accepted the first time but an error is expected the second time.

Some test groups may appear implicitly here. E.g. Boundary Analysis can be performed as part of the Positive and Negative tests while checking values against the defined range. Pair-wise testing may also be applied in case the amount of positive test data is big enough.

The main point of the above paragraph is that there are specific groups of tests which can be generated in a common way.

Sample Scenario Generation Patterns

Positive Scenarios

Positive scenarios operate with valid values and are targeted at checking the expected behaviour. Positive scenarios are mainly generated from the document description using the following structure:

    Scenario Outline: < Case Name > positive test
        Given < Pre-requisites steps >
        When < Action text >
        Then < Actions in case of success >
    Examples:
        <The positive test data table>

Negative Scenarios

Negative scenarios are built the same way as positive ones, except that they operate with negative test data, where at least one item doesn't fit the acceptable format. Also, since such a scenario uses invalid input, the actions on error become the expected results. So, a negative test scenario is mainly built using the following template:

    Scenario Outline: < Case Name > negative test
        Given < Pre-requisites steps >
        When < Action text >
        Then < Actions in case of error >
    Examples:
        <The negative test data table>

Unique Value Scenarios

Unique value scenario generation is triggered as soon as at least one field has the **Unique** column set to **true** in the input data table. In this context the **Unique** term isn't restricted to the case when we cannot create 2 records with the same value of some field. Here, uniqueness means that we cannot perform some action twice with the same value in some field.

Getting to the technical side of the generation, we need a scenario where we run the action successfully the first time, but get an error the second time when we use the same value in some field. Thus, the unique value scenario can be described with the following template:

    Scenario Outline: < Case Name > unique value test
        Given < Pre-requisites steps >
        When < Action text >
        Then < Actions in case of success >
        When < Action text >
        Then < Actions in case of error >
    Examples:
        <Unique scenario data>

Scenario Generation Sample

In order to show all the above items in practice, here is an example of such generation. E.g. the initial requirements document looks like this:

This is a sample document
With multiline description

Feature: Sample Feature
    This is a sample feature
    With multiline description

    Case: Sample Test
        This is a sample test case
        With multiline description

        Action:
            Sample action on <Test> value
        Input:
            | Name | Type | Value   | Unique |
            | Test | Int  | [0;100) | true   |
        On Success:
            This is what we see on success
        On Failure:
            This is what we see on error
        Pre-requisites:
            These are our pre-requisites

Here we describe some action and the expectations depending on input validity. After the transformation we'll get a file containing text like this:

Feature:  Sample Feature
    Scenario Outline:  Sample Test positive test
        Given These are our pre-requisites
        When Sample action on <Test> value
        Then This is what we see on success
    Examples:
        | Test | ValidInput |
        | 50 | true  |
        | 51 | true  |
        | 0 | true  |

    Scenario Outline:  Sample Test negative test
        Given These are our pre-requisites
        When Sample action on <Test> value
        Then This is what we see on error
    Examples:
        | Test | ValidInput |
        | 100 | false |
        | -1 | false |
        | 101 | false |

    Scenario Outline:  Sample Test unique value test
        Given These are our pre-requisites
        When Sample action on <Test> value
        Then This is what we see on success
        When Sample action on <Test> value
        Then This is what we see on error
    Examples:
        | Test | ValidInput |
        | 50 | true  |

As seen in this example, we get a full-featured Cucumber feature containing multiple scenario outlines. Note that when we use some data, we shouldn't forget to include the data placeholders (like <Test> above) in the action text. Otherwise, the generated scenarios wouldn't be consistent, since the Examples values would never be substituted into the steps.

High Level Integration Overview

Aerial isn't really an independent engine. It just prepares initial data for further processing. At the moment it is targeted at generating Cucumber-like scenarios and passing them to JUnit. So the entire flow mainly covers 3 major components:

  1. Aerial itself - used for test scenario generation
  2. Cucumber - used to run the generated scenarios
  3. JUnit - the main test engine which runs everything

Schematically, the entire flow can be represented with the following diagram:

Here we have 2 major stages:
  1. Stage 1 - the requirements document is processed by Aerial. As a result, Cucumber-like feature files are generated
  2. Stage 2 - the Cucumber feature files are passed to Cucumber driven by JUnit. As a result, the JUnit reports are produced
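
To illustrate Stage 2, here is a minimal sketch of a standard Cucumber JUnit runner that picks up the generated feature files. The features path, package names, and report location are assumptions for illustration; point them at wherever the generated files and your step definitions actually live:

    package com.example.aerialdemo;                        // hypothetical package name

    import org.junit.runner.RunWith;

    import cucumber.api.CucumberOptions;
    import cucumber.api.junit.Cucumber;

    // Runs the feature files produced by Aerial in Stage 1 through Cucumber under JUnit.
    @RunWith(Cucumber.class)
    @CucumberOptions(
            features = "target/generated-features",        // assumed output location of the generation step
            glue = "com.example.aerialdemo.steps",          // package containing your step definitions
            plugin = {"junit:target/cucumber-junit-report.xml"})
    public class GeneratedScenariosTest {
    }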

Components Overview

Aerial itself isn't a monolithic product. It contains several parts which can be released separately and are targeted at specific goals. These components are:

  • Aerial core engine
  • Aerial Maven plugin

Let's describe those components in more detail.

Aerial Engine

Aerial Engine is the core library which performs the entire scenario processing. It can be delivered either as:

  • External dependency:
    • Maven:
      <dependency>
       <groupId>com.github.mkolisnyk</groupId>
       <artifactId>aerial</artifactId>
       <version>0.0.2</version>
      </dependency>
        
    • Gradle:
      'com.github.mkolisnyk:aerial:0.0.2'
        
    • Buildr:
      'com.github.mkolisnyk:aerial:jar:0.0.2'
        
  • Jar library which can be downloaded from here (for a different version, just change the version numbers in the file/folder names).

Aerial Maven Plugin

The Aerial Maven plugin is needed when we want to split the processing activities. By default, if we bind Aerial annotations to JUnit tests, the entire flow is performed, but it requires a very specific runner. If you need to use some other runner, there is an ability to generate scenarios separately from execution. This is the current purpose of the Aerial Maven plugin. It is declared in the build/plugins section of the POM using the following entry:

<plugin>
 <groupId>com.github.mkolisnyk</groupId>
 <artifactId>aerial-maven-plugin</artifactId>
 <version>0.0.2</version>
</plugin>

At the moment it supports the aerial:generate goal, which is executed during the generate-sources phase.
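
For completeness, here is roughly how the goal can be bound to the build so that it actually runs. The goal name generate comes from the aerial:generate goal mentioned above; treat the rest of this configuration as an assumption and check the plugin documentation for your version:

<build>
 <plugins>
  <plugin>
   <groupId>com.github.mkolisnyk</groupId>
   <artifactId>aerial-maven-plugin</artifactId>
   <version>0.0.2</version>
   <executions>
    <execution>
     <goals>
      <!-- runs during the generate-sources phase by default -->
      <goal>generate</goal>
     </goals>
    </execution>
   </executions>
  </plugin>
 </plugins>
</build>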

Test Examples

Examples with detailed descriptions can be found on the main Aerial site page.

Summary

That was a brief overview of Aerial, its capabilities, and the major concepts behind it. The major things to take away from this article are:

  • Specifications/requirements are represented as free-form text, so we can have them written in some specific format for convenience
  • Test scenarios are relatively typical and share some common structural items
  • Aerial uses the above principles and introduces an input format which is suitable for test scenario generation
  • Aerial doesn't perform the test execution itself. It is just an additional automated step before a Cucumber (or any other similar framework) run

Of course, Aerial is a pretty new engine and a lot of things are still to be changed or added. But the major concepts will stay the same.
