Tuesday 16 December 2014

Automated Testing: Moving to Executable Requirements

A lot of test automation tools and frameworks aim to combine the technical and non-technical sides of test automation, mainly by bringing test design and its automated implementation together. This way we can delegate test automation to people without programming skills and make the solution more efficient through maximal re-use of functionality. Eventually, test automation tools are usually capable of producing some sort of DSL which reflects the business specifics of the system under test. A lot of test automation software has been created to provide such capabilities, and the main thing they all achieve is blurring the border between manual and automated testing. As a result, we usually control the correspondence between test scenarios and their automated implementation.
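
To make the idea more concrete, here is a minimal sketch of such a business-facing DSL layered on top of the automation code. All class and method names (LoginScenario, LoginSteps and its steps) are hypothetical and only illustrate how a readable scenario can delegate to reusable implementation:

    // Hypothetical fluent DSL: a business-readable scenario delegating
    // to reusable automation steps (all names are illustrative).
    public class LoginScenario {
        public static void main(String[] args) {
            new LoginSteps()
                .openLoginPage()
                .enterCredentials("demo.user", "secret")
                .submit()
                .verifyWelcomeMessage("Welcome, demo.user!");
        }
    }

    // The technical layer hiding implementation details behind business wording.
    class LoginSteps {
        LoginSteps openLoginPage() {
            System.out.println("Opening login page...");
            return this;
        }
        LoginSteps enterCredentials(String user, String password) {
            System.out.println("Entering credentials for " + user);
            return this;
        }
        LoginSteps submit() {
            System.out.println("Submitting login form");
            return this;
        }
        LoginSteps verifyWelcomeMessage(String expected) {
            System.out.println("Verifying welcome message: " + expected);
            return this;
        }
    }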

But it's not enough: we also have requirements, and whenever they change we have to spend time making sure the requirements stay in line with the tests and their automated implementation. So we need an approach which combines requirements, tests and auto-tests into something uniform. One such approach is called Executable Requirements. In this post I'll try to describe existing solutions for that and some possible directions to move in.
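
As a rough sketch of the idea (everything below, including the login stub, is hypothetical), an executable requirement keeps the plain-language statement right next to the check that verifies it, so running the requirements is the same as running the automated tests:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical sketch: each requirement is plain text mapped to the check that verifies it.
    public class ExecutableRequirements {

        public static void main(String[] args) {
            Map<String, Runnable> requirements = new LinkedHashMap<>();

            requirements.put("A registered user can log in with a valid password", () -> {
                if (!login("demo.user", "secret")) {
                    throw new AssertionError("Login failed");
                }
            });

            // Executing the requirements doubles as executing the tests,
            // so the two cannot silently drift apart.
            requirements.forEach((text, check) -> {
                check.run();
                System.out.println("PASSED: " + text);
            });
        }

        // Stub standing in for a call to the real system under test.
        static boolean login(String user, String password) {
            return true;
        }
    }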

Sunday 21 September 2014

Measure Automated Tests Quality

Introduction

There's one logical recursion I encounter with test automation. Test automation is about developing software targeted at testing some other software, so the output of test automation is itself software. This is one of the reasons for treating test automation as a development process (which is one of the best practices for test automation). But how are we going to make sure that the software we create for testing is good enough? Indeed, when we develop software we use testing (and test automation) as one of the tools for checking and measuring the quality of the software under test.

So, what about the software we create for test automation?

On the other hand, we use testing to make sure that the software under test is of acceptable quality, and in the case of test automation we use other software for this. In some cases this software becomes quite complicated itself. So how can we rely on untested software to draw any conclusions about the target product we develop? Of course, we can keep the test automation simple, but that is not a universal solution. So, we should find a compromise where we use reliable software to check the target software (the system under test). We should also find out how deep such testing can go and how we can measure that.
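
One minimal sketch of that compromise (the helper and its test below are hypothetical) is to cover the reusable parts of the automation framework with ordinary unit tests, here shown with JUnit 4:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical helper used by the automated tests: it normalizes raw UI text
    // before comparisons. Since the tests rely on it, it gets unit tests of its own.
    class UiTextNormalizer {
        static String normalize(String raw) {
            return raw == null ? "" : raw.trim().replaceAll("\\s+", " ");
        }
    }

    public class UiTextNormalizerTest {

        @Test
        public void collapsesWhitespace() {
            assertEquals("Hello world", UiTextNormalizer.normalize("  Hello   world \n"));
        }

        @Test
        public void handlesNullInput() {
            assertEquals("", UiTextNormalizer.normalize(null));
        }
    }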

So, the main questions which appear here are:

  • How can we identify that the automated tests we have are enough to measure the quality of the end product?
  • How can we identify that our tests are really good?
  • How can we keep quality control over our automated tests?
  • How can we identify whether our tests are of acceptable complexity?
In this article I'll try to find answers to many of those questions.

Sunday 7 September 2014

Mutation Testing Overview

Introduction

It's always good to have the entire application code covered with tests. It's also nice to track the features we implement and test. All of this gives an overall picture of how well the actual behaviour corresponds to expectations, which is actually one of the main goals of testing. But there are cases when coverage metrics don't work or don't show the actual picture. E.g. we can have tests which invoke some specific functionality and provide 100% coverage by any accessible measure, while none of the tests contains any verifications. It means that we potentially have problems there but nothing alerts us about that. Or we may have a lot of empty tests which are always green. We still may use coverage metrics to find out if the existing number of tests is enough for some specific module, but additionally we should make sure that our tests are good at detecting potential problems. So, one of the ways to reach that is to inject some modification which is supposed to lead to an error and make sure that our tests are able to detect the problem. The approach of certifying tests against an intentionally modified application is called Mutation Testing.
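
Here is a tiny hand-made illustration of a single mutation (the PriceCalculator class and both methods are hypothetical): one operator is flipped, and only a test with a real verification notices it:

    // Hypothetical illustration of one mutation: the same method with one operator flipped.
    public class PriceCalculator {

        // Original implementation.
        static int totalPrice(int unitPrice, int quantity) {
            return unitPrice * quantity;
        }

        // Mutant: multiplication replaced with addition.
        static int totalPriceMutant(int unitPrice, int quantity) {
            return unitPrice + quantity;
        }

        public static void main(String[] args) {
            // A test with a real verification kills the mutant:
            check("original", totalPrice(5, 3));
            check("mutant", totalPriceMutant(5, 3)); // fails -> the mutant is detected

            // A test that only calls the code without verifying anything
            // would stay green against both versions and detect nothing.
        }

        static void check(String label, int actual) {
            if (actual != 15) {
                throw new AssertionError(label + ": expected 15 but got " + actual);
            }
            System.out.println(label + ": OK");
        }
    }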

The main feature of this testing type is that we do not discover application issues but rather certify the tests' ability to detect errors. Unlike "traditional testing", we initially know where the bug is expected to appear (since we insert it ourselves), and we have to make sure that our testing system is capable of detecting it. So, mutation testing is mainly targeted at checking the quality of the tests. The above examples of empty test suites or tests without verifications are corner cases, and they are quite easy to detect. But in real life there's an interim stage when tests have verifications but still leave some gaps. In order to make testing solid and reliable we need to close such gaps, and mutation testing is one of the best ways to detect them.
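
The same hand-made style also shows what such a gap looks like (again, the DiscountService class and its numbers are hypothetical): the test below does verify something, but only far from the boundary, so the boundary mutant survives until a stronger check is added:

    // Hypothetical illustration of a verification gap: the test has an assertion,
    // but it is too weak to kill the mutant.
    public class DiscountService {

        // Original: a 10% discount applies for orders of 100 or more.
        static double discountedPrice(double price) {
            return price >= 100 ? price * 0.9 : price;
        }

        // Mutant: the boundary condition ">=" is replaced with ">".
        static double discountedPriceMutant(double price) {
            return price > 100 ? price * 0.9 : price;
        }

        public static void main(String[] args) {
            // Weak test: it verifies a value far from the boundary, so the mutant survives.
            verifyEquals(180.0, discountedPrice(200));
            verifyEquals(180.0, discountedPriceMutant(200)); // still passes -> gap stays hidden

            // Stronger test: checking the boundary value kills the mutant.
            verifyEquals(90.0, discountedPrice(100));
            verifyEquals(90.0, discountedPriceMutant(100)); // fails -> gap detected
        }

        static void verifyEquals(double expected, double actual) {
            if (Math.abs(expected - actual) > 1e-9) {
                throw new AssertionError("Expected " + expected + " but got " + actual);
            }
            System.out.println("OK: " + actual);
        }
    }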

In this article I'll describe the main concepts of mutation testing as well as potential ways to perform this testing type, with all the relevant pros and cons.