Tuesday 20 July 2010

SOA Testing: What We Should Know About It

It's obvious that testing is not limited to clicking through the application GUI. Everything delivered as part of the product should be tested, and sometimes we interact with it on a non-GUI level. That is precisely the key feature of services testing.

Typically, we have services running on remote machines and communicating over some protocol like HTTP. We can send requests and analyze the responses. Is that all? I don't think so. The key problem is that testing at this level is often treated the same way as "traditional" GUI-level testing. The reasoning goes: it's testing too, so the same test engineers and the same process will do. That sounds logical, but it misses a few important facts.

1. Services can be tested only via some additional software

Services are usually tested by sending requests and analyzing responses. In most cases we need additional software that can at least send the request and examine the response. When there are many different requests producing responses in various specific formats, we should build a wrapper that parses the results for us; otherwise we will spend too much time staring at raw responses and risk missing small but important details. So, typically, test execution and results retrieval should be delegated to a set of software.
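
To make this concrete, here is a minimal sketch of such a check written as a JUnit test using only the HTTP support from the standard Java library. The endpoint URL, the expected status code and the expected response fragment are invented for illustration only.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class CustomerServiceSmokeTest {

    @Test
    public void getCustomerReturnsWellFormedResponse() throws Exception {
        // Send a plain HTTP GET to the service under test (hypothetical URL)
        URL url = new URL("http://test-host:8080/customers/42");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");

        // Let the test analyze the response instead of eyeballing it manually
        assertEquals(200, connection.getResponseCode());

        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }
        assertTrue("response should contain the customer id",
                body.toString().contains("\"id\":42"));
    }
}
```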

In other words, services testing is mostly automated testing. It requires test automation practices as well as the corresponding infrastructure and people skills. If you miss that fact, get ready for mismatches in the process, especially when there are multiple integrated services with complex functionality.

How does this show up? Firstly, as an orientation towards simplicity of test design rather than simplicity of test execution, not to mention such things as maintainability, extensibility and portability. That is a typical "artifact" of manual GUI-level testing. For GUI-level testing we indeed write test scenarios, which are usually a structured representation of natural-language user instructions and expected results. Once we have a scenario we can repeat it as many times as we need, so the most "technical" part there is test design, and it works fine. But with services we see a lot of simple tests without multi-step scenarios: typically the verification that a single command returns the expected result for a valid or invalid combination of parameters (see the sketch below). We can even identify specific classes of tests applicable to many services, and we have to pre-define some environment properties. It means that in addition to test design we need an architecture stage before test design starts; otherwise we will run into a number of problems later, when changes have to be made.
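
As a sketch of such a class of tests: the same hypothetical command is called with different parameter combinations, and the only thing that changes is the expected status code. A parameterized JUnit test expresses this directly; all URLs and parameter values here are made up.

```java
import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// One test class covers a whole class of checks: the same command with
// valid and invalid parameter combinations.
@RunWith(Parameterized.class)
public class SearchCommandParameterTest {

    @Parameters
    public static Collection<Object[]> cases() {
        return Arrays.asList(new Object[][] {
            { "query=books&limit=10", 200 },  // valid combination
            { "query=books&limit=-1", 400 },  // invalid limit
            { "limit=10",             400 },  // missing mandatory parameter
        });
    }

    private final String queryString;
    private final int expectedStatus;

    public SearchCommandParameterTest(String queryString, int expectedStatus) {
        this.queryString = queryString;
        this.expectedStatus = expectedStatus;
    }

    @Test
    public void commandReturnsExpectedStatus() throws Exception {
        URL url = new URL("http://test-host:8080/search?" + queryString);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        assertEquals(expectedStatus, connection.getResponseCode());
    }
}
```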

Secondly, with such an approach testing remains a separate activity performed on request. It means that a dedicated test engineer has to start the tests after the service is set up on some environment, even when all that is needed is to run a couple of sample commands. It is just another bottleneck on the test engineer.

Thirdly, the overall solution tends to operate only within the testing infrastructure. There are plenty of systems for tracking test scenarios, requirements and bugs, and they should be used anyway, but services testing may require additional resources used by the test software, typically various test data. Such data should be stored centrally; from time to time we have to change it and be able to revert the changes, and we have to keep the whole solution synchronized. That calls for a source control system, and sometimes it even has to be explained to people why it should be used, especially when several people work with the same set of resources.

These are typical mismatches that make the overall solution rather semi-automated, with a lot of actions still left to the test engineer. Maybe that is enough for some projects, but in general such an approach brings a lot of potential problems.

So, when we create tests for services we should also take care of how to configure, run and maintain them with minimal effort. Partially this can be covered by a richer infrastructure that includes not only tracking solutions but also parts of the development infrastructure: continuous integration servers (CruiseControl, Hudson), build engines (Ant, NAnt, Maven) and source control systems (CVS, SVN, Git).

2. A tool is not enough. We should look for a solution.

Another typical mistake in test automation is being too tool-oriented. In other words, as soon as test automation is needed, such people start looking for tools.

Well, I wouldn't say that is bad. I would even say that we definitely need some tool. But we should remember that a tool merely assists people in their activities.

At the same time we should always look at the overall system and the overall process, which involve a lot of other software. And one more thing: the tool by itself does not solve the problem yet. Generally speaking, when we have a problem to solve we should look not for a tool but for a solution (playing Captain Obvious for a while). What's the difference? Unlike a tool, a solution is a set of different pieces of software integrated with each other and driven by some engine, plus the practices and process of using all this stuff.

What is specific about services testing? The key point is that services usually have no GUI; there is a specialized interface for interacting with them, typically some protocol like HTTP or TCP. Basically, we send some information, receive a response and analyze it. That is the most common workflow for services testing.

Of course, there are dedicated tools for that. But do we really need a dedicated tool for such simple operations? Any mainstream programming language has libraries for interacting with standard protocols. Moreover, when a protocol is not standard it is hardly possible that any existing tool will handle it. And almost every mainstream language has test engines (JUnit, Test::Unit, NUnit, TestNG) with reporting features. In general, development processes and practices evolve more intensively than the corresponding processes and practices in testing (simply because testing is an auxiliary activity for software development, unlike programming itself). So we can easily reuse the same infrastructure as development does.

For example, just look at .NET developers. The only tool they actually see is Visual Studio or some other IDE: all bugs, tasks and other artifacts are tracked there, with all the infrastructure included. So a reasonable question appears: why shouldn't testing do the same instead of keeping a "zoo" of various software? Well, nothing prevents us from doing that except the "fear" of magic words like "C#", "Java", "Ruby" or "unit tests".

Actually, service tests have a quite primitive structure. The closest analog in development is unit tests. So basically we can wrap our request-sending operations with some kind of API, and then all our tests are effectively unit tests for that API (a sketch follows below). The whole solution is stored under source control, the tests are driven by test engines and the corresponding build engines like Ant, NAnt, Maven or Rake, and all of this is plugged into a continuous integration solution. That gives us a solution supporting test design (actually coding), automated execution, automated results notification and full integration into development. For example, when we need to run some smoke tests right after the service is built and/or deployed, we can arrange an additional task triggered by that event, and no test engineer interaction is needed as long as the tests pass. At the same time the testing solution stays transparent to developers.
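
A rough sketch of that idea, with hypothetical class and endpoint names: a thin client wraps the request sending, and the test itself reads like an ordinary unit test for that client.

```java
import static org.junit.Assert.assertEquals;

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

// API layer: a thin client hiding the request sending. In a real solution it
// would live in its own module; it sits in the same file to keep the sketch short.
class OrderServiceClient {
    private final String baseUrl;

    OrderServiceClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    int getOrderStatus(long orderId) throws IOException {
        URL url = new URL(baseUrl + "/orders/" + orderId);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("GET");
        return connection.getResponseCode();
    }
}

// The tests themselves look exactly like unit tests for that API.
public class OrderServiceTest {

    private final OrderServiceClient client =
            new OrderServiceClient("http://test-host:8080");

    @Test
    public void existingOrderIsFound() throws IOException {
        assertEquals(200, client.getOrderStatus(12345));
    }

    @Test
    public void unknownOrderReturnsNotFound() throws IOException {
        assertEquals(404, client.getOrderStatus(99999));
    }
}
```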

Of course, we can find some tool and even pay money for it. But is there any guarantee that this tool does everything we want in a way convenient to us? And most likely it is not integrated with the other infrastructure components. So before concentrating on some tool, just think about the costs. Are you ready to invest extra money only because people lack the necessary skill set? The question matters because the investment in testing may turn out to be bigger than the losses from having no testing at all, and if that happens there is a reason to ask whether we need such testing at all.

3. Testing should be done on multiple layers

Usually, services are located on some remote server, and a request may pass through various pieces of infrastructure: load balancers, proxies, security services. Each of these components can introduce its own errors. It means that in the most general case the result of accessing the service directly and the result of going through the additional infrastructure can differ.

If there are additional levels between the user and the service, we should check access to the service through each of them, so that any problem can be localized properly. This is another distinctive feature of services testing; a sketch of such a layered check is shown below.
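
A possible sketch of such a layered check, assuming hypothetical host names for the application node and the load balancer in front of it: the same request is sent directly and through the balancer, and the results are compared.

```java
import static org.junit.Assert.assertEquals;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class LayeredAccessTest {

    private static final String DIRECT_NODE   = "http://app-node-1:8080";
    private static final String LOAD_BALANCER = "http://balancer";

    // Perform the same call against a given base URL and return the body.
    private String get(String baseUrl) throws IOException {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(baseUrl + "/status").openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();
    }

    @Test
    public void balancerReturnsSameResultAsDirectAccess() throws IOException {
        // If this fails while direct access works, the problem is in the
        // infrastructure between the user and the service, not in the service.
        assertEquals(get(DIRECT_NODE), get(LOAD_BALANCER));
    }
}
```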

4. Tests should be as simple as possible

Actually, this is a general practice of automated test design: the simpler the tests are, the easier they are to develop. Many times I have encountered cases where people wrapped specific commands into complex structures, plus a number of templates describing input and output data. As a result, we get an amorphous "mass" of files in various formats which is hard to maintain and, at the very least, hard to understand.

From a technical point of view, services testing is a simple task, so the solution for it should be simple too.

For this reason it is better to split the whole solution into several separate layers:
  1. Core
  2. API
  3. Tests
  4. Resources

Each layer is formed according to strict rules and has its own requirements.
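
As a small illustration of how the layers meet in a single test (with a hypothetical resource file and endpoint): the Resources layer keeps the expected data in a versioned file, the Core and API parts hide stream reading and request sending, and the Tests layer contains only the readable scenario.

```java
import static org.junit.Assert.assertEquals;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class OrderResponseTest {

    // Core: a tiny helper for reading a whole stream as text.
    private static String readAll(InputStream in) throws IOException {
        StringBuilder text = new StringBuilder();
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(in, "UTF-8"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                text.append(line);
            }
        }
        return text.toString();
    }

    // Resources: the expected response lives in a versioned file, not in code.
    private String expectedResponse() throws IOException {
        return readAll(getClass().getResourceAsStream("/expected/order-42.json"));
    }

    // API: request sending hidden behind a small method.
    private String actualResponse() throws IOException {
        HttpURLConnection connection = (HttpURLConnection)
                new URL("http://test-host:8080/orders/42").openConnection();
        return readAll(connection.getInputStream());
    }

    // Tests: the scenario itself is a single readable assertion.
    @Test
    public void orderResponseMatchesVersionedExpectation() throws IOException {
        assertEquals(expectedResponse(), actualResponse());
    }
}
```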

A lot of what is described here is applicable to other types of testing as well, but for services these are almost the first things to pay attention to. Some of it may seem complex and may require additional skills. Yet no matter what tool or solution you take, you have to learn how to use it and how to program your tests, so some programming practices are needed in any case. At the same time it is another reason for people to grow and improve their knowledge, and a good opportunity to collaborate with developers. So we shouldn't be afraid of something new: whatever approach we choose, we will run into the same problems, and if some of them are avoided in one place, another place will bring its own challenges. In the end it is up to you which difficulties you prefer.
