The Need for Dynamic Testing

"The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware 2:14 AM, Eastern time, August 29th. In a panic, they try to pull the plug." - Terminator 2 : Judgement Day

We're not writing a test automation system that will become self-aware.  So far, our tools have never said (to use another Sci-Fi reference), "Sorry Steve, I can't do that."  But we are writing tests that automatically create other tests, and tests that respond dynamically to the system they're presented with.

In this blog I want to explain why, and in later blogs we will talk more about how we do it.

All Tests Are Equal.  Some Are More Equal Than Others.

In the world of finance and trading systems, it's easy to spot an inexperienced tester.  Actually there are lots of ways, but a giveaway is when he/she says "we should test all combinations".

A moment's thought reveals that, for any non-trivial financial system, this is not possible, and it is not cost-effective even to attempt to get close.  It can be too expensive to build the tests, too expensive to run the tests, too expensive to maintain the tests... or, frequently, all three.

Testing all combinations is not cost-effective because it assumes that all test cases have equal value.  In reality, testing combinations that are already in use is much higher value than testing combinations that are not used.  Combinations that might be used sit somewhere in the middle.  A skilled tester understands the law of diminishing returns.  A skilled tester understands the domain.  A skilled tester knows where to focus the effort.

But if you employ the services of a skilled tester once (e.g. to build an automated regression pack), what happens over time?  The perceived value of tests changes over time as the usage of the system changes.  You might end up with too much coverage (spending too much time executing tests, even if they are automated) of parts of the system that are no longer used, and not enough coverage of parts that are used now.

The standard response from most test consultancies at this point would be "hire more of our testers". 

We think there is a smarter way.

We need tests that dynamically react to changes in the system.

Necessary, but Irrelevant

Perhaps only an automated tester could dismiss something as necessary but irrelevant, yet doing so is a crucial part of what we do.  Failure to understand this is the root of the majority of test maintenance costs.

Typically, at least 50% of the data in an automated test is necessary (the system won't work if you don't supply it) but irrelevant (it has no bearing on what you are trying to test).  Sadly, this is the portion that is most likely to change and cause the test to stop working.

To give an example: I want to test an interest rate swap that spans a period in which there are (unusual) holidays, and I want to confirm the correct valuation.  To capture this trade I need to supply a large number of attributes that have no bearing on the outcome.  I almost certainly need to record the counterparty, some sort of trading book, the name of the person who captured the trade, etc.  These are necessary, but irrelevant.

Necessary, but Irrelevant is the Achilles' heel of traditional automated testing.  Everything works fine until, for example, a counterparty name changes and a large number of tests fail.  We don't care about the counterparty ... apart from the fact that we're spending money maintaining automated tests because something irrelevant has changed.
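
To make the idea concrete, here is a minimal sketch in Python.  The names are hypothetical (a trading_system object with a find_any lookup, not any real API); the point is that the test pins down only the attributes it actually cares about and resolves the necessary-but-irrelevant ones from the system at run time.

    from datetime import date

    def make_swap_test_trade(trading_system):
        """Build a swap trade for a holiday-spanning valuation test."""
        return {
            # Relevant: the attributes this valuation test is actually about.
            "start_date": date(2023, 12, 20),
            "end_date": date(2024, 1, 10),
            "notional": 10_000_000,
            # Necessary but irrelevant: look them up rather than hard-code them.
            # If a counterparty is renamed or a book is retired, the test simply
            # picks another valid one and keeps working.
            "counterparty": trading_system.find_any("counterparty"),
            "book": trading_system.find_any("trading_book"),
            "trader": trading_system.find_any("user"),
        }

Only the relevant attributes are fixed; when something irrelevant changes in the system, the lookups change with it and the test carries on.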

We need tests that dynamically react to changes in the system.

Blessed are the Framework Makers (?)

Most of the systems we test are not fixed in scope: they are built upon frameworks, with almost boundless configuration possibilities.

If you are the user of such a framework, the testing is hard enough.  But spare a thought for the maker of the framework.

If you are the maker of a framework, how do you begin to test it efficiently?  You can't test all combinations (unless you have an infinite supply of inexperienced testers, or monkeys ... it amounts to the same thing).  But you can't even test the important combinations ... because what is important depends upon the user of the framework.

Wouldn't it be great if you could observe how your customers were using the framework, and you could dynamically re-direct your finite QA resources to the parts of the system that are used the most?

What you need are tests that dynamically react to the usage of the system.
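
As a sketch of that idea (again with hypothetical names, and standard-library Python rather than a description of our implementation): harvest usage statistics from production, then spend a fixed testing budget on the configurations that are actually used the most.

    from collections import Counter

    def choose_configurations_to_test(usage_log, budget):
        """Pick framework configurations to test, weighted by observed usage.

        usage_log: configuration identifiers observed in production.
        budget: how many configurations we can afford to test this cycle.
        """
        usage = Counter(usage_log)
        # Spend the limited budget on what customers actually use; as usage
        # shifts over time, the testing focus shifts with it automatically.
        return [config for config, _count in usage.most_common(budget)]

    # Example: heavy swap usage, occasional bond forwards.
    observed = ["fx-swap", "ir-swap", "fx-swap", "bond-fwd", "fx-swap", "ir-swap"]
    print(choose_configurations_to_test(observed, budget=2))  # ['fx-swap', 'ir-swap']

The same principle scales up: replace the hard-coded list with whatever usage telemetry the framework already collects, and re-run the selection each test cycle.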

Purists: An Aside

There is an argument that says "if the system should support X, then you should test that the system supports X".  That doesn't sound controversial, and as a tester it can be hard to get past this: if the system should support something but doesn't, it is a dereliction of duty not to test it and report the problem.

But that argument assumes two things that are almost certainly not true:

  1. We already have good test coverage
  2. We have unlimited resources 

In the real world we constantly deal with the incomplete, the underfunded and the misunderstood.  In this world we make prioritisation decisions all the time.  We decide what to test... and also what not to test.  Only inexperienced testers don't consciously decide what not to test.

However, what we all need... inexperienced and experienced testers alike... is the ability to change our prioritisation decisions over time.  As the system under test changes, our tests need to change.

We need tests that dynamically react to changes in the system.

Who Controls the Past Doesn't Control the Future

In politics it may be, as Orwell said, that "Who controls the past controls the future".  But in the world of testing that is a dangerous assumption.

A test that works today is a test that might work tomorrow, and is a test that will cease to work at some point in the future.  From a test perspective, having control of the past is necessary: it is where tomorrow starts.  But it is not sufficient for the future.

If the system you are testing changes in the future, you are probably left offering the client one of two options:

  1. Ask the client for more money to fix the tests that no longer work
  2. An "out of office" email reply.

We think that tests that adapt to the system offer a third way.  We will discuss how we have implemented dynamic testing, where it has worked and, to be honest, some of the challenges.