If we adhere to best testing practice, we will emerge from the deep jungle of system implementation with a carefully crafted suite of automated system tests which can then be used for ongoing regression testing on the savanna plains of 'business as usual'.
Back in the real world, the comprehensive pack of automated tests that we intended to build during development never quite materialised, and system assurance was instead performed with a heroic, but largely manual, testing effort.
But at least the system is 'live' and generally bug-free.
...and now we want to fix those niggly bugs that are spoiling the 'go-live' party, and to add as enhancements the functionality that was previously descoped in the mad dash for the watering hole.
But how should we approach system assurance for the necessary, ongoing releases? We can't afford to repeat the full set of tests that we used before (at least not as often as our sprint cycles come round), and we don't have a ready-to-roll automated regression test suite covering the full range of the new system's functionality. Furthermore, our system is inherently stateful, and certain defects will only ever manifest themselves against a current copy of Production data.
This is where we may look to holistic regression testing.
Rather than running tests from first principles, each of which creates its own preconditions and has well-defined expected results, holistic regression testing focuses on comparing the results produced by the currently 'live' system with those produced by a second system holding identical business data but configured with the candidate release build.
By performing the same operations on both the master and candidate-build systems and automatically comparing the outputs and results between the two, we can quickly provide assurance that the new build isn't causing regression of core business functionality. The set of operations can be grown incrementally to cover all of the key functions and outputs.
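To make that concrete, here is a minimal C# sketch of the core comparison loop. The ISystemClient interface, its Execute method and the operation names are all hypothetical placeholders for whatever interface (API call, UI driver, report export) your own system exposes.

```csharp
using System.Collections.Generic;

// Minimal sketch of the holistic comparison loop. ISystemClient and its Execute
// method are hypothetical; substitute whatever interface (API, UI driver,
// report export) your own system exposes.
public interface ISystemClient
{
    // Performs a named business operation and returns its output, e.g. a report as text.
    string Execute(string operation);
}

public static class HolisticRegression
{
    public static IEnumerable<string> CompareBuilds(
        ISystemClient master,            // the current 'live' build
        ISystemClient candidate,         // the candidate release build, on identical business data
        IEnumerable<string> operations)  // the key business operations to exercise
    {
        foreach (var operation in operations)
        {
            var masterResult = master.Execute(operation);
            var candidateResult = candidate.Execute(operation);

            if (masterResult != candidateResult)
                yield return $"MISMATCH in '{operation}': master and candidate outputs differ";
        }
    }
}
```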
So how do I achieve this?
The short answer is, 'Hire an expert'.
'Well you would say that!', I can hear you retorting.
OK, you could do this yourself, but here's what you'll need:

- A test runner, or some other means of automatically driving the system under test through all the operations you want to verify
- The ability to deploy multiple copies of your Production system
- A database/repository in which to persist the results from both systems
- A means of programmatically comparing the datasets and outputting useful reports
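To give a flavour of that last ingredient, here is a hedged sketch of the comparison step: given two result sets keyed by some business identifier (trade ID, invoice number, and so on), it reports records that are missing from either side or whose values differ. The record type, keys and field names are illustrative only, not a real comparison API.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative comparison of two result sets keyed by a business identifier
// (e.g. trade ID -> valuation). The types and the shape of the report are hypothetical.
public static class ResultComparer
{
    public static List<string> Compare(
        IDictionary<string, decimal> masterResults,     // results captured from the live build
        IDictionary<string, decimal> candidateResults)  // same operation, candidate build
    {
        var report = new List<string>();

        foreach (var key in masterResults.Keys.Union(candidateResults.Keys).OrderBy(k => k))
        {
            var inMaster = masterResults.TryGetValue(key, out var masterValue);
            var inCandidate = candidateResults.TryGetValue(key, out var candidateValue);

            if (!inMaster)
                report.Add($"{key}: present only in candidate ({candidateValue})");
            else if (!inCandidate)
                report.Add($"{key}: present only in master ({masterValue})");
            else if (masterValue != candidateValue)
                report.Add($"{key}: master={masterValue}, candidate={candidateValue}");
        }

        return report;   // an empty report means no differences were detected
    }
}
```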
We have built automated regression test packs of this kind in around a month (for a moderately complex ETRM system), harnessing the power of SpecFlow together with our own 'Laser Diff' comparison suite. These test harnesses have been put to use immediately, finding defects for our clients before they escape into the wild!
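For readers curious what the SpecFlow side of such a harness might look like, here is a hypothetical binding class: it runs the same report on both environments and asserts that the outputs match. The step wording, the report-running method and the NUnit assertion are placeholders, not our clients' actual test packs or the 'Laser Diff' API.

```csharp
using System;
using NUnit.Framework;
using TechTalk.SpecFlow;

// Hypothetical SpecFlow binding: run the same report on the live and candidate
// environments and assert that the outputs are identical.
[Binding]
public class EndOfDayReportSteps
{
    private string _masterReport;
    private string _candidateReport;

    [When(@"the end of day report is produced on both environments")]
    public void WhenTheEndOfDayReportIsProducedOnBothEnvironments()
    {
        _masterReport = RunReport("master");        // current 'live' build
        _candidateReport = RunReport("candidate");  // candidate release build, identical business data
    }

    [Then(@"the two reports should be identical")]
    public void ThenTheTwoReportsShouldBeIdentical()
    {
        Assert.AreEqual(_masterReport, _candidateReport,
            "Candidate build output differs from the live build for the end of day report");
    }

    private static string RunReport(string environment)
    {
        // Placeholder: call the report export / API endpoint on the named environment.
        throw new NotImplementedException();
    }
}
```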