We're all familiar with scope creep: the pernicious growth of new requirements beyond what was originally envisaged.
But is it all one-way traffic? Does the following sound familiar? Imagine a trading system in which ...
- Requirements are agreed and they include the flexibility to trade all manner of new products.
- Implementation begins but (unknowingly) neglects some requirements.
- System testing covers only a subset of the possible combinations; some gaps go unnoticed.
- User Acceptance Testing is time-constrained, so it covers only a subset of the scenarios.
- As go-live approaches, the focus shifts to what is absolutely needed for go-live (“do they actually need to trade that?”, “does that happen currently?”, etc.).
- "Regression testing" is introduced on this sub-set.
Except that regression testing frequently just means "take a copy of the production system and compare P&L before/after a change".
So the scope of regression testing is determined not by the original requirements, but by what is currently being actively traded: a small subset of the original solution.
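
To make that concrete, here is a minimal sketch of what such a P&L-comparison "regression test" often amounts to. The trade representation, pricing functions, and tolerance below are all invented for illustration; a real system would price a snapshot of the production book through the old and new builds.

```python
# Sketch of a P&L-comparison regression test. Everything here (trade shape,
# pricing functions, tolerance) is a toy stand-in, not any real system's API.

TOLERANCE = 0.01  # largest P&L difference treated as "no change"

def pnl_breaks(trades, price_before, price_after, tolerance=TOLERANCE):
    """Return trades whose P&L moves by more than `tolerance` between
    the old and new builds of the system."""
    breaks = []
    for trade in trades:
        before, after = price_before(trade), price_after(trade)
        if abs(after - before) > tolerance:
            breaks.append((trade["id"], before, after))
    return breaks

# Toy stand-ins for the old and new builds' pricing logic.
def price_old(trade):
    return trade["notional"] * trade["rate"]

def price_new(trade):
    # Imagine a code change that inadvertently alters swap pricing.
    bump = 1.001 if trade["product"] == "swap" else 1.0
    return trade["notional"] * trade["rate"] * bump

# The crucial limitation: only products present in today's production copy
# ever get priced. A product the system supports but the desk does not
# currently trade is never exercised by this test.
production_copy = [
    {"id": "T1", "product": "swap", "notional": 1e6, "rate": 0.03},
    {"id": "T2", "product": "fx_forward", "notional": 5e5, "rate": 0.02},
]

for trade_id, before, after in pnl_breaks(production_copy, price_old, price_new):
    print(f"P&L break on {trade_id}: {before:.2f} -> {after:.2f}")
```

The test is useful as far as it goes, but its coverage is exactly the production copy; nothing outside today's book is ever touched.
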
These regression tests are the only tests run regularly, so over time they come to define the only part of the system that can confidently be said to work.
The difference between the original expectations and the scope of regression testing is a testing gap, and it is indicative of potentially wasted investment: functionality built, but no-one can be confident that it (still) works.
Inverse scope creep.
But that never happens. Right?