Artificial Intelligence (AI) is fundamentally reshaping how software is built. Code is being generated at remarkable speed. Features are appearing faster than anyone expected. Release cadences that once felt ambitious now look leisurely.
On the surface, it’s a triumph of modern engineering.
Under the surface, things are getting interesting, because while development has discovered the fast-forward button, many testing functions are still buffering.
Across industries, AI-assisted development tools are enabling teams to generate code faster, iterate more often, and deliver features at unprecedented velocity. What once took weeks can now take days, and in some cases, hours.
In theory, this is great news.
In practice, many organizations have quietly created a new constraint: testing throughput.
Because while developers are shipping at AI speed, QA capacity in many enterprises still looks suspiciously human.
What we’re seeing across complex enterprise environments, particularly in large ETRM platforms, is a familiar pattern emerging in a new form.
Development velocity increases. Regression demand explodes. Manual testing becomes the limiting factor.
None of this is surprising. It’s simply delivery physics: if you double the rate of change but keep assurance capacity flat, something must give.
Historically, that “something” has been quality.
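The "delivery physics" above can be put in toy arithmetic. The sketch below uses entirely hypothetical numbers (ten changes per week, a fixed testing capacity) purely to illustrate the claim: when the rate of change exceeds flat assurance capacity, the untested backlog grows without bound.

```python
def backlog_after(weeks, changes_per_week, tests_per_week, start=0):
    """Untested-change backlog after `weeks`, given a flat weekly test capacity."""
    backlog = start
    for _ in range(weeks):
        # Each week, new changes arrive and capacity clears what it can.
        backlog = max(0, backlog + changes_per_week - tests_per_week)
    return backlog

# Before AI assistance: 10 changes/week, 10 tested/week -> backlog stays flat.
print(backlog_after(12, 10, 10))   # 0

# Rate of change doubles, assurance capacity stays flat:
print(backlog_after(12, 20, 10))   # 120 untested changes after one quarter
```

The numbers are invented, but the shape is not: any sustained gap between change rate and assurance capacity compounds linearly, week after week, until something gives.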
At this point, many organizations reach for the corporate comfort blanket:
“We should probably buy an automation tool.”
To be clear, and I am slightly biased here, automation absolutely matters. The right skills and tools are essential.
But the uncomfortable truth is most testing bottlenecks are cultural before they are technical.
High-performing engineering organizations treat testing as a first-class engineering discipline, not a downstream checkpoint.
Where that mindset hasn’t landed, automation often arrives full of promise and quietly retires into partial coverage and brittle scripts. Many enterprises have at least one such initiative quietly living in their attic.
AI doesn’t just make teams faster. It makes systems more volatile.
AI-assisted development typically drives a higher rate of change, a broader regression surface, and more volatility in system behaviour.
In environments like energy trading, this becomes particularly lively, because ETRM platforms are deeply integrated and a single change can ripple across deal capture, risk, and settlement.
Put simply, manual-heavy testing models do not scale gracefully into this world.