
Artificial Intelligence (AI) is fundamentally reshaping how software is built. Code is being generated at remarkable speed. Features are appearing faster than anyone expected. Release cadences that once felt ambitious now look leisurely.

On the surface, it’s a triumph of modern engineering.

Under the surface, things are getting interesting, because while development has discovered the fast-forward button, many testing functions are still buffering.

Across industries, AI-assisted development tools are enabling teams to generate code faster, iterate more often, and deliver features at unprecedented velocity. What once took weeks can now take days, and in some cases, hours.

In theory, this is great news.

In practice, many organizations have quietly created a new constraint: testing throughput.

Because while developers are shipping at AI speed, QA capacity in many enterprises still looks suspiciously human.

 

The bottleneck nobody meant to build

What we’re seeing across complex enterprise environments, particularly in large ETRM platforms, is a familiar pattern emerging in a new form.

Development velocity increases. Regression demand explodes. Manual testing becomes the limiting factor. Cue:

  • Release delays
  • Growing defect leakage
  • Increased change risk.

None of this is surprising. It’s simply delivery physics: if you double the rate of change but keep assurance capacity flat, something must give.

Historically, that “something” has been quality.

"If you double the rate of change but keep assurance capacity flat, something must give."

 

This is not (just) a tool problem

At this point, many organizations reach for the corporate comfort blanket:

“We should probably buy an automation tool.”

To be clear, and I am slightly biased here, automation absolutely matters. The right skills and tools are essential.

But the uncomfortable truth is most testing bottlenecks are cultural before they are technical.

 

"Most testing bottlenecks are cultural before they are technical."

 

High-performing engineering organizations treat testing as:

  • Continuous, not a phase
  • Engineered, not manual
  • Shared, not siloed
  • Strategic, not administrative.

Where that mindset hasn’t landed, automation often arrives full of promise and quietly retires into partial coverage and brittle scripts. I know many enterprises that have at least one such initiative quietly living in their attic.

 

Why AI is pouring fuel on the fire

AI doesn’t just make teams faster. It makes systems more volatile.

AI-assisted development typically drives:

  • Larger change volumes
  • More frequent releases
  • Increased integration churn
  • Faster business expectations
  • Less tolerance for lengthy regression windows.

In environments like energy trading, this becomes particularly lively.

Because:

  • The regression surface is wide
  • The workflows are interdependent
  • The financial exposure is real
  • And production surprises generate “audience participation.”

Put simply, manual-heavy testing models do not scale gracefully into this world.

The organizations getting this right

 

Working with several of the world’s leading trading organizations across energy, commodities and capital markets, we see that the teams navigating AI acceleration successfully are not the ones simply generating the most code.

They are the ones who have discreetly industrialized quality and can continuously prove that change is safe.

Invariably this is underpinned by four key differentiators:

They treat automation as infrastructure, not decoration

Mature organizations move beyond scattered scripts toward repeatable, deterministic regression capability.

This means:

  • API and service-level coverage
  • CI/CD-native execution
  • Data-driven test design
  • Stable environment integration
  • Test assets that someone still understands six months later.
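The data-driven, API-level idea above can be sketched in a few lines. This is a minimal illustration, not a real framework: `PricingClient` and its `value_trade` method are hypothetical stand-ins for a genuine service client, and the cases are invented. The point is the shape: coverage lives as data rows, execution is deterministic, and an empty failure list is something a CI/CD pipeline can gate on.

```python
# Sketch: data-driven, API-level regression checks.
# "PricingClient" and "value_trade" are hypothetical stand-ins for a real
# service client; in practice this would call the platform's API.

from dataclasses import dataclass


@dataclass
class PricingClient:
    """Stand-in for a real service client; returns deterministic values."""
    fx_rate: float = 1.25

    def value_trade(self, notional: float, side: str) -> float:
        sign = 1 if side == "BUY" else -1
        return sign * notional * self.fx_rate


# Test cases live as data, not as hand-written scripts: adding coverage
# means adding a row, not writing a new test.
CASES = [
    # (notional, side, expected_value)
    (100.0, "BUY", 125.0),
    (100.0, "SELL", -125.0),
    (0.0, "BUY", 0.0),
]


def run_regression(client: PricingClient) -> list[str]:
    """Run every case; return a list of human-readable failures."""
    failures = []
    for notional, side, expected in CASES:
        got = client.value_trade(notional, side)
        if abs(got - expected) > 1e-9:
            failures.append(f"{side} {notional}: got {got}, want {expected}")
    return failures


if __name__ == "__main__":
    # An empty failure list is the deterministic "change is safe" signal
    # a CI/CD pipeline can gate a release on.
    print(run_regression(PricingClient()) or "PASS")
```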

The goal is not simply “more automation.” It is automation that survives contact with reality.

They shift left… but also right. Because reality is messy

Quality leaders no longer treat this as a binary debate. They do both, deliberately and with clear intent, because defects, inconveniently, don’t respect organizational swim lanes.

There is an important nuance often lost in the industry enthusiasm for shifting in all directions.

Shifting left is essential. Catching defects earlier is simply good engineering economics.

Shifting right also has value, providing real-world validation and feedback: production telemetry reveals behaviours that pre-release environments can’t always replicate.

But neither movement, on its own, solves the central challenge in complex enterprise platforms.

In high-stakes systems such as trading environments, the dominant risk remains deterministic cross-workflow regression. Organizations that over-rotate toward unit coverage on the left or observability on the right often discover an uncomfortable middle gap: they can build quickly and monitor extensively, yet still lack repeatable proof that critical business flows remain intact after change. Closing that gap requires industrialized, API-level automation that scales with release velocity; otherwise the bottleneck simply moves rather than disappears.
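What “repeatable proof that a business flow remains intact” can look like is worth making concrete. The sketch below is purely illustrative: the three stage functions are hypothetical stand-ins for API calls into trade capture, valuation and settlement, and the trade fields are invented. The value is the assertion at the end: the same input must reach the same terminal state and value on every run, which is exactly the check that neither unit tests on the left nor observability on the right provide.

```python
# Sketch: a deterministic cross-workflow check. Each stage function is a
# hypothetical stand-in for an API call into a real platform workflow.

def capture_trade(trade: dict) -> dict:
    """Stand-in for the trade-capture workflow."""
    return dict(trade, status="CAPTURED")


def value_trade(trade: dict) -> dict:
    """Stand-in for the valuation workflow."""
    return dict(trade, value=trade["qty"] * trade["price"], status="VALUED")


def settle_trade(trade: dict) -> dict:
    """Stand-in for the settlement workflow."""
    return dict(trade, status="SETTLED")


def run_flow_check() -> dict:
    """Drive one trade through the whole chain and assert the outcome."""
    trade = {"id": "T-1", "qty": 10, "price": 50.0}
    result = settle_trade(value_trade(capture_trade(trade)))
    # Repeatable proof the end-to-end flow survives change: same input,
    # same terminal state and value, every run.
    assert result["status"] == "SETTLED"
    assert result["value"] == 500.0
    return result
```

Each stage asserts nothing on its own; only the chained check proves the flow still hangs together after a change anywhere in it.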

They evolve QA into Quality Engineering

In high-velocity environments, QA teams cannot remain primarily manual executors.

The role is evolving towards:

  • Test architecture
  • Automation strategy
  • Risk-based coverage
  • Test data engineering
  • Environment orchestration
  • Quality intelligence.

In other words, fewer people clicking through screens, and more people designing systems that make clicking unnecessary.

Leadership stops treating testing as a cost to minimise

This is the quiet differentiator. Where leadership views testing as overhead, organizations inevitably hit the same ceiling.

Where leadership recognizes testing as:

  • A release enabler
  • A risk control
  • A velocity multiplier

…the conversation changes, and so do the outcomes.

 


 

The strategic reality

AI-accelerated development is not slowing down. If anything, we are still in the warm-up act.

This leaves organizations with a choice:

  • Accelerate development and hope testing keeps up, or
  • Deliberately engineer quality to scale with change.

One of these approaches produces confident, repeatable delivery. The other produces much more exciting incident reviews.

 

Closing thought

In the AI era, the winners will not simply be those who can generate code the fastest.

They will be those who can change rapidly and continuously prove that the change is safe.

"Because speed without assurance isn’t innovation. It’s just added risk… delivered faster."