All posts

Testing the complete truth! Why testing more than the responses is critical

Steve Morland
October 2, 2025
5 min read

Quality engineering is a significant part of what we do at Leighton. It has its own dedicated community of best practice, headed by Sunil Bagga, and helps our customers define quality across our portfolio. In my 12 years with Leighton, and 25 years in this industry, I’ve seen significant change, especially in quality engineering: automated testing has been pushed further left, software engineers now provide considerable test coverage in their automated suites, and quality engineering layers a different perspective on top to give us a galvanised approach to quality.

I recently sat down with some of the Leighton team to discuss this, opening with a provocation:

“Is testing a response enough? How do we test the complete truth?”

It’s a question that matters more than ever. Today’s systems are sprawling, distributed, and often invisible in their complexity: microservices, event-driven architectures, serverless functions. A response might say “success”, but the truth could be very different.

That’s why I gathered five colleagues, each with a different angle on testing:

Sasi Sathya — QA Automation Lead. Specialist in frameworks that adapt to different architectures.

Sunil Bagga — QA Test Manager. Brings 15 years of experience looking beyond surface responses.

Lee Gilmore — Head of AWS Practice. An architect who designs with testability in mind.

Greg Farrow — Principal Engineer. A developer with QA roots, focused on prevention.

Zain Mustafa — QA Automation Engineer. Thinks of automation as a driver of better products.

What unfolded wasn’t just a discussion about testing; it was a call to build better platforms and to break down the barriers between quality engineering and software engineering.

“You’re really testing the journey.” — Sasi Sathya

I kicked things off with Sasi because automation is where many people’s minds go first. He reframed the conversation immediately.

“It’s about testing the journey of the application or architecture,” he said. “You’re really testing the actual behaviour.”

Sasi explained that different architectures demand different approaches. “With event-driven systems, you can’t always rely on end-to-end tests,” he pointed out. “You need ways to capture the asynchronous behaviour, otherwise you miss the real picture.”
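Sasi’s point about capturing asynchronous behaviour often comes down to polling for an eventual outcome rather than asserting immediately. A minimal sketch in Python, with an illustrative in-memory event store standing in for a real broker:

```python
import time

def wait_for(condition, timeout=2.0, interval=0.05):
    """Poll a condition until it returns a truthy value or the timeout expires.

    Useful when asserting on asynchronous outcomes (e.g. an event consumer
    writing a record), where a one-shot assertion would race the system.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")

# Simulated event store that an asynchronous consumer populates.
event_store = []

def fake_consumer():
    event_store.append({"type": "OrderCreated", "id": "o-1"})

fake_consumer()  # in reality this would run on another thread or process

# The test asserts on eventual behaviour, not on an immediate response.
received = wait_for(lambda: [e for e in event_store if e["type"] == "OrderCreated"])
assert received[0]["id"] == "o-1"
```

The same `wait_for` helper works for any eventual assertion: a row appearing in a table, a message landing on a queue, a file materialising in storage.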

He added that test strategies should be layered. “Unit tests, component tests, integration tests; each has its role,” Sasi said. “But they only give you confidence if they’re stitched together. That stitching is where QA and engineering need to work hand in hand.”

“How do we know it really saved?” — Sunil Bagga

Sunil followed with a reminder from years in the trenches.

“An API can give you a nice response back,” he warned, “but how are we ensuring that it’s correctly storing information at the DB layer? How do we confirm that?”

He explained that too often, testing stops at the surface. “If you just look at the front-end or the API response, you’re blind to what’s happening underneath,” he said. “We need to check the integrations, the data pipelines, even the events firing in the background.”
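Sunil’s challenge can be made concrete: call the API, then assert against the datastore as well as the response. A hedged sketch, where `create_user` and `users_table` are purely illustrative stand-ins for a real handler and database:

```python
users_table = {}  # stands in for the DB layer

def create_user(payload):
    """Fake API handler: returns 201, but we still verify the write below."""
    users_table[payload["id"]] = {"name": payload["name"], "active": True}
    return {"status": 201, "body": {"id": payload["id"]}}

response = create_user({"id": "u-42", "name": "Ada"})

# Surface check: the response says success...
assert response["status"] == 201

# ...but the complete truth lives at the DB layer, so check that too.
stored = users_table.get("u-42")
assert stored is not None
assert stored == {"name": "Ada", "active": True}
```

In a real system the second half of the test would query the actual database or read replica, not an in-memory dict, but the shape of the check is the same.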

Sunil’s frustration was apparent: “Even today, I see QA being brought in too late. We need to be part of the conversation at the start, with architects, with devs, so that the truth can actually be tested.”

“Make the truth visible.” — Lee Gilmore

From there, I turned to Lee for the architect’s perspective. He didn’t hesitate.

“If you don’t design for testability, you’re setting QA up to fail,” Lee said flatly.

He gave a practical example from cloud systems: “We build observable systems, using audit logs in datastores, to enable engineers and quality engineers to monitor the full truth. Without those logs, you can’t join the dots. Observable systems are testable systems.”
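Lee’s audit-log example might look like this in miniature: every significant action writes a structured entry that tests and monitors can both query. The names here (`record_audit`, `archive_order`) are illustrative, not a real API:

```python
audit_log = []  # stands in for an audit table or stream in a datastore

def record_audit(actor, action, entity_id):
    """Append a structured audit entry so tests and monitors share one truth."""
    audit_log.append({"actor": actor, "action": action, "entity": entity_id})

def archive_order(order_id):
    # ...business logic would live here...
    record_audit("system", "ORDER_ARCHIVED", order_id)

archive_order("o-7")

# QA and engineering can both assert against the same audit trail.
entries = [e for e in audit_log if e["action"] == "ORDER_ARCHIVED"]
assert entries == [{"actor": "system", "action": "ORDER_ARCHIVED", "entity": "o-7"}]
```

Because the audit trail is part of the system rather than part of any one test, it serves monitoring in production and assertions in test environments alike.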

Lee argued that platforms must include observability by default. “It’s not enough to have logging in one service and monitoring in another,” he said. “You need to design an integrated picture so the entire technical team are looking at the same truth.”

“The cheapest bug to fix is the one you never build.” — Greg Farrow

Greg, who began his career in QA before transitioning to development, discussed the mindset difference.

“Where QAs have a real special power,” he said, “is their ability to think about what’s being tested in a different way to the developer.”

He explained: “Developers have a bias; we test to prove code works. QAs test to break it, to imagine how things might fail.”

Then came his killer line: “The cheapest way to fix a bug is to prevent it. And the way you prevent it is by having QA involved early, shaping requirements, influencing design, writing the behaviours that matter.”

Greg also touched on test maintenance. “If your tests are too tightly coupled to the implementation, you’ll rewrite them every time,” he said. “That’s false confidence. You should be testing behaviours, not the way the code happens to be written.”
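Greg’s distinction can be shown with a small example: assert on what the function promises, not on how it is written. `normalise_tags` is a made-up function for illustration:

```python
def normalise_tags(tags):
    """Behaviour under test: trim, lowercase, and dedupe tags, order preserved."""
    seen, out = set(), []
    for tag in tags:
        t = tag.strip().lower()
        if t and t not in seen:
            seen.add(t)
            out.append(t)
    return out

# Behavioural test: survives any rewrite of the function's internals.
assert normalise_tags(["QA", "qa ", "Dev"]) == ["qa", "dev"]
assert normalise_tags([]) == []

# By contrast, asserting that the function uses a set internally (an
# implementation detail) would break on every refactor without catching bugs.
```

The behavioural assertions stay green through any refactor that keeps the contract, which is exactly the confidence a test suite is supposed to provide.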

“Automation can make the product better.” — Zain Mustafa

Finally, Zain wrapped things up with a fresh take on automation.

“Automation doesn’t have to be just testing scripts,” he said. “If you can deliver a holistic development and testing platform, stitching together APIs, credentials, and databases, it’s nicer to work with for the client and for us.”

He emphasised that automation should feel like part of the product. “Think about logging in,” he said. “If your automation manages the credentials and handles the APIs, it’s not just a test; it’s part of the workflow that everyone benefits from.”
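Zain’s login example might be sketched like this: the automation layer owns credential handling, so every test and workflow reuses it. `get_token` and `api_get` are hypothetical names, not a real client:

```python
# Hypothetical automation layer that owns the login step, so tests and
# workflows never handle raw credentials themselves.
_token_cache = {}

def get_token(user="ci-bot"):
    """Fetch (or reuse) a credential; real code would call an auth service."""
    if user not in _token_cache:
        _token_cache[user] = f"token-for-{user}"  # stand-in for an OAuth call
    return _token_cache[user]

def api_get(path, user="ci-bot"):
    """Every call goes through the shared auth handling, tests included."""
    token = get_token(user)
    return {"path": path, "auth": token, "status": 200}

resp = api_get("/orders")
assert resp["auth"] == "token-for-ci-bot"
assert get_token() is _token_cache["ci-bot"]  # credential reused, not re-fetched
```

Centralising login like this means a credential rotation or auth change is a one-line fix, not a sweep through hundreds of test scripts.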

Zain’s point landed well: automation isn’t just a gatekeeper; it’s an enabler. Done right, it improves the developer experience, the QA process, and the end-user outcome.

Pulling it together

As the moderator of this conversation, what struck me most was how all these points are very much connected:

  • Sasi showed us why behaviours matter more than responses.
  • Sunil reminded us to validate the hidden layers.
  • Lee pressed for platforms that make truth visible by design.
  • Greg made the case for QA and dev working as one to prevent bugs.
  • Zain reframed automation as part of the product itself.

The common thread? Testing the complete truth isn’t QA’s job alone. It’s a platform problem. The platform must be built by QA and engineering working together, ideally as a single team.

Where do we go from here?

So how do we build platforms that test the truth? Three steps stood out to me from this discussion:

1. Design for observability.

Add trace IDs, logs, and hooks that make behaviours visible across the system. QA can’t test what they can’t see. Observable systems are testable systems.

Observability is one of the key pillars in creating well-architected systems and platforms. Not only are we building the end product that the users interact with, but we are also building the non-functional aspects that allow the end product to achieve operational excellence.
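As an illustration of the trace-ID idea, each hop can emit log lines carrying the same ID so the whole journey can be joined up afterwards. This is a toy, single-process sketch, not a real tracing library:

```python
import uuid

def handle_request(payload, trace_id=None):
    """Attach a trace ID at the edge and propagate it through every hop."""
    trace_id = trace_id or str(uuid.uuid4())
    log = []

    def emit(service, message):
        # Every log line carries the same trace ID, so QA can join the dots.
        log.append({"trace_id": trace_id, "service": service, "msg": message})

    emit("api", f"received {payload}")
    emit("worker", "processed")
    emit("store", "persisted")
    return trace_id, log

trace_id, log = handle_request({"order": "o-9"})

# One trace ID ties the whole journey together across services.
assert {entry["trace_id"] for entry in log} == {trace_id}
assert [entry["service"] for entry in log] == ["api", "worker", "store"]
```

In a distributed system the same idea is usually implemented by propagating a trace header between services rather than a shared in-process list.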

2. Create shared platforms.

Architect and build test harnesses, dashboards, and CI/CD pipelines that QA and engineers co-own: one source of truth, one toolset. When we create a product, we are also building the infrastructure that supports it.

This is something I think we excel at here at Leighton; we are investing in our ‘common approach’, which allows us to reuse components, such as GitHub pipelines, to benefit all our customers and lower the cognitive load of a project, so that the technical team can focus on getting to value quicker.

3. Work as one team.

No handoffs. Let QA shape requirements, influence architecture, and co-author automation with devs and architects. That’s how prevention happens.

Because in the end, a response can lie. But when QA and engineering work as one team, on shared platforms built for visibility and behaviour, we can finally test the complete truth.
