Quality Begins Where the User Journey Ends

Part One of the Humanizing Software Quality Series for Apica

A brand is not defined by a logo or a tagline. Nor is it encompassed by a website or marketing materials.

A brand represents the sum total of all experiences and impressions, good and bad, that customers and employees have with your company. Humans – whether the CTO giving a talk at a tradeshow, or a customer support representative on the phone – make an undeniable impact on the organization’s brand.

However important personal interactions are, in most modern industries, the majority of our interactions with a business now happen through software. Software defines the customer’s journey with a brand – meaning user journeys are at the center of software quality, now more than ever.

What constitutes a user journey?

For the purpose of this series, we’re talking about digital user journeys, which flow through software and infrastructure rather than through people in the field or at service desks. (Employees also have their own connections to digital user journeys that impact customer experience, which we’ll cover later).

A typical user journey includes all the actions in a human user’s session with a website or application – the screens they are presented with, the objects they click on, and the screens and data payloads returned in response.

Along with that primary workflow of on-screen requests and responses, the journey also includes metrics that are directly experienced by users, such as response times, the frequency of interruptions, visual consistency, and accurate results.
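The journey described above can be sketched as a simple test harness: a sequence of named steps, each timed the way a user would experience it. This is a minimal illustration, not Apica’s product or API – the step names, the stand-in actions, and the two-second threshold are all hypothetical assumptions.

```python
import time
from dataclasses import dataclass

@dataclass
class StepResult:
    name: str
    ok: bool          # step returned the expected screen/payload AND met the time budget
    elapsed_ms: float

def run_journey(steps, max_ms=2000.0):
    """Execute each step of a user journey in order, timing it.
    `steps` is a list of (name, callable) pairs; each callable returns
    True when the expected screen or payload came back, False otherwise.
    A step fails if it returns False or exceeds the response-time budget."""
    results = []
    for name, action in steps:
        start = time.perf_counter()
        ok = bool(action())
        elapsed = (time.perf_counter() - start) * 1000.0
        results.append(StepResult(name, ok and elapsed <= max_ms, elapsed))
        if not results[-1].ok:
            break  # a broken step ends the journey, just as it would for a real user
    return results

# Hypothetical journey: each lambda stands in for a real page load or click.
journey = [
    ("load landing page", lambda: True),
    ("search for product", lambda: True),
    ("add to cart", lambda: True),
]
results = run_journey(journey)
```

In a real test, each callable would drive a browser or issue an HTTP request; the point is that the journey – not any single component – is the unit under test.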

All of these aspects of a digital user journey should be tested by a person. There’s nothing wrong with wanting the reassurance of good old-fashioned human-based user acceptance testing (UAT).

Indeed, there was a time when applications were vertical stacks of software atop networks and servers wholly owned by the company. There might have been one or two integrations with data or transaction services that resided elsewhere, but for the most part, we could validate user journeys with procedural test scripts, and have software testers run regressions and UAT cycles for any features that changed.

In today’s distributed cloud application environments, no matter how many front-end tests an organization runs, there are still infinite ways something could go wrong in front of users.

If we are to save our applications from chaos, we must find ways to make those infinite failure possibilities much less probable with more thorough testing. How can we still follow that user journey, when almost every aspect of an application is made up of API calls and ephemeral microservices that are hidden behind the scenes?

Distracted by distributed, dynamic architectures

The decentralization of our software architectures has become a Gordian knot that UAT and traditional functional and load testing scripts cannot cut through. Almost any on-screen element can change dynamically between test runs because of calls to external services and variable data inputs, so it always seems like the software is changing too fast to test.

As a consequence, devtest teams often turn away from testing user journeys and focus their attention elsewhere, creating several new side effects in the process.

Hygiene before health. When sequential test plans and scripts break against dynamic on-screen results, teams start spending time conducting test hygiene and repairing broken tests, instead of focusing on the health of the user experience.

Component-level testing. Changes happen much more slowly at the middle tier and integration layers than at the UI, so testing third-party services and back-ends directly can offer longer-lasting reassurances about the integrity of individual components – even if those components don’t behave as expected once they interact in production.

Metrics and observability data hoarding. The cost of storage alone used to make this practice prohibitive, but now that bandwidth and capacity are cheap, many companies capture and warehouse the growing volume of data emanating from observability agents and streaming sources, then sort and analyze it later.

All of these changed habits are natural reactions to the challenge of testing user journeys…

Read Part 1 of the Humanizing Software Quality series at Apica.com here: https://www.apica.io/quality-begins-where-the-user-journey-ends/ 


©2022 Intellyx LLC. Intellyx retains editorial control over the content of this document. At the time of writing, Apica is an Intellyx customer. Image sources: Jem Yoshioka, Flickr CC2.0 license, FontAwesome (turkey icon).

Principal Analyst & CMO, Intellyx. Twitter: @bluefug