
What is QA in digital?

If you’re buying software development services, QA affects your final product far more than you’d expect. It pays to specify the exact testing you require. This guide will help you understand what QA is, does, and means for you.

User experience design’s recent and very vocal soul-searching reminded us of the digital development subdiscipline forever locked in every other’s shadow: quality assurance (QA).

QA has long been the development discipline least able to advocate for itself. Consequently, QA remains poorly understood, even by software-adjacent experts, practitioners, and sophisticated professional services buyers.

It doesn’t help that “QA” itself means a bunch of different things. Let's clarify what QA means in practical terms.

Digging into QA

You’ve undoubtedly been on this call when shopping for software development services:

You: “You QA your work, right?”

Them: “Absolutely. We have a QA practice.”

You: “OK, cool.”

No, it’s not cool. In any given instance, understanding software QA as it does (or will) apply to you boils down to asking one key follow-up question:

“What do you mean when you say ‘QA’?”

Your respondent’s answer might not align with your needs.

How software QA teams evolve

First, a quick welfare check. Part of QA’s inscrutability comes from the evolution of QA practices within software development. Typically, QA evolves with engineering while evolving against unrealistic expectations.

Institutional QA practices typically mature in distinct stages alongside their attached software development groups, often in a caterpillar/butterfly-like cycle:

  1. Self QA. In smaller (or rogue) teams, QA begins life as a self-auditing type task performed by the people who designed and built the application.
  2. Farm team. At a certain size, someone takes part-time point on QA and the discipline evolves into a farm team for junior engineers.
  3. Discrete discipline. Eventually, QA becomes a full-fledged practice, getting a full-time leader whose job is to…cobble it together. This is the first time the engineering team hires QA-specific experts. The team performs QA testing, but doesn’t yet own any part of the engineering process with real authority.
  4. Release control. As codebase size increases and/or the number of users increases, the risk associated with each release increases proportionally if not exponentially. At this point, QA becomes an integral part of the engineering process, owning not just feature, integration, and regression testing, but release approval. QA will certainly create a ton of documentation, especially if there’s a business-wide QMS to support.

Unfortunately, the path to recognition as a separate and necessary function isn’t quite that easy. While keeping pace with engineering, that same QA discipline evolves against the caustic and persistent unrealistic expectation all software engineers dread:

“It should just work, right? You’d better be perfect for what I'm paying you! 15% QA tax? I thought you were experts?”

QA acts as a shield protecting developers from the world and the world from developers. Through tickets, scripts, and edge cases, QA eats and metabolizes customer complaints, executive blame, and engineering jibes as a matter of course. This doom-spiral reaches its apotheosis when QA is expected to authorize releases they know are flawed while being disincentivized (or disempowered) from preventing or correcting them in time.

QA’s job then is to ensure a thing is good and meets standards, right? Sure, but what does that mean?

QA approaches

Across organizations, we’ve found that very few software quality assurance teams share the same structure, mentality, or even the same expertise. Depending on an organization’s evolution, you’ll find strategy or skills biases toward one or more of the following:

  • Control—post-production, pre-release testing
  • Assurance—organization-wide, process-driven quality procedures and SOPs enacted throughout development
  • Management—deliberate planning, documentation, and oversight of the above

In product and/or regulated quality disciplines, control, assurance, and management go hand-in-hand, laddering up to total quality management (TQM) and similar holistic disciplines. In software, think of control, assurance, and management more like a classical iron triangle where a resource-limited QA org must choose how much of each they’re going to perform.

QA tactics

QA tactics are the levers a QA manager can use to execute quality assurance, control, and management in software engineering. When most people consider “QA” in digital, they jump straight to tactics and activities, which frequently include (but are certainly not limited to):

  • Test strategy—defines the organizational approach to testing, including release control, review and approval, available testing methods, and team responsibilities
  • Test planning—defines the product-specific objectives, scope, timelines, environments, deliverables, tools, and methods needed to execute the test strategy
  • Smoke testing—preliminary testing to reveal simple failures nasty enough to warrant rejecting a release outright
  • Compatibility testing—typically tests of an application in multiple specified operating systems and environments
  • Functional (manual) testing—usually manual tests focused on the expected behaviors of specified features
  • Automated testing—software-driven testing best used when you need to test something repeatedly; part of continuous testing (see the short sketch after this list)
  • Regression testing—a recurring test of an application’s key features
  • Test suites—automated tests against functions classified as essential in a given application or ecosystem
  • Security testing—vulnerability scanning, penetration testing, API testing, application security testing, audits and risk assessments
  • Performance testing—a large toolbox separated into stress, load, soak, spike, and other facets of software performance
  • Usability testing—tests of the application’s viability and comprehensibility with human testers, typically driven by the test plan and/or project requirements
  • Acceptance testing—operational readiness testing, usually done in software as a final step prior to market release
  • Quality reviews—inspections that help measure the extent to which product quality is meeting its objectives
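
To make the automated side of these tactics concrete, here’s a minimal sketch of what a smoke check and a regression check can look like. It assumes Python with pytest and the requests library; the staging URL, endpoints, and expected status codes are hypothetical placeholders, not a prescription.

```python
# Minimal sketch of automated smoke and regression checks (pytest + requests).
# The base URL, endpoints, and expected responses are hypothetical placeholders.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical staging environment


def test_smoke_homepage_responds():
    """Smoke test: reject the release outright if the app won't even load."""
    response = requests.get(f"{BASE_URL}/", timeout=10)
    assert response.status_code == 200


def test_regression_login_rejects_invalid_input():
    """Regression test: a key behavior that must keep working release after release."""
    response = requests.post(
        f"{BASE_URL}/api/login",
        json={"email": "not-an-email", "password": ""},
        timeout=10,
    )
    # Bad credentials should be rejected cleanly, never accepted and never a crash.
    assert response.status_code in (400, 401, 422)
```

Checks like these typically run on every build as part of continuous testing, so a failed smoke test can block a release before deeper manual or exploratory testing even begins.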

You’ll also hear about methods like “hybrid QA” or “embedded QA,” where a QA team integrates with the delivery team and applies tactics like those above throughout development instead of only at the end. Agile development makes integrated approaches like these more typical.

Scales of QA

Frustratingly, quality assurance occurs at different scales. From micro to macro (a short unit-versus-integration sketch follows the list):

  • Unit—portions of code, like a method or query
  • Application—complete applications, say a WordPress website or a mobile application
  • Integration—testing the interfaces between components in an application ecosystem
  • System—whole-system testing against solutions that stitch together multiple technologies, say a consumer connected product (mobile app + app/platform API + IoT platform + other cloud services + device API + device firmware + device hardware)
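
To make the difference in scale tangible, here’s a brief sketch, again assuming Python with pytest and requests; the shipping calculator and order endpoint are hypothetical examples. A unit test exercises one function in isolation, while an integration test exercises the seam between two components.

```python
# Minimal sketch contrasting unit-scale and integration-scale tests (pytest).
# The shipping calculator and order endpoint below are hypothetical examples.
import requests


def calculate_shipping(weight_kg: float, flat_fee: float = 5.0) -> float:
    """A single function we can test in complete isolation (unit scale)."""
    return round(flat_fee + weight_kg * 1.25, 2)


def test_unit_shipping_calculation():
    # Unit test: one piece of code, no network, no database, no UI.
    assert calculate_shipping(2.0) == 7.50


def test_integration_order_service_returns_shipping_cost():
    # Integration test: the interface between two components, here a
    # (hypothetical) order service that relies on the shipping logic above.
    response = requests.post(
        "https://staging.example.com/api/orders",
        json={"sku": "WIDGET-1", "weight_kg": 2.0},
        timeout=10,
    )
    assert response.status_code == 201
    assert response.json()["shipping_cost"] == 7.50
```

System-scale testing goes further still: it drives the whole stitched-together solution (apps, APIs, cloud services, firmware) as one experience rather than testing each seam separately.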

QA at the wrong scale means your mobile app, web app, and connected device may each have passed QA individually without your system ever being tested as a singular experience. QA scale challenges become particularly agitating with IoT and similar multi-ecosystem technology development, and are amplified in the absence of an empowered QA team that can own a holistic testing regime.

So, what exactly does QA mean for you?


QA expertise plays a major role in helping Next Mile’s customers succeed through hardship. If you’ve got an elusive quality problem you need solved, contact us and we’ll get to the bottom of it.

Find additional insight at our blog or contact us for a monthly summary of new content.

If this speaks to a problem you’re facing, we'd love to see if we can help you further.