Calculating software development’s key inefficiencies
This article augments our earlier writing about the custom software purchasing errors businesses continue to commit and tolerate, unpacking the most common inefficiencies that stifle software delivery.
Previously, we defined software development errors as “any issue necessitating a change in capability”. Examples included constant priority shifts, UX and UAT trouble, technical gotchas, hidden requirements, and many more. Our rough error cost equation can help you ballpark an error’s cost:
Cost of wasted time + cost of rework + cascading impacts + opportunity cost = error cost
This quick reference can help refine your calculation depending on when and where your error occurred:
- Pre-design errors: 0.5 x time wasted (BA/strategist only). Rework is minimal, cascading impact usually measured in hours, no opportunity cost. $
- Pre-development errors: 1 x time wasted (BA+strategist+UX) + 0.5 x time wasted for rework + cascading impact. No opportunity cost usually. $$$
- In-development errors: 1 x time wasted (for the entire sprint team) + 0.5–3x time wasted for rework + tech debt repayment (if calculable) + cascading impact + opportunity cost (usually missed revenue). $$$$$
- Post-development errors: Building the right thing wrong ($$$$$$$) or building the wrong thing (all the $s).
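The equation and quick reference above can be sketched as a small helper. The stage multipliers come straight from the list; the function itself is an illustrative assumption, not a formal model:

```python
# Sketch of: cost of wasted time + cost of rework + cascading impacts
#            + opportunity cost = error cost.
# Per-stage time-wasted multipliers follow the quick reference above;
# the other components are supplied by the caller.

STAGE_TIME_MULTIPLIER = {
    "pre-design": 0.5,       # BA/strategist only
    "pre-development": 1.0,  # BA + strategist + UX
    "in-development": 1.0,   # the entire sprint team
}

def error_cost(stage, wasted_time_cost, rework_cost=0.0,
               cascading_impact=0.0, opportunity_cost=0.0):
    """Ballpark the total cost of a single error by stage."""
    wasted = STAGE_TIME_MULTIPLIER[stage] * wasted_time_cost
    return wasted + rework_cost + cascading_impact + opportunity_cost

# e.g. an in-development error that burned $37,528 of team time and
# needed $18,764 of rework, with no external dates pushed:
print(error_cost("in-development", 37_528, rework_cost=18_764))  # 56292.0
```

Post-development errors stay out of this sketch, since (per the list) their cost depends on whether you built the right thing wrong or the wrong thing entirely.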
Today’s guide (below) focuses on error classes, not case-specific causes and details. Plenty has been written about those. We find it more useful to understand the fundamental principle underpinning each error class so they’re easier to spot, understand, measure, and avoid. Fundamentals win!
Let’s jump right into the primary error classes that can infect a hypothetical 2,000-hour connected-product mobile development project.
Wasted time carries different implications as a project unfurls, but generally, your aggregate error cost escalates as you near completion. To calculate wasted time, we’ll determine your waste basis and waste operators.
A role’s associated hourly cost multiplied by its utilization over the course of a given period provides your cost or waste basis per role (a.k.a. the numbers) for any wasted time:
Role hourly x hours utilized = waste basis
Next, we need the operators. Regardless of Agile, waterfall, or your management system du jour, it matters whose week you wasted, which you can gauge by when waste occurred:
- Pre-design errors will include the BAs + any strategists
- Pre-development errors include the BAs + strategists + UX designers + PM time
- In-development errors include BAs + UXers + PMs + developers
- Post-development errors vary considerably and will stay out of scope for today
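Combining the waste-basis formula with the stage-to-role mapping above gives a simple calculator. The stage membership follows the list; the hourly rates here are placeholders for illustration, not figures from the example project:

```python
# waste basis = role hourly x hours utilized, summed over the roles
# whose week you wasted. Stage membership follows the list above.

ROLES_WASTED_BY_STAGE = {
    "pre-design": ["BA", "strategist"],
    "pre-development": ["BA", "strategist", "UX", "PM"],
    "in-development": ["BA", "UX", "PM", "developer"],
}

def waste_basis(stage, hourly_rates, hours_wasted):
    """Sum hourly x hours for every role implicated at this stage."""
    return sum(hourly_rates[role] * hours_wasted
               for role in ROLES_WASTED_BY_STAGE[stage])

# Placeholder rates for illustration only:
rates = {"BA": 165, "strategist": 150, "UX": 185, "PM": 125, "developer": 165}
print(waste_basis("pre-design", rates, hours_wasted=8))  # (165 + 150) * 8 = 2520
```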
The staffing ratio and hourlies for your example project will look something like this over its duration:
- BA: 0.33 FTE @ $165/hr = $17,911.18
- UX: 1.5 FTE @ $185/hr = $91,282.89
- Dev: 3 FTE @ $165/hr = $162,828.95
- QA: 4 x 0.25 FTE @ $80/hr = $26,315.79
- PM: 0.25 FTE @ $125/hr = $10,279.60
Giving us totals of:
- Duration: ~1 quarter (~13 weeks)
- Total staffing cost: $308,618.41
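The per-role dollar figures above can be reproduced by splitting the project’s 2,000 hours across roles by FTE share; that allocation rule is our inference from the numbers, not a stated methodology (and per-row rounding shifts the grand total by a cent or two):

```python
# Allocate the hypothetical 2,000-hour project across roles by FTE share,
# then price each role's hours at its hourly rate.

PROJECT_HOURS = 2000
STAFF = {  # role: (FTE, hourly rate)
    "BA": (0.33, 165),
    "UX": (1.5, 185),
    "Dev": (3.0, 165),
    "QA": (1.0, 80),   # 4 x 0.25 FTE
    "PM": (0.25, 125),
}

total_fte = sum(fte for fte, _ in STAFF.values())
hours_per_fte = PROJECT_HOURS / total_fte

costs = {role: fte * hours_per_fte * rate
         for role, (fte, rate) in STAFF.items()}
print({role: round(cost, 2) for role, cost in costs.items()})
print(round(sum(costs.values()), 2))  # total staffing cost
```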
Now, let's waste a week. Maybe the team worked on a feature that suddenly got de-prioritized. If your wasted week occurs mid-program in an Agile system, you have a 33–50% chance of wasting a full-team/partial-development week ($24,328.00) and a 50–66% chance of a full-team/full-development week ($37,528.00), depending on your sprint cycle. Let's say it was the worst case: a full-team, full-development week at $37,528.00.
Obviously a week of go-hard development is rarely a total waste, but off-target is off-target.
In a waterfall system, about 8–9 of the expected 13 weeks are development, so you've got a ~65% chance of wasting about $20,000.00 in time. Sounds like less money, but the stop cost, rework cost, and reengagement costs of a given error in that system could be significant.
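Both weekly figures fall out of the staffing plan. Here’s a sketch, where we assume a 40-hour week and that a “partial development” week means only one of the three developers was on the wasted work:

```python
# Weekly burn per role at 40 hours, using the example project's staffing.
HOURS_PER_WEEK = 40
WEEKLY = {
    "BA": 0.33 * HOURS_PER_WEEK * 165,
    "UX": 1.5 * HOURS_PER_WEEK * 185,
    "Dev": 3.0 * HOURS_PER_WEEK * 165,
    "QA": 1.0 * HOURS_PER_WEEK * 80,   # 4 x 0.25 FTE
    "PM": 0.25 * HOURS_PER_WEEK * 125,
}

full_dev_week = sum(WEEKLY.values())
# Assumption: "partial development" = one of three devs on the wasted work.
partial_dev_week = full_dev_week - (2 / 3) * WEEKLY["Dev"]

print(round(full_dev_week, 2))     # 37528.0
print(round(partial_dev_week, 2))  # 24328.0
```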
With all this, we've calculated our cost of wasted time.
Wasted time isn't the only way you're going to feel the impact. We can't forget rework!
Rework that impacts early-stage requirements or design effort usually stays contained, affecting BAs and UXers, and adding time to the often slow-moving early work where stakeholders already expect mistakes. In-development software engineering rework can feel like a river reversing its flow, requiring that more people change direction and/or bring the entire team back in.
Engineering rework has multiple levels of magnitude depending on what's required:
- Roll back and move forward from previous: 0.1 x wasted time
- Mild tweaks to what was just done: ~0.5 x wasted time
- Do something else/do it again (clean slate): 1 x wasted time
- Surgically replace pieces of what was done in code: 2–3 x wasted time + retesting + technical debt
Obviously, our fourth type of rework crushes both progress and morale. It's got a higher cost multiplier in part because it implies that product management chose to deploy the waste, then repair it. On top of that, when said waste gets rolled into our codebase and then gets developed upon, we also add technical debt, a subject for another article.
Leveraging our running example, our rework cost table would look like:
- Roll back and move forward: $3,752.80
- Mild tweaks to what was just done: $18,764.00
- Do something else/do it again: $37,528.00
- Surgically replace pieces of what was done in code: $75,056.00–112,584.00
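The table above follows directly from applying the rework multipliers to our worst-case wasted week of $37,528.00; a sketch (note the last multiplier excludes the retesting and tech-debt add-ons, which need their own estimates):

```python
# Apply the rework multipliers to the cost of the wasted time.
REWORK_MULTIPLIERS = {
    "roll back and move forward": 0.1,
    "mild tweaks": 0.5,
    "do it again (clean slate)": 1.0,
    "surgical replacement": (2.0, 3.0),  # plus retesting + tech debt
}

def rework_cost(kind, wasted_time_cost):
    """Dollar cost of rework; surgical replacement returns a (low, high) range."""
    m = REWORK_MULTIPLIERS[kind]
    if isinstance(m, tuple):
        return tuple(x * wasted_time_cost for x in m)
    return m * wasted_time_cost

WASTED_WEEK = 37_528.00
print(round(rework_cost("roll back and move forward", WASTED_WEEK), 2))  # 3752.8
print(rework_cost("surgical replacement", WASTED_WEEK))  # (75056.0, 112584.0)
```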
This gives us our approximate cost of rework. Fortunately, we just needed some mild tweaks. This time.
Delays in marketing technology development affect campaign schedules and launch dates. ERP module rollout problems affect manufacturing lines and groups inside the business. Unfortunately, software development for IoT and connected products doesn’t have the luxury of such isolation.
At an OEM, a connected product’s master schedule will include contributions from the following teams, at minimum:
- Consumer insight
- Product development
- Electrical engineering
- Mechanical engineering
- Industrial design
- User experience design
- Firmware engineering
- Mobile application development
- Backend/software engineering
- Public relations
Suddenly, software development errors aren’t just waste; they’re blockers for people in the business who don’t know your name but depend on you to deliver on time so they can do their jobs.
Think of a software development program error as an incident pinned somewhere in the middle of the master schedule. That error’s severity determines which direction its effects travel. Catastrophic discoveries (say, an iOS technical assumption that was just disproven) can reverberate all the way back up the chain, causing a significant change order to the physical product itself.
Most often, IoT software development errors affect the future half of the master schedule, pushing dates, angering dependents, and jeopardizing go-to-market efforts. While rarely as bad as wrecking work done to date, there are always huge landmines after development like reviewer releases, press embargo dates, retailer holiday on-shelf deadlines, and shipping cutoffs.
To calculate our cascading impact cost, look at your master schedule and answer:
- Who, other than you (software design/development) is affected?
- Is the physical product affected?
- Is the product’s premise affected?
If you’re lucky, you can contain the software error’s blast, push some downstream dates, and hopefully avoid landmine target dates. If you’re not, you’ll need to estimate the rework and change cost of every affected team and add their cascading impact cost to your total error cost. You’ll also have some uncomfortable meetings ahead.
Fortunately, in our hypothetical, our late app build only angered our overseers and not our customers or dependents.
In product-focused software development, think of opportunity cost as the cost of unmitigated consequences. More than just the foregone what-could-have-been alternative, opportunity cost here has several components:
- Missed revenue from being out of market
- Missed opportunity from poor ratings or reviews
- Lower margins from weaker performance
- Multiplied costs from fixing a known problem post-launch
- Reputation impact to other product lines
While it’s easy to become paralyzed by wasted time, rework, and cascading impact avoidance, blustering forward through a real problem will produce a low-quality application that helps your organization miss dollars it ought to capture.
Calculating opportunity costs requires industry-specific acumen outside a technology firm’s purview, but we can recommend the following to get started:
- If you’re heading toward a delay, estimate the cost of being out-of-market
- If you’re heading toward a customer-facing issue, use historic data (if available) to estimate and add the cost to triage, service, and resolve a customer-upsetting issue
- If you’re heading toward a manufacturing or distribution problem and you believe your forecasts, a quick differential between the expected forecast and likely shortfall can help you make your point
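As a starting point, the first recommendation (the out-of-market cost of a delay) can be sketched as a forecast differential. Every figure below is a hypothetical input for illustration, not data from the example project:

```python
# Hypothetical sketch: estimate out-of-market cost for a delayed launch
# as (weeks of delay) x (forecast weekly units) x (margin per unit).
def out_of_market_cost(weeks_delayed, forecast_weekly_units, unit_margin):
    """Revenue margin foregone while the product isn't on sale."""
    return weeks_delayed * forecast_weekly_units * unit_margin

# Illustrative inputs only: a 4-week slip against a 500-unit/week forecast.
print(out_of_market_cost(weeks_delayed=4,
                         forecast_weekly_units=500,
                         unit_margin=22.0))  # 44000.0
```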
Those should be enough to calculate your opportunity cost without implying the apocalypse. Don’t go nuts stacking on every implicit and explicit cost or you’ll waste time analyzing instead of solving. If you want to go nuts, then by all means do so, and don’t forget the accompanying sunk, marginal, and adjustment costs tied to the initial investment and lower-than-expected volumes.
Our hypothetical connected product app is already in market, but the delayed release contained fixes to known issues and only helped us accumulate more 1-star reviews.
If you own the development team instead of employing a professional services firm, that means you also own 100% of its inefficiency. Like rework, idle time and relitigation are the enemies of efficiency for a software engineering team. Rework affects all projects and gets its own section (see above), but idle time and relitigation are a special hell for team owners.
While stakeholders pause, re-think, and re-decide, software designers and engineers go idle, costing the company the same money while they wait. Idle time factors into wasted time, and the idle time calculation is rather simple:
Hours spent not working x hourly cost = idle time
Remember that you need to calculate this for every worker downstream from the stalled work. Idled UX blocks development and QA, but idled development only blocks QA. Consequently, idle time becomes less impactful late in the project.
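The idle-time equation, extended to downstream workers, can be sketched as follows. The ordering reflects the UX-blocks-development-blocks-QA dependency chain described above; the hourly rates are placeholders:

```python
# idle time = hours spent not working x hourly cost, summed over the
# stalled role and every worker downstream of it.
PIPELINE = ["UX", "Dev", "QA"]  # upstream to downstream

def idle_cost(stalled_at, idle_hours, hourly_rates):
    """Cost of idling the stalled role plus everyone downstream of it."""
    downstream = PIPELINE[PIPELINE.index(stalled_at):]
    return sum(hourly_rates[role] * idle_hours for role in downstream)

rates = {"UX": 185, "Dev": 165, "QA": 80}  # placeholder rates
print(idle_cost("UX", idle_hours=16, hourly_rates=rates))   # idles UX, Dev, QA
print(idle_cost("Dev", idle_hours=16, hourly_rates=rates))  # idles Dev, QA only
```

Note how a stall at development costs less than the same stall at UX, matching the observation that idle time becomes less impactful late in the project.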
Also, team managers typically re-task idle developers, which always creates a workstream collision and context-switching burden once the idled project comes back online. Rarely, engineers themselves push decisions back up to create idle time, using it as space to work on something else. Tsk tsk!
Relitigation occurs when teams fight over past decisions. While not a major direct contributor to project waste, relitigation changes or hardens opinions, amplifying decision decay as project requirements pass through the software development system.
We haven’t found a great way to estimate relitigation as a factor of waste. It’s more of a poison that inflates hours, inflames emotions, and deflates hopes. Our most common observation is that the corrosive effect of ongoing relitigation imposes an increasing multiplier for future work estimates over time (1.2x, then 1.4x, and so on). What used to take 800 hours now takes 960, then 1120, then 1280 and beyond as team frustration builds.
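That escalation, a baseline estimate inflated by a growing multiplier with each cycle of relitigation, can be sketched as:

```python
# Inflate a baseline estimate by an escalating multiplier (1.2x, 1.4x,
# 1.6x, ...) as ongoing relitigation corrodes team throughput.
def relitigated_estimates(baseline_hours, cycles, step=0.2):
    """Future work estimates after each successive round of relitigation."""
    return [round(baseline_hours * (1 + step * (i + 1))) for i in range(cycles)]

print(relitigated_estimates(800, cycles=3))  # [960, 1120, 1280]
```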
At Next Mile, we’ve lived through all of these inefficiencies so many times that we eventually learned how to solve them. If you’re caught in a cost-generating doom loop, contact us and we’ll help you escape.