Custom software purchasing errors you continue to repeat

When you gain experience in any field, you notice and internalize the precedents and patterns of both success and failure. Successful patterns are ultimately more informative—they keep you from wasting time, money, and energy. Sometimes it pays to look at the failures, though. Here’s our synthesis of custom software purchasing errors that businesses continue to make.

In connected product development, there’s no rockier terrain than custom software. Next Mile likens both digital strategy and subsequent execution strategy to the Monty Hall problem: take the bad choices off the table first, map your plan all the way out knowing it will change, switch away from high-risk paths, and thereby increase your probability of success. To do that, you need to understand some signals that the odds might not be in your favor. We outline eight common custom software development purchasing errors below.
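
The analogy isn't just a slogan; the probability math behind it is real. Here's a minimal simulation sketch in Python (purely illustrative, nothing Next Mile-specific) showing that eliminating a known-bad option and then switching roughly doubles your odds:

    import random

    # Monty Hall, simulated: one winning door among three. The host always
    # opens a losing door you didn't pick; "switch" moves to the last door.
    def play(switch: bool) -> bool:
        doors = [0, 0, 1]
        random.shuffle(doors)
        pick = random.randrange(3)
        opened = next(i for i in range(3) if i != pick and doors[i] == 0)
        if switch:
            pick = next(i for i in range(3) if i not in (pick, opened))
        return doors[pick] == 1

    trials = 100_000
    for switch in (False, True):
        wins = sum(play(switch) for _ in range(trials))
        print(f"switch={switch}: win rate {wins / trials:.3f}")
    # Staying wins ~1/3 of the time; switching wins ~2/3.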

1. Starting to build without plans

Nobody begins commercial construction without blueprints, but people love doing exactly that in software!

Hype media like WIRED and Fast Company say speed to market matters, especially with digital. They predicate this on sloganized arguments about switching cost and network effects:

Switching cost argument: When people use something, they invest time and effort in it, and become disincentivized to switch away from it.

Network effects argument: The more users a platform has, the more useful it becomes.

Ergo, digital success requires that you act fast and disrupt first or become a disruptee. It’s 20-year-old dogma from the Enron era.

These arguments are (sigh) simple answers to complex questions: maybe right in spirit, mostly wrong in practice. The phrase "speed to market" short-circuits the complete truth: speed to ______ in market matters. Speed to quality in market matters, speed to relevant value delivered in market matters, speed to profitability in market matters.

Worse, you still mistake perfunctory kickoff sessions for plans:

  1. A discovery session is not a plan
  2. A UX workshop is not a plan (no matter the Post-it count)
  3. Agile against a wishlist is definitely not a plan (it may be an antiplan)

Each of these can simulate a plan enough to tick the box, but they’re really onramps into execution. An actual plan revolves first around your strategy (why), then details the whats, wheres, hows, whens, etc. You are unlikely to succeed with applications designed via iterative guesswork and built by the lowest bidder or an inexperienced go-getter, but for some reason, you keep trying.

Bottom line: As they say in auto racing, "slow down so you can go faster". The internet’s saturated with junk, so standing out requires quality and relevance, not just presence.

2. Placing hope in an MVP

Speaking of dogma, one common approach rarely delivers for non-tech companies: the minimum viable product (MVP). MVPs are the gateway drug of custom software development. You’ve all heard the dealer say “let's just get it to this stage, then you can get it in front of customers, and go from there”. Once you’re hooked, you’ll come back for change orders.

A successful MVP solves one complete problem without creating a new one. If you’re relying on an MVP to generate feedback, it must deliver the one thing your audience needs; otherwise you'll get static in response and never test the value hypotheses you set out to test.

The ultimate utility of your MVP hinges entirely on the expectations of whoever’s going to see and use it. They won’t be impressed by anything they’ve seen before, they won’t care about nice-to-haves, and they’ll recoil from unpolished garbage. Further, if you launch an MVP expecting to “grow from here”, many markets (like heavy industry and agriculture) will not tolerate a flash in the pan.

If you need more than one core feature, or if you need something that looks and functions like a finished application to impress your audience and pass your go/no-go stage gate, MVP-centered development will absolutely fail you.

Bottom line: If you take this approach, grind hard to understand which feature to base your MVP around. Most of you get one shot with your audience—what are you going to take it with?

3. Execution bias fuels ignorant design

If you build it, they will not come. We’re way past the greenfield era of digital solutionmaking.

Can you honestly say you want more digital intrusion in your life? No, you want it only when it's specifically and obviously beneficial. That's exactly how your customers see things, which means custom software must be increasingly precise. You can disambiguate by defining (in this order):

  • Your general strategy
  • Who your audiences are (and aren’t)
  • What your audiences actually care about
  • Why they care about it
  • What exact value you deliver
  • When your audience cares about what you can deliver
  • How your audience wants to accept delivery of your value
  • How your digital offering fits into the other 9,999 digital aspects of their life

Without this information clearly detailed and communicated, decision making decays throughout custom software creation, forcing stakeholders, designers, developers, and testers to substitute instinct for information in the name of progress. When this happens, you develop for you and not your audience.

Bottom line: If your company repeatedly produces mediocre (~3-star) software, you have this problem.

4. Preferring design optioneering to opinion

Recently, low-cost firms have embraced the lowest-common-denominator trend of Figma-powered high-option, spray-and-pray design: in each sprint, they present you with 6–10 choices for a given workflow, putting the onus on you to choose your favorite. This:

  1. Ignores the actual user (who isn’t you)
  2. Makes design choices yours (aren’t you paying a design expert?)
  3. Makes design problems your fault (hey, you picked it, not us)
  4. Increases the likelihood of further sprints and scope changes ($)
  5. Absolves the vendor of responsibility for user success (not it!)

No empathy, no experience, no audience data—only production expertise. This type of design can and will be done by AI.

The heart of design lives outside its production, in the ideas, conversations, and research that occur before a single pixel gets pushed. It's the marriage of a constraining problem with a creative problem solver that produces solutions both novel enough and obvious enough to be elegant.

Bottom line: If your designers don't have opinions, they don’t have expertise. Switch. Quickly.

5. Misunderstanding efficiency

Somehow, we're 30 years into the internet age, and you still think of non-engineering tasks as "overhead" that "slows things down" without “hard data” and prevents the "real doers" from "getting it done". It's this kind of non-intellectual empiricism that creates penny-wise, pound-foolish software development programs.

Here’s the truth: errors cost custom software buyers more unbudgeted money than anything else. An “error” is any issue necessitating a change in capability (so, not bugs). Deprioritizations, reprioritizations, technical impasses, UAT failures, you name it, that’s an error. Errors waste time, create rework, impact other facets of an initiative, and eliminate anything you’d gain (or lose) by deploying. By adding hours and weeks, errors add more cost as a percentage of your project than team composition or even bill rate. We use the following equation to calculate error costs:

Cost of wasted time + cost of rework + cascading impacts + opportunity cost = error cost

If you'd like finer detail on error cost calculation, read our overview of that separately. For now, use this as a rough quick reference to calculate the cost of a software development error:

  1. Pre-design errors: 0.5× time wasted (BA/strategist only). Rework is minimal, cascading impact is usually measured in hours, and there's no opportunity cost. $
  2. Pre-development errors: 1× time wasted (BA + strategist + UX) + 0.5× time wasted for rework + cascading impact. Usually no opportunity cost. $$$
  3. In-development errors: 1× time wasted (for the entire sprint team) + 0.5–3× time wasted for rework + tech debt repayment (if calculable) + cascading impact + opportunity cost (usually missed revenue). $$$$$
  4. Post-development errors: building the right thing wrong ($$$$$$$) or building the wrong thing (all the $s).
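
To put the whole equation in one place, here's a minimal cost sketch in Python. The stage multipliers mirror the quick reference above; the hours, blended rate, and dollar figures in the example are hypothetical placeholders, not benchmarks:

    # Error cost = cost of wasted time + cost of rework
    #            + cascading impacts + opportunity cost
    def error_cost(hours_wasted, blended_rate, rework_multiplier=0.0,
                   cascading_impact=0.0, opportunity_cost=0.0):
        wasted = hours_wasted * blended_rate
        rework = hours_wasted * rework_multiplier * blended_rate
        return wasted + rework + cascading_impact + opportunity_cost

    # Hypothetical in-development error: 80 sprint-team hours wasted at a
    # $150/hr blended rate, 2x rework (within the 0.5-3x band), $10k of
    # cascading schedule impact, $25k of missed revenue.
    cost = error_cost(80, 150, rework_multiplier=2.0,
                      cascading_impact=10_000, opportunity_cost=25_000)
    print(f"${cost:,.0f}")  # -> $71,000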

Overhead exists to make errors cheap to catch up front and to maximize development efficiency. Plot twist: the project with a high overhead-to-engineering ratio was not a wasteful project; it was an efficient project.

Bottom line: You have to increase the probability of catching errors before they become prohibitively expensive. Proper strategy, requirements, design, management, staffing decisions/structure, and tech selection diligence shield you against high-cost engineering rework and "hurry-up-and-wait" idle time.

6. Sunk vendor fallacy

We've seen a lot of people pick the wrong firm for the task because they liked them, believed them, and thought they could work well together. Or they were the incumbent. That’s 100% fair.

Then something goes awry, usually minor things at first, like boneheaded communications or a small deadline slip. The buyer cashes in the trust the vendor has earned up to that point, hoping for fixes and a bygones-be-bygones reset. When the development effort comes off the rails, the buyer reacts (again, rightfully) by fearing the staggering mid-stream switching costs, then drives the vendor hard toward a dead end, attempting to self-rescue by aggressively holding that vendor to account.

It's an exercise in futility—if they weren't capable of building it, what makes you think they're capable of salvaging it? Instead of progress, you’ll get gridlock, acrimony, and defensive paper-trail construction.

Bottom line: Match your team to your project requirements and lose any fear you have about managing contributors in and out of your project.

7. "Bringing it all under one roof"

Also known as "one throat to choke". Buyers find this appealing because it simplifies decisions and centralizes management they might not have bandwidth for. It works if you need, say, a web application that requires a common stack of skills. For “one roof” to work well in complex digital work like connected products, an organization needs many deep specialties, which is far rarer than it appears.

This approach fails most often because “one roof” providers lack the specialized expertise required across an ecosystem of technologies. Hardware and software engineering are not the same. Your marketing website and your vended platform are not the same. IT and DevOps are not the same. In a “one roof” org, project teams change complexion from one specialty to the next (your project may have multiple "internal handoffs"), creating nearly the same discontinuity you'd face with multiple vendors, but with less skill applied to each function.

Bottom line: Understand the best way to achieve each of your principal requirements. Yes, sometimes "under one roof" is best, but not as often as you'd think. We’ll discuss the parts of digital execution you must own in a future post.

8. Conflating "launched" with "done"

Usually, the cost tail on a software project is both longer and taller than most expect. It's especially long and tall when you don't have defined milestones!

This problem first manifests as an unpleasant surprise, then creates resentment as upper management stakeholders feel shackled to an undesirable expense. Typically, the decision to begin wasn’t clear-eyed, and the seeds of disappointment were sown much earlier in the budgeting phase. Any custom software development effort must account for:

  1. The total non-recurring design and build effort.
  2. The total effort required by you as the buyer to populate the digital product with content, design, customer service, etc.
  3. Any one-time or recurring licensing.
  4. Any additional non-software headcount required to maintain and use your digital product.
  5. Lifetime maintenance and upgrade costs.
  6. Allowances for radical pivots.
  7. End-of-life costs.

Even if you acknowledge each category with a “?”, that beats fantasizing about a one-time expense.
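
If it helps to make that exercise concrete, here's a minimal budgeting sketch in Python. Every figure is hypothetical, and None stands in for an honest "?":

    # Lifetime cost categories from the list above. None marks a category
    # you can only acknowledge with a "?" today.
    lifetime_costs = {
        "design_and_build": 450_000,         # 1. non-recurring design/build
        "buyer_effort": 60_000,              # 2. content, service, etc.
        "licensing": 25_000,                 # 3. one-time and recurring
        "added_headcount": None,             # 4. non-software staff: ?
        "maintenance_and_upgrades": 90_000,  # 5. lifetime estimate
        "pivot_allowance": None,             # 6. radical pivots: ?
        "end_of_life": 15_000,               # 7. decommissioning
    }

    known = sum(v for v in lifetime_costs.values() if v is not None)
    unknowns = [k for k, v in lifetime_costs.items() if v is None]
    print(f"Known lifetime cost: ${known:,} (plus ? for {', '.join(unknowns)})")
    # -> Known lifetime cost: $640,000 (plus ? for added_headcount, pivot_allowance)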

Bottom line: This problem isn't hard to get ahead of, but it does require proactive roadmapping and knowledgeable estimation to enable smart decisions.

Each of these issues reads like a movie we’ve watched too many times. When we run delivery, Next Mile aggressively prevents all of the above problems (and so many more) before they bite. If you’re getting started, we’ll keep execution on track. If you’re caught in the jaws of a delivery problem, we can extract you. Let’s talk.

If this speaks to a problem you’re facing, we'd love to see if we can help you further.