
Crisis culture in software development part 2: treatment

In this second of two articles on crisis culture in software development, we focus on treatment: the trap most organizations find themselves in, the fundamentals of successful software, and a component-by-component walkthrough of what’s required to make a successful software product.

You won’t find a one-size-fits-all approach, a silver bullet, a sure-fire method, a never-fail orthodoxy, or a Gino Wickman book that can increase the probability of software project success. As your doctor might say, there’s no miracle cure for a failure condition, but there is a path forward.

Oh, if you’re waiting for Superdev, Superdev won’t save you. Superdev will take one look at your dysfunction, scoff, and fly away. Many orgs churn developers hoping a mismanagement accident will create Superdev because they don't understand what (beyond luck) actually makes software projects succeed. Even with an army of Superdevs, few businesses have the ingredients of success in place. Moreover, the heroism required today eventually burdens any Superdev beyond their capacity and (super)humanity.

“Slow down so you can go faster!” — Porsche Sport Driving School instructors

Creating successful software projects isn’t magic; it’s more like a diet. The new foods you started eating aren’t what’s producing results: the switch away from bad foods, and the habits broken by that switch, are what improve your vitals. In software, practicing the right activities is less about intense focus on doing everything perfectly and more about doing a larger percentage of correct-adjacent things so you do fewer wrong things.

Escape the activity trap, embrace the fundamentals

Although development talent and project methodologies are clearly necessary for a software development organization’s operation, a well-run development org won’t create a successful software application without respecting the fundamentals.

To create successful software, first understand where software project success comes from. We measure the success of a software project in four ways:

  1. The extent to which it was successfully DELIVERED.
  2. Its QUALITY.
  3. Its RELEVANCE to target users.
  4. Its LONGEVITY and overall ROI.

Additionally, less tangible factors impact all types of success throughout the software's lifecycle, including intake, estimation, culture, alignment, stakeholder roles and responsibilities, and finger-pointing CYA nonsense. Most software professionals grasp the fundamentals specific to each kind of project success. However, actually creating success requires know-how that often goes unspoken, unacknowledged, or unpracticed, applied to success as a separate but related project that runs alongside your software.

Building your success project

Unless you like gambling with large sums of company money, success, like software, must be architected. Luck comes from preparation, and preparation begins with informed practice. Knowing the components of success and the mentality needed to achieve each will arm you to create your next success.

Component 1: delivery

Delivery means the extent to which an application gets designed, developed, and deployed. Failures metastasize from the delivery phase with appalling frequency.

Who enables delivery successes? Project management and other operations personnel, software architects, developers, and upstream players like business analysts and user experience designers.

Delivery’s a chain only as strong as its weakest link. Luckily, most software professionals know how to get things done:

  • Clear vision drives clearer business requirements
  • Full stakeholder support enables rapid decision making
  • Proper fulfillment talent must be present and occupy the right roles
  • Risks need to be identified and hedged against
  • Alignment focuses the team’s collective energy toward a targeted outcome

Progress occurs rapidly when participants work using the same processes, tools, and communication patterns against mutually-understood requirements.

Isn’t this enough? Afraid not—“successful delivery” and “a delivered success” aren’t the same.

Delivery secrets

Some things about successful outcomes in software delivery that everybody knows but doesn’t say:

  1. Nobody in the delivery chain is incentivized to deliver a successful project—they're incentivized to produce a completed project to (an often fictional) schedule. To the extent a product org has tied anyone’s compensation to performance, it’s usually tied to concrete, checkable boxes (not that leveraged comp is always good…)
  2. Consequently, organizations frequently over-enforce alignment (even on unclear goals), steamrolling the valid tensions and questions that arise about a project's ability to satisfy the four success types.
  3. Developers and operations personnel are very different from each other, yet Corporate America manages to a. lump them together and b. measure both on productivity, like factory equipment.

These three combine to put everyone on the path to burnout.

How to deliver a success

“Where others do, don’t, and if you have to do, do less.”

— Calvin Coolidge

A software application is a cumulative effort, accruing more and better code over time. Similarly, software teams accumulate institutional knowledge, habits, and team members…right up until they’re decommissioned. With any cumulative endeavor, the team and the product will carry nearly every uncorrected mistake forward in perpetuity, living it over and over again.

Consequently, above all other traits, restraint makes software delivery successful. Most teams deliver software using a cycle based on Agile philosophy. Crudely:

  1. Plan
  2. Design
  3. Develop
  4. Test
  5. Deploy
  6. Review

Without a “done” objective, software engineering programs thus become self-perpetuating, and that cycle of review, backlog prioritization, and sprinting gains sunk-cost momentum the further you go, seemingly violating the laws of thermodynamics. Anyone who stops to ask “should we be building at all?” gets run over or ignored by the habitual development engine. These are common, weak reasons to act:

  • “We need to have something ready by ___.”
  • “We’ve been working on ___ forever.”
  • “We keep de-prioritizing X and need to get it done.”

Yes, there is a point where you just have to build something. Good thing it’s easy to know when you’re there:

  • You have questions that only an engineering build can answer
  • You need a prototype to user test with
  • You’ve got credible evidence that the thing you’re about to build (or rebuild) will add value for users and most of the team believes it

It’s also easy to know when you need to fix something:

  • You’re down
  • Some third-party update broke a function or integration
  • You’ve got bugs causing a significant number of customer difficulties
  • A new release is raising hell

Here’s the thing: with live software, each release disrupts something. Forced update annoyance, new regressions, breaking changes from the conditioned experience your users now expect, put-upon admins, etc. Obviously, to successfully deliver anything, you have to act. However, if your proposed release fails to meet any of those build or fix criteria above, you’re going to a. disrupt everything and b. immortalize additional code with no proven value. You’re almost certainly better off doing nothing than doing those two things.

"It is much more important to kill a bad bill, than to pass a good one."

— Calvin Coolidge, again

Therefore, to ensure that your delivery engine produces successes, your mentality should default to inaction. Pick your battles and justify your work before you engage it. If you do a thing:

  • What does it mean for you, your users, and your customers?
  • What does it cost?
  • What do you gain?
  • What do you risk?

You’ll do better work cheaper this way. When you’re creating success, better beats more. (Note: when you’re successful and trying to scale, more might beat better until enshittification takes hold.)

Most delivery professionals have been conditioned to believe that pauses are negative, being taught through cultural osmosis that action is progress and reflection is waste. Your development team is incredibly intelligent. You lose nothing by letting overstressed engineers cool down and reflect on their recent work and the work to come. Instead, their pre- and post-processing makes future action clearer, reducing the likelihood of bad choices and inelegant code.

If you can restrain the force of aggressive, thoughtless, draining activity, you’ll get better action when go time arrives.

Component 2: quality

Quality is somewhat self-explanatory: the extent to which the application performs and delivers the intended/designed experience, functionally and non-functionally. Note that an application's overall quality may also rely on dependencies/services its developers don't control.

Who enables quality successes? QA teams, software architects, developers, business analysts, and stakeholders who decide that testing matters.

You've already learned some basics about quality:

  • Good QA talent isn't easy to find—junior developers often do QA.
  • Employ methodologies that fit your situation, like test-driven development, unit testing, automated testing, etc.
  • Quality goes beyond the code's functionality—your application must survive security, performance, availability, and other non-functional testing as well.
  • We’re drowning in digital—customers won’t tolerate junk software for long, if at all.

Digital QA is generally poorly understood by non-QA types. We wrote a thorough overview previously to demystify it.

Quality secrets

QA doesn’t seem to work the way many wish it would:

  1. Quality QA testers usually aren't developers at all, and employing new developers for QA merely forces them down an artificial career track.
  2. During crunch time, developers sometimes rush through programming and let QA catch the issues, increasing the QA burden along with the rework effort.
  3. While testing, QA will be adding to the project's backlog, frustrating some stakeholders.
  4. Regarding security, only human testers understand the motivation for hacking your application and can derive the method of attack from there.

QA tends not to produce code or designs, so many delivery managers and project stakeholders don’t grasp its importance.

How to create quality software

Ironically, even if your product does the exact same thing the exact same way as someone else’s product, you can still gobble up your market if yours simply works better. In that case, your QA approach and team become your secret weapon.

Any quality program will produce tickets that will generate upstream work for developers, designers, business analysts, and management staff. Your job is to ensure those tickets matter to your experiential and business outcomes by targeting testing.

Managing QA against a program at the theoretical level becomes frustrating fast—there’s a huge list of QA methods, and all of them could potentially apply to you at various layers in your overall experience. To narrow the scope and tailor testing to your program, focus on scale first:

  • Unit—portions of code, like a method or query (finds/rejects unexpected results)
  • Application—complete applications, say a WordPress website or a mobile application (tests user’s experience of one application)
  • Integration—testing the interfaces between components in an application ecosystem (tests outcomes of external dependencies)
  • System—whole-system testing against solutions that stitch together multiple technologies, say a consumer connected product (tests the entire user experience)

Second, pick the QA tactic or two that you believe are most necessary at each scale, and construct your test plan around them. For example, a hypothetical IoT drink dispenser (a rough test sketch follows the list):

  • Unit—testing only if developers see fit
  • Application—the iOS mobile app gets automated regression tests on core features and functional tests on new features
  • Integration—low-stakes, so alerts against API status
  • System—scripted benchmark round-trip performance test
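
To make that plan concrete, here’s a minimal sketch of what the application-, integration-, and system-scale checks might look like as pytest tests. Everything in it is illustrative: the endpoint, routes, payloads, and performance budget are invented, and your actual stack and tooling will differ.

```python
# Illustrative only: DISPENSER_API, /status, /dispense, and the 5-second
# budget are hypothetical. Assumes pytest and requests are available.
import time

import requests

DISPENSER_API = "https://dispenser.example.com/api"  # hypothetical endpoint


def test_api_status_alert():
    """Integration scale: a cheap, frequent check that the device API is up."""
    resp = requests.get(f"{DISPENSER_API}/status", timeout=5)
    assert resp.status_code == 200


def test_dispense_core_feature_regression():
    """Application scale: regression coverage on the feature customers buy."""
    resp = requests.post(
        f"{DISPENSER_API}/dispense",
        json={"drink": "cola", "size_ml": 330},
        timeout=10,
    )
    assert resp.status_code == 200
    assert resp.json().get("state") == "dispensing"


def test_round_trip_benchmark():
    """System scale: scripted benchmark of the full request-to-pour round trip."""
    start = time.monotonic()
    requests.post(
        f"{DISPENSER_API}/dispense",
        json={"drink": "water", "size_ml": 200},
        timeout=30,
    )
    elapsed = time.monotonic() - start
    assert elapsed < 5.0, f"round trip took {elapsed:.1f}s against a 5s budget"
```

The point isn’t the specific assertions; it’s that each scale gets one deliberate, low-maintenance check instead of an exhaustive suite.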

For another example, how about an AI-based mental health service that’s already live (a unit-level sketch follows the list):

  • Unit—unit test key methods in services
  • Application—functional tests on new features, regression tests on core, release control processes for AI, services, and apps
  • Integration—integration test scripts run daily with performance results, API alerts, release notes review for all API, LLM, and OS dependencies, occasional security tests and performance audits
  • System—functional tests on new features (AI + services + all interfaces), trigger/response performance and crash analytics measured at interface
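
Unit-scale testing for that service is even more prosaic. Here’s a minimal sketch, with the method under test (rolling_mood_average) invented purely for illustration; it stands in for whatever key service methods your team would actually cover.

```python
# Illustrative only: rolling_mood_average is a made-up service method used to
# show the shape of a targeted unit test, not this service's real logic.
import unittest


def rolling_mood_average(scores: list[int], window: int = 7) -> float:
    """Average the most recent `window` daily mood check-in scores (1-10)."""
    if not scores:
        raise ValueError("at least one score is required")
    recent = scores[-window:]
    return sum(recent) / len(recent)


class RollingMoodAverageTests(unittest.TestCase):
    def test_uses_only_the_most_recent_window(self):
        self.assertEqual(rolling_mood_average([1, 1, 1, 9, 9], window=2), 9.0)

    def test_empty_history_is_rejected(self):
        with self.assertRaises(ValueError):
            rolling_mood_average([])


if __name__ == "__main__":
    unittest.main()
```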

QA at the wrong scale means that while QA may have been performed on your mobile app, web app, and connected device, your system may not have been tested in a way relevant to users or customers. QA scale challenges become agitating with highly dependent or integrated experiences and get ugly when nobody performs holistic testing.

Key quality actions to avoid:

  1. Box-ticking—performative testing to green-light that next release
  2. Using subject-inappropriate methods—most often, we see hardware-based testing methods applied to software and web- and mobile-based methods applied to firmware
  3. Overtesting—too much will get expensive and slow everyone else down

You’ll know you’re off-track if your team creates and resolves countless bugs but gains nothing after they’re fixed.

Component 3: relevance

Relevance is the extent to which your target audiences find your software or service useful and beneficial. By far the most common failure area, relevance is the most mutable success metric before, during, and after design, development, and deployment.

Who makes software relevant? User experience designers, digital business strategists, product owners and managers, and sales and marketing staff who know the user and customer.

What you already know:

  • You should compose and validate (with prospective buyers) a business strategy for your digital thing.
  • Your organization should undertake user research and prototype testing prior to greenlighting any significant build.
  • Requirements management can keep your full build from drifting too far off track.
  • You’re not done when you launch—post-launch acceptance testing and analysis, comparative you-vs.-alternative testing, and analytics data mining can ensure that users actually find utility or enjoyment in your digital experience.

Of course, hardly any of this ever happens.

Relevance secrets

Relevance starts with questions. Unfortunately, software development is full of professionals who’ve been conditioned over time by disciplinarian managers to censor their questions and suppress their ideas. Staying quiet causes less trouble, and if the product fails, screw it, just take a new job somewhere else.

  1. Perversely, product owners within an organization are often the same people who originated the idea and push hard to see it constructed, regardless of its potential or any gotchas.
  2. If your product owner isn’t curious, your effort’s doomed. What do you want to know about your audience? How do you want them to behave? What are your ways around the technical barriers you just found?
  3. Post-launch or post-deployment optimization is key—your most valuable undiscovered insight lives here.
  4. Content effectiveness and experience utility in context can actually be tested.
  5. Establish baseline trends before making decisions on data.

Strong research efforts faceplant hardest when they only generate or transmit answers: whats without whys, ungrounded recommendations, user snapshots without motion. Answers lose value when stripped of the context gathered while finding them, and context is central to relevance.

How to create relevant software

True art is novel and obvious. To create relevant value from a software experience, focus on novelty in the truest sense: being original and compelling in a user’s context.

“But someone else already does X!”

It’s not about the what. Anyone can think of the what. In software as with most products, novelty comes from your execution of the how.

  1. Do less, do better. List all the functions your software does or will perform. All of them. We’d guess that most of them are table-stakes boilerplate, like login, share, activate an integration, etc. Then, using data from observation or analytics, mark the features and functions people actually use and buy/like your product for. Whatever they are, they’re what makes you valuable—those are the things you do. Get users there faster, and focus on doing those things excellently.
  2. Design for casual use. Unless you’re making a AAA video game, your software isn’t anyone’s priority but yours. We strongly recommend that every user experience have a defined entry, value acquisition, and exit pathway. Ideally, that’s as simple as “login, view data, logout” or “open notification, swipe, click home button”. Yes, there are power users, but your ability to decomplicate the execution of the tasks and behaviors your application solves for will both increase your relevance and boost your competitiveness.
  3. Recognize that we’ve hit software overload. Everybody has dozens of logins and uses too much software. Look at your offering’s value and your experience alongside all the other things your audience has to do that day. Then, capture how your experience does (and should) fit alongside all the other stuff on your audience’s interface. Your project management tool likely lives with an accounting tool, time tracking tool, ticketing system, code repository, and a bunch of spreadsheets in someone’s bookmarks tab. On mobile, we’ve seen some great applications fail because they weren’t easily switched between during a critical task that requires multiple systems.
  4. Involve engineers in reflection and analysis. We’re going to rant about this separately another time, but the process-driven dehumanization of the engineer has proven itself one of the worst sins in software engineering. Engineers don’t just squeeze out code, they’re the likeliest people to think of a clever solution to a user problem because they actually build the software. Load your users’ problems into their minds, encourage agency, and watch novelty materialize.

Awareness is the key to relevance—context awareness, value awareness, and especially self-awareness. You’ll need to know when something’s a fact, when something’s truthy depending on circumstances, and when something’s just an assumption, then act accordingly. Assumptions, especially those about what people and markets will and will not do, make an, well, you know the rest. You’ll know your offering is losing (or doesn’t have) relevance when customers choose alternatives like doing nothing.

Component 4: longevity

Longevity refers to the value an application returns to you, your users, and your customers (money, satisfaction, etc.) over its expected lifespan. It’s important to recognize because so many projects fizzle after launch and wind up derelict.

Who enables software longevity? Digital business strategists, user experience designers, financiers, software architects, and DevOps.

Most people associate longevity with flexibility. If you picked up a how-to on digital from the 2010s, the author would argue that you create longevity with:

  • Services-oriented architecture
  • Omni-channel/experience strategy
  • Roadmaps, backlogs, and continuous integration
  • Constantly reducing technical debt
  • Analytics informing marketing strategy

Today’s guide would be similar, but go further by leaning into specific stacks, methods, and externalizations:

  • Offloading as much as possible to cloud services
  • Conversely, making the application as self-contained as possible
  • Managing external and internal resource lifespans
  • Building with frameworks that account for backward compatibility
  • Using CI/CD methods
  • Intense cost management with outsourcing, AI, and automated infrastructure deployment at the center (all of which cost huge management bucks)

Much of that can be good advice on a case-by-case basis.

Longevity secrets

A few things software pros know about longevity but rarely state:

  • You need total control over deployment. If you can’t deploy, you might as well stop designing and coding.
  • Simply possessing an API doesn't give you a valuable or usable services architecture. Worse, if you’re undocumented, you might as well be an isolated monolith even to your internal teams.
  • True post-launch optimization requires hard work like analytics review, content revision, page-level testing, pathway optimization, user research/testing, multivariate testing, and campaign effectiveness review.
  • Educate your financial partners on your software's actual useful life and fund it based on that. The amount of money you need might be significantly larger or smaller than just the build cost.
  • Expect to continually justify the existence of your software. To help this, create the ability to attribute software usage to revenue and relevance to all key audiences. Analytics tools have made this easier, but acquiring and synthesizing the relevant data is still a challenge.
  • For any internal or external application to succeed long term, it must change a specific behavior for the better and become an obvious part of life for your target audience.

Above all, software product longevity comes from its continued utility to each critical audience, including you.

How to create long-lived software products

Back to that novel and obvious thing—where relevance focuses on novelty, longevity requires you to build and condition two types of obviousness:

  1. As a business, your product is obviously sustainable.
  2. To customers and users, your software product is obviously, continuously useful.

In building your software success project, architect for longevity of utility. This has two aspects: technical and experiential. Most technical contributors to longevity focus on sustainability: your organization’s ability to build and maintain the product without being crushed by its cost. Fortunately, most software architects are rather good at sustainable design (if asked). In addition to the commonly understood functional and management pieces mentioned earlier, encourage your architects to design for important non-functionals (one small example follows the list):

  • Compute cost and location
  • API call reduction
  • Code minimization
  • Performance maximization
  • Low attack surface
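
As one small example of what “API call reduction” can mean in practice, here’s a minimal sketch of a TTL cache placed in front of an expensive upstream call. The endpoint, field names, and five-minute TTL are assumptions for illustration, not a recommendation for any particular service.

```python
# Illustrative only: PRICE_API and the "price" field are hypothetical, and the
# 300-second TTL is arbitrary. Assumes the requests package is available.
import time
from functools import lru_cache

import requests

PRICE_API = "https://api.example.com/prices"  # hypothetical upstream service


@lru_cache(maxsize=256)
def _cached_price(sku: str, ttl_bucket: int) -> float:
    """Hit the upstream API at most once per SKU per TTL bucket."""
    resp = requests.get(PRICE_API, params={"sku": sku}, timeout=5)
    resp.raise_for_status()
    return float(resp.json()["price"])


def get_price(sku: str, ttl_seconds: int = 300) -> float:
    """Serve a cached value for up to ttl_seconds before fetching fresh data."""
    return _cached_price(sku, int(time.time() // ttl_seconds))
```

Repeated lookups within the window never leave the process, which trims both upstream cost and user-facing latency.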

Experientially, you build longevity by simplifying the behavior your application changes. “I just do X”—you want to be that X. Start by avoiding threats:

  1. Minimize disruption. The last thing you want is a massive disruption in your software’s life, be it a prolonged outage, a failed critical dependency, or an atrocious new user experience. Trust isn’t easy to build—don’t break it.
  2. Understand and manage your conditioned experience. After using your software, users create a mental model for how it works, and that’s the experience your team must avoid worsening. No amount of design overcomes someone’s existing training, and even when you choose to break the existing workflows, you must acknowledge that the first casualty after your release will be your customers’ expectations. No, this isn’t what your design system does.
  3. Get ahead of external dependencies. Remember all those critical infrastructure bits? The services integrations? The OSes you depend on? Keep your finger on the pulse of all of it or you’ll get caught flat-footed (a minimal monitoring sketch follows this list).
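
What “keeping your finger on the pulse” looks like varies by stack, but it can be as simple as an automated nightly check. Here’s a minimal sketch assuming a Python application with a pinned requirements.txt; it queries PyPI’s public JSON API for each dependency’s latest release and flags anything that’s drifted.

```python
# Illustrative sketch: assumes a requirements.txt with "name==version" pins and
# uses PyPI's public JSON endpoint (https://pypi.org/pypi/<name>/json).
import json
import urllib.request


def latest_version(package: str) -> str:
    """Ask the package index for the newest published version of a package."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["info"]["version"]


def check_pins(requirements_path: str = "requirements.txt") -> None:
    """Print every pinned dependency with a newer release waiting upstream."""
    for line in open(requirements_path):
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, pinned = line.split("==", 1)
        newest = latest_version(name)
        if newest != pinned:
            print(f"{name}: pinned {pinned}, latest {newest} (review the release notes)")


if __name__ == "__main__":
    check_pins()
```

The equivalent for OS, LLM, or SaaS dependencies might just be a standing reminder to read release notes; the mechanism matters less than the habit.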

Then, focus on experiential longevity drivers:

  1. Create “done” points. “Aha, but we’re never done” you say. “We must always evolve!” Clearly, you’re never permanently done, but over the long haul, it helps you and your users a great deal when you don’t fix what ain’t broke.
  2. Build convenience. Many businesses accomplish this with ecosystems, creating ubiquity with single sign-on, integrations, compatibility, etc. It’s easy to get rattled by a competitor’s size and pervasiveness, but recognize that for what it is: convenience, which is something you control. Do you need that login? Do you really need that interface? That legalese? Chances are you can simplify and re-simplify your experience.

After all that, don’t forget that it’s perfectly fine for software to be intentionally short-lived. Longevity’s not for everything!

Notes on software development culture

Generally, culture is a blanket term covering intangibles and environmental elements that either help or hinder project success. Culture's not an antidote to a business’s endemic problems, but it can become a secret sauce that promotes successful behavior.

A healthy culture takes the edge off the cycling drudgery of continuous development. Some things to internalize:

  1. Make requests deliberately specific or deliberately open. Some think freedom of operation paralyzes development teams because engineers are just order-takers. Freedom to operate does create paralysis, but only when it includes a dollop of uncertainty about why you’re doing something or when the team isn't confident that leadership can accept the result maturely or predictably. If it ever feels like your developers are "always waiting for someone else", management’s the problem.
  2. Your application isn’t why your engineers got into software. Most probably began their education with an interest in creating video games and did not join your company to do marketing work, but sales and marketing applications comprise 90% of the work in web/mobile. A majority of developers have spent their entire careers disenchanted.
  3. Retention’s not just money now. Ideally, environments, compensation packages, perks, benefits, etc. either adjust to a software professional's life stage or, more likely, that professional finds that different things retain them as they progress through life.
  4. What happens after code? The specter of a laborious career in the code mines haunts many developers, and any reification of said fear becomes high-performer kryptonite. On top of that, fast-paced professions like software development create micro-careers: frequent promotions pressing recipients up against uncrackable in-club ceilings.

Avoid burning and churning, embrace humanity. Even shreds of empathy from senior management can temper the criticality of the reactor that is your software development engine.

How can you unravel a crisis culture?

If you'd like to improve your success rate, these steps broadly outline the first three things you should do.

  1. Diagnose your weak areas. Do you have a delivery, quality, relevance, longevity, or cultural problem? If not, great, you're part of the upper 20–30%. If so, welcome to the family.
  2. Identify your key symptoms. Then, trace those symptoms back to a root cause. While root cause analysis is nothing new, beware of buck-passing and finger-pointing as you learn the hows and whys of your team's successes and failures. In most circumstances, everyone is genuinely doing their job correctly and professionally to the extent possible.
  3. Forecast change. Identify key areas to change and start considering what change might mean. Systemic changes, like software changes, have regression effects that you'll need to foresee.

If everything feels right, start making changes in small movements.


Next Mile has lived—and cured—digital crisis culture in some of the world’s largest organizations. Contact us for an assessment of your scenario and we’ll architect your application’s success plan.

Find additional insight at our blog or contact us for a monthly summary of new content.

If this speaks to a problem you’re facing, we'd love to see if we can help you further.