Designing software that uses AI
After 5 years of in-depth experience employing different types of AI in manufacturing software, we've codified our first principles of software user experience design for AI-enabled applications here for you.
This is an article about AI. Reader discretion is advised.
“What are we doing with AI?”
Technologists have found themselves on the receiving end of that eyeroll-inducing question for over a year now. Yes, it’s prompted by the swirling Silicon Valley hype machine. Yes, it’s obnoxious. And yes, it’s been over a year since ChatGPT’s debut and we’re still waiting for AI’s first game-changing use case.
Still, the question deserves a thoughtful answer instead of reflexive, prompt-engineering makework. At Next Mile, we’ve successfully leveraged multiple types of AI against one massive manufacturing business problem for several years. We’ll lay out what we’ve learned to accelerate your first steps toward a clear, specific, and implementable role for AI in your application’s experience design and system architecture.
The definition of AI in software design
To begin, we’ll borrow Orion Innovation’s broad definition of artificial intelligence (AI):
“Any technique which enables computers to mimic human behavior.”
Subsets of AI include generative AI, large language models (LLMs), machine learning, deep learning, natural language processing, and computer vision.
In other words, any time you use a computer program to do something a person does, that’s broadly-defined AI, so most technologists are more familiar with AI than they realize. That said, implementing AI well in software requires a few subtle shifts in thinking, so we'll step through all but the first one:
- Demarcate the tasks inside the process a person’s executing.
- Locate the burdensome effort inside that task.
- Classify that effort.
- Match the AI enabler to the effort type to simulate that effort.
- Architect and implement in a clearly beneficial way.
Most designers and architects can pluck tasks out of a process, so there's no need to cover that here.
AI enablement reduces effort, not people
AI-enabled software clarifies and accelerates task completion by replacing, augmenting, or amplifying human effort with computer processing. Well-implemented AI-enablement reduces intellectual labor, trimming the number of steps, amount of information, or volume of stimuli assaulting your user. In a manufacturing setting, AI is best used to focus the worker (employee, robot, etc.) on the workpiece and increase the correctness and efficiency of that worker’s effort.
Many people reflexively ask how AI can replace humans and design for that use case, but effort is the key component. Similarly, pop-business frameworks like Jobs to Be Done focus on outcomes, like the proverbial ¼″ hole, but so far, effective real-world AI has been about reducing effort.
Notice that if you remove “AI-enabled software” and “with computer processing” from the first sentence of this section, you end up with a good generic description of what technology does. As a software designer or architect, this isn’t new. As a product manager, it’s why your software exists.
Key question: What effort do you want to reduce?
Classifying effort
Your next key question is an immediate follow-up: what kind of effort do you want to reduce, augment, or improve?
We’ve found that in the workplace, human efforts can be split into six primary classes:
- Labor—production effort needed to make something.
- Experience—knowing likely outcomes and shortcuts based on previous exposure to similar circumstances.
- Judgment—accurate observations and conclusions.
- Coordination—aligning and managing the efforts of others.
- Direction—acting in furtherance of an intent while adapting to circumstances and limits.
- Empathy—understanding how someone else (like a customer or employee) feels and perceives a given situation.
There are certainly more, but those describe most workplace tasks. While no single technology delivers a holistic substitute for human beings working toward an objective, you can simulate pieces of the whole to great benefit.
Key question: What type of effort are you trying to reduce or enhance?
Simulating effort
Next, we match the type of effort your design intends to reduce, augment, or improve to the AI technology that best simulates said effort. Let’s look at the options:
Key question: Which technology, if any, fits the type of effort you want to reduce or enhance?
Simulating labor
Generative AI and large language models (LLMs) simulate labor by producing simulacra that accelerate creation and revision. We say “simulacra” specifically because generative AI produces its best attempts at a prompt’s desired output based on its model.
Generative AI supplementation can relieve the burdens of creation and enable rapid iteration, so wringing the most out of generative AI means designing workflows into your software that aid:
- AI prompt editing
- Result review
- Result acceptance
then facilitate a human’s:
- Review
- Editing
- Approval
Remember, gen AI outputs are imitations that get close but rarely nail it. When they’re close, they help your expert users apply the final tweaks that express their talents. AI generation cannot replace guided design over several weeks or months with multiple feedback inputs, user testing, and revision cycles, but there are moments in most business processes where generative AI's brand of brute force saves time.
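The review-and-approval workflow above can be sketched as a small state machine. This is a minimal illustration, not a prescription; the `Draft` and `Status` names are ours, and a real application would persist these transitions and attach them to your prompt-editing UI:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    DRAFTED = auto()    # gen AI produced a candidate output
    IN_REVIEW = auto()  # a human is reviewing it
    APPROVED = auto()   # a human signed off

@dataclass
class Draft:
    prompt: str
    body: str
    status: Status = Status.DRAFTED
    edits: list = field(default_factory=list)  # prior versions, for audit

    def review(self):
        self.status = Status.IN_REVIEW

    def edit(self, new_body):
        # Keep the AI's attempt so reviewers can compare versions.
        self.edits.append(self.body)
        self.body = new_body

    def approve(self):
        # Enforce the human-in-the-loop step the workflow depends on.
        if self.status is not Status.IN_REVIEW:
            raise ValueError("a human must review before approval")
        self.status = Status.APPROVED
```

The point of the guard in `approve` is the design principle itself: the software should make it impossible to ship a generated simulacrum that no human has looked at.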
Simulating experience
Coded policies simulate experience by attaching what a veteran already knows to the object of inquiry. In a complex production or service environment, the tedium of remembering which customer needs what under which circumstances encourages mistakes, stoppages, and future rework.
When people hear “digital twin,” most picture a building full of sensors and a lookalike UI, but digital twins can digitize context, not just hardware. When represented as rules in code, contracts and procedures can be associated with tasks, eliminating guessing, stoppages, and lookups by making obscure or implicit knowledge automatically explicit in, say, a work order. Today, work order 123 might just describe the job to be done, but a policy-enhanced work order 123 can inherit rules from context, "knowing" that:
- It has Customer X, which means it gets the special coating in the finishing bay.
- It’s subject to Customer X Contract 2, which means it must be shipped faster than usual.
- It’s Job Type B, which means it’s never routed through the welding facility and can bypass any corresponding quality management.
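The policy-enhanced work order above can be sketched as rules in code, each pairing a condition with the implicit knowledge it makes explicit. The `WorkOrder` shape and rule wording are illustrative assumptions, not any product’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class WorkOrder:
    number: int
    customer: str
    contract: str
    job_type: str
    notes: list = field(default_factory=list)  # rules surface here

# Each rule is (condition, knowledge-made-explicit).
RULES = [
    (lambda wo: wo.customer == "Customer X",
     "Apply special coating in the finishing bay"),
    (lambda wo: wo.contract == "Contract 2",
     "Expedited shipping required"),
    (lambda wo: wo.job_type == "B",
     "Skip welding facility and its quality checks"),
]

def apply_rules(wo):
    # Enrich the work order with everything its context implies.
    for condition, note in RULES:
        if condition(wo):
            wo.notes.append(note)
    return wo
```

A rule table like this is also what makes the simulation mode described below possible: feed it hypothetical work orders and watch where the constraints pile up.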
As circumstance-specific this-not-thats stack up, the nearly-factorial nature of a task’s forking paths introduces more opportunities for failure, and the farther down the process a task makes it, the higher the stress of any rework, backtracking, or scrap.
Later on, the predictive beauty of this system blossoms when you add a simulation mode and then introduce an event like a large order or a disabled machine and watch the bottlenecks appear! You can also leverage a tool like this to understand absolute cost per cycle in your system, maybe for the first time.
Simulating judgment
Ongoing, rule-driven analyses simulate judgment by comparing an event’s data to synthesized baseline data to highlight anomalies and patterns. A given event’s data is typically measured against specified tolerances and flagged for action depending on findings. That recommended action is what separates simulated judgment from mere detection: judgment takes the added step of influencing action, for example sorting formed concrete blocks that meet quality standards from those that don’t.
Simulated judgment becomes tricky in a hurry, but it works as you, the designer or architect, narrow your ask and shrink the denominator of possibilities a given event must be analyzed against. Biology’s taxonomic ranks (domain, kingdom, phylum, class, order, family, genus, species) are a useful way to think about this: are you asking a computer vision system to identify specific bird species from the entire class of birds, or from a single defined genus?
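At its simplest, measuring an event against baseline tolerances and recommending an action looks like the sketch below. The function name, the three-sigma tolerance, and the action strings are our assumptions for illustration:

```python
import statistics

def judge(measurement, baseline, tolerance_sigmas=3.0):
    """Compare one event's measurement to baseline data and
    recommend an action, not merely flag a detection."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    deviation = abs(measurement - mean)
    if deviation <= tolerance_sigmas * stdev:
        return "pass"           # within tolerance: meets quality standards
    return "divert-for-rework"  # out of tolerance: route off the line
```

The return value is the judgment: it tells downstream automation (or a person) what to do with the workpiece, which is the step that separates judgment from detection.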
Simulating coordination
Coordination is where digital tools already shine brightest. Chances are your application facilitates coordination somehow, even though the multitude of communication tools frequently makes coordination more difficult!
For those that need a boost, the humble task list is still the place to look. With sufficient context, Natural Language Processing (NLP) tools can help tailor a worker’s or user’s to-dos specifically for them and trigger reminders. They can also help prefill forms and other bureaucratic instruments to reduce the follow-up burden, helping people “just do their jobs.”
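Even before NLP enters the picture, the tailoring and prefill described above can be as simple as filtering by context and copying known fields forward. The field names here are hypothetical, purely to show the shape of the idea:

```python
def build_todo(worker, work_orders):
    """Tailor the day's tasks to one worker and prefill the
    bureaucratic fields that context already determines."""
    todos = []
    for wo in work_orders:
        if wo["station"] != worker["station"]:
            continue  # not this worker's station; don't show it
        todos.append({
            "task": f"{wo['job']} (order {wo['number']})",
            "due": wo["due"],
            # Prefilled so the worker never retypes what we know.
            "operator": worker["name"],
            "shift": worker["shift"],
        })
    # Soonest deadline first.
    return sorted(todos, key=lambda t: t["due"])
```

An NLP layer would sit on top of a structure like this, phrasing the tasks and reminders in the worker’s own terms rather than changing what gets listed.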
Simulating direction
We haven’t seen a technology that can replace actual leadership or a technology that can reliably accept leadership intent and interpret it correctly and justly against complicated circumstances. When people worry about the robot apocalypse, this is why.
Simulating empathy
You’re in chatbot territory now. If you’re honestly considering whether you should simulate empathy for a customer or employee, please stop unless you have an unimpeachable reason to do so.
Your efforts reduce theirs
Say you’ve chosen a target: “reducing an estimator’s effort by prefilling information and recommending tasks and pricing.” Successful AI implementations in software tools require defined targets, which we’ve discussed, but also defined benefits:
Start small, then repeat. Start with one type of job, one automated task, or one analysis. Think statistically: “we need to both increase the accuracy of this station’s output and reduce the likelihood of an undesirable variance.” You’re not eliminating an entire job, just reducing the burden created by part of it. You don’t need to collect all the data, either, just the right data for that one effort reduction.
Minimize pointless effort. This is how you're helping instead of harming. This is an easy sell in our example since everyone knows that data entry still wastes time everywhere, and recommendations or prefilled options can save countless cycles. If you’re hunting for pointlessness, interviews and observational studies root out useless work quickly.
Eliminate irrelevant outputs. Users have to trust what you make, and computerized garbage outputs will sabotage that trust. Think taxonomically about what is and is not included in your AI systems’ possible user-facing output and be explicit about both. It's never pleasant, but you may have to design around AI's foibles.
The farther into AI you go, the harder it becomes to hyperventilate about LLMs, GPTs, automated factories orbiting in the night sky, or the machine-driven apocalypse. For now, AI helpers are tools in the toolbox, and it pays to equip users with the right tool for the job.
Key question: If people gain what you expect them to gain from your AI implementation, what will it change for them?
At Next Mile, we’ve spent a huge amount of time helping businesses implement meaningful AI solutions to productivity problems. If you’ve been challenged by the “what are we doing with AI?” question and need a boost, please contact us and we’ll get you going.