Building user trust in AI-driven processes
In just a few years, the adoption of artificial intelligence, or AI, has grown exponentially. AI systems now help us with everything from detecting objects, recognizing text and improving our grammar, to optimizing our smartphone battery life and reminding us to stay hydrated. However, as the capability of AI technologies grows, so too does their complexity. The more capable these systems become, the less we understand how they make decisions. Successful deployment and adoption of any use case that demands decision transparency will therefore depend on an AI’s ability to explain how it gets from A to B, both to win the trust of customers and to satisfy the requirements of regulators.
Over the past couple of years, researchers have shown that training AI models on exponentially more data substantially expands their overall capability set. Today, this capability expansion is tightly coupled with growing model complexity, and growing complexity makes it harder to understand how these systems actually operate. In other words, the more capable an AI becomes, the less interpretable it is. This is commonly referred to as the black box problem of AI, and an entire academic research field, known as ‘explainable AI,’ works to address this shortcoming.
Another side effect of more capable AI systems is that users of highly automated systems tend to place a disproportionate amount of trust in the outputs that are produced, and grow complacent about critically evaluating them. This phenomenon is called ‘automation bias,’ and it isn’t unique to AI; it is a risk factor in all highly automated systems.
Whether or not the use of AI is immediately visible to, or appreciated by, an end user, these systems have a profound impact on the overall solution they are part of. For innocuous use cases such as photography or image and text generation, the potential downside of deploying an AI to make decisions automatically at scale is limited, especially when the effort required to retry is negligible.
However, in many regulated markets, such as healthcare, finance or manufacturing, successful deployment and adoption of AI solutions demands that we deeply understand how these systems make decisions, including what makes people build trust in them. In these markets, it is only a matter of time before the explainability of an AI becomes a hygiene factor, at the insistence of customers, or a license to operate, at the insistence of regulators.
At Encube, we build what we call Intelligence Augmentation workflows into our platform. These workflows rely heavily on AI models to surface and contextualize information related to product industrialization, but they don’t make end-to-end decisions without humans in the loop. Our belief is that this approach has a much higher probability of commercial success than fully automated solutions that try to displace human decision-making. Successful deployment and adoption of the AI we’re embedding into our platform therefore hinges on our ability to quickly build trust with customers and to ensure they feel in control of the decisions being made to push product development forward.
Trust is built by being transparent about how our AI arrives at a certain outcome from a given starting point, including why it chose A over B. Control of decision-making, in turn, requires that users can challenge the choices our AI makes by adjusting the presented outcome, in small ways or large. We achieve this in three key ways, illustrated in the sketch that follows the list.
- Everything our AI produces can be presented in a visual context. This includes highlighting a potential manufacturing issue and what can be done to address it, simulating the actual production process, or visualizing how a specific design choice drives manufacturing cost.
- We build an end-to-end view of the full decision chain our AI follows, from start to finish. This helps our customers quickly understand how our AI reasons and why it makes the choices it does, in the context of their day-to-day work to secure product industrialization at scale.
- We give customers the freedom to challenge and modify the decisions our AI makes as it analyzes a part design to understand its manufacturability, how it should be produced and what the associated manufacturing costs will be, whenever they believe a change will produce a more competitive outcome.
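To make the decision chain and the human-in-the-loop review concrete, here is a minimal sketch in TypeScript. All names (DecisionStep, isApproved and so on) are hypothetical and chosen purely for illustration; they are assumptions, not Encube’s actual data model or API. The point is simply that every step carries its own rationale and alternatives, and nothing is final until a person has reviewed it.

```typescript
// Minimal sketch of a human-reviewable decision chain (hypothetical names,
// not Encube's actual data model). Each step records what the AI concluded,
// the evidence behind it, and any adjustment a user made.

type Review = { status: "accepted" | "adjusted" | "rejected"; note?: string };

interface DecisionStep {
  id: string;
  question: string;       // what the step decides, e.g. "Which process?"
  conclusion: string;     // the AI's chosen answer
  alternatives: string[]; // options it considered but did not choose
  rationale: string;      // plain-language explanation shown to the user
  confidence: number;     // 0..1, surfaced so users can calibrate trust
  review?: Review;        // filled in once a human has looked at the step
}

// A chain is an ordered list of steps, from design analysis to cost.
type DecisionChain = DecisionStep[];

// A chain is only considered final once every step has been reviewed and
// none of them stand rejected: the human stays in the loop throughout.
function isApproved(chain: DecisionChain): boolean {
  return chain.every(
    (step) => step.review !== undefined && step.review.status !== "rejected"
  );
}

// Example: a user overrides the suggested machining process.
const chain: DecisionChain = [
  {
    id: "process-selection",
    question: "Which manufacturing process fits this part?",
    conclusion: "3-axis CNC milling",
    alternatives: ["5-axis CNC milling", "casting"],
    rationale: "All features are reachable from two setups.",
    confidence: 0.82,
    review: { status: "adjusted", note: "Prefer 5-axis to cut setup time." },
  },
];

console.log(isApproved(chain)); // true: reviewed and not rejected
```

In a design along these lines, a user adjusting a single step (say, swapping the suggested machining process) leaves the rest of the chain intact and auditable, which is what keeps people squarely in control of the outcome.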
We chose to build our software this way because we genuinely believe that the purpose of AI isn’t to displace workforce talent, but rather to empower people to get better work done, faster. A co-pilot, if you will, that provides in-depth analysis and feedback, while keeping users squarely in control of the product development process.
Machine things. Better.
This is just a sneak peek of what we're up to. Reach out to learn more about how we're reimagining product industrialization at its core.
About
We're Encube, a deep tech software startup operating out of Sweden. We're fundamentally reimagining computer-aided industrialization for modern teams: collaborative, cloud-native, and highly AI-powered by design. Reach out to learn more and set up a demo session.
Resources
Check out our engineering blog to keep tabs on some of the more radical ideas we're pursuing.
Career
We have a trail. Now we need blazers. If you're head over heels for AI, distributed computing or computer graphics, and want to work on truly impactful problems, drop us a note and we'll chat.
Get in touch
Interested in getting in touch with us? Drop us a note at contact@getencube.com, and we'll get the conversation started!