The AI Value Catalyst

How to get more value from AI with less risk

AI is transforming businesses, creating winners and losers at an ever-accelerating pace.

Three-quarters of C-level executives believe that if they don’t move beyond experimentation to aggressively deploy artificial intelligence (AI) across their organizations, they risk going out of business by 2025 (according to this report from Accenture).

At the same time, 56% of business leaders feel that their staff lack the skills to implement AI, and 42% feel that they don’t fully understand AI’s benefits and uses in the workplace (according to Gartner’s 2019 CIO survey).

Clearly, there is tremendous potential for personal and business success in AI. But equally clear is the risk, both perceived and real, of wasting time, money, and effort on AI efforts that fail.

This article is the first in a series that aims to give you a ‘catalyst’ for AI value generation: a four-step tool to accelerate your ability to generate value from AI while keeping risk under control.

The present article gives you an overview. At the end you will find links to subsequent articles in the series that dig deeper into the individual steps.

How does the AI Value Catalyst help?

There are four main steps involved in creating value with AI:

  • Observe
  • Understand
  • Simulate
  • Decide

We observe by collecting, storing, and preparing data. 

We understand by feeding our data to a suitable machine learning model and training it with an appropriate algorithm.

We simulate by varying certain inputs or conditions and using our model to predict how the outputs will change accordingly. 

Finally, we decide by using our simulator to figure out what the best decision is in each setting or condition.

The classical way to visualize this is as a staircase of four steps, rising from bottom left to top right.

From left to right, difficulty increases: it is relatively easy to collect data (“what happened”), harder to explain why it happened, and harder still to predict what is going to happen. Finally, working out what to do – which action to actually take – is the hardest: it requires all the previous knowledge plus additional optimization.

Along with difficulty, value increases. A tool that helps us decide what to do is clearly more valuable than just being able to see the data, which in most realistic cases can be hard to grasp with our limited human senses and pattern recognition ability.    

Thinking about AI like this tends to lead people to implement and seek value from AI by starting at the bottom left and progressing up and to the right. After all, it seems natural to start with the least difficult bit.

The problem with that approach is that it tremendously increases the risk of wasting time and money on AI projects that end up going nowhere.

The AI Value Catalyst minimizes that risk by turning the classical approach upside down: it tells us to start at the end.

Start at the end

The biggest value we can get from AI lies in enabling us to make better decisions. Therefore, we should start by identifying decisions that could be significantly improved through AI. Let’s call these “decision targets”.

This is a search problem: find the right decision. We are not really interested in searching for data, models, algorithms, or even predictions. We should search for that which matters: decisions. 

The problem with starting at the bottom left of that staircase is that we can end up investing a lot of time and effort in collecting and processing data that we won’t need, building models and algorithms that turn out to be ineffective, and predicting quantities that turn out not to matter.

By starting at the end, we instantly narrow our search space by orders of magnitude.

Decision builds on simulation

The way AI makes decisions is similar to the way we humans do it. We consider the alternatives and imagine what the outcomes would be. Based on that, we make our decision. You could say that before we decide, we simulate what would happen. AI works the same way: it needs to be able to simulate what would happen in various conditions and how that outcome is affected by the different actions it could take. 

Imagine a business case where we have identified a costly decision: when to replace parts of a manufacturing machine. Presently, the decision is made by tracking operating hours and replacing each part at the interval recommended by the parts manufacturer. Frequently, the replaced part turns out to be fine and the replacement was wasteful. We did avoid breakdowns – which are catastrophic – but what if we could replace only the parts that actually need to be replaced?

To do that, we would need a simulator that could tell us the odds of a particular part breaking down within, say, the next 8 hours. It should take operating hours and some measure of the load on the part as inputs and provide the breakdown likelihood as an output.

This simulator allows us to make better decisions. For example, we could create an app for the engineers that alerts them when it is time to replace specific parts. In the end, we would replace fewer parts and save big chunks of cash.
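To make this concrete, here is a minimal sketch of what such a simulator’s interface and decision rule might look like. Everything in it is an illustrative assumption: the hand-written logistic curve stands in for the trained model we get to below, and the function names, coefficients, and threshold are placeholders, not part of the method.

```python
import math

def breakdown_probability(operating_hours: float, load: float) -> float:
    """Probability that the part breaks down within the next 8 hours.

    The logistic curve below is a hand-tuned stand-in; in the real
    system this would call the trained model from the next step.
    """
    risk_score = 0.004 * operating_hours + 1.5 * load - 6.0
    return 1.0 / (1.0 + math.exp(-risk_score))

# The decision rule the engineers' app would apply. The threshold is an
# assumption to be tuned against replacement cost vs. breakdown cost.
REPLACE_THRESHOLD = 0.2

def should_replace(operating_hours: float, load: float) -> bool:
    return breakdown_probability(operating_hours, load) >= REPLACE_THRESHOLD

# Example: a part with 900 operating hours under a normalized load of 0.8.
print(should_replace(900, 0.8))  # True (probability is roughly 0.23)
```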

Simulation builds on understanding

A simulator builds on an understanding of the underlying dynamics of the problem. For our parts breakdown example, the simulator needs a machine learning model that can relate conditions and time to the probability of breakdowns. There are various approaches to building this kind of model. One is based on classification, where a classifier takes the history of the part as input and outputs the probability that the part will break down in the next 8 hours.
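As a sketch of what the classification approach might look like in practice, assuming scikit-learn: the synthetic data, feature set (total hours and normalized load), and labels below are all illustrative assumptions standing in for the real part histories.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for the real part histories: one row per part.
hours = rng.uniform(0, 2000, size=500)   # total operating hours
load = rng.uniform(0.0, 1.0, size=500)   # normalized load on the part
X = np.column_stack([hours, load])

# Synthetic labels: 1 = broke down within the 8-hour window, 0 = did not.
y = (0.002 * hours + 2.0 * load + rng.normal(0, 0.5, size=500) > 3.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# predict_proba is what the simulator would call: column 1 holds the
# breakdown probability for each part.
print(model.predict_proba(X_test[:3])[:, 1])
print("held-out accuracy:", model.score(X_test, y_test))
```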

Another approach is regression, where the output is instead the wear of the part.

Yet another option is a so-called “latent variable model”, where the true state of the part is treated as a hidden variable and the model predicts either the observed wear or the probability of breakdown within a specific time window.

It is generally best to start with the simplest model that we can think of. That will make it easier to assemble all the parts into a working prototype. We can always build a more complex model later.

Understanding builds on data

At this point, we have a very clear idea of the kind of data we need.

For our parts maintenance example, say we decided to go with the classification approach. To build that, we need operational data – time and loads – and the label the engineer assigned to each part after replacement.

In our example case, it turns out that the logs kept by our engineers contain this data. Every time a part is replaced, it is inspected by the engineer, who records the wear as either “minimal”, “worn”, or “heavy wear”. We decide to treat the first two categories as “did not break down” and the third as “broke down”. Although parts with heavy wear have a varying number of hours left in them before breaking down, the data doesn’t capture that, so we will simply err on the side of caution here.
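In code, that mapping could be as simple as the following sketch; the column names and example rows are hypothetical.

```python
import pandas as pd

# Hypothetical excerpt of the engineers' log.
log = pd.DataFrame({
    "part_id": [101, 102, 103, 104],
    "operating_hours": [850, 1200, 400, 1500],
    "wear": ["minimal", "heavy wear", "worn", "heavy wear"],
})

# Err on the side of caution: only "heavy wear" counts as a breakdown.
log["broke_down"] = (log["wear"] == "heavy wear").astype(int)
print(log)
```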

At a later time, we may decide to build a classifier with three classes, matching the engineer’s log categories. 

Before we build anything on the data, we visualize it to get an understanding of its properties. We analyze the quality, looking for missing data, sensor errors, and the like. It might turn out, for example, that some of the log entries are missing the operating hours, recorded loads, or the classification label. We’ll have to come up with some way of handling that. We should also consider automating the collection of hours and load data as we start thinking about how our solution would be deployed in a production environment.
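A few lines of pandas are enough for a first pass at these quality checks. The handling policy below (dropping unlabeled rows, imputing missing loads with the median) is just one reasonable assumption, not the only option.

```python
import pandas as pd

# Hypothetical raw log with the kinds of gaps described above.
log = pd.DataFrame({
    "operating_hours": [850, None, 400, 1500],
    "load": [0.7, 0.9, None, 0.8],
    "wear": ["minimal", "heavy wear", "worn", None],
})

print(log.isna().sum())  # how much is missing, per column?

# Rows without a label are useless for training the classifier, so drop
# them; missing numeric inputs can be imputed instead of discarded.
log = log.dropna(subset=["wear"])
log["load"] = log["load"].fillna(log["load"].median())
print(log)
```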

When do we actually build something?

You may have noticed that all four steps of the AI Value Catalyst are about design rather than implementation. That is because before we actually build something – which takes a lot of time and effort – we want to get as close as possible to knowing what we need to build. That significantly reduces the risk of wasted effort. Measure twice, cut once.

Once we’ve gone through all four steps, the time has come to start building. Here we go through the steps in reverse. The difference from simply starting in that order is that now we know exactly where we want to end up.

First, we collect and process data. Then we build and train our models. Then we build our simulator. Finally, we test the new method for making the target decision. If it works well and reliably, we roll our new solution out to production. If not, we go through the steps again – as many times around the block as we need to make sure we are creating real value.
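As a sketch, the build phase can be organized as a simple loop over these steps. Every function name and the success criterion here are placeholders for the real work described above, not a prescribed API.

```python
# A compact sketch of the build order; the bodies are placeholders.

def collect_and_prepare_data():
    ...  # load the engineers' logs, clean them, map the wear labels

def train_model(data):
    ...  # fit the breakdown classifier

def build_simulator(model):
    ...  # wrap the model behind a breakdown_probability() interface

def creates_real_value(simulator):
    ...  # replay history: fewer replacements, no missed breakdowns?

def deploy(simulator):
    ...  # roll the solution out to the engineers' app

def build_solution(max_rounds=3):
    for _ in range(max_rounds):  # as many times around the block as needed
        data = collect_and_prepare_data()
        model = train_model(data)
        simulator = build_simulator(model)
        if creates_real_value(simulator):
            deploy(simulator)
            return simulator
    return None  # no reliable solution yet; revisit the decision target
```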

As you gain experience, you will be able to mentally juggle all four steps during your implementation. Until then, stick with the four steps of the AI Value Catalyst. Slow is smooth and smooth is fast, as they say on “SEAL Team”.

Where to go from here

In the rest of this series we will look in more detail at each of the four steps; the next article is about targeting the right decision.

If you’d like my help with leveraging AI strategically for your business – and your career – contact me. 
