The AI value catalyst step 2: Building the right simulator

This is the 3rd post in the AI Value Catalyst series. The previous post can be found here.

In step 1, we identified the most promising business decision for adding value with AI.

In our running example of the wind turbine business we found that the decision to replace a particular drivetrain bearing is a good candidate for AI value generation. Bearings are frequently replaced even though they are not worn – simply because we don’t know their true state and we can’t risk the turbine breaking down. 

We estimated that if we could reduce this waste the potential savings would be at least $7,000 per month.  

But how can we find out when a bearing really needs to be replaced? How can we move from planned to predictive maintenance? To answer that, we need to build the right simulator for our decision. 

Intermezzo: User interface & experience

You could argue that building the right user interface with a great user experience deserves its own step in the AI Value Catalyst. 

However, UI/UX design is its own “beast” and while the combination of AI and UI/UX is a worthy and fascinating subject, not all decisions are carried out by humans. Further, once the target decision is clear and the simulator is designed, the UI and UX can more or less proceed in parallel. That is why we won’t dig into it further in this context.

Decision checklist

The Cambridge dictionary defines a decision as:

“A choice that you make about something after thinking about several possibilities.”

So we have several possibilities, and we think about each one –  makes sense. This is useful but not complete for our needs; we need to dig a bit deeper to make sure our simulator is sufficient for us to make optimal decisions. 

Firstly, a decision is always made in a specific context, under specific conditions – present and historical. Should we buy this used car? Well, that depends on the conditions. How many people have owned it previously? How many miles has it done?

Secondly, we must often decide under constraints. If we have plenty of cash we might decide on a new car. If not, buying that used one with 100k miles on it could be a better decision.

Thirdly, we must be very clear about our objective. We must define how we measure the outcome of our decision making. 

In addition to this general anatomy of any decision, we also want to make decisions based on data – historical and present observations of the system we want to optimize. 

Now we have a checklist for designing our simulator: the objective, the alternatives, the conditions, the constraints, and what we need to predict.

For our example, this could look as follows:

  • Objective: Replace parts when close to failing
  • Alternatives: Replace or not
  • Conditions: The history of loads on each drivetrain, operating hours, and vibration measurements are available.
  • Constraints: Only one: no matter what our AI tells us, we will not allow any bearing to exceed a certain very high number of operating hours – at least until the new AI has earned our trust. 
  • Predicting: We have several options that we need to consider.

Now we are ready to design a simulator that will enable us to switch to predictive maintenance for our wind turbine drivetrains. And start saving some serious cash on the bottom line.

Designing our simulator

Our objective is to replace a part when there is a certain risk, within a certain timeframe, that it will wear out. Conversely, we do not want to replace it when that risk is low. 

Our simulator therefore needs to output something that can be related to this risk. If we call our “decision window” W, then that could be:

  A. A probability that the part will become worn within W
  B. A probability that the part will fail within W
  C. An estimate of the “remaining useful lifetime” of the part
  D. An estimate of whether recent measurements are “normal” or not

There are other approaches as well, but these are the most typical ones taken for predictive maintenance.

Option B is tricky because our maintenance data contains no failures. In fact there aren’t many examples of worn parts either – but enough, hopefully, to make option A work.

Option C is also a bit tricky. To make that work, we would have to invest expert manual labor to estimate the remaining useful lifetime or RUL for each inspected part. 

Option D is probably the easiest to implement. It consists of designing some kind of probability density distribution for our measurements, using only data for periods for which we know that the part was not worn. For example, we could base this on only the first few (relatively speaking) hours of operation for each part. We would then estimate the probability that recent measurements of vibration were generated from this distribution. If not, something might be wrong. 

The trouble with this approach is that it can be hard to relate this “something” with the probability that the part is actually worn.
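As an illustrative sketch of option D, assuming we model "healthy" vibration with a simple Gaussian baseline: we fit the distribution on measurements from periods when the part was known to be ok, then flag recent readings that fall far outside it. All numbers, thresholds, and the choice of a plain z-score test are invented here for illustration, not taken from the article.

```python
# Option D sketch: flag vibration readings that look unlikely under a
# "healthy" baseline distribution. Data is synthetic; the 3-sigma
# threshold is an illustrative assumption, not a tuned value.
import numpy as np

rng = np.random.default_rng(42)

# Baseline: vibration amplitudes recorded while the bearing was known to be ok,
# e.g. from the first (relatively few) hours of operation of each part.
baseline = rng.normal(loc=1.0, scale=0.1, size=500)
mu, sigma = baseline.mean(), baseline.std()

def looks_normal(measurement, z_threshold=3.0):
    """True if the measurement lies within z_threshold standard
    deviations of the healthy baseline mean."""
    return abs(measurement - mu) / sigma < z_threshold

print(looks_normal(1.05))  # close to the healthy baseline
print(looks_normal(1.80))  # far outside the healthy baseline
```

Note that this only tells us a reading is *unusual* – which is exactly the weakness discussed above: an unusual reading is not the same as a worn part.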

We decide to go with option A first. We will design what in machine learning is called a classifier. It will learn to relate operating hours, load, and vibrations to the “class” of each bearing. To keep it simple, we will classify bearings as either “ok” or “not ok” (some wear detected). 

This classifier can tell us – any time we ask it – what the probability is of the part being worn. If this is high – say, 50% or more – we replace. Otherwise, we don’t. 
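The ok/not-ok classifier and the 50% replacement rule could be sketched as follows. Everything here is an assumption for illustration: the data is synthetic, logistic regression is just one possible model choice, and the feature names mirror the conditions listed earlier (operating hours, load, vibration).

```python
# Option A sketch: a binary classifier mapping operating hours, load, and
# vibration to a probability that the bearing is worn ("not ok").
# Synthetic data and logistic regression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400

hours = rng.uniform(0, 50_000, n)
load = rng.uniform(0.2, 1.0, n)
vibration = rng.normal(1.0, 0.1, n) + 0.00002 * hours  # wear raises vibration

# Synthetic label: bearings with many hours under high load tend to be worn.
worn = (hours * load + rng.normal(0, 3_000, n)) > 30_000

X = np.column_stack([hours, load, vibration])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, worn)

def replace_bearing(hours, load, vibration, threshold=0.5):
    """Replace when the predicted probability of wear meets the threshold."""
    p_worn = model.predict_proba([[hours, load, vibration]])[0, 1]
    return p_worn >= threshold

print(replace_bearing(45_000, 0.9, 1.9))  # old, heavily loaded bearing
print(replace_bearing(2_000, 0.3, 1.0))   # nearly new, lightly loaded bearing
```

The 50% threshold in `replace_bearing` is a business choice, not a modeling one: raising it replaces fewer bearings (more savings, more risk), lowering it does the opposite.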

Next step: understanding

Having designed the simulator that we need to start making optimal decisions about bearing replacement, the next step is to work out the details of the machine learning models and algorithms we will need to run it. 

That is the subject of the next article in this series.

In the meantime – if you’d like my help with leveraging AI strategically for your business – and your career – contact me.
