AI Decision Transparency
There was an interesting article in New Scientist last month highlighting new UK regulations on AI decision transparency. Organizations there could face multi-million pound fines if they cannot adequately explain the decisions their models make, and similar regulations exist in the EU. This points to a problem with the way many AI systems are built on deep learning, where the model itself acts as a big black box: you feed the inputs to the model and it spits out a result at the end. The "why" of the model is missing, because you have no idea where the result came from or how the model arrived at it.
A lack of transparency is fine for some classification models and other use cases. But when it is critically important to prove fairness, to provide best-of-care solutions, or to explain why a certain decision was made, black-box AI will not suffice. Does this mean that machine learning cannot be applied? Not at all, but it will change your approach.
There are a few different approaches you can employ to create models that can be understood and explained, all of which use the same underlying machine learning technology. The first thing to understand is that there are a good number of machine learning modeling techniques. Experts in the field will debate how to classify these various models, but any of the following model types are regularly advertised as machine learning algorithms: regression, instance-based, regularization, decision trees, Bayesian, clustering, association rules, artificial neural networks, deep learning, and the list goes on.
Fundamentally, to explain how an algorithm arrives at a conclusion you need to answer the question "if this, then that." The best algorithms for answering this question are decision trees. You might use other algorithms (regression, Bayesian) to calculate inputs to a decision tree, but the decision tree will give you the ultimate explanation of how your model arrived at a conclusion. A decision tree might end up slightly less "accurate" than a deep learning model, but it will be able to explain itself.
Using a decision tree model will result in a series of "if then" rules that determine your model outputs. These rules can then simply be created in (or imported into) a rules engine like the Decisions platform. This allows you to explain how the model arrived at a given conclusion from a given set of inputs. It also allows you to look closely at the rules and clean up or modify any that look spurious and don't make sense. Spurious rules indicate that your model is likely overtrained and is homing in on a specific data point that doesn't match a real-world scenario.
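As a minimal sketch of what those auditable rules look like in practice, the snippet below trains a small decision tree with scikit-learn and prints its nested if/then conditions. The iris dataset and the depth cap are illustrative stand-ins for your own data and modeling choices, not a prescription.

```python
# Train a decision tree and render it as human-readable "if then" rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Cap the depth so the rule set stays short enough for a human to audit.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the fitted tree as nested if/then conditions,
# ready to review for spurious branches or port into a rules engine.
rules = export_text(tree, feature_names=load_iris().feature_names)
print(rules)
```

Each printed branch is one candidate rule; branches that split on an implausible threshold are exactly the "spurious rules" worth cleaning up before they reach production.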
Another method would be to create a neural network or deep learning model and then run a sensitivity analysis on it. A sensitivity analysis is a method whereby you sweep each input from its minimum to its maximum and watch how sensitive the output is to that input. Looking at sensitivity scores can give great insight into which inputs are important and help you create rules based on what the model has taught you.
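Here is a rough sketch of that idea as a one-at-a-time sensitivity analysis: hold every input at its mean, sweep a single feature from its minimum to its maximum, and measure how far the model's predicted probabilities swing. The small MLP, the iris data, and the swing-based score are all illustrative assumptions; real sensitivity studies often use more formal variance-based methods.

```python
# One-at-a-time sensitivity analysis over a neural network's inputs.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, y)

sensitivity = {}
for j, name in enumerate(load_iris().feature_names):
    grid = np.tile(X.mean(axis=0), (50, 1))   # hold all features at their mean
    grid[:, j] = np.linspace(X[:, j].min(), X[:, j].max(), 50)  # sweep one
    probs = model.predict_proba(grid)
    # Score = how far each class probability swings across the sweep, summed.
    sensitivity[name] = float(probs.max(axis=0).sum() - probs.min(axis=0).sum())

for name, score in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```

Inputs with near-zero scores barely move the output and can often be dropped from your rules; the high-scoring ones are where the explanatory "if this, then that" logic should live.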
Model to Decision Tree
A final method might be to model your model. This sounds a bit confusing, but a decision tree created from a deep learning model's outputs can sometimes generate better rules than a decision tree trained directly on the initial data. In this approach, you feed your model's inputs and outputs into a second, decision tree model. Here again, you end up with a set of "if this, then that" rules that can explain the decision-making process.
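The "model your model" idea can be sketched as a surrogate tree: train a black-box model, then fit a decision tree to the black box's predictions rather than the original labels, so the tree approximates the learned decision surface. The random forest here merely stands in for any opaque model; the fidelity check is one common way to confirm the surrogate mimics the black box well enough to trust its rules.

```python
# Fit an interpretable surrogate decision tree to a black-box model.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier  # stand-in black box
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate learns from the black box's outputs, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=load_iris().feature_names))
```

If fidelity is high, the surrogate's if/then rules are a faithful, explainable proxy for the black box and can be imported into a rules engine just like a tree trained on the raw data.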
To conclude, in many industries and applications it is critically important to understand, and be able to explain, why a machine learning model has arrived at a particular result or conclusion. This doesn't mean you can't use "black box" models in some form or fashion, but it will dictate the final model choice. Implementing decision tree models in a rules engine like Decisions lets you not only employ machine learning models but also explain them.
If you would like to talk about your specific model or use case, please feel free to reach out to us at firstname.lastname@example.org. We love talking about rules and models.