TrustyAI Introduction

Have you ever used a machine learning (ML) algorithm and been confused by its predictions? How did it make this decision? AI-infused systems are increasingly being used within businesses, but how do you know you can trust them? 

We can trust a system if we have confidence that it will make critical business decisions accurately. For example, can a doctor trust a medical diagnosis made by an AI system? It is essential that domain experts (such as doctors) can trust the system to make accurate decisions. Another important reason for this trust is customer understanding. New laws such as GDPR give individuals the right to know how their data has been processed. Domain experts must therefore understand how a customer’s data has been processed, so that they can pass this information back to the customer.

This has led to a new initiative within the KIE group to increase trust in decision-making processes that depend on AI predictive models. This initiative focuses on three aspects: runtime, explainability and accountability.

Within TrustyAI we will combine ML models and decision logic (leveraging the integration of the DMN and PMML standards) to enrich automated decisions with predictive analytics. By monitoring the outcomes of decision making, we can audit systems to ensure they meet regulations such as those above. We can also trace these results through the system, helping to build a global overview of the decisions and predictions made. TrustyAI will leverage the combination of these two standards to ensure trusted automated decision making.
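
As a taste of the decision side of this combination, here is a minimal sketch of evaluating a DMN model with the Kie DMN runtime API. The namespace, model name, and input values are hypothetical placeholders; in the combined setup, an input such as a credit score could be produced by a PMML predictive model.

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.dmn.api.core.DMNContext;
import org.kie.dmn.api.core.DMNDecisionResult;
import org.kie.dmn.api.core.DMNModel;
import org.kie.dmn.api.core.DMNResult;
import org.kie.dmn.api.core.DMNRuntime;

public class LoanDecisionExample {
    public static void main(String[] args) {
        // Obtain a DMN runtime from the classpath KieContainer
        KieServices kieServices = KieServices.Factory.get();
        KieContainer kieContainer = kieServices.getKieClasspathContainer();
        DMNRuntime dmnRuntime = kieContainer.newKieSession()
                .getKieRuntime(DMNRuntime.class);

        // Hypothetical namespace and name for a loan-approval DMN model
        DMNModel dmnModel = dmnRuntime.getModel(
                "https://example.org/definitions/loan-approval", "LoanApproval");

        // Populate the decision inputs; "Credit Score" is the kind of input
        // that could be supplied by a PMML predictive model
        DMNContext dmnContext = dmnRuntime.newContext();
        dmnContext.set("Applicant Age", 35);
        dmnContext.set("Credit Score", 710);
        dmnContext.set("Loan Amount", 150_000);

        // Evaluate all decisions in the model and print each result
        DMNResult dmnResult = dmnRuntime.evaluateAll(dmnModel, dmnContext);
        for (DMNDecisionResult decision : dmnResult.getDecisionResults()) {
            System.out.println(decision.getDecisionName() + ": " + decision.getResult());
        }
    }
}
```

Let’s look at a use case to put this into context.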

“As a bank manager I want to manage current loans and approval rates to ensure they meet company regulations. I also want to review specific loan decisions so that I can explain to the customer why that loan was rejected or accepted.”

There are four personas that we will use in this example:

  1.  A data scientist needs to inspect a predictive ML model in detail to check whether its features are being used correctly. For example: if a loan request is denied because of the applicant’s gender, that is an unfair bias in the model and needs to be rectified. By enabling the user to investigate a model, TrustyAI gives them the ability to identify these biases and ensure a model is balanced and accurate (see the bias-check sketch after this list).
  2. A compliance officer wants to ensure the whole system is compliant with company policies and regulations. 
  3. A caseworker is the end user of the system, using it to make decisions. An example of an end user would be the bank manager who wants to tell the customer why a bank loan was rejected.
  4. A DevOps engineer looks at the health metrics of the system to ensure that the system runs correctly. They will monitor system behaviour over time.
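
To illustrate the kind of bias check the data scientist in persona 1 might run, here is a small, self-contained sketch (not TrustyAI’s actual API) that computes the disparate impact ratio, i.e. the approval rate for one group divided by that for another, over a batch of loan decisions. The class and field names are hypothetical.

```java
import java.util.List;

public class BiasCheckExample {

    // Hypothetical record of a single loan decision
    record LoanDecision(String gender, boolean approved) {}

    // Approval rate for applicants of the given gender
    static double approvalRate(List<LoanDecision> decisions, String gender) {
        long total = decisions.stream()
                .filter(d -> d.gender().equals(gender)).count();
        long approved = decisions.stream()
                .filter(d -> d.gender().equals(gender) && d.approved()).count();
        return total == 0 ? 0.0 : (double) approved / total;
    }

    public static void main(String[] args) {
        List<LoanDecision> decisions = List.of(
                new LoanDecision("female", true),
                new LoanDecision("female", false),
                new LoanDecision("female", false),
                new LoanDecision("male", true),
                new LoanDecision("male", true),
                new LoanDecision("male", false));

        // Disparate impact ratio: values far below 1.0 (commonly below 0.8,
        // the "four-fifths rule") suggest the model may be biased
        double ratio = approvalRate(decisions, "female")
                / approvalRate(decisions, "male");
        System.out.printf("Disparate impact ratio: %.2f%n", ratio);
    }
}
```

A ratio well below 1.0 would prompt the data scientist to investigate how the model uses the gender feature.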

Each of these personas uses a different aspect of TrustyAI. In our next blog post we will relate each of these aspects to the personas and the use cases mentioned here.
