Last week

TrustyAI SHAP: Overview and Examples

SHAP is soon to be the newest addition to the TrustyAI explanation suite. To properly introduce it, let's briefly explore what SHAP is, why it's useful, and go over some tips on how to get the best performance out of it. A Brief Overview: Shapley Values. The core idea of a SHAP explanation is that of… Read more →
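
As background for that core idea (this is the standard Shapley value definition SHAP builds on, not text from the post): for a payoff function v over the feature set N, feature i's Shapley value is

    \phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!} \bigl( v(S \cup \{i\}) - v(S) \bigr)

that is, feature i's marginal contribution averaged over every order in which features can join the coalition; SHAP estimates these values for each feature of a single prediction.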

Counterfactuals: getting the right answer

Sometimes the result of an automated decision is neither desired nor required. What if there were a tool that could find a way to overturn such a decision, perhaps by changing some of the figures provided to the system, and achieve a different outcome? That's what we've been working on lately within… Read more →
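
As a general formulation of that search (the standard counterfactual objective, not necessarily the exact one used here): given a model f, an input x, and a desired outcome y', a counterfactual x' is found by solving

    x' = \arg\min_{z} \; d(x, z) \quad \text{s.t.} \quad f(z) = y'

where d is a distance function that keeps the suggested changes to the original input as small as possible.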

Last 6 months

Local to global – Using LIME for feature importance

In a previous blog post we discussed how to leverage LIME to get more insight into specific predictions generated by a black-box decision service. In fact, LIME is mostly used to find out which input features were most important for the generation of a particular output, according to that decision service. Such explanations… Read more →
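
For reference (the standard LIME objective from Ribeiro et al., which the post builds on): a local explanation g for model f around instance x is chosen as

    \xi(x) = \arg\min_{g \in G} \; L(f, g, \pi_x) + \Omega(g)

where L measures how well the simple surrogate g mimics f on perturbed samples weighted by the locality kernel \pi_x, and \Omega penalizes the complexity of g.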

Introducing process operational monitoring for Kogito

Monitoring is a well-known concept in Kogito: support for decisions has been available since Kogito 0.11 through the Prometheus monitoring add-on. Today we announce that, starting from Kogito 1.11.0, this add-on is enhanced to enable monitoring of processes. Unlike decisions, however, the feature is currently limited to operational metrics. The domain metrics section is… Read more →

Using TrustyAI’s explainability from Python

The TrustyAI explainability library is primarily aimed at the Java Virtual Machine (JVM) and designed to integrate seamlessly with the remaining TrustyAI services, adding explainability capabilities (such as feature importance and counterfactual explanations) to business automation workflows that integrate predictive models. Many of these capabilities are useful on their own. However, in the data… Read more →
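
As an illustration of the general pattern such a Python-to-JVM bridge uses (a minimal sketch with JPype; the jar path and explainer class name below are placeholders for whatever the library actually ships, not its confirmed API):

    import jpype
    import jpype.imports  # enables Java-style imports once the JVM is running

    # Start a JVM with the explainability jars on the classpath
    # (the path here is a hypothetical placeholder).
    jpype.startJVM(classpath=["explainability-core/*"])

    # Look up a Java class by its fully qualified name; this class name
    # stands in for the explainer the library actually exposes.
    LimeExplainer = jpype.JClass(
        "org.kie.kogito.explainability.local.lime.LimeExplainer")

    explainer = LimeExplainer()
    # ... build inputs with the library's Java types and call the explainer ...

    jpype.shutdownJVM()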

Shopping recommendations in PMML

In previous posts (PMML revisited and Predictions in Kogito) we had a glance at how a PMML engine has been implemented inside the Drools/Kogito ecosystem. This time we will start looking at a concrete example of a recommendation engine built on top of PMML. The first part of this post will deal with the ML aspect of it… Read more →

Autotuning LIME explanations with few predictions

Tuning algorithms, especially when machine learning is involved, is often a tricky business. In this post we present an optimization-based technique to automatically tune LIME in order to obtain more stable explanations. LIME (Local Interpretable Model-agnostic Explanations) is one of the most commonly used algorithms for generating explanations of AI-based models… Read more →
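
To make the notion of "stability" concrete (a minimal sketch of one common way to measure it; explain_fn is a hypothetical explainer returning feature names ranked by importance, and the post's actual technique is optimization-based rather than this naive repeat-and-compare):

    def jaccard(a, b):
        """Overlap between two sets of feature names (1.0 = identical)."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def lime_stability(explain_fn, x, top_k=3, runs=5):
        """Run a (hypothetical) LIME explainer several times on the same
        input and average the pairwise overlap of the top-k features.
        Low scores mean sampling noise is changing the explanation."""
        tops = [explain_fn(x)[:top_k] for _ in range(runs)]
        scores = [jaccard(tops[i], tops[j])
                  for i in range(runs) for j in range(i + 1, runs)]
        return sum(scores) / len(scores)

An autotuner can then search LIME's hyperparameters (for example, the number of perturbed samples) to maximize such a score while keeping the number of model predictions low.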

Getting started with TrustyAI in only 15 minutes

Hi Kogito folks! In previous blog posts we demonstrated how to deploy a Kogito service together with the TrustyAI infrastructure on an OpenShift cluster: https://blog.kie.org/2020/12/how-to-integrate-your-kogito-application-with-trustyai-part-1.html. If you are new to TrustyAI, we suggest you read this introduction: https://blog.kie.org/2020/06/trusty-ai-introduction.html. In this blog post, we'd like to demonstrate how to get started with TrustyAI in ~15 minutes. In order… Read more →

Last year

Model fairness with partial dependence plots

A quick guide on how to leverage partial dependence plots to visualize whether an ML model is fair with respect to different groups of people. As machine learning models, and decision services in general, are increasingly used as aids in making decisions that impact human lives, a common concern that is often… Read more →
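
For reference (the standard partial dependence estimate such plots are built on): the partial dependence of model f on a feature subset S, averaging the remaining features C over the n training points, is

    \hat{f}_S(x_S) = \frac{1}{n} \sum_{i=1}^{n} f\bigl(x_S, x_C^{(i)}\bigr)

Plotting \hat{f}_S for a sensitive feature, or for one group at a time, makes systematic differences in the model's output visible.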