A Genetic Algorithm with Trusty PMML

Recently, I stumbled upon an interesting article and companion project about a genetic algorithm, and I asked myself whether the features of Trusty PMML could be meaningfully used in such a context. I won’t go deep into technical details, but basically, the genetic algorithm treats features as "genes", and a set of genes is a "genome"… Read more →
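To make the gene/genome terminology concrete, here is a minimal genetic-algorithm sketch. Everything in it (the toy fitness function, gene ranges, and hyperparameters) is an illustrative assumption, not taken from the referenced article or project:

```python
import random

GENOME_LENGTH = 5      # number of genes (features) per genome
POPULATION_SIZE = 20
GENERATIONS = 40
MUTATION_RATE = 0.1

def fitness(genome):
    # Toy objective: maximize the sum of the genes.
    return sum(genome)

def random_genome():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LENGTH)]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randint(1, GENOME_LENGTH - 1)
    return a[:point] + b[point:]

def mutate(genome):
    # Each gene is independently re-drawn with probability MUTATION_RATE.
    return [random.uniform(0.0, 1.0) if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve():
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Keep the fitter half, refill with mutated offspring of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POPULATION_SIZE // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POPULATION_SIZE - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

A PMML model could, in principle, play the role of the fitness function here, scoring each genome as a set of feature values.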

How to integrate your Kogito application with TrustyAI – Part 3

In the second part of this blog series (https://blog.kie.org/2020/12/how-to-integrate-your-kogito-application-with-trustyai-part-2.html) we showed how to set up the OpenShift cluster that will host the TrustyAI infrastructure and the Kogito application we created in the first part (https://blog.kie.org/2020/12/how-to-integrate-your-kogito-application-with-trustyai-part-1.html). In this third and last part of our journey, we are going to demonstrate how to deploy the TrustyAI infrastructure… Read more →

How to integrate your Kogito application with TrustyAI – Part 2

In the first part (https://blog.kie.org/2020/12/how-to-integrate-your-kogito-application-with-trustyai-part-1.html) we created a Kogito application and configured it to work with the TrustyAI infrastructure. In this second part, we are going to talk about the setup of the OpenShift cluster (https://docs.jboss.org/kogito/release/latest/html_single/#chap-kogito-deploying-on-openshift). The first step is to create a new project, which we call my-trusty-demo. As you can… Read more →

How to integrate your Kogito application with TrustyAI – Part 1

How can you audit a decision coming out of your new Kogito application? It’s pretty simple: in this series of articles, we are going to demonstrate how to create a new Kogito application and how to deploy the TrustyAI infrastructure on an OpenShift cluster. If you are new to TrustyAI, we suggest you read this introduction: https://blog.kie.org/2020/06/trusty-ai-introduction.html With… Read more →

TrustyAI meets Kogito: the decision tracing addon

New to Kogito? Check out our “get started” page and get up to speed! 😉 This post presents the decision tracing addon: a component of the Kogito runtime that is quite relevant for the TrustyAI initiative (introduced here and here). One of the key goals of TrustyAI is to enable advanced auditing capabilities, which, as written in… Read more →

An introduction to TrustyAI Explainability capabilities

In this blog post you’ll learn about the TrustyAI explainability library and how to use it to provide explanations of “predictions” generated by decision services and plain machine learning models. The need for explainability: nowadays, AI-based systems and decision services are widely used across industry in a wide range of domains, like… Read more →

TrustyAI meets Kogito: decision monitoring

In this article, we introduce the metrics monitoring add-on for Kogito. This add-on is part of the TrustyAI initiative already introduced in a previous article (https://blog.kie.org/2020/06/trusty-ai-introduction.html). Like Quarkus extensions, Kogito add-ons are modules that can be imported as dependencies and add capabilities to the application. For example, another add-on is the infinispan-persistence-addon, which… Read more →

TrustyAI Aspects

As mentioned in the previous blog post, we are implementing three aspects: explainability, runtime and accountability. However, we need to see how these connect with the use cases and personas. The first aspect we will consider is runtime tracing and monitoring. The term monitoring refers to the system overseeing performance or… Read more →

TrustyAI Introduction

Have you ever used a machine learning (ML) algorithm and been confused by its predictions? How did it make this decision? AI-infused systems are increasingly being used within businesses, but how do you know you can trust them? We can trust a system if we have confidence that it will make critical business decisions accurately. Read more →

Model fairness with partial dependence plots

A quick guide on how to leverage partial dependence plots to visualize whether an ML model is fair with respect to different groups of people. As machine learning models, and decision services in general, are increasingly used as aids in making decisions that impact human lives, a common concern that is often… Read more →
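The core computation behind a partial dependence plot is simple: for each candidate value of one feature, force that feature to the value for every row in the dataset and average the model’s predictions. A hand-rolled sketch (the toy linear model and random data below are illustrative assumptions, not from the post):

```python
import numpy as np

def partial_dependence(model, X, feature_index, grid):
    """Average model prediction when feature_index is clamped to each grid value."""
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_index] = value   # clamp the feature for every row
        averages.append(model(X_mod).mean())
    return np.array(averages)

# Toy "model": a linear score over two features.
def model(X):
    return 2.0 * X[:, 0] + 0.5 * X[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(100, 2))
grid = np.linspace(0.0, 1.0, 5)
pd_curve = partial_dependence(model, X, feature_index=0, grid=grid)
```

Plotting `pd_curve` against `grid` gives the partial dependence plot; computing it separately per group (e.g. by gender or age band) and comparing the curves is one way to eyeball fairness.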