Last 6 months

Local to global – Using LIME for feature importance


In a previous blog post we discussed how to leverage LIME to gain more insight into specific predictions generated by a black-box decision service. LIME is mostly used to find out which input features were most important for the generation of a particular output, according to that decision service. Such explanations… Read more →
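
As a taste of what the post covers, here is a minimal sketch of ranking input features with LIME. It assumes the open-source `lime` Python package and a scikit-learn classifier trained on the Iris dataset rather than the decision service used in the post.

```python
# A minimal sketch, assuming the `lime` package and scikit-learn; the
# Iris data and random forest below are placeholders, not the decision
# service discussed in the post.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
)

# Explain a single prediction: which features mattered most for this output?
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```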

Autotuning LIME explanations with few predictions


Tuning algorithms, especially when machine learning is involved, is often a tricky business. In this post we present an optimization-based technique to automatically tune LIME in order to obtain more stable explanations. LIME (Local Interpretable Model-agnostic Explanations) is one of the most commonly used algorithms for generating explanations of AI-based models. Read more →
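
To make the stability problem concrete, the hypothetical snippet below runs LIME several times on the same input and measures how much the top-3 feature sets agree; the Jaccard score is our own stand-in metric, not the optimization objective from the post.

```python
# A hypothetical illustration of LIME's instability, not the tuning
# technique from the post: repeated runs on the same input can rank
# features differently because LIME relies on random sampling.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)
explainer = LimeTabularExplainer(data.data, feature_names=data.feature_names)

def top_k(x, k=3):
    # Return the set of the k highest-weighted features for one instance.
    exp = explainer.explain_instance(x, model.predict_proba, num_features=k)
    return {feature for feature, _ in exp.as_list()}

# Jaccard overlap of the top-3 sets across five runs (our stand-in
# stability metric; 1.0 means the runs fully agree).
runs = [top_k(data.data[0]) for _ in range(5)]
jaccard = len(set.intersection(*runs)) / len(set.union(*runs))
print(f"top-3 feature agreement across 5 runs: {jaccard:.2f}")
```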

Model fairness with partial dependence plots


A quick guide on how to leverage partial dependence plots to visualize whether an ML model is fair with respect to different groups of people. As machine learning models, and decision services in general, are increasingly used to aid decisions that impact human lives, a common concern that is often… Read more →
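
As a rough illustration, the sketch below plots partial dependence on a synthetic, hypothetical "group" feature standing in for a protected attribute; a flat curve would suggest the model's output does not hinge on group membership. It uses scikit-learn's PartialDependenceDisplay, not necessarily the tooling from the post.

```python
# A minimal sketch using scikit-learn's PartialDependenceDisplay; the
# synthetic "group" column is a hypothetical protected attribute, not
# data from the post.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
group = rng.integers(0, 2, size=500)  # hypothetical protected attribute
X = np.column_stack([X, group])
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Plot how the predicted outcome varies with group membership alone;
# a flat curve suggests the model is not keying on the group feature.
PartialDependenceDisplay.from_estimator(
    model, X, features=[3], feature_names=["f0", "f1", "f2", "group"]
)
plt.show()
```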

Last Year

An introduction to TrustyAI Explainability capabilities


In this blog post you’ll learn about the TrustyAI explainability library and how to use it to provide explanations of “predictions” generated by decision services and plain machine learning models. The need for explainability: Nowadays, AI-based systems and decision services are widely used in industry across a wide range of domains, like… Read more →