Dashbuilder External Components Javascript API
External Components were introduced in the Developing Custom Components for Dashbuilder post. At that point, if your component wanted to consume Dashbuilder data, it had to manually handle the messages coming from Dashbuilder (DB) to build the component.
As the library of components grew, we wanted to avoid repeating code such as model class definitions and window message dispatching, especially because External Components now support not only datasets but also function calls and configuration change requests.
In order to avoid code duplication and make it easier to create components, we developed a JavaScript/TypeScript API for DB External Components.
External Components Javascript API
The External Components introduction post discussed how the integration with DB works. In summary, we exchange messages between the component and DB, and these messages carry objects that contain datasets and other information, such as function call responses.
In our API we used TypeScript to create classes that represent the message and the message content; however, API users are not required to know the internals of the message and related objects.
That said, we can divide the API into two parts:
- Model Objects: The message object, message type, and the objects that come with the messages, such as Dataset, FunctionRequest, and FunctionResponse;
- Controller: The controller connects the component to DB; it is the class that allows components to interact with DB and receive datasets.
The controller allows you to:
- Register callbacks to receive datasets and the init message;
- Call a function and have the response in a Promise;
- Send a request to DB asking users to modify the configuration;
- Send filter requests.
Let’s take a look at some code to make this clearer:
Usage
The package that contains the API is @dashbuilder-js/component-api. As already mentioned, it is written in TypeScript, so the types are included as well.
The API entry point is the class ComponentApi, which gives access to the ComponentController. Once you create a component API, you can access the component controller and set a callback for init or dataset events.
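Below is a minimal sketch of what this wiring can look like. The package and class names (ComponentApi, ComponentController) come from the API itself; the exact method names used here (getComponentController, setOnInit, setOnDataSet) are assumptions made for illustration and may differ from the published API.

import { ComponentApi } from "@dashbuilder-js/component-api";

const api = new ComponentApi();
// Method names below are assumptions; check the package documentation for the exact API.
const controller = api.getComponentController();

controller.setOnInit((params: Map<string, any>) => {
  // DB sends the component configuration (the properties set in the layout editor).
  console.log("Component initialized with", params);
});

controller.setOnDataSet((dataSet: any) => {
  // DB pushes a new dataset whenever the data changes or a filter is applied.
  const root = document.getElementById("root");
  if (root) {
    root.textContent = JSON.stringify(dataSet);
  }
});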
That’s all you need to know to build components that can display data coming from DB!
The same API can also be used with React. The only requirement is to be careful when registering the dataset and init callbacks. Here’s an example:
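The original example is not reproduced here, but a sketch of the pattern might look like the following: it registers the callbacks only once (inside useEffect with an empty dependency list) and keeps the received dataset in component state. The controller method names are the same assumptions as in the sketch above.

import { useEffect, useState } from "react";
import { ComponentApi } from "@dashbuilder-js/component-api";

const api = new ComponentApi();

export function HelloComponent() {
  const [dataSet, setDataSet] = useState<any>(null);

  useEffect(() => {
    // Register the callbacks only once; registering them on every render is the
    // pitfall mentioned above. Method names are assumptions for illustration.
    const controller = api.getComponentController();
    controller.setOnInit((params: Map<string, any>) => console.log("init", params));
    controller.setOnDataSet((ds: any) => setDataSet(ds));
  }, []);

  return <pre>{dataSet ? JSON.stringify(dataSet, null, 2) : "Waiting for data..."}</pre>;
}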
This results in a working component, and the full code can be found in our components library.
Component Development
With the component API, we also introduced a new development package that emulates Dashbuilder, which means that we can develop components without running a full Dashbuilder distribution.
The package for component development is @dashbuilder-js/component-dev. After you add it to your project's dev dependencies, you can simply call new ComponentDev().start(); and it will send the dataset and init parameters to the component.
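A minimal sketch of the development entry point, assuming the package exports the ComponentDev class mentioned above:

import { ComponentDev } from "@dashbuilder-js/component-dev";

// Starts the Dashbuilder emulator: it picks up manifest.dev.json and sends the
// configured init parameters and dataset to the component under development.
new ComponentDev().start();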
Parameters, datasets, and function calls can all be mapped in a file named manifest.dev.json, which should be in your component's root folder. Here is a manifest.dev.json for the hello component:
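The original file is not reproduced here; the sketch below only illustrates the idea, and every key name in it is hypothetical, so check the hello component in the components library for the real schema.

{
  "_comment": "hypothetical example: key names and structure may differ",
  "params": {
    "title": "Hello Dashbuilder"
  },
  "dataset": {
    "columns": ["Country", "Population"],
    "data": [["Brazil", 211000000], ["Spain", 47000000]]
  }
}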
Notice that it should be used with a webpack (or equivalent) development server.
Conclusion
The code for the hello component can be found on GitHub. The component JS API makes it very easy to create Dashbuilder visual components. In the next post, we will talk about a more complex component that was created to be part of Business Central: the process heatmap!
Queries for Building Kie Server Dashboards
Business Central is a great tool to create Dashboards as presented in KieLive#8 Authoring Dashboards in Business Central. In that video, datasets were mentioned, but there was no example of Kie Server datasets. In this post, we describe how to create Kie Server datasets and share some useful queries to create datasets for business dashboards.
Creating Kie Server DataSets
A Kie Server dataset runs remote queries against a Kie Server installation and organizes the results in datasets that can be used in business dashboards. The only requirement for this post is a Kie Server connected to Business Central. Fortunately, the jBPM server installation already comes with this setup configured for you, with a sample Kie Server connected to Business Central.
In order to create a dataset, we must first access the Datasets tool from the Admin tools in Business Central.
Kie Server datasets are also called Remote Server datasets. When you access the Datasets editor and click on New Data Set you will see the Execution Server type in the list.
Once you select Execution Server, you will be prompted to enter more information about the dataset:
UUID: The dataset identifier which is also used as the query name on the Kie Server side;
Name: A user-friendly name for the dataset;
Data Source JNDI Name: This field can be ignored. It is inherited from SQL datasets, which are not used here;
Server Name: The Kie Server configuration for the servers connected to BC;
Query Target: Set it to CUSTOM to use free-form SQL;
Query: The SQL query used to retrieve data.
After you fill in the fields, click Test and you will see the query result in a table; then click Next to save. You will then be able to use this dataset to create dashboards.
To build the query you need an understanding of the jBPM table schema, which can take some time if you are a beginner.
To help you get insights from your running Kie Server, we share some useful queries in this post, explain them, and then show how you can import all of them into your installation.
Kie Server Useful Queries
You can retrieve a lot of insights from Kie Server. In this section, we will describe some useful and common queries:
- Nodes updated in the last minute: This query checks which process nodes were active during the last minute, which can be used to monitor process activity over a period of time. You can change it to any period of time (a SQL sketch is shown after this list).
- Task status as categories: A task's status is stored as a value in a single column. In this query, we pivot the row values into columns, so that status can be used as a chart category.
- Node execution time and hits: Heatmaps were introduced in Business Central 7.48.0 and they need to be fed with queries that provide information about a process's nodes. With this query, we get each node's execution time and total hits, and both pieces of information can be used with the Process Heat Map component.
- Task variable values: This dataset contains all human tasks and their variable values. It is useful to check a specific task's user input.
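As a sketch of the first query in the list, the statement below counts node activations during the last minute from the jBPM audit log. The table and column names (NODEINSTANCELOG, LOG_DATE, PROCESSID, NODENAME) are assumptions based on the standard jBPM audit schema, and the date arithmetic depends on your database dialect.

SELECT PROCESSID,
       NODENAME,
       COUNT(*) AS HITS
  FROM NODEINSTANCELOG
 WHERE LOG_DATE >= CURRENT_TIMESTAMP - INTERVAL '1' MINUTE -- change to any period of time
 GROUP BY PROCESSID, NODENAME
 ORDER BY HITS DESC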
Import queries
If you don’t want to create the queries manually, you can import a ZIP file we made available; it contains all the described queries and may be updated later with new ones. Notice that the ZIP also contains queries used internally by Business Central, but in public mode, so you can use them to build your own dashboards.
When you import the queries and go to the datasets page you may notice an error. This happens because the server template name was probably not found in your installation. There are at least two ways to work around this problem:
- Correct the server template name in the dataset editor;
- Modify the dataset JSON files directly in the ZIP to set the correct server template. You can also delete the files you don’t need.
NOTE: When testing a query you may hit the error from JIRA RHPAM-3359. Ignore the error, save the dataset, and re-open it. This is a known issue when testing datasets, related to column ordering, and it should be fixed in a later version.
Conclusion
In this post, we shared how to create Kie Server datasets and how to import useful queries into your installation. You can also check this post's content on YouTube:
In the next post we will finally introduce the new built-in Heatmap component.
Embed your BPMN and DMN models!

In the last release, we have introduced a new feature that enables you to embed your BPMN and DMN models in any web application with an iframe. Let’s have a better look at it.
We have updated our toolbar on the Business Modeler Preview, and now under the “Share” menu, there is an “Embed” option.
Clicking on it will open a modal containing the Embed options.
The first option, “Current content,” will create an Embed code with a static model, which can’t be changed after it’s embedded. In that case, if you need to update your model, you’ll have to go through this process again, generating a new Embed code and pasting it into your application.
To enable the “GitHub gist” option, you currently need to have a gist open on the Business Modeler Preview. If you don’t, you can easily create a new one using the “Gist it!” option in the “Share” menu. To enable that option, you must set up your GitHub token. Remember that the token needs the “gist” permission.
Back in the “Embed” modal, the “GitHub gist” option will create an Embed code using the gist content, and the embedded model will reflect the contents of the gist: if the gist is updated, the embedded model is too, keeping the gist and the generated embed in sync. Note that updates can take a few minutes to show up due to GitHub's gist caching.
Clicking on the copy icon will copy the Embed code directly to your clipboard, so you can paste it anywhere you like. Here is a code example generated by “GitHub gist”:

Now you can start using this option in any application that supports an iframe element. You can make tutorials, demos, examples, or even documentation!
Thanks a lot for reading, and stay tuned for more awesome updates.
JBPM Messages and Kafka
When I was studying for my degree, I recall a wise teacher who repeatedly told us, his beloved pupils, that the most difficult part of a document is the beginning, and I can assure you, dear reader, that he was right, because I cannot figure out a better way to start my first entry on a Red Hat blog than explaining the title I have chosen for it. The title refers to two entities that have been brought together, hopefully for good, in the jBPM 7.48.x release: BPMN messages and Kafka.
According to the not always revered BPMN specification, a message “represents the content of a communication between two Participants”, or, as interpreted by jBPM, a message is an object (which in this context means a Java object, either a “primitive” one like Number or String, or a user-defined POJO) that is either being received by a process (participant #1) from the external world (participant #2) or, obeying the rules of symmetry, being sent from a process to the external world.
Kafka, not to be mistaken with the famous existentialist writer from Prague, is an “event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications”. In plainer language, it is the middleware that is becoming the de facto standard for inter-process asynchronous communication in the business software world. If it is still not clear to you, think of Kafka as a modern replacement for your old JMS or equivalent message broker pal, but please do not tell anyone I have written that, because, as Kafka's designers proudly manifest, their fancy toy does so much more than that.
As with other messaging brokers, Kafka uses a set of channels, called topics, to organize the data being processed. This data consists of a set of persistent records, each composed of an optional key and a meaningful value. Most Kafka use cases consist of reading and/or writing records from and to a set of topics and, as you will have guessed by now, jBPM is no exception to that rule, so the purpose of this entry is to explain how KIE Server sends BPMN messages to and receives them from a Kafka broker.
With such spirit, let me briefly explain the functionality that has been implemented. When Kafka support for messages is enabled, for any KIE JAR (remember, a KIE JAR contains a set of BPMN processes and the related artifacts needed to make them work) that is deployed into a KIE Server (because Kafka integration is a KIE Server feature, not a jBPM engine one), if any of the processes being deployed contains a message definition, different interactions with the Kafka broker will occur, depending on the nodes where that message is used.
If the nodes using the message are Start, Intermediate Catch, or Boundary events, a subscription to a Kafka topic will be attempted at deployment time. When a Kafka record containing a value with the expected format is received on that topic, the jBPM engine is notified and acts accordingly: either a new process instance is started or an already running process instance is resumed (depending on whether the incumbent node is a Start or an Intermediate Catch, respectively).
However, if the nodes using the message are End or Intermediate Throw events, then a process event listener is automatically registered at deployment time, so that when a process instance, as part of its execution, reaches one of these nodes, a Kafka record containing the message object is published to a Kafka topic.
Examples
Now that the functionality has been described, let's illustrate how it really works with a couple of processes. In the first one, a process instance is started by sending a message through the Kafka broker. In the second one, a message object containing a POJO is published to the Kafka broker when the process ends.
The first example just consists of a start message event that receives the message object, a script task which prints that message object, and an end node.

The start event node, besides receiving the message named “HelloMessage”, assigns the message object to a property named “x”, of type com.javierito.Person. A person has a name and an age.
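The com.javierito.Person type is not shown here; a minimal sketch of what such a POJO might look like, with the fields mentioned in the text and a toString matching the console output below, is:

package com.javierito;

public class Person {

    private String name;
    private int age;

    public Person() { }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    @Override
    public String toString() {
        return "Person [name=" + name + ", age=" + age + "]";
    }
}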

The script task just prints the content of “x” to the console using Java code (the output of the toString method), to verify that the message has been correctly received.

When this process is deployed to KIE Server and the Kafka extension is enabled, if we publish {"data":{"name":"Real Betis Balompie","age":113}} on the Kafka topic “HelloMessage”, then Received event is Person [name=Real Betis Balompie, age=113] is printed in the KIE Server console.
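One quick way to try this, assuming a local broker on localhost:9092 and the console tools shipped with recent Kafka distributions:

# Publish the example record on the HelloMessage topic
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic HelloMessage
# then paste the value as a single line:
{"data":{"name":"Real Betis Balompie","age":113}}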
The second example diagram is even more straightforward than the previous one; it just contains two nodes: a start node and an end message event.

In order to fill the message object to be sent, an input assignment is defined to set the message object value from the property “person”, of type com.javierito.Person.

And that’s all: when an instance of this process is executed, passing as the “person” property a Person instance whose name is Real Betis Balompie and whose age is 113, a cloud event JSON object {.... "data":{"name":"Real Betis Balompie", "age":113},"type":"com.javierito.Person" ...} is sent to the Kafka topic “personMessage”.
Hopefully these two simple examples give you a basic idea of the kind of functionality that can be achieved when integrating BPMN messages with Kafka. In the next section, you can find a FAQ where certain technical details are discussed.
FAQ
- How is Kafka functionality enabled? Kafka functionality is provided at the KIE Server level. As an optional feature, it is disabled by default, and it is implemented using an already existing functionality called KIE Server extensions. In order to enable it:
  - For EAP deployments, set the system property org.kie.kafka.server.ext.disabled to false.
  - In Spring Boot applications, add kieserver.kafka.enabled=true to the application properties.
- Why was Kafka functionality not included as part of the jBPM engine? Because the jBPM engine must not have dependencies on external processes. A Kafka broker, as sophisticated as it is, consists of at least one (typically more) external process which, due to its distributed nature, relies on ZooKeeper, giving a minimum of two external processes.
- How does jBPM know which Kafka topics should be used? In a nutshell, using the message name. More specifically, if no additional configuration is provided, the message name is assumed to be the topic name. In order to provide a different mapping, system properties must be used for now (an ongoing discussion about the possibility of providing the message-to-topic mapping in the process itself is happening while I write these lines). The format of these system properties is org.kie.server.jbpm-kafka.ext.topics.<messageName>=<topicName>. So, if you want to map the message name “RealBetisBalompie” to the topic name “BestFootballClubEver”, you will need to add the following system property to KIE Server: org.kie.server.jbpm-kafka.ext.topics.RealBetisBalompie=BestFootballClubEver.
- Why is a WorkItemHandlerNotFoundException thrown in my environment when the message node is executed? jBPM has been out for a while and any new functionality needs to keep backward compatibility. Before this Kafka feature was added to jBPM, when a process sent a message, a WorkItem named “Send Task” was executed. This behavior is still active, which means that, in order to avoid the exception, a WorkItemHandler implementation for “Send Task” needs to be registered. The steps to register a work item handler are described here. If only the Kafka functionality is needed, this handler might be a custom one that does nothing (its implemented methods can be left empty; a sketch is shown after this list). Keeping this legacy functionality allows both JMS (through registering the proper JMS WorkItemHandler) and Kafka (through enabling the KIE extension) to naturally coexist in the same KIE Server instance.
- Which is the expected format of a Kafka record value to be consumed by jBPM? Currently, jBPM expects a JSON object that honors the CloudEvents specification (although only the “data” field is currently used) and whose “data” field contains a JSON object that can be mapped to the Java object optionally defined in the structureRef attribute of the MessageDefinition. If no such object is defined, the Java object generated from the “data” field will be a java.util.Map. If there is any problem during the parsing procedure, the Kafka record will be ignored. In the future we are planning to also support plain JSON objects (not embedded in a “data” field) and customer customization of the parsing procedure (so the value can contain any format and customers will be able to write custom code that converts its bytes to the Java object defined in structureRef).
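A minimal sketch of such a no-op handler, using the public KIE WorkItemHandler interface (how to register it for “Send Task” is described in the documentation linked above); completing the work item is optional and is shown here only so the process can continue past the node:

package com.example.handlers; // hypothetical package

import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class NoOpSendTaskHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // The Kafka extension already publishes the message; this handler only
        // prevents the WorkItemHandlerNotFoundException for the "Send Task" work item.
        manager.completeWorkItem(workItem.getId(), null);
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Nothing to abort.
    }
}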
How to integrate your Kogito application with TrustyAI – Part 3
In the second part of the blog series (https://blog.kie.org/2020/12/how-to-integrate-your-kogito-application-with-trustyai-part-2.html) we showed how to set up the OpenShift cluster that will host the TrustyAI infrastructure and the Kogito application we created in the first part (https://blog.kie.org/2020/12/how-to-integrate-your-kogito-application-with-trustyai-part-1.html).
In this third and last part of our journey, we are going to demonstrate how to deploy the TrustyAI infrastructure and the Kogito application we created so far.
Let’s have a look at the TrustyAI infrastructure, just to get a high-level overview of the services we are going to deploy.

The yellow box represents the Kogito application: it contains our DMN model and, every time a decision is evaluated, a new tracing event is generated. This event contains all the information that the TrustyAI services need to calculate the explainability and keep track of the inputs/outputs of each decision.
The tracing events are then consumed by the Trusty Service, which stores all the data and makes it available to the frontend (a.k.a. the AuditUI). It also communicates with the Explainability Service which, in a nutshell, starts from the decision taken by the Kogito application and creates many different new decisions by perturbing the original one. Once all of the new decisions have been evaluated by the Kogito application, a machine learning model is trained to figure out the most relevant features that contributed to the original outcome.
Deployment of the Trusty Service
Let’s first deploy the Trusty Service using the Kogito Operator: go to Operators -> Installed Operators -> Kogito -> Kogito Service -> Create KogitoSupportingService.
Create a new resource named trusty-service, select the Resource Type TrustyAI and, in the Infra section, add the two KogitoInfra resource names that we created earlier: kogito-kafka-infra and kogito-infinispan-infra.
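For reference, the equivalent custom resource could look roughly like the YAML below; the spec field names (serviceType, infra) are assumptions based on the operator's v1beta1 API and may differ in your operator version.

apiVersion: app.kiegroup.org/v1beta1
kind: KogitoSupportingService
metadata:
  name: trusty-service
spec:
  serviceType: TrustyAI
  infra:
    - kogito-kafka-infra
    - kogito-infinispan-infra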

Deployment of the AuditUI
The frontend, i.e. the AuditUI, consumes the API exposed by the Trusty Service; consequently, it needs to know the URL of the Trusty Service. This information has to be injected using an environment variable called KOGITO_TRUSTY_ENDPOINT.
On OpenShift, you can get the URL of the Trusty Service we have just deployed under Networking -> Routes. Copy the URL.

On the Kogito Operator console, create a new KogitoSupportingService named trustyui-service, select the Resource Type TrustyUI and add the environment variable KOGITO_TRUSTY_ENDPOINT with the URL of the Trusty Service you’ve just copied.

Deployment of the Explainability Service
On the Kogito Operator console, once again create a new KogitoSupportingService named explainability-service with Resource Type Explainability. The KogitoInfra to be linked is only kogito-kafka-infra.

Deployment of the Kogito Application
The TrustyAI infrastructure has been deployed, and we are finally ready to deploy the Kogito application.
Create a new Kogito Service custom resource with name my-kogito-application-service and label app=my-kogito-application-service. The Image should be the tag you used for the docker image that contains your Kogito application (the one we created in the first video/blogpost). If you have used https://quay.io/, it should be something like quay.io/<your_username>/my-kogito-application:1.0. The last step is to add the string kogito-kafka-infra in the Infra section.

Execute, Audit, Explain
We’re ready to play with the Kogito application, look at the AuditUI and investigate the explainability results!
Under Networking -> Routes, click on the URL of the Kogito application: a new tab should be opened.

Let’s execute a request to the /LoanEligibility endpoint to evaluate our DMN model. If you’d like, you can use the Swagger UI at http://<your_kogito_application_url>/swagger-ui.
A sample payload is the following:
{"Bribe": 1000,"Client": {"age": 43,"existing payments": 100,"salary": 1950},"Loan": {"duration": 15,"installment": 180}, "SupremeDirector": "Yes"}

From the OpenShift console, under Networking -> Routes, open the AuditUI URL. You should be able to see the executions, the explainability results and all the inputs/outputs of each decision. Enjoy!

DecisionCAMP Monthly event on 2021-01-19
On January 19th, 2021, Mario and I will present at the perpetual DecisionCAMP monthly events!
Since DecisionCAMP 2020 was held virtually, the organizers have decided to institute a series of perpetual meetups in addition to the annual conference; you can join the community by following the instructions here.
Event Title
Kogito: Cloud-native Business Automation
Event Abstract
Kogito is a new platform, with framework capabilities based on Drools, jBPM and OptaPlanner, designed to bring our traditional, battle-tested business automation engines to the cloud.
We have rethought the architecture of our platform to enable Java and JVM developers to realize distributed business automation applications with ease.
Leveraging modern application development frameworks, such as Quarkus, we can integrate seamlessly into a large range of capabilities. In particular, Quarkus has shown how it is possible to push the boundaries of traditional Java frameworks to make them cloud-native, through the power of GraalVM’s native compilation.
After a quick introduction to Kogito, we will show with practical examples how to build cloud-native, event-driven business applications, to the point where applications can even be deployed in a serverless environment through Knative. We will also show what challenges a distributed environment poses, and how we can deal with them effectively thanks to Kogito.
Speaker bios
Mario Fusco is a principal software engineer at Red Hat, working as the Drools project lead. He has huge experience as a Java developer, having been involved in (and often leading) many enterprise-level projects in several industries, ranging from media companies to the financial sector. His interests also include functional programming and Domain Specific Languages. By leveraging these two passions, he created the open source library lambdaj with the purpose of providing an internal Java DSL for manipulating collections and allowing a bit of functional programming in Java. He is also a Java Champion and the co-author of “Modern Java in Action”, published by Manning.
Matteo Mortari is a Senior Software Engineer at Red Hat, where he contributes to Drools development and support for the DMN standard. Matteo graduated in Engineering with a focus on enterprise systems, with a thesis involving rule engines, which sparked his interest and has influenced his professional career ever since. He believes there is a whole new range of unexplored applications for Expert Systems (AI) within corporate business; additionally, he believes that defining business rules on a BRMS system not only enables knowledge inference from raw data but, most importantly, helps to shorten the distance between experts and analysts, and between developers, end-users and business stakeholders.
You can join the event by following the instructions here!
How to integrate your Kogito application with TrustyAI – Part 2
In the first part (https://blog.kie.org/2020/12/how-to-integrate-your-kogito-application-with-trustyai-part-1.html) we created a Kogito application and configured it to work with the TrustyAI infrastructure.
In this second part, we are going to talk about the setup of the OpenShift cluster (https://docs.jboss.org/kogito/release/latest/html_single/#chap-kogito-deploying-on-openshift).
The first step is to create a new project, which we call my-trusty-demo.

As you can see in the TrustyAI architecture below, a Kafka and an Infinispan instance are needed to process the events and store the information.

There are two options available:
1) You can set up, configure and deploy your own Kafka and Infinispan instances, and then bind them to the Kogito services (you can find an example using Kubernetes here: https://github.com/kiegroup/kogito-examples/tree/stable/trusty-demonstration).
2) Use the KogitoInfra custom resource that the Kogito operator offers. In this way the Kogito operator takes care of deploying and managing the instances for us. For the sake of the demo, this is the option we are going to use.
Given that we would like to use the KogitoInfra custom resource, the Strimzi and Infinispan operators have to be installed in the namespace. Go to Operators -> OperatorHub and look for Strimzi.

And install it under the namespace my-trusty-demo.

Then do the same for Infinispan, paying attention to install version 2.0.x, which is the only one supported at the moment.

Now you can install the Kogito operator from OperatorHub as well.

Let’s create the KogitoInfra custom resource: go to Operators -> Installed Operators and then click on Kogito.
In the upper tab select Kogito Infra and then click on the button Create KogitoInfra.

According to the official documentation (https://docs.jboss.org/kogito/release/latest/html_single/#_kogito_operator_dependencies_on_third_party_operators), we need to create the following resources:
apiVersion: app.kiegroup.org/v1beta1
kind: KogitoInfra
metadata:
  name: kogito-infinispan-infra
spec:
  resource:
    apiVersion: infinispan.org/v1
    kind: Infinispan
---
apiVersion: app.kiegroup.org/v1beta1
kind: KogitoInfra
metadata:
  name: kogito-kafka-infra
spec:
  resource:
    apiVersion: kafka.strimzi.io/v1beta1
    kind: Kafka
This can also be done using the OpenShift console:


Once they are created, you will see them in the KogitoInfra tab.

The OpenShift cluster has been set up, and we are ready to deploy the TrustyAI infrastructure together with the Kogito application we created in the first video.
The next part can be found here https://blog.kie.org/2020/12/how-to-integrate-your-kogito-application-with-trustyai-part-3.html
How to integrate your Kogito application with TrustyAI – Part 1
How can you audit a decision out of your new Kogito application? It’s pretty simple: in this series of articles, we are going to demonstrate how to create a new Kogito application and how to deploy the TrustyAI infrastructure on an OpenShift cluster.
If you are new to TrustyAI, we suggest you read this introduction: https://blog.kie.org/2020/06/trusty-ai-introduction.html
With the additional capabilities of TrustyAI, you will get a nice overview of all the decisions that have been taken by the Kogito application, as well as a representation of why the model took those decisions (i.e. the explanation of the decisions).
At the moment, TrustyAI provides two main features via the Audit UI:
1) The complete list of all the executions of the DMN models in the Kogito application.

2) The details of each execution, including all the inputs, all the internal outcomes and their explainability.

In order to achieve this goal, we’ll go through these three steps:
1) Create the Kogito application, enable the so-called tracing addon and create the docker image (the subject of the current blogpost).
2) Prepare your OpenShift cluster.
3) Deploy the infrastructure.
Let’s go into the details of the first step!
Create the Kogito application and enable the tracing addon
Let’s assume you have already created your DMN model that contains your business logic using your preferred channel (http://dmn.new/ for instance).
For the sake of the demo, we will use the following DMN model: https://raw.githubusercontent.com/kiegroup/kogito-examples/master/dmn-tracing-quarkus/src/main/resources/LoanEligibility.dmn .
You can create a new Kogito application using maven:
mvn archetype:generate \
-DarchetypeGroupId=org.kie.kogito \
-DarchetypeArtifactId=kogito-quarkus-archetype \
-DgroupId=com.redhat.developer -DartifactId=my-kogito-application \
-DarchetypeVersion=1.0.0.Final \
-Dversion=1.0-SNAPSHOT
And then put your DMN model under the folder my-kogito-application/src/main/resources (and delete the default BPMN resource). The project structure should look like the following:

The Kogito tracing addon is needed to export the necessary information about the decisions taken by the model. Modify the pom.xml file to import the following dependency:
<dependency>
  <groupId>org.kie.kogito</groupId>
  <artifactId>tracing-decision-quarkus-addon</artifactId>
</dependency>
The last thing to do is to configure the tracing addon: add the following lines to the file my-kogito-application/src/main/resources/application.properties.
# Kafka Tracing
mp.messaging.outgoing.kogito-tracing-decision.group.id=kogito-runtimes
mp.messaging.outgoing.kogito-tracing-decision.connector=smallrye-kafka
mp.messaging.outgoing.kogito-tracing-decision.topic=kogito-tracing-decision
mp.messaging.outgoing.kogito-tracing-decision.value.serializer=org.apache.kafka.common.serialization.StringSerializer
# Kafka Tracing Model
mp.messaging.outgoing.kogito-tracing-model.group.id=kogito-runtimes
mp.messaging.outgoing.kogito-tracing-model.connector=smallrye-kafka
mp.messaging.outgoing.kogito-tracing-model.topic=kogito-tracing-model
mp.messaging.outgoing.kogito-tracing-model.value.serializer=org.apache.kafka.common.serialization.StringSerializer
For a more detailed overview of the tracing addon, a dedicated blogpost is available here: https://blog.kie.org/2020/11/trustyai-meets-kogito-the-decision-tracing-addon.html .
If you are running on OpenShift, you can use the Kogito operator to automatically execute the build using the KogitoBuild custom resource.

As an alternative approach, in this demo we are going to deploy it using the KogitoService custom resource, which deploys a Kogito application from a docker image on a remote hub. Let's create a docker image from our project then!
Create the jar application with
mvn clean package -DskipTests
And then use the following Dockerfile to build a new docker image:
FROM quay.io/kiegroup/kogito-quarkus-jvm-ubi8:latest
COPY target/*-runner.jar $KOGITO_HOME/bin
COPY target/lib $KOGITO_HOME/bin/lib
Assuming you have an account on docker hub (https://hub.docker.com/), build your image with
docker build --tag <your_username>/my-kogito-application:1.0 .
And then push it to the hub
docker push <your_username>/my-kogito-application:1.0
The second part is here https://blog.kie.org/2020/12/how-to-integrate-your-kogito-application-with-trustyai-part-2.html
Kogito 1.0: Build-Time Optimized Business Automation in the Cloud
For the last few months, here at the KIE team we’ve been hard at work. Today I am proud to announce that our cloud-native business automation platform is hitting a major milestone. Today we release Kogito 1.0!
Kogito includes best-of-class support for the battle-tested engines of the KIE platform:
- the Drools rule language and decision platform,
- the jBPM workflow and process automation engine,
- the OptaPlanner constraint satisfaction solver;
and it brings along several new capabilities:
- our fresh new unified BPMN and DMN editors and VSCode-based extension
- the new vendor-neutral Serverless Workflow Specification
- business-relevant insights on machine-assisted decisions through the contributions of the TrustyAI initiative
- automated deployment through the Kogito Operator and the kogito CLI
- NoSQL persistence through the Infinispan and MongoDB addons
- GraphQL as the query language for process data
- microservice-based data indexing and timer management
- completely revisited UIs for task and process state
- CloudEvents for event handling
Code Generation
I believe there is a lot to be proud of, but I want to talk more about another thing that makes Kogito special, and that is the heavy reliance on code-generation.
In Kogito, code-generation has a double purpose:
- we generate code ahead-of-time to avoid run-time reflection;
- we automatically generate domain-specific services from user-provided knowledge assets.
Together, these make Kogito a truly low-code platform for the design and implementation of knowledge-oriented REST services.
Ahead-of-Time Code-Generation
In Kogito, we load, parse and analyze your knowledge assets, such as rules, decisions or workflow definitions, at build time. This way, your application starts faster, consumes less memory and, at run time, does no more than what is necessary.
Compare this to a more traditional pipeline, where all the stages of processing of a knowledge asset occur at run time:
Application Density
The Cloud, albeit allegedly being «just someone else’s computer», is a deployment environment that we have to deal with. More and more businesses are using cloud platforms to deploy and run their services, and because they are paying for the resources they use, they care more and more about them.
This is why application density is becoming increasingly more important: we want to fit more application instances in the same space, because we want to keep costs low. If your application has a huge memory footprint and high CPU requirements, it will cost you more.
While we do support Spring Boot (because, hey, you can’t really ignore such a powerhouse), we chose Quarkus as our primary runtime target, because through its extension system, it lets us truly embrace ahead-of-time code generation.
Whichever you choose, be it Spring or Quarkus, Kogito will move as much processing as possible to build time. But if you want to get the most out of it, we invite you to give Quarkus a try: its simplified support for native image generation allows Kogito to truly show its potential, producing the tiniest native executables. So tiny and cute, they are the envy of a gopher.
Kogito cuts the fat, but you won’t lose flavor. And if you pick Quarkus, you’ll get live code reload for free.
Automated Generation of Services and Live Reload
Although build-time processing is a characterizing trait of Kogito, code-generation is also key to another aspect. We automatically generate a service starting from the knowledge assets that users provide.
From Knowledge to Service: a Low-Code Platform
You write rules, a DMN decision, a BPMN process or a serverless workflow: in all these cases, in order for these resources to be consumed, you need an API to be provided. In the past, you had full access to the power of our engines, through a command-based REST API for remote execution or through their Java programmatic API, when embedding them in a larger application.
While programmatic interaction will always be possible (and we are constantly improving it in Kogito to make it better, with a new API), in Kogito we aim for low-code. You drop your business assets in a folder, start the build process, and you get a working service running.
In the animation you see that a single DMN file is translated into an entire fully-functional service, complete with its OpenAPI documentation and UI.
From Knowledge to Deployed Service: Kogito Operator
Through the Kogito Operator you are also able to go from a knowledge asset to a fully-working service with one click or one command. In this animation you can see the kogito CLI in action: the operator picks up the knowledge assets, builds a container and deploys it to OpenShift with just one command!
Fast Development Feedback
For local development, the Kogito Quarkus extension in developer mode extends Quarkus’ native live code reloading capabilities, going beyond reloading plain-text source code (a feature in Quarkus core) to add support for hot reload of the graphical models supported by our modeling tools. In this animation, for instance, you can see hot reload of a DMN decision table.
In this animation, we update a field of the decision table. As a result, the next time we invoke the decision, the result is different. No rebuild process is necessary, as it is all handled seamlessly by the Kogito extension. You get the feeling of live, run-time processing, but under the hood, Quarkus and Kogito do the heavy lifting of rebuilding, reloading and evaluating the asset.
Future Work
In the future we plan to support customization of these automatically-generated services, with a feature we call scaffolding. With scaffolding you will also be able to customize the code that is being generated. You can already get a sneak peek of this preview feature by following the instructions in the manual.
Conclusions
Kogito 1.0 brings a lot of new features. We are excited to reach this milestone and we can’t wait to see what you will build! Reach out with your feedback on all our platforms!
TrustyAI meets Kogito: the decision tracing addon
New to Kogito? Check out our “get started” page and get up to speed! 😉
This post presents the decision tracing addon: a component of the Kogito runtime quite relevant for the TrustyAI initiative (introduced here and here).
One of the key goals of TrustyAI is to enable advanced auditing capabilities, which, as written in the second introductory post, “enables a compliance officer to trace the decision making of the system and ensure it meets regulations”.
The capability to export auditable information whenever a key event occurs is the first mandatory feature that such a system must provide in order to achieve this goal. This is exactly the purpose of the decision tracing addon.
Key concepts
The addon focuses on Kogito applications exposing DMN models (hence the “decision” in the name). Every time a model is executed, the addon emits a TraceEvent containing all the relevant information about the execution.
Here are some key concepts:
- It works for Kogito applications based on both Quarkus and Spring Boot.
- The TraceEvent is wrapped in a CloudEvent envelope.
- TraceEvents are sent as JSON to a Kafka topic.
The Trusty Service is the designated component of TrustyAI that consumes these events. With Kafka acting as a decoupler, however, custom consumers can be written to match any specific need.
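As a sketch of such a custom consumer, the snippet below reads the JSON TraceEvents from the kogito-tracing-decision topic using the plain Kafka client; the broker address and group id are assumptions for a local setup.

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TraceEventPrinter {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", "my-trace-consumer");       // any group id works
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("kogito-tracing-decision"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each value is a CloudEvent envelope wrapping the TraceEvent as JSON.
                    System.out.println(record.value());
                }
            }
        }
    }
}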
How to use it
The addon has been available as part of Kogito since v0.17.0. It leverages the code generation capabilities of Kogito to minimize the effort required to enable it. There are only two steps:
- Add it to the dependencies of your project. Make sure to pick the flavour that matches the underlying framework of your Kogito service.
- Add some properties to configure the connection to Kafka. They currently differ depending on the flavour, so pick the right ones (check the examples below).
Note: be aware that API changes may still occur in the next releases. Also, there may be bugs (that's code, after all, isn't it?). If you find one, let us know via the Kogito Jira.
Usage with Quarkus
- Here is the dependency to add to your pom.xml (the version is automatically taken from kogito-bom):
<dependency>
  <groupId>org.kie.kogito</groupId>
  <artifactId>tracing-decision-quarkus-addon</artifactId>
</dependency>
- The Quarkus addon uses MicroProfile Reactive Messaging under the hood, so the configuration follows the same rules. Here are the properties:
# Where to find the Kafka broker
kafka.bootstrap.address=localhost:9092
# These two properties must be set with these exact values
mp.messaging.outgoing.kogito-tracing-decision.connector=smallrye-kafka
mp.messaging.outgoing.kogito-tracing-decision.value.serializer=org.apache.kafka.common.serialization.StringSerializer
# These values can be changed with your kafka group ID and topic
# If the topic doesn't exist, the addon tries to create it automatically
mp.messaging.outgoing.kogito-tracing-decision.group.id=kogito-runtimes
mp.messaging.outgoing.kogito-tracing-decision.topic=kogito-tracing-decision
Example with Quarkus
A detailed example on how to use the decision tracing addon with Quarkus can be found here, in our kogito-examples repository.
Usage with Spring Boot
- Here is the dependency to add to your pom.xml (the version is automatically taken from kogito-springboot-starter):
<dependency>
  <groupId>org.kie.kogito</groupId>
  <artifactId>tracing-decision-springboot-addon</artifactId>
</dependency>
- Here are the properties:
# Where to find the Kafka broker
kogito.addon.tracing.decision.kafka.bootstrapAddress=localhost:9092
# The topic name
kogito.addon.tracing.decision.kafka.topic.name:kogito-tracing-decision
# If the topic doesn't exist, the addon tries to create it automatically
# These two additional properties can configure the auto generation
kogito.addon.tracing.decision.kafka.topic.partitions=1
kogito.addon.tracing.decision.kafka.topic.replicationFactor=1
Example with Spring Boot
A detailed example on how to use the decision tracing addon with Spring Boot can be found here, in our kogito-examples repository.
Next steps
We’re only at the beginning of the TrustyAI journey. If you want to be part of it with us, stay tuned for more news!
Thanks for reading.