I’ve recorded the following "developer notes" as a support medium to demonstrate the progress and the current integration of several Kogito features. As I believe a brief overview of the many capabilities of the Kogito platform for modeling and developing decision services with DMN could be of interest to a wider audience, I am sharing them in this post.
Let us know if you find these useful and if you’d like to see more video recordings like these!
DMN with Kogito on Quarkus
Bootstrap a new project with Kogito Maven archetype
The user experience for decision service nodes is something we’re incrementally enhancing in the DMN editor. Kogito 0.8.4 will introduce a new feature that significantly enhances how users call decision services.
Imagine you have a DMN model like this one:
Also, imagine that you need to invoke the decision service “user message” into the “Example” node with a Literal Expression. How can you do that? 🤔
You probably would need to guess the parameters’ order, which is quite frustrating (of course – we may be wrong!). On Kogito 0.8.4, users will be able to select a decision service and see the order of parameters, like this:
So, it’s much easier to call “user message” now, as we can clearly see that the order of our parameters is “Age (number)” and then “Name (string)”. We also know the answer to the question above, and we can call our decision service:
If you’re wondering what this model returns, check this example of a DMN model call and its output:
require 'httparty'
require 'json' # for Hash#to_json

# Input context for the DMN model evaluation
body = {
  'Name': 'Kojima',
  'Age': 57,
}

# POST the context to the REST endpoint generated for the DMN model
resp = HTTParty.post("http://localhost:8080/example",
                     body: body.to_json,
                     headers: { 'Content-Type': 'application/json', 'Accept': 'application/json' },
                     basic_auth: { username: 'kieserver', password: 'kieserver1!' })
puts resp
# {
# "user message":"function user message( Age, Name )",
# "upcase":"THE USER KOJIMA IS 57 YEARS OLD.",
# "Example":"THE USER GUILHERME IS 29 YEARS OLD.",
# "concat":"The user Kojima is 57 years old.",
# "Age":57,
# "Name":"Kojima"
# }
Straightforward, right? 🙂 We’ll release Kogito 0.8.4 in a few days with other surprising news… but, if you really wanna try it right now, download my latest VSCode plugin build here.
In 2021 it’s almost undeniable that modern application development needs to target the cloud, given the requirements of flexibility, scalability and availability imposed by today’s world.
Event-driven architectures have proven to be well suited models for this purpose. As a result, we’re adopting these principles in several components of Kogito, which aims to be the next generation cloud-native business automation solution.
This blog post presents a new component that aligns with this view: the event-driven decisions addon, which has been available since Kogito v1.2.
Key concepts
This addon enables the evaluation of decision models in an event-driven fashion, so that it can be used as part of an event processing pipeline.
It comes in two flavours: Quarkus and Spring Boot.
The developer only needs to include the correct version as a dependency of their Kogito app and configure the input/output topics. The wiring is done by Kogito code generation and framework-specific dependency injection.
The execution is triggered upon receiving an event containing the initial context from a specific Kafka input topic. The result is then sent to a Kafka output topic (which may be the same). Both input and output events are formatted as CloudEvents.
Its capabilities mirror the ones available via the REST endpoints:
trigger the evaluation of the whole model or of a specific decision service
receive only the context or the full DMN result in the output event
filter the inputs out of the output context, whether it is returned alone or inside the DMN result
Event structure
Input event
A model evaluation is triggered by a specific event called DecisionRequest. Here is the list of the supported fields, including the optional ones:
| Field | Purpose | Mandatory | Default |
| --- | --- | --- | --- |
| data | Input context | yes | – |
| id | CloudEvent ID | yes | – |
| kogitodmnevaldecision | Name of decision service to evaluate. If specified, the engine triggers the evaluation of this service only. | no | null |
| kogitodmnfilteredctx | Boolean flag to enable/disable filtering out inputs from the context | no | false |
| kogitodmnfullresult | Boolean flag to enable/disable receiving full DMN result as output | no | false |
| kogitodmnmodelname | Name of DMN model to evaluate | yes | – |
| kogitodmnmodelnamespace | Namespace of DMN model to evaluate | yes | – |
| source | CloudEvent source | yes | – |
| specversion | Must be equal to 1.0 as mandated by the CloudEvents specification | yes | – |
| subject | If specified, the engine will put the same value as subject of the output event. Its usage is up to the caller (e.g. as correlation ID). | no | – |
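As an illustrative sketch, a DecisionRequest published to the input topic in CloudEvents structured JSON mode might look like the object below. The model name, namespace, input context, and the exact type string are assumptions made for the example and should be checked against the addon documentation.

// Hedged sketch of a DecisionRequest CloudEvent (structured JSON mode).
// Model name, namespace, input context and the type string are illustrative.
const decisionRequest = {
  specversion: "1.0",
  id: "d54ace84-6788-46b6-a359-b308f8b21778",
  source: "my-client-application",
  type: "DecisionRequest", // assumed; verify the exact type string for your addon version
  subject: "optional-correlation-id",
  kogitodmnmodelname: "Traffic Violation",
  kogitodmnmodelnamespace: "https://kiegroup.org/dmn/_EXAMPLE",
  kogitodmnfullresult: false,
  kogitodmnfilteredctx: false,
  data: {
    Driver: { Age: 25, Points: 13 },
    Violation: { Type: "speed", "Actual Speed": 115, "Speed Limit": 100 },
  },
};

// The JSON string below is what would be published to the Kafka input topic.
console.log(JSON.stringify(decisionRequest, null, 2));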
If the request is evaluated successfully, the system returns two different types of output events depending on the value of the kogitodmnfullresult flag:
DecisionResponse if only the output context is returned
DecisionResponseFull if the full DMN result is returned
If, for some reason, the request event is malformed or contains wrong information so that the evaluation can’t be triggered, a DecisionResponseError is sent as output.
In this case the data field contains a string that specifies the error type:
| Error Type | Meaning |
| --- | --- |
| BAD_REQUEST | Malformed input event (e.g. when some mandatory fields are missing) |
| MODEL_NOT_FOUND | The specified model can’t be found in the current service |
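To make the contract concrete, here is a minimal consumer-side sketch (not part of the addon) that branches on the output event type and the error strings described above; the Kafka consumer wiring is omitted and the event is assumed to be an already-deserialized CloudEvent in structured JSON mode.

// Hedged sketch: reacting to the output events on the consumer side.
// `event` is assumed to be the deserialized CloudEvent as a plain object.
function handleDecisionEvent(event) {
  if (event.type === "DecisionResponseError") {
    switch (event.data) {
      case "BAD_REQUEST":
        console.error("Malformed DecisionRequest: check the mandatory fields");
        break;
      case "MODEL_NOT_FOUND":
        console.error("The requested DMN model is not part of this service");
        break;
      default:
        console.error("Unexpected error type:", event.data);
    }
    return;
  }
  // DecisionResponse / DecisionResponseFull: data carries the evaluation result
  console.log("Decision result:", event.data);
}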
Examples
The Kogito Examples repository contains two examples, one for Quarkus and one for Spring Boot, that you can use as a starting point to practice with this addon.
They also contain tests for every possible variation in the structure of the input/output events supported by the addon.
Conclusion
If you liked this article and are interested in the evolution of Kogito, stay tuned for more news!
In the last few days, I’ve been working on a major update for the Learn DMN in 15 minutes course, and it’s finally finished – the v2 is already online!
Some parts of the content are refreshed according to the newest versions of the DMN tooling. Also, the interactive tutorials are not getting users stuck on corner cases anymore 😅
Every reported issue was solved, and visual enhancements were introduced. Now, it’s easier to navigate between sections, and the interactions are more fluid.
All text content was converted from HTML to friendly Markdown. It makes proposing changes and enhancements in the course pages simpler.
Introduction
Since the 0.7.0 release, the Kogito DMN editor supports loading PMML models as part of a DMN model.
A PMML (Predictive Model Markup Language) model is an XML file that describes a predictive model generated by Data Mining or AI algorithms. You can learn more about PMML on the Data Mining Group – PMML page. Examples of handled models are Naive Bayes, Neural Network, Support Vector Machine, and others. Introducing these kinds of models into the DMN Editor enriches the logic and the algorithms a user can create to determine a decision process, opening a wide door to the Machine Learning / AI world.
Most likely, some of you have already dealt with this feature in Business Central. In that case, it will be straightforward for you to migrate to the DMN Editor VSCode plugin, which now offers it. For all other users, this article will provide a complete step-by-step tutorial presenting how to import and process a PMML model inside the DMN Editor in VSCode.
Cloning the kogito-examples repository, which contains the example;
After cloning the kogito-examples repository, open your VSCode editor and open the directory kogito-examples/dmn-pmml-quarkus-example, which contains the covered example. As a result, you should see the project open in VSCode, as shown in the screenshot below.
VSCode editor after opening kogito-examples/dmn-pmml-quarkus-example project
The first file to analyze is "test_regression.pmml", located in the kogito-examples/dmn-pmml-quarkus-example/src/main/resources directory; it gives us the opportunity to learn more about this kind of file. The model belongs to the Regression Models family and, more specifically, is a Linear Regression model. A typical application of this model is to determine the relationship between the dependent variable and one or more independent variables. This formula describes the regression model present in "test_regression.pmml":
The formula that describes the regression model held in test_regression.pmml
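In general form, the linear regression computed by the model has the shape below; this is only a sketch of its structure, since the concrete coefficients are defined inside test_regression.pmml:

fdl4 = β₀ + β₁·fdl1 + β₂·fdl2 + β₃·fdl3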
The aim is to design a DMN process that determines a decision by exploiting the PMML model output fdl4 (the dependent variable), given its defined input variables fdl1, fdl2, and fdl3 (the independent variables), using the above formula. That is precisely the scope of the next file we will analyze: "KiePMMLRegression.dmn". Although the file already contains the final DMN process design, the steps to create it with the editor are listed below:
Create a new file with the .dmn extension in the same folder that contains test_regression.pmml
Open the file, and you should see an empty DMN Editor, as shown below
Empty DMN asset
In the editor menu, go to the Included Models item. Here, you can import the previously described PMML model by pressing the Include Model button. A popup will appear. Select the “test_regression.pmml” file and assign a unique name of your choice (e.g. TestRegression). The next screenshot shows what you should see if everything went correctly:
Included Models section
Now, go back to the Editor menu item. Here, we need to define a node that will hold the previously imported PMML model. Select a DMN Business Knowledge Model from the editor’s left palette and drag it into the editor. Click the Edit icon to open the DMN boxed expression designer:
Creating a DMN Business Knowledge Model
A table will appear. Set the expression type to PMML by clicking the top-left function cell. In the document and model rows of the table, double-click the undefined cells to specify the included PMML document (“TestRegression”) and the PMML model (“LinReg”). The input variables are set automatically.
Table content of created Business Knowledge Node
We are now ready to define the DMN process, starting with the three DMN Input Data nodes required to represent the input values fdl1, fdl2, and fdl3. Don’t forget to set their Data type to number by selecting each node and opening the Properties panel with the icon in the upper-right corner of the DMN designer.
Input Data added
The next step is to introduce a DMN Decision node, which will combine the input node values with the PMML logic to determine the Decision result, as explained in the following steps. Set its data type to number as well.
Time to link the nodes! Link the input nodes fdl1, fdl2, and fdl3 to the Decision node using the Create DMN Information Requirement option, visible when a node is selected. Link the RegressionModelBKM and Decision nodes using Create DMN Knowledge Requirement instead. This completes the graphical representation of the DMN process: given the inputs fdl1, fdl2, and fdl3, determine the Decision using the given PMML model.
To finalize the DMN process, we need to define the logic in the Decision node. Select it and press the Edit icon to open the DMN boxed expression designer. As the expression type, choose “Invocation“, which indicates that the Decision needs to invoke external logic to determine its result. As the function, write “RegressionModelBKM”, the name of the Business Knowledge Model we previously defined to hold the PMML model. Then add three rows to define the parameters, i.e. the input nodes of the decision, associating each of them with the corresponding variable name defined in the PMML logic. In this case, the input nodes and the PMML model input variables share the same names, but this is not a strict rule.
DMN Boxed Expression of Decision Node
This step concludes the tutorial. The designed DMN process now fully integrates the PMML model test_regression.pmml. Our plugin provides additional features that are useful to test the correctness of the designed DMN logic. A future article will cover this topic, so stay tuned!
Conclusion
With this article, we learned how to include a PMML model as part of a DMN model using the DMN Editor VSCode plugin. The shown case can easily be extended to more complex DMN processes and PMML models that better fit your requirements. Of course, this requires deeper knowledge of both the DMN and PMML standards and their combined functionality. One strongly suggested resource to start down this path is the article Knowledge meets machine learning for smarter decisions.
For the last few months, here at the KIE team we’ve been hard at work. Today I am proud to announce that our cloud-native business automation platform is hitting a major milestone. Today we release Kogito 1.0!
Kogito includes best-of-class support for the battle-tested engines of the KIE platform: the Drools rule language and decision platform, the jBPM workflow and process automation engine, and the OptaPlanner constraint satisfaction solver; and it brings along new capabilities:
noSQL persistence through the Infinispan and the MongoDB addons
GraphQL as the query language for process data
microservice-based data indexing and timer management
completely revisited UIs for task and process state
CloudEvent for event handling
Code Generation
I believe there is a lot to be proud of, but I want to talk more about another thing that makes Kogito special, and that is the heavy reliance on code-generation.
we generate code ahead-of-time to avoid run-time reflection;
we automatically generate domain-specific services from user-provided knowledge assets.
Together, these make Kogito a truly low-code platform for the design and implementation of knowledge-oriented REST services.
Ahead-of-Time Code-Generation
In Kogito, we load, parse, and analyze your knowledge assets, such as rules, decisions, or workflow definitions, at build time. This way, your application starts faster, consumes less memory, and, at run time, won’t do more than what’s necessary.
Compare this to a more traditional pipeline, where all the stages of processing a knowledge asset occur at run time:
Application Density
The Cloud, albeit allegedly «just someone else’s computer», is a deployment environment that we have to deal with. More and more businesses are using cloud platforms to deploy and run their services, and, because they pay for the resources they use, they care more and more about them.
This is why application density is becoming increasingly more important: we want to fit more application instances in the same space, because we want to keep costs low. If your application has a huge memory footprint and high CPU requirements, it will cost you more.
While we do support Spring Boot (because, hey, you can’t really ignore such a powerhouse), we chose Quarkus as our primary runtime target, because through its extension system, it lets us truly embrace ahead-of-time code generation.
Whichever you choose, be it Spring or Quarkus, Kogito will move as much processing as possible to build time. But if you want to get the most out of it, we invite you to give Quarkus a try: its simplified support for native image generation allows Kogito to truly show its potential, producing the tiniest native executables. So tiny and cute, they are the envy of a gopher.
Kogito cuts the fat, but you won’t lose flavor. And if you pick Quarkus, you’ll get live code reload for free.
Automated Generation of Services and Live Reload
Build-time processing is a characterizing trait of Kogito, but code generation is also key to another aspect: we automatically generate a service starting from the knowledge assets that users provide.
From Knowledge to Service: a Low-Code Platform
You write rules, a DMN decision, a BPMN process, or a serverless workflow: in all these cases, for these resources to be consumed, an API needs to be provided. In the past, you had full access to the power of our engines through a command-based REST API for remote execution, or through their Java programmatic API when embedding them in a larger application.
While programmatic interaction will always be possible (and we are constantly improving it in Kogito, with a new API), in Kogito we aim for low-code. You drop your business assets into a folder, start the build process, and you get a working service running.
In the animation you see that a single DMN file is translated into an entire fully-functional service, complete with its OpenAPI documentation and UI.
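As a rough sketch of what consuming such a generated service looks like from a client, the snippet below calls the REST endpoint with an input context; the "/example" path and the input fields are hypothetical, since they depend on the DMN model name and its input data nodes.

// Hedged sketch: calling a REST endpoint that Kogito generates from a DMN model.
// The path ("/example") and the input context fields are illustrative.
async function evaluateDecision() {
  const response = await fetch("http://localhost:8080/example", {
    method: "POST",
    headers: { "Content-Type": "application/json", "Accept": "application/json" },
    body: JSON.stringify({ Name: "Kojima", Age: 57 }),
  });
  const result = await response.json();
  console.log(result); // decision outputs returned alongside the submitted inputs
}

evaluateDecision();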
From Knowledge to Deployed Service: Kogito Operator
Through the Kogito Operator you are also able to go from a knowledge asset to a fully-working service in a matter of one click or one command. In this animation you can see the kogito cli in action: the operator picks up the knowledge assets, builds a container and deploys it to OpenShift with just 1 command!
Fast Development Feedback
For local development, the Kogito Quarkus extension in developer mode extends Quarkus’ native live code reloading capabilities, going further than reloading plain-text source code (a feature in Quarkus core) by adding support for hot reload of the graphical models supported by our modeling tools. In this animation, for instance, you can see hot reload of a DMN decision table.
In this animation, we update a field of the decision table. As a result, the next time we invoke the decision, the result is different. No rebuild process is necessary, as it is all handled seamlessly by the Kogito extension. You get the feeling of live, run-time processing, but under the hood, Quarkus and Kogito do the heavy lifting of rebuilding, reloading and evaluating the asset.
Future Work
In the future we plan to support customization of these automatically-generated services, with a feature we call scaffolding: you will be able to customize the code that is being generated. You can already get a sneak peek of this preview feature by following the instructions in the manual.
Conclusions
Kogito 1.0 brings a lot of new features. We are excited to reach this milestone, and we can’t wait to see what you will build! Reach out with your feedback on any of our platforms!
The DMN editor continues evolving towards making users’ lives as simple as possible. On Kogito 0.8.1, we introduce a new mechanism to open DMN 1.1 and 1.3 assets.
We’re still saving your model as a DMN 1.2 asset at conformance level 3. However, any 1.1 or 1.3 model that does not include DMN 1.3-specific features is now converted to 1.2.
In other words, you can now open DMN 1.1, 1.2, and 1.3 models in the editor, as you can see 🙂
As you probably have already noticed, this is a baby step toward fully supporting the DMN 1.3 spec, but it’s already great news. If you’re using an older version of the editor, or even if you have DMN models from other vendors persisted as DMN 1.1 or 1.3, you’re now able to open them on VSCode, on the Online Editor, and on any other Kogito channel.
Thinking about the tooling’s future, we’ve also introduced a backward-compatibility test suite covering almost every file pushed to the dmn-tck repository. It ensures that new Kogito releases will continue to support older DMN versions without any regressions.
We’ll release Kogito 0.8.1 in a few days with other surprising news. Stay tuned! 😉
In 0.7.2.alpha3 we started shipping a new component of the KIE tooling, what we’re calling Standalone Editors.
These Standalone Editors provide a straightforward way to use our tried-and-true DMN and BPMN Editors embedded in your own web applications.
The editors are now distributed in a self-contained library that provides an all-in-one JavaScript file for each of them, which can be interacted with through a comprehensive API for their setup and control.
Installation
In this release, you can choose from three ways to install the Standalone Editors:
readOnly (optional, defaults to false): Use false to allow content editing, and true for read-only mode, in which the Editor will not allow changes. WARNING: Currently only the DMN Editor supports read-only mode.
origin (optional, defaults to window.location.origin): If for some reason your application needs to change this parameter, you can use it.
resources (optional, defaults to []): Map of resources that will be provided for the Editor. This can be used, for instance, to provide included models for the DMN Editor or Work Item Definitions for the BPMN Editor. Each entry in the map has the resource name as its key and an object containing the content-type (text or binary) and the resource content (Promise similar to the initialContent parameter) as its value.
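As an illustrative sketch of these parameters, the snippet below provides an included model as a resource when opening the DMN Editor. The DmnEditor.open() entry point, the container id, and the file names are assumptions made for the example; check the library’s documentation for the exact entry point.

// Hedged sketch: opening the DMN Editor with an included model as a resource.
// DmnEditor.open(), the container id and the file names are illustrative.
const editor = DmnEditor.open({
  container: document.getElementById("dmn-editor-container"),
  initialContent: fetch("main-model.dmn").then((response) => response.text()),
  readOnly: false,
  resources: new Map([
    [
      "included-model.dmn",
      {
        contentType: "text", // "text" or "binary"
        content: fetch("included-model.dmn").then((response) => response.text()),
      },
    ],
  ]),
});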
The returned object will contain the methods needed to manipulate the Editor:
getContent(): Promise<string>: Returns a Promise containing the Editor content.
setContent(content: string): void: Sets the content of the Editor.
getPreview(): Promise<string>: Returns a Promise containing the SVG string of the current diagram.
subscribeToContentChanges(callback: (isDirty: boolean) => void): (isDirty: boolean) => void: Sets up a callback to be called on every content change in the Editor. Returns the same callback to be used for unsubscription.
unsubscribeToContentChanges(callback: (isDirty: boolean) => void): void: Unsubscribes the passed callback from content changes.
markAsSaved(): void: Resets the Editor state, signaling that its content is saved. This will also fire the subscribed callbacks of content changes.
undo(): void: Undo the last change in the Editor. This will also fire the callbacks subscribed for content changes.
redo(): void: Redo the last undone change in the Editor. This will also fire the callbacks subscribed for content changes.
close(): void: Closes the Editor.
getElementPosition(selector: string): Promise<Rect>: Provides an alternative for extending the standard query selector when the element lives inside a canvas or even a video component. The selector parameter must follow the format “<component>:::<element id>”, e.g. Canvas:::MySquare or Video:::PresenterHand. Returns a Rect representing the element position.
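As a small illustrative sketch of these methods (assuming editor is the object returned when the Editor was opened), the content-related calls could be combined like this:

// Hedged sketch: using the object returned by the Editor.
// `editor` is assumed to be the object obtained when the Editor was opened.

// Keep track of unsaved changes for the surrounding application UI.
const onContentChanged = editor.subscribeToContentChanges((isDirty) => {
  console.log(isDirty ? "There are unsaved changes" : "Content is saved");
});

async function saveModel() {
  const xml = await editor.getContent(); // the DMN model as a string
  const svg = await editor.getPreview(); // SVG preview of the current diagram
  // ...persist xml and svg wherever the application stores them...
  editor.markAsSaved(); // resets the Editor state to "saved"
}

// Stop listening when the callback is no longer needed.
editor.unsubscribeToContentChanges(onContentChanged);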
Now let’s implement an application that provides the DMN Editor and adds a simple toolbar to the top that explores the main features of the API.
First, we start with a simple HTML page, and add a script tag with the DMN Standalone Editor JS library. We also add a <div> for the toolbar and a <div> for the Editor.
For the toolbar, we will add a few buttons that will take advantage of the Editor’s API:
This script will open an empty and modifiable DMN Editor inside the div#dmn-editor-container. But we still have to implement the toolbar actions. To be able to undo and redo changes, we can add the following script:
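As a minimal sketch of both scripts (assuming the DmnEditor.open() entry point and toolbar buttons with the ids undo-button and redo-button, which are illustrative choices), this could look like:

// Hedged sketch: opening an empty, editable DMN Editor and wiring the toolbar.
// DmnEditor.open() and the element ids are illustrative assumptions.
const editor = DmnEditor.open({
  container: document.getElementById("dmn-editor-container"),
  initialContent: Promise.resolve(""), // start from an empty model
  readOnly: false,
});

document.getElementById("undo-button").addEventListener("click", () => {
  editor.undo();
});

document.getElementById("redo-button").addEventListener("click", () => {
  editor.redo();
});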
The KIE YouTube channel is regularly posting quite interesting content! Last week I had the pleasure of presenting a talk about DMN at the KieLive#11 session. Tomorrow (October 27th), Edson Tirelli will also talk about DMN, but from a different and more advanced perspective, at KieLive#12. I’ve just activated the YouTube reminder because I…
Currently, the DMN editor is supported in a variety of environments. You can create a DMN model in an online editor, in a chrome extension, in a desktop app, in a VSCode extension, and even on Business Central.
Until the latest release, you could face some differences between a model created with VSCode and one created with Business Central. Now, this post aims to demonstrate that both environments are fully compatible by showing the same project from two perspectives: I) how to create a project on VSCode and import it on Business Central, and II) how to create a project on Business Central and import it on VSCode.
Let’s check both environments.
–
The VSCode perspective
Let’s get started and understand how to handle a DMN file in the context of a Business Central project by relying only on VSCode.
In a shell prompt, enter the following mvn command to create a project:
When the archetype plug-in switches to interactive mode, accept the default values for the remaining fields
After generating your project, create a DMN asset at the src/main/resources/<your package> directory and populate it with some interesting decision, like the one in the video:
Now, initialize the git repository for your project, add all files, and commit them:
git init && git add . && git commit -m "Init"
Use pwd to get the local path of your project
Open the Business Central spaces screen, click on Import Project, and set file://<the local path of your project> as the Repository URL
Done! You’ve successfully created a Business Central project relying only on your terminal and on your VSCode DMN editor! 🎉
–
The Business Central perspective
Alright, now let’s do the opposite. Let’s create a project on Business Central and import it to VSCode.
Create a project on Business Central in the spaces screen
Create a DMN asset and populate it with some interesting decision, like the one in the video above
Now, click on “Settings” to open the project settings screen
Done! You’ve successfully opened your Business Central project with VSCode. You can now make changes to your DMN model, commit them with git, and Business Central will automatically detect them
–
I hope you’ve enjoyed this quick tutorial! Stay tuned for new DMN features! 🙂