Over the last few months, the Design Tools Team released many cool new features in Kogito Tooling 0.9.0 and Business Central 7.52.0. This post gives a quick overview of them. I hope you enjoy it!
Dashbuilder Programmatic Layout API
Until the launch of this new API, the only way to create dashboards on Dashbuilder was via drag and drop on the Layout Editor. Now, users can create their dashboards, pages, components, and data sets directly in Java.
We also introduced a “dev mode” to Dashbuilder Runtime, which automatically updates Dashbuilder Runtime while you develop and export the ZIP. We will soon publish a blog post with more details about this new feature, but meanwhile, here is a sneak peek of the authoring workflow using it:
DMN Editor – Enhanced code-completion for Literal FEEL expressions
Context-aware code completion is one of the most important features an IDE can provide to speed up coding, reduce typos, and avoid other common mistakes. In the Kogito Tooling 0.9.0 release, we introduced enhanced code completion for Literal FEEL expressions.
Since October, we have also shipped our editors as a standalone npm package. One of my favorite features of the standalone editors is the read-only mode, because it is really useful for diagram visualization. This mode is now also supported on BPMN. The read-only mode is also used for the visualization of diagrams in our Chrome Extension.
Work Item Definition support improvements
To evolve the Work Item Definition support in the Kogito Tooling BPMN editor, 0.9.0 includes a lot of improvements in this area, primarily a better parsing mechanism and better compatibility with Business Central. We now also search for WIDs and icons in the ‘global’ directory used by Business Central.
Dashbuilder Prometheus Data Set Provider
Dashbuilder can read data from multiple types of data set sources, including CSV, SQL, ElasticSearch, and KIE Server. Since Business Central 7.50.0.Final, we have introduced a new type of data set provider: Prometheus.
Prometheus is the de facto standard for collecting metrics. It has connectors to well-known systems such as Kafka, and its metrics can easily be consumed by third-party systems. Furthermore, KIE Server exports Prometheus metrics by default! See a sample dashboard based on Prometheus data:
For a full description of this new feature, take a look at this blog post.
Dashbuilder Kafka Data Set Provider
We also recently introduced Dashbuilder support for Kafka data sets. Kafka is the standard event streaming platform for cloud applications, and RHPAM/Kogito systems expose metrics using Kafka, which is why we added Kafka as a data set provider in Dashbuilder.
Soon we will publish a blog post with more details about this new feature.
Dashbuilder Time Series Displayer
This new component displays time-series metrics and smoothly complements the new Prometheus data set provider.
Now, you can provide a custom dataset or Prometheus metrics and create visualizations of your time series data on a line or area chart using Dashbuilder. See this blog post for more details.
GWT 2.9 and JDK11 upgrade
After a collective effort involving many people from a lot of different teams, we also did two major upgrades on our codebase, supporting JDK11 compilation and GWT 2.9 on Business Central. This is a huge effort in a sizable codebase, so congrats to everyone involved!
Other important issues and improvements:
BPMN:
KOGITO-3853 Move the structure option to the top of the Data Type drop-down
JBPM-9597 – [BPMN] Open subprocesses in a new editor on BC only
RHPAM-3207 Stunner – Text area for scripts is cropped/shifted
RHPAM-3250 Stunner – Not all illegal characters are removed from Data Object name
DROOLS-6181 Allow sorting in guided decision table when clicking the column name
SceSim:
DROOLS-5775 Test Scenario does not support nested Enum type attributes
DROOLS-6075 Scenario Simulation type error popup when constraint applied to DMN data type
DROOLS-5876 Display actual test results instead of a generic message
KOGITO-4190 SceSim runner does not display reason for failure
Thank you to everyone involved!
I would like to thank everyone involved with this release, from the excellent KIE Tooling Engineers to the lifesaving QEs and the UX people who help us look awesome!
The JBPM KIE server has a rich set of REST APIs that allows control over business processes and other business assets. Interaction with business processes is straightforward and easily accessible. Usually these APIs are used by the Business Central component and custom applications. In this post I want to give a simple example of how to interact with business processes, deployed in a KIE server, using Apache Camel. Utilizing business processes in a camel route, as you will see, is pretty easy!
Camel offers a component for this, simply called the JBPM component. Both consumers and producers are supported. The consumer side has already been covered by Maciej in this excellent post. My example will focus on the producer side.
What interactions are available?
The JBPM component uses exchange headers to hold the operations/commands used to interact with the KIE server. The query parameters in the URL hold the security and deployment information. The following is an example of a JBPM URL.
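A hedged sketch of what such a URL can look like (userName, password and deploymentId are standard Camel JBPM endpoint options):

jbpm:http://localhost:8080/kie-server/services/rest/server?userName=myuser&password=mypass&deploymentId=mydeployment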
That URL above will interact with the KIE server running on localhost port 8080, the container deployed as mydeployment, and using the user myuser and password mypass. Of course all these values can be parameterized.
The following table shows some of the interactions available as of Apache Camel 3.8. The header key for the operation header is JBPMConstants.OPERATION. All supported header keys are listed in the JBPMConstants class.
Operation: startProcess
Supported headers: PROCESS_ID, PARAMETERS
Description: Starts a process in the container deployed at the URL, using the given process definition ID and any parameters.

Operation: signalEvent
Supported headers: PROCESS_INSTANCE_ID, EVENT_TYPE, EVENT
Description: Signals a process instance with the signal name (EVENT_TYPE) and payload (EVENT). If the process instance ID is missing, the signal scope is default.

Operation: getProcessInstance
Supported headers: PROCESS_INSTANCE_ID
Description: Retrieves the process instance with the given ID. The resulting exchange body is populated with an org.kie.server.api.model.instance.ProcessInstance object.

Operation: completeWorkItem
Supported headers: PROCESS_INSTANCE_ID, WORK_ITEM_ID, PARAMETERS
Description: Completes the work item using the given parameters.
You can find the complete list in the org.apache.camel.component.jbpm.JBPMProducer.Operation enum.
Simple Example
In this example we’ll be using the following versions:
Log into Business Central (http://localhost:8080/business-central/) using wbadmin/wbadmin as the credentials. In the default MySpace, create a project called test-camel and a business process in it called test-camel-process. Add a process variable called data of type String. Add a script task to print out data. Save and deploy the project.
To quickly stand up a Camel environment, we’ll be using Spring Boot. The following command will help you create the spring boot project:
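The exact command and route from the original post are not reproduced here; as a hedged sketch, a Camel Spring Boot skeleton can be generated with the camel-archetype-spring-boot archetype (coordinates assumed):

mvn archetype:generate \
  -DarchetypeGroupId=org.apache.camel.archetypes \
  -DarchetypeArtifactId=camel-archetype-spring-boot \
  -DarchetypeVersion=3.8.0

A minimal producer route that starts the process could then look roughly like this (the deploymentId value and the wbadmin credentials are assumptions matching the setup above):

import java.util.Collections;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jbpm.JBPMConstants;
import org.springframework.stereotype.Component;

@Component
public class StartProcessRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Fire once on startup and start test-camel-process with "hello world" as the 'data' variable.
        from("timer:startProcess?repeatCount=1")
            .setHeader(JBPMConstants.PROCESS_ID, constant("test-camel-process"))
            .setHeader(JBPMConstants.PARAMETERS,
                    constant(Collections.singletonMap("data", "hello world")))
            // JBPMConstants.OPERATION is not set, so the default operation (startProcess) is used.
            .to("jbpm:http://localhost:8080/kie-server/services/rest/server"
                    + "?userName=wbadmin&password=wbadmin&deploymentId=test-camel_1.0.0-SNAPSHOT");
    }
}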
The above example will send "hello world" as the data to start the process in the KIE server. By default, without specifying JBPMConstants.OPERATION, the operation is to start the process. The reason the resolved operation value is CamelJBPMOperationstartProcess is that it is a concatenation of the value of JBPMConstants.OPERATION and the enum value defined in org.apache.camel.component.jbpm.JBPMProducer.Operation. Unfortunately, we cannot use the operation enum directly because it is not exposed publicly.
About
With the VSCode Kogito Bundle extension, you can author processes, rules, or test scenarios directly in VSCode without the need to run the whole Business Central.
If you would like to know how to create, build and deploy the projects in VSCode, or you would like to integrate the Business Central and VSCode Kogito projects, you are in the right place.
Important note: the integration described in this post works for most use cases; however, it is not currently part of the Red Hat product supported capabilities.
Prepare the environment
The first thing you will need to do is to prepare the environment. It is recommended to use the latest versions when using any of the following software/tools unless specified otherwise.
Versions used in this post:
OpenJDK: 1.8+ (e.g. 1.8.0_282 or 11.0.10)
Maven: 3.6.3
Git: 2.30.2
VSCode: 1.46.0+ (e.g. 1.54.3)
Kogito Bundle extension: 0.8.6
Business Central: 7.51.0.Final
KIE server: 7.51.0.Final
Tools
Git
To migrate projects between Business Central and VSCode, you need to have Git installed on your local machine.
Maven
To generate projects in VSCode and build them, ensure you have installed Maven.
Note, you might need to install JDK first if it’s not already pre-installed in your OS. See the system requirements for Maven for more information.
If you have properly configured the environment, we can proceed with importing projects from Business Central to VSCode. Note, this workflow should work for most use cases, but if you experience any issues, please report them here: https://issues.redhat.com/projects/KOGITO/summary.
Create or open any project in Business Central:
Navigate to Menu -> Projects
You can create a new project by clicking the Add Project button, or you can open any sample project by clicking on Try Samples, selecting the project(s) and clicking Ok.
Once you have a Business Central project, there are two options for migrating it to VSCode. The first and preferred one is to clone the repository using Git. The second option is to download it using the GUI.
Clone the project using Git
To clone the repository, open Business Central, navigate to the project’s Settings, locate the URL property, choose either ssh or http from the drop-down menu and copy the link. Alternatively, you can use the local Git repository that you can find in the “.niogit” folder of your Business Central, but this is not recommended.
Open the terminal in VSCode by clicking on Terminal → New Terminal or use “Ctrl + Shift + `” shortcut and execute these commands:
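For example (a hedged sketch; the repository URL below is a placeholder for the one you copied from Business Central):

# clone the repository and enter the project folder
git clone ssh://wbadmin@localhost:8001/MySpace/mybusinessapp
cd mybusinessapp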
To download the project using GUI in Business Central, follow the instructions below:
Open any asset in the project in Business Central
Open Project Explorer
Click on the Cogwheel icon
Click on Download Project
Unzip the project.
Note, there is no Git repository cloned if you downloaded the project using Business Central GUI. If you would like to push/pull changes between Business Central and VSCode, you need to set up a remote repository first by executing the following commands:
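A hedged sketch of those commands (replace the URL with your Business Central repository URL from the project’s Settings):

cd mybusinessapp
git init
git remote add origin ssh://wbadmin@localhost:8001/MySpace/mybusinessapp
git pull origin master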
Now open the cloned/downloaded project in VSCode. There are two options.
The first option is to use the VSCode terminal.
Execute the following commands, hover over the project’s path (the return value of the pwd command), and press “Ctrl + Click”. The project will open in a new window.
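# go to the folder where you cloned/downloaded the project (path shown as an example)
cd ~/jbpm-kogito-home/projects/mybusinessapp
pwd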
The second option is to use the GUI: click File → Open folder or use the “Ctrl + O” shortcut, locate the root folder of the project, and then click OK.
Tip: If you don’t want to close the current project, you can open a new window using the shortcut “Ctrl + Shift + N” before opening the project.
Create a project in VSCode
You can also create your own projects from scratch directly in VSCode, instead of transferring them from Business Central.
You can do that by generating a kjar project skeleton with a Maven archetype command.
If you want to create more projects, use properties with a specific GAV when generating them. If you want to create a Case project, specify it by using the caseProject property when generating the project.
Open terminal in VSCode by clicking on Terminal → New Terminal or use “Ctrl + Shift + `” shortcut and execute the following commands:
# Create a project using maven archetype with specific GAV
# If you want to create more projects, you can also specify the GAV of your project by adding these properties to the command: -DgroupId=<my.groupid> -DartifactId=<my-artifactId>
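# The exact command is not preserved in this copy of the post; this is a hedged sketch using
# the kie-kjar-archetype (coordinates assumed). Add -DcaseProject=true to create a Case project.
mvn archetype:generate \
  -DarchetypeGroupId=org.kie \
  -DarchetypeArtifactId=kie-kjar-archetype \
  -DarchetypeVersion=7.51.0.Final \
  -DgroupId=org.kie.businessapp \
  -DartifactId=mybusinessapp \
  -Dversion=1.0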
During the project generation, it will ask you to confirm the configuration. Press y to confirm it and then Enter.
If you are experiencing issues with generating the artifact, ensure you have configured the Maven repositories properly. Note, the command might not work properly in PowerShell on Windows; use the Cmd prompt instead.
Create assets in VSCode
If you have successfully created the project skeleton, it is time to create all assets and necessary files for your project.
Kogito editors
There are currently three different editors provided by Kogito VSCode extension: BPMN, DMN and Test Scenario.
BPMN editor
Files with the bpmn extension ($BPMN_FILE_NAME.bpmn) are handled by the BPMN (Business Process) editor. You can design processes with this modeler.
DMN editor
Files with the dmn extension ($DMN_FILE_NAME.dmn) are handled by the DMN editor. You can design decisions with this modeler.
SCESIM editor
Files with the scesim extension ($SCESIM_FILE_NAME.scesim) are handled by the Test Scenario editor. You can design test scenarios for testing your DMN assets with this modeler.
Note, a pop-up dialog will ask you to select the DMN asset when creating a Test Scenario, so make sure you create the DMN asset in advance.
Create assets
If you create/open any of the listed assets above, an editor will open.
To use VSCode GUI, follow the instructions below:
Open the project (click on Explorer in upper left corner)
Create the missing folders by right-clicking on the location → New Folder:
src/main/resources/$PACKAGE (e.g. src/main/resources/org/kie/businessapp) – use this location for BPMN and DMN assets.
src/test/resources/$PACKAGE (e.g. src/test/resources/org/kie/businessapp) – use this location for SCESIM assets.
Select the location, click on New File, and input the file name with the desired extension (e.g. process.bpmn, dmn.dmn or test-scenario.scesim).
The other option is to use a terminal. Execute following commands:
############################
# Create BPMN and DMN assets
############################
# cd $WORKING_DIRECTORY/$PROJECTS/$PROJECT_NAME/src/main/resources/
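# (hedged sketch; the exact commands are not preserved in this copy of the post)
cd ~/jbpm-kogito-home/projects/mybusinessapp/src/main/resources/
mkdir -p org/kie/businessapp && cd org/kie/businessapp
touch process.bpmn
touch dmn.dmn

############################
# Create SCESIM assets
############################
cd ~/jbpm-kogito-home/projects/mybusinessapp/src/test/resources/
mkdir -p org/kie/businessapp && cd org/kie/businessapp
touch test-scenario.scesim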
The created file can be opened by listing the files in the folder with the ls command, hovering over the file name and then pressing “Ctrl + Click”. Alternatively, you can open the file using the GUI by locating the file and clicking on it.
After creating a few assets and modeling simple BPMN process, your project may look like this:
Other file formats
We don’t provide custom editors in Kogito for any other assets not mentioned above, including work item handlers. However, you can find useful tips on how to create some of them below.
Data Objects
The Data Object in Business Central is no more than a POJO, so you can create/use it manually:
Create a file $CLASS_NAME.java (e.g. Person.java) in src/main/java/$PACKAGE (e.g. src/main/java/org/kie/businessapp)
Create a Java class in the file, for example: package org.kie.businessapp; public class Person {/*...*/}
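A minimal sketch of such a class (the fields are illustrative only):

package org.kie.businessapp;

import java.io.Serializable;

public class Person implements Serializable {

    private String name;
    private Integer age;

    public Person() {
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public Integer getAge() {
        return age;
    }

    public void setAge(Integer age) {
        this.age = age;
    }
}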
Forms
You can create forms in Business Central and copy the source code or download the files to your local project, usually to src/main/resources/$PACKAGE (e.g. src/main/resources/org/kie/businessapp).
Other assets
You can create any other assets that you need for your project in Business Central and copy the file/source code to VSCode. Another option is to create them from scratch by creating a file with the specific extension $FILE_NAME.$EXTENSION (e.g. rules.drl) in the desired location (e.g. src/main/resources/org/kie/businessapp).
Migrate a project from VSCode to Business Central
Once you have your project created, you can migrate it to Business Central.
You need to create a repository from the project first. Open the terminal and execute these commands:
# cd $WORKING_DIRECTORY/$PROJECTS/$PROJECT_NAME
cd ~/jbpm-kogito-home/projects/mybusinessapp
git init
git add .
git commit -a
# Press Insert, write commit message (e.g. Initial commit), press Esc, input :wq and press Enter
Once you created the repository, import the project to Business Central:
Open or Create a space in Business Central
Click on the arrow drop-down menu next to Add Project and click Import Project
Paste the URL to your repository, file://$WORKING_DIRECTORY/$PROJECTS/$PROJECT_NAME/.git (e.g. file:///home/user/jbpm-kogito-home/projects/mybusinessapp/.git), and click Ok
Select the project and click Ok
Synchronize changes between VScode and Business Central
Whether you migrated your projects from Business Central or you created a new one in VSCode from scratch, you might want to synchronize the changes between VSCode and Business Central. You can do it manually using Git since there is no automated way to do that.
The first thing you need to do is to set up a remote repository if you haven’t done so:
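A hedged sketch of that step (replace the URL with your Business Central repository URL from the project’s Settings):

git remote add origin ssh://wbadmin@localhost:8001/MySpace/mybusinessapp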
After that, pull or push the changes between Business Central and VSCode. Note, the VSCode Kogito extension is currently an alpha release, so some corner-case issues may arise with complex round-trip synchronization. Also, if any conflicts occur, you will need to handle them manually.
To be able to visualize the process in the runtime monitoring tools, ensure that all processes have their SVGs generated. If you created the project from scratch, you have to generate them manually:
Open the process in VSCode and click the SVG icon in the upper right corner.
Rename the generated SVG file to the format ${process id}-svg.svg (e.g. process-svg.svg). You can find the process id in the Properties panel in the Business Process editor.
Build the project
Go to your project home (where pom.xml file is located) and execute the command:
# Build and install project to the local maven repository
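mvn clean install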
If you are experiencing issues with unresolved dependencies, ensure you have set the right Maven repositories. You can also add the necessary repositories to the project’s pom.xml file.
Deploy the project using Swagger
You can deploy your project to any KIE server you want. For demonstration purposes, we will use the KIE server that comes alongside Business Central.
First, ensure the KIE server is running. See “Business Central and KIE server” section above for more information.
Open the KIE Server Swagger page (typically http://localhost:8080/kie-server/docs) and, under the “KIE Server and KIE containers” category, find: PUT /server/containers/{containerId} – Creates a new KIE container in the KIE Server with a specified KIE container ID
Click PUT and then Try it out
Input {containerId} (e.g. containerMyBusinessApp)
Update the container-id and GAV in the body (you can find the GAV in your project’s pom.xml), for example: containerMyBusinessApp, org.kie.businessapp:mybusinessapp:1.0 (see the example body after these steps)
Click Execute. You should get a successful response with code 201.
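A hedged example of the request body (the KIE Server REST API identifies the kjar by its release-id GAV):

{
  "container-id": "containerMyBusinessApp",
  "release-id": {
    "group-id": "org.kie.businessapp",
    "artifact-id": "mybusinessapp",
    "version": "1.0"
  }
}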
I hope this article helped you better understand how to integrate Kogito and Business Central projects. You can seamlessly migrate the existing projects or create a completely new one directly in VSCode. You can now easily start using the Kogito Bundle extension and utilize it to its full potential.
Context-aware code-completion is one of the most important features an IDE can provide to speed-up coding, reduce typos and avoid other common mistakes. Kogito Tooling 0.9.0 release will bring enhanced code-completion for Literal FEEL expressions:
Look how it helps me realize that I need to use the "string(from)" function if I want to concatenate something to my string. Let’s check these two examples of FEEL expressions:
"2" + 1
The FEEL expression above does not produce the concatenation you might expect: FEEL does not implicitly convert the number, so mixing a string and a number in an addition fails (yielding null).
"2" + string(1)
The FEEL expression above, on the other hand, is evaluated to "21".
Interesting, isn’t it? 🤓
With some naive enhancements in the FEEL code completion, users can already avoid the mistake above and concatenate the string as they expect. But now you’re probably wondering how the magic works, right? It relies on a combination of two elements that power this context-aware FEEL code completion:
ANTLR4 (ANother Tool for Language Recognition v4) – generates the parser based on FEEL grammar
antlr4-c3 (ANTLR4 Code Completion Core) – provides code-completion candidates based on ANTLR4 parsed trees (as the image below shows)
With the combination of these tools and a naive implementation, the Literal Boxed expression editor already provides helpful suggestions. In the next Kogito Tooling release, suggestions will be based only on the FEEL functions’ return type and the context’s inferred type.
There’s still room for enhancements in this initial implementation, like considering parameters and other information that those two powerful tools are already providing to us. This feature has a high level of isolation from the rest of the code base, and it’s an excellent starting point for new contributors! 🚀
If you’re wondering about contributing to one of the Kogito Tooling projects 🙂 ping me on Zulip (kie.zulipchat.com) or in one of the follow-up JIRAs related to this topic, and I will be glad to help! 🙂
We are delighted to announce that preliminary work on a PMML (4.4) Scorecard Editor has completed.
Overview
A VSCode extension has been published to the Marketplace and can be added to your VSCode installation.
The release is considered alpha and primarily aimed at providing a channel to gather feedback.
The journey to provide a capable editor has started but the road is long and winding.
Please install, kick the tyres and provide feedback as this is what will drive change.
Editing made simple
Just click, edit and move on. All changes can be undone/redone.
Model settings
In-line validation
Made a mistake? Errors are shown in-line and can be undone and corrected as needed.
Inline errors
Integration with VSCode
Errors also integrate with the Problems panel.
Problems panel
Defining Data Fields
Use the "Set Data Dictionary" dialog to define Data Fields.
Click on a row to edit a Data Field’s extended properties
Basic properties
Click on "Edit properties" for more advanced extended properties.
Editing extended properties
Defining Mining Fields and Output Fields
Similarly the Mining Schema and Outputs can be defined.
Mining Field basic properties
Mining Field extended properties
Output Field basic properties
Output Field extended properties
Characteristics and Attributes
The way in which you interact with the editor remains consistent.
Both Characteristics and Attributes are authored similarly to the different types of fields.
Predicates
The PMML Specification allows for the definition of complex compound expressions.
We therefore decided to provide a context-aware, auto-complete predicate editor.
This is where we really would like to receive feedback.
Your editor needs you!
Real-life predicates seem to be much simpler, and we find ourselves at a crossroads.
Do we invest in completing the text-based predicate editor or do we investigate a different approach?
If I want to solve many data sets of a planning problem every night, what architecture can easily scale out horizontally without loss of data? In this article, we will take a look at how to use a transactional AMQ queue in front of a set of stateless OptaPlanner pods. Client applications can submit data sets to solve and listen to the resulting solutions without worrying about which OptaPlanner pod does the actual solving.
Very often, there are multiple instances of the same planning problem to solve. Either these come from splitting an enormous input problem into smaller pieces, or simply from the need to solve completely unrelated data sets. Imagine independently scheduling many vehicle routes for several regions, or optimizing school timetables for numerous schools. To make the most of the available time, you run OptaPlanner every night to prepare for the next day of business, or even further ahead for the next semester. On the other hand, during the day or in the middle of the semester, there is nothing to optimize, so no OptaPlanner should be running. In other words, these cases call for batch solving.
School Timetabling
The quickstart focuses on the school timetabling problem, which is described in depth in the Quarkus guide. Let’s just very briefly revisit the problem domain and its constraints.
In the school timetabling problem, the goal is to assign each lesson to a room and a timeslot. To use the OptaPlanner vocabulary, the Lesson is a planning entity and its references to the Room and the Timeslot are planning variables.
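As a rough sketch of what that looks like in OptaPlanner (field names follow the quickstart's domain, but treat this as illustrative rather than the exact quickstart code):

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

@PlanningEntity
public class Lesson {

    private String subject;
    private String teacher;
    private String studentGroup;

    // OptaPlanner assigns these two references during solving.
    @PlanningVariable(valueRangeProviderRefs = "timeslotRange")
    private Timeslot timeslot;

    @PlanningVariable(valueRangeProviderRefs = "roomRange")
    private Room room;

    // constructors, getters and setters omitted for brevity
}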
The TimeTableConstraintProvider defines the following constraints on how the lessons should be assigned to timeslots and rooms:
A room can have at most one lesson at the same time (hard).
A teacher can teach at most one lesson at the same time (hard).
A student can attend at most one lesson at the same time (hard).
A teacher prefers to teach in a single room (soft).
A teacher prefers to teach sequential lessons and dislikes gaps between lessons (soft).
A student dislikes sequential lessons on the same subject (soft).
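For illustration, the first hard constraint could be expressed with the Constraint Streams API roughly like this (a sketch, not the verbatim quickstart code):

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;
import org.optaplanner.core.api.score.stream.Joiners;

public class TimeTableConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory constraintFactory) {
        return new Constraint[] {
                roomConflict(constraintFactory)
                // ... the remaining hard and soft constraints
        };
    }

    private Constraint roomConflict(ConstraintFactory constraintFactory) {
        // A room can accommodate at most one lesson at the same time (hard constraint).
        return constraintFactory
                .fromUniquePair(Lesson.class,
                        Joiners.equal(Lesson::getTimeslot),
                        Joiners.equal(Lesson::getRoom))
                .penalize("Room conflict", HardSoftScore.ONE_HARD);
    }
}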
Quickstart structure
The project consists of three modules:
amq-quarkus-school-timetabling-common defines the problem domain, and the SolverRequest and SolverResponse classes for messaging. The following two modules depend on this one.
amq-quarkus-school-timetabling-client is the Client Quarkus application that contains a UI, a REST endpoint and a demo data generator.
amq-quarkus-school-timetabling-solver is the Solver Server Quarkus application that solves school timetabling problem instances coming via a message queue solver_request.
Messaging
The Client application serializes an unsolved TimeTable wrapped by the SolverRequest class into a JSON and sends it to the solver_request queue. The Solver Server receives the request from this queue, deserializes it and solves the TimeTable via OptaPlanner. After the solving finishes, the Solver Server wraps the TimeTable by the SolverResponse class, serializes it to a JSON and sends it to the solver_response queue.
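On the Client side, the sending part can be as small as the following sketch (the SolverRequest constructor and the injected types mirror the description above, but treat the exact signatures as assumptions):

@Inject
ObjectMapper objectMapper;

@Inject
@Channel("solver_request")
Emitter<String> solverRequestEmitter;

public void startSolving(Long problemId, TimeTable timeTable) throws JsonProcessingException {
    // Wrap the unsolved TimeTable and send it as a JSON payload to the solver_request channel.
    SolverRequest solverRequest = new SolverRequest(problemId, timeTable);
    solverRequestEmitter.send(objectMapper.writeValueAsString(solverRequest));
}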
Requirements
No solver request message may be lost, even if the Solver Server crashes.
Any error that occurs in the Solver Server must be propagated back to the Client.
An invalid solver request message is sent to a dead letter queue.
AMQ is a natural fit
AMQ Artemis is a natural fit for this use case for multiple reasons. First, it supports huge messages without extra configuration. Second, solving may often take several hours before the Solver Server can send a response with a solution and finally acknowledge the request message. Last but not least, AMQ guarantees that each message is delivered exactly once, provided the messages are persisted at the broker. These properties let the Solver Server avoid keeping any state and simply transform input planning problems into solutions.
For different use cases, for example, real-time planning, other technologies like Kafka may be a better fit, but for this use case, AMQ wins.
When messaging meets OptaPlanner
The quickstart uses Smallrye Reactive Messaging to send and receive messages. Let’s take a look at the TimeTableMessagingHandler located in the Solver Server application.
...
Solver<TimeTable> solver;

@Inject
ObjectMapper objectMapper; // (1)

@Inject
@Channel("solver_response") // (2)
Emitter<String> solverResponseEmitter;

@Inject
TimeTableMessagingHandler(SolverFactory<TimeTable> solverFactory) {
    solver = solverFactory.buildSolver(); // (3)
}

@Incoming("solver_request") // (4)
public CompletionStage<Void> solve(Message<String> solverRequestMessage) { // (5)
    return CompletableFuture.runAsync(() -> { // (6)
        SolverRequest solverRequest;
        try {
            solverRequest = objectMapper.readValue(solverRequestMessage.getPayload(), SolverRequest.class); // (7)
        } catch (Throwable throwable) {
            LOGGER.warn("Unable to deserialize solver request from JSON.", throwable);
            /* Usually a bad request, which should be immediately rejected.
               No error response can be sent back as the problemId is unknown.
               Such a NACKed message is redirected to the DLQ (Dead letter queue).
               Catching the Throwable to make sure no unchecked exceptions are missed. */
            solverRequestMessage.nack(throwable);
            return;
        }
        TimeTable solution;
        try {
            solution = solver.solve(solverRequest.getTimeTable()); // (8)
            replySuccess(solverRequestMessage, solverRequest.getProblemId(), solution);
        } catch (Throwable throwable) {
            replyFailure(solverRequestMessage, solverRequest.getProblemId(), throwable); // (9)
        }
    });
}
...
Inject ObjectMapper to unmarshall the JSON message payload.
Emitter sends response messages to the solver_response channel.
Inject a SolverFactory and build a Solver.
The @Incoming annotation makes the method listen for incoming messages from the solver_request channel.
By accepting Message as a parameter, you have full control over acknowledgement of the message. The generic type of the Message is String, because the message contains the SolverRequest serialized to a JSON String. Finally, the return type CompletionStage<Void> enables an asynchronous acknowledgement. See Consuming Messages for more details.
Return a CompletionStage<Void> to satisfy the method contract and avoid blocking the thread.
Unmarshall the JSON payload. If it’s not possible, reject the message.
Solve the input timetabling problem and then send a reply (see the next figure).
In case any exception occurs, include information about the exception into the response.
The example below shows how to reply and acknowledge the original request message:
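A hedged sketch of what replySuccess() could look like (the SolverResponse constructor is an assumption based on the description above):

private void replySuccess(Message<String> solverRequestMessage, Long problemId, TimeTable solution) {
    SolverResponse solverResponse = new SolverResponse(problemId, solution);
    try {
        solverResponseEmitter.send(objectMapper.writeValueAsString(solverResponse))
                // Ack the request only after the broker has accepted the response message.
                .thenAccept(x -> solverRequestMessage.ack());
    } catch (JsonProcessingException e) {
        replyFailure(solverRequestMessage, problemId, e);
    }
}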
thenAccept() defines what happens when the AMQ broker acknowledges the response message sent via the Emitter. In this case, the request message is acknowledged. This way, the request message is never lost even if the Solver Server dies.
To understand how the channels correspond to messaging queues, see the application.properties file located in src/main/resources:
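A hedged sketch of the relevant entries (the connector and property names assume the SmallRye AMQP connector):

# Map the solver_request channel to the incoming AMQ queue (SmallRye AMQP connector assumed)
mp.messaging.incoming.solver_request.connector=smallrye-amqp
mp.messaging.incoming.solver_request.durable=true
# Map the solver_response channel to the outgoing AMQ queue
mp.messaging.outgoing.solver_response.connector=smallrye-amqp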
Modernising kie-server with new and more user-friendly DMN endpoints, better Swagger/OpenAPI documentation, easier JSON-based REST invocations; an intermediate step to help developers transitioning to service-oriented deployments such as a Kogito-based application.
In a nutshell:
The current DMN kie-server endpoints are fully compliant with the kie-server extension design architecture and aligned with all other kie-server services and extensions. However, some aspects of kie-server's generic approach are not very user-friendly for DMN evaluations, due to limitations of the Swagger documentation and the requirement that REST payloads follow the generic kie-server marshaller protocol. These aspects apply to all kie-server services, naturally including the DMN kie-server endpoints. On the other hand, experience has shown that manually building the REST payload for DMN evaluation on Kogito is very easy for end users, thanks to key features of the DMN core capabilities.
This new feature (DROOLS-6047) extends DMN on kie-server with new endpoints, leveraging those core capabilities; the new DMN endpoints provide better Swagger documentation and can be more easily consumed by end users, contributing to modernising the kie-server platform while also making it easier to eventually transition to a full Kogito-based application and deployment!
Why is this needed?
Currently on kie-server, the DMN service exposes 2 endpoints which are fully compliant with kie-server extension design architecture:
GET /server/containers/{containerId}/dmn Retrieves DMN model for given container
POST /server/containers/{containerId}/dmn Evaluates decisions for given input
The current swagger documentation is agnostic to the actual model content of the knowledge asset, like for any other kie-server extension:
This limited style of Swagger documentation is sometimes an undesirable side-effect of the generic approach of the kie-server extension design:
all kie-server extensions receive as input a generic String, which is converted internally by the extension using the generic kie-server marshaller. This causes the Swagger documentation to not display anything meaningful for the request body besides Model==string, and the only helpful information can be provided only as a comment (“DMN context to be used while evaluation decisions as DMNContextKS type”).
all kie-server extensions return as output a ServiceResponse<T>, where Java’s generic T is extension-specific. Generating Swagger documentation with Java generics is already limited; in this case the difficulty compounds because the actual content of T varies from DMN model to model!
the DMN evaluation payload itself contains the coordinates of the model to be evaluated and the model-specific input context, per the original implementation requirements; but this interconnection between model coordinate values and input content structure is pragmatically impossible to define meaningfully with a Swagger or OpenAPI descriptor.
About the last point specifically, consider this example DMN payload:
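For instance (a hedged example; the namespace is a placeholder and the context fields follow the Traffic Violation model discussed below):

{
  "model-namespace": "https://kiegroup.org/dmn/_PLACEHOLDER",
  "model-name": "Traffic Violation",
  "dmn-context": {
    "Driver": { "Points": 2 },
    "Violation": { "Type": "speed", "Actual Speed": 120, "Speed Limit": 100 }
  }
}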
because the content of dmn-context depends on the values of the model-namespace and model-name coordinates, there is no pragmatic way to define with Swagger/OpenAPI that dmn-context must have the properties “Driver” and “Violation” for this Traffic Violation model, or the property “Customer” for another DMN model.
Besides endpoint documentation limitations, experience has proved that manually building the kie-server generic payload from scratch, following the style of the generic kie-server marshaller, is very difficult for most end users (in fact we always advise starting from the Kie Server Client API rather than from scratch, but this suggestion is often ignored anyway):
XML/JAXB format requires the domain model POJOs to be correctly annotated first, and building Java collections manually is quite tricky.
XML/XStream is a more natural format; it still requires domain model POJO annotations and respecting the domain object FQN, and it is yet another XML format while most end users seem to prefer JSON instead.
JSON/Jackson would be the user preference nowadays, but it requires respecting the domain object FQN, which is very alien to native JSON users.
Example: the correct way to marshall the Traffic Violation example, respecting the domain model defined in the kjar project, would be:
Everything would be much easier when building the JSON body payload manually for DMN evaluation if we could drop the strict requirement to respect the generic kie-server marshalling format.
NEW model-specific DMN kie-server endpoints
We can now move past these limitations, thanks to the next generation of DMN endpoints on kie-server, which leverage some new DMN core capabilities:
programmatic generation of Swagger and OpenAPI (Swagger/OAS) metadata (DROOLS-5670)
consistent DMNContext build from JSON, based on DMN Model metadata (DROOLS-5719)
to ultimately offer more user-friendly endpoints on kie-server for DMN evaluation!
Following similar style to what is offered today via Kogito, summarized in this blog post, we implemented the following new DMN endpoints on kie-server:
GET /server/containers/{containerId}/dmn/openapi.json (|.yaml) Retrieves Swagger/OAS for the DMN models in the kjar project
GET /server/containers/{containerId}/dmn/models/{modelname} Standard DMN XML but without any decision logic, so it can be used as a descriptor of the DMN model (what the inputs are, what the decisions are) while using the same format as the DMN XSD.
POST /server/containers/{containerId}/dmn/models/{modelname} JSON-only evaluation of a specific DMN model with a body payload tailored for the specific model
POST /server/containers/{containerId}/dmn/models/{modelname}/{decisionServiceName} JSON-only evaluation of a specific decision service of a specific DMN model with a body payload tailored for the specific model
POST /server/containers/{containerId}/dmn/models/{modelname}/dmnresult JSON-only evaluation of a specific DMN model with a body payload tailored for the specific model, but returning a JSON representation as a DMNResult
POST /server/containers/{containerId}/dmn/models/{modelname}/{decisionServiceName}/dmnresult JSON-only evaluation of a specific decision service of a specific DMN model with a body payload tailored for the specific model, but returning a JSON representation as a DMNResult
For the difference between the “business-domain” and “dmnresult” variants of the REST endpoints, see the original blog post linked above.
Making reference to the Traffic Violation example model, this new capability can now offer on kie-server something similar to:
As we can see, both the input body payload and the response body payload offer Swagger/OAS schemas which are consistent with the specific DMN model!
This is possible thanks to a convergence of factors:
Because each REST POST endpoint for DMN evaluation is specific for DMN model in the REST Path, it is possible to offer Swagger/OAS definition which are DMN model-specific e.g.: because POST /server/containers/mykjar-project/dmn/traffic-violation is a REST endpoint specific to the Traffic Violation model, both its input and output payload can now be documented properly in the Swagger/OAS schema definitions.
Because each Swagger/OAS definition is offered at the kjar/kie-container level, it is possible to programmatically generate the schema definitions for only the DMN models contained in that specific container; e.g. GET /server/containers/mykjar-project/dmn/openapi.json offers only definitions for the DMN models inside “mykjar-project”. This is thanks to the following DMN core capability: programmatic generation of Swagger/OAS metadata (DROOLS-5670)
Because these endpoints are specific to DMN evaluation and focus on natural, idiomatic JSON usage, they do NOT require following the generic kie-server marshalling format. This is thanks to the following DMN core capability: consistent DMNContext build from JSON based on DMNModel metadata (DROOLS-5719)
Any limitations?
Since this is a new set of endpoints, added alongside the currently existing ones, there is basically no impact on the already-existing DMN kie-server capabilities.
As this proposed set of new endpoints is contained within a specific {containerId}, the openapi.json|.yaml Swagger/OAS definition file is also only kie-container specific.
In turn, this means that when accessing the swagger-ui client editor, the user needs to manually point it to the container-specific URL, for example something like:
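http://localhost:8080/kie-server/services/rest/server/containers/mykjar-project/dmn/openapi.json (assuming the default kie-server base path and the “mykjar-project” container used in the examples above)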
Finally, as this core capability leverages Eclipse MicroProfile OpenAPI and the SmallRye OpenAPI core, it requires the use of Swagger-UI and clients that are compatible with OpenAPI Specification version 3.0.3 onwards.
Conclusions
We believe this feature meaningfully extends the current set of capabilities, by providing more user-friendly DMN endpoints on kie-server!
Developers can make full use of this new feature to simplify existing REST call invocations, and as a stepping stone to eventually migrate to a Kogito-based application.
Have you tried it yet? Do you have any feedback? Let us know in the comments below!
We have just launched a fresh new Kogito Tooling release! On the 0.8.5 release, we made many improvements and bug fixes.
We are also happy to announce a new PMML Scorecard Editor, and that our editors are now available on Eclipse Theia upstream (built from the Theia master branch).
This post will give a quick overview of what is included on this release.
PMML Scorecard Editor (alpha) hits VSCode Market Place
We are happy to announce that we have a new VS Code extension: PMML Editor. It allows you to create and edit PMML 4.4 (.pmml) Scorecard files.
This new editor is in the alpha stage, and we are looking for feedback from the community. We hope you enjoy it!
Eclipse Theia and Open VSIX Store
Eclipse Theia is an extensible framework for developing full-fledged multi-language Cloud & Desktop IDE-like products with state-of-the-art web technologies, and it is compatible with VS Code extensions. Recently, Theia’s team merged a PR adding support for the CustomEditor API.
In practice, this means that from now on, our BPMN, DMN and other editors can run on Eclipse Theia upstream (you can build it from the Theia master branch and run it); take a look at this demo:
Eclipse Theia uses Open VSX Registry, and from now on, all our releases will also be available on Open VSX store.
New Features, fixed issues, and improvements
We also added some new features and made a lot of refactorings and improvements; highlights include:
New features:
Infrastructure
KOGITO-204 – Implement a integration tests using Cypress for online channel
KOGITO-4242 – Migrate VS Code Extension release job to new Jenkins instance
KOGITO-4666 – Converge the CSS to avoid conflicts between PF3 and PF4
Editors
FAI-362 – Score Cards: Integrate with VS Code channel
I want to thank everyone involved with this release, from the excellent KIE Tooling Engineers to the lifesavers QEs and the UX people that help us look fabulous!
The invocation of remote services plays a big role in workflow orchestration. In this blog post, we will take a look at RESTful service orchestration using Kogito and the OpenAPI specification.
CNCF Serverless Workflow Implementation
Kogito is a modern business automation runtime. In addition to flowchart and form-based workflow DSLs, it also supports CNCF Serverless Workflow, a declarative workflow DSL that targets the serverless technology domain. At the time of this writing, Kogito supports a subset of the features of version 0.5 of the specification.
Since version 1.3.0, Kogito has the ability to define workflows that can orchestrate RESTful services described via OpenAPI. This fits well with the Serverless Workflow specification, where OpenAPI is the default standard for describing RESTful services. In other words, you don’t need to worry about writing boilerplate client code to orchestrate RESTful services. All you need to do is to declare the service calls!
Understanding function declarations with OpenAPI
Our business requirement is to write a simple serverless temperature converter. To do this, we want to write a workflow that can orchestrate two existing RESTful services, namely Multiplication and Subtraction services.
These two services are described via OpenAPI, meaning they are described in a programming language-agnostic way. This means that you do not need to know how to write the code that invokes these services.
Kogito reads these function definitions at build time; they contain the information needed to generate REST client code from the referenced OpenAPI specification files. The generated code is based on the OpenAPI Generator tool, now embedded in our platform.
In our workflow definition, we have to know the location of the services’ OpenAPI definition and the specific operation we want to invoke on the defined service.
Serverless Workflow allows us to define reusable function definitions. These definitions represent an invocation of an operation on a remote service. Function definitions have a domain-specific name and can be referenced by that name throughout the workflow control-flow logic when they need to actually be invoked. Below are the workflow function definitions we will use throughout the blog post:
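A hedged sketch of those function definitions (the spec file names are assumptions; the operation value follows the <OpenAPI file>#<operationId> format, with doOperation being the operation mentioned later in the post):

{
  "functions": [
    {
      "name": "multiplication",
      "operation": "specs/multiplication.yaml#doOperation"
    },
    {
      "name": "subtraction",
      "operation": "specs/subtraction.yaml#doOperation"
    }
  ]
}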
With this in place, invoking these services in the workflow becomes trivial. All we have to do is define when in the workflow control-flow logic they need to be invoked. Workflow control-flow logic in Serverless Workflow is defined within the "states" block. This is where you define all your workflow states (steps) and the transitions between them:
Going back to our business requirements, for temperature conversion, our workflow needs to call the two services in a certain order. First, we want to execute the Multiplication service and then the Subtraction service. The Operation state is perfect for what we need.
The parameters are taken from the workflow data input and processed with JSONPath expressions. And how do we know how to define these parameters? It’s just a matter of extracting from the OpenAPI Specification file:
The workflow declares two functions that represent the service operations that should be invoked during workflow execution. The first one, multiplication, will execute the operation doOperation from the OpenAPI specification file in our project’s classpath (Kogito also supports file and http schemas). Same thing for the subtraction function.
Since this operation only needs one parameter, we can name it as we like (in this case, multiplicationOperation). For operations that require multiple parameters (like query strings), you should use the same names as defined by the OpenAPI specification.
Configuring the Endpoints
The last piece of the puzzle is to define the URL for each of the services we want to invoke. To do so, declare the URLs in your application properties file with a configuration like: org.kogito.openapi.client.<spec file name>.base_path=http://myservice.com.
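For the two services in this example, that could look like the following (a hedged sketch; the property suffixes and ports are placeholders derived from the assumed spec file names):

org.kogito.openapi.client.multiplication.base_path=http://localhost:8282
org.kogito.openapi.client.subtraction.base_path=http://localhost:8181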
This is a runtime property that can be defined using any method the target runtime (Quarkus or SpringBoot) supports. But if the OpenAPI Specification file declares the endpoint URL, you don’t even need to bother. Take a look at the PetStore, for example:
Now you’re ready to call your newly generated Kogito Workflow and start orchestrating services! You can find the full Temperature Conversion workflow example here.
TIP: If you’re curious about the CNCF Serverless Workflow Kogito implementation, please take a look at these references:
As you probably already know, you can use Dashbuilder, a part of Business Central, to create pages and intuitive dashboards.
In Dashbuilder, pages are composed of small components that can show any type of data. Dashbuilder provides multiple components by default that users can drag onto pages. Recently, we added a bunch of new components, like treemaps, charts and maps, to extend its usability and help represent user data precisely.
Recently, we added the Prometheus dataset provider to extend the usability of Dashbuilder to represent time series metrics (see more details on this blog post).
In this blog post, I’m going to walk you through the new external component added to Dashbuilder to better represent time-series data and use it to create your own dashboards connected to your time series datasets.
Time series component
This is one of the new components that we have added using React and the ApexCharts library. Now, you can provide a custom dataset or Prometheus metrics and create visualizations of your time series data on a line or area chart using Dashbuilder.
ApexCharts is an MIT-licensed open-source library for creating interactive JavaScript charts built on SVG. You can find it on GitHub. ApexCharts provides some built-in features, like downloading the dataset as CSV or downloading the chart in PNG or SVG format; just click the hamburger menu icon in the top right corner to discover them. The component that I have used is the zoomable time series, which means you can zoom in and out of a particular area of the chart.
After adding the component to the aforementioned directory and enabling external components, just click on the External Components dropdown, select time-series-chart, and drag it to the page; select the dataset in the Data tab (make sure that the columns are selected properly), set the component properties in the Component Editor tab, and you are done.
In order to add datasets, click on the menu dropdown in the navigation bar and select Datasets. You will be asked which type of dataset you want to add; you can add a CSV dataset or Prometheus metrics, according to your choice. The names you give the datasets while adding them are the names that will appear in the Data tab after you drag the component onto the page.
You can use any library of your choice to create a component; just add the library (for instance, react-apexcharts and apexcharts) to package.json and import it in the respective TypeScript or JavaScript files. Configure the data into the required format to feed it to the component. For example, the major props that the zoomable time-series chart uses include options and series: the Options interface takes care of the x-axis categories and the chart name, and the Series interface takes the arrays of names and series values. They look like this:
We have the following component properties to make it customizable:
Show Area: A checkbox to set the type of chart, area, or line;
Chart Name: To set the chart name;
Date Categories: A checkbox to handle categories as dates or pure text;
Labels: To enable or disable data labels on data points;
Transposed: Whether the dataset provided uses series as separate columns or as rows.
Time series component in action
Conclusion
There is no limit to what library/framework you want to use. We are continuing to include more custom components to allow users to create interactive dashboards.
A huge shoutout to William Antônio Siqueira for the idea of integrating the Prometheus dataset provider with Dashbuilder and for the chart GIF.