The JBPM KIE server has a rich set of REST APIs that allow control over business processes and other business assets. Interaction with business processes is straightforward and easily accessible. Usually these APIs are used by the Business Central component and by custom applications. In this post I want to give a simple example of how to interact with business processes, deployed in a KIE server, using Apache Camel. Utilizing business processes in a Camel route, as you will see, is pretty easy!
There is an available component in Camel called simply the JBPM component. Both consumers and producers are supported. The consumer side has already been covered by Maciej in this excellent post. My example will focus on the producer side.
What interactions are available?
The JBPM component uses exchange headers to hold the operations/commands used to interact with the KIE server. The query parameters in the URL hold the security and deployment information. The following is an example of a JBPM URL.
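A sketch of what such a URL could look like, assuming the standard camel-jbpm producer options (userName, password and deploymentId):

jbpm:http://localhost:8080/kie-server/services/rest/server?userName=myuser&password=mypass&deploymentId=mydeployment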
That URL above will interact with the KIE server running on localhost port 8080, using the container deployed as mydeployment, the user myuser and the password mypass. Of course, all these values can be parameterized.
The following list shows some of the interactions available as of Apache Camel 3.8. The header key for the operation header is JBPMConstants.OPERATION. All supported header keys are listed in the JBPMConstants class.
startProcess (supported headers: PROCESS_ID, PARAMETERS): Starts a process in the container deployed to the KIE server (taken from the URL), using the given process definition ID and parameters.
signalEvent (supported headers: PROCESS_INSTANCE_ID, EVENT_TYPE, EVENT): Signals a process instance with the signal name (EVENT_TYPE) and payload (EVENT). If the process instance ID is missing, the signal scope will be default.
getProcessInstance (supported headers: PROCESS_INSTANCE_ID): Retrieves the process instance with the given ID. The resulting exchange body is populated with an org.kie.server.api.model.instance.ProcessInstance object.
completeWorkItem (supported headers: PROCESS_INSTANCE_ID, WORK_ITEM_ID, PARAMETERS): Completes the work item using the given parameters.
You can find the complete list in the org.apache.camel.component.jbpm.JBPMProducer.Operation enum.
Simple Example
In this example we’ll be using the following versions:
Log into Business Central (http://localhost:8080/business-central/) using wbadmin/wbadmin as the credentials. In the default MySpace, create a project called test-camel and a business process in it called test-camel-process. Add a process variable called data of type String. Add a script task to print out data. Save and deploy the project.
To quickly stand up a Camel environment, we’ll be using Spring Boot. The following command will help you create the Spring Boot project:
The above example will send "hello world" as the data to start the process in the KIE server. By default, without specifying JBPMConstants.OPERATION, the operation will be to start the process. The reason the operation is set to CamelJBPMOperationstartProcess is that the operation header value is an aggregation of the value of JBPMConstants.OPERATION and the enum value defined in org.apache.camel.component.jbpm.JBPMProducer.Operation. Unfortunately we cannot use the operation enum directly because it’s not exposed publicly.
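For reference, a route along those lines could look like the following sketch (the connection URL, credentials, container id and process id are assumptions you would adjust to your own deployment):

import java.util.Collections;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jbpm.JBPMConstants;
import org.springframework.stereotype.Component;

@Component
public class StartProcessRoute extends RouteBuilder {

    @Override
    public void configure() {
        // fire once at startup and start a process instance in the KIE server
        from("timer:start?repeatCount=1")
            // process definition id as shown in Business Central (assumption: project.process naming)
            .setHeader(JBPMConstants.PROCESS_ID, constant("test-camel.test-camel-process"))
            // process variables for the new instance ("data" is the String variable created above)
            .setHeader(JBPMConstants.PARAMETERS,
                    constant(Collections.singletonMap("data", "hello world")))
            // optional: startProcess is already the default operation;
            // the value is JBPMConstants.OPERATION ("CamelJBPMOperation") + the enum name
            .setHeader(JBPMConstants.OPERATION, constant(JBPMConstants.OPERATION + "startProcess"))
            // container id is an assumption: use the one shown after deploying the project
            .to("jbpm:http://localhost:8080/kie-server/services/rest/server"
                    + "?userName=wbadmin&password=wbadmin&deploymentId=test-camel_1.0.0-SNAPSHOT");
    }
}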
“It’s like having wings, like flying sometimes because you go off into another realm” (Paul Rodgers)
Elytron is the new security framework offered by JBoss EAP/Wildfly, which tries to unify security management and application access in a single subsystem.
The legacy security subsystem has been deprecated and may be removed or limited in future versions of JBoss EAP/WildFly, which now ship Elytron as its replacement. In this post, we cover how to migrate the current jBPM images for kie-server-showcase and jbpm-server-full (which also includes Business Central) from legacy PicketBox (a security subsystem based on JAAS login modules) to Elytron.
The new images should incorporate the configuration for LDAP authentication and authorization instead of the default properties-based one.
For each image, we are going to follow a different strategy:
Partial migration: keeps the legacy login modules in the security subsystem but exposes them to Elytron.
Full migration: login modules are completely replaced by an Elytron security domain.
All the code and configuration for these examples can be found here.
Environment setup
Our test class (with scenarios for testing authentication and process variable change authorization in jBPM) will make use of Testcontainers:
an OpenLDAP container populated with an LDIF (LDAP Data Interchange Format) file containing the test fixture;
a KIE Server plus a business application, built on the fly with a multi-stage strategy in the Dockerfile:
First, Maven installs the kjar (another option would have been to fetch it from GitHub);
Then, jboss-cli scripts tune the standalone configuration, including LDAP support and Elytron.
In this setup, both containers will share the same network and will communicate with each other using the network-alias.
TIP: Notice that if the LDAP server doesn’t allow anonymous binding (as in the current image), then the ldap.bind.user and ldap.bind.pwd parameters are mandatory in this file.
Partial migration
In this case, we are going to use the jbpm-server-full image as it uses the KieLoginModule for the business-central.war and jbpm-casemgmt.war deployments. The KieLoginModule is in charge of keeping the BASIC Authorization header as a principal for the upcoming REST API invocations from these clients.
So, the idea is to add a new legacy login module for LDAP auth, belonging to WildFly’s security subsystem, and then expose this domain as an Elytron security realm so that it can be part of the Elytron subsystem.
We’ll do these actions by using the jboss-cli script:
TIP: jboss-cli is a script available by default in WildFly’s bin directory. It comes in .sh and .bat flavors, so you can run it on Unix-based OSes and on Windows, respectively.
1.- Let’s define a LdapExtLoginModule that matches our LDAP configuration:
Notice that the security-domain has to be called other because that is the name protected by the KIE application security domain, as you can see in the images, in the jboss-web.xml file.
This name is the same as the security domain preconfigured for other login modules, so it’s better to remove those legacy ones:
TIP: In this case, there is no need for a simple-role-decoder to associate roles, as they are retrieved by the legacy login modules.
4.- Configure an http-authentication-factory (here called ldap-http-auth) for the KIEDomain and add to it the BASIC (linked to the LegacyRealm) and FORM authentication mechanisms used by the KIE applications.
It’s time to check that everything worked fine: at runtime, from jboss-cli, read the protected deployments (remember that other is the name of the security-domain in the jboss-web.xml of these wars):
For the authorization scenarios, the authenticated subject should contain the principals represented in the image below. These are populated by the login modules and will be used by the JACC mechanism to obtain the roles for the IdentityProvider:
Full migration
In the case of the kie-server-showcase image, only kie-server.war is present (no KieLoginModule dependencies) and therefore it’s possible to make a full migration to Elytron.
Elytron is based on the security-domain concept, in other words, the representation of a security policy. It is backed by one or more security realms, plus resources that perform transformations (role-decoder, permission-mapper and others).
In this practical example, we are going to use Elytron LDAP Security Realm to access LDAP backend and verify credentials as well as obtain attributes associated with an identity.
More complex scenarios would allow having several security realms and, by means of a security-mapper, determining which attributes would be retrieved from each security realm.
1.- First, let’s remove the security-domain called other at legacy security subsystem, as it will be no longer used:
/subsystem=security/security-domain=other:remove
2.- Let’s add the elytron subsystem from scratch (if not present):
4.- Create the security domain in Elytron, named KIEDomain (any name is valid, as we will map it later to the one defined at the application level), and add to it the previously created LDAP realm and the default-permission-mapper:
TIP: The default-permission-mapper gives “login permission” to all users except the one with the anonymous principal, which is excluded from login. This means that even if verification against the backend LDAP succeeds (and returns valid roles), the login action won’t be allowed for the anonymous principal.
<permission-mapping>
<principal name="anonymous"/>
<!-- No permissions: Deny any permission to anonymous! -->
</permission-mapping>
It will produce the following logs:
Identity [anonymous] attributes are:
Attribute [Roles] value [user].
Authorizing principal anonymous.
Authorizing against the following attributes: [Roles] => [user]
Permission mapping: identity [anonymous] with roles [user] implies ("org.wildfly.security.auth.permission.LoginPermission" "") = false
Authorization failed - identity does not have required LoginPermission
5.- Next, we need to define the HTTP authentication factory: for kie-server, we need to link the mechanisms for BASIC and FORM authentication:
6.- Map the application security domain (other, as it is the one specified in jboss-web.xml) to our Elytron security domain (KIEDomain) for the undertow and ejb3 subsystems:
7.- Update the messaging-activemq (JMS) subsystem to point to our Elytron security domain (KIEDomain) and undefine (remove) the default security domain provided by WildFly:
That’s all. Now, let’s see how it works: after a request to the KIE server is filtered and assigned to an HTTP mechanism, it is handled by the KieLdap realm. Once a user has been authenticated against LDAP and its roles retrieved, the security domain produces a security identity, as you can see in the logs below:
Obtaining authorization identity attributes for principal [Bartlet]:
Identity [Bartlet] attributes are:
Attribute [Roles] value [President].
Attribute [Roles] value [kie-server].
These roles will be retrieved by JACC IdentityProvider to authorize actions inside KIE server.
Conclusion
The legacy security subsystem has been deprecated in EAP/WildFly and, in the future, it will be totally removed. Elytron will then become the one unified subsystem for authentication and authorization.
To ease the transition, a partial migration is offered to link both subsystems, but the full migration is preferred. KIE server is ready to migrate with a few jboss-cli operations. Give it a try; it’s really worth it!
For a long time, Business Central has had Dashboards capabilities to build reports from process execution data. Recently, with the addition of external components, it is also possible to create any visual representation for datasets coming from Business Central.
In this post, we will discuss a new category of visual components added in jBPM 7.48.0 Final: heatmaps.
Heatmaps Components
Processes are visually represented in the BPMN editor, and Kie Server projects contain the process BPMN and an SVG representation of the process. All the process execution data is stored in the jBPM log tables; in particular, the NodeInstanceLog table contains information about the nodes executed for a process instance. This information, along with the process SVG, is what the heatmap components use under the hood to gather data from your process.
In 7.48.0 Final, when authoring a page you will notice a new Heatmaps category. It contains two new components: Process Heatmap and Processes Heatmaps. To enable these components, make sure you have the system property dashbuilder.components.enable set to true; when accessing Pages in Business Central you will then notice the new components:
Heatmap component in action; Heatmaps category
Process Heatmap Component
This component can be used to display the heatmap of a specific process. In the component properties, users need to fill in:
Server Template: The Kie Server server template where the process is running;
Container ID: The container that contains the process definition;
Process ID: The process id.
Process Heatmap configuration
This should be enough information to have the process diagram rendered. To add heat information, you must use a data set that contains the node id and a value for the heat, such as Total hits.
Processes Heatmaps Component
This component should be used if you want to explore all the processes in a Kie Server installation. The only manual setting is the server template id.
All processes heatmaps component configuration
The data about the container, process, node, and heat value will come from the data set. You can use the same query, Nodes execution time and hits, from the last post and provide the columns EXTERNALID, PROCESSID, NID, and one of the values for the heat, such as AVERAGEEXECUTIONTIME.
Processes Heatmaps Data Set configuration
Heatmaps Tutorial
Here are the detailed steps to use the heatmaps feature in jBPM 7.48.0 Final:
Run Business Central connected to Kie Server
The main requirement is to have Business Central running and connected to at least one Kie Server. Also, make sure that you have a container and some data for testing. You can use the sample “Evaluation” project and a jBPM server distribution, which comes configured with a Kie Server connected to it.
Kie Server connected to Business Central
Create a Kie Server dataset to retrieve nodes information
Following the steps from the Queries for Building Kie Server post, create a data set to retrieve container, process, node, and heat information from Kie Server. The query Nodes execution time and hits could be a good option.
select
pil.externalId,
pil.processId,
nid,
nodetype,
nodename,
count(nid) as total_hits,
avg(execution_time) as averageExecutionTime,
min(execution_time) as minExecutionTime,
max(execution_time) as maxExecutionTime
from(
select
max(log_date) as lastLog,
processinstanceid as piid,
nodeinstanceid as niid,
nodeid as nid,
nodetype,
nodename,
DATEDIFF(SECOND, min(log_date), max(log_date)) as execution_time
-- inner query over the jBPM audit log (assumes the standard NodeInstanceLog/ProcessInstanceLog schema)
from NodeInstanceLog
group by processinstanceid, nodeinstanceid, nodeid, nodetype, nodename
) nodes
inner join ProcessInstanceLog pil on pil.processinstanceid = nodes.piid
group by pil.externalId, pil.processId, nid, nodetype, nodename
Notice that the Node Execution SQL query may not be portable to all databases due to the use of DATEDIFF, which varies across database distributions. A possible portable alternative is a query that only counts the nodes' total hits.
Having defined the query, you can then go to Business Central and create the data set:
Creating Data Set
Create a page and drag heatmap components to it
Now we are ready to use the heatmap components. In order to do this, we need to go to the Pages tool: using the Menu, click on Pages under the Design menu.
Process Heatmap: it shows a specific process heatmap. Drag it from the component palette to the page and you will notice that the component is not immediately displayed; instead, we see a warning message saying that a server template is required.
Process Heatmap configuration issue
The message says that some properties are missing, so go to the Component Editor tab and fill in the required information. Once you finish, you will see the process SVG.
Process Heatmap configuration
Now go to the Data tab and select the data set we created in step 2. Once the data set is selected you may see another WARNING message saying that the columns are invalid.
Process Heatmap data set bad configuration
What we need to do is simply select the correct columns: the first column must contain the node id and the second column a value for the heat for that node. Once this is done, the SVG will be back, with the heat values printed on it.
Process Heatmap data set configuration
The process heatmap is tied to a specific process definition. For multiple processes, we can use the All Processes Heatmaps component.
All Processes Heatmaps: this component is capable of showing heatmaps for multiple processes available in a Kie Server instance. Find it under the Heatmaps category and drag it to the page, and you will notice a warning message.
Bad configuration for All Processes Heatmaps component
Go to the Component Editor tab and fill in a Server Template value; the warning message will change, now complaining about the data set columns.
Data Set bad configuration
Select the Data tab and then the data set you created in step 2, with the columns in this order: CONTAINER ID, PROCESS ID, NODE ID, and some value for the heat. Then you should see the process diagram with the heat values, but this time with a selector to choose other processes.
All Processes Heatmaps configuration
Running Heatmaps on Dashbuilder Runtime
You can export dashboards containing heatmaps using the data transfer feature. To import them into Dashbuilder Runtime, you just have to configure system properties to set up the server template credentials and URL, as described in the Introduction to Dashbuilder Runtime post.
For example, let’s say the server template used in the heatmap component and in the dataset is sample-server. In Dashbuilder Runtime you must set the following system properties:
To run the same dashboard against another Kie Server, simply change the values of the properties above. You may also set the replace_query flag to true so that Dashbuilder Runtime creates the required queries on the Kie Server side (be aware that it will replace queries with the same UUID).
Start Dashbuilder Runtime only with the Kie Server credentials and, on the first Dashbuilder Runtime access, upload the heatmaps ZIP; or
Start the Dashbuilder Runtime server with the system property dashbuilder.runtime.allowExternal set to true and then, when accessing Dashbuilder Runtime, pass the heatmaps dashboard URL, for example:
You may face the error “There was an error retrieving process SVG: There was an error executing the function” when setting up the heatmap components.
Error when the process SVG is not found
The heatmap component relies on the UI Extension, hence there are different possible causes for this problem:
The container does not contain the process SVG. In this case, make sure to open the process diagram in the Business Central project explorer before deploying the container. The SVG can also be inserted directly into the kjar, see this;
The Server Template, Process ID, or Container ID is wrong. Review the component settings and try again;
Kie Server is not accessible. Check if you can manually access the SVG in Kie Server using a web browser.
Another problem can happen when doing the lookup in Kie Server. In these cases, you will see an “Unexpected Error” message; check the Kie Server logs as well to see what went wrong.
Conclusion
In this post, we introduced the heatmap components! We hope you will find other use cases for them. If you have any questions or issues, feel free to contact the KIE community on Zulip chat.
“There are easy ways to bring back summer in the snowstorm” (André Aciman)
Testcontainers is a Java library which allows you to interact seamlessly with any application that can be dockerized, making your integration testing much easier (and fun).
Let’s see a practical integration sample with Kie Server, the lightweight process and decision engine used within projects like jBPM and Drools.
In this article, we are going to demonstrate how Kie Server tests can take advantage of Testcontainers when these tests involve other applications, like Keycloak (an open-source identity provider which secures resources with minimum fuss). As a result, the arrange phase of the tests will be part of our fixture classes (as we can manage the entire container lifecycle on demand, from test code) and can therefore be easily automated in a CI/CD pipeline.
The code sample used during this post can be found here.
Test elements
Our sample test comprises these elements:
KeycloakContainer class, a customization of a GenericContainer for the Keycloak image, allowing us to start, stop and interact with Keycloak from the test environment.
KeycloakFixture class, where the resources needed for the tests (application client, users and their roles) are created in the Keycloak container.
Process Definition, based on BPMN, with a process variable that is tagged as restricted, i.e., it can only be updated by authorized users.
KeycloakKieServerTest class, which starts a Spring application context and contains all the tests to verify the target feature (in this case, tagged process variables with authorization).
KieServerApplication class, a Spring Boot application that uses Keycloak and Spring Security to secure access to Kie Server resources.
KeycloakIdentityProvider class, an adapter that implements the IdentityProvider interface, providing authorization methods (getRoles, hasRole and getName) from the SecurityContext.
KeycloakContainer, bringing the Keycloak image to Java
To use Testcontainers, first we need to add its dependency:
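For a Maven project that would be something along these lines (the version is just an example; use the latest available):

<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>testcontainers</artifactId>
  <version>1.15.1</version>
  <scope>test</scope>
</dependency>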
KeycloakContainer is the customized container class created by extending Testcontainers’ GenericContainer base class and passing to the parent constructor a String with the configurable Docker image we want to use (e.g., “quay.io/keycloak/keycloak:12.0.1”).
INFO: During startup, if the required image is not available in the local Docker image cache, Testcontainers will download it and store it there for quicker executions the next time.
The constructor also defines the exposed port number(s) inside the container. From the test point of view (outside the container), Keycloak will listen on a random free port, which makes it perfect for executing tests in parallel and avoiding port clashes.
With this, it is necessary to expose the Keycloak dynamic URL somehow. Take a look at the following method:
public String getAuthServerUrl() {
    return String.format("http://%s:%s%s",
            getContainerIpAddress(),
            getMappedPort(KEYCLOAK_PORT_HTTP),
            KEYCLOAK_AUTH_PATH);
}
As the tests need to ask Testcontainers for this random port, the getMappedPort method is used to return it at runtime, taking the container port as an argument to resolve it. This dynamic Keycloak URL is exposed through the getAuthServerUrl method above.
Finally, the configure method can be overridden to set up specific commands, as well as the environment variables used during the container start-up, for example, the user/password for the Keycloak admin account, which can be provided as environment variables or by means of a file.
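Putting those pieces together, a trimmed-down version of such a container class could look like this sketch (the image tag and the admin environment variables are assumptions based on the WildFly-based Keycloak image):

import org.testcontainers.containers.GenericContainer;

public class KeycloakContainer extends GenericContainer<KeycloakContainer> {

    private static final int KEYCLOAK_PORT_HTTP = 8080;
    private static final String KEYCLOAK_AUTH_PATH = "/auth";

    public KeycloakContainer() {
        // image tag is an example; make it configurable as needed
        this("quay.io/keycloak/keycloak:12.0.1");
    }

    public KeycloakContainer(String dockerImageName) {
        super(dockerImageName);
        // port exposed inside the container; Testcontainers maps it to a random free host port
        withExposedPorts(KEYCLOAK_PORT_HTTP);
    }

    @Override
    protected void configure() {
        // admin credentials for the WildFly-based Keycloak image (assumption)
        withEnv("KEYCLOAK_USER", "admin");
        withEnv("KEYCLOAK_PASSWORD", "admin");
    }

    public String getAuthServerUrl() {
        return String.format("http://%s:%s%s", getContainerIpAddress(),
                getMappedPort(KEYCLOAK_PORT_HTTP), KEYCLOAK_AUTH_PATH);
    }
}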
INFO: After running all our tests (or, in the rainy-day scenario, when the JVM crashes due to an unexpected error) a sidecar container called Ryuk (started with Testcontainers) will take care of terminating all involved containers. By the way, Ryuk was named after the anime/manga daemon from Death Note, who kills everybody whose name is written down in the notebook.
Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
TIP: In some environments (like Fedora 33), Ryuk must be started in privileged mode to work properly (if not, "permission denied" will be shown). If that is your case, add the following line to .testcontainers.properties:
ryuk.container.privileged = true
KeycloakFixture, getting everything ready before testing
To test this authorization feature, users and roles must have been defined in advance in Keycloak. This is the goal of this class, which allows us to create all these elements in an automated way.
Basically, we may use the KeycloakBuilder client to access Keycloak with the admin user:
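Something along these lines (a sketch; the admin credentials are assumed to match the ones configured on the container):

import org.keycloak.OAuth2Constants;
import org.keycloak.admin.client.Keycloak;
import org.keycloak.admin.client.KeycloakBuilder;

public class KeycloakAdminClientFactory {

    // builds the admin client the fixture uses to create the application client, users and roles
    public static Keycloak adminClient(String authServerUrl) {
        return KeycloakBuilder.builder()
                .serverUrl(authServerUrl)            // dynamic URL exposed by KeycloakContainer
                .realm("master")
                .clientId("admin-cli")
                .grantType(OAuth2Constants.PASSWORD)
                .username("admin")                   // admin credentials configured on the container (assumption)
                .password("admin")
                .build();
    }
}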
With the default master realm (or any other realm), a client named “springboot-app” will be created with AccessType set to “public” and “Direct Access Grants” enabled.
In this realm, two users will be defined:
user named john and password john1 with PM role
user named Bartlet and password 123456 with President role
The former user won’t have access to the process variable tagged as restricted, while the latter (with the President role) will, as explained in the next section.
Tagged variables in BPMN, defining the business logic
At this point, let’s dive in the actual behavior we want to test.
Suppose we have a critical process like this:
This process contains a boolean variable named “press” that is tagged as “restricted”. It can only be updated by a user with a privileged role.
In our example, only users with the President role might change this “press” variable value.
INFO: The configuration of the role used for this tagged variable is defined in the application.properties with:
kie.restricted-role = President
If an unprivileged user tries to change the “press” value, a VariableViolationException will be thrown at runtime by the process. This access guard only affects variables, so the process itself might have other, more relaxed restrictions.
These are the three tests, belonging to KeycloakKieServerTest class, under the spotlight:
testAuthorizedUserOnRestrictedVar, privileged user updates press variable
testNoRestrictedVarViolation, authenticated (but unprivileged) user can run the process without updating the press variable
testRestrictedVarViolationByUnauthorizedUser, a VariableViolationException is thrown by the engine when an unprivileged user tries to update press variable
KeycloakKieServerTest, off we go!
This KeycloakKieServerTest class is annotated with @SpringBootTest, which sets up a Spring Boot environment for testing, using a random port, with two target classes:
KieServerApplication containing the main method of the SpringBootApplication and
KeycloakIdentityProvider which implements the IdentityProvider interface and provides the authenticated user roles.
The @DynamicPropertySource annotation is very useful for adding or updating application properties in a Spring environment with dynamic values. Notice that the Spring Boot Keycloak adapter requires these three properties:
keycloak.auth-server-url
keycloak.realm
keycloak.resource (i.e., the client-id)
The keycloak.auth-server-url property is not static (remember that Testcontainers uses a different random port each time), but we can overcome this by retrieving the auth-server-url from our customized container and setting it up programmatically:
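A sketch of that registration (assuming the container is held in a static field named keycloakContainer):

import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;

// inside KeycloakKieServerTest, assuming a static KeycloakContainer field named keycloakContainer
@DynamicPropertySource
static void keycloakProperties(DynamicPropertyRegistry registry) {
    // resolved lazily, once the container has started and its random port is known
    registry.add("keycloak.auth-server-url", keycloakContainer::getAuthServerUrl);
}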
Another interesting point is to disable the tests when we detect that Docker is not installed. In JUnit 5, Testcontainers provides the annotation @Testcontainers(disabledWithoutDocker=true), but in JUnit 4, where this annotation doesn’t exist, we may implement a similar mechanism:
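One possible approach is a sketch like the following, using Testcontainers’ DockerClientFactory together with a JUnit assumption:

import org.junit.Assume;
import org.junit.BeforeClass;
import org.testcontainers.DockerClientFactory;

// inside the test class: skip the whole class when no Docker daemon is reachable
@BeforeClass
public static void dockerIsAvailable() {
    Assume.assumeTrue("Docker is not available, skipping tests",
            DockerClientFactory.instance().isDockerAvailable());
}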
This way, in our pipelines, when running a certification matrix with some configurations that do not support Docker (or do not have it installed yet), these tests won’t be executed.
Before executing the tests, users must be authenticated by means of the REST KieServicesClient in the Kie Server (which delegates to Keycloak, the configured Identity Provider).
serverUrl = "http://localhost:" + port + "/rest/server";
configuration = KieServicesFactory.newRestConfiguration(serverUrl, user, password);
kieServicesClient = KieServicesFactory.newKieServicesClient(configuration);
This authentication part is out of the scope of this article, but it’s carried out by KeycloakWebSecurityConfig class.
KeycloakVariableGuardProcessEventListener, listen before variable changes
Our custom listener is a specialization of the jBPM VariableGuardProcessEventListener, with just some additions of our own configuration and/or logic.
In this sample, we are going to use the predefined tag "restricted" (with the two-argument constructor), and the required role is injected from the application properties with the annotation @Value("${kie.restricted-role}").
It would also be possible to use the three-argument constructor and assign a custom tag (the first argument) for protecting the variables.
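A trimmed-down version of such a listener could look like this (a sketch; the Spring wiring and the exact constructor signature are assumptions):

import org.jbpm.process.instance.event.listeners.VariableGuardProcessEventListener;
import org.kie.internal.identity.IdentityProvider;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class KeycloakVariableGuardProcessEventListener extends VariableGuardProcessEventListener {

    // two-argument constructor: protects variables carrying the predefined "restricted" tag
    public KeycloakVariableGuardProcessEventListener(@Value("${kie.restricted-role}") String requiredRole,
                                                     IdentityProvider identityProvider) {
        super(requiredRole, identityProvider);
    }
}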
The VariableGuardProcessEventListener is the one in charge of overriding the beforeVariableChanged method. This method throws the VariableViolationException in case the target variable is tagged as “restricted” and the authenticated user doesn’t have the required role.
KeycloakIdentityProvider, show me the roles you’ve got in your token
Finally, the KeycloakIdentityProvider class is responsible for providing the roles to the listener. The Spring SecurityContextHolder contains the security information associated with the current thread of execution, which the application can use to retrieve the authentication token.
Keycloak uses JWT (JSON Web Token), which self-contains the granted authorities. In our case, the granted authorities were previously populated as the roles associated with the user.
Therefore, the getRoles method implementation just maps the list of granted authorities (retrieved from the token) into a list of Strings.
In the getName method, the username is taken out from the KeycloakPrincipal.
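Putting both methods together, the adapter could look roughly like this sketch (the actual class in the sample may differ in details):

import java.util.List;
import java.util.stream.Collectors;

import org.keycloak.KeycloakPrincipal;
import org.kie.internal.identity.IdentityProvider;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.stereotype.Component;

@Component
public class KeycloakIdentityProvider implements IdentityProvider {

    @Override
    public String getName() {
        // the principal of the Keycloak-authenticated user
        KeycloakPrincipal<?> principal = (KeycloakPrincipal<?>)
                SecurityContextHolder.getContext().getAuthentication().getPrincipal();
        return principal.getName();
    }

    @Override
    public List<String> getRoles() {
        // the granted authorities carried by the JWT are the roles assigned to the user
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        return authentication.getAuthorities().stream()
                .map(GrantedAuthority::getAuthority)
                .collect(Collectors.toList());
    }

    @Override
    public boolean hasRole(String role) {
        return getRoles().contains(role);
    }
}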
Conclusion: a complex process does not always need painful testing
Kie Server has some recent features (like tagged variables: not only restricted, but also readonly, required, or any other custom one) that deserve an opportunity to be taken advantage of (as well as to be tested).
Keycloak integration with Kie Server is easy, but sometimes, this kind of testing (involving several applications) may be difficult to automate into a CI/CD pipeline. Testcontainers library provides valuable help to achieve this. We can run the tests at our convenience: from our preferred IDE, from the build tool, or in a continuous integration environment.
Definitely, Testcontainers is a very useful tool for integration tests; on the other hand, its main drawback is that it requires Docker: some testing environments are not supported, and other Docker limitations carry over. Let’s follow its evolution (to see if they eventually introduce support for other container engines like Podman), but in the meantime, a big kudos to the Testcontainers team for easing container testing in the Java world.
Code related to this sample can be found here. Happy painless testing!
When I was studying for my degree, I recall a wise teacher who repeatedly told us, his beloved pupils, that the most difficult part of a document is the beginning, and I can assure you, dear reader, that he was right, because I cannot figure out a better way to start my first entry on a Red Hat blog than explaining the title I have chosen for it. The title refers to two entities that the jBPM 7.48.x release has brought together, hopefully for good: BPMN messages and Kafka.
According to the not always revered BPMN specification, a message “represents the content of a communication between two Participants”, or, as interpreted by jBPM, a message is an object (which in this context means a Java object, either a “primitive” one like Number or String, or a user-defined POJO) that is either being received by a process (participant #1) from the external world (participant #2) or, obeying the rules of symmetry, being sent from a process to the external world.
Kafka, not to be mistaken for the famous existentialist writer from Prague, is an “event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications”. In plainer language, it is the middleware that is becoming the de facto standard for asynchronous inter-process communication in the business software world. If it is still not clear to you, try to think of Kafka as a modern replacement for your old JMS or equivalent message broker pal, but please do not tell anyone I have written that, because, as Kafka’s designers proudly manifest, their fancy toy does so much more than that.
As with other messaging brokers, Kafka uses a set of channels, called topics, to organize the data being processed. This data consists of a set of persistent records, each of them composed of an optional key and a meaningful value. Most Kafka use cases consist of reading and/or writing records from a set of topics and, as you will have guessed by now, jBPM is not the exception to that rule, so the purpose of this entry is to explain how KIE server sends and receives BPMN messages to and from a Kafka broker.
With such spirit, let me briefly explain the functionality that has been implemented. When Kafka support for messages is enabled, for any KIE jar (remember, a KIE jar contains a set of BPMN processes and the related artifacts needed to make them work) that is deployed into a KIE server (because Kafka integration is a KIE server feature, not a jBPM engine one), if any of the processes being deployed contains a message definition, then, depending on the nodes where that message is used, different interactions with the Kafka broker will occur.
If the nodes using the message are Start, Intermediate Catch or Boundary events, a subscription to a Kafka topic will be attempted at deployment time. When a Kafka record containing a value with the expected format is received on that topic, the jBPM engine is notified and acts accordingly, so either a new process instance is started or an already started process is resumed (depending on whether the incumbent node is a Start or an Intermediate Catch, respectively).
However, if the nodes using the message are End or Intermediate Throw events, then a process event listener is automatically registered at deployment time, so that when a process instance, as part of its execution, reaches one of these nodes, a Kafka record containing the message object will be published to a Kafka topic.
Examples
Now that the description of the functionality is concluded, let’s illustrate how it really works with a couple of processes. In the first one, a process instance will be started by sending a message from a Kafka broker. In the second one, a message object containing a POJO will be published to a Kafka broker when the process ends.
The first example just consists of a start message event that receives the message object, a script task which prints that message object, and the end node.
The start event node, besides receiving the message named "HelloMessage", assigns the message object to a property named "x", of type com.javierito.Person. A person has a name and an age.
The script task just prints the content of "x" to the console, using Java code (the output of the toString method), to verify that the message has been correctly received.
When this process is deployed to a KIE server with the Kafka extension enabled, if we publish {"data":{"name":"Real Betis Balompie","age":113}} on the Kafka topic "HelloMessage", then Received event is Person [name=Real Betis Balompie, age=113] is printed in the KIE server console.
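For example, publishing that record with the plain Kafka Java client would look similar to this sketch (the broker address is an assumption):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HelloMessagePublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // the "data" field is mapped to the com.javierito.Person object defined in the message
        String value = "{\"data\":{\"name\":\"Real Betis Balompie\",\"age\":113}}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("HelloMessage", value));
        }
    }
}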
The second example diagram is even more straightforward than the previous one; it just contains two nodes: a start node and an end message event.
In order to fill the message object to be sent, an input assignment is defined to set the message object value from the property "person", of type com.javierito.Person.
And that’s all. When an instance of this process is executed, passing as the "person" property a Person instance whose name is Real Betis Balompie and whose age is 113, a cloud event JSON object {.... "data":{"name":"Real Betis Balompie", "age":113},"type":"com.javierito.Person" ...} is sent to the Kafka topic "personMessage".
Hopefully these two simple examples give you a basic idea of the kind of functionality that can be achieved when integrating BPMN messages with Kafka. In the next section, you can find a FAQ where certain technical details are discussed.
FAQ
How is Kafka functionality enabled? Kafka functionality is provided at the KIE server level. As an optional feature, it is disabled by default, and it is implemented using an already existing mechanism called a KIE Server extension. In order to enable it:
For EAP deployments, set system property org.kie.kafka.server.ext.disabled to false
In Spring Boot applications, add kieserver.kafka.enabled=true to application properties.
Why was Kafka functionality not included as part of the jBPM engine? Because the jBPM engine must not have dependencies on external processes. A Kafka broker, as sophisticated as it is, consists of at least one (typically more) external process, which, due to its distributed nature, relies on ZooKeeper, giving a minimum of two external processes.
How does jBPM know which Kafka topics should be used? In a nutshell, using the message name. More specifically, if no additional configuration is provided, the message name is assumed to be the topic name. In order to provide a different mapping, system properties must be used for now (an ongoing discussion regarding the possibility of providing the mapping between message and topic in the process itself is happening while I write these lines). The format of these system properties is org.kie.server.jbpm-kafka.ext.topics.<messageName>=<topicName>. So, if you want to map the message name “RealBetisBalompie” to the topic name “BestFootballClubEver”, you will need to add the following system property to KIE Server: org.kie.server.jbpm-kafka.ext.topics.RealBetisBalompie=BestFootballClubEver.
Why is a WorkItemHandlerNotFoundException thrown in my environment when the message node is executed? jBPM has been out for a while and any new functionality needs to keep backward compatibility. Before this Kafka feature was added to jBPM, when a process sent a message, a WorkItem named “Send Task” was executed. This behavior is still active, which means that in order to avoid the exception, a WorkItemHandler implementation for “Send Task” needs to be registered. The steps to register a work item handler are described here. If just the Kafka functionality is needed, this handler might be a custom one that does nothing (the implemented methods will be empty). Keeping this legacy functionality allows both JMS (through registering the proper JMS WorkItemHandler) and Kafka (through enabling the KIE extension) to naturally coexist in the same KIE server instance.
What is the expected format of a Kafka record value to be consumed by jBPM? Currently, jBPM expects a JSON object that honors the CloudEvents specification (although only the “data” field is currently used) and whose “data” field contains a JSON object that can be mapped to the Java object optionally defined in the structureRef attribute of the message definition. If no such object is defined, the Java object generated from the “data” field will be a java.util.Map. If there is any problem during the parsing procedure, the Kafka record will be ignored. In the future we plan to also support plain JSON objects (not embedded in a “data” field) and customer customization of the parsing procedure (so the value can contain any format and customers will be able to write custom code that converts its bytes into the Java object defined in structureRef).
For the last few months, here at KIE team we’ve been hard at work. Today I am proud to announce that our cloud-native business automation platform is hitting a major milestone. Today we release Kogito 1.0!
Kogito includes best-of-class support for the battle-tested engines of the KIE platform: the Drools rule language and decision platform, the jBPM workflow and process automation engine, the OptaPlanner constraint satisfaction solver; and it brings:
noSQL persistence through the Infinispan and the MongoDB addons
GraphQL as the query language for process data
microservice-based data indexing and timer management
completely revisited UIs for task and process state
CloudEvent for event handling
Code Generation
I believe there is a lot to be proud of, but I want to talk more about another thing that makes Kogito special, and that is the heavy reliance on code-generation.
we generate code ahead-of-time to avoid run-time reflection;
we automatically generate domain-specific services from user-provided knowledge assets.
Together, Kogito delivers a truly low-code platform for the design and implementation of knowledge-oriented REST services.
Ahead-of-Time Code-Generation
In Kogito, we load, parse and analyze your knowledge assets, such as rules, decisions or workflow definitions, at build time. This way, your application starts faster, it consumes less memory, and, at run time, it won’t do more than what’s necessary.
Compare this to a more traditional pipeline, where all the stages of processing of a knowledge asset would instead occur at run time:
Application Density
The Cloud, albeit allegedly being «just someone else’s computer», is a deployment environment that we have to deal with. More and more businesses are using cloud platforms to deploy and run their services, and because they are paying for the resources they use, they care more and more about them.
This is why application density is becoming increasingly more important: we want to fit more application instances in the same space, because we want to keep costs low. If your application has a huge memory footprint and high CPU requirements, it will cost you more.
While we do support Spring Boot (because, hey, you can’t really ignore such a powerhouse), we chose Quarkus as our primary runtime target, because through its extension system, it lets us truly embrace ahead-of-time code generation.
Whichever you choose, be it Spring or Quarkus, Kogito will move as much processing as possible to build time. But if you want to get the most out of it, we invite you to give Quarkus a try: through its simplified support for native image generation, it allows Kogito to truly show its potential, producing the tiniest native executables. So tiny and cute, they are the envy of a gopher.
Kogito cuts the fat, but you won’t lose flavor. And if you pick Quarkus, you’ll get live code reload for free.
Automated Generation of Services and Live Reload
Although build-time processing is a characterizing trait of Kogito, code-generation is also key to another aspect. We automatically generate a service starting from the knowledge assets that users provide.
From Knowledge to Service: a Low-Code Platform
You write rules, a DMN decision, a BPMN process or a serverless workflow: in all these cases, in order for these resources to be consumed, you need an API to be provided. In the past, you had full access to the power of our engines, through a command-based REST API for remote execution or through their Java programmatic API, when embedding them in a larger application.
While programmatic interaction will always be possible (and we are constantly improving it in Kogito to make it better, with a new API), in Kogito we aim for low-code. You drop your business assets in a folder, start the build process, and you get a working service running.
In the animation you see that a single DMN file is translated into an entire fully-functional service, complete with its OpenAPI documentation and UI.
From Knowledge to Deployed Service: Kogito Operator
Through the Kogito Operator you are also able to go from a knowledge asset to a fully-working service in a matter of one click or one command. In this animation you can see the kogito cli in action: the operator picks up the knowledge assets, builds a container and deploys it to OpenShift with just 1 command!
Fast Development Feedback
For local development, the Kogito Quarkus extension in developer mode extends Quarkus’ native live code reloading capabilities, going further than reloading plain-text source code (a feature in Quarkus core) to add support for hot reload of the graphical models supported by our modeling tools. In this animation, for instance, you can see hot reload of a DMN decision table.
In this animation, we update a field of the decision table. As a result, the next time we invoke the decision, the result is different. No rebuild process is necessary, as it is all handled seamlessly by the Kogito extension. You get the feeling of live, run-time processing, but under the hood, Quarkus and Kogito do the heavy lifting of rebuilding, reloading and evaluating the asset.
Future Work
In the future we plan to support customization of these automatically-generated services, with a feature we call scaffolding. With scaffolding you will also be able to customize the code that is being generated. You can already get a sneak peek of this preview feature by following the instructions in the manual.
Conclusions
Kogito 1.0 brings a lot of new features, we are excited for reaching this milestone and we can’t wait to see what you will build! Reach out for feedback on all our platforms!
In 0.7.2.alpha3 we started shipping a new component of the KIE tooling, what we’re calling Standalone Editors.
These Standalone Editors provide a straightforward way to use our tried-and-true DMN and BPMN Editors embedded in your own web applications.
The editors are now distributed in a self-contained library that provides an all-in-one JavaScript file for each of them, which can be controlled through a comprehensive API for setup and interaction.
Installation
In this release, you can choose from three ways to install the Standalone Editors:
readOnly (optional, defaults to false): Use false to allow content editing, and true for read-only mode, in which the Editor will not allow changes. WARNING: Currently only the DMN Editor supports read-only mode.
origin (optional, defaults to window.location.origin): If for some reason your application needs to change this parameter, you can use it.
resources (optional, defaults to []): Map of resources that will be provided for the Editor. This can be used, for instance, to provide included models for the DMN Editor or Work Item Definitions for the BPMN Editor. Each entry in the map has the resource name as its key and an object containing the content-type (text or binary) and the resource content (Promise similar to the initialContent parameter) as its value.
The returned object will contain the methods needed to manipulate the Editor:
getContent(): Promise<string>: Returns a Promise containing the Editor content.
setContent(content: string): void: Sets the content of the Editor.
getPreview(): Promise<string>: Returns a Promise containing the SVG string of the current diagram.
subscribeToContentChanges(callback: (isDirty: boolean) => void): (isDirty: boolean) => void: Sets up a callback to be called on every content change in the Editor. Returns the same callback, to be used for unsubscription.
unsubscribeToContentChanges(callback: (isDirty: boolean) => void): void: Unsubscribes the passed callback from content changes.
markAsSaved(): void: Resets the Editor state, signaling that its content is saved. This will also fire the subscribed callbacks of content changes.
undo(): void: Undo the last change in the Editor. This will also fire the callbacks subscribed for content changes.
redo(): void: Redo the last undone change in the Editor. This will also fire the callbacks subscribed for content changes.
close(): void: Closes the Editor.
getElementPosition(selector: string): Promise<Rect>: Provides an alternative for extending the standard query selector when the element lives inside a canvas or even a video component. The selector parameter must follow the format “<part>:::<element-id>”, e.g. Canvas:::MySquare or Video:::PresenterHand. Returns a Rect representing the element position.
Now let’s implement an application that provides the DMN Editor and adds a simple toolbar to the top that explores the main features of the API.
First, we start with a simple HTML page and add a script tag with the DMN Standalone Editor JavaScript library. We also add a <div> for the toolbar and a <div> for the Editor.
For the toolbar, we will add a few buttons that take advantage of the Editor’s API:
This script will open an empty, editable DMN Editor inside the div#dmn-editor-container. But we still have to implement the toolbar actions. To be able to undo and redo changes, we can add the following script:
I want to explain how to bring Case Management to Kogito using the tools that flexible processes provide.
Previously I introduced flexible processes in Kogito. See part 1 for a generic introduction and part 2 to see it in action.
I would like to highlight that I will not cover all the concepts described in the CMMN specification, just the most important and the ones that we considered a must for having a good Case Management experience. If you feel we are not covering some important use cases, don’t hesitate to join the Kogito Community and ask us directly.
This blog post is structured as follows:
Go through the most common Case Management concepts and explain how to use Kogito’s flexible processes in each case.
How to use Documents in Kogito.
Concepts
From my experience, I find Case Management open to interpretation, and what I will define here is my point of view, so it is fine not to agree with what I say 🙂
Case
A Case can be understood as an unstructured process where there isn’t necessarily a defined starting or ending point. Maybe neither of them even exists.
Cases are driven by knowledge workers, and they decide what information is missing or when the Case is ready to move forward or back to a previous stage.
So if we consider a Case a special kind of process where we don’t need a start or an end… in Kogito we can say our process is an ad-hoc process.
By checking the Ad-Hoc flag, our process will not require a start node, because we will be adding individual tasks or groups of tasks that are known as discretionary tasks.
This is a hypothetical case for a public organization’s grant funding. You can see several discretionary tasks for preparing the proposal, adding comments and archiving the proposal. They do not depend on any previous action but may depend on the state of the case, that is, the Case File.
Case File
Knowledge workers will manipulate and transform the data used during the life of the Case. This data is the Case File. As we are in a process, the Case File will just be our Process Data. Sub-processes and tasks can share the data, either totally or partially.
Case ID
The Case ID is a value that helps Knowledge workers uniquely identify the Case, a correlation key. It is usually a sequential number with a prefix. E.g. CASE-00010230
In Kogito there is the concept of a Business Key that you can set on each process instance, i.e. the Case instance, upon creation. The process will always be created with a generated UUID, but it will be possible to identify the process by the Business Key.
Milestones
The concept of a milestone is almost self-descriptive. We can define it as a single point of achievement within a case instance. No work is associated with a milestone, but it opens the possibility to work on a different set of tasks after a certain state is reached.
A Milestone will wait for a signal to occur; the event will have the same name as the Milestone.
In this example, when the Resolve Case service task is executed, the “CaseResolved” signal is emitted. This signal triggers the CaseResolved milestone.
Milestones may have “Conditions” to control their completion. Conditions are defined using Java expressions and will be re-evaluated each time a variable is updated.
These conditions allow you to synchronize multiple events or states into a single point. Imagine a case where our “Approved” milestone should be completed only when the proposal has been reviewed by the expert and the financial availability check has also succeeded. We would have a condition like the following:
Stages
Stages can be defined as a group of tasks that are executed for a common reason. In this example, there is a “Proposal preparation” stage that includes two tasks related to the preparation of a proposal. Once the proposal is ready to be reviewed, the stage will complete and the Case will move to the next stage.
In flexible processes, we consider that Stages are sub-processes embedded in the main process. The formal name for that is an ad-hoc sub-process. You don’t need to define any task within a Stage, but it will help with business visualization and grouping. Ad-hoc sub-processes can have activation and completion conditions that allow you to set the stage as completed or to re-use it when needed.
For example, if the knowledge worker responsible for the documentation review concludes that something is missing, this stage can be activated again so that the user can update or attach more documents.
Ad-Hoc Fragments
Discretionary tasks are any type of task that users or the knowledge workers can interact with at any point in time during the Case life cycle.
These tasks can be of any type and can be triggered by signals (internal or external). In flexible processes a discretionary task or an ad-hoc fragment is any node that:
Is not a start node
Is not linked with a previous node. I.e. doesn’t have an incoming connection
Is not auto-started
Case Life cycle
Cases have a life cycle: they can be started, aborted, suspended, completed or terminated.
After receiving a new request containing the initial Case File data the process will be started. If it has a start node and/or ad-hoc auto-start nodes, they will all be executed.
Then users and knowledge workers can interact with the discretionary tasks or any other active tasks in the process.
The End node sets the process as completed but not terminated. Tasks are still available in case the knowledge worker wants to execute any more actions.
The process terminates after reaching the Termination End node.
It is important to mention that once the case is terminated, the Case data will remain in the Data Index but will be removed from the engine memory. See more about the Data Index.
Documents in Kogito
Case Files very often contain documents, but nowadays most companies and individuals rely on cloud services for document management. That means the most common way to work with documents is by using a reference, or the minimal metadata needed to uniquely identify such documents in a specific cloud provider.
For example, in Microsoft OneDrive, if you wanted to identify a document you would need to provide:
userId
driveId
itemId
Whereas for Google Drive it seems enough to just keep the fileId (similar to OneDrive’s itemId).
So it seems reasonable to create a POJO that represents a reference to the document, depending on your organization’s requirements, and just provide the necessary metadata to identify the documents in the cloud provider.
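For instance, a minimal reference POJO along those lines (a sketch; the field names follow the OneDrive metadata listed above) could be:

public class DocumentReference {

    private String userId;   // owner of the drive
    private String driveId;  // drive containing the document
    private String itemId;   // the document itself

    public DocumentReference() {
        // default constructor for marshalling
    }

    public DocumentReference(String userId, String driveId, String itemId) {
        this.userId = userId;
        this.driveId = driveId;
        this.itemId = itemId;
    }

    public String getUserId() { return userId; }
    public void setUserId(String userId) { this.userId = userId; }
    public String getDriveId() { return driveId; }
    public void setDriveId(String driveId) { this.driveId = driveId; }
    public String getItemId() { return itemId; }
    public void setItemId(String itemId) { this.itemId = itemId; }
}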
With this solution, Kogito doesn’t need to handle documents directly, which is something it wasn’t meant to do. It’s up to the developers to provide the best document experience to their users and knowledge workers. Besides, it provides an additional layer of confidentiality and security, already defined at the organization level.
Examples of front-end applications:
Embed a widget to let the user pick documents directly from their OneDrive folders and attach them without even uploading them.
Let the users upload the document to an application-managed drive to just keep the document metadata.
In the previous blog post Flexible processes in Kogito – Part 1 I talked about the new components and functionalities for flexible processes introduced in Kogito 0.12.0.
In this post I will walk you through an example process putting all this in practice.
Service Desk process
The flexible-process-quarkus example is available in the kogito-examples GitHub repository. In the project README.md file you can find instructions to build and execute the process.
It describes a service desk process where customers open a ticket related to a problem or question about a specific product.
First, the ticket goes through a triage phase where it is assigned to a support team and then a random engineer is designated. If the system can't decide which team has to handle the ticket, the support engineer will be assigned manually.
At any moment, both the engineer and the customer can add comments until either party is satisfied and the case can be considered Resolved. If for any reason someone adds another comment, the case will be reopened and assigned to the other party until it is resolved again.
Finally, once resolved, the customer gets a task to submit a satisfaction questionnaire, after which the case is considered closed.
Overview
This is a diagram of the service desk process:
Create a support case
In order to start the process, an initial supportCase object has to be provided in the request. It will contain information about the product, the customer and a description of the problem itself.
{
"supportCase": {
"customer": "Paco the customer",
"description": "Kogito is not working for some reason.",
"product": {
"family": "Middleware",
"name": "Kogito"
}
}
}
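As a sketch, and reusing the serviceDesk endpoint and port that appear in the comment examples further down, starting the process could look roughly like this:
curl -D - -XPOST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '{"supportCase": {"customer": "Paco the customer", "description": "Kogito is not working for some reason.", "product": {"family": "Middleware", "name": "Kogito"}}}' http://localhost:8080/serviceDesk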
The triage
The Triage sub-process is automatically started when the process is instantiated, and the support group is decided through the following decision table, which takes the product name and family as input and returns the supportGroup.
You can see that when the product family is not in the list, the engineer has to be assigned manually.
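The actual project implements this with a DMN decision table; purely as an illustration of the logic, it behaves roughly like the sketch below. Only the Kogito/Middleware row is taken from the response shown later in this post; the fallback reflects the manual assignment just mentioned.
// Illustrative sketch of the triage logic; the real example uses a DMN decision table.
public class TriageSketch {
    public static String supportGroup(String productFamily, String productName) {
        if ("Middleware".equals(productFamily) && "Kogito".equals(productName)) {
            return "Kogito"; // matches the supportGroup value in the response below
        }
        return null; // family not in the table: the engineer is assigned manually
    }
}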
Adding comments
The Work case sub-process is also auto-started. Anyone from the customer group can add comments, and the same goes for people from the support group.
As mentioned above, an empty POST request has to be sent in order to create a task that can be used to add a comment. Think of it as the "Add comment" button that renders the comment form.
So, after sending this empty POST you will see something like this:
curl -D - -XPOST -H 'Content-Type:application/json' -H 'Accept:application/json' http://localhost:8080/serviceDesk/b3c75b24-2691-4a76-902c-c9bc29ea076c/ReceiveSupportComment
HTTP/1.1 200 OK
Content-Length: 305
Link: </b3c75b24-2691-4a76-902c-c9bc29ea076c/ReceiveSupportComment/f3b36cf9-3953-43ae-afe6-2a48fea8a79a>; rel='instance'
Content-Type: application/json
{
"id":"b3c75b24-2691-4a76-902c-c9bc29ea076c",
"supportCase":{
"product": {
"name":"Kogito",
"family":"Middleware"
},
"description":"Kogito is not working for some reason.",
"engineer":"kelly",
"customer":"Paco the customer",
"state":"WAITING_FOR_OWNER",
"comments":null,
"questionnaire":null
},
"supportGroup":"Kogito"
}
The URL present in the Link HTTP header can be used as the form action when rendering the form. Don't forget to add the user and group query parameters. Note that the data sent must be in JSON format; this example assumes the JavaScript framework will take care of that.
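For instance, completing the generated comment task could look roughly like the request below, assuming the Link path is resolved against the serviceDesk base path. The comment output name and the user/group values are assumptions, so check the task model generated for your process.
# the "comment" output name and the user/group values are assumptions for illustration
curl -D - -XPOST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '{"comment": "Have you tried restarting the service?"}' 'http://localhost:8080/serviceDesk/b3c75b24-2691-4a76-902c-c9bc29ea076c/ReceiveSupportComment/f3b36cf9-3953-43ae-afe6-2a48fea8a79a?user=kelly&group=support'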
Let’s say the customer is happy with the resolution provided by the support engineer. The customer decides to click the “Resolve Case” button.
This button sends an empty POST, which triggers the service task. The case is then set as Resolved and a "CaseResolved" event is emitted.
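As a sketch, the button could simply issue another empty POST against the corresponding node; the Resolve_Case node name below is an assumption, so use the node name from your process model.
# "Resolve_Case" is an assumed node name; check the process model for the real one
curl -D - -XPOST -H 'Content-Type:application/json' -H 'Accept:application/json' http://localhost:8080/serviceDesk/b3c75b24-2691-4a76-902c-c9bc29ea076c/Resolve_Case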
Closing the case
The event signals the milestone, which doesn't have any Condition. Then the Questionnaire task is started and assigned to the customer.
The task expects a comment and a numeric evaluation based on the customer’s satisfaction. As a result, the questionnaire is added to the support case and the case is finally closed.
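As a hedged sketch, completing the Questionnaire task might look something like this; the comment and evaluation output names, the user/group values and the task instance ID placeholder are assumptions.
# "comment" and "evaluation" are assumed output names; replace {taskInstanceId} with the ID from the Link header
curl -D - -XPOST -H 'Content-Type:application/json' -H 'Accept:application/json' -d '{"comment": "Great support, thanks!", "evaluation": 5}' 'http://localhost:8080/serviceDesk/b3c75b24-2691-4a76-902c-c9bc29ea076c/Questionnaire/{taskInstanceId}?user=Paco&group=customer'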
What are flexible processes, you might be wondering? If you are familiar with Case Management you will get it quickly. If not, I suggest you start here.
As a brief introduction: in Case Management there is the concept of a Case that is opened. Knowledge workers then perform tasks or update the Case-related information, known as the Case File. At a certain point the Case can be considered closed. Cases might eventually be reopened, depending on the situation.
Case Management is a way to provide traditional processes with more flexibility because the execution path is not rigid. This is an advantage when the process can’t be fully automated or the scope is hard to measure. A typical example is related to healthcare. A patient has a file that is accessible to the doctor and any health professionals assigned to the patient. They take actions, make different decisions and assessments until the patient is diagnosed and treated.
Instead of implementing Case Management as defined in the CMMN specification we decided to use BPMN extensions to provide a similar experience. This is what we call Flexible processes in order to avoid false expectations and misunderstandings.
What’s new?
Ad-Hoc process
Starting with the processes themselves, the first thing to do is to mark your process as an ad-hoc process in the modeler. For that, open the process properties and check the Ad-Hoc flag. This flag tells the modeler that you are defining a flexible process that might or might not have a start node.
Ad-Hoc sub-processes
An Ad-Hoc sub-process is a way to wrap related tasks together. Consider it the equivalent of Stages in Case Management.
This type of sub-process has some interesting features such as Activation and Completion Conditions. Activation Conditions control when the tasks within the node can start their execution, while Completion Conditions decide when it is time to move on to the next node. Even after all the inner nodes have executed, the sub-process node will wait for its completion condition to be satisfied before completing and continuing to the next node, if any.
These conditions are defined as Java lambdas and are re-evaluated each time a process variable is updated.
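Conceptually, such a condition is just a predicate over the process variables. The following is a self-contained sketch; the variable names are hypothetical, and in practice the expression lives in the BPMN model rather than in application code.
import java.util.Map;
import java.util.function.Predicate;

// Conceptual sketch only: the real condition is a Java expression stored in the BPMN model;
// here it is shown as a plain predicate over the process variables.
public class CompletionConditionSketch {
    public static void main(String[] args) {
        // Hypothetical variable names: "reviewed" and "approved"
        Predicate<Map<String, Object>> completed =
                vars -> Boolean.TRUE.equals(vars.get("reviewed"))
                        && Boolean.TRUE.equals(vars.get("approved"));

        // Re-evaluated whenever a process variable changes; satisfied once both are true
        System.out.println(completed.test(Map.of("reviewed", true, "approved", false))); // false
        System.out.println(completed.test(Map.of("reviewed", true, "approved", true)));  // true
    }
}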
Milestones
This is a concept taken directly from Case Management. Although the same functionality can be achieved with other BPMN components, we considered that some users might find Milestones simpler to understand and apply.
A Milestone is a node that can be automatically started by an event with the same name as the node.
Additionally, Milestones can have an optional "Condition" input assignment: a Java expression that decides when the node is ready to be completed so the process can move forward to the next one.
Imagine the example about the patient. Possible milestones could be “Patient diagnosed” or “Patient recovered”. Each milestone implies that a specific state has been achieved and it’s time to move on to the next one.
AdHoc Fragments
We call AdHoc fragments those nodes that don't have an incoming connection from any other node and are not start nodes. AdHoc fragments may or may not start automatically, depending on the AdHoc Autostart flag.
If the node is automatically started, it will be executed when the process is created. However, when this flag is not set, the only way to trigger the node execution is with a signal. Processes receive signals externally through the REST API, internally by using events, or programmatically. See more about external signals in the next section.
REST API
For non-flexible processes, the main POST request will trigger the Start node, whereas in a flexible process it will just create the process instance. All nodes with the AdHoc Autostart flag will start with the process.
For AdHoc fragments, an empty POST can be sent to start the execution of the node. In the specific case of Human Tasks, where a new task is created, users or API consumers will most likely be interested in the generated task URL. This is provided through the Link HTTP header present in the response, which contains a relative path with the process instance ID, the task name and the task instance ID.
curl -D - -XPOST -H 'Content-Type:application/json' -H 'Accept:application/json' http://localhost:8080/serviceDesk/b3c75b24-2691-4a76-902c-c9bc29ea076c/ReceiveSupportComment
HTTP/1.1 200 OK
Content-Length: 305
Link: </b3c75b24-2691-4a76-902c-c9bc29ea076c/ReceiveSupportComment/f3b36cf9-3953-43ae-afe6-2a48fea8a79a>; rel='instance'
Content-Type: application/json
{
"id":"b3c75b24-2691-4a76-902c-c9bc29ea076c",
"supportCase":{
"product": {
"name":"Kogito",
"family":"Middleware"
},
"description":"Kogito is not working for some reason.",
"engineer":"kelly",
"customer":"Paco the customer",
"state":"WAITING_FOR_OWNER",
"comments":null,
"questionnaire":null
},
"supportGroup":"Kogito"
}