When planning usage of processes that are complex, long-running, and with possible points of failures like custom work item handlers, asynchronous tasks, timers, signals, and service tasks, it is crucial to understand how jBPM deals with transactions.
This knowledge might save you some troubleshooting hours if you have persistent processes. “Why?”, you might ask: if the process is not properly designed and an unhandled error occurs, you may end up with tasks being triggered more times than you planned.
Let’s drill down.
During the following process execution, jBPM will execute all these tasks in a single transaction:
Therefore, if the process reaches Service B and it throws an exception, Service A’s execution won’t be rolled back (its side effects already happened), but the process instance will not be persisted in the database; it is as if it was never executed. This happens because there is no wait state in this process.
To the engine, a wait state is any point during the process execution at which it might need to wait for some external intervention. When this occurs, the engine persists the process state in the database. These are the possible wait states of a process:
Human Tasks
Catch events (signals, timers)
Async Tasks (marked with Async flag)
So, if we consider the following process, what will happen if Service B throws an exception?
If Service B fails with an unrecoverable error and throws an exception, Service A will be triggered three times. (Worrying, right?)
In this case, the process state was persisted when the human task was completed. So, when Service B throws an exception, the transaction rolls back and execution returns to the Human Task. The gateway is evaluated, Service A is triggered a second time, and then Service B is triggered again. If B throws an exception, execution returns to the Human Task once more. This happens as many times as configured in the engine retry count, which by default is three.
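The rollback-and-retry cycle can be sketched with plain Java. This is an illustrative simulation, not the jBPM API; the method names and the hard-coded retry count are assumptions for the sketch:

```java
// Illustrative simulation (not the jBPM API) of a transaction rolling back to
// the last persisted wait state: every task after that point runs again on retry.
public class RetrySimulation {

    static int serviceACalls = 0;

    static void serviceA() {
        serviceACalls++; // side effect repeats on every attempt
    }

    static void serviceB() {
        throw new RuntimeException("unrecoverable error in Service B");
    }

    // Runs the "Human Task -> Service A -> Service B" segment up to `retries`
    // times, mimicking the engine restarting from the persisted Human Task
    // wait state on each failure.
    static boolean runWithRetries(int retries) {
        for (int attempt = 1; attempt <= retries; attempt++) {
            try {
                serviceA();
                serviceB();
                return true; // never reached in this sketch
            } catch (RuntimeException e) {
                // transaction rolls back; execution returns to the Human Task
            }
        }
        return false;
    }

    public static void main(String[] args) {
        runWithRetries(3); // jBPM's default retry count
        System.out.println("Service A was triggered " + serviceACalls + " times");
        // prints: Service A was triggered 3 times
    }
}
```

Introducing a wait state between Service A and Service B (for example, marking a task async) is what prevents this repeated execution of Service A.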
In cases like the one represented below, simply configuring Rest Task 2 as Async makes the engine persist the process state; in case of an error on Rest Task 3, Rest Tasks 1 and 2 will not be executed again.
To avoid situations like this, keep the transaction boundaries clear in your mind during process authoring. Choose wisely which tasks should be async and which are the wait states, and decide whether you need to create any additional wait state or handle errors with compensation events to avoid unexpected business outcomes.
Summary
Understanding how the engine works helps business automation specialists and developers deliver fine-tuned projects that perform better on the jBPM engine. The business automation specialist should be able to absorb the business requirements and reflect them in the project, not only by authoring the assets, but by improving the execution of these assets, adjusting each configuration to perform best in each scenario, and understanding the impact of the transaction boundaries in each delivered process.
When working with business processes, persistent process data is the common scenario. In such situations, it is common for users to use a different database to store process data, apart from the database where the domain information is stored.
As an example, storing critical customer information apart from the engine database is an expected data architecture. In this way, the user can keep the data consistent and isolated. But what if these objects, stored in a different database, need to be used in one of the business automation projects?
[INFO] Many of the concepts applied here are valid for any application that involves distributed transactions (XA transactions), that is, any application that might have a transaction spanning two or more different databases. An overview of how applications deployed in Java EE application servers communicate with a database can be found in this blog post: Datasources, what, why, how?
Let’s understand how to configure and access different databases within the same project.
Pluggable Variable Persistence (PVP)
Here are the steps required to make a business automation process store process variables (domain information) in a different database:
Configure the app server datasource pointing to the database where you want to store the custom data;
Make sure the custom POJO (the object to be persisted) is a valid JPA entity:
The Data Object must implement the Serializable interface;
It must be a JPA entity;
It must have a unique id, a primary key;
Configure business automation project
Configure a JPA Marshalling Strategy;
Configure the persistence unit (pointing to the datasource mentioned in the first step);
Once this is done, every time this object gets created or updated during the process execution, it will be properly persisted in the database. Let’s check this with a hands-on exercise.
Persisting custom objects using PVP
Bootstrap a database
To run this example, the easiest way to get a database up and running is to start it with Docker. Make sure you have Docker running in your environment. Let’s create a directory to store the database information.
This command will download the image (if you don’t have it locally) and start a PostgreSQL container, mapping the internal port 5432 opened by PostgreSQL so it can also be accessed externally on port 6543.
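The startup command was probably along these lines; the container name, image tag, and port mapping are inferred from the docker ps output shown below, while the password and host data directory are assumptions to adjust to your environment:

```shell
# Start a PostgreSQL 9.4 container, mapping internal port 5432 to external 6543.
# The data directory created earlier is mounted so data survives container restarts.
docker run --name jbpm-postgres-container \
  -e POSTGRES_PASSWORD=postgres \
  -p 6543:5432 \
  -v ~/postgres-data:/var/lib/postgresql/data \
  -d postgres:9.4
```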
Let’s enter this PostgreSQL server to create a new database for our jBPM tables. Run the command below to get the container ID of this postgres container:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
620350186bbe postgres:9.4 "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:6543->5432/tcp jbpm-postgres-container
Use the whole ID, or simply its first two characters, to enter the container. Then, we will start the psql client:
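The two commands hinted at above might look like this; the ID prefix matches the docker ps output above, and the postgres superuser is an assumption based on the default image:

```shell
# Enter the running container (the "62" prefix matches the CONTAINER ID above)
docker exec -it 62 bash

# Inside the container, start the psql client as the default superuser
psql -U postgres
```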
Now, let’s create a database to be used by jBPM, with a new user and password. This user needs permission on this new database. The tables will be created automatically by jBPM on startup.
[TIP] You might want to create additional indexes on some tables and columns in your production environment
CREATE DATABASE jbpmdb;
CREATE USER jbpmdb WITH ENCRYPTED PASSWORD 'jbpmdb';
GRANT ALL PRIVILEGES ON DATABASE jbpmdb TO jbpmdb;
And create another database, to store the specific domain data.
CREATE DATABASE labpvp;
CREATE USER labuser WITH ENCRYPTED PASSWORD 'labuser';
GRANT ALL PRIVILEGES ON DATABASE labpvp TO labuser;
When done, type \q and press Enter to quit the psql client, and then exit to leave the container.
Configuring the Application Server
Now, let’s configure WildFly: we will create two datasources, one connecting to jbpmdb and the other to labpvp. Once the datasources are configured, applications can start connecting to these databases.
In this example, we expect that you have downloaded jBPM and have it available in your ~/projects directory. If you want to use the commands here, you can optionally create a symbolic link to that folder with the command below:
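A symbolic link can be created with something like the following; the distribution folder name is an assumption, so adjust it to the version you downloaded:

```shell
ln -s ~/projects/jbpm-server-7.x-dist ~/projects/jbpm
```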
Adding a JDBC driver (responsible for teaching WildFly how to talk to the specific database);
Adding a datasource with the credentials, connection URL, and driver information.
The simplest way to add a JDBC driver to WildFly is by deploying it. Download postgresql-42.2.12.jar and place it into ~/projects/jbpm/standalone/deployments. WildFly will immediately tell you the driver is deployed:
16:11:07,515 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-4) WFLYJCA0005: Deploying non-JDBC-compliant driver class org.postgresql.Driver (version 42.2)
16:11:07,540 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-5) WFLYJCA0018: Started Driver service with driver-name = postgresql-42.2.12.jar
16:13:33,885 INFO [org.jboss.as.server] (ServerService Thread Pool -- 45) WFLYSRV0010: Deployed "postgresql-42.2.12.jar" (runtime-name : "postgresql-42.2.12.jar")
Let’s use the JBoss CLI script, which is less error-prone, to do the configuration. Start your WildFly in one terminal tab, and open another terminal tab so you can connect to it using CLI.
$ ~/projects/jbpm/bin/jboss-cli.sh -c
Add a new datasource, informing the connection data, and your deployed driver name:
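A non-XA datasource pointing at the engine database could be added with a CLI command along these lines; the names mirror the JNDI name and credentials used in this walkthrough, and for a true XA datasource the xa-data-source add command is used instead:

```shell
data-source add --name=psqljBPMXADS \
  --jndi-name=java:/jboss/datasources/psqljBPMXADS \
  --driver-name=postgresql-42.2.12.jar \
  --connection-url=jdbc:postgresql://localhost:6543/jbpmdb \
  --user-name=jbpmdb --password=jbpmdb
```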
Finally, we need to tell jBPM to stop using the H2 database and start using PostgreSQL by connecting through the datasource with the JNDI name java:/jboss/datasources/psqljBPMXADS.
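One common way to do this, assuming the WildFly-based jBPM distribution, is through KIE Server system properties, for example in bin/standalone.conf; the dialect value is the standard Hibernate dialect name for PostgreSQL:

```shell
JAVA_OPTS="$JAVA_OPTS -Dorg.kie.server.persistence.ds=java:/jboss/datasources/psqljBPMXADS"
JAVA_OPTS="$JAVA_OPTS -Dorg.kie.server.persistence.dialect=org.hibernate.dialect.PostgreSQLDialect"
```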
jBPM will bootstrap, creating its tables inside the PostgreSQL instance in your Docker container. You can confirm this by getting into your PostgreSQL container and listing the tables:
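Listing the tables can be done with psql’s \dt meta-command, for instance:

```shell
docker exec -it jbpm-postgres-container psql -U jbpmdb -d jbpmdb -c '\dt'
```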
Scenario: you joined a team that has an ongoing project, hello-jbpm. In this project, the final user submits a question, complaint, or suggestion and gets an automatic or a manual answer. You noticed that the data architecture can be improved by storing the reported issue and person details in a separate database, apart from the one used by the engine.
Import project 02-customer-service-jbpm if you don’t have it in your business central yet. You should see it in your space:
In order to persist a POJO into a database, we will make use of JPA and convert it into a JPA entity.
Open the hello-jbpm project, and notice the two data objects, named Issue and Person.
You can use the filter to see only specific types of assets. In this example the list is filtered to show only Model assets.
Let’s start by configuring Person. By clicking on the “Source” tab you can view its code.
[TIP] In order to be persisted, classes should implement java.io.Serializable interface. By default, Business Central creates the Data Objects implementing Serializable interface.
Transform it into an entity by adding @javax.persistence.Entity annotation;
Inform the name of the table where you want to store this Data Object with the annotation @javax.persistence.Table(name = "Person");
This class does not have a primary key yet. Add an id of type Long to Person.java. Don’t forget that the attribute should have get and set methods and be annotated with @javax.persistence.Id.
Person.java should look like this:
@javax.persistence.Entity
@javax.persistence.Table(name = "Person")
public class Person implements java.io.Serializable {

    @javax.persistence.Id
    @javax.persistence.GeneratedValue(strategy = javax.persistence.GenerationType.SEQUENCE, generator = "ID_GENERATOR")
    @javax.persistence.SequenceGenerator(name = "ID_GENERATOR", sequenceName = "PERSON_ID_SEQUENCE")
    private java.lang.Long id;

    // remaining fields (name, birthDate) plus getters and setters omitted
}
Click on the save button.
Configuring business application persistence unit with Business Central
The persistence unit is the configuration that tells a Java EE application which datasource it must use to connect to the database. To configure a business application to persist process variables in a custom database, inform the datasource details in the persistence unit and create a marshaller configuration. This marshaller configuration allows the engine to convert the variable data between the process and the database.
When the JPA marshaller is configured, every JPA entity used as a process variable is automatically persisted in the database pointed to by the datasource the application uses.
1. To configure the persistence unit details, access the project Settings, and select the Persistence option.
2. Considering we are using PostgreSQL and the datasource configured in the last step, use the following values in the form:
3. Set the show_sql attribute to true to enable logging, allowing you to check in the server logs whether the entity is being manipulated as expected. Save the configuration.
4. Make sure you scroll down and add the persistable class, Person:
5. Still in the project Settings, configure the JPA marshaller: access the Deployments option, and select Marshalling Strategies.
6. Click on “Add Marshalling Strategy“, insert the following value, and hit save:
new org.drools.persistence.jpa.marshaller.JPAPlaceholderResolverStrategy("customer-service-jbpm-pu", classLoader)
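For reference, the persistence unit form filled in earlier corresponds roughly to a persistence-unit definition like this; the datasource JNDI name and the Person package are assumptions for illustration:

```xml
<persistence-unit name="customer-service-jbpm-pu" transaction-type="JTA">
  <provider>org.hibernate.jpa.HibernatePersistenceProvider</provider>
  <jta-data-source>java:jboss/datasources/labpvpDS</jta-data-source>
  <class>com.myspace.customer_service_jbpm.Person</class>
  <properties>
    <property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
    <property name="hibernate.show_sql" value="true"/>
  </properties>
</persistence-unit>
```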
Now you have JPA Classes ready to be used as process variables, and to be automatically persisted in a different database!
You can check the result of this work by clicking on the Deploy button. If everything is properly set, you should see something similar to this in the server log:
17:41:12,801 INFO [stdout] (default task-20) Hibernate: create table Person (id int8 not null, birthDate timestamp, name varchar(255), primary key (id))
Start a new process instance, and validate if the object was persisted in the database table you specified.
Kie Server can be configured to deal differently with the requests it receives and the objects it stores in memory or in the database. Properly configuring how the engine deals with the objects in memory ensures a project with lower resource consumption and avoids unexpected behaviors.
Runtime Strategy configuration directly affects how the engine deals with the lifecycle of Kie Sessions and the objects loaded in memory.
You have to choose among four options for each new project deployment you create:
Singleton Runtime Strategy
Per Process Runtime Strategy
Per Request Runtime Strategy
Per Case Runtime Strategy
Choosing a strategy
Understand the key concepts of each strategy to guide your decision of when to use each of them:
Singleton Runtime Strategy: If you create and deploy a project, this runtime strategy configuration is used. It’s the default configuration. This strategy uses a single Kie Session. This makes things easier for beginners since every inserted object is available in the same Kie Session. Also, jBPM persists the Kie Session, and it is reused if the server restarts.
Rule execution within processes can be impacted. Consider this: process instance A inserts the person John to be evaluated by business rules, and process instance B inserts Maria. Later, when process instance C fires rules for its own person, John and Maria will still be in memory and will be improperly evaluated by process C’s rules!
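The side effect above can be illustrated with a stdlib-only sketch; the list stands in for the single shared Kie Session’s working memory, and this is not the Drools API:

```java
import java.util.ArrayList;
import java.util.List;

// Stdlib sketch (not the Drools API) of the side effect of sharing one session:
// facts inserted by unrelated process instances stay in memory together.
public class SingletonSessionSketch {

    // One shared "Kie Session" working memory for every process instance.
    static final List<String> sharedFacts = new ArrayList<>();

    static void insertFact(String person) {
        sharedFacts.add(person);
    }

    // What process instance C "sees" when it fires its rules.
    static List<String> factsVisibleToNextInstance() {
        return new ArrayList<>(sharedFacts);
    }

    public static void main(String[] args) {
        insertFact("John");  // inserted by process instance A
        insertFact("Maria"); // inserted by process instance B

        // Process instance C fires rules for its own person, but both leftover
        // facts are still in working memory and get evaluated as well.
        System.out.println(factsVisibleToNextInstance());
        // prints: [John, Maria]
    }
}
```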
When using this configuration, process execution happens within a synchronized runtime engine: it is thread-safe. Consequently, this characteristic can affect performance. If you expect a high workload, or if the project uses timers or signals in its process definitions, you may want to consider a different strategy for your production environment.
Per Request Strategy: defines a stateless behavior. The engine creates one Kie Session per request and destroys it at the end. The Kie Session is not persisted. This is a good choice for high-workload environments.
The following situation can occur: Consider two requests executed simultaneously to interact with the same process instance. One of the two requests might lock a database row to update it, and the other request might try to update it as well. This will lead to an OptimisticLockException.
This strategy might make the implementation more challenging if your project uses timer events or if the process involves business rule tasks.
Per Process Instance Strategy: recommended when your process uses timers and business rule tasks. The Kie Session is created when the process instance starts and destroyed when the process instance ends. This is usually adequate for most use cases.
Keep in mind: creating Kie Bases is an expensive operation. Creating new Kie Sessions is cheap.
The engine stores the Kie Session in the database and always uses the same Kie Session for the same process instance.
Per Case: this strategy should be used for Case Management projects. If you create a new “Case Project”, it is selected by default. The Kie Session lasts while the case is still open (active).
Now, let’s check the possible places to configure runtime strategies, and when each one should be used.
Choosing where to configure the strategy
Container Level: Affects the deployed kjar. Configuration can be made in kie containers using Business Central via the Execution Servers page.
Project level: the configuration affects only this project. It can be done via Business Central or directly in the XML file src/main/resources/META-INF/kie-deployment-descriptor.xml, where the runtime strategy is defined (SINGLETON by default).
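A minimal sketch of the relevant part of that descriptor, with the other elements elided:

```xml
<deployment-descriptor xmlns="http://www.jboss.org/jbpm">
  <runtime-strategy>SINGLETON</runtime-strategy>
  <!-- persistence, marshalling strategies, and other settings elided -->
</deployment-descriptor>
```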
Server level: No UI Available. A new system property needs to be configured pointing to the new file.
This change will impact all projects on this server. A new deployment-descriptor.xml needs to be provided with all the tags and components (not only the ones you want to customize). The system property is org.kie.deployment.desc.location and its value must point to a valid, accessible file: “file:/path/to/file/deployment-descriptor.xml“
Finally, considering that we can configure the runtime strategy in three different places, this is how the precedence works by default: container configuration overrides KJAR and server configs, and KJAR configuration overrides server configs.
Previous posts of the jBPM Getting Started Series should provide a good base for understanding this chapter. Basic knowledge about application transactions will also help.
The engine can be tuned so that business automation projects achieve better performance. Mainly when business rules validation is involved, there are important decisions to be taken about the process execution runtime.
And this is not about how to tune your JVM or operational system. This is about using the engine in the best possible way for each project you deliver.
To create high-performance business automation projects, there are some topics the business automation specialist needs to worry about:
Is your project lightweight and easily scalable?
What is the best way, in each scenario, for the engine to deal with objects in-memory?
Are you properly storing the domain data – do you need to store data in a different data store?
What will happen in cases of failure – make sure no service is called twice unexpectedly!
How can you extend the engine API to better attend to your domain needs?
Do you need special auditing or some type of listener?
Let’s understand a little bit more about how you can improve and adapt your project and the engine in the blog posts of this section:
During the development phase, developers are expected to deal with and treat unexpected behaviors, both predictable and unpredictable errors that might happen during the execution of code. Consider the following situation:
An online travel company named MaTrip.com sells a whole trip experience with a discount for buying a single package: flight + hotel. But each of these services is independent, as they are provided by external specialized companies. Because of this, MaTrip orders need to interact with the providers via service integration.
The customer has bought a trip package and the flight has already been successfully booked. What should happen if the hotel application services go offline for two days? Should the flight be canceled? Should it be automatically reassigned? Should the user be contacted in order to schedule a new trip?
There are several business outcomes that can solve this scenario; all it takes is for the developer to properly handle these incidents. There are cases where only the main business scenarios are described by the business team, and the development team does not have enough information to implement the exception handling. In these cases, the Business Analyst of the project should be involved in order to define the outcomes of these scenarios.
When handling errors like this in Java or JavaScript code, for example, the developer directly implements exception handling with try and catch. At the endpoint layer, developers typically use HTTP codes to transmit a proper message to the consumer. But how do you deal with exceptions and errors when working with a business flow?
Treating exceptions the right way
Take a look at the following situation, considering it is at an initial development phase. The “happy path” for this trip scheduling flow is:
The user selects a trip (User Task);
An embedded subprocess starts to handle pre-reservation;
Both hotel and flight tasks are parallelized for quicker processing (Divergent Parallel Gateway);
Flight and hotel availabilities are checked (Rest Task);
Flight and hotel are pre-reserved (Rest Task);
The process only ends when the steps are completed for both flight and hotel (Convergent Parallel Gateway);
Finally, the user confirms the trip (User Task).
There are two gaps in this process that can lead to aborted or failed instances:
First uncovered scenario: the selected option (flight or hotel) is not available (business exception);
Second uncovered scenario: one or more REST services are unavailable (runtime exception);
Treating Business Exceptions
Business Exceptions are errors that are not technical. They are not thrown by unexpected errors in the code itself. See some examples of business exceptions:
Sell a product that does not exist in the inventory;
Reserve a seat in a full movie theater;
Register a monthly payment twice in the same month;
These exceptions are examples of behaviors directly related to each domain. The recommended way to handle business exceptions is to catch the error using error events and then trigger the follow-up actions that handle the error according to the business logic. By using this approach, the whole process, including the treatment of the errors, is clear to all the personas involved in the project. This facilitates validation by the business users, further maintenance, and enhancements.
Look at this process of a store that opted to follow digital transformation. The store allows its customers to select products in a rich multi-channel online store; once payment is confirmed, a personal shopper manually picks the items; finally, the delivery team takes the order to the customer’s address:
Both the Charge from credit card service and Reserve products from stock tasks are Rest Tasks. Eventually, the store can run out of available products in stock when the Reserve products from stock task is triggered. Does this look like a business exception to you?
Reserve product from stock when the number of available products is zero.
We just identified a business exception: NoAvailableProductException. Let’s consider that this Java REST Service treats this error like this:
if (!product.isAvailable()){
log.error("Product is not available, throwing exception.");
throw new NoAvailableProductException(Response.Status.CONFLICT);
}
In this example, you can consider that NoAvailableProductException extends WebApplicationException.
Now, the author of the process design knows which error the service will return in case of a NoAvailableProductException: the HTTP 409 code.
javax.ws.rs.core.Response.Status.CONFLICT relates to HTTP 409. As per RFC specification, this code should be used in situations where the user might be able to try the same request and possibly get a different response. The server should include information about the error in the payload for the consumers. See more details in: https://tools.ietf.org/html/rfc7231#section-6.5.8
The business automation specialist also receives the updates from the business team, and the business scope of the new exception flow is defined: if the product is not available, the personal shopper should call the consumer and either switch the missing item for a similar one or remove it from the order. The order price will change, and the new value should be charged.
Scenario 1: The BA specialist increments the process to deal with the business exception thrown in a specific task: a boundary intermediate catch event is added to the Rest Task, and the error code is configured in the error event’s properties.
In this way, the business analyst can capture business exceptions thrown in specific tasks and provide proper handling for each scenario. But when there are too many possible errors, this approach can make the process too verbose and might affect the clarity of the flow.
Considering that the same business exception can be thrown by more than one task, the author can choose to group the tasks in a subprocess and catch the exception in the parent process definition. See the following scenario.
Scenario 2: Tasks inside a subprocess throw an error end event, which will be caught and handled by the parent process.
Another possible approach is to store the output of the processing in a process variable instead of throwing an exception. Based on the variable value, a gateway can lead to an end error event which will be handled by an event subprocess.
Scenario 3: The process throws an error end event, which will be handled by an event subprocess with a starting error event.
The author of the process should choose the option that best suits the business needs, minding the variable scopes as well as the understandability and maintainability of the process. In scenario 2, for example, the variables contained inside the subprocess will not be available during the handling of the exception in the parent process.
Treating business exceptions inside a business process is considered an advanced process modeling technique and is crucial for the proper implementation of successful projects. Business exceptions also matter for the organization’s improvement, and modeling them can lead to the creation of business monitoring dashboards.
Treating Technical Exceptions
Technical exceptions are raised within the code implementation itself and are not related to the domain flow. They can happen in script tasks, in custom code implemented in the “On Entry” and “On Exit” properties of tasks, and in custom Work Item Handlers. See some examples of technical exceptions:
Can’t unmarshall an object into a specific class;
Try to execute an operation in a null object;
Try to cast an object which cannot be cast;
Failures in integrations with external components (which should be handled by external services, not by the process design itself).
These kinds of errors can and should be avoided by limiting the usage of custom code to strictly necessary scenarios and by increasing the usage of the provided native features. Exceptions raised in script tasks cannot be caught and handled by the error events demonstrated in the “Business Exceptions” examples.
By default, the flow of tasks is executed in a synchronous way: all tasks are treated one after the other, by a single thread. This being said, if a process contains, for example, four service calls, where each call lasts around 30 seconds, this process execution will run (and allocate JVM, CPU, and other resources) for two long minutes. And worse, the caller of this instance has to wait two minutes to get a response. In scenarios using default configurations, this situation results in a timeout exception.
“But my process is quite simple and it should be faster. The problem is the legacy services we have to interact with. How can we improve this process design to achieve better performance, execution and resource consumption?“
The answer is: use asynchronous capabilities when you have long-running tasks or when you want to define transaction boundaries (transaction boundaries will be explained in upcoming posts). In this way, you can delegate the processing of this work unit to a different thread while the process goes on with the execution.
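The idea of delegating a work unit to another thread can be illustrated with plain Java concurrency primitives; this is a stand-in for the engine’s executor, not the jBPM API, and the short sleep stands in for a slow legacy service call:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Stdlib-only illustration (not the jBPM API) of delegating a long-running
// work unit to another thread so the caller is not blocked.
public class AsyncSketch {

    static String callSlowServiceAsync(ExecutorService executor) {
        // "Async task": submitted to a worker thread instead of running inline.
        Future<String> slowCall = executor.submit(() -> {
            Thread.sleep(100); // stands in for a 30-second legacy service call
            return "service result";
        });

        // The caller thread is free to continue with other work here...
        System.out.println("caller keeps going while the service runs");

        // ...and collects the result only when it is actually needed.
        try {
            return slowCall.get();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        System.out.println(callSlowServiceAsync(executor));
        executor.shutdown();
    }
}
```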
Here are some possible ways to use asynchronous execution: async tasks, jobs, and timer events. Timers and async tasks can be included in a process diagram. Jobs, differently, are scheduled (registered) within Kie Server to be executed, recurrently or not, based on the determined agenda.
The tasks (work items) provided by jBPM have a configurable “Async” property. When a task is marked async:
The task will create a wait-state on the process;
The engine does not wait until this task finishes to trigger the next task;
This task execution will be handled in a different thread;
This task will be automatically retried by the Executor in case of failure (read the Jobs section for more details).
Let’s understand the Job feature available in jBPM which also allows asynchronous execution.
A job, simply put, is an independent code unit that is called based on a schedule and is executed asynchronously, in the background. jBPM has a generic environment for the execution of these commands, where the Job Executor is responsible for triggering the timer events and async tasks and for executing scheduled jobs.
The Job Executor is an out-of-the-box jBPM component responsible for resuming process executions asynchronously. It can be configured to attend to the custom needs of each environment. Possible configurations of the Job Executor and details about native jobs can be found in the official documentation, and additional information is available at https://karinavarela.me/2019/06/07/jbpm7-quicktips-jobs .
The Job Executor can properly manage jobs in a clustered environment and guarantee jobs are triggered only once, even if Kie Server runs in a multiple servers architecture. Jobs are persisted and maintained in the database.
On executor start, all jobs are loaded, regardless of whether there are one or more instances in the environment. This makes sure all jobs will be executed, even in cases where their fire time has already passed or they were scheduled by another executor instance.
Jobs are always stored in the database, no matter which trigger mechanism is used to execute the job (JMS or thread pool). The RequestInfo table stores the jobs that need to be executed. In the expected cycle, jobs are queued, run, and completed. Additionally, a job can go through the statuses represented in this diagram.
By default, a thread pool with a single thread is used, and executions are retried three times in case of failure. When the executor starts, the following log is displayed in the application server with the current configuration:
23:56:58,050 INFO [org.jbpm.executor.impl.ExecutorImpl] (EJB default - 1) Starting jBPM Executor Component ...
- Thread Pool Size: 1
- Retries per Request: 3
- Load from storage interval: 0 SECONDS (if less or equal 0 only initial sync with storage)
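The settings shown in this log can be tuned when the server starts. A minimal sketch, assuming the system property names documented for jBPM (verify them against your version) and a Kie Server running on WildFly/EAP:

```shell
# Sketch: tuning the jBPM job executor via JVM system properties.
# These would be appended, e.g., to JAVA_OPTS in standalone.conf.
EXECUTOR_OPTS="-Dorg.kie.executor.pool.size=4"                     # thread pool size (default 1)
EXECUTOR_OPTS="$EXECUTOR_OPTS -Dorg.kie.executor.retry.count=5"    # retries per request (default 3)
EXECUTOR_OPTS="$EXECUTOR_OPTS -Dorg.kie.executor.interval=3"       # load-from-storage interval
EXECUTOR_OPTS="$EXECUTOR_OPTS -Dorg.kie.executor.timeunit=SECONDS" # unit used by the interval
echo "$EXECUTOR_OPTS"
```

With these values, the startup log above would report a pool size of 4 and 5 retries per request.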
Native jobs, custom jobs, and async task execution are performed by the executor, which has the characteristics above. A good example of job usage in jBPM is scheduling jobs that keep the environment healthy by periodically cleaning old audit data from the database.
Besides supporting the ops team in keeping a healthy environment, making good use of the async behavior and its capabilities leads the dev and BA teams to a better process design. Additionally, it supports the lifecycle of running processes by providing good visibility of the errors that happened during process execution and making it possible to resolve unexpected situations.
User Tasks allow the interaction of humans with a set of automated tasks. In this way, a series of automatic tasks can be triggered before – providing input for – human decisions, and the output of the user task can then be used to define further actions of a flow.
User tasks have a more elaborate lifecycle than other activity tasks. The lifecycle of a human task involves states and the actions that can be used to interact with the task and change its state.
Considering the lifecycle of a human task, let’s talk about some possibilities that can be achieved out-of-the-box by using a user task. Here is a hiring process, an example of a process that has a different ending based on a human decision:
Once the process is started, the candidate should send their curriculum vitae (first user task). Next, someone from the Human Resources team picks this task and analyzes the candidate’s curriculum. Based on this analysis, this person should decide whether or not this is an appropriate candidate for the role. This decision is stored in a process variable, which will be evaluated by the exclusive gateway, defining whether the candidate was rejected (send an email) or approved (start the hiring procedures).
Let’s analyze some important concepts based on this example:
The process definition says that Validate Candidate Profile should be done by someone from the “Human Resources” team. This task is assigned to a group.
This task has an SLA, which determines 5 days as the maximum expected duration for the HR group – the team – to finish this task. The SLA can also be an expression which holds a process variable, like #{determinedDueDate} .
User task variables can be manipulated by the human that owns the task. This is how the process variables are updated by humans involved in this execution:
The process contains two variables named CV and hr_approval. The user who interacts with this task has access to the variables specified in “Data Inputs and Assignments“, in this case, in_CV. The user can interact with this task in many ways – a form in a web application, a mobile app, Business Central, Google Home, etc. – and whichever way is used, in the end, this interaction will be converted into a request submitted to the engine.
When the user completes this task, the outputs with the answer are stored in the “Data Outputs and Assignments“ fields, in this case, out_approval. This simple boolean, which tells whether or not the candidate was approved, will be automatically assigned to the process variable hr_approval in order to be used in other steps of this hiring flow.
Interacting with User Tasks via Business Central
A client application with a customized layout can interact with human tasks via the available APIs, but for now, let’s see how to interact with this task using business central. Business Central provides a Task Inbox, accessible via the menu.
It is required that the user belongs to the HR group; otherwise, he/she is not a potential owner and won’t be able to visualize the tasks. When the user opens the inbox, he/she will see the list of tasks of which he/she is a potential owner. There are two tasks in this list, because two candidates applied and we have two running process instances. One of the tasks is currently in progress and being executed by a user named karina. The other task is “Ready” to be executed and has no owner. Notice the possible actions for this task: Claim and Claim and Work. But what exactly does this mean?
In order to understand this, we should go back to the WS-HT specification and the lifecycle of a human task.
Human Task Lifecycle
According to the WS-HT specification, human tasks can be in the following states:
Created, Ready, Reserved, In Progress, Suspended, Completed, Failed, Error, Exited or Obsolete.
And these are the possible actions that can be used upon a human task:
jBPM provides a Java client API and a REST API to interact with the task and move it through states. Check these states and actions represented in the following diagram. jBPM implements an easy and comprehensible flow based on the Web Services Human Task specification (a.k.a. the WS-HT spec): http://download.boulder.ibm.com/ibmdl/pub/software/dw/specs/ws-bpel4people/WS-HumanTask_v1.pdf But how does this translate into a real example? Let’s get back to our hiring process, at the point when the candidate finishes the first task and the second task is activated:
The engine creates the second human task “Validate Candidate Profile”;
Once the engine finishes this creation, the task will be ready for users of this group;
All the members of this group (HR) are potential owners and, consequently, can see this task, but only one at a time can claim it and become its owner.
Once claimed, this task will be reserved for this user, and will no longer be part of the list presented to the group.
The user now has the option to actually start working on this task;
The user finishes the analysis and can complete the task, providing the required data.
As shown above, a whole lifecycle for managing user interactions is provided out-of-the-box by jBPM. The user role also matters: it is directly related to the actions that can be executed over a task.
Form Modeler
We are now going to enhance the human interaction by using forms. The Form Modeler is a component of Business Central which allows the business automation specialist to create forms in a low-code manner. It significantly shortens the time spent on form creation.
These forms are .frm files delivered as part of the kjar and can also be consumed via REST and embedded into custom client applications. This makes it easy to consume the whole process from a single point, without propagating changes to other services and impacting their functioning.
See how simple it is to create a form for the hiring process: once all the input and output variables are properly defined for user tasks, look for the option Generate all forms in the process designer toolbar.
This option will automatically create one form for each human task present in this process definition with all the fields configured as input/output assignment variables.
All the forms are made available and are accessible via the project lateral toolbar, or in the project main page. The lateral toolbar shows the assets categorized by type.
By clicking on one of the forms, in this example “ValidateCV-taskform”, the Form Modeler will open with the generated form. This generated form has a proper field for every variable configured in the input/output assignment variables of the human task. It takes into consideration the variable’s class type; for example, for the document it provided a file upload component, and for the boolean field, it provided a checkbox.
All the BA specialist needs to do is adjust the form according to the domain knowledge and requirements. By clicking on the three dots in the right corner of each field, options for editing and removing become available. To reorganize items, simply drag and drop them. See the editing of the checkbox for a more appropriate label:
The HTML field type editor is a rich text editor that gives more flexibility to the user. There is a whole set of components that can be used in form creation. Also, if the user adds new variables or wants to remove them, they will be displayed in the “Model Fields” tab and can be dragged into the form as necessary.
And this is how, after three minutes, a whole new reusable form got created without much coding. In order to test it, the project needs to be deployed to Kie Server. Once deployed, when the task is activated, the results can be visualized via Business Central when clicking on the task. The same form could be embedded into a custom client application.
The form can be obtained in two ways:
XML format with plain details:
For process form: GET /server/containers/{containerId}/forms/processes/{processId}
For task form: GET /server/containers/{containerId}/forms/tasks/{taskInstanceId}
Rendered format
For process forms: GET /server/containers/{containerId}/forms/processes/{processId}/content
For case forms: GET /server/containers/{containerId}/forms/cases/{caseDefId}/content
For task forms: GET /server/containers/{containerId}/forms/tasks/{taskInstanceId}/content
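Putting the endpoints above together, a rendered task-form request can be composed like this (the container and task ids below are placeholders from this lab; adjust them to your environment):

```shell
# Sketch: composing the Kie Server form endpoints listed above.
BASE="http://localhost:8080/kie-server/services/rest"
CONTAINER_ID="lab01-hello-jbpm"
TASK_INSTANCE_ID="5"                      # placeholder task instance id

XML_FORM_URL="$BASE/server/containers/$CONTAINER_ID/forms/tasks/$TASK_INSTANCE_ID"
RENDERED_FORM_URL="$XML_FORM_URL/content"

echo "$RENDERED_FORM_URL"
```

Either URL can then be passed to curl with `--user wbadmin:wbadmin`; the first returns the form definition as XML, the second the rendered form.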
In a business automation project, business process assets are described with BPMN or CMMN diagrams. It’s recommended to base the creation of diagrams on the specification definitions; that way, the implementation will be executable in any software which conforms to the specification. Process modeling knowledge is not restricted to specific products.
Just like a Java class can be imported and edited within different IDEs like Eclipse, VSCode, and IntelliJ, the BPMN2 files are also parsed and displayed correctly in other BPMN Modeling tools. If the user chooses different BA tools to work on process authoring, that’s fine, as long as the tools follow the specification definition.
First, let’s review the core concepts of commonly used BPMN2 elements and how jBPM provides them. Then, we will proceed to more in-depth details about how to model processes following best practices and advanced modeling concepts.
Along with the descriptions, the book also references the specific jBPM source code which delivers the features. In this way, dev-focused readers can have a better understanding of how everything works under the covers.
Visualizing the source code of open-source projects is an advantage for developers who want to get a deeper view of the functioning without depending exclusively on documentation.
Business processes can be modeled in diagrams using these three element types: Events, Activities, and Gateways. Let’s start with the events.
As per the BPMN2 specification: An Event is something that “happens” during a process execution.
Events are categorized as start events, intermediate events, and end events. Events can react to and be triggered based on different happenings. They can also be used to throw a result or raise an error. All these behaviors are examples of events of a business process. Events are represented with circles, and the BPMN2 specification has sixty-four different types of events.
Don’t worry, though. It is not necessary to know all sixty-four. Once you understand the concept behind the types, you will then know when to use them. The more you get familiar with it, the more intuitive it gets. The following concepts provide a baseline:
Events are classified with the types Start, Intermediate, and End. (When does it happen during the flow?);
Separate by definitions: the most commonly used definitions are None, Escalation, Error, Timer, Signal, and Terminate. (What is happening?);
An event might be triggered, or it may trigger something, making it either a catching or a throwing event;
When put on top of a task or subprocess, the event is classified as a Boundary event;
Events can interrupt the current work, or they can allow it to go on. This behavior classifies them as Interrupting or Non-Interrupting events.
The BA specialist should combine these capabilities to deliver the expected business solution. Take a look at the following example. It represents the usage of different events: two start events, two catching intermediate events, one throwing intermediate event, and four end events. Note the events placed along the flow and the ones placed as boundary events of the tasks. Notice how the intermediate error event is used on top of an external REST service task, which makes an external request, to catch possible errors. Also, notice the intermediate timer set on top of the human task “Collect User Feedback” to take necessary steps in case of delayed user action after a determined time limit.

When an event happens, a step or a series of steps is then triggered: tasks which need human interaction, e-mails that should be sent, external services which need to be called, business rules which need to validate data within the process. The actions executed during the flow are considered activities.
Activities can be atomic, or they can be decomposable. An atomic activity is, for example, a Service Task which makes a call to an external service (REST task), or a User Task, which needs intervention from a human. A decomposable activity is a step which contains steps within it; in other words, a subprocess.
jBPM delivers the following activity types:
A Business Rule task is used to add decision logic, a.k.a. business rules, to the process. It triggers a set of rules implemented using approaches provided by the Drools engine, like DMN models, decision tables, DSLs, guided rules, and also Drools code.
The Script Task is an activity which gives more advanced developers the possibility to use the Java, JavaScript, or MVEL language to code logic within a process execution. From the business point of view, it is not very intuitive when used, but it might be required for some use cases.
A user task is used to gather information from humans as input for the process. This is not an automatic task: the process always stops when reaching a user task and waits for human interaction. A user task is assigned to a single user or a group of users, but only one user can complete the task. The engine is flexible and allows notification, escalation, and delegation based on the business requirements (e.g., if the task is not completed within two days, notify the manager).
The decomposable activities, the subprocesses, come in the following types: embedded, ad-hoc, reusable, multiple instance (MI), and event subprocess.
Before proceeding, it’s essential to understand how jBPM implements and delivers activities, so let’s look deeper into the concept of Work Items.
What is a Work Item
A Work Item (WI) is a set of logic that can be automated and graphically represented in a process or case. The same logic can be reused by simply changing the parameters. The BPMN2 and CMMN specs already define the most commonly used work items.
Understanding these concepts facilitates the task of creating custom tasks, content described later in this section. This concept is especially essential for developers or the ops team who need to deploy custom tasks into environments.
Every WI has two main components:
Definition: defines a unique name, a description, and a 16×16 icon. This information is displayed in the process designer. It is a .wid file;
Handler: the actual logic that makes the task happen, that executes the task. The WIH is a Java class which implements the WorkItemHandler interface. It has, at least, an execute and an abort method.
Some work items, mostly service tasks, require configuration during their initialization, and because of that, they need to be configured in the project deployment settings. This configuration is set via parameters in the class constructor method. The topics Service Tasks and Custom Tasks will show more details about the initialization configuration.
Finally, the business automation specialist can choose whether to use the native work items provided by jBPM or to create custom work items which are adapted for domain-specific logic.
Service Tasks
A service task is a task that does not require any human interaction with the engine. It can be executed by the engine synchronously or asynchronously (the process execution waits for the execution to finish, or not, respectively). We can invoke external services via REST, send e-mails, log messages, and even invoke complex business rules in order to define the next steps to be taken in the process flow.
Service Task Configuration Tips: service tasks need additional configuration, set in the Work Item Handlers section of the project’s Deployment Configuration. These configurations can either be set manually or added via the Service Tasks menu. The configuration is stored in /projectDir/src/main/resources/META-INF/kie-deployment-descriptor.xml.
REST Task: the REST task allows external services to be accessed following REST standards, using the Apache HttpClient 4.3 API.
Every new task will have the following options available by default: using authentication or not, using different types of HTTP methods, and different body formats. All these configurations are done via variables in the task.
The Data Inputs and Assignments are further converted and used in the request, like URL, headers, body, etc. The Data Outputs and Assignments receives the response, the resulting information from the execution, returned by the service.
Email Task: the e-mail task handles e-mail transmission and allows the user to add email functionality to a process without coding a new feature or service. The transmission is handled directly from the process server; in other words, the Kie Server can use the application server’s previously configured SMTP service.
During the process design, the following parameters are expected by default on the Email task: the subject, the content of the e-mail, the sender’s e-mail, and the destination (a single address or a list).
Log Task: the Log task can be used in a process to output information to the application server logs. To configure it, it is possible to use the out-of-the-box WIH SystemOutWorkItemHandler, or to create a custom one to handle specific business needs.
Business Rules Tasks: the jBPM engine has cloud-native BRM capabilities which allow the business automation specialist to author processes and rules, and to execute them with scalable native core engine integration.
The capability of adding decision tasks to a process brings the possibility of creating simple solutions to solve complex scenarios.
When the process reaches a Business Rules Task, the configured set of rules is triggered in the Drools engine within Kie Server. Once there are no more active rules, the flow goes on to the next node. jBPM offers three out-of-the-box options to use this feature: Business Rule Task, Decision Task, Business Rules Remote Task. More details are presented further in this section, in the blog post about “Business Tasks”.
During the design and development phases of an application, developers and architects should not have to spend valuable time implementing a reliable and performant way to process the business rules and flows. How to scale and guarantee the proper execution of more than a hundred thousand rules? How to properly design an engine that consistently handles the lifecycle of long-running processes, the lifecycle of tasks which require human interaction, and asynchronous clustered job executions?
Why and How to use Kie Server
The Kie Server engine addresses these questions and saves your team’s time. Kie Server is a Java-based engine designed to effectively execute business assets. This cloud-native engine can be integrated with and consumed by any external services via REST or JMS protocols.
In a business rules scope (Drools), it can load a large rule base (more than a hundred thousand rules) into the kbase, and the execution is fast and performant. It also has the capability of updating business rules with zero downtime.
When working with workflows, the engine can run in a standalone or clustered environment and deals with synchronous and asynchronous tasks. It handles short- and long-running process or case executions and persists the state of each step, including human task lifecycle management. The execution engine also allows advanced queries to be created for custom applications or dashboards.
In the application design phase, besides defining whether the Kie Server will be monitored by a BC or be a headless server, the architect may also define whether the Kie Server will run in embedded mode or on a remote server.
Accessing Kie Server
Let’s start by using the REST API on this first Kie Server. Make sure your jBPM is up and running before starting, and preferably check that you have deployed the project lab01-hello-jbpm.
The server requires a valid user and password with the roles “kie-server” and “rest-all“. You can use the user wbadmin with the password wbadmin.
If your Kie Server is properly up and running, the browser output should be an XML document. This XML structure shows some information about this Kie Server instance. It lists all the capabilities currently enabled, showing that it can process all types of business assets. The timestamp of its start is also displayed along with its version, name, and id.
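The XML described above is returned by the root /server endpoint of the Kie Server REST API. A minimal sketch of the request, assuming the default local URL and the demo wbadmin credentials:

```shell
# Sketch: requesting the Kie Server info XML described above.
BASE="http://localhost:8080/kie-server/services/rest"
# Print the command; copy it to a terminal to actually run it against your server.
echo "curl --user wbadmin:wbadmin -X GET '$BASE/server' -H 'accept: application/xml'"
```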
Accessing the BC execution servers page will allow you to confirm that the Kie Server you are currently accessing is the same one that your BC is controlling. A complete API specification documentation is available for users to check all the possible actions that can be triggered on the engine and how to do it. Access http://localhost:8080/kie-server/docs and navigate through the page to see all the possible actions.
Using Swagger
Swagger UI facilitates the execution of the described APIs and even provides a command line version of the request using the cURL tool. The same request can be made using curl or any other REST tools like Postman.
Locate “Kie Server and Kie Containers” section, and then, click on the option “/server/containers Returns a list of KIE containers on the KIE Server“.
Click on “Try it out” and notice that the page will enable the optional parameters for editing. Don’t fill in any of the inputs; just click the blue “Execute” button.
Right below the “Execute” button, the curl command will be displayed. The interface also shows the result “HTTP 200” along with the container list obtained directly from this Kie Server. The request created automatically by Swagger uses curl, and it should look similar to:
curl -X GET "http://localhost:8080/kie-server/services/rest/server/containers" -H "accept: application/json"
Observe the JSON result listing all the deployed containers in the Kie Server. Information such as the container id, like “lab01-hello-jbpm_1.0.4-SNAPSHOT“, or the container alias, like “lab01-hello-jbpm“, is displayed along with each container’s status. These same containers are displayed in the BC execution servers page as “Deployment Units“.
Using command line tools
Client URL, a.k.a. curl, is a command-line tool that can be used to try out the REST API requests along with the jBPM Getting Started series examples. You can also use it to request Kie Server REST APIs.
Open your terminal and execute the curl command to request a list of containers. If an Unauthorized error is received, make sure you set the headers with the proper user. Try it like this:
curl --user wbadmin:wbadmin -X GET "http://localhost:8080/kie-server/services/rest/server/containers" -H "accept: application/json"
Whichever tool you choose to consume the REST API should be fine. Just remember to set the authentication headers and the “accept” header with the value “application/json”, to simplify visualization of the results.
Take some time to visualize the metadata retrieved for the available Kie Containers deployed in this Kie Server.
Interacting with a business project
Try to identify in the API specification documentation, what request should be made in order to start a new process instance of a particular process.
It is important for the development team to get used to the Kie REST API. Create the habit of checking the API docs to identify the necessary request and all available parameters and options, to increase the chances of delivering the best integration possible. Note that, in contrast, when using Kogito, the REST API is domain-driven. Details can be found in the Kogito docs.
In these examples, the curl tool will be used since it displays all the necessary headers and parameters in a simple way. The usage of curl is optional; feel free to use whichever tool you prefer.
Besides using a valid user, starting the execution of a new process instance based on a process definition requires informing the Kie Server:
Which Kie Container has the proper deployment? Inform the ID or alias.
Which Process Definition should be used? Inform the ID.
Does the process require information in order to start? Send it via body in JSON or XML format.
Container ID or Container Alias?
Whenever the requested action is going to update data, the user needs to specify the container id in the REST URL. Actions like starting a new process, claiming a task, firing rules, and uploading case documents will need the container id parameter. More general actions, like querying process instances or all the tasks owned by a user, do not require the container id parameter.
It is correct to relate the concept of a Kie Container to a respective deployment version. The container id is composed of the project’s Maven GAV, in the format groupId:artifactId:version. To know the GAV of your project, check the pom.xml file.
Considering a project GAV which is com.myspace:lab01-hello-jbpm:1.0.3-SNAPSHOT , a deployment of this project in the Kie Server results in a Kie Container with id com.myspace:lab01-hello-jbpm:1.0.3-SNAPSHOT.
The Kie Container has some metadata, as you could see in the previous lab. One of these deployment metadata entries is the container alias. When a URL with the container alias is consumed, Kie Server will always route the request to the most recent version of the available containers that match this container alias.
The GAV present in the URL reflects changes whenever the project version changes. The URL used on client applications for the request should always be adapted to the proper version. To facilitate this for those teams who always want to use the most recent version, instead of using the container id (project GAV), it is also possible to use the container alias.
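The difference can be sketched by composing the same start-instance URL both ways (the ids below come from the example project in this chapter; treat them as placeholders):

```shell
# Sketch: the same request addressed by container id (GAV) and by container alias.
BASE="http://localhost:8080/kie-server/services/rest"
GAV_ID="com.myspace:lab01-hello-jbpm:1.0.3-SNAPSHOT"   # pinned to one deployed version
ALIAS="lab01-hello-jbpm"                               # routed to the latest matching container

BY_ID="$BASE/server/containers/$GAV_ID/processes/IssuesProcess/instances"
BY_ALIAS="$BASE/server/containers/$ALIAS/processes/IssuesProcess/instances"

echo "$BY_ALIAS"
```

Client applications that always want the most recent deployment can hard-code the alias form and never touch the URL again when the project version changes.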
Kie REST API: Start an instance using Kie REST API
In the previous post, a new instance of the process was started using the Business Central UI and received its input via the business project forms. Now, we will start an instance of the same process using Kie Server directly, via the Kie REST API.
As stated in the docs, the context “/server/containers/{containerId}/processes/{processId}/instances” can be used to create a process instance. Let’s try starting an instance of the Issue Process available in our lab01-hello-jbpm project.
Kie Server automatically marshalls the JSON string with the Issue and Person complex data objects into the defined data model classes.
The following body input data can be used to start a new process instance.
Container ID/Alias: 01-customer-service-jbpm
Process Definition ID: IssuesProcess
Headers:
accept: application/json
content-type: application/json
Body: the issue is a process variable of the complex type (POJO) Issue. The issue has a question, a type, and a reporter, which is also a complex object of type Person. For the person, only the name is sent.
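A hedged sketch of the full request follows. The exact JSON field names depend on your Issue and Person data objects; the ones below are assumptions based on the description above:

```shell
# Sketch: composing the start-instance request. The BODY field names are assumptions.
BASE="http://localhost:8080/kie-server/services/rest"
CONTAINER="01-customer-service-jbpm"
PROCESS_ID="IssuesProcess"
URL="$BASE/server/containers/$CONTAINER/processes/$PROCESS_ID/instances"

BODY='{"issue":{"question":"App crashes on login","type":"bug","reporter":{"name":"Karina"}}}'

# Print the command; copy it to a terminal to send the request for real.
echo "curl --user wbadmin:wbadmin -X POST '$URL' -H 'accept: application/json' -H 'content-type: application/json' -d '$BODY'"
```

The response body of the real request is a single number: the id of the newly created process instance.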
Open your favorite API Client, or use cURL. Use the suggested JSON structure above as the body request. Take note of the number retrieved in the response.
The response will be a number: the ID of the created process instance. Using this ID, you can retrieve details about the process instance via the REST API, or you can use Business Central’s management options to check the newly created process instance and its active tasks.
Kie REST API: Retrieving information about a process instance
Try to find in the documentation how to use the REST API that returns information about a specified process instance. Let’s try it; here are the suggested parameters:
The withVars parameter asks the engine to fetch the process variables. Its default value is false, in which case “process-instance-variables” will be retrieved with a null value.
container id: 01-customer-service-jbpm
process instance id: Use the ID of the process you just created.
Do a GET request to retrieve details on the process instance. This example uses a process instance with id 5.
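A sketch of that GET request, with the ids used as placeholders (substitute your own container and instance ids):

```shell
# Sketch: querying a process instance, including its variables (withVars=true).
BASE="http://localhost:8080/kie-server/services/rest"
CONTAINER="01-customer-service-jbpm"
INSTANCE_ID="5"                           # the id returned when the instance was started
URL="$BASE/server/containers/$CONTAINER/processes/instances/$INSTANCE_ID?withVars=true"

echo "curl --user wbadmin:wbadmin -X GET '$URL' -H 'accept: application/json'"
```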
Take some time to observe the information present in the JSON result. Among other important information, these fields are highlighted:
process-instance-id: process instance ID which was queried.
process-state: current process state, 1 means “Active”
container-id: the Kie Container related to this process execution. With this, it is possible to know the respective project version used when the process was created.
initiator: username from the user who started this process instance;
active-user-tasks: list of user tasks in an active state (waiting for human interaction);
process-instance-variables: the variables from this process execution. Notice the Issue now has a solution that was automatically defined by the project’s business rules.
Kie REST API: Retrieve diagram from running process instances
Let’s check graphically how this process is going. Open a new tab in your browser and access this URL, replacing {processInstanceId} with the id of the process instance you created.
This request retrieves an SVG for the process instance, showing in a graphical manner the ongoing status of this process. This can, for example, be used within custom client applications.
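A sketch of how that URL can be composed, assuming the images endpoint of the Kie Server REST API (path taken from the API docs; verify it against your version):

```shell
# Sketch: composing the URL that returns the annotated SVG of a process instance.
BASE="http://localhost:8080/kie-server/services/rest"
CONTAINER="01-customer-service-jbpm"      # placeholder container id/alias
INSTANCE_ID="5"                           # replace with your process instance id
SVG_URL="$BASE/server/containers/$CONTAINER/images/processes/instances/$INSTANCE_ID"

echo "$SVG_URL"
```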
Kie REST API: Aborting a process
During development, we may reach unexpected scenarios where we need to abort running process instances, even in production environments. Try to identify in the API documentation how to abort a running process instance.
Using your API client, send a DELETE request to the following context: /server/containers/{containerId}/processes/instances?instanceId={processInstanceId}
You can abort as many instances as needed by adding more instanceId parameters. Example: /server/containers/{containerId}/processes/instances?instanceId=1&instanceId=2&instanceId=3
See the example below using curl and aborting process instance 24:
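A sketch of that request, aborting process instance 24 (the container id is a placeholder; substitute your own):

```shell
# Sketch: aborting process instance 24 with a DELETE request.
BASE="http://localhost:8080/kie-server/services/rest"
CONTAINER="01-customer-service-jbpm"
URL="$BASE/server/containers/$CONTAINER/processes/instances?instanceId=24"

echo "curl --user wbadmin:wbadmin -X DELETE '$URL' -H 'accept: application/json'"
```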
Business Central is a Java Web-based application that supports the creation, management, and monitoring of business applications. It is not a required component; however, using this tool can accelerate the development phase with proper rule and process authoring tools, form modeler components, an advanced dashboard page creator with out-of-the-box components, and more.
Once your environment is up and running, feel free to access business central to test the features as we go through the topics. We will have an overview of Business Central (BC) features, common tasks, and usage.
Business Assets Overview
After creating a project, Business Central allows users to create any type of asset related to a business automation project. The assets above are colored and numbered as per the following characteristics:
Business Assets
1. Black
Assets related to business process management (BPM). Allows the creation of process diagrams using two different designer options: the process designer and the legacy process designer. Both allow the creation of BPMN2 assets.
2. Green
Assets related to the project structure and data models. Allows the creation of packages, Java enumerations (Enum), global variables, and data objects (POJOs) in a friendly interface.
3. Blue
Assets related to business rules (BRMS). The rule assets are friendly ways for business analysts to define and validate rules without going deep into development scope. Decision Tables can be created with pre-defined options (Rule Template) so that the analyst worries only about configuring the business rule as required by the organization. The Decision Model and Notation (DMN) editor is a jBPM feature which allows the graphical design of business rules.
4. Orange
Forms are used to collect input from humans so the process can proceed. Traditionally, forms were developed using HTML, CSS, or JavaScript. Business Central has an out-of-the-box Form Modeler. It automatically generates forms based on the task parameters and allows the editing and creation of new forms in a low-code manner. These forms can be used inside Business Central or consumed via REST API and embedded into client applications.
5. Pink
Assets related to Case Management (CMMN). Case Management assets can also be created in projects. They allow the user to use elements from CMMN to define flexible processes. I’ll post details about Case Management in further sections.
6. Purple
Resource Planning assets. Solver configurations for optimization and scheduling use case projects can also be created via the Business Central interface. Like every other business asset, the problem solving happens within the Kie Server engine, which can, for example, solve resource planning problems using an auto-scalable cloud environment.
These assets are presented in a user-friendly manner with drag-and-drop tools and easily readable formats for business users. Business Central also provides the possibility of editing the source code if a more developer-focused user wants to enhance the project with technically advanced concepts.
Projects in Business Central
Creating a new project
Creating projects via Business Central is intuitive and straightforward since all the technical details are transparent and more friendly for business users.
The projects are organized in spaces. When created using Business Central, a project contains, by default, the proper structure for a kjar, with the necessary configuration files such as kmodule.xml.
1. To create a project, select a space, and click on the “Add Project” button:
2. Choose a name and click save.
A new project was just created and the user should be ready to start authoring business assets.
Importing and exporting projects
The development team can choose to work with standard developer IDEs like VSCode, or in Business Central. Because of that, it may be common practice for the team to import/export the project to/from Business Central.
To import an existing project from a git repository into Business Central, follow these steps:
Log in to Business Central and select the space named “MySpace“;
On the next screen select the “01-customer-service-jbpm” project and click the “Ok” button;
After these steps, business central will clone this project and use Lucene to index all the assets in the local environment. The project should now be available in Business Central for authoring.
This project is now stored in an internal git repository of jBPM, which by default is located in the same directory used to run the application server. All the spaces and projects reside within a .niogit hidden folder:
It is possible to customize the folder where Business Central stores its data and projects by configuring the system property -Dorg.uberfire.nio.git.dir. When using EAP, for example, add to the standalone.xml system properties tag: <property name="org.uberfire.nio.git.dir" value="/path/to/my/niogit"/>
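As an alternative sketch (assuming a default EAP/WildFly layout; paths below are illustrative), the same property can be passed on the command line when starting the server instead of editing standalone.xml:

```shell
# Hypothetical path -- adjust to your installation
# Pass the property at startup instead of editing standalone.xml
./bin/standalone.sh -Dorg.uberfire.nio.git.dir=/path/to/my/niogit
```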
The user can do any type of authoring to this project, create new assets, edit existing ones, create packages, or change configurations. Business Central will automatically commit these changes to the internal git repo. It will not push these commits back to the repository by default.
If the user wants the information from Business Central to be automatically pushed to a remote repository, it is required to configure a git hook.
Let’s explore the imported project using Business Central.
1. Open the project page, locate the asset named “IssuesProcess” and click on it. The process designer opens with a simple business process containing a rule task (Auto Solve) and two human tasks. This is a process definition that uses the BPMN2 specification.
2. Now, accessing the project page again, search and open the asset named “IssueSolver“. The rule editor displays a decision table. This is a group of business rules, each line represents a single rule and its consequence (when this, then that).
The source tab is available if the developer wants to check the generated Drools code. The first table line is converted to:
//from row number: 1
//Docker Question
rule "Row 1 IssueSolver"
    ruleflow-group "autoSolver"
    dialect "mvel"
    when
        f1 : Issue( type matches "question", question : question matches "Docker" )
    then
        modify( f1 ) {
            setAutomatic( true ),
            setSolution( "Please give a look at https://hub.docker.com/r/jboss/jbpm-server-full/ documentation" )
        }
end
Explore the project and feel free to add new rules or change the process if you want.
Exporting a project from business central
Whenever a user wants to get the maven project that exists within business central into a local environment, it is necessary to export it.
It is recommended to work with a remote git repository as the “source of truth”. Under this practice, every developer on the project, whether using Business Central or any other IDE, should always keep the project in sync with that unique remote git repository; that’s why it is called a “single source of truth”.
The URL for every project repository inside Business Central is easily accessible. Check the following steps on how to export the project that you authored into Business Central, as a maven project to your local machine.
Inside Business Central, access the project page and click on the settings tab. Locate and copy the git URL ready to use with git or ssh protocols:
In the above example, Business Central is running on localhost. However, this environment could be running in a docker container inside a Kubernetes environment with a dynamic address, for example. This is an easy way to find the URL to clone the project.
With this URL, a developer can clone the project and keep it in sync using common git versioning practices. The SSH protocol requires the user to be specified in the URL when cloning the project. A valid user and roles are required because the SSH protocol allows the user to push changes back to the Business Central repository.
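Putting that together, the clone URL follows the pattern ssh://&lt;user&gt;@&lt;host&gt;:8001/&lt;space&gt;/&lt;project&gt; (8001 is the internal git SSH port used in this lab; the values below are the ones from this exercise):

```shell
# Illustrative values from this lab
GIT_USER="wbadmin"
HOST="localhost"
SPACE="MySpace"
PROJECT="lab01-hello-jbpm"

# Business Central's internal git SSH port in this lab is 8001
CLONE_URL="ssh://$GIT_USER@$HOST:8001/$SPACE/$PROJECT"
echo "$CLONE_URL"
```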
Next, enter the ~/learn-jbpm/labs/ folder and clone the project from Business Central to your local machine:
# Enter the labs folder
$ cd ~/learn-jbpm/labs
# Get the project from business central internal git repository using wbadmin user
$ git clone ssh://wbadmin@localhost:8001/MySpace/lab01-hello-jbpm
Feel free to import it as a maven project into your favorite IDE, check the project structure, and change or add some files. It is a standard maven project, so you can run a maven clean install or a maven clean package. After compiling the project, notice that a jar is created in the project’s target folder.
When editing the project, don’t forget to add new files using git add, to commit them with a message, and finally push them back to Business Central.
$ git add .
$ git commit -m "my first commit"
$ git push origin master
Deployment
Now that we have our project inside Business Central, let’s deploy it! … But where? In the Kie Server!
Another feature provided by Business Central (BC) is integration with the intelligent engine, Kie Server. Business Central is able to communicate with Kie Server, create Kie Containers, and deploy kjars into them.
The deployment inside Kie Server creates the executable version of your business project and its assets.
When accessing the project page, there are currently two options, each with a variant: “Build” (“Build & Install”) and “Deploy” (“Redeploy”).
Build: compiles the current project;
Build & Install: compiles the project and installs it into Maven repositories. The repositories considered are those configured in the project pom.xml <repositories> or <distributionManagement> sections, or in the global Maven configuration settings.xml (of the Maven installation on the machine where this BC runs);
Deploy: compiles and installs the project, connects to the available Kie Servers, creates a Kie Container, and deploys the current version;
Redeploy: redeploys the current version; when development mode is activated, the active process instances are not aborted. This is only recommended in the development environment, and is therefore available only for SNAPSHOT versions;
From jBPM version 7.20 and higher, a development mode configuration for the project is available. It simplifies the deployment strategy and development cycle during the development phase.
Now, let’s deploy the project we imported previously.
1. Access the project lab01-hello-jbpm page, and deploy it to the Kie Server engine.
2. After clicking the deploy button, you should see two green boxes with the following two messages: “Build Successful” and “Deploy to server configuration successful and container successfully updated.”.
The project is deployed and ready to be tested! Now, let’s open the Kie Server management page to validate the execution environment.
Click on the “Menu” option in the top bar, and select “Execution Servers” under the “Deploy” options. After doing that, you should see the following page: The Execution Server page shows details about each Kie Server being monitored by Business Central, the controller. It displays the capabilities provided by each Kie Server, the Kie Containers deployed in each one, and the links to access the available Kie Containers that have deployment units available.
Monitoring and access to Kie Server are facilitated when using Business Central. In this example, Business Central and Kie Server are running within the same JVM, the same WildFly server. It is important to notice that the URL for the Kie Container is based on IP:PORT/kie-server/. This means that the kjar deployment is not running on top of Business Central; rather, this controller can manage and deploy projects on Kie Servers available even on different machines.
Kie Server will be detailed in a bit. Let’s proceed with Business Central overview.
Management
Business Central has all the out-of-the-box features to enable the user to interact with business processes, cases, and tasks. Once the project is finished, it is possible to test the whole cycle:
Build and Deploy the project into a Kie Server;
Start a new instance of a process;
Interact with the human tasks of this process;
Visualize the diagram with live information and a retry option;
Abort processes or tasks, as well as perform other available actions.
Try managing the processes deployed in Kie Server with Business Central by following the next hands-on example.
The available process definitions in the monitored Kie Servers can be accessed in the top menu, on the “Manage” option, “Process Definitions” page. A list of process definitions is displayed with process name, version, and deployment (GAV). This is where we can start new process instances for the definition created during the authoring. The web form which is displayed is also part of the project. It was automatically generated and slightly adapted. Fields like the date picker are out-of-the-box components.
If you want to change the form, open the project page and search for the “com_myspace_lab01_hello_jbpm_Issue” form file. The Form Modeler will display a friendly way to change this HTML form.
Fill the form with the following artificial data (for a first try, use the suggested report and type):
Person Name: John Doe Reporter (Any name is valid)
What would you like to report? : jBPM Documentation
Type?: Question
Click on submit to start a new instance of this process.
Business Central redirects you to the Process Instance management page. You are now visualizing real-time data from this running process instance, with id 1. Click on the “Diagram” tab to check what has already happened, and which task is currently active in this process instance. This information is valuable not only for debugging tasks but also for business team members who want to check the details of a particular process instance execution (“Why is it taking too long?”, “How long did each task take?”, “Which paths did the flow go through?”, “How many times was a task executed?”).
The diagram shows that the Rule Task has been executed and, based on the input data provided in the form, it automatically solved the issue. Therefore, customer service was not necessary. The flow then activates the task “Customer answer validation”, where it is expected that the requester (John Doe in this example) observes the solution and rates it.
Let’s interact with this task as if we are the “John Doe” reporter:
On the top Menu, under the “Manage” section, click on “Tasks”. The manage tasks page opens with a list of available tasks (which belong to active process instances) and their details.
Click on the task line for “Customer answer validation“. The task form opens, showing the data provided during the process instance creation. This form is also part of the project and can be visualized and edited in the project editor.
The solution was automatically defined by the engine based on the decision table. The engine identified this is an automatically solved task, and forwarded the task to the customer so that he can see the provided solution and rate it based on how helpful it was.
The Human Task has its own lifecycle based on the WS-HumanTask specification. Details will be provided in further blog posts.
This task is available and waiting for the user to claim it and start it.
Click on the blue “Claim” button, and then, click on “Start“.
Select one of the three options for the question “Is the solution helpful?“, and click on the blue button “Complete“.
Business Central redirects back to the tasks list and the task is no longer present.
On the top “Menu“, under the “Manage” section, choose “Process Instances“
On the left bar, filter the processes list by clicking on the “Completed” checkbox. The process instance you were interacting with appears on the list.
Click on the process instance, and navigate to the “Diagram” tab.
Observe the flow that was executed and notice that the “Customer answer validation” task is now gray, meaning it was executed. Finally, the process reaches the end.
Feel free to explore the Business Central authoring, deployment, and management pages for a while. Try to increase the IssuesProcess process definition version using the process designer. Then, change one of the available forms and deploy the project again. Start a process instance and validate your changes. Have fun!
Monitoring
There are two Business Analysis and Monitoring dashboards available by default. The “Process Reports” show data about the processes running in the monitored Kie Servers. The “Task Reports” show data about the users’ tasks status, owners, and more.
Both reports are available under the top Menu, in the “Track” section. Process Reports contains information about the process instances you played with in the last exercises. These real-time data are valuable and really useful for business users.
If you tried running some process instances and executing the tasks, when you access the “Task Reports” page you will see a more diverse graph. With information from a production environment, this dashboard leads to the identification of bottlenecks in the organization’s processes that involve human tasks. Staff performance is just one of the possible insights obtained.
Well-designed dashboards that expose the right KPIs are converted into information that acts like lights guiding the business team through the dark roads of enterprise improvement and growth.
On the next post, we’ll learn a little bit more about the intelligent execution engine – Kie Server.