Service task repository integrated into Business Central

Service tasks (aka work items) are of tremendous use in business processes. Users can build their custom logic into well defined tasks that can be reused across processes or even projects. jBPM comes with a rather large set of service tasks out of the box; you can explore them in the jbpm-work-items repository on GitHub.

jBPM also provides a standalone service repository that can be used from the jBPM designer to import service tasks. That was, however, just an intermediate step towards better integration with the authoring tooling – Business Central.

A brand new integration between the service task repository and Business Central is under development, and I’d like to share a bit of news about this upcoming feature…

Service Task administration

First and foremost, there is global administration of service tasks. This allows administrators to select which of the service tasks the authoring environment ships with are allowed to be installed in projects.

There are three configuration options

  • Install as Maven artefact – uploads the handler’s jar file if it does not already exist in the local or Business Central Maven repository
  • Install service task artefact as Maven dependency of project – updates the project’s pom.xml upon installation of the service task
  • Use version range – when adding the service task artefact as a project dependency, a version range is used instead of a fixed version, e.g. [7.16,) instead of 7.16.0.Final
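With version ranges enabled, the dependency added to the project’s pom.xml would look roughly like this (the coordinates are illustrative – the actual groupId/artifactId come from the installed service task artefact):

```xml
<dependency>
    <!-- illustrative coordinates of a service task handler artefact -->
    <groupId>org.jbpm.contrib</groupId>
    <artifactId>rest-workitem</artifactId>
    <!-- version range: any 7.16+ version instead of a fixed 7.16.0.Final -->
    <version>[7.16,)</version>
</dependency>
```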

Service task installation – project settings

Once service tasks are enabled they can be used within projects. Simply go to the project settings page and install (or uninstall) the service tasks you need. Note that this settings page only lists service tasks that are globally enabled.

Service tasks can then be installed into projects. During installation the following steps are performed:

  • a dedicated wid (work item definition) file is created for the installed service task
  • the custom icon for the service task is installed into project resources (if one exists)
  • the project’s pom.xml is updated to include dependencies (if enabled in the global settings)
  • the deployment descriptor is updated to register the work item handler for the service task
Similar steps are performed on uninstallation, though they remove rather than add configuration.
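For reference, a wid file is a MVEL-based descriptor of the service task; the generated one would look roughly like this (names and parameters illustrative):

```
[
  [
    "name" : "CustomTask",
    "displayName" : "Custom Task",
    "icon" : "custom-task.png",
    "parameters" : [
      "Message" : new StringDataType()
    ],
    "results" : [
      "Result" : new ObjectDataType()
    ]
  ]
]
```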
Here is a short video (this time with audio… not sure if that is good or bad…) that illustrates the entire feature working together, including the use of a service task in a business process.

This is part one of this feature so stay tuned for more updates in coming weeks…

Here is a complete video showing all features in action including

  • service repository administration
  • uploading new service tasks
  • default service tasks (REST, Email, Decision, etc)
  • installing service tasks into project with prompt for required parameters

This feature is planned for 7.17 so all the feedback is more than welcome.

Audit log mode applied to all audit data

jBPM allows so-called audit logs to be stored in various modes:

  • JPA (default)
  • JMS
  • None
JPA mode means that data is stored directly and within the same transaction as process execution. That usually has some additional performance overhead, although it is certainly not significant and, unless huge volume is expected, it is a sound default setting.
JMS mode means that all audit data is stored in the background: the engine pushes the required data through the JMS layer. That offloads the main thread from storing audit logs, allowing it to process more process instances while a JMS listener deals with storing the audit logs in the background.
None mode means that audit logs won’t be stored at all, which might make sense in some cases (usually straight-through processes) where audit data is not required. Keep in mind that with audit logs disabled (set to None mode), both jBPM console and KIE Server features are limited, as they rely on audit data.
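The audit mode is typically selected per deployment via the deployment descriptor (kie-deployment-descriptor.xml). A trimmed-down sketch – a real descriptor contains more elements:

```xml
<deployment-descriptor>
    <!-- one of JPA, JMS or NONE -->
    <audit-mode>JMS</audit-mode>
    <audit-persistence-unit>org.jbpm.domain</audit-persistence-unit>
</deployment-descriptor>
```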
Until 7.15 the audit mode applied only to process related audit data, which consists of:
  • ProcessInstanceLog
  • NodeInstanceLog
  • VariableInstanceLog
It has now been improved to cover all audit logs spanning processes, user tasks and cases. With that said, it covers (in addition to those listed above) the following:
  • AuditTaskImpl
  • TaskEvent
  • TaskVariableImpl
  • CaseFileDataLog
  • CaseRoleAssignmentLog
BAMTaskSummary is not covered by the audit mode, except for NONE mode, which also disables BAM logging.


JPA and NONE modes do not require additional configuration and can be used directly after installation. JMS does need a bit of configuration to take advantage of the JMS layer.
This sample configuration assumes the runtime environment is based on WildFly (or EAP) as application server.

Enable JMS queue creation in kie-server-jms.xml

    First you need to enable the dedicated JMS queue that audit data is sent through. To do so, go to kie-server.war/META-INF and edit the kie-server-jms.xml file. Locate the commented-out queue named KIE.SERVER.AUDIT and uncomment the entire queue configuration; it should look like this:

    <messaging-deployment xmlns="urn:jboss:messaging-activemq-deployment:1.0">
        <server name="default">

            <!-- Kie Server REQUEST queue -->
            <jms-queue name="KIE.SERVER.REQUEST">
                <entry name="queue/KIE.SERVER.REQUEST"/>
                <entry name="java:jboss/exported/jms/queue/KIE.SERVER.REQUEST"/>
            </jms-queue>

            <!-- Kie Server RESPONSE queue -->
            <jms-queue name="KIE.SERVER.RESPONSE">
                <entry name="queue/KIE.SERVER.RESPONSE"/>
                <entry name="java:jboss/exported/jms/queue/KIE.SERVER.RESPONSE"/>
            </jms-queue>

            <!-- Kie Server EXECUTOR queue -->
            <jms-queue name="KIE.SERVER.EXECUTOR">
                <entry name="queue/KIE.SERVER.EXECUTOR"/>
            </jms-queue>

            <!-- JMS queue for signals -->
            <!-- enable when external signals are required -->
            <jms-queue name="KIE.SERVER.SIGNAL.QUEUE">
                <entry name="queue/KIE.SERVER.SIGNAL"/>
                <entry name="java:jboss/exported/jms/queue/KIE.SERVER.SIGNAL"/>
            </jms-queue>

            <!-- JMS queue for audit -->
            <!-- enable when jms mode for audit is required -->
            <jms-queue name="KIE.SERVER.AUDIT">
                <entry name="queue/KIE.SERVER.AUDIT"/>
                <entry name="java:jboss/exported/jms/queue/KIE.SERVER.AUDIT"/>
            </jms-queue>

        </server>
    </messaging-deployment>

    Enable message listener in ejb-jar.xml

    Next, go to kie-server.war/WEB-INF and edit the ejb-jar.xml file. Locate CompositeAsyncAuditLogReceiver and uncomment the entire section for that message-driven bean. Also uncomment the enterprise-beans tags for the document.
    It should look like below (abbreviated):

    <ejb-jar id="ejb-jar_ID" version="3.1"
             xmlns="http://java.sun.com/xml/ns/javaee">

        <enterprise-beans>

            <!-- enable when external signals are required and queue and connection factory is defined -->
            ...

            <!-- enable when jms mode for audit is required and queue and connection factory is defined -->
            <message-driven>
                <ejb-name>CompositeAsyncAuditLogReceiver</ejb-name>
                ...
            </message-driven>

        </enterprise-beans>
    </ejb-jar>



    Configure JMS related config for audit logs

      Lastly, go to kie-server.war/WEB-INF/classes and rename the audit configuration template file to its active name so the JMS settings are picked up.

      And that’s all that is required to make use of JMS audit logging in jBPM. For other application servers, make sure to create the JMS queue (and refer to it in the ejb-jar.xml file) according to that server’s JMS configuration guides.

      jBPM empowered by Camel to integrate with … everything!

      Apache Camel is an extremely powerful integration library that comes with hundreds of components to integrate with third-party systems. jBPM, on the other hand, provides great support for business processes and cases. In many situations data produced by jBPM must be pushed to external systems, or business processes need to be informed about changes in external systems that can influence business logic.

      So why not combine the two and provide a state-of-the-art business solution that can focus on business goals and yet integrate with pretty much anything in the world?

      Improved camel-jbpm component

      The camel-jbpm component was added in version 2.6 of Camel. At that time it was based on jBPM 6.5 and provided only a producer, based on kie-remote-client, that interacted with the jBPM console (aka workbench) REST API. It’s been a while since then and, even more importantly, the jBPM console REST API for execution no longer exists, and the same applies to kie-remote-client. It has been completely replaced by the far more powerful KIE Server client.
      So it was high time to improve the camel-jbpm component: first of all to upgrade to the latest version (7.14) and replace kie-remote-client with kie-server-client on the producer side, and also to provide consumer support that enables simple integration with the outside world for pushing data out of jBPM.
      When it comes to the consumer side of the camel-jbpm component, users can now take advantage of the following jBPM integrations empowered by Camel:
      • ProcessEventListeners
      • TaskLifeCycleEventListeners
      • CaseEventListeners
      • EventEmitter
      All of these can easily be configured as Camel routes. Here is a simple example that is triggered by process events generated by jBPM:
      <routes xmlns="http://camel.apache.org/schema/spring">
          <route id="processes">
              <from uri="jbpm:events:process"/>
              <filter>
                  <simple>${in.header.EventType} == 'beforeProcessStarted'</simple>
                  <to uri="log:kjar.processes?level=INFO&amp;showBody=true&amp;showHeaders=true"/>
              </filter>
          </route>
      </routes>
      As you can see, as soon as events are produced on the jbpm:events:process endpoint, a new exchange is processed; it goes through the filter, which keeps only beforeProcessStarted events (each event type is set as a header), and the body is the actual event produced by jBPM.

      NOTE: if you need more than one route on the same consumer type, suffix the endpoint with a classifier of some sort to make it unique, e.g. jbpm:events:process:startedOnly

      Similar endpoints can be used for user tasks and cases

      • jbpm:events:tasks
      • jbpm:events:cases

      Configure routes

      Routes can be configured either at the application level (KIE Server or a business app) or at the kjar level.
      The camel-jbpm component comes with a KIE Server extension that is automatically registered in KIE Server when the jar file is present – see the Installation section for more details.
      Global routes should be created in the root of the application class path (kie-server.war/WEB-INF/classes) in a file named global-camel-routes.xml.
      Such global routes apply to all kjars deployed to KIE Server.
      KJAR specific routes can also be used by placing a camel-routes.xml file in the root of the kjar class path (the src/main/resources folder of the kjar source). When such a file is found, a new (kjar-scoped) CamelContext is created with all the routes defined in that file. These routes only apply to that specific KIE Container.


      Installation

      Installation is really simple: it requires dropping two jar files into kie-server.war/WEB-INF/lib
      • camel-core
      • camel-jbpm
      and that’s it – start the server and you will see the Camel KIE Server extension boot and do its thing 🙂
      If you would like to use another component to interact with, do the same: drop the component jar file and its runtime dependencies. For the sake of example we use camel-kafka, which requires these jar files to be placed in kie-server.war/WEB-INF/lib
      • camel-kafka-2.19.0.jar
      • kafka-clients-
      • lz4-1.3.0.jar
      • snappy-java-

      NOTE: Make sure to use camel-kafka and kafka-clients matching your Kafka cluster.


      A simple use case to illustrate this takes advantage of the camel-jbpm consumer to react to events produced by jBPM for both tasks and processes:
      • for tasks we just log them to the console
      • for processes we push them out to Kafka
      Here is the camel-routes.xml for this example:
      <routes xmlns="http://camel.apache.org/schema/spring">
          <route id="processes">
              <from uri="jbpm:events:process:test"/>
              <filter>
                  <simple>${in.header.EventType} starts with 'before'</simple>
                  <transform>
                      <simple>${in.header.EventType} for process instance ${body.processInstance.id}</simple>
                  </transform>
                  <to uri="kafka:TestLog?brokers=localhost:9092"/>
              </filter>
          </route>

          <route id="tasks">
              <from uri="jbpm:events:task:test"/>
              <filter>
                  <simple>${in.header.EventType} starts with 'before'</simple>
                  <to uri="log:kjar.tasks?level=INFO&amp;showBody=true&amp;showHeaders=true"/>
              </filter>
          </route>
      </routes>
      and here is just a short screencast showing this in action

      IMPORTANT: This improved camel-jbpm component is not yet released; it will go out with the Apache Camel 2.23.0 release that is expected in a couple of days from now. So prepare yourself and make sure to give it a go.

      A sample project with just camel logging the events can be found here.

      Implement your own form renderer for KIE Server

      As described in this article, KIE Server now provides form renderers for process and task forms built in jBPM Console (aka workbench). Out of the box two renderers are provided:

      • based on PatternFly to provide the same look and feel as the rest of the jBPM tooling – this is the default renderer
      • based on Bootstrap to provide a simple alternative that utilises a well-established framework for building web and mobile UIs
      This obviously won’t cover all possible user needs, and thus the renderers are pluggable. In this article we build a custom one from scratch to illustrate what it takes to create your own.

      Create project with dependencies

      First of all, a new Maven project needs to be created. It should be a most basic project with jar packaging. Then let’s add the required dependencies to pom.xml:

      <project xmlns="http://maven.apache.org/POM/4.0.0"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

          <modelVersion>4.0.0</modelVersion>

          <!-- coordinates are illustrative; see the sample repository linked at the end -->
          <groupId>org.kie.server.samples</groupId>
          <artifactId>custom-form-renderer</artifactId>
          <version>1.0-SNAPSHOT</version>
          <packaging>jar</packaging>
          <name>Custom Form Renderer</name>

          <dependencies>
              <!-- module providing AbstractFormRenderer; verify the artifactId for your KIE Server version -->
              <dependency>
                  <groupId>org.kie.server</groupId>
                  <artifactId>kie-server-services-jbpm-ui</artifactId>
                  <version>${version.org.kie}</version>
                  <scope>provided</scope>
              </dependency>
          </dependencies>
      </project>

      Create configuration folders

      Create folders in the project that will configure the renderer – all under src/main/resources:

      • form-templates-providers – folder that will contain the templates, CSS and JavaScript files used to render the form
      • META-INF/services/ – an empty service descriptor file that will be used as the discovery mechanism to find and register the renderer – it will be edited a bit later with the actual implementation details

      Create form renderer implementation

      In src/main/java create a class (e.g. org.kie.server.samples.CustomFormRenderer) that extends AbstractFormRenderer and implements the required methods:
      • getName – provides the name of the template, used as a reference when rendering
      • loadTemplates – main implementation that loads the different types of templates used by the renderer
      • default constructor
      IMPORTANT: this new class must be configured as the implementation of the renderer, so add its fully qualified class name to the service descriptor file created under META-INF/services.
      There are several types of templates that a renderer must provide (and load on startup):
      • master – main template that builds the HTML page
      • header – header template that creates header of the form
      • form-group – form input fields template
      • case-layout – layout for case forms
      • process-layout – layout for process forms
      • task-layout – layout for user task forms
      • table – table to be built for multi subforms
      The easiest way is to base your customisation on the out-of-the-box templates (either PatternFly or Bootstrap). In this example I will use the Bootstrap templates, which can be found here.
      Copy all resources from the linked directory into form-templates-providers/custom
      and then implement the loadTemplates method of the CustomFormRenderer class:
      package org.kie.server.samples;

      import org.kie.server.services.jbpm.ui.form.render.AbstractFormRenderer;
      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;

      public class CustomFormRenderer extends AbstractFormRenderer {

          private static final Logger logger = LoggerFactory.getLogger(CustomFormRenderer.class);

          public CustomFormRenderer() {
              super(null, null);
          }

          public CustomFormRenderer(String serverPath, String resources) {
              super(serverPath, resources);
          }

          @Override
          public String getName() {
              return "custom";
          }

          @Override
          protected void loadTemplates() {
              loadTemplate(MASTER_LAYOUT_TEMPLATE, this.getClass().getResourceAsStream("/form-templates-providers/custom/master-template.html"));
              loadTemplate(PROCESS_LAYOUT_TEMPLATE, this.getClass().getResourceAsStream("/form-templates-providers/custom/process-layout-template.html"));
              loadTemplate(TASK_LAYOUT_TEMPLATE, this.getClass().getResourceAsStream("/form-templates-providers/custom/task-layout-template.html"));
              loadTemplate(FORM_GROUP_LAYOUT_TEMPLATE, this.getClass().getResourceAsStream("/form-templates-providers/custom/input-form-group-template.html"));
              loadTemplate(HEADER_LAYOUT_TEMPLATE, this.getClass().getResourceAsStream("/form-templates-providers/custom/header-template.html"));
              loadTemplate(CASE_LAYOUT_TEMPLATE, this.getClass().getResourceAsStream("/form-templates-providers/custom/case-layout-template.html"));
              loadTemplate(TABLE_LAYOUT_TEMPLATE, this.getClass().getResourceAsStream("/form-templates-providers/custom/table-template.html"));

              logger.info("Custom Form renderer templates loaded successfully.");
          }
      }
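The service descriptor mentioned above is a plain text file whose name is the fully qualified interface name and whose content is your implementation class. The interface FQN below is an assumption – verify it against the package of AbstractFormRenderer in your KIE Server version:

```
# file: src/main/resources/META-INF/services/org.kie.server.services.jbpm.ui.form.render.FormRenderer
org.kie.server.samples.CustomFormRenderer
```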


      Customise your templates

      Since the templates were copied from another renderer we need to customise them. Let’s start with the master template: open it and replace ${serverPath}/bootstrap with ${serverPath}/custom.
      This will ensure that our customised files are loaded.
      Make any additional changes to the master template as needed. I will just add custom text next to the header.
      The master template is the place where you can add additional scripts or stylesheets. There is a main JS file called kieserver-ui.js that provides all the logic required to manage and submit forms. It also includes validation, so if you need to extend that logic, consider creating a new file with your changes and pointing the template at your new file.
      Make additional customisation to other templates as needed.
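For example, the master template could pull in a customised copy of the UI logic and extra styling like this (file names are hypothetical):

```html
<!-- load a customised copy of the default UI logic instead of kieserver-ui.js -->
<script src="${serverPath}/custom/js/custom-kieserver-ui.js"></script>
<!-- additional stylesheet for the custom renderer -->
<link rel="stylesheet" href="${serverPath}/custom/css/custom-style.css"/>
```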

      Build and deploy renderer to KIE Server

      The implementation is complete, so now it’s time to build the project and deploy it to KIE Server.
      • Build the project with Maven – mvn clean package
      • Deploy the project to KIE Server by copying the jar file to kie-server.war/WEB-INF/lib
      Start the server and take advantage of your custom renderer by using the URL that works for one of the sample projects – Evaluation (make sure to deploy it before using the renderer).
      As you can see, the new renderer is fully operational and customised to your needs.

      That’s it – you now have your own custom form renderer. The sample described in this article can be found on GitHub.

      Launch of Business Applications

      The time has come – Business Applications are here!!!

      It’s a great pleasure to announce that Business Applications are now officially launched and ready for you to get started.
      A business application can be defined as an automated solution, built with selected frameworks and capabilities, that implements business functions and/or solves business problems. Capabilities can be (among others):
      • persistence
      • messaging
      • transactions
      • business processes
      • business rules
      • planning solutions
      Capabilities essentially define the features that your business application will be equipped with. The available options are:
      • Business automation covers features for process management, case management, decision management and optimisation. These are configured by default in the service project of your business application, although you can turn them off via configuration.
      • Decision management covers mainly decision and rules related features (backed by Drools project)
      • Business optimisation covers planning problems and solutions related features (backed by OptaPlanner project)

      A business application is more of a logical grouping of individual services that represent certain business capabilities. Usually they are deployed separately and can also be versioned individually. The overall goal is that the complete business application allows a particular domain to achieve its business goals, e.g. order management, accommodation management, etc.

      A business application consists of various project types

      • data model – basic maven/jar project to keep the data structures
      • business assets – kjar project that can be easily imported into workbench for development
      • service – service project that will include chosen capabilities with all bits configured
      Read more about business applications here

      Get started now!

      To get started with your first business application, just go to the generator site and generate your business application. This will provide you with a zip file that consists of the (selected) projects, ready to run.
      Once you have the application up and running, have a look at the documentation, which provides a detailed description of business applications and the various configuration and development options.
      Make sure not to miss the tutorials included in the official documentation… these are constantly updated, so more and more guides are on the way. Each release will introduce at least two new tutorials… so stay tuned.

      Samples and more

      business-applications samples
      The Business Applications launch cannot happen without quite a few examples that can give you some ideas on how to get going; to name just a few (and again, more are coming):
      • Driver pickup with IFTTT 
      • Dashboard app with Thymeleaf
      • IT Orders with tracking service built with Vert.x
      • Riot League of Legends
      This business applications GitHub organisation also includes the source code for the tutorials, so make sure to visit it (and stay around for a bit, as more will come).

      Call for contribution and feedback

      Last but not least, we would like to call for contributions and feedback. Please give this approach a go and let us know what you think, what we could improve, or share ideas for business applications you might have.
      Reach out to us via the standard channels such as the mailing lists or IRC channel.

      Handle service exceptions via subprocess

      Interacting with services as part of your business process (or, more generally, business automation) is a common requirement. We all know that services tend to fail from time to time, and business automation solutions should be able to cope with that. A worthwhile article on this was recently published by Donato Marrazzo: Reducing data inconsistencies with Red Hat Process Automation Manager.

      The feature described in this article was actually inspired by a discussion with Donato, so all credit goes to him!

      BPMN2 already has a construct for a similar purpose – error boundary events that can easily be attached to service tasks to deal with exceptions and perform additional processing or decision making. This approach has some drawbacks in more advanced scenarios:

      • error handling needs to be done at the individual task level
      • retrying the same service call needs to be done via process modelling – usually a loop
      • each error type needs to be handled separately, which makes the process definition too verbose
      The first point in the list above can be addressed by using an event subprocess that starts with an error event, but that still suffers from the other two points.
      To address this, additional error handling was introduced (in jBPM version 7.13) that allows work item handlers (which implement the logic responsible for service interaction) to throw a special type of exception: org.kie.api.runtime.process.ProcessWorkItemHandlerException.
      This exception requires three parameters:
      • the id of the process that should be started to deal with the exception
      • the handling strategy to apply when the exception handling process completes
      • the root cause exception
      When such an exception is thrown from a work item handler, it triggers automatic error handling by starting a subprocess instance with the definition identified by the process id set on the exception. If that process is straight-through (meaning it has no wait states), the failed service task applies the handling strategy directly; otherwise the service task is put in a wait state until the subprocess instance completes, and then the strategy is applied to the service task.

      Supported strategies

      There are four predefined strategies:
      • COMPLETE – completes the service task with the variables from the completed subprocess instance – these variables are given to the service task as the output of the service interaction and thus mapped to main process instance variables
      • ABORT – aborts the service task and moves the process on without setting any variables
      • RETRY – retries the service task logic (calls the work item handler again) with variables from both the original service task parameters and the subprocess instance – variables from the subprocess instance override any variables of the same name
      • RETHROW – simply throws the error back to the caller – this strategy should not be used with subprocesses containing wait states, as it will simply roll back the transaction and thus the completion of the subprocess instance
      With this feature, service implementors (those who implement work item handlers) can decide how an exception should be handled and which strategy to apply to the failing service task once the error has been evaluated and fixed.
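A minimal sketch of a work item handler throwing ProcessWorkItemHandlerException – the handler class and the error handling process id are hypothetical, and you should check the constructors available in your kie-api version for the exact signature:

```java
package org.example;

import org.kie.api.runtime.process.ProcessWorkItemHandlerException;
import org.kie.api.runtime.process.WorkItem;
import org.kie.api.runtime.process.WorkItemHandler;
import org.kie.api.runtime.process.WorkItemManager;

public class UnreliableServiceTaskHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        try {
            // call the external service here ...
            manager.completeWorkItem(workItem.getId(), null);
        } catch (Exception e) {
            // start the 'ServiceErrorHandling' subprocess and, once it completes,
            // retry this service task with the variables that subprocess produced
            throw new ProcessWorkItemHandlerException("ServiceErrorHandling", "RETRY", e);
        }
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // nothing to clean up in this sketch
    }
}
```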

      Out of the box handlers and exception handling

      Three major out-of-the-box work item handlers are equipped with this handling automatically, as soon as they are created with a process id for exception handling and a strategy. This takes over the regular error handling, which otherwise applies as if the RETHROW strategy had been selected.

      • RESTWorkItemHandler
      • WebServiceWorkItemHandler
      • EmailWorkItemHandler

      A sample registration of RESTWorkItemHandler via the deployment descriptor would be

      new RESTWorkItemHandler("username", "password", classLoader, "handlingProcessId", "handlingStrategy")

      Look at the available constructors of the given handler class to see all possible options. The Email and WebService handlers can be configured in a similar way to handle exceptions via a subprocess.
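In the deployment descriptor this maps to a work item handler entry whose constructor is given as an MVEL expression; the process id and strategy values below are illustrative:

```xml
<work-item-handlers>
    <work-item-handler>
        <resolver>mvel</resolver>
        <identifier>new org.jbpm.process.workitem.rest.RESTWorkItemHandler("username", "password", classLoader, "error-handling-process", "RETRY")</identifier>
        <parameters/>
        <name>Rest</name>
    </work-item-handler>
</work-item-handlers>
```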

      Use cases

      Here are a few examples of what could be implemented:
      • report to an administrator that a service task failed – this would be done via a subprocess with a user task assigned to administrators
      • use a timer based subprocess to introduce some delay between retries
      • ask business users to provide the information when a system is down in time critical situations
      Many more use cases can be implemented this way, and one of the most important aspects is that it does not have to be modelled for each and every task that could fail. It is up to the service handler to instruct what should be done in case of errors, and how.

      In action…

      A short screencast shows this in action: a simple process with a custom service task whose work item handler throws ProcessWorkItemHandlerException, starting a user-focused subprocess to provide the expected values and then applying the COMPLETE strategy.

      Here is the complete sample repository shown in the above screencast.

      Hopefully you will find this useful and it will allow you to better automate your business and avoid repetitive actions…

      Visit the jBPM website to download the latest version of the jBPM project.

      Let’s embed forms … rendered by KIE Server

      jBPM comes with a rather sophisticated form modeller that allows forms for processes and tasks to be built graphically. These forms can then be used to interact with the process engine to start new instances or complete user tasks.

      One of the biggest advantages of using forms built in workbench is that they share the same life cycle as your business assets (processes and user tasks). As such, they are versioned in exactly the same way – so if another version of a process requires more information to start, you simply create a new version of the project and make changes to both the process definition and the form. Once deployed, you can start different versions of the process using dedicated forms.

      However, to take advantage of these forms, users had to be logged into workbench, as the only way to render the content was… through workbench itself. Those days are now over… KIE Server provides pluggable renderers for forms created in workbench. That means you can interact solely with KIE Server to perform all the needed operations. So what does this bring?

      • renders process forms – used to start new instances
      • renders case forms – used to start new case instances – includes both data and role assignments
      • renders user task forms – used to interact with user tasks – includes life cycle operations
      Worth noting is that the rendered forms are fully operational, meaning they come with buttons to perform all operations relevant to the context – e.g. if a user task is in the in-progress state, there are buttons to stop, release, save and complete it.
      Here are a few screenshots of how the forms look; these are taken from the sample projects that come out of the box with the jBPM distribution:
      Evaluation start process form
      Mortgage start process form
      IT Orders start case form
      As mentioned, form renderers are pluggable, and out of the box there are two implementations:
      • based on PatternFly – this is the default renderer that keeps the look and feel consistent with workbench
      • based on Bootstrap
      Renderers can be switched for each form rendering request by simply appending a query parameter:
      ?renderer=patternfly or ?renderer=bootstrap; if not given, patternfly is the default.
      Here are a few examples of REST endpoints that illustrate how to get these forms rendered:



      Note that containers are referenced by alias, which brings additional benefits when working with forms and multiple project versions.

      And to finish, a few short screencasts showing this feature in action:

      Evaluation process

      Mortgage process

      IT Orders case

      Multi Sub Form – dealing with list of items in forms

      More technical information will be provided in the next article as this one is just a quick preview of what’s coming. Hope you like it and don’t forget to provide feedback!

      Performance baseline for jBPM 7 (7.8.0)

      The aim of this article is to present baseline information about jBPM performance, to set a baseline and answer the basic question of how well jBPM performs when it comes to execution. This is not to be seen as competitive information, or to show that jBPM is faster or slower than other engines, but rather to set the stage and open the door for more performance tests in different types of environments.


      The performance test is executed against KIE Server, so it actually measures the performance of jBPM as a running service instead of focusing on raw execution of the APIs. So anyone can perform these tests by following the instructions at the end of this article.


      The test has been executed on:
      • community 7.8.0 single zip distribution, available from the jBPM download page
        • WildFly 11
        • PostgreSQL database
      • hardware
        • macOS 10.13.4
        • Processor Intel Core i7 2.3 GHz
        • Memory 16GB
      • JMeter as the test client
      All components (client, application server and database) are on the same hardware, meaning they share the resources.


      Three scenarios were selected for this test, each executed with various concurrency settings.

      Script task

      The most basic process definition: it runs directly from beginning to end without persisting any state in between.
      This test consists of just a single call to KIE Server.

      User task

      A user task based process that persists its state when reaching the user task activity. Completion of the task is done in a separate call.
      This test consists of three calls to KIE Server:
      • start process
      • get tasks for given process instance
      • complete first task

      Parallel script and user tasks

A more advanced process definition that combines both user and script tasks with parallel gateways.
This test consists of five calls to KIE Server:
      • start process
      • get tasks for given process instance
      • complete first task
      • get tasks for given process instance
      • complete second task
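The calls listed for each scenario map to KIE Server REST endpoints. Below is a minimal sketch of the user task scenario's three calls, assuming the standard KIE Server REST paths; the container id, process id, and task/instance ids are illustrative assumptions, not values from the actual test project.

```java
import java.util.List;

// Sketch of the KIE Server REST calls behind the user task scenario.
// Endpoint paths follow the KIE Server REST API; ids are illustrative.
public class ScenarioCalls {
    static final String BASE = "/server";

    // POST creates a new process instance and returns its id
    static String startProcess(String containerId, String processId) {
        return "POST " + BASE + "/containers/" + containerId
                + "/processes/" + processId + "/instances";
    }

    // GET returns task summaries for the given process instance
    static String tasksForInstance(long processInstanceId) {
        return "GET " + BASE + "/queries/tasks/instances/process/" + processInstanceId;
    }

    // PUT moves the task to the Completed state
    static String completeTask(String containerId, long taskId) {
        return "PUT " + BASE + "/containers/" + containerId
                + "/tasks/" + taskId + "/states/completed";
    }

    public static void main(String[] args) {
        // The three calls issued per process instance in the user task scenario
        List<String> userTaskScenario = List.of(
                startProcess("perf-kjar", "usertask-process"),
                tasksForInstance(1L),
                completeTask("perf-kjar", 1L));
        userTaskScenario.forEach(System.out::println);
    }
}
```

The parallel scenario simply repeats the query and completion calls for the second task, giving five calls in total.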

      Performance test

Tests are separated per scenario and then by the number of concurrent threads. Each test is designed to run a fixed number of process instances (1000) in the shortest possible time.

      Script task execution results

Actual figures:
• 1 thread – 11 421 ms (91 instances/s)
• 4 threads – 4 428 ms (240 instances/s)
• 8 threads – 3 124 ms (361 instances/s)

      User task execution results

Actual figures:
• 1 thread – 64 439 ms (16 instances/s)
• 4 threads – 18 397 ms (52 instances/s)
• 8 threads – 13 927 ms (72 instances/s)

      NOTE: throughput is for complete process instance execution including completion of user task

      Parallel script and user tasks execution results

Actual figures:
• 1 thread – 153 543 ms (11 instances/s)
• 4 threads – 34 769 ms (45 instances/s)
• 8 threads – 20 426 ms (70 instances/s)

      NOTE: throughput is for complete process instance execution including completion of user tasks
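As a rough cross-check, throughput can also be derived naively from the elapsed times above (1000 instances per run). The derived numbers do not exactly match the reported instances/s figures, which suggests the test tool measures throughput over its own sampling window rather than total wall time; this sketch only shows the naive division.

```java
// Naive throughput derivation: 1000 process instances divided by total
// elapsed wall time. The instances/s figures reported above were measured
// by the test tool itself and can differ from this simple division.
public class Throughput {
    static double perSecond(int instances, long elapsedMs) {
        return instances * 1000.0 / elapsedMs;
    }

    public static void main(String[] args) {
        System.out.printf("script task, 8 threads:    %.0f instances/s%n",
                perSecond(1000, 3_124));
        System.out.printf("user task, 8 threads:      %.0f instances/s%n",
                perSecond(1000, 13_927));
        System.out.printf("parallel tasks, 8 threads: %.0f instances/s%n",
                perSecond(1000, 20_426));
    }
}
```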


These performance results show the base performance of jBPM execution through KIE Server – meaning they include network and marshalling overhead. The application server and hardware have not been tuned in any way, and the sample processes are simple as well. With that said, this is not meant to be a complete performance report but rather a baseline. More advanced performance tests can be performed on dedicated hardware, with the application server and database tuned for optimal performance.

      Instruction for execution

In case someone would like to try these tests themselves, here are the steps:
      1. Download and install jBPM 7.8.0 (or newer)
        1. download 
        2. getting started 
2. Change the database to PostgreSQL or MySQL – see getting started (bottom of the page)
      3. Import this project into workbench on your running jBPM server
      4. Download and start JMeter
      5. Open this script in JMeter
      6. Run the selected scenario.

      jBPM 7.8 native execution of BPMN2, DMN 1.1 and CMMN 1.1

With the upcoming 7.8 release of jBPM there is quite a nice thing to announce – native execution of:

      • BPMN2 – was there already for many years
      • DMN 1.1 – from the early days of version 7
      • CMMN 1.1 – comes with version 7.8
The biggest thing coming with 7.8 is actually CMMN execution. It is mainly added for completeness, so that people who would like to model a case with CMMN can actually execute it directly on jBPM (via KIE Server or embedded).
Although jBPM now supports CMMN, it is still recommended to use BPMN2 and the case management features of jBPM for advanced cases, to benefit from what both specifications bring rather than being limited to a particular approach. Nevertheless, CMMN can be a good visualisation for less complex cases where data and loosely coupled activities build a good business view.
Disclaimer: jBPM currently does not provide, nor plans to provide, any modelling capabilities for CMMN.
With that said, let's take a quick look at what is supported from the CMMN specification, as it obviously does not cover 100% of the spec:

      • tasks (human task, process task, decision task, case task)
      • discretionary tasks (same as above)
      • stages
      • milestones
      • case file items
      • sentries (both entry and exit)
Not all task attributes are supported – required, repetition and manual activation are currently not supported, although most of that behaviour can still be achieved using different constructs.
Sentries for individual tasks are limited to entry criteria, while both entry and exit criteria are supported for stages and milestones.
A decision task by default maps to a DMN decision, although a ruleflow-group based decision is also possible with a simplified syntax – decisionRef should be set to the ruleflow-group attribute value.
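To illustrate the simplified ruleflow-group syntax, a CMMN decision task could look roughly like the fragment below. This is an illustrative sketch only – namespaces, plan items, and the surrounding case plan model are omitted, and the id, name, and ruleflow-group values are made up:

```
<!-- Illustrative fragment; with jBPM's simplified syntax the decisionRef
     carries the ruleflow-group name directly instead of referencing a
     separate <decision> element as in the plain CMMN 1.1 spec. -->
<decisionTask id="dt_evaluate" name="Evaluate"
              decisionRef="evaluation-rules"/>
```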
Event listeners are not supported, as they do not bring much value for execution; the CaseEventListener support in jBPM should be used as a substitute.
Let's have a quick look at how the sample IT Orders case would look when designed in CMMN.
Some might say it's more or less readable – frankly speaking, it's just a matter of preference.
      Here is a screencast showing this CMMN model being executed 

      Next I’d like to show the true power of jBPM – execution of all three types of models:
      • CMMN for top level case definition
      • DMN for decision service
      • BPMN2 for process execution
You can add all of them into a kjar (via import asset in the workbench), build it, deploy from the workbench directly to KIE Server, and execute. So here are our assets:
      A case definition that has:
      • decision task that invokes DMN decision that calculates vacation days (Total Vacation Days)
      • two human tasks that are triggered based on the data (entry criterion)
      • process task that invokes BPMN2 process if the entry condition is met
      Here is our DMN model
and last but not least the BPMN2 process (actually the simplest one, but still a valid process).
Another thing to mention is that all the models were done with the Trisotech editors, to illustrate that they can simply be created with another tool and imported into the kjar for execution.
Here is another screencast showing all this step by step: exporting from Trisotech, importing into the workbench, building and deploying the kjar, and lastly executing on KIE Server.

That's all to share for now – 7.8 is just around the corner, so keep your eyes open and stay tuned to learn more.

And at the end, here are the links to the projects (kjars) in GitHub


      single zip distribution for jBPM

To simplify the getting started experience for users, I'd like to showcase a single zip distribution that includes:

      • WildFly server (at the moment version 11.0.0.Final)
• workbench (aka jBPM console)
      • kie server with all capabilities enabled
      • jBPM case management show case application
All of them are perfectly configured and ready to run with just a single, short command:
or on Windows
The only thing the user needs to do is download, unzip and run!
But that's not all that comes with this single zip distribution – it also ships very handy scripts that make switching to a different database as easy as one click.
      There are three databases supported out of the box:
• H2 – default, with a file-based database stored under WILDFLY_HOME/standalone/data
      • MySQL
      • PostgreSQL
MySQL and PostgreSQL must be installed before use. Moreover, the scripts assume the following values:

      • host -> localhost
      • port -> 3306 for MySQL and 5432 for PostgreSQL
      • database name -> jbpm
      • user name -> jbpm
      • password -> jbpm
In case these values are not correct, edit them in the script files:
• jbpm-mysql-config.cli for MySQL
• jbpm-postgres-config.cli for PostgreSQL
In both scripts the values to be updated are on line 17; the scripts are located under WILDFLY_HOME/bin.
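For orientation, the values to edit correspond to standard WildFly datasource attributes. The fragment below is illustrative only – the actual .cli scripts shipped with the distribution may be structured differently, and only the highlighted values (host, port, database name, user, password) match the defaults listed above:

```
# Illustrative WildFly CLI datasource attributes; the real script layout
# may differ – edit only the connection values to match your database.
data-source add --name=jbpmDS \
  --jndi-name=java:jboss/datasources/jbpmDS \
  --driver-name=mysql \
  --connection-url=jdbc:mysql://localhost:3306/jbpm \
  --user-name=jbpm --password=jbpm
```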
To switch to MySQL, stop the server and run the following command while it is stopped:
<WILDFLY_HOME>/bin/jboss-cli.sh --file=jbpm-mysql-config.cli      (Unix / Linux)
<WILDFLY_HOME>\bin\jboss-cli.bat --file=jbpm-mysql-config.cli     (Windows)
To switch to PostgreSQL, stop the server and run the following command while it is stopped:
<WILDFLY_HOME>/bin/jboss-cli.sh --file=jbpm-postgres-config.cli   (Unix / Linux)
<WILDFLY_HOME>\bin\jboss-cli.bat --file=jbpm-postgres-config.cli  (Windows)
Next, start the server again and all your data will be stored in the external database.
      All this in action can be seen in this “not so short” screencast

      As usual feedback welcome and please share your opinion if you’d like to see this in the official distribution of jBPM.
For those who would like to give it a go directly, here is the project – just clone it and build it locally. In case you want to use another version of jBPM, change the property named kie.version to the version number you want to use.