Introducing jBPM’s Human Task recommendation API

In this post, we’ll introduce a new jBPM API which allows predictive models to be trained with Human Task (HT) data, and allows HTs to incorporate model predictions as outputs, even completing tasks without user interaction.

This API allows you to add Machine Learning capabilities to your jBPM project, for instance by using models trained with historical task data to recommend the most likely outputs. It also gives developers the flexibility to implement a “recommendation-only” service (which only suggests outputs) or to complete tasks automatically when a prediction’s confidence meets a user-defined confidence threshold.
The API exposes HT handling to a recommendation service.
A recommendation service is simply any third-party class which implements the org.kie.internal.task.api.prediction.PredictionService interface.

This interface consists of three methods, sketched below:

  • getIdentifier() – a method which returns a unique (String) identifier for your prediction service
  • predict(Task task, Map<String, Object> inputData) – a method that takes the task information and the task’s inputs, as a map, from which we will derive the model’s inputs. The method returns a PredictionOutcome instance, which we will look at in closer detail later on
  • train(Task task, Map<String, Object> inputData, Map<String, Object> outputData) – this method, similarly to predict, takes the task information and the task’s inputs, but additionally takes the task’s outputs, as a map, for training
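
Put together, the interface looks roughly like this (a sketch based on the description above; check the org.kie.internal sources for the authoritative definition, and note that train is assumed here to return void):

import java.util.Map;

import org.kie.api.task.model.Task;

public interface PredictionService {

    // A unique identifier for this prediction service
    String getIdentifier();

    // Derives the model's inputs from the task and its input map
    // and returns a (possibly empty) prediction.
    // PredictionOutcome lives in the same package.
    PredictionOutcome predict(Task task, Map<String, Object> inputData);

    // Trains the model with a completed task's inputs and outputs
    void train(Task task, Map<String, Object> inputData, Map<String, Object> outputData);
}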

A PredictionOutcome instance will consist of:

  • A Map<String, Object> outcome containing the prediction outputs, where each entry represents an output attribute’s name and value. This map can be empty, which corresponds to the model not providing any prediction.
  • A confidence value. The meaning of this field is left to the developer (e.g. it could represent a probability between 0.0 and 1.0). Its relevance is tied to the confidenceThreshold below.
  • A confidenceThreshold – this value represents the confidence cutoff above which an action can be taken by the HT item handler.

As an example, let’s assume our confidence represents a prediction probability between 0.0 and 1.0. If the confidenceThreshold is 0.7, then for confidence > 0.7 the HT outputs would be set to the outcome and the task automatically closed; for confidence <= 0.7, the HT would set the prediction outcome as suggested values, but the task would not be closed and would still require human interaction. If the outcome is empty, the HT life cycle proceeds as if no prediction was made.
By defining a confidence threshold which is always higher than the confidence, developers can create a “recommendation-only” service, which will assign predicted outputs to the task, but never complete it.
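
As a minimal, hypothetical sketch, such a recommendation-only service could look as follows; lookupApproved stands in for a real model call, and the PredictionOutcome constructor is assumed to take the confidence, the threshold and the outcome map described above:

import java.util.HashMap;
import java.util.Map;

import org.kie.api.task.model.Task;
import org.kie.internal.task.api.prediction.PredictionOutcome;
import org.kie.internal.task.api.prediction.PredictionService;

public class RecommendationOnlyService implements PredictionService {

    public String getIdentifier() {
        return "RecommendationOnly";
    }

    public PredictionOutcome predict(Task task, Map<String, Object> inputData) {
        Map<String, Object> outcome = new HashMap<>();
        outcome.put("approved", lookupApproved(inputData)); // suggested output

        double confidence = 0.5;          // whatever the model reports
        double confidenceThreshold = 1.0; // always above the confidence, so the
                                          // outputs are only ever suggested and
                                          // the task is never completed automatically
        return new PredictionOutcome(confidence, confidenceThreshold, outcome);
    }

    public void train(Task task, Map<String, Object> inputData, Map<String, Object> outputData) {
        // feed the completed task's inputs and outputs to the model here
    }

    private Boolean lookupApproved(Map<String, Object> inputData) {
        return Boolean.TRUE; // placeholder for a real model prediction
    }
}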

The initial step is then, as described above, the predict step. If the prediction’s confidence is above the threshold, the task is automatically completed. If it is not, then when the task is eventually completed by a user, both the inputs and the outputs are used to further train the model, by calling the prediction service’s train method.

Example project

An example project is available here. This project consists of a single Human Task, which can be inspected using Business Central. The task is generic and simple enough to demonstrate the workings of jBPM’s recommendation API.

For the purposes of the demonstration, this task will be used to model a simple purchasing system, where the purchase of a laptop of a certain brand is requested and must eventually be manually approved. The task’s inputs are:

  • item – a String with the brand’s name
  • price – a Float representing the laptop’s price
  • ActorId – a String representing the user requesting the purchase

The task provides as outputs:

  • approved – a Boolean specifying whether the purchase was approved or not

This repository contains two example recommendation service implementations, as Maven modules, and a REST client to populate the project with tasks, allowing the predictive model to be trained.

Start by downloading, or alternatively cloning, the repository:

$ git clone git@github.com:ruivieira/jbpm-recommendation-demo.git
 

For this demo, two random forest-based services, one using the SMILE library and another as a Predictive Model Markup Language (PMML) model, will be used. The services, located respectively in services/jbpm-recommendation-smile-random-forest and services/jbpm-recommendation-pmml-random-forest, can be built with (using SMILE as an example):

$ cd services/jbpm-recommendation-smile-random-forest
$ mvn clean install

The resulting JAR files can then be included in Business Central’s kie-server.war, located in the standalone/deployments directory of your jBPM server installation. To do this, simply create a WEB-INF/lib directory, copy the compiled JARs into it, and run:

$ zip -r kie-server.war WEB-INF

The PMML-based service expects to find the PMML model in META-INF, so after copying the PMML file at jbpm-recommendation-pmml-random-forest/src/main/resources/models/random_forest.pmml into META-INF, it should also be added to the WAR by using:

$ zip -r kie-server.war META-INF

jBPM will search for a recommendation service with the identifier specified by a Java property named org.jbpm.task.prediction.service. Since, in our demo, the SMILE-based random forest service has the identifier SMILERandomForest, we can set this value when starting Business Central, for instance as:

$ ./standalone.sh -Dorg.jbpm.task.prediction.service=SMILERandomForest

For the purposes of this post, we will illustrate the steps using the SMILE-based service. The PMML-based service can be used instead by starting Business Central with the property set as:

$ ./standalone.sh -Dorg.jbpm.task.prediction.service=PMMLRandomForest

Once Business Central has completed startup, you can go to http://localhost:8080/business-central/ and log in using the default admin credentials, wbadmin/wbadmin. After choosing the default workspace (or creating your own), select “Import project” and use the project’s git URL:

https://github.com/ruivieira/jbpm-recommendation-demo-project.git

The repository also contains a REST client (under client) which allows you to add Human Tasks in batch, so that there are enough data points to train the model and produce meaningful recommendations.

NOTE: Before running the REST client, make sure that Business Central is running and the demo project is deployed and also running.

The class org.jbpm.recommendation.demo.RESTClient performs this task and can be executed from the client directory with:

$ mvn exec:java -Dexec.mainClass="org.jbpm.recommendation.demo.RESTClient"

The prices for Lenovo and Apple laptops are drawn from Normal distributions with means of 1500 and 2500, respectively (pictured below). Although the recommendation service is not aware of the deterministic rules we’ve used to set the task outcomes, it will train the model based on the data it receives. The tasks’ completion will adhere to the following logic (sketched in code after the list):

  • The purchase of a laptop of brand Lenovo requested by user John or Mary will be approved if the price is around $1500
  • The purchase of a laptop of brand Apple requested by user John or Mary will be approved if the price is around $2500
  • The purchase of a laptop of brand Lenovo requested by user John or Mary will be rejected if the price is around $2500 
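
In code, this labelling could be sketched as follows (a hypothetical reconstruction; the actual RESTClient may differ in details such as tolerances):

public class ApprovalRule {

    // Requests come from John or Mary; approval depends on whether the price
    // is near the expected mean for the brand (~$1500 Lenovo, ~$2500 Apple)
    public static boolean approve(String item, double price) {
        boolean nearer1500 = Math.abs(price - 1500) < Math.abs(price - 2500);
        if ("Lenovo".equals(item)) {
            return nearer1500;  // a Lenovo priced around $2500 is rejected
        }
        if ("Apple".equals(item)) {
            return !nearer1500; // an Apple priced around $2500 is approved
        }
        return false;
    }
}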

    The client will then simulate the creation and completion of human tasks, during which the model will be trained.

    SMILE-based service

As we’ve seen, when creating and completing a batch of tasks (as above) we are simultaneously training the predictive model. The service implementation is based on a random forest, a popular ensemble learning method.

When running the RESTClient, 1200 tasks will be created and completed to provide a reasonably sized training dataset. The recommendation service initially has a confidence threshold of 1.0; after a sufficiently large number of observations (arbitrarily chosen as 1200) has been used for training, the threshold drops to 0.75. This simply demonstrates the two possible actions, i.e. recommending without completing, and completing the task. It also allows us to avoid any cold-start problems.

After the model has been trained with the tasks from RESTClient, we can now create a new Human Task.

If we create an HT requesting the purchase of an “Apple” laptop from “John” with the price $2500, we should expect it to be approved.

In fact, when claiming the task, we can see that the recommendation service recommends the purchase to be approved, with a “confidence” of 91%.

If we now create a task requesting a “Lenovo” laptop from “Mary” with the price $1437, we would expect it to be approved. We can see that this is the case: the form is filled in by the recommendation service with an approved status and a “confidence” of 86.5%.

We can also see, as expected, what happens when “John” tries to order a “Lenovo” for $2700: the recommendation service fills in the form as “not approved”, with a “confidence” of 71%.

At this point the confidence threshold is still 1.0, and as such none of these tasks were closed automatically.

The minimum number of data points was purposely chosen so that, after running the REST client and then completing a single task, the service will drop the confidence threshold to 0.75.

If we complete one of the above tasks manually, the next task we create will be automatically completed if the confidence is above 0.75. For instance, if we create a task we are pretty sure will be approved (e.g. John purchasing a Lenovo for $1500), we can verify that the task is automatically completed.

    PMML-based service

    The second example implementation is the PMML-based recommendation service. PMML is a predictive model interchange standard, which allows for a wide variety of models to be reused in different platforms and programming languages.

The service included in this demo consists of a pre-trained model (built with a dataset similar to the one generated by RESTClient), executed by a PMML engine. For this demo, the engine used was jpmml-evaluator, the de facto reference implementation of the PMML specification.

    There are two main differences when comparing this service to the SMILE-based one:

• The model doesn’t need a training phase: it has already been trained and serialised into the PMML format. This means that we can start using its predictions from jBPM straight away.
• The train API method is a no-op in this case: whenever the service’s train method is called, no actual training takes place (only the predict method is needed for a “read-only” model), as we can see from the figure below and the sketch that follows.
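
In other words, the train method of the PMML-based service is essentially empty, along these lines (a sketch of the method only):

@Override
public void train(Task task, Map<String, Object> inputData, Map<String, Object> outputData) {
    // intentionally a no-op: the PMML model is pre-trained and read-only
}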

    You can verify that the Business Central workflow is the same as with the SMILE service, although in this case no training is necessary.

The above instructions on how to set up the demo project are also available in the following video (details are in the subtitles):

    In conclusion, in this post we’ve shown how to use a new API which allows for predictive models to suggest outputs and complete Human Tasks.

    We’ve also shown a project which can use different recommendation service backends simply by registering them with jBPM without any changes to the project.

    Why not create your own jBPM recommendation service using your favourite Machine Learning framework, today?

        Kogito, ergo Rules — Part 1: Bringing Drools Further

The Kogito initiative is our pledge to bring our business automation suite to the cloud and the larger Kubernetes ecosystem. But what does this mean for our beloved rule engine, Drools? In this post we introduce modular rule bases using rule units: a feature that has been experimental for a while in Drools 7, but that will be instrumental for Kogito, where it will play a much bigger role. This is the first post of a series giving an overview of this feature (read part 2).

        Bringing Drools Further

Drools is our state-of-the-art, high-performance, feature-rich open source rule engine. People love it because it is a Swiss Army knife for the many problems that can be solved using rule-based artificial intelligence. But as the computer programming landscape evolves, we need to think of ways to bring Drools further as well. As you may already know, Kogito is our effort to make Drools and jBPM truly cloud-native and well suited for serverless deployments: we are embracing the Quarkus framework and GraalVM’s native binary compilation for super-fast startup times and low memory footprint; but we are not stopping there.

The way we want to bring Drools further is twofold: on the one hand, we want to make our programming model easier to reason about, by providing better ways to define boundaries in a rule base, with a better concept of module. On the other hand, the concept of modular programming dates back at least to the 1970s and Parnas’ original seminal paper; needless to say, if our contribution stopped there, we would be bringing nothing new to the plate. In the last few years, computing has evolved, slowly but steadily embracing the multicore and distributed revolution; yet, to this day, many general-purpose programming languages do not really make it simple to write parallel or distributed programs. With a rule-based programming system, we have the chance to propose something different: a rule engine that is great when stand-alone, but outstanding in the cloud.

Modular Rule Bases. As you already know, Drools provides a convenient way to partition sets of rules into knowledge bases. Such knowledge bases can be composed together, yielding larger sets of rules. When a knowledge base is instantiated (the so-called session), rules are put together in the same execution environment (the production memory), and values (the facts) are all inserted together into the same working memory.
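
In (hypothetical) code, the classic API mirrors this model: you instantiate a session from the knowledge base on the classpath, insert facts into its shared working memory, and fire all the rules together. Person here is a made-up fact class:

import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public class MonolithicSession {

    public static class Person {
        private final String name;
        private final int age;
        public Person(String name, int age) { this.name = name; this.age = age; }
        public String getName() { return name; }
        public int getAge() { return age; }
    }

    public static void main(String[] args) {
        // one execution environment (production memory) for all the rules...
        KieSession session = KieServices.Factory.get()
                .getKieClasspathContainer()
                .newKieSession();

        // ...and one working memory for all the facts
        session.insert(new Person("Alice", 21));
        session.fireAllRules();
        session.dispose();
    }
}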

        This model is very simple and powerful but in some senses it is also very limited. It is very simple, because, as a user of the rule base, you just worry about your data: the values are inserted into the working memory, and the engine does its magic. It is very powerful, because, as a rule author, you can rely upon the rules you have written to realize complex flows of reasoning, without worrying about how and when they will trigger.

At the same time, such an execution model lacks many of the principles that, over the years, we have learned constitute good programming practice. For instance, there is no proper notion of a module: it is not possible to completely isolate one rule from another, or to properly partition the working memory. As a rule base scales up in complexity, it may become harder to understand which rules trigger, and why. In some senses, it is as if you were programming in an odd world where proper encapsulation of state does not exist, as if years of programming language evolution had not happened.

Object-Oriented Programming. The term object-oriented programming has been overloaded over the years to mean many different things; it has to do with inheritance, with encapsulation of state, with code reuse, with polymorphism. These terms often get confused, but they are not really related: you can reuse code without inheritance, you can encapsulate state without objects, you can write polymorphic code without classes. Recent imperative programming languages such as Go and Rust do not come with proper classes, yet they support a form of object-orientation; there is even a beautiful 2015 talk from C++’s dad, Bjarne Stroustrup, showing how his child supports object-orientation without inheritance.

        Alan Kay, who fathered the term in his Smalltalk days at Xerox, in his inspiring lecture at OOPSLA 1997 said «I made up the term “object-oriented”, and I can tell you I did not have C++ in mind». In fact, the idea of objects that Alan Kay pioneered was more similar to the concept of actors and microservices. In proper object-oriented programming, objects encapsulate their internal state and expose their behavior by exchanging messages (usually called methods) with the external world.

Today actor systems have seen a renaissance, message buses are central to what we now call reactive programming, and microservices are almost taken for granted. So, we wondered: what would it mean for Drools to become a first-class citizen of this new programming landscape?

        Kogito, ergo Cloud

In the next post we will see our take on rule-based, modular programming, using rule units. Rule units will provide an alternative to plain knowledge base composition and an extended model of execution. We believe that rule units will make room for a wider spectrum of use cases, including parallel and distributed architectures. Stay tuned to read how they fit in the Kogito story, and the exciting possibilities that they may open for the future of our automation platform.




        jBPM monitoring using Prometheus and Grafana


In this post, we will introduce the new Prometheus Kie Server Extension, which has been released as part of jBPM version 7.21.0.Final. This extension aims to make it extremely easy for you to publish metrics about your Kie Server runtime. Using Prometheus and Grafana has become a standard for monitoring cloud services these days, and allowing the Kie Server to expose metrics related to processes, tasks, jobs and more makes for a powerful integration: not only can you get a snapshot of the current status inside the server, you can also combine it with information from different sources such as the JVM, Linux and more. And beyond infrastructure monitoring, it is a great way to get insights into the execution of your business processes.
To get started with Prometheus, take a look at this overview and the full list of exporters and integrations. Grafana is another powerful tool that allows you to create nice-looking dashboards combining data from multiple sources; to get started with it, take a look here.
Here is an example based on the metrics exposed by the Kie Server:


To enable this new extension, set the system property org.kie.prometheus.server.ext.disabled to false. When you enable this extension, a series of metrics will be collected, including information about deployments, start time, data sets, execution errors, jobs, tasks, processes, cases, and more. For the complete list of metrics, see the Prometheus services repository on GitHub.
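
For example, on a WildFly-based installation you could pass the property when starting the server (your startup script may differ):

$ ./standalone.sh -Dorg.kie.prometheus.server.ext.disabled=false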

        After the extension is started, you can access the available metrics at ${context}/services/rest/metrics.
        For example:

        curl -u wbadmin:wbadmin http://localhost:8080/kie-server/services/rest/metrics
To quickly demonstrate all the capabilities of this integration, we created a short video with more details about how to get started, two example dashboards for Grafana, and a Docker Compose configuration that you can use as a playground to explore all of these tools working together.



The Docker Compose example configuration is available here; to get started, simply run:

        docker-compose -f jbpm-kie-server-prometheus-grafana.yml up

        After all images start, you have the following tools available:


To access the example dashboards, log in to Grafana using the default credentials (username admin, password admin). Then navigate to Dashboards -> Manage, where you should find two examples: jBPM Dashboard and jBPM Kie Server Jobs.


As you interact with your Kie Server and Business Central instances, for example by deploying and starting new process instances, you should notice the metric values changing in the dashboards. Prometheus is configured to scrape data every 10 seconds.

        Hope you have fun monitoring your Kie Server!

        Webinar: Re-imagining business automation: Convergence of decisions, workflow, AI/ML, RPA — vision and futures

        WEBINAR 

        Title: Re-imagining business automation: Convergence of decisions, workflow, AI/ML, RPA—vision and futures

Time: June 20, 2019, 5:00 p.m. BST (UTC+1)

Registration: https://www.redhat.com/en/events/webinar/re-imagining-business-automation-convergence-decisions-workflow-aiml-rpa%E2%80%94vision-and-futures

        drools.js: Towards a Polyglot Drools on GraalVM (with Bonus Tech-Lead Prank)

        Image courtesy of Massimiliano Dessì

        You can find the full source code for this blog post in the submarine-examples repository.

Different programming languages are better suited to different purposes. Imagine how hard it would be to query a database using an imperative language: luckily, we use SQL for that. Now, imagine how useless a rule engine would be if defining rules were not convenient! This is the reason why Drools comes with its own custom language, the DRL. The Drools Rule Language is a so-called domain-specific language: a special-purpose programming language specifically designed to make interaction with a rule engine easier.

        In particular, a rule is made of two main parts, the condition and the consequence.

        The condition is a list of logic predicates, usually pattern matches, while the consequence is written using an imperative language, usually Java.

        An Abstract Rule Engine

Rules are what really make a rule engine. After all, that’s what a rule engine does: processing rules. Thus, it might sound logical for the engine to be a bit entangled with the language used for rule definitions. Our engine is no longer especially tied to the DRL; but it used to be.

        In the last year or so, we spent a lot of time unbundling the innards of the DRL from the guts of the Drools core. The result of this effort is what we called the Canonical Model; that is, an abstract representation of the components that make up a rule engine, including rule definitions. Incidentally, this also paved the way for supporting GraalVM and the Quarkus framework; but our goal was also different. We wanted to abstract our engine from the rule language.

Internally, the DRL is now translated into the canonical representation; and, as we said previously, this canonical model is described using Java code. While this representation is not currently intended to be hand-coded, it is entirely possible to do so. The following is a simple rewriting of a basic DRL rule.
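
Here is a minimal sketch of what such a hand-written definition can look like, assuming a Person fact class with the usual getters (the exact fluent API may vary between Drools releases; the DRL being rewritten is shown in the comment):

import org.drools.model.Rule;
import org.drools.model.Variable;

import static org.drools.model.DSL.declarationOf;
import static org.drools.model.DSL.on;
import static org.drools.model.PatternDSL.pattern;
import static org.drools.model.PatternDSL.rule;

public class AdultRule {

    // Roughly equivalent DRL:
    //   rule "Adult"
    //   when
    //       $p : Person( age >= 18 )
    //   then
    //       System.out.println( $p.getName() + " is an adult" );
    //   end
    public static Rule adult() {
        Variable<Person> p = declarationOf(Person.class, "$p");
        return rule("Adult")
                .build(
                        // the logic condition: a pattern match over Person facts
                        pattern(p).expr("isAdult", person -> person.getAge() >= 18),
                        // the imperative consequence, introduced by on...execute
                        on(p).execute(person ->
                                System.out.println(person.getName() + " is an adult")));
    }
}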

As you can see, although the rule definition is now embedded in a Java “host” language, it still shows the main features of a DRL definition, namely the logic condition and the imperative consequence (introduced by the on…execute pair). In other words, this is a so-called embedded, or internal, domain-specific language.

        A small disclaimer applies: the code above works, but our translator takes extra steps for best performance, such as introducing indexes. In fact, one of the reasons why we do not intend this API for public consumption is that, currently, a naive rewrite like this may produce inefficient rules.

        A Polyglot Automation Platform

As part of our journey experimenting with our programming model, we wanted to see whether it was feasible to interact with our engine using different programming languages. DRL aside, the canonical model rule definition API is pure Java.

But GraalVM is not only a tool for generating native binaries: that is just one of the capabilities of this terrific project. GraalVM is, first and foremost, the one VM to rule them all: a polyglot runtime with first-class support for JVM languages and many other dynamic programming languages, and a state-of-the-art JIT compiler whose performance easily matches or exceeds the industry standards. For instance, there is already support for R, Ruby, JavaScript and Python; and, compared to writing a JIT compiler from scratch, the Truffle framework makes it terribly easy to write your own and fine-tune it to perfection.

GraalVM gave us a great occasion to show how easy it could be to make Drools polyglot and, above all, to play an awful practical joke on our beloved, hard-working, conference-speaking, JavaScript-hating, resident Java Champion and tech lead, Mario!

        Enter drools.js:

        And here’s a picture of Mario screaming in fear at the monster we have created



Jokes aside, this experiment is a window onto one of the many possible futures of our platform. The world of application development today is polyglot. We cannot ignore this, and we are trying to understand how to reach a wider audience with our technologies, be it our rule engine or our workflow orchestration engine; in fact, we are running the same experiments with other parts of the platform, such as jBPM.

        jBPM provides its own DSL for workflow definition. Although this is, again, work in progress, it shows a lot of promise as well. Behold: jbpm.js!

        Conclusion

The DRL has served its purpose for a very long time, and we already provide other ways to interact with our powerful engine, such as DMN and PMML; but power users will always want finer tuning and the ability to write their own rules.

        The canonical model API is still a work-in-progress, and, above all, an internal API that is not intended for human consumption; but, if there is enough interest, we do plan to work further to provide a more convenient embedded DSL for rule definition. Through the power of GraalVM, we will be able to realize an embedded DSL that is just as writable in Java as any other language that GraalVM supports.

        And this includes JavaScript; sorry Mario!

        JHipster generator for jBPM Business Apps

If you are a fan of JHipster, you can now generate jBPM Business Apps with it! We have created a generator module for JHipster, which you can use as follows:

        With Yarn:

        yarn global add generator-jba

        Or with NPM:

        npm install -g generator-jba

Once installed, generate your app with:

        yo jba

and answer the prompts. If you want to generate the app with default settings, run:

yo jba --quick=true


        jBPM Visual Studio Extension – New version 0.6.0 adds jBPM Business Apps debugging

We are happy to announce version 0.6.0 of the JBAVSC extension for Visual Studio Code.

        This extension adds process debugging for your business apps!

        Debugging business app process in Visual Studio Code

        JBAVSC Github: https://github.com/BootstrapJBPM/jbavsc
        Visual Studio Code Marketplace:  https://marketplace.visualstudio.com/items?itemName=tsurdilovic.jbavsc

Here is a YouTube video showing off all the features of this extension:

We can make this extension much, much more powerful, so if you are interested in helping, please let us know!

        Enabling CORS in your jBPM Business Application

Currently, when you generate your jBPM Business Application (online via start.jbpm.org, from the command line via the jba-cli package, or in Visual Studio Code via the jBPM extension), your app will have CORS (Cross-Origin Resource Sharing) disabled by default.

With CORS disabled, a consumer app (e.g. a React frontend) which does not live on the same domain as your business app will not be able to query its REST API.

CORS will be enabled by default with the next jBPM community release (7.18.0.Final), see Jira JBPM-8176, but if you would like to enable it on your own now, it is very easy to do:

In your generated business app’s service module, edit the DefaultWebSecurityConfig.java file and replace it with the one in this Gist. That’s it 🙂
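
For reference, the change amounts to enabling CORS in the app’s Spring Security configuration. A minimal sketch of the general shape (not the exact contents of the Gist) could look like this:

import java.util.Collections;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.CorsConfigurationSource;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;

@Configuration
@EnableWebSecurity
public class DefaultWebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.cors()                        // enable CORS with the source below
            .and().csrf().disable()
            .authorizeRequests().anyRequest().authenticated()
            .and().httpBasic();
    }

    @Bean
    CorsConfigurationSource corsConfigurationSource() {
        CorsConfiguration config = new CorsConfiguration();
        config.setAllowedOrigins(Collections.singletonList("*"));
        config.setAllowedMethods(Collections.singletonList("*"));
        config.setAllowedHeaders(Collections.singletonList("*"));
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/**", config);
        return source;
    }
}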

With this change in place, you will be able to query your business app’s REST API from any domain. For example, if you are using jQuery.ajax and want to get your server information (the /rest/server endpoint), you could do the following:

        Sample ajax request to /rest/server

        Visual Studio Code extension for generating jBPM Business Apps

If you are developing your apps using Visual Studio Code, you can now install a new jBPM Business Application extension. With this extension and the great tooling support of VS Code, you can generate, develop, and launch your jBPM business apps without ever leaving your development environment.

Here is a YouTube video showcasing how to install and use this extension:

The sources of the extension are on GitHub. We are looking for contributions to make this extension better in the future.

        Generate jBPM Business Apps with Node.js Command-line interface (CLI)

In addition to start.jbpm.org, there is now a command-line way to generate your jBPM Business Applications, namely the jba-cli Node package.

        jba-cli package on npmjs.com
        Sample CLI usage
If you have Node installed locally, you can install and run this package with:

npm install jba-cli -g
jba gen

This allows you to build your jBPM Business app zip file without having to go through the browser.

To contribute to this cool little project, feel free to clone it and create pull requests from its GitHub repo.

Here is a YouTube video showing how to install and use the jba-cli command-line interface to generate your app: