Drools Workshops Chile/Argentina (Nov/15)

Hi Everyone, I’m going to South America for two weeks in November and I’m planning to deliver two community workshops around Drools and jBPM. The workshops will be mostly focused on Drools (time restrictions), but I will try to include a brief intro to jBPM as well.
I’ve drafted an initial agenda, which might change based on feedback from the people who are planning to attend. Feel free to drop me a comment if you want to see something in particular. I will do my best to accommodate more topics based on the amount of time we have available.
Click here for the Spanish Version of this post.


Initially, these are the dates and cities for the workshops:
If you are interested in attending please get in touch with me or the organisers in each of the cities, and if you can help me spread the word about these workshops I will appreciate it. The more community members we can gather, the better the workshops will be, mostly because we can all share experiences, headaches and future roadmaps.

Suggested Agenda

Here is my initial draft of the agenda (this might change based on feedback):
1) Intro to Drools
2) Creating a Simple Project
3) Intro to the new KIE Server, how to integrate our apps with Drools
4) Overview about the KIE Workbench
5) Roadmap (7.x and future)
6) Drools + Microservices (Docker/Kubernetes)
7) Community Stories (here I will encourage the participants to share their use cases; if you are interested in giving a very short presentation about your project please let me know, so we can create a list)
Feel free to drop me a message if you want to add something to it. The idea is to work with our laptops and get an application working by the end of the day. If we all work together during the workshop we will be able to make it more interactive and share more experiences and doubts. I would like to avoid giving too many presentations, so I will try to keep these meetups as hands-on as possible.

Workshop Chile

I will be updating this post with more news every week so stay tuned!


Hi all, this is a follow-up post to my previous entry about how to use the jBPM Console. The main idea of this post is to describe some of the most common configurations that you will be required to make to the jBPM Console NG in order to use it in your own company. But before going into technical details we will cover the differences between the KIE Workbench (KIE-WB) and the jBPM Console NG itself. Both applications require similar configurations and it’s good to understand when to pick one or the other. We will be covering these topics in the free workshops in London.


If you look at the project source code and documentation, you will notice that there are several projects being created to provide a complete set of tools for Drools and jBPM. Because of the modular approach that we have adopted for building the tools, you can basically choose between different distributions depending on your needs. The jBPM Console NG can be considered a distribution of a set of packages related to BPM only. The KIE Workbench (KIE-WB) is the full distribution, which contains all the components that we are creating, so inside it you will find all the BPM and Rules modules. If more modules are added to the platform, KIE-WB will contain them.
Some time ago Michael Anstis posted an article on blog.athico.com to explain this transition: http://blog.athico.com/2013/06/goodbye-guvnor-hello-drools-workbench.html That blog post was targeted at Guvnor users, so they could understand the transition between Drools 5.5 and Drools 6. The intention behind the following section is to explain the same thing for jBPM users, trying to unify all the concepts together.

Projects Distributions

The previously mentioned blog post explains most of the components that we are creating now, but the following image adds some details on the BPM side:
Project Distributions
Some quick notes about this image:
  • Uberfire and Guvnor are both frameworks, not distributions.
  • We are keeping the name Guvnor for what it was originally intended: Guvnor is the internal framework that we use to provide a smart layer defining how projects and all the knowledge assets will be managed and maintained.
  • KIE-WB-Common is not a distribution by itself, although it could be, since it contains all the bits shared between the distributions.
  • Drools Workbench only contains authoring tools related to Rules; notice that, in the same way as Guvnor, it doesn’t provide a runtime for your rules. This could be added in the future, but it is not in 6.0.
  • The jBPM Console NG replaced the old jBPM GWT console.
  • The difference between the names (Drools Workbench and jBPM Console NG) is due to the fact that the jBPM Console NG does provide all the runtime mechanisms to actually run your Business Processes and all the assets associated with them.
  • Notice that the jBPM Console NG uses some of the Drools-WB modules and also integrates with the jBPM Designer and the Form Modeller.
  • KIE Workbench contains all the components of the platform and also adds the Remote Services to interact with processes.
  • Notice that the Remote Services in 6.x are only for the BPM side, which means we could also provide the jBPM Console NG distribution with those services; it is not a priority right now, but it can be done if someone thinks it’s a good idea.
  • You can find all these projects under the droolsjbpm organization on GitHub: http://github.com/droolsjbpm
  • All the configurations and blog posts related to the jBPM Console NG also apply to the KIE Workbench.
  • The jBPM 6.0 installer will come with KIE Workbench bundled, and because of this most of my posts will show screenshots of KIE-WB instead of the jBPM Console NG.

Configurations & Deployment

If you take a look at the source code repositories on GitHub, you will find that the jBPM Console NG, Drools Workbench and KIE Workbench each contain a project called *-distribution-wars. These projects are in charge of generating the applications to be distributed for different Servlet Containers and Application Servers. For now we are providing bundles for Tomcat 7, JBoss AS 7, and JBoss EAP 6.1. (If you are a developer, you can also run these applications using the GWT Hosted Mode, which starts up a Jetty server and automatically deploys the application so it can be easily debugged.)
Here we will see how to deploy and configure the application to work on JBoss AS 7. Obviously you don’t need to do so if the jBPM Installer does that for you, but it is always good to know what is going on under the hood, in case you prefer to install the applications manually.
There are three points to consider when we configure the application for deployment:
  1. Users/Roles/Groups
  2. Domain Specific (Custom) Connectors
  3. JBoss AS 7 Profile
For the sake of simplicity, I’ve borrowed a JBoss AS 7 instance configured by Maciej and deployed the latest KIE Workbench snapshot, so you can download it and we can review its configuration from there. You can download it from here:


By default the KIE Workbench uses the users configured in JBoss AS. In order to create a new user we need to use the ./add-user.sh script located inside the /bin/ directory. Using this script we will create all the users required by our business processes, and for that reason we will also assign them groups and roles.
Adding a New User
As you can see in the previous image, using the ./add-user.sh script you can create a new user for the application (first two options: option B, and empty realm). Note that you need to use different strings for the user name and the password. For now you can create users with the role admin, so they will have access to all the screens of the tool, and then you can enter the groups that the user belongs to. In this case the user salaboy has the role admin and belongs to the IT group. There are some restricted words that cannot be used as group names; for now avoid using “analyst”, “admin” or “developer” as group names.

Domain Specific (Custom) Tasks / Connectors

Domain Specific Connectors are the way to integrate your business processes with external services that can be inside or outside your company. These connectors are considered technical assets and because of that need to be handled by technical users. Most of the time it is recommended not to change or modify the connectors while the application is running, and for that reason these connectors need to be provided to the application for use at runtime.
Three things are required to use a Custom Connector:
  1. Provide an implementation of the WorkItemHandler interface, which is the one that will be executed at runtime.
  2. Bind the implementation to a Service Task name.
  3. Create the WorkItem Descriptor inside the tool.
In order to provide these three configuration points you can take a look at the Customer Relationship example in the jbpm-playground repository.
Customer Relationships Example
The main idea here is to have a separate project that contains the WorkItemHandler implementations, for example CreateCustomerWorkItemHandler. You will need to compile this project with Maven and install the produced jar file inside the KIE-WB application. In order to do that you just copy customer-services-workitems-1.0-SNAPSHOT.jar into the WEB-INF/lib directory of the kie-wb.war application. In this example the WorkItemHandler implementations interact with a public web service that you can check here, so you will need an internet connection in order to try this example.
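To illustrate what such a handler looks like, here is a minimal sketch of a CreateCustomerWorkItemHandler. The WorkItem, WorkItemManager and WorkItemHandler interfaces below are simplified stand-ins for the ones in org.kie.api.runtime.process, declared inline so the sketch is self-contained; in a real project you would implement the interfaces from the kie-api jar, and the web service call here is just stubbed out with a placeholder result.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins for org.kie.api.runtime.process.* (illustration only).
interface WorkItem {
    long getId();
    Object getParameter(String name);
}

interface WorkItemManager {
    void completeWorkItem(long id, Map<String, Object> results);
}

interface WorkItemHandler {
    void executeWorkItem(WorkItem workItem, WorkItemManager manager);
    void abortWorkItem(WorkItem workItem, WorkItemManager manager);
}

// Hypothetical handler bound to the "CreateCustomer" Service Task.
class CreateCustomerWorkItemHandler implements WorkItemHandler {
    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Read the data inputs mapped to the task in the process model.
        String customerName = (String) workItem.getParameter("customerName");
        // Here the real handler would call the external customer service.
        Map<String, Object> results = new HashMap<String, Object>();
        results.put("customerId", "id-" + customerName);
        // Tell the engine the work is done so the process can continue.
        manager.completeWorkItem(workItem.getId(), results);
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        // Cleanup/compensation if the task is aborted; nothing to do in this sketch.
    }
}
```

The compiled class is what ends up inside the jar that you copy into WEB-INF/lib.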
Notice also that inside the customer-relationship project there are some high-level mappings of the Domain Specific Tasks that can be used inside our Customer Relationship project -> WorkItemDefinitions.wid. This configuration will basically add your Service Tasks to the Process Designer palette:
Domain Specific Service Tasks
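As an illustration, an entry for the CreateCustomer task in a WorkItemDefinitions.wid file would look roughly like the following (MVEL syntax; the parameter names and icon path are made up for this sketch and the real file in the jbpm-playground repository may differ):

```
import org.drools.core.process.core.datatype.impl.type.StringDataType;

[
  [
    "name" : "CreateCustomer",
    "parameters" : [ "customerName" : new StringDataType() ],
    "results" : [ "customerId" : new StringDataType() ],
    "displayName" : "Create Customer",
    "icon" : "defaultservicenodeicon.png"
  ]
]
```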
The last step is to bind the high-level mappings to their implementations for this environment. You can do that by adding new entries to the WEB-INF/classes/META-INF/CustomWorkItemHandlers.conf file; for this example we just need to add the following entries:
"CreateCustomer": new org.jbpm.customer.services.CreateCustomerWorkItemHandler(),
"AddCustomerComment": new org.jbpm.customer.services.AddCustomerCommentsWorkItemHandler(),
"ManagersReport": new org.jbpm.customer.services.ManagersReportWorkItemHandler(),

Note about the JBoss AS 7 Profile

In order to run the KIE Workbench you need to run it with the full JBoss AS 7 profile, so if you are installing it on a fresh JBoss AS 7 please don’t forget to point to the full profile when you use the ./standalone.sh script:
./standalone.sh --server-config=standalone-full.xml


You can download a pre-installed version of KIE-WB where you can clone the jbpm-playground repository which contains the example (Authoring -> Administration and then Clone a Repository using the jbpm-playground URL: https://github.com/droolsjbpm/jbpm-playground).
This pre-installed version contains the WorkItemHandlers already installed and configured for the Customer Relationship example, but you can obviously make changes and upgrade them if needed.
It also has two users created:
User/Password: jbpm/jbpm6 (Groups: IT, HR, Accounting, etc)
User/Password: salaboy/salaboy123 (Groups: IT)
Please feel free to try it out and let me know if it works for you.
There are a few seats still available for the Drools & jBPM free workshop tomorrow and on Thursday. If you are planning to attend please write me an email at salaboy (at) redhat (dot) com. For more details about it look here.

Using the jBPM Console NG – HR Example

The best way to learn about a new tool is to use it, and for that reason I’ve decided to write some posts about how to use the jBPM Console NG. In this one we will follow a simple “Hiring Example” process. I will try to recreate step by step how to test this example, so you can play with it, change it and extend it if you want to. This example can also be used as a reference to test the application or to give feedback about the data that is being shown in the jBPM Console screens. We will be reviewing this example, among others, during the free workshops happening in London on the 23rd and 24th of October.

The Example – Hiring Process V1

In order to show all the current features of the jBPM Console NG, I’ve written a very simple process to demonstrate, in a 20-minute walkthrough, what you can do with the tool. Let me explain the business scenario first so you understand what to expect from the jBPM Console NG.
Hire a new Developer (click to enlarge)
Let’s imagine for a second that you work for a software company that runs several projects, and from time to time the company wants to hire new developers. So, which employees, departments and systems are required to hire a new developer in your company? Trying to answer these questions will help you to define your business process. The previous figure represents how this process works for Acme Inc. We can clearly see that three departments are involved: the Human Resources, IT and Accounting teams. Inside our company we have Katy from the Human Resources team, Jack from the IT team and John from the Accounting team. Notice that there are other people inside each team, but we will be using Katy, Jack and John to demonstrate how to execute the business process.
Notice that there are 6 activities defined inside this business process. 4 of them are User Tasks, which means they will be handled by people. The other two are Service Tasks, which means an interaction with another system will be required.
The process diagram is self-explanatory, but just in case, and to avoid confusion, this is what is supposed to happen for each instance of the process started for a particular candidate:
  1. The Human Resources team performs the initial interview with the candidate to see if he/she fits the profile that the company is looking for.
  2. The IT department performs a technical interview to evaluate the candidate’s skills and experience.
  3. Based on the output of both interviews (Human Resources and Technical), the Accounting team creates a Job Proposal which includes the yearly salary for the candidate.
  4. As soon as the proposal has been created it is automatically sent to the candidate via email.
  5. If the candidate accepts the proposal, a new meeting is created with someone from the Human Resources team to sign the contract.
  6. If everything goes well, as soon as the process is notified that the candidate was hired, the system will automatically post a tweet about the new hire using the Twitter service connector.
As you can see, Jack, John and Katy will be performing the tasks for this example instance of the business process, but any person inside the company who has those roles will be able to claim and interact with those tasks.

Required Configurations

In order to run this example, or any other process, you will need to provide a set of configurations and artifacts, which for this example are provided out of the box. These are the custom configurations that this example requires:
  1. Users and Roles configuration: you will usually do this at the beginning, because it’s how you set up all the people who will be able to interact with your business processes.
  2. Domain Specific Service Connectors (WorkItemHandlers in the classpath): this can be done on demand; when you need a new system connector you add it.
  3. A Business Process model to run.
  4. A set of forms for the Human Tasks (if you don’t provide these, the application will generate dynamic forms for them): this needs to be done for each User Task that you include in your process. This is extremely important because it represents the screen that the end user will see to perform the task. The better the form, the better the user can perform their job.
For the users, roles and deployment instructions you need to check my previous post. The following steps require that you have the application deployed inside JBoss or Tomcat, or that you are running the application in Hosted Mode (Development Mode).
The following steps can also be used to model and run a different business process if you want to.

The Example inside the jBPM Console NG

Initially we need to log in to the system in order to start working with the tools. There are no role-based restrictions yet, but we are planning to add them soon.
Once you are logged in, the Home section gives you an overview of the tools provided in the current version.
The “Hiring a new Developer” process is provided out of the box with the tool, so let’s take a look at it by going to the Authoring -> Business Process section using the top-level menu.
We will now be in the Authoring perspective, where on the left-hand side of the screen we will find the Project Explorer, which allows us to see the content of the Knowledge Repositories configured to be used by the jBPM Console NG. The configuration of these repositories will be left for another post, but it is important for you to know that you can configure the jBPM Console NG to work against multiple repositories containing business processes and business rules.
Process Authoring
In the Project Explorer you can choose between different projects and different knowledge asset types. In this case the HR project is selected, so you can check out the hiring process inside the Business Processes category.
You can try modeling your own process by selecting New in the contextual menu and then Business Process.
Some of the things that you can look inside the process model are:
  1. Global process properties: click on the background of the canvas and then open the Properties menu. Notice the process id, the process name, the process version and the process variables defined.
  2. User Task assignments: click on one of the User Tasks and look at the ActorId and GroupId properties (see the previous screenshot).
  3. Task data mappings: take a look at the DataInputs, DataOutputs and Assignments properties. When we look at each activity’s execution in the following section we will refer back to these data mappings to see what information is expected to be used and generated by each task (see the previous screenshot).
Once we have our business process modelled, we need to save it and then Build & Deploy the project. We can do this using the Project Editor screen: select Tools in the contextual menu and then Project Editor.
Project Editor
In the top right corner of the Project Editor you will find the Build & Deploy button. If you click on this button, the project will be built and, if everything is OK, automatically deployed to the runtime environment, so you can start using the knowledge assets. If the deployment went right and you saw the Build Successful notification, you can now go to the Process Definitions screen under Process Management in the main menu to see all the deployed definitions.
Process Definitions
If you don’t see your process definition, you will need to go back to the Authoring perspective and see what is wrong with your project, because it wasn’t deployed.
Notice that from this screen you can access the process details by clicking on the magnifying glass in the process Actions column. You can also create a new process instance from this screen by clicking on the Start button in the process definition list or the New Instance button in the Definition Details panel. Let’s analyze the process execution and the information that the process requires to be generated by the different users.

Hire a new Developer Process Instance

When we start a new process instance, a popup will be presented with the process’s initial form. This initial form allows us to enter some information that is required by the process in order to start. For this example the process requires only the candidate’s name, so the popup just asks us to enter it.
New Process Instance
If we hit the big Start button, the new process instance will be created and the first task of the process will be created for the Human Resources team. Depending on the roles assigned to the user that you used to create the process instance, you may or may not be able to see the created task. In order to see the first task of the process we will need to log out of the application and log in as someone from the Human Resources team.
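Under the hood, what the Start button does is roughly equivalent to calling startProcess on a KIE session with the form data as parameters. The ProcessRunner interface below is a simplified stand-in for the relevant bit of org.kie.api.runtime.KieSession, declared inline so the sketch is self-contained, and the process id "hiring" and variable name "candidateName" are illustrative values that would come from the process model's global properties:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for KieSession.startProcess(...) (illustration only).
interface ProcessRunner {
    long startProcess(String processId, Map<String, Object> parameters);
}

class HiringProcessStarter {
    // Mirrors what the initial form's Start button submits to the engine.
    static long startHiring(ProcessRunner session, String candidateName) {
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("candidateName", candidateName); // value typed into the popup
        return session.startProcess("hiring", params);
    }
}
```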

Human Resources Interview

For this example we are already logged in as Katy, who belongs to the HR team, so if we go to Work -> Tasks we will see Katy’s pending tasks. Notice that this HR Interview task is a group task, which means that Katy will need to claim the task in order to start working on it.
Katy's Tasks
If Katy claims this task, she will also be able to release it if she can no longer work on it. In order to claim the task you can click on the lock icon in the Task List, or you can click on the Work section of the Task Details to see the task form, which also offers the task operations.
She can also set the Due Date of the task to match the interview meeting date. When the candidate attends the interview, Katy will need to produce some information, such as:
  • The candidate’s age
  • The candidate’s email
  • The score for the interview
In order to produce that information, she will need to access the Task Form, which can be opened by clicking on the check icon in the task row or on Work in the Task Details panel.
Human Resources Interview
Another important thing to notice here is that the save, release and complete task operations are logged and used to track how the work is being performed, for example how long a Human Resources interview takes on average.
Notice that Katy is required to score the candidate at the end of the interview.

Technical Interview

After completing the Human Resources interview, the candidate will be required to do a technical interview to evaluate his/her technical skills. In this case a member of the IT team will be required to perform the interview. Notice that the Technical Interview task for the IT team is automatically created by the process instance as soon as the HR Interview task is finished. Also notice that you will need to log out the user Katy and log in as Jack, or any other member of the IT team, to be able to claim the Technical Interview task.
The Technical interview will require the following information to be provided:
  • The list of validated skills of the candidate
  • The candidate’s Twitter account
  • The score for the interview
As you can see in the following screenshot, some of the information collected in the HR Interview is used in the Tech Interview Task Form, to provide context to the interviewer.
Jack's Tasks
Once the technical interview is completed, the next task in the process will be created, and it will be the Accounting team’s turn to create a Job Proposal and an offer for the candidate, if the interview scores are OK.
You can log out as Jack and log in as John in order to complete that task.

Process Instance Details

At any time you can go to Process Management -> Process Instances to see the state of each of the process instances that you are running.
As you can see in the following screenshot you will have updated information about your process executions:
Instance Details
The Instance Log section gives you detailed information about when the process was created, when each specific task was created and which activity is being executed right now. You can also inspect the process variables by going to the View -> Process Variables option.
As you may notice, from this screen you will also be able to signal an event to the process if needed, and to abort the process instance if for some reason it is no longer needed.

Summing Up

In this post we quickly reviewed the screens that you will use most frequently inside the application. The main objective of this post is to help you get used to the tools, so feel free to ask any questions about it. In my next post I will describe the configuration required to set up users/roles/groups, and we will also extend the example to use Domain Specific Connectors for the Send Proposal and Tweet New Hire tasks, which are currently emulated with a simple text output to the console.
Remember, if you are in London, don’t miss the opportunity to meet some community members here: http://salaboy.com/2013/10/04/drools-and-jbpm-6-workshops-2324-october-london/

KIE Press #1: SNOMED CT Ontology Quality Assurance

Welcome to the first article of KIE (Knowledge Is Everything) Press. Under the KIE Press title we will publish a set of articles and blog posts to share with the community how Drools, jBPM, OptaPlanner and other related projects are being used by our community members.

On this occasion we want to share a really interesting use case of Drools in the health care industry. The International Health Terminology Standards Development Organization (IHTSDO – www.ihtsdo.org) is in charge of maintaining SNOMED CT, an ontology composed of 400,000 concepts that uses Description Logic based definitions. SNOMED CT is a standard for representing clinical knowledge in electronic medical records, widely adopted as a national standard in many countries (see http://www.ihtsdo.org/members/).

A new version of SNOMED CT is published every 6 months, in a time-oriented, relational database structure. The IHTSDO manages an authoring team that makes any necessary changes to the ontology for each release, adding new concepts, descriptions, relationships, etc. It’s important to notice that every change must be coherent, and a set of checks must be done in order to guarantee that the editor is not leaving the ontology in an inconsistent state. That’s where Drools kicks in.

The IHTSDO is using Drools to do real-time validation of SNOMED CT. These validations are based on rules which are defined using Guvnor and exposed via the Knowledge Repository. The validations operate on the changes being introduced in the authoring process before they are saved. If the validation process succeeds the changes are applied; if not, the user is notified.

The IHTSDO has developed the IHTSDO Terminology Workbench (a terminology IDE), which is the tool used by the organization members to update and maintain the SNOMED ontology. This tool integrates with the Drools-based QA system, so it can submit content for a real-time check using the rules stored in Guvnor. The IHTSDO Workbench is the first example of an integration with the Drools Knowledge Bases and models represented in the Guvnor server, and potentially this can be extended to any other tool, as SNOMED CT content is represented in a generic way in the knowledge repository, independent of the tooling. The Drools Knowledge Bases are versioned in Maven, so they are readily accessible as dependencies for any tool development environment.

The IHTSDO is also running checks on the full ontology in batch processes every day, which guarantees that the changes introduced by one person don’t conflict with the changes introduced by another. In this case the batch process operates on the full ontology: 400,000 concepts and 1.5 million relationships between them.

The IHTSDO Workbench integration

The Workbench is a desktop application (Swing), which is downloaded by each of the organization members with the rights to update the ontology. The application allows the users to inspect the ontology, query it and propose changes.

Real-time error detection in the IHTSDO Workbench

The previous figure shows how the tool notifies the user about the validation errors found by executing the rules that verify the consistency of the changes. Notice that the validation in this case failed because there are two Fully Specified Names for the same concept, and only one active FSN is allowed at a given time. The desktop application runs the rules locally, but they are defined and compiled inside Guvnor and its Knowledge Repository.

BRL Rule in Guvnor

Here we can see a rule that was defined using the Business Rule Editor inside Guvnor, which is also used to compile the rules packages and provide the desktop clients with the rules to perform the validations.
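Expressed directly in DRL, a validation rule of that kind would look roughly like the following. The Concept, Description and ValidationError fact types here are hypothetical stand-ins for the IHTSDO model, which is not public; the real rules are authored through Guvnor's Business Rule Editor rather than written by hand:

```
rule "Only one active Fully Specified Name per concept"
when
    $c  : Concept()
    $d1 : Description( conceptId == $c.id, typeId == Description.FSN, active == true )
    $d2 : Description( conceptId == $c.id, typeId == Description.FSN, active == true,
                       id != $d1.id )
then
    // Report the inconsistency so the Workbench can show it to the author.
    insert( new ValidationError( $c.getId(), "More than one active FSN" ) );
end
```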

Finally, because the IHTSDO is required to release the ontology every 6 months, a report is generated based on the daily batch validations executed on the full ontology. This report is verified by all the authors and is used to measure the quality of the released ontology.

Report of Batch QA Runs results

This Drools integration has been in production for 2 years now, and it has provided great benefits and flexibility. Checking one concept, including model conversion, takes around 20 milliseconds for a random check, and can be as low as 3 milliseconds when run in batch over a collection iterated in natural order, reducing data access delays.

One of the main benefits has been the simplified maintenance of the business rules base compared to the previous environment, which was based on hard-coded Java if-then conditions. Drools also allows each rule to be written and reasoned about independently, leaving the complexity of deciding how the rules run to the rules engine. It has also provided easy updates of the knowledge bases without the need to update software libraries or the version of the application itself: when a rules manager at the IHTSDO updates a rule, the author only needs to run a rules “refresh” function in the workbench and the latest updates are immediately effective.

Join KIE Press

If you want to share your own implementations and use cases of Drools, jBPM, OptaPlanner or other related projects, feel free to contact us and we can help you share your experiences with the community. If you don’t have time to write an article about what you are doing, we can also help you out with that. I’ve written this article because I personally know one of the maintainers of the tool (Alejandro Lopez Osornio, working for termMed), but if you are interested in sharing your experiences here, we can do a Google Hangout to define how to share your story.

If your use case is confidential, you can share a more generic version of what you are doing, the problems that you found, or more general architectural patterns that you have used in your implementations.

Some of the benefits of sharing what you are doing with the KIE community are:
  • Keep everyone else informed about what you are doing, share experiences and improve your implementations based on community feedback
  • Save research time in your implementations by staying in contact with people who are implementing similar tools
  • Most of the time, similar solutions can be implemented for different industries
  • Serve as inspiration for new implementations
  • Build confidence in the tools provided by the projects and be part of the community members that are actively creating tools using these technologies

Original post: http://salaboy.com/2013/08/28/kie-press-1-snomed-ct/

jBPM5 & GSoC 2012

This year, jBPM5 was one of the lucky projects accepted into the Google Summer of Code program. The topic this year was to build a customizable and pluggable Human Task lifecycle mechanism to extend the current functionality.
Demian Calcaprina is doing a wonderful job researching different alternatives to provide the mentioned features plus giving the project new ideas about how things can be improved.

Current Status

Demian is finishing up some details and preparing the source code to be merged into the master repository of jBPM5. He also wrote a very good post explaining the different options that he is proposing; that post can easily be turned into the project documentation for this new mechanism as soon as it’s integrated.

Benefits for the project

Having Demian participating from the community side of the project helps us spend some time analyzing different alternatives that, until now, the project members didn’t have time to explore. His contributions are extremely valuable, and I personally think that having a BPM project as part of GSoC is really nice; we as a project will continue looking for more community contributors.
We are all learning how to improve and coordinate community contributions, and we notice that more and more people are interested in and adopting open source BPM projects.
In the BPM and Rule Engine arena, every team member and community contributor is pushing community engagement forward, trying to make these projects evolve!
If you want to contribute to these projects but you don’t know how, get in touch: write a comment here and we will guide you!
Kudos to Demian who is doing a wonderful job!