The Relationship of Decision Model and Notation (DMN) to SBVR and BPMN (Full Article)

Publications by James Taylor and Neil Raden[2], Barbara von Halle and Larry Goldberg[1], Ron Ross[7], and others have popularized “Decision Modeling.”  The very short summary is that this is about modeling business decision logic for and by business users.

A recent Decision Modeling Information Day conducted by the Object Management Group (OMG)[4] showed considerable interest among customers, consultants, and software vendors.  The OMG followed up by releasing a Request for Proposals (RFP) for a Decision Model and Notation (DMN) specification.[5]  According to the RFP,

“Decision Models are developed to define how businesses make decisions, usually as a part of a business process model (covered by the OMG BPMN standard in Business Process Management Solutions).  Such models are both business (for example, using business vocabularies per OMG SBVR) and IT (for example, mapping to rule engines per OMG PRR in Business Rule Management Systems).”

This quote says a little about how DMN may relate to SBVR[6] and BPMN[3], but there are many more open questions than answers.  How do SBVR rules relate to decisions?  Is there just one or are there multiple decisions per SBVR rule?  Is there more to say about how SBVR and DMN relate to BPMN?
This article attempts to “position” DMN against the SBVR and BPMN specifications.  Of course, DMN doesn’t exist yet, so the concepts presented here are more the authors’ ideas about how these three specifications should relate to each other than reality.  We present these ideas in the hope that they will positively influence the discussions that lead up to the DMN specification.

jBPM 6 first steps

This post gives a very quick introduction to how users can take their first steps in jBPM 6, using only web tooling to build up:

  • processes
  • rules
  • process and task forms
  • data model
With just three simple examples you will learn how easily and quickly you can get started with BPM. So let’s start.

The simplest process

The first process illustrates how to move around the KIE Workbench web application, and where to:
  • create repository
  • create project
  • configure Knowledge Base, KnowledgeSession
  • create process
  • build and deploy
  • execute process and work with user task

Custom data and forms

Next let’s explore a bit more and start with slightly more advanced features, like:

  • building a custom data model that will be used as a process variable
  • making use of process variables in user tasks
  • defining custom forms for the process and its tasks
  • editing and adjusting your process and task forms

Make use of business rules and decisions in your process

At the end, let’s make the process more efficient by applying business rules and then using gateways as decision points. This example introduces:

  • using the business rule task
  • defining business rules with Drools
  • using an XOR gateway to split between different paths in the process
It is important to note that a business rule task can automatically insert and retract process variables as facts, using the data inputs and outputs of the business rule task. When defining them, make sure that each data input and its corresponding data output are named exactly the same, to allow the engine to properly retract the facts on business rule task completion.
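To make this concrete, here is a hedged sketch of what such a business rule might look like in Drools; the Application fact type, its fields, and the rule logic are purely illustrative of a fact that a same-named data input/output pair would insert and retract for you:

```
rule "Approve small application"
when
    // The fact inserted from the business rule task's data input
    // (Application is a hypothetical process-variable type)
    $application : Application( amount <= 1000, approved == false )
then
    // Modify the fact; the matching data output maps it back to the
    // process variable so the engine can retract it on task completion
    modify( $application ) { setApproved( true ) }
end
```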
That would be all for the first steps with jBPM 6. Stay tuned for more examples and videos!
As usual, comments are more than welcome.


Hi all, this is a follow-up post to my previous entry about how to use the jBPM Console. The main idea of this post is to describe some of the most common configurations that you will need to apply to the jBPM Console NG in order to use it in your own company. But before going into technical details, we will cover the differences between the KIE Workbench (KIE-WB) and the jBPM Console NG itself. Both applications require similar configurations, and it’s good to understand when to pick one or the other. We will be covering these topics in the free workshops in London.


If you look at the project source code and documentation, you will notice that there are several projects being created to provide a complete set of tools for Drools and jBPM. Because of the modular approach that we have adopted for building the tools, you can basically choose between different distributions depending on your needs. The jBPM Console NG can be considered a distribution containing a set of packages related to BPM only. The KIE Workbench (KIE-WB) is the full distribution that contains all the components that we are creating, so inside it you will find all the BPM and Rules modules. If more modules are added to the platform, KIE-WB will contain them.
Some time ago Michael Anstis posted an article to explain this transition. That blog post was targeted at Guvnor users, so they could understand the transition between Drools 5.5 and Drools 6. The intention behind the following section is to explain the same for jBPM users, trying to unify all the concepts together.

Projects Distributions

The previously mentioned blog post explains most of the components that we are creating now, but the following image adds some details on the BPM side:
Project Distributions
Some quick notes about this image:
  • Uberfire and Guvnor are both frameworks, not distributions.
  • We are keeping the name Guvnor for what it was originally intended: the internal framework that provides a smart layer defining all the project automation and organization, that is, how projects and all the knowledge assets will be managed and maintained.
  • KIE-WB-Common is not a distribution by itself, though it could be, because it contains all the bits shared between the distributions.
  • Drools Workbench only contains authoring tools related to rules; notice that, in the same way as Guvnor, it doesn’t provide a runtime for your rules. This could be added in the future, but in 6.0 it is not.
  • The jBPM Console NG replaces the old jBPM GWT console.
  • The difference between the names (Drools Workbench vs. jBPM Console NG) is due to the fact that the jBPM Console NG does provide all the runtime mechanisms to actually run your business processes and all the assets associated with them.
  • Notice that the jBPM Console NG uses some of the Drools-WB modules and also integrates with the jBPM Designer and the Form Modeller.
  • KIE Workbench contains all the components of the platform and also adds the remote services to interact with processes.
  • Notice that the remote services in 6.x cover only the BPM side; that means we could also provide the jBPM Console NG distribution with those services. It is not a priority right now, but it can be done if someone thinks it’s a good idea.
  • You can find all these projects under the droolsjbpm organization on GitHub.
  • All the configurations and blog posts related to the jBPM Console NG also apply to the KIE Workbench.
  • The jBPM 6.0 installer will come with the KIE Workbench bundled, and because of this most of my posts will show screenshots of KIE-WB instead of the jBPM Console NG.

Configurations & Deployment

If you take a look at the source code repositories on GitHub, you will find that the jBPM Console NG, Drools Workbench and KIE Workbench each contain a project called *-distribution-wars. These projects are in charge of generating the applications to be distributed for different servlet containers and application servers. For now we are providing bundles for Tomcat 7, JBoss AS 7, and JBoss EAP 6.1. (If you are a developer, you can also run these applications using the GWT Hosted Mode, which starts up a Jetty server and automatically deploys the application so it can be easily debugged.)
Here we will see how to deploy and configure the application to work on JBoss AS 7. Obviously you don’t need to do this if the jBPM installer does it for you, but it is always good to know what is going on under the hood, in case you prefer to manually install the applications.
There are three points to consider when we configure the application for deployment:
  1. Users/Roles/Groups
  2. Domain Specific (Custom) Connectors
  3. JBoss AS 7 Profile
For the sake of simplicity, I’ve borrowed a JBoss AS 7 instance configured by Maciej and deployed the latest KIE Workbench snapshot, so you can download it and we can review its configuration from there. You can download it from here:


By default the KIE Workbench uses the users configured in JBoss AS. In order to create a new user we need to use the ./ script located inside the /bin/ directory. Using this script we will create all the users required by our business processes, and for that reason we will also assign them groups and roles.
Adding a New User
As you can see in the previous image, using the ./ script you can create a new user for the application (first two options: option B, and empty realm). Note that you need to use different strings for the user name and the password. For now you can create users with the role admin, so they will have access to all the screens of the tool, and then you can enter the groups the user belongs to. In this case the user salaboy has the role admin and belongs to the IT group. There are some restricted words that cannot be used as group names; for now avoid using “analyst”, “admin” and “developer” as group names.

Domain Specific (Custom) Tasks / Connectors

Domain-specific connectors are the way to integrate your business processes with external services that can live inside or outside your company. These connectors are considered technical assets, and because of that they need to be handled by technical users. Most of the time it is recommended not to change or modify the connectors while the application is running, and for that reason these connectors need to be provided in advance for the application to use at runtime.
Three things are required to use a Custom Connector:
  1. Provide an implementation of the WorkItemHandler interface, which is the one that will be executed at runtime.
  2. Bind the implementation to a Service Task name.
  3. Create the WorkItem descriptor inside the tool.
In order to provide these three configuration points you can take a look at the Customer Relationship example in the jbpm-playground repository.
Customer Relationships Example
The main idea here is to have a separate project that contains the work item implementations, for example CreateCustomerWorkItemHandler. You will need to compile this project with Maven and install the produced jar file inside the KIE-WB application; to do that, you just copy customer-services-workitems-1.0-SNAPSHOT.jar into the WEB-INF/lib directory of the kie-wb.war app. In this example the work item handler implementations interact with a public web service that you can check here, so you will need an internet connection in order to try this example.
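As a rough guide, here is a simplified sketch of what a handler like the example’s CreateCustomerWorkItemHandler might look like. The interfaces are declared inline as simplified stand-ins for the ones in the kie-api jar, and the handler body (parameter names, the emulated service call) is illustrative rather than the example’s real code:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins for org.kie.api.runtime.process.WorkItem,
// WorkItemManager and WorkItemHandler (in a real project these come
// from the kie-api dependency).
interface WorkItem {
    long getId();
    Object getParameter(String name);
}

interface WorkItemManager {
    void completeWorkItem(long id, Map<String, Object> results);
    void abortWorkItem(long id);
}

interface WorkItemHandler {
    void executeWorkItem(WorkItem workItem, WorkItemManager manager);
    void abortWorkItem(WorkItem workItem, WorkItemManager manager);
}

// Hypothetical handler: reads the task inputs, calls the external
// service (emulated here) and completes the work item with the results.
public class CreateCustomerWorkItemHandler implements WorkItemHandler {

    @Override
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        String name = (String) workItem.getParameter("customerName");
        // In the real example this would call the public customer web service.
        String customerId = "cust-" + name.toLowerCase();
        Map<String, Object> results = new HashMap<>();
        results.put("customerId", customerId);
        // Completing the work item lets the process continue past the task.
        manager.completeWorkItem(workItem.getId(), results);
    }

    @Override
    public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
        manager.abortWorkItem(workItem.getId());
    }
}
```

The handler is stateless, so a single instance can safely serve many process instances.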
Notice also that inside the customer-relationship project there are some high-level mappings of the domain-specific tasks that can be used inside our Customer Relationship project -> WorkItemDefinitions.wid. This configuration will basically add your Service Tasks to the Process Designer palette:
Domain Specific Service Tasks
The last step is to bind the high-level mappings to their implementations for this environment. You can do that by adding new entries to the WEB-INF/classes/META-INF/CustomWorkItemHandlers.conf file; for this example we just need entries along the following lines (the handler class names shown here are illustrative, with package prefixes omitted):
“CreateCustomer”: new CreateCustomerWorkItemHandler(),
“AddCustomerComment”: new AddCustomerCommentWorkItemHandler(),
“ManagersReport”: new ManagersReportWorkItemHandler(),

Note about the JBoss AS 7 Profile

In order to run the KIE Workbench you need to run it with the full JBoss AS 7 profile, so if you are installing it on a fresh JBoss AS 7, please don’t forget to point to the full profile when you use the ./ script:
./ --server-config=standalone-full.xml


You can download a pre-installed version of KIE-WB where you can clone the jbpm-playground repository, which contains the example (Authoring -> Administration, then Clone a Repository using the jbpm-playground URL):
This pre-installed version contains the work item handlers already installed and configured for the Customer Relationship example, but you can obviously make changes and upgrade them if needed.
It also has two users created:
User/Password: jbpm/jbpm6 (Groups: IT, HR, Accounting, etc)
User/Password: salaboy/salaboy123 (Groups: IT)
Please feel free to try it out and let me know if it works for you.
There are a few seats still available for the Drools & jBPM free workshops tomorrow and on Thursday. If you are planning to attend, please write me an email at salaboy (at) redhat (dot) com. For more details about it, look here.

Using the jBPM Console NG – HR Example

The best way to learn a new tool is to use it, and for that reason I’ve decided to write some posts about how to use the jBPM Console NG. In this one we will follow a simple “Hiring Example” process. I will try to recreate step by step how to test this example, so you can play with it, change it and extend it if you want to. This example can also be used as a reference to test the application or to give feedback about the data that is being shown in the jBPM Console screens. We will review this example among others during the free workshops happening in London on the 23rd and 24th of October.

The Example – Hiring Process V1

In order to show all the current features of the jBPM Console NG, I’ve written a very simple process to demonstrate, in a 20-minute walkthrough, what you can do with the tool. Let me explain the business scenario first, so you understand what to expect from the jBPM Console NG.
Hire a new Developer (click to enlarge)
Let’s imagine for a second that you work for a software company that runs several projects and, from time to time, wants to hire new developers. So, which employees, departments and systems are required to hire a new developer in your company? Trying to answer these questions will help you to define your business process. The previous figure represents how this process works for Acme Inc. We can clearly see that three departments are involved: Human Resources, IT and Accounting. Inside our company we have Katy from the Human Resources team, Jack from the IT team and John from the Accounting team. Notice that there are other people inside each team, but we will be using Katy, Jack and John to demonstrate how to execute the business process.
Notice that there are 6 activities defined inside this business process; 4 of them are User Tasks, which means they will be handled by people. The other two are Service Tasks, which means an interaction with another system will be required.
The process diagram is self-explanatory, but just in case, and to avoid confusion, this is what is supposed to happen for each instance of the process that is started for a particular candidate:
  1. The Human Resources team performs the initial interview with the candidate to see if he/she fits the profile the company is looking for.
  2. The IT department performs a technical interview to evaluate the candidate’s skills and experience.
  3. Based on the output of the Human Resources and IT teams, the Accounting team creates a job proposal which includes the yearly salary for the candidate. The proposal is based on the output of both interviews (Human Resources and Technical).
  4. As soon as the proposal has been created, it is automatically sent to the candidate via email.
  5. If the candidate accepts the proposal, a new meeting is created with someone from the Human Resources team to sign the contract.
  6. If everything goes well, as soon as the process is notified that the candidate was hired, the system will automatically post a tweet about the new hire using the Twitter service connector.
As you can see, Jack, John and Katy will be performing the tasks for this example instance of the business process, but any person inside the company that has those roles will be able to claim and interact with those tasks.

Required Configurations

In order to run this example, and any other process, you will need to provide a set of configurations and artifacts, which for this example are provided out of the box. Just so you know, these are the custom configurations that this example requires:
  1. Users and roles configuration: you will usually do this at the beginning, because it’s how you set up all the people that will be able to interact with your business processes.
  2. Domain-specific service connectors (WorkItemHandlers on the classpath): this can be done on demand; when you need a new system connector, you add it.
  3. A business process model to run.
  4. A set of forms for the human tasks (if you don’t provide these, the application will generate dynamic forms): this needs to be done for each User Task that you include in your process. It is extremely important because it represents the screen that the end user will see to perform the task. The better the form, the better the user can perform his or her job.
For the users, roles and deployment instructions you need to check my previous post. The following steps require that you have the application deployed inside JBoss or Tomcat, or that you are running the application in Hosted Mode (Development Mode).
The following steps can also be used to model and run a different business process if you want to.

The Example inside the jBPM Console NG

Initially we need to be logged into the system in order to start working with the tools. There are no role-based restrictions yet, but we are planning to add that soon.
Once you are inside, the Home section gives you an overview of the tools provided in the current version.
The “Hiring a new Developer” process is provided out of the box with the tool, so let’s take a look at it by going to the Authoring -> Business Process section using the top-level menu.
We will now be in the Authoring perspective, where on the left-hand side of the screen we will find the Project Explorer, which allows us to see the content of the knowledge repositories that are configured to be used by the jBPM Console NG. The configuration of these repositories will be left for another post, but it is important for you to know that you will be able to configure the jBPM Console NG to work against multiple repositories that contain business processes and business rules.
Process Authoring
In the Project Explorer you can choose between different projects and between different knowledge asset types. In this case the HR project is selected, so you can check out the hiring process inside the Business Processes category.
You can try modeling your own process by selecting New in the contextual menu and then Business Process.
Some of the things that you can look at inside the process model are:
  1. Global process properties: click on the background of the canvas and then open the properties menu. Notice the process id, the process name, the process version, and the process variables defined.
  2. User Task assignments: click on one of the User Tasks and look at the ActorId and GroupId properties (see previous screenshot).
  3. Task data mappings: take a look at the DataInputs, DataOutputs and Assignments properties. When we walk through each activity’s execution in the following section, we will refer back to the data mappings to see what information is expected to be used and generated by each task (see previous screenshot).
Once we have our business process modelled, we need to save it and then Build & Deploy the project. We can do this using the Project Editor screen: select Tools in the contextual menu and then Project Editor.
Project Editor
On the top right corner of the Project Editor you will find the Build & Deploy button. If you click this button, the project will be built and, if everything is OK, automatically deployed to the runtime environment, so you can start using the knowledge assets. If the deployment went right and you saw the Build Successfully notification, you can now go to the Process Definitions screen under Process Management in the main menu to see all the deployed definitions.
Process Definitions
If you don’t see your process definition, you will need to go back to the Authoring perspective and check what is wrong with your project, because it wasn’t deployed.
Notice that from this screen you can see the process details by clicking the magnifying glass in the process Actions column. You can also create a new process instance from this screen by clicking the Start button in the process definition list, or the New Instance button in the Definition Details panel. Let’s analyze the process execution and the information that the process requires to be generated by the different users.

Hire a new Developer Process Instance

When we start a new process instance, a popup will be presented with the process’s initial form. This form allows us to enter information that the process requires in order to start. For this example the process requires only the candidate name, so the popup just asks us to enter it.
New Process Instance
If we hit the big Start button, the new process instance will be created and the first task of the process will be created for the Human Resources team. Depending on the roles assigned to the user that you used to create the process instance, you may or may not be able to see the created task. In order to see the first task of the process, we will need to log out of the application and log in as someone from the Human Resources team.

Human Resources Interview

For this example we are already logged in as Katy, who belongs to the HR team, so if we go to Work -> Tasks we will see Katy’s pending tasks. Notice that this HR Interview task is a group task, which means that Katy will need to claim the task in order to start working on it.
Katy's Tasks
If Katy claims this task, she will be able to release it later if she can no longer work on it. In order to claim the task, you can click the lock icon in the Task List, or open the Work section of the Task Details to see the task form, which also offers the task operations.
She can also set the Due Date for the task to match the interview meeting date. When the candidate attends the interview, Katy will need to produce some information, such as:
  • The candidate’s age
  • The candidate’s email
  • The score for the interview
In order to produce that information, she will need to access the Task Form, which can be opened by clicking the check icon in the task row or by clicking Work in the Task Details panel.
Human Resources Interview
Another important thing to notice here is that the task operations (save, release and complete) are logged and used to track how the work is being performed; for example, how much time a Human Resources interview takes on average.
Notice that Katy is required to score the candidate at the end of the interview.

Technical Interview

After completing the Human Resources interview, the candidate will be required to do a technical interview to evaluate his/her technical skills. In this case a member of the IT team will be required to perform the interview. Notice that the technical interview task for the IT team will be created automatically by the process instance as soon as the HR interview task is finished. Also notice that you will need to log out the user Katy from the application and log in as Jack, or any other member of the IT team, to be able to claim the Technical Interview task.
The Technical interview will require the following information to be provided:
  • The list of validated skills of the candidate
  • The candidate’s Twitter account
  • The score for the interview
As you can see in the following screenshot, some of the information collected in the HR Interview is used in the Tech Interview Task Form, to provide context to the interviewer.
Jack's Tasks
Once the technical interview is completed, the next task in the process will be created, and now it will be the Accounting team’s turn to work on creating a job proposal and an offer for the candidate, if the interview scores are OK.
You can log out as Jack and login as John in order to complete that task.

Process Instance Details

At any time you can go to Process Management -> Process Instances to see the state of each of the process instances that you are running.
As you can see in the following screenshot you will have updated information about your process executions:
Instance Details
The Instance Log section gives you detailed information about when the process was created, when each specific task was created, and which activity is being executed right now. You can also inspect the process variables by going to the View -> Process Variables option.
As you may notice, on this screen you will also be able to signal an event to the process if needed, and to abort the process instance if for some reason it is no longer needed.

Summing Up

In this post we quickly reviewed the screens that you will use most frequently inside the application. The main objective of this post is to help you get used to the tools, so feel free to ask any question about them. In my next post I will describe the configuration required to set up users/roles/groups, and we will also extend the example to use domain-specific connectors for the Send Proposal and Tweet New Hire tasks, which are currently emulated with simple text output to the console.
Remember, if you are in London, don’t miss the opportunity to meet some community members here:

Drools and jBPM 6 Free Workshops (23/24 October – London)

Hi all, I would like to invite everyone to a couple of developer-oriented workshops about the tools in the newest Drools and jBPM releases (the 6 series). The main idea of these workshops is to introduce developers to the new set of features and tooling provided by the projects.
We (Michael Anstis and I) will be showing how to configure and set up your working environment to work, customize and contribute to these projects.
We will be trying to cover the following topics:
  • General Overview about the tools
  • Distributions and Modules
  • Technology Stack
  • How to setup your working environment
  • How to extend/customize the tooling
If you are brave enough and want to know the low level technical details of the tooling, please bring your laptop and be prepared to download the code and compile it in your own environment. We will assist you in the process and give you all the pointers to fix issues or provide new features.
Michael will be in charge of the Drools side of the platform and I (Salaboy) will be in charge of the BPM side of the tooling. If you are planning to start using these tools, we encourage you to attend to see the new features and get a high-level overview of all the new things that are coming with the new version.
The place and the coffee will be sponsored by Plug Tree and the workshops will take place on the 23rd and 24th of October at No. 1 Poultry, London, EC2R 8JR From 3pm to 5pm+.  Seats are very limited, and because workshops are free you need to get in touch with us (salaboy at redhat dot com) if you are planning to attend. We will probably send you details of what you need to download before coming to the workshop so as not to depend on the local internet connection.

Make your work asynchronous

Asynchronous execution as part of a business process is a common requirement. jBPM has had support for it via custom implementations of WorkItemHandler. In general it was as simple as providing an async handler (is it as simple as it sounds?) that delegates the actual work to some worker, e.g. a separate thread that proceeds with the execution.

Before we dig into the details of jBPM v6 support for asynchronous execution, let’s look at the common requirements for such execution:

  • first and foremost, it allows asynchronous execution of a given piece of business logic
  • it allows retrying in case resources are temporarily unavailable, e.g. during external system interaction
  • it allows handling errors once all retries have been attempted
  • it provides a cancellation option
  • it provides a history log of executions
When confronting these requirements with the “simple async handler” approach, we can immediately notice that all of these would need to be implemented all over again by different systems. That is not so appealing, is it?

jBPM executor to the rescue 

Since version 6, jBPM introduces a new component called the jbpm executor, which provides quite advanced features for asynchronous execution. It delivers a generic environment for background execution of commands. Commands are nothing more than business logic encapsulated behind a simple interface. A command does not hold any process-runtime-related information; that means there is no need to complete work items, or anything of that sort. It purely focuses on the business logic to be executed. It receives data via a CommandContext and returns the results of the execution via ExecutionResults. The most important rule for both input and output data is: they must be serializable.
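A minimal sketch of such a command follows. CommandContext and ExecutionResults are written here as simplified serializable stand-ins for the real classes shipped with the jbpm executor, and UppercaseCommand is a purely hypothetical piece of business logic used only for illustration:

```java
import java.io.Serializable;
import java.util.HashMap;

// Simplified serializable stand-ins for the executor's data holders
// (assumption: the real classes ship with the jbpm executor module).
class CommandContext implements Serializable {
    private final HashMap<String, Object> data = new HashMap<>();
    public void setData(String key, Object value) { data.put(key, value); }
    public Object getData(String key) { return data.get(key); }
}

class ExecutionResults implements Serializable {
    private final HashMap<String, Object> data = new HashMap<>();
    public void setData(String key, Object value) { data.put(key, value); }
    public Object getData(String key) { return data.get(key); }
}

// The command contract: pure business logic, no work items or process
// runtime state (the real interface may also declare checked exceptions).
interface Command {
    ExecutionResults execute(CommandContext ctx);
}

// Hypothetical command: reads one input value, does its work, and
// returns the result for the executor to store.
public class UppercaseCommand implements Command {
    @Override
    public ExecutionResults execute(CommandContext ctx) {
        String input = (String) ctx.getData("input");
        ExecutionResults results = new ExecutionResults();
        results.setData("output", input.toUpperCase());
        return results;
    }
}
```

Because both the context and the results carry only serializable values, the executor can persist a job, retry it later, and keep a history log of what was executed.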
The executor covers all the requirements listed above and provides a user interface as part of the jbpm console and KIE workbench (kie-wb) applications.
Illustrates Jobs panel in kie-wb application
The above screenshot illustrates the history view of the executor’s job queue. As can be seen, there are several options available:
  • view details of the job
  • cancel given job
  • create new job
With that, quite a few things can already be achieved. But what about executing logic as part of a process instance, via a work item handler?

Async work item handler

jBPM (again, since version 6) provides an out-of-the-box async work item handler that is backed by the jbpm executor. So by default, all the features that the executor delivers are available for background execution within a process instance. AsyncWorkItemHandler can be configured in two ways:
  1. as a generic handler that expects to get the command name as part of the work item parameters
  2. as a specific handler for a given type of work item, for example a web service
Option number 1 is configured by default for the jbpm console and kie-wb web applications, and is registered under the name async in every ksession that is bootstrapped within the applications. So whenever there is a need to execute some logic asynchronously, the following needs to be done at modeling time (using the jBPM web designer):
  • specify async as the TaskName property
  • create a data input called CommandClass
  • assign the fully qualified class name of the command to the CommandClass data input
Next, follow the regular procedure to complete the process modeling. Note that all data inputs will be transferred to the executor, so they must be serializable.
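In the underlying BPMN2 XML, those modeling steps end up roughly like this. This is an illustrative, heavily trimmed fragment: namespace declarations, ids, and the data-association wiring that the designer generates are omitted or invented here:

```xml
<!-- Task routed to the async handler via its task name -->
<bpmn2:task id="_asyncTask" drools:taskName="async" name="Async Task">
  <bpmn2:ioSpecification>
    <!-- CommandClass input carries the fully qualified command class name -->
    <bpmn2:dataInput id="_asyncTask_CommandClassInput" name="CommandClass"/>
  </bpmn2:ioSpecification>
</bpmn2:task>
```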
Illustrates assignments for an async node (web service execution)
The second option allows registering different instances of AsyncWorkItemHandler for different work items. Since such a handler is registered for a dedicated work item, most likely the command will be dedicated to that work item as well. If so, CommandClass can be specified at registration time instead of requiring it to be set as a work item parameter. To register such handlers for the jbpm console or kie-wb, an additional class is required to declare what shall be registered: a CDI bean that implements the WorkItemHandlerProducer interface needs to be provided and placed on the application classpath, so the CDI container will be able to find it. Then, at modeling time, the TaskName property needs to be aligned with the one used at registration time.

Ready to give it a try?

To see this working it’s enough to try the latest kie-wb or jbpm console build (either master or CR2). As soon as the application is deployed, go to the Authoring perspective and you’ll find an async-examples project in the jbpm-playground repository. It comes with three samples that illustrate asynchronous execution from within a process instance:
  • async executor
  • async data executor
  • check weather
Async executor is the simplest sample: a process that executes commands asynchronously. When starting a process instance it will ask for the fully qualified class name of the command; for demo purposes use org.jbpm.executor.commands.PrintOutCommand, which is similar to the SystemOutWorkItemHandler in that it simply prints the content of the CommandContext to the logs. You can leave it empty or provide an invalid command class name to see the error handling mechanism (using a boundary error event).
Async data executor is pretty much the same as Async executor but it operates on custom data (included in the project – User and UserCommand). On the start process form use org.jbpm.examples.cmd.UserCommand to invoke the custom command included in the project.

Check weather is an asynchronous execution of a web service call. It checks the weather for any U.S. zip code and provides the results as a human task, so on the start form specify who should receive the user task with the results and the zip code of the city you would like the weather forecast for.

Start Check weather process with async web service execution
And that’s it, asynchronous execution is now available out of the box in jBPM v6. 
Have fun and as usual keep the comments coming so we can add more useful features!

Clustering in jBPM v6

Clustering in jBPM v5 was not an easy task; there were several known issues that had to be resolved on the client side (the project implementing a solution with jBPM), to name a few:

  • session management – when to load/dispose knowledge session
  • timer management – required to keep knowledge session active to fire timers
This is no longer the case in version 6, where several improvements made their way into the code base – for example, a new module responsible for complete session management was introduced: the jbpm runtime manager. More on the runtime manager in the next post; this one focuses on what a clustered solution might look like. First of all, let’s start with the important pieces a jbpm environment consists of:
  1. asset repository – a VFS-based repository backed by GIT – this is where all the assets are stored during the authoring phase
  2. jbpm server – JBoss AS7 with the deployed jbpm console (BPM-focused web application) or kie-wb (fully featured web application that combines the BPM and BRM worlds)
  3. database – backend where all the state data is kept (process instances, ksessions, history log, etc.)

Repository clustering

The asset repository is a GIT-backed virtual file system (VFS) that keeps all the assets (process definitions, rules, data model, forms, etc.) in a reliable and efficient way. Anyone who has worked with GIT understands perfectly how good it is for source management – and what else are assets if not source code?
Since it is a file system residing on the same machine as the server that uses it, it must be kept in sync between all servers of a cluster. For that, jbpm makes use of two well-known open source projects: Apache Zookeeper and Apache Helix.
Zookeeper is responsible for gluing all parts together, while Helix is the cluster management component that registers all cluster details (the cluster itself, nodes, resources).
These two components are utilized by the runtime environment that jbpm v6 is based on:
  • kie-commons – provides the VFS implementation and clustering
  • uber fire framework – provides the backbone of the web applications
So let’s take a look at what we need to do to set up a cluster for our VFS:

Get the software

  • download Apache Zookeeper (note that 3.3.4 and 3.3.5 are currently the only tested versions, so make sure you get the correct one)
  • download Apache Helix (the tested version was 0.6.1)

Install and configure

  • unzip Apache Zookeeper into the desired location (from now on referred to as zookeeper_home)
  • go to zookeeper_home/conf and make a copy of zoo_sample.cfg named zoo.cfg
  • edit zoo.cfg and adjust the settings if needed; these two are important in most cases:

# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181

  • unzip Apache Helix into the desired location (from now on referred to as helix_home)

Setup cluster

Now we have all the software available locally, so the next step is to configure the cluster itself. We start by launching the Zookeeper server that will be the master of the cluster configuration:
  • go to zookeeper_home/bin
  • execute the following command to start the zookeeper server:

sudo ./ start

  • the zookeeper server should now be started; if it fails to start, make sure that the data directory defined in the zoo.cfg file exists and is accessible
  • all zookeeper activities can be viewed in zookeeper_home/bin/zookeeper.out
Next, the cluster itself must be configured. Apache Helix provides utility scripts for that, which can be found in helix_home/bin.
  • go to helix_home/bin
  • create the cluster

./ --zkSvr localhost:2181 --addCluster jbpm-cluster

  • add nodes to the cluster

node 1
./ --zkSvr localhost:2181 --addNode jbpm-cluster nodeOne:12345
node 2
./ --zkSvr localhost:2181 --addNode jbpm-cluster nodeTwo:12346

Add as many nodes as you will have cluster members of the jBPM server (in most cases the number of application servers in the cluster).
NOTE: nodeOne:12345 is the unique identifier of the node and will be referenced later on when configuring the application servers. Although it looks like a host and port number, it is used to uniquely identify a logical node.
  • add resources to the cluster

./ --zkSvr localhost:2181 --addResource jbpm-cluster vfs-repo 1 LeaderStandby AUTO_REBALANCE

  • rebalance the cluster to initialize it

./ --zkSvr localhost:2181 --rebalance jbpm-cluster vfs-repo 2

  • start the Helix controller to manage the cluster

./ --zkSvr localhost:2181 --cluster jbpm-cluster 2>&1 > /tmp/controller.log &
The values given above are just examples and can be changed according to your needs:
cluster name: jbpm-cluster
node names: nodeOne:12345, nodeTwo:12346
resource name: vfs-repo
The zkSvr value must match the Zookeeper server that is used.

Prepare data base 

Before we start with the application server configuration, the database needs to be prepared; for this example we use a PostgreSQL database. The jBPM server will create all required tables itself by default, so there is no big work required here, but a few simple tasks must be done before starting the server configuration.

Create data base user and data base

First of all, PostgreSQL needs to be installed. Next, a user that will own the jbpm schema needs to be created on the database; in this example we use:
user name: jbpm
password: jbpm
Once the user is ready, the database can be created – again, jbpm is chosen as the database name for this example.
NOTE: this information (user name, password, database name) will be used later on in the application server configuration.

Create Quartz tables

Lastly, the Quartz tables must be created. The best way to do so is to use the database scripts provided with the Quartz distribution (jbpm uses Quartz 1.8.5); the DB scripts are usually located under QUARTZ_HOME/docs/dbTables.

Create quartz definition file 

The Quartz configuration used by the jbpm server needs to accommodate the needs of the environment. As this guide shows only a basic setup, it obviously will not cover every need, but it allows for further improvements.
Here is a sample configuration used in this setup:
# Configure Main Scheduler Properties  

org.quartz.scheduler.instanceName = jBPMClusteredScheduler
org.quartz.scheduler.instanceId = AUTO

# Configure ThreadPool  

org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5
org.quartz.threadPool.threadPriority = 5

# Configure JobStore  

org.quartz.jobStore.misfireThreshold = 60000

org.quartz.jobStore.clusterCheckinInterval = 20000

# Configure Datasources  

Configure JBoss AS 7 domain

1. Create a JDBC driver module – for this example PostgreSQL
a) go to the JBOSS_HOME/modules directory (on EAP: JBOSS_HOME/modules/system/layers/base)
b) create the module folder org/postgresql/main
c) copy the postgresql driver jar into the module folder (org/postgresql/main) under the name postgresql-jdbc.jar
d) create a module.xml file inside the module folder (org/postgresql/main) with the following content:
<module xmlns="urn:jboss:module:1.0" name="org.postgresql">
    <resources>
        <resource-root path="postgresql-jdbc.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
2. Configure data sources for the jbpm server
a) go to JBOSS_HOME/domain/configuration
b) edit the domain.xml file
(for simplicity’s sake we use the default domain configuration, which uses the profile “full” that defines two server nodes as part of main-server-group)
c) locate the profile “full” inside the domain.xml file and add the new data sources
main data source used by jbpm (connection url, security and driver settings match the database prepared earlier):
<datasource jndi-name="java:jboss/datasources/psjbpmDS"
            pool-name="postgresDS" enabled="true" use-java-context="true">
    <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
    <driver>postgres</driver>
    <security>
        <user-name>jbpm</user-name>
        <password>jbpm</password>
    </security>
</datasource>
additional data source for quartz (non managed pool):
<datasource jta="false" jndi-name="java:jboss/datasources/quartzNotManagedDS"
            pool-name="quartzNotManagedDS" enabled="true" use-java-context="true">
    <connection-url>jdbc:postgresql://localhost:5432/jbpm</connection-url>
    <driver>postgres</driver>
    <security>
        <user-name>jbpm</user-name>
        <password>jbpm</password>
    </security>
</datasource>
and define the driver used by the data sources:
<driver name="postgres" module="org.postgresql">
    <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class>
</driver>
3. Configure the security domain
     a) go to JBOSS_HOME/domain/configuration
     b) edit the domain.xml file
(for simplicity’s sake we use the default domain configuration, which uses the profile “full” that defines two server nodes as part of main-server-group)
     c) locate the profile “full” inside the domain.xml file and add a new security domain for jbpm-console (or kie-wb) – this is just a copy of the “other” security domain defined there by default:
<security-domain name="jbpm-console-ng" cache-type="default">
    <authentication>
        <login-module code="Remoting" flag="optional">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmDirect" flag="required">
            <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
    </authentication>
</security-domain>
For the kie-wb application, simply replace jbpm-console-ng with kie-ide as the name of the security domain.
4. Configure the server nodes
    a) go to JBOSS_HOME/domain/configuration
    b) edit the host.xml file
    c) locate the servers that belong to “main-server-group” in the host.xml file and add the following system properties:
Each property is listed as name – value – comment:
  • org.uberfire.nio.git.dir – /home/jbpm/node[N]/repo – location where the VFS asset repository will be stored for the node[N]
  • – /jbpm/ – absolute file path to the quartz definition properties
  • – nodeOne – unique node name within the cluster (nodeOne, nodeTwo, etc)
  • – jbpm-cluster – name of the helix cluster
  • org.uberfire.cluster.zk – localhost:2181 – location of the zookeeper server
  • – nodeOne_12345 – unique id of the helix cluster node; note that ‘:’ is replaced with ‘_’
  • org.uberfire.cluster.vfs.lock – vfs-repo – name of the resource defined on the helix cluster
  • org.uberfire.nio.git.daemon.port – 9418 – port used by the GIT repo to accept client connections, must be unique for each cluster member
  • org.uberfire.nio.git.ssh.port – 8001 – port used by the GIT repo to accept client connections (over ssh), must be unique for each cluster member
  • – localhost – host used by the GIT repo to accept client connections; if cluster members run on different machines this property must be set to the actual host name instead of localhost, otherwise synchronization won’t work
  • – localhost – host used by the GIT repo to accept client connections over ssh; the same remark about the actual host name applies
  • org.uberfire.metadata.index.dir – /home/jbpm/node[N]/index – location where the search index will be created (maintained by Apache Lucene)
  • org.uberfire.cluster.autostart – false – delays VFS clustering until the application is fully initialized, to avoid conflicts when all cluster members create local clones
examples for the two nodes:
  •     nodeOne
  <property name="org.uberfire.nio.git.dir" value="/tmp/jbpm/nodeone" boot-time="false"/>
  <property name="" value="/tmp/jbpm/quartz/" boot-time="false"/>
  <property name="" value="nodeOne" boot-time="false"/>
  <property name="" value="jbpm-cluster" boot-time="false"/>
  <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
  <property name="" value="nodeOne_12345" boot-time="false"/>
  <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.port" value="9418" boot-time="false"/>
  <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodeone" boot-time="false"/>
  <property name="org.uberfire.cluster.autostart" value="false" boot-time="false"/>
  •     nodeTwo
  <property name="org.uberfire.nio.git.dir" value="/tmp/jbpm/nodetwo" boot-time="false"/>
  <property name="" value="/tmp/jbpm/quartz/" boot-time="false"/>
  <property name="" value="nodeTwo" boot-time="false"/>
  <property name="" value="jbpm-cluster" boot-time="false"/>
  <property name="org.uberfire.cluster.zk" value="localhost:2181" boot-time="false"/>
  <property name="" value="nodeTwo_12346" boot-time="false"/>
  <property name="org.uberfire.cluster.vfs.lock" value="vfs-repo" boot-time="false"/>
  <property name="org.uberfire.nio.git.daemon.port" value="9419" boot-time="false"/>
  <property name="org.uberfire.metadata.index.dir" value="/tmp/jbpm/nodetwo" boot-time="false"/>
  <property name="org.uberfire.cluster.autostart" value="false" boot-time="false"/>

NOTE: since this example runs on a single node, the host properties for the ssh and git daemons are omitted.

Since repository synchronization is done between git servers, make sure that the GIT daemons are active (and properly configured – host name and port) on every cluster member.
5. Create user(s) and assign them the proper roles on the application server
Add application users
In the previous step a security domain was created so that jbpm console (or kie-wb) users can be authenticated when logging on. Now it’s time to add some users so you can log on to the application once it’s deployed. To do so:
 a) go to JBOSS_HOME/bin
 b) execute the ./ script and follow the instructions on the screen
  – use the Application realm, not management
  – when asked for roles, make sure you assign at least:
  for jbpm-console: jbpm-console-user
  for kie-wb: kie-user
Add as many users as you need; the same goes for roles – those listed above are required to be authorized to use the web application.

Add a management (application server) user
To be able to manage the application server as a domain, we need to add an administrator user. This is similar to adding application users, but the realm needs to be Management:
 a) go to JBOSS_HOME/bin
 b) execute the ./ script and follow the instructions on the screen
  – use the Management realm, not application
The application server should now be ready, so let’s start the domain with ./
After a few seconds (the servers are still empty) you should be able to access both server nodes, for example:
administration console: http://localhost:9990/console
The port offset is configurable in host.xml for the given server.

Deploy application – jBPM console (or kie-wb)

Now it’s time to prepare and deploy the application, either jbpm-console or kie-wb. As both applications come by default with a predefined persistence configuration that uses the ExampleDS from AS7 and the H2 database, this configuration needs to be altered to use the PostgreSQL database instead.

Required changes in persistence.xml

  • change the jta-data-source name to match the one defined on the application server
  • change the hibernate dialect to PostgreSQL
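Concretely, the two edits in persistence.xml might look like this (the datasource JNDI name assumed to match the psjbpmDS datasource defined in domain.xml; the dialect is Hibernate’s standard PostgreSQL dialect):

```xml
<!-- fragment of persistence.xml; only the two changed settings are shown -->
<jta-data-source>java:jboss/datasources/psjbpmDS</jta-data-source>
<property name="hibernate.dialect" value="org.hibernate.dialect.PostgreSQLDialect"/>
```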
Application build from source

If the application is built from source, edit the persistence.xml file in the sources and then rebuild the jbpm-distribution-wars module to prepare the deployable package.

    Deployable package downloaded

    In case you have the deployable package downloaded (which is already a war file), you need to extract it and change the persistence.xml inside. Once the file is edited and contains the correct values to work with the PostgreSQL database, the application needs to be repackaged:
    NOTE: before repackaging, make sure that the previous war is not in the same directory, otherwise it will be packaged into the new war too.

    jar -cfm jbpm-console-ng.war META-INF/MANIFEST.MF *

    IMPORTANT: make sure that you include the same manifest file that was in the original war file, as it contains valuable entries.

    To deploy the application, log on as the management user into the administration console of the domain and add the new deployment using the Runtime view of the console. Once the deployment is added to the domain, assign it to the right server group – in this example we used main-server-group. By default this enables the deployment on all servers within that group – meaning it is deployed on all of them. This will take a while, and after a successful deployment you should be able to access jbpm-console (or kie-wb).
    The context root (jbpm-console-ng) depends on the name of the war file that was deployed, so if the filename is jbpm-console-ng-jboss7.war then the context root will be jbpm-console-ng-jboss7. The same rule applies to the kie-wb deployment.
    And that’s it – you should have fully operational jbpm cluster environment!!!
    Obviously, in normal scenarios you would want to hide the complexity of the different urls from end users (for example by putting a load balancer in front of them), but I explicitly left that out of this example to show the proper behavior of independent cluster nodes.
    The next post will go into details on how the different components play smoothly in a cluster, to name a few:
    • failover – in case a cluster node goes down
    • timer management – how timers fire in a cluster environment
    • session management – auto reactivation of a session on demand
    • etc
    As we are still in development mode, please share your thoughts on what you would like to see in cluster support for jBPM – your input is most appreciated!

    There was a change in the naming of system properties since this article was written, so if you already configured this for 6.0.0.Final you will need to adjust the names of the following system properties:

    • org.kie.nio.git.dir -> org.uberfire.nio.git.dir
    • org.kie.nio.git.daemon.port -> org.uberfire.nio.git.daemon.port
    • org.kie.kieora.index.dir -> org.uberfire.metadata.index.dir
    • org.uberfire.cluster.autostart – new parameter
    The table above already contains the proper values for 6.0.0.Final.

    jBPM web designer runs on VFS

    As part of the efforts around jBPM and Drools version 6.0, the web designer is going through quite a few enhancements too. One of the major features is a flexible mechanism to persist modeled processes (and related assets such as forms, the process image, etc.) even without being embedded in Drools Guvnor.
    So let’s start with the main part here – what does a flexible mechanism to persist assets mean? To answer this, let’s look at what is currently (jBPM 5.x) available:

    • designer by default runs in embedded mode inside Drools Guvnor
    • designer stores all assets inside Drools Guvnor JCR repository
    • designer can run in standalone mode but only as modeling tool without capabilities to store assets
    So, as listed above, there is only one option to persist assets – inside Drools Guvnor. In most cases this is good enough, or even desired, but there are quite a few situations where modeling capabilities must be delivered with a custom application and including complete Drools Guvnor would be too much.
    That led to the flexible mechanism now implemented: designer was equipped with a Repository interface that is the entry point for interacting with the underlying storage. Designer by default comes with a Virtual File System based repository that provides:
    • default implementation that supports 
      • simple (local) file system repository
      • git based repository
    • allows for pluggable VFS provider implementations
    • is based on standards – java NIO2
    Extensions to what is delivered out of the box can be done in one of the following ways:
    1. if a VFS based repository is not what the user needs, an alternative implementation of the Repository interface can be provided, e.g. database backed
    2. if VFS is what the user is looking for but neither local file system nor git is the right implementation, additional providers can be developed
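Extension option 1 can be pictured with a small sketch. The Repository interface below is a hypothetical simplification (the designer’s real interface is richer), and the in-memory implementation merely stands in for an alternative backend such as a database:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical, simplified repository abstraction: the entry point the
// designer would use to load and store assets, regardless of backend.
interface Repository {
    void storeAsset(String path, String content);
    Optional<String> loadAsset(String path);
    boolean deleteAsset(String path);
}

// Stand-in for a non-VFS implementation (e.g. database backed); here a
// plain in-memory map keeps the example self-contained.
class InMemoryRepository implements Repository {
    private final Map<String, String> assets = new HashMap<>();
    public void storeAsset(String path, String content) { assets.put(path, content); }
    public Optional<String> loadAsset(String path) { return Optional.ofNullable(assets.get(path)); }
    public boolean deleteAsset(String path) { return assets.remove(path) != null; }
}
```

Whatever the backend, the designer only ever talks to the Repository contract, which is what makes the storage pluggable.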
    Let’s look a little bit deeper into what these new features are and how users will benefit from them.

    1. It’s based on Java 7 NIO2

    The VFS support is based on Java SE 7 NIO2 but does not require Java 7 to run, as it comes with a backport implementation of the selected parts of NIO2 that are required.

    2. Different providers for Virtual File System

    The simplest option is to use designer with local file system storage, which simply utilizes the file system designer is running on. While it will most likely provide the best performance, it leaves the user with rather limited options when it comes to clustering, distributed environments or backups.
    The next option, which I would personally recommend, is to utilize GIT as the underlying storage. People who work with GIT in their software development projects will most likely notice quite a few advantages, as in the end process definitions are much like source code: they can be versioned, developed in parallel (branching) and included in some sort of release cycle.

    3. Save process directly in designer editor

    Designer now allows users to save a process directly from the editor, which also stores the SVG process content, with just one click!

    4. Repository menu

    With the repository, designer provides a simple UI menu to navigate through the repository and perform basic operations such as:

    • open processes in designer editor
    • create assets and directories
    • copy/move assets
    • delete assets and directories
    • preview files
    This menu is intended as a basic file system browser, used mostly in standalone mode; when integrated with jBPM console-ng or guvnor-ng (UberFire), more advanced options will be delivered in this area.

    5. Simpler integration with jBPM console and Drools Guvnor

    Both jBPM console and Drools Guvnor are being “refreshed” for version 6.0, and thus the integration between these components and designer will be simplified, as they will all be unified on the repository level – meaning a single repository can be shared across all three components.
    That will be all for a brief introduction, but certainly not all on this topic. Expect more to come as soon as the preview is released: how to configure different repositories, and more updates on the git based repository and how to make the best of it.
    Your comments are more than welcome as they can help to make designer the best modeling tool out there 🙂

    Known limitation

    Currently the git based repository does not support moving assets and directories as an atomic operation, which means the preferred approach is to copy first and then delete.

    Human Interactions: Task List UIs Requirements

    Most of the time, front-end applications built on top of process engines contain a fair amount of screens/components dedicated to dealing with human interactions. In this post the requirements to build these panels are dissected to define which bits are important. The main idea behind this analysis is to evolve the current state of the art of Human Interaction screens/components to the next level.

    State of the Art

    Nowadays most BPM Suites offer the following set of screens:
    • Inbox  – (Personal) Task Lists
    • Task Creation
      • Task Details
      • Sub Tasks
      • Task Help
      • Task Content
    • Task Forms
    • Group Task Lists
    • Identity related screens
    Let’s dive into each of these categories in order to understand in detail what is required in each of them:

    Inbox a.k.a. Personal Task List

    This is the main screen for a user to interact with. It contains a list or data grid displaying all the pending tasks for the currently logged in user. The following image describes the key bits in detail:
    Personal Task List (Inbox)
    This screen is usually compared with the Inbox folder of an email client application, where we have a row per mail and need to drill down into each item to see its content.
    It is not necessary to explain each button and piece of information displayed in the previous picture, but for the sake of this analysis we can divide the features into two big groups:

    Basic Features

    A set of basic features can be quickly coded based on the mechanisms provided by the engine:
    1. All the pending tasks for the user
    2. Generic data about those tasks (columns)
    3. A set of actions to interact with each task (also bulk actions, as displayed in the image)
    All these features represent the basic pieces that need to be provided by the tool, but on top of those bits a set of user customizations needs to be allowed.

    Domain Specific & Usability Features

    In order to adopt a generic Task List interface, most companies require a high degree of customization on top of the basic set of features provided by the tools. Most of these customizations are domain specific and extremely difficult to provide in a generic way. But what we can provide, as tool designers, are several layers of flexibility that allow each company to modify and adapt the generic tooling to their needs.
    Specifically for Task Lists, these custom features can be:
    1. Filter/Search by generic and domain specific data inside or related with the tasks
    2. Define Labels and Label Tasks
    3. Define different perspectives to display the same information. For example: Number of Columns to Display, Default Sorting, etc
    4. User defined graphical annotations and comments
    5. User defined timers and alerts
    6. Find other users associated with each task and use different communication channels to get things done, etc
    7. User defined Meta-Data for future analysis
    I could list a very extensive set of features, but we need to be careful and stay focused on the features that make our task lists/inbox usable, not more difficult to use.
    So there is an extra factor to consider: how end users want to interact with our software. The cruel reality is that there is no single answer; we need to provide an infrastructural framework flexible enough to allow each user to customize his/her experience with the software. Each user needs to be able to add to or limit the set of features they want to use. The company needs to know that if a feature is missing, the learning curve to add it is almost none, and the developers need to be comfortable with how these additions or removals are done.

     Task Creation & Task Details

    If we do a similar analysis for the screens intended to let the user create a new task or modify an existing one, we start finding a lot of repeated requirements.
    The following figure shows a set of screens that are usually involved in the Task Creation and Task Details Edition process:
    Task Creation + Task Details
    The previous figure shows a very simple and fluid mechanism to quickly create new tasks.
    In order to create a task we just need a name; all the other properties will be defaulted based on global or user configurations.
    If we want to edit the task’s basic properties we will have a different panel with the most common things the user will want to change, for example: Due Date, Priority and Assignment.

    Advanced Details

    More advanced options can be displayed separately, and only if the user wants to see them:
    Advanced Task Details
    Sub-tasking strategies, deadlines and escalation options can be shown if they are needed. There are cases where these advanced behaviors are not required, and for that reason they should all be optional at the UI level.

    Sub Tasks

    If sub tasks are allowed by default, a separate panel embedded in the Task Details view can be added to allow the quick creation of subtasks related to a specific parent task.
    Adding Sub Tasks
    Once the sub tasks are created, they can be edited via the normal panels, which will probably add a reference to the parent task and some extra information (for example the parent task name and assignee, for reference).

    Task Help

    Something that we usually leave to the end is the Help feature. As you can see in the following figure, with a very simple panel we can write a set of localized help texts to guide the person that needs to work on the task. For simple tasks the help can be omitted, but for more complex tasks – such as legal, medical or government tasks, which usually involve codes and regulations – it can be very useful.
    Task Contextual Help

    Task Content

    One very important aspect of every task is the information it handles: how that information is exposed to the user, and how the UI gathers the required input from the user.
    For this reason we need a way to define the Task Input data and the Task Output data.
    Defining Task Content
    The Task Inputs represent all the information that is displayed and required by the user in order to work on the task. The Task Outputs are the data that the user must enter in order to complete the particular task.
    As extension points we can also add a set of Completion Rules to be validated in order to automatically decide whether all the data is coherent and the task can be successfully completed, or whether the user will be pushed to add to or modify the current information.
    In the same way we can define a simple mechanism to declare which actions will be available for each specific task: by creating a default set of actions, the user can just select between different groups of standard actions that will be rendered inside the Task Form.
    Most of the time, if our task is in the context of a business process, the Task Inputs and Outputs can be inferred from the process data mapping specification around the task.
    Task Inputs and Outputs can also be defined using a Form Builder, which can aggregate information from different places and allows a more flexible way of defining the information handled by the task. A mixed approach is also valid.
    It’s important to understand that Task Inputs and Outputs are vital for handling complex tasks that are in some way standard to the company and executed multiple times. For a simple TODO task, which merely serves as a reminder to ourselves, we can avoid adding such complexity. The idea of Inputs and Outputs also makes more sense when we are creating a task that will be executed by a different person who doesn’t fully understand its goal: by specifying the Inputs and Outputs we formalize the task’s expected results as well as the information needed to do the expected work.
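To make the Inputs/Outputs/Completion Rules idea concrete, here is a small hypothetical sketch (not an engine API): a task declares outputs, and a set of completion rules must accept those outputs before the task may complete:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch of task content: declared inputs, gathered outputs,
// and completion rules that must all pass before the task can complete.
class TaskContent {
    final Map<String, Object> inputs = new HashMap<>();
    final Map<String, Object> outputs = new HashMap<>();
    private final List<Predicate<Map<String, Object>>> completionRules = new ArrayList<>();

    void addCompletionRule(Predicate<Map<String, Object>> rule) {
        completionRules.add(rule);
    }

    // The task may complete only if every completion rule accepts the outputs.
    boolean canComplete() {
        return completionRules.stream().allMatch(rule -> rule.test(outputs));
    }
}
```

A rule such as “an approval decision must be recorded” then becomes a predicate on the outputs map, and the UI can refuse the complete action (or ask for more data) until canComplete() returns true.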

    Task Forms

    The final goal of Task Lists and all the previously introduced panels is to help and guide the users to do their work. In order to do the work, the users needs to interact with we usually as a Task Form. The most simplistic and generic representation of a Task Form can be the one shown in the following figure:
    Generic Task Form Structure
    There is no limitation on how the Task Form needs to look, but it will usually contain the information displayed in the previous figure. Most of the information that will be displayed is based on the Task information that we have stored and the graphical arrangement of components that we can do using a Form Builder tool. No matter the technique that we use, it’s important to highlight that there is no single good or bad way of structuring the information. We can say that we did a good job if the user who is interacting with the task:
    1. Has all the information required to work
    2. Is not delayed by the tool
    3. Doesn’t feel that there are too many options that are never used in the normal course of action

    Group Task Lists

    As you may know, most Task List systems also provide the possibility of showing, in a separate list, all the tasks assigned to the groups to which a certain user belongs. The tasks displayed in this list are not assigned to any person in particular, but they can be claimed. So, let’s say that you don’t have any task to do today: you can go to the Group Tasks list and claim a task from there. Once you claim the task, it is automatically assigned to you and no one else can work on it. For this specific kind of task you will have a specific Task Action to release it if you can no longer work on it. As soon as you release the task, it is placed back into the group’s tasks and anyone inside that group can claim it.
    Group Task Lists
    There are times when we will want to display both lists on the same screen, which makes total sense if the user wants a quick overview of his/her overall workload.
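    The claim/release lifecycle described above can be sketched in a few lines of plain Java. This is an illustrative model, not the jBPM task service (which exposes equivalent claim and release operations on its own API):

```java
// Illustrative model of a group task's claim/release lifecycle -- not the jBPM API.
public class GroupTask {
    private String owner;  // null while the task sits in the group list

    // Claiming assigns the task to exactly one user; it fails if someone
    // else already owns it.
    public synchronized boolean claim(String userId) {
        if (owner != null) {
            return false;  // already claimed
        }
        owner = userId;
        return true;
    }

    // Releasing puts the task back into the group list, so anyone in the
    // group can claim it again.
    public synchronized void release(String userId) {
        if (userId.equals(owner)) {
            owner = null;
        }
    }

    public synchronized String getOwner() { return owner; }
}
```

    The `synchronized` methods reflect the key property of group tasks: two users may race to claim the same task, and only one of them may win.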

    Identity Related Screens

    All the identity information is usually handled by a separate component that is central to the company. From the UI perspective it is important to have a set of simple tools which allow us to query the company directory and relate a User to a Task, or query for Groups in order to assign a Task.
    Identity Utility
    One important thing to notice here is that this panel depends on the underlying technology used to store the Identity information. If your company uses Active Directory, LDAP, or a Database, this component will need to be adapted to connect to your Identity Directory and consume data from it. There is no single generic solution that fits all environments.
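    One way to keep the UI decoupled from the directory technology is to hide it behind a small interface, with one adapter per backend (LDAP, Active Directory, database). The interface and class names below are hypothetical, just to illustrate the idea:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical abstraction: the UI panel only talks to this interface,
// while each company plugs in its own directory adapter behind it.
interface IdentityService {
    List<String> findGroupsForUser(String userId);
}

// In-memory adapter, useful for demos and tests. A real deployment would
// implement the same interface against LDAP, Active Directory, or a database.
class InMemoryIdentityService implements IdentityService {
    private final Map<String, List<String>> groupsByUser = new HashMap<>();

    public void addUserToGroup(String userId, String group) {
        groupsByUser.computeIfAbsent(userId, k -> new ArrayList<>()).add(group);
    }

    @Override
    public List<String> findGroupsForUser(String userId) {
        return groupsByUser.getOrDefault(userId, new ArrayList<>());
    }
}
```

    With this shape, swapping the company directory means writing one new adapter class; the Task assignment screens never change.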

    Get Involved

    If you think that all the features described here are important, or if you are planning to adopt a solution like the one described in this post, please get involved and share your thoughts and any extra requirements that you may have. I will start sharing my personal experiments on the basic components exposed in this post. This is a very good opportunity to join forces and help in the creation of these components. If you want to help and learn in the process, please write a comment here or contact me via the public jBPM5 channels. The more requirements we can gather, the more reusable the end results will be.


    This post covers what I believe are the most important bits that every Task List oriented piece of software should provide. I’m not saying that these features are enough, but this post will serve as the starting point for more advanced mechanisms to improve how the work is being done. I’ve intentionally left two important topics out of this post: the Form Builder, and Task Statistics and Reporting. Both will be covered in future posts. Now that you have reached the end of the post, please read the “Get Involved” section again and drop us a line with your comments/suggestions.

    jBPM5 & GSoC 2012

    This year, jBPM5 was one of the lucky projects accepted into the Google Summer of Code program. The topic this year was to build a customizable and pluggable Human Task Lifecycle mechanism to extend the current functionality.
    Demian Calcaprina is doing a wonderful job researching different alternatives to provide the mentioned features plus giving the project new ideas about how things can be improved.

    Current Status

    Demian is finishing up some details and preparing the source code to be merged into the master repository of jBPM5. He also wrote a very good post explaining the different options that he is proposing. That post can easily be transformed into the project documentation for this new mechanism as soon as it is integrated.

    Benefits for the project

    Having Demian participating from the community side of the project helps us spend some time analyzing different alternatives that, until now, the project members didn’t have time to explore. His contributions are extremely valuable, and I personally think that having a BPM project as part of GSoC is really nice; we as a project will keep looking for more community contributors.
    We are all learning how to improve and how to coordinate community contributions, and we notice that more and more people are interested in and adopting open source BPM projects.
    In the BPM and Rule Engine arena, every team member and community contributor is pushing community engagement forward, trying to make these projects evolve!
    If you want to contribute to these projects but you don’t know how, get in contact: write a comment here and we will guide you!
    Kudos to Demian who is doing a wonderful job!